content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137 chars)
---|---|---|---|---|---|---|---|---
Q:
How do I group into different dates based on change in another column values in Pandas
I have data that looks like this
df = pd.DataFrame({'ID': [1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
'DATE': ['1/1/2015','1/2/2015', '1/3/2015','1/4/2015','1/5/2015','1/6/2015','1/7/2015','1/8/2015',
'1/9/2016','1/2/2015','1/3/2015','1/4/2015','1/5/2015','1/6/2015','1/7/2015'],
'CD': ['A','A','A','A','B','B','A','A','C','A','A','A','A','A','A']})
What I would like to do is group by ID and CD and get the start and stop dates for each change. I tried using groupby and the agg function, but it groups all the A rows together even though they need to be separated, since there is a B in between the two A runs.
df1 = df.groupby(['ID','CD'])
df1 = df1.agg(
    Start_Date=('DATE', np.min),
    End_Date=('DATE', np.max)
).reset_index()
What I get is:
I was hoping someone could help me get the result I need. What I am looking for is:
A:
Make a grouper for the grouping:
grouper = df['CD'].ne(df['CD'].shift(1)).cumsum()
grouper:
0 1
1 1
2 1
3 1
4 2
5 2
6 3
7 3
8 4
9 5
10 5
11 5
12 5
13 5
14 5
Name: CD, dtype: int32
Then use groupby with the grouper:
df.groupby(['ID', grouper, 'CD'])['DATE'].agg([min, max]).droplevel(1)
output:
min max
ID CD
1 A 1/1/2015 1/4/2015
B 1/5/2015 1/6/2015
A 1/7/2015 1/8/2015
C 1/9/2016 1/9/2016
2 A 1/2/2015 1/7/2015
Change the column names and use reset_index and so on for your desired output:
(df.groupby(['ID', grouper, 'CD'])['DATE'].agg([min, max]).droplevel(1)
   .set_axis(['Start_Date', 'End_Date'], axis=1)
   .reset_index()
   .assign(CD=lambda x: x.pop('CD')))
result
ID Start_Date End_Date CD
0 1 1/1/2015 1/4/2015 A
1 1 1/5/2015 1/6/2015 B
2 1 1/7/2015 1/8/2015 A
3 1 1/9/2016 1/9/2016 C
4 2 1/2/2015 1/7/2015 A
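For reference, a minimal sketch of the same idea using named aggregation, which produces the Start_Date/End_Date column names directly (assuming the same df and grouper as above, and pandas >= 0.25 for named aggregation on a series; the column order differs slightly from the output shown):
out = (df.groupby(['ID', grouper, 'CD'])['DATE']
         .agg(Start_Date='min', End_Date='max')   # named aggregation on the DATE series
         .droplevel(1)
         .reset_index())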
| How do I group into different dates based on change in another column values in Pandas | I have data that looks like this
df = pd.DataFrame({'ID': [1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2],
'DATE': ['1/1/2015','1/2/2015', '1/3/2015','1/4/2015','1/5/2015','1/6/2015','1/7/2015','1/8/2015',
'1/9/2016','1/2/2015','1/3/2015','1/4/2015','1/5/2015','1/6/2015','1/7/2015'],
'CD': ['A','A','A','A','B','B','A','A','C','A','A','A','A','A','A']})
What I would like to do is group by ID and CD and get the start and stop change for each change. I tried using groupby and agg function but it will group all A together even though they needs to be separated since there is B in between 2 A.
df1 = df.groupby(['ID','CD'])
df1 = df1.agg(
Start_Date = ('Date',np.min),
End_Date=('Date', np.min)
).reset_index()
What I get is :
I was hoping if some one could help me get the result I need. What I am looking for is :
| [
"make grouper for grouping\ngrouper = df['CD'].ne(df['CD'].shift(1)).cumsum()\n\ngrouper:\n0 1\n1 1\n2 1\n3 1\n4 2\n5 2\n6 3\n7 3\n8 4\n9 5\n10 5\n11 5\n12 5\n13 5\n14 5\nName: CD, dtype: int32\n\nthen use groupby with grouper\ndf.groupby(['ID', grouper, 'CD'])['DATE'].agg([min, max]).droplevel(1)\n\noutput:\n min max\nID CD \n1 A 1/1/2015 1/4/2015\n B 1/5/2015 1/6/2015\n A 1/7/2015 1/8/2015\n C 1/9/2016 1/9/2016\n2 A 1/2/2015 1/7/2015\n\n\nchange column name and use reset_index and so on..for your desired output\n(df.groupby(['ID', grouper, 'CD'])['DATE'].agg([min, max]).droplevel(1)\n .set_axis(['Start_Date', 'End_Date'], axis=1)\n .reset_index()\n .assign(CD=lambda x: x.pop('CD')))\n\nresult\n ID Start_Date End_Date CD\n0 1 1/1/2015 1/4/2015 A\n1 1 1/5/2015 1/6/2015 B\n2 1 1/7/2015 1/8/2015 A\n3 1 1/9/2016 1/9/2016 C\n4 2 1/2/2015 1/7/2015 A\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074643312_pandas_python_python_3.x.txt |
Q:
How do I Scrape and Iterate Through a Table from Website in Python
I am trying to scrape and iterate through a table in Python and then input it into a pandas DataFrame, but I am having trouble even finding the table using BeautifulSoup. This is what I normally do, but there does not seem to be a table within the source code. How would I pull the main table on this page?
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import requests
url = 'https://markets.ft.com/data/director-dealings'
site = requests.get(url)
soup = BeautifulSoup(site.content, 'html.parser')
table = soup.find('table')
print(table)
I also have no idea how to iterate through the table, so if you could give me some pointers on that, it would be much appreciated as well.
Thanks!
A:
That is because there is no table tag in the HTML of the URL you are loading. Go to the URL, view the source and search for "<table"; you will find that there are no results.
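For example, a quick way to confirm this from Python (a minimal sketch reusing the setup from the question):
import requests
from bs4 import BeautifulSoup

url = 'https://markets.ft.com/data/director-dealings'
html = requests.get(url).text
soup = BeautifulSoup(html, 'html.parser')
print(len(soup.find_all('table')))  # 0 means there are no <table> elements in the raw HTML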
A:
Once you get the table, store its contents in a dictionary (dict) or a list (list), e.g. "tablex".
After that you may do the following:
for x in tablex:
    # x represents each item from the table
    print(x)
A:
Save the page URL in a variable and let pandas read all the tables from it:
import pandas as pd

link = 'xyz_xyz.com'
tables_on_page = pd.read_html(link)
i = 0
while i < len(tables_on_page):
    print(tables_on_page[i])
    i = i + 1
| How do I Scrape and Iterate Through a Table from Website in Python | I am trying to scrape and iterate through a table in Python and then input it into a pandas DataFrame, but I am having trouble even finding the table using BeautifulSoup. This is what I normally do, but there does not seem to be a table within the source code. How would I pull the main table on this page?
from bs4 import BeautifulSoup
import pandas as pd
import numpy as np
import requests
url = 'https://markets.ft.com/data/director-dealings'
site = requests.get(url)
soup = BeautifulSoup(site.content, 'html.parser')
table = soup.find('table')
print(table)
I also have no idea how to iterate through the table, so if you could give me some pointers on that, it would be much appreciated as well.
Thanks!
| [
"That is because there is no table tag on the URL you are loading. Go to the URL and view the source then search on \"<table\" you will find that there are no results\n",
"when you get the table, define the HTML tag of the table as a dictionary dict or a list list such as \"tablex\"\nafter that you may do the following\nfor x in tablex:\n #x represents each item from the table\n print(x)\n\n",
"Save the data in a link\nlink='xyz_xyz.com'\ntables_on_page = pd.read_html(link)\ni=0\nwhile i<len(tables_on_page):\n print(tables_on_page[i])\n i=i+1\n\n"
] | [
0,
0,
0
] | [] | [] | [
"beautifulsoup",
"python"
] | stackoverflow_0064729163_beautifulsoup_python.txt |
Q:
Imported a file but it says module not found
So I have the issue that I imported a file "DB.py", but when I try to run my program it says that the module wasn't found.
See this error
This is how I imported the file
I already tried to change the file name and other things, but nothing really works.
A:
You need a blank Python file named __init__.py in the directory that has DB.py. It tells Python that the directory is a package, so the modules inside it can be imported.
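For illustration, a possible layout (the directory and file names other than DB.py are hypothetical):
# Assumed (hypothetical) layout:
#   project/
#   ├── main.py
#   └── mypackage/
#       ├── __init__.py   # empty file marking the directory as a package
#       └── DB.py
#
# main.py
from mypackage import DB   # resolves because mypackage/ contains __init__.py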
| Imported a file but it says module not found | So i have the issue that i imported a file "DB.py", but if i try to run my program it says that the module wasnt found.
See this error
This is how in imported the file
I already tried to change the file name n stuff but nothing rly works.
| [
"You need a blank python file named __init__.py in the directory that has DB.py. It tells python that there is a python module in that directory.\n"
] | [
0
] | [] | [] | [
"python",
"python_import"
] | stackoverflow_0074643602_python_python_import.txt |
Q:
Get the first item from an iterable that matches a condition
I would like to get the first item from a list matching a condition. It's important that the resulting method not process the entire list, which could be quite large. For example, the following function is adequate:
def first(the_iterable, condition = lambda x: True):
    for i in the_iterable:
        if condition(i):
            return i
This function could be used something like this:
>>> first(range(10))
0
>>> first(range(10), lambda i: i > 3)
4
However, I can't think of a good built-in / one-liner to let me do this. I don't particularly want to copy this function around if I don't have to. Is there a built-in way to get the first item matching a condition?
A:
Python 2.6+ and Python 3:
If you want StopIteration to be raised if no matching element is found:
next(x for x in the_iterable if x > 3)
If you want default_value (e.g. None) to be returned instead:
next((x for x in the_iterable if x > 3), default_value)
Note that you need an extra pair of parentheses around the generator expression in this case − they are needed whenever the generator expression isn't the only argument.
I see most answers resolutely ignore the next built-in and so I assume that for some mysterious reason they're 100% focused on versions 2.5 and older -- without mentioning the Python-version issue (but then I don't see that mention in the answers that do mention the next built-in, which is why I thought it necessary to provide an answer myself -- at least the "correct version" issue gets on record this way;-).
Python <= 2.5
The .next() method of iterators immediately raises StopIteration if the iterator immediately finishes -- i.e., for your use case, if no item in the iterable satisfies the condition. If you don't care (i.e., you know there must be at least one satisfactory item) then just use .next() (best on a genexp, line for the next built-in in Python 2.6 and better).
If you do care, wrapping things in a function as you had first indicated in your Q seems best, and while the function implementation you proposed is just fine, you could alternatively use itertools, a for...: break loop, or a genexp, or a try/except StopIteration as the function's body, as various answers suggested. There's not much added value in any of these alternatives so I'd go for the starkly-simple version you first proposed.
A:
Damn Exceptions!
I love this answer. However, since next() raises a StopIteration exception when there are no items,
I would use the following snippet to avoid an exception:
a = []
item = next((x for x in a), None)
For example,
a = []
item = next(x for x in a)
will raise a StopIteration exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
A:
As a reusable, documented and tested function
def first(iterable, condition = lambda x: True):
    """
    Returns the first item in the `iterable` that
    satisfies the `condition`.

    If the condition is not given, returns the first item of
    the iterable.

    Raises `StopIteration` if no item satisfying the condition is found.

    >>> first( (1,2,3), condition=lambda x: x % 2 == 0)
    2
    >>> first(range(3, 100))
    3
    >>> first( () )
    Traceback (most recent call last):
    ...
    StopIteration
    """

    return next(x for x in iterable if condition(x))
Version with default argument
@zorf suggested a version of this function where you can have a predefined return value if the iterable is empty or has no items matching the condition:
def first(iterable, default = None, condition = lambda x: True):
    """
    Returns the first item in the `iterable` that
    satisfies the `condition`.

    If the condition is not given, returns the first item of
    the iterable.

    If the `default` argument is given and the iterable is empty,
    or if it has no items matching the condition, the `default` argument
    is returned if it matches the condition.

    The `default` argument being None is the same as it not being given.

    Raises `StopIteration` if no item satisfying the condition is found
    and default is not given or doesn't satisfy the condition.

    >>> first( (1,2,3), condition=lambda x: x % 2 == 0)
    2
    >>> first(range(3, 100))
    3
    >>> first( () )
    Traceback (most recent call last):
    ...
    StopIteration
    >>> first([], default=1)
    1
    >>> first([], default=1, condition=lambda x: x % 2 == 0)
    Traceback (most recent call last):
    ...
    StopIteration
    >>> first([1,3,5], default=1, condition=lambda x: x % 2 == 0)
    Traceback (most recent call last):
    ...
    StopIteration
    """

    try:
        return next(x for x in iterable if condition(x))
    except StopIteration:
        if default is not None and condition(default):
            return default
        else:
            raise
A:
The most efficient ways in Python 3 are the following (using a similar example):
With "comprehension" style:
next(i for i in range(100000000) if i == 1000)
WARNING: The expression also works with Python 2, but the example uses range, which returns an iterable object in Python 3 instead of a list as in Python 2 (if you want to construct an iterable in Python 2, use xrange instead).
Note that the expression avoids constructing a list, unlike next([i for ...]), which would create a list with all the elements before filtering them and would process the entire range instead of stopping the iteration once i == 1000.
With "functional" style:
next(filter(lambda i: i == 1000, range(100000000)))
WARNING: This doesn't work in Python 2, even when replacing range with xrange, because filter creates a list instead of an iterator (which is inefficient), and the next function only works with iterators.
Default value
As mentioned in other responses, you must pass an extra parameter to next if you want to avoid an exception being raised when the condition is not fulfilled.
"functional" style:
next(filter(lambda i: i == 1000, range(100000000)), False)
"comprehension" style:
With this style you need to surround the comprehension expression with () to avoid a SyntaxError: Generator expression must be parenthesized if not sole argument:
next((i for i in range(100000000) if i == 1000), False)
A:
Similar to using ifilter, you could use a generator expression:
>>> (x for x in xrange(10) if x > 5).next()
6
In either case, you probably want to catch StopIteration though, in case no elements satisfy your condition.
Technically speaking, I suppose you could do something like this:
>>> foo = None
>>> for foo in (x for x in xrange(10) if x > 5): break
...
>>> foo
6
It would avoid having to make a try/except block. But that seems kind of obscure and abusive to the syntax.
A:
I would write this
next(x for x in xrange(10) if x > 3)
A:
For anyone using Python 3.8 or newer I recommend using "Assignment Expressions" as described in PEP 572 -- Assignment Expressions.
if any((match := i) > 3 for i in range(10)):
    print(match)
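A brief note on why this works: any() short-circuits, so iteration stops at the first element greater than 3, and the walrus assignment leaves that element bound to match. One caveat, sketched below with a condition that never matches (a minimal illustration): match ends up holding the last value examined rather than keeping its initial value.
match = None
if any((match := i) > 100 for i in range(10)):
    print(match)
else:
    print("no match")   # careful: match is now 9 (the last value examined), not None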
A:
The itertools module contains a filter function for iterators. The first element of the filtered iterator can be obtained by calling next() on it:
from itertools import ifilter
print ifilter((lambda i: i > 3), range(10)).next()
A:
For older versions of Python where the next built-in doesn't exist:
(x for x in range(10) if x > 3).next()
A:
By using
(index for index, value in enumerate(the_iterable) if condition(value))
one can check the condition of the value of the first item in the_iterable, and obtain its index without the need to evaluate all of the items in the_iterable.
The complete expression to use is
first_index = next(index for index, value in enumerate(the_iterable) if condition(value))
Here first_index assumes the value of the first value identified in the expression discussed above.
A:
This question already has great answers. I'm only adding my two cents because I landed here trying to find a solution to my own problem, which is very similar to the OP.
If you want to find the INDEX of the first item matching a criteria using generators, you can simply do:
next(index for index, value in enumerate(iterable) if condition)
A:
In Python 3:
a = (None, False, 0, 1)
assert next(filter(None, a)) == 1
In Python 2.6:
a = (None, False, 0, 1)
assert next(iter(filter(None, a))) == 1
EDIT: I thought it was obvious, but apparently not: instead of None you can pass a function (or a lambda) with a check for the condition:
a = [2,3,4,5,6,7,8]
assert next(filter(lambda x: x%2, a)) == 3
A:
You could also use the argwhere function in Numpy. For example:
i) Find the first "l" in "helloworld":
import numpy as np
l = list("helloworld") # Create list
i = np.argwhere(np.array(l)=="l") # i = array([[2],[3],[8]])
index_of_first = i.min()
ii) Find first random number > 0.1
import numpy as np
r = np.random.rand(50) # Create random numbers
i = np.argwhere(r>0.1)
index_of_first = i.min()
iii) Find the last random number > 0.1
import numpy as np
r = np.random.rand(50) # Create random numbers
i = np.argwhere(r>0.1)
index_of_last = i.max()
A:
Here is a speed test of three ways. next() is not the fastest way.
from timeit import default_timer as timer
# Is set irreflexive?
def a():
    return frozenset((x3, x3) for x3 in set([x1[x2] for x2 in range(2) for x1 in value]) if (x3, x3) in value) == frozenset()


def b():
    return next((False for x1 in value if (x1[0], x1[0]) in value or (x1[1], x1[1]) in value), True)


def c():
    for x1 in value:
        if (x1[0], x1[0]) in value or (x1[1], x1[1]) in value:
            return False
    return True


times = 1000000
value = frozenset({(1, 3), (2, 1)})


start_time = timer()
for x in range(times):
    a()
print("a(): Calculation ended after " + str(round((timer() - start_time) * 1000) / 1000.0) + " sec")

start_time = timer()
for x in range(times):
    b()
print("b(): Calculation ended after " + str(round((timer() - start_time) * 1000) / 1000.0) + " sec")

start_time = timer()
for x in range(times):
    c()
print("c(): Calculation ended after " + str(round((timer() - start_time) * 1000) / 1000.0) + " sec")
Results:
Calculation ended after 1.365 sec
Calculation ended after 0.685 sec
Calculation ended after 0.493 sec
A:
I know it is too late but still, here is my answer:
def find_index(nums, fn):
    return next(i for i, x in enumerate(nums) if fn(x))

print(find_index([1, 2, 3, 4], lambda n: n % 2 == 1))
A:
If you don't want to use next() you can use unpacking:
>>> a, *_ = filter(lambda e: e == 10, [7,8,9,10,11,12])
>>> a
10
>>> _
[]
>>> a, *_ = filter(lambda e: e == 1000, [7,8,9,10,11,12])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: not enough values to unpack (expected at least 1, got 0)
Note that using filter() is equivalent to writing (item for item in iterable if condition) Python Docs.
If you need support for the edge case you can write like this:
>>> a, *_ = [e for e in [7,8,9,10,11,12] if e == 1000] or [None]
>>> a
None
>>> _
[]
A:
The following are 3 alternatives, with benchmarks.
Using next()
The one-liner:
values = list(range(1, 10000000))
value = next((x for x in values if x > 9999999), None)
Using a function
This is an alternative to using next() via a function; it's about 2%-5% faster:
values = list(range(1, 10000000))
def first(items):
    for item in items:
        if item > 9999999: # Your condition
            return item
    return None # Default value
value = first(values)
Using lambda
This is a function that can be used to replace next() in all cases. Performance is about 300% slower:
values = list(range(1, 10000000))
def first(items, condition, default = None):
    for item in items:
        if condition(item):
            return item
    return default
value = first(values, lambda x: x > 9999999, None)
Benchmarks
Function: 1x
Next: 1.02x-1.05x
Lambda: > 3x
Memory consumption is on par.
This is the benchmark.
| Get the first item from an iterable that matches a condition | I would like to get the first item from a list matching a condition. It's important that the resulting method not process the entire list, which could be quite large. For example, the following function is adequate:
def first(the_iterable, condition = lambda x: True):
for i in the_iterable:
if condition(i):
return i
This function could be used something like this:
>>> first(range(10))
0
>>> first(range(10), lambda i: i > 3)
4
However, I can't think of a good built-in / one-liner to let me do this. I don't particularly want to copy this function around if I don't have to. Is there a built-in way to get the first item matching a condition?
| [
"Python 2.6+ and Python 3:\nIf you want StopIteration to be raised if no matching element is found:\nnext(x for x in the_iterable if x > 3)\n\nIf you want default_value (e.g. None) to be returned instead:\nnext((x for x in the_iterable if x > 3), default_value)\n\nNote that you need an extra pair of parentheses around the generator expression in this case − they are needed whenever the generator expression isn't the only argument.\nI see most answers resolutely ignore the next built-in and so I assume that for some mysterious reason they're 100% focused on versions 2.5 and older -- without mentioning the Python-version issue (but then I don't see that mention in the answers that do mention the next built-in, which is why I thought it necessary to provide an answer myself -- at least the \"correct version\" issue gets on record this way;-).\nPython <= 2.5\nThe .next() method of iterators immediately raises StopIteration if the iterator immediately finishes -- i.e., for your use case, if no item in the iterable satisfies the condition. If you don't care (i.e., you know there must be at least one satisfactory item) then just use .next() (best on a genexp, line for the next built-in in Python 2.6 and better).\nIf you do care, wrapping things in a function as you had first indicated in your Q seems best, and while the function implementation you proposed is just fine, you could alternatively use itertools, a for...: break loop, or a genexp, or a try/except StopIteration as the function's body, as various answers suggested. There's not much added value in any of these alternatives so I'd go for the starkly-simple version you first proposed.\n",
"Damn Exceptions!\nI love this answer. However, since next() raise a StopIteration exception when there are no items,\ni would use the following snippet to avoid an exception:\na = []\nitem = next((x for x in a), None)\n\n\nFor example, \na = []\nitem = next(x for x in a)\n\nWill raise a StopIteration exception;\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nStopIteration\n\n",
"As a reusable, documented and tested function\ndef first(iterable, condition = lambda x: True):\n \"\"\"\n Returns the first item in the `iterable` that\n satisfies the `condition`.\n\n If the condition is not given, returns the first item of\n the iterable.\n\n Raises `StopIteration` if no item satysfing the condition is found.\n\n >>> first( (1,2,3), condition=lambda x: x % 2 == 0)\n 2\n >>> first(range(3, 100))\n 3\n >>> first( () )\n Traceback (most recent call last):\n ...\n StopIteration\n \"\"\"\n\n return next(x for x in iterable if condition(x))\n\nVersion with default argument\n@zorf suggested a version of this function where you can have a predefined return value if the iterable is empty or has no items matching the condition:\ndef first(iterable, default = None, condition = lambda x: True):\n \"\"\"\n Returns the first item in the `iterable` that\n satisfies the `condition`.\n\n If the condition is not given, returns the first item of\n the iterable.\n\n If the `default` argument is given and the iterable is empty,\n or if it has no items matching the condition, the `default` argument\n is returned if it matches the condition.\n\n The `default` argument being None is the same as it not being given.\n\n Raises `StopIteration` if no item satisfying the condition is found\n and default is not given or doesn't satisfy the condition.\n\n >>> first( (1,2,3), condition=lambda x: x % 2 == 0)\n 2\n >>> first(range(3, 100))\n 3\n >>> first( () )\n Traceback (most recent call last):\n ...\n StopIteration\n >>> first([], default=1)\n 1\n >>> first([], default=1, condition=lambda x: x % 2 == 0)\n Traceback (most recent call last):\n ...\n StopIteration\n >>> first([1,3,5], default=1, condition=lambda x: x % 2 == 0)\n Traceback (most recent call last):\n ...\n StopIteration\n \"\"\"\n\n try:\n return next(x for x in iterable if condition(x))\n except StopIteration:\n if default is not None and condition(default):\n return default\n else:\n raise\n\n",
"The most efficient way in Python 3 are one of the following (using a similar example):\nWith \"comprehension\" style:\nnext(i for i in range(100000000) if i == 1000)\n\nWARNING: The expression works also with Python 2, but in the example is used range that returns an iterable object in Python 3 instead of a list like Python 2 (if you want to construct an iterable in Python 2 use xrange instead).\nNote that the expression avoid to construct a list in the comprehension expression next([i for ...]), that would cause to create a list with all the elements before filter the elements, and would cause to process the entire options, instead of stop the iteration once i == 1000.\nWith \"functional\" style:\nnext(filter(lambda i: i == 1000, range(100000000)))\n\nWARNING: This doesn't work in Python 2, even replacing range with xrange due that filter create a list instead of a iterator (inefficient), and the next function only works with iterators.\nDefault value\nAs mentioned in other responses, you must add a extra-parameter to the function next if you want to avoid an exception raised when the condition is not fulfilled.\n\"functional\" style:\nnext(filter(lambda i: i == 1000, range(100000000)), False)\n\n\"comprehension\" style:\nWith this style you need to surround the comprehension expression with () to avoid a SyntaxError: Generator expression must be parenthesized if not sole argument:\nnext((i for i in range(100000000) if i == 1000), False)\n\n",
"Similar to using ifilter, you could use a generator expression:\n>>> (x for x in xrange(10) if x > 5).next()\n6\n\nIn either case, you probably want to catch StopIteration though, in case no elements satisfy your condition.\nTechnically speaking, I suppose you could do something like this:\n>>> foo = None\n>>> for foo in (x for x in xrange(10) if x > 5): break\n... \n>>> foo\n6\n\nIt would avoid having to make a try/except block. But that seems kind of obscure and abusive to the syntax.\n",
"I would write this \nnext(x for x in xrange(10) if x > 3)\n\n",
"For anyone using Python 3.8 or newer I recommend using \"Assignment Expressions\" as described in PEP 572 -- Assignment Expressions.\nif any((match := i) > 3 for i in range(10)):\n print(match)\n\n",
"The itertools module contains a filter function for iterators. The first element of the filtered iterator can be obtained by calling next() on it:\nfrom itertools import ifilter\n\nprint ifilter((lambda i: i > 3), range(10)).next()\n\n",
"For older versions of Python where the next built-in doesn't exist:\n(x for x in range(10) if x > 3).next()\n\n",
"By using \n(index for index, value in enumerate(the_iterable) if condition(value))\n\none can check the condition of the value of the first item in the_iterable, and obtain its index without the need to evaluate all of the items in the_iterable.\nThe complete expression to use is\nfirst_index = next(index for index, value in enumerate(the_iterable) if condition(value))\n\nHere first_index assumes the value of the first value identified in the expression discussed above.\n",
"This question already has great answers. I'm only adding my two cents because I landed here trying to find a solution to my own problem, which is very similar to the OP. \nIf you want to find the INDEX of the first item matching a criteria using generators, you can simply do:\nnext(index for index, value in enumerate(iterable) if condition)\n\n",
"In Python 3:\na = (None, False, 0, 1)\nassert next(filter(None, a)) == 1\n\nIn Python 2.6:\na = (None, False, 0, 1)\nassert next(iter(filter(None, a))) == 1\n\nEDIT: I thought it was obvious, but apparently not: instead of None you can pass a function (or a lambda) with a check for the condition:\na = [2,3,4,5,6,7,8]\nassert next(filter(lambda x: x%2, a)) == 3\n\n",
"You could also use the argwhere function in Numpy. For example:\ni) Find the first \"l\" in \"helloworld\":\nimport numpy as np\nl = list(\"helloworld\") # Create list\ni = np.argwhere(np.array(l)==\"l\") # i = array([[2],[3],[8]])\nindex_of_first = i.min()\n\nii) Find first random number > 0.1\nimport numpy as np\nr = np.random.rand(50) # Create random numbers\ni = np.argwhere(r>0.1)\nindex_of_first = i.min()\n\niii) Find the last random number > 0.1\nimport numpy as np\nr = np.random.rand(50) # Create random numbers\ni = np.argwhere(r>0.1)\nindex_of_last = i.max()\n\n",
"here is a speedtest of three ways. Next() is not the fastest way.\nfrom timeit import default_timer as timer\n\n# Is set irreflexive?\n\ndef a():\n return frozenset((x3, x3) for x3 in set([x1[x2] for x2 in range(2) for x1 in value]) if (x3, x3) in value) == frozenset()\n\n\ndef b():\n return next((False for x1 in value if (x1[0], x1[0]) in value or (x1[1], x1[1]) in value), True)\n\n\ndef c():\n for x1 in value:\n if (x1[0], x1[0]) in value or (x1[1], x1[1]) in value:\n return False\n return True\n\n\ntimes = 1000000\nvalue = frozenset({(1, 3), (2, 1)})\n\n\nstart_time = timer()\nfor x in range(times):\n a()\nprint(\"a(): Calculation ended after \" + str(round((timer() - start_time) * 1000) / 1000.0) + \" sec\")\n\nstart_time = timer()\nfor x in range(times):\n b()\nprint(\"b(): Calculation ended after \" + str(round((timer() - start_time) * 1000) / 1000.0) + \" sec\")\n\nstart_time = timer()\nfor x in range(times):\n c()\nprint(\"c(): Calculation ended after \" + str(round((timer() - start_time) * 1000) / 1000.0) + \" sec\")\n\nResults to:\nCalculation ended after 1.365 sec\nCalculation ended after 0.685 sec\nCalculation ended after 0.493 sec\n\n",
"I know it is too late but still, here is my answer:\ndef find_index(nums, fn):\n return next(i for i, x in enumerate(nums) if fn(x))\nprint(find_index([1, 2, 3, 4], lambda n: n % 2 == 1))\n\n",
"If you don't want to use next() you can use unpacking:\n>>> a, *_ = filter(lambda e: e == 10, [7,8,9,10,11,12])\n>>> a\n10\n>>> _\n[]\n>>> a, *_ = filter(lambda e: e == 1000, [7,8,9,10,11,12])\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nValueError: not enough values to unpack (expected at least 1, got 0)\n\nNote that using filter() is equivalent to writing (item for item in iterable if condition) Python Docs.\nIf you need support for the edge case you can write like this:\n>>> a, *_ = [e for e in [7,8,9,10,11,12] if e == 1000] or [None]\n>>> a\nNone\n>>> _\n[]\n\n",
"The following are 3 alternatives, with benchmarks.\nUsing next()\nThe one-liner:\nvalues = list(range(1, 10000000))\n\nvalue = next((x for x in values if x > 9999999), None)\n\nUsing a function\nThis is an alternative to using next() using a function, it's about 2%-5% faster:\nvalues = list(range(1, 10000000))\n\ndef first(items):\n for item in items:\n if item > 9999999: # Your condition\n return item\n return None # Default value\n\nvalue = first(values)\n\nUsing lambda\nThis is a function that can be used for replacing next() in all cases. Performance are about 300% slower:\nvalues = list(range(1, 10000000))\n\ndef first(items, condition, default = None):\n for item in items:\n if condition(item):\n return item\n return default\n\nvalue = first(values, lambda x: x > 9999999, None)\n\nBenchmarks\n\nFunction: 1x\nNext: 1.02x-1.05x\nLambda: > 3x\n\nMemory consumption is on par.\nThis is the benchmark.\n"
] | [
767,
52,
43,
26,
13,
12,
12,
8,
7,
5,
4,
1,
1,
1,
0,
0,
0
] | [
"Oneliner:\nthefirst = [i for i in range(10) if i > 3][0]\n\nIf youre not sure that any element will be valid according to the criteria, you should enclose this with try/except since that [0] can raise an IndexError.\n"
] | [
-3
] | [
"iterator",
"python"
] | stackoverflow_0002361426_iterator_python.txt |
Q:
Compare the values of one list with the values of another list
I have the list a:
a = ['wood', 'stone', 'bricks', 'diamond']
And the list b:
b = ['iron', 'gold', 'stone', 'diamond', 'wood']
I need to compare lists and if value of list a equals with value from list b, it will be added to a list c:
c = ['wood', 'stone', 'diamond']
How can I compare these lists?
A:
You could convert them to sets and get the intersection.
list(set(a) & set(b))
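For example, with the lists from the question (note that sets do not preserve the original order, so the result order may vary):
a = ['wood', 'stone', 'bricks', 'diamond']
b = ['iron', 'gold', 'stone', 'diamond', 'wood']
c = list(set(a) & set(b))
print(c)   # e.g. ['stone', 'wood', 'diamond'] -- same elements, arbitrary order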
A:
When comparing values of one list to another you can use one of two options:
First you could use a for loop like so:
c = []

for element in a:
    if element in b:
        c.append(element)
print(c)
This is a rather chunky way of doing it; instead, you could just use a comprehension like so:
c = [element for element in a if element in b]
print(c)
Both of these answers give the output of:
['wood', 'stone', 'diamond']
Hope this helps.
| Compare the values of one list with the values of another list | I have the list a:
a = ['wood', 'stone', 'bricks', 'diamond']
And the list b:
b = ['iron', 'gold', 'stone', 'diamond', 'wood']
I need to compare lists and if value of list a equals with value from list b, it will be added to a list c:
c = ['wood', 'stone', 'diamond']
How can I compare these lists?
| [
"You could convert them to sets and get the intersection.\nlist(set(a) & set(b))\n\n",
"When comparing values of one list to another you can use one of two options:\nFirst you could use a for loop like so:\nc = []\n\nfor element in a:\n if element in b:\n c.append(element)\nprint(c)\n\nThis is a rather chunky way of doing it, rather you could just use a comprehension like so:\nc = [element for element in a if element in b]\nprint(c)\n\nBoth of these answers give the output of:\n\n['wood', 'stone', 'diamond']\n\nHope this helps.\n"
] | [
1,
0
] | [] | [] | [
"compare",
"list",
"python"
] | stackoverflow_0074643499_compare_list_python.txt |
Q:
request.headers.get('Authorization') is empty in Flask production
I have the following flask method that simple returns back the value of the Authorization header:
@app.route('/test', methods=['POST'])
def test():
    return jsonify({"data" : request.headers.get('Authorization') })
When I submit the following curl request to my API which has been deployed to a DO instance, the header comes back as null:
curl --data '' -H "Authorization: test" api.mysite.com/test
{
"data": null
}
Yet when I submit the same request on my instance running locally it returns the header contents
{
"data": "test"
}
Any ideas?
A:
Have you tried using a proper Authorization header? Possibly the header is being filtered out by a web application firewall or proxy because it doesn't specify a scheme. For example:
curl --data '' -H "Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=" api.mysite.com/test
This sends a basic authorization header with the base64 encoded credentials username:password.
A:
On Google Cloud Functions (python 3.10 + Flask), I've been able to get the Authorization header as follows:
request.environ.get('HTTP_AUTHORIZATION')
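For illustration, a minimal sketch of the same idea inside a plain Flask view (an assumption on my part: the fallback only helps when the header actually reaches the WSGI environ; a proxy that strips it entirely will still yield None):
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/test', methods=['POST'])
def test():
    # Prefer the parsed header, fall back to the raw WSGI environ key
    auth = request.headers.get('Authorization') or request.environ.get('HTTP_AUTHORIZATION')
    return jsonify({"data": auth})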
| request.headers.get('Authorization') is empty in Flask production | I have the following flask method that simple returns back the value of the Authorization header:
@app.route('/test', methods=['POST'])
def test():
return jsonify({"data" : request.headers.get('Authorization') })
When I submit the following curl request to my API which has been deployed to a DO instance, the header comes back as null:
curl --data '' -H "Authorization: test" api.mysite.com/test
{
"data": null
}
Yet when I submit the same request on my instance running locally it returns the header contents
{
"data": "test"
}
Any ideas?
| [
"Have you tried using a proper Authorization header? Possibly the header is being filtered out by a web application firewall or proxy because it doesn't specify a scheme. For example:\n\ncurl --data '' -H \"Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ=\" api.mysite.com/test\n\nThis sends a basic authorization header with the base64 encoded credentials username:password.\n",
"On Google Cloud Functions (python 3.10 + Flask), I've been able to get the Authorization header as follows:\nrequest.environ.get('HTTP_AUTHORIZATION')\n\n"
] | [
2,
0
] | [] | [] | [
"authorization",
"flask",
"http",
"python"
] | stackoverflow_0038927945_authorization_flask_http_python.txt |
Q:
install Chrome driver, AttributeError: 'Service' object has no attribute 'process'
# This is the code where I tried to use the Chrome driver
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
driver=webdriver.Chrome(r"/usr/local/bin/chromedriver")
# But the following error is coming:
<ipython-input-118-b695456c07d9>:6: DeprecationWarning: executable_path has been deprecated, please pass in a Service object
driver=webdriver.Chrome(r"/usr/local/bin/chromedriver")
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-118-b695456c07d9> in <module>
4
5
----> 6 driver=webdriver.Chrome(r"/usr/local/bin/chromedriver")
3 frames
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in assert_process_still_running(self)
114 def assert_process_still_running(self) -> None:
115 """Check if the underlying process is still running."""
--> 116 return_code = self.process.poll()
117 if return_code:
118 raise WebDriverException(f"Service {self.path} unexpectedly exited. Status code was: {return_code}")
AttributeError: 'Service' object has no attribute 'process'
I checked the versions of Chrome and chromedriver, and the path.
But I can't solve it. I'm a real beginner who doesn't know much about Python; however, I have to crawl some data, and I can't even get the Chrome driver working. Please help me, Python masters.
A:
In the latest Selenium version, executable_path has been deprecated, so you have to use Service:
from selenium.webdriver.chrome.service import Service
driver = webdriver.Chrome(service=Service(<chromedriver.exe path>))
driver.get(<URL>)
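For example, a minimal sketch using the chromedriver path from the question:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

service = Service(r"/usr/local/bin/chromedriver")   # path taken from the question
driver = webdriver.Chrome(service=service)
driver.get("https://example.com")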
| install Chrome driver, AttributeError: 'Service' object has no attribute 'process' | #This is the code that i tried to use chrome driver
from selenium import webdriver
import time
from selenium.webdriver.common.keys import Keys
driver=webdriver.Chrome(r"/usr/local/bin/chromedriver")
#But the error is coming..
<ipython-input-118-b695456c07d9>:6: DeprecationWarning: executable_path has been deprecated, please pass in a Service object
driver=webdriver.Chrome(r"/usr/local/bin/chromedriver")
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-118-b695456c07d9> in <module>
4
5
----> 6 driver=webdriver.Chrome(r"/usr/local/bin/chromedriver")
3 frames
/usr/local/lib/python3.8/dist-packages/selenium/webdriver/common/service.py in assert_process_still_running(self)
114 def assert_process_still_running(self) -> None:
115 """Check if the underlying process is still running."""
--> 116 return_code = self.process.poll()
117 if return_code:
118 raise WebDriverException(f"Service {self.path} unexpectedly exited. Status code was: {return_code}")
AttributeError: 'Service' object has no attribute 'process'
I checked the version of Chrome and Chrome driver and the path.
But I can't solve it. I'm a real beginner who doesn't know much about Python, however, I have to crawl some data, but I can't even install a chrome driver. Please help me, Python masters
| [
"In the latest Selenium version, executable_path has been deprecated, so you have to use Service:\nfrom selenium.webdriver.chrome.service import Service\n\ndriver = webdriver.Chrome(service=Service(<chromedriver.exe path>))\ndriver.get(<URL>)\n\n"
] | [
0
] | [] | [] | [
"attributeerror",
"python",
"selenium",
"selenium_chromedriver"
] | stackoverflow_0074643146_attributeerror_python_selenium_selenium_chromedriver.txt |
Q:
Flask-Sqlite: not visualizing username list from database in the dropdown menu
I am new to web development with Python Flask and SQLAlchemy, and I am trying to populate a dropdown menu with usernames from the table "user" for a Kanban board. I was able to fetch the data from the database, but for some reason it is not displayed in the dropdown menu in my dashboard template. Actually, the dropdown menu is populated, but the names are not displayed:
app.py
from flask import g
import sqlite3
def get_db():
    DATABASE = 'C:/path/to/db'
    db = getattr(g, '_database', None)
    if db is None:
        db = g._database = sqlite3.connect(DATABASE)
    return db

@app.route('/dashboard')
def dashboard():
    cur = get_db().cursor()
    team_members = cur.execute("SELECT username FROM user").fetchall()
    print(team_members)
    return render_template('dashboard.html', team_members=team_members)
dashboard.html
<div class="dropdown">
<p style="font-family:verdana">
Choose a team member:<SELECT name="team_members_usernames" style="font-family:verdana">
{% for t in team_members %}
<OPTION value={{t[0]}}>{{t[1]}}</OPTION>
{% endfor %}
</SELECT>
</p>
</div>
In the console I am getting the usernames like this:
[('User_1',), ('User_2',), ('User_3',), ('User_4',)]
What am I doing wrong? I would be thankful for every tip!
A:
The usernames you're getting in the console are a list of tuples. Each tuple has only one item, which you're accessing in dashboard.html like this:
<OPTION value={{t[0]}}>{{t[1]}}</OPTION>
The second item {{t[1]}} does not exist (Jinja renders it as empty), therefore you get no visible text in the select field. Try it like this instead:
<OPTION value={{t[0]}}>{{t[0]}}</OPTION>
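Alternatively (a small sketch), you could flatten the rows in the view so the template only deals with plain strings:
# in the dashboard() view
team_members = [row[0] for row in cur.execute("SELECT username FROM user").fetchall()]
and then render each entry in the template as <OPTION value="{{t}}">{{t}}</OPTION>.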
| Flask-Sqlite: not visualizing username list from database in the dropdown menu | I am new to web dev with Python Flask and SQlalchemy and I am trying to populate a a dropdown menu with usernames from the table "user" for a Kanban board. I was able to fetch the datas from the database but for some reason they are not displayed in the dropdown menu in my dashboard template. Actually the dropdown menu is populated but the names are not displayed:
app.py
from flask import g
import sqlite3
def get_db():
DATABASE = 'C:/path/to/db'
db = getattr(g, '_database', None)
if db is None:
db = g._database = sqlite3.connect(DATABASE)
return db
@app.route('/dashboard')
def dashboard():
cur = get_db().cursor()
team_members = cur.execute("SELECT username FROM user").fetchall()
print(team_members)
return render_template('dashboard.html', team_members=team_members)
dashboard.html
<div class="dropdown">
<p style="font-family:verdana">
Choose a team member:<SELECT name="team_members_usernames" style="font-family:verdana">
{% for t in team_members %}
<OPTION value={{t[0]}}>{{t[1]}}</OPTION>
{% endfor %}
</SELECT>
</p>
</div>
In the console I am getting the usernames like this:
[('User_1',), ('User_2',), ('User_3',), ('User_4',)]
What am I doing wrong? I would be thankful for every tip!
| [
"The usernames you're getting in a console is a list of tuples. Each tuple has only one item which you're accessing in dashboard.html like this:\n<OPTION value={{t[0]}}>{{t[1]}}</OPTION>\nThe second item {{t[1}} is None therefore you are getting no visible text in a select field. Try to go like this:\n<OPTION value={{t[0]}}>{{t[0]}}</OPTION>\n"
] | [
1
] | [] | [] | [
"flask",
"python",
"sqlite"
] | stackoverflow_0074643082_flask_python_sqlite.txt |
Q:
Create a column of differences in 'col2' for each item in 'col1'
I have the following dataframe:
df=pd.DataFrame({
'col1' : ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'D'],
'col2' : [9.6,10.4, 11.2, 3.3, 6, 4, 1.94, 15.44, 6.17, 8.16]
})
It has the display :
col1 col2
0 A 9.60
1 A 10.40
2 A 11.20
3 B 3.30
4 B 6.00
5 B 4.00
6 C 1.94
7 C 15.44
8 C 6.17
9 D 8.16
I want to get the following output:
col1 col2 Diff
0 A 9.60 0
1 A 10.40 0.80
2 A 11.20 0.80
3 B 3.30 0
4 B 6.00 2.70
5 B 4.00 -2.00
6 C 1.94 0
7 C 15.44 13.50
8 C 6.17 -9.27
9 D 8.16 0
I tried to use diff(), but it calculates differences for all values in col2, whereas I want to do that for each item in col1.
So far I tried df['col2'].diff(), but it did not work.
Any help from your side will be highly appreciated, thanks.
A:
You need a groupby; this works, I think:
df.insert(2, 'Diff', (df.groupby('col1')['col2'].diff()))
result :
col1 col2 Diff
0 A 9.60 NaN
1 A 10.40 0.80
2 A 11.20 0.80
3 B 3.30 NaN
4 B 6.00 2.70
5 B 4.00 -2.00
6 C 1.94 NaN
7 C 15.44 13.50
8 C 6.17 -9.27
9 D 8.16 NaN
(you can replace the NaN by 0 if you wish)
A:
How about this:
differences = []
for val in df.col1.unique():
diffs = df.loc[df.col1 == val].col2.diff()
differences.extend(diffs)
Then you can add the differences list as a new column.
A:
You can use groupby() and diff() and assign the result to the new column Diff. Then you only need fillna(0):
df['Diff'] = df.groupby('col1')['col2'].diff().fillna(0)
That should solve your problem:
col1 col2 Diff
0 A 9.60 0.00
1 A 10.40 0.80
2 A 11.20 0.80
3 B 3.30 0.00
4 B 6.00 2.70
5 B 4.00 -2.00
6 C 1.94 0.00
7 C 15.44 13.50
8 C 6.17 -9.27
9 D 8.16 0.00
| Create a column of differences in 'col2' for each item in 'col1' | I have the following dataframe:
df=pd.DataFrame({
'col1' : ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'D'],
'col2' : [9.6,10.4, 11.2, 3.3, 6, 4, 1.94, 15.44, 6.17, 8.16]
})
It has the display :
col1 col2
0 A 9.60
1 A 10.40
2 A 11.20
3 B 3.30
4 B 6.00
5 B 4.00
6 C 1.94
7 C 15.44
8 C 6.17
9 D 8.16
I want to get the following output:
col1 col2 Diff
0 A 9.60 0
1 A 10.40 0.80
2 A 11.20 0.80
3 B 3.30 0
4 B 6.00 2.70
5 B 4.00 -2.00
6 C 1.94 0
7 C 15.44 13.50
8 C 6.17 -9.27
9 D 8.16 0
I tried to use diff() but it calculate differences for all values in col2 however I want to do that for each item in col1.
So far I tried df['col2'].diff() but not worked,
Any help from your side will be highly appreciated, thanks.
| [
"You need a groupby, this works i think :\ndf.insert(2, 'Diff', (df.groupby('col1')['col2'].diff()))\n\n\nresult :\n col1 col2 Diff\n0 A 9.60 NaN\n1 A 10.40 0.80\n2 A 11.20 0.80\n3 B 3.30 NaN\n4 B 6.00 2.70\n5 B 4.00 -2.00\n6 C 1.94 NaN\n7 C 15.44 13.50\n8 C 6.17 -9.27\n9 D 8.16 NaN\n\n(you can replace the NaN by 0 if you wish)\n",
"How about this:\ndifferences = []\n\nfor val in df.col1.unique():\n diffs = df.loc[df.col1 == val].col2.diff()\n differences.extend(diffs)\n\nThen you can add the differences list as a new column.\n",
"You can use groupby() and diff()and assign the result to the new column Diff. Than you only need to fillna(0):\ndf['Diff'] = df.groupby('col1')['col2'].diff().fillna(0)\n\nThat should solve your problem:\n col1 col2 Diff\n0 A 9.60 0.00\n1 A 10.40 0.80\n2 A 11.20 0.80\n3 B 3.30 0.00\n4 B 6.00 2.70\n5 B 4.00 -2.00\n6 C 1.94 0.00\n7 C 15.44 13.50\n8 C 6.17 -9.27\n9 D 8.16 0.00\n\n"
] | [
2,
1,
1
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074643572_dataframe_pandas_python.txt |
Q:
Dynamically change scale of widgets in Canvas - Python
I am trying to create a canvas with one function that is useful for me. I need my program to understand when an object on the canvas is too big, so that when the object collides with the borders of the canvas its size is automatically decreased. I know that the canvas can't change the size of objects; the canvas just changes the coordinates of an object.
So, I did almost everything, and it works, but only the first time. If I create a rectangle with bigger sizes again, my program will not change the coordinates again. With the .bbox function we can see that the coordinates are still the same. I think that after the if statement I need to update the coordinates.
It is best to just take my code and try it. For example: the maximum size for height is 356mm and for width is 473mm. If you write 357mm into height or 474mm into width -> the function "create" will automatically change the coordinates with ".scale", but only once.
My idea is to enter different sizes for the rectangle and always see the rectangle in the middle of the canvas. It should not be allowed to touch the borders, but if it does -> change the coordinates and make it smaller.
How can I make this happen every time the object touches the borders?
My code:
from tkinter import *
from tkinter import ttk
import tkinter as tk
create_win = tk.Tk()
create_win.title("Sample canvas")
create_win.geometry("500x550")
create_win.resizable(False, False)
create_win.configure(background="white")
create_win.columnconfigure(0, weight=1)
create_win.rowconfigure(0, weight=1)
main_frame = LabelFrame(create_win, text="Sample Canvas", background="white")
main_frame.grid(row=0, column=0, sticky=NW+SE, padx=10, pady=[0,10])
c = Canvas(main_frame, background="white", cursor="crosshair")
c.pack(fill=BOTH, expand=True)
entry_fields = LabelFrame(create_win, text="Enter data", background="white")
entry_fields.grid(row=1, column=0, sticky=NW+SE, padx=10, pady=[0,10])
width_lbl = Label(entry_fields, text="Width", font=("Calibri Light", 14), background="white")
width_lbl.grid(row=0, column=0)
height_lbl = Label(entry_fields, text="Height", font=("Calibri Light", 14), background="white")
height_lbl.grid(row=0, column=1)
width_entry = Entry(entry_fields, font=("Calibri Light", 14), bd=2, justify=CENTER, width=22)
width_entry.grid(row=1, column=0, padx=[10,0], pady=[5,10])
height_entry = Entry(entry_fields, font=("Calibri Light", 14), bd=2, justify=CENTER, width=22)
height_entry.grid(row=1, column=1, padx=[4,0], pady=[5,10])
create_win.update()
canvas_w = c.winfo_width()
canvas_h = (c.winfo_height() - 60)
#print(canvas_w)
#print(canvas_h)
def create():
    width_entry_int = int(float(width_entry.get()))
    height_entry_int = int(float(height_entry.get()))
    x1 = (canvas_w - width_entry_int) / 2
    y1 = (canvas_h - height_entry_int) / 2
    x2 = x1 + int(float(width_entry.get()))
    y2 = y1 + int(float(height_entry.get()))
    c.delete("all")
    rect = c.create_rectangle(x1, y1, x2, y2, fill="red", outline="red")
    data = c.bbox(rect)
    print(data)
    if float(data[0]) <= 0 or float(data[1]) <= 0:
        c.scale("all", ((x1+x2) / 2), ((y1+y2) / 2), 0.5, 0.5)
apply_btn = Button(create_win, text="Apply", font=("Calibri Light", 14), padx=5, pady=5, bd=0, background="#747d8c", fg="white", cursor="hand2", command=create)
apply_btn.grid(row=2, column=0, pady=[0,10])
create_win.mainloop()
Thanks in advance!
A:
Okay now, you should change this part
data = c.bbox(rect)

if float(data[0]) <= 0 or float(data[1]) <= 0:
    print(data)
    c.scale("all", ((x1+x2) / 2), ((y1+y2) / 2), 0.5, 0.5)
into this one:
data = c.bbox(rect)
while float(data[0]) <= 0 or float(data[1]) <= 0:
    c.scale("all", ((x1+x2) / 2), ((y1+y2) / 2), 0.5, 0.5)
    data = c.bbox(rect)
Now it keeps rescaling until it no longer touches the walls.
| Dynamically change scale of widgets in Canvas - Python | I am trying to create a canvas with one useful function for me. I need to make my program understand when object on canvas is too big so this object will collide with borders of canvas and automatically will decrise size of object. I know that canvas can't change size of objects, canvas just change coordinates of object.
So, I did almost everything, and it works, but only first time. If I create a rectangle with bigger sizes again, my program will not change coordinates again. With .bbox function we can see, that coordinates are still the same. I think that after "If statment" I need to update new position of coordinates.
Better just to take my code and try it. For example: maximum size for height is 356mm and for width is 473mm. If you will write into height 357mm or into width 474mm -> function "create" will automatically change coordinates with ".scale", but only once.
My idea is to write different sizes of the rectangle and I always want to see my rectangle in the middle of the canvas. Not to allow this rectangle touch borders, but if it did -> change coordinates and make it smaller.
How I can make it each time when object touching borders?
My code:
from tkinter import *
from tkinter import ttk
import tkinter as tk
create_win = tk.Tk()
create_win.title("Sample canvas")
create_win.geometry("500x550")
create_win.resizable(False, False)
create_win.configure(background="white")
create_win.columnconfigure(0, weight=1)
create_win.rowconfigure(0, weight=1)
main_frame = LabelFrame(create_win, text="Sample Canvas", background="white")
main_frame.grid(row=0, column=0, sticky=NW+SE, padx=10, pady=[0,10])
c = Canvas(main_frame, background="white", cursor="crosshair")
c.pack(fill=BOTH, expand=True)
entry_fields = LabelFrame(create_win, text="Enter data", background="white")
entry_fields.grid(row=1, column=0, sticky=NW+SE, padx=10, pady=[0,10])
width_lbl = Label(entry_fields, text="Width", font=("Calibri Light", 14), background="white")
width_lbl.grid(row=0, column=0)
height_lbl = Label(entry_fields, text="Height", font=("Calibri Light", 14), background="white")
height_lbl.grid(row=0, column=1)
width_entry = Entry(entry_fields, font=("Calibri Light", 14), bd=2, justify=CENTER, width=22)
width_entry.grid(row=1, column=0, padx=[10,0], pady=[5,10])
height_entry = Entry(entry_fields, font=("Calibri Light", 14), bd=2, justify=CENTER, width=22)
height_entry.grid(row=1, column=1, padx=[4,0], pady=[5,10])
create_win.update()
canvas_w = c.winfo_width()
canvas_h = (c.winfo_height() - 60)
#print(canvas_w)
#print(canvas_h)
def create():
width_entry_int = int(float(width_entry.get()))
height_entry_int = int(float(height_entry.get()))
x1 = (canvas_w - width_entry_int) / 2
y1 = (canvas_h - height_entry_int) / 2
x2 = x1 + int(float(width_entry.get()))
y2 = y1 + int(float(height_entry.get()))
c.delete("all")
rect = c.create_rectangle(x1, y1, x2, y2, fill="red", outline="red")
data = c.bbox(rect)
print(data)
if float(data[0]) <= 0 or float(data[1]) <= 0:
c.scale("all", ((x1+x2) / 2), ((y1+y2) / 2), 0.5, 0.5)
apply_btn = Button(create_win, text="Apply", font=("Calibri Light", 14), padx=5, pady=5, bd=0, background="#747d8c", fg="white", cursor="hand2", command=create)
apply_btn.grid(row=2, column=0, pady=[0,10])
create_win.mainloop()
Thanks in advance!
| [
"Okay now, you should change this part\ndata = c.bbox(rect)\n\nif float(data[0]) <= 0 or float(data[1]) <= 0:\n print(data)\n c.scale(\"all\", ((x1+x2) / 2), ((y1+y2) / 2), 0.5, 0.5)\n\ninto this one:\ndata = c.bbox(rect)\nwhile float(data[0]) <= 0 or float(data[1]) <= 0:\n c.scale(\"all\", ((x1+x2) / 2), ((y1+y2) / 2), 0.5, 0.5)\n data = c.bbox(rect)\n\nNow until it doesn't touches walls, it rescales.\n"
] | [
0
] | [] | [] | [
"canvas",
"python",
"scale",
"tkinter",
"tkinter_canvas"
] | stackoverflow_0074627144_canvas_python_scale_tkinter_tkinter_canvas.txt |
Q:
Write array with opcua
Hi everyone!
I need to write a variable as an array (list) to the OPC server. I am using Python and Python OPC-UA.
In the picture you can see the name and structure of the variable where I am trying to write the data.
opc-image
I try to use this code and get an error
q = [0]*50
data = client.get_node(f'ns=3;s="OSC_Profile"."THIS"')
dv = ua.Variant(q, ua.VariantType.ExtensionObject)
data.set_value(dv)
KeyError Traceback (most recent call last)
Input In [18], in <cell line: 9>()
20 dv = ua.Variant(q, ua.VariantType.ExtensionObject)
---> 21 data.set_value(dv)
KeyError: 'int'
I would be glad if someone could suggest the right way
A:
OK, you have a custom datatype, so you have to create the corresponding class:
q = [ua.OSC(...), ua.OSC(...), ...]
data.set_value(ua.Variant(q, ua.VariantType.ExtensionObject))
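One extra step that may be needed first (an assumption on my side, applying to the python-opcua library that the ua/Client usage above suggests): the Python classes for server-defined structures such as ua.OSC are typically generated at runtime, e.g.
client.load_type_definitions()  # generates Python classes for the server's custom structures
once after connecting, after which ua.OSC(...) instances can be built and written as shown.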
| Write array with opcua | everyone!
I need to write a variable as an array (list) to the OPC server. I am using Python and Python OPC-UA.
In the picture you can see the name and structure of the variable where I am trying to write the data.
opc-image
I try to use this code and get an error
q = [0]*50
data = client.get_node(f'ns=3;s="OSC_Profile"."THIS"')
dv = ua.Variant(q, ua.VariantType.ExtensionObject)
data.set_value(dv)
KeyError Traceback (most recent call last)
Input In [18], in <cell line: 9>()
20 dv = ua.Variant(q, ua.VariantType.ExtensionObject)
---> 21 data.set_value(dv)
KeyError: 'int'
I would be glad if someone could suggest the right way
| [
"Ok you have a custom datatype so you have to create the corresponding class:\nq = [ua.OSC(...), ua.OSC(...), ...]\ndata.set_value(ua.Variant(q, ua.VariantType.ExtensionObject)) \n\n"
] | [
1
] | [] | [] | [
"opc",
"opc_ua",
"python"
] | stackoverflow_0074640874_opc_opc_ua_python.txt |
Q:
Iam getting this error how to solve AttributeError: 'tuple' object has no attribute 'setInput'?
I have imported a picture and everything is fine but I am getting an AttributeError when running net.setInput(blob).
import numpy as np
import cv2
#load the image
img=cv2.imread(r"E:\Face recognition\3_FaceDetection_FeatureExtraction\images\faces.jpg")
cv2.imshow('faces',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
print(img)
net = cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt"),("E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel")
#extract blob
blob = cv2.dnn.blobFromImage(img, 1, (300,300), (104,177,123), swapRB=False)
net.setInput(blob)
AttributeError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 net.setInput(blob)
AttributeError: 'tuple' object has no attribute 'setInput'
A:
As @Dan Mašek mentioned in his comment, the critical line is
net = cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt"),("E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel")
The comma makes net a tuple containing the return value of the function calll cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt") and the string "E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel". This is equivalent to
>>> a = 4, "abc"
>>> a
(4, 'abc')
>>> type(a)
tuple
In the last line you call net.setInput(blob), but a tuple has no attribute or method setInput, which is why you get the error.
Probably, you wanted to write net = cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt", "E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel") or simply net = cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt") (I do not know the method readNetFromCaffe, so I do not know what it requires as input).
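For completeness, a sketch of how the corrected lines could look (paths taken verbatim from the question; net.forward() is added to actually run the detection):
net = cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt",
                               "E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel")
blob = cv2.dnn.blobFromImage(img, 1, (300, 300), (104, 177, 123), swapRB=False)
net.setInput(blob)        # works now, because net is a cv2.dnn Net object, not a tuple
detections = net.forward()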
 | I am getting this error; how do I solve AttributeError: 'tuple' object has no attribute 'setInput'? | I have imported a picture and everything is fine but I am getting an AttributeError when running net.setInput(blob).
import numpy as np
import cv2
#load the image
img=cv2.imread(r"E:\Face recognition\3_FaceDetection_FeatureExtraction\images\faces.jpg")
cv2.imshow('faces',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
print(img)
net = cv2.dnn.readNetFromCaffe("E:/Face recognition/Models/deploy.prototxt.txt"),("E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel")
#extract blob
blob = cv2.dnn.blobFromImage(img, 1, (300,300), (104,177,123), swapRB=False)
net.setInput(blob)
AttributeError Traceback (most recent call last)
Input In [6], in <cell line: 1>()
----> 1 net.setInput(blob)
AttributeError: 'tuple' object has no attribute 'setInput'
| [
"As @Dan Mašek mentioned in his comment, the critical line is\nnet = cv2.dnn.readNetFromCaffe(\"E:/Face recognition/Models/deploy.prototxt.txt\"),(\"E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel\")\n\nThe comma makes net a tuple containing the return value of the function calll cv2.dnn.readNetFromCaffe(\"E:/Face recognition/Models/deploy.prototxt.txt\") and the string \"E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel\". This is equivalent to\n>>> a = 4, \"abc\"\n>>> a\n(4, 'abc')\n>>> type(a)\ntuple\n\nIn the last line you call net.setInput(blob) but a tuple has not attribute or method setInput, which is why you get the error.\nProbably, you wanted to write net = cv2.dnn.readNetFromCaffe(\"E:/Face recognition/Models/deploy.prototxt.txt\", \"E:/Face recognition/Models/res10_300x300_ssd_iter_140000_fp16.caffemodel\") or simply net = cv2.dnn.readNetFromCaffe(\"E:/Face recognition/Models/deploy.prototxt.txt\") (I do not know the method readNetFromCaffe, so I do not know what it requires as input).\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074628839_python.txt |
Q:
Convert dataframe column names from camel case to snake case
I want to change the column labels of a Pandas DataFrame from
['evaluationId,createdAt,scheduleEndDate,sharedTo, ...]
to
['EVALUATION_ID,CREATED_AT,SCHEDULE_END_DATE,SHARED_TO,...]
I have a lot of columns with this pattern "aaaBb" and I want to create this pattern "AAA_BB" of renamed columns
Can anyone help me?
Cheers
I tried something like
new_columns = [unidecode(x).upper()
for x in df.columns]
But I have no idea how to create a solution.
A:
You can use a regex with str.replace to detect the lowercase-UPPERCASE shifts and insert a _, then str.upper:
df.columns = (df.columns
.str.replace('(?<=[a-z])(?=[A-Z])', '_', regex=True)
.str.upper()
)
Before:
evaluationId createdAt scheduleEndDate sharedTo
0 NaN NaN NaN NaN
After:
EVALUATION_ID CREATED_AT SCHEDULE_END_DATE SHARED_TO
0 NaN NaN NaN NaN
A:
To change the labels of the columns in a Pandas DataFrame, you can use the DataFrame.rename() method. This method takes a dictionary as its argument, where the keys are the old column labels and the values are the new labels.
For example, to change the column labels in the DataFrame you provided, you can use the following code:
import pandas as pd
# Create a sample DataFrame
df = pd.DataFrame(columns=['evaluationId,createdAt,scheduleEndDate,sharedTo, ...'])
# Use the rename() method to change the column labels
df = df.rename(columns={'evaluationId,createdAt,scheduleEndDate,sharedTo, ...': 'EVALUATION_ID,CREATED_AT,SCHEDULE_END_DATE,SHARED_TO,...'})
Note that the keys and values in the dictionary passed to the rename() method should be strings.
If you have many columns with the pattern "aaaBb" that you want to change to "AAA_BB", you can use the string replace() method inside rename() to do this in one go. This method takes two arguments: the substring you want to find and the substring you want to replace it with.
For example, you could use the following code to replace all instances of "aaaBb" with "AAA_BB" in the column labels:
import pandas as pd
# Create a sample DataFrame
df = pd.DataFrame(columns=['aaaBb1', 'aaaBb2', 'aaaBb3', ...])
# Use the rename() method with the str.replace() method to change the column labels
df = df.rename(columns=lambda x: x.replace('aaaBb', 'AAA_BB'))
This code uses a lambda function to apply the replace() method to each column label (each label is a plain string, so there is no .str accessor here). This allows you to replace all instances of "aaaBb" with "AAA_BB" in the column labels in one go.
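For the camelCase-to-SNAKE_CASE conversion the question actually asks about, a minimal sketch that plugs a small converter function into rename() (the regex is the same idea as in the first answer):
import re
import pandas as pd

def to_upper_snake(name):
    # insert "_" wherever a lowercase letter is followed by an uppercase letter, then uppercase everything
    return re.sub(r'(?<=[a-z])(?=[A-Z])', '_', name).upper()

df = pd.DataFrame(columns=['evaluationId', 'createdAt', 'scheduleEndDate', 'sharedTo'])
df = df.rename(columns=to_upper_snake)
print(df.columns)
# Index(['EVALUATION_ID', 'CREATED_AT', 'SCHEDULE_END_DATE', 'SHARED_TO'], dtype='object')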
| Convert dataframe column names from camel case to snake case | I want to change the column labels of a Pandas DataFrame from
['evaluationId,createdAt,scheduleEndDate,sharedTo, ...]
to
['EVALUATION_ID,CREATED_AT,SCHEDULE_END_DATE,SHARED_TO,...]
I have a lot of columns with this pattern "aaaBb" and I want to create this pattern "AAA_BB" of renamed columns
Can anyone help me?
Cheers
I tried something like
new_columns = [unidecode(x).upper()
for x in df.columns]
But I have no idea how to create a solution.
| [
"You can use a regex with str.replace to detect the lowercase-UPPERCASE shifts and insert a _, then str.upper:\ndf.columns = (df.columns\n .str.replace('(?<=[a-z])(?=[A-Z])', '_', regex=True)\n .str.upper()\n )\n\nBefore:\n evaluationId createdAt scheduleEndDate sharedTo\n0 NaN NaN NaN NaN\n\nAfter:\n EVALUATION_ID CREATED_AT SCHEDULE_END_DATE SHARED_TO\n0 NaN NaN NaN NaN\n\n",
"To change the labels of the columns in a Pandas DataFrame, you can use the DataFrame.rename() method. This method takes a dictionary as its argument, where the keys are the old column labels and the values are the new labels.\nFor example, to change the column labels in the DataFrame you provided, you can use the following code:\nimport pandas as pd\nimport pandas as pd \n# Create a sample DataFrame\ndf = pd.DataFrame(columns=['evaluationId,createdAt,scheduleEndDate,sharedTo, ...'])\n\n# Use the rename() method to change the column labels\ndf = df.rename(columns={'evaluationId,createdAt,scheduleEndDate,sharedTo, ...': 'EVALUATION_ID,CREATED_AT,SCHEDULE_END_DATE,SHARED_TO,...'})\n\nNote that the keys and values in the dictionary passed to the rename() method should be strings.\nIf you have many columns with the pattern \"aaaBb\" that you want to change to \"AAA_BB\", you can use the str.replace() method to do this in one go. This method takes two arguments: the substring you want to find, and the substring you want to replace it with.\nFor example, you could use the following code to replace all instances of \"aaaBb\" with \"AAA_BB\" in the column labels:\nimport pandas as pd\n\n# Create a sample DataFrame\ndf = pd.DataFrame(columns=['aaaBb1', 'aaaBb2', 'aaaBb3', ...])\n\n# Use the rename() method with the str.replace() method to change the column labels\ndf = df.rename(columns=lambda x: x.str.replace('aaaBb', 'AAA_BB'))\n\nThis code uses the lambda function to apply the str.replace() method to each column label in the DataFrame. This allows you to replace all instances of \"aaaBb\" with \"AAA_BB\" in the column labels in one go.\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074643621_dataframe_pandas_python.txt |
Q:
How to give Tkinter file dialog focus
I'm using OS X. I'm double clicking my script to run it from Finder. This script imports and runs the function below.
I'd like the script to present a Tkinter open file dialog and return a list of files selected.
Here's what I have so far:
def open_files(starting_dir):
"""Returns list of filenames+paths given starting dir"""
import Tkinter
import tkFileDialog
root = Tkinter.Tk()
root.withdraw() # Hide root window
filenames = tkFileDialog.askopenfilenames(parent=root,initialdir=starting_dir)
return list(filenames)
I double click the script, terminal opens, the Tkinter file dialog opens. The problem is that the file dialog is behind the terminal.
Is there a way to suppress the terminal or ensure the file dialog ends up on top?
Thanks,
Wes
A:
For anybody that ends up here via Google (like I did), here is a hack I've devised that works in both Windows and Ubuntu. In my case, I actually still need the terminal, but just want the dialog to be on top when displayed.
# Make a top-level instance and hide since it is ugly and big.
root = Tkinter.Tk()
root.withdraw()
# Make it almost invisible - no decorations, 0 size, top left corner.
root.overrideredirect(True)
root.geometry('0x0+0+0')
# Show window again and lift it to top so it can get focus,
# otherwise dialogs will end up behind the terminal.
root.deiconify()
root.lift()
root.focus_force()
filenames = tkFileDialog.askopenfilenames(parent=root) # Or some other dialog
# Get rid of the top-level instance once to make it actually invisible.
root.destroy()
A:
Use AppleEvents to give focus to Python. Eg:
import os
os.system('''/usr/bin/osascript -e 'tell app "Finder" to set frontmost of process "Python" to true' ''')
A:
I had this issue with the window behind Spyder:
root = tk.Tk()
root.overrideredirect(True)
root.geometry('0x0+0+0')
root.focus_force()
FT = [("%s files" % ftype, "*.%s" % ftype), ('All Files', '*.*')]
ttl = 'Select File'
File = filedialog.askopenfilename(parent=root, title=ttl, filetypes=FT)
root.withdraw()
A:
filenames = tkFileDialog.askopenfilenames(parent=root,initialdir=starting_dir)
Passing parent=root is enough to put tkFileDialog on top of its parent. If the dialog still ends up behind, it simply means that root itself is not on top; bring root to the top first and the dialog will automatically follow its parent.
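A minimal sketch of that idea inside the OP's function (Python 2 module names kept as in the question; setting -topmost on the hidden root is one way to force it - and therefore the dialog it parents - above the terminal):
def open_files(starting_dir):
    """Return a list of selected files, with the dialog raised above the terminal."""
    import Tkinter
    import tkFileDialog

    root = Tkinter.Tk()
    root.withdraw()                    # hide the root window
    root.lift()                        # raise the (hidden) root in the stacking order
    root.attributes('-topmost', True)  # so the dialog it parents opens on top
    filenames = tkFileDialog.askopenfilenames(parent=root, initialdir=starting_dir)
    root.destroy()
    return list(filenames)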
A:
Try the focus_set method. For more, see the Dialog Windows page in PythonWare's An Introduction to Tkinter.
A:
None of the other answers above worked for me 100% of the time.
In the end, what worked for me was adding 2 attributes: -alpha and -topmost
This will force the window to be always on top, which was what I wanted.
import tkinter as tk
root = tk.Tk()
# Hide the window
root.attributes('-alpha', 0.0)
# Always have it on top
root.attributes('-topmost', True)
file_name = tk.filedialog.askopenfilename( parent=root,
title='Open file',
initialdir=starting_dir,
filetypes=[("text files", "*.txt")])
# Destroy the window when the file dialog is finished
root.destroy()
| How to give Tkinter file dialog focus | I'm using OS X. I'm double clicking my script to run it from Finder. This script imports and runs the function below.
I'd like the script to present a Tkinter open file dialog and return a list of files selected.
Here's what I have so far:
def open_files(starting_dir):
"""Returns list of filenames+paths given starting dir"""
import Tkinter
import tkFileDialog
root = Tkinter.Tk()
root.withdraw() # Hide root window
filenames = tkFileDialog.askopenfilenames(parent=root,initialdir=starting_dir)
return list(filenames)
I double click the script, terminal opens, the Tkinter file dialog opens. The problem is that the file dialog is behind the terminal.
Is there a way to suppress the terminal or ensure the file dialog ends up on top?
Thanks,
Wes
| [
"For anybody that ends up here via Google (like I did), here is a hack I've devised that works in both Windows and Ubuntu. In my case, I actually still need the terminal, but just want the dialog to be on top when displayed.\n# Make a top-level instance and hide since it is ugly and big.\nroot = Tkinter.Tk()\nroot.withdraw()\n\n# Make it almost invisible - no decorations, 0 size, top left corner.\nroot.overrideredirect(True)\nroot.geometry('0x0+0+0')\n\n# Show window again and lift it to top so it can get focus,\n# otherwise dialogs will end up behind the terminal.\nroot.deiconify()\nroot.lift()\nroot.focus_force()\n\nfilenames = tkFileDialog.askopenfilenames(parent=root) # Or some other dialog\n\n# Get rid of the top-level instance once to make it actually invisible.\nroot.destroy()\n\n",
"Use AppleEvents to give focus to Python. Eg:\nimport os\n\n os.system('''/usr/bin/osascript -e 'tell app \"Finder\" to set frontmost of process \"Python\" to true' ''')\n\n",
"I had this issue with the window behind Spyder:\nroot = tk.Tk()\nroot.overrideredirect(True)\nroot.geometry('0x0+0+0')\nroot.focus_force()\nFT = [(\"%s files\" % ftype, \"*.%s\" % ftype), ('All Files', '*.*')]\nttl = 'Select File'\nFile = filedialog.askopenfilename(parent=root, title=ttl, filetypes=FT)\nroot.withdraw()\n\n",
"filenames = tkFileDialog.askopenfilenames(parent=root,initialdir=starting_dir)\nWell parent=root is enough for making tkFileDialog on top. It simply means that your root is not on top, try making root on top and automatically tkFileDialog will take top of the parent.\n",
"Try the focus_set method. For more, see the Dialog Windows page in PythonWare's An Introduction to Tkinter.\n",
"None of the other answers above worked for me 100% of the time.\nIn the end, what worked for me was adding 2 attibutes: -alpha and -topmost\nThis will force the window to be always on top, which was what I wanted.\nimport tkinter as tk\n\nroot = tk.Tk()\n# Hide the window\nroot.attributes('-alpha', 0.0)\n# Always have it on top\nroot.attributes('-topmost', True)\nfile_name = tk.filedialog.askopenfilename( parent=root, \n title='Open file',\n initialdir=starting_dir,\n filetypes=[(\"text files\", \"*.txt\")])\n# Destroy the window when the file dialog is finished\nroot.destroy()\n\n"
] | [
15,
6,
4,
1,
0,
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0003375227_python_tkinter.txt |
Q:
ParserError: unable to convert txt file to df due to json format and delimiter being the same
I'm fairly new to dealing with .txt files that have a dictionary within them. I'm trying to pd.read_csv and create a dataframe in pandas. I get thrown an error of Error tokenizing data. C error: Expected 4 fields in line 2, saw 11. I believe I found the root problem: the file is difficult to read because each row contains a dict whose key-value pairs are separated by commas, which in this case is also the delimiter.
Data (store.txt)
id,name,storeid,report
11,JohnSmith,3221-123-555,{"Source":"online","FileFormat":0,"Isonline":true,"comment":"NAN","itemtrack":"110", "info": {"haircolor":"black", "age":53}, "itemsboughtid":[],"stolenitem":[{"item":"candy","code":1},{"item":"candy","code":1}]}
35,BillyDan,3221-123-555,{"Source":"letter","FileFormat":0,"Isonline":false,"comment":"this is the best store, hands down and i will surely be back...","itemtrack":"110", "info": {"haircolor":"black", "age":21},"itemsboughtid":[1,42,465,5],"stolenitem":[{"item":"shoe","code":2}]}
64,NickWalker,3221-123-555, {"Source":"letter","FileFormat":0,"Isonline":false, "comment":"we need this area to be fixed, so much stuff is everywhere and i do not like this one bit at all, never again...","itemtrack":"110", "info": {"haircolor":"red", "age":22},"itemsboughtid":[1,2],"stolenitem":[{"item":"sweater","code":11},{"item":"mask","code":221},{"item":"jack,jill","code":001}]}
How would I read this csv file and create new columns based on the key-values? In addition, what if there are more key-value pairs in other data... for example > 11 keys within the dictionary.
Is there an efficient way of creating a df from the example above?
My code when trying to read as csv:
df = pd.read_csv('store.txt', header=None)
I tried to import json and use a converter but it did not work and converted all the commas to a |
import json
df = pd.read_csv('store.txt', converters={'report': json.loads}, header=0, sep="|")
In addition I also tried to use:
import pandas as pd
import json
df=pd.read_csv('store.txt', converters={'report':json.loads}, header=0, quotechar="'")
I was also thinking of adding a quote at the beginning of the dictionary and at the end to make it a string, but thought it would be too tedious to find the closing brackets.
A:
I think adding quotes around the dictionaries is the right approach. You can use regex to do so and use a different quote character than " (I used § in my example):
from io import StringIO
import re
import json
with open("store.txt", "r") as f:
csv_content = re.sub(r"(\{.*})", r"§\1§", f.read())
df = pd.read_csv(StringIO(csv_content), skipinitialspace=True, quotechar="§", engine="python")
df_out = pd.concat([
df[["id", "name", "storeid"]],
pd.DataFrame(df["report"].apply(lambda x: json.loads(x)).values.tolist())
], axis=1)
print(df_out)
Note: the very last value in your csv isn't valid json: "code":001. It should either be "code":"001" or "code":1
Output:
id name storeid Source ... itemtrack info itemsboughtid stolenitem
0 11 JohnSmith 3221-123-555 online ... 110 {'haircolor': 'black', 'age': 53} [] [{'item': 'candy', 'code': 1}, {'item': 'candy...
1 35 BillyDan 3221-123-555 letter ... 110 {'haircolor': 'black', 'age': 21} [1, 42, 465, 5] [{'item': 'shoe', 'code': 2}]
2 64 NickWalker 3221-123-555 letter ... 110 {'haircolor': 'red', 'age': 22} [1, 2] [{'item': 'sweater', 'code': 11}, {'item': 'ma...
 | ParserError: unable to convert txt file to df due to json format and delimiter being the same | I'm fairly new to dealing with .txt files that have a dictionary within them. I'm trying to pd.read_csv and create a dataframe in pandas. I get thrown an error of Error tokenizing data. C error: Expected 4 fields in line 2, saw 11. I believe I found the root problem: the file is difficult to read because each row contains a dict whose key-value pairs are separated by commas, which in this case is also the delimiter.
Data (store.txt)
id,name,storeid,report
11,JohnSmith,3221-123-555,{"Source":"online","FileFormat":0,"Isonline":true,"comment":"NAN","itemtrack":"110", "info": {"haircolor":"black", "age":53}, "itemsboughtid":[],"stolenitem":[{"item":"candy","code":1},{"item":"candy","code":1}]}
35,BillyDan,3221-123-555,{"Source":"letter","FileFormat":0,"Isonline":false,"comment":"this is the best store, hands down and i will surely be back...","itemtrack":"110", "info": {"haircolor":"black", "age":21},"itemsboughtid":[1,42,465,5],"stolenitem":[{"item":"shoe","code":2}]}
64,NickWalker,3221-123-555, {"Source":"letter","FileFormat":0,"Isonline":false, "comment":"we need this area to be fixed, so much stuff is everywhere and i do not like this one bit at all, never again...","itemtrack":"110", "info": {"haircolor":"red", "age":22},"itemsboughtid":[1,2],"stolenitem":[{"item":"sweater","code":11},{"item":"mask","code":221},{"item":"jack,jill","code":001}]}
How would I read this csv file and create new columns based on the key-values? In addition, what if there are more key-value pairs in other data... for example > 11 keys within the dictionary.
Is there an efficient way of creating a df from the example above?
My code when trying to read as csv:
df = pd.read_csv('store.txt', header=None)
I tried to import json and use a converter but it did not work and converted all the commas to a |
import json
df = pd.read_csv('store.txt', converters={'report': json.loads}, header=0, sep="|")
In addition I also tried to use:
import pandas as pd
import json
df=pd.read_csv('store.txt', converters={'report':json.loads}, header=0, quotechar="'")
I was also thinking of adding a quote at the beginning of the dictionary and at the end to make it a string, but thought it would be too tedious to find the closing brackets.
| [
"I think adding quotes around the dictionaries is the right approach. You can use regex to do so and use a different quote character than \" (I used § in my example):\nfrom io import StringIO\nimport re\nimport json\n\nwith open(\"store.txt\", \"r\") as f:\n csv_content = re.sub(r\"(\\{.*})\", r\"§\\1§\", f.read())\n\ndf = pd.read_csv(StringIO(csv_content), skipinitialspace=True, quotechar=\"§\", engine=\"python\")\n\ndf_out = pd.concat([\n df[[\"id\", \"name\", \"storeid\"]],\n pd.DataFrame(df[\"report\"].apply(lambda x: json.loads(x)).values.tolist())\n], axis=1)\n\nprint(df_out)\n\n\nNote: the very last value in your csv isn't valid json: \"code\":001. It should either be \"code\":\"001\" or \"code\":1\nOutput:\n id name storeid Source ... itemtrack info itemsboughtid stolenitem\n0 11 JohnSmith 3221-123-555 online ... 110 {'haircolor': 'black', 'age': 53} [] [{'item': 'candy', 'code': 1}, {'item': 'candy...\n1 35 BillyDan 3221-123-555 letter ... 110 {'haircolor': 'black', 'age': 21} [1, 42, 465, 5] [{'item': 'shoe', 'code': 2}]\n2 64 NickWalker 3221-123-555 letter ... 110 {'haircolor': 'red', 'age': 22} [1, 2] [{'item': 'sweater', 'code': 11}, {'item': 'ma...\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python",
"text_files"
] | stackoverflow_0074641167_pandas_python_text_files.txt |
Q:
How do I add this code (txt file) into my class __init__ section?
Using Python. I have a class with 4 functions (addStudent, showStudent, deleteStudent, searchStudent), and am pulling from a database stored in a .txt file.
I have this code at the beginning of every function:
data = "studentMockData_AS2.txt"
students = []
with open(data, "r") as datafile:
for line in datafile:
datum = line.split()
students.append(datum)
I tried to add it into the def __init__() part of my class to avoid having it in every function, but this isn't working. Mostly getting AttributeError: 'str' object has no attribute 'students'. This is how it looks:
class Student():
def __init__(self, data):
self.data = "studentMockData_AS2.txt"
self.students = []
with open (data, 'r') as datafile:
self.content = datafile.read()
for line in datafile:
self.datum = line.split()
self.students.append(self.datum)
def SearchStudent(self):
if self == ('byId'):
searchId = input('Enter student id: ')
for self.datum in self.students:
if self.datum[0] == searchId:
print(self.datum)
# the rest of the code
A:
There is something going wrong with your function SearchStudent. You may want to pass an extra parameter OR use a class member variable to compare to the string 'byId'.
If the if condition can ever pass (i.e. self == 'byId' is true), then self is the string 'byId' rather than a Student instance, so it is normal that it does not have the members you request.
class Student():
def __init__(self, data):
self.data = "studentMockData_AS2.txt"
self.students = []
with open (data, 'r') as datafile:
            # do not call datafile.read() first - it consumes the file and the loop below would get nothing
            for line in datafile:
self.datum = line.split()
self.students.append(self.datum)
# notice the new parameter here
def SearchStudent(self, rtype):
if rtype == ('byId'):
searchId = input('Enter student id: ')
for self.datum in self.students:
if self.datum[0] == searchId:
print(self.datum)
# the rest of the code
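A hypothetical usage sketch (the file name comes from the question; rtype selects the search mode):
db = Student("studentMockData_AS2.txt")
db.SearchStudent("byId")   # prompts for an id and prints the matching rows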
 | How do I add this code (txt file) into my class __init__ section? | Using Python. I have a class with 4 functions (addStudent, showStudent, deleteStudent, searchStudent), and am pulling from a database stored in a .txt file.
I have this code at the beginning of every function:
data = "studentMockData_AS2.txt"
students = []
with open(data, "r") as datafile:
for line in datafile:
datum = line.split()
students.append(datum)
I tried to add it into the def __init__() part of my class to avoid having it in every function, but this isn't working. Mostly getting AttributeError: 'str' object has no attribute 'students'. This is how it looks:
class Student():
def __init__(self, data):
self.data = "studentMockData_AS2.txt"
self.students = []
with open (data, 'r') as datafile:
self.content = datafile.read()
for line in datafile:
self.datum = line.split()
self.students.append(self.datum)
def SearchStudent(self):
if self == ('byId'):
searchId = input('Enter student id: ')
for self.datum in self.students:
if self.datum[0] == searchId:
print(self.datum)
# the rest of the code
| [
"There is something going wrong with your function SearchStudent. You may want to pass an extra parameter OR to use a class member variable to compare to the string 'byId'.\nIn the case the if condition passes (i.e.: self == 'byId' is true), then your class reference self is a string. So it is normal it does not have the members you request.\nclass Student(): \n\n def __init__(self, data):\n self.data = \"studentMockData_AS2.txt\"\n self.students = []\n with open (data, 'r') as datafile:\n self.content = datafile.read()\n for line in datafile:\n self.datum = line.split()\n self.students.append(self.datum)\n\n # notice the new parameter here\n def SearchStudent(self, rtype):\n if rtype == ('byId'):\n searchId = input('Enter student id: ')\n \n for self.datum in self.students:\n if self.datum[0] == searchId:\n print(self.datum)\n # the rest of the code\n\n"
] | [
0
] | [] | [] | [
"class",
"python"
] | stackoverflow_0074643650_class_python.txt |
Q:
Python in-memory cache with time to live
I have multiple threads running the same process that need to be able to notify each other that something should not be worked on for the next n seconds; it's not the end of the world if they do, however.
My aim is to be able to pass a string and a TTL to the cache and be able to fetch all the strings that are in the cache as a list. The cache can live in memory and the TTL's will be no more than 20 seconds.
Does anyone have any suggestions for how this can be accomplished?
A:
In case you don't want to use any 3rd-party libraries, you can add one more parameter to your expensive function: ttl_hash=None. This new parameter is a so-called "time sensitive hash"; its only purpose is to affect lru_cache.
For example:
from functools import lru_cache
import time
@lru_cache()
def my_expensive_function(a, b, ttl_hash=None):
del ttl_hash # to emphasize we don't use it and to shut pylint up
return a + b # horrible CPU load...
def get_ttl_hash(seconds=3600):
"""Return the same value withing `seconds` time period"""
return round(time.time() / seconds)
# somewhere in your code...
res = my_expensive_function(2, 2, ttl_hash=get_ttl_hash())
# cache will be updated once in an hour
A:
The OP is using python 2.7 but if you're using python 3, ExpiringDict mentioned in the accepted answer is currently, well, expired. The last commit to the github repo was June 17, 2017 and there is an open issue that it doesn't work with Python 3.5
As of September 1, 2020, there is a more recently maintained project cachetools.
pip install cachetools
from cachetools import TTLCache
cache = TTLCache(maxsize=10, ttl=360)
cache['apple'] = 'top dog'
...
>>> cache['apple']
'top dog'
... after 360 seconds...
>>> cache['apple']
KeyError exception raised
ttl is the time to live in seconds.
A:
Regarding an expiring in-memory cache, for general-purpose use, a common design pattern to do this is not via a dictionary, but via a function or method decorator. A cache dictionary is managed behind the scenes. As such, this answer somewhat complements the answer by User which uses a dictionary rather than a decorator.
The ttl_cache decorator in cachetools works a lot like functools.lru_cache, but with a time to live.
import cachetools.func
@cachetools.func.ttl_cache(maxsize=128, ttl=10 * 60)
def example_function(key):
return get_expensively_computed_value(key)
class ExampleClass:
EXP = 2
@classmethod
@cachetools.func.ttl_cache()
def example_classmethod(cls, i):
return i * cls.EXP
@staticmethod
@cachetools.func.ttl_cache()
def example_staticmethod(i):
return i * 3
A:
You can use the expiringdict module:
The core of the library is ExpiringDict class which is an ordered dictionary with auto-expiring values for caching purposes.
In the description they do not talk about multithreading, so in order not to mess up, use a Lock.
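A minimal sketch of what that could look like for the OP's use case (assumes the expiringdict package is installed; the Lock is an extra precaution around the shared dict, since the docs make no thread-safety promises):
from threading import Lock
from expiringdict import ExpiringDict

cache = ExpiringDict(max_len=1000, max_age_seconds=20)  # entries expire after 20 s
lock = Lock()

def mark(s):
    """Tell the other threads that `s` should not be worked on for now."""
    with lock:
        cache[s] = True

def blocked():
    """Return the strings currently in the cache as a list."""
    with lock:
        return list(cache.keys())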
A:
I absolutely love the idea from @iutinvg; I just wanted to take it a little further: decouple it from having to know to pass the ttl into every function and just make it a decorator, so you don't have to think about it. If you have Django, Python 3, and don't feel like pip installing any dependencies, try this out.
import time
from django.utils.functional import lazy
from functools import lru_cache, partial, update_wrapper
def lru_cache_time(seconds, maxsize=None):
"""
Adds time aware caching to lru_cache
"""
def wrapper(func):
# Lazy function that makes sure the lru_cache() invalidate after X secs
ttl_hash = lazy(lambda: round(time.time() / seconds), int)()
@lru_cache(maxsize)
def time_aware(__ttl, *args, **kwargs):
"""
Main wrapper, note that the first argument ttl is not passed down.
This is because no function should bother to know this that
this is here.
"""
def wrapping(*args, **kwargs):
return func(*args, **kwargs)
return wrapping(*args, **kwargs)
return update_wrapper(partial(time_aware, ttl_hash), func)
return wrapper
Proving it works (with examples):
@lru_cache_time(seconds=10)
def meaning_of_life():
"""
This message should show up if you call help().
"""
print('this better only show up once!')
return 42
@lru_cache_time(seconds=10)
def multiply(a, b):
"""
This message should show up if you call help().
"""
print('this better only show up once!')
return a * b
# This is a test, prints a `.` for every second, there should be 10s
# between each "this better only show up once!" *2 because of the two functions.
for _ in range(20):
meaning_of_life()
multiply(50, 99991)
print('.')
time.sleep(1)
A:
I know this is a little old, but for those who are interested in no third-party dependencies, this is a minor wrapper around the builtin functools.lru_cache (I noticed Javier's similar answer after writing this, but figured I post it anyway since this doesn't require Django):
import functools
import time
def time_cache(max_age, maxsize=128, typed=False):
"""Least-recently-used cache decorator with time-based cache invalidation.
Args:
max_age: Time to live for cached results (in seconds).
maxsize: Maximum cache size (see `functools.lru_cache`).
typed: Cache on distinct input types (see `functools.lru_cache`).
"""
def _decorator(fn):
@functools.lru_cache(maxsize=maxsize, typed=typed)
def _new(*args, __time_salt, **kwargs):
return fn(*args, **kwargs)
@functools.wraps(fn)
def _wrapped(*args, **kwargs):
return _new(*args, **kwargs, __time_salt=int(time.time() / max_age))
return _wrapped
return _decorator
And its usage:
@time_cache(10)
def expensive(a: int):
"""An expensive function."""
time.sleep(1 + a)
print("Starting...")
expensive(1)
print("Again...")
expensive(1)
print("Done")
NB this uses time.time and comes with all its caveats. You may want to use time.monotonic instead if available/appropriate.
A:
If you want to avoid third-party packages, you can add in a custom timed_lru_cache decorator, which builds upon the lru_cache decorator.
The below defaults to a 20-second lifetime and a max size of 128. Note that the entire cache expires after 20 seconds, not individual items.
from datetime import datetime, timedelta
from functools import lru_cache, wraps
def timed_lru_cache(seconds: int = 20, maxsize: int = 128):
def wrapper_cache(func):
func = lru_cache(maxsize=maxsize)(func)
func.lifetime = timedelta(seconds=seconds)
func.expiration = datetime.utcnow() + func.lifetime
@wraps(func)
def wrapped_func(*args, **kwargs):
if datetime.utcnow() >= func.expiration:
func.cache_clear()
func.expiration = datetime.utcnow() + func.lifetime
return func(*args, **kwargs)
return wrapped_func
return wrapper_cache
Then, just add @timed_lru_cache() above your function and you'll be good to go:
@timed_lru_cache()
def my_function():
# code goes here...
A:
Yet Another Solution
How it works?
The user function is cached using @functools.lru_cache with support for maxsize and typed parameters.
The Result object records the function's return value and "death" time using time.monotonic() + ttl.
The wrapper function checks the "death" time of the return value against time.monotonic() and if the current time exceeds the "death" time, then recalculates the return value with a new "death" time.
Show me the code:
from functools import lru_cache, wraps
from time import monotonic
def lru_cache_with_ttl(maxsize=128, typed=False, ttl=60):
"""Least-recently used cache with time-to-live (ttl) limit."""
class Result:
__slots__ = ('value', 'death')
def __init__(self, value, death):
self.value = value
self.death = death
def decorator(func):
@lru_cache(maxsize=maxsize, typed=typed)
def cached_func(*args, **kwargs):
value = func(*args, **kwargs)
death = monotonic() + ttl
return Result(value, death)
@wraps(func)
def wrapper(*args, **kwargs):
result = cached_func(*args, **kwargs)
if result.death < monotonic():
result.value = func(*args, **kwargs)
result.death = monotonic() + ttl
return result.value
wrapper.cache_clear = cached_func.cache_clear
return wrapper
return decorator
How to use it?
# Recalculate cached results after 5 seconds.
@lru_cache_with_ttl(ttl=5)
def expensive_function(a, b):
return a + b
Benefits
Short, easy to review, and no PyPI install necessary. Relies only on the Python standard library, 3.7+.
No annoying ttl=10 parameter needed at all callsites.
Does not evict all items at the same time.
Key/value pairs actually live for the given TTL value.
Stores only one key/value pair per unique (*args, **kwargs) even when items expire.
Works as a decorator (kudos to the Javier Buzzi answer and Lewis Belcher answer).
Is thread safe.
Benefits from the C-optimizations of CPython from python.org and is compatible with PyPy.
The accepted answer fails #2, #3, #4, #5, and #6.
Drawbacks
Does not proactively evict expired items. Expired items are evicted only when the cache reaches maximum size. If the cache will not reach the maximum size (say maxsize is None), then no evictions will ever occur.
However, only one key/value pair is stored in the cache per unique (*args, **kwargs) given to the cached function. So if there are only 10 different parameter combinations, then the cache will only ever have 10 entries at max.
Note that the "time sensitive hash" and "time salt" solutions are much worse because multiple key/value cache items with identical keys (but different time hashes/salts) are left in the cache.
A:
Something like this?
from time import time, sleep
import itertools
from threading import Thread, RLock
import signal
class CacheEntry():
def __init__(self, string, ttl=20):
self.string = string
self.expires_at = time() + ttl
self._expired = False
def expired(self):
if self._expired is False:
return (self.expires_at < time())
else:
return self._expired
class CacheList():
def __init__(self):
self.entries = []
self.lock = RLock()
def add_entry(self, string, ttl=20):
with self.lock:
self.entries.append(CacheEntry(string, ttl))
def read_entries(self):
with self.lock:
self.entries = list(itertools.dropwhile(lambda x:x.expired(), self.entries))
return self.entries
def read_entries(name, slp, cachelist):
while True:
print "{}: {}".format(name, ",".join(map(lambda x:x.string, cachelist.read_entries())))
sleep(slp)
def add_entries(name, ttl, cachelist):
s = 'A'
while True:
cachelist.add_entry(s, ttl)
print("Added ({}): {}".format(name, s))
sleep(1)
s += 'A'
if __name__ == "__main__":
signal.signal(signal.SIGINT, signal.SIG_DFL)
cl = CacheList()
print_threads = []
print_threads.append(Thread(None, read_entries, args=('t1', 1, cl)))
# print_threads.append(Thread(None, read_entries, args=('t2', 2, cl)))
# print_threads.append(Thread(None, read_entries, args=('t3', 3, cl)))
adder_thread = Thread(None, add_entries, args=('a1', 2, cl))
adder_thread.start()
for t in print_threads:
t.start()
for t in print_threads:
t.join()
adder_thread.join()
A:
I really liked @iutinvg's solution due to its simplicity. However, I don't want to put an extra argument into every function that I need to cache.
So, inspired by Lewis' and Javier's answers, I thought a decorator would be best. However, I did not want to use 3rd-party libraries (as Javier does), and I thought I could improve upon Lewis' solution. So this is what I came up with.
import time
from functools import lru_cache
def ttl_lru_cache(seconds_to_live: int, maxsize: int = 128):
"""
Time aware lru caching
"""
def wrapper(func):
@lru_cache(maxsize)
def inner(__ttl, *args, **kwargs):
# Note that __ttl is not passed down to func,
# as it's only used to trigger cache miss after some time
return func(*args, **kwargs)
return lambda *args, **kwargs: inner(time.time() // seconds_to_live, *args, **kwargs)
return wrapper
My solution uses a lambda to get fewer lines of code and integer floor division (//), so no casting to int is required.
Usage
@ttl_lru_cache(seconds_to_live=10)
def expensive(a: int):
"""An expensive function."""
time.sleep(1 + a)
print("Starting...")
expensive(1)
print("Again...")
expensive(1)
print("Done")
Note: With these decorators, you should never set maxsize=None, because the cache would then grow to infinity over time.
 | Python in-memory cache with time to live | I have multiple threads running the same process that need to be able to notify each other that something should not be worked on for the next n seconds; it's not the end of the world if they do, however.
My aim is to be able to pass a string and a TTL to the cache and be able to fetch all the strings that are in the cache as a list. The cache can live in memory and the TTL's will be no more than 20 seconds.
Does anyone have any suggestions for how this can be accomplished?
| [
"In case you don't want to use any 3rd libraries, you can add one more parameter to your expensive function: ttl_hash=None. This new parameter is so-called \"time sensitive hash\", its the only purpose is to affect lru_cache.\nFor example:\nfrom functools import lru_cache\nimport time\n\n\n@lru_cache()\ndef my_expensive_function(a, b, ttl_hash=None):\n del ttl_hash # to emphasize we don't use it and to shut pylint up\n return a + b # horrible CPU load...\n\n\ndef get_ttl_hash(seconds=3600):\n \"\"\"Return the same value withing `seconds` time period\"\"\"\n return round(time.time() / seconds)\n\n\n# somewhere in your code...\nres = my_expensive_function(2, 2, ttl_hash=get_ttl_hash())\n# cache will be updated once in an hour\n\n\n",
"The OP is using python 2.7 but if you're using python 3, ExpiringDict mentioned in the accepted answer is currently, well, expired. The last commit to the github repo was June 17, 2017 and there is an open issue that it doesn't work with Python 3.5\nAs of September 1, 2020, there is a more recently maintained project cachetools.\npip install cachetools\nfrom cachetools import TTLCache\n\ncache = TTLCache(maxsize=10, ttl=360)\ncache['apple'] = 'top dog'\n...\n>>> cache['apple']\n'top dog'\n... after 360 seconds...\n>>> cache['apple']\nKeyError exception raised\n\nttl is the time to live in seconds.\n",
"Regarding an expiring in-memory cache, for general purpose use, a common design pattern to typically do this is not via a dictionary, but via a function or method decorator. A cache dictionary is managed behind the scenes. As such, this answer somewhat complements the answer by User which uses a dictionary rather than a decorator.\nThe ttl_cache decorator in cachetools works a lot like functools.lru_cache, but with a time to live.\nimport cachetools.func\n\[email protected]_cache(maxsize=128, ttl=10 * 60)\ndef example_function(key):\n return get_expensively_computed_value(key)\n\n\nclass ExampleClass:\n EXP = 2\n\n @classmethod\n @cachetools.func.ttl_cache()\n def example_classmethod(cls, i):\n return i * cls.EXP\n\n @staticmethod\n @cachetools.func.ttl_cache()\n def example_staticmethod(i):\n return i * 3\n\n",
"You can use the expiringdict module:\n\nThe core of the library is ExpiringDict class which is an ordered dictionary with auto-expiring values for caching purposes.\n\nIn the description they do not talk about multithreading, so in order not to mess up, use a Lock.\n",
"I absolutely love the idea from @iutinvg, I just wanted to take it a little further; decouple it from having to know to pass the ttl into every function and just make it a decorator so you don't have to think about it. If you have django, py3, and don't feel like pip installing any dependencies, try this out.\nimport time\nfrom django.utils.functional import lazy\nfrom functools import lru_cache, partial, update_wrapper\n\n\ndef lru_cache_time(seconds, maxsize=None):\n \"\"\"\n Adds time aware caching to lru_cache\n \"\"\"\n def wrapper(func):\n # Lazy function that makes sure the lru_cache() invalidate after X secs\n ttl_hash = lazy(lambda: round(time.time() / seconds), int)()\n \n @lru_cache(maxsize)\n def time_aware(__ttl, *args, **kwargs):\n \"\"\"\n Main wrapper, note that the first argument ttl is not passed down. \n This is because no function should bother to know this that \n this is here.\n \"\"\"\n def wrapping(*args, **kwargs):\n return func(*args, **kwargs)\n return wrapping(*args, **kwargs)\n return update_wrapper(partial(time_aware, ttl_hash), func)\n return wrapper\n\nProving it works (with examples):\n@lru_cache_time(seconds=10)\ndef meaning_of_life():\n \"\"\"\n This message should show up if you call help().\n \"\"\"\n print('this better only show up once!')\n return 42\n\n\n@lru_cache_time(seconds=10)\ndef multiply(a, b):\n \"\"\"\n This message should show up if you call help().\n \"\"\"\n print('this better only show up once!')\n return a * b\n \n# This is a test, prints a `.` for every second, there should be 10s \n# between each \"this better only show up once!\" *2 because of the two functions.\nfor _ in range(20):\n meaning_of_life()\n multiply(50, 99991)\n print('.')\n time.sleep(1)\n\n",
"I know this is a little old, but for those who are interested in no third-party dependencies, this is a minor wrapper around the builtin functools.lru_cache (I noticed Javier's similar answer after writing this, but figured I post it anyway since this doesn't require Django):\nimport functools\nimport time\n\n\ndef time_cache(max_age, maxsize=128, typed=False):\n \"\"\"Least-recently-used cache decorator with time-based cache invalidation.\n\n Args:\n max_age: Time to live for cached results (in seconds).\n maxsize: Maximum cache size (see `functools.lru_cache`).\n typed: Cache on distinct input types (see `functools.lru_cache`).\n \"\"\"\n def _decorator(fn):\n @functools.lru_cache(maxsize=maxsize, typed=typed)\n def _new(*args, __time_salt, **kwargs):\n return fn(*args, **kwargs)\n\n @functools.wraps(fn)\n def _wrapped(*args, **kwargs):\n return _new(*args, **kwargs, __time_salt=int(time.time() / max_age))\n\n return _wrapped\n\n return _decorator\n\nAnd its usage:\n@time_cache(10)\ndef expensive(a: int):\n \"\"\"An expensive function.\"\"\"\n time.sleep(1 + a)\n\n\nprint(\"Starting...\")\nexpensive(1)\nprint(\"Again...\")\nexpensive(1)\nprint(\"Done\")\n\nNB this uses time.time and comes with all its caveats. You may want to use time.monotonic instead if available/appropriate.\n",
"If you want to avoid third-party packages, you can add in a custom timed_lru_cache decorator, which builds upon the lru_cache decorator.\nThe below defaults to a 20-second lifetime and a max size of 128. Note that the entire cache expires after 20 seconds, not individual items.\nfrom datetime import datetime, timedelta\nfrom functools import lru_cache, wraps\n\n\ndef timed_lru_cache(seconds: int = 20, maxsize: int = 128):\n def wrapper_cache(func):\n func = lru_cache(maxsize=maxsize)(func)\n func.lifetime = timedelta(seconds=seconds)\n func.expiration = datetime.utcnow() + func.lifetime\n\n @wraps(func)\n def wrapped_func(*args, **kwargs):\n if datetime.utcnow() >= func.expiration:\n func.cache_clear()\n func.expiration = datetime.utcnow() + func.lifetime\n\n return func(*args, **kwargs)\n\n return wrapped_func\n\n return wrapper_cache\n\nThen, just add @timed_lru_cache() above your function and you'll be good to go:\n@timed_lru_cache()\ndef my_function():\n # code goes here...\n\n",
"Yet Another Solution\nHow it works?\n\nThe user function is cached using @functools.lru_cache with support for maxsize and typed parameters.\nThe Result object records the function's return value and \"death\" time using time.monotonic() + ttl.\nThe wrapper function checks the \"death\" time of the return value against time.monotonic() and if the current time exceeds the \"death\" time, then recalculates the return value with a new \"death\" time.\n\nShow me the code:\nfrom functools import lru_cache, wraps\nfrom time import monotonic\n\n\ndef lru_cache_with_ttl(maxsize=128, typed=False, ttl=60):\n \"\"\"Least-recently used cache with time-to-live (ttl) limit.\"\"\"\n\n class Result:\n __slots__ = ('value', 'death')\n\n def __init__(self, value, death):\n self.value = value\n self.death = death\n\n def decorator(func):\n @lru_cache(maxsize=maxsize, typed=typed)\n def cached_func(*args, **kwargs):\n value = func(*args, **kwargs)\n death = monotonic() + ttl\n return Result(value, death)\n\n @wraps(func)\n def wrapper(*args, **kwargs):\n result = cached_func(*args, **kwargs)\n if result.death < monotonic():\n result.value = func(*args, **kwargs)\n result.death = monotonic() + ttl\n return result.value\n\n wrapper.cache_clear = cached_func.cache_clear\n return wrapper\n\n return decorator\n\nHow to use it?\n# Recalculate cached results after 5 seconds.\n@lru_cache_with_ttl(ttl=5)\ndef expensive_function(a, b):\n return a + b\n\nBenefits\n\nShort, easy to review, and no PyPI install necessary. Relies only on the Python standard library, 3.7+.\nNo annoying ttl=10 parameter needed at all callsites.\nDoes not evict all items at the same time.\nKey/value pairs actually live for the given TTL value.\nStores only one key/value pair per unique (*args, **kwargs) even when items expire.\nWorks as a decorator (kudos to the Javier Buzzi answer and Lewis Belcher answer).\nIs thread safe.\nBenefits from the C-optimizations of CPython from python.org and is compatible with PyPy.\n\nThe accepted answer fails #2, #3, #4, #5, and #6.\nDrawbacks\nDoes not proactively evict expired items. Expired items are evicted only when the cache reaches maximum size. If the cache will not reach the maximum size (say maxsize is None), then no evictions will ever occur.\nHowever, only one key/value pair is stored in the cache per unique (*args, **kwargs) given to the cached function. So if there are only 10 different parameter combinations, then the cache will only ever have 10 entries at max.\nNote that the \"time sensitive hash\" and \"time salt\" solutions are much worse because multiple key/value cache items with identical keys (but different time hashes/salts) are left in the cache.\n",
"Something like that ?\nfrom time import time, sleep\nimport itertools\nfrom threading import Thread, RLock\nimport signal\n\n\nclass CacheEntry():\n def __init__(self, string, ttl=20):\n self.string = string\n self.expires_at = time() + ttl\n self._expired = False\n\n def expired(self):\n if self._expired is False:\n return (self.expires_at < time())\n else:\n return self._expired\n\nclass CacheList():\n def __init__(self):\n self.entries = []\n self.lock = RLock()\n\n def add_entry(self, string, ttl=20):\n with self.lock:\n self.entries.append(CacheEntry(string, ttl))\n\n def read_entries(self):\n with self.lock:\n self.entries = list(itertools.dropwhile(lambda x:x.expired(), self.entries))\n return self.entries\n\ndef read_entries(name, slp, cachelist):\n while True:\n print \"{}: {}\".format(name, \",\".join(map(lambda x:x.string, cachelist.read_entries())))\n sleep(slp)\n\ndef add_entries(name, ttl, cachelist):\n s = 'A'\n while True:\n cachelist.add_entry(s, ttl)\n print(\"Added ({}): {}\".format(name, s))\n sleep(1)\n s += 'A'\n\n\n\nif __name__ == \"__main__\":\n signal.signal(signal.SIGINT, signal.SIG_DFL)\n\n cl = CacheList()\n print_threads = []\n print_threads.append(Thread(None, read_entries, args=('t1', 1, cl)))\n # print_threads.append(Thread(None, read_entries, args=('t2', 2, cl)))\n # print_threads.append(Thread(None, read_entries, args=('t3', 3, cl)))\n\n adder_thread = Thread(None, add_entries, args=('a1', 2, cl))\n adder_thread.start()\n\n for t in print_threads:\n t.start()\n\n for t in print_threads:\n t.join()\n\n adder_thread.join()\n\n",
"I really liked @iutinvg solution due to its simplicity. However, I don't want to put an extra argument into every function, that I need to cache.\nSo inspired by Lewis and Javiers answer, I thought a decorator would be best. However, I did not want to use 3rd party libraries (as Javier) and I thought I could improve upon Lewis solution. So this is what I came up with.\nimport time\nfrom functools import lru_cache\n\n\ndef ttl_lru_cache(seconds_to_live: int, maxsize: int = 128):\n \"\"\"\n Time aware lru caching\n \"\"\"\n def wrapper(func):\n\n @lru_cache(maxsize)\n def inner(__ttl, *args, **kwargs):\n # Note that __ttl is not passed down to func,\n # as it's only used to trigger cache miss after some time\n return func(*args, **kwargs)\n return lambda *args, **kwargs: inner(time.time() // seconds_to_live, *args, **kwargs)\n return wrapper\n\nMy solution use a lambda to get fewer lines of code and integer floor division (//) so no casting to int is required.\nUsage\n@ttl_lru_cache(seconds_to_live=10)\ndef expensive(a: int):\n \"\"\"An expensive function.\"\"\"\n time.sleep(1 + a)\n\n\nprint(\"Starting...\")\nexpensive(1)\nprint(\"Again...\")\nexpensive(1)\nprint(\"Done\")\n\nNote: With these decorators, you should never set maxsize=None, because the cache would then grow to infinity over time.\n"
] | [
163,
130,
44,
32,
19,
19,
8,
7,
5,
2
] | [
"You can also go for dictttl, which has MutableMapping, OrderedDict and defaultDict(list)\nInitialize an ordinary dict with each key having a ttl of 30 seconds\ndata = {'a': 1, 'b': 2}\ndict_ttl = DictTTL(30, data)\n\nOrderedDict\ndata = {'a': 1, 'b': 2}\ndict_ttl = OrderedDictTTL(30, data)\n\ndefaultDict(list)\ndict_ttl = DefaultDictTTL(30)\ndata = {'a': [10, 20], 'b': [1, 2]}\n[dict_ttl.append_values(k, v) for k, v in data.items()]\n\n",
"Somebody took some work to put it into a python package, see https://github.com/vpaliy/lru-expiring-cache.\nWell, I've been mislead by the other answers (which don't really address the question), so this might not be the best tool. Still,\nfrom lru import LruCache\n\ncache = LruCache(maxsize=10, concurrent=True)\n\ndef producer(key: str, value = True, TTL = 20):\n cache.add(key = key, value = value, expires = TTL)\n\ndef consumer():\n remaining_items = cache.items()\n # Alternatively, iterate over available items until you find one not in the cache\n return remaining_items\n\n\nproducer(\"1\", TTL = 1)\nproducer(\"5\", TTL = 3)\nprint(consumer()) ## 1, 5\n\ntime.sleep(2)\nprint(consumer()) ## 5\n\ntime.sleep(2)\nprint(consumer()) ## nothing\n\nTo my surprise, it keeps a ('Concurrent', 'True') entry when running in concurrent mode.\n"
] | [
-1,
-1
] | [
"caching",
"python"
] | stackoverflow_0031771286_caching_python.txt |
Q:
How to suppress OpenAI API warnings in Python
How can I suppress warnings such as:
message='Request to OpenAI API' method=post path=https://api.openai.com/v1/engines/davinci/completions
when I am running OpenAI in python?
A:
Add this to your Jupyter Notebook or Python script file
import logging
logging.getLogger().setLevel(logging.CRITICAL)
A:
When you are using the OpenAI Python package and make requests to GPT-3, for example, you will get log messages like the one the OP shows (reporting details of each request).
In theory you need to get the base logger and raise its level.
However, the openai package uses its own dedicated logger, so you need to get that one and set its level.
The info messages will then be gone:
openai.util.logging.getLogger().setLevel(logging.WARNING)
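An equivalent way to do that, without reaching into openai.util - a sketch that assumes the pre-1.0 openai package, whose internal logger appears to be registered under the name "openai":
import logging

# Raise the level of the openai-named logger so only warnings and above get through
logging.getLogger("openai").setLevel(logging.WARNING)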
 | How to suppress OpenAI API warnings in Python | How can I suppress warnings such as:
message='Request to OpenAI API' method=post path=https://api.openai.com/v1/engines/davinci/completions
when I am running OpenAI in python?
| [
"Add this to your Jupyter Notebook or Python script file\nimport logging\nlogging.getLogger().setLevel(logging.CRITICAL)\n\n",
"When you are using the OpenAI python package and do requests to GPT-3 for example you will get messages like OP shows (showing the duration of the request for example).\nIn theory you need to get the base logger and raise its level.\nHowever, it seems like inside the openAI package a specific logger is specified. Therefore you need to load this one and set its level.\nThe Info-messages will be gone then:\nopenai.util.logging.getLogger().setLevel(logging.WARNING)\n\n"
] | [
0,
0
] | [] | [] | [
"openai",
"python"
] | stackoverflow_0071893613_openai_python.txt |
Q:
How to configure Atom to run Python3 scripts?
In my terminal, I type $ which python3, outputting
/opt/local/bin/python3
I would like to configure Atom to run Python3 scripts. In my Atom Config, I have
runner:
python: "/opt/local/bin/python3"
However, if I run the following script in some script named filename.py,
import sys
print(sys.version)
I get the following output:
2.7.11 (default, Feb 18 2016, 22:00:44)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)]
How exactly does one set up the PATH for Python3.x scripts to run correctly? Is there a different package I could use?
A:
Go to Atom's menu bar -> Packages -> Script -> Configure Script
(Or, you can use the shortcut Shift+Ctrl+Alt+O)
Then type python3 into the Command space.
Hopefully, it will work.
A:
i am using "script" package (3.18.1 by rgbkrk) to run code inside atom and this is how i fixed it
open package settings -> view code
open lib -> grammars -> python.coffee
change from python to python3 in those two places 'Selection Based' and 'File Based'
A:
Install atom-runner in Atom by going into Atom's settings, then into Packages, searching for atom-runner, and installing it.
Now click on the settings tab for atom-runner as shown in the picture above.
Then click on View Code as shown in the picture below.
Then go to the lib folder, open atom-runner.coffee, and replace the following section of code:
defaultScopeMap:
coffee: 'coffee'
js: 'node'
ruby: 'ruby'
python: 'python3'
go: 'go run'
shell: 'bash'
powershell: 'powershell -noninteractive -noprofile -c -'
Make sure that the value for the python keyword is python3; by default it is python. Refer to the pic below:
Another way is to find the location of python3 using the command
which python3
for me the output is:
/usr/local/bin/python3
and add it as a shebang in every Python file. For example:
#!/usr/local/bin/python3
import sys
print("Version ",sys.version)
The only catch is that you have to write this in each file.
A:
If you are using Mac OS X, use the directory shown in the terminal to open the file.
Select the python3 file, right-click and select "Get Info". Copy the directory from "Where:" and paste it in Atom.
As Terry told you:
Then type python3 to the Command space.
A:
You can use the Atom package atom-python-run to launch Python code from Atom; the Python version can be configured in the package settings. By default atom-python-run uses the syntax python {file}. If the python command on your system is not yet pointing to python3, just replace the setting and write python3 {file}.
A:
You are probably using the atom-python-run package to run Python directly from Atom. If Python2 is the default version of Python on your system, then Atom will try to run your Python code with the Python2 interpreter. All you have to do is change some settings in the atom-python-run package to tell it that you want to use Python3. The process is simple: go to Settings >> Packages, click the settings button on the atom-python-run package, and in the F5 and F6 command fields exchange python with python3. That's it. Now you can run your Python3 script by pressing the F5 or F6 key.
A:
If you are using Atom on Mac OS and have script 3.18.1 and atom-python-run 0.9.7 packages installed, the following steps will help you out.
Script-> Configure Script
Then type python3 into the command field in the options dialog.
This should solve your problem.
A:
For mac user:
If you want to use Python3 by default, you can open Atom's settings via Atom→Preferences→Open Config Folder, open .atom/packages/script/lib/grammars/python.coffee, change python to python3 under 'Selection Based' and 'File Based', and save it.
A:
I'm using Linux/Ubuntu, so this method will also work on other Linux distros and on Mac.
First you need to go to the package settings, find Script, and click its Settings.
In the next step, click on View Code.
With this, you will get access to all the main source files of the package. Follow the path below to reach the python.js file.
> lib > grammars > python.js
Now you need to change the highlighted parts to the following values; you just need to change python to python3:
export const Python = {
"Selection Based": {
command: "python3",
args(context) {
setEncoding()
const code = context.getCode()
const tmpFile = GrammarUtils.createTempFileWithCode(code)
return ["-u", tmpFile]
},
},
"File Based": {
command: "python3",
args({ filepath }) {
setEncoding()
return ["-u", filepath]
},
},
}
Save the file, close it, return to your program, and run it using Run Script or Ctrl + Shift + B.
Everything should work fine.
A:
Just add your command in the Configure Run Options and save it. Then use 'Run with Profile' to execute your script with that command. This worked for me.
A:
I have Atom v1.57.0 application installed in Ubuntu 20.04 and the atom package called script is installed to execute python3 scripts.
To allow Atom configuration to run python3 script persistently, I installed the python-is-python3 package by running the terminal command sudo apt install python-is-python3. These links 1 and 2 explain what it is. So simply pressing Ctrl+Shift+B or Run Script will run the python3 scripts persistently by default and without error messages.
If you do not want python to be replaced by python3 at the system-wide level, you can uninstall python-is-python3, i.e. run the terminal command sudo apt remove python-is-python3. Then in Atom, press Ctrl+Shift+P, type script and select Script: Run Options. For Command, you can key in the path /usr/bin/python3.8 or /usr/bin/python3 to define using python3.8 or python3 to execute your Python script. Thereafter, you have to click on Save as profile, give it a profile name and then select that profile name to execute your Python script. See the video created by Corey Schafer from 18.21 mins; it shows you what to do. So finally, instead of using Ctrl+Shift+B or Run Script to execute your script, you now have to use Alt+Ctrl+Shift+B or Run With Profile to execute the script. This is more tedious, but it allows finer control for folks who want to use Atom to run different version-types of Python scripts.
| How to configure Atom to run Python3 scripts? | In my terminal, I type $ which python3, outputting
/opt/local/bin/python3
I would like to configure Atom to run Python3 scripts. In my Atom Config, I have
runner:
python: "/opt/local/bin/python3"
However, if I run the following script in some script named filename.py,
import sys
print(sys.version)
I get the following output:
2.7.11 (default, Feb 18 2016, 22:00:44)
[GCC 4.2.1 Compatible Apple LLVM 7.0.2 (clang-700.1.81)]
How exactly does one set up the PATH for Python3.x scripts to run correctly? Is there a different package I could use?
| [
"Go to the Atom's menu bar -> Packages -> Script -> Configure Script\n(Or, you can use the shortcut Shift+Ctrl+Alt+O)\nThen type python3 to the Command space.\nHopefully, it will work.\n",
"i am using \"script\" package (3.18.1 by rgbkrk) to run code inside atom and this is how i fixed it\n\nopen package settings -> view code \nopen lib -> grammars -> python.coffee\nchange from python to python3 in those two places 'Selection Based' and 'File Based'\n\n",
"Install atom-runner in your Atom going into your settings of Atom and then inside Package and search for atom-runner and install it.\n\nNow click on settings tab for atom-runner as shown above on picture.\nThen click on View Code as shown in below picture.\n \nThen go to lib folder and open atom-runner.coffee and replace the following section of code:\ndefaultScopeMap:\ncoffee: 'coffee'\njs: 'node'\nruby: 'ruby'\npython: 'python3'\ngo: 'go run'\nshell: 'bash'\npowershell: 'powershell -noninteractive -noprofile -c -'\n\nMake sure that for python keyword value is python3, by default it is python. Refer to the pic below:\n \nOther way is to find the location of python3 using command\nwhich python3\n\nfor me output is :\n/usr/local/bin/python3\n\nand add as a shebang in your every python file. For example:-\n#!/usr/local/bin/python3\nimport sys\nprint(\"Version \",sys.version)\n\nOnly catch is that you have to write this in each file.\n",
"If you are using Mac OS X, use the directory on the terminal to open the file. \nSelect the file python3, right click and select \"get info\". Select the directory from \"Where:\" and past it in Atom. \nAs Terry told you:\n\nThen type python3 to the Command space.\n\n",
"You can use the Atom package atom-python-run to launch python code from Atom, the python version can be configured in the package settings. By default atom-python-run uses the syntax python {file}. If the python command on your system is not yet pointing to python3, just replace the setting and write python3 {file}.\n",
"You are probably using atom-python-run package to run Python directly from Atom. If Python2 is the default version of Python in your system, then Atom will try to run your Python code with Python2 interpreter. All you have to do is to change some settings in atom-python-run package to tell it that we want to use Python3. The process is simple. Go to settings>>Packages, click the settings button on atom-python-run package and in the fields of F5 and F6 command, exchange python with python3. That's it. Now you can run your Python3 script by pressing F5 or F6 button.\n\n",
"If you are using Atom on Mac OS and have script 3.18.1 and atom-python-run 0.9.7 packages installed, the following steps will help you out.\nScript-> Configure Script\n\nThen type in Python3 in the command field in the options dialog.\n\nThis should solve your problem.\n",
"For mac user:\nif you want to use Python3 by default, you can open Atom Settings, Atom→Preferences→Open Config Folder, and open.atom/packages/script/lib/grammars/python.coffee, Changing python to python3 under 'Selection Based' and 'File Based', saving it.\n",
"Im using Linux/Ubuntu so in other Linux distro and Mac this method will work.\nFirst you need go to package settings and find Script and click its Settings.\n\nin next step click on view code.\n\nWith this, you will get access to all the main source files of the package. follow the path below to reach the python.js file.\n> lib > grammars > python.js\n\n\nNow you need to change the highlighted parts to the following value, you just need to change the python to python3 :\nexport const Python = {\n \"Selection Based\": {\n command: \"python3\",\n args(context) {\n setEncoding()\n const code = context.getCode()\n const tmpFile = GrammarUtils.createTempFileWithCode(code)\n return [\"-u\", tmpFile]\n },\n },\n\n \"File Based\": {\n command: \"python3\",\n args({ filepath }) {\n setEncoding()\n return [\"-u\", filepath]\n },\n },\n}\n\nSave the file and close it and return to your program and run your program using Run Script or Ctrl + shift + b .\nYou will get the following result\n\nEverything will work fine.\n",
"Just add your command in the Configure Run Options and save it. Then use 'Run with Profile' to use the command to execute your script. This worked for me.\n\n",
"I have Atom v1.57.0 application installed in Ubuntu 20.04 and the atom package called script is installed to execute python3 scripts.\nTo allow Atom configuration to run python3 script persistently, I installed the python-is-python3 package by running the terminal command sudo apt install python-is-python3. These links 1 and 2 explain what it is. So simply pressing Ctrl+Shift+B or Run Script will run the python3 scripts persistently by default and without error messages.\nIf you do not want python to be replace python3 at the system-wide level, you can uninstalled python-is-python3, i.e. run terminal command sudo apt remove python-is-python3. Then in Atom, pressed Ctrl+Shift+P, typed script and selected Script: Run Options. For Command, you can key in the path /usr/bin/python3.8 or /usr/bin/python3 to define using python3.8 or python3 to execute your python script. Thereafter, you have to click on Save as profile, give it a profile name and then selected that profile name to execute your python script. See this video that was created by Corey-Schafer from 18.21mins. It shows you what to do. So finally, instead of using Ctrl+Shift+B or Run Script to execute your script, you now have to use Alt+Ctrl+Shift+B or Run With Profile to execute the script. More tedious but this approach allows finer control. For folks who want to use Atom to run different version-types of Python scripts.\n"
] | [
33,
12,
6,
4,
3,
2,
1,
1,
1,
0,
0
] | [] | [] | [
"atom_editor",
"path",
"python",
"python_3.x"
] | stackoverflow_0035546627_atom_editor_path_python_python_3.x.txt |
Q:
Hexadecimal to decimal array each element 1 byte long
I have the following hexadecimal string:
7609a2fed47be9131ea1b803afc517b8
I want to convert it to a hexadecimal array, each element 1 byte long:
[76 09 a2 fe d4 7b e9 13 1e a1 b8 03 af c5 17 b8]
Then I need to convert it to a decimal array.
I tried converting it to binary and then to hexadecimal again, but it did not work.
A:
To convert a hexadecimal string to a list of hexadecimal values, each one byte long, you can use the binascii.unhexlify() method from the binascii module. This method takes a hexadecimal string as its input, and returns a bytes object containing the corresponding binary data.
You can then use the bytearray() constructor to convert the bytes object to a bytearray, which is similar to a list, but allows you to manipulate the individual bytes.
Here is an example of how you could use these methods to convert the hexadecimal string you provided:
import binascii
# Define the hexadecimal string
hex_string = '7609a2fed47be9131ea1b803afc517b8'
# Use the unhexlify() method to convert the hexadecimal string to a bytes object
bytes_obj = binascii.unhexlify(hex_string)
# Use the bytearray() constructor to convert the bytes object to a bytearray
hex_list = bytearray(bytes_obj)
# Display the resulting bytearray
print(hex_list)
This code should produce the following output:
bytearray(b'v\t\xa2\xfe\xd4{\xe9\x13\x1e\xa1\xb8\x03\xaf\xc5\x17\xb8')
As you can see, the hexadecimal string has been converted to a bytearray, where each element is one byte long.
To convert this bytearray to a list of decimal values, you can use the list() function to convert the bytearray to a regular list, and then use the map() function with the int() function to convert each element from hexadecimal to decimal.
Here is an example of how you could do this:
# Convert the bytearray to a regular list
hex_list = list(hex_list)
# Use the map() function with the int() function to convert each element from hexadecimal to decimal
decimal_list = list(map(int, hex_list))
# Display the resulting list
print(decimal_list)
This code should produce the following output:
[118, 9, 162, 254, 212, 123, 233, 19, 30, 161, 184, 3, 175, 197, 23, 184]
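As a brief aside (my own sketch, not part of the original answer): in Python 3 you can get the same decimal list in one step with bytes.fromhex, since iterating a bytes object already yields integers:
hex_string = '7609a2fed47be9131ea1b803afc517b8'
decimal_list = list(bytes.fromhex(hex_string))
print(decimal_list)
# [118, 9, 162, 254, 212, 123, 233, 19, 30, 161, 184, 3, 175, 197, 23, 184]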
A:
Try this:
# Hexadecimal to Decimal
hex_string = '7609a2fed47be9131ea1b803afc517b8'
dec_array = [int(hex_string[i:i+2], 16) for i in range(0, len(hex_string), 2)]
print(dec_array)
# Output: [118, 9, 162, 254, 212, 123, 233, 19, 30, 161, 184, 3, 175, 197, 23, 184]
| Hexadecimal to decimal array each element 1 byte long | I have the following hexadecimal string:
7609a2fed47be9131ea1b803afc517b8
I want to convert it to hexadecimal array each element 1 byte long
[76 09 a2 fe d4 7b e9 13 1e a1 b8 03 af c5 17 b8]
then i need to convert it to decimal array.
i tried converting it to binary then to hexadecimal again, it did not work
| [
"To convert a hexadecimal string to a list of hexadecimal values, each one byte long, you can use the binascii.unhexlify() method from the binascii module. This method takes a hexadecimal string as its input, and returns a bytes object containing the corresponding binary data.\nYou can then use the bytearray() constructor to convert the bytes object to a bytearray, which is similar to a list, but allows you to manipulate the individual bytes.\nHere is an example of how you could use these methods to convert the hexadecimal string you provided:\nimport binascii\n\n# Define the hexadecimal string\nhex_string = '7609a2fed47be9131ea1b803afc517b8'\n\n# Use the unhexlify() method to convert the hexadecimal string to a bytes object\nbytes_obj = binascii.unhexlify(hex_string)\n\n# Use the bytearray() constructor to convert the bytes object to a bytearray\nhex_list = bytearray(bytes_obj)\n\n# Display the resulting bytearray\nprint(hex_list)\n\nThis code should produce the following output:\nbytearray(b'v\\t\\xa2\\xfe\\xd4{\\xe9\\x13\\x1e\\xa1\\xb8\\x03\\xaf\\xc5\\x17\\xb8')\n\nAs you can see, the hexadecimal string has been converted to a bytearray, where each element is one byte long.\nTo convert this bytearray to a list of decimal values, you can use the list() function to convert the bytearray to a regular list, and then use the map() function with the int() function to convert each element from hexadecimal to decimal.\nHere is an example of how you could do this:\n# Convert the bytearray to a regular list\nhex_list = list(hex_list)\n\n# Use the map() function with the int() function to convert each element from hexadecimal to decimal\ndecimal_list = list(map(int, hex_list))\n\n# Display the resulting list\nprint(decimal_list)\n\nThis code should produce the following output:\n[118, 9, 162, 254, 212, 123, 233, 19, 30, 161, 184, 3, 175, 197, 23, 184]\n\n",
"Try this:\n# Hexadecimal to Decimal\nhex_string = '7609a2fed47be9131ea1b803afc517b8'\ndec_array = [int(hex_string[i:i+2], 16) for i in range(0, len(hex_string), 2)]\nprint(dec_array)\n# Output: [118, 9, 162, 254, 212, 123, 233, 19, 30, 161, 184, 3, 175, 197, 23, 184]\n\n"
] | [
0,
0
] | [] | [] | [
"arrays",
"hex",
"python"
] | stackoverflow_0074643739_arrays_hex_python.txt |
Q:
How Can I call a function with values that will be executed only once when I have several parameters in pytest
I run pytest with several params:
@pytest.mark.parametrize('signature_algorithm, cipher, name', [
pytest.param(rsa_pss_rsae_sha256, AES128-GCM-SHA256, "KEY1"),
pytest.param(rsa_pss_rsae_sha384, AES128-GCM-SHA256, "KEY2"),
....
def test(signature_algorithm, cipher, cert_name, functaion1("data")):
.....
functaion1(data):
Change file in server (using data value)
I need to call functaion1(data) only once when the test starts. How can I achieve this using pytest?
I tried to add functaion1 as a parameter to the test function, but I got a syntax error when I added parentheses to functaion1 (because I want to send a certain value to function1).
A:
First of all, I recommend reading the documentation:
https://docs.pytest.org/en/6.2.x/parametrize.html
My spider sense says "KEY1K" is a project requirement. Therefore, you can either keep it in or remove it from the parametrization, and do the setup at the beginning of the test function:
@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6), ("6*9", 42)])
@pytest.mark.parametrize('signature_algorithm, cipher, expected_value', [
pytest.param(rsa_pss_rsae_sha256, AES128-GCM-SHA256, 'expected_result_1'),
pytest.param(rsa_pss_rsae_sha384, AES128-GCM-SHA256, 'expected_result_2')
])
def test(signature_algorithm, cipher, expected_value):
do_stuff("KEY1K")
# Do test stuff
undo_stuff("KEY1K")
A:
To call a function only once before the test starts you can use @pytest.fixture().
For this example we can call this a "setup" function but it can have any name you define. Add your functaion1(data) function call to the setup function.
I commented out the functaion1(data) function call in my example since it is not clear where "data" is coming from.
import pytest
class Testonething:
@pytest.fixture()
def setup(self):
print("setting up")
# functaion1(data)
def test_one_thing(self, setup):
print("This is branch1")
assert True
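Note that a plain function-scoped fixture like the one above still runs before every parametrized case. Below is a minimal sketch (with a placeholder functaion1 and simplified parameters of my own) of a module-scoped autouse fixture, which pytest runs exactly once for all the parametrized cases:
import pytest

# Hypothetical stand-in for the real functaion1 that changes a file on the server.
def functaion1(data):
    print(f"preparing server with {data}")

# scope="module" + autouse=True: pytest runs this setup once for the whole module,
# no matter how many parametrized cases are collected.
@pytest.fixture(scope="module", autouse=True)
def prepare_server():
    functaion1("data")  # runs once, before the first test in the module
    yield
    # optional teardown here runs once, after the last test in the module

@pytest.mark.parametrize("signature_algorithm, cipher, name", [
    ("rsa_pss_rsae_sha256", "AES128-GCM-SHA256", "KEY1"),
    ("rsa_pss_rsae_sha384", "AES128-GCM-SHA256", "KEY2"),
])
def test(signature_algorithm, cipher, name):
    assert name.startswith("KEY")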
| How Can I call a function with values that will be executed only once when I have several parameters in pytest | I run pytest with several params:
@pytest.mark.parametrize('signature_algorithm, cipher, name', [
pytest.param(rsa_pss_rsae_sha256, AES128-GCM-SHA256, "KEY1"),
pytest.param(rsa_pss_rsae_sha384, AES128-GCM-SHA256, "KEY2"),
....
def test(signature_algorithm, cipher, cert_name, functaion1("data")):
.....
functaion1(data):
Change file in server (using data value)
I need to call the functaion1(data) only once when test starts. How can I achieve this using pytest?
I tried to add functaion1 as parameter in test function but I got syntax error while I add parentheses to the functaion1 (because I want to send a certain value to function1).
| [
"First of all, I recommend you to read the documentation\nhttps://docs.pytest.org/en/6.2.x/parametrize.html\nMy spider sense says \"KEY1K\" is a project requirement. Therefore, you can either keep or remove it from parametrization and setup on test function begin\[email protected](\"test_input,expected\", [(\"3+5\", 8), (\"2+4\", 6), (\"6*9\", 42)])\n\n\[email protected]('signature_algorithm, cipher, expected_value', [\n pytest.param(rsa_pss_rsae_sha256, AES128-GCM-SHA256, 'expected_result_1'),\n pytest.param(rsa_pss_rsae_sha384, AES128-GCM-SHA256, 'expected_result_2')\n]):\ndef test(signature_algorithm, cipher) :\n do_stuff(\"KEY1K\")\n # Do test stuff\n undo_stuff(\"KEY1K\")\n\n\n",
"To call a function only once before the test starts you can use @pytest.fixture().\nFor this example we can call this a \"setup\" function but it can have any name you define. Add your functaion1(data) function call to the setup function.\nI commented out the functaion1(data) function call in my example since it is not clear where \"data\" is coming from.\nimport pytest\n\nclass Testonething:\n\n @pytest.fixture()\n def setup(self):\n print(\"setting up\")\n # functaion1(data)\n\n def test_one_thing(self, setup):\n print(\"This is branch1\")\n assert True\n\n"
] | [
0,
0
] | [] | [] | [
"pytest",
"python"
] | stackoverflow_0074642527_pytest_python.txt |
Q:
How to get the difference between two dictionaries in Python?
I have two dictionaries, and I need to find the difference between the two, which should give me both a key and a value.
I have searched and found some addons/packages like datadiff and dictdiff-master, but when I try to import them in Python 2.7, it says that no such modules are defined.
I used a set here:
first_dict = {}
second_dict = {}
value = set(second_dict) - set(first_dict)
print value
My output is:
>>> set(['SCD-3547', 'SCD-3456'])
I am getting only keys, and I need to also get the values.
A:
I think it's better to use the symmetric difference operation of sets to do that. Here is the link to the doc.
>>> dict1 = {1:'donkey', 2:'chicken', 3:'dog'}
>>> dict2 = {1:'donkey', 2:'chimpansee', 4:'chicken'}
>>> set1 = set(dict1.items())
>>> set2 = set(dict2.items())
>>> set1 ^ set2
{(2, 'chimpansee'), (4, 'chicken'), (2, 'chicken'), (3, 'dog')}
It is symmetric because:
>>> set2 ^ set1
{(2, 'chimpansee'), (4, 'chicken'), (2, 'chicken'), (3, 'dog')}
This is not the case when using the difference operator.
>>> set1 - set2
{(2, 'chicken'), (3, 'dog')}
>>> set2 - set1
{(2, 'chimpansee'), (4, 'chicken')}
However it may not be a good idea to convert the resulting set to a dictionary because you may lose information:
>>> dict(set1 ^ set2)
{2: 'chicken', 3: 'dog', 4: 'chicken'}
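If you do need a dict back without silently dropping one of the colliding values, a small sketch of my own (reusing the same example dicts) is to group the differing pairs by key:
from collections import defaultdict

dict1 = {1: 'donkey', 2: 'chicken', 3: 'dog'}
dict2 = {1: 'donkey', 2: 'chimpansee', 4: 'chicken'}

# Collect every differing (key, value) pair under its key so nothing is lost.
diff_by_key = defaultdict(set)
for key, value in set(dict1.items()) ^ set(dict2.items()):
    diff_by_key[key].add(value)

print(dict(diff_by_key))
# {2: {'chicken', 'chimpansee'}, 3: {'dog'}, 4: {'chicken'}} (ordering may vary)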
A:
Try the following snippet, using a dictionary comprehension:
value = { k : second_dict[k] for k in set(second_dict) - set(first_dict) }
In the above code we find the difference of the keys and then rebuild a dict taking the corresponding values.
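A quick usage sketch with made-up example data (the SCD keys below are only illustrative):
first_dict = {'SCD-1111': 'closed'}
second_dict = {'SCD-1111': 'closed', 'SCD-3456': 'open', 'SCD-3547': 'open'}

value = {k: second_dict[k] for k in set(second_dict) - set(first_dict)}
print(value)
# {'SCD-3456': 'open', 'SCD-3547': 'open'} (key order may vary)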
A:
Another solution would be dictdiffer (https://github.com/inveniosoftware/dictdiffer).
import dictdiffer
a_dict = {
'a': 'foo',
'b': 'bar',
'd': 'barfoo'
}
b_dict = {
'a': 'foo',
'b': 'BAR',
'c': 'foobar'
}
for diff in list(dictdiffer.diff(a_dict, b_dict)):
print diff
A diff is a tuple with the type of change, the changed value, and the path to the entry.
('change', 'b', ('bar', 'BAR'))
('add', '', [('c', 'foobar')])
('remove', '', [('d', 'barfoo')])
A:
You were right to look at using a set; we just need to dig in a little deeper to get your method to work.
First, the example code:
test_1 = {"foo": "bar", "FOO": "BAR"}
test_2 = {"foo": "bar", "f00": "b@r"}
We can see right now that both dictionaries contain a similar key/value pair:
{"foo": "bar", ...}
Each dictionary also contains a completely different key value pair. But how do we detect the difference? Dictionaries don't support that. Instead, you'll want to use a set.
Here is how to turn each dictionary into a set we can use:
set_1 = set(test_1.items())
set_2 = set(test_2.items())
This returns a set containing a series of tuples. Each tuple represents one key/value pair from your dictionary.
Now, to find the difference between set_1 and set_2:
print set_1 - set_2
>>> {('FOO', 'BAR')}
Want a dictionary back? Easy, just:
dict(set_1 - set_2)
>>> {'FOO': 'BAR'}
A:
You can use DeepDiff:
pip install deepdiff
Among other things, it lets you recursively calculate the difference of dictionaries, iterables, strings and other objects:
>>> from deepdiff import DeepDiff
>>> d1 = {1:1, 2:2, 3:3, "foo":4}
>>> d2 = {1:1, 2:4, 3:3, "bar":5, 6:6}
>>> DeepDiff(d1, d2)
{'dictionary_item_added': [root['bar'], root[6]],
'dictionary_item_removed': [root['foo']],
'values_changed': {'root[2]': {'new_value': 4, 'old_value': 2}}}
It lets you see what changed (even types), what was added and what was removed. It also lets you do many other things like ignoring duplicates and ignoring paths (defined by regex).
A:
A solution is to use the unittest module:
from unittest import TestCase
TestCase().assertDictEqual(expected_dict, actual_dict)
Obtained from How can you test that two dictionaries are equal with pytest in python
A:
This function gives you all the diffs (and what stayed the same) based on the dictionary keys only. It also highlights some nice Dict comprehension, Set operations and python 3.6 type annotations :)
from typing import Dict, Any, Tuple
def get_dict_diffs(a: Dict[str, Any], b: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any], Dict[str, Any], Dict[str, Any]]:
added_to_b_dict: Dict[str, Any] = {k: b[k] for k in set(b) - set(a)}
removed_from_a_dict: Dict[str, Any] = {k: a[k] for k in set(a) - set(b)}
common_dict_a: Dict[str, Any] = {k: a[k] for k in set(a) & set(b)}
common_dict_b: Dict[str, Any] = {k: b[k] for k in set(a) & set(b)}
return added_to_b_dict, removed_from_a_dict, common_dict_a, common_dict_b
If you want to compare the dictionary values:
values_in_b_not_a_dict = {k : b[k] for k, _ in set(b.items()) - set(a.items())}
A:
A function using the symmetric difference set operator, as mentioned in other answers, which preserves the origins of the values:
def diff_dicts(a, b, missing=KeyError):
"""
Find keys and values which differ from `a` to `b` as a dict.
If a value differs from `a` to `b` then the value in the returned dict will
be: `(a_value, b_value)`. If either is missing then the token from
`missing` will be used instead.
:param a: The from dict
:param b: The to dict
:param missing: A token used to indicate the dict did not include this key
:return: A dict of keys to tuples with the matching value from a and b
"""
return {
key: (a.get(key, missing), b.get(key, missing))
for key in dict(
set(a.items()) ^ set(b.items())
).keys()
}
Example
print(diff_dicts({'a': 1, 'b': 1}, {'b': 2, 'c': 2}))
# {'c': (<class 'KeyError'>, 2), 'a': (1, <class 'KeyError'>), 'b': (1, 2)}
How this works
We use the symmetric difference set operator on the tuples generated from taking items. This generates a set of distinct (key, value) tuples from the two dicts.
We then make a new dict from that to collapse the keys together and iterate over these. These are the only keys that have changed from one dict to the next.
We then compose a new dict using these keys with a tuple of the values from each dict substituting in our missing token when the key isn't present.
A:
Not sure this is what the OP asked for, but this is what I was looking for when I came across this question - specifically, how to show key by key the difference between two dicts:
Pitfall: when one dict has a missing key, and the second has it with a None value, the function would assume they are similar
This is not optimized at all - suitable for small dicts
def diff_dicts(a, b, drop_similar=True):
res = a.copy()
for k in res:
if k not in b:
res[k] = (res[k], None)
for k in b:
if k in res:
res[k] = (res[k], b[k])
else:
res[k] = (None, b[k])
if drop_similar:
res = {k:v for k,v in res.items() if v[0] != v[1]}
return res
print(diff_dicts({'a': 1}, {}))
print(diff_dicts({'a': 1}, {'a': 2}))
print(diff_dicts({'a': 2}, {'a': 2}))
print(diff_dicts({'a': 2}, {'b': 2}))
print(diff_dicts({'a': 2}, {'a': 2, 'b': 1}))
Output:
{'a': (1, None)}
{'a': (1, 2)}
{}
{'a': (2, None), 'b': (None, 2)}
{'b': (None, 1)}
A:
This is my own version, from combining https://stackoverflow.com/a/67263119/919692 with https://stackoverflow.com/a/48544451/919692, and now I see it is quite similar to https://stackoverflow.com/a/47433207/919692:
def dict_diff(dict_a, dict_b, show_value_diff=True):
result = {}
result['added'] = {k: dict_b[k] for k in set(dict_b) - set(dict_a)}
result['removed'] = {k: dict_a[k] for k in set(dict_a) - set(dict_b)}
if show_value_diff:
common_keys = set(dict_a) & set(dict_b)
result['value_diffs'] = {
k:(dict_a[k], dict_b[k])
for k in common_keys
if dict_a[k] != dict_b[k]
}
return result
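A brief usage sketch for dict_diff as defined above (the example dicts are mine):
old = {'a': 1, 'b': 2, 'c': 3}
new = {'b': 2, 'c': 30, 'd': 4}

print(dict_diff(old, new))
# {'added': {'d': 4}, 'removed': {'a': 1}, 'value_diffs': {'c': (3, 30)}}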
A:
I would recommend using something already written by good developers, like pytest. It can deal with any data type, not only dicts. And, BTW, pytest is very good at testing.
from _pytest.assertion.util import _compare_eq_any
print('\n'.join(_compare_eq_any({'a': 'b'}, {'aa': 'vv'}, verbose=3)))
Output is:
Left contains 1 more item:
{'a': 'b'}
Right contains 1 more item:
{'aa': 'vv'}
Full diff:
- {'aa': 'vv'}
? - ^^
+ {'a': 'b'}
? ^
If you don't like using private functions (started with _), just have a look at the source code and copy/paste the function to your code.
P.S.: Tested with pytest==6.2.4
A:
What about this? Not as pretty but explicit.
orig_dict = {'a' : 1, 'b' : 2}
new_dict = {'a' : 2, 'v' : 'hello', 'b' : 2}
updates = {}
for k2, v2 in new_dict.items():
if k2 in orig_dict:
if v2 != orig_dict[k2]:
updates.update({k2 : v2})
else:
updates.update({k2 : v2})
#test it
#value of 'a' was changed
#'v' is a completely new entry
assert all(k in updates for k in ['a', 'v'])
A:
def flatten_it(d):
if isinstance(d, list) or isinstance(d, tuple):
return tuple([flatten_it(item) for item in d])
elif isinstance(d, dict):
return tuple([(flatten_it(k), flatten_it(v)) for k, v in sorted(d.items())])
else:
return d
dict1 = {'a': 1, 'b': 2, 'c': 3}
dict2 = {'a': 1, 'b': 1}
print set(flatten_it(dict1)) - set(flatten_it(dict2)) # set([('b', 2), ('c', 3)])
# or
print set(flatten_it(dict2)) - set(flatten_it(dict1)) # set([('b', 1)])
A:
Old question, but thought I'd share my solution anyway. Pretty simple.
dicta_set = set(dicta.items()) # creates a set of tuples (k/v pairs)
dictb_set = set(dictb.items())
setdiff = dictb_set.difference(dicta_set) # any set method you want for comparisons
for k, v in setdiff: # unpack the tuples for processing
print(f"k/v differences = {k}: {v}")
This code creates two sets of tuples representing the k/v pairs. It then uses a set method of your choosing to compare the tuples. Lastly, it unpacks the tuples (k/v pairs) for processing.
A:
This will return a new dict (only changed data).
def get_difference(obj_1: dict, obj_2: dict) -> dict:
    result = {}

    for key in obj_1.keys():
        value = obj_1[key]

        if isinstance(value, dict):
            difference = get_difference(value, obj_2.get(key, {}))

            if difference:
                result[key] = difference

        elif value != obj_2.get(key):
            result[key] = obj_2.get(key, None)

    return result
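A short usage sketch for get_difference (example dicts are mine), showing that it recurses into nested dicts and returns only the changed data:
a = {'x': 1, 'nested': {'y': 2, 'z': 3}}
b = {'x': 1, 'nested': {'y': 20, 'z': 3}}

print(get_difference(a, b))
# {'nested': {'y': 20}}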
A:
Here is a variation that lets you update dict1 values if you know the values in dict2 are right.
Consider:
dict1.update((k, dict2.get(k)) for k, v in dict1.items())
A:
For one side comparison you can use dict comprehension:
dict1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4}
dict2 = {'a': 'OMG', 'b': 2, 'c': 3, 'd': 4}
data = {a:dict1[a] for a in dict1 if dict1[a] != dict2[a]}
output: {'a': 1}
A:
sharedmLst = set(a_dic.items()).intersection(b_dic.items())
diff_from_b = set(a_dic.items()) - sharedmLst
diff_from_a = set(b_dic.items()) - sharedmLst
print("Among the items in a_dic, the item different from b_dic",diff_from_b)
print("Among the items in b_dic, the item different from a_dic",diff_from_a)
Result :
Among the items in a_dic, the item different from b_dic {('b', 2)}
Among the items in b_dic, the item different from a_dic {('b', 20)}
| How to get the difference between two dictionaries in Python? | I have two dictionaries, and I need to find the difference between the two, which should give me both a key and a value.
I have searched and found some addons/packages like datadiff and dictdiff-master, but when I try to import them in Python 2.7, it says that no such modules are defined.
I used a set here:
first_dict = {}
second_dict = {}
value = set(second_dict) - set(first_dict)
print value
My output is:
>>> set(['SCD-3547', 'SCD-3456'])
I am getting only keys, and I need to also get the values.
| [
"I think it's better to use the symmetric difference operation of sets to do that Here is the link to the doc.\n>>> dict1 = {1:'donkey', 2:'chicken', 3:'dog'}\n>>> dict2 = {1:'donkey', 2:'chimpansee', 4:'chicken'}\n>>> set1 = set(dict1.items())\n>>> set2 = set(dict2.items())\n>>> set1 ^ set2\n{(2, 'chimpansee'), (4, 'chicken'), (2, 'chicken'), (3, 'dog')}\n\nIt is symmetric because:\n>>> set2 ^ set1\n{(2, 'chimpansee'), (4, 'chicken'), (2, 'chicken'), (3, 'dog')}\n\nThis is not the case when using the difference operator.\n>>> set1 - set2\n{(2, 'chicken'), (3, 'dog')}\n>>> set2 - set1\n{(2, 'chimpansee'), (4, 'chicken')}\n\nHowever it may not be a good idea to convert the resulting set to a dictionary because you may lose information:\n>>> dict(set1 ^ set2)\n{2: 'chicken', 3: 'dog', 4: 'chicken'}\n\n",
"Try the following snippet, using a dictionary comprehension:\nvalue = { k : second_dict[k] for k in set(second_dict) - set(first_dict) }\n\nIn the above code we find the difference of the keys and then rebuild a dict taking the corresponding values.\n",
"Another solution would be dictdiffer (https://github.com/inveniosoftware/dictdiffer).\nimport dictdiffer \n\na_dict = { \n 'a': 'foo',\n 'b': 'bar',\n 'd': 'barfoo'\n} \n\nb_dict = { \n 'a': 'foo', \n 'b': 'BAR',\n 'c': 'foobar'\n} \n\nfor diff in list(dictdiffer.diff(a_dict, b_dict)): \n print diff\n\nA diff is a tuple with the type of change, the changed value, and the path to the entry.\n('change', 'b', ('bar', 'BAR'))\n('add', '', [('c', 'foobar')])\n('remove', '', [('d', 'barfoo')])\n\n",
"You were right to look at using a set, we just need to dig in a little deeper to get your method to work.\nFirst, the example code:\ntest_1 = {\"foo\": \"bar\", \"FOO\": \"BAR\"}\ntest_2 = {\"foo\": \"bar\", \"f00\": \"b@r\"}\n\nWe can see right now that both dictionaries contain a similar key/value pair:\n{\"foo\": \"bar\", ...}\n\nEach dictionary also contains a completely different key value pair. But how do we detect the difference? Dictionaries don't support that. Instead, you'll want to use a set.\nHere is how to turn each dictionary into a set we can use:\nset_1 = set(test_1.items())\nset_2 = set(test_2.items())\n\nThis returns a set containing a series of tuples. Each tuple represents one key/value pair from your dictionary.\nNow, to find the difference between set_1 and set_2:\nprint set_1 - set_2\n>>> {('FOO', 'BAR')}\n\nWant a dictionary back? Easy, just:\ndict(set_1 - set_2)\n>>> {'FOO': 'BAR'}\n\n",
"You can use DeepDiff:\npip install deepdiff\n\nAmong other things, it lets you recursively calculate the difference of dictionaries, iterables, strings and other objects:\n>>> from deepdiff import DeepDiff\n\n>>> d1 = {1:1, 2:2, 3:3, \"foo\":4}\n>>> d2 = {1:1, 2:4, 3:3, \"bar\":5, 6:6}\n>>> DeepDiff(d1, d2)\n{'dictionary_item_added': [root['bar'], root[6]],\n 'dictionary_item_removed': [root['foo']],\n 'values_changed': {'root[2]': {'new_value': 4, 'old_value': 2}}}\n\nIt lets you see what changed (even types), what was added and what was removed. It also lets you do many other things like ignoring duplicates and ignoring paths (defined by regex).\n",
"A solution is to use the unittest module:\nfrom unittest import TestCase\nTestCase().assertDictEqual(expected_dict, actual_dict)\n\nObtained from How can you test that two dictionaries are equal with pytest in python\n",
"This function gives you all the diffs (and what stayed the same) based on the dictionary keys only. It also highlights some nice Dict comprehension, Set operations and python 3.6 type annotations :)\nfrom typing import Dict, Any, Tuple\ndef get_dict_diffs(a: Dict[str, Any], b: Dict[str, Any]) -> Tuple[Dict[str, Any], Dict[str, Any], Dict[str, Any], Dict[str, Any]]:\n\n added_to_b_dict: Dict[str, Any] = {k: b[k] for k in set(b) - set(a)}\n removed_from_a_dict: Dict[str, Any] = {k: a[k] for k in set(a) - set(b)}\n common_dict_a: Dict[str, Any] = {k: a[k] for k in set(a) & set(b)}\n common_dict_b: Dict[str, Any] = {k: b[k] for k in set(a) & set(b)}\n return added_to_b_dict, removed_from_a_dict, common_dict_a, common_dict_b\n\nIf you want to compare the dictionary values:\nvalues_in_b_not_a_dict = {k : b[k] for k, _ in set(b.items()) - set(a.items())}\n\n",
"A function using the symmetric difference set operator, as mentioned in other answers, which preserves the origins of the values:\ndef diff_dicts(a, b, missing=KeyError):\n \"\"\"\n Find keys and values which differ from `a` to `b` as a dict.\n\n If a value differs from `a` to `b` then the value in the returned dict will\n be: `(a_value, b_value)`. If either is missing then the token from \n `missing` will be used instead.\n\n :param a: The from dict\n :param b: The to dict\n :param missing: A token used to indicate the dict did not include this key\n :return: A dict of keys to tuples with the matching value from a and b\n \"\"\"\n return {\n key: (a.get(key, missing), b.get(key, missing))\n for key in dict(\n set(a.items()) ^ set(b.items())\n ).keys()\n }\n\nExample\nprint(diff_dicts({'a': 1, 'b': 1}, {'b': 2, 'c': 2}))\n\n# {'c': (<class 'KeyError'>, 2), 'a': (1, <class 'KeyError'>), 'b': (1, 2)}\n\nHow this works\nWe use the symmetric difference set operator on the tuples generated from taking items. This generates a set of distinct (key, value) tuples from the two dicts.\nWe then make a new dict from that to collapse the keys together and iterate over these. These are the only keys that have changed from one dict to the next.\nWe then compose a new dict using these keys with a tuple of the values from each dict substituting in our missing token when the key isn't present.\n",
"Not sure this is what the OP asked for, but this is what I was looking for when I came across this question - specifically, how to show key by key the difference between two dicts:\nPitfall: when one dict has a missing key, and the second has it with a None value, the function would assume they are similar\nThis is not optimized at all - suitable for small dicts\ndef diff_dicts(a, b, drop_similar=True):\n res = a.copy()\n\n for k in res:\n if k not in b:\n res[k] = (res[k], None)\n\n for k in b:\n if k in res:\n res[k] = (res[k], b[k])\n else:\n res[k] = (None, b[k])\n\n if drop_similar:\n res = {k:v for k,v in res.items() if v[0] != v[1]}\n\n return res\n\n\nprint(diff_dicts({'a': 1}, {}))\nprint(diff_dicts({'a': 1}, {'a': 2}))\nprint(diff_dicts({'a': 2}, {'a': 2}))\nprint(diff_dicts({'a': 2}, {'b': 2}))\nprint(diff_dicts({'a': 2}, {'a': 2, 'b': 1}))\n\nOutput:\n{'a': (1, None)}\n{'a': (1, 2)}\n{}\n{'a': (2, None), 'b': (None, 2)}\n{'b': (None, 1)}\n\n",
"This is my own version, from combining https://stackoverflow.com/a/67263119/919692 with https://stackoverflow.com/a/48544451/919692, and now I see it is quite similar to https://stackoverflow.com/a/47433207/919692:\ndef dict_diff(dict_a, dict_b, show_value_diff=True):\n result = {}\n result['added'] = {k: dict_b[k] for k in set(dict_b) - set(dict_a)}\n result['removed'] = {k: dict_a[k] for k in set(dict_a) - set(dict_b)}\n if show_value_diff:\n common_keys = set(dict_a) & set(dict_b)\n result['value_diffs'] = {\n k:(dict_a[k], dict_b[k])\n for k in common_keys\n if dict_a[k] != dict_b[k]\n }\n return result\n\n",
"I would recommend using something already written by good developers. Like pytest. It has a deal with any data type, not only dicts. And, BTW, pytest is very good at testing.\nfrom _pytest.assertion.util import _compare_eq_any\n\nprint('\\n'.join(_compare_eq_any({'a': 'b'}, {'aa': 'vv'}, verbose=3)))\n\nOutput is:\nLeft contains 1 more item:\n{'a': 'b'}\nRight contains 1 more item:\n{'aa': 'vv'}\nFull diff:\n- {'aa': 'vv'}\n? - ^^\n+ {'a': 'b'}\n? ^\n\nIf you don't like using private functions (started with _), just have a look at the source code and copy/paste the function to your code.\nP.S.: Tested with pytest==6.2.4\n",
"What about this? Not as pretty but explicit.\norig_dict = {'a' : 1, 'b' : 2}\nnew_dict = {'a' : 2, 'v' : 'hello', 'b' : 2}\n\nupdates = {}\nfor k2, v2 in new_dict.items():\n if k2 in orig_dict: \n if v2 != orig_dict[k2]:\n updates.update({k2 : v2})\n else:\n updates.update({k2 : v2})\n\n#test it\n#value of 'a' was changed\n#'v' is a completely new entry\nassert all(k in updates for k in ['a', 'v'])\n\n",
"def flatten_it(d):\n if isinstance(d, list) or isinstance(d, tuple):\n return tuple([flatten_it(item) for item in d])\n elif isinstance(d, dict):\n return tuple([(flatten_it(k), flatten_it(v)) for k, v in sorted(d.items())])\n else:\n return d\n\ndict1 = {'a': 1, 'b': 2, 'c': 3}\ndict2 = {'a': 1, 'b': 1}\n\nprint set(flatten_it(dict1)) - set(flatten_it(dict2)) # set([('b', 2), ('c', 3)])\n# or \nprint set(flatten_it(dict2)) - set(flatten_it(dict1)) # set([('b', 1)])\n\n",
"Old question, but thought I'd share my solution anyway. Pretty simple.\ndicta_set = set(dicta.items()) # creates a set of tuples (k/v pairs)\ndictb_set = set(dictb.items())\nsetdiff = dictb_set.difference(dicta_set) # any set method you want for comparisons\nfor k, v in setdiff: # unpack the tuples for processing\n print(f\"k/v differences = {k}: {v}\")\n\nThis code creates two sets of tuples representing the k/v pairs. It then uses a set method of your choosing to compare the tuples. Lastly, it unpacks the tuples (k/v pairs) for processing.\n",
"This will return a new dict (only changed data).\ndef get_difference(obj_1: dict, obj_2: dict) -> dict:\nresult = {}\n\nfor key in obj_1.keys():\n value = obj_1[key]\n\n if isinstance(value, dict):\n difference = get_difference(value, obj_2.get(key, {}))\n\n if difference:\n result[key] = difference\n\n elif value != obj_2.get(key):\n result[key] = obj_2.get(key, None)\n\nreturn result\n\n",
"Here is a variation that lets you update dict1 values if you know the values in dict2 are right.\nConsider:\ndict1.update((k, dict2.get(k)) for k, v in dict1.items())\n\n",
"For one side comparison you can use dict comprehension:\ndict1 = {'a': 1, 'b': 2, 'c': 3, 'd': 4}\ndict2 = {'a': OMG, 'b': 2, 'c': 3, 'd': 4}\n\ndata = {a:dict1[a] for a in dict1 if dict1[a] != dict2[a]}\n\noutput: {'a': 1}\n",
"sharedmLst = set(a_dic.items()).intersection(b_dic.items())\ndiff_from_b = set(a_dic.items()) - sharedmLst\ndiff_from_a = set(b_dic.items()) - sharedmLst\n\nprint(\"Among the items in a_dic, the item different from b_dic\",diff_from_b)\nprint(\"Among the items in b_dic, the item different from a_dic\",diff_from_a)\n\n\nResult :\nAmong the items in a_dic, the item different from b_dic {('b', 2)}\nAmong the items in b_dic, the item different from a_dic {('b', 20)}\n\n"
] | [
184,
108,
67,
12,
12,
10,
8,
8,
7,
6,
6,
5,
5,
3,
1,
0,
0,
0
] | [] | [] | [
"dictionary",
"python"
] | stackoverflow_0032815640_dictionary_python.txt |
Q:
Python linking to wrong library folder - sndfile library not found
I get the following error when trying to import the librosa library into my python project and running it in the global python environment:
Traceback (most recent call last): File
"/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/soundfile.py",
line 142, in
raise OSError('sndfile library not found') OSError: sndfile library not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File
"Bloompipe/Synthesis_Module/bloompipe_synthesis/testSynthesis.py",
line 6, in
from LSD.lucidsonicdreams import LucidSonicDream File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/init.py",
line 1, in
from .main import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/main.py",
line 15, in
from .AudioAnalyse import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/AudioAnalyse.py",
line 3, in
import librosa.display File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/librosa/init.py",
line 209, in
from . import core File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/librosa/core/init.py",
line 6, in
from .audio import * # pylint: disable=wildcard-import File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/librosa/core/audio.py",
line 8, in
import soundfile as sf File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/soundfile.py",
line 162, in
_snd = _ffi.dlopen(_os.path.join( OSError: cannot load library '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib':
dlopen(/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib,
0x0002): tried:
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib'
(no such file)
Process finished with exit code 1
I installed the libsndfile library with homebrew and also for a virtual conda environment. When trying to run the program in the conda environment it produces the following error:
Traceback (most recent call last): File
".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/soundfile.py",
line 143, in
_snd = _ffi.dlopen(_libname) OSError: cannot load library '.conda/envs/bloompipe_synthesis/bin/../lib/libsndfile.dylib':
dlopen(.conda/envs/bloompipe_synthesis/bin/../lib/libsndfile.dylib,
0x0002): Library not loaded: @rpath/libvorbis.0.4.9.dylib Referenced
from:
.conda/envs/bloompipe_synthesis/lib/libsndfile.1.0.31.dylib
Reason: tried:
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/../../libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/bin/../lib/libvorbis.0.4.9.dylib'
(no such file), '/usr/local/lib/libvorbis.0.4.9.dylib' (no such file),
'/usr/lib/libvorbis.0.4.9.dylib' (no such file)
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File
"Bloompipe/Synthesis_Module/bloompipe_synthesis/testSynthesis.py",
line 6, in
from LSD.lucidsonicdreams import LucidSonicDream File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/init.py",
line 1, in
from .main import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/main.py",
line 15, in
from .AudioAnalyse import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/AudioAnalyse.py",
line 3, in
import librosa.display File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/librosa/init.py",
line 209, in
from . import core File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/librosa/core/init.py",
line 6, in
from .audio import * # pylint: disable=wildcard-import File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/librosa/core/audio.py",
line 8, in
import soundfile as sf File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/soundfile.py",
line 162, in
_snd = _ffi.dlopen(_os.path.join( OSError: cannot load library '.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib':
dlopen(.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib,
0x0002): tried:
'.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib'
(no such file)
Process finished with exit code 1
The thing is that in both cases it is looking for the .dylib files in the wrong directories. My homebrew installation is in /opt/homebrew/lib and has the files libsndfile.dylib and libsndfile.1.dylib in it and also the libvorbis.dylib file. When trying to run on the global python environment it is looking for those files in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/ though.
My conda installation is in /opt/anaconda3/lib and has the files libsndfile.dylib, libsndfile.1.0.31.dylib and libsndfile.1.dylib in it and also the libvorbis.dylib and libvorbis.0.4.9.dylib file. When trying to run on the conda python environment it is looking for those files in .conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/.
In both cases, when looking in those site-packages directories, the _soundfile_data folder doesn't exist, even when showing hidden files. I don't know why it doesn't exist.
I tried executing:
export CPATH=/opt/homebrew/include
export LIBRARY_PATH=/opt/homebrew/lib
export PYTHONPATH=/opt/homebrew/lib
to include the paths in the Python path when running.
Then I printed the path variables with import sys and print(sys.path), this was the output for my global python:
['Bloompipe/Synthesis_Module/bloompipe_synthesis',
'Bloompipe/Synthesis_Module/bloompipe_synthesis',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python39.zip',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages',
'opt/homebrew/lib']
And for the conda environment I tried:
conda develop .conda/envs/bloompipe_synthesis/lib
conda develop /opt/homebrew/lib
conda develop /opt/anaconda3/lib
Then the sys.path output is:
['Bloompipe/Synthesis_Module/bloompipe_synthesis',
'.conda/envs/bloompipe_synthesis/lib/python39.zip',
'.conda/envs/bloompipe_synthesis/lib/python3.9',
'.conda/envs/bloompipe_synthesis/lib/python3.9/lib-dynload',
'.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages',
'.conda/envs/bloompipe_synthesis/lib',
'/opt/homebrew/lib',
'/opt/anaconda3/lib']
Weirdly, python is still not looking in those directories when executing the librosa import.
Finally, I tried adding the path to the homebrew installation manually by putting sys.path.append("/opt/homebrew/lib") in the beginning of the python file. It still produces the exact same errors.
So my question is, why does the _soundfile_data directory not exist in my site-packages folders for the global python and the conda environment and why doesn't it include the .dylib files for libsndfile?
Secondly, why does:
export LIBRARY_PATH=/opt/homebrew/lib
export PYTHONPATH=/opt/homebrew/lib
not make those paths appear when printing the sys.path contents?
Thirdly, why does python not find the libsndfile.dylib files with the conda environment, even though I added the homebrew and the conda installation of libsndfile to the sys path with the conda develop command?
My python3.9 is installed in /usr/local/bin/python3.9 and my conda python3.9 environment is installed in /.conda/envs/bloompipe_synthesis/bin/python
I'm on a new mac with Mac OS Monterey.
Any help is greatly appreciated!
A:
As far as I know, lucidsonicdreams only works with Python 3.6 and 3.7, although I didn't have success on 3.6. I had to create a virtual environment through conda and run the code through a Jupyter notebook: conda install tensorflow==1.15 (it will not work with higher versions) and python==3.7, then pip install lucidsonicdreams in your new Python 3.7 environment. Make sure module versions line up with your Nvidia CUDA drivers or lucidsonicdreams won't work.
| Python linking to wrong library folder - sndfile library not found | I get the following error when trying to import the librosa library into my python project and running it in the global python environment:
Traceback (most recent call last): File
"/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/soundfile.py",
line 142, in
raise OSError('sndfile library not found') OSError: sndfile library not found
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File
"Bloompipe/Synthesis_Module/bloompipe_synthesis/testSynthesis.py",
line 6, in
from LSD.lucidsonicdreams import LucidSonicDream File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/init.py",
line 1, in
from .main import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/main.py",
line 15, in
from .AudioAnalyse import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/AudioAnalyse.py",
line 3, in
import librosa.display File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/librosa/init.py",
line 209, in
from . import core File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/librosa/core/init.py",
line 6, in
from .audio import * # pylint: disable=wildcard-import File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/librosa/core/audio.py",
line 8, in
import soundfile as sf File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/soundfile.py",
line 162, in
_snd = _ffi.dlopen(_os.path.join( OSError: cannot load library '/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib':
dlopen(/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib,
0x0002): tried:
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib'
(no such file)
Process finished with exit code 1
I installed the libsndfile library with homebrew and also for a virtual conda environment. When trying to run the program in the conda environment it produces the following error:
Traceback (most recent call last): File
".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/soundfile.py",
line 143, in
_snd = _ffi.dlopen(_libname) OSError: cannot load library '.conda/envs/bloompipe_synthesis/bin/../lib/libsndfile.dylib':
dlopen(.conda/envs/bloompipe_synthesis/bin/../lib/libsndfile.dylib,
0x0002): Library not loaded: @rpath/libvorbis.0.4.9.dylib Referenced
from:
.conda/envs/bloompipe_synthesis/lib/libsndfile.1.0.31.dylib
Reason: tried:
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/../../libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/lib/libvorbis.0.4.9.dylib'
(no such file),
'.conda/envs/bloompipe_synthesis/bin/../lib/libvorbis.0.4.9.dylib'
(no such file), '/usr/local/lib/libvorbis.0.4.9.dylib' (no such file),
'/usr/lib/libvorbis.0.4.9.dylib' (no such file)
During handling of the above exception, another exception occurred:
Traceback (most recent call last): File
"Bloompipe/Synthesis_Module/bloompipe_synthesis/testSynthesis.py",
line 6, in
from LSD.lucidsonicdreams import LucidSonicDream File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/init.py",
line 1, in
from .main import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/main.py",
line 15, in
from .AudioAnalyse import * File "Bloompipe/Synthesis_Module/bloompipe_synthesis/LSD/lucidsonicdreams/AudioAnalyse.py",
line 3, in
import librosa.display File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/librosa/init.py",
line 209, in
from . import core File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/librosa/core/init.py",
line 6, in
from .audio import * # pylint: disable=wildcard-import File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/librosa/core/audio.py",
line 8, in
import soundfile as sf File ".conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/soundfile.py",
line 162, in
_snd = _ffi.dlopen(_os.path.join( OSError: cannot load library '.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib':
dlopen(.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib,
0x0002): tried:
'.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/libsndfile.dylib'
(no such file)
Process finished with exit code 1
The thing is that in both cases it is looking for the .dylib files in the wrong directories. My homebrew installation is in /opt/homebrew/lib and has the files libsndfile.dylib and libsndfile.1.dylib in it and also the libvorbis.dylib file. When trying to run on the global python environment it is looking for those files in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/_soundfile_data/ though.
My conda installation is in /opt/anaconda3/lib and has the files libsndfile.dylib, libsndfile.1.0.31.dylib and libsndfile.1.dylib in it and also the libvorbis.dylib and libvorbis.0.4.9.dylib file. When trying to run on the conda python environment it is looking for those files in .conda/envs/bloompipe_synthesis/lib/python3.9/site-packages/_soundfile_data/.
In both cases when looking in those site-packages directories, the _soundfile_data folder doesn't exist even when activating the hidden files. I don't know why that doesn't exist.
I tried executing:
export CPATH=/opt/homebrew/include
export LIBRARY_PATH=/opt/homebrew/lib
export PYTHONPATH=/opt/homebrew/lib
To include the paths into the python path when running
Then I printed the path variables with import sys and print(sys.path), this was the output for my global python:
['Bloompipe/Synthesis_Module/bloompipe_synthesis',
'Bloompipe/Synthesis_Module/bloompipe_synthesis',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python39.zip',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/lib-dynload',
'/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages',
'opt/homebrew/lib']
And for the conda environment I tried:
conda develop .conda/envs/bloompipe_synthesis/lib
conda develop /opt/homebrew/lib
conda develop /opt/anaconda3/lib
Then the sys.path output is:
['Bloompipe/Synthesis_Module/bloompipe_synthesis',
'.conda/envs/bloompipe_synthesis/lib/python39.zip',
'.conda/envs/bloompipe_synthesis/lib/python3.9',
'.conda/envs/bloompipe_synthesis/lib/python3.9/lib-dynload',
'.conda/envs/bloompipe_synthesis/lib/python3.9/site-packages',
'.conda/envs/bloompipe_synthesis/lib',
'/opt/homebrew/lib',
'/opt/anaconda3/lib']
Weirdly, python is still not looking in those directories when executing the librosa import.
Finally, I tried adding the path to the homebrew installation manually by putting sys.path.append("/opt/homebrew/lib") in the beginning of the python file. It still produces the exact same errors.
So my question is, why does the _soundfile_data directory not exist in my site-packages folders for the global python and the conda environment and why doesn't it include the .dylib files for libsndfile?
Secondly, why does:
export LIBRARY_PATH=/opt/homebrew/lib
export PYTHONPATH=/opt/homebrew/lib
not do that those paths appear when printing the sys.path content?
Thirdly, why does python not find the libsndfile.dylib files with the conda environment, even though I added the homebrew and the conda installation of libsndfile to the sys path with the conda develop command?
My python3.9 is installed in /usr/local/bin/python3.9 and my conda python3.9 environment is installed in /.conda/envs/bloompipe_synthesis/bin/python
I'm on a new mac with Mac OS Monterey.
Any help is greatly appreciated!
| [
"As far as I know it only works with python 3.6 and 3.7 (lucidsonicdreams), although I didn't have success on 3.6. I had to create a virtual environment through conda and run code through Jupyter notebook. conda install tensorflow==1.15 (will not work with higher versions), python==3.7, pip install lucidsonicdreams in your new python 3.7 environment. Make sure module versions line up with your Nvidia CUDA drivers or lucidsonicdreams won't work.\n"
] | [
0
] | [] | [] | [
"anaconda",
"librosa",
"libsndfile",
"python",
"pythonpath"
] | stackoverflow_0072623930_anaconda_librosa_libsndfile_python_pythonpath.txt |
Q:
Python. Attach data to asyncio.Task
Is there a proper way to attach additional data to asyncio.create_task()? Here is an example:
import asyncio
from dataclasses import dataclass
@dataclass
class Foo:
name: str
url_to_download: str
size: int
...
async def download_file(url: str):
return await download_impl()
objs: list[Foo] = [obj1, obj2, ...]
# Question
# How to attach the Foo object to each task?
tasks = [asyncio.create_task(download_file(obj.url_to_download)) for obj in objs]
for task in asyncio.as_completed(tasks):
# Question
# How to find out which Foo obj corresponds to the downloaded data?
data = await task
process(data)
There is also a way to forward the Foo object to download_file and return it with the downloaded data, but that seems like poor design. Am I missing something, or does anyone have a better design to solve this problem?
A:
If I understood correctly, you just want a way to match the returned values from all your tasks to the instances of Foo whose url_to_download attributes you passed as arguments to said tasks.
Since all you are doing in that last loop is blocking until all tasks are completed, you may as well simply run the coroutines concurrently via asyncio.gather. The order of the individual return values in the list it returns corresponds to the order of the coroutines passed to it as arguments:
from asyncio import gather, run
from dataclasses import dataclass
@dataclass
class Foo:
url_to_download: str
...
async def download_file(url: str):
return url
async def main() -> None:
objs: list[Foo] = [Foo("foo"), Foo("bar"), Foo("baz")]
returned_values = await gather(
*(download_file(obj.url_to_download) for obj in objs)
)
print(returned_values) # ['foo', 'bar', 'baz']
if __name__ == '__main__':
run(main())
That means you can simply match the objs and returned_values via index or zip them or whatever you need to do.
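For instance, a minimal sketch of that matching step (zip is just one option; the names are the ones from the snippet above):
for obj, value in zip(objs, returned_values):
    print(obj.url_to_download, "->", value)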
As for your comment that saving the returned value in an attribute of the object itself is "poor design", I see absolutely no justification for that assessment. That would also be a perfectly valid way and arguably even cleaner. You might even define a download method on Foo for that purpose. But that is another discussion.
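For illustration only, a minimal sketch of that alternative, assuming a download method and a data attribute on Foo (both are assumptions for the sketch, not part of the question's API):
import asyncio
from dataclasses import dataclass
from typing import Optional

@dataclass
class Foo:
    url_to_download: str
    data: Optional[str] = None  # assumed attribute holding the downloaded payload

    async def download(self) -> "Foo":
        # stand-in for the real download_impl from the question
        await asyncio.sleep(0)
        self.data = f"contents of {self.url_to_download}"
        return self

async def main() -> None:
    objs = [Foo("foo"), Foo("bar")]
    done = await asyncio.gather(*(obj.download() for obj in objs))
    print([(obj.url_to_download, obj.data) for obj in done])

if __name__ == "__main__":
    asyncio.run(main())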
| Python. Attach data to asyncio.Task | Is there proper way to attach additional data to asyncio.create_task()? The example is
import asyncio
from dataclasses import dataclass
@dataclass
class Foo:
name: str
url_to_download: str
size: int
...
async def download_file(url: str):
return await download_impl()
objs: list[Foo] = [obj1, obj2, ...]
# Question
# How to attach the Foo object to the each task?
tasks = [asyncio.create_task(download_file(obj.url_to_download)) for obj in objs]
for task in asyncio.as_completed(tasks):
# Question
# How to find out which Foo obj corresponds the downloaded data?
data = await task
process(data)
There is also way to forward the Foo object to the download_file and return it with the downloaded data, but it is poor design. Do I miss something, or anyone has a better design to solve that problem?
| [
"If I understood correctly, you just want a way to match the returned values from all your tasks to the instances of Foo whose url_to_download attributes you passed as arguments to said tasks.\nSince all you are doing in that last loop is blocking until all tasks are completed, you may as well simply run the coroutines concurrently via asyncio.gather. The order of the individual return values in the list it returns corresponds to the order of the coroutines passed to it as arguments:\nfrom asyncio import gather, run\nfrom dataclasses import dataclass\n\n\n@dataclass\nclass Foo:\n url_to_download: str\n ...\n\n\nasync def download_file(url: str):\n return url\n\n\nasync def main() -> None:\n objs: list[Foo] = [Foo(\"foo\"), Foo(\"bar\"), Foo(\"baz\")]\n returned_values = await gather(\n *(download_file(obj.url_to_download) for obj in objs)\n )\n print(returned_values) # ['foo', 'bar', 'baz']\n\n\nif __name__ == '__main__':\n run(main())\n\nThat means you can simply match the objs and returned_values via index or zip them or whatever you need to do.\n\nAs for your comment that saving the returned the returned value in an attribute of the object itself is \"poor design\", I see absolutely no justification for that assessment. That would also be a perfectly valid way and arguably even cleaner. You might even define a download method on Foo for that purpose. But that is another discussion.\n"
] | [
0
] | [] | [] | [
"python",
"python_asyncio"
] | stackoverflow_0074635301_python_python_asyncio.txt |
Q:
Locating all numbers not present in multiple columns
I am having difficulties trying to locate multiple values from columns in a csv file.
So far I have tried defining the columns from which I want to extract the values as:
Assignments = (data.loc[:, ~data.columns.isin(['A', 'B','C'])])
This should take each column not named 'A', 'B' or 'C' from the csv file.
I tried running the code,
data.loc[(data[Assignments] != 20)]
but I am met with the error message: # If we have a listlike key, _check_indexing_error will raise
KeyError: 100
The wanted outcome is a list of all rows that don't contain the value 20 (I am also not sure how to check against more than one value, for example != 20, 10, 0).
Any help is much appreciated.
A:
For a single value, you can do: df[(df[:] != 20).all(axis = 1)]
For multiple values you can use numpy arrays to do elementwise boolean logic:
ar1 = np.array((df[:] != 20).all(axis = 1))
ar2 = np.array((df[:] != 30).all(axis = 1))
df[ar1 & ar2]
A:
To select rows in a Pandas DataFrame that do not contain a specific value, you can use the ~ operator to negate the DataFrame.isin() method. This method returns a Boolean mask indicating whether each element in the DataFrame is equal to one of the values you provide.
For example, to select rows in the DataFrame that do not contain the value 20, you can use the following code:
import pandas as pd
# Load the DataFrame from a CSV file
data = pd.read_csv('data.csv')
# Select the columns you want to include
assignments = data.loc[:, ~data.columns.isin(['A', 'B', 'C'])]
# Select rows that do not contain the value 20
selected = data.loc[~assignments.isin([20]).any(axis=1)]
This code first selects the columns in the DataFrame that are not named 'A', 'B', or 'C'. It then uses the isin() method on those columns to create a Boolean mask indicating which rows contain the value 20. Finally, it uses the ~ operator to negate the mask, selecting only the rows that do not contain the value 20.
To include multiple values in the isin() method, you can pass them as a list or array to the values parameter. For example, to select rows that do not contain the values 20, 10, or 0, you can use the following code:
import pandas as pd
# Load the DataFrame from a CSV file
data = pd.read_csv('data.csv')
# Select the columns you want to include
assignments = data.loc[:, ~data.columns.isin(['A', 'B', 'C'])]
# Select rows that do not contain the values 20, 10, or 0
selected = data.loc[~assignments.isin([20, 10, 0]).any(axis=1)]
| Locating all numbers not present in multiple columns | I am having difficulties trying to locate multiple values from columns in a csv file.
So far i have tried defining the columns from which i want to extract the values as,
Assignments = (data.loc[:, ~data.columns.isin(['A', 'B','C'])])
This should take each column not named 'A', 'B' of 'C' from the csv file.
I tried running the code,
data.loc[(data[Assignments] != 20)]
but i am met with the error message: # If we have a listlike key, _check_indexing_error will raise
KeyError: 100
The wanted outcome is a list of all rows that don't contain the value 20 (I am also not sure how to add more values than one like for example != 20,10,0.
Any help is much appreciated.
| [
"For a single value, you can do: df[(df[:] != 20).all(axis = 1)]\nFor multiple values you can use numpy arrays to do elementwise boolean logic:\nar1 = np.array((df[:] != 20).all(axis = 1))\nar2 = np.array((df[:] != 30).all(axis = 1))\ndf[ar1 & ar2]\n\n",
"To select rows in a Pandas DataFrame that do not contain a specific value, you can use the ~ operator to negate the DataFrame.isin() method. This method returns a Boolean mask indicating whether each element in the DataFrame is equal to one of the values you provide.\nFor example, to select rows in the DataFrame that do not contain the value 20, you can use the following code:\nimport pandas as pd\n\n# Load the DataFrame from a CSV file\ndata = pd.read_csv('data.csv')\n\n# Select the columns you want to include\nassignments = data.loc[:, ~data.columns.isin(['A', 'B', 'C'])]\n\n# Select rows that do not contain the value 20\nselected = data.loc[~data.isin([20]).any(axis=1)]\n\nThis code first selects the columns in the DataFrame that are not named 'A', 'B', or 'C'. It then uses the isin() method to create a Boolean mask indicating which rows contain the value 20. Finally, it uses the ~ operator to negate the mask, selecting only the rows that do not contain the value 20.\nTo include multiple values in the isin() method, you can pass them as a list or array to the values parameter. For example, to select rows that do not contain the values 20, 10, or 0, you can use the following code:\nimport pandas as pd\n\n# Load the DataFrame from a CSV file\ndata = pd.read_csv('data.csv')\n\n# Select the columns you want to include\nassignments = data.loc[:, ~data.columns.isin(['A', 'B', 'C'])]\n\n# Select rows that do not contain the values 20, 10, or 0\nselected = data.loc[~data.isin([20, 10, 0]).any(axis=1)]\n\n"
] | [
1,
0
] | [] | [] | [
"csv",
"pandas",
"python"
] | stackoverflow_0074642981_csv_pandas_python.txt |
Q:
How to strip a 2d array in python Numpy Array?
Suppose i have an np array like this-
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]]
I want a function fun_strip(x) . After applying this function i want the returned array to look like this:
[[ 6 7 8]
[11 12 13]]
A:
Do you want to remove 1 value on each border?
a = np.array([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
out = a[1:a.shape[0]-1, 1:a.shape[1]-1]
Generalization for N:
N = 1
a[N:a.shape[0]-N, N:a.shape[1]-N]
Output:
array([[ 6, 7, 8],
[11, 12, 13]])
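Equivalently, a small sketch of the same trim using negative indexing, so the shape does not have to be spelled out (note that N = 0 needs the guard because -0 is 0):
out = a[N:-N, N:-N] if N else a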
A:
Your specific example:
arr = np.array([[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9],
[10, 11, 12, 13, 14],
[15, 16, 17, 18, 19]])
arr[1:3, 1:4]
Strip function in general:
def strip_func(arr, r1, r2, c1, c2):
return(arr[r1:r2+1, c1:c2+1])
r1 and r2 are the beginning and end of the range of rows that you want to subset. c1 and c2 are the same but for the columns.
A:
A solution that works only for the specified use case would be:
def fun_strip(array):
return np.array([array[1][1:4],array[2][1:4]])
You need to specify better what the use case represents if you want a better, more general implementation.
| How to strip a 2d array in python Numpy Array? | Suppose i have an np array like this-
[[ 0 1 2 3 4]
[ 5 6 7 8 9]
[10 11 12 13 14]
[15 16 17 18 19]]
I want a function fun_strip(x) . After applying this function i want the returned array to look like this:
[[ 6 7 8]
[11 12 13]]
| [
"Do you want to remove 1 value on each border?\na = np.array([[ 0, 1, 2, 3, 4],\n [ 5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]])\n\nout = a[1:a.shape[0]-1, 1:a.shape[1]-1]\n\nGeneralization for N:\nN = 1\na[N:a.shape[0]-N, N:a.shape[1]-N]\n\nOutput:\narray([[ 6, 7, 8],\n [11, 12, 13]])\n\n",
"Your specific example:\narr = np.array([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9],\n [10, 11, 12, 13, 14],\n [15, 16, 17, 18, 19]])\narr[1:3, 1:4]\n\nStrip function in general:\ndef strip_func(arr, r1, r2, c1, c2):\n return(arr[r1:r2+1, c1:c2+1])\n\nr1 and r2 is the beginning and end of the range of rows that you want to subset. c1 and c2 is the same but for the columns.\n",
"A solution that works only for the specified use case wwould be:\ndef fun_strip(array):\n return np.array([array[1][1:4],array[2][1:4]])\n\nYou need to specvify better what the use case rappresent if you want a better,more general implementation\n"
] | [
1,
0,
0
] | [] | [] | [
"numpy",
"python",
"strip"
] | stackoverflow_0074643721_numpy_python_strip.txt |
Q:
Calculate by how much a row has shifted horizontally in pandas dataframe
I have a dataframe where the rows have been shifted horizontally by an unknown amount. Each and every row has shifted by a different amount as shown below:
Heading 1  Heading 2  Unnamed: 1  Unnamed: 2
NaN        34         24          NaN
5          NaN        NaN         NaN
NaN        NaN        13          77
NaN        NaN        NaN         18
In the above dataframe, there are only 2 original columns (Heading 1 and Heading 2) but due to row shift (in rows 1 and 3), extra columns (Unnamed: 1 and Unnamed: 2) have been created with the default name Unnamed: 1 and Unnamed: 2.
Now for each row, I want to calculate:
1.) The spill over. Spill over is basically the number of non-NaN values in the extra columns (the Unnamed columns). For example, in row 1 there is one non-NaN value in the extra columns (Unnamed: 1) and hence the spill over is 1. In row 2 there are no non-NaN values in the extra columns, so the spill over is 0. In row 3 there are 2 non-NaN values in the extra columns (Unnamed: 1 and Unnamed: 2), hence the spill over is 2, and in row 4 there is 1 non-NaN value in the extra columns, so the spill over is 1.
2.) The amount of NaN values in the original columns (Heading 1 and Heading 2). For example, in row 1 the amount of NaN values in the original columns is 1, in row 2 the amount of NaN values in the original columns is 0, in row 3 the amount of NaN values in the original columns is 2 and in row 4 the amount of NaN values in the original columns is 2.
So basically, for each row, I have to calculate the amount of NaN values in the original columns (Heading 1 and Heading 2) and the amount of non-NaN values in the extra columns (Unnamed: 1 and Unnamed: 2).
I can get the amount of extra columns (Unnamed:1 and so on) present in a dataframe by:
len(df.filter(regex=("Unnamed:.*")).columns.to_list())
Thank you!
A:
Updated Answer
The logic that @mozway gave was an elegant one-liner which I liked a lot, but for some reason it does not always work. Also, it does not give the non-NaN values in the extra columns.
I managed to get it working with slightly longer but relatively simple to understand logic. Here goes:
#read the excel file
df = pd.read_excel('df.xlsx')
#subset the df into original and extra df's
extra = df.filter(regex=("Unnamed:.*"))
original = df.drop(extra, axis = 1)
#ori contains a list of count of NaN values in original columns as asked
ori = original.isnull().sum(axis=1).tolist() #or to_dict() if you want a dict
ext = len(extra.columns) - extra.isnull().sum(axis=1)
#ext1 contains a list of count of non NaN values in the extra columns as asked
ext1 = ext.tolist() # or to_dict() if you want a dict
Original comment/answer
@mozway As mentioned in the comments, I am adding the code where I tried to apply your logic to only a subset of the dataframe:
extra = df.filter(regex=("Unnamed:.*"))
y = extra.isna().cummin(axis=1).sum(axis=1).clip(upper=2).tolist()
According to the dataframe, the output should be [1, 2, 0, 1] (as there is 1 NaN value in row 1, 2 in row 2, 0 in row 3 and 1 in row 4), but the above code gives the output [0, 2, 0, 1].
A:
You can use isna and cummin to identify the leading NAs, then sum to count them and clip to limit the shift to the original number of columns:
df.isna().cummin(axis=1).sum(axis=1).clip(upper=2)
Output:
0 1
1 0
2 2
3 2
dtype: int64
Intermediates:
df.isna()
Heading 1 Heading 2 Unnamed: 1 Unnamed: 2
0 True False False True
1 False False True True
2 True True False False
3 True True True False
df.isna().cummin(axis=1)
Heading 1 Heading 2 Unnamed: 1 Unnamed: 2
0 True False False False
1 False False False False
2 True True False False
3 True True True False
df.isna().cummin(axis=1).sum(axis=1)
0 1
1 0
2 2
3 3
dtype: int64
| Calculate by how much a row has shifted horizontally in pandas dataframe | I have a dataframe where the rows have been shifted horizontally by an unknown amount. Each and every row has shifted by a different amount as shown below:
Heading 1  Heading 2  Unnamed: 1  Unnamed: 2
NaN        34         24          NaN
5          NaN        NaN         NaN
NaN        NaN        13          77
NaN        NaN        NaN         18
In the above dataframe, there are only 2 original columns (Heading 1 and Heading 2) but due to row shift (in rows 1 and 3), extra columns (Unnamed: 1 and Unnamed: 2) have been created with the default name Unnamed: 1 and Unnamed: 2.
Now for each row, I want to calculate:
1.) The spill over. Spill over is basically the amount of NaN values in extra columns(Unnamed columns). For example in row 1 there is one non NaN value in extra columns (Unnamed: 1) and hence the spill over is 1. In row 2 there are no non NaN values in extra columns so the spill over is 0. In row 3 there are 2 non NaN values in extra columns(Unnamed: 1 and Unnamed: 2) hence the spill over is 2 and in row 4 there are 1 non NaN values in extra columns so the spill over is 1.
2.) The amount of NaN values in the original columns(Heading 1 and Heading 2). For example in row 1 amount of Nan values in original columns are 1, in row 2 amount of NaN values in original columns is 0, in row 3 amount of NaN values in original columns is 2 and in row 4 amount of NaN values in original columns is 2.
So basically for each row, I have to calculate the amount of Nan values in original columns(Heading 1 and Heading 2) and the amount of non NaN values in extra columns(Unnamed: 1 and Unnamed: 2).
I can get the amount of extra columns (Unnamed:1 and so on) present in a dataframe by:
len(df.filter(regex=("Unnamed:.*")).columns.to_list())
Thank you!
| [
"Updated Answer\nThe logic that @mozway gave was an elegant one liner which i liked a lot but for some reason does not work always. Also it does not give the non nan values in the extra columns.\nI managed to get it working in a slightly long but relatively simple to understand logic. Here goes:\n#read the excel file\ndf = pd.read_excel('df.xlsx')\n\n#subset the df into original and extra df's\nextra = df.filter(regex=(\"Unnamed:.*\"))\noriginal = df.drop(extra, axis = 1)\n\n#ori contains a list of count of NaN values in original columns as asked \nori = original.isnull().sum(axis=1).tolist() #or to_dict() if you want a dict\next = len(extra.columns) - extra.isnull().sum(axis=1)\n#ext1 contains a list of count of non NaN values in the extra columns as asked\next1 = ext.tolist() # or to_dict() if you want a dict\n\nOriginal comment/answer\n@mozway As mentioned in the comments I am adding your code I tried to apply the logic to only a subset of dataframe:\nextra = df.filter(regex=(\"Unnamed:.*\"))\ny = extra.isna().cummin(axis=1).sum(axis=1).clip(upper=2).tolist()\n\nAccording to the dataframe the output should be [1, 2, 0, 1] (as there are 1 nan values in row 1, 2 in row 2 0 in row 3 and 1 in row 4) but the above code is giving output [0, 2, 0, 1]\n",
"You can use isna and cummin to identify the leading NAs, then sum to count them and clip to limit the shift to the original number of columns:\ndf.isna().cummin(axis=1).sum(axis=1).clip(upper=2)\n\nOutput:\n0 1\n1 0\n2 2\n3 2\ndtype: int64\n\nIntermediates:\ndf.isna()\n\n Heading 1 Heading 2 Unnamed: 1 Unnamed: 2\n0 True False False True\n1 False False True True\n2 True True False False\n3 True True True False\n\ndf.isna().cummin(axis=1)\n\n Heading 1 Heading 2 Unnamed: 1 Unnamed: 2\n0 True False False False\n1 False False False False\n2 True True False False\n3 True True True False\n\ndf.isna().cummin(axis=1).sum(axis=1)\n\n0 1\n1 0\n2 2\n3 3\ndtype: int64\n\n"
] | [
1,
0
] | [] | [] | [
"data_cleaning",
"data_preprocessing",
"dataframe",
"pandas",
"python"
] | stackoverflow_0074641344_data_cleaning_data_preprocessing_dataframe_pandas_python.txt |
Q:
Check if string is in a pandas dataframe
I would like to see if a particular string exists in a particular column within my dataframe.
I'm getting the error
ValueError: The truth value of a Series is ambiguous. Use a.empty,
a.bool(), a.item(), a.any() or a.all().
import pandas as pd
BabyDataSet = [('Bob', 968), ('Jessica', 155), ('Mary', 77), ('John', 578), ('Mel', 973)]
a = pd.DataFrame(data=BabyDataSet, columns=['Names', 'Births'])
if a['Names'].str.contains('Mel'):
print ("Mel is there")
A:
a['Names'].str.contains('Mel') will return an indicator vector of boolean values of size len(BabyDataSet)
Therefore, you can use
mel_count=a['Names'].str.contains('Mel').sum()
if mel_count>0:
print ("There are {m} Mels".format(m=mel_count))
Or any(), if you don't care how many records match your query
if a['Names'].str.contains('Mel').any():
print ("Mel is there")
A:
You should use any()
In [98]: a['Names'].str.contains('Mel').any()
Out[98]: True
In [99]: if a['Names'].str.contains('Mel').any():
....: print("Mel is there")
....:
Mel is there
a['Names'].str.contains('Mel') gives you a series of bool values
In [100]: a['Names'].str.contains('Mel')
Out[100]:
0 False
1 False
2 False
3 False
4 True
Name: Names, dtype: bool
A:
OP meant to find out whether the string 'Mel' exists in a particular column, not contained in any string in the column. Therefore the use of contains is not needed, and is not efficient.
A simple equals-to is enough:
df = pd.DataFrame({"names": ["Melvin", "Mel", "Me", "Mel", "A.Mel"]})
mel_count = (df['names'] == 'Mel').sum()
print("There are {num} instances of 'Mel'. ".format(num=mel_count))
mel_exists = (df['names'] == 'Mel').any()
print("'Mel' exists in the dataframe.".format(num=mel_exists))
mel_exists2 = 'Mel' in df['names'].values
print("'Mel' is in the dataframe: " + str(mel_exists2))
Prints:
There are 2 instances of 'Mel'.
'Mel' exists in the dataframe.
'Mel' is in the dataframe: True
A:
I bumped into the same problem, I used:
if "Mel" in a["Names"].values:
print("Yep")
But this solution may be slower since internally pandas creates a list from the Series.
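If you need to run the membership test many times, one common workaround (a sketch, not part of the answer above) is to build a set once and query that:
names = set(a["Names"])
"Mel" in names   # True
"Zoe" in names   # False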
A:
If there is any chance that you will need to search for empty strings,
a['Names'].str.contains('')
will NOT work, as it will always return True.
Instead, use
if '' in a["Names"].values
to accurately reflect whether or not a string is in a Series, including the edge case of searching for an empty string.
A:
For case-insensitive search.
a['Names'].str.lower().str.contains('mel').any()
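An equivalent option is the case parameter of str.contains (a small sketch):
a['Names'].str.contains('mel', case=False).any()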
A:
Pandas seems to be recommending df.to_numpy since the other methods still raise a FutureWarning: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html#pandas.DataFrame.to_numpy
So, an alternative that would work in this case is:
b=a['Names']
c = b.to_numpy().tolist()
if 'Mel' in c:
print("Mel is in the dataframe column Names")
A:
import re
s = 'string'
df['Name'] = df['Name'].str.findall(s, flags = re.IGNORECASE)
#or
df = df[df['Name'].isin(['string1', 'string2'])]
A:
import pandas as pd
(data_frame.col_name=='str_name_to_check').sum()
A:
If you want to save the results then you can use this:
a['result'] = a['Names'].apply(lambda x : ','.join([item for item in str(x).split() if item.lower() in ['mel', 'etc']]))
A:
To check if a particular string exists in a column of a Pandas DataFrame, you can use the DataFrame.str.contains() method. This method returns a Boolean mask indicating which elements in the column contain the string you provide.
However, the str.contains() method cannot be used directly in an if statement, because it returns a series of Boolean values instead of a single value. To use the str.contains() method in an if statement, you need to use one of the methods that return a single value, such as any(), all(), or bool().
For example, to check if the string 'Mel' exists in the 'Names' column of the DataFrame you provided, you can use the following code:
import pandas as pd
# Define the data
BabyDataSet = [('Bob', 968), ('Jessica', 155), ('Mary', 77), ('John', 578), ('Mel', 973)]
# Create a DataFrame from the data
a = pd.DataFrame(data=BabyDataSet, columns=['Names', 'Births'])
# Use the str.contains() method with the any() method to check if the 'Names' column contains 'Mel'
if a['Names'].str.contains('Mel').any():
print("Mel is there")
This code uses the any() method to check if any of the elements in the 'Names' column contain the string 'Mel'. If at least one element contains this string, the if statement will be executed and the message "Mel is there" will be printed.
| Check if string is in a pandas dataframe | I would like to see if a particular string exists in a particular column within my dataframe.
I'm getting the error
ValueError: The truth value of a Series is ambiguous. Use a.empty,
a.bool(), a.item(), a.any() or a.all().
import pandas as pd
BabyDataSet = [('Bob', 968), ('Jessica', 155), ('Mary', 77), ('John', 578), ('Mel', 973)]
a = pd.DataFrame(data=BabyDataSet, columns=['Names', 'Births'])
if a['Names'].str.contains('Mel'):
print ("Mel is there")
| [
"a['Names'].str.contains('Mel') will return an indicator vector of boolean values of size len(BabyDataSet)\nTherefore, you can use\nmel_count=a['Names'].str.contains('Mel').sum()\nif mel_count>0:\n print (\"There are {m} Mels\".format(m=mel_count))\n\nOr any(), if you don't care how many records match your query\nif a['Names'].str.contains('Mel').any():\n print (\"Mel is there\")\n\n",
"You should use any()\nIn [98]: a['Names'].str.contains('Mel').any()\nOut[98]: True\n\nIn [99]: if a['Names'].str.contains('Mel').any():\n ....: print(\"Mel is there\")\n ....:\nMel is there\n\na['Names'].str.contains('Mel') gives you a series of bool values\nIn [100]: a['Names'].str.contains('Mel')\nOut[100]:\n0 False\n1 False\n2 False\n3 False\n4 True\nName: Names, dtype: bool\n\n",
"OP meant to find out whether the string 'Mel' exists in a particular column, not contained in any string in the column. Therefore the use of contains is not needed, and is not efficient.\nA simple equals-to is enough:\ndf = pd.DataFrame({\"names\": [\"Melvin\", \"Mel\", \"Me\", \"Mel\", \"A.Mel\"]})\n\nmel_count = (df['names'] == 'Mel').sum() \nprint(\"There are {num} instances of 'Mel'. \".format(num=mel_count)) \n \nmel_exists = (df['names'] == 'Mel').any() \nprint(\"'Mel' exists in the dataframe.\".format(num=mel_exists)) \n\nmel_exists2 = 'Mel' in df['names'].values \nprint(\"'Mel' is in the dataframe: \" + str(mel_exists2)) \n\nPrints:\nThere are 2 instances of 'Mel'. \n'Mel' exists in the dataframe.\n'Mel' is in the dataframe: True\n\n",
"I bumped into the same problem, I used:\nif \"Mel\" in a[\"Names\"].values:\n print(\"Yep\")\n\nBut this solution may be slower since internally pandas create a list from a Series.\n",
"If there is any chance that you will need to search for empty strings, \n a['Names'].str.contains('') \n\nwill NOT work, as it will always return True.\nInstead, use\n if '' in a[\"Names\"].values\n\nto accurately reflect whether or not a string is in a Series, including the edge case of searching for an empty string. \n",
"For case-insensitive search.\na['Names'].str.lower().str.contains('mel').any()\n\n",
"Pandas seem to be recommending df.to_numpy since the other methods still raise a FutureWarning: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_numpy.html#pandas.DataFrame.to_numpy\nSo, an alternative that would work int this case is:\nb=a['Names']\nc = b.to_numpy().tolist()\nif 'Mel' in c:\n print(\"Mel is in the dataframe column Names\")\n\n",
"import re\ns = 'string'\n\ndf['Name'] = df['Name'].str.findall(s, flags = re.IGNORECASE)\n\n#or\ndf['Name'] = df[df['Name'].isin(['string1', 'string2'])]\n\n",
"import pandas as pd\n\n(data_frame.col_name=='str_name_to_check').sum()\n\n",
"If you want to save the results then you can use this:\na['result'] = a['Names'].apply(lambda x : ','.join([item for item in str(x).split() if item.lower() in ['mel', 'etc']]))\n\n",
"To check if a particular string exists in a column of a Pandas DataFrame, you can use the DataFrame.str.contains() method. This method returns a Boolean mask indicating which elements in the column contain the string you provide.\nHowever, the str.contains() method cannot be used directly in an if statement, because it returns a series of Boolean values instead of a single value. To use the str.contains() method in an if statement, you need to use one of the methods that return a single value, such as any(), all(), or bool().\nFor example, to check if the string 'Mel' exists in the 'Names' column of the DataFrame you provided, you can use the following code:\nimport pandas as pd\n\n# Define the data\nBabyDataSet = [('Bob', 968), ('Jessica', 155), ('Mary', 77), ('John', 578), ('Mel', 973)]\n\n# Create a DataFrame from the data\na = pd.DataFrame(data=BabyDataSet, columns=['Names', 'Births'])\n\n# Use the str.contains() method with the any() method to check if the 'Names' column contains 'Mel'\nif a['Names'].str.contains('Mel').any():\n print(\"Mel is there\")\n\nThis code uses the any() method to check if any of the elements in the 'Names' column contain the string 'Mel'. If at least one element contains this string, the if statement will be executed and the message \"Mel is there\" will be printed.\n"
] | [
165,
36,
20,
8,
3,
3,
2,
2,
1,
0,
0
] | [
"You should check the value of your line of code like adding checking length of it.\nif(len(a['Names'].str.contains('Mel'))>0):\n print(\"Name Present\")\n\n"
] | [
-1
] | [
"pandas",
"python"
] | stackoverflow_0030944577_pandas_python.txt |
Q:
Given a list of 2-columns pandas dataframes, how can I take the median of the second columns?
I have a list of pandas dataframes, each with 2 columns. The first column represents an ID, and the second represents the values. How would I combine these dataframes so that values with common IDs are replaced with their median?
E.g
df_1 = pd.DataFrame({'#id': [1,2,3,4], 'values_1': [1,3,4,3]})
df_2 = pd.DataFrame({'#id': [1,2,3,5], 'values_2': [2,5,7,6]})
df_3 = pd.DataFrame({'#id': [1,2,4,5], 'values_3': [5,6,7,8]})
I would like the resulting new dataframe to be:
answer = pd.DataFrame({'#id': [1,2,3,4,5], 'values': [2,5,5.5,5,7]})
A:
Rename the value columns to a common name, merge all df's into one df, then group by id and calculate the median of each group.
df = pd.concat([d.set_axis(['#id', 'values'], axis=1) for d in (df_1, df_2, df_3)])
df = df.groupby('#id').agg({'values': 'median'})
'''
#id values
1 2.0
2 5.0
3 5.5
4 5.0
5 7.0
'''
Write to excel:
df.reset_index().to_excel('give_an_excel_name.xlsx',index=None)
#or
df.to_excel('give_an_excel_name.xlsx')
| Given a list of 2-columns pandas dataframes, how can I take the median of the second columns? | I have a list of pandas dataframes, each with 2-columns. The first column represents an ID, and the second represents the values. How would I combine these dataframes to where values with common IDs are replaced with its median?
E.g
df_1 = pd.DataFrame({'#id': [1,2,3,4], 'values_1': [1,3,4,3]})
df_2 = pd.DataFrame({'#id': [1,2,3,5], 'values_2': [2,5,7,6]})
df_3 = pd.DataFrame({'#id': [1,2,4,5], 'values_3': [5,6,7,8]})
I would like the resulting new dataframe to be:
answer = pd.DataFrame({'#id': [1,2,3,4,5], 'values': [2,5,5.5,5,7]})
| [
"Merge all df's into one df. Then group by id and calculate the median of each group.\ndf = pd.concat([df_1,df_2,df_3])\ndf = df.groupby('#id').agg({'values':'median'})\n'''\n#id values\n1 2.0\n2 5.0\n3 5.5\n4 5.0\n5 7.0\n\n'''\n\nWrite to excel:\ndf.reset_index().to_excel('give_an_excel_name.xlsx',index=None)\n#or\ndf.to_excel('give_an_excel_name.xlsx')\n\n"
] | [
0
] | [] | [] | [
"numpy",
"pandas",
"python"
] | stackoverflow_0074643951_numpy_pandas_python.txt |
Q:
How do I run Pygame on Pycharm
I am trying to run pygame on the PyCharm IDE, I have installed the latest version of pygame for python 3.5 and have added it to the project interpreter. I installed pygame from http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame and copied it to python35-32/Scripts/. The test program below runs fine in the python shell but I want to run it from PyCharm.
I have been trying to solve this for the past hour and I've hit a wall.
When I run this simple test program to see if pygame is working:
import pygame
pygame.init()
pygame.display.set_mode((800, 800))
I get this error:
Traceback (most recent call last):
File "C:/Users/jerem/PycharmProjects/Games/hello_pygame.py", line 1, in <module>
import pygame
File "C:\Users\jerem\AppData\Roaming\Python\Python35\site-packages\pygame\__init__.py", line 141, in <module>
from pygame.base import *
ImportError: No module named 'pygame.base'
Any help would be greatly appreciated!
Thanks again,
JC
A:
Follow the instructions provided here. I think it's related to the problem you are having with pygame and PyCharm on Windows.
How do I download Pygame for Python 3.5.1?
A:
It's because PyCharm has not recognised your environment, or it is working with the wrong one.
https://www.jetbrains.com/help/pycharm/creating-virtual-environment.html
Check this guide on creating a virtual environment.
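As a quick sanity check (a small diagnostic sketch, assuming nothing beyond the standard library), print which interpreter and search path the run configuration is actually using:
import sys
print(sys.executable)  # the interpreter PyCharm is running
print(sys.path)        # where it will look for installed packages such as pygame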
| How do I run Pygame on Pycharm | I am trying to run pygame on the PyCharm IDE, I have installed the latest version of pygame for python 3.5 and have added it to the project interpreter. I installed pygame from http://www.lfd.uci.edu/~gohlke/pythonlibs/#pygame and copied it too python35-32/Scripts/. The test program below runs fine in the python shell but I want to run it from PyCharm.
I have been trying to solve this for the past hour and I've hit a wall.
When I run this simple test program to see if pygame is working:
import pygame
pygame.init()
pygame.display.set_mode((800, 800))
I get this error:
Traceback (most recent call last):
File "C:/Users/jerem/PycharmProjects/Games/hello_pygame.py", line 1, in <module>
import pygame
File "C:\Users\jerem\AppData\Roaming\Python\Python35\site-packages\pygame\__init__.py", line 141, in <module>
from pygame.base import *
ImportError: No module named 'pygame.base'
Any help would be greatly appreiciated!
Thanks again,
JC
| [
"Follow the instructions provided here. I think its related to the problem you are having on pygame and PyCharm on windows. \nHow do I download Pygame for Python 3.5.1?\n",
"its bcus pycharme has not recognised you're env or working on wrong env\n\nhttps://www.jetbrains.com/help/pycharm/creating-virtual-environment.html\ncheck this\n"
] | [
0,
0
] | [] | [] | [
"failed_installation",
"pycharm",
"pygame",
"python",
"python_3.x"
] | stackoverflow_0039339709_failed_installation_pycharm_pygame_python_python_3.x.txt |
Q:
Enforce `Unresolved attribute reference` when referencing non-existing Python enums.Enum
I am just disappointed with the behavior of Enum vs. a standard object-based class: I don't get an Unresolved attribute reference warning in my IDE. Maybe I am not aware of some OO nuance that would allow making this happen?
class Animal(Enum):
ant = 1
bee = 2
cat = 3
dog = 4
Writing Animal.tiger should trigger a warning about Unresolved attribute reference.
How to get there?
A:
This bug has been fixed in PyCharm 2022.1, the Unresolved attribute reference warning is now correctly shown by the IDE's linter.
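Independently of the IDE warning, plain Python already rejects the access at runtime (a quick sketch):
from enum import Enum

class Animal(Enum):
    ant = 1
    bee = 2

Animal.ant     # <Animal.ant: 1>
Animal.tiger   # raises AttributeError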
| Enforce `Unresolved attribute reference` when referencing non-existing Python enums.Enum | I am just disappointed with the behavior of Enum vs standard object-based class, that I don't get in my IDE warning about Unresolved attribute reference, maybe I am not aware of some OO nuance which will allow making this happen?
class Animal(Enum):
ant = 1
bee = 2
cat = 3
dog = 4
writing Animal.tiger should imply a warning about Unresolved attribute reference.
How to get there?
| [
"This bug has been fixed in PyCharm 2022.1, the Unresolved attribute reference warning is now correctly shown by the IDE's linter.\n\n"
] | [
1
] | [] | [] | [
"enums",
"pycharm",
"python"
] | stackoverflow_0059462483_enums_pycharm_python.txt |
Q:
Pandas: enter missing rows in a dataframe
I'm collecting time series data, but sometimes for some time points there is no data to be collected. Say, for example, I am collecting data across four time points; I might get a dataframe like this:
df_ = pd.DataFrame({'group': ['A']*3+['B']*3,
'time': [1,2,4,1,3,4],
'value': [100,105,111,200,234,222]})
Sometimes a data point is missing and so there is no row for that point. I would like to group by and forward-fill with the previous value to create the missing rows, which would look like this:
df_missing_completed = pd.DataFrame({'group': ['A']*4+['B']*4,
'time': [1,2,3,4,1,2,3,4],
'value': [100, 101, 105,111,200, 202, 234,222]})
I had the idea that I could create a new dataframe as a template with all the dates and time points, without any values, join it with the real data (which would induce NAs), and do a ffill on the value column to fill in the missing data, like below:
df_template = pd.DataFrame({'group': ['A']*4+['B']*4,
'time': [1,2,3,4,1,2,3,4]})
df_final = pd.merge(df_template, df_, on = ['group', 'time'], how='left')
df_final['filled_values'] = df_final['value'].fillna(method='ffill')
but this seems like a messy solution, and with the real data the df_template will be more complex to create. Does anyone know a better one? Thanks!
A:
I would use:
(df_.pivot(index='time', columns='group', values='value')
# reindex only if you want to add missing times for all groups
.reindex(range(df_['time'].min(), df_['time'].max()+1))
.ffill().unstack().reset_index(name='value')
)
Output:
group time value
0 A 1 100.0
1 A 2 105.0
2 A 3 105.0
3 A 4 111.0
4 B 1 200.0
5 B 2 200.0
6 B 3 234.0
7 B 4 222.0
A:
Instead of a template dataframe you could create a new index and then reindex with ffill:
new_idx = pd.MultiIndex.from_product([list('AB'), range(1,5)], names=['group', 'time'])
df_.set_index(['group', 'time']).reindex(new_idx, method='ffill').reset_index()
The result keeps the datatype of the value column:
group time value
0 A 1 100
1 A 2 105
2 A 3 105
3 A 4 111
4 B 1 200
5 B 2 200
6 B 3 234
7 B 4 222
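If hardcoding list('AB') and range(1, 5) is a concern, the new index can also be built from the data itself (a small sketch along the same lines):
groups = df_['group'].unique()
times = range(df_['time'].min(), df_['time'].max() + 1)
new_idx = pd.MultiIndex.from_product([groups, times], names=['group', 'time'])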
| Pandas: enter missing rows in a dataframe | I'm collecting time series data, but sometimes for some time points there is no data to be collected. Just say for example I am collecting data across four time points, I might get a dataframe like this:
df_ = pd.DataFrame({'group': ['A']*3+['B']*3,
'time': [1,2,4,1,3,4],
'value': [100,105,111,200,234,222]})
sometimes there is a datapoint missing and so there is no row for that point, I would like groupby and to forward fill with the previous value to create a new row form which would look like this:
df_missing_completed = pd.DataFrame({'group': ['A']*4+['B']*4,
'time': [1,2,3,4,1,2,3,4],
'value': [100, 101, 105,111,200, 202, 234,222]})
I had the idea that I could create an new dataframe as a template with all the dates and time points, without any values, join it with the real data which would induce NA's, and do a ffillon the value column to fill in the missing data, like below:
df_template = pd.DataFrame({'group': ['A']*4+['B']*4,
'time': [1,2,3,4,1,2,3,4]})
df_final = pd.merge(df_template, df_, on = ['group', 'time'], how='left')
df_final['filled_values'] = df_final['value'].fillna(method='ffill')
but this seems like a messy solution, and with the real data the df_templete will be more complex to create. Does anyone know a better one? Thanks!
| [
"I would use:\n(df_.pivot(index='time', columns='group', values='value')\n # reindex only of you want to add missing times for all groups\n .reindex(range(df_['time'].min(), df_['time'].max()+1))\n .ffill().unstack().reset_index(name='value')\n)\n\nOutput:\n group time value\n0 A 1 100.0\n1 A 2 105.0\n2 A 3 105.0\n3 A 4 111.0\n4 B 1 200.0\n5 B 2 200.0\n6 B 3 234.0\n7 B 4 222.0\n\n",
"Instead of a template dataframe you could create a new index and then reindex with ffill:\nnew_idx = pd.MultiIndex.from_product([list('AB'), range(1,5)], names=['group', 'time'])\ndf_.set_index(['group', 'time']).reindex(new_idx, method='ffill').reset_index()\n\nThe result keeps the datatype of the value column:\n group time value\n0 A 1 100\n1 A 2 105\n2 A 3 105\n3 A 4 111\n4 B 1 200\n5 B 2 200\n6 B 3 234\n7 B 4 222\n\n"
] | [
2,
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074643048_pandas_python.txt |
Q:
Pycharm Multiprocessing Error, What Can I Do?
I'm facing an error with PyCharm when trying to use multiprocessing.
import multiprocessing
import time
inicio = time.perf_counter()
def calcula_soma():
print('Iniciando a Funcao...')
soma = 0
for i in range(50_000_000):
soma = soma + 1
print('Calculo Finalizado!')
if __name__ == '__main__':
cod1 = multiprocessing.Process(target=calcula_soma)
cod2 = multiprocessing.Process(target=calcula_soma)
cod1.start()
cod2.start()
cod1.join()
cod2.join()
fim = time.perf_counter()
total = round(fim - inicio, 2)
print(f'Tempo: {total}')
The error message is as follows:
Traceback (most recent call last):
File "C:....\Anaconda3\envs\teste\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "C:....\AppData\Local\JetBrains\PyCharm 2022.2.4\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:....\AppData\Local\JetBrains\PyCharm 2022.2.4\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:....\PycharmProjects\pythonProject\teste.py", line 19, in <module>
cod1.start()
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\context.py", line 336, in _Popen
return Popen(process_obj)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function calcula_soma at 0x0000021CE30DE0E0>: attribute lookup calcula_soma on __main__ failed
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
When I run it in Jupyter Notebook, the code works, so this problem is about PyCharm. I googled it and found a lot of similar problems with this error, but none of the resolutions helped me.
A:
After a long time looking for an answer, I didn't find one quickly. So, to help someone else, this is what helped me.
The link about this issue is: https://youtrack.jetbrains.com/issue/PY-50116
Solution:
In the menu bar, go to Run/Debug Configurations
Add a New Configuration
Click on Python
Rename the Configuration
Select the script
Uncheck the "Run with Python Console" option
DONE =D
| Pycharm Multiprocessing Error, What Can I Do? | I'm facing a error with Pycharm when tried to use multiprocessing.
import multiprocessing
import time
inicio = time.perf_counter()
def calcula_soma():
print('Iniciando a Funcao...')
soma = 0
for i in range(50_000_000):
soma = soma + 1
print('Calculo Finalizado!')
if __name__ == '__main__':
cod1 = multiprocessing.Process(target=calcula_soma)
cod2 = multiprocessing.Process(target=calcula_soma)
cod1.start()
cod2.start()
cod1.join()
cod2.join()
fim = time.perf_counter()
total = round(fim - inicio, 2)
print(f'Tempo: {total}')
The error message is as follows:
Traceback (most recent call last):
File "C:....\Anaconda3\envs\teste\lib\code.py", line 90, in runcode
exec(code, self.locals)
File "<input>", line 1, in <module>
File "C:....\AppData\Local\JetBrains\PyCharm 2022.2.4\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:....\AppData\Local\JetBrains\PyCharm 2022.2.4\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:....\PycharmProjects\pythonProject\teste.py", line 19, in <module>
cod1.start()
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\process.py", line 121, in start
self._popen = self._Popen(self)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\context.py", line 336, in _Popen
return Popen(process_obj)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\popen_spawn_win32.py", line 93, in __init__
reduction.dump(process_obj, to_child)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <function calcula_soma at 0x0000021CE30DE0E0>: attribute lookup calcula_soma on __main__ failed
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\spawn.py", line 116, in spawn_main
exitcode = _main(fd, parent_sentinel)
File "C:....\Anaconda3\envs\teste\lib\multiprocessing\spawn.py", line 126, in _main
self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
When I run it in Jupyter Notebook, the code works so this problem is about Pycharm. I google it and find out a lot of same problems with this error but none of resulutions helps me.
| [
"After a long time looking for a anwser, I didn't find quickly. So to help someone else this is what helps me.\nThe link about this issues is: https://youtrack.jetbrains.com/issue/PY-50116\nSolution:\n\nIn menus bar go to Run/Debug Configurations\n\n\n\nAdd a New Configuration\n\n\n\nClick on Python\n\n\n\nRename the Configuration\n\n\n\nSelect the script\n\n\n\nUncheck the \"Run with Python Console\" option\n\n\n\nDONE =D\n\n"
] | [
1
] | [] | [] | [
"multiprocessing",
"pycharm",
"python",
"python_3.x"
] | stackoverflow_0074644040_multiprocessing_pycharm_python_python_3.x.txt |
Q:
Airflow: How to set a dynamic timeout to python sensor
I have four tasks: t1, t2, t3, t4. I want to push a value into XCom in t1, then pull that value from XCom and use it as the timeout in a PythonSensor, which is t4.
Currently the value is hardcoded
PythonSensor(
task_id="poll_status",
poke_interval=POKE_INTERVAL,
timeout=180,
mode=MODE,
soft_fail=True,
python_callable=t2
)
want something like
PythonSensor(
task_id="poll_job_status",
poke_interval=POKE_INTERVAL,
timeout=ti.xcom_pull(key="timeout")[0], // value pushed into xcom in t1
mode=MODE,
soft_fail=True,
python_callable=t2
)
A:
According to the Airflow XCom documentation (https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html), you might wish to use a templated result:
PythonSensor(
task_id="poll_job_status",
poke_interval=POKE_INTERVAL,
timeout="{{ task_instance.xcom_pull(task_ids='t1', key='timeout') }}", # value pushed into xcom in t1 (Jinja templates are passed as strings)
mode=MODE,
soft_fail=True,
python_callable=t2
)
| Airflow: How to set a dynamic timeout to python sensor | I have four tasks t1,t2,t3,t4. I want to push a value into xcom in t1 and pull that value from Xcom and use that as timeout in pythonsensor which is t4
Currently the value is hardcoded
PythonSensor(
task_id="poll_status",
poke_interval=POKE_INTERVAL,
timeout=180,
mode=MODE,
soft_fail=True,
python_callable=t2
)
want something like
PythonSensor(
task_id="poll_job_status",
poke_interval=POKE_INTERVAL,
timeout=ti.xcom_pull(key="timeout")[0], // value pushed into xcom in t1
mode=MODE,
soft_fail=True,
python_callable=t2
)
| [
"According to airflow XCOM documentation(https://airflow.apache.org/docs/apache-airflow/stable/concepts/xcoms.html), you might wish to use templated result:\nPythonSensor(\n task_id=\"poll_job_status\",\n poke_interval=POKE_INTERVAL,\n timeout={{ task_instance.xcom_pull(task_ids='t1', key='timeout') }}, # value pushed into xcom in t1\n mode=MODE,\n soft_fail=True,\n python_callable=t2 \n )\n\n"
] | [
0
] | [] | [] | [
"airflow",
"python"
] | stackoverflow_0074641709_airflow_python.txt |
Q:
Speed up importing huge json files
I am trying to open up some huge json files
papers0 = []
papers1 = []
papers2 = []
papers3 = []
papers4 = []
papers5 = []
papers6 = []
papers7 = []
for x in range(8):
for line in open(f'part_00{x}.json', 'r'):
globals()['papers%s' % x].append(json.loads(line))
However, the process above is slow. I wonder if there is some parallelization trick or something else I could do to speed it up.
Thank you
A:
If the JSON files are very large then loading them (as Python dictionaries) will be I/O bound. Therefore, multithreading would be appropriate for parallelisation.
Rather than having discrete variables for each dictionary, why not have a single dictionary keyed on the significant numeric part of the filename(s).
For example:
from concurrent.futures import ThreadPoolExecutor as TPE
from json import load as LOAD
from sys import stderr as STDERR
NFILES = 8
JDATA = {}
def get_json(n):
try:
with open(f'part_00{n}.json') as j:
return n, LOAD(j)
except Exception as e:
print(e, file=STDERR)
return n, None
def main():
    global JDATA  # rebind the module-level dict so the results are visible after main() returns
    with TPE() as tpe:
        JDATA = dict(tpe.map(get_json, range(NFILES)))
if __name__ == '__main__':
main()
After running this, the dictionary representation of the JSON file part_005.json (for example) would be accessible as JDATA[5]
Note that if an exception arises during accessing or processing of any of the files, the relevant dictionary value will be None
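A quick usage sketch, assuming main() has been run and JDATA has been filled:
part5 = JDATA[5]           # data parsed from part_005.json, or None on failure
if part5 is not None:
    print(type(part5))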
| Speed up importing huge json files | I am trying to open up some huge json files
papers0 = []
papers1 = []
papers2 = []
papers3 = []
papers4 = []
papers5 = []
papers6 = []
papers7 = []
for x in range(8):
for line in open(f'part_00{x}.json', 'r'):
globals()['papers%s' % x].append(json.loads(line))
However the process above is slow. I wonder if there is some parallelization trick or some other in order to speed it up.
Thank you
| [
"If the JSON files are very large then loading them (as Python dictionaries) will be I/O bound. Therefore, multithreading would be appropriate for parallelisation.\nRather than having discrete variables for each dictionary, why not have a single dictionary keyed on the significant numeric part of the filename(s).\nFor example:\nfrom concurrent.futures import ThreadPoolExecutor as TPE\nfrom json import load as LOAD\nfrom sys import stderr as STDERR\n\nNFILES = 8\nJDATA = {}\n\ndef get_json(n):\n try:\n with open(f'part_00{n}.json') as j:\n return n, LOAD(j)\n except Exception as e:\n print(e, file=STDERR)\n return n, None\n\ndef main():\n with TPE() as tpe:\n JDATA = dict(tpe.map(get_json, range(NFILES)))\n\nif __name__ == '__main__':\n main()\n\nAfter running this, the dictionary representation of the JSON file part_005.json (for example) would be accessible as JDATA[5]\nNote that if an exception arises during accessing or processing of any of the files, the relevant dictionary value will be None\n"
] | [
1
] | [] | [] | [
"for_loop",
"json",
"list",
"python"
] | stackoverflow_0074643262_for_loop_json_list_python.txt |
Q:
Python C++ API make member private
I'm making a python extension module using my C++ code and I've made a struct that I use to pass my C++ variables. I want some of those variables to be inaccessible from the python level. How can I do that?
typedef struct {
PyObject_HEAD
std::string region;
std::string stream;
bool m_is_surface = false;
bool m_is_stream = false;
} PyType;
I want m_is_surface and m_is_stream to be inaccessible to the user. Only PyType's methods should access them. So the end user CAN'T do something like this:
import my_module
instance = my_module.PyType()
instance.m_is_surface = False # This should raise an error. Or preferably the user can't see this at all
I cannot just add private to the struct because Python type members are created as standalone functions and linked to the type later on, so I cannot access them inside the struct's methods.
So if I say:
int PyType_init(PyType *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"thread", "region", "stream", NULL};
PyObject *tmp;
if (!PyArg_ParseTupleAndKeywords(args, kwds, "bs|s", kwlist,
&self->thread, &self->region, &self->stream))
return -1;
return 0;
}
It will raise an "is private within this context" error.
A:
You should do nothing. Unless you create an accessor property these attributes are already inaccessible from Python. Python cannot automatically see C/C++ struct members.
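Seen from the Python side, a sketch of what that means in practice (assuming the module compiles as my_module and no PyMemberDef/PyGetSetDef entry exposes the flag):
import my_module  # the extension module from the question

instance = my_module.PyType()
instance.m_is_surface          # raises AttributeError
instance.m_is_surface = False  # also raises AttributeError unless you expose the attribute yourself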
| Python C++ API make member private | I'm making a python extension module using my C++ code and I've made a struct that I use to pass my C++ variables. I want some of those variables to be inaccessible from the python level. How can I do that?
typedef struct {
PyObject_HEAD
std::string region;
std::string stream;
bool m_is_surface = false;
bool m_is_stream = false;
} PyType;
I want m_is_surface and m_is_stream to be inaccessible by the user. Only PyType's methods should access it. So the and user CAN'T do something like this:
import my_module
instance = my_module.PyType()
instance.m_is_surface = False # This should raise an error. Or preferably the user can't see this at all
I cannot just add private to the struct because python type members are created as a standalone function and is linked to the type later on, so I cannot access them inside the struct's methods.
So if I say:
int PyType_init(PyType *self, PyObject *args, PyObject *kwds)
{
static char *kwlist[] = {"thread", "region", "stream", NULL};
PyObject *tmp;
if (!PyArg_ParseTupleAndKeywords(args, kwds, "bs|s", kwlist,
&self->thread, &self->region, &self->stream))
return -1;
return 0;
}
It will raise an is private within this context error.
| [
"You should do nothing. Unless you create an accessor property these attributes are already inaccessible from Python. Python cannot automatically see C/C++ struct members.\n"
] | [
1
] | [] | [] | [
"c++",
"python",
"python_c_api"
] | stackoverflow_0074642027_c++_python_python_c_api.txt |
Q:
How do I remove unwanted parts from strings in a Python DataFrame column
Based on the script originally suggested by u/commandlineluser at reddit, I (as a Python novice) attempted to revise the original code to remove unwanted parts that vary across column values. The Python script involves creating a dictionary with keys and values and using a list comprehension with str.replace.
(part of the original script by u/commandlineluser at reddit)
extensions = "dat", "ssp", "dta", "v9", "xlsx"
(The next line is my revision to the above part, and below is the complete code block)
extensions = "dat", "ssp", "dta", "20dta", "u20dta", "f1dta", "f2dta", "v9", "xlsx"
Some of the results are different than what I desire. Please see below (what I tried).
import pandas as pd
import re
data = {"full_url": ['https://meps.ahrq.gov/data_files/pufs/h225/h225dat.zip',
'https://meps.ahrq.gov/data_files/pufs/h51bdat.zip',
'https://meps.ahrq.gov/data_files/pufs/h47f1dat.zip',
'https://meps.ahrq.gov/data_files/pufs/h225/h225ssp.zip',
'https://meps.ahrq.gov/data_files/pufs/h220i/h220if1dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h220h/h220hv9.zip',
'https://meps.ahrq.gov/data_files/pufs/h220e/h220exlsx.zip',
'https://meps.ahrq.gov/data_files/pufs/h224/h224xlsx.zip',
'https://meps.ahrq.gov/data_files/pufs/h036brr/h36brr20dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h036/h36u20dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h197i/h197if1dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h197i/h197if2dta.zip']}
df = pd.DataFrame(data)
extensions = ["dat", "ssp", "dta", "20dta", "u20dta", "f1dta", "f2dta", "v9", "xlsx"]
replacements = dict.fromkeys((f"{ext}[.]zip$" for ext in extensions), "")
df["file_id"] = df["full_url"].str.split("/").str[-1].replace(replacements, regex=True)
print(df["file_id"])
Annotated output
0 h225 (looks good)
1 h51b (looks good)
2 h47f1 (h47 -> desired)
3 h225 (looks good)
4 h220if1 (h220i -> desired)
5 h220h (looks good)
6 h220e (looks good)
7 h224 (looks good)
8 h36brr20 (h36brr -> desired)
9 h36u20 (h36 -> desired)
10 h197if1 (h197i -> desired)
11 h197if2 (h197i -> desired)
A:
You have two issues here, and they are all in this line:
extensions = ["dat", "ssp", "dta", "20dta", "u20dta", "f1dta", "f2dta", "v9", "xlsx"]
First issue
The first issue is in the order of the elements of this list. "dat" and "dta" are substrings of other elements in this list, and they are at the front of it. Let's take an example: h47f1dat.zip needs to become h47. But in these lines:
replacements = dict.fromkeys((f"{ext}[.]zip$" for ext in extensions), "")
df["file_id"] = df["full_url"].str.split("/").str[-1].replace(replacements, regex=True)
You keep the order, meaning that you'll first be filtering with the "dat" string, which becomes h47f1. This can be easily fixed by reordering your list.
Second issue
You missed an entry in your extensions list: if you want h47f1dat.zip to become h47 you need to have "f1dat" in your list but you only have "f1dta".
Conclusion
You were almost there! There was simply a small issue with the order of the elements and one extension was missing (or you have a typo in your URLs).
The following extensions list:
extensions = ["ssp", "20dta", "u20dta", "f1dat", "f1dta", "f2dta", "v9", "dat", "dta", "xlsx"]
Together with the rest of your code gives you the result you want:
0 h225
1 h51b
2 h47
3 h225
4 h220i
5 h220h
6 h220e
7 h224
8 h36brr
9 h36u
10 h197i
11 h197i
A:
Good catch about the issue of the order of the elements and the missing extension! Thank you.
Question 1: Do you mean the list extensions is not sorted alphabetically? Can I not use the Python sort() method to sort the list? I have over one thousand rows in the actual dataframe, and I prefer to sort the list programmatically. I hope I do not misunderstand your comments.
Question 2: I don't understand why I am getting h36u instead of the desired value h36 in the output even after reordering the list as you suggested. Any thoughts?
I have tried another approach (code below) using Jupyter Lab, which provides the output in which the first two values are different from the desired output (also shown below), but the other values seem to be what I desire including h36.
df["file_id"] = df["full_url"].str.split("/").str[-1].str.replace(r'(\dat.zip \
|f1dat.zip|dta.zip|f1dta.zip|f2dta.zip|20dta.zip|u20dta.zip|xlsx.zip|v9.zip|ssp.zip)' \
,'', regex=True)
print(df["file_id"])
Output (annotated)
0 h225dat.zip (not desired; h225 desired)
1 h51bdat.zipn (not desired; h51b desired)
2 h47
3 h225
4 h220i
5 h220h
6 h220e
7 h224
8 h36brr
9 h36
10 h197i
11 h197i
Question 3: Any comments on the above alternative code snippets?
| How do I remove unwanted parts from strings in a Python DataFrame column | Based on the script originally suggested by u/commandlineluser at reddit, I (as a Python novice) attempted to revise the original code to remove unwanted parts that vary across column values. The Python script involves creating a dictionary with keys and values and using a list comprehension with str.replace.
(part of the original script by u/commandlineluser at reddit)
extensions = "dat", "ssp", "dta", "v9", "xlsx"
(The next line is my revision to the above part, and below is the complete code block)
extensions = "dat", "ssp", "dta", "20dta", "u20dta", "f1dta", "f2dta", "v9", "xlsx"
Some of the results are different than what I desire. Please see below (what I tried).
import pandas as pd
import re
data = {"full_url": ['https://meps.ahrq.gov/data_files/pufs/h225/h225dat.zip',
'https://meps.ahrq.gov/data_files/pufs/h51bdat.zip',
'https://meps.ahrq.gov/data_files/pufs/h47f1dat.zip',
'https://meps.ahrq.gov/data_files/pufs/h225/h225ssp.zip',
'https://meps.ahrq.gov/data_files/pufs/h220i/h220if1dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h220h/h220hv9.zip',
'https://meps.ahrq.gov/data_files/pufs/h220e/h220exlsx.zip',
'https://meps.ahrq.gov/data_files/pufs/h224/h224xlsx.zip',
'https://meps.ahrq.gov/data_files/pufs/h036brr/h36brr20dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h036/h36u20dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h197i/h197if1dta.zip',
'https://meps.ahrq.gov/data_files/pufs/h197i/h197if2dta.zip']}
df = pd.DataFrame(data)
extensions = ["dat", "ssp", "dta", "20dta", "u20dta", "f1dta", "f2dta", "v9", "xlsx"]
replacements = dict.fromkeys((f"{ext}[.]zip$" for ext in extensions), "")
df["file_id"] = df["full_url"].str.split("/").str[-1].replace(replacements, regex=True)
print(df["file_id"])
Annotated output
0 h225 (looks good)
1 h51b (looks good)
2 h47f1 (h47 -> desired)
3 h225 (looks good)
4 h220if1 (h220i -> desired)
5 h220h (looks good)
6 h220e (looks good)
7 h224 (looks good)
8 h36brr20 (h36brr -> desired)
9 h36u20 (h36 -> desired)
10 h197if1 (h197i -> desired)
11 h197if2 (h197i -> desired)
| [
"You have two issues here, and they are all in this line:\nextensions = [\"dat\", \"ssp\", \"dta\", \"20dta\", \"u20dta\", \"f1dta\", \"f2dta\", \"v9\", \"xlsx\"]\n\nFirst issue\nThe first issue is in the order of the elements of this list. \"dat\" and \"dta\" are substrings of other elements in this string and they are at the front of this list. Let's take an example: h47f1dat.zip needs to become h47. But in these lines:\nreplacements = dict.fromkeys((f\"{ext}[.]zip$\" for ext in extensions), \"\")\ndf[\"file_id\"] = df[\"full_url\"].str.split(\"/\").str[-1].replace(replacements, regex=True)\n\nYou keep the order, meaning that you'll first be filtering with the \"dat\" string, which becomes h47f1. This can be easily fixed by reordering your list.\nSecond issue\nYou missed an entry in your extensions list: if you want h47f1dat.zip to become h47 you need to have \"f1dat\" in your list but you only have \"f1dta\".\nConclusion\nYou were almost there! There was simply a small issue with the order of the elements and one extension was missing (or you have a typo in your URLs).\nThe following extensions list:\nextensions = [\"ssp\", \"20dta\", \"u20dta\", \"f1dat\", \"f1dta\", \"f2dta\", \"v9\", \"dat\", \"dta\", \"xlsx\"]\n\nTogether with the rest of your code gives you the result you want:\n0 h225 \n1 h51b \n2 h47 \n3 h225 \n4 h220i \n5 h220h \n6 h220e \n7 h224 \n8 h36brr \n9 h36u \n10 h197i \n11 h197i\n\n",
"Good catch about the issue of the order of the elements and the missing extension! Thank you.\nQuestion 1: Do you mean the list extensions is not sorted alphabetically? Can I not use the Python sort() method to sort the list? I have over one thousand rows in the actual dataframe, and I prefer to sort the list programmatically. I hope I do not misunderstand your comments.\nQuestion 2: I don't understand why I am getting h36u instead of the desired value h36 in the output even after reordering the list as you suggested. Any thoughts?\nI have tried another approach (code below) using Jupyter Lab, which provides the output in which the first two values are different from the desired output (also shown below), but the other values seem to be what I desire including h36.\ndf[\"file_id\"] = df[\"full_url\"].str.split(\"/\").str[-1].str.replace(r'(\\dat.zip \\\n|f1dat.zip|dta.zip|f1dta.zip|f2dta.zip|20dta.zip|u20dta.zip|xlsx.zip|v9.zip|ssp.zip)' \\\n,'', regex=True)\nprint(df[\"file_id\"])\n\nOutput (annotated)\n0 h225dat.zip (not desired; h225 desired)\n1 h51bdat.zipn (not desired; h51b desired)\n2 h47\n3 h225\n4 h220i\n5 h220h\n6 h220e\n7 h224\n8 h36brr\n9 h36\n10 h197i\n11 h197i\n\n\nQuestion 3: Any comments on the above alternative code snippets?\n"
] | [
0,
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074634406_python_regex.txt |
Q:
Matching a string if it contains all words of a list in python
I have a number of long strings and I want to match those that contain all words of a given list.
keywords=['special','dreams']
search_string1="This is something that manifests especially in dreams"
search_string2="This is something that manifests in special cases in dreams"
I want only search_string2 matched. So far I have this code:
if all(x in search_text for x in keywords):
print("matched")
The problem is that it will also match search_string1. Obviously I need to include some regex matching that uses \w or \b, but I can't figure out how I can include a regex in the if all statement.
Can anyone help?
A:
You can use regex to do the same, but I prefer to just use plain Python.
Strings in Python can be split into a list of words (and join can join a list back into a string), while word in list_of_words tells you whether a word is in that list.
keywords=['special','dreams']
found = True
for word in keywords:
if not word in search_string1.split():
found = False
A:
It may not be the best idea, but we could check whether one set is a subset of another:
keywords = ['special', 'dreams']
strs = [
"This is something that manifests especially in dreams",
"This is something that manifests in special cases in dreams"
]
_keywords = set(keywords)
for s in strs:
s_set = set(s.split())
if _keywords.issubset(s_set):
print(f"Matched: {s}")
A:
Axe319's comment works and is closest to my original question of how to solve the problem using regex. To quote the solution again:
all(re.search(fr'\b{x}\b', search_text) for x in keywords)
Thanks to everyone!
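For completeness, a self-contained sketch of that approach (the matches_all helper name is not from the original comment):
import re

keywords = ['special', 'dreams']
search_string1 = "This is something that manifests especially in dreams"
search_string2 = "This is something that manifests in special cases in dreams"

def matches_all(text, words):
    # \b word boundaries stop "especially" from matching "special";
    # re.escape guards against keywords containing regex metacharacters
    return all(re.search(fr'\b{re.escape(w)}\b', text) for w in words)

print(matches_all(search_string1, keywords))  # False
print(matches_all(search_string2, keywords))  # True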
| Matching a string if it contains all words of a list in python | I have a number of long strings and I want to match those that contain all words of a given list.
keywords=['special','dreams']
search_string1="This is something that manifests especially in dreams"
search_string2="This is something that manifests in special cases in dreams"
I want only search_string2 matched. So far I have this code:
if all(x in search_text for x in keywords):
print("matched")
The problem is that it will also match search_string1. Obviously I need to include some regex matching that uses \w or or \b, but I can't figure out how I can include a regex in the if all statement.
Can anyone help?
| [
"you can use regex to do the same but I prefer to just use python.\nstring classes in python can be split to list of words. (join can join a list to string). while using word in list_of_words will help you understand if word is in the list.\nkeywords=['special','dreams']\nfound = True\nfor word in keywords:\n if not word in search_string1.split():\n found = False\n\n",
"Could be not the best idea, but we could check if one set is a part of another set:\nkeywords = ['special', 'dreams']\n\nstrs = [\n \"This is something that manifests especially in dreams\",\n \"This is something that manifests in special cases in dreams\"\n]\n\n_keywords = set(keywords)\nfor s in strs:\n s_set = set(s.split())\n if _keywords.issubset(s_set):\n print(f\"Matched: {s}\")\n\n",
"Axe319's comment works and is closest to my original question of how to solve the problem using regex. To quote the solution again:\nall(re.search(fr'\\b{x}\\b', search_text) for x in keywords)\nThanks to everyone!\n"
] | [
1,
1,
1
] | [] | [] | [
"python",
"string"
] | stackoverflow_0074634142_python_string.txt |
Q:
Python3 - Tkinter - Widget creation with a dict
I'm trying to create widgets dynamically on the root window of tkinter, with a JSON file as a widget config. I simplified the code so you can also test things out. (I'm using grid() in my main code, but it's not necessary here.)
The JSON file contains an items list with each widget in a separate dict; it can, for example, look like this.
new_dict = {
"items": [
{
"type": "frame",
"id": "f1"
},
{
"type": "button",
"id": "b1",
"text": "Button1"
}
]
}
My goal here is to create a widget stored under the value of the id field, so I can later change widget states or other things via .config()
(Example: f1 = Frame(root))
For this example my code looks like this:
(Note: I'm using locals() to create the specific variables; if there is a better way, please let me know.)
# Example: Change Frame Color to Blue
def change_background(frame_id):
locals()[frame_id].config(bg=blue)
# Root Window
root = Tk()
root.geometry("800x600")
# Widget Creation
for item in new_dict["items"]:
if item["type"] == "frame":
frame_id = item["id"]
locals()[item["id"]] = Frame(root, width=200, height=200, bg="green")
locals()[item["id"]].pack(side=BOTTOM, fill=BOTH)
elif item["type"] == "button":
locals()[item["id"]] = Button(root, text=item["text"], command=lambda: change_background(frame_id))
locals()[item["id"]].place(x=50, y=50, anchor=CENTER)
root.mainloop()
Now my problem is that I can't pass the frame id into the change_background function. If I do that, I get the following error:
KeyError: 'f1'
I don't quite understand the problem here, because pack(), place() and grid() work fine with each widget.
A:
As the name locals suggests, it stores only local variables, so locals() inside the function contains only the local variables defined inside that function.
It is not recommended to use locals() like this. Just use a normal dictionary instead:
...
# Example: Change Frame Color to Blue
def change_background(frame_id):
widgets[frame_id].config(bg="blue")
# Root Window
root = Tk()
root.geometry("800x600")
# Widget Creation
# use a normal dictionary instead of locals()
widgets = {}
for item in new_dict["items"]:
if item["type"] == "frame":
frame_id = item["id"]
widgets[item["id"]] = Frame(root, width=200, height=200, bg="green")
widgets[item["id"]].pack(side=BOTTOM, fill=BOTH)
elif item["type"] == "button":
widgets[item["id"]] = Button(root, text=item["text"], command=lambda: change_background(frame_id))
widgets[item["id"]].place(x=50, y=50, anchor=CENTER)
...
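One caveat worth keeping in mind if the config ever contains more than one frame: the lambda looks up frame_id only when the button is clicked, so every button would end up targeting the last frame created. A common fix (sketched here, not part of the answer above) is to bind the current value as a default argument:
widgets[item["id"]] = Button(root, text=item["text"],
                             command=lambda fid=frame_id: change_background(fid))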
| Python3 - Tkinter - Widget creation with a dict | Im trying to create widgets dynamically onto the root window of tkinter with a json file as a widget config. I simplified the code so you can also test things out. (Im using grid() in my main code but its not necessary here)
The json file contains an items list with each widget in a seperate dict, it can look for example like this.
new_dict = {
"items": [
{
"type": "frame",
"id": "f1"
},
{
"type": "button",
"id": "b1",
"text": "Button1"
}
]
}
My goal here to create a widget, stored in the value of the id field, so i can later on change widgets states or other things via .config()
( Example: f1 = Frame(root) )
For this example my code looks like this:
( Note: im using locals() to create the specific variables, if there is a better way, please let me know )
# Example: Change Frame Color to Blue
def change_background(frame_id):
locals()[frame_id].config(bg=blue)
# Root Window
root = Tk()
root.geometry("800x600")
# Widget Creation
for item in new_dict["items"]:
if item["type"] == "frame":
frame_id = item["id"]
locals()[item["id"]] = Frame(root, width=200, height=200, bg="green")
locals()[item["id"]].pack(side=BOTTOM, fill=BOTH)
elif item["type"] == "button":
locals()[item["id"]] = Button(root, text=item["text"], command=lambda: change_background(frame_id))
locals()[item["id"]].place(x=50, y=50, anchor=CENTER)
root.mainloop()
Now my problem here is that i cant give the frame id into the change_background function. If i do that im getting following Error:
KeyError: 'f1'
I dont quite understand the problem here, because pack(), place() and grid() works fine with each widget.
| [
"As what the name locals means, it stores only local variables. So locals() inside the function just contains local variables defined inside the function.\nIt is not recommended to use locals() like this. Just use a normal dictionary instead:\n...\n# Example: Change Frame Color to Blue\ndef change_background(frame_id):\n widgets[frame_id].config(bg=\"blue\")\n\n# Root Window\nroot = Tk()\nroot.geometry(\"800x600\")\n\n# Widget Creation\n# use a normal dictionary instead of locals()\nwidgets = {}\nfor item in new_dict[\"items\"]:\n if item[\"type\"] == \"frame\":\n frame_id = item[\"id\"]\n widgets[item[\"id\"]] = Frame(root, width=200, height=200, bg=\"green\")\n widgets[item[\"id\"]].pack(side=BOTTOM, fill=BOTH)\n elif item[\"type\"] == \"button\":\n widgets[item[\"id\"]] = Button(root, text=item[\"text\"], command=lambda: change_background(frame_id))\n widgets[item[\"id\"]].place(x=50, y=50, anchor=CENTER)\n...\n\n"
] | [
1
] | [] | [] | [
"dictionary",
"python",
"tkinter",
"variables"
] | stackoverflow_0074629883_dictionary_python_tkinter_variables.txt |
Q:
Join on different named columns in Pandas
I have two dataframes in Pandas
left = pd.DataFrame(
{"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}
)
right = pd.DataFrame(
{"C": ["A0", "A1", "A2"], "D": ["D0", "D2", "D3"]}
)
How would I left join on column A in the left dataframe and column C in the right dataframe?
Output
B D A
B0 D0 A0
B1 D2 A1
B2 D3 A2
A:
You can use merge with kwargs left_on and right_on:
pd.merge(left, right, how="left", left_on="A", right_on="C")
Output:
A B C D
0 A0 B0 A0 D0
1 A1 B1 A1 D2
2 A2 B2 A2 D3
Edit: you can drop the C column with .drop("C", axis=1)
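Putting the merge and the drop together (a sketch using the frames from the question) gives just the A, B and D columns:
out = pd.merge(left, right, how="left", left_on="A", right_on="C").drop("C", axis=1)
print(out)

    A   B   D
0  A0  B0  D0
1  A1  B1  D2
2  A2  B2  D3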
| Join on different named columns in Pandas | I have two dataframes in Pandas
left = pd.DataFrame(
{"A": ["A0", "A1", "A2"], "B": ["B0", "B1", "B2"]}
)
right = pd.DataFrame(
{"C": ["A0", "A1", "A2"], "D": ["D0", "D2", "D3"]}
)
How would I left join on the column A in left dataframe and column C in right dataframe?
Output
B D A
B0 D0 A0
B1 D2 A1
B2 D3 A2
| [
"You can use merge with kwargs left_on and right_on:\npd.merge(left, right, how=\"left\", left_on=\"A\", right_on=\"C\")\n\nOutput:\n A B C D\n0 A0 B0 A0 D0\n1 A1 B1 A1 D2\n2 A2 B2 A2 D3\n\nEdit: you can drop the C column with .drop(\"C\", axis=1)\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074644002_pandas_python.txt |
Q:
Time complexity in case of multiple "in" operator usage in a condition in python
Suppose I have 3 elements that I want to check if they are in an iterable say (str or list).
I'm going to use an str as an example now but it should be the same in the case of a list:
Assuming values to check for are 'a','b','c' and string to search in is 'abcd' saved in variable line.
There are "two" general ways of doing this:
One is to just do multiple checks
if 'a' in line and 'b' in line and 'c' in line:
#Do something
pass
Another is to use all
if all( sub_str in line for sub_str in ['a','b','c']):
#Do something
pass
I want to know if there is any time-complexity difference between the two approaches.
A:
Hard to say about time complexity, but the first is actually faster at runtime, presumably because it doesn't involve
a name lookup (all)
a function call (all)
a generator expression
a list construction (['a','b','c']) (though this may be negligible for an imported module)
See comments.
See for yourself:
# All found
~ $ python3 -m timeit -s "line = 'fooabc'" "'a' in line and 'b' in line and 'c' in line"
20000000 loops, best of 5: 55 nsec per loop
~ $ python3 -m timeit -s "line = 'fooabc'" "all( sub_str in line for sub_str in ['a','b','c'])"
5000000 loops, best of 5: 397 nsec per loop
# None found
~ $ python3 -m timeit -s "line = 'fooddd'" "'a' in line and 'b' in line and 'c' in line"
50000000 loops, best of 5: 25.2 nsec per loop
~ $ python3 -m timeit -s "line = 'fooddd'" "all( sub_str in line for sub_str in ['a','b','c'])"
5000000 loops, best of 5: 325 nsec per loop
A:
Both are O(mn) where m is the number of values to search and n is the length of the string.
There is no simple way to do asymptotically better. If you wanted to test for or instead of and, you could write a regex like a|b|c and it would take O(n) time to scan a string of length n, so long as you use a regular expression library which guarantees linear time searches (which is possible by using finite automata, but the standard library's re module doesn't give this guarantee).
But you want to test whether all of the patterns match, which is harder to do with a regex. If the patterns cannot overlap, then a regular expression can still work; you will need to iterate through the matches and keep track of which patterns have been found so far (e.g. using a set). Again, this will take O(n) time if you use a regex library that guarantees linear time searches.
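A rough sketch of that single-pass idea using the standard re module (which, again, does not guarantee linear-time searches), assuming the patterns cannot overlap:
import re

patterns = ['a', 'b', 'c']
line = 'abcd'

# Scan the string once, recording which of the patterns were seen
regex = re.compile("|".join(re.escape(p) for p in patterns))
seen = {m.group(0) for m in regex.finditer(line)}
all_found = seen >= set(patterns)  # True here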
If the patterns can overlap, then it's still not theoretically impossible to do with a regex (because an intersection of regular languages is regular), but it's impractical. An alternative approach would be something like the Aho–Corasick algorithm which can find all matches of a set of strings in O(n) time, including overlapping matches. This algorithm is not available in the standard library but there appear to be several third-party packages which implement it.
All of that said, if m is small then I expect it is hard to get any significant improvement in actual running time compared to the simple O(mn) approach, even if other algorithms theoretically have a lower complexity.
A:
I want to know if there is any time-complexity difference between the two approaches.
TL;DR: no.
The all() approach has overhead inherent in calling a function, and producing a generator for the argument to all() also has a cost. But I expect these costs each to scale as O(1) with any problem dimension you choose, and you perform each one exactly once, so they have no impact on asymptotic complexity.
I expect execution of an all() to contribute a linear scale factor with respect to the number of elements available from the iterable presented to it. In the first program, however, changing the number of elements to test would require modifying the program, so this number is not an adjustable parameter. Therefore, scaling with respect to it is irrelevant for the purposes of the comparison we're considering.
Supposing that the "Do something" is the same in both cases, there remains only the evaluations of expressions of the form x in line. The two alternatives evaluate the same number of such expressions, with the same operands, so these scale the same in the two alternatives.
| Time complexity in case of multiple "in" operator usage in a condition in python | Suppose I have 3 elements that I want to check if they are in an iterable say (str or list).
I'm going to use an str as an example now but it should be the same in the case of a list:
Assuming values to check for are 'a','b','c' and string to search in is 'abcd' saved in variable line.
There are "two" general ways of doing this:
One is to just do multiple checks
if 'a' in line and 'b' in line and 'c' in line:
#Do something
pass
Another is to use all
if all( sub_str in line for sub_str in ['a','b','c']):
#Do something
pass
I want to know if there is any time-complexity difference between the two approaches.
| [
"Hard to say about time complexity, but the first is actually faster at runtime, presumably because it doesn't involve\n\na name lookup (all)\na function call (all)\na generator expression\na list construction (['a','b','c']) (though this may be negligible for an imported module)\nSee comments.\n\nSee for yourself:\n# All found\n~ $ python3 -m timeit -s \"line = 'fooabc'\" \"'a' in line and 'b' in line and 'c' in line\"\n20000000 loops, best of 5: 55 nsec per loop\n~ $ python3 -m timeit -s \"line = 'fooabc'\" \"all( sub_str in line for sub_str in ['a','b','c'])\"\n5000000 loops, best of 5: 397 nsec per loop\n# None found\n~ $ python3 -m timeit -s \"line = 'fooddd'\" \"'a' in line and 'b' in line and 'c' in line\"\n50000000 loops, best of 5: 25.2 nsec per loop\n~ $ python3 -m timeit -s \"line = 'fooddd'\" \"all( sub_str in line for sub_str in ['a','b','c'])\"\n5000000 loops, best of 5: 325 nsec per loop\n\n",
"Both are O(mn) where m is the number of values to search and n is the length of the string.\nThere is no simple way to do asymptotically better. If you wanted to test for or instead of and, you could write a regex like a|b|c and it would take O(n) time to scan a string of length n, so long as you use a regular expression library which guarantees linear time searches (which is possible by using finite automata, but the standard library's re module doesn't give this guarantee).\nBut you want to test whether all of the patterns match, which is harder to do with a regex. If the patterns cannot overlap, then a regular expression can still work; you will need to iterate through the matches and keep track of which patterns have been found so far (e.g. using a set). Again, this will take O(n) time if you use a regex library that guarantees linear time searches.\nIf the patterns can overlap, then it's still not theoretically impossible to do with a regex (because an intersection of regular languages is regular), but it's impractical. An alternative approach would be something like the Aho–Corasick algorithm which can find all matches of a set of strings in O(n) time, including overlapping matches. This algorithm is not available in the standard library but there appear to be several third-party packages which implement it.\nAll of that said, if m is small then I expect it is hard to get any significant improvement in actual running time compared to the simple O(mn) approach, even if other algorithms theoretically have a lower complexity.\n",
"\nI want to know if there is any time-complexity difference between the two approaches.\n\nTL;DR: no.\nThe all() approach has overhead inherent in calling a function, and producing a generator for the argument to all() also has a cost. But I expect these costs each to scale as O(1) with any problem dimension you choose, and you perform each one exactly once, so they have no impact on asymptotic complexity.\nI expect execution of an all() to contribute a linear scale factor with respect to the number or elements available from the iterable presented to it. In the first program, however, changing the number of elements to test would require modifying the program, so this number is not an adjustable parameter. Therefore, scaling with respect to it is irrelevant for the purposes of the comparison we're considering.\nSupposing that the \"Do something\" is the same in both cases, there remains only the evaluations of expressions of the form x in line. The two alternatives evaluate the same number of such expressions, with the same operands, so these scale the same in the two alternatives.\n"
] | [
6,
1,
1
] | [] | [] | [
"python"
] | stackoverflow_0074643486_python.txt |
Q:
converting pandas column data from list of tuples to dict of dicts
I am trying to convert the pandas dataframe column values from this:
{'01AB': [("ABC", 5),("XYZ", 4),("LMN", 1)], '02AB_QTY': [("Other", 20),("not_Other", 150)]}
This is what I have tried till now, but it is not working:
import pandas as pd
df = pd.DataFrame.from_records([{'01AB': [("ABC", 5),("XYZ", 4),("LMN", 1)], '02AB_QTY': [("Other", 20),("not_Other", 150)]}])
col_list = ["01AB", "02AB_QTY",]
# for col in col_list:
# df[col] = df[col].apply(lambda x: {} if x is None else {key: {v[0]:v[1] for v in list_item} for key, list_item in x.items()})
df
expected output is like
{'01AB': {"ABC":5,"XYZ":4,"LMN":1}, '02AB_QTY': {"Other":20,"not_Other":150}}
A:
We can use df.applymap(), with dict comprehension to convert each list to a dict, like this:
df[col_list] = df[col_list].applymap(lambda lst: {k: v for k, v in lst})
A:
import pandas as pd
df = pd.DataFrame.from_records([{'01AB': [("ABC", 5),("XYZ", 4),("LMN", 1)], '02AB_QTY': [("Other", 20),("not_Other", 150)]}])
out_dict = dict()
for col in df.columns:
out_dict[col] = dict(df[col][0])
Output:
{'01AB': {'ABC': 5, 'XYZ': 4, 'LMN': 1},
'02AB_QTY': {'Other': 20, 'not_Other': 150}}
A:
You can use a dictionary comprehension:
out = {k: dict(v) for k, v in df.iloc[0].to_dict().items()}
Output:
{'01AB': {'ABC': 5, 'XYZ': 4, 'LMN': 1},
 '02AB_QTY': {'Other': 20, 'not_Other': 150}}
A:
Thanks for all the leads; I was able to resolve the issue as below:
import pandas as pd
df = pd.DataFrame.from_records([{'01AB': [("ABC", 5),("XYZ", 4),("LMN", 1)], '02AB_QTY': [("Other", 20),("not_Other", 150)]}])
col_list = ["01AB", "02AB_QTY",]
print(df)
for col in col_list:
df[col] = df[col].apply(lambda x: {} if x is None else dict(x))
print(df)
Output:
01AB 02AB_QTY
{'ABC': 5, 'XYZ': 4, 'LMN': 1} {'Other': 20, 'not_Other': 150}
| converting pandas column data from list of tuples to dict of dicts | I am trying to convert the pandas dataframe column values from -
{'01AB': [("ABC", 5),("XYZ", 4),("LMN", 1)], '02AB_QTY': [("Other", 20),("not_Other", 150)]}
this is what i have tried till now, but not working
import pandas as pd
df = pd.DataFrame.from_records([{'01AB': [("ABC", 5),("XYZ", 4),("LMN", 1)], '02AB_QTY': [("Other", 20),("not_Other", 150)]}])
col_list = ["01AB", "02AB_QTY",]
# for col in col_list:
# df[col] = df[col].apply(lambda x: {} if x is None else {key: {v[0]:v[1] for v in list_item} for key, list_item in x.items()})
df
expected output is like
{'01AB': {"ABC":5,"XYZ":4,"LMN":1}, '02AB_QTY': {"Other":20,"not_Other":150}}
| [
"We can use df.applymap(), with dict comprehension to convert each list to a dict, like this:\ndf[col_list] = df[col_list].applymap(lambda lst: {k: v for k, v in lst})\n\n",
"import pandas as pd\n\ndf = pd.DataFrame.from_records([{'01AB': [(\"ABC\", 5),(\"XYZ\", 4),(\"LMN\", 1)], '02AB_QTY': [(\"Other\", 20),(\"not_Other\", 150)]}])\n\nout_dict = dict()\nfor col in df.columns:\n out_dict[col] = dict(df[col][0])\n\nOutput:\n{'01AB': {'ABC': 5, 'XYZ': 4, 'LMN': 1},\n '02AB_QTY': {'Other': 20, 'not_Other': 150}}\n\n",
"You can use a dictionary comprehension:\nout = {k: dict(x) for k,v in df.iloc[0].to_dict().items()}\n\nOutput:\n{'01AB': {'ABC': 5, 'XYZ': 4, 'LMN': 1},\n '02AB_QTY': {'ABC': 5, 'XYZ': 4, 'LMN': 1}}\n\n",
"thanks for all the leads, I was able to resolve the issue like below -\nimport pandas as pd\n\ndf = pd.DataFrame.from_records([{'01AB': [(\"ABC\", 5),(\"XYZ\", 4),(\"LMN\", 1)], '02AB_QTY': [(\"Other\", 20),(\"not_Other\", 150)]}])\n\ncol_list = [\"01AB\", \"02AB_QTY\",]\n\nprint(df)\n\nfor col in col_list:\n df[col] = df[col].apply(lambda x: {} if x is None else dict(x))\n\nprint(df)\n\nOutPut -\n01AB 02AB_QTY\n{'ABC': 5, 'XYZ': 4, 'LMN': 1} {'Other': 20, 'not_Other': 150}\n\n"
] | [
1,
1,
1,
0
] | [] | [] | [
"amazon_dynamodb",
"lambda",
"pandas",
"python"
] | stackoverflow_0074643412_amazon_dynamodb_lambda_pandas_python.txt |
Q:
Does package management of a WinPython installation depend on the machine?
I have a WinPython installation on a network server that I can access from two different machines. As the installation is portable, I would expect the packages versions to be the same whether I use one or the other machine.
However, I recently downgraded tensorflow from 2.9 to 2.6 using machine A, and when I check the installed version, I get:
2.6 when running the command from machine A.
2.9 when running the command from machine B.
What should I do on machine B to fix the situation?
A:
Actually I found the issue. When downgrading tensorflow I used the --user option, so 2.6 version was installed in a user-specific location of machine A, not accessible from machine B.
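A quick sketch of how to confirm that kind of mismatch from each machine (adjust the package name as needed):
import site
import tensorflow as tf

print(tf.__version__, tf.__file__)  # which copy this machine actually imports
print(site.getusersitepackages())   # where "pip install --user" puts packages here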
| Does package management of a WinPython installation depend on the machine? | I have a WinPython installation on a network server that I can access from two different machines. As the installation is portable, I would expect the packages versions to be the same whether I use one or the other machine.
However, I recently downgraded tensorflow from 2.9 to 2.6 using machine A, and when I check the installed version, I get:
2.6 when running the command from machine A.
2.9 when running the command from machine B.
What should I do on machine B to fix the situation ?
| [
"Actually I found the issue. When downgrading tensorflow I used the --user option, so 2.6 version was installed in a user-specific location of machine A, not accessible from machine B.\n"
] | [
0
] | [] | [] | [
"pip",
"python",
"python_install"
] | stackoverflow_0074643168_pip_python_python_install.txt |
Q:
Random Number Script Is not looping
for y in (random.randint(0,9)) in (x):
TypeError: argument of type 'int' is not iterable
import random
x = (random.randint(0,9))
print (x)
y = (random.randint(0,9))
print (y)
for y in (random.randint(0,9)) in (x):
if (y)==(x):
break
A:
What does (random.randint(0,9)) in (x) mean?
You are trying to iterate over an int/number. You need to create an iterable object to loop over, like a list, tuple, range, etc.
A:
If you want to iterate until x==y you should use a while loop. for loops iterate over a sequence.
You could do something like
import random
x = (random.randint(0,9))
print (x)
y = (random.randint(0,9))
print (y)
while y != x:
y = (random.randint(0,9))
In general, do not put assignments inside conditions (we are not in C!)
| Random Number Script Is not looping |
for y in (random.randint(0,9)) in (x):
TypeError: argument of type 'int' is not iterable
import random
x = (random.randint(0,9))
print (x)
y = (random.randint(0,9))
print (y)
for y in (random.randint(0,9)) in (x):
if (y)==(x):
break
| [
"what does (random.randint(0,9)) in (x) means?\nYou try iterate over int/number. You need to create an iterable object to loop over it like list, tuple, range etc.\n",
"If you want to iterate until x==y you should use a while loop. for loops iterate over a sequence.\nYou could do something like\nimport random\nx = (random.randint(0,9))\nprint (x)\ny = (random.randint(0,9))\nprint (y)\nwhile y != x:\n y = (random.randint(0,9))\n\nIn general do not put assignations inside conditions (we are not in C!)\n"
] | [
0,
0
] | [] | [] | [
"loops",
"numbers",
"python",
"random",
"typeerror"
] | stackoverflow_0074644103_loops_numbers_python_random_typeerror.txt |
Q:
Pytest/Locust: ModuleNotFoundError No module named
I've tried to find answers on similar topics, but... nothing helped.
When I run my regular tests with pytest -m blablabla - there are no problems, but
when I run locust by command:
locust -f my_locustfiles/instr_performance.py
then I get this:
(venv) evgen@TLL amapitest % locust -f my_locustfiles/instr_performance.py
Traceback (most recent call last):
File "/Users/evgen/venv/bin/locust", line 8, in <module>
sys.exit(main())
File "/Users/evgen/venv/lib/python3.10/site-packages/locust/main.py", line 70, in main
docstring, _user_classes, shape_class = load_locustfile(_locustfile)
File "/Users/evgen/venv/lib/python3.10/site-packages/locust/util/load_locustfile.py", line 58, in load_locustfile
imported = source.load_module()
File "<frozen importlib._bootstrap_external>", line 548, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 1063, in load_module
File "<frozen importlib._bootstrap_external>", line 888, in load_module
File "<frozen importlib._bootstrap>", line 290, in _load_module_shim
File "<frozen importlib._bootstrap>", line 719, in _load
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/evgen/PycharmProjects/api-testing/amapitest/my_locustfiles/instr_performance.py", line 8, in <module>
from amapitest.src.helpers.jwt_generator import generate_json_web_token
ModuleNotFoundError: No module named 'amapitest.src.helpers'
my project structure:
├── amapitest
│ ├── my_locustfiles
│ │ ├── __init__.py
│ │ └── instr_performance.py
│ ├── src
│ │ ├── configs
│ │ ├── dao
│ │ ├── helpers
│ │ └── utilities
│ ├── tests
│ └── tmp
├── amapitest.egg-info
├── build
├── dist
├── env.sh
├── requirements.txt
└── setup.py
additional info:
locust 2.11.0
pytest 7.1.2
python 3.10
macOS monterey
A:
Your current directory is automatically added to sys.path by locust, according to the documentation https://docs.locust.io/en/stable/writing-a-locustfile.html#how-to-structure-your-test-code
Try going to the parent directory and run
locust -f amapitest/my_locustfiles/instr_performance.py
A:
I've solved my problem by moving the "my_locustfiles" folder into the root directory of the project and running the test from that directory.
I still don't understand why locust couldn't find the imported methods when the locustfile was located in "amapitest",
but... anyway, thanks everyone.
A:
print(sys.path) to see the paths Python uses to find modules.
If the path is found, e.g. "/Users/user/SomeFolder/amapitest/", try @Cyberwiz's comment: add an __init__.py file in every folder to turn the folders into modules, and execute locust from the amapitest folder.
If you want to execute locust from a sub-folder and the module is still not found, try step 3.
Within instr_performance.py, append the parent path like this:
sys.path.append(
os.path.join(
os.path.dirname(__file__), '..'
)
)
You will see '/Users/user/SomeFolder/amapitest/my_locustfiles/..' when you print(sys.path); I think this workaround may help.
| Pytest/Locust: ModuleNotFoundError No module named | Ive tried to find anwsers on similar topics, but... nothing helped.
When I run my regular tests with pytest -m blablabla - there are no problems, but
when I run locust by command:
locust -f my_locustfiles/instr_performance.py
than got this:
(venv) evgen@TLL amapitest % locust -f my_locustfiles/instr_performance.py
Traceback (most recent call last):
File "/Users/evgen/venv/bin/locust", line 8, in <module>
sys.exit(main())
File "/Users/evgen/venv/lib/python3.10/site-packages/locust/main.py", line 70, in main
docstring, _user_classes, shape_class = load_locustfile(_locustfile)
File "/Users/evgen/venv/lib/python3.10/site-packages/locust/util/load_locustfile.py", line 58, in load_locustfile
imported = source.load_module()
File "<frozen importlib._bootstrap_external>", line 548, in _check_name_wrapper
File "<frozen importlib._bootstrap_external>", line 1063, in load_module
File "<frozen importlib._bootstrap_external>", line 888, in load_module
File "<frozen importlib._bootstrap>", line 290, in _load_module_shim
File "<frozen importlib._bootstrap>", line 719, in _load
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/Users/evgen/PycharmProjects/api-testing/amapitest/my_locustfiles/instr_performance.py", line 8, in <module>
from amapitest.src.helpers.jwt_generator import generate_json_web_token
ModuleNotFoundError: No module named 'amapitest.src.helpers'
my project structure:
├── amapitest
│ ├── my_locustfiles
│ │ ├── __init__.py
│ │ └── instr_performance.py
│ ├── src
│ │ ├── configs
│ │ ├── dao
│ │ ├── helpers
│ │ └── utilities
│ ├── tests
│ └── tmp
├── amapitest.egg-info
├── build
├── dist
├── env.sh
├── requirements.txt
└── setup.py
additional info:
locust 2.11.0
pytest 7.1.2
python 3.10
macOS monterey
| [
"Your current directory is automatically added to sys.path by locust, according to the documentation https://docs.locust.io/en/stable/writing-a-locustfile.html#how-to-structure-your-test-code\nTry going to the parent directory and run\nlocust -f amapitest/my_locustfiles/instr_performance.py\n",
"Ive solved my problem by moving \"my_locustfiles\" folder into root directory of project and running test from this directory.\nI still dont understand why locust cant find imported methods when locust file was located in \"amapitest\"\nbut...anyway, thanks everyone\n",
"\nprint(sys.path) to see the path that helps Python to find modules.\n\nIf the path is found. e.g. Let's say \"/Users/user/SomeFolder/amapitest/\", try @Cyberwiz's comment, adding __init__.py file in every folder to turn folders to modules. Execute locust at amapitest folder\n\n\nIf you want to execute locust at sub-folder and module not found still happened, try Step 3.\n\nWithin instr_performance.py, append parent path like this\nsys.path.append(\n os.path.join(\n os.path.dirname(__file__), '..'\n )\n)\n\nYou will see '/Users/user/SomeFolder/amapitest/my_locustfiles/..' when print(sys.path), I think this workaround may helps\n\n"
] | [
0,
0,
0
] | [] | [] | [
"locust",
"performance_testing",
"pytest",
"python",
"web_api_testing"
] | stackoverflow_0073375591_locust_performance_testing_pytest_python_web_api_testing.txt |
Q:
Tracking progress of joblib.Parallel execution
Is there a simple way to track the overall progress of a joblib.Parallel execution?
I have a long-running execution composed of thousands of jobs, which I want to track and record in a database. However, to do that, whenever Parallel finishes a task, I need it to execute a callback, reporting how many remaining jobs are left.
I've accomplished a similar task before with Python's stdlib multiprocessing.Pool, by launching a thread that records the number of pending jobs in Pool's job list.
Looking at the code, Parallel inherits Pool, so I thought I could pull off the same trick, but it doesn't seem to use that list, and I haven't been able to figure out how else to "read" its internal status.
A:
Yet another step ahead from dano's and Connor's answers is to wrap the whole thing as a context manager:
import contextlib
import joblib
from tqdm import tqdm
@contextlib.contextmanager
def tqdm_joblib(tqdm_object):
"""Context manager to patch joblib to report into tqdm progress bar given as argument"""
class TqdmBatchCompletionCallback(joblib.parallel.BatchCompletionCallBack):
def __call__(self, *args, **kwargs):
tqdm_object.update(n=self.batch_size)
return super().__call__(*args, **kwargs)
old_batch_callback = joblib.parallel.BatchCompletionCallBack
joblib.parallel.BatchCompletionCallBack = TqdmBatchCompletionCallback
try:
yield tqdm_object
finally:
joblib.parallel.BatchCompletionCallBack = old_batch_callback
tqdm_object.close()
Then you can use it like this and don't leave monkey patched code once you're done:
from math import sqrt
from joblib import Parallel, delayed
with tqdm_joblib(tqdm(desc="My calculation", total=10)) as progress_bar:
Parallel(n_jobs=16)(delayed(sqrt)(i**2) for i in range(10))
which is awesome I think and it looks similar to tqdm pandas integration.
A:
Why can't you simply use tqdm? The following worked for me
from joblib import Parallel, delayed
from datetime import datetime
from tqdm import tqdm
def myfun(x):
return x**2
results = Parallel(n_jobs=8)(delayed(myfun)(i) for i in tqdm(range(1000)))
100%|██████████| 1000/1000 [00:00<00:00, 10563.37it/s]
A:
The documentation you linked to states that Parallel has an optional progress meter. It's implemented by using the callback keyword argument provided by multiprocessing.Pool.apply_async:
# This is inside a dispatch function
self._lock.acquire()
job = self._pool.apply_async(SafeFunction(func), args,
kwargs, callback=CallBack(self.n_dispatched, self))
self._jobs.append(job)
self.n_dispatched += 1
...
class CallBack(object):
""" Callback used by parallel: it is used for progress reporting, and
to add data to be processed
"""
def __init__(self, index, parallel):
self.parallel = parallel
self.index = index
def __call__(self, out):
self.parallel.print_progress(self.index)
if self.parallel._original_iterable:
self.parallel.dispatch_next()
And here's print_progress:
def print_progress(self, index):
elapsed_time = time.time() - self._start_time
# This is heuristic code to print only 'verbose' times a messages
# The challenge is that we may not know the queue length
if self._original_iterable:
if _verbosity_filter(index, self.verbose):
return
self._print('Done %3i jobs | elapsed: %s',
(index + 1,
short_format_time(elapsed_time),
))
else:
# We are finished dispatching
queue_length = self.n_dispatched
# We always display the first loop
if not index == 0:
# Display depending on the number of remaining items
# A message as soon as we finish dispatching, cursor is 0
cursor = (queue_length - index + 1
- self._pre_dispatch_amount)
frequency = (queue_length // self.verbose) + 1
is_last_item = (index + 1 == queue_length)
if (is_last_item or cursor % frequency):
return
remaining_time = (elapsed_time / (index + 1) *
(self.n_dispatched - index - 1.))
self._print('Done %3i out of %3i | elapsed: %s remaining: %s',
(index + 1,
queue_length,
short_format_time(elapsed_time),
short_format_time(remaining_time),
))
The way they implement this is kind of weird, to be honest - it seems to assume that tasks will always be completed in the order that they're started. The index variable that goes to print_progress is just the self.n_dispatched variable at the time the job was actually started. So the first job launched will always finish with an index of 0, even if say, the third job finished first. It also means they don't actually keep track of the number of completed jobs. So there's no instance variable for you to monitor.
I think your best bet is to make your own CallBack class, and monkey patch Parallel:
from math import sqrt
from collections import defaultdict
from joblib import Parallel, delayed
class CallBack(object):
completed = defaultdict(int)
def __init__(self, index, parallel):
self.index = index
self.parallel = parallel
def __call__(self, index):
CallBack.completed[self.parallel] += 1
print("done with {}".format(CallBack.completed[self.parallel]))
if self.parallel._original_iterable:
self.parallel.dispatch_next()
import joblib.parallel
joblib.parallel.CallBack = CallBack
if __name__ == "__main__":
print(Parallel(n_jobs=2)(delayed(sqrt)(i**2) for i in range(10)))
Output:
done with 1
done with 2
done with 3
done with 4
done with 5
done with 6
done with 7
done with 8
done with 9
done with 10
[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]
That way, your callback gets called whenever a job completes, rather than the default one.
A:
Expanding on dano's answer for the newest version of the joblib library. There were a couple of changes to the internal implementation.
from joblib import Parallel, delayed
from collections import defaultdict
# patch joblib progress callback
class BatchCompletionCallBack(object):
completed = defaultdict(int)
def __init__(self, time, index, parallel):
self.index = index
self.parallel = parallel
def __call__(self, index):
BatchCompletionCallBack.completed[self.parallel] += 1
print("done with {}".format(BatchCompletionCallBack.completed[self.parallel]))
if self.parallel._original_iterator is not None:
self.parallel.dispatch_next()
import joblib.parallel
joblib.parallel.BatchCompletionCallBack = BatchCompletionCallBack
A:
TLDR solution:
Works with joblib 0.14.0 and tqdm 4.46.0 using python 3.5. Credits to frenzykryger for contextlib suggestions, dano and Connor for monkey patching idea.
import contextlib
import joblib
from tqdm import tqdm
from joblib import Parallel, delayed
@contextlib.contextmanager
def tqdm_joblib(tqdm_object):
"""Context manager to patch joblib to report into tqdm progress bar given as argument"""
def tqdm_print_progress(self):
if self.n_completed_tasks > tqdm_object.n:
n_completed = self.n_completed_tasks - tqdm_object.n
tqdm_object.update(n=n_completed)
original_print_progress = joblib.parallel.Parallel.print_progress
joblib.parallel.Parallel.print_progress = tqdm_print_progress
try:
yield tqdm_object
finally:
joblib.parallel.Parallel.print_progress = original_print_progress
tqdm_object.close()
You can use this the same way as described by frenzykryger
import time
def some_method(wait_time):
time.sleep(wait_time)
with tqdm_joblib(tqdm(desc="My method", total=10)) as progress_bar:
Parallel(n_jobs=2)(delayed(some_method)(0.2) for i in range(10))
Longer explanation:
The solution by Jon is simple to implement, but it only measures the dispatched task. If the task takes a long time, the bar will be stuck at 100% while waiting for the last dispatched task to finish execution.
The context manager approach by frenzykryger, improved from dano and Connor, is better, but the BatchCompletionCallBack can also be called with ImmediateResult before the task completes (See Intermediate results from joblib). This is going to get us a count that is over 100%.
Instead of monkey patching the BatchCompletionCallBack, we can just patch the print_progress function in Parallel. The BatchCompletionCallBack already calls this print_progress anyway. If the verbose is set (i.e. Parallel(n_jobs=2, verbose=100)), the print_progress will be printing out completed tasks, though not as nice as tqdm. Looking at the code, the print_progress is a class method, so it already has self.n_completed_tasks that logs the number we want. All we have to do is just to compare this with the current state of joblib's progress and update only if there is a difference.
This was tested in joblib 0.14.0 and tqdm 4.46.0 using python 3.5.
A:
Text progress bar
One more variant for those who want a text progress bar without additional modules like tqdm. Valid for joblib==0.11 with Python 3.5.2 on Linux as of 16.04.2018; it shows progress upon subtask completion.
Redefine native class:
class BatchCompletionCallBack(object):
# Added code - start
global total_n_jobs
# Added code - end
def __init__(self, dispatch_timestamp, batch_size, parallel):
self.dispatch_timestamp = dispatch_timestamp
self.batch_size = batch_size
self.parallel = parallel
def __call__(self, out):
self.parallel.n_completed_tasks += self.batch_size
this_batch_duration = time.time() - self.dispatch_timestamp
self.parallel._backend.batch_completed(self.batch_size,
this_batch_duration)
self.parallel.print_progress()
# Added code - start
progress = self.parallel.n_completed_tasks / total_n_jobs
print(
"\rProgress: [{0:50s}] {1:.1f}%".format('#' * int(progress * 50), progress*100)
, end="", flush=True)
if self.parallel.n_completed_tasks == total_n_jobs:
print('\n')
# Added code - end
if self.parallel._original_iterator is not None:
self.parallel.dispatch_next()
import joblib.parallel
import time
joblib.parallel.BatchCompletionCallBack = BatchCompletionCallBack
Define global constant before usage with total number of jobs:
total_n_jobs = 10
This will result in something like this:
Progress: [######################################## ] 80.0%
A:
Here's another answer to your question with the following syntax:
aprun = ParallelExecutor(n_jobs=5)
a1 = aprun(total=25)(delayed(func)(i ** 2 + j) for i in range(5) for j in range(5))
a2 = aprun(total=16)(delayed(func)(i ** 2 + j) for i in range(4) for j in range(4))
a2 = aprun(bar='txt')(delayed(func)(i ** 2 + j) for i in range(4) for j in range(4))
a2 = aprun(bar=None)(delayed(func)(i ** 2 + j) for i in range(4) for j in range(4))
https://stackoverflow.com/a/40415477/232371
A:
In Jupyter, the plain tqdm bar starts a new line in the output each time it updates, so for Jupyter Notebook use tqdm.notebook instead:
No sleeps:
from joblib import Parallel, delayed
from datetime import datetime
from tqdm import notebook
def myfun(x):
return x**2
results = Parallel(n_jobs=8)(delayed(myfun)(i) for i in notebook.tqdm(range(1000)))
100% 1000/1000 [00:06<00:00, 143.70it/s]
With time.sleep:
from joblib import Parallel, delayed
from datetime import datetime
from tqdm import notebook
from random import randint
import time
def myfun(x):
time.sleep(randint(1, 5))
return x**2
results = Parallel(n_jobs=7)(delayed(myfun)(i) for i in notebook.tqdm(range(100)))
What I'm currently using instead of joblib.Parallel:
import concurrent.futures
from tqdm import notebook
from random import randint
import time
iterable = [i for i in range(50)]
def myfun(x):
time.sleep(randint(1, 5))
return x**2
def run(func, iterable, max_workers=8):
with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
results = list(notebook.tqdm(executor.map(func, iterable), total=len(iterable)))
return results
run(myfun, iterable)
A:
Setting verbose=13 was enough for me: https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html
I get a line on stderr that says something like:
[Parallel(n_jobs=16)]: Done 134 tasks | elapsed: 7.7min
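For reference, that just means passing the verbose keyword when constructing Parallel; a minimal sketch:
from math import sqrt
from joblib import Parallel, delayed

# With verbose greater than 10, joblib prints a progress line for every completed batch
results = Parallel(n_jobs=16, verbose=13)(delayed(sqrt)(i**2) for i in range(1000))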
| Tracking progress of joblib.Parallel execution | Is there a simple way to track the overall progress of a joblib.Parallel execution?
I have a long-running execution composed of thousands of jobs, which I want to track and record in a database. However, to do that, whenever Parallel finishes a task, I need it to execute a callback, reporting how many remaining jobs are left.
I've accomplished a similar task before with Python's stdlib multiprocessing.Pool, by launching a thread that records the number of pending jobs in Pool's job list.
Looking at the code, Parallel inherits Pool, so I thought I could pull off the same trick, but it doesn't seem to use these that list, and I haven't been able to figure out how else to "read" it's internal status any other way.
| [
"Yet another step ahead from dano's and Connor's answers is to wrap the whole thing as a context manager:\nimport contextlib\nimport joblib\nfrom tqdm import tqdm\n\[email protected]\ndef tqdm_joblib(tqdm_object):\n \"\"\"Context manager to patch joblib to report into tqdm progress bar given as argument\"\"\"\n class TqdmBatchCompletionCallback(joblib.parallel.BatchCompletionCallBack):\n def __call__(self, *args, **kwargs):\n tqdm_object.update(n=self.batch_size)\n return super().__call__(*args, **kwargs)\n\n old_batch_callback = joblib.parallel.BatchCompletionCallBack\n joblib.parallel.BatchCompletionCallBack = TqdmBatchCompletionCallback\n try:\n yield tqdm_object\n finally:\n joblib.parallel.BatchCompletionCallBack = old_batch_callback\n tqdm_object.close()\n\nThen you can use it like this and don't leave monkey patched code once you're done:\nfrom math import sqrt\nfrom joblib import Parallel, delayed\n\nwith tqdm_joblib(tqdm(desc=\"My calculation\", total=10)) as progress_bar:\n Parallel(n_jobs=16)(delayed(sqrt)(i**2) for i in range(10))\n\nwhich is awesome I think and it looks similar to tqdm pandas integration.\n",
"Why can't you simply use tqdm? The following worked for me\nfrom joblib import Parallel, delayed\nfrom datetime import datetime\nfrom tqdm import tqdm\n\ndef myfun(x):\n return x**2\n\nresults = Parallel(n_jobs=8)(delayed(myfun)(i) for i in tqdm(range(1000))\n100%|██████████| 1000/1000 [00:00<00:00, 10563.37it/s]\n\n",
"The documentation you linked to states that Parallel has an optional progress meter. It's implemented by using the callback keyword argument provided by multiprocessing.Pool.apply_async:\n# This is inside a dispatch function\nself._lock.acquire()\njob = self._pool.apply_async(SafeFunction(func), args,\n kwargs, callback=CallBack(self.n_dispatched, self))\nself._jobs.append(job)\nself.n_dispatched += 1\n\n...\nclass CallBack(object):\n \"\"\" Callback used by parallel: it is used for progress reporting, and\n to add data to be processed\n \"\"\"\n def __init__(self, index, parallel):\n self.parallel = parallel\n self.index = index\n\n def __call__(self, out):\n self.parallel.print_progress(self.index)\n if self.parallel._original_iterable:\n self.parallel.dispatch_next()\n\nAnd here's print_progress:\ndef print_progress(self, index):\n elapsed_time = time.time() - self._start_time\n\n # This is heuristic code to print only 'verbose' times a messages\n # The challenge is that we may not know the queue length\n if self._original_iterable:\n if _verbosity_filter(index, self.verbose):\n return\n self._print('Done %3i jobs | elapsed: %s',\n (index + 1,\n short_format_time(elapsed_time),\n ))\n else:\n # We are finished dispatching\n queue_length = self.n_dispatched\n # We always display the first loop\n if not index == 0:\n # Display depending on the number of remaining items\n # A message as soon as we finish dispatching, cursor is 0\n cursor = (queue_length - index + 1\n - self._pre_dispatch_amount)\n frequency = (queue_length // self.verbose) + 1\n is_last_item = (index + 1 == queue_length)\n if (is_last_item or cursor % frequency):\n return\n remaining_time = (elapsed_time / (index + 1) *\n (self.n_dispatched - index - 1.))\n self._print('Done %3i out of %3i | elapsed: %s remaining: %s',\n (index + 1,\n queue_length,\n short_format_time(elapsed_time),\n short_format_time(remaining_time),\n ))\n\nThe way they implement this is kind of weird, to be honest - it seems to assume that tasks will always be completed in the order that they're started. The index variable that goes to print_progress is just the self.n_dispatched variable at the time the job was actually started. So the first job launched will always finish with an index of 0, even if say, the third job finished first. It also means they don't actually keep track of the number of completed jobs. So there's no instance variable for you to monitor.\nI think your best best is to make your own CallBack class, and monkey patch Parallel:\nfrom math import sqrt\nfrom collections import defaultdict\nfrom joblib import Parallel, delayed\n\nclass CallBack(object):\n completed = defaultdict(int)\n\n def __init__(self, index, parallel):\n self.index = index\n self.parallel = parallel\n\n def __call__(self, index):\n CallBack.completed[self.parallel] += 1\n print(\"done with {}\".format(CallBack.completed[self.parallel]))\n if self.parallel._original_iterable:\n self.parallel.dispatch_next()\n\nimport joblib.parallel\njoblib.parallel.CallBack = CallBack\n\nif __name__ == \"__main__\":\n print(Parallel(n_jobs=2)(delayed(sqrt)(i**2) for i in range(10)))\n\nOutput:\ndone with 1\ndone with 2\ndone with 3\ndone with 4\ndone with 5\ndone with 6\ndone with 7\ndone with 8\ndone with 9\ndone with 10\n[0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0]\n\nThat way, your callback gets called whenever a job completes, rather than the default one.\n",
"Expanding on dano's answer for the newest version of the joblib library. There were a couple of changes to the internal implementation.\nfrom joblib import Parallel, delayed\nfrom collections import defaultdict\n\n# patch joblib progress callback\nclass BatchCompletionCallBack(object):\n completed = defaultdict(int)\n\n def __init__(self, time, index, parallel):\n self.index = index\n self.parallel = parallel\n\n def __call__(self, index):\n BatchCompletionCallBack.completed[self.parallel] += 1\n print(\"done with {}\".format(BatchCompletionCallBack.completed[self.parallel]))\n if self.parallel._original_iterator is not None:\n self.parallel.dispatch_next()\n\nimport joblib.parallel\njoblib.parallel.BatchCompletionCallBack = BatchCompletionCallBack\n\n",
"TLDR solution:\nWorks with joblib 0.14.0 and tqdm 4.46.0 using python 3.5. Credits to frenzykryger for contextlib suggestions, dano and Connor for monkey patching idea.\nimport contextlib\nimport joblib\nfrom tqdm import tqdm\nfrom joblib import Parallel, delayed\n\[email protected]\ndef tqdm_joblib(tqdm_object):\n \"\"\"Context manager to patch joblib to report into tqdm progress bar given as argument\"\"\"\n\n def tqdm_print_progress(self):\n if self.n_completed_tasks > tqdm_object.n:\n n_completed = self.n_completed_tasks - tqdm_object.n\n tqdm_object.update(n=n_completed)\n\n original_print_progress = joblib.parallel.Parallel.print_progress\n joblib.parallel.Parallel.print_progress = tqdm_print_progress\n\n try:\n yield tqdm_object\n finally:\n joblib.parallel.Parallel.print_progress = original_print_progress\n tqdm_object.close()\n\nYou can use this the same way as described by frenzykryger\nimport time\ndef some_method(wait_time):\n time.sleep(wait_time)\n\nwith tqdm_joblib(tqdm(desc=\"My method\", total=10)) as progress_bar:\n Parallel(n_jobs=2)(delayed(some_method)(0.2) for i in range(10))\n\nLonger explanation:\nThe solution by Jon is simple to implement, but it only measures the dispatched task. If the task takes a long time, the bar will be stuck at 100% while waiting for the last dispatched task to finish execution.\nThe context manager approach by frenzykryger, improved from dano and Connor, is better, but the BatchCompletionCallBack can also be called with ImmediateResult before the task completes (See Intermediate results from joblib). This is going to get us a count that is over 100%.\nInstead of monkey patching the BatchCompletionCallBack, we can just patch the print_progress function in Parallel. The BatchCompletionCallBack already calls this print_progress anyway. If the verbose is set (i.e. Parallel(n_jobs=2, verbose=100)), the print_progress will be printing out completed tasks, though not as nice as tqdm. Looking at the code, the print_progress is a class method, so it already has self.n_completed_tasks that logs the number we want. All we have to do is just to compare this with the current state of joblib's progress and update only if there is a difference.\nThis was tested in joblib 0.14.0 and tqdm 4.46.0 using python 3.5.\n",
"Text progress bar\nOne more variant for those, who want text progress bar without additional modules like tqdm. Actual for joblib=0.11, python 3.5.2 on linux at 16.04.2018 and shows progress upon subtask completion.\nRedefine native class:\nclass BatchCompletionCallBack(object):\n # Added code - start\n global total_n_jobs\n # Added code - end\n def __init__(self, dispatch_timestamp, batch_size, parallel):\n self.dispatch_timestamp = dispatch_timestamp\n self.batch_size = batch_size\n self.parallel = parallel\n\n def __call__(self, out):\n self.parallel.n_completed_tasks += self.batch_size\n this_batch_duration = time.time() - self.dispatch_timestamp\n\n self.parallel._backend.batch_completed(self.batch_size,\n this_batch_duration)\n self.parallel.print_progress()\n # Added code - start\n progress = self.parallel.n_completed_tasks / total_n_jobs\n print(\n \"\\rProgress: [{0:50s}] {1:.1f}%\".format('#' * int(progress * 50), progress*100)\n , end=\"\", flush=True)\n if self.parallel.n_completed_tasks == total_n_jobs:\n print('\\n')\n # Added code - end\n if self.parallel._original_iterator is not None:\n self.parallel.dispatch_next()\n\nimport joblib.parallel\nimport time\njoblib.parallel.BatchCompletionCallBack = BatchCompletionCallBack\n\nDefine global constant before usage with total number of jobs:\ntotal_n_jobs = 10\n\nThis will result in something like this:\nProgress: [######################################## ] 80.0%\n\n",
"Here's another answer to your question with the following syntax:\naprun = ParallelExecutor(n_jobs=5)\n\na1 = aprun(total=25)(delayed(func)(i ** 2 + j) for i in range(5) for j in range(5))\na2 = aprun(total=16)(delayed(func)(i ** 2 + j) for i in range(4) for j in range(4))\na2 = aprun(bar='txt')(delayed(func)(i ** 2 + j) for i in range(4) for j in range(4))\na2 = aprun(bar=None)(delayed(func)(i ** 2 + j) for i in range(4) for j in range(4))\n\nhttps://stackoverflow.com/a/40415477/232371\n",
"In Jupyter tqdm starts a new line in the output each time it outputs.\nSo for Jupyter Notebook it will be:\nFor use in Jupyter notebook.\nNo sleeps:\nfrom joblib import Parallel, delayed\nfrom datetime import datetime\nfrom tqdm import notebook\n\ndef myfun(x):\n return x**2\n\nresults = Parallel(n_jobs=8)(delayed(myfun)(i) for i in notebook.tqdm(range(1000))) \n\n100% 1000/1000 [00:06<00:00, 143.70it/s]\nWith time.sleep:\nfrom joblib import Parallel, delayed\nfrom datetime import datetime\nfrom tqdm import notebook\nfrom random import randint\nimport time\n\ndef myfun(x):\n time.sleep(randint(1, 5))\n return x**2\n\nresults = Parallel(n_jobs=7)(delayed(myfun)(i) for i in notebook.tqdm(range(100)))\n\nWhat I'm currently using instead of joblib.Parallel:\nimport concurrent.futures\nfrom tqdm import notebook\nfrom random import randint\nimport time\n\niterable = [i for i in range(50)]\n\ndef myfun(x):\n time.sleep(randint(1, 5))\n return x**2\n\ndef run(func, iterable, max_workers=8):\n with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:\n results = list(notebook.tqdm(executor.map(func, iterable), total=len(iterable)))\n return results\n\nrun(myfun, iterable)\n\n",
"Setting verbose=13 was enough for me: https://joblib.readthedocs.io/en/latest/generated/joblib.Parallel.html\nI get a line on stderr that says something like:\n[Parallel(n_jobs=16)]: Done 134 tasks | elapsed: 7.7min\n\n"
] | [
77,
25,
22,
11,
7,
4,
1,
0,
0
] | [] | [] | [
"joblib",
"multiprocessing",
"multithreading",
"parallel_processing",
"python"
] | stackoverflow_0024983493_joblib_multiprocessing_multithreading_parallel_processing_python.txt |
Q:
Individually specify nested dict fields in pydantic model
Is it possible to specify the individual fields in a dict contained inside a pydantic model? I was not able to find anything but maybe I'm using the wrong keywords. I'm thinking of something like this:
from pydantic import BaseModel
class User(BaseModel):
id: int
name: str = 'Jane Doe'
stats = {
age: int,
height: float,
}
EDIT: After some feedback I feel I need to clarify a bit some of the conditions of this and give a more complete example. What I'm looking to do is more similar to this:
from pydantic import BaseModel, NonNegativeInt, NonNegativeFloat
from pydantic.generics import GenericModel
DataT = TypeVar('DataT')
class Trait(GenericModel, Generic[DataT]):
descript: str
value: DataT
class CharacterBarbarian(BaseModel):
id: int
name: str = 'Barbarok'
traits = {
strength: Trait[NonNegativeInt] = Trait[NonNegativeInt](descript='the force', value=18),
height: Trait[NonNegativeFloat] = Trait[NonNegativeFloat](descript='tallness', value=1.8),
weight: Trait[NonNegativeFloat] = Trait[NonNegativeFloat](descript='width', value=92.1),
}
class CharacterWizard(BaseModel):
id: int
name: str = 'Supremus'
traits = {
intelligence: Trait[NonNegativeInt] = Trait[NonNegativeInt](descript='smarts', value=16),
spells: Trait[NonNegativeInt] = Trait[NonNegativeInt](descript='number of them', value=4),
}
SavedWizard_dict = { # Read from file for example
    'id': 1234,
    'name': "Gandalf",
    'traits': {
        'intelligence': {'descript': 'smarts', 'value': 20},
        'spells': {'descript': 'number of them', 'value': 100},
    },
}
SavedWizard = CharacterWizard(**SavedWizard_dict)
So basically I'm trying to leverage the intrinsic ability of pydantic to serialize/deserialize dict/json to save and initialize my classes. At the same time, these pydantic classes are composed of a list/dict of specific versions of a generic pydantic class, but the selection of these changes from class to class.
A:
You can do something similar using nested classes:
from pydantic import BaseModel
class UserStats(BaseModel):
age: int
height: float
class User(BaseModel):
id: int
name = 'Jane Doe'
stats: UserStats
Then when you construct any User instance you can pass the stats field as a dictionary and it would be converted automatically:
user = User(id=1234, stats={"age": 30, "height": 180.0})
The only difference is that the stats field of User is a class (instance of UserStats) so if you want to access fields of it you need to do so using attribute access not using dictionary access:
print(user.stats.age) # ok!
print(user.stats["age"]) # not ok...
If you need the stats attribute to be a dictionary, then you could use TypedDict from the typing_extensions (Python 3.7) or typing (Python 3.8+) module:
from typing import TypedDict
from pydantic import BaseModel
class UserStats(TypedDict):
age: int
height: float
class User(BaseModel):
id: int
name = 'Jane Doe'
stats: UserStats
user = User(id=1234, stats={"age": 30, "height": 180.0})
print(user.stats["age"]) # will work!
EDIT:
As mentioned in the comments of this answer the question author did not want to globally define the UserStats class to avoid pollution.
This can be solved by defining the class directly inside the User class:
class User(BaseModel):
id: int
name = "Jane Doe"
class Stats(TypedDict):
age: int
height: float
stats: Stats
This allows for multiple classes like User to each define their stats without duplicated Stats classes in the global namespace.
A more concise option, but one less friendly to type checkers and language servers, would be to use the functional API of TypedDict:
class User(BaseModel):
id: int
name = "Jane Doe"
stats: TypedDict("Stats", age=int, height=float)
A:
Just for the sake of completeness, you can also make use of the functional API provided by TypedDict to write your model like this:
from typing import TypedDict
from pydantic import BaseModel
class User(BaseModel):
id: int
name = 'Jane Doe'
stats: TypedDict("Stats", {"age": int, "height": float})
I can't think of a way to make this more concise. I was not sure at first regarding how this plays with type checkers, but at least PyCharm with the Pydantic plugin seems to have no trouble correctly inferring the types and spitting out warnings, if you try to provide a wrongly typed value in the stats dictionary.
| Individually specify nested dict fields in pydantic model | Is it possible to specify the individual fields in a dict contained inside a pydantic model? I was not able to find anything but maybe I'm using the wrong keywords. I'm thinking of something like this:
from pydantic import BaseModel
class User(BaseModel):
id: int
name: str = 'Jane Doe'
stats = {
age: int,
height: float,
}
EDIT: After some feedback I feel I need to clarify a bit some of the conditions of this and give a more complete example. What I'm looking to do is more similar to this:
from pydantic import BaseModel, NonNegativeInt, NonNegativeFloat
from pydantic.generics import GenericModel
DataT = TypeVar('DataT')
class Trait(GenericModel, Generic[DataT]):
descript: str
value: DataT
class CharacterBarbarian(BaseModel):
id: int
name: str = 'Barbarok'
traits = {
strength: Trait[NonNegativeInt] = Trait[NonNegativeInt](descript='the force', value=18),
height: Trait[NonNegativeFloat] = Trait[NonNegativeFloat](descript='tallness', value=1.8),
weight: Trait[NonNegativeFloat] = Trait[NonNegativeFloat](descript='width', value=92.1),
}
class CharacterWizard(BaseModel):
id: int
name: str = 'Supremus'
traits = {
intelligence: Trait[NonNegativeInt] = Trait[NonNegativeInt](descript='smarts', value=16),
spells: Trait[NonNegativeInt] = Trait[NonNegativeInt](descript='number of them', value=4),
}
SavedWizard_dict = { # Read from file for example
    'id': 1234,
    'name': "Gandalf",
    'traits': {
        'intelligence': {'descript': 'smarts', 'value': 20},
        'spells': {'descript': 'number of them', 'value': 100},
    },
}
SavedWizard = CharacterWizard(**SavedWizard_dict)
So basically I'm trying to leverage the intrinsic ability of pydantic to serialize/deserialize dict/json to save and initialize my classes. At the same time, these pydantic classes are composed of a list/dict of specific versions of a generic pydantic class, but the selection of these changes from class to class.
| [
"You can do something similar using nested classes:\nfrom pydantic import BaseModel\n\nclass UserStats(BaseModel):\n age: int\n height: float\n\nclass User(BaseModel):\n id: int\n name = 'Jane Doe'\n stats: UserStats\n\nThen when you construct any User instance you can pass the stats field as a dictionary and it would be converted automatically:\nuser = User(id=1234, stats={\"age\": 30, \"height\": 180.0})\n\nThe only difference is that the stats field of User is a class (instance of UserStats) so if you want to access fields of it you need to do so using attribute access not using dictionary access:\nprint(user.age) # ok!\nprint(user[\"age\"]) # not ok...\n\nIf you need ste stats attribute to be a dictionary then you could use TypedDict from the typing_extensions (python3.7) or typing (python3.8+) module:\nfrom typing import TypedDict\nfrom pydantic import BaseModel\n\nclass UserStats(TypedDict):\n age: int\n height: float\n\nclass User(BaseModel):\n id: int\n name = 'Jane Doe'\n stats: UserStats\n\nuser = User(id=1234, stats={\"age\": 30, \"height\": 180.0})\nprint(user.stats[\"age\"]) # will work!\n\nEDIT:\nAs mentioned in the comments of this answer the question author did not want to globally define the UserStats class to avoid pollution.\nThis can be solved by defining the class directly inside the User class:\nclass User(BaseModel):\n id: int\n name = \"Jane Doe\"\n class Stats(TypedDict):\n age: int\n height: float\n stats: Stats\n\nThis allows for multiple classes like User to each define their stats without duplicated Stats classes in the global namespace.\nA more coincise but less friendly to type checkers and language servers would be to use the functional API of TypedDict:\n class User(BaseModel):\n id: int\n name = \"Jane Doe\"\n stats: TypedDict(\"Stats\", age=int, height=float)\n\n",
"Just for the sake of completeness, you can also make use of the functional API provided by TypedDict to write your model like this:\nfrom typing import TypedDict\nfrom pydantic import BaseModel\n\n\nclass User(BaseModel):\n id: int\n name = 'Jane Doe'\n stats: TypedDict(\"Stats\", {\"age\": int, \"height\": float})\n\nI can't think of a way to make this more concise. I was not sure at first regarding how this plays with type checkers, but at least PyCharm with the Pydantic plugin seems to have no trouble correctly inferring the types and spitting out warnings, if you try to provide a wrongly typed value in the stats dictionary.\n"
] | [
3,
2
] | [] | [] | [
"pydantic",
"python"
] | stackoverflow_0074643755_pydantic_python.txt |
Q:
pandas dataframe - searching for numbers
On the column named "criacao" I have some data stored as object, and they are, for example, "2022-01-03 10:20:40" as df.head(1) and "2022-12-01 10:33:25" as df.tail(1).
I want to extract the data for every 3 months and store in a variable.
I am doing the following and it works for the first month:
fd_pri_tri = (df_fresh
[(df_fresh['criacao']
.str.contains('2022-01'))
]
)
As soon as I try adding the code for month two and three, the var goes blank.
fd_pri_tri = (df_fresh
[(df_fresh['criacao']
.str.contains('2022-01'))
&(df_fresh['criacao']
.str.contains('2022-02'))
&(df_fresh['criacao']
.str.contains('2022-03'))
]
)
Any thoughts?
A:
An option is to convert the criacao date string value to a python datetime data type, review the month for this date, and then assign that month to a specific quarter.
Month 1-3, assigned to Quarter 1
Month 4-6, assigned to Quarter 2
Month 7-9, assigned to Quarter 3
Month 10-12, assigned to Quarter 4
Pandas lets you use a function to apply the logic on each value of a column and output to a new column using apply and lambda
# import datetime
from datetime import datetime
def assign_quarter(criacao):
# convert format from string to datetime
criacao_formatted = datetime.strptime(criacao, "%Y-%m-%d %H:%M:%S")
# return quarter of year
    return (criacao_formatted.month - 1) // 3 + 1  # quarters 1-4
# make a new column for the criacao quarter
df_fresh['criacao_quarter'] = df_fresh.apply(lambda x: assign_quarter(x['criacao']), axis=1)
More about apply here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html
Then there are a couple options for how to extract the criacao values and create the fd_pri_tri variable,
like this
fd_pri_tri = (df_fresh.query('criacao_quarter == 1')['criacao'])
or this
fd_pri_tri = (df_fresh.loc[df_fresh['criacao_quarter'] == 1, 'criacao'])
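As a side note (only a sketch, not part of the approach above): pandas can also derive the quarter directly once the column is parsed as datetime, which avoids the per-row Python function entirely.
import pandas as pd

# parse the string column once, then use the built-in quarter accessor (returns 1-4)
df_fresh['criacao_quarter'] = pd.to_datetime(df_fresh['criacao']).dt.quarter
fd_pri_tri = df_fresh.loc[df_fresh['criacao_quarter'] == 1, 'criacao']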
| pandas dataframe - searching for numbers | On the column named "criacao" I have some data stored as object, and they are, for example, "2022-01-03 10:20:40" as df.head(1) and "2022-12-01 10:33:25" as df.tail(1).
I want to extract the data for every 3 months and store in a variable.
I am doing the following and it works for the first month:
fd_pri_tri = (df_fresh
[(df_fresh['criacao']
.str.contains('2022-01'))
]
)
As soon as I try adding the code for month two and three, the var goes blank.
fd_pri_tri = (df_fresh
[(df_fresh['criacao']
.str.contains('2022-01'))
&(df_fresh['criacao']
.str.contains('2022-02'))
&(df_fresh['criacao']
.str.contains('2022-03'))
]
)
Any thoughts?
| [
"An option is to convert the criacao date string value to a python datetime data type, review the month for this date, and then assign that month to a specific quarter.\n\nMonth 1-3, assigned to Quarter 1\nMonth 4-6, assigned to Quarter 2\nMonth 7-9, assigned to Quarter 3\nMonth 10-12, assigned to Quarter 4\n\nPandas lets you use a function to apply the logic on each value of a column and output to a new column using apply and lambda\n# import datetime\nfrom datetime import datetime\n\ndef assign_quarter(criacao):\n # convert format from string to datetime \n criacao_formatted = datetime.strptime(criacao, \"%Y-%m-%d %H:%M:%S\")\n # return quarter of year\n return (criacao_formatted.month-1)//3\n\n# make a new column for the criacao quarter\ndf_fresh['criacao_quarter'] = df_fresh.apply(lambda x: assign_quarter(x['criacao']))\n\nMore about apply here: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.apply.html\nThen there are a couple options for how to extract the criacao values and create the fd_pri_tri variable,\nlike this\nfd_pri_tri = (df_fresh.query('criacao_quarter == 1')['criacao'])\nor this\nfd_pri_tri = (df_fresh.loc[df_fresh['criacao_quarter'] == 1, 'criacao'])\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074643963_pandas_python.txt |
Q:
more efficient way to select image pixel that satisfy multiple color conditions
I want to select fire pixels in each video frame using Lab, RGB, and YCbCr color rules.
I've tried the code below, but the video output turns out really laggy. I believe this is caused by iterating through the frame pixels with loops. I wonder if there is a more efficient way to do this (e.g. with numpy).
Here is my code:
# COLOR SEGMENTATION
lab_convert = convert_lab(frame)
L, a, b = cv2.split(lab_convert)
L_mean, a_mean, b_mean = cv2.mean(lab_convert)[:-1]
rgb_convert = convert_rgb(frame)
R, G, B = cv2.split(rgb_convert)
R_mean = cv2.mean(R)[0]
ycbcr_convert = convert_ycbcr(frame)
Y, Cb, Cr = cv2.split(ycbcr_convert)
height, width, _ = frame.shape
frame_copy = frame.copy()
for y in range(height):
for x in range(width):
# rules
lab_rule_1 = L[y, x] >= L_mean
lab_rule_2 = a[y, x] >= a_mean
lab_rule_3 = b[y, x] >= b_mean
lab_rule_4 = b[y, x] >= a_mean
lab_satisfied = lab_rule_1 and lab_rule_2 and lab_rule_3 and lab_rule_4
rgb_rule_1 = R[y, x] > G[y, x] > B[y, x]
rgb_rule_2 = R[y, x] >= R_mean
rgb_satisfied = rgb_rule_1 and rgb_rule_2
ycbcr_rule_1 = (R[y, x] >= G[y, x]) and (G[y, x] > B[y, x])
ycbcr_rule_2 = (R[y, x] > 190) and (G[y, x] > 100) and (B[y, x] < 140)
ycbcr_rule_3 = Y[y, x] >= Cb[y, x]
ycbcr_rule_4 = Cr[y, x] >= Cb[y, x]
ycbcr_satisfied = (ycbcr_rule_1 and ycbcr_rule_2) or (ycbcr_rule_3 and ycbcr_rule_4)
# check if pixel satisfies all rules (per color space)
if lab_satisfied:
lab_convert[y, x] = 255
else:
lab_convert[y, x] = 0
if rgb_satisfied:
rgb_convert[y, x] = 255
else:
rgb_convert[y, x] = 0
if ycbcr_satisfied:
ycbcr_convert[y, x] = 255
else:
ycbcr_convert[y, x] = 0
# check if pixel satisfies all rules
if lab_satisfied or rgb_satisfied or ycbcr_satisfied:
frame_copy[y, x] = 255
else:
frame_copy[y, x] = 0
A:
Solved!
Shoutout to @dan-mašek for helping me solve this problem. Here is the code: https://pastebin.com/DXrxmf17
# COLOR SEGMENTATION
lab_convert = convert_lab(frame)
L, a, b = cv2.split(lab_convert)
L_mean, a_mean, b_mean = cv2.mean(lab_convert)[:-1]
rgb_convert = convert_rgb(frame)
R, G, B = cv2.split(rgb_convert)
R_mean = cv2.mean(R)[0]
ycbcr_convert = convert_ycbcr(frame)
Y, Cb, Cr = cv2.split(ycbcr_convert)
lab_rule_1 = L >= L_mean
lab_rule_2 = a >= a_mean
lab_rule_3 = b >= b_mean
lab_rule_4 = b >= a_mean
lab_satisfied = lab_rule_1 & lab_rule_2 & lab_rule_3 & lab_rule_4
rgb_rule_1 = (R > G) & (G > B) # No chaining in numpy
rgb_rule_2 = R >= R_mean
rgb_satisfied = rgb_rule_1 & rgb_rule_2
ycbcr_rule_1 = (R >= G) & (G > B)
ycbcr_rule_2 = (R > 190) & (G > 100) & (B < 140)
ycbcr_rule_3 = Y >= Cb
ycbcr_rule_4 = Cr >= Cb
ycbcr_satisfied = (ycbcr_rule_1 & ycbcr_rule_2) | (ycbcr_rule_3 & ycbcr_rule_4)
lab_convert = np.uint8(lab_satisfied) * 255
rgb_convert = np.uint8(rgb_satisfied) * 255
ycbcr_convert = np.uint8(ycbcr_satisfied) * 255
combined = lab_convert | rgb_convert | ycbcr_convert
frame_copy = np.repeat(combined[:, :, np.newaxis], 3, axis=2)
| more efficient way to select image pixel that satisfy multiple color conditions | I want to select fire pixels in each video frame using Lab, RGB, and YCbCr color rules.
I've tried the code below, but the video output turns out really laggy. I believe this is caused by iterating through the frame pixels with loops. I wonder if there is a more efficient way to do this (e.g. with numpy).
Here is my code:
# COLOR SEGMENTATION
lab_convert = convert_lab(frame)
L, a, b = cv2.split(lab_convert)
L_mean, a_mean, b_mean = cv2.mean(lab_convert)[:-1]
rgb_convert = convert_rgb(frame)
R, G, B = cv2.split(rgb_convert)
R_mean = cv2.mean(R)[0]
ycbcr_convert = convert_ycbcr(frame)
Y, Cb, Cr = cv2.split(ycbcr_convert)
height, width, _ = frame.shape
frame_copy = frame.copy()
for y in range(height):
for x in range(width):
# rules
lab_rule_1 = L[y, x] >= L_mean
lab_rule_2 = a[y, x] >= a_mean
lab_rule_3 = b[y, x] >= b_mean
lab_rule_4 = b[y, x] >= a_mean
lab_satisfied = lab_rule_1 and lab_rule_2 and lab_rule_3 and lab_rule_4
rgb_rule_1 = R[y, x] > G[y, x] > B[y, x]
rgb_rule_2 = R[y, x] >= R_mean
rgb_satisfied = rgb_rule_1 and rgb_rule_2
ycbcr_rule_1 = (R[y, x] >= G[y, x]) and (G[y, x] > B[y, x])
ycbcr_rule_2 = (R[y, x] > 190) and (G[y, x] > 100) and (B[y, x] < 140)
ycbcr_rule_3 = Y[y, x] >= Cb[y, x]
ycbcr_rule_4 = Cr[y, x] >= Cb[y, x]
ycbcr_satisfied = (ycbcr_rule_1 and ycbcr_rule_2) or (ycbcr_rule_3 and ycbcr_rule_4)
# check if pixel satisfies all rules (per color space)
if lab_satisfied:
lab_convert[y, x] = 255
else:
lab_convert[y, x] = 0
if rgb_satisfied:
rgb_convert[y, x] = 255
else:
rgb_convert[y, x] = 0
if ycbcr_satisfied:
ycbcr_convert[y, x] = 255
else:
ycbcr_convert[y, x] = 0
# check if pixel satisfies all rules
if lab_satisfied or rgb_satisfied or ycbcr_satisfied:
frame_copy[y, x] = 255
else:
frame_copy[y, x] = 0
| [
"Solved!\nshoutout to @dan-mašek for helping me to solve this problem. here is the code: https://pastebin.com/DXrxmf17\n# COLOR SEGMENTATION\nlab_convert = convert_lab(frame)\nL, a, b = cv2.split(lab_convert)\nL_mean, a_mean, b_mean = cv2.mean(lab_convert)[:-1]\n\nrgb_convert = convert_rgb(frame)\nR, G, B = cv2.split(rgb_convert)\nR_mean = cv2.mean(R)[0]\n\nycbcr_convert = convert_ycbcr(frame)\nY, Cb, Cr = cv2.split(ycbcr_convert)\n\nlab_rule_1 = L >= L_mean\nlab_rule_2 = a >= a_mean\nlab_rule_3 = b >= b_mean\nlab_rule_4 = b >= a_mean\nlab_satisfied = lab_rule_1 & lab_rule_2 & lab_rule_3 & lab_rule_4\n\nrgb_rule_1 = (R > G) & (G > B) # No chaining in numpy\nrgb_rule_2 = R >= R_mean\nrgb_satisfied = rgb_rule_1 & rgb_rule_2\n\nycbcr_rule_1 = (R >= G) & (G > B)\nycbcr_rule_2 = (R > 190) & (G > 100) & (B < 140)\nycbcr_rule_3 = Y >= Cb\nycbcr_rule_4 = Cr >= Cb\nycbcr_satisfied = (ycbcr_rule_1 & ycbcr_rule_2) | (ycbcr_rule_3 & ycbcr_rule_4)\n\n\nlab_convert = np.uint8(lab_satisfied) * 255\nrgb_convert = np.uint8(rgb_satisfied) * 255\nycbcr_convert = np.uint8(ycbcr_satisfied) * 255\n\ncombined = lab_convert | rgb_convert | ycbcr_convert\nframe_copy = np.repeat(combined[:, :, np.newaxis], 3, axis=2)\n\n"
] | [
1
] | [] | [] | [
"image_processing",
"image_segmentation",
"numpy",
"opencv",
"python"
] | stackoverflow_0074641817_image_processing_image_segmentation_numpy_opencv_python.txt |
Q:
How do I use OpenCV to detect exclusively almost straight edges?
I'm trying to detect straight edges in a basketball card and what I have so far does a good job of detecting all edges. I would like for this piece of code however, to detect exclusively straight edges (the outline of the card).
import cv2
import numpy as np
import imutils
img = cv2.imread('edgedetection/cardgiannis.jpeg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),5)
# Draw the lines on the image
lines_edges = cv2.addWeighted(img, 0.8, line_image, 1, 0)
cv2.imshow('lines', lines_edges)
cv2.waitKey()
A:
Here is what I think.
If you can somehow get the highest- and lowest-valued coordinates of the pixels that form the lines, you can use those pixels to form a rectangle that consists of the straight edges.
Using the pixels to (roughly) form a rectangle can be the solution!
To find the pixels in a line, you can take a look here, here and here.
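A minimal sketch of that idea, reusing the lines array and line_image produced by the HoughLinesP code in the question (variable names are only illustrative):
import cv2

# every detected segment contributes two (x, y) endpoints; collect them all
points = lines.reshape(-1, 2)
x_min, y_min = points.min(axis=0)
x_max, y_max = points.max(axis=0)

# the extremes of those endpoints give a rough, axis-aligned outline of the card
cv2.rectangle(line_image, (int(x_min), int(y_min)), (int(x_max), int(y_max)), (0, 255, 0), 5)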
A:
You may use cv.approxPolyDP to extract dominant points from the edges and stick to the edges that have 2 dominant points.
PS: you apply approxPolyDP on the edges after collecting them with cv.findContours.
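A rough sketch of that suggestion (assuming the OpenCV 4.x findContours signature and reusing edges and line_image from the question; the epsilon factor and the 2-point test are guesses that may need tuning):
import cv2

contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # approximate each edge contour by its dominant points
    approx = cv2.approxPolyDP(cnt, 0.02 * cv2.arcLength(cnt, False), False)
    if len(approx) == 2:  # only two dominant points -> an (almost) straight edge
        cv2.drawContours(line_image, [cnt], -1, (0, 255, 0), 5)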
| How do I use OpenCV to detect exclusively almost straight edges? | I'm trying to detect straight edges in a basketball card and what I have so far does a good job of detecting all edges. I would like for this piece of code however, to detect exclusively straight edges (the outline of the card).
import cv2
import numpy as np
import imutils
img = cv2.imread('edgedetection/cardgiannis.jpeg')
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
kernel_size = 5
blur_gray = cv2.GaussianBlur(gray,(kernel_size, kernel_size),0)
low_threshold = 50
high_threshold = 150
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
rho = 1 # distance resolution in pixels of the Hough grid
theta = np.pi / 180 # angular resolution in radians of the Hough grid
threshold = 15 # minimum number of votes (intersections in Hough grid cell)
min_line_length = 50 # minimum number of pixels making up a line
max_line_gap = 20 # maximum gap in pixels between connectable line segments
line_image = np.copy(img) * 0 # creating a blank to draw lines on
# Run Hough on edge detected image
# Output "lines" is an array containing endpoints of detected line segments
lines = cv2.HoughLinesP(edges, rho, theta, threshold, np.array([]),
min_line_length, max_line_gap)
for line in lines:
for x1,y1,x2,y2 in line:
cv2.line(line_image,(x1,y1),(x2,y2),(255,0,0),5)
# Draw the lines on the image
lines_edges = cv2.addWeighted(img, 0.8, line_image, 1, 0)
cv2.imshow('lines', lines_edges)
cv2.waitKey()
| [
"Here is what I think.\nIf you can somehow get the highest & lowest valued coordinates in the pixels that are forming the lines, you can use those pixels to form a rectangle that consist of the straight edges.\n\nUsing the pixels to (roughly) form a rectangle can be the solution!\n\nTo find the pixels in a line, you can take a look in here, here and here.\n",
"you may use cv.approxPolyDP to extract dominant points from edges and stick to the edges that have 2 dominant points.\nps: you apply approxPolyDP on edges after collecting them by cv.findcontours\n"
] | [
0,
0
] | [] | [] | [
"computer_vision",
"edge_detection",
"image_processing",
"opencv",
"python"
] | stackoverflow_0073213310_computer_vision_edge_detection_image_processing_opencv_python.txt |
Q:
Apply lambda function to two columns in two Pandas dataframes
I have two data frames that I'm trying to merge, based on a primary & foreign key of company name. One data set has ~50,000 unique company names, the other one has about 5,000. Duplicate company names are possible within each list. I'm trying to produce some string-edit distance metrics comparing two columns between two data frames.
Here's an MWE with example data frames:
mwe1 = pd.DataFrame({'company_name': ['Deloitte',
'PriceWaterhouseCoopers',
'KPMG',
'Ernst & Young',
'intentionall typo company XYZ'
],
'revenue': [100, 200, 300, 250, 400]
}
)
mwe2 = pd.DataFrame({'salesforce_name': ['Deloite',
'PriceWaterhouseCooper'
],
'CEO': ['John', 'Jane']
}
)
I want:
company_name revenue salesforce_name CEO similarity_score ...
Deloitte 100 Deloite John 98
PriceWaterhouseCoopers 200 Deloite John 0
KPMG 300 Deloite John 15
Ernst & Young 250 Deloite John 10
intentionall typo company XYZ 400 Deloite John 2
Deloitte 100 PriceWaterhouseCooper Jane 20
PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane 97
KPMG 300 PriceWaterhouseCooper Jane 5
Ernst & Young 250 PriceWaterhouseCooper Jane 7
intentionall typo company XYZ 400 PriceWaterhouseCooper Jane 3
In the above, there's 1 similarity score. In actuality, I'd like to have several columns, one for each metric. Example metrics include Jaro-Winkler, Levenshtein, etc. Here's an example I found producing the metric for two strings, but how do I use this for two Pandas Series of unequal length, like my MWE example?
import abydos.distance as abd
abd.DiscountedLevenshtein().sim('coca-cola company','coca-cola group')
A:
You can add a common column to both datasets and then use pandas merge on that common column to build the full cross product.
Here is the sample code:
mwe1['common'] = mwe2['common'] = 1
df = pd.merge(mwe1, mwe2, on='common').drop('common', axis=1)
df.sort_values(by='salesforce_name',inplace=True)
Output:
company_name revenue salesforce_name CEO
0 Deloitte 100 Deloite John
2 PriceWaterhouseCoopers 200 Deloite John
4 KPMG 300 Deloite John
6 Ernst & Young 250 Deloite John
8 intentionall typo company XYZ 400 Deloite John
1 Deloitte 100 PriceWaterhouseCooper Jane
3 PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane
5 KPMG 300 PriceWaterhouseCooper Jane
7 Ernst & Young 250 PriceWaterhouseCooper Jane
9 intentionall typo company XYZ 400 PriceWaterhouseCooper Jane
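A possible follow-up sketch: once the cross join exists, similarity metrics can be added as columns row by row. This reuses the abydos metric imported in the question; the column name 'similarity_score' is only an illustrative choice, and other metrics can be added the same way.
import abydos.distance as abd

lev = abd.DiscountedLevenshtein()
df['similarity_score'] = df.apply(
    lambda row: lev.sim(row['company_name'], row['salesforce_name']), axis=1
)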
| Apply lambda function to two columns in two Pandas dataframes | I have two data frames that I'm trying to merge, based on a primary & foreign key of company name. One data set has ~50,000 unique company names, the other one has about 5,000. Duplicate company names are possible within each list. I'm trying to produce some string-edit distance metrics comparing two columns between two data frames.
Here's an MWE with example data frames:
mwe1 = pd.DataFrame({'company_name': ['Deloitte',
'PriceWaterhouseCoopers',
'KPMG',
'Ernst & Young',
'intentionall typo company XYZ'
],
'revenue': [100, 200, 300, 250, 400]
}
)
mwe2 = pd.DataFrame({'salesforce_name': ['Deloite',
'PriceWaterhouseCooper'
],
'CEO': ['John', 'Jane']
}
)
I want:
company_name revenue salesforce_name CEO similarity_score ...
Deloitte 100 Deloite John 98
PriceWaterhouseCoopers 200 Deloite John 0
KPMG 300 Deloite John 15
Ernst & Young 250 Deloite John 10
intentionall typo company XYZ 400 Deloite John 2
Deloitte 100 PriceWaterhouseCooper Jane 20
PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane 97
KPMG 300 PriceWaterhouseCooper Jane 5
Ernst & Young 250 PriceWaterhouseCooper Jane 7
intentionall typo company XYZ 400 PriceWaterhouseCooper Jane 3
In the above, there's 1 similarity score. In actuality, I'd like to have several columns, one for each metric. Example metrics include Jaro-Winkler, Levenshtein, etc. Here's an example I found producing the metric for two strings, but how do I use this for two Pandas Series of unequal length, like my MWE example?
import abydos.distance as abd
abd.DiscountedLevenshtein().sim('coca-cola company','coca-cola group')
| [
"You can add a common column for both datasets and then we can use pandas merge on that common column\nHere is the sample code\nmwe1['common']=mwe2['common']=1\ndf = pd.merge(mwe1,mwe2,on='common').drop('common',1)\ndf.sort_values(by='salesforce_name',inplace=True)\n\nOutput:\n company_name revenue salesforce_name CEO\n0 Deloitte 100 Deloite John\n2 PriceWaterhouseCoopers 200 Deloite John\n4 KPMG 300 Deloite John\n6 Ernst & Young 250 Deloite John\n8 intentionall typo company XYZ 400 Deloite John\n1 Deloitte 100 PriceWaterhouseCooper Jane\n3 PriceWaterhouseCoopers 200 PriceWaterhouseCooper Jane\n5 KPMG 300 PriceWaterhouseCooper Jane\n7 Ernst & Young 250 PriceWaterhouseCooper Jane\n9 intentionall typo company XYZ 400 PriceWaterhouseCooper Jane\n\n"
] | [
0
] | [] | [] | [
"fuzzy_comparison",
"pandas",
"python",
"string"
] | stackoverflow_0074635404_fuzzy_comparison_pandas_python_string.txt |
Q:
displaying a table without invalid values python
How do I get rid of invalid values in a ragged list to display in a table?
A:
I'm not sure I fully understand what format you're trying to produce, but I hope this helps point you in the right direction.
If you want to print only the values where val > 0.05:
Make your joint_pmf_ID a numpy array.
Index by boolean condition and set the desired values to 0.
In your looping function, don't print zeros.
import numpy as np

joint_pmf_ID = np.array(joint_pmf_ID)
joint_pmf_ID[joint_pmf_ID <= 0.05] = 0
# Below is just a fancy print function; print how you will
for i, row in enumerate(joint_pmf_ID):
    print('| ', end='')
    for j, val in enumerate(row):
        print(val if val else ' ', end=' ')
    print(' |')
| displaying a table without invalid values python | How do I get rid of invalid values in a ragged list to display in a table?
| [
"I'm not sure I fully understand what format you're trying to do, but hope ths helps point you in the right direction.\nIf you want to print only the values where val > 0.05:\n\nMake your joint_pmf_ID a numpy array.\nIndex by boolean condition and set the desired values to 0.\nIn your looping function, don't print zeros.\n\njoint_pmf_ID = np.array(joint_pmf_ID)\njoint_pmf_id[joint_pmf_id<=0.05] = 0\n# Below is just a fancy print function; print how you will\nfor i, row in enumerate(joint_pmf_ID):\n print('| ')\n for j, val in enumerate(row):\n print(val if val else ' ')\n print(' | ')\n\n"
] | [
1
] | [] | [] | [
"numpy",
"python",
"statistics"
] | stackoverflow_0074644169_numpy_python_statistics.txt |
Q:
Python - Timeit within a class
I'm having some real trouble with timing a function from within an instance of a class. I'm not sure I'm going about it the right way (never used timeIt before) and I tried a few variations of the second argument importing things, but no luck. Here's a silly example of what I'm doing:
import timeit
class TimedClass():
def __init__(self):
self.x = 13
self.y = 15
t = timeit.Timer("self.square(self.x, self.y)")
try:
t.timeit()
except:
t.print_exc()
def square(self, _x, _y):
print _x**_y
myTimedClass = TimedClass()
Which, when ran, complains about self.
Traceback (most recent call last):
File "timeItTest.py", line 9, in __init__
t.timeit()
File "C:\Python26\lib\timeit.py", line 193, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
self.square(self.x, self.y)
NameError: global name 'self' is not defined
This has to do with TimeIt creating a little virtual environment to run the function in but what do I have to pass to the second argument to make it all happy?
A:
If you're willing to consider alternatives to timeit, I recently found the stopwatch timer utility, which might be useful in your case. It's really simple and intuitive, too:
import stopwatch
class TimedClass():
def __init__(self):
t = stopwatch.Timer()
# do stuff here
t.stop()
print t.elapsed
A:
Why do you want the timing inside the class being timed itself? If you take the timing out of the class, you can just pass a reference. I.e.
import timeit
class TimedClass():
def __init__(self):
self.x = 13
self.y = 15
def square(self, _x, _y):
print _x**_y
myTimedClass = TimedClass()
timeit.Timer(myTimedClass.square).timeit()
(Of course the class itself is redundant here; I assume you have a more complex use case where a simple method is not sufficient.)
In general, just pass a callable that has all setup contained/configured. If you want to pass strings to be timed they should contain all necessary setup inside them, i.e.
timeit.Timer("[str(x) for x in range(100)]").timeit()
If you really, really need the timing inside the class, wrap the call in a local method, i.e.
def __init__(self, ..):
def timewrapper():
        return self.square(self.x, self.y)
timeit.Timer(timewrapper)
A:
To address your initial error, you can use timeit within a class with parameters like this:
t = timeit.Timer(lambda: self.square(self.x, self.y)).timeit()
A:
(posting as an answer because the code markup does not work in a comment)
I would add a try/finally block for additional safety:
class TimedClass():
def __init__(self):
t = stopwatch.Timer()
try:
            pass  # do stuff here; you can even return "foo" here or raise exceptions
finally:
t.stop()
print t.elapsed
A:
Just pass self through the new globals parameter of timeit.Timer() (added in version 3.5):
import timeit
class TimedClass():
def __init__(self):
self.x = 13
self.y = 15
my_globals = globals()
my_globals.update({'self':self})
t = timeit.Timer(stmt="self.square(self.x, self.y)", globals=my_globals)
try:
t.timeit()
except:
t.print_exc()
def square(self, _x, _y):
print (_x**_y)
myTimedClass = TimedClass()
| Python - Timeit within a class | I'm having some real trouble with timing a function from within an instance of a class. I'm not sure I'm going about it the right way (never used timeIt before) and I tried a few variations of the second argument importing things, but no luck. Here's a silly example of what I'm doing:
import timeit
class TimedClass():
def __init__(self):
self.x = 13
self.y = 15
t = timeit.Timer("self.square(self.x, self.y)")
try:
t.timeit()
except:
t.print_exc()
def square(self, _x, _y):
print _x**_y
myTimedClass = TimedClass()
Which, when ran, complains about self.
Traceback (most recent call last):
File "timeItTest.py", line 9, in __init__
t.timeit()
File "C:\Python26\lib\timeit.py", line 193, in timeit
timing = self.inner(it, self.timer)
File "<timeit-src>", line 6, in inner
self.square(self.x, self.y)
NameError: global name 'self' is not defined
This has to do with TimeIt creating a little virtual environment to run the function in but what do I have to pass to the second argument to make it all happy?
| [
"if you're willing to consider alternatives to timeit, i recently found the stopwatch timer utility which might be useful in your case. it's really simple and intuitive, too:\nimport stopwatch\n\nclass TimedClass():\n\n def __init__(self):\n t = stopwatch.Timer()\n # do stuff here\n t.stop()\n print t.elapsed\n\n",
"Why do you want the timing inside the class being timed itself? If you take the timing out of the class, you can just pass a reference. I.e.\nimport timeit\n\nclass TimedClass():\n def __init__(self):\n self.x = 13\n self.y = 15\n\n def square(self, _x, _y):\n print _x**_y\n\nmyTimedClass = TimedClass()\ntimeit.Timer(myTImedClass.square).timeit()\n\n(of course the class itself is redundant, I assume you have a complexer use-case where a simple method is not sufficient).\nIn general, just pass a callable that has all setup contained/configured. If you want to pass strings to be timed they should contain all necessary setup inside them, i.e.\ntimeit.Timer(\"[str(x) for x in range(100)]\").timeit()\n\nIf you really, really need the timing inside the class, wrap the call in a local method, i.e.\ndef __init__(self, ..):\n def timewrapper():\n return self.multiply(self.x, self.y)\n\n timeit.Timer(timewrapper)\n\n",
"To address your initial error, you can use timeit within a class with parameters like this:\n t = timeit.Timer(lambda: self.square(self.x, self.y)).timeit()\n\n",
"(posting as an answer because the code markup does not work in a comment)\nI would add a try/finally closure for additional safety:\nclass TimedClass():\n def __init__(self):\n t = stopwatch.Timer()\n try:\n # do stuff here, you can even use return \"foo\" here and throw exceptions\n finally:\n t.stop()\n print t.elapsed\n\n",
"Just pass self through the new globals parameter of timeit.Timer() (added in version 3.5):\nimport timeit\n\nclass TimedClass():\n def __init__(self):\n self.x = 13\n self.y = 15\n my_globals = globals()\n my_globals.update({'self':self})\n t = timeit.Timer(stmt=\"self.square(self.x, self.y)\", globals=my_globals)\n try:\n t.timeit()\n except:\n t.print_exc()\n\n def square(self, _x, _y):\n print (_x**_y)\n\nmyTimedClass = TimedClass()\n\n"
] | [
10,
9,
7,
0,
0
] | [] | [] | [
"python",
"self",
"timeit"
] | stackoverflow_0003609148_python_self_timeit.txt |
Q:
how to compare dictionaries and color the different key, value
I have a Django application with two textboxes where data coming from two different functions is displayed.
The data is displayed in the textboxes, but a key/value pair has to be marked red when there is a difference between the two dictionaries. In this example it is ananas that has the difference.
So I have the TestFile with the data:
class TestFile:
def __init__(self) -> None:
pass
def data_compare2(self):
fruits2 = {
"appel": 3962.00,
"waspeen": 3304.07,
"ananas": 30,
}
set2 = set([(k, v) for k, v in fruits2.items()])
return set2
def data_compare(self):
fruits = {
"appel": 3962.00,
"waspeen": 3304.07,
"ananas": 24,
}
set1 = set([(k, v) for k, v in fruits.items()])
return set1
def compare_dic(self):
set1 = self.data_compare()
set2 = self.data_compare2()
diff_set = list(set1 - set2) + list(set2 - set1)
return diff_set
the views.py:
from .test_file import TestFile
def data_compare(request):
test_file = TestFile()
content_excel = ""
content = ""
content = test_file.data_compare()
content_excel = test_file.data_compare2()
diff_set =test_file.compare_dic()
context = {"content": content, "content_excel": content_excel, "diff_set": diff_set}
return render(request, "main/data_compare.html", context)
and the template:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<div class="form-outline">
<div class="form-group">
<textarea class="inline-txtarea form-control" id="content" cols="10" rows="10">
{% for key, value in content %}
<span {% if key in diff_set %} style="color: red;" {% endif %}> {{ key }}: {{value}}</span><br>
{% endfor %}
</textarea>
</div>
</div>
<div class="form-outline">
<div class="form-group">
<textarea class="inline-txtarea form-control" id="content.excel" cols="70" rows="25">
{% for key, value in content_excel %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
{% endfor %}
</textarea>
</div>
</div>
</body>
</html>
The problem I face is that when I loop over the dictionaries in the template, the difference is not colored red. This is the output:
<span> waspeen: 3304.07</span><br>
<span> ananas: 24</span><br>
<span> appel: 3962.0</span><br>
so if I do this:
def data_compare(request):
test_file = TestFile()
content_excel = ""
content = ""
content = test_file.data_compare()
content_excel = test_file.data_compare2()
diff_set =test_file.compare_dic()
print(diff_set)
context = {"content": content, "content_excel": content_excel, "diff_set": diff_set}
return render(request, "main/data_compare.html", context)
Then I see in the print statement the correct difference:
[('ananas', 24), ('ananas', 30)]
But how to mark this red in the template?
A:
There are two misconceptions. The first is how you are building your 'diff_set' variable to check in the template: it should be a list with the names of the fruits (otherwise you need to do logic at template level, which is one thing you should always avoid):
['ananas',...]
The second is trying to color lines inside a textarea with HTML tags. It may be possible to do based on this answer (personally I have never even tried such a thing).
Besides that, there are some redundant processes in your code. But, in order to make it work the way it is, just filter your diff_set, and change your template:
template.html:
{% block content %}
<div class="container center">
{% for key, value in content %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
{% endfor %}
<p>------------------------------------------------------</p>
{% for key, value in content_excel %}
<span {% if key in diff_set %} style="color: red;"{% endif %}>{{ key }}: {{value}}</span><br>
{% endfor %}
</div>
{% endblock %}
views.py:
from .test_file import TestFile
def compare_data(request):
test_file = TestFile()
content_excel = ""
content = ""
content = test_file.data_compare()
content_excel = test_file.data_compare2()
diff_set =test_file.compare_dic()
# filter diff_set
unique_keys = []
for v in diff_set:
if v[0] not in unique_keys:
unique_keys.append(v[0])
context = {"content": content, "content_excel": content_excel, "diff_set": unique_keys}
return render(request, "compare.html", context)
| how to compare dictionaries and color the different key, value | I have a Django application with two textboxes where data coming from two different functions is displayed.
The data is displayed in the textboxes, but a key/value pair has to be marked red when there is a difference between the two dictionaries. In this example it is ananas that has the difference.
So I have the TestFile with the data:
class TestFile:
def __init__(self) -> None:
pass
def data_compare2(self):
fruits2 = {
"appel": 3962.00,
"waspeen": 3304.07,
"ananas": 30,
}
set2 = set([(k, v) for k, v in fruits2.items()])
return set2
def data_compare(self):
fruits = {
"appel": 3962.00,
"waspeen": 3304.07,
"ananas": 24,
}
set1 = set([(k, v) for k, v in fruits.items()])
return set1
def compare_dic(self):
set1 = self.data_compare()
set2 = self.data_compare2()
diff_set = list(set1 - set2) + list(set2 - set1)
return diff_set
the views.py:
from .test_file import TestFile
def data_compare(request):
test_file = TestFile()
content_excel = ""
content = ""
content = test_file.data_compare()
content_excel = test_file.data_compare2()
diff_set =test_file.compare_dic()
context = {"content": content, "content_excel": content_excel, "diff_set": diff_set}
return render(request, "main/data_compare.html", context)
and the template:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
</head>
<body>
<div class="form-outline">
<div class="form-group">
<textarea class="inline-txtarea form-control" id="content" cols="10" rows="10">
{% for key, value in content %}
<span {% if key in diff_set %} style="color: red;" {% endif %}> {{ key }}: {{value}}</span><br>
{% endfor %}
</textarea>
</div>
</div>
<div class="form-outline">
<div class="form-group">
<textarea class="inline-txtarea form-control" id="content.excel" cols="70" rows="25">
{% for key, value in content_excel %}
<span {% if key in diff_set %} style="color: red;" {% endif %}>{{ key }}: {{value}}</span><br>
{% endfor %}
</textarea>
</div>
</div>
</body>
</html>
The problem I face is that when I loop over the dictionaries in the template, the difference is not colored red. This is the output:
<span> waspeen: 3304.07</span><br>
<span> ananas: 24</span><br>
<span> appel: 3962.0</span><br>
so if I do this:
def data_compare(request):
test_file = TestFile()
content_excel = ""
content = ""
content = test_file.data_compare()
content_excel = test_file.data_compare2()
diff_set =test_file.compare_dic()
print(diff_set)
context = {"content": content, "content_excel": content_excel, "diff_set": diff_set}
return render(request, "main/data_compare.html", context)
Then I see in the print statement the correct difference:
[('ananas', 24), ('ananas', 30)]
But how to mark this red in the template?
| [
"There are two misconcepts. First is how you are building you 'diff_set' variable to check in the template, it should be a list with the name of the fruit (Otherwise you need to do logic at template level, which is one thing you should always avoid.):\n['ananas',...]\n\nSecond is trying to color lines inside a text area with HTML tags. It is possible to do it based on this answer (personally I never even tried such thing).\nBesides that, there are some redundant processes in your code. But, in order to make it work the way it is, just filter your diff_set, and change your template:\ntemplate.html:\n{% block content %}\n <div class=\"container center\">\n {% for key, value in content %}\n <span {% if key in diff_set %} style=\"color: red;\" {% endif %}>{{ key }}: {{value}}</span><br>\n {% endfor %}\n\n <p>------------------------------------------------------</p>\n\n {% for key, value in content_excel %}\n <span {% if key in diff_set %} style=\"color: red;\"{% endif %}>{{ key }}: {{value}}</span><br>\n {% endfor %}\n </div>\n{% endblock %}\n\nviews.py:\nfrom .test_file import TestFile\n\ndef compare_data(request):\n test_file = TestFile()\n\n content_excel = \"\"\n content = \"\"\n content = test_file.data_compare()\n content_excel = test_file.data_compare2()\n diff_set =test_file.compare_dic()\n\n # filter diff_set\n unique_keys = []\n for v in diff_set:\n if v[0] not in unique_keys:\n unique_keys.append(v[0])\n\n context = {\"content\": content, \"content_excel\": content_excel, \"diff_set\": unique_keys}\n\n return render(request, \"compare.html\", context)\n\n"
] | [
1
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074641427_django_python.txt |
Q:
Way to recreate all python class instances each time program loops
For a python RPG, I have player_module.py, which contains all classes, methods and functions.
class RingOfRegeneration(Regeneration):
def __init__(self):
super().__init__()
self.name = "Ring of Regeneration"
self.item_type = "Rings of Regeneration"
self.regenerate = 1
self.sell_price = 10000
self.buy_price = 10000
ring_of_regeneration = RingOfRegeneration()
Right now I have over 30 class instances. The class instances and their attributes are referenced throughout the module, (which is about 10,000 lines at this point). I have a loot_dict within the module from which random items may be found, which simply contains the instances:
loot_dict = {
'Armor': [leather_armor, ....],
'Rings of Regeneration': [ring_of_regeneration...],
... }
I also have the main.py loop. Class instances like swords and rings can be found in the dungeon, and can be enhanced. For instance, a ring of regeneration can be enhanced throughout the game to self.regenerate = 2, 3...etc.
My problem is, when the player dies and is given the choice to play again, breaking out to the top level loop and restarting the game, if loot is found, it still has the 'enhanced' values. I want to simply reset or recreate all class instances every time the player restarts the loop and as a beginner, I can't figure out a way to do this without exiting the program and restarting it from the command line. I have been unable to grasp any solutions from similar topics. Lastly, if I have painted myself into a corner, as a last resort, is there a way to simply re-run the program from within the program?
A:
Your loot dict really holds references to objects, so if you assign any object from loot_dict to the player character, you assign a reference to the same object that is in your loot_dict (just like a pointer in C++).
You need to create a new instance of the object every time the player gets a new item.
To achieve that, you can do something like this:
loot_dict={'Rings':[RingOfRegeneration]}
Now you can call it:
item_class = random.choice(list(loot_dict.keys()))
new_item_instance = random.choice(loot_dict[item_class])() #Calling class __init__ method
new_item_instance now has fresh base statistics.
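A quick illustration of the point (just a sketch reusing the RingOfRegeneration class from the question): each instantiation is an independent object, so enhancing one drop never leaks into later drops or into a restarted game.
ring_a = RingOfRegeneration()
ring_a.regenerate = 3              # the player enhanced this particular ring
ring_b = RingOfRegeneration()      # a new drop, or a fresh game
print(ring_a.regenerate, ring_b.regenerate)   # 3 1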
| Way to recreate all python class instances each time program loops | For a python RPG, I have player_module.py, which contains all classes, methods and functions.
class RingOfRegeneration(Regeneration):
def __init__(self):
super().__init__()
self.name = "Ring of Regeneration"
self.item_type = "Rings of Regeneration"
self.regenerate = 1
self.sell_price = 10000
self.buy_price = 10000
ring_of_regeneration = RingOfRegeneration()
Right now I have over 30 class instances. The class instances and their attributes are referenced throughout the module, (which is about 10,000 lines at this point). I have a loot_dict within the module from which random items may be found, which simply contains the instances:
loot_dict = {
'Armor': [leather_armor, ....],
'Rings of Regeneration': [ring_of_regeneration...],
... }
I also have the main.py loop. Class instances like swords and rings can be found in the dungeon, and can be enhanced. For instance, a ring of regeneration can be enhanced throughout the game to self.regenerate = 2, 3...etc.
My problem is, when the player dies and is given the choice to play again, breaking out to the top level loop and restarting the game, if loot is found, it still has the 'enhanced' values. I want to simply reset or recreate all class instances every time the player restarts the loop and as a beginner, I can't figure out a way to do this without exiting the program and restarting it from the command line. I have been unable to grasp any solutions from similar topics. Lastly, if I have painted myself into a corner, as a last resort, is there a way to simply re-run the program from within the program?
| [
"Your loot dict has in real reference to objects and if you assign to player character any object from loot_dict you assign reference to the same object what is in your loot_dict. (Just like a pointer in cpp).\nYou need to create new instance of object every time the player gets new item.\nTo achive it You can do something like this:\nloot_dict={'Rings':[RingOfRegeneration]}\n\nNow u can call it:\nitem_class = random.choice(list(loot_dict.keys())) \nnew_item_instance = random.choice(loot_dict[item_class])() #Calling class __init__ method\n\nnew_item_instace is now fresh base statistics.\n"
] | [
1
] | [] | [] | [
"class",
"python",
"reset"
] | stackoverflow_0074644091_class_python_reset.txt |
Q:
Tkinter menu background is not changing
I am trying to create a menu for a tkinter project,
but I am facing a problem:
the menu background is not changing.
import tkinter as tk
root = tk.Tk()
Menu1= tk.Menu(root, background="red")
filemenu = tk.Menu(Menu1, tearoff=0)
filemenu.add_command(label="New")
filemenu.add_command(label="Open")
filemenu.add_command(label="Save")
filemenu.add_command(label="Save as...")
filemenu.add_command(label="Close")
Menu1.add_cascade(label="File", menu=filemenu)
root.config(menu=Menu1)
root.mainloop()
A:
If you want to change the background of the items in your File menu, you have to config that Menu instead of the root Menu1
filemenu.config(background='red')
If you want to color each menu item individually, you can set background for each add_command
filemenu.add_command(label="New", background='red')
filemenu.add_command(label="Open", background='orange')
filemenu.add_command(label="Save", background='yellow')
filemenu.add_command(label="Save as...", background='green')
filemenu.add_command(label="Close", background='blue')
| Tkinter menu background is not changing | I am trying to create a menu for a tkinter project,
but I am facing a problem:
the menu background is not changing.
import tkinter as tk
root = tk.Tk()
Menu1= tk.Menu(root, background="red")
filemenu = tk.Menu(Menu1, tearoff=0)
filemenu.add_command(label="New")
filemenu.add_command(label="Open")
filemenu.add_command(label="Save")
filemenu.add_command(label="Save as...")
filemenu.add_command(label="Close")
Menu1.add_cascade(label="File", menu=filemenu)
root.config(menu=Menu1)
root.mainloop()
| [
"If you want to change the background of the items in your File menu, you have to config that Menu instead of the root Menu1\nfilemenu.config(background='red')\n\nIf you want to color each menu item individually, you can set background for each add_command\nfilemenu.add_command(label=\"New\", background='red')\nfilemenu.add_command(label=\"Open\", background='orange')\nfilemenu.add_command(label=\"Save\", background='yellow')\nfilemenu.add_command(label=\"Save as...\", background='green')\nfilemenu.add_command(label=\"Close\", background='blue')\n\n"
] | [
0
] | [] | [] | [
"python",
"tkinter"
] | stackoverflow_0074644255_python_tkinter.txt |
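Pulling the Tkinter answer above into one runnable sketch: the background option goes on the Menu that holds the items (filemenu), not on the menubar. The activebackground option (hover color) is an addition not shown in the original answer, and on some platforms the top-level menu bar itself is drawn natively and may ignore color options.
import tkinter as tk

root = tk.Tk()

menubar = tk.Menu(root)
filemenu = tk.Menu(menubar, tearoff=0,
                   background="red",            # background of the dropdown items
                   activebackground="darkred")  # background while hovering an item
filemenu.add_command(label="New")
filemenu.add_command(label="Open")
filemenu.add_command(label="Save")
menubar.add_cascade(label="File", menu=filemenu)

root.config(menu=menubar)
root.mainloop()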
Q:
Fill up DataFrame counting to full cartesian product
Look at this code:
result=pd.DataFrame(df.groupby(['col1','col2'])['col3'].count())
It basically does what I want to with one minor issue: I want the result to have the full cartesian product of all occurring values of col1 and col2 as index. Of course, my command takes only those combinations of col1 and col2 into account which actually are present in df. I'd like to fill up the result with 0 entries at the non occuring elements of the cartesian product. How do I do that?
Example: Let's say df contains the values
('A',1,'Bob')
('A',1,'James')
('A',2,'Bond')
('B',3,'Alice')
('C',1,'Klaus')
('C',1,'Peter')
Then the result is
('A',1) : 2
('A',2) : 1
('B',3) : 1
('C',1) : 2
I want it to be filled up with
('A',3) : 0
('B',1) : 0
('B',2) : 0
('C',2) : 0
('C',3) : 0
A:
is this what you're looking for?
result = df.groupby(["col1", "col2"])["col3"].count().unstack(fill_value=0).stack()
result :
col1 col2
A 1 2
2 1
3 0
B 1 0
2 0
3 1
C 1 2
2 0
3 0
| Fill up DataFrame counting to full cartesian product | Look at this code:
result=pd.DataFrame(df.groupby(['col1','col2'])['col3'].count())
It basically does what I want to with one minor issue: I want the result to have the full cartesian product of all occurring values of col1 and col2 as index. Of course, my command takes only those combinations of col1 and col2 into account which actually are present in df. I'd like to fill up the result with 0 entries at the non occuring elements of the cartesian product. How do I do that?
Example: Let's say df contains the values
('A',1,'Bob')
('A',1,'James')
('A',2,'Bond')
('B',3,'Alice')
('C',1,'Klaus')
('C',1,'Peter')
Then the result is
('A',1) : 2
('A',2) : 1
('B',3) : 1
('C',1) : 2
I want it to be filled up with
('A',3) : 0
('B',1) : 0
('B',2) : 0
('C',2) : 0
('C',3) : 0
| [
"is this what you're looking for?\nresult = df.groupby([\"col1\", \"col2\"])[\"col3\"].count().unstack(fill_value=0).stack()\n\n\nresult :\ncol1 col2\nA 1 2\n 2 1\n 3 0\nB 1 0\n 2 0\n 3 1\nC 1 2\n 2 0\n 3 0\n\n\n"
] | [
2
] | [] | [] | [
"cartesian_product",
"count",
"pandas",
"python"
] | stackoverflow_0074643411_cartesian_product_count_pandas_python.txt |
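Another sketch for the cartesian-product question above: build the full index explicitly with pd.MultiIndex.from_product and reindex the counts, so missing combinations get 0. Column names follow the question.
import pandas as pd

df = pd.DataFrame(
    [('A', 1, 'Bob'), ('A', 1, 'James'), ('A', 2, 'Bond'),
     ('B', 3, 'Alice'), ('C', 1, 'Klaus'), ('C', 1, 'Peter')],
    columns=['col1', 'col2', 'col3'])

counts = df.groupby(['col1', 'col2'])['col3'].count()

# Full cartesian product of the occurring values; absent pairs are filled with 0
full_index = pd.MultiIndex.from_product(
    [df['col1'].unique(), df['col2'].unique()], names=['col1', 'col2'])
result = counts.reindex(full_index, fill_value=0)
print(result)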
Q:
Add value to OAuth2PasswordRequestForm in FastAPI
I want the client to pass additional information while logging in to FastAPI. I think for that I have to change the scheme for OAuth2PasswordRequestForm. Can anyone explain how to do that?
I'm using the code from the FastAPI tutorial right now:
https://fastapi.tiangolo.com/tutorial/security/oauth2-jwt/
A:
If I understand your question correctly, you would like to have the user pass the info required by OAuth2PasswordRequestForm and also include some extra required information.
The easiest way to do this would probably be to create your own scheme that is a subclass of OAuth2PasswordRequestForm.
import fastapi
from fastapi import Depends
from fastapi.security import OAuth2PasswordRequestForm
app = fastapi.FastAPI()
class ExtendedOAuth2PasswordRequestForm(OAuth2PasswordRequestForm):
extra_data_field: str
@app.post("/your_endpoint")
def login_for_access_token(form_data: ExtendedOAuth2PasswordRequestForm = Depends()):
#do stuff with all of the normal OAuth2PasswordRequestForm and your extra data field...
A:
You can add an extended parameter to the request body as follows:
And add the parameter to the form data on the frontend side:
| Add value to OAuth2PasswordRequestForm in FastAPI | I want the client to pass additional information while logging in to FastAPI. I think for that I have to change the scheme for OAuth2PasswordRequestForm. Can anyone explain how to do that?
I'm using the code from the FastAPI tutorial right now:
https://fastapi.tiangolo.com/tutorial/security/oauth2-jwt/
| [
"If I understand your question correctly, you would like to have the user pass the info required by OAuth2PasswordRequestForm and also include some extra required information.\nThe easiest way to do this would probably be to create your own scheme that is a subclass of OAuth2PasswordRequestForm.\n\n import fastapi\n from fastapi import Depends\n from fastapi.security import OAuth2PasswordRequestForm\n \n app = fastapi.FastAPI()\n\n class ExtendedOAuth2PasswordRequestForm(OAuth2PasswordRequestForm):\n extra_data_field: str\n\n @app.post(\"/your_endpoint\")\n def login_for_access_token(form_data: ExtendedOAuth2PasswordRequestForm = Depends()):\n #do stuff with all of the normal OAuth2PasswordRequestForm and your extra data field...\n\n",
"You can add extend parameter to body request as follow:\n\nAnd add the parameter to form data in frontend part:\n\n"
] | [
0,
0
] | [] | [] | [
"fastapi",
"python"
] | stackoverflow_0068973789_fastapi_python.txt |
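For the OAuth2PasswordRequestForm question above, a hedged sketch of the subclass approach that overrides __init__ so the extra value is read from the form body. The exact parent signature differs slightly between FastAPI versions, and device_id is a made-up example field.
from fastapi import Depends, FastAPI, Form
from fastapi.security import OAuth2PasswordRequestForm

app = FastAPI()

class ExtendedOAuth2PasswordRequestForm(OAuth2PasswordRequestForm):
    def __init__(
        self,
        username: str = Form(...),
        password: str = Form(...),
        scope: str = Form(""),
        device_id: str = Form(...),   # hypothetical extra field sent by the client
    ):
        super().__init__(grant_type="password", username=username, password=password,
                         scope=scope, client_id=None, client_secret=None)
        self.device_id = device_id

@app.post("/token")
def login_for_access_token(form_data: ExtendedOAuth2PasswordRequestForm = Depends()):
    return {"user": form_data.username, "device": form_data.device_id}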
Q:
Iterate and compare nth element with the rest of elements in list
I have a list of integers and have to check if all of the integers to the right (so, from each int to the end of the list) are strictly smaller than the one I'm iterating over.
E.g. for [42, 7, 12, 9, 2, 5] I have to compare if 42 is bigger than [7, 12, 9, 2, 5], if 7 is bigger than [12, 9, 2, 5] and so on to the end of the list.
The function has to return the amount of numbers that are bigger than the rest of the list, including the last one - as there's no bigger number after it.
Code I have now:
def count_dominators(items):
if len(items)< 2:
return len(items)
dominators_count = 0
for ind, el in enumerate(items[0:-1]):
if el > items[ind+1]:
dominators_count += 1
pass
I've tried iterating with nested for-loops, using enumerate() and iterating over that using itertools.combinations(), and combining those ways, but to no avail - I usually get errors linked to iterating (e.g. list index out of range, even though I used slicing to avoid it).
A:
I'd suggest using enumerate to iterate over the items with their indices (as you're already doing), and then using all to iterate over all the items after a given item, and sum to sum all of the all results:
>>> def count_dominators(items):
... return sum(all(n > m for m in items[i+1:]) for i, n in enumerate(items))
...
>>> count_dominators([42, 7, 12, 9, 2, 5])
4
Another approach would be to compare to the max of the remaining items, although then you need to special-case the end of the list (all returns True for an empty list, but max will raise an exception):
>>> def count_dominators(items):
... return sum(n > max(items[i+1:]) for i, n in enumerate(items[:-1])) + 1
...
>>> count_dominators([42, 7, 12, 9, 2, 5])
4
| Iterate and compare nth element with the rest of elements in list | I have a list of integers and have to check if all of the integers to the right (so, from each int to the end of the list) are strictly smaller than the one I'm iterating over.
E.g. for [42, 7, 12, 9, 2, 5] I have to compare if 42 is bigger than [7, 12, 9, 2, 5], if 7 is bigger than [12, 9, 2, 5] and so on to the end of the list.
The function has to return the amount of numbers that are bigger than the rest of the list, including the last one - as there's no bigger number after it.
Code I have now:
def count_dominators(items):
if len(items)< 2:
return len(items)
dominators_count = 0
for ind, el in enumerate(items[0:-1]):
if el > items[ind+1]:
dominators_count += 1
pass
I've tried iterating with nested for-loops, using enumerate() and iterating over that using itertools.combinations(), and combining those ways, but to no avail - I usually get errors linked to iterating (e.g. list index out of range, even though I used slicing to avoid it).
| [
"I'd suggest using enumerate to iterate over the items with their indices (as you're already doing), and then using all to iterate over all the items after a given item, and sum to sum all of the all results:\n>>> def count_dominators(items):\n... return sum(all(n > m for m in items[i+1:]) for i, n in enumerate(items))\n...\n>>> count_dominators([42, 7, 12, 9, 2, 5])\n4\n\nAnother approach would be to compare to the max of the remaining items, although then you need to special-case the end of the list (all returns True for an empty list, but max will raise an exception):\n>>> def count_dominators(items):\n... return sum(n > max(items[i+1:]) for i, n in enumerate(items[:-1])) + 1\n...\n>>> count_dominators([42, 7, 12, 9, 2, 5])\n4\n\n"
] | [
1
] | [] | [] | [
"python"
] | stackoverflow_0074644376_python.txt |
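Both variants in the answer above re-slice the list for every element, which is O(n^2). For the same count in a single pass, a right-to-left scan that tracks the running maximum works; a sketch:
def count_dominators(items):
    count = 0
    running_max = float("-inf")
    # Walking from the right, an element dominates exactly when it is
    # strictly greater than everything already seen (its right-hand side).
    for value in reversed(items):
        if value > running_max:
            count += 1
            running_max = value
    return count

print(count_dominators([42, 7, 12, 9, 2, 5]))  # 4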
Q:
Getting invalid signature for HMAC authentication of python pickle file
I am trying to use HMAC authentication for reading and write pickle files.
Sample Data :
import base64
import hashlib
import hmac
from datetime import datetime
import six
import pandas as pd
import pickle
df1 = pd.DataFrame({'id' : [1,2,3,4,5],
'score' : [720, 700, 710, 690, 670]})
df2 = pd.DataFrame({'name' : ['abc', 'pqr', 'xyz'],
'address' : ['1st st', '2nd ave', '3rd st'] })
mylist = ['a', 'b', 'c', 'd', 'e']
mydict = {1 : 'p', 2 : 'q', 3 : 'r'}
obj = [df1, df2, mylist, mydict]
Write pickle file using:
data = pickle.dumps(obj)
digest = hmac.new(b'unique-key-here', data, hashlib.blake2b).hexdigest()
with open('temp.txt', 'wb') as output:
output.write(bytes(digest, sys.stdin.encoding) + data)
But when I try to read it back using:
with open('temp.txt', 'rb') as f:
digest = f.readline()
data = f.read()
recomputed = hmac.new(b'unique-key-here', data, hashlib.blake2b).hexdigest()
if not compare_digest(digest, bytes(recomputed, sys.stdin.encoding)):
print('Invalid signature')
else:
print('Signature matching')
I am getting Invalid signature as output. Could someone please help me understand where I am going wrong.
A:
Here is some code illustrating what I think is a cleaner way to solve the problem.
import hashlib
import hmac
import io
import os
import pickle
sample_obj = {'hello': [os.urandom(50)]}
data = pickle.dumps(sample_obj)
# write it out
my_hmac = hmac.new(b'my_hmac_key', digestmod=hashlib.blake2b)
my_hmac.update(data)
mac_result = my_hmac.digest()
pickle_out = io.BytesIO()
pickle_out.write(mac_result + data)
# read it in
pickle_in = io.BytesIO(pickle_out.getbuffer().tobytes())
my_hmac = hmac.new(b'my_hmac_key', digestmod=hashlib.blake2b)
mac_from_stream = pickle_in.read(my_hmac.digest_size)
data_from_stream = pickle_in.read()
my_hmac.update(data_from_stream)
computed_mac = my_hmac.digest()
# see if they match
print(hmac.compare_digest(computed_mac, mac_from_stream))
We avoid hexdigest() all together and thus eliminate unnecessary encoding and decoding. We create the mac instance and keep it around so that we can get the hmac.digest_size property. The use of io.BytesIO is just for illustrating the I/O part of your code.
| Getting invalid signature for HMAC authentication of python pickle file | I am trying to use HMAC authentication for reading and write pickle files.
Sample Data :
import base64
import hashlib
import hmac
from datetime import datetime
import six
import pandas as pd
import pickle
df1 = pd.DataFrame({'id' : [1,2,3,4,5],
'score' : [720, 700, 710, 690, 670]})
df2 = pd.DataFrame({'name' : ['abc', 'pqr', 'xyz'],
'address' : ['1st st', '2nd ave', '3rd st'] })
mylist = ['a', 'b', 'c', 'd', 'e']
mydict = {1 : 'p', 2 : 'q', 3 : 'r'}
obj = [df1, df2, mylist, mydict]
Write pickle file using:
data = pickle.dumps(obj)
digest = hmac.new(b'unique-key-here', data, hashlib.blake2b).hexdigest()
with open('temp.txt', 'wb') as output:
output.write(bytes(digest, sys.stdin.encoding) + data)
But when I try to read it back using:
with open('temp.txt', 'rb') as f:
digest = f.readline()
data = f.read()
recomputed = hmac.new(b'unique-key-here', data, hashlib.blake2b).hexdigest()
if not compare_digest(digest, bytes(recomputed, sys.stdin.encoding)):
print('Invalid signature')
else:
print('Signature matching')
I am getting Invalid signature as output. Could someone please help me understand where I am going wrong.
| [
"Here is some code illustrating what I think is a cleaner way to solve the problem.\nimport hashlib\nimport hmac\nimport io\nimport os\nimport pickle\n\nsample_obj = {'hello': [os.urandom(50)]}\n\ndata = pickle.dumps(sample_obj)\n\n# write it out\n\nmy_hmac = hmac.new(b'my_hmac_key', digestmod=hashlib.blake2b)\nmy_hmac.update(data)\nmac_result = my_hmac.digest()\npickle_out = io.BytesIO()\npickle_out.write(mac_result + data)\n\n# read it in\n\npickle_in = io.BytesIO(pickle_out.getbuffer().tobytes())\nmy_hmac = hmac.new(b'my_hmac_key', digestmod=hashlib.blake2b)\nmac_from_stream = pickle_in.read(my_hmac.digest_size)\ndata_from_stream = pickle_in.read()\nmy_hmac.update(data_from_stream)\ncomputed_mac = my_hmac.digest()\n\n# see if they match\n\nprint(hmac.compare_digest(computed_mac, mac_from_stream))\n\nWe avoid hexdigest() all together and thus eliminate unnecessary encoding and decoding. We create the mac instance and keep it around so that we can get the hmac.digest_size property. The use of io.BytesIO is just for illustrating the I/O part of your code.\n"
] | [
1
] | [] | [] | [
"hmac",
"pickle",
"python"
] | stackoverflow_0074638045_hmac_pickle_python.txt |
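If you prefer to keep the file-based flow from the HMAC question above, the same fixed-size-prefix idea from the answer can be applied to a real file: the blake2b digest has a constant length, so read exactly that many bytes back instead of using readline(), which was the original bug. Key and filename below are placeholders.
import hashlib
import hmac
import pickle

KEY = b"unique-key-here"
obj = {"example": [1, 2, 3]}

# write: raw digest first, pickled payload right after it
data = pickle.dumps(obj)
mac = hmac.new(KEY, data, hashlib.blake2b).digest()
with open("temp.bin", "wb") as out:
    out.write(mac + data)

# read: the digest length is fixed, so slice it off with read(digest_size)
digest_size = hmac.new(KEY, digestmod=hashlib.blake2b).digest_size
with open("temp.bin", "rb") as f:
    stored_mac = f.read(digest_size)
    payload = f.read()

recomputed = hmac.new(KEY, payload, hashlib.blake2b).digest()
if hmac.compare_digest(stored_mac, recomputed):
    print("Signature matching:", pickle.loads(payload))
else:
    print("Invalid signature")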
Q:
Python Clean Specific Elements in List of Lists
I have a noisy list with 3 different rows that looks like this:
array = ['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',
'cup, egg, pillow, leash, banana, raindrop, phone, animal, shirt, basket',
'1. Dog 2. America 3. Notebook 4. Moisturizer 5. ADHD 6. Balloon 7. Contacts 8. Blanket 9. Home 10. Pencil']
How do I index each element in each row to normalize all the strings and remove any "," as well as any numerical values to look like this:
array = ['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',
'cup egg pillow leash banana raindrop phone animal shirt basket',
'Dog America Notebook Moisturizer ADHD Balloon Contacts Blanket Home Pencil']
I have tried:
for i in array:
for j in i:
j.strip(" ")
j.strip(",")
However am confused by the order and sequence in which to store the words back into their specific row. Thanks
A:
One approach is to use list comprehension (optional) and regular expressions, where a pattern can be set to keep only alphabetical characters. (e.g.: [a-zA-Z]+ meaning one or more alpha characters.)
The str.join() function is used to combine the regex search output into a single string, which is added as an element to the output list.
For example:
import re
out = [' '.join(re.findall('[a-zA-Z]+', i)) for i in array]
Output:
['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',
'cup egg pillow leash banana raindrop phone animal shirt basket',
'Dog America Notebook Moisturizer ADHD Balloon Contacts Blanket Home Pencil']
A:
Without using the RE library you can also continue the code you used earlier and just add a try/except block to look for numbers, hope this is also helpful.
out = []
for i in array:
row = []
for j in i.split():
j = j.strip(',.')
try:
int(j)
except:
row.append(j)
row = f"{' '.join(row)}"
out.append(row)
print(out)
| Python Clean Specific Elements in List of Lists | I have a noisy list with 3 different rows that looks like this:
array = ['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',
'cup, egg, pillow, leash, banana, raindrop, phone, animal, shirt, basket',
'1. Dog 2. America 3. Notebook 4. Moisturizer 5. ADHD 6. Balloon 7. Contacts 8. Blanket 9. Home 10. Pencil']
How do I index each element in each row to normalize all the strings and remove any "," as well as any numerical values to look like this:
array = ['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',
'cup egg pillow leash banana raindrop phone animal shirt basket',
'Dog America Notebook Moisturizer ADHD Balloon Contacts Blanket Home Pencil']
I have tried:
for i in array:
for j in i:
j.strip(" ")
j.strip(",")
However am confused by the order and sequence in which to store the words back into their specific row. Thanks
| [
"One approach is to use list comprehension (optional) and regular expressions, where a pattern can be set to keep only alphabetical characters. (e.g.: [a-zA-Z]+ meaning one or more alpha characters.)\nThe str.join() function is used to combine the regex search output into a single string, which is added as an element to the output list.\nFor example:\nimport re\n\nout = [' '.join(re.findall('[a-zA-Z]+', i)) for i in array]\n\nOutput:\n['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',\n 'cup egg pillow leash banana raindrop phone animal shirt basket',\n 'Dog America Notebook Moisturizer ADHD Balloon Contacts Blanket Home Pencil']\n\n",
"Without using the RE library you can also continue the code you used earlier and just add a try/except block to look for numbers, hope this is also helpful.\nout = []\nfor i in array:\n row = []\n for j in i.split():\n j = j.strip(',.')\n try:\n int(j)\n except:\n row.append(j)\n row = f\"{' '.join(row)}\"\n out.append(row)\nprint(out)\n\n"
] | [
2,
0
] | [] | [] | [
"list",
"python"
] | stackoverflow_0074633667_list_python.txt |
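A further sketch for the list-cleaning question above, without regex or try/except: drop digits and punctuation with a translation table, then collapse the whitespace by splitting and re-joining.
import string

array = ['Apple Mug Seaweed Wallet Toilet Bear Toy Key Alcohol Paper',
         'cup, egg, pillow, leash, banana, raindrop, phone, animal, shirt, basket',
         '1. Dog 2. America 3. Notebook 4. Moisturizer 5. ADHD 6. Balloon 7. Contacts 8. Blanket 9. Home 10. Pencil']

# Translation table that deletes every digit and punctuation character
drop = str.maketrans("", "", string.digits + string.punctuation)
out = [" ".join(row.translate(drop).split()) for row in array]
print(out)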
Q:
Service geckodriver unexpectedly exited. Status code was: -6
I am using Ubuntu 22.04, and I got this error when I tried to run Selenium tutorial:
selenium.common.exceptions.WebDriverException: Message: Service geckodriver unexpectedly exited. Status code was: -6
And this is the code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element(By.NAME, "q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
driver.quit()
Can someone help with the solution? Thanks for your help.
A:
Just to be sure, do you have Firefox installed? Does this thread help you?
| Service geckodriver unexpectedly exited. Status code was: -6 | I am using Ubuntu 22.04, and I got this error when I tried to run Selenium tutorial:
selenium.common.exceptions.WebDriverException: Message: Service geckodriver unexpectedly exited. Status code was: -6
And this is the code:
from selenium import webdriver
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
driver = webdriver.Firefox()
driver.get("http://www.python.org")
assert "Python" in driver.title
elem = driver.find_element(By.NAME, "q")
elem.clear()
elem.send_keys("pycon")
elem.send_keys(Keys.RETURN)
assert "No results found." not in driver.page_source
driver.close()
driver.quit()
Can someone help with the solution? Thanks for your help.
| [
"Just to be sure, do you have Firefox installed? Does this thread help for you?\n"
] | [
0
] | [] | [] | [
"python",
"selenium",
"selenium_webdriver",
"web_scraping"
] | stackoverflow_0074644408_python_selenium_selenium_webdriver_web_scraping.txt |
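For the geckodriver question above: status -6 means geckodriver aborted, which usually happens when it cannot launch a compatible Firefox (on Ubuntu 22.04 the snap-packaged Firefox is an often-reported culprit). A troubleshooting sketch that points Selenium 4 at explicit binary and driver paths; both paths are assumptions for a typical install.
from selenium import webdriver
from selenium.webdriver.firefox.options import Options
from selenium.webdriver.firefox.service import Service

options = Options()
options.binary_location = "/usr/bin/firefox"                      # assumed Firefox path
service = Service(executable_path="/usr/local/bin/geckodriver")   # assumed geckodriver path

driver = webdriver.Firefox(service=service, options=options)
driver.get("http://www.python.org")
print(driver.title)
driver.quit()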
Q:
Python Code Coverage and Multiprocessing
I use coveralls in combination with coverage.py to track python code coverage of my testing scripts. I use the following commands:
coverage run --parallel-mode --source=mysource --omit=*/stuff/idont/need.py ./mysource/tests/run_all_tests.py
coverage combine
coveralls --verbose
This works quite nicely with the exception of multiprocessing. Code executed by worker pools or child processes is not tracked.
Is there a possibility to also track multiprocessing code? Any particular option I am missing? Maybe adding wrappers to the multiprocessing library to start coverage every time a new process is spawned?
EDIT:
I (and jonrsharpe, also :-) found a monkey-patch for multiprocessing.
However, this does not work for me; my Travis CI build is killed almost right after the start. I checked the problem on my local machine and apparently adding the patch to multiprocessing busts my memory. Tests that take much less than 1GB of memory need more than 16GB with this fix.
EDIT2:
The monkey-patch does work after a small modification: Removing
the config_file parsing (config_file=os.environ['COVERAGE_PROCESS_START']) did the trick. This solved the issue of the bloated memory. Accordingly, the corresponding line simply becomes:
cov = coverage(data_suffix=True)
A:
Coverage 4.0 includes a command-line option --concurrency=multiprocessing to deal with this. You must use coverage combine afterward. For instance, if your tests are in regression_tests.py, then you would simply do this at the command line:
coverage run --concurrency=multiprocessing regression_tests.py
coverage combine
A:
I've spent some time trying to make sure coverage works with multiprocessing.Pool, but it never worked.
I have finally made a fix that makes it work - would be happy if someone directed me if I am doing something wrong.
https://gist.github.com/andreycizov/ee59806a3ac6955c127e511c5e84d2b6
A:
One of the possible causes of missing coverage data from forked processes, even with concurrency=multiprocessing, is the way of multiprocessing.Pool shutdown. For example, with statement leads to terminate() call (see __exit__ here). As a consequence, pool workers have no time to save coverage data. I had to use close(), timed join() (in a thread), terminate sequence instead of with to get coverage results saved.
| Python Code Coverage and Multiprocessing | I use coveralls in combination with coverage.py to track python code coverage of my testing scripts. I use the following commands:
coverage run --parallel-mode --source=mysource --omit=*/stuff/idont/need.py ./mysource/tests/run_all_tests.py
coverage combine
coveralls --verbose
This works quite nicely with the exception of multiprocessing. Code executed by worker pools or child processes is not tracked.
Is there a possibility to also track multiprocessing code? Any particular option I am missing? Maybe adding wrappers to the multiprocessing library to start coverage every time a new process is spawned?
EDIT:
I (and jonrsharpe, also :-) found a monkey-patch for multiprocessing.
However, this does not work for me; my Travis CI build is killed almost right after the start. I checked the problem on my local machine and apparently adding the patch to multiprocessing busts my memory. Tests that take much less than 1GB of memory need more than 16GB with this fix.
EDIT2:
The monkey-patch does work after a small modification: Removing
the config_file parsing (config_file=os.environ['COVERAGE_PROCESS_START']) did the trick. This solved the issue of the bloated memory. Accordingly, the corresponding line simply becomes:
cov = coverage(data_suffix=True)
| [
"Coverage 4.0 includes a command-line option --concurrency=multiprocessing to deal with this. You must use coverage combine afterward. For instance, if your tests are in regression_tests.py, then you would simply do this at the command line:\ncoverage run --concurrency=multiprocessing regression_tests.py\ncoverage combine\n\n",
"I've had spent some time trying to make sure coverage works with multiprocessing.Pool, but it never worked.\nI have finally made a fix that makes it work - would be happy if someone directed me if I am doing something wrong.\nhttps://gist.github.com/andreycizov/ee59806a3ac6955c127e511c5e84d2b6\n",
"One of the possible causes of missing coverage data from forked processes, even with concurrency=multiprocessing, is the way of multiprocessing.Pool shutdown. For example, with statement leads to terminate() call (see __exit__ here). As a consequence, pool workers have no time to save coverage data. I had to use close(), timed join() (in a thread), terminate sequence instead of with to get coverage results saved.\n"
] | [
28,
1,
0
] | [] | [] | [
"code_coverage",
"coverage.py",
"coveralls",
"multiprocessing",
"python"
] | stackoverflow_0028297497_code_coverage_coverage.py_coveralls_multiprocessing_python.txt |
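The options from the coverage question and answer above can also live in configuration so every run (local and CI) picks them up. A minimal .coveragerc sketch; after coverage run ./mysource/tests/run_all_tests.py, a plain coverage combine followed by coveralls works as before.
[run]
source = mysource
omit = */stuff/idont/need.py
parallel = True
concurrency = multiprocessing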
Q:
No module named kubernetes.dynamic.resource
I try to deploy some .yaml file with code of Kubernetes, but get error
TASK [/cur/develop/inno/777/name.k8s/roles/deploy_k8s_dashboard : Apply the Kubernetes dashboard] **************************************************************************************************************************************************************************************
Monday 17 October 2022 13:52:07 +0200 (0:00:00.836) 0:00:01.410 ********
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named kubernetes.dynamic.resource
fatal: [cibd1]: FAILED! => changed=false
error: No module named kubernetes.dynamic.resource
msg: Failed to import the required Python library (kubernetes) on bvm's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named kubernetes.dynamic.resource
fatal: [cibd1]: FAILED! => changed=false
error: No module named kubernetes.dynamic.resource
msg: Failed to import the required Python library (kubernetes) on bvm's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
Could you please help with advice, How can I fix it?
A:
I've had a similar problem. Locally I had to execute the following:
ansible-galaxy collection install kubernetes.core
On the target server make sure that you have python3 installed as python2 won't be enough. Once that is installed, I also had to define the below in my vars:
ansible_python_interpreter: /bin/python3
| No module named kubernetes.dynamic.resource | I try to deploy some .yaml file with code of Kubernetes, but get error
TASK [/cur/develop/inno/777/name.k8s/roles/deploy_k8s_dashboard : Apply the Kubernetes dashboard] **************************************************************************************************************************************************************************************
Monday 17 October 2022 13:52:07 +0200 (0:00:00.836) 0:00:01.410 ********
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named kubernetes.dynamic.resource
fatal: [cibd1]: FAILED! => changed=false
error: No module named kubernetes.dynamic.resource
msg: Failed to import the required Python library (kubernetes) on bvm's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
An exception occurred during task execution. To see the full traceback, use -vvv. The error was: ImportError: No module named kubernetes.dynamic.resource
fatal: [cibd1]: FAILED! => changed=false
error: No module named kubernetes.dynamic.resource
msg: Failed to import the required Python library (kubernetes) on bvm's Python /usr/bin/python. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
Could you please help with advice, How can I fix it?
| [
"I've had a similar problem. locally I had to execute the following\nansible-galaxy collection install kubernetes.core\n\nOn the target server make sure that you have python3 installed as python2 won't be enough. Once that is installed, I also had to define the below in my vars:\nansible_python_interpreter: /bin/python3\n\n"
] | [
0
] | [] | [] | [
"ansible",
"kubernetes",
"python"
] | stackoverflow_0074096933_ansible_kubernetes_python.txt |
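To make the answer above concrete for the kubernetes import error: the kubernetes Python package has to be importable by the interpreter Ansible uses on the target host. A hedged sketch (paths and task placement are assumptions):
# group_vars / inventory sketch: force the Python 3 interpreter
ansible_python_interpreter: /usr/bin/python3

# playbook task sketch: install the kubernetes package before using the k8s modules
- name: Install kubernetes Python package
  ansible.builtin.pip:
    name: kubernetes
    executable: pip3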
Q:
GitPython Submodule Tree Display
I'm working with a git repo that has multiple sub-modules beneath. I'm able to walk through the blobs and trees in that folder with no problem, but when I encounter a submodule, I receive the following error: AttributeError: Cannot retrieve the name of a submodule if it was not set initially.
The code I'm using looks like this:
from git import Repo
repo = Repo("repo_name")
origin = repo.remotes[0]
ref = origin.refs[0]
tree = ref.commit.tree['devices'] <- devices is a submodule
print tree.name
I'm not fully understanding why I cannot access the tree from the submodule. The submodule does exist, and is populated under the .git/modules folder, so which name does it expect to find?
edit: Looks like I've figured out what I've been doing wrong. The Submodule type is a separate repo, so I must treat it as such. In doing so, I must use the submodule.module() to get the proper repo, and then dig down into the tree like I did with my main repo. I was being a little optimistic to think that the tree would abstract all of that away, and keep digging at the submodule like another tree. It did not do that. :)
A:
For a full description on why Submodule objects retrieved from Trees are not fully functional, see https://github.com/gitpython-developers/GitPython/issues/1092
So the short answer: It is a bug/missing feature in GitPython that is still unsolved.
| GitPython Submodule Tree Display | I'm working with a git repo that has multiple sub-modules beneath. I'm able to walk through the blobs and trees in that folder with no problem, but when I encounter a submodule, I receive the following error: AttributeError: Cannot retrieve the name of a submodule if it was not set initially.
The code I'm using looks like this:
from git import Repo
repo = Repo("repo_name")
origin = repo.remotes[0]
ref = origin.refs[0]
tree = ref.commit.tree['devices'] <- devices is a submodule
print tree.name
I'm not fully understanding why I cannot access the tree from the submodule. The submodule does exist, and is populated under the .git/modules folder, so which name does it expect to find?
edit: Looks like I've figured out what I've been doing wrong. The Submodule type is a separate repo, so I must treat it as such. In doing so, I must use the submodule.module() to get the proper repo, and then dig down into the tree like I did with my main repo. I was being a little optimistic to think that the tree would abstract all of that away, and keep digging at the submodule like another tree. It did not do that. :)
| [
"For a full description on why Submodule objects retrieved from Trees are not fully functional, see https://github.com/gitpython-developers/GitPython/issues/1092\nSo the short answer: It is a bug/missing feature in GitPython that is still unsolved.\n"
] | [
0
] | [] | [] | [
"git",
"gitpython",
"python"
] | stackoverflow_0034360710_git_gitpython_python.txt |
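A short sketch of the resolution described in the edit of the GitPython question above: treat each submodule as its own repository via Submodule.module() and walk that repository's tree (module() raises if the submodule is not initialized and checked out; "repo_name" is the path from the question).
from git import Repo

repo = Repo("repo_name")
for sm in repo.submodules:          # Submodule objects of the superproject
    sub_repo = sm.module()          # the submodule's own Repo object
    tree = sub_repo.head.commit.tree
    for blob in tree.blobs:
        print(sm.name, blob.path)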
Q:
Repeat a function 3 times
I'm programming the "Rock, paper, scissors". I want now to run the function I added below 3 times. I tried using a for _ in range(s) but printed the same result 3 times.
import random
OPTIONS = ["Rock", "Paper", "Scissors"]
def get_user_input():
user_choice = input("Select your play (Rock, Paper or Scissors): ")
return user_choice
def random_choice():
computer_choice = random.choice(OPTIONS)
return computer_choice
def game(user_choice, computer_choice):
if user_choice == computer_choice:
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nDraw.")
else:
if computer_choice == "Rock" and user_choice == "Paper":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou won!")
if computer_choice == "Rock" and user_choice == "Scissors":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou lost...")
if computer_choice == "Paper" and user_choice == "Scissors":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou won!")
if computer_choice == "Paper" and user_choice == "Rock":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou lost...")
if computer_choice == "Scissors" and user_choice == "Rock":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou won!")
if computer_choice == "Scissors" and user_choice == "Paper":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou lost...")
def main():
user_choice = get_user_input()
computer_choice = random_choice()
for _ in range(3):
if game(user_choice, computer_choice):
break
else:
print("Game has ended!")
main()
Any tips on how to implement three rounds?
A:
I think you have to rework your code in main()
def main():
for _ in range(3):
user_choice = get_user_input()
computer_choice = random_choice()
game(user_choice, computer_choice)
print("Game has ended!")
Try something like this
A:
Your program works great! Just one small placement error: the random_choice needs to change with every iteration of the loop, so I moved the random_choice assignment inside it and that solved the problem. This is also if you want the player to only be able to make one choice.
def main():
user_choice = get_user_input()
for _ in range(3):
computer_choice = random_choice()
        game(user_choice, computer_choice)
print("Game has ended!")
main()
If you want the player to pick every round then it would look like this:
def main():
for _ in range(3):
user_choice = get_user_input()
computer_choice = random_choice()
        game(user_choice, computer_choice)
print("Game has ended!")
main()
| Repeat a function 3 times | I'm programming the "Rock, paper, scissors" game. I now want to run the function I added below 3 times. I tried using a for _ in range(3) loop, but it printed the same result 3 times.
import random
OPTIONS = ["Rock", "Paper", "Scissors"]
def get_user_input():
user_choice = input("Select your play (Rock, Paper or Scissors): ")
return user_choice
def random_choice():
computer_choice = random.choice(OPTIONS)
return computer_choice
def game(user_choice, computer_choice):
if user_choice == computer_choice:
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nDraw.")
else:
if computer_choice == "Rock" and user_choice == "Paper":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou won!")
if computer_choice == "Rock" and user_choice == "Scissors":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou lost...")
if computer_choice == "Paper" and user_choice == "Scissors":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou won!")
if computer_choice == "Paper" and user_choice == "Rock":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou lost...")
if computer_choice == "Scissors" and user_choice == "Rock":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou won!")
if computer_choice == "Scissors" and user_choice == "Paper":
print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\nYou lost...")
def main():
user_choice = get_user_input()
computer_choice = random_choice()
for _ in range(3):
if game(user_choice, computer_choice):
break
else:
print("Game has ended!")
main()
Any tips on how to implement three rounds?
| [
"I think you have to rework your code in main()\ndef main():\n\n for _ in range(3):\n user_choice = get_user_input()\n computer_choice = random_choice()\n game(user_choice, computer_choice)\n\n print(\"Game has ended!\")\n\nTry something like this\n",
"Your program works great! just one small placement error, the random_choice needs to change with every iteration of the loop so I moved the random_choice assignment variable in and that solved the problem. This is also if you want the player to only be able to make one choice.\ndef main():\n user_choice = get_user_input()\n for _ in range(3):\n computer_choice = random_choice()\n game('Rock', computer_choice)\n print(\"Game has ended!\")\n\nmain()\n\nif you want the player to pick every round then it would look like this;\ndef main():\n for _ in range(3):\n user_choice = get_user_input()\n computer_choice = random_choice()\n game('Rock', computer_choice)\n print(\"Game has ended!\")\n \n\nmain()\n\n"
] | [
2,
1
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074644420_python_python_3.x.txt |
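Since the original main() in the rock-paper-scissors question checked game()'s return value, a condensed sketch (not the answers' code) where game() returns the result string and main() tallies three rounds; it reuses get_user_input() and random_choice() from the question.
def game(user_choice, computer_choice):
    # pairs where the user's choice beats the computer's choice
    wins = {("Paper", "Rock"), ("Scissors", "Paper"), ("Rock", "Scissors")}
    if user_choice == computer_choice:
        result = "Draw."
    elif (user_choice, computer_choice) in wins:
        result = "You won!"
    else:
        result = "You lost..."
    print(f"You selected: {user_choice}.\nComputer selected: {computer_choice}.\n{result}")
    return result

def main():
    score = {"You won!": 0, "You lost...": 0, "Draw.": 0}
    for round_number in range(1, 4):
        print(f"--- Round {round_number} ---")
        user_choice = get_user_input()
        computer_choice = random_choice()
        score[game(user_choice, computer_choice)] += 1
    print("Game has ended!", score)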
Q:
Normalizing Nested JSON in python
I'm really new to APIs and Python.
I'm trying to convert an API request JSON that contains nested data into a pandas DataFrame to use in Power BI as an external Python file, but I can't figure out what's happening with my JSON normalizing. It is a paginated API, so I had to implement a loop to get all the data from it. I was expecting, after running my code, a clean DataFrame ready to import into Power BI. Can anyone help me?
The API response JSON looks like this:
"retorno": {
"produtos": [
{
"produto": {
"id": "15874512815",
"codigo": "005.02.G",
"descricao": "CALÇA NATUREZA OFF WHITE TAMANHO G",
"tipo": "P",
"situacao": "Ativo",
"unidade": "UN",
"preco": "172.9000000000",
"precoCusto": null,
"descricaoCurta": "<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\/span><\/span><\/p>\n<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\/span><\/span><\/p>",
"descricaoComplementar": "",
"dataInclusao": "2022-05-26",
"dataAlteracao": "2022-11-08",
"imageThumbnail": "http:\/\/orgbling.s3.amazonaws.com\/358dd3a99dc65df69a3f3852c88a9c7f\/t\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D",
"urlVideo": "",
"nomeFornecedor": "",
"codigoFabricante": "",
"marca": "",
"class_fiscal": "6006.32.20",
"cest": "",
"origem": "0",
"idGrupoProduto": "0",
"linkExterno": "",
"observacoes": "",
"grupoProduto": null,
"garantia": null,
"descricaoFornecedor": null,
"idFabricante": "",
"categoria": {
"id": "5108713",
"descricao": "Calças"
},
"pesoLiq": "0.30000",
"pesoBruto": "0.31000",
"estoqueMinimo": "0.00",
"estoqueMaximo": "0.00",
"gtin": "",
"gtinEmbalagem": "",
"larguraProduto": "1",
"alturaProduto": "1",
"profundidadeProduto": "1",
"unidadeMedida": "Centímetros",
"itensPorCaixa": 0,
"volumes": 0,
"localizacao": "",
"crossdocking": "0",
"condicao": "Não Especificado",
"freteGratis": "N",
"producao": "P",
"dataValidade": "0000-00-00",
"spedTipoItem": "",
"clonarDadosPai": "S",
"codigoPai": "005.02"
}
},
{
"produto": {
"id": "15874512814",
"codigo": "005.02.M",
"descricao": "CALÇA NATUREZA OFF WHITE TAMANHO M",
"tipo": "P",
"situacao": "Ativo",
"unidade": "UN",
"preco": "172.9000000000",
"precoCusto": null,
"descricaoCurta": "<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\/span><\/span><\/p>\n<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\/span><\/span><\/p>",
"descricaoComplementar": "",
"dataInclusao": "2022-05-26",
"dataAlteracao": "2022-11-08",
"imageThumbnail": "http:\/\/orgbling.s3.amazonaws.com\/358dd3a99dc65df69a3f3852c88a9c7f\/t\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D",
"urlVideo": "",
"nomeFornecedor": "",
"codigoFabricante": "",
"marca": "",
"class_fiscal": "6006.32.20",
"cest": "",
"origem": "0",
"idGrupoProduto": "0",
"linkExterno": "",
"observacoes": "",
"grupoProduto": null,
"garantia": null,
"descricaoFornecedor": null,
"idFabricante": "",
"categoria": {
"id": "5108713",
"descricao": "Calças"
},
"pesoLiq": "0.30000",
"pesoBruto": "0.31000",
"estoqueMinimo": "0.00",
"estoqueMaximo": "0.00",
"gtin": "",
"gtinEmbalagem": "",
"larguraProduto": "1",
"alturaProduto": "1",
"profundidadeProduto": "1",
"unidadeMedida": "Centímetros",
"itensPorCaixa": 0,
"volumes": 0,
"localizacao": "",
"crossdocking": "0",
"condicao": "Não Especificado",
"freteGratis": "N",
"producao": "P",
"dataValidade": "0000-00-00",
"spedTipoItem": "",
"clonarDadosPai": "S",
"codigoPai": "005.02"
}
},
My code looks like this:
import requests
import pandas as pd
from pandas import json_normalize
import json
BLING_SECRET_KEY = my_apikey
def list_products(page=1):
url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'
payload = {'apikey': BLING_SECRET_KEY,}
all_products = {'retorno': {'produtos': []}}
if page == 'all':
page = 1
while True:
url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'
produtos = requests.get(url, params=payload)
try:
pagina = produtos.json()['retorno']['produtos']
page += 1
for item in pagina:
all_products['retorno']['produtos'].append(item)
except KeyError:
break
df = json_normalize(all_products,
meta=['produtos'])
print(df)
produtos = list_products('all')
With this code, I'm getting the following result:
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
A:
if produtos = list_products('all') is the output df in the question.
produtos = list_products('all')
df = pd.json_normalize(produtos.explode('retorno.produtos')['retorno.produtos'])
'''
| | produto.id | produto.codigo | produto.descricao | produto.tipo | produto.situacao | produto.unidade | produto.preco | produto.precoCusto | produto.descricaoCurta | produto.descricaoComplementar | produto.dataInclusao | produto.dataAlteracao | produto.imageThumbnail | produto.urlVideo | produto.nomeFornecedor | produto.codigoFabricante | produto.marca | produto.class_fiscal | produto.cest | produto.origem | produto.idGrupoProduto | produto.linkExterno | produto.observacoes | produto.grupoProduto | produto.garantia | produto.descricaoFornecedor | produto.idFabricante | produto.categoria.id | produto.categoria.descricao | produto.pesoLiq | produto.pesoBruto | produto.estoqueMinimo | produto.estoqueMaximo | produto.gtin | produto.gtinEmbalagem | produto.larguraProduto | produto.alturaProduto | produto.profundidadeProduto | produto.unidadeMedida | produto.itensPorCaixa | produto.volumes | produto.localizacao | produto.crossdocking | produto.condicao | produto.freteGratis | produto.producao | produto.dataValidade | produto.spedTipoItem | produto.clonarDadosPai | produto.codigoPai |
|---:|-------------:|:-----------------|:------------------------------------|:---------------|:-------------------|:------------------|----------------:|---------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|:-----------------------|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------|:-------------------------|:---------------------------|:----------------|:-----------------------|:---------------|-----------------:|-------------------------:|:----------------------|:----------------------|-----------------------:|-------------------:|------------------------------:|:-----------------------|-----------------------:|:------------------------------|------------------:|--------------------:|------------------------:|------------------------:|:---------------|:------------------------|-------------------------:|------------------------:|------------------------------:|:------------------------|------------------------:|------------------:|:----------------------|-----------------------:|:-------------------|:----------------------|:-------------------|:-----------------------|:-----------------------|:-------------------------|--------------------:|
| 0 | 15874512815 | 005.02.G | CALÇA NATUREZA OFF WHITE TAMANHO G | P | Ativo | UN | 172.9 | nan | <p style="box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\/span><\/span><\/p> | | 2022-05-26 | 2022-11-08 | http:\/\/orgbling.s3.amazonaws.com\/358dd3a99dc65df69a3f3852c88a9c7f\/t\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D | | | | | 6006.32.20 | | 0 | 0 | | | nan | nan | nan | | 5108713 | Calças | 0.3 | 0.31 | 0 | 0 | | | 1 | 1 | 1 | Centímetros | 0 | 0 | | 0 | Não Especificado | N | P | 0000-00-00 | | S | 5.02 |
| | | | | | | | | | <p style="box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\/span><\/span><\/p> | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 15874512814 | 005.02.M | CALÇA NATUREZA OFF WHITE TAMANHO M | P | Ativo | UN | 172.9 | nan | <p style="box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\/span><\/span><\/p> | | 2022-05-26 | 2022-11-08 | http:\/\/orgbling.s3.amazonaws.com\/358dd3a99dc65df69a3f3852c88a9c7f\/t\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D | | | | | 6006.32.20 | | 0 | 0 | | | nan | nan | nan | | 5108713 | Calças | 0.3 | 0.31 | 0 | 0 | | | 1 | 1 | 1 | Centímetros | 0 | 0 | | 0 | Não Especificado | N | P | 0000-00-00 | | S | 5.02 |
| | | | | | | | | | <p style="box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;"><span style="box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\/span><\/span><\/p> | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
'''
Full code:
import requests
import pandas as pd
from pandas import json_normalize
import json
BLING_SECRET_KEY = my_apikey
def list_products(page=1):
url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'
payload = {'apikey': BLING_SECRET_KEY,}
all_products = {'retorno': {'produtos': []}}
if page == 'all':
page = 1
while True:
url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'
produtos = requests.get(url, params=payload)
try:
pagina = produtos.json()['retorno']['produtos']
page += 1
for item in pagina:
all_products['retorno']['produtos'].append(item)
except KeyError:
break
df = json_normalize(all_products,
meta=['produtos'])
return df
produtos = list_products('all')
df = pd.json_normalize(produtos.explode('retorno.produtos')['retorno.produtos'])
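A simpler variant of the answer above, as a sketch: since all_products['retorno']['produtos'] is already a plain list of {"produto": {...}} dicts, pd.json_normalize can flatten it directly inside list_products, with no explode step. The helper name below is illustrative.
import pandas as pd

def products_to_dataframe(all_products):
    records = all_products['retorno']['produtos']    # list collected in the while loop
    df = pd.json_normalize(records)                  # columns like produto.id, produto.categoria.descricao
    df.columns = df.columns.str.replace('produto.', '', regex=False)  # optional: drop the prefix
    return df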
| Normalizing Nested JSON in python | I'm really new to APIs and Python.
I'm trying to convert an API request JSON that contains nested data into a pandas DataFrame to use in Power BI as an external Python file, but I can't figure out what's happening with my JSON normalizing. It is a paginated API, so I had to implement a loop to get all the data from it. I was expecting, after running my code, a clean DataFrame ready to import into Power BI. Can anyone help me?
The API response JSON looks like this:
"retorno": {
"produtos": [
{
"produto": {
"id": "15874512815",
"codigo": "005.02.G",
"descricao": "CALÇA NATUREZA OFF WHITE TAMANHO G",
"tipo": "P",
"situacao": "Ativo",
"unidade": "UN",
"preco": "172.9000000000",
"precoCusto": null,
"descricaoCurta": "<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\/span><\/span><\/p>\n<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\/span><\/span><\/p>",
"descricaoComplementar": "",
"dataInclusao": "2022-05-26",
"dataAlteracao": "2022-11-08",
"imageThumbnail": "http:\/\/orgbling.s3.amazonaws.com\/358dd3a99dc65df69a3f3852c88a9c7f\/t\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D",
"urlVideo": "",
"nomeFornecedor": "",
"codigoFabricante": "",
"marca": "",
"class_fiscal": "6006.32.20",
"cest": "",
"origem": "0",
"idGrupoProduto": "0",
"linkExterno": "",
"observacoes": "",
"grupoProduto": null,
"garantia": null,
"descricaoFornecedor": null,
"idFabricante": "",
"categoria": {
"id": "5108713",
"descricao": "Calças"
},
"pesoLiq": "0.30000",
"pesoBruto": "0.31000",
"estoqueMinimo": "0.00",
"estoqueMaximo": "0.00",
"gtin": "",
"gtinEmbalagem": "",
"larguraProduto": "1",
"alturaProduto": "1",
"profundidadeProduto": "1",
"unidadeMedida": "Centímetros",
"itensPorCaixa": 0,
"volumes": 0,
"localizacao": "",
"crossdocking": "0",
"condicao": "Não Especificado",
"freteGratis": "N",
"producao": "P",
"dataValidade": "0000-00-00",
"spedTipoItem": "",
"clonarDadosPai": "S",
"codigoPai": "005.02"
}
},
{
"produto": {
"id": "15874512814",
"codigo": "005.02.M",
"descricao": "CALÇA NATUREZA OFF WHITE TAMANHO M",
"tipo": "P",
"situacao": "Ativo",
"unidade": "UN",
"preco": "172.9000000000",
"precoCusto": null,
"descricaoCurta": "<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\/span><\/span><\/p>\n<p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\/span><\/span><\/p>",
"descricaoComplementar": "",
"dataInclusao": "2022-05-26",
"dataAlteracao": "2022-11-08",
"imageThumbnail": "http:\/\/orgbling.s3.amazonaws.com\/358dd3a99dc65df69a3f3852c88a9c7f\/t\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D",
"urlVideo": "",
"nomeFornecedor": "",
"codigoFabricante": "",
"marca": "",
"class_fiscal": "6006.32.20",
"cest": "",
"origem": "0",
"idGrupoProduto": "0",
"linkExterno": "",
"observacoes": "",
"grupoProduto": null,
"garantia": null,
"descricaoFornecedor": null,
"idFabricante": "",
"categoria": {
"id": "5108713",
"descricao": "Calças"
},
"pesoLiq": "0.30000",
"pesoBruto": "0.31000",
"estoqueMinimo": "0.00",
"estoqueMaximo": "0.00",
"gtin": "",
"gtinEmbalagem": "",
"larguraProduto": "1",
"alturaProduto": "1",
"profundidadeProduto": "1",
"unidadeMedida": "Centímetros",
"itensPorCaixa": 0,
"volumes": 0,
"localizacao": "",
"crossdocking": "0",
"condicao": "Não Especificado",
"freteGratis": "N",
"producao": "P",
"dataValidade": "0000-00-00",
"spedTipoItem": "",
"clonarDadosPai": "S",
"codigoPai": "005.02"
}
},
My code looks like this:
import requests
import pandas as pd
from pandas import json_normalize
import json
BLING_SECRET_KEY = my_apikey
def list_products(page=1):
url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'
payload = {'apikey': BLING_SECRET_KEY,}
all_products = {'retorno': {'produtos': []}}
if page == 'all':
page = 1
while True:
url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'
produtos = requests.get(url, params=payload)
try:
pagina = produtos.json()['retorno']['produtos']
page += 1
for item in pagina:
all_products['retorno']['produtos'].append(item)
except KeyError:
break
df = json_normalize(all_products,
meta=['produtos'])
print(df)
produtos = list_products('all')
With this code, I'm getting the following result:
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
retorno.produtos
0 [{'produto': {'id': '15956635451', 'codigo': '...
| [
"if produtos = list_products('all') is the output df in the question.\nprodutos = list_products('all')\ndf = pd.json_normalize(produtos.explode('retorno.produtos')['retorno.produtos'])\n\n'''\n| | produto.id | produto.codigo | produto.descricao | produto.tipo | produto.situacao | produto.unidade | produto.preco | produto.precoCusto | produto.descricaoCurta | produto.descricaoComplementar | produto.dataInclusao | produto.dataAlteracao | produto.imageThumbnail | produto.urlVideo | produto.nomeFornecedor | produto.codigoFabricante | produto.marca | produto.class_fiscal | produto.cest | produto.origem | produto.idGrupoProduto | produto.linkExterno | produto.observacoes | produto.grupoProduto | produto.garantia | produto.descricaoFornecedor | produto.idFabricante | produto.categoria.id | produto.categoria.descricao | produto.pesoLiq | produto.pesoBruto | produto.estoqueMinimo | produto.estoqueMaximo | produto.gtin | produto.gtinEmbalagem | produto.larguraProduto | produto.alturaProduto | produto.profundidadeProduto | produto.unidadeMedida | produto.itensPorCaixa | produto.volumes | produto.localizacao | produto.crossdocking | produto.condicao | produto.freteGratis | produto.producao | produto.dataValidade | produto.spedTipoItem | produto.clonarDadosPai | produto.codigoPai |\n|---:|-------------:|:-----------------|:------------------------------------|:---------------|:-------------------|:------------------|----------------:|---------------------:|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:--------------------------------|:-----------------------|:------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|:-------------------|:-------------------------|:---------------------------|:----------------|:-----------------------|:---------------|-----------------:|-------------------------:|:----------------------|:----------------------|-----------------------:|-------------------:|------------------------------:|:-----------------------|-----------------------:|:------------------------------|------------------:|--------------------:|------------------------:|------------------------:|:---------------|:------------------------|-------------------------:|------------------------:|------------------------------:|:------------------------|------------------------:|------------------:|:----------------------|-----------------------:|:-------------------|:----------------------|:-------------------|:-----------------------|:-----------------------|:-------------------------|--------------------:|\n| 0 | 15874512815 | 005.02.G | CALÇA NATUREZA OFF WHITE TAMANHO G | P | Ativo | UN | 172.9 | nan | <p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; 
margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\\/span><\\/span><\\/p> | | 2022-05-26 | 2022-11-08 | http:\\/\\/orgbling.s3.amazonaws.com\\/358dd3a99dc65df69a3f3852c88a9c7f\\/t\\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D | | | | | 6006.32.20 | | 0 | 0 | | | nan | nan | nan | | 5108713 | Calças | 0.3 | 0.31 | 0 | 0 | | | 1 | 1 | 1 | Centímetros | 0 | 0 | | 0 | Não Especificado | N | P | 0000-00-00 | | S | 5.02 |\n| | | | | | | | | | <p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! <3<\\/span><\\/span><\\/p> | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |\n| 1 | 15874512814 | 005.02.M | CALÇA NATUREZA OFF WHITE TAMANHO M | P | Ativo | UN | 172.9 | nan | <p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px 1em; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Aquela calça super comfy e estilosa para te acompanhar nesse inverno!<\\/span><\\/span><\\/p> | | 2022-05-26 | 2022-11-08 | http:\\/\\/orgbling.s3.amazonaws.com\\/358dd3a99dc65df69a3f3852c88a9c7f\\/t\\/ff947f17eb154298f3fb991594ab17b2?AWSAccessKeyId=AKIATCLMSGFX4J7TU445&Expires=1669817234&Signature=PPHn8S%2FTDDIGI0nKS3dYXXwfLR4%3D | | | | | 6006.32.20 | | 0 | 0 | | | nan | nan | nan | | 5108713 | Calças | 0.3 | 0.31 | 0 | 0 | | | 1 | 1 | 1 | Centímetros | 0 | 0 | | 0 | Não Especificado | N | P | 0000-00-00 | | S | 5.02 |\n| | | | | | | | | | <p style=\"box-sizing: border-box; padding: 0px; border: 0px; outline: none; margin-block: 0px; color: #333333; font-family: Lato, sans-serif; font-size: 18px; background-color: #fefffe;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-size: 12px;\"><span style=\"box-sizing: border-box; padding: 0px; margin: 0px; border: 0px; outline: none; font-family: Tahoma, Geneva, sans-serif;\">Com moletom felpado e uma modelagem que abraça o corpo, ela é a nossa queridinha por aqui! 
<3<\\/span><\\/span><\\/p> | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |\n'''\n\nFull code:\nimport requests\nimport pandas as pd\nfrom pandas import json_normalize\nimport json\n\nBLING_SECRET_KEY = my_apikey\n\ndef list_products(page=1):\n url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'\n payload = {'apikey': BLING_SECRET_KEY,}\n all_products = {'retorno': {'produtos': []}}\n\n if page == 'all':\n page = 1\n \n while True:\n url = f'https://bling.com.br/Api/v2/produtos/page={page}/json/'\n produtos = requests.get(url, params=payload)\n try:\n pagina = produtos.json()['retorno']['produtos']\n page += 1\n for item in pagina:\n all_products['retorno']['produtos'].append(item)\n except KeyError:\n\n break\n\n df = json_normalize(all_products, \n meta=['produtos'])\n return df \n \n\nprodutos = list_products('all')\ndf = pd.json_normalize(produtos.explode('retorno.produtos')['retorno.produtos'])\n\n\n"
] | [
0
] | [] | [] | [
"api",
"json",
"pandas",
"powerbi",
"python"
] | stackoverflow_0074633613_api_json_pandas_powerbi_python.txt |
Q:
PySnmp query not working for reachable target but command line 'snmpget' succeeds
I need an SNMP server that can monitor an SNMP agent. For this purpose, I wrote a basic Python application, and I run an SNMP agent (based on the polinux/snmpd image) on the same network; the agent has a fixed IP address. When I run the SNMP get query from the server container, I get the desired OID, but when I try to do the same programmatically using PySnmp, it just doesn't work. I have tried lots of approaches but without success.
Do you have any idea?
My docker compose file:
services:
spm-health-service:
build:
context: .
dockerfile: docker/Dockerfile
restart: unless-stopped
env_file:
- .env
ports:
- "4500:4500"
depends_on:
- scs-server
- network-device-identifier
spm:
image: polinux/snmpd
restart: unless-stopped
privileged: true
ports:
- "161:161/udp"
volumes:
- ./resources/snmpd-conf/snmpd.conf:/etc/snmp/snmpd.conf
- ./resources/spm-mib/:/usr/local/share/snmp/mibs/
networks:
default:
ipv4_address: 10.5.0.5
networks:
default:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
gateway: 10.5.0.1
My command and result when I enter bash shell of "spm-health-service":
$ winpty docker exec -it 627 bash
root@627d241920e3:/app# snmpget -v2c -t 10 -c public 10.5.0.5 .1.3.6.1.2.1.1.3.0
Created directory: /var/lib/snmp/cert_indexes
iso.3.6.1.2.1.1.3.0 = Timeticks: (21437) 0:03:34.37
root@627d241920e3:/app#
My Python code:
def get_system_uptime(self) -> Union[str, None]:
"""Get system uptime"""
try:
result = snmp_get(self.ip, [SYS_UP_TIME], self.snmp_ro_credential)
except SNMPError as e:
logging.error(e.message)
return None
return result.get(SYS_UP_TIME, None)
- - - - - - -
def snmp_get(
target: str,
oids: list[str],
credentials: CommunityData = SNMP_RO_CREDENTIAL,
port: int = SNMP_DEFAULT_PORT,
) -> dict:
"""SNMP get"""
handler = getCmd(
SNMP_ENGINE,
credentials,
UdpTransportTarget((target, port), timeout=5, retries=10),
SNMP_CONTEXT,
*construct_object_types(oids),
)
return fetch(handler)[0]
- - - - - - -
def fetch(fetch_handler, count: int = -1) -> list[dict]:
"""Fetch"""
def convert(_var_binds):
"""Convert"""
return {str(var_bind[0]): var_bind[1] for var_bind in _var_binds}
result = []
while True:
if count == 0:
break
count -= 1
try:
error_indication, error_status, error_index, var_binds = next(fetch_handler)
if error_indication:
raise SNMPError(f"SNMP error: {error_indication}")
if error_status:
raise SNMPError(
f"SNMP error: {error_status.prettyPrint()} at \
{error_index and var_binds[int(error_index) - 1][0] or '?'}",
)
items = convert(var_binds)
result.append(items)
except StopIteration:
break
return result
- - - - - - - - - -
The exception I get:
spm-health-service-spm-health-service-1 | 2022-11-14 10:53:58,729 [INFO]: [get_spms_from_network_mapping] - line 32 | Query for 10.5.0.5
spm-health-service-spm-health-service-1 | 2022-11-14 10:53:58,792 [ERROR]: [get_spms_from_network_mapping] - line 36 | poll error: Traceback (most recent call last):
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/carrier/asyncore/dispatch.py", line 45, in runDispatcher
spm-health-service-spm-health-service-1 | loop(timeout or self.getTimerResolution(),
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 212, in loop
spm-health-service-spm-health-service-1 | poll_fun(timeout, map)
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 193, in poll2
spm-health-service-spm-health-service-1 | readwrite(obj, flags)
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 128, in readwrite
spm-health-service-spm-health-service-1 | obj.handle_error()
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 113, in readwrite
spm-health-service-spm-health-service-1 | obj.handle_read_event()
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 425, in handle_read_event
spm-health-service-spm-health-service-1 | self.handle_read()
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/carrier/asyncore/dgram/base.py", line 170, in handle_read
spm-health-service-spm-health-service-1 | self._cbFun(self, transportAddress, incomingMessage)
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/carrier/base.py", line 84, in _cbFun
spm-health-service-spm-health-service-1 | self.__recvCallables[recvId](
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/entity/engine.py", line 151, in __receiveMessageCbFun
spm-health-service-spm-health-service-1 | self.msgAndPduDsp.receiveMessage(
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/proto/rfc3412.py", line 291, in receiveMessage
spm-health-service-spm-health-service-1 | msgVersion = verdec.decodeMessageVersion(wholeMsg)
spm-health-service-spm-health-service-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/proto/api/verdec.py", line 15, in decodeMessageVersion
spm-health-service-spm-health-service-1 | seq, wholeMsg = decoder.decode(
spm-health-service-spm-health-service-1 | ^^^^^^^^^^^^^^^
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 2003, in __call__
spm-health-service-spm-health-service-1 | for asn1Object in streamingDecoder:
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 1918, in __iter__
spm-health-service-spm-health-service-1 | for asn1Object in self._singleItemDecoder(
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 1778, in __call__
spm-health-service-spm-health-service-1 | for value in concreteDecoder.valueDecoder(
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 654, in valueDecoder
spm-health-service-spm-health-service-1 | for chunk in substrateFun(asn1Object, substrate, length, options):
spm-health-service-spm-health-service-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
spm-health-service-spm-health-service-1 | ;TypeError: decodeMessageVersion.<locals>.<lambda>() takes 3 positional arguments but 4 were given
spm-health-service-spm-health-service-1 | caused by <class 'TypeError'>: decodeMessageVersion.<locals>.<lambda>() takes 3 positional arguments but 4 were given
A:
The issue was with the Pipfile; the logs indicated some dependency issues and conflicts. I do not remember why, but at some point the allow_prereleases flag was set to true in the file. When I set it to false and recreated the Pipfile.lock, the error went away.
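For reference, the flag in question lives in the Pipfile itself; a minimal sketch of the relevant section (only this block is shown, the [packages] and [dev-packages] sections stay as they are):
[pipenv]
allow_prereleases = false

After changing it, regenerating the lock file (for example with pipenv lock, then pipenv sync) re-resolves the pinned versions without pre-releases.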
| PySnmp query not working for reachable target but command line 'snmpget' succeeds | I need an SNMP server that can monitor an SNMP agent. For this purpose, I wrote a basic Python application, and I run an SNMP agent (polinux/snmpd image based), on the same network, the agent has a fixed IP address. When I run the SNMP get query from the server container, I get the desired OIB, but when I want to do the same programmatically using PySnmp, it just doesn't work. I have tried in a lots of ways but without success.
Do you have any idea?
My docker compose file:
services:
spm-health-service:
build:
context: .
dockerfile: docker/Dockerfile
restart: unless-stopped
env_file:
- .env
ports:
- "4500:4500"
depends_on:
- scs-server
- network-device-identifier
spm:
image: polinux/snmpd
restart: unless-stopped
privileged: true
ports:
- "161:161/udp"
volumes:
- ./resources/snmpd-conf/snmpd.conf:/etc/snmp/snmpd.conf
- ./resources/spm-mib/:/usr/local/share/snmp/mibs/
networks:
default:
ipv4_address: 10.5.0.5
networks:
default:
driver: bridge
ipam:
config:
- subnet: 10.5.0.0/16
gateway: 10.5.0.1
My command and result when I enter bash shell of "spm-health-service":
$ winpty docker exec -it 627 bash
root@627d241920e3:/app# snmpget -v2c -t 10 -c public 10.5.0.5 .1.3.6.1.2.1.1.3.0
Created directory: /var/lib/snmp/cert_indexes
iso.3.6.1.2.1.1.3.0 = Timeticks: (21437) 0:03:34.37
root@627d241920e3:/app#
My Python code:
def get_system_uptime(self) -> Union[str, None]:
"""Get system uptime"""
try:
result = snmp_get(self.ip, [SYS_UP_TIME], self.snmp_ro_credential)
except SNMPError as e:
logging.error(e.message)
return None
return result.get(SYS_UP_TIME, None)
- - - - - - -
def snmp_get(
target: str,
oids: list[str],
credentials: CommunityData = SNMP_RO_CREDENTIAL,
port: int = SNMP_DEFAULT_PORT,
) -> dict:
"""SNMP get"""
handler = getCmd(
SNMP_ENGINE,
credentials,
UdpTransportTarget((target, port), timeout=5, retries=10),
SNMP_CONTEXT,
*construct_object_types(oids),
)
return fetch(handler)[0]
- - - - - - -
def fetch(fetch_handler, count: int = -1) -> list[dict]:
"""Fetch"""
def convert(_var_binds):
"""Convert"""
return {str(var_bind[0]): var_bind[1] for var_bind in _var_binds}
result = []
while True:
if count == 0:
break
count -= 1
try:
error_indication, error_status, error_index, var_binds = next(fetch_handler)
if error_indication:
raise SNMPError(f"SNMP error: {error_indication}")
if error_status:
raise SNMPError(
f"SNMP error: {error_status.prettyPrint()} at \
{error_index and var_binds[int(error_index) - 1][0] or '?'}",
)
items = convert(var_binds)
result.append(items)
except StopIteration:
break
return result
- - - - - - - - - -
The exception I get:
spm-health-service-spm-health-service-1 | 2022-11-14 10:53:58,729 [INFO]: [get_spms_from_network_mapping] - line 32 | Query for 10.5.0.5
spm-health-service-spm-health-service-1 | 2022-11-14 10:53:58,792 [ERROR]: [get_spms_from_network_mapping] - line 36 | poll error: Traceback (most recent call last):
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/carrier/asyncore/dispatch.py", line 45, in runDispatcher
spm-health-service-spm-health-service-1 | loop(timeout or self.getTimerResolution(),
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 212, in loop
spm-health-service-spm-health-service-1 | poll_fun(timeout, map)
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 193, in poll2
spm-health-service-spm-health-service-1 | readwrite(obj, flags)
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 128, in readwrite
spm-health-service-spm-health-service-1 | obj.handle_error()
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 113, in readwrite
spm-health-service-spm-health-service-1 | obj.handle_read_event()
spm-health-service-spm-health-service-1 | ; File "/usr/local/lib/python3.11/asyncore.py", line 425, in handle_read_event
spm-health-service-spm-health-service-1 | self.handle_read()
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/carrier/asyncore/dgram/base.py", line 170, in handle_read
spm-health-service-spm-health-service-1 | self._cbFun(self, transportAddress, incomingMessage)
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/carrier/base.py", line 84, in _cbFun
spm-health-service-spm-health-service-1 | self.__recvCallables[recvId](
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/entity/engine.py", line 151, in __receiveMessageCbFun
spm-health-service-spm-health-service-1 | self.msgAndPduDsp.receiveMessage(
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/proto/rfc3412.py", line 291, in receiveMessage
spm-health-service-spm-health-service-1 | msgVersion = verdec.decodeMessageVersion(wholeMsg)
spm-health-service-spm-health-service-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pysnmp/proto/api/verdec.py", line 15, in decodeMessageVersion
spm-health-service-spm-health-service-1 | seq, wholeMsg = decoder.decode(
spm-health-service-spm-health-service-1 | ^^^^^^^^^^^^^^^
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 2003, in __call__
spm-health-service-spm-health-service-1 | for asn1Object in streamingDecoder:
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 1918, in __iter__
spm-health-service-spm-health-service-1 | for asn1Object in self._singleItemDecoder(
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 1778, in __call__
spm-health-service-spm-health-service-1 | for value in concreteDecoder.valueDecoder(
spm-health-service-spm-health-service-1 | ; File "/root/.local/share/virtualenvs/app-4PlAip0Q/lib/python3.11/site-packages/pyasn1/codec/ber/decoder.py", line 654, in valueDecoder
spm-health-service-spm-health-service-1 | for chunk in substrateFun(asn1Object, substrate, length, options):
spm-health-service-spm-health-service-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
spm-health-service-spm-health-service-1 | ;TypeError: decodeMessageVersion.<locals>.<lambda>() takes 3 positional arguments but 4 were given
spm-health-service-spm-health-service-1 | caused by <class 'TypeError'>: decodeMessageVersion.<locals>.<lambda>() takes 3 positional arguments but 4 were given
| [
"The issue was with Pipfile, the logs indicated some dependency issues and conflicts. I do not remember why but at some point the allow_prereleases flag was set to true in the file. When I set it to false and recreated the Pipfile.lock, the error gone away.\n"
] | [
0
] | [] | [] | [
"docker_compose",
"mib",
"pysnmp",
"python",
"snmp"
] | stackoverflow_0074430619_docker_compose_mib_pysnmp_python_snmp.txt |
Q:
Delta Lake Table Storage Sorting
I have a delta lake table and inserting the data into that table. Business asked to sort the data while storing it in the table.
I sorted my dataframe before creating the delta table as below
df.sort()
and then created the delta table as below
df.write.format('delta').option('mergeSchema', 'true').save('deltalocation')
when retrieving this data into dataframe i see the data is still unsorted.
and i have to do df.sort in order to display the sorted data.
Per my understanding the data cannot actually be stored in a sorted order and the user will have to write a sorting query while extracting the data from the table.
I need to understand if this is correct and also how the delta lake internally stores the data.
My understanding is that it partitions the data and doesn't care about the sort order. data is spread across multiple partitions.
Can someone please clarify this in more detail and advise if my understanding is correct?
A:
Delta Lake itself does not enable sorting, because that would require every engine writing to the table to sort the data. To balance simplicity, speed of ingestion, and speed of query, Delta Lake does not require or enforce sorting per se; i.e., your statement is correct.
My understanding is that it partitions the data and doesn't care about the sort order. data is spread across multiple partitions.
Note that Delta Lake includes data skipping and OPTIMIZE ZORDER. This allows you to skip files/data using the column statistics and by clustering the data. While sorting can be helpful for a single column, Z-order provides better multi-column data clustering. More info is available in Delta 2.0 - The Foundation of your Data Lakehouse is Open.
Saying this, how Delta Lake stores the data is often a product of what the writer itself is doing. If you were to specify a sort during the write phase, e.g.:
df_sorted = df.repartition("date").sortWithinPartitions("date", "id")
df_sorted.write.format("delta").partitionBy("date").save('deltalocation')
Then the data should be sorted and when read it will be sorted as well.
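As a hedged illustration of the points above (assuming the table was saved to 'deltalocation' as in the snippet, and that the runtime supports OPTIMIZE/ZORDER, e.g. Databricks or a recent Delta Lake release), Z-ordering plus an explicit read-side sort would look roughly like this:
# cluster the files by the columns you usually filter or sort on
spark.sql("OPTIMIZE delta.`deltalocation` ZORDER BY (date, id)")

# readers that need a strict output order should still sort explicitly
df = (spark.read.format("delta")
      .load("deltalocation")
      .orderBy("date", "id"))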
| Delta Lake Table Storage Sorting | I have a delta lake table and inserting the data into that table. Business asked to sort the data while storing it in the table.
I sorted my dataframe before creating the delta table as below
df.sort()
and then created the delta table as below
df.write.format('delta').option('mergeSchema', 'true').save('deltalocation')
when retrieving this data into dataframe i see the data is still unsorted.
and i have to do df.sort in order to display the sorted data.
Per my understanding the data cannot actually be stored in a sorted order and the user will have to write a sorting query while extracting the data from the table.
I need to understand if this is correct and also how the delta lake internally stores the data.
My understanding is that it partitions the data and doesn't care about the sort order. data is spread across multiple partitions.
Can someone please clarify this in more detail and advise if my understanding is correct?
| [
"Delta Lake itself does not itself enable sorting because this would require any engine writing to sort the data. To balance simplicity, speed of ingestion, and speed of query, this is why Delta Lake itself does not require or enable sorting per se. i.e., your statement is correct.\n\nMy understanding is that it partitions the data and doesn't care about the sort order. data is spread across multiple partitions.\n\nNote that Delta Lake includes data skipping and OPTIMIZE ZORDER. This allows you to skip files/data using the column statistics and by clustering the data. While sorting can be helpful for a single column, Z-order provides better multi-column data cluster. More info is available in Delta 2.0 - The Foundation of your Data Lakehouse is Open.\nSaying this, how Delta Lake stores the data is often a product of what the writer itself is doing. If you were to specify a sort during the write phase, e.g.:\ndf_sorted = df.repartition(\"date\").sortWithinPartitions(\"date\", \"id\")\ndf_sorted.write.format(\"delta\").partitionBy(\"date\").save('deltalocation')\n\nThen the data should be sorted and when read it will be sorted as well.\n"
] | [
0
] | [] | [] | [
"azure_data_lake",
"azure_databricks",
"delta_lake",
"pyspark",
"python"
] | stackoverflow_0074638658_azure_data_lake_azure_databricks_delta_lake_pyspark_python.txt |
Q:
Is there an Alternative to textvariable on ttk.Entry widget, for tk.Text widget on TK inter?
So according to the Tkinter documentation, the purpose of an Entry widget is to allow the user to enter or edit a single line of text. I want to enter and edit multiple lines of text, but there is a problem: the alternative suggested by tkinter, Text, doesn't do the job at all!
here is my code using ttk.Entry:
def _set_project_description_frame(self):
project_description_frame = ttk.Frame(self._main_frame)
ttk.Label(project_description_frame, text="Project Description").pack(side=tk.TOP, anchor="nw")
ttk.Entry(project_description_frame, textvariable=self._project_description).pack(side=tk.LEFT,
expand=True, fill=tk.X)
return project_description_frame
This produces the following tkinter output:
but what I really want is to have the description section actually look like a place where a description would go, so I replaced ttk.Entry with tk.Text:
def _set_project_description_frame(self):
project_description_frame = ttk.Frame(self._main_frame)
ttk.Label(project_description_frame, text="Project Description").pack(side=tk.TOP, anchor="nw")
tk.Text(project_description_frame, height=5).pack(side=tk.LEFT, expand=True, fill=tk.X)
return project_description_frame
The result shown in the picture bellow:
The last result is exactly what I want, but the problem is that without being able to pass textvariable=self._project_description to tk.Text, it's impossible to actually save whatever is typed into that text field.
So my question is, how do I fix the code in order to be able to capture what's input into the text field the same way ttk.Entry does when I can't pass textvariable=self._project_description to it?
A:
The Text widget doesn't support textvariable because a Text widget can contain more than just text. To get the content out of the Text widget you need to call the get method on the widget.
To get all of the contents of the widget you first need to keep a reference to the widget. Then it's just a matter of calling get with an index for the range of text you want. Typically the two indexes will be the strings "1.0" for the first character, and "end-1c" for the last character before the newline added by the text widget itself.
def _set_project_description_frame(self):
...
self.text_widget = tk.Text(project_description_frame, height=5)
self.text_widget.pack(side=tk.LEFT, expand=True, fill=tk.X)
...
With that, anywhere else in your code that needs to fetch the data can do this:
self.text_widget.get("1.0", "end-1c")
| Is there an Alternative to textvariable on ttk.Entry widget, for tk.Text widget on TK inter? | so according to Tkinter documentation, the purpose of an Entry widget is to allow the user to enter or edit a single line of text. I want to get enter and Edit multiple lines of text but there is a problem. The alternative suggested by tkinter, Text, doesn't do the job at all!
here is my code using ttk.Entry:
def _set_project_description_frame(self):
project_description_frame = ttk.Frame(self._main_frame)
ttk.Label(project_description_frame, text="Project Description").pack(side=tk.TOP, anchor="nw")
ttk.Entry(project_description_frame, textvariable=self._project_description).pack(side=tk.LEFT,
expand=True, fill=tk.X)
return project_description_frame
This produces the following tkinter output:
but what I really want is to have the description section actually look like a place where a description would go, so I replaced ttk.Entry with tk.Text:
def _set_project_description_frame(self):
project_description_frame = ttk.Frame(self._main_frame)
ttk.Label(project_description_frame, text="Project Description").pack(side=tk.TOP, anchor="nw")
tk.Text(project_description_frame, height=5).pack(side=tk.LEFT, expand=True, fill=tk.X)
return project_description_frame
The result shown in the picture bellow:
The last result is exactly what I want, but the problem is that without being able to pass textvariable=self._project_description to tk.Text, it's impossible to actually save whatever is typed into that text field.
So my question is, how do I fix the code in order to be able to capture what's input into the text field the same way ttk.Entry does when I can't pass textvariable=self._project_description to it?
| [
"The Text widget doesn't support textvariable because a Text widget can contain more than just text. To get the content out of the Text widget you need to call the get method on the widget.\nTo get all of the contents of the widget you first need to keep a reference to the widget. Then it's just a matter of calling get with an index for the range of text you want. Typically the two indexes will be the strings \"1.0\" for the first character, and \"end-1c\" for the last character before the newline added by the text widget itself.\ndef _set_project_description_frame(self):\n ...\n self.text_widget = tk.Text(project_description_frame, height=5)\n self.text_widget.pack(side=tk.LEFT, expand=True, fill=tk.X)\n ...\n\nWith that, anywhere else in your code that needs to fetch the data can do this:\nself.text_widget.get(\"1.0\", \"end-1c\")\n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter",
"user_interface"
] | stackoverflow_0074643264_python_tkinter_user_interface.txt |
Q:
How can I sort a column in python3?
I am creating a tool to automate some tasks. These tasks generate two DataFrames, but when concatenating them the columns are messed up as follows:
col2 col4 col3 col1
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
But I need to rearrange them so that they look like this:
col1 col2 col3 col4
0 a A 0 2
1 B A 1 1
2 c B 9 9
3 D NaN 4 8
4 e D 2 7
5 F C 3 4
Can someone help me?
I tried with sort_values, but it didn't work, and I can't find anywhere another way to try to solve the problem.
A:
use following code:
df.sort_index(axis=1)
A:
You can do:
df = df[sorted(df.columns.tolist())].copy()
A:
df = df[['col1', 'col2', 'col3', 'col4']]
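As a quick check with the frame from the question (any of the three approaches above gives the same column order; values taken from the first two rows):
import pandas as pd

df = pd.DataFrame({'col2': ['A', 'A'], 'col4': [2, 1],
                   'col3': [0, 1], 'col1': ['a', 'B']})
print(df.sort_index(axis=1))
#   col1 col2  col3  col4
# 0    a    A     0     2
# 1    B    A     1     1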
| How can I sort a column in python3? | I am creating a tool to automate some tasks. These tasks generate two DataFrames, but when concatenating them the columns are messed up as follows:
col2 col4 col3 col1
0 A 2 0 a
1 A 1 1 B
2 B 9 9 c
3 NaN 8 4 D
4 D 7 2 e
5 C 4 3 F
But I need to rearrange them so that they look like this:
col1 col2 col3 col4
0 a A 0 2
1 B A 1 1
2 c B 9 9
3 D NaN 4 8
4 e D 2 7
5 F C 3 4
Can someone help me?
I tried with sort_values, but it didn't work, and I can't find anywhere another way to try to solve the problem.
| [
"use following code:\ndf.sort_index(axis=1)\n\n",
"You can do:\ndf = df[sorted(df.columns.tolist())].copy()\n\n",
"df = df[['col1', 'col2', 'col3', 'col4']]\n\n"
] | [
1,
0,
0
] | [] | [] | [
"data_science",
"pandas",
"python",
"python_3.x"
] | stackoverflow_0074644592_data_science_pandas_python_python_3.x.txt |
Q:
Why is a simple in-place addition much faster with numba than numpy?
According to the snippet below, performing an in-place addition with a numba jit-compiled function is ~10 times faster than with numpy's ufunc.
This would be understandable with a function performing multiple numpy operations as explained in this question.
But here the improvement concern 1 simple numpy ufunc...
So why is numba so much faster ? I'm (naively ?) expecting that the numpy ufunc somehow internally uses some compiled code and that a task as simple as an addition would already be close to optimally optimized ?
More generally : should I expect such dramatic performance differences for other numpy functions ? Is there a way to predict when it's worth to re-write a function and numba-jit it ?
the code :
import numpy as np
import timeit
import numba
N = 200
target1 = np.ones( N )
target2 = np.ones( N )
# we're going to add these values :
addedValues = np.random.uniform( size=1000000 )
# into these positions :
indices = np.random.randint(N,size=1000000)
@numba.njit
def addat(target, index, tobeadded):
for i in range( index.size):
target[index[i]] += tobeadded[i]
# pre-run to jit compile the function
addat( target2, indices, addedValues)
target2 = np.ones( N ) # reset
npaddat = np.add.at
t1 = timeit.timeit( "npaddat( target1, indices, addedValues)", number=3, globals=globals())
t2 = timeit.timeit( "addat( target2, indices, addedValues)", number=3,globals=globals())
assert( (target1==target2).all() )
print("np.add.at time=",t1, )
print("jit-ed addat time =",t2 )
on my computer I get :
np.add.at time= 0.21222890191711485
jit-ed addat time = 0.003389443038031459
so more than a factor 10 improvement...
A:
The ufunc.add.at() is much more generic than your addat(). It iterates over the array elements and calls a unit operation function for each one. Call that unit operation add_vectors(): it adds two input vectors, where a vector means array elements that are C-contiguous and aligned, and it utilizes SIMD operations if possible.
Because ufunc.add.at() accesses elements randomly (not sequentially), add_vectors() has to be called separately for each pair of input elements. Your addat() does not pay this penalty, because Numba generates machine code that accesses the NumPy array elements directly.
You can see the overhead in the NumPy source, at this and this for example.
For your second question on the performance of other NumPy functions, I recommend experimenting yourself, because both NumPy and Numba do quite complex things behind the scenes. (My naive opinion is that a well-written Numba implementation of a ufunc operation will perform better than the NumPy implementation, because Numba also utilizes SIMD operations if possible.)
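Following that suggestion to experiment, here is a hedged benchmarking sketch (not part of the answer above): it times an alternative formulation of the same scatter-add based on np.bincount, which also runs in a single compiled pass and, for this particular access pattern, often lands much closer to the Numba version than np.add.at does. It assumes non-negative integer indices, as in the question.
import numpy as np
import timeit

N = 200
target = np.ones(N)
added_values = np.random.uniform(size=1_000_000)
indices = np.random.randint(N, size=1_000_000)

def addat_bincount(target, index, tobeadded):
    # np.bincount sums the weights per index in one compiled pass,
    # so adding its result is equivalent to np.add.at for this case
    target += np.bincount(index, weights=tobeadded, minlength=target.size)

t = timeit.timeit(lambda: addat_bincount(target, indices, added_values), number=3)
print("bincount-based scatter-add time =", t)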
| Why is a simple in-place addition much faster with numba than numpy? | According to the snippet below, performing an in-place addition with a numba jit-compiled function is ~10 times faster than with numpy's ufunc.
This would be understandable with a function performing multiple numpy operations as explained in this question.
But here the improvement concern 1 simple numpy ufunc...
So why is numba so much faster ? I'm (naively ?) expecting that the numpy ufunc somehow internally uses some compiled code and that a task as simple as an addition would already be close to optimally optimized ?
More generally : should I expect such dramatic performance differences for other numpy functions ? Is there a way to predict when it's worth to re-write a function and numba-jit it ?
the code :
import numpy as np
import timeit
import numba
N = 200
target1 = np.ones( N )
target2 = np.ones( N )
# we're going to add these values :
addedValues = np.random.uniform( size=1000000 )
# into these positions :
indices = np.random.randint(N,size=1000000)
@numba.njit
def addat(target, index, tobeadded):
for i in range( index.size):
target[index[i]] += tobeadded[i]
# pre-run to jit compile the function
addat( target2, indices, addedValues)
target2 = np.ones( N ) # reset
npaddat = np.add.at
t1 = timeit.timeit( "npaddat( target1, indices, addedValues)", number=3, globals=globals())
t2 = timeit.timeit( "addat( target2, indices, addedValues)", number=3,globals=globals())
assert( (target1==target2).all() )
print("np.add.at time=",t1, )
print("jit-ed addat time =",t2 )
on my computer I get :
np.add.at time= 0.21222890191711485
jit-ed addat time = 0.003389443038031459
so more than a factor 10 improvement...
| [
"The ufunc.add.at() is much more generic then your addat(). It iterates over the array elements and calls some unit operation function for each element. Let the unit operation function be add_vectors(). It adds two input vectors, where a vector means array elements in C-contiguous order and aligned. It utilizes SIMD operations if possible.\nBecause the ufunc.add.at() accesses elements randomly(not sequentially), the add_vectors() should be called multiple times for each pair of input elements. But your addat() does not have this penalty because Numba generates a machine code that accesses Numpy array elements directly.\nYou can see the overhead in the Numpy source at this and this for example.\nFor your second question on the performance of other Numpy functions, I recommend to experiment by yourself, because both Numpy and Numba do so complex operations behind the scene.(My naive opinion is that a well written Numba implementation for a ufunc operation will perform better than the Numpy implementation, because Numba also utilizes SIMD operations if possible.)\n"
] | [
1
] | [] | [] | [
"numba",
"numpy",
"python"
] | stackoverflow_0074640730_numba_numpy_python.txt |
Q:
Get specific data from dictionary generated from yaml
I have a yaml that I need to process with a python script, the yaml is something like this:
user: john
description: Blablabla
version: 1
data:
data1: {type : bool, default: 0, flag: True}
data2: {type : bool, default: 0, flag: True}
data3: {type : float, default: 0, flag: false}
I need a list of the names of all the data entries that, for example, are bools or floats, or of all the ones where "flag" equals True or False, but I'm having difficulty moving around the structure and getting what I need.
I've tried something like this:
x = raw_data.items()
for i in range(len(x['data'])):
if (x['data'][i]).get('type') == "bool":
print(x['data'][i])
But then I get an error: TypeError: 'dict_items' object is not subscriptable
A:
Here, x needs to be the parsed YAML dictionary itself rather than raw_data.items(), which is what triggers the TypeError. Also note that each value under x['data'] is itself a dictionary such as {'type': 'bool', 'default': 0, 'flag': True}, so the declared type is stored under the 'type' key and is not the Python type of the value. You can use the items() method to get the entries and iterate over them, checking that key. Here's an example of how you can do this:
# Get the items in the 'data' dictionary
data_items = x['data'].items()

# Iterate over the items in the dictionary
for key, value in data_items:
    # Check the declared type stored in the sub-dictionary
    if value.get('type') == 'bool':
        print(key)
In this code, key will be the name of the data (e.g. data1, data2, etc.), and value will be the dictionary containing the type, default value, and flag. You can then check the values in this dictionary using the get() method, as you did in your code.
A:
Given a file called yaml.yaml containing your supplied yaml data:
user: john
description: Blablabla
version: 1
data:
data1: {type : bool, default: 0, flag: True}
data2: {type : bool, default: 0, flag: True}
data3: {type : float, default: 0, flag: false}
You can parse the file with PyYAML in a standard way, which is covered in other answers such as this one. For example:
import yaml
with open("yaml.yaml", "r") as stream:
try:
my_dict = yaml.safe_load(stream)
print(my_dict)
except yaml.YAMLError as exc:
print(exc)
Yields a print out of your YAML as a dict:
$ python.exe yaml_so.py
{'user': 'john', 'description': 'Blablabla', 'version': 1, 'data': {'data1': {'type': 'bool', 'default': 0, 'flag': True}, 'data2': {'type': 'bool', 'default': 0, 'flag': True}, 'data3': {'type': 'float', 'default': 0, 'flag': False}}}
Creating a list of all data items where flag == True can be accomplished by using dict.items() to iterate a view of the dictionary, for example:
for key, sub_dict in my_dict["data"].items():
if "flag" in sub_dict:
for sub_key, value in sub_dict.items():
if sub_key == "flag" and value:
print(f"sub_dict_entry: {key} -> key: {sub_key} -> value: {value}")
Yields:
sub_dict_entry: data1 -> key: flag -> value: True
sub_dict_entry: data2 -> key: flag -> value: True
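To get back to the original goal (listing names by their declared type, or by flag), a short sketch building on the my_dict parsed above:
# names of entries whose declared type is bool or float
bool_or_float = [name for name, spec in my_dict["data"].items()
                 if spec.get("type") in ("bool", "float")]
print(bool_or_float)   # ['data1', 'data2', 'data3']

# names of entries whose flag is True
flagged = [name for name, spec in my_dict["data"].items()
           if spec.get("flag") is True]
print(flagged)         # ['data1', 'data2']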
| Get specific data from dictionary generated from yaml | I have a yaml that I need to process with a python script, the yaml is something like this:
user: john
description: Blablabla
version: 1
data:
data1: {type : bool, default: 0, flag: True}
data2: {type : bool, default: 0, flag: True}
data3: {type : float, default: 0, flag: false}
I need a list of the names of all the data entries that, for example, are bools or floats, or of all the ones where "flag" equals True or False, but I'm having difficulty moving around the structure and getting what I need.
I've tried something like this:
x = raw_data.items()
for i in range(len(x['data'])):
if (x['data'][i]).get('type') == "bool":
print(x['data'][i])
But then I get an error: TypeError: 'dict_items' object is not subscriptable
| [
"You can use the type() function to check the type of an object in Python. In this case, x['data'] returns a dictionary, so you can use the items() method to get the items in the dictionary and then iterate over them to check their values. Here's an example of how you can do this:\n# Get the items in the 'data' dictionary\ndata_items = x['data'].items()\n\n# Iterate over the items in the dictionary\nfor key, value in data_items:\n # Check the type of the value\n if type(value) == bool:\n print(key)\n\n\nIn this code, key will be the name of the data (e.g. data1, data2, etc.), and value will be the dictionary containing the type, default value, and flag. You can then check the values in this dictionary using the get() method, as you did in your code.\n",
"Given a file called yaml.yaml containing your supplied yaml data:\nuser: john\ndescription: Blablabla\nversion: 1\ndata: \n data1: {type : bool, default: 0, flag: True}\n data2: {type : bool, default: 0, flag: True}\n data3: {type : float, default: 0, flag: false}\n\nYou can parse the file with PyYAML in a standard way, which is covered in other answers such as this one. For example:\nimport yaml\n\nwith open(\"yaml.yaml\", \"r\") as stream:\n try:\n my_dict = yaml.safe_load(stream)\n print(my_dict)\n except yaml.YAMLError as exc:\n print(exc)\n\nYields a print out of your YAML as a dict:\n$ python.exe yaml_so.py \n{'user': 'john', 'description': 'Blablabla', 'version': 1, 'data': {'data1': {'type': 'bool', 'default': 0, 'flag': True}, 'data2': {'type': 'bool', 'default': 0, 'flag': True}, 'data3': {'type': 'float', 'default': 0, 'flag': False}}}\n\nCreating a list of all data items where flag == True can be accomplished by using dict.items() to iterate a view of the dictionary, for example:\nfor key, sub_dict in my_dict[\"data\"].items():\n if \"flag\" in sub_dict:\n for sub_key, value in sub_dict.items():\n if sub_key == \"flag\" and value:\n print(f\"sub_dict_entry: {key} -> key: {sub_key} -> value: {value}\")\n\nYields:\nsub_dict_entry: data1 -> key: flag -> value: True\nsub_dict_entry: data2 -> key: flag -> value: True\n\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_3.x",
"yaml"
] | stackoverflow_0074644248_python_python_3.x_yaml.txt |
Q:
How to display all returned photos on a kivy app using python
I am making a project involving looking up images. I decided to use kivy, creating a system where you can input a name of a folder (located in a specific directory) into a search bar, and it will return all images inside said folder tagged '.jpg'. I am stuck with updating the window to display these images, and I can't find anything online which helps.
from kivy.properties import StringProperty, ObjectProperty
from kivy.uix.screenmanager import Screen
from kivymd.app import MDApp
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.widget import Widget
from kivy.uix.image import Image
from kivy.core.window import Window
import glob
import os
os.chdir(r'directory/of/imagefolders')
class functions:
def lookup(name):
root = f'imagefolder'
for root, subdirs, files in os.walk(f'imagefolder'):
for d in subdirs:
if d == name:
print(d)
path = f'imagefolder/'+d
print(path)
return path
def load_images(f):
facefile = []
for images in glob.glob(f + '\*.jpg'):
facefile.append(images)
print(facefile)
return facefile
class MainApp(MDApp):
def build(self):
Window.clearcolor = (1,1,1,1)
layout = GridLayout(cols=2, row_force_default=True, row_default_height=40,
spacing=10, padding=20)
self.val = TextInput(text="Enter name")
submit = Button(text='Submit', on_press=self.submit)
layout.add_widget(self.val)
layout.add_widget(submit)
return layout
def submit(self,obj):
print(self.val.text)
layout2 = GridLayout(cols=2, row_force_default=True, row_default_height=40,
spacing=10, padding=20)
name = self.val.text
f = functions.lookup(name)
print(f)
facefile = functions.load_images(f)
x = len(facefile)
print(x)
for i in range(0, x):
print(facefile[i])
self.img = Image(source=facefile[i])
self.img.size_hint_x = 1
self.img.size_hint_y = 1
self.img.pos = (200,100)
self.img.opacity = 1
layout2.add_widget(self.img)
return layout2
MainApp().run()
This is what I tried, but the window doesn't update. All images are returned (demonstrated by print(facefile[i])), but nothing happens with them. Any help would be massively appreciated
A:
Your submit() method is creating Image widgets and adding them to a newly created layout2. You are returning that new layout2 from the method, but that does not add it to your GUI. Try replacing:
layout2.add_widget(self.img)
with:
self.root.add_widget(self.img)
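Putting that together, a hedged sketch of how submit() might look with this change (lookup() and load_images() as defined in the question; the sizing and positioning tweaks from the original loop are omitted):
def submit(self, obj):
    name = self.val.text
    f = functions.lookup(name)
    facefile = functions.load_images(f)
    for path in facefile:
        img = Image(source=path)
        # self.root is the layout returned by build(), so the window updates
        self.root.add_widget(img)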
| How to display all returned photos on a kivy app using python | I am making a project involving looking up images. I decided to use kivy, creating a system where you can input a name of a folder (located in a specific directory) into a search bar, and it will return all images inside said folder tagged '.jpg'. I am stuck with updating the window to display these images, and I can't find anything online which helps.
from kivy.properties import StringProperty, ObjectProperty
from kivy.uix.screenmanager import Screen
from kivymd.app import MDApp
from kivy.uix.textinput import TextInput
from kivy.uix.button import Button
from kivy.uix.gridlayout import GridLayout
from kivy.uix.widget import Widget
from kivy.uix.image import Image
from kivy.core.window import Window
import glob
import os
os.chdir(r'directory/of/imagefolders')
class functions:
def lookup(name):
root = f'imagefolder'
for root, subdirs, files in os.walk(f'imagefolder'):
for d in subdirs:
if d == name:
print(d)
path = f'imagefolder/'+d
print(path)
return path
def load_images(f):
facefile = []
for images in glob.glob(f + '\*.jpg'):
facefile.append(images)
print(facefile)
return facefile
class MainApp(MDApp):
def build(self):
Window.clearcolor = (1,1,1,1)
layout = GridLayout(cols=2, row_force_default=True, row_default_height=40,
spacing=10, padding=20)
self.val = TextInput(text="Enter name")
submit = Button(text='Submit', on_press=self.submit)
layout.add_widget(self.val)
layout.add_widget(submit)
return layout
def submit(self,obj):
print(self.val.text)
layout2 = GridLayout(cols=2, row_force_default=True, row_default_height=40,
spacing=10, padding=20)
name = self.val.text
f = functions.lookup(name)
print(f)
facefile = functions.load_images(f)
x = len(facefile)
print(x)
for i in range(0, x):
print(facefile[i])
self.img = Image(source=facefile[i])
self.img.size_hint_x = 1
self.img.size_hint_y = 1
self.img.pos = (200,100)
self.img.opacity = 1
layout2.add_widget(self.img)
return layout2
MainApp().run()
This is what I tried, but the window doesn't update. All images are returned (demonstrated by print(facefile[i])), but nothing happens with them. Any help would be massively appreciated
| [
"Your submit() method is creating Image widgets and adding them to a newly created layout2, You are returning that new layout2 from that method, but that does not add it to your GUI. Try replacing:\nlayout2.add_widget(self.img)\n\nwith:\nself.root.add_widget(self.img)\n\n"
] | [
0
] | [] | [] | [
"image",
"kivy",
"kivymd",
"python"
] | stackoverflow_0074644180_image_kivy_kivymd_python.txt |
Q:
changing an iterative function into a recursive one
def itr(n):
s = 0
for i in range(0, n+1):
s = s + i * i
return s
This is a simple iterative function that i would like to change into a recursive function.
def rec(n):
import math
if n!=0:
s=n-(2*math.sqrt(n))
if s!=0:
return(s+rec(n))
else:
return(n)
else:
return n
This is my attempt at doing that, but I cannot quite get it right.
It would be nice if someone could explain to me why my solution does not work and even if you do not want to, if you could instead just give me the solution.
A:
def recursive(total, n):
if n == 0:
return total
else:
return recursive(total + n * n, n - 1)
A couple of thoughts:
This can be refactored to use only a single argument, but by supplying both the running total and the current iteration count, it is easier to see how to transform the iterative approach into a recursive one.
While this function can be made recursive, it should not be, as there is no advantage over the iterative approach.
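For completeness, a sketch of the single-argument version mentioned above, which mirrors the original itr(n) signature (the two-argument version is called as recursive(0, n)):
def rec(n):
    if n == 0:
        return 0
    return n * n + rec(n - 1)

# rec(5) == 55 == itr(5)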
| changing an iterative function into a recursive one | def itr(n):
s = 0
for i in range(0, n+1):
s = s + i * i
return s
This is a simple iterative function that i would like to change into a recursive function.
def rec(n):
import math
if n!=0:
s=n-(2*math.sqrt(n))
if s!=0:
return(s+rec(n))
else:
return(n)
else:
return n
This is my attempt at doing that, but I cannot quite get it right.
It would be nice if someone could explain to me why my solution does not work and even if you do not want to, if you could instead just give me the solution.
| [
"def recursive(total, n):\n if n == 0:\n return total\n else:\n return recursive(total + n * n, n - 1)\n\nA couple of thought\n\nThis can be refactored to using only a single argument, but by supplying both the total and the current iteration count, it is easier to see how to transform the iterative approach to a recursive one.\nWhile this function can be made recursive, it should not be, as there is no advantage over the iterative approach.\n\n"
] | [
0
] | [] | [] | [
"loops",
"python",
"recursion"
] | stackoverflow_0074644545_loops_python_recursion.txt |
Q:
If I'm printing (0,n) in a for loop, how do i make the program print the last number?
I'm coding a simple for loop to print out all the numbers of a user inputted n using this code:
if __name__ == '__main__':
n = int(input())
for i in range (1,n):
print(i, end=" ")
I expected a result like:
Input:
5
Output:
1 2 3 4 5
but instead, I am getting this output:
1 2 3 4
A:
Just add 1. The range function, when used as range(start, stop), does not include stop, and it stops when the next number is greater than or equal to stop.
if __name__ == '__main__':
n = int(input())
for i in range(1,n + 1):
print(i, end=" ")
| If I'm printing (0,n) in a for loop, how do i make the program print the last number? | I'm coding a simple for loop to print out all the numbers of a user inputted n using this code:
if __name__ == '__main__':
n = int(input())
for i in range (1,n):
print(i, end=" ")
I expected a result like:
Input:
5
Output:
1 2 3 4 5
but instead, I am getting this output:
1 2 3 4
| [
"Just add 1. The range function, when used as range(start, stop), does not include stop, and it stops when the next number is greater than or equal to stop.\nif __name__ == '__main__':\n n = int(input())\n for i in range(1,n + 1):\n print(i, end=\" \")\n\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0074644680_python.txt |
Q:
How to summarise multiple arrays from one df row into one array?
How do I "extract" a row from a dataframe containing multiple arrays and transfer it into a single array?
data = numpy.array([([6,5,6], [2,6,3], [3,4,5]), ([0,9,4], [7,6,5], [8,2,4]), (1,3,5)])
df = pd.DataFrame(data)
print(df)
target1 = [6,5,6,2,6,3,3,4,5]
target2 = [0,9,4,7,6,5,8,2,4]
print(target, target2)
A:
Extract the row using whichever method you prefer and save it as a list
Flatten that list of lists, for example with a nested list comprehension:

row = list(df.loc[0])
target = [val for sublist in row for val in sublist]
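The same idea with itertools, shown for the second row (assuming df is the frame built in the question and the selected row holds list-like cells):
from itertools import chain

target2 = list(chain.from_iterable(df.loc[1]))   # [0, 9, 4, 7, 6, 5, 8, 2, 4]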
| How to summarise multiple arrays from one df row into one array? | How do I "extract" a row from a dataframe containing multiple arrays and transfer it into a single array?
data = numpy.array([([6,5,6], [2,6,3], [3,4,5]), ([0,9,4], [7,6,5], [8,2,4]), (1,3,5)])
df = pd.DataFrame(data)
print(df)
target1 = [6,5,6,2,6,3,3,4,5]
target2 = [0,9,4,7,6,5,8,2,4]
print(target, target2)
| [
"\nExtract the row using whichever method you prefer and save it as a list\nUse any of these methods to flatten your list\n\nrow = list(df.loc[0])\ntarget = [val for val in sublist for sublist in row]\n\n"
] | [
1
] | [] | [] | [
"arrays",
"pandas",
"python"
] | stackoverflow_0074644621_arrays_pandas_python.txt |
Q:
Get a Spark dataframe field into a String value
I am currently trying to filter my dataframe into an if and get the field returned into variable.
Here is my code:
if df_table.filter(col(field).contains("val")):
id_2 = df_table.select(another_field)
print(id_2)
# Recursive call with new variable
The problem is : it looks like the if filtering works, but id_2 gives me the column name and type where I want the value itself from that field.
The output for this code is:
DataFrame[ID_1: bigint]
DataFrame[ID_2: bigint]
...
If I try collect like this : id_2 = df_table.select(another_field).collect()
I get this : [Row(ID_1=3013848), Row(ID_1=319481), Row(ID_1=391948)...] which looks like just listing all id in a list.
I thought of doing : id_2 = df_table.select(another_field).filter(col(field).contains("val"))
but I still get the same result as first attempt.
I would like my id_2 for each iteration of my loop to take value from the field I am filtering on. Like :
3013848
319481
...
But not a list from every value of matching fields from my dataframe.
Any idea on how I could get that into my variable ?
Thank you for helping.
A:
In fact, dataFrame.select(colName) is supposed to return a column (a DataFrame with only one column), not the column value of a given row. I see in your comment that you want to do a recursive lookup in a Spark DataFrame. The thing is, firstly, Spark AFAIK doesn't support recursive operations. If you have a deep recursive operation to do, you'd better collect the dataframe you have and do it on your driver without Spark. You can then use whatever library you want, but you lose the advantage of treating the data in a distributed way.
Secondly, Spark isn't designed to do operations that iterate over each record. Try to achieve it with joins of dataframes, but that brings me back to my first point: if your later join depends on the join result in a recursive way, just forget Spark.
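For completeness, a small runnable sketch of pulling plain values (not Row objects) out of a filtered column; the sample data and column names below are placeholders, not the asker's actual table:
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df_table = spark.createDataFrame(
    [("val_a", 3013848), ("other", 319481), ("val_b", 391948)],
    ["field", "another_field"])

ids = [row["another_field"]
       for row in df_table.filter(col("field").contains("val"))
                          .select("another_field").collect()]
print(ids)  # [3013848, 391948] -- plain Python ints you can loop over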
| Get a Spark dataframe field into a String value | I am currently trying to filter my dataframe into an if and get the field returned into variable.
Here is my code:
if df_table.filter(col(field).contains("val")):
id_2 = df_table.select(another_field)
print(id_2)
# Recursive call with new variable
The problem is : it looks like the if filtering works, but id_2 gives me the column name and type where I want the value itself from that field.
The output for this code is:
DataFrame[ID_1: bigint]
DataFrame[ID_2: bigint]
...
If I try collect like this : id_2 = df_table.select(another_field).collect()
I get this : [Row(ID_1=3013848), Row(ID_1=319481), Row(ID_1=391948)...] which looks like just listing all id in a list.
I thought of doing : id_2 = df_table.select(another_field).filter(col(field).contains("val"))
but I still get the same result as first attempt.
I would like my id_2 for each iteration of my loop to take value from the field I am filtering on. Like :
3013848
319481
...
But not a list from every value of matching fields from my dataframe.
Any idea on how I could get that into my variable ?
Thank you for helping.
| [
"In fact, dataFrame.select(colName) is supposed to return a column(a dataframe of with only one column) but not the column value of the line. I see in your comment you want to do recursive lookup in a spark dataframe. The thing is, firstly, spark AFAIK, doesn't support recursive operation. If you have a deep recursive operation to do, you'd better collect the dataframe you have and do it on your driver without spark. Instead, you can use what library you want but you lose the advantage of treating the data in the distributive way.\nSecondly, spark isn't designed to do operations with iteration on each record. Try to achieve with join of dataframes, but it return to my first point, if your later operation of join depends on your join result, in a recursive way, just forget spark.\n"
] | [
0
] | [] | [] | [
"apache_spark",
"apache_spark_sql",
"pyspark",
"python"
] | stackoverflow_0074630297_apache_spark_apache_spark_sql_pyspark_python.txt |
Q:
numpy-equivalent of list.pop?
Is there a numpy method which is equivalent to the builtin pop for python lists?
Popping obviously doesn't work on numpy arrays, and I want to avoid a list conversion.
A:
There is no pop method for NumPy arrays, but you could just use basic slicing (which would be efficient since it returns a view, not a copy):
In [104]: y = np.arange(5); y
Out[105]: array([0, 1, 2, 3, 4])
In [106]: last, y = y[-1], y[:-1]
In [107]: last, y
Out[107]: (4, array([0, 1, 2, 3]))
If there were a pop method it would return the last value in y and modify y.
Above,
last, y = y[-1], y[:-1]
assigns the last value to the variable last and modifies y.
A:
Here is one example using numpy.delete():
import numpy as np
arr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])
print(arr)
# array([[ 1, 2, 3, 4],
# [ 5, 6, 7, 8],
# [ 9, 10, 11, 12]])
arr = np.delete(arr, 1, 0)
print(arr)
# array([[ 1, 2, 3, 4],
# [ 9, 10, 11, 12]])
A:
Pop doesn't exist for NumPy arrays, but you can use NumPy indexing in combination with array restructuring, for example hstack/vstack or numpy.delete(), to emulate popping.
Here are some example functions I can think of (which apparently don't work when the index is -1, but you can fix this with a simple conditional):
def poprow(my_array,pr):
""" row popping in numpy arrays
Input: my_array - NumPy array, pr: row index to pop out
Output: [new_array,popped_row] """
i = pr
pop = my_array[i]
new_array = np.vstack((my_array[:i],my_array[i+1:]))
return [new_array,pop]
def popcol(my_array,pc):
""" column popping in numpy arrays
Input: my_array: NumPy array, pc: column index to pop out
Output: [new_array,popped_col] """
i = pc
pop = my_array[:,i]
new_array = np.hstack((my_array[:,:i],my_array[:,i+1:]))
return [new_array,pop]
This returns the array without the popped row/column, as well as the popped row/column separately:
>>> A = np.array([[1,2,3],[4,5,6]])
>>> [A,poparow] = poprow(A,0)
>>> poparow
array([1, 2, 3])
>>> A = np.array([[1,2,3],[4,5,6]])
>>> [A,popacol] = popcol(A,2)
>>> popacol
array([3, 6])
A:
There isn't any pop() method for numpy arrays unlike List, Here're some alternatives you can try out-
Using Basic Slicing
>>> x = np.array([1,2,3,4,5])
>>> x = x[:-1]; x
>>> [1,2,3,4]
Or, By Using delete()
Syntax - np.delete(arr, obj, axis=None)
arr: Input array
obj: Row or column number to delete
axis: Axis to delete
>>> x = np.array([1,2,3,4,5])
>>> x = np.delete(x, len(x)-1, 0)
>>> [1,2,3,4]
A:
The important thing is that it takes one from the original array and deletes it.
If you don't mind the superficial implementation of a single method to complete the process, the following code will do what you want.
import numpy as np
a = np.arange(0, 3)
i = 0
selected, others = a[i], np.delete(a, i)
print(selected)
print(others)
# result:
# 0
# [1 2]
A:
The most 'elegant' solution for retrieving and removing a random item in Numpy is this:
import numpy as np
import random
arr = np.array([1, 3, 5, 2, 8, 7])
element = random.choice(arr)
elementIndex = np.where(arr == element)[0][0]
arr = np.delete(arr, elementIndex)
For curious coders:
The np.where() method returns two lists. The first returns the row indexes of the matching elements and the second the column indexes. This is useful when searching for elements in a 2d array. In our case, the first element of the first returned list is interesting.
A:
unutbu had a simple answer for this, but pop() can also take an index as a parameter. This is how you replicate it with numpy:
pop_index = 4
pop = y[pop_index]
y = np.concatenate([y[:pop_index],y[pop_index+1:]])
A:
To add, If you want to implement pop for a row or column from a numpy 2D array you could do like:
col = arr[:, -1] # gets the last column
np.delete(arr, -1, 1) # deletes the last column
and for row:
row = arr[-1, :] # gets the last row
np.delete(arr, -1, 0) # deletes the last row
A:
OK, since I didn't see a good answer that RETURNS the 1st element and REMOVES it from the original array, I wrote a simple (if kludgy) function utilizing global for a 1d array (modification required for multidims):
tmp_array_for_popfunc = 1d_array
def array_pop():
global tmp_array_for_popfunc
r = tmp_array_for_popfunc[0]
tmp_array_for_popfunc = np.delete(tmp_array_for_popfunc, 0)
return r
check it by using-
print(len(tmp_array_for_popfunc)) # confirm initial size of tmp_array_for_popfunc
print(array_pop()) #prints return value at tmp_array_for_popfunc[0]
print(len(tmp_array_for_popfunc)) # now size is 1 smaller
A:
I made a function as follow, doing almost the same. This function has 2 arguments: np_array and index, and return the value of the given index of the array.
def np_pop(np_array, index=-1):
'''
Pop the "index" from np_array and return the value.
Default value for index is the last element.
'''
# add this to make sure 'numpy' is imported
import numpy as np
# read the value of the given array at the given index
value = np_array[index]
    # note: np.delete returns a new array; it does not remove the value from np_array in place
np.delete(np_array, index, 0)
# return the value
return value
Remember you can add a condition to make sure the given index is exist in the array and return -1 if anything goes wrong.
Now you can use it like this:
import numpy as np
i = 2 # let's assume we want to pop index number 2
y = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) # assume 'y' is our numpy array
poped_val = np_pop(y, i) # value at the popped index
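To tie the above together, a compact helper (names are illustrative) that returns both the value and a new, shortened array -- note that, unlike list.pop, it cannot shrink the caller's array in place:
import numpy as np

def np_pop(arr, index=-1):
    """Return (value, new_array), where new_array has `index` removed."""
    value = arr[index]
    return value, np.delete(arr, index)

y = np.arange(5)
last, y = np_pop(y)
print(last, y)  # 4 [0 1 2 3]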
| numpy-equivalent of list.pop? | Is there a numpy method which is equivalent to the builtin pop for python lists?
Popping obviously doesn't work on numpy arrays, and I want to avoid a list conversion.
| [
"There is no pop method for NumPy arrays, but you could just use basic slicing (which would be efficient since it returns a view, not a copy):\nIn [104]: y = np.arange(5); y\nOut[105]: array([0, 1, 2, 3, 4])\n\nIn [106]: last, y = y[-1], y[:-1]\n\nIn [107]: last, y\nOut[107]: (4, array([0, 1, 2, 3]))\n\nIf there were a pop method it would return the last value in y and modify y.\nAbove, \nlast, y = y[-1], y[:-1]\n\nassigns the last value to the variable last and modifies y.\n",
"Here is one example using numpy.delete():\nimport numpy as np\narr = np.array([[1,2,3,4], [5,6,7,8], [9,10,11,12]])\nprint(arr)\n# array([[ 1, 2, 3, 4],\n# [ 5, 6, 7, 8],\n# [ 9, 10, 11, 12]])\narr = np.delete(arr, 1, 0)\nprint(arr)\n# array([[ 1, 2, 3, 4],\n# [ 9, 10, 11, 12]])\n\n",
"Pop doesn't exist for NumPy arrays, but you can use NumPy indexing in combination with array restructuring, for example hstack/vstack or numpy.delete(), to emulate popping.\nHere are some example functions I can think of (which apparently don't work when the index is -1, but you can fix this with a simple conditional):\ndef poprow(my_array,pr):\n \"\"\" row popping in numpy arrays\n Input: my_array - NumPy array, pr: row index to pop out\n Output: [new_array,popped_row] \"\"\"\n i = pr\n pop = my_array[i]\n new_array = np.vstack((my_array[:i],my_array[i+1:]))\n return [new_array,pop]\n\ndef popcol(my_array,pc):\n \"\"\" column popping in numpy arrays\n Input: my_array: NumPy array, pc: column index to pop out\n Output: [new_array,popped_col] \"\"\"\n i = pc\n pop = my_array[:,i]\n new_array = np.hstack((my_array[:,:i],my_array[:,i+1:]))\n return [new_array,pop]\n\nThis returns the array without the popped row/column, as well as the popped row/column separately: \n>>> A = np.array([[1,2,3],[4,5,6]])\n>>> [A,poparow] = poprow(A,0)\n>>> poparow\narray([1, 2, 3])\n\n>>> A = np.array([[1,2,3],[4,5,6]])\n>>> [A,popacol] = popcol(A,2)\n>>> popacol\narray([3, 6])\n\n",
"There isn't any pop() method for numpy arrays unlike List, Here're some alternatives you can try out-\n\nUsing Basic Slicing\n\n>>> x = np.array([1,2,3,4,5])\n>>> x = x[:-1]; x\n>>> [1,2,3,4]\n\n\nOr, By Using delete() \n\nSyntax - np.delete(arr, obj, axis=None)\narr: Input array \nobj: Row or column number to delete \naxis: Axis to delete \n>>> x = np.array([1,2,3,4,5])\n>>> x = x = np.delete(x, len(x)-1, 0)\n>>> [1,2,3,4]\n\n",
"The important thing is that it takes one from the original array and deletes it.\nIf you don't m\nind the superficial implementation of a single method to complete the process, the following code will do what you want.\nimport numpy as np\n\na = np.arange(0, 3)\ni = 0\nselected, others = a[i], np.delete(a, i)\n\nprint(selected)\nprint(others)\n\n# result:\n# 0\n# [1 2]\n\n",
"The most 'elegant' solution for retrieving and removing a random item in Numpy is this:\nimport numpy as np\nimport random\n\narr = np.array([1, 3, 5, 2, 8, 7])\nelement = random.choice(arr)\nelementIndex = np.where(arr == element)[0][0]\narr = np.delete(arr, elementIndex)\n\nFor curious coders:\nThe np.where() method returns two lists. The first returns the row indexes of the matching elements and the second the column indexes. This is useful when searching for elements in a 2d array. In our case, the first element of the first returned list is interesting.\n",
"unutbu had a simple answer for this, but pop() can also take an index as a parameter. This is how you replicate it with numpy:\npop_index = 4\npop = y[pop_index]\ny = np.concatenate([y[:pop_index],y[pop_index+1:]])\n\n",
"To add, If you want to implement pop for a row or column from a numpy 2D array you could do like:\ncol = arr[:, -1] # gets the last column\nnp.delete(arr, -1, 1) # deletes the last column\n\nand for row:\nrow = arr[-1, :] # gets the last row\nnp.delete(arr, -1, 0) # deletes the last row\n\n",
"OK, since I didn't see a good answer that RETURNS the 1st element and REMOVES it from the original array, I wrote a simple (if kludgy) function utilizing global for a 1d array (modification required for multidims):\ntmp_array_for_popfunc = 1d_array\n\ndef array_pop():\n global tmp_array_for_popfunc\n \n r = tmp_array_for_popfunc[0]\n tmp_array_for_popfunc = np.delete(tmp_array_for_popfunc, 0)\n return r\n\ncheck it by using-\nprint(len(tmp_array_for_popfunc)) # confirm initial size of tmp_array_for_popfunc\nprint(array_pop()) #prints return value at tmp_array_for_popfunc[0]\nprint(len(tmp_array_for_popfunc)) # now size is 1 smaller\n\n",
"I made a function as follow, doing almost the same. This function has 2 arguments: np_array and index, and return the value of the given index of the array.\ndef np_pop(np_array, index=-1):\n '''\n Pop the \"index\" from np_array and return the value. \n Default value for index is the last element.\n '''\n # add this to make sure 'numpy' is imported \n import numpy as np\n # read the value of the given array at the given index\n value = np_array[index]\n # remove value from array\n np.delete(np_array, index, 0)\n # return the value\n return value\n\nRemember you can add a condition to make sure the given index is exist in the array and return -1 if anything goes wrong.\nNow you can use it like this:\nimport numpy as np\ni = 2 # let's assume we want to pop index number 2\ny = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9]) # assume 'y' is our numpy array \npoped_val = np_pop(y, i) # value of the piped index\n\n"
] | [
30,
20,
6,
6,
2,
1,
1,
0,
0,
0
] | [] | [] | [
"arrays",
"list",
"numpy",
"python",
"stack"
] | stackoverflow_0039945410_arrays_list_numpy_python_stack.txt |
Q:
group rows based on column and sum their values
df = pd.DataFrame({'c1':['Ax','Bx','Ay','By'], 'c2':[1,2,3,4]})
c1 c2
0 Ax 1
1 Bx 2
2 Ay 3
3 By 4
I'd like to group xs and ys in c1 and sum their respective c2 values.
Desired output:
c1 c2
0 Cx 3
1 Cy 7
A:
example
df.groupby(df['c1'].str[-1]).sum()
output:
c2
c1
x 3
y 7
use following code:
df.groupby('C' + df['c1'].str[-1]).sum().reset_index()
result:
c1 c2
0 Cx 3
1 Cy 7
A:
You may do:
out = df.groupby(df.c1.str[-1]).sum().reset_index()
out['c1'] = 'C' + out['c1']
print(out):
c1 c2
0 Cx 3
1 Cy 7
A:
Groupby is very flexible so let's group by the last character of the "c1" column and then sum the "c2" values:
>>> (df.groupby(df.c1.str[-1])["c2"]
.sum().reset_index()
.assign(c1=lambda fr: fr.c1.radd("C")))
c1 c2
0 Cx 3
1 Cy 7
Without the assign at the end, resultant "c1" values are the groupers, i.e., "x" and "y". I add (from right, hence radd) the character "C" to them.
A:
Here is a proposition using pandas.Series.replace with GroupBy.sum :
out = (
df
.assign(c1= df["c1"].str.replace("[A-Z]", "C", regex=True))
.groupby("c1", as_index=False).sum(numeric_only=True)
)
Output :
print(out)
c1 c2
0 Cx 3
1 Cy 7
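For reference, the same result can be written as a single chained expression (a sketch using the sample frame from the question):
import pandas as pd

df = pd.DataFrame({'c1': ['Ax', 'Bx', 'Ay', 'By'], 'c2': [1, 2, 3, 4]})
out = (df.assign(c1='C' + df['c1'].str[-1])
         .groupby('c1', as_index=False)['c2'].sum())
print(out)
#   c1  c2
# 0  Cx   3
# 1  Cy   7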
| group rows based on column and sum their values | df = pd.DataFrame({'c1':['Ax','Bx','Ay','By'], 'c2':[1,2,3,4]})
c1 c2
0 Ax 1
1 Bx 2
2 Ay 3
3 By 4
I'd like to group xs and ys in c1 and sum their respective c2 values.
Desired output:
c1 c2
0 Cx 3
1 Cy 7
| [
"example\ndf.groupby(df['c1'].str[-1]).sum()\n\noutput:\n c2\nc1 \nx 3\ny 7\n\nuse following code:\ndf.groupby('C' + df['c1'].str[-1]).sum().reset_index()\n\nresult:\n c1 c2\n0 Cx 3\n1 Cy 7\n\n",
"You may do:\nout = df.groupby(df.c1.str[-1]).sum().reset_index()\nout['c1'] = 'C' + out['c1']\n\nprint(out):\n c1 c2\n0 Cx 3\n1 Cy 7\n\n",
"Groupby is very flexible so let's group by the last character of the \"c1\" column and then sum the \"c2\" values:\n>>> (df.groupby(df.c1.str[-1])[\"c2\"]\n .sum().reset_index()\n .assign(c1=lambda fr: fr.c1.radd(\"C\")))\n\n c1 c2\n0 Cx 3\n1 Cy 7\n\nWithout the assign at the end, resultant \"c1\" values are the groupers, i.e., \"x\" and \"y\". I add (from right, hence radd) the character \"C\" to them.\n",
"Here is a proposition using pandas.Series.replace with GroupBy.sum :\nout = (\n df\n .assign(c1= df[\"c1\"].str.replace(\"[A-Z]\", \"C\", regex=True))\n .groupby(\"c1\", as_index=False).sum(numeric_only=True)\n )\n\nOutput :\n\nprint(out)\n\n c1 c2\n0 Cx 3\n1 Cy 7\n\n"
] | [
2,
1,
1,
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074644651_pandas_python.txt |
Q:
Failed to create virtual environment in PyCharm
I have problem with create virtual environment in PyCharm.
Exactly, Python in version 3.10 was add to Path during installation and I use latest version PyCharm community.
Did anyone have a similar problem?
Adding Informations
How I create environment :
file -> New project
Location : D:\mm\projekty\pythonProject2
marked New virtual environment using ( virtualenv)
Location : D:\mm\projekty\pythonProject2\venv
Base interpreter : C:\Users\mm\AppData\Local\Programs\Python\Python310\python.exe
In CMD:
C:\Users\mm>python
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
A:
I had the same problem. I needed to install package python3-venv.
A:
In order to fix this, I had to run from my terminal:
pip install virtualenv
After installing the virtualenv package everything works as expected.
A:
If you have python3-env already installed, the commands provided in most of the answers will not work as you need the python3-venv package specifically for Python 3.10
The exact package as pointed by @fabel in comments is python3.10-venv .
sudo apt install python3.10-venv
Run this command and it should be good to go.
A:
I attempted the previous answers and eventually found that I had to delete the venv folder and allow PyCharm to recreate it.
A:
In my case, I didn't have pip installed on my computer.
A:
just open your terminal and install pip package:
In ubuntu:
sudo apt install pip
For windows:
https://phoenixnap.com/kb/install-pip-windows
Then try to create the virtual environment again
A:
A possible reason could be not having the package virtualevn installed in your computer. I had the problem after reinstalling OS.
The following is valid for ubuntu OS with Python3 installed.
Check if the pip is installed after installing Python.
Use the command sudo apt install python3-pip.
Once pip installed, install the package using pip3 install virtualenv.
Then go back to Pycharm IDE settings to set up the venv.
A:
In my case I wasn't the owner of the project folder. I needed to run the CHOWN command to resolve this.
sudo chown $USER /Users/Sites/***<Project_Folder>***
and i was done.
A:
In my case, there was something wrong with the latest PyCharm Community Edition, 2022.2.3 version (build ID: 222.4345.23). I tried everything mentioned here to no avail. After spending several hours, I just downgraded to version 2021.3.2 of PyCharm Community Edition, and it just worked. Hope this helps.
A:
I ran "pip install virtualenv" in the terminal,
but after trying again it didn't work.
I downloaded python 3.10 from python.org
then because i had problems updating my python version from 3.9 to 3.10 on my computer, i decided to try to make a virtual environment in PyCharm, in my main project (my only one) I clicked "Add Interpreter..." then selected "Virtualenv environment" then set the location to "/home/myname/Documents/PyCharm/venvPy3.10" and then i set base interpreter to the one i downloaded by clicking on the three dots on the right of it then going to my downloads folder. I also selected both "Inherit global site-packages" and "Make available to all project" then clicked "OK".
Then it gives me warning.
A:
There is a bug in Windows venv, which is known to be exposed if you install a VisualStudio 2022 runtime. If PyCharm uses venv and not another virtual environment (not sure, as I don't use PyCharm), see if my issue/workaround in this Q&A aligns with yours.
Edit: I realize that you are using virtualenv instead. However, virtualenv uses venv.EnvBuilder so the issue could still be related.
A:
if you don't have pip before install pip
sudo apt install python3-pip
press Ctrl + Alt + S
then click settings button and select show all
press Alt + Insert keys then
Select Virtualenv Environment and check Inherit global site-packages
A:
I had the same problem, but solved it by adding an interpreter manually.
A:
If someone is still not able to fix this then, create it manually.
go in the dir that you want the venv in, then python3.10 -m venv <name of venv>
source <name of venv>/bin/activate
go in Python
Interpeter settings and then select the location of the manually
created venv in the 'existing environment'
A:
I had same problem tried many things
But I realized that Windows Defender was blocking PyCharm from creating the virtual environment
Just go into Defender settings and allow PyCharm
A:
I had same issue with following version.
pycharm - PyCharm 2022.1.3 (Community Edition)
python - python 3.9
Once I changed interpreter version to python 3.10. It started working.
A:
I ran into the same problem, but was able to resolve it in my environment.
Go to Help -> Edit Custom VM Options and add the following
-Dfile.encoding=UTF-8
Here is my environment:
Windows 11
PyCharm Community 2022.2.3 (installed from JetBrains ToolBox)
Python 3.11 (installed from microsoft store)
A:
I had the same error and I don't know how the solution that I'll explain solved
I was naming the project as "Joining Data with pandas", "joining_data_with_pandas"
but when I changed the name to "joiningDataPandas", it works with no error.
I think it may be a bug from the ide or something, because if I tried to create a new project with the old name that has spaces or "_" the error will be back, but with writing the project name with the camelCase, there is no error.
A:
Open and clear the log: %AppData%\Local\JetBrains\PyCharmCE2022.1\log\idea.log
(in PyCharm click Help > Show Log in Explorer).
Try to create VirtualEnv via PyCharm, you will see
the "Failed to create Venv..." message
Open the log and look for errors In my case it was unable to import some modules because the threading module was not found (ie: ModuleNotFoundError: No module named 'threading'). My python3.10 was broken, maybe I have some problems with my PATH variable.
I was missing the modules threading, logging, and weakref, so I just copied them to %AppData%\Local\Programs\Python\Python310\Lib\ (from site-packages folder, in my case)
Enjoy creating as many VENV's as you needed. If you still have an error in PyCharm then repeat steps 2, 3, and 4 until you fix all errors about missing modules or other.
A:
I fixed this problem by first deleting my current venv folder. Then I went back to PyCharm to Configure Local Environment>Add Local Interpreter> and made sure the location is in an empty directory. I did this by just adding /venv at the end of my path.
| Failed to create virtual environment in PyCharm | I have problem with create virtual environment in PyCharm.
Exactly, Python in version 3.10 was add to Path during installation and I use latest version PyCharm community.
Did anyone have a similar problem?
Adding Informations
How I create environment :
file -> New project
Location : D:\mm\projekty\pythonProject2
marked New virtual environment using ( virtualenv)
Location : D:\mm\projekty\pythonProject2\venv
Base interpreter : C:\Users\mm\AppData\Local\Programs\Python\Python310\python.exe
In CMD:
C:\Users\mm>python
Python 3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
| [
"I had the same problem. I needed to install package python3-venv.\n",
"In order to fix this, I had to run from my terminal:\npip install virtualenv\n\nAfter installing the virtualenv package everything works as expected.\n",
"If you have python3-env already installed, the commands provided in most of the answers will not work as you need the python3-venv package specifically for Python 3.10\nThe exact package as pointed by @fabel in comments is python3.10-venv .\nsudo apt install python3.10-venv\n\nRun this command and it should be good to go.\n",
"I attempted the previous answers and eventually found that I had to delete the venv folder and allow PyCharm to recreate it.\n",
"In my case, I didn't have pip installed on my computer.\n",
"just open your terminal and install pip package:\nIn ubuntu:\nsudo apt install pip\nFor windows:\nhttps://phoenixnap.com/kb/install-pip-windows\nThen try to create the virtual environment again\n",
"A possible reason could be not having the package virtualevn installed in your computer. I had the problem after reinstalling OS.\nThe following is valid for ubuntu OS with Python3 installed.\nCheck if the pip is installed after installing Python.\nUse the command sudo apt install python3-pip.\nOnce pip installed, install the package using pip3 install virtualenv.\nThen go back to Pycharm IDE settings to set up the venv.\n",
"In my case i wasn't the owner of the project file. I was needed to run the CHOWN command to resolve this.\n\nsudo chown $USER /Users/Sites/***<Project_Folder>***\n\nand i was done.\n",
"In my case, there was something wrong with the latest PyCharm Community Edition of 2022.2.3 version (build ID: 222.4345.23). I tried everything mentioned here with no vain. After spending several hours, just downgraded to version 2021.3.2 version of PyCharm community edition, and it just worked. Hope this helps.\n",
"I ran \"pip install virtualenv\" in the terminal,\nbut after trying again it didn't work.\nI downloaded python 3.10 from python.org\nthen because i had problems updating my python version from 3.9 to 3.10 on my computer, i decided to try to make a virtual environment in PyCharm, in my main project (my only one) I clicked \"Add Interpreter...\" then selected \"Virtualenv environment\" then set the location to \"/home/myname/Documents/PyCharm/venvPy3.10\" and then i set base interpreter to the one i downloaded by clicking on the three dots on the right of it then going to my downloads folder. I also selected both \"Inherit global site-packages\" and \"Make available to all project\" then clicked \"OK\".\nThen it gives me warning.\n",
"There is a bug in Windows venv, which is known to be exposed if you install a VisualStudio 2022 runtime. If PyCharm uses venv and not another virtual environment (not sure as I don't use PyCharm) s See if my issue/workaround in this Q&A aligns with yours.\nEdit: I realize that you are using virtualenv instead. However, virtualenv uses venv.EnvBuilder so the issue could still be related.\n",
"if you don't have pip before install pip\nsudo apt install python3-pip\n\npress Ctrl + Alt + S\n\nthen click settings button and select show all\n\npress Alt + Insert keys then\n\nSelect Virtualenv Environment and check Inherit global site-packages\n",
"I had the same problem, but solved it by adding an interpreter manually.\n\n",
"If someone is still not able to fix this then, create it manually.\n\ngo in the dir that you want the venv in, then python3.10 -m venv <name of venv>\nsource <name of venv>/bin/activate\ngo in Python\nInterpeter settings and then select the location of the manually\ncreated venv in the 'existing environment'\n\n",
"I had same problem tried many things\nBut I realized that Window Defender is blocking PyCharm to create virtual environment\nJust go in Defender Settings and allow PyChram\n",
"I had same issue with following version.\npycharm - PyCharm 2022.1.3 (Community Edition)\npython - python 3.9\nOnce I changed interpreter version to python 3.10. It started working.\n",
"I ran into the same problem, but was able to resolve it in my environment.\nGo to Help -> Edit Custom VM Options and add the following\n-Dfile.encoding=UTF-8\n\nHere is my environment:\nWindows 11\nPyCharm Community 2022.2.3 (installed from JetBrains ToolBox)\nPython 3.11 (installed from microsoft store)\n\n",
"I had the same error and I don't know how the solution that I'll explain solved\nI was naming the project as \"Joining Data with pandas\", \"joining_data_with_pandas\"\nbut when I changed the name to \"joiningDataPandas\", it works with no error.\nI think it may be a bug from the ide or something, because if I tried to create a new project with the old name that has spaces or \"_\" the error will be back, but with writing the project name with the camelCase, there is no error.\n",
"\nOpen and clear the log: %AppData%\\Local\\JetBrains\\PyCharmCE2022.1\\log\\idea.log\n(in PyCharm click Help > Show Log in Explorer).\n\nTry to create VirtualEnv via PyCharm, you will see\nthe \"Failed to create Venv...\" message \n\nOpen the log and look for errors In my case it was unable to import some modules because the threading module was not found (ie: ModuleNotFoundError: No module named 'threading'). My python3.10 was broken, maybe I have some problems with my PATH variable.\n\nI was missing the modules threading, logging, and weakref, so I just copied them to %AppData%\\Local\\Programs\\Python\\Python310\\Lib\\ (from site-packages folder, in my case)\n\nEnjoy creating as many VENV's as you needed. If you still have an error in PyCharm then repeat steps 2, 3, and 4 until you fix all errors about missing modules or other.\n\n\n",
"I fixed this problem by first deleting my current venv folder. Then I went back to PyCharm to Configure Local Environment>Add Local Interpreter> and made sure the location is in an empty directory. I did this by just adding /venv at the end of my path.\n"
] | [
22,
19,
10,
7,
6,
4,
2,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0,
0
] | [] | [] | [
"pycharm",
"python",
"virtualenv"
] | stackoverflow_0069709251_pycharm_python_virtualenv.txt |
Q:
Extract mail from Each value in a Column in a Dataframe
Create a function that evaluates the data within a cell and extracts only the email, place the value found in a new column called "Email Found".
This is the code I'm using; it works if I use it with a single str, but it doesn't work for my DataFrame
import re
def extract_mail(text):
match = re.search(r'[\w.+-]+@[\w-]+\.[\w.-]+', text)
return match
This is the error that appears
(error screenshot not reproduced here)
Input
Maxwell <[email protected]> Contact Info.
Julianna <[email protected]> Contact Info.
Janelle <[email protected]> Contact Info.
Output
[email protected]
[email protected]
[email protected]
I have to create a new column in the dataframe called "Email Found".
A:
IIUC, try this using the str accessor with extract method and your regex:
df['email'] = df['info'].str.extract('([\w.+-]+@[\w-]+\.[\w.-]+)')
df['Email Found'] = df['email'].notna()
output:
info \
0 Maxwell <[email protected]>...
1 Julianna <[email protected]>...
2 Janelle <[email protected]...
email Email Found
0 [email protected] True
1 [email protected] True
2 [email protected] True
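A minimal end-to-end sketch combining the pieces above; the column name "info" and the sample addresses are made-up placeholders:
import pandas as pd

df = pd.DataFrame({'info': ['Maxwell <[email protected]> Contact Info.',
                            'Janelle <[email protected]> Contact Info.']})
df['Email Found'] = df['info'].str.extract(r'([\w.+-]+@[\w-]+\.[\w.-]+)', expand=False)
print(df['Email Found'].tolist())
# ['[email protected]', '[email protected]']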
| Extract mail from Each value in a Column in a Dataframe | Create a function that evaluates the data within a cell and extracts only the email, place the value found in a new column called "Email Found".
This is the Code I'm using, it work if I use it with a single str, but it doesn't work for my DataFrame
import re
def extract_mail(text):
match = re.search(r'[\w.+-]+@[\w-]+\.[\w.-]+', text)
return match
This is the error that appears
enter image description here
Input
Maxwell <[email protected]> Contact Info.
Julianna <[email protected]> Contact Info.
Janelle <[email protected]> Contact Info.
Output
[email protected]
[email protected]
[email protected]
I have to created a New Column in the dataframe called "Email Found".
| [
"IIUC, try this using the str accessor with extract method and your regex:\ndf['email'] = df['info'].str.extract('([\\w.+-]+@[\\w-]+\\.[\\w.-]+)')\ndf['Email Found'] = df['email'].notna()\n\noutput:\n info \\\n0 Maxwell <[email protected]>... \n1 Julianna <[email protected]>... \n2 Janelle <[email protected]... \n\n email Email Found \n0 [email protected] True \n1 [email protected] True \n2 [email protected] True \n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"text_extraction"
] | stackoverflow_0074644673_dataframe_pandas_python_text_extraction.txt |
Q:
What is wrong with the convert function?
def main():
askForTime = input("What time is it? ").strip().split()
time = convert(askForTime)
if 7 <= time <= 8:
print("breacfast time")
elif 12 <= time <= 13:
print("lunch time")
elif 18 <= time <= 19:
print("dinner time")
def convert(clock):
if "p.m" in clock or "pm" in clock:
hours, minutes = clock[0].split(":")
timer = (float(hours) + 12) + (float(minutes) / 60)
return timer
else:
hours, minutes = clock[0].split(":")
timer = float(hours) + (float(minutes) / 60)
return timer
if __name__ == "__main__":
main()
convert successfully returns decimal hours
expected "7.5", not "Error\n"
I checked my program, and the convert function does indeed produce decimal hours.
Could someone please explain to me what I'm missing?
A:
Inspect the askForTime value you get from this line:
def main():
askForTime = input("What time is it? ").strip().split()
When you do, you will discover that it is returning the input as a list. (That's because you used .split() - the return is a list.) Your convert() function "works" because it expects a list. However, check50 tests by calling with a string, like this:
pset_1/meal/SO_74635140/ $ python
>>> from meal import convert
>>> print(convert("7:30"))
When you run as shown above, you will get this error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/workspaces/68927141/pset_1/meal/SO_74635140/meal.py", line 19, in convert
hours, minutes = clock[0].split(":")
ValueError: not enough values to unpack (expected 2, got 1)
Also, you have "breakfast time" misspelled as "breacfast time". That will cause another check50 error once you fix the 1st problem.
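For reference, a minimal sketch of a convert() that accepts a plain string such as "7:30" (which is how the grader calls it); the a.m./p.m. handling here is an illustrative assumption, not part of the grader's test:
def convert(time: str) -> float:
    parts = time.lower().split()
    hours, minutes = parts[0].split(":")
    value = float(hours) + float(minutes) / 60
    if len(parts) > 1 and parts[1].startswith("p") and hours != "12":
        value += 12  # e.g. "7:30 p.m." -> 19.5
    return value

print(convert("7:30"))  # 7.5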
| What is wrong with the convert function? | def main():
askForTime = input("What time is it? ").strip().split()
time = convert(askForTime)
if 7 <= time <= 8:
print("breacfast time")
elif 12 <= time <= 13:
print("lunch time")
elif 18 <= time <= 19:
print("dinner time")
def convert(clock):
if "p.m" in clock or "pm" in clock:
hours, minutes = clock[0].split(":")
timer = (float(hours) + 12) + (float(minutes) / 60)
return timer
else:
hours, minutes = clock[0].split(":")
timer = float(hours) + (float(minutes) / 60)
return timer
if __name__ == "__main__":
main()
convert successfully returns decimal hours
expected "7.5", not "Error\n"
I checked my program, and the convert function does indeed produce decimal hours.
Could someone please explain to me what I'm missing?
| [
"Inspect the askForTime value you get from this line:\ndef main():\n askForTime = input(\"What time is it? \").strip().split()\n\nWhen you do, you will discover that it is returning the input as a list. (That's because you used .split() - the return is a list.) Your convert() function \"works\" because it expects a list. However, check50 tests by calling with a string, like this:\npset_1/meal/SO_74635140/ $ python\n>>> from meal import convert\n>>> print(convert(\"7:30\"))\n\nWhen you run as shown above, you will get this error message:\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\n File \"/workspaces/68927141/pset_1/meal/SO_74635140/meal.py\", line 19, in convert\n hours, minutes = clock[0].split(\":\")\nValueError: not enough values to unpack (expected 2, got 1)\n\nAlso, you have \"breakfast time\" misspelled as \"breacfast time\". That will cause another check50 error once you fix the 1st problem.\n"
] | [
0
] | [] | [] | [
"cs50",
"decimal",
"python",
"type_conversion"
] | stackoverflow_0074635140_cs50_decimal_python_type_conversion.txt |
Q:
f-string like behaviour with explicit format method
With f-string I can do like this:
a = 10
f'a equals {a}' # 'a equals 10'
f'b equals {a - 1}' # 'b equals 9'
but when using .format I cannot do any operation on the variable:
'b equals {a - 1}'.format(dict(a=10)) # KeyError: 'a - 1'
The error message is clear - the format function treats everything in the {} as an argument name. Can I circumvent that somehow?
I cannot use the f-string, because the message is prepared before values of the variables are known.
EDIT:
Ok, it seems that it can not be possible - it would work as an implicit eval which would be very unsafe.
A:
When using format, the {} are a place holder for an expression. Do the arithmetic in the format argument, not in the place holder.
str = "a = {}"
a = 10
stra = str.format(a-1)
print(stra)
>> a = 9
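If the message really has to be prepared before the values are known, one pattern (a sketch, not the only option) is to defer the whole f-string behind a callable:
template = lambda a: f"b equals {a - 1}"
# ... later, once the value is known:
print(template(10))  # b equals 9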
| f-string like behaviour with explicit format method | With f-string I can do like this:
a = 10
f'a equals {a}' # 'a equals 10'
f'b equals {a - 1}' # 'b equals 9'
but when using .format I cannot do any operation on the variable:
'b equals {a - 1}'.format(dict(a=10)) # KeyError: 'a - 1'
The error message is clear - the format function treats everything in the {} as an argument name. Can I circumvent that somehow?
I cannot use the f-string, because the message is prepared before values of the variables are known.
EDIT:
Ok, it seems that it can not be possible - it would work as an implicit eval which would be very unsafe.
| [
"When using format, the {} are a place holder for an expression. Do the arithmetic in the format argument, not in the place holder.\nstr = \"a = {}\"\na = 10\nstra = str.format(a-1)\nprint(stra)\n>> a = 9\n\n"
] | [
1
] | [] | [] | [
"f_string",
"python"
] | stackoverflow_0074644395_f_string_python.txt |
Q:
regular expression in Python to update string in a file
Anything that starts with <a class=“rms-req-link” href=“https://rms. AND ends with </a> should be replaced by TBD.
Example:
<a class=“req-link” href=“https://doc.test.com/req_view/ABC-3456">ABC-3456</a>
or:
<a class=“req-link” href=“https://doc.test.com/req_view/ABC-1234">ABC-1234</a>
Such strings should be replaced by TBD in the file.
Code I tried:
import re
output = open("regex1.txt","w")
input = open("regex.txt")
for line in input:
output.write(re.sub(r"^<a class=“req-link” .*=“https://([a-zA-Z]+(\.[a-zA-Z]+)+).*</a>$", 'TBD', line))
input.close()
output.close()
A:
As mentioned in the comments, the pattern you mention does not match the one you use in your code, nor does it correspond to the example strings you want replaced. So you may or may not want to adjust the following pattern depending on what you actually need.
import re
from pathlib import Path
PATTERN = re.compile(r'<a\s+class=“req-link”\s+href=“https://.*?</a>')
def replace_a_tags(input_file: str, output_file: str) -> None:
contents = Path(input_file).read_text()
with Path(output_file).open("w") as f:
f.write(re.sub(PATTERN, "TBD", contents))
if __name__ == "__main__":
replace_a_tags("input.txt", "output.txt")
The .*? is important to match lazily (as opposed to greedily) so that it matches any character (.) between zero and unlimited times, as few times as possible until it hits the closing anchor tag.
The pattern matches both your example strings.
The Path.read_text method obviously reads the entire file into memory, so that may be a problem, if it happens to be gigantic, but I doubt it. The benefit is that the global regex replacement is much more efficient than iterating over each line in the file individually.
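A quick sanity check of that pattern on one of the example strings from the question (note the curly quotes, kept exactly as they appear there):
import re

PATTERN = re.compile(r'<a\s+class=“req-link”\s+href=“https://.*?</a>')
s = '<a class=“req-link” href=“https://doc.test.com/req_view/ABC-3456">ABC-3456</a> remains'
print(re.sub(PATTERN, 'TBD', s))  # TBD remains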
| regular expression in Python to update string in a file | Anything that starts with <a class=“rms-req-link” href=“https://rms. AND ends with </a> should be replaced by TBD.
Example:
<a class=“req-link” href=“https://doc.test.com/req_view/ABC-3456">ABC-3456</a>
or:
<a class=“req-link” href=“https://doc.test.com/req_view/ABC-1234">ABC-1234</a>
Such strings should be replaced by TBD in the file.
Code I tried:
import re
output = open("regex1.txt","w")
input = open("regex.txt")
for line in input:
output.write(re.sub(r"^<a class=“req-link” .*=“https://([a-zA-Z]+(\.[a-zA-Z]+)+).*</a>$", 'TBD', line))
input.close()
output.close()
| [
"As mentioned in the comments, the pattern you mention does not match the one you use in your code, nor does it correspond to the example strings you want replaced. So you may or may not want to adjust the following pattern depending on what you actually need.\nimport re\nfrom pathlib import Path\n\n\nPATTERN = re.compile(r'<a\\s+class=“req-link”\\s+href=“https://.*?</a>')\n\n\ndef replace_a_tags(input_file: str, output_file: str) -> None:\n contents = Path(input_file).read_text()\n with Path(output_file).open(\"w\") as f:\n f.write(re.sub(PATTERN, \"TBD\", contents))\n\n\nif __name__ == \"__main__\":\n replace_a_tags(\"input.txt\", \"output.txt\")\n\nThe .*? is important to match lazily (as opposed to greedily) so that it matches any character (.) between zero and unlimited times, as few times as possible until it hits the closing anchor tag.\nThe pattern matches both your example strings.\nThe Path.read_text method obviously reads the entire file into memory, so that may be a problem, if it happens to be gigantic, but I doubt it. The benefit is that the global regex replacement is much more efficient than iterating over each line in the file individually.\n"
] | [
0
] | [] | [] | [
"python",
"regex"
] | stackoverflow_0074644465_python_regex.txt |
Q:
Hi, sorry, I ran into this problem while coding, please help me
sen=(input('lotfan sen khod ra vared konid :'))
if sen > 40:
print('pirshodiiiiii')
elif sen >=30 and sen <=40:
print('mian sal hasti')
elif sen >=20 and sen<=30:
print('hanoz javani')
elif sen >=15 and sen <=20:
print('nojavan hasti')
elif sen >=10 and sen <=15:
print('kodak hasti')
else:
print('man nemitonam senet ro tashkhis bedam:')
this is eror:TypeError: '>' not supported between instances of 'str' and 'int'
I wanted my program to run, but it gave me this error. Please tell me what is the reason? Thank you
A:
input returns a str
sen = input('lotfan sen khod ra vared konid :')
You can convert it to an int for your comparisons
sen = int(input('lotfan sen khod ra vared konid :'))
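A slightly more defensive variant (a sketch) that also catches non-numeric input instead of crashing:
try:
    sen = int(input('lotfan sen khod ra vared konid :'))
except ValueError:
    print('man nemitonam senet ro tashkhis bedam:')
else:
    if sen > 40:
        print('pirshodiiiiii')
    # ... remaining elif/else branches as in the question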
| Hi, sorry, I ran into this problem while coding, please help me | sen=(input('lotfan sen khod ra vared konid :'))
if sen > 40:
print('pirshodiiiiii')
elif sen >=30 and sen <=40:
print('mian sal hasti')
elif sen >=20 and sen<=30:
print('hanoz javani')
elif sen >=15 and sen <=20:
print('nojavan hasti')
elif sen >=10 and sen <=15:
print('kodak hasti')
else:
print('man nemitonam senet ro tashkhis bedam:')
this is eror:TypeError: '>' not supported between instances of 'str' and 'int'
I wanted my program to run, but it gave me this error. Please tell me what is the reason? Thank you
| [
"input returns a str\nsen = input('lotfan sen khod ra vared konid :')\n\nYou can convert it to an int for your comparisons\nsen = int(input('lotfan sen khod ra vared konid :'))\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074644909_python.txt |
Q:
Python sys package not available in conda?
I am trying to install the python sys package in my conda 4.13.0 environment on MX-Linux:
conda install sys
The answer is:
PackagesNotFoundError: The following packages are not available from current channels:
- sys
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page
The same happens for other common use packages like os.
I tried looking for sys in https://anaconda.org: I find lots of packages but not the sys one.
I expected to find the sys package or a similar one.
A:
The sys package is part of python's standard library, which means it comes with python and does not need installed separately.
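You can verify this from any interpreter in the conda environment without installing anything:
import sys

print(sys.version)     # works out of the box; sys ships with the interpreter
print(sys.executable)  # path of the Python inside the active conda env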
| Python sys package not available in conda? | I am trying to install the python sys package in my conda 4.13.0 environment on MX-Linux:
conda install sys
The answer is:
PackagesNotFoundError: The following packages are not available from current channels:
- sys
Current channels:
- https://repo.anaconda.com/pkgs/main/linux-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/linux-64
- https://repo.anaconda.com/pkgs/r/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page
The same happens for other common use packages like os.
I tried looking for sys in https://anaconda.org: I find lots of packages but not the sys one.
I expected to find the sys package or a similar one.
| [
"The sys package is part of python's standard library, which means it comes with python and does not need installed separately.\n"
] | [
0
] | [] | [] | [
"conda",
"python",
"sys"
] | stackoverflow_0074643892_conda_python_sys.txt |
Q:
apply_along_axis with various variables
I have a cooccurrence matrix, a symmetric matrix (Numpy Array) in which each cell indicates the frequency of two co-occurring words.
In this matrix, I want to calculate the association strength. Which is defined as the number of times word i and j co-occur, divided by the product of i- and j's total frequency:
def calculate_association_strength(self, cooc, i, j, word_occurrences):
return cooc/(word_occurrences[i]*word_occurrences[j])
Here:
cooc = the cooccurrence of word i and j, with size vocabulary_size x vocabulary_size.
word_occurences = a list of length vocabulary_size, showing at each index the frequency of word i.
i and j = integers, indicating the word indices.
I am looping through the cooccurrence matrix to calculate the association strength per cell. However, this approach is very slow. I am familiar with the apply_along_axis method. However, it is unclear how to use it for this method. Is this possible? And if so, how can I do this?
A:
IIUC you want to row- and column-wise divide each element of cooc by word_occurrences. This can be done by simple elementwise division and broadcasting:
import numpy as np
cooc = np.array([[0, 1, 1, 0], [1, 0, 2, 1], [1, 2, 0, 1], [0, 1, 1, 0]])
word_occurrences = [1, 2, 3, 4]
cooc / word_occurrences / np.array(word_occurrences)[:, np.newaxis]
Result:
array([[0. , 0.5 , 0.33333333, 0. ],
[0.5 , 0. , 0.33333333, 0.125 ],
[0.33333333, 0.33333333, 0. , 0.08333333],
[0. , 0.125 , 0.08333333, 0. ]])
Is this what you are looking for?
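Equivalently (a sketch on the same toy data), the denominator is just the outer product of the occurrence counts:
import numpy as np

cooc = np.array([[0, 1, 1, 0], [1, 0, 2, 1], [1, 2, 0, 1], [0, 1, 1, 0]])
word_occurrences = np.array([1, 2, 3, 4])

assoc = cooc / np.outer(word_occurrences, word_occurrences)
print(assoc)  # same matrix as above, computed for all i, j at once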
| apply_along_axis with various variables | I have a cooccurrence matrix, a symmetric matrix (Numpy Array) in which each cell indicates the frequency of two co-occurring words.
In this matrix, I want to calculate the association strength. Which is defined as the number of times word i and j co-occur, divided by the product of i- and j's total frequency:
def calculate_association_strength(self, cooc, i, j, word_occurrences):
return cooc/(word_occurrences[i]*word_occurrences[j])
Here:
cooc = the cooccurrence of word i and j, with size vocabulary_size x vocabulary_size.
word_occurences = a list of length vocabulary_size, showing at each index the frequency of word i.
i and j = integers, indicating the word indices.
I am looping through the cooccurrence matrix to calculate the association strength per cell. However, this approach is very slow. I am familiar with the apply_along_axis method. However, it is unclear how to use it for this method. Is this possible? And if so, how can I do this?
| [
"IIUC you want to row- and columnwise divide each element of coor by word_occurrences. This can be done by simple elementwise division and broadcasting:\nimport numpy as np\n\ncooc = np.array([[0, 1, 1, 0], [1, 0, 2, 1], [1, 2, 0, 1], [0, 1, 1, 0]])\nword_occurrences = [1, 2, 3, 4]\n\ncooc / word_occurrences / np.array(word_occurrences)[:, np.newaxis]\n\nResult:\narray([[0. , 0.5 , 0.33333333, 0. ],\n [0.5 , 0. , 0.33333333, 0.125 ],\n [0.33333333, 0.33333333, 0. , 0.08333333],\n [0. , 0.125 , 0.08333333, 0. ]])\n\nIs this what you are looking for?\n"
] | [
0
] | [] | [] | [
"apply",
"numpy",
"python"
] | stackoverflow_0074643395_apply_numpy_python.txt |
Q:
Detect whether to fetch from psycopg2 cursor or not?
Let's say if I execute the following command.
insert into hello (username) values ('me')
and I ran like
cursor.fetchall()
I get the following error
psycopg2.ProgrammingError: no results to fetch
How can I detect whether to call fetchall() or not without checking the query is "insert" or "select"?
Thanks.
A:
Look at this attribute:
cur.description
After you have executed your query, it will be set to None if no rows were returned, or will contain data otherwise - for example:
(Column(name='id', type_code=20, display_size=None, internal_size=8, precision=None, scale=None, null_ok=None),)
Catching exceptions is not ideal because there may be a case where you're overriding a genuine exception.
A:
Check whether cursor.pgresult_ptr is None or not.
cursor.execute(sql)
if cursor.pgresult_ptr is not None:
cursor.fetchall()
A:
The accepted answer using cur.description does not solve the problem any more. cur.statusmessage can be a solution. This returns SELECT 0 or INSERT 0 1. A simple string operation can then help determine the last query.
A:
The problem is that what turns out to be None is the result of cur.fetchone()
So the way to stop the loop is :
cursor.execute("SELECT * from rep_usd")
output = cursor.fetchone()
while output is not None:
    print(output)
    output = cursor.fetchone()
cursor.description will never be None!
A:
The current best solution I found is to use cursor.rowcount after an execute(). This will be > 0 if the execute() command returns a value otherwise it will be 0.
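Putting the cursor.description check into a small helper (a sketch; the SQL and table names are placeholders):
def run(cur, sql, params=None):
    cur.execute(sql, params)
    if cur.description is None:   # e.g. INSERT/UPDATE without RETURNING
        return None
    return cur.fetchall()

# run(cursor, "insert into hello (username) values (%s)", ("me",))  -> None
# run(cursor, "select * from hello")                                -> list of tuples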
| Detect whether to fetch from psycopg2 cursor or not? | Let's say if I execute the following command.
insert into hello (username) values ('me')
and I ran like
cursor.fetchall()
I get the following error
psycopg2.ProgrammingError: no results to fetch
How can I detect whether to call fetchall() or not without checking the query is "insert" or "select"?
Thanks.
| [
"Look at this attribute:\ncur.description\n\nAfter you have executed your query, it will be set to None if no rows were returned, or will contain data otherwise - for example:\n(Column(name='id', type_code=20, display_size=None, internal_size=8, precision=None, scale=None, null_ok=None),)\n\nCatching exceptions is not ideal because there may be a case where you're overriding a genuine exception.\n",
"Check whether cursor.pgresult_ptr is None or not.\ncursor.execute(sql)\nif cursor.pgresult_ptr is not None:\n cursor.fetchall()\n\n",
"The accepted answer using cur.description does not solve the problem any more. cur.statusmessage can be a solution. This returns SELECT 0 or INSERT 0 1. A simple string operation can then help determine the last query.\n",
"The problem is that what turns out to be None is the result of cur.fetchone()\nSo the way to stop the loop is :\ncursor.execute(\"SELECT * from rep_usd\")\noutput = cursor.fetchone()\nwhile output is not None:\n print(output)\n output = DBCursor.fetchone()\n\ncursor.description will never be None!\n",
"The current best solution I found is to use cursor.rowcount after an execute(). This will be > 0 if the execute() command returns a value otherwise it will be 0.\n"
] | [
31,
4,
2,
0,
0
] | [] | [] | [
"psycopg2",
"python"
] | stackoverflow_0038657566_psycopg2_python.txt |
Q:
How to get name of function that executed code?
Let's say for example I have this fuction:
def example(foo:str="bar"):
# code
How do I get the name of the function (for this, "example") that executed the code, something like this:
def example(foor:str="bar"):
print(functions.get()["name"]) # prints "example"
I looked at the inspect modules and the examples but they didn't make sense and didn't seem to do what I wanted.
A:
Did you mean something like this:
def example():
pass
a = []
a.append(example)
# What is a[0]'s name?
print(a[0].__name__)
As an element of the list a, we don't know the function's name. But by calling the __name__ attribute, I get the associated string.
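If the goal is for a function to report its own name from inside its own body, the inspect module mentioned in the question can do it (a sketch):
import inspect

def example(foo: str = "bar"):
    print(inspect.currentframe().f_code.co_name)  # prints "example"

example()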
| How to get name of function that executed code? | Let's say for example I have this fuction:
def example(foo:str="bar"):
# code
How do I get the name of the function (for this, "example") that executed the code, something like this:
def example(foor:str="bar"):
print(functions.get()["name"]) # prints "example"
I looked at the inspect modules and the examples but they didn't make sense and didn't seem to do what I wanted.
| [
"Did you mean something like this:\ndef example():\n pass\n\na = []\na.append(example)\n\n# What is a[0]'s name?\nprint(a[0].__name__)\n\nAs en element of the list a, we don't know the function's name. But by calling the __name__ attribute, I get the associated string.\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x"
] | stackoverflow_0074644299_python_python_3.x.txt |
Q:
Optimize applying a conditional filter to pandas dataframe for large input list
I have a large (>3 million rows) pandas dataframe that I'd like to apply a single condition (a simple greater than less than) to a large number of inputs. I'd skip the "new_df" step below, but putting it here for clarity.
For example:
df = pd.DataFrame({"X":[0,2,3,6,13],
"Y":[10,12,16,8,22]})
input = 13
new_df = df[(input >= df["X"]) & (input <= df["Y"])]
inds = new_df.index
print(df)
print(new_df)
print(inds)
X Y
0 0 10
1 2 12
2 3 16
3 6 8
4 13 22
X Y
2 3 16
4 13 22
Int64Index([2, 4], dtype='int64')
The Y value will always be > X and X will always be sorted. This is simple , but instead of "input=13" I'd like to apply this filter to thousands of input values and get indices back for each input.
Something like:
df = pd.DataFrame({"X":[0,2,3,6,13],
"Y":[10,12,16,8,22]})
ind_list = []
for input in range(1,5000):
inds = df[(input >= df["X"]) & (input <= df["Y"])].index
ind_list.append(inds)
It works fine, but repeatedly calling a conditional filter on a 3+ million row dataframe over and over takes some time. I feel like there's probably a more efficient way. Any suggestions?
And to be clear since my previous question was flagged as a duplicate for some reason, I am not asking how to get the indices of a dataframe, I'm looking for alternative more efficient ways than conditional filtering inside of a for-loop.
A:
Of course, you can parallelize all available cores of your CPU. I use the convenient and simple parallelbar library. The idea is simple. Create a function that returns a list of indexes. Run this function on a thread pool and pass a list of inputs to the pool.
import pandas as pd
import numpy as np
from parallelbar import progress_map #pip install parallelbar
#create dataframe with 3000 000 rows
df = pd.DataFrame({"X":np.random.randint(0, 1000, size=3000_000),
"Y":np.random.randint(0, 1000, size=3000_000)})
# synchronous cycle
ind_list = []
for input in range(1,5000):
inds = df[(input >= df["X"]) & (input <= df["Y"])].index
ind_list.append(inds)
Output: Wall time: 25 s
#create simple function
def foo(inp):
return df[(inp >= df["X"]) & (inp <= df["Y"])].index
#run in the thread pool with map method
%%time
result = progress_map(foo, range(1,5000), executor='threads')
Output: Wall time is 17 s.
It's a little faster.
also, you can pass to numpy array. Operations directly on these arrays are always faster than pandas.
np_val = df.values
#synchronous cycle with numpy
%%time
ind_list = []
for i in range(1,5000):
inds = np.where((i >= np_val[:,0]) & (i <= np_val[:,1]))[0]
ind_list.append(inds)
Output: Wall time is 15 s
def foo(inp):
return np.where((inp >= np_val[:,0]) & (inp <= np_val[:,1]))[0]
%%time
result = progress_map(foo, range(1,5000), executor='threads')
Output: Wall time is 8.5 s
Total speedup: 25/8.5 ~ 3
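Another option worth benchmarking (a sketch on the toy frame; for 3+ million rows you would likely process the inputs in chunks to keep the boolean mask in memory) is to broadcast all inputs against the columns at once:
import numpy as np
import pandas as pd

df = pd.DataFrame({"X": [0, 2, 3, 6, 13], "Y": [10, 12, 16, 8, 22]})
inputs = np.arange(1, 5000)

x = df["X"].to_numpy()
y = df["Y"].to_numpy()
mask = (inputs[:, None] >= x) & (inputs[:, None] <= y)  # shape (n_inputs, n_rows)
ind_list = [df.index[m] for m in mask]
print(ind_list[12])  # indices for input == 13 -> [2, 4]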
| Optimize applying a conditional filter to pandas dataframe for large input list | I have a large (>3 million rows) pandas dataframe that I'd like to apply a single condition (a simple greater than less than) to a large number of inputs. I'd skip the "new_df" step below, but putting it here for clarity.
For example:
df = pd.DataFrame({"X":[0,2,3,6,13],
"Y":[10,12,16,8,22]})
input = 13
new_df = df[(input >= df["X"]) & (input <= df["Y"])]
inds = new_df.index
print(df)
print(new_df)
print(inds)
X Y
0 0 10
1 2 12
2 3 16
3 6 8
4 13 22
X Y
2 3 16
4 13 22
Int64Index([2, 4], dtype='int64')
The Y value will always be > X and X will always be sorted. This is simple , but instead of "input=13" I'd like to apply this filter to thousands of input values and get indices back for each input.
Something like:
df = pd.DataFrame({"X":[0,2,3,6,13],
"Y":[10,12,16,8,22]})
ind_list = []
for input in range(1,5000):
inds = df[(input >= df["X"]) & (input <= df["Y"])].index
ind_list.append(inds)
It works fine, but repeatedly calling a conditional filter on a 3+ million row dataframe over and over takes some time. I feel like there's probably a more efficient way. Any suggestions?
And to be clear since my previous question was flagged as a duplicate for some reason, I am not asking how to get the indices of a dataframe, I'm looking for alternative more efficient ways than conditional filtering inside of a for-loop.
| [
"Of course, you can parallelize all available cores of your CPU. I use the convenient and simple parallelbar library. The idea is simple. Create a function that returns a list of indexes. Run this function on a thread pool and pass a list of inputs to the pool.\nimport pandas as pd\nimport numpy as np\nfrom parallelbar import progress_map #pip install parallelbar\n#create dataframe with 3000 000 rows\ndf = pd.DataFrame({\"X\":np.random.randint(0, 1000, size=3000_000),\n \"Y\":np.random.randint(0, 1000, size=3000_000)})\n# synchronous cicle\nind_list = []\nfor input in range(1,5000):\n inds = df[(input >= df[\"X\"]) & (input <= df[\"Y\"])].index\n ind_list.append(inds)\n\nOutpu: Wall time: 25 s\n\n#create simple function\ndef foo(inp):\n return df[(inp >= df[\"X\"]) & (inp <= df[\"Y\"])].index\n\n#run in the thread pool with map method\n%%time\nresult = progress_map(foo, range(1,5000), executor='threads')\n\nOutput: Wall time is 17 s.\n\nIt's a little faster.\nalso, you can pass to numpy array. Operations directly on these arrays are always faster than pandas.\nnp_val = df.values\n\n#synchronous cycle with numpy\n%%time\nind_list = []\nfor i in range(1,5000):\n inds = np.where((i >= np_val[:,0]) & (i <= np_val[:,1]))[0]\n ind_list.append(inds)\n\nOutput: Wall time is 15 s\n\n\ndef foo(inp):\n return np.where((inp >= np_val[:,0]) & (inp <= np_val[:,1]))[0]\n\n%%time\nresult = progress_map(foo, range(1,5000), executor='threads')\n\nOutpu: Wall time is 8.5 s\n\nTotal speedup: 25/8.5 ~ 3\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074644336_dataframe_pandas_python.txt |
Q:
Python-setattr pass function with args
I'm trying to set methods of a class programmatically by calling setattr in a loop, but the reference I pass to the function that is called by this method defaults back to its last value, instead of what was passed at the time of the setattr call. Curiously, I'm also setting the __doc__ attribute and this assignment actually works as expected:
class Foo2:
def do_this(self, pass_this: str):
print(pass_this)
class Foo:
def __init__(self):
self.reference = "ti-hihi"
self.foo2 = Foo2()
for (method_name, pass_this) in [("bar", "passed-for-bar"), ("bar2", "passed-for-bar2")]:
my_doc = f"""my_custom_docstring: {pass_this}"""
def some_func():
self.foo2.do_this(pass_this=pass_this)
some_func.__doc__ = my_doc
setattr(self, method_name, some_func)
if __name__ == '__main__':
f = Foo()
f.bar() # prints "pass-for-bar2" instead of "pass-for-bar"
f.bar.__doc__ # prints "pass-for-bar" as expected
I already tried a few things but couldn't figure it out.
Things I tried:
lambda -- my best bet, tbh
def some_func(reference):
self.foo2.do_this(pass_this=reference)
some_func.__doc__ = my_doc
setattr(self, method_name, lambda: some_func(pass_this))
deepcopy
import copy
def some_func():
self.foo2.do_this(pass_this=copy.deepcopy(pass_this))
some_func.__doc__ = my_doc
setattr(self, method_name, some_func)
another deepcopy variant which feels dangerous if I think about the place I want to put this:
import copy
def some_func():
self.foo2.do_this(pass_this=pass_this)
some_func.__doc__ = my_doc
setattr(self, method_name, copy.deepcopy(some_func))
... and a few combinations of those but I'm missing some crucial piece.
A:
Methods are class attributes, so some_func needs to be attached to type(self), not self itself.
class Foo:
def __init__(self):
self.reference = "ti-hihi"
self.foo2 = Foo2()
for (method_name, pass_this) in [("bar", "passed-for-bar"), ("bar2", "passed-for-bar2")]:
my_doc = f"""my_custom_docstring: {pass_this}"""
def some_func(self):
self.foo2.do_this(pass_this=pass_this)
some_func.__doc__ = my_doc
setattr(type(self), method_name, some_func)
As such, the __init__ method is not really the proper place to do this, as you'll be repeatedly attaching the (effectively same) methods to the class every time you instantiate the class. A class decorator would be more appropriate:
def add_methods(cls):
for (method_name, pass_this) in [("bar", "passed-for-bar"), ("bar2", "passed-for-bar2")]:
my_doc = ...
def some_func(self):
self.foo2.do_this(pass_this=pass_this)
some_func.__doc__ = my_doc
setattr(cls, method_name, some_func)
return cls
@add_methods
class Foo:
def __init__(self):
self.reference = "ti-hihi"
self.foo2 = Foo2()
add_methods doesn't really require any special information about Foo. Indeed, add_methods can be applied to any class that
Defines a foo2 attribute for its instances, and
foo2 has a type that provides an appropriate do_this method.
Even that assumption can be made more explicit by defining a suitable base class.
class FooBase:
    def __init__(self):
        self.foo2 = Foo2()
@add_methods
class Foo(FooBase):
    pass
Now you can simply state that add_methods is designed to work with subclasses of FooBase.
A:
Thanks to @AndrewAllaire for this tip, using functools.partial solved it for me
some_func = partial(self.foo2.do_this, pass_this=pass_this)
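For completeness, a minimal sketch of how the original loop might look with this approach (an assumption about the rest of the class, not the answerer's exact code; partial binds pass_this at definition time, and partial objects accept attribute assignment, so the docstring can still be attached):
from functools import partial
class Foo:
    def __init__(self):
        self.reference = "ti-hihi"
        self.foo2 = Foo2()
        for (method_name, pass_this) in [("bar", "passed-for-bar"), ("bar2", "passed-for-bar2")]:
            # each generated method keeps its own bound value of pass_this
            some_func = partial(self.foo2.do_this, pass_this=pass_this)
            some_func.__doc__ = f"my_custom_docstring: {pass_this}"
            setattr(self, method_name, some_func)
With this, f.bar() prints "passed-for-bar" and f.bar2() prints "passed-for-bar2".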
| Python-setattr pass function with args | I'm trying to set methods of a class programmatically by calling setattr in a loop, but the reference I pass to the function that is called by this method defaults back to its last value, instead of what was passed at the time of the setattrcall. Curiously, I'm also setting the __doc__ attribute and this assignment actually works as expected:
class Foo2:
def do_this(self, pass_this: str):
print(pass_this)
class Foo:
def __init__(self):
self.reference = "ti-hihi"
self.foo2 = Foo2()
for (method_name, pass_this) in [("bar", "passed-for-bar"), ("bar2", "passed-for-bar2")]:
my_doc = f"""my_custom_docstring: {pass_this}"""
def some_func():
self.foo2.do_this(pass_this=pass_this)
some_func.__doc__ = my_doc
setattr(self, method_name, some_func)
if __name__ == '__main__':
f = Foo()
f.bar() # prints "pass-for-bar2" instead of "pass-for-bar"
f.bar.__doc__ # prints "pass-for-bar" as expected
I already tried a few things but couldn't figure it out.
Things I tried:
lambda -- my best bet, tbh
def some_func(reference):
self.foo2.do_this(pass_this=reference)
some_func.__doc__ = my_doc
setattr(self, method_name, lambda: some_func(pass_this))
deepcopy
import copy
def some_func():
self.foo2.do_this(pass_this=copy.deepcopy(pass_this))
some_func.__doc__ = my_doc
setattr(self, method_name, some_func)
another deepcopy variant which feels dangerous if I think about the place I want to put this:
import copy
def some_func():
self.foo2.do_this(pass_this=pass_this)
some_func.__doc__ = my_doc
setattr(self, method_name, copy.deepcopy(some_func))
... and a few combinations of those but I'm missing some crucial piece.
| [
"Methods are class attributes, so some_func needs to be attached to type(self), not self itself.\nclass Foo:\n\n def __init__(self):\n self.reference = \"ti-hihi\"\n self.foo2 = Foo2()\n for (method_name, pass_this) in [(\"bar\", \"passed-for-bar\"), (\"bar2\", \"passed-for-bar2\")]:\n my_doc = f\"\"\"my_custom_docstring: {pass_this}\"\"\"\n\n def some_func(self):\n self.foo2.do_this(pass_this=pass_this)\n some_func.__doc__ = my_doc\n setattr(type(self), method_name, some_func)\n\nAs such, the __init_ method is not really the proper place to do this, as you'll be repeatedly attaching the (effectively same) methods to the class every time you instantiate the class. A class decorator would be more appropriate:\ndef add_methods(cls):\n for (method_name, pass_this) in [(\"bar\", \"passed-for-bar\"), (\"bar2\", \"passed-for-bar2\")]:\n my_doc = ...\n def some_func(self):\n self.foo2.do_this(pass_this=pass_this)\n some_func.__doc__ = my_doc\n setattr(cls, method_name, some_func)\n return cls\n\n\n@add_methods\nclass Foo:\n def __init__(self):\n self.reference = \"ti-hihi\"\n self.foo2 = Foo2()\n\nadd_methods doesn't really require any special information about Foo. Indeed, add_methods can be applied to any class that\n\nDefines a foo2 attribute for its instances, and\nfoo2 has a type that provides an appropriate do_this method.\n\nEven that assumption can be made more explicit by defining a suitable base class.\ndef FooBase:\n def __init__(self):\n self.foo2 = Foo2()\n\n@add_methods\ndef Foo(FooBase):\n pass\n\nNow you can simply state that add_methods is designed to work with subclass of FooBase.\n",
"Thanks to @AndrewAllaire for this tip, using functools.partial solved it for me\nsome_func = partial(self.foo2.do_this, pass_this=pass_this)\n\n"
] | [
0,
0
] | [] | [] | [
"class",
"lambda",
"python",
"setattr"
] | stackoverflow_0074644762_class_lambda_python_setattr.txt |
Q:
python module dlls
Is there a way to make a python module load a dll in my application directory rather than the version that came with the python installation, without making changes to the python installation (which would then require that I make an installer, and be careful I didn't break other apps for people by overwriting python modules and changing dll versions globally...)?
Specifically I would like python to use my version of the sqlite3.dll, rather than the version that came with python (which is older and doesn't appear to have the fts3 module).
A:
If you're talking about Python module DLLs, then simply modifying sys.path should be fine. However, if you're talking about DLLs linked against those DLLs; i.e. a libfoo.dll which a foo.pyd depends on, then you need to modify your PATH environment variable. I wrote about doing this for PyGTK a while ago, but in your case I think it should be as simple as:
import os
os.environ['PATH'] = 'my-app-dir' + os.pathsep + os.environ['PATH']
That will insert my-app-dir at the head of your Windows path, which I believe also controls the load-order for DLLs.
Keep in mind that you will need to do this before loading the DLL in question, i.e., before importing anything interesting.
sqlite3 may be a bit of a special case, though, since it is distributed with Python; it's obviously kind of tricky to test this quickly, so I haven't checked sqlite3.dll specifically.
A:
The answer with modifying os.environ['PATH'] is right but it didn't work for me because I use python 3.9.
Still I was getting an error:
ImportError: DLL load failed while importing module X: The specified
module could not be found.
Turned out since version python 3.8 they added a mechanism to do this more securely.
Read documentation on os.add_dll_directory https://docs.python.org/3/library/os.html#os.add_dll_directory
Specifically see python 3.8 what's new:
DLL dependencies for extension modules and DLLs loaded with ctypes on Windows are now resolved more securely. Only the system paths, the directory containing the DLL or PYD file, and directories added with add_dll_directory() are searched for load-time dependencies. Specifically, PATH and the current working directory are no longer used, and modifications to these will no longer have any effect on normal DLL resolution. If your application relies on these mechanisms, you should check for add_dll_directory() and if it exists, use it to add your DLLs directory while loading your library.
So now this is the new way to make it work in python 3.8 and later:
import os
os.add_dll_directory('my-app-dir')
Again, the old way is still correct and you will have to use it in python 3.7 and older:
import os
os.environ['PATH'] = 'my-app-dir' + os.pathsep + os.environ['PATH']
After that my module with a dll dependency has been successfully loaded.
A:
Ok it turns out python always loads the dll in the same directory as the pyd file, regardless of what the python and os paths are set to.
So I needed to copy the _sqlite3.pyd from python/v2.5/DLLS to my apps directory where the new sqlite3.dll is, making it load my new dll, rather than the one that comes with python (since the pyd files seem to follow the PYTHONPATH, even though the actual dlls themselves don't).
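A quick way to confirm which library was actually picked up (not part of the original answer, just a sanity check): sqlite3.sqlite_version reports the version of the loaded SQLite DLL, whereas sqlite3.version is the version of the Python wrapper module.
import sqlite3
print(sqlite3.sqlite_version)  # version of the sqlite3.dll that was actually loaded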
A:
If your version of sqlite is in sys.path before the systems version it will use that. So you can either put it in the current directory or change the PYTHONPATH environment variable to do that.
A:
I had the same issue, as administrative rights to the default python library are blocked in a corporate environment and it's extremely troublesome to perform installations.
What works for me:
Duplicate the sqlite3 library in a new location
Put in the latest sqlite3.dll (version you want from sqlite3 web) and the old _sqlite3.pyd into the new location or the new sqlite3 library. The old _sqlite3.pyd can be found in the default python library lib/DLLs folder.
Go to the new sqlite3 library and amend the dbapi2.py as follows:
Change "from _sqlite3 import *" to "from sqlite3._sqlite3 import *"
Make sure python loads this new sqlite3 library first. Add the path to this library if you must.
A:
I encountered the same problem for my .pyd that depends on cuda/cudnn/tensorrt.
I came up with a little function I call before importing my module.
def add_cuda_to_path():
if os.name != "nt":
return
path = os.getenv("PATH")
if not path:
return
path_split = path.split(";")
for folder in path_split:
if "cuda" in folder.lower() or "tensorrt" in folder.lower():
os.add_dll_directory(folder)
It is the easiest workaround I found, so I don't need to hardcode any path.
I guess a little improvement would be to actually check for the .dll files, but this snippet serves my needs well.
Then you can simply:
add_cuda_to_path()
import my_module_that_depends_on_cuda
Have a nice day.
| python module dlls | Is there a way to make a python module load a dll in my application directory rather than the version that came with the python installation, without making changes to the python installation (which would then require I made an installer, and be careful I didn't break other apps for people by overwrting python modules and changing dll versions globaly...)?
Specifically I would like python to use my version of the sqlite3.dll, rather than the version that came with python (which is older and doesn't appear to have the fts3 module).
| [
"If you're talking about Python module DLLs, then simply modifying sys.path should be fine. However, if you're talking about DLLs linked against those DLLs; i.e. a libfoo.dll which a foo.pyd depends on, then you need to modify your PATH environment variable. I wrote about doing this for PyGTK a while ago, but in your case I think it should be as simple as:\nimport os\nos.environ['PATH'] = 'my-app-dir' + os.pathsep + os.environ['PATH']\n\nThat will insert my-app-dir at the head of your Windows path, which I believe also controls the load-order for DLLs.\nKeep in mind that you will need to do this before loading the DLL in question, i.e., before importing anything interesting.\nsqlite3 may be a bit of a special case, though, since it is distributed with Python; it's obviously kind of tricky to test this quickly, so I haven't checked sqlite3.dll specifically.\n",
"The answer with modifying os.environ['PATH'] is right but it didn't work for me because I use python 3.9.\nStill I was getting an error:\n\nImportError: DLL load failed while importing module X: The specified\nmodule could not be found.\n\nTurned out since version python 3.8 they added a mechanism to do this more securely.\nRead documentation on os.add_dll_directory https://docs.python.org/3/library/os.html#os.add_dll_directory\nSpecifically see python 3.8 what's new:\n\nDLL dependencies for extension modules and DLLs loaded with ctypes on Windows are now resolved more securely. Only the system paths, the directory containing the DLL or PYD file, and directories added with add_dll_directory() are searched for load-time dependencies. Specifically, PATH and the current working directory are no longer used, and modifications to these will no longer have any effect on normal DLL resolution. If your application relies on these mechanisms, you should check for add_dll_directory() and if it exists, use it to add your DLLs directory while loading your library.\n\nSo now this is the new way to make it work in python 3.8 and later:\nimport os\nos.add_dll_directory('my-app-dir')\n\nAgain, the old way is still correct and you will have to use it in python 3.7 and older:\nimport os\nos.environ['PATH'] = 'my-app-dir' + os.pathsep + os.environ['PATH']\n\nAfter that my module with a dll dependency has been successfully loaded.\n",
"Ok it turns out python always loads the dll in the same directory as the pyd file, regardless of what the python and os paths are set to.\nSo I needed to copy the _sqlite3.pyd from python/v2.5/DLLS to my apps directory where the new sqlite3.dll is, making it load my new dll, rather than the one that comes with python (since the pyd files seem to follow the PYTHONPATH, even though the actual dlls themselves don't).\n",
"If your version of sqlite is in sys.path before the systems version it will use that. So you can either put it in the current directory or change the PYTHONPATH environment variable to do that.\n",
"I had the same issue as administrative rights to the default python library is blocked in a corporate environment and its extremely troublesome to perform installations.\nWhat works for me:\n\nDuplicate the sqlite3 library in a new location\nPut in the latest sqlite3.dll (version you want from sqlite3 web) and the old _sqlite3.pyd into the new location or the new sqlite3 library. The old _sqlite3.pyd can be found in the default python library lib/DLLs folder.\nGo to the new sqlite3 library and amend the dbapi2.py as follows:\nChange \"from _sqlite3 import *\" to \"from sqlite3._sqlite3 import *\"\nMake sure python loads this new sqlite3 library first. Add the path to this library if you must.\n\n",
"I encountered the same problem for my .pyd that depends on cuda/cudnn/tensorrt.\nI came with a little function I call before importing my module.\ndef add_cuda_to_path():\n if os.name != \"nt\":\n return\n path = os.getenv(\"PATH\")\n if not path:\n return\n path_split = path.split(\";\")\n for folder in path_split:\n if \"cuda\" in folder.lower() or \"tensorrt\" in folder.lower():\n os.add_dll_directory(folder)\n\nIt is the easiest workaround I found, so I don't need to hardcode any path.\nI guess little improvement would to actually check for .dll file, but this snippet serves well my needs.\nThen you can simply:\nadd_cuda_to_path()\nimport my_module_that_depends_on_cuda\n\nHave a nice day.\n"
] | [
32,
15,
7,
0,
0,
0
] | [] | [] | [
"module",
"python"
] | stackoverflow_0000214852_module_python.txt |
Q:
Multiple choice in model
ANIMALS = (('dog','dog'), ('cat','cat'))
class Owner(models.Model):
    animal = models.CharField(choices=ANIMALS, max_length=10)
My problem is how I can do this if I have both?
A:
You need to have a model like this:
class Choices(models.Model):
animal = models.CharField(max_length=20)
class Owner(models.Model):
animal = models.ManyToManyField(Choices)
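With that model an owner can hold both animals at once — a short usage sketch (names follow the models above):
dog = Choices.objects.create(animal="dog")
cat = Choices.objects.create(animal="cat")
owner = Owner.objects.create()
owner.animal.add(dog, cat)  # attach both choices to the same owner
print(owner.animal.values_list("animal", flat=True))  # <QuerySet ['dog', 'cat']>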
A:
Presumably your best solution is to think about how you are going to store the data at a database level. This will dictate your approach and therefore your solution.
You want a single column that stores multiple values so your best approach would be to use django-multiselectfield
from multiselectfield import MultiSelectField
MY_CHOICES = ((1, 'Item title 2.1'),
(2, 'Item title 2.2'),
(3, 'Item title 2.3'),
(4, 'Item title 2.4'),
(5, 'Item title 2.5'))
class MyModel(models.Model):
my_field = MultiSelectField(choices=MY_CHOICES,
max_choices=3,
max_length=3)
| Multiple choice in model | ANIMALS = (('dog','dog'), ('cat','cat'))
class Owner(models.Model):
    animal = models.CharField(choices=ANIMALS, max_length=10)
My problem is how I can do this if I have both?
| [
"your need to have model like this:\nclass Choices(models.Model):\n animal = models.CharField(max_length=20)\n\nclass Owner(models.Model):\n animal = models.ManyToManyField(Choices)\n\n",
"Presumably your best solution is to think about how you are going to store the data at a database level. This will dictate your approach and therefore your solution.\nYou want a single column that stores multiple values so your best approach would be to use django-multiselectfield\nfrom multiselectfield import MultiSelectField\n\nMY_CHOICES = ((1, 'Item title 2.1'),\n (2, 'Item title 2.2'),\n (3, 'Item title 2.3'),\n (4, 'Item title 2.4'),\n (5, 'Item title 2.5'))\n\nclass MyModel(models.Model):\n my_field = MultiSelectField(choices=MY_CHOICES,\n max_choices=3,\n max_length=3)\n\n"
] | [
0,
0
] | [] | [] | [
"django",
"python"
] | stackoverflow_0074644492_django_python.txt |
Q:
Renaming Anytree Parent and Child Name
I have a dataset as follows
Unique Name
Parent
Child
US_SQ
A
A1
UC_LC
A
A2
UK_SJ
A2
A21
UI_QQ
B
B1
Now I want to set the output as follows:
US_SQ
├── A1
└── UC_LC
└── UK_SJ
UI_QQ
└── B1
In other words, I want to use the Unique name column value in the tree.
This is the code that I am using:
def add_nodes(nodes, parent, child):
if parent not in nodes:
nodes[parent] = Node(parent)
if child not in nodes:
nodes[child] = Node(child)
nodes[child].parent = nodes[parent]
data = pd.DataFrame(columns=["Parent","Child"], data=[["US_SQ","A","A1"],["UC_LC","A","A2"],["UK_SJ","A2","A21"],["UI_QQ","B","B1"]])
nodes = {} # store references to created nodes
# data.apply(lambda x: add_nodes(nodes, x["Parent"], x["Child"]), axis=1) # 1-liner
for parent, child in zip(data["Parent"],data["Child"]):
add_nodes(nodes, parent, child)
roots = list(data[~data["Parent"].isin(data["Child"])]["Parent"].unique())
for root in roots: # you can skip this for roots[0], if there is no forest and just 1 tree
for pre, _, node in RenderTree(nodes[root]):
print("%s%s" % (pre, node.name))
Also, is there a way to access the tree data efficiently, or a format to save the tree data in, so that we can easily find a parent/child node?
The above data and problem are taken from here:
Read data from a pandas DataFrame and create a tree using anytree in python
A:
There are two parts to your question.
1. Renaming the Node
Regarding renaming the node by using Unique Name as the alias for Parent name, the above answer on aliasDict is good but we can modify the DataFrame directly instead, leaving your code unchanged.
I have modified your DataFrame because it does not seem to run properly, and your code example does not clearly show that Unique Name is an alias for Parent in some cases.
data = pd.DataFrame(
columns=["Unique Name", "Parent", "Child"],
data=[
["US_SQ", "A", "A1"],
["US_SQ", "A", "A2"],
["UC_LC", "A2", "A21"],
["UI_QQ", "B", "B1"]
]
)
# Rename Parent and Child columns using aliasDict
aliasDict = dict(data[["Parent", "Unique Name"]].values)
data["Parent"] = data["Parent"].replace(aliasDict)
data["Child"] = data["Child"].replace(aliasDict)
# Your original code - unchanged
nodes = {}
for parent, child in zip(data["Parent"],data["Child"]):
add_nodes(nodes, parent, child)
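On the second part of the question (finding a parent/child node quickly): the nodes dictionary above already gives O(1) lookup by name, and every anytree Node exposes .parent, .children and .name directly, for example:
node = nodes["UC_LC"]
print(node.parent.name)                 # US_SQ
print([c.name for c in node.children])  # ['A21']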
2. Exporting to DataFrame
In the second part, anyTree does not provide integration with pandas DataFrame. An alternative bigtree Python package does this out-of-the-box for you.
The whole code example can be implemented as such,
import pandas as pd
from bigtree import dataframe_to_tree_by_relation, print_tree, tree_to_dataframe
data = pd.DataFrame(
columns=["Unique Name", "Parent", "Child"],
data=[
["root", "root", "A"], # added this line
["root", "root", "B"], # added this line
["US_SQ", "A", "A1"],
["US_SQ", "A", "A2"],
["UC_LC", "A2", "A21"],
["UI_QQ", "B", "B1"]
]
)
# Rename Parent and Child columns using aliasDict (same as above)
aliasDict = dict(data[["Parent", "Unique Name"]].values)
data["Parent"] = data["Parent"].replace(aliasDict)
data["Child"] = data["Child"].replace(aliasDict)
# Create a tree from dataframe, print the tree
root = dataframe_to_tree_by_relation(data, parent_col="Parent", child_col="Child")
print_tree(root)
# root
# ├── US_SQ
# │ ├── A1
# │ └── UC_LC
# │ └── A21
# └── UI_QQ
# └── B1
# Export tree to dataframe
tree_to_dataframe(root, parent_col="Parent", name_col="Child")
# path Child Parent
# 0 /root root None
# 1 /root/US_SQ US_SQ root
# 2 /root/US_SQ/A1 A1 US_SQ
# 3 /root/US_SQ/UC_LC UC_LC US_SQ
# 4 /root/US_SQ/UC_LC/A21 A21 UC_LC
# 5 /root/UI_QQ UI_QQ root
# 6 /root/UI_QQ/B1 B1 UI_QQ
Source: I'm the creator of bigtree ;)
| Renaming Anytree Parent and Child Name | I have a dataset as follows
Unique Name
Parent
Child
US_SQ
A
A1
UC_LC
A
A2
UK_SJ
A2
A21
UI_QQ
B
B1
Now I want to set the output as follows:
US_SQ
├── A1
└── UC_LC
└── UK_SJ
UI_QQ
└── B1
In other words, I want to use the Unique name column value in the tree.
This is the code that I am using:
def add_nodes(nodes, parent, child):
if parent not in nodes:
nodes[parent] = Node(parent)
if child not in nodes:
nodes[child] = Node(child)
nodes[child].parent = nodes[parent]
data = pd.DataFrame(columns=["Parent","Child"], data=[["US_SQ","A","A1"],["UC_LC","A","A2"],["UK_SJ","A2","A21"],["UI_QQ","B","B1"]])
nodes = {} # store references to created nodes
# data.apply(lambda x: add_nodes(nodes, x["Parent"], x["Child"]), axis=1) # 1-liner
for parent, child in zip(data["Parent"],data["Child"]):
add_nodes(nodes, parent, child)
roots = list(data[~data["Parent"].isin(data["Child"])]["Parent"].unique())
for root in roots: # you can skip this for roots[0], if there is no forest and just 1 tree
for pre, _, node in RenderTree(nodes[root]):
print("%s%s" % (pre, node.name))
Also, is there a way to access the tree data efficiently/ is there any format to save the tree data so that we can easily find the parent/child node easily?
The above data and problem is used from here:
Read data from a pandas DataFrame and create a tree using anytree in python
| [
"There are two parts to your question.\n1. Renaming the Node\nRegarding renaming the node by using Unique Name as the alias for Parent name, the above answer on aliasDict is good but we can modify the DataFrame directly instead, leaving your code unchanged.\nI have modified your DataFrame because it does not seem to run properly, and your code example does not clearly show that Unique Name is an alias for Parent in some cases.\ndata = pd.DataFrame(\n columns=[\"Unique Name\", \"Parent\", \"Child\"],\n data=[\n [\"US_SQ\", \"A\", \"A1\"],\n [\"US_SQ\", \"A\", \"A2\"],\n [\"UC_LC\", \"A2\", \"A21\"],\n [\"UI_QQ\", \"B\", \"B1\"]\n ]\n)\n\n# Rename Parent and Child columns using aliasDict\naliasDict = dict(data[[\"Parent\", \"Unique Name\"]].values)\ndata[\"Parent\"] = data[\"Parent\"].replace(aliasDict)\ndata[\"Child\"] = data[\"Child\"].replace(aliasDict)\n\n# Your original code - unchanged\nnodes = {}\nfor parent, child in zip(data[\"Parent\"],data[\"Child\"]):\n add_nodes(nodes, parent, child)\n\n2. Exporting to DataFrame\nIn the second part, anyTree does not provide integration with pandas DataFrame. An alternative bigtree Python package does this out-of-the-box for you.\nThe whole code example can be implemented as such,\nimport pandas as pd\nfrom bigtree import dataframe_to_tree_by_relation, print_tree, tree_to_dataframe\n\ndata = pd.DataFrame(\n columns=[\"Unique Name\", \"Parent\", \"Child\"],\n data=[\n [\"root\", \"root\", \"A\"], # added this line\n [\"root\", \"root\", \"B\"], # added this line\n [\"US_SQ\", \"A\", \"A1\"],\n [\"US_SQ\", \"A\", \"A2\"],\n [\"UC_LC\", \"A2\", \"A21\"],\n [\"UI_QQ\", \"B\", \"B1\"]\n ]\n)\n\n# Rename Parent and Child columns using aliasDict (same as above)\naliasDict = dict(data[[\"Parent\", \"Unique Name\"]].values)\ndata[\"Parent\"] = data[\"Parent\"].replace(aliasDict)\ndata[\"Child\"] = data[\"Child\"].replace(aliasDict)\n\n# Create a tree from dataframe, print the tree\nroot = dataframe_to_tree_by_relation(data, parent_col=\"Parent\", child_col=\"Child\")\nprint_tree(root)\n# root\n# ├── US_SQ\n# │ ├── A1\n# │ └── UC_LC\n# │ └── A21\n# └── UI_QQ\n# └── B1\n\n# Export tree to dataframe\ntree_to_dataframe(root, parent_col=\"Parent\", name_col=\"Child\")\n# path Child Parent\n# 0 /root root None\n# 1 /root/US_SQ US_SQ root\n# 2 /root/US_SQ/A1 A1 US_SQ\n# 3 /root/US_SQ/UC_LC UC_LC US_SQ\n# 4 /root/US_SQ/UC_LC/A21 A21 UC_LC\n# 5 /root/UI_QQ UI_QQ root\n# 6 /root/UI_QQ/B1 B1 UI_QQ\n\nSource: I'm the creator of bigtree ;)\n"
] | [
2
] | [] | [] | [
"anytree",
"python",
"tree",
"treeview"
] | stackoverflow_0074622229_anytree_python_tree_treeview.txt |
Q:
Why the packages object-hash and crypto / hashlib return different values for sha1?
I have a JavaScript frontend which compares two object-hash sha1 hashes in order to determine if an input has changed (in which case a processing pipeline needs to be rerun).
I started building a python interface to interact with the same backend which uses hashlib for the sha1 generation, but unfortunately the two functions return different hash values even though the inputs are the same.
I managed to produce the same hash values as hashlib using crypto, which means that the issue arises from object-hash.
hashlib
import json
import hashlib
data = {
'key1': 'value1',
'key2': 'value2',
'key3': 'value3',
};
json_data = json.dumps(data, separators=(',', ':')).encode('utf-8')
hash = hashlib.sha1()
hash.update(json_data)
print(hash.hexdigest())
# outputs f692755b3c38bc6b0dc376d775db8b07d6d5f256
crypto
const crypto = require('crypto');
const data = {
key1: 'value1',
key2: 'value2',
key3: 'value3',
};
const stringData = JSON.stringify(data)
const shasum = crypto.createHash('sha1')
shasum.update(stringData)
console.log(shasum.digest('hex'));
// (same as hashlib) outputs f692755b3c38bc6b0dc376d775db8b07d6d5f256
object-hash (Tested with and without stringifying with no success)
const data = {
key1: 'value1',
key2: 'value2',
key3: 'value3',
};
const stringData = JSON.stringify(data)
const objectHash = require('object-hash');
console.log(objectHash.sha1(stringData));
// outputs b5b0a100d7852748fe2e35bf00eeb536ad2d17d1
I saw in object-hash docs that the package is using crypto so it doesn't make sense for the two outputs to be different.
How can I make object-hash and hashlib/crypto all produce the same sha1 value?
A:
It turns out that object-hash prefixes the variable for hashing with its type. In the case of strings I needed to add string:{string_length}: to the hash stream.
hash = hashlib.sha1()
hash.update(f'string:{len(json_data)}:'.encode('utf-8')) # The line in question
hash.update(json_data)
res = hash.hexdigest()
print(res)
Having done that, the hashes produced by hashlib and crypto are the same as those of object-hash.
Note: This is not documented and I had to look through the source code to find exactly how to prefix strings in particular. Other types have different prefixes.
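If you need this in more than one place, the fix can be wrapped in a small helper (a hypothetical convenience function, assuming the length prefix is the UTF-8 byte length — identical to the character count for ASCII input such as this JSON):
import hashlib
def object_hash_sha1_for_string(s: str) -> str:
    data = s.encode('utf-8')
    h = hashlib.sha1()
    h.update(f'string:{len(data)}:'.encode('utf-8'))  # type prefix object-hash uses for strings
    h.update(data)
    return h.hexdigest()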
| Why the packages object-hash and crypto / hashlib return different values for sha1? | I have a javascript frontend which compares two object-hash sha1 hashes in order to determine if an input has changed (in which case a processing pipeline needs to be reran).
I started building a python interface to interact with the same backend which uses hashlib for the sha1 generation, but unfortunately the two functions return different hash values even though the inputs are the same.
I managed to produce the same hash values as hashlib using crypto, which means that the issue arises from object-hash.
hashlib
import json
import hashlib
data = {
'key1': 'value1',
'key2': 'value2',
'key3': 'value3',
};
json_data = json.dumps(data, separators=(',', ':')).encode('utf-8')
hash = hashlib.sha1()
hash.update(json_data)
print(hash.hexdigest())
# outputs f692755b3c38bc6b0dc376d775db8b07d6d5f256
crypto
const crypto = require('crypto');
const data = {
key1: 'value1',
key2: 'value2',
key3: 'value3',
};
const stringData = JSON.stringify(data)
const shasum = crypto.createHash('sha1')
shasum.update(stringData)
console.log(shasum.digest('hex'));
// (same as hashlib) outputs f692755b3c38bc6b0dc376d775db8b07d6d5f256
object-hash (Tested with and without stringifying with no success)
const data = {
key1: 'value1',
key2: 'value2',
key3: 'value3',
};
const stringData = JSON.stringify(data)
const objectHash = require('object-hash');
console.log(objectHash.sha1(stringData));
// outputs b5b0a100d7852748fe2e35bf00eeb536ad2d17d1
I saw in object-hash docs that the package is using crypto so it doesn't make sense for the two outputs to be different.
How can I make object-hash and hashlib/crypto all produce the same sha1 value?
| [
"It turns out that object-hash prefixes the variable for hashing with its type. In the case of strings I needed to add string:{string_length}: to the hash stream.\nhash = hashlib.sha1()\n\nhash.update(f'string:{len(json_data)}:'.encode('utf-8')) # The line in question\n\nhash.update(json_data)\nres = hash.hexdigest()\nprint(res)\n\nHaving done that, the hashes produced by hashlib and crypto are the same as those of object-hash.\nNote: This is not documented and I had to look through the source code to find exactly how to prefix strings in particular. Other types have different prefixes.\n"
] | [
0
] | [] | [] | [
"hashlib",
"javascript",
"node.js",
"object_hash",
"python"
] | stackoverflow_0074633334_hashlib_javascript_node.js_object_hash_python.txt |
Q:
Found array with 0 feature(s) (shape=(10792, 0)) while a minimum of 1 is required
Hey, I am using Jupyter Notebook and doing machine learning.
I wrote this code but am getting this error and I don't know what the error is.
This is my code for reference:
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
imp = IterativeImputer(random_state=42)
date = pd.Timestamp('2200-01-01')
for col in combi:
if combi[col].dtype=="object":
combi[col].fillna("not listed", inplace=True)
if combi[col].dtype=="int":
#X[col].fillna(X[col].mode()[0], inplace=True)
combi[col].fillna(combi[col].mean(), inplace=True)
#combi[col] = combi[col].astype.int()
if combi[col].dtype=='float':
#X[col].fillna(X[col].mean(), inplace=True)
combi[col] = imp.fit_transform(combi[col].values.reshape(-1,1))
if combi[col].dtype=="datetime64[ns]":
combi[col].fillna(date, inplace=True)
combi
Solution of the problem
A:
This is not how for loops work in Python.
for col in combi:
if combi[col].dtype=="object":
# ...
col isn't an index into the collection you're iterating over (combi), it is the dereferenced element itself. Change all instances of combi[col] inside your for loop to col to correct. Example:
for col in combi:
if col.dtype=="object":
You didn't post all of your code so it's unclear if this will resolve the problem you're seeing, but it is certainly a step in the right direction.
| Found array with 0 feature(s) (shape=(10792, 0)) while a minimum of 1 is required | Hey, I am using Jupyter Notebook and doing machine learning.
I wrote this code but am getting this error and I don't know what the error is.
This is my code for reference:
from sklearn.experimental import enable_iterative_imputer
from sklearn.impute import IterativeImputer
imp = IterativeImputer(random_state=42)
date = pd.Timestamp('2200-01-01')
for col in combi:
if combi[col].dtype=="object":
combi[col].fillna("not listed", inplace=True)
if combi[col].dtype=="int":
#X[col].fillna(X[col].mode()[0], inplace=True)
combi[col].fillna(combi[col].mean(), inplace=True)
#combi[col] = combi[col].astype.int()
if combi[col].dtype=='float':
#X[col].fillna(X[col].mean(), inplace=True)
combi[col] = imp.fit_transform(combi[col].values.reshape(-1,1))
if combi[col].dtype=="datetime64[ns]":
combi[col].fillna(date, inplace=True)
combi
Solution of the problem
| [
"This is not how for loops work in Python.\nfor col in combi:\n if combi[col].dtype==\"object\":\n # ...\n\ncol isn't an index into the collection you're iterating over (combi), it is the dereferenced element itself. Change all instances of combi[col] inside your for loop to col to correct. Example:\nfor col in combi:\n if col.dtype==\"object\":\n\nYou didn't post all of your code so it's unclear if this will resolved the problem you're seeing, but it is certainly a step in the right direction.\n"
] | [
0
] | [] | [] | [
"jupyter_notebook",
"kaggle",
"python",
"scikit_learn",
"seaborn"
] | stackoverflow_0074644908_jupyter_notebook_kaggle_python_scikit_learn_seaborn.txt |
Q:
Random Numbers in Excel into percentage
Hi, my code works fine, but is there any way to print how many times the numbers 1-6 came up, as a percentage?
I haven't tried anything yet.
import pandas as pd
import random
data = [random.randint(0,6) for _ in range(10)]
df = pd.DataFrame(data)
print(df)
df.to_excel(r'H:\Grade10\Cs\Mir Hussain 12.00.00 3.xlsx', index=False)
A:
So from your data you could do:
import random
from collections import Counter
data = [random.randint(0,6) for _ in range(10)]
total_data = [data]
frequency = Counter(data)
number_elements = len(data)
total_data.append(list((frequency[item] / number_elements)*100 if item != 0 else '' for item in total_data[0]))
df = pd.DataFrame(total_data)
print(df)
df.to_excel(r'H:\Grade10\Cs\Mir Hussain 12.00.00 3.xlsx', index=False)
Item frequency count in Python
A:
import pandas as pd
import random
data = [random.randint(0,6) for _ in range(10)]
df = pd.DataFrame(data)
df = df.value_counts(normalize=True) # this line makes the percentages
df.to_excel(r'H:\Grade10\Cs\Mir Hussain 12.00.00 3.xlsx', index=False)
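Since normalize=True returns fractions, multiplying by 100 turns them into actual percentages — i.e. the line above could be (an assumption about the desired output):
df = df.value_counts(normalize=True) * 100  # e.g. 0.3 becomes 30.0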
| Random Numbers in Excel into percentage | Hi my code works fine but is there any way to print how many times numbers 1-6 were said into a percentage
I haven't tried anything yet.
import pandas as pd
import random
data = [random.randint(0,6) for _ in range(10)]
df = pd.DataFrame(data)
print(df)
df.to_excel(r'H:\Grade10\Cs\Mir Hussain 12.00.00 3.xlsx', index=False)
| [
"So from your data you could do:\nimport random\nfrom collections import Counter\n\n\ndata = [random.randint(0,6) for _ in range(10)]\ntotal_data = [data]\n\nfrequency = Counter(data)\nnumber_elements = len(data)\n\ntotal_data.append(list((frequency[item] / number_elements)*100 if item != 0 else '' for item in total_data[0]))\n\ndf = pd.DataFrame(total_data)\n\nprint(df)\n\ndf.to_excel(r'H:\\Grade10\\Cs\\Mir Hussain 12.00.00 3.xlsx', index=False)\n\nItem frequency count in Python\n",
"import pandas as pd\nimport random\n\ndata = [random.randint(0,6) for _ in range(10)]\n\ndf = pd.DataFrame(data)\n\ndf = df.value_counts(normalize=True) # this line makes the percentages\n\ndf.to_excel(r'H:\\Grade10\\Cs\\Mir Hussain 12.00.00 3.xlsx', index=False)\n\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074644798_python.txt |
Q:
Adding floats precision
First time encountering floating point arithmetic.
How can I add:
0.4047617913405519 + 250459325658972.0
and choose my presicion?
I get
250459325658972.4
But I want at least
250459325658972.405
Why is Python doing that? Any further resources?
A:
I think what you are looking for is the Decimal module:
>>> from decimal import *
>>> getcontext().prec = 6
>>> Decimal(1) / Decimal(7)
Decimal('0.142857')
>>> getcontext().prec = 28
>>> Decimal(1) / Decimal(7)
Decimal('0.1428571428571428571428571429')
It gives complete control over the precision of your operations as long as you use Decimal objects.
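Applied to the numbers from the question — passing the values as strings so they are not rounded to float first, and raising the precision enough to hold every digit of the sum:
from decimal import Decimal, getcontext
getcontext().prec = 40
total = Decimal('0.4047617913405519') + Decimal('250459325658972.0')
print(total)  # 250459325658972.4047617913405519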
| Adding floats precision | First time encountering floating point arithmetic.
How can I add:
0.4047617913405519 + 250459325658972.0
and choose my precision?
I get
250459325658972.4
But I want at least
250459325658972.405
Why is Python doing that? Any further resources?
| [
"I think what you are looking for is the Decimal module:\n>>> from decimal import *\n>>> getcontext().prec = 6\n>>> Decimal(1) / Decimal(7)\nDecimal('0.142857')\n>>> getcontext().prec = 28\n>>> Decimal(1) / Decimal(7)\nDecimal('0.1428571428571428571428571429')\n\nIt gives complete control on the precision of your operations as long as you use Decimal objects.\n"
] | [
1
] | [
"Use this\nprint(\"{0:.4f}\".format(250459325658972.0 +0.4047617913405519))\n\n"
] | [
-2
] | [
"floating_point",
"python"
] | stackoverflow_0074644755_floating_point_python.txt |
Q:
ANACONDA navigator cannot launch-from win32com.shell import shellcon, shell
I have downloaded Anaconda (Anaconda3-2020.02-Windows-x86) and installed it. However, I found that I cannot launch the Anaconda Navigator, so I tried using the command line and got this feedback.
from win32com.shell import shellcon, shell
ImportError: DLL load failed: The specified module could not be found.
(base) C:\WINDOWS\system32>Anaconda -navigator
Traceback (most recent call last):
File "C:\Users\aaron.wu\Anaconda3\Scripts\anaconda-script.py", line 6, in <module>
from binstar_client.scripts.cli import main
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\__init__.py", line 17, in <module>
from .utils import compute_hash, jencode, pv
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\__init__.py", line 17, in <module>
from .config import (get_server_api, dirs, load_token, store_token,
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\config.py", line 54, in <module>
USER_LOGDIR = dirs.user_log_dir
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 258, in user_log_dir
version=self.version)
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 205, in user_log_dir
path = user_data_dir(appname, appauthor, version); version = False
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed: The specified module could not be found.
It seems that the reason for the failure is the module win32com.shell. I tried to install win32com.shell using conda install win32com.shell and also reinstalled Anaconda. In the end, it did not work out. I am new to Anaconda and really need some help sorting this out! Thanks!
A:
Had exactly the same issue and solved it by installing the latest version of pywin32.
pip install pywin32==301
A:
I had the same issue:
from win32com.shell import shellcon, shell
ImportError: DLL load failed: The specified module could not be found.***
I fixed mine by clearing my environment variable called PYTHONPATH. I was messing with it and tried to add paths to it for another application (which turned out to not use PYTHONPATH).
I guess that was confusing Anaconda when starting up because it was importing incorrect libraries or something. I couldn't use Spyder, Anaconda Navigator, Jupyter Notebooks or anything other than Anaconda Prompt. Glad the issue wasn't too big!
A:
It is finally solved!
I uninstalled the 32-bit Anaconda.
Meanwhile, I noticed that in C:\Users\aaron.wu\AppData\Local\Programs
there was a folder named python which is not supposed to be there, and it included a pip folder.
I manually deleted the "python" folder, reinstalled the 64-bit version, and it worked out!
A:
It looks like the issue is related to a conflict in newer versions of the pywin32 package.
Try downgrading the package (I had 301 installed):
pip install --user --upgrade pywin32==228
Source: https://github.com/conda/conda/issues/11503#issuecomment-1159613779
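After switching the pywin32 version, a quick check from the same environment confirms whether the import is fixed (it should print a path instead of raising ImportError):
from win32com.shell import shellcon, shell
print(shell.__file__)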
| ANACONDA navigator cannot launch-from win32com.shell import shellcon, shell | I have downloaded the ANACONDA(Anaconda3-2020.02-Windows-x86) and installed. However, i found that i cannot lauch the ANACONDA navigator so i tried using the command line and got its feedback.
from win32com.shell import shellcon,shell
Import Error:DLL load failed: The specified moduld could not found.
***(base) C:\WINDOWS\system32>Anaconda -navigator
Traceback (most recent call last):
File "C:\Users\aaron.wu\Anaconda3\Scripts\anaconda-script.py", line 6, in <module>
from binstar_client.scripts.cli import main
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\__init__.py", line 17, in <module>
from .utils import compute_hash, jencode, pv
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\__init__.py", line 17, in <module>
from .config import (get_server_api, dirs, load_token, store_token,
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\config.py", line 54, in <module>
USER_LOGDIR = dirs.user_log_dir
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 258, in user_log_dir
version=self.version)
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 205, in user_log_dir
path = user_data_dir(appname, appauthor, version); version = False
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 67, in user_data_dir
path = os.path.join(_get_win_folder(const), appauthor, appname)
File "C:\Users\aaron.wu\Anaconda3\lib\site-packages\binstar_client\utils\appdirs.py", line 284, in _get_win_folder_with_pywin32
from win32com.shell import shellcon, shell
ImportError: DLL load failed: The specified module could not be found.***
It seems that the reason of failure is the module win32com.shell and i tried to install win32com.shell using conda install win32com.shell or reinstall the ANACONDA. In the end, it did not work out. I am new to ANACONDA and really need some help for sorting this out! Thanks!
| [
"Had exactly the same issue and solved it by installing the latest of win32com.\npip install pywin32==301\n\n",
"I had the same issue:\nfrom win32com.shell import shellcon, shell\n ImportError: DLL load failed: The specified module could not be found.***\n\nI fixed mine by clearing my environment variable called PYTHONPATH. I was messing with it and tried to add paths to it for another application (which turned out to not use PYTHONPATH).\nI guess that was confusing Anaconda when starting up because it was importing incorrect libraries or something. I couldn't use Spyder, Anaconda Navigator, Jupyter Notebooks or anything other than Anaconda Prompt. Glad the issue wasn't too big!\n",
"It finally solved!\ni uninstalled the 32-bit Anaconda\nMeanwhile, i noticed that in the C:\\Users\\aaron.wu\\AppData\\Local\\Programs\nThere is folder named python which is not supposed to be there and it includeds pip folder.\nI manually delete \"python\" folder and reinstalled 64-bit and it worked out!\n",
"It looks like the issue is related to a conflict in newer versions of the pywin32 package.\nTry downgrading the package (I had 301 installed):\npip install --user --upgrade pywin32==228\n\nSource: https://github.com/conda/conda/issues/11503#issuecomment-1159613779\n"
] | [
5,
4,
1,
0
] | [] | [] | [
"anaconda",
"python"
] | stackoverflow_0061382692_anaconda_python.txt |