| content (string, 85-101k chars) | title (string, 0-150 chars) | question (string, 15-48k chars) | answers (sequence) | answers_scores (sequence) | non_answers (sequence) | non_answers_scores (sequence) | tags (sequence) | name (string, 35-137 chars) |
---|---|---|---|---|---|---|---|---|
Q:
how to read specific file while passing multiple parameters in python?
I am trying to read a specific BigQuery table by passing an argument value to my function, so that depending on which argument values I pass, my function reads the corresponding table. I was able to write this in Python, but I feel it is not very elegant. I tried to use *args and **kwargs, and **kwargs seems like the better way to handle this, but my current code has a syntax error. Can anyone look at my code and suggest a possible way to improve this function?
my current attempt:
this is my first function to read file based on parameter value:
# !pip install pandas-gbq
import pandas as pd
def _readbqTble1(project_name='fb', dataset_name='iris', which_dim='lifespan', which_src='asia'):
    def str_join(*args):
        return '_'.join(map(str, args))
    table_name = str_join('inst_pic',{*which_src},{*which_dim})
    query = "select * from `{project_name}.{dataset_name}.{table_name}`".format(project_name=project_name,dataset_name=dataset_name,table_name=table_name)
    df = pd.read_gbq(query,project_id=project_name,dialect='standard')
    return df
but this line gives me an error when I call the function: table_name = str_join('inst_pic',{*which_src},{*which_dim}). To me the logic seems fine: *args should give us the parameter values so that we can concatenate the string, but I got a syntax error instead.
this is another approach using **kwargs:
def _readbqTble2(**kwargs):
    def str_join(*args):
        return '_'.join(map(str, args))
    for _, v in kwargs.items():
        table_name = str_join('inst_pic', v)
    for k, v in kwargs.items():
        query = "select * from `{k}.{k}.{table_name}`".format(kwargs,table_name=table_name)
        df = pd.read_gbq(query,project_id=project_name,dialect='standard')
        return df
if __name__=="__main__":
    _readbqTble2(project_name='fb', dataset_name='iris', which_dim='lifespan', which_src='asia')
    _readbqTble1
but this line query = "select * from {k}.{k}.{table_name}".format(kwargs,table_name=table_name) also gives me an error. I feel like **kwargs could be the better way here. Can anyone point out what went wrong? What is the best way to read data based on parameter values in Python?
use case:
I have those tables names listed in bigquery such as:
fb.iris.inst_pic_asia_lifespan
fb.iris.inst_pic_asia_others
fb.iris.inst_pic_europe_lifespan
fb.iris.inst_pic_europe_others
Basically, I want to read a specific table depending on the arguments that are passed. I want my function to be more parametric so that it can handle any table I want to read based on the argument values passed.
Can anyone suggest any elegant way of doing this?
A:
You are confused about how *args and **kwargs work in general. Please see this article to help demystify them. You can also see this SO answer: here
Working version of your code with inline comments:
def _readbqTble1(which_dim="lifespan", which_src="asia", **kwargs):
    def str_join(*args):
        return "_".join(map(str, args))

    # `*args` takes all arguments in the call-list, and puts it in a tuple
    # Thus, you shouldn't wrap your `which_src` in a set like `{which_src}`,
    # just simply pass the string in the call-list.
    table_name = str_join("inst_pic", which_src, which_dim)

    # `**kwargs` will "unpack" all kv's in the call-list to `_readbqTble1` as if
    # they were passed as k=v arguments to string.format below.
    query = "select * from `{project_name}.{dataset_name}.{table_name}`".format(
        table_name=table_name,
        **kwargs,
    )
    df = pd.read_gbq(query, project_id=kwargs.pop("project_name"), dialect="standard")
    return df
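For illustration, a call that matches the table names listed in the question would look like this (hypothetical values; it assumes pandas-gbq is installed and you are authenticated against the project):
# reads `fb.iris.inst_pic_asia_lifespan`
df = _readbqTble1(
    project_name="fb",
    dataset_name="iris",
    which_dim="lifespan",
    which_src="asia",
)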
| how to read specific file while passing multiple parameters in python? | I am trying to read specific big query table by passing argument value in my function, so depends on which argument value I pass, my function should read specific datatable. To do so, I was able to write this in python, but I feel like this is not quite elegant. I tried to use *args and **kwargs, but seems **kwargs could be better way to handle this but my current code has some syntax error. Can anyone look into my code and suggest possible way of improving this function?
my current attempt:
this is my first function to read file based on parameter value:
# !pip install pandas-gbq
import pandas as pd
def _readbqTble1(project_name='fb', dataset_name='iris', which_dim='lifespan', which_src='asia'):
    def str_join(*args):
        return '_'.join(map(str, args))
    table_name = str_join('inst_pic',{*which_src},{*which_dim})
    query = "select * from `{project_name}.{dataset_name}.{table_name}`".format(project_name=project_name,dataset_name=dataset_name,table_name=table_name)
    df = pd.read_gbq(query,project_id=project_name,dialect='standard')
    return df
but this line give me error when I call the function: table_name = str_join('inst_pic',{*which_src},{*which_dim}). to me, logic is fine, *args should give is parameter value, so that way we can concatenate the string, I got syntax error instead.
this is my another approach by using **kwargs:
def _readbqTble2(**kwargs):
    def str_join(*args):
        return '_'.join(map(str, args))
    for _, v in kwargs.items():
        table_name = str_join('inst_pic', v)
    for k, v in kwargs.items():
        query = "select * from `{k}.{k}.{table_name}`".format(kwargs,table_name=table_name)
        df = pd.read_gbq(query,project_id=project_name,dialect='standard')
        return df
if __name__=="__main__":
    _readbqTble2(project_name='fb', dataset_name='iris', which_dim='lifespan', which_src='asia')
    _readbqTble1
but this line query = "select * from {k}.{k}.{table_name}".format(kwargs,table_name=table_name) also give me error instead. I feel like **kwargs could be better way here. Can anyone point me out what went wrong here? what's the best way to read data based on parameter value in python?
use case:
I have those tables names listed in bigquery such as:
fb.iris.inst_pic_asia_lifespan
fb.iris.inst_pic_asia_others
fb.iris.inst_pic_europe_lifespan
fb.iris.inst_pic_europe_others
basically, I want to read specific table depends on the argument that we passed. I want my function be more parametric so can handle any file I want to read based on argument values that we passed.
Can anyone suggest any elegant way of doing this?
| [
"You have confusion over the how *args and **kwargs works in general. Please see this article to help you demystify. You can also see this SO answer: here\nWorking version of your code with inline comments:\ndef _readbqTble1(which_dim=\"lifespan\", which_src=\"asia\", **kwargs):\n def str_join(*args):\n return \"_\".join(map(str, args))\n\n # `*args` takes all arguments in the call-list, and puts it in a tuple\n # Thus, you shouldn't wrap your `which_src` in a set like `{which_src}`,\n # just simply pass the string in the call-list.\n table_name = str_join(\"inst_pic\", which_src, which_dim)\n \n # `**kwargs` will \"unpack\" all kv's in the call-list to `_readbqTble1` as if\n # they were passed as k=v arguments to string.format below.\n query = \"select * from `{project_name}.{dataset_name}.{table_name}`\".format(\n table_name=table_name,\n **kwargs,\n )\n df = pd.read_gbq(query, project_id=kwargs.pop(\"project_name\"), dialect=\"standard\")\n return df\n\n"
] | [
1
] | [] | [] | [
"dataframe",
"function",
"python"
] | stackoverflow_0074636896_dataframe_function_python.txt |
Q:
Django Rest Framework : RetrieveUpdateAPIView
I want to add multiple pieces of data to the database with RetrieveUpdateAPIView, and I am not able to add that data to the database. How can I update all of this data in a single PATCH request?
My view is like
class CompanyDetailViewAPI(RetrieveUpdateAPIView):
    queryset = Companies.objects.all()
    serializer_class = CompanyDetailsSerializer
    permission_classes = [IsAuthenticated]
    authentication_classes = [JWTAuthentication]
    lookup_field = "id"
and my serializer is like
class KeyPersonsSerializer(serializers.ModelSerializer):
    class Meta:
        model = KeyPersons
        fields = [
            "id",
            "person_name",
            "designation",
            "email",
            "contact_number",
        ]
class CompanyDetailsSerializer(serializers.ModelSerializer):
    key_person = KeyPersonsSerializer(many=True)

    class Meta:
        model = Companies
        fields = [
            "id",
            "address_line_1",
            "address_line_2",
            "city",
            "landmark",
            "state",
            "country",
            "pincode",
            "website",
            "person_name",
            "designation",
            "email",
            "phone",
            "ho_email",
            "ho_contact_number",
            "company_registration_no",
            "gst_no",
            "key_person",
            "company_corporate_presentation",
            "about_company",
            "company_established_date",
            "no_of_facilities",
            "no_of_employees",
        ]
I have tried this but I can't see any result.
How can I update all of this data in a single PATCH request?
A:
You can use a single Patch request to update the data in the database. You will need to use a custom serializer to include all the fields you want to update. Here is an example of how you could do this:
class CompanyDetailViewAPI(RetrieveUpdateAPIView):
    queryset = Companies.objects.all()
    serializer_class = CompanyDetailsSerializer
    permission_classes = [IsAuthenticated]
    authentication_classes = [JWTAuthentication]
    lookup_field = "id"

class CompanyDetailsSerializer(serializers.ModelSerializer):
    key_person = KeyPersonsSerializer(many=True)

    class Meta:
        model = Companies
        fields = [
            "id",
            "address_line_1",
            "address_line_2",
            "city",
            "landmark",
            "state",
            "country",
            "pincode",
            "website",
            "person_name",
            "designation",
            "email",
            "phone",
            "ho_email",
            "
| Django Rest Framework : RetrieveUpdateAPIView | I want to add multiple data to database with RetrieveUpdateAPIView and I am not able to add that data in database. How can I update this all date in single Patch method.
My view is like
class CompanyDetailViewAPI(RetrieveUpdateAPIView):
queryset = Companies.objects.all()
serializer_class = CompanyDetailsSerializer
permission_classes = [IsAuthenticated]
authentication_classes = [JWTAuthentication]
lookup_field = "id"
and my serializer is like
class KeyPersonsSerializer(serializers.ModelSerializer):
class Meta:
model = KeyPersons
fields = [
"id",
"person_name",
"designation",
"email",
"contact_number",
]
class CompanyDetailsSerializer(serializers.ModelSerializer):
key_person = KeyPersonsSerializer(many=True)
class Meta:
model = Companies
fields = [
"id",
"address_line_1",
"address_line_2",
"city",
"landmark",
"state",
"country",
"pincode",
"website",
"person_name",
"designation",
"email",
"phone",
"ho_email",
"ho_contact_number",
"company_registration_no",
"gst_no",
"key_person",
"company_corporate_presentation",
"about_company",
"company_established_date",
"no_of_facilities",
"no_of_employees",
]
I have tried this but I cant see any result.
How can I update this all date in single Patch method
| [
"You can use a single Patch request to update the data in the database. You will need to use a custom serializer to include all the fields you want to update. Here is an example of how you could do this:\nclass CompanyDetailViewAPI(RetrieveUpdateAPIView):\nqueryset = Companies.objects.all()\nserializer_class = CompanyDetailsSerializer\npermission_classes = [IsAuthenticated]\nauthentication_classes = [JWTAuthentication]\nlookup_field = \"id\"\n\nclass CompanyDetailsSerializer(serializers.ModelSerializer):\nkey_person = KeyPersonsSerializer(many=True)\nclass Meta:\n model = Companies\n fields = [\n \"id\",\n \"address_line_1\",\n \"address_line_2\",\n \"city\",\n \"landmark\",\n \"state\",\n \"country\",\n \"pincode\",\n \"website\",\n \"person_name\",\n \"designation\",\n \"email\",\n \"phone\",\n \"ho_email\",\n \"\n\n"
] | [
0
] | [] | [] | [
"django",
"django_rest_framework",
"mysql",
"pgadmin",
"python"
] | stackoverflow_0074628350_django_django_rest_framework_mysql_pgadmin_python.txt |
Q:
Hey, I've just started learning python. Can someone explain how this function is returning the value? I made this by following a tutorial
def raiseToPower(basNum, powNum):
    result = 1
    for index in range(powNum):
        result = result * basNum
    return result

print(raiseToPower(3, 3))
result: '27'
| Hey, I've just started learning python. Can someone explain how this function is returning the value? I made this by following a tutorial | def raiseToPower(basNum, powNum):
    result = 1
    for index in range(powNum):
        result = result * basNum
    return result

print(raiseToPower(3, 3))
result: '27'
| [] | [] | [
"def raiseToPower(basNum, powNum):\n\n result = 1\n \n for index in range(powNum): # here multiplying baseNum with itself powNum times\n result = result * basNum\n return result\n\n\nCalculating 3^3\n\nprint(raiseToPower(3, 3))\n27\n\n\nCalculating 2^10\n\nprint(raiseToPower(2, 10))\n1024\n\n"
] | [
-4
] | [
"python"
] | stackoverflow_0074637279_python.txt |
Q:
Why does `scipy.sparse.csr_matrix` broadcast multiplication but not subtraction?
I am trying to understand solutions to this question here, and while I can just reuse the code I would prefer to know what is happening before I do.
The question is about how to tile a scipy.sparse.csr_matrix object, and the top answer (by @user3357359) at the time of writing shows how to tile a single row of a matrix across multiple rows as:
from scipy.sparse import csr_matrix
sparse_row = csr_matrix([[0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0]])
repeat_number = 3
repeated_row_matrix = csr_matrix(np.ones([repeat_number,1])) * sparse_row
(I have added the sparse_row and repeat_number initialisation to help make things concrete).
If I now convert this to a dense matrix and print as so:
print(f"repeated_row_matrix.todense() = {repeated_row_matrix.todense()}")
This gives output:
repeated_row_matrix.todense() =
[[0 0 0 0 0 1 0 1 1 0 0 0]
[0 0 0 0 0 1 0 1 1 0 0 0]
[0 0 0 0 0 1 0 1 1 0 0 0]]
The operation on the right of the repeated_row_matrix assignment seems to me to be performing broadcasting. The original sparse_row has shape (1,12), the temporary matrix is a (3,1) matrix of ones, and the result is a (3,12) matrix. So far, this is similar behaviour as you would expect from numpy.array. However, if I try the same thing with the subtraction operator:
sparse_row = csr_matrix([[0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0]])
repeat_number = 3
repeated_row_matrix = csr_matrix(np.ones([repeat_number,1])) - sparse_row
print(f"repeated_row_matrix.todense() =\n{repeated_row_matrix.todense()}")
I get an error in the third line:
3 repeated_row_matrix = csr_matrix(np.ones([repeat_number,1])) - sparse_row
...
ValueError: inconsistent shapes
Is this intended behaviour? And if so, why?
I guess that a multiplication between two sparse K-vectors with n1 and n2 non-zeros respectively, would always have less than or equal to min(n1,n2) non-zeros. A subtraction would have in the worst case n1+n2 non-zeros but does this really explain why one behaviour is allowed and one is not.
I wish to perform subtraction of a single row vector from a matrix (for a sparse implementation of K-medoids I am playing with). To perform subtraction, I am creating a temporary sparse array which tiles the original row by using broadcasting with multiplication then I can subtract one array from another. I am sure there should be a better way, but I don't see it.
Also, @"C.J. Jackson" replies in the comments that a better way to construct the tiling is:
sparse_row[np.zeros(repeat_number),:]
This works, but I have no idea why or what functionality is being employed. Can someone point me to the documentation? If sparse_row were a numpy.array then this does not cause tiling.
Thanks in advance.
A:
With dense arrays, broadcasted multiplication and matrix multiplication can do the same thing for special cases. For example with 2 1d arrays
In [3]: x = np.arange(3); y = np.arange(5)
broadcasted:
In [4]: x[:,None]*y # (3,1)*(5,) => (3,1)*(1,5) => (3,5)
Out[4]:
array([[0, 0, 0, 0, 0],
[0, 1, 2, 3, 4],
[0, 2, 4, 6, 8]])
dot/matrix multiplication of a (3,1) and (1,5). This is not broadcasting. It is doing sum-of-products on the shared size 1 dimension:
In [5]: x[:,None]@y[None,:]
Out[5]:
array([[0, 0, 0, 0, 0],
[0, 1, 2, 3, 4],
[0, 2, 4, 6, 8]])
Make sparse matrices for these:
In [6]: Mx = sparse.csr_matrix(x);My = sparse.csr_matrix(y)
In [11]: Mx
Out[11]:
<1x3 sparse matrix of type '<class 'numpy.intc'>'
with 2 stored elements in Compressed Sparse Row format>
In [12]: My
Out[12]:
<1x5 sparse matrix of type '<class 'numpy.intc'>'
with 4 stored elements in Compressed Sparse Row format>
Note the shapes (1,3) and (1,5). To do the matrix multiplication, the first needs to be transposed to (3,1):
In [13]: Mx.T@My
Out[13]:
<3x5 sparse matrix of type '<class 'numpy.intc'>'
with 8 stored elements in Compressed Sparse Column format>
In [14]: _.A
Out[14]:
array([[0, 0, 0, 0, 0],
[0, 1, 2, 3, 4],
[0, 2, 4, 6, 8]], dtype=int32)
Mx.T*My works the same way, because sparse is modeled on np.matrix (and MATLAB), where * is matrix multiplication.
Element-wise multiplication works in the same way as for dense:
In [20]: Mx.T.multiply(My)
Out[20]:
<3x5 sparse matrix of type '<class 'numpy.intc'>'
with 8 stored elements in Compressed Sparse Column format>
I'm a little surprised; it does look a bit like broadcasting, though it doesn't involve any automatic None dimensions (sparse is always 2d). Funny, I can't find an element-wise multiplication for the dense matrix.
But as you found Mx.T-My raises the inconsistent shapes error. The sparse developers chose not to implement this kind of subtraction (or addition). In general addition or subtraction of sparse matrices is a problem. It can easily result in a dense matrix, if you add something to all elements, including the "implied" 0s.
In [41]: Mx+1
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Input In [41], in <cell line: 1>()
----> 1 Mx+1
File ~\anaconda3\lib\site-packages\scipy\sparse\base.py:410, in spmatrix.__add__(self, other)
408 return self.copy()
409 # Now we would add this scalar to every element.
--> 410 raise NotImplementedError('adding a nonzero scalar to a '
411 'sparse matrix is not supported')
412 elif isspmatrix(other):
413 if other.shape != self.shape:
NotImplementedError: adding a nonzero scalar to a sparse matrix is not supported
To replicate the broadcasted subtraction:
In [54]: x[:,None]-y
Out[54]:
array([[ 0, -1, -2, -3, -4],
[ 1, 0, -1, -2, -3],
[ 2, 1, 0, -1, -2]])
We have to 'tile' the matrices. Your link shows some options (including my answer). Another option is to vstack several instances of the matrices. sparse.vstack actually makes a new matrix, using the coo matrix format:
In [55]: Mxx = sparse.vstack([Mx]*5);Myy = sparse.vstack([My,My,My])
In [56]: Mxx,Myy
Out[56]:
(<5x3 sparse matrix of type '<class 'numpy.intc'>'
with 10 stored elements in Compressed Sparse Row format>,
<3x5 sparse matrix of type '<class 'numpy.intc'>'
with 12 stored elements in Compressed Sparse Row format>)
Now two (3,5) matrices can be added or subtracted:
In [57]: Mxx.T-Myy
Out[57]:
<3x5 sparse matrix of type '<class 'numpy.intc'>'
with 12 stored elements in Compressed Sparse Column format>
In [58]: _.A
Out[58]:
array([[ 0, -1, -2, -3, -4],
[ 1, 0, -1, -2, -3],
[ 2, 1, 0, -1, -2]], dtype=int32)
In the linear algebra world (esp. finite difference and finite elements) where sparse math was developed, matrix multiplication was important. Other math that operates just on the nonzero elements is fairly easy. But operations that change sparsity are (relatively) expensive. New values, and submatrices are best added to the coo inputs. coo duplicates are added when converted to csr. So addition/subtraction of whole matrices is (intentionally) limited.
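For the row-minus-matrix case that motivated the question, the same vstack idea applies directly; a small sketch (the matrix here is just a stand-in for whatever (n,12) sparse matrix you actually have):
import numpy as np
from scipy import sparse

sparse_row = sparse.csr_matrix([[0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0]])
M = sparse.csr_matrix(np.arange(36).reshape(3, 12))  # stand-in (3,12) matrix

# tile the row to M's shape, then subtract elementwise
tiled = sparse.vstack([sparse_row] * M.shape[0]).tocsr()
diff = M - tiled
print(diff.toarray())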
| Why does `scipy.sparse.csr_matrix` broadcast multiplication but not subtraction? | I am trying to understand solutions to this question here, and while I can just reuse the code I would prefer to know what is happening before I do.
The question is about how to tile a scipy.sparse.csr_matrix object, and the top answer (by @user3357359) at the time of writing shows how to tile a single row of a matrix across multiple rows as:
from scipy.sparse import csr_matrix
sparse_row = csr_matrix([[0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0]])
repeat_number = 3
repeated_row_matrix = csr_matrix(np.ones([repeat_number,1])) * sparse_row
(I have added the sparse_row and repeat_number initialisation to help make things concrete).
If I now convert this to a dense matrix and print as so:
print(f"repeated_row_matrix.todense() = {repeated_row_matrix.todense()}")
This gives output:
repeated_row_matrix.todense() =
[[0 0 0 0 0 1 0 1 1 0 0 0]
[0 0 0 0 0 1 0 1 1 0 0 0]
[0 0 0 0 0 1 0 1 1 0 0 0]]
The operation on the right of the repeated_row_matrix assignment seems to me to be performing broadcasting. The original sparse_row has shape (1,12), the temporary matrix is a (3,1) matrix of ones, and the result is a (3,12) matrix. So far, this is similar behaviour as you would expect from numpy.array. However, if I try the same thing with the subtraction operator:
sparse_row = csr_matrix([[0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 0]])
repeat_number = 3
repeated_row_matrix = csr_matrix(np.ones([repeat_number,1])) - sparse_row
print(f"repeated_row_matrix.todense() =\n{repeated_row_matrix.todense()}")
I get an error in the third line:
3 repeated_row_matrix = csr_matrix(np.ones([repeat_number,1])) - sparse_row
...
ValueError: inconsistent shapes
Is this intended behaviour? And if so, why?
I guess that a multiplication between two sparse K-vectors with n1 and n2 non-zeros respectively, would always have less than or equal to min(n1,n2) non-zeros. A subtraction would have in the worst case n1+n2 non-zeros but does this really explain why one behaviour is allowed and one is not.
I wish to perform subtraction of a single row vector from a matrix (for a sparse implementation of K-medoids I am playing with). To perform subtraction, I am creating a temporary sparse array which tiles the original row by using broadcasting with multiplication then I can subtract one array from another. I am sure there should be a better way, but I don't see it.
Also, @"C.J. Jackson" replies in the comments that a better way to construct the tiling is:
sparse_row[np.zeros(repeat_number),:]
This works, but I have no idea why or what functionality is being employed. Can someone point me to the documentation? If sparse_row were a numpy.array then this does not cause tiling.
Thanks in advance.
| [
"With dense arrays, broadcasted multiplication and matrix multiplication can do the same thing for special cases. For example with 2 1d arrays\nIn [3]: x = np.arange(3); y = np.arange(5)\n\nbroadcasted:\nIn [4]: x[:,None]*y # (3,1)*(5,) => (3,1)*(1,5) => (3,5)\nOut[4]: \narray([[0, 0, 0, 0, 0],\n [0, 1, 2, 3, 4],\n [0, 2, 4, 6, 8]])\n\ndot/matrix multiplication of a (3,1) and (1,5). This is not broadcasting. It is doing sum-of-products on the shared size 1 dimension:\nIn [5]: x[:,None]@y[None,:]\nOut[5]: \narray([[0, 0, 0, 0, 0],\n [0, 1, 2, 3, 4],\n [0, 2, 4, 6, 8]])\n\nMake sparse matrices for these:\nIn [6]: Mx = sparse.csr_matrix(x);My = sparse.csr_matrix(y) \nIn [11]: Mx\nOut[11]: \n<1x3 sparse matrix of type '<class 'numpy.intc'>'\n with 2 stored elements in Compressed Sparse Row format> \nIn [12]: My\nOut[12]: \n<1x5 sparse matrix of type '<class 'numpy.intc'>'\n with 4 stored elements in Compressed Sparse Row format>\n\nNote the shapes (1,3) and (1,5). To do the matrix multiplication, the first needs to be transposed to (3,1):\nIn [13]: Mx.T@My\nOut[13]: \n<3x5 sparse matrix of type '<class 'numpy.intc'>'\n with 8 stored elements in Compressed Sparse Column format>\n\nIn [14]: _.A\nOut[14]: \narray([[0, 0, 0, 0, 0],\n [0, 1, 2, 3, 4],\n [0, 2, 4, 6, 8]], dtype=int32)\n\nMx.T*My works the same way, because sparse is modeled on np.matrix (and MATLAB), where * is matrix multiplication.\nElement-wise multiplication works in the same way as for dense:\nIn [20]: Mx.T.multiply(My)\nOut[20]: \n<3x5 sparse matrix of type '<class 'numpy.intc'>'\n with 8 stored elements in Compressed Sparse Column format>\n\nI'm a little surprised, it does look at bit like broadcasting, though it doesn't involve any automatic None dimensions (sparse is always 2d). Funny, I can't find a element-wise multiplication for the dense matix.\nBut as you found Mx.T-My raises the inconsistent shapes error. The sparse developers chose not to implement this kind of subtraction (or addition). In general addition or subtraction of sparse matrices is a problem. It can easily result in a dense matrix, if you add something to all elements, including the \"implied\" 0s.\nIn [41]: Mx+1\n---------------------------------------------------------------------------\nNotImplementedError Traceback (most recent call last)\nInput In [41], in <cell line: 1>()\n----> 1 Mx+1\n\nFile ~\\anaconda3\\lib\\site-packages\\scipy\\sparse\\base.py:410, in spmatrix.__add__(self, other)\n 408 return self.copy()\n 409 # Now we would add this scalar to every element.\n--> 410 raise NotImplementedError('adding a nonzero scalar to a '\n 411 'sparse matrix is not supported')\n 412 elif isspmatrix(other):\n 413 if other.shape != self.shape:\n\nNotImplementedError: adding a nonzero scalar to a sparse matrix is not supported\n\nTo replicate the broadcasted subtraction:\nIn [54]: x[:,None]-y\nOut[54]: \narray([[ 0, -1, -2, -3, -4],\n [ 1, 0, -1, -2, -3],\n [ 2, 1, 0, -1, -2]])\n\nWe have to 'tile' the matrices. Your link shows some options (including my answer). Another option is to vstack several instances of the matrices. 
sparse.vstack actually makes a new matrix, using the coo matrix format:\nIn [55]: Mxx = sparse.vstack([Mx]*5);Myy = sparse.vstack([My,My,My]) \nIn [56]: Mxx,Myy\nOut[56]: \n(<5x3 sparse matrix of type '<class 'numpy.intc'>'\n with 10 stored elements in Compressed Sparse Row format>,\n <3x5 sparse matrix of type '<class 'numpy.intc'>'\n with 12 stored elements in Compressed Sparse Row format>)\n\nNow two (3,5) matrices can be added or subtracted:\nIn [57]: Mxx.T-Myy\nOut[57]: \n<3x5 sparse matrix of type '<class 'numpy.intc'>'\n with 12 stored elements in Compressed Sparse Column format>\n\nIn [58]: _.A\nOut[58]: \narray([[ 0, -1, -2, -3, -4],\n [ 1, 0, -1, -2, -3],\n [ 2, 1, 0, -1, -2]], dtype=int32)\n\nIn the linear algebra world (esp. finite difference and finite elements) where sparse math was developed, matrix multiplication was important. Other math that operates just on the nonzero elements is fairly easy. But operations that change sparsity are (relatively) expensive. New values, and submatrices are best added to the coo inputs. coo duplicates are added when converted to csr. So addition/subtraction of whole matrices is (intentionally) limited.\n"
] | [
0
] | [] | [] | [
"python",
"scipy",
"sparse_matrix"
] | stackoverflow_0074634985_python_scipy_sparse_matrix.txt |
Q:
To iterate through a .txt table in python
Student ID, Assignment, Score
123456, Zany Text, 100
123456, Magic 9 Ball, 60
123456, Nim Grab, 80
123456, Dungeon Crawl, 78
123456, Ultimate TODO List, 90
654321, Zany Text, 48
This is the content of the .txt file. I need to iterate through this text and get the average Score of all the students (and do some other stuff, but I'll figure that out later).
I tried putting all the content in a 2d tuple, which worked, but I couldn't reference any of the elements in the list inside the list: print(tup[1][1]) gave the error list index out of range but print(tup[1]) did not.
my code:
infile=open("scores.txt", "r")
total=0
i=0
tup=[]
for i in range (7):
    tup.append([])
tup[0].append(infile.readline().split(","))
b=''
for i in range(1, len(tup)):
    tup[i].append(infile.readline().split(","))
for i in range(0,len(tup)):
    for j in range(len(tup[i])):
        print(tup[i][j], end=' ')
    print()
A:
It should be something like:
# read the file
content = []
with open("scores.txt", "r") as infile:
    for line in infile:
        if not line.isspace():  # skip empty lines
            content.append(line.strip().split(','))

# print the content
for row in content:
    for element in row:
        print(element, end=' ')
    print()

mean = sum([float(x[-1]) for x in content[1:]])/(len(content)-1)
print(mean)
So you have made several mistakes here:
You open the file without closing it later
You predefine length of your file - this is not needed and not healthy
You separate first line read from the file and the rest - I understand why you would like to do that. But as you treat header and the rest of the content the same way there is no actual need to do that. It is better to read everything the same way and treat the header later.
Finally, to calculate mean it is better to create a list comprehension of the numerical value of the last column (hence [-1] index) starting from the second row (hence the [1:])
One small thing. When reading files, there is always the need to check valid inputs. I've added the treatment for trailing empty lines as an example.
A:
You don't need to "pre-allocate" lists in python: if you want to add new elements, simply call tup.append.
You'll want to strip off the end '\n' from .readline() using strip.
You can iterate over elements of a list using for item in list:
infile=open("scores.txt", "r")
table = []
table.append(infile.readline().strip().split(","))
for _ in range(16):
    table.append(infile.readline().strip().split(","))
for row in table:
    for col in row:
        print(col, end=' ')
    print()
total = sum(int(row[-1]) for row in table[1:] if row[-1])
print(total)
A:
The file appears to be in CSV format. Change your file from .txt to .csv . Now you can use csv library to get your desired outcome.
import csv

# opening the CSV file
with open('score.csv', mode='r') as file:
    # reading the CSV file
    csvFile = csv.reader(file)
    # removing Headers from the file
    header = next(csvFile)
    # displaying the contents of the CSV file
    avg_score = 0
    total_score = 0
    total_row = 0
    for lines in csvFile:
        total_score += int(lines[2])
        total_row += 1
    avg_score = total_score/total_row
    print(avg_score)
Hope this helps. Happy Coding :)
| To iterate through a .txt table in python | Student ID, Assignment, Score
123456, Zany Text, 100
123456, Magic 9 Ball, 60
123456, Nim Grab, 80
123456, Dungeon Crawl, 78
123456, Ultimate TODO List, 90
654321, Zany Text, 48
This is the content of the .txt file, I need to iterate through this text and get the average Score of all the students (and do some other stuff, but ill figure that out later).
Tried putting all the content in a 2d tuple, which worked but i couldnt reference any of the elements in the list inside the list - print(tup[1][1]) gave error list index out of range but print(tup[1]) did not.
my code-
infile=open("scores.txt", "r")
total=0
i=0
tup=[]
for i in range (7):
    tup.append([])
tup[0].append(infile.readline().split(","))
b=''
for i in range(1, len(tup)):
    tup[i].append(infile.readline().split(","))
for i in range(0,len(tup)):
    for j in range(len(tup[i])):
        print(tup[i][j], end=' ')
    print()
print()
| [
"It should be something like:\n# read the file\ncontent = []\nwith open(\"scores.txt\", \"r\") as infile:\n for line in infile:\n if not line.isspace(): # skip empty lines\n content.append(line.strip().split(','))\n\n# print the content \nfor row in content:\n for element in row:\n print(el, end=' ')\n print()\n\nmean = sum([float(x[-1]) for x in content[1:]])/(len(content)-1)\nprint(mean)\n\nSo you have made a several mistakes here:\n\nYou open the file without closing it later\nYou predefine length of your file - this is not needed and not healthy\nYou separate first line read from the file and the rest - I understand why you would like to do that. But as you treat header and the rest of the content the same way there is no actual need to do that. It is better to read everything the same way and treat the header later.\nFinally, to calculate mean it is better to create a list comprehension of the numerical value of the last column (hence [-1] index) starting from the second row (hence the [1:])\nOne small thing. When reading files, there is always the need to check valid inputs. I've added the treatment for trailing empty lines as an example.\n\n",
"\nYou don't need to \"pre-allocate\" lists in python: if you want to add new elements, simply call tup.append.\nYou'll want to strip off the end '\\n' from .readline() using strip.\nYou can iterate over elements of a list using for item in list:\n\n\ninfile=open(\"scores.txt\", \"r\")\ntable = []\ntable.append(infile.readline().strip().split(\",\"))\nfor _ in range(16):\n table.append(infile.readline().strip().split(\",\"))\nfor row in table:\n for col in row:\n print(col, end=' ')\n print()\ntotal = sum(int(row[-1]) for row in table[1:] if row[-1])\nprint(total)\n\n",
"The file appears to be in CSV format. Change your file from .txt to .csv . Now you can use csv library to get your desired outcome.\nimport csv\n\n# opening the CSV file\nwith open('score.csv', mode='r')as file:\n\n # reading the CSV file\n csvFile = csv.reader(file)\n # removing Headers from the file\n header = next(csvFile)\n # displaying the contents of the CSV file\n\n avg_score = 0\n total_score = 0\n total_row = 0\n\n for lines in csvFile:\n total_score += int(lines[2])\n total_row += 1\n\n avg_score = total_score/total_row\n print(avg_score)\n\nHope this helps. Happy Coding :)\n"
] | [
1,
0,
0
] | [] | [] | [
"arrays",
"list",
"multidimensional_array",
"python",
"tuples"
] | stackoverflow_0074637025_arrays_list_multidimensional_array_python_tuples.txt |
Q:
PyCharm terminal and project interpreter do not match
I have developed a project with PyCharm using Python 3.7. Now I want to upgrade the project to Python 3.10.
I managed to install Python 3.10 and select it as the project interpreter but the terminal in PyCharm is still using 3.7. Why is that? How to use 3.10 in all cases?
*Additional question is whether it is possible to transfer the virtual environment of the project in Python 3.7 to the new interpreter Python 3.10 somehow.
A:
Case 1. Starting with a single fresh project
The interpreter that gets activated when you open the terminal (or a new terminal tab) is the one chosen in File > Settings > Project > Python Interpreter provided you've chosen File > Settings > Tools > Terminal > Activate virtualenv.
If you start with a fresh project the value is controlled by the value THE_INTERPRETER_NAME in your project.iml file:
<?xml version="1.0" encoding="UTF-8"?>
<module type="PYTHON_MODULE" version="4">
  <component name="NewModuleRootManager">
    <content url="file://$MODULE_DIR$" />
    <orderEntry type="jdk" jdkName="THE_INTERPRETER_NAME" jdkType="Python SDK" />
    <orderEntry type="sourceFolder" forTests="false" />
  </component>
</module>
Case 2. One project having other projects attached in the same window
The problem is if you have a complex project with several projects open in the same window and one primary project. In that case you can configure different interpreters for each project, I tried it out and the terminal activates the interpreter set for the last project in the list, the controlling variable is set in misc.xml in the .idea folder of the primary project.
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ProjectRootManager" version="2" project-jdk-name="Python 3.9 (delete_this_venv)" project-jdk-type="Python SDK" />
  <component name="PyCharmProfessionalAdvertiser">
    <option name="shown" value="true" />
  </component>
  <component name="PythonCompatibilityInspectionAdvertiser">
    <option name="version" value="3" />
  </component>
</project>
I went through the settings but there's no other option to configure this behavior beyond what I explained.
| PyCharm terminal and project interpreter do not match | I have developed a project with PyCharm using Python 3.7. Now I want to upgrade the project to Python 3.10.
I managed to install Python 3.10 and select it as the project interpreter but the terminal in PyCharm is still using 3.7. Why is that? How to use 3.10 in all cases?
*Additional question is whether it is possible to transfer the virtual environment of the project in Python 3.7 to the new interpreter Python 3.10 somehow.
| [
"Case 1. Starting with a single fresh project\nThe interpreter that gets activated when you open the terminal (or a new terminal tab) is the one chosen in File > Settings > Project > Python Interpreter provided you've chosen File > Settings > Tools > Terminal > Activate virtualenv.\nIf you start with a fresh project the value is controlled by the value THE_INTERPRETER_NAME in your project.iml file:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<module type=\"PYTHON_MODULE\" version=\"4\">\n <component name=\"NewModuleRootManager\">\n <content url=\"file://$MODULE_DIR$\" />\n <orderEntry type=\"jdk\" jdkName=\"THE_INTERPRETER_NAME\" jdkType=\"Python SDK\" />\n <orderEntry type=\"sourceFolder\" forTests=\"false\" />\n </component>\n</module>\n\nCase 2. One project having other projects attached in the same window\nThe problem is if you have a complex project with several projects open in the same window and one primary project. In that case you can configure different interpreters for each project, I tried it out and the terminal activates the interpreter set for the last project in the list, the controlling variable is set in misc.xml in the .idea folder of the primary project.\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project version=\"4\">\n <component name=\"ProjectRootManager\" version=\"2\" project-jdk-name=\"Python 3.9 (delete_this_venv)\" project-jdk-type=\"Python SDK\" />\n <component name=\"PyCharmProfessionalAdvertiser\">\n <option name=\"shown\" value=\"true\" />\n </component>\n <component name=\"PythonCompatibilityInspectionAdvertiser\">\n <option name=\"version\" value=\"3\" />\n </component>\n</project>\n\n\nI went through the settings but there's no other option to configure this behavior beyond what I explained.\n"
] | [
1
] | [] | [] | [
"interpreter",
"pycharm",
"python"
] | stackoverflow_0074605697_interpreter_pycharm_python.txt |
Q:
Automated Market Makers - Liquidity Pool - Question about the calculation
Esteemed,
I would like to code all the steps for the correct calculation of a pool balance according to Uniswap V2 logic.
Anyone who knew how to help can write in any programming language (Python, Javascript etc.), in this example I used R.
The balancing process for a liquidity pool can be seen here: example1 and example2.
However, it is not clear how to do it taking into account the Uniswap fee which is 0.3% for every trade.
library(tidyverse)
##### INITIAL PARAMETERS
#Uniswap charges users a flat 0.30% fee for
#every trade that takes place on the platform and
#automatically sends it to a liquidity reserve
Uniswap.fee <- 0.30 / 100
ETH.initial.price <- 100
ETH.pool.price <- 100
BNT.initial.price <- 1
BNT.pool.price <- 1
##### INITIAL SITUATION
BNT.units <- 1000
BNT.total <- BNT.units * BNT.initial.price
ETH.units <- BNT.total / ETH.initial.price
ETH.total <- ETH.units * ETH.initial.price
Pool.Value <- BNT.total + ETH.total
Pool.DF <- data.frame(Symbol = c("BNT", "ETH"), Share = c(BNT.total,ETH.total))
ggplot(Pool.DF, aes(x = Symbol, y=Share, fill=Symbol)) +
geom_bar(width = 1, position = "dodge", stat="identity") + labs(title="Initial POOL")
##### FINAL SITUATION
ETH.final.price <- 120
ETH.pool.price <- 100
BNT.final.price <- 1
BNT.pool.price <- 1
##### Imbalanced.Pool
Imbalanced.Pool <- data.frame(Symbol = c("BNT", "ETH"),
Share = c(BNT.total,ETH.units * ETH.final.price ))
ggplot(Imbalanced.Pool, aes(x = Symbol, y=Share, fill=Symbol)) +
geom_bar(width = 1, position = "dodge", stat="identity")+ labs(title="Imbalanced POOL: ETH valorization")
#### need to balance:
...
Now I don't know how to continue the steps to correctly balance and obtain the impermanent loss and arb profit values.
Thank you very much,
A:
For DeFi protocols using automated market makers, we implement a pool of a pair of assets (for example, BTC-USDT), and we price the two assets simply with:
b * u = constant
Here b is the amount of BTC in the pool, and u is the amount of USDT. Besides, the constant is often written as K in many papers.
Now assume that you want to swap an amount of b' BTC to USDT. You give the b' BTC to the DeFi protocol, and the protocol finds that it should reduce the amount of USDT (which will be given to you) in order to keep b * u = constant. Then with simple calculations, the pool decides to give you u' USDT so that:
(b+b')*(u-u') = constant
With this transaction (assuming that no 0.3% fee is accrued), you obtain u' USDT by giving b' BTC.
Then it comes to the 0.3% fee. When you give b' BTC to the pool, the pool acts as if you give only the deducted amount b'' == 99.7% * b'. Then the pool computes u'' so that
(b+b'')*(u-u'') = constant
In actual computation there are always errors due to the limited number of decimals (you can only receive 0.1272 USDT instead of 0.127272727272...). Therefore, the asserted constant can slowly increase as more and more swap transactions are executed. This slowly increases the volume of the liquidity pool. Note that the accrued 0.3% fee may not serve as part of the liquidity pool.
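As a small illustration of that fee-adjusted constant-product math, here is a Python sketch mirroring the formulas above; the function name and the example reserves are mine, not taken from any particular protocol:
def get_amount_out(amount_in, reserve_in, reserve_out, fee=0.003):
    # charge the fee on the input: b'' = 99.7% * b'
    amount_in_after_fee = amount_in * (1 - fee)
    new_reserve_in = reserve_in + amount_in_after_fee  # b + b''
    # keep (b + b'') * (u - u'') = b * u and solve for u''
    return reserve_out - (reserve_in * reserve_out) / new_reserve_in

# e.g. swap 1 BTC into a pool holding 100 BTC and 1,600,000 USDT
print(get_amount_out(1.0, 100.0, 1_600_000.0))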
For code, I would use the Flamingo contracts as an example.
First you initialize a swap through FlamingoSwapRouter. You give your sold tokens to the router
https://github.com/flamingo-finance/flamingo-contract-swap/blob/74e61f8406f9e8ededed72f9cb7e0139091ae17c/Swap/flamingo-contract-swap/FlamingoSwapRouter/FlamingoSwapRouterContract.cs#L237
Then the router SafeTransfers your sold token to its swapping pairContract, and calls the swap method of the swapping contract to give you the token you buy.
https://github.com/flamingo-finance/flamingo-contract-swap/blob/74e61f8406f9e8ededed72f9cb7e0139091ae17c/Swap/flamingo-contract-swap/FlamingoSwapPair/FlamingoSwapPairContract.cs#L140
You can see the code var balance0Adjusted = balance0 * 1000 - amount0In * 3; which adjusts the assumed amount of token in the liquidity pool. There is a slightly exceeding amount of tokens in the liquidity pool, which are accrued fees, not part of the liquidity.
A:
Across automated market makers, the prices of assets stored within the smart contract are enforced by the "constant product." This is represented through the formula x * y = k, where x and y represent the quantity of tokens you provide liquidity for.
The product of the quantities of the tokens you are providing liquidity for (k) remains the same regardless of market forces, meaning that when there is demand for either asset, the price a market taker will get for the asset is deterministic (ignoring gas fees & slippage). If a buyer of an asset would like to buy 10 of asset "x," the price at which they buy will be determined based on the quantity of "y" they need to supply to keep "k" the same as it was before the trade is made.
With this principle in mind, we can answer your question about how to continue the steps to correctly balance and obtain the impermanent loss and arb profit values; we can do so in Python. Remember, when you begin liquidity providing, the $VALUE of each token needs to be = to the other.
import numpy as np
import pandas as pd

df = pd.DataFrame()

def ethq_amm_simulation(_ethp, _k):
    return np.sqrt(_k/_ethp)

def bntq_amm_simulation(_ethq, _k):
    return _k/_ethq

df["ethp"] = ...  # insert your ETH price data here
df["bntp"] = ...  # insert your BNT price data here

# k = (your portfolio size / 2) * ((your portfolio size / 2)
#     / ETH price per BNT quantity at T0, i.e. when you started becoming an LP)
df["k"] = ...

df["ethq"] = df.apply(lambda x: ethq_amm_simulation(x.ethp, x.k), axis=1)
df["bntq"] = df.apply(lambda x: bntq_amm_simulation(x.ethq, x.k), axis=1)

df["LP Value"] = (df["ethp"] * df["ethq"]) + df["bntq"] * df["bntp"]

# Hold Value uses the ETH and BNT quantities from T0 (when you started becoming an LP);
# ethq_at_t0 / bntq_at_t0 are placeholders for those T0 quantities
df["Hold Value"] = df["ethp"] * ethq_at_t0 + df["bntp"] * bntq_at_t0

df["Impermanent Loss"] = df["Hold Value"] - df["LP Value"]
| Automated Market Makers - Liquidity Pool - Question about the calculation | Esteemed,
I would like to code all the steps for the correct calculation of a pool balance according to Uniswap V2 logic.
Anyone who knew how to help can write in any programming language (Python, Javascript etc.), in this example I used R.
The balancing process for a liquidity pool can be seen here: example1 and example2.
However, it is not clear how to do it taking into account the Uniswap fee which is 0.3% for every trade.
library(tidyverse)
##### INITIAL PARAMETERS
#Uniswap charges users a flat 0.30% fee for
#every trade that takes place on the platform and
#automatically sends it to a liquidity reserve
Uniswap.fee <- 0.30 / 100
ETH.initial.price <- 100
ETH.pool.price <- 100
BNT.initial.price <- 1
BNT.pool.price <- 1
##### INITIAL SITUATION
BNT.units <- 1000
BNT.total <- BNT.units * BNT.initial.price
ETH.units <- BNT.total / ETH.initial.price
ETH.total <- ETH.units * ETH.initial.price
Pool.Value <- BNT.total + ETH.total
Pool.DF <- data.frame(Symbol = c("BNT", "ETH"), Share = c(BNT.total,ETH.total))
ggplot(Pool.DF, aes(x = Symbol, y=Share, fill=Symbol)) +
geom_bar(width = 1, position = "dodge", stat="identity") + labs(title="Initial POOL")
##### FINAL SITUATION
ETH.final.price <- 120
ETH.pool.price <- 100
BNT.final.price <- 1
BNT.pool.price <- 1
##### Imbalanced.Pool
Imbalanced.Pool <- data.frame(Symbol = c("BNT", "ETH"),
Share = c(BNT.total,ETH.units * ETH.final.price ))
ggplot(Imbalanced.Pool, aes(x = Symbol, y=Share, fill=Symbol)) +
geom_bar(width = 1, position = "dodge", stat="identity")+ labs(title="Imbalanced POOL: ETH valorization")
#### need to balance:
...
Now I don't know how to continue the steps to correctly balance and obtain the impermanent loss and arb profit values.
Thank you very much,
| [
"For DeFi protocols using automated market makers, we implement a pool of a pair of assets (for example, BTC-USDT), and we price the two assets simply with:\nb * u = constant\nHere b is the amount of BTC in the pool, and u is the amount of USDT. Besides, the constant is often written as K in many papers.\nNow assume that you want to swap an amount of b' BTC to USDT. You give the b' BTC to the DeFi protocol, and the protocol finds that it should reduce the amount of USDT (which will be given to you) in order to keep b * u = constant. Then with simple calculations, the pool decides to give you u' USDT so that:\n(b+b')*(u-u') = constant\nWith this transaction (assuming that no 0.3% fee is accrued), you obtain u' USDT by giving b' BTC.\nThen it comes to the 0.3% fee. When you give b' BTC to the pool, the pool acts as if you give only the deducted amount b'' == 99.7% * b'. Then the pool computes u'' so that\n(b+b'')*(u-u'') = constant\nIn actual computation there are always errors due to limited number of decimals (you can only receive 0.1272 USDT instead of 0.127272727272...). Therefore, the asserted constant can be slowly increasing as more and more swap transactions are executed. This slowly increase the volume of the liquidity pool. Note that the accrued 0.3% fee may not serve as part of the liquidity pool.\nFor codes, I would use the codes of Flamingo as an example.\nFirst you initialize a swap through FlamingoSwapRouter. You give your sold tokens to the router\nhttps://github.com/flamingo-finance/flamingo-contract-swap/blob/74e61f8406f9e8ededed72f9cb7e0139091ae17c/Swap/flamingo-contract-swap/FlamingoSwapRouter/FlamingoSwapRouterContract.cs#L237\nThen the router SafeTransfer your sold token to its swapping pairContract, and call the swap method of the swapping contract to give you the token you buy.\nhttps://github.com/flamingo-finance/flamingo-contract-swap/blob/74e61f8406f9e8ededed72f9cb7e0139091ae17c/Swap/flamingo-contract-swap/FlamingoSwapPair/FlamingoSwapPairContract.cs#L140\nYou can see the code var balance0Adjusted = balance0 * 1000 - amount0In * 3; which adjusts the assumed amount of token in the liquidity pool. There is a slightly exeeding amount of token in the liquidity pool which are accrued fees, not part of the liquidity.\n",
"Across automated market makers, the prices of assets stored within the smart contract are enforced by the \"constant product.\" This is represented through the formula x * y = k, where x and y represent the quantity of tokens you provide liquidity for.\nThe product of the quantities of the tokens you are providing liquidity for (k) remains the same regardless of market forces, meaning that when there is demand for either asset, the price a market taker will get for the asset is deterministic (ignoring gas fees & slippage). If a buyer of an asset would like to buy 10 of asset \"x,\" the price at which they buy will be determined based on the quantity of \"y\" they need to supply to keep \"k\" the same as it was before the trade is made.\nWith this principle in mind, we can answer your question about how to continue the steps to correctly balance and obtain the impermanent loss and arb profit values; we can do so in Python. Remember, when you begin liquidity providing, the $VALUE of each token needs to be = to the other.\ndf = pd.DataFrame()\n\ndef ethq_amm_simulation(_ethp, _k):\n return np.sqrt(_k/_ethp)\n\ndef bntq_amm_simulation(_ethq, _k):\n return _k/_ethq\n\ndf[\"ethp\"] = insert your ETH price data here\n\ndf[\"bntp\"] = insert your BNT price data here\n\ndf[\"k\"] = your portfolio size / 2 * ((your portfolio size / 2) / \n*ETH price per BNT quantity at T0 (when you started becoming an LP)\n\ndf[\"ethq\"] = df.apply(lambda x: ethq_amm_simulation(x.ethp, x.k), axis=1)\n\ndf[\"bntq\"] = df.apply(lambda x: bntq_amm_simulation(x.ethq, x.k), axis=1)\n\ndf[\"LP Value\"] = (df[\"ethp\"] * df[\"ethq\"]) + df[\"bntq\"] * df[\"bntp\"]\n\ndf[\"Hold Value\"] = df[\"ethp\"] * df[\"ethq\"] at T0 (when you started \nbecoming an LP) + (df[\"bntp\"] * df[\"bntq\"] at T0 (when you started \nbecoming an LP)\n\ndf[\"Impermanent Loss\"] = df[\"Hold Value\"] - df[\"LP Value\"]\n\n"
] | [
0,
0
] | [] | [] | [
"cryptocurrency",
"javascript",
"logic",
"python",
"r"
] | stackoverflow_0070148373_cryptocurrency_javascript_logic_python_r.txt |
Q:
A problem concerning an ordering issue using sort_values (pandas)
I want to find out a max value, so I use df.groupby('h_type').max()['h_price'], but it gives a strange result. Therefore, I use the following code and find out there is an ordering issue:
bond=pd.read_csv('/content/drive/MyDrive/test/datahistory/c.csv',index_col='h_type')
a=bond.loc['mansion']
aMax=a.sort_values(['h_price'],ascending=False)
aMax
Then it yields:
What kind of problem might this be?
A:
Problem is column h_price is not numeric, need:
bond=pd.read_csv('/content/drive/MyDrive/test/datahistory/c.csv',index_col='h_type')
bond['h_price'] = bond['h_price'].str.replace(',','.', regex=True).astype(float)
a=bond.loc['mansion']
aMax=a.sort_values(['h_price'],ascending=False)
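With the column numeric, the groupby from the question should then return sensible maxima as well; a quick sketch (h_type is the index here, which pandas accepts as a grouping key by name):
print(bond.groupby('h_type')['h_price'].max())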
| A problem concerning ordering issue using sort_vlue(pandas) | I want to find out a max value, so I use df.groupby('h_type').max()['h_price'] but it gives a strange result. Therefore, I use the following code and then find out there is an ordering issue
bond=pd.read_csv('/content/drive/MyDrive/test/datahistory/c.csv',index_col='h_type')
a=bond.loc['mansion']
aMax=a.sort_values(['h_price'],ascending=False)
aMax
Then it yields:
What kind of the problem may it be?
| [
"Problem is column h_price is not numeric, need:\nbond=pd.read_csv('/content/drive/MyDrive/test/datahistory/c.csv',index_col='h_type')\nbond['h_price'] = bond['h_price'].str.replace(',','.', regex=True).astype(float)\n\na=bond.loc['mansion']\naMax=a.sort_values(['h_price'],ascending=False)\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074636885_pandas_python.txt |
Q:
Rearrange object DataFrame
I'm trying to rearrange an object dataframe to be in DDMMYYYY format.
The original format was MM/DD/YYYY.
import string
import pandas as pd
csv_file = 'export.csv'
df = pd.read_csv(csv_file, index_col=False)
df["Date1"] = df["Order_Date"].str.split(" ").str.get(0)
df["Date"] = df["Date1"].str.split("/")
zz= df["Date"]
print(zz)
>>>
0 [11, 01, 2022]
1 [11, 01, 2022]
2 [11, 01, 2022]
3 [11, 01, 2022]
4 [11, 01, 2022]
...
2768 [11, 22, 2022]
2769 [11, 22, 2022]
2770 [11, 22, 2022]
2771 [11, 22, 2022]
2772 [11, 22, 2022]
Name: Date, Length: 2773, dtype: object
I want the output to be like this
>>>
0 [01112022]
1 [01112022]
2 [01112022]
3 [01112022]
4 [01112022]
...
A:
Instead solitting convert column to datetimes byto_datetime and then use Series.dt.strftime:
df["Date"] = pd.to_datetime(df["Order_Date"].str.split().str.get(0)).dt.strftime('%d%m%Y')
Or use Series.str.extract for valeus before first space:
df["Date"] = (pd.to_datetime(df["Order_Date"].str.extract('(.*)\s+', expand=False))
.dt.strftime('%d%m%Y'))
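A quick self-contained check of the first approach, with made-up Order_Date values in the question's MM/DD/YYYY format:
import pandas as pd

df = pd.DataFrame({"Order_Date": ["11/01/2022 00:00", "11/22/2022 00:00"]})
df["Date"] = pd.to_datetime(df["Order_Date"].str.split().str.get(0)).dt.strftime('%d%m%Y')
print(df["Date"].tolist())  # ['01112022', '22112022']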
| Rearrange object DataFrame | I'm trying to rearrange an object dataframe to be in DDMMYYYY format.
The original format was MM/DD/YYYY.
import string
import pandas as pd
csv_file = 'export.csv'
df = pd.read_csv(csv_file, index_col=False)
df["Date1"] = df["Order_Date"].str.split(" ").str.get(0)
df["Date"] = df["Date1"].str.split("/")
zz= df["Date"]
print(zz)
>>>
0 [11, 01, 2022]
1 [11, 01, 2022]
2 [11, 01, 2022]
3 [11, 01, 2022]
4 [11, 01, 2022]
...
2768 [11, 22, 2022]
2769 [11, 22, 2022]
2770 [11, 22, 2022]
2771 [11, 22, 2022]
2772 [11, 22, 2022]
Name: Date, Length: 2773, dtype: object
I want the output to be like this
>>>
0 [01112022]
1 [01112022]
2 [01112022]
3 [01112022]
4 [01112022]
...
| [
"Instead solitting convert column to datetimes byto_datetime and then use Series.dt.strftime:\ndf[\"Date\"] = pd.to_datetime(df[\"Order_Date\"].str.split().str.get(0)).dt.strftime('%d%m%Y')\n\nOr use Series.str.extract for valeus before first space:\ndf[\"Date\"] = (pd.to_datetime(df[\"Order_Date\"].str.extract('(.*)\\s+', expand=False))\n .dt.strftime('%d%m%Y'))\n\n"
] | [
2
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074637483_dataframe_pandas_python.txt |
Q:
How do I detect if a string with whitespace contains a number in pandas?
I'm trying to separate a list of mailing addresses and names that are intertwined in a single column due to poor formatting. I initially wanted to grab all of the entries that are alphanumeric so I can separate them, but I'm having trouble with the .isalnum() function. My string is a mailing address, so I can't remove all of the whitespace as it would ruin the formatting.
'123 main st'.isalnum()
doesn't work because of the white space and I can't quite figure out how to effectively check if this is correct.
A:
Possible duplicate: Check if a string contains a number.
def has_numbers(inputString):
    return any(char.isdigit() for char in inputString)

has_numbers("123 main st")
True
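Since the data sits in a pandas column, the same digit check can also be vectorized with str.contains; a sketch (the column name "address" is an assumption):
import pandas as pd

df = pd.DataFrame({"address": ["123 main st", "John Smith"]})
mask = df["address"].str.contains(r"\d", regex=True, na=False)
print(df[mask])  # only the rows that contain a digit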
| How do I detect if a string with whitespace contains a number in pandas? | I'm trying to seperate a list of mailing addresses and names that are intertwined in a single column due to poor formating. I initially wanted to grab all of the entries that are alphanumeric so I can separate them however I'm having trouble with the .isalnum() function. My string is a mailing address so I can't remove all of the white space as it would ruin the formatting.
'123 main st'.isalnum()
doesn't work because of the white space and I can't quite figure out how to effectively check if this is correct.
| [
"Possible duplicate: Check if a string contains a number.\ndef has_numbers(inputString):\n return any(char.isdigit() for char in inputString)\n\nhas_numbers(\"123 main st\")\nTrue\n\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074637221_pandas_python.txt |
Q:
i am a new python user and i tried ursina, i created a little 2d shooter game but i don't know how to add a reload time for the each shot
I want to have a delay between each shot, or maybe a limited bullet count and then a reload time.
def input (key):
    if key == 'space':
        e = Entity(y=zeri.y, x=zeri.x+2, model='quad', collider='box', texture="textures/bum.png")
        e.animate_x(30, duration=2, curve=curve.linear)
        invoke(destroy, e, delay=2)
This is the bullet code.
I tried using time.sleep() and expected it to stop the player from using commands, but it just stops everything.
For the bullet reload I tried:
def input (key):
    bullets=10
    if key == 'space' and bullets>0:
        e = Entity(y=zeri.y, x=zeri.x+2, model='quad', collider='box', texture="textures/bum.png")
        e.animate_x(30, duration=2, curve=curve.linear)
        invoke(destroy, e, delay=2)
        bullets=bullets-1
I expected it to stop shooting after 10 bullets, but you can still shoot for as long as you want.
A:
This would give you a delay after each shot (untested pseudo code out of my head, but you should get the picture):
import time

shot_threshold = 20_000_000  # each shot no sooner than 20 ms (in nanoseconds) from the last one
last_shot = None

if key == 'space':
    if not last_shot or time.time_ns() - last_shot > shot_threshold:
        # store the shot stamp to check when fired again
        last_shot = time.time_ns()
        # handle your shot the normal way
Similar logic applies to the reload delay. Just add another variable
to hold the timestamp when you start the reload and extend the condition
to check for that (or reorganize your code as you like; you may, for
example, want to add some visual effect for the reloading period).
The only difference is that this timestamp would be set after your
bullet count drops to 0.
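And for the limited-bullet part of the question, a hedged sketch (magazine size and reload time are illustrative; invoke comes from ursina just like the question's destroy call): the original attempt never runs out of ammo because bullets = 10 sits inside input and is reset on every key press, so keep the counter at module level and refill it after a delay:
bullets = 10        # illustrative magazine size
reload_time = 2     # illustrative reload delay in seconds
reloading = False

def refill():
    global bullets, reloading
    bullets = 10
    reloading = False

def input(key):
    global bullets, reloading
    if key == 'space' and bullets > 0 and not reloading:
        bullets -= 1
        # ... spawn and animate the bullet Entity exactly as in the question ...
        if bullets == 0:
            reloading = True
            invoke(refill, delay=reload_time)  # refill the magazine after the reload delay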
| i am a new python user and i tried ursina, i created a little 2d shooter game but i don't know how to add a reload time for the each shot | I want to have a delay between each shot or maybe a limited bulled number and then a reload time.
def input (key):
if key == 'space':
e = Entity(y=zeri.y, x=zeri.x+2, model='quad', collider='box', texture="textures/bum.png")
e.animate_x(30, duration=2, curve=curve.linear)
invoke(destroy, e, delay=2)
This is the bullet code.
I tried by using time.sleep() and expected it to stop the player from using commands but it just stops everything.
For the bulled reload I tried:
def input (key):
bullets=10
if key == 'space' and if bullets>0:
e = Entity(y=zeri.y, x=zeri.x+2, model='quad', collider='box', texture="textures/bum.png")
e.animate_x(30, duration=2, curve=curve.linear)
invoke(destroy, e, delay=2)
bullets=bullets-1
I expected it to stop shooting after 10 bullets, but you can still shoot for as long as you want.
| [
"This would give you delay after each shot (untested pseudo code out of my head but you should get the picture)\nfrom time import time\n\nshot_threshold = 20 # Each shot no sooner than 20ms from last one\nlast_shot = None\n\nif key == 'space':\n if not last_shot or time.time_ns() - last_shot > shot_threshold:\n # store the shot stamp to check when fired again\n last_shot = time.time_ns()\n # handle your shot normal way\n\nSimilar way logic would be for reload delay. Just add another variable\nto hold the stamp when you start the reload and extend the condition\nto check for that (or reorganize your code as you like, as you i.e.\nmay want to add some visual effect for the period of reloading etc).\nThe only difference would be that that stamp would be set after your\nbullet count drops to 0.\n"
] | [
0
] | [] | [] | [
"python",
"ursina"
] | stackoverflow_0074620120_python_ursina.txt |
Q:
importing a csv file with clean columns using pandas?
So I'm trying to import this CSV file, and each value is separated by a comma, but how do I make new rows and columns from the imported data?
I tried importing it as normal and printing the data frame in different ways.
A:
try the same with
df = pd.read_csv('file_name.csv', sep = ',')
this might work
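If the data still lands in a single column, a hedged fallback (same illustrative file name) is to let pandas sniff the delimiter itself:
import pandas as pd

# sep=None with the python engine makes pandas detect the delimiter automatically
df = pd.read_csv('file_name.csv', sep=None, engine='python')
print(df.head())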
| importing a csv file with clean columns using pandas? | so i'm trying to import this csv file and each value is seperated by a comma but how do i make new rows and columns from the imported data?
I tried importing it as normal and printing the data frame in different ways.
| [
"try the same with\ndf = pd.read_csv('file_name.csv', sep = ',')\n\nthis might work\n"
] | [
0
] | [] | [] | [
"csv",
"dataframe",
"format",
"pandas",
"python"
] | stackoverflow_0074636627_csv_dataframe_format_pandas_python.txt |
Q:
How to pass a javascript variable in html to views.py?
I am currently trying to make a website using Django.
And I faced the problem I wrote in the title.
What I want to make is like this:
first of all, the shop page shows all products.
But when a user selects a brand name on the dropdown menu, the shop page must show only that brand's products.
To do this, I have to get the variable which the user selects on the dropdown menu,
and my view function should run at the same time.
Please let me know how I can resolve this.
i made a dropdown in html as below.
<shop_test.html>
<form action="{% url 'shop' %}" method="get" id="selected_brand">
<select name="selected_brand" id="selected_brand">
<option value="ALL">Select Brand</option>
<option value="A">A_BRAND</option>
<option value="B">B_BRAND</option>
<option value="C">C_BRAND</option>
</select>
</form>
<script type="text/javascript">
$(document).ready(function(){
$("select[name=selected_brand]").change(function () {
$(".forms").submit();
});
});
</script>
and my views.py is as below.
def ShopView(request):
brand_text = request.GET.get('selected_brand')
if brand_text == None:
product_list = Product.objects.all()
elif brand_text != 'ALL':
product_list = Product.objects.filter(brand=brand_text)
else:
product_list = Product.objects.all()
context = {
'brand_text': brand_text,
'product_list': product_list,
}
return render(request, 'shop_test.html', context)
I tried to google it a lot of times, but I couldn't resolve this.
A:
base.html
{% load static %}
<!DOCTYPE html>
<html lang='en'>
<head>
<title>{% block title %}My amazing site{% endblock %}</title>
<meta charset='utf-8'>
<link rel="stylesheet" href="{% static 'base.css' %}">
</head>
<body>
<div id="content">
{% block content %}{% endblock %}
</div>
</body>
<footer>
{% block script %}{% endblock %}
</footer>
</html>
blank.html
{% extends 'base.html' %}
{% block content %}
<form action="{% url 'core:pass-variable-js' %}" method="get" onChange=sendForm() id="selection_form">
<select name="selected_brand" id="selected_brand">
<option value="ALL" {% if brand == "ALL" %}selected{% endif %}>Select Brand</option>
<option value="A" {% if brand == "A" %}selected{% endif %}>A_BRAND</option>
<option value="B" {% if brand == "B" %}selected{% endif %}>B_BRAND</option>
<option value="C" {% if brand == "C" %}selected{% endif %}>C_BRAND</option>
</select>
</form>
{% for product in products%}
<div>
<p>Product: {{ product.name }}<br>Brand: {{ product.brand }}</p><br>
</div>
{% endfor %}
{% endblock %}
{% block script %}
<script>
function sendForm() {
document.getElementById("selection_form").submit();
}
</script>
{% endblock %}
views.py
def pass_js_variable(request):
brand = 'ALL'
products = Product.objects.all()
if request.GET.get('selected_brand'):
brand = request.GET.get('selected_brand')
match brand:
case 'ALL':
products = Product.objects.all()
case default:
products = Product.objects.filter(brand=brand)
context = {'brand': brand, 'products': products}
return render(request, 'blank.html', context)
Technically we are not passing a JS variable. We are just retrieving an variable from the request object.
In fact, if we use JS to send the value using AJAX the main difference would be the page NOT being reloaded.
| How to pass a javascript variable in html to views.py? | I am currently trying to make an website using django.
And i faced a problem like i wrote in title.
What i want to make is like this,
first of all, shop page shows all products.
But, when a user select a brand name on dropdown menu, shop page must shows only that brand products.
To do this, i have to get a variable which a user select on dropdown menu,
and my view function should run at the same time.
Please let me know how can i resolve this.
i made a dropdown in html as below.
<shop_test.html>
<form action="{% url 'shop' %}" method="get" id="selected_brand">
<select name="selected_brand" id="selected_brand">
<option value="ALL">Select Brand</option>
<option value="A">A_BRAND</option>
<option value="B">B_BRAND</option>
<option value="C">C_BRAND</option>
</select>
</form>
<script type="text/javascript">
$(document).ready(function(){
$("select[name=selected_brand]").change(function () {
$(".forms").submit();
});
});
</script>
and my views.py is as below.
def ShopView(request):
brand_text = request.GET.get('selected_brand')
if brand_text == None:
product_list = Product.objects.all()
elif brand_text != 'ALL':
product_list = Product.objects.filter(brand=brand_text)
else:
product_list = Product.objects.all()
context = {
'brand_text': brand_text,
'product_list': product_list,
}
return render(request, 'shop_test.html', context)
i tried to google it a lot of times, but i counldn't resolve this.
| [
"base.html\n{% load static %}\n\n<!DOCTYPE html>\n<html lang='en'>\n <head>\n <title>{% block title %}My amazing site{% endblock %}</title>\n <meta charset='utf-8'>\n <link rel=\"stylesheet\" href=\"{% static 'base.css' %}\">\n </head>\n\n <body>\n <div id=\"content\">\n {% block content %}{% endblock %}\n </div>\n </body>\n\n <footer>\n {% block script %}{% endblock %}\n </footer>\n</html>\n\nblank.html\n{% extends 'base.html' %}\n\n{% block content %}\n <form action=\"{% url 'core:pass-variable-js' %}\" method=\"get\" onChange=sendForm() id=\"selection_form\">\n <select name=\"selected_brand\" id=\"selected_brand\">\n <option value=\"ALL\" {% if brand == \"ALL\" %}selected{% endif %}>Select Brand</option>\n <option value=\"A\" {% if brand == \"A\" %}selected{% endif %}>A_BRAND</option> \n <option value=\"B\" {% if brand == \"B\" %}selected{% endif %}>B_BRAND</option>\n <option value=\"C\" {% if brand == \"C\" %}selected{% endif %}>C_BRAND</option>\n </select> \n </form>\n\n {% for product in products%}\n <div>\n <p>Product: {{ product.name }}<br>Brand: {{ product.brand }}</p><br>\n </div>\n {% endfor %}\n{% endblock %}\n\n{% block script %}\n <script>\n function sendForm() {\n document.getElementById(\"selection_form\").submit();\n }\n </script>\n{% endblock %}\n\nviews.py\ndef pass_js_variable(request):\n brand = 'ALL'\n products = Product.objects.all()\n\n if request.GET.get('selected_brand'):\n brand = request.GET.get('selected_brand')\n match brand:\n case 'ALL':\n products = Product.objects.all()\n case default:\n products = Product.objects.filter(brand=brand)\n\n context = {'brand': brand, 'products': products}\n return render(request, 'blank.html', context)\n\nTechnically we are not passing a JS variable. We are just retrieving an variable from the request object.\nIn fact, if we use JS to send the value using AJAX the main difference would be the page NOT being reloaded.\n"
] | [
0
] | [] | [] | [
"django",
"javascript",
"python"
] | stackoverflow_0074636986_django_javascript_python.txt |
Q:
How to append a column to a DataFrame that collects values of another DataFrame in Python?
I have two tables (as Pandas' DataFrame), one is like
name
val
name1
0
name2
1
the other is
name
tag
name1
tg1
name1
tg2
name1
tg3
name1
tg3
name2
kg1
name2
kg1
name3
other
and I want to append a column to the first DataFrame collecting all values of the second table by name, i.e.
name
val
new_column
name1
0
[tg1, tg2, tg3, tg3]
name2
1
[kg1, kg1]
I know I can use row-wise operation to achieve this, but is there a way that I can use inbuilt Pandas' methods to do this? If I want to remove duplicates of the collected array in new_column at the same time, what method should I use?
A:
Use DataFrame.join with aggregate lists:
df = df1.join(df2.groupby('name')['tag'].agg(list).rename('new_column'), on='name')
print (df)
name val new_column
0 name1 0 [tg1, tg2, tg3, tg3]
1 name2 1 [kg1, kg1]
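For the follow-up about removing duplicates inside the collected arrays, a hedged variation of the same join (keeping the df1/df2 names from above) aggregates into an order-preserving, de-duplicated list instead:
dedup = (df2.groupby('name')['tag']
            .agg(lambda s: list(dict.fromkeys(s)))  # dict.fromkeys drops repeats, keeps order
            .rename('new_column'))
df = df1.join(dedup, on='name')
print(df)
    name  val       new_column
0  name1    0  [tg1, tg2, tg3]
1  name2    1            [kg1]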
| How to append a column to a DataFrame that collects values of another DataFrame in Python? | I have two tables (as Pandas' DataFrame), one is like
name
val
name1
0
name2
1
the other is
name
tag
name1
tg1
name1
tg2
name1
tg3
name1
tg3
name2
kg1
name2
kg1
name3
other
and I want to append a column to the first DataFrame collecting all values of the second table by name, i.e.
name
val
new_column
name1
0
[tg1, tg2, tg3, tg3]
name2
1
[kg1, kg1]
I know I can use row-wise operation to achieve this, but is there a way that I can use inbuilt Pandas' methods to do this? If I want to remove duplicates of the collected array in new_column at the same time, what method should I use?
| [
"Use DataFrame.join with aggregate lists:\ndf = df1.join(df2.groupby('name')['tag'].agg(list).rename('new_column'), on='name')\nprint (df)\n name val new_column\n0 name1 0 [tg1, tg2, tg3, tg3]\n1 name2 1 [kg1, kg1]\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074637551_dataframe_pandas_python.txt |
Q:
How to remove the error of 'can only concatenate str (not "float") to str' in Python while returning a string?
I am trying to calculate the percentage of landmass occupied by a country out of the total landmass. I am taking two arguments, a string and a float, in a function and returning a string with the calculated percentage in it. For example, the input = area_of_country("Russia", 17098242) and the output = "Russia is 11.48% of the total world's landmass". Below is my code
class Solution(object):
def landmass(self, st, num):
percentage = 148940000 / num * 100
return st + "is" + percentage + "of total world mass!"
if __name__ == "__main__":
s = "Russia"
n = 17098242
print(Solution().landmass(s, n))
Error :-
return st + "is" + percentage + "of total world mass!"
TypeError: can only concatenate str (not "float") to str
A:
You need to cast the percentage (since it's a float) into a string when concatenating using the + operator. So your return statement would look like:
return str(st) + "is" + str(percentage) + "of total world mass!"
A:
instead of this:
return st + "is" + percentage + "of total world mass!"
Try this:
return str(st) + "is" + str(percentage) + "of total world mass!"
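A hedged alternative, keeping the question's class: an f-string converts the float for you and lets you round to two decimals; note that matching the expected "11.48%" also seems to require dividing the country's area by the total landmass, not the other way around:
class Solution(object):
    def landmass(self, st, num):
        # country area divided by total landmass gives the ~11.48% from the example
        percentage = num / 148940000 * 100
        return f"{st} is {percentage:.2f}% of the total world's landmass"

if __name__ == "__main__":
    print(Solution().landmass("Russia", 17098242))  # Russia is 11.48% of the total world's landmass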
| How to remove the error of 'can only concatenate str (not "float") to str' in Python while returning a string? | I am trying to calculate a percentage of landmass occupied by a country from the total landmass.I am taking two arguments as string and float in a function and returning String along with the calculated percentage in it. For example the Input =area_of_country("Russia", 17098242) and Output = "Russia is 11.48% of the total world's landmass". Below is my code
class Solution(object):
def landmass(self, st, num):
percentage = 148940000 / num * 100
return st + "is" + percentage + "of total world mass!"
if __name__ == "__main__":
s = "Russia"
n = 17098242
print(Solution().landmass(s, n))
Error :-
return st + "is" + percentage + "of total world mass!"
TypeError: can only concatenate str (not "float") to str
| [
"You need to cast the percentage (since it's a float) into a string when concatenating using the + operator. So your return statement would look like:\nreturn str(st) + \"is\" + str(percentage) + \"of total world mass!\"\n\n",
"instead of this:\n\nreturn st + \"is\" + percentage + \"of total world mass!\"\n\nTry this:\nreturn str(st) + \"is\" + str(percentage) + \"of total world mass!\"\n\n"
] | [
2,
1
] | [] | [] | [
"data_structures",
"integer",
"python",
"string"
] | stackoverflow_0074637548_data_structures_integer_python_string.txt |
Q:
Error concatenating specific sheet from multiple workbooks into one df
I am trying to separate out a specific sheet from about 300 excel workbooks and combine them into a single dataframe.
I have tried this code:
import pandas as pd
import glob
import openpyxl
from openpyxl import load_workbook
pd.set_option("display.max_rows", 100, "display.max_columns", 100)
allexcelfiles = glob.glob(r"C:\Users\LELI Laptop 5\Desktop\DTP1\*.xlsx")
cefdf = []
for ExcelFile in allexcelfiles:
wb = load_workbook(ExcelFile)
for sheet in wb:
list_of_sheetnames = [sheet for sheet in wb.sheetnames if "SAR" in sheet]
df = pd.read_excel(ExcelFile, sheet_name = list_of_sheetnames, nrows = 24)
cefdf.append(df)
df = pd.concat(cefdf)
From which I get this error:
TypeError: cannot concatenate object of type '<class 'dict'>'; only Series and DataFrame objs are valid
I then tried this:
df = pd.DataFrame(pd.read_excel(ExcelFile, sheet_name = list_of_sheetnames, nrows = 24))
From which I get this error:
ValueError: If using all scalar values, you must pass an index
A:
You can concat dictonary of DataFrames, reason is because multiple sheetnames in list_of_sheetnames:
for ExcelFile in allexcelfiles:
wb = load_workbook(ExcelFile)
list_of_sheetnames = [sheet for sheet in wb.sheetnames if "SAR" in sheet]
dfs = pd.read_excel(ExcelFile, sheet_name = list_of_sheetnames, nrows = 24)
cefdf.append(pd.concat(dfs))
df = pd.concat(cefdf)
| Error concatenating specific sheet from multiple workbooks into one df | I am trying to separate out a specific sheet from about 300 excel workbooks and combine them into a single dataframe.
I have tried this code:
import pandas as pd
import glob
import openpyxl
from openpyxl import load_workbook
pd.set_option("display.max_rows", 100, "display.max_columns", 100)
allexcelfiles = glob.glob(r"C:\Users\LELI Laptop 5\Desktop\DTP1\*.xlsx")
cefdf = []
for ExcelFile in allexcelfiles:
wb = load_workbook(ExcelFile)
for sheet in wb:
list_of_sheetnames = [sheet for sheet in wb.sheetnames if "SAR" in sheet]
df = pd.read_excel(ExcelFile, sheet_name = list_of_sheetnames, nrows = 24)
cefdf.append(df)
df = pd.concat(cefdf)
From which I get this error:
TypeError: cannot concatenate object of type '<class 'dict'>'; only Series and DataFrame objs are valid
I then tried this:
df = pd.DataFrame(pd.read_excel(ExcelFile, sheet_name = list_of_sheetnames, nrows = 24))
From which I get this error:
ValueError: If using all scalar values, you must pass an index
| [
"You can concat dictonary of DataFrames, reason is because multiple sheetnames in list_of_sheetnames:\nfor ExcelFile in allexcelfiles:\n wb = load_workbook(ExcelFile)\n\n list_of_sheetnames = [sheet for sheet in wb.sheetnames if \"SAR\" in sheet]\n \n dfs = pd.read_excel(ExcelFile, sheet_name = list_of_sheetnames, nrows = 24)\n cefdf.append(pd.concat(dfs))\n \ndf = pd.concat(cefdf)\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"excel",
"pandas",
"python"
] | stackoverflow_0074637544_dataframe_excel_pandas_python.txt |
Q:
error while inserting values python mysql
Everything in the syntax looks right, but I am still getting errors. How do I fix this error?
not inserting data due to syntax error
try:
cur.execute('insert into test(PassengerId,SibSp,Parch,Embarked) values(892,0,0,'Q'),(893,1,0,'S'),(894,0,0,'Q'),(895,0,0,'S'),(896,1,1,'S')')
print('data inserted successfully')
except Exception as e:
print(e)
error =
cur.execute('insert into test(PassengerId,SibSp,Parch,Embarked) values(892,0,0,'Q'),(893,1,0,'S'),(894,0,0,'Q'),(895,0,0,'S'),(896,1,1,'S')')
^
SyntaxError: invalid syntax
A:
Since you are using ' to wrap the parameters, you need to use " to wrap the SQL itself
try:
cur.execute("insert into test(PassengerId,SibSp,Parch,Embarked) values(892,0,0,'Q'),(893,1,0,'S'),(894,0,0,'Q'),(895,0,0,'S'),(896,1,1,'S')")
print('data inserted successfully')
except Exception as e:
print(e)
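A hedged alternative sketch (assuming cur comes from a DB-API driver such as mysql-connector or PyMySQL, which both use %s placeholders): pass the values as parameters with executemany and let the driver do the quoting, which avoids this kind of quote clash entirely:
rows = [
    (892, 0, 0, 'Q'),
    (893, 1, 0, 'S'),
    (894, 0, 0, 'Q'),
    (895, 0, 0, 'S'),
    (896, 1, 1, 'S'),
]
try:
    cur.executemany(
        "insert into test (PassengerId, SibSp, Parch, Embarked) values (%s, %s, %s, %s)",
        rows,
    )
    print('data inserted successfully')
except Exception as e:
    print(e)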
| error while inserting values python mysql | everything syntax is right but still getting errors .how to fix this error ?
not inserting data due to syntax error
try:
cur.execute('insert into test(PassengerId,SibSp,Parch,Embarked) values(892,0,0,'Q'),(893,1,0,'S'),(894,0,0,'Q'),(895,0,0,'S'),(896,1,1,'S')')
print('data inserted successfully')
except Exception as e:
print(e)
error =
cur.execute('insert into test(PassengerId,SibSp,Parch,Embarked) values(892,0,0,'Q'),(893,1,0,'S'),(894,0,0,'Q'),(895,0,0,'S'),(896,1,1,'S')')
^
SyntaxError: invalid syntax
| [
"Since you have using ' to wrap the parameters,so you need to use \" to wrap the sql itself\ntry:\n cur.execute(\"insert into test(PassengerId,SibSp,Parch,Embarked) values(892,0,0,'Q'),(893,1,0,'S'),(894,0,0,'Q'),(895,0,0,'S'),(896,1,1,'S')\")\n print('data inserted successfully')\nexcept Exception as e:\n print(e)\n\n"
] | [
1
] | [] | [] | [
"jupyter_notebook",
"mysql",
"python"
] | stackoverflow_0074637584_jupyter_notebook_mysql_python.txt |
Q:
how to run same set of scenarios in 2 backgrounds in behave python
I have 2 backgrounds for my login page 1) in which the user accepts the cookie 2) user declines cookies.
Feature:ORG_LOGIN|Login action with organization ID after accepting cookies
Background:
Given User is on login page
When user accepts the cookies
And User navigates to organization tab
And clicks on password eye
# @positive
# Scenario:Login with valid credentials
# When User enters valid Organization ID, username and password
# And hits Login button
# Then Dashboard page is displayed
After this, the same set of scenarios (8 in number) needs to be tested.
I am using behave 1.2.6 with python 3.11.0 and selenium 4.6.0
Since there can be only 1 background per feature file, I tried copying all scenarios into another feature file with different background. I get following error
behave.step_registry.AmbiguousStep: @given('User is on login page') has already been defined in
existing step @given('User is on login page') at steps/login_ac.py:8
Any thoughts on how I can implement this?
A:
I resolved it by removing duplicate implementations of the steps.
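For reference, a hedged sketch of what that looks like (module name and step body are illustrative): keep exactly one definition of each shared step; behave loads every module under steps/, so both feature files reuse it:
# steps/login_common.py -- single definition shared by all feature files
from behave import given

@given('User is on login page')
def step_user_is_on_login_page(context):
    # illustrative body; replace with your own driver / page-object call
    context.driver.get(context.login_page_url)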
| how to run same set of scenarios in 2 backgrounds in behave python | I have 2 backgrounds for my login page 1) in which the user accepts the cookie 2) user declines cookies.
Feature:ORG_LOGIN|Login action with organization ID after accepting cookies
Background:
Given User is on login page
When user accepts the cookies
And User navigates to organization tab
And clicks on password eye
# @positive
# Scenario:Login with valid credentials
# When User enters valid Organization ID, username and password
# And hits Login button
# Then Dashboard page is displayed
After this the same set of scenarios(8 in number) need to be tested.
I am using behave 1.2.6 with python 3.11.0 and selenium 4.6.0
Since there can be only 1 background per feature file, I tried copying all scenarios into another feature file with different background. I get following error
behave.step_registry.AmbiguousStep: @given('User is on login page') has already been defined in
existing step @given('User is on login page') at steps/login_ac.py:8
Any thoughts how can I implement it.
| [
"I resolved it by removing duplicate implementations of the steps.\n"
] | [
0
] | [] | [] | [
"bdd",
"python",
"selenium_webdriver"
] | stackoverflow_0074637546_bdd_python_selenium_webdriver.txt |
Q:
How can I use pdb (Python debugger) in Visual Studio Code IDE's debugger?
I always used pdb for Python debugging before. Recently, I start using Visual Studio Code.
It looks in Visual Studio Code debugger, if I set a breakpoint(), Visual Studio Code will show variables' value at stopped position in the left window and I have to control it by a GUI bar.
So in "integratedTerminal" or "externalTerminal", I have no control by command line which is shown here and there isn't a pdb prompt popup. I kind of feel this surprises me since it hijacks pure Python stuff.
So is there a way to have both, the variables watch window and pdb prompt control? Especially in "integratedTerminal" or "externalTerminal".
Below are files under folder .vscode,
File settings.json
{
"python.pythonPath": "/Users/<USERNAME>/miniconda3/envs/<CONDA_ENV>/bin/python"
}
File launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Python Module",
"type": "python",
"python": "/Users/<USERNAME>/miniconda3/envs/<CONDA_ENV>/bin/python",
"request": "launch",
"program": "/Users/<USERNAME>/miniconda3/envs/<CONDA_ENV>/bin/<COMMAND>",
"console": "integratedTerminal",
"args": [
"hello-world"
],
"cwd": "${workspaceRoot}",
}
]
}
A:
According to the information you described, when I use "breakpoint()" in the code, I click F5 to debug the code in Visual Studio Code. When the code stops, we can use the shortcut key Ctrl + Shift + ` to open a new terminal and enter the pdb interactive window. At this point, we can not only see the debug variable value, but also use the 'pdb' command:
Update:
A:
This topic bugged me too, so I opened a feature request, where someone pointed out the Debug Console (in Visual Studio Code, next to the terminal), which lets you interact with Python at the point where you're debugging. In case you don't find it directly, here is a video showing how. Once you've found it, you can call arbitrary Python functions in it.
| How can I use pdb (Python debugger) in Visual Studio Code IDE's debugger? | I always used pdb for Python debugging before. Recently, I start using Visual Studio Code.
It looks in Visual Studio Code debugger, if I set a breakpoint(), Visual Studio Code will show variables' value at stopped position in the left window and I have to control it by a GUI bar.
So in "integratedTerminal" or "externalTerminal", I have no control by command line which is shown here and there isn't a pdb prompt popup. I kind of feel this surprises me since it hijacks pure Python stuff.
So is there a way to have both, the variables watch window and pdb prompt control? Especially in "integratedTerminal" or "externalTerminal".
Below are files under folder .vscode,
File settings.json
{
"python.pythonPath": "/Users/<USERNAME>/miniconda3/envs/<CONDA_ENV>/bin/python"
}
File launch.json
{
"version": "0.2.0",
"configurations": [
{
"name": "Python Module",
"type": "python",
"python": "/Users/<USERNAME>/miniconda3/envs/<CONDA_ENV>/bin/python",
"request": "launch",
"program": "/Users/<USERNAME>/miniconda3/envs/<CONDA_ENV>/bin/<COMMAND>",
"console": "integratedTerminal",
"args": [
"hello-world"
],
"cwd": "${workspaceRoot}",
}
]
}
| [
"According to the information you described, when I use \"breakpoint()\" in the code, I click F5 to debug the code in Visual Studio Code. When the code stops, we can use the shortcut key Ctrl + Shift + ` to open a new terminal and enter the pdb interactive window. At this point, we can not only see the debug variable value, but also use the 'pdb' command:\n\nUpdate:\n\n",
"This topic bugged me too, so I opened a feature request, where someone pointed the Debug-Console (in Visual Studio Code next to the terminal) out which lets you interact with Python at the point, where you're debugging. In case you don't find it directly here is a video how to do so. Once you found it, you can call arbitrary python functions in it.\n"
] | [
2,
0
] | [] | [] | [
"pdb",
"python",
"visual_studio_code",
"vscode_debugger"
] | stackoverflow_0065677725_pdb_python_visual_studio_code_vscode_debugger.txt |
Q:
Triggering an Azure Function that takes more than 2 minutes to run from logic apps
I am trying to trigger an Azure Function from Logic Apps. Running the Azure function takes more than 2 minutes as it is reading a file from a location, converts it to another format and then writes it to a different location. The problem is that the Logic Apps is creating a request, waits for 2 minutes to get a response, but this response doesn't come because the function is not finishing that fast. So the logic app assumes there is an error and recreates the request.
I read in the documentation that there is no way to increase the timeout period. I tried creating two threads in the azure function. One returns 202 http status code to the logic app, and the other one would remain as a daemon and keeps running. But the file doesn't seem to be copied.
Does anyone have any idea how could this be achieved?
A:
Continue the work on another logic app.
Just change your logic app to return Accepted/OK response and calls the function.
The function does the work and after it finishes (or fails) it calls another logic app where it continues the work (or deal with the error).
A:
I agree with @Mocas; also, you can set your logic app's response to asynchronous behavior as below:
So, this waits for the response to be 202 status and then this steps completes.
Reference:
Handle inbound or incoming HTTPS calls - Azure Logic Apps | Microsoft Learn
Running the Azure function takes more than 2 minutes as it is reading a file from a location, converts it to another format and then writes it to a different location.
Or you can Create event grid as When an event occurs as below:
So, when you complete the conversion, send the result to a storage account; this event fires when the blob gets created, and after that you can do the rest of your work.
| Triggering an Azure Function that takes more than 2 minutes to run from logic apps | I am trying to trigger an Azure Function from Logic Apps. Running the Azure function takes more than 2 minutes as it is reading a file from a location, converts it to another format and then writes it to a different location. The problem is that the Logic Apps is creating a request, waits for 2 minutes to get a response, but this response doesn't come because the function is not finishing that fast. So the logic app assumes there is an error and recreates the request.
I read in the documentation that there is no way to increase the timeout period. I tried creating two threads in the azure function. One returns 202 http status code to the logic app, and the other one would remain as a daemon and keeps running. But the file doesn't seem to be copied.
Does anyone have any idea how could this be achieved?
| [
"Continue the work on another logic app.\nJust change your logic app to return Accepted/OK response and calls the function.\nThe function does the work and after it finishes (or fails) it calls another logic app where it continues the work (or deal with the error).\n",
"I agree with @Mocas, and also you can make your logic apps response to asynchronous behavior as below:\n\n\nSo, this waits for the response to be 202 status and then this steps completes.\nReference:\n\nHandle inbound or incoming HTTPS calls - Azure Logic Apps | Microsoft Learn\n\n\nRunning the Azure function takes more than 2 minutes as it is reading a file from a location, converts it to another format and then writes it to a different location.\n\nOr you can Create event grid as When an event occurs as below:\n\nSo, when you complete the process of converting then send it to a storage account and this event fires when blob gets created and after this you can do your rest of work.\n"
] | [
0,
0
] | [] | [] | [
"azure_functions",
"azure_logic_apps",
"python"
] | stackoverflow_0074446349_azure_functions_azure_logic_apps_python.txt |
Q:
Django | joined path is located outside of the base path component {% static img.thumbnail.url %}, Error 400 with whitenoise
I've finished my first app in Django and it works perfectly, but I still have pre-deployment problems since I set DEBUG=False ...
This is just to display an image in a template... T_T
I was using this, but now it doesn't work when I use whitenoise to serve my images locally... And it returns a Bad Request (400) error...
Models.py
class GalleryItem(models.Model):
thumbnail = models.ImageField(blank=True,upload_to='gallery/thumb')
img_wide = models.ImageField(blank=True,upload_to='gallery')
template.py
{% load staticfiles %}
{% for img in img_to_display %}
<a href="{{ img.img_wide.url}}" class="swipebox" title="">
<img src="{% static img.thumbnail.url %}" alt="{{ img.alt}}">
</a>
{% endfor %}
urls.py
from django.conf.urls import url, include
from django.contrib import admin
from django.conf import settings
import os
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^gallery/', include('gallery.urls')),
url(r'^shop/', include('shop.urls')),
url(r'^events/', include('events.urls')),
url(r'^page/', include('paginator.urls')),
url(r'^news/', include('blog.urls')),
url(r'^ckeditor/', include('ckeditor_uploader.urls')),
url(r'^admin/', admin.site.urls),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
settings.py
import os
import dj_database_url
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
print("BASE_DIR = ",BASE_DIR)
MEDIA_ROOT = os.path.join(BASE_DIR, 'wt/static/media/')
MEDIA_URL = '/media/'
SECRET_KEY = 'SECRET_KEY'
DEBUG = False
INSTALLED_APPS = [
'ckeditor',
'ckeditor_uploader',
'team.apps.TeamConfig',
'gallery.apps.GalleryConfig',
'shop.apps.ShopConfig',
'events.apps.EventsConfig',
'blog.apps.BlogConfig',
'paginator.apps.paginatorConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
]
ROOT_URLCONF = 'wt.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.request",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.core.context_processors.tz",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = 'wt.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'wt_db',
'USER': 'postgres',
'PASSWORD': 'PASSWORD',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
LANGUAGE_CODE = 'fr-fr'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
ALLOWED_HOSTS = ['localhost', '127.0.0.1',]
STATIC_ROOT = os.path.join(BASE_DIR, 'wt/staticfiles')
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'wt/static'),
os.path.join(BASE_DIR, 'wt/staticfiles'),
]
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
CKEDITOR_UPLOAD_PATH = 'uploads'
CKEDITOR_IMAGE_BACKEND = 'pillow'
CKEDITOR_BROWSE_SHOW_DIRS = True
Here my error log :
The joined path (E:\media\gallery\thumb\lost-thumb.jpg) is located outside of the base path component (E:\dev\wt\wt\wt\staticfiles)
[15/May/2016 20:01:41] "GET /page/gallery HTTP/1.1" 400 26
Thanks a lot for helping ! :)
EDIT :
principal structure
project folder
A:
Bro, you can't load a static file when you use images on models; there are 2 different ways to work with images in Django.
Static files are for files that are static (image files like your company logo, banners, JavaScript files, CSS files).
Media files are for dynamic files like a user photo, a user gallery, product images.
Static Files - This way you use your static files saved in your static folder, which you point at with the static root in your settings.py, and then you use {% load staticfiles %} and {% static '' %}
Media Files - These are the files you save with your models (ImageField, FileField, etc.). You do not load them as static, because they are not static files (you can edit them from your models). That does not mean they are saved in your database; a copy of the file, with a hashed name, is generated in your media folder, which you point at with the media root in your settings.py. Media files you use like {{ object.field.url }}, so in your case gallery.thumbnail.url (btw, remember to fetch your gallery object in your views and send it to the template so you can use it)
So the other answers were right: you need to decide what you want to use. Keep in mind that your local path is different from where you deploy, so remember to use environment variables with the right path in your settings
Django Docs: https://docs.djangoproject.com/en/1.11/topics/files/
A:
I guess it was a security issue. Even if "whitenoise" is good to serve true static files in production, it can't serve media files.
I was making a structure error :
# Don't place your 'media' files IN your 'static' file like this :
MEDIA_ROOT = os.path.join(BASE_DIR, 'wt/static/media/')
MEDIA_ROOT should never be inside the "static" folder of your project (even if you can make it work in some ways, it's not good practice, I think).
'MEDIA' files (in production) have to be served outside of the Django project. I've read somewhere that we have to use a CDN. At first I chose Cloudflare (because it's free), but it wasn't working, because you need a subdomain/hostname to point your MEDIA_ROOT at, and Cloudflare doesn't give you that. Finally, I chose Amazon S3.
So, in conclusion, writing something like {% static img.thumbnail.url %} makes no sense, because everything uploaded via the admin/user doesn't belong in "static".
Use {{ img.thumbnail.url }} instead.
A:
paste the below code in settings.py file
STATIC_URL = '/static/'
# Add these new lines
STATICFILES_DIRS = (
os.path.join(BASE_DIR, 'static'),
)
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
MEDIA_ROOT = os.path.join(BASE_DIR, 'media/')
MEDIA_URL = "/media/"
and in urls.py
from django.conf import settings
from django.conf.urls.static import static
if settings.DEBUG:
urlpatterns += static(settings.STATIC_URL, document_root = settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root = settings.MEDIA_ROOT)
| Django | joined path is located outside of the base path component {% static img.thumbnail.url %}, Error 400 with whitenoise | I've finish my first app in Django and works perfectly, but still have pre-deployment problems since I set DEGUG=False ...
Here is just to display an image in a template... T_T
I was using this, but now it does'nt work when I use whitenoise to serve my image localy... And it return a Bad Request(400) error...
Models.py
class GalleryItem(models.Model):
thumbnail = models.ImageField(blank=True,upload_to='gallery/thumb')
img_wide = models.ImageField(blank=True,upload_to='gallery')
template.py
{% load staticfiles %}
{% for img in img_to_display %}
<a href="{{ img.img_wide.url}}" class="swipebox" title="">
<img src="{% static img.thumbnail.url %}" alt="{{ img.alt}}">
</a>
{% endfor %}
urls.py
from django.conf.urls import url, include
from django.contrib import admin
from django.conf import settings
import os
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
url(r'^gallery/', include('gallery.urls')),
url(r'^shop/', include('shop.urls')),
url(r'^events/', include('events.urls')),
url(r'^page/', include('paginator.urls')),
url(r'^news/', include('blog.urls')),
url(r'^ckeditor/', include('ckeditor_uploader.urls')),
url(r'^admin/', admin.site.urls),
] + static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
settings.py
import os
import dj_database_url
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
print("BASE_DIR = ",BASE_DIR)
MEDIA_ROOT = os.path.join(BASE_DIR, 'wt/static/media/')
MEDIA_URL = '/media/'
SECRET_KEY = 'SECRET_KEY'
DEBUG = False
INSTALLED_APPS = [
'ckeditor',
'ckeditor_uploader',
'team.apps.TeamConfig',
'gallery.apps.GalleryConfig',
'shop.apps.ShopConfig',
'events.apps.EventsConfig',
'blog.apps.BlogConfig',
'paginator.apps.paginatorConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
MIDDLEWARE_CLASSES = [
'django.middleware.security.SecurityMiddleware',
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.auth.middleware.SessionAuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'whitenoise.middleware.WhiteNoiseMiddleware',
]
ROOT_URLCONF = 'wt.urls'
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [os.path.join(BASE_DIR, 'templates')],
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
"django.contrib.auth.context_processors.auth",
"django.core.context_processors.request",
"django.core.context_processors.debug",
"django.core.context_processors.i18n",
"django.core.context_processors.media",
"django.core.context_processors.static",
"django.core.context_processors.tz",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = 'wt.wsgi.application'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'wt_db',
'USER': 'postgres',
'PASSWORD': 'PASSWORD',
'HOST': '127.0.0.1',
'PORT': '5432',
}
}
AUTH_PASSWORD_VALIDATORS = [
{
'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
},
{
'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
},
]
LANGUAGE_CODE = 'fr-fr'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
db_from_env = dj_database_url.config(conn_max_age=500)
DATABASES['default'].update(db_from_env)
ALLOWED_HOSTS = ['localhost', '127.0.0.1',]
STATIC_ROOT = os.path.join(BASE_DIR, 'wt/staticfiles')
STATIC_URL = '/static/'
STATICFILES_DIRS = [
os.path.join(BASE_DIR, 'wt/static'),
os.path.join(BASE_DIR, 'wt/staticfiles'),
]
STATICFILES_STORAGE = 'whitenoise.django.GzipManifestStaticFilesStorage'
CKEDITOR_UPLOAD_PATH = 'uploads'
CKEDITOR_IMAGE_BACKEND = 'pillow'
CKEDITOR_BROWSE_SHOW_DIRS = True
Here my error log :
The joined path (E:\media\gallery\thumb\lost-thumb.jpg) is located outside of the base path component (E:\dev\wt\wt\wt\staticfiles)
[15/May/2016 20:01:41] "GET /page/gallery HTTP/1.1" 400 26
Thanks a lot for helping ! :)
EDIT :
principal structure
project folder
| [
"Bro, you cant load staticfile when you use images on models, there is 2 different ways to work with images in django.\nStatics files is for files that are static(images files like logo of your company, banners, javascript files, css files)\nMedia Files is for dinamic files like user photo, user gallery, product images\n\nStatic Files - This way you use your staticfiles save at your static folder where you place it in static root at your settings.py and then you use {% load staticfiles %} and {% static '' %}\nMedia Files - This files is that one you save with your models, ImageField, FileField and etc... that one you do not load as static, cuz they are not a static file (you can edit it from your models), that do not means you will save it on your database, this will generate a copy of your file with hashed name on it at you media folder where you place it in media root at your settings.py and media files you use like that {{ ..url }} so in your case gallery.thumbnail.url (btw, remind to call your gallery object at your views and send it to template to allow your to use it)\n\nSo the other anwers was right, you need to decide what you want to use, keep in mind that your path localy is different where you deploy, remember to use environment variables with the right path to set up in your settings\nDjango Docs: https://docs.djangoproject.com/en/1.11/topics/files/\n",
"I guess it was a security issue. Even if \"whitenoise\" is good to serve true static files in production, it can't serve media files.\nI was making a structure error :\n# Don't place your 'media' files IN your 'static' file like this :\n\nMEDIA_ROOT = os.path.join(BASE_DIR, 'wt/static/media/')\n\nMEDIA_ROOT never have to be in the \"static\" file of your project (even if you can make it works in some ways, it's not a good practice I think).\n'MEDIA' files (in production), have to serve out of the Django project. I've read somewhere that we have to use a CDN. And firstly I choose CloudFlare (because it's free), but it wasn't working, cause you need a subdomain/hostname to point your MEDIA_ROOT, and Cloudflare doesn't give that. Finally, I choose Amazon S3.\nSo, in conclusion, write something like {% static img.thumbnail.url %} makes no sense. Because everything uploaded via admin/user haven't to be in \"static\".\nUse {{ img.thumbnail.url }} instead.\n",
"paste the below code in settings.py file\nSTATIC_URL = '/static/'\n\n# Add these new lines\nSTATICFILES_DIRS = (\n os.path.join(BASE_DIR, 'static'),\n)\n\nSTATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')\nMEDIA_ROOT = os.path.join(BASE_DIR, 'media/')\nMEDIA_URL = \"/media/\"\n\nand in urls.py\nfrom django.conf import settings\nfrom django.conf.urls.static import static\n\nif settings.DEBUG:\n urlpatterns += static(settings.STATIC_URL, document_root = settings.STATIC_ROOT)\n urlpatterns += static(settings.MEDIA_URL, document_root = settings.MEDIA_ROOT)\n\n"
] | [
6,
2,
0
] | [
"Try <img src=\"{{ img.thumbnail.image.url }}\" alt=\"{{ img.alt}}\">\n"
] | [
-1
] | [
"bad_request",
"django",
"django_staticfiles",
"python"
] | stackoverflow_0037241902_bad_request_django_django_staticfiles_python.txt |
Q:
join two rows itertively to create new table in spark with one row for each two rows in new table
I have a table that I want to walk through in ranges of two rows
id | col b | message
1 | abc | hello |
2 | abc | world |
3 | abc 1| morning|
4 | abc | night |
...|... | .... |
100| abc1 | Monday |
101| abc1 | Tuesday|
How do I create the below table that goes in ranges of two and shows the first id with the second row's col b and message, in Spark?
Final table will look like this.
id | full message
1 | 01:02,abc,world
3 | 03:04,abc,night
.. |................
100| 100:101,abc1,Tuesday
A:
With pandas, you can use:
group = np.arange(len(df))//2
(df.astype({'id': 'str'})
.groupby(group)
.agg(**{'id_': ('id', ':'.join),
'id': ('id', 'first'),
'first': ('col b', 'first'),
'last': ('message', 'last'),
})
.set_index('id')
.agg(','.join, axis=1)
.reset_index(name='full message')
)
Output:
id full message
0 1 1:2,abc,world
1 3 3:4,abc 1,night
2 100 100:101,abc1,Tuesday
A:
In pyspark you can use Window, example
window = Window.orderBy('id').rowsBetween(Window.currentRow, 1)
(df
.withColumn('ids', F.concat_ws(':', F.first('id').over(window), F.last('id').over(window)))
.withColumn('messages', F.concat_ws(',', F.first('col b').over(window), F.last('message').over(window)))
.withColumn('full_message', F.concat_ws(',', 'ids', 'messages'))
# select only the first entries, regardless of the id
.withColumn('seq_id', F.row_number().over(Window.orderBy('id')))
.filter(F.col('seq_id') % 2 != 0)
.select('id', 'full_message')
)
Output:
id full_message
1 1:2,abc,world
3 3:4,abc 1,night
100 100:101,abc1,Tuesday
| join two rows itertively to create new table in spark with one row for each two rows in new table | Have a table where I want to go in range of two rows
id | col b | message
1 | abc | hello |
2 | abc | world |
3 | abc 1| morning|
4 | abc | night |
...|... | .... |
100| abc1 | Monday |
101| abc1 | Tuesday|
How to I create below table that goes in a range of two and shows the first id with the second col b and message in spark.
Final table will look like this.
id | full message
1 | 01:02,abc,world
3 | 03:04,abc,night
.. |................
100| 100:101,abc1,Tuesday
| [
"With pandas, you can use:\ngroup = np.arange(len(df))//2\n\n(df.astype({'id': 'str'})\n .groupby(group)\n .agg(**{'id_': ('id', ':'.join),\n 'id': ('id', 'first'),\n 'first': ('col b', 'first'),\n 'last': ('message', 'last'),\n })\n .set_index('id')\n .agg(','.join, axis=1)\n .reset_index(name='full message')\n)\n\nOutput:\n id full message\n0 1 1:2,abc,world\n1 3 3:4,abc 1,night\n2 100 100:101,abc1,Tuesday\n\n",
"In pyspark you can use Window, example\nwindow = Window.orderBy('id').rowsBetween(Window.currentRow, 1)\n\n(df\n.withColumn('ids', F.concat_ws(':', F.first('id').over(window), F.last('id').over(window)))\n.withColumn('messages', F.concat_ws(',', F.first('col b').over(window), F.last('message').over(window)))\n.withColumn('full_message', F.concat_ws(',', 'ids', 'messages'))\n# select only the first entries, regardless of the id\n.withColumn('seq_id', F.row_number().over(Window.orderBy('id')))\n.filter(F.col('seq_id') % 2 != 0)\n.select('id', 'full_message')\n)\n\nOutput:\nid full_message\n1 1:2,abc,world\n3 3:4,abc 1,night\n100 100:101,abc1,Tuesday\n\n"
] | [
1,
1
] | [] | [] | [
"apache_spark",
"dataframe",
"pyspark",
"python"
] | stackoverflow_0074626112_apache_spark_dataframe_pyspark_python.txt |
Q:
Create .CSV file based on similar key in two data frame
I would like to generate a .csv file based on identical columns in the ground truth and prediction values. I have tried lots of ways; I am able to create a single .csv file but unable to create an individual .csv file.
main_predicted.csv: It contains more than 4500 records with image name and prediction result
imgs pred
imagenet_aeroplane_n02690373_10203_1.jpg aeroplane
imagenet_aeroplane_n02690373_1038_0.jpg aeroplane
imagenet_aeroplane_n02690373_1119_2.jpg aeroplane
imagenet_aeroplane_n02690373_1295_0.jpg aeroplane
The other ground truth folder (specific class) labels.csv contains 568 records. I would like to generate a .csv file that matches the image names in both .csv files and creates a new .csv file with imgs and pred
Code: I am using the below code to create one .csv file. Now, I want to match the 2nd table's imgs column with the first table's imgs column and create a new table with imgs and pred names.
image_dir = np.array(sum(image_dir, []))
preds = np.concatenate(preds)
csv = {'imgs': np.array(image_dir), 'pred': np.array(preds),
}
csv = pd.DataFrame(csv)
print(csv)
csv.to_csv('evaluation/cls_ref/res/iid.csv', index=False)
A:
read the file and filter with imgs from actual labels
img_preds = pd.read_csv('main_predicted.csv')
images = img_preds['imgs']
actual_labels = pd.read_csv('labels.csv')
output = img_preds[img_preds['imgs'].isin(actual_labels['imgs'].to_list())]
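To actually write the matched rows out as the new .csv the question asks for (the output path is illustrative):
output.to_csv('evaluation/cls_ref/res/matched_labels.csv', index=False)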
| Create .CSV file based on similar key in two data frame | I would like to generate .csv file based on identical columns in ground truth prediction values. I have tried lots of ways but I am able to create single .csv file but unable to create an individual .csv file.
main_predicted.csv: It contains more than 4500 records with image name and prediction result
imgs pred
imagenet_aeroplane_n02690373_10203_1.jpg aeroplane
imagenet_aeroplane_n02690373_1038_0.jpg aeroplane
imagenet_aeroplane_n02690373_1119_2.jpg aeroplane
imagenet_aeroplane_n02690373_1295_0.jpg aeroplane
The other ground truth folder (specific class) labels.csv contains 568 records. I would like to generate a .csv file to match images name in both .csv and create new .csv file along with imags and pred
Code I am using below code to create one .csv file. Now, I want to match 2nd table imgs column with first table imgs columns and create a new table with imgs and pred names.
image_dir = np.array(sum(image_dir, []))
preds = np.concatenate(preds)
csv = {'imgs': np.array(image_dir), 'pred': np.array(preds),
}
csv = pd.DataFrame(csv)
print(csv)
csv.to_csv('evaluation/cls_ref/res/iid.csv', index=False)
| [
"read the file and filter with imgs from actual labels\nimg_preds = pd.read_csv('main_predicted.csv')\n\nimages = img_preds['imgs']\n\nactual_labels = pd.read_csv('labels.csv')\n\noutput = img_preds[img_preds['imgs'].isin(actual_labels['imgs'].to_list())]\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"pandas",
"python",
"pytorch"
] | stackoverflow_0074636197_dataframe_pandas_python_pytorch.txt |
Q:
change the content of zip object by adding blank line at the end
I am opening a zip file using python like this...
with open('/home/ubuntu/myzip/myzip.zip', 'rb') as zf:
It is working as expected. The contents zf.read() are passed to an API. Is there any way to modify the file without changing its content? E.g., adding a newline at the end.
I need to do this so that the service considers it a different file. It is not accepting the same content for some reason.
A:
Opening ZIP Files for Reading and Writing
>>> import zipfile

>>> with zipfile.ZipFile("sample.zip", mode="r") as archive:
...     archive.printdir()
...
File Name                                           Modified             Size
hello.txt                                    2021-09-07 19:50:10           83
lorem.md                                     2021-09-07 19:50:10         2609
realpython.md                                2021-09-07 19:50:10          428
The first argument to the initializer of ZipFile can be a string representing the path to the ZIP file that you need to open. This argument can accept file-like and path-like objects too. In this example, you use a string-based path.
The second argument to ZipFile is a single-letter string representing the mode that you’ll use to open the file. As you learned at the beginning of this section, ZipFile can accept four possible modes, depending on your needs. The mode positional argument defaults to "r", so you can get rid of it if you want to open the archive for reading only.
Inside the with statement, you call .printdir() on archive. The archive variable now holds the instance of ZipFile itself. This function provides a quick way to display the content of the underlying ZIP file on your screen. The function’s output has a user-friendly tabular format with three informative columns:
File Name
Modified
Size
If you want to make sure that you’re targeting a valid ZIP file before you try to open it, then you can wrap ZipFile in a try … except statement and catch any BadZipFile exception:
>>> import zipfile

>>> try:
...     with zipfile.ZipFile("sample.zip") as archive:
...         archive.printdir()
... except zipfile.BadZipFile as error:
...     print(error)
...
File Name                                           Modified             Size
hello.txt                                    2021-09-07 19:50:10           83
lorem.md                                     2021-09-07 19:50:10         2609
realpython.md                                2021-09-07 19:50:10          428

>>> try:
...     with zipfile.ZipFile("bad_sample.zip") as archive:
...         archive.printdir()
... except zipfile.BadZipFile as error:
...     print(error)
...
File is not a zip file
The first example successfully opens sample.zip without raising a BadZipFile exception. That’s because sample.zip has a valid ZIP format. On the other hand, the second example doesn’t succeed in opening bad_sample.zip, because the file is not a valid ZIP file.
To check for a valid ZIP file, you can also use the is_zipfile() function:
>>> import zipfile

>>> if zipfile.is_zipfile("sample.zip"):
...     with zipfile.ZipFile("sample.zip", "r") as archive:
...         archive.printdir()
... else:
...     print("File is not a zip file")
...
File Name                                           Modified             Size
hello.txt                                    2021-09-07 19:50:10           83
lorem.md                                     2021-09-07 19:50:10         2609
realpython.md                                2021-09-07 19:50:10          428

>>> if zipfile.is_zipfile("bad_sample.zip"):
...     with zipfile.ZipFile("bad_sample.zip", "r") as archive:
...         archive.printdir()
... else:
...     print("File is not a zip file")
...
File is not a zip file
In these examples, you use a conditional statement with is_zipfile() as a condition. This function takes a filename argument that holds the path to a ZIP file in your file system. This argument can accept string, file-like, or path-like objects. The function returns True if filename is a valid ZIP file. Otherwise, it returns False.
Now say you want to add hello.txt to a hello.zip archive using ZipFile. To do that, you can use the write mode ("w"). This mode opens a ZIP file for writing. If the target ZIP file exists, then the "w" mode truncates it and writes any new content you pass in.
Note: If you’re using ZipFile with existing files, then you should be careful with the "w" mode. You can truncate your ZIP file and lose all the original content.
If the target ZIP file doesn’t exist, then ZipFile creates it for you when you close the archive:
>>> import zipfile

>>> with zipfile.ZipFile("hello.zip", mode="w") as archive:
...     archive.write("hello.txt")
...
After running this code, you’ll have a hello.zip file in your python-zipfile/ directory. If you list the file content using .printdir(), then you’ll notice that hello.txt will be there. In this example, you call .write() on the ZipFile object. This method allows you to write member files into your ZIP archives. Note that the argument to .write() should be an existing file.
Note: ZipFile is smart enough to create a new archive when you use the class in writing mode and the target archive doesn’t exist. However, the class doesn’t create new directories in the path to the target ZIP file if those directories don’t already exist.
That explains why the following code won’t work:
>>> import zipfile

>>> with zipfile.ZipFile("missing/hello.zip", mode="w") as archive:
...     archive.write("hello.txt")
...
Traceback (most recent call last):
    ...
FileNotFoundError: [Errno 2] No such file or directory: 'missing/hello.zip'
Because the missing/ directory in the path to the target hello.zip file doesn’t exist, you get a FileNotFoundError exception.
The append mode ("a") allows you to append new member files to an existing ZIP file. This mode doesn’t truncate the archive, so its original content is safe. If the target ZIP file doesn’t exist, then the "a" mode creates a new one for you and then appends any input files that you pass as an argument to .write().
To try out the "a" mode, go ahead and add the new_hello.txt file to your newly created hello.zip archive:
>>> import zipfile

>>> with zipfile.ZipFile("hello.zip", mode="a") as archive:
...     archive.write("new_hello.txt")
...
>>> with zipfile.ZipFile("hello.zip") as archive:
...     archive.printdir()
...
File Name                                           Modified             Size
hello.txt                                    2021-09-07 19:50:10           83
new_hello.txt                                2021-08-31 17:13:44
Here, you use the append mode to add new_hello.txt to the hello.zip file. Then you run .printdir() to confirm that the new file is present in the ZIP file.
ZipFile also supports an exclusive mode ("x"). This mode allows you to exclusively create new ZIP files and write new member files into them. You’ll use the exclusive mode when you want to make a new ZIP file without overwriting an existing one. If the target file already exists, then you get FileExistsError.
Finally, if you create a ZIP file using the "w", "a", or "x" mode and then close the archive without adding any member files, then ZipFile creates an empty archive with the appropriate ZIP format.
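For reference, a minimal sketch of the exclusive mode described above (the archive name here is just an example):
>>> import zipfile
>>> with zipfile.ZipFile("unique.zip", mode="x") as archive:
...     archive.write("hello.txt")
...
Running the same with statement a second time raises FileExistsError, because unique.zip now exists.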
| change the content of zip object by adding blank line at the end | I am opening a zip file using python like this...
with open('/home/ubuntu/myzip/myzip.zip', 'rb') as zf:
It is working as expected. The contents zf.read() are passed to an API. Is there any way to modify the file without changing its content? E.g. adding an enter mark at the end.
I need to do this so that the service should consider it as a different file. It is not accepting the same content for some reason.
| [
"Opening ZIP Files for Reading and Writing\nimport zipfile\n\n>>> with zipfile.ZipFile(\"sample.zip\", \nmode=\"r\") as archive:\n... archive.printdir()\n...\nFile Name \nModified Size\nhello.txt \n2021-09-07 19:50:10 83\nlorem.md \n2021-09-07 19:50:10 2609\nrealpython.md \n2021-09-07 19:50:10 428\n\nThe first argument to the initializer of ZipFile can be a string representing the path to the ZIP file that you need to open. This argument can accept file-like and path-like objects too. In this example, you use a string-based path.\nThe second argument to ZipFile is a single-letter string representing the mode that you’ll use to open the file. As you learned at the beginning of this section, ZipFile can accept four possible modes, depending on your needs. The mode positional argument defaults to \"r\", so you can get rid of it if you want to open the archive for reading only.\nInside the with statement, you call .printdir() on archive. The archive variable now holds the instance of ZipFile itself. This function provides a quick way to display the content of the underlying ZIP file on your screen. The function’s output has a user-friendly tabular format with three informative columns:\nFile Name\nModified\nSize\nIf you want to make sure that you’re targeting a valid ZIP file before you try to open it, then you can wrap ZipFile in a try … except statement and catch any BadZipFile exception:\n>>> import zipfile\n\n>>> try:\n... with zipfile.ZipFile(\"sample.zip\") as \narchive:\n... archive.printdir()\n... except zipfile.BadZipFile as error:\n... print(error)\n...\nFile Name \nModified Size\nhello.txt \n2021-09-07 19:50:10 83\nlorem.md \n2021-09-07 19:50:10 2609\nrealpython.md \n2021-09-07 19:50:10 428\n\n>>> try:\n... with \nzipfile.ZipFile(\"bad_sample.zip\") as archive:\n... archive.printdir()\n... except zipfile.BadZipFile as error:\n... print(error)\n...\nFile is not a zip file\n\nThe first example successfully opens sample.zip without raising a BadZipFile exception. That’s because sample.zip has a valid ZIP format. On the other hand, the second example doesn’t succeed in opening bad_sample.zip, because the file is not a valid ZIP file.\nTo check for a valid ZIP file, you can also use the is_zipfile() function:\n>>> import zipfile\n\n>>> if zipfile.is_zipfile(\"sample.zip\"):\n... with zipfile.ZipFile(\"sample.zip\", \n\"r\") as archive:\n... archive.printdir()\n... else:\n... print(\"File is not a zip file\")\n...\nFile Name \nModified Size\nhello.txt \n2021-09-07 19:50:10 83\nlorem.md \n2021-09-07 19:50:10 2609\nrealpython.md \n2021-09-07 19:50:10 428\n\n>>> if zipfile.is_zipfile(\"bad_sample.zip\"):\n... with \nzipfile.ZipFile(\"bad_sample.zip\", \"r\") as \narchive:\n... archive.printdir()\n... else:\n... print(\"File is not a zip file\")\n...\nFile is not a zip file\n\nIn these examples, you use a conditional statement with is_zipfile() as a condition. This function takes a filename argument that holds the path to a ZIP file in your file system. This argument can accept string, file-like, or path-like objects. The function returns True if filename is a valid ZIP file. Otherwise, it returns False.\nNow say you want to add hello.txt to a hello.zip archive using ZipFile. To do that, you can use the write mode (\"w\"). This mode opens a ZIP file for writing. If the target ZIP file exists, then the \"w\" mode truncates it and writes any new content you pass in.\nNote: If you’re using ZipFile with existing files, then you should be careful with the \"w\" mode. 
You can truncate your ZIP file and lose all the original content.\nIf the target ZIP file doesn’t exist, then ZipFile creates it for you when you close the archive:\n>>> import zipfile\n\n>>> with zipfile.ZipFile(\"hello.zip\", \nmode=\"w\") as archive:\n... archive.write(\"hello.txt\")\n...\n\nAfter running this code, you’ll have a hello.zip file in your python-zipfile/ directory. If you list the file content using .printdir(), then you’ll notice that hello.txt will be there. In this example, you call .write() on the ZipFile object. This method allows you to write member files into your ZIP archives. Note that the argument to .write() should be an existing file.\nNote: ZipFile is smart enough to create a new archive when you use the class in writing mode and the target archive doesn’t exist. However, the class doesn’t create new directories in the path to the target ZIP file if those directories don’t already exist.\nThat explains why the following code won’t work:\n>>> import zipfile\n\n>>> with zipfile.ZipFile(\"missing/hello.zip\", \nmode=\"w\") as archive:\n... archive.write(\"hello.txt\")\n...\nTraceback (most recent call last):\n...\nFileNotFoundError: [Errno 2] No such file or \ndirectory: 'missing/hello.zip'\n\nBecause the missing/ directory in the path to the target hello.zip file doesn’t exist, you get a FileNotFoundError exception.\nThe append mode (\"a\") allows you to append new member files to an existing ZIP file. This mode doesn’t truncate the archive, so its original content is safe. If the target ZIP file doesn’t exist, then the \"a\" mode creates a new one for you and then appends any input files that you pass as an argument to .write().\nTo try out the \"a\" mode, go ahead and add the new_hello.txt file to your newly created hello.zip archive:\n>>> import zipfile\n\n>>> with zipfile.ZipFile(\"hello.zip\", \nmode=\"a\") as archive:\n... archive.write(\"new_hello.txt\")\n...\n\n>>> with zipfile.ZipFile(\"hello.zip\") as \narchive:\n... archive.printdir()\n...\nFile Name \nModified Size\nhello.txt \n2021-09-07 19:50:10 83\nnew_hello.txt \n2021-08-31 17:13:44 \n\nHere, you use the append mode to add new_hello.txt to the hello.zip file. Then you run .printdir() to confirm that the new file is present in the ZIP file.\nZipFile also supports an exclusive mode (\"x\"). This mode allows you to exclusively create new ZIP files and write new member files into them. You’ll use the exclusive mode when you want to make a new ZIP file without overwriting an existing one. If the target file already exists, then you get FileExistsError.\nFinally, if you create a ZIP file using the \"w\", \"a\", or \"x\" mode and then close the archive without adding any member files, then ZipFile creates an empty archive with the appropriate ZIP format.\n"
] | [
2
] | [] | [] | [
"python"
] | stackoverflow_0074637702_python.txt |
Q:
Global variable with Django and Celery
I have a code like this,
wl_data = {}
def set_wl_data():
global wl_data
wl_data = get_watchlist_data()
def get_wl_data(scripcodes):
# Filtering Data
result = {scripcode:detail for scripcode, detail in wl_data.iteritems() if int(scripcode) in scripcodes or scripcode in scripcodes}
return result
I am running this as a django project,
I am calling the setter method from celery, to update the global variable wl_data.
tastypie api will call the getter method get_wl_data to fetch global variable wl_data.
The problem is celery is updating wl_data properly.
But when we hit the tastypie api url in browser, the getter method serves the old data.
There are so many related questions in stack overflow, but the difference here is setter method is called by celery task. Please help me to solve this issue.
A:
If you're doing anything with global variables in a Django project, you're doing it wrong. In this case, Celery and Django are running in completely separate processes, so cannot share data. You need to get Celery to store that data somewhere - in the db, or a file - so that Django can pick it up and serve it.
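As a hedged sketch of that idea, one common approach is Django's cache framework backed by a store both processes can reach (e.g. Redis, memcached, or the database cache); the names below mirror the question and are otherwise illustrative:
# Celery task side
from django.core.cache import cache

def set_wl_data():
    cache.set("wl_data", get_watchlist_data(), timeout=None)

# tastypie / view side
from django.core.cache import cache

def get_wl_data(scripcodes):
    wl_data = cache.get("wl_data", {})
    return {scripcode: detail for scripcode, detail in wl_data.items()
            if int(scripcode) in scripcodes or scripcode in scripcodes}

Note this assumes CACHES is configured with a shared backend; the default local-memory cache is per-process and would not help here.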
A:
The code below works with the global variable num for me in Django 3.1.7 and Celery 5.1.2:
# "store/tasks.py"
from celery import shared_task
num = 0
@shared_task
def test1():
global num
num += 1
return num
@shared_task
def test2():
global num
num += 1
return num
@shared_task
def test3():
global num
num += 1
return num
# "store/views.py"
from django.http import HttpResponse
from .tasks import test1, test2, test3
def test(request):
test1.delay()
test2.delay()
test3.delay()
return HttpResponse("Test")
Output:
Task store.tasks.test1[c222183b-73be-4fba-9813-be8141c6669c] received
Task store.tasks.test1[c222183b-73be-4fba-9813-be8141c6669c] succeeded in 0.0s: 1
Task store.tasks.test2[aa4bc9e5-95c6-4f8b-8122-3df273822ab5] received
Task store.tasks.test2[aa4bc9e5-95c6-4f8b-8122-3df273822ab5] succeeded in 0.0s: 2
Task store.tasks.test3[472727f3-368f-48ad-9d49-72f14962e8c5] received
Task store.tasks.test3[472727f3-368f-48ad-9d49-72f14962e8c5] succeeded in 0.0s: 3
| Global variable with Django and Celery | I have a code like this,
wl_data = {}
def set_wl_data():
global wl_data
wl_data = get_watchlist_data()
def get_wl_data(scripcodes):
# Filtering Data
result = {scripcode:detail for scripcode, detail in wl_data.iteritems() if int(scripcode) in scripcodes or scripcode in scripcodes}
return result
I am running this as a django project,
I am calling the setter method from celery, to update the global variable wl_data.
tastypie api will call the getter method get_wl_data to fetch global variable wl_data.
The problem is celery is updating wl_data properly.
But when we hit the tastypie api url in browser, the getter method
serves the old data.
There are so many related questions in stack overflow, but the difference here is setter method is called by celery task. Please help me to solve this issue.
| [
"If you're doing anything with global variables in a Django project, you're doing it wrong. In this case, Celery and Django are running in completely separate processes, so cannot share data. You need to get Celery to store that data somewhere - in the db, or a file - so that Django can pick it up and serve it.\n",
"The code below works with the global variable num for me in Django 3.1.7 and Celery 5.1.2:\n# \"store/tasks.py\"\n\nfrom celery import shared_task\n\nnum = 0\n\n@shared_task\ndef test1():\n global num\n num += 1\n return num\n\n@shared_task\ndef test2():\n global num\n num += 1\n return num\n\n@shared_task\ndef test3():\n global num\n num += 1\n return num\n\n# \"store/views.py\"\n\nfrom django.http import HttpResponse\nfrom .tasks import test1, test2, test3\n\ndef test(request):\n test1.delay()\n test2.delay()\n test3.delay()\n return HttpResponse(\"Test\")\n\nOutput:\nTask store.tasks.test1[c222183b-73be-4fba-9813-be8141c6669c] received\nTask store.tasks.test1[c222183b-73be-4fba-9813-be8141c6669c] succeeded in 0.0s: 1\nTask store.tasks.test2[aa4bc9e5-95c6-4f8b-8122-3df273822ab5] received\nTask store.tasks.test2[aa4bc9e5-95c6-4f8b-8122-3df273822ab5] succeeded in 0.0s: 2\nTask store.tasks.test3[472727f3-368f-48ad-9d49-72f14962e8c5] received\nTask store.tasks.test3[472727f3-368f-48ad-9d49-72f14962e8c5] succeeded in 0.0s: 3\n\n"
] | [
3,
0
] | [] | [] | [
"celery",
"django",
"django_celery",
"global_variables",
"python"
] | stackoverflow_0018465918_celery_django_django_celery_global_variables_python.txt |
Q:
AWS - NoCredentialsError: Unable to locate credentials
I'm new to AWS and also a beginner in Python.
I have a situation here; I'm facing this kind of issue:
"The NoCredentialsError is an error encountered when using the Boto3 library to interface with Amazon Web Services (AWS).Specifically, this error is encountered when your AWS credentials are missing, invalid, or cannot be located by your Python script. "
File "/usr/lib/python2.7/site-packages/botocore/auth.py", line 373, in add_auth
raise NoCredentialsError()
NoCredentialsError: Unable to locate credentials
And I did some troubleshooting by checking the path given, and the credentials are in there.
[root@xxxxxx aws]# ls
config  credentials
Which part am I missing, and where else do I need to check?
Please advise.
A:
The correct path is ~/.aws, not ~/aws. Please double check your setup with aws docs:
in a folder named .aws in your home directory.
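For reference, the shared credentials file boto3 looks for is ~/.aws/credentials, which looks roughly like this (values are placeholders):
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
Alternatively, as a sketch, credentials can be passed explicitly when creating a boto3 session (hard-coding real keys is not recommended):
import boto3

session = boto3.Session(
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
    region_name="us-east-1",
)
s3 = session.client("s3")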
| AWS - NoCredentialsError: Unable to locate credentials | I'm new in AWS also a beginner in python.
I have situation in here,I'm facing this kind of issue:
"The NoCredentialsError is an error encountered when using the Boto3 library to interface with Amazon Web Services (AWS).Specifically, this error is encountered when your AWS credentials are missing, invalid, or cannot be located by your Python script. "
File "/usr/lib/python2.7/site-packages/botocore/auth.py", line 373, in add_auth
raise NoCredentialsError()
NoCredentialsError: Unable to locate credentials
And i did some troubleshooting by checking the path given and the credentials is in there.
**[root@xxxxxx aws]# ls
config credentials
Which part am i missing and where else need to check?
Please advice.
| [
"The correct path is ~/.aws, not ~/aws. Please double check your setup with aws docs:\n\nin a folder named .aws in your home directory.\n\n"
] | [
2
] | [] | [] | [
"amazon_web_services",
"python"
] | stackoverflow_0074637735_amazon_web_services_python.txt |
Q:
Stuck at CS50's Finance: index() does not show stocks info
Programming noob here having some trouble with Harvard's CS50 Finance problem.
I am really stuck! No matter what I do, I just can't get to see the stocks information at the index.html page ("stock symbol", "shares", "current price", "total shares value"). My table rows are there, but they are blank:
index.html page with blank rows
It's been days now, but I don't know what I'm doing wrong here, since the user_cash information appears correctly. How can I solve this?
Here's my application.py code for index():
@app.route("/")
@login_required
def index():
# getting user id
owner_id = session["user_id"]
# getting user stocks
rows = db.execute("SELECT stock_name, SUM(shares) FROM owners WHERE owner_id = ? GROUP BY stock_name", owner_id)
# getting user cash
user_cash = round(db.execute("SELECT cash FROM users WHERE id = ?", owner_id)[0]["cash"], 2)
# saving stocks in a list
stocks = []
total_cash = 0
for row in rows:
stock = lookup(row["stock_name"])
shares_value = stock["price"] * row["SUM(shares)"]
total_cash += shares_value
stocks.append({"symbol":stock["symbol"], "shares":row["SUM(shares)"], "price":usd(stock["price"]), "shares_value":usd(shares_value)})
total_cash = user_cash + total_cash
return render_template("index.html", stocks=stocks, total_cash=usd(round(total_cash, 2)), user_cash=usd(user_cash))
And here's my index.html code:
{% extends "layout.html" %}
{% block title %}
Index
{% endblock %}
{% block main %}
<p><h2>Index</h2></p>
<table border="3" align="center">
<thead>
<tr>
<td> Stock </td>
<td> Shares </td>
<td> Current price </td>
<td> Total shares value </td>
</tr>
</thead>
<tbody>
{% for stock in stocks %}
<tr>
<td>{{ stocks["symbol"] }}</td>-
<td>{{ stocks["shares"] }}</td>
<td>{{ stocks["price"] }}</td>
<td>{{ stocks["shares_value"] }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<p><h2>Your funds</h2></p>
<table border="3" align="center">
<thead>
<tr>
<td> Cash </td>
<td> Total patrimony </td>
</tr>
</thead>
<tbody>
<tr>
<td>{{ user_cash }}</td>
<td>{{ total_cash }}</td>
</tr>
</tbody>
</table>
{% endblock %}
I've tried to show the correct information by asking the right questions to my database.
Also, I've had some trouble looping over the stock's info, but now I think that a good approach is to store it in a list and then send it via "render_template()" to the HTML file (yet, no luck implementing this logic).
Thank you!
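For reference, a minimal Jinja sketch of looping over a list of dicts passed through render_template (note that the loop variable, not the whole list, is what gets indexed inside the loop):
{% for stock in stocks %}
    <tr>
        <td>{{ stock["symbol"] }}</td>
        <td>{{ stock["shares"] }}</td>
        <td>{{ stock["price"] }}</td>
        <td>{{ stock["shares_value"] }}</td>
    </tr>
{% endfor %}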
| Stuck at CS50's Finance: index() does not show stocks info | Programming noob here having some trouble with Harvard's CS50 Finance problem.
I am really stuck! No matter what I do, I just can't get to see the stocks information at the index.html page ("stock symbol", "shares", "current price", "total shares value"). My table rows are there, but they are blank:
index.html page with blank rows
It's been days now, but I don't know what I'm doing wrong here, since the user_cash information appears correctly. How can I solve this?
Here's my application.py code for index():
@app.route("/")
@login_required
def index():
# getting user id
owner_id = session["user_id"]
# getting user stocks
rows = db.execute("SELECT stock_name, SUM(shares) FROM owners WHERE owner_id = ? GROUP BY stock_name", owner_id)
# getting user cash
user_cash = round(db.execute("SELECT cash FROM users WHERE id = ?", owner_id)[0]["cash"], 2)
# saving stocks in a list
stocks = []
total_cash = 0
for row in rows:
stock = lookup(row["stock_name"])
shares_value = stock["price"] * row["SUM(shares)"]
total_cash += shares_value
stocks.append({"symbol":stock["symbol"], "shares":row["SUM(shares)"], "price":usd(stock["price"]), "shares_value":usd(shares_value)})
total_cash = user_cash + total_cash
return render_template("index.html", stocks=stocks, total_cash=usd(round(total_cash, 2)), user_cash=usd(user_cash))
And here's my index.html code:
{% extends "layout.html" %}
{% block title %}
Index
{% endblock %}
{% block main %}
<p><h2>Index</h2></p>
<table border="3" align="center">
<thead>
<tr>
<td> Stock </td>
<td> Shares </td>
<td> Current price </td>
<td> Total shares value </td>
</tr>
</thead>
<tbody>
{% for stock in stocks %}
<tr>
<td>{{ stocks["symbol"] }}</td>-
<td>{{ stocks["shares"] }}</td>
<td>{{ stocks["price"] }}</td>
<td>{{ stocks["shares_value"] }}</td>
</tr>
{% endfor %}
</tbody>
</table>
<p><h2>Your funds</h2></p>
<table border="3" align="center">
<thead>
<tr>
<td> Cash </td>
<td> Total patrimony </td>
</tr>
</thead>
<tbody>
<tr>
<td>{{ user_cash }}</td>
<td>{{ total_cash }}</td>
</tr>
</tbody>
</table>
{% endblock %}
I've tried to show the correct information by asking the right questions to my database.
Also, I've had some trouble looping over the stock's info, but now I think that a good approach is to store it in a list and then send it via "render_template()" to the HTML file (yet, no luck implementing this logic).
Thank you!
| [] | [] | [
"instead of this:\n {% for stock in stocks %}\n <tr>\n <td>{{ stocks[\"symbol\"] }}</td>-\n <td>{{ stocks[\"shares\"] }}</td>\n <td>{{ stocks[\"price\"] }}</td>\n <td>{{ stocks[\"shares_value\"] }}</td>\n </tr>\n {% endfor %}\n\ntry this:\n {% for stock in stocks %}\n <tr>\n <td>{{ stock[\"symbol\"] }}</td>-\n <td>{{ stock[\"shares\"] }}</td>\n <td>{{ stock[\"price\"] }}</td>\n <td>{{ stock[\"shares_value\"] }}</td>\n </tr>\n {% endfor %}\n\nAlso try to check whether the values are correct in flask code.\nHelpful link:-\nhttps://realpython.com/primer-on-jinja-templating/\n"
] | [
-1
] | [
"flask",
"jinja2",
"python"
] | stackoverflow_0074634546_flask_jinja2_python.txt |
Q:
Having trouble finding the text of google search result
I've been trying to use BeautifulSoup to find the text of each search result on google. Using the developer tools, I can see that this is represented by a <h3> with the class " LC20lb DKV0Md ".
However, I can't seem to find this using BeautifulSoup. What am I doing wrong?
import requests
from bs4 import BeautifulSoup
res = requests.get('http://google.com/search?q=world+news')
soup = BeautifulSoup(res.content, 'html.parser')
soup.find_all('h3', class_= 'LC201b DKV0Md')
A:
You do not have to search by class; you can simply select all <h3> that include a <div> and then call get_text() on each:
import requests
from bs4 import BeautifulSoup
res = requests.get('http://google.com/search?q=world+news')
soup = BeautifulSoup(res.content, 'html.parser')
[x.get_text() for x in soup.select('h3 div')]
Output:
['World - BBC News',
'BBC News World',
'Latest news from around the world | The Guardian',
'World - breaking news, videos and headlines - CNN',
'CNN International - Breaking News, US News, World News and Video',
'Welt-Nachrichten',
'BBC World News (Fernsehsender)',
'World News - Breaking international news and headlines | Sky News',
'International News | Latest World News, Videos & Photos -ABC',
'World News Headlines | Reuters',
'World News - Hindustan Times',
'World News | International Headlines - Breaking World - Global News']
A:
If you're having difficulties figuring out which element to use, have a look at the SelectorGadget Chrome extension; it's easier and faster than searching through dev tools. However, it does not always work perfectly if the website is rendered via JavaScript.
find the text of each search result on google.
Assuming that you meant "from all pages", you can scrape Google Search Results information from all pages using while True loop i,e pagination.
The while loop is an endless loop that will dynamically be paginating to the next page if .d6cvqb a[id=pnnext] selector is present which is responsible for the "next page" button:
if soup.select_one('.d6cvqb a[id=pnnext]'):
params["start"] += 10 # increment a URL parameter that controls page number
else:
break
Keep in mind that the request may be blocked if you're using requests, because the default user-agent in the requests library is python-requests.
To avoid it, one of the steps could be to rotate user-agent, for example, to switch between PC, mobile, and tablet, as well as between browsers e.g. Chrome, Firefox, Safari, Edge and so on.
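A minimal sketch of that rotation with requests (the user-agent strings below are just examples):
import random
import requests

user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.4 Safari/605.1.15",
    "Mozilla/5.0 (Linux; Android 12; Pixel 6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Mobile Safari/537.36",
]

# pick a fresh user-agent for each request
headers = {"User-Agent": random.choice(user_agents)}
html = requests.get("https://www.google.com/search", params={"q": "world news"}, headers=headers, timeout=30)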
Check code in online IDE.
from bs4 import BeautifulSoup
import requests, json, lxml
# https://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls
params = {
"q": "world+news", # query
"hl": "en", # language
"gl": "us", # country of the search, US -> USA
"start": 0, # number page by default up to 0
#"num": 100 # parameter defines the maximum number of results to return.
}
# https://docs.python-requests.org/en/master/user/quickstart/#custom-headers
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36"
}
page_num = 0
website_data = []
while True:
page_num += 1
print(f"page: {page_num}")
html = requests.get("https://www.google.com/search", params=params, headers=headers, timeout=30)
soup = BeautifulSoup(html.text, 'lxml')
for result in soup.select(".tF2Cxc"):
title = result.select_one(".DKV0Md").text
website_link = result.select_one(".yuRUbf a")["href"]
website_data.append({
"title": title,
"website_link": website_link
})
if soup.select_one('.d6cvqb a[id=pnnext]'):
params["start"] += 10
else:
break
print(json.dumps(website_data, indent=2, ensure_ascii=False))
Example output:
[
{
"title": "World news – breaking news, videos and headlines - CNN",
"website_link": "https://www.cnn.com/world"
},
{
"title": "World - BBC News",
"website_link": "https://www.bbc.com/news/world"
},
{
"title": "Latest news from around the world | The Guardian",
"website_link": "https://www.theguardian.com/world"
},
{
"title": "World News Headlines | Reuters",
"website_link": "https://www.reuters.com/news/archive/worldNews"
},
{
"title": "World - NBC News",
"website_link": "https://www.nbcnews.com/world"
},
# ...
]
Also you can use Google Search Engine Results API from SerpApi. It's a paid API with the free plan.
The difference is that it will bypass blocks (including CAPTCHA) from Google, no need to create the parser and maintain it.
Code example:
from serpapi import GoogleSearch
from urllib.parse import urlsplit, parse_qsl
import json, os
params = {
"api_key": os.getenv("API_KEY"), # serpapi key
"engine": "google", # serpapi parser engine
"q": "world+news", # search query
"num": "100" # number of results per page (100 per page in this case)
# other search parameters: https://serpapi.com/search-api#api-parameters
}
search = GoogleSearch(params) # where data extraction happens
organic_results_data = []
page_num = 0
while True:
results = search.get_dict() # JSON -> Python dictionary
page_num += 1
for result in results["organic_results"]:
organic_results_data.append({
"title": result.get("title")
})
if "next_link" in results.get("serpapi_pagination", []):
search.params_dict.update(dict(parse_qsl(urlsplit(results.get("serpapi_pagination").get("next_link")).query)))
else:
break
print(json.dumps(organic_results_data, indent=2, ensure_ascii=False))
Output:
[
{
"title": "World news – breaking news, videos and headlines - CNN"
},
{
"title": "World - BBC News"
},
{
"title": "Latest news from around the world | The Guardian"
},
{
"title": "World News Headlines | Reuters"
},
{
"title": "World - NBC News"
},
# ...
]
| Having trouble finding the text of google search result | I've been trying to use BeautifulSoup to find the text of each search result on google. Using the developer tools, I can see that this is represented by a <h3> with the class " LC20lb DKV0Md ".
However I cant seem find this using BeautifulSoup. What am I doing wrong?
import requests
from bs4 import BeautifulSoup
res = requests.get('http://google.com/search?q=world+news')
soup = BeautifulSoup(res.content, 'html.parser')
soup.find_all('h3', class_= 'LC201b DKV0Md')
| [
"You do not have to search by class, you simply can select all <h3> that includes a <div> and than get_text() of each:\nimport requests\nfrom bs4 import BeautifulSoup\n\nres = requests.get('http://google.com/search?q=world+news')\nsoup = BeautifulSoup(res.content, 'html.parser')\n\n[x.get_text() for x in soup.select('h3 div')]\n\nOutput:\n['World - BBC News',\n 'BBC News World',\n 'Latest news from around the world | The Guardian',\n 'World - breaking news, videos and headlines - CNN',\n 'CNN International - Breaking News, US News, World News and Video',\n 'Welt-Nachrichten',\n 'BBC World News (Fernsehsender)',\n 'World News - Breaking international news and headlines | Sky News',\n 'International News | Latest World News, Videos & Photos -ABC',\n 'World News Headlines | Reuters',\n 'World News - Hindustan Times',\n 'World News | International Headlines - Breaking World - Global News']\n\n",
"If you having difficulties figuring out which element to use, have a look at SelectorGadget Chrome extension, it's easier and faster than searching through dev tools. However, it does not always work perfectly if the website is rendered via JavaScript.\n\nfind the text of each search result on google.\n\nAssuming that you meant \"from all pages\", you can scrape Google Search Results information from all pages using while True loop i,e pagination.\nThe while loop is an endless loop that will dynamically be paginating to the next page if .d6cvqb a[id=pnnext] selector is present which is responsible for the \"next page\" button:\nif soup.select_one('.d6cvqb a[id=pnnext]'):\n params[\"start\"] += 10 # increment a URL parameter that controls page number\nelse:\n break\n\nKeep in mind that the request may be blocked if you're using requests as the default user-agent in requests library is a python-requests.\nTo avoid it, one of the steps could be to rotate user-agent, for example, to switch between PC, mobile, and tablet, as well as between browsers e.g. Chrome, Firefox, Safari, Edge and so on.\nCheck code in online IDE.\nfrom bs4 import BeautifulSoup\nimport requests, json, lxml\n\n# https://docs.python-requests.org/en/master/user/quickstart/#passing-parameters-in-urls\nparams = {\n \"q\": \"world+news\", # query\n \"hl\": \"en\", # language\n \"gl\": \"us\", # country of the search, US -> USA\n \"start\": 0, # number page by default up to 0\n #\"num\": 100 # parameter defines the maximum number of results to return.\n}\n\n# https://docs.python-requests.org/en/master/user/quickstart/#custom-headers\nheaders = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36\"\n}\n\npage_num = 0\n\nwebsite_data = []\n\nwhile True:\n page_num += 1\n print(f\"page: {page_num}\")\n \n html = requests.get(\"https://www.google.com/search\", params=params, headers=headers, timeout=30)\n soup = BeautifulSoup(html.text, 'lxml')\n \n for result in soup.select(\".tF2Cxc\"):\n title = result.select_one(\".DKV0Md\").text\n website_link = result.select_one(\".yuRUbf a\")[\"href\"]\n \n website_data.append({\n \"title\": title,\n \"website_link\": website_link \n })\n \n if soup.select_one('.d6cvqb a[id=pnnext]'):\n params[\"start\"] += 10\n else:\n break\n\nprint(json.dumps(website_data, indent=2, ensure_ascii=False))\n\nExample output:\n[\n {\n \"title\": \"World news – breaking news, videos and headlines - CNN\",\n \"website_link\": \"https://www.cnn.com/world\"\n },\n {\n \"title\": \"World - BBC News\",\n \"website_link\": \"https://www.bbc.com/news/world\"\n },\n {\n \"title\": \"Latest news from around the world | The Guardian\",\n \"website_link\": \"https://www.theguardian.com/world\"\n },\n {\n \"title\": \"World News Headlines | Reuters\",\n \"website_link\": \"https://www.reuters.com/news/archive/worldNews\"\n },\n {\n \"title\": \"World - NBC News\",\n \"website_link\": \"https://www.nbcnews.com/world\"\n },\n # ...\n]\n\n\nAlso you can use Google Search Engine Results API from SerpApi. 
It's a paid API with the free plan.\nThe difference is that it will bypass blocks (including CAPTCHA) from Google, no need to create the parser and maintain it.\nCode example:\nfrom serpapi import GoogleSearch\nfrom urllib.parse import urlsplit, parse_qsl\nimport json, os\n\nparams = {\n \"api_key\": os.getenv(\"API_KEY\"), # serpapi key\n \"engine\": \"google\", # serpapi parser engine\n \"q\": \"world+news\", # search query\n \"num\": \"100\" # number of results per page (100 per page in this case)\n # other search parameters: https://serpapi.com/search-api#api-parameters\n}\n\nsearch = GoogleSearch(params) # where data extraction happens\n\norganic_results_data = []\npage_num = 0\n\nwhile True:\n results = search.get_dict() # JSON -> Python dictionary\n \n page_num += 1\n \n for result in results[\"organic_results\"]:\n organic_results_data.append({\n \"title\": result.get(\"title\") \n })\n \n if \"next_link\" in results.get(\"serpapi_pagination\", []):\n search.params_dict.update(dict(parse_qsl(urlsplit(results.get(\"serpapi_pagination\").get(\"next_link\")).query)))\n else:\n break\n \nprint(json.dumps(organic_results_data, indent=2, ensure_ascii=False))\n\nOutput:\n[\n {\n \"title\": \"World news – breaking news, videos and headlines - CNN\"\n },\n {\n \"title\": \"World - BBC News\"\n },\n {\n \"title\": \"Latest news from around the world | The Guardian\"\n },\n {\n \"title\": \"World News Headlines | Reuters\"\n },\n {\n \"title\": \"World - NBC News\"\n },\n # ...\n]\n\n"
] | [
1,
1
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0069062701_beautifulsoup_python_web_scraping.txt |
Q:
python for loop parallel processing - appending data to list
I have a step in my code (below) which takes around 45 to 50 minutes to run (the other steps barely take a few seconds).
So I am trying to optimize the execution/run time for this step; it is essentially a for loop inside a function:
def getSwitchStatus(dashboard: meraki.DashboardAPI,switches):
statuses = []
#Establish the timestamp for midnight yesterday to enable collecting of yesterdays data
yesterday_midnight = datetime.combine(datetime.today(), time.min) - timedelta(days = 1)
for dic in switches:
statuses.append(dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'],t0=yesterday_midnight))
return statuses
Here is what I have tried to do so far
def switchsts():
print("Inside switchsts")
for dic in switches:
statuses.append(dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'],t0=yesterday_midnight))
def getSwitchStatus(dashboard: meraki.DashboardAPI,switches):
print("Testing if switches is accessible")
print("Switches type",type(switches))
print("Switches",switches[0])
p = Process(target=switchsts,args=())
p.start()
p.join()
return statuses
print(statuses)
Unfortunately this is throwing an error here:
for dic in switches:
NameError: name 'switches' is not defined
Which is strange, because I am able to print 'Switches' when the code reaches the getSwitchStatus function, but somehow the function that I am trying to parallelize doesn't seem to read it.
Inside switchsts
Process Process-1:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\multiprocessing\process.py", line 314, in _bootstrap
self.run()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Sample_project\venv\ciscomeraki_file_parallelprocessing.py", line 83, in switchsts
for dic in switches:
NameError: name 'switches' is not defined
P.S.: I am new to parallel processing, so I am guessing I am making some rookie mistake.
Edit 1: Adding code for 'switches':
def getSwitches(dashboard: meraki.DashboardAPI,orgID, network_id=False):
if network_id is False or network_id is None:
devices = dashboard.organizations.getOrganizationDevices(
orgID,
total_pages='all',
productTypes='switch'
)
return devices
else:
devices = dashboard.organizations.getOrganizationDevices(
orgID,
total_pages='all',
productTypes='switch',
networkIds=network_id
)
return devices
A:
It's saying that your switchsts() function does not have switches available inside it.
So try this and see if it works:
def switchsts(switches):
print("Inside switchsts")
for dic in switches:
statuses.append(dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'],t0=yesterday_midnight))
def getSwitchStatus(dashboard: meraki.DashboardAPI,switches):
print("Testing if switches is accessible")
print("Switches type",type(switches))
print("Switches",switches[0])
p = Process(target=switchsts, args=(switches,))
p.start()
p.join()
return statuses
print(statuses)
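As a side note, list appends made inside a separate Process are not visible back in the parent process. A hedged alternative for these I/O-bound API calls is a thread pool that returns the statuses directly; this sketch assumes dashboard and yesterday_midnight are set up as in the question and that the client object can be shared across threads:
from concurrent.futures import ThreadPoolExecutor

def fetch_status(dic):
    # one API call per switch serial
    return dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'], t0=yesterday_midnight)

def getSwitchStatus(dashboard, switches):
    # map the fetch over all switches concurrently and collect the results
    with ThreadPoolExecutor(max_workers=8) as pool:
        statuses = list(pool.map(fetch_status, switches))
    return statuses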
| python for loop parallel processing - appending data to list | I am have below step in my code which is taking around 45 to 50 mins to run (there are other steps which barely take few seconds)
So I am trying to optimize the execution/run time for this step it is essentially a for loop inside a function
def getSwitchStatus(dashboard: meraki.DashboardAPI,switches):
statuses = []
#Establish the timestamp for midnight yesterday to enable collecting of yesterdays data
yesterday_midnight = datetime.combine(datetime.today(), time.min) - timedelta(days = 1)
for dic in switches:
statuses.append(dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'],t0=yesterday_midnight))
return statuses
Here is what I have tried to do so far
def switchsts():
print("Inside switchsts")
for dic in switches:
statuses.append(dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'],t0=yesterday_midnight))
def getSwitchStatus(dashboard: meraki.DashboardAPI,switches):
print("Testing if switches is accessible")
print("Switches type",type(switches))
print("Switches",switches[0])
p = Process(target=switchsts,args=())
p.start()
p.join()
return statuses
print(statuses)
Unfortunately this is throwing an error here:
for dic in switches:
NameError: name 'switches' is not defined
Which is strange because I am able to print 'Switches' when the code reaches inside the getswitchstatus function but somehow the function that I am trying to parallelize doesnt seem to read it.
Inside switchsts
Process Process-1:
Traceback (most recent call last):
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\multiprocessing\process.py", line 314, in _bootstrap
self.run()
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\multiprocessing\process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "C:\Sample_project\venv\ciscomeraki_file_parallelprocessing.py", line 83, in switchsts
for dic in switches:
NameError: name 'switches' is not defined
P.S.: I am new to parellel processing so I am guessing I am making some rookie mistake
*Edit1 Adding code for 'switches'
def getSwitches(dashboard: meraki.DashboardAPI,orgID, network_id=False):
if network_id is False or network_id is None:
devices = dashboard.organizations.getOrganizationDevices(
orgID,
total_pages='all',
productTypes='switch'
)
return devices
else:
devices = dashboard.organizations.getOrganizationDevices(
orgID,
total_pages='all',
productTypes='switch',
networkIds=network_id
)
return devices
| [
"It's saying that your switchsts() fucntion have not switches in it.\nso try this if it works:\ndef switchsts(switches):\n print(\"Inside switchsts\")\n for dic in switches:\n statuses.append(dashboard.switch.getDeviceSwitchPortsStatuses(dic['serial'],t0=yesterday_midnight)) \n\ndef getSwitchStatus(dashboard: meraki.DashboardAPI,switches): \n print(\"Testing if switches is accessible\")\n print(\"Switches type\",type(switches))\n print(\"Switches\",switches[0])\n\n p = Process(target=switchsts, args=(switches,))\n p.start()\n p.join()\n return statuses\n print(statuses)\n\n"
] | [
0
] | [] | [] | [
"for_loop",
"parallel_processing",
"python",
"python_3.x"
] | stackoverflow_0074637765_for_loop_parallel_processing_python_python_3.x.txt |
Q:
how can i do "removeEmptyTag:False" in django-ckeditor in settings.py
source
<div class="icon-lg text-white m-b15"><i class="flaticon-fuel-station"></i></div>
output
<div class="icon-lg text-white m-b15"> </div>
django-ckeditor automatically removes the tag :{
A:
There is this issue on github, it is actually not a bug, it is a feature.
To disable this feature add this to your config:
'allowedContent': True, 'extraAllowedContent': '*(*)',
Optional - customizing CKEditor editor
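In settings.py this typically goes inside CKEDITOR_CONFIGS; a minimal sketch (where "default" is the config name django-ckeditor falls back to unless a field points at another one):
# settings.py
CKEDITOR_CONFIGS = {
    "default": {
        "allowedContent": True,
        "extraAllowedContent": "*(*)",
    },
}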
| how can i do "removeEmptyTag:False" in django-ckeditor in settings.py |
source
<div class="icon-lg text-white m-b15"><i class="flaticon-fuel-station"></i></div>
output
<div class="icon-lg text-white m-b15"> </div>
django-ckeditor automatic remove tag :{
| [
"There is this issue on github, it is actually not a bug, it is a feature.\nTo disable this feature add this to your config:\n\n'allowedContent': True, 'extraAllowedContent': '*(*)',\n\nOptional - customizing CKEditor editor\n"
] | [
0
] | [] | [] | [
"django",
"django_ckeditor",
"python"
] | stackoverflow_0074637817_django_django_ckeditor_python.txt |
Q:
Web-scrape. BeautifulSoup. Multiple Pages. How on earth would you do that?
Hi, I am a newbie to programming, so I spent 4 days trying to learn Python. I invented some new swear words too.
I was particularly interested in trying as an exercise some web-scraping to learn something new and get some exposure to see how it all works.
This is what I came up with. See code at end. It works (to a degree)
But what's missing?
This website has pagination on it. In this case 11 pages worth. How would you go about adding to this script and getting Python to look at those other pages too and carry out the same scrape, i.e. scrape page one, scrape pages 2, 3 ... 11, and post the results to a CSV?
https://www.organicwine.com.au/vegan/?pgnum=1
https://www.organicwine.com.au/vegan/?pgnum=2
https://www.organicwine.com.au/vegan/?pgnum=3
https://www.organicwine.com.au/vegan/?pgnum=4
https://www.organicwine.com.au/vegan/?pgnum=5
https://www.organicwine.com.au/vegan/?pgnum=6
https://www.organicwine.com.au/vegan/?pgnum=7
8, 9,10, and 11
On these pages the images are actually a thumbnail images something like 251px by 251px.
How would you go about adding to this script so that, while you are at it, it follows the links to the detailed product page, captures the image link from there (where the images are 1600px by 1600px), and posts those links to the CSV?
https://www.organicwine.com.au/mercer-wines-preservative-free-shiraz-2020
When we have identified those links lets also download those larger images to a folder
CSV writer. Also I don't understand line 58
for i in range(23)
how would I know how many products there were without counting them (i.e. there are 24 products on page one)?
So this is what I want to learn how to do. Not asking for much (he says sarcastically) I could pay someone on up-work to do it but where's the fun in that? and that does not teach me how to 'fish'.
Where is a good place to learn python? A master class on web-scraping. It seems to be trial and error and blog posts and where ever you can pick up bits of information to piece it all together.
Maybe I need a mentor.
I wish there had been someone I could have reached out to, to tell me what BeautifulSoup was all about. I worked it out by trial and error and mostly guessing. No real understanding of it, but it just works.
Anyway, any help in pulling this all together to produce a decent script would be greatly appreciated.
Hopefully there is someone out there who would not mind helping me.
Apologies to organicwine for using their website as a learning tool. I do not wish to cause any harm or be a nuisance to the site
Thank you in advance
John
code:
import requests
import csv
from bs4 import BeautifulSoup
URL = "https://www.organicwine.com.au/vegan/?pgnum=1"
response = requests.get(URL)
website_html = response.text
soup = BeautifulSoup(website_html, "html.parser")
product_title = soup.find_all('div', class_="caption")
# print(product_title)
winename = []
for wine in product_title:
winetext = wine.a.text
winename.append(winetext)
print(f'''Wine Name: {winetext}''')
# print(f'''\nWine Name: {winename}\n''')
product_price = soup.find_all('div', class_='wrap-thumb-mob')
# print(product_price.text)
price =[]
for wine in product_price:
wineprice = wine.span.text
price.append(wineprice)
print(f'''Wine Price: {wineprice}''')
# print(f'''\nWine Price: {price}\n''')
image =[]
product_image_link = (soup.find_all('div', class_='thumbnail-image'))
# print(product_image_link)
for imagelink in product_image_link:
wineimagelink = imagelink.a['href']
image.append(wineimagelink)
# image.append(imagelink)
print(f'''Wine Image Lin: {wineimagelink}''')
# print(f'''\nWine Image: {image}\n''')
#
#
# """ writing data to CSV """
# open OrganicWine2.csv file in "write" mode
# newline stops a blank line appearing in csv
with open('OrganicWine2.csv', 'w',newline='') as file:
# create a "writer" object
writer = csv.writer(file, delimiter=',')
# use "writer" obj to write
# you should give a "list"
writer.writerow(["Wine Name", "Wine Price", "Wine Image Link"])
for i in range(23):
writer.writerow([
winename[i],
price[i],
image[i],
])
A:
In this case, to do pagination, instead of for i in range(1, 100) which is a hardcoded way of paging, it's better to use a while loop to dynamically paginate all possible pages.
"While" is an infinite loop and it will be executed until the transition to the next page is possible, in this case it will check for the presence of the button for the next page, for which the CSS selector ".fa-chevron-right" is responsible:
if soup.select_one(".fa-chevron-right"):
params["pgnum"] += 1 # go to the next page
else:
break
To extract the full size image an additional request is required, CSS selector ".main-image a" is responsible for full-size images:
full_image_html = requests.get(link, headers=headers, timeout=30)
image_soup = BeautifulSoup(full_image_html.text, "lxml")
try:
original_image = f'https://www.organicwine.com.au{image_soup.select_one(".main-image a")["href"]}'
except:
original_image = None
An additional step to avoid being blocked is to rotate user-agents. Ideally, it would be better to use residential proxies with random user-agent.
pandas can be used to extract data in CSV format:
pd.DataFrame(data=data).to_csv("<csv_file_name>.csv", index=False)
For a quick and easy search for CSS selectors, you can use the SelectorGadget Chrome extension (not always work perfectly if the website is rendered via JavaScript).
Check code with pagination and saving information to CSV in online IDE.
from bs4 import BeautifulSoup
import requests, json, lxml
import pandas as pd
# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36",
}
params = {
'pgnum': 1 # number page by default
}
data = []
while True:
page = requests.get(
"https://www.organicwine.com.au/vegan/?",
params=params,
headers=headers,
timeout=30,
)
soup = BeautifulSoup(page.text, "lxml")
print(f"Extracting page: {params['pgnum']}")
for products in soup.select(".price-btn-conts"):
try:
title = products.select_one(".new-h3").text
except:
title = None
try:
price = products.select_one(".price").text.strip()
except:
price = None
try:
snippet = products.select_one(".price-btn-conts p a").text
except:
snippet = None
try:
link = products.select_one(".new-h3 a")["href"]
except:
link = None
# additional request is needed to extract full size image
full_image_html = requests.get(link, headers=headers, timeout=30)
image_soup = BeautifulSoup(full_image_html.text, "lxml")
try:
original_image = f'https://www.organicwine.com.au{image_soup.select_one(".main-image a")["href"]}'
except:
original_image = None
data.append(
{
"title": title,
"price": price,
"snippet": snippet,
"link": link,
"original_image": original_image
}
)
if soup.select_one(".fa-chevron-right"):
params["pgnum"] += 1
else:
break
# save to CSV (install, import pandas as pd)
pd.DataFrame(data=data).to_csv("<csv_file_name>.csv", index=False)
print(json.dumps(data, indent=2, ensure_ascii=False))
Example output:
[
{
"title": "Yangarra McLaren Vale GSM 2016",
"price": "$29.78 in a straight 12\nor $34.99 each",
"snippet": "The Yangarra GSM is a careful blending of Grenache, Shiraz and Mourvèdre in which the composition varies from year to year, conveying the traditional estate blends of the southern Rhône. The backbone of the wine comes fr...",
"link": "https://www.organicwine.com.au/yangarra-mclaren-vale-gsm-2016",
"original_image": "https://www.organicwine.com.au/assets/full/YG_GSM_16.png?20211110083637"
},
{
"title": "Yangarra Old Vine Grenache 2020",
"price": "$37.64 in a straight 12\nor $41.99 each",
"snippet": "Produced from the fruit of dry grown bush vines planted high up in the Estate's elevated vineyards in deep sandy soils. These venerated vines date from 1946 and produce a wine that is complex, perfumed and elegant with a...",
"link": "https://www.organicwine.com.au/yangarra-old-vine-grenache-2020",
"original_image": "https://www.organicwine.com.au/assets/full/YG_GRE_20.jpg?20210710165951"
},
#...
]
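The question also asks about downloading the larger images to a folder; a minimal sketch using the original_image URLs collected above (the folder name and filename handling are illustrative, and headers is reused from the code above):
import os
import requests

os.makedirs("wine_images", exist_ok=True)

for item in data:
    url = item.get("original_image")
    if not url:
        continue
    # derive a filename from the URL, dropping any query string
    filename = os.path.join("wine_images", url.split("/")[-1].split("?")[0])
    response = requests.get(url, headers=headers, timeout=30)
    with open(filename, "wb") as f:
        f.write(response.content)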
| Web-scrape. BeautifulSoup. Multiple Pages. How on earth would you do that? | Hi I am a Newbie to programming. So I spent 4 days trying to learn python. I evented some new swear words too.
I was particularly interested in trying as an exercise some web-scraping to learn something new and get some exposure to see how it all works.
This is what I came up with. See code at end. It works (to a degree)
But what's missing?
This website has pagination on it. In this case 11 pages worth. How would you go about adding to this script and getting python to go look at those other pages too and carry out the same scrape. Ie scrape page one , scrape page 2, 3 ... 11 and post the results to a csv?
https://www.organicwine.com.au/vegan/?pgnum=1
https://www.organicwine.com.au/vegan/?pgnum=2
https://www.organicwine.com.au/vegan/?pgnum=3
https://www.organicwine.com.au/vegan/?pgnum=4
https://www.organicwine.com.au/vegan/?pgnum=5
https://www.organicwine.com.au/vegan/?pgnum=6
https://www.organicwine.com.au/vegan/?pgnum=7
8, 9,10, and 11
On these pages the images are actually a thumbnail images something like 251px by 251px.
How would you go about adding to this script to say. And whilst you are at it follow the links to the detailed product page and capture the image link from there where the images are 1600px by 1600px and post those links to CSV
https://www.organicwine.com.au/mercer-wines-preservative-free-shiraz-2020
When we have identified those links lets also download those larger images to a folder
CSV writer. Also I don't understand line 58
for i in range(23)
how would i know how many products there were without counting them (i.e. there is 24 products on page one)
So this is what I want to learn how to do. Not asking for much (he says sarcastically) I could pay someone on up-work to do it but where's the fun in that? and that does not teach me how to 'fish'.
Where is a good place to learn python? A master class on web-scraping. It seems to be trial and error and blog posts and where ever you can pick up bits of information to piece it all together.
Maybe I need a mentor.
I wish there had been someone I could have reached out to, to tell me what beautifulSoup was all about. worked it out by trial and error and mostly guessing. No understanding of it but it just works.
Anyway, any help in pulling this all together to produce a decent script would be greatly appreciated.
Hopefully there is someone out there who would not mind helping me.
Apologies to organicwine for using their website as a learning tool. I do not wish to cause any harm or be a nuisance to the site
Thank you in advance
John
code:
import requests
import csv
from bs4 import BeautifulSoup
URL = "https://www.organicwine.com.au/vegan/?pgnum=1"
response = requests.get(URL)
website_html = response.text
soup = BeautifulSoup(website_html, "html.parser")
product_title = soup.find_all('div', class_="caption")
# print(product_title)
winename = []
for wine in product_title:
winetext = wine.a.text
winename.append(winetext)
print(f'''Wine Name: {winetext}''')
# print(f'''\nWine Name: {winename}\n''')
product_price = soup.find_all('div', class_='wrap-thumb-mob')
# print(product_price.text)
price =[]
for wine in product_price:
wineprice = wine.span.text
price.append(wineprice)
print(f'''Wine Price: {wineprice}''')
# print(f'''\nWine Price: {price}\n''')
image =[]
product_image_link = (soup.find_all('div', class_='thumbnail-image'))
# print(product_image_link)
for imagelink in product_image_link:
wineimagelink = imagelink.a['href']
image.append(wineimagelink)
# image.append(imagelink)
print(f'''Wine Image Lin: {wineimagelink}''')
# print(f'''\nWine Image: {image}\n''')
#
#
# """ writing data to CSV """
# open OrganicWine2.csv file in "write" mode
# newline stops a blank line appearing in csv
with open('OrganicWine2.csv', 'w',newline='') as file:
# create a "writer" object
writer = csv.writer(file, delimiter=',')
# use "writer" obj to write
# you should give a "list"
writer.writerow(["Wine Name", "Wine Price", "Wine Image Link"])
for i in range(23):
writer.writerow([
winename[i],
price[i],
image[i],
])
| [
"In this case, to do pagination, instead of for i in range(1, 100) which is a hardcoded way of paging, it's better to use a while loop to dynamically paginate all possible pages.\n\"While\" is an infinite loop and it will be executed until the transition to the next page is possible, in this case it will check for the presence of the button for the next page, for which the CSS selector \".fa-chevron-right\" is responsible:\n if soup.select_one(\".fa-chevron-right\"):\n params[\"pgnum\"] += 1 # go to the next page\nelse:\n break\n\nTo extract the full size image an additional request is required, CSS selector \".main-image a\" is responsible for full-size images:\nfull_image_html = requests.get(link, headers=headers, timeout=30)\nimage_soup = BeautifulSoup(full_image_html.text, \"lxml\")\n \ntry:\n original_image = f'https://www.organicwine.com.au{image_soup.select_one(\".main-image a\")[\"href\"]}'\nexcept:\n original_image = None\n\nAn additional step to avoid being blocked is to rotate user-agents. Ideally, it would be better to use residential proxies with random user-agent.\npandas can be used to extract data in CSV format:\npd.DataFrame(data=data).to_csv(\"<csv_file_name>.csv\", index=False)\n\nFor a quick and easy search for CSS selectors, you can use the SelectorGadget Chrome extension (not always work perfectly if the website is rendered via JavaScript).\nCheck code with pagination and saving information to CSV in online IDE.\nfrom bs4 import BeautifulSoup\nimport requests, json, lxml\nimport pandas as pd\n\n# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers\nheaders = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/100.0.4896.60 Safari/537.36\",\n }\n \nparams = {\n 'pgnum': 1 # number page by default \n }\n\ndata = []\n\nwhile True:\n page = requests.get(\n \"https://www.organicwine.com.au/vegan/?\",\n params=params,\n headers=headers,\n timeout=30,\n )\n soup = BeautifulSoup(page.text, \"lxml\")\n\n print(f\"Extracting page: {params['pgnum']}\")\n\n for products in soup.select(\".price-btn-conts\"):\n try:\n title = products.select_one(\".new-h3\").text\n except:\n title = None\n try:\n price = products.select_one(\".price\").text.strip()\n except:\n price = None\n try:\n snippet = products.select_one(\".price-btn-conts p a\").text\n except:\n snippet = None\n try:\n link = products.select_one(\".new-h3 a\")[\"href\"]\n except:\n link = None\n\n # additional request is needed to extract full size image\n full_image_html = requests.get(link, headers=headers, timeout=30)\n image_soup = BeautifulSoup(full_image_html.text, \"lxml\")\n \n try:\n original_image = f'https://www.organicwine.com.au{image_soup.select_one(\".main-image a\")[\"href\"]}'\n except:\n original_image = None\n \n data.append(\n {\n \"title\": title,\n \"price\": price,\n \"snippet\": snippet,\n \"link\": link,\n \"original_image\": original_image\n }\n )\n\n if soup.select_one(\".fa-chevron-right\"):\n params[\"pgnum\"] += 1\n else:\n break\n\n# save to CSV (install, import pandas as pd)\npd.DataFrame(data=data).to_csv(\"<csv_file_name>.csv\", index=False)\nprint(json.dumps(data, indent=2, ensure_ascii=False))\n\nExample output:\n[\n {\n \"title\": \"Yangarra McLaren Vale GSM 2016\",\n \"price\": \"$29.78 in a straight 12\\nor $34.99 each\",\n \"snippet\": \"The Yangarra GSM is a careful blending of Grenache, Shiraz and Mourvèdre in which the composition varies from year to year, conveying the traditional estate blends of the 
southern Rhône. The backbone of the wine comes fr...\",\n \"link\": \"https://www.organicwine.com.au/yangarra-mclaren-vale-gsm-2016\",\n \"original_image\": \"https://www.organicwine.com.au/assets/full/YG_GSM_16.png?20211110083637\"\n },\n {\n \"title\": \"Yangarra Old Vine Grenache 2020\",\n \"price\": \"$37.64 in a straight 12\\nor $41.99 each\",\n \"snippet\": \"Produced from the fruit of dry grown bush vines planted high up in the Estate's elevated vineyards in deep sandy soils. These venerated vines date from 1946 and produce a wine that is complex, perfumed and elegant with a...\",\n \"link\": \"https://www.organicwine.com.au/yangarra-old-vine-grenache-2020\",\n \"original_image\": \"https://www.organicwine.com.au/assets/full/YG_GRE_20.jpg?20210710165951\"\n },\n #...\n]\n\n"
] | [
0
] | [
"Create the URL by putting the page number in it, then put the rest of your code into a for loop and you can use len(winenames) to count how many results you have. You should do the writing outside the for loop. Here's your code with those changes:\nimport requests\nimport csv\nfrom bs4 import BeautifulSoup\n\nnum_pages = 11\n\nresult = []\nfor pgnum in range(num_pages):\n url = f\"https://www.organicwine.com.au/vegan/?pgnum={pgnum+1}\"\n response = requests.get(url)\n website_html = response.text\n\n soup = BeautifulSoup(website_html, \"html.parser\")\n\n product_title = soup.find_all(\"div\", class_=\"caption\")\n\n winename = []\n for wine in product_title:\n winetext = wine.a.text\n winename.append(winetext)\n\n product_price = soup.find_all(\"div\", class_=\"wrap-thumb-mob\")\n\n price = []\n for wine in product_price:\n wineprice = wine.span.text\n price.append(wineprice)\n\n image = []\n product_image_link = soup.find_all(\"div\", class_=\"thumbnail-image\")\n for imagelink in product_image_link:\n winelink = imagelink.a[\"href\"]\n response = requests.get(winelink)\n wine_page_soup = BeautifulSoup(response.text, \"html.parser\")\n main_image = wine_page_soup.find(\"a\", class_=\"fancybox\")\n image.append(main_image['href'])\n\n for i in range(len(winename)):\n result.append([winename[i], price[i], image[i]])\n\n\nwith open(\"/tmp/OrganicWine2.csv\", \"w\", newline=\"\") as file:\n writer = csv.writer(file, delimiter=\",\")\n writer.writerow([\"Wine Name\", \"Wine Price\", \"Wine Image Link\"])\n writer.writerows(results)\n\nAnd here's how I would rewrite your code to accomplish this task. It's more pythonic (you should basically never write range(len(something)), there's always a cleaner way) and it doesn't require knowing how many pages of results there are:\nimport csv\nimport itertools\nimport time\n\nimport requests\nfrom bs4 import BeautifulSoup\n\ndata = []\n# Try opening 100 pages at most, in case the scraping code is broken\n# which can happen because websites change.\nfor pgnum in range(1, 100):\n url = f\"https://www.organicwine.com.au/vegan/?pgnum={pgnum}\"\n response = requests.get(url)\n website_html = response.text\n\n soup = BeautifulSoup(website_html, \"html.parser\")\n\n search_results = soup.find_all(\"div\", class_=\"thumbnail\")\n for search_result in search_results:\n name = search_result.find(\"div\", class_=\"caption\").a.text\n price = search_result.find(\"p\", class_=\"price\").span.text\n\n # link to the product's page\n link = search_result.find(\"div\", class_=\"thumbnail-image\").a[\"href\"]\n\n # get the full resolution product image\n response = requests.get(link)\n time.sleep(1) # rate limit\n wine_page_soup = BeautifulSoup(response.text, \"html.parser\")\n main_image = wine_page_soup.find(\"a\", class_=\"fancybox\")\n image_url = main_image[\"href\"]\n\n # or you can just \"guess\" it from the thumbnail's URL\n # thumbnail = search_result.find(\"div\", class_=\"thumbnail-image\").a.img['src']\n # image_url = thumbnail.replace('/thumbL/', '/full/')\n\n data.append([name, price, link, image_url])\n\n # if there's no \"next page\" button or no search results on the current page,\n # stop scraping\n if not soup.find(\"i\", class_=\"fa-chevron-right\") or not search_results:\n break\n\n # rate limit\n time.sleep(1)\n\n\nwith open(\"/tmp/OrganicWine3.csv\", \"w\", newline=\"\") as file:\n writer = csv.writer(file, delimiter=\",\")\n writer.writerow([\"Wine Name\", \"Wine Price\", \"Wine Link\", \"Wine Image Link\"])\n writer.writerows(data)\n\n"
] | [
-1
] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0067665789_beautifulsoup_python_web_scraping.txt |
Q:
Having trouble using the fetchone method in python
This is the error. I have tried fixing this plenty of times and find myself stuck:
total = cur.fetchone()[2]
IndexError: tuple index out of range
import sqlite3
def display_records(results):
for row in results:
print(row[0], row[1], row[2])
print()
def display_total(total):
for row in total:
print(row[2])
print()
conn = sqlite3.connect('cities.db')
cur = conn.cursor()
x = int(input('Please select a choice by entering the corresponding number: '))
if x == 1:
print('Displaying a list of cities sorted by population, in ascending order.')
cur.execute('''SELECT * FROM Cities ORDER BY Population''')
results = cur.fetchall()
display_records(results)
elif x == 2:
print('Displaying a list of cities sorted by population, in descending order.')
cur.execute('''SELECT * FROM Cities ORDER BY Population DESC''')
results = cur.fetchall()
display_records(results)
elif x == 3:
print('Displaying a list of cities sorted by name.')
cur.execute('''SELECT * FROM Cities ORDER BY CityName''')
results = cur.fetchall()
display_records(results)
elif x == 4:
print('Displaying the total population of all cities.')
cur.execute('''SELECT sum(Population) FROM Cities''')
total = cur.fetchone()[2]
elif x == 5:
print('Displaying the average population of all cities.')
cur.execute('''SELECT AVG(Population) FROM Cities''')
avg = cur.fetchone()[2]
print('The total population is:', format(avg, '.2f'))
conn.commit()
conn.close()
I'm trying to write a program that takes information from a database file. However, I'm having trouble using the fetchone method to fetch the population data and find the sum. It would be great if someone could shoot me some pointers. :)
A:
The error is telling you that the returned tuple from cur.fetchone() does not have a third element. This is because the SQL statement (for x==4) only asks for a single parameter, namely sum(Population).
Hence the row returned by fetchone() only consists of a tuple with a single element. To get rid of the error, you should index the first element, not the third. Same goes for x == 5.
#[...]
elif x == 4:
print('Displaying the total population of all cities.')
cur.execute('''SELECT sum(Population) FROM Cities''')
total = cur.fetchone()[0] # <= index the first element
elif x == 5:
print('Displaying the average population of all cities.')
cur.execute('''SELECT AVG(Population) FROM Cities''')
avg = cur.fetchone()[0] # <= index the first element
print('The total population is:', format(avg, '.2f'))
P.S.: You should do something with the total, e.g. print it. At the moment, it just gets discarded.
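For instance, a minimal sketch of acting on that P.S. (reusing the cursor and Cities table from the question):
elif x == 4:
    print('Displaying the total population of all cities.')
    cur.execute('''SELECT sum(Population) FROM Cities''')
    total = cur.fetchone()[0]
    print('The total population is:', total)  # use the value instead of discarding it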
| Having trouble using the fetchone method in python | This is the error. I have tried fixing this plenty of times and find myself stuck:
total = cur.fetchone()[2]
IndexError: tuple index out of range
import sqlite3
def display_records(results):
for row in results:
print(row[0], row[1], row[2])
print()
def display_total(total):
for row in total:
print(row[2])
print()
conn = sqlite3.connect('cities.db')
cur = conn.cursor()
x = int(input('Please select a choice by entering the corresponding number: '))
if x == 1:
print('Displaying a list of cities sorted by population, in ascending order.')
cur.execute('''SELECT * FROM Cities ORDER BY Population''')
results = cur.fetchall()
display_records(results)
elif x == 2:
print('Displaying a list of cities sorted by population, in descending order.')
cur.execute('''SELECT * FROM Cities ORDER BY Population DESC''')
results = cur.fetchall()
display_records(results)
elif x == 3:
print('Displaying a list of cities sorted by name.')
cur.execute('''SELECT * FROM Cities ORDER BY CityName''')
results = cur.fetchall()
display_records(results)
elif x == 4:
print('Displaying the total population of all cities.')
cur.execute('''SELECT sum(Population) FROM Cities''')
total = cur.fetchone()[2]
elif x == 5:
print('Displaying the average population of all cities.')
cur.execute('''SELECT AVG(Population) FROM Cities''')
avg = cur.fetchone()[2]
print('The total population is:', format(avg, '.2f'))
conn.commit()
conn.close()
I'm trying to write a program that takes information from a database file. However, I'm having trouble using the fetchone method to fetch the population data and find the sum. It would be great if someone could shoot me some pointers. :)
| [
"The error is telling you that the returned tuple from cur.fetchone() does not have a third element. This is because the SQL statement (for x==4) only asks for a single parameter, namely sum(Population).\nHence the row returned by fetchone() only consists of a tuple with a single element. To get rid of the error, you should index the first element, not the third. Same goes for x == 5.\n#[...]\nelif x == 4:\n print('Displaying the total population of all cities.')\n cur.execute('''SELECT sum(Population) FROM Cities''')\n total = cur.fetchone()[0] # <= index the first element\n\nelif x == 5:\n print('Displaying the average population of all cities.')\n cur.execute('''SELECT AVG(Population) FROM Cities''')\n avg = cur.fetchone()[0] # <= index the first element\n print('The total population is:', format(avg, '.2f'))\n\nP.S.: You should do something with the total, e.g. print it. At the moment, it just gets discarded.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074623539_python.txt |
Q:
If-elif-else statement not working in simple Python 3.3 class?
My code is:
def nameAndConfirm():
global name,confirm
print("What is your name? ")
name = input()
str(name)
print("Is",name,"correct? ")
confirm = input()
str(confirm)
print(confirm)
if confirm.upper() == "Y" or "YES":
classSelection()
elif confirm.upper() == "N" or "NO":
nameAndConfirm()
else:
print("Valid answers are Y/Yes or N/No!")
nameAndConfirm()
nameAndConfirm()
Critique on this code would be nice as well. I know it's very shifty; I know how to make it shorter in some ways, but I was trying to get my if-elif-else to work. I have no clue what else I can do, as I've tried everything I know. Also, I made the indents 4 spaces in the above code. **Edit: sorry, the error is that it always runs the "if"; it never goes past the first if line no matter what you enter for confirm
A:
The condition confirm.upper() == "Y" or "YES" and the other one are not evaluated as you expect. You want
confirm.upper() in {"Y", "YES"}
or
confirm.upper() == "Y" or confirm.upper() == "YES"
Your condition is equivalent to:
(confirm.upper() == "Y") or "YES"
which is always truthy:
In [1]: True or "Yes"
Out[1]: True
In [2]: False or "Yes"
Out[2]: 'Yes'
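Putting that together, a minimal sketch of the corrected checks (assuming the rest of nameAndConfirm stays as in the question):
answer = confirm.upper()
if answer in {"Y", "YES"}:
    classSelection()      # name confirmed, move on
elif answer in {"N", "NO"}:
    nameAndConfirm()      # ask for the name again
else:
    print("Valid answers are Y/Yes or N/No!")
    nameAndConfirm()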
On a separate note, the lines
str(name)
and
str(confirm)
don't do anything. The values returned by the functions are not saved anywhere, and name and confirm are not altered. Moreover, they are already strings to begin with, because they hold the return values of input().
| If-elif-else statement not working in simple Python 3.3 class? | My code is:
def nameAndConfirm():
global name,confirm
print("What is your name? ")
name = input()
str(name)
print("Is",name,"correct? ")
confirm = input()
str(confirm)
print(confirm)
if confirm.upper() == "Y" or "YES":
classSelection()
elif confirm.upper() == "N" or "NO":
nameAndConfirm()
else:
print("Valid answers are Y/Yes or N/No!")
nameAndConfirm()
nameAndConfirm()
Critique on this code would be nice as well. I know it's very shifty; I know how to make it shorter in some ways, but I was trying to get my if-elif-else to work. I have no clue what else I can do, as I've tried everything I know. Also, I made the indents 4 spaces in the above code. **Edit: sorry, the error is that it always runs the "if"; it never goes past the first if line no matter what you enter for confirm
| [
"The condition confirm.upper() == \"Y\" or \"YES\" and the other one are not evaluated as you expect. You want \nconfirm.upper() in {\"Y\", \"YES\"}\n\nor\nconfirm.upper() == \"Y\" or confirm.upper() == \"YES\"\n\nYour condition is equivalent to:\n(confirm.upper() == \"Y\") or \"YES\"\n\nwhich is always truthy:\nIn [1]: True or \"Yes\"\nOut[1]: True\n\nIn [2]: False or \"Yes\"\nOut[2]: 'Yes'\n\n\nOn a separate note, the lines\nstr(name)\n\nand\nstr(confirm)\n\ndon't do anything. The values returned by the functions are not saved anywhere, and name and confirm are not altered. Moreover, they are already strings to begin with, because they hold the return values of input().\n"
] | [
10
] | [
"That't not the correct way to implement \"or\" in your if condition:\n\nconfirm.upper() == \"Y\" or confirm.upper() == \"YES\"\n\nIt should be like this\n"
] | [
-1
] | [
"if_statement",
"python",
"python_3.x"
] | stackoverflow_0018624465_if_statement_python_python_3.x.txt |
Q:
Loaded keras model with custom layer has different weights to model which was saved
I have implemented a Transformer encoder in keras using the template provided by Francois Chollet here. After I train the model I save it using model.save, but when I load it again for inference I find that the weights seem to be random again, and therefore my model loses all inference ability.
I have looked at similar issues on SO and Github, and applied the following suggestions, but still getting the same issue:
Use the @tf.keras.utils.register_keras_serializable() decorator on the class.
Make sure **kwargs is in the init call
Make sure the custom layer has get_config and from_config methods.
Use custom_object_scope to load model.
Below is a minimally reproducible example to replicate the issue. How do I change it so that the model weights save correctly?
import numpy as np
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers
from keras.models import load_model
from keras.utils import custom_object_scope
@tf.keras.utils.register_keras_serializable()
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential(
[
layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs, mask=None):
if mask is not None:
mask = mask[:, tf.newaxis, :]
attention_output = self.attention(
inputs, inputs, attention_mask=mask)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
def get_config(self):
config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
})
return config
@classmethod
def from_config(cls, config):
return cls(**config)
# Create simple model:
encoder = TransformerEncoder(embed_dim=2, dense_dim=2, num_heads=1)
inputs = keras.Input(shape=(2, 2), batch_size=None, name="test_inputs")
x = encoder(inputs)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="linear")(x)
model = keras.Model(inputs, outputs)
# Fit the model and save it:
np.random.seed(42)
X = np.random.rand(10, 2, 2)
y = np.ones(10)
model.compile(optimizer=keras.optimizers.Adam(), loss="mean_squared_error")
model.fit(X, y, epochs=2, batch_size=1)
model.save("./test_model")
# Load the saved model:
with custom_object_scope({
'TransformerEncoder': TransformerEncoder
}):
loaded_model = load_model("./test_model")
print(model.weights[0].numpy())
print(loaded_model.weights[0].numpy())
A:
The weights are saved (you can load them with load_weights after loading the model). The problem is that you create new layers in __init__. You need to recreate them from their config, for example:
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, attention_config=None, dense_proj_config=None, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim) \
if attention_config is None else layers.MultiHeadAttention.from_config(attention_config)
self.dense_proj = keras.Sequential(
[
layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),
]
) if dense_proj_config is None else keras.Sequential.from_config(dense_proj_config)
...
def call(self, inputs, mask=None):
...
def get_config(self):
config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
"attention_config": self.attention.get_config(),
"dense_proj_config": self.dense_proj.get_config(),
})
return config
Output:
[[[-0.810745 -0.14727005]]
[[ 0.8542909 0.09689581]]]
[[[-0.810745 -0.14727005]]
[[ 0.8542909 0.09689581]]]
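As a quick sanity check, you could also compare every weight tensor programmatically after reloading — a small sketch reusing model and loaded_model from the question:
import numpy as np

# every tensor should match between the trained model and the reloaded one
for w_orig, w_loaded in zip(model.weights, loaded_model.weights):
    assert np.allclose(w_orig.numpy(), w_loaded.numpy())
print("All weights match after reloading.")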
| Loaded keras model with custom layer has different weights to model which was saved | I have implemented a Transformer encoder in keras using the template provided by Francois Chollet here. After I train the model I save it using model.save, but when I load it again for inference I find that the weights seem to be random again, and therefore my model loses all inference ability.
I have looked at similar issues on SO and Github, and applied the following suggestions, but still getting the same issue:
Use the @tf.keras.utils.register_keras_serializable() decorator on the class.
Make sure **kwargs is in the init call
Make sure the custom layer has get_config and from_config methods.
Use custom_object_scope to load model.
Below is a minimally reproducible example to replicate the issue. How do I change it so that the model weights save correctly?
import numpy as np
from tensorflow import keras
import tensorflow as tf
from tensorflow.keras import layers
from keras.models import load_model
from keras.utils import custom_object_scope
@tf.keras.utils.register_keras_serializable()
class TransformerEncoder(layers.Layer):
def __init__(self, embed_dim, dense_dim, num_heads, **kwargs):
super().__init__(**kwargs)
self.embed_dim = embed_dim
self.dense_dim = dense_dim
self.num_heads = num_heads
self.attention = layers.MultiHeadAttention(
num_heads=num_heads, key_dim=embed_dim)
self.dense_proj = keras.Sequential(
[
layers.Dense(dense_dim, activation="relu"),
layers.Dense(embed_dim),
]
)
self.layernorm_1 = layers.LayerNormalization()
self.layernorm_2 = layers.LayerNormalization()
def call(self, inputs, mask=None):
if mask is not None:
mask = mask[:, tf.newaxis, :]
attention_output = self.attention(
inputs, inputs, attention_mask=mask)
proj_input = self.layernorm_1(inputs + attention_output)
proj_output = self.dense_proj(proj_input)
return self.layernorm_2(proj_input + proj_output)
def get_config(self):
config = super().get_config()
config.update({
"embed_dim": self.embed_dim,
"num_heads": self.num_heads,
"dense_dim": self.dense_dim,
})
return config
@classmethod
def from_config(cls, config):
return cls(**config)
# Create simple model:
encoder = TransformerEncoder(embed_dim=2, dense_dim=2, num_heads=1)
inputs = keras.Input(shape=(2, 2), batch_size=None, name="test_inputs")
x = encoder(inputs)
x = layers.Flatten()(x)
outputs = layers.Dense(1, activation="linear")(x)
model = keras.Model(inputs, outputs)
# Fit the model and save it:
np.random.seed(42)
X = np.random.rand(10, 2, 2)
y = np.ones(10)
model.compile(optimizer=keras.optimizers.Adam(), loss="mean_squared_error")
model.fit(X, y, epochs=2, batch_size=1)
model.save("./test_model")
# Load the saved model:
with custom_object_scope({
'TransformerEncoder': TransformerEncoder
}):
loaded_model = load_model("./test_model")
print(model.weights[0].numpy())
print(loaded_model.weights[0].numpy())
| [
"The weights are saved (you can load them with load_weights after loading the model). The problem is that you create new layers in __init__. You need to recreate them from their config, for example:\n class TransformerEncoder(layers.Layer):\n def __init__(self, embed_dim, dense_dim, num_heads, attention_config=None, dense_proj_config=None, **kwargs):\n super().__init__(**kwargs)\n self.embed_dim = embed_dim\n self.dense_dim = dense_dim\n self.num_heads = num_heads\n self.attention = layers.MultiHeadAttention(\n num_heads=num_heads, key_dim=embed_dim) \\\n if attention_config is None else layers.MultiHeadAttention.from_config(attention_config)\n self.dense_proj = keras.Sequential(\n [\n layers.Dense(dense_dim, activation=\"relu\"),\n layers.Dense(embed_dim),\n ]\n ) if dense_proj_config is None else keras.Sequential.from_config(dense_proj_config)\n ...\n\n def call(self, inputs, mask=None):\n ...\n\n def get_config(self):\n config = super().get_config()\n config.update({\n \"embed_dim\": self.embed_dim,\n \"num_heads\": self.num_heads,\n \"dense_dim\": self.dense_dim,\n \"attention_config\": self.attention.get_config(),\n \"dense_proj_config\": self.dense_proj.get_config(),\n })\n return config\n\nOutput:\n[[[-0.810745 -0.14727005]]\n\n[[ 0.8542909 0.09689581]]]\n[[[-0.810745 -0.14727005]]\n\n[[ 0.8542909 0.09689581]]]\n\n"
] | [
1
] | [
"the secrete is how it works you can try it with the model.get_weights() but I sample in the layer.get_weight() that is because easiliy see.\nSample: Custom layer with random initial values, result in small of randoms number changed when run it couple of time.\nimport tensorflow as tf\n\nclass MyDenseLayer(tf.keras.layers.Layer):\n def __init__(self, num_outputs):\n super(MyDenseLayer, self).__init__()\n self.num_outputs = num_outputs\n \n def build(self, input_shape):\n \"\"\" initialize weights with randomize numbers \"\"\"\n min_size_init = tf.keras.initializers.RandomUniform(minval=1, maxval=5, seed=None)\n self.kernel = self.add_weight(shape=[int(input_shape[-1]), self.num_outputs],\n initializer = min_size_init, trainable=True)\n \n def call(self, inputs):\n return tf.matmul(inputs, self.kernel)\n\n\nstart = 3\nlimit = 33\ndelta = 3\n\n# Create DATA\nsample = tf.range(start, limit, delta)\nsample = tf.cast( sample, dtype=tf.float32 )\n\n# Initail, ( 10, 1 )\nsample = tf.constant( sample, shape=( 10, 1 ) )\nlayer = MyDenseLayer(10)\ndata = layer(sample)\n\nOutput: The same layer initialized continues of the call() process\n### 1st round ###\n# [array([[-0.07862139, -0.45416605, -0.53606 , 0.18597281, 0.2919714 ,\n # -0.27334914, 0.60890776, -0.3856985 , 0.58052486, -0.5634572 ]], dtype=float32)]\n \n### 2nd round ###\n# [array([[ 0.5949032 , 0.05113244, -0.51997787, 0.26252705, -0.09235346,\n # -0.35243294, -0.0187515 , -0.12527376, 0.22348166, 0.37051445]], dtype=float32)]\n \n### 3rd round ###\n# [array([[-0.6654639 , -0.46027896, -0.48666477, -0.23095328, 0.30391783,\n # 0.21867174, -0.5405392 , -0.45399982, -0.22143698, 0.66893476]], dtype=float32)]\n\nSample: Re-called every time tell the layer to reset the initial value.\nlayer.build([1]) \nprint( data )\nprint( layer.get_weights() )\n\nOutput: The model.call() result in differnt not continues.\n### 1st round ###\n# [array([[ 0.73738164, 0.14095825, -0.5416008 , -0.35084447, -0.35209572,\n # -0.35504425, 0.1692887 , 0.2611189 , 0.43355125, -0.3325353 ]], dtype=float32)]\n \n### 2nd round ###\n# [array([[ 0.5949032 , 0.05113244, -0.51997787, 0.26252705, -0.09235346,\n # -0.35243294, -0.0187515 , -0.12527376, 0.22348166, 0.37051445]], dtype=float32)]\n \n### 3rd round ###\n# [array([[-0.6654639 , -0.46027896, -0.48666477, -0.23095328, 0.30391783,\n # 0.21867174, -0.5405392 , -0.45399982, -0.22143698, 0.66893476]], dtype=float32)]\n\nSample: We included layer-initialized values requirements, suppose to start at the same initial for all actions.\n\"\"\" initialize weights with values ones \"\"\"\n min_size_init = tf.keras.initializers.Ones()\n\nOutput: The same results are re-produced every time.\n### 1st round ###\n# tf.Tensor(\n# [[ 3. 3. 3. 3. 3. 3. 3. 3. 3. 3.]\n # [ 6. 6. 6. 6. 6. 6. 6. 6. 6. 6.]\n # [ 9. 9. 9. 9. 9. 9. 9. 9. 9. 9.]\n # [12. 12. 12. 12. 12. 12. 12. 12. 12. 12.]\n # [15. 15. 15. 15. 15. 15. 15. 15. 15. 15.]\n # [18. 18. 18. 18. 18. 18. 18. 18. 18. 18.]\n # [21. 21. 21. 21. 21. 21. 21. 21. 21. 21.]\n # [24. 24. 24. 24. 24. 24. 24. 24. 24. 24.]\n # [27. 27. 27. 27. 27. 27. 27. 27. 27. 27.]\n # [30. 30. 30. 30. 30. 30. 30. 30. 30. 30.]], shape=(10, 10), dtype=float32)\n# [array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], dtype=float32)]\n\n### 2nd round ###\n# tf.Tensor(\n# [[ 3. 3. 3. 3. 3. 3. 3. 3. 3. 3.]\n # [ 6. 6. 6. 6. 6. 6. 6. 6. 6. 6.]\n # [ 9. 9. 9. 9. 9. 9. 9. 9. 9. 9.]\n # [12. 12. 12. 12. 12. 12. 12. 12. 12. 12.]\n # [15. 15. 15. 15. 15. 15. 15. 15. 15. 15.]\n # [18. 18. 18. 18. 18. 18. 18. 18. 18. 
18.]\n # [21. 21. 21. 21. 21. 21. 21. 21. 21. 21.]\n # [24. 24. 24. 24. 24. 24. 24. 24. 24. 24.]\n # [27. 27. 27. 27. 27. 27. 27. 27. 27. 27.]\n # [30. 30. 30. 30. 30. 30. 30. 30. 30. 30.]], shape=(10, 10), dtype=float32)\n# [array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1.]], dtype=float32)]\n\nSample: Implementation\ntemp = tf.random.normal([10], 1, 0.2, tf.float32)\ntemp = np.asarray(temp) * np.asarray([ coefficient_0, coefficient_1, coefficient_2, coefficient_3, coefficient_4, coefficient_5, coefficient_6, coefficient_7, coefficient_8, coefficient_9 ])\ntemp = tf.nn.softmax(temp)\naction = int(np.argmax(temp)) \n\nOutput: All variables are co-variances of environment variables it selects max() or min() value mapped to target actions in the game. Added some random value that does not win the filters times value create of actions feedbacks.\n( I used to add it as GIF but of course they deleted them, both who keep have question on my replies )\n\n"
] | [
-1
] | [
"keras",
"python",
"tensorflow"
] | stackoverflow_0074636441_keras_python_tensorflow.txt |
Q:
Unable to read data from mongoDB using Pyspark or Python in AWS EMR
I am trying to read data from a 3-node MongoDB cluster (replica set) using PySpark and native Python in AWS EMR. I am facing issues while executing the code within the AWS EMR cluster, as explained below, but the same code works fine on my local Windows machine.
spark version - 2.4.8
Scala version - 2.11.12
MongoDB version - 4.4.8
mongo-spark-connector version - mongo-spark-connector_2.11:2.4.4
python version - 3.7.10
Through Pyspark - (issue - pyspark is giving empty dataframe)
Below are the commands while running pyspark job in local and cluster mode.
local mode : spark-submit --master local[*] --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.4 test.py
cluster mode :
spark-submit --master yarn --deploy-mode cluster --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.4 test.py
With both modes I am not able to read data from MongoDB (empty dataframe), even though telnet is working across all Spark cluster nodes. From the logs, I can confirm that Spark is able to communicate with MongoDB, yet my PySpark job returns an empty dataframe. Please find the screenshots below.
mongoDB connecting to pyspark
pyspark giving empty dataframe
Below is the code snippet for same:
from pyspark.sql import SparkSession, SQLContext
from pyspark import SparkConf, SparkContext
import sys
import json
sc = SparkContext()
spark = SparkSession(sc).builder.appName("MongoDbToS3").config("spark.mongodb.input.uri", "mongodb://usename:password@host1:port1,host2:port2,host3:port3/db.collection/?replicaSet=ABCD&authSource=admin").getOrCreate()
data = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
data.show()
Please let me know if there is anything I am doing wrong or missing in the PySpark code.
Through native Python code - (issue - the code gets stuck if batch_size > 1; if batch_size = 1 it prints the first 24 Mongo documents and then the cursor hangs)
I am using the pymongo driver to connect to MongoDB through native Python code. The issue is that when I try to fetch/print MongoDB documents with a batch_size of 1000, the code hangs forever and then gives a network timeout error. But if I make batch_size = 1, the cursor is able to fetch the first 24 documents, after which it hangs again. We observed that the 25th document is very big (around 4 KB) compared to the first 24 documents. We then tried skipping the 25th document; the cursor started fetching the next documents but again got stuck at some other position, so we observed that whenever the document size is large, the cursor gets stuck.
can you guys please help me in understanding the issue?
is there anything blocking from networking side or mongoDB side?
below is code snippet :
from datetime import datetime
import json
#import boto3
from bson import json_util
import pymongo
client = pymongo.MongoClient("mongodb://username@host:port/?authSource=admin&socketTimeoutMS=3600000&maxIdleTimeMS=3600000")
# Database Name
db = client["database_name"]
# Collection Name
quoteinfo__collection= db["collection_name"]
results = quoteinfo__collection.find({}).batch_size(1000)
doc_count = quoteinfo__collection.count_documents({})
print("documents count from collection: ",doc_count)
print(results)
record_increment_no = 1
for record in results:
print(record)
print(record_increment_no)
record_increment_no = record_increment_no + 1
results.close()
below is output screenshot for same
for batch_size = 1000 (code hangs and gives network timeout error)
pymongo code getting stuck
network timeout error
batch_size = 1 (prints documents only till 24th and then cursor hangs)
printing 24 documents and hangs
A:
There were some issues with AWS account peering between our dev account and the MongoDB-hosted AWS account, as explained below:
Traffic was flowing through VPC Peering for one of the routes instead of Transit Gateway.
MongoDB IPs were not falling under CIDR ranges of the Route Table
After adding the Transit Gateway routes for MongoDB IP1 and MongoDB IP2, we are able to read data properly with any batch size for any collection.
| Unable to read data from mongoDB using Pyspark or Python in AWS EMR | I am trying to read data from a 3-node MongoDB cluster (replica set) using PySpark and native Python in AWS EMR. I am facing issues while executing the code within the AWS EMR cluster, as explained below, but the same code works fine on my local Windows machine.
spark version - 2.4.8
Scala version - 2.11.12
MongoDB version - 4.4.8
mongo-spark-connector version - mongo-spark-connector_2.11:2.4.4
python version - 3.7.10
Through Pyspark - (issue - pyspark is giving empty dataframe)
Below are the commands while running pyspark job in local and cluster mode.
local mode : spark-submit --master local[*] --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.4 test.py
cluster mode :
spark-submit --master yarn --deploy-mode cluster --packages org.mongodb.spark:mongo-spark-connector_2.11:2.4.4 test.py
With both modes I am not able to read data from MongoDB (empty dataframe), even though telnet is working across all Spark cluster nodes. From the logs, I can confirm that Spark is able to communicate with MongoDB, yet my PySpark job returns an empty dataframe. Please find the screenshots below.
mongoDB connecting to pyspark
pyspark giving empty dataframe
Below is the code snippet for same:
from pyspark.sql import SparkSession, SQLContext
from pyspark import SparkConf, SparkContext
import sys
import json
sc = SparkContext()
spark = SparkSession(sc).builder.appName("MongoDbToS3").config("spark.mongodb.input.uri", "mongodb://usename:password@host1:port1,host2:port2,host3:port3/db.collection/?replicaSet=ABCD&authSource=admin").getOrCreate()
data = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
data.show()
Please let me know if there is anything I am doing wrong or missing in the PySpark code.
Through native Python code - (issue - the code gets stuck if batch_size > 1; if batch_size = 1 it prints the first 24 Mongo documents and then the cursor hangs)
I am using the pymongo driver to connect to MongoDB through native Python code. The issue is that when I try to fetch/print MongoDB documents with a batch_size of 1000, the code hangs forever and then gives a network timeout error. But if I make batch_size = 1, the cursor is able to fetch the first 24 documents, after which it hangs again. We observed that the 25th document is very big (around 4 KB) compared to the first 24 documents. We then tried skipping the 25th document; the cursor started fetching the next documents but again got stuck at some other position, so we observed that whenever the document size is large, the cursor gets stuck.
can you guys please help me in understanding the issue?
is there anything blocking from networking side or mongoDB side?
below is code snippet :
from datetime import datetime
import json
#import boto3
from bson import json_util
import pymongo
client = pymongo.MongoClient("mongodb://username@host:port/?authSource=admin&socketTimeoutMS=3600000&maxIdleTimeMS=3600000")
# Database Name
db = client["database_name"]
# Collection Name
quoteinfo__collection= db["collection_name"]
results = quoteinfo__collection.find({}).batch_size(1000)
doc_count = quoteinfo__collection.count_documents({})
print("documents count from collection: ",doc_count)
print(results)
record_increment_no = 1
for record in results:
print(record)
print(record_increment_no)
record_increment_no = record_increment_no + 1
results.close()
below is output screenshot for same
for batch_size = 1000 (code hangs and gives network timeout error)
pymongo code getting stuck
network timeout error
batch_size = 1 (prints documents only till 24th and then cursor hangs)
printing 24 documents and hangs
| [
"there were some issues with AWS account peering between our dev and MongoDB hosted AWS account as explained below\n\nTraffic was flowing through VPC Peering for one of the routes instead of Transit Gateway.\nMongoDB IPs were not falling under CIDR ranges of the Route Table\n\nafter adding transit gateway for MongoDb IP1 and MongoDB IP2,we are able to read data properly with any batch size for any collection.\n"
] | [
0
] | [] | [] | [
"amazon_emr",
"mongodb",
"pyspark",
"python"
] | stackoverflow_0073973822_amazon_emr_mongodb_pyspark_python.txt |
Q:
`model.summary()` with TensorFlow model subclassing print output shape as "multiple"
I tried to implement Vgg network with following VggBlock.
class VggBlock(tf.keras.Model):
def __init__(self, filters, repetitions):
super(VggBlock, self).__init__()
self.repetitions = repetitions
self.conv_layers = [Conv2D(filters=filters, kernel_size=(3, 3), padding='same', activation='relu') for _ in range(repetitions)]
self.max_pool = MaxPool2D(pool_size=(2, 2))
def call(self, inputs):
x = inputs
for layer in self.conv_layers:
x = layer(x)
return self.max_pool(x)
test_block = VggBlock(filters=64, repetitions=2)
temp_inputs = Input(shape=(224, 224, 3))
test_block(temp_inputs)
test_block.summary()
Then the above code prints:
Model: "vgg_block"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 1792
conv2d_1 (Conv2D) multiple 36928
max_pooling2d (MaxPooling2D multiple 0
)
=================================================================
Total params: 38,720
Trainable params: 38,720
Non-trainable params: 0
_________________________________________________________________
And if I build Vgg with these blocks, its summary() also prints "multiple".
There are some questions similar to my problem, ex:
https://github.com/keras-team/keras/issues/13782 ,
model.summary() can't print output shape while using subclass model
However, I could not extend the answers in the second link to the case of a varying input_shape.
How do I treat summary() in order to make "multiple" show an appropriate shape?
A:
You already linked some workarounds. You seem to be landing here, because the output shape of each layer cannot be determined. As stated here:
You can do all these things (printing input / output shapes) in a
Functional or Sequential model because these models are static graphs
of layers.
In contrast, a subclassed model is a piece of Python code (a call
method). There is no graph of layers here. We cannot know how layers
are connected to each other (because that's defined in the body of
call, not as an explicit data structure), so we cannot infer input /
output shapes.
You could also try something like this:
import tensorflow as tf
class VggBlock(tf.keras.Model):
def __init__(self, filters, repetitions, image_shape):
super(VggBlock, self).__init__()
self.repetitions = repetitions
self.conv_layers = [tf.keras.layers.Conv2D(filters=filters, kernel_size=(3, 3), padding='same', activation='relu') for _ in range(repetitions)]
self.max_pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2))
inputs = tf.keras.layers.Input(shape=image_shape)
x = inputs
for layer in self.conv_layers:
x = layer(x)
outputs = self.max_pool(x)
self.model = tf.keras.Model(inputs, outputs)
def call(self, inputs):
return self.model(inputs)
def summary(self):
self.model.summary()
test_block = VggBlock(filters=64, repetitions=2, image_shape=(224, 224, 3))
test_block.summary()
Model: "model"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_1 (InputLayer) [(None, 224, 224, 3)] 0
conv2d (Conv2D) (None, 224, 224, 64) 1792
conv2d_1 (Conv2D) (None, 224, 224, 64) 36928
max_pooling2d (MaxPooling2D (None, 112, 112, 64) 0
)
=================================================================
Total params: 38,720
Trainable params: 38,720
Non-trainable params: 0
_________________________________________________________________
| `model.summary()` with TensorFlow model subclassing print output shape as "multiple" | I tried to implement Vgg network with following VggBlock.
class VggBlock(tf.keras.Model):
def __init__(self, filters, repetitions):
super(VggBlock, self).__init__()
self.repetitions = repetitions
self.conv_layers = [Conv2D(filters=filters, kernel_size=(3, 3), padding='same', activation='relu') for _ in range(repetitions)]
self.max_pool = MaxPool2D(pool_size=(2, 2))
def call(self, inputs):
x = inputs
for layer in self.conv_layers:
x = layer(x)
return self.max_pool(x)
test_block = VggBlock(filters=64, repetitions=2)
temp_inputs = Input(shape=(224, 224, 3))
test_block(temp_inputs)
test_block.summary()
Then the above code prints:
Model: "vgg_block"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 1792
conv2d_1 (Conv2D) multiple 36928
max_pooling2d (MaxPooling2D multiple 0
)
=================================================================
Total params: 38,720
Trainable params: 38,720
Non-trainable params: 0
_________________________________________________________________
And if I build Vgg with these blocks, its summary() also prints "multiple".
There are some questions similar to my problem, ex:
https://github.com/keras-team/keras/issues/13782 ,
model.summary() can't print output shape while using subclass model
However, I could not extend the answers in the second link to the case of a varying input_shape.
How do I treat summary() in order to make "multiple" show an appropriate shape?
| [
"You already linked some workarounds. You seem to be landing here, because the output shape of each layer cannot be determined. As stated here:\n\nYou can do all these things (printing input / output shapes) in a\nFunctional or Sequential model because these models are static graphs\nof layers.\nIn contrast, a subclassed model is a piece of Python code (a call\nmethod). There is no graph of layers here. We cannot know how layers\nare connected to each other (because that's defined in the body of\ncall, not as an explicit data structure), so we cannot infer input /\noutput shapes.\n\nYou could also try something like this:\nimport tensorflow as tf\n\nclass VggBlock(tf.keras.Model):\n\n def __init__(self, filters, repetitions, image_shape):\n super(VggBlock, self).__init__()\n self.repetitions = repetitions\n\n self.conv_layers = [tf.keras.layers.Conv2D(filters=filters, kernel_size=(3, 3), padding='same', activation='relu') for _ in range(repetitions)]\n self.max_pool = tf.keras.layers.MaxPool2D(pool_size=(2, 2))\n\n inputs = tf.keras.layers.Input(shape=image_shape)\n x = inputs\n for layer in self.conv_layers:\n x = layer(x)\n outputs = self.max_pool(x)\n self.model = tf.keras.Model(inputs, outputs)\n\n def call(self, inputs):\n return self.model(inputs)\n \n def summary(self):\n self.model.summary()\n\ntest_block = VggBlock(filters=64, repetitions=2, image_shape=(224, 224, 3))\ntest_block.summary()\n\nModel: \"model\"\n_________________________________________________________________\n Layer (type) Output Shape Param # \n=================================================================\n input_1 (InputLayer) [(None, 224, 224, 3)] 0 \n \n conv2d (Conv2D) (None, 224, 224, 64) 1792 \n \n conv2d_1 (Conv2D) (None, 224, 224, 64) 36928 \n \n max_pooling2d (MaxPooling2D (None, 112, 112, 64) 0 \n ) \n \n=================================================================\nTotal params: 38,720\nTrainable params: 38,720\nNon-trainable params: 0\n_________________________________________________________________\n\n"
] | [
0
] | [] | [] | [
"keras",
"model",
"python",
"subclass",
"tensorflow"
] | stackoverflow_0074636279_keras_model_python_subclass_tensorflow.txt |
Q:
How to find a substring between two words using regex in Python
I have this kind of string in a Python program:
Quantity: 1
Return reason: Style not as expected
Customer comments: Colour is too deeo
Return Shipping Carrier: courier
I want to get the string between "Customer comments" and "Return Shipping Carrier" with regex, so how do I find this substring?
A:
If you would like to search over multiline string, use the following re flags:
re.findall('Customer comments: (.*)Return Shipping Carrier', s, re.MULTILINE|re.DOTALL)
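For example, a self-contained sketch using the sample text from the question (the variable name s is just an assumption):
import re

s = """Quantity: 1
Return reason: Style not as expected
Customer comments: Colour is too deeo
Return Shipping Carrier: courier"""

comments = re.findall('Customer comments: (.*)Return Shipping Carrier', s, re.MULTILINE | re.DOTALL)
print(comments)  # ['Colour is too deeo\n']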
| How to find a substring between two words using regex in Python | I have this kind of string in a Python program:
Quantity: 1
Return reason: Style not as expected
Customer comments: Colour is too deeo
Return Shipping Carrier: courier
I want to get the string between "Customer comments" and "Return Shipping Carrier" with regex, so how do I find this substring?
| [
"If you would like to search over multiline string, use the following re flags:\nre.findall('Customer comments: (.*)Return Shipping Carrier', s, re.MULTILINE|re.DOTALL)\n\n"
] | [
1
] | [] | [] | [
"python",
"substr"
] | stackoverflow_0074637535_python_substr.txt |
Q:
Unexpected behavior in a nested while loop
I'm trying to better understand how Python flow control works. Accordingly, I wrote a little script to understand how to make functional menus that include the ability to enter and leave sub-menus. The code below works as expected (no errors, it prints the expected output, etc). But when I enter option 4 "Exit" in the sub-menu "stage1" it accepts the input and re-prints the sub-menu. If I select option 4 again it exits the sub-menu and returns me to the main menu.
Why would it work after selecting the option twice?
I trimmed my code down to include the snippet for your review, and once I trimmed it down, it no longer requires I enter the "Exit" option twice. I'd like to have a menu with more than 1 option, so I'd love to get to the bottom of this.
import time
import os
def stage1():
print("stage1")
time.sleep(1)
stage1_loop = 1
while stage1_loop == 1:
os.system('clear')
print("Sub Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
stage1()
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
stage1_loop = 0
main_loop = 1
while main_loop == 1:
os.system('clear')
print("Main Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
stage1()
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
main_loop = 0
##############################################################
print("Main Menu")
for stage in [1, 2, 3]:
print(f"{stage}. Stage {stage}")
print("4. Exit")
If I comment out the elif lines for stage 2 & 3 in the sub-menu then the issue disappears.
EDIT - after removing the recursive call to stage 1 I fixed my issue (see answers below). I pasted in the code snippet suggested by one of the answers to clean up my menu printing code. Cheers!
A:
Your code is quite messy:
You don't need to create variables for the while condition; you can use True and break
Looks like you're repeating code a lot of times, we'll fix that later on
Here's the fixed code:
import time
import os
def stage1():
print("stage1")
time.sleep(1)
while True:
# os.system('clear')
print("Sub Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
stage1()
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
return True # returns true and exits the loop
while True:
# os.system('clear')
print("Main Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
if stage1(): # checks if stage 4 is True and then breaks the loop
break
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
break
Here's a faster non-repeating code for all 4 stages:
import time
import os
def stageFunction(stage):
if stage in [1,2,3]: print(f"Stage {stage}") # Fast print stage number
if stage == 1:
# Do whatever
pass
elif stage == 2:
# DO whatever
pass
elif stage == 3:
# Do whatever
pass
elif stage == 4:
print("Exit")
# Do whatever
exit()
else:
print("Invalid Stage!")
def stageSystem(stage=0):
# os.system('clear')
print("Main Menu")
for i in range(1,4):
print(f"{i}. Stage {i}") # fast print stages
print("4. Exit")
option = stage
if (not stage):
option = int(input("Please select a stage: "))
stageSystem(option)
else:
stageFunction(stage)
stageSystem() # recursive call
stageSystem()
It might look lengthy, but it works pretty well considering you have 4 stages.
A:
I see multiple potential issues here, let's get into it:
First issue: stage1 is recursive:
You've included a call to stage 1 inside stage 1.
Which means that when you call the stage1 function if you press 1, you have created a sub-sub menu.
To fix this issue is suggest the following edit:
def stage1():
print("stage1")
time.sleep(1)
stage1_loop = 1
while stage1_loop == 1:
os.system('clear')
print("Sub Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
continue # This skips to the next loop iteration
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
stage1_loop = 0
Second issue: You'd like for exit to quit, but you are inside a sub menu.
For that simply call the exit() method and your code will terminate altogether. You won't need to exit the submenu and then the menu.
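A minimal sketch of what that could look like inside the submenu (everything else in stage1 unchanged):
elif option == 4:
    print("Exit")
    exit()  # terminates the whole program instead of just leaving the submenu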
Note: I may have misunderstood this last point, in that case simply ignore the second issue.
| Unexpected behavior in a nested while loop | I'm trying to better understand how Python flow control works. Accordingly, I wrote a little script to understand how to make functional menus that include the ability to enter and leave sub-menus. The code below works as expected (no errors, it prints the expected output, etc). But when I enter option 4 "Exit" in the sub-menu "stage1" it accepts the input and re-prints the sub-menu. If I select option 4 again it exits the sub-menu and returns me to the main menu.
Why would it work after selecting the option twice?
I trimmed my code down to include the snippet for your review, and once I trimmed it down, it no longer requires I enter the "Exit" option twice. I'd like to have a menu with more than 1 option, so I'd love to get to the bottom of this.
import time
import os
def stage1():
print("stage1")
time.sleep(1)
stage1_loop = 1
while stage1_loop == 1:
os.system('clear')
print("Sub Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
stage1()
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
stage1_loop = 0
main_loop = 1
while main_loop == 1:
os.system('clear')
print("Main Menu")
print("1. Stage 1")
print("4. Exit")
option = int(input("Please select a stage."))
if option == 1:
stage1()
elif option == 2:
stage2()
elif option == 3:
stage3()
elif option == 4:
print("Exit")
main_loop = 0
##############################################################
print("Main Menu")
for stage in [1, 2, 3]:
print(f"{stage}. Stage {stage}")
print("4. Exit")
If I comment out the elif lines for stage 2 & 3 in the sub-menu then the issue disappears.
EDIT - after removing the recursive call to stage 1 I fixed my issue (see answers below). I pasted in the code snippet suggested by one of the answers to clean up my menu printing code. Cheers!
| [
"Your code is quite messy:\n\nYou don't require to create variables for while. You can use True and break\nLooks like you're repeating code a lot of times, we'll fix that later on\n\nHere's the fixed code:\nimport time\nimport os\n\ndef stage1():\n print(\"stage1\")\n time.sleep(1)\n while True:\n # os.system('clear')\n print(\"Sub Menu\")\n print(\"1. Stage 1\")\n print(\"4. Exit\")\n option = int(input(\"Please select a stage.\"))\n if option == 1:\n stage1()\n elif option == 2:\n stage2()\n elif option == 3:\n stage3()\n elif option == 4:\n print(\"Exit\")\n return True # returns true and exits the loop\n\nwhile True:\n # os.system('clear')\n print(\"Main Menu\")\n print(\"1. Stage 1\")\n print(\"4. Exit\")\n option = int(input(\"Please select a stage.\"))\n if option == 1:\n if stage1(): # checks if stage 4 is True and then breaks the loop\n break\n elif option == 2:\n stage2()\n elif option == 3:\n stage3()\n elif option == 4:\n print(\"Exit\")\n break\n\nHere's a faster non-repeating code for all 4 stages:\nimport time\nimport os\n\ndef stageFunction(stage):\n if stage in [1,2,3]: print(f\"Stage {stage}\") # Fast print stage number\n if stage == 1:\n # Do whatever\n pass\n elif stage == 2:\n # DO whatever\n pass\n elif stage == 3:\n # Do whatever\n pass\n elif stage == 4:\n print(\"Exit\")\n # Do whatever\n exit()\n else:\n print(\"Invalid Stage!\")\n\ndef stageSystem(stage=0):\n # os.system('clear')\n print(\"Main Menu\")\n for i in range(1,4):\n print(f\"{i}. Stage {i}\") # fast print stages\n print(\"4. Exit\")\n option = stage\n if (not stage): \n option = int(input(\"Please select a stage: \"))\n stageSystem(option)\n else:\n stageFunction(stage)\n stageSystem() # recursive call\nstageSystem()\n\nIt might look lengthy, but it works pretty well considering you have 4 stages.\n",
"I see multiple potential issues here, let's get into it:\n\nFirst issue: stage1 is recursive:\n\nYou've included a call to stage 1 inside stage 1.\nWhich means that when you call the stage1 function if you press 1, you have created a sub-sub menu.\nTo fix this issue is suggest the following edit:\ndef stage1():\n print(\"stage1\")\n time.sleep(1)\n stage1_loop = 1\n while stage1_loop == 1:\n os.system('clear')\n print(\"Sub Menu\")\n print(\"1. Stage 1\")\n print(\"4. Exit\")\n option = int(input(\"Please select a stage.\"))\n if option == 1:\n continue # This skips to the next loop iteration\n elif option == 2:\n stage2()\n elif option == 3:\n stage3()\n elif option == 4:\n print(\"Exit\")\n stage1_loop = 0\n\n\nSecond issue: You'd like for exit to quit, but you are inside a sub menu.\n\nFor that simply call the exit() method and your code will terminate altogether. You won't need to exit the submenu and then the menu.\nNote: I may have misunderstood this last point, in that case simply ignore the second issue.\n"
] | [
0,
0
] | [] | [] | [
"loops",
"nested",
"python",
"python_3.x",
"while_loop"
] | stackoverflow_0074637867_loops_nested_python_python_3.x_while_loop.txt |
Q:
AttributeError: module 'lib' has no attribute 'EVP_MD_CTX_new'
I'm attempting to use the Python package googleapiclient to download analytics, but it's giving me an OpenSSL related traceback:
File "/project/.env/lib/python3.7/site-packages/googleanalytics/auth/__init__.py", line 95, in authenticate
accounts = oauth.authenticate(credentials)
File "/project/.env/lib/python3.7/site-packages/googleanalytics/auth/credentials.py", line 216, in normalized_fn
return fn(credentials)
File "/project/.env/lib/python3.7/site-packages/googleanalytics/auth/oauth.py", line 44, in authenticate
raw_accounts = service.management().accounts().list().execute()['items']
File "/project/.env/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs)
File "/project/.env/lib/python3.7/site-packages/googleapiclient/http.py", line 931, in execute
headers=self.headers,
File "/project/.env/lib/python3.7/site-packages/googleapiclient/http.py", line 190, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 572, in new_request
self._refresh(request_orig)
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 842, in _refresh
self._do_refresh_request(http_request)
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 869, in _do_refresh_request
body = self._generate_refresh_request_body()
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 1549, in _generate_refresh_request_body
assertion = self._generate_assertion()
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 1677, in _generate_assertion
private_key, self.private_key_password), payload)
File "/project/.env/lib/python3.7/site-packages/oauth2client/crypt.py", line 92, in make_signed_jwt
signature = signer.sign(signing_input)
File "/project/.env/lib/python3.7/site-packages/oauth2client/_openssl_crypt.py", line 99, in sign
return crypto.sign(self._key, message, 'sha256')
File "/project/.env/lib/python3.7/site-packages/OpenSSL/crypto.py", line 3008, in sign
md_ctx = _lib.EVP_MD_CTX_new()
AttributeError: module 'lib' has no attribute 'EVP_MD_CTX_new'
I'm using versions:
google-api-python-client==2.26.1
pyOpenSSL==22.0.0
I'm guessing the cause of the error is a version mismatch between the Python package and system library, but I'm not sure how to resolve this. How do I diagnose this issue?
A:
It is happening due to a dependency mismatch between pyOpenSSL and cryptography.
I managed to solve this issue by downgrading the cryptography library using the following command:
pip install cryptography==36.0.0
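If you want to confirm which versions are actually installed before and after the downgrade, a quick sketch:
import cryptography
import OpenSSL

# print both library versions to verify the pinned combination took effect
print("cryptography:", cryptography.__version__)
print("pyOpenSSL:", OpenSSL.__version__)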
| AttributeError: module 'lib' has no attribute 'EVP_MD_CTX_new' | I'm attempting to use the Python package googleapiclient to download analytics, but it's giving me an OpenSSL related traceback:
File "/project/.env/lib/python3.7/site-packages/googleanalytics/auth/__init__.py", line 95, in authenticate
accounts = oauth.authenticate(credentials)
File "/project/.env/lib/python3.7/site-packages/googleanalytics/auth/credentials.py", line 216, in normalized_fn
return fn(credentials)
File "/project/.env/lib/python3.7/site-packages/googleanalytics/auth/oauth.py", line 44, in authenticate
raw_accounts = service.management().accounts().list().execute()['items']
File "/project/.env/lib/python3.7/site-packages/googleapiclient/_helpers.py", line 131, in positional_wrapper
return wrapped(*args, **kwargs)
File "/project/.env/lib/python3.7/site-packages/googleapiclient/http.py", line 931, in execute
headers=self.headers,
File "/project/.env/lib/python3.7/site-packages/googleapiclient/http.py", line 190, in _retry_request
resp, content = http.request(uri, method, *args, **kwargs)
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 572, in new_request
self._refresh(request_orig)
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 842, in _refresh
self._do_refresh_request(http_request)
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 869, in _do_refresh_request
body = self._generate_refresh_request_body()
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 1549, in _generate_refresh_request_body
assertion = self._generate_assertion()
File "/project/.env/lib/python3.7/site-packages/oauth2client/client.py", line 1677, in _generate_assertion
private_key, self.private_key_password), payload)
File "/project/.env/lib/python3.7/site-packages/oauth2client/crypt.py", line 92, in make_signed_jwt
signature = signer.sign(signing_input)
File "/project/.env/lib/python3.7/site-packages/oauth2client/_openssl_crypt.py", line 99, in sign
return crypto.sign(self._key, message, 'sha256')
File "/project/.env/lib/python3.7/site-packages/OpenSSL/crypto.py", line 3008, in sign
md_ctx = _lib.EVP_MD_CTX_new()
AttributeError: module 'lib' has no attribute 'EVP_MD_CTX_new'
I'm using versions:
google-api-python-client==2.26.1
pyOpenSSL==22.0.0
I'm guessing the cause of the error is a version mismatch between the Python package and system library, but I'm not sure how to resolve this. How do I diagnose this issue?
| [
"It is happening due to dependency mismatch between pyopenssl and cyrptography.\nI managed to solve this issue by downgrading cryptography library using following command.\n\npip install cryptography==36.0.0\n\n"
] | [
0
] | [] | [] | [
"google_api_client",
"pyopenssl",
"python"
] | stackoverflow_0071191038_google_api_client_pyopenssl_python.txt |
Q:
Unable to locate the element on an angular website
I have tried the below code and it is always timing out. But when I look it up with the browser inspector, I can see the Username input element.
I have also tried to find it by ID but not able to.
Tried almost all of the existing questions/solutions on this forum but was unable to figure it out.
driver.get("https://www.dat.com/login")
time.sleep(5)
driver.find_element_by_css_selector("a[href*='https://power.dat.com/']").click()
time.sleep(5)
# explicitly waiting until input element load
try:
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.NAME, "username"))
)
except TimeoutException:
print(
"Waited 10 seconds for the username input to load but did not happen..."
)
except Exception as e:
print(f"Exception while waiting for username input to appear: \n {e}")
sys.exit(1)
A:
2 issues here:
After clicking on "DAT Power" on the first page a new tab is opened. To continue working there you need to switch the driver to the second tab.
You will probably want to enter text into the username field there. If so, you need to wait for element clickability, not just presence.
Also, the current Selenium version (4.x) no longer supports the find_element_by_* methods; the new style should be used, as presented below.
The following code works:
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
options = Options()
options.add_argument("start-maximized")
webdriver_service = Service('C:\webdrivers\chromedriver.exe')
driver = webdriver.Chrome(options=options, service=webdriver_service)
wait = WebDriverWait(driver, 10)
url = "https://www.dat.com/login"
driver.get(url)
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, "a[href*='https://power.dat.com/']"))).click()
new_tab = driver.window_handles[1]
driver.switch_to.window(new_tab)
wait.until(EC.element_to_be_clickable((By.NAME, "username"))).send_keys("[email protected]")
The result is
| Unable to locate the element on an angular website | I have tried the below code and it is always timing out. But when I look it up with the browser inspector, I can see the Username input element.
I have also tried to find it by ID but not able to.
Tried almost all of the existing questions/solutions on this forum but was unable to figure it out.
driver.get("https://www.dat.com/login")
time.sleep(5)
driver.find_element_by_css_selector("a[href*='https://power.dat.com/']").click()
time.sleep(5)
# explicitly waiting until input element load
try:
WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.NAME, "username"))
)
except TimeoutException:
print(
"Waited 10 seconds for the username input to load but did not happen..."
)
except Exception as e:
print(f"Exception while waiting for username input to appear: \n {e}")
sys.exit(1)
| [
"2 issues here:\n\nAfter clicking on \"DAT Power\" on the first page a new tab is opened. To continue working there you need to switch the driver to the second tab.\nYou will probably want to enter a text into the username there. If so you need to wait for element clickability, not presence only.\nAlso, the current Selenium version (4.x) no more supports find_element_by_* methods, the new style should be used, as following presented.\nThe following code works:\n\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.service import Service\nfrom selenium.webdriver.chrome.options import Options\nfrom selenium.webdriver.support.ui import WebDriverWait\nfrom selenium.webdriver.common.by import By\nfrom selenium.webdriver.support import expected_conditions as EC\n\noptions = Options()\noptions.add_argument(\"start-maximized\")\n\nwebdriver_service = Service('C:\\webdrivers\\chromedriver.exe')\ndriver = webdriver.Chrome(options=options, service=webdriver_service)\nwait = WebDriverWait(driver, 10)\n\nurl = \"https://www.dat.com/login\"\ndriver.get(url)\n\nwait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, \"a[href*='https://power.dat.com/']\"))).click()\nnew_tab = driver.window_handles[1]\ndriver.switch_to.window(new_tab)\nwait.until(EC.element_to_be_clickable((By.NAME, \"username\"))).send_keys(\"[email protected]\")\n\nThe result is\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"selenium",
"selenium_webdriver",
"webdriverwait"
] | stackoverflow_0074636236_python_python_3.x_selenium_selenium_webdriver_webdriverwait.txt |
Q:
Drawing text on images in pillow
I was trying to solve a problem in code that should draw text from a text file onto a picture. The problem is that the program stacks all the text on top of each other in every picture after the first one (2, 3, 4, 5). I can't explain the problem well, so I'll just leave a photo (https://i.stack.imgur.com/nkY2O.png)
#vars
f = open("text.txt","r")
img = Image.open("testpic.jpg")
draw = ImageDraw.Draw(img)
img_center = (215,190)
fnt = ImageFont.truetype('arial.ttf',32)
#code
for i in range(1,6):
img_txt = (f.readline())
draw.text(img_center, img_txt, font=fnt, stroke_fill=(0, 0, 0))
img.save('Image'+str(i)+'.png')
I tried to change the image text to f.readlines() but the problem was still there.
A:
Here's a pretty reasonable, but none too complicated method:
#!/usr/bin/env python3
from PIL import Image, ImageDraw, ImageFont
# Generate annotations file with 3 lines
annotations = 'annotations.txt'
with open(annotations, 'w') as f:
f.write("Merry\nChristmas\nStackOverflow")
# Open background and get dimensions
im = Image.open("background.jpg")
width, height = im.size
# Make a canvas to draw on, same size as background
# Set up everything necessary outside loop
canvas = im.copy()
draw = ImageDraw.Draw(canvas)
textpos = (int(width/4), int(height/3))
font = ImageFont.truetype('arial.ttf',32)
# Iterate over lines in "annotations.txt"
N = 0
with open(annotations,"r") as f:
while True:
text = f.readline()
if not text:
break
# Copy background onto our canvas - thereby clearing previous iteration
canvas.paste(im)
draw.text(textpos, text, font=font, stroke_fill=(0, 0, 0))
canvas.save(f'image-{N}.png')
N += 1
It produces three images:
-rw-r--r--@ 1 mark staff 2863 29 Nov 16:13 image-0.png
-rw-r--r--@ 1 mark staff 4134 29 Nov 16:13 image-1.png
-rw-r--r--@ 1 mark staff 5816 29 Nov 16:13 image-2.png
A:
Here's my modified version of your code. The key change is drawing on a fresh copy of the base image for each line, so text from previous iterations doesn't accumulate:
#vars
img = Image.open("testpic.jpg")
img_center = (215,190)
fnt = ImageFont.truetype('arial.ttf',32)
#code
with open("text.txt", "rt") as f:
for i, line in enumerate(f, start=1):
img_copy = img.copy()
draw = ImageDraw.Draw(img_copy)
draw.text(img_center, line, font=fnt, stroke_fill=(0, 0, 0))
img_copy.save(f'Image{i}.png')
if i == 6:
break
| Drawing text on images in pillow | i was trying to solve a problem i had in a code that should draw text from a text file on a picture. the problem i had is that the program stack all the text on each other in every picture after the first picture(2,3,4,5). i can't explain what's the problem so i'll just leave a photo (https://i.stack.imgur.com/nkY2O.png)
#vars
f = open("text.txt","r")
img = Image.open("testpic.jpg")
draw = ImageDraw.Draw(img)
img_center = (215,190)
fnt = ImageFont.truetype('arial.ttf',32)
#code
for i in range(1,6):
img_txt = (f.readline())
draw.text(img_center, img_txt, font=fnt, stroke_fill=(0, 0, 0))
img.save('Image'+str(i)+'.png')
i tried to change the image text to f.readlines() but the problem was still there.
| [
"Here's a pretty reasonable, but none too complicated method:\n#!/usr/bin/env python3\n\nfrom PIL import Image, ImageDraw, ImageFont\n\n# Generate annotations file with 3 lines\nannotations = 'annotations.txt'\nwith open(annotations, 'w') as f:\n f.write(\"Merry\\nChristmas\\nStackOverflow\")\n\n# Open background and get dimensions\nim = Image.open(\"background.jpg\")\nwidth, height = im.size\n\n# Make a canvas to draw on, same size as background\n# Set up everything necessary outside loop\ncanvas = im.copy()\ndraw = ImageDraw.Draw(canvas)\ntextpos = (int(width/4), int(height/3))\nfont = ImageFont.truetype('arial.ttf',32)\n\n# Iterate over lines in \"annotations.txt\"\nN = 0\nwith open(annotations,\"r\") as f:\n while True:\n text = f.readline()\n if not text:\n break\n # Copy background onto our canvas - thereby clearing previous iteration\n canvas.paste(im)\n draw.text(textpos, text, font=font, stroke_fill=(0, 0, 0))\n canvas.save(f'image-{N}.png')\n N += 1\n\nIt produces three images:\n-rw-r--r--@ 1 mark staff 2863 29 Nov 16:13 image-0.png\n-rw-r--r--@ 1 mark staff 4134 29 Nov 16:13 image-1.png\n-rw-r--r--@ 1 mark staff 5816 29 Nov 16:13 image-2.png\n\n\n\n\n",
"Here's my modified version of your code:\n#vars\nimg = Image.open(\"testpic.jpg\")\nimg_center = (215,190)\nfnt = ImageFont.truetype('arial.ttf',32)\n\n#code\nwith open(\"text.txt\", \"rt\") as f:\n for i, line in enumerate(f, start=1):\n img_copy = img.copy()\n draw = ImageDraw.Draw(img_copy)\n draw.text(img_center, line, font=fnt, stroke_fill=(0, 0, 0))\n img_copy.save(f'Image{i}.png')\n if i == 6:\n break\n\n"
] | [
0,
0
] | [] | [] | [
"python",
"python_imaging_library"
] | stackoverflow_0074597779_python_python_imaging_library.txt |
Q:
Pytorch's share_memory_() vs built-in Python's shared_memory: Why in Pytorch we don't need to access the shared memory-block?
Trying to learn about the built-in multiprocessing and PyTorch's multiprocessing packages, I have observed different behavior between the two. I find this strange since PyTorch's package is fully compatible with the built-in package.
Concretely, I'm referring to the way variables are shared between processes. In PyTorch, tensors are moved to shared memory via the in-place operation share_memory_(). On the other hand, we can get the same result with the built-in package by using the shared_memory module.
The difference between the two that I'm struggling to understand is that, with the built-in version, we have to explicitly access the shared memory block inside the launched process. However, we don't need to do that with the PyTorch version.
Here is a Pytorch's toy example showing this:
import time
import torch
# the same behavior happens when importing:
# import multiprocessing as mp
import torch.multiprocessing as mp
def get_time(s):
return round(time.time() - s, 1)
def foo(a):
# wait ~1sec to print the value of the tensor.
time.sleep(1.0)
with lock:
#-------------------------------------------------------------------
# WITHOUT explicitly accessing the shared memory block, we can observe
# that the tensor has changed:
#-------------------------------------------------------------------
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# global variables.
lock = mp.Lock()
s = time.time()
if __name__ == '__main__':
print("Module\t\tTime\t\tValue")
print("-"*50)
# create tensor and assign it to shared memory.
a = torch.zeros(2).share_memory_()
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# start child process.
p0 = mp.Process(target=foo, args=(a,))
p0.start()
# modify the value of the tensor after ~0.5sec.
time.sleep(0.5)
with lock:
a[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a}")
time.sleep(1.5)
p0.join()
which outputs (as expected):
Module Time Value
--------------------------------------------------
__main__ 0.0 tensor([0., 0.])
__main__ 0.5 tensor([1., 0.])
__mp_main__ 1.0 tensor([1., 0.])
And here is a toy example with the built-in package:
import time
import multiprocessing as mp
from multiprocessing import shared_memory
import numpy as np
def get_time(s):
return round(time.time() - s, 1)
def foo(shm_name, shape, type_):
#-------------------------------------------------------------------
# WE NEED TO explicitly access the shared memory block to observe
# that the array has changed:
#-------------------------------------------------------------------
existing_shm = shared_memory.SharedMemory(name=shm_name)
a = np.ndarray(shape, type_, buffer=existing_shm.buf)
# wait ~1sec to print the value.
time.sleep(1.0)
with lock:
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# global variables.
lock = mp.Lock()
s = time.time()
if __name__ == '__main__':
print("Module\t\tTime\t\tValue")
print("-"*35)
# create numpy array and shared memory block.
a = np.zeros(2,)
shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
a_shared = np.ndarray(a.shape, a.dtype, buffer=shm.buf)
a_shared[:] = a[:]
print(f"{__name__}\t{get_time(s)}\t\t{a_shared}")
# start child process.
p0 = mp.Process(target=foo, args=(shm.name, a.shape, a.dtype))
p0.start()
# modify the value of the array after ~0.5sec.
time.sleep(0.5)
with lock:
a_shared[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a_shared}")
time.sleep(1.5)
p0.join()
which equivalently outputs, as expected:
Module Time Value
-----------------------------------
__main__ 0.0 [0. 0.]
__main__ 0.5 [1. 0.]
__mp_main__ 1.0 [1. 0.]
So what I'm struggling to understand is why we don't need to follow the same steps in both versions, built-in and PyTorch's, i.e. how PyTorch is able to avoid the need to explicitly access the shared memory block?
P.S. I'm using a Windows OS and Python 3.9
A:
You are writing a love letter to the pytorch authors.
That is, you are patting them on the back,
congratulating their wrapper efforts as "a job well done!"
It's a lovely library.
Let's take a step back and use a very simple
data structure, a dictionary d.
If parent initializes d with some values,
and then kicks off a pair of worker children,
each child has a copy of d.
How did that happen?
The multiprocessing module forked off
the workers, looked at the set of defined
variables which includes d, and serialized
those (key, value) pairs from parent down to
the children.
So at this point we have 3 independent copies
of d. If parent or either child modifies d,
the other 2 copies are completely unaffected.
Now switch gears to the pytorch wrapper.
You offered some nice concise code that demos
the little .SharedMemory() dance an app would
need to do if we want 3 references to same shared structure
rather than 3 independent copies.
The pytorch wrapper serializes references
to common data structure, rather than producing copies.
Under the hood it's doing exactly the dance that you did.
But with no repeated verbiage up at the app level,
as the details have nicely been abstracted away, FTW!
Why in Pytorch we don't need to access the shared memory-block?
tl;dr: We do need to access it. But the library shoulders the burden of worrying about the details, so we don't have to.
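To make the "independent copies" point concrete, here is a minimal sketch (not from the original post; the names are illustrative) showing that a plain dict passed to a worker is pickled into a copy, so mutating it in the child leaves the parent's copy untouched:
import multiprocessing as mp

def worker(d):
    # the child received a pickled copy of d, not a shared reference
    d["who"] = "child"
    print("child sees :", d)

if __name__ == "__main__":
    d = {"who": "parent", "value": 0}
    p = mp.Process(target=worker, args=(d,))
    p.start()
    p.join()
    # the parent's copy is unchanged because the child only mutated its own copy
    print("parent sees:", d)  # -> {'who': 'parent', 'value': 0}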
A:
PyTorch has a simple wrapper around shared memory; Python's shared memory module is only a wrapper around the underlying OS-dependent functions.
The way it can be done is that you don't serialize the array or the shared memory themselves; you only serialize what's needed to re-create them, using the __getstate__ and __setstate__ methods from the docs, so that your object acts as both a proxy and a container at the same time.
The following bar class can double as a proxy and a container this way, which is useful if the user shouldn't have to worry about the shared-memory part.
import time
import multiprocessing as mp
from multiprocessing import shared_memory
import numpy as np
class bar:
def __init__(self):
self._size = 10
self._type = np.uint8
self.shm = shared_memory.SharedMemory(create=True, size=self._size)
self._mem_name = self.shm.name
self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf)
def __getstate__(self):
"""Return state values to be pickled."""
return (self._mem_name, self._size, self._type)
def __setstate__(self, state):
"""Restore state from the unpickled state values."""
self._mem_name, self._size, self._type = state
self.shm = shared_memory.SharedMemory(self._mem_name)
self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf)
def get_time(s):
return round(time.time() - s, 1)
def foo(shm, lock):
# -------------------------------------------------------------------
# without explicitly accessing the shared memory block we observe
# that the array has changed:
# -------------------------------------------------------------------
a = shm
# wait ~1sec to print the value.
time.sleep(1.0)
with lock:
print(f"{__name__}\t{get_time(s)}\t\t{a.arr}")
# global variables.
s = time.time()
if __name__ == '__main__':
lock = mp.Lock() # to work on windows/mac.
print("Module\t\tTime\t\tValue")
print("-" * 35)
# create numpy array and shared memory block.
a = bar()
print(f"{__name__}\t{get_time(s)}\t\t{a.arr}")
# start child process.
p0 = mp.Process(target=foo, args=(a, lock))
p0.start()
# modify the value of the array after ~0.5sec.
time.sleep(0.5)
with lock:
a.arr[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a.arr}")
time.sleep(1.5)
p0.join()
Python just makes it much easier to hide such details inside the class without bothering the user with them.
Edit: I wish they'd make locks non-inheritable so your code would raise an error on the lock; instead you'll find out one day that it doesn't actually lock ... after it crashes your application in production.
| Pytorch's share_memory_() vs built-in Python's shared_memory: Why in Pytorch we don't need to access the shared memory-block? | Trying to learn about the built-in multiprocessing and Pytorch's multiprocessing packages, I have observed a different behavior between both. I find this to be strange since Pytorch's package is fully-compatible with the built-in package.
Concretely, I'm refering to the way variables are shared between processes. In Pytorch, tensor's are moved to shared_memory via the inplace operation share_memory_(). On the other hand, we can get the same result with the built-in package by using the shared_memory module.
The difference between both that I'm struggling to understand is that, with the built-in version, we have to explicitely access the shared memory-block inside the launched process. However, we don't need to do that with the Pytorch version.
Here is a Pytorch's toy example showing this:
import time
import torch
# the same behavior happens when importing:
# import multiprocessing as mp
import torch.multiprocessing as mp
def get_time(s):
return round(time.time() - s, 1)
def foo(a):
# wait ~1sec to print the value of the tensor.
time.sleep(1.0)
with lock:
#-------------------------------------------------------------------
# WITHOUT explicitely accessing the shared memory block, we can observe
# that the tensor has changed:
#-------------------------------------------------------------------
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# global variables.
lock = mp.Lock()
s = time.time()
if __name__ == '__main__':
print("Module\t\tTime\t\tValue")
print("-"*50)
# create tensor and assign it to shared memory.
a = torch.zeros(2).share_memory_()
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# start child process.
p0 = mp.Process(target=foo, args=(a,))
p0.start()
# modify the value of the tensor after ~0.5sec.
time.sleep(0.5)
with lock:
a[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a}")
time.sleep(1.5)
p0.join()
which outputs (as expected):
Module Time Value
--------------------------------------------------
__main__ 0.0 tensor([0., 0.])
__main__ 0.5 tensor([1., 0.])
__mp_main__ 1.0 tensor([1., 0.])
And here is a toy example with the built-in package:
import time
import multiprocessing as mp
from multiprocessing import shared_memory
import numpy as np
def get_time(s):
return round(time.time() - s, 1)
def foo(shm_name, shape, type_):
#-------------------------------------------------------------------
# WE NEED TO explicitely access the shared memory block to observe
# that the array has changed:
#-------------------------------------------------------------------
existing_shm = shared_memory.SharedMemory(name=shm_name)
a = np.ndarray(shape, type_, buffer=existing_shm.buf)
# wait ~1sec to print the value.
time.sleep(1.0)
with lock:
print(f"{__name__}\t{get_time(s)}\t\t{a}")
# global variables.
lock = mp.Lock()
s = time.time()
if __name__ == '__main__':
print("Module\t\tTime\t\tValue")
print("-"*35)
# create numpy array and shared memory block.
a = np.zeros(2,)
shm = shared_memory.SharedMemory(create=True, size=a.nbytes)
a_shared = np.ndarray(a.shape, a.dtype, buffer=shm.buf)
a_shared[:] = a[:]
print(f"{__name__}\t{get_time(s)}\t\t{a_shared}")
# start child process.
p0 = mp.Process(target=foo, args=(shm.name, a.shape, a.dtype))
p0.start()
# modify the value of the vaue after ~0.5sec.
time.sleep(0.5)
with lock:
a_shared[0] = 1.0
print(f"{__name__}\t{get_time(s)}\t\t{a_shared}")
time.sleep(1.5)
p0.join()
which equivalently outputs, as expected:
Module Time Value
-----------------------------------
__main__ 0.0 [0. 0.]
__main__ 0.5 [1. 0.]
__mp_main__ 1.0 [1. 0.]
So what I'm strugging to understand is why we don't need to follow the same steps in both versions, built-in and Pytorch's, i.e. how Pytorch is able to avoid the need to explicitely access the shared memory-block?
P.S. I'm using a Windows OS and Python 3.9
| [
"You are writing a love letter to the pytorch authors.\nThat is, you are patting them on the back,\ncongratulating their wrapper efforts as \"a job well done!\"\nIt's a lovely library.\nLet's take a step back and use a very simple\ndata structure, a dictionary d.\nIf parent initializes d with some values,\nand then kicks off a pair of worker children,\neach child has a copy of d.\nHow did that happen?\nThe multiprocessing module forked off\nthe workers, looked at the set of defined\nvariables which includes d, and serialized\nthose (key, value) pairs from parent down to\nthe children.\nSo at this point we have 3 independent copies\nof d. If parent or either child modifies d,\nthe other 2 copies are completely unaffected.\nNow switch gears to the pytorch wrapper.\nYou offered some nice concise code that demos\nthe little .SharedMemory() dance an app would\nneed to do if we want 3 references to same shared structure\nrather than 3 independent copies.\nThe pytorch wrapper serializes references\nto common data structure, rather than producing copies.\nUnder the hood it's doing exactly the dance that you did.\nBut with no repeated verbiage up at the app level,\nas the details have nicely been abstracted away, FTW!\n\nWhy in Pytorch we don't need to access the shared memory-block?\n\ntl;dr: We do need to access it. But the library shoulders the burden of worrying about the details, so we don't have to.\n",
"pytorch has a simple wrapper around shared memory, python's shared memory module is only a wrapper around the underlying OS dependent functions.\nthe way it can be done is that you don't serialize the array or the shared memory themselves, and only serialize what's needed to create them by using the __getstate__ and __setstate__ methods from the docs, so that your object acts as both a proxy and a container at the same time.\nthe following bar class can double for a proxy and a container this way, which is useful if the user shouldn't have to worry about the shared memory part.\nimport time\nimport multiprocessing as mp\nfrom multiprocessing import shared_memory\nimport numpy as np\n\nclass bar:\n def __init__(self):\n self._size = 10\n self._type = np.uint8\n self.shm = shared_memory.SharedMemory(create=True, size=self._size)\n self._mem_name = self.shm.name\n self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf)\n\n def __getstate__(self):\n \"\"\"Return state values to be pickled.\"\"\"\n return (self._mem_name, self._size, self._type)\n\n def __setstate__(self, state):\n \"\"\"Restore state from the unpickled state values.\"\"\"\n self._mem_name, self._size, self._type = state\n self.shm = shared_memory.SharedMemory(self._mem_name)\n self.arr = np.ndarray([self._size], self._type, buffer=self.shm.buf)\n\ndef get_time(s):\n return round(time.time() - s, 1)\n\ndef foo(shm, lock):\n # -------------------------------------------------------------------\n # without explicitely access the shared memory block we observe\n # that the array has changed:\n # -------------------------------------------------------------------\n a = shm\n\n # wait ~1sec to print the value.\n time.sleep(1.0)\n with lock:\n print(f\"{__name__}\\t{get_time(s)}\\t\\t{a.arr}\")\n\n# global variables.\ns = time.time()\n\nif __name__ == '__main__':\n lock = mp.Lock() # to work on windows/mac.\n\n print(\"Module\\t\\tTime\\t\\tValue\")\n print(\"-\" * 35)\n\n # create numpy array and shared memory block.\n a = bar()\n print(f\"{__name__}\\t{get_time(s)}\\t\\t{a.arr}\")\n\n # start child process.\n p0 = mp.Process(target=foo, args=(a, lock))\n p0.start()\n\n # modify the value of the vaue after ~0.5sec.\n time.sleep(0.5)\n with lock:\n a.arr[0] = 1.0\n\n print(f\"{__name__}\\t{get_time(s)}\\t\\t{a.arr}\")\n time.sleep(1.5)\n\n p0.join()\n\npython just makes it much easier to hide such details inside the class without bothering the user with such details.\nEdit: i wish they'd make locks non-inheritable so your code can raise an error on the lock, instead you'll find out one day that it doesn't actually lock ... After it crashes your application in production.\n"
] | [
2,
1
] | [] | [] | [
"multiprocessing",
"python",
"pytorch",
"shared_memory"
] | stackoverflow_0074635994_multiprocessing_python_pytorch_shared_memory.txt |
Q:
How can I generate a pseudo-random sequence of binary numbers using a Hopfield neural network?
For some work, I need to generate a sequence of random binary patterns using the Hopfield neural network.
For example, I want to generate a 42-bit-long binary sequence such as '11101100011100111001100011001110001010001'. How can I generate it?
I have tried several ways but no proper solution is coming out.
A:
Use numpy to pick a random int.
Here we specify dtype to address up to 64 bits, and a maximum value of 2^42.
Use f-string to print the binary representation and pad with leading 0 if needed.
import numpy as np
n = np.random.randint(2**42, dtype=np.int64)
print(f"{n:042b}")
| How can I generate pseudo random sequence of binary number using Hopefield neural network? | For some work, I need to generate a sequence of random binary patterns using the Hopefield neural network.
Like I want to generate a 42-bit long binary sequence such as '11101100011100111001100011001110001010001' How can I generate it?
I have tried several ways but no proper solution is coming out.
| [
"Use numpy to pick a random int.\nHere we specifiy dtype to address up to 64 bits, and a maximum value of 2^42.\nUse f-string to print the binary representation and pad with leading 0 if needed.\nimport numpy as np\n\nn = np.random.randint(2**42, dtype=np.int64)\nprint(f\"{n:042b}\")\n\n"
] | [
1
] | [] | [] | [
"binary",
"python",
"random"
] | stackoverflow_0074637615_binary_python_random.txt |
Q:
I need to join every two string in the list
input list = ["Raman","Panjikar","Rohan","singh","roshan","kumar"]
output list = ["Raman Panjikar","Rohan singh","roshan kumar"]
Tried concat but it did not work.
A:
input_list = ["Raman","Panjikar","Rohan","singh","roshan","kumar"]
output_list=[]
if len(input_list)%2==0:
l=len(input_list)
else:
l=len(input_list)-1
for i in range(0,l,2):
x=f'{input_list[i]} {input_list[i+1]}'
output_list.append(x)
print(output_list)
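A more compact alternative is to zip the even- and odd-indexed slices; this sketch produces the same pairs and silently drops a trailing unpaired name:
input_list = ["Raman", "Panjikar", "Rohan", "singh", "roshan", "kumar"]

# pair element 0 with 1, 2 with 3, ... and join each pair with a space
output_list = [f"{a} {b}" for a, b in zip(input_list[::2], input_list[1::2])]
print(output_list)  # ['Raman Panjikar', 'Rohan singh', 'roshan kumar']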
| I need to join every two string in the list | input list = ["Raman","Panjikar","Rohan","singh","roshan","kumar"]
output list = ["Raman Panjikar","Rohan singh","roshan kumar"]
Tried concat but it did not work.
| [
"input_list = [\"Raman\",\"Panjikar\",\"Rohan\",\"singh\",\"roshan\",\"kumar\"]\noutput_list=[]\nif len(input_list)%2==0:\n l=len(input_list)\nelse:\n l=len(input_list)-1\nfor i in range(0,l,2):\n x=f'{input_list[i]} {input_list[i+1]}'\n output_list.append(x)\nprint(output_list)\n\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074637476_python.txt |
Q:
How can i add subject to my python email program in ssl?
I want to add a subject to my Python email program. How can I do that? I don't want to use any extra Python libraries; I just want to do it with ssl as below. Can anyone give me a code example that I can just copy and paste? I looked at several Stack Overflow solutions too, but I don't understand how to apply them to my program, since I'm not a Python programmer but I want to use this program. If you know how to do this with ssl only, please answer this question with a code example; please don't just suggest another Stack Overflow question link, because I have looked at many and couldn't make them work.
import smtplib, ssl
email = "[email protected]"
password = "mypassword"
message = """\
Hello World
"""
receiver = "[email protected]"
port = 465
sslcontext = ssl.create_default_context()
connection = smtplib.SMTP_SSL("smtp.gmail.com", port, context=sslcontext)
connection.login(email, password)
connection.sendmail(email, reciever, message)
print("sent")
A:
You can include the subject as a header line at the top of the message text.
Try this:
import smtplib, ssl
email = "[email protected]"
password = "mypassword"
subject= "Put here your subject"
body = """\
Hello World
"""
message = 'Subject: {}\n\n{}'.format(subject, body)
receiver = "[email protected]"
port = 465
sslcontext = ssl.create_default_context()
connection = smtplib.SMTP_SSL("smtp.gmail.com", port, context=sslcontext)
connection.login(email, password)
connection.sendmail(email, receiver, message)
print("sent")
| How can i add subject to my python email program in ssl? | I want to add subject to my python email program how can i do that? i dont want to use any python libraries i just want to do that with ssl, Can anyone can give me code example so that i can just copy paste? i looked 4 stack over flow solutions too but i dont understand how to do that in my program the reason was i am not a python programmer but i want to use this program, i think to add a subject you hardly have to spend 1 min writing that, to if you know how to do that in ssl only pls answer this question with a code example, dont suggest me any other stack overflow question link because i looked many and i cant able to do that,
import smtplib, ssl
email = "[email protected]"
password = "mypassword"
message = """\
Hello World
"""
receiver = "[email protected]"
port = 465
sslcontext = ssl.create_default_context()
connection = smtplib.SMTP_SSL("smtp.gmail.com", port, context=sslcontext)
connection.login(email, password)
connection.sendmail(email, reciever, message)
print("sent")
| [
"You can make the subject as a header of the body text.\nTry this :\nimport smtplib, ssl\n\nemail = \"[email protected]\"\npassword = \"mypassword\"\n\nsubject= \"Put here your subject\"\nbody = \"\"\"\\\nHello World\n\"\"\"\nmessage = 'Subject: {}\\n\\n{}'.format(subject, body)\n\nreceiver = \"[email protected]\"\nport = 465\n\nsslcontext = ssl.create_default_context()\nconnection = smtplib.SMTP_SSL(\"smtp.gmail.com\", port, context=sslcontext)\nconnection.login(email, password)\nconnection.sendmail(email, reciever, message)\n\nprint(\"sent\")\n\n"
] | [
0
] | [] | [] | [
"python",
"python_3.x",
"ssl"
] | stackoverflow_0074638146_python_python_3.x_ssl.txt |
Q:
Python: website's class prints out an empty list
I'm trying to scrape everything in the class stats (item price and price changes) with the following script:
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
url = "https://secure.runescape.com/m=itemdb_oldschool/Dragon+warhammer/viewitem?obj=13576"
uClient = uReq(url)
page_html = uClient.read()
page_soup = soup(page_html, "html.parser")
price = page_soup.find_all(class_ = "stats")
print(price)
I get this print:
[]
I used this script for all my other web scrapes and it's the first time I've gotten something like this.
I tried looking around and asked some people, but I still can't find a solution.
A:
Check the value of the page_soup variable:
<html style="height:100%"><head><META NAME="ROBOTS" CONTENT="NOINDEX, NOFOLLOW"><meta name="format-detection" content="telephone=no"><meta name="viewport" content="initial-scale=1.0"><meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"><script type="text/javascript" src="/_Incapsula_Resource?SWJIYLWA=719d34d31c8e3a6e6fffd425f7e032f3"></script><script src="/Criciousand-meth-shake-Exit-be-till-in-ches-Shad" async></script></head><body style="margin:0px;height:100%"><iframe id="main-iframe" src="/_Incapsula_Resource?SWUDNSAI=30&xinfo=7-5532445-0%20NNNY%20RT%281620414344651%2056%29%20q%280%20-1%20-1%201%29%20r%281%20-1%29%20B12%2814%2c0%2c0%29%20U5&incident_id=1233000410021120939-28775082668132935&edet=12&cinfo=0e000000d694&rpinfo=0&cts=UC3pkO3NyZP9f4EA4%2fm56lwz1Y6BhOV6CwF4xNVSeeeNp96DzLjUUDt3%2b5RYEDst" frameborder=0 width="100%" height="100%" marginheight="0px" marginwidth="0px">Request unsuccessful. Incapsula incident ID: 1233000410021120939-28775082668132935</iframe></body></html>
If you visit the website in incognito mode you will see the same result.
As the page does not have a class named 'stats', the result of the page_soup.find_all(class_ = "stats") is an empty list.
A:
A possible problem with site parsing is that when you request a site, it may decide the request is coming from a bot. To prevent this, you need to send headers that contain a user-agent with the request; the site will then assume that you are a user and display the information.
If the first method does not help, you can try rotating the user-agent, for example switching between PC, mobile, and tablet, as well as between browsers, e.g. Chrome, Firefox, Safari, Edge and so on. It is also a good idea to combine rotating proxies with rotating user-agents, which will minimize the chance of being blocked.
Check code in online IDE.
from bs4 import BeautifulSoup
import requests, json, lxml
# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36",
}
data = []
html = requests.get("https://secure.runescape.com/m=itemdb_oldschool/Dragon+warhammer/viewitem?obj=13576", headers=headers, timeout=30)
soup = BeautifulSoup(html.text, "lxml")
current_guide_price = soup.select_one("h3 span").text
today_change = soup.select_one("li:nth-child(1) .stats__gp-change").text
one_month_change = soup.select_one("li+ li .stats__change--positive .stats__gp-change").text
three_month_change = soup.select_one(".stats__change--negative .stats__gp-change").text
six_month_change = soup.select_one("li:nth-child(4) .stats__gp-change").text
data.append({
"Current Guide Price": current_guide_price,
"Today's Change": today_change,
"1 Month Change": one_month_change,
"3 Month Change": three_month_change,
"6 Month Change": six_month_change
})
print(json.dumps(data, indent=2))
Example output:
[
{
"Current Guide Price": "29.6m",
"Today's Change": "110.8k",
"1 Month Change": "475.0k",
"3 Month Change": "- 4.3m",
"6 Month Change": "- 18.5m"
}
]
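A minimal sketch of the user-agent rotation mentioned above (the two strings are just illustrative examples; in practice you would keep a longer, up-to-date list and pick one per request):
import random
import requests

user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/15.5 Safari/605.1.15",
]

headers = {"User-Agent": random.choice(user_agents)}  # a different UA each run
html = requests.get("https://secure.runescape.com/m=itemdb_oldschool/Dragon+warhammer/viewitem?obj=13576", headers=headers, timeout=30)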
| Python: website's class prints out an empty list | I'm trying to scrape everything in the class stats (item price and price changes) with the following script:
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup
url = "https://secure.runescape.com/m=itemdb_oldschool/Dragon+warhammer/viewitem?obj=13576"
uClient = uReq(url)
page_html = uClient.read()
page_soup = soup(page_html, "html.parser")
price = page_soup.find_all(class_ = "stats")
print(price)
I get this print:
[]
I used this script for all my other webscrappes and it's the first time I get something like that.
I tried looking around, asked some people, I still can't find a solution.
| [
"check the value of page_soup variable:\n<html style=\"height:100%\"><head><META NAME=\"ROBOTS\" CONTENT=\"NOINDEX, NOFOLLOW\"><meta name=\"format-detection\" content=\"telephone=no\"><meta name=\"viewport\" content=\"initial-scale=1.0\"><meta http-equiv=\"X-UA-Compatible\" content=\"IE=edge,chrome=1\"><script type=\"text/javascript\" src=\"/_Incapsula_Resource?SWJIYLWA=719d34d31c8e3a6e6fffd425f7e032f3\"></script><script src=\"/Criciousand-meth-shake-Exit-be-till-in-ches-Shad\" async></script></head><body style=\"margin:0px;height:100%\"><iframe id=\"main-iframe\" src=\"/_Incapsula_Resource?SWUDNSAI=30&xinfo=7-5532445-0%20NNNY%20RT%281620414344651%2056%29%20q%280%20-1%20-1%201%29%20r%281%20-1%29%20B12%2814%2c0%2c0%29%20U5&incident_id=1233000410021120939-28775082668132935&edet=12&cinfo=0e000000d694&rpinfo=0&cts=UC3pkO3NyZP9f4EA4%2fm56lwz1Y6BhOV6CwF4xNVSeeeNp96DzLjUUDt3%2b5RYEDst\" frameborder=0 width=\"100%\" height=\"100%\" marginheight=\"0px\" marginwidth=\"0px\">Request unsuccessful. Incapsula incident ID: 1233000410021120939-28775082668132935</iframe></body></html>\n\nif you visit the website in incognito mode you will see the same result.\nAs the page does not have a class named 'stats', the result of the page_soup.find_all(class_ = \"stats\") is an empty list.\n",
"A possible problem with site parsing may arise due to the fact that when trying to request a site, it may consider that this is a bot, so that this does not happen, you need to send headers that contain user-agent in the request, then the site will assume that you are a user and display the information.\nIf the first method did not help, you can try using rotate user-agent, for example, to switch between PC, mobile, and tablet, as well as between browsers e.g. Chrome, Firefox, Safari, Edge and so on. A good idea will be also to combine rotated proxies with rotated user-agents which will minimize the chance of being blocked.\nCheck code in online IDE.\nfrom bs4 import BeautifulSoup\nimport requests, json, lxml\n\n# https://requests.readthedocs.io/en/latest/user/quickstart/#custom-headers\nheaders = {\n \"User-Agent\": \"Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/103.0.0.0 Safari/537.36\",\n}\n\ndata = []\n\nhtml = requests.get(\"https://secure.runescape.com/m=itemdb_oldschool/Dragon+warhammer/viewitem?obj=13576\", headers=headers, timeout=30)\nsoup = BeautifulSoup(html.text, \"lxml\")\n\ncurrent_guide_price = soup.select_one(\"h3 span\").text\ntoday_change = soup.select_one(\"li:nth-child(1) .stats__gp-change\").text\none_month_change = soup.select_one(\"li+ li .stats__change--positive .stats__gp-change\").text\nthree_month_change = soup.select_one(\".stats__change--negative .stats__gp-change\").text\nsix_month_change = soup.select_one(\"li:nth-child(4) .stats__gp-change\").text\n\ndata.append({\n \"Current Guide Price\": current_guide_price,\n \"Today's Change\": today_change,\n \"1 Month Change\": one_month_change,\n \"3 Month Change\": three_month_change,\n \"6 Month Change\": six_month_change\n})\n\nprint(json.dumps(data, indent=2))\n\nExample output:\n[\n {\n \"Current Guide Price\": \"29.6m\",\n \"Today's Change\": \"110.8k\",\n \"1 Month Change\": \"475.0k\",\n \"3 Month Change\": \"- 4.3m\",\n \"6 Month Change\": \"- 18.5m\"\n }\n]\n\n"
] | [
0,
0
] | [] | [] | [
"beautifulsoup",
"python",
"web_scraping"
] | stackoverflow_0067440401_beautifulsoup_python_web_scraping.txt |
Q:
Issues viewing or sharing items upload to Sharepoint with msgraph in Python
I've been trying to upload files to a Sharepoint site using a python app.
I've successfully authenticated with my Azure app, using msgraph.
I've successfully (I think) uploaded files -
/drives/{drive_id}/root:/path/to/file:/content
returns
@microsoft.graph.downloadUrl: https://xxx.sharepoint.com/_layouts/15/download.aspx?UniqueId=xxx&Translate=false&tempauth=exxx&ApiVersion=2.0
createdDateTime: 2022-11-23T19:41:22Z
eTag: "{xxx},52"
id: xxx
lastModifiedDateTime: 2022-11-25T08:37:46Z
name: 2022-09.pdf
webUrl: https://xxx.sharepoint.com/NPSScheduling/All%20NPS%20Folders/Salesforce/2022-09.pdf
cTag: "c:{xxx},52"
size: 33097
createdBy: {'application': {'id': 'f333ebf7-899a-44aa-8697-6d5b71e8f722', 'displayName': 'Salesforce Chrono UPloads'}, 'user': {'displayName': 'SharePoint App'}}
lastModifiedBy: {'application': {'id': 'f333ebf7-899a-44aa-8697-6d5b71e8f722', 'displayName': 'Salesforce Chrono UPloads'}, 'user': {'displayName': 'SharePoint App'}}
parentReference: {'driveType': 'documentLibrary', 'driveId': 'b!xxx', 'id': 'xxx', 'path': '/drives/b!xxx/root:/All NPS Folders/Salesforce'}
file: {'mimeType': 'application/pdf', 'hashes': {'quickXorHash': 'nEGcsGbiYw5Q1OZfcBOg+2pbGts='}}
fileSystemInfo: {'createdDateTime': '2022-11-23T19:41:22Z', 'lastModifiedDateTime': '2022-11-25T08:37:46Z'}
shared: {'scope': 'users'}
However, when I try to view files in my Sharepoint folder, I don't see them. I am able to navigate to the webUrl, but get 'Could not load pdf'.
I tried creating a share link, but keep getting
item not found
no matter which format I try for the endpoint:
/sites/<site_id>/drive/items/<item_id>/createLink
/sites/<site_id>/drives/<drive_id>/items/<item_id>/createLink
/sites/<site_id>/path/to/file/createLink
/drives/<drive_id>/root:/path/to/file/createLink
and other variants on the same theme.
Any ideas?
A:
It looks like you're having trouble viewing the files you've uploaded to your SharePoint site. There are a few potential causes for this issue, so I'll try to provide some suggestions that may help you resolve it.
First, make sure that you're using the correct URL to view the files. The webUrl property in the response you posted is the correct URL to use to view the file in a web browser. When you navigate to this URL, you should see the file if it has been successfully uploaded.
If you're still unable to view the file, there may be an issue with the file itself. For example, the file may be corrupted or may not be in a format that is supported by SharePoint. In this case, you may need to re-upload the file or try using a different file format.
Another potential cause of this issue is that the file may not have been uploaded to the correct location on the SharePoint site. When you upload a file using the Microsoft Graph API, you need to specify the location on the site where you want the file to be uploaded. This is done using the path property in the request body. For example:
{
"name": "2022-09.pdf",
"path": "/All NPS Folders/Salesforce"
}
Make sure that the path property in your request is correct and points to the correct location on the SharePoint site.
Finally, if you're still having trouble viewing the files, it's possible that there may be an issue with the permissions on the SharePoint site. If the files are not visible to you, it may be because you don't have the correct permissions to view them. In this case, you may need to contact the site owner or the administrator of the SharePoint site to grant you access to the files.
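On the createLink attempts specifically: when addressing an item by path, the path segment has to be closed with a colon before the action (e.g. /drives/<drive_id>/root:/path/to/file:/createLink), and the simplest variant is to use the ids returned by the upload. A minimal sketch, assuming drive_id comes from parentReference.driveId, item_id from the id field of the upload response, and access_token is the same Graph token you used for the upload:
import requests

url = f"https://graph.microsoft.com/v1.0/drives/{drive_id}/items/{item_id}/createLink"
body = {"type": "view", "scope": "organization"}  # or "anonymous" if your tenant allows it
resp = requests.post(url, headers={"Authorization": f"Bearer {access_token}"}, json=body)
print(resp.status_code, resp.json())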
I hope this helps! Let me know if you have any other questions.
| Issues viewing or sharing items upload to Sharepoint with msgraph in Python | I've been trying to upload files to a Sharepoint site using a python app.
I've successfully authenticated with my Azure app, using msgraph.
I've successfully (I think) uploaded files -
/drives/{drive_id}/root:/path/to/file:/content
returns
@microsoft.graph.downloadUrl: https://xxx.sharepoint.com/_layouts/15/download.aspx?UniqueId=xxx&Translate=false&tempauth=exxx&ApiVersion=2.0
createdDateTime: 2022-11-23T19:41:22Z
eTag: "{xxx},52"
id: xxx
lastModifiedDateTime: 2022-11-25T08:37:46Z
name: 2022-09.pdf
webUrl: https://xxx.sharepoint.com/NPSScheduling/All%20NPS%20Folders/Salesforce/2022-09.pdf
cTag: "c:{xxx},52"
size: 33097
createdBy: {'application': {'id': 'f333ebf7-899a-44aa-8697-6d5b71e8f722', 'displayName': 'Salesforce Chrono UPloads'}, 'user': {'displayName': 'SharePoint App'}}
lastModifiedBy: {'application': {'id': 'f333ebf7-899a-44aa-8697-6d5b71e8f722', 'displayName': 'Salesforce Chrono UPloads'}, 'user': {'displayName': 'SharePoint App'}}
parentReference: {'driveType': 'documentLibrary', 'driveId': 'b!xxx', 'id': 'xxx', 'path': '/drives/b!xxx/root:/All NPS Folders/Salesforce'}
file: {'mimeType': 'application/pdf', 'hashes': {'quickXorHash': 'nEGcsGbiYw5Q1OZfcBOg+2pbGts='}}
fileSystemInfo: {'createdDateTime': '2022-11-23T19:41:22Z', 'lastModifiedDateTime': '2022-11-25T08:37:46Z'}
shared: {'scope': 'users'}
However, when I try to view files in my Sharepoint folder, I don't see them. I am able to navigate to the webUrl, but get 'Could not load pdf'.
I tried creating a share link, but keep getting
item not found
no matter which format I try for the endpoint:
/sites/<site_id>/drive/items/<item_id>/createLink
/sites/<site_id>/drives/<drive_id>/items/<item_id>/createLink
/sites/<site_id>/path/to/file/createLink
/drives/<drive_id>/root:/path/to/file/createLink
and other variants on the same theme.
Any ideas?
| [
"It looks like you're having trouble viewing the files you've uploaded to your SharePoint site. There are a few potential causes for this issue, so I'll try to provide some suggestions that may help you resolve it.\nFirst, make sure that you're using the correct URL to view the files. The property in the response you posted is the correct URL to use to view the file in a web browser. When you navigate to this URL, you should see the file if it has been successfully uploaded.webUrl\nIf you're still unable to view the file, there may be an issue with the file itself. For example, the file may be corrupted or may not be in a format that is supported by SharePoint. In this case, you may need to re-upload the file or try using a different file format.\nAnother potential cause of this issue is that the file may not have been uploaded to the correct location on the SharePoint site. When you upload a file using the Microsoft Graph API, you need to specify the location on the site where you want the file to be uploaded. This is done using the property in the request body. For example:\n{\n \"name\": \"2022-09.pdf\",\n \"path\": \"/All NPS Folders/Salesforce\"\n}\n\nMake sure that the property in your request is correct and points to the correct location on the SharePoint site.path\nFinally, if you're still having trouble viewing the files, it's possible that there may be an issue with the permissions on the SharePoint site. If the files are not visible to you, it may be because you don't have the correct permissions to view them. In this case, you may need to contact the site owner or the administrator of the SharePoint site to grant you access to the files.\nI hope this helps! Let me know if you have any other questions.\n"
] | [
0
] | [] | [] | [
"file_upload",
"msgraph",
"python",
"share",
"sharepoint"
] | stackoverflow_0074570308_file_upload_msgraph_python_share_sharepoint.txt |
Q:
How to fill empty cells and any cell which contains only spaces with Null in Spark DataFrame?
I have a dataset which has empty cells, and also cells which contain only spaces (one or more). I want to convert all these cells into Null.
Sample dataset:
data = [("", "CA", " "), ("Julia", "", None),("Robert", " ", None), ("Tom", "NJ", " ")]
df = spark.createDataFrame(data,["name", "state", "code"])
df.show()
I can convert empty cells by:
df = df.select( [F.when(F.col(c)=="", None).otherwise(F.col(c)).alias(c) for c in df.columns] )
df.show()
And cells with one space:
df = df.select( [F.when(F.col(c)==" ", None).otherwise(F.col(c)).alias(c) for c in df.columns] )
df.show()
But, I don't want to repeat the above codes for cells with 2, 3, or more spaces.
Is there any way I can convert those cells at once?
A:
You could use trim to remove the spaces, which leaves an empty string, and then check for blanks in all cells.
see example below
data_sdf. \
selectExpr(*['if(trim({0}) = "", null, {0}) as {0}'.format(c) for c in data_sdf.columns]). \
show()
# +------+-----+----+
# | name|state|code|
# +------+-----+----+
# | null| CA|null|
# | Julia| null|null|
# |Robert| null|null|
# | Tom| NJ|null|
# +------+-----+----+
The list comprehension results in an if expression statement for every column:
['if(trim({0}) = "", null, {0}) as {0}'.format(c) for c in data_sdf.columns]
# ['if(trim(name) = "", null, name) as name',
# 'if(trim(state) = "", null, state) as state',
# 'if(trim(code) = "", null, code) as code']
A:
You can additionally use trim or regexp_replace on the column before you apply when-otherwise.
Trim
df = df.select( [F.when(F.trim(F.col(c))=="", None).otherwise(F.col(c)).alias(c) for c in df.columns] )
Regex Replace
df = df.select( [F.when(F.regexp_replace(F.col(c), r"^\s+$", "") == "", None).otherwise(F.col(c)).alias(c) for c in df.columns] )
| How to fill empty cells and any cell which contains only spaces with Null in Spark DataFrame? | I have a dataset which has empty cells, and also cells which contain only spaces (one or more). I want to convert all these cells into Null.
Sample dataset:
data = [("", "CA", " "), ("Julia", "", None),("Robert", " ", None), ("Tom", "NJ", " ")]
df = spark.createDataFrame(data,["name", "state", "code"])
df.show()
I can convert empty cells by:
df = df.select( [F.when(F.col(c)=="", None).otherwise(F.col(c)).alias(c) for c in df.columns] )
df.show()
And cells with one space:
df = df.select( [F.when(F.col(c)==" ", None).otherwise(F.col(c)).alias(c) for c in df.columns] )
df.show()
But, I don't want to repeat the above codes for cells with 2, 3, or more spaces.
Is there any way I can convert those cells at once?
| [
"you could use trim to remove the spaces which leaves a blank and then check for blanks in all cells.\nsee example below\ndata_sdf. \\\n selectExpr(*['if(trim({0}) = \"\", null, {0}) as {0}'.format(c) for c in data_sdf.columns]). \\\n show()\n\n# +------+-----+----+\n# | name|state|code|\n# +------+-----+----+\n# | null| CA|null|\n# | Julia| null|null|\n# |Robert| null|null|\n# | Tom| NJ|null|\n# +------+-----+----+\n\n\nthe list comprehension would result in if expression statements for every column\n['if(trim({0}) = \"\", null, {0}) as {0}'.format(c) for c in data_sdf.columns]\n\n# ['if(trim(name) = \"\", null, name) as name',\n# 'if(trim(state) = \"\", null, state) as state',\n# 'if(trim(code) = \"\", null, code) as code']\n\n",
"You can additional use trim or regex_replace the column before you apply when-otherwise\nTrim\ndf = df.select( [F.when(F.trim(F.col(c))==\"\", None).otherwise(F.col(c)).alias(c) for c in df.columns] )\n\nRegex Replace\ndf = df.select( [F.when(F.regexp_replace(col(c), \"^\\s+$\", \"\"))==\"\", None).otherwise(F.col(c)).alias(c) for c in df.columns] )\n\n"
] | [
1,
1
] | [] | [] | [
"dataframe",
"null",
"pyspark",
"python"
] | stackoverflow_0074638211_dataframe_null_pyspark_python.txt |
Q:
Find position of object by Key in json file
Is there any way I can find the position of an object by its key in a JSON file? I tried with the collections module but it doesn't seem to work with data from the JSON file, even though it's a dictionary.
reda.json file
[{"carl": "33"}, {"break": "55"}, {"user": "heake"}, ]
import json
import collections
json_data = json.load(open('reda.json'))
if type(json_data) is dict:
json_data = [json_data]
d = collections.OrderedDict((json_data))
h = tuple(d.keys()).index('break')
print(h)
Also tried this
j = 'break'
for i in json_data:
if j in i:
print(j.index('break'))
Result is 0
A:
You can use enumerate to generate indices for a sequence:
json_data = [{"carl": "33"}, {"break": "55"}, {"user": "heake"}]
key = 'break'
for index, record in enumerate(json_data):
if key in record:
print(index)
This outputs: 1
A:
You don't require collections for this. If you restructure reda.json as a single dictionary (see the recommendation below), you can simply use a list comprehension to collect the keys and then get the index.
Here's my code:
import json
json_data = json.load(open('reda.json'))
json_key_index = [key for key in json_data]
print(json_key_index.index("break"))
Also, looking at your reda.json, the format isn't ideal for this; I recommend changing reda.json to:
{
"carl": "33",
"break": "55",
"user": "heake"
}
| Find position of object by Key in json file | Is there anyway i can find the position of object by its key in Json file. I tried with the collection module but seems not to work with data from the json file even though its dictionary
reda.json file
[{"carl": "33"}, {"break": "55"}, {"user": "heake"}, ]
import json
import collections
json_data = json.load(open('reda.json'))
if type(json_data) is dict:
json_data = [json_data]
d = collections.OrderedDict((json_data))
h = tuple(d.keys()).index('break')
print(h)
Also tried this
j = 'break'
for i in json_data:
if j in i:
print(j.index('break'))
Result is 0
``
| [
"You can use enumerate to generate indices for a sequence:\njson_data = [{\"carl\": \"33\"}, {\"break\": \"55\"}, {\"user\": \"heake\"}]\nkey = 'break'\nfor index, record in enumerate(json_data):\n if key in record:\n print(index)\n\nThis outputs: 1\n",
"You don't require collections for this. Simply use list comprehension to generate a list and then get the index.\nHere's my code:\nimport json\n\njson_data = json.load(open('reda.json'))\njson_key_index = [key for key in json_data]\nprint(json_key_index.index(\"break\"))\n\nAlso, looking at your reda.json the format doesn't seem pretty well-versed. I recommend changing the reda.json to:\n{\n \"carl\": \"33\", \n \"break\": \"55\", \n \"user\": \"heake\"\n}\n\n"
] | [
3,
0
] | [] | [] | [
"dictionary",
"python",
"python_3.x"
] | stackoverflow_0074638267_dictionary_python_python_3.x.txt |
Q:
it throws a 404 when without url prefix (FLASK)
I'm having a problem in Flask. I made a web app and added the URL prefix views.
Without /views attached to localhost it throws a 404. I want to change it so it redirects automatically to /views when you go to the regular URL, such as http://127.0.0.1:8000/.
I tried adding @app.route in app.py, but it just caused even more problems.
A:
You could redirect automatically from http://127.0.0.1:8000/ to http://127.0.0.1:8000/views using the code below.
from flask import Flask, jsonify, redirect
app = Flask(__name__)
#Page 1
@app.route('/', methods=['GET'])
def welcome():
return redirect("http://127.0.0.1:8000/views", code=302)
#Page 2
@app.route('/views', methods=['GET'])
def hello():
return jsonify({"data": "Hello"})
if __name__ == '__main__':
app.run(host="0.0.0.0", port="8000")
Output
#127.0.0.1 - - [01/Dec/2022 15:23:23] "GET / HTTP/1.1" 302 - (Redirecting to views page)
#127.0.0.1 - - [01/Dec/2022 15:23:23] "GET /views HTTP/1.1" 200 -
Hope this helps. Happy Coding :)
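As a small design note, you could let Flask build the redirect target instead of hardcoding the host. A sketch using url_for with the view function name from the code above:
from flask import url_for

@app.route('/', methods=['GET'])
def welcome():
    return redirect(url_for("hello"), code=302)  # url_for("hello") resolves to /views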
| it throws a 404 when without url prefix (FLASK) | You see Im having a problem where in flask, I made a web app. and I added the URL prefix as views
and you see without /views attached to localhost it throws a 404, I wanna change it so it will redirect automatically to /views when you go to the regular URL such as http://127.0.0.1:8000/
I tried adding @app.route in app.py but it just caused even more problems
| [
"You could redirect automatically from http://127.0.0.1:8000/ to http://127.0.0.1:8000/views using the code below.\nfrom flask import Flask, jsonify, redirect\n\napp = Flask(__name__)\n\n#Page 1\[email protected]('/', methods=['GET'])\ndef welcome():\n return redirect(\"http://127.0.0.1:8000/views\", code=302)\n\n#Page 2\[email protected]('/views', methods=['GET'])\ndef hello():\n return jsonify({\"data\": \"Hello\"})\n\nif __name__ == '__main__':\n app.run(host=\"0.0.0.0\", port=\"8000\")\n\nOutput\n #127.0.0.1 - - [01/Dec/2022 15:23:23] \"GET / HTTP/1.1\" 302 - (Redirecting to views page)\n #127.0.0.1 - - [01/Dec/2022 15:23:23] \"GET /views HTTP/1.1\" 200 -\n\nHope this helps. Happy Coding :)\n"
] | [
0
] | [] | [] | [
"flask",
"python",
"python_3.x",
"url",
"web"
] | stackoverflow_0074638293_flask_python_python_3.x_url_web.txt |
Q:
Visualizing Image Augmentations Using Keras Image Data Generator - Input data in `NumpyArrayIterator` should have rank 4
I've built a CNN model and used pickled German traffic sign images to train it. I've experimented with applying data augmentations to the images and I'm having trouble displaying these images using matplotlib and the Keras Image Data Generator.
I've imported the necessary libraries for the processes and below is where I'm obtaining the pickled road class signs:
# The pickle module implements binary protocols for serializing and de-serializing a Python object structure.
with open("./traffic-signs-data/train.p", mode='rb') as training_data:
train = pickle.load(training_data)
with open("./traffic-signs-data/valid.p", mode='rb') as validation_data:
valid = pickle.load(validation_data)
with open("./traffic-signs-data/test.p", mode='rb') as testing_data:
test = pickle.load(testing_data)
X_train, y_train = train['features'], train['labels']
X_validation, y_validation = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Shuffling the dataset
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Creating grayscale images
X_train_gray = np.sum(X_train / 3, axis = 3, keepdims = True)
X_test_gray = np.sum(X_test / 3, axis = 3, keepdims = True)
X_validation_gray = np.sum(X_validation / 3, axis = 3, keepdims = True)
X_train_gray_norm = (X_train_gray - 128) / 128
X_test_gray_norm = (X_test_gray - 128) / 128
X_validation_gray_norm = (X_validation_gray - 128) / 128
Below I'm applying data augmentations to images
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range = 90,
width_shift_range = 0.1,
vertical_flip = True,
)
Fitting the grayscale images to the Keras data generator with the augmentations
datagen.fit(X_train_gray_norm)
Fitting the data generator to the CNN model that I've built but not showing
cnn_model.fit_generator(datagen.flow(X_train_gray_norm, y_train, batch_size = 250), epochs = 100)
Trying to showcase the images with the data augmentations applied
i = 100
pic = datagen.flow(X_train_gray[i], batch_size = 1)
plt.figure(figsize=(10,8))
plt.show()
Met with this error:
ValueError: ('Input data in NumpyArrayIterator should have rank 4. You passed an array with shape', (32, 32, 1))
A:
Expand the dimensions of the array in axis-0
datagen.flow(np.expand_dims(X_train_gray[i], 0), batch_size = 1)
#<keras.preprocessing.image.NumpyArrayIterator at 0x23b5ff0f5b0>
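To actually display a few augmented versions of that image (the snippet in the question never calls imshow), a sketch that repeatedly pulls from the generator and plots the results; it assumes the datagen and X_train_gray arrays from the question:
import numpy as np
import matplotlib.pyplot as plt

i = 100
flow = datagen.flow(np.expand_dims(X_train_gray[i], 0), batch_size=1)

fig, axes = plt.subplots(1, 5, figsize=(10, 8))
for ax in axes:
    augmented = next(flow)[0]                 # shape (32, 32, 1)
    ax.imshow(augmented.squeeze(), cmap="gray")
    ax.axis("off")
plt.show()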
| Visualizing Image Augmentations Using Keras Image Data Generator - Input data in `NumpyArrayIterator` should have rank 4 | I've built a CNN model and used pickled German traffic sign images to train it. I've experimented with applying data augmentations to the images and I'm having trouble displaying these images using matplotlib and the Keras Image Data Generator.
I've imported the necessary libraries for the processes and below is where I'm obtaining the pickled road class signs:
# The pickle module implements binary protocols for serializing and de-serializing a Python object structure.
with open("./traffic-signs-data/train.p", mode='rb') as training_data:
train = pickle.load(training_data)
with open("./traffic-signs-data/valid.p", mode='rb') as validation_data:
valid = pickle.load(validation_data)
with open("./traffic-signs-data/test.p", mode='rb') as testing_data:
test = pickle.load(testing_data)
X_train, y_train = train['features'], train['labels']
X_validation, y_validation = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']
# Shuffling the dataset
from sklearn.utils import shuffle
X_train, y_train = shuffle(X_train, y_train)
Creating grayscale images
X_train_gray = np.sum(X_train / 3, axis = 3, keepdims = True)
X_test_gray = np.sum(X_test / 3, axis = 3, keepdims = True)
X_validation_gray = np.sum(X_validation / 3, axis = 3, keepdims = True)
X_train_gray_norm = (X_train_gray - 128) / 128
X_test_gray_norm = (X_test_gray - 128) / 128
X_validation_gray_norm = (X_validation_gray - 128) / 128
Below I'm applying data augmentations to images
from keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range = 90,
width_shift_range = 0.1,
vertical_flip = True,
)
Fitting the grayscale images to the Keras data generator with the augmentations
datagen.fit(X_train_gray_norm)
Fitting the data generator to the CNN model that I've built but not showing
cnn_model.fit_generator(datagen.flow(X_train_gray_norm, y_train, batch_size = 250), epochs = 100)
Trying to showcase the images with the data augmentations applied
i = 100
pic = datagen.flow(X_train_gray[i], batch_size = 1)
plt.figure(figsize=(10,8))
plt.show()
Met with this error:
ValueError: ('Input data in NumpyArrayIterator should have rank 4. You passed an array with shape', (32, 32, 1))
| [
"Expand the dimensions of the array in axis-0\ndatagen.flow(np.expand_dims(X_train_gray[i], 0), batch_size = 1)\n\n#<keras.preprocessing.image.NumpyArrayIterator at 0x23b5ff0f5b0>\n\n"
] | [
0
] | [] | [] | [
"imagedatagenerator",
"keras",
"matplotlib",
"python",
"tensorflow"
] | stackoverflow_0074634887_imagedatagenerator_keras_matplotlib_python_tensorflow.txt |
Q:
Django search bar isn't giving correct results
views.py
from django.shortcuts import render
from ecommerceapp.models import Product
from django.db.models import Q
def searchResult(request):
products=None
query=None
if 'q' in request.GET:
query = request.GET.get('q')
products=Product.objects.all().filter(Q(name__contains=query) | Q(desc__contains=query))
return render(request,'search.html',{'query':query,'products':products})
In views.py I have imported a model named 'Product' of another application.
search.html
{% extends 'base.html' %}
{% load static %}
{% block metadescription %}
Welcome to FASHION STORE-Your Beauty
{% endblock %}
{% block title %}
Search-FASHION STORE
{% endblock %}
{% block content %}
<div>
<p class="text-center my_search_text">You have searched for :<b>"{{query}}"</b></p>
</div>
<div class="container">
<div class="row mx_auto">
{% for product in products %}
<div class="my_bottom_margin col-9 col-sm-12 col-md-6 col-lg-4" >
<div class="card text-center" style="min-width:18rem;">
<a href="{{product.get_url1}}"><img class="card-img-top my_image" src="{{product.image.url}}" alt="{{product.name}}" style="height:400px; width:100%;"></a>
<div class="card_body">
<h4>{{product.name}}</h4>
<p>₹{{product.price}}</p>
</div>
</div>
</div>
{% empty %}
<div class="row mx_auto">
<p class="text-center my_search_text">0 results found.</p>
</div>
{% endfor %}
</div>
</div>
{% endblock %}
navbar.html
<nav class="navbar navbar-expand-lg bg-light">
<div class="container-fluid">
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link" href="#">Home</a>
</li>
<li class="nav-item dropdown {% if 'ecommerceapp' in request.path %} active {% endif %} ">
<a class="nav-link dropdown-toggle" href="#" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Shop
</a>
<ul class="dropdown-menu">
<li><a class="dropdown-item" href="{% url 'ecommerceapp:allProductCategory' %}">All Products</a></li>
{% for cat in links %}
<li><a class="dropdown-item" href="{{cat.get_url}}">{{cat.name}}</a></li>
{% endfor %}
</ul>
</li>
<li class="nav-item">
<a class="nav-link disabled" href=""><i class="fa fa-shopping-cart"></i></a>
</li>
</ul>
<form class="d-flex" action="{% url 'search_app:searchResult' %}" method="get">
{% csrf_token %}
<input class="form-control me-2" type="search" placeholder="Search" name="q" aria-label="Search">
<button class="btn btn-outline-success" type="submit"><i class="fa fa-search"></i></button>
</form>
</div>
</div>
</nav>
When I'm searching using the search bar, I'm not getting the correct results. Only when I give the whole word do I get the correct results.
Example: When I type x in the search bar, it give me the results 'shirt' instead of giving '0 results found'.
A:
Example: When I type x in the search bar, it give me the results 'shirt' instead of giving '0 results found'.
The __contains lookup checks whether the field contains the given substring, and it is case-sensitive. Using | between Q objects works as an OR condition, so when you type x it is possible that the name field doesn't contain x but the desc field does, which is why you get 'shirt' back. If you only want to match on the name, you can reduce the query to Product.objects.filter(Q(name__contains=query)). Also, .all() only creates a copy of the QuerySet, so it isn't required here.
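A minimal sketch of the adjusted view, assuming case-insensitive matching is what you actually want (keep __contains instead of __icontains if case should matter, and drop the desc clause if only the name should be searched):
from django.shortcuts import render
from django.db.models import Q
from ecommerceapp.models import Product

def searchResult(request):
    products = Product.objects.none()  # empty QuerySet when no query is given
    query = request.GET.get('q')
    if query:
        products = Product.objects.filter(Q(name__icontains=query) | Q(desc__icontains=query))
    return render(request, 'search.html', {'query': query, 'products': products})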
| Django search bar isn't giving correct results | views.py
from django.shortcuts import render
from ecommerceapp.models import Product
from django.db.models import Q
def searchResult(request):
products=None
query=None
if 'q' in request.GET:
query = request.GET.get('q')
products=Product.objects.all().filter(Q(name__contains=query) | Q(desc__contains=query))
return render(request,'search.html',{'query':query,'products':products})
In views.py I have imported a model named 'Product' of another application.
search.html
{% extends 'base.html' %}
{% load static %}
{% block metadescription %}
Welcome to FASHION STORE-Your Beauty
{% endblock %}
{% block title %}
Search-FASHION STORE
{% endblock %}
{% block content %}
<div>
<p class="text-center my_search_text">You have searched for :<b>"{{query}}"</b></p>
</div>
<div class="container">
<div class="row mx_auto">
{% for product in products %}
<div class="my_bottom_margin col-9 col-sm-12 col-md-6 col-lg-4" >
<div class="card text-center" style="min-width:18rem;">
<a href="{{product.get_url1}}"><img class="card-img-top my_image" src="{{product.image.url}}" alt="{{product.name}}" style="height:400px; width:100%;"></a>
<div class="card_body">
<h4>{{product.name}}</h4>
<p>₹{{product.price}}</p>
</div>
</div>
</div>
{% empty %}
<div class="row mx_auto">
<p class="text-center my_search_text">0 results found.</p>
</div>
{% endfor %}
</div>
</div>
{% endblock %}
navbar.html
<nav class="navbar navbar-expand-lg bg-light">
<div class="container-fluid">
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent" aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link" href="#">Home</a>
</li>
<li class="nav-item dropdown {% if 'ecommerceapp' in request.path %} active {% endif %} ">
<a class="nav-link dropdown-toggle" href="#" role="button" data-bs-toggle="dropdown" aria-expanded="false">
Shop
</a>
<ul class="dropdown-menu">
<li><a class="dropdown-item" href="{% url 'ecommerceapp:allProductCategory' %}">All Products</a></li>
{% for cat in links %}
<li><a class="dropdown-item" href="{{cat.get_url}}">{{cat.name}}</a></li>
{% endfor %}
</ul>
</li>
<li class="nav-item">
<a class="nav-link disabled" href=""><i class="fa fa-shopping-cart"></i></a>
</li>
</ul>
<form class="d-flex" action="{% url 'search_app:searchResult' %}" method="get">
{% csrf_token %}
<input class="form-control me-2" type="search" placeholder="Search" name="q" aria-label="Search">
<button class="btn btn-outline-success" type="submit"><i class="fa fa-search"></i></button>
</form>
</div>
</div>
</nav>
When I'm searching using search bar, not getting the correct results. When giving the word completely, correct results are getting.
Example: When I type x in the search bar, it give me the results 'shirt' instead of giving '0 results found'.
| [
"\nExample: When I type x in the search bar, it give me the results 'shirt' instead of giving '0 results found'.\n\nThe __contains is used to check whether the field contains given word or not, it is case-sensitive. And using | in Q objects means it is optional and works as OR condition, so maybe it is possible when you type x, the name field doesn't contains x but the field desc contain x that's why you are getting the shirt as instance or else you can simply put the query to Product.objects.filter(Q(name__contains=query)) and .all() only creates the copy of the Queryset so it doesn't require here.\n"
] | [
1
] | [] | [] | [
"django",
"python",
"searchbar"
] | stackoverflow_0074638132_django_python_searchbar.txt |
Q:
How can I get the coordinate information of model.visualize_topics() function - BERTopic
I am trying to get the coordinate information of the docs placed on the graph by model.visualize_topics() for my BERTopic topic analysis project. Is there any way to see the source code of the function and save the coordinates to use for more advanced analysis?
I found following code as the source code of the visualize_topics() function. But there is not any information about the coordinates in it.
def visualize_topics(self,
topics: List[int] = None,
top_n_topics: int = None,
width: int = 650,
height: int = 650) -> go.Figure:
""" Visualize topics, their sizes, and their corresponding words
This visualization is highly inspired by LDAvis, a great visualization
technique typically reserved for LDA.
Arguments:
topics: A selection of topics to visualize
top_n_topics: Only select the top n most frequent topics
width: The width of the figure.
height: The height of the figure.
Examples:
To visualize the topics simply run:
```python
topic_model.visualize_topics()
```
Or if you want to save the resulting figure:
```python
fig = topic_model.visualize_topics()
fig.write_html("path/to/file.html")
```
"""
check_is_fitted(self)
return plotting.visualize_topics(self,
topics=topics,
top_n_topics=top_n_topics,
width=width,
height=height)
A:
In order to allow for modularity in BERTopic, the plotting was separated from the main functions. You can find all information about plotting in BERTopic here and, more specifically, you can find the code for plotting.visualize_topics here.
Having said that, part of that function is creating the coordinate system as follows:
embeddings = topic_model.c_tf_idf_.toarray()[indices]
embeddings = MinMaxScaler().fit_transform(embeddings)
embeddings = UMAP(n_neighbors=2, n_components=2, metric='hellinger').fit_transform(embeddings)
Here, it takes the c-TF-IDF representation of all topics, scales them and finally uses dimensionality reduction with UMAP using Hellinger distance to map the representation into 2-dimensional space. One thing to note though is that no random_state is set in UMAP which results in a stochastic process.
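If you want to keep those coordinates for further analysis, a minimal sketch is to reproduce the same steps outside the plotting function and store the result in a DataFrame. It assumes topic_model is a fitted BERTopic instance, that indices selects the same topic rows used inside plotting.visualize_topics, and that pandas, scikit-learn and umap-learn are installed; setting random_state is an addition to make the layout reproducible:
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from umap import UMAP

embeddings = topic_model.c_tf_idf_.toarray()[indices]
embeddings = MinMaxScaler().fit_transform(embeddings)
embeddings = UMAP(n_neighbors=2, n_components=2, metric='hellinger', random_state=42).fit_transform(embeddings)

coords = pd.DataFrame(embeddings, columns=['x', 'y'])  # one row per topic, in the order of indices
coords.to_csv('topic_coordinates.csv', index=False)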
| How can I get the coordinate information of model.visualize_topics() function - BERTopic | I am trying to get the coordinate informations of the docs placed on the graph by model.visualize_topics() for my BERTopic topic analysis project. Is there any way to see the source code of the function and save the coordinates to use for more advanced analysis?
I found following code as the source code of the visualize_topics() function. But there is not any information about the coordinates in it.
def visualize_topics(self,
topics: List[int] = None,
top_n_topics: int = None,
width: int = 650,
height: int = 650) -> go.Figure:
""" Visualize topics, their sizes, and their corresponding words
This visualization is highly inspired by LDAvis, a great visualization
technique typically reserved for LDA.
Arguments:
topics: A selection of topics to visualize
top_n_topics: Only select the top n most frequent topics
width: The width of the figure.
height: The height of the figure.
Examples:
To visualize the topics simply run:
```python
topic_model.visualize_topics()
```
Or if you want to save the resulting figure:
```python
fig = topic_model.visualize_topics()
fig.write_html("path/to/file.html")
```
"""
check_is_fitted(self)
return plotting.visualize_topics(self,
topics=topics,
top_n_topics=top_n_topics,
width=width,
height=height)
| [
"In order to allow for modularity in BERTopic the plotting was separated of the main functions. You can find all information about plotting in BERTopic here and, more specifically, you can find the code for plotting.visualize_topics here.\nHaving said that, part of that function is creating the coordinate system as follows:\nembeddings = topic_model.c_tf_idf_.toarray()[indices]\n embeddings = MinMaxScaler().fit_transform(embeddings)\n embeddings = UMAP(n_neighbors=2, n_components=2, metric='hellinger').fit_transform(embeddings)\n\nHere, it takes the c-TF-IDF representation of all topics, scales them and finally uses dimensionality reduction with UMAP using Hellinger distance to map the representation into 2-dimensional space. One thing to note though is that no random_state is set in UMAP which results in a stochastic process.\n"
] | [
0
] | [] | [] | [
"bert_language_model",
"python",
"topic_modeling"
] | stackoverflow_0074566683_bert_language_model_python_topic_modeling.txt |
Q:
Delete a file in a directory except for first file (or specific file) in Python
I want to delete all files in a directory except for one file in python.
I used os.remove and os.system (with rm and find), but all of them return errors.
Lets say I have a folder X and in there I have files named 1 2 3 4.
alongside folder X, I have main.py. in main.py how can I write a command to go to the folder and delete all files except for 1.
Thanks...
I tried
os.system(f"rm -v !('1')")
but it says ''rm' is not recognized as an internal or external command,
operable program or batch file.'
I tried
os.system(f"find ./X -not -name '1' -delete")
os.system(f"find /X -not -name '1' -delete")
os.system(f"find . -not -name '1' -delete")
os.system(f"find X -not -name '1' -delete")
But all of them says 'Parameter format not correct'
A:
You can do this in Python using various functions from the os module rather than relying on find.
from os import chdir, listdir, remove, getcwd
def delete_from(directory: str, keep: list) -> None:
cwd = getcwd()
try:
chdir(directory)
for file in listdir():
if not file in keep:
remove(file)
finally:
chdir(cwd)
Call this with a path to the directory to be affected and a list of files (basenames only) to be retained.
e.g.,
delete_from('X', ['1'])
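An alternative sketch using pathlib, which avoids changing the working directory at all:
from pathlib import Path

def delete_from(directory: str, keep: list) -> None:
    for path in Path(directory).iterdir():
        if path.is_file() and path.name not in keep:
            path.unlink()

delete_from('X', ['1'])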
| Delete a file in a directory except for first file (or specific file) in Python | I want to delete all files in a directory except for one file in python.
I used os.remove and os.system(with rm and fine), but all of them return errors.
Lets say I have a folder X and in there I have files named 1 2 3 4.
alongside folder X, I have main.py. in main.py how can I write a command to go to the folder and delete all files except for 1.
Thanks...
I tried
os.system(f"rm -v !('1')")
but it says ''rm' is not recognized as an internal or external command,
operable program or batch file.'
I tried
os.system(f"find ./X -not -name '1' -delete")
os.system(f"find /X -not -name '1' -delete")
os.system(f"find . -not -name '1' -delete")
os.system(f"find X -not -name '1' -delete")
But all of them says 'Parameter format not correct'
| [
"You can do this in Python using various functions from the os module rather than relying on find.\nfrom os import chdir, listdir, remove, getcwd\n\ndef delete_from(directory: str, keep: list) -> None:\n cwd = getcwd()\n try:\n chdir(directory)\n for file in listdir():\n if not file in keep:\n remove(file)\n finally:\n chdir(cwd)\n\nCall this with a path to the directory to be affected and a list of files (basenames only) to be retained.\ne.g.,\ndelete_from('X', ['1'])\n\n"
] | [
1
] | [] | [] | [
"delete_file",
"find",
"os.system",
"python",
"rm"
] | stackoverflow_0074638353_delete_file_find_os.system_python_rm.txt |
Q:
Check column names and column types in Great Expectations
Currently, I am validating the table schema with expect_table_columns_to_match_set by feeding in a list of columns. However, I want to validate the schema associated with each column such as string. The only available Great Expectations rule expect_column_values_to_be_of_type has to be written for each column name and also creates redundancy by repeating the column names.
Is there any rule that I am missing that I can validate both the name and the schema at the same time?
For example, given column a: string, b: int, c: boolean, I want to pass that whole info into one function instead of having to break it into [a, b, c] and validating [a], string separately for each column.
Ideally, it will be something like expect_column_schmea([(column_name_a, column_type_a), (column_name_b, column_type_b)]
A:
Great Expectations does not have a single built-in rule that validates both the names and the types of the columns in a table at the same time. However, you can use a combination of the expect_table_columns_to_match_set and expect_column_values_to_be_of_type rules to achieve the same result.
First, you can use the expect_table_columns_to_match_set rule to validate the names of the columns in the table. This rule takes a set of column names as an argument and checks that the table contains exactly those columns. For example, if you want to validate that a table has columns a, b, and c, you could use the following code:
expect_table_columns_to_match_set(
table=my_table,
expected_set={'a', 'b', 'c'},
)
Next, you can use the expect_column_values_to_be_of_type rule to validate the type of each column in the table. As sketched here, it takes a dictionary of column names and expected data types and checks that the values in each column are of the specified type. For example, if you want to validate that column a is of type string, column b is of type int, and column c is of type boolean, you could use the following code:
expect_column_values_to_be_of_type(
table=my_table,
column_name_to_type_mapping={
'a': 'string',
'b': 'int',
'c': 'boolean',
},
)
By combining these two rules, you can validate both the names and the schema of the columns in a table. You can also use a for loop to iterate over the list of columns and their expected data types and generate the expected set of column names and the column name to type mapping automatically. For example:
# Define a list of columns and their expected data types
columns = [('a', 'string'), ('b', 'int'), ('c', 'boolean')]
# Generate the expected set of column names and the column-name-to-type mapping
expected_set = {name for name, _ in columns}
column_name_to_type_mapping = {name: dtype for name, dtype in columns}
| Check column names and column types in Great Expectations | Currently, I am validating the table schema with expect_table_columns_to_match_set by feeding in a list of columns. However, I want to validate the schema associated with each column such as string. The only available Great Expectations rule expect_column_values_to_be_of_type has to be written for each column name and also creates redundancy by repeating the column names.
Is there any rule that I am missing that I can validate both the name and the schema at the same time?
For exmaple, given column a: string, b: int, c: boolean, I want to pass that whole info into one function instead of having to break it into [a,b,c] and validating [a], string` separately for each column.
Ideally, it will be something like expect_column_schmea([(column_name_a, column_type_a), (column_name_b, column_type_b)]
| [
"Great Expectations does not have a built-in rule that allows you to validate both the names and the schema of columns in a table at the same time。 However, you can use a combination of the and rules to achieve the same\nFirst, you can use the rule to validate the names of the columns in the table。 This rule takes a set of column names as an argument and checks that the table contains exactly those columns。 For example, if you want to validate that a table has columns , , and , you could use the following code:\nexpect_table_columns_to_match_set(\n table=my_table,\n expected_set={'a', 'b', 'c'},\n)\n\nNext, you can use the rule to validate the schema of each column in the table. This rule takes a dictionary of column names and expected data types as an argument and checks that the values in each column are of the specified type. For example, if you want to validate that column is of type , column is of type , and column is of type , you could use the following code:\nexpect_column_values_to_be_of_type(\n table=my_table,\n column_name_to_type_mapping={\n 'a': 'string',\n 'b': 'int',\n 'c': 'boolean',\n },\n)\n\nBy combining these two rules, you can validate both the names and the schema of the columns in a table. You can also use a for loop to iterate over the list of columns and their expected data types and generate the expected set of column names and the column name to type mapping automatically. For example:\n# Define a list of columns and their expected data types\ncolumns = [('a', 'string'), ('b', 'int'), ('c', 'boolean')]\n\n# Gener\n\n"
] | [
0
] | [] | [] | [
"great_expectations",
"python"
] | stackoverflow_0074483457_great_expectations_python.txt |
Q:
"ProgrammingError: column users_appuser.id does not exist" extending User model django
I am extending User on django and didn't realize I need to add phone number. When I made the model below, I forgot to makemigrations migrate before creating an AppUser. So I had made an AppUser/User combo as normal, but before the db was ready. It asks me to provide a default and I provided the string '1' because it wouldn't take 1 as integer
from __future__ import unicode_literals
from django.db import models
from django.contrib.auth.models import User
from django.core.validators import RegexValidator
class AppUser(models.Model):
user = models.OneToOneField(User, primary_key=True, on_delete=models.CASCADE)
phone_regex = RegexValidator(regex=r'^\+?1?\d{9,15}$', message="Phone number must be entered in the format: '+999999999'. Up to 15 digits allowed.")
phone_number = models.CharField(validators=[phone_regex], max_length=16, blank=True) # validators should be a list
def __str__(self):
return self.first_name + " " + self.last_name + ": " + self.email
Now I can't touch User or AppUser:
In [4]: appusers = AppUser.objects.all()
In [5]: for user in appusers:
...: user.delete()
...:
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
63 else:
---> 64 return self.cursor.execute(sql, params)
65
ProgrammingError: column users_appuser.id does not exist
LINE 1: SELECT "users_appuser"."id", "users_appuser"."user_id", "use...
^
The above exception was the direct cause of the following exception:
ProgrammingError Traceback (most recent call last)
<ipython-input-5-fd4a00b110e0> in <module>()
----> 1 for user in appusers:
2 user.delete()
3
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/query.py in __iter__(self)
254 - Responsible for turning the rows into model objects.
255 """
--> 256 self._fetch_all()
257 return iter(self._result_cache)
258
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/query.py in _fetch_all(self)
1085 def _fetch_all(self):
1086 if self._result_cache is None:
-> 1087 self._result_cache = list(self.iterator())
1088 if self._prefetch_related_lookups and not self._prefetch_done:
1089 self._prefetch_related_objects()
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/query.py in __iter__(self)
52 # Execute the query. This will also fill compiler.select, klass_info,
53 # and annotations.
---> 54 results = compiler.execute_sql()
55 select, klass_info, annotation_col_map = (compiler.select, compiler.klass_info,
56 compiler.annotation_col_map)
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/sql/compiler.py in execute_sql(self, result_type)
833 cursor = self.connection.cursor()
834 try:
--> 835 cursor.execute(sql, params)
836 except Exception:
837 cursor.close()
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
77 start = time()
78 try:
---> 79 return super(CursorDebugWrapper, self).execute(sql, params)
80 finally:
81 stop = time()
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
62 return self.cursor.execute(sql)
63 else:
---> 64 return self.cursor.execute(sql, params)
65
66 def executemany(self, sql, param_list):
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/utils.py in __exit__(self, exc_type, exc_value, traceback)
92 if dj_exc_type not in (DataError, IntegrityError):
93 self.wrapper.errors_occurred = True
---> 94 six.reraise(dj_exc_type, dj_exc_value, traceback)
95
96 def __call__(self, func):
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/utils/six.py in reraise(tp, value, tb)
683 value = tp()
684 if value.__traceback__ is not tb:
--> 685 raise value.with_traceback(tb)
686 raise value
687
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
62 return self.cursor.execute(sql)
63 else:
---> 64 return self.cursor.execute(sql, params)
65
66 def executemany(self, sql, param_list):
ProgrammingError: column users_appuser.id does not exist
LINE 1: SELECT "users_appuser"."id", "users_appuser"."user_id", "use...
Same for other:
In [3]: for user in users:
...: user.delete()
blah blah
ProgrammingError: column users_appuser.user_id does not exist
LINE 1: DELETE FROM "users_appuser" WHERE "users_appuser"."user_id" ...
Can I repair this filth without dropping and remaking entire db? Thank you
A:
This is happening because you are migrating both your apps and admin models at the same time.
To fix this,
Delete Database,
Delete migrations,
Migrate admin models first before applying makemigrations
Do make migrations and migrate (a hedged sketch of these steps follows).
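A hedged sketch of steps 3 and 4 using Django's management API (run from python manage.py shell, for example), assuming the database and the app's old migration files have already been removed; 'users' is taken as the app label because the failing table is named users_appuser:
from django.core.management import call_command

call_command('migrate', 'admin')         # migrate the built-in admin (and its auth/contenttypes dependencies) first
call_command('makemigrations', 'users')  # recreate migrations for the app that holds AppUser
call_command('migrate')                  # then apply everything else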
| "ProgrammingError: column users_appuser.id does not exist" extending User model django | I am extending User on django and didn't realize I need to add phone number. When I made the model below, I forgot to makemigrations migrate before creating an AppUser. So I had made an AppUser/User combo as normal, but before the db was ready. It asks me to provide a default and I provided the string '1' because it wouldn't take 1 as integer
from __future__ import unicode_literals
from django.db import models
from django.contrib.auth.models import User
from django.core.validators import RegexValidator
class AppUser(models.Model):
user = models.OneToOneField(User, primary_key=True, on_delete=models.CASCADE)
phone_regex = RegexValidator(regex=r'^\+?1?\d{9,15}$', message="Phone number must be entered in the format: '+999999999'. Up to 15 digits allowed.")
phone_number = models.CharField(validators=[phone_regex], max_length=16, blank=True) # validators should be a list
def __str__(self):
return self.first_name + " " + self.last_name + ": " + self.email
Now I can't touch User or AppUser:
In [4]: appusers = AppUser.objects.all()
In [5]: for user in appusers:
...: user.delete()
...:
---------------------------------------------------------------------------
ProgrammingError Traceback (most recent call last)
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
63 else:
---> 64 return self.cursor.execute(sql, params)
65
ProgrammingError: column users_appuser.id does not exist
LINE 1: SELECT "users_appuser"."id", "users_appuser"."user_id", "use...
^
The above exception was the direct cause of the following exception:
ProgrammingError Traceback (most recent call last)
<ipython-input-5-fd4a00b110e0> in <module>()
----> 1 for user in appusers:
2 user.delete()
3
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/query.py in __iter__(self)
254 - Responsible for turning the rows into model objects.
255 """
--> 256 self._fetch_all()
257 return iter(self._result_cache)
258
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/query.py in _fetch_all(self)
1085 def _fetch_all(self):
1086 if self._result_cache is None:
-> 1087 self._result_cache = list(self.iterator())
1088 if self._prefetch_related_lookups and not self._prefetch_done:
1089 self._prefetch_related_objects()
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/query.py in __iter__(self)
52 # Execute the query. This will also fill compiler.select, klass_info,
53 # and annotations.
---> 54 results = compiler.execute_sql()
55 select, klass_info, annotation_col_map = (compiler.select, compiler.klass_info,
56 compiler.annotation_col_map)
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/models/sql/compiler.py in execute_sql(self, result_type)
833 cursor = self.connection.cursor()
834 try:
--> 835 cursor.execute(sql, params)
836 except Exception:
837 cursor.close()
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
77 start = time()
78 try:
---> 79 return super(CursorDebugWrapper, self).execute(sql, params)
80 finally:
81 stop = time()
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
62 return self.cursor.execute(sql)
63 else:
---> 64 return self.cursor.execute(sql, params)
65
66 def executemany(self, sql, param_list):
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/utils.py in __exit__(self, exc_type, exc_value, traceback)
92 if dj_exc_type not in (DataError, IntegrityError):
93 self.wrapper.errors_occurred = True
---> 94 six.reraise(dj_exc_type, dj_exc_value, traceback)
95
96 def __call__(self, func):
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/utils/six.py in reraise(tp, value, tb)
683 value = tp()
684 if value.__traceback__ is not tb:
--> 685 raise value.with_traceback(tb)
686 raise value
687
/home/cchilders/.virtualenvs/austin_eats/lib/python3.5/site-packages/django/db/backends/utils.py in execute(self, sql, params)
62 return self.cursor.execute(sql)
63 else:
---> 64 return self.cursor.execute(sql, params)
65
66 def executemany(self, sql, param_list):
ProgrammingError: column users_appuser.id does not exist
LINE 1: SELECT "users_appuser"."id", "users_appuser"."user_id", "use...
Same for other:
In [3]: for user in users:
...: user.delete()
blah blah
ProgrammingError: column users_appuser.user_id does not exist
LINE 1: DELETE FROM "users_appuser" WHERE "users_appuser"."user_id" ...
Can I repair this filth without dropping and remaking entire db? Thank you
| [
"This is happening because you are migrating both your apps and admin models at the same time.\nTo fix this,\n\nDelete Database,\nDelete migrations,\nMigrate admin models first before applying makemigrations\nDo make migrations and migrate.\n\n"
] | [
0
] | [] | [] | [
"django",
"django_1.10",
"django_contrib",
"python",
"python_3.x"
] | stackoverflow_0039951088_django_django_1.10_django_contrib_python_python_3.x.txt |
Q:
How to use 3 or more telegram clients at the same time?
i want to use 3 or more telegram clients at the same time, with 1 or/and 2 clients i don't have problems, but with 3 clients i get errors.
client2 = TelegramClient('session1', api_id2, api_hash2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\telethon\client\telegrambaseclient.py", line 294, in __init__
session.set_dc(
File "C:\Python311\Lib\site-packages\telethon\sessions\sqlite.py", line 168, in set_dc
self._update_session_table()
File "C:\Python311\Lib\site-packages\telethon\sessions\sqlite.py", line 194, in _update_session_table
c.execute('delete from sessions')
sqlite3.OperationalError: database is locked
What i want to do:
I want to use multiple accounts, one will only stay in groups, and when someone join in the group, the first account will get event "event.user_joined" and get the member id and then, using others account send them a private message (I already realized this part, but only with 2 accounts) but i want, for every 50 messages sent, to switch to next account.
In this case, first 50 messages to be sent by "client1", next 50 messages to be sent by "client2" until last client i have (I want atleast 6) and then start again.
This is the code im using now
@client.on(events.ChatAction)
async def handler(event):
index = 0
if (event.user_added or event.user_joined):
user = await event.get_user()
receiver = InputPeerUser(user.id,user.access_hash)
index = 0
try:
if index < 50:
await client1.send_message(receiver, message)
print('Message sent successfully!')
elif index < 100:
await client2.send_message(receiver, message)
print('Message sent successfully!')
elif index < 150:
await client3.send_message(receiver, message)
print('Message sent successfully!')
# elif index < 200:
# await client4.send_message(receiver, message)
# print('Message sent successfully!')
# elif index < 250:
# await client5.send_message(receiver, message)
# print('Message sent successfully!')
elif index < 200:
index == 0
except:
pass
i used this part of code for logging in more than 2 clients.
client = TelegramClient('session', api_id, api_hash)
client.start()
client1 = TelegramClient('session1', api_id1, api_hash1)
client1.start()
client2 = TelegramClient('session1', api_id2, api_hash2)
client2.start()
client3 = TelegramClient('session1', api_id3, api_hash3)
client3.start()
client4 = TelegramClient('session1', api_id4, api_hash4)
client4.start()
client5 = TelegramClient('session1', api_id5, api_hash5)
client5.start()
And this is the error i get, when im trying to connect the 3rd client
PS C:\Users\37378\Desktop\Telegram new member dm> python .\main.py
Please enter your phone (or bot token): 6282274692947
Please enter the code you received: 30365
Signed in successfully as Dufufj Ff
Please enter your phone (or bot token): 6281996803497
Please enter the code you received: 63977
Signed in successfully as Hduduf
Traceback (most recent call last):
File "C:\Users\37378\Desktop\Telegram new member dm\main.py", line 31, in <module>
client2.start()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\client\auth.py", line 134, in start
else self.loop.run_until_complete(coro)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\client\auth.py", line 141, in _start
await self.connect()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\client\telegrambaseclient.py", line 537, in connect
self.session.auth_key = self._sender.auth_key
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\sessions\sqlite.py", line 180, in auth_key
self._update_session_table()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\sessions\sqlite.py", line 194, in _update_session_table
c.execute('delete from sessions')
sqlite3.OperationalError: database is locked
Task was destroyed but it is pending!
task: <Task pending name='Task-47' coro=<Connection._send_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\connection\connection.py:311> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-48' coro=<Connection._recv_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\connection\connection.py:329> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-49' coro=<MTProtoSender._send_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\mtprotosender.py:462> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-50' coro=<MTProtoSender._recv_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\mtprotosender.py:505> wait_for=<Future pending cb=[Task.task_wakeup()]>>
A:
Change each client to use a different session name, and start them properly (right now, every client after the first reuses the same 'session1' file, which is why SQLite reports the database as locked):
client = TelegramClient('session', api_id, api_hash)
client.start()
client1 = TelegramClient('session1', api_id1, api_hash1)
client1.start()
client2 = TelegramClient('session2', api_id2, api_hash2)
client2.start()
client3 = TelegramClient('session3', api_id3, api_hash3)
client3.start()
client4 = TelegramClient('session4', api_id4, api_hash4)
client4.start()
client5 = TelegramClient('session5', api_id5, api_hash5)
client5.start()
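A hedged sketch of the 50-messages-per-account rotation described in the question, assuming the clients above are started and that events, InputPeerUser and message exist as in the question; the counter lives at module level so it is not reset on every event:
senders = [client1, client2, client3, client4, client5]
sent_count = 0

@client.on(events.ChatAction)
async def handler(event):
    global sent_count
    if event.user_added or event.user_joined:
        user = await event.get_user()
        receiver = InputPeerUser(user.id, user.access_hash)
        sender = senders[(sent_count // 50) % len(senders)]  # switch account every 50 messages
        try:
            await sender.send_message(receiver, message)
            sent_count += 1
            print('Message sent successfully!')
        except Exception as e:
            print(f'Failed to send: {e}')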
| How to use 3 or more telegram clients at the same time? | i want to use 3 or more telegram clients at the same time, with 1 or/and 2 clients i don't have problems, but with 3 clients i get errors.
client2 = TelegramClient('session1', api_id2, api_hash2)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\telethon\client\telegrambaseclient.py", line 294, in __init__
session.set_dc(
File "C:\Python311\Lib\site-packages\telethon\sessions\sqlite.py", line 168, in set_dc
self._update_session_table()
File "C:\Python311\Lib\site-packages\telethon\sessions\sqlite.py", line 194, in _update_session_table
c.execute('delete from sessions')
sqlite3.OperationalError: database is locked
What i want to do:
I want to use multiple accounts, one will only stay in groups, and when someone join in the group, the first account will get event "event.user_joined" and get the member id and then, using others account send them a private message (I already realized this part, but only with 2 accounts) but i want, for every 50 messages sent, to switch to next account.
In this case, first 50 messages to be sent by "client1", next 50 messages to be sent by "client2" until last client i have (I want atleast 6) and then start again.
This is the code im using now
@client.on(events.ChatAction)
async def handler(event):
index = 0
if (event.user_added or event.user_joined):
user = await event.get_user()
receiver = InputPeerUser(user.id,user.access_hash)
index = 0
try:
if index < 50:
await client1.send_message(receiver, message)
print('Message sent successfully!')
elif index < 100:
await client2.send_message(receiver, message)
print('Message sent successfully!')
elif index < 150:
await client3.send_message(receiver, message)
print('Message sent successfully!')
# elif index < 200:
# await client4.send_message(receiver, message)
# print('Message sent successfully!')
# elif index < 250:
# await client5.send_message(receiver, message)
# print('Message sent successfully!')
elif index < 200:
index == 0
except:
pass
i used this part of code for logging in more than 2 clients.
client = TelegramClient('session', api_id, api_hash)
client.start()
client1 = TelegramClient('session1', api_id1, api_hash1)
client1.start()
client2 = TelegramClient('session1', api_id2, api_hash2)
client2.start()
client3 = TelegramClient('session1', api_id3, api_hash3)
client3.start()
client4 = TelegramClient('session1', api_id4, api_hash4)
client4.start()
client5 = TelegramClient('session1', api_id5, api_hash5)
client5.start()
And this is the error i get, when im trying to connect the 3rd client
PS C:\Users\37378\Desktop\Telegram new member dm> python .\main.py
Please enter your phone (or bot token): 6282274692947
Please enter the code you received: 30365
Signed in successfully as Dufufj Ff
Please enter your phone (or bot token): 6281996803497
Please enter the code you received: 63977
Signed in successfully as Hduduf
Traceback (most recent call last):
File "C:\Users\37378\Desktop\Telegram new member dm\main.py", line 31, in <module>
client2.start()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\client\auth.py", line 134, in start
else self.loop.run_until_complete(coro)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.2288.0_x64__qbz5n2kfra8p0\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\client\auth.py", line 141, in _start
await self.connect()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\client\telegrambaseclient.py", line 537, in connect
self.session.auth_key = self._sender.auth_key
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\sessions\sqlite.py", line 180, in auth_key
self._update_session_table()
File "C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\sessions\sqlite.py", line 194, in _update_session_table
c.execute('delete from sessions')
sqlite3.OperationalError: database is locked
Task was destroyed but it is pending!
task: <Task pending name='Task-47' coro=<Connection._send_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\connection\connection.py:311> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-48' coro=<Connection._recv_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\connection\connection.py:329> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-49' coro=<MTProtoSender._send_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\mtprotosender.py:462> wait_for=<Future pending cb=[Task.task_wakeup()]>>
Task was destroyed but it is pending!
task: <Task pending name='Task-50' coro=<MTProtoSender._recv_loop() running at C:\Users\37378\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\telethon\network\mtprotosender.py:505> wait_for=<Future pending cb=[Task.task_wakeup()]>>
| [
"Change each client to use different session names, and properly start them (right now, you're only starting client repeatedly)\nclient = TelegramClient('session', api_id, api_hash)\nclient.start()\nclient1 = TelegramClient('session1', api_id1, api_hash1)\nclient1.start()\nclient2 = TelegramClient('session2', api_id2, api_hash2)\nclient2.start()\nclient3 = TelegramClient('session3', api_id3, api_hash3)\nclient3.start()\nclient4 = TelegramClient('session4', api_id4, api_hash4)\nclient4.start()\nclient5 = TelegramClient('session5', api_id5, api_hash5)\nclient5.start()\n\n"
] | [
0
] | [] | [] | [
"multiple_instances",
"python",
"telegram",
"telethon"
] | stackoverflow_0074634227_multiple_instances_python_telegram_telethon.txt |
Q:
Vscode seems to be changing directory when running a python script
I seem to be having a vscode related issue. I am doing the open() function but no matter what I ask it to do it gives me a directory error. The file that I want the python script to interact with is in the same folder so it should work but when I do "import os" and "os.getcwd()" the directory it says I am in is Desktop. (the script and file are both in the "/Desktop/Python/File Handling" directory)
It seems the script is stuck at the Desktop directory when I try to run it from vscode. If I run it by doing python3 "name of script" command in the kali linux terminal it works fine and if I check my directory again with os.getcwd() it says the correct one (/Desktop/Python/File Handling).
So I believe it's something with vscode as it literally just randomly happened one day. Yesterday my scripts were working fine and now all the ones I run from vscode, that are supposed to interact with the files in their respective folders, don't work. The vscode terminal gives me this code and as you can see it does the cd command at the start, which I believe might be the issue for why it always looks at files in the Desktop directory but I do not know how to make it stop doing that.
$ cd /home/kali/Desktop ; /usr/bin/env /bin/python /home/kali/.vscode/extensions/ms-python.python-2022.18.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 41017 -- /home/kali/Desktop/Python/File\ Handling/File\ Handling\ 2.py
Traceback (most recent call last):
File "/home/kali/Desktop/Python/File Handling/File Handling 2.py", line 3, in <module>
f = open("apple.jpeg", "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'apple.jpeg'
Lastly, I know about the os.chdir(r"/home/kali/Desktop/Python/File Handling") function and it fixes the issue but I do not want to have to write this command at the top of every script that is supposed to interact with the files in the folder it is in by default and because this issue just randomly came up out of nowhere the next day I opened vscode and ran my script from yesterday (without changing any code or vscode settings.)
P.S. I am using a VM as well if that helps.
A:
This is caused by VS Code using the workspace folder as the working directory.
This leads to a problem: when you call os.getcwd() from a script in a deeper directory of the workspace, you still get the workspace directory.
You can open your settings, search for Python > Terminal: Execute In File Dir, and check it.
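Alternatively, a small sketch inside the script itself makes it independent of the working directory by resolving paths relative to the script's own location ('apple.jpeg' as in the question):
from pathlib import Path

script_dir = Path(__file__).resolve().parent
with open(script_dir / 'apple.jpeg', 'rb') as f:
    data = f.read()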
You can also use debug mode and add the following to your launch.json:
"cwd": "${fileDirname}"
| Vscode seems to be changing directory when running a python script | I seem to be having a vscode related issue. I am doing the open() function but no matter what I ask it to do it gives me a directory error. The file that I want the python script to interact with is in the same folder so it should work but when I do "import os" and "os.getcwd()" the directory it says I am in is Desktop. (the script and file are both in the "/Desktop/Python/File Handling" directory)
It seems the script is stuck at the Desktop directory when I try to run it from vscode. If I run it by doing python3 "name of script" command in the kali linux terminal it works fine and if I check my directory again with os.getcwd() it says the correct one (/Desktop/Python/File Handling).
So I believe it's something with vscode as it literally just randomly happened one day. Yesterday my scripts were working fine and now all the ones I run from vscode, that are supposed to interact with the files in their respective folders, don't work. The vscode terminal gives me this code and as you can see it does the cd command at the start, which I believe might be the issue for why it always looks at files in the Desktop directory but I do not know how to make it stop doing that.
$ cd /home/kali/Desktop ; /usr/bin/env /bin/python /home/kali/.vscode/extensions/ms-python.python-2022.18.2/pythonFiles/lib/python/debugpy/adapter/../../debugpy/launcher 41017 -- /home/kali/Desktop/Python/File\ Handling/File\ Handling\ 2.py
Traceback (most recent call last):
File "/home/kali/Desktop/Python/File Handling/File Handling 2.py", line 3, in <module>
f = open("apple.jpeg", "rb")
FileNotFoundError: [Errno 2] No such file or directory: 'apple.jpeg'
Lastly, I know about the os.chdir(r"/home/kali/Desktop/Python/File Handling") function and it fixes the issue but I do not want to have to write this command at the top of every script that is supposed to interact with the files in the folder it is in by default and because this issue just randomly came up out of nowhere the next day I opened vscode and ran my script from yesterday (without changing any code or vscode settings.)
P.S. I am using a VM as well if that helps.
| [
"This is caused by vscode using workspace as root floder.\nThis will lead to a problem. When you use the os.getcwd() method in the deep directory of the workspace, you will still get the workspace directory.\nYou can open your settings and search Python > Terminal: Execute In File Dir then check it.\n\nYou can also use debug mode and add the following to your launch.json:\n\"cwd\": \"${fileDirname}\"\n\n"
] | [
1
] | [] | [] | [
"python",
"visual_studio_code"
] | stackoverflow_0074637571_python_visual_studio_code.txt |
Q:
Python function not executing in azure automation runbook
I am working on writing a function in python azure automation runbook, where the basic gist of what I'm trying to accomplish is shown in the code below. But, I'm perplexed as when I take the code out of the function and run the code as is it all works as expected. But, when the function is used I see the output for Start & End but I don't see the function getting executed. I'm not clear the piece I miss for this to not work.
Python:
import sys
import requests
print ("Start")
def myFunction():
    # do something
    print("This is my function doing something")

myFunction()
print ("End")
A:
Your code worked successfully and achieved the expected outcomes when I tried it in my environment, as shown below:
First and foremost, you should add the requests(3.8) module/package to the Azure Automation runbook as you are using it in code.
Add a package:
Path:
Azure automation account -> Python packages -> Add a Python Package
Note: Versions of Python runbook and packages should be compatible with each other.
I've created a Python runbook (3.8) and the function code was successfully executed.
#!/usr/bin/env python3
import sys
import requests
print ("Start")
def myFunction():
    print("This is my function doing something")

myFunction()
print ("End")
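A small hedged check you can drop into the runbook to confirm that the interpreter and the imported package line up (requests exposes __version__):
import sys
import requests

print(sys.version)           # the runbook's Python version, which should match the package version you added (e.g. 3.8)
print(requests.__version__)  # confirms the added requests package is importable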
| Python function not executing in azure automation runbook |
I am working on writing a function in python azure automation runbook, where the basic gist of what I'm trying to accomplish is shown in the code below. But, I'm perplexed as when I take the code out of the function and run the code as is it all works as expected. But, when the function is used I see the output for Start & End but I don't see the function getting executed. I'm not clear the piece I miss for this to not work.
Python:
import sys
import requests
print ("Start")
def myFunction():
# do something
print("This is my function doing something")
myFunction()
print ("End")
| [
"Your code worked successfully and achieved the expected outcomes when I tried it in my environment, as shown below:\nFirst and foremost, you should add the requests(3.8) module/package to the Azure Automation runbook as you are using it in code.\nAdd a package:\nPath:\nAzure automation account -> Python packages -> Add a Python Package\n\nNote: Versions of Python runbook and packages should be compatible with each other.\nI've created a Python runbook (3.8) and the function code was successfully executed.\n#!/usr/bin/env python3\nimport sys\nimport requests\nprint (\"Start\")\ndef myFunction():\nprint(\"This is my function doing something\")\nmyFunction()\nprint (\"End\")\n\n\n"
] | [
1
] | [] | [] | [
"azure",
"azure_automation",
"python"
] | stackoverflow_0074634586_azure_azure_automation_python.txt |
Q:
Can someone explain to me why my django admin theme is dark?
I host my small project on PythonAnywhere. After hosting it I checked that it was working, and when I open the Django admin its theme is dark, while on my localhost the theme is white. I double-checked my static URL and I think it is fine; by the way, this is my static URL for my admin:
Static Url: /static/admin, Static Directory: /home/k3v1nSocialProject/.virtualenvs/myprojenv/lib/python3.8/site-packages/django/contrib/admin/static/admin. Can someone explain to me what is happening and why my django admin theme is dark?
A:
As part of the Django 3.2 release, the admin now has a dark theme that is applied based on a prefers-color-scheme media query. Release Notes
The admin now supports theming, and includes a dark theme that is enabled according to browser settings. See Theming support for more details.
A:
From Django 3.2 we have the possibility to adjust admin themes.
The fastest way to ignore the dark theme is:
Create an admin folder inside your templates folder, then create a file base.html
templates/admin/base.html
copy this code into base.html
{% extends 'admin/base.html' %}
{% block extrahead %}{{ block.super }}
<style>
/* VARIABLE DEFINITIONS */
:root {
--primary: #79aec8;
--secondary: #417690;
--accent: #f5dd5d;
--primary-fg: #fff;
--body-fg: #333;
--body-bg: #fff;
--body-quiet-color: #666;
--body-loud-color: #000;
--header-color: #ffc;
--header-branding-color: var(--accent);
--header-bg: var(--secondary);
--header-link-color: var(--primary-fg);
--breadcrumbs-fg: #c4dce8;
--breadcrumbs-link-fg: var(--body-bg);
--breadcrumbs-bg: var(--primary);
--link-fg: #447e9b;
--link-hover-color: #036;
--link-selected-fg: #5b80b2;
--hairline-color: #e8e8e8;
--border-color: #ccc;
--error-fg: #ba2121;
--message-success-bg: #dfd;
--message-warning-bg: #ffc;
--message-error-bg: #ffefef;
--darkened-bg: #f8f8f8; /* A bit darker than --body-bg */
--selected-bg: #e4e4e4; /* E.g. selected table cells */
--selected-row: #ffc;
--button-fg: #fff;
--button-bg: var(--primary);
--button-hover-bg: #609ab6;
--default-button-bg: var(--secondary);
--default-button-hover-bg: #205067;
--close-button-bg: #888; /* Previously #bbb, contrast 1.92 */
--close-button-hover-bg: #747474;
--delete-button-bg: #ba2121;
--delete-button-hover-bg: #a41515;
--object-tools-fg: var(--button-fg);
--object-tools-bg: var(--close-button-bg);
--object-tools-hover-bg: var(--close-button-hover-bg);
}
</style>
{% endblock %}
Now you should have original colors back.
A:
To those wondering where to put this override data from Adam's response above, it would depend on where your TEMPLATES DIRS are assigned in the settings file. For example:
settings.py
TEMPLATES = [
{
'BACKEND': 'django.template.backends.django.DjangoTemplates',
'DIRS': [(os.path.join(BASE_DIR, 'templates')),], # <- Template path to put the html file
'APP_DIRS': True,
'OPTIONS': {
'context_processors': [
'django.template.context_processors.debug',
'django.template.context_processors.request',
'django.contrib.auth.context_processors.auth',
'django.contrib.messages.context_processors.messages',
],
},
},
]
Note the DIRS directory. This translates to a templates folder at the same level as my manage.py file.
Inside that templates folder I have another folder called admin and an HTML file called base. So it looks like this: \projectname\templates\admin\base.html
Then in the base.html file I have the code Adam mentions from the documentation on theming support
{% extends 'admin/base.html' %}
{% block extrahead %}{{ block.super }}
<style>
:root {
--primary: #9774d5;
--secondary: #785cab;
--link-fg: #7c449b;
--link-selected-fg: #8f5bb2;
--body-fg: #333;
--body-bg: #fff;
--body-quiet-color: #666;
--body-loud-color: #000;
--darkened-bg: #f8f8f8; /* A bit darker than --body-bg */
--selected-bg: #e4e4e4; /* E.g. selected table cells */
--selected-row: #ffc;
}
</style>
{% endblock %}
This should now work for you. If you use these exact settings here, it will be a light theme with purple, which you can then adjust accordingly.
A:
Django 3.2 admin theme change
If you want to return to the old theme, as I did, you can just override the color variables.
Go to django/contrib/admin/static/admin/css/base.css and copy the block that looks like this:
/* VARIABLE DEFINITIONS */
:root {
--primary: #79aec8;
--secondary: #417690;
.......
}
Next, create a folder named admin in your templates folder, create a base.html file inside, and place this code there. Make sure that you replace :root with the variables that you got from the original base.css.
{% extends 'admin/base.html' %}
{% block extrahead %}{{ block.super }}
<style>
:root {
--primary: #79aec8;
--secondary: #417690;
--accent: #f5dd5d;
--primary-fg: #fff;
......
}
</style>
{% endblock %}
And enjoy old beautiful look of Django that we all like)
A:
For those who would like to have a nice switch between dark and light mode.
This feature will come in Django 4.2 (scheduled April 2023) but I've amended Sarah Abderamane's PR to work in 4.1.
Do the following to activate it:
Add a file admin/color_theme_dark_mode.html to your templates directory:
<style>
html[data-theme="light"],
:root {
--primary: #79aec8;
--secondary: #417690;
--accent: #f5dd5d;
--primary-fg: #fff;
--body-fg: #333;
--body-bg: #fff;
--body-quiet-color: #666;
--body-loud-color: #000;
--header-color: #ffc;
--header-branding-color: var(--accent);
--header-bg: var(--secondary);
--header-link-color: var(--primary-fg);
--breadcrumbs-fg: #c4dce8;
--breadcrumbs-link-fg: var(--body-bg);
--breadcrumbs-bg: var(--primary);
--link-fg: #447e9b;
--link-hover-color: #036;
--link-selected-fg: #5b80b2;
--hairline-color: #e8e8e8;
--border-color: #ccc;
--error-fg: #ba2121;
--message-success-bg: #dfd;
--message-warning-bg: #ffc;
--message-error-bg: #ffefef;
--darkened-bg: #f8f8f8; /* A bit darker than --body-bg */
--selected-bg: #e4e4e4; /* E.g. selected table cells */
--selected-row: #ffc;
--button-fg: #fff;
--button-bg: var(--primary);
--button-hover-bg: #609ab6;
--default-button-bg: var(--secondary);
--default-button-hover-bg: #205067;
--close-button-bg: #888; /* Previously #bbb, contrast 1.92 */
--close-button-hover-bg: #747474;
--delete-button-bg: #ba2121;
--delete-button-hover-bg: #a41515;
--object-tools-fg: var(--button-fg);
--object-tools-bg: var(--close-button-bg);
--object-tools-hover-bg: var(--close-button-hover-bg);
}
html[data-theme="dark"] {
--primary: #264b5d;
--primary-fg: #f7f7f7;
--body-fg: #eeeeee;
--body-bg: #121212;
--body-quiet-color: #e0e0e0;
--body-loud-color: #ffffff;
--breadcrumbs-link-fg: #e0e0e0;
--breadcrumbs-bg: var(--primary);
--link-fg: #81d4fa;
--link-hover-color: #4ac1f7;
--link-selected-fg: #6f94c6;
--hairline-color: #272727;
--border-color: #353535;
--error-fg: #e35f5f;
--message-success-bg: #006b1b;
--message-warning-bg: #583305;
--message-error-bg: #570808;
--darkened-bg: #212121;
--selected-bg: #1b1b1b;
--selected-row: #00363a;
--close-button-bg: #333333;
--close-button-hover-bg: #666666;
}
/* THEME SWITCH */
.theme-toggle {
cursor: pointer;
border: none;
padding: 0;
background: transparent;
vertical-align: middle;
margin-left: 5px;
margin-top: -1px;
}
.theme-toggle svg {
vertical-align: middle;
height: 1rem;
width: 1rem;
display: none;
}
/* ICONS */
.theme-toggle svg.theme-icon-when-dark,
.theme-toggle svg.theme-icon-when-light {
fill: var(--header-link-color);
color: var(--header-bg);
}
html[data-theme="dark"] .theme-toggle svg.theme-icon-when-dark {
display: block;
}
html[data-theme="light"] .theme-toggle svg.theme-icon-when-light {
display: block;
}
.visually-hidden {
position: absolute;
width: 1px;
height: 1px;
padding: 0;
overflow: hidden;
clip: rect(0, 0, 0, 0);
white-space: nowrap;
border: 0;
color: var(--body-fg);
background-color: var(--body-bg);
}
</style>
<script>
// Avoid flashes of a light theme.
const currentTheme = localStorage.getItem("theme");
document.documentElement.dataset.theme = currentTheme || "auto";
window.addEventListener("load", function (e) {
function setTheme(mode) {
if (mode !== "light" && mode !== "dark" && mode !== "auto") {
console.error(`Got invalid theme mode: ${mode}. Resetting to auto.`);
mode = "auto";
}
if (mode === "auto") {
const prefersDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
mode = prefersDark ? "dark" : "light";
}
document.documentElement.dataset.theme = mode;
localStorage.setItem("theme", mode);
}
function cycleTheme() {
const currentTheme = localStorage.getItem("theme");
if (currentTheme) currentTheme === "light" ? setTheme("dark") : setTheme("light");
else setTheme("auto"); // resets to the system theme
}
function initTheme() {
// set theme defined in localStorage if there is one, or fallback
// to system mode
const currentTheme = localStorage.getItem("theme");
currentTheme ? setTheme(currentTheme) : setTheme("auto");
}
function setupTheme() {
// Attach event handlers for toggling themes
const buttons = document.getElementsByClassName("theme-toggle");
Array.from(buttons).forEach((btn) => {
btn.addEventListener("click", cycleTheme);
});
initTheme();
}
setupTheme();
});
</script>
Add a file admin/color_theme_toggle.html to your templates directory:
<button class="theme-toggle">
<div class="visually-hidden">Toggle light / dark color theme</div>
<svg class="theme-icon-when-dark">
<use xlink:href="#icon-moon" />
</svg>
<svg class="theme-icon-when-light">
<use xlink:href="#icon-sun" />
</svg>
</button>
<!-- SVGs -->
<div style="display: none">
<svg xmlns="http://www.w3.org/2000/svg">
<symbol viewBox="0 0 24 24" width="16" height="16" id="icon-auto">
<path d="M0 0h24v24H0z" fill="currentColor" />
<path
d="M12 22C6.477 22 2 17.523 2 12S6.477 2 12 2s10 4.477 10 10-4.477 10-10 10zm0-2V4a8 8 0 1 0 0 16z"
/>
</symbol>
<symbol viewBox="0 0 24 24" width="16" height="16" id="icon-moon">
<path d="M0 0h24v24H0z" fill="currentColor" />
<path
d="M10 7a7 7 0 0 0 12 4.9v.1c0 5.523-4.477 10-10 10S2 17.523 2 12 6.477 2 12 2h.1A6.979 6.979 0 0 0 10 7zm-6 5a8 8 0 0 0 15.062 3.762A9 9 0 0 1 8.238 4.938 7.999 7.999 0 0 0 4 12z"
/>
</symbol>
<symbol viewBox="0 0 24 24" width="16" height="16" id="icon-sun">
<path d="M0 0h24v24H0z" fill="currentColor" />
<path
d="M12 18a6 6 0 1 1 0-12 6 6 0 0 1 0 12zm0-2a4 4 0 1 0 0-8 4 4 0 0 0 0 8zM11 1h2v3h-2V1zm0 19h2v3h-2v-3zM3.515 4.929l1.414-1.414L7.05 5.636 5.636 7.05 3.515 4.93zM16.95 18.364l1.414-1.414 2.121 2.121-1.414 1.414-2.121-2.121zm2.121-14.85l1.414 1.415-2.121 2.121-1.414-1.414 2.121-2.121zM5.636 16.95l1.414 1.414-2.121 2.121-1.414-1.414 2.121-2.121zM23 11v2h-3v-2h3zM4 11v2H1v-2h3z"
/>
</symbol>
</svg>
</div>
<!-- END SVGs -->
Add the following to the base.html file in your templates directory:
{% block dark-mode-vars %}
{{ block.super }}
{% include "admin/color_theme_dark_mode.html" %}
{% endblock %}
{% block userlinks %}
{{ block.super }}
{% include "admin/color_theme_toggle.html" %}
{% endblock %}
Enjoy a new icon at top right to switch between light/dark (I removed "auto" to simplify it a bit):
| Can someone explain to my why my django admin theme is dark? | I host my small project on pythonanywhere and after i host it i check if it is working and when i click the django admin, the theme of my django admin is dark and when i tried to run on my local host the theme is white so i tried to double check my static url and i think it is fine and btw this is my static url for my admin
Static Url: /static/admin, Static Directory: /home/k3v1nSocialProject/.virtualenvs/myprojenv/lib/python3.8/site-packages/django/contrib/admin/static/admin. Can someone explain to me what is happening and why is my django admin theme is dark?
| [
"As part of the Django 3.2 release, the admin now has a dark theme that is applied based on a prefers-color-scheme media query. Release Notes\n\nThe admin now supports theming, and includes a dark theme that is enabled according to browser settings. See Theming support for more details.\n\n",
"From django 3.2 we have possibility to adjust admin themes.\nFastest way to ignore Dark theme is:\nCreate admin folder inside your templates folder, then create file base.html\ntemplates/admin/base.html\n\ncopy this code into base.html\n{% extends 'admin/base.html' %}\n\n{% block extrahead %}{{ block.super }}\n<style>\n/* VARIABLE DEFINITIONS */\n:root {\n --primary: #79aec8;\n --secondary: #417690;\n --accent: #f5dd5d;\n --primary-fg: #fff;\n\n --body-fg: #333;\n --body-bg: #fff;\n --body-quiet-color: #666;\n --body-loud-color: #000;\n\n --header-color: #ffc;\n --header-branding-color: var(--accent);\n --header-bg: var(--secondary);\n --header-link-color: var(--primary-fg);\n\n --breadcrumbs-fg: #c4dce8;\n --breadcrumbs-link-fg: var(--body-bg);\n --breadcrumbs-bg: var(--primary);\n\n --link-fg: #447e9b;\n --link-hover-color: #036;\n --link-selected-fg: #5b80b2;\n\n --hairline-color: #e8e8e8;\n --border-color: #ccc;\n\n --error-fg: #ba2121;\n\n --message-success-bg: #dfd;\n --message-warning-bg: #ffc;\n --message-error-bg: #ffefef;\n\n --darkened-bg: #f8f8f8; /* A bit darker than --body-bg */\n --selected-bg: #e4e4e4; /* E.g. selected table cells */\n --selected-row: #ffc;\n\n --button-fg: #fff;\n --button-bg: var(--primary);\n --button-hover-bg: #609ab6;\n --default-button-bg: var(--secondary);\n --default-button-hover-bg: #205067;\n --close-button-bg: #888; /* Previously #bbb, contrast 1.92 */\n --close-button-hover-bg: #747474;\n --delete-button-bg: #ba2121;\n --delete-button-hover-bg: #a41515;\n\n --object-tools-fg: var(--button-fg);\n --object-tools-bg: var(--close-button-bg);\n --object-tools-hover-bg: var(--close-button-hover-bg);\n}\n\n\n</style>\n{% endblock %}\n\nNow you should have original colors back.\n",
"To those wondering where to put this override data from Adam's response above, it would depend on where your TEMPLATES DIRS are assigned in the settings file. For example:\nsettings.py\nTEMPLATES = [\n {\n 'BACKEND': 'django.template.backends.django.DjangoTemplates',\n 'DIRS': [(os.path.join(BASE_DIR, 'templates')),], # <- Template path to put the html file\n 'APP_DIRS': True,\n 'OPTIONS': {\n 'context_processors': [\n 'django.template.context_processors.debug',\n 'django.template.context_processors.request',\n 'django.contrib.auth.context_processors.auth',\n 'django.contrib.messages.context_processors.messages',\n ],\n },\n },\n]\n\nNote the DIRS directory. This translates to a templates folder at the same level as my manage.py file.\nInside that templates folder i have another folder called admin and an html file called base. So it looks like this: \\projectname\\templates\\admin\\base.html\nThen in the base.html file i have the code Adam mentions from the documentation theming support\n{% extends 'admin/base.html' %}\n \n{% block extrahead %}{{ block.super }}\n<style>\n :root {\n --primary: #9774d5;\n --secondary: #785cab;\n --link-fg: #7c449b;\n --link-selected-fg: #8f5bb2;\n --body-fg: #333;\n --body-bg: #fff;\n --body-quiet-color: #666;\n --body-loud-color: #000;\n --darkened-bg: #f8f8f8; /* A bit darker than --body-bg */\n --selected-bg: #e4e4e4; /* E.g. selected table cells */\n --selected-row: #ffc;\n }\n\n</style>\n{% endblock %}\n\nThis should now work for you. If you use these exact settings here, it will be a light theme with purple. Then you can just accordingly.\n",
"Django 3.2 admin theme change\nIf you want to return old theme as i did you can just override color variables.\nGo to django/contrib/admin/static/admin/css/base.css and copy block that looks like this\n/* VARIABLE DEFINITIONS */\n:root {\n --primary: #79aec8;\n --secondary: #417690; \n .......\n}\n\nNext create folder named admin in templates folder and create base.html file inside and place this code. Make sure that you replace :root with variables that you got from initial base.html\n{% extends 'admin/base.html' %}\n \n{% block extrahead %}{{ block.super }}\n<style>\n :root {\n --primary: #79aec8;\n --secondary: #417690;\n --accent: #f5dd5d;\n --primary-fg: #fff;\n ......\n }\n\n</style>\n{% endblock %}\n\nAnd enjoy old beautiful look of Django that we all like)\n",
"For those who would like to have a nice switch between dark and light mode.\nThis feature will come in Django 4.2 (scheduled April 2023) but I've amended Sarah Abderamane's PR to work in 4.1.\nDo the following to activate it:\n\nAdd a file admin/color_theme_dark_mode.html to your templates directory:\n\n<style>\n html[data-theme=\"light\"],\n :root {\n --primary: #79aec8;\n --secondary: #417690;\n --accent: #f5dd5d;\n --primary-fg: #fff;\n\n --body-fg: #333;\n --body-bg: #fff;\n --body-quiet-color: #666;\n --body-loud-color: #000;\n\n --header-color: #ffc;\n --header-branding-color: var(--accent);\n --header-bg: var(--secondary);\n --header-link-color: var(--primary-fg);\n\n --breadcrumbs-fg: #c4dce8;\n --breadcrumbs-link-fg: var(--body-bg);\n --breadcrumbs-bg: var(--primary);\n\n --link-fg: #447e9b;\n --link-hover-color: #036;\n --link-selected-fg: #5b80b2;\n\n --hairline-color: #e8e8e8;\n --border-color: #ccc;\n\n --error-fg: #ba2121;\n\n --message-success-bg: #dfd;\n --message-warning-bg: #ffc;\n --message-error-bg: #ffefef;\n\n --darkened-bg: #f8f8f8; /* A bit darker than --body-bg */\n --selected-bg: #e4e4e4; /* E.g. selected table cells */\n --selected-row: #ffc;\n\n --button-fg: #fff;\n --button-bg: var(--primary);\n --button-hover-bg: #609ab6;\n --default-button-bg: var(--secondary);\n --default-button-hover-bg: #205067;\n --close-button-bg: #888; /* Previously #bbb, contrast 1.92 */\n --close-button-hover-bg: #747474;\n --delete-button-bg: #ba2121;\n --delete-button-hover-bg: #a41515;\n\n --object-tools-fg: var(--button-fg);\n --object-tools-bg: var(--close-button-bg);\n --object-tools-hover-bg: var(--close-button-hover-bg);\n }\n\n html[data-theme=\"dark\"] {\n --primary: #264b5d;\n --primary-fg: #f7f7f7;\n\n --body-fg: #eeeeee;\n --body-bg: #121212;\n --body-quiet-color: #e0e0e0;\n --body-loud-color: #ffffff;\n\n --breadcrumbs-link-fg: #e0e0e0;\n --breadcrumbs-bg: var(--primary);\n\n --link-fg: #81d4fa;\n --link-hover-color: #4ac1f7;\n --link-selected-fg: #6f94c6;\n\n --hairline-color: #272727;\n --border-color: #353535;\n\n --error-fg: #e35f5f;\n --message-success-bg: #006b1b;\n --message-warning-bg: #583305;\n --message-error-bg: #570808;\n\n --darkened-bg: #212121;\n --selected-bg: #1b1b1b;\n --selected-row: #00363a;\n\n --close-button-bg: #333333;\n --close-button-hover-bg: #666666;\n }\n\n /* THEME SWITCH */\n .theme-toggle {\n cursor: pointer;\n border: none;\n padding: 0;\n background: transparent;\n vertical-align: middle;\n margin-left: 5px;\n margin-top: -1px;\n }\n\n .theme-toggle svg {\n vertical-align: middle;\n height: 1rem;\n width: 1rem;\n display: none;\n }\n\n /* ICONS */\n .theme-toggle svg.theme-icon-when-dark,\n .theme-toggle svg.theme-icon-when-light {\n fill: var(--header-link-color);\n color: var(--header-bg);\n }\n\n html[data-theme=\"dark\"] .theme-toggle svg.theme-icon-when-dark {\n display: block;\n }\n\n html[data-theme=\"light\"] .theme-toggle svg.theme-icon-when-light {\n display: block;\n }\n\n .visually-hidden {\n position: absolute;\n width: 1px;\n height: 1px;\n padding: 0;\n overflow: hidden;\n clip: rect(0, 0, 0, 0);\n white-space: nowrap;\n border: 0;\n color: var(--body-fg);\n background-color: var(--body-bg);\n }\n</style>\n\n<script>\n // Avoid flashes of a light theme.\n const currentTheme = localStorage.getItem(\"theme\");\n document.documentElement.dataset.theme = currentTheme || \"auto\";\n\n window.addEventListener(\"load\", function (e) {\n function setTheme(mode) {\n if (mode !== \"light\" && mode !== \"dark\" && mode !== 
\"auto\") {\n console.error(`Got invalid theme mode: ${mode}. Resetting to auto.`);\n mode = \"auto\";\n }\n\n if (mode === \"auto\") {\n const prefersDark = window.matchMedia(\"(prefers-color-scheme: dark)\").matches;\n mode = prefersDark ? \"dark\" : \"light\";\n }\n\n document.documentElement.dataset.theme = mode;\n localStorage.setItem(\"theme\", mode);\n }\n\n function cycleTheme() {\n const currentTheme = localStorage.getItem(\"theme\");\n if (currentTheme) currentTheme === \"light\" ? setTheme(\"dark\") : setTheme(\"light\");\n else setTheme(\"auto\"); // resets to the system theme\n }\n\n function initTheme() {\n // set theme defined in localStorage if there is one, or fallback\n // to system mode\n const currentTheme = localStorage.getItem(\"theme\");\n currentTheme ? setTheme(currentTheme) : setTheme(\"auto\");\n }\n\n function setupTheme() {\n // Attach event handlers for toggling themes\n const buttons = document.getElementsByClassName(\"theme-toggle\");\n Array.from(buttons).forEach((btn) => {\n btn.addEventListener(\"click\", cycleTheme);\n });\n initTheme();\n }\n\n setupTheme();\n });\n</script>\n\n\nAdd a file admin/color_theme_toggle.html to your templates directory:\n\n<button class=\"theme-toggle\">\n <div class=\"visually-hidden\">Toggle light / dark color theme</div>\n <svg class=\"theme-icon-when-dark\">\n <use xlink:href=\"#icon-moon\" />\n </svg>\n <svg class=\"theme-icon-when-light\">\n <use xlink:href=\"#icon-sun\" />\n </svg>\n</button>\n\n<!-- SVGs -->\n<div style=\"display: none\">\n <svg xmlns=\"http://www.w3.org/2000/svg\">\n <symbol viewBox=\"0 0 24 24\" width=\"16\" height=\"16\" id=\"icon-auto\">\n <path d=\"M0 0h24v24H0z\" fill=\"currentColor\" />\n <path\n d=\"M12 22C6.477 22 2 17.523 2 12S6.477 2 12 2s10 4.477 10 10-4.477 10-10 10zm0-2V4a8 8 0 1 0 0 16z\"\n />\n </symbol>\n <symbol viewBox=\"0 0 24 24\" width=\"16\" height=\"16\" id=\"icon-moon\">\n <path d=\"M0 0h24v24H0z\" fill=\"currentColor\" />\n <path\n d=\"M10 7a7 7 0 0 0 12 4.9v.1c0 5.523-4.477 10-10 10S2 17.523 2 12 6.477 2 12 2h.1A6.979 6.979 0 0 0 10 7zm-6 5a8 8 0 0 0 15.062 3.762A9 9 0 0 1 8.238 4.938 7.999 7.999 0 0 0 4 12z\"\n />\n </symbol>\n <symbol viewBox=\"0 0 24 24\" width=\"16\" height=\"16\" id=\"icon-sun\">\n <path d=\"M0 0h24v24H0z\" fill=\"currentColor\" />\n <path\n d=\"M12 18a6 6 0 1 1 0-12 6 6 0 0 1 0 12zm0-2a4 4 0 1 0 0-8 4 4 0 0 0 0 8zM11 1h2v3h-2V1zm0 19h2v3h-2v-3zM3.515 4.929l1.414-1.414L7.05 5.636 5.636 7.05 3.515 4.93zM16.95 18.364l1.414-1.414 2.121 2.121-1.414 1.414-2.121-2.121zm2.121-14.85l1.414 1.415-2.121 2.121-1.414-1.414 2.121-2.121zM5.636 16.95l1.414 1.414-2.121 2.121-1.414-1.414 2.121-2.121zM23 11v2h-3v-2h3zM4 11v2H1v-2h3z\"\n />\n </symbol>\n </svg>\n</div>\n<!-- END SVGs -->\n\n\nAdd the following to the base.html file in your templates directory:\n\n{% block dark-mode-vars %}\n{{ block.super }}\n{% include \"admin/color_theme_dark_mode.html\" %}\n{% endblock %}\n\n\n{% block userlinks %}\n{{ block.super }}\n{% include \"admin/color_theme_toggle.html\" %}\n{% endblock %}\n\n\nEnjoy a new icon at top right to switch between light/dark (I removed \"auto\" to simplify it a bit):\n\n\n"
] | [
14,
8,
6,
2,
1
] | [] | [] | [
"django",
"python",
"pythonanywhere"
] | stackoverflow_0067135053_django_python_pythonanywhere.txt |
Q:
How to run Xvfb with xvfbwrapper on AWS EC2 to records screen of selenium headless session
I have a problem running a Selenium session with Xvfb to record a video file of the session. Below are my session and wrapper:
from selenium import webdriver
from xvfbwrapper import Xvfb
import unittest
class TestPages(unittest.TestCase):
def setUp(self):
self.xvfb = Xvfb(width=1280, height=720)
self.addCleanup(self.xvfb.stop)
self.xvfb.start()
self.browser = webdriver.Chrome()
self.addCleanup(self.browser.quit)
def testUbuntuHomepage(self):
self.browser.get('http://www.ubuntu.com')
self.assertIn('Ubuntu', self.browser.title)
def testGoogleHomepage(self):
self.browser.get('http://www.google.com')
self.assertIn('Google', self.browser.title)
if __name__ == '__main__':
unittest.main()
The session exits with no errors, but the problem is that it doesn't create any kind of file in the main directory or the /temp directory.
Where are the files?
A:
After very long research into multiple ways to run this on AWS, I came to a few conclusions:
Running on AWS Linux is way too complicated; I definitely recommend an AWS Ubuntu EC2 instance for this. This is due to the lack of information and some libraries not being compatible (scrot, issues with DISPLAY).
The problem at some point is FFmpeg eating all the memory, which causes SSH disconnects and other issues.
Here is my final solution for recording a video of what is happening inside Selenium on AWS EC2 Ubuntu.
from selenium import webdriver
import time
import os
import subprocess
from pyvirtualdisplay.smartdisplay import SmartDisplay
with SmartDisplay() as disp:
print(os.environ.get("DISPLAY"))
print ("before webdriver")
driver = webdriver.Firefox()
subprocess.call(['ffmpeg -y -video_size 1024x768 -rtbufsize 500M -framerate 24 -f x11grab -i :0.0 -preset ultrafast output.mp4 &'], shell=True)
driver.get("YOUR WEBSITE URL")
# YOUR SELENIUM CODE HERE
subprocess.call(["echo q"], shell=True)
disp.stop()
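One caveat worth noting (my own observation, not part of the original answer): subprocess.call(["echo q"], shell=True) only prints the letter q to the console; it never reaches ffmpeg's stdin, so it does not actually stop the recording. A hedged variant that keeps a handle on the ffmpeg process and asks it to stop explicitly (this assumes ffmpeg is reading commands from stdin, which is its default behaviour):
import shlex
import subprocess

# start ffmpeg without shell=True and keep a handle so we can stop it later
cmd = ("ffmpeg -y -video_size 1024x768 -rtbufsize 500M -framerate 24 "
       "-f x11grab -i :0.0 -preset ultrafast output.mp4")
recorder = subprocess.Popen(shlex.split(cmd), stdin=subprocess.PIPE)

# ... run the Selenium steps here ...

recorder.communicate(input=b"q")  # send 'q' to ffmpeg so it finalizes output.mp4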
| How to run Xvfb with xvfbwrapper on AWS EC2 to records screen of selenium headless session | I have a problem running selenium sesion with Xvfb to record video file with sesion. Below is my session and wrapper
from selenium import webdriver
from xvfbwrapper import Xvfb
class TestPages(unittest.TestCase):
def setUp(self):
self.xvfb = Xvfb(width=1280, height=720)
self.addCleanup(self.xvfb.stop)
self.xvfb.start()
self.browser = webdriver.Chrome()
self.addCleanup(self.browser.quit)
def testUbuntuHomepage(self):
self.browser.get('http://www.ubuntu.com')
self.assertIn('Ubuntu', self.browser.title)
def testGoogleHomepage(self):
self.browser.get('http://www.google.com')
self.assertIn('Google', self.browser.title)
if __name__ == '__main__':
unittest.main()
Sesions exit with no errors and
The problem is dosent create any kind of files in main directory or /temp directory
where are the files ?
| [
"After very long research of multiple way to run this on AWS came to few concussions:\n\nRunning on AWS Linux is way too complicated, definitely recommend AWS Ubuntu EC2 for this. This is due to lack of information and some libraries not compatible (scrot, issues with DISPLAY).\nThe problem at some point is FFMPEG killing all memory and causing disconnecting ssh and other issues.\n\nHere is my final solution, how to record video of what is happening inside Selenium on AWS EC Ubuntu.\n from selenium import webdriver\n import time\n import os \n import subprocess\n \n from pyvirtualdisplay.smartdisplay import SmartDisplay\n with SmartDisplay() as disp:\n print(os.environ.get(\"DISPLAY\"))\n print (\"before webdriver\")\n driver = webdriver.Firefox()\n subprocess.call(['ffmpeg -y -video_size 1024x768 -rtbufsize 500M -framerate 24 -f x11grab -i :0.0 -preset ultrafast output.mp4 &'], shell=True)\n driver.get(\"YOUR WEBSITE URL\")\n # YOUR SELENIUM CODE HERE \n subprocess.call([\"echo q\"], shell=True)\n disp.stop()\n\n"
] | [
0
] | [] | [] | [
"amazon_ec2",
"python",
"selenium",
"xvfb"
] | stackoverflow_0074574411_amazon_ec2_python_selenium_xvfb.txt |
Q:
Is there a way to plot a line that is a consistent length using matplotlib/cartopy?
I am using matplotlib and cartopy to draw lines overlaid on maps in python. As of right now, I am just identifying the lat/lon of two points and plotting a line between them. Since I am taking cross sections across these lines, I would like to find a way to make the line the same length (say 300km long) no matter where I place it on the map. Is this possible without just using trial and error and setting the points until they are the desired length?
lat1, lat2, lon1, lon2 = [34.5, 36, -100, -97]
x, y = [lon1, lon2], [lat1, lat2]
ax1.plot(x, y, color="black", marker="o", zorder=3, transform = ccrs.PlateCarree(), linewidth = 2.5)
Here are the relevant parts of the code that I am using now. This works, but I am looking for a way to hold the line length constant rather than changing the values for the endpoints at "lat1, lat2, lon1, lon2." I envision setting a line length, a mid-point (lat/lon), and an angle that would pivot around that point. I don't know if that's even possible, but that's how I'd imagine it'd have to work!
Example of a line that the cross section would be through
A:
The Geod class from pyproj is very convenient for these types of operations. The example below moves in a line from a given lat/lon based on an azimuth and distance (in meters).
Since you mention starting with a mid-point, you can do this twice for each direction to get the full line.
In the example below, I just retrieve the end points and use Cartopy (ccrs.Geodetic()) to do the interpolation to a great circle. But you can also do this yourself with the same Geod object (see the npts method), and sample a given number of points along the line. The latter might be convenient if you need those coordinates to extract data, for example.
import matplotlib.pyplot as plt
import cartopy.crs as ccrs
from pyproj import Geod
import numpy as np
# start with a random point
np.random.seed(0)
lon = np.random.randint(-180,180)
lat = np.random.randint(-90,90)
# and a random direction and distance
half_width = (1 + np.random.rand()) * 5000000 # meters
azimuth = np.random.randint(0,360)
# calculate the end points
geod = Geod(ellps="WGS84")
lon_end1, lat_end1, azi_rev1 = geod.fwd(lon, lat, azimuth, half_width)
lon_end2, lat_end2, azi_rev2 = geod.fwd(lon, lat, azimuth-180, half_width)
# visualize the result
fig, ax = plt.subplots(
figsize=(8,4), dpi=86, layout="compressed", facecolor="w",
subplot_kw=dict(projection=ccrs.PlateCarree()),
)
ax.set_title(f"{lon=}, {lat=}, {azimuth=}, {half_width=:1.1f}m")
ax.plot(
[lon_end1, lon, lon_end2],
[lat_end1, lat, lat_end2],
"ro-",
transform=ccrs.Geodetic(),
)
ax.coastlines()
ax.set_global()
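If you also need sampled coordinates along the transect (for example to extract cross-section data at regular intervals), the same Geod object offers the npts method. A minimal sketch reusing the variables from the example above (the sample count is an arbitrary choice):
# 50 intermediate points along the geodesic between the two end points
lonlats = geod.npts(lon_end1, lat_end1, lon_end2, lat_end2, 50)
cross_lons, cross_lats = zip(*lonlats)  # separate lon/lat sequences for plotting or sampling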
A:
The easiest way to do this would probably be to re-project your data into an equidistant projection, such as azimuthal equidistant, then buffer the point by 300 km. You could do this as follows:
import cartopy.crs as ccrs
import geopandas as gpd
point = gpd.GeoDataFrame(
geometry=gpd.points_from_xy([-100], [34.5], crs="epsg:4326")
)
crs = ccrs.AzimuthalEquidistant(-100, 34.5)
circle = point.to_crs(crs).buffer(300000).boundary.to_crs("epsg:4326")
This creates an ellipse of points in lat/lon space (a circle in actual distance):
In [17]: circle.iloc[0]
Out[17]: <shapely.geometry.linestring.LineString at 0x18c3db7c0>
In [18]: circle.iloc[0].xy
Out[18]:
(array('d', [-96.73458302693649, -96.76051210175493, -96.81721890848735, -96.90389145413285, -97.0194601113924, -97.16261553916054, -97.33182721184983, -97.52536229026433, -97.74130463130324, -97.97757379088794, -98.23194392302969, -98.5020625175967, -98.78546895046141, -99.07961284313612, -99.38187224585855, -99.68957166961148, -100.0, -100.31042833038852, -100.61812775414145, -100.92038715686388, -101.21453104953859, -101.4979374824033, -101.76805607697031, -102.02242620911204, -102.25869536869676, -102.47463770973567, -102.66817278815017, -102.83738446083946, -102.98053988860761, -103.09610854586715, -103.18278109151265, -103.23948789824507, -103.26541697306351, -103.26003093177158, -103.22308261845816, -103.15462889147989, -103.05504203609833, -102.92501821713451, -102.7655823598689, -102.57808885099398, -102.3642174900262, -102.12596419991539, -101.86562612586265, -101.58578091250277, -101.28926014666801, -100.97911717692438, -100.65858975917035, -100.33105821410338, -100.0, -99.66894178589662, -99.34141024082963, -99.02088282307564, -98.710739853332, -98.41421908749726, -98.13437387413735, -97.87403580008461, -97.63578250997378, -97.42191114900605, -97.23441764013111, -97.07498178286552, -96.94495796390169, -96.84537110852011, -96.77691738154184, -96.73996906822842, -96.73458302693649]),
array('d', [34.45635448578617, 34.191923718769814, 33.930839345631036, 33.67556892797355, 33.42850443127362, 33.191942126948454, 32.96806390193457, 32.75892001047773, 32.566413289704464, 32.39228485200862, 32.23810126231687, 32.105243205881486, 31.99489565146612, 31.90803951485474, 31.845444827911535, 31.80766541851577, 31.795035106337213, 31.80766541851577, 31.845444827911535, 31.90803951485474, 31.99489565146612, 32.105243205881486, 32.23810126231687, 32.392284852008615, 32.566413289704464, 32.75892001047773, 32.96806390193457, 33.191942126948454, 33.42850443127362, 33.67556892797355, 33.93083934563104, 34.191923718769814, 34.45635448578617, 34.72160994100843, 34.985136962504626, 35.244374905489536, 35.49678051263328, 35.739853647811024, 35.97116361010222, 36.18837573214065, 36.38927791396905, 36.57180669376817, 36.73407241410168, 36.874383010755274, 36.99126593481258, 37.08348772070849, 37.15007073604399, 37.19030669398129, 37.20376657546522, 37.19030669398129, 37.15007073604399, 37.08348772070849, 36.99126593481258, 36.87438301075528, 36.73407241410169, 36.57180669376818, 36.38927791396906, 36.18837573214065, 35.97116361010222, 35.739853647811024, 35.496780512633286, 35.244374905489536, 34.98513696250464, 34.72160994100843, 34.45635448578617]))
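If you specifically want a 300 km line through the mid-point at a chosen azimuth, rather than the full circle, here is a hedged sketch in the same projected-CRS spirit, reusing the crs object from the snippet above (the azimuth is an arbitrary example; coordinates in the azimuthal equidistant projection centred on the mid-point are in metres):
import numpy as np
from shapely.geometry import LineString

azimuth = np.deg2rad(30)                   # example bearing, clockwise from north
half = 150_000                             # 150 km either side of the mid-point
dx, dy = half * np.sin(azimuth), half * np.cos(azimuth)
# straight segment through the projection origin, then back to lat/lon end points
line = gpd.GeoSeries([LineString([(-dx, -dy), (dx, dy)])], crs=crs).to_crs("epsg:4326")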
| Is there a way to plot a line that is a consistent length using matplotlib/cartopy? | I am using matplotlib and cartopy to draw lines overlaid on maps in python. As of right now, I am just identifying the lat/lon of two points and plotting a line between them. Since I am taking cross sections across these lines, I would like to find a way to make the line the same length (say 300km long) no matter where I place it on the map. Is this possible without just using trial and error and setting the points until they are the desired length?
lat1, lat2, lon1, lon2 = [34.5, 36, -100, -97]
x, y = [lon1, lon2], [lat1, lat2]
ax1.plot(x, y, color="black", marker="o", zorder=3, transform = ccrs.PlateCarree(), linewidth = 2.5)
Here are the relevant parts of the code that I am using now. This works, but I am looking for a way to hold the line length constant rather than changing the values for the endpoints at "lat1, lat2, lon1, lon2." I envision setting a line length, a mid-point (lat/lon), and an angle that would pivot around that point. I don't know if that's even possible, but that's how I'd imagine it'd have to work!
Example of a line that the cross section would be through
| [
"The Geod class from pyproj is very convenient for these types of operations. The example below moves in a line from a given lat/lon based on an azimuth and distance (in meters).\nSince you mention starting with a mid-point, you can do this twice for each direction to get the full line.\nIn the example below, I just retrieve the end points and use Cartopy (ccrs.Geodetic()) to do the interpolation to a great circle. But you can also do this yourself with the same Geod object (see the npts method), and sample a given amount of points a long the line. The latter might be convenient if you need those coordinates to extract data for example.\nimport matplotlib.pyplot as plt\nimport cartopy.crs as ccrs\nfrom pyproj import Geod\nimport numpy as np\n\n# start with a random point\nnp.random.seed(0)\nlon = np.random.randint(-180,180)\nlat = np.random.randint(-90,90)\n\n# and a random direction and distance\nhalf_width = (1 + np.random.rand()) * 5000000 # meters\nazimuth = np.random.randint(0,360)\n\n# calculate the end points\ngeod = Geod(ellps=\"WGS84\")\nlon_end1, lat_end1, azi_rev1 = geod.fwd(lon, lat, azimuth, half_width)\nlon_end2, lat_end2, azi_rev2 = geod.fwd(lon, lat, azimuth-180, half_width)\n\n# visualize the result\nfig, ax = plt.subplots(\n figsize=(8,4), dpi=86, layout=\"compressed\", facecolor=\"w\", \n subplot_kw=dict(projection=ccrs.PlateCarree()),\n)\n\nax.set_title(f\"{lon=}, {lat=}, {azimuth=}, {half_width=:1.1f}m\")\nax.plot(\n [lon_end1, lon, lon_end2], \n [lat_end1, lat, lat_end2], \n \"ro-\",\n transform=ccrs.Geodetic(),\n)\nax.coastlines()\nax.set_global()\n\n\n",
"the easiest way to do this would probably be to re-project your data into an equidistant projection, such as azimuthal equidistant, then buffer the point by 300km. You could do this\nimport cartopy.crs as ccrs\nimport geopandas as gpd\n\npoint = gpd.GeoDataFrame(\n geometry=gpd.points_from_xy([-100], [34.5], crs=\"epsg:4326\")\n)\n\ncrs = ccrs.AzimuthalEquidistant(-100, 34.5)\n\ncircle = point.to_crs(crs).buffer(300000).boundary.to_crs(\"epsg:4326\")\n\nThis creates an ellipsoid of points in lat/lon space (a circle in actual distance):\nIn [17]: circle.iloc[0]\nOut[17]: <shapely.geometry.linestring.LineString at 0x18c3db7c0>\n\nIn [18]: circle.iloc[0].xy\nOut[18]:\n(array('d', [-96.73458302693649, -96.76051210175493, -96.81721890848735, -96.90389145413285, -97.0194601113924, -97.16261553916054, -97.33182721184983, -97.52536229026433, -97.74130463130324, -97.97757379088794, -98.23194392302969, -98.5020625175967, -98.78546895046141, -99.07961284313612, -99.38187224585855, -99.68957166961148, -100.0, -100.31042833038852, -100.61812775414145, -100.92038715686388, -101.21453104953859, -101.4979374824033, -101.76805607697031, -102.02242620911204, -102.25869536869676, -102.47463770973567, -102.66817278815017, -102.83738446083946, -102.98053988860761, -103.09610854586715, -103.18278109151265, -103.23948789824507, -103.26541697306351, -103.26003093177158, -103.22308261845816, -103.15462889147989, -103.05504203609833, -102.92501821713451, -102.7655823598689, -102.57808885099398, -102.3642174900262, -102.12596419991539, -101.86562612586265, -101.58578091250277, -101.28926014666801, -100.97911717692438, -100.65858975917035, -100.33105821410338, -100.0, -99.66894178589662, -99.34141024082963, -99.02088282307564, -98.710739853332, -98.41421908749726, -98.13437387413735, -97.87403580008461, -97.63578250997378, -97.42191114900605, -97.23441764013111, -97.07498178286552, -96.94495796390169, -96.84537110852011, -96.77691738154184, -96.73996906822842, -96.73458302693649]),\n array('d', [34.45635448578617, 34.191923718769814, 33.930839345631036, 33.67556892797355, 33.42850443127362, 33.191942126948454, 32.96806390193457, 32.75892001047773, 32.566413289704464, 32.39228485200862, 32.23810126231687, 32.105243205881486, 31.99489565146612, 31.90803951485474, 31.845444827911535, 31.80766541851577, 31.795035106337213, 31.80766541851577, 31.845444827911535, 31.90803951485474, 31.99489565146612, 32.105243205881486, 32.23810126231687, 32.392284852008615, 32.566413289704464, 32.75892001047773, 32.96806390193457, 33.191942126948454, 33.42850443127362, 33.67556892797355, 33.93083934563104, 34.191923718769814, 34.45635448578617, 34.72160994100843, 34.985136962504626, 35.244374905489536, 35.49678051263328, 35.739853647811024, 35.97116361010222, 36.18837573214065, 36.38927791396905, 36.57180669376817, 36.73407241410168, 36.874383010755274, 36.99126593481258, 37.08348772070849, 37.15007073604399, 37.19030669398129, 37.20376657546522, 37.19030669398129, 37.15007073604399, 37.08348772070849, 36.99126593481258, 36.87438301075528, 36.73407241410169, 36.57180669376818, 36.38927791396906, 36.18837573214065, 35.97116361010222, 35.739853647811024, 35.496780512633286, 35.244374905489536, 34.98513696250464, 34.72160994100843, 34.45635448578617]))\n\n"
] | [
1,
0
] | [] | [] | [
"cartopy",
"line",
"matplotlib",
"python"
] | stackoverflow_0074636590_cartopy_line_matplotlib_python.txt |
Q:
how to make 1111 into 1 in python
I want to make a function that turns '1111' into 1 and '0000' into 0.
for the example:
Input:
['1111', '1111', '0000', '1111', '1111', '0000', '1111', '0000', '0000', '1111', '0000', '0000', '0000', '0000', '0000', '0000', '0000', '1111', '0000', '0000', '1111', '1111', '0000', '0000', '0000', '0000', '1111', '0000', '1111', '1111', '0000', '0000']
Desired output:
11011010010000000100110000101100
But I don't know how to write it, or what the algorithm should be. Can you help me?
My attempt so far:
def bagiskalar(biner):
print(biner)
biner = str(biner)
n = 4
hasil = []
potong = [biner[i:i+n] for i in range(0, len(biner), n)]
for a in potong:
hasil = potong.append(a)
return hasil
A:
You can easily use list comprehension and the join() method:
lst = ['1111', '1111', '0000', '1111', '1111', '0000', '1111', '0000', '0000', '1111', '0000', '0000', '0000', '0000', '0000', '0000', '0000', '1111', '0000', '0000', '1111', '1111', '0000', '0000', '0000', '0000', '1111', '0000', '1111', '1111', '0000', '0000']
s = ''.join(['1' if x == '1111' else '0' for x in lst ])
print(s)
# prints 11011010010000000100110000101100
A:
Assuming that your input is well-formatted, i.e. you don't expect other values than 1111 or 0000, you can optimize by taking only the first character.
output = "".join([x[0] for x in input])
A:
Another approach is to have pre-populated dictionary.
mapper = {'1111': '1', '0000': '0'} # can be extended for other maps if needed
input_data = ['1111', '1111', '0000', '1111', '1111', '0000', '1111', '0000', '0000', '1111', '0000', '0000', '0000', '0000', '0000', '0000', '0000', '1111', '0000', '0000', '1111', '1111', '0000', '0000', '0000', '0000', '1111', '0000', '1111', '1111', '0000', '0000']
output_data = ''.join([mapper.get(item) for item in input_data])
A:
This looks like a school assignment, so I'll make a simple example. I assume you really want to append to the result only for exact '0000' and '1111' values, and ignore other strings.
def bagiskalar(biner):
result = ""
for b in biner:
if b == '0000' or b == '1111':
result = result + b[0]
return result
It just goes through your list of strings and checks whether each one is either '0000' or '1111'. If it is, it adds the first character ('0' or '1') to the result string.
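A quick usage check (hypothetical, not part of the original answer), using a shortened version of the question's input:
biner = ['1111', '1111', '0000', '1111']
print(bagiskalar(biner))  # -> '1101'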
| how to make 1111 into 1 in python | i want to make function to make from 1111 into 1 and 0000 into 0.
for the example:
Input:
['1111', '1111', '0000', '1111', '1111', '0000', '1111', '0000', '0000', '1111', '0000', '0000', '0000', '0000', '0000', '0000', '0000', '1111', '0000', '0000', '1111', '1111', '0000', '0000', '0000', '0000', '1111', '0000', '1111', '1111', '0000', '0000']
Desired output:
11011010010000000100110000101100
But i don't know how to make it or the algorithm. Can you help me?
My attempt so far:
def bagiskalar(biner):
print(biner)
biner = str(biner)
n = 4
hasil = []
potong = [biner[i:i+n] for i in range(0, len(biner), n)]
for a in potong:
hasil = potong.append(a)
return hasil
| [
"You can easily use list comprehension and the join() method:\nlst = ['1111', '1111', '0000', '1111', '1111', '0000', '1111', '0000', '0000', '1111', '0000', '0000', '0000', '0000', '0000', '0000', '0000', '1111', '0000', '0000', '1111', '1111', '0000', '0000', '0000', '0000', '1111', '0000', '1111', '1111', '0000', '0000']\ns = ''.join(['1' if x == '1111' else '0' for x in lst ])\nprint(s)\n# prints 11011010010000000100110000101100\n\n",
"Assuming that your input is well-formatted, i.e. you don't expect other values than 1111 or 0000, you can optimize by taking only the first character.\noutput = \"\".join([x[0] for x in input])\n\n",
"Another approach is to have pre-populated dictionary.\nmapper = {'1111': '1', '0000': '0'} # can be extended for other maps if needed\n\ninput_data = ['1111', '1111', '0000', '1111', '1111', '0000', '1111', '0000', '0000', '1111', '0000', '0000', '0000', '0000', '0000', '0000', '0000', '1111', '0000', '0000', '1111', '1111', '0000', '0000', '0000', '0000', '1111', '0000', '1111', '1111', '0000', '0000']\n\noutput_data = ''.join([mapper.get(item) for item in input_data]) \n\n",
"This looks like school assignment, so I'll make simple example. I assume, you really want to append result only from exact '0000' and '1111', and ignore other strings.\ndef bagiskalar(biner):\n result = \"\"\n for b in biner:\n if b == '0000' or b == '1111':\n result = result + b[0]\n return result\n\nIt just goes through your list of strings, checks each one of the that is it either '0000' or '1111'. And if it is, then add the first character ('0' or '1') to the result string.\n"
] | [
2,
1,
0,
0
] | [] | [] | [
"algorithm",
"binary",
"python",
"scalar"
] | stackoverflow_0074637992_algorithm_binary_python_scalar.txt |
Q:
Tkinter Enter widget is not taking input into floating number
I'm using an API to convert one unit to another. I'm trying to get input from the user via an Entry widget as a floating-point number. This is my code.
import requests
import tkinter as tk
root = tk.Tk()
def pressMe():
out_entry.insert(1, f'{result}')
in_entry = tk.Entry(root)
in_entry.pack()
out_entry = tk.Entry(root)
out_entry.pack()
but = tk.Button(root, command=pressMe)
but.pack()
val = float(in_entry.get())
url1 = "https://measurement-unit-converter.p.rapidapi.com/length/units"
querystring = {"value":val,"from":"km","to":"m"}
headers = {
"X-RapidAPI-Key": "0760466864***********************4jsnb3eaeb63d084",
"X-RapidAPI-Host": "measurement-unit-converter.p.rapidapi.com"
}
response = requests.request("GET", url1, headers=headers, params=querystring)
data = response.json()
result = data['result']
root.mainloop()
Here I'm making the first entry widget, in_entry, to take the input, and a second one, out_entry, to show the output; a button is made to trigger the conversion. But I'm getting this error.
Traceback (most recent call last):
File "d:\5ht project\check.py", line 290, in <module>
val = float(in_entry.get())
ValueError: could not convert string to float: ''
I know this problem occurs because I'm trying to convert an empty string to a float, which is not possible, but I haven't been able to find a solution. Is there any way I can take input from the user via in_entry, use that value in the API call to get the converted value, and then insert the converted value into the second entry widget, out_entry? Hope you get my question.
This is the link to the API I'm using.
https://rapidapi.com/me-Egq5JBzo4/api/measurement-unit-converter/
A:
It is as jasonharper said: "You are calling .get() on your Entry a millisecond or so after it was created - it's not physically possible for the user to have typed in anything yet!"
Here is what you can do:
import requests
import tkinter as tk
root = tk.Tk()
in_entry = tk.Entry(root)
in_entry.pack()
out_entry = tk.Entry(root)
out_entry.pack()
url1 = "https://measurement-unit-converter.p.rapidapi.com/length/units" #this is a constant, isn't it?
def pressMe(): # in_entry must be defined above, because this function reads its value
val = float(in_entry.get()) # you get the value here to make sure the user has clicked the button (then you know there is a number in the entry)
querystring = {"value":val,"from":"km","to":"m"}
headers = {
"X-RapidAPI-Key": "0760466864***********************4jsnb3eaeb63d084",
"X-RapidAPI-Host": "measurement-unit-converter.p.rapidapi.com"
    }  # maybe also a constant? you could also place this piece of code at the top of your program
response = requests.request("GET", url1, headers=headers, params=querystring)
data = response.json()
    result = data['result']  # here I get a KeyError because I'm not subscribed
out_entry.insert(1, f'{result}')
but = tk.Button(root, command=pressMe)  # the button is created down here because its command is pressMe, so it must come after that function
but.pack()
root.mainloop()
Beyond that I can't help you further, because the only thing I get from your URL is:
{'message': 'You are not subscribed to this API.'}
| Tkinter Enter widget is not taking input into floating number | I'm using an API to convert one unit to another. I'm trying to get input from user by "Entry Widget" as floating number. This is my code.
import requests
import tkinter as tk
root = tk.Tk()
def pressMe():
out_entry.insert(1, f'{result}')
in_entry = tk.Entry(root)
in_entry.pack()
out_entry = tk.Entry(root)
out_entry.pack()
but = tk.Button(root, command=pressMe)
but.pack()
val = float(in_entry.get())
url1 = "https://measurement-unit-converter.p.rapidapi.com/length/units"
querystring = {"value":val,"from":"km","to":"m"}
headers = {
"X-RapidAPI-Key": "0760466864***********************4jsnb3eaeb63d084",
"X-RapidAPI-Host": "measurement-unit-converter.p.rapidapi.com"
}
response = requests.request("GET", url1, headers=headers, params=querystring)
data = response.json()
result = data['result']
root.mainloop()
In here I'm making first entry widget in_entry by which I can take input and the second one out_entry to show the output, one button is made to do so. But I'm getting this error.
Traceback (most recent call last):
File "d:\5ht project\check.py", line 290, in <module>
val = float(in_entry.get())
ValueError: could not convert string to float: ''
I know that this problem is because I'm trying to convert the empty string into float which is not possible. But I'm not able to find any solution to this. Is there any way by which I can have input from user by in_entry and after that I can use that value in API which can return me the converted value, and then I can insert this converted value to the second entry widget out_entry. Hope you get my question.
This is the link of API I'm using.
https://rapidapi.com/me-Egq5JBzo4/api/measurement-unit-converter/
| [
"it is as jasonharper said: \"You are calling .get() on your Entry a millisecond or so after it was created - it's not physically possible for the user to have typed in anything yet!\"\nwhat you can do:\nimport requests\nimport tkinter as tk\nroot = tk.Tk()\n\nin_entry = tk.Entry(root)\nin_entry.pack()\n\nout_entry = tk.Entry(root)\nout_entry.pack()\n\nurl1 = \"https://measurement-unit-converter.p.rapidapi.com/length/units\" #this is a constant, isn't it?\n\ndef pressMe(): # the in entry must be above because this function it will get the value\n val = float(in_entry.get()) # you get the value here to make sure the user has clicked the button (then you know there is a number in the entry)\n querystring = {\"value\":val,\"from\":\"km\",\"to\":\"m\"}\n headers = {\n \"X-RapidAPI-Key\": \"0760466864***********************4jsnb3eaeb63d084\",\n \"X-RapidAPI-Host\": \"measurement-unit-converter.p.rapidapi.com\"\n } # maybe also a constant? you can this piece of code also place at the top of your program\n\n response = requests.request(\"GET\", url1, headers=headers, params=querystring)\n\n data = response.json()\n result = data['result'] #here i get an keyerror because i'm not subscribed\n \n out_entry.insert(1, f'{result}')\n\nbut = tk.Button(root, command=pressMe) # the button is created here, because its command is pressme. for that, it's really important that it is below pressme\nbut.pack()\n\n\nroot.mainloop()\n\nthen i can't help you further because the only thing i get from your url is:\n{'message': 'You are not subscribed to this API.'}\n\n"
] | [
0
] | [] | [] | [
"api",
"python",
"tkinter",
"tkinter_entry"
] | stackoverflow_0074564400_api_python_tkinter_tkinter_entry.txt |
Q:
Syntax for interpolating planes (Python)
I have a function D(x,y,z) for which I want to evaluate (via interpolation) planes along the x, y, and z axes, i.e. I want the output of my interpolation to be a 2D plane with one of the coordinates held fixed, D(x,y,0) for example.
I have created an interpolating function via scipy using some given values of D, D_values, for my input values of x,y,z.
from scipy.interpolate import RegularGridInterpolator as rgi
D_interp=rgi((x_positions,y_positions,z_positions), D_values)
Now I can get any point interpolated by just calling
D_interpolated=D_interp(xi,yi,zi)
I understand how I can evaluate individual points from this, but how would I interpolate a plane? For example, in my case, D_values is of size 345x155x303 and I want to interpolate 345x155 planes all along the z axis corresponding to the x and y input values, at z=0, z=1, z=2, etc.
My attempt at a solution is to feed in the x_positions, y_positions vectors individually into D_interp keeping z fixed, but this just gets me a set of D values evaluated at specific positions, rather than organized into a grid like the planar output I'd actually like. Syntax doesn't allow me to call something like
Plane=D_interp(x_positions,y_positions,0)
so I was not quite sure about the syntax of calling this function to have planar output.
any help appreciated
Thanks,
A:
The typical approach to combining multiple arrays with different sizes corresponding to different dimensions in numpy and scipy is to use broadcasting. Here is a sample problem to illustrate the application:
x_positions = np.linspace(0, 10, 101)
y_positions = np.linspace(-10, 10, 201)
z_positions = np.linspace(-5, 5, 101)
D_values = np.sin(2 * np.pi * x_positions[:, None, None] * y_positions[:, None] / 100) + np.cos(2 * np.pi * y_positions[:, None] * z_positions / 50)
This is similar to the D_values array you describe in your problem, where each of the bins in the different directions correspond to the *_positions arrays. I used broadcasting to turn x_positions into a (101, 1, 1)-shaped array, y_positions into a (201, 1)-shaped array and left z_positions as a (101,)-shaped array. The result is that D_values is a (101, 201, 101)-shaped array. The reshaped versions of the input arrays did not copy any data!
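As a quick sanity check (not part of the original answer), the broadcast shapes can be inspected directly:
print(x_positions[:, None, None].shape)  # (101, 1, 1)
print(y_positions[:, None].shape)        # (201, 1)
print(z_positions.shape)                 # (101,)
print(D_values.shape)                    # (101, 201, 101), the broadcast of the three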
You can call your interpolator using the same idea that I used to create a sample D_values.
D_interp = rgi((x_positions, y_positions, z_positions), D_values)
Let's say you want to fix z = 0. All that scipy requires is that the inputs broadcast together. Scalars broadcast with everything, so you can just do
x_interp = np.linspace(0.05, 0.95, 200)
y_interp = np.linspace(-9.95, 9.95, 400)
z_interp = 0
D_xy_interp = D_interp((x_interp[:, None], y_interp, z_interp))
The advantage to doing this over creating a mesh is that you don't have to copy any data around and create extra 200x400 input arrays. Another advantage is that you have better control over the output. In this case, D_xy_interp has shape (len(x_interp), len(y_interp)). That's because in general, the shape of the output will be the broadcasted shape of the input. You can see that when we created D_values, and you can see it here. Since 0 is a scalar, it does not contribute to the shape. But I could also make a (400, 200) shaped array instead:
D_interp((x_interp, y_interp[:, None], z_interp))
Or even a (100, 4, 100, 2) shaped array:
D_interp((x_interp.reshape(-1, 2), y_interp.reshape(-1, 4, 1, 1), z_interp))
In either case, let's verify that the interpolator did its job. We can compare the interpolated values to a much finer sampling of the function that created D_values:
D_xy_values = np.sin(2 * np.pi * x_interp[:, None] * y_interp / 100) + np.cos(2 * np.pi * y_interp * z_interp / 50)
fig, ax = plt.subplots(subplot_kw={'projection': '3d'})
ax.plot_surface(x_interp[:, None], y_interp, D_xy_interp, label='Interp')
ax.plot_surface(x_interp[:, None], y_interp, D_xy_values, label='Values')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
At the moment it doesn't look like you can add legends to 3D plots.
The two plots are virtually indistinguishable. With the default color cycler, you will see the surface chance from blue to orange as you rotate it. Here is an analytical verification:
>>> np.sqrt(np.mean((D_xy_values - D_xy_interp)**2))
4.707625623185639e-05
A:
To evaluate a plane using the RegularGridInterpolator class from SciPy, you can use the __call__ method of the interpolator object and specify the values for the fixed dimensions as keyword arguments.
For example, to interpolate a plane along the z-axis at z=0, you can use the following code:
from scipy.interpolate import RegularGridInterpolator as rgi
# Define the input values for the x, y, and z dimensions
x_positions = ...
y_positions = ...
z_positions = ...
# Define the values of the function D at each point in the grid
D_values = ...
# Create the interpolator object
D_interp = rgi((x_positions, y_positions, z_positions), D_values)
# Interpolate the plane at z=0
plane = D_interp(z=0)
The plane variable will now be a 2D array containing the interpolated values of the function D on the x-y plane at z=0.
You can repeat this process for each value of z that you want to interpolate. For example, to interpolate all the planes along the z-axis, you could use a loop as follows:
from scipy.interpolate import RegularGridInterpolator as rgi
# Define the input values for the x, y, and z dimensions
x_positions = ...
y_positions = ...
z_positions = ...
# Define the values of the function D at each point in the grid
D_values = ...
# Create the interpolator object
D_interp = rgi((x_positions, y_positions, z_positions), D_values)
# Interpolate all the planes along the z-axis
planes = []
for z in z_positions:
plane = D_interp(z=z)
planes.append(plane)
The planes variable will now be a list of 2D arrays containing the interpolated values of the function D on the x-y plane for each value of z in z_positions.
A:
You can generate a meshgrid of each plane, ravel the coordinates out to a list of points, interpolate at each point and finally reshape the results back to a list of planes:
x_positions = np.arange(0, 345)
y_positions = np.arange(0, 155)
z_positions = np.arange(0, 303)
z_mesh, x_mesh, y_mesh = np.meshgrid(z_positions, x_positions, y_positions, indexing='ij')
pts = np.vstack([x_mesh.ravel(), y_mesh.ravel(), z_mesh.ravel()]).transpose()
plane = np.reshape(interp(pts), x_mesh.shape)
In this example I have arranged the z_positions at the first position of the mesh, so you get an array of planes.
| Syntax for interpolating planes (Python) | I have a function D(x,y,z) in which I want to evaluate (via interpolation) planes within the z, y, and z axis. i.e. I want the output of my interpolations to be a 2D plane holding one of the values fixed, D(x,y,0) for example.
I have created an interpolating function via scipy using some given values of D, D_values, for my input values of x,y,z.
from scipy.interpolate import RegularGridInterpolator as rgi
D_interp=rgi((x_positions,y_positions,z_positions), D_values)
Now I can get any point interpolated by just calling
D_interpolated=D_interp(xi,yi,zi)
I understand how I can evaluate individual points from this, but how would I interpolate a plane? For example, in my case, D_values is of size 345x155x303 and I want to interpolate 345x155 planes all along the z axis corresponding to the x and y input values, at z=0, z=1, z=2, etc.
My attempt at a solution is to feed in the x_positions, y_positions vectors individually into D_interp keeping z fixed, but this just gets me a set of D values evaluated at specific positions, rather than organized into a grid like the planar output I'd actually like. Syntax doesn't allow me to call something like
Plane=D_interp(x_positions,y_positions,0)
so I was not quite sure about the syntax of calling this function to have planar output.
any help appreciated
Thanks,
| [
"The typical approach to combining multiple arrays with different sizes corresponding to different dimensions in numpy and scipy is to use broadcasting. Here is a sample problem to illustrate the application:\nx_positions = np.linspace(0, 10, 101)\ny_positions = np.linspace(-10, 10, 201)\nz_positions = np.linspace(-5, 5, 101)\nD_values = np.sin(2 * np.pi * x_positions[:, None, None] * y_positions[:, None] / 100) + np.cos(2 * np.pi * y_positions[:, None] * z_positions / 50)\n\nThis is similar to the D_values array you describe in your problem, where each of the bins in the different directions correspond to the *_positions arrays. I used broadcasting to turn x_positions into a (101, 1, 1)-shaped array, y_positions into a (201, 1)-shaped array and left z_positions as a (101,)-shaped array. The result is that D_values is a (101, 201, 101)-shaped array. The reshaped versions of the input arrays did not copy any data!\nYou can call your interpolator using the same idea that I used to create a sample D_values.\nD_interp = rgi((x_positions, y_positions, z_positions), D_values)\n\nLet's say you want to fix z = 0. All that scipy requires is that the inputs broadcast together. Scalars broadcast with everything, so you can just do\nx_interp = np.linspace(0.05, 0.95, 200)\ny_interp = np.linspace(-9.95, 9.95, 400)\nz_interp = 0\nD_xy_interp = D_interp((x_interp[:, None], y_interp, z_interp))\n\nThe advantage to doing this over creating a mesh is that you don't have to copy any data around and create extra 200x400 input arrays. Another advantage is that you have better control over the output. In this case, D_xy_interp has shape (len(x_interp), len(y_interp)). That's because in general, the shape of the output will be the broadcasted shape of the input. You can see that when we created D_values, and you can see it here. Since 0 is a scalar, it does not contribute to the shape. But I could also make a (400, 200) shaped array instead:\nD_interp((x_interp, y_interp[:, None], z_interp))\n\nOr even a (100, 4, 100, 2) shaped array:\nD_interp((x_interp.reshape(-1, 2), y_interp.reshape(-1, 4, 1, 1), z_interp))\n\nIn either case, let's verify that the interpolator did it's job. We can compare the interpolated values to a much finer sampling of the function that created D_values:\nD_xy_values = np.sin(2 * np.pi * x_interp[:, None] * y_interp / 100) + np.cos(2 * np.pi * y_interp * z_interp / 50)\n\nfig, ax = plt.subplots(subplot_kw={'projection': '3d'})\nax.plot_surface(x_interp[:, None], y_interp, D_xy_interp, label='Interp')\nax.plot_surface(x_interp[:, None], y_interp, D_xy_values, label='Values')\nax.set_xlabel('X')\nax.set_ylabel('Y')\nax.set_zlabel('Z')\nplt.show()\n\nAt the moment it doesn't look like you can add legends to 3D plots:\n.\nThe two plots are virtually indistinguishable. With the default color cycler, you will see the surface chance from blue to orange as you rotate it. Here is an analytical verification:\n>>> np.sqrt(np.mean((D_xy_values - D_xy_interp)**2))\n4.707625623185639e-05\n\n",
"To evaluate a plane using the RegularGridInterpolator class from SciPy, you can use the __call__ method of the interpolator object and specify the values for the fixed dimensions as keyword arguments.\nFor example, to interpolate a plane along the z-axis at z=0, you can use the following code:\nfrom scipy.interpolate import RegularGridInterpolator as rgi\n\n# Define the input values for the x, y, and z dimensions\nx_positions = ...\ny_positions = ...\nz_positions = ...\n\n# Define the values of the function D at each point in the grid\nD_values = ...\n\n# Create the interpolator object\nD_interp = rgi((x_positions, y_positions, z_positions), D_values)\n\n# Interpolate the plane at z=0\nplane = D_interp(z=0)\n\nThe plane variable will now be a 2D array containing the interpolated values of the function D on the x-y plane at z=0.\nYou can repeat this process for each value of z that you want to interpolate. For example, to interpolate all the planes along the z-axis, you could use a loop as follows:\nfrom scipy.interpolate import RegularGridInterpolator as rgi\n\n# Define the input values for the x, y, and z dimensions\nx_positions = ...\ny_positions = ...\nz_positions = ...\n\n# Define the values of the function D at each point in the grid\nD_values = ...\n\n# Create the interpolator object\nD_interp = rgi((x_positions, y_positions, z_positions), D_values)\n\n# Interpolate all the planes along the z-axis\nplanes = []\nfor z in z_positions:\n plane = D_interp(z=z)\n planes.append(plane)\n\nThe planes variable will now be a list of 2D arrays containing the interpolated values of the function D on the x-y plane for each value of z in z_positions.\n",
"You can generate a meshgrid of each plane, ravel the coordinates out to a list of points, interpolate at each point and finally reshape the results back to a list of planes:\nx_positions = np.arange(0, 355)\ny_positions = np.arange(0, 155)\nz_positions = np.arange(0, 303)\n\nz_mesh, x_mesh, y_mesh = np.meshgrid(z_positions, x_positions, y_positions, indexing='ij')\npts = np.vstack([x_mesh.ravel(), y_mesh.ravel(), z_mesh.ravel()]).transpose()\nplane = np.reshape(interp(pts), x_mesh.shape)\n\nIn this example I have aranged the z_positions at the first position of the mesh, so you get an array of planes.\n"
] | [
1,
1,
0
] | [] | [] | [
"interpolation",
"plane",
"python",
"scipy"
] | stackoverflow_0074574442_interpolation_plane_python_scipy.txt |
Q:
How to get ax plot id for matplotlib RectangleSelector callback?
I have multiple ax plot with RectangleSelector and its callback as below.
from matplotlib.widgets import RectangleSelector
import numpy as np
import matplotlib.pyplot as plt
def select_callback(eclick, erelease):
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
fig = plt.figure(constrained_layout=True)
axs = fig.subplots(4)
x = np.linspace(0, 10, 1000)
selectors = []
for idx,ax in enumerate(axs):
ax.plot(x, np.sin(2*np.pi*x)) # plot something
selectors.append(RectangleSelector(
ax, select_callback,
useblit=True,
button=[1, 3], # disable middle button
minspanx=5, minspany=5,
spancoords='pixels',
interactive=True))
plt.show()
I can draw and get the callback perfectly, but I would like to capture which ax it is working on, for example its id. Any advice or guidance on this would be greatly appreciated, thanks.
A:
You can use a partial:
from matplotlib.widgets import RectangleSelector
import numpy as np
import matplotlib.pyplot as plt
from functools import partial
def select_callback(eclick, erelease, idx):
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
print(idx)
fig = plt.figure(constrained_layout=True)
axs = fig.subplots(4)
x = np.linspace(0, 10, 1000)
selectors = []
for idx,ax in enumerate(axs):
ax.plot(x, np.sin(2*np.pi*x)) # plot something
selectors.append(RectangleSelector(
ax, partial(select_callback, idx=idx),
useblit=True,
button=[1, 3], # disable middle button
minspanx=5, minspany=5,
spancoords='pixels',
interactive=True))
plt.show()
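An alternative sketch, relying on the fact that eclick is a matplotlib MouseEvent (so it carries an inaxes attribute pointing to the Axes the click happened in), assuming the axs array from fig.subplots(4) is still in scope:
def select_callback(eclick, erelease):
    # eclick.inaxes is the Axes the press happened in (None if the click was outside every Axes)
    ax = eclick.inaxes
    # axs is the array returned by fig.subplots(4); look up the position of ax in it
    idx = list(axs).index(ax)
    print(idx, eclick.xdata, eclick.ydata, erelease.xdata, erelease.ydata)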
| How to get ax plot id for matplotlib RectangleSelector callback? | I have multiple ax plot with RectangleSelector and its callback as below.
from matplotlib.widgets import RectangleSelector
import numpy as np
import matplotlib.pyplot as plt
def select_callback(eclick, erelease):
x1, y1 = eclick.xdata, eclick.ydata
x2, y2 = erelease.xdata, erelease.ydata
fig = plt.figure(constrained_layout=True)
axs = fig.subplots(4)
x = np.linspace(0, 10, 1000)
selectors = []
for idx,ax in enumerate(axs):
ax.plot(x, np.sin(2*np.pi*x)) # plot something
selectors.append(RectangleSelector(
ax, select_callback,
useblit=True,
button=[1, 3], # disable middle button
minspanx=5, minspany=5,
spancoords='pixels',
interactive=True))
plt.show()
I can draw and get callback perfectly but I would like to capture which ax is working on, for example its id. Any advice or guidance on this would be greatly appreciated, Thanks.
| [
"You can use a partial:\nfrom matplotlib.widgets import RectangleSelector\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom functools import partial\n\ndef select_callback(eclick, erelease, idx):\n x1, y1 = eclick.xdata, eclick.ydata\n x2, y2 = erelease.xdata, erelease.ydata\n print(idx)\n\nfig = plt.figure(constrained_layout=True)\naxs = fig.subplots(4)\n\nx = np.linspace(0, 10, 1000)\n\nselectors = []\nfor idx,ax in enumerate(axs):\n ax.plot(x, np.sin(2*np.pi*x)) # plot something\n selectors.append(RectangleSelector(\n ax, partial(select_callback, idx=idx),\n useblit=True,\n button=[1, 3], # disable middle button\n minspanx=5, minspany=5,\n spancoords='pixels',\n interactive=True))\n\nplt.show()\n\n\n"
] | [
1
] | [] | [] | [
"matplotlib",
"python"
] | stackoverflow_0074635825_matplotlib_python.txt |
Q:
Keyword argument (kwargs) function error in Python
Following is the code:
def my_funct(**kwarg):
print(kwarg[fn]*kwarg[sn])
print('enter 2 numbers to get product of')
a=input()
print('enter second number')
b=input()
my_funct(fn=a,sn=b)
The output is an error saying 'fn is not defined'. What is the solution?
A:
kwargs is a dictionary where the parameter name is the key and has type str. You are trying to look up the value for a key stored in a variable named fn, but this variable hasn't been defined. Instead what you want is the value corresponding to the key 'fn', so you do
print(kwarg['fn'] * kwarg['sn'])
Edit: Because your corresponding values come from the input() function, you should make sure they are actually numbers and not strings. So when you do a=input() you should change that to a=float(input()). Same for b.
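Putting both fixes together, a minimal corrected sketch of the code from the question would be:
def my_funct(**kwarg):
    print(kwarg['fn'] * kwarg['sn'])

print('enter 2 numbers to get product of')
a = float(input())
print('enter second number')
b = float(input())
my_funct(fn=a, sn=b)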
| Keyword argument (kwargs) function error in Python | Following is the code:
def my_funct(**kwarg):
print(kwarg[fn]*kwarg[sn])
print('enter 2 numbers to get product of')
a=input()
print('enter second number')
b=input()
my_funct(fn=a,sn=b)
The output is error saying 'fn is not defined'. What is the solution?
| [
"Kwargs is a dictionary where the variable name is the key and has type str. You are trying to find the value to the key which is saved in the variable fn but this variable hasn't been defined. Instead what you want is the value corresponding to the key 'fn', so you do\nprint(kwarg['fn'] * kwarg['sn'])\n\nEdit: Because your corresponding values come from the input() function, you should make sure they are actually numbers and not strings. So when you do a=input() you should change that to a=float(input()). Same for b.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074638569_python.txt |
Q:
Exception has occurred: ModuleNotFoundError No module named 'src'
I am facing this problem while my "from src.simulation.simulator import Simulator" is in the same directory. What should I do now to resolve this problem?
I have tried to solve this problem but did not understand what to do!!
A:
Since you haven't updated for a long time, I have made a guess.
This may be caused by VS Code using the workspace as the root folder.
This will lead to a problem: when you use the os.getcwd() method in a deep directory of the workspace, you will still get the workspace directory.
You can open your settings and search Python > Terminal: Execute In File Dir then check it.
You can also use debug mode and add the following to your launch.json:
"cwd": "${fileDirname}"
| Exception has occurred: ModuleNotFoundError No module named 'src' | I am facing this problem while my "from src.simulation.simulator import Simulator" is in the same directory. what should i do now to resolve this problem?
I have tried to solve this problem but did not understand what to do!!
| [
"Since you haven't updated for a long time, I have made a guess.\nThis maybe caused by vscode using workspace as root floder.\nThis will lead to a problem. When you use the os.getcwd() method in the deep directory of the workspace, you will still get the workspace directory.\nYou can open your settings and search Python > Terminal: Execute In File Dir then check it.\n\nYou can also use debug mode and add the following to your launch.json:\n\"cwd\": \"${fileDirname}\"\n\n"
] | [
0
] | [] | [] | [
"module_export",
"python",
"visual_studio_code"
] | stackoverflow_0074384770_module_export_python_visual_studio_code.txt |
Q:
Raspberry Pi 4B, running Python Script using serial at boot
I'm currently using crontab on a Raspberry Pi 4 Model B to launch my python script at boot.
I've added this at the bottom of sudo crontab -e :
@reboot sh /home/pi/start.sh > /home/pi/logs/cronlog 2>&1 &
My start.sh script is like that :
#!/bin/sh
# start.sh
cd /home/pi/Desktop/Python_Scripts/Projet
sudo python3 main.py
If I run the shell script manually, everything works fine, but when it runs at boot, the serial communication doesn't work.
I already tried to add some delay in my python script to wait for the serial interface to be fully initialized but it still doesn't work.
Thanks in advance for any kind of help
EDIT : I must clarify that the script runs perfectly if I run
sh /home/pi/start.sh > /home/pi/logs/cronlog 2>&1 &
in the command line. However, the only thing that doesn't work if I run it at boot with crontab is the serial communication (looking up signals with an oscilloscope, it doesn't send data through the serial interface) but every other aspect of the program runs fine.
A:
Most likely cron is executing before the serial interface is initialized and causing your python script to raise an exception.
This can be verified by adding a relatively small delay (ie: 30 seconds) into your python script to see if it then functions properly.
If the script only needs to be run once, a simple fix could be to utilize the autostart file instead of cron. Commands in this file are run only after the gui is brought up successfully. This is located at /etc/xdg/lxsession/LXDE-pi/autostart
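For the delay idea above, a minimal sketch is to retry opening the port until it is ready; the device path and baud rate here are assumptions, use whatever main.py actually opens:
import time
import serial  # pyserial

ser = None
for attempt in range(30):            # wait up to ~30 seconds after boot
    try:
        ser = serial.Serial('/dev/serial0', 9600, timeout=1)
        break                        # the port opened successfully
    except serial.SerialException:
        time.sleep(1)                # interface not ready yet, try again

if ser is None:
    raise RuntimeError('serial port never became available')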
A:
Try this:
open a terminal and run
sudo nano /etc/xdg/lxsession/LXDE-pi/autostart
then add
@path/app_name
This will surely work.
| Raspberry Pi 4B, running Python Script using serial at boot | I'm currently using crontab on a Raspberry Pi 4 Model B to launch my python script at boot.
I've added this at the bottom of sudo crontab -e :
@reboot sh /home/pi/start.sh > /home/pi/logs/cronlog 2>&1 &
My start.sh script is like that :
#!/bin/sh
# start.sh
cd /home/pi/Desktop/Python_Scripts/Projet
sudo python3 main.py
If I run the shell script manually, everything works fine, but when it runs at boot, the serial communication doesn't work.
I already tried to add some delay in my python script to wait for the serial interface to be fully initialized but it still doesn't work.
Thanks in advance for any kind of help
EDIT : I must clarify that the script runs perfectly if I run
sh /home/pi/start.sh > /home/pi/logs/cronlog 2>&1 &
in the command line. However, the only thing that doesn't work if I run it at boot with crontab is the serial communication (looking up signals with an oscilloscope, it doesn't send data through the serial interface) but every other aspect of the program runs fine.
| [
"Most likely cron is executing before the serial interface is initialized and causing your python script to raise an exception.\nThis can be verified by adding a relatively small delay (ie: 30 seconds) into your python script to see if it then functions properly.\nIf the script only needs to be run once, a simple fix could be to utilize the autostart file instead of cron. Commands in this file are run only after the gui is brought up successfully. This is located at /etc/xdg/lxsession/LXDE-pi/autostart\n",
"try this\nopen terminal\n*> sudo nano /etc/xdg/lxsession/LXDE-pi/autostart *\nadd\n@path/app_name\nthiswii work surely\n"
] | [
0,
0
] | [
"I also use Raspberry pi4.\nI reccomend you use crontab -e without sudo. \nMy crontab -e like this: \n@reboot bash /usr/bin/start_counter.sh\nMy /usr/bin/start_counter.sh:\n#!/usr/bin/bash\nwhile true\ndo\n python3 /home/pi/people_counter_android/main.py\ndone\n\nIn my way its work. I hope this help you.\n"
] | [
-1
] | [
"cron",
"python",
"python_3.x",
"raspberry_pi",
"raspberry_pi4"
] | stackoverflow_0067487273_cron_python_python_3.x_raspberry_pi_raspberry_pi4.txt |
Q:
using break function within the function to stop the further execution of program
def my_function(df_1) :
df_1 = df_1.filter[['col_1','col_2','col_3']]
# Keeping only those records where col_1 == 'success'
df_1 = df_1[df_1['col_1'] == 'success']
# Checking if the df_1 shape is 0
if df_1.shape[0]==0:
print('No records found')
break
#further program
I am looking to break the execution of further program if the if condition is met.
Is this the correct way to do so, since break only ends a loop, but I want to end the function?
A:
You need to use
if df_1.shape[0]==0:
print('No records found')
return
A:
Just for further clarification, the answers above are correct, but the break statement is used in loops like a for loop or a while loop to end the loop prematurely.
Functions, on the other hand, end either at the last line or when return is called. If no argument is passed to return, the function returns None by default. (None is also returned by default if the return statement is not called at all.)
A:
You can replace break with return; using return will let you exit the function, which stops any further execution of the function.
def my_function(df_1) :
df_1 = df_1.filter[['col_1','col_2','col_3']]
# Keeping only those records where col_1 == 'success'
df_1 = df_1[df_1['col_1'] == 'success']
# Checking if the df_1 shape is 0
if df_1.shape[0]==0:
print('No records found')
return
#further program
| using break function within the function to stop the further execution of program | def my_function(df_1) :
df_1 = df_1.filter[['col_1','col_2','col_3']]
# Keeping only those records where col_1 == 'success'
df_1 = df_1[df_1['col_1'] == 'success']
# Checking if the df_1 shape is 0
if df_1.shape[0]==0:
print('No records found')
break
#further program
I am looking to break the execution of further program if the if condition is met.
is this the correct way to do so..? since break only ends the loop, but i want to end the function
| [
"You need to use\n if df_1.shape[0]==0:\n print('No records found')\n return \n\n",
"Just for further clarification, the answers above are correct, but the break statement is an used in loops like a for loop or while loop to end the loop prematurely.\nFunctions on the other hand end either at the last line or when return is called. If no agument is passed to return, the function returns None by default. (None is also returned by default if the return statement is not called at all.)\n",
"You can replace break with return just using return will help you escape the function which will break the further execution of the function.\ndef my_function(df_1) :\n \n df_1 = df_1.filter[['col_1','col_2','col_3']]\n \n # Keeping only those records where col_1 == 'success'\n df_1 = df_1[df_1['col_1'] == 'success']\n \n # Checking if the df_1 shape is 0\n if df_1.shape[0]==0:\n print('No records found')\n return\n\n #further program\n\n"
] | [
3,
1,
0
] | [] | [] | [
"dataframe",
"python"
] | stackoverflow_0074638627_dataframe_python.txt |
Q:
Fill matrix with For Loop
I am trying to fill a matrix with a for loop in Python. This may be a math problem more than anything else, or I just need to find a new solution. I need to write this loop (just an example, but the same concept):
matrix = np.zeros((1,8))
for i, j in zip(range(2,6), range(1,5)):
matrix[0,0*i:2*i] = [i*2, j*2]
The expected output is the following:
[[4. 2. 6. 4. 8. 6. 10. 8.]]
That is, for the first iteration, columns 0 and 1 of the matrix are filled, for the second iteration columns 2 and 3 are filled, etc.
For this I need the loop to spit out the following slices:
matrix[0,0:2]
matrix[0,2:4]
matrix[0,4:6]
matrix[0,6:8]
The second part is easy, but how can I write an expression that first prints out 0, then 2, 4, 6, etc.?
Thank you.
A:
I have managed to solve this myself now, maybe not the most elegant solution, but it works. So instead of using the range(2,6) and range(1,5) to assign the values to the right place in the matrix, I added another variable range(4) which I call p.
matrix = np.zeros((1,8))
for i, j, p in zip(range(2,6), range(1,5), range(4)):
matrix[0,2*p:2*p+2] = [i*2, j*2]
# Gives the output
print(matrix)
[[ 4. 2. 6. 4. 8. 6. 10. 8.]]
| Fill matrix with For Loop | I am trying to fill a matrix with a for loop in Python, this may be an math problem more than anything else, or I just need to find a new solution. I need to write this loop (just an example but the same concept):
matrix = np.zeros((1,8))
for i, j in zip(range(2,6), range(1,5)):
matrix[0,0*i:2*i] = [i*2, j*2]
The expected output is the following:
[[4. 2. 6. 4. 8. 6. 10. 8.]]
That is, for the first iteration, column 0 and 1 of the matrix is filled, for the second iteration column 2 and is filled, etc.
For this I need the Loop for spit out the following matrixes:
matrix[0,0:2]
matrix[0,2:4]
matrix[0,4:6]
matrix[0,6:8]
The second part is easy, but how can I write a expression that first print out 0, to then print out 2, 4, 6 etc.
Thank you.
| [
"I have managed to solve this myself now, maybe not the most elegant solution, but it works. So instead of using the range(2,6) and range(1,5) to assign the values to the right place in the matrix, I added another variable range(4) which I call p.\nmatrix = np.zeros((1,8))\nfor i, j, p in zip(range(2,6), range(1,5), range(4)):\n matrix[0,2*p:2*p+2] = [i*2, j*2]\n# Gives the output\nprint(matrix)\n[[ 4. 2. 6. 4. 8. 6. 10. 8.]]\n\n\n"
] | [
0
] | [] | [] | [
"for_loop",
"python"
] | stackoverflow_0074638447_for_loop_python.txt |
Q:
Office365-REST-Python-Client - How to read more than 100 rows from a Sharepoint (MS-List)
I'm using the following Python (v3.8.10) code with the latest version of the Office365-REST-Python-Client to access an MS-List on my Sharepoint site:
sp_lists = ctx.web.lists
s_list = sp_lists.get_by_title(staff_list)
l_items = s_list.get_items()
ctx.load(l_items)
ctx.execute_query()
It works except only the 1st 100 records are returned. This seems to be a well known issue, but after searching I can't find the code changes required to enable all records to be returned up to the limit (5000 I believe?).
Any help with this is most appreciated.
Many thanks in advance.
A:
Found the answer - in case this is useful to anyone
sp_lists = ctx.web.lists
s_list = sp_lists.get_by_title(staff_list)
l_items= s_list.items.paged(500).get().execute_query()
Courtesy of: https://github.com/vgrem/Office365-REST-Python-Client/blob/master/examples/sharepoint/lists/read_large_list.py
| Office365-REST-Python-Client - How to read more than 100 rows from a Sharepoint (MS-List) | I'm using the following Python (v3.8.10) code with the latest version of the Office365-REST-Python-Client to access an MS-List on my Sharepoint site:
sp_lists = ctx.web.lists
s_list = sp_lists.get_by_title(staff_list)
l_items = s_list.get_items()
ctx.load(l_items)
ctx.execute_query()
It works except only the 1st 100 records are returned. This seems to be a well known issue, but after searching I can't find the code changes required to enable all records to be returned up to the limit (5000 I believe?).
Any help with this is most appreciated.
Many thanks in advance.
| [
"Found the answer - in case this is useful to anyone\nsp_lists = ctx.web.lists\ns_list = sp_lists.get_by_title(staff_list)\nl_items= s_list.items.paged(500).get().execute_query()\n\n[Courtesy of]\n[1]: https://github.com/vgrem/Office365-REST-Python-Client/blob/master/examples/sharepoint/lists/read_large_list.py\n"
] | [
0
] | [] | [] | [
"office365_rest_client",
"python",
"sharepoint",
"sharepoint_list"
] | stackoverflow_0074628645_office365_rest_client_python_sharepoint_sharepoint_list.txt |
Q:
Uploaded data from excel to Python
I am trying to use a program written on github.
The author provides an example of running the code with random data.
xy = pd.DataFrame(np.random.normal(0, 1, (500, 3)), columns=["x_0", "x_1", "x_2"])
xy["y"] = xy.sum(axis=1) + np.random.normal(0, 1, 500)
# set a unique and monotonically increasing index (default index would suffice):
xy.index = pd.date_range("2021-04-19", "2022-08-31").map(lambda x: x.date())
Is there an easy way to upload an excel file and get the same labeling as this?
A:
You can use pd.read_excel to load in an excel file:
xy = pd.read_excel("/path/to/file")
It is recommended to store the data in .csv format, though. You can still open it in excel and it is much more efficient with storage.
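If you also want the same labeling as the example (named columns plus a date index), a sketch along these lines should work; the file name and the assumption that the sheet has four columns in this order are mine:
import pandas as pd

xy = pd.read_excel("data.xlsx")                     # hypothetical file name
xy.columns = ["x_0", "x_1", "x_2", "y"]             # assuming four columns in this order
xy.index = pd.date_range("2021-04-19", periods=len(xy)).map(lambda x: x.date())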
| Uploaded data from excel to Python | I am trying to use a program written on github.
The author provides an example of running the code with random data.
xy = pd.DataFrame(np.random.normal(0, 1, (500, 3)), columns=["x_0", "x_1", "x_2"])
xy["y"] = xy.sum(axis=1) + np.random.normal(0, 1, 500)
# set a unique and monotonically increasing index (default index would suffice):
xy.index = pd.date_range("2021-04-19", "2022-08-31").map(lambda x: x.date())
Is there an easy way to upload an excel file and get the same labeling as this?
| [
"You can use pd.read_excel to load in an excel file:\nxy = pd.read_excel(\"/path/to/file\")\n\nIt is recommended to store the data in .csv format, though. You can still open it in excel and it is much more efficient with storage.\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0074638789_python.txt |
Q:
Check unique value when define concrete class for abstract variable in python
Suppose that I have this architecture for my classes:
# abstracts.py
import abc
class AbstractReader(metaclass=abc.ABCMeta):
@classmethod
def get_reader_name(cl):
return cls._READER_NAME
@classmethod
@property
@abc.abstractmethod
def _READER_NAME(cls):
raise NotImplementedError
# concretes.py
from .abstracts import AbstractReader
class ReaderConcreteNumber1(AbstractReader):
_READER_NAME = "NAME1"
class ReaderConcreteNumber2(AbstractReader):
_READER_NAME = "NAME2"
Also, I have manager classes that find concrete classes by the _READER_NAME variable, so I need to define a unique _READER_NAME for each of my concrete classes.
How do I check that NAME1 and NAME2 are unique when the concrete classes are defined?
A:
This is a very special case, but it can be solved with a singleton pattern.
To ease things for ourselves we first create a singleton annotation
# annotations.py
def singleton(clazz):
"""Singleton annotator ensures the annotated class is a singleton"""
class ClassW(clazz):
"""Creates a new sealed class from the object to create."""
_instance = None
def __new__(cls, *args, **kwargs):
if ClassW._instance is None:
ClassW._instance = super(ClassW, cls).__new__(clazz, *args, **kwargs)
ClassW._instance._sealed = False
return ClassW._instance
def __init__(self, *args, **kwargs):
if self._sealed:
return
super(ClassW, self).__init__(*args, **kwargs)
self._sealed = True
ClassW.__name__ = clazz.__name__
return ClassW
Now we construct a singleton Registry class, to register our classes with and to do the checking.
# registry.py
from .annotations import singleton
@singleton
class ReaderRegistry:
"""
Singleton class to register processing readers
### Usage
To register a block call the register function with an ID and the class object.
ReaderRegistry().register('FooName', FooReader)
The class for the block can then be obtained via
Registry()['FooName']
"""
registry = {}
def register(self, key: str, clazz: Type[Block]) -> None:
"""Register a new reader. Names must be unique within the registry"""
if key in self:
raise f"Reader with key {key} already registered."
self.registry[key] = clazz
def __contains__(self, key: str) -> bool:
return key in self.registry.keys()
def __getitem__(self, key: str) -> Type[Block]:
return self.registry[key]
Now, you can
#concretes.py
from .abstracts import AbstractReader
from .registry import ReaderRegistry
class ReaderConcreteNumber1(AbstractReader):
_READER_NAME = "NAME1"
# Note that this is OUTSIDE and AFTER the class definition,
# e.g. end of the file.
RederRegistry().register(ReaderConcreteNumber1._READER_NAME , ReaderConcreteNumber1)
If a reader with such a name exists in the registry already there will be an exception thrown once the file is imported. Now, you can just lookip the classes to construct by theire naume in the registry, e.g.
if reader_name not in ReaderRegistry():
    raise ValueError(f"Block [{reader_name}] is not known.")
reader = ReaderRegistry()[reader_name]
A:
You can create a metaclass with a constructor that uses a set to keep track of the name of each instantiating class and raises an exception if a given name already exists in the set:
class UniqueName(type):
names = set()
def __new__(metacls, cls, bases, classdict):
name = classdict['_READER_NAME']
if name in metacls.names:
raise ValueError(f"Class with name '{name}' already exists.")
metacls.names.add(name)
return super().__new__(metacls, cls, bases, classdict)
And make it the metaclass of your AbstractReader class. since Python does not allow a class to have multiple metaclasses, you would need to make AbstractReader inherit from abc.ABCMeta instead of having it as a metaclass:
class AbstractReader(abc.ABCMeta, metaclass=UniqueName):
... # your original code here
Or if you want to use ABCMeta as metaclass in your AbstractReader, just override ABCMeta class and set child ABC as metaclass in AbstractReader:
class BaseABCMeta(abc.ABCMeta):
"""
Check unique name for _READER_NAME variable
"""
_readers_name = set()
def __new__(mcls, name, bases, namespace, **kwargs):
reader_name = namespace['_READER_NAME']
if reader_name in mcls._readers_name:
raise ValueError(f"Class with name '{reader_name}' already exists. ")
mcls._readers_name.add(reader_name)
return super().__new__(mcls, name, bases, namespace, **kwargs)
class AbstractReader(metaclass=BaseABCMeta):
# Your codes ...
So that:
class ReaderConcreteNumber1(AbstractReader):
_READER_NAME = "NAME1"
class ReaderConcreteNumber2(AbstractReader):
_READER_NAME = "NAME1"
would produce:
ValueError: Class with name 'NAME1' already exists.
Demo: https://replit.com/@blhsing/MerryEveryInternet
| Check unique value when define concrete class for abstract variable in python | Suppose that I have this architecture for my classes:
# abstracts.py
import abc
class AbstractReader(metaclass=abc.ABCMeta):
@classmethod
def get_reader_name(cl):
return cls._READER_NAME
@classmethod
@property
@abc.abstractmethod
def _READER_NAME(cls):
raise NotImplementedError
# concretes.py
from .abstracts import AbstractReader
class ReaderConcreteNumber1(AbstractReader):
_READER_NAME = "NAME1"
class ReaderConcreteNumber2(AbstractReader):
_READER_NAME = "NAME2"
Also I have a manager classes that find concrete classes by _READER_NAME variable. So I need to define unique _READER_NAME for each of my concrete classes.
how do I check that NAME1 and NAME2 are unique when concrete classes are going to define?
| [
"This is a very special case, but it can be solved with a singleton pattern.\nTo ease things for our selfes we first create a singleton annotation\n# anotations.py\n\ndef singleton(clazz):\n \"\"\"Singleton annotator ensures the annotated class is a singleton\"\"\"\n\n class ClassW(clazz):\n \"\"\"Creates a new sealed class from the object to create.\"\"\"\n _instance = None\n\n def __new__(cls, *args, **kwargs):\n if ClassW._instance is None:\n ClassW._instance = super(ClassW, cls).__new__(clazz, *args, **kwargs)\n ClassW._instance._sealed = False\n\n return ClassW._instance\n\n def __init__(self, *args, **kwargs):\n if self._sealed:\n return\n\n super(ClassW, self).__init__(*args, **kwargs)\n self._sealed = True\n\n ClassW.__name__ = clazz.__name__\n return ClassW\n\nNow we construct a singleton Registry class, to register our classes with and to do the checking.\n# registry.py\n\nfrom .annotations import singleton \n\n@singleton\nclass ReaderRegistry:\n \"\"\"\n Singleton class to register processing readers\n\n ### Usage\n To register a block call the register function with an ID and the class object.\n ReaderRegistry().register('FooName', FooReader)\n The class for the block can then be obtained via\n Registry()['FooName']\n\n \"\"\"\n registry = {}\n\n def register(self, key: str, clazz: Type[Block]) -> None:\n \"\"\"Register a new reader. Names must be unique within the registry\"\"\"\n if key in self:\n raise f\"Reader with key {key} already registered.\"\n self.registry[key] = clazz\n\n def __contains__(self, key: str) -> bool:\n return key in self.registry.keys()\n\n def __getitem__(self, key: str) -> Type[Block]:\n return self.registry[key]\n\nNow, you can\n#concretes.py \n\nfrom .abstracts import AbstractReader\nfrom .registry import ReaderRegistry\n\nclass ReaderConcreteNumber1(AbstractReader):\n _READER_NAME = \"NAME1\"\n\n# Note that this is OUTSIDE and AFTER the class definition, \n# e.g. end of the file. \nRederRegistry().register(ReaderConcreteNumber1._READER_NAME , ReaderConcreteNumber1)\n\nIf a reader with such a name exists in the registry already there will be an exception thrown once the file is imported. Now, you can just lookip the classes to construct by theire naume in the registry, e.g.\nif reader _namenot in ReaderRegistry():\n raise f\"Block [{reader _name}] is not known.\"\n\nreader = ReaderRegistry()[reader _name]\n\n",
"You can create a metaclass with a constructor that uses a set to keep track of the name of each instantiating class and raises an exception if a given name already exists in the set:\nclass UniqueName(type):\n names = set()\n\n def __new__(metacls, cls, bases, classdict):\n name = classdict['_READER_NAME']\n if name in metacls.names:\n raise ValueError(f\"Class with name '{name}' already exists.\")\n metacls.names.add(name)\n return super().__new__(metacls, cls, bases, classdict)\n\nAnd make it the metaclass of your AbstractReader class. since Python does not allow a class to have multiple metaclasses, you would need to make AbstractReader inherit from abc.ABCMeta instead of having it as a metaclass:\nclass AbstractReader(abc.ABCMeta, metaclass=UniqueName):\n ... # your original code here\n\nOr if you want to use ABCMeta as metaclass in your AbstractReader, just override ABCMeta class and set child ABC as metaclass in AbstractReader:\nclass BaseABCMeta(abc.ABCMeta):\n \"\"\"\n Check unique name for _READER_NAME variable\n \"\"\"\n _readers_name = set()\n\n def __new__(mcls, name, bases, namespace, **kwargs):\n reader_name = namespace['_READER_NAME']\n if reader_name in mcls._readers_name:\n raise ValueError(f\"Class with name '{reader_name}' already exists. \")\n mcls._readers_name.add(reader_name)\n\n return super().__new__(mcls, name, bases, namespace, **kwargs)\n\nclass AbstractReader(metaclass=BaseABCMeta):\n # Your codes ...\n\nSo that:\nclass ReaderConcreteNumber1(AbstractReader):\n _READER_NAME = \"NAME1\"\n\nclass ReaderConcreteNumber2(AbstractReader):\n _READER_NAME = \"NAME1\"\n\nwould produce:\nValueError: Class with name 'NAME1' already exists.\n\nDemo: https://replit.com/@blhsing/MerryEveryInternet\n"
] | [
1,
1
] | [] | [] | [
"design_patterns",
"oop",
"python",
"python_3.x"
] | stackoverflow_0074638479_design_patterns_oop_python_python_3.x.txt |
Q:
Tkinter program does not terminate
import threading
import tkinter
from tkinter import *
flag = False
def terminate_prog():
global win3, mylabel
global flag
flag = True
win3.destroy()
def loop_func():
global flag, mylabel
while True:
if flag:
break
else:
mylabel.config(text="Loop")
global mylabel
global button_win3_end
global win3
win3 = tkinter.Tk()
win3.geometry("700x500")
mylabel = Label(win3, text="Text")
mylabel.pack()
check_thread = threading.Thread(target=loop_func)
check_thread.start()
button_win3_end = Button(win3, text="Exit", command=lambda: terminate_prog)
button_win3_end.place(x=40, y=400)
win3.mainloop()
In the code, there is a label in the window and 'loop_func' function called by check_thread that is used to continuously run the loop. When I click on exit, the loop terminates because flag is True, but window is not destroyed and the program does not terminate.
terminate_prog function should terminate the program but it does not.
A:
I have left comments in the code, for the purpose of self learning.
Also see:
SimpleNamespace
Scoping rules
tkinters after method
lambda function
import tkinter as tk
#import tkinter once and avoid wildcard imports
import types
namespace = types.SimpleNamespace()
#use simplenamespace instead of all these global statements
namespace.flag = False
namespace.num = 0
def terminate_prog():
#win3/mylabel is already in the global namespace
namespace.flag = True
win3.destroy()
def loop_func():
if namespace.flag:
return #do nothing
else:
mylabel.config(text=namespace.num)
namespace.num += 1
win3.after(100, loop_func)
#it is useless to global in the global namespace
win3 = tk.Tk()
win3.geometry("700x500")
mylabel = tk.Label(win3, text="Text")
mylabel.pack()
button_win3_end = tk.Button(win3, text="Exit", command=terminate_prog)
#lambda is not needed, so don't use it (the original lambda: terminate_prog only returned the function without calling it)
button_win3_end.place(x=40, y=400)
#use an after loop instead of threading
loop_func()
win3.mainloop()
| Tkinter program does not terminate | import threading
import tkinter
from tkinter import *
flag = False
def terminate_prog():
global win3, mylabel
global flag
flag = True
win3.destroy()
def loop_func():
global flag, mylabel
while True:
if flag:
break
else:
mylabel.config(text="Loop")
global mylabel
global button_win3_end
global win3
win3 = tkinter.Tk()
win3.geometry("700x500")
mylabel = Label(win3, text="Text")
mylabel.pack()
check_thread = threading.Thread(target=loop_func)
check_thread.start()
button_win3_end = Button(win3, text="Exit", command=lambda: terminate_prog)
button_win3_end.place(x=40, y=400)
win3.mainloop()
In the code, there is a label in the window and 'loop_func' function called by check_thread that is used to continuously run the loop. When I click on exit, the loop terminates because flag is True, but window is not destroyed and the program does not terminate.
terminate_prog function should terminate the program but it does not.
| [
"I have left comments in the code, for the purpose of self learning.\nAlso see:\n\nSimpleNamespace\nScoping rules\ntkinters after method\nlambda function\n\n\nimport tkinter as tk\n#import tkinter once and avoid wildcard imports\nimport types\n\nnamespace = types.SimpleNamespace()\n#use simplenamespace instead of all these global statements\nnamespace.flag = False\nnamespace.num = 0\n\ndef terminate_prog():\n #win3/mylabel is already in the global namespace\n namespace.flag = True\n win3.destroy()\n\n\ndef loop_func():\n if namespace.flag:\n return #do nothing\n else:\n mylabel.config(text=namespace.num)\n namespace.num += 1\n win3.after(100, loop_func)\n\n#it is useless to global in the global namespace\nwin3 = tk.Tk()\nwin3.geometry(\"700x500\")\n\nmylabel = tk.Label(win3, text=\"Text\")\nmylabel.pack()\nbutton_win3_end = tk.Button(win3, text=\"Exit\", command=terminate_prog)\n#lambda is not needed, so don't use it\nbutton_win3_end.place(x=40, y=400)\n\n#use an after loop instead of threading\nloop_func()\n\nwin3.mainloop()\n\n"
] | [
1
] | [] | [] | [
"python",
"tkinter",
"user_interface"
] | stackoverflow_0074625860_python_tkinter_user_interface.txt |
Q:
MinMaxScaler for dataframe: ValueError: setting an array element with a sequence
I do the preprocessing for the data to apply to K-means cluster for time-series data following hour. Then, I normalize the data but it shows the error:
`
Traceback (most recent call last):
File ".venv\lib\site-packages\pandas\core\series.py", line 191, in wrapper
raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'float'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv\timesequence.py", line 210, in <module>
matrix = pd.DataFrame(scaler.fit_transform(x_calls), columns=df_hours.columns, index=df_hours.index)
File ".venv\lib\site-packages\sklearn\base.py", line 867, in fit_transform
return self.fit(X, **fit_params).transform(X)
File ".venv\lib\site-packages\sklearn\preprocessing\_data.py", line 420, in fit
return self.partial_fit(X, y)
File ".venv\lib\site-packages\sklearn\preprocessing\_data.py", line 457, in partial_fit
X = self._validate_data(
File ".venv\lib\site-packages\sklearn\base.py", line 577, in _validate_data
X = check_array(X, input_name="X", **check_params)
File ".venv\lib\site-packages\sklearn\utils\validation.py", line 856, in check_array
array = np.asarray(array, order=order, dtype=dtype)
File ".venv\lib\site-packages\pandas\core\generic.py", line 2064, in __array__
return np.asarray(self._values, dtype=dtype)
ValueError: setting an array element with a sequence.
#--------------------Preprocessing ds
counter_ = 0
zero = 0
df_hours = pd.DataFrame({
'Hour': [],
'SumView':[],
'CountStudent':[]
}, dtype=object)
while counter_ < 24:
if (counter_ in sub_data_hour['Hour']):
row = sub_data_hour.loc[(pd.to_numeric(sub_data_hour['Hour'], errors='coerce')) == counter_]
df_hours.loc[len(df_hours.index)] = [counter_, row['SumView'], row['CountStudent']]
else:
df_hours.loc[len(df_hours.index)] = [counter_, zero, zero]
counter_ += 1
#----------Normalize dataset------------
x_calls = df_hours.columns[2:]
scaler = MinMaxScaler()
matrix = pd.DataFrame(scaler.fit_transform(df_hours[x_calls]), columns=x_calls, index=df_hours.index)
`
I did try .to_numpy() or .values or [['column1','column2']] following this post pandas dataframe columns scaling with sklearn
But it did not work. Could anyone please help me to fix this? Thanks.
A:
The problem here is the datatype of df_hours I preprocessed.
Solution: change row['SumView'] to row['SumView'].values[0] and do the same with row['CountStudent'].
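In other words, the assignment inside the loop becomes (only the changed line shown):
df_hours.loc[len(df_hours.index)] = [counter_, row['SumView'].values[0], row['CountStudent'].values[0]]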
| MinMaxScaler for dataframe: ValueError: setting an array element with a sequence | I do the preprocessing for the data to apply to K-means cluster for time-series data following hour. Then, I normalize the data but it shows the error:
`
Traceback (most recent call last):
File ".venv\lib\site-packages\pandas\core\series.py", line 191, in wrapper
raise TypeError(f"cannot convert the series to {converter}")
TypeError: cannot convert the series to <class 'float'>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File ".venv\timesequence.py", line 210, in <module>
matrix = pd.DataFrame(scaler.fit_transform(x_calls), columns=df_hours.columns, index=df_hours.index)
File ".venv\lib\site-packages\sklearn\base.py", line 867, in fit_transform
return self.fit(X, **fit_params).transform(X)
File ".venv\lib\site-packages\sklearn\preprocessing\_data.py", line 420, in fit
return self.partial_fit(X, y)
File ".venv\lib\site-packages\sklearn\preprocessing\_data.py", line 457, in partial_fit
X = self._validate_data(
File ".venv\lib\site-packages\sklearn\base.py", line 577, in _validate_data
X = check_array(X, input_name="X", **check_params)
File ".venv\lib\site-packages\sklearn\utils\validation.py", line 856, in check_array
array = np.asarray(array, order=order, dtype=dtype)
File ".venv\lib\site-packages\pandas\core\generic.py", line 2064, in __array__
return np.asarray(self._values, dtype=dtype)
ValueError: setting an array element with a sequence.
#--------------------Preprocessing ds
counter_ = 0
zero = 0
df_hours = pd.DataFrame({
'Hour': [],
'SumView':[],
'CountStudent':[]
}, dtype=object)
while counter_ < 24:
if (counter_ in sub_data_hour['Hour']):
row = sub_data_hour.loc[(pd.to_numeric(sub_data_hour['Hour'], errors='coerce')) == counter_]
df_hours.loc[len(df_hours.index)] = [counter_, row['SumView'], row['CountStudent']]
else:
df_hours.loc[len(df_hours.index)] = [counter_, zero, zero]
counter_ += 1
#----------Normalize dataset------------
x_calls = df_hours.columns[2:]
scaler = MinMaxScaler()
matrix = pd.DataFrame(scaler.fit_transform(df_hours[x_calls]), columns=x_calls, index=df_hours.index)
`
I did try .to_numpy() or .values or [['column1','column2']] following this post pandas dataframe columns scaling with sklearn
But it did not work. Could anyone please help me to fix this? Thanks.
| [
"The problem here is the datatype of df_hours I preprocessed.\nSolution: change row['SumView'] to row['SumView'].values[0] and do the same with row['CountStudent'].\n"
] | [
0
] | [] | [] | [
"dataframe",
"normalization",
"python"
] | stackoverflow_0074592554_dataframe_normalization_python.txt |
Q:
How to get the specific multiple value out from a list in python and If the user input is equal to the mulitple values print output
Pass=[0,20,40,60,80,100,120]
while True:
Pass_Input=int(input("Enter : "))
if Pass_Input in Pass[5:6]:
print("Progress")
elif Pass_Input in Pass[0:2]:
print("Progress Module Trailer")
elif Pass_Input in Pass[0]:
print("Exclude")
Input:
Enter : 120
Output I get:
Traceback (most recent call last):
File "E:\IIT\Python\CW_Python\1 st question 2nd try.py", line 8, in <module>
elif Pass_Input in Pass[0]:
TypeError: argument of type 'int' is not iterable
Output I expect:
Progress
A:
In your last elif evaluation, you are using Pass[0] which is not a list but a value. You should write
elif Pass_Input == Pass[0]
A:
Pass[5:6] is [100]
and 120 is not in [100] with no doubt.
list slice Pass[5:6] means from 5 to 6, which 5 is included and 6 is not.
In [1]: Pass=[0,20,40,60,80,100,120]
In [2]: Pass[5:6]
Out[2]: [100]
then the program runs to elif Pass_Input in Pass[0]: . Pass[0] is 0, which can not be iterable, so you get a TypeError
you can change Pass[5:6] to Pass[5:7] to get Process output
| How to get the specific multiple value out from a list in python and If the user input is equal to the mulitple values print output | Pass=[0,20,40,60,80,100,120]
while True:
Pass_Input=int(input("Enter : "))
if Pass_Input in Pass[5:6]:
print("Progress")
elif Pass_Input in Pass[0:2]:
print("Progress Module Trailer")
elif Pass_Input in Pass[0]:
print("Exclude")
Input:
Enter : 120
Output I get:
Traceback (most recent call last):
File "E:\IIT\Python\CW_Python\1 st question 2nd try.py", line 8, in <module>
elif Pass_Input in Pass[0]:
TypeError: argument of type 'int' is not iterable
Output I expect:
Progress
| [
"In your last elif evaluation, you are using Pass[0] which is not a list but a value. You should write\nelif Pass_Input == Pass[0]\n\n",
"Pass[5:6] is [100]\nand 120 is not in [100] with no doubt.\nlist slice Pass[5:6] means from 5 to 6, which 5 is included and 6 is not.\nIn [1]: Pass=[0,20,40,60,80,100,120]\nIn [2]: Pass[5:6]\nOut[2]: [100]\n\nthen the program runs to elif Pass_Input in Pass[0]: . Pass[0] is 0, which can not be iterable, so you get a TypeError\n\nyou can change Pass[5:6] to Pass[5:7] to get Process output\n"
] | [
0,
0
] | [] | [] | [
"python"
] | stackoverflow_0074638769_python.txt |
Q:
Removing double quotations marks from String in Pandas Series
I am currently looping through a subset of a Pandas DataFrame and the string values inside have double quotation marks surrounding them. If I don't get them removed, I won't be able to compare them to what I need them compared to.
This is the code i have so far:
df_asinn = df.copy()
for index, screen_name in df.loc[:, ["username"]].iterrows():
user_tweet_list = []
screen_name[0] = screen_name[0].strip()
stripped_screen_name = screen_name[0].strip()
The variable screen_name[0] contains the string value. I have tried multiple things, to no avail. I tried using screen_name.strip(), which did not work. I tried to use .translate('"'), however, this also did not work. It would be great if anyone had a solution to the problem.
The goal is to go from having the list strings looking like this
"String1",
"String2",
"String3"
To looking like this:
String1,
String2,
String3
I know it's supposed to look like that, because if I do print("String1"), the output is:
String1
A:
Not quite sure exactly what you're getting at: see comment by @Panda Kim. However, two things that might point you in the right direction:
Calling string.replace(char, '') will replace all instances of char in string with an empty string, effectively removing them. I usually use this method; translate works too, but in Python 3 the call is string.translate(str.maketrans('', '', char)) rather than translate(None, char).
By calling the str accessor on a column (df[col_name].str), you can call any string function and it will apply that function to all elements in the column. For instance, calling df['username'].str.replace('"', '') will give the same result as looping through df['username'] and calling replace on each element (see the sketch below).
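A short sketch of both ideas applied to the username column (assuming the quotes only surround the strings; strip removes just leading/trailing characters):
# remove every double quote anywhere in the string
df['username'] = df['username'].str.replace('"', '', regex=False)

# or: remove only leading/trailing double quotes
df['username'] = df['username'].str.strip('"')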
| Removing double quotations marks from String in Pandas Series | i am currently looping through a subset of a Pandas DataFrame and the string values inside have double quotation marks surrounding them. If i don't get them removed, i won't be able to compare them to what i need them compared to.
This is the code i have so far:
df_asinn = df.copy()
for index, screen_name in df.loc[:, ["username"]].iterrows():
user_tweet_list = []
screen_name[0] = screen_name[0].strip()
stripped_screen_name = screen_name[0].strip()
The value varaible containing the string value in screen_name[0]. I have ried multiple things, with no prevail. I tried using screen_name.strip(), which did not work. I tried to use .translate('"'), however, this did also not work. It would be great if anyone had a solution to the problem.
Te goal is from having the list strings looking like this
"String1",
"String2",
"String3"
To looking like this:
String1,
String2,
String3
I know its supposed to look like that, because if i do print("String1"), the output is:
String1
| [
"Not quite sure exactly what you're getting at: see comment by @Panda Kim. However, two things that might point you in the right direction:\n\nCalling string.replace(char, '') will replace all instances of char in string with a blank string, effectively removing them. I usually use this method, though translate works well too. It seems, though, that the proper call is translate(None, char).\nBy calling the str method on a column (df[col_name].str), you can call any string function and it will apply that function to allelements in the column. For instance, calling df['screen name'].str.replace('\\\"', '') will give the same result as looping through df['screen name'] and calling replace on each element.\n\n"
] | [
1
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074638744_pandas_python.txt |
Q:
Run python exe in raspberry Pi
I have a python script named client.py and I created an .exe file using pyinstaller in windows 10.
pyinstaller client.py
I want to run this .exe in Raspberry Pi 3. What should I do in order to do so?
A:
2 ways:
Compile the code on the raspberry pi. PyInstaller supports Linux, and it works almost identically.
Use wine
| Run python exe in raspberry Pi | I have a python script named client.py and I created an .exe file using pyinstaller in windows 10.
pyinstaller client.py
I want to run this .exe in Raspberry Pi 3. What should I do in order to do so?
| [
"2 ways:\n\nCompile the code on the raspberry pi. PyInstaller supports Linux, and it works almost identically.\nUse wine\n\n"
] | [
0
] | [
"You would just need to copy the client.py to the raspberry py, install any required pip modules and run via python client.py\n.exe binaries typically are windows only... There are ways to emulate a Windows environment and run windows applications on a linux system but doing such is extremely heavy-handed and would tax a raspberry pi's limited resources.\n",
"transfer your program to raspberry pi os.\nthen.\nthen save it in raspberry pi.\ninstall pyinstaller by pip command\npip install pyinstaller.\nthen cd to that directory where file is stored.\nthen run\npyinstaller client.py\n\n"
] | [
-1,
-1
] | [
"pyinstaller",
"python",
"raspberry_pi"
] | stackoverflow_0061200250_pyinstaller_python_raspberry_pi.txt |
Q:
How to fix error: 'AnnAssign' nodes are not implemented in Python
I try to do following:
import pandas as pd
d = {'col1': [1, 7, 3, 6], 'col2': [3, 4, 9, 1]}
df = pd.DataFrame(data=d)
out = df.query('col1 > col2')
out= col1 col2
1 7 4
3 6 1
This works OK. But when I modify column name col1 --> col1:suf
d = {'col1:suf': [1, 7, 3, 6], 'col2': [3, 4, 9, 1]}
df = pd.DataFrame(data=d)
out = df.query('col1:suf > col2')
I get an error:
'AnnAssign' nodes are not implemented
Is there an easy way to avoid this behavior? Of course renaming headers etc. is a workaround.
A:
The colon : is a special character in pandas query expressions (the query string is parsed like Python code, so col1:suf is read as an annotated assignment). You need to enclose the column name in backticks.
Try this :
out = df.query('`col1:suf` > col2')
Output :
print(out)
col1:suf col2
1 7 4
3 6 1
A:
According to ValentinFFM's comment on this issue, you need to put a backtick quote around your column name like
df.query('`Column: Name`==value')
| How to fix error: 'AnnAssign' nodes are not implemented in Python | I try to do following:
import pandas as pd
d = {'col1': [1, 7, 3, 6], 'col2': [3, 4, 9, 1]}
df = pd.DataFrame(data=d)
out = df.query('col1 > col2')
out= col1 col2
1 7 4
3 6 1
This works OK. But when I modify column name col1 --> col1:suf
d = {'col1:suf': [1, 7, 3, 6], 'col2': [3, 4, 9, 1]}
df = pd.DataFrame(data=d)
out = df.query('col1:suf > col2')
I get an error:
'AnnAssign' nodes are not implemented
Is there easy way to avoid this behavior? Or course renaming headers etc. is a workaround
| [
"The colon : is a special character in SQL queries. You need to enclose it in backticks.\nTry this :\nout = df.query('`col1:suf` > col2')\n\nOutput :\nprint(out)\n\n col1:suf col2\n1 7 4\n3 6 1\n\n",
"According to ValentinFFM's comment on this issue, you need to put a backtick quote around your column name like\ndf.query('`Column: Name`==value')\n\n"
] | [
1,
0
] | [] | [] | [
"dataframe",
"pandas",
"python"
] | stackoverflow_0074638814_dataframe_pandas_python.txt |
Q:
Reduce edges in a MultiDiGraph
I've a MultiDiGraph in which there are some edges that I need remove.
import networkx as ntx
import matplotlib.pyplot as plt
edges = [
(6, 7), (7, 6), (7, 11), (11, 7), (11, 8), (8, 11), (8, 9), (9, 8), (9, 5), (5, 9),
(5, 10), (10, 5), (10, 2), (2, 10),
(2, 1), (1, 2), (1, 0), (0, 1),
(0, 12), (12, 0),
(11, 14), (14, 11),
(5, 3),
(3, 4),
(4, 13), (13, 4),
]
G = ntx.MultiDiGraph()
G.add_edges_from(edges)
fig, ax = plt.subplots(figsize=(10, 10))
ntx.draw_networkx(G, with_labels=True)
plt.show()
Above is an example of my graph. I will remove the edges between nodes 5 and 12, and I will do the same in all situations in which an edge is linked with only one edge. There are some situations in which a bidirectional edge is linked to a one-way edge; in this case I don't want to merge the edges.
So, my aim is to obtain the graph below:
new_edges = [
(6, 11), (11, 6),
(11, 14), (14, 11),
(11, 5), (5, 11),
(5, 12), (12, 5),
(5, 4),
(4, 13), (13, 4),
]
new_G = ntx.MultiDiGraph()
new_G.add_edges_from(new_edges)
fig, ax = plt.subplots(figsize=(10, 10))
ntx.draw_networkx(new_G, with_labels=True)
plt.show()
I found this question, but it is useful for a Graph, not for a MultiDiGraph.
A:
Firstly, for the bidirectional edges, it's pretty much the same as the one for Graph but we add another edge in the opposite direction to make it bidirectional:
# Select all nodes with only 2 neighbors
nodes_to_remove_bidirectional = [n for n in G.nodes if len(list(G.neighbors(n))) == 2]
# For each of those nodes
for node in nodes_to_remove_bidirectional:
# We add an edge between neighbors (len == 2 so it is correct)
neighbors = list(G.neighbors(node))
G.add_edge(*neighbors)
neighbors.reverse() # add the edge with opposite direction
G.add_edge(*neighbors)
# And delete the node
G.remove_node(node)
This will result:
Now for the unidirectional edges, we select nodes with one incoming edge (predecessor) and one outgoing edge (successor) to different nodes:
nodes_to_remove_unidirectional = [n for n in G.nodes if
len(list(G.neighbors(n))) == 1 and
len(list(G.predecessors(n))) == 1 and
next(G.neighbors(n)) != next(G.predecessors(n))
]
# For each of those nodes
for node in nodes_to_remove_unidirectional:
# We add an edge between neighbors (len == 2 so it is correct)
neighbors = list(G.neighbors(node))
G.add_edge(next(G.predecessors(node)), next(G.neighbors(node)))
# And delete the node
G.remove_node(node)
This will result:
| Reduce edges in a MultiDiGraph | I've a MultiDiGraph in which there are some edges that I need remove.
import networkx as ntx
import matplotlib.pyplot as plt
edges = [
(6, 7), (7, 6), (7, 11), (11, 7), (11, 8), (8, 11), (8, 9), (9, 8), (9, 5), (5, 9),
(5, 10), (10, 5), (10, 2), (2, 10),
(2, 1), (1, 2), (1, 0), (0, 1),
(0, 12), (12, 0),
(11, 14), (14, 11),
(5, 3),
(3, 4),
(4, 13), (13, 4),
]
G = ntx.MultiDiGraph()
G.add_edges_from(edges)
fig, ax = plt.subplots(figsize=(10, 10))
ntx.draw_networkx(G, with_labels=True)
plt.show()
Above an example of my graph, I will remove the edges between nodes 5 and 12 and I will the same with all situation in which an edge is linked with only one edge. There are some situation in which a bidirectional edge is linked to a one way edge, in this case I don't wont to merge the edges.
So, my aim is to obtain the graph below:
new_edges = [
(6, 11), (11, 6),
(11, 14), (14, 11),
(11, 5), (5, 11),
(5, 12), (12, 5),
(5, 4),
(4, 13), (13, 4),
]
new_G = ntx.MultiDiGraph()
new_G.add_edges_from(new_edges)
fig, ax = plt.subplots(figsize=(10, 10))
ntx.draw_networkx(new_G, with_labels=True)
plt.show()
I found this question but is useful for a Graph not for a MultiDiGraph.
| [
"Firstly, for the bidirectional edges, it's pretty much the same as the one for Graph but we add another edge in the opposite direction to make it bidirectional:\n# Select all nodes with only 2 neighbors\nnodes_to_remove_bidirectional = [n for n in G.nodes if len(list(G.neighbors(n))) == 2]\n# For each of those nodes\nfor node in nodes_to_remove_bidirectional:\n # We add an edge between neighbors (len == 2 so it is correct)\n neighbors = list(G.neighbors(node))\n G.add_edge(*neighbors)\n neighbors.reverse() # add the edge with opposite direction\n G.add_edge(*neighbors)\n # And delete the node\n G.remove_node(node)\n\nThis will result:\n\nNow for the unidirectional edges, we select nodes with one incoming edge (predecessor) and one outgoing edge (successor) to different nodes:\nnodes_to_remove_unidirectional = [n for n in G.nodes if \n len(list(G.neighbors(n))) == 1 and\n len(list(G.predecessors(n))) == 1 and\n next(G.neighbors(n)) != next(G.predecessors(n))\n ]\n\n# For each of those nodes\nfor node in nodes_to_remove_unidirectional:\n # We add an edge between neighbors (len == 2 so it is correct)\n neighbors = list(G.neighbors(node))\n G.add_edge(next(G.predecessors(node)), next(G.neighbors(node)))\n # And delete the node\n G.remove_node(node)\n\nThis will result:\n\n"
] | [
1
] | [] | [] | [
"networkx",
"python"
] | stackoverflow_0074638719_networkx_python.txt |
Q:
Reading NLP models from Azure blob container
I have uploaded the sentence transformer model on my blob container. The idea is to load the model into a Python notebook from the blob container. To do this I do the following:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
service = BlobServiceClient(account_url="https://<name_of_my_blob>.blob.core.windows.net/", credential=credential)
then point to the location on the container where my model is i.e.:
https://<name_of_my_blob>.blob.core.windows.net/fla-models/all-MiniLM-L6-v2,
model = SentenceTransformer(service.get_blob_to_path('fla-models/all-MiniLM-L6-v2'))
But I get
AttributeError: 'BlobServiceClient' object has no attribute 'get_blob_to_path'
I have the latest installation of azure-storage-blob. I wonder what I am doing wrong there and whether the Azure lib no longer supports the get_blob_to_path method?
A:
I tried in my environment and got below results:
'BlobServiceClient' object has no attribute 'get_blob_to_path'
The above error tells you that BlobServiceClient no longer supports the get_blob_to_path method, because get_blob_to_path belongs to the older BlockBlobService library.
Example:
from azure.storage.blob import BlockBlobService
block_blob_service = BlockBlobService(account_name='myaccount', account_key='mykey')
block_blob_service.get_blob_to_path('mycontainer', 'myblockblob', 'filename')
If you need to read the file in Azure Blob Storage you can use the below code:
Code:
from azure.storage.blob import BlobServiceClient
import json
service=BlobServiceClient.from_connection_string("<connection_string>")
container_client = service.get_container_client("test")
model = service.get_blob_client("test", blob="all-MiniLM-L6-v2/modules.json")
blob_data = model.download_blob()
data = json.loads(blob_data.readall())
print(data)
Console:
In the portal, **all-MiniLM-L6-v2** is the blob name (a folder). If you use only the blob folder name it won't run; you need to use a specific file name inside the folder.
If you need to see which files are present under all-MiniLM-L6-v2, you can use the below code:
Code:
from azure.storage.blob import BlobServiceClient
service=BlobServiceClient.from_connection_string("<connection_string>")
container_client = service.get_container_client("test")
blob_list = container_client.list_blobs()
for blob in blob_list:
    print("\t" + blob.name)
Console:
For your reference, you can use this SO thread, which works with a function.
| Reading NLP models from Azure blob container | I have uploaded the sentence transformer model on my blob container. The idea is to load the model into a Python notebook from the blob container. To do this I do the following:
from azure.storage.blob import BlobServiceClient, BlobClient, ContainerClient
service = BlobServiceClient(account_url="https://<name_of_my_blob>.blob.core.windows.net/", credential=credential)
then point to the location on the container where my model is i.e.:
https://<name_of_my_blob>.blob.core.windows.net/fla-models/all-MiniLM-L6-v2,
model = SentenceTransformer(service.get_blob_to_path('fla-models/all-MiniLM-L6-v2'))
But I get
AttributeError: 'BlobServiceClient' object has no attribute 'get_blob_to_path'
I have the latest installation of azure-storage-blob. I wonder what am I doing wrong there and if the Azure lib is not longer supporting get_blob_to_path method?
| [
"I tried in my environment and got below results:\n\n'BlobServiceClient' object has no attribute 'get_blob_to_path'\n\nThe above error tells that BlobServiceClient is not longer supporting get_blob_to_path method. because Blockblobservice is the library is using get_blob_to_path.\nExample:\nfrom azure.storage.blob import BlockBlobService\n\nblock_blob_service = BlockBlobService(account_name='myaccount', account_key='mykey')\n\nblock_blob_service.get_blob_to_path('mycontainer', 'myblockblob', 'filename')\n\n\nif you need read the file in azure blob storage you can use the below code:\nCode:\nfrom azure.storage.blob import BlobServiceClient\nimport json\nservice=BlobServiceClient.from_connection_string(\"<connection_string>\")\n\ncontainer_client = service.get_container_client(\"test\")\nmodel=service.get_blob_client( \"test\",blob=\"all-MiniLM-L6-v2/modules.json\"))\nblob_data = model.download_blob()\ndata = json.loads(blob_data.readall())\nprint(data)\n \n\nConsole:\nIn portal **all-MiniLM-L6-v2**is blob name(folder), If you run only blob folder name it won't run you need use specific file name inside the folder.\n\nIf you need all-MiniLM-L6-v2 to get what are the files are present you can use the below code:\nCode:\nfrom azure.storage.blob import BlobServiceClient\n\nservice=BlobServiceClient.from_connection_string(\"<connection_string>\")\n\ncontainer_client = service.get_container_client(\"test\")\nblob_list = container_client.list_blobs()\n\nfor blob in blob_list:\\\n\nprint(\"\\t\" + blob.name)\n\nConsole:\n\nFor your reference you can use this So-thread works with function.\n"
] | [
0
] | [] | [] | [
"azure",
"azure_blob_storage",
"azure_python_sdk",
"python",
"sentence_transformers"
] | stackoverflow_0074614336_azure_azure_blob_storage_azure_python_sdk_python_sentence_transformers.txt |
Q:
How to get time in '2022-12-01T09:13:45Z' this format?
from datetime import datetime
import pytz
# local datetime to ISO Datetime
iso_date = datetime.now().replace(microsecond=0).isoformat()
print('ISO Datetime:', iso_date)
This doesn't give me the required format i want
2022-05-18T13:43:13
I wanted to get the time like '2022-12-01T09:13:45Z'
A:
The time format that you want is known as Zulu time format, the following code changes UTC to Zulu format.
Example 1
import datetime
now = datetime.datetime.now(datetime.timezone.utc)
print(now)
Output
#2022-12-01 10:07:06.552326+00:00
Example 2 (Hack)
import datetime
now = datetime.datetime.now(datetime.timezone.utc)
now = now.strftime('%Y-%m-%dT%H:%M:%S')+ now.strftime('.%f')[:4] + 'Z'
print(now)
Output
#2022-12-01T10:06:41.122Z
Hope this helps. Happy Coding :)
A:
You can use datetime's strftime function, i.e.:
current_datetime = datetime.now().replace(microsecond=0)
print(f'ISO Datetime: {current_datetime.strftime("%Y-%m-%dT%H:%M:%SZ")}')
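Note that datetime.now() above is local time. If the timestamp should be in UTC (which is what the trailing Z implies), a minimal variation of the same approach is:
from datetime import datetime, timezone

now_utc = datetime.now(timezone.utc).replace(microsecond=0)
print(now_utc.strftime("%Y-%m-%dT%H:%M:%SZ"))  # e.g. 2022-12-01T09:13:45Z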
| How to get time in '2022-12-01T09:13:45Z' this format? | from datetime import datetime
import pytz
# local datetime to ISO Datetime
iso_date = datetime.now().replace(microsecond=0).isoformat()
print('ISO Datetime:', iso_date)
This doesn't give me the required format i want
2022-05-18T13:43:13
I wanted to get the time like '2022-12-01T09:13:45Z'
| [
"The time format that you want is known as Zulu time format, the following code changes UTC to Zulu format.\nExample 1\nimport datetime\nnow = datetime.datetime.now(datetime.timezone.utc)\nprint(now)\n\nOutput\n#2022-12-01 10:07:06.552326+00:00\n\nExample 2 (Hack)\nimport datetime\nnow = datetime.datetime.now(datetime.timezone.utc)\nnow = now.strftime('%Y-%m-%dT%H:%M:%S')+ now.strftime('.%f')[:4] + 'Z'\nprint(now)\n\nOutput\n#2022-12-01T10:06:41.122Z\n\nHope this helps. Happy Coding :)\n",
"You can use datime's strftime function i.e.\ncurrent_datetime = datetime.now().replace(microsecond=0)\nprint(f'ISO Datetime: {current_datetime.strftime(\"%Y-%m-%dT%H:%M:%SZ\")}')\n\n"
] | [
1,
0
] | [] | [] | [
"datetime",
"python",
"pytz"
] | stackoverflow_0074638856_datetime_python_pytz.txt |
Q:
Aggregate by unique values & their counts using pandas
I have a df:
# create generic df with 1 date column and 2 value columns
df = pd.DataFrame({'date': pd.date_range('2020-01-01', '2020-01-31', freq='D'), \
'value1': np.random.randint(0, 10, 31), \
'value2': np.random.randint(0, 100, 31),\
'value3': np.random.randint(0, 1000, 31)})
I want to group by this df by date in W intervals, take the average of value2, count of value3 and distinct values of value1 & the count of those values in this or similar format :
{9:2, 4:1, 6:2, 5:1, 3:1}
[(9, 2), (4,1), (6,2), (5,1), (3,1)]
Basically this represents that in the first week there were 2 counts of value 9 in the column value1 and so on, similar to what df.groupby(pd.Grouper(key='date', freq='W')).value1.value_counts() returns, but trying
df.groupby(pd.Grouper(key='date', freq='W'))\
.agg({'value1': 'mean', 'value2': 'mean', 'value3': pd.Series.value_counts()})\
.reset_index()
Returns an error:
TypeError: value_counts() missing 1 required positional argument: 'self'
My desired output should look like this:
date value2 value3 value_1
2020-01-05 62.600000 5 {1:5, 3:2}
2020-01-12 30.000000 7 {2:2, 3:3, 6:1}
2020-01-19 34.428571 7 {2:2, 3:3, 6:1}
2020-01-26 51.428571 7 {2:1, 4:3, 8:1}
2020-02-02 48.000000 5 {2:1, 3:5, 7:1}
The column value1 as mentioned above can have a different format, such as a list with tuples of values.
A:
Convert values to dictionaries with lambda function:
df = df.groupby(pd.Grouper(key='date', freq='W'))\
.agg({'value1': 'mean', 'value2': 'mean',
'value3': lambda x: x.value_counts().to_dict()})\
.reset_index()
print (df)
date value1 value2 \
0 2020-01-05 3.200000 41.000000
1 2020-01-12 4.714286 58.714286
2 2020-01-19 4.285714 65.285714
3 2020-01-26 6.428571 68.857143
4 2020-02-02 4.000000 36.600000
value3
0 {984: 1, 920: 1, 853: 1, 660: 1, 101: 1}
1 {421: 1, 726: 1, 23: 1, 408: 1, 398: 1, 493: 1...
2 {176: 1, 209: 1, 180: 1, 566: 1, 280: 1, 570: ...
3 {49: 1, 113: 1, 327: 1, 777: 1, 59: 1, 301: 1,...
4 {113: 1, 983: 1, 181: 1, 239: 1, 839: 1}
Or use collections.Counter:
from collections import Counter
df = df.groupby(pd.Grouper(key='date', freq='W'))\
.agg({'value1': 'mean', 'value2': 'mean', 'value3': Counter})\
.reset_index()
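If you prefer the list-of-tuples format also mentioned in the question, the same pattern works; just convert the value counts differently (a sketch, following the answer's value3 convention):
df = df.groupby(pd.Grouper(key='date', freq='W'))\
       .agg({'value1': 'mean', 'value2': 'mean',
             'value3': lambda x: list(x.value_counts().items())})\
       .reset_index()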
| Aggregate by unique values & their counts using pandas | I have a df:
# create generic df with 1 date column and 2 value columns
df = pd.DataFrame({'date': pd.date_range('2020-01-01', '2020-01-31', freq='D'), \
'value1': np.random.randint(0, 10, 31), \
'value2': np.random.randint(0, 100, 31),\
'value3': np.random.randint(0, 1000, 31)})
I want to group by this df by date in W intervals, take the average of value2, count of value3 and distinct values of value1 & the count of those values in this or similar format :
{9:2, 4:1, 6:2, 5:1, 3:1}
[(9, 2), (4,1), (6,2), (5,1), (3,1)]
Basically this represent that in the first week there were 2 counts of value 9 in the column value1 and so on, similar to what df.groupby(pd.Grouper(key='date', freq='W')).value1.value_counts() returns, but trying
df.groupby(pd.Grouper(key='date', freq='W'))\
.agg({'value1': 'mean', 'value2': 'mean', 'value3': pd.Series.value_counts()})\
.reset_index()
Returns an error:
TypeError: value_counts() missing 1 required positional argument: 'self'
My desired output should look like this:
date value2 value3 value_1
2020-01-05 62.600000 5 {1:5, 3:2}
2020-01-12 30.000000 7 {2:2, 3:3, 6:1}
2020-01-19 34.428571 7 {2:2, 3:3, 6:1}
2020-01-26 51.428571 7 {2:1, 4:3, 8:1}
2020-02-02 48.000000 5 {2:1, 3:5, 7:1}
The column value1 as mentioned above can have a different format, such as a list with tuples of values.
| [
"Convert values to dictionaries with lambda function:\ndf = df.groupby(pd.Grouper(key='date', freq='W'))\\\n .agg({'value1': 'mean', 'value2': 'mean', \n 'value3': lambda x: x.value_counts().to_dict()})\\\n .reset_index()\nprint (df)\n date value1 value2 \\\n0 2020-01-05 3.200000 41.000000 \n1 2020-01-12 4.714286 58.714286 \n2 2020-01-19 4.285714 65.285714 \n3 2020-01-26 6.428571 68.857143 \n4 2020-02-02 4.000000 36.600000 \n\n value3 \n0 {984: 1, 920: 1, 853: 1, 660: 1, 101: 1} \n1 {421: 1, 726: 1, 23: 1, 408: 1, 398: 1, 493: 1... \n2 {176: 1, 209: 1, 180: 1, 566: 1, 280: 1, 570: ... \n3 {49: 1, 113: 1, 327: 1, 777: 1, 59: 1, 301: 1,... \n4 {113: 1, 983: 1, 181: 1, 239: 1, 839: 1} \n\nOr use collections.Counter:\nfrom collections import Counter\n \ndf = df.groupby(pd.Grouper(key='date', freq='W'))\\\n .agg({'value1': 'mean', 'value2': 'mean', 'value3': Counter})\\\n .reset_index()\n\n"
] | [
4
] | [] | [] | [
"group_by",
"pandas",
"python"
] | stackoverflow_0074639048_group_by_pandas_python.txt |
Q:
Python lightblue : "AttributeError: module 'lightblue' has no attribute 'finddevices'"
.
Hi everyone, hope you're doing well!
I am trying to write a Python (3.8.9) script so that my computer detects every Bluetooth device it can find and provides me with the list of devices it has found.
Then, I installed pybluez and lightblue with
pip3 install pybluez
and
pip3 install python-lightblue
python-lightblue has the version 1.0.3.
Now, here's my code :
import bluetooth
def scan():
print("Scanning for bluetooth devices:")
devices = bluetooth.discover_devices(lookup_names = True, lookup_class = True)
number_of_devices = len(devices)
print(number_of_devices," devices found")
for addr, name, device_class in devices:
print("\n")
print("Device:")
print("Device Name: %s" % (name))
When I run this, I get the following mistake :
Traceback (most recent call last):
File "bluetooth_detect.py", line 16, in <module>
scan()
File "bluetooth_detect.py", line 5, in scan
devices = bluetooth.discover_devices(lookup_names = True, lookup_class = True)
File "/Users/<my_name>/Library/Python/3.8/lib/python/site-packages/bluetooth/macos.py", line 12, in discover_devices
devices = lightblue.finddevices(getnames=lookup_names, length=duration)
AttributeError: module 'lightblue' has no attribute 'finddevices'
Yet, when I check the documentation on the Internet, the module lightblue does have the attribute finddevices, and I can't find any mention of this problem on another forum (maybe I didn't look up long enough ?)
Would anyone have any idea ?
Thank you in advance ! :-)
A:
I think the problem is the 3-year-old latest release in the pip registry.
Try installing from master on GitHub:
pip install git+https://github.com/pybluez/pybluez.git
This will solve your problem
| Python lightblue : "AttributeError: module 'lightblue' has no attribute 'finddevices'" | .
Hi everyone, hope you're doing well!
I am trying to write a Python (3.8.9) script so that my computer detects every Bluetooth device it can find and provides me with the list of devices it has found.
Then, I installed pybluez and lightblue with
pip3 install pybluez
and
pip3 install python-lightblue
python-lightblue has the version 1.0.3.
Now, here's my code :
import bluetooth
def scan():
print("Scanning for bluetooth devices:")
devices = bluetooth.discover_devices(lookup_names = True, lookup_class = True)
number_of_devices = len(devices)
print(number_of_devices," devices found")
for addr, name, device_class in devices:
print("\n")
print("Device:")
print("Device Name: %s" % (name))
When I run this, I get the following mistake :
Traceback (most recent call last):
File "bluetooth_detect.py", line 16, in <module>
scan()
File "bluetooth_detect.py", line 5, in scan
devices = bluetooth.discover_devices(lookup_names = True, lookup_class = True)
File "/Users/<my_name>/Library/Python/3.8/lib/python/site-packages/bluetooth/macos.py", line 12, in discover_devices
devices = lightblue.finddevices(getnames=lookup_names, length=duration)
AttributeError: module 'lightblue' has no attribute 'finddevices'
Yet, when I check the documentation on the Internet, the module lightblue does have the attribute finddevices, and I can't find any mention of this problem on another forum (maybe I didn't look up long enough ?)
Would anyone have any idea ?
Thank you in advance ! :-)
| [
"I think the problem is in 3-year old latest version in pip registry.\nTry to install from master from GitHub\n\npip install git+https://github.com/pybluez/pybluez.git\n\nThis will solve your problem\n"
] | [
0
] | [] | [] | [
"python"
] | stackoverflow_0071840588_python.txt |
Q:
Serious anaconda error :failed with repodata from current_repodata.json
I have a fatal problem using Anaconda.
When I use conda update or conda install,
it always raises "failed with repodata from current_repodata.json, will retry with next repodata source".
It then takes several hours and ultimately fails to resolve the packages.
Even I cannot do:
conda update --all OR conda update -n base conda
I don't know what happened to my server.
Error example below:
(base) [hanjg@h5 ~]$ conda update conda -c conda-canary
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Solving environment: failed with repodata from current_repodata.json, will retry with
next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
Solving environment: | failed
CondaError: KeyboardInterrupt(->It takes too much time so I interrupt)
My conda channel list: default (Last priority), conda-forge(Top priority), bioconda
Other server computers work well (it is not a problem with the internet or network authorization).
Best regards.
A:
I had the same issue and have been trying to fix for hours. Setting channel priority to false seems to be the thing that has worked best so far. In the end, to get things working I:
Deleted all traces of conda and did fresh install
Started with: conda config --set channel_priority false
Then: conda update conda
Followed by: conda update --all --yes
Then started installing my usual packages etc. (the exact command sequence is sketched below).
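Put together as a shell session (a sketch of the same sequence; it assumes a fresh Anaconda install as in step 1):
conda config --set channel_priority false
conda update conda
conda update --all --yes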
| Serious anaconda error :failed with repodata from current_repodata.json | I got fatal problem with using anaconda.
When I use conda update, conda install
Always arise failed with repodata from current_repodata.json, will retry with next repodata source.
And It take several "HOUR" then fail to search packages.
Even I cannot do:
conda update --all OR conda update -n base conda
I don't know what happened my server.
Error example below:
(base) [hanjg@h5 ~]$ conda update conda -c conda-canary
Collecting package metadata (current_repodata.json): done
Solving environment: failed with repodata from current_repodata.json, will retry with next repodata source.
Solving environment: failed with repodata from current_repodata.json, will retry with
next repodata source.
Collecting package metadata (repodata.json): done
Solving environment: failed
Solving environment: | failed
CondaError: KeyboardInterrupt(->It takes too much time so I interrupt)
My conda channel list: default (Last priority), conda-forge(Top priority), bioconda
other server computer works well(it is not problem with internet or network authorization.)
Best regards.
| [
"I had the same issue and have been trying to fix for hours. Setting channel priority to false seems to be the thing that has worked best so far. In the end, to get things working I:\n\nDeleted all traces of conda and did fresh install\nStarted with: conda config --set channel_priority false\nThen: conda update conda\nFollowed by: conda update --all --yes\nThen started installing my usual packages etc.\n\n"
] | [
0
] | [] | [] | [
"anaconda",
"python"
] | stackoverflow_0072422453_anaconda_python.txt |
Q:
How do I set a variable with the value being a random range, with variables set to have a random range?
I have this variable called "number" with the range being the variables "a" and "b". These range variables are ranges themselves.
from random import *
a = randint(1, 99)
b = randint(2, 100)
number = randint(a, b)
print(number)
When I run this code, I sometimes get an integer and sometimes this error:
Traceback (most recent call last):
File "main.py", line 6, in <module>
number = randint(a, b)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/random.py", line 248, in randint
return self.randrange(a, b+1)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/random.py", line 226, in randrange
raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (17, 6, -11)
A:
You need to ensure that the first parameter passed to randint is less than or equal to the second parameter. How about:
from random import randint
a = randint(1, 99)
b = randint(a, 100)
number = randint(a, b)
print(number)
...which is equivalent to:
from random import randint
number = randint(1, 100)
print(number)
A:
randint(a, b) has a condition.
a <= b
Since you are generating a and b as random range, sometimes a is greater than b. You need to adjust your a and b range so that a is always less than or equal to b.
a = randint(1, 50)
b = randint(50, 100)
A:
As your range for randint needs to consist of a lower and upper boundary, your script will occasionally break when b is randomly lower than a. To prevent this, make sure b cannot be lower than a. An example:
from random import *
a = randint(1, 99)
b = randint(a, 100)
number = randint(a, b)
print(number)
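If you really do want a and b drawn independently (rather than forcing b >= a as above), another sketch is to sort the two endpoints before calling randint:
from random import randint

a = randint(1, 99)
b = randint(2, 100)

lo, hi = sorted((a, b))  # ensure the lower bound comes first
number = randint(lo, hi)
print(number)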
| How do I set a variable with the value being a random range, with variables set to have a random range? | I have this variable called "number" with the range being the variables "a" and "b". These range variables are ranges themselves.
from random import *
a = randint(1, 99)
b = randint(2, 100)
number = randint(a, b)
print(number)
When I try to enter this code, I occasionally receive an integer or get this error:
Traceback (most recent call last):
File "main.py", line 6, in <module>
number = randint(a, b)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/random.py", line 248, in randint
return self.randrange(a, b+1)
File "/nix/store/2vm88xw7513h9pyjyafw32cps51b0ia1-python3-3.8.12/lib/python3.8/random.py", line 226, in randrange
raise ValueError("empty range for randrange() (%d, %d, %d)" % (istart, istop, width))
ValueError: empty range for randrange() (17, 6, -11)
| [
"You need to ensure that the first parameter passed to randint is less than or equal to the second parameter. How about:\nfrom random import randint\n\na = randint(1, 99)\nb = randint(a, 100)\nnumber = randint(a, b)\nprint(number)\n\n...which is equivalent to:\nfrom random import randint\n\nnumber = randint(1, 100)\nprint(number)\n\n",
"randint(a, b) has a condition.\na <= b\n\nSince you are generating a and b as random range, sometimes a is greater than b. You need to adjust your a and b range so that a is always less than or equal to b.\na = randint(1, 50)\nb = randint(50, 100)\n\n",
"As your range for randint needs to consist of a lower and upper boundary, your script will occasionally break when b is randomly lower than a. To prevent this, make sure b cannot be lower than a. An example:\nfrom random import *\n\na = randint(1, 99)\nb = randint(a, 100)\n\nnumber = randint(a, b)\nprint(number)\n\n"
] | [
3,
1,
0
] | [
"Using this it seems to do the job you asked.\nimport random\n\na = random.randint(1, 99)\nb = random.randint(2, 100)\n\nnumber = random.randint(a, b)\nprint(number)\n\nI don't have any error running this.\n"
] | [
-2
] | [
"module",
"python",
"python_3.x",
"random"
] | stackoverflow_0074638985_module_python_python_3.x_random.txt |
Q:
Date out of range pymongo
I am trying to retrieve a date from MongoDB, but it is out of range for Python's datetime.
I tried setting the min and max, but the problem still occurs. Can anyone help, please?
A:
I am filtering the data over a whole from/to date range; hope this will work:
from datetime import datetime, timedelta
import pymongo

client = pymongo.MongoClient("mongodb", 27017)
db = client["attendance"]
daily = db.daily

# parse the "YYYY-MM-DD" strings coming in on the request
start_date = datetime.strptime(request.json["from_date"], "%Y-%m-%d")
end_date = datetime.strptime(request.json["to_date"], "%Y-%m-%d") + timedelta(days=1)

# everything from the start of from_date up to (but excluding) the day after to_date
result = daily.find({'Date': {"$gte": start_date, "$lt": end_date}})
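Separately, if the error is the "year is out of range" one that appears when documents contain dates outside Python's datetime.min/datetime.max, newer PyMongo versions can decode those safely. A sketch; this assumes PyMongo >= 4.3, where the DatetimeConversion codec option was added:
from bson.codec_options import CodecOptions, DatetimeConversion

# DATETIME_AUTO returns a bson DatetimeMS (instead of raising) whenever the
# stored date cannot be represented as a Python datetime
opts = CodecOptions(datetime_conversion=DatetimeConversion.DATETIME_AUTO)
daily = db.get_collection("daily", codec_options=opts)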
| Date out of range pymongo | I am trying to retrive date from mongodb but it's out of the range date in python.
I tried setting the min and max but the problel still occurs. Can anyone help please ?
| [
"I am filtering data of whole one day, hope this will work\nfrom datetime import datetime, timedelta\nimport pymongo\nclient = pymongo.MongoClient(\"mongodb\", 27017)\ndb = client[\"attendance\"]\ndaily = db.daily\nfy,fm,fd=request.json[\"from_date\"].split(\"-\")\nfy,fm,fd=int(fy),int(fm),int(fd)\nty,tm,td=request.json[\"to_date\"].split(\"-\")\nty,tm,td=int(ty),int(tm),int(td)\nstart_date = datetime(fy,fm,fd,hour=0,minute=0,second=0)\nend_date = datetime(ty,tm,td-1,hour=23,minute=59,second=59) + timedelta(1)\nresult=daily.find({'Date':{\"$gte\":start_date, \"$lte\":start_date + timedelta(1)}})\n\n"
] | [
0
] | [] | [] | [
"mongodb",
"pymongo",
"python"
] | stackoverflow_0074638981_mongodb_pymongo_python.txt |
Q:
how to change the out_features of densenet121 model?
How to change the out_features of densenet121 model?
I am using the code below to train the model:
from torch.nn.modules.dropout import Dropout
class Densnet121(nn.Module):
    def __init__(self):
        super(Densnet121, self).__init__()
        self.cnn1 = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1)
        self.Densenet_121 = models.densenet121(pretrained=True)
        self.gap = AvgPool2d(kernel_size=2, stride=1, padding=1)
        self.bn1 = nn.BatchNorm2d(1024)
        self.do1 = nn.Dropout(0.25)
        self.linear = nn.Linear(256, 256)
        self.bn2 = nn.BatchNorm2d(256)
        self.do2 = nn.Dropout(0.25)
        self.output = nn.Linear(64 * 64 * 64, 2)
        self.act = nn.ReLU()

    def densenet(self):
        for param in self.Densenet_121.parameters():
            param.requires_grad = False
        self.Densenet_121.classifier = nn.Linear(1024, 1024)
        return self.Densenet_121

    def forward(self, x):
        img = self.act(self.cnn1(x))
        img = self.densenet(img)
        img = self.gap(img)
        img = self.bn1(img)
        img = self.do1(img)
        img = self.linear(img)
        img = self.bn2(img)
        img = self.do2(img)
        img = torch.flatten(img, 1)
        img = self.output(img)
        return img
When training this model, I face the following error:
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 64, 62, 62] to have 3 channels, but got 64 channels instead
A:
Your first conv layer outputs a tensor of shape (b, 64, h, w) while the following layer, the densenet model expects 3 channels. Hence the error that was raised:
"expected input [...] to have 3 channels, but got 64 channels instead"
Unfortunately, this value is hardcoded in the source of the Densenet class, see reference.
One workaround however is to overwrite the first convolutional layer after the densenet has been initialized. Something like this should work:
# First gather the existing conv layer's specs
conv = self.Densenet_121.features.conv0
kwargs = {k: getattr(conv, k) for k in
          ('out_channels', 'stride', 'kernel_size', 'padding')}
kwargs['bias'] = conv.bias is not None  # bias is a tensor/None here; nn.Conv2d expects a bool

# overwrite with identical specs but a new in_channels
self.Densenet_121.features.conv0 = nn.Conv2d(in_channels=64, **kwargs)
Alternatively, you can do:
w = self.Densenet_121.features.conv0.weight
w.data = torch.rand(len(w), 64, *w.shape[2:])
This replaces the underlying convolutional layer's weight without affecting its metadata (e.g. conv.in_channels remains equal to 3), which could have side effects. So I would recommend following the first approach.
| how to change the out_features of densenet121 model? | How to change the out_features of densenet121 model?
I am using the code below to train the model:
from torch.nn.modules.dropout import Dropout
class Densnet121(nn.Module):
def __init__(self):
super(Densnet121, self).__init__()
self.cnn1 = nn.Conv2d(in_channels=3 , out_channels=64 , kernel_size=3 , stride=1 )
self.Densenet_121 = models.densenet121(pretrained=True)
self.gap = AvgPool2d(kernel_size=2, stride=1, padding=1)
self.bn1 = nn.BatchNorm2d(1024)
self.do1 = nn.Dropout(0.25)
self.linear = nn.Linear(256,256)
self.bn2 = nn.BatchNorm2d(256)
self.do2 = nn.Dropout(0.25)
self.output = nn.Linear(64 * 64 * 64,2)
self.act = nn.ReLU()
def densenet(self):
for param in self.Densenet_121.parameters():
param.requires_grad = False
self.Densenet_121.classifier = nn.Linear(1024, 1024)
return self.Densenet_121
def forward(self, x):
img = self.act(self.cnn1(x))
img = self.densenet(img)
img = self.gap(img)
img = self.bn1(img)
img = self.do1(img)
img = self.linear(img)
img = self.bn2(img)
img = self.do2(img)
img = torch.flatten(img, 1)
img = self.output(img)
return img
When training this model, I face the following error:
RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 64, 62, 62] to have 3 channels, but got 64 channels instead
| [
"Your first conv layer outputs a tensor of shape (b, 64, h, w) while the following layer, the densenet model expects 3 channels. Hence the error that was raised:\n\n\"expected input [...] to have 3 channels, but got 64 channels instead\"\n\nUnfortunately, this value is hardcoded in the source of the Densenet class, see reference.\nOne workaround however is to overwrite the first convolutional layer after the densenet has been initialized. Something like this should work:\n# First gather the conv layer specs\nconv = self.Densenet_121.features.conv0\nkwargs = {k: getattr(conv, k) for k in \n ('out_channels', 'stride', 'kernel_size', 'padding', 'bias')}\n\n# overwrite with identical specs with new in_channels\nmodel.features.conv0 = nn.Conv2d(in_channels=64, **kwargs) \n\nAlternatively, you can do:\nw = model.features.conv0.weight\nw.data = torch.rand(len(w), 64, *w.shape[:2])\n\nWhich replaces the underlying convolutional layer weight without affecting its metadata (eg. conv.in_channels remains equal to 3), this could have side effects. So I would recommend following the first approach.\n"
] | [
1
] | [] | [] | [
"densenet",
"python",
"pytorch"
] | stackoverflow_0074633673_densenet_python_pytorch.txt |
Q:
Poetry and Pyenv versioning issue
Can someone explain to me what is going on here?
I'm trying to get pyenv and poetry to play nicely together. I am on an AWS instance of Ubuntu 20.04 which has Python 3.8.10 installed. (I have removed all traces of Python 2 from the system.) I would like to use Python 3.10 but I can't just upgrade to that (thank you very much, Amazon). So enter pyenv.
I made an empty project with the poetry new command and here is the pyproject.toml file.
[tool.poetry]
name = "test"
version = "0.1.0"
description = ""
authors = ["ken <[email protected]>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.10"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
I have 3.10.7 installed through pyenv. If I run poetry run python --version I get the following output.
The currently activated Python version 3.8.10 is not supported by the project (^3.10).
Trying to find and use a compatible version.
Using python3 (3.10.7)
Python 3.8.10
It finds and "uses" 3.10.7 but then reports 3.8.10? Huh?
If I then run poetry env use 3.10 and try again I get ...
Current Python version (3.8.10) is not allowed by the project (^3.10).
Please change python executable via the "env use" command.
... and it fails to run completely, i.e. no version is reported from the python command. How is my current Python version still 3.8.10? If I run python --version at the command line straight away (not through Poetry), I get 3.10.7. What is going on here?!
As a check if I run poetry env use system then I indeed get back to my first problem. :(
A:
Poetry can't handle the Python dependency for you. That is, it can't install the correct Python version for you; it can only handle Python package dependencies correctly for you. But it will check for the correct Python version.
Since Poetry itself depends on Python, and in this case that's an older version (3.8) than required (3.10), it likely gets confused, and can't work with this repository: Poetry doesn't know about your Python 3.10! (or at least, avoids making guesses.) The error message could be clearer (for example, "Poetry is installed with Python version 3.8, but this project requires 3.10. Please install Poetry for a working version 3.10"), but the error does hint at this problem.
Install Poetry for the correct Python version (Pyenv 3.10 here), and use that Poetry.
Note: when you start a project with poetry new or poetry init, Poetry will default to its default Python as main dependency. So initially, your Python dependency in the pyproject.toml file will have been 3.8. You then probably changed that yourself to 3.10. That is okay-ish, but Poetry didn't know about that (in fact, it can't change the Python dependency version). If you had started directly from Pyenv's 3.10 with, say, python -m poetry init, you would have been fine.
Additional remark: numerous issues on Poetry's GitHub suggest that this can work in ways (there are various issues of people running into this error message and using Pyenv, but responses suggest this can work), so try a search there.
In fact, for testing, it makes a lot of sense to be able to test this with, in this case, Python 3.10 and 3.11. While Poetry would only be installed for 3.10, 3.11 should work as well. At a guess, the difference is that you would be developing with 3.10 (and thus using Poetry's virtualenv), but only testing with 3.11, running some variant of python3.11 -m pip install . for that test. That would first download Poetry for 3.11, then build the rest of the project and run the tests in a temporary directory. Which is a different thing than what you are doing currently (given the question).
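A minimal command sequence that usually gets Pyenv and Poetry cooperating (a sketch; it assumes Poetry is installed per interpreter with pip rather than via the system Python, and that 3.10.7 is already installed in pyenv):
pyenv local 3.10.7                               # make 3.10 the interpreter for this project
python -m pip install poetry                     # install Poetry under that same interpreter
python -m poetry env use "$(pyenv which python)"
python -m poetry install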
| Poetry and Pyenv versioning issue | Can someone explain to me what is going on here?
I'm trying to get pyenv and poetry to place nice together. I am on an AWS instance of Ubuntu 20.04 which has python 3.8.10 installed. (I have removed all traces of python2 from the system). I would like to use python 3.10 but I can't just upgrade to that (thank you very much Amazon). So enter pyenv.
I made an empty project with the poetry new command and here is the pyproject.toml file.
[tool.poetry]
name = "test"
version = "0.1.0"
description = ""
authors = ["ken <[email protected]>"]
readme = "README.md"
[tool.poetry.dependencies]
python = "^3.10"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
I have 3.10.7 installed through pyenv. If I run poetry run python --version I get the following output.
The currently activated Python version 3.8.10 is not supported by the project (^3.10).
Trying to find and use a compatible version.
Using python3 (3.10.7)
Python 3.8.10
It finds and "uses" 3.10.7 but then reports 3.8.10? Huh?
If I then run poetry env use 3.10 and try again I get ...
Current Python version (3.8.10) is not allowed by the project (^3.10).
Please change python executable via the "env use" command.
... and it fails to run completely, i.e. no version reported from the python command. How is my current python version still 3.8.10. If I run python --version at the command-line straight away (not through poetry), I get 3.10.7. What is going on here?!
As a check if I run poetry env use system then I indeed get back to my first problem. :(
| [
"Poetry can't handle the Python dependency for you. That is, it can't install the correct Python version for you; it can only handle Python package dependencies correctly for you. But it will check for the correct Python version.\nSince Poetry itself depends on Python, and in this case that's an older version (3.8) than required (3.10), it likely gets confused, and can't work with this repository: Poetry doesn't know about your Python 3.10! (or at least, avoids making guesses.) The error message could be clearer (for example, \"Poetry is installed with Python version 3.8, but this project requires 3.10. Please install Poetry for a working version 3.10\"), but the error does hint at this problem.\nInstall Poetry for the correct Python version (Pyenv 3.10 here), and use that Poetry.\nNote: when you start a project with poetry new or poetry init, Poetry will default to its default Python as main dependency. So initially, your Python dependency in the pyproject.toml file will have been 3.8. You then probably changed that yourself to 3.10. That is okay-ish, but Poetry didn't know about that (in fact, it can't change the Python dependency version). If you had started directly from Pyenv's 3.10 with, say, python -m poetry init, you would have been fine.\n\nAdditional remark: numerous issues on Poetry's GitHub suggest that this can work in ways (there are various issues of people running into this error message and using Pyenv, but responses suggest this can work), so try a search there.\nIn fact, for testing, it makes a lot of sense to be able to test this with, in this case, Python 3.10 and 3.11. While Poetry would only be installed for 3.10, 3.11 should be to work as well. At a guess here, the difference is that you are developing with 3.11 (and thus using Poetry v-env), but only testing with 3.11, running some variant of python3.11 -m pip install . for that test. That would first download Poetry for 3.11, then build the rest of the project, and run the tests, in a temporary directory. Which is a different thing than what you are doing currently (given the question).\n"
] | [
0
] | [
"You have python 3.10 install when is great, but the system version of python still exist. What you have to do now is to switch the python version you have just installed.\nyou can run\npyenv global 3.10 this will switch to the 3.10 version you just installed\nAfter that you can run your poetry command and it will work since your pyproject.toml specifies that you need to use the python3.10\nIn the future if you need to use another version of python install it with pyenv install 3.x and the switch to it with pyenv global 3.x\nYou can also use local instead of global if you're in the working directory and don't want to make the version your global python version\n"
] | [
-1
] | [
"pyenv",
"python",
"python_poetry"
] | stackoverflow_0074635227_pyenv_python_python_poetry.txt |
Q:
how to modify grouped data in pandas
I would like to modify grouped data in pandas. I wrote a short piece of code that doesn't work: unfortunately, outside of the loop, when I use gr.get_group('Audi') the data remains unchanged. How do I modify grouped dataframes, and how do I get back from grouped data to a dataframe later?
import pandas as pd
import numpy as np
d = {'car' : ["Audi", "Audi", "Audi", "BMW", "BMW", "BMW", "FIAT", "FIAT", "FIAT", "FIAT"],
'year' : [2000, 2001, 1995, 1992, 2003, 2003, 2011, 1982, 1997, 2002]}
df = pd.DataFrame.from_dict(d)
df['new'] = np.nan
gr = df.groupby('car')
for key, val in gr:
    val.loc[val['year']<2000, 'new'] = f'new {key}'
gr.get_group('car')
I would like to use this method because in each dataframe I want to use a different method to set the new column
for example for Audi it will usually be adding a variable, while for BMW I want to use the map function
for key, val in gr:
    if key == 'Audi':
        val.loc[val['year']<2000, 'new'] = f'new {key}'
    elif key == 'BMW':
        pass
        # here another method
    elif key == 'FIAT':
        pass
        # here another method
    else:
        val.loc[val['year']<2000, 'new'] = 'UNKNOW'
At the end I would like to get a table-like dataframe, but with the column `new` filled in.
A:
Try to pd.concat the val from each loop iteration onto df_new, like below:
import pandas as pd
import numpy as np
d = {'car' : ["Audi", "Audi", "Audi", "BMW", "BMW", "BMW", "FIAT", "FIAT", "FIAT", "FIAT"],
'year' : [2000, 2001, 1995, 1992, 2003, 2003, 2011, 1982, 1997, 2002]}
df = pd.DataFrame.from_dict(d)
df['new'] = np.nan
df_new = pd.DataFrame()
gr = df.groupby('car')
for key, val in gr:
    print(key, val)
    if key == 'Audi':
        val.loc[val['year']<2000, 'new'] = f'new {key}'
    elif key == 'BMW':
        pass
        # here another method
    elif key == 'FIAT':
        pass  # here another method
    else:
        val.loc[val['year']<2000, 'new'] = 'UNKNOW'
    df_new = pd.concat([df_new, val])
Probably you can also do this with df.itertuples or some other method that I am currently not aware of.
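As an aside, if the per-group logic can be expressed as a value per car, a loop-free sketch with a mapping avoids the regroup/concat step entirely. The label values below are illustrative placeholders:
labels = {'Audi': 'new Audi', 'BMW': 'bmw-specific value', 'FIAT': 'fiat-specific value'}
mask = df['year'] < 2000
df.loc[mask, 'new'] = df.loc[mask, 'car'].map(labels).fillna('UNKNOW')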
| how to modify grouped data in pandas | i would like to modify grouped data in pandas. I wrote a shortcode that doesn't work. unfortunately outside of the loop when I use gr.get_group('Audi') the data remains unchanged. How to modify grouped daraframes and how to return from grouped data to dataframes later.
import pandas as pd
import numpy as np
d = {'car' : ["Audi", "Audi", "Audi", "BMW", "BMW", "BMW", "FIAT", "FIAT", "FIAT", "FIAT"],
'year' : [2000, 2001, 1995, 1992, 2003, 2003, 2011, 1982, 1997, 2002]}
df = pd.DataFrame.from_dict(d)
df['new'] = np.nan
gr = df.groupby('car')
for key, val in gr:
val.loc[val['year']<2000, 'new'] = f'new {key}'
gr.get_group('car')
I would like to use this method because in each dataframe I want to use a different method to set the new column
for example for Audi it will usually be adding a variable, while for BMW I want to use the map function
for key, val in gr:
if key == 'Audi':
val.loc[val['year']<2000, 'new'] = f'new {key}'
elif key == 'BMW':
pass
# here another method
elif key == 'FIAT'
# here another metod
else:
val.loc[val['year']<2000, 'new'] = 'UNKNOW'
at the end i would like to get a table like dataframe but with filled column `new
| [
"Try to pd.concat the val in each for loop to with the df_new like below\nimport pandas as pd\nimport numpy as np\n\nd = {'car' : [\"Audi\", \"Audi\", \"Audi\", \"BMW\", \"BMW\", \"BMW\", \"FIAT\", \"FIAT\", \"FIAT\", \"FIAT\"],\n 'year' : [2000, 2001, 1995, 1992, 2003, 2003, 2011, 1982, 1997, 2002]}\n\ndf = pd.DataFrame.from_dict(d)\ndf['new'] = np.nan\ndf_new = pd.DataFrame()\ngr = df.groupby('car')\n\nfor key, val in gr:\n print(key,val)\n if key == 'Audi':\n val.loc[val['year']<2000, 'new'] = f'new {key}'\n elif key == 'BMW':\n pass\n # here another method\n elif key == 'FIAT':\n pass# here another metod\n else:\n val.loc[val['year']<2000, 'new'] = 'UNKNOW'\n df_new = pd.concat([df_new, val])\n\nProbably you can also do this with df.itertuples or some other method which I am currently not aware.\n"
] | [
0
] | [] | [] | [
"pandas",
"python"
] | stackoverflow_0074639029_pandas_python.txt |
Q:
using arrays of different sizes within a function
I'm trying to write a function that will take a set of arguments from the rows of a 2d array and use them in conjunction with all the elements of a longer 1d array:
x = np.linspace(-10,10,100)
abc = np.array([[1,2,1],
[1,3,5],
[12.5,-6.4,-1.25],
[4,2,1]])
def quadratic(a, b, c, x):
return a*(x ** 2) + b*x + c
y = quadratic(abc[:,0], abc[:,1], abc[:,2], x)
But this returns:
operands could not be broadcast together with shapes (4,) (100,)
When I manually enter the a, b and c values, I get a 100-element 1d array, so I would expect this to return a (4,100) array. What gives?
A:
In numpy the operation x * y is performed element wise where one or both values can be expanded to make them compatible. This is called broadcasting.
In the example above the operands end up with incompatible shapes, (4,) and (100,), hence the error.
If you want to treat this as a matrix product, build the (3, 100) matrix [x**2, x, 1] and use dot instead:
import numpy as np

x = np.linspace(-10,10,100)
abc = np.array([[1,2,1],
                [1,3,5],
                [12.5,-6.4,-1.25],
                [4,2,1]])

# (4, 3) @ (3, 100) -> (4, 100)
y = np.dot(abc, np.vstack([x ** 2, x, np.ones_like(x)]))
A:
abc[:, i] is of shape (4). You need it to be of shape (4, 1) to broadcast against x of shape (100) to produce your desired (4, 100) output---so you will need to do abc[:, i, None] to add that extra dimension.
The following should work:
y = quadratic(abc[:, 0, None], abc[:, 1, None], abc[:, 2, None], x)
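A quick way to sanity-check the broadcasting (equivalent to the call above, with the expected shape shown):
a, b, c = (abc[:, i][:, None] for i in range(3))  # each coefficient column becomes (4, 1)
y = a * x ** 2 + b * x + c
print(y.shape)  # (4, 100)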
| using arrays of different sizes within a function | I'm trying to write a function that will take a set of arguments from the rows of a 2d array and use them in conjunction with all the elements of a longer 1d array:
x = np.linspace(-10,10,100)
abc = np.array([[1,2,1],
[1,3,5],
[12.5,-6.4,-1.25],
[4,2,1]])
def quadratic(a, b, c, x):
return a*(x ** 2) + b*x + c
y = quadratic(abc[:,0], abc[:,1], abc[:,2], x)
But this returns:
operands could not be broadcast together with shapes (4,) (100,)
When I manually enter the a, b and c values, I get a 100-element 1d array, so I would expect this to return a (4,100) array. What gives?
| [
"In numpy the operation x * y is performed element wise where one or both values can be expanded to make them compatible. This is called broadcasting.\nIn the example above your arrays have different dimensions (100,0) and (4,3), hence the error.\nWhen multiplying matrixes you should be using dot instead.\nimport numpy as np\n\nx = np.linspace(-10,10,100)\nabc = np.array([[1,2,1],\n [1,3,5],\n [12.5,-6.4,-1.25],\n [4,2,1]])\n\nnp.dot(x, abc[:,0])\n\n",
"abc[:, i] is of shape (4). You need it to be of shape (4, 1) to broadcast against x of shape (100) to produce your desired (4, 100) output---so you will need to do abc[:, i, None] to add that extra dimension.\nThe following should work:\ny = quadratic(abc[:, 0, None], abc[:, 1, None], abc[:, 2, None], x)\n\n"
] | [
1,
1
] | [] | [] | [
"arrays",
"broadcast",
"function",
"numpy",
"python"
] | stackoverflow_0074637231_arrays_broadcast_function_numpy_python.txt |
Q:
Cannot import name 'available_if' from 'sklearn.utils.metaestimators'
While importing "from imblearn.over_sampling import SMOTE", getting import error. Please check and help.
I tried upgrading sklearn, but the upgrade was undone with 'OSError'.
Firsty installed imbalance-learn through pip.
!pip install -U imbalanced-learn
Using jupyter notebook
Windows 10
sklearn version - 0.24.1
numpy version - 1.19.5
--------------------------------------------------------------------------
ImportError Traceback (most recent call last)
in
----> 1 from imblearn.over_sampling import SMOTE
~\anaconda3\lib\site-packages\imblearn\__init__.py in <module>
35 import types
36
---> 37 from . import combine
38 from . import ensemble
39 from . import exceptions
~\anaconda3\lib\site-packages\imblearn\combine\__init__.py in <module>
3 """
4
----> 5 from ._smote_enn import SMOTEENN
6 from ._smote_tomek import SMOTETomek
7
~\anaconda3\lib\site-packages\imblearn\combine\_smote_enn.py in <module>
8 from sklearn.utils import check_X_y
9
---> 10 from ..base import BaseSampler
11 from ..over_sampling import SMOTE
12 from ..over_sampling.base import BaseOverSampler
~\anaconda3\lib\site-packages\imblearn\base.py in <module>
13 from sklearn.utils.multiclass import check_classification_targets
14
---> 15 from .utils import check_sampling_strategy, check_target_type
16 from .utils._validation import ArraysTransformer
17 from .utils._validation import _deprecate_positional_args
~\anaconda3\lib\site-packages\imblearn\utils\__init__.py in <module>
5 from ._docstring import Substitution
6
----> 7 from ._validation import check_neighbors_object
8 from ._validation import check_target_type
9 from ._validation import check_sampling_strategy
~\anaconda3\lib\site-packages\imblearn\utils\_validation.py in <module>
14 from sklearn.base import clone
15 from sklearn.neighbors._base import KNeighborsMixin
---> 16 from sklearn.neighbors import NearestNeighbors
17 from sklearn.utils import column_or_1d
18 from sklearn.utils.multiclass import type_of_target
~\anaconda3\lib\site-packages\sklearn\neighbors\__init__.py in <module>
14 from ._nearest_centroid import NearestCentroid
15 from ._kde import KernelDensity
---> 16 from ._lof import LocalOutlierFactor
17 from ._nca import NeighborhoodComponentsAnalysis
18 from ._base import VALID_METRICS, VALID_METRICS_SPARSE
~\anaconda3\lib\site-packages\sklearn\neighbors\_lof.py in <module>
10 from ..base import OutlierMixin
11
---> 12 from ..utils.metaestimators import available_if
13 from ..utils.validation import check_is_fitted
14 from ..utils import check_array
ImportError: cannot import name 'available_if' from 'sklearn.utils.metaestimators'
(C:\Users\dks_m\anaconda3\lib\site-packages\sklearn\utils\metaestimators.py)
A:
If in Jupyter, restart the kernel. This fixed it!
A:
I believe the issue is with python versioning of scikit-learn. I was able to resolve by reinstalling the Python3 version:
pip uninstall scikit-learn -y
pip3 install scikit-learn
Remember to restart terminal/notebook after package updates.
This gives me scikit-learn v1.0.2 which resolves error in Python3
A:
Try using anaconda prompt for installation.
It works for me.
A:
That usually happens when 2 different versions of packages do not match. If you are using the jupyter notebook, restarting your environment is going to solve your issue.
A:
Good day all. What helped me is installing pycaret==2.3.10 and scikit-learn==0.23.2 at the same time. These two versions are compatible and everything works fine. I installed scikit-learn using conda, as the older versions are not available through pip, and I installed PyCaret using pip3. I hope this helps everyone who has struggled to get this working like I did.
A:
If using databricks :
Go-to: view -> view spark ui -> Libraries -> install new -> Library Source -> PyPi -> add package name : imbalanced-learn -> install ->
Restart the cluster
// Do not install from active notebook using cli command.
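Whichever fix you apply, a quick way to confirm the two packages are in sync afterwards (a sketch; run in a fresh terminal or restarted kernel):
pip install -U scikit-learn imbalanced-learn
python -c "import sklearn, imblearn; print(sklearn.__version__, imblearn.__version__)"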
| Cannot import name 'available_if' from 'sklearn.utils.metaestimators' | While importing "from imblearn.over_sampling import SMOTE", getting import error. Please check and help.
I tried upgrading sklearn, but the upgrade was undone with 'OSError'.
Firsty installed imbalance-learn through pip.
!pip install -U imbalanced-learn
Using jupyter notebook
Windows 10
sklearn version - 0.24.1
numpy version - 1.19.5
--------------------------------------------------------------------------
ImportError Traceback (most recent call last)
in
----> 1 from imblearn.over_sampling import SMOTE
~\anaconda3\lib\site-packages\imblearn_init_.py in
35 import types
36
---> 37 from . import combine
38 from . import ensemble
39 from . import exceptions
~\anaconda3\lib\site-packages\imblearn\combine_init_.py in
3 """
4
----> 5 from ._smote_enn import SMOTEENN
6 from ._smote_tomek import SMOTETomek
7
~\anaconda3\lib\site-packages\imblearn\combine_smote_enn.py in
8 from sklearn.utils import check_X_y
9
---> 10 from ..base import BaseSampler
11 from ..over_sampling import SMOTE
12 from ..over_sampling.base import BaseOverSampler
~\anaconda3\lib\site-packages\imblearn\base.py in
13 from sklearn.utils.multiclass import check_classification_targets
14
---> 15 from .utils import check_sampling_strategy, check_target_type
16 from .utils._validation import ArraysTransformer
17 from .utils._validation import _deprecate_positional_args
~\anaconda3\lib\site-packages\imblearn\utils_init_.py in
5 from ._docstring import Substitution
6
----> 7 from ._validation import check_neighbors_object
8 from ._validation import check_target_type
9 from ._validation import check_sampling_strategy
~\anaconda3\lib\site-packages\imblearn\utils_validation.py in
14 from sklearn.base import clone
15 from sklearn.neighbors._base import KNeighborsMixin
---> 16 from sklearn.neighbors import NearestNeighbors
17 from sklearn.utils import column_or_1d
18 from sklearn.utils.multiclass import type_of_target
~\anaconda3\lib\site-packages\sklearn\neighbors_init_.py in
14 from ._nearest_centroid import NearestCentroid
15 from ._kde import KernelDensity
---> 16 from ._lof import LocalOutlierFactor
17 from ._nca import NeighborhoodComponentsAnalysis
18 from ._base import VALID_METRICS, VALID_METRICS_SPARSE
~\anaconda3\lib\site-packages\sklearn\neighbors_lof.py in
10 from ..base import OutlierMixin
11
---> 12 from ..utils.metaestimators import available_if
13 from ..utils.validation import check_is_fitted
14 from ..utils import check_array
ImportError: cannot import name 'available_if' from 'sklearn.utils.metaestimators'
(C:\Users\dks_m\anaconda3\lib\site-packages\sklearn\utils\metaestimators.py)
| [
"IF in jupyter , restart the kernel.This fixed!\n",
"I believe the issue is with python versioning of scikit-learn. I was able to resolve by reinstalling the Python3 version:\npip uninstall scikit-learn -y\n\npip3 install scikit-learn \n\nRemember to restart terminal/notebook after package updates.\nThis gives me scikit-learn v1.0.2 which resolves error in Python3\n",
"Try using anaconda prompt for installation.\nIt works for me.\n",
"That usually happens when 2 different versions of packages do not match. If you are using the jupyter notebook, restarting your environment is going to solve your issue.\n",
"Good day all. What helped me is installing pycaret=='2.3.10 ' and scikit-learn='0.23.2' at the same time. These two version are compatible and all works fine. I installed scikit-learn using conda as the older versions are not available through pip, and I installed Pycaret using pip3. I hope this helps all who have struggled to get this working like I did.\n",
"If using databricks :\nGo-to: view -> view spark ui -> Libraries -> install new -> Library Source -> PyPi -> add package name : imbalanced-learn -> install ->\nRestart the cluster\n// Do not install from active notebook using cli command.\n"
] | [
7,
2,
0,
0,
0,
0
] | [] | [] | [
"imbalanced_data",
"imblearn",
"jupyter_notebook",
"python",
"smote"
] | stackoverflow_0069602057_imbalanced_data_imblearn_jupyter_notebook_python_smote.txt |
Q:
Python DataFrames - Help needed with creating a new column based on several conditionals
I have a challenges DataFrame from the Great British Baking Show. Feel free to download the dataset:
pd.read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2022/2022-10-25/challenges.csv")
I've cleaned up the table and now have columns of series (1 through 10), episode (6 through 10), baker (names of each baker), and result (what happened to the baker each week (eliminated vs still on the show)). I am looking for a solution that allows me to add a new column called final_score that will list the final placement of each baker for each series.
In english what I am trying to do is:
Count the unique number of bakers per a series.
For each series,
for each episode,
if result == 'OUT',
add a column to the DF that records the baker's final score. The first score from each season will be equal to the count of bakers from step 1. I will then subtract the total baker count by 1.
As an example, the number of bakers in season 1 is 10. In episode 1, both Lea and Mark were eliminated, so I want 'final_score' to read 10 for both of them. In episode 2, both Annetha and Louise were eliminated, so I want their score to read 8.
I've spent most of the day on this problem and I'm fairly stuck. I've tried window functions, apply functions, and list comprehensions, but the closest I've gotten is pasted below. With attempt 1, I know the problem is at: if df.result =='OUT':. I understand that this is a Series, but I've tried .result.items(), result.all(), result.any(), if df.loc[df.result] == 'OUT': but nothing seems to work.
Attempt 1
def final_score(df):
    #count the number of bakers per season
    baker_count = df.groupby('series')['baker'].nunique()
    #for each season
    for s in df.series:
        #create a interable that counts the number of bakers that have been eliminated. Start at 0
        bakers_out = 0
        bakers_remaining = baker_count[int(s)]
        #for each season
        for e in df.episode:
            #does result say OUT for each contestant?
            if df.result =='OUT':
                df['final_score'] = bakers_remaining
                #if so, then we'll add +1 to our bakers_out iterator.
                bakers_out +=1
                #set the final score category to our baker_count iterator
                df['final_score'] = bakers_remaining
                #subtract the number of bakers left by the amount we just lost
                bakers_remaining -= bakers_out
            else:
                next
    return df
Attempt 2 wasn't about creating a new dataframe but rather about trying to troubleshoot this problem and print my desired output to the console. This is pretty close, but I want the final result to use dense scoring, so the two bakers that got out in series 1, episode 1 should both end up in 10th place, and the two bakers that got out the following week should both show 8th place.
baker_count = df.groupby('series')['baker'].nunique()
#for each series
for s in df.series.unique():
    bakers_out = 0
    bakers_remaining = baker_count[int(s)]
    #for each episode
    for e in df.episode.unique():
        #create a list of results
        data_results = list(df[(df.series==s) & (df.episode==e)].result)
        for dr in data_results:
            if dr =='OUT':
                bakers_out += 1
                print (s,e,dr,';final place:',bakers_remaining,';bakers out:',bakers_out)
            else:
                print (s,e,dr,'--')
        bakers_remaining -= 1
Snippet of the result
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 OUT ;final place: 10 ;bakers out: 1
1.0 1.0 OUT ;final place: 10 ;bakers out: 2
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 OUT ;final place: 9 ;bakers out: 3
1.0 2.0 OUT ;final place: 9 ;bakers out: 4
Thanks everyone and please let me know what other information I should provide.
A:
You could try the following (df your dataframe):
m = df["result"].eq("OUT")
df["final_score"] = (
df.groupby("series")["baker"].transform("nunique")
- df[m].groupby("series")["baker"].cumcount()
)
df["final_score"] = df[m].groupby(["series", "episode"])["final_score"].transform("max")
Result for the first 2 seasons (not all columns):
print(df[m & df["series"].isin([1, 2])])
series episode baker result final_score
8 1 1 Lea OUT 10.0
9 1 1 Mark OUT 10.0
16 1 2 Annetha OUT 8.0
17 1 2 Louise OUT 8.0
25 1 3 Jonathan OUT 6.0
34 1 4 David OUT 5.0
43 1 5 Jasminder OUT 4.0
70 2 1 Keith OUT 12.0
81 2 2 Simon OUT 11.0
91 2 3 Ian OUT 10.0
92 2 3 Urvashi OUT 10.0
101 2 4 Ben OUT 8.0
112 2 5 Jason OUT 7.0
113 2 5 Robert OUT 7.0
123 2 6 Yasmin OUT 5.0
135 2 7 Janet OUT 4.0
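If you also want every row for a given baker (not just their OUT row) to carry that final placement, one more transform should do it (a sketch building directly on the code above; bakers who are never OUT, i.e. the finalists, will remain NaN and can be filled separately):
df["final_score"] = df.groupby(["series", "baker"])["final_score"].transform("max")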
| Python DataFrames - Help needed with creating a new column based on several conditionals | I have a challenges DataFrame from the Great British Baking Show. Feel free to download the dataset:
pd.read_csv("https://raw.githubusercontent.com/rfordatascience/tidytuesday/master/data/2022/2022-10-25/challenges.csv")
I've cleaned up the table and now have columns of series (1 through 10), episode (6 through 10), baker (names of each baker), and result (what happened to the baker each week (eliminated vs still on the show)). I am looking for a solution that allows me to add a new column called final_score that will list the final placement of each baker for each series.
In english what I am trying to do is:
Count the unique number of bakers per a series.
For each series,
for each episode,
if result == 'OUT',
add a column to the DF that records the baker's final score. The first score from each season will be equal to the count of bakers from step 1. I will then subtract the total baker count by 1.
As am example, the number of bakers from season 1 is 10. In episode 1, both Lea and Mark were eliminated so I want 'final_score' to read 10 for both of them. In episode 2, both Annetha and Louise were eliminated so I want their score to read 8.
I've spent most of the day on this problem and I'm fairly stuck. I've tried window functions, apply functions, list comprehension but the closest I've gotten is pasted below. With attempt 1, I know the problem is at: if df.result =='OUT':. I understand that this is a series but I've tried .result.items(), result.all(), result.any(), if df.loc[df.result] == 'OUT': but nothing seems to work.
Attempt 1
def final_score(df):
#count the number of bakers per season
baker_count = df.groupby('series')['baker'].nunique()
#for each season
for s in df.series:
#create a interable that counts the number of bakers that have been eliminated. Start at 0
bakers_out = 0
bakers_remaining = baker_count[int(s)]
#for each season
for e in df.episode:
#does result say OUT for each contestant?
if df.result =='OUT':
df['final_score'] = bakers_remaining
#if so, then we'll add +1 to our bakers_out iterator.
bakers_out +=1
#set the final score category to our baker_count iterator
df['final_score'] = bakers_remaining
#subtract the number of bakers left by the amount we just lost
bakers_remaining -= bakers_out
else:
next
return df
Attempt 2 wasn't about me creating a new dataframe but rather trying to trouble shoot this problem and print out my desired output to the console. This is pretty close but I want the final result to be a dense scoring so the two bakers that got out in series 1, episode 1 should both end up in 10th place, and the two bakers that got out the following week should both show 8th place.
baker_count = df.groupby('series')['baker'].nunique()
#for each series
for s in df.series.unique():
bakers_out = 0
bakers_remaining = baker_count[int(s)]
#for each episode
for e in df.episode.unique():
#create a list of results
data_results = list(df[(df.series==s) & (df.episode==e)].result)
for dr in data_results:
if dr =='OUT':
bakers_out += 1
print (s,e,dr,';final place:',bakers_remaining,';bakers out:',bakers_out)
else:
print (s,e,dr,'--')
bakers_remaining -= 1
Snippet of the result
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 IN --
1.0 1.0 OUT ;final place: 10 ;bakers out: 1
1.0 1.0 OUT ;final place: 10 ;bakers out: 2
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 IN --
1.0 2.0 OUT ;final place: 9 ;bakers out: 3
1.0 2.0 OUT ;final place: 9 ;bakers out: 4
Thanks everyone and please let me know what other information I should provide.
| [
"You could try the following (df your dataframe):\nm = df[\"result\"].eq(\"OUT\")\ndf[\"final_score\"] = (\n df.groupby(\"series\")[\"baker\"].transform(\"nunique\")\n - df[m].groupby(\"series\")[\"baker\"].cumcount()\n)\ndf[\"final_score\"] = df[m].groupby([\"series\", \"episode\"])[\"final_score\"].transform(\"max\")\n\nResult for the first 2 seasons (not all columns):\nprint(df[m & df[\"series\"].isin([1, 2])])\n\n series episode baker result final_score\n8 1 1 Lea OUT 10.0\n9 1 1 Mark OUT 10.0\n16 1 2 Annetha OUT 8.0\n17 1 2 Louise OUT 8.0\n25 1 3 Jonathan OUT 6.0\n34 1 4 David OUT 5.0\n43 1 5 Jasminder OUT 4.0\n70 2 1 Keith OUT 12.0\n81 2 2 Simon OUT 11.0\n91 2 3 Ian OUT 10.0\n92 2 3 Urvashi OUT 10.0\n101 2 4 Ben OUT 8.0\n112 2 5 Jason OUT 7.0\n113 2 5 Robert OUT 7.0\n123 2 6 Yasmin OUT 5.0\n135 2 7 Janet OUT 4.0\n\n"
] | [
0
] | [] | [] | [
"dataframe",
"loops",
"pandas",
"python"
] | stackoverflow_0074625061_dataframe_loops_pandas_python.txt |
Q:
Better way of excuting a always running app?
Good day.
I have a question about the correct way of implemting code that needs to run every 5 minutes.
Is it better to:
A - Inside the code have a timeloop that starts after 5 minutes, and
executes.
B - Have a script that runs every 5 minutes and executes your
application.
C - Other?
Background: this will be running on Windows Server 2022, to send mail every 5 minutes if certain conditions are met.
Thank you.
A:
B.) What you want for option B is the Windows Task Scheduler; it comes with permission management etc. A Windows Server admin can tell you more about it.
Why?
Your app might have memory leaks (well, Python not so much), and it runs more stably when it's restarted each time.
An app that sleeps still uses memory, which may be swapped to disk and read back when it awakes. If the app terminates, the memory will be freed and not be swapped to disk.
Your app may crash and no longer do what you expect to be done at every interval
The user may (accidentally?) terminate your app with the same effect
Why not / when not?
If the time to initialize the app (e.g. reading data from database or disk) takes a long time (especially longer than the sleep time).
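A minimal sketch of option B: a script that performs a single check and exits, registered with Task Scheduler. The paths, addresses and task name below are made up, and a reachable SMTP server is assumed.

# check_mail.py -- does one check, sends mail if needed, then exits
import smtplib
from email.message import EmailMessage

def condition_met() -> bool:
    # replace with the real check (disk space, service status, ...)
    return False

if condition_met():
    msg = EmailMessage()
    msg["Subject"] = "Alert"
    msg["From"] = "monitor@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content("Condition was met.")
    with smtplib.SMTP("localhost") as smtp:   # assumes a local SMTP relay
        smtp.send_message(msg)

# Register it with Task Scheduler (run once from an elevated prompt):
#   schtasks /Create /TN "CheckMail" /SC MINUTE /MO 5 /TR "python C:\scripts\check_mail.py"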
| Better way of executing an always-running app? | Good day.
I have a question about the correct way of implementing code that needs to run every 5 minutes.
Is it better to:
A - Inside the code have a timeloop that starts after 5 minutes, and
executes.
B - Have a script that runs every 5 minutes and executes your
application.
C - Other?
Background: this will be running on Windows Server 2022, to send mail every 5 minutes if certain conditions are met.
Thank you.
| [
"B.) The script is named Windows Task Scheduler and comes with permission management etc.. A Windows server admin can tell you about it.\nWhy?\n\nYour app might have memory leaks (well, Python not so much) and it runs more stable when it's restarted every time.\nAn app that sleeps still uses memory, which may be swapped to disk and read back when it awakes. If the app terminates, the memory will be freed and not be swapped to disk.\nYour app may crash and no longer do what you expect to be done at every interval\nThe user may (accidentally?) terminate your app with the same effect\n\nWhy not / when not?\n\nIf the time to initialize the app (e.g. reading data from database or disk) takes a long time (especially longer than the sleep time).\n\n"
] | [
3
] | [] | [] | [
"python"
] | stackoverflow_0074639118_python.txt |
Q:
AttributeError: 'DataFrameWriter' object has no attribute 'start'
I am trying to write code using Kafka, Python and Spark.
The problem statement is: read data from XML; the consumed data will be in binary format and has to be stored in a DataFrame.
I am getting below error:
Error:
File "C:/Users/HP/PycharmProjects/xml_streaming/ConS.py", line 55, in
.format("console")
AttributeError: 'DataFrameWriter' object has no attribute 'start'
Here is my code for reference:
#import *
# Set spark environments
#os.environ['PYSPARK_PYTHON'] = <PATH>
#os.environ['PYSPARK_DRIVER_PYTHON'] = <PATH>
spark = SparkSession\
.builder\
.master("local[1]")\
.appName("Consumer")\
.getOrCreate()
topic_Name = 'XML_File_Processing3'
consumer = kafka.KafkaConsumer(topic_Name, bootstrap_servers=['localhost:9092'], auto_offset_reset='latest')
kafka_df = spark\
.read \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("kafka.security.protocol", "SSL") \
.option("failOnDataLoss", "false") \
.option("subscribe", topic_Name) \
.load()
#.option("startingOffsets", "earliest") \
print("Loaded to DataFrame kafka_df")
kafka_df.printSchema()
new_df = kafka_df.selectExpr("CAST(value AS STRING)")
schema = ArrayType(StructType()\
.add("book_id", IntegerType())\
.add("author", StringType())\
.add("title", StringType())\
.add("genre",StringType())\
.add("price",IntegerType())\
.add("publish_date", IntegerType())\
.add("description", StringType()))
book_DF = new_df.select(from_json(col("value"), schema).alias("dataf")) #.('data')).select("data.*")
book_DF.printSchema()
#book_DF.select("dataf.author").show()
book_DF.write\
.format("console")\
.start()
A:
I don't have a lot of experience with kafka, but at the end you're using the start() method on the result of book_DF.write.format("console"), which is a DataFrameWriter object. This does not have a start() method.
Do you want to write this as a stream? Then you'll probably need to use something like the writeStream method:
book_DF.writeStream \
.format("kafka") \
.start()
More info + examples can be found here.
If you simply want to print your dataframe to the console you should be able to use the show method for that. So in your case: book_DF.show()
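For completeness, a hedged sketch of the streaming variant that the writeStream route implies. It swaps the batch read for readStream and reuses the spark session and topic_Name defined in the question; treat it as illustrative rather than a drop-in fix.

stream_df = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", topic_Name)
    .load()
)

query = (
    stream_df.selectExpr("CAST(value AS STRING)")
    .writeStream
    .outputMode("append")     # print each new micro-batch
    .format("console")
    .start()
)
query.awaitTermination()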
| AttributeError: 'DataFrameWriter' object has no attribute 'start' | I am trying to write code using Kafka, Python and Spark.
The problem statement is: read data from XML; the consumed data will be in binary format and has to be stored in a DataFrame.
I am getting below error:
Error:
File "C:/Users/HP/PycharmProjects/xml_streaming/ConS.py", line 55, in
.format("console")
AttributeError: 'DataFrameWriter' object has no attribute 'start'
Here is my code for reference:
#import *
# Set spark environments
#os.environ['PYSPARK_PYTHON'] = <PATH>
#os.environ['PYSPARK_DRIVER_PYTHON'] = <PATH>
spark = SparkSession\
.builder\
.master("local[1]")\
.appName("Consumer")\
.getOrCreate()
topic_Name = 'XML_File_Processing3'
consumer = kafka.KafkaConsumer(topic_Name, bootstrap_servers=['localhost:9092'], auto_offset_reset='latest')
kafka_df = spark\
.read \
.format("kafka") \
.option("kafka.bootstrap.servers", "localhost:9092") \
.option("kafka.security.protocol", "SSL") \
.option("failOnDataLoss", "false") \
.option("subscribe", topic_Name) \
.load()
#.option("startingOffsets", "earliest") \
print("Loaded to DataFrame kafka_df")
kafka_df.printSchema()
new_df = kafka_df.selectExpr("CAST(value AS STRING)")
schema = ArrayType(StructType()\
.add("book_id", IntegerType())\
.add("author", StringType())\
.add("title", StringType())\
.add("genre",StringType())\
.add("price",IntegerType())\
.add("publish_date", IntegerType())\
.add("description", StringType()))
book_DF = new_df.select(from_json(col("value"), schema).alias("dataf")) #.('data')).select("data.*")
book_DF.printSchema()
#book_DF.select("dataf.author").show()
book_DF.write\
.format("console")\
.start()
| [
"I don't have a lot of experience with kafka, but at the end you're using the start() method on the result of book_DF.write.format(\"console\"), which is a DataFrameWriter object. This does not have a start() method.\nDo you want to write this as a stream? Then you'll probably need to use something like the writeStream method:\n book_DF.writeStream \\\n .format(\"kafka\") \\\n .start()\n\nMore info + examples can be found here.\nIf you simply want to print your dataframe to the console you should be able to use the show method for that. So in your case: book_DF.show()\n"
] | [
0
] | [] | [] | [
"apache_kafka",
"apache_spark",
"consumer",
"python",
"python_3.x"
] | stackoverflow_0074638593_apache_kafka_apache_spark_consumer_python_python_3.x.txt |
Q:
Python given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A in O(n) time complexity
For example:
input: A = [ 6 4 3 -5 0 2 -7 1 ]
output: 5
Since 5 is the smallest positive integer that does not occur in the array.
I have written two solutions to this problem. The first one is good, but I don't want to use any external libraries, plus it has O(n)*log(n) complexity. The second solution (the one I need your help optimizing) fails when the input is a chaotic sequence of length 10005 containing negative numbers.
Solution 1:
from itertools import count, filterfalse
def minpositive(a):
return(next(filterfalse(set(a).__contains__, count(1))))
Solution 2:
def minpositive(a):
    count = 0
    b = list(set([i for i in a if i > 0]))
    if min(b, default=0) > 1 or min(b, default=0) == 0:
        min_val = 1
    else:
        min_val = min([b[i-1] + 1 for i, x in enumerate(b) if x - b[i-1] > 1], default=b[-1] + 1)
    return min_val
Note: This was a demo test on Codility; solution 1 got a 100% score and solution 2 got 77%.
Error in "solution2" was due to:
Performance tests ->
medium chaotic sequences length=10005 (with minus) got 3 expected
10000
Performance tests -> large chaotic + many -1, 1, 2, 3 (with
minus) got 5 expected 10000
A:
Testing for the presence of a number in a set is fast in Python so you could try something like this:
def minpositive(a):
A = set(a)
ans = 1
while ans in A:
ans += 1
return ans
A:
Fast for large arrays.
def minpositive(arr):
if 1 not in arr: # protection from error if ( max(arr) < 0 )
return 1
else:
maxArr = max(arr) # find max element in 'arr'
c1 = set(range(2, maxArr+2)) # create array from 2 to max
c2 = c1 - set(arr) # find all positive elements outside the array
return min(c2)
A:
I have an easy solution. No need to sort.
def solution(A):
s = set(A)
m = max(A) + 2
for N in range(1, m):
if N not in s:
return N
return 1
Note: It is 100% total score (Correctness & Performance)
A:
def minpositive(A):
"""Given an list A of N integers,
returns the smallest positive integer (greater than 0)
that does not occur in A in O(n) time complexity
Args:
A: list of integers
Returns:
integer: smallest positive integer
e.g:
A = [1,2,3]
smallest_positive_int = 4
"""
len_nrs_list = len(A)
N = set(range(1, len_nrs_list+2))
return min(N-set(A)) #gets the min value using the N integers
A:
This solution passes the performance test with a score of 100%
def solution(A):
n = sorted(i for i in set(A) if i > 0) # Remove duplicates and negative numbers
if not n:
return 1
ln = len(n)
for i in range(1, ln + 1):
if i != n[i - 1]:
return i
return ln + 1
A:
def solution(A):
B = set(sorted(A))
m = 1
for x in B:
if x == m:
m+=1
return m
A:
Continuing on from Niroj Shrestha and najeeb-jebreel, added an initial portion to avoid iteration in case of a complete set. Especially important if the array is very large.
def smallest_positive_int(A):
sorted_A = sorted(A)
last_in_sorted_A = sorted_A[-1]
#check if straight continuous list
if len(sorted_A) == last_in_sorted_A:
return last_in_sorted_A + 1
else:
#incomplete list, iterate to find the smallest missing number
sol=1
for x in sorted_A:
if x == sol:
sol += 1
else:
break
return sol
A = [1,2,7,4,5,6]
print(smallest_positive_int(A))
A:
This question doesn't really need another answer, but there is a solution that has not been proposed yet, that I believe to be faster than what's been presented so far.
As others have pointed out, we know the answer lies in the range [1, len(A)+1], inclusively. We can turn that into a set and take the minimum element in the set difference with A. That's a good O(N) solution since set operations are O(1).
However, we don't need to use a Python set to store [1, len(A)+1], because we're starting with a dense set. We can use an array instead, which will replace set hashing by list indexing and give us another O(N) solution with a lower constant.
def minpositive(a):
# the "set" of possible answer - values_found[i-1] will tell us whether i is in a
values_found = [False] * (len(a)+1)
# note any values in a in the range [1, len(a)+1] as found
for i in a:
if i > 0 and i <= len(a)+1:
values_found[i-1] = True
# extract the smallest value not found
for i, found in enumerate(values_found):
if not found:
return i+1
We know the final for loop always finds a value that was not marked, because it has one more element than a, so at least one of its cells was not set to True.
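For example, with the array from the question (quick sanity checks, not a benchmark):

print(minpositive([6, 4, 3, -5, 0, 2, -7, 1]))   # 5
print(minpositive([1, 2, 3]))                    # 4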
A:
def check_min(a):
x= max(a)
if x-1 in a:
return x+1
elif x <= 0:
return 1
else:
return x-1
Correct me if i'm wrong but this works for me.
A:
def solution(A):
    clone = 1
    A.sort()
    for itr in range(max(A) + 2):
        if itr not in A and itr >= 1:
            clone = itr
            break
    return clone
print(solution([2,1,4,7]))
#returns 3
A:
def solution(A):
n = 1
for i in A:
if n in A:
n = n+1
else:
return n
return n
A:
def not_in_A(a):
a=sorted(a)
if max(a)<1:
return(1)
for i in range(0,len(a)-1):
if a[i+1]-a[i]>1:
out=a[i]+1
if out==0 or out<1:
continue
return(out)
return(max(a)+1)
A:
Mark the values that occur, then find the first index that was never marked:
nums = [ 6, 4, 3, -5, 0, 2, -7, 1 ]
def check_min(nums):
marks = [-1] * len(nums)
for idx, num in enumerate(nums):
if num >= 0:
marks[num] = idx
for idx, mark in enumerate(marks):
if mark == -1:
return idx
return idx + 1
| Python given an array A of N integers, returns the smallest positive integer (greater than 0) that does not occur in A in O(n) time complexity | For example:
input: A = [ 6 4 3 -5 0 2 -7 1 ]
output: 5
Since 5 is the smallest positive integer that does not occur in the array.
I have written two solutions to that problem. The first one is good but I don't want to use any external libraries + its O(n)*log(n) complexity. The second solution "In which I need your help to optimize it" gives an error when the input is chaotic sequences length=10005 (with minus).
Solution 1:
from itertools import count, filterfalse
def minpositive(a):
return(next(filterfalse(set(a).__contains__, count(1))))
Solution 2:
def minpositive(a):
count = 0
b = list(set([i for i in a if i>0]))
if min(b, default = 0) > 1 or min(b, default = 0) == 0 :
min_val = 1
else:
min_val = min([b[i-1]+1 for i, x in enumerate(b) if x - b[i - 1] >1], default=b[-1]+1)
return min_val
Note: This was a demo test in codility, solution 1 got 100% and
solution 2 got 77 %.
Error in "solution2" was due to:
Performance tests ->
medium chaotic sequences length=10005 (with minus) got 3 expected
10000
Performance tests -> large chaotic + many -1, 1, 2, 3 (with
minus) got 5 expected 10000
| [
"Testing for the presence of a number in a set is fast in Python so you could try something like this:\ndef minpositive(a):\n A = set(a)\n ans = 1\n while ans in A:\n ans += 1\n return ans\n\n",
"Fast for large arrays.\ndef minpositive(arr):\n if 1 not in arr: # protection from error if ( max(arr) < 0 )\n return 1\n else:\n maxArr = max(arr) # find max element in 'arr'\n c1 = set(range(2, maxArr+2)) # create array from 2 to max\n c2 = c1 - set(arr) # find all positive elements outside the array\n return min(c2)\n\n\n",
"I have an easy solution. No need to sort.\ndef solution(A):\n s = set(A)\n m = max(A) + 2\n for N in range(1, m):\n if N not in s:\n return N\n return 1\n\nNote: It is 100% total score (Correctness & Performance)\n",
"def minpositive(A):\n \"\"\"Given an list A of N integers, \n returns the smallest positive integer (greater than 0) \n that does not occur in A in O(n) time complexity\n\n Args:\n A: list of integers\n Returns:\n integer: smallest positive integer\n\n e.g:\n A = [1,2,3]\n smallest_positive_int = 4\n \"\"\"\n len_nrs_list = len(A)\n N = set(range(1, len_nrs_list+2))\n \n return min(N-set(A)) #gets the min value using the N integers\n \n\n",
"This solution passes the performance test with a score of 100%\ndef solution(A):\n n = sorted(i for i in set(A) if i > 0) # Remove duplicates and negative numbers\n if not n:\n return 1\n ln = len(n)\n\n for i in range(1, ln + 1):\n if i != n[i - 1]:\n return i\n\n return ln + 1\n\n",
"def solution(A):\n B = set(sorted(A))\n m = 1\n for x in B:\n if x == m:\n m+=1\n return m\n\n",
"Continuing on from Niroj Shrestha and najeeb-jebreel, added an initial portion to avoid iteration in case of a complete set. Especially important if the array is very large.\ndef smallest_positive_int(A):\n sorted_A = sorted(A)\n last_in_sorted_A = sorted_A[-1]\n #check if straight continuous list\n if len(sorted_A) == last_in_sorted_A:\n return last_in_sorted_A + 1\n else:\n #incomplete list, iterate to find the smallest missing number\n sol=1\n for x in sorted_A:\n if x == sol:\n sol += 1\n else:\n break\n return sol\n\nA = [1,2,7,4,5,6]\nprint(smallest_positive_int(A))\n\n\n",
"This question doesn't really need another answer, but there is a solution that has not been proposed yet, that I believe to be faster than what's been presented so far.\nAs others have pointed out, we know the answer lies in the range [1, len(A)+1], inclusively. We can turn that into a set and take the minimum element in the set difference with A. That's a good O(N) solution since set operations are O(1).\nHowever, we don't need to use a Python set to store [1, len(A)+1], because we're starting with a dense set. We can use an array instead, which will replace set hashing by list indexing and give us another O(N) solution with a lower constant.\ndef minpositive(a):\n # the \"set\" of possible answer - values_found[i-1] will tell us whether i is in a\n values_found = [False] * (len(a)+1)\n # note any values in a in the range [1, len(a)+1] as found\n for i in a:\n if i > 0 and i <= len(a)+1:\n values_found[i-1] = True\n # extract the smallest value not found\n for i, found in enumerate(values_found):\n if not found:\n return i+1\n\nWe know the final for loop always finds a value that was not marked, because it has one more element than a, so at least one of its cells was not set to True.\n",
"def check_min(a):\n x= max(a)\n if x-1 in a:\n return x+1\n elif x <= 0:\n return 1\n else:\n return x-1\n\nCorrect me if i'm wrong but this works for me.\n",
"def solution(A):\nclone = 1\nA.sort()\nfor itr in range(max(A) + 2):\n if itr not in A and itr >= 1:\n clone = itr\n break\nreturn clone\n\nprint(solution([2,1,4,7]))\n\n#returns 3\n\n",
"def solution(A):\n n = 1\n for i in A:\n if n in A:\n n = n+1 \n else:\n return n\n return n\n\n",
"def not_in_A(a):\n a=sorted(a)\n if max(a)<1:\n return(1)\n for i in range(0,len(a)-1):\n if a[i+1]-a[i]>1:\n out=a[i]+1\n if out==0 or out<1:\n continue\n return(out)\n \n return(max(a)+1)\n\n",
"mark and then find the first one that didn't find\nnums = [ 6, 4, 3, -5, 0, 2, -7, 1 ]\ndef check_min(nums):\n marks = [-1] * len(nums)\n for idx, num in enumerate(nums):\n if num >= 0:\n marks[num] = idx\n for idx, mark in enumerate(marks):\n if mark == -1:\n return idx\n return idx + 1\n\n"
] | [
80,
5,
3,
2,
1,
0,
0,
0,
0,
0,
0,
0,
0
] | [
"I just modified the answer by @najeeb-jebreel and now the function gives an optimal solution.\ndef solution(A):\n sorted_set = set(sorted(A))\n sol = 1\n for x in sorted_set:\n if x == sol:\n sol += 1\n else:\n break\n return sol\n\n",
"I reduced the length of set before comparing\na=[1,222,3,4,24,5,6,7,8,9,10,15,2,3,3,11,-1]\n#a=[1,2,3,6,3]\ndef sol(a_array):\n a_set=set()\n b_set=set()\n cnt=1\n for i in a_array:\n\n #In order to get the greater performance\n #Checking if element is greater than length+1 \n #then it can't be output( our result in solution)\n \n if i<=len(a) and i >=1:\n \n a_set.add(i) # Adding array element in set \n b_set.add(cnt) # Adding iterator in set\n cnt=cnt+1\n b_set=b_set.difference(a_set)\n if((len(b_set)) > 1): \n return(min(b_set))\n else:\n return max(a_set)+1\n\nsol(a) \n\n",
"def solution(A):\n nw_A = sorted(set(A))\n if all(i < 0 for i in nw_A):\n return 1\n else:\n ans = 1\n while ans in nw_A:\n ans += 1\n if ans not in nw_A:\n return ans\n\nFor better performance if there is a possibility to import numpy package.\ndef solution(A):\n import numpy as np\n nw_A = np.unique(np.array(A))\n if np.all((nw_A < 0)):\n return 1\n else:\n ans = 1\n while ans in nw_A:\n ans += 1\n if ans not in nw_A:\n return ans\n\n",
"def solution(A):\n# write your code in Python 3.6\nmin_num = float(\"inf\")\nset_A = set(A)\n# finding the smallest number\nfor num in set_A:\n if num < min_num:\n min_num = num\n# print(min_num)\n\n#if negative make positive \nif min_num < 0 or min_num == 0:\n min_num = 1\n# print(min_num)\n\n# if in set add 1 until not \nwhile min_num in set_A: \n min_num += 1\nreturn min_num\n\nNot sure why this is not 100% in correctness. It is 100% performance\n",
"def solution(A):\n arr = set(A)\n N = set(range(1, 100001))\n while N in arr:\n N += 1\n return min(N - arr)\n\nsolution([1, 2, 6, 4])\n#returns 3\n\n"
] | [
-1,
-1,
-1,
-1,
-2
] | [
"algorithm",
"python",
"python_3.x",
"time_complexity"
] | stackoverflow_0049224022_algorithm_python_python_3.x_time_complexity.txt |
Q:
ValueError: You are trying to load a weight file containing 293 layers into a model with 147 layers
If you are getting this error while following the code from this tutorial
https://pixellib.readthedocs.io/en/latest/image_ade20k.html
ValueError: You are trying to load a weight file containing 293 layers into a model with 147 layers
A:
The issue can be solved by installing these versions:
!pip3 install tensorflow==2.6.0
!pip3 install keras==2.6.0
!pip3 install imgaug
!pip3 install pillow==8.2.0
!pip install pixellib==0.5.2
!pip install labelme2coco==0.1.2
A:
I got it working by changing some imports in pixellib/semantic/deeplab.py.
You can substitute all imports of tensorflow.python.keras with tensorflow.keras, except for from tensorflow.python.keras.utils.layer_utils import get_source_inputs.
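For illustration, the substitution looks like this (which symbols deeplab.py actually imports may differ between pixellib versions, so treat the names as placeholders):

# before
from tensorflow.python.keras.layers import Conv2D
# after
from tensorflow.keras.layers import Conv2D

# unchanged, as noted above
from tensorflow.python.keras.utils.layer_utils import get_source_inputs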
| ValueError: You are trying to load a weight file containing 293 layers into a model with 147 layers | If you are getting this error while following the code from this tutorial
https://pixellib.readthedocs.io/en/latest/image_ade20k.html
ValueError: You are trying to load a weight file containing 293 layers into a model with 147 layers
| [
"The issue can be solved by installing these versions*\n!pip3 install tensorflow==2.6.0\n!pip3 install keras==2.6.0\n!pip3 install imgaug\n!pip3 install pillow==8.2.0\n!pip install pixellib==0.5.2\n!pip install labelme2coco==0.1.2\n\n",
"I got it working by changing some imports in pixellib/semantic/deeplab.py.\nYou can substitute all imports tensorflow.python.keras with tensorflow.keras exept for from tensorflow.python.keras.utils.layer_utils import get_source_inputs.\n"
] | [
0,
0
] | [] | [] | [
"keras",
"pixellib",
"python",
"tensorflow"
] | stackoverflow_0073084941_keras_pixellib_python_tensorflow.txt |
Q:
Why PyList_Append is called each time a list is evaluated?
I'm working with CPython 3.11.0a3+. I added a breakpoint at PyList_Append and modified the function to stop when newitem is a dict. The original function:
int
PyList_Append(PyObject *op, PyObject *newitem)
{
if (PyList_Check(op) && (newitem != NULL))
return app1((PyListObject *)op, newitem);
PyErr_BadInternalCall();
return -1;
}
Modified version:
int
PyList_Append(PyObject *op, PyObject *newitem)
{
if (PyDict_CheckExact(newitem)) {
fprintf(stdout, "hello\n");
}
if (PyList_Check(op) && (newitem != NULL))
return app1((PyListObject *)op, newitem);
PyErr_BadInternalCall();
return -1;
}
I placed a breakpoint on the line fprintf(stdout, "hello\n");. This is because during the Python startup process there are countless calls to PyList_Append, but none of them pass a newitem of type dict. This way I can skip all of those calls and reach the REPL to experiment with my own variables.
(gdb) break Objects/listobject.c:328
Breakpoint 1 at 0x44fb00: file Objects/listobject.c, line 328.
(gdb) run
Starting program: /home/amirreza/Desktop/edu/python_internals/1/cpython/python
Missing separate debuginfos, use: dnf debuginfo-install glibc-2.31-2.fc32.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Python 3.11.0a3+ (heads/main-dirty:1cbb887, Dec 1 2022, 12:22:29) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = []
>>> a.append(3)
>>> a
[3]
>>> a.append({})
>>>
>>> a
Breakpoint 1, PyList_Append (op=op@entry=0x7fffea8e9780, newitem=newitem@entry=0x7fffeaa22bc0) at Objects/listobject.c:328
328 fprintf(stdout, "hello\n");
Missing separate debuginfos, use: dnf debuginfo-install ncurses-libs-6.1-15.20191109.fc32.x86_64 readline-8.0-4.fc32.x86_64
(gdb) c
Continuing.
hello
[3, {}]
>>>
>>> a
Breakpoint 1, PyList_Append (op=op@entry=0x7fffea8e9780, newitem=newitem@entry=0x7fffeaa22bc0) at Objects/listobject.c:328
328 fprintf(stdout, "hello\n");
(gdb) c
Continuing.
hello
[3, {}]
>>>
>>> a
Breakpoint 1, PyList_Append (op=op@entry=0x7fffea8e9780, newitem=newitem@entry=0x7fffeaa22bc0) at Objects/listobject.c:328
328 fprintf(stdout, "hello\n");
(gdb) c
Continuing.
hello
[3, {}]
>>>
My question is: why is PyList_Append invoked every time I merely evaluate the list a? The first time could perhaps be put down to lazy evaluation, but why is it called again on the second and third evaluations?
A:
dict.__repr__ uses the CPython-internal Py_ReprEnter and Py_ReprLeave functions to stop infinite recursion for recursively nested data structures, and Py_ReprEnter appends the dict to a list of objects currently having their repr evaluated in the running thread.
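You can observe that machinery from plain Python: a self-referential dict still reprs cleanly because Py_ReprEnter notices the dict is already being repr'd in this thread and the cycle is rendered as {...}.

d = {}
d["self"] = d
print(repr(d))   # {'self': {...}} -- the recursion guard kicked in instead of looping forever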
| Why PyList_Append is called each time a list is evaluated? | I'm working with CPython3.11.0a3+. I added a break point at PyList_Append and modified the function to stop when the newitem is a dict. The original function:
int
PyList_Append(PyObject *op, PyObject *newitem)
{
if (PyList_Check(op) && (newitem != NULL))
return app1((PyListObject *)op, newitem);
PyErr_BadInternalCall();
return -1;
}
Modified version:
int
PyList_Append(PyObject *op, PyObject *newitem)
{
if (PyDict_CheckExact(newitem)) {
fprintf(stdout, "hello\n");
}
if (PyList_Check(op) && (newitem != NULL))
return app1((PyListObject *)op, newitem);
PyErr_BadInternalCall();
return -1;
}
I placed a break point on line fprintf(stdout, "hello\n");. This is because in python startup process, there are infinite calls of PyList_Append but none of them are with newitem of type dict. So in this way I can escape all of those calls and reach the repl to workaround with my own variables.
(gdb) break Objects/listobject.c:328
Breakpoint 1 at 0x44fb00: file Objects/listobject.c, line 328.
(gdb) run
Starting program: /home/amirreza/Desktop/edu/python_internals/1/cpython/python
Missing separate debuginfos, use: dnf debuginfo-install glibc-2.31-2.fc32.x86_64
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Python 3.11.0a3+ (heads/main-dirty:1cbb887, Dec 1 2022, 12:22:29) [GCC 10.3.1 20210422 (Red Hat 10.3.1-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> a = []
>>> a.append(3)
>>> a
[3]
>>> a.append({})
>>>
>>> a
Breakpoint 1, PyList_Append (op=op@entry=0x7fffea8e9780, newitem=newitem@entry=0x7fffeaa22bc0) at Objects/listobject.c:328
328 fprintf(stdout, "hello\n");
Missing separate debuginfos, use: dnf debuginfo-install ncurses-libs-6.1-15.20191109.fc32.x86_64 readline-8.0-4.fc32.x86_64
(gdb) c
Continuing.
hello
[3, {}]
>>>
>>> a
Breakpoint 1, PyList_Append (op=op@entry=0x7fffea8e9780, newitem=newitem@entry=0x7fffeaa22bc0) at Objects/listobject.c:328
328 fprintf(stdout, "hello\n");
(gdb) c
Continuing.
hello
[3, {}]
>>>
>>> a
Breakpoint 1, PyList_Append (op=op@entry=0x7fffea8e9780, newitem=newitem@entry=0x7fffeaa22bc0) at Objects/listobject.c:328
328 fprintf(stdout, "hello\n");
(gdb) c
Continuing.
hello
[3, {}]
>>>
My question is why each time I only try to evaluate the a list, the PyList_Append is invoked? For the first time we can ignore it due to lazy-evaluation. But for the second and third time, why it's called every time?
| [
"dict.__repr__ uses the CPython-internal Py_ReprEnter and Py_ReprLeave functions to stop infinite recursion for recursively nested data structures, and Py_ReprEnter appends the dict to a list of objects currently having their repr evaluated in the running thread.\n"
] | [
2
] | [] | [] | [
"c",
"cpython",
"gdb",
"python",
"python_internals"
] | stackoverflow_0074639259_c_cpython_gdb_python_python_internals.txt |
Q:
Pandas filter by comparing columns
I have a dataframe like this:
Description keyword
1 plays the piano plays
2 plays the piano write
3 plays the piano piano
4 knows how to write the
5 knows how to write to
I want to filter it so that I keep the rows where the keyword is in the description. So here I would like to keep:
Description keyword
1 plays the piano plays
3 plays the piano piano
5 knows how to write to
Is there an efficient way to do this?
A:
Assuming your dataframe is called df:
df[df.apply(lambda x: x['keyword'] in x['Description'], axis=1)]
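If the frame is large, a list comprehension over the two columns is a common way to avoid the row-wise apply (a sketch, assuming the same column names):

df[[kw in desc for kw, desc in zip(df['keyword'], df['Description'])]]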
| Pandas filter by comparing columns | I have a dataframe like this:
Description keyword
1 plays the piano plays
2 plays the piano write
3 plays the piano piano
4 knows how to write the
5 knows how to write to
I want to filter it so that I keep the rows where the keyword is in the description. So here I would like to keep:
Description keyword
1 plays the piano plays
3 plays the piano piano
5 knows how to write to
Is there an efficient way to do this?
| [
"Assuming your dataframe is called df:\ndf[df.apply(lambda x: x['keyword'] in x['Description'], axis=1)]\n\n"
] | [
1
] | [] | [] | [
"filter",
"pandas",
"python"
] | stackoverflow_0074639249_filter_pandas_python.txt |