content (string, 86 to 88.9k chars) | title (string, 0 to 150) | question (string, 1 to 35.8k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 30 to 130) |
---|---|---|---|---|---|---|---|---|
Q:
Calculate the total of one measure (with an upper limit) in another measure
I need to calculate the total value of a column per employee per month. Then I need to impose a limit of 177 per employee per month. This will go into a matrix with employees as rows and months as columns. Lastly, I want to add up all the amounts per month to show the total in a line chart.
I made a measure to calculate the 1%, with a maximum amount of 177: if(0.01*sum[amount] > 177, 177, 0.01*sum[amount]). Then I used this measure in my matrix as explained above. This worked fine, but when I want to make the line chart, the limit of 177 is still imposed because I use the same measure.
A:
I tested it with some dummy data! Please do it like this:
Employee Month Amount
Jack January 1500
Joe February 20000
Joe March 1600
Jack April 1800
Brad June 10000
Jack July 9500
Joe February 9500
Brad April 6500
Jack December 12000
Joe June 8000
Brad April 9500
Jack January 1000
Jack April 1100
Jack April 8000
Joe February 12000
Joe February 12500
Joe February 13000
Brad June 15000
Brad June 16000
Here is the measure (DAX code) you need to use:
your_measure =
if(0.01 * sum(your_table[Amount]) > 177, 177, 0.01 * sum(your_table[Amount]))
Then let's put it on a matrix and a line chart:
If you don't want your 177 restriction to be applied in the line chart, why not create another simple total measure:
= SUM(your_table[Amount])
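As a compact equivalent, the cap can also be written with DAX's MIN function so the SUM is not repeated. This is only a sketch, reusing the hypothetical your_table[Amount] names from above:
Capped 1 Percent = MIN(0.01 * SUM(your_table[Amount]), 177)
Uncapped Total = SUM(your_table[Amount])
Use the capped measure in the matrix and the uncapped one in the line chart.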
|
Calculate the total of one measure (with an upper limit) in another measure
|
I need to calculate the total value of a column per employee per month. Then I need to impose a limit of 177 per employee per month. This will go into a matrix with employees as rows and months as columns. Lastly, I want to add up all the amounts per month to show the total in a line chart.
I made a measure to calculate the 1%, with a maximum amount of 177: if(0.01*sum[amount] > 177, 177, 0.01*sum[amount]). Then I used this measure in my matrix as explained above. This worked fine, but when I want to make the line chart, the limit of 177 is still imposed because I use the same measure.
|
[
"I tested it with some dummy data! Please do it like this:\nEmployee Month Amount\nJack January 1500\nJoe February 20000\nJoe March 1600\nJack April 1800\nBrad June 10000\nJack July 9500\nJoe February 9500\nBrad April 6500\nJack December 12000\nJoe June 8000\nBrad April 9500\nJack January 1000\nJack April 1100\nJack April 8000\nJoe February 12000\nJoe February 12500\nJoe February 13000\nBrad June 15000\nBrad June 16000\n\nHere is the measure (DAX Code)you need to use:\nyour_measure = \nif(0.01 * sum(your_table[Amount]) > 177, 177,0.01* sum(your_table[Amount]))\n\nThen lets put it on a matrix and line chart:\n\nIf you want your 177 restriction not to be applied in line chart, Why not create another simple total measure:\n= SUM(your table[amount])\n\n"
] |
[
0
] |
[] |
[] |
[
"dax",
"measure",
"powerbi"
] |
stackoverflow_0074664965_dax_measure_powerbi.txt
|
Q:
xargs pipes with multiple commands
I am trying to find all files of type .pkl older than 30 days in a directory, get the parent directories of the found files, and remove those directories.
My solution looks something like this:
find path_to_dir/.cache -type f -name "*.pkl" -mtime +30 |
xargs dirname | xargs rm -r
Is there a way to avoid the double xargs?
Also, a further task of mine would be to delete all log files, but not their parent directories, i.e.
find path_to_dir -type f -name "*.log" -mtime +30 -delete
Could I do above in one line without code replication?
A:
If you're using GNU find, you can use the %h token in the -printf command:
find path_to_dir/.cache -type f -name '*.pkl' -mtime +30 -printf '%h\0' |
xargs -0 rm -r
For combining the two operations, something like this might work:
find path_to_dir/ -mtime +30 -type f \( -name '*.pkl' -printf '%h\0' -o -name '*.log' -print0 \) |
xargs -0 rm -r
...but that assumes that you can use the same starting prefix for both operations (in your example, you're using path_to_dir/.cache in the first case and path_to_dir in the second case).
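If the two prefixes really do differ only by the .cache subdirectory, one possible variation (a sketch, untested, assuming GNU find and the same layout as in the question) is to start from the common parent and restrict the .pkl branch with -path:
find path_to_dir/ -mtime +30 -type f \( -path 'path_to_dir/.cache/*' -name '*.pkl' -printf '%h\0' -o -name '*.log' -print0 \) |
  xargs -0 -r rm -r
The -r (--no-run-if-empty) option is a GNU xargs extension that skips running rm when find prints nothing.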
|
xargs pipes with multiple commands
|
I am trying to find all files of type .pkl older than 30 days in a directory, get the parent directories of the found files, and remove those directories.
My solution looks something like this:
find path_to_dir/.cache -type f -name "*.pkl" -mtime +30 |
xargs dirname | xargs rm -r
Is there a way to avoid the double xargs?
Also, a further task of mine would be to delete all log files, but not their parent directories, i.e.
find path_to_dir -type f -name "*.log" -mtime +30 -delete
Could I do above in one line without code replication?
|
[
"If you're using GNU find, you can use the %h token in the -printf command:\nfind path_to_dir/.cache -type f -name '*.pkl' -mtime +30 -printf '%h\\0' |\n xargs -0 rm -r\n\nFor combining the two operations, something like this might work:\nfind path_to_dir/ -mtime +30 -type f \\( -name '*.pkl' -printf '%h\\0' -o -name '*.log' -print0 \\) |\n xargs -0 rm -r\n\n...but that assumes that you can use the same starting prefix for both operations (in your example, you're using path_to_dir/.cache in the first case and path_to_dir in the second case).\n"
] |
[
1
] |
[] |
[] |
[
"bash",
"sh",
"shell"
] |
stackoverflow_0074667809_bash_sh_shell.txt
|
Q:
Python: Extract keywords from string
Hey guys, I am searching for a fast/efficient way to extract keywords (defined in a list) from a string (in a DataFrame) without it being case-sensitive or dependent on " " (space) characters:
keys = ['I', 'love', 'Cookies']
String from df= "xxxxxxxxIxx xx cookies"
The result should be either ['I'] or ['I', 'Cookies'].
I am currently using f"({'|'.join(keys)})", which is case-sensitive. What would you recommend for long strings in even longer DataFrames? :)
Thanks in advance
A:
Working code as per your inputs:
my_str ="xxxxxxxixxx xx cookhes"
my_list = ["I", "love", "Cookies"]
if any(substring.casefold() in my_str.casefold() for substring in my_list):
print('Contains element')
else:
print('Not contain any element.')
More info on the following answer from StackOverflow:
Case insensitive 'in'
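Since the question asks to extract the matching keywords rather than only detect them, a case-insensitive compiled regex is one option. The following is a hedged sketch; the DataFrame and column names (df, 'text') are made up for illustration:
import re
import pandas as pd

keys = ['I', 'love', 'Cookies']
pattern = re.compile('|'.join(map(re.escape, keys)), flags=re.IGNORECASE)

df = pd.DataFrame({'text': ["xxxxxxxxIxx xx cookies", "we LOVE xx", "no match at all"]})
# findall returns every keyword occurrence as it appears in the text
df['found'] = df['text'].apply(pattern.findall)
# df['found'] -> [['I', 'cookies'], ['LOVE'], []]
Matches are returned exactly as they appear in the text, so map them back to the canonical entries in keys if you need ['I', 'Cookies'] rather than ['I', 'cookies'].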
|
Python: Extract keywords from string
|
Hey guys, I am searching for a fast/efficient way to extract keywords (defined in a list) from a string (in a DataFrame) without it being case-sensitive or dependent on " " (space) characters:
keys = ['I', 'love', 'Cookies']
String from df= "xxxxxxxxIxx xx cookies"
The result should be either ['I'] or ['I', 'Cookies'].
I am currently using f"({'|'.join(keys)})", which is case-sensitive. What would you recommend for long strings in even longer DataFrames? :)
Thanks in advance
|
[
"Working code as per your inputs:\nmy_str =\"xxxxxxxixxx xx cookhes\"\nmy_list = [\"I\", \"love\", \"Cookies\"]\nif any(substring.casefold() in my_str.casefold() for substring in my_list):\n print('Contains element')\nelse:\n print('Not contain any element.')\n\nMore info on the following answer from StackOverflow:\nCase insensitive 'in'\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"python",
"string",
"substring"
] |
stackoverflow_0074669279_dataframe_python_string_substring.txt
|
Q:
Why does the else in my while loop not work?
I wanted to select two numbers, and when I run the program it should start from the lower one and print the numbers one after another up to the bigger number.
The loop in the while is working, but the else doesn't work...
num1= int(input('enter first number'))
num2= int (input('enter second number'))
while num1 > num2 :
print(num2)
num2= num2 + 1
else:
print(num1)
num1 = num1 + 1
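In Python, a while loop's else block runs once, after the loop condition becomes false; it is not an alternative branch executed on each pass. A minimal sketch of the intended counting behavior (illustrative only, not taken from the original thread):
num1 = int(input('enter first number'))
num2 = int(input('enter second number'))

low, high = min(num1, num2), max(num1, num2)
while low <= high:
    print(low)
    low += 1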
|
Why does the else in my while loop not work?
|
I wanted to select two numbers, and when I run the program it should start from the lower one and print the numbers one after another up to the bigger number.
The loop in the while is working, but the else doesn't work...
num1= int(input('enter first number'))
num2= int (input('enter second number'))
while num1 > num2 :
print(num2)
num2= num2 + 1
else:
print(num1)
num1 = num1 + 1
|
[] |
[] |
[
"You haven't written If statement in your code that,s why its not working\n"
] |
[
-1
] |
[
"python",
"while_loop"
] |
stackoverflow_0074669439_python_while_loop.txt
|
Q:
How to solve the error "functions containing switch are not expanded inline" in C++
#include<iostream.h>
#include<conio.h>
class hostel_mangt
{
public:
int x,h,id,rc,hd;
char name[15],dol[10];
void oprt_1()
{
cout<<"do u want to see or update room's ?"<<endl;
cout<<"enter 1 to see and 0 to do operations = "<<endl;
cin>>h;
}
void display_1()
{
if(h==1)
{
if (name==NULL)
{
cout<<"room is empty "<<endl;
}
else
{
cout<<"id = "<<id<<endl;
cout<<"name = "<<name<<endl;
cout<<"date of leaving = "<<dol<<endl;
}
else
{
cout<<" 1. Update the room member and its data "<<endl;
cout<<" 2. delete the room member and its data "<<endl;
cout<<"enter choice = " ;
cin>>x;
switch(x)
{
case 1:
{
cout<<"what do u want to update ? ";<<endl
cout<<" 1. name "<<endl;
cout<<" 2. date of leaving"<<endl;
cin>>rc;
switch(rc)
{
case 1:
{
cout<<"enter new name = "<<endl;
cin>>name;
}
case 2:
{
cout<<"enter updated date of leaving = ";
cin >>date;
}
}
break;
}
case 2:
{
cout<<"what do you want to be deleted = ";
cout<<" 1. name "<<endl;
cout<<" 2. date of leaving "<<endl;
cin>>hd;
switch(hd)
{
case 1:
{
name==NULL;
break;
}
case 2:
{
dol==NULL;
break;
}
break;
}
}
}
int main()
{
public:
int i,c;
clrscr();
hostel_mangt hm[10];
for(i=0;i<10;i++)
{
hm.oprt_1();
hm.display_1();
cout<<"do u want to continue ? "<<endl<<"if yes enter 1"<<endl;
cin>>c;
if(c!=1)
{
break;
}
}
getch();
return 0:
}
I am using Turbo C.
I am making a project named hostel management using classes, an array of objects, and switch-case, because hostel rooms can be empty; if I used a normal array, stack, or queue it wouldn't work, since those cannot have a null value in the middle.
A:
You have a bunch of syntax errors.
Missing closing brace for the if (h == 1) statement.
Misplaced semicolon for the cout<<"what do u want to update ? ";<<endl line (it should be at the end)
Missing closing brace for the outermost else statement in the display_1() function.
Missing closing brace for display_1().
Missing closing brace for the hostel_mangt class.
And several other errors such as using comparisons (==) where there should be assignments (=), using public: in the main() function (it shouldn't be there), and so on.
Here's your code with those errors fixed:
#include <iostream>
#include <string>
using namespace std;
class hostel_mangt {
public:
int x, h, id, rc, hd;
string name, dol; // use std::string instead of character arrays, they're much easier to use
// char name[15], dol[10];
void oprt_1() {
cout << "do u want to see or update room's ?" << endl;
cout << "enter 1 to see and 0 to do operations = " << endl;
cin >> h;
}
void display_1() {
if (h == 1) {
if (name.empty()) {
cout << "room is empty " << endl;
} else {
cout << "id = " << id << endl;
cout << "name = " << name << endl;
cout << "date of leaving = " << dol << endl;
}
} else {
cout << " 1. Update the room member and its data " << endl;
cout << " 2. delete the room member and its data " << endl;
cout << "enter choice = ";
cin >> x;
switch (x) {
case 1: {
cout << "what do u want to update ? " << endl;
cout << " 1. name " << endl;
cout << " 2. date of leaving" << endl;
cin >> rc;
switch (rc) {
case 1: {
cout << "enter new name = " << endl;
cin >> name;
}
case 2: {
cout << "enter updated date of leaving = ";
cin >> date;
}
}
break;
}
case 2: {
cout << "what do you want to be deleted = ";
cout << " 1. name " << endl;
cout << " 2. date of leaving " << endl;
cin >> hd;
switch (hd) {
case 1: {
name.clear();
// name == NULL;
break;
}
case 2: {
dol.clear();
// dol == NULL;
break;
}
break;
}
}
}
}
}
};
int main() {
int i, c;
clrscr();
hostel_mangt hm[10];
for (i = 0; i < 10; i++) {
// to access an element of an array you use the square brackets []:
hm[i].oprt_1();
hm[i].display_1();
cout << "do u want to continue ? " << endl << "if yes enter 1" << endl;
cin >> c;
if (c != 1) {
break;
}
}
getch();
return 0;
}
There are probably still issues with your code but this is what I managed to fix.
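As for the message in the title itself: in old Borland/Turbo C++ compilers, "Functions containing switch are not expanded inline" is a warning, issued because member functions defined inside the class body are implicitly inline and that compiler will not inline a function containing a switch. A hedged sketch of the usual workaround is to declare the member inside the class and define it outside (names follow the original post; the body is elided):
class hostel_mangt {
  public:
    void display_1();   // declaration only; the body moves below
};

// defined outside the class body, so it is no longer implicitly inline
void hostel_mangt::display_1() {
    // ... switch-based menu logic from the answer above goes here ...
}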
|
How to solve the error "functions containing switch are not expanded inline" in C++
|
#include<iostream.h>
#include<conio.h>
class hostel_mangt
{
public:
int x,h,id,rc,hd;
char name[15],dol[10];
void oprt_1()
{
cout<<"do u want to see or update room's ?"<<endl;
cout<<"enter 1 to see and 0 to do operations = "<<endl;
cin>>h;
}
void display_1()
{
if(h==1)
{
if (name==NULL)
{
cout<<"room is empty "<<endl;
}
else
{
cout<<"id = "<<id<<endl;
cout<<"name = "<<name<<endl;
cout<<"date of leaving = "<<dol<<endl;
}
else
{
cout<<" 1. Update the room member and its data "<<endl;
cout<<" 2. delete the room member and its data "<<endl;
cout<<"enter choice = " ;
cin>>x;
switch(x)
{
case 1:
{
cout<<"what do u want to update ? ";<<endl
cout<<" 1. name "<<endl;
cout<<" 2. date of leaving"<<endl;
cin>>rc;
switch(rc)
{
case 1:
{
cout<<"enter new name = "<<endl;
cin>>name;
}
case 2:
{
cout<<"enter updated date of leaving = ";
cin >>date;
}
}
break;
}
case 2:
{
cout<<"what do you want to be deleted = ";
cout<<" 1. name "<<endl;
cout<<" 2. date of leaving "<<endl;
cin>>hd;
switch(hd)
{
case 1:
{
name==NULL;
break;
}
case 2:
{
dol==NULL;
break;
}
break;
}
}
}
int main()
{
public:
int i,c;
clrscr();
hostel_mangt hm[10];
for(i=0;i<10;i++)
{
hm.oprt_1();
hm.display_1();
cout<<"do u want to continue ? "<<endl<<"if yes enter 1"<<endl;
cin>>c;
if(c!=1)
{
break;
}
}
getch();
return 0:
}
I am using Turbo C.
I am making a project named hostel management using classes, an array of objects, and switch-case, because hostel rooms can be empty; if I used a normal array, stack, or queue it wouldn't work, since those cannot have a null value in the middle.
|
[
"You have a bunch of syntax errors.\n\nMissing closing brace for the if (h == 1) statement.\nMisplaced semicolon for the cout<<\"what do u want to update ? \";<<endl line (it should be at the end)\nMissing closing brace for the outermost else statement in the display_1() function.\nMissing closing brace for display_1().\nMissing closing brace for the hostel_mangt class.\n\nAnd several other errors such as using comparisons (==) where there should be assignments (=), using public: in the main() function (it shouldn't be there), and so on.\nHere's your code with those errors fixed:\n#include <iostream>\n#include <string>\nusing namespace std;\n\nclass hostel_mangt {\n public:\n int x, h, id, rc, hd;\n string name, dol; // use std::string instead of character arrays, they're much easier to use\n // char name[15], dol[10];\n\n void oprt_1() {\n cout << \"do u want to see or update room's ?\" << endl;\n cout << \"enter 1 to see and 0 to do operations = \" << endl;\n cin >> h;\n }\n\n void display_1() {\n if (h == 1) {\n if (name.empty()) {\n cout << \"room is empty \" << endl;\n } else {\n cout << \"id = \" << id << endl;\n cout << \"name = \" << name << endl;\n cout << \"date of leaving = \" << dol << endl;\n }\n } else {\n cout << \" 1. Update the room member and its data \" << endl;\n cout << \" 2. delete the room member and its data \" << endl;\n cout << \"enter choice = \";\n cin >> x;\n\n switch (x) {\n case 1: {\n cout << \"what do u want to update ? \" << endl;\n cout << \" 1. name \" << endl;\n cout << \" 2. date of leaving\" << endl;\n cin >> rc;\n\n switch (rc) {\n case 1: {\n cout << \"enter new name = \" << endl;\n cin >> name;\n }\n case 2: {\n cout << \"enter updated date of leaving = \";\n cin >> date;\n }\n }\n break;\n }\n case 2: {\n cout << \"what do you want to be deleted = \";\n cout << \" 1. name \" << endl;\n cout << \" 2. date of leaving \" << endl;\n cin >> hd;\n switch (hd) {\n case 1: {\n name.clear();\n // name == NULL;\n break;\n }\n case 2: {\n dol.clear();\n // dol == NULL;\n break;\n }\n break;\n }\n }\n }\n }\n }\n};\nint main() {\n int i, c;\n clrscr();\n hostel_mangt hm[10];\n for (i = 0; i < 10; i++) {\n // to access an element of an array you use the square brackets []:\n hm[i].oprt_1();\n hm[i].display_1();\n cout << \"do u want to continue ? \" << endl << \"if yes enter 1\" << endl;\n cin >> c;\n if (c != 1) {\n break;\n }\n }\n getch();\n return 0;\n}\n\nThere are probably still issues with your code but this is what I managed to fix.\n"
] |
[
0
] |
[] |
[] |
[
"c++",
"turbo_c",
"turbo_c++"
] |
stackoverflow_0074669232_c++_turbo_c_turbo_c++.txt
|
Q:
How to activate a virtualenv in a github action?
I am used to working with virtualenvs. However, for some reason I am not able to activate an env in a GitHub Actions job.
In order to debug I added this step:
- name: Activate virtualenv
run: |
echo $PATH
. .venv/bin/activate
ls /home/runner/work/<APP>/<APP>/.venv/bin
echo $PATH
On the action logs I can see
/opt/hostedtoolcache/Python/3.9.13/x64/bin:/opt/hostedtoolcache/Python/3.9.13/x64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[...] # Cut here because a lot of lines are displayed. My executables are present including the one I'm trying to execute : pre-commit.
/home/runner/work/<APP>/<APP>/.venv/bin:/opt/hostedtoolcache/Python/3.9.13/x64/bin:/opt/hostedtoolcache/Python/3.9.13/x64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So it should work...
But the next step, which is
- name: Linters
run: pre-commit
generates those error logs:
Run pre-commit
pre-commit
shell: /usr/bin/bash -e {0}
env:
[...] # private
/home/runner/work/_temp/8e893c8d-5032-4dbb-8a15-59be68cb0f5d.sh: line 1: pre-commit: command not found
Error: Process completed with exit code 127.
I have no issue if I transform the step above this way:
- name: Linters
run: .venv/bin/pre-commit
For some reason bash is not able to find my executable, even though the folder containing it is referenced in $PATH.
A:
I'm sure you know that activation of a virtualenv is not magic: it just prepends .../.venv/bin/ to $PATH. Now the problematic thing in GitHub Actions is that every step is executed by a different shell, and hence every step has a default PATH as if the virtualenv were deactivated.
I see 3 ways to overcome that. The 1st you already mentioned: just use .venv/bin/<command>.
The 2nd is to activate the venv in every step:
- name: Linters
run: |
. .venv/bin/activate
pre-commit
The 3rd is: activate it once and store $PATH in a file that GitHub Actions uses to restore environment variables at every step. The file is described in the docs.
So your entire workflow should look like this:
- name: Activate virtualenv
run: |
. .venv/bin/activate
echo PATH=$PATH >> $GITHUB_ENV
- name: Linters
run: pre-commit
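A closely related variant (a sketch, assuming the virtualenv already exists at .venv in the workspace) is to append the venv's bin directory to the $GITHUB_PATH environment file, which the runner prepends to PATH for all subsequent steps; this is the mechanism GitHub documents for adding a directory to PATH:
- name: Add virtualenv to PATH
  run: echo "$PWD/.venv/bin" >> "$GITHUB_PATH"

- name: Linters
  run: pre-commit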
|
How to activate a virtualenv in a github action?
|
I am used to working with virtualenvs. However, for some reason I am not able to activate an env in a GitHub Actions job.
In order to debug I added this step:
- name: Activate virtualenv
run: |
echo $PATH
. .venv/bin/activate
ls /home/runner/work/<APP>/<APP>/.venv/bin
echo $PATH
On the action logs I can see
/opt/hostedtoolcache/Python/3.9.13/x64/bin:/opt/hostedtoolcache/Python/3.9.13/x64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
[...] # Cut here because a lot of lines are displayed. My executables are present including the one I'm trying to execute : pre-commit.
/home/runner/work/<APP>/<APP>/.venv/bin:/opt/hostedtoolcache/Python/3.9.13/x64/bin:/opt/hostedtoolcache/Python/3.9.13/x64:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
So it should work...
But the next step, which is
- name: Linters
run: pre-commit
generates those error logs:
Run pre-commit
pre-commit
shell: /usr/bin/bash -e {0}
env:
[...] # private
/home/runner/work/_temp/8e893c8d-5032-4dbb-8a15-59be68cb0f5d.sh: line 1: pre-commit: command not found
Error: Process completed with exit code 127.
I have no issue if I transform the step above this way:
- name: Linters
run: .venv/bin/pre-commit
For some reason bash is not able to find my executable, even though the folder containing it is referenced in $PATH.
|
[
"I'm sure you know that activation of a virtualenv is not magic β it just prepends β¦/.venv/bin/ to $PATH. Now the problematic thing in Github Action is that every run is executed by a different shell and hence every run has a default PATH as if the virtualenv was deactivated.\nI see 3 ways to overcome that. The 1st you already mentioned β just use .venv/bin/<command>.\nThe 2nd is to activate the venv in every step:\n- name: Linters\n run: |\n . .venv/bin/activate\n pre-commit\n\nThe 3rd is: activate it once and store $PATH in a file that Actions use to restore environment variables at every step. The file is described in the docs.\nSo your entire workflow should looks like this:\n- name: Activate virtualenv\n run: |\n . .venv/bin/activate\n echo PATH=$PATH >> $GITHUB_ENV\n\n- name: Linters\n run: pre-commit\n\n"
] |
[
1
] |
[] |
[] |
[
"bash",
"github_actions",
"virtualenv"
] |
stackoverflow_0074668349_bash_github_actions_virtualenv.txt
|
Q:
Does zmq need integrity checks in application layer?
I'm building a distributed application using the ZMQ framework that needs to assure the integrity of the packages exchanged. My question is whether or not I need to perform integrity checks on the client and server at the application layer.
I have implemented a checksum approach using an MD5 hash on both the client's and the server's side. However, I suspect that this might be redundant, since zmq might already be handling integrity checks in the background. I have read ZMQ - The Guide and found scarce information on this matter, other than small references that indicate that zmq already does integrity checks:
It delivers whole messages exactly as they were sent, using a simple
framing on the wire. If you write a 10k message, you will receive a
10k message.
I also searched in forums, including SO, and couldn't find any solid reference that could confirm this. I would appreciate it if someone could confirm it and ideally include a useful source.
EDIT
I am looking for answers other than "trust the docs" or "implement checksums" for two reasons:
I think that there need to be clear and easy-to-find references to what seems to be one of the key selling points of ZMQ.
The system under design must be fast, thus not wasting time in redundant ops.
A:
Looking at the documentation, I read it as the entire purpose of ZeroMQ being reliable transmission.
A:
I'd say that it depends on what you're worried about most.
ZMQ is built on top of TCP and other protocols, and these in turn rely on underlying things like IP and Ethernet. When you start getting down towards these physical layers of a network stack, there's integrity checking built in to provide reliable services to the layers above (including application libraries like ZMQ). So, in ordinary circumstances, one would not need to put in your own integrity checks because that's already taken care of for you. ZMQ does not do anything extra so far as I know - it simply assumes the underlying network stack always delivers bytes properly, intact.
However, such underlying integrity checks do not guarantee to eliminate all bit errors; they just bring the bit error rate down to some very good level where most applications don't care (e.g. 1 in 10^12) and probably never, ever experience a problem. Adding in a supplementary checksum pushes the net b.e.r. to even safer levels.
If you're worried about some active attack by a malicious third party against the zmtp protocol itself, then you may wish to introduce your own integrity check, likely cryptographic in nature, with all that entails. This might involve using libsodium along with ZeroMQ. That certainly was a thing, and probably still is unless I'm out of date and have missed a deprecation notice.
Summary
I'd say:
Ordinary app, runtime of days, weeks - nothing extra needed
Very long running app (years) and bit errors are utterly unacceptable (e.g. a safety critical application) - add a checksum
Needs to operate in a hostile environment where protocol attacks may occur - add a strong encryption layer like libsodium.
A:
It is recommended to perform integrity checks on the client and server sides in a distributed application using the ZeroMQ framework. This is because ZeroMQ does not provide any built-in mechanism for ensuring the integrity of the messages that are exchanged between the client and server.
While ZeroMQ does provide reliable message delivery using a simple framing mechanism, it does not guarantee the integrity of the messages. This means that messages may be corrupted or altered during transmission, and it is up to the application to implement mechanisms for detecting and handling such errors.
One way to ensure the integrity of messages in a distributed application using ZeroMQ is to use checksums or hash-based message authentication codes (HMACs) to verify the integrity of the messages. These mechanisms can be implemented on the client and server sides, and can be used to detect any errors or modifications to the messages that may have occurred during transmission.
For more information about ensuring the integrity of messages in a distributed application using ZeroMQ, you can refer to the following resources:
The ZeroMQ documentation: https://zeromq.org/documentation/
The ZeroMQ - The guide book: http://zguide.zeromq.org/page:all#toc39
The "Ensuring Message Integrity" section of the ZeroMQ - The guide book: http://zguide.zeromq.org/page:all#toc39
A:
According to the documentation for the ZeroMQ framework, the framework does not perform any data integrity checks by default. This means that you will need to implement your own checksum or data integrity verification mechanism in your application if you want to ensure that the data being exchanged between the client and server is not corrupted or altered in any way.
Here is a quote from the ZeroMQ - The Guide documentation:
ZeroMQ does not validate that the message you received is the same
message you sent. ZeroMQ does not check for duplicate messages,
guarantee message ordering, or assure that every message was
delivered. ZeroMQ is a simple transport layer that passes messages
between applications. It is your responsibility to build any
reliability or security on top of it.
Therefore, it is not redundant to include checksum or data integrity verification in your application. In fact, it is recommended to do so in order to ensure that the messages exchanged between the client and server are not corrupted or altered.
Here is a link to the relevant section of the ZeroMQ - The Guide documentation:
https://zguide.zeromq.org/page:all#Protocol-Design-Principles
A:
The ZeroMQ (ZMQ) messaging library does not require integrity checks in the application layer. ZMQ provides a set of low-level communication protocols that enable applications to exchange messages with each other in a fast and efficient manner. ZMQ does not include any built-in mechanisms for integrity checks, such as error correction or checksum calculations, at the application layer.
However, this does not mean that applications built on top of ZMQ do not need integrity checks. Depending on the specific requirements and goals of the application, it may be necessary to implement integrity checks in the application layer to ensure the correctness and reliability of the data being exchanged. For example, an application may want to include checksum calculations or error correction mechanisms in order to detect and correct errors in the transmitted data.
In general, the decision to include integrity checks in an application built on top of ZMQ will depend on the specific requirements and goals of the application, as well as the trade-offs between performance, reliability, and complexity. It is up to the developers of the application to determine whether integrity checks are necessary, and to implement appropriate mechanisms to ensure the integrity of the data being exchanged.
|
Does zmq need integrity checks in application layer?
|
I'm building a distributed application using the ZMQ framework that needs to assure the integrity of the packages exchanged. My question is whether or not I need to perform integrity checks on the client and server at the application layer.
I have implemented a checksum approach using an MD5 hash on both the client's and the server's side. However, I suspect that this might be redundant, since zmq might already be handling integrity checks in the background. I have read ZMQ - The Guide and found scarce information on this matter, other than small references that indicate that zmq already does integrity checks:
It delivers whole messages exactly as they were sent, using a simple
framing on the wire. If you write a 10k message, you will receive a
10k message.
I also searched in forums, including SO, and couldn't find any solid reference that could confirm this. I would appreciate it if someone could confirm it and ideally include a useful source.
EDIT
I am looking for answers other than "trust the docs" or "implement checksums" for two reasons:
I think that there need to be clear and easy-to-find references to what seems to be one of the key selling points of ZMQ.
The system under design must be fast, thus not wasting time in redundant ops.
|
[
"Looking at the documentation i read it as the entire purpose of ZeroMO is reliable transmission.\n",
"I'd say that it depends on what you're worried about most.\nZMQ is built on top of tcp and other protocols, and these in turn rely on underlying things like IP, Ethernet. When you start getting down towards these physical layers of a network stack, there's integrity checking built in to provide reliable services to the layers above (include application libraries like ZMQ). So, in ordinary circumstances, one would not need to put in your own integrity checks because that's already taken care of for you. ZMQ does not do anything extra so far as I know - it simply assumes the underlying network stack always delivers bytes properly, intact.\nHowever, such underlying integrity checks do not guarantee to eliminate all bit errors, they just get the bit error rate up to some very good level where most applications don't care (e.g. 1 in 10^12) and probably never, ever experience a problem. Adding in a supplementary checksum is going to push the net b.e.r. to even more very safe levels.\nIf you're worried about some active attack by a malicious third party against the zmtp protocol itself, then you may wish to introduce your own integrity check, likely cryptographic in nature, with all that entails. This might involve using libsodium along with ZeroMQ. That certainly was a thing, and probably still is unless I'm out of date and have missed a deprecation notice.\nSummary\nI'd say:\n\nOrdinary app, runtime of days, weeks - nothing extra needed\nVery long running app (years) and bit errors are utterly unacceptable (e.g. a safety critical application) - add a checksum\nNeeds to operate in a hostile environment where protocol attacks may occur - add a strong encryption layer like libsodium.\n\n",
"It is recommended to perform integrity checks on the client and server sides in a distributed application using the ZeroMQ framework. This is because ZeroMQ does not provide any built-in mechanism for ensuring the integrity of the messages that are exchanged between the client and server.\nWhile ZeroMQ does provide reliable message delivery using a simple framing mechanism, it does not guarantee the integrity of the messages. This means that messages may be corrupted or altered during transmission, and it is up to the application to implement mechanisms for detecting and handling such errors.\nOne way to ensure the integrity of messages in a distributed application using ZeroMQ is to use checksums or hash-based message authentication codes (HMACs) to verify the integrity of the messages. These mechanisms can be implemented on the client and server sides, and can be used to detect any errors or modifications to the messages that may have occurred during transmission.\nFor more information about ensuring the integrity of messages in a distributed application using ZeroMQ, you can refer to the following resources:\nThe ZeroMQ documentation: https://zeromq.org/documentation/\nThe ZeroMQ - The guide book: http://zguide.zeromq.org/page:all#toc39\nThe \"Ensuring Message Integrity\" section of the ZeroMQ - The guide book: http://zguide.zeromq.org/page:all#toc39\n",
"According to the documentation for the ZeroMQ framework, the framework does not perform any data integrity checks by default. This means that you will need to implement your own checksum or data integrity verification mechanism in your application if you want to ensure that the data being exchanged between the client and server is not corrupted or altered in any way.\nHere is a quote from the ZeroMQ - The Guide documentation:\n\nZeroMQ does not validate that the message you received is the same\nmessage you sent. ZeroMQ does not check for duplicate messages,\nguarantee message ordering, or assure that every message was\ndelivered. ZeroMQ is a simple transport layer that passes messages\nbetween applications. It is your responsibility to build any\nreliability or security on top of it.\n\nTherefore, it is not redundant to include checksum or data integrity verification in your application. In fact, it is recommended to do so in order to ensure that the messages exchanged between the client and server are not corrupted or altered.\nHere is a link to the relevant section of the ZeroMQ - The Guide documentation:\nhttps://zguide.zeromq.org/page:all#Protocol-Design-Principles\n",
"The ZeroMQ (ZMQ) messaging library does not require integrity checks in the application layer. ZMQ provides a set of low-level communication protocols that enable applications to exchange messages with each other in a fast and efficient manner. ZMQ does not include any built-in mechanisms for integrity checks, such as error correction or checksum calculations, at the application layer.\nHowever, this does not mean that applications built on top of ZMQ do not need integrity checks. Depending on the specific requirements and goals of the application, it may be necessary to implement integrity checks in the application layer to ensure the correctness and reliability of the data being exchanged. For example, an application may want to include checksum calculations or error correction mechanisms in order to detect and correct errors in the transmitted data.\nIn general, the decision to include integrity checks in an application built on top of ZMQ will depend on the specific requirements and goals of the application, as well as the trade-offs between performance, reliability, and complexity. It is up to the developers of the application to determine whether integrity checks are necessary, and to implement appropriate mechanisms to ensure the integrity of the data being exchanged.\n"
] |
[
0,
0,
0,
0,
0
] |
[] |
[] |
[
"distributed_system",
"zeromq"
] |
stackoverflow_0074560522_distributed_system_zeromq.txt
|
Q:
Order of evaluation in v != std::exchange(v, predecessor(v))
I keep finding more idioms that lend themselves to std::exchange.
Today I found myself writing this in an answer:
do {
path.push_front(v);
} while (v != std::exchange(v, pmap[v]));
I like it a lot more than, say
do {
path.push_front(v);
if (v == pmap[v])
break;
v= pmap[v];
} while (true);
Hopefully for obvious reasons.
However, I'm not big on standardese and I can't help but worry that lhs != rhs doesn't guarantee that the right-hand side expression isn't fully evaluated before the left-hand-side. That would make it a tautologous comparison - which would by definition return true.
The code, however, does run correctly, apparently evaluating lhs first.
Does anyone know
whether the standard guarantees this evaluation order
if it has changed in recent standards, which standard version first specified it?
PS. I realize that this is a special case of f(a,b) where f is operator!=. I've tried to answer my own query using the information found here but have failed to reach a conclusion to date:
https://en.cppreference.com/w/cpp/language/eval_order
https://en.wikipedia.org/wiki/Sequence_point
Order of evaluation in C++ function parameters
What are the evaluation order guarantees introduced by C++17?
A:
C++17 introduced rules on sequences. What was UB before is now well defined. This applies to arguments to function calls as well as a select assortment of operators:
sequenced before is an asymmetric, transitive, pair-wise relationship
between evaluations within the same thread.
If A is sequenced before B (or, equivalently, B is sequenced after A), then evaluation of A will be complete before evaluation of B
begins.
The built-in != however is not sequenced (see link above). A function call would be sequenced but the order of evaluation is not guaranteed:
In a function call, value computations and side effects of the
initialization of every parameter are indeterminately sequenced with
respect to value computations and side effects of any other parameter.
(emphasis added)
To my reading, even if you wrote a wrapper function, your compiler would not be required to evaluate v first, then std::exchange(v, pmap[v]) and finally equal(..). And reversing the evaluation order, I believe, would change semantics in your example.
So sadly, as nice as std::exchange is, in this case, it is not guaranteed to do what you need it to.
A:
For the built-in != operator, or an overload taking at least the first argument by value (i.e. operator !=(T, T)):
This is UB per [intro.execution]/10:
Except where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced.
[...] If a side effect on a memory location is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not potentially concurrent, the behavior is undefined.
(The != operator does not have any special sequencing properties.)
Whether != is overloaded for v's type does not affect sequencing rules (since you're not calling it using function call notation) ([over.match.oper]/2):
[...] the operator notation is first transformed to the equivalent function-call notation [...] However, the operands are sequenced in the order prescribed for the built-in operator.
(And even if you did use the function call notation, the operands would still be indeterminately sequenced, meaning no UB but no guarantee of consistent results either.)
In the case of an overload taking the operands by reference (such as operator !=(const T&, const T&) or T::operator !=(const T&) const):
The behavior is well-defined.
Binding a reference (directly) does not access the object (as in the case of v in your example), nor does calling a member function on it, so there's no conflict between the two operands. And the access that happens in the body of the function (the actual comparison) is sequenced after the initialization of its parameters ([intro.execution]/11):
When calling a function [...], every value computation and side effect associated with any argument expression [...] is sequenced before execution of every expression or statement in the body of the called function.
Which also means that the comparison will always take place after the side effects of both operands. In your example this means you'll always be comparing the post-exchange value of v to the value returned by exchange (v's previous one).
The above is true before C++17 as well, though for slightly different reasons.
In C++14, as opposed to C++17 ([expr]/2):
Overloaded operators obey the rules for syntax specified in Clause [expr], but the requirements of [...] evaluation order are replaced by the rules for function call.
...but the arguments in a function call are themselves unsequenced as opposed to indeterminately sequenced ([expr.call]/8):
The evaluations of the postfix expression and of the arguments are all unsequenced relative to one another.
(These quotes are from non-normative notes but they illustrate the point well.)
Which means the effect is still the same: operands are unsequenced, UB in the case where evaluating v accesses its value, well-defined otherwise. The only difference is that explicitly using function call syntax in the first case does not prevent UB.
A:
Regardless of whether or not this works, it is almost completely unreadable code -- and remember that we write code mostly for humans, not compilers!
As a general rule, conditionals that have side effects are often hard to read and understand because our brains are not equipped to do two things at once: understand whether the condition is true or false, and also keep track of what changes. Do one thing at a time if you want others to easily understand what is happening. In your case, you are adding a third dimension to it: when something changes; using conditionals in which you have a function call that changes something, and comparing the changed result against something else, is just impossible to read :-)
If you absolutely must use this idiom, separate the three parts:
do {
path.push_front(v);
} while ([&v,&pmap]() mutable
{
auto old_v = v;
v = pmap[v];
return v != old_v;
} ());
This code is equivalent, of course, but I think it is far easier to read. Although I will claim that it is still harder to read than your alternative code without using std::exchange() :-)
A:
In the expression v != std::exchange(v, pmap[v]), the standard does not specify the order in which v and std::exchange(v, pmap[v]) are evaluated. Therefore, it is not safe to use this idiom in your code, because the behavior of your code could be undefined.
To ensure that your code has well-defined behavior, you can use the std::atomic_compare_exchange_strong function from the <atomic> header instead of std::exchange. This function performs an atomic compare-and-exchange operation, which guarantees that the comparison and the exchange are performed atomically and in the specified order.
Here is an example of how you could use std::atomic_compare_exchange_strong to safely implement the loop in your code:
std::atomic<int> v{ ... }; // initialize v with the initial value
std::map<int, int> pmap{ ... }; // initialize pmap with the map of values
std::deque<int> path;
while (true)
{
// Perform an atomic compare-and-exchange operation.
// If v is equal to pmap[v], set v to pmap[v] and return true.
// Otherwise, do nothing and return false.
if (!std::atomic_compare_exchange_strong(&v, &pmap[v], pmap[v]))
break;
path.push_front(v);
}
This code guarantees that the comparison and the exchange are performed in the specified order, and therefore has well-defined behavior.
|
Order of evaluation in v != std::exchange(v, predecessor(v))
|
I keep finding more idioms that lend themselves to std::exchange.
Today I found myself writing this in an answer:
do {
path.push_front(v);
} while (v != std::exchange(v, pmap[v]));
I like it a lot more than, say
do {
path.push_front(v);
if (v == pmap[v])
break;
v= pmap[v];
} while (true);
Hopefully for obvious reasons.
However, I'm not big on standardese and I can't help but worry that lhs != rhs doesn't guarantee that the right-hand side expression isn't fully evaluated before the left-hand-side. That would make it a tautologous comparison - which would by definition return true.
The code, however, does run correctly, apparently evaluating lhs first.
Does anyone know
whether the standard guarantees this evaluation order
if it has changed in recent standards, which standard version first specified it?
PS. I realize that this is a special case of f(a,b) where f is operator!=. I've tried to answer my own query using the information found here but have failed to reach a conclusion to date:
https://en.cppreference.com/w/cpp/language/eval_order
https://en.wikipedia.org/wiki/Sequence_point
Order of evaluation in C++ function parameters
What are the evaluation order guarantees introduced by C++17?
|
[
"C++17 introduced rules on sequences. What was UB before is now well defined. This applies to arguments to function calls as well as a select assortment of operators:\n\nsequenced before is an asymmetric, transitive, pair-wise relationship\nbetween evaluations within the same thread.\n\nIf A is sequenced before B (or, equivalently, B is sequenced after A), then evaluation of A will be complete before evaluation of B\nbegins.\n\n\nThe built-in != however is not sequenced (see link above). A function call would be sequenced but the order of evaluation is not guaranteed:\n\n\nIn a function call, value computations and side effects of the\ninitialization of every parameter are indeterminately sequenced with\nrespect to value computations and side effects of any other parameter.\n\n\n(emphasis added)\nTo my reading, even if you wrote a wrapper function, your compiler would not be required to evaluate v first, then std::exchange(v, pmap[v]) and finally equal(..). And reversing the evaluation order, I believe, would change semantics in your example.\nSo sadly, as nice as std::exchange is, in this case, it is not guaranteed to do what you need it to.\n",
"For the built-in != operator, or an overload taking at least the first argument by value (i.e. operator !=(T, T)):\nThis is UB per [intro.execution]/10:\n\nExcept where noted, evaluations of operands of individual operators and of subexpressions of individual expressions are unsequenced.\n\n\n[...] If a side effect on a memory location is unsequenced relative to either another side effect on the same memory location or a value computation using the value of any object in the same memory location, and they are not potentially concurrent, the behavior is undefined.\n\n(The != operator does not have any special sequencing properties.)\nWhether != is overloaded for v's type does not affect sequencing rules (since you're not calling it using function call notation) ([over.match.oper]/2):\n\n[...] the operator notation is first transformed to the equivalent function-call notation [...] However, the operands are sequenced in the order prescribed for the built-in operator.\n\n(And even if you did use the function call notation, the operands would still be indeterminately sequenced, meaning no UB but no guarantee of consistent results either.)\n\nIn the case of an overload taking the operands by reference (such as operator !=(const T&, const T&) or T::operator !=(const T&) const):\nThe behavior is well-defined.\nBinding a reference (directly) does not access the object (as in the case of v in your example), nor does calling a member function on it, so there's no conflict between the two operands. And the access that happens in the body of the function (the actual comparison) is sequenced after the initialization of its parameters ([intro.execution]/11):\n\nWhen calling a function [...], every value computation and side effect associated with any argument expression [...] is sequenced before execution of every expression or statement in the body of the called function.\n\nWhich also means that the comparison will always take place after the side effects of both operands. In your example this means you'll always be comparing the post-exchange value of v to the value returned by exchange (v's previous one).\n\nThe above is true before C++17 as well, though for slightly different reasons.\nIn C++14, as opposed to C++17 ([expr]/2):\n\nOverloaded operators obey the rules for syntax specified in Clause [expr], but the requirements of [...] evaluation order are replaced by the rules for function call.\n\n...but the arguments in a function call are themselves unsequenced as opposed to indeterminately sequenced ([expr.call]/8):\n\nThe evaluations of the postfix expression and of the arguments are all unsequenced relative to one another.\n\n(These quotes are from non-normative notes but they illustrate the point well.)\nWhich means the effect is still the same: operands are unsequenced, UB in the case where evaluating v accesses its value, well-defined otherwise. The only difference is that explicitly using function call syntax in the first case does not prevent UB.\n",
"Regardless of whether or not this works, it is almost completely unreadable code -- and remember that we write code mostly for humans, not compilers!\nAs a general rule, using conditionals that have side effects are often hard to read and understand because our brains are not equipped to do two things at once: Understand whether the condition is true or false, and also keep track of what changes. Do one thing at a time if you want others to easily understand what is happening. In your case, you are adding a third dimension to it: when something changes; using conditionals in which you have a function call that changes something, and comparing the changed result against something else is just impossible to read :-)\nIf you absolutely must use this idiom, separate the three parts:\ndo {\n path.push_front(v);\n} while ([&v,&pmap]() mutable\n {\n auto old_v = v; \n v = pmap[v];\n return v != old_v;\n } ());\n\nThis code is equivalent, of course, but I think it is far easier to read. Although I will claim that it is still harder to read than your alternative code without using std::exchange() :-)\n",
"In the expression v != std::exchange(v, pmap[v]), the standard does not specify the order in which v and std::exchange(v, pmap[v]) are evaluated. Therefore, it is not safe to use this idiom in your code, because the behavior of your code could be undefined.\nTo ensure that your code has well-defined behavior, you can use the std::atomic_compare_exchange_strong function from the <atomic> header instead of std::exchange. This function performs an atomic compare-and-exchange operation, which guarantees that the comparison and the exchange are performed atomically and in the specified order.\nHere is an example of how you could use std::atomic_compare_exchange_strong to safely implement the loop in your code:\nstd::atomic<int> v{ ... }; // initialize v with the initial value\nstd::map<int, int> pmap{ ... }; // initialize pmap with the map of values\nstd::deque<int> path;\n\nwhile (true)\n{\n// Perform an atomic compare-and-exchange operation.\n// If v is equal to pmap[v], set v to pmap[v] and return true.\n// Otherwise, do nothing and return false.\nif (!std::atomic_compare_exchange_strong(&v, &pmap[v], pmap[v]))\nbreak;\npath.push_front(v);\n}\n\nThis code guarantees that the comparison and the exchange are performed in the specified order, and therefore has well-defined behavior.\n"
] |
[
19,
14,
2,
1
] |
[] |
[] |
[
"c++",
"language_lawyer",
"sequence_points"
] |
stackoverflow_0074601619_c++_language_lawyer_sequence_points.txt
|
Q:
Get the first element of an array
I have an array:
array( 4 => 'apple', 7 => 'orange', 13 => 'plum' )
I would like to get the first element of this array. Expected result: string apple
One requirement: it cannot be done with passing by reference, so array_shift is not a good solution.
How can I do this?
A:
Original answer, but costly (O(n)):
array_shift(array_values($array));
In O(1):
array_pop(array_reverse($array));
Other use cases, etc...
If modifying (in the sense of resetting array pointers) of $array is not a problem, you might use:
reset($array);
This should be theoretically more efficient, if a array "copy" is needed:
array_shift(array_slice($array, 0, 1));
With PHP 5.4+ (but might cause an index error if empty):
array_values($array)[0];
A:
As Mike pointed out (the easiest possible way):
$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
echo reset($arr); // Echoes "apple"
If you want to get the key: (execute it after reset)
echo key($arr); // Echoes "4"
From PHP's documentation:
mixed reset ( array | object &$array );
Description:
reset() rewinds array's internal pointer to the first element and returns the value of the first array element, or FALSE if the array is
empty.
A:
$first_value = reset($array); // First element's value
$first_key = key($array); // First element's key
A:
current($array)
returns the first element of an array, according to the PHP manual.
Every array has an internal pointer to its "current" element, which is initialized to the first element inserted into the array.
So it works until you have re-positioned the array pointer; otherwise you'll have to use reset(), which will rewind the array and return its first element.
According to the PHP manual reset.
reset() rewinds array's internal pointer to the first element and returns the value of the first array element.
Examples of current() and reset()
$array = array('step one', 'step two', 'step three', 'step four');
// by default, the pointer is on the first element
echo current($array) . "<br />\n"; // "step one"
//Forward the array pointer and then reset it
// skip two steps
next($array);
next($array);
echo current($array) . "<br />\n"; // "step three"
// reset pointer, start again on step one
echo reset($array) . "<br />\n"; // "step one"
A:
$arr = $array = array( 9 => 'apple', 7 => 'orange', 13 => 'plum' );
echo reset($arr); // echoes 'apple'
If you don't want to lose the current pointer position, just create an alias for the array.
A:
You can get the Nth element with a language construct, "list":
// First item
list($firstItem) = $yourArray;
// First item from an array that is returned from a function
list($firstItem) = functionThatReturnsArray();
// Second item
list( , $secondItem) = $yourArray;
With the array_keys function you can do the same for keys:
list($firstKey) = array_keys($yourArray);
list(, $secondKey) = array_keys($yourArray);
A:
PHP 7.3 added two functions for getting the first and the last key of an array directly without modification of the original array and without creating any temporary objects:
array_key_first
array_key_last
Apart from being semantically meaningful, these functions don't even move the array pointer (as foreach would do).
Having the keys, one can get the values by the keys directly.
Examples (all of them require PHP 7.3+)
Getting the first/last key and value:
$my_array = ['IT', 'rules', 'the', 'world'];
$first_key = array_key_first($my_array);
$first_value = $my_array[$first_key];
$last_key = array_key_last($my_array);
$last_value = $my_array[$last_key];
Getting the first/last value as one-liners, assuming the array cannot be empty:
$first_value = $my_array[ array_key_first($my_array) ];
$last_value = $my_array[ array_key_last($my_array) ];
Getting the first/last value as one-liners, with defaults for empty arrays:
$first_value = empty($my_array) ? 'default' : $my_array[ array_key_first($my_array) ];
$last_value = empty($my_array) ? 'default' : $my_array[ array_key_last($my_array) ];
A:
PHP 5.4+:
array_values($array)[0];
A:
Some arrays don't work with functions like list, reset or current. Maybe they're "faux" arrays - partially implementing ArrayIterator, for example.
If you want to pull the first value regardless of the array, you can short-circuit an iterator:
foreach($array_with_unknown_keys as $value) break;
Your value will then be available in $value and the loop will break after the first iteration. This is more efficient than copying a potentially large array to a function like array_shift(array_values($arr)).
You can grab the key this way too:
foreach($array_with_unknown_keys as $key=>$value) break;
If you're calling this from a function, simply return early:
function grab_first($arr) {
foreach($arr as $value) return $value;
}
A:
Suppose:
$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
Just use:
$array[key($array)]
to get first element or
key($array)
to get first key.
Or you can unset the first element if you want to remove it.
A:
From Laravel's helpers:
function head($array)
{
return reset($array);
}
Since the array is passed by value to the function, reset() affects the internal pointer of a copy of the array and doesn't touch the original array (note that it returns false if the array is empty).
Usage example:
$data = ['foo', 'bar', 'baz'];
current($data); // foo
next($data); // bar
head($data); // foo
next($data); // baz
Also, here is an alternative. It's very marginally faster, but more interesting. It lets easily change the default value if the array is empty:
function head($array, $default = null)
{
foreach ($array as $item) {
return $item;
}
return $default;
}
For the record, here is another answer of mine, for the array's last element.
A:
Keep this simple! There are lots of correct answers here, but to minimize all the confusion, these two work and reduce a lot of overhead:
key($array) gets the first key of an array
current($array) gets the first value of an array
EDIT:
Regarding the comments below. The following example will output: string(13) "PHP code test"
$array = array
(
'1' => 'PHP code test',
'foo' => 'bar', 5 , 5 => 89009,
'case' => 'Random Stuff: '.rand(100,999),
'PHP Version' => phpversion(),
0 => 'ending text here'
);
var_dump(current($array));
A:
Simply do:
array_shift(array_slice($array,0,1));
A:
$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
foreach($arr as $first) break;
echo $first;
Output:
apple
A:
I would do echo current($array) .
A:
PHP 7.3 added two functions for getting the first and the last key of an array directly without modification of the original array and without creating any temporary objects:
array_key_first
array_key_last
"There are several ways to provide this functionality for versions prior to PHP 7.3.0. It is possible to use array_keys(), but that may be rather inefficient. It is also possible to use reset() and key(), but that may change the internal array pointer. An efficient solution, which does not change the internal array pointer, written as polyfill:"
<?php
if (!function_exists('array_key_first')) {
function array_key_first($arr) {
foreach($arr as $key => $unused) {
return $key;
}
return NULL;
}
}
if (!function_exists('array_key_last')) {
function array_key_last($arr) {
return array_key_first(array_reverse($arr, true));
}
}
?>
A:
$myArray = array (4 => 'apple', 7 => 'orange', 13 => 'plum');
$arrayKeys = array_keys($myArray);
// The first element of your array is:
echo $myArray[$arrayKeys[0]];
A:
$array=array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
$firstValue = each($array)[1];
This is much more efficient than array_values() because the each() function does not copy the entire array.
For more info see http://www.php.net/manual/en/function.each.php
A:
A kludgy way is:
$foo = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
function get_first ($foo) {
foreach ($foo as $k=>$v){
return $v;
}
}
print get_first($foo);
A:
Most of these work! BUT for a quick single line (low resource) call:
$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
echo $array[key($array)];
// key($array) -> will return the first key (which is 4 in this example)
Although this works, and decently well, please also see my additional answer:
https://stackoverflow.com/a/48410351/1804013
A:
Use:
$first = array_slice($array, 0, 1);
$val= $first[0];
By default, array_slice does not preserve keys, so we can safely use zero as the index.
A:
This is a little late to the game, but I was presented with a problem where my array contained array elements as children inside it, and thus I couldn't just get a string representation of the first array element. By using PHP's current() function, I managed this:
<?php
$original = array(4 => array('one', 'two'), 7 => array('three', 'four'));
reset($original); // to reset the internal array pointer...
$first_element = current($original); // get the current element...
?>
Thanks to all the current solutions helped me get to this answer, I hope this helps someone sometime!
A:
<?php
$arr = array(3 => "Apple", 5 => "Ball", 11 => "Cat");
echo array_values($arr)[0]; // Outputs: Apple
?>
Other Example:
<?php
$arr = array(3 => "Apple", 5 => "Ball", 11 => "Cat");
echo current($arr); // Outputs: Apple
echo reset($arr); // Outputs: Apple
echo next($arr); // Outputs: Ball
echo current($arr); // Outputs: Ball
echo reset($arr); // Outputs: Apple
?>
A:
I think using array_values would be your best bet here. You could return the value at index zero from the result of that function to get 'apple'.
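A minimal sketch of that suggestion, using the array from the question (the [0] dereference on a function result needs PHP 5.4+):
$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
echo array_values($array)[0]; // 'apple'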
A:
Two solutions for you.
Solution 1 - Just use the key. You have not said that you can not use it. :)
<?php
// Get the first element of this array.
$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
// Gets the first element by key
$result = $array[4];
// Expected result: string apple
assert('$result === "apple" /* Expected result: string apple. */');
?>
Solution 2 - array_flip() + key()
<?php
// Get first element of this array. Expected result: string apple
$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
// Turn values to keys
$array = array_flip($array);
// You might thrown a reset in just to make sure
// that the array pointer is at the first element.
// Also, reset returns the first element.
// reset($myArray);
// Return the first key
$firstKey = key($array);
assert('$firstKey === "apple" /* Expected result: string apple. */');
?>
Solution 3 - array_keys()
echo $array[array_keys($array)[0]];
A:
I imagine the author was just looking for a way to get the first element of an array after getting it from some function (mysql_fetch_row, for example) without generating a STRICT "Only variables should be passed by reference".
If so, almost all the ways described here will trigger this message... and some of them use a lot of additional memory duplicating the array (or some part of it). An easy way to avoid it is just assigning the value inline before calling any of those functions:
$first_item_of_array = current($tmp_arr = mysql_fetch_row(...));
// or
$first_item_of_array = reset($tmp_arr = func_get_my_huge_array());
This way you don't get the STRICT message on screen, nor in logs, and you don't create any additional arrays. It works with both indexed AND associative arrays.
A:
No one has suggested using the ArrayIterator class:
$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
$first_element = (new ArrayIterator($array))->current();
echo $first_element; //'apple'
gets around the by reference stipulation of the OP.
A:
This is not such a simple question in the real world. Suppose that we have these examples of possible responses that you can find in some libraries.
$array1 = array();
$array2 = array(1,2,3,4);
$array3 = array('hello'=>'world', 'foo'=>'bar');
$array4 = null;
var_dump('reset1', reset($array1));
var_dump('reset2', reset($array2));
var_dump('reset3', reset($array3));
var_dump('reset4', reset($array4)); // Warning
var_dump('array_shift1', array_shift($array1));
var_dump('array_shift2', array_shift($array2));
var_dump('array_shift3', array_shift($array3));
var_dump('array_shift4', array_shift($array4)); // Warning
var_dump('each1', each($array1));
var_dump('each2', each($array2));
var_dump('each3', each($array3));
var_dump('each4', each($array4)); // Warning
var_dump('array_values1', array_values($array1)[0]); // Notice
var_dump('array_values2', array_values($array2)[0]);
var_dump('array_values3', array_values($array3)[0]);
var_dump('array_values4', array_values($array4)[0]); // Warning
var_dump('array_slice1', array_slice($array1, 0, 1));
var_dump('array_slice2', array_slice($array2, 0, 1));
var_dump('array_slice3', array_slice($array3, 0, 1));
var_dump('array_slice4', array_slice($array4, 0, 1)); // Warning
list($elm) = $array1; // Notice
var_dump($elm);
list($elm) = $array2;
var_dump($elm);
list($elm) = $array3; // Notice
var_dump($elm);
list($elm) = $array4;
var_dump($elm);
As you can see, we have several 'one line' solutions that work well in some cases, but not in all.
In my opinion, you should only handle this with actual arrays.
Now, talking about performance, and assuming we always have an array guarded like this:
$elm = empty($array) ? null : ...($array);
...you could use any of the following without errors:
$array[count($array)-1];
array_shift
reset
array_values
array_slice
array_shift is faster than reset, which is faster than [count()-1], and these three are faster than array_values and array_slice.
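For what it's worth, a rough sketch of how such claims could be timed; the array size and iteration count here are arbitrary, and the results will vary by PHP version:
// time each approach on a fresh copy so earlier runs don't mutate the data
$array = range(1, 1000);
$n = 100000;

$t = microtime(true);
for ($i = 0; $i < $n; $i++) { $copy = $array; reset($copy); }
printf("reset:        %.4fs\n", microtime(true) - $t);

$t = microtime(true);
for ($i = 0; $i < $n; $i++) { $copy = $array; array_shift($copy); }
printf("array_shift:  %.4fs\n", microtime(true) - $t);

$t = microtime(true);
for ($i = 0; $i < $n; $i++) { $copy = $array; $v = array_values($copy)[0]; }
printf("array_values: %.4fs\n", microtime(true) - $t);

$t = microtime(true);
for ($i = 0; $i < $n; $i++) { $copy = $array; $v = array_slice($copy, 0, 1); }
printf("array_slice:  %.4fs\n", microtime(true) - $t);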
A:
Use array_keys() to access the keys of your associative array as a numerically indexed array, which can then be used as keys into the original array.
The solution is arr[0]:
(Note that since the array of keys is 0-based, the 1st element is index 0.)
You can use a variable and subtract one from it to get your logic, i.e. 1 => 'apple'.
$i = 1;
$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
echo $arr[array_keys($arr)[$i-1]];
Output:
apple
Well, for simplicity- just use:
$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );
echo $arr[array_keys($arr)[0]];
Output:
apple
With the first method you can get not just the first element, but also treat an associative array like an indexed array.
A:
I don't like fiddling with the array's internal pointer, but it's also inefficient to build a second array with array_keys() or array_values(), so I usually define this:
function array_first(array $f) {
foreach ($f as $v) {
return $v;
}
throw new Exception('array was empty');
}
A:
You can get the first element by using this coding:
$array_key_set = array_keys($array);
$first_element = $array[$array_key_set[0]];
Or use:
$i=0;
foreach($array as $arr)
{
if($i==0)
{
$first_element=$arr;
break;
}
$i++;
}
echo $first_element;
A:
One line closure, copy, reset:
<?php
$fruits = array(4 => 'apple', 7 => 'orange', 13 => 'plum');
echo (function() use ($fruits) { return reset($fruits); })();
Output:
apple
Alternatively, the shorter arrow function syntax:
echo (fn() => reset($fruits))();
This uses by-value variable binding as above. Neither will mutate the original array's pointer.
A:
A small change to what Sarfraz posted is:
$array = array(1, 2, 3, 4, 5);
$output = array_slice($array, 0, 1);
print_r ($output);
A:
I like the "list" example, but "list" only works on the left-hand-side of an assignment. If we don't want to assign a variable, we would be forced to make up a temporary name, which at best pollutes our scope and at worst overwrites an existing value:
list($x) = some_array();
var_dump($x);
The above will overwrite any existing value of $x, and the $x variable will hang around as long as this scope is active (the end of this function/method, or forever if we're in the top-level). This can be worked around using call_user_func and an anonymous function, but it's clunky:
var_dump(call_user_func(function($arr) { list($x) = $arr; return $x; },
some_array()));
If we use anonymous functions like this, we can actually get away with reset and array_shift, even though they use pass-by-reference. This is because calling a function will bind its arguments, and these arguments can be passed by reference:
var_dump(call_user_func(function($arr) { return reset($arr); },
array_values(some_array())));
However, this is actually overkill, since call_user_func will perform this temporary assignment internally. This lets us treat pass-by-reference functions as if they were pass-by-value, without any warnings or errors:
var_dump(call_user_func('reset', array_values(some_array())));
A:
Also worth bearing in mind is the context in which you're doing this, as an exhaustive check can be expensive and not always necessary.
For example, this solution works fine for the situation in which I'm using it (but obviously it can't be relied on in all cases...)
/**
* A quick and dirty way to determine whether the passed in array is associative or not, assuming that either:<br/>
* <br/>
* 1) All the keys are strings - i.e. associative<br/>
* or<br/>
* 2) All the keys are numeric - i.e. not associative<br/>
*
* @param array $objects
* @return boolean
*/
private function isAssociativeArray(array $objects)
{
// This isn't true in the general case, but it's a close enough (and quick) approximation for the context in
// which we're using it.
reset($objects);
return count($objects) > 0 && is_string(key($objects));
}
A:
Nice one with a combination of array_slice and implode:
$arr = array(1, 2, 3);
echo implode(array_slice($arr, 0, 1));
// Outputs 1
/*---------------------------------*/
$arr = array(
'key_1' => 'One',
'key_2' => 'Two',
'key_3' => 'Three',
);
echo implode(array_slice($arr, 0, 1));
// Outputs One
A:
If you are using Laravel you can do:
$array = ['a', 'b', 'c'];
$first = collect($array)->first();
|
Get the first element of an array
|
I have an array:
array( 4 => 'apple', 7 => 'orange', 13 => 'plum' )
I would like to get the first element of this array. Expected result: string apple
One requirement: it cannot be done with passing by reference, so array_shift is not a good solution.
How can I do this?
|
[
"Original answer, but costly (O(n)):\narray_shift(array_values($array));\n\nIn O(1):\narray_pop(array_reverse($array));\n\nOther use cases, etc...\nIf modifying (in the sense of resetting array pointers) of $array is not a problem, you might use:\nreset($array);\n\nThis should be theoretically more efficient, if a array \"copy\" is needed:\narray_shift(array_slice($array, 0, 1));\n\nWith PHP 5.4+ (but might cause an index error if empty):\narray_values($array)[0];\n\n",
"As Mike pointed out (the easiest possible way):\n$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\necho reset($arr); // Echoes \"apple\"\n\nIf you want to get the key: (execute it after reset)\necho key($arr); // Echoes \"4\"\n\nFrom PHP's documentation:\n\nmixed reset ( array | object &$array );\n\nDescription:\n\nreset() rewinds array's internal pointer to the first element and returns the value of the first array element, or FALSE if the array is\nempty.\n\n",
"$first_value = reset($array); // First element's value\n$first_key = key($array); // First element's key\n\n",
"current($array)\nreturns the first element of an array, according to the PHP manual.\n\nEvery array has an internal pointer to its \"current\" element, which is initialized to the first element inserted into the array.\n\nSo it works until you have re-positioned the array pointer, and otherwise you'll have to use reset() which ll rewind array and ll return first element of array\nAccording to the PHP manual reset.\n\nreset() rewinds array's internal pointer to the first element and returns the value of the first array element.\n\nExamples of current() and reset()\n$array = array('step one', 'step two', 'step three', 'step four');\n\n// by default, the pointer is on the first element\necho current($array) . \"<br />\\n\"; // \"step one\"\n\n//Forward the array pointer and then reset it\n\n// skip two steps\nnext($array);\nnext($array);\necho current($array) . \"<br />\\n\"; // \"step three\"\n\n// reset pointer, start again on step one\necho reset($array) . \"<br />\\n\"; // \"step one\"\n\n",
"$arr = $array = array( 9 => 'apple', 7 => 'orange', 13 => 'plum' );\necho reset($arr); // echoes 'apple'\n\nIf you don't want to lose the current pointer position, just create an alias for the array.\n",
"You can get the Nth element with a language construct, \"list\":\n// First item\nlist($firstItem) = $yourArray;\n\n// First item from an array that is returned from a function\nlist($firstItem) = functionThatReturnsArray();\n\n// Second item\nlist( , $secondItem) = $yourArray;\n\nWith the array_keys function you can do the same for keys:\nlist($firstKey) = array_keys($yourArray);\nlist(, $secondKey) = array_keys($yourArray);\n\n",
"PHP 7.3 added two functions for getting the first and the last key of an array directly without modification of the original array and without creating any temporary objects:\n\narray_key_first \narray_key_last \n\nApart from being semantically meaningful, these functions don't even move the array pointer (as foreach would do). \nHaving the keys, one can get the values by the keys directly.\n\nExamples (all of them require PHP 7.3+)\nGetting the first/last key and value:\n$my_array = ['IT', 'rules', 'the', 'world'];\n\n$first_key = array_key_first($my_array);\n$first_value = $my_array[$first_key];\n\n$last_key = array_key_last($my_array);\n$last_value = $my_array[$last_key];\n\nGetting the first/last value as one-liners, assuming the array cannot be empty:\n$first_value = $my_array[ array_key_first($my_array) ];\n\n$last_value = $my_array[ array_key_last($my_array) ];\n\nGetting the first/last value as one-liners, with defaults for empty arrays:\n$first_value = empty($my_array) ? 'default' : $my_array[ array_key_first($my_array) ];\n\n$last_value = empty($my_array) ? 'default' : $my_array[ array_key_last($my_array) ];\n\n",
"PHP 5.4+:\narray_values($array)[0];\n\n",
"Some arrays don't work with functions like list, reset or current. Maybe they're \"faux\" arrays - partially implementing ArrayIterator, for example.\nIf you want to pull the first value regardless of the array, you can short-circuit an iterator:\nforeach($array_with_unknown_keys as $value) break;\n\nYour value will then be available in $value and the loop will break after the first iteration. This is more efficient than copying a potentially large array to a function like array_unshift(array_values($arr)).\nYou can grab the key this way too:\nforeach($array_with_unknown_keys as $key=>$value) break;\n\nIf you're calling this from a function, simply return early:\nfunction grab_first($arr) {\n foreach($arr as $value) return $value;\n}\n\n",
"Suppose:\n$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\n\nJust use:\n$array[key($array)]\n\nto get first element or\nkey($array)\n\nto get first key.\nOr you can unlink the first if you want to remove it.\n",
"From Laravel's helpers:\nfunction head($array)\n{\n return reset($array);\n}\n\nThe array being passed by value to the function, the reset() affects the internal pointer of a copy of the array, and it doesn't touch the original\narray (note it returns false if the array is empty).\nUsage example:\n$data = ['foo', 'bar', 'baz'];\n\ncurrent($data); // foo\nnext($data); // bar\nhead($data); // foo\nnext($data); // baz\n\nAlso, here is an alternative. It's very marginally faster, but more interesting. It lets easily change the default value if the array is empty:\nfunction head($array, $default = null)\n{\n foreach ($array as $item) {\n return $item;\n }\n return $default;\n}\n\n\nFor the record, here is another answer of mine, for the array's last element.\n",
"Keep this simple! There are lots of correct answers here, but to minimize all the confusion, these two work and reduce a lot of overhead:\nkey($array) gets the first key of an array \ncurrent($array) gets the first value of an array\n\nEDIT:\nRegarding the comments below. The following example will output: string(13) \"PHP code test\"\n$array = array\n(\n '1' => 'PHP code test', \n 'foo' => 'bar', 5 , 5 => 89009, \n 'case' => 'Random Stuff: '.rand(100,999),\n 'PHP Version' => phpversion(),\n 0 => 'ending text here'\n);\n\nvar_dump(current($array));\n\n",
"Simply do:\narray_shift(array_slice($array,0,1));\n\n",
"$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\nforeach($arr as $first) break;\necho $first;\n\nOutput:\napple\n\n",
"I would do echo current($array) . \n",
"PHP 7.3 added two functions for getting the first and the last key of an array directly without modification of the original array and without creating any temporary objects:\n\narray_key_first \narray_key_last \n\n\"There are several ways to provide this functionality for versions prior to PHP 7.3.0. It is possible to use array_keys(), but that may be rather inefficient. It is also possible to use reset() and key(), but that may change the internal array pointer. An efficient solution, which does not change the internal array pointer, written as polyfill:\"\n<?php\nif (!function_exists('array_key_first')) {\n function array_key_first($arr) {\n foreach($arr as $key => $unused) {\n return $key;\n }\n return NULL;\n }\n}\n\nif (!function_exists('array_key_last')) {\n function array_key_last($arr) {\n return array_key_first(array_reverse($arr, true));\n }\n}\n?>\n\n",
"$myArray = array (4 => 'apple', 7 => 'orange', 13 => 'plum');\n$arrayKeys = array_keys($myArray);\n\n// The first element of your array is:\necho $myArray[$arrayKeys[0]];\n\n",
"$array=array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\n\n$firstValue = each($array)[1];\n\nThis is much more efficient than array_values() because the each() function does not copy the entire array.\nFor more info see http://www.php.net/manual/en/function.each.php\n",
"A kludgy way is:\n$foo = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\n\nfunction get_first ($foo) {\n foreach ($foo as $k=>$v){\n return $v;\n }\n}\n\nprint get_first($foo);\n\n",
"Most of these work! BUT for a quick single line (low resource) call:\n$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\necho $array[key($array)];\n\n// key($array) -> will return the first key (which is 4 in this example)\n\nAlthough this works, and decently well, please also see my additional answer:\nhttps://stackoverflow.com/a/48410351/1804013\n",
"Use:\n$first = array_slice($array, 0, 1); \n$val= $first[0];\n\nBy default, array_slice does not preserve keys, so we can safely use zero as the index.\n",
"This is a little late to the game, but I was presented with a problem where my array contained array elements as children inside it, and thus I couldn't just get a string representation of the first array element. By using PHP's current() function, I managed this:\n<?php\n $original = array(4 => array('one', 'two'), 7 => array('three', 'four'));\n reset($original); // to reset the internal array pointer...\n $first_element = current($original); // get the current element...\n?>\n\nThanks to all the current solutions helped me get to this answer, I hope this helps someone sometime!\n",
"<?php\n $arr = array(3 => \"Apple\", 5 => \"Ball\", 11 => \"Cat\");\n echo array_values($arr)[0]; // Outputs: Apple\n?>\n\nOther Example:\n<?php\n $arr = array(3 => \"Apple\", 5 => \"Ball\", 11 => \"Cat\");\n echo current($arr); // Outputs: Apple\n echo reset($arr); // Outputs: Apple\n echo next($arr); // Outputs: Ball\n echo current($arr); // Outputs: Ball\n echo reset($arr); // Outputs: Apple\n?>\n\n",
"I think using array_values would be your best bet here. You could return the value at index zero from the result of that function to get 'apple'.\n",
"Two solutions for you.\nSolution 1 - Just use the key. You have not said that you can not use it. :)\n<?php\n // Get the first element of this array.\n $array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\n\n // Gets the first element by key\n $result = $array[4];\n\n // Expected result: string apple\n assert('$result === \"apple\" /* Expected result: string apple. */');\n?>\n\nSolution 2 - array_flip() + key()\n<?php\n // Get first element of this array. Expected result: string apple\n $array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\n\n // Turn values to keys\n $array = array_flip($array);\n\n // You might thrown a reset in just to make sure\n // that the array pointer is at the first element.\n // Also, reset returns the first element.\n // reset($myArray);\n\n // Return the first key\n $firstKey = key($array);\n\n assert('$firstKey === \"apple\" /* Expected result: string apple. */');\n?>\n\nSolution 3 - array_keys()\necho $array[array_keys($array)[0]];\n\n",
"I imagine the author just was looking for a way to get the first element of an array after getting it from some function (mysql_fetch_row, for example) without generating a STRICT \"Only variables should be passed by reference\".\nIf it so, almost all the ways described here will get this message... and some of them uses a lot of additional memory duplicating an array (or some part of it). An easy way to avoid it is just assigning the value inline before calling any of those functions:\n$first_item_of_array = current($tmp_arr = mysql_fetch_row(...));\n// or\n$first_item_of_array = reset($tmp_arr = func_get_my_huge_array());\n\nThis way you don't get the STRICT message on screen, nor in logs, and you don't create any additional arrays. It works with both indexed AND associative arrays.\n",
"No one has suggested using the ArrayIterator class:\n$array = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\n$first_element = (new ArrayIterator($array))->current();\necho $first_element; //'apple'\n\ngets around the by reference stipulation of the OP.\n",
"This is not so simple response in the real world. Suppose that we have these examples of possible responses that you can find in some libraries.\n$array1 = array();\n$array2 = array(1,2,3,4);\n$array3 = array('hello'=>'world', 'foo'=>'bar');\n$array4 = null;\n\nvar_dump('reset1', reset($array1));\nvar_dump('reset2', reset($array2));\nvar_dump('reset3', reset($array3));\nvar_dump('reset4', reset($array4)); // Warning\n\nvar_dump('array_shift1', array_shift($array1));\nvar_dump('array_shift2', array_shift($array2));\nvar_dump('array_shift3', array_shift($array3));\nvar_dump('array_shift4', array_shift($array4)); // Warning\n\nvar_dump('each1', each($array1));\nvar_dump('each2', each($array2));\nvar_dump('each3', each($array3));\nvar_dump('each4', each($array4)); // Warning\n\nvar_dump('array_values1', array_values($array1)[0]); // Notice\nvar_dump('array_values2', array_values($array2)[0]);\nvar_dump('array_values3', array_values($array3)[0]);\nvar_dump('array_values4', array_values($array4)[0]); // Warning\n\nvar_dump('array_slice1', array_slice($array1, 0, 1));\nvar_dump('array_slice2', array_slice($array2, 0, 1));\nvar_dump('array_slice3', array_slice($array3, 0, 1));\nvar_dump('array_slice4', array_slice($array4, 0, 1)); // Warning\n\nlist($elm) = $array1; // Notice\nvar_dump($elm);\nlist($elm) = $array2;\nvar_dump($elm);\nlist($elm) = $array3; // Notice\nvar_dump($elm);\nlist($elm) = $array4;\nvar_dump($elm);\n\nLike you can see, we have several 'one line' solutions that work well in some cases, but not in all.\nIn my opinion, you have should that handler only with arrays.\nNow talking about performance, assuming that we have always array, like this:\n$elm = empty($array) ? null : ...($array);\n\n...you would use without errors:\n$array[count($array)-1];\narray_shift\nreset\narray_values\narray_slice\n\narray_shift is faster than reset, that is more fast than [count()-1], and these three are faster than array_values and array_slice.\n",
"Use array_keys() to access the keys of your associative array as a numerical indexed array, which is then again can be used as key for the array.\nWhen the solution is arr[0]:\n\n(Note, that since the array with the keys is 0-based index, the 1st\n element is index 0)\n\nYou can use a variable and then subtract one, to get your logic, that 1 => 'apple'.\n$i = 1;\n$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\necho $arr[array_keys($arr)[$i-1]];\n\nOutput:\napple\n\nWell, for simplicity- just use:\n$arr = array( 4 => 'apple', 7 => 'orange', 13 => 'plum' );\necho $arr[array_keys($arr)[0]];\n\nOutput:\napple\n\nBy the first method not just the first element, but can treat an associative array like an indexed array.\n",
"I don't like fiddling with the array's internal pointer, but it's also inefficient to build a second array with array_keys() or array_values(), so I usually define this:\nfunction array_first(array $f) {\n foreach ($f as $v) {\n return $v;\n }\n throw new Exception('array was empty');\n}\n\n",
"You can get the first element by using this coding:\n$array_key_set = array_keys($array);\n$first_element = $array[$array_key_set[0]];\n\nOr use:\n$i=0;\nforeach($array as $arr)\n{\n if($i==0)\n {\n $first_element=$arr;\n break;\n }\n $i++;\n}\necho $first_element;\n\n",
"One line closure, copy, reset:\n<?php\n\n$fruits = array(4 => 'apple', 7 => 'orange', 13 => 'plum');\n\necho (function() use ($fruits) { return reset($fruits); })();\n\nOutput:\napple\n\nAlternatively the shorter short arrow function:\necho (fn() => reset($fruits))();\n\nThis uses by-value variable binding as above. Both will not mutate the original pointer.\n",
"A small change to what Sarfraz posted is:\n$array = array(1, 2, 3, 4, 5);\n$output = array_slice($array, 0, 1);\nprint_r ($output);\n\n",
"I like the \"list\" example, but \"list\" only works on the left-hand-side of an assignment. If we don't want to assign a variable, we would be forced to make up a temporary name, which at best pollutes our scope and at worst overwrites an existing value:\nlist($x) = some_array();\nvar_dump($x);\n\nThe above will overwrite any existing value of $x, and the $x variable will hang around as long as this scope is active (the end of this function/method, or forever if we're in the top-level). This can be worked around using call_user_func and an anonymous function, but it's clunky:\nvar_dump(call_user_func(function($arr) { list($x) = $arr; return $x; },\n some_array()));\n\nIf we use anonymous functions like this, we can actually get away with reset and array_shift, even though they use pass-by-reference. This is because calling a function will bind its arguments, and these arguments can be passed by reference: \nvar_dump(call_user_func(function($arr) { return reset($arr); },\n array_values(some_array())));\n\nHowever, this is actually overkill, since call_user_func will perform this temporary assignment internally. This lets us treat pass-by-reference functions as if they were pass-by-value, without any warnings or errors:\nvar_dump(call_user_func('reset', array_values(some_array())));\n\n",
"Also worth bearing in mind is the context in which you're doing this, as an exhaustive check can be expensive and not always necessary.\nFor example, this solution works fine for the situation in which I'm using it (but obviously it can't be relied on in all cases...)\n /**\n * A quick and dirty way to determine whether the passed in array is associative or not, assuming that either:<br/>\n * <br/>\n * 1) All the keys are strings - i.e. associative<br/>\n * or<br/>\n * 2) All the keys are numeric - i.e. not associative<br/>\n *\n * @param array $objects\n * @return boolean\n */\nprivate function isAssociativeArray(array $objects)\n{\n // This isn't true in the general case, but it's a close enough (and quick) approximation for the context in\n // which we're using it.\n\n reset($objects);\n return count($objects) > 0 && is_string(key($objects));\n}\n\n",
"Nice one with a combination of array_slice and implode:\n$arr = array(1, 2, 3);\necho implode(array_slice($arr, 0, 1));\n// Outputs 1\n\n/*---------------------------------*/\n\n$arr = array(\n 'key_1' => 'One',\n 'key_2' => 'Two',\n 'key_3' => 'Three',\n);\necho implode(array_slice($arr, 0, 1));\n// Outputs One\n\n",
"If you are using Laravel you can do:\n$array = ['a', 'b', 'c'];\n$first = collect($array)->first();\n\n"
] |
[
1599,
865,
314,
145,
112,
76,
76,
55,
28,
28,
21,
20,
15,
13,
13,
10,
9,
8,
7,
6,
5,
4,
4,
3,
3,
2,
2,
1,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[
"There are too many answers here, and the selected answer will work for most of the cases.\nIn my case, I had a 2D array, and array_values for some odd reason was removing the keys on the inner arrays. So I end up with this:\n$keys = array_keys($myArray); // Fetches all the keys\n$firstElement = $myArray[$keys[0]]; // Get the first element using first key\n\n",
"Finding the first and last items in an array:\n// Get the first item in the array\nprint $array[0]; // Prints 1\n\n// Get the last item in the array\nprint end($array);\n\n"
] |
[
-3,
-10
] |
[
"arrays",
"php"
] |
stackoverflow_0001921421_arrays_php.txt
|
Q:
LINUX: Failed to pkg-config on libcryptsetup
Can someone help me with this error:
pkg-config --cflags -- libcryptsetup
Package libcryptsetup was not found in the pkg-config search path.
Perhaps you should add the directory containing `libcryptsetup.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libcryptsetup' found
My server:
Red Hat Enterprise Linux Server release 7.8 (Maipo)
~> rpm -qa | grep cryptsetup
cryptsetup-libs-2.0.3-6.el7.x86_64
cryptsetup-2.0.3-6.el7.x86_64
I am not really sure how to set the ENV var for PKG_CONFIG_PATH.
Thanks James
A:
Now you can use dnf --enablerepo=crb install cryptsetup-devel to install that on CentOS9-stream.
base on https://centos.pkgs.org/9-stream/centos-crb-x86_64/cryptsetup-devel-2.4.3-5.el9.x86_64.rpm.html
|
LINUX: Failed to pkg-config on libcryptsetup
|
Can someone help me with this error:
pkg-config --cflags -- libcryptsetup
Package libcryptsetup was not found in the pkg-config search path.
Perhaps you should add the directory containing `libcryptsetup.pc'
to the PKG_CONFIG_PATH environment variable
No package 'libcryptsetup' found
My server:
Red Hat Enterprise Linux Server release 7.8 (Maipo)
~> rpm -qa | grep cryptsetup
cryptsetup-libs-2.0.3-6.el7.x86_64
cryptsetup-2.0.3-6.el7.x86_64
I am not really sure how to set the ENV var for PKG_CONFIG_PATH.
Thanks James
|
[
"Now you can use dnf --enablerepo=crb install cryptsetup-devel to install that on CentOS9-stream.\nbase on https://centos.pkgs.org/9-stream/centos-crb-x86_64/cryptsetup-devel-2.4.3-5.el9.x86_64.rpm.html\n"
] |
[
0
] |
[] |
[] |
[
"linux",
"luks",
"pkg_config"
] |
stackoverflow_0064708855_linux_luks_pkg_config.txt
|
Q:
Unable to set headers in nestjs
I am learning nest.js and I am having a hard time setting the response headers..
Here is the code snip
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
//@get('users')
@Get()
@Header('Content-Type','text/html')
getHello(): string {
console.log('log')
return this.appService.getHello();
}
}
I can see the log in the terminal running.
When I open the network tab in google chrome to verify the headers, I don't see anything.
I am following this tutorial:: https://youtu.be/F_oOtaxb0L8?t=1530 on youtube
I looked for similar issues on the web and found this :: https://github.com/nestjs/azure-func-http/issues/407
I looked at the documentation and I think I am using everything correctly::
https://docs.nestjs.com/controllers#headers
not sure where I am goofing
A:
You are actually doing it correctly. It's not related to nestjs; it's because of your browser's devtools.
You can confirm it by running in your terminal curl -v localhost:3000 (If you don't use linux idk, just send a request without your browser and check the headers)
You should see Content-Type: text/html; charset=utf-8 in headers.
You can also disable persist logs and start a new incognito tab and test in the browser tools. For the first response, dev tools should show you the Content-Type header. For some reason it does not show it more than once.
For detailed explanation probably this is a good question to check.
|
Unable to set headers in nestjs
|
I am learning nest.js and I am having a hard time setting the response headers..
Here is the code snip
@Controller()
export class AppController {
constructor(private readonly appService: AppService) {}
//@get('users')
@Get()
@Header('Content-Type','text/html')
getHello(): string {
console.log('log')
return this.appService.getHello();
}
}
I can see the log in the terminal running.
When I open the network tab in google chrome to verify the headers, I don't see anything.
I am following this tutorial:: https://youtu.be/F_oOtaxb0L8?t=1530 on youtube
I looked for similar issues on the web and found this :: https://github.com/nestjs/azure-func-http/issues/407
I looked at the documentation and I think I am using everything correctly::
https://docs.nestjs.com/controllers#headers
not sure where I am goofing
|
[
"You are actually doing it correct. It's not related with nestjs. It's because of your browser's devtools.\nYou can confirm it by running in your terminal curl -v localhost:3000 (If you don't use linux idk, just send a request without your browser and check the headers)\nYou should see Content-Type: text/html; charset=utf-8 in headers.\nYou can also disable persist logs and start a new incognito tab and test in the browser tools. For the first response, dev tools should show you the Content-Type header. For some reason it does not show it more than once.\nFor detailed explanation probably this is a good question to check.\n"
] |
[
0
] |
[] |
[] |
[
"nestjs",
"nestjs_config"
] |
stackoverflow_0074668872_nestjs_nestjs_config.txt
|
Q:
celery: RuntimeError: RPC backend missing task request for task_id
I was trying to track when a task starts and task_track_started wasn't working for me. I tried the answer from this post here but I ran into an error.
I've the following code to update the state of a task once it's started. This is inside a django module which is then installed as an app in another django project where it's run.
from celery import current_app
from celery.signals import after_task_publish
@after_task_publish.connect
def update_sent_state(sender=None, headers=None, **kwargs):
task = current_app.tasks.get(sender)
backend = task.backend if task else current_app.backend
backend.store_result(headers['id'], None, "SENT")
The signal gets fired, but it errors out on the last line with the following error:
Traceback (most recent call last):
File "/home/dd_env/lib/python3.8/site-packages/celery/backends/rpc.py", line 175, in destination_for
request = request or current_task.request
File "/home/dd_env/lib/python3.8/site-packages/celery/local.py", line 143, in __getattr__
return getattr(self._get_current_object(), name)
AttributeError: 'NoneType' object has no attribute 'request'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dd_env/lib/python3.8/site-packages/celery/utils/dispatch/signal.py", line 276, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/config_module/spinner/signals.py", line 319, in update_sent_state
backend.store_result(headers['id'], None, "SENT")
File "/home/dd_env/lib/python3.8/site-packages/celery/backends/rpc.py", line 198, in store_result
routing_key, correlation_id = self.destination_for(task_id, request)
File "/home/dd_env/lib/python3.8/site-packages/celery/backends/rpc.py", line 177, in destination_for
raise RuntimeError(
RuntimeError: RPC backend missing task request for '4cb351b0-3643-4f53-a238-bad84c18042d'
Inside the signal, invoking methods like backend.get_state(headers['id']) or backend.get_result(headers['id']) returns the expected output. The task is executed successfully and the results are returned, but I'm unable to set its status. backend.mark_as_started(headers['id']) also returns the same error.
Here's what my task definition looks like:
from celery import shared_task
@shared_task
def update_keywords_task(pk: int):
<Random CRUD operations>
Here're my celery settings:
app = Celery('<app_name>', backend='rpc://', broker='pyamqp://')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.conf.broker_transport_options = {
'max_retries': 3,
'interval_start': 0,
'interval_step': 0.2,
'interval_max': 0.2,
}
Why is my task request not found?
A:
I had a similar issue...
I think it has to do with using RPC as the backend. see this question for more info:
Getting Celery task results using RPC backend
|
celery: RuntimeError: RPC backend missing task request for task_id
|
I was trying to track when a task starts and task_track_started wasn't working for me. I tried the answer from this post here but I ran into an error.
I've the following code to update the state of a task once it's started. This is inside a django module which is then installed as an app in another django project where it's run.
from celery import current_app
from celery.signals import after_task_publish
@after_task_publish.connect
def update_sent_state(sender=None, headers=None, **kwargs):
task = current_app.tasks.get(sender)
backend = task.backend if task else current_app.backend
backend.store_result(headers['id'], None, "SENT")
The signal gets fired and but it errors out on the last line with the following error:
Traceback (most recent call last):
File "/home/dd_env/lib/python3.8/site-packages/celery/backends/rpc.py", line 175, in destination_for
request = request or current_task.request
File "/home/dd_env/lib/python3.8/site-packages/celery/local.py", line 143, in __getattr__
return getattr(self._get_current_object(), name)
AttributeError: 'NoneType' object has no attribute 'request'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/dd_env/lib/python3.8/site-packages/celery/utils/dispatch/signal.py", line 276, in send
response = receiver(signal=self, sender=sender, **named)
File "/home/config_module/spinner/signals.py", line 319, in update_sent_state
backend.store_result(headers['id'], None, "SENT")
File "/home/dd_env/lib/python3.8/site-packages/celery/backends/rpc.py", line 198, in store_result
routing_key, correlation_id = self.destination_for(task_id, request)
File "/home/dd_env/lib/python3.8/site-packages/celery/backends/rpc.py", line 177, in destination_for
raise RuntimeError(
RuntimeError: RPC backend missing task request for '4cb351b0-3643-4f53-a238-bad84c18042d'
Inside the signal, invoking methods like backend.get_state(headers['id']) or backend.get_result(headers['id']) returns the expected output. The task is being executed successfully and the results returned but I'm unable to set it's status. backend.mark_as_started(headers['id']) also returns the same error.
Here's what my task definition looks like:
from celery import shared_task
@shared_task
def update_keywords_task(pk: int):
<Random CRUD operations>
Here're my celery settings:
app = Celery('<app_name>', backend='rpc://', broker='pyamqp://')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
app.conf.broker_transport_options = {
'max_retries': 3,
'interval_start': 0,
'interval_step': 0.2,
'interval_max': 0.2,
}
Why is my task request not found?
|
[
"I had a similar issue...\nI think it has to do with using RPC as the backend. see this question for more info:\nGetting Celery task results using RPC backend\n"
] |
[
0
] |
[] |
[] |
[
"celery",
"django_celery"
] |
stackoverflow_0067819138_celery_django_celery.txt
|
Q:
laravel model event from trait not firing
Why is this not working? Is there anything else I need to do? (Note: I don't want to call any boot method from the model.) In models it's working fine with the booted method.
// route
Route::get('/tests', function () {
return Test::find(1)->update([
'name' => Str::random(6)
]);
});
// models
namespace App\Models;
use App\Http\Traits\Sortable;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Test extends Model
{
use HasFactory;
use Sortable;
protected $guarded = ["id"];
}
// traits
namespace App\Http\Traits;
trait Sortable
{
protected static function bootSort()
{
static::updated(function ($model) {
dd("updated", $model->toArray());
});
static::updating(function ($model) {
dd("updating", $model->toArray());
});
static::saving(function ($model) {
dd("saving", $model->toArray());
});
static::saved(function ($model) {
dd("saved", $model->toArray());
});
}
}
A:
I found the answer: to fire model events from a trait, the boot method name has to be the trait class name prefixed with boot.
For example: if my trait name is Sortable, the boot method name will be bootSortable.
Below is the full solution to the question:
// route
Route::get('/tests', function () {
return Test::find(1)->update([
'name' => Str::random(6)
]);
});
// models
namespace App\Models;
use App\Http\Traits\Sortable;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Test extends Model
{
use HasFactory;
use Sortable;
protected $guarded = ["id"];
}
// traits
namespace App\Http\Traits;
trait Sortable
{
// before
protected static function bootSort()
// after fix
protected static function bootSortable()
{
static::updated(function ($model) {
dd("updated", $model->toArray());
});
static::updating(function ($model) {
dd("updating", $model->toArray());
});
static::saving(function ($model) {
dd("saving", $model->toArray());
});
static::saved(function ($model) {
dd("saved", $model->toArray());
});
}
}
A:
You can use Laravel observers instead; it's much simpler. For example:
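A minimal observer sketch, assuming the Test model from the question (the observer class name and the provider wiring shown here are illustrative, not from the original post):
// app/Observers/TestObserver.php
namespace App\Observers;

use App\Models\Test;

class TestObserver
{
    public function updating(Test $test)
    {
        // runs before an update is persisted
    }

    public function updated(Test $test)
    {
        // runs after an update is persisted
    }
}

// Register it, e.g. in app/Providers/EventServiceProvider.php:
namespace App\Providers;

use App\Models\Test;
use App\Observers\TestObserver;
use Illuminate\Foundation\Support\Providers\EventServiceProvider as ServiceProvider;

class EventServiceProvider extends ServiceProvider
{
    public function boot()
    {
        Test::observe(TestObserver::class);
    }
}
Model::observe() maps the observer's method names onto the matching model events, so no boot/booted override is needed on the model or the trait.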
|
laravel model event from trait not firing
|
why this is not working?? anything else i need to doo??? (note: i don't want to call any boot method from model). in models its working fine with booted method
// route
Route::get('/tests', function () {
return Test::find(1)->update([
'name' => Str::random(6)
]);
});
// models
namespace App\Models;
use App\Http\Traits\Sortable;
use Illuminate\Database\Eloquent\Factories\HasFactory;
use Illuminate\Database\Eloquent\Model;
class Test extends Model
{
use HasFactory;
use Sortable;
protected $guarded = ["id"];
}
// traits
namespace App\Http\Traits;
trait Sortable
{
protected static function bootSort()
{
static::updated(function ($model) {
dd("updated", $model->toArray());
});
static::updating(function ($model) {
dd("updating", $model->toArray());
});
static::saving(function ($model) {
dd("saving", $model->toArray());
});
static::saved(function ($model) {
dd("saved", $model->toArray());
});
}
}
|
[
"I got the answer if I want to fire model event from trait: I have to make the boot method name as trait class name with prefix boot\nfor example: if my trait name is Sortable the boot method name will be bootSortable\nbelow is the full solution to question:\n// route\nRoute::get('/tests', function () {\n\n return Test::find(1)->update([\n 'name' => Str::random(6)\n ]);\n});\n\n// models\nnamespace App\\Models;\n\nuse App\\Http\\Traits\\Sortable;\nuse Illuminate\\Database\\Eloquent\\Factories\\HasFactory;\nuse Illuminate\\Database\\Eloquent\\Model;\n\nclass Test extends Model\n{\n use HasFactory;\n use Sortable;\n protected $guarded = [\"id\"];\n}\n\n// traits\nnamespace App\\Http\\Traits;\n\ntrait Sortable\n{\n // before\n protected static function bootSort()\n // after fix\n protected static function bootSortable()\n {\n static::updated(function ($model) {\n dd(\"updated\", $model->toArray());\n });\n static::updating(function ($model) {\n dd(\"updating\", $model->toArray());\n });\n static::saving(function ($model) {\n dd(\"saving\", $model->toArray());\n });\n static::saved(function ($model) {\n dd(\"saved\", $model->toArray());\n });\n }\n}\n\n",
"you can use laravel observers, it's much more simpler\n"
] |
[
1,
0
] |
[] |
[] |
[
"eloquent",
"laravel",
"php"
] |
stackoverflow_0074668899_eloquent_laravel_php.txt
|
Q:
awk to get first column if the a specific number in the line is greater than a digit
I have a data file (file.txt) contains the below lines:
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=22:00,dom=sss.co.uk,user2=lis
I'm expecting to get the first column ($1) only if the ETA= number is greater than 15, like here I will have 2nd and 3rd line first column only is expected.
345
456
I tried something like cat file.txt | awk -F [,TPF=]' '{print $1}' but it prints the whole line, which has ETA at the end.
A:
With your shown samples please try following GNU awk code. Using match function of GNU awk where I am using regex (^[0-9]+).*ETA=([0-9]+):[0-9]+ which creates 2 capturing groups and saves its values into array arr. Then checking condition if 2nd element of arr is greater than 15 then print 1st value of arr array as per requirement.
awk '
match($0,/(^[0-9]+).*\<ETA=([0-9]+):[0-9]+/,arr) && arr[2]+0>15{
print arr[1]
}
' Input_file
A:
Using awk
$ awk -F"[=, ]" '{for (i=1;i<NF;i++) if ($i=="ETA") if ($(i+1) > 15) print $1}' input_file
345
456
A:
I would harness GNU AWK for this task following way, let file.txt content be
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=02:00,dom=sss.co.uk,user2=lis
then
awk 'substr($0,index($0,"ETA=")+4,2)+0>15{print $1}' file.txt
gives output
345
Explanation: I use string functions: index to find where ETA= is, then substr to get the 2 characters after ETA= (4 is used because ETA= is 4 characters long and index gives the start position). I use +0 to convert to an integer, then compare it with 15. Disclaimer: this solution assumes every row has ETA= followed by exactly 2 digits.
(tested in GNU Awk 5.0.1)
A:
Whenever input contains tag=value pairs as yours does, it's best to first create an array of those mappings (v[]) below and then you can just access the values by their tags (names):
$ cat tst.awk
BEGIN {
FS = "[, =]+"
OFS = ","
}
{
delete v
for ( i=2; i<NF; i+=2 ) {
v[$i] = $(i+1)
}
}
v["ETA"]+0 > 15 {
print $1
}
$ awk -f tst.awk file
345
456
With that approach you can trivially enhance the script in future to access whatever values you like by their names, test them in whatever combinations you like, output them in whatever order you like, etc. For example:
$ cat tst.awk
BEGIN {
FS = "[, =]+"
OFS = ","
}
{
delete v
for ( i=2; i<NF; i+=2 ) {
v[$i] = $(i+1)
}
}
(v["pro"] ~ /b/) && (v["ETA"]+0 > 15) {
print $1, v["team"], v["dom"]
}
$ awk -f tst.awk file
345,abc,sbc.int
456,efg,sss.co.uk
Think about how you'd enhance any other solution to do the above or anything remotely similar.
A:
It's unclear why you think your attempt would do anything of the sort. Your attempt uses a completely different field separator and does not compare anything against the number 15.
You'll also want to get rid of the useless use of cat.
When you specify a column separator with -F, that changes what the first column $1 actually means; it is then everything before the first occurrence of the separator. Instead, split the line separately on whitespace to obtain the first (space-separated) column.
awk -F 'ETA=' '$2 > 15 { split($0, n, /[ \t]+/); print n[1] }' file.txt
The value in $2 will be the data after the first separator (and up until the next one) but using it in a numeric comparison simply ignores any non-numeric text after the number at the beginning of the field. So for example, on the first line, we are actually literally checking if 12:00, team=xyz,user1=tom,dom=dby.com is larger than 15 but it effectively checks if 12 is larger than 15 (which is obviously false).
When the condition is true, we split the original line $0 into the array n on sequences of whitespace, and then print the first element of this array.
A:
Using awk you could match ETA= followed by 1 or more digits. Then get the match without the ETA= part and check if the number is greater than 15 and print the first field.
awk '/^[0-9]/ && match($0, /ETA=[0-9]+/) {
if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1
}' file
Output
345
456
If the first field should start with a number:
awk '/^[0-9]/ && match($0, /ETA=[0-9]+/) {
    if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1
}' file
A:
To get the first column of a line in a file using awk if a specific number in that line is greater than a specified digit, you can use the following syntax:
awk '$1 > digit {print $1}' file.txt
This will print the first column of every line in file.txt where the first number is greater than digit.
To get the first column only if the ETA value is greater than 15, you can use the following command:
awk -F '[,= ]' '$6 > 15 {print $1}' file.txt
This will use the -F option to specify that the fields in file.txt are separated by either a comma, equals sign, or space, and then print the first column of every line where the sixth field (which corresponds to the ETA value) is greater than 15.
Here is an example:
$ cat file.txt
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=22:00,dom=sss.co.uk,user2=lis
$ awk -F '[,= ]' '$6 > 15 {print $1}' file.txt
345
456
In this example, the awk command is used to print the first column of every line in file.txt where the sixth field (the ETA value) is greater than 15. Since the ETA values in the second and third lines are greater than 15, the first column of those lines (345 and 456) are printed
A:
You can use the awk command to extract the first column of each line in the file where the ETA field is greater than 15. To do this, you can use the awk command with the -F option to set the field separator to a comma, and then use an if statement to check if the ETA field is greater than 15. If it is, you can print the first field in the line. Here is an example of how you could use the awk command to do this:
awk -F, '{if ($4 > 15) print $1}' file.txt
This awk command will set the field separator to a comma, and then check if the fourth field (the ETA field) is greater than 15. If it is, it will print the first field in the line.
Alternatively, if you want to print only the first column of each line that has an ETA field greater than 15, you can use the following awk command:
awk -F, '{if ($4 > 15) print $1}' file.txt | awk '{print $1}'
This command will first use the awk command to filter out the lines where the ETA field is not greater than 15, and then use a second awk command to extract only the first field of each remaining line.
I hope this helps! Let me know if you have any other questions.
A:
The following command will print the first column of lines that have an ETA value greater than 15:
awk -F '[, ]' '$4 == "ETA=" && substr($5, 1, 2) > 15 {print $1}' file.txt
The -F option sets the field separator to a comma (,) or space ( ). This means that each line will be split into fields based on either a comma or a space.
The $4 == "ETA=" condition checks whether the fourth field is equal to the string "ETA=". If it is, then the substr($5, 1, 2) > 15 condition is checked. This extracts the first two characters of the fifth field (the ETA value) and checks if it is greater than 15.
If both conditions are true, then the print $1 statement is executed, which prints the first field (column) of the line.
Note: This solution assumes that the ETA value is always in the format HH:MM (hours:minutes), where HH is the hour (0-23) and MM is the minute (0-59). If the ETA value can be in a different format, then you will need to modify the substr expression accordingly.
|
awk to get first column if the a specific number in the line is greater than a digit
|
I have a data file (file.txt) contains the below lines:
123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com
345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00
456 team=efg, pro=bvy,ETA=22:00,dom=sss.co.uk,user2=lis
I'm expecting to get the first column ($1) only if the ETA= number is greater than 15, like here I will have 2nd and 3rd line first column only is expected.
345
456
I tried like cat file.txt | awk -F [,TPF=]' '{print $1}' but its print whole line which has ETA at the end.
|
[
"With your shown samples please try following GNU awk code. Using match function of GNU awk where I am using regex (^[0-9]+).*ETA=([0-9]+):[0-9]+ which creates 2 capturing groups and saves its values into array arr. Then checking condition if 2nd element of arr is greater than 15 then print 1st value of arr array as per requirement.\nawk '\nmatch($0,/(^[0-9]+).*\\<ETA=([0-9]+):[0-9]+/,arr) && arr[2]+0>15{\n print arr[1]\n}\n' Input_file\n\n",
"Using awk\n$ awk -F\"[=, ]\" '{for (i=1;i<NF;i++) if ($i==\"ETA\") if ($(i+1) > 15) print $1}' input_file\n345\n456\n\n",
"I would harness GNU AWK for this task following way, let file.txt content be\n123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com\n345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00\n456 team=efg, pro=bvy,ETA=02:00,dom=sss.co.uk,user2=lis\n\nthen\nawk 'substr($0,index($0,\"ETA=\")+4,2)+0>15{print $1}' file.txt\n\ngives output\n345\n\nExplanation: I use String functions, index to find where is ETA= then substr to get 2 characters after ETA=, 4 is used as ETA= is 4 characters long and index gives start position, I use +0 to convert to integer then compare it with 15. Disclaimer: this solution assumes every row has ETA= followed by exactly 2 digits.\n(tested in GNU Awk 5.0.1)\n",
"Whenever input contains tag=value pairs as yours does, it's best to first create an array of those mappings (v[]) below and then you can just access the values by their tags (names):\n$ cat tst.awk\nBEGIN {\n FS = \"[, =]+\"\n OFS = \",\"\n}\n{\n delete v\n for ( i=2; i<NF; i+=2 ) {\n v[$i] = $(i+1)\n }\n}\nv[\"ETA\"]+0 > 15 {\n print $1\n}\n\n\n$ awk -f tst.awk file\n345\n456\n\nWith that approach you can trivially enhance the script in future to access whatever values you like by their names, test them in whatever combinations you like, output them in whatever order you like, etc. For example:\n$ cat tst.awk\nBEGIN {\n FS = \"[, =]+\"\n OFS = \",\"\n}\n{\n delete v\n for ( i=2; i<NF; i+=2 ) {\n v[$i] = $(i+1)\n }\n}\n(v[\"pro\"] ~ /b/) && (v[\"ETA\"]+0 > 15) {\n print $1, v[\"team\"], v[\"dom\"]\n}\n\n\n$ awk -f tst.awk file\n345,abc,sbc.int\n456,efg,sss.co.uk\n\nThink about how you'd enhance any other solution to do the above or anything remotely similar.\n",
"It's unclear why you think your attempt would do anything of the sort. Your attempt uses a completely different field separator and does not compare anything against the number 15.\nYou'll also want to get rid of the useless use of cat.\nWhen you specify a column separator with -F that changes what the first column $1 actually means; it is then everything before the first occurrence of the separator. Probably separately split the line to obtain the first column, space-separated.\nawk -F 'ETA=' '$2 > 15 { split($0, n, /[ \\t]+/); print n[1] }' file.txt\n\nThe value in $2 will be the data after the first separator (and up until the next one) but using it in a numeric comparison simply ignores any non-numeric text after the number at the beginning of the field. So for example, on the first line, we are actually literally checking if 12:00, team=xyz,user1=tom,dom=dby.com is larger than 15 but it effectively checks if 12 is larger than 15 (which is obviously false).\nWhen the condition is true, we split the original line $0 into the array n on sequences of whitespace, and then print the first element of this array.\n",
"Using awk you could match ETA= followed by 1 or more digits. Then get the match without the ETA= part and check if the number is greater than 15 and print the first field.\nawk '/^[0-9]/ && match($0, /ETA=[0-9]+/) {\n if(substr($0, RSTART+4, RLENGTH-4)+0 > 15) print $1\n}' file\n\nOutput\n345\n456\n\nIf the first field should start with a number:\nawk '/^[0-9]/ && match($0, /ETA=[0-9]+/) {\n if(substr($0, RSTART+4, RLENGTH-4) > 15)+0 print $1\n}' file\n\n",
"To get the first column of a line in a file using awk if a specific number in that line is greater than a specified digit, you can use the following syntax:\nawk '$1 > digit {print $1}' file.txt\n\nThis will print the first column of every line in file.txt where the first number is greater than digit.\nTo get the first column only if the ETA value is greater than 15, you can use the following command:\nawk -F '[,= ]' '$6 > 15 {print $1}' file.txt\n\nThis will use the -F option to specify that the fields in file.txt are separated by either a comma, equals sign, or space, and then print the first column of every line where the sixth field (which corresponds to the ETA value) is greater than 15.\nHere is an example:\n$ cat file.txt\n123 pro=tegs, ETA=12:00, team=xyz,user1=tom,dom=dby.com\n345 pro=rbs, team=abc,user1=chan,dom=sbc.int,ETA=23:00\n456 team=efg, pro=bvy,ETA=22:00,dom=sss.co.uk,user2=lis\n\n$ awk -F '[,= ]' '$6 > 15 {print $1}' file.txt\n345\n456\n\nIn this example, the awk command is used to print the first column of every line in file.txt where the sixth field (the ETA value) is greater than 15. Since the ETA values in the second and third lines are greater than 15, the first column of those lines (345 and 456) are printed\n",
"You can use the awk command to extract the first column of each line in the file where the ETA field is greater than 15. To do this, you can use the awk command with the -F option to set the field separator to a comma, and then use an if statement to check if the ETA field is greater than 15. If it is, you can print the first field in the line. Here is an example of how you could use the awk command to do this:\nawk -F, '{if ($4 > 15) print $1}' file.txt\n\nThis awk command will set the field separator to a comma, and then check if the fourth field (the ETA field) is greater than 15. If it is, it will print the first field in the line.\nAlternatively, if you want to print only the first column of each line that has an ETA field greater than 15, you can use the following awk command:\nawk -F, '{if ($4 > 15) print $1}' file.txt | awk '{print $1}'\n\n\nThis command will first use the awk command to filter out the lines where the ETA field is not greater than 15, and then use a second awk command to extract only the first field of each remaining line.\nI hope this helps! Let me know if you have any other questions.\n",
"The following command will print the first column of lines that have an ETA value greater than 15:\nawk -F '[, ]' '$4 == \"ETA=\" && substr($5, 1, 2) > 15 {print $1}' file.txt\n\nThe -F option sets the field separator to a comma (,) or space ( ). This means that each line will be split into fields based on either a comma or a space.\nThe $4 == \"ETA=\" condition checks whether the fourth field is equal to the string \"ETA=\". If it is, then the substr($5, 1, 2) > 15 condition is checked. This extracts the first two characters of the fifth field (the ETA value) and checks if it is greater than 15.\nIf both conditions are true, then the print $1 statement is executed, which prints the first field (column) of the line.\nNote: This solution assumes that the ETA value is always in the format HH:MM (hours:minutes), where HH is the hour (0-23) and MM is the minute (0-59). If the ETA value can be in a different format, then you will need to modify the substr expression accordingly.\n"
] |
[
5,
4,
3,
3,
2,
2,
0,
0,
0
] |
[] |
[] |
[
"awk",
"shell"
] |
stackoverflow_0074610426_awk_shell.txt
|
Q:
Is there an OpenSSL for windows?
I'm trying to generate OpenSSL certificates on Windows OS. But I find most of the commands related to OpenSSL are for *nix OS.
Is there an OpenSSL for Windows OS? If yes, where can I get it? Is this an official OpenSSL build for Windows?
A:
Search for "openssl Shining Light Productions" in Google and download it from the first link
A:
Yes. You can do one of two things:
1) Build it yourself
You'll need a build environment (either Visual Studio or msys2 based), and a few other pre-requisites. Download the source from here:
https://www.openssl.org/source/
And (assuming you downloaded the 1.1.0 version), read the INSTALL notes here:
https://github.com/openssl/openssl/blob/OpenSSL_1_1_0-stable/INSTALL
There are also some Windows specific notes here:
https://github.com/openssl/openssl/blob/OpenSSL_1_1_0-stable/NOTES.WIN
2) Download a pre-compiled version
The OpenSSL project doesn't distribute pre-compiled binaries, but they do maintain a list of third-party provided binaries. The list is here:
https://wiki.openssl.org/index.php/Binaries
A:
Both Cygwin and MSYS distribute pre-compiled openssl binaries, which I use everyday.
If you don't like a *nix-like style, please refer to this official page for standalone distributions.
https://wiki.openssl.org/index.php/Binaries
Disclaimer: I have not tested the software listed on the page.
A:
I am using this version https://slproweb.com/products/Win32OpenSSL.html and install it with
Essentials
winget install -e ShiningLight.OpenSSL.Light
Full
winget install -e ShiningLight.OpenSSL
A:
If you're using Chocolatey, you can also install with
choco install openssl
More details in here and for me it also installed some other stuff, like VC Redist:
Installed:
- kb2919355 v1.0.20160915
- kb3033929 v1.0.5
- kb2999226 v1.0.20181019
- openssl v1.1.1.1900
- vcredist2015 v14.0.24215.20170201
- kb2919442 v1.0.20160915
- vcredist140 v14.34.31931
- kb3035131 v1.0.3
- chocolatey-windowsupdate.extension v1.0.5
Packages requiring reboot:
- vcredist140 (exit code 3010)
This was done on Windows 10.
|
Is there an OpenSSL for windows?
|
I'm trying to generate OpenSSL certificates on Windows OS. But I find most of the commands related to OpenSSL are for *nix OS.
Is there an OpenSSL for Windows OS? If yes, where can I get it? Is this an official OpenSSL build for Windows?
|
[
"Search openssl shining light production in google and download from the first link\n",
"Yes. You can do one of two things:\n1) Build it yourself\nYou'll need a build environment (either Visual Studio or msys2 based), and a few other pre-requisites. Download the source from here:\nhttps://www.openssl.org/source/\nAnd (assuming you downloaded the 1.1.0 version), read the INSTALL notes here:\nhttps://github.com/openssl/openssl/blob/OpenSSL_1_1_0-stable/INSTALL\nThere are also some Windows specific notes here:\nhttps://github.com/openssl/openssl/blob/OpenSSL_1_1_0-stable/NOTES.WIN\n2) Download a pre-compiled version\nThe OpenSSL project doesn't distribute pre-compiled binaries, but they do maintain a list of third-party provided binaries. The list is here:\nhttps://wiki.openssl.org/index.php/Binaries\n",
"Both Cygwin and MSYS distribute pre-compiled openssl binaries, which I use everyday.\nIf you don't like a *nix like style, please refer this official page for standalone distrbutions.\nhttps://wiki.openssl.org/index.php/Binaries\nDisclaim: I have not tested the software listed on the page.\n",
"I am using this version https://slproweb.com/products/Win32OpenSSL.html and install it with\nEssentials\nwinget install -e ShiningLight.OpenSSL.Light\n\nFull\nwinget install -e ShiningLight.OpenSSL\n\n",
"If you're using Chocolatey, you can also install with\nchoco install openssl\n\nMore details in here and for me it also installed some other stuff, like VC Redist:\nInstalled:\n - kb2919355 v1.0.20160915\n - kb3033929 v1.0.5\n - kb2999226 v1.0.20181019\n - openssl v1.1.1.1900\n - vcredist2015 v14.0.24215.20170201\n - kb2919442 v1.0.20160915\n - vcredist140 v14.34.31931\n - kb3035131 v1.0.3\n - chocolatey-windowsupdate.extension v1.0.5\n\nPackages requiring reboot:\n - vcredist140 (exit code 3010)\n\nThis was done on Windows 10.\n"
] |
[
5,
4,
0,
0,
0
] |
[] |
[] |
[
"openssl",
"windows"
] |
stackoverflow_0051374639_openssl_windows.txt
|
Q:
Horizontal Display?
@foreach (var item in APModel)
{
<div class="row row-cols-1 row-cols-md-2 g-4">
<div class="col">
<div class="card" style="width: 18rem;">
<img src="https://images.pexels.com/photos/2360673/pexels-photo-2360673.jpeg?auto=compress&cs=tinysrgb&w=600" class="card-img-top" alt="...">
<div class="card-body">
<h6 class="card-title">PropertyType: @item.PropertyType</h6>
<h6 class="card-title">PropertyStatus: @item.PropertyStatus</h6>
<h6 class="card-title">Price: @item.Price</h6>
<h6 class="card-title">Location: @item.Location</h6>
<h6 class="card-title">Rooms @item.TotalRooms</h6>
<h6 class="card-title">Baths @item.Bathrooms</h6>
<p class="card-text">@item.Description</p>
<button class="btn btn-outline-danger btn-sm"
@onclick="(()=>DeleteProperty(item.PropertyID))">
Del
</button>
</div>
</div>
</div>
</div>
}
I wanted to display these cards horizontally, but they keep getting displayed vertically. What should I do? Please help.
A:
Using display: flex in CSS will solve this issue. I can't see your cards, of course, so you will have to handle the actual styling after this. You can also use grid styling for this.
.card-body {
display: flex;
justify-content: space-between;
}
<div class="row row-cols-1 row-cols-md-2 g-4">
<div class="col">
<div class="card" style="width: 18rem;">
<img src="https://images.pexels.com/photos/2360673/pexels-photo-2360673.jpeg?auto=compress&cs=tinysrgb&w=600" class="card-img-top" alt="...">
<div class="card-body">
<h6 class="card-title">PropertyType: @item.PropertyType</h6>
<h6 class="card-title">PropertyStatus: @item.PropertyStatus</h6>
<h6 class="card-title">Price: @item.Price</h6>
<h6 class="card-title">Location: @item.Location</h6>
<h6 class="card-title">Rooms @item.TotalRooms</h6>
<h6 class="card-title">Baths @item.Bathrooms</h6>
<p class="card-text">@item.Description</p>
</div>
<button class="btn btn-outline-danger btn-sm"
@onclick="(()=>DeleteProperty(item.PropertyID))">
Del
</button>
</div>
</div>
</div>
|
Horizontal Display?
|
@foreach (var item in APModel)
{
<div class="row row-cols-1 row-cols-md-2 g-4">
<div class="col">
<div class="card" style="width: 18rem;">
<img src="https://images.pexels.com/photos/2360673/pexels-photo-2360673.jpeg?auto=compress&cs=tinysrgb&w=600" class="card-img-top" alt="...">
<div class="card-body">
<h6 class="card-title">PropertyType: @item.PropertyType</h6>
<h6 class="card-title">PropertyStatus: @item.PropertyStatus</h6>
<h6 class="card-title">Price: @item.Price</h6>
<h6 class="card-title">Location: @item.Location</h6>
<h6 class="card-title">Rooms @item.TotalRooms</h6>
<h6 class="card-title">Baths @item.Bathrooms</h6>
<p class="card-text">@item.Description</p>
<button class="btn btn-outline-danger btn-sm"
@onclick="(()=>DeleteProperty(item.PropertyID))">
Del
</button>
</div>
</div>
</div>
</div>
}
I wanted to display these cards horizontally, but they keep getting displayed vertically. What should I do? Please help.
|
[
"The display flex in CSS will solve this issue. I can't see your cards of course so you will have to handle the actual styling after this. You can also use grid styling for this as well.\n\n\n.card-body {\n display: flex;\n justify-content: space-between;\n }\n<div class=\"row row-cols-1 row-cols-md-2 g-4\">\n <div class=\"col\">\n <div class=\"card\" style=\"width: 18rem;\">\n <img src=\"https://images.pexels.com/photos/2360673/pexels-photo-2360673.jpeg?auto=compress&cs=tinysrgb&w=600\" class=\"card-img-top\" alt=\"...\">\n <div class=\"card-body\">\n <h6 class=\"card-title\">PropertyType: @item.PropertyType</h6>\n <h6 class=\"card-title\">PropertyStatus: @item.PropertyStatus</h6>\n <h6 class=\"card-title\">Price: @item.Price</h6>\n <h6 class=\"card-title\">Location: @item.Location</h6>\n <h6 class=\"card-title\">Rooms @item.TotalRooms</h6>\n <h6 class=\"card-title\">Baths @item.Bathrooms</h6>\n <p class=\"card-text\">@item.Description</p>\n </div>\n <button class=\"btn btn-outline-danger btn-sm\"\n @onclick=\"(()=>DeleteProperty(item.PropertyID))\">\n Del\n </button>\n </div>\n </div>\n </div>\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"bootstrap_5",
"css",
"html"
] |
stackoverflow_0074669156_bootstrap_5_css_html.txt
|
Q:
Nginx reverse proxy and LXC
I installed Nextcloud with the TurnKey LXC template on Proxmox.
I use Nginx as a reverse proxy on another LXC.
My LXC Nextcloud is on 192.168.1.46 and my LXC Nginx on 192.168.1.38.
I have a free account on No-IP for dynamic DNS, for access like: XxX.Xxx.Xxx
When I go to my Nextcloud on the LAN via its IP it works, but when I try the dynamic DNS with "XXX.XXX/nextcloud" I get redirected to the LAN IP, and it's inaccessible.
Here is my nginx conf file:
upstream plex_backend {
server 192.168.1.30:32400;
keepalive 32;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name Xxx.xxx.xxx;
ssl_certificate /etc/letsencrypt/live/Xxx.xxx.xxx/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/Xxx.xxx.xxx/privkey.pem; # managed by Certbot
location / {
proxy_pass http://192.168.1.41;
}
location /nextcloud {
proxy_pass http://192.168.1.46/nextcloud;
}
}
server {
if ($host = Xxx.xxx.xxx) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 default_server;
listen [::]:80;
server_name Xxx.xxx.xxx;
proxy_buffering off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
add_header Referrer-Policy strict-origin;
real_ip_header X-Forwarded-For;
rewrite ^(.*) https://Xxx.xxx.xxx$1 permanent;
location / {
rewrite ^(.*) https://Xxx.xxx.xxx$1 permanent;
proxy_set_header X-Forwarded-Ssl on;
}
location /nextcloud {
proxy_pass http://192.168.46/nextcloud;
}
}
I don't understand why.
Thanks
A:
location /nextcloud {
proxy_pass http://192.168.46/nextcloud;
}
You should use "https" or NextCloud will try to redirect (it depends on NC configuration tbh):
location /nextcloud {
proxy_pass https://192.168.46/nextcloud;
}
|
Nginx reverse proxy and LXC
|
I installed Nextcloud with the TurnKey LXC template on Proxmox.
I use Nginx as a reverse proxy on another LXC.
My LXC Nextcloud is on 192.168.1.46 and my LXC Nginx on 192.168.1.38.
I have a free account on No-IP for dynamic DNS, for access like: XxX.Xxx.Xxx
When I go to my Nextcloud on the LAN via its IP it works, but when I try the dynamic DNS with "XXX.XXX/nextcloud" I get redirected to the LAN IP, and it's inaccessible.
Here is my nginx conf file:
upstream plex_backend {
server 192.168.1.30:32400;
keepalive 32;
}
server {
listen 443 ssl;
listen [::]:443 ssl;
server_name Xxx.xxx.xxx;
ssl_certificate /etc/letsencrypt/live/Xxx.xxx.xxx/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/Xxx.xxx.xxx/privkey.pem; # managed by Certbot
location / {
proxy_pass http://192.168.1.41;
}
location /nextcloud {
proxy_pass http://192.168.1.46/nextcloud;
}
}
server {
if ($host = Xxx.xxx.xxx) {
return 301 https://$host$request_uri;
} # managed by Certbot
listen 80 default_server;
listen [::]:80;
server_name Xxx.xxx.xxx;
proxy_buffering off;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
add_header X-XSS-Protection "1; mode=block";
add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
add_header Referrer-Policy strict-origin;
real_ip_header X-Forwarded-For;
rewrite ^(.*) https://Xxx.xxx.xxx$1 permanent;
location / {
rewrite ^(.*) https://Xxx.xxx.xxx$1 permanent;
proxy_set_header X-Forwarded-Ssl on;
}
location /nextcloud {
proxy_pass http://192.168.46/nextcloud;
}
}
I don't understand why.
Thanks
|
[
"location /nextcloud {\nproxy_pass http://192.168.46/nextcloud;\n}\n\nYou should use \"https\" or NextCloud will try to redirect (it depends on NC configuration tbh):\nlocation /nextcloud {\nproxy_pass https://192.168.46/nextcloud;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"lxc",
"nextcloud",
"nginx_reverse_proxy",
"proxmox",
"turnkeylinux.org"
] |
stackoverflow_0072533776_lxc_nextcloud_nginx_reverse_proxy_proxmox_turnkeylinux.org.txt
|
Q:
How can I do pattern matching in the purescript repl
I have the following in the repl
> :t foo
Tuple Int Int
I made an attempt to do pattern matching against foo
> (Tuple q w) = foo
Unexpected token '=' at line 1, column 13
So my question is: "What's the proper syntax to do pattern matching in the repl?"
A:
Well, you should be able to do it the following way, using the /\ operator from Data.Tuple.Nested:
(q /\ w) = foo
|
How can I do pattern matching in the purescript repl
|
I have the following in the repl
> :t foo
Tuple Int Int
I made an attempt to do pattern matching against foo
> (Tuple q w) = foo
Unexpected token '=' at line 1, column 13
So my question is: "What's the proper syntax to do pattern matching in the repl?"
|
[
"Well, you should be able the following way:\n(q /\\ w) = foo\n\n"
] |
[
0
] |
[] |
[] |
[
"purescript"
] |
stackoverflow_0071165533_purescript.txt
|
Q:
How to create a Service Port in client-go
I am having trouble adding the Ports field in ServiceSpec. What am I doing wrong?
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
port := corev1.ServicePort{}
port.Port = 8443
ports := make(corev1.ServicePort, 1)
service := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-webhook-admissions",
Namespace: "test",
Labels: map[string]string{
"app.kubernetes.io/instance": "test",
"app.kubernetes.io/name": "test",
"control-plane": "controller-manager",
},
},
Spec: corev1.ServiceSpec{
Ports: ports, // Not working
Selector: nil,
//ClusterIP: "",
},
}
A:
I think you have to append the port object to your ports slice.
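A minimal sketch of that fix, reusing the corev1 types from the question (the rest of the Service spec is omitted):
package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    port := corev1.ServicePort{}
    port.Port = 8443

    // make() needs a slice type here, and the port object must then be appended to it.
    ports := make([]corev1.ServicePort, 0, 1)
    ports = append(ports, port)

    // ports can now be assigned to Spec.Ports in the Service from the question.
    fmt.Println(ports)
}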
A:
You are trying to assign the ports variable to the Ports field of the ServiceSpec struct, but you are using the incorrect syntax. To fix the issue, you can use the following code:
port := corev1.ServicePort{}
port.Port = 8443
ports := []corev1.ServicePort{port}
service := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-webhook-admissions",
Namespace: "test",
Labels: map[string]string{
"app.kubernetes.io/instance": "test",
"app.kubernetes.io/name": "test",
"control-plane": "controller-manager",
},
},
Spec: corev1.ServiceSpec{
Ports: ports,
Selector: nil,
//ClusterIP: "",
},
}
The main difference is that the ports variable is now defined as a slice of ServicePort objects, instead of a single ServicePort object. Additionally, the Port field of the ServicePort object is set to 8443. You can also add additional ServicePort objects to the ports slice, if necessary.
|
How to create a Service Port in client-go
|
I am having trouble adding the Ports field in ServiceSpec. What am I doing wrong?
import (
corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
port := corev1.ServicePort{}
port.Port = 8443
ports := make(corev1.ServicePort, 1)
service := &corev1.Service{
ObjectMeta: metav1.ObjectMeta{
Name: "test-webhook-admissions",
Namespace: "test",
Labels: map[string]string{
"app.kubernetes.io/instance": "test",
"app.kubernetes.io/name": "test",
"control-plane": "controller-manager",
},
},
Spec: corev1.ServiceSpec{
Ports: ports, // Not working
Selector: nil,
//ClusterIP: "",
},
}
|
[
"Think you have to append the object port to your slice ports.\n",
"you are trying to assign the ports variable to the Ports field of the ServiceSpec struct, but you are using the incorrect syntax. To fix the issue, you can use the following code:\nport := corev1.ServicePort{}\nport.Port = 8443\nports := []corev1.ServicePort{port}\n\nservice := &corev1.Service{\n ObjectMeta: metav1.ObjectMeta{\n Name: \"test-webhook-admissions\",\n Namespace: \"test\",\n Labels: map[string]string{\n \"app.kubernetes.io/instance\": \"test\",\n \"app.kubernetes.io/name\": \"test\",\n \"control-plane\": \"controller-manager\",\n },\n },\n Spec: corev1.ServiceSpec{\n Ports: ports,\n Selector: nil,\n //ClusterIP: \"\",\n },\n}\n\nThe main difference is that the ports variable is now defined as a slice of ServicePort objects, instead of a single ServicePort object. Additionally, the Port field of the ServicePort object is set to 8443. You can also add additional ServicePort objects to the ports slice, if necessary.\n"
] |
[
0,
0
] |
[] |
[] |
[
"go",
"kubernetes"
] |
stackoverflow_0074655705_go_kubernetes.txt
|
Q:
Github keep link in readme persistent to latest releases asset file
I have an installer file in the latest release and I want to have a persistent link in the readme to it. It seems that the /releases/latest isn't an alias that I could use to construct the path as /releases/latest/mydownloader.exe
The current workarounds I have:
1) Create a tag release and always delete and recreate it:
github.com/user/project/releases/download/release/install.exe
2) Modify readme.md anytime I do a new release and update path
github.com/user/project/releases/download/20190218/install.exe
A:
The tag remains the least intrusive option (you don't have to modify your README and add a new commit on each release).
As explain in "Is there a link to GitHub for downloading a file in the latest release of a repository?", there is no API support for referencing a latest released file as a permanent link.
A:
You can point to /releases/latest; for a direct asset link, GitHub also serves permanent URLs of the form github.com/user/project/releases/latest/download/install.exe.
|
Github keep link in readme persistent to latest releases asset file
|
I have an installer file in the latest release and I want to have a persistent link in the readme to it. It seems that the /releases/latest isn't an alias that I could use to construct the path as /releases/latest/mydownloader.exe
The current workarounds I have:
1) Create a tag release and always delete and recreate it:
github.com/user/project/releases/download/release/install.exe
2) Modify readme.md anytime I do a new release and update path
github.com/user/project/releases/download/20190218/install.exe
|
[
"The tag remains the least intrusive option (you don't have to modify your README, adding a new commit on each release)\nAs explain in \"Is there a link to GitHub for downloading a file in the latest release of a repository?\", there is no API support for referencing a latest released file as a permanent link.\n",
"you can point to /releases/latest\n"
] |
[
2,
0
] |
[] |
[] |
[
"git",
"github"
] |
stackoverflow_0054756487_git_github.txt
|
Q:
Object Construction using Generics
I'm having issues creating objects using generics in order to test my rewritten methods. I think something is wrong with my constructor, but I'm not entirely sure what.
I put the methods I believe are the main issues at the top, and the rest of the code under for context.
Any help is appreciated.
/**
* Initializes an empty map.
*/
public AbstractMiniMap() {
this.size = 0;
this.keys = new Object[CAPACITY];
this.vals = new Object[CAPACITY];
}
Object Creation in Main
Getting a "Cannot instantiate the type AbstractMiniMap" error, under the AbstractMiniMap after new.
public static void main(String[] args) {
AbstractMiniMap<Double, Double> asd = new AbstractMiniMap<>(20,30);
}
import java.util.List;
import java.util.Set;
import java.util.StringJoiner;
/**
* This class provides a skeletal implementation of the {@code MiniMap}
* interface. It provides separate arrays for the keys and values of the map, as
* well as implementations of the {@code MiniMap} accessor methods.
*
* <p>
* A functioning {@code MiniMap} implementation can be created by extending this
* class and implementing the {@code push} and {@code remove} methods in the
* subclass.
*
* @param <K> the type of keys maintained by this map
* @param <V> the type of mapped values
*/
public abstract class AbstractMiniMap<K, V> implements MiniMap<K, V> {
/**
* The array of keys.
*/
protected Object keys[];
/**
* The array of values.
*/
protected Object vals[];
/**
* The number of mappings currently in this map.
*/
protected int size;
private static final int CAPACITY = 16;
/**
* Initializes an empty map.
*/
public AbstractMiniMap() {
this.size = 0;
this.keys = new Object[CAPACITY];
this.vals = new Object[CAPACITY];
}
/**
* Returns the capacity (maximum number of elements) that this map can hold at
* any one time.
*
* @return the capacity of this map
*/
public final int capacity() {
return CAPACITY;
}
/**
* Returns the number of key-value mappings held by this map.
*
* @return the number of key-value mappings held by this map
*/
public final int size() {
return this.size;
}
/**
* HINT: VERY USEFUL METHOD. You don't have to implement this method, but it
* prevents a lot of code duplication.
*
* Returns the index of the element equal to the specified key in
* {@code this.keys} if such an element exists, or {@code -1} otherwise.
*
* @param key a key to search for
* @return the index of the element equal to the specified key in
* {@code this.keys} if such an element exists, or {@code -1} otherwise
*/
protected int indexOfKey(Object key) {
int counter = -1;
for (Object i: this.keys){
counter++;
if (i.equals(key)) {
return counter;
}
}
return -1;
}
/**
* HINT: VERY USEFUL METHOD. You don't have to implement this method, but it
* prevents a lot of code duplication.
*
* Returns the index of the element equal to the specified value in
* {@code this.vals} if such an element exists, or {@code -1} otherwise.
*
* @param value a value to search for
* @return the index of the element equal to the specified value in
* {@code this.vals} if such an element exists, or {@code -1} otherwise
*/
protected int indexOfValue(Object value) {
int counter = -1;
for(Object i:this.vals) {
counter++;
if(i.equals(value)) {
return counter;
}
}
return -1;
}
/**
* Returns true if the map contains a mapping for the specified key. More
* formally, returns true if and only if this map has a key {@code k} such that
* {@code k.equals(key)} is true.
*
* @param key a key to search for in this map
* @return true if this map contains a mapping for the specified key
*/
public boolean containsKey(Object key) {
// HINT: Consider using indexOfKey....
int response = indexOfKey(key);
if (response != -1) {
return true;
}
return false;
}
/**
* Returns true if the map contains one or more keys that map to the specified
* value. More formally, returns true if and only if this map has a value
* {@code v} such that {@code v.equals(value)} is true.
*
* @param value a value to search for in this map
* @return true if this map contains one or more mappings for the specified
* value
*/
public boolean containsValue(Object value) {
// HINT: Consider using indexOfValue....
int response = indexOfValue(value);
if(response != -1) {
return true;
}
return false;
}
/**
* Returns the value that the specified key maps to, or {@code null} if
* {@code containsKey(key)} is false.
*
* @param key the key to search for
* @return the value that the specified key maps to, or {@code null} if the map
* does not contain the specified key
*/
public V get(Object key) {
// HINT: Maybe use indexOfKey....
int response = indexOfKey(key);
if(response == -1) {
return null;
}
else {
V t = (V) this.vals[response];
return t;
}
}
/**
* Returns a string representation of this map. The returned string contains the
* key-value pairs as strings enclosed in braces ({@code "{}"}). Adjacent
* mappings are separated by the characters {@code ", "} (comma and space). Each
* key-value mapping is rendered as the key followed by an equals sign
* ({@code "="}) followed by the associated value. For example, a map containing
* a string key {@code "a"} mapped to the integer value {@code 100} and a string
* key {@code "b"} mapped to the integer value {@code 200} would have the string
* representation:
*
* <p>
* {@code "{a=100, b=200}"} or {@code "{b=100, a=200}"}
*
* <p>
* depending on how the mappings are stored in the map.
*
*
* @return a string representation of this map
*/
public String toString() {
// HINT: Use a java.util.StringJoiner object to help build the string
}
/**
* STUDENTS SHOULD NOT USE THIS METHOD.
*
* <p>
* Returns a set of keys equal to the set held by this map in the order that
* they appear in {@code this.keys}. For testing purposes only.
*
* @return a set of keys held by this map in the order that they appear in
* {@code this.keys}
*/
public Set<K> keys() {
return A6Utils.getKeys(this.keys);
}
/**
* STUDENTS SHOULD NOT USE THIS METHOD.
*
* <p>
* Returns a list of values equal to the list held by this map in the order that
* they appear in {@code this.vals}. For testing purposes only.
*
* @return a list of values equal to the list held by this map in the order that
* they appear in {@code this.vals}
*/
public List<V> values() {
return A6Utils.getValues(this.vals);
}
public static void main(String[] args) {
AbstractMiniMap<Double, Double> asd = new AbstractMiniMap<>(20,30);
}
}
I tried casting to Object on the right side of the equals sign.
A:
As you can see in the class declaration, the AbstractMiniMap is an abstract class:
public abstract class AbstractMiniMap<K, V> implements MiniMap<K, V>
Abstract classes can't be instantiated in Java since abstract classes are defined to be abstract super-classes that should not be instantiated on their own but further implemented in a sub-class.
Possible Solutions
Anonymous sub-class
In your example, you might be able to instantiate the AbstractMiniMap with an anonymous inline sub-class like this: new AbstractMiniMap<>() {};. That is possible because AbstractMiniMap does not contain any abstract methods that would have been to be implemented by sub-classes.
Un-abstract
On the other hand, you could also just remove the abstract from the class declaration, if you want the class to be instantiable. But then you would also have to remove the "Abstract" prefix of AbstractMiniMap in order to comply with common naming conventions.
Clean Solution
The clean solution (if you want the AbstractMiniMap to remain abstract) would be to create a normal sub-class of it and instantiate this one.
Either
public class DoubleMiniMap extends AbstractMiniMap<Double, Double> {
}
with MiniMap<Double, Double> map = new DoubleMiniMap();
Or
public class GenericMiniMap<K, V> extends AbstractMiniMap<K, V> {
}
with MiniMap<Double, Double> map = new GenericMiniMap<>();
|
Object Construction using Generics
|
I'm having issues creating objects using generics in order to test my rewritten methods. I think something is wrong with my constructor, but I'm not entirely sure what.
I put the methods I believe are the main issues at the top, and the rest of the code under for context.
Any help is appreciated.
/**
* Initializes an empty map.
*/
public AbstractMiniMap() {
this.size = 0;
this.keys = new Object[CAPACITY];
this.vals = new Object[CAPACITY];
}
Object Creation in Main
Getting a "Cannot instantiate the type AbstractMiniMap" error, under the AbstractMiniMap after new.
public static void main(String[] args) {
AbstractMiniMap<Double, Double> asd = new AbstractMiniMap<>(20,30);
}
import java.util.List;
import java.util.Set;
import java.util.StringJoiner;
/**
* This class provides a skeletal implementation of the {@code MiniMap}
* interface. It provides separate arrays for the keys and values of the map, as
* well as implementations of the {@code MiniMap} accessor methods.
*
* <p>
* A functioning {@code MiniMap} implementation can be created by extending this
* class and implementing the {@code push} and {@code remove} methods in the
* subclass.
*
* @param <K> the type of keys maintained by this map
* @param <V> the type of mapped values
*/
public abstract class AbstractMiniMap<K, V> implements MiniMap<K, V> {
/**
* The array of keys.
*/
protected Object keys[];
/**
* The array of values.
*/
protected Object vals[];
/**
* The number of mappings currently in this map.
*/
protected int size;
private static final int CAPACITY = 16;
/**
* Initializes an empty map.
*/
public AbstractMiniMap() {
this.size = 0;
this.keys = new Object[CAPACITY];
this.vals = new Object[CAPACITY];
}
/**
* Returns the capacity (maximum number of elements) that this map can hold at
* any one time.
*
* @return the capacity of this map
*/
public final int capacity() {
return CAPACITY;
}
/**
* Returns the number of key-value mappings held by this map.
*
* @return the number of key-value mappings held by this map
*/
public final int size() {
return this.size;
}
/**
* HINT: VERY USEFUL METHOD. You don't have to implement this method, but it
* prevents a lot of code duplication.
*
* Returns the index of the element equal to the specified key in
* {@code this.keys} if such an element exists, or {@code -1} otherwise.
*
* @param key a key to search for
* @return the index of the element equal to the specified key in
* {@code this.keys} if such an element exists, or {@code -1} otherwise
*/
protected int indexOfKey(Object key) {
int counter = -1;
for (Object i: this.keys){
counter++;
if (i.equals(key)) {
return counter;
}
}
return -1;
}
/**
* HINT: VERY USEFUL METHOD. You don't have to implement this method, but it
* prevents a lot of code duplication.
*
* Returns the index of the element equal to the specified value in
* {@code this.vals} if such an element exists, or {@code -1} otherwise.
*
* @param value a value to search for
* @return the index of the element equal to the specified value in
* {@code this.vals} if such an element exists, or {@code -1} otherwise
*/
protected int indexOfValue(Object value) {
int counter = -1;
for(Object i:this.vals) {
counter++;
if(i.equals(value)) {
return counter;
}
}
return -1;
}
/**
* Returns true if the map contains a mapping for the specified key. More
* formally, returns true if and only if this map has a key {@code k} such that
* {@code k.equals(key)} is true.
*
* @param key a key to search for in this map
* @return true if this map contains a mapping for the specified key
*/
public boolean containsKey(Object key) {
// HINT: Consider using indexOfKey....
int response = indexOfKey(key);
if (response != -1) {
return true;
}
return false;
}
/**
* Returns true if the map contains one or more keys that map to the specified
* value. More formally, returns true if and only if this map has a value
* {@code v} such that {@code v.equals(value)} is true.
*
* @param value a value to search for in this map
* @return true if this map contains one or more mappings for the specified
* value
*/
public boolean containsValue(Object value) {
// HINT: Consider using indexOfValue....
int response = indexOfValue(value);
if(response != -1) {
return true;
}
return false;
}
/**
* Returns the value that the specified key maps to, or {@code null} if
* {@code containsKey(key)} is false.
*
* @param key the key to search for
* @return the value that the specified key maps to, or {@code null} if the map
* does not contain the specified key
*/
public V get(Object key) {
// HINT: Maybe use indexOfKey....
int response = indexOfKey(key);
if(response == -1) {
return null;
}
else {
V t = (V) this.vals[response];
return t;
}
}
/**
* Returns a string representation of this map. The returned string contains the
* key-value pairs as strings enclosed in braces ({@code "{}"}). Adjacent
* mappings are separated by the characters {@code ", "} (comma and space). Each
* key-value mapping is rendered as the key followed by an equals sign
* ({@code "="}) followed by the associated value. For example, a map containing
* a string key {@code "a"} mapped to the integer value {@code 100} and a string
* key {@code "b"} mapped to the integer value {@code 200} would have the string
* representation:
*
* <p>
* {@code "{a=100, b=200}"} or {@code "{b=100, a=200}"}
*
* <p>
* depending on how the mappings are stored in the map.
*
*
* @return a string representation of this map
*/
public String toString() {
// HINT: Use a java.util.StringJoiner object to help build the string
}
/**
* STUDENTS SHOULD NOT USE THIS METHOD.
*
* <p>
* Returns a set of keys equal to the set held by this map in the order that
* they appear in {@code this.keys}. For testing purposes only.
*
* @return a set of keys held by this map in the order that they appear in
* {@code this.keys}
*/
public Set<K> keys() {
return A6Utils.getKeys(this.keys);
}
/**
* STUDENTS SHOULD NOT USE THIS METHOD.
*
* <p>
* Returns a list of values equal to the list held by this map in the order that
* they appear in {@code this.vals}. For testing purposes only.
*
* @return a list of values equal to the list held by this map in the order that
* they appear in {@code this.vals}
*/
public List<V> values() {
return A6Utils.getValues(this.vals);
}
public static void main(String[] args) {
AbstractMiniMap<Double, Double> asd = new AbstractMiniMap<>(20,30);
}
}
I tried casting to Object on the right side of the equals sign.
|
[
"As you can see in the class declaration, the AbstractMiniMap is an abstract class:\npublic abstract class AbstractMiniMap<K, V> implements MiniMap<K, V>\n\nAbstract classes can't be instantiated in Java since abstract classes are defined to be abstract super-classes that should not be instantiated on their own but further implemented in a sub-class.\nPossible Solutions\nAnonymous sub-class\nIn your example, you might be able to instantiate the AbstractMiniMap with an anonymous inline sub-class like this: new AbstractMiniMap<>() {};. That is possible because AbstractMiniMap does not contain any abstract methods that would have been to be implemented by sub-classes.\nUn-abstract\nOn the other hand, you could also just remove the abstract from the class declaration, if you want the class to be instantiable. But then you would also have to remove the \"Abstract\" prefix of AbstractMiniMap in order to comply with common naming conventions.\nClean Solution\nThe clean solution (if you want the AbstractMiniMap to remain abstract) would be to create a normal sub-class of it and instantiate this one.\nEither\npublic class DoubleMiniMap extends AbstractMiniMap<Double, Double> {\n}\n\nwith MiniMap<Double, Double> map = new DoubleMiniMap();\nOr\npublic class GenericMiniMap<K, V> extends AbstractMiniMap<K, V> {\n}\n\nwith MiniMap<Double, Double> map = new GenericMiniMap<>();\n"
] |
[
1
] |
[] |
[] |
[
"constructor",
"generics",
"java",
"oop"
] |
stackoverflow_0074669359_constructor_generics_java_oop.txt
|
Q:
Trying to get historical data for multiple securities using python and IB API - df not clearing between loops
I'm trying to get historical data for several products through the IB API, and store each product in a dataframe (which I need to save in separate csv files).
This is my code. The main issue is that the dataframe isn't clearing between loops: when moving on to the second loop the df contains data for 2 products, and on the third, for 3. I'm not sure where/how to clear the df.
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract
import pandas as pd
import threading
import time
class IBapi(EWrapper, EClient):
def __init__(self):
EClient.__init__(self, self)
self.data = []
def historicalData(self, reqId, bar):
self.data.append([bar.date, bar.open, bar.high, bar.low, bar.close, bar.volume])
def error(self, reqId, errorCode, errorString):
print("Error. Id: " , reqId, " Code: " , errorCode , " Msg: " , errorString)
def historicalDataEnd(self, reqId: int, start: str, end: str):
print("HistoricalDataEnd. ReqId:", reqId, "from", start, "to", end)
self.df = pd.DataFrame(self.data)
def run_loop():
app.run()
app = IBapi()
#Create contract object
ES_contract = Contract()
ES_contract.symbol = 'ES'
ES_contract.secType = 'FUT'
ES_contract.exchange = 'GLOBEX'
ES_contract.lastTradeDateOrContractMonth = '202209'
#Create contract object
VIX_contract = Contract()
VIX_contract.symbol = 'VIX'
VIX_contract.secType = 'IND'
VIX_contract.exchange = 'CBOE'
VIX_contract.currency = 'USD'
#Create contract object
DAX_contract = Contract()
DAX_contract.symbol = 'DAX'
DAX_contract.secType = 'FUT'
DAX_contract.exchange = 'EUREX'
DAX_contract.currency = 'EUR'
DAX_contract.lastTradeDateOrContractMonth = '202209'
DAX_contract.multiplier = '25'
products={'ES': ES_contract, 'VIX': VIX_contract, 'DAX': DAX_contract}
nid=1
app.connect('127.0.0.1', 4001, 123)
#Start the socket in a thread
api_thread = threading.Thread(target=run_loop, daemon=True)
api_thread.start()
time.sleep(1) #Sleep interval to allow time for connection to server
def fetchdata_function(name,nid):
df=pd.DataFrame()
#Request historical candles
app.reqHistoricalData(nid, products[name], '', '1 W', '5 mins', 'TRADES', 0, 2, False, [])
time.sleep(10) #sleep to allow enough time for data to be returned
df = pd.DataFrame(app.data, columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
df['Date'] = pd.to_datetime(df['Date'],unit='s')
df=df.set_index('Date')
df.to_csv('1week'+str(name)+'5min.csv')
print(df)
names=['ES', 'DAX', 'VIX']
for name in names:
fetchdata_function(name,nid)
nid=nid+1
app.disconnect()
A:
Create a dictionary and append the bar data as key-value pairs (for example, keyed by reqId) in the historicalData callback. Then you can access each product's data separately; in fact, converting a dict to a multi-level dataframe is also possible.
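A minimal sketch of that idea, reusing the wrapper class from the question (the to_frame helper name is just illustrative):
import pandas as pd
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

class IBapi(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)
        # One list of bars per request id, so products never mix.
        self.data = {}

    def historicalData(self, reqId, bar):
        self.data.setdefault(reqId, []).append(
            [bar.date, bar.open, bar.high, bar.low, bar.close, bar.volume])

    def historicalDataEnd(self, reqId, start, end):
        print("HistoricalDataEnd. ReqId:", reqId, "from", start, "to", end)

def to_frame(app, req_id):
    # Build a per-product dataframe from the bars stored under req_id.
    df = pd.DataFrame(app.data.get(req_id, []),
                      columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
    return df.set_index('Date')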
|
Trying to get historical data for multiple securities using python and IB API - df not clearing between loops
|
I'm trying to get historical data for several products through the IB API, and store each product in a dataframe (which I need to save in separate csv files).
This is my code. The main issue is that the dataframe isn't clearing between loops: when moving on to the second loop the df contains data for 2 products, and on the third, for 3. I'm not sure where/how to clear the df.
from ibapi.client import EClient
from ibapi.wrapper import EWrapper
from ibapi.contract import Contract
import pandas as pd
import threading
import time
class IBapi(EWrapper, EClient):
def __init__(self):
EClient.__init__(self, self)
self.data = []
def historicalData(self, reqId, bar):
self.data.append([bar.date, bar.open, bar.high, bar.low, bar.close, bar.volume])
def error(self, reqId, errorCode, errorString):
print("Error. Id: " , reqId, " Code: " , errorCode , " Msg: " , errorString)
def historicalDataEnd(self, reqId: int, start: str, end: str):
print("HistoricalDataEnd. ReqId:", reqId, "from", start, "to", end)
self.df = pd.DataFrame(self.data)
def run_loop():
app.run()
app = IBapi()
#Create contract object
ES_contract = Contract()
ES_contract.symbol = 'ES'
ES_contract.secType = 'FUT'
ES_contract.exchange = 'GLOBEX'
ES_contract.lastTradeDateOrContractMonth = '202209'
#Create contract object
VIX_contract = Contract()
VIX_contract.symbol = 'VIX'
VIX_contract.secType = 'IND'
VIX_contract.exchange = 'CBOE'
VIX_contract.currency = 'USD'
#Create contract object
DAX_contract = Contract()
DAX_contract.symbol = 'DAX'
DAX_contract.secType = 'FUT'
DAX_contract.exchange = 'EUREX'
DAX_contract.currency = 'EUR'
DAX_contract.lastTradeDateOrContractMonth = '202209'
DAX_contract.multiplier = '25'
products={'ES': ES_contract, 'VIX': VIX_contract, 'DAX': DAX_contract}
nid=1
app.connect('127.0.0.1', 4001, 123)
#Start the socket in a thread
api_thread = threading.Thread(target=run_loop, daemon=True)
api_thread.start()
time.sleep(1) #Sleep interval to allow time for connection to server
def fetchdata_function(name,nid):
df=pd.DataFrame()
#Request historical candles
app.reqHistoricalData(nid, products[name], '', '1 W', '5 mins', 'TRADES', 0, 2, False, [])
time.sleep(10) #sleep to allow enough time for data to be returned
df = pd.DataFrame(app.data, columns=['Date', 'Open', 'High', 'Low', 'Close', 'Volume'])
df['Date'] = pd.to_datetime(df['Date'],unit='s')
df=df.set_index('Date')
df.to_csv('1week'+str(name)+'5min.csv')
print(df)
names=['ES', 'DAX', 'VIX']
for name in names:
fetchdata_function(name,nid)
nid=nid+1
app.disconnect()
|
[
"create a dictionary and append the app.data as a key value pair in the historicaldata callback. Then you can access them separately - in fact converting a dict to multi-level dataframe is also possible\n"
] |
[
0
] |
[] |
[] |
[
"ib_api",
"interactive_brokers",
"pandas",
"python",
"tws"
] |
stackoverflow_0073211491_ib_api_interactive_brokers_pandas_python_tws.txt
|
Q:
How many shards does a MongoDB cluster support?
My colleague told me that a MongoDB cluster can have at most 1024 shards. But I can't find this number in the MongoDB documentation. Is this statement true for MongoDB? Here is the MongoDB documentation I'm referring to.
A:
MongoDB employee Stennie stated that he is not aware of a specific limit on the number of shards.
Source: https://www.mongodb.com/community/forums/t/maximum-number-of-shards/98230
|
How many shards does a MongoDB cluster support?
|
My colleague told me that a MongoDB cluster can have at most 1024 shards. But I can't find this number in the MongoDB documentation. Is this statement true for MongoDB? Here is the MongoDB documentation I'm referring to.
|
[
"MongoDB employee Stennie stated that he is not aware of a specific limit on number of shards.\nSource: https://www.mongodb.com/community/forums/t/maximum-number-of-shards/98230\n"
] |
[
0
] |
[] |
[] |
[
"mongodb"
] |
stackoverflow_0074666464_mongodb.txt
|
Q:
How can I lock my computer with AutoHotkey?
I'm trying to bind the "Esc" key to lock my computer with AutoHotkey.
Manually pressing Winkey + l will lock my computer, but it doesn't work in my AutoHotkey script.
esc::
MsgBox Going to lock
Send, #l
Return
I have tried multiple other AutoHotkey syntaxes (without the modifier, for example) without success.
A:
Per the recommendation in the comments by wOxxOm:
Esc::
{
DllCall("LockWorkStation")
}
return
A:
What you are doing in that code is pressing the Windows key first and then the 'l' key, not both at the same time. To make a key combination, you need to press the modifier key down and then the key you want to combine it with. Remember to release the modifier key afterwards. Your code would then look like:
Send {LWin down}
Send l
Send {LWin up}
or
Send {LWin down}l{LWin up}
A:
Just improving the code of my fellow comrades: you can lock AND turn off the screen if you want to:
#J::
KeyWait LWin
KeyWait J
DllCall("LockWorkStation")
SendMessage,0x112,0xF170,2,,Program Manager
Return
|
How can I lock my computer with AutoHotkey?
|
I'm trying to bind the "Esc" key to lock my computer with AutoHotkey.
Manually pressing Winkey + l will lock my computer, but it doesn't work in my AutoHotkey script.
esc::
MsgBox Going to lock
Send, #l
Return
I have tried multiple other AutoHotkey syntaxes (without the modifier, for example) without success.
|
[
"Per the recommendation in the comments by wOxxOm: \nEsc::\n{\nDllCall(\"LockWorkStation\")\n}\nreturn\n\n",
"What you are doing in that code, is pressing the windows key first and then the 'l' key. Not both at the same time. To make key combinations, you need to press the combination key down and then the key you want to combine it with. Remember to release the key afterwards. Your code would then look like:\nSend {LWin down}\nSend l\nSend {LWin up}\n\nor\nSend {LWin down}l{LWin up}\n\n",
"Just improving the code of my fellow camarades, you can block AND turn off the screen if you want to:\n#J::\n KeyWait LWin\n KeyWait J\n DllCall(\"LockWorkStation\")\n SendMessage,0x112,0xF170,2,,Program Manager\nReturn\n\n"
] |
[
14,
0,
0
] |
[] |
[] |
[
"autohotkey",
"lockscreen",
"window"
] |
stackoverflow_0042314908_autohotkey_lockscreen_window.txt
|
Q:
Meteor reports my server method does not exist
fairly new to Meteor and JS, doing a lot of reading and research. I have been following an example of an HTTP request but I keep getting an error "404, method Abc not found":
This is how my JS file looks like:
if (Meteor.isServer) {
Meteor.methods({
Abc: function () {
this.unblock();
return Meteor.http.call("GET", //HTTP REQUEST TEXT);
}
});
}
if (Meteor.isClient) {
Meteor.call("Abc", function(error, results) {
console.log(error);
console.log(results);
});
}
Why is the server method not found if it is in the same file? I only want to show the content of the HTTP response.
Debugging and re-reading the tutorials.
A:
Apparently, your code is loaded only on the client side. You should separate the client and the server logic by using the "client" and "server" folders, respectively. Or, in your example, you could use a "both" folder.
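A minimal sketch of that split, reusing the method from the question (the URL is a placeholder):
// server/methods.js  (loaded only on the server)
Meteor.methods({
  Abc: function () {
    this.unblock();
    return Meteor.http.call("GET", "https://example.com"); // placeholder URL
  }
});

// client/main.js  (loaded only on the client)
Meteor.call("Abc", function (error, results) {
  console.log(error);
  console.log(results);
});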
|
Meteor reports my server method does not exist
|
fairly new to Meteor and JS, doing a lot of reading and research. I have been following an example of an HTTP request but I keep getting an error "404, method Abc not found":
This is how my JS file looks like:
if (Meteor.isServer) {
Meteor.methods({
Abc: function () {
this.unblock();
return Meteor.http.call("GET", //HTTP REQUEST TEXT);
}
});
}
if (Meteor.isClient) {
Meteor.call("Abc", function(error, results) {
console.log(error);
console.log(results);
});
}
Why is the server method not found if it is in the same file? I only want to show the content of the HTTP response.
Debugging and re-reading the tutorials.
|
[
"Apparently, your code is loaded only on the client side. You should separate the client and the server logic by using the \"client\" and \"server\" folders respectively . Or in your example you should use \"both\" folder.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"meteor"
] |
stackoverflow_0074656383_javascript_meteor.txt
|
Q:
how to allocate an elastic ip to an autoscaling group with 1 instance
I have 1 instance in an Auto Scaling group with min = 1, max = 1, and desired = 1.
I also have an Elastic IP which I want to assign to this single instance; when this one instance goes down, the Elastic IP should be released and allocated to the new instance. I have attached the admin policy to the role which is attached in the launch configuration for the ASG. I have added the information below to the user data in the launch template, but my Elastic IP is still not getting associated with the new instance. I really need help with this, please.
#!/bin/bash
InstanceID=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/instance-id`
Allocate_ID= 'eipalloc-0d54643260cd69141'
aws ec2 associate-address --instance-id $InstanceID --allocation-id $Allocate_ID
A:
You'll need to disassociate the address from the instance to which the EIP is attached to before associating it again.
This will do the job:
#!/bin/bash
ALLOCATE_ID="eipalloc-0d54643260cd69141"
# Release the EIP if it is currently associated with an instance
aws ec2 disassociate-address --association-id "$ALLOCATE_ID" || true
# Associate address to this instance
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 associate-address --instance-id "$INSTANCE_ID" --allocation-id "$ALLOCATE_ID"
|
how to allocate an elastic ip to an autoscaling group with 1 instance
|
I have 1 instance in an Auto Scaling group with min = 1, max = 1, and desired = 1.
I also have an Elastic IP which I want to assign to this single instance; when this one instance goes down, the Elastic IP should be released and allocated to the new instance. I have attached the admin policy to the role which is attached in the launch configuration for the ASG. I have added the information below to the user data in the launch template, but my Elastic IP is still not getting associated with the new instance. I really need help with this, please.
#!/bin/bash
InstanceID=`/usr/bin/curl -s http://169.254.169.254/latest/meta-data/instance-id`
Allocate_ID= 'eipalloc-0d54643260cd69141'
aws ec2 associate-address --instance-id $InstanceID --allocation-id $Allocate_ID
|
[
"You'll need to disassociate the address from the instance to which the EIP is attached to before associating it again.\nThis will do the job:\n#!/bin/bash\n\nALLOCATE_ID=\"eipalloc-0d54643260cd69141\"\n\n# Release the EIP if it is currently associated with an instance\naws ec2 disassociate-address --association-id \"$ALLOCATE_ID\" || true\n\n# Associate address to this instance\nINSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)\naws ec2 associate-address --instance-id \"$INSTANCE_ID\" --allocation-id \"$ALLOCATE_ID\"\n\n"
] |
[
1
] |
[] |
[] |
[
"amazon_ec2",
"amazon_web_services",
"aws_auto_scaling",
"elastic_ip"
] |
stackoverflow_0074668047_amazon_ec2_amazon_web_services_aws_auto_scaling_elastic_ip.txt
|
Q:
develop shared library alongside another project without having them be in the same solution
Let's say I have a C# project Foo and a class library called Bar
I'm wanting to develop Bar alongside Foo which will use Bar as a shared library. I'd like to keep these Foo and Bar in their own git repositories.
When I debug Foo, I'd like to be able to step into Bar to see what it's doing under the hood. When I make changes to Bar, I'd like to be able to have my changes reflected in Foo. It's okay if I'd have to build Bar first for my changes to take effect.
When I eventually deploy Foo, I'd like to import Bar as a nuget package, rather than including it as a part of the solution for Foo
Is this possible in C#? I've been trying to develop a shared library and a repository that uses that library as a template for future projects. I've tried to publish Bar as a NuGet package to my local filesystem, but it's been giving me problems: I'm unable to step into functions that call into Bar from project Foo, and when I make changes to Bar I have to build, pack, then publish the library again. If I don't bump the version number of Bar when I do this, it results in errors where I have to go to the NuGet package in my filesystem and delete it manually.
Aside
If you're interested Bar contains extension methods for setting up a connection to a message broker along with classes for configuration definition and "contract" classes that need to be shared among projects.
A:
you can go red path or blue.
"Is this possible in c#?" - This is not c# or any language. This is solution/project management. Many things are possible. You can definitely develop as raw projects or DLL. Include debug-built DLL and PDB file into your nuget, and you will be able to step through your referenced library.
You don't have to use a separate solution from yout GIT/TFS. You can develop using any local solution, not binded to source control.
A:
I'm wanting to develop Bar alongside Foo which will use Bar as a shared library. I'd like to keep these Foo and Bar in their own git repositories.
Yes. In Foo, add all files from Bar as needed by add as Link. Namespaces and files will be honored as if they existed in Foo; but no files will be copied...only referenced.
This is an old Silverlight trick to share one set of code between two projects because of the two different versions of the CLR from the web services to the Silverlight project. It allowed models to be brought over from the web services without trying to pull in a dll which had a totally different CLR.
Create the code in project 1. Then for project 2, add the files by linking them from project 1. To do that type of add, its really adding a symbolic link to the file(s).
How
The trick is to include as a link into the project as needed.
In the second project right click and select Add then Existing Item... or shift alt A.
Browse to the location of the file(s) found in the first project and select the file(s).
Once the file(s) have been selected, then on the Add button select the drop down arrow.
Select Add as link to add the common files(s) as a link into the project.
I'm unable to step into functions that call into Bar from project Foo and when I make changes to Bar I have to build, pack, then publish the library again.
The linking of files, as mentioned, will gain access to the file as if the file was actually within the project, but the file physically resides elsewhere. If the linked file(s) change, those changes are reflected in the project that linked them in. Building due to changes still applies, but that should be minor.
A:
This is a common problem when developing code in nuget dependency chains.
One solution is to use something like NuGetReferenceSwitcher.
The disadvantage is that the tool will change your csproj files back and forth and you need to take good care of not commiting unwanted changes.
Another solution I employ successfully is to create a sibling project to your Foo project that uses ProjectReference to Bar instead of PackageReference.
I detailed the approach on my blog.
It involves editing your project files by hand, which is simple in SDK-style projects.
If you are comfortable with (or want to learn) that, here is the Gist:
Extract everything except the PackageReference to Bar from the Foo project into a Foo.props file.
Import that Foo.props file into the Foo project. Note that until now, effectively nothing has changed.
Create a copy of the Foo.csproj in some other folder and name it e. g. FooDev.csproj. Link the source files from the Foo (sic!) project into the FooDev project (using the technique that Ξ©megaMan already described in their answer). Include FooDev.csproj in your Foo solution.
In your FooDev.cproj change the PackageReference to Bar to a ProjectReference.
You now have both a Foo.csproj that uses a PackageReference to Bar and a FooDev.csproj that uses a ProjectReference to Bar.
You will be able to immediately see the effects of the changes you make to Bar on the FooDev project.
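For illustration only (the folder layout, package version, and relative paths below are assumptions, not taken from real projects), the two project files could end up looking roughly like this:
<!-- Foo.csproj: consumes Bar as a NuGet package -->
<Project Sdk="Microsoft.NET.Sdk">
  <Import Project="Foo.props" />
  <ItemGroup>
    <PackageReference Include="Bar" Version="1.0.0" />
  </ItemGroup>
</Project>

<!-- FooDev.csproj (in a sibling folder): links the Foo sources, references Bar as a project -->
<Project Sdk="Microsoft.NET.Sdk">
  <Import Project="..\Foo\Foo.props" />
  <ItemGroup>
    <Compile Include="..\Foo\**\*.cs" Exclude="..\Foo\bin\**;..\Foo\obj\**"
             Link="%(RecursiveDir)%(Filename)%(Extension)" />
    <ProjectReference Include="..\..\Bar\Bar.csproj" />
  </ItemGroup>
</Project>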
|
develop shared library alongside another project without having them be in the same solution
|
Let's say I have a c# project Foo and a classlibrary called Bar
I'm wanting to develop Bar alongside Foo which will use Bar as a shared library. I'd like to keep these Foo and Bar in their own git repositories.
When I debug Foo, I'd like to be able to step into Bar to see what it's doing under the hood. When I make changes to Bar, I'd like to be able to have my changes reflected in Foo. It's okay if I'd have to build Bar first for my changes to take effect.
When I eventually deploy Foo, I'd like to import Bar as a nuget package, rather than including it as a part of the solution for Foo
Is this possible in c#? I've been trying to develop a shared library and a repository that uses that library as a template for future projects. I've tried to publish Bar as a nuget package to my local filesystem but it's been giving me problems; I'm unable to step into functions that call into Bar from project Foo and when I make changes to Bar I have to build, pack, then publish the library again. If I don't bump the version number of bar when I do this, this results in errors where I have to go to the nuget package in my filesystem and delete it manually.
Aside
If you're interested Bar contains extension methods for setting up a connection to a message broker along with classes for configuration definition and "contract" classes that need to be shared among projects.
|
[
"you can go red path or blue.\n\"Is this possible in c#?\" - This is not c# or any language. This is solution/project management. Many things are possible. You can definitely develop as raw projects or DLL. Include debug-built DLL and PDB file into your nuget, and you will be able to step through your referenced library.\nYou don't have to use a separate solution from yout GIT/TFS. You can develop using any local solution, not binded to source control.\n\n",
"\nI'm wanting to develop Bar alongside Foo which will use Bar as a shared library. I'd like to keep these Foo and Bar in their own git repositories.\n\nYes. In Foo, add all files from Bar as needed by add as Link. Namespaces and files will be honored as if they existed in Foo; but no files will be copied...only referenced.\n\nThis is an old Silverlight trick to share one set of code between two projects because of the two different versions of the CLR from the web services to the Silverlight project. It allowed models to be brought over from the web services without trying to pull in a dll which had a totally different CLR.\n\nCreate the code in project 1. Then for project 2, add the files by linking them from project 1. To do that type of add, its really adding a symbolic link to the file(s).\nHow\nThe trick is to include as a link into the project as needed.\n\nIn the second project right click and select Add then Existing Item... or shift alt A.\nBrowse to the location of the file(s) found in the first project and select the file(s).\nOnce the file(s) have been selected, then on the Add button select the drop down arrow.\nSelect Add as link to add the common files(s) as a link into the project.\n\n\n\n\nI'm unable to step into functions that call into Bar from project Foo and when I make changes to Bar I have to build, pack, then publish the library again.\n\nThe linking of files, as mentioned, will gain access to the file as if the file was actually within the project, but the file physically resides elsewhere. If the linked file(s) change, those changes are reflected in the project that linked them in. Building due to changes still applies, but that should be minor.\n",
"This is a common problem when developing code in nuget dependency chains.\nOne solution is to use something like NuGetReferenceSwitcher.\nThe disadvantage is that the tool will change your csproj files back and forth and you need to take good care of not commiting unwanted changes.\nAnother solution I employ successfully is to create a sibling project to your Foo project that uses ProjectReference to Bar instead of PackageReference.\nI detailed the approach on my blog.\nIt involves editing your project files by hand, which is simple in SDK-style projects.\nIf you are comfortable with (or want to learn) that, here is the Gist:\n\nExtract everything except the PackageReference to Bar from the Foo project into a Foo.props file.\nImport that Foo.props file into the Foo project. Note that until now, effectively nothing has changed.\nCreate a copy of the Foo.csproj in some other folder and name it e. g. FooDev.csproj. Link the source files from the Foo (sic!) project into the FooDev project (using the technique that Ξ©megaMan already described in their answer). Include FooDev.csproj in your Foo solution.\nIn your FooDev.cproj change the PackageReference to Bar to a ProjectReference.\n\nYou now have both a Foo.csproj that uses a PackageReference to Bar and a FooDev.csproj that uses a ProjectReference to Bar.\nYou will be able to immediately see the effects of the changes you make to Bar on the FooDev project.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"c#",
"class_library",
"nuget",
"visual_studio"
] |
stackoverflow_0074663708_c#_class_library_nuget_visual_studio.txt
|
Q:
Create a link to a specific word count position such as bookmark in docx
How this project works:
Searches external docx / OCR data for a keyword
Builds a context of 100 words surrounding the keyword
Builds a docx to store the passage with a hyperlink posted under each completed search
What is missing:
A way to link each passage back to its source location in the external Word document, so you can just follow a hyperlink to it. The problem is that the OCR docx files being read have no headings to bookmark a run, and I could not create them for the long OCR text, so going into each docx file one by one and reading through the (sometimes gibberish) OCR is not manageable.
So the solution needs to be stored in the new document where the passage is printed. The hyperlink code below works... I need something more than what I have here to find the passage locations in the source, unless MS Word simply does not support something as specific as jumping to the indexed word position of a passage. Can I build a macro, call it from Python to make a link, and jump to the position using the index?
Hyperlinking/bookmark code post ref:
import os
import re

import docx
from docx import Document
from docx.enum.dml import MSO_THEME_COLOR_INDEX


def add_hyperlink(paragraph, text, url):
# This gets access to the document.xml.rels file and gets a new relation id value
part = paragraph.part
r_id = part.relate_to(url, docx.opc.constants.RELATIONSHIP_TYPE.HYPERLINK, is_external=True)
# Create the w:hyperlink tag and add needed values
hyperlink = docx.oxml.shared.OxmlElement('w:hyperlink')
hyperlink.set(docx.oxml.shared.qn('r:id'), r_id, )
# Create a w:r element and a new w:rPr element
new_run = docx.oxml.shared.OxmlElement('w:r')
rPr = docx.oxml.shared.OxmlElement('w:rPr')
# Join all the xml elements together add the required text to the w:r element
new_run.append(rPr)
new_run.text = text
hyperlink.append(new_run)
# Create a new Run object and add the hyperlink into it
r = paragraph.add_run()
r._r.append(hyperlink)
# A workaround for the lack of a hyperlink style (doesn't go purple after using the link)
# Delete this if using a template that has the hyperlink style in it
r.font.color.theme_color = MSO_THEME_COLOR_INDEX.HYPERLINK
r.font.underline = True
return hyperlink
def extract_surround_words(text, keyword, n):
'''
text : input text
keyword : the search keyword we are looking
n : number of words around the keyword
'''
# extracting all the words from text
words = re.findall(r'\w+', text)
passage = []
passageText = ''
saveIndex = []
passagePos = []
indexVal = ''
document = Document()
document.add_heading("The keyword searched is: " + searchKeyword + ", WORD COUNT: " + str(len(text)) + "\n", 0)
# iterate through all the words
for index, word in enumerate(words):
# check if search keyword matches
if word == keyword and len(words) > 0:
saveIndex.append(str(index-n))
# fetch left side words and right
passage = words[index - n: index] #start text run
passage.append(keyword)
passage += words[index + 1: index + n + 1] #end of run
passagePos = "\nWORD COUNT POSITION: " + str(saveIndex.pop() + "\n")
bookmark = add_bookmark(index, passagePos)
print(str(passagePos))
for wd in passage:
passageText += ' ' + wd
parag = document.add_paragraph(passageText)
add_hyperlink(parag, passagePos, os.path.join(path, file))
passage.append("\n\n")
document.save(os.path.join(output_path, out_file_doc))
return passageText
A:
You want to build a system that searches for a keyword in external documents, extracts a context of 100 words surrounding the keyword, and creates a new document with hyperlinks to the passages in the original document. The problem you are facing is that the OCR documents do not have headings or bookmarks, so it is difficult to create hyperlinks to specific passages in the original document.
One solution to this problem could be to use a macro in Microsoft Word to create the hyperlinks and store them in the new document. You can write a macro in Visual Basic for Applications (VBA) that takes the keyword, context, and file path of the original document as input and returns the position of the keyword in the original document. This macro can then be called from your Python script using the python-docx library, which allows you to run VBA macros and interact with Microsoft Word documents.
Here is an example of how you could write the macro and call it from your Python script:
VBA macro (save as CreateHyperlink.bas):
Sub CreateHyperlink(keyword As String, context As String, filePath As String)
' Open the original document
Dim originalDoc As Document
Set originalDoc = Documents.Open(filePath)
' Search for the keyword in the original document
Dim keywordRange As Range
Set keywordRange = originalDoc.Range.Find(keyword)
' Get the position of the keyword in the original document
Dim keywordPos As Long
keywordPos = keywordRange.Start
' Close the original document
originalDoc.Close
' Return the keyword position to the calling Python script
ThisDocument.VBProject.VBComponents("ThisDocument")._
CodeModule.AddFromString "keywordPos = " & keywordPos
End Sub
Python script:
# Import the required libraries
from docx import Document
from docx.enum.vba import MsoModuleType
from docx.vba.module import Module
# Open the new document
document = Document()
# Add the VBA macro to the new document
vba_filename = 'CreateHyperlink.bas'
with open(vba_filename, 'rb') as f:
vba_bin = f.read()
document.add_vba_binary(vba_bin)
# Call the VBA macro from the Python script
document.vba_modules['ThisDocument']._modules[0]._methods[0].Run(
keyword='keyword',
context='context',
filePath='filePath',
)
# Get the keyword position returned by the VBA macro
keywordPos = document.vba_modules['ThisDocument']._modules[0]._attributes['keywordPos']
# Use the keyword position to create a hyperlink to the original passage
paragraph = document.add_paragraph(context)
document.add_hyperlink(paragraph, keywordPos, filePath)
# Save the new document
document.save('new_document.docx')
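If you would rather stay entirely inside python-docx instead of driving Word macros, a bookmark can also be written at the OOXML level, which gives the question's missing add_bookmark helper something concrete to do. The sketch below is illustrative only: it reuses the same docx.oxml.shared helpers as the add_hyperlink function, and the bookmark naming scheme in the usage comment is an assumption.
from docx.oxml.shared import OxmlElement, qn

def add_bookmark(paragraph, bookmark_name, bookmark_id):
    """Wrap a paragraph in a w:bookmarkStart / w:bookmarkEnd pair."""
    start = OxmlElement('w:bookmarkStart')
    start.set(qn('w:id'), str(bookmark_id))
    start.set(qn('w:name'), bookmark_name)

    end = OxmlElement('w:bookmarkEnd')
    end.set(qn('w:id'), str(bookmark_id))

    # Put the start marker before the paragraph content and the end after it
    paragraph._p.insert(0, start)
    paragraph._p.append(end)

# Hypothetical usage: bookmark a passage paragraph in the source document so a
# hyperlink such as "source.docx#passage_123" can jump straight to it later.
# add_bookmark(parag, "passage_123", 123)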
|
Create a link to a specific word count position such as bookmark in docx
|
How this project works:
Searches external docx / OCR data for a keyword
Builds a context of 100 words surrounding the keyword
Builds a docx to store the passage with a hyperlink posted under each completed search
What is missing:
A way to link to the passage to its source from the external document in Word, so you can just use a hyperlink to it, but the problem is the OCR docx files read have no headings to bookmark a run, and I could not create them with long OCR, so it is not manageable from the aspect of going in to the docx file one by one reading gibberish at times.
So Word needs to be able to store the solution in the document where the passage is printed in the new file. This hyperlink code works... I need something more than what I have here to find the passage locations on its source, unless MS Word will not support such a specific function as finding the indexed word position of the passage? Can I build a macro and call it in python to make a link and run its position using the index?
Hyperlinking/bookmark code post ref:
def add_hyperlink(paragraph, text, url):
# This gets access to the document.xml.rels file and gets a new relation id value
part = paragraph.part
r_id = part.relate_to(url, docx.opc.constants.RELATIONSHIP_TYPE.HYPERLINK, is_external=True)
# Create the w:hyperlink tag and add needed values
hyperlink = docx.oxml.shared.OxmlElement('w:hyperlink')
hyperlink.set(docx.oxml.shared.qn('r:id'), r_id, )
# Create a w:r element and a new w:rPr element
new_run = docx.oxml.shared.OxmlElement('w:r')
rPr = docx.oxml.shared.OxmlElement('w:rPr')
# Join all the xml elements together add the required text to the w:r element
new_run.append(rPr)
new_run.text = text
hyperlink.append(new_run)
# Create a new Run object and add the hyperlink into it
r = paragraph.add_run()
r._r.append(hyperlink)
# A workaround for the lack of a hyperlink style (doesn't go purple after using the link)
# Delete this if using a template that has the hyperlink style in it
r.font.color.theme_color = MSO_THEME_COLOR_INDEX.HYPERLINK
r.font.underline = True
return hyperlink
def extract_surround_words(text, keyword, n):
'''
text : input text
keyword : the search keyword we are looking
n : number of words around the keyword
'''
# extracting all the words from text
words = re.findall(r'\w+', text)
passage = []
passageText = ''
saveIndex = []
passagePos = []
indexVal = ''
document = Document()
document.add_heading("The keyword searched is: " + searchKeyword + ", WORD COUNT: " + str(len(text)) + "\n", 0)
# iterate through all the words
for index, word in enumerate(words):
# check if search keyword matches
if word == keyword and len(words) > 0:
saveIndex.append(str(index-n))
# fetch left side words and right
passage = words[index - n: index] #start text run
passage.append(keyword)
passage += words[index + 1: index + n + 1] #end of run
passagePos = "\nWORD COUNT POSITION: " + str(saveIndex.pop() + "\n")
bookmark = add_bookmark(index, passagePos)
print(str(passagePos))
for wd in passage:
passageText += ' ' + wd
parag = document.add_paragraph(passageText)
add_hyperlink(parag, passagePos, os.path.join(path, file))
passage.append("\n\n")
document.save(os.path.join(output_path, out_file_doc))
return passageText
|
[
"To build a system that searches for a keyword in external documents, extracts a context of 100 words surrounding the keyword, and creates a new document with hyperlinks to the passages in the original document. The problem you are facing is that the OCR documents do not have headings or bookmarks, so it is difficult to create hyperlinks to specific passages in the original document.\nOne solution to this problem could be to use a macro in Microsoft Word to create the hyperlinks and store them in the new document. You can write a macro in Visual Basic for Applications (VBA) that takes the keyword, context, and file path of the original document as input and returns the position of the keyword in the original document. This macro can then be called from your Python script using the python-docx library, which allows you to run VBA macros and interact with Microsoft Word documents.\nHere is an example of how you could write the macro and call it from your Python script:\nVBA macro (save as CreateHyperlink.bas):\nSub CreateHyperlink(keyword As String, context As String, filePath As String)\n ' Open the original document\n Dim originalDoc As Document\n Set originalDoc = Documents.Open(filePath)\n\n ' Search for the keyword in the original document\n Dim keywordRange As Range\n Set keywordRange = originalDoc.Range.Find(keyword)\n\n ' Get the position of the keyword in the original document\n Dim keywordPos As Long\n keywordPos = keywordRange.Start\n\n ' Close the original document\n originalDoc.Close\n\n ' Return the keyword position to the calling Python script\n ThisDocument.VBProject.VBComponents(\"ThisDocument\")._\n CodeModule.AddFromString \"keywordPos = \" & keywordPos\nEnd Sub\n\nPython script:\n# Import the required libraries\nfrom docx import Document\nfrom docx.enum.vba import MsoModuleType\nfrom docx.vba.module import Module\n\n# Open the new document\ndocument = Document()\n\n# Add the VBA macro to the new document\nvba_filename = 'CreateHyperlink.bas'\nwith open(vba_filename, 'rb') as f:\n vba_bin = f.read()\ndocument.add_vba_binary(vba_bin)\n\n# Call the VBA macro from the Python script\ndocument.vba_modules['ThisDocument']._modules[0]._methods[0].Run(\n keyword='keyword',\n context='context',\n filePath='filePath',\n)\n\n# Get the keyword position returned by the VBA macro\nkeywordPos = document.vba_modules['ThisDocument']._modules[0]._attributes['keywordPos']\n\n# Use the keyword position to create a hyperlink to the original passage\nparagraph = document.add_paragraph(context)\ndocument.add_hyperlink(paragraph, keywordPos, filePath)\n\n# Save the new document\ndocument.save('new_document.docx')\n\n"
] |
[
0
] |
[] |
[] |
[
"hyperlink",
"ms_word",
"python"
] |
stackoverflow_0074669471_hyperlink_ms_word_python.txt
|
Q:
summary statistics from a list
I'm trying to create summary stats from a list. I have mock data: I created 3 sub-sets of data, saved them to csv and imported them, and now I'm trying to calculate a mean for each column in each csv as a list.
examples_mean <- map(examples, function (data_df) {
data_df <- data_df %>%
summarise(vars(column1, column2), mean(mean))
return(data_df)
})
examples_mean
I tried using lapply() and various other suggestions from Stack Overflow,
but the only one that doesn't give an error/warning is my code above, which returns the output below.
That is the shape I need, but it's missing the values in the mean column:
> examples_mean
$`data/example_1.csv`
# A tibble: 2 × 2
`vars(column1, column2)` `mean(mean)`
<quos> <dbl>
1 column1 NA
2 column2 NA
$`data/example_2.csv`
# A tibble: 2 × 2
`vars(column1, column2)` `mean(mean)`
<quos> <dbl>
1 column1 NA
2 column2 NA
$`data/example_3.csv`
# A tibble: 2 × 2
`vars(column1, column2)` `mean(mean)`
<quos> <dbl>
1 column1 NA
2 column2 NA
A:
There are some syntax errors. If the intention is to loop over the columns column1 and column2 in each of the datasets in the list, loop over the list with map, then select column1 and column2 and loop across those columns in summarise to get the mean of each column.
library(purrr)
library(dplyr)
map(examples, ~
.x %>%
summarise(across(c(column1, column2), ~ mean(.x, na.rm = TRUE)))
)
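As a quick check, the same call applied to a small made-up list (the data frames and values below are invented purely for illustration) returns one single-row summary per element:
library(purrr)
library(dplyr)

examples <- list(
  `data/example_1.csv` = tibble(column1 = c(1, 2, NA), column2 = c(4, 5, 6)),
  `data/example_2.csv` = tibble(column1 = c(10, 20),   column2 = c(30, 40))
)

map(examples, ~ .x %>%
      summarise(across(c(column1, column2), ~ mean(.x, na.rm = TRUE))))
For the first element this gives a one-row tibble with column1 = 1.5 and column2 = 5 (the NA is dropped by na.rm = TRUE).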
|
summary statistics from a list
|
I'm trying to create summary stats from a list. I have mock data, created 3 sub-sets of data, saved it to csv and imported it, now i'm trying to calculate a mean for each column in each csv as a list.
examples_mean <- map(examples, function (data_df) {
data_df <- data_df %>%
summarise(vars(column1, column2), mean(mean))
return(data_df)
})
examples_mean
I tried using lapply() and various other suggestion from stackoverflow
but the only one that doesn't give an error/ warning is my code above, but returns this, below,
which is what I need, but its missing the values in the mean column:
> examples_mean
$`data/example_1.csv`
# A tibble: 2 Γ 2
`vars(column1, column2)` `mean(mean)`
<quos> <dbl>
1 column1 NA
2 column2 NA
$`data/example_2.csv`
# A tibble: 2 Γ 2
`vars(column1, column2)` `mean(mean)`
<quos> <dbl>
1 column1 NA
2 column2 NA
$`data/example_3.csv`
# A tibble: 2 Γ 2
`vars(column1, column2)` `mean(mean)`
<quos> <dbl>
1 column1 NA
2 column2 NA
|
[
"There are some syntax errors. If the intention is to loop over the columns column1, column2 in each of the datasets in the list, loop over the list with map, then select the column1, column2 and loop across those columns in summarise to get the mean of the column\nlibrary(purrr)\nlibrary(dplyr)\nmap(examples, ~ \n .x %>%\n summarise(across(c(column1, column2), ~ mean(.x, na.rm = TRUE)))\n\n )\n\n"
] |
[
1
] |
[] |
[] |
[
"list",
"r"
] |
stackoverflow_0074669538_list_r.txt
|
Q:
How to keep proxmox VM / CT up and running after a cluster node goes down in HA Cluster without VM/CT access loss even one second?
I've just configured a Proxmox HA cluster with Ceph (Monitor, Manager, OSD) on 3 nodes.
After a node goes down, the VM/CT switches perfectly to another node of the cluster. The problem is that it takes about 5 minutes to restore the VM/CT status after switching to another node.
So what I'd like to ask is: what else should I configure to keep the VM/CT up and running after switching to another node of the cluster, so that access to the VM/CT is not lost for even one second?
Thanks in advance!
A:
HA in Proxmox is not perfect; I stopped using it for this reason. You can run 3 independent containers and control traffic through a reverse proxy, firewall, or another external source.
|
How to keep proxmox VM / CT up and running after a cluster node goes down in HA Cluster without VM/CT access loss even one second?
|
i've just configured proxmox HA cluster / Ceph(Monitor, Manager, OSD) with 3 nodes.
After a node goes down, the VM /CT switches perfectly to another node of the cluster. But the problem is that it takes about 5 minutes to restore the VM / CT status after switching to another node.
So what i'd like to ask is : What should i configure else TO KEEP VM / CT UP AND RUNNIG AFTER SWITCHING TO another node of the cluster with the fact that no access to VM / CT gets lost even one second ???
Thanks in advance!
|
[
"HA in Proxmox is not perfect, I stopped using it because this reason. You can have 3 independent containers and control traffic thru reverse-proxy, firewall or other external source.\n"
] |
[
0
] |
[] |
[] |
[
"ceph",
"containers",
"high_availability",
"proxmox",
"virtual_machine"
] |
stackoverflow_0072212567_ceph_containers_high_availability_proxmox_virtual_machine.txt
|
Q:
Is it possible in Python to call a child from parent class without initializing the child?
I want to know if it is possible to create a parent class to handle some common logic, but have some specific logic in child classes and run it without initializing the child, as with an abstract class.
For example:
class Person:
def __init__(self, fname, lname, country):
self.firstname = fname
self.lastname = lname
self.country = country
if country == "US":
# Call class UnitedStates(Person)
        elif country == "CA":
# Call class Canada(Person)
def printCountry(self):
print(self.firstname + " " + self.lastname + " is from " + self.country)
class UnitedStates(Person):
def __init__(self):
super().country="United States"
pass
class Canada(Person):
def __init__(self):
super().country="Canada"
pass
x = Person("John", "Doe", "US")
x.printCountry()
y = Person("Jane", "Doe", "CA")
y.printCountry()
So in x I have "John Doe is from United States" and in y I have "Jane Doe is from Canada".
The reason I need this comes from some highly complex logic, and this is the easiest way to deal with it, so the sample above is a dummy version of what I need; otherwise I'll need to find the best workaround.
Thanks in advance.
|
Is it possible in Python to call a child from parent class without initializing the child?
|
I want to know if is it possible to create a parent class to handle some common logic, but have some specific logic in child classes and run it without initialize the child as it's in abstraction.
For example:
class Person:
def __init__(self, fname, lname, country):
self.firstname = fname
self.lastname = lname
self.country = country
if country == "US":
# Call class UnitedStates(Person)
else if country == "CA":
# Call class Canada(Person)
def printCountry(self):
print(self.firstname + " " + self.lastname + " is from " + self.country)
class UnitedStates(Person):
def __init__(self):
super().country="United States"
pass
class Canada(Person):
def __init__(self):
super().country="Canada"
pass
x = Person("John", "Doe", "US")
x.printCountry()
y = Person("Jane", "Doe", "CA")
y.printCountry()
So in x I have "John Doe is from United States" and in y I have "Jane Doe is from Canada".
The reason I need that come from a high complex logic and that's the easiest way to deal, so that sample is a dummy version of what I need, otherwise I'll need to find the best way for a "work around".
Thanks in advance.
|
[] |
[] |
[
"Yes, it is possible to create a parent class that has some common logic and child classes that have specific logic, and to call the child class methods without initializing an instance of the child class. However, the code you have provided will not work as you expect it to because it contains some errors and logical issues.\nHere is one way you could modify your code to achieve the behavior you want:\nclass Person:\n def __init__(self, fname, lname, country):\n self.firstname = fname\n self.lastname = lname\n self.country = country\n\n # Call the appropriate child class based on the country\n if country == \"US\":\n self.us = UnitedStates()\n else if country == \"CA\":\n self.ca = Canada()\n\n def printCountry(self):\n print(self.firstname + \" \" + self.lastname + \" is from \" + self.country)\n\nclass UnitedStates(Person):\n def __init__(self):\n super().__init__()\n self.country = \"United States\"\n\nclass Canada(Person):\n def __init__(self):\n super().__init__()\n self.country = \"Canada\"\n\n# Create an instance of the Person class and call the printCountry method\nx = Person(\"John\", \"Doe\", \"US\")\nx.printCountry()\n\n# Create another instance of the Person class and call the printCountry method\ny = Person(\"Jane\", \"Doe\", \"CA\")\ny.printCountry()\n\nIn this code, the Person class has a constructor method that takes three arguments: the first name, last name, and country of a person. Based on the country, it creates an instance of the appropriate child class (UnitedStates or Canada) and assigns it to the us or ca attribute of the Person instance. The printCountry method of the Person class then prints a message using the firstname, lastname, and country attributes of the Person instance.\nThe UnitedStates and Canada child classes each have a constructor method that calls the init method of the parent Person class, and then sets the country attribute to the appropriate value.\nWhen you create an instance of the Person class and call the printCountry method, the appropriate child class is called and the country attribute is set to the correct value before the message is printed.\n"
] |
[
-1
] |
[
"abstract_class",
"class_hierarchy",
"python",
"python_3.x"
] |
stackoverflow_0074669535_abstract_class_class_hierarchy_python_python_3.x.txt
|
Q:
Why is my matrix in my function and in main different values?
So I'm writing a simple C program that essentially creates a 2D array and allows the user to input values into the 2D array. Then other functions find the smallest and largest value within that array, as well as their position in the array. When I print the matrix in the function, it prints correctly as it should. However, whenever I print it in main as a test or try to access it in my other functions, my array goes from 1, 2, 3, 4, etc. to 1, 1, 1, 1. I used the same function in a previous code I wrote, and it worked just fine, so I'm kind of stumped. Also, I'm not allowed to modify main, I just put a simple loop to print the array there as a test. This is my first time posting here, so I apologize if my formatting is wrong. Any help would be greatly appreciated.
Here is my code:
#include <stdio.h>
#define ROWS 4
#define COLS 3
void generateMtx(int mtx[ROWS][COLS])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("Enter row %d, column %d: ", i, j);
scanf("%d", &mtx[i][j]);
}
}
//Test print in function
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("%d ", mtx[i][j]);
}
printf("\n");
}
}
int matrixSmallest(int arr[ROWS][COLS])
{
int smallest = arr[0][0];
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (smallest > arr[i][j])
{
smallest = arr[i][j];
}
}
}
return smallest;
}
int matrixLargest(int arr[ROWS][COLS])
{
int largest = arr[0][0];
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (largest < arr[i][j])
{
largest = arr[i][j];
}
}
}
return largest;
}
int elementPosition(int arr[ROWS][COLS], int num, int pos[2])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (arr[i][j] = num)
{
pos[0] = i;
pos[1] = j;
}
}
}
return pos[2];
}
int main()
{
int mtx[ROWS][COLS];
generateMtx(mtx);
int smallest = matrixSmallest(mtx);
int smallPosition[2] = {-1, -1};
elementPosition(mtx, smallest, smallPosition);
int largest = matrixLargest(mtx);
int largePosition[2] = {-1, -1};
elementPosition(mtx, largest, largePosition);
printf("Largest element: %d\n", largest);
printf(" found at row %d, column %d\n", largePosition[0], largePosition[1]);
printf("Smallest element: %d\n", smallest);
printf(" found at row %d, column %d\n", smallPosition[0], smallPosition[1]);
//Test print in main
//Can't modify main
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("%d ", mtx[i][j]);
}
printf("\n");
}
return 0;
}
Code for the same function I used on my previous problem:
#include <stdio.h>
#define ROWS 5
#define COLS 3
float generateMtx(float arr[ROWS][COLS])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("Enter row %d, column %d: ", i, j);
scanf("%f", &arr[i][j]);
}
}
printf("\n");
return arr[ROWS][COLS];
}
float columnAverages(float arr[ROWS][COLS], float colavg[COLS])
{
float sum = 0;
float avg = 0;
for (int i = 0; i < COLS; i++)
{
for (int j = 0; j < ROWS; j++)
{
sum += arr[j][i];
}
avg = sum/5.0;
colavg[i] = avg;
sum = 0;
}
return colavg[COLS];
}
float rowAverages(float arr[ROWS][COLS], float rowavg[ROWS])
{
float sum = 0;
float avg = 0;
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
sum += arr[i][j];
}
avg = sum/3.0;
rowavg[i] = avg;
sum = 0;
}
return rowavg[ROWS];
}
void regionAverage(float arr[ROWS][COLS], int top, int bottom, int left, int right)
{
printf("\n\nEnter top region boundary: ");
scanf("%d", &top);
printf("Enter bottom region boundary: ");
scanf("%d", &bottom);
printf("Enter left region boundary: ");
scanf("%d", &left);
printf("Enter right region boundary: ");
scanf("%d", &right);
float sum = 0;
float avg = 0;
float count = 0;
for (int i = top; i <= bottom; i++)
{
for (int j = left; j <= right; j++)
{
sum += arr[i][j];
count++;
}
}
avg = sum/count;
printf("Region average: %.1f", avg);
}
int main(void)
{
float mtx[ROWS][COLS];
generateMtx(mtx);
float rowavg[ROWS];
float colavg[COLS];
int top, bottom, left, right;
columnAverages(mtx, colavg);
rowAverages(mtx, rowavg);
printf(" ");
for (int c = 0; c < 3; c++)
{
printf("Col %d ", c);
}
printf("\n");
for (int i = 0; i < ROWS; i++)
{
printf("Row %d ", i);
for (int j = 0; j < COLS; j++)
{
printf("%8.1f", mtx[i][j]);
}
printf("\n");
}
printf("\n");
printf(" ");
for (int c = 0; c < 3; c++)
{
printf("Col %d ", c);
}
printf(" Avg");
printf("\n");
for (int i = 0; i < ROWS; i++)
{
printf("Row %d ", i);
for (int j = 0; j < COLS; j++)
{
printf("%8.1f", mtx[i][j]);
}
printf("%8.1f", rowavg[i]);
printf("\n");
}
printf(" Avg ");
for (int i = 0; i < COLS; i++)
{
printf("%8.1f", colavg[i]);
}
regionAverage(mtx, top, bottom, left, right);
return 0;
}
Output when printed from generateMTX function:
1 2 3
4 5 6
7 8 9
10 11 12
Output when printed from main:
1 1 1
1 1 1
1 1 1
1 1 1
A:
This is your Problem:
if (arr[i][j] = num)
You are overwriting the contents of your array in elementPosition
It should be:
if (arr[i][j] == num)
A:
As you wrote in your current code
int elementPosition(int arr[ROWS][COLS], int num, int pos[2])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (arr[i][j] = num)
{
pos[0] = i;
pos[1] = j;
}
}
}
return pos[2];
}
When you write if (arr[i][j] = num), it overwrites the value in the array, since = is the assignment operator while == is used for equality comparison.
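For completeness, a corrected version of the function could look like the sketch below; the bool return type is a choice made here for clarity, not something the assignment requires. Besides the == fix, note that the original return pos[2] reads one element past the two-element pos array, so it is better not to return that value.
#include <stdbool.h>

/* Finds num in arr; writes its row/column into pos and reports success. */
bool elementPosition(int arr[ROWS][COLS], int num, int pos[2])
{
    for (int i = 0; i < ROWS; i++)
    {
        for (int j = 0; j < COLS; j++)
        {
            if (arr[i][j] == num)   /* comparison, not assignment */
            {
                pos[0] = i;
                pos[1] = j;
                return true;        /* stop at the first match */
            }
        }
    }
    return false;                   /* num was not found */
}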
|
Why is my matrix in my function and in main different values?
|
So I'm writing a simple C program that essentially creates a 2D array and allows the user to input values into the 2D array. Then other functions find the smallest and largest value within that array, as well as their position in the array. When I print the matrix in the function, it prints correctly as it should. However, whenever I print it in main as a test or try to access it in my other functions, my array goes from 1, 2, 3, 4, etc. to 1, 1, 1, 1. I used the same function in a previous code I wrote, and it worked just fine, so I'm kind of stumped. Also, I'm not allowed to modify main, I just put a simple loop to print the array there as a test. This is my first time posting here, so I apologize if my formatting is wrong. Any help would be greatly appreciated.
Here is my code:
#include <stdio.h>
#define ROWS 4
#define COLS 3
void generateMtx(int mtx[ROWS][COLS])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("Enter row %d, column %d: ", i, j);
scanf("%d", &mtx[i][j]);
}
}
//Test print in function
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("%d ", mtx[i][j]);
}
printf("\n");
}
}
int matrixSmallest(int arr[ROWS][COLS])
{
int smallest = arr[0][0];
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (smallest > arr[i][j])
{
smallest = arr[i][j];
}
}
}
return smallest;
}
int matrixLargest(int arr[ROWS][COLS])
{
int largest = arr[0][0];
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (largest < arr[i][j])
{
largest = arr[i][j];
}
}
}
return largest;
}
int elementPosition(int arr[ROWS][COLS], int num, int pos[2])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
if (arr[i][j] = num)
{
pos[0] = i;
pos[1] = j;
}
}
}
return pos[2];
}
int main()
{
int mtx[ROWS][COLS];
generateMtx(mtx);
int smallest = matrixSmallest(mtx);
int smallPosition[2] = {-1, -1};
elementPosition(mtx, smallest, smallPosition);
int largest = matrixLargest(mtx);
int largePosition[2] = {-1, -1};
elementPosition(mtx, largest, largePosition);
printf("Largest element: %d\n", largest);
printf(" found at row %d, column %d\n", largePosition[0], largePosition[1]);
printf("Smallest element: %d\n", smallest);
printf(" found at row %d, column %d\n", smallPosition[0], smallPosition[1]);
//Test print in main
//Can't modify main
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("%d ", mtx[i][j]);
}
printf("\n");
}
return 0;
}
Code for the same function I used on my previous problem:
#include <stdio.h>
#define ROWS 5
#define COLS 3
float generateMtx(float arr[ROWS][COLS])
{
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
printf("Enter row %d, column %d: ", i, j);
scanf("%f", &arr[i][j]);
}
}
printf("\n");
return arr[ROWS][COLS];
}
float columnAverages(float arr[ROWS][COLS], float colavg[COLS])
{
float sum = 0;
float avg = 0;
for (int i = 0; i < COLS; i++)
{
for (int j = 0; j < ROWS; j++)
{
sum += arr[j][i];
}
avg = sum/5.0;
colavg[i] = avg;
sum = 0;
}
return colavg[COLS];
}
float rowAverages(float arr[ROWS][COLS], float rowavg[ROWS])
{
float sum = 0;
float avg = 0;
for (int i = 0; i < ROWS; i++)
{
for (int j = 0; j < COLS; j++)
{
sum += arr[i][j];
}
avg = sum/3.0;
rowavg[i] = avg;
sum = 0;
}
return rowavg[ROWS];
}
void regionAverage(float arr[ROWS][COLS], int top, int bottom, int left, int right)
{
printf("\n\nEnter top region boundary: ");
scanf("%d", &top);
printf("Enter bottom region boundary: ");
scanf("%d", &bottom);
printf("Enter left region boundary: ");
scanf("%d", &left);
printf("Enter right region boundary: ");
scanf("%d", &right);
float sum = 0;
float avg = 0;
float count = 0;
for (int i = top; i <= bottom; i++)
{
for (int j = left; j <= right; j++)
{
sum += arr[i][j];
count++;
}
}
avg = sum/count;
printf("Region average: %.1f", avg);
}
int main(void)
{
float mtx[ROWS][COLS];
generateMtx(mtx);
float rowavg[ROWS];
float colavg[COLS];
int top, bottom, left, right;
columnAverages(mtx, colavg);
rowAverages(mtx, rowavg);
printf(" ");
for (int c = 0; c < 3; c++)
{
printf("Col %d ", c);
}
printf("\n");
for (int i = 0; i < ROWS; i++)
{
printf("Row %d ", i);
for (int j = 0; j < COLS; j++)
{
printf("%8.1f", mtx[i][j]);
}
printf("\n");
}
printf("\n");
printf(" ");
for (int c = 0; c < 3; c++)
{
printf("Col %d ", c);
}
printf(" Avg");
printf("\n");
for (int i = 0; i < ROWS; i++)
{
printf("Row %d ", i);
for (int j = 0; j < COLS; j++)
{
printf("%8.1f", mtx[i][j]);
}
printf("%8.1f", rowavg[i]);
printf("\n");
}
printf(" Avg ");
for (int i = 0; i < COLS; i++)
{
printf("%8.1f", colavg[i]);
}
regionAverage(mtx, top, bottom, left, right);
return 0;
}
Output when printed from generateMTX function:
1 2 3
4 5 6
7 8 9
10 11 12
Output when printed from main:
1 1 1
1 1 1
1 1 1
1 1 1
|
[
"This is your Problem:\nif (arr[i][j] = num)\n\nYou are overwriting the contents of your array in elementPosition\nIt should be:\nif (arr[i][j] == num)\n\n",
"As you wrote in your current code\nint elementPosition(int arr[ROWS][COLS], int num, int pos[2])\n{\n for (int i = 0; i < ROWS; i++)\n {\n for (int j = 0; j < COLS; j++)\n {\n if (arr[i][j] = num)\n {\n pos[0] = i;\n pos[1] = j;\n }\n }\n }\n return pos[2];\n}\n\nWhen you write if (arr[i][j] = num) , it overwrites the value of the array since = is assigning operator and == is used for equal condition checking\n"
] |
[
2,
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0074669384_c.txt
|
Q:
How to make the line smoother with Path in SwiftUI?
When the user touches the screen, it will record the gesture as CGPoint and then display them with Path. But now the line is not smooth at the turning point. What should I do?
This is my code:
struct LineView: View {
@State var removeLine = false
@State var singleLineData = [CGPoint]()
var body: some View {
ZStack {
Rectangle()
.cornerRadius(20)
.opacity(0.1)
.shadow(color: .gray, radius: 4, x: 0, y: 2)
Path { path in
path.addLines(singleLineData)
}
.stroke(style: StrokeStyle(lineWidth: 2, lineCap: .round, lineJoin: .round))
}
.gesture(
DragGesture()
.onChanged { state in
if removeLine {
singleLineData.removeAll()
removeLine = false
}
singleLineData.append(state.location)
}
.onEnded { _ in
removeLine = true
}
)
.frame(width: 370, height: 500)
}
}
struct LineView_Previews: PreviewProvider {
static var previews: some View {
LineView()
}
}
A:
I'm back, and I have a solution to make the line smoother.
First I created a class that builds the Path γPS: I learned this from Mrs. Karin Prater's YouTube course ("How to make a drawing app with SwiftUI 3"); shout-out to her!γ
class DrawingEngine {
func createPath(for points: [CGPoint]) -> Path {
var path = Path()
if let firstPoint = points.first {
path.move(to: firstPoint)
}
for index in 1..<points.count {
let mid = calculateMidPoint(points[index-1], points[index])
path.addQuadCurve(to: mid, control: points[index - 1])
}
if let last = points.last {
path.addLine(to: last)
}
return path
}
func calculateMidPoint(_ p1: CGPoint, _ p2: CGPoint) -> CGPoint {
let newMidPoint = CGPoint(x: (p1.x+p2.x)/2, y: (p1.y+p2.y)/2)
return newMidPoint
}
}
Then I only record new state points whose straight-line distance from the previous point exceeds a certain value (10 or 20...):
.gesture(DragGesture()
.onChanged { state in
if removeLine {
removeLine = false
singleLineData = [CGPoint]()
}
var exceedsMinimumDistance: Bool {
return sqrt(pow((singleLineData[singleLineData.count-1].x - state.location.x), 2) + pow((singleLineData[singleLineData.count-1].y - state.location.y), 2)) > 20
}
if singleLineData.count == 0 {
singleLineData.append(state.location)
} else if exceedsMinimumDistance {
singleLineData.append(state.location)
}
}
.onEnded { _ in
removeLine = true
})
This is the full code of LineView:
struct LineView: View {
@State var removeLine = false
@State var singleLineData = [CGPoint]()
let engine = DrawingEngine()
let minimumDistance: CGFloat = 20
var body: some View {
ZStack {
Rectangle()
.cornerRadius(20)
.opacity(0.1)
.shadow(color: .gray, radius: 4, x: 0, y: 2)
if singleLineData.count != 0 {
Path { path in
path = engine.createPath(for: singleLineData)
}
.stroke(style: StrokeStyle(lineWidth: 4, lineCap: .round, lineJoin: .round))
}
}
.gesture(DragGesture()
.onChanged { state in
if removeLine {
removeLine = false
singleLineData = [CGPoint]()
}
var exceedsMinimumDistance: Bool {
return sqrt(pow((singleLineData[singleLineData.count-1].x - state.location.x), 2) + pow((singleLineData[singleLineData.count-1].y - state.location.y), 2)) > minimumDistance
}
if singleLineData.count == 0 {
singleLineData.append(state.location)
} else if exceedsMinimumDistance {
singleLineData.append(state.location)
}
}
.onEnded { _ in
removeLine = true
})
}
}
My English is not good; I hope this solution helps you ;)
|
How to make the line smoother with Path in SwiftUI?
|
When the user touches the screen, it will record the gesture as CGPoint and then display them with Path. But now the line is not smooth at the turning point. What should I do?
This is my code:
struct LineView: View {
@State var removeLine = false
@State var singleLineData = [CGPoint]()
var body: some View {
ZStack {
Rectangle()
.cornerRadius(20)
.opacity(0.1)
.shadow(color: .gray, radius: 4, x: 0, y: 2)
Path { path in
path.addLines(singleLineData)
}
.stroke(style: StrokeStyle(lineWidth: 2, lineCap: .round, lineJoin: .round))
}
.gesture(
DragGesture()
.onChanged { state in
if removeLine {
singleLineData.removeAll()
removeLine = false
}
singleLineData.append(state.location)
}
.onEnded { _ in
removeLine = true
}
)
.frame(width: 370, height: 500)
}
}
struct LineView_Previews: PreviewProvider {
static var previews: some View {
LineView()
}
}
|
[
"I'm back and I know a solution to make line smoother.\nFirst I created a class to return Path γPS: I learned this from Mrs. Karin Prater's youtube course(\"How to make a drawing app with SwiftUI 3\"), Shoutout to herγ\nclass DrawingEngine {\n func createPath(for points: [CGPoint]) -> Path {\n var path = Path()\n \n if let firstPoint = points.first {\n path.move(to: firstPoint)\n }\n \n for index in 1..<points.count {\n let mid = calculateMidPoint(points[index-1], points[index])\n \n path.addQuadCurve(to: mid, control: points[index - 1])\n }\n \n if let last = points.last {\n path.addLine(to: last)\n }\n \n return path\n }\n \n func calculateMidPoint(_ p1: CGPoint, _ p2: CGPoint) -> CGPoint {\n let newMidPoint = CGPoint(x: (p1.x+p2.x)/2, y: (p1.y+p2.y)/2)\n return newMidPoint\n }\n}\n\nThen I only record new state points whose linear distance from the previous point exceeds a certain value(10 or 20...)\n.gesture(DragGesture()\n .onChanged { state in\n \n if removeLine {\n removeLine = false\n singleLineData = [CGPoint]()\n }\n var exceedsMinimumDistance: Bool {\n return sqrt(pow((singleLineData[singleLineData.count-1].x - state.location.x), 2) + pow((singleLineData[singleLineData.count-1].y - state.location.y), 2)) > 20\n }\n \n if singleLineData.count == 0 {\n singleLineData.append(state.location)\n } else if exceedsMinimumDistance {\n singleLineData.append(state.location)\n }\n \n }\n .onEnded { _ in\n removeLine = true\n })\n\nThis is full code of LineView:\nstruct LineView: View {\n @State var removeLine = false\n @State var singleLineData = [CGPoint]()\n let engine = DrawingEngine()\n let minimumDistance: CGFloat = 20\n \n var body: some View {\n ZStack {\n Rectangle()\n .cornerRadius(20)\n .opacity(0.1)\n .shadow(color: .gray, radius: 4, x: 0, y: 2)\n \n if singleLineData.count != 0 {\n Path { path in\n path = engine.createPath(for: singleLineData)\n }\n .stroke(style: StrokeStyle(lineWidth: 4, lineCap: .round, lineJoin: .round))\n }\n \n }\n .gesture(DragGesture()\n .onChanged { state in\n \n if removeLine {\n removeLine = false\n singleLineData = [CGPoint]()\n }\n var exceedsMinimumDistance: Bool {\n return sqrt(pow((singleLineData[singleLineData.count-1].x - state.location.x), 2) + pow((singleLineData[singleLineData.count-1].y - state.location.y), 2)) > minimumDistance\n }\n \n if singleLineData.count == 0 {\n singleLineData.append(state.location)\n } else if exceedsMinimumDistance {\n singleLineData.append(state.location)\n }\n \n }\n .onEnded { _ in\n removeLine = true\n })\n }\n}\n\nMy English is not good, hope this solution could help you ;)\n"
] |
[
0
] |
[] |
[] |
[
"ios",
"swift",
"swiftui"
] |
stackoverflow_0072944068_ios_swift_swiftui.txt
|
Q:
How to efficiently access Microsoft.Maui.Devices.Sensor.Locations in SQL Server
This is more a design question so please bear with me.
I have a system that stores locations consisting of the ID, Longitude and Latitude.
I need to compare the distance between my current location and the locations in the database and only choose the ones that are within a certain distance.
I have the formula that calculates the distance between 2 locations based on the long/lat and that works great.
My issue is I may have tens of thousands of locations in the database and don't want to loop through them all every time I need a list of locations close by.
Not sure what other datapoint I can store with the location to make it so I only have to compare a smaller subset.
Thanks.
A:
As was mentioned in the comments, SQL Server has had support for geospatial since (iirc) SQL 2008. And I know that there is support within .NET for that as well so you should be able to define the data and query it from within your application.
Since the datatype is index-able, k nearest neighbor queries are pretty efficient. There's even a topic in the documentation for that use case. Doing a lift and shift from that page:
DECLARE @g geography = 'POINT(-121.626 47.8315)';
SELECT TOP(7) SpatialLocation.ToString(), City
FROM Person.Address
WHERE SpatialLocation.STDistance(@g) IS NOT NULL
ORDER BY SpatialLocation.STDistance(@g);
If you need all the points within a given radius rather than the nearest N, filter on STDistance instead of using the top clause.
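For example, keeping the same sample table, and remembering that geography distances are reported in metres:
DECLARE @g geography = 'POINT(-121.626 47.8315)';

SELECT SpatialLocation.ToString(), City
FROM Person.Address
WHERE SpatialLocation.STDistance(@g) <= 2000;  -- everything within 2 km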
A:
https://gis.stackexchange.com/ is a good place for in-depth advice on this topic.
A classic approach to quickly locating "nearby" values, is to "grid" the area of interest:
Associate each location with a "grid cell", where each cell is a convenient size. Pick a cell-edge-length such that most cells will hold a small number of values and/or that is similar to the distance range you typically query.
If cell edge is 1 km, and you need locations within 2 km, then get data from 5x5 cells centered at the "target" location.
This is guaranteed to include all data +- 2 km from any location within the central cell.
Apply distance formula to each returned location; some will be beyond 2 km.
I've only done this in memory, not from a DB. I think you would add two columns, one for the X cell number and one for the Y cell number,
with indexes on both of those, so you can efficiently fetch a range of Xs by a range of Ys.
I'm not sure whether a combined (X, Y) index helps or not.
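A minimal T-SQL sketch of that idea is below; the table and column names and the 0.01-degree cell size are assumptions (degrees are only a rough stand-in for a fixed distance, since the east-west size of a degree shrinks with latitude):
-- Grid columns derived from the coordinates, plus an index over them
ALTER TABLE dbo.Locations ADD
    CellX AS CAST(FLOOR(Longitude / 0.01) AS int) PERSISTED,
    CellY AS CAST(FLOOR(Latitude  / 0.01) AS int) PERSISTED;

CREATE INDEX IX_Locations_Cell ON dbo.Locations (CellX, CellY);

-- Candidate set: a 5x5 block of cells around the current position
DECLARE @myLong float = -121.626, @myLat float = 47.8315;
DECLARE @cx int = FLOOR(@myLong / 0.01);
DECLARE @cy int = FLOOR(@myLat  / 0.01);

SELECT Id, Longitude, Latitude
FROM dbo.Locations
WHERE CellX BETWEEN @cx - 2 AND @cx + 2
  AND CellY BETWEEN @cy - 2 AND @cy + 2;
-- then apply the exact distance formula only to this small candidate set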
|
How to efficiently access Microsoft.Maui.Devices.Sensor.Locations in SQL Server
|
This is more a design question so please bear with me.
I have a system that stores locations consisting of the ID, Longitude and Latitude.
I need to compare the distance between my current location and the locations in the database a only choose ones that are within a certain distance.
I have the formula that calculates the distance between 2 locations based on the long/lat and that works great.
My issue is I may have 10 of thousands of locations in the database and don't want to loop through them all every time I need a list of locations close by.
Not sure what other datapoint I can store with the location to make it so I only have to compare a smaller subset.
Thanks.
|
[
"As was mentioned in the comments, SQL Server has had support for geospatial since (iirc) SQL 2008. And I know that there is support within .NET for that as well so you should be able to define the data and query it from within your application.\nSince the datatype is index-able, k nearest neighbor queries are pretty efficient. There's even a topic in the documentation for that use case. Doing a lift and shift from that page:\nDECLARE @g geography = 'POINT(-121.626 47.8315)';\n\nSELECT TOP(7) SpatialLocation.ToString(), City\nFROM Person.Address \nWHERE SpatialLocation.STDistance(@g) IS NOT NULL \nORDER BY SpatialLocation.STDistance(@g); \n\nIf you need all the points within that radius, just omit the top clause.\n",
"https://gis.stackexchange.com/ is a good place for in-depth advice on this topic.\nA classic approach to quickly locating \"nearby\" values, is to \"grid\" the area of interest:\nAssociate each location with a \"grid cell\", where each cell is a convenient size. Pick a cell-edge-length such that most cells will hold a small number of values and/or that is similar to the distance range you typically query.\nIf cell edge is 1 km, and you need locations within 2 km, then get data from 5x5 cells centered at the \"target\" location.\nThis is guaranteed to include all data +- 2 km from any location within the central cell.\nApply distance formula to each returned location; some will be beyond 2 km.\nI've only done this in memory, not from a DB. I think you add two columns, one for X cell number, other for Y cell number.\nWith indexes on both of those. So can efficiently get a range of Xs by a range of Ys.\nNot sure if a combined \"X,Y\" index helps or not.\n"
] |
[
1,
0
] |
[] |
[] |
[
"geolocation",
"maui",
"sql_server"
] |
stackoverflow_0074663377_geolocation_maui_sql_server.txt
|
Q:
Plotting Scatter plot with different lines
Please I am trying to plot a scatter plot as shown in the attached image.
I have tried the below code but it is not working. This is in python by the way.
hours = [n / 3600 for n in seconds]
fig, ax = plt.subplots(figsize=(8, 6))
## Your code here
ax.plot(hours, fish_counts, marker="x")
ax.set_xlabel("Hours since low tide")
ax.set_ylabel("Jellyfish entering bay over 15 minutes")
ax.legend()
Attached image is how the output should look: https://i.stack.imgur.com/5KQiz.png
Thank you.
A:
To plot a scatter plot with the data you provided, you can use the scatter method instead of the plot method. Here is an example of how you could do this:
# import the necessary packages
import matplotlib.pyplot as plt
# define the data
hours = [n / 3600 for n in seconds]
fish_counts = [10, 12, 8, 11, 9, 15, 20, 22, 19, 25]
# create a figure and an axes
fig, ax = plt.subplots(figsize=(8, 6))
# plot the data as a scatter plot
ax.scatter(hours, fish_counts, marker="x")
# set the x-axis label
ax.set_xlabel("Hours since low tide")
# set the y-axis label
ax.set_ylabel("Jellyfish entering bay over 15 minutes")
# show the legend
ax.legend()
# show the plot
plt.show()
This code will create a scatter plot with the hours and fish_counts data, using the x marker to represent the data points. The x-axis will be labeled "Hours since low tide" and the y-axis will be labeled "Jellyfish entering bay over 15 minutes".
In this example, the scatter method takes the hours and fish_counts arrays as the first and second arguments, respectively. The marker argument is set to "x" to use the x marker for the data points.
You can also customize the appearance of the scatter plot by setting additional arguments to the scatter method. For example, you can use the color argument to set the color of the data points, or the s argument to set the size of the markers. Here is an example of how you could use these arguments:
# create a figure and an axes
fig, ax = plt.subplots(figsize=(8, 6))
# plot the data as a scatter plot with customized colors and marker sizes
ax.scatter(hours, fish_counts, marker="x", color="green", s=100)
# set the x-axis label
ax.set_xlabel("Hours since low tide")
# set the y-axis label
ax.set_ylabel("Jellyfish entering bay over 15 minutes")
# show the legend
ax.legend()
# show the plot
plt.show()
A:
To create a scatter plot in Python with the data and format shown in the image, you can use the following code:
hours = [n / 3600 for n in seconds]
fig, ax = plt.subplots(figsize=(8, 6))
ax.scatter(hours, fish_counts, marker="x", color="red")
ax.set_xlabel("Hours since low tide")
ax.set_ylabel("Jellyfish entering bay over 15 minutes")
ax.legend()
The key difference between this code and the code you provided is that it uses the scatter() method to create the scatter plot, instead of the plot() method. The scatter() method allows you to specify the marker style and color for the data points, which is necessary to match the format of the scatter plot in the image.
By using this code, you should be able to create a scatter plot that matches the format shown in the image.
A:
To add lines to your scatter plot, you can use the ax.plot() method. The first argument to this method should be the x-coordinates of the points on the line, and the second argument should be the y-coordinates of the points on the line. Here is an example:
# Set up the plot
fig, ax = plt.subplots(figsize=(8, 6))
# Add the scatter plot
ax.scatter(hours, fish_counts, marker="x")
# Add the lines
ax.plot([0, 24], [200, 200], color="green")
ax.plot([0, 24], [300, 300], color="orange")
ax.plot([0, 24], [400, 400], color="green")
# Add the axes labels
ax.set_xlabel("Hours since low tide")
ax.set_ylabel("Jellyfish entering bay over 15 minutes")
# Show the plot
plt.show()
In this code, we use ax.plot() to add three lines to the plot, each with different color and y-coordinate values. You can adjust the x-coordinates of the lines to position them as desired on the plot. You can also adjust the colors of the lines by passing a different color value to the color argument of ax.plot(). You can specify colors by their name (e.g. "green") or by their hex code (e.g. "#00ff00").
|
Plotting Scatter plot with different lines
|
Please I am trying to plot a scatter plot as shown in the attached image.
I have tried the below code but it is not working. This is in python by the way.
hours = [n / 3600 for n in seconds]
fig, ax = plt.subplots(figsize=(8, 6))
## Your code here
ax.plot(hours, fish_counts, marker="x")
ax.set_xlabel("Hours since low tide")
ax.set_ylabel("Jellyfish entering bay over 15 minutes")
ax.legend()[![enter image description here][1]][1]
Attached image is how the output should look. Thank you.
[1]: https://i.stack.imgur.com/5KQiz.png
|
[
"To plot a scatter plot with the data you provided, you can use the scatter method instead of the plot method. Here is an example of how you could do this:\n# import the necessary packages\nimport matplotlib.pyplot as plt\n\n# define the data\nhours = [n / 3600 for n in seconds]\nfish_counts = [10, 12, 8, 11, 9, 15, 20, 22, 19, 25]\n\n# create a figure and an axes\nfig, ax = plt.subplots(figsize=(8, 6))\n\n# plot the data as a scatter plot\nax.scatter(hours, fish_counts, marker=\"x\")\n\n# set the x-axis label\nax.set_xlabel(\"Hours since low tide\")\n\n# set the y-axis label\nax.set_ylabel(\"Jellyfish entering bay over 15 minutes\")\n\n# show the legend\nax.legend()\n\n# show the plot\nplt.show()\n\nThis code will create a scatter plot with the hours and fish_counts data, using the x marker to represent the data points. The x-axis will be labeled \"Hours since low tide\" and the y-axis will be labeled \"Jellyfish entering bay over 15 minutes\".\nIn this example, the scatter method takes the hours and fish_counts arrays as the first and second arguments, respectively. The marker argument is set to \"x\" to use the x marker for the data points.\nYou can also customize the appearance of the scatter plot by setting additional arguments to the scatter method. For example, you can use the color argument to set the color of the data points, or the s argument to set the size of the markers. Here is an example of how you could use these arguments:\n# create a figure and an axes\nfig, ax = plt.subplots(figsize=(8, 6))\n\n# plot the data as a scatter plot with customized colors and marker sizes\nax.scatter(hours, fish_counts, marker=\"x\", color=\"green\", s=100)\n\n# set the x-axis label\nax.set_xlabel(\"Hours since low tide\")\n\n# set the y-axis label\nax.set_ylabel(\"Jellyfish entering bay over 15 minutes\")\n\n# show the legend\nax.legend()\n\n# show the plot\nplt.show()\n\n",
"To create a scatter plot in Python with the data and format shown in the image, you can use the following code:\nhours = [n / 3600 for n in seconds]\nfig, ax = plt.subplots(figsize=(8, 6))\nax.scatter(hours, fish_counts, marker=\"x\", color=\"red\")\nax.set_xlabel(\"Hours since low tide\")\nax.set_ylabel(\"Jellyfish entering bay over 15 minutes\")\nax.legend()\n\nThe key difference between this code and the code you provided is that it uses the scatter() method to create the scatter plot, instead of the plot() method. The scatter() method allows you to specify the marker style and color for the data points, which is necessary to match the format of the scatter plot in the image.\nBy using this code, you should be able to create a scatter plot that matches the format shown in the image.\n",
"To add lines to your scatter plot, you can use the ax.plot() method. The first argument to this method should be the x-coordinates of the points on the line, and the second argument should be the y-coordinates of the points on the line. Here is an example:\n# Set up the plot\nfig, ax = plt.subplots(figsize=(8, 6))\n\n# Add the scatter plot\nax.scatter(hours, fish_counts, marker=\"x\")\n\n# Add the lines\nax.plot([0, 24], [200, 200], color=\"green\")\nax.plot([0, 24], [300, 300], color=\"orange\")\nax.plot([0, 24], [400, 400], color=\"green\")\n\n# Add the axes labels\nax.set_xlabel(\"Hours since low tide\")\nax.set_ylabel(\"Jellyfish entering bay over 15 minutes\")\n\n# Show the plot\nplt.show()\n\nIn this code, we use ax.plot() to add three lines to the plot, each with different color and y-coordinate values. You can adjust the x-coordinates of the lines to position them as desired on the plot. You can also adjust the colors of the lines by passing a different color value to the color argument of ax.plot(). You can specify colors by their name (e.g. \"green\") or by their hex code (e.g. \"#00ff00\").\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"matplotlib",
"python"
] |
stackoverflow_0074668688_matplotlib_python.txt
|
Q:
running composer install in Dockerfile
I am trying to dockerize a PHP laravel app. I am using a PHP and a composer image to achieve this. However, when I run composer install, I get all my packages installed but then run into this error:
/app/vendor does not exist and could not be created.
I want composer to create the /vendor directory! Could this be a permission issue?
Here is my Dockerfile:
FROM php:7.4.3-cli
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
COPY --from=composer:2.4.4 /usr/bin/composer /usr/local/bin/composer
# Set working directory
WORKDIR /app
COPY . .
# Add a new user "john" with user id 8877
RUN useradd -u 8877 john
# Change to non-root privilege
USER john
RUN composer install
I created a user with an arbitrary ID since it's a bad practice to run composer install as root security-wise.
A:
It looks to me like the issue is that the composer install command is being run by the non-root user named "john", but the directory named "vendor" is owned by the root user.
When the "john" user tries to write to the "vendor" directory, it doesn't have permission to do so, because the directory is owned by root and neither the group nor other users have write permission on it (that is, files and directories cannot be created inside it).
One possible solution is to create the vendor directory before running the composer install command, and give non-root users (like the john user) permission to write to it.
You can do this by adding the following lines to your Dockerfile before running composer install:
# Create the vendor directory
RUN mkdir -p /app/vendor
# Give the john user permission to write to the vendor directory
RUN chown john:john /app/vendor
With these instructions added, the "vendor" directory will be created and owned by the "john" user, so the composer install command should be able to write to it (root can write to it as well, since root bypasses permission checks).
Another possible solution is to run the composer install command as the root user, but this is not recommended and can have, as always with root, significant security implications.
Instead, use the --no-plugins and --no-scripts flags (a.k.a. command-line switches or options) when running composer install to prevent it from running any potentially insecure scripts or plugins (example follows). Done this way, you can still run the command as the "john" user without compromising security.
# Run composer install as the john user, without running any scripts or plugins
RUN composer install --no-plugins --no-scripts
A:
I was able to solve the problem by making some changes to my Dockerfile:
FROM php:7.4.3-cli
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
COPY --from=composer:2.4.4 /usr/bin/composer /usr/local/bin/composer
# Add a new user "john" with user id 8877
RUN useradd -u 8877 john
# Set working directory
WORKDIR /app
COPY . .
RUN chmod -R 775 /app
RUN chown -R john:john /app
# Change to non-root privilege
USER john
RUN composer install --no-scripts --no-plugins
|
running composer install in Dockerfile
|
I am trying to dockerize a PHP laravel app. I am using a PHP and a composer image to achieve this. However, when I run composer install, I get all my packages installed but then run into this error:
/app/vendor does not exist and could not be created.
I want composer to create the /vendor directory! Could this be a permission issue?
Here is my Dockerfile:
FROM php:7.4.3-cli
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
COPY --from=composer:2.4.4 /usr/bin/composer /usr/local/bin/composer
# Set working directory
WORKDIR /app
COPY . .
# Add a new user "john" with user id 8877
RUN useradd -u 8877 john
# Change to non-root privilege
USER john
RUN composer install
I created a user with an arbitrary ID since it's a bad practice to run composer install as root security-wise.
|
[
"It looks to me the issue is that the composer install command is running by the non-root user named \"john\", but the directory named \"vendor\" is owned by the root user.\nWhen the \"john\" user tries to write to the \"vendor\" directory, it doesn't have permission to do so, because it is owned by the root user and the groups' or other users don't have permission to write into it (that is: files and directories can not be created).\nOne possible solution is to create the vendor directory before running the composer install command, and give non-root users (like the john user) permission to write to it.\nYou can do this by adding the following five lines to your Dockerfile before running composer install:\n# Create the vendor directory\nRUN mkdir -p /app/vendor\n\n# Give the john user permission to write to the vendor directory\nRUN chown john:john /app/vendor\n\nWith these instructions added, the directory named \"vendor\" will be created and owned by the user named \"john \", so the composer install command should be able to write to it (root can, too, because root).\nAnother possible solution is to run the composer install command as the root user, but this is not recommended for various reasons and can have - as always with root - relevant security implications.\nInstead, use the --no-plugins and --no-scripts flags (a.k.a command-line switches or options) when running composer install to prevent it from running any potentially insecure scripts or plugins (example follows). Done that way, you can still run the command as the user named \"john\" without compromising security.\n# Run composer install as the john user, without running any scripts or plugins\nRUN composer install --no-plugins --no-scripts\n\n",
"I was able to solve the problem by making some changes to my Dockerfile:\nFROM php:7.4.3-cli\n\n# Install system dependencies\nRUN apt-get update && apt-get install -y \\\n git \\\n curl \\\n libpng-dev \\\n libonig-dev \\\n libxml2-dev \\\n zip \\\n unzip\n\n# Clear cache\nRUN apt-get clean && rm -rf /var/lib/apt/lists/*\n\n# Install PHP extensions\nRUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd\n\nCOPY --from=composer:2.4.4 /usr/bin/composer /usr/local/bin/composer\n\n# Add a new user \"john\" with user id 8877\nRUN useradd -u 8877 john\n\n# Set working directory\nWORKDIR /app\nCOPY . . \n\nRUN chmod -R 775 /app\nRUN chown -R john:john /app\n\n# Change to non-root privilege\nUSER john\n\nRUN composer install --no-scripts --no-plugins\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"composer_php",
"docker",
"laravel",
"laravel_artisan",
"php"
] |
stackoverflow_0074666126_composer_php_docker_laravel_laravel_artisan_php.txt
|
Q:
Introducing probabilities to patches to replace each-other
I want to create a model which simulates cell replication in human tissues. To do this I will only be working with patches and not turtles.
A key concept in cell replication is fitness. Fitness, in simplified terms, is how 'strong' a cell is when it comes to replacing the cell next to it.
Initially I created a tissue-like simulation where each color is a cell type with a fixed fitness of 100. Then I introduced a mutated cell whose fitness ranges from 90 to 110. What I want to do now is introduce probabilities for cell replication based on different fitness values.
So if we have 2 cells next to each other, one with fitness 95 and the other with fitness 100, I want code that says the cell with fitness 100 has a 75% chance to replace the cell with fitness 95. Of course this should apply across the whole 90-110 range, and the probability will depend on the fitness values of the neighbouring cells.
patches-own [ fitness ]
to setup
clear-all
setup-patches
reset-ticks
end
to setup-patches
ask patches ;; randomly set the patches' colors
[ set fitness 100
set pcolor (random colors) * 10 + 5
if pcolor = 75 ;; 75 is too close to another color so change it to 125
[ set pcolor 125 ] ]
end
to go
if (variance [pcolor] of patches) = 0
[ stop ]
ask patches [
;; each patch randomly picks a neighboring patch
;; to copy a color from
set pcolor [pcolor] of one-of neighbors
set fitness [fitness] of one-of neighbors
if fitness > 100
[set pcolor 65]
]
tick
end
to mutate
;let mutateSet [patches with [ pcolor = 125]]
ask patches
[
if ( (random-float 1) < 0.05 ) [
set pcolor 65
set fitness ((random 20) + 90)
]
]
end
This is what I have so far, and I cannot figure out how to introduce this probability parameter accordingly inside the go section. I saw somewhere the rnd function helps with probabilities, but it was using turtles and not patches.
A:
One very important tip I want to give you is to think about the stochasticity and scheduling in your model. Currently your agents take their action one at a time, with the order within each tick being randomised. This means that the order in which the patches change their pcolor has an influence on the outcome.
A way to circumvent this is to use two ask blocks: the first lets each patch choose whether or not it wants to change, and the second actually does the changing. That way they all choose before any of them change.
The segregation model is a good example of that (it uses turtles but that doesn't make any important difference).
This choosing part (probably a separate procedure that you write) is where the magic happens. You can have each patch check their own fitness and the fitness of all nearby patches ([fitness] of neighbors). When you have these fitness values, you can use them to calculate the probabilities that you want (which depends completely on what you are trying to model).
When you have all your probabilities, you can use one of various methods to determine which one is randomly picked. I'm not going to write this out as there are already numerous examples of this exact thing on stackoverflow:
Multiple mutually exclusive events and probabilities in netlogo
In NetLogo how do use random-float with known percentage chances?
Netlogo - selecting a value from a weighted list based on a randomly drawn number
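Although the model above is written in NetLogo, the weighted-pick idea described in those links is language-agnostic. Here is a minimal Python sketch of it; the fitness values and the fitness-to-weight mapping below are made-up assumptions for illustration, not part of the original model:
import random

# Hypothetical fitness values of a focal cell's eight neighbours.
neighbour_fitness = [95, 100, 100, 110, 90, 105, 100, 95]

# Turn fitness into selection weights; this simple shift is only an assumption,
# the real mapping should come from whatever biology you want to model.
weights = [f - 89 for f in neighbour_fitness]

# Pick one neighbour at random, with probability proportional to its weight.
chosen = random.choices(neighbour_fitness, weights=weights, k=1)[0]
print("replicating neighbour has fitness", chosen)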
|
Introducing probabilities to patches to replace each-other
|
I want to create a model which simulates cell replication in human tissues. To do this I will only be working with patches and not turtles.
A key concept in cell replication is fitness. Fitness, in simplified terms, is how 'strong' a cell is when it comes to replacing the cell next to it.
Initially I created a tissue-like simulation where each color is a cell type with a fixed fitness of 100. Then I introduced a mutated cell whose fitness ranges from 90 to 110. What I want to do now is introduce probabilities for cell replication based on different fitness values.
So if we have 2 cells next to each other, one with fitness 95 and the other with fitness 100, I want code that says the cell with fitness 100 has a 75% chance to replace the cell with fitness 95. Of course this should apply across the whole 90-110 range, and the probability will depend on the fitness values of the neighbouring cells.
patches-own [ fitness ]
to setup
clear-all
setup-patches
reset-ticks
end
to setup-patches
ask patches ;; randomly set the patches' colors
[ set fitness 100
set pcolor (random colors) * 10 + 5
if pcolor = 75 ;; 75 is too close to another color so change it to 125
[ set pcolor 125 ] ]
end
to go
if (variance [pcolor] of patches) = 0
[ stop ]
ask patches [
;; each patch randomly picks a neighboring patch
;; to copy a color from
set pcolor [pcolor] of one-of neighbors
set fitness [fitness] of one-of neighbors
if fitness > 100
[set pcolor 65]
]
tick
end
to mutate
;let mutateSet [patches with [ pcolor = 125]]
ask patches
[
if ( (random-float 1) < 0.05 ) [
set pcolor 65
set fitness ((random 20) + 90)
]
]
end
This is what I have so far, and I cannot figure out how to introduce this probability parameter accordingly inside the go section. I saw somewhere the rnd function helps with probabilities, but it was using turtles and not patches.
|
[
"One very important tip I want to give you is to think about the stochasticity and scheduling in your model. Currently your agents take their action one at a time, with the order within each tick being randomised. This means that the order in which the patches change their pcolor has an influence on the outcome.\nA way to circumvent this is to ask turtles twice. The first one lets each patch choose whether or not they want to change, the second ask actually does the changing. That way they all choose before any of them change.\nThe segregation model is a good example of that (it uses turtles but that doesn't make any important difference).\nThis choosing part (probably a separate procedure that you write) is where the magic happens. You can have each patch check their own fitness and the fitness of all nearby patches ([fitness] of neighbors). When you have these fitness values, you can use them to calculate the probabilities that you want (which depends completely on what you are trying to model).\nWhen you have all your probabilities, you can use one of various methods to determine which one is randomly picked. I'm not going to write this out as there are already numerous examples of this exact thing on stackoverflow:\n\nMultiple mutually exclusive events and probabilities in netlogo\nIn NetLogo how do use random-float with known percentage chances?\nNetlogo - selecting a value from a weighted list based on a randomly drawn number\n\n"
] |
[
1
] |
[] |
[] |
[
"netlogo",
"patch",
"probability"
] |
stackoverflow_0074668996_netlogo_patch_probability.txt
|
Q:
net::ERR_CONNECTION_REFUSED using Laravel 9, ReactJs with vite js
I'm trying to build an app using Laravel 9 and ReactJS with vite js. I tried the following command to build:
npm run dev
But I'm getting the following errors:
GET http://[::1]:5173/resources/css/app.css net::ERR_CONNECTION_REFUSED
GET http://[::1]:5173/@vite/client net::ERR_CONNECTION_REFUSED
GET http://[::1]:5173/resources/js/app.jsx net::ERR_CONNECTION_REFUSED
GET http://[::1]:5173/@react-refresh net::ERR_CONNECTION_REFUSED
A:
This means that your assets are not built yet, use npm run build.
A:
I think I may have found a solution with the build option called Rollup.
When building for production, Rollup will remove unused code. In this process it will bundle the required assets and reference them in accordance with the URL that you would be using at that moment.
To fix it, you could try this:
export default defineConfig({
build: {
rollupOptions: {}
}
})
I was helped by a similar issue posted on Github so maybe you could use that as a point of reference.
Here is the Discussion
A:
Build the assets before uploading your project to a live server or cPanel.
You can use this command: npm run build
You will see some changes in your project folder after the assets are built.
The built asset files will end up in this folder: public->build
|
net::ERR_CONNECTION_REFUSED using Laravel 9, ReactJs with vite js
|
I'm trying to build an app using Laravel 9 and ReactJS with vite js. I tried the following command to build:
npm run dev
But I'm getting the following errors:
GET http://[::1]:5173/resources/css/app.css net::ERR_CONNECTION_REFUSED
GET http://[::1]:5173/@vite/client net::ERR_CONNECTION_REFUSED
GET http://[::1]:5173/resources/js/app.jsx net::ERR_CONNECTION_REFUSED
GET http://[::1]:5173/@react-refresh net::ERR_CONNECTION_REFUSED
|
[
"This means that your assets are not built yet, use npm run build.\n",
"I think I may have found a solution with the build option called Rollup.\nWhen building in production, rollup will remove unused code. In this process it will bundle the required assets and reference them in accordance to the URL that you would be using at that current moment.\nTo fixed it, you could try this:\n\n\nexport default defineConfig({\n build: {\n rollupOptions: {}\n }\n })\n\n\n\nI was helped by a similar issue posted on Github so maybe you could use that as a point of reference.\nHere is the Discussion\n",
"build assets before uploading your project in live server or cpanel\nyou can use this code - npm run build\nyou can see some change in your project folder after build assets file.\nassets file will come to this folder - public->build\n"
] |
[
1,
0,
0
] |
[
"Local is working fine for me, try adding this to vite.config.js file:\nserver: { cors: false },\n\nor (try adding and mixing all values)\nserver: { https: false, cors: false, hmr: false, port: 8000 },\n\nWorked for me when building to prod using npm run build (with vue btw).\nAdd this to your .env file if you still have problems with the port.\nASSET_URL=http://yoururl:port\n\n"
] |
[
-2
] |
[
"laravel",
"npm",
"php",
"reactjs",
"vite"
] |
stackoverflow_0073783480_laravel_npm_php_reactjs_vite.txt
|
Q:
How to get back indices and column labels after groupby and apply operations?
Example:
df = pd.DataFrame(data=np.arange(0, 12).reshape(3, 4))
df.index = ["a", "b", "c"]
df.columns = [["d", "d", "e", "e"], ["f", "g", "f", "g"]]
df.columns.names = ["L1", "L2"]
df.groupby(level="L1", axis=1).apply(
lambda x: scipy.stats.ttest_1samp(x, axis=1, popmean=0).pvalue
)
This returns:
L1
d [0.49999999999999956, 0.07044657495455454, 0.0...
e [0.12566591637800234, 0.0488745039443948, 0.03...
dtype: object
since scipy.stats.ttest_1samp outputs an object with numpy arrays. But I would like to convert it back to a DataFrame with the correct indexes ['a', 'b', 'c']
A:
Here is one way to do it with Pandas T property:
series = df.groupby(level="L1", axis=1).apply(
lambda x: scipy.stats.ttest_1samp(x, axis=1, popmean=0).pvalue
)
new_df = pd.DataFrame(series.tolist(), index=series.index, columns=df.index).T
Then:
L1 d e
a 0.500000 0.125666
b 0.070447 0.048875
c 0.037405 0.030292
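For completeness, a self-contained version of the same approach with a quick sanity check on the resulting shape (a sketch; it simply re-runs the snippets above end to end):
import numpy as np
import pandas as pd
import scipy.stats

df = pd.DataFrame(data=np.arange(0, 12).reshape(3, 4))
df.index = ["a", "b", "c"]
df.columns = pd.MultiIndex.from_arrays(
    [["d", "d", "e", "e"], ["f", "g", "f", "g"]], names=["L1", "L2"]
)

series = df.groupby(level="L1", axis=1).apply(
    lambda x: scipy.stats.ttest_1samp(x, axis=1, popmean=0).pvalue
)
new_df = pd.DataFrame(series.tolist(), index=series.index, columns=df.index).T

assert list(new_df.index) == ["a", "b", "c"]  # original row labels restored
assert list(new_df.columns) == ["d", "e"]     # one column per "L1" group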
|
How to get back indices and column labels after groupby and apply operations?
|
Example:
df = pd.DataFrame(data=np.arange(0, 12).reshape(3, 4))
df.index = ["a", "b", "c"]
df.columns = [["d", "d", "e", "e"], ["f", "g", "f", "g"]]
df.columns.names = ["L1", "L2"]
df.groupby(level="L1", axis=1).apply(
lambda x: scipy.stats.ttest_1samp(x, axis=1, popmean=0).pvalue
)
This returns:
L1
d [0.49999999999999956, 0.07044657495455454, 0.0...
e [0.12566591637800234, 0.0488745039443948, 0.03...
dtype: object
since scipy.stats.ttest_1samp outputs an object with numpy arrays. But I would like to convert it back to a DataFrame with the correct indexes ['a', 'b', 'c']
|
[
"Here is one way to do it with Pandas T property:\nseries = df.groupby(level=\"L1\", axis=1).apply(\n lambda x: scipy.stats.ttest_1samp(x, axis=1, popmean=0).pvalue\n)\n\nnew_df = pd.DataFrame(series.tolist(), index=series.index, columns=df.index).T\n\nThen:\nL1 d e\na 0.500000 0.125666\nb 0.070447 0.048875\nc 0.037405 0.030292\n\n"
] |
[
0
] |
[] |
[] |
[
"group_by",
"pandas"
] |
stackoverflow_0074620237_group_by_pandas.txt
|
Q:
Create multiple excel files keeping only specific values in column A from a master sheet
I am really struggling to create a macro that, from a master Excel file, creates multiple Excel files based on the values in the first column. More specifically, I have some categories in column "A", and based on all the categories (ITT1, ITT2, ITT3, ITT4 and ITT5) I would like to create multiple Excel files, each containing the sheet with just one category. At the moment, I have been able to save just one file with one category, but I cannot do it with multiple. Could you kindly help me please? I am stuck.
Sub Split()
Dim location As String
location = "Z:\Incent_2022\ORDINARIA\RETAIL-WHS\Andamento\Q4\Andamento\Novembre\And. Inc Q4_ITT1.xlsm"
ActiveWorkbook.SaveAs Filename:=location, FileFormat:=52
With ActiveSheet
Const FirstRow As Long = 6
Dim LastRow As Long
LastRow = .Cells(.Rows.Count, "A").End(xlUp).Row ' get last used row in column A
Dim Row As Long
For Row = LastRow To FirstRow Step -1
If Not .Range("A" & Row).Value = "ITT1" Then
.Range("A" & Row).EntireRow.Delete
End If
Next Row
End With
ActiveWorkbook.Close SaveChanges:=True
End Sub
A:
This is working for me perfectly. There are a few things you will need to change to fit your sheet.
Option Explicit
Sub Export_Files()
Dim I As Long
Dim lRow As Long
Dim SaveLoc As String
Dim OutWB As Workbook
Dim TypeList
Dim TypeRG As Range
' > Create Unique List of Used Types
lRow = Range("A" & Rows.Count).End(xlUp).Row
Set TypeRG = Sheet1.Range("A2:A" & lRow)
TypeList = Application.WorksheetFunction.Unique(TypeRG)
' > My Directory
SaveLoc = "C:\Users\cameron\Documents\temp\"
' >
For I = 1 To UBound(TypeList, 1)
'Create File:
Set OutWB = Workbooks.Add
OutWB.SaveAs SaveLoc & TypeList(I, 1)
'Transfer Data to file:
Sheet1.Range("A1:E" & lRow).AutoFilter Field:=1, Criteria1:=TypeList(I, 1)
Sheet1.Range("A1:E" & lRow).SpecialCells(xlCellTypeVisible).Copy
OutWB.Worksheets(1).Paste
OutWB.Save
OutWB.Close
Next I
End Sub
To Change:
SaveLoc - to your preferred directory
The TypeRG range if yours is not in A Column (also your lRow maybe)
your autofilter range if your data range is larger than mine.
Example of my data:
A:
Export Split Data
Sub ExportSplitData()
' Define constants.
Const SRC_NAME As String = "Sheet1"
Const SRC_FIRST_CELL As String = "A5"
Const SRC_CRITERIA_COLUMN As Long = 1
Const DST_FOLDER As String _
= "Z:\Incent_2022\ORDINARIA\RETAIL-WHS\Andamento\Q4\Andamento\Novembre\"
Const DST_NAME_LEFT As String = "And. Inc Q4_"
Const DST_EXTENSION As String = ".xlsm"
' Reference the Source worksheet.
Dim swb As Workbook: Set swb = ThisWorkbook ' workbook containing this code
Dim sws As Worksheet: Set sws = swb.Sheets(SRC_NAME)
Application.ScreenUpdating = False
' To leave the source workbook intact, export the worksheet
' to a new (helper) workbook and reference the range (there).
sws.Copy
Dim hwb As Workbook: Set hwb = Workbooks(Workbooks.Count)
Dim hws As Worksheet: Set hws = hwb.Sheets(SRC_NAME)
If hws.FilterMode Then hws.ShowAllData
Dim hfCell As Range: Set hfCell = hws.Range(SRC_FIRST_CELL)
Dim hrg As Range, hdrg As Range, hfrrg As Range, hrCount As Long
With hws.UsedRange
Set hfrrg = Intersect(hfCell.EntireRow, .Cells)
Set hrg = hfrrg.Resize(.Rows.Count + .Row - hfrrg.Row)
hrCount = hrg.Rows.Count
Set hdrg = hrg.Resize(hrCount - 1).Offset(1) ' no headers
End With
' Sort the range by the criteria column.
hrg.Sort hrg.Columns(SRC_CRITERIA_COLUMN), xlAscending, , , , , , xlYes
' Write the unique values from the criteria column to a dictionary.
Dim hData() As Variant: hData = hdrg.Columns(SRC_CRITERIA_COLUMN).Value
Dim dict As Object: Set dict = CreateObject("Scripting.Dictionary")
dict.CompareMode = vbTextCompare
Dim r As Long
For r = 1 To hrCount - 1
If Len(CStr(hData(r, 1))) > 0 Then
dict(hData(r, 1)) = Empty
End If
Next r
' Loop through the keys of the dictionary and export
' the sorted helper worksheet to be processed in yet another file,
' the destination workbook.
Dim dwb As Workbook, dws As Worksheet, drg As Range, ddrg As Range
Dim rKey As Variant, dFilePath As String
For Each rKey In dict.Keys
hws.Copy
Set dwb = Workbooks(Workbooks.Count)
Set dws = dwb.Sheets(SRC_NAME)
Set drg = dws.Range(hrg.Address) ' has headers
Set ddrg = dws.Range(hdrg.Address) ' no headers
drg.AutoFilter SRC_CRITERIA_COLUMN, "<>" & rKey ' filter
ddrg.SpecialCells(xlCellTypeVisible).Delete xlShiftUp ' delete
dws.AutoFilterMode = False ' turn off filter
dFilePath = DST_FOLDER & DST_NAME_LEFT & rKey & DST_EXTENSION
Application.DisplayAlerts = False
dwb.SaveAs dFilePath, xlOpenXMLWorkbookMacroEnabled
Application.DisplayAlerts = True
dwb.Close SaveChanges:=False
Next rKey
' Close the helper file.
hwb.Close SaveChanges:=False
Application.ScreenUpdating = True
' Inform.
MsgBox "Split data exported.", vbInformation
End Sub
|
Create multiple excel files keeping only specific values in column A from a master sheet
|
I am really struggling to create a macro that, from a master Excel file, creates multiple Excel files based on the values in the first column. More specifically, I have some categories in column "A", and based on all the categories (ITT1, ITT2, ITT3, ITT4 and ITT5) I would like to create multiple Excel files, each containing the sheet with just one category. At the moment, I have been able to save just one file with one category, but I cannot do it with multiple. Could you kindly help me please? I am stuck.
Sub Split()
Dim location As String
location = "Z:\Incent_2022\ORDINARIA\RETAIL-WHS\Andamento\Q4\Andamento\Novembre\And. Inc Q4_ITT1.xlsm"
ActiveWorkbook.SaveAs Filename:=location, FileFormat:=52
With ActiveSheet
Const FirstRow As Long = 6
Dim LastRow As Long
LastRow = .Cells(.Rows.Count, "A").End(xlUp).Row ' get last used row in column A
Dim Row As Long
For Row = LastRow To FirstRow Step -1
If Not .Range("A" & Row).Value = "ITT1" Then
.Range("A" & Row).EntireRow.Delete
End If
Next Row
End With
ActiveWorkbook.Close SaveChanges:=True
End Sub
|
[
"This is working for me perfectly. There are a few things you will need to change to fit your sheet.\nOption Explicit\n\nSub Export_Files()\n \n Dim I As Long\n Dim lRow As Long\n Dim SaveLoc As String\n Dim OutWB As Workbook\n Dim TypeList\n Dim TypeRG As Range\n \n ' > Create Unique List of Used Types\n lRow = Range(\"A\" & Rows.Count).End(xlUp).Row\n Set TypeRG = Sheet1.Range(\"A2:A\" & lRow)\n TypeList = Application.WorksheetFunction.Unique(TypeRG)\n \n ' > My Directory\n SaveLoc = \"C:\\Users\\cameron\\Documents\\temp\\\"\n \n ' >\n For I = 1 To UBound(TypeList, 1)\n 'Create File:\n Set OutWB = Workbooks.Add\n OutWB.SaveAs SaveLoc & TypeList(I, 1)\n \n 'Transfer Data to file:\n Sheet1.Range(\"A1:E\" & lRow).AutoFilter Field:=1, Criteria1:=TypeList(I, 1)\n Sheet1.Range(\"A1:E\" & lRow).SpecialCells(xlCellTypeVisible).Copy\n OutWB.Worksheets(1).Paste\n OutWB.Save\n OutWB.Close\n \n Next I\n \nEnd Sub\n\nTo Change:\n\nSaveLoc - to your preferred directory\nThe TypeRG range if yours is not in A Column (also your lRow maybe)\nyour autofilter range if your data range is larger than mine.\n\nExaple of my data:\n\n\n",
"Export Split Data\nSub ExportSplitData()\n \n ' Define constants.\n \n Const SRC_NAME As String = \"Sheet1\"\n Const SRC_FIRST_CELL As String = \"A5\"\n Const SRC_CRITERIA_COLUMN As Long = 1\n \n Const DST_FOLDER As String _\n = \"Z:\\Incent_2022\\ORDINARIA\\RETAIL-WHS\\Andamento\\Q4\\Andamento\\Novembre\\\"\n Const DST_NAME_LEFT As String = \"And. Inc Q4_\"\n Const DST_EXTENSION As String = \".xlsm\"\n \n ' Reference the Source worksheet.\n \n Dim swb As Workbook: Set swb = ThisWorkbook ' workbook containing this code\n Dim sws As Worksheet: Set sws = swb.Sheets(SRC_NAME)\n \n Application.ScreenUpdating = False\n \n ' To leave the source workbook intact, export the worksheet\n ' to a new (helper) workbook and reference the range (there).\n \n sws.Copy\n Dim hwb As Workbook: Set hwb = Workbooks(Workbooks.Count)\n \n Dim hws As Worksheet: Set hws = hwb.Sheets(SRC_NAME)\n If hws.FilterMode Then hws.ShowAllData\n \n Dim hfCell As Range: Set hfCell = hws.Range(SRC_FIRST_CELL)\n \n Dim hrg As Range, hdrg As Range, hfrrg As Range, hrCount As Long\n \n With hws.UsedRange\n Set hfrrg = Intersect(hfCell.EntireRow, .Cells)\n Set hrg = hfrrg.Resize(.Rows.Count + .Row - hfrrg.Row)\n hrCount = hrg.Rows.Count\n Set hdrg = hrg.Resize(hrCount - 1).Offset(1) ' no headers\n End With\n \n ' Sort the range by the criteria column.\n \n hrg.Sort hrg.Columns(SRC_CRITERIA_COLUMN), xlAscending, , , , , , xlYes\n \n ' Write the unique values from the criteria column to a dictionary.\n \n Dim hData() As Variant: hData = hdrg.Columns(SRC_CRITERIA_COLUMN).Value\n \n Dim dict As Object: Set dict = CreateObject(\"Scripting.Dictionary\")\n dict.CompareMode = vbTextCompare\n \n Dim r As Long\n \n For r = 1 To hrCount - 1\n If Len(CStr(hData(r, 1))) > 0 Then\n dict(hData(r, 1)) = Empty\n End If\n Next r\n \n ' Loop through the keys of the dictionary and export\n ' the sorted helper worksheet to be processed in yet another file,\n ' the destination workbook.\n \n Dim dwb As Workbook, dws As Worksheet, drg As Range, ddrg As Range\n Dim rKey As Variant, dFilePath As String\n \n For Each rKey In dict.Keys\n \n hws.Copy\n \n Set dwb = Workbooks(Workbooks.Count)\n Set dws = dwb.Sheets(SRC_NAME)\n Set drg = dws.Range(hrg.Address) ' has headers\n Set ddrg = dws.Range(hdrg.Address) ' no headers\n \n drg.AutoFilter SRC_CRITERIA_COLUMN, \"<>\" & rKey ' filter\n ddrg.SpecialCells(xlCellTypeVisible).Delete xlShiftUp ' delete\n dws.AutoFilterMode = False ' turn off filter\n \n dFilePath = DST_FOLDER & DST_NAME_LEFT & rKey & DST_EXTENSION\n \n Application.DisplayAlerts = False\n dwb.SaveAs dFilePath, xlOpenXMLWorkbookMacroEnabled\n Application.DisplayAlerts = True\n \n dwb.Close SaveChanges:=False\n \n Next rKey\n \n ' Close the helper file.\n \n hwb.Close SaveChanges:=False\n \n Application.ScreenUpdating = True\n \n ' Inform.\n \n MsgBox \"Split data exported.\", vbInformation\n\nEnd Sub\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"copy",
"distinct_values",
"excel",
"multiple_makefiles",
"vba"
] |
stackoverflow_0074660172_copy_distinct_values_excel_multiple_makefiles_vba.txt
|
Q:
Animate horizontal scrollview in android studio
I'm trying to animate my horizontal ScrollView. At the moment it's going from left to right perfectly with:
HorizontalScrollView headband = findViewById(R.id.scroll);
ObjectAnimator.ofInt(headband, "scrollX", 2000).setDuration(10000).start();
Now I want two more things:
How to do the inverse, from right to left? I assume using sleep is not a good option.
How to know exactly the size of my ScrollView in pixels? The 2000 value I'm using is arbitrary; it just happens to work.
A:
After weeks of struggle, here is the solution! Hope it'll be useful to someone.
HorizontalScrollView headband = findViewById(R.id.scroll);
animator1 = ObjectAnimator.ofInt(headband, "scrollX", 1700).setDuration(10000);
animator2 = ObjectAnimator.ofInt(headband, "scrollX", 0).setDuration(10000);
animator1.start();
animator1.addListener(new AnimatorListenerAdapter() {
@Override
public void onAnimationEnd(Animator animation) {
super.onAnimationEnd(animation);
animator2.start();
}
});
animator2.addListener(new AnimatorListenerAdapter() {
@Override
public void onAnimationEnd(Animator animation) {
super.onAnimationEnd(animation);
animator1.start();
}
});
A:
You can use ObjectAnimator's repeatMode to get it to repeat in the opposite direction and repeatCount to get it to continue indefinitely. You can get the scrollable size of your ScrollView by calculating scrollView[0].width - scrollView.width. Here's how it would look in Kotlin with your variables:
ObjectAnimator.ofInt(headband, "scrollX", 0, headband[0].width - headband.width).apply {
repeatMode = ObjectAnimator.REVERSE
repeatCount = Animation.INFINITE
duration = 10000
start()
}
If you're executing this somewhere where the width of the views is not known yet, like in onCreate(), then wrap it all in headband.doOnLayout { <code> }.
|
Animate horizontal scrollview in android studio
|
I'm trying to animate my horizontal ScrollView. At the moment it's going from left to right perfectly with:
HorizontalScrollView headband = findViewById(R.id.scroll);
ObjectAnimator.ofInt(headband, "scrollX", 2000).setDuration(10000).start();
Now I want two more things:
How to do the inverse, from right to left? I assume using sleep is not a good option.
How to know exactly the size of my ScrollView in pixels? The 2000 value I'm using is arbitrary; it just happens to work.
|
[
"After weeks of struggle here is the solution ! Hope it'll be useful to someone.\n HorizontalScrollView headband = findViewById(R.id.scroll);\n\n\n animator1 = ObjectAnimator.ofInt(headband, \"scrollX\", 1700).setDuration(10000);\n animator2 = ObjectAnimator.ofInt(headband, \"scrollX\", 0).setDuration(10000);\n\n animator1.start();\n animator1.addListener(new AnimatorListenerAdapter() {\n @Override\n public void onAnimationEnd(Animator animation) {\n super.onAnimationEnd(animation);\n animator2.start();\n\n }\n });\n\n animator2.addListener(new AnimatorListenerAdapter() {\n @Override\n public void onAnimationEnd(Animator animation) {\n super.onAnimationEnd(animation);\n animator1.start();\n }\n });\n\n",
"You can use ObjectAnimator's repeatMode to get it to repeat in the opposite direction and repeatCount to get it to continue indefinitely. You can get the scrollable size of your ScrollView by calculating scrollView[0].width - scrollView.width. Here's how it would look in Kotlin with your variables:\nObjectAnimator.ofInt(headband, \"scrollX\", 0, headband[0].width - headband.width).apply {\n repeatMode = ObjectAnimator.REVERSE\n repeatCount = Animation.INFINITE\n duration = 10000\n start()\n}\n\nIf you're executing this somewhere where the width of the views is not known yet, like in onCreate(), them wrap it all in headband.doOnLayout { <code> }.\n"
] |
[
1,
0
] |
[] |
[] |
[
"android",
"animation",
"objectanimator",
"scrollview"
] |
stackoverflow_0063179754_android_animation_objectanimator_scrollview.txt
|
Q:
Exposed drop-down menu for jetpack compose
I was wondering if there is a solution for Exposed drop-down menu for jetpack compose?
I couldn't find a proper solution for this component inside jetpack compose. Any help?
A:
The version 1.1.0-alpha06 introduced the implementation of ExposedDropdownMenu based on ExposedDropdownMenuBox with TextField and DropdownMenu inside.
Something like:
val options = listOf("Option 1", "Option 2", "Option 3", "Option 4", "Option 5")
var expanded by remember { mutableStateOf(false) }
var selectedOptionText by remember { mutableStateOf(options[0]) }
ExposedDropdownMenuBox(
expanded = expanded,
onExpandedChange = {
expanded = !expanded
}
) {
TextField(
readOnly = true,
value = selectedOptionText,
onValueChange = { },
label = { Text("Label") },
trailingIcon = {
ExposedDropdownMenuDefaults.TrailingIcon(
expanded = expanded
)
},
colors = ExposedDropdownMenuDefaults.textFieldColors()
)
ExposedDropdownMenu(
expanded = expanded,
onDismissRequest = {
expanded = false
}
) {
options.forEach { selectionOption ->
DropdownMenuItem(
onClick = {
selectedOptionText = selectionOption
expanded = false
}
) {
Text(text = selectionOption)
}
}
}
}
If you are using M3 (androidx.compose.material3) you also have to pass the menuAnchor modifier to the TextField:
ExposedDropdownMenuBox(
expanded = expanded,
onExpandedChange = { expanded = !expanded },
) {
TextField(
//...
modifier = Modifier.menuAnchor()
)
ExposedDropdownMenu(){ /*.. */ }
}
With the version 1.0.x there isn't a built-in component.
You can use a OutlinedTextField + DropdownMenu.
It is just a basic (very basic) implementation:
var expanded by remember { mutableStateOf(false) }
val suggestions = listOf("Item1","Item2","Item3")
var selectedText by remember { mutableStateOf("") }
var textfieldSize by remember { mutableStateOf(Size.Zero)}
val icon = if (expanded)
Icons.Filled.ArrowDropUp //it requires androidx.compose.material:material-icons-extended
else
Icons.Filled.ArrowDropDown
Column() {
OutlinedTextField(
value = selectedText,
onValueChange = { selectedText = it },
modifier = Modifier
.fillMaxWidth()
.onGloballyPositioned { coordinates ->
//This value is used to assign to the DropDown the same width
textfieldSize = coordinates.size.toSize()
},
label = {Text("Label")},
trailingIcon = {
Icon(icon,"contentDescription",
Modifier.clickable { expanded = !expanded })
}
)
DropdownMenu(
expanded = expanded,
onDismissRequest = { expanded = false },
modifier = Modifier
.width(with(LocalDensity.current){textfieldSize.width.toDp()})
) {
suggestions.forEach { label ->
DropdownMenuItem(onClick = {
selectedText = label
}) {
Text(text = label)
}
}
}
}
A:
This is what I did to get the dropdown width the same as the text field, copying and modifying Gabriele's answer.
var expanded by remember { mutableStateOf(false) }
val suggestions = listOf("Item1","Item2","Item3")
var selectedText by remember { mutableStateOf("") }
var dropDownWidth by remember { mutableStateOf(0) }
val icon = if (expanded)
Icons.Filled.....
else
Icons.Filled.ArrowDropDown
Column() {
OutlinedTextField(
value = selectedText,
onValueChange = { selectedText = it },
modifier = Modifier.fillMaxWidth()
.onSizeChanged {
dropDownWidth = it.width
},
label = {Text("Label")},
trailingIcon = {
Icon(icon,"contentDescription", Modifier.clickable { expanded = !expanded })
}
)
DropdownMenu(
expanded = expanded,
onDismissRequest = { expanded = false },
modifier = Modifier
.width(with(LocalDensity.current){dropDownWidth.toDp()})
) {
suggestions.forEach { label ->
DropdownMenuItem(onClick = {
selectedText = label
}) {
Text(text = label)
}
}
}
}
A:
Here's my version.
I achieved this without using a TextField (so no keyboard).
There's a "regular" and an "outlined" version.
import androidx.compose.animation.core.animateFloatAsState
import androidx.compose.foundation.background
import androidx.compose.foundation.border
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.*
import androidx.compose.foundation.shape.ZeroCornerSize
import androidx.compose.material.*
import androidx.compose.material.icons.Icons
import androidx.compose.material.icons.filled.ExpandMore
import androidx.compose.runtime.*
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.draw.drawBehind
import androidx.compose.ui.draw.rotate
import androidx.compose.ui.geometry.Offset
import androidx.compose.ui.geometry.Size
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.graphics.Shape
import androidx.compose.ui.layout.onGloballyPositioned
import androidx.compose.ui.platform.LocalDensity
import androidx.compose.ui.platform.LocalFocusManager
import androidx.compose.ui.unit.Dp
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.toSize
import kotlinx.coroutines.delay
import kotlinx.coroutines.launch
// ExposedDropDownMenu will be added in Jetpack Compose 1.1.0.
// This is a reimplementation while waiting.
// See https://stackoverflow.com/questions/67111020/exposed-drop-down-menu-for-jetpack-compose/6904285
@Composable
fun SimpleExposedDropDownMenu(
values: List<String>,
selectedIndex: Int,
onChange: (Int) -> Unit,
label: @Composable () -> Unit,
modifier: Modifier,
backgroundColor: Color = MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.BackgroundOpacity),
shape: Shape = MaterialTheme.shapes.small.copy(bottomEnd = ZeroCornerSize, bottomStart = ZeroCornerSize)
) {
SimpleExposedDropDownMenuImpl(
values = values,
selectedIndex = selectedIndex,
onChange = onChange,
label = label,
modifier = modifier,
backgroundColor = backgroundColor,
shape = shape,
decorator = { color, width, content ->
Box(
Modifier
.drawBehind {
val strokeWidth = width.value * density
val y = size.height - strokeWidth / 2
drawLine(
color,
Offset(0f, y),
Offset(size.width, y),
strokeWidth
)
}
) {
content()
}
}
)
}
@Composable
fun SimpleOutlinedExposedDropDownMenu(
values: List<String>,
selectedIndex: Int,
onChange: (Int) -> Unit,
label: @Composable () -> Unit,
modifier: Modifier,
backgroundColor: Color = MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.BackgroundOpacity),
shape: Shape = MaterialTheme.shapes.small
) {
SimpleExposedDropDownMenuImpl(
values = values,
selectedIndex = selectedIndex,
onChange = onChange,
label = label,
modifier = modifier,
backgroundColor = backgroundColor,
shape = shape,
decorator = { color, width, content ->
Box(
Modifier
.border(width, color, shape)
) {
content()
}
}
)
}
@Composable
private fun SimpleExposedDropDownMenuImpl(
values: List<String>,
selectedIndex: Int,
onChange: (Int) -> Unit,
label: @Composable () -> Unit,
modifier: Modifier,
backgroundColor: Color,
shape: Shape,
decorator: @Composable (Color, Dp, @Composable () -> Unit) -> Unit
) {
var expanded by remember { mutableStateOf(false) }
var textfieldSize by remember { mutableStateOf(Size.Zero) }
val indicatorColor =
if (expanded) MaterialTheme.colors.primary.copy(alpha = ContentAlpha.high)
else MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.UnfocusedIndicatorLineOpacity)
val indicatorWidth = (if (expanded) 2 else 1).dp
val labelColor =
if (expanded) MaterialTheme.colors.primary.copy(alpha = ContentAlpha.high)
else MaterialTheme.colors.onSurface.copy(ContentAlpha.medium)
val trailingIconColor = MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.IconOpacity)
val rotation: Float by animateFloatAsState(if (expanded) 180f else 0f)
val focusManager = LocalFocusManager.current
Column(modifier = modifier.width(IntrinsicSize.Min)) {
decorator(indicatorColor, indicatorWidth) {
Box(
Modifier
.fillMaxWidth()
.background(color = backgroundColor, shape = shape)
.onGloballyPositioned { textfieldSize = it.size.toSize() }
.clip(shape)
.clickable {
expanded = !expanded
focusManager.clearFocus()
}
.padding(start = 16.dp, end = 12.dp, top = 7.dp, bottom = 10.dp)
) {
Column(Modifier.padding(end = 32.dp)) {
ProvideTextStyle(value = MaterialTheme.typography.caption.copy(color = labelColor)) {
label()
}
Text(
text = values[selectedIndex],
modifier = Modifier.padding(top = 1.dp)
)
}
Icon(
imageVector = Icons.Filled.ExpandMore,
contentDescription = "Change",
tint = trailingIconColor,
modifier = Modifier
.align(Alignment.CenterEnd)
.padding(top = 4.dp)
.rotate(rotation)
)
}
}
DropdownMenu(
expanded = expanded,
onDismissRequest = { expanded = false },
modifier = Modifier
.width(with(LocalDensity.current) { textfieldSize.width.toDp() })
) {
values.forEachIndexed { i, v ->
val scope = rememberCoroutineScope()
DropdownMenuItem(
onClick = {
onChange(i)
scope.launch {
delay(150)
expanded = false
}
}
) {
Text(v)
}
}
}
}
}
A:
A few modifications to @Gabriele Mariotti's answer: the user can tap the outlined text field and select one of the options, and the dropdown disappears once the user selects an option.
@Composable
fun DropDownMenu(optionList: List<String>,label:String,) {
var expanded by remember { mutableStateOf(false) }
var selectedText by remember { mutableStateOf("") }
var textfieldSize by remember { mutableStateOf(Size.Zero) }
val icon = if (expanded)
Icons.Filled.KeyboardArrowUp
else
Icons.Filled.KeyboardArrowDown
Column() {
OutlinedTextField(
value = selectedText,
onValueChange = { selectedText = it },
enabled = false,
modifier = Modifier
.fillMaxWidth()
.onGloballyPositioned { coordinates ->
//This value is used to assign to the DropDown the same width
textfieldSize = coordinates.size.toSize()
}
.clickable { expanded = !expanded },
label = { Text(label) },
trailingIcon = {
Icon(icon, "Drop Down Icon",
Modifier.clickable { expanded = !expanded })
}
)
DropdownMenu(
expanded = expanded,
onDismissRequest = { expanded = false },
modifier = Modifier
.width(with(LocalDensity.current) { textfieldSize.width.toDp() })
) {
optionList.forEach { label ->
DropdownMenuItem(onClick = {
selectedText = label
expanded = !expanded
}) {
Text(text = label)
}
}
}
}
}
A:
In addition to what has been written here, in case it could be useful to someone (and as a personal memo for future use), I've built this drop-down menu component using BasicTextField: no decoration, no default padding, no arrow icon, with the selected item's text aligned to the right (TextAlign.End), filling the maximum width (.fillMaxWidth()), and a single line per item in the list.
data class DropDownMenuParameter(
var options: List<String>,
var expanded: Boolean,
var selectedOptionText: String,
var backgroundColor: Color
)
@ExperimentalMaterialApi
@Composable
fun DropDownMenuComponent(params: DropDownMenuParameter) {
var expanded by remember { mutableStateOf(params.expanded) }
ExposedDropdownMenuBox(
expanded = expanded,
onExpandedChange = {
expanded = !expanded
}
) {
BasicTextField(
modifier = Modifier
.background(params.backgroundColor)
.fillMaxWidth(),
readOnly = true,
value = params.selectedOptionText,
onValueChange = { },
textStyle = TextStyle(
color = Color.White,
textAlign = TextAlign.End,
fontSize = 16.sp,
),
singleLine = true
)
ExposedDropdownMenu(
modifier = Modifier
.background(params.backgroundColor),
expanded = expanded,
onDismissRequest = {
expanded = false
}
) {
params.options.forEach { selectionOption ->
DropdownMenuItem(
modifier = Modifier
.background(params.backgroundColor),
onClick = {
params.selectedOptionText = selectionOption
expanded = false
},
) {
Text(
text = selectionOption,
color = Color.White,
)
}
}
}
}
}
My usage :
@OptIn(ExperimentalAnimationApi::class, ExperimentalMaterialApi::class)
@Composable
fun SubscribeSubscriptionDetails(selectedSubscription : Subscription){
val categoryOptions = listOf("Entertainment", "Gaming", "Business", "Utility", "Music", "Food & Drink", "Health & Fitness", "Bank", "Transport", "Education", "Insurance", "News")
val categoryExpanded by rememberSaveable { mutableStateOf(false) }
val categorySelectedOptionText
by rememberSaveable { mutableStateOf(selectedSubscription.category) }
val categoryDropDownMenuPar by remember {
mutableStateOf(
DropDownMenuParameter(
options = categoryOptions,
expanded = categoryExpanded,
selectedOptionText = categorySelectedOptionText,
backgroundColor = serviceColorDecoded
)
)
}
// ....
Row { // categoria
Text(
modifier = Modifier
.padding(textMargin_24, 0.dp, 0.dp, 0.dp)
.weight(0.5f),
text = "Categoria",
fontWeight = FontWeight.Bold,
color = Color.White,
textAlign = TextAlign.Left,
fontSize = 16.sp,
)
Row(
modifier = Modifier
.padding(0.dp, 0.dp, 24.dp, 0.dp)
.weight(0.5f),
horizontalArrangement = Arrangement.End
){
DropDownMenuComponent(categoryDropDownMenuPar)
}
}
// .....
}
To retrieve the value after selection: categoryDropDownMenuPar.selectedOptionText
A:
If you are using Material 3 and a newer version of Compose (this is working for v1.3.1), DropdownMenuItem has changed slightly: the text must now be passed via the text parameter (rather than as the trailing @Composable content).
You will still need to opt in to the experimental api, @OptIn(ExperimentalMaterial3Api::class).
This example is in the androidx.compose.material3 documentation.
import androidx.compose.material3.DropdownMenuItem
import androidx.compose.material3.ExposedDropdownMenuBox
import androidx.compose.material3.Text
import androidx.compose.material3.TextField
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
val options = listOf("Option 1", "Option 2", "Option 3", "Option 4", "Option 5")
var expanded by remember { mutableStateOf(false) }
var selectedOptionText by remember { mutableStateOf(options[0]) }
// We want to react on tap/press on TextField to show menu
ExposedDropdownMenuBox(
expanded = expanded,
onExpandedChange = { expanded = !expanded },
) {
TextField(
// The `menuAnchor` modifier must be passed to the text field for correctness.
modifier = Modifier.menuAnchor(),
readOnly = true,
value = selectedOptionText,
onValueChange = {},
label = { Text("Label") },
trailingIcon = { ExposedDropdownMenuDefaults.TrailingIcon(expanded = expanded) },
colors = ExposedDropdownMenuDefaults.textFieldColors(),
)
ExposedDropdownMenu(
expanded = expanded,
onDismissRequest = { expanded = false },
) {
options.forEach { selectionOption ->
DropdownMenuItem(
text = { Text(selectionOption) },
onClick = {
selectedOptionText = selectionOption
expanded = false
},
contentPadding = ExposedDropdownMenuDefaults.ItemContentPadding,
)
}
}
}
Doing this the 'old way', I had the following errors on the Text(text = selectionOption) line:
No value passed for parameter 'text'
Type mismatch: inferred type is () -> Unit but MutableInteractionSource was expected
@Composable invocations can only happen from the context of a @Composable function
|
Exposed drop-down menu for jetpack compose
|
I was wondering if there is a solution for Exposed drop-down menu for jetpack compose?
I couldn't find a proper solution for this component inside jetpack compose. Any help?
|
[
"The version 1.1.0-alpha06 introduced the implementation of ExposedDropdownMenu based on ExposedDropdownMenuBox with TextField and DropdownMenu inside.\nSomething like:\n val options = listOf(\"Option 1\", \"Option 2\", \"Option 3\", \"Option 4\", \"Option 5\")\n var expanded by remember { mutableStateOf(false) }\n var selectedOptionText by remember { mutableStateOf(options[0]) }\n \n ExposedDropdownMenuBox(\n expanded = expanded,\n onExpandedChange = {\n expanded = !expanded\n }\n ) {\n TextField(\n readOnly = true,\n value = selectedOptionText,\n onValueChange = { },\n label = { Text(\"Label\") },\n trailingIcon = {\n ExposedDropdownMenuDefaults.TrailingIcon(\n expanded = expanded\n )\n },\n colors = ExposedDropdownMenuDefaults.textFieldColors()\n )\n ExposedDropdownMenu(\n expanded = expanded,\n onDismissRequest = {\n expanded = false\n }\n ) {\n options.forEach { selectionOption ->\n DropdownMenuItem(\n onClick = {\n selectedOptionText = selectionOption\n expanded = false\n }\n ) {\n Text(text = selectionOption)\n }\n }\n }\n }\n\n\nIf you are using M3 (androidx.compose.material3) you have also to pass the menuAnchor modifier to the TextField:\nExposedDropdownMenuBox(\n expanded = expanded,\n onExpandedChange = { expanded = !expanded },\n) {\n TextField(\n //...\n modifier = Modifier.menuAnchor()\n )\n ExposedDropdownMenu(){ /*.. */ }\n}\n\n\nWith the version 1.0.x there isn't a built-in component.\nYou can use a OutlinedTextField + DropdownMenu.\nIt is just a basic (very basic) implementation:\nvar expanded by remember { mutableStateOf(false) }\nval suggestions = listOf(\"Item1\",\"Item2\",\"Item3\")\nvar selectedText by remember { mutableStateOf(\"\") }\n\nvar textfieldSize by remember { mutableStateOf(Size.Zero)}\n\nval icon = if (expanded)\n Icons.Filled.ArrowDropUp //it requires androidx.compose.material:material-icons-extended\nelse\n Icons.Filled.ArrowDropDown\n\n\nColumn() {\n OutlinedTextField(\n value = selectedText,\n onValueChange = { selectedText = it },\n modifier = Modifier\n .fillMaxWidth()\n .onGloballyPositioned { coordinates ->\n //This value is used to assign to the DropDown the same width\n textfieldSize = coordinates.size.toSize()\n },\n label = {Text(\"Label\")},\n trailingIcon = {\n Icon(icon,\"contentDescription\",\n Modifier.clickable { expanded = !expanded })\n }\n )\n DropdownMenu(\n expanded = expanded,\n onDismissRequest = { expanded = false },\n modifier = Modifier\n .width(with(LocalDensity.current){textfieldSize.width.toDp()})\n ) {\n suggestions.forEach { label ->\n DropdownMenuItem(onClick = {\n selectedText = label\n }) {\n Text(text = label)\n }\n }\n }\n}\n\n\n\n",
"This is what I did to get the width the same as the text field: Copying and modifying Gabriele's answer.\nvar expanded by remember { mutableStateOf(false) }\nval suggestions = listOf(\"Item1\",\"Item2\",\"Item3\")\nvar selectedText by remember { mutableStateOf(\"\") }\n\nvar dropDownWidth by remember { mutableStateOf(0) }\n\nval icon = if (expanded)\n Icons.Filled.....\nelse\n Icons.Filled.ArrowDropDown\n\n\nColumn() {\n OutlinedTextField(\n value = selectedText,\n onValueChange = { selectedText = it },\n modifier = Modifier.fillMaxWidth()\n .onSizeChanged {\n dropDownWidth = it.width\n },\n label = {Text(\"Label\")},\n trailingIcon = {\n Icon(icon,\"contentDescription\", Modifier.clickable { expanded = !expanded })\n }\n )\n DropdownMenu(\n expanded = expanded,\n onDismissRequest = { expanded = false },\n modifier = Modifier\n .width(with(LocalDensity.current){dropDownWidth.toDp()})\n ) {\n suggestions.forEach { label ->\n DropdownMenuItem(onClick = {\n selectedText = label\n }) {\n Text(text = label)\n }\n }\n }\n}\n\n",
"Here's my version.\nI achieved this without using a TextField (so no keyboard).\nThere's a \"regular\" and an \"outlined\" version.\nimport androidx.compose.animation.core.animateFloatAsState\nimport androidx.compose.foundation.background\nimport androidx.compose.foundation.border\nimport androidx.compose.foundation.clickable\nimport androidx.compose.foundation.layout.*\nimport androidx.compose.foundation.shape.ZeroCornerSize\nimport androidx.compose.material.*\nimport androidx.compose.material.icons.Icons\nimport androidx.compose.material.icons.filled.ExpandMore\nimport androidx.compose.runtime.*\nimport androidx.compose.ui.Alignment\nimport androidx.compose.ui.Modifier\nimport androidx.compose.ui.draw.clip\nimport androidx.compose.ui.draw.drawBehind\nimport androidx.compose.ui.draw.rotate\nimport androidx.compose.ui.geometry.Offset\nimport androidx.compose.ui.geometry.Size\nimport androidx.compose.ui.graphics.Color\nimport androidx.compose.ui.graphics.Shape\nimport androidx.compose.ui.layout.onGloballyPositioned\nimport androidx.compose.ui.platform.LocalDensity\nimport androidx.compose.ui.platform.LocalFocusManager\nimport androidx.compose.ui.unit.Dp\nimport androidx.compose.ui.unit.dp\nimport androidx.compose.ui.unit.toSize\nimport kotlinx.coroutines.delay\nimport kotlinx.coroutines.launch\n\n\n// ExposedDropDownMenu will be added in Jetpack Compose 1.1.0.\n// This is a reimplementation while waiting.\n// See https://stackoverflow.com/questions/67111020/exposed-drop-down-menu-for-jetpack-compose/6904285\n\n@Composable\nfun SimpleExposedDropDownMenu(\n values: List<String>,\n selectedIndex: Int,\n onChange: (Int) -> Unit,\n label: @Composable () -> Unit,\n modifier: Modifier,\n backgroundColor: Color = MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.BackgroundOpacity),\n shape: Shape = MaterialTheme.shapes.small.copy(bottomEnd = ZeroCornerSize, bottomStart = ZeroCornerSize)\n) {\n SimpleExposedDropDownMenuImpl(\n values = values,\n selectedIndex = selectedIndex,\n onChange = onChange,\n label = label,\n modifier = modifier,\n backgroundColor = backgroundColor,\n shape = shape,\n decorator = { color, width, content ->\n Box(\n Modifier\n .drawBehind {\n val strokeWidth = width.value * density\n val y = size.height - strokeWidth / 2\n drawLine(\n color,\n Offset(0f, y),\n Offset(size.width, y),\n strokeWidth\n )\n }\n ) {\n content()\n }\n }\n )\n}\n\n@Composable\nfun SimpleOutlinedExposedDropDownMenu(\n values: List<String>,\n selectedIndex: Int,\n onChange: (Int) -> Unit,\n label: @Composable () -> Unit,\n modifier: Modifier,\n backgroundColor: Color = MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.BackgroundOpacity),\n shape: Shape = MaterialTheme.shapes.small\n) {\n SimpleExposedDropDownMenuImpl(\n values = values,\n selectedIndex = selectedIndex,\n onChange = onChange,\n label = label,\n modifier = modifier,\n backgroundColor = backgroundColor,\n shape = shape,\n decorator = { color, width, content ->\n Box(\n Modifier\n .border(width, color, shape)\n ) {\n content()\n }\n }\n )\n}\n\n@Composable\nprivate fun SimpleExposedDropDownMenuImpl(\n values: List<String>,\n selectedIndex: Int,\n onChange: (Int) -> Unit,\n label: @Composable () -> Unit,\n modifier: Modifier,\n backgroundColor: Color,\n shape: Shape,\n decorator: @Composable (Color, Dp, @Composable () -> Unit) -> Unit\n) {\n var expanded by remember { mutableStateOf(false) }\n var textfieldSize by remember { mutableStateOf(Size.Zero) }\n\n val indicatorColor =\n if (expanded) 
MaterialTheme.colors.primary.copy(alpha = ContentAlpha.high)\n else MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.UnfocusedIndicatorLineOpacity)\n val indicatorWidth = (if (expanded) 2 else 1).dp\n val labelColor =\n if (expanded) MaterialTheme.colors.primary.copy(alpha = ContentAlpha.high)\n else MaterialTheme.colors.onSurface.copy(ContentAlpha.medium)\n val trailingIconColor = MaterialTheme.colors.onSurface.copy(alpha = TextFieldDefaults.IconOpacity)\n\n val rotation: Float by animateFloatAsState(if (expanded) 180f else 0f)\n\n val focusManager = LocalFocusManager.current\n\n Column(modifier = modifier.width(IntrinsicSize.Min)) {\n decorator(indicatorColor, indicatorWidth) {\n Box(\n Modifier\n .fillMaxWidth()\n .background(color = backgroundColor, shape = shape)\n .onGloballyPositioned { textfieldSize = it.size.toSize() }\n .clip(shape)\n .clickable {\n expanded = !expanded\n focusManager.clearFocus()\n }\n .padding(start = 16.dp, end = 12.dp, top = 7.dp, bottom = 10.dp)\n ) {\n Column(Modifier.padding(end = 32.dp)) {\n ProvideTextStyle(value = MaterialTheme.typography.caption.copy(color = labelColor)) {\n label()\n }\n Text(\n text = values[selectedIndex],\n modifier = Modifier.padding(top = 1.dp)\n )\n }\n Icon(\n imageVector = Icons.Filled.ExpandMore,\n contentDescription = \"Change\",\n tint = trailingIconColor,\n modifier = Modifier\n .align(Alignment.CenterEnd)\n .padding(top = 4.dp)\n .rotate(rotation)\n )\n\n }\n }\n\n DropdownMenu(\n expanded = expanded,\n onDismissRequest = { expanded = false },\n modifier = Modifier\n .width(with(LocalDensity.current) { textfieldSize.width.toDp() })\n ) {\n values.forEachIndexed { i, v ->\n val scope = rememberCoroutineScope()\n DropdownMenuItem(\n onClick = {\n onChange(i)\n scope.launch {\n delay(150)\n expanded = false\n }\n }\n ) {\n Text(v)\n }\n }\n }\n }\n}\n\n",
"A few modifications to @Gabriele Mariotti answer A user can select an outline text field and select from an option. Option will be disappear one user select any option.\n @Composable\nfun DropDownMenu(optionList: List<String>,label:String,) {\n var expanded by remember { mutableStateOf(false) }\n\n var selectedText by remember { mutableStateOf(\"\") }\n\n var textfieldSize by remember { mutableStateOf(Size.Zero) }\n\n val icon = if (expanded)\n Icons.Filled.KeyboardArrowUp\n else\n Icons.Filled.KeyboardArrowDown\n\n\n Column() {\n OutlinedTextField(\n value = selectedText,\n onValueChange = { selectedText = it },\n enabled = false,\n modifier = Modifier\n .fillMaxWidth()\n .onGloballyPositioned { coordinates ->\n //This value is used to assign to the DropDown the same width\n textfieldSize = coordinates.size.toSize()\n }\n .clickable { expanded = !expanded },\n label = { Text(label) },\n trailingIcon = {\n Icon(icon, \"Drop Down Icon\",\n Modifier.clickable { expanded = !expanded })\n }\n )\n DropdownMenu(\n expanded = expanded,\n onDismissRequest = { expanded = false },\n modifier = Modifier\n .width(with(LocalDensity.current) { textfieldSize.width.toDp() })\n ) {\n optionList.forEach { label ->\n DropdownMenuItem(onClick = {\n selectedText = label\n expanded = !expanded\n }) {\n Text(text = label)\n }\n }\n }\n }\n}\n\n",
"In addition to what has been written here, I case could be useful to someone and for my personal memo note for next usages, I've realized this drop-down menu function component using BasicTextField for no decoration and no default padding, no arrow icon, with item selected text aligned to right (.End) , filling max text width (.fillMaxWidth()) with single line in list.\n\ndata class DropDownMenuParameter(\n var options: List<String>,\n var expanded: Boolean,\n var selectedOptionText: String,\n var backgroundColor: Color\n )\n\n\n\n\n@ExperimentalMaterialApi\n@Composable\nfun DropDownMenuComponent(params: DropDownMenuParameter) {\n var expanded by remember { mutableStateOf(params.expanded) }\n \n\n ExposedDropdownMenuBox(\n expanded = expanded,\n onExpandedChange = {\n expanded = !expanded\n }\n ) {\n BasicTextField(\n modifier = Modifier\n .background(params.backgroundColor)\n .fillMaxWidth(),\n readOnly = true,\n value = params.selectedOptionText,\n onValueChange = { },\n textStyle = TextStyle(\n color = Color.White,\n textAlign = TextAlign.End,\n fontSize = 16.sp,\n ),\n singleLine = true\n\n )\n ExposedDropdownMenu(\n modifier = Modifier\n .background(params.backgroundColor),\n expanded = expanded,\n onDismissRequest = {\n expanded = false\n }\n ) {\n params.options.forEach { selectionOption ->\n DropdownMenuItem(\n modifier = Modifier\n .background(params.backgroundColor),\n onClick = {\n params.selectedOptionText = selectionOption\n expanded = false\n },\n\n ) {\n Text(\n text = selectionOption,\n color = Color.White,\n )\n\n }\n }\n }\n }\n\n}\n\nMy usage :\n@OptIn(ExperimentalAnimationApi::class, ExperimentalMaterialApi::class)\n@Composable\nfun SubscribeSubscriptionDetails(selectedSubscription : Subscription){\n\n \n \n val categoryOptions = listOf(\"Entertainment\", \"Gaming\", \"Business\", \"Utility\", \"Music\", \"Food & Drink\", \"Health & Fitness\", \"Bank\", \"Transport\", \"Education\", \"Insurance\", \"News\")\n val categoryExpanded by rememberSaveable { mutableStateOf(false) }\nval categorySelectedOptionText\n by rememberSaveable { mutableStateOf(selectedSubscription.category) }\nval categoryDropDownMenuPar by remember {\n mutableStateOf(\n DropDownMenuParameter(\n options = categoryOptions,\n expanded = categoryExpanded,\n selectedOptionText = categorySelectedOptionText,\n backgroundColor = serviceColorDecoded\n )\n )\n}\n\n // ....\n\n\n Row { // categoria\n\n Text(\n modifier = Modifier\n .padding(textMargin_24, 0.dp, 0.dp, 0.dp)\n .weight(0.5f),\n text = \"Categoria\",\n fontWeight = FontWeight.Bold,\n color = Color.White,\n textAlign = TextAlign.Left,\n fontSize = 16.sp,\n )\n\n\n Row(\n modifier = Modifier\n .padding(0.dp, 0.dp, 24.dp, 0.dp)\n .weight(0.5f),\n horizontalArrangement = Arrangement.End\n ){\n DropDownMenuComponent(categoryDropDownMenuPar)\n }\n\n\n }\n\n\n // .....\n\n\n}\n\nto retrieve the value after selection : categoryDropDownMenuPar.selectedOptionText\n",
"If you are using material3 and a newer version of compose (this is working for v1.3.1), the DropdownMenuItem has changed slightly. Text must now be a property (rather than an @Composable).\nYou will still need to opt in to the experimental api, @OptIn(ExperimentalMaterial3Api::class).\nThis example is in the androidx.compose.material3 documentation.\nimport androidx.compose.material3.DropdownMenuItem\nimport androidx.compose.material3.ExposedDropdownMenuBox\nimport androidx.compose.material3.Text\nimport androidx.compose.material3.TextField\nimport androidx.compose.runtime.mutableStateOf\nimport androidx.compose.runtime.remember\n\nval options = listOf(\"Option 1\", \"Option 2\", \"Option 3\", \"Option 4\", \"Option 5\")\nvar expanded by remember { mutableStateOf(false) }\nvar selectedOptionText by remember { mutableStateOf(options[0]) }\n// We want to react on tap/press on TextField to show menu\nExposedDropdownMenuBox(\n expanded = expanded,\n onExpandedChange = { expanded = !expanded },\n) {\n TextField(\n // The `menuAnchor` modifier must be passed to the text field for correctness.\n modifier = Modifier.menuAnchor(),\n readOnly = true,\n value = selectedOptionText,\n onValueChange = {},\n label = { Text(\"Label\") },\n trailingIcon = { ExposedDropdownMenuDefaults.TrailingIcon(expanded = expanded) },\n colors = ExposedDropdownMenuDefaults.textFieldColors(),\n )\n ExposedDropdownMenu(\n expanded = expanded,\n onDismissRequest = { expanded = false },\n ) {\n options.forEach { selectionOption ->\n DropdownMenuItem(\n text = { Text(selectionOption) },\n onClick = {\n selectedOptionText = selectionOption\n expanded = false\n },\n contentPadding = ExposedDropdownMenuDefaults.ItemContentPadding,\n )\n }\n }\n}\n\nDoing this the 'old way', I had the following errors on the Text(text = selectionOption) line:\n\nNo value passed for parameter 'text'\nType mismatch: inferred type is () -> Unit but MutableInteractionSource was expected\n@Composable invocations can only happen from the context of a @Composable function\n\n"
] |
[
70,
11,
7,
1,
0,
0
] |
[
"You can use OutlinedTextField directly in ExposedDropdownMenuBox. It's not perfect but at least very simple.\n\nvar expanded by remember { mutableStateOf(false) }\nval options = listOf(\"Option 1\", \"Option 2\", \"Option 3\", \"Option 4\", \"Option 5\")\nvar selectedOptionText by remember { mutableStateOf(options[0]) }\n\nExposedDropdownMenuBox(\n expanded = expanded,\n onExpandedChange = { expanded = expanded.not() },\n) {\n OutlinedTextField(\n value = selectedOptionText,\n onValueChange = { selectedOptionText = it },\n label = { Text(\"Label\") },\n readOnly = true,\n trailingIcon = { ExposedDropdownMenuDefaults.TrailingIcon(expanded = expanded) },\n modifier = Modifier.fillMaxWidth().padding(horizontal = 16.dp),\n )\n ExposedDropdownMenu(\n expanded = expanded,\n onDismissRequest = { expanded = false },\n ) {\n options.forEach { option ->\n DropdownMenuItem(\n onClick = {\n selectedOptionText = option\n expanded = false\n }\n ) {\n Text(text = option)\n }\n }\n }\n}\n\n\n"
] |
[
-1
] |
[
"android",
"android_compose_exposeddropdown",
"android_compose_textfield",
"android_jetpack_compose",
"kotlin"
] |
stackoverflow_0067111020_android_android_compose_exposeddropdown_android_compose_textfield_android_jetpack_compose_kotlin.txt
|
Q:
VM428:7 Uncaught TypeError: Cannot read properties of null (reading 'CodeMirror') at :7:17
The code is running nicely, but I don't understand where I am getting this error.
It says:
VM428:7 Uncaught TypeError: Cannot read properties of null (reading 'CodeMirror')
at <anonymous>:7:17
Any sort of help with this will be appreciated.
Thanks in advance.
A:
Remove the BlackBox extension and the React Developer Tools extension from Chrome
A:
Just remove the BlackBox extension from Chrome and it's done
A:
It looks like the error is being thrown because the CodeMirror variable is null, meaning it is not defined or has not been assigned a value. This can happen if the CodeMirror library is not properly included in your code, or if it is not being loaded before the code that is trying to access it is executed.
To fix this error, you will need to make sure that the CodeMirror library is properly included in your code and is being loaded before any code that tries to access it is executed. This can typically be done by including the library in the <head> section of your HTML document, or by using a script tag to load it asynchronously.
For example, if you are using a script tag to load the CodeMirror library, you could try something like this:
<script src="https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.56.0/codemirror.min.js"></script>
<script>
// Your code that uses CodeMirror goes here
// ...
</script>
|
VM428:7 Uncaught TypeError: Cannot read properties of null (reading 'CodeMirror') at :7:17
|
The code is running nicely, but I don't understand where I am getting this error.
It says:
VM428:7 Uncaught TypeError: Cannot read properties of null (reading 'CodeMirror')
at <anonymous>:7:17
Any sort of help with this will be appreciated.
Thanks in advance.
|
[
"Remove blackbox extension and react development tool extension from chrome\n",
"Just Remove blackbox extension from your chrome and its done\n",
"It looks like the error is being thrown because the CodeMirror variable is null, meaning it is not defined or has not been assigned a value. This can happen if the CodeMirror library is not properly included in your code, or if it is not being loaded before the code that is trying to access it is executed.\nTo fix this error, you will need to make sure that the CodeMirror library is properly included in your code and is being loaded before any code that tries to access it is executed. This can typically be done by including the library in the <head> section of your HTML document, or by using a script tag to load it asynchronously.\nFor example, if you are using a script tag to load the CodeMirror library, you could try something like this:\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/codemirror/5.56.0/codemirror.min.js\"></script>\n<script>\n // Your code that uses CodeMirror goes here\n // ...\n</script>\n\n"
] |
[
19,
6,
0
] |
[
"i found that the error is thriggred everytime when i am in debug console of chrome and i press any key of my keyboard.\nso my code was good\n"
] |
[
-1
] |
[
"javascript",
"reactjs"
] |
stackoverflow_0074485409_javascript_reactjs.txt
|
Q:
How to get dominant color from image in flutter?
I want to extract the dominant color from an image so that I can apply it as a blend to other images. How can I achieve that?
In my current code I have set the color manually, but I want it to be generated by the app.
class MyApp extends StatelessWidget {
Color face = new HexColor("a8a8a8");
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: Text("Image from assets"),
),
body: Column (
mainAxisAlignment: MainAxisAlignment.center,
children:<Widget>[
Row(
mainAxisAlignment: MainAxisAlignment.start,
children:<Widget>[
new Image.asset('assets/images/6.jpg',
color: face, colorBlendMode:BlendMode.modulate ,
fit:BoxFit.cover,
height: 50,
width: 50,
),
new Image.asset('assets/images/1.jpg',
color: face, colorBlendMode: BlendMode.modulate,
fit:BoxFit.cover,
height: 200,
width: 200,
),
]),
])),
);
}
}
A:
I found solution using palette_generator package..
First import library
import 'package:palette_generator/palette_generator.dart';
add it in pubspec.yaml file too
The below function will return palette
Future<PaletteGenerator>_updatePaletteGenerator ()async
{
paletteGenerator = await PaletteGenerator.fromImageProvider(
Image.asset("assets/images/8.jfif").image,
);
return paletteGenerator;
}
Now we can fetch it in future builder
FutureBuilder<PaletteGenerator>(
future: _updatePaletteGenerator(), // async work
builder: (BuildContext context, AsyncSnapshot<PaletteGenerator> snapshot) {
switch (snapshot.connectionState) {
case ConnectionState.waiting: return Center(child:CircularProgressIndicator());
default:
if (snapshot.hasError)
return new Text('Error: ${snapshot.error}');
else {
// Color color=new Color(snapshot.data.dominantColor.color);
face=snapshot.data.dominantColor.color;
return new Text('color: ${face.toString()}');
}}})
This is how we can fetch dominant color easily
A:
Here you have the palette_generator library, and even if you search on youtube or some other places you can find some tutorials about which results gives you.
https://pub.dev/packages/palette_generator
A:
import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'dart:math';
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
import 'package:image/image.dart' as img;
import 'package:flutter/services.dart' show rootBundle;
void main() => runApp(const MaterialApp(home: MyApp()));
class MyApp extends StatefulWidget {
const MyApp({Key? key}) : super(key: key);
@override
State<StatefulWidget> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
String imagePath = 'assets/5.jpg';
GlobalKey imageKey = GlobalKey();
GlobalKey paintKey = GlobalKey();
// CHANGE THIS FLAG TO TEST BASIC IMAGE, AND SNAPSHOT.
bool useSnapshot = true;
// based on useSnapshot=true ? paintKey : imageKey ;
// this key is used in this example to keep the code shorter.
late GlobalKey currentKey;
final StreamController<Color> _stateController = StreamController<Color>();
//late img.Image photo ;
img.Image? photo;
@override
void initState() {
currentKey = useSnapshot ? paintKey : imageKey;
super.initState();
}
@override
Widget build(BuildContext context) {
final String title = useSnapshot ? "snapshot" : "basic";
return SafeArea(
child: Scaffold(
appBar: AppBar(title: Text("Color picker $title")),
body: StreamBuilder(
initialData: Colors.green[500],
stream: _stateController.stream,
builder: (buildContext, snapshot) {
Color selectedColor = snapshot.data as Color ?? Colors.green;
return Stack(
children: <Widget>[
RepaintBoundary(
key: paintKey,
child: GestureDetector(
onPanDown: (details) {
searchPixel(details.globalPosition);
},
onPanUpdate: (details) {
searchPixel(details.globalPosition);
},
child: Center(
child: Image.asset(
imagePath,
key: imageKey,
//color: Colors.red,
//colorBlendMode: BlendMode.hue,
//alignment: Alignment.bottomRight,
fit: BoxFit.contain,
//scale: .8,
),
),
),
),
Container(
margin: const EdgeInsets.all(70),
width: 50,
height: 50,
decoration: BoxDecoration(
shape: BoxShape.circle,
color: selectedColor!,
border: Border.all(width: 2.0, color: Colors.white),
boxShadow: [
const BoxShadow(
color: Colors.black12,
blurRadius: 4,
offset: Offset(0, 2))
]),
),
Positioned(
child: Text('${selectedColor}',
style: const TextStyle(
color: Colors.white,
backgroundColor: Colors.black54)),
left: 114,
top: 95,
),
],
);
}),
),
);
}
void searchPixel(Offset globalPosition) async {
if (photo == null) {
await (useSnapshot ? loadSnapshotBytes() : loadImageBundleBytes());
}
_calculatePixel(globalPosition);
}
void _calculatePixel(Offset globalPosition) {
RenderBox box = currentKey.currentContext!.findRenderObject() as RenderBox;
Offset localPosition = box.globalToLocal(globalPosition);
double px = localPosition.dx;
double py = localPosition.dy;
if (!useSnapshot) {
double widgetScale = box.size.width / photo!.width;
print(py);
px = (px / widgetScale);
py = (py / widgetScale);
}
int pixel32 = photo!.getPixelSafe(px.toInt(), py.toInt());
int hex = abgrToArgb(pixel32);
_stateController.add(Color(hex));
}
Future<void> loadImageBundleBytes() async {
ByteData imageBytes = await rootBundle.load(imagePath);
setImageBytes(imageBytes);
}
Future<void> loadSnapshotBytes() async {
RenderRepaintBoundary boxPaint =
paintKey.currentContext!.findRenderObject() as RenderRepaintBoundary;
//RenderObject? boxPaint = paintKey.currentContext.findRenderObject();
ui.Image capture = await boxPaint.toImage();
ByteData? imageBytes =
await capture.toByteData(format: ui.ImageByteFormat.png);
setImageBytes(imageBytes!);
capture.dispose();
}
void setImageBytes(ByteData imageBytes) {
List<int> values = imageBytes.buffer.asUint8List();
photo;
photo = img.decodeImage(values)!;
}
}
// image lib uses uses KML color format, convert #AABBGGRR to regular #AARRGGBB
int abgrToArgb(int argbColor) {
int r = (argbColor >> 16) & 0xFF;
int b = argbColor & 0xFF;
return (argbColor & 0xFF00FF00) | (b << 16) | r;
}
|
How to get dominant color from image in flutter?
|
I want to extract the dominant color from an image so that I can apply it as a blend to other images. How can I achieve that?
In my current code I have set the color manually, but I want it to be generated by the app.
class MyApp extends StatelessWidget {
Color face = new HexColor("a8a8a8");
@override
Widget build(BuildContext context) {
return MaterialApp(
home: Scaffold(
appBar: AppBar(
title: Text("Image from assets"),
),
body: Column (
mainAxisAlignment: MainAxisAlignment.center,
children:<Widget>[
Row(
mainAxisAlignment: MainAxisAlignment.start,
children:<Widget>[
new Image.asset('assets/images/6.jpg',
color: face, colorBlendMode:BlendMode.modulate ,
fit:BoxFit.cover,
height: 50,
width: 50,
),
new Image.asset('assets/images/1.jpg',
color: face, colorBlendMode: BlendMode.modulate,
fit:BoxFit.cover,
height: 200,
width: 200,
),
]),
])),
);
}
}
|
[
"I found solution using palette_generator package..\nFirst import library\nimport 'package:palette_generator/palette_generator.dart';\n\nadd it in pubspec.yaml file too\nThe below function will return palette\nFuture<PaletteGenerator>_updatePaletteGenerator ()async\n{\n paletteGenerator = await PaletteGenerator.fromImageProvider(\n Image.asset(\"assets/images/8.jfif\").image,\n );\nreturn paletteGenerator;\n}\n\nNow we can fetch it in future builder\n FutureBuilder<PaletteGenerator>(\n future: _updatePaletteGenerator(), // async work\n builder: (BuildContext context, AsyncSnapshot<PaletteGenerator> snapshot) {\n switch (snapshot.connectionState) {\n case ConnectionState.waiting: return Center(child:CircularProgressIndicator());\n default:\n if (snapshot.hasError)\n return new Text('Error: ${snapshot.error}');\n else {\n // Color color=new Color(snapshot.data.dominantColor.color);\n face=snapshot.data.dominantColor.color;\n return new Text('color: ${face.toString()}');\n }}})\n\nThis is how we can fetch dominant color easily\n",
"Here you have the palette_generator library, and even if you search on youtube or some other places you can find some tutorials about which results gives you.\nhttps://pub.dev/packages/palette_generator\n",
"import 'dart:async';\nimport 'dart:typed_data';\nimport 'dart:ui' as ui;\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:flutter/rendering.dart';\nimport 'package:image/image.dart' as img;\nimport 'package:flutter/services.dart' show rootBundle;\n\nvoid main() => runApp(const MaterialApp(home: MyApp()));\n\nclass MyApp extends StatefulWidget {\n const MyApp({Key? key}) : super(key: key);\n\n @override\n State<StatefulWidget> createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n String imagePath = 'assets/5.jpg';\n GlobalKey imageKey = GlobalKey();\n GlobalKey paintKey = GlobalKey();\n\n // CHANGE THIS FLAG TO TEST BASIC IMAGE, AND SNAPSHOT.\n bool useSnapshot = true;\n\n // based on useSnapshot=true ? paintKey : imageKey ;\n // this key is used in this example to keep the code shorter.\n late GlobalKey currentKey;\n\n final StreamController<Color> _stateController = StreamController<Color>();\n //late img.Image photo ;\n img.Image? photo;\n\n @override\n void initState() {\n currentKey = useSnapshot ? paintKey : imageKey;\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n final String title = useSnapshot ? \"snapshot\" : \"basic\";\n return SafeArea(\n child: Scaffold(\n appBar: AppBar(title: Text(\"Color picker $title\")),\n body: StreamBuilder(\n initialData: Colors.green[500],\n stream: _stateController.stream,\n builder: (buildContext, snapshot) {\n Color selectedColor = snapshot.data as Color ?? Colors.green;\n return Stack(\n children: <Widget>[\n RepaintBoundary(\n key: paintKey,\n child: GestureDetector(\n onPanDown: (details) {\n searchPixel(details.globalPosition);\n },\n onPanUpdate: (details) {\n searchPixel(details.globalPosition);\n },\n child: Center(\n child: Image.asset(\n imagePath,\n key: imageKey,\n //color: Colors.red,\n //colorBlendMode: BlendMode.hue,\n //alignment: Alignment.bottomRight,\n fit: BoxFit.contain,\n //scale: .8,\n ),\n ),\n ),\n ),\n Container(\n margin: const EdgeInsets.all(70),\n width: 50,\n height: 50,\n decoration: BoxDecoration(\n shape: BoxShape.circle,\n color: selectedColor!,\n border: Border.all(width: 2.0, color: Colors.white),\n boxShadow: [\n const BoxShadow(\n color: Colors.black12,\n blurRadius: 4,\n offset: Offset(0, 2))\n ]),\n ),\n Positioned(\n child: Text('${selectedColor}',\n style: const TextStyle(\n color: Colors.white,\n backgroundColor: Colors.black54)),\n left: 114,\n top: 95,\n ),\n ],\n );\n }),\n ),\n );\n }\n\n void searchPixel(Offset globalPosition) async {\n if (photo == null) {\n await (useSnapshot ? loadSnapshotBytes() : loadImageBundleBytes());\n }\n _calculatePixel(globalPosition);\n }\n\n void _calculatePixel(Offset globalPosition) {\n RenderBox box = currentKey.currentContext!.findRenderObject() as RenderBox;\n Offset localPosition = box.globalToLocal(globalPosition);\n\n double px = localPosition.dx;\n double py = localPosition.dy;\n\n if (!useSnapshot) {\n double widgetScale = box.size.width / photo!.width;\n print(py);\n px = (px / widgetScale);\n py = (py / widgetScale);\n }\n\n int pixel32 = photo!.getPixelSafe(px.toInt(), py.toInt());\n int hex = abgrToArgb(pixel32);\n\n _stateController.add(Color(hex));\n }\n\n Future<void> loadImageBundleBytes() async {\n ByteData imageBytes = await rootBundle.load(imagePath);\n setImageBytes(imageBytes);\n }\n\n Future<void> loadSnapshotBytes() async {\n RenderRepaintBoundary boxPaint =\n paintKey.currentContext!.findRenderObject() as RenderRepaintBoundary;\n //RenderObject? 
boxPaint = paintKey.currentContext.findRenderObject();\n ui.Image capture = await boxPaint.toImage();\n\n ByteData? imageBytes =\n await capture.toByteData(format: ui.ImageByteFormat.png);\n setImageBytes(imageBytes!);\n capture.dispose();\n }\n\n void setImageBytes(ByteData imageBytes) {\n List<int> values = imageBytes.buffer.asUint8List();\n photo;\n photo = img.decodeImage(values)!;\n }\n}\n\n// image lib uses uses KML color format, convert #AABBGGRR to regular #AARRGGBB\nint abgrToArgb(int argbColor) {\n int r = (argbColor >> 16) & 0xFF;\n int b = argbColor & 0xFF;\n return (argbColor & 0xFF00FF00) | (b << 16) | r;\n}\n\n"
] |
[
14,
2,
0
] |
[] |
[] |
[
"color_palette",
"colors",
"dart",
"flutter",
"image"
] |
stackoverflow_0062718295_color_palette_colors_dart_flutter_image.txt
|
Q:
Woocommerce Send only email admin if specific product is on the cart
I want the notification email to be sent only to the admin, and not to the customer, when a specific item (identified by its ID) is in the cart.
Is there a snippet that allows me to do this?
Thank you
A:
Use the following code in your theme's functions.php file. Don't forget to replace the email with your own.
add_action( 'woocommerce_add_to_cart',
'cwpai_woo_send_email_to_admin_on_add_to_cart', 10, 6 );
function cwpai_woo_send_email_to_admin_on_add_to_cart( $cart_item_key,
$product_id, $quantity, $variation_id, $variation, $cart_item_data ) {
$product = wc_get_product( $product_id );
$product_name = $product->get_name();
$user = wp_get_current_user();
$user_name = $user->user_login;
$time = date( 'Y-m-d H:i:s' );
$message = sprintf( '%s was added to the cart by %s at %s',
$product_name,
$user_name, $time );
wp_mail( '[email protected]', 'Product added to cart', $message );
}
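The snippet above emails the admin on every add-to-cart. A minimal, untested sketch of limiting it to one specific product (the ID 123 and the function name are placeholders, not taken from the question):
add_action( 'woocommerce_add_to_cart', 'notify_admin_for_specific_product', 10, 6 );
function notify_admin_for_specific_product( $cart_item_key, $product_id, $quantity, $variation_id, $variation, $cart_item_data ) {
    $target_product_id = 123; // placeholder: replace with the product ID you care about
    if ( (int) $product_id !== $target_product_id ) {
        return; // ignore every other product
    }
    $product = wc_get_product( $product_id );
    $message = sprintf( '"%s" (ID %d) was added to a cart.', $product->get_name(), $product_id );
    // get_option( 'admin_email' ) sends the mail to the site admin only
    wp_mail( get_option( 'admin_email' ), 'Specific product added to cart', $message );
}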
|
Woocommerce Send only email admin if specific product is on the cart
|
I want the notification email to be sent only to the admin, and not to the customer, when a specific item (identified by its ID) is in the cart.
Is there a snippet that allows me to do this?
Thank you
|
[
"Use the following code in your theme's functions.php file. Don't forget to replace the email with your own.\nadd_action( 'woocommerce_add_to_cart', \n'cwpai_woo_send_email_to_admin_on_add_to_cart', 10, 6 );\nfunction cwpai_woo_send_email_to_admin_on_add_to_cart( $cart_item_key, \n $product_id, $quantity, $variation_id, $variation, $cart_item_data ) {\n $product = wc_get_product( $product_id );\n $product_name = $product->get_name();\n $user = wp_get_current_user();\n $user_name = $user->user_login;\n $time = date( 'Y-m-d H:i:s' );\n $message = sprintf( '%s was added to the cart by %s at %s', \n $product_name, \n $user_name, $time );\n wp_mail( '[email protected]', 'Product added to cart', $message );\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"function",
"hook_woocommerce",
"woocommerce",
"wordpress"
] |
stackoverflow_0074653006_function_hook_woocommerce_woocommerce_wordpress.txt
|
Q:
How to use Validation for Custom component within EditForm
I have used validation in EditForm (for ComboBox/TextBox etc.). It works fine as per my requirement (when I click the submit button). When I use a custom component within EditForm, the validation message is still shown even after a value is entered. Please advise how to use the validation message for a custom component.
<div class="col-sm-12 col-md-12 col-lg-12">
<MaterialSearchVertical @ref="materialSearchVertical" EditEnabled="@EditEnabled" MaterialTypeID="materialRequestDetailDisplaySetup.MaterialTypeID"
PurchaseTypeID="materialRequestSetup.PurchaseTypeID"
MaterialSKUID="materialRequestDetailDisplaySetup.MaterialSKUID"
MaterialTypeLoad=" AND A.ReferenceDetailsCode in ('SUMAT','MNT','HLTS') "
EventCallBackMaterialSearch="MaterialSearchEventCallBack"></MaterialSearchVertical>
<ValidationMessage For="@(() => materialRequestDetailDisplaySetup.MaterialSKUID)" />
</div>
Note:-
A:
Probably your custom component is not properly bound to the property you're validating, so when you change the value, the new value is not written back to the model property (MaterialSKUID);
or the custom component is not calling EditContext.NotifyFieldChanged.
Usually you should have @bind-Value="PropertyName", which ensures the component is bound correctly.
https://learn.microsoft.com/en-us/aspnet/core/blazor/components/data-binding?view=aspnetcore-7.0
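As a rough sketch only (the original MaterialSearchVertical component is not shown, so the parameter names below are assumptions): a custom input that should participate in EditForm validation usually exposes Value/ValueChanged/ValueExpression and notifies the cascaded EditContext when its value changes.
// Inside the custom component's @code block
// (needs Microsoft.AspNetCore.Components.Forms and System.Linq.Expressions)
[CascadingParameter] private EditContext? EditContext { get; set; }
[Parameter] public int Value { get; set; }
[Parameter] public EventCallback<int> ValueChanged { get; set; }
[Parameter] public Expression<Func<int>>? ValueExpression { get; set; }

private async Task SetValueAsync(int newValue)
{
    Value = newValue;
    await ValueChanged.InvokeAsync(newValue); // pushes the value back into the bound model property
    if (EditContext is not null && ValueExpression is not null)
        EditContext.NotifyFieldChanged(FieldIdentifier.Create(ValueExpression)); // re-runs validation for this field
}
With that in place, the parent could bind it like <MaterialSearchVertical @bind-Value="materialRequestDetailDisplaySetup.MaterialSKUID" ... /> and the ValidationMessage should clear once a value is selected.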
|
How to use Validation for Custom component within EditForm
|
I have used validation in EditForm (for ComboBox/TextBox etc.). It works fine as per my requirement (when I click the submit button). When I use a custom component within EditForm, the validation message is still shown even after a value is entered. Please advise how to use the validation message for a custom component.
<div class="col-sm-12 col-md-12 col-lg-12">
<MaterialSearchVertical @ref="materialSearchVertical" EditEnabled="@EditEnabled" MaterialTypeID="materialRequestDetailDisplaySetup.MaterialTypeID"
PurchaseTypeID="materialRequestSetup.PurchaseTypeID"
MaterialSKUID="materialRequestDetailDisplaySetup.MaterialSKUID"
MaterialTypeLoad=" AND A.ReferenceDetailsCode in ('SUMAT','MNT','HLTS') "
EventCallBackMaterialSearch="MaterialSearchEventCallBack"></MaterialSearchVertical>
<ValidationMessage For="@(() => materialRequestDetailDisplaySetup.MaterialSKUID)" />
</div>
Note:-
|
[
"probably your custom component is not properly bound to the Property you're validating, so when you change the value, that new value is not updated in the model Property (MaterialSKUID);\nor the custom component is not calling: EditContext.NotifyFieldChanged\nusually you should have @bind-Value=\"PropertyName\" which should ensure the component is bound correctly.\nhttps://learn.microsoft.com/en-us/aspnet/core/blazor/components/data-binding?view=aspnetcore-7.0\n"
] |
[
0
] |
[] |
[] |
[
"blazor"
] |
stackoverflow_0074664810_blazor.txt
|
Q:
Cannot install kubernetes helm chart Error: cannot re-use a name that is still in use
I cannot install the Helm chart, but when I use the raw file generated by Helm, I am able to install it via kubectl apply.
The following error is displayed when I use helm install myChart . --debug
Error: cannot re-use a name that is still in use
helm.go:88: [debug] cannot re-use a name that is still in use
helm.sh/helm/v3/pkg/action.(*Install).availableName
helm.sh/helm/v3/pkg/action/install.go:442
helm.sh/helm/v3/pkg/action.(*Install).Run
helm.sh/helm/v3/pkg/action/install.go:185
main.runInstall
helm.sh/helm/v3/cmd/helm/install.go:242
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:960
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:897
main.main
helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
runtime/proc.go:225
runtime.goexit
runtime/asm_amd64.s:1371
Installing the raw file generated by Helm with the following command works great, but when I run helm install myChart . it gives the above error:
helm install myChart . --dry-run > myChart.yaml
kubectl apply -f myChart.yaml
A:
Use upgrade instead of install:
helm upgrade -i myChart .
The -i flag installs the release if it doesn't exist.
A:
Another option could be:
List the available helm charts: helm list. E.g.:
Delete the required helm chart helm delete phoenix-chart. E.g.:
A:
works for me - helm uninstall /<chart_name>
|
Cannot install kubernetes helm chart Error: cannot re-use a name that is still in use
|
I cannot install the Helm chart, but when I use the raw file generated by Helm, I am able to install it via kubectl apply.
The following error is displayed when I use helm install myChart . --debug
Error: cannot re-use a name that is still in use
helm.go:88: [debug] cannot re-use a name that is still in use
helm.sh/helm/v3/pkg/action.(*Install).availableName
helm.sh/helm/v3/pkg/action/install.go:442
helm.sh/helm/v3/pkg/action.(*Install).Run
helm.sh/helm/v3/pkg/action/install.go:185
main.runInstall
helm.sh/helm/v3/cmd/helm/install.go:242
main.newInstallCmd.func2
helm.sh/helm/v3/cmd/helm/install.go:120
github.com/spf13/cobra.(*Command).execute
github.com/spf13/[email protected]/command.go:852
github.com/spf13/cobra.(*Command).ExecuteC
github.com/spf13/[email protected]/command.go:960
github.com/spf13/cobra.(*Command).Execute
github.com/spf13/[email protected]/command.go:897
main.main
helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
runtime/proc.go:225
runtime.goexit
runtime/asm_amd64.s:1371
Installing the raw file generated by Helm with the following command works great, but when I run helm install myChart . it gives the above error:
helm install myChart . --dry-run > myChart.yaml
kubectl apply -f myChart.yaml
|
[
"Use upgrade instead install:\nhelm upgrade -i myChart .\n\nThe -i flag install the release if it doesn't exist.\n",
"Another option could be:\n\nList the available helm charts: helm list. E.g.:\n\nDelete the required helm chart helm delete phoenix-chart. E.g.:\n\n\n",
"works for me - helm uninstall /<chart_name>\n"
] |
[
23,
4,
0
] |
[] |
[] |
[
"devops",
"helm3",
"kubectl",
"kubernetes",
"kubernetes_helm"
] |
stackoverflow_0070464815_devops_helm3_kubectl_kubernetes_kubernetes_helm.txt
|
Q:
Using Grid to layer items over each other
I am using absolute positioning to layer div one, two, three (and cover main). Would it be possible to achieve the same with CSS grid?
div.main is always displayed
div one, two, three will be shown when needed
Update: Toggle button added for better visualisation
const div = document.querySelector('div.content');
document.querySelector('button').addEventListener('click', () => {
const on = document.querySelector('.on');
on?.classList.remove('on');
(on?.nextElementSibling || div.firstElementChild).classList.add('on');
});
* {
box-sizing: border-box;
}
button {
padding: 0.5em;
margin-bottom: 1em;
}
.content {
width: 15em;
height: 8em;
position: relative;
margin: 0;
padding: 0;
border: 1px solid #aaa;
}
.main {
height: 100%;
background: #eee;
padding: 0.5em;
}
.one,
.two,
.three {
display: none;
margin: 0;
padding: 2em;
width: 100%;
height: 100%;
position: absolute;
top: 0;
left: 0;
z-index: 3;
}
.one.on,
.two.on,
.three.on {
display: block;
}
.one {
background: #fef8;
}
.two {
background: #fec8;
}
.three {
background: #cdc8;
}
<button>Toggle</button>
<div class="content">
<div class="main on">This is main</div>
<div class="one">This is one</div>
<div class="two">This is two</div>
<div class="three">This is three</div>
</div>
A:
Assuming the divs just need to stack on top of each other, a simple implementation would be to make content a grid with a single cell and place all the divs in grid-area: 1/1/1/1.
Example: (with a simple display toggle)
const btn = document.querySelector("button");
const divs = document.querySelectorAll("div > div");
let i = 1;
btn.addEventListener("click", () => {
if (i === 4) {
divs.forEach((div) => div.classList.remove("on"));
i = 1;
return;
}
divs[i - 1].classList.toggle("on");
divs[i].classList.toggle("on");
i++;
});
.content {
width: 15em;
height: 20em;
margin: 0;
padding: 0;
border: 1px solid blue;
display: grid;
}
.main {
border: 1px solid red;
grid-area: 1/1/1/1;
}
.one,
.two,
.three {
display: none;
margin: 0;
padding: 1em;
z-index: 3;
border: 1px solid green;
grid-area: 1/1/1/1;
}
.one.on,
.two.on,
.three.on {
display: block;
}
button {
padding: 6px;
margin-bottom: 1em;
}
.one {
background-color: rgba(100, 149, 237, 0.25);
}
.two {
background-color: rgba(34, 139, 34, 0.25);
}
.three {
background-color: rgba(255, 140, 0, 0.25);
}
<button>Toggle</button>
<div class="content">
<div class="main">This is main</div>
<div class="one">This is one</div>
<div class="two">This is two.</div>
<div class="three">This is three.</div>
</div>
|
Using Grid to layer items over each other
|
I am using absolute positioning to layer div one, two, three (and cover main). Would it be possible to achieve the same with CSS grid?
div.main is always displayed
div one, two, three will be shown when needed
Update: Toggle button added for better visualisation
const div = document.querySelector('div.content');
document.querySelector('button').addEventListener('click', () => {
const on = document.querySelector('.on');
on?.classList.remove('on');
(on?.nextElementSibling || div.firstElementChild).classList.add('on');
});
* {
box-sizing: border-box;
}
button {
padding: 0.5em;
margin-bottom: 1em;
}
.content {
width: 15em;
height: 8em;
position: relative;
margin: 0;
padding: 0;
border: 1px solid #aaa;
}
.main {
height: 100%;
background: #eee;
padding: 0.5em;
}
.one,
.two,
.three {
display: none;
margin: 0;
padding: 2em;
width: 100%;
height: 100%;
position: absolute;
top: 0;
left: 0;
z-index: 3;
}
.one.on,
.two.on,
.three.on {
display: block;
}
.one {
background: #fef8;
}
.two {
background: #fec8;
}
.three {
background: #cdc8;
}
<button>Toggle</button>
<div class="content">
<div class="main on">This is main</div>
<div class="one">This is one</div>
<div class="two">This is two</div>
<div class="three">This is three</div>
</div>
|
[
"Assuming that the div just need to stack on each other, perhaps a simple implement would be setting content a grid of a single cell, and have all the div placed in grid-area: 1/1/1/1.\nExample: (with a simple display toggle)\n\n\nconst btn = document.querySelector(\"button\");\nconst divs = document.querySelectorAll(\"div > div\");\n\nlet i = 1;\n\nbtn.addEventListener(\"click\", () => {\n if (i === 4) {\n divs.forEach((div) => div.classList.remove(\"on\"));\n i = 1;\n return;\n }\n divs[i - 1].classList.toggle(\"on\");\n divs[i].classList.toggle(\"on\");\n i++;\n});\n.content {\n width: 15em;\n height: 20em;\n margin: 0;\n padding: 0;\n border: 1px solid blue;\n display: grid;\n}\n\n.main {\n border: 1px solid red;\n grid-area: 1/1/1/1;\n}\n\n.one,\n.two,\n.three {\n display: none;\n margin: 0;\n padding: 1em;\n z-index: 3;\n border: 1px solid green;\n grid-area: 1/1/1/1;\n}\n\n.one.on,\n.two.on,\n.three.on {\n display: block;\n}\n\nbutton {\n padding: 6px;\n margin-bottom: 1em;\n}\n\n.one {\n background-color: rgba(100, 149, 237, 0.25);\n}\n\n.two {\nbackground-color: rgba(34, 139, 34, 0.25);\n}\n\n.three {\nbackground-color: rgba(255, 140, 0, 0.25);\n}\n<button>Toggle</button>\n<div class=\"content\">\n <div class=\"main\">This is main</div>\n <div class=\"one\">This is one</div>\n <div class=\"two\">This is two.</div>\n <div class=\"three\">This is three.</div>\n</div>\n\n\n\n"
] |
[
1
] |
[] |
[] |
[
"css"
] |
stackoverflow_0074669521_css.txt
|
Q:
Proxmox Failed all of a sudden mentioning that CPU was blocked for more than 25s
I'm a beginner in Proxmox.
Actually, everything was working perfectly fine until I installed 2 additional hard disks in my server.
All of a sudden, I had an error mentioning that the CPU was blocked for more than 25 seconds... I restarted the server and now it's telling me that I'm in Emergency Mode...
I have 2 options:
Give root password for maintenance
press Control-D to continue
Control-D:
When I press Control-D to continue, it says: "Reloading system manager configuration, starting default target." Then it goes back to asking for the maintenance password or Control-D... again and again...
Root Password
Is it the same password as the one I'm using to connect to the GUI? If that's the case, since I have a very long password with special characters, I need to see what I'm typing, which is not possible by default... How can I check that?
What if I remove the 2 new additional hard disks? Should it start normally, or will I still be in emergency mode?
A:
It's the same password and no, you cannot see what you are typing.
Remove drives, it should boot normally. You need to check and edit /etc/fstab so it's not using new drives as root. Maybe these two disks are not empty?
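A hedged example of what to look for in /etc/fstab (the UUIDs and mount points are placeholders, not from the actual server): commenting out the new entries, or adding the nofail option, lets the system finish booting even when a listed disk is missing.
# /etc/fstab (placeholder values)
# UUID=1111-2222  /mnt/new-disk1  ext4  defaults,nofail  0  2
# UUID=3333-4444  /mnt/new-disk2  ext4  defaults,nofail  0  2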
|
Proxmox Failed all of a sudden mentioning that CPU was blocked for more than 25s
|
I'm a beginner in Proxmox.
Actually, everything was working perfectly fine until I installed 2 additional hard disks in my server.
All of a sudden, I had an error mentioning that the CPU was blocked for more than 25 seconds... I restarted the server and now it's telling me that I'm in Emergency Mode...
I have 2 options:
Give root password for maintenance
press Control-D to continue
Control-D:
When I press Control-D to continue, it says: "Reloading system manager configuration, starting default target." Then it goes back to asking for the maintenance password or Control-D... again and again...
Root Password
Is it the same password as the one I'm using to connect to the GUI? If that's the case, since I have a very long password with special characters, I need to see what I'm typing, which is not possible by default... How can I check that?
What if I remove the 2 new additional hard disks? Should it start normally, or will I still be in emergency mode?
|
[
"It's the same password and no, you cannot see what you are typing.\nRemove drives, it should boot normally. You need to check and edit /etc/fstab so it's not using new drives as root. Maybe these two disks are not empty?\n"
] |
[
0
] |
[] |
[] |
[
"proxmox"
] |
stackoverflow_0072295769_proxmox.txt
|
Q:
FLUTTER Sign-up/Sign-in with Google popup not appearing
I want to let the user log in to the app with Google, but the pop-up is not appearing on the screen. What do I do to make the pop-up appear? When I press the "Continue with Google" button, it does nothing.
When I debug my code, the debugger straight up goes to the last print statement -
print("GOOGLE SIGN IN: ${googleSignIn.clientId}");
here is my code-
import 'package:bloc/bloc.dart';
import 'package:equatable/equatable.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:google_sign_in/google_sign_in.dart';
part 'google_sign_in_event.dart';
part 'google_sign_in_state.dart';
class GoogleSignInBloc extends Bloc<GoogleSignInEvent, GoogleSignInState> {
GoogleSignInBloc() : super(GoogleSignInInitial()) {
on<GoogleLogInEvent>((event, emit) {
GoogleSignIn googleSignIn = GoogleSignIn();
GoogleSignInAccount? _user;
// GoogleSignInAccount get user => _user!;
//final googlelogin = MeditationGoogleSignIn().googleLogIn();
Future googleLogIn() async {
try {
final googleUser = await googleSignIn.signIn();
if (googleUser == null) {
print("NO GOOGLE USER");
return null;
}
_user = googleUser;
final googleAuth = await googleUser.authentication;
final credential = GoogleAuthProvider.credential(
accessToken: googleAuth.accessToken,
idToken: googleAuth.idToken,
);
await FirebaseAuth.instance.signInWithCredential(credential);
} catch (e) {
print("THERE IS AN ERROR IN LOGIN: ${e.toString()}");
}
}
print("GOOGLE SIGN IN: ${googleSignIn.clientId}");
});
on<GoogleLogOutEvent>((event, emit) {
Future googleLogOut() async {
final googleSignIn = GoogleSignIn();
await googleSignIn.disconnect();
FirebaseAuth.instance.signOut();
}
});
}
}
A:
Try adding scopes: GoogleSignIn(scopes: ['email', 'profile'])
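For context, a minimal sketch of the change (only the GoogleSignIn construction differs from the code in the question):
final GoogleSignIn googleSignIn = GoogleSignIn(scopes: ['email', 'profile']);

Future<void> signInWithGoogle() async {
  final googleUser = await googleSignIn.signIn(); // this is the account-picker pop-up
  if (googleUser == null) return;                 // user cancelled the dialog
  final googleAuth = await googleUser.authentication;
  final credential = GoogleAuthProvider.credential(
    accessToken: googleAuth.accessToken,
    idToken: googleAuth.idToken,
  );
  await FirebaseAuth.instance.signInWithCredential(credential);
}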
|
FLUTTER Sign-up/Sign-in with Google popup not appearing
|
I want to let the user log in to the app with Google, but the pop-up is not appearing on the screen. What do I do to make the pop-up appear? When I press the "Continue with Google" button, it does nothing.
When I debug my code, the debugger straight up goes to the last print statement -
print("GOOGLE SIGN IN: ${googleSignIn.clientId}");
here is my code-
import 'package:bloc/bloc.dart';
import 'package:equatable/equatable.dart';
import 'package:firebase_auth/firebase_auth.dart';
import 'package:google_sign_in/google_sign_in.dart';
part 'google_sign_in_event.dart';
part 'google_sign_in_state.dart';
class GoogleSignInBloc extends Bloc<GoogleSignInEvent, GoogleSignInState> {
GoogleSignInBloc() : super(GoogleSignInInitial()) {
on<GoogleLogInEvent>((event, emit) {
GoogleSignIn googleSignIn = GoogleSignIn();
GoogleSignInAccount? _user;
// GoogleSignInAccount get user => _user!;
//final googlelogin = MeditationGoogleSignIn().googleLogIn();
Future googleLogIn() async {
try {
final googleUser = await googleSignIn.signIn();
if (googleUser == null) {
print("NO GOOGLE USER");
return null;
}
_user = googleUser;
final googleAuth = await googleUser.authentication;
final credential = GoogleAuthProvider.credential(
accessToken: googleAuth.accessToken,
idToken: googleAuth.idToken,
);
await FirebaseAuth.instance.signInWithCredential(credential);
} catch (e) {
print("THERE IS AN ERROR IN LOGIN: ${e.toString()}");
}
}
print("GOOGLE SIGN IN: ${googleSignIn.clientId}");
});
on<GoogleLogOutEvent>((event, emit) {
Future googleLogOut() async {
final googleSignIn = GoogleSignIn();
await googleSignIn.disconnect();
FirebaseAuth.instance.signOut();
}
});
}
}
|
[
"Try add scopes: GoogleSignIn(scopes: ['email', 'profile'])\n"
] |
[
0
] |
[] |
[] |
[
"authentication",
"dart",
"flutter",
"google_signin",
"popup"
] |
stackoverflow_0074445718_authentication_dart_flutter_google_signin_popup.txt
|
Q:
Don't use asdf if there is no .tool-versions file?
I installed node.js:
brew install node
I also installed asdf with node.js plugin. Is there any way to use node.js installed via Homebrew globally if there is no .tool-versions file?
Right now I am getting an error if this file does not exist:
No version is set for command node
Consider adding one of the following versions in your config file at
nodejs 19.0.0
nodejs 19.0.1
nodejs 19.1.0
nodejs 19.2.0
I don't want to use asdf globally, just for a few projects.
A:
Yes, you can use the brew link command to make the node.js binary installed via Homebrew available globally on your system. This will allow you to run the node and npm commands from any location on your system, without having to specify the path to the binary.
Here is an example of how you might use the brew link command to make node.js available globally on your system:
Open a terminal window and navigate to the directory where node.js was installed by Homebrew. This is typically /usr/local/Cellar/node/<version>, where <version> is the version of node.js you installed.
Run the brew link node command to create a symlink to the node.js binary in the global /usr/local/bin directory. This will make the node and npm commands available globally on your system.
Verify that node.js was linked successfully by running the node -v command. This should print the version of node.js that you installed via Homebrew.
Keep in mind that using the brew link command can cause conflicts with asdf and the node.js plugin if you have both installed on your system. If you are using asdf to manage your node.js version, it is recommended that you use the asdf global command to specify the version of node.js you want to use globally, rather than using the brew link command. This will ensure that asdf and the node.js plugin take precedence over the globally-installed version of node.js.
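If the goal is to keep the Homebrew node as the default and only use asdf in a few projects, a hedged sketch of the asdf side (the project path is a placeholder):
# make asdf fall back to the system (Homebrew) node when no .tool-versions applies
asdf global nodejs system

# pin a version only inside a specific project (writes a .tool-versions file there)
cd ~/projects/my-app
asdf local nodejs 19.2.0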
|
Don't use asdf if there is no .tool-versions file?
|
I installed node.js:
brew install node
I also installed asdf with node.js plugin. Is there any way to use node.js installed via Homebrew globally if there is no .tool-versions file?
Right now I am getting an error if this file does not exist:
No version is set for command node
Consider adding one of the following versions in your config file at
nodejs 19.0.0
nodejs 19.0.1
nodejs 19.1.0
nodejs 19.2.0
I don't want to use asdf globally, just for a few projects.
|
[
"Yes, you can use the brew link command to make the node.js binary installed via Homebrew available globally on your system. This will allow you to run the node and npm commands from any location on your system, without having to specify the path to the binary.\nHere is an example of how you might use the brew link command to make node.js available globally on your system:\n\nOpen a terminal window and navigate to the directory where node.js was installed by Homebrew. This is typically /usr/local/Cellar/node/<version>, where <version> is the version of node.js you installed.\n\nRun the brew link node command to create a symlink to the node.js binary in the global /usr/local/bin directory. This will make the node and npm commands available globally on your system.\n\nVerify that node.js was linked successfully by running the node -v command. This should print the version of node.js that you installed via Homebrew.\n\n\nKeep in mind that using the brew link command can cause conflicts with asdf and the node.js plugin if you have both installed on your system. If you are using asdf to manage your node.js version, it is recommended that you use the asdf global command to specify the version of node.js you want to use globally, rather than using the brew link command. This will ensure that asdf and the node.js plugin take precedence over the globally-installed version of node.js.\n"
] |
[
1
] |
[] |
[] |
[
"asdf",
"asdf_vm"
] |
stackoverflow_0074669564_asdf_asdf_vm.txt
|
Q:
How to have google sheets autocomplete some aspects of a function, but keep others the same
I need to be able to replicate this function on a large scale:
=INDEX(List1!A1:G21, MATCH(F2, List1!A1:A21), MATCH(E2, List1!B1:F1)+1)
but I need these aspects to stay the same:
A1:G21
A1:A21
B1:F1
and these to change according to their position in the sheet:
F2:E2
I also need this function:
=INDEX(List1!A1:G21, MATCH(F2, List1!A1:A21), 7)
and need this value to change according to its position:
F2
but need this to stay the same:
A1:G21
A1:A21
7
I tried using Google's autocomplete, but it obviously shifts all the values according to their position, and since I am very new to Sheets, I don't have a clue what I could do. I tried writing a JS function, but my experience in JS is limited and I couldn't keep up.
A:
You can put a $ before the part you want to stay fixed:
$A$1 would never move
$A1 would always stay in column A but change its row number
A$1 would keep the row fixed but let the column change
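Applied to the formulas from the question, a sketch of the fill-down version (assuming the lookup ranges should stay fixed while F2 and E2 follow the row):
=INDEX(List1!$A$1:$G$21, MATCH(F2, List1!$A$1:$A$21), MATCH(E2, List1!$B$1:$F$1)+1)
=INDEX(List1!$A$1:$G$21, MATCH(F2, List1!$A$1:$A$21), 7)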
|
How to have google sheets autocomplete some aspects of a function, but keep others the same
|
I need to be able to replicate this function on a large scale:
=INDEX(List1!A1:G21, MATCH(F2, List1!A1:A21), MATCH(E2, List1!B1:F1)+1)
but I need these aspects to stay the same:
A1:G21
A1:A21
B1:F1
and these to change according to their position in the sheet:
F2:E2
I also need this function:
=INDEX(List1!A1:G21, MATCH(F2, List1!A1:A21), 7)
and need this value to change according to its position:
F2
but need this to stay the same:
A1:G21
A1:A21
7
I tried using Google's autocomplete, but it obviously shifts all the values according to their position, and since I am very new to Sheets, I don't have a clue what I could do. I tried writing a JS function, but my experience in JS is limited and I couldn't keep up.
|
[
"You can't put an $ before the aspect you want fixated:\n$A$1 would never move\n$A1 would be always in Column 1 and change its number\nA$1 would only move the column but not the row\n"
] |
[
0
] |
[] |
[] |
[
"google_apps_script",
"google_sheets"
] |
stackoverflow_0074669519_google_apps_script_google_sheets.txt
|
Q:
Django - How do you create several model instances at the same time because they are connected
I want to create a user profile, and the user profile has a location (address). I need to create the profile first and the location second, and then match the profile and the location using a third model called ProfileLocation. I want to do this with one API call, because all the data comes from one form and the location depends on the profile.
There is a Location model that has OneToOne fields for Country, State and City. The country, state and city tables will be populated ahead of time. There is an extra model called ProfileLocation that links the profile to the location. So I have to create all of them at once and am struggling with the best way to do it. Also, what type of DRF view do I use for the endpoint? I need to understand the logic, please, as I cannot find an example on the net.
Do I need to create a custom function-based view and run the data through the existing serializers? In that case, how can I bundle the incoming data for each specific serializer?
This is all very new to me.
Locations model.py:
from django.db import models
from django_extensions.db.fields import AutoSlugField
class Country(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(populate_from=["name"])
country_code = models.CharField(max_length=5)
dial_code = models.CharField(max_length=5)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = "country"
verbose_name_plural = "countries"
db_table = "countries"
ordering = ["name"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
class State(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(populate_from=["name"])
country = models.OneToOneField(Country, on_delete=models.CASCADE, default=None)
created_at = models.DateTimeField("date post was created", auto_now_add=True)
updated_at = models.DateTimeField("date post was updated", auto_now=True)
class Meta:
verbose_name = "state"
verbose_name_plural = "states"
db_table = "states"
unique_together = ["country", "name"]
ordering = ["name"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
class City(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(populate_from=["name"])
country = models.OneToOneField(Country, on_delete=models.CASCADE, default=None)
state = models.OneToOneField(State, on_delete=models.CASCADE, default=None)
created_at = models.DateTimeField("date post was created", auto_now_add=True)
updated_at = models.DateTimeField("date post was updated", auto_now=True)
class Meta:
verbose_name = "city"
verbose_name_plural = "cities"
db_table = "cities"
unique_together = ["country", "state", "name"]
ordering = ["name"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
class Location(models.Model):
name = models.CharField(max_length=50, default=None)
slug = AutoSlugField(populate_from=["name"])
street = models.CharField(max_length=100)
additional = models.CharField(max_length=100)
country = models.OneToOneField(State, on_delete=models.CASCADE, related_name="countries")
state = models.OneToOneField(State, on_delete=models.CASCADE, related_name="states")
city = models.OneToOneField(City, on_delete=models.CASCADE, related_name="cities")
zip = models.CharField(max_length=30)
phone = models.CharField(max_length=15)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "location"
verbose_name_plural = "locations"
db_table = "locations"
ordering = ["zip"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
Here are the Location serializers, which are ordinary ModelSerializers:
from rest_framework import serializers
from .models import *
from profiles.models import ProfileLocation
class CountrySerializer(serializers.ModelSerializer):
class Meta:
model = Country
fields = [
"id",
"name",
"country_code",
"dial_code",
"created_at",
"updated_at",
]
class StateSerializer(serializers.ModelSerializer):
class Meta:
model = State
fields = [
"id",
"name",
"country",
"created_at",
"updated_at",
]
class CitySerializer(serializers.ModelSerializer):
class Meta:
model = City
fields = [
"id",
"name",
"country",
"state",
"created_at",
"updated_at",
]
class LocationSerializer(serializers.ModelSerializer):
class Meta:
model = Location
fields = [
"name",
"street",
"additional",
"zip",
"city",
"phone",
"created_at",
"updated_at",
]
class ProfileLocationSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = ProfileLocation
fields =[
"location"
"profile"
]
and the profile serializer:
from rest_framework import serializers
from .models import *
from locations.serializers import ProfileLocationSerializer
class ProfileSerializer(serializers.ModelSerializer):
location = ProfileLocationSerializer()
class Meta:
model = Profile
fields = [
"background",
"photo",
"first_name",
"middle_name",
"last_name",
"birthdate",
"gender",
"bio",
"languages",
"is_verified",
"verification",
"location",
"website",
"user",
"created_at",
"updated_at",
]
def create(self, validated_data):
new_profile = Profile.objects.create(**validated_data)
return new_profile
This view creates the profile without any problems but excludes the location obviously.
class ProfileViewSet(viewsets.ModelViewSet):
permission_classes = [permissions.IsAuthenticated]
queryset = Profile.objects.all()
serializer_class = ProfileSerializer
Thank you in advance
A:
Your procedure is fine; you just need to override the create method of the serializer.
def create(self, validated_data):
    # to understand the structure better, print or log validated_data
    location_data = validated_data.pop('location') # all location data will be popped from the validated data as a dict
# create a location object
location_obj = Location.objects.create(**location_data)
location_obj.save()
# then add the location object in profile obj
    new_profile = Profile.objects.create(**validated_data, location=location_obj)
new_profile.save()
return new_profile
please go through the official doc
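If the link between profile and location actually lives in the separate ProfileLocation join model rather than on Profile itself, the same pattern simply extends to three creates. The sketch below is illustrative only: it assumes ProfileLocation has profile and location foreign keys and that the nested payload key is "location", neither of which is confirmed by the post.
def create(self, validated_data):
    # pop the nested location dict before creating the profile
    location_data = validated_data.pop('location')
    location_obj = Location.objects.create(**location_data)
    new_profile = Profile.objects.create(**validated_data)
    # create the join record last, once both ends exist (field names are assumed)
    ProfileLocation.objects.create(profile=new_profile, location=location_obj)
    return new_profile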
|
Django - How do you create several model instances at the same time because they are connected
|
I want to create a user profile, and the user profile has a location (address). I need to create the profile first and the location second, and then match the profile and the location using a third model called ProfileLocation. I want to do this using one API call, because all the data comes from one form and the location depends on the profile.
There is a location model that has OneToOne fields for Country, State and City. The countries, states and cities will have their database tables populated ahead of time. There is an extra model called ProfileLocation that links the profile to the location. So I have to create all of them at once, and I am struggling with what the best way to do it is. Also, what type of DRF view do I use for the endpoint? I need to understand the logic, please, and I cannot find an example on the net.
Do I need to create a custom function-based view and run the data through the existing serializers? In that case, how can I bundle the incoming data for each specific serializer?
This is all very new to me
Locations model.py:
from django.db import models
from django_extensions.db.fields import AutoSlugField
class Country(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(populate_from=["name"])
country_code = models.CharField(max_length=5)
dial_code = models.CharField(max_length=5)
created_at = models.DateTimeField(auto_now_add=True)
updated_at = models.DateTimeField(auto_now=True)
class Meta:
verbose_name = "country"
verbose_name_plural = "countries"
db_table = "countries"
ordering = ["name"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
class State(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(populate_from=["name"])
country = models.OneToOneField(Country, on_delete=models.CASCADE, default=None)
created_at = models.DateTimeField("date post was created", auto_now_add=True)
updated_at = models.DateTimeField("date post was updated", auto_now=True)
class Meta:
verbose_name = "state"
verbose_name_plural = "states"
db_table = "states"
unique_together = ["country", "name"]
ordering = ["name"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
class City(models.Model):
name = models.CharField(max_length=50)
slug = AutoSlugField(populate_from=["name"])
country = models.OneToOneField(Country, on_delete=models.CASCADE, default=None)
state = models.OneToOneField(State, on_delete=models.CASCADE, default=None)
created_at = models.DateTimeField("date post was created", auto_now_add=True)
updated_at = models.DateTimeField("date post was updated", auto_now=True)
class Meta:
verbose_name = "city"
verbose_name_plural = "cities"
db_table = "cities"
unique_together = ["country", "state", "name"]
ordering = ["name"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
class Location(models.Model):
name = models.CharField(max_length=50, default=None)
slug = AutoSlugField(populate_from=["name"])
street = models.CharField(max_length=100)
additional = models.CharField(max_length=100)
country = models.OneToOneField(State, on_delete=models.CASCADE, related_name="countries")
state = models.OneToOneField(State, on_delete=models.CASCADE, related_name="states")
city = models.OneToOneField(City, on_delete=models.CASCADE, related_name="cities")
zip = models.CharField(max_length=30)
phone = models.CharField(max_length=15)
created_at = models.DateTimeField(auto_now_add=True, verbose_name="created at")
updated_at = models.DateTimeField(auto_now=True, verbose_name="updated at")
class Meta:
verbose_name = "location"
verbose_name_plural = "locations"
db_table = "locations"
ordering = ["zip"]
def __str__(self):
return self.name
def get_absolute_url(self):
return self.slug
Here are the Location serializers which are ordinary modelserializers:
from rest_framework import serializers
from .models import *
from profiles.models import ProfileLocation
class CountrySerializer(serializers.ModelSerializer):
class Meta:
model = Country
fields = [
"id",
"name",
"country_code",
"dial_code",
"created_at",
"updated_at",
]
class StateSerializer(serializers.ModelSerializer):
class Meta:
model = State
fields = [
"id",
"name",
"country",
"created_at",
"updated_at",
]
class CitySerializer(serializers.ModelSerializer):
class Meta:
model = City
fields = [
"id",
"name",
"country",
"state",
"created_at",
"updated_at",
]
class LocationSerializer(serializers.ModelSerializer):
class Meta:
model = Location
fields = [
"name",
"street",
"additional",
"zip",
"city",
"phone",
"created_at",
"updated_at",
]
class ProfileLocationSerializer(serializers.HyperlinkedModelSerializer):
class Meta:
model = ProfileLocation
fields =[
"location"
"profile"
]
and the profile serializer:
from rest_framework import serializers
from .models import *
from locations.serializers import ProfileLocationSerializer
class ProfileSerializer(serializers.ModelSerializer):
location = ProfileLocationSerializer()
class Meta:
model = Profile
fields = [
"background",
"photo",
"first_name",
"middle_name",
"last_name",
"birthdate",
"gender",
"bio",
"languages",
"is_verified",
"verification",
"location",
"website",
"user",
"created_at",
"updated_at",
]
def create(self, validated_data):
new_profile = Profile.objects.create(**validated_data)
return new_profile
This view creates the profile without any problems but excludes the location obviously.
class ProfileViewSet(viewsets.ModelViewSet):
permission_classes = [permissions.IsAuthenticated]
queryset = Profile.objects.all()
serializer_class = ProfileSerializer
Thank you in advance
|
[
"your procedure is perfect, just need to override the create method of the serializer\ndef create(self, validated_data):\n # for better understand print or log the validated_data\n location_data = validated_data.pop('location') # all location data will be poped from the validated data as a dict\n # create a location object\n location_obj = Location.objects.create(**location_data)\n location_obj.save()\n # then add the location object in profile obj\n new_profile = Profile.objects.create(**validated_data, location=location_ojb)\n new_profile.save()\n return new_profile\n\nplease go through the official doc\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_rest_framework",
"python"
] |
stackoverflow_0074668588_django_django_rest_framework_python.txt
|
Q:
How to write R function that prints out residual plot interpretation
Can someone tell me how to write an R function that tests the normality and homoscedasticity of the residuals of any given model? The function must also print a message that interprets the results of the tests.
I've made a function that prints out the plots, but I don't know how to print out the respective interpretation.
Residuals <- function(model) {
par(mfrow=c(2,2))
plot(model)
}
Residuals()
A:
You can use the which argument for plot() to specify which plots you want; whether you use par(mfrow) is a personal preference. I'm not sure what you mean by interpretation, but you can add a call of summary(model) to print the results. You could also potentially use report() to print out a short text summary of your results.
data(mtcars)
mod <- lm(mpg ~ cyl + hp, data = mtcars)
res_func <- function(model) {
par(mfrow = c(1,2)) # depends if you want them side by side or not
plot(model, ask = FALSE, which = c(2,3))
summary(model)
}
res_func(model = mod)
# other option
library(report)
res_func2 <- function(model) {
par(mfrow = c(1,2)) # depends if you want them side by side or not
plot(model, ask = FALSE, which = c(2,3))
report(model)
}
res_func2(model = mod)
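If by "interpretation" you mean an automatic verdict from formal tests, a further option (a sketch only, not part of the answer above) is to run shapiro.test() on the residuals and bptest() from the lmtest package on the model, then print a message based on the p-values; the 0.05 cutoff is an arbitrary choice:
library(lmtest) # for bptest(); must be installed

res_func3 <- function(model) {
  par(mfrow = c(2, 2))
  plot(model)
  sw <- shapiro.test(residuals(model))
  bp <- bptest(model)
  msg_norm <- if (sw$p.value < 0.05) "residuals deviate from normality" else "no evidence against normality"
  msg_var <- if (bp$p.value < 0.05) "evidence of heteroscedasticity" else "no evidence of heteroscedasticity"
  cat("Shapiro-Wilk p =", signif(sw$p.value, 3), "->", msg_norm, "\n")
  cat("Breusch-Pagan p =", signif(bp$p.value, 3), "->", msg_var, "\n")
  invisible(list(shapiro = sw, breusch_pagan = bp))
}

res_func3(model = mod)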
|
How to write R function that prints out residual plot interpretation
|
Can someone tell me how to write an R function that tests the normality and homoscedasticity of the residuals of any given model? The function must also print a message that interprets the results of the tests.
I've made a function that prints out the plots, but I don't know how to print out the respective interpretation.
Residuals <- function(model) {
par(mfrow=c(2,2))
plot(model)
}
Residuals()
|
[
"You can use the which argument for plot() to specify which plots you want; whether you use par(mfrow) is a personal preference. I'm not sure what you mean by interpretation, but you can add a call of summary(model) to print the results. You could also potentially use report() to print out a short text summary of your results.\ndata(mtcars)\nmod <- lm(mpg ~ cyl + hp, data = mtcars)\n\nres_func <- function(model) {\n par(mfrow = c(1,2)) # depends if you want them side by side or not\n plot(model, ask = FALSE, which = c(2,3))\n summary(model)\n}\n\nres_func(model = mod)\n\n# other option\nlibrary(report)\n\nres_func2 <- function(model) {\n par(mfrow = c(1,2)) # depends if you want them side by side or not\n plot(model, ask = FALSE, which = c(2,3))\n report(model)\n}\n\nres_func2(model = mod)\n\n"
] |
[
2
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074669534_r.txt
|
Q:
Blazor - how to get button innerText
<button type="submit" class="bu2ncon10" @onclick="@(()=>{StringClicked = button.innerText;})">
A:
You're the one that sets the text of this button, so you already have it, there's no need to get it
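A minimal sketch of that idea (buttonText and StringClicked are placeholder names, not taken from the original markup): keep the label in a field and assign it directly in the click handler.
<button type="submit" class="bu2ncon10"
        @onclick="() => StringClicked = buttonText">
    @buttonText
</button>

@code {
    // hypothetical fields for the sketch
    private string buttonText = "Send";
    private string StringClicked = "";
}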
|
Blazor - how to get button innerText
|
<button type="submit" class="bu2ncon10" @onclick="@(()=>{StringClicked = button.innerText;})">
|
[
"You're the one that sets the text of this button, so you already have it, there's no need to get it\n"
] |
[
0
] |
[] |
[] |
[
"blazor"
] |
stackoverflow_0074662419_blazor.txt
|
Q:
sycn a vertical recyclerview with and horizontal recyclerview
I am creating a food menu layout; the menu has categories with items.
At the top is a list of category names (drinks, sushi, etc.), which is a RecyclerView that scrolls horizontally. At the bottom are the category items, for example under drinks there is Coca-Cola, Fanta, etc., which is a RecyclerView that scrolls vertically. I am trying to sync the two RecyclerViews together, so that when you scroll the vertical one it scrolls the horizontal one, and vice versa.
I created this class to implement this feature.
import android.graphics.Typeface
import android.os.Handler
import android.os.Looper
import android.view.View
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.LinearSmoothScroller
import androidx.recyclerview.widget.RecyclerView
class TwoRecyclerViews(
private val recyclerViewHorizontal: RecyclerView,
private val recyclerViewVertical: RecyclerView,
private var indices: List<Int>,
private var isSmoothScroll: Boolean = false,
) {
private var attached = false
private var horizontalRecyclerState = RecyclerView.SCROLL_STATE_IDLE
private var verticalRecyclerState = RecyclerView.SCROLL_STATE_IDLE
private val smoothScrollerVertical: RecyclerView.SmoothScroller =
object : LinearSmoothScroller(recyclerViewVertical.context) {
override fun getVerticalSnapPreference(): Int {
return SNAP_TO_START
}
}
fun attach() {
recyclerViewHorizontal.adapter
?: throw RuntimeException("Cannot attach with no Adapter provided to RecyclerView")
recyclerViewVertical.adapter
?: throw RuntimeException("Cannot attach with no Adapter provided to RecyclerView")
updateFirstPosition()
notifyIndicesChanged()
attached = true
}
private fun detach() {
recyclerViewVertical.clearOnScrollListeners()
recyclerViewHorizontal.clearOnScrollListeners()
}
fun reAttach() {
detach()
attach()
}
private fun updateFirstPosition() {
Handler(Looper.getMainLooper()).postDelayed({
val view = recyclerViewHorizontal.findViewHolderForLayoutPosition(0)?.itemView
val textView = view?.findViewById<TextView>(R.id.horizontalCategoryName)
val imageView = view?.findViewById<ImageView>(R.id.categorySelectionIndicator)
imageView?.visibility = View.VISIBLE
textView?.setTypeface(null, Typeface.BOLD)
textView?.setTextColor(recyclerViewVertical.context.getColor(R.color.primary_1))
}, 100)
}
fun isAttached() = attached
private fun notifyIndicesChanged() {
recyclerViewHorizontal.addOnScrollListener(onHorizontalScrollListener)
recyclerViewVertical.addOnScrollListener(onVerticalScrollListener)
}
private val onHorizontalScrollListener = object : RecyclerView.OnScrollListener() {
override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
horizontalRecyclerState = newState
}
override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
super.onScrolled(recyclerView, dx, dy)
val linearLayoutManager: LinearLayoutManager =
recyclerView.layoutManager as LinearLayoutManager?
?: throw RuntimeException("No LinearLayoutManager attached to the RecyclerView.")
var itemPosition =
linearLayoutManager.findFirstCompletelyVisibleItemPosition()
if (itemPosition == -1) {
itemPosition =
linearLayoutManager.findFirstVisibleItemPosition()
}
if (horizontalRecyclerState == RecyclerView.SCROLL_STATE_DRAGGING ||
horizontalRecyclerState == RecyclerView.SCROLL_STATE_SETTLING
) {
for (position in indices.indices) {
val view = recyclerView.findViewHolderForLayoutPosition(indices[position])?.itemView
val textView = view?.findViewById<TextView>(R.id.horizontalCategoryName)
val imageView = view?.findViewById<ImageView>(R.id.categorySelectionIndicator)
if (itemPosition == indices[position]) {
if (isSmoothScroll) {
smoothScrollerVertical.targetPosition = indices[position]
recyclerViewVertical.layoutManager?.startSmoothScroll(smoothScrollerVertical)
} else {
(recyclerViewVertical.layoutManager as LinearLayoutManager?)?.scrollToPositionWithOffset(
indices[position], 16.dpToPx()
)
}
imageView?.visibility = View.VISIBLE
textView?.setTypeface(null, Typeface.BOLD)
textView?.setTextColor(recyclerView.context.getColor(R.color.primary_1))
} else {
imageView?.visibility = View.GONE
textView?.setTypeface(null, Typeface.NORMAL)
textView?.setTextColor(recyclerView.context.getColor(R.color.secondary_5))
}
}
}
}
}
private val onVerticalScrollListener = object : RecyclerView.OnScrollListener() {
override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
verticalRecyclerState = newState
}
override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
super.onScrolled(recyclerView, dx, dy)
val linearLayoutManager: LinearLayoutManager =
recyclerView.layoutManager as LinearLayoutManager?
?: throw RuntimeException("No LinearLayoutManager attached to the RecyclerView.")
var itemPosition =
linearLayoutManager.findFirstCompletelyVisibleItemPosition()
if (itemPosition == -1) {
itemPosition =
linearLayoutManager.findFirstVisibleItemPosition()
}
if (verticalRecyclerState == RecyclerView.SCROLL_STATE_DRAGGING ||
verticalRecyclerState == RecyclerView.SCROLL_STATE_SETTLING
) {
for (position in indices.indices) {
val view = recyclerViewHorizontal.findViewHolderForAdapterPosition(indices[position])?.itemView
val textView = view?.findViewById<TextView>(R.id.horizontalCategoryName)
val imageView = view?.findViewById<ImageView>(R.id.categorySelectionIndicator)
if (itemPosition == indices[position]) {
(recyclerViewHorizontal.layoutManager as LinearLayoutManager?)?.scrollToPositionWithOffset(
indices[position], 16.dpToPx()
)
imageView?.visibility = View.VISIBLE
textView?.setTypeface(null, Typeface.BOLD)
textView?.setTextColor(recyclerViewVertical.context.getColor(R.color.primary_1))
} else {
imageView?.visibility = View.GONE
textView?.setTypeface(null, Typeface.NORMAL)
textView?.setTextColor(recyclerViewVertical.context.getColor(R.color.secondary_5))
}
}
}
}
}
}
The class works fine for the vertical scroll, but there is an instability with the horizontal scroll. If you have a better solution than the class I created, kindly share it.
A:
To fix the issue with your TwoRecyclerViews class, you can try the following:
Use addOnItemTouchListener() instead of addOnScrollListener() on both RecyclerViews, since addOnScrollListener() won't get called when the user flings the RecyclerView.
private fun notifyIndicesChanged() {
recyclerViewHorizontal.addOnItemTouchListener(onHorizontalScrollListener)
recyclerViewVertical.addOnItemTouchListener(onVerticalScrollListener)
}
Use findFirstVisibleItemPosition() instead of findFirstCompletelyVisibleItemPosition() to get the current item position in the RecyclerViews, since findFirstCompletelyVisibleItemPosition() will return -1 if there are no completely visible items.
var itemPosition = linearLayoutManager.findFirstVisibleItemPosition()
Use the itemPosition variable to get the current item in the RecyclerView, and then use the indices list to get the corresponding index in the other RecyclerView.
// Get the current item in the RecyclerView
val currentItem =
linearLayoutManager.findViewByPosition(itemPosition)
// Use the indices list to get the corresponding index in the other RecyclerView
val index = indices[itemPosition]
Use the index variable to scroll the other RecyclerView to the correct position.
// Scroll the other RecyclerView to the correct position
val otherLinearLayoutManager = otherRecyclerView.layout
A:
The best way to achieve your UI/UX requirement is to use TabLayout with a vertical recycler view.
Both list items in the recycler view and tabs in the tab layout can be set up as a dynamic number of items/tabs
When you scroll up and down and reach the respective category, update the tab layout using the following code. You can identify each category from the dataset.
TabLayout tabLayout = (TabLayout) findViewById(R.id.tabs); // Once for the Activity or Fragment
TabLayout.Tab tab = tabLayout.getTabAt(someIndex); // someIndex should be obtained from the dataset
tab.select();
In the same way, when clicking on a tab or scrolling the tab layout, update the RecyclerView accordingly:
recyclerView.smoothScrollToPosition(itemCount)
Hope this will help, Cheers!!
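A rough Kotlin sketch of that wiring (categoryStartIndices is a hypothetical list holding the first vertical position of each category; it is not from the post):
verticalRecyclerView.addOnScrollListener(object : RecyclerView.OnScrollListener() {
    override fun onScrolled(rv: RecyclerView, dx: Int, dy: Int) {
        val lm = rv.layoutManager as? LinearLayoutManager ?: return
        val first = lm.findFirstVisibleItemPosition()
        if (first == RecyclerView.NO_POSITION) return
        // select the last category whose start position is at or before the first visible item
        val tabIndex = categoryStartIndices.indexOfLast { it <= first }
        if (tabIndex >= 0 && tabLayout.selectedTabPosition != tabIndex) {
            tabLayout.getTabAt(tabIndex)?.select()
        }
    }
})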
A:
It looks like the issue with the horizontal scrolling is that it only scrolls to the first item in the vertical RecyclerView, instead of scrolling to the corresponding item for the category that is being scrolled to in the horizontal RecyclerView. To fix this, you can use the LinearLayoutManager.scrollToPositionWithOffset() method to scroll to the corresponding item in the vertical RecyclerView.
Here is an example of how you could update the onHorizontalScrollListener to use this method:
private val onHorizontalScrollListener = object : RecyclerView.OnScrollListener() {
override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
horizontalRecyclerState = newState
}
override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
super.onScrolled(recyclerView, dx, dy)
val linearLayoutManager: LinearLayoutManager =
recyclerView.layoutManager as LinearLayoutManager?
?: throw RuntimeException("No LinearLayoutManager attached to the RecyclerView.")
var itemPosition =
linearLayoutManager.findFirstCompletelyVisibleItemPosition()
if (itemPosition == -1) {
itemPosition =
linearLayoutManager.findFirstVisibleItemPosition()
}
if (horizontalRecyclerState == RecyclerView.SCROLL_STATE_DRAGGING ||
horizontalRecyclerState == RecyclerView.SCROLL_STATE_SETTLING
) {
if (indices.size > itemPosition) {
val index = indices[itemPosition]
if (isSmoothScroll) {
smoothScrollerVertical.targetPosition = index
recyclerViewVertical.layoutManager?.startSmoothScroll(smoothScrollerVertical)
} else {
// Use the LinearLayoutManager.scrollToPositionWithOffset() method to scroll to the corresponding item in the vertical RecyclerView
recyclerViewVertical.layoutManager?.scrollToPositionWithOffset(index, 0)
}
}
}
}
}
Note that this solution assumes that the indices list contains the indices of the corresponding items in the vertical RecyclerView for each category in the horizontal RecyclerView. You will need to make sure that this list is correctly populated with the correct indices.
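One way to build that indices list is sketched below; Category with an items list is a hypothetical data class standing in for whatever the adapters actually render, so adjust it to your real menu model (for example, add one extra position per category if the vertical list also shows a header row).
fun buildIndices(categories: List<Category>): List<Int> {
    val indices = mutableListOf<Int>()
    var position = 0
    for (category in categories) {
        indices.add(position)            // vertical position where this category starts
        position += category.items.size  // skip past this category's items
    }
    return indices
}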
|
sycn a vertical recyclerview with and horizontal recyclerview
|
I am creating a food menu layout; the menu has categories with items.
At the top is a list of category names (drinks, sushi, etc.), which is a RecyclerView that scrolls horizontally. At the bottom are the category items, for example under drinks there is Coca-Cola, Fanta, etc., which is a RecyclerView that scrolls vertically. I am trying to sync the two RecyclerViews together, so that when you scroll the vertical one it scrolls the horizontal one, and vice versa.
I created this class to implement this feature.
import android.graphics.Typeface
import android.os.Handler
import android.os.Looper
import android.view.View
import android.widget.ImageView
import android.widget.TextView
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.LinearSmoothScroller
import androidx.recyclerview.widget.RecyclerView
class TwoRecyclerViews(
private val recyclerViewHorizontal: RecyclerView,
private val recyclerViewVertical: RecyclerView,
private var indices: List<Int>,
private var isSmoothScroll: Boolean = false,
) {
private var attached = false
private var horizontalRecyclerState = RecyclerView.SCROLL_STATE_IDLE
private var verticalRecyclerState = RecyclerView.SCROLL_STATE_IDLE
private val smoothScrollerVertical: RecyclerView.SmoothScroller =
object : LinearSmoothScroller(recyclerViewVertical.context) {
override fun getVerticalSnapPreference(): Int {
return SNAP_TO_START
}
}
fun attach() {
recyclerViewHorizontal.adapter
?: throw RuntimeException("Cannot attach with no Adapter provided to RecyclerView")
recyclerViewVertical.adapter
?: throw RuntimeException("Cannot attach with no Adapter provided to RecyclerView")
updateFirstPosition()
notifyIndicesChanged()
attached = true
}
private fun detach() {
recyclerViewVertical.clearOnScrollListeners()
recyclerViewHorizontal.clearOnScrollListeners()
}
fun reAttach() {
detach()
attach()
}
private fun updateFirstPosition() {
Handler(Looper.getMainLooper()).postDelayed({
val view = recyclerViewHorizontal.findViewHolderForLayoutPosition(0)?.itemView
val textView = view?.findViewById<TextView>(R.id.horizontalCategoryName)
val imageView = view?.findViewById<ImageView>(R.id.categorySelectionIndicator)
imageView?.visibility = View.VISIBLE
textView?.setTypeface(null, Typeface.BOLD)
textView?.setTextColor(recyclerViewVertical.context.getColor(R.color.primary_1))
}, 100)
}
fun isAttached() = attached
private fun notifyIndicesChanged() {
recyclerViewHorizontal.addOnScrollListener(onHorizontalScrollListener)
recyclerViewVertical.addOnScrollListener(onVerticalScrollListener)
}
private val onHorizontalScrollListener = object : RecyclerView.OnScrollListener() {
override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
horizontalRecyclerState = newState
}
override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
super.onScrolled(recyclerView, dx, dy)
val linearLayoutManager: LinearLayoutManager =
recyclerView.layoutManager as LinearLayoutManager?
?: throw RuntimeException("No LinearLayoutManager attached to the RecyclerView.")
var itemPosition =
linearLayoutManager.findFirstCompletelyVisibleItemPosition()
if (itemPosition == -1) {
itemPosition =
linearLayoutManager.findFirstVisibleItemPosition()
}
if (horizontalRecyclerState == RecyclerView.SCROLL_STATE_DRAGGING ||
horizontalRecyclerState == RecyclerView.SCROLL_STATE_SETTLING
) {
for (position in indices.indices) {
val view = recyclerView.findViewHolderForLayoutPosition(indices[position])?.itemView
val textView = view?.findViewById<TextView>(R.id.horizontalCategoryName)
val imageView = view?.findViewById<ImageView>(R.id.categorySelectionIndicator)
if (itemPosition == indices[position]) {
if (isSmoothScroll) {
smoothScrollerVertical.targetPosition = indices[position]
recyclerViewVertical.layoutManager?.startSmoothScroll(smoothScrollerVertical)
} else {
(recyclerViewVertical.layoutManager as LinearLayoutManager?)?.scrollToPositionWithOffset(
indices[position], 16.dpToPx()
)
}
imageView?.visibility = View.VISIBLE
textView?.setTypeface(null, Typeface.BOLD)
textView?.setTextColor(recyclerView.context.getColor(R.color.primary_1))
} else {
imageView?.visibility = View.GONE
textView?.setTypeface(null, Typeface.NORMAL)
textView?.setTextColor(recyclerView.context.getColor(R.color.secondary_5))
}
}
}
}
}
private val onVerticalScrollListener = object : RecyclerView.OnScrollListener() {
override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {
verticalRecyclerState = newState
}
override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {
super.onScrolled(recyclerView, dx, dy)
val linearLayoutManager: LinearLayoutManager =
recyclerView.layoutManager as LinearLayoutManager?
?: throw RuntimeException("No LinearLayoutManager attached to the RecyclerView.")
var itemPosition =
linearLayoutManager.findFirstCompletelyVisibleItemPosition()
if (itemPosition == -1) {
itemPosition =
linearLayoutManager.findFirstVisibleItemPosition()
}
if (verticalRecyclerState == RecyclerView.SCROLL_STATE_DRAGGING ||
verticalRecyclerState == RecyclerView.SCROLL_STATE_SETTLING
) {
for (position in indices.indices) {
val view = recyclerViewHorizontal.findViewHolderForAdapterPosition(indices[position])?.itemView
val textView = view?.findViewById<TextView>(R.id.horizontalCategoryName)
val imageView = view?.findViewById<ImageView>(R.id.categorySelectionIndicator)
if (itemPosition == indices[position]) {
(recyclerViewHorizontal.layoutManager as LinearLayoutManager?)?.scrollToPositionWithOffset(
indices[position], 16.dpToPx()
)
imageView?.visibility = View.VISIBLE
textView?.setTypeface(null, Typeface.BOLD)
textView?.setTextColor(recyclerViewVertical.context.getColor(R.color.primary_1))
} else {
imageView?.visibility = View.GONE
textView?.setTypeface(null, Typeface.NORMAL)
textView?.setTextColor(recyclerViewVertical.context.getColor(R.color.secondary_5))
}
}
}
}
}
}
The class works fine for the vertical scroll, but there is an instability with the horizontal scroll. If you have a better solution than the class I created, kindly share it.
|
[
"To fix the issue with your TwoRecyclerViews class, you can try the following:\n\nUse addOnItemTouchListener() instead of addOnScrollListener() on both RecyclerViews, since addOnScrollListener() won't get called when the user flings the RecyclerView.\nprivate fun notifyIndicesChanged() {\n recyclerViewHorizontal.addOnItemTouchListener(onHorizontalScrollListener)\n recyclerViewVertical.addOnItemTouchListener(onVerticalScrollListener)\n}\n\n\nUse findFirstVisibleItemPosition() instead of findFirstCompletelyVisibleItemPosition() to get the current item position in the RecyclerViews, since findFirstCompletelyVisibleItemPosition() will return -1 if there are no completely visible items.\nvar itemPosition = linearLayoutManager.findFirstVisibleItemPosition()\n\n\nUse the itemPosition variable to get the current item in the RecyclerView, and then use the indices list to get the corresponding index in the other RecyclerView.\n// Get the current item in the RecyclerView\nval currentItem =\nlinearLayoutManager.findViewByPosition(itemPosition)\n\n// Use the indices list to get the corresponding index in the other RecyclerView\nval index = indices[itemPosition]\n\n\nUse the index variable to scroll the other RecyclerView to the correct position.\n// Scroll the other RecyclerView to the correct position\nval otherLinearLayoutManager = otherRecyclerView.layout\n\n\n\n",
"The best way to achieve your UI/UX requirement is to use TabLayout with a vertical recycler view.\nBoth list items in the recycler view and tabs in the tab layout can be set up as a dynamic number of items/tabs\nWhen you scroll up and down and reach the respective category, update the tab layout using the following code. You can identify each category from the\nTabLayout tabLayout = (TabLayout) findViewById(R.id.tabs); // Once for the Activity of Fragment\nTabLayout.Tab tab = tabLayout.getTabAt(someIndex); // Some index should be obtain from the dataset \ntab.select();\n\nIn the same way, when clicking on a tab or scrolling the tab layout, Update the RecyclerVew accordingly./\nrecyclerView.smoothScrollToPosition(itemCount)\n\nHope this will help, Cheers!!\n",
"It looks like the issue with the horizontal scrolling is that it only scrolls to the first item in the vertical RecyclerView, instead of scrolling to the corresponding item for the category that is being scrolled to in the horizontal RecyclerView. To fix this, you can use the LinearLayoutManager.scrollToPositionWithOffset() method to scroll to the corresponding item in the vertical RecyclerView.\nHere is an example of how you could update the onHorizontalScrollListener to use this method:\nprivate val onHorizontalScrollListener = object : RecyclerView.OnScrollListener() {\n override fun onScrollStateChanged(recyclerView: RecyclerView, newState: Int) {\n horizontalRecyclerState = newState\n }\n\n override fun onScrolled(recyclerView: RecyclerView, dx: Int, dy: Int) {\n super.onScrolled(recyclerView, dx, dy)\n\n val linearLayoutManager: LinearLayoutManager =\n recyclerView.layoutManager as LinearLayoutManager?\n ?: throw RuntimeException(\"No LinearLayoutManager attached to the RecyclerView.\")\n\n var itemPosition =\n linearLayoutManager.findFirstCompletelyVisibleItemPosition()\n\n if (itemPosition == -1) {\n itemPosition =\n linearLayoutManager.findFirstVisibleItemPosition()\n }\n\n if (horizontalRecyclerState == RecyclerView.SCROLL_STATE_DRAGGING ||\n horizontalRecyclerState == RecyclerView.SCROLL_STATE_SETTLING\n ) {\n if (indices.size > itemPosition) {\n val index = indices[itemPosition]\n if (isSmoothScroll) {\n smoothScrollerVertical.targetPosition = index\n recyclerViewVertical.layoutManager?.startSmoothScroll(smoothScrollerVertical)\n } else {\n // Use the LinearLayoutManager.scrollToPositionWithOffset() method to scroll to the corresponding item in the vertical RecyclerView\n recyclerViewVertical.layoutManager?.scrollToPositionWithOffset(index, 0)\n }\n }\n }\n }\n}\n\nNote that this solution assumes that the indices list contains the indices of the corresponding items in the vertical RecyclerView for each category in the horizontal RecyclerView. You will need to make sure that this list is correctly populated with the correct indices.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"android",
"android_recyclerview",
"kotlin"
] |
stackoverflow_0074567007_android_android_recyclerview_kotlin.txt
|
Q:
Separate ViewModels for same ListFragment and DetailsFragment or Same Viewmodel?
I read somewhere that Google recommends that every fragment in your project should have its own ViewModel.
But...
Let's suppose you have the same set of data being passed to both Fragments. For instance, I have a MovieListFragment and a MovieDetailsFragment, so now I am wondering if I should make separate ViewModels for these or go for the same ViewModel.
I want to understand which would be the better approach here.
A:
You should use a shared view model, as shown in the Android codelab. The whole idea of a view model is to share. But if you don't have to share between fragments and only return data to the activity, then you should have one view model per fragment.
https://developer.android.com/codelabs/basic-android-kotlin-training-shared-viewmodel#0
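A minimal sketch of the shared pattern (Movie is a placeholder type, and activityViewModels() needs the androidx fragment-ktx artifact):
class MovieViewModel : ViewModel() {
    // selection made in the list screen, observed by the details screen
    val selectedMovie = MutableLiveData<Movie>()
}

class MovieListFragment : Fragment() {
    private val viewModel: MovieViewModel by activityViewModels()
}

class MovieDetailsFragment : Fragment() {
    private val viewModel: MovieViewModel by activityViewModels()
}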
|
Separate ViewModels for same ListFragment and DetailsFragment or Same Viewmodel?
|
I read somewhere that Google recommends that every fragment in your project should have its own ViewModel.
But...
Let's suppose you have the same set of data being passed to both Fragments. For instance, I have a MovieListFragment and a MovieDetailsFragment, so now I am wondering if I should make separate ViewModels for these or go for the same ViewModel.
I want to understand which would be the better approach here.
|
[
"You should use a shared view model has shown in the Android code lab. The whole idea with a view model is to share. But if you don't have to share between fragments but only return data to activity then you should have one view model per fragment.\nhttps://developer.android.com/codelabs/basic-android-kotlin-training-shared-viewmodel#0\n"
] |
[
0
] |
[] |
[] |
[
"android",
"kotlin",
"mvvm"
] |
stackoverflow_0074669205_android_kotlin_mvvm.txt
|
Q:
How to get textarea value using Reactjs
I am working on React.js and using Next.js. Right now I am trying to get the value of a "textarea" and a "dropdown/select", but I am getting an empty result. How can I do this?
I tried the following code:
const msgChange = (e) => {
const value = e.target.value;
setState({
...state,
[e.target.msg]: value
});
};
const countryChange = (e) => {
const value = e.target.value;
setState({
...state,
[e.target.country]: value
});
};
const handleSubmit = (e) => {
var msg = state.msg;
alert('msg is '+msg);
}
<form className='row' onSubmit={handleSubmit}>
<select className="form-select" aria-label="Default select example" onChange={countryChange} name="country">
<option selected>Country</option>
<option value="abc">abc</option>
<option value="xyz">xyz</option>
</select>
<textarea onChange={msgChange} name="msgs"></textarea>
<input type="submit" value="send" className='sendbtn' />
</form>
A:
You're setting the key to e.target.msg which doesn't exist. I assume you meant to set it using the element's name instead. You would also need to do that with the select element. So, with that in mind, you can actually combine those two functions into one that handles the change events for both elements.
const { useEffect, useState } = React;
function Example() {
const [ state, setState ] = useState({});
const handleChange = (e) => {
// Destructure the name and value from
// the changed element
const { name, value } = e.target;
setState({ ...state, [name]: value });
};
// Log the change in state
useEffect(() => console.log(state), [state]);
return (
<form className='row'>
<select
className="form-select"
aria-label="Default select example"
onChange={handleChange}
name="country"
>
<option selected disabled>Country</option>
<option value="abc">abc</option>
<option value="xyz">xyz</option>
</select>
<textarea
onChange={handleChange}
name="msg"
></textarea>
</form>
);
}
ReactDOM.render(
<Example />,
document.getElementById('react')
);
<script src="https://cdnjs.cloudflare.com/ajax/libs/react/17.0.2/umd/react.production.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/17.0.2/umd/react-dom.production.min.js"></script>
<div id="react"></div>
A:
You are setting the wrong attributes in the onChange handlers for both the textArea and select dropdown.
Assuming you have the below kind of useState in your code.
const [state, setState] = useState({});
You should use the below attributes to add your value to the state.
const msgChange = (e) => {
const value = e.target.value;
setState({
...state,
[e.target.name]: value
});
};
const countryChange = (e) => {
const value = e.target.value;
setState({
...state,
[e.target.name]: value
});
};
Access the value below from the state
const handleSubmit = (e) => {
const msg = state.msgs;
const country = state.country;
console.log("message ---->", msg, "country --->", country);
};
|
How to get textarea value using Reactjs
|
I am working on React.js and using Next.js. Right now I am trying to get the value of a "textarea" and a "dropdown/select", but I am getting an empty result. How can I do this?
I tried the following code:
const msgChange = (e) => {
const value = e.target.value;
setState({
...state,
[e.target.msg]: value
});
};
const countryChange = (e) => {
const value = e.target.value;
setState({
...state,
[e.target.country]: value
});
};
const handleSubmit = (e) => {
var msg = state.msg;
alert('msg is '+msg);
}
<form className='row' onSubmit={handleSubmit}>
<select className="form-select" aria-label="Default select example" onChange={countryChange} name="country">
<option selected>Country</option>
<option value="abc">abc</option>
<option value="xyz">xyz</option>
</select>
<textarea onChange={msgChange} name="msgs"></textarea>
<input type="submit" value="send" className='sendbtn' />
</form>
|
[
"You're setting the key to e.target.msg which doesn't exist. I assume you meant to set it using the element's name instead. You would also need to do that with the select element. So, with that in mind, you can actually combine those two functions into one that handles the change events for both elements.\n\n\nconst { useEffect, useState } = React;\n\nfunction Example() {\n\n const [ state, setState ] = useState({});\n\n const handleChange = (e) => {\n \n // Destructure the name and value from\n // the changed element\n const { name, value } = e.target;\n setState({ ...state, [name]: value });\n };\n\n // Log the change in state\n useEffect(() => console.log(state), [state]);\n\n return (\n <form className='row'>\n <select\n className=\"form-select\"\n aria-label=\"Default select example\"\n onChange={handleChange}\n name=\"country\"\n >\n <option selected disabled>Country</option>\n <option value=\"abc\">abc</option>\n <option value=\"xyz\">xyz</option>\n </select>\n <textarea\n onChange={handleChange}\n name=\"msg\"\n ></textarea>\n </form>\n );\n\n}\n\nReactDOM.render(\n <Example />,\n document.getElementById('react')\n);\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react/17.0.2/umd/react.production.min.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react-dom/17.0.2/umd/react-dom.production.min.js\"></script>\n<div id=\"react\"></div>\n\n\n\n",
"You are setting the wrong attributes in the onChange handlers for both the textArea and select dropdown.\nAssuming you have the below kind of useState in your code.\n\nconst [state, setState] = useState({});\n\n\nYou should use the below attributes to add your value to the state.\n\nconst msgChange = (e) => {\n const value = e.target.value;\n setState({\n ...state,\n [e.target.name]: value\n });\n};\n\nconst countryChange = (e) => {\n const value = e.target.value;\n setState({\n ...state,\n [e.target.name]: value\n });\n};\n\n\nAccess the value below from the state\n\nconst handleSubmit = (e) => {\n const msg = state.msgs;\n const country = state.country;\n console.log(\"message ---->\", msg, \"country --->\", country); \n};\n\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"javascript",
"next.js",
"reactjs"
] |
stackoverflow_0074669511_javascript_next.js_reactjs.txt
|
Q:
Checking if the first letter of string is in uppercase in Dart
I want to check if the first letter of a string is uppercase in the Dart language. How can I implement it? Thanks in advance.
A:
The simplest way I can think of is to compare the first letter of the string with the uppercase equivalent of it. Something like:
bool isUpperCase(String string) {
if (string == null) {
return false;
}
if (string.isEmpty) {
return false;
}
if (string.trimLeft().isEmpty) {
return false;
}
String firstLetter = string.trimLeft().substring(0, 1);
if (double.tryParse(firstLetter) != null) {
return false;
}
return firstLetter.toUpperCase() == string.substring(0, 1);
}
Updated the answer to take digits into consideration.
Also, @Saed Nabil is right: this solution will return true if the string starts with any character that is not a letter (except for digits).
A:
You can use the validators library if you are not already using it.
Then use this method
isUppercase(String str) → bool
check if the string str is uppercase
Don't forget to import the dependency, see documentation, to the pubspec.yaml and to your code import 'package:validators/validators.dart';.
Example code:
if(isUppercase(value[0])){
... do some magic
}
You should check that the value is not empty and not null first for safety. Like this:
if(value != null && value.isNotEmpty && isUppercase(value[0])){
... do amazing things
}
A:
check this code it will return the actual uppercase letter else will return null
void main(){
var myString = "1s you said";
var firstCapital = firstCapitalLetter(myString);
if( firstCapital != null){
print("First Capital Letter is ${firstCapital}");
}else{
print("Not found");
}
}
String firstCapitalLetter(String myString){
final allCapitals = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"; //string.substring(0, 1).toUpperCase() == string.substring(0, 1) will not work with for ex. numbers;
if (myString == null) {
return null;
}
if (myString.isEmpty) {
return null;
}
if (myString.trimLeft().isEmpty) {
return null;
}
if( allCapitals.contains(myString[0])){
return myString[0];
}else{
return null;
}
}
This is a typical case for the Optional type from the Java language; please check this library if you prefer functional-style code:
optional package
A:
bool isUppercase(String str){
return str == str.toUpperCase();
}
This seems to be an elegant way of checking for uppercase:
String s='Hello';
bool isUpper = isUppercase(s[0]);
print(isUpper); //true
isUpper = isUppercase(s[1]);
print(isUpper); //false
A:
Add this extension
extension Case on String{
// isuppercase
bool isUpperCase(){
int ascii = codeUnitAt(0);
return ascii >= 65 && ascii <= 90;
}
// islowercase
bool isLowerCase(){
int ascii = codeUnitAt(0);
return ascii >= 97 && ascii <= 122;
}
}
use it like this
String letter = 'A';
print(letter.isUpperCase()); // true
|
Checking if the first letter of string is in uppercase in Dart
|
I want to check if the first letter of a string is uppercase in the Dart language. How can I implement it? Thanks in advance.
|
[
"The simplest way I can think of is to compare the first letter of the string with the uppercase equivalent of it. Something like:\nbool isUpperCase(String string) {\n if (string == null) {\n return false;\n }\n if (string.isEmpty) {\n return false;\n }\n if (string.trimLeft().isEmpty) {\n return false;\n }\n String firstLetter = string.trimLeft().substring(0, 1);\n if (double.tryParse(firstLetter) != null) {\n return false;\n }\n return firstLetter.toUpperCase() == string.substring(0, 1); \n}\n\nUpdated the answer to take in consideration digits.\nAlso @Saed Nabil is right, this solution will return true if the string starts with any character that is not a letter (except for digits).\n",
"You can use the validators library if you are not already using it.\nThen use this method\n\nisUppercase(String str) β bool\ncheck if the string str is uppercase\n\nDon't forget to import the dependency, see documentation, to the pubspec.yaml and to your code import 'package:validators/validators.dart';.\nExample code:\nif(isUppercase(value[0])){\n ... do some magic\n}\n\nYou should check that the value is not empty and not null first for safety. Like this:\nif(value != null && value.isNotEmpty && isUppercase(value[0])){\n ... do amazing things\n}\n\n",
"check this code it will return the actual uppercase letter else will return null\nvoid main(){\n var myString = \"1s you said\";\n var firstCapital = firstCapitalLetter(myString);\n\n if( firstCapital != null){\n print(\"First Capital Letter is ${firstCapital}\");\n }else{\n print(\"Not found\");\n }\n}\nString firstCapitalLetter(String myString){\n\n final allCapitals = \"ABCDEFGHIJKLMNOPQRSTUVWXYZ\"; //string.substring(0, 1).toUpperCase() == string.substring(0, 1) will not work with for ex. numbers;\n\n if (myString == null) {\n return null;\n }\n if (myString.isEmpty) {\n return null;\n }\n if (myString.trimLeft().isEmpty) {\n return null;\n }\n\n if( allCapitals.contains(myString[0])){\n return myString[0];\n }else{\n return null;\n }\n }\n\nthis is a typical case for Optional type of Java language , please check this library if you prefer functional style code\noptional package\n",
"bool isUppercase(String str){\n return str == str.toUpperCase();\n }\n\nThis seems to be elegant way for checking uppercase\nString s='Hello';\n\nbool isUpper = isUppercase(s[0]);\nprint(isUpper); //true\n\nisUpper = isUppercase(s[1]);\nprint(isUpper); //false\n\n",
"Add this extension\nextension Case on String{\n // isuppercase\n bool isUpperCase(){\n int ascii = codeUnitAt(0);\n return ascii >= 65 && ascii <= 90;\n }\n // islowercase\n bool isLowerCase(){\n int ascii = codeUnitAt(0);\n return ascii >= 97 && ascii <= 122;\n }\n}\n\nuse it like this\nString letter = 'A';\nprint(letter.isUpperCase()); // true\n\n"
] |
[
12,
2,
0,
0,
0
] |
[] |
[] |
[
"dart",
"flutter"
] |
stackoverflow_0056155581_dart_flutter.txt
|
Q:
I am getting "Expected to decode Dictionary but found an array instead."
I am trying to parse JSON from a weather API, but I am receiving this error when trying to decode into one of my model types. I've tried a JSON parser to "improve" my struct, but I am still getting this error.
My main struct is:
struct LocationData: Codable {
let name: String
let local_names: LocalNames
let lat: Double
let lon: Double
}
enum CodingKeys: String, CodingKey {
case name
case localNames = "local_names"
case lat
case lon
}
struct LocalNames: Codable {
let ru: String
let id: String
}
I am using this struct to get the needed data (lattitude and longitude) to later pass it to interface:
struct CurrentLocation {
let lattitude: Double
let longitude: Double
init?(CurrentLocationData: LocationData) {
lattitude = CurrentLocationData.lat
longitude = CurrentLocationData.lon
}
}
And this is original JSON:
[
{
"name": "London",
"local_names": {},
"lat": 51.5073219,
"lon": -0.1276474,
"country": "GB",
"state": "England"
},
{},
{},
{},
{}
]
Unfolded JSON is accessible here:
https://pastebin.com/ZFn2vLUd
Please help me understand what I am missing here.
I've tried multiple struct patterns, but all of them failed to work. I found a solution using a typealias and an array for my struct, but it fails for a sub-struct:
struct LocationDatum: Codable {
let name: String
let local_names: LocalNames
let lat: Double
let lon: Double
}
enum CodingKeys: String, CodingKey {
case name
case localNames = "local_names"
case lat
case lon
}
struct LocalNames: Codable {
let ru: String
let id: String
}
typealias LocationData = [LocationDatum]
struct CurrentLocation {
let lattitude: Double
let longitude: Double
init?(CurrentLocationData: LocationData) {
lattitude = CurrentLocationData.lat
longitude = CurrentLocationData.lon
}
}
with this error:
Value of type 'LocationData' (aka 'Array') has no member 'lat'
Here's my decoding method:
func parseJSONLocation(withData data: Data) -> CurrentLocation? {
let decoder = JSONDecoder()
do {
let locationData = try decoder.decode([LocationData].self, from: data)
guard let currentLocation = CurrentLocation(CurrentLocationData: locationData) else {
return nil
}
return currentLocation
} catch let error as NSError {
print(String(describing: error))
}
return nil
}
This returns the following error if I try to use [LocationData].self instead of LocationData.self:
Cannot convert value of type '[LocationData]' to expected argument type 'LocationData'
A:
I'm not sure about your exact end result but let me help you with this error
as the error is very clear
Value of type 'LocationData' (aka 'Array') has no
member 'lat'
In the initializer you passed LocationData, which is an array of LocationDatum, and you are trying to access lat/lon directly on the array, which is not possible. You must specify an index, like
lattitude = CurrentLocationData[0].lat ?? 0.0
or pass a single LocationDatum object as the parameter, whichever fits your requirement.
by the way, I used this model based on your provided JSON
struct LocationDatum : Codable {
let name : String?
let local_names : Local_names?
let lat : Double?
let lon : Double?
let country : String?
let state : String?
enum CodingKeys: String, CodingKey {
case name = "name"
case local_names = "local_names"
case lat = "lat"
case lon = "lon"
case country = "country"
case state = "state"
}
init(from decoder: Decoder) throws {
let values = try decoder.container(keyedBy: CodingKeys.self)
name = try values.decodeIfPresent(String.self, forKey: .name)
local_names = try values.decodeIfPresent(Local_names.self, forKey: .local_names)
lat = try values.decodeIfPresent(Double.self, forKey: .lat)
lon = try values.decodeIfPresent(Double.self, forKey: .lon)
country = try values.decodeIfPresent(String.self, forKey: .country)
state = try values.decodeIfPresent(String.self, forKey: .state)
}
}
struct Local_names : Codable {
let ru : String?
let id : String?
enum CodingKeys: String, CodingKey {
case ru = "ru"
case id = "let"
}
init(from decoder: Decoder) throws {
let values = try decoder.container(keyedBy: CodingKeys.self)
ru = try values.decodeIfPresent(String.self, forKey: .ru)
id = try values.decodeIfPresent(String.self, forKey: .id)
}
}
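For the decoding side, the decode call should target the array type, since the payload's top level is an array. Below is a sketch under the model above; it also assumes CurrentLocation is given a plain two-argument initializer instead of the failable one from the question:
func parseJSONLocation(withData data: Data) -> CurrentLocation? {
    let decoder = JSONDecoder()
    do {
        // decode the top-level array, then take the first entry that actually has coordinates
        let locations = try decoder.decode([LocationDatum].self, from: data)
        guard let first = locations.first(where: { $0.lat != nil && $0.lon != nil }),
              let lat = first.lat, let lon = first.lon else {
            return nil
        }
        return CurrentLocation(lattitude: lat, longitude: lon)
    } catch {
        print(String(describing: error))
        return nil
    }
}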
|
I am getting "Expected to decode Dictionary but found an array instead."
|
I am trying to parse JSON from a weather API, but I am receiving this error when trying to decode into one of my model types. I've tried a JSON parser to "improve" my struct, but I am still getting this error.
My main struct is:
struct LocationData: Codable {
let name: String
let local_names: LocalNames
let lat: Double
let lon: Double
}
enum CodingKeys: String, CodingKey {
case name
case localNames = "local_names"
case lat
case lon
}
struct LocalNames: Codable {
let ru: String
let id: String
}
I am using this struct to get the needed data (lattitude and longitude) to later pass it to interface:
struct CurrentLocation {
let lattitude: Double
let longitude: Double
init?(CurrentLocationData: LocationData) {
lattitude = CurrentLocationData.lat
longitude = CurrentLocationData.lon
}
}
And this is original JSON:
[
{
"name": "London",
"local_names": {},
"lat": 51.5073219,
"lon": -0.1276474,
"country": "GB",
"state": "England"
},
{},
{},
{},
{}
]
Unfolded JSON is accessible here:
https://pastebin.com/ZFn2vLUd
Please help me understand what I am missing here.
I've tried multiple struct patterns, but all of them failed to work. I found a solution using a typealias and an array for my struct, but it fails for a sub-struct:
struct LocationDatum: Codable {
let name: String
let local_names: LocalNames
let lat: Double
let lon: Double
}
enum CodingKeys: String, CodingKey {
case name
case localNames = "local_names"
case lat
case lon
}
struct LocalNames: Codable {
let ru: String
let id: String
}
typealias LocationData = [LocationDatum]
struct CurrentLocation {
let lattitude: Double
let longitude: Double
init?(CurrentLocationData: LocationData) {
lattitude = CurrentLocationData.lat
longitude = CurrentLocationData.lon
}
}
with this error:
Value of type 'LocationData' (aka 'Array') has no member 'lat'
Here's my decoding method:
func parseJSONLocation(withData data: Data) -> CurrentLocation? {
let decoder = JSONDecoder()
do {
let locationData = try decoder.decode([LocationData].self, from: data)
guard let currentLocation = CurrentLocation(CurrentLocationData: locationData) else {
return nil
}
return currentLocation
} catch let error as NSError {
print(String(describing: error))
}
return nil
}
This returns the following error if I try to use [LocationData].self instead of LocationData.self:
Cannot convert value of type '[LocationData]' to expected argument type 'LocationData'
|
[
"I'm not sure about your exact end result but let me help you with this error\nas the error is very clear\n\nValue of type 'LocationData' (aka 'Array') has no\nmember 'lat'\n\nIn initializer you passed LocationData which is an array of LocationDatum and now are directly trying to access lat long from an array which is not possible you must specify a index like\n\nlattitude = CurrentLocationData[0].lat ?? 0.0\n\nor pass a single object of LocationDatum as a parameter whatever based on your requirement.\nby the way, I used this model based on your provided JSON\nstruct LocationDatum : Codable {\n let name : String?\n let local_names : Local_names?\n let lat : Double?\n let lon : Double?\n let country : String?\n let state : String?\n\n enum CodingKeys: String, CodingKey {\n\n case name = \"name\"\n case local_names = \"local_names\"\n case lat = \"lat\"\n case lon = \"lon\"\n case country = \"country\"\n case state = \"state\"\n }\n\n init(from decoder: Decoder) throws {\n let values = try decoder.container(keyedBy: CodingKeys.self)\n name = try values.decodeIfPresent(String.self, forKey: .name)\n local_names = try values.decodeIfPresent(Local_names.self, forKey: .local_names)\n lat = try values.decodeIfPresent(Double.self, forKey: .lat)\n lon = try values.decodeIfPresent(Double.self, forKey: .lon)\n country = try values.decodeIfPresent(String.self, forKey: .country)\n state = try values.decodeIfPresent(String.self, forKey: .state)\n }\n}\n\nstruct Local_names : Codable {\n let ru : String?\n let id : String?\n\n enum CodingKeys: String, CodingKey {\n\n case ru = \"ru\"\n case id = \"let\"\n }\n\n init(from decoder: Decoder) throws {\n let values = try decoder.container(keyedBy: CodingKeys.self)\n ru = try values.decodeIfPresent(String.self, forKey: .ru)\n id = try values.decodeIfPresent(String.self, forKey: .id)\n }\n\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"ios",
"json",
"swift"
] |
stackoverflow_0074669330_ios_json_swift.txt
|
Q:
R: Meaning of "\" in Sapply?
I have a dataset that looks something like this:
name = c("john", "john", "john", "alex","alex", "tim", "tim", "tim", "ralph", "ralph")
year = c(2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2014, 2016)
my_data = data.frame(name, year)
name year
1 john 2010
2 john 2011
3 john 2012
4 alex 2011
5 alex 2012
6 tim 2010
7 tim 2011
8 tim 2012
9 ralph 2014
10 ralph 2016
I am trying to count the "number of rows with at least one missing (i.e. non-consecutive) year", for example:
# sample output
year count
1 2014, 2016 1
In a previous question (Counting Number of Unique Column Values Per Group), I received an answer - but when I tried to apply this answer, I got the following error:
agg <- aggregate(year ~ name, my_data, c)
agg <- agg$year[sapply(agg$year, \(y) any(diff(y) != 1))]
as.data.frame(table(sapply(agg, paste, collapse = ", ")))
Error: unexpected input .... " ... \"
I think this error might be due to the fact that I am using an older version of R.
Does anyone know if an alternate symbol can be used to replace "\" in R that is supported by older versions of R?
Thanks!
A:
In tidyverse, we may do this as
library(dplyr)
my_data %>%
group_by(name) %>%
filter(any(diff(year) != 1)) %>%
summarise(year = toString(year)) %>%
count(year, name = 'count')
-output
# A tibble: 1 × 2
year count
<chr> <int>
1 2014, 2016 1
The error in the OP's code comes from the R version: the concise lambda option (\(x), shorthand for function(x)) was only introduced in recent versions of R (4.1.0 and later).
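For older R versions that do not support the backslash shorthand, the sapply call from the question also works if the lambda is written out as a regular anonymous function. A minimal sketch, using the same my_data and aggregate call as in the question:
agg <- aggregate(year ~ name, my_data, c)
# function(y) replaces the \(y) shorthand, which needs R 4.1 or later
agg <- agg$year[sapply(agg$year, function(y) any(diff(y) != 1))]
as.data.frame(table(sapply(agg, paste, collapse = ", ")))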
|
R: Meaning of "\" in Sapply?
|
I have a dataset that looks something like this:
name = c("john", "john", "john", "alex","alex", "tim", "tim", "tim", "ralph", "ralph")
year = c(2010, 2011, 2012, 2011, 2012, 2010, 2011, 2012, 2014, 2016)
my_data = data.frame(name, year)
name year
1 john 2010
2 john 2011
3 john 2012
4 alex 2011
5 alex 2012
6 tim 2010
7 tim 2011
8 tim 2012
9 ralph 2014
10 ralph 2016
I am trying to count the "number of rows with at least one missing (i.e. non-consecutive) year", for example:
# sample output
year count
1 2014, 2016 1
In a previous question (Counting Number of Unique Column Values Per Group), I received an answer - but when I tried to apply this answer, I got the following error:
agg <- aggregate(year ~ name, my_data, c)
agg <- agg$year[sapply(agg$year, \(y) any(diff(y) != 1))]
as.data.frame(table(sapply(agg, paste, collapse = ", ")))
Error: unexpected input .... " ... \"
I think this error might be due to the fact that I am using an older version of R.
Does anyone know if an alternate symbol can be used to replace "\" in R that is supported by older versions of R?
Thanks!
|
[
"In tidyverse, we may do this as\nlibrary(dplyr)\nmy_data %>% \n group_by(name) %>% \n filter(any(diff(year) != 1)) %>%\n summarise(year = toString(year)) %>%\n count(year, name = 'count')\n\n-output\n# A tibble: 1 Γ 2\n year count\n <chr> <int>\n1 2014, 2016 1\n\n\nThe error in OP's code is based on the R version. The lambda concise option (\\(x) -> function(x)) is introduced only recently from versions R > 4.0\n"
] |
[
3
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074669593_r.txt
|
Q:
discord.py interaction error message not working
I'm trying to make an error handler with a command that gives the user an ephemeral message saying Invalid language, but I get the following traceback (below the code). I might be doing something wrong in the interaction argument (I'm new to the whole interaction thing and I'm trying it out).
@client.hybrid_command(name = "translate", with_app_command=True, description="Google translate a message to a language", aliases=["tr"])
@commands.guild_only()
async def translate(ctx, interaction: discord.Interaction, language, *, message):
language = language.lower()
if language not in googletrans.LANGUAGES and language not in googletrans.LANGCODES:
await interaction.response.send_message("Invalid Language. Try Again.", ephemeral=True)
Traceback (most recent call last):
File "/home/container/main.py", line 400, in <module>
async def translate(ctx, interaction: discord.Interaction, language, *, message):
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/bot.py", line 289, in decorator
result = hybrid_command(name=name, *args, with_app_command=with_app_command, **kwargs)(func)
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/hybrid.py", line 888, in decorator
return HybridCommand(func, name=name, with_app_command=with_app_command, **attrs) # type: ignore # ???
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/hybrid.py", line 509, in __init__
HybridAppCommand(self) if self.with_app_command else None
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/hybrid.py", line 306, in __init__
super().__init__(
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/commands.py", line 677, in __init__
self._params: Dict[str, CommandParameter] = _extract_parameters_from_callback(callback, callback.__globals__)
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/commands.py", line 393, in _extract_parameters_from_callback
param = annotation_to_parameter(resolved, parameter)
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/transformers.py", line 828, in annotation_to_parameter
(inner, default, validate_default) = get_supported_annotation(annotation)
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/transformers.py", line 787, in get_supported_annotation
raise TypeError(f'unsupported type annotation {annotation!r}')
TypeError: unsupported type annotation <class 'discord.interactions.Interaction'>
sys:1: RuntimeWarning: coroutine 'Command.__call__' was never awaited
A:
You cannot have both ctx and interaction in a hybrid command callback; you can only have ctx, which is a Context object.
You can fix this by removing the interaction argument from the callback:
async def translate(ctx, language, *, message):
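For reference, a minimal sketch of the corrected command (it assumes discord.py 2.x, where ctx.send accepts an ephemeral flag that only takes effect when the hybrid command is invoked as a slash command; the decorator and googletrans checks are taken from the question):
@client.hybrid_command(name="translate", with_app_command=True, description="Google translate a message to a language", aliases=["tr"])
@commands.guild_only()
async def translate(ctx, language, *, message):
    language = language.lower()
    if language not in googletrans.LANGUAGES and language not in googletrans.LANGCODES:
        # Sent as an ephemeral reply when invoked as a slash command
        await ctx.send("Invalid Language. Try Again.", ephemeral=True)
        return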
|
discord.py interaction error message not working
|
I'm trying to make an error handler with a command that gives the user an ephemeral message saying Invalid language, but I get the following traceback (below the code). I might be doing something wrong in the interaction argument (I'm new to the whole interaction thing and I'm trying it out).
@client.hybrid_command(name = "translate", with_app_command=True, description="Google translate a message to a language", aliases=["tr"])
@commands.guild_only()
async def translate(ctx, interaction: discord.Interaction, language, *, message):
language = language.lower()
if language not in googletrans.LANGUAGES and language not in googletrans.LANGCODES:
await interaction.response.send_message("Invalid Language. Try Again.", ephemeral=True)
Traceback (most recent call last):
File "/home/container/main.py", line 400, in <module>
async def translate(ctx, interaction: discord.Interaction, language, *, message):
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/bot.py", line 289, in decorator
result = hybrid_command(name=name, *args, with_app_command=with_app_command, **kwargs)(func)
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/hybrid.py", line 888, in decorator
return HybridCommand(func, name=name, with_app_command=with_app_command, **attrs) # type: ignore # ???
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/hybrid.py", line 509, in __init__
HybridAppCommand(self) if self.with_app_command else None
File "/home/container/.local/lib/python3.9/site-packages/discord/ext/commands/hybrid.py", line 306, in __init__
super().__init__(
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/commands.py", line 677, in __init__
self._params: Dict[str, CommandParameter] = _extract_parameters_from_callback(callback, callback.__globals__)
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/commands.py", line 393, in _extract_parameters_from_callback
param = annotation_to_parameter(resolved, parameter)
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/transformers.py", line 828, in annotation_to_parameter
(inner, default, validate_default) = get_supported_annotation(annotation)
File "/home/container/.local/lib/python3.9/site-packages/discord/app_commands/transformers.py", line 787, in get_supported_annotation
raise TypeError(f'unsupported type annotation {annotation!r}')
TypeError: unsupported type annotation <class 'discord.interactions.Interaction'>
sys:1: RuntimeWarning: coroutine 'Command.__call__' was never awaited
|
[
"You cannot have both ctx and interaction in hybrid command callback, you can only have ctx, which is a Context object.\nYou can fix this by removing the interaction from the callback argument.\nasync def translate(ctx, language, *, message):\n\n"
] |
[
0
] |
[] |
[] |
[
"discord",
"discord.py",
"python"
] |
stackoverflow_0074669450_discord_discord.py_python.txt
|
Q:
How to find source code line with syntax error in Razor file
I have a Blazor web project, and am getting errors on one rather large page file when I build. There are no errors if I close and relaunch VS 2022, until I build. Then, it says there are 2 errors ("} expected" and "; expected"), but it shows them in the .g file, and when I double-click on the error line in the Error window, nothing happens.
How do I trace to where in the source code the bad lines are? There are 1300 lines of code in this page...
A:
I had to close and relaunch Visual Studio a couple of times; now it shows the error on the correct .razor page file. Apparently CodeLens and syntax checking are very flaky in Razor pages. Once these 2 errors were resolved and gone, I did a Build and it came up with another "; expected", which again only surfaced after closing and relaunching VS, reopening the solution, and waiting for CodeLens to check the opened file.
A:
In Blazor (current VS 2022 version), don't look at the Errors tab; always check the "Output" tab. You'll find the errors there, and double-clicking on them will usually get you to the correct line.
the .g files are generated code:
error the in razor.g.cs file, can't find in VS, can't find the _razor.g.cs file
|
How to find source code line with syntax error in Razor file
|
I have a Blazor web project, and am getting errors on one rather large page file when I build. There are no errors if I close and relaunch VS 2022, until I build. Then, it says there are 2 errors ("} expected" and "; expected"), but it shows them in the .g file, and when I double-click on the error line in the Error window, nothing happens.
How do I trace to where in the source code the bad lines are? There are 1300 lines of code in this page...
|
[
"I had to close and relaunch Visual Studio a couple of times, now it shows the error on the correct .razor page file. Apparently CodeLens and syntax checking are very flakey in Razor pages. Once these 2 errors were resolved, went away, and there were no more errors - I did a Build, and it came up with another \"; expected\". Which of course was resolved by closing and relaunching VS, reopening the solution, and waiting for CodeLens to check the opened file. Then the error came up.\n",
"In blazor (VS2022 current version) don't look at the errors tab, always check the \"Output\" tab, you'll find the errors there, and double clicking on them usually will get you to the correct line.\nthe .g files are generated code:\nerror the in razor.g.cs file, can't find in VS, can't find the _razor.g.cs file\n"
] |
[
0,
0
] |
[] |
[] |
[
"blazor",
"razor",
"razor_pages",
"visual_studio"
] |
stackoverflow_0074659026_blazor_razor_razor_pages_visual_studio.txt
|
Q:
Exception from Tracker recompute function, RangeError: Invalid time value | react, meteor,
I'm getting an error saying Exception from Tracker recompute function and RangeError: Invalid time value.
The problem is that sometimes the card is undefined, and since it is undefined, I can't run this: const dateFormat = isThisYear(createdAt) ? "MMM d" : "MMM d, yyyy";
Is there a way to compute dateFormat only when the card has a value?
import React from "react";
import Child from "./Child";
import { format, isThisYear } from "date-fns";
const Sample = ({ card }) => {
const { createdAt, title } = card || {};
const dateFormat = isThisYear(createdAt) ? "MMM d" : "MMM d, yyyy";
const createdDate = format(createdAt, dateFormat);
return (
<>
<Child createdDate={createdDate} />
</>
);
};
export default Sample;
A:
This isn't necessarily a Meteor.js question, but more of a vanilla JavaScript one: you should check your variables for truthiness before using them (createdAt and/or title in this case).
I might approach your situation as below.
import React from "react";
import Child from "./Child";
import { format, isThisYear } from "date-fns";
const Sample = ({ card }) => {
  const { createdAt, title } = card || {};
  // Declare createdDate outside the if-block so it stays in scope below
  let createdDate;
  if (createdAt) {
    const dateFormat = isThisYear(createdAt) ? "MMM d" : "MMM d, yyyy";
    createdDate = format(createdAt, dateFormat);
  }
  if (createdDate) {
    return (
      <>
        <Child createdDate={createdDate} />
      </>
    );
  } else {
    // No usable date yet, so render the child without one
    return (
      <>
        <Child />
      </>
    );
  }
};
export default Sample;
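A shorter variant of the same idea, shown here only as a sketch (it assumes the same imports as above), is to return early while the card is still undefined:
const Sample = ({ card }) => {
  // Bail out until the card and its timestamp are available
  if (!card || !card.createdAt) return null;
  const dateFormat = isThisYear(card.createdAt) ? "MMM d" : "MMM d, yyyy";
  return <Child createdDate={format(card.createdAt, dateFormat)} />;
};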
|
Exception from Tracker recompute function, RangeError: Invalid time value | react, meteor,
|
I'm getting an error saying Exception from Tracker recompute function and RangeError: Invalid time value.
The problem is that sometimes the card is undefined, and since it is undefined, I can't run this: const dateFormat = isThisYear(createdAt) ? "MMM d" : "MMM d, yyyy";
Is there a way to compute dateFormat only when the card has a value?
import React from "react";
import Child from "./Child";
import { format, isThisYear } from "date-fns";
const Sample = ({ card }) => {
const { createdAt, title } = card || {};
const dateFormat = isThisYear(createdAt) ? "MMM d" : "MMM d, yyyy";
const createdDate = format(createdAt, dateFormat);
return (
<>
<Child createdDate={createdDate} />
</>
);
};
export default Sample;
|
[
"This isn't a necessarily a Meteor.js question , but more of a vanilla javascript question , you should check for truthiness of your variables before returning them . createdAt and(or) title in this case .\nI might approach you situation as below .\nimport React from \"react\";\nimport Child from \"./Child\";\nimport { format, isThisYear } from \"date-fns\";\n\nconst Sample = ({ card }) => {\n const { createdAt, title } = card || {};\n if(createdAt){\n const dateFormat = isThisYear(createdAt) ? \"MMM d\" : \"MMM d, yyyy\";\n const createdDate = format(createdAt, dateFormat);\n }\n if(createdDate){\n return (\n <>\n <Child createdDate={createdDate} />\n </>\n );\n }else {\n return (\n <>\n <Child createdDate={} />\n </>\n ); \n }\n};\n\nexport default Sample;\n\n"
] |
[
0
] |
[] |
[] |
[
"date_fns",
"meteor",
"reactjs"
] |
stackoverflow_0074550237_date_fns_meteor_reactjs.txt
|
Q:
How to use semaphores to control progress of 2 threads?
I learned the concept of semaphore. And I'm trying to implement it.
I've been trying to implement it for over 19 hours, but I can't do it, so I'm writing to ask for your help.
It checks the progress of the two threads, just as one would with a condition variable (CV), and only once both threads have printed their "entered" message can the subsequent operations resume.
Below is the full text of the code.
`
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <semaphore.h>
#include <pthread.h>
#include <stdlib.h>
void *child1(void *arg) {
printf("child thread 1 entered!\n");
// call semaphores here
printf("child thread 1 exits!\n");
return NULL;
}
void *child2(void *arg) {
printf("child thread 2: entered!\n");
// call semaphores here
printf("child thread 2: exits\n");
return NULL;
}
int main(int argc, char *argv[]) {
pthread_t p1, p2;
printf("parent thread: begin\n");
// init semaphores here
// // sem_init(&empty, 0, 0);
// // sem_init(&full, 0, 0); //Code tried but not working properly
pthread_create(&p1, NULL, child1, NULL);
pthread_create(&p2, NULL, child2, NULL);
pthread_join(p1, NULL);
pthread_join(p2, NULL);
printf("parent thread: end\n");
return 0;
}
`
Using ONLY TWO semaphores within the code, this attempts to control the threads' execution order so that both threads must output their "entered" message before either can exit.
The execution result I want is as follows.
>>>
parent thread: begin
child thread 1 entered!
child thread 2: entered!
child thread 2: exits
child thread 1 exits!
parent thread: end
>>>
parent thread: begin
child thread 2 entered!
child thread 1: entered!
child thread 1: exits
child thread 2 exits!
parent thread: end
Like this, I want to implement only the role of monitoring each other to see if they've entered.
I'd appreciate your help.
thanks.
A:
What you want to do (in semaphore-thinking) is:
thread 1 waits till thread 2 is done before it exits.
thread 2 waits till thread 1 is done before it exits.
Which leads to the following code, which I modified a bit because I cannot stand global variables.
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <semaphore.h>
#include <pthread.h>
#include <stdlib.h>
typedef struct SharedContext_tag {
sem_t t1done;
sem_t t2done;
} SharedContext_t;
void *child1(void* arg) {
SharedContext_t* ctx = (SharedContext_t*) arg;
printf("child thread 1: entered!\n");
sem_post(&ctx->t1done);
sem_wait(&ctx->t2done);
printf("child thread 1 exits!\n");
return NULL;
}
void *child2(void* arg) {
SharedContext_t* ctx = (SharedContext_t*) arg;
printf("child thread 2: entered!\n");
sem_post(&ctx->t2done);
sem_wait(&ctx->t1done);
printf("child thread 2: exits!\n");
return NULL;
}
int main(int argc, const char* argv[]) {
pthread_t p1;
pthread_t p2;
SharedContext_t context;
sem_init(&context.t1done, 0, 0);
sem_init(&context.t2done, 0, 0);
printf("parent thread: begin\n");
pthread_create(&p1, NULL, child1, &context);
pthread_create(&p2, NULL, child2, &context);
pthread_join(p1, NULL);
pthread_join(p2, NULL);
printf("parent thread: end\n");
sem_close(&context.t1done);
sem_close(&context.t2done);
return 0;
}
On my machine at this time (being careful here!), the output is as required:
> ./sema
parent thread: begin
child thread 1: entered!
child thread 2: entered!
child thread 2: exits!
child thread 1 exits!
parent thread: end
In order for it to work, you need to link against the POSIX threads library by adding -pthread to your compile command (on some older systems the semaphore functions additionally require -lrt, the real-time library).
> clang-13 -g -O0 -pthread -o sema sema.c
|
How to use semaphores to control progress of 2 threads?
|
I learned the concept of semaphore. And I'm trying to implement it.
I've been trying to implement it for over 19 hours, but I can't do it, so I'm writing to ask for your help.
It checks the progress of the two threads, just as one would with a condition variable (CV), and only once both threads have printed their "entered" message can the subsequent operations resume.
Below is the full text of the code.
`
#include <stdio.h>
#include <unistd.h>
#include <assert.h>
#include <semaphore.h>
#include <pthread.h>
#include <stdlib.h>
void *child1(void *arg) {
printf("child thread 1 entered!\n");
// call semaphores here
printf("child thread 1 exits!\n");
return NULL;
}
void *child2(void *arg) {
printf("child thread 2: entered!\n");
// call semaphores here
printf("child thread 2: exits\n");
return NULL;
}
int main(int argc, char *argv[]) {
pthread_t p1, p2;
printf("parent thread: begin\n");
// init semaphores here
// // sem_init(&empty, 0, 0);
// // sem_init(&full, 0, 0); //Code tried but not working properly
pthread_create(&p1, NULL, child1, NULL);
pthread_create(&p2, NULL, child2, NULL);
pthread_join(p1, NULL);
pthread_join(p2, NULL);
printf("parent thread: end\n");
return 0;
}
`
Using ONLY TWO semaphores within the code, this attempts to control the threads' execution order so that both threads must output their "entered" message before either can exit.
The execution result I want is as follows.
>>>
parent thread: begin
child thread 1 entered!
child thread 2: entered!
child thread 2: exits
child thread 1 exits!
parent thread: end
>>>
parent thread: begin
child thread 2 entered!
child thread 1: entered!
child thread 1: exits
child thread 2 exits!
parent thread: end
Like this, I want to implement only the role of monitoring each other to see if they've entered.
I'd appreciate your help.
thanks.
|
[
"What you want to do (in semaphoore-thinking) is:\n\nthread 1 waits till thread 2 is done before it exits.\nthread 2 waits till thread 1 is done before it exits.\n\nWhich leads to the following code, which I modified a bit because I cannot stand global variables.\n#include <stdio.h>\n#include <unistd.h>\n#include <assert.h>\n#include <semaphore.h>\n#include <pthread.h>\n#include <stdlib.h>\n\ntypedef struct SharedContext_tag {\n sem_t t1done;\n sem_t t2done;\n} SharedContext_t;\n\nvoid *child1(void* arg) {\n SharedContext_t* ctx = (SharedContext_t*) arg;\n printf(\"child thread 1: entered!\\n\");\n sem_post(&ctx->t1done);\n sem_wait(&ctx->t2done);\n printf(\"child thread 1 exits!\\n\");\n return NULL;\n}\n\nvoid *child2(void* arg) {\n SharedContext_t* ctx = (SharedContext_t*) arg; \n printf(\"child thread 2: entered!\\n\");\n sem_post(&ctx->t2done);\n sem_wait(&ctx->t1done);\n printf(\"child thread 2: exits!\\n\");\n return NULL;\n}\n\nint main(int argc, const char* argv[]) {\n pthread_t p1;\n pthread_t p2;\n SharedContext_t context;\n sem_init(&context.t1done, 0, 0);\n sem_init(&context.t2done, 0, 0);\n printf(\"parent thread: begin\\n\");\n pthread_create(&p1, NULL, child1, &context);\n pthread_create(&p2, NULL, child2, &context);\n pthread_join(p1, NULL);\n pthread_join(p2, NULL);\n printf(\"parent thread: end\\n\");\n sem_close(&context.t1done);\n sem_close(&context.t2done);\n return 0; \n}\n\nOn my machine at this time (being careful here!), the output is as required:\n> ./sema\nparent thread: begin\nchild thread 1: entered!\nchild thread 2: entered!\nchild thread 2: exits!\nchild thread 1 exits!\nparent thread: end\n\nIn order for it to work, you need to link against the real-time library, librt. You do so by adding -pthread to your compile command.\n> clang-13 -g -O0 -pthread -o sema sema.c\n\n"
] |
[
1
] |
[] |
[] |
[
"c",
"mutex",
"semaphore"
] |
stackoverflow_0074668935_c_mutex_semaphore.txt
|
Q:
How to integer decimal numbers?
I was trying to integrate a function with decimals, but it tells me leading zeros are not allowed, I can't eliminate the zeros like some solutions around suggest doing.
sp.Integral(24-0,03*x+0,006*x*3)
File "<ipython-input-81-adbc892d75d6>", line 1
sp.Integral(24-0,03*x+0,006*x*3)
^
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
I tried using int but that didn't work either.
A:
When writing decimal numbers in Python, you need to use a dot (.) instead of a comma (,). For example, you can write 0.03 instead of 0,03. This is because in Python the comma separates items, such as function arguments or the elements of a tuple or list.
To fix the error in your code, you can try changing the commas to dots. Here is how your code should look:
sp.Integral(24-0.03*x+0.006*x*3)
The "leading zeros" message appears because 0,03 is parsed as two separate values, 0 and 03, and an integer literal such as 03 with a leading zero is not allowed in Python 3. Writing the decimals with dots fixes both issues:
sp.Integral(24-0.03*x+0.006*x*3)
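As a minimal, self-contained sketch of the corrected call (defining the symbol x and evaluating with doit() are assumptions about how the surrounding code is set up):
import sympy as sp

x = sp.symbols('x')
# Dots as decimal separators; the commas from the original caused the error
integral = sp.Integral(24 - 0.03*x + 0.006*x*3, x)
print(integral.doit())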
I hope this helps!
|
How to integer decimal numbers?
|
I was trying to integrate a function with decimals, but it tells me leading zeros are not allowed, I can't eliminate the zeros like some solutions around suggest doing.
sp.Integral(24-0,03*x+0,006*x*3)
File "<ipython-input-81-adbc892d75d6>", line 1
sp.Integral(24-0,03*x+0,006*x*3)
^
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
I tried using int but that didn't work either.
|
[
"When using decimal numbers in Python, you need to use a dot (.) instead of a comma (,). For example, you can write 0.03 instead of 0,03. This is because in Python, the comma is used to separate items in a list.\nTo fix the error in your code, you can try changing the commas to dots. Here is how your code should look:\nsp.Integral(24-0.03*x+0.006*x*3)\n\nIt's also worth noting that leading zeros are not allowed in Python for decimal numbers. So you should remove the leading zeros in your code. For example, you can write 0.03 instead of 0,03. Here is how your code should look with the leading zeros removed:\nsp.Integral(24-0.03*x+0.006*x*3)\n\nI hope this helps!\n"
] |
[
0
] |
[] |
[] |
[
"error_handling",
"integer",
"leading_zero"
] |
stackoverflow_0074669623_error_handling_integer_leading_zero.txt
|
Q:
Pick main color from picture
I'm new to Dart/Flutter framework and I'm still exploring their possibilities.
I know in Android it's possible to take a picture and extract the main color value from it programmatically. (Android example)
I wonder, how would this be achieved in pure Dart? I would like it to be compatible with both iOS and Android operating system.
A:
Here's a simple function which returns the dominant color given an ImageProvider. This shows the basic usage of Palette Generator without all the boilerplate.
import 'package:palette_generator/palette_generator.dart';
// Calculate dominant color from ImageProvider
Future<Color> getImagePalette (ImageProvider imageProvider) async {
final PaletteGenerator paletteGenerator = await PaletteGenerator
.fromImageProvider(imageProvider);
return paletteGenerator.dominantColor.color;
}
Then use FutureBuilder on the output to build a Widget.
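For example, a minimal sketch of wiring that function into the widget tree with FutureBuilder (the asset path is only a placeholder assumption):
import 'package:flutter/material.dart';

// Shows the dominant color of an image once the Future resolves.
Widget dominantColorBox() {
  return FutureBuilder<Color>(
    future: getImagePalette(const AssetImage('assets/my_image.png')),
    builder: (context, snapshot) {
      if (!snapshot.hasData) {
        return const CircularProgressIndicator();
      }
      return Container(color: snapshot.data);
    },
  );
}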
A:
You have probably found a fix already, but for future searches on this question I suggest you check the Palette Generator package by the Flutter team.
I will try and give a simple explanation of how the code works but for a detailed example head over to the plugin's GitHub repo.
The example below is going to take an image then select the dominant colors from it and then display the colors
First, we add the required imports
import 'package:palette_generator/palette_generator.dart';
After that let's create the main application class.
class MyApp extends StatelessWidget {
// This widget is the root of your application.
@override
Widget build(BuildContext context) {
return MaterialApp(
...
home: const HomePage(
title: 'Colors from image',
image: AssetImage('assets/images/artwork_default.png',),
imageSize: Size(256.0, 170.0),
...
),
);
}
}
In the image field above, place the image that you want to extract the dominant colors from, i used the image shown here.
Next, we create the HomePage class
@immutable
class HomePage extends StatefulWidget {
/// Creates the home page.
const HomePage({
Key key,
this.title,
this.image,
this.imageSize,
}) : super(key: key);
final String title; //App title
final ImageProvider image; //Image provider to load the colors from
final Size imageSize; //Image dimensions
@override
_HomePageState createState() {
return _HomePageState();
}
}
Lets create the _HomePageState too
class _HomePageState extends State<HomePage> {
Rect region;
PaletteGenerator paletteGenerator;
final GlobalKey imageKey = GlobalKey();
@override
void initState() {
super.initState();
region = Offset.zero & widget.imageSize;
_updatePaletteGenerator(region);
}
Future<void> _updatePaletteGenerator(Rect newRegion) async {
paletteGenerator = await PaletteGenerator.fromImageProvider(
widget.image,
size: widget.imageSize,
region: newRegion,
maximumColorCount: 20,
);
setState(() {});
}
@override
Widget build(BuildContext context) {
return Scaffold(
backgroundColor: _kBackgroundColor,
appBar: AppBar(
title: Text(widget.title),
),
body: Column(
mainAxisSize: MainAxisSize.max,
mainAxisAlignment: MainAxisAlignment.start,
crossAxisAlignment: CrossAxisAlignment.center,
children: <Widget>[
new AspectRatio(
aspectRatio: 15 / 15,
child: Image(
key: imageKey,
image: widget.image,
),
),
Expanded(child: Swatches(generator: paletteGenerator)),
],
),
);
}
}
The code above just lays out the image and the Swatches widget, a class defined below. In initState, we first select the region from which the colors will be derived, which in our case is the whole image.
After that we create a class Swatches which receives a PalleteGenerator and draws the swatches for it.
class Swatches extends StatelessWidget {
const Swatches({Key key, this.generator}) : super(key: key);
// The PaletteGenerator that contains all of the swatches that we're going
// to display.
final PaletteGenerator generator;
@override
Widget build(BuildContext context) {
final List<Widget> swatches = <Widget>[];
//The generator field can be null, if so, we return an empty container
if (generator == null || generator.colors.isEmpty) {
return Container();
}
//Loop through the colors in the PaletteGenerator and add them to the list of swatches above
for (Color color in generator.colors) {
swatches.add(PaletteSwatch(color: color));
}
return Column(
mainAxisAlignment: MainAxisAlignment.center,
mainAxisSize: MainAxisSize.min,
crossAxisAlignment: CrossAxisAlignment.center,
children: <Widget>[
//All the colors,
Wrap(
children: swatches,
),
//The colors with ranking
Container(height: 30.0),
PaletteSwatch(label: 'Dominant', color: generator.dominantColor?.color),
PaletteSwatch(
label: 'Light Vibrant', color: generator.lightVibrantColor?.color),
PaletteSwatch(label: 'Vibrant', color: generator.vibrantColor?.color),
PaletteSwatch(
label: 'Dark Vibrant', color: generator.darkVibrantColor?.color),
PaletteSwatch(
label: 'Light Muted', color: generator.lightMutedColor?.color),
PaletteSwatch(label: 'Muted', color: generator.mutedColor?.color),
PaletteSwatch(
label: 'Dark Muted', color: generator.darkMutedColor?.color),
],
);
}
}
After that lets create a PaletteSwatch class. A palette swatch is just a square of color with an optional label
@immutable
class PaletteSwatch extends StatelessWidget {
// Creates a PaletteSwatch.
//
// If the [color] argument is omitted, then the swatch will show a
// placeholder instead, to indicate that there is no color.
const PaletteSwatch({
Key key,
this.color,
this.label,
}) : super(key: key);
// The color of the swatch. May be null.
final Color color;
// The optional label to display next to the swatch.
final String label;
@override
Widget build(BuildContext context) {
// Compute the "distance" of the color swatch and the background color
// so that we can put a border around those color swatches that are too
// close to the background's saturation and lightness. We ignore hue for
// the comparison.
final HSLColor hslColor = HSLColor.fromColor(color ?? Colors.transparent);
final HSLColor backgroundAsHsl = HSLColor.fromColor(_kBackgroundColor);
final double colorDistance = math.sqrt(
math.pow(hslColor.saturation - backgroundAsHsl.saturation, 2.0) +
math.pow(hslColor.lightness - backgroundAsHsl.lightness, 2.0));
Widget swatch = Padding(
padding: const EdgeInsets.all(2.0),
child: color == null
? const Placeholder(
fallbackWidth: 34.0,
fallbackHeight: 20.0,
color: Color(0xff404040),
strokeWidth: 2.0,
)
: Container(
decoration: BoxDecoration(
color: color,
border: Border.all(
width: 1.0,
color: _kPlaceholderColor,
style: colorDistance < 0.2
? BorderStyle.solid
: BorderStyle.none,
)),
width: 34.0,
height: 20.0,
),
);
if (label != null) {
swatch = ConstrainedBox(
constraints: const BoxConstraints(maxWidth: 130.0, minWidth: 130.0),
child: Row(
mainAxisAlignment: MainAxisAlignment.start,
children: <Widget>[
swatch,
Container(width: 5.0),
Text(label),
],
),
);
}
return swatch;
}
}
Hope this helps, thank you.
A:
//////////////////////////////
//
// 2019, roipeker.com
// screencast - demo simple image:
// https://youtu.be/EJyRH4_pY8I
//
// screencast - demo snapshot:
// https://youtu.be/-LxPcL7T61E
//
//////////////////////////////
import 'dart:async';
import 'dart:typed_data';
import 'dart:ui' as ui;
import 'dart:math';
import 'package:flutter/material.dart';
import 'package:flutter/rendering.dart';
import 'package:image/image.dart' as img;
import 'package:flutter/services.dart' show rootBundle;
void main() => runApp(const MaterialApp(home: MyApp()));
class MyApp extends StatefulWidget {
const MyApp({Key? key}) : super(key: key);
@override
State<StatefulWidget> createState() => _MyAppState();
}
class _MyAppState extends State<MyApp> {
String imagePath = 'assets/5.jpg';
GlobalKey imageKey = GlobalKey();
GlobalKey paintKey = GlobalKey();
// CHANGE THIS FLAG TO TEST BASIC IMAGE, AND SNAPSHOT.
bool useSnapshot = true;
// based on useSnapshot=true ? paintKey : imageKey ;
// this key is used in this example to keep the code shorter.
late GlobalKey currentKey;
final StreamController<Color> _stateController = StreamController<Color>();
//late img.Image photo ;
img.Image? photo;
@override
void initState() {
currentKey = useSnapshot ? paintKey : imageKey;
super.initState();
}
@override
Widget build(BuildContext context) {
final String title = useSnapshot ? "snapshot" : "basic";
return SafeArea(
child: Scaffold(
appBar: AppBar(title: Text("Color picker $title")),
body: StreamBuilder(
initialData: Colors.green[500],
stream: _stateController.stream,
builder: (buildContext, snapshot) {
Color selectedColor = snapshot.data as Color ?? Colors.green;
return Stack(
children: <Widget>[
RepaintBoundary(
key: paintKey,
child: GestureDetector(
onPanDown: (details) {
searchPixel(details.globalPosition);
},
onPanUpdate: (details) {
searchPixel(details.globalPosition);
},
child: Center(
child: Image.asset(
imagePath,
key: imageKey,
//color: Colors.red,
//colorBlendMode: BlendMode.hue,
//alignment: Alignment.bottomRight,
fit: BoxFit.contain,
//scale: .8,
),
),
),
),
Container(
margin: const EdgeInsets.all(70),
width: 50,
height: 50,
decoration: BoxDecoration(
shape: BoxShape.circle,
color: selectedColor!,
border: Border.all(width: 2.0, color: Colors.white),
boxShadow: [
const BoxShadow(
color: Colors.black12,
blurRadius: 4,
offset: Offset(0, 2))
]),
),
Positioned(
child: Text('${selectedColor}',
style: const TextStyle(
color: Colors.white,
backgroundColor: Colors.black54)),
left: 114,
top: 95,
),
],
);
}),
),
);
}
void searchPixel(Offset globalPosition) async {
if (photo == null) {
await (useSnapshot ? loadSnapshotBytes() : loadImageBundleBytes());
}
_calculatePixel(globalPosition);
}
void _calculatePixel(Offset globalPosition) {
RenderBox box = currentKey.currentContext!.findRenderObject() as RenderBox;
Offset localPosition = box.globalToLocal(globalPosition);
double px = localPosition.dx;
double py = localPosition.dy;
if (!useSnapshot) {
double widgetScale = box.size.width / photo!.width;
print(py);
px = (px / widgetScale);
py = (py / widgetScale);
}
int pixel32 = photo!.getPixelSafe(px.toInt(), py.toInt());
int hex = abgrToArgb(pixel32);
_stateController.add(Color(hex));
}
Future<void> loadImageBundleBytes() async {
ByteData imageBytes = await rootBundle.load(imagePath);
setImageBytes(imageBytes);
}
Future<void> loadSnapshotBytes() async {
RenderRepaintBoundary boxPaint =
paintKey.currentContext!.findRenderObject() as RenderRepaintBoundary;
//RenderObject? boxPaint = paintKey.currentContext.findRenderObject();
ui.Image capture = await boxPaint.toImage();
ByteData? imageBytes =
await capture.toByteData(format: ui.ImageByteFormat.png);
setImageBytes(imageBytes!);
capture.dispose();
}
void setImageBytes(ByteData imageBytes) {
List<int> values = imageBytes.buffer.asUint8List();
photo;
photo = img.decodeImage(values)!;
}
}
// image lib uses uses KML color format, convert #AABBGGRR to regular #AARRGGBB
int abgrToArgb(int argbColor) {
int r = (argbColor >> 16) & 0xFF;
int b = argbColor & 0xFF;
return (argbColor & 0xFF00FF00) | (b << 16) | r;
}
|
Pick main color from picture
|
I'm new to Dart/Flutter framework and I'm still exploring their possibilities.
I know in Android it's possible to take a picture and extract the main color value from it programmatically. (Android example)
I wonder, how would this be achieved in pure Dart? I would like it to be compatible with both iOS and Android operating system.
|
[
"Here's a simple function which returns the dominant color given an ImageProvider. This shows the basic usage of Palette Generator without all the boilerplate.\nimport 'package:palette_generator/palette_generator.dart';\n\n// Calculate dominant color from ImageProvider\nFuture<Color> getImagePalette (ImageProvider imageProvider) async {\n final PaletteGenerator paletteGenerator = await PaletteGenerator\n .fromImageProvider(imageProvider);\n return paletteGenerator.dominantColor.color;\n}\n\nThen use FutureBuilder on the output to build a Widget.\n",
"I probably think you got a fix but for future searches to this question, I suggest you check Pallete Generator by the flutter team. \nI will try and give a simple explanation of how the code works but for a detailed example head over to the plugin's GitHub repo.\nThe example below is going to take an image then select the dominant colors from it and then display the colors\nFirst, we add the required imports\nimport 'package:palette_generator/palette_generator.dart';\n\nAfter that let's create the main application class.\nclass MyApp extends StatelessWidget {\n // This widget is the root of your application.\n @override\n Widget build(BuildContext context) {\n return MaterialApp(\n ...\n home: const HomePage(\n title: 'Colors from image',\n image: AssetImage('assets/images/artwork_default.png',),\n imageSize: Size(256.0, 170.0),\n ...\n\n ),\n );\n }\n}\n\nIn the image field above, place the image that you want to extract the dominant colors from, i used the image shown here.\nNext, we create the HomePage class\n@immutable\nclass HomePage extends StatefulWidget {\n /// Creates the home page.\n const HomePage({\n Key key,\n this.title,\n this.image,\n this.imageSize,\n }) : super(key: key);\n\n final String title; //App title\n final ImageProvider image; //Image provider to load the colors from\n final Size imageSize; //Image dimensions\n\n @override\n _HomePageState createState() {\n return _HomePageState();\n }\n}\n\nLets create the _HomePageState too\nclass _HomePageState extends State<HomePage> {\n Rect region;\n PaletteGenerator paletteGenerator;\n\n final GlobalKey imageKey = GlobalKey();\n\n @override\n void initState() {\n super.initState();\n region = Offset.zero & widget.imageSize;\n _updatePaletteGenerator(region);\n }\n\n Future<void> _updatePaletteGenerator(Rect newRegion) async {\n paletteGenerator = await PaletteGenerator.fromImageProvider(\n widget.image,\n size: widget.imageSize,\n region: newRegion,\n maximumColorCount: 20,\n );\n setState(() {});\n }\n\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n backgroundColor: _kBackgroundColor,\n appBar: AppBar(\n title: Text(widget.title),\n ),\n body: Column(\n mainAxisSize: MainAxisSize.max,\n mainAxisAlignment: MainAxisAlignment.start,\n crossAxisAlignment: CrossAxisAlignment.center,\n children: <Widget>[\n new AspectRatio(\n aspectRatio: 15 / 15,\n child: Image(\n key: imageKey,\n image: widget.image,\n ),\n ),\n Expanded(child: Swatches(generator: paletteGenerator)),\n ],\n ),\n );\n }\n}\n\nThe code above just lays out the image and the Swatches which is a class defined below. In initState, we first select a region which the colors will be derived from which in our case is the whole image.\nAfter that we create a class Swatches which receives a PalleteGenerator and draws the swatches for it. 
\nclass Swatches extends StatelessWidget {\n\n const Swatches({Key key, this.generator}) : super(key: key);\n\n // The PaletteGenerator that contains all of the swatches that we're going\n // to display.\n final PaletteGenerator generator;\n\n @override\n Widget build(BuildContext context) {\n final List<Widget> swatches = <Widget>[];\n //The generator field can be null, if so, we return an empty container\n if (generator == null || generator.colors.isEmpty) {\n return Container();\n }\n //Loop through the colors in the PaletteGenerator and add them to the list of swatches above\n for (Color color in generator.colors) {\n swatches.add(PaletteSwatch(color: color));\n }\n return Column(\n mainAxisAlignment: MainAxisAlignment.center,\n mainAxisSize: MainAxisSize.min,\n crossAxisAlignment: CrossAxisAlignment.center,\n children: <Widget>[\n //All the colors,\n Wrap(\n children: swatches,\n ),\n //The colors with ranking\n Container(height: 30.0),\n PaletteSwatch(label: 'Dominant', color: generator.dominantColor?.color),\n PaletteSwatch(\n label: 'Light Vibrant', color: generator.lightVibrantColor?.color),\n PaletteSwatch(label: 'Vibrant', color: generator.vibrantColor?.color),\n PaletteSwatch(\n label: 'Dark Vibrant', color: generator.darkVibrantColor?.color),\n PaletteSwatch(\n label: 'Light Muted', color: generator.lightMutedColor?.color),\n PaletteSwatch(label: 'Muted', color: generator.mutedColor?.color),\n PaletteSwatch(\n label: 'Dark Muted', color: generator.darkMutedColor?.color),\n ],\n );\n }\n}\n\nAfter that lets create a PaletteSwatch class. A palette swatch is just a square of color with an optional label\n@immutable\nclass PaletteSwatch extends StatelessWidget {\n // Creates a PaletteSwatch.\n //\n // If the [color] argument is omitted, then the swatch will show a\n // placeholder instead, to indicate that there is no color.\n const PaletteSwatch({\n Key key,\n this.color,\n this.label,\n }) : super(key: key);\n\n // The color of the swatch. May be null.\n final Color color;\n\n // The optional label to display next to the swatch.\n final String label;\n\n @override\n Widget build(BuildContext context) {\n // Compute the \"distance\" of the color swatch and the background color\n // so that we can put a border around those color swatches that are too\n // close to the background's saturation and lightness. We ignore hue for\n // the comparison.\n final HSLColor hslColor = HSLColor.fromColor(color ?? Colors.transparent);\n final HSLColor backgroundAsHsl = HSLColor.fromColor(_kBackgroundColor);\n final double colorDistance = math.sqrt(\n math.pow(hslColor.saturation - backgroundAsHsl.saturation, 2.0) +\n math.pow(hslColor.lightness - backgroundAsHsl.lightness, 2.0));\n\n Widget swatch = Padding(\n padding: const EdgeInsets.all(2.0),\n child: color == null\n ? const Placeholder(\n fallbackWidth: 34.0,\n fallbackHeight: 20.0,\n color: Color(0xff404040),\n strokeWidth: 2.0,\n )\n : Container(\n decoration: BoxDecoration(\n color: color,\n border: Border.all(\n width: 1.0,\n color: _kPlaceholderColor,\n style: colorDistance < 0.2\n ? BorderStyle.solid\n : BorderStyle.none,\n )),\n width: 34.0,\n height: 20.0,\n ),\n );\n\n if (label != null) {\n swatch = ConstrainedBox(\n constraints: const BoxConstraints(maxWidth: 130.0, minWidth: 130.0),\n child: Row(\n mainAxisAlignment: MainAxisAlignment.start,\n children: <Widget>[\n swatch,\n Container(width: 5.0),\n Text(label),\n ],\n ),\n );\n }\n return swatch;\n }\n}\n\nHope this helps, thank you.\n",
" //////////////////////////////\n//\n// 2019, roipeker.com\n// screencast - demo simple image:\n// https://youtu.be/EJyRH4_pY8I\n//\n// screencast - demo snapshot:\n// https://youtu.be/-LxPcL7T61E\n//\n//////////////////////////////\n\nimport 'dart:async';\nimport 'dart:typed_data';\nimport 'dart:ui' as ui;\nimport 'dart:math';\n\nimport 'package:flutter/material.dart';\nimport 'package:flutter/rendering.dart';\nimport 'package:image/image.dart' as img;\nimport 'package:flutter/services.dart' show rootBundle;\n\nvoid main() => runApp(const MaterialApp(home: MyApp()));\n\nclass MyApp extends StatefulWidget {\n const MyApp({Key? key}) : super(key: key);\n\n @override\n State<StatefulWidget> createState() => _MyAppState();\n}\n\nclass _MyAppState extends State<MyApp> {\n String imagePath = 'assets/5.jpg';\n GlobalKey imageKey = GlobalKey();\n GlobalKey paintKey = GlobalKey();\n\n // CHANGE THIS FLAG TO TEST BASIC IMAGE, AND SNAPSHOT.\n bool useSnapshot = true;\n\n // based on useSnapshot=true ? paintKey : imageKey ;\n // this key is used in this example to keep the code shorter.\n late GlobalKey currentKey;\n\n final StreamController<Color> _stateController = StreamController<Color>();\n //late img.Image photo ;\n img.Image? photo;\n\n @override\n void initState() {\n currentKey = useSnapshot ? paintKey : imageKey;\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n final String title = useSnapshot ? \"snapshot\" : \"basic\";\n return SafeArea(\n child: Scaffold(\n appBar: AppBar(title: Text(\"Color picker $title\")),\n body: StreamBuilder(\n initialData: Colors.green[500],\n stream: _stateController.stream,\n builder: (buildContext, snapshot) {\n Color selectedColor = snapshot.data as Color ?? Colors.green;\n return Stack(\n children: <Widget>[\n RepaintBoundary(\n key: paintKey,\n child: GestureDetector(\n onPanDown: (details) {\n searchPixel(details.globalPosition);\n },\n onPanUpdate: (details) {\n searchPixel(details.globalPosition);\n },\n child: Center(\n child: Image.asset(\n imagePath,\n key: imageKey,\n //color: Colors.red,\n //colorBlendMode: BlendMode.hue,\n //alignment: Alignment.bottomRight,\n fit: BoxFit.contain,\n //scale: .8,\n ),\n ),\n ),\n ),\n Container(\n margin: const EdgeInsets.all(70),\n width: 50,\n height: 50,\n decoration: BoxDecoration(\n shape: BoxShape.circle,\n color: selectedColor!,\n border: Border.all(width: 2.0, color: Colors.white),\n boxShadow: [\n const BoxShadow(\n color: Colors.black12,\n blurRadius: 4,\n offset: Offset(0, 2))\n ]),\n ),\n Positioned(\n child: Text('${selectedColor}',\n style: const TextStyle(\n color: Colors.white,\n backgroundColor: Colors.black54)),\n left: 114,\n top: 95,\n ),\n ],\n );\n }),\n ),\n );\n }\n\n void searchPixel(Offset globalPosition) async {\n if (photo == null) {\n await (useSnapshot ? 
loadSnapshotBytes() : loadImageBundleBytes());\n }\n _calculatePixel(globalPosition);\n }\n\n void _calculatePixel(Offset globalPosition) {\n RenderBox box = currentKey.currentContext!.findRenderObject() as RenderBox;\n Offset localPosition = box.globalToLocal(globalPosition);\n\n double px = localPosition.dx;\n double py = localPosition.dy;\n\n if (!useSnapshot) {\n double widgetScale = box.size.width / photo!.width;\n print(py);\n px = (px / widgetScale);\n py = (py / widgetScale);\n }\n\n int pixel32 = photo!.getPixelSafe(px.toInt(), py.toInt());\n int hex = abgrToArgb(pixel32);\n\n _stateController.add(Color(hex));\n }\n\n Future<void> loadImageBundleBytes() async {\n ByteData imageBytes = await rootBundle.load(imagePath);\n setImageBytes(imageBytes);\n }\n\n Future<void> loadSnapshotBytes() async {\n RenderRepaintBoundary boxPaint =\n paintKey.currentContext!.findRenderObject() as RenderRepaintBoundary;\n //RenderObject? boxPaint = paintKey.currentContext.findRenderObject();\n ui.Image capture = await boxPaint.toImage();\n\n ByteData? imageBytes =\n await capture.toByteData(format: ui.ImageByteFormat.png);\n setImageBytes(imageBytes!);\n capture.dispose();\n }\n\n void setImageBytes(ByteData imageBytes) {\n List<int> values = imageBytes.buffer.asUint8List();\n photo;\n photo = img.decodeImage(values)!;\n }\n}\n\n// image lib uses uses KML color format, convert #AABBGGRR to regular #AARRGGBB\nint abgrToArgb(int argbColor) {\n int r = (argbColor >> 16) & 0xFF;\n int b = argbColor & 0xFF;\n return (argbColor & 0xFF00FF00) | (b << 16) | r;\n}\n\n"
] |
[
34,
26,
0
] |
[] |
[] |
[
"android",
"dart",
"flutter",
"ios"
] |
stackoverflow_0050449610_android_dart_flutter_ios.txt
|
Q:
Persisting sessions across subdomains in Laravel 5
Using 5.0
in config/session.php I have set 'domain' => '.example.com' but it is not working. I cannot persist a session on even one domain like this.
My site has many subdomains:
vancouver.example.com
newyork.example.com
etc... they are hosted on the same server and are the same Laravel app (share the same storage directory)
I login with the correct credentials, upon which the app redirects to another page on the site, and I have no session at that point. var_dump(Auth::user()) shows null even though I logged in with the correct credentials.
storage/framework/sessions shows 14 different files there, they are all for me and I cleared them out before I started testing this.
I'll attach my AuthController@postLogin method below, which works fine if session.php 'domain' => null
public function postLogin(Request $request)
{
$this->validate($request, [
'email' => 'required|email', 'password' => 'required',
]);
$credentials = $request->only('email', 'password');
if ($this->auth->attempt($credentials, $request->has('remember'))) {
Session::flash('message', 'You are now logged in.');
Session::flash('status', 'success');
if (str_contains($_SERVER['HTTP_REFERER'], '?goto=')) {
$params = explode('?', $_SERVER['HTTP_REFERER'])[1];
$target = explode('=', $params)[1];
} else {
$target = '/';
}
return redirect($target);
}
return redirect($this->loginPath())
->withInput($request->only('email', 'remember'))
->withErrors([
'email' => $this->getFailedLoginMessage(),
]);
}
A:
Figured it out. Update domain => '.example.com' in session.php and clear the cookies for the site in question.
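For clarity, that is this line in config/session.php (a sketch; the domain value is a placeholder):
// config/session.php
'domain' => '.example.com',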
A:
@gadss
you need to add session table like this
php artisan session:table
composer dump-autoload
php artisan migrate
and change .env to
SESSION_DRIVER=database
also modify config/session.php
'driver' => env('SESSION_DRIVER', 'database') and
'domain' => '.yourdomain.com'
after that clear your browser's cache and cookies.
A:
You'll need to update the session configuration to persist the session in domain-wide including subdomains. Follow the steps given below.
Go to config/session.php and update the domain with prefix . as config => '.your-domain.com'.
Then clear your application cache, Open the Chrome DevTool and Go to Application > Application > Clear Storage. You'll need to clear out the previous cookies also.
run artisan command php artisan config:cache or php artisan config:clear to drop previously cached laravel application configs.
If you are using database as the session driver, You need to create a session table for that. run command php artisan session:table to generate the session table migration and then migrate it using php artisan migrate. Then perform the three steps given above.
A:
With Laravel 8 it becomes more simplier :
Add SESSION_DOMAIN to your .env file :
SESSION_DOMAIN=.yourdomain.tld
Clear configuration cache :
php artisan config:cache
Delete your browser's session cookies; the session then becomes shared between all your subdomains.
In my case I used this to auto-login the user on a subdomain once the account was created on the www. domain. Worked fine.
A:
Have you tried storing the sessions in the database, memcached, or redis instead of in files? I had a similar situation to yours and storing sessions in the database solved the issue for me.
For some reason Laravel's session driver doesn't handle cross domain sessions correctly when using the file driver.
A:
If someone still has the problem with the subdomain cookie, try changing the session cookie name in config/session.php.
A:
If someone needs to sync sessions across subdomains with different Laravel applications sharing the same database:
Follow all the instructions of @Kiran Maniya.
Then you have to keep the same application name in order to get the same session cookie name, or just change the cookie config in config/session.php.
You can hardcode it if keeping the same name is not possible.
'cookie' => env(
'SESSION_COOKIE',
Str::slug(env('APP_NAME', 'laravel'), '_').'_session'
)
to something like:
'cookie' => env(
'SESSION_COOKIE',
'session_sharing_application_session'
)
|
Persisting sessions across subdomains in Laravel 5
|
Using 5.0
in config/session.php I have set 'domain' => '.example.com' but it is not working. I cannot persist a session on even one domain like this.
My site has many subdomains:
vancouver.example.com
newyork.example.com
etc... they are hosted on the same server and are the same Laravel app (share the same storage directory)
I login with the correct credentials, upon which the app redirects to another page on the site, and I have no session at that point. var_dump(Auth::user()) shows null even though I logged in with the correct credentials.
storage/framework/sessions shows 14 different files there, they are all for me and I cleared them out before I started testing this.
I'll attach my AuthController@postLogin method below, which works fine if session.php 'domain' => null
public function postLogin(Request $request)
{
$this->validate($request, [
'email' => 'required|email', 'password' => 'required',
]);
$credentials = $request->only('email', 'password');
if ($this->auth->attempt($credentials, $request->has('remember'))) {
Session::flash('message', 'You are now logged in.');
Session::flash('status', 'success');
if (str_contains($_SERVER['HTTP_REFERER'], '?goto=')) {
$params = explode('?', $_SERVER['HTTP_REFERER'])[1];
$target = explode('=', $params)[1];
} else {
$target = '/';
}
return redirect($target);
}
return redirect($this->loginPath())
->withInput($request->only('email', 'remember'))
->withErrors([
'email' => $this->getFailedLoginMessage(),
]);
}
|
[
"Figured it out. Update domain => '.example.com' in session.php and clear the cookies for the site in question.\n",
"@gadss \nyou need to add session table like this\nphp artisan session:table\n\ncomposer dump-autoload\n\nphp artisan migrate\n\nand change .env to\nSESSION_DRIVER=database\nalso modify config/session.php\n'driver' => env('SESSION_DRIVER', 'database') and\n'domain' => '.yourdomain.com'\n\nafter that clear your browser's cache and cookies.\n",
"You'll need to update the session configuration to persist the session in domain-wide including subdomains. Follow the steps given below.\n\nGo to config/session.php and update the domain with prefix . as config => '.your-domain.com'.\nThen clear your application cache, Open the Chrome DevTool and Go to Application > Application > Clear Storage. You'll need to clear out the previous cookies also.\nrun artisan command php artisan config:cache or php artisan config:clear to drop previously cached laravel application configs.\n\nIf you are using database as the session driver, You need to create a session table for that. run command php artisan session:table to generate the session table migration and then migrate it using php artisan migrate. Then perform the three steps given above.\n",
"With Laravel 8 it becomes more simplier :\nAdd SESSION_DOMAIN to your .env file :\nSESSION_DOMAIN=.yourdomain.tld\n\nClear configuration cache :\nphp artisan config:cache\n\nDelete your browser sessions cookies, then session become shared between all your subdomains.\nIn my case I used to AutoLogin user to subdomain once account is created on www. domain. Worked fine.\n",
"Have you tried storing the sessions in the database, memcached, or redis instead of in files? I had a similar situation to yours and storing sessions in the database solved the issue for me. \nFor some reason Laravel's session driver doesn't handle cross domain sessions correctly when using the file driver.\n",
"If someone still gets the problem with subdomain cookie. Try to change Session Cookie Name in config/session.php\n",
"\nIf someone needs to sync session in subdomains with different laravel application sharing same database\n\nFollow all the instructions of @Kiran Maniya\nThen you have to keep same application name in order to get same session name. Or just change the cookie config in config/session.php\nYou can hardcode it if keeping same name is not possible.\n'cookie' => env(\n 'SESSION_COOKIE',\n Str::slug(env('APP_NAME', 'laravel'), '_').'_session'\n) \n\nto something like:\n'cookie' => env(\n 'SESSION_COOKIE',\n 'session_sharing_application_session'\n) \n\n"
] |
[
70,
21,
10,
8,
4,
2,
0
] |
[] |
[] |
[
"laravel",
"laravel_5"
] |
stackoverflow_0030338518_laravel_laravel_5.txt
|
Q:
Print a sequence using a loop in C++. 1,2,5,6,9,10,13,14,17,18
Please help write a C++ Program to print this sequence
1,2,5,6,9,10,13,14,17,18 up to 500. I need it for my homework as a student.
I tried
#include <iostream>
using namespace std;
int main()
{
for (int i = 1; i < 454; i++) {
if (i = i + 1) {
continue;
}
}
return 0;
}
A:
I guess this is what you were looking for
#include <iostream>
int main()
{
for (int i = 1; i <= 500; i++) {
if (i % 4 == 1 || i % 4 == 2) {
std::cout << i << ',';
}
}
std::cout << '\n';
}
If the remainder after dividing a number by 4 is 1 or 2 then print the number.
A:
Assuming the sequence is defined as alternating increments of 1 and 3, i.e., +1, +3, +1, +3, +1, ... perhaps you can use a while loop:
#include <iostream>
int main()
{
int i = 0, x = 1;
while (x <= 500)
{
std::cout << x << '\n'; // Print current value of x.
x += i == 1 ? 3 : 1; // Increment x based on value of i.
i ^= 1; // Toggle i between 1 and 0.
}
}
Output:
1
2
5
6
9
10
13
14
17
18
21
22
25
26
29
30
33
34
37
38
41
42
45
46
49
50
53
54
57
58
61
62
65
66
69
70
73
74
77
78
81
82
85
86
89
90
93
94
97
98
101
102
105
106
109
110
113
114
117
118
121
122
125
126
129
130
133
134
137
138
141
142
145
146
149
150
153
154
157
158
161
162
165
166
169
170
173
174
177
178
181
182
185
186
189
190
193
194
197
198
201
202
205
206
209
210
213
214
217
218
221
222
225
226
229
230
233
234
237
238
241
242
245
246
249
250
253
254
257
258
261
262
265
266
269
270
273
274
277
278
281
282
285
286
289
290
293
294
297
298
301
302
305
306
309
310
313
314
317
318
321
322
325
326
329
330
333
334
337
338
341
342
345
346
349
350
353
354
357
358
361
362
365
366
369
370
373
374
377
378
381
382
385
386
389
390
393
394
397
398
401
402
405
406
409
410
413
414
417
418
421
422
425
426
429
430
433
434
437
438
441
442
445
446
449
450
453
454
457
458
461
462
465
466
469
470
473
474
477
478
481
482
485
486
489
490
493
494
497
498
|
Print a sequence using a loop in C++. 1,2,5,6,9,10,13,14,17,18
|
Please help write a C++ Program to print this sequence
1,2,5,6,9,10,13,14,17,18 up to 500. I need it for my homework as a student.
I tried
#include <iostream>
using namespace std;
int main()
{
for (int i = 1; i < 454; i++) {
if (i = i + 1) {
continue;
}
}
return 0;
}
|
[
"I guess this is what you were looking for\n#include <iostream>\n\nint main()\n{\n for (int i = 1; i <= 500; i++) {\n if (i % 4 == 1 || i % 4 == 2) {\n std::cout << i << ',';\n }\n }\n std::cout << '\\n';\n}\n\nIf the remainder after dividing a number by 4 is 1 or 2 then print the number.\n",
"Assuming the sequence is defined as alternating increments of 1 and 3, i.e., +1, +3, +1, +3, +1, ... perhaps you can use a while loop:\n#include <iostream>\n\nint main()\n{\n int i = 0, x = 1;\n while (x <= 500)\n {\n std::cout << x << '\\n'; // Print current value of x.\n x += i == 1 ? 3 : 1; // Increment x based on value of i.\n i ^= 1; // Toggle i between 1 and 0.\n }\n}\n\nOutput:\n1\n2\n5\n6\n9\n10\n13\n14\n17\n18\n21\n22\n25\n26\n29\n30\n33\n34\n37\n38\n41\n42\n45\n46\n49\n50\n53\n54\n57\n58\n61\n62\n65\n66\n69\n70\n73\n74\n77\n78\n81\n82\n85\n86\n89\n90\n93\n94\n97\n98\n101\n102\n105\n106\n109\n110\n113\n114\n117\n118\n121\n122\n125\n126\n129\n130\n133\n134\n137\n138\n141\n142\n145\n146\n149\n150\n153\n154\n157\n158\n161\n162\n165\n166\n169\n170\n173\n174\n177\n178\n181\n182\n185\n186\n189\n190\n193\n194\n197\n198\n201\n202\n205\n206\n209\n210\n213\n214\n217\n218\n221\n222\n225\n226\n229\n230\n233\n234\n237\n238\n241\n242\n245\n246\n249\n250\n253\n254\n257\n258\n261\n262\n265\n266\n269\n270\n273\n274\n277\n278\n281\n282\n285\n286\n289\n290\n293\n294\n297\n298\n301\n302\n305\n306\n309\n310\n313\n314\n317\n318\n321\n322\n325\n326\n329\n330\n333\n334\n337\n338\n341\n342\n345\n346\n349\n350\n353\n354\n357\n358\n361\n362\n365\n366\n369\n370\n373\n374\n377\n378\n381\n382\n385\n386\n389\n390\n393\n394\n397\n398\n401\n402\n405\n406\n409\n410\n413\n414\n417\n418\n421\n422\n425\n426\n429\n430\n433\n434\n437\n438\n441\n442\n445\n446\n449\n450\n453\n454\n457\n458\n461\n462\n465\n466\n469\n470\n473\n474\n477\n478\n481\n482\n485\n486\n489\n490\n493\n494\n497\n498\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"c++",
"for_loop",
"loops",
"while_loop"
] |
stackoverflow_0074669191_c++_for_loop_loops_while_loop.txt
|
Q:
giving precedence to arithmetic operators in python3
I am implementing a simple arithmetic calculation on a server which includes add, sub, mul and Div, for the simplicity purposes no other operations are being done and also no parentheses "()" to change the precedence. The input I will have for the client is something like "1-2.1+3.6*5+10/2"(no dot product, 2.1 or 3.6 is a floating number). I have created a function to send the operands and operators but at a time I can send udp message of 1 computation in the format of (num1,op,num2)
import struct
import socket
ip = "127.0.0.1"
port = 11200
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0) #creating socket
print("Do Ctrl+c to exit the program !!")
def sendRecv( num1, op, num2):
#sending udp message with num1,op and num
#receiving udp message with the result as res
res = s.recieve()
return res
sendRecv(in1, in_op, in2)
I was able to split the operators and operands using the regular split and separated them like
str = ['1', '-', '2.1', '+', '3.6', '*', '5', '+', '10', '/', '2']
Since multiplication and division take precedence over addition and subtraction, (3.6, *, 5) should be sent first, followed by the division. I am trying to write a while loop with while(len(str) > 0), and I am trying to understand how I can send the multiplication first, store the intermediate result in the list itself, and repeat the process till all the computations are sent through messages. I am not allowed to perform any operation on the client side; I can only send values to "SendRecv()". Any suggestions or ideas on how to proceed will be helpful.
Thanks in advance
A:
Recursively split the expression according to operator precedence:
def do_calc(num1, op, num2):
# Stub to represent the server call that performs one operation.
# Note that actually using eval() in your backend is REALLY BAD.
expr = f"{num1} {op} {num2}"
res = str(eval(expr))
print(expr, "=", res)
return res
def calc_loop(tokens):
if len(tokens) == 1:
return tokens[0]
if len(tokens) == 3:
return do_calc(*tokens)
for ops in "-+", "/*":
if any(op in tokens for op in ops):
op_idx = max(tokens.index(op) for op in ops if op in tokens)
return calc_loop([
calc_loop(tokens[:op_idx]),
tokens[op_idx],
calc_loop(tokens[op_idx+1:]),
])
expr = ['1', '-', '2.1', '+', '3.6', '*', '5', '+', '10', '/', '2']
print(' '.join(expr), '=', calc_loop(expr))
prints:
1 - 2.1 = -1.1
3.6 * 5 = 18.0
10 / 2 = 5.0
18.0 + 5.0 = 23.0
-1.1 + 23.0 = 21.9
1 - 2.1 + 3.6 * 5 + 10 / 2 = 21.9
A:
Arrange to process only specific operands in a given pass. Make multiple passes, each with different sets of operators. Splice in the answers as they happen.
def doWork(lst, ops):
lst = list(lst)
idx = 0
while idx < len(lst):
        if lst[idx] in ops:
            lst[idx-1:idx+2] = [sendRecv(*lst[idx-1:idx+2])]
else:
idx += 1
return lst
results = doWork(str, '*/')
results = doWork(results, '+-')
results = results[0]
A:
A typical use case for the classic shunting yard algorithm :
# operators and their precedences
ops = { '*': 2, '/': 2, '+': 1, '-': 1,}
# evaluate a stream of tokens
def evaluate(tokens):
vstack = []
ostack = []
def step():
v2 = vstack.pop()
v1 = vstack.pop()
op = ostack.pop()
vstack.append(sendRecv(v1, op, v2))
for tok in tokens:
if tok in ops:
if ostack and ops[ostack[-1]] >= ops[tok]:
step()
ostack.append(tok)
else:
vstack.append(tok)
while ostack:
step()
return vstack.pop()
# simulate the conversation with the server
def sendRecv(v1, op, v2):
res = eval(f'{v1} {op} {v2}')
return res
s = '3 + 4 * 2 + 3 / 5 + 6'
print(eval(s))
print(evaluate(s.split()))
|
giving precedence to arithmetic operators in python3
|
I am implementing a simple arithmetic calculation on a server which includes add, sub, mul and Div, for the simplicity purposes no other operations are being done and also no parentheses "()" to change the precedence. The input I will have for the client is something like "1-2.1+3.6*5+10/2"(no dot product, 2.1 or 3.6 is a floating number). I have created a function to send the operands and operators but at a time I can send udp message of 1 computation in the format of (num1,op,num2)
import struct
import socket
ip = "127.0.0.1"
port = 11200
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, 0) #creating socket
print("Do Ctrl+c to exit the program !!")
def sendRecv( num1, op, num2):
#sending udp message with num1,op and num
#receiving udp message with the result as res
res = s.recieve()
return res
sendRecv(in1, in_op, in2)
I was able to split the operators and operands using the regular split and separated them like
str = ['1', '-', '2.1', '+', '3.6', '*', '5', '+', '10', '/', '2']
since the multiplication and the division takes precedence over addition and subtraction (3.6, *, 5) should be sent first followed by the division, I am trying to write a while loop with while(len(str>0)), I am trying to understand how I can send multiplication first, store the intermediate result in the list itself and do a recurring function till all the computations are sent through message. I am not allowed to perform ny operation on client side, I can only send values to "SendRecv()". Any suggestions or ideas on how to proceed will be helpful.
Thanks in advance
|
[
"Recursively split the expression according to operator precedence:\ndef do_calc(num1, op, num2):\n # Stub to represent the server call that performs one operation.\n # Note that actually using eval() in your backend is REALLY BAD.\n expr = f\"{num1} {op} {num2}\" \n res = str(eval(expr))\n print(expr, \"=\", res)\n return res\n\ndef calc_loop(tokens):\n if len(tokens) == 1:\n return tokens[0]\n if len(tokens) == 3:\n return do_calc(*tokens)\n for ops in \"-+\", \"/*\":\n if any(op in tokens for op in ops):\n op_idx = max(tokens.index(op) for op in ops if op in tokens)\n return calc_loop([\n calc_loop(tokens[:op_idx]),\n tokens[op_idx],\n calc_loop(tokens[op_idx+1:]),\n ])\n\nexpr = ['1', '-', '2.1', '+', '3.6', '*', '5', '+', '10', '/', '2']\nprint(' '.join(expr), '=', calc_loop(expr))\n\nprints:\n1 - 2.1 = -1.1\n3.6 * 5 = 18.0\n10 / 2 = 5.0\n18.0 + 5.0 = 23.0\n-1.1 + 23.0 = 21.9\n1 - 2.1 + 3.6 * 5 + 10 / 2 = 21.9\n\n",
"Arrange to process only specific operands in a given pass. Make multiple passes, each with different sets of operators. Splice in the answers as they happen.\ndef doWork(lst, ops):\n lst = list(lst)\n idx = 0\n while idx < len(lst):\n if lst[i] in ops:\n lst[idx-1:idx+2] = sendRecv(*lst[idx-1:idx+2])\n else:\n idx += 1\n return lst\n\nresults = doWork(str, '*/')\nresults = doWork(results, '+-')\nresults = results[0]\n\n\n",
"A typical use case for the classic shunting yard algorithm :\n# operators and their precedences\nops = { '*': 2, '/': 2, '+': 1, '-': 1,}\n\n# evaluate a stream of tokens\ndef evaluate(tokens):\n vstack = []\n ostack = []\n\n def step():\n v2 = vstack.pop()\n v1 = vstack.pop()\n op = ostack.pop()\n vstack.append(sendRecv(v1, op, v2))\n\n for tok in tokens:\n if tok in ops:\n if ostack and ops[ostack[-1]] >= ops[tok]:\n step()\n ostack.append(tok)\n else:\n vstack.append(tok)\n\n while ostack:\n step()\n\n return vstack.pop()\n\n# simulate the conversation with the server\ndef sendRecv(v1, op, v2):\n res = eval(f'{v1} {op} {v2}')\n return res\n\ns = '3 + 4 * 2 + 3 / 5 + 6'\n\nprint(eval(s))\nprint(evaluate(s.split()))\n\n"
] |
[
1,
1,
1
] |
[] |
[] |
[
"list",
"python",
"python_3.x",
"sorting"
] |
stackoverflow_0074668808_list_python_python_3.x_sorting.txt
|
Q:
Unable to call fabricjs mouse-events on canvas in react
We are unable to call fabric.js events(mouse events) in react (like canvas.on, canvas.off) but those events are working when we are using them in constructor.
this.canvas.on('mouse:move', function() {
console.log('canvas event');
});
we don't get the console.log output when we use them in the factory method.
I tried to write the fabricjs canvas events in the factory method but we can't get the desired output.
we were expecting the console statement on mouse events.
A:
this keyword inside the factory method does not refer to the React component instance. This means that the this.canvas reference inside the factory method does not refer to the Fabric.js canvas instance, but to something else (most likely the global object).
Use an arrow function for the factory method, which does not have its own this binding.
class MyReactComponent extends React.Component {
constructor(props) {
super(props);
this.canvas = new fabric.Canvas('my-canvas');
}
componentDidMount() {
this.canvas.on('mouse:move', () => {
console.log('canvas event');
});
}
render() {
return <canvas id="my-canvas" width="300" height="300" />;
}
}
I replaced the factory method with the componentDidMount lifecycle method, which is called after the component has been rendered.
I called the on method on the this.canvas instance to attach a mouse move event handler. Because the event handler is an arrow function, the this keyword inside it refers to the React component instance, which means that the this.canvas reference refers to the Fabric.js canvas instance.
|
Unable to call fabricjs mouse-events on canvas in react
|
We are unable to call fabric.js events(mouse events) in react (like canvas.on, canvas.off) but those events are working when we are using them in constructor.
this.canvas.on('mouse:move', function() {
console.log('canvas event');
});
we don't get console.log when we are using in factory method.
I tried to write the fabricjs canvas events in the factory method but we can't get the desired output.
we were expecting the console statement on mouse events.
|
[
"this keyword inside the factory method does not refer to the React component instance. This means that the this.canvas reference inside the factory method does not refer to the Fabric.js canvas instance, but to something else (most likely the global object).\nUse an arrow function for the factory method, which does not have its own this binding.\nclass MyReactComponent extends React.Component {\n constructor(props) {\n super(props);\n this.canvas = new fabric.Canvas('my-canvas');\n }\n\n componentDidMount() {\n this.canvas.on('mouse:move', () => {\n console.log('canvas event');\n });\n }\n\n render() {\n return <canvas id=\"my-canvas\" width=\"300\" height=\"300\" />;\n }\n}\n\nI replaced the factory method with the componentDidMount lifecycle method, which is called after the component has been rendered.\nI called the on method on the this.canvas instance to attach a mouse move event handler. Because the componentDidMount method is an arrow function, the this keyword inside the event handler refers to the React component instance, which means that the this.canvas reference refers to the Fabric.js canvas instance.\n"
] |
[
0
] |
[] |
[] |
[
"canvasjs",
"fabric",
"html5_canvas",
"mouseevent",
"reactjs"
] |
stackoverflow_0074669592_canvasjs_fabric_html5_canvas_mouseevent_reactjs.txt
|
Q:
Does creating a GIT branch in 2 different ways give the same result? How to do this with worktree?
Lets say I create a GIT branch like this:
git clone <main repository>
cd <main repository>
git checkout develop
git branch test/udp_client
git checkout test/udp_client
Or like this:
git clone <main repository>
cd <main repository>
git branch develop/test/udp_client
git checkout develop/test/udp_client
Question A: Is the end result the same?
Question B: How would I do this using git worktree?
A:
Let's go step by step comparing the 2 code blocks. Block 1:
git checkout develop
git branch test/udp_client
git checkout test/udp_client
What it does: it checks out (or create from remote) branch develop. It creates a new branch test/udp_client (but not develop/test/udp_client; the name of the current branch is not used in creating a new branch) pointing to the same commit as the current branch (the current is develop). Then it checks out the new branch. The last 2 commands in the block can be combined into one command: git checkout -b test/udp_client. Branch develop is not required to be the currently checked out branch β the command git checkout -b test/udp_client develop creates a new branch pointing to the same commit as develop and checks out the new branch; so the command replaces all 3.
The second block
git branch develop/test/udp_client
git checkout develop/test/udp_client
is very similar. If we ignore the existence of branch develop in the repository what the block does is: it creates a new branch develop/test/udp_client pointing to the same commit as the current branch; the current branch is not necessary develop, it's most probably main or master. Then the code checks out the new branch.
Unfortunately there is already branch develop so the 2nd block fails with cryptic error message "fatal: Failed to lock ref for update: Not a directory" or "fatal: Failed to lock ref develop/test/udp_client for update: branch develop already exists". The problem is slashes are used in branch names exactly as path separators: branches are stored as files and branches with slashes are stored as directories with the leaf (last path component) as a file. You cannot have develop both as a directory and a file at the same time. I.e., if you have branch develop you cannot create develop/test/udp_client and vice versa β if you have branch develop/test/udp_client you cannot create branch develop.
I cannot answer Question B as I don't use worktrees β I use a lot of submodules and worktrees are incompatible with submodules so I just use separate clones.
|
Does creating a GIT branch in 2 different ways give the same result? How to do this with worktree?
|
Lets say I create a GIT branch like this:
git clone <main repository>
cd <main repository>
git checkout develop
git branch test/udp_client
git checkout test/udp_client
Or like this:
git clone <main repository>
cd <main repository>
git branch develop/test/udp_client
git checkout develop/test/udp_client
Question A: Is the end result the same?
Question B: How would I do this using git worktree?
|
[
"Let's go step by step comparing the 2 code blocks. Block β1:\ngit checkout develop\ngit branch test/udp_client\ngit checkout test/udp_client\n\nWhat it does: it checks out (or create from remote) branch develop. It creates a new branch test/udp_client (but not develop/test/udp_client; the name of the current branch is not used in creating a new branch) pointing to the same commit as the current branch (the current is develop). Then it checks out the new branch. The last 2 commands in the block can be combined into one command: git checkout -b test/udp_client. Branch develop is not required to be the currently checked out branch β the command git checkout -b test/udp_client develop creates a new branch pointing to the same commit as develop and checks out the new branch; so the command replaces all 3.\nThe second block\ngit branch develop/test/udp_client\ngit checkout develop/test/udp_client\n\nis very similar. If we ignore the existence of branch develop in the repository what the block does is: it creates a new branch develop/test/udp_client pointing to the same commit as the current branch; the current branch is not necessary develop, it's most probably main or master. Then the code checks out the new branch.\nUnfortunately there is already branch develop so the 2nd block fails with cryptic error message \"fatal: Failed to lock ref for update: Not a directory\" or \"fatal: Failed to lock ref develop/test/udp_client for update: branch develop already exists\". The problem is slashes are used in branch names exactly as path separators: branches are stored as files and branches with slashes are stored as directories with the leaf (last path component) as a file. You cannot have develop both as a directory and a file at the same time. I.e., if you have branch develop you cannot create develop/test/udp_client and vice versa β if you have branch develop/test/udp_client you cannot create branch develop.\nI cannot answer Question B as I don't use worktrees β I use a lot of submodules and worktrees are incompatible with submodules so I just use separate clones.\n"
] |
[
1
] |
[] |
[] |
[
"git"
] |
stackoverflow_0074665841_git.txt
|
Q:
Can a Map of data be passed as a parameter in go_router?
I want to pass a map of complex data as a parameter to a GoRoute(). However, from what I can see, the param is a String. I tried converting my Map -> String, but it immediately causes all sorts of errors due to format errors: Unexpected character (at character 2).
Im most likely going about this the incorrect way.
Is it even possible to send a Map of data as a parameter in GoRouter?
A:
It can be done: JsonEncode the map before passing it and then JsonDecode it on the other side; I had simply performed it incorrectly.
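The round-trip the answer describes is plain JSON serialization; a minimal Python sketch of the same encode-then-decode idea (Dart's jsonEncode/jsonDecode behave analogously, and the map contents here are made up):
import json

params = {"id": 7, "tags": ["a", "b"], "nested": {"x": 1}}
as_string = json.dumps(params)         # encode the map into a plain string parameter
round_tripped = json.loads(as_string)  # decode it back on the receiving side
assert round_tripped == params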
|
Can a Map of data be passed as a parameter in go_router?
|
I want to pass a map of complex data as a parameter to a GoRoute(). However, from what I can see, the param is a String. I tried converting my Map -> String, but it immediately causes all sorts of errors due to format errors: Unexpected character (at character 2).
Im most likely going about this the incorrect way.
Is it even possible to send a Map of data as a parameter in GoRouter?
|
[
"It can be done, but JsonEncode and then JsonDecode on the other side, I have performed it incorrectly\n"
] |
[
0
] |
[] |
[] |
[
"flutter",
"flutter_go_router"
] |
stackoverflow_0074669575_flutter_flutter_go_router.txt
|
Q:
Javascript Regular Expressions - Replace non-numeric characters
This works:
var.replace(/[^0-9]+/g, '');
That simple snippet will replace anything that is not a number with nothing.
But decimals are real too. So, I'm trying to figure out how to include a period.
I'm sure it's really simple, but my tests aren't working.
A:
Did you escape the period? var.replace(/[^0-9\.]+/g, '');
A:
Replacing something that is not a number is a little trickier than replacing something that is a number.
Those suggesting to simply add the dot, are ignoring the fact that . is also used as a period, so:
This is a test. 0.9, 1, 2, 3 will become .0.9123.
The specific regex in your problem will depend a lot on the purpose. If you only have a single number in your string, you could do this:
var.replace(/.*?(([0-9]*\.)?[0-9]+).*/g, "$1")
This finds the first number, and replaces the entire string with the matched number.
A:
Try this:
var.replace(/[^0-9\\.]+/g, '');
A:
There are a lot of correct answers already; just pointing out that you might need to account for negative signs too: add "\-" to any existing answer to allow for negative numbers.
A:
Try this:
var.replace(/[0-9]*\.?[0-9]+/g, '');
That only matches valid decimals (eg "1", "1.0", ".5", but not "1.0.22")
A:
If you don't want to catch IP address along with decimals:
var.replace(/[^0-9]+\\.?[0-9]*/g, '');
Which will only catch numerals with one or zero periods
A:
How about doing this:
var numbers = str.gsub(/[0-9]*\.?[0-9]+/, "#{0} ");
A:
Sweet and short inline replacing of non-numerical characters in the ASP.Net Textbox:
<asp:TextBox ID="txtJobNo" runat="server" class="TextBoxStyle" onkeyup="this.value=this.value.replace(/[^0-9]/g,'')" />
Alter the regex part as you'd like. Lots and lots of people complain about the cursor going straight to the end when using the arrow keys, but people tend to deal with this without noticing it: for instance, arrow... arrow... arrow... okay then... backspace, backspace, enter the new chars.
A:
Here are a couple of jQuery input class types I use:
$("input.intgr").keyup(function (e) { // Filter non-digits from input value.
if (/\D/g.test($(this).val())) $(this).val($(this).val().replace(/\D/g, ''));
});
$("input.nmbr").keyup(function (e) { // Filter non-numeric from input value.
var tVal=$(this).val();
if (tVal!="" && isNaN(tVal)){
tVal=(tVal.substr(0,1).replace(/[^0-9\.\-]/, '')+tVal.substr(1).replace(/[^0-9\.]/, ''));
var raVal=tVal.split(".")
if(raVal.length>2)
tVal=raVal[0]+"."+raVal.slice(1).join("");
$(this).val(tVal);
}
});
intgr allows only numeric - like other solutions here.
nmbr allows only positive/negative decimal. Negative must be the first character (you can add "+" to the filter if you need it), strips -3.6.23.333 to -3.623333
I'm putting nmbr up because I got tired of trying to find the way to keep only 1 decimal and negative in 1st position
A:
This one just worked for -ve to +ve numbers
<input type="text" oninput="this.value = this.value.replace(/[^0-9\-]+/g, '').replace(/(\..*)\./g, '$1');">
A:
I use this expression to exclude all non-numeric characters + keep negative numbers with minus sign.
variable.replace(/[^0-9.,\-]/g,'')
|
Javascript Regular Expressions - Replace non-numeric characters
|
This works:
var.replace(/[^0-9]+/g, '');
That simple snippet will replace anything that is not a number with nothing.
But decimals are real too. So, I'm trying to figure out how to include a period.
I'm sure it's really simple, but my tests aren't working.
|
[
"Did you escape the period? var.replace(/[^0-9\\.]+/g, '');\n",
"Replacing something that is not a number is a little trickier than replacing something that is a number.\nThose suggesting to simply add the dot, are ignoring the fact that . is also used as a period, so:\nThis is a test. 0.9, 1, 2, 3 will become .0.9123.\nThe specific regex in your problem will depend a lot on the purpose. If you only have a single number in your string, you could do this:\nvar.replace(/.*?(([0-9]*\\.)?[0-9]+).*/g, \"$1\")\nThis finds the first number, and replaces the entire string with the matched number.\n",
"Try this:\nvar.replace(/[^0-9\\\\.]+/g, '');\n\n",
"there's a lot of correct answers already, just pointing out that you might need to account for negative signs too.. \"\\-\" add that to any existing answer to allow for negative numbers. \n",
"Try this:\nvar.replace(/[0-9]*\\.?[0-9]+/g, '');\n\nThat only matches valid decimals (eg \"1\", \"1.0\", \".5\", but not \"1.0.22\")\n",
"If you don't want to catch IP address along with decimals:\nvar.replace(/[^0-9]+\\\\.?[0-9]*/g, '');\n\nWhich will only catch numerals with one or zero periods\n",
"How about doing this:\nvar numbers = str.gsub(/[0-9]*\\.?[0-9]+/, \"#{0} \");\n\n",
"Sweet and short inline replacing of non-numerical characters in the ASP.Net Textbox:\n <asp:TextBox ID=\"txtJobNo\" runat=\"server\" class=\"TextBoxStyle\" onkeyup=\"this.value=this.value.replace(/[^0-9]/g,'')\" />\n\nAlter the regex part as you'ld like. Lots and lots of people complain about the cursor going straight to the end when using the arrow keys, but people tend to deal with this without noticing it for instance, arrow... arrow... arrow... okay then... backspace back space, enter the new chars.\n",
"Here are a couple of jQuery input class types I use:\n$(\"input.intgr\").keyup(function (e) { // Filter non-digits from input value.\n if (/\\D/g.test($(this).val())) $(this).val($(this).val().replace(/\\D/g, ''));\n});\n$(\"input.nmbr\").keyup(function (e) { // Filter non-numeric from input value.\n var tVal=$(this).val();\n if (tVal!=\"\" && isNaN(tVal)){\n tVal=(tVal.substr(0,1).replace(/[^0-9\\.\\-]/, '')+tVal.substr(1).replace(/[^0-9\\.]/, ''));\n var raVal=tVal.split(\".\")\n if(raVal.length>2)\n tVal=raVal[0]+\".\"+raVal.slice(1).join(\"\");\n $(this).val(tVal);\n } \n});\n\nintgr allows only numeric - like other solutions here.\nnmbr allows only positive/negative decimal. Negative must be the first character (you can add \"+\" to the filter if you need it), strips -3.6.23.333 to -3.623333\nI'm putting nmbr up because I got tired of trying to find the way to keep only 1 decimal and negative in 1st position\n",
"This one just worked for -ve to +ve numbers\n\n\n<input type=\"text\" oninput=\"this.value = this.value.replace(/[^0-9\\-]+/g, '').replace(/(\\..*)\\./g, '$1');\">\n\n\n\n",
"I use this expression to exclude all non-numeric characters + keep negative numbers with minus sign.\nvariable.replace(/[^0-9.,\\-]/g,'')\n\n"
] |
[
118,
10,
6,
2,
1,
1,
0,
0,
0,
0,
0
] |
[] |
[] |
[
"javascript",
"regex"
] |
stackoverflow_0002555059_javascript_regex.txt
|
Q:
asdf: how to specify loading of entire folder instead of each separate file
I have a asdf system definition like this:
(asdf:defsystem #:my-package
:serial t...
:components ((:file "package")
(:file "macros")..........
(:file "tests/test-debug")
(:file "tests/test-regression") ))
Instead of specifying each file in the folder tests separately, I would like to specify all the files in the folder tests. Something like tests/* or (:directory "tests")
Can that be done?
A:
In the asdf source repository, see contrib/wild-modules.lisp
A:
Yes, you can specify all the files in a directory using the :directory component type in an asdf system definition. This will include all the files in the specified directory in the system, allowing you to avoid listing each file individually.
Here is an example of how you might modify your asdf system definition to use the :directory component type:
(asdf:defsystem #:my-package
:serial t...
:components ((:file "package")
(:file "macros")
...
(:directory "tests"))
:perform (test))
In this example, the :directory component type is used to include all the files in the tests directory in the system. This means that you no longer need to specify each file in the tests directory individually.
Keep in mind that the :directory component type will include all files in the specified directory, including any subdirectories. If you only want to include files in the top-level directory, you can use the :glob component type instead. This allows you to specify a pattern that matches the files you want to include, rather than including all files in the directory.
For example, you could use the :glob component type to include only files with the .lisp extension in the tests directory, like this:
(asdf:defsystem #:my-package
:serial t...
:components ((:file "package")
(:file "macros")
...
(:glob "tests/*.lisp"))
:perform (test))
This will include only the files with the .lisp extension in the tests directory in the system, rather than all files in the directory. You can use a different pattern to match the files you want to include, depending on your specific needs. Consult the asdf documentation for more information.
|
asdf: how to specify loading of entire folder instead of each separate file
|
I have a asdf system definition like this:
(asdf:defsystem #:my-package
:serial t...
:components ((:file "package")
(:file "macros")..........
(:file "tests/test-debug")
(:file "tests/test-regression") ))
Instead of specifying each file in the folder tests separately, I would like to specify all the files in the folder tests. Something like tests/* or (:directory "tests")
Can that be done?
|
[
"In the asdf source repository, see contrib/wild-modules.lisp\n",
"Yes, you can specify all the files in a directory using the :directory component type in an asdf system definition. This will include all the files in the specified directory in the system, allowing you to avoid listing each file individually.\nHere is an example of how you might modify your asdf system definition to use the :directory component type:\n(asdf:defsystem #:my-package\n :serial t...\n\n :components ((:file \"package\")\n (:file \"macros\")\n ...\n (:directory \"tests\"))\n :perform (test))\n\nIn this example, the :directory component type is used to include all the files in the tests directory in the system. This means that you no longer need to specify each file in the tests directory individually.\nKeep in mind that the :directory component type will include all files in the specified directory, including any subdirectories. If you only want to include files in the top-level directory, you can use the :glob component type instead. This allows you to specify a pattern that matches the files you want to include, rather than including all files in the directory.\nFor example, you could use the :glob component type to include only files with the .lisp extension in the tests directory, like this:\n(asdf:defsystem #:my-package\n :serial t...\n\n :components ((:file \"package\")\n (:file \"macros\")\n ...\n (:glob \"tests/*.lisp\"))\n :perform (test))\n\nThis will include only the files with the .lisp extension in the tests directory in the system, rather than all files in the directory. You can use a different pattern to match the files you want to include, depending on your specific needs. Consult the asdf documentation for more information.\n"
] |
[
2,
1
] |
[] |
[] |
[
"asdf"
] |
stackoverflow_0022885735_asdf.txt
|
Q:
Image file on root node for a virtual machine - can it be moved?
I am using proxmox and created a virtual machine yesterday. Today, I noticed that there is hardly any memory left on my root node's /dev/mapper disk, which causes the VM to stop. I found out that there is an image file (extension .qcow2) in the directory /var/lib/vz/images, which belongs to the newly created VM, which consumes quite a lot of memory.
I know that images can be used to install operating systems from and I asked myself if this image file is a necessary component for the VM to work or if the image file is only created as a kind of backup. If it is a backup file, I could save it on another disk to solve my problem.
Thanks for your help.
A:
It's your virtual machine disk, you cannot just remove it. You can create VM disk with "Thin provision" checked in Storage configuration on hypervisor, it will consume only what you use, not allocate all space at once. Use Clonezilla or dd to clone all data to new disk.
|
Image file on root node for a virtual machine - can it be moved?
|
I am using proxmox and created a virtual machine yesterday. Today, I noticed that there is hardly any memory left on my root nodes /dev/mapper disk, which causes the VM to stop. I found out that there is an image file (extension .qcow2) in the directory /var/lib/vz/images, which belongs to the newly created VM, which consumes quite a lot memory.
I know that images can be used to install operating systems from and I asked myself if this image file is a necessary component for the VM to work or if the image file is only created as a kind of backup. If it is a backup file, I could save it on another disk to solve my problem.
Thanks for your help.
|
[
"It's your virtual machine disk, you cannot just remove it. You can create VM disk with \"Thin provision\" checked in Storage configuration on hypervisor, it will consume only what you use, not allocate all space at once. Use Clonezilla or dd to clone all data to new disk.\n"
] |
[
0
] |
[] |
[] |
[
"proxmox",
"virtual_machine"
] |
stackoverflow_0074023860_proxmox_virtual_machine.txt
|
Q:
How do I get a variable with the name of the user running ansible?
I'm scripting a deployment process that takes the name of the user running the ansible script (e.g. tlau) and creates a deployment directory on the remote system based on that username and the current date/time (e.g. tlau-deploy-2014-10-15-16:52).
You would think this is available in ansible facts (e.g. LOGNAME or SUDO_USER), but those are all set to either "root" or the deployment id being used to ssh into the remote system. None of those contain the local user, the one who is currently running the ansible process.
How can I script getting the name of the user running the ansible process and use it in my playbook?
A:
If you gather_facts, which is enabled by default for playbooks, there is a built-in variable that is set called ansible_user_id that provides the user name that the tasks are being run as. You can then use this variable in other tasks or templates with {{ ansible_user_id }}. This would save you the step of running a task to register that variable.
See: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts
A:
If you mean the username on the host system, there are two options:
You can run a local action (which runs on the host machine rather than the target machine):
- name: get the username running the deploy
become: false
local_action: command whoami
register: username_on_the_host
- debug: var=username_on_the_host
In this example, the output of the whoami command is registered in a variable called "username_on_the_host", and the username will be contained in username_on_the_host.stdout.
(the debug task is not required here, it just demonstrates the content of the variable)
The second options is to use a "lookup plugin":
{{ lookup('env', 'USER') }}
Read about lookup plugins here: docs.ansible.com/ansible/playbooks_lookups.html
A:
I put something like the following in all templates:
# Placed here by {{ lookup('env','USER') }} using Ansible, {{ ansible_date_time.date }}.
When templated over it shows up as:
# Placed here by staylorx using Ansible, 2017-01-11.
If I use {{ ansible_user_id }} and I've become root then that variable indicates "root", not what I want most of the time.
A:
This seems to work for me (ansible 2.9.12):
- name: get the non root remote user
set_fact:
remote_regular_user: "{{ ansible_env.SUDO_USER or ansible_user_id }}"
You can also simply set this as a variable - e.g. in your group_vars/all.yml:
remote_regular_user: "{{ ansible_env.SUDO_USER or ansible_user_id }}"
A:
This reads the user name from the remote system, because it is not guaranteed that the user names on the local and the remote system are the same. It is possible to change the name in the SSH configuration.
- name: Run whoami without become.
command: whoami
changed_when: false
become: false
register: whoami
- name: Set a fact with the user name.
set_fact:
login_user: "{{ whoami.stdout }}"
A:
If you want to get the user who ran the template in Ansible Tower, you could use the var {{tower_user_name}} in your playbook, but it's only defined on manual executions.
tower_user_name: The user name of the Tower user that started this job. This is not available for callback or scheduled jobs.
check this docs https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html
A:
When you use the "become" option to launch Ansible or run a task, the logged in user will change to the user you are changing to (typically root). To get the name of the original user used to log in to the remote host with (ie: before escalating) you can use the ansible_user special variable. In addition, if you want to gather facts for a specific user other than the one currently running a task, you can use the user built-in module by doing something like this:
- user:
    name: "username"
  register: user_data
Now the user_data fact contains a bunch of useful information about that user, including their uid, gid, home folder, and a bunch of other stuff. See the return value for this task in the docs for details. Using this technique, you can get details about the original user Ansible was launched with by doing something like this:
- user:
    name: "{{ ansible_user }}"
  register: user_data
Conversely, if all you want is the name of the active user that is running a specific task (ie: which accounts for any user-switches that occur with the "become" operation) you can use the ansible_user_id fact instead.
|
How do I get a variable with the name of the user running ansible?
|
I'm scripting a deployment process that takes the name of the user running the ansible script (e.g. tlau) and creates a deployment directory on the remote system based on that username and the current date/time (e.g. tlau-deploy-2014-10-15-16:52).
You would think this is available in ansible facts (e.g. LOGNAME or SUDO_USER), but those are all set to either "root" or the deployment id being used to ssh into the remote system. None of those contain the local user, the one who is currently running the ansible process.
How can I script getting the name of the user running the ansible process and use it in my playbook?
|
[
"If you gather_facts, which is enabled by default for playbooks, there is a built-in variable that is set called ansible_user_id that provides the user name that the tasks are being run as. You can then use this variable in other tasks or templates with {{ ansible_user_id }}. This would save you the step of running a task to register that variable.\nSee: https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variables-discovered-from-systems-facts\n",
"If you mean the username on the host system, there are two options:\nYou can run a local action (which runs on the host machine rather than the target machine):\n- name: get the username running the deploy\n become: false\n local_action: command whoami\n register: username_on_the_host\n\n- debug: var=username_on_the_host\n\nIn this example, the output of the whoami command is registered in a variable called \"username_on_the_host\", and the username will be contained in username_on_the_host.stdout.\n(the debug task is not required here, it just demonstrates the content of the variable)\n\nThe second options is to use a \"lookup plugin\":\n{{ lookup('env', 'USER') }}\n\nRead about lookup plugins here: docs.ansible.com/ansible/playbooks_lookups.html\n",
"I put something like the following in all templates:\n# Placed here by {{ lookup('env','USER') }} using Ansible, {{ ansible_date_time.date }}.\n\nWhen templated over it shows up as:\n# Placed here by staylorx using Ansible, 2017-01-11.\n\nIf I use {{ ansible_user_id }} and I've become root then that variable indicates \"root\", not what I want most of the time.\n",
"This seems to work for me (ansible 2.9.12):\n- name: get the non root remote user\n set_fact:\n remote_regular_user: \"{{ ansible_env.SUDO_USER or ansible_user_id }}\"\n\nYou can also simply set this as a variable - e.g. in your group_vars/all.yml:\nremote_regular_user: \"{{ ansible_env.SUDO_USER or ansible_user_id }}\"\n\n",
"This reads the user name from the remote system, because it is not guaranteed, that the user names on the local and the remote system are the same. It is possible to change the name in the SSH configuration.\n- name: Run whoami without become.\n command: whoami\n changed_when: false\n become: false\n register: whoami\n\n- name: Set a fact with the user name.\n set_fact:\n login_user: \"{{ whoami.stdout }}\"\n\n",
"if you want to get the user who run the template in ansible tower you could use this var {{tower_user_name}} in your playbook but itΒ΄s only defined on manually executions\ntower_user_name :The user name of the Tower user that started this job. This is not available for callback or scheduled jobs.\ncheck this docs https://docs.ansible.com/ansible-tower/latest/html/userguide/job_templates.html\n",
"When you use the \"become\" option to launch Ansible or run a task, the logged in user will change to the user you are changing to (typically root). To get the name of the original user used to log in to the remote host with (ie: before escalating) you can use the ansible_user special variable. In addition, if you want to gather facts for a specific user other than the one currently running a task, you can use the user built-in module by doing something like this:\n- user\n name: \"username\"\n register: user_data\n\nNow the user_data fact contains a bunch of useful information about that user, including their uid, gid, home folder, and a bunch of other stuff. See the return value for this task in the docs for details. Using this technique, you can get details about the original user Ansible was launched with by doing something like this:\n- user\n name: \"{{ ansible_user }}\"\n register: user_data\n\nConversely, if all you want is the name of the active user that is running a specific task (ie: which accounts for any user-switches that occur with the \"become\" operation) you can use the ansible_user_id fact instead.\n"
] |
[
140,
103,
54,
6,
4,
0,
0
] |
[] |
[] |
[
"ansible",
"variables"
] |
stackoverflow_0026394096_ansible_variables.txt
|
Q:
How to copy asset files before deploying cloud functions to Firebase
I have a firebase project where I write typescript functions to be deployed into Google Node Cloud functions.
When I run the firebase deploy --only functions command, it transpiles my code into javascript and put the js output into a folder called lib next to src folder where my typescript functions are.
However, some of my functions need access to local files such as .ttf files or some other file types. Those don't get copied over to the lib folder and therefore, I get errors in runtime Error: ENOENT: no such file or directory, open path/to/file
Question 1 : How do I get the deploy command to copy assets files to the output folder ?
Question 2 : Given that all my functions live in separate files, in separate folders, and so do my assets, how should I reference my assets so that they can be found ? Should I give the path to the asset file relative to the lib folder or relative to where the function lives ?
EDIT -1
See the project structure here :
the code that need the font lives in the some-function.ts file. And it uses the pdfmake library that needs fonts to work.
Here is how I do it in the some-function.ts file :
const fonts: TFontDictionary = {
Poppins: {
normal: './fonts/Poppins/Poppins-Regular.ttf',
bold: './fonts/Poppins/Poppins-Bold.ttf',
italics: './fonts/Poppins/Poppins-Medium.ttf',
bolditalics: './fonts/Poppins/Poppins-Thin.ttf',
}
};
const pdfmake = require('pdfmake');
const printer = new pdfmake(fonts);
So how do I reference such fonts given that they are located in the fonts folder. or event if I put them in a separate folder at the root or src ?
A:
After some digging I came to a solution that I would like to share here.
1- First thing, as suggested by @Dharmaraj, I added a script that removes the lib folder before building the project and copies my asset files before deploying.
So in the package.json under functions folder, I updated the build command as follow
"remove-lib":"rm -rf lib",
"copy-assets":"cp -rf src/path/to/your-folder/fonts lib/path/to/your-folder",
"do-build":"tsc",
"build": "npm run remove-lib && npm run do-build && npm run copy-assets",
Actually, adding the remove-lib command solved another issue I had with deploying functions. I think this should be the default behaviour because when you rename function files, old ones stick in the lib folder causing all sorts of issues.
Anyway, now in order to correctly reference your asset, just construct the path as ${__dirname}/fonts/some-font.ttf provided that the fonts folder lives in the same folder as the file you are editing.
|
How to copy asset files before deploying cloud functions to Firebase
|
I have a firebase project where I write typescript functions to be deployed into Google Node Cloud functions.
When I run the firebase deploy --only functions command, it transpiles my code into javascript and put the js output into a folder called lib next to src folder where my typescript functions are.
However, some of my functions need access to local files such as .ttf files or some other file types. Those don't get copied over to the lib folder and therefore, I get errors in runtime Error: ENOENT: no such file or directory, open path/to/file
Question 1 : How do I get the deploy command to copy assets files to the output folder ?
Question 2 : Given that all my functions live in separate files, in separate folders, and so do my assets, how should I reference my assets so that they can be found ? Should I give the path to the asset file relative to the lib folder or relative to where the function lives ?
EDIT -1
See the project structure here :
the code that need the font lives in the some-function.ts file. And it uses the pdfmake library that needs fonts to work.
Here is how I do it in the some-function.ts file :
const fonts: TFontDictionary = {
Poppins: {
normal: './fonts/Poppins/Poppins-Regular.ttf',
bold: './fonts/Poppins/Poppins-Bold.ttf',
italics: './fonts/Poppins/Poppins-Medium.ttf',
bolditalics: './fonts/Poppins/Poppins-Thin.ttf',
}
};
const pdfmake = require('pdfmake');
const printer = new pdfmake(fonts);
So how do I reference such fonts given that they are located in the fonts folder. or event if I put them in a separate folder at the root or src ?
|
[
"After some digging I came to a solution that I would like to share here.\n1- First thing, as suggested by @Dharmaraj, I added a script that removes the lib folder, before building the project, and copy my assets files before deploying.\nSo in the package.json under functions folder, I updated the build command as follow\n\"remove-lib\":\"rm -rf lib\",\n\"copy-assets\":\"cp -rf src/path/to/your-folder/fonts lib/path/to/your-folder\",\n\"do-build\":\"tsc\",\n\"build\": \"npm run remove-lib && npm run do-build && npm run copy-assets\",\n\nActually, adding the remove lib command solved another issue I had with deploying functions. I think this should be the default behaviour because when you rename function files, old ones stick in lib folder causing all sort of issues.\nAnyway, now in order to correctly reference your asset, just construct the path as ${__dirname}/fonts/some-font.ttf provided that the fonts folder lives in the same folder as the file your are editing.\n"
] |
[
0
] |
[] |
[] |
[
"firebase",
"google_cloud_functions",
"node.js"
] |
stackoverflow_0074666615_firebase_google_cloud_functions_node.js.txt
|
Q:
python sql table with paramter to json
Good Day!
I am trying to conver sql query into json with python, but getting an error when try to use sql query with a paramater:
sql syntax error: incorrect syntax near "%"
it works ok without setting paramater
My db is hana and module is hdbcli
my code
def db(db_name="xxx"):
return dbapi.connect(address=db_name, port="xx", user="xx", password="123")
def query_db(query, args=(), one=False):
cur = db().cursor()
cur.execute(query, args)
r = [dict((cur.description[i][0], value) for i, value in enumerate(row)) for row in cur.fetchall()]
cur.connection.close()
return (r[0] if r else None) if one else r
def test(request):
my_query = query_db("select bname, name_text from addrs where num=%s", (100,))
return JsonResponse(my_query, safe=False)
urlpatterns = [
path('s4d/', test),
]
thanks
A:
hana with hdbcli uses :placeholder syntax for prepared statements;
more information can be found in the hdbcli documentation.
my_query = query_db("select bname, name_text from addrs where num=:num", {"num": 100})
For two parameters you would use
where id=:id and c2= :c2
{"id": id, "c2": c2}
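Putting that together with the question's query_db helper, a minimal sketch might look like this (the id and c2 column names are only illustrative, not real columns of addrs):
my_query = query_db(
    "select bname, name_text from addrs where id = :id and c2 = :c2",
    {"id": 1, "c2": "some value"},
)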
|
python sql table with paramter to json
|
Good Day!
I am trying to conver sql query into json with python, but getting an error when try to use sql query with a paramater:
sql syntax error: incorrect syntax near "%"
it works ok without setting paramater
My db is hana and module is hdbcli
my code
def db(db_name="xxx"):
return dbapi.connect(address=db_name, port="xx", user="xx", password="123")
def query_db(query, args=(), one=False):
cur = db().cursor()
cur.execute(query, args)
r = [dict((cur.description[i][0], value) for i, value in enumerate(row)) for row in cur.fetchall()]
cur.connection.close()
return (r[0] if r else None) if one else r
def test(request):
my_query = query_db("select bname, name_text from addrs where num=%s", (100,))
return JsonResponse(my_query, safe=False)
urlpatterns = [
path('s4d/', test),
]
thanks
|
[
"hana with hdbdcli uses :placeholder for prepared statements\nsome mpre infrmation can be found\nmy_query = query_db(\"select bname, name_text from addrs where num=:num\", {\"num\": 100})\n\nyou use for two parameter\nwhere id=:id and c2= :c2\n{\"id\": id, \"c2\": c2}\n\n"
] |
[
0
] |
[] |
[] |
[
"django",
"hana",
"json",
"python",
"sql"
] |
stackoverflow_0074669302_django_hana_json_python_sql.txt
|
Q:
Discord py 2.0 interaction option
Discord 2.0 Py\
@bot.tree.command()
@app_commands.describe(amount="Please give amount")
async def clear(interaction: discord.Interaction, amount: int):
await interaction.response.send_message(f"You clean {amount} message", ephemeral=True)
await interaction.channel.purge(limit=amount)
Hello, this is my code. All good, but I want to make this command's argument an option. I mean, can the argument be non-required? Can the user give an empty argument?
A:
You can make the option not required by setting a default value for it, and the library will make it optional for you.
# set the default value for the "amount" argument to 1; if the user doesn't input the option, the argument will be 1.
async def clear(interaction: discord.Interaction, amount: int = 1):
You can look at the official example here.
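Applied to the command from the question, a minimal sketch could look like this (the default of 1 is an arbitrary choice; pick whatever fallback makes sense for you):
@bot.tree.command()
@app_commands.describe(amount="Please give amount")
async def clear(interaction: discord.Interaction, amount: int = 1):
    # "amount" is now optional; it falls back to 1 when the user omits it
    await interaction.response.send_message(f"You clean {amount} message", ephemeral=True)
    await interaction.channel.purge(limit=amount)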
|
Discord py 2.0 interaction option
|
Discord 2.0 Py\
@bot.tree.command()
@app_commands.describe(amount="Please give amount")
async def clear(interaction: discord.Interaction, amount: int):
await interaction.response.send_message(f"You clean {amount} message", ephemeral=True)
await interaction.channel.purge(limit=amount)
Hello this is my code. All good, but i want do this command an option. So i mean command can non required ? Can user give a empty argument ?
|
[
"You can make the option not required by setting a default value for it, and the library will make it optional for you.\n# set the default value for the \"amount\" argument to 1; if the user doesn't input the option, the argument will be 1.\nasync def clear(interaction: discord.Interaction, amount: int = 1):\n\nYou can look at the official example here.\n"
] |
[
0
] |
[] |
[] |
[
"discord",
"discord.py",
"python"
] |
stackoverflow_0074669206_discord_discord.py_python.txt
|
Q:
Visual Studio Code Jupyter not recognising conda kernel
I created a new conda environment named 'ct' and installed Python 3.10.6, Jupyter Lab, matplotlib and numpy. Also the ipykernel is installed.
VS Code lets me select Python 3.10.6 from 'ct' as interpreter without issues.
VS Code select interpreter
But I cannot choose 'ct' as kernel as VS Code only suggests the 'base' kernel from conda. 'base' does not have the desired packages installed which leads to the following error when running this code:
import matplotlib as mat
print(mat.__version__)
error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Untitled-1.ipynb Cell 1 in <cell line: 1>()
----> 1 import matplotlib as mat
2 print(mat.__version__)
ModuleNotFoundError: No module named 'matplotlib'
This is actually totally fine but I don't get why the 'ct' kernel is not showing up in the list when trying to change the kernel.
Cannot choose kernel
Also when running jupyter lab in browser from 'ct' environment everything is working as should.
When listing all installed packages in 'ct' in the VS Code terminal all packages show up.
Restarting VS Code and trying with other new conda environments does not help the issue.
Did I somehow miss something?
A:
What finally worked out for me was closing VS Code entirely, recreating the environment and creating a new blank notebook in VS Code. Now the kernel shows up and is surprisingly available for all new and old notebooks.
I also found this option in the Jupyter settings in VS Code: https://i.stack.imgur.com/rcJU6.png
I haven't tried it yet, but it might be helpful to someone experiencing similar issues.
Also Zac's solution above might be super helpful. Thank you for sharing!
A:
Switching to the "pre-release" version of the Jupyter extension immediately solved this problem for me.
A:
I solved this issue by opening the directory in VS Code, instead of only the .ipynb file.
A:
Obviously, this is not a universal problem.
You can read the docs and recreate the conda environment.
This may also be related to the fact that your conda environment is not activated. Use the command conda activate ct to activate it.
A:
Try this conda install -n meta_ai ipykernel --update-deps --force-reinstall
Somehow it solved my problems.
If it still can't solve your problem, try also opening the directory in VS Code, instead of only the .ipynb file.
A:
Try this conda install -n meta_ai ipykernel --update-deps --force-reinstall Somehow it solved my problems. If it still can't solve your problem, try also opening the directory in VS Code, instead of only the .ipynb file.
|
Visual Studio Code Jupyter not recognising conda kernel
|
I created a new conda environment named 'ct' and installed Python 3.10.6, Jupyter Lab, matplotlib and numpy. Also the ipykernel is installed.
VS Code lets me select Python 3.10.6 from 'ct' as interpreter without issues.
VS Code select interpreter
But I cannot choose 'ct' as kernel as VS Code only suggests the 'base' kernel from conda. 'base' does not have the desired packages installed which leads to the following error when running this code:
import matplotlib as mat
print(mat.__version__)
error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Untitled-1.ipynb Cell 1 in <cell line: 1>()
----> 1 import matplotlib as mat
2 print(mat.__version__)
ModuleNotFoundError: No module named 'matplotlib'
This is actually totally fine but I don't get why the 'ct' kernel is not showing up in the list when trying to change the kernel.
Cannot choose kernel
Also when running jupyter lab in browser from 'ct' environment everything is working as should.
When listing all installed packages in 'ct' in the VS Code terminal all packages show up.
Restarting VS Code and trying with other new conda environments does not help the issue.
Did I somehow miss something?
|
[
"What finally worked out for me was closing VS Code entirely, recreating the environment and creating a new blank notebook in VS Code. Now the kernel shows up and is surprisingly available for all new and old notebooks.\nI also found this option in the Jupyter settings in VS Code: https://i.stack.imgur.com/rcJU6.png\nI haven't tried it yet, but it might be helpful to someone experiencing similar issues.\nAlso Zac's solution above might be super helpful. Thank you for sharing!\n",
"Switching to the \"pre-release\" version of the Jupyter extension immediately solved this problem for me.\n",
"I solved this issue by opening the directory in VS Code, instead of only the .ipynb file.\n",
"\nObviously, this is not a universal problem.\nYou can read the docs and recreate the conda environment.\nThis may also be related to the fact that your conda environment is not activated. Use the command conda activate ct to activate it.\n",
"Try this conda install -n meta_ai ipykernel --update-deps --force-reinstall\nSomehow it solved my problems.\nIf it still can't solve your problem, try also opening the directory in VS Code, instead of only the .ipynb file.\n",
"Try this conda install -n meta_ai ipykernel --update-deps --force-reinstall Somehow it solved my problems. If it still can't solve your problem, try also opening the directory in VS Code, instead of only the .ipynb file.\n"
] |
[
2,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"conda",
"jupyter",
"kernel",
"python",
"visual_studio_code"
] |
stackoverflow_0074028297_conda_jupyter_kernel_python_visual_studio_code.txt
|
Q:
How to check for palindrome excluding the non-alphanumeric characters?
Here's the code that I attempted
public String isPalindrome(String s) {
String trimmed = s.replaceAll("[^A-Za-z0-9]", "");
String reversed = "";
int len = trimmed.length();
for (int i = len - 1; i >= 0; i--) {
char[] allChars = trimmed.toCharArray();
reversed += allChars[i];
}
if (trimmed.equalsIgnoreCase(reversed)) {
return "true";
} else {
return "false";
}
}
Sample Input 1
A man, a plan, a canal: Panama
Sample Output 1
true
Explanation 1
The given string is a palindrome when considering only alphanumeric characters.
Sample Input 2
race a car
Sample Output 2
false
Explanation 2
The given string is not a palindrome when considering alphanumeric characters.
A:
You can return boolean instead of String:
public static boolean isPalindrome(String s) {
String trimmed = s.replaceAll("[^A-Za-z0-9]", "").toLowerCase();
int from = 0, to = trimmed.length() - 1;
while (from < to) {
if (trimmed.charAt(from) != trimmed.charAt(to)) {
return false;
}
from++;
to--;
}
return true;
}
A:
You can use StringBuilder to reverse a String:
public static void main(String[] args) {
String input = "a#b!b^a";
String clean = input.replaceAll("[^A-Za-z0-9]", "");
String reverse = new StringBuilder(clean).reverse().toString();
boolean isPalindrome = reverse.equals(clean);
System.out.println(isPalindrome);
}
A:
Your variable len comes from the length of the String s. But you use the value on the array coming from trimmed.
So if you want to remove the IndexOutOfBoundsException you should change your len declaration to:
int len = trimmed.length();
A:
You can do it like this in linear time, since the loops are driven by the presence of non-alphabetic/digit characters. Also, no trimming or reversal of the string is required.
String[] test = {"A man, a plan, a canal: Panama",
"race a car","foobar", "ABC2CEc2cba"};
for (String s : test) {
System.out.printf("%5b -> %s%n", isPalindrome(s), s);
}
prints
true -> A man, a plan, a canal: Panama
false -> race a car
false -> foobar
true -> ABC2CEc2cba
The outer while loop drives the entire process until the indices cross or are equal. The inner loops simply skip over non-alphabetic/digit characters.
public static boolean isPalindrome(String s) {
int k = s.length() - 1;
int i = 0;
char c1 = '#';
char c2 = '#';
while (i <= k) {
while (!Character.isLetterOrDigit(c1 = s.charAt(i++)));
while (!Character.isLetterOrDigit(c2 = s.charAt(k--)));
if (Character.toLowerCase(c1) != Character.toLowerCase(c2)) {
return false;
}
}
return true;
}
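Not part of the original answer: the same two-pointer idea, sketched in Python purely for illustration, with explicit bounds checks in the inner loops so an input that contains no letters or digits at all cannot step past the ends of the string.
def is_palindrome(s: str) -> bool:
    i, k = 0, len(s) - 1
    while i < k:
        # skip non-alphanumeric characters from both ends, staying in bounds
        while i < k and not s[i].isalnum():
            i += 1
        while i < k and not s[k].isalnum():
            k -= 1
        if s[i].lower() != s[k].lower():
            return False
        i += 1
        k -= 1
    return True

print(is_palindrome("A man, a plan, a canal: Panama"))  # True
print(is_palindrome("race a car"))                      # False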
|
How to check for palindrome excluding the non-alphanumeric characters?
|
Here's the code that I attempted
public String isPalindrome(String s) {
String trimmed = s.replaceAll("[^A-Za-z0-9]", "");
String reversed = "";
int len = trimmed.length();
for (int i = len - 1; i >= 0; i--) {
char[] allChars = trimmed.toCharArray();
reversed += allChars[i];
}
if (trimmed.equalsIgnoreCase(reversed)) {
return "true";
} else {
return "false";
}
}
Sample Input 1
A man, a plan, a canal: Panama
Sample Output 1
true
Explanation 1
The given string is a palindrome when considering only alphanumeric characters.
Sample Input 2
race a car
Sample Output 2
false
Explanation 2
The given string is not a palindrome when considering alphanumeric characters.
|
[
"You can return boolean instead of String:\npublic static boolean isPalindrome(String s) {\n String trimmed = s.replaceAll(\"[^A-Za-z0-9]\", \"\").toLowerCase();\n\n int from = 0, to = trimmed.length() - 1;\n while (from < to) {\n if (trimmed.charAt(from) != trimmed.charAt(to)) {\n return false;\n }\n from++;\n to--;\n }\n return true;\n}\n\n",
"You can use StringBuilder to reverse a String:\npublic static void main(String[] args) {\n String input = \"a#b!b^a\";\n String clean = input.replaceAll(\"[^A-Za-z0-9]\", \"\");\n String reverse = new StringBuilder(clean).reverse().toString();\n boolean isPalindrome = reverse.equals(clean);\n System.out.println(isPalindrome);\n}\n\n",
"Your variable len comes from the length of the String s. But you use the value on the array coming from trimmed.\nSo if you want to remove the IndexOutOfBoundsException you should change your len declaration to:\nint len = trimmed.length();\n",
"You can do like this in linear time as the loops are driven by the presence of non-alphabetic/digit characters. Also, no trimming or reversal of the string is required.\nString[] test = {\"A man, a plan, a canal: Panama\",\n \"race a car\",\"foobar\", \"ABC2CEc2cba\"};\n\nfor (String s : test) {\n System.out.printf(\"%5b -> %s%n\", isPalindrome(s), s);\n}\n\nprints\n true -> A man, a plan, a canal: Panama \nfalse -> race a car\nfalse -> foobar\n true -> ABC2CEc2cba \n\nThe outer while loop drives then entire process until the indices cross or are equal. The inner loops simply skip over non-alphabetic/digit characters.\npublic static boolean isPalindrome(String s) {\n int k = s.length() - 1;\n int i = 0;\n char c1 = '#';\n char c2 = '#';\n while (i <= k) {\n while (!Character.isLetterOrDigit(c1 = s.charAt(i++)));\n while (!Character.isLetterOrDigit(c2 = s.charAt(k--)));\n if (Character.toLowerCase(c1) != Character.toLowerCase(c2)) {\n return false;\n }\n }\n return true;\n}\n\n"
] |
[
0,
0,
0,
0
] |
[] |
[] |
[
"java"
] |
stackoverflow_0074668064_java.txt
|
Q:
Groovy script doesn't find file in resources
I have a small Gradle project I just started that has a single Groovy script in src/main/groovy and a text file in src/main/resources/input/myInput.txt. My script just has this content currently:
def food = [:]
currentFood = 0
currentElf = 0
new File('src/main/resources/input/myInput.txt').eachLine { line ->
if (line.isBlank()) {
food[currentElf++] = currentFood
currentFood = 0
} else {
currentFood += line.toInteger()
}
}
However, when I run it, I get java.io.FileNotFoundException: src/main/resources/input/myInput.txt. This is pretty much straight from this Baeldung article, which are usually pretty reliable. What is going wrong here?
A:
Instead of loading it via File, load it via the class and getResourceAsStream like so:
this.getClass().getResourceAsStream("/input/myInput.txt").eachLine { line ->
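For intuition (this is not from the original answer): the relative File path fails because it is resolved against the process working directory, while a resource lookup is resolved against what is packaged on the classpath. The analogous pattern in Python, shown only as an illustration, reads packaged data instead of a CWD-relative file; the package name mypkg and its data layout here are hypothetical.
from importlib import resources

# assumes a package 'mypkg' that ships input/myInput.txt as package data
text = resources.files("mypkg").joinpath("input/myInput.txt").read_text()
for line in text.splitlines():
    print(line)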
|
Groovy script doesn't find file in resources
|
I have a small Gradle project I just started that has a single Groovy script in src/main/groovy and a text file in src/main/resources/input/myInput.txt. My script just has this content currently:
def food = [:]
currentFood = 0
currentElf = 0
new File('src/main/resources/input/myInput.txt').eachLine { line ->
if (line.isBlank()) {
food[currentElf++] = currentFood
currentFood = 0
} else {
currentFood += line.toInteger()
}
}
However, when I run it, I get java.io.FileNotFoundException: src/main/resources/input/myInput.txt. This is pretty much straight from this Baeldung article, which are usually pretty reliable. What is going wrong here?
|
[
"Instead of loading it via File, load it via the class and getResourcesAsStream like so:\nthis.getClass().getResourceAsStream(\"/input/day-1-small.txt\").eachLine { line ->\n\n"
] |
[
1
] |
[] |
[] |
[
"file",
"gradle",
"groovy"
] |
stackoverflow_0074657497_file_gradle_groovy.txt
|
Q:
RangeError (index): Invalid value: Valid value range is empty: 0
I am trying to fetch a list from an API using two methods, fetchImages and fetchCategories. The first time, it shows a red screen error and then, after 2 seconds, it automatically loads the list. Can you please tell me what's the issue with my code and how to avoid showing that red screen error in my app?
Widget build(context) {
try{
if (isFirst == true) {
fetchImage();
fetchCategories(context);
isFirst = false;
}
}catch(Exception){
}
return MaterialApp(
home: Scaffold(
backgroundColor: Colors.black,
appBar: AppBar(
title: Text('Lets see images!'),
),
body: new Column(
children: <Widget>[
new Row(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages[0],
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText[0],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on tv clikced");
widget.fetchApI.fetchSubCategories(context, 6);
}),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages[1],
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText[1],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on moview clicked");
widget. fetchApI.fetchSubCategories(context, 7);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages[2],
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText[2],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on news clicked");
widget.fetchApI.fetchSubCategories(context, 10);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(catimages[3],
width: 60.0, height: 60.0),
),
new Text(
categoriesText[3],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint('on shows clicked');
widget.fetchApI.fetchSubCategories(context, 8);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset('assets/live_icon.png',
width: 60.0, height: 60.0),
),
new Text(
'Live',
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint('on live clicked');
},
),
],
),
ImageList(images,widget.fetchApI),
],
),
),
);
}
A:
Make sure to specify the length of the list of data. For example, if you're using ListView.builder, give a proper value to the itemCount attribute.
ListView.builder(
itemCount: snapshot.data.length,
itemBuilder: (ctx, index) {
return WidgetItem();
});
A:
The problem can be that you are trying to access a variable/array that is not ready yet (maybe because the future/api call is not finished)
A quick workaround could be to check the length of the array or check for null, example:
Text( (myArray?.length > 0 ? myArray[0] : '') );
A:
There is a quick-and-dirty answer, and a proper answer
Quick-and-dirty
Use list?.elementAt(<index>) ?? "" for safe access to an element of a list
Widget build(context) {
try{
if (isFirst == true) {
fetchImage();
fetchCategories(context);
isFirst = false;
}
}catch(Exception){
}
return MaterialApp(
home: Scaffold(
backgroundColor: Colors.black,
appBar: AppBar(
title: Text('Lets see images!'),
),
body: new Column(
children: <Widget>[
new Row(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages?.elementAt(0) ?? "",
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText?.elementAt(0) ?? "",
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on tv clikced");
widget.fetchApI.fetchSubCategories(context, 6);
}),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages?.elementAt(1) ?? "",
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText?.elementAt(1) ?? "",
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on moview clicked");
widget. fetchApI.fetchSubCategories(context, 7);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages?.elementAt(2) ?? "",
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText?.elementAt(2) ?? "",
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on news clicked");
widget.fetchApI.fetchSubCategories(context, 10);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(catimages?.elementAt(3) ?? "",
width: 60.0, height: 60.0),
),
new Text(
categoriesText?.elementAt(3) ?? "",
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint('on shows clicked');
widget.fetchApI.fetchSubCategories(context, 8);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset('assets/live_icon.png',
width: 60.0, height: 60.0),
),
new Text(
'Live',
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint('on live clicked');
},
),
],
),
ImageList(images,widget.fetchApI),
],
),
),
);
}
}
Proper answer
Frankly, if I were to review this code, even if it works seamlessly, I would reject this change, because the structure/pattern this code uses is quite bad.
Please use FutureBuilder, StreamBuilder or ValueListenableBuilder instead, but you need to provide more code (especially fetchImage and fetchCategories) for us to help.
A:
Null safe
Reason for error:
This error occurs on retrieving the value for an index that doesn't exist in the List. For example:
List<int> list = [];
list[0]; // <-- Error since there's no element at index 0 in the list.
Solution:
Check if the the List is not null and has the element at index:
var myList = nullableList;
var index = 0;
if (myList != null && myList.length > index) {
myList[index]; // You can safely access the element here.
}
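The same guard, sketched in Python just to show the language-agnostic pattern (check the length before indexing, or fall back to a default); nothing here is Flutter-specific.
def element_at(items, index, default=None):
    # return items[index] if that index exists, otherwise a default value
    return items[index] if 0 <= index < len(items) else default

print(element_at([], 0))        # None, instead of an index error
print(element_at([10, 20], 1))  # 20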
A:
You are not getting the data: the data folder or data source is missing. The same happened to me. Later, I created the JSON file for the data, pointed to that location, and it was fixed.
A:
I got the same issue when I tried to access an array which was empty, as part of null safety.
My earlier code was
TextBox(response.customerDetails!.address![0].city),
which caused me error so I changed the code to
Text(
(response.cutomerDetails.address.isNotEmpty)
? response.customerDetails!.address![0].city
: "N/A",
),
add a check when accessing arrays. This helped me remove the error.
A:
It happens when you try to fetch some data that is not available at that index/position.
So, you have to check whether the value at that index/position exists or is null.
In my case the ListView itemCount was correct but it still showed this error, and I solved it with the following check:
Text("${(widget.topSellItem.subjects.isEmpty) ? "" : widget.topSellItem!.subjects[0].subject.name}"),
A:
I have solved this issue in the Flutter null-safety version in the following way.
Reason: it happens when no value is available for that index.
You can check in the builder whether the item value is available.
A solution with null safety would look like:
ListView.builder(
itemCount: snapshot.data!.items.length, //OR snapshot.data!.length
itemBuilder: (context, index) {
return (index > 0) ? YourWidget() : Container();
});
A:
In case the other methods don't work, check if your database contains any conflicting data entries. If so, fix them.
A:
First, declare the array of objects.
late Map<String, dynamic> product={};
the HTTP response is:
{
"id": "1",
"codigo": "mw9wcsABvk",
"nombre": "Router TPLink Gaming 5G",
"portada": [
{
"url": "/php/assets/producto/mw9wcsABvk/2729233.png",
"name": "2729233.png"
}
]
}
In Widget build
body: Center(
child: Column(
children: [
if(producto.isNotEmpty)
Expanded(
child: Column(
children: [
ConstrainedBox(
constraints: BoxConstraints.tight(Size(double.infinity, 256)),
child: Stack(
alignment: AlignmentDirectional.center,
children: [
Positioned(
child: Image.network("${host}${producto["portada"][0]["url"]}"),
),
],
),
),
],
),
),
],
),
),
A:
I had the same problem when accessing empty arrays, and fixed it this way: data.allData[index].reviews!.isEmpty ? 0 : data.allData[index].reviews![0].rating
When there's data in it, it will access the first index.
A:
You must specify the length of the list of data. For example, if you're using ListView along with the builder function, then you must provide its item count as itemCount.
ListView.builder(
shrinkWrap: true,
itemCount: snapshot.data.length,
itemBuilder: (context, index) {
return //your widget
});
A:
This error comes up for these reasons.
Not using a builder in a screen.
While using a builder, we have to check whether the list is empty or not: if the list is empty, show a circular progress indicator; if it is not empty, show the list.
|
RangeError (index): Invalid value: Valid value range is empty: 0
|
I am trying to fetch a list from an API using two methods, fetchImages and fetchCategories. The first time, it shows a red screen error and then, after 2 seconds, it automatically loads the list. Can you please tell me what's the issue with my code and how to avoid showing that red screen error in my app?
Widget build(context) {
try{
if (isFirst == true) {
fetchImage();
fetchCategories(context);
isFirst = false;
}
}catch(Exception){
}
return MaterialApp(
home: Scaffold(
backgroundColor: Colors.black,
appBar: AppBar(
title: Text('Lets see images!'),
),
body: new Column(
children: <Widget>[
new Row(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages[0],
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText[0],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on tv clikced");
widget.fetchApI.fetchSubCategories(context, 6);
}),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages[1],
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText[1],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on moview clicked");
widget. fetchApI.fetchSubCategories(context, 7);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(
catimages[2],
width: 60.0,
height: 60.0,
),
),
new Text(
categoriesText[2],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint("on news clicked");
widget.fetchApI.fetchSubCategories(context, 10);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset(catimages[3],
width: 60.0, height: 60.0),
),
new Text(
categoriesText[3],
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint('on shows clicked');
widget.fetchApI.fetchSubCategories(context, 8);
},
),
new InkResponse(
child: new Column(
children: <Widget>[
Padding(
padding: EdgeInsets.all(10.0),
child: new Image.asset('assets/live_icon.png',
width: 60.0, height: 60.0),
),
new Text(
'Live',
style: TextStyle(color: Colors.white),
),
],
),
onTap: () {
debugPrint('on live clicked');
},
),
],
),
ImageList(images,widget.fetchApI),
],
),
),
);
}
|
[
"Make sure specifying the length of the list of data. For example, if you're using ListView.builder give proper value to the attribute itemCount. \nListView.builder(\n itemCount: snapshot.data.length,\n itemBuilder: (ctx, index) {\n return WidgetItem();\n });\n\n",
"The problem can be that you are trying to access a variable/array that is not ready yet (maybe because the future/api call is not finished)\nA quick workaround could be to check the length of the array or check for null, example:\nText( (myArray?.length > 0 ? myArray[0] : '') );\n\n",
"There are quick-and-dirty answer, and proper answer\nQuick-and-dirty\nUse list?.elementAt(<index>) ?? \"\" for safe access to element of a list\nWidget build(context) {\n try{\n if (isFirst == true) {\n fetchImage();\n fetchCategories(context);\n isFirst = false;\n }\n }catch(Exception){\n\n }\n\n return MaterialApp(\n home: Scaffold(\n backgroundColor: Colors.black,\n appBar: AppBar(\n title: Text('Lets see images!'),\n ),\n body: new Column(\n children: <Widget>[\n new Row(\n mainAxisAlignment: MainAxisAlignment.center,\n children: <Widget>[\n new InkResponse(\n child: new Column(\n children: <Widget>[\n Padding(\n padding: EdgeInsets.all(10.0),\n child: new Image.asset(\n catimages?.elementAt(0) ?? \"\",\n width: 60.0,\n height: 60.0,\n ),\n ),\n new Text(\n categoriesText?.elementAt(0) ?? \"\",\n style: TextStyle(color: Colors.white),\n ),\n ],\n ),\n onTap: () {\n debugPrint(\"on tv clikced\");\n widget.fetchApI.fetchSubCategories(context, 6);\n }),\n new InkResponse(\n child: new Column(\n children: <Widget>[\n Padding(\n padding: EdgeInsets.all(10.0),\n child: new Image.asset(\n catimages?.elementAt(1) ?? \"\",\n width: 60.0,\n height: 60.0,\n ),\n ),\n new Text(\n categoriesText?.elementAt(1) ?? \"\",\n style: TextStyle(color: Colors.white),\n ),\n ],\n ),\n onTap: () {\n debugPrint(\"on moview clicked\");\n widget. fetchApI.fetchSubCategories(context, 7);\n },\n ),\n new InkResponse(\n child: new Column(\n children: <Widget>[\n Padding(\n padding: EdgeInsets.all(10.0),\n child: new Image.asset(\n catimages?.elementAt(2) ?? \"\",\n width: 60.0,\n height: 60.0,\n ),\n ),\n new Text(\n categoriesText?.elementAt(2) ?? \"\",\n style: TextStyle(color: Colors.white),\n ),\n ],\n ),\n onTap: () {\n debugPrint(\"on news clicked\");\n widget.fetchApI.fetchSubCategories(context, 10);\n },\n ),\n new InkResponse(\n child: new Column(\n children: <Widget>[\n Padding(\n padding: EdgeInsets.all(10.0),\n child: new Image.asset(catimages?.elementAt(3) ?? \"\",\n width: 60.0, height: 60.0),\n ),\n new Text(\n categoriesText?.elementAt(3) ?? \"\",\n style: TextStyle(color: Colors.white),\n ),\n ],\n ),\n onTap: () {\n debugPrint('on shows clicked');\n widget.fetchApI.fetchSubCategories(context, 8);\n },\n ),\n new InkResponse(\n child: new Column(\n children: <Widget>[\n Padding(\n padding: EdgeInsets.all(10.0),\n child: new Image.asset('assets/live_icon.png',\n width: 60.0, height: 60.0),\n ),\n new Text(\n 'Live',\n style: TextStyle(color: Colors.white),\n ),\n ],\n ),\n onTap: () {\n debugPrint('on live clicked');\n },\n ),\n ],\n ),\n ImageList(images,widget.fetchApI),\n ],\n ),\n ),\n );\n }\n}\n\nProper answer\nFrankly, if I were to review this code, even if it works seamlessly, I would reject this change, because of the structure/pattern this code is using is quite bad.\nPlease use FutureBuilder, StreamBuilder or ValueListenableBuilder instead, but you need to provide more code (especially fetchImage and fetchCategories) for us to help.\n",
"Null safe\nReason for error:\nThis error occurs on retrieving the value for an index that doesn't exist in the List. For example:\nList<int> list = [];\nlist[0]; // <-- Error since there's no element at index 0 in the list. \n\nSolution:\nCheck if the the List is not null and has the element at index:\nvar myList = nullableList;\nvar index = 0;\nif (myList != null && myList.length > index) {\n myList[index]; // You can safely access the element here. \n}\n\n",
"You are not getting the data. The data folder from or data source is missing. The same happened for me. Later, I created the json file for data and pointed to that location. And it got fixed simply!\n",
"I got same issue when tried to access a array which was empty. This was as part of null safety.\nmy earlier code was\nTextBox(response.customerDetails!.address![0].city),\n\nwhich caused me error so I changed the code to\nText(\n (response.cutomerDetails.address.isNotEmpty) \n ? response.customerDetails!.address![0].city \n : \"N/A\",\n),\n\nadd a check when accessing arrays. This helped me remove the error.\n",
"It happens when you are going to fetch some data but it is not available on that index/position\nSo, you have to check the index/position value where it is null or not\nIn my case Listview -> itemcount was perfect but showing this error And then solved it by following checking code\nText(\"${(widget.topSellItem.subjects.isEmpty) ? \"\" : widget.topSellItem!.subjects[0].subject.name}\"),\n\n",
"I have solved this issue in flutter null safety version by following way.\nReason : It happened when value is not available for that index.\nYou can check itemCount item value is available or not at builder,\nSolution with Null Safety would be like :\nListView.builder(\n itemCount: snapshot.data!.items.length, //OR snapshot.data!.length\n itemBuilder: (context, index) {\n return (index > 0) ? YourWidget() : Container();\n });\n\n",
"In case the other methods don't work, check if your database contains any conflicting data entries. If so, fix them.\n",
"First, declare the array of objects.\nlate Map<String, dynamic> product={};\n\nthe HTTP answer is:\n{\n \"id\": \"1\",\n \"codigo\": \"mw9wcsABvk\",\n \"nombre\": \"Router TPLink Gaming 5G\",\n \"portada\": [\n {\n \"url\": \"/php/assets/producto/mw9wcsABvk/2729233.png\",\n \"name\": \"2729233.png\"\n }\n ]\n}\n\nIn Widget build\n body: Center(\n child: Column(\n children: [\n if(producto.isNotEmpty)\n Expanded(\n child: Column(\n children: [\n ConstrainedBox(\n constraints: BoxConstraints.tight(Size(double.infinity, 256)),\n child: Stack(\n alignment: AlignmentDirectional.center,\n children: [\n Positioned(\n child: Image.network(\"${host}${producto[\"portada\"][0][\"url\"]}\"),\n ),\n ],\n ),\n ),\n ],\n ),\n ),\n ],\n ),\n ),\n\n",
"Had same problem when accessing empty arrays, and fix it this ways : data.allData[index].reviews!.isEmpty ? 0 : data.allData[index].reviews![0].rating\nwhen there's data in it, it will access first index.\n",
"You must specify the length of the list of data. For example, if you're using ListView along with builder function then you must provide its item length count as itemCount.\nListView.builder(\n shrinkWrap: true,\n itemCount: snapshot.data.length,\n itemBuilder: (context, index) {\n return //your widget\n });\n\n",
"This error comes because of these reasons.\n\nNot using a builder in a screen.\nWhile using a builder we have to provide a condition that checking the list was empty or not. If the list is empty we have to show a circular progress indicator and the list is not empty we can show the list.\n\n"
] |
[
46,
19,
14,
8,
1,
1,
1,
1,
0,
0,
0,
0,
0
] |
[
"If you are fetching data from the API consider using FutureBuilder.\n",
"To me, going to the project directory and running the command flutter clean fixed the error\n"
] |
[
-1,
-4
] |
[
"dart",
"flutter",
"list",
"range"
] |
stackoverflow_0054977982_dart_flutter_list_range.txt
|
Q:
Boost::Asio not able to configure Windows & Linux serial port structures using native handle
I am working on serial port routines using Boost::Asio.
I am configuring the port using the wrappers provided by Boost::Asio.
In my application either the end of data is denoted by receive timeout or \r\n line termination sequence.
As Boost::Asio doesn't provide a wrapper to access and configure the DCB (Windows) / termios (Linux) structures in order to configure the timeout and/or line ending, I am accessing the native port handle/file descriptor that is returned via the native_handle wrapper and configuring the structures manually.
However it seems I am unable to configure the port properly.
Even if I configure the end of data to be denoted by \n the data is partially returned in chunks.
Similarly the data is also returned before timeout has occurred.
Update
The system works in 2 modes
Line Mode -> The data is read line by line and then processed. Line ending characters: \r, \n or \r\n.
Bulk Mode -> Multi-line data is read. Once no data is received for a pre-determined interval, the data is said to be completely received, e.g. if I don't receive new data for 50 milliseconds I consider the transfer to be complete.
Code
bool SerialPort::open_port(void)
{
try
{
this->port.open(this->port_name);
this->native_port = this->port.native_handle();
return true;
}
catch (const std::exception& ex)
{
PLOG_FATAL << ex.what();
}
return false;
}
bool SerialPort::open_port(const std::string& port_name, std::uint32_t baud_rate, std::uint8_t data_bits, std::uint8_t stop_bits,
parity_t parity, flow_control_t flow_control, std::uint32_t read_timeout, std::uint32_t read_inter_byte_timeout,
std::uint32_t write_timeout)
{
try
{
this->port_name = port_name;
if (not this->open_port())
return false;
if (not this->set_baud_rate(baud_rate).has_value())
return false;
if (not this->set_data_bits(data_bits).has_value())
return false;
if (not this->set_stop_bits(stop_bits).has_value())
return false;
if (not this->set_parity(parity).has_value())
return false;
if (not this->set_flow_control(flow_control).has_value())
return false;
this->read_timeout = read_timeout;
if (read_inter_byte_timeout <= 0)
this->read_inter_byte_timeout = 1;
#ifdef _WIN64
BOOL return_value;
DCB dcb = { 0 };
COMMTIMEOUTS timeouts = { 0 };
if (this->line_mode) //Set COM port to return data either at \n or \r
{
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = GetCommState(this->native_port, &dcb);
if (return_value)
{
if(this->new_line_character == '\r')
dcb.EofChar = '\r'; //Specify end of data character as carriage-return (\r)
else // --> Default
dcb.EofChar = '\n'; //Specify end of data character as new-line (\n)
}
else
{
PLOG_ERROR << "Error GetCommState : " << GetLastErrorAsString();
return false;
}
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = SetCommState(this->native_port, &dcb);
if (not return_value)
{
PLOG_ERROR << "Error SetCommState : " << GetLastErrorAsString();
return false;
}
}
else //Set COM port to return data on timeout
{
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = GetCommTimeouts(this->native_port, &timeouts);
if (return_value)
{
timeouts.ReadIntervalTimeout = this->read_inter_byte_timeout; // Timeout in milliseconds
//timeouts.ReadTotalTimeoutConstant = 0; //MAXDWORD; // in milliseconds - not needed
//timeouts.ReadTotalTimeoutMultiplier = 0; // in milliseconds - not needed
//timeouts.WriteTotalTimeoutConstant = 50; // in milliseconds - not needed
//timeouts.WriteTotalTimeoutMultiplier = write_timeout; // in milliseconds - not needed
}
else
{
PLOG_ERROR << "Error GetCommTimeouts : " << GetLastErrorAsString();
return false;
}
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = SetCommTimeouts(this->native_port, &timeouts);
if (not return_value)
{
PLOG_ERROR << "Error SetCommTimeouts : " << GetLastErrorAsString();
return false;
}
}
#else //For Linux termios
#endif // _WIN64
return true;
}
catch (const std::exception& ex)
{
PLOG_ERROR << ex.what();
return false;
}
}
void SerialPort::read_handler(const boost::system::error_code& error, std::size_t bytes_transferred)
{
this->read_async(); // I realized I was calling read_async before reading data
bool receive_complete{ false };
try
{
if (error not_eq boost::system::errc::success) //Error in serial port read
{
PLOG_ERROR << error.to_string();
this->async_signal.emit(this->port_number, SerialPortEvents::read_error, error.to_string());
return;
}
if (this->line_mode)
{
std::string temporary_recieve_data;
std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred, //Data is added to temporary buffer
std::back_inserter(temporary_recieve_data), [](std::byte character) {
return static_cast<char>(character);
}
);
boost::algorithm::trim(temporary_recieve_data); // Trim handles space character, tab, carriage return, newline, vertical tab and form feed
//Data is further processed based on the Process logic
receive_complete = true;
}
else // Bulk-Data. Just append data to end of received_data string buffer.
// Wait for timeout to trigger receive_complete
{
//Test Function
std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred,
std::back_inserter(this->received_data), [](std::byte character) {
return static_cast<char>(character);
}
);
this->async_signal.emit(this->port_number, SerialPortEvents::read_data, this->received_data); //Data has been recieved send to server via MQTT
}
}
catch (const std::exception& ex)
{
PLOG_ERROR << ex.what();
this->async_signal.emit(this->port_number, SerialPortEvents::read_error, ex.what());
}
}
Supporting Function
std::optional<std::uint32_t> SerialPort::set_baud_rate(std::uint32_t baud_rate)
{
boost::system::error_code error;
std::uint32_t _baud_rate = 1200;
switch (baud_rate)
{
case 1200:
case 2400:
case 4800:
case 9600:
case 115200:
_baud_rate = baud_rate;
break;
default:
_baud_rate = 1200;
break;
}
this->port.set_option(boost::asio::serial_port_base::baud_rate(_baud_rate), error);
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return baud_rate;
}
std::optional<std::uint8_t> SerialPort::set_data_bits(std::uint8_t data_bits)
{
boost::system::error_code error;
std::uint32_t _data_bits = 8;
switch (data_bits)
{
case 7:
case 8:
_data_bits = data_bits;
break;
default:
_data_bits = 8;
break;
}
this->port.set_option(boost::asio::serial_port_base::character_size(_data_bits), error);
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return data_bits;
}
std::optional<std::uint8_t> SerialPort::set_stop_bits(std::uint8_t stop_bits)
{
boost::system::error_code error;
switch (stop_bits)
{
case 1:
this->port.set_option(boost::asio::serial_port_base::stop_bits(boost::asio::serial_port_base::stop_bits::one), error);
break;
case 2:
this->port.set_option(boost::asio::serial_port_base::stop_bits(boost::asio::serial_port_base::stop_bits::two), error);
break;
default:
this->port.set_option(boost::asio::serial_port_base::stop_bits(boost::asio::serial_port_base::stop_bits::one), error);
break;
}
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return stop_bits;
}
std::optional<parity_t> SerialPort::set_parity(parity_t parity)
{
boost::system::error_code error;
switch (parity)
{
case Parity::none:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::none), error);
break;
case Parity::even:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::even), error);
break;
case Parity::odd:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::odd), error);
break;
default:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::none), error);
break;
}
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return parity;
}
std::optional<flow_control_t> SerialPort::set_flow_control(flow_control_t flow_control)
{
boost::system::error_code error;
switch (flow_control)
{
case FlowControl::none:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::none), error);
break;
case FlowControl::hardware:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::hardware), error);
break;
case FlowControl::software:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::software), error);
break;
default:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::none), error);
break;
}
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return flow_control;
}
A:
To your comments:
the incoming data is multi-line hence I have set the read timeout interval in the serial port structures

How does that make sense? There's no real difference handling timeouts in Asio either way.
Regarding the overload:
but I am getting E0304 no instance of overloaded function "boost::asio::async_read_until" matches the argument list error.
Here's my ten pence:
async_read_until(port,
asio::dynamic_buffer(read_buffer),
"\r\n",
bind(&SerialPort::read_handler, this, error, bytes_transferred));
Note that you need dynamic buffers. The simplest thing I could think of that stays close to your original:
std::vector<std::byte> read_buffer;
Now, we'll update the read handler to erase the "consumed" part, because the received buffer may contain data beyond the delimiter.
void read_handler(boost::system::error_code ec, size_t const bytes_transferred) {
std::cerr << "received " << bytes_transferred << " bytes (" << ec.message() << ")"
<< std::endl;
auto b = reinterpret_cast<char const*>(read_buffer.data()),
e = b + std::min(bytes_transferred, read_buffer.size());
if (std::all_of(
b, e, //
[](uint8_t ch) { return std::isspace(ch) || std::isgraph(ch); })) //
{
std::cerr << "ascii: " << quoted(std::string_view(b, e)) << std::endl;
} else {
std::cerr << "binary: ";
auto fmt = std::cerr.flags();
for (auto it = b; it != e; ++it) {
std::cerr << " " << std::hex << std::showbase << std::setfill('0')
<< std::setw(4) << static_cast<unsigned>(*it);
}
std::cerr.flags(fmt);
}
std::cerr << std::endl;
read_buffer.erase(begin(read_buffer), begin(read_buffer) + bytes_transferred);
if (!ec)
read_async(ignore_timeout);
}
Full listing: Live On Coliru
#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>
#include <boost/bind/bind.hpp>
#include <iomanip>
#include <iostream>
#include <ranges>
namespace asio = boost::asio;
static inline std::ostream PLOG_ERROR(std::cerr.rdbuf());
struct SerialPort {
static constexpr uint32_t ignore_timeout = -1;
SerialPort(asio::any_io_executor ex, std::string dev) : port(ex, dev) {}
bool read_async(uint32_t timeout_override) {
try {
// not necessary: std::ranges::fill(read_buffer, std::byte{});
if (timeout_override not_eq SerialPort::ignore_timeout) {
read_timeout = timeout_override;
}
using namespace asio::placeholders;
async_read_until(port,
asio::dynamic_buffer(read_buffer),
"\r\n",
bind(&SerialPort::read_handler, this, error, bytes_transferred));
return true;
} catch (std::exception const& ex) {
PLOG_ERROR << ex.what() << std::endl;
return false;
}
}
private:
void read_handler(boost::system::error_code ec, size_t const bytes_transferred) {
std::cerr << "received " << bytes_transferred << " bytes (" << ec.message() << ")"
<< std::endl;
auto b = reinterpret_cast<char const*>(read_buffer.data()),
e = b + std::min(bytes_transferred, read_buffer.size());
if (std::all_of(
b, e, //
[](uint8_t ch) { return std::isspace(ch) || std::isgraph(ch); })) //
{
std::cerr << "ascii: " << quoted(std::string_view(b, e)) << std::endl;
} else {
std::cerr << "binary: ";
auto fmt = std::cerr.flags();
for (auto it = b; it != e; ++it) {
std::cerr << " " << std::hex << std::showbase << std::setfill('0')
<< std::setw(4) << static_cast<unsigned>(*it);
}
std::cerr.flags(fmt);
}
std::cerr << std::endl;
read_buffer.erase(begin(read_buffer), begin(read_buffer) + bytes_transferred);
if (!ec)
read_async(ignore_timeout);
}
uint32_t read_timeout = 10;
std::vector<std::byte> read_buffer;
asio::serial_port port;
};
int main(int argc, char** argv) {
asio::io_context ioc;
SerialPort sp(make_strand(ioc), argc > 1 ? argv[1] : "/dev/ttyS0");
sp.read_async(SerialPort::ignore_timeout);
ioc.run();
}
Local demo:
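Not part of the answer above: for intuition only, the same delimiter-framing idea (append incoming chunks to a buffer, split off complete \r\n-terminated lines, carry the remainder over to the next read) sketched in Python with a plain byte buffer.
class LineFramer:
    # Accumulate raw byte chunks and yield complete lines terminated by CRLF.
    def __init__(self):
        self.buffer = b""

    def feed(self, chunk: bytes):
        self.buffer += chunk
        while b"\r\n" in self.buffer:
            line, self.buffer = self.buffer.split(b"\r\n", 1)
            yield line

framer = LineFramer()
for chunk in (b"HEL", b"LO\r\nWOR", b"LD\r\npartial"):
    for line in framer.feed(chunk):
        print(line)  # b'HELLO', then b'WORLD'; b'partial' stays buffered for the next read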
|
Boost::Asio not able to configure Windows & Linux serial port structures using native handle
|
I am working on serial port routines using Boost::Asio.
I am configuring the port using the wrappers provided by Boost::Asio.
In my application either the end of data is denoted by receive timeout or \r\n line termination sequence.
As Boost::Asio doesn't provide a wrapper to access and configure the DCB (Windows) / termios (Linux) structures in order to configure the timeout and/or line ending, I am accessing the native port handle/file descriptor that is returned via the native_handle wrapper and configuring the structures manually.
However it seems I am unable to configure the port properly.
Even if I configure the end of data to be denoted by \n the data is partially returned in chunks.
Similarly the data is also returned before timeout has occurred.
Update
The system works in 2 modes
Line Mode -> The data is read line by line and then processed. Line ending characters: \r, \n or \r\n.
Bulk Mode -> Multi-line data is read. Once no data is received for a pre-determined interval, the data is said to be completely received, e.g. if I don't receive new data for 50 milliseconds I consider the transfer to be complete.
Code
bool SerialPort::open_port(void)
{
try
{
this->port.open(this->port_name);
this->native_port = this->port.native_handle();
return true;
}
catch (const std::exception& ex)
{
PLOG_FATAL << ex.what();
}
return false;
}
bool SerialPort::open_port(const std::string& port_name, std::uint32_t baud_rate, std::uint8_t data_bits, std::uint8_t stop_bits,
parity_t parity, flow_control_t flow_control, std::uint32_t read_timeout, std::uint32_t read_inter_byte_timeout,
std::uint32_t write_timeout)
{
try
{
this->port_name = port_name;
if (not this->open_port())
return false;
if (not this->set_baud_rate(baud_rate).has_value())
return false;
if (not this->set_data_bits(data_bits).has_value())
return false;
if (not this->set_stop_bits(stop_bits).has_value())
return false;
if (not this->set_parity(parity).has_value())
return false;
if (not this->set_flow_control(flow_control).has_value())
return false;
this->read_timeout = read_timeout;
if (read_inter_byte_timeout <= 0)
this->read_inter_byte_timeout = 1;
#ifdef _WIN64
BOOL return_value;
DCB dcb = { 0 };
COMMTIMEOUTS timeouts = { 0 };
if (this->line_mode) //Set COM port to return data either at \n or \r
{
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = GetCommState(this->native_port, &dcb);
if (return_value)
{
if(this->new_line_character == '\r')
dcb.EofChar = '\r'; //Specify end of data character as carriage-return (\r)
else // --> Default
dcb.EofChar = '\n'; //Specify end of data character as new-line (\n)
}
else
{
PLOG_ERROR << "Error GetCommState : " << GetLastErrorAsString();
return false;
}
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = SetCommState(this->native_port, &dcb);
if (not return_value)
{
PLOG_ERROR << "Error SetCommState : " << GetLastErrorAsString();
return false;
}
}
else //Set COM port to return data on timeout
{
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = GetCommTimeouts(this->native_port, &timeouts);
if (return_value)
{
timeouts.ReadIntervalTimeout = this->read_inter_byte_timeout; // Timeout in milliseconds
//timeouts.ReadTotalTimeoutConstant = 0; //MAXDWORD; // in milliseconds - not needed
//timeouts.ReadTotalTimeoutMultiplier = 0; // in milliseconds - not needed
//timeouts.WriteTotalTimeoutConstant = 50; // in milliseconds - not needed
//timeouts.WriteTotalTimeoutMultiplier = write_timeout; // in milliseconds - not needed
}
else
{
PLOG_ERROR << "Error GetCommTimeouts : " << GetLastErrorAsString();
return false;
}
/*
* If the function succeeds, the return value is nonzero.
* If the function fails, the return value is zero. To get extended error information, call GetLastError.
*/
return_value = SetCommTimeouts(this->native_port, &timeouts);
if (not return_value)
{
PLOG_ERROR << "Error SetCommTimeouts : " << GetLastErrorAsString();
return false;
}
}
#else //For Linux termios
#endif // _WIN64
return true;
}
catch (const std::exception& ex)
{
PLOG_ERROR << ex.what();
return false;
}
}
void SerialPort::read_handler(const boost::system::error_code& error, std::size_t bytes_transferred)
{
this->read_async(); // I realized I was calling read_async before reading data
bool receive_complete{ false };
try
{
if (error not_eq boost::system::errc::success) //Error in serial port read
{
PLOG_ERROR << error.to_string();
this->async_signal.emit(this->port_number, SerialPortEvents::read_error, error.to_string());
return;
}
if (this->line_mode)
{
std::string temporary_recieve_data;
std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred, //Data is added to temporary buffer
std::back_inserter(temporary_recieve_data), [](std::byte character) {
return static_cast<char>(character);
}
);
boost::algorithm::trim(temporary_recieve_data); // Trim handles space character, tab, carriage return, newline, vertical tab and form feed
//Data is further processed based on the Process logic
receive_complete = true;
}
else // Bulk-Data. Just append data to end of received_data string buffer.
// Wait for timeout to trigger receive_complete
{
//Test Function
std::transform(this->read_buffer.begin(), this->read_buffer.begin() + bytes_transferred,
std::back_inserter(this->received_data), [](std::byte character) {
return static_cast<char>(character);
}
);
this->async_signal.emit(this->port_number, SerialPortEvents::read_data, this->received_data); //Data has been recieved send to server via MQTT
}
}
catch (const std::exception& ex)
{
PLOG_ERROR << ex.what();
this->async_signal.emit(this->port_number, SerialPortEvents::read_error, ex.what());
}
}
Supporting Function
std::optional<std::uint32_t> SerialPort::set_baud_rate(std::uint32_t baud_rate)
{
boost::system::error_code error;
std::uint32_t _baud_rate = 1200;
switch (baud_rate)
{
case 1200:
case 2400:
case 4800:
case 9600:
case 115200:
_baud_rate = baud_rate;
break;
default:
_baud_rate = 1200;
break;
}
this->port.set_option(boost::asio::serial_port_base::baud_rate(_baud_rate), error);
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return baud_rate;
}
std::optional<std::uint8_t> SerialPort::set_data_bits(std::uint8_t data_bits)
{
boost::system::error_code error;
std::uint32_t _data_bits = 8;
switch (data_bits)
{
case 7:
case 8:
_data_bits = data_bits;
break;
default:
_data_bits = 8;
break;
}
this->port.set_option(boost::asio::serial_port_base::character_size(_data_bits), error);
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return data_bits;
}
std::optional<std::uint8_t> SerialPort::set_stop_bits(std::uint8_t stop_bits)
{
boost::system::error_code error;
switch (stop_bits)
{
case 1:
this->port.set_option(boost::asio::serial_port_base::stop_bits(boost::asio::serial_port_base::stop_bits::one), error);
break;
case 2:
this->port.set_option(boost::asio::serial_port_base::stop_bits(boost::asio::serial_port_base::stop_bits::two), error);
break;
default:
this->port.set_option(boost::asio::serial_port_base::stop_bits(boost::asio::serial_port_base::stop_bits::one), error);
break;
}
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return stop_bits;
}
std::optional<parity_t> SerialPort::set_parity(parity_t parity)
{
boost::system::error_code error;
switch (parity)
{
case Parity::none:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::none), error);
break;
case Parity::even:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::even), error);
break;
case Parity::odd:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::odd), error);
break;
default:
this->port.set_option(boost::asio::serial_port_base::parity(boost::asio::serial_port_base::parity::none), error);
break;
}
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return parity;
}
std::optional<flow_control_t> SerialPort::set_flow_control(flow_control_t flow_control)
{
boost::system::error_code error;
switch (flow_control)
{
case FlowControl::none:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::none), error);
break;
case FlowControl::hardware:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::hardware), error);
break;
case FlowControl::software:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::software), error);
break;
default:
this->port.set_option(boost::asio::serial_port_base::flow_control(boost::asio::serial_port_base::flow_control::none), error);
break;
}
if (error)
{
PLOG_FATAL << error.message();
return std::nullopt;
}
return flow_control;
}
|
[
"To your comments:\n\nthe incoming data is multi-line hence I have set read timeout inetrval in the serial port structures\n\nHow does that make sense. There's no real difference handling timeouts in Asio either way/\nRegarding the overload:\n\nbut I am getting E0304 no instance of overloaded function \"boost::asio::async_read_until\" matches the argument list error.\n\nHere's my ten pence:\nasync_read_until(port,\n asio::dynamic_buffer(read_buffer),\n \"\\r\\n\",\n bind(&SerialPort::read_handler, this, error, bytes_transferred));\n\nNote that you need dynamic buffers. The simplest thing I could think of that stays close to your original:\nstd::vector<std::byte> read_buffer;\n\nNow, we'll update the read handler to erase the \"consumed\" part, because received buffer may contain data beyond the delimiter.\nvoid read_handler(boost::system::error_code ec, size_t const bytes_transferred) {\n std::cerr << \"received \" << bytes_transferred << \" bytes (\" << ec.message() << \")\"\n << std::endl;\n\n auto b = reinterpret_cast<char const*>(read_buffer.data()),\n e = b + std::min(bytes_transferred, read_buffer.size());\n\n if (std::all_of(\n b, e, //\n [](uint8_t ch) { return std::isspace(ch) || std::isgraph(ch); })) //\n {\n std::cerr << \"ascii: \" << quoted(std::string_view(b, e)) << std::endl;\n } else {\n std::cerr << \"binary: \";\n auto fmt = std::cerr.flags();\n for (auto it = b; it != e; ++it) {\n std::cerr << \" \" << std::hex << std::showbase << std::setfill('0')\n << std::setw(4) << static_cast<unsigned>(*it);\n }\n std::cerr.flags(fmt);\n }\n std::cerr << std::endl;\n\n read_buffer.erase(begin(read_buffer), begin(read_buffer) + bytes_transferred);\n\n if (!ec)\n read_async(ignore_timeout);\n}\n\nFull listing: Live On Coliru\n#include <boost/asio.hpp>\n#include <boost/asio/serial_port.hpp>\n#include <boost/bind/bind.hpp>\n#include <iomanip>\n#include <iostream>\n#include <ranges>\nnamespace asio = boost::asio;\n\nstatic inline std::ostream PLOG_ERROR(std::cerr.rdbuf());\n\nstruct SerialPort {\n static constexpr uint32_t ignore_timeout = -1;\n\n SerialPort(asio::any_io_executor ex, std::string dev) : port(ex, dev) {}\n\n bool read_async(uint32_t timeout_override) {\n try {\n // not necessary: std::ranges::fill(read_buffer, std::byte{});\n\n if (timeout_override not_eq SerialPort::ignore_timeout) {\n read_timeout = timeout_override;\n }\n using namespace asio::placeholders;\n\n async_read_until(port,\n asio::dynamic_buffer(read_buffer),\n \"\\r\\n\",\n bind(&SerialPort::read_handler, this, error, bytes_transferred));\n\n return true;\n } catch (std::exception const& ex) {\n PLOG_ERROR << ex.what() << std::endl;\n return false;\n }\n }\n\n private:\n void read_handler(boost::system::error_code ec, size_t const bytes_transferred) {\n std::cerr << \"received \" << bytes_transferred << \" bytes (\" << ec.message() << \")\"\n << std::endl;\n\n auto b = reinterpret_cast<char const*>(read_buffer.data()),\n e = b + std::min(bytes_transferred, read_buffer.size());\n\n if (std::all_of(\n b, e, //\n [](uint8_t ch) { return std::isspace(ch) || std::isgraph(ch); })) //\n {\n std::cerr << \"ascii: \" << quoted(std::string_view(b, e)) << std::endl;\n } else {\n std::cerr << \"binary: \";\n auto fmt = std::cerr.flags();\n for (auto it = b; it != e; ++it) {\n std::cerr << \" \" << std::hex << std::showbase << std::setfill('0')\n << std::setw(4) << static_cast<unsigned>(*it);\n }\n std::cerr.flags(fmt);\n }\n std::cerr << std::endl;\n\n read_buffer.erase(begin(read_buffer), begin(read_buffer) + 
bytes_transferred);\n\n if (!ec)\n read_async(ignore_timeout);\n }\n\n uint32_t read_timeout = 10;\n std::vector<std::byte> read_buffer;\n asio::serial_port port;\n};\n\nint main(int argc, char** argv) {\n asio::io_context ioc;\n\n SerialPort sp(make_strand(ioc), argc > 1 ? argv[1] : \"/dev/ttyS0\");\n sp.read_async(SerialPort::ignore_timeout);\n\n ioc.run();\n}\n\nLocal demo:\n\n"
] |
[
0
] |
[] |
[] |
[
"boost_asio",
"c++"
] |
stackoverflow_0074663561_boost_asio_c++.txt
|
Q:
Deparse.level = 2 in R
I need to know what deparse.level = 2 means and what it does to a table and everything I write makes me more confused. Can anyone help me please?
Thank you
I tried to apply the table without deparse.level and I can see the order of the table changes, but it also adds labels, so I can't understand what exactly it is meant to do.
A:
Not sure what you mean with the ordering part, but table(..., deparse.level = 2) makes the function willing to name the dimensions in the table things that are not symbols (variable names, basically) if an argument wasn't named in the call. In effect it tries extra hard to assign names to the dimensions even if they're something like a function call, for example. See the Details and Examples in the next help text with ?table.
More detail:
The documentation technically does explain it, but it's... a bit dense:
If the argument dnn is not supplied, the internal function list.names is called to compute the "dimname names". If the arguments in ... are named, those names are used. For the remaining arguments, deparse.level = 0 gives an empty name, deparse.level = 1 uses the supplied argument if it is a symbol, and deparse.level = 2 will deparse the argument.
There's a good example below that though:
> a <- letters[1:3]
> table(a, sample(a)) # dnn is c("a", "")
a a b c
a 0 0 1
b 1 0 0
c 0 1 0
> table(a, sample(a), deparse.level = 0) # dnn is c("", "")
a b c
a 1 0 0
b 0 0 1
c 0 1 0
> table(a, sample(a), deparse.level = 2) # dnn is c("a", "sample(a)")
sample(a)
a a b c
a 1 0 0
b 0 0 1
c 0 1 0
Only in the last one is it willing to name a dimension "sample(a)". In all those cases the second vector isn't given as a named argument, so it tries to figure out what symbol to use for it (with level 1, the default) or what text of any kind to use for it (with level 2).
Even more:
And about what it means by "if it is a symbol," see ?is.symbol and ?deparse and the rabbit hole that leads to. It's not about how weird the name looks; you can do something like this, and it's fine with it at deparse level 1 since it is a symbol in this context:
> `sample(a)` <- sample(a)
> table(a, `sample(a)`)
sample(a)
a a b c
a 0 0 1
b 1 0 0
c 0 1 0
|
Deparse.level = 2 in R
|
I need to know what deparse.level = 2 means and what it does to a table and everything I write makes me more confused. Can anyone help me please?
Thank you
I tried to apply the table without deparse.level and I can see the order of the table changes, but it also adds labels, so I can't understand what exactly it is meant to do.
|
[
"Not sure what you mean with the ordering part, but table(..., deparse.level = 2) makes the function willing to name the dimensions in the table things that are not symbols (variable names, basically) if an argument wasn't named in the call. In effect it tries extra hard to assign names to the dimensions even if they're something like a function call, for example. See the Details and Examples in the next help text with ?table.\n\nMore detail:\nThe documentation technically does explain it, but it's... a bit dense:\n\nIf the argument dnn is not supplied, the internal function list.names is called to compute the βdimname namesβ. If the arguments in ... are named, those names are used. For the remaining arguments, deparse.level = 0 gives an empty name, deparse.level = 1 uses the supplied argument if it is a symbol, and deparse.level = 2 will deparse the argument.\n\nThere's a good example below that though:\n> a <- letters[1:3]\n> table(a, sample(a)) # dnn is c(\"a\", \"\")\n \na a b c\n a 0 0 1\n b 1 0 0\n c 0 1 0\n> table(a, sample(a), deparse.level = 0) # dnn is c(\"\", \"\")\n \n a b c\n a 1 0 0\n b 0 0 1\n c 0 1 0\n> table(a, sample(a), deparse.level = 2) # dnn is c(\"a\", \"sample(a)\")\n\n sample(a)\na a b c\n a 1 0 0\n b 0 0 1\n c 0 1 0\n\nOnly in the last one is it willing to name a dimension \"sample(a)\". In all those cases the second vector isn't given as a named argument, so it tries to figure out what symbol to use for it (with level 1, the default) or what text of any kind to use for it (with level 2).\n\nEven more:\nAnd about what it means by \"if it is a symbol,\" see ?is.symbol and ?deparse and the rabbit hole that leads to. It's not about how weird the name looks; you can do something like this, and it's fine with it at deparse level 1 since it is a symbol in this context:\n> `sample(a)` <- sample(a)\n> table(a, `sample(a)`)\n sample(a)\na a b c\n a 0 0 1\n b 1 0 0\n c 0 1 0\n\n"
] |
[
0
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074338282_r.txt
|
Q:
Power BI - Get the max of a value based on a category, status and date
I've a table that shows me the category by status and the start and end date:
and I'm trying to create a conditional column that returns the maximum of each category and status.
I tried to create the column in M
Logic= if [Status] = "InProgress" and [Start] = List.Max(#"Sorted Rows1"[Start]) and [End] = #datetime(2999, 12, 31, 0, 0, 0) then 2 else if [Status] = "Succeeded" then 1 else 0
and it is working correctly except for the case of CategoryG, which should only return the value of "2" from the Logic column; instead it is returning the maximum of both the Succeeded status and the InProgress status, where it should only show the InProgress status because it has a later StartDate.
I also tried creating in DAX but got the same result:
Flag DAX = SWITCH( TRUE (),
'Table'[Start]= MAX ( 'Table'[Start] )
&& YEAR('Table'[End]) = 2999
&& 'Table'[Status] = "InProgress", 2,
'Table'[Status] = "Failed", 1,
'Table'[Start]= LASTNONBLANK( 'Table'[Start],1 )
&& 'Table'[Status] = "Succeeded" && YEAR('Table'[End]) <> 2999, 3,
0
)
Sample data 1:
CategoryName Status StartDate EndDate
CategoryA Succeeded 01/12/2022 22:31:54 02/12/2022 01:31:39
CategoryA InProgress 02/12/2022 00:24:52 16/01/2001 00:00:00
CategoryB InProgress 02/12/2022 01:31:40 16/01/2001 00:00:00
CategoryB Succeeded 02/12/2022 01:31:41 02/12/2022 04:25:46
CategoryC InProgress 02/12/2022 04:25:48 16/01/2001 00:00:00
CategoryC Succeeded 02/12/2022 04:25:49 02/12/2022 08:23:52
CategoryD InProgress 02/12/2022 08:23:56 16/01/2001 00:00:00
CategoryE InProgress 02/12/2022 08:23:56 16/01/2001 00:00:00
CategoryD Succeeded 02/12/2022 08:23:57 02/12/2022 09:51:37
CategoryE Succeeded 02/12/2022 08:23:57 02/12/2022 09:42:21
CategoryF InProgress 02/12/2022 09:42:35 16/01/2001 00:00:00
CategoryF Succeeded 02/12/2022 09:42:36 02/12/2022 12:17:46
CategoryG Succeeded 02/12/2022 12:17:52 02/12/2022 15:07:59
CategoryG InProgress 02/12/2022 12:17:53 31/12/2999 00:00:00
Sample data 2 (with Failed):
CategoryName Status StartDate EndDate
CategoryA Succeeded 01/12/2022 22:31:54 02/12/2022 01:31:39
CategoryA InProgress 02/12/2022 00:24:52 16/01/2001 00:00:00
CategoryB InProgress 02/12/2022 01:31:40 16/01/2001 00:00:00
CategoryB Failed 02/12/2022 01:31:41 02/12/2022 04:25:46
desire output:
Briefly, for each category I want to create a table that shows the last Status with the minimum of the StartDate and the maximum of the EndDate, to show the execution time with the last Status of each Category.
The desire table will only have one row for each category showing the last status, minimum StartDate and maximum EndDate.
Can anyone please help me in achieving this?
Thank you!
A:
In your edited text, you write: "The desire table will only have one row for each category showing the last status, minimum StartDate and maximum EndDate."
That is different from your screenshot labelled desire output.
But if that is really what you want, then you can accomplish this in M-Code
Group by CategoryName
Add custom aggregations to extract the other information
As below:
#"Grouped Rows" = Table.Group(#"Previous Step", {"CategoryName"}, {
{"Min Start", each List.Min([StartDate]), type datetime},
{"Max End", each List.Max([EndDate]), type datetime},
{"Last Status", each [Status]{List.PositionOf([StartDate], List.Max([StartDate]))}, type text}})
|
Power BI - Get the max of a value based on a category, status and date
|
I've a table that shows me the category by status and the start and end date:
and I'm trying to create a conditional column that returns the maximum of each category and status.
I tried to create the column in M
Logic= if [Status] = "InProgress" and [Start] = List.Max(#"Sorted Rows1"[Start]) and [End] = #datetime(2999, 12, 31, 0, 0, 0) then 2 else if [Status] = "Succeeded" then 1 else 0
and it is working correctly except for the case of CategoryG, which should only return the value of "2" from the Logic column; instead it is returning the maximum of both the Succeeded status and the InProgress status, where it should only show the InProgress status because it has a later StartDate.
I also tried creating in DAX but got the same result:
Flag DAX = SWITCH( TRUE (),
'Table'[Start]= MAX ( 'Table'[Start] )
&& YEAR('Table'[End]) = 2999
&& 'Table'[Status] = "InProgress", 2,
'Table'[Status] = "Failed", 1,
'Table'[Start]= LASTNONBLANK( 'Table'[Start],1 )
&& 'Table'[Status] = "Succeeded" && YEAR('Table'[End]) <> 2999, 3,
0
)
Sample data 1:
CategoryName Status StartDate EndDate
CategoryA Succeeded 01/12/2022 22:31:54 02/12/2022 01:31:39
CategoryA InProgress 02/12/2022 00:24:52 16/01/2001 00:00:00
CategoryB InProgress 02/12/2022 01:31:40 16/01/2001 00:00:00
CategoryB Succeeded 02/12/2022 01:31:41 02/12/2022 04:25:46
CategoryC InProgress 02/12/2022 04:25:48 16/01/2001 00:00:00
CategoryC Succeeded 02/12/2022 04:25:49 02/12/2022 08:23:52
CategoryD InProgress 02/12/2022 08:23:56 16/01/2001 00:00:00
CategoryE InProgress 02/12/2022 08:23:56 16/01/2001 00:00:00
CategoryD Succeeded 02/12/2022 08:23:57 02/12/2022 09:51:37
CategoryE Succeeded 02/12/2022 08:23:57 02/12/2022 09:42:21
CategoryF InProgress 02/12/2022 09:42:35 16/01/2001 00:00:00
CategoryF Succeeded 02/12/2022 09:42:36 02/12/2022 12:17:46
CategoryG Succeeded 02/12/2022 12:17:52 02/12/2022 15:07:59
CategoryG InProgress 02/12/2022 12:17:53 31/12/2999 00:00:00
Sample data 2 (with Failed):
CategoryName Status StartDate EndDate
CategoryA Succeeded 01/12/2022 22:31:54 02/12/2022 01:31:39
CategoryA InProgress 02/12/2022 00:24:52 16/01/2001 00:00:00
CategoryB InProgress 02/12/2022 01:31:40 16/01/2001 00:00:00
CategoryB Failed 02/12/2022 01:31:41 02/12/2022 04:25:46
desire output:
Briefly, for each category I want to create a table that shows the last Status with the minimum of the StartDate and the maximum of the EndDate, to show the execution time with the last Status of each Category.
The desire table will only have one row for each category showing the last status, minimum StartDate and maximum EndDate.
Can anyone please help me in achieving this?
Thank you!
|
[
"In your edited text, you write: \"The desire table will only have one row for each category showing the last status, minimum StartDate and maximum EndDate.\"\nThat is different from your screenshot labelled desire output.\nBut if that is really what you want, then you can accomplish this in M-Code\n\nGroup by CategoryName\nAdd custom aggregations to extract the other information\n\nAs below:\n #\"Grouped Rows\" = Table.Group(#\"Previous Step\", {\"CategoryName\"}, {\n {\"Min Start\", each List.Min([StartDate]), type datetime},\n {\"Max End\", each List.Max([EndDate]), type datetime},\n {\"Last Status\", each [Status]{List.PositionOf([StartDate], List.Max([StartDate]))}, type text}})\n\n\n"
] |
[
0
] |
[] |
[] |
[
"m",
"powerbi",
"powerbi_desktop",
"powerquery"
] |
stackoverflow_0074665908_m_powerbi_powerbi_desktop_powerquery.txt
|
Q:
Calling class prototype methods via index with TypeScript
I would like to be able to call class prototype methods using bracket notation, so that the method name can be decided at run time:
classInstance['methodName'](arg);
I am failing to do this properly with TypeScript:
class Foo {
readonly ro: string = '';
constructor() {}
fn(s: number) { console.log(s); }
}
const foo = new Foo();
const methods = ['fn'];
foo['fn'](0)
// Type 'undefined' cannot be used as an index type.
foo[methods[0]](1);
// This expression is not callable.
// Not all constituents of type 'string | ((s: number) => void)' are callable.
// Type 'string' has no call signatures.
foo[methods[0] as keyof Foo](1);
The above example is in the TS Playground.
I think that I have a reasonable understanding of what the errors mean and why the string literal in foo['fn'](0) does not produce an error. However, I don't understand how to prevent the errors. I thought that I might be able to use Extract to build a type comprising Function, but I've failed to do that.
How can I produce a list of typed method names over which my code can iterate? And better, is it possible for the class to export such a list so that users of the class can easily access them?
Background Information
I have a Playwright test that needs to iterate over a list of methods from a Page Object Model, producing a screenshot for each.
A:
You need to make sure that your methods array is typed as an array containing only valid method names:
const methods: ('fn' | β¦)[] = ['fn'];
Notice that
const methods: (keyof Foo)[] = ['fn'];
doesn't cut it because Foo has also other keys (e.g. ro) that are not the names of methods, or the names of methods with a different signature than you need.
You can also just use
const methods = ['fn'] as const;
A:
When you write
const methods = ['fn'];
The compiler infers the type of methods as string[], which means it may contain any number of any strings at all. So the compiler does not keep track of exactly which values are in the array, or where they are. This allows you to do things later like
methods.push("hello");
Often, this is what people want when they initialize a variable. But in your case, it is a problem, because then methods[0] could be any string whatsoever (or undefined if you have the --noUncheckedIndexedAccess compiler option enabled).
If you want the compiler to keep track of the exact literal types of the values in the array, the easiest way to do so is with a const assertion:
const methods = ['fn'] as const;
This tells the compiler that you would like to treat methods as essentially unchanging, and that it should infer the most specific type it can, more or less. Now methods is inferred to be of type
// const methods: readonly ["fn"]
which means that the compiler knows that methods is a readonly tuple containing exactly one element, whose type is the string literal type "fn".
So now the compiler knows that methods[0] is "fn", and your call compiles with no error:
foo[methods[0]](1); // okay
Playground link to code
|
Calling class prototype methods via index with TypeScript
|
I would like to be able to call class prototype methods using bracket notation, so that the method name can be decided at run time:
classInstance['methodName'](arg);
I am failing to do this properly with TypeScript:
class Foo {
readonly ro: string = '';
constructor() {}
fn(s: number) { console.log(s); }
}
const foo = new Foo();
const methods = ['fn'];
foo['fn'](0)
// Type 'undefined' cannot be used as an index type.
foo[methods[0]](1);
// This expression is not callable.
// Not all constituents of type 'string | ((s: number) => void)' are callable.
// Type 'string' has no call signatures.
foo[methods[0] as keyof Foo](1);
The above example is in the TS Playground.
I think that I have a reasonable understanding of what the errors mean and why the string literal in foo['fn'](0) does not produce an error. However, I don't understand how to prevent the errors. I thought that I might be able to use Extract to build a type comprising Function, but I've failed to do that.
How can I produce a list of typed method names over which my code can iterate? And better, is it possible for the class to export such a list so that users of the class can easily access them?
Background Information
I have a Playwright test that needs to iterate over a list of methods from a Page Object Model, producing a screenshot for each.
|
[
"You need to make sure that your methods array is typed as an array containing only valid method names:\nconst methods: ('fn' | β¦)[] = ['fn'];\n\nNotice that\nconst methods: (keyof Foo)[] = ['fn'];\n\ndoesn't cut it because Foo has also other keys (e.g. ro) that are not the names of methods, or the names of methods with a different signature than you need.\nYou can also just use\nconst methods = ['fn'] as const;\n\n",
"When you write\nconst methods = ['fn'];\n\nThe compiler infers the type of methods as string[], which means it may contain any number of any strings at all. So the compiler does not keep track of exactly which values are in the array, or where they are. This allows you to do things later like\nmethods.push(\"hello\");\n\nOften, this is what people want when they initialize a variable. But in your case, it is a problem, because then methods[0] could be any string whatsoever (or undefined if you have the --noUncheckedIndexedAccess compiler option enabled).\n\nIf you want the compiler to keep track of the exact literal types of the values in the array, the easiest way to do so is with a const assertion:\nconst methods = ['fn'] as const;\n\nThis tells the compiler that you would like to treat methods as essentially unchanging, and that it should infer the most specific type it can, more or less. Now methods is inferred to be of type\n// const methods: readonly [\"fn\"]\n\nwhich means that the compiler knows that methods is a readonly tuple containing exactly one element, whose type is the string literal type \"fn\".\nSo now the compiler knows that methods[0] is \"fn\", and your call compiles with no error:\nfoo[methods[0]](1); // okay\n\nPlayground link to code\n"
] |
[
1,
1
] |
[] |
[] |
[
"es6_class",
"javascript",
"typescript"
] |
stackoverflow_0074668291_es6_class_javascript_typescript.txt
|
Q:
Pragmatics of typed intermediate languages
One trend in the compilation is to use typed intermediate languages. Haskell's ghc with its core intermediate language, a variant of System F-omega, is an example of this architecture [ 1 ]. Another is LLVM, which has a typed intermediate language at its core [ 2 ]. The benefit of this approach is that errors in the transformations that make up parts of the code generator can be detected early. In addition, the type information can be used during optimization and code generation.
For efficiency, typed IRs are type-checked, rather than have their type inferred. To make type-checks fast, each variable and each binder carry types for easy type-checking.
However, many transformations in the compiler pipeline may introduce new variables. For example, a normalization transformation K(.) might transform an application
M(N)
into an expression like
let x = K(M) in
let y = K(N) in x(y)
Question. I wonder how compilers handle the issue of giving types to newly introduced
variables. Do they re-typecheck, in the example above K(M) and K(N)? Isn't that time-consuming? And does it require passing an environment around? Do they use maps from AST nodes to type information to avoid re-running type checking?
S. Marlow, S. Peyton Jones, The Glasgow Haskell Compiler.
LLVM Language Reference Manual.
A:
Do they re-typecheck, in the example above K(M) and K(N)?
Yes, they do. It's not that bad, though. The typechecker knows that K(M) is an application of K to M. It knows what the type of K is, and that should be a function type. It knows what the type of M is, and it can check that that's the same as the input type of the function. So it knows that K(M) has the output type of K.
The typechecker also knows that K(N) is an application of K to N. It knows that K(N) has the same type as K(M). And it knows that x(y) is an application of x to y. It knows that x has a function type and that y has the same type as x's input type. So it knows that x(y) has the same type as x's output type. So it knows that the entire expression has the same type as K's output type.
Isn't that time-consuming?
Not really. The typechecker doesn't have to check the entire expression before it can start typechecking. It can check subexpressions as it goes, and it can do some caching to avoid rechecking subexpressions that it's already checked.
And does it require passing an environment around? Do they use maps from AST nodes to type information to avoid re-running type checking?
Yes, they do. And they do use maps from AST nodes to type information, but they don't use them to avoid re-running type checking. They use them to avoid running the same type checking on the same expression twice. (That might be the same thing, I'm not entirely sure.)
A:
Compilers often use a combination of techniques to handle the issue of giving types to newly introduced variables. One approach is to re-typecheck the expressions that introduce new variables, such as K(M) and K(N) in the example above. This can be time-consuming, but modern compilers often use optimization techniques to make the type-checking process more efficient.
Another approach is to use type inference, where the compiler uses information from the surrounding code to infer the types of the new variables. This can be more efficient than re-typechecking, but it requires the compiler to have a strong understanding of the type system and the relationships between different types.
In some cases, compilers may also use maps from AST nodes to type information to avoid re-running type-checking for expressions that have already been type-checked. This can be useful for optimizing the compilation process and reducing the amount of time spent on type-checking.
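To make the mechanics concrete, here is a minimal sketch (in Python, not taken from GHC or LLVM) of a checker for a toy typed IR in which every let-binder carries an explicit type annotation; all names here (TFun, check, the tuple encoding of expressions) are illustrative assumptions, not any real compiler's API.
class TFun:
    """Function type: arg -> res."""
    def __init__(self, arg, res):
        self.arg, self.res = arg, res
    def __eq__(self, other):
        return isinstance(other, TFun) and (self.arg, self.res) == (other.arg, other.res)

def check(expr, env):
    """Return the type of expr; env maps variable names to their declared types."""
    kind = expr[0]
    if kind == "var":                              # ("var", name)
        return env[expr[1]]
    if kind == "app":                              # ("app", fun, arg)
        tf, ta = check(expr[1], env), check(expr[2], env)
        assert isinstance(tf, TFun) and tf.arg == ta, "ill-typed application"
        return tf.res
    if kind == "let":                              # ("let", name, annot, rhs, body)
        _, name, annot, rhs, body = expr
        # Verify the annotation the transformation stamped on the binder,
        # then check the body in the extended environment.
        assert check(rhs, env) == annot, "annotation does not match rhs"
        return check(body, {**env, name: annot})
    raise ValueError(f"unknown node {kind!r}")

# Example: the normalized form  let x = f in let y = a in x(y),
# i.e. what a normalizer might produce for f(a). The transformation that
# introduced x and y already knew the types of f and a, so it stamps them
# onto the new binders; checking afterwards is purely local.
env = {"f": TFun("A", "B"), "a": "A"}
prog = ("let", "x", TFun("A", "B"), ("var", "f"),
        ("let", "y", "A", ("var", "a"),
         ("app", ("var", "x"), ("var", "y"))))
print(check(prog, env))                            # prints "B"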
|
Pragmatics of typed intermediate languages
|
One trend in the compilation is to use typed intermediate languages. Haskell's ghc with its core intermediate language, a variant of System F-omega, is an example of this architecture [ 1 ]. Another is LLVM, which has a typed intermediate language at its core [ 2 ]. The benefit of this approach is that errors in the transformations that make up parts of the code generator can be detected early. In addition, the type information can be used during optimization and code generation.
For efficiency, typed IRs are type-checked, rather than have their type inferred. To make type-checks fast, each variable and each binder carry types for easy type-checking.
However, many transformations in the compiler pipeline may introduce new variables. For example, a normalization transformation K(.) might transform an application
M(N)
into an expression like
let x = K(M) in
let y = K(N) in x(y)
Question. I wonder how compilers handle the issue of giving types to newly introduced
variables. Do they re-typecheck, in the example above K(M) and K(N)? Isn't that time-consuming? And does it require passing an environment around? Do they use maps from AST nodes to type information to avoid re-running type checking?
S. Marlow, S. Peyton Jones, The Glasgow Haskell Compiler.
LLVM Language Reference Manual.
|
[
"\nDo they re-typecheck, in the example above K(M) and K(N)?\n\nYes, they do. It's not that bad, though. The typechecker knows that K(M) is an application of K to M. It knows what the type of K is, and that should be a function type. It knows what the type of M is, and it can check that that's the same as the input type of the function. So it knows that K(M) has the output type of K.\nThe typechecker also knows that K(N) is an application of K to N. It knows that K(N) has the same type as K(M). And it knows that x(y) is an application of x to y. It knows that x has a function type and that y has the same type as x's input type. So it knows that x(y) has the same type as x's output type. So it knows that the entire expression has the same type as K's output type.\n\nIsn't that time-consuming?\n\nNot really. The typechecker doesn't have to check the entire expression before it can start typechecking. It can check subexpressions as it goes, and it can do some caching to avoid rechecking subexpressions that it's already checked.\n\nAnd does it require passing an environment around? Do they use maps from AST nodes to type information to avoid re-running type checking?\n\nYes, they do. And they do use maps from AST nodes to type information, but they don't use them to avoid re-running type checking. They use them to avoid running the same type checking on the same expression twice. (That might be the same thing, I'm not entirely sure.)\n",
"Compilers often use a combination of techniques to handle the issue of giving types to newly introduced variables. One approach is to re-typecheck the expressions that introduce new variables, such as K(M) and K(N) in the example above. This can be time-consuming, but modern compilers often use optimization techniques to make the type-checking process more efficient.\nAnother approach is to use type inference, where the compiler uses information from the surrounding code to infer the types of the new variables. This can be more efficient than re-typechecking, but it requires the compiler to have a strong understanding of the type system and the relationships between different types.\nIn some cases, compilers may also use maps from AST nodes to type information to avoid re-running type-checking for expressions that have already been type-checked. This can be useful for optimizing the compilation process and reducing the amount of time spent on type-checking.\n"
] |
[
0,
0
] |
[] |
[] |
[
"compiler_construction",
"compiler_optimization",
"ghc",
"intermediate_language",
"llvm_ir"
] |
stackoverflow_0034336065_compiler_construction_compiler_optimization_ghc_intermediate_language_llvm_ir.txt
|
Q:
grouped multi select dropdown in flutter
After a long search for how to create a multi-select dropdown in Flutter, I found two solutions.
The first one uses a custom class:
Is there an equivalent widget in flutter to the "select multiple" element in HTML
The second one uses the package:
multi_select_flutter
But what I want is how to make a grouped dropdown in either of these two ways, because giving a title to each option group is very important in my case, like this:
A:
In the items list, set the type to data to add a checkbox, or to sep to add a group title. The output from the dialog will be a set of the selected values in the form {2, 3}, where value 2 = Cordoba.
Full Code
import 'package:flutter/material.dart';
void main() => runApp(const MyApp());
class MyApp extends StatelessWidget {
const MyApp({super.key});
@override
Widget build(BuildContext context) {
return const MaterialApp(
debugShowCheckedModeBanner: false,
home: HomeScreen(),
);
}
}
class MultiSelectDialogItem<V> {
V value;
String name;
String type;
MultiSelectDialogItem(
{required this.name, required this.type, required this.value});
}
class MultiSelectDialog<V> extends StatefulWidget {
const MultiSelectDialog({
Key? key,
required this.items,
required this.initialSelectedValues,
}) : super(key: key);
final List<MultiSelectDialogItem<V>> items;
final Set<V> initialSelectedValues;
@override
State<StatefulWidget> createState() => _MultiSelectDialogState<V>();
}
class _MultiSelectDialogState<V> extends State<MultiSelectDialog<V>> {
final _selectedValues = <V>{};
@override
void initState() {
super.initState();
_selectedValues.addAll(widget.initialSelectedValues);
}
void _onItemCheckedChange(V itemValue, bool checked) {
setState(() {
if (checked) {
_selectedValues.add(itemValue);
} else {
_selectedValues.remove(itemValue);
}
});
}
void _onCancelTap() {
Navigator.pop(context);
}
void _onSubmitTap() {
Navigator.pop(context, _selectedValues);
}
@override
Widget build(BuildContext context) {
return AlertDialog(
title: const Text('Select place'),
contentPadding: const EdgeInsets.all(20.0),
content: SingleChildScrollView(
child: ListTileTheme(
contentPadding: const EdgeInsets.fromLTRB(14.0, 0.0, 24.0, 0.0),
child: ListBody(
children: widget.items.map(_buildItem).toList(),
),
),
),
actions: <Widget>[
ElevatedButton(
onPressed: _onCancelTap,
child: const Text('CANCEL'),
),
ElevatedButton(
onPressed: _onSubmitTap,
child: const Text('OK'),
)
],
);
}
Widget _buildItem(MultiSelectDialogItem<V> item) {
final checked = _selectedValues.contains(item.value);
return item.type == "data"
? CheckboxListTile(
value: checked,
title: Text(item.name),
controlAffinity: ListTileControlAffinity.leading,
onChanged: (checked) => _onItemCheckedChange(item.value, checked!),
)
: Container(
child: Padding(
padding: const EdgeInsets.all(10.0),
child: Text(
item.name,
style: TextStyle(color: Color.fromARGB(255, 91, 91, 91)),
),
),
);
}
}
class HomeScreen extends StatefulWidget {
const HomeScreen({super.key});
@override
HomeScreenState createState() => HomeScreenState();
}
class HomeScreenState extends State<HomeScreen> {
void _showMultiSelect(BuildContext context) async {
final items = <MultiSelectDialogItem<int>>[
MultiSelectDialogItem(name: 'Argentina', type: 'sep', value: 1),
MultiSelectDialogItem(name: 'Cordoba', type: 'data', value: 2),
MultiSelectDialogItem(name: 'Chaco', type: 'data', value: 3),
MultiSelectDialogItem(name: 'Buenos Aires', type: 'data', value: 4),
MultiSelectDialogItem(name: 'USA', type: 'sep', value: 5),
MultiSelectDialogItem(name: 'California', type: 'data', value: 6),
MultiSelectDialogItem(name: 'Florida', type: 'data', value: 7),
];
final selectedValues = await showDialog<Set>(
context: context,
builder: (BuildContext context) {
return MultiSelectDialog(
items: items,
initialSelectedValues: [].toSet(),
);
},
);
print(selectedValues);
}
@override
Widget build(BuildContext context) {
return Scaffold(
body: Center(
child: ElevatedButton(
child: Text("show dialog"),
onPressed: () {
_showMultiSelect(context);
},
),
),
);
}
}
Output
Hope this helps. Happy Coding :)
|
grouped multi select dropdown in flutter
|
After a long search for how to create a multi-select dropdown in Flutter, I found two solutions.
The first one uses a custom class:
Is there an equivalent widget in flutter to the "select multiple" element in HTML
The second one uses the package:
multi_select_flutter
But what I want is how to make a grouped dropdown in either of these two ways, because giving a title to each option group is very important in my case, like this:
|
[
"In the items, list set the type to data to add checkbox or to sep to add a title. The output from the dialog will be a dictionary in the form of {2,3} where value 2 = Cordoba.\nFull Code\nimport 'package:flutter/material.dart';\n\nvoid main() => runApp(const MyApp());\n\nclass MyApp extends StatelessWidget {\n const MyApp({super.key});\n\n @override\n Widget build(BuildContext context) {\n return const MaterialApp(\n debugShowCheckedModeBanner: false,\n home: HomeScreen(),\n );\n }\n}\n\nclass MultiSelectDialogItem<V> {\n V value;\n\n String name;\n String type;\n\n MultiSelectDialogItem(\n {required this.name, required this.type, required this.value});\n}\n\nclass MultiSelectDialog<V> extends StatefulWidget {\n const MultiSelectDialog({\n Key? key,\n required this.items,\n required this.initialSelectedValues,\n }) : super(key: key);\n\n final List<MultiSelectDialogItem<V>> items;\n final Set<V> initialSelectedValues;\n\n @override\n State<StatefulWidget> createState() => _MultiSelectDialogState<V>();\n}\n\nclass _MultiSelectDialogState<V> extends State<MultiSelectDialog<V>> {\n final _selectedValues = <V>{};\n\n @override\n void initState() {\n super.initState();\n _selectedValues.addAll(widget.initialSelectedValues);\n }\n\n void _onItemCheckedChange(V itemValue, bool checked) {\n setState(() {\n if (checked) {\n _selectedValues.add(itemValue);\n } else {\n _selectedValues.remove(itemValue);\n }\n });\n }\n\n void _onCancelTap() {\n Navigator.pop(context);\n }\n\n void _onSubmitTap() {\n Navigator.pop(context, _selectedValues);\n }\n\n @override\n Widget build(BuildContext context) {\n return AlertDialog(\n title: const Text('Select place'),\n contentPadding: const EdgeInsets.all(20.0),\n content: SingleChildScrollView(\n child: ListTileTheme(\n contentPadding: const EdgeInsets.fromLTRB(14.0, 0.0, 24.0, 0.0),\n child: ListBody(\n children: widget.items.map(_buildItem).toList(),\n ),\n ),\n ),\n actions: <Widget>[\n ElevatedButton(\n onPressed: _onCancelTap,\n child: const Text('CANCEL'),\n ),\n ElevatedButton(\n onPressed: _onSubmitTap,\n child: const Text('OK'),\n )\n ],\n );\n }\n\n Widget _buildItem(MultiSelectDialogItem<V> item) {\n final checked = _selectedValues.contains(item.value);\n return item.type == \"data\"\n ? 
CheckboxListTile(\n value: checked,\n title: Text(item.name),\n controlAffinity: ListTileControlAffinity.leading,\n onChanged: (checked) => _onItemCheckedChange(item.value, checked!),\n )\n : Container(\n child: Padding(\n padding: const EdgeInsets.all(10.0),\n child: Text(\n item.name,\n style: TextStyle(color: Color.fromARGB(255, 91, 91, 91)),\n ),\n ),\n );\n }\n}\n\nclass HomeScreen extends StatefulWidget {\n const HomeScreen({super.key});\n\n @override\n HomeScreenState createState() => HomeScreenState();\n}\n\nclass HomeScreenState extends State<HomeScreen> {\n void _showMultiSelect(BuildContext context) async {\n final items = <MultiSelectDialogItem<int>>[\n MultiSelectDialogItem(name: 'Argentina', type: 'sep', value: 1),\n MultiSelectDialogItem(name: 'Cordoba', type: 'data', value: 2),\n MultiSelectDialogItem(name: 'Chaco', type: 'data', value: 3),\n MultiSelectDialogItem(name: 'Buenos Aires', type: 'data', value: 4),\n MultiSelectDialogItem(name: 'USA', type: 'sep', value: 5),\n MultiSelectDialogItem(name: 'California', type: 'data', value: 6),\n MultiSelectDialogItem(name: 'Florida', type: 'data', value: 7),\n ];\n\n final selectedValues = await showDialog<Set>(\n context: context,\n builder: (BuildContext context) {\n return MultiSelectDialog(\n items: items,\n initialSelectedValues: [].toSet(),\n );\n },\n );\n\n print(selectedValues);\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n body: Center(\n child: ElevatedButton(\n child: Text(\"show dialog\"),\n onPressed: () {\n _showMultiSelect(context);\n },\n ),\n ),\n );\n }\n}\n\nOutput\n\nHope this helps. Happy Coding :)\n"
] |
[
0
] |
[] |
[] |
[
"dropdown",
"flutter"
] |
stackoverflow_0074668945_dropdown_flutter.txt
|
Q:
Change model representation in Flask-Admin without modifying model
I have a model with a __repr__ method, which is used for display in Flask-Admin. I want to display a different value, but don't want to change the model. I found this answer, but that still requires modifying the model. How can I specify a separate representation for Flask-Admin?
class MyModel(db.Model):
data = db.Column(db.Integer)
def __repr__(self):
return '<MyModel: data=%s>' % self.data
Update
File: models.py
class Parent(db.Model):
__tablename__ = "parent"
id = db.Column(db.Integer, primary_key=True)
p_name = db.Column(db.Text)
children = db.relationship('Child', backref='child', lazy='dynamic')
def __repr__(self):
return '<Parent: name=%s' % self.p_name
class Child(db.Model):
__tablename__ = "child"
id = db.Column(db.Integer, primary_key=True)
c_name = db.Column(db.Text)
parent_id = db.Column(db.Integer, db.ForeignKey('parent.id'))
File: admin.py
from flask.ext.admin import Admin
from flask.ext.admin.contrib.sqla import ModelView
from app import app, db
from models import Parent, Child
admin = Admin(app, 'My App')
admin.add_view(ModelView(Parent, db.session))
admin.add_view(ModelView(Child, db.session))
When I try to create or edit a "child" through the admin panel, I see the representation from the "Parent" class. I suppose it is because of the relationship, and I don't know how to redefine the representation for the admin panel only.
A:
The following answers have helped me to solve my issue:
How to tell flask-admin to use alternative representation when displaying Foreign Key Fields?
Flask-admin, editing relationship giving me object representation of Foreign Key object
Flask-Admin Many-to-Many field display
The cause was that I tried to replace __repr__ with __unicode__ instead of just adding a __unicode__ method.
But if anybody knows solution without modifying models, let me know and I'll add it here.
A:
You could subclass the model:
class MyNewModel(MyModel):
def __repr__(self):
return '<MyModel: DATA IS %d!>' % self.data
and then use MyNewModel instead of MyModel.
A:
I have the same problem and I've found this solution:
class Child(Parent):
    def __repr__(self):
        return '<Child: name=%s>' % self.p_name

setattr(Parent, '__repr__', Child.__repr__)
It overrides Parent.__repr__, but now you do not have to change the SQLAlchemy model.
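For completeness, one approach that avoids touching the models at all is to configure the admin view itself. This is only a sketch: it assumes Flask-Admin's SQLAlchemy ModelView scaffolds the relation on the Child form as a QuerySelectField (whose constructor accepts a get_label argument), and the field name 'child' comes from the backref in the models above, so adjust it if your scaffolded form names the relation differently.
from flask.ext.admin.contrib.sqla import ModelView

class ChildView(ModelView):
    # Label each Parent option in the dropdown by its p_name column
    # instead of falling back to Parent.__repr__ / __unicode__.
    form_args = {
        'child': {
            'get_label': 'p_name',
        }
    }

admin.add_view(ChildView(Child, db.session))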
|
Change model representation in Flask-Admin without modifying model
|
I have a model with a __repr__ method, which is used for display in Flask-Admin. I want to display a different value, but don't want to change the model. I found this answer, but that still requires modifying the model. How can I specify a separate representation for Flask-Admin?
class MyModel(db.Model):
data = db.Column(db.Integer)
def __repr__(self):
return '<MyModel: data=%s>' % self.data
Update
File: models.py
class Parent(db.Model):
__tablename__ = "parent"
id = db.Column(db.Integer, primary_key=True)
p_name = db.Column(db.Text)
children = db.relationship('Child', backref='child', lazy='dynamic')
def __repr__(self):
return '<Parent: name=%s' % self.p_name
class Child(db.Model):
__tablename__ = "child"
id = db.Column(db.Integer, primary_key=True)
c_name = db.Column(db.Text)
parent_id = db.Column(db.Integer, db.ForeignKey('parent.id'))
File: admin.py
from flask.ext.admin import Admin
from flask.ext.admin.contrib.sqla import ModelView
from app import app, db
from models import Parent, Child
admin = Admin(app, 'My App')
admin.add_view(ModelView(Parent, db.session))
admin.add_view(ModelView(Child, db.session))
When I try to create or edit a "child" through the admin panel, I see the representation from the "Parent" class. I suppose it is because of the relationship, and I don't know how to redefine the representation for the admin panel only.
|
[
"The following answers have helped me to solve my issue:\n\nHow to tell flask-admin to use alternative representation when displaying Foreign Key Fields?\nFlask-admin, editing relationship giving me object representation of Foreign Key object\nFlask-Admin Many-to-Many field display\n\nThe cause was in that I tried to replace __repr__ with __unicode__ instead just add __unicode__ method.\nBut if anybody knows solution without modifying models, let me know and I'll add it here.\n",
"You could subclass the model:\nclass MyNewModel(MyModel):\n def __repr__(self):\n return '<MyModel: DATA IS %d!>' % self.data\n\nand then use MyNewModel instead of MyModel.\n",
"I have the same problem and I've found this solve:\nclass Child(Parent):\ndef __repr__(self):\n return '<Child: name=%s' % self.p_name\n\nsetattr(Parent, '__repr__', Child.__repr__)\n\nIt overloads Parent.__repr__, but now you can not to change SQLA model.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"flask",
"flask_admin",
"python"
] |
stackoverflow_0037031399_flask_flask_admin_python.txt
|
Q:
Display the name of the pdf file from a base64 string
I have a function below that sets an array of pdf files using the base64 string of the selected files. I want to display the name of the file the user selected in a list as a string (For example, if the user selected a file named john.pdf, I want that file name as a string displayed). Right now obviously it only displays very long base64 strings. How can I display "john.pdf"?
const handleFile = (e) => {
let selectedFile = e.target.files[0];
if (selectedFile) {
if (selectedFile && allowedFiles.includes(selectedFile.type)) {
let reader = new FileReader();
reader.readAsDataURL(selectedFile);
reader.onloadend = (e) => {
const newPdfFiles = [...currentPdfFiles];
newPdfFiles.push(e.target.result);
console.log(`current pdfs: ${currentPdfFiles}`);
setCurrentPdfFiles(newPdfFiles);
console.log(currentPdfFiles);
};
} else {
//setPdfError("Not a valid pdf");
}
} else {
console.log("please select file");
}
};
<div className="card mt-4">
<ul className="list-group list-group-flush">
{currentPdfFiles.length > 1 &&
currentPdfFiles.map((pdfFile) => {
return <li className="list-group-item">{pdfFile}</li>;
})}
</ul>
</div>
</div>
A:
const handleFile = (e) => {
let selectedFile = e.target.files[0];
if (selectedFile) {
if (selectedFile && allowedFiles.includes(selectedFile.type)) {
let reader = new FileReader();
reader.readAsDataURL(selectedFile);
reader.onloadend = (e) => {
const newPdfFiles = [...currentPdfFiles];
newPdfFiles.push({ name: selectedFile.name, data: e.target.result }); // keep the file name alongside its base64 data
console.log(`current pdfs: ${currentPdfFiles}`);
setCurrentPdfFiles(newPdfFiles);
console.log(currentPdfFiles);
};
} else {
//setPdfError("Not a valid pdf");
}
} else {
console.log("please select file");
}
};
// ...
<div className="card mt-4">
<ul className="list-group list-group-flush">
{currentPdfFiles.length > 1 &&
currentPdfFiles.map((pdfFile) => {
return <li className="list-group-item">{pdfFile.name}</li>;
})}
</ul>
</div>
Each entry pushed into currentPdfFiles now stores both the file's name (taken from the File object's name property) and its base64 data, so the list can render pdfFile.name while the encoded data stays available for later use.
|
Display the name of the pdf file from a base64 string
|
I have a function below that sets an array of pdf files using the base64 string of the selected files. I want to display the name of the file the user selected in a list as a string (For example, if the user selected a file named john.pdf, I want that file name as a string displayed). Right now obviously it only displays very long base64 strings. How can I display "john.pdf"?
const handleFile = (e) => {
let selectedFile = e.target.files[0];
if (selectedFile) {
if (selectedFile && allowedFiles.includes(selectedFile.type)) {
let reader = new FileReader();
reader.readAsDataURL(selectedFile);
reader.onloadend = (e) => {
const newPdfFiles = [...currentPdfFiles];
newPdfFiles.push(e.target.result);
console.log(`current pdfs: ${currentPdfFiles}`);
setCurrentPdfFiles(newPdfFiles);
console.log(currentPdfFiles);
};
} else {
//setPdfError("Not a valid pdf");
}
} else {
console.log("please select file");
}
};
<div className="card mt-4">
<ul className="list-group list-group-flush">
{currentPdfFiles.length > 1 &&
currentPdfFiles.map((pdfFile) => {
return <li className="list-group-item">{pdfFile}</li>;
})}
</ul>
</div>
</div>
|
[
"const handleFile = (e) => {\n let selectedFile = e.target.files[0];\n if (selectedFile) {\n if (selectedFile && allowedFiles.includes(selectedFile.type)) {\n let reader = new FileReader();\n reader.readAsDataURL(selectedFile);\n reader.onloadend = (e) => {\n const newPdfFiles = [...currentPdfFiles];\n newPdfFiles.push(e.target.result);\n console.log(`current pdfs: ${currentPdfFiles}`);\n setCurrentPdfFiles(newPdfFiles);\n console.log(currentPdfFiles);\n };\n } else {\n //setPdfError(\"Not a valid pdf\");\n }\n } else {\n console.log(\"please select file\");\n }\n};\n\n// ...\n\n<div className=\"card mt-4\">\n <ul className=\"list-group list-group-flush\">\n {currentPdfFiles.length > 1 &&\n currentPdfFiles.map((pdfFile, index) => {\n return <li className=\"list-group-item\">{selectedFile[index].name}</li>;\n })}\n </ul>\n</div>\n\nI added an index to the map function, so that we can access the file name from the selectedFile array by its index. I used the name property of the File object to display the file name in the list.\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"reactjs"
] |
stackoverflow_0074669663_javascript_reactjs.txt
|
Q:
Best way to handle cancel button a form in Vue.js
I have 4 inputs in my panel; when the user clicks cancel, I need to revert to the original values from before they were modified. Is there a Vuetify or Vue.js way to achieve this, or do I have to manage it using JS by storing all the values in a tmp variable?
A:
You could keep a copy of the initial object and restore that copy when the reset button is clicked.
That copy could be created in the created/mounted hook.
Example here - codesandbox.io/s/eloquent-mclean-7hrvsr?file=/src/App.vue
A:
I think it's preferable to have another variable, for example tempVehicle, to handle the form.
You update tempVehicle when opening the form
and clear tempVehicle when closing the form.
The data section would look like this:
// assuming you're using vue 2 because of vuetify
data: () => ({
tempVehicle: {
Model: '',
ModelYear: '',
VIN: '',
Make: '',
}
})
and this would be your methods to set item and cancel function
methods: {
// your method to set the form values
setFormValue(vehicle) {
this.tempVehicle = vehicle
},
// method to cancel the operation
cancel() {
this.tempVehicle = {
Model: '',
ModelYear: '',
VIN: '',
Make: '',
}
},
},
And finally, when the user clicks update, you can set the target item in your list from the updated tempVehicle variable.
A:
I find this approach very elegant since I don't have to store my data in another tmp variable;
I can just load the values from localStorage again.
this.savedVehicles = JSON.parse(localStorage.getItem("savedVehicles"));
It's working for me perfectly.
|
Best way to handle cancel button a form in Vue.js
|
I have 4 inputs in my panel; when the user clicks cancel, I need to revert to the original values from before they were modified. Is there a Vuetify or Vue.js way to achieve this, or do I have to manage it using JS by storing all the values in a tmp variable?
|
[
"You could use copy of initial object and set that copy by click reset button.\nThat copy object could be created in created/mounted hook.\nExample here - codesandbox.io/s/eloquent-mclean-7hrvsr?file=/src/App.vue\n",
"i think it's preferred if you have another variable, for example tempVehicle to handle the form.\nyou update the variable tempVehicle when opening the form\nand clear the variable tempVehicle when closing the form\nthe data section would look like this\n// assuming you're using vue 2 because of vuetify\ndata: () => ({\n tempVehicle: {\n Model: '',\n ModelYear: '',\n VIN: '',\n Make: '',\n }\n})\n\nand this would be your methods to set item and cancel function\nmethods: {\n // your method to set the form values\n setFormValue(vehicle) {\n this.tempVehicle = vehicle\n },\n // method to cancel the operation\n cancel() {\n this.tempVehicle = {\n Model: '',\n ModelYear: '',\n VIN: '',\n Make: '',\n }\n },\n},\n\nand finally when the user click the update, you can set the target item from your list with the updated variable tempVehicle\n",
"I find this answer very elegant since I don't have to store my data into another tmp variable\nI can just load values from my localStorage again.\n\nthis.savedVehicles = JSON.parse(localStorage.getItem(\"savedVehicles\"));\n\nIt's working for me perfectly.\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"javascript",
"vue.js",
"vuejs2",
"vuetify.js"
] |
stackoverflow_0074669137_javascript_vue.js_vuejs2_vuetify.js.txt
|
Q:
Python Quandl giving me error
So I have a bit of code in python which tries to get home prices from zillow. I am following the documentation exactly but I still get errors. The code:
import quandl
quandl.ApiConfig.api_key = "I have a key here in the code"
data = quandl.get("http://www.quandl.com/api/v3/datasets/ZILL/S00022_A.csv", returns="numpy")
This, however, returns:
raise ValueError(Message.ERROR_COLUMN_INDEX_TYPE % dataset)
ValueError: The column index must be expressed as an integer for http://www.quandl.com/api/v3/datasets/ZILL/S00022_A.csv.
What does this mean and how do I fix it? Thanks in advance.
A:
quandl.get() expects a Quandl dataset code, not a URL (and not a CSV file). So pass a dataset code instead, for example:
quandl.get('WIKI/GOOGL')
Here, the WIKI/GOOGL dataset (Google stock prices) is requested.
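For the Zillow series from the question, the equivalent call would use the dataset code embedded in that URL (ZILL/S00022_A); this is only a sketch and assumes that dataset code is still available on Quandl:
import quandl

quandl.ApiConfig.api_key = "YOUR_API_KEY"  # placeholder, not a real key
# Pass the dataset code (database/dataset), not the full CSV URL
data = quandl.get("ZILL/S00022_A", returns="numpy")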
|
Python Quandl giving me error
|
So I have a bit of code in python which tries to get home prices from zillow. I am following the documentation exactly but I still get errors. The code:
import quandl
quandl.ApiConfig.api_key = "I have a key here in the code"
data = quandl.get("http://www.quandl.com/api/v3/datasets/ZILL/S00022_A.csv", returns="numpy")
This, however, returns:
raise ValueError(Message.ERROR_COLUMN_INDEX_TYPE % dataset)
ValueError: The column index must be expressed as an integer for http://www.quandl.com/api/v3/datasets/ZILL/S00022_A.csv.
What does this mean and how do I fix it? Thanks in advance.
|
[
"The code quandl.get() goes with the installed csv file and not an URL. So please import a dataset code and try to import it in your code by\nquandl.get('WIKI/GOOGL')\n\nHere, I have imported a dataset for stock prediction of Google\n"
] |
[
0
] |
[] |
[] |
[
"database",
"python",
"python_3.x",
"quandl",
"zillow"
] |
stackoverflow_0046900561_database_python_python_3.x_quandl_zillow.txt
|
Q:
How to show category names from a mysql database table in the dropdown list of django form
I am working on a article management platform webapp using django. I have created a registration form using the django form where I want to show category names from the category table.
This is the code to create the category table, where I have two columns. One is cid, which is the ID, and the other one is category_name. Here the category name will be, for example: Technology, Software engineering, Medicine, etc.
blog.models.py
from django.db import models
# Create your models here.
class Category(models.Model):
cid = models.AutoField(primary_key=True, blank=True)
category_name = models.CharField(max_length=100)
def __str__(self):
return self.category_name
The cid is a foreign key in the users table because each user must select a category name from the specialization field to register an account in this app. As I am using the built-in user model, I have added cid as a foreign key in the user table as given below.
users/model.py
from django.db import models
from blog.models import Category
from django.contrib.auth.models import AbstractUser
# Create your models here.
class CustomUser(AbstractUser):
cid = models.ForeignKey(Category, on_delete=models.CASCADE)
In the forms.py file I have added the email and specialization fields to display them in the registration form as below. However, I am not sure if the category code part is okay or not. Could you please look into it?
users/forms.py
from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
from blog.models import Category
class UserRegisterForm(UserCreationForm):
email = forms.EmailField()
category = Category()
cid = forms.CharField(label='Specialization', widget=forms.Select(choices=category.category_name))
class Meta:
model = User
fields = ['username', 'email', 'password1', 'password2', 'cid']
This is register.html file:
register.html file
{% extends "users/base.html" %}
{% load crispy_forms_tags %}
{% block content %}
<div class="content-section">
<form method="POST">
{% csrf_token %}
<fieldset class="form-group">
{{ form| crispy }}
</fieldset>
<div class="form-group">
<button class="btn btn-outline-info" type="submit">Sign Up</button>
</div>
</form>
<div class="border-top pt-3">
<small class="text-muted">
Already Have An Account? <a class="ml-2" href="{% url 'login' %}">Sign In</a>
</small>
</div>
</div>
{% endblock content %}
I want to show the category names here in the specialization dropdown list, which will come from the category table, but those category names are not showing in the dropdown list.
Registration page UI
I do not understand how to solve this problem. Could anyone help me out? What would the code look like to solve this?
I tried to add the category names to the specialization dropdown list but I failed. I would appreciate any help with this problem.
A:
First of all, category needs to be a field, not a class. Use ModelChoiceField for this.
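A minimal sketch of what that could look like, based on the models shown in the question (the CustomUser import path is an assumption; point Meta.model at whatever user model the project actually uses):
from django import forms
from django.contrib.auth.forms import UserCreationForm
from blog.models import Category
from users.models import CustomUser  # assumed import path for the custom user model

class UserRegisterForm(UserCreationForm):
    email = forms.EmailField()
    # ModelChoiceField builds the <select> options from the Category table;
    # each option is labelled by Category.__str__, i.e. category_name.
    cid = forms.ModelChoiceField(
        queryset=Category.objects.all(),
        label='Specialization',
    )

    class Meta:
        model = CustomUser
        fields = ['username', 'email', 'password1', 'password2', 'cid']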
|
How to show category names from a mysql database table in the dropdown list of django form
|
I am working on a article management platform webapp using django. I have created a registration form using the django form where I want to show category names from the category table.
This is the code to create the category table, where I have two columns. One is cid, which is the ID, and the other one is category_name. Here the category name will be, for example: Technology, Software engineering, Medicine, etc.
blog.models.py
from django.db import models
# Create your models here.
class Category(models.Model):
cid = models.AutoField(primary_key=True, blank=True)
category_name = models.CharField(max_length=100)
def __str__(self):
return self.category_name
The cid is a foreign key in the users table because each user must select a category name from the specialization field to register an account in this app. As I am using the built-in user model, I have added cid as a foreign key in the user table as given below.
users/model.py
from django.db import models
from blog.models import Category
from django.contrib.auth.models import AbstractUser
# Create your models here.
class CustomUser(AbstractUser):
cid = models.ForeignKey(Category, on_delete=models.CASCADE)
In the forms.py file I have added the email and specialization fields to display them in the registration form as below. However, I am not sure if the category code part is okay or not. Could you please look into it?
users/forms.py
from django import forms
from django.contrib.auth.models import User
from django.contrib.auth.forms import UserCreationForm
from blog.models import Category
class UserRegisterForm(UserCreationForm):
email = forms.EmailField()
category = Category()
cid = forms.CharField(label='Specialization', widget=forms.Select(choices=category.category_name))
class Meta:
model = User
fields = ['username', 'email', 'password1', 'password2', 'cid']
This is register.html file:
register.html file
{% extends "users/base.html" %}
{% load crispy_forms_tags %}
{% block content %}
<div class="content-section">
<form method="POST">
{% csrf_token %}
<fieldset class="form-group">
{{ form| crispy }}
</fieldset>
<div class="form-group">
<button class="btn btn-outline-info" type="submit">Sign Up</button>
</div>
</form>
<div class="border-top pt-3">
<small class="text-muted">
Already Have An Account? <a class="ml-2" href="{% url 'login' %}">Sign In</a>
</small>
</div>
</div>
{% endblock content %}
I want to show the category names here in the specialization dropdown list, which will come from the category table, but those category names are not showing in the dropdown list.
Registration page UI
I do not understand how to solve this problem. Could anyone help me out? What would the code look like to solve this?
I tried to add the category names to the specialization dropdown list but I failed. I would appreciate any help with this problem.
|
[
"First of all, category needs to be a field, not a class. Use ModelChoiceField for this.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_forms",
"django_models",
"mysql",
"python"
] |
stackoverflow_0074669516_django_django_forms_django_models_mysql_python.txt
|
Q:
Append and Delete of String of hackerrank
/* why the two test cases are not passed by this code*/
/*the link of the problem is https://www.hackerrank.com/challenges/append-and-delete/problem */
static String appendAndDelete(String s, String t, int k) {
if (s.length() + t.length() < k)
return "Yes";
int commonlength = 0;
for (int i = 0; i < Math.min(s.length(), t.length()); i++) {
if (t.charAt(i) == s.charAt(i))
commonlength++;
else
break;
}
if ((k - s.length() - t.length() + 2 * commonlength) % 2 == 0) {
return "Yes";
}
return "No";
}
A:
You need to add one more condition to your code, because the condition below is not enough:
(k - s.length() - t.length() + 2 * commonlength) % 2 == 0
Try this:
int balance = k - s.length() - t.length() + 2 * commonlength;
if (balance >= 0 && (balance) % 2 == 0) {
return "Yes";
}
You need one more additional condition, balance >= 0, as mentioned above: if balance is negative, k is smaller than the minimum number of delete-and-append operations required, so the answer must be "No" regardless of parity.
Here is a working solution that passes all test cases, added comments in code for clear understanding:
static String appendAndDelete(String s, String t, int k) {
// Check if k is greater or equal to both the lengths
if (s.length() + t.length() <= k)
return "Yes";
int commonlength = 0;
// Get the common matching character length
for (int i = 0; i < Math.min(s.length(), t.length()); i++) {
if (t.charAt(i) == s.charAt(i))
commonlength++;
else {
break;
}
}
// count how many modifications still needed
int balance = s.length() - commonlength;
balance += t.length() - commonlength;
// Check if k is greater than balance count
if (balance <= k) {
// Special case, we need to perform exactly k operations
// so if balance is odd then k should be odd, if balance is even
// then k must be even.
if ((balance - k) % 2 == 0) {
return "Yes";
}
}
return "No";
}
A:
That's pretty straightforward. Here is a solution that passes all of the mentioned test cases:
static String appendAndDelete(String s, String t, int k) {
if (s.equals(t))
return (k >= s.length() * 2 || k % 2 == 0) ? "Yes" : "No";
int commonlength = 0;
for (int i = 0; i < Math.min(s.length(), t.length()); i++) {
if (t.charAt(i) != s.charAt(i))
break;
commonlength++;
}
int cs = s.length() - commonlength;
int ct = t.length() - commonlength;
int tot = cs + ct;
return ((tot == k) || (tot < k && (tot - k) % 2 == 0) || (tot + (2 * commonlength) <= k)) ? "Yes" : "No";
}
A:
Here is my solution for the problem where all the test cases are passed.
public static string appendAndDelete(string s, string t, int k)
{
char[] sArray = s.ToCharArray();
char[] tArray = t.ToCharArray();
var commonLength = 0;
var result = "";
if (s.Length < t.Length)
{
var firstChar = s[0];
var count = t.Count(x => x == firstChar);
if (count==t.Length)
{
result = "Yes";
}
else
result = "No";
}
else if (string.Compare(s, t) == 0)
{
result="Yes";
}
else
{
for (int i = 0; i < tArray.Length; i++)
{
if (sArray[i] == tArray[i])
{
commonLength++;
}
else { break; }
}
var totalSubRequired = (s.Length - commonLength) + (t.Length - commonLength);
if (k >= totalSubRequired)
{
if (string.Compare(s, t) == 0)
{
result = "Yes";
}
else
{
var commonString = s.Substring(0, commonLength);
var attachString = t.Substring(commonLength, t.Length - commonLength);
var combineString = string.Concat(commonString, attachString);
if (string.Compare(t, combineString) == 0)
{
result = "Yes";
}
else { result = "No"; ; }
}
}
else { result = "No"; ; }
}
return result;
}
A:
A bit long but passes all the test cases. Runtime complexity O(N)
public static String appendAndDelete(String s, String t, int k) {
// Write your code here
int sl = s.length();
int tl = t.length();
int min_1 = 0, min_2 = 0, min_3 = 0;
int counter = 0;
String res = "No";
if (sl == tl) {
for (int i = sl - 1; i >= 0; i--) {
counter++;
if (s.charAt(i) != t.charAt(i)) {
min_1 = counter * 2;
}
}
if (min_1 == 0) {
min_1 = 2;
}
min_2 = sl * 2;
min_3 = (sl * 2) + 1;
if (min_1 % 2 == 0) {
if ((k >= min_1 && k <= min_2) && (k % 2 == 0)) {
res = "Yes";
}else if (k >= min_3) {
res = "Yes";
}
}
}else if (sl > tl) {
min_1 = sl - tl;
int dif_1 = 0;
for (int i = (tl - 1); i >= 0; i--) {
counter++;
if (s.charAt(i) != t.charAt(i)) {
dif_1 = counter * 2;
}
}
min_1 += dif_1;
min_2 = ((sl - (sl - tl)) * 2) + (sl - tl);
min_3 = ((sl - (sl - tl)) * 2 + 1) + (sl - tl);
if (min_1 % 2 == 0) {
if ((k >= min_1 && k <= min_2) && (k % 2) == 0) {
res = "Yes";
}else if (k >= min_3) {
res = "Yes";
}
}else{
if((k >= min_1 && k <= min_2) && (k % 2) == 1) {
res = "Yes";
}else if (k >= min_3) {
res = "Yes";
}
}
}else if (tl > sl) {
min_1 = tl - sl;
int dif_1 = 0;
for (int i = (sl - 1); i >= 0; i--) {
counter++;
if (s.charAt(i) != t.charAt(i)) {
dif_1 = counter * 2;
}
}
min_1 += dif_1;
min_2 = ((tl - (tl - sl)) * 2) + (tl - sl);
min_3 = ((tl - (tl - sl)) * 2 + 1) + (tl - sl);
if (min_1 % 2 == 0) {
if ((k >= min_1 && k <= min_2) && (k % 2) == 0) {
res = "Yes";
}else if (k >= min_3) {
res = "Yes";
}
}else if ((k >= min_1 && k <= min_2) && (k % 2) == 1) {
res = "Yes";
}else if (k >= min_3) {
res = "Yes";
}
}
return res;
}
A:
Solution of the problem in Go:
package main
import (
"bufio"
"fmt"
"io"
"math"
"os"
"strconv"
"strings"
)
func appendAndDelete(s string, t string, k int32) string {
// Write your code here
yes := "Yes"
no := "No"
if len(s)+len(t) <= int(k) {
return yes
}
min := math.Min(float64(len(s)), float64(len(t)))
length := 0
for i := 0; i < int(min); i++ {
if s[i] == t[i] {
length++
} else {
break
}
}
total := (len(s) - length) + (len(t) - length)
if total <= int(k) && (total-int(k))%2 == 0 {
return yes
}
return no
}
func main() {
reader := bufio.NewReaderSize(os.Stdin, 16*1024*1024)
stdout, err := os.Create(os.Getenv("OUTPUT_PATH"))
checkError(err)
defer stdout.Close()
writer := bufio.NewWriterSize(stdout, 16*1024*1024)
s := readLine(reader)
t := readLine(reader)
kTemp, err := strconv.ParseInt(strings.TrimSpace(readLine(reader)), 10, 64)
checkError(err)
k := int32(kTemp)
result := appendAndDelete(s, t, k)
fmt.Fprintf(writer, "%s\n", result)
writer.Flush()
}
func readLine(reader *bufio.Reader) string {
str, _, err := reader.ReadLine()
if err == io.EOF {
return ""
}
return strings.TrimRight(string(str), "\r\n")
}
func checkError(err error) {
if err != nil {
panic(err)
}
}
|
Append and Delete of String of hackerrank
|
/* why the two test cases are not passed by this code*/
/*the link of the problem is https://www.hackerrank.com/challenges/append-and-delete/problem */
static String appendAndDelete(String s, String t, int k) {
if (s.length() + t.length() < k)
return "Yes";
int commonlength = 0;
for (int i = 0; i < Math.min(s.length(), t.length()); i++) {
if (t.charAt(i) == s.charAt(i))
commonlength++;
else
break;
}
if ((k - s.length() - t.length() + 2 * commonlength) % 2 == 0) {
return "Yes";
}
return "No";
}
|
[
"You need to add one more condition in your code, because below condition is not enough:\n(k - s.length() - t.length() + 2 * commonlength) % 2 == 0 \nTry this:\nint balance = k - s.length() - t.length() + 2 * commonlength;\n\nif (balance >= 0 && (balance) % 2 == 0) {\n return \"Yes\";\n}\n\nyou need to have one more addition condition : balance >= 0 as mentioned above.\nHere is a working solution that passes all test cases, added comments in code for clear understanding:\nstatic String appendAndDelete(String s, String t, int k) {\n // Check if k is greater or equal to both the lengths\n if (s.length() + t.length() <= k)\n return \"Yes\";\n\n int commonlength = 0;\n // Get the common matching character length\n for (int i = 0; i < Math.min(s.length(), t.length()); i++) {\n if (t.charAt(i) == s.charAt(i))\n commonlength++;\n else {\n break;\n }\n }\n\n // count how many modifications still needed\n int balance = s.length() - commonlength;\n balance += t.length() - commonlength;\n\n // Check if k is greater than balance count\n if (balance <= k) {\n // Special case, we need to perform exactly k operations\n // so if balance is odd then k should be odd, if balance is even\n // then k must be even.\n if ((balance - k) % 2 == 0) {\n return \"Yes\";\n }\n }\n return \"No\";\n}\n\n",
"That's pretty straight forward. Here is the solution that pass all of the mentioned test case:\nstatic String appendAndDelete(String s, String t, int k) {\n\n if (s.equals(t))\n return (k >= s.length() * 2 || k % 2 == 0) ? \"Yes\" : \"No\";\n\n int commonlength = 0;\n\n for (int i = 0; i < Math.min(s.length(), t.length()); i++) {\n if (t.charAt(i) != s.charAt(i))\n break;\n commonlength++;\n }\n\n int cs = s.length() - commonlength;\n int ct = t.length() - commonlength;\n int tot = cs + ct;\n\n return ((tot == k) || (tot < k && (tot - k) % 2 == 0) || (tot + (2 * commonlength) <= k)) ? \"Yes\" : \"No\";\n\n}\n\n",
"Here is my solution for the problem where all the test cases are passed.\npublic static string appendAndDelete(string s, string t, int k)\n{\n char[] sArray = s.ToCharArray();\n char[] tArray = t.ToCharArray();\n var commonLength = 0;\n var result = \"\";\n if (s.Length < t.Length)\n {\n var firstChar = s[0];\n var count = t.Count(x => x == firstChar);\n if (count==t.Length)\n {\n result = \"Yes\";\n }\n else\n result = \"No\";\n }\n else if (string.Compare(s, t) == 0)\n {\n \n result=\"Yes\";\n \n }\n else\n {\n for (int i = 0; i < tArray.Length; i++)\n {\n if (sArray[i] == tArray[i])\n {\n commonLength++;\n }\n else { break; }\n }\n var totalSubRequired = (s.Length - commonLength) + (t.Length - commonLength);\n if (k >= totalSubRequired)\n {\n if (string.Compare(s, t) == 0)\n {\n result = \"Yes\";\n }\n else\n {\n var commonString = s.Substring(0, commonLength);\n var attachString = t.Substring(commonLength, t.Length - commonLength);\n var combineString = string.Concat(commonString, attachString);\n if (string.Compare(t, combineString) == 0)\n {\n result = \"Yes\";\n }\n else { result = \"No\"; ; }\n }\n }\n else { result = \"No\"; ; }\n }\n return result;\n}\n\n",
"A bit long but passes all the test cases. Runtime complexity O(N)\n\n\npublic static String appendAndDelete(String s, String t, int k) {\n // Write your code here\n int sl = s.length();\n int tl = t.length();\n int min_1 = 0, min_2 = 0, min_3 = 0;\n int counter = 0;\n String res = \"No\";\n if (sl == tl) {\n for (int i = sl - 1; i >= 0; i--) {\n counter++;\n if (s.charAt(i) != t.charAt(i)) {\n min_1 = counter * 2;\n }\n }\n if (min_1 == 0) {\n min_1 = 2;\n }\n min_2 = sl * 2;\n min_3 = (sl * 2) + 1;\n if (min_1 % 2 == 0) {\n if ((k >= min_1 && k <= min_2) && (k % 2 == 0)) {\n res = \"Yes\";\n }else if (k >= min_3) {\n res = \"Yes\";\n }\n }\n }else if (sl > tl) {\n min_1 = sl - tl;\n int dif_1 = 0;\n for (int i = (tl - 1); i >= 0; i--) {\n counter++;\n if (s.charAt(i) != t.charAt(i)) {\n dif_1 = counter * 2;\n }\n }\n min_1 += dif_1;\n min_2 = ((sl - (sl - tl)) * 2) + (sl - tl);\n min_3 = ((sl - (sl - tl)) * 2 + 1) + (sl - tl);\n if (min_1 % 2 == 0) {\n if ((k >= min_1 && k <= min_2) && (k % 2) == 0) {\n res = \"Yes\";\n }else if (k >= min_3) {\n res = \"Yes\";\n }\n }else{\n if((k >= min_1 && k <= min_2) && (k % 2) == 1) {\n res = \"Yes\";\n }else if (k >= min_3) {\n res = \"Yes\";\n }\n }\n }else if (tl > sl) {\n min_1 = tl - sl;\n int dif_1 = 0;\n for (int i = (sl - 1); i >= 0; i--) {\n counter++;\n if (s.charAt(i) != t.charAt(i)) {\n dif_1 = counter * 2;\n }\n }\n min_1 += dif_1;\n min_2 = ((tl - (tl - sl)) * 2) + (tl - sl);\n min_3 = ((tl - (tl - sl)) * 2 + 1) + (tl - sl);\n if (min_1 % 2 == 0) {\n if ((k >= min_1 && k <= min_2) && (k % 2) == 0) {\n res = \"Yes\";\n }else if (k >= min_3) {\n res = \"Yes\";\n }\n }else if ((k >= min_1 && k <= min_2) && (k % 2) == 1) {\n res = \"Yes\";\n }else if (k >= min_3) {\n res = \"Yes\";\n }\n }\n return res;\n }\n\n\n\n",
"solution of the problem in golang\npackage main\n\nimport (\n \"bufio\"\n \"fmt\"\n \"io\"\n \"math\"\n \"os\"\n \"strconv\"\n \"strings\"\n)\n\nfunc appendAndDelete(s string, t string, k int32) string {\n // Write your code here\n yes := \"Yes\"\n no := \"No\"\n if len(s)+len(t) <= int(k) {\n return yes\n }\n min := math.Min(float64(len(s)), float64(len(t)))\n length := 0\n for i := 0; i < int(min); i++ {\n if s[i] == t[i] {\n length++\n } else {\n break\n }\n }\n\n total := (len(s) - length) + (len(t) - length)\n\n if total <= int(k) && (total-int(k))%2 == 0 {\n return yes\n }\n return no\n}\n\nfunc main() {\n reader := bufio.NewReaderSize(os.Stdin, 16*1024*1024)\n\n stdout, err := os.Create(os.Getenv(\"OUTPUT_PATH\"))\n checkError(err)\n\n defer stdout.Close()\n\n writer := bufio.NewWriterSize(stdout, 16*1024*1024)\n\n s := readLine(reader)\n\n t := readLine(reader)\n\n kTemp, err := strconv.ParseInt(strings.TrimSpace(readLine(reader)), 10, 64)\n checkError(err)\n k := int32(kTemp)\n\n result := appendAndDelete(s, t, k)\n\n fmt.Fprintf(writer, \"%s\\n\", result)\n\n writer.Flush()\n}\n\nfunc readLine(reader *bufio.Reader) string {\n str, _, err := reader.ReadLine()\n if err == io.EOF {\n return \"\"\n }\n\n return strings.TrimRight(string(str), \"\\r\\n\")\n}\n\nfunc checkError(err error) {\n if err != nil {\n panic(err)\n }\n}\n\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"append",
"java"
] |
stackoverflow_0061849231_append_java.txt
|
Q:
pip uninstall: "No files were found to uninstall."
I have created a python module, call it 'foo_bar'.
I can install it and I can upgrade it, but I cannot uninstall it.
I build my module using bdist_wheel:
$ python3 setup.py bdist_wheel
And I install and upgrade it as follows:
$ python3 -m pip --timeout 60 install --upgrade dist/foo_bar-1.4.3-py3-none-any.whl
It is listed within Python 3.4 framework directory:
ls -al /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/
drwxr-xr-x 12 samwise admin 408 Jun 21 02:50 foo_bar
drwxr-xr-x 9 samwise admin 306 Jun 21 02:50 foo_bar-1.4.3.dist-info
And it listed within pip freeze:
$ python3 -m pip freeze
foo-bar==1.4.3
However, if I try to perform pip uninstall, it cannot find its files
$ python3 -m pip uninstall foo-bar
Can't uninstall 'foo-bar'. No files were found to uninstall.
Did I do something wrong within my setup.py for it not to be able to find my module's files during uninstall?
Version info is as follows:
$ python3 --version
Python 3.4.4
$ python3 -m pip --version
pip 8.1.2 from /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages (python 3.4)
A:
I had the same issue. Using verbose output helped me find out a bit more about the reason:
$ pip3 uninstall --verbose my-homemade-package
Not sure how to uninstall: my-homemade-package e48e635 - Check: /home/olivier/my-homemade-package
Can't uninstall 'my-homemade-package'. No files were found to uninstall.
Removing everything that was 'my-homemade-package' related in /usr/local/python2.x and /usr/local/python3.x did not help.
I did a pip3 show my-homemade-package and got the location of the installed package on my computer:
$ pip3 show my-homemade-package
Name: my-homemade-package
Version: e48e635
Summary: My Home Made package
Home-page: UNKNOWN
Author: UNKNOWN
Author-email: UNKNOWN
License: Proprietary
Location: /home/olivier/my-homemade-package
Requires: pyOpenSSL, pyasn1, protobuf
Removing /home/olivier/my-homemade-package sorted out the issue (i.e. the package was no longer listed).
A:
This is an old post, but it was the top result in Google. The above answers are correct; however, in my case there was still a line in /usr/local/lib/python3.6/site-packages/easy-install.pth that I had to remove after also removing the egg files.
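For anyone hitting the same stale-entry situation, a minimal read-only Python sketch like the one below can locate leftover easy-install.pth lines and *.egg-link files so you know exactly what to delete by hand. The package name is a hypothetical placeholder, and the script only prints what it finds, it does not remove anything:
# find_stale_entries.py - locate leftover easy-install.pth lines and .egg-link
# files that can make pip report "No files were found to uninstall".
import site
import sysconfig
from pathlib import Path

PACKAGE = "my-homemade-package"  # hypothetical name, replace with your own

def candidate_dirs():
    """Every site-packages directory this interpreter knows about."""
    dirs = set(getattr(site, "getsitepackages", lambda: [])())
    dirs.add(site.getusersitepackages())
    dirs.add(sysconfig.get_paths()["purelib"])
    return [Path(d) for d in dirs if Path(d).is_dir()]

def main():
    needle = PACKAGE.replace("-", "_")
    for d in candidate_dirs():
        pth = d / "easy-install.pth"
        if pth.is_file():
            # Report any .pth line that mentions the package
            for lineno, line in enumerate(pth.read_text().splitlines(), 1):
                if PACKAGE in line or needle in line:
                    print(f"{pth}:{lineno}: {line.strip()}")
        # Editable installs may also leave a *.egg-link file behind
        for egg_link in d.glob("*.egg-link"):
            if PACKAGE in egg_link.name or needle in egg_link.name:
                print(f"leftover egg-link: {egg_link}")

if __name__ == "__main__":
    main()
Once the offending line (or egg-link file) is removed, pip uninstall should behave normally again.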
A:
So I was having a similar issue to OP. I could install my package with pip install dist/mypackage.tar.gz. Installation would work fine, but at the end it would show Can't uninstall 'mypackage'. No files were found to uninstall., and indeed pip uninstall mypackage wouldn't work later on.
It sounds silly, but what worked for me was to change the working directory: once I left the mypackage/ directory, pip uninstall mypackage worked.
A:
I had such an issue when I renamed my module in setup.py.
The old old_name.egg-info directory still existed in the my_module directory. So, when I installed the module with pip install -e ., pip created a line in python3.8/site-packages/easy-install.pth pointing to the module directory. After that the module was listed by pip list under both names: new-name and old-name. When I tried to remove the old module with pip uninstall old-name, pip showed this error:
Found existing installation: old-name 0.3.0
Can't uninstall 'old-name'. No files were found to uninstall.
The solution was to remove the old_name.egg-info directory from the module directory. After that, pip list shows only new-name.
This is probably not a direct answer to the original post, but it is one of the solutions for the issue named in the topic.
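To double-check for that situation, a short sketch along these lines (old_name is a hypothetical placeholder) lists every *.egg-info directory under the project so a leftover one from the old module name stands out:
# list_egg_info.py - list *.egg-info directories under the current project so a
# leftover directory from a renamed module (e.g. old_name.egg-info) is easy to spot.
from pathlib import Path

OLD_NAME = "old_name"  # hypothetical old module name

def main():
    root = Path(".").resolve()
    for egg_info in sorted(root.rglob("*.egg-info")):
        flag = "  <-- stale, matches the old name" if egg_info.name.startswith(OLD_NAME) else ""
        print(f"{egg_info.relative_to(root)}{flag}")

if __name__ == "__main__":
    main()
Deleting the flagged directory and re-installing with pip install -e . should regenerate the metadata under the new name only.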
|
pip uninstall: "No files were found to uninstall."
|
I have created a python module, call it 'foo_bar'.
I can install it and I can upgrade it, but I cannot uninstall it.
I build my module using bdist_wheel:
$ python3 setup.py bdist_wheel
And I install and upgrade it as follows:
$ python3 -m pip --timeout 60 install --upgrade dist/foo_bar-1.4.3-py3-none-any.whl
It is listed within Python 3.4 framework directory:
ls -al /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/
drwxr-xr-x 12 samwise admin 408 Jun 21 02:50 foo_bar
drwxr-xr-x 9 samwise admin 306 Jun 21 02:50 foo_bar-1.4.3.dist-info
And it listed within pip freeze:
$ python3 -m pip freeze
foo-bar==1.4.3
However, if I try to perform pip uninstall, it cannot find its files
$ python3 -m pip uninstall foo-bar
Can't uninstall 'foo-bar'. No files were found to uninstall.
Did I do something wrong within my setup.py for it not to be able to find my module's files during uninstall?
Version info is as follows:
$ python3 --version
Python 3.4.4
$ python3 -m pip --version
pip 8.1.2 from /Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages (python 3.4)
|
[
"I had the same issue. Using verbose helped me to find out a bit more the reason:\n$ pip3 uninstall --verbose my-homemade-package\nNot sure how to uninstall: my-homemade-package e48e635 - Check: /home/olivier/my-homemade-package\nCan't uninstall 'my-homemade-package'. No files were found to uninstall.\n\nRemoving everything that was 'my-homemade-package' related in /usr/local/python2.x and /usr/local/python3.x did not help.\nI did a pip3 show my-homemade-package and got the location of the installed package on my computer:\n$ pip3 show my-homemade-package\nName: my-homemade-package\nVersion: e48e635\nSummary: My Home Made package\nHome-page: UNKNOWN\nAuthor: UNKNOWN\nAuthor-email: UNKNOWN\nLicense: Proprietary\nLocation: /home/olivier/my-homemade-package\nRequires: pyOpenSSL, pyasn1, protobuf\n\nRemoving /home/olivier/my-homemade-package sorted it out the issue (ie: the package was not listed).\n",
"This is an old post, but it was top result in Google. The above answers are correct, however, in my case there was still line /usr/local/lib/python3.6/site-packages/easy-install.pth that I had to remove after also removing the egg files.\n",
"So I was having a similar issue to OP. I could install my package with pip install dist/mypackage.tar.gz. Installation would work fine, but at the end it would show Can't uninstall 'mypackage'. No files were found to uninstall., and indeed pip uninstall mypackage wouldn't work later on.\nIt sounds silly but what worked for me was to change working directory: once I left mypackage/ directory, pip uninstall mypackage worked.\n",
"I had such issue when I renamed my module in setup.py.\nOld old_name.egg-info directory still existed in my_module directory. So, when I installed module with pip install -e . pip created a line in python3.8/site-packages/easy-install.pth pointing to module directory. After that module was listed by pip list with both names: new-name and old-name. And when I tried to remove old module with pip remove old-name pip showed error:\nFound existing installation: old-name 0.3.0\nCan't uninstall 'old-name'. No files were found to uninstall.\n\nThe solution was to remove directory old_name.egg-info from module directory. After that pip list shows only new-name.\nProbably it is not direct answer to original post but one of the solutions for issue in topic-name.\n"
] |
[
29,
10,
7,
3
] |
[
"Issue: User cannot uninstall a python package installed via pip:\npip uninstall youtube-dl\nFound existing installation: youtube-dl 2021.12.17\nNot uninstalling youtube-dl at /usr/lib/python3/dist-packages, outside environment /usr\nCan't uninstall 'youtube-dl'. No files were found to uninstall.\n\nReason: PEBKAC.\nWell, a simple\napt purge youtube-dl\n\ndid the trick. There had been a system package \"youtube-dl\" installed, with said version:\ndpkg -l youtube-dl\nii youtube-dl 2021.12.17-1~nd110+1\n\nAt the same time users used to install packages locally using pip. Both packages of the same version (2021.12.17). And both ways (apt and pip) referred to the packages by the same name. Turned out, this tends to confuse users..\nNext level: Have a package installed three ways: apt, pip --system and plain pip as user. Maybe pip as root (locally) FWIW, too.\n"
] |
[
-1
] |
[
"pip",
"python_3.x",
"python_wheel",
"uninstallation"
] |
stackoverflow_0037941523_pip_python_3.x_python_wheel_uninstallation.txt
|
Q:
BigDecimal from Double incorrect value?
I am trying to make a BigDecimal from a string. Don't ask me why, I just need it! This is my code:
Double theDouble = new Double(".3");
System.out.println("The Double: " + theDouble.toString());
BigDecimal theBigDecimal = new BigDecimal(theDouble);
System.out.println("The Big: " + theBigDecimal.toString());
This is the output I get?
The Double: 0.3
The Big: 0.299999999999999988897769753748434595763683319091796875
Any ideas?
A:
When you create a double, the value 0.3 cannot be represented exactly. You can create a BigDecimal from a string without the intermediate double, as in
new BigDecimal("0.3")
A floating point number is represented as a binary fraction and an exponent. Therefore there are some numbers that cannot be represented exactly. There is an analogous problem in base 10 with numbers like 1/3, which is 0.333333333..... Any decimal representation of 1/3 is inexact. This happens to a DIFFERENT set of fractions in binary, and 0.3 is one of the set that is inexact in binary.
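As a language-agnostic aside (not part of the original answer), the same effect is easy to inspect from Python, whose decimal and fractions modules expose the exact binary value that the literal 0.3 is stored as — the same long value that BigDecimal(double) reveals in the question:
# Python's float is, in practice, the same IEEE 754 binary64 format as Java's
# double, so the literal 0.3 is stored as the same binary approximation.
from decimal import Decimal
from fractions import Fraction

print(Decimal(0.3))    # 0.299999999999999988897769753748434595763683319091796875
print(Decimal("0.3"))  # 0.3 -- parsing the decimal text keeps the value exact
print(Fraction(0.3))   # 5404319552844595/18014398509481984, the exact stored fraction
Either way, the lesson is the same: construct the BigDecimal from the string, not from the already-rounded double.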
A:
Another way is to use MathContext.DECIMAL32 which guarantees 7 digit precision (which is good enough in our case):
Double theDouble = new Double(".3");
System.out.println("The Double: " + theDouble.toString());
BigDecimal theBigDecimal = new BigDecimal(theDouble, MathContext.DECIMAL32); // <-- here
System.out.println("The Big: " + theBigDecimal.toString());
OUTPUT
The Double: 0.3
The Big: 0.3000000
A:
You can give a BigDecimal a specified precision, e.g. append this to your example:
Double theDouble = new Double(".3");
theBigDecimal = new BigDecimal(theDouble, new MathContext(2));
System.out.println("The Big: " + theBigDecimal.toString());
This will print out "0.30"
A:
Since new Double(".3") can't be represented exactly, the nearest value is 0x1.3333333333333P-2 or .299999999999999988897769753748434595763683319091796875, so what we would need to do is this:
Double theDouble = new Double(".3");
System.out.println("The Double: " + theDouble.toString());
BigDecimal theBigDecimal = new BigDecimal(theDouble).setScale(2, RoundingMode.CEILING); // <-- here
System.out.println("The Big: " + theBigDecimal.toString());
This will print:
The Double: 0.3
The Big: 0.30
|
BigDecimal from Double incorrect value?
|
I am trying to make a BigDecimal from a string. Don't ask me why, I just need it! This is my code:
Double theDouble = new Double(".3");
System.out.println("The Double: " + theDouble.toString());
BigDecimal theBigDecimal = new BigDecimal(theDouble);
System.out.println("The Big: " + theBigDecimal.toString());
This is the output I get?
The Double: 0.3
The Big: 0.299999999999999988897769753748434595763683319091796875
Any ideas?
|
[
"When you create a double, the value 0.3 cannot be represented exactly. You can create a BigDecimal from a string without the intermediate double, as in\nnew BigDecimal(\"0.3\")\n\nA floating point number is represented as a binary fraction and an exponent. Therefore there are some number that cannot be represented exactly. There is an analogous problem in base 10 with numbers like 1/3, which is 0.333333333..... Any decimal representation of 1/3 is inexact. This happens to a DIFFERENT set of fractions in binary, and 0.3 is one of the set that is inexact in binary.\n",
"Another way is to use MathContext.DECIMAL32 which guarantees 7 digit precision (which is good enough in our case):\nDouble theDouble = new Double(\".3\");\nSystem.out.println(\"The Double: \" + theDouble.toString());\nBigDecimal theBigDecimal = new BigDecimal(theDouble, MathContext.DECIMAL32); // <-- here\nSystem.out.println(\"The Big: \" + theBigDecimal.toString());\n\nOUTPUT\nThe Double: 0.3\nThe Big: 0.3000000\n\n",
"You can give a big decimal a specified precision. e.g. append to your example:\nDouble theDouble = new Double(\".3\");\ntheBigDecimal = new BigDecimal(theDouble, new MathContext(2));\nSystem.out.println(\"The Big: \" + theBigDecimal.toString());\n\nThis will print out \"0.30\"\n",
"Since new Double(\".3\") can't be represented exactly, the nearest value is 0x1.3333333333333P-2 or .299999999999999988897769753748434595763683319091796875, what would be need to is this:\nDouble theDouble = new Double(\".3\");\nSystem.out.println(\"The Double: \" + theDouble.toString());\nBigDecimal theBigDecimal = new \nBigDecimal(theDouble).setScale(2, RoundingMode.CEILING); // <-- here\nSystem.out.println(\"The Big: \" + theBigDecimal.toString());\n\nThis will print:\nThe Double: 0.3\nThe Big: 0.30\n\n"
] |
[
12,
1,
0,
0
] |
[] |
[] |
[
"bigdecimal",
"double",
"java"
] |
stackoverflow_0003693014_bigdecimal_double_java.txt
|
Q:
Define image versions in flux-patch.yml and use these version in the base yml file
I have a flux-patch.yml file and I would like to define a specific version of an image that may not follow semantic versioning (e.g. 1.4.4-1234-develop) in this file, and then use this particular version in the base yml files that use this image.
Example of flux-patch.yml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: demo
spec:
template:
spec:
$setElementOrder/containers:
- name: podinfod
containers:
- image: stefanprodan/podinfo:1.4.4-1234-develop
name: podinfod
one of the base ymls looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: demo
labels:
app: podinfo
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
containers:
- name: podinfo
image: # USE THE VERSION SPECIFIED IN FLUX-PATCH.YML
imagePullPolicy: IfNotPresent
Can this be achieved with flux?
A:
Yes, you can achieve this with Flux. You would first define the specific version of the image you want to use in the flux-patch.yml file, as you have shown in your example. Then, in the base YAML files that use the image, you can reference the version specified in the flux-patch.yml file using an annotation.
Here is an example of how this might look:
# base YAML file
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: demo
labels:
app: podinfo
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
containers:
- name: podinfo
image: stefanprodan/podinfo
imagePullPolicy: IfNotPresent
# Annotation to reference the specific image version from flux-patch.yml
annotations:
fluxcd.io/image.tag: 1.4.4-1234-develop
Then, when you apply the base YAML file with Flux, it will automatically use the specific version of the image defined in the flux-patch.yml file.
|
Define image versions in flux-patch.yml and use these version in the base yml file
|
I have a flux-patch.yml file and I would like to define a specific version of an image that may not follow semantic versioning (e.g. 1.4.4-1234-develop) in this file, and then use this particular version in the base yml files that use this image.
Example of flux-patch.yml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: demo
spec:
template:
spec:
$setElementOrder/containers:
- name: podinfod
containers:
- image: stefanprodan/podinfo:1.4.4-1234-develop
name: podinfod
one of the base ymls looks like this:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: podinfo
namespace: demo
labels:
app: podinfo
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: podinfo
template:
metadata:
labels:
app: podinfo
containers:
- name: podinfo
image: # USE THE VERSION SPECIFIED IN FLUX-PATCH.YML
imagePullPolicy: IfNotPresent
Can this be achieved with flux?
|
[
"Yes, you can achieve this with Flux.You would first define the specific version of the image you want to use in the flux-patch.yml file, as you have shown in your example. Then, in the base YAML files that use the image, you can reference the version specified in the flux-patch.yml file using an annotation.\nHere is an example of how this might look:\n# base YAML file\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: podinfo\n namespace: demo\n labels:\n app: podinfo\nspec:\n replicas: 1\n strategy:\n type: Recreate\n selector:\n matchLabels:\n app: podinfo\n template:\n metadata:\n labels:\n app: podinfo\n containers:\n - name: podinfo\n image: stefanprodan/podinfo\n imagePullPolicy: IfNotPresent\n # Annotation to reference the specific image version from flux-patch.yml\n annotations:\n fluxcd.io/image.tag: 1.4.4-1234-develop\n\nThen, when you apply the base YAML file with Flux, it will automatically use the specific version of the image defined in the flux-patch.yml file.\n"
] |
[
0
] |
[] |
[] |
[
"flux",
"kubernetes",
"yaml"
] |
stackoverflow_0074654388_flux_kubernetes_yaml.txt
|
Q:
Filtering render functions from CodeClimate method-lines check
We're adding CodeClimate to a project and running into a lot of method-lines errors for the render functions in our React components,
example:-
Function render has 78 lines of code (exceeds 40 allowed). Consider refactoring.
We would like to filter out all our render functions from the method-lines check. We could increase the line threshold or disable the check altogether, but we still want the check for other functions, so that's not desirable.
There is node filtering for duplication checks, but I can't find anything similar for method-lines.
A:
To filter out your render functions from the method-lines check in CodeClimate, you can use the ignore keyword in a .codeclimate.yml configuration file. This file allows you to customize the behavior of CodeClimate's checks, including the method-lines check.
Here's an example of how you might use the ignore keyword to exclude your render functions from the method-lines check:
method-lines:
enabled: true
ignore:
- '**/render'
In this example, the ignore keyword is used to specify a pattern that matches the names of the render functions you want to exclude from the method-lines check. In this case, the pattern '**/render' matches any function named render in any file or directory.
You can also use the ignore keyword to exclude specific render functions by their full path or to exclude multiple patterns by separating them with commas. For example:
method-lines:
enabled: true
ignore:
- '**/render'
- 'src/components/Button/render.js'
- 'src/components/Card/render.js'
In this example, the ignore keyword is used to exclude the render function in any file or directory, as well as the render functions in the Button and Card components.
You can learn more about the ignore keyword and other configuration options for the method-lines check in the CodeClimate documentation.
A:
It is possible to configure CodeClimate to exclude specific files or directories from the method-lines check. This way, you can exclude the render functions in your React components from the method-lines check, while still checking the rest of your code for this issue.
To exclude specific files or directories from the method-lines check, you will need to add a .codeclimate.yml file to the root of your project. In this file, you can specify which files or directories should be excluded from the method-lines check using the exclude_paths option. For example, to exclude all files in the src/components directory, you would add the following to your .codeclimate.yml file:
engines:
method_lines:
enabled: true
exclude_paths:
- src/components
You can also use glob patterns to match multiple files or directories. For example, to exclude all files ending in .render.js from the method-lines check, you could use the following pattern:
engines:
method_lines:
enabled: true
exclude_paths:
- "**/*.render.js"
For more information about configuring CodeClimate checks, see the CodeClimate documentation.
A:
Yes, it is possible to filter out specific methods from the method-lines check in CodeClimate. You can configure CodeClimate to ignore certain files or lines within files using a .codeclimate.yml configuration file in the root of your project. In this file, you can specify which files or lines to ignore by using the exclude_paths and exclude_patterns options.
For example, to ignore all of the render functions in your React components, you could use the following configuration:
exclude_patterns:
- "src/components/*.jsx:function render"
This will exclude all lines containing the string "function render" from files ending in ".jsx" in the "src/components" directory.
You can also use regular expressions to match specific patterns within your code. For example, to exclude all methods named "render" that take two arguments, you could use the following configuration:
exclude_patterns:
- "src/components/*.jsx:function render\(\w+, \w+\)"
For more information on configuring CodeClimate, see the CodeClimate documentation.
|
Filtering render functions from CodeClimate method-lines check
|
We're adding CodeClimate to a project and running into a lot of method-lines errors for the render functions in our React components,
example:-
Function render has 78 lines of code (exceeds 40 allowed). Consider refactoring.
We would like to filter out all our render functions from the method-lines check. We could increase the line threshold or disable the check altogether, but we still want the check for other functions, so that's not desirable.
There is node filtering for duplication checks, but I can't find anything similar for method-lines.
|
[
"To filter out your render functions from the method-lines check in CodeClimate, you can use the ignore keyword in a .codeclimate.yml configuration file. This file allows you to customize the behavior of CodeClimate's checks, including the method-lines check.\nHere's an example of how you might use the ignore keyword to exclude your render functions from the method-lines check:\n method-lines:\n enabled: true\n ignore:\n - '**/render'\n\nIn this example, the ignore keyword is used to specify a pattern that matches the names of the render functions you want to exclude from the method-lines check. In this case, the pattern '**/render' matches any function named render in any file or directory.\nYou can also use the ignore keyword to exclude specific render functions by their full path or to exclude multiple patterns by separating them with commas. For example:\n method-lines:\n enabled: true\n ignore:\n - '**/render'\n - 'src/components/Button/render.js'\n - 'src/components/Card/render.js'\n\nIn this example, the ignore keyword is used to exclude the render function in any file or directory, as well as the render functions in the Button and Card components.\nYou can learn more about the ignore keyword and other configuration options for the method-lines check in the CodeClimate documentation.\n",
"It is possible to configure CodeClimate to exclude specific files or directories from the method-lines check. This way, you can exclude the render functions in your React components from the method-lines check, while still checking the rest of your code for this issue.\nTo exclude specific files or directories from the method-lines check, you will need to add a .codeclimate.yml file to the root of your project. In this file, you can specify which files or directories should be excluded from the method-lines check using the exclude_paths option. For example, to exclude all files in the src/components directory, you would add the following to your .codeclimate.yml file:\nengines:\n method_lines:\n enabled: true\n exclude_paths:\n - src/components\n\nYou can also use glob patterns to match multiple files or directories. For example, to exclude all files ending in .render.js from the method-lines check, you could use the following pattern:\nengines:\n method_lines:\n enabled: true\n exclude_paths:\n - \"**/*.render.js\"\n\nFor more information about configuring CodeClimate checks, see the CodeClimate documentation.\n",
"Yes, it is possible to filter out specific methods from the method-lines check in CodeClimate. You can configure CodeClimate to ignore certain files or lines within files using a .codeclimate.yml configuration file in the root of your project. In this file, you can specify which files or lines to ignore by using the exclude_paths and exclude_patterns options.\nFor example, to ignore all of the render functions in your React components, you could use the following configuration:\nexclude_patterns:\n - \"src/components/*.jsx:function render\"\n\nThis will exclude all lines containing the string \"function render\" from files ending in \".jsx\" in the \"src/components\" directory.\nYou can also use regular expressions to match specific patterns within your code. For example, to exclude all methods named \"render\" that take two arguments, you could use the following configuration:\nexclude_patterns:\n - \"src/components/*.jsx:function render\\(\\w+, \\w+\\)\"\n\nFor more information on configuring CodeClimate, see the CodeClimate documentation.\n"
] |
[
0,
0,
0
] |
[
"Check this document. You can disable the Enforce Diff Coverage and Enforce Totoal Coverage checks in codeclimate so that these reports are not run for your commits.\n",
"you need a codwclimate.yml file and you can change the threshold with the following - although having a giant render function isn't really great - i'd suggest keeping it under 50 lines as well.\nversion: \"2\" # required to adjust maintainability checks\nchecks:\n argument-count:\n config:\n threshold: 4\n complex-logic:\n config:\n threshold: 4\n file-lines:\n config:\n threshold: 250\n method-complexity:\n config:\n threshold: 5\n method-count:\n config:\n threshold: 20\n method-lines:\n config:\n threshold: 25\n\nthis is from the docs here: https://docs.codeclimate.com/docs/advanced-configuration#section-default-check-configurations\nmethod-lines is the last one above - and please make sure to not cut/paste as the YML will need the indentation to be exact. Good luck!\n"
] |
[
-1,
-4
] |
[
"code_climate",
"reactjs"
] |
stackoverflow_0051138460_code_climate_reactjs.txt
|