Q:
How do I have npm run build of a React app include my environment variables?
I am trying to deploy my app for the public to see, but I am stuck on this step. To preface, I containerized my React app using Docker, and I pass in my variables through the --env-file option of docker run. This works fine when I run the app using CMD ["npm", "start"], but when I changed the Dockerfile for production, the site is accessible yet the parts of the site that rely on the environment variables fail.
This is the docker file I used:
FROM node:latest as build-stage
WORKDIR /app
ADD . .
RUN npm install
RUN npm run build
FROM nginx:latest
COPY --from=build-stage /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
I am not sure if the problem is with nginx or I am just misunderstanding the build process in React and how environment variables are injected into it.
A:
The build stage runs in its own container (based on node:latest), and that container does not inherit the environment variables you pass to docker run. You can set variables in the build container using RUN shell commands or using ARG/ENV. You will need to set any variables your app needs at build time, as their values are bundled into the resulting build. You can also set environment variables when you run the container (in this case, when the nginx server starts up), but by then the bundle has already been built.
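As a minimal sketch of the build-time approach, assuming a Create React App project (the variable name REACT_APP_API_URL here is hypothetical):
FROM node:latest as build-stage
WORKDIR /app
# Hypothetical build-time variable; pass it with:
#   docker build --build-arg REACT_APP_API_URL=https://api.example.com .
ARG REACT_APP_API_URL
ENV REACT_APP_API_URL=$REACT_APP_API_URL
ADD . .
RUN npm install
RUN npm run build
FROM nginx:latest
COPY --from=build-stage /app/build /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]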
A:
This worked for me.
env file name -> .e
RUN NODE_ENV=qa npm run build
Q:
How to copy data over into my custom database table
I have the following function which inserts data from the postmeta table into a custom database table wp_fixtures_results.
I am using WPAll Import Plugin action pmxi_saved_post. So the code runs during an import process.
The purpose of the code is to migrate data from wp_postmeta into wp_fixtures_results which is the custom table.
When running the code for a fresh import, the data that would ordinarily be stored in wp_postmeta is moved into the custom table instead. This works perfectly.
However, the code only works for the INSERT query shown. Using the same plugin action, I also need to update the custom table from postmeta. How do I check whether the data has changed in postmeta during an import that updates existing posts, and update the custom table too?
if ($post_type === 'fixture-result') {
function save_fr_data_to_custom_database_table($post_id)
{
// Make wpdb object available.
global $wpdb;
// Retrieve value to save.
$value = get_post_meta($post_id, 'fixtures_results', true);
// Define target database table.
$table_name = $wpdb->prefix . "fixtures_results";
// Insert value into database table.
$wpdb->insert($table_name, array('ID' => $post_id, 'fixtures_results' => $value), array('%d', '%s'));
// Update query not working - doesn't change data.
$wpdb->update($table_name, array('ID' => $post_id, 'fixtures_results' => $value), array('%d', '%s'));
// Delete temporary custom field.
delete_post_meta($post_id, 'fixtures_results');
}
add_action('pmxi_saved_post', 'save_fr_data_to_custom_database_table', 10, 1);
}
The wp_postmeta table
The wp_fixtures_results (custom table)
A:
This is an old question by now, but I'd say that the $wpdb->update is redundant. The insert is going to create the row and store the fields. The $wpdb->insert should return "1" - check that value to verify that the insert succeeded before deleting the postmeta entry. The call to update is not necessary.
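A sketch of that check, using the table and action from the question (wpdb's return value here is standard WordPress behavior: the number of rows inserted, or false on error):
// Insert the row; only clean up the postmeta if it actually landed.
$inserted = $wpdb->insert(
    $table_name,
    array('ID' => $post_id, 'fixtures_results' => $value),
    array('%d', '%s')
);
if ($inserted === 1) {
    delete_post_meta($post_id, 'fixtures_results');
}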
Q:
Powershell - Need to recognize if there is more than one result (regex)
I am using this to check whether a file name contains exactly 7 digits:
if ($file.Name -match '\D(\d{7})(?:\D|$)') {
$result = $matches[1]
}
The problem arises when a file name contains 2 groups of 7 digits,
for example:
patch-8.6.22 (1329214-1396826-Increase timeout.zip
In this case the result will be the first one (1329214).
In most cases there is only one number, so the regex works, but I need to recognize when there is more than one group and integrate that check into the if ().
A:
The -match operator only ever looks for one match.
To get multiple ones, you must currently use the underlying .NET APIs directly, specifically [regex]::Matches():
Note: There's a green-lighted proposal to implement a -matchall operator, but as of PowerShell 7.3.0 it hasn't been implemented yet - see GitHub issue #7867.
# Sample input.
$file = [pscustomobject] @{ Name = 'patch-8.6.22 (1329214-1396826-Increase timeout.zip' }
# Note:
# * If *nothing* matches, $result will contain $null
# * If *one* substring matches, return will be a single string.
# * If *two or more* substrings match, return will be an *array* of strings.
$result = ([regex]::Matches($file.Name, '(?<=\D)\d{7}(?=\D|$)')).Value
.Value uses member-access enumeration to extract matching substrings, if any, from the elements of the collection returned by [regex]::Matches().
I've tweaked the regex to use lookaround assertions ((?<=...) and (?=...)) so that only the substrings of interest are captured.
See this regex101.com page for an explanation of the regex and the ability to experiment with it.
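To fold the multiple-match check into an if, something along these lines should work (variable names are illustrative):
$found = [regex]::Matches($file.Name, '(?<=\D)\d{7}(?=\D|$)')
if ($found.Count -gt 1) {
    # More than one 7-digit group - flag the ambiguity.
    Write-Warning "Multiple 7-digit groups in $($file.Name): $($found.Value -join ', ')"
} elseif ($found.Count -eq 1) {
    $result = $found.Value
}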
Q:
Button not showing in LinearLayout
I am using LinearLayout to display a simple title, body, and Save and Cancel buttons. But I can't see the Save button. What is missing?
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
xmlns:app="http://schemas.android.com/apk/res-auto"
xmlns:tools="http://schemas.android.com/tools"
android:layout_width="match_parent"
android:layout_height="match_parent"
android:orientation="vertical"
tools:context=".SecondFragment">
<TextView
android:id="@+id/tv_title"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/title"
android:layout_margin="12dp" />
<EditText
android:id="@+id/et_title"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/edit_title"
android:layout_margin="12dp"
/>
<EditText
android:id="@+id/et_body"
android:background="@drawable/edit_text_border"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:textColor="@android:color/darker_gray"
android:text="Lorem ipsum dolor sit amet, consectetur adipiscing elit. Praesent nec est consequat arcu laoreet consequat non eget quam. Mauris ullamcorper odio id erat pharetra, sed auctor libero tempor. Sed sit amet justo facilisis nisl pharetra mollis. Donec vel felis eget dolor tristique consectetur at nec tortor. Donec eu finibus leo. Fusce non erat semper turpis tincidunt volutpat sit amet sit amet elit."
android:justificationMode="inter_word"/>
<Button
android:id="@+id/button_save"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:layout_marginTop="16dp"
android:text="@string/save"
android:textAllCaps="false"
style="?android:attr/buttonBarButtonStyle" />
<Button
android:id="@+id/button_cancel"
android:layout_width="match_parent"
android:layout_height="wrap_content"
android:text="@string/cancel"
android:textAllCaps="false"
style="?android:attr/buttonBarButtonStyle" />
</LinearLayout>
A:
Apparently there was another fragment_second.xml in a layout-v26 folder; after deleting it, everything worked like a charm!
Q:
Is it possible to return a function with AJAX?
Let's assume I need to make an ajax call to my server
$.ajax({
type: 'POST',
url: 'url/url',
success: function(response){}
});
and as a response from the server I reply with some javascript code
res.send("const myFunc = (b) => { console.log(b) }");
Is there a way to do something like this?:
$.ajax({
type: 'POST',
url: 'url/url',
success: function(response){
response('hello'); //I would like 'hello' to appear in the console
}
});
A:
Example with Function() constructor:
$.ajax({
type: 'POST',
url: 'url/url',
success: function(response){
const fn = new Function('x', `${response}; myFunc(x)`)
fn('hello')
}
});
Notice that you have to run the function defined inside the string, hence ${response}; myFunc(x).
Here is example of how this works:
const code = 'const myFunc = (b) => { console.log(b) }'
const fn = new Function('x', `${code}; myFunc(x)`)
fn('Hello')
It could be easier if you sent an array, like this:
res.send(["b", "console.log(b)"]);
You could then use it like this:
$.ajax({
type: 'POST',
url: 'url/url',
success: function(response){
const fn = new Function(...response)
fn('hello')
}
});
Q:
Python module installation failing
Python won't install the time module; it was asking me to update pip to the newest version, and I did.
I receive this error:
ERROR: Could not find a version that satisfies the requirement time (from versions: none) ERROR: No matching distribution found for time
My Python version is the latest: Python 3.11.0
Pip version: 22.3.1
It's all up to date.
Any ideas why?
Tried installing via CMD and PyCharm package additions.
Also updated Python and pip. No success.
A:
The time module is part of Python's standard library. It's installed along with the rest of Python, and you don't need to (nor can you!) install it with pip.
"I can import time in the Python Console"
Yes, because it's already installed.
"but not in my actual code"
I don't believe you. Show us the exact error message you get when you try.
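A minimal check that nothing needs installing (standard library only):
import time  # ships with Python; pip cannot and need not install it
print(time.strftime("%Y-%m-%d %H:%M:%S"))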
Q:
Using useEffect on several fetch functions causes them to run before a button click, how do I use useCallback instead?
I am using a fetch function to generate an output after pressing a button; this output is then used by several similar fetch functions. To do this I've used useEffect, but the issue is that the 2nd and 3rd functions run before I press the button. The button click initiates the first fetch function, and only once it has run would I like the others to run. Otherwise it costs me a lot of money if they fire as soon as the page loads.
I know I should use useCallback, but simply replacing useEffect with useCallback doesn't work of course, because it only runs when something is completed. I tried to replace the async function in the 2nd and 3rd functions with useCallback, like so:
const getOpenAIResponse = async () => {
openai.createCompletion({
model: "text-davinci-003",
prompt: "Create a unique 1-5 word name.",
max_tokens: 256,
}).then((response) => {
setInputName(response.data.choices[0].text)
getSlogan()
})
};
const getStory = useCallback (() => {
openai.createCompletion({
model: "text-davinci-003",
prompt: "Create a story about " + inputName + ".",
max_tokens: 256,
}).then((response) => {
setInputStory(response.data.choices[0].text)
})
}, [inputName]);
However this did not work, the Story it produced was not based on the inputName - it assumed inputName was blank.
This is my code with useEffect.
const [inputName, setInputName] = useState('');
const [inputStory, setInputStory] = useState('');
const [inputDes, setInputDes] = useState('');
const getOpenAIResponse = async () => {
openai.createCompletion({
model: "text-davinci-003",
prompt: "Create a unique 1-5 word name.",
max_tokens: 256,
}).then((response) => {
setInputName(response.data.choices[0].text)
})
};
const getStory = async () => {
openai.createCompletion({
model: "text-davinci-003",
prompt: "Create a story about " + inputName + ".",
max_tokens: 256,
}).then((response) => {
setInputStory(response.data.choices[0].text)
})
};
const getDescription = async () => {
openai.createCompletion({
model: "text-davinci-003",
prompt: "Create a description for " + inputName + ".",
max_tokens: 256,
}).then((response) => {
setInputDes(response.data.choices[0].text)
})
};
useEffect(() => {
getStory();
getDescription();
}, [inputName]);
<Button onClick={getOpenAIResponse}>
Once I click the button everything settles down, but before I click it, inputStory and inputDescription are continuously being fetched in the background. I only want them to run once the button is clicked, but I need them to depend on the inputName state, so they have to wait for it to finish.
Is there a solution that doesn't run the 2nd and 3rd functions in the background?
A:
Add an if condition so the calls run only when inputName has a value (the initial '' is falsy, so nothing fires on mount):
useEffect(() => {
  if (inputName) {
    getStory();
    getDescription();
  }
}, [inputName]);
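An alternative that avoids the background effect entirely is to chain everything in the click handler and pass the fresh name as an argument instead of reading it back from state; a sketch (the parameterized getStory/getDescription signatures are an assumed change):
const handleClick = async () => {
  const response = await openai.createCompletion({
    model: "text-davinci-003",
    prompt: "Create a unique 1-5 word name.",
    max_tokens: 256,
  });
  const name = response.data.choices[0].text;
  setInputName(name);
  // State updates are asynchronous, so pass the value directly
  // instead of reading inputName inside getStory/getDescription.
  getStory(name);
  getDescription(name);
};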
Q:
Return to previous page on exact point in React
Imagine I have page A with a very long list of rows. I scroll down to e.g. row number 500. The row is also a link to page B, which shows the row details. After examining the details on page B, I want to go back to page A, but I want to return to exactly the same point from which I went to page B, that is, to row 500. How can I achieve this behavior in a ReactJS app?
A:
This can be achieved in two ways:
1. Simple use of HTML id attributes.
2. With the help of react-router-dom and props passing.
I have listed method 1 here, which is beginner friendly. It works with the help of ids.
Below is a sample HTML snippet that behaves the same in JSX:
<div>
<p id="1">Row 1 </p>
<p id="2">Row 2 </p>
...
...
...
</div>
Let's suppose the URL of the page is www.samplepage.com.
If I make the link specific instead, i.e. www.samplepage.com#1,
it will take me to Row 1; it works like an anchor tag within the same page.
You can store the tags in state and use them.
You can also see a similar implementation on GitHub.
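A sketch of the hash-based variant inside a React function component (the row ids and hook usage are illustrative):
useEffect(() => {
  // On returning to page A, scroll to the row named in the URL hash,
  // e.g. /page-a#500 scrolls to the element with id="500".
  const id = window.location.hash.slice(1);
  if (id) {
    document.getElementById(id)?.scrollIntoView();
  }
}, []);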
Q:
InkEdit control adds line breaks to cells
I am using a userform for data entry and storing the contents in a cell.
When the form loads it copies the cell's contents into the InkEdit box on the form; closing it copies the edited text back into the cell it came from.
For some reason, when it copies back into the cell, it adds line breaks where there are already line breaks. Loading the form over and over just adds more and more line breaks!
Does anyone know why this is happening and a possible solution?
Private Sub CMDExit_Click()
Sheets("Comments").Range("B1") = InkEdit1.Text
Unload Me
End Sub
Private Sub UserForm_Initialize()
Sheets("Comments").Range("B1").Copy
PasteToObj InkEdit1.hWnd
Application.CutCopyMode = False
End Sub
Before
After
A:
Looks like you have both of the characters vbLf and vbCrLf. You may want to try this code:
Dim MyAr(1 To 2) As Variant
'~~> These are the characters we will be searching for
MyAr(1) = vbLf
MyAr(2) = vbCrLf
Dim TestString As String
Dim i As Long
TestString = InkEdit1.Text
'TestString = "Blah " & vbLf & vbLf & "Blah"
'TestString = TestString & "Blah " & Chr(13) & Chr(13) & "Blah"
'TestString = TestString & "Blah " & vbCrLf & vbCrLf & "Blah"
'Debug.Print TestString
'Debug.Print "-----"
For i = 1 To 2
Do While InStr(1, TestString, MyAr(i) & MyAr(i), vbTextCompare)
TestString = Replace(TestString, MyAr(i) & MyAr(i), MyAr(i))
Loop
Next i
'~~> In case there is a mix of both (and vice versa)
TestString = Replace(TestString, vbLf & vbCrLf, vbLf)
TestString = Replace(TestString, vbCrLf & vbLf, vbLf)
'Debug.Print TestString
Sheets("Comments").Range("B1").Value = TestString
Q:
Why does a member of an array of structs always have the same value?
So I want to read a text file and populate an array of structs with the values from the file.
This is my file:
tile1;1;1;1;0
tile2;0;1;1;1
tile3;1;0;1;1
tile4;1;1;0;1
tile5;1;0;1;0
tile6;0;1;0;1
tile7;1;1;0;0
tile8;0;1;1;0
tile9;0;0;1;1
tileX;1;0;0;1
and this is my code
FILE *file;
struct tile
{
char * file_name;
int l,u,r,d;
};
typedef struct tile tile_t;
tile_t tileData[10];
int main(void)
{
#define BIG 1000
char str[BIG];
if ((file = fopen("res/tiles.conf", "r")) != NULL)
{
char* item;
int i = 0;
while (fgets(str, BIG, file) != NULL)
{
int j = 0;
for (item = strtok(str, ";"); item != NULL; item = strtok(NULL, ";"))
{
//printf("%d item %s\n", j, item);
switch (j) {
case 0: tileData[i].file_name = item;
break;
case 1: sscanf(item, "%d", &tileData[i].l);
break;
case 2: sscanf(item, "%d", &tileData[i].u);
break;
case 3: sscanf(item, "%d", &tileData[i].r);
break;
case 4: sscanf(item, "%d", &tileData[i].d);
break;
}
j++;
}
i++;
}
fclose(file);
for (int k = 0; k < 10; k++)
{
printf("index:%d name:%s l:%d u:%d r:%d d:%d\n", k, tileData[k].file_name, tileData[k].l, tileData[k].u, tileData[k].r, tileData[k].d);
}
}
return 0;
}
the print loop at the end shows me that the int values are correct, while the file_name member always has the value of the last row of the file:
index:0 name:tileX l:1 u:1 r:1 d:0
index:1 name:tileX l:0 u:1 r:1 d:1
index:2 name:tileX l:1 u:0 r:1 d:1
index:3 name:tileX l:1 u:1 r:0 d:1
index:4 name:tileX l:1 u:0 r:1 d:0
index:5 name:tileX l:0 u:1 r:0 d:1
index:6 name:tileX l:1 u:1 r:0 d:0
index:7 name:tileX l:0 u:1 r:1 d:0
index:8 name:tileX l:0 u:0 r:1 d:1
index:9 name:tileX l:1 u:0 r:0 d:1
I'm not an experienced C dev but I have done similar programs in the past and this has never happened
A:
The problem is that you don't reserve space for the file name string: every file_name pointer ends up pointing into the same str buffer, which fgets overwrites on each line, so all entries show the last value.
Switch from
case 0: tileData[i].file_name = item;
to
case 0: tileData[i].file_name = strdup(item);
Check how to implement strdup if it is not available: Implementation of strdup() in C programming
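If strdup is unavailable (it's POSIX, not ISO C) or dynamic allocation is unwanted, another option is to give the struct its own storage; a runnable sketch (the buffer size 64 is an arbitrary choice):
#include <stdio.h>
#include <string.h>

struct tile
{
    char file_name[64];   /* owns its storage instead of borrowing str[] */
    int l, u, r, d;
};

int main(void)
{
    char str[1000] = "tile1;1;1;1;0";  /* stands in for one fgets line */
    struct tile t = {0};

    char *item = strtok(str, ";");
    /* Copy the token instead of storing a pointer into str */
    snprintf(t.file_name, sizeof t.file_name, "%s", item);

    printf("%s\n", t.file_name);       /* prints: tile1 */
    return 0;
}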
Q:
Django Rest Framework ignores default value
Any idea why Django Rest Framework ignores default values?
class MyClass(models.Model):
some_field = models.CharField(default='Yes')
class MyClassSerializer(serializers.ModelSerializer):
class Meta:
model = MyCLass
fields = ['some_field']
class MyClassListCreateAPIView(ListCreateAPIView):
queryset = MyClass.objects.all()
serializer_class = MyClassSerializer
When I send {'some_field': None} (null, or something like it), I always get:
Bad Request: /myurl/
[02/Dec/2022 16:44:59] "POST /myurl/ HTTP/1.1" 400 114
When changed to:
class MyClass(models.Model):
some_field = models.CharField(default='Yes', blank=True, null=True)
it works but always sets the NULL value.
Is this expected behaviour? Should I change the mechanics of my POST request to include changing value to default when user doesn't provide one?
A:
I believe this is a minor misconception.
Posting {'some_field': None} to your API is informing your Serializer to manufacture a MyClass instance with some_field set to None- not to use the default. It is not using the default value- because you're instantiating it with a value of None.
If you want to use the default value on your model- you need to clean/remove the some_field value during serializer Validation so that the create() call does not have this field present- and the default will be used.
Should I change the mechanics of my POST request to include changing value to default when user doesn't provide one?
This would certainly be a solution and would work. But unless this is well documented- outside consumers of your API may notice the same odd behavior.
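As a sketch of that cleaning step (one possible approach, not the only one): strip explicit nulls before field validation so create() never sees the key and the model default applies.
from rest_framework import serializers

class MyClassSerializer(serializers.ModelSerializer):
    class Meta:
        model = MyClass
        fields = ['some_field']
        extra_kwargs = {'some_field': {'required': False}}

    def to_internal_value(self, data):
        # Drop keys sent as null so validated_data omits them
        # and the model default ('Yes') is used on save.
        data = {k: v for k, v in data.items() if v is not None}
        return super().to_internal_value(data)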
A:
I don't think this is the main problem, but are you setting the max_length option in your MyClass model definition?
If it's not needed here, why not? According to the field types documentation, it is required for CharField.
Q:
Creating Json file from Excel and want to pass null when cell is blank but having difficulty finding the right syntax
The script is working great and saves the JSON file, but if table cells are blank they are eliminated from the JSON output. Would love some assistance solving this.
Private Sub SaveAsJSON_Click()
Set ObjectProperties = CreateObject("Scripting.Dictionary")
For Each c In ActiveSheet.ListObjects(1).HeaderRowRange.Cells
ObjectProperties.Add c.Column, c.Value
Next
Dim CollectionToJson As New Collection
For Each r In ActiveSheet.ListObjects(1).ListRows
Set jsonObject = CreateObject("Scripting.Dictionary")
For Each c In r.Range.Cells
jsonObject.Add ObjectProperties(c.Column), c.Value
Next
CollectionToJson.Add jsonObject
Next
fileSaveName = Application.GetSaveAsFilename(fileFilter:="JSON Files (*.json), *.json")
If fileSaveName <> False Then
fileNumber = FreeFile
Open fileSaveName For Output As fileNumber
Print #fileNumber, JsonConverter.ConvertToJson(CollectionToJson, Whitespace:=2)
Close fileNumber
End If
End Sub
A:
Like this:
Dim v As Variant
'...
'...
v = c.Value
jsonObject.Add ObjectProperties(c.Column), IIf(Len(v) > 0, v, Null)
Input:
Output:
[
{
"Col001": "blah",
"Col002": null,
"Col003": 44,
"Col004": "blah",
"Col005": "blah",
"Col006": 66,
"Col007": "blah"
},
{
"Col001": "blah",
"Col002": "blah",
"Col003": 67,
"Col004": "blah",
"Col005": "blah",
"Col006": null,
"Col007": "blah"
}
]
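Slotted into the original row loop from the question, the change would look roughly like this:
Dim v As Variant
For Each r In ActiveSheet.ListObjects(1).ListRows
    Set jsonObject = CreateObject("Scripting.Dictionary")
    For Each c In r.Range.Cells
        v = c.Value
        ' Blank cells become JSON null instead of being dropped
        jsonObject.Add ObjectProperties(c.Column), IIf(Len(v) > 0, v, Null)
    Next
    CollectionToJson.Add jsonObject
Next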
Q:
What does mvn --builder do?
maven appears to have grown a --builder option:
$ mvn --help
usage: mvn [options] [<goal(s)>] [<phase(s)>]
[...]
Options:
-b,--builder <arg> The id of the build strategy to
use.
I don't find any documentation on it; does anyone have an idea what it is (before I RTFC)?
I'm using version 3.2.1, but I'm not sure when that appeared.
A:
From the Maven 3.2.1 release notes:
There is a new Builder interface which classes can implement to encapsulate a strategy for building projects. The existing strategies for building Maven serially and in parallel are now Builder implementations. It’s now possible for others to implement additional strategies for building projects. This is a provisional interface and may change in the near future but will stabilize by Maven 4.0.0.
The new Builder interface abstracts over different ways to schedule the building of a project, and is intended to allow new, potentially faster, strategies (Making Maven builds incredibly fast - "our new smart scheduling for Maven consists of a new parallelization model that is more aggressive, and optimized prioritization of project builds based on recording their execution times and persisting them for subsequent analysis.").
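As a usage sketch: the builder ids that ship with Maven are, to my knowledge, singlethreaded and multithreaded, and the Takari smart builder extension (mentioned in the answer below) registers the id smart, so with that extension installed a build could be invoked as:
mvn --builder smart -T 4 clean install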
A:
This is a constant problem with the maven crowd. It is nigh impossible to get a straight answer out of anyone. You would expect the command to be able to list the available builder identifiers, but nope. Or find a list of them in the documentation. Again nope. If a colleague hadn't pointed me towards the --builder=smart option, I would not have any readily available means to find out about it. That is pretty useless from a user's point of view.
Q:
How do I change the background image on the push of a button between 100 different images? HTML/CSS/JS
I want to change the background image of my div "title" on the push of a button and also have that image stay as the background until the button is pressed again using cookies. I have 100 images labeled IMG_1.jpg - IMG_100.jpg.
I tried to change the background to just one image first but this didn't work.
function changeBackground() {
document.title.style.background = 'url(../img/bc/IMG_10.jpg) no-repeat';
}
<nav>
<ul class="nav__list">
<li class="nav__list-item">
<a onclick="changeBackground()" class="nav__link nav__link--btn">
Change Theme
</a>
</li>
</ul>
</nav>
<div class="title">
</div>
A:
try that
let count = 1;
document.querySelector('#change-image').addEventListener('click', evt => {
count += 1;
let url = 'url("https://picsum.photos/id/' + count + '/300/200")';
document.querySelector('.title').style.backgroundImage = url;
})
.title {
width: 300px;
height: 200px;
background-image: url("https://picsum.photos/id/1/300/200");
background-size: contain;
background-repeat: no-repeat;
border: 1px solid black;
}
<button id="change-image">Change Image</button>
<div class="title">
</div>
The CSS and HTML are just for the example.
What matters for you is how the background image is changed and the format of the url value.
To change it:
element.style.backgroundImage = ...
The format for the value is:
url("myBkgImage")
so, putting the two together with a variable:
element.style.backgroundImage = 'url("' + myVariableImage + '")';
You can also use template-literal notation: `url("${myVar}")`.
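Applied to the setup from the question (100 files named IMG_1.jpg to IMG_100.jpg and a .title div), a sketch that also persists the choice; localStorage is used here as a simpler stand-in for the cookies the question mentions:
function changeBackground() {
  // Advance to the next image (wraps from 100 back to 1) and remember it.
  const current = Number(localStorage.getItem('bgIndex') || 0);
  const next = (current % 100) + 1;
  localStorage.setItem('bgIndex', String(next));
  document.querySelector('.title').style.backgroundImage =
    `url("../img/bc/IMG_${next}.jpg")`;
}

// Restore the saved background on page load.
window.addEventListener('DOMContentLoaded', () => {
  const saved = localStorage.getItem('bgIndex');
  if (saved) {
    document.querySelector('.title').style.backgroundImage =
      `url("../img/bc/IMG_${saved}.jpg")`;
  }
});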
Q:
Can't find a tag with selenium, why?
For some reason I am unable to find the element that I'm searching for. I'm trying to get the value of a certain stock with selenium-python. If anyone can explain why I can't get the element with id = "stock-quote-tablet-phone", I would be very grateful.
Python-code:
class SeleniumDriver:
def __init__(self):
self.webdriver = webdriver.Chrome("chromedriver", options=options)
self.webdriver.get("https://www.avanza.se/min-ekonomi/innehav.html")
time.sleep(30) # The user gets one minute to log into their account
self.webdriver.get(AZELIO)
print("Checkpoint")
time.sleep(5)
self.anchor = (
self.webdriver.find_element(By.TAG_NAME, "aza-app")
.find_element(By.TAG_NAME, "aza-shell")
.find_element(By.TAG_NAME, "div")
.find_element(By.TAG_NAME, "main")
.find_element(By.TAG_NAME, "div")
.find_element(By.TAG_NAME, "aza-stock")
.find_element(By.TAG_NAME, "aza-subpage")
.find_element(By.TAG_NAME, "div")
)
def getPrice(self):
self.tmp = (
self.anchor.find_element(By.TAG_NAME, "div")
.find_element(By.TAG_NAME, "aza-pull-to-refresh")
.find_elements(By.TAG_NAME, "div")[0]
.find_element(By.TAG_NAME, "div")
.find_element(By.TAG_NAME, "aza-page-container")
.find_element(By.TAG_NAME, "aza-page-container-inset")
.find_element(By.TAG_NAME, "section")
.find_element(By.TAG_NAME, "div")
.find_elements(By.TAG_NAME, "div")[0]
.find_element(By.TAG_NAME, "aza-instrument-chart")
.find_element(By.TAG_NAME, "div")
.find_element(By.TAG_NAME, "div")
)
WebDriverWait(self.webdriver, 10).until(
EC.presence_of_element_located((By.ID, "stock-quote-tablet-phone"))
)
self.price = self.tmp.find_element(
By.CLASS_NAME, "ng-tns-c425-30 ng-star-inserted"
)
print(self.price.get_attribute("innerHTML"))
print(self.tmp.get_attribute("innerHTML"))
testDriver = SeleniumDriver()
testDriver.getPrice()
HTML-code:
Image of HTML-code
I have tried every sort of tmp.find_element() and tmp.find_elements() that I can think of. I expected to get the element that I described, but for some reason I can't seem to find it.
A:
The problem is in defining the proper path to the element. Instead of using By.ID, use an XPath, which gives a more precise path to the element. To define the XPath easily, you can use the SelectorHub Chrome extension.
This assumes the required element is not inside an iframe, frame, or shadow DOM; if it is, it has to be handled differently.
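A sketch of the XPath route in selenium-python, replacing the long chain of find_element calls with one explicit wait, written for the SeleniumDriver class from the question (the id comes from the question; the rest is standard Selenium API):
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

price_el = WebDriverWait(self.webdriver, 10).until(
    EC.visibility_of_element_located(
        (By.XPATH, "//*[@id='stock-quote-tablet-phone']")
    )
)
print(price_el.text)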
Q:
APS.NET MVC request routing using query parameter names
I'm trying to understand attribute routing in ASP.NET MVC. I understand how routing matches on url elements, but not query parameters.
For example, say I have a rest-style book lookup service that can match on title or ISBN. I want to be able to do something like GET /book?title=Middlemarch or GET /book?isbn=978-3-16-148410-0 to retrieve book details.
How do I specify [Route] attributes for this? I can write:
[HttpGet]
[Route("book/{title}")]
public async Task<IActionResult> LookupTitle(string title)
but as far as I can tell this also matches /book/Middlemarch and /book/978-3-16-148410-0. If I also have an ISBN lookup endpoint with [Route("book/{isbn}")] then the routing engine won't be able to disambiguate the two endpoints.
So how do I distinguish endpoints by query parameter name?
A:
The following Route() attribute will meet your requirements:
[HttpGet]
// /book?title=Middlemarch
// /book?isbn=978-3-16-148410-0
// /book?title=Middlemarch&isbn=978-3-16-148410-0
[Route("book/")]
public IActionResult LookupTitle(string isbn, string title)
{
if (isbn != null) { /* TODO */ }
if (title != null) { /* TODO */ }
return View();
}
Parameters that do not appear anywhere in the route template are bound from the query string, which is why this single route handles both lookups.
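In ASP.NET Core, the same binding can be made explicit with the [FromQuery] attribute; a sketch:
[HttpGet]
[Route("book/")]
public IActionResult Lookup([FromQuery] string isbn, [FromQuery] string title)
{
    // Each parameter is null when its query key is absent,
    // so the two lookup styles are distinguished by parameter name.
    if (isbn != null) { /* lookup by ISBN */ }
    if (title != null) { /* lookup by title */ }

    return View();
}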
Q:
Why is my Edit WCF Configuration option missing?
I created WCF RIA services. It added app.config by default. But there is no Edit WCF Configuration option like the one that appears when you create simple WCF services. What am I missing? How do I get that GUI tool? Do I have to write all that XML by hand and remember it for the next time?
Thanks in advance :)
A:
This is a known bug - at times, you have to select it from the Tools menu once and close it again right away, before it becomes available as a right-click context-menu option on your app.config file.
I would have hoped this would have been fixed in Visual Studio 2010 - but it's still there...
A:
There should be a link in the main Tools menu to the editor, or alternatively run it externally and open your app.config from it's menu.
A:
Open Visual Studio
Go to tools
Click on Edit WCF Service Configuration
A:
Choose Tools --> Edit WCF Service Configuration Editor
A:
1) Open Visual Studio command prompt & type SvcConfigEditor
OR
2)Navigate to C:\Program Files\Microsoft SDKs\Windows\v6.0\Bin\SvcConfigEditor.exe
A:
In VS 2022, I could not locate a way to Edit WCF configuration. Maybe I need to install more components of VS 2022? Anyway, I used the Tools->External Tools menu item in VS 2022 to define a way to open the WCF configuration myself.
Then hit the start menu, search for 'cmd' or 'Developer command prompt', and choose the version suitable for you (e.g. 2022). Now enter:
svcconfigeditor
This opens a new window with SvcConfigEditor and must succeed; otherwise the Svc config editor is not installed at all. It is the same tool that 'Edit WCF configuration' opens.
Next, type this command:
where svcconfigeditor
This should show you the path to the executable on your system.
Locating svcconfigeditor executable on your Windows system
Go back again to Tools->External Tools and hit the 'Add' button
Give these values (example, last two values must be set):
Title: 'Edit Wcf configuration'
Command:
C:\Program Files (x86)\Microsoft SDKs\Windows\v10.0A\bin\NETFX 4.8 Tools\SvcConfigEditor.exe
Arguments:
$(ItemFileName)$(ItemExt)
Initial dir:
$(ItemDir)
External Tools - creating a edit wcf configuration tool window
Now switch back to Solution Explorer and locate the project containing your web.config file.
Select the web.config file and choose:
Tools=>Edit Wcf configuration
Editing wcf configuration
Please note that I suspect the Edit WCF configuration tools are better with older VS versions, as WCF is getting replaced with newer technologies such as gRPC.
|
Why is my Edit WCF Configuration option missing?
|
I created WCF RIA services. It added app.config by default. But there is no option of Edit WCF Configuration that appears when you create simple WCF services. What am I missing? How do I get that GUI tool? Do I have to write all that XML and remember it for the next time I use it?
Thanks in advance :)
|
[
"This is a known bug - at times, you have to select it from the Tools menu once and close it again right away, before it becomes available as a right-click context-menu option on your app.config file.\nI would have hoped this would have been fixed in Visual Studio 2010 - but it's still there...\n",
"There should be a link in the main Tools menu to the editor, or alternatively run it externally and open your app.config from it's menu.\n",
"\nOpen Visual Studio\nGo to tools\nClick on Edit WCF Service Configuration \n\n",
"Choose Tools --> Edit WCF Service Configuration Editor\n",
"1) Open Visual Studio command prompt & type SvcConfigEditor\nOR\n2)Navigate to C:\\Program Files\\Microsoft SDKs\\Windows\\v6.0\\Bin\\SvcConfigEditor.exe\n",
"In VS 2022, I could not locate a way to Edit Wcf configuration. Maybe I need to install more components of VS 2022 ? Anyways, I used Tools->External Tools menu item in VS 2022 to define a way to open the Wcf configuration myself.\nThen hit start menu and search for 'cmd' or 'Developer command prompt' and choose the version suitable for you (e.g. 2022). Now enter :\nsvcconfigeditor\n\nThis opens a new window with svcconfigeditor and must succeed, or else you do not have installed Svc config editor anyways.. The tool which 'Edit wcf configuration' will open.\nNext type this command :\nwhere svcconfigeditor\n\nThis should show you the path to the executable on your system.\nLocating svcconfigeditor executable on your Windows system\nGo back again to Tools->Exernal Tools and hit 'Add' button\nGive these values (example, last two values must be set):\nTitle: 'Edit Wcf configuration'\nCommand:\nC:\\Program Files (x86)\\Microsoft SDKs\\Windows\\v10.0A\\bin\\NETFX 4.8 Tools\\SvcConfigEditor.exe\nArguments:\n$(ItemFileName)$(ItemExt)\nInitial dir:\n$(ItemDir)\nExternal Tools - creating a edit wcf configuration tool window\nNow - switch back to Solution explorer with the web.config file containing your web.config file.\nSelect the web.config file and choose :\nTools=>Edit Wcf configuration\nEditing wcf configuration\nPlease note that I suspect Edit WCF configuration tools are better with older VS versions as WCF is getting replaced with\n"
] |
[
23,
11,
5,
4,
0,
0
] |
[] |
[] |
[
"visual_studio_2010",
"wcf"
] |
stackoverflow_0004089507_visual_studio_2010_wcf.txt
|
Q:
Should * be escaped in bash when it is used inside double braces to perform multiplication?
I thought * should be escaped in bash when it is to be used in a meaning other than the universal character, for example I am trying to use * to multiply two numbers. But when I am trying to use * with an escape character I am getting an error.
echo "scale=2; 10 \* 3" | bc
EOF encountered in a comment.
(standard_in) 1: syntax error
but when I am not using the escape character it works.
echo "scale=2; 10 * 3" | bc
30
why is this? Can someone explain?
A:
You would use \ to escape the *, not /, but no, you do not need to escape it. bash is not doing the multiplication; it's simply writing a string to the standard input of bc.
The double quotes already escape * from being interpreted by the shell.
echo "scale=2; 10 * 3"
is equivalent to
echo scale=2\;\ 10\ \*\ 3
Regarding the error message in your first attempt, bc scripts allow C-style comments, so /* was interpreted as the start of a comment that was never completed before the end of the script.
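A quick way to check this yourself (a minimal sketch, not from the original answer): print the string instead of piping it, and compare quoting styles.
# The quotes already stop the shell from expanding *, so bc
# receives the identical string either way:
printf '%s\n' "scale=2; 10 * 3"   # scale=2; 10 * 3
echo "scale=2; 10 * 3" | bc       # 30
echo 'scale=2; 10 * 3' | bc       # 30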
A:
In bash, the * character does not need to be escaped when it is used inside double quotes for multiplication. Quoting already stops the shell from treating * as a wildcard, so it is passed to bc unchanged.
For example, the following command:
echo "scale=2; 10 * 3" | bc
Will output 30, because bc receives the * and treats it as a multiplication operator.
However, if you add an escape character, like this:
echo "scale=2; 10 \* 3" | bc
You will get an error, because inside double quotes the backslash is not consumed by the shell; it is passed through to bc, and \* is not valid bc syntax.
In general, it is not necessary to escape the * character when it is already protected by double quotes.
|
Should * be escaped in bash when it is used inside double braces to perform multiplication?
|
I thought * should be escaped in bash when it is to be used in a meaning other than the universal character, for example I am trying to use * to multiply two numbers. But when I am trying to use * with an escape character I am getting an error.
echo "scale=2; 10 \* 3" | bc
EOF encountered in a comment.
(standard_in) 1: syntax error
but when I am not using the escape character it works.
echo "scale=2; 10 * 3" | bc
30
why is this? Can someone explain?
|
[
"You would use \\ to escape the *, not /, but no, you do not need to escape it. bash is not doing the multiplication; it's simply writing a string to the standard input of bc.\nThe double quotes already escape * from being interpreted by the shell.\necho \"scale=2; 10 * 3\"\n\nis equivalent to\necho scale=2\\;\\ 10\\ \\*\\ 3\n\nRegarding the error message in your first attempt, bc scripts allow C-style comments, so /* was interpreted as the start of a comment that was never completed before the end of the script.\n",
"In bash, the * character does not need to be escaped when it is used inside double braces for multiplication. This is because the * character is not treated as a wildcard inside double braces.\nFor example, the following command:\necho \"scale=2; 10 * 3\" | bc\n\nWill output 30, because the * character is treated as a multiplication operator inside the double braces.\nHowever, if you try to use the * character with an escape character, like this:\nCopy code\necho \"scale=2; 10 \\* 3\" | bc\n\nYou will get an error, because the escape character \\ tells bash to treat the * character as a literal * and not as a multiplication operator. This causes a syntax error, because the * character is not a valid operator inside double braces.\nIn general, it is not necessary to escape the * character when using it inside double braces for multiplication. However, if you do want to use the * character as a literal * inside double braces, you can escape it with a backslash.\n"
] |
[
0,
0
] |
[] |
[] |
[
"bash",
"escaping"
] |
stackoverflow_0074659050_bash_escaping.txt
|
Q:
Countdown Timer Linking Design Parts
I have some HTML and JavaScript, but this error appears:
I'm building a timer. This is my code:
HTML
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="styles.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<title>Timer</title>
</head>
<body>
<div class="timer">
</div>
<script src="appCT.js" type="module"></script>
</body>
</html>
JavaScript File1
export default class Timer {
constructor(root){
console.log(root);
root.innerHTML = Timer.getHTML();
this.el = {
minutes: root.querySelector(".timer__part--minutes");
seconds: root.querySelector(".timer__part--seconds");
control: root.querySelector(".timer__btn--control");
reset: root.querySelector(".timer__btn--reset");
}
console.log(this.el);
}
static getHTML(){
return `
<span class="timer__part timer__part--minutes">00</span>
<span class="timer__part timer__part">:</span>
<span class="timer__part timer__part--seconds">00</span>
<button type="button" class="timer__btn timer__btn--control timer__btn--start">
<span class="material-icons">play_circle_filled</span>
</button>
<button type="button" class="timer__btn timer__btn--reset">
<span class="material-icons">timer</span>
</button>`
}
}
JavaScript File 2
import Timer from "Timer.js";
new Timer{
document.querySelector(".timer");
}
Css code
body{
background: #dddd;
margin: 24px;
}
.timer{
font-family: sans-serif;
display: inline-block;
padding: 24px 32px;
border-radius: 30px;
background: white;
}
.timer__part{
font-size: 40px;
font-weight: bold;
}
.timer__btn{
width: 50px;
height: 50px;
margin-left: 16px;
border-radius: 50%;
border: none;
background: gray;
}
.timer__btn--control{
background: steelblue;
}
.timer__btn--reset{
background: red;
}
The code should be able to display the timer, and each HTML element should be linked with its respective part in the design. I've tried changing the type attribute of my script element to text/javascript, but it has not helped.
Thank you
|
Countdown Timer Linking Design Parts
|
I have some HTML and JavaScript, but this error appears:
I'm building a timer. This is my code:
HTML
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link href="styles.css" rel="stylesheet">
<link href="https://fonts.googleapis.com/icon?family=Material+Icons" rel="stylesheet">
<title>Timer</title>
</head>
<body>
<div class="timer">
</div>
<script src="appCT.js" type="module"></script>
</body>
</html>
JavaScript File1
export default class Timer {
constructor(root){
console.log(root);
root.innerHTML = Timer.getHTML();
this.el = {
minutes: root.querySelector(".timer__part--minutes");
seconds: root.querySelector(".timer__part--seconds");
control: root.querySelector(".timer__btn--control");
reset: root.querySelector(".timer__btn--reset");
}
console.log(this.el);
}
static getHTML(){
return `
<span class="timer__part timer__part--minutes">00</span>
<span class="timer__part timer__part">:</span>
<span class="timer__part timer__part--seconds">00</span>
<button type="button" class="timer__btn timer__btn--control timer__btn--start">
<span class="material-icons">play_circle_filled</span>
</button>
<button type="button" class="timer__btn timer__btn--reset">
<span class="material-icons">timer</span>
</button>`
}
}
JavaScript File 2
import Timer from "Timer.js";
new Timer{
document.querySelector(".timer");
}
Css code
body{
background: #dddd;
margin: 24px;
}
.timer{
font-family: sans-serif;
display: inline-block;
padding: 24px 32px;
border-radius: 30px;
background: white;
}
.timer__part{
font-size: 40px;
font-weight: bold;
}
.timer__btn{
width: 50px;
height: 50px;
margin-left: 16px;
border-radius: 50%;
border: none;
background: gray;
}
.timer__btn--control{
background: steelblue;
}
.timer__btn--reset{
background: red;
}
The code should be able to display the timer, and each HTML element should be linked with its respective part in the design. I've tried changing the type attribute of my script element to text/javascript, but it has not helped.
Thank you
|
[] |
[] |
[
"Your browser will not redirect to such an address.\n1- I would recommend trying to work with localhost.\n2- VSCode Live Server\nAnd I found something maybe useful to you.\n\"Cross origin requests are only supported for HTTP.\" error when loading a local file\n"
] |
[
-1
] |
[
"html",
"javascript",
"timer"
] |
stackoverflow_0074658897_html_javascript_timer.txt
|
Q:
How to prevent msbuild from adding "Release" to Output Path?
msbuild outputs compiled artifacts to bin/Release/ and obj/Release when I've specified the output path as bin/. I'm trying to perform this build in CI so I'm using this command `
msbuild.exe /nologo /p:Configuration=Release
`
The .csproj file for this project contains this entry
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
<OutputPath>bin\</OutputPath>
<AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>
<AppendRuntimeIdentifierToOutputPath>false</AppendRuntimeIdentifierToOutputPath>
<DefineConstants>TRACE</DefineConstants>
<Optimize>true</Optimize>
<DebugType>pdbonly</DebugType>
<PlatformTarget>x64</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
<Prefer32Bit>true</Prefer32Bit>
</PropertyGroup>
I manually added the OutputPath.
After looking around SO, I tried adding the AppendTargetFrameworkToOutputPath and AppendRuntimeIdentifierToOutputPath keys but didn't seem to help. What option do I need to stop the "Release" directory from being appended to output path?
A:
In the project snippet provided, the PropertyGroup is conditional on the Configuration being Release and the Platform being x64. Out of the box, the default value of Platform in a C# project is 'AnyCPU'.
The following command line will set the Configuration and Platform appropriately for the PropertyGroup to be evaluated:
msbuild /nologo /p:Configuration=Release;Platform=x64
If you want to set the OutputPath regardless of the value of Platform but only when Configuration is Release, remove OutputPath from the existing PropertyGroup and create a new PropertyGroup as follows:
<PropertyGroup Condition="'$(Configuration)' == 'Release'">
<OutputPath>bin\</OutputPath>
</PropertyGroup>
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
<AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>
<AppendRuntimeIdentifierToOutputPath>false</AppendRuntimeIdentifierToOutputPath>
<DefineConstants>TRACE</DefineConstants>
<Optimize>true</Optimize>
<DebugType>pdbonly</DebugType>
<PlatformTarget>x64</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
<Prefer32Bit>true</Prefer32Bit>
</PropertyGroup>
To always change the OutputPath, regardless of Configuration and Platform, remove the Condition:
<PropertyGroup>
<OutputPath>bin\</OutputPath>
</PropertyGroup>
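As a further sketch (an assumption, not part of the original answer): if several projects in the solution should share this layout, the same properties can live in a Directory.Build.props file next to the solution, which MSBuild imports automatically:
<Project>
  <!-- Applies to every project under this directory -->
  <PropertyGroup Condition="'$(Configuration)' == 'Release'">
    <OutputPath>bin\</OutputPath>
    <AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>
  </PropertyGroup>
</Project>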
|
How to prevent msbuild from adding "Release" to Output Path?
|
msbuild outputs compiled artifacts to bin/Release/ and obj/Release when I've specified the output path as bin/. I'm trying to perform this build in CI so I'm using this command `
msbuild.exe /nologo /p:Configuration=Release
`
The .csproj file for this project contains this entry
<PropertyGroup Condition="'$(Configuration)|$(Platform)' == 'Release|x64'">
<OutputPath>bin\</OutputPath>
<AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>
<AppendRuntimeIdentifierToOutputPath>false</AppendRuntimeIdentifierToOutputPath>
<DefineConstants>TRACE</DefineConstants>
<Optimize>true</Optimize>
<DebugType>pdbonly</DebugType>
<PlatformTarget>x64</PlatformTarget>
<ErrorReport>prompt</ErrorReport>
<CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>
<Prefer32Bit>true</Prefer32Bit>
</PropertyGroup>
I manually added the OutputPath.
After looking around SO, I tried adding the AppendTargetFrameworkToOutputPath and AppendRuntimeIdentifierToOutputPath keys but didn't seem to help. What option do I need to stop the "Release" directory from being appended to output path?
|
[
"In the project snippet provided, the PropertyGroup is conditional on the Configuration being Release and the Platform being x64. Out of the box, the default value of Platform in a C# project is 'AnyCPU'.\nThe following command line will set the Configuration and Platform appropriately for the PropertyGroup to be evaluated:\nmsbuild /nologo /p:Configuration=Release;Platform=x64\n\nIf you want to set the OutputPath regardless of the value of Platform but only when Configuration is Release, remove OutputPath from the existing PropertyGroup and create a new PropertyGroup as follows:\n <PropertyGroup Condition=\"'$(Configuration)' == 'Release'\">\n <OutputPath>bin\\</OutputPath>\n </PropertyGroup>\n\n <PropertyGroup Condition=\"'$(Configuration)|$(Platform)' == 'Release|x64'\">\n <AppendTargetFrameworkToOutputPath>false</AppendTargetFrameworkToOutputPath>\n <AppendRuntimeIdentifierToOutputPath>false</AppendRuntimeIdentifierToOutputPath>\n <DefineConstants>TRACE</DefineConstants>\n <Optimize>true</Optimize>\n <DebugType>pdbonly</DebugType>\n <PlatformTarget>x64</PlatformTarget>\n <ErrorReport>prompt</ErrorReport>\n <CodeAnalysisRuleSet>MinimumRecommendedRules.ruleset</CodeAnalysisRuleSet>\n <Prefer32Bit>true</Prefer32Bit>\n </PropertyGroup>\n\nTo always change the OutputPath, regardless of Configuration and Platform, remove the Condition:\n <PropertyGroup>\n <OutputPath>bin\\</OutputPath>\n </PropertyGroup>\n\n"
] |
[
1
] |
[] |
[] |
[
"c#",
"msbuild"
] |
stackoverflow_0074648863_c#_msbuild.txt
|
Q:
How to click on specific pixel of screenshot in C#
I am trying to make a bot for a game by taking screenshots of it non-stop, scanning each screenshot for a specific pixel by checking its RGB color, and simulating a click.
how can I simulate a click on specific pixel?
Thank you
I already have the screenshot part done
A:
// Take the screenshot
Bitmap screenshot = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);
Graphics graphics = Graphics.FromImage(screenshot);
graphics.CopyFromScreen(0, 0, 0, 0, screenshot.Size);
// Scan the screenshot to find the desired pixel
Color pixelColor = screenshot.GetPixel(x, y);
// Check if the pixel has the desired color
if (pixelColor.R == r && pixelColor.G == g && pixelColor.B == b)
{
// Set the cursor's position to the desired pixel
System.Drawing.Point cursorPos = new System.Drawing.Point(x, y);
Cursor.Position = cursorPos;
// Simulate a mouse click
mouse_event(MOUSEEVENTF_LEFTDOWN, x, y, 0, 0);
mouse_event(MOUSEEVENTF_LEFTUP, x, y, 0, 0);
}
In this example, x and y are the coordinates of the pixel on the screenshot that you want to click on, and r, g, and b are the desired red, green, and blue values of the pixel, respectively. You can adjust these values to set the cursor's position to the desired pixel and check for the desired color.
Note: This example uses the GetPixel and mouse_event methods from the System.Drawing and user32.dll libraries, respectively, to simulate a mouse click. These methods are not part of the C# language, so you will need to add references to the System.Drawing and user32.dll libraries and include the System.Drawing and System.Runtime.InteropServices namespaces in your code in order to use them. You can find more information on how to do this in the Microsoft documentation.
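As a rough sketch of the interop declarations that note refers to (the flag values are the standard Win32 constants; treat this as an assumption to adapt, not part of the original answer):
using System.Runtime.InteropServices;

// Win32 flag values for mouse_event
const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
const uint MOUSEEVENTF_LEFTUP = 0x0004;

// P/Invoke declaration imported from user32.dll
[DllImport("user32.dll")]
static extern void mouse_event(uint dwFlags, uint dx, uint dy, uint dwData, uint dwExtraInfo);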
|
How to click on specific pixel of screenshot in C#
|
I am trying to make a bot for a game by taking screenshots of it non-stop, scanning each screenshot for a specific pixel by checking its RGB color, and simulating a click.
how can I simulate a click on specific pixel?
Thank you
I already have the screenshot part done
|
[
"// Take the screenshot\nBitmap screenshot = new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);\nGraphics graphics = Graphics.FromImage(screenshot);\ngraphics.CopyFromScreen(0, 0, 0, 0, screenshot.Size);\n\n// Scan the screenshot to find the desired pixel\nColor pixelColor = screenshot.GetPixel(x, y);\n\n// Check if the pixel has the desired color\nif (pixelColor.R == r && pixelColor.G == g && pixelColor.B == b)\n{\n // Set the cursor's position to the desired pixel\n System.Drawing.Point cursorPos = new System.Drawing.Point(x, y);\n Cursor.Position = cursorPos;\n\n // Simulate a mouse click\n mouse_event(MOUSEEVENTF_LEFTDOWN, x, y, 0, 0);\n mouse_event(MOUSEEVENTF_LEFTUP, x, y, 0, 0);\n}\n\nIn this example, x and y are the coordinates of the pixel on the screenshot that you want to click on, and r, g, and b are the desired red, green, and blue values of the pixel, respectively. You can adjust these values to set the cursor's position to the desired pixel and check for the desired color.\nNote: This example uses the GetPixel and mouse_event methods from the System.Drawing and user32.dll libraries, respectively, to simulate a mouse click. These methods are not part of the C# language, so you will need to add references to the System.Drawing and user32.dll libraries and include the System.Drawing and System.Runtime.InteropServices namespaces in your code in order to use them. You can find more information on how to do this in the Microsoft documentation.\n"
] |
[
0
] |
[] |
[] |
[
"bots",
"c#",
"click",
"pixel",
"screenshot"
] |
stackoverflow_0074658984_bots_c#_click_pixel_screenshot.txt
|
Q:
Going from a TensorArray to a Tensor
Given a TensorArray with a fixed size and entries with uniform shapes, I want to go to a Tensor containing the same values, simply by having the index dimension of the TensorArray as a regular axis.
TensorArrays have a method called "gather" which purportedly should do just that. And, in fact, the following example works:
array = tf.TensorArray(tf.int32, size=3)
array.write(0, 10)
array.write(1, 20)
array.write(2, 30)
gathered = array.gather([0, 1, 2])
"gathered" then yields the desired Tensor:
tf.Tensor([10 20 30], shape=(3,), dtype=int32)
Unfortunately, this stops working when wrapping it inside a tf.function, like so:
@tf.function
def func():
array = tf.TensorArray(tf.int32, size=3)
array.write(0, 10)
array.write(1, 20)
array.write(2, 30)
gathered = array.gather([0, 1, 2])
return gathered
tensor = func()
"tensor" then wrongly yields the following Tensor:
tf.Tensor([0 0 0], shape=(3,), dtype=int32)
Do you have an explanation for this, or can you suggest an alternative way to go from a TensorArray to a Tensor inside a tf.function?
A:
Per https://github.com/tensorflow/tensorflow/issues/30409#issuecomment-508962873 you have to:
Replace arr.write(j, t) with arr = arr.write(j, t)
The issue is that tf.function executes as a graph. In eager mode the array will be updated (as a convenience), but you're really meant to use the return value to chain operations: https://www.tensorflow.org/api_docs/python/tf/TensorArray#returns_6
A:
instead of array.gather(), try using array.stack(), it'll return a tensor from the TensorArray
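Putting both suggestions together, a minimal sketch of the fixed function:
import tensorflow as tf

@tf.function
def func():
    array = tf.TensorArray(tf.int32, size=3)
    # Chain the writes: each write returns the updated TensorArray
    array = array.write(0, 10)
    array = array.write(1, 20)
    array = array.write(2, 30)
    return array.stack()

tensor = func()  # tf.Tensor([10 20 30], shape=(3,), dtype=int32)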
|
Going from a TensorArray to a Tensor
|
Given a TensorArray with a fixed size and entries with uniform shapes, I want to go to a Tensor containing the same values, simply by having the index dimension of the TensorArray as a regular axis.
TensorArrays have a method called "gather" which purportedly should do just that. And, in fact, the following example works:
array = tf.TensorArray(tf.int32, size=3)
array.write(0, 10)
array.write(1, 20)
array.write(2, 30)
gathered = array.gather([0, 1, 2])
"gathered" then yields the desired Tensor:
tf.Tensor([10 20 30], shape=(3,), dtype=int32)
Unfortunately, this stops working when wrapping it inside a tf.function, like so:
@tf.function
def func():
array = tf.TensorArray(tf.int32, size=3)
array.write(0, 10)
array.write(1, 20)
array.write(2, 30)
gathered = array.gather([0, 1, 2])
return gathered
tensor = func()
"tensor" then wrongly yields the following Tensor:
tf.Tensor([0 0 0], shape=(3,), dtype=int32)
Do you have an explanation for this, or can you suggest an alternative way to go from a TensorArray to a Tensor inside a tf.function?
|
[
"Per https://github.com/tensorflow/tensorflow/issues/30409#issuecomment-508962873 you have to:\n\nReplace arr.write(j, t) with arr = arr.write(j, t)\nThe issue is that tf.function executes as a graph. In eager mode the array will be updated (as a convenience), but you're really meant to use the return value to chain operations: https://www.tensorflow.org/api_docs/python/tf/TensorArray#returns_6\n\n",
"instead of array.gather(), try using array.stack(), it'll return a tensor from the TensorArray\n"
] |
[
1,
0
] |
[] |
[] |
[
"python",
"tensorflow",
"tensorflow2.0"
] |
stackoverflow_0065889381_python_tensorflow_tensorflow2.0.txt
|
Q:
How to write dataframe to csv for max date rows only (filter for max date rows)?
How do I get the df.to_csv to write only rows with the max asOfDate? So that each symbol below will have only one row?
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT'] #There are 75,000 symbols here.
header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"]
for tick in symbols:
faang = Ticker(tick)
faang.balance_sheet(frequency='q')
df = faang.balance_sheet(frequency='q')
for column_name in header :
if column_name not in df.columns:
df.loc[:,column_name ] = None
#Here, if any column is missing from your header column names for a given "tick", we add this column and set all the valus to None
df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
A:
if asOfDate column has a date type or of it is a string with date in the format yyyy-mm-dd you can filter the dateframe for the rows you want to write
df[df.asOfDate == df.asOfDate.max()].to_csv('output.csv', mode='a', index=True, header=False, columns=header)
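If you instead combine all symbols into a single dataframe, a hedged variant (the 'symbol' column here is an assumption; it is not in the original code) keeps the latest row per symbol in one step:
# Keep only the row with the max asOfDate for each symbol
latest = df.loc[df.groupby('symbol')['asOfDate'].idxmax()]
latest.to_csv('output.csv', index=True, columns=header)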
|
How to write dataframe to csv for max date rows only (filter for max date rows)?
|
How do I get the df.to_csv to write only rows with the max asOfDate? So that each symbol below will have only one row?
import pandas as pd
from yahooquery import Ticker
symbols = ['AAPL','GOOG','MSFT'] #There are 75,000 symbols here.
header = ["asOfDate","CashAndCashEquivalents","CashFinancial","CurrentAssets","TangibleBookValue","CurrentLiabilities","TotalLiabilitiesNetMinorityInterest"]
for tick in symbols:
faang = Ticker(tick)
faang.balance_sheet(frequency='q')
df = faang.balance_sheet(frequency='q')
for column_name in header :
if column_name not in df.columns:
df.loc[:,column_name ] = None
#Here, if any column is missing from your header column names for a given "tick", we add this column and set all the valus to None
df.to_csv('output.csv', mode='a', index=True, header=False, columns=header)
|
[
"if asOfDate column has a date type or of it is a string with date in the format yyyy-mm-dd you can filter the dateframe for the rows you want to write\ndf[df.asOfDate == df.asOfDate.max()].to_csv('output.csv', mode='a', index=True, header=False, columns=header)\n\n"
] |
[
1
] |
[] |
[] |
[
"csv",
"dataframe",
"date",
"pandas",
"python"
] |
stackoverflow_0074658643_csv_dataframe_date_pandas_python.txt
|
Q:
Python: Plotting time delta
I have a DataFrame with a column of the time and a column in which I have stored a time lag. The data looks like this:
2020-04-18 14:00:00 0 days 03:00:00
2020-04-19 02:00:00 1 days 13:00:00
2020-04-28 14:00:00 1 days 17:00:00
2020-04-29 20:00:00 2 days 09:00:00
2020-04-30 19:00:00 2 days 11:00:00
Time, Length: 282, dtype: datetime64[ns] Average time lag, Length: 116, dtype: object
I want to plot the Time on the x-axis vs the time lag on the y-axis. However, I keep having errors with plotting the second column. Any tips on how to handle this data for the plot?
A:
In order to plot the time lag on the y-axis, you will need to convert the time lag from a timedelta object to a numerical value that can be used in the plot. One way to do this is to convert the time lag to seconds using the total_seconds method, and then plot the resulting values on the y-axis.
Here is an example of how you can do this:
import pandas as pd
import matplotlib.pyplot as plt
# Create a dataframe with the time and time lag data
data = [ ['2020-04-18 14:00:00', '0 days 03:00:00'],
['2020-04-19 02:00:00', '1 days 13:00:00'],
['2020-04-28 14:00:00', '1 days 17:00:00'],
['2020-04-29 20:00:00', '2 days 09:00:00'],
['2020-04-30 19:00:00', '2 days 11:00:00'],
]
df = pd.DataFrame(data, columns=['time', 'time_lag'])
# Convert the time and time lag columns to datetime and timedelta objects
df['time'] = pd.to_datetime(df['time'])
df['time_lag'] = pd.to_timedelta(df['time_lag'])
# Convert the time lag to seconds
df['time_lag_seconds'] = df['time_lag'].dt.total_seconds()
# Create a scatter plot with the time on the x-axis and the time lag in seconds on the y-axis
plt.scatter(df['time'], df['time_lag_seconds'])
plt.show()
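If seconds are hard to read for multi-day lags, the same values can be shown in days instead; this unit change is a small sketch on top of the answer above:
# Convert the lag to days for a more readable y-axis
df['time_lag_days'] = df['time_lag'].dt.total_seconds() / 86400
plt.scatter(df['time'], df['time_lag_days'])
plt.ylabel('time lag (days)')
plt.show()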
|
Python: Plotting time delta
|
I have a DataFrame with a column of the time and a column in which I have stored a time lag. The data looks like this:
2020-04-18 14:00:00 0 days 03:00:00
2020-04-19 02:00:00 1 days 13:00:00
2020-04-28 14:00:00 1 days 17:00:00
2020-04-29 20:00:00 2 days 09:00:00
2020-04-30 19:00:00 2 days 11:00:00
Time, Length: 282, dtype: datetime64[ns] Average time lag, Length: 116, dtype: object
I want to plot the Time on the x-axis vs the time lag on the y-axis. However, I keep having errors with plotting the second column. Any tips on how to handle this data for the plot?
|
[
"In order to plot the time lag on the y-axis, you will need to convert the time lag from a timedelta object to a numerical value that can be used in the plot. One way to do this is to convert the time lag to seconds using the total_seconds method, and then plot the resulting values on the y-axis.\nHere is an example of how you can do this:\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n# Create a dataframe with the time and time lag data\ndata = [ ['2020-04-18 14:00:00', '0 days 03:00:00'],\n ['2020-04-19 02:00:00', '1 days 13:00:00'],\n ['2020-04-28 14:00:00', '1 days 17:00:00'],\n ['2020-04-29 20:00:00', '2 days 09:00:00'],\n ['2020-04-30 19:00:00', '2 days 11:00:00'],\n]\ndf = pd.DataFrame(data, columns=['time', 'time_lag'])\n\n# Convert the time and time lag columns to datetime and timedelta objects\ndf['time'] = pd.to_datetime(df['time'])\ndf['time_lag'] = pd.to_timedelta(df['time_lag'])\n\n# Convert the time lag to seconds\ndf['time_lag_seconds'] = df['time_lag'].dt.total_seconds()\n\n# Create a scatter plot with the time on the x-axis and the time lag in seconds on the y-axis\nplt.scatter(df['time'], df['time_lag_seconds'])\nplt.show()\n\n"
] |
[
1
] |
[] |
[] |
[
"matplotlib",
"python",
"timedelta"
] |
stackoverflow_0074659070_matplotlib_python_timedelta.txt
|
Q:
groupby aggregate product in PyTorch
I have the same problem as groupby aggregate mean in pytorch. However, I want to create the product of my tensors inside each group (or labels). Unfortunately, I couldn't find a native PyTorch function that could solve my problem, like a hypothetical scatter_prod_ for products (equivalent to scatter_add_ for sums), which was the function used in one of the answers.
Recycling the example code from @elyase's question, consider the 2D tensor:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
with labels where it is true that len(samples) == len(labels)
labels = torch.LongTensor([1, 2, 2, 0])
So my expected output is:
res == torch.Tensor([
[0.0, 0.0],
[0.1, 0.1],
[0.8, 0.8] # -> PRODUCT of [0.2, 0.2] and [0.4, 0.4]
])
Here the question is, again, following @elyase's question, how can this be done in pure PyTorch (i.e. no numpy so that I can autograd) and ideally without for loops?
Crossposted in PyTorch forums.
A:
You can use the scatter_ function to calculate the product of the tensors in each group.
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
labels = torch.LongTensor([1,2,2,0])
label_size = 3
sample_dim = samples.size(1)
index = labels.unsqueeze(1).repeat((1, sample_dim))
res = torch.ones(label_size, sample_dim, dtype=samples.dtype)
res.scatter_(0, index, samples, reduce='multiply')
res:
tensor([[0.0000, 0.0000],
[0.1000, 0.1000],
[0.0800, 0.0800]])
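On newer PyTorch releases (1.12+), the dedicated scatter_reduce_ method does the same grouping with an explicit 'prod' reduction; a sketch under that version assumption:
res = torch.ones(label_size, sample_dim, dtype=samples.dtype)
# include_self=True multiplies into the initial ones, so empty groups stay at 1
res.scatter_reduce_(0, index, samples, reduce='prod', include_self=True)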
|
groupby aggregate product in PyTorch
|
I have the same problem as groupby aggregate mean in pytorch. However, I want to create the product of my tensors inside each group (or labels). Unfortunately, I couldn't find a native PyTorch function that could solve my problem, like a hypothetical scatter_prod_ for products (equivalent to scatter_add_ for sums), which was the function used in one of the answers.
Recycling the example code from @elyase's question, consider the 2D tensor:
samples = torch.Tensor([
[0.1, 0.1], #-> group / class 1
[0.2, 0.2], #-> group / class 2
[0.4, 0.4], #-> group / class 2
[0.0, 0.0] #-> group / class 0
])
with labels where it is true that len(samples) == len(labels)
labels = torch.LongTensor([1, 2, 2, 0])
So my expected output is:
res == torch.Tensor([
[0.0, 0.0],
[0.1, 0.1],
[0.8, 0.8] # -> PRODUCT of [0.2, 0.2] and [0.4, 0.4]
])
Here the question is, again, following @elyase's question, how can this be done in pure PyTorch (i.e. no numpy so that I can autograd) and ideally without for loops?
Crossposted in PyTorch forums.
|
[
"You can use the scatter_ function to calculate the product of the tensors in each group.\nsamples = torch.Tensor([\n [0.1, 0.1], #-> group / class 1\n [0.2, 0.2], #-> group / class 2\n [0.4, 0.4], #-> group / class 2\n [0.0, 0.0] #-> group / class 0\n])\n\nlabels = torch.LongTensor([1,2,2,0])\n\nlabel_size = 3\nsample_dim = samples.size(1)\n\nindex = labels.unsqueeze(1).repeat((1, sample_dim))\n\nres = torch.ones(label_size, sample_dim, dtype=samples.dtype)\nres.scatter_(0, index, samples, reduce='multiply')\n\nres:\ntensor([[0.0000, 0.0000],\n [0.1000, 0.1000],\n [0.0800, 0.0800]])\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"pytorch",
"tensor"
] |
stackoverflow_0074657919_python_pytorch_tensor.txt
|
Q:
sort by number of occurrence(count) in Javascript array
I am new to Jquery and Javascript. Can someone please help me with Jquery sorting based on number of occurrence(count) in array. I tried various sorting methods but none of them worked.
I have an array in Javascript which is
allTypesArray = ["4", "4","2", "2", "2", "6", "2", "6", "6"]
// here 2 is printed four times, 6 is printed thrice, and 4 is printed twice
I need output like this
newTypesArray = ["2","6","4"]
I tried
function array_count_values(e) {
var t = {}, n = "",
r = "";
var i = function (e) {
var t = typeof e;
t = t.toLowerCase();
if (t === "object") {
t = "array"
}
return t
};
var s = function (e) {
switch (typeof e) {
case "number":
if (Math.floor(e) !== e) {
return
};
case "string":
if (e in this && this.hasOwnProperty(e)) {
++this[e]
} else {
this[e] = 1
}
}
};
r = i(e);
if (r === "array") {
for (n in e) {
if (e.hasOwnProperty(n)) {
s.call(t, e[n])
}
}
}
return t
}
output is
{4: 2, 2: 6, 6:3}
A:
I don't think there's a direct solution in one step and of course it's not just a sort (a sort doesn't remove elements). A way to do this would be to build an intermediary map of objects to store the counts :
var allTypesArray = ["4", "4","2", "2", "2", "6", "2", "6", "6"];
var s = allTypesArray.reduce(function(m,v){
m[v] = (m[v]||0)+1; return m;
}, {}); // builds {2: 4, 4: 2, 6: 3}
var a = [];
for (k in s) a.push({k:k,n:s[k]});
// now we have [{"k":"2","n":4},{"k":"4","n":2},{"k":"6","n":3}]
a.sort(function(a,b){ return b.n-a.n });
a = a.map(function(a) { return a.k });
Note that you don't need jQuery here. When you don't manipulate the DOM, you rarely need it.
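For reference, a compact modern variant of the same counting idea (a sketch using a Map, not part of the original answer):
const allTypesArray = ["4", "4", "2", "2", "2", "6", "2", "6", "6"];
const counts = new Map();
for (const v of allTypesArray) counts.set(v, (counts.get(v) ?? 0) + 1);
const newTypesArray = [...counts.keys()].sort((a, b) => counts.get(b) - counts.get(a));
console.log(newTypesArray); // ["2", "6", "4"]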
A:
Just adding my idea as well (a bit too late)
var allTypesArray = ["4", "4", "2", "2", "2", "6", "2", "6", "6"];
var map = allTypesArray.reduce(function(p, c) {
p[c] = (p[c] || 0) + 1;
return p;
}, {});
var newTypesArray = Object.keys(map).sort(function(a, b) {
return map[b] - map[a];
});
console.log(newTypesArray)
A:
I don't think jquery is needed here.
There are several great answers to this question already, but I have found reliability to be an issue in some browsers (namely Safari 10 -- though there could be others).
A somewhat ugly, but seemingly reliable, way to solve this is as follows:
function uniqueCountPreserve(inputArray){
//Sorts the input array by the number of time
//each element appears (largest to smallest)
//Count the number of times each item
//in the array occurs and save the counts to an object
var arrayItemCounts = {};
for (var i in inputArray){
if (!(arrayItemCounts.hasOwnProperty(inputArray[i]))){
arrayItemCounts[inputArray[i]] = 1
} else {
arrayItemCounts[inputArray[i]] += 1
}
}
//Sort the keys by value (smallest to largest)
//please see Markus R's answer at: http://stackoverflow.com/a/16794116/4898004
var keysByCount = Object.keys(arrayItemCounts).sort(function(a, b){
return arrayItemCounts[a]-arrayItemCounts[b];
});
//Reverse the Array and Return
return(keysByCount.reverse())
}
Test
uniqueCountPreserve(allTypesArray)
//["2", "6", "4"]
A:
This is the function I use to do this kind of stuff:
function orderArr(obj){
const tagsArr = Object.keys(obj)
const countArr = Object.values(obj).sort((a,b)=> b-a)
const orderedArr = []
countArr.forEach((count)=>{
tagsArr.forEach((tag)=>{
if(obj[tag] == count && !orderedArr.includes(tag)){
orderedArr.push(tag)
}
})
})
return orderedArr
}
A:
const allTypesArray = ["4", "4","2", "2", "2", "6", "2", "6", "6"]
const singles = [...new Set(allTypesArray)]
const sortedSingles = singles.sort((a,b) => a - b)
console.log(sortedSingles)
Set objects are collections of values. A value in the Set may only occur once; it is unique in the Set's collection.
The singles variable spreads all of the unique values from allTypesArray using the Set object with the spread operator inside of an array.
The sortedSingles variable sorts the values of the singles array in ascending order by comparing the numbers.
A:
Not sure if there's enough neat answers here, this is what I came up with:
Fill an object with counts for each of the elements:
let array = ['4', '4', '2', '2', '2', '6', '2', '6', '6'];
let arrayCounts = {}
for (j in array) arrayCounts[array[j]] ? arrayCounts[array[j]].count++ : arrayCounts[array[j]] = { val: array[j], count: 1 };
/* arrayCounts = {
  '4': { val: '4', count: 2 },
  '2': { val: '2', count: 4 },
  '6': { val: '6', count: 3 }
} */
For the values in that new object, sort them by .count, and map() them into a new array (with just the values):
let sortedArray = Object.values(arrayCounts).sort(function(a,b) { return b.count - a.count }).map(({ val }) => val);
/* sortedArray = [ '2', '6', '4' ] */
Altogether:
let arrayCounts = {}
for (j in array) arrayCounts[array[j]] ? arrayCounts[array[j]].count++ : arrayCounts[array[j]] = { val: array[j], count: 1 };
let sortedArray = Object.values(arrayCounts)
.sort(function(a,b) { return b.count - a.count })
    .map(({ val }) => val);
A:
var number = [22,44,55,11,33,99,77,88];
for (var i = 0; i < number.length; i++) {
    for (var j = 0; j < number.length - 1; j++) {
        if (number[j] > number[j + 1]) {
            var primary = number[j];
            number[j] = number[j + 1];
            number[j + 1] = primary;
        }
    }
}
document.write(number);
|
sort by number of occurrence(count) in Javascript array
|
I am new to Jquery and Javascript. Can someone please help me with Jquery sorting based on number of occurrence(count) in array. I tried various sorting methods but none of them worked.
I have an array in Javascript which is
allTypesArray = ["4", "4","2", "2", "2", "6", "2", "6", "6"]
// here 2 is printed four times, 6 is printed thrice, and 4 is printed twice
I need output like this
newTypesArray = ["2","6","4"]
I tried
function array_count_values(e) {
var t = {}, n = "",
r = "";
var i = function (e) {
var t = typeof e;
t = t.toLowerCase();
if (t === "object") {
t = "array"
}
return t
};
var s = function (e) {
switch (typeof e) {
case "number":
if (Math.floor(e) !== e) {
return
};
case "string":
if (e in this && this.hasOwnProperty(e)) {
++this[e]
} else {
this[e] = 1
}
}
};
r = i(e);
if (r === "array") {
for (n in e) {
if (e.hasOwnProperty(n)) {
s.call(t, e[n])
}
}
}
return t
}
output is
{4: 2, 2: 6, 6:3}
|
[
"I don't think there's a direct solution in one step and of course it's not just a sort (a sort doesn't remove elements). A way to do this would be to build an intermediary map of objects to store the counts :\nvar allTypesArray = [\"4\", \"4\",\"2\", \"2\", \"2\", \"6\", \"2\", \"6\", \"6\"];\nvar s = allTypesArray.reduce(function(m,v){\n m[v] = (m[v]||0)+1; return m;\n}, {}); // builds {2: 4, 4: 2, 6: 3} \nvar a = [];\nfor (k in s) a.push({k:k,n:s[k]});\n// now we have [{\"k\":\"2\",\"n\":4},{\"k\":\"4\",\"n\":2},{\"k\":\"6\",\"n\":3}] \na.sort(function(a,b){ return b.n-a.n });\na = a.map(function(a) { return a.k });\n\nNote that you don't need jQuery here. When you don't manipulate the DOM, you rarely need it.\n",
"Just adding my idea as well (a bit too late)\n\n\nvar allTypesArray = [\"4\", \"4\", \"2\", \"2\", \"2\", \"6\", \"2\", \"6\", \"6\"];\r\nvar map = allTypesArray.reduce(function(p, c) {\r\n p[c] = (p[c] || 0) + 1;\r\n return p;\r\n}, {});\r\n\r\nvar newTypesArray = Object.keys(map).sort(function(a, b) {\r\n return map[b] - map[a];\r\n});\r\n\r\nconsole.log(newTypesArray)\n\n\n\n",
"I don't think jquery is needed here.\nThere are several great answers to this question already, but I have found reliability to be an issue in some browsers (namely Safari 10 -- though there could be others).\nA somewhat ugly, but seemingly reliable, way to solve this is as follows:\nfunction uniqueCountPreserve(inputArray){\n //Sorts the input array by the number of time\n //each element appears (largest to smallest)\n\n //Count the number of times each item\n //in the array occurs and save the counts to an object\n var arrayItemCounts = {};\n for (var i in inputArray){\n if (!(arrayItemCounts.hasOwnProperty(inputArray[i]))){\n arrayItemCounts[inputArray[i]] = 1\n } else {\n arrayItemCounts[inputArray[i]] += 1\n }\n }\n\n //Sort the keys by value (smallest to largest)\n //please see Markus R's answer at: http://stackoverflow.com/a/16794116/4898004\n var keysByCount = Object.keys(arrayItemCounts).sort(function(a, b){\n return arrayItemCounts[a]-arrayItemCounts[b];\n });\n\n //Reverse the Array and Return\n return(keysByCount.reverse())\n}\n\n\nTest\nuniqueCountPreserve(allTypesArray)\n//[\"2\", \"6\", \"4\"]\n\n",
"This is the function i use to do this kind of stuff:\nfunction orderArr(obj){\n const tagsArr = Object.keys(obj)\n const countArr = Object.values(obj).sort((a,b)=> b-a)\n const orderedArr = []\n countArr.forEach((count)=>{\n tagsArr.forEach((tag)=>{\n if(obj[tag] == count && !orderedArr.includes(tag)){\n orderedArr.push(tag)\n }\n })\n })\n return orderedArr\n}\n\n",
"const allTypesArray = [\"4\", \"4\",\"2\", \"2\", \"2\", \"6\", \"2\", \"6\", \"6\"]\n\nconst singles = [...new Set(allTypesArray)]\nconst sortedSingles = singles.sort((a,b) => a - b)\nconsole.log(sortedSingles)\n\nSet objects are collections of values. A value in the Set may only occur once; it is unique in the Set's collection.\nThe singles variable spreads all of the unique values from allTypesArray using the Set object with the spread operator inside of an array.\nThe sortedSingles variable sorts the values of the singles array in ascending order by comparing the numbers.\n",
"Not sure if there's enough neat answers here, this is what I came up with:\nFill an object with counts for each of the elements:\nlet array = ['4', '4', '2', '2', '2', '6', '2', '6', '6'];\nlet arrayCounts = {}\n\nfor (j in array) arrayCounts[array[j]] ? arrayCounts[array[j]].count++ : arrayCounts[array[j]] = { val: array[j], count: 1 };\n\n/* arrayCounts = {\n '2': { val: '2', count: 4 },\n '6': { val: '4', count: 2 },\n '4': { val: '6', count: 3 }\n} */\n\nFor the values in that new object, sort them by .count, and map() them into a new array (with just the values):\nlet sortedArray = Object.values(arrayCounts).sort(function(a,b) { return b.count - a.count }).map(({ val }) => val);\n\n/* sortedArray = [ '2', '6', '4' ] */\n\nAltogether:\nlet arrayCounts = {}\n\nfor (j in array) arrayCounts[array[j]] ? arrayCounts[array[j]].count++ : arrayCounts[array[j]] = { val: array[j], count: 1 };\n \nlet sortedArray = Object.values(arrayCounts)\n .sort(function(a,b) { return b.count - a.count })\n .map(({ val }); => val);\n\n",
" var number = [22,44,55,11,33,99,77,88];\n\n for(var i = 0;i<number.length;i++) {\n for(var j=0;j<number.length;j++){\n if(number[j]>number[j+1]){\n var primary =number[j];\n number[j] = number[j+1];\n number[j+1] =primary;\n }\n }\n }document.write(number);\n\n"
] |
[
7,
6,
2,
0,
0,
0,
0
] |
[] |
[] |
[
"arrays",
"javascript",
"jquery",
"sorting"
] |
stackoverflow_0022010520_arrays_javascript_jquery_sorting.txt
|
Q:
How to reclassify a column in r?
I have this dataframe:
Name <- c("Jon", "Bill", "Maria", "Ben", "Tina")
Age <- c(-23, -41, -32, -58, -100)
df <- data.frame(Name, Age)
I want to add a column with
values between 0 and 10 0-10
values between 10 and 20 10-20
.....
values between 90 and 100 90-100
Desired output:
Name Age class
Jon 23 -20-30
Bill 41 -40-50
Maria 32 -30-40
Ben 58 -50-60
Tina 26 -20-30
A:
We can use cut for finding intervals (findInterval also works) and sub for text formatting
> df$class <- sub("\\((\\d+),(\\d+)\\]", "\\1-\\2", cut(df$Age, seq(0,100,10)))
> df
Name Age class
1 Jon 23 20-30
2 Bill 41 40-50
3 Maria 32 30-40
4 Ben 58 50-60
5 Tina 26 20-30
Using your updated data
> df$class <- sub("[([](-*\\d+),(-*\\d+)\\]", "\\1-\\2", cut(df$Age, seq(-100,100,10), include.lowest = TRUE))
> df
Name Age class
1 Jon -23 -30--20
2 Bill -41 -50--40
3 Maria -32 -40--30
4 Ben -58 -60--50
5 Tina -100 -100--90
Note you need an extra - for negative values in the end of the interval
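Alternatively (a sketch, not part of the original answer), cut() can produce the labels directly, avoiding the sub() step:
breaks <- seq(-100, 100, 10)
labs <- paste(head(breaks, -1), tail(breaks, -1), sep = "-")
df$class <- cut(df$Age, breaks, labels = labs, include.lowest = TRUE)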
|
How to reclassify a column in r?
|
I have this dataframe:
Name <- c("Jon", "Bill", "Maria", "Ben", "Tina")
Age <- c(-23, -41, -32, -58, -100)
df <- data.frame(Name, Age)
I want to add a column with
values between 0 and 10 0-10
values between 10 and 20 10-20
.....
values between 90 and 100 90-100
Desired output:
Name Age class
Jon 23 -20-30
Bill 41 -40-50
Maria 32 -30-40
Ben 58 -50-60
Tina 26 -20-30
|
[
"We can use cut for finding intervals (findInterval also works) and sub for text formatting\n> df$class <- sub(\"\\\\((\\\\d+),(\\\\d+)\\\\]\", \"\\\\1-\\\\2\", cut(df$Age, seq(0,100,10)))\n> df\n Name Age class\n1 Jon 23 20-30\n2 Bill 41 40-50\n3 Maria 32 30-40\n4 Ben 58 50-60\n5 Tina 26 20-30\n\nUsing your updated data\n> df$class <- sub(\"[([](-*\\\\d+),(-*\\\\d+)\\\\]\", \"\\\\1-\\\\2\", cut(df$Age, seq(-100,100,10), include.lowest = TRUE))\n> df\n Name Age class\n1 Jon -23 -30--20\n2 Bill -41 -50--40\n3 Maria -32 -40--30\n4 Ben -58 -60--50\n5 Tina -100 -100--90\n\nNote you need an extra - for negative values in the end of the interval\n"
] |
[
2
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074659061_r.txt
|
Q:
Create list with NavigationLink items in SwiftUI where each NavigationLink again contains a list of items which is individual
[Pic 1 AS IS]
[Pic 2 TO BE]
Hi there,
I am just starting to learn Swift and I would like my app users to build their own list of items (first level) where each item again contains a list of items (second level). Importantly, each of the individually created lists in the second level should be independent of all the other individually created lists. (see picture)
Is anyone aware of which approach I need to take to solve this?
I am myself able to build the list within the list within the NavigationView, but how can I make each list individual?
Here is my code:
struct ItemModel: Hashable {
let name: String
}
struct ProductModel: Hashable {
let productname: String
}
class ListViewModel: ObservableObject {
@Published var items: [ItemModel] = []
}
class ProductlistViewModel: ObservableObject {
@Published var products: [ProductModel] = []
}
struct ContentView: View {
@StateObject private var vm = ListViewModel()
@StateObject private var pvm = ProductlistViewModel()
@State var firstPlusButtonPressed: Bool = false
@State var secondPlusButtonPressed: Bool = false
var body: some View {
NavigationView {
List {
ForEach(vm.items, id: \.self) { item in
NavigationLink {
DetailView() //The DetailView below
.navigationTitle(item.name)
.navigationBarItems(
trailing:
Button(action: {
secondPlusButtonPressed.toggle()
}, label: {
NavigationLink {
AddProduct() //AddProduct below
} label: {
Image(systemName: "plus")
}
})
)
} label: {
Text(item.name)
}
}
}
.navigationBarItems(
trailing:
Button(action: { firstPlusButtonPressed.toggle()
}, label: {
NavigationLink {
AddItem() //AddItem below
} label: { Image(systemName: "plus")
}
})
)
}
.environmentObject(vm)
.environmentObject(pvm)
}
}
struct AddItem: View {
@State var textFieldText: String = ""
@Environment(\.presentationMode) var presentationMode
@EnvironmentObject var vm: ListViewModel
var body: some View {
NavigationView {
VStack {
TextField("Add an item...", text: $textFieldText)
Button(action: {
vm.addItem(text: textFieldText)
presentationMode.wrappedValue.dismiss()
}, label: {
Text("SAVE")
})
}
}
}
}
struct DetailView: View {
@StateObject private var pvm = ProductlistViewModel()
@Environment(\.editMode) var editMode
var body: some View {
NavigationView {
List {
ForEach(pvm.products, id: \.self) { product in
Text(product.productname)
}
}
}
.environmentObject(pvm)
}
}
struct AddProduct: View {
@State var textFieldText: String = ""
@Environment(\.presentationMode) var presentationMode
@EnvironmentObject var pvm: ProductlistViewModel
var body: some View {
NavigationView {
VStack {
TextField("Add a product", text: $textFieldText)
Button(action: {
pvm.addProduct(text: textFieldText)
presentationMode.wrappedValue.dismiss()
}, label: {
Text("SAVE")
})
}
}
}
}
A:
This is going to be long but here it goes. The issue is the whole ViewModel setup. Your detail view is only using the product view model right now; you need to rethink your approach.
But what makes the whole thing "complicated" is the 2 different types, Item and Product, which you seem to want to combine into one list and use the same subviews for them both.
In Swift you have protocols that allow exactly this; structs and classes provide the required "conformance".
//Protocols are needed so you can use reuse views
protocol ObjectModelProtocol: Hashable, Identifiable{
var id: UUID {get}
var name: String {get set}
init(name: String)
}
//Protocols are needed so you can use reuse views
protocol ListModelProtocol: Hashable, Identifiable{
associatedtype O : ObjectModelProtocol
var id: UUID {get}
var name: String {get set}
//Keep the individual items with the list
var items: [O] {get set}
init(name: String, items: [O])
}
extension ListModelProtocol{
mutating func addItem(name: String) {
items.append(O(name: name))
}
}
Then your models start looking something like this. Notice the conformance to the protocols.
//Replaces the ListViewModel
struct ListItemModel: ListModelProtocol{
let id: UUID
var name: String
var items: [ItemModel]
init(name: String, items: [ItemModel]){
self.id = .init()
self.name = name
self.items = items
}
}
//Replaces the ProductlistViewModel
struct ListProductModel: ListModelProtocol{
let id: UUID
var name: String
var items: [ProductModel]
init(name: String, items: [ProductModel]){
self.id = .init()
self.name = name
self.items = items
}
}
//Uniform objects, can be specialized but start uniform
struct ItemModel: ObjectModelProtocol {
let id: UUID
var name: String
init(name: String){
self.id = .init()
self.name = name
}
}
//Uniform objects, can be specialized but start uniform
struct ProductModel: ObjectModelProtocol {
let id: UUID
var name: String
init(name: String){
self.id = .init()
self.name = name
}
}
class ModelStore: ObservableObject{
@Published var items: [ListItemModel] = [ListItemModel(name: "fruits", items: [.init(name: "peach"), .init(name: "banana"), .init(name: "apple")])]
@Published var products: [ListProductModel] = [ListProductModel(name: "vegetable", items: [.init(name: "tomatoes"), .init(name: "paprika"), .init(name: "zucchini")])]
}
Now your views can look something like this
struct ComboView: View {
@StateObject private var store = ModelStore()
@State var firstPlusButtonPressed: Bool = false
@State var secondPlusButtonPressed: Bool = false
var body: some View {
NavigationView {
List {
//The next part will address this
ItemLoop(list: $store.items)
ItemLoop(list: $store.products)
}
.toolbar(content: {
ToolbarItem {
AddList(store: store)
}
})
}
}
}
struct ItemLoop<LM: ListModelProtocol>: View {
@Binding var list: [LM]
var body: some View{
ForEach($list, id: \.id) { $item in
NavigationLink {
DetailView<LM>(itemList: $item)
.navigationTitle(item.name)
.toolbar {
NavigationLink {
AddItem<LM>( item: $item)
} label: {
Image(systemName: "plus")
}
}
} label: {
Text(item.name)
}
}
}
}
struct AddList: View {
@Environment(\.presentationMode) var presentationMode
@ObservedObject var store: ModelStore
var body: some View {
Menu {
Button("add item"){
store.items.append(ListItemModel(name: "new item", items: []))
}
Button("add product"){
store.products.append(ListProductModel(name: "new product", items: []))
}
} label: {
Image(systemName: "plus")
}
}
}
struct AddItem<LM>: View where LM : ListModelProtocol {
@State var textFieldText: String = ""
@Environment(\.presentationMode) var presentationMode
@Binding var item: LM
var body: some View {
VStack {
TextField("Add an item...", text: $textFieldText)
Button(action: {
item.addItem(name: textFieldText)
presentationMode.wrappedValue.dismiss()
}, label: {
Text("SAVE")
})
}
}
}
struct DetailView<LM>: View where LM : ListModelProtocol{
@Environment(\.editMode) var editMode
@Binding var itemList: LM
var body: some View {
VStack{
TextField("name", text: $itemList.name)
.textFieldStyle(.roundedBorder)
List (itemList.items, id:\.id) { item in
Text(item.name)
}
}
.navigationTitle(itemList.name)
.toolbar {
NavigationLink {
AddItem(item: $itemList)
} label: {
Image(systemName: "plus")
}
}
}
}
If you look at the List in the ComboView you will notice that the items and products are separated into 2 loops. That is because SwiftUI requires concrete types for most views, view modifiers and wrappers.
You can have a list of [any ListModelProtocol] but at some point you will have to convert from an existential to a concrete type. In your case the ForEach in the DetailView requires a concrete type.
class ModelStore: ObservableObject{
@Published var both: [any ListModelProtocol] = [
ListProductModel(name: "vegetable", items: [.init(name: "tomatoes"), .init(name: "paprika"), .init(name: "zucchini")]),
ListItemModel(name: "fruits", items: [.init(name: "peach"), .init(name: "banana"), .init(name: "apple")])
]
}
struct ComboView: View {
@StateObject private var store = ModelStore()
@State var firstPlusButtonPressed: Bool = false
@State var secondPlusButtonPressed: Bool = false
var body: some View {
NavigationView {
List {
ConcreteItemLoop(list: $store.both)
}
.toolbar(content: {
ToolbarItem {
AddList(store: store)
}
})
}
}
}
struct ConcreteItemLoop: View {
@Binding var list: [any ListModelProtocol]
var body: some View{
ForEach($list, id: \.id) { $item in
NavigationLink {
if let concrete: Binding<ListItemModel> = getConcrete(existential: $item){
DetailView(itemList: concrete)
} else if let concrete: Binding<ListProductModel> = getConcrete(existential: $item){
DetailView(itemList: concrete)
}else{
Text("unknown type")
}
} label: {
Text(item.name)
}
}
}
func getConcrete<T>(existential: Binding<any ListModelProtocol>) -> Binding<T>? where T : ListModelProtocol{
if existential.wrappedValue is T{
return Binding {
existential.wrappedValue as! T
} set: { newValue in
existential.wrappedValue = newValue
}
}else{
return nil
}
}
}
struct AddList: View {
@Environment(\.presentationMode) var presentationMode
@ObservedObject var store: ModelStore
var body: some View {
Menu {
Button("add item"){
store.both.append(ListItemModel(name: "new item", items: []))
}
Button("add product"){
store.both.append(ListProductModel(name: "new product", items: []))
}
} label: {
Image(systemName: "plus")
}
}
}
I know it's long, but this all compiles, so you should be able to put it in a project and dissect it.
Also, at the end of all of this you can create specific views for the model type.
struct DetailView<LM>: View where LM : ListModelProtocol{
@Environment(\.editMode) var editMode
@Binding var itemList: LM
var body: some View {
VStack{
TextField("name", text: $itemList.name)
.textFieldStyle(.roundedBorder)
List (itemList.items, id:\.id) { item in
VStack{
switch item{
case let i as ItemModel:
ItemModelView(item: i)
case let p as ProductModel:
Text("\(p.name) is product")
default:
Text("\(item.name) is unknown")
}
}
}
}
.navigationTitle(itemList.name)
.toolbar {
NavigationLink {
AddItem(item: $itemList)
} label: {
Image(systemName: "plus")
}
}
}
}
struct ItemModelView: View{
let item: ItemModel
var body: some View{
VStack{
Text("\(item.name) is item")
Image(systemName: "person")
}
}
}
|
Create list with NavigationLink items in SwiftUI where each NavigationLink again contains a list of items which is individual
|
[Pic 1 AS IS]
[Pic 2 TO BE]
Hi there,
I am just starting to learn Swift and I would like my app users to build their own list of items (first level) where each item again contains a list of items (second level). Importantly, each of the individually created lists in the second level is unlike any other of the individually created lists (see picture).
Does anyone know which approach I need to take to solve this?
I am able to build the list within the list within the NavigationView myself, but how can I make each list individual?
Here is my code:
struct ItemModel: Hashable {
let name: String
}
struct ProductModel: Hashable {
let productname: String
}
class ListViewModel: ObservableObject {
@Published var items: [ItemModel] = []
}
class ProductlistViewModel: ObservableObject {
@Published var products: [ProductModel] = []
}
struct ContentView: View {
@StateObject private var vm = ListViewModel()
@StateObject private var pvm = ProductlistViewModel()
@State var firstPlusButtonPressed: Bool = false
@State var secondPlusButtonPressed: Bool = false
var body: some View {
NavigationView {
List {
ForEach(vm.items, id: \.self) { item in
NavigationLink {
DetailView() //The DetailView below
.navigationTitle(item.name)
.navigationBarItems(
trailing:
Button(action: {
secondPlusButtonPressed.toggle()
}, label: {
NavigationLink {
AddProduct() //AddProduct below
} label: {
Image(systemName: "plus")
}
})
)
} label: {
Text(item.name)
}
}
}
.navigationBarItems(
trailing:
Button(action: { firstPlusButtonPressed.toggle()
}, label: {
NavigationLink {
AddItem() //AddItem below
} label: { Image(systemName: "plus")
}
})
)
}
.environmentObject(vm)
.environmentObject(pvm)
}
}
struct AddItem: View {
@State var textFieldText: String = ""
@Environment(\.presentationMode) var presentationMode
@EnvironmentObject var vm: ListViewModel
var body: some View {
NavigationView {
VStack {
TextField("Add an item...", text: $textFieldText)
Button(action: {
vm.addItem(text: textFieldText)
presentationMode.wrappedValue.dismiss()
}, label: {
Text("SAVE")
})
}
}
}
}
struct DetailView: View {
@StateObject private var pvm = ProductlistViewModel()
@Environment(\.editMode) var editMode
var body: some View {
NavigationView {
List {
ForEach(pvm.products, id: \.self) { product in
Text(product.productname)
}
}
}
.environmentObject(pvm)
}
}
struct AddProduct: View {
@State var textFieldText: String = ""
@Environment(\.presentationMode) var presentationMode
@EnvironmentObject var pvm: ProductlistViewModel
var body: some View {
NavigationView {
VStack {
TextField("Add a product", text: $textFieldText)
Button(action: {
pvm.addProduct(text: textFieldText)
presentationMode.wrappedValue.dismiss()
}, label: {
Text("SAVE")
})
}
}
}
}
|
[
"This is going to be long but here it goes. The issue is the whole ViewModel setup. You detail view now is only using the product view model, you need to rethink your approach.\nBut what makes the whole thing \"complicated\" is the 2 different types, Item and Product which you seem to want to combine into one list and use the same subviews for them both.\nIn swift you have protocol that allows this, protocols require struct and class \"conformance\".\n//Protocols are needed so you can use reuse views\nprotocol ObjectModelProtocol: Hashable, Identifiable{\n var id: UUID {get}\n var name: String {get set}\n init(name: String)\n}\n//Protocols are needed so you can use reuse views\nprotocol ListModelProtocol: Hashable, Identifiable{\n associatedtype O : ObjectModelProtocol\n var id: UUID {get}\n var name: String {get set}\n //Keep the individual items with the list \n var items: [O] {get set}\n init(name: String, items: [O])\n}\nextension ListModelProtocol{\n mutating func addItem(name: String) {\n items.append(O(name: name))\n }\n}\n\nThen your models start looking something like this. Notice the conformance to the protocols.\n//Replaces the ListViewModel\nstruct ListItemModel: ListModelProtocol{\n let id: UUID\n var name: String\n var items: [ItemModel]\n \n init(name: String, items: [ItemModel]){\n self.id = .init()\n self.name = name\n self.items = items\n }\n}\n//Replaces the ProductlistViewModel\nstruct ListProductModel: ListModelProtocol{\n let id: UUID\n var name: String\n var items: [ProductModel]\n init(name: String, items: [ProductModel]){\n self.id = .init()\n self.name = name\n self.items = items\n }\n}\n//Uniform objects, can be specialized but start uniform\nstruct ItemModel: ObjectModelProtocol {\n let id: UUID\n var name: String\n init(name: String){\n self.id = .init()\n self.name = name\n }\n}\n//Uniform objects, can be specialized but start uniform\nstruct ProductModel: ObjectModelProtocol {\n let id: UUID\n var name: String\n init(name: String){\n self.id = .init()\n self.name = name\n }\n}\nclass ModelStore: ObservableObject{\n @Published var items: [ListItemModel] = [ListItemModel(name: \"fruits\", items: [.init(name: \"peach\"), .init(name: \"banana\"), .init(name: \"apple\")])]\n @Published var products: [ListProductModel] = [ListProductModel(name: \"vegetable\", items: [.init(name: \"tomatoes\"), .init(name: \"paprika\"), .init(name: \"zucchini\")])]\n \n}\n\nNow your views can look something like this\nstruct ComboView: View {\n @StateObject private var store = ModelStore()\n @State var firstPlusButtonPressed: Bool = false\n @State var secondPlusButtonPressed: Bool = false\n\n var body: some View {\n NavigationView {\n List {\n //The next part will address this\n ItemLoop(list: $store.items)\n ItemLoop(list: $store.products)\n \n }\n .toolbar(content: {\n ToolbarItem {\n AddList(store: store)\n }\n })\n }\n }\n}\n\nstruct ItemLoop<LM: ListModelProtocol>: View {\n @Binding var list: [LM]\n var body: some View{\n ForEach($list, id: \\.id) { $item in\n NavigationLink {\n DetailView<LM>(itemList: $item)\n .navigationTitle(item.name)\n .toolbar {\n NavigationLink {\n AddItem<LM>( item: $item)\n } label: {\n Image(systemName: \"plus\")\n }\n }\n } label: {\n Text(item.name)\n }\n }\n }\n}\n\nstruct AddList: View {\n @Environment(\\.presentationMode) var presentationMode\n @ObservedObject var store: ModelStore\n var body: some View {\n Menu {\n Button(\"add item\"){\n store.items.append(ListItemModel(name: \"new item\", items: []))\n }\n Button(\"add product\"){\n 
store.products.append(ListProductModel(name: \"new product\", items: []))\n }\n } label: {\n Image(systemName: \"plus\")\n }\n \n }\n}\nstruct AddItem<LM>: View where LM : ListModelProtocol {\n @State var textFieldText: String = \"\"\n @Environment(\\.presentationMode) var presentationMode\n @Binding var item: LM\n\n var body: some View {\n VStack {\n TextField(\"Add an item...\", text: $textFieldText)\n Button(action: {\n item.addItem(name: textFieldText)\n presentationMode.wrappedValue.dismiss()\n\n }, label: {\n Text(\"SAVE\")\n })\n }\n\n }\n}\n\nstruct DetailView<LM>: View where LM : ListModelProtocol{\n @Environment(\\.editMode) var editMode\n @Binding var itemList: LM\n var body: some View {\n VStack{\n TextField(\"name\", text: $itemList.name)\n .textFieldStyle(.roundedBorder)\n List (itemList.items, id:\\.id) { item in\n Text(item.name)\n }\n }\n .navigationTitle(itemList.name)\n .toolbar {\n NavigationLink {\n AddItem(item: $itemList)\n } label: {\n Image(systemName: \"plus\")\n }\n }\n }\n}\n\nIf you notice the List in the ComboView you will notice that the items and products are separated into 2 loop. That is because SwiftUI requires concrete types for most views, view modifiers and wrappers.\nYou can have a list of [any ListModelProtocol] but at some point you will have to convert from an existential to a concrete type. In your case the ForEach in de DetailView requires a concrete type.\nclass ModelStore: ObservableObject{\n @Published var both: [any ListModelProtocol] = [\n ListProductModel(name: \"vegetable\", items: [.init(name: \"tomatoes\"), .init(name: \"paprika\"), .init(name: \"zucchini\")]),\n ListItemModel(name: \"fruits\", items: [.init(name: \"peach\"), .init(name: \"banana\"), .init(name: \"apple\")])\n ]\n}\nstruct ComboView: View {\n \n @StateObject private var store = ModelStore()\n @State var firstPlusButtonPressed: Bool = false\n @State var secondPlusButtonPressed: Bool = false\n\n var body: some View {\n NavigationView {\n List {\n ConcreteItemLoop(list: $store.both)\n }\n .toolbar(content: {\n ToolbarItem {\n AddList(store: store)\n }\n })\n }\n }\n}\nstruct ConcreteItemLoop: View {\n @Binding var list: [any ListModelProtocol]\n var body: some View{\n ForEach($list, id: \\.id) { $item in\n NavigationLink {\n if let concrete: Binding<ListItemModel> = getConcrete(existential: $item){\n DetailView(itemList: concrete)\n } else if let concrete: Binding<ListProductModel> = getConcrete(existential: $item){\n DetailView(itemList: concrete)\n }else{\n Text(\"unknown type\")\n }\n } label: {\n Text(item.name)\n }\n }\n }\n func getConcrete<T>(existential: Binding<any ListModelProtocol>) -> Binding<T>? where T : ListModelProtocol{\n if existential.wrappedValue is T{\n return Binding {\n existential.wrappedValue as! 
T\n } set: { newValue in\n existential.wrappedValue = newValue\n }\n\n }else{\n return nil\n }\n }\n}\n\nstruct AddList: View {\n @Environment(\\.presentationMode) var presentationMode\n @ObservedObject var store: ModelStore\n var body: some View {\n Menu {\n Button(\"add item\"){\n store.both.append(ListItemModel(name: \"new item\", items: []))\n }\n Button(\"add product\"){\n store.both.append(ListProductModel(name: \"new product\", items: []))\n }\n } label: {\n Image(systemName: \"plus\")\n }\n \n }\n}\n\nI know its long but this all compiles so you should be able to put it in a project and disect it.\nAlso, at the end of all of this you can create specific views for the model type.\nstruct DetailView<LM>: View where LM : ListModelProtocol{\n @Environment(\\.editMode) var editMode\n @Binding var itemList: LM\n var body: some View {\n VStack{\n TextField(\"name\", text: $itemList.name)\n .textFieldStyle(.roundedBorder)\n List (itemList.items, id:\\.id) { item in\n VStack{\n switch item{\n case let i as ItemModel:\n ItemModelView(item: i)\n case let p as ProductModel:\n Text(\"\\(p.name) is product\")\n default:\n Text(\"\\(item.name) is unknown\")\n }\n }\n }\n }\n .navigationTitle(itemList.name)\n .toolbar {\n NavigationLink {\n AddItem(item: $itemList)\n } label: {\n Image(systemName: \"plus\")\n }\n }\n }\n}\nstruct ItemModelView: View{\n let item: ItemModel\n var body: some View{\n VStack{\n Text(\"\\(item.name) is item\")\n Image(systemName: \"person\")\n }\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"foreach",
"identifiable",
"items",
"list",
"swift"
] |
stackoverflow_0074602715_foreach_identifiable_items_list_swift.txt
|
Q:
Gitlab: Is it possible to filter merge requests by target branch?
In Gitlab CE, one can filter Merge Requests on Author, Assignee, Milestones and Labels. We cannot find a way to filter the search by Target Branch. Are we missing how this is done, or is that feature not available?
In Github, this is done by entering base:x where x is the branch to filter by under Pull Requests.
A:
Unfortunately, it is not yet possible to filter Merge Requests by the destination branch, although it is an open feature proposal in issue 22135. I apologize that this may not be the answer you were hoping for.
Please feel free to mark your support for the proposal, discuss any ideas, or even submit a merge request if you’d like to do so. You can do any of these from the issue page and I encourage you to lend your voice to the development of this feature.
A:
Yes: with the GitLab CLI (glab) you can filter by target branch.
https://glab-cli.io/docs/commands/mr/list
glab mr list --target-branch=trunk
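If you prefer the REST API instead of the CLI, the merge requests list endpoint also accepts a target_branch filter. A sketch; the host, project id and token below are placeholders:
curl --header "PRIVATE-TOKEN: <your_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/merge_requests?state=opened&target_branch=trunk"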
|
Gitlab: Is it possible to filter merge requests by target branch?
|
In Gitlab CE, one can filter Merge Requests on Author, Assignee, Milestones and Labels. We cannot find a way to filter the search by Target Branch. Are we missing how this is done, or is that feature not available?
In Github, this is done by entering base:x where x is the branch to filter by under Pull Requests.
|
[
"Unfortunately, it is not yet possible to filter Merge Requests by the destination branch, although it is an open feature proposal in issue 22135. I apologize that this may not be the answer you were hoping for. \nPlease feel free to mark your support for the proposal, discuss any ideas, or even submit a merge request if you’d like to do so. You can do any of these from the issue page and I encourage you to lend your voice to the development of this feature.\n",
"Yes, here's how!\nhttps://glab-cli.io/docs/commands/mr/list\nglab mr list --target-branch=trunk\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"gitlab"
] |
stackoverflow_0041104645_gitlab.txt
|
Q:
Is it possible to infer generic `T | undefined` as `Partial` without casting?
I have this function that applies default values to an object if it's undefined. For properties that don't have a default value provided, I want to keep original type with | undefined.
What I have right now works, but would it be possible to prevent type casting as Partial<T>?
const setDefaults = <T extends object, K extends keyof T>(
data: T | undefined,
defaults: Pick<T, K>,
) => {
return { ...defaults, ...(data as Partial<T>) }
}
type MyData = {
id: number,
roles: Array<string>,
}
let data: MyData | undefined
const { id, roles } = setDefaults(data, { roles: [] })
console.log(id) // number | undefined
console.log(roles) // Array<string>
A:
So, data really is not Partial<T>; it's T | undefined. It's defaults that is Partial<T>. My suggested fix is to intersect the type of defaults with Partial<T> so that the compiler understands that any properties not explicitly mentioned in the argument are of type undefined (instead of just unknown):
const setDefaults = <T extends object, K extends keyof T>(
data: T | undefined,
defaults: Pick<T, K> & Partial<T>, // <-- here
) => {
return { ...defaults, ...data }
}
This has the same Pick<T, K> & Partial<T> return type as before, so everything works as expected:
const { id, roles } = setDefaults(data, { roles: [] })
id; // number | undefined
roles; // Array<string>
Playground link to code
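As a quick usage sketch (my own addition, not from the answer): when data is still undefined, the defaults win and every other property stays undefined:
const empty = setDefaults(data, { roles: ["guest"] })
empty.roles // Array<string> -> ["guest"]
empty.id    // number | undefined -> undefined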
|
Is it possible to infer generic `T | undefined` as `Partial` without casting?
|
I have this function that applies default values to an object if it's undefined. For properties that don't have a default value provided, I want to keep original type with | undefined.
What I have right now works, but would it be possible to prevent type casting as Partial<T>?
const setDefaults = <T extends object, K extends keyof T>(
data: T | undefined,
defaults: Pick<T, K>,
) => {
return { ...defaults, ...(data as Partial<T>) }
}
type MyData = {
id: number,
roles: Array<string>,
}
let data: MyData | undefined
const { id, roles } = setDefaults(data, { roles: [] })
console.log(id) // number | undefined
console.log(roles) // Array<string>
|
[
"So, data really is not Partial<T>; it's T | undefined. It's defaults that is Partial<T>. My suggested fix is to intersect the type of defaults with Partial<T> so that the compiler understands that any properties not explicitly mentioned in the argument are of type undefined (instead of just unknown):\nconst setDefaults = <T extends object, K extends keyof T>(\n data: T | undefined,\n defaults: Pick<T, K> & Partial<T>, // <-- here\n) => {\n return { ...defaults, ...data }\n}\n\nThis has the same Pick<T, K> & Partial<T> return type as before, so everything works as expected:\nconst { id, roles } = setDefaults(data, { roles: [] }) \nid; // number | undefined\nroles; // Array<string>\n\nPlayground link to code\n"
] |
[
3
] |
[] |
[] |
[
"typescript",
"typescript_generics"
] |
stackoverflow_0074657298_typescript_typescript_generics.txt
|
Q:
PowerShell Single Line Huge Memory Usage
I'm trying to use the below command. Whenever I use this, memory usage is extremely high. Is there a way to lower memory usage while completing the same task?
Get-ChildItem -Path "C:\" -Recurse |
sort -descending -property length |
select -first 20 fullname, @{Name="Gigabytes";Expression={[Math]::round($_.length / 1GB, 2)}}
A:
This should be very efficient compared to what you're currently doing: it leverages the SortedSet<T> class and a custom PowerShell class implementing IComparable and IEquatable<T>. The logic inside the anonymous function ensures that the sorted set always holds at most 20 elements (change the $sorted.Count -lt 20 as needed, depending on how many files you want in the result), so memory usage should be much lower.
class SimpleFile : System.IComparable, System.IEquatable[object] {
[string] $FullName
[Int64] $Length
[double] $Gigabytes
SimpleFile([IO.FileInfo] $File) {
$this.FullName = $File.FullName
$this.Length = $File.Length
$this.Gigabytes = [Math]::Round($File.Length / 1Gb, 2)
}
[int] GetHashCode() {
return $this.FullName.GetHashCode()
}
[int] CompareTo([object] $That) {
if($diff = $this.Length.CompareTo($That.Length)) {
return $diff
}
return 0
}
[bool] Equals([object] $That) {
return $this.FullName -eq $That.FullName
}
}
$result = Get-ChildItem C:\ -Recurse -File -EA 0 | & {
begin {
$sorted = [System.Collections.Generic.SortedSet[SimpleFile]]::new()
}
process {
if($sorted.Count -lt 20) {
$null = $sorted.Add($_)
return
}
if($sorted.Min.Length -lt $_.Length) {
$null = $sorted.Remove($sorted.Min)
$null = $sorted.Add($_)
}
}
end {
$sorted.Reverse()
}
}
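A hypothetical way to display the result afterwards (not part of the original answer). Since the set was reversed in the end block, the largest file comes first:
$result | Select-Object FullName, Gigabytes | Format-Table -AutoSize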
A:
Here's a mostly cmd solution that runs in half the time (4 min vs 8 min for me) and uses no great amount of RAM to speak of.
cmd /c 'dir /s c:\ | findstr /v "Directory of" | findstr /v "<DIR>" | findstr /v "<JUNCTION>" | findstr /v "Total Files Listed:" | sort.exe /+20' | select -last 20
09/13/2018 04:45 PM 362,337,255 78ec5a6bf483ef155dc2a311e526f8a5.cab
09/23/2022 02:24 PM 379,584,512 clonezilla-live-3.0.1-8-amd64.iso
09/13/2018 04:45 PM 445,038,363 8bb1cf01f3ce1952f356d0aff91dbb2f.cab
09/13/2018 04:45 PM 445,038,363 8bb1cf01f3ce1952f356d0aff91dbb2f.cab
04/13/2012 03:02 PM 481,143,404 Data1.cab
02/23/2022 12:10 PM 540,587,242 Miller_MUS171_Lecture02_v2.mp4
05/17/2017 06:27 PM 551,624,704 maple.help
09/13/2018 04:44 PM 600,753,345 a32918368eba6a062aaaaf73e3618131.cab
09/13/2018 04:44 PM 600,753,345 a32918368eba6a062aaaaf73e3618131.cab
11/10/2022 03:12 AM 641,728,512 BaseLayer.vhdx
11/10/2022 03:12 AM 641,728,512 BaseLayer.vhdx
11/09/2022 09:17 AM 687,780,220 Windows10.0-KB5019959-x64.cab
12/12/2017 01:45 PM 740,060,956 2ddf168b.msi
09/13/2018 04:45 PM 760,142,380 9722214af0ab8aa9dffb6cfdafd937b7.cab
09/13/2018 04:45 PM 760,142,380 9722214af0ab8aa9dffb6cfdafd937b7.cab
06/02/2016 03:19 PM 894,730,632 392b35b.msi
04/18/2021 03:03 PM 905,672,704 37b241d4.msi
06/13/2022 12:52 PM 905,672,704 37b241d7.msi
04/18/2021 03:03 PM 905,672,704 Stata17.msi
01/12/2018 03:06 PM 1,075,667,093 windows10.0-kb4056891-x64.msu
|
PowerShell Single Line Huge Memory Usage
|
I'm trying to use the below command. Whenever I use this, memory usage is extremely high. Is there a way to lower memory usage while completing the same task?
Get-ChildItem -Path "C:\" -Recurse |
sort -descending -property length |
select -first 20 fullname, @{Name="Gigabytes";Expression={[Math]::round($_.length / 1GB, 2)}}
|
[
"This should be very efficient compared to what you're currently doing, basically it leverages the SortedSet<T> Class and a custom PowerShell Class implementing IComparable and IEquatable<T>. The logic inside the anonymous function ensures that the sorted set will always have a maximum of 20 elements (change the $sorted.Count -lt 20 as needed depending on how many files you want as result) so memory usage should be much lower.\nclass SimpleFile : System.IComparable, System.IEquatable[object] {\n [string] $FullName\n [Int64] $Length\n [double] $Gigabytes\n\n SimpleFile([IO.FileInfo] $File) {\n $this.FullName = $File.FullName\n $this.Length = $File.Length\n $this.Gigabytes = [Math]::Round($File.Length / 1Gb, 2)\n }\n\n [int] GetHashCode() {\n return $this.FullName.GetHashCode()\n }\n\n [int] CompareTo([object] $That) {\n if($diff = $this.Length.CompareTo($That.Length)) {\n return $diff\n }\n return 0\n }\n\n [bool] Equals([object] $That) {\n return $this.FullName -eq $That.FullName\n }\n}\n\n$result = Get-ChildItem C:\\ -Recurse -File -EA 0 | & {\n begin {\n $sorted = [System.Collections.Generic.SortedSet[SimpleFile]]::new()\n }\n process {\n if($sorted.Count -lt 20) {\n $null = $sorted.Add($_)\n return\n }\n if($sorted.Min.Length -lt $_.Length) {\n $null = $sorted.Remove($sorted.Min)\n $null = $sorted.Add($_)\n }\n }\n end {\n $sorted.Reverse()\n }\n}\n\n",
"Here's a mostly cmd solution that runs in half the time (4 min vs 8 min for me), and no great amount of ram to speak of.\ncmd /c 'dir /s c:\\ | findstr /v \"Directory of\" | findstr /v \"<DIR>\" | findstr /v \"<JUNCTION>\" | findstr /v \"Total Files Listed:\" | sort.exe /+20' | select -last 20\n\n\n09/13/2018 04:45 PM 362,337,255 78ec5a6bf483ef155dc2a311e526f8a5.cab\n09/23/2022 02:24 PM 379,584,512 clonezilla-live-3.0.1-8-amd64.iso\n09/13/2018 04:45 PM 445,038,363 8bb1cf01f3ce1952f356d0aff91dbb2f.cab\n09/13/2018 04:45 PM 445,038,363 8bb1cf01f3ce1952f356d0aff91dbb2f.cab\n04/13/2012 03:02 PM 481,143,404 Data1.cab\n02/23/2022 12:10 PM 540,587,242 Miller_MUS171_Lecture02_v2.mp4\n05/17/2017 06:27 PM 551,624,704 maple.help\n09/13/2018 04:44 PM 600,753,345 a32918368eba6a062aaaaf73e3618131.cab\n09/13/2018 04:44 PM 600,753,345 a32918368eba6a062aaaaf73e3618131.cab\n11/10/2022 03:12 AM 641,728,512 BaseLayer.vhdx\n11/10/2022 03:12 AM 641,728,512 BaseLayer.vhdx\n11/09/2022 09:17 AM 687,780,220 Windows10.0-KB5019959-x64.cab\n12/12/2017 01:45 PM 740,060,956 2ddf168b.msi\n09/13/2018 04:45 PM 760,142,380 9722214af0ab8aa9dffb6cfdafd937b7.cab\n09/13/2018 04:45 PM 760,142,380 9722214af0ab8aa9dffb6cfdafd937b7.cab\n06/02/2016 03:19 PM 894,730,632 392b35b.msi\n04/18/2021 03:03 PM 905,672,704 37b241d4.msi\n06/13/2022 12:52 PM 905,672,704 37b241d7.msi\n04/18/2021 03:03 PM 905,672,704 Stata17.msi\n01/12/2018 03:06 PM 1,075,667,093 windows10.0-kb4056891-x64.msu\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"memory",
"powershell"
] |
stackoverflow_0074656156_memory_powershell.txt
|
Q:
affect collection filters count in shopify
On my website, I show some products with a condition - if the product has a test tag:
{% if product.tags contains 'test' %}
// render product
{% endif %}
And this way I get only some products, but the native product filter still displays the count of all products. I check it this way:
{% for filter in collection.filters %}
{% for value in filter.values %}
{{ value.count }},
{% endfor %}
{% endfor %}
And like I said, I get the full count of products, but I want only the number of products that I displayed via the condition. Can you please tell me how to do this? Thank you!
A:
In the for-loop that you use to render products you can create a count variable and use it later.
pseudocode:
{% liquid
assign counter = 0
for product in products
assign counter = counter | plus: 1
render 'product-div'
endfor
render 'counter', count: counter
%}
This solution isn't elegant, but it is the easiest one. If you need the counter inside 'product-div' you can create two identical for-loops: one for counting and a second one for rendering; a concrete counting sketch follows below.
Again, this is not the cleverest solution, but the beauty lies in simplicity, and simple code is usually the best code.
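A minimal concrete sketch of the single-pass version (the snippet name 'product-card' and the tag 'test' are placeholders for your own):
{% assign shown = 0 %}
{% for product in collection.products %}
  {% if product.tags contains 'test' %}
    {% assign shown = shown | plus: 1 %}
    {% render 'product-card', product: product %}
  {% endif %}
{% endfor %}
<p>{{ shown }} products</p>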
A:
I think what you need to do is the following:
Loop through all products in your collection
Filter through them using your tag
Assign an index to each of those products
Then print the last recorded index
In Liquid this looks pretty much like this (note the loop must run over collection.products, and a running counter is needed because forloop.index counts every product, not just the matching ones):
{% assign matched = 0 %}
{% for product in collection.products %}
  {% if product.tags contains 'test' %}
    {% assign matched = matched | plus: 1 %}
  {% endif %}
{% endfor %}
{{ matched }}
|
affect collection filters count in shopify
|
On my website, I show some products with a condition - if the product has a test tag:
{% if product.tags contains 'test' %}
// render product
{% endif %}
And this way I get only some products, but the native product filter still displays the count of all products. I'll check it this way:
{% for filter in collection.filters %}
{% for value in filter.values %}
{{ value.count }},
{% endfor %}
{% endfor %}
And like I said, I get the full amount of products. But I want to get only the number of products that I displayed by condition. Can you please tell me how to do this? Thank you!
|
[
"In the for-loop that you use to render products you can create a count variable and use it later.\npseudocode:\n{% liquid\n assign counter = 0\n for product in products\n assign counter = counter | plus: 1\n render 'product-div'\n endfor\n render 'counter', count: counter\n%}\n\nthis solution isn't elegant but the easiest one. If you need a counter in 'product-div' you can create two exact same for-loops: one for counting and second one for rendering.\nAgain, this solution is not the most clever one but the beauty lies in simplicity and simple code is always the best one\n",
"I think what you need to do is the following:\n\nLoop through all products in your collection\nFilter through them using your tag\nAssign an index to each of those products\nThen print the last recorded index\nList item\n\nIn liquid this look pretty much like this:\n{% for filter in collection.filters %}\n {% if product.tags contains 'test' %}\n {% if forloop.last %}\n {{forloop.index0}}\n {% endif%}\n {%endif %}\n{% endfor%}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"liquid",
"shopify"
] |
stackoverflow_0074655327_liquid_shopify.txt
|
Q:
SQL remove characters from string and leave number only
I have a string (nvarchar) from db data and I would like to reduce it to numbers only. I searched Google for a solution but didn't find anything. I found something similar here on Stack Overflow, but everything removed characters only from the left side; if there is any character on the right side or between the numbers it won't work.
The solution I found, which is not working:
select substring(XX,
PatIndex('%[0-9]%', XX),
len(XX))
For example I have the text '4710000 text', and this substring returns the same text I put into it, which is again '4710000 text'. Is there any other way to do this, without creating functions or using IFs, BEGINs, variables (@text etc.)?
A:
Try this, it seems to work like a charm. I wish I could take credit but it's from this post. If it works for you please give him the upvote.
The 'with' is just a CTE that sets up test data.
with tbl(str) as (
select '4710000 text'
)
SELECT
(SELECT CAST(CAST((
SELECT SUBSTRING(str, Number, 1)
FROM master..spt_values
WHERE Type='p' AND Number <= LEN(str) AND
SUBSTRING(str, Number, 1) LIKE '[0-9]' FOR XML Path(''))
AS xml) AS varchar(MAX)))
FROM
tbl
A:
If you are using SQL Server and a fully supported version you can use translate like so:
select Replace(Translate('4710000 text', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', Replicate('*', 26)), '*', '');
If you have additional non-numeric characters, add them to the string and amend the 26 accordingly.
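For example, a sketch that also strips spaces, commas and hyphens (TRANSLATE needs SQL Server 2017 or later; the input value is made up):
DECLARE @strip varchar(30) = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ ,-';
SELECT REPLACE(TRANSLATE('47-10,000 text', @strip, REPLICATE('*', LEN(@strip))), '*', '');
-- returns '4710000' under a case-insensitive collation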
|
SQL remove characters from string and leave number only
|
I have string(nvarchar) from db data and I would like to transfer it to numbers only. I was searching on Google for solution but I didnt find anything. I found something similiar here on StackOverflow but everything was removing characters only from left side, but if there is any character on right side or between numbers it wont work.
Solution I found but is not working:
select substring(XX,
PatIndex('%[0-9]%', XX),
len(XX))
For example I have text: '4710000 text' so this substring returns me same text I putted inside of it which is again '4710000 text'. Is there any other way how to do that? Without creating functions or using IFs, begins, variables (@text etc.).
|
[
"Try this, it seems to work like a charm. I wish I could take credit but it's from this post. If it works for you please give him the upvote.\nThe 'with' is just a CTE that sets up test data.\nwith tbl(str) as (\n select '4710000 text'\n)\nSELECT\n (SELECT CAST(CAST((\n SELECT SUBSTRING(str, Number, 1)\n FROM master..spt_values\n WHERE Type='p' AND Number <= LEN(str) AND\n SUBSTRING(str, Number, 1) LIKE '[0-9]' FOR XML Path(''))\n AS xml) AS varchar(MAX)))\nFROM\n tbl\n\n",
"If you are using SQL Server and a fully supported version you can use translate like so:\nselect Replace(Translate('4710000 text', 'ABCDEFGHIJKLMNOPQRSTUVWXYZ', Replicate('*', 26)), '*', '');\n\nIf you have additional non-numerical characters add those in to the string and amend 26 accordingly.\n"
] |
[
1,
0
] |
[] |
[] |
[
"sql",
"sql_server",
"sql_server_2014"
] |
stackoverflow_0074655748_sql_sql_server_sql_server_2014.txt
|
Q:
How can I undo a `git commit` locally and on a remote after `git push`
I have performed git commit followed by a git push. How can I revert that change on both local and remote repositories?
$ git log
commit 364705c23011b0fc6a7ca2d80c86cef4a7c4db7ac8
Author: Michael Silver <Michael [email protected]>
Date: Tue Jun 11 12:24:23 2011 -0700
A:
git reset --hard HEAD~1
git push -f <remote> <branch>
(Example push: git push -f origin bugfix/bug123)
This will undo the last commit and push the updated history to the remote. You need to pass the -f because you're replacing upstream history in the remote.
A:
Generally, make an "inverse" commit, using:
git revert 364705c
then send it to the remote as usual:
git push
This won't delete the commit: it makes an additional commit that undoes whatever the first commit did. Anything else, not really safe, especially when the changes have already been propagated.
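If you want to skip the commit-message editor, a small sketch (the branch name main is an assumption):
git revert --no-edit 364705c
git push origin main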
A:
First of all, Relax.
"Nothing is under our control. Our control is mere illusion.", "To err is human"
I get that you've unintentionally pushed your code to remote-master. THIS is going to be alright.
1. First, get the SHA-1 value of the commit you are trying to return to, e.g. a commit on the master branch. Run this:
git log
You'll see a bunch of strings like 'f650a9e398ad9ca606b25513bd4af9fe...' along with each of the commits. Copy that string from the commit that you want to return to.
2. Now, type in the below command:
git reset --hard your_that_copied_string_but_without_quote_mark
You should see a message like "HEAD is now at ...". You are in the clear. What this has just done is reflect that change locally.
3. Now, type in the below command:
git push -f
you should see something like
"warning: push.default is unset; its implicit value has changed
in..... ... Total 0 (delta 0), reused 0 (delta 0) ...
...your_branch_name -> master (forced update)."
Now you are all clear. Check master with "git log" again; your fixed destination commit should be at the top of the list.
You are welcome (in advance ;))
UPDATE:
Now, the changes you had made before all this began are gone.
If you want to bring that hard work back again, it's possible, thanks to the git reflog and git cherry-pick commands.
For that, I would suggest following this blog or this post.
A:
git reset HEAD~1 if you don't want your changes to be gone (they become unstaged changes). Change, commit and push again: git push -f [origin] [branch]
A:
Try using
git reset --hard <commit id>
Please note: here the commit id is the id of the commit you want to go back to, not the id of the commit you want to undo. This was the only point where I also got stuck.
then push
git push -f <remote> <branch>
A:
You can do an interactive rebase:
git rebase -i <commit>
This will bring up your default editor. Just delete the line containing the commit you want to remove.
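For illustration, the todo list might look like this (the first hash is hypothetical); deleting the second line drops that commit:
pick f650a9e An earlier commit you want to keep
pick 364705c The commit you want to remove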
You will, of course, need access to the remote repository to apply this change there too.
See this question: Git: removing selected commits from repository
A:
Alternatively:
git push origin +364705c23011b0fc6a7ca2d80c86cef4a7c4db7ac8^:master
Force the master branch of the origin remote repository to the parent of the last commit.
A:
If you don't want to lose the local changes made in the wrong commit
git reset HEAD~1
Then force push it to the origin (assuming <remote> == origin)
git push -f origin <branch>
|
How can I undo a `git commit` locally and on a remote after `git push`
|
I have performed git commit followed by a git push. How can I revert that change on both local and remote repositories?
$ git log
commit 364705c23011b0fc6a7ca2d80c86cef4a7c4db7ac8
Author: Michael Silver <Michael [email protected]>
Date: Tue Jun 11 12:24:23 2011 -0700
|
[
"git reset --hard HEAD~1\ngit push -f <remote> <branch>\n\n(Example push: git push -f origin bugfix/bug123)\nThis will undo the last commit and push the updated history to the remote. You need to pass the -f because you're replacing upstream history in the remote.\n",
"Generally, make an \"inverse\" commit, using:\ngit revert 364705c\n\nthen send it to the remote as usual:\ngit push\n\nThis won't delete the commit: it makes an additional commit that undoes whatever the first commit did. Anything else, not really safe, especially when the changes have already been propagated.\n",
"First of all, Relax.\n\"Nothing is under our control. Our control is mere illusion.\", \"To err is human\"\nI get that you've unintentionally pushed your code to remote-master. THIS is going to be alright.\n1. At first, get the SHA-1 value of the commit you are trying to return, e.g. commit to master branch. run this:\ngit log\n\nyou'll see bunch of 'f650a9e398ad9ca606b25513bd4af9fe...' like strings along with each of the commits. copy that number from the commit that you want to return back.\n2. Now, type in below command:\ngit reset --hard your_that_copied_string_but_without_quote_mark\n\nyou should see message like \"HEAD is now at \". you are on clear. What it just have done is to reflect that change locally.\n3. Now, type in below command:\ngit push -f\n\nyou should see like \n\n\"warning: push.default is unset; its implicit value has changed\n in..... ... Total 0 (delta 0), reused 0 (delta 0) ... \n ...your_branch_name -> master (forced update).\"\n\nNow, you are all clear. Check the master with \"git log\" again, your fixed_destination_commit should be on top of the list.\nYou are welcome (in advance ;))\nUPDATE:\nNow, the changes you had made before all these began, are now gone.\nIf you want to bring those hard-works back again, it's possible. Thanks to git reflog, and git cherry-pick commands.\nFor that, i would suggest to please follow this blog or this post.\n",
"git reset HEAD~1 if you don't want your changes to be gone(unstaged changes). Change, commit and push again git push -f [origin] [branch]\n",
"Try using \ngit reset --hard <commit id> \n\nPlease Note : Here commit id will the id of the commit you want to go to but not the id you want to reset. this was the only point where i also got stucked.\nthen push \ngit push -f <remote> <branch>\n\n",
"You can do an interactive rebase:\ngit rebase -i <commit>\n\nThis will bring up your default editor. Just delete the line containing the commit you want to remove to delete that commit.\nYou will, of course, need access to the remote repository to apply this change there too.\nSee this question: Git: removing selected commits from repository\n",
"Alternatively:\ngit push origin +364705c23011b0fc6a7ca2d80c86cef4a7c4db7ac8^:master\n\nForce the master branch of the origin remote repository to the parent of last commit\n",
"If you don't want to lose the local changes made in the wrong commit\ngit reset HEAD~1\n\nThen force push it to the origin (assuming <remote> == origin)\ngit push -f origin <branch>\n\n"
] |
[
487,
197,
56,
13,
5,
3,
2,
0
] |
[] |
[] |
[
"git",
"git_commit",
"git_push"
] |
stackoverflow_0006459080_git_git_commit_git_push.txt
|
Q:
Is it possible to fill in a subplot with main plot data in R Shiny when subplot data not available?
I didn't really know how to phrase the question, but hopefully it will make sense when I explain things. I have a simple Shiny app that has a main plot (bar chart) and a subplot (bar chart). When you go to the subplot, no data pops up for Jewel for "Vegetable" or "Fruit". I understand why that is, but I was wondering if there's a coding solution where Jewel can have a bar that the legend just denotes as "Other". So if the filter is on "Vegetable", the bar for Jewel on the subplot will be 800 with one solid color that just says "Other".
As always, thank you for any assistance.
library(tidyverse)
library(plotly)
library(shiny)
library(shinydashboard)
library(shinyWidgets)
store_data <- tibble(
Whole_Foods = c(1000, 500, 500, 1000, 500, 500),
Kroger = c(700, 300, 400, 700, 300, 400),
Jewel = c(800, 0, 0, 800, 0, 0),
Food_Main = c("Vegetable", "Lettuce", "Potato", "Fruit", "Lemon", "Watermelon"),
Food_Filter = c("None", "Vegetable", "Vegetable", "None", "Fruit", "Fruit")
)
store_data <- store_data %>%
reshape2::melt(measure.vars = c("Whole_Foods", "Kroger", "Jewel"),
variable.name = "Grocery_Store") %>%
mutate(value = value %>% as.numeric()) %>%
rename(Sales = value)
ui <- fluidPage(
selectInput(inputId = "store",
label = "Grocery Store",
multiple = TRUE,
choices = unique(store_data$Grocery_Store),
selected = unique(store_data$Grocery_Store)),
selectInput(inputId = "food_subcategory",
label = "Food Type",
choices = c("Vegetable", "Fruit")),
plotlyOutput("food_level", height = 200),
plotlyOutput("filter_level", height = 200),
uiOutput('back'),
uiOutput("back1")
)
server <- function(input, output, session) {
food_filter <- reactiveVal()
type_filter <- reactiveVal()
observeEvent(event_data("plotly_click", source = "food_level"), {
food_filter(event_data("plotly_click", source = "food_level")$x)
type_filter(NULL)
})
observeEvent(event_data("plotly_click", source = "filter_level"), {
type_filter(
event_data("plotly_click", source = "filter_level")$x
)
})
store_reactive <- reactive({
store_data %>%
filter(Food_Filter == "None") %>%
filter(Grocery_Store %in% input$store)
})
output$food_level <- renderPlotly({
store_reactive() %>%
plot_ly(
x = ~Grocery_Store,
y = ~Sales,
color = ~Food_Main,
source = "food_level",
type = "bar"
) %>%
layout(barmode = "stack", showlegend = T)
})
store_reactive_2 <- reactive({
store_data %>%
filter(Grocery_Store %in% input$store) %>%
filter(Food_Filter %in% input$food_subcategory)
})
output$filter_level <- renderPlotly({
if (is.null(food_filter())) return(NULL)
store_reactive_2() %>%
plot_ly(
x = ~Grocery_Store,
y = ~Sales,
color = ~Food_Main,
source = "food_level",
type = "bar"
) %>%
layout(barmode = "stack", showlegend = T)
})
output$back <- renderUI({
if (!is.null(food_filter()) && is.null(type_filter())) {
actionButton("clear", "Back", icon("chevron-left"))
}
})
output$back1 <- renderUI({
if (!is.null(type_filter())) {
actionButton("clear1", "Back", icon("chevron-left"))
}
})
observeEvent(input$clear,
food_filter(NULL))
observeEvent(input$clear1,
type_filter(NULL))
}
shinyApp(ui, server)
A:
The issue is that you have your totals mixed in with the granular-level data. You need to separate these out and then create a bucket for "Other": a separate totals data set for the first plot and a modified store_data for the sub-plot.
totals <- store_data %>%
filter(Food_Filter == "None") %>%
select(-Food_Filter)
remainder <- store_data %>%
filter(Food_Filter != "None") %>%
group_by(Food_Filter, Grocery_Store) %>%
summarize(accounted_for = sum(Sales), .groups = "keep") %>%
left_join(
totals,
by = c("Food_Filter" = "Food_Main", "Grocery_Store" = "Grocery_Store")
) %>%
summarize(Sales = Sales - accounted_for) %>%
ungroup() %>%
mutate(Food_Main = "Other")
store_data <- store_data %>%
filter(Food_Filter != "None") %>%
bind_rows(remainder)
Then update which data set is used by the reactive:
store_reactive <- reactive({
totals %>%
filter(Grocery_Store %in% input$store)
})
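As a quick sanity check (reasoning from the sample data, not captured output): Jewel reports 800 for Vegetable and Fruit but has no granular rows, so its entire total should land in the "Other" bucket:
remainder %>% filter(Grocery_Store == "Jewel")
# expect Sales = 800 with Food_Main = "Other" for both Vegetable and Fruit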
|
Is it possible to fill in a subplot with main plot data in R Shiny when subplot data not available?
|
I didn't really know how to phrase the question, but hopefully it will make sense when I explain things. I have a simple Shiny App that has a main plot (bar chart) and a sub plot (bar chart). When you go to the subplot, no data pops up for Jewel for "vegetable" or "fruit". I understand why that is, but I was wondering if there's a coding solution where Jewel can have a bar that the legend just denotes as "Other". So if the filter is on "Vegetable", the bar for Jewel on the suplot will be 800 with one solid color that just says "Other".
As always, thank you for any assistance.
library(tidyverse)
library(plotly)
library(shiny)
library(shinydashboard)
library(shinyWidgets)
store_data <- tibble(
Whole_Foods = c(1000, 500, 500, 1000, 500, 500),
Kroger = c(700, 300, 400, 700, 300, 400),
Jewel = c(800, 0, 0, 800, 0, 0),
Food_Main = c("Vegetable", "Lettuce", "Potato", "Fruit", "Lemon", "Watermelon"),
Food_Filter = c("None", "Vegetable", "Vegetable", "None", "Fruit", "Fruit")
)
store_data <- store_data %>%
reshape2::melt(measure.vars = c("Whole_Foods", "Kroger", "Jewel"),
variable.name = "Grocery_Store") %>%
mutate(value = value %>% as.numeric()) %>%
rename(Sales = value)
ui <- fluidPage(
selectInput(inputId = "store",
label = "Grocery Store",
multiple = TRUE,
choices = unique(store_data$Grocery_Store),
selected = unique(store_data$Grocery_Store)),
selectInput(inputId = "food_subcategory",
label = "Food Type",
choices = c("Vegetable", "Fruit")),
plotlyOutput("food_level", height = 200),
plotlyOutput("filter_level", height = 200),
uiOutput('back'),
uiOutput("back1")
)
server <- function(input, output, session) {
food_filter <- reactiveVal()
type_filter <- reactiveVal()
observeEvent(event_data("plotly_click", source = "food_level"), {
food_filter(event_data("plotly_click", source = "food_level")$x)
type_filter(NULL)
})
observeEvent(event_data("plotly_click", source = "filter_level"), {
type_filter(
event_data("plotly_click", source = "filter_level")$x
)
})
store_reactive <- reactive({
store_data %>%
filter(Food_Filter == "None") %>%
filter(Grocery_Store %in% input$store)
})
output$food_level <- renderPlotly({
store_reactive() %>%
plot_ly(
x = ~Grocery_Store,
y = ~Sales,
color = ~Food_Main,
source = "food_level",
type = "bar"
) %>%
layout(barmode = "stack", showlegend = T)
})
store_reactive_2 <- reactive({
store_data %>%
filter(Grocery_Store %in% input$store) %>%
filter(Food_Filter %in% input$food_subcategory)
})
output$filter_level <- renderPlotly({
if (is.null(food_filter())) return(NULL)
store_reactive_2() %>%
plot_ly(
x = ~Grocery_Store,
y = ~Sales,
color = ~Food_Main,
source = "food_level",
type = "bar"
) %>%
layout(barmode = "stack", showlegend = T)
})
output$back <- renderUI({
if (!is.null(food_filter()) && is.null(type_filter())) {
actionButton("clear", "Back", icon("chevron-left"))
}
})
output$back1 <- renderUI({
if (!is.null(type_filter())) {
actionButton("clear1", "Back", icon("chevron-left"))
}
})
observeEvent(input$clear,
food_filter(NULL))
observeEvent(input$clear1,
type_filter(NULL))
}
shinyApp(ui, server)
|
[
"The issue is you have your totals mixed in with the granular level data. You need to separate these out and then create a bucket for \"Other\". Create a separate totals data set for the first plot and a modified store_data for the sub-plot.\ntotals <- store_data %>% \n filter(Food_Filter == \"None\") %>% \n select(-Food_Filter) \n\nremainder <- store_data %>% \n filter(Food_Filter != \"None\") %>% \n group_by(Food_Filter, Grocery_Store) %>% \n summarize(accounted_for = sum(Sales), .groups = \"keep\") %>% \n left_join(\n totals, \n by = c(\"Food_Filter\" = \"Food_Main\", \"Grocery_Store\" = \"Grocery_Store\")\n ) %>% \n summarize(Sales = Sales - accounted_for) %>% \n ungroup() %>% \n mutate(Food_Main = \"Other\") \n\nstore_data <- store_data %>% \n filter(Food_Filter != \"None\") %>% \n bind_rows(remainder) \n\nThen update which data set used by the reactive:\nstore_reactive <- reactive({\n totals %>% \n filter(Grocery_Store %in% input$store)\n })\n\n"
] |
[
1
] |
[] |
[] |
[
"plotly",
"r",
"shiny"
] |
stackoverflow_0074658595_plotly_r_shiny.txt
|
Q:
Can't query due to aggregated data, why?
We have a database for 3 book shops, each with an attached inventory and books with random units in stock. The query should display each bookstore (so 3 rows), followed by the quantity (the highest stock value per store, calculated with MAX(INV.UnitsInStock)), and finally a third column that displays the title of the corresponding book.
SELECT BS.Name, B.Title, MAX(UnitsInStock) AS 'Quantity'
FROM Inventories AS INV
JOIN BookShops AS BS ON BS.Id = INV.ShopId
JOIN Books AS B ON B.Id = INV.BookId
GROUP BY BS.Name
This gives me the following error:
Column 'Books.Title' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I also tried this:
SELECT BS.Name, MAX(UnitsInStock) AS 'Quantity'
FROM Inventories AS INV
JOIN BookShops AS BS ON BS.Id = INV.ShopId
JOIN Books AS B ON B.Id = INV.BookId
GROUP BY BS.Name
This shows the correct data so far but without the title of the book.
I've tried temp tables, string_agg() (which correctly displays every single book), tried hardcoding each book after finding out exactly which one etc.
How can I fix this?
A:
The error message is right. You can't do it in that way.
Imagine we still group by BS.Name but do not include MAX(UnitsInStock) in the SELECT list, and instead include only B.Title. Assuming every shop has many books, which one should be shown on each row?
Now remember that each item in the SELECT list is independent of the others. There is nothing to correlate the MAX(UnitsInStock) entry with the book title. Even more so, you could have both MAX(UnitsInStock) and MIN(UnitsInStock) in the select list. Which book title should be shown then?
In short, you can't show a field in the SELECT list unless you either group by the field or use it as part of a function like MAX(), AVG(), etc.
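For instance, adding Title to the GROUP BY makes the original query compile, but it changes the grain. You now get one row per (shop, book) pair instead of one row per shop:
SELECT BS.Name, B.Title, MAX(INV.UnitsInStock) AS Quantity
FROM Inventories AS INV
JOIN BookShops AS BS ON BS.Id = INV.ShopId
JOIN Books AS B ON B.Id = INV.BookId
GROUP BY BS.Name, B.Title
-- compiles, but no longer returns just 3 rows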
Instead, to solve this, you have three options. I'll list them in order from worst to best.
Option 1 is JOINing to two extra subqueries. The first subquery looks much like the original query in the question. It returns the shop ID and the MAX(Units) for every book. The second JOIN is to another subquery that looks at the quantity for ALL books, and includes a condition in the ON clause so only the row with the same value as the previous JOIN will match.
This is bad enough that I'm not even going to show the code. It's a lot more code and more joins/read IOPs, and it can create extra duplicate rows if you're not careful to avoid a tie for the highest inventory item at a shop. But it's how we had to do things before 2012 (or late 2018 if you were on MySQL), and it's how you'd have to do it on a database that doesn't support the next two options.
Option 2 uses an APPLY operation (also called a LATERAL JOIN). It looks like this:
SELECT BS.Name, B.Title, INV.UnitsInStock As Quantity
FROM BookShops BS
OUTER APPLY (
SELECT TOP 1 BookId, UnitsInStock
FROM Inventories i
WHERE i.ShopId = BS.Id
ORDER BY UnitsInStock DESC
) INV
INNER JOIN Books b ON b.Id = INV.BookId
This isn't a bad solution, but it's not as fast as the next option I will show. Still, it can be useful if you have a conflict using the next option.
The final (best!) option uses the row_number() windowing function:
SELECT Name, Title, UnitsInStock As Quantity
FROM (
SELECT BS.Name, B.Title, inv.UnitsInStock,
row_number() over (PARTITION BY BS.Id ORDER BY inv.UnitsInStock DESC) rn
FROM BookShops bs
INNER JOIN Inventories inv ON inv.ShopId = bs.Id
INNER JOIN Books b on b.Id = inv.BookId
) t
WHERE rn = 1
Windowing functions are an important new(-ish) feature to add to your query toolbelt.
Note in each of my examples I also "grouped" by the shop Id, rather than Name. The original queries in the question should group by both of those fields. Grouping a Name field alone is seldom wise.
|
Can't query due to aggregated data, why?
|
We have a database for 3 book shops, all with an attached inventory and books in random units in stock. The query should display each bookstore, so 3 rows, followed by the quantity (which book in X book store has the highest value calculated with MAX(INV.UnitsInStock), and finally a third column that displays the title of the corresponding book.
SELECT BS.Name, B.Title, MAX(UnitsInStock) AS 'Quantity'
FROM Inventories AS INV
JOIN BookShops AS BS ON BS.Id = INV.ShopId
JOIN Books AS B ON B.Id = INV.BookId
GROUP BY BS.Name
This gives me the following error:
Column 'Books.Title' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.
I also tried this:
SELECT BS.Name, MAX(UnitsInStock) AS 'Quantity'
FROM Inventories AS INV
JOIN BookShops AS BS ON BS.Id = INV.ShopId
JOIN Books AS B ON B.Id = INV.BookId
GROUP BY BS.Name
This shows the correct data so far but without the title of the book.
I've tried temp tables, string_agg() (which correctly displays every single book), tried hardcoding each book after finding out exactly which one etc.
How can I fix this?
|
[
"The error message is right. You can't do it in that way.\nImagine we still group by BS.Name, but do not include MAX(UnitsInStock) in the SELECT list, and instead only included B.Title. Assuming every shop has many books, which one should be shown on each row?\nNow remember that each item in the SELECT is list is independent of the others. There is nothing to correlate the MAX(UnitsInStock) entry with the book title. Even more so, you could have both MAX(UnitsInStock) and MIN(UnitsInStock) in the select list. Which book title should be shown then?\nIn short, you can't show a field in the SELECT list unless you either group by the field or use it as part of a function like MAX(), AVG(), etc.\nInstead, to solve this, you have three options. I'll list them in order from worst to best.\nOption 1 is JOINing to two extra subqueries. The first subquery looks much like the original query in the question. It returns the shop ID and the MAX(Units) for every book. The second JOIN is to another subquery that looks at the quantity for ALL books, and includes a condition in the ON clause so only the row with the same value as the previous JOIN will match.\nThis is bad enough I'm not going to even show the code. It's a lot more code, and more joins/read IOPs, and it can create extra duplicate rows if you're not careful to avoid a tie for the highest inventory item at a shop. But it's how we had to do things before 2012 (or late 2018 if you were on MySQL), and it's how you'd have to do it on a database that doesn't support the next two options.\nOption 2 uses an APPLY operation (also called a LATERAL JOIN). It looks like this:\nSELECT BS.Name, B.Title, INV.UnitsInStock As Quantity\nFROM BookShops BS\nOUTER APPLY (\n SELECT TOP 1 BookId, UnitsInStock\n FROM Inventories i\n WHERE i.ShopId = BS.Id\n ORDER BY UnitsInStock DESC\n) INV\nINNER JOIN Books b ON b.Id = INV.BookId\n\nThis isn't a bad solution, but it's not as fast as the next option I will show. Still, it can be useful if you have a conflict using the next option.\nThe final (best!) option uses the row_number() windowing function:\nSELECT Name, Title, UnitsInStock As Quantity \nFROM (\n SELECT BS.Name, B.Title inv.UnitsInStock,\n row_number() over (PARTITION BY BS.Id ORDER BY inv.UnitsInStock DESC) rn\n FROM BookShops bs\n INNER JOIN Inventories inv ON inv.ShopId = bs.Id\n INNER JOIN Books b on b.Id = inv.BookId\n) t\nWHERE rn = 1\n\nWindowing functions are an important new(-ish) feature to add your query toolbelt.\nNote in each of my examples I also \"grouped\" by the shop Id, rather than Name. The original queries in the question should group by both of those fields. Grouping a Name field alone is seldom wise.\n"
] |
[
0
] |
[
"I think you need to add B.Title to the GROUP BY:\nSELECT BS.Name,B.Title, MAX(UnitsInStock) AS 'Quantity'\nFROM Inventories AS INV\nJOIN BookShops AS BS ON BS.Id = INV.ShopId\nJOIN Books AS B ON B.Id = INV.BookId\nGROUP BY BS.Name, B.Title\n\n"
] |
[
-1
] |
[
"sql",
"sql_server"
] |
stackoverflow_0074653613_sql_sql_server.txt
|
Q:
Getx Flutter while updating value throwing null error
Data Model
class DataModel {
Book? book;
DataModel({
this.book,
});
}
class Book {
String author;
Book({
required this.author,
});
}
Getx Controller
class BookController extends GetxController{
var dataModel = DataModel().obs;
updateAuthor(){
dataModel.value.book!.author = "";
}
}
While updating author, GetX throws "Null check operator used on a null value".
After changing the function to this
updateAuthor(){
dataModel.value.book?.author = "";
}
It does not update the value; it stays null.
I also tried the update method but was unable to update the value with this kind of data model (a class nested inside a class).
I am just trying to update the author but unable to do it.
A:
Here, I will explain what you did wrong,
First, you made an object of DataModel.
var dataModel = DataModel().obs;
this dataModel has a book property which you marked with a ?, so you made it nullable.
so at this point this:
dataModel.value.book == null ; // true
you didn't assign a real Book instance to the book property, so it's null, so this line:
dataModel.value.book!.author = ""; // will throw an error
because dataModel.value.book is null.
what you need is to assign a new Book object to the book property, specifying its author, like this:
updateAuthor(){
dataModel.value.book = Book(author: "");
dataModel.refresh();
}
Now we have a real Book object assigned to book, which fixes the null error you got.
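If you later want to change the author without replacing the whole object, a small sketch (an updateAuthor that takes a name parameter is my own variation):
updateAuthor(String name) {
  final book = dataModel.value.book;
  if (book != null) {
    book.author = name;
  } else {
    dataModel.value.book = Book(author: name);
  }
  dataModel.refresh();
}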
|
Getx Flutter while updating value throwing null error
|
Data Model
class DataModel {
Book? book;
DataModel({
this.book,
});
}
class Book {
String author;
Book({
required this.author,
});
}
Getx Controller
class BookController extends GetxController{
var dataModel = DataModel().obs;
updateAuthor(){
dataModel.value.book!.author = "";
}
}
While updating author Getx throwing " null checked used on a null value"
After changing the function to this
updateAuthor(){
dataModel.value.book?.author = "";
}
It is not updating the value, making it null.
Also tried with update method unable to update the value with this type of data model.Class inside class.
I am just trying to update the author but unable to do it.
|
[
"Here, I will explain what you did wrong,\nFirst, you made an object of DataModel.\n var dataModel = DataModel().obs;\n\nthis dataModel have a book property which you marked it with a ? so you made it nullable.\nso at this point this:\ndataModel.value.book == null ; // true\n\nyou didn't assign a real Book instance to the book property, so it's null, so this line:\ndataModel.value.book!.author = \"\"; // will throw an error\n\nbecause dataModel.value.book is null.\nwhat you need to assign a new Book() object to the book property like this, specifying the author property that Book()\n updateAuthor(){\n dataModel.value.book = Book(author: \"\");\n dataModel.refresh();\n }\n\nnow we have a real Book object assigned to book, this will fix the null error you got.\n"
] |
[
2
] |
[] |
[] |
[
"flutter",
"flutter_getx"
] |
stackoverflow_0074658967_flutter_flutter_getx.txt
|
Q:
Is it possible to send my application's JSON log output to Cloudwatch using awslogs agent?
I have a Python application (dockerised) which outputs logs in JSON format.
I'd like to send these logs to CloudWatch. To avoid making any code changes I was hoping to use the awslogs CloudWatch agent to sync the log file to CloudWatch, but after looking at the CloudWatch agent's timestamp format field 'timestamp_format', it expects my application to output the timestamp in a format like:
datetime_format = %d-%b-%Y %H:%M:%S UTC
But my application logs the timestamp as JSON value, e.g:
{"log":"{\"levelname\": \"INFO\", \"message\": \"example log\", \"filename\": \"application.py\", "time":"2022-05-16T11:36:03.04390948Z"}
Is it possible for me to send these logs to Cloudwatch using the agent? If not, what would be the best alternative?
Thank you
A:
You can update the timestamp_format field to match your logs by updating the configuration file for the CloudWatch Agent.
From your JSON field, your timestamp format should look like this in your CloudWatch Agent's config file (named config.json by default):
{
"logs": {
"logs_collected": {
"files": {
"collect_list": [
{
"file_path": "/path/to/logfile.log",
"log_group_name": "log_group",
"log_stream_name": "log_stream",
"timezone": "UTC",
"timestamp_format": "%Y-%m-%dT%H:%M:%S.%f"
}
]
}
}
}
}
As mentioned in the AWS documentation, the %f formatter for fractional seconds matches from one to nine digits. I've successfully sent logs via the CloudWatch Agent with this timestamp_format for timestamps in JSON logs with three-digit and nine-digit fractional second precision.
Note that if you don't specify the timezone as UTC, then the value will default to Local - beware of daylight savings!
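To make the mapping concrete, the timestamp from the question lines up with that format string like this (illustrative only; the trailing Z is simply not part of the match):
2022-05-16T11:36:03.04390948Z
%Y-%m-%dT%H:%M:%S.%f
Here %f consumes all eight fractional digits, which is within the one-to-nine digit range mentioned above.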
|
Is it possible to send my application's JSON log output to Cloudwatch using awslogs agent?
|
I have a python application (dockerised) which outputs logs in JSON format.
I'd like to send these logs to Cloudwatch. To avoid making any code changes I was hoping to use the awslogs Cloudwatch agent to sync the log file to Cloudwatch, but after looking at the Cloudwatch agent's timestamp format field 'timestamp_format', it's expecting my application to output the timestamp in a format like:
datetime_format = %d-%b-%Y %H:%M:%S UTC
But my application logs the timestamp as JSON value, e.g:
{"log":"{\"levelname\": \"INFO\", \"message\": \"example log\", \"filename\": \"application.py\", "time":"2022-05-16T11:36:03.04390948Z"}
Is it possible for me to send these logs to Cloudwatch using the agent? If not, what would be the best alternative?
Thank you
|
[
"You can update the timestamp_format field to match your logs by updating the configuration file for the CloudWatch Agent.\nFrom your JSON field, your timestamp format should look like this in your CloudWatch Agent's config file (named config.json by default):\n{\n \"logs\": {\n \"logs_collected\": {\n \"files\": {\n \"collect_list\": [\n {\n \"file_path\": \"/path/to/logfile.log\",\n \"log_group_name\": \"log_group\",\n \"log_stream_name\": \"log_stream\",\n \"timezone\": \"UTC\",\n \"timestamp_format\": \"%Y-%m-%dT%H:%M:%S.%f\"\n }\n ]\n }\n }\n }\n}\n\nAs mentioned in the AWS documentation, the %f formatter for fractional seconds matches from one to nine digits. I've successfully sent logs via the CloudWatch Agent with this timestamp_format for timestamps in JSON logs with three-digit and nine-digit fractional second precision.\nNote that if you don't specify the timezone as UTC, then the value will default to Local - beware of daylight savings!\n"
] |
[
0
] |
[] |
[] |
[
"amazon_cloudwatch",
"amazon_web_services",
"json"
] |
stackoverflow_0072366226_amazon_cloudwatch_amazon_web_services_json.txt
|
Q:
how to run .NET MAUI on windows without visual studio?
Is it possible to run the .net Maui project on windows in the command prompt? I tried simple "dotnet run"
but it said:
C:\Program Files\dotnet\packs\Microsoft.Android.Sdk.Windows\32.0.476\tools\Xamarin.Android.Tooling.targets(69,5): error
XA5300: The Android SDK directory could not be found. Check that the Android SDK Manager in Visual Studio shows a vali
d installation. To use a custom SDK path for a command line build, set the 'AndroidSdkDirectory' MSBuild property to th
e custom path. [C:\Users\Windows\Documents\csharp-scripts\MauiTest1\MauiTest1.csproj]
I don't want to test the app on android but on windows
A:
In my testing, I used the following steps for using .NET MAUI on Windows without Visual Studio, and it works fine.
Step1. Install .NET SDK in https://dotnet.microsoft.com/en-us/download .
Step2. Install .NET MAUI workload with the dotnet CLI. Launch a command prompt and enter the following:
dotnet workload install maui
Step3. Verify and install missing components with maui-check command line utility.
dotnet tool install -g redth.net.MAUI.check
maui-check
Step4. Create a new folder and a new MAUI app.
Step5. Start your Android Emulator.
Step6. Run the MAUI app in the Android Simulator.
dotnet build -t:Run -f net6.0-android
According to the Build Windows target using dotnet build instead of msbuild.exe, it seems like using the dotnet way to build a .NET MAUI app for Windows doesn't work with .NET MAUI.
A:
For Windows, the best way I found out to compile and run the app is by doing it in 3 steps:
Compile with msbuild
Install the app using Add-AppxPackage
Run the package
I've written the following script for pwsh 7:
# Locate msbuild
import-module appx -usewindowspowershell
$msbuild = "$((gci -Path (gci "$env:ProgramFiles\Microsoft Visual Studio\2*" )[-1])[-1].FullName)\MSBuild\Current\Bin\msbuild.exe"
# Find sln
$sln = gci *.sln -Recurse -Depth 3 | Select -First 1
# 1. Build
& $msbuild $sln -r -p:Configuration=Debug -p:RestorePackages=false -p:TargetFramework=net6.0-windows10.0.19041 -m
# 2. Find AppxManifest and Install it
$appManifest = gci -Path $sln.Directory AppxManifest.xml -Recurse -Depth 6 | Select -First 1
Add-AppPackage $appManifest.FullName -Register
# 3. Get app guid to launch it
$guid = (gci *.csproj -Path $sln.Directory -Recurse -Depth 4 | gc | Select-String -Pattern 'Guid>([^<]*)<').Matches.Groups[1].Value
$app = Get-AppxPackage -Name $guid;
explorer "shell:appsFolder\$($app.PackageFamilyName)!App"
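A hypothetical invocation, assuming the script above is saved as build-run.ps1 next to the solution:
# From the folder containing the .sln, using PowerShell 7+
pwsh -File .\build-run.ps1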
|
how to run .NET MAUI on windows without visual studio?
|
Is it possible to run the .net Maui project on windows in the command prompt? I tried simple "dotnet run"
but it said:
C:\Program Files\dotnet\packs\Microsoft.Android.Sdk.Windows\32.0.476\tools\Xamarin.Android.Tooling.targets(69,5): error
XA5300: The Android SDK directory could not be found. Check that the Android SDK Manager in Visual Studio shows a vali
d installation. To use a custom SDK path for a command line build, set the 'AndroidSdkDirectory' MSBuild property to th
e custom path. [C:\Users\Windows\Documents\csharp-scripts\MauiTest1\MauiTest1.csproj]
I don't want to test the app on android but on windows
|
[
"In my testing, I used the following steps for using .NET MAUI on Windows without Visual Studio, and it works fine.\nStep1. Install .NET SDK in https://dotnet.microsoft.com/en-us/download .\nStep2. Install .NET MAUI workload with the dotnet CLI. Launch a command prompt and enter the following:\ndotnet workload install maui\n\nStep3. Verify and install missing components with maui-check command line utility.\ndotnet tool install -g redth.net.MAUI.check\nmaui-check\n\nStep4. Create a new folder and a new MAUI app.\nStep5. Start your Android Emulator.\nStep6. Run the MAUI app in the Android Simulator.\ndotnet build -t:Run -f net6.0-android\n\nAccording to the Build Windows target using dotnet build instead of msbuild.exe, it seems like using dotnet way to to build a .NET MAUI app for Windows doesn't work with the.NET MAUI.\n",
"For Windows, the best way I found out to compile and run the app is by doing it in 3 steps:\n\nCompile with msbuild\nInstall the app using Add-AppxPackage\nRun the package\n\nI've written the following script for pwsh 7:\n# Locate msbuild\nimport-module appx -usewindowspowershell\n$msbuild = \"$((gci -Path (gci \"$env:ProgramFiles\\Microsoft Visual Studio\\2*\" )[-1])[-1].FullName)\\MSBuild\\Current\\Bin\\msbuild.exe\"\n\n# Find sln\n$sln = gci *.sln -Recurse -Depth 3 | Select -First 1\n\n# 1. Build\n& $msbuild $sln -r -p:Configuration=Debug -p:RestorePackages=false -p:TargetFramework=net6.0-windows10.0.19041 -m\n\n# 2. Find AppxManifest and Install it\n$appManifest = gci -Path $sln.Directory AppxManifest.xml -Recurse -Depth 6 | Select -First 1\nAdd-AppPackage $appManifest.FullName -Register\n\n# 3. Get app guid to launch it\n$guid = (gci *.csproj -Path $sln.Directory -Recurse -Depth 4 | gc | Select-String -Pattern 'Guid>([^<]*)<').Matches.Groups[1].Value\n$app = Get-AppxPackage -Name $guid;\nexplorer \"shell:appsFolder\\$($app.PackageFamilyName)!App\"\n\n"
] |
[
1,
0
] |
[] |
[] |
[
".net",
"maui",
"windows"
] |
stackoverflow_0074088087_.net_maui_windows.txt
|
Q:
cx Oracle not showing query results
This connection works, but the result is just the text of the query itself:
Connection = cx_Oracle.connect(user=username, password=password, dsn=dsn, encoding=enc)
query = 'simple select statement'
cursor = Connection.cursor()
cursor.execute(query)
Connection.commit()
cursor.close()
print(query)
The result in the dataframe prints 'SELECT RECV_MBR_ID...' instead of the IDs. What am I missing?
A:
This is not unexpected! You are simply printing the value to which you set that variable! You need to fetch the results. You can do this in one of several ways. I'll show a couple of the more common ones here:
for row in cursor.execute(query):
print(row)
OR
cursor.execute(query)
print(cursor.fetchall())
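Since the question mentions a dataframe, a hedged variant (assuming pandas is installed; pandas can consume the open cx_Oracle connection directly):
import pandas as pd

# Fetch the result set straight into a DataFrame before closing the connection.
df = pd.read_sql(query, Connection)
print(df)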
|
cx Oracle not showing query results
|
This connection works, but the result is just the text of the query itself:
Connection = cx_Oracle.connect(user=username, password=password, dsn=dsn, encoding=enc)
query = 'simple select statement'
cursor = Connection.cursor()
cursor.execute(query)
Connection.commit()
cursor.close()
print(query)
The result in the dataframe prints 'SELECT RECV_MBR_ID...' instead of the IDs. What am I missing?
|
[
"This is not unexpected! You are simply printing the value to which you set that variable! You need to fetch the results. You can do this in one of several ways. I'll show a couple of the more common ones here:\nfor row in cursor.execute(query):\n print(row)\n\nOR\ncursor.execute(query)\nprint(cursor.fetchall())\n\n"
] |
[
1
] |
[] |
[] |
[
"cx_oracle",
"python_3.x"
] |
stackoverflow_0074659051_cx_oracle_python_3.x.txt
|
Q:
Chrome ignores tag's target argument when link is programmatically "clicked"
I have a project that is implemented with Django, Dojo and JS. I've been banging my head against this issue all week and cannot find any hints online as to what the problem might be. Hopefully someone here has either run into this issue as well and can shed some light on the problem.
I have an unordered list <ul> similar to the following:
```
<ul>
<li><a target="displayArea" id="pre-select" href="foo.html">Foo</a></li>
<li><a target="displayArea" href="bar.html">Bar</a></li>
<li><a target="displayArea" href="baz.html">Baz</a></li>
</ul>
```
elsewhere in the code is the target iframe:
```
<iframe style="border: none;" name="displayArea" id="displayArea" width="100%" height="95%" title="Report Area"></iframe>
```
When I mouse over and click "Foo" it works as expected, "Foo" is displayed in the iframe shown above. If, however, I execute the following script which is immediately after the </ul> in the code:
<script>
window.document.getElementById('pre-select').click();
</script>
"Foo" is not displayed in the iframe, instead it briefly appears in the host window frame then disappears. I'm not sure why there is a difference clicking directly on the link as it appears in the <ul> or executing the <script> that does the "click" programmatically.
Any insight would be greatly appreciated...
As stated in the problem description I expected the link to appear in the target iframe when the script:
<script>
window.document.getElementById('pre-select').click();
</script>
is executed but it doesn't appear in the iframe. If the link is clicked with the mouse it works as expected.
A:
I was able to accomplish what I needed to by adding an onload to the of my HTML like this:
<body onload="document.getElementById('pre-select').click()">
This does exactly what I need and can be changed easily when the users inevitably change their minds :)
Thanks to epascarello and markfila for their help pointing me in the right direction.
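If inline handlers are undesirable, an equivalent sketch using a load listener in a regular script block should behave the same way (the 'pre-select' id is taken from the question):
<script>
  // Wait for the full page (including the iframe) to load, then trigger the link.
  window.addEventListener('load', function () {
    document.getElementById('pre-select').click();
  });
</script>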
|
Chrome ignores tag's target argument when link is programmatically "clicked"
|
I have a project that is implemented with Django, Dojo and JS. I've been banging my head against this issue all week and cannot find any hints online as to what the problem might be. Hopefully someone here has either run into this issue as well and can shed some light on the problem.
I have an unordered list <ul> similar to the following:
```
<ul>
<li><a target="displayArea" id="pre-select" href="foo.html">Foo</a></li>
<li><a target="displayArea" href="bar.html">Bar</a></li>
<li><a target="displayArea" href="baz.html">Baz</a></li>
</ul>
```
elsewhere in the code is the target iframe:
```
<iframe style="border: none;" name="displayArea" id="displayArea" width="100%" height="95%" title="Report Area"></iframe>
```
When I mouse over and click "Foo" it works as expected, "Foo" is displayed in the iframe shown above. If, however, I execute the following script which is immediately after the </ul> in the code:
<script>
window.document.getElementById('pre-select').click();
</script>
"Foo" is not displayed in the iframe, instead it briefly appears in the host window frame then disappears. I'm not sure why there is a difference clicking directly on the link as it appears in the <ul> or executing the <script> that does the "click" programmatically.
Any insight would be greatly appreciated...
As stated in the problem description I expected the link to appear in the target iframe when the script:
<script>
window.document.getElementById('pre-select').click();
</script>
is executed but it doesn't appear in the iframe. If the link is clicked with the mouse it works as expected.
|
[
"I was able to accomplish what I needed to by adding an onload to the of my HTML like this:\n<body onload=\"document.getElemenById('pre-select').click()\">\n\nThis does exactly what I need and can be changed easily when the users inevitably change their minds :)\nThanks to epascarello and markfila for their help pointing me in the right direction.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"dojo",
"html",
"javascript"
] |
stackoverflow_0074646541_django_dojo_html_javascript.txt
|
Q:
Missing Run And Debug Launch Configuration button in VS Code
I have the following situation.
I cannot change launch Dart configurations anymore since the button is missing.
I think I hid it by accident but I cannot make it appear anymore.
Launch.json is still accessible but the button has disappeared.
Any help is appreciated, I don't really want to reinstall VS Code and I don't think it would help.
A:
I've found out how to restore the button's position.
Type the keyboard combination to open the Command Palette (e.g. Command + Shift + P on macOS) and run the command View: Reset All Menus.
This'll make the hidden view reappear.
The option to hide it is called Hide 'Start Debugging' and it cannot be found in the settings, so you have to reset the views to make it reappear.
A:
An alternative, if anyone else is facing this: If you right-click the gear and check "Start debugging" then that should bring it back. Despite the name, it won't actually start a debugging session!
|
Missing Run And Debug Launch Configuration button in VS Code
|
I have the following situation.
I cannot change launch Dart configurations anymore since the button is missing.
I think I hid it by accident but I cannot make it appear anymore.
Launch.json is still accessible but the button has disappeared.
Any help is appreciated, I don't really want to reinstall VS Code and I don't think it would help.
|
[
"I've found out how to restore the button's position.\nType the keyboard combination to start a Command (e.g.Command + P on MacOS) and run the command View: Reset All Menus.\nThis'll make the hidden view reappear.\nThe option to hide it is called Hide 'Start Debugging' and it cannot be found in the settings, so you have to reset the views to make it reappear.\n",
"An alternative, if anyone else is facing this: If you right-click the gear and check \"Start debugging\" then that should bring it back. Despite the name, it won't actually start a debugging session!\n"
] |
[
1,
0
] |
[] |
[] |
[
"dart",
"flutter",
"visual_studio_code"
] |
stackoverflow_0074245736_dart_flutter_visual_studio_code.txt
|
Q:
How to write a multiline command?
How do we extend a command to the next line?
Basically what's the Windows alternative for Linux's:
ls -l \
/usr/
Here we use backslashes to extend the command onto the next lines.
What's the equivalent for Windows?
A:
After trying almost every key on my keyboard:
C:\Users\Tim>cd ^
Mehr? Desktop
C:\Users\Tim\Desktop>
So it seems to be the ^ key.
A:
In the Windows Command Prompt the ^ is used to escape the next character on the command line. (Like \ is used in strings.) Characters that need to be used in the command line as they are should have a ^ prefixed to them, hence that's why it works for the newline.
For reference the characters that need escaping (if specified as command arguments and not within quotes) are: &|()
So the equivalent of your linux example would be (the More? being a prompt):
C:\> dir ^
More? C:\Windows
A:
The caret character works; however, the next line should not start with double quotes, e.g. this will not work:
C:\ ^
"SampleText" ..
Start next line without double quotes (not a valid example, just to illustrate)
A:
Solution 1: In CMD, use ^. Example:
docker run -dp 3000:3000 ^
-w /app -v "$(pwd):/app" ^
--network todo-app ^
-e MYSQL_HOST=mysql ^
-e MYSQL_USER=root ^
-e MYSQL_PASSWORD=secret ^
-e MYSQL_DB=todos ^
node:12-alpine ^
cmd "npm install && yarn run start"
Solution 2: In PowerShell, use ` (backtick). Example:
docker run -dp 3000:3000 `
-w /app -v "$(pwd):/app" `
--network todo-app `
-e MYSQL_HOST=mysql `
-e MYSQL_USER=root `
-e MYSQL_PASSWORD=secret `
-e MYSQL_DB=todos `
           node:12-alpine `
cmd "npm install && npm run start"
A:
If you came here looking for an answer to this question but not exactly the way the OP meant, ie how do you get multi-line CMD to work in a single line, I have a sort of dangerous answer for you.
Trying to use this with things that actually use piping, like say findstr is quite problematic. The same goes for dealing with elses. But if you just want a multi-line conditional command to execute directly from CMD and not via a batch file, this should work well.
Let's say you have something like this in a batch that you want to run directly in command prompt:
@echo off
for /r %%T IN (*.*) DO (
if /i "%%~xT"==".sln" (
echo "%%~T" is a normal SLN file, and not a .SLN.METAPROJ or .SLN.PROJ file
echo Dumping SLN file contents
type "%%~T"
)
)
Now, you could use the line-continuation caret (^) and manually type it out like this, but warning, it's tedious and if you mess up you can learn the joy of typing it all out again.
Well, it won't work with just ^ thanks to escaping mechanisms inside of parentheses shrug At least not as-written. You actually would need to double up the carets like so:
@echo off ^
More? for /r %T IN (*.sln) DO (^^
More? if /i "%~xT"==".sln" (^^
More? echo "%~T" is a normal SLN file, and not a .SLN.METAPROJ or .SLN.PROJ file^^
More? echo Dumping SLN file contents^^
More? type "%~T"))
Instead, you can be a dirty sneaky scripter from the wrong side of the tracks that don't need no carets by swapping them out for a single pipe (|) per continuation of a loop/expression:
@echo off
for /r %T IN (*.sln) DO if /i "%~xT"==".sln" echo "%~T" is a normal SLN file, and not a .SLN.METAPROJ or .SLN.PROJ file | echo Dumping SLN file contents | type "%~T"
A:
In Windows 10 22H2, simply pressing Enter after an open parenthesis gives you the same result as typing ^ at the end of a line.
For example:
>for %f in (*) do (
press enter here and you're in a new line automatically:
More?
|
How to write a multiline command?
|
How do we extend a command to the next line?
Basically what's the Windows alternative for Linux's:
ls -l \
/usr/
Here we use backslashes to extend the command onto the next lines.
What's the equivalent for Windows?
|
[
"After trying almost every key on my keyboard:\nC:\\Users\\Tim>cd ^\nMehr? Desktop\n\nC:\\Users\\Tim\\Desktop>\n\nSo it seems to be the ^ key.\n",
"In the Windows Command Prompt the ^ is used to escape the next character on the command line. (Like \\ is used in strings.) Characters that need to be used in the command line as they are should have a ^ prefixed to them, hence that's why it works for the newline.\nFor reference the characters that need escaping (if specified as command arguments and not within quotes) are: &|() \nSo the equivalent of your linux example would be (the More? being a prompt):\nC:\\> dir ^\nMore? C:\\Windows\n\n",
"The caret character works, however the next line should not start with double quotes. e.g. this will not work:\nC:\\ ^\n\"SampleText\" ..\n\nStart next line without double quotes (not a valid example, just to illustrate)\n",
"Solution 1 In CMD, Use ^ , example\ndocker run -dp 3000:3000 ^\n -w /app -v \"$(pwd):/app\" ^\n --network todo-app ^\n -e MYSQL_HOST=mysql ^\n -e MYSQL_USER=root ^\n -e MYSQL_PASSWORD=secret ^\n -e MYSQL_DB=todos ^\n node:12-alpine ^\n cmd \"npm install && yarn run start\"\n\nSolution 2 In PowerShell, Use ` (backtick) , Example\ndocker run -dp 3000:3000 `\n -w /app -v \"$(pwd):/app\" `\n --network todo-app `\n -e MYSQL_HOST=mysql `\n -e MYSQL_USER=root `\n -e MYSQL_PASSWORD=secret `\n -e MYSQL_DB=todos `\n node:12-alpine ^\n cmd \"npm install && npm run start\"\n\n",
"If you came here looking for an answer to this question but not exactly the way the OP meant, ie how do you get multi-line CMD to work in a single line, I have a sort of dangerous answer for you.\nTrying to use this with things that actually use piping, like say findstr is quite problematic. The same goes for dealing with elses. But if you just want a multi-line conditional command to execute directly from CMD and not via a batch file, this should work well.\nLet's say you have something like this in a batch that you want to run directly in command prompt:\n@echo off\nfor /r %%T IN (*.*) DO (\n if /i \"%%~xT\"==\".sln\" (\n echo \"%%~T\" is a normal SLN file, and not a .SLN.METAPROJ or .SLN.PROJ file\n echo Dumping SLN file contents\n type \"%%~T\"\n )\n)\n\nNow, you could use the line-continuation carat (^) and manually type it out like this, but warning, it's tedious and if you mess up you can learn the joy of typing it all out again.\nWell, it won't work with just ^ thanks to escaping mechanisms inside of parentheses shrug At least not as-written. You actually would need to double up the carats like so:\n@echo off ^\nMore? for /r %T IN (*.sln) DO (^^\nMore? if /i \"%~xT\"==\".sln\" (^^\nMore? echo \"%~T\" is a normal SLN file, and not a .SLN.METAPROJ or .SLN.PROJ file^^\nMore? echo Dumping SLN file contents^^\nMore? type \"%~T\"))\n\nInstead, you can be a dirty sneaky scripter from the wrong side of the tracks that don't need no carats by swapping them out for a single pipe (|) per continuation of a loop/expression:\n@echo off\nfor /r %T IN (*.sln) DO if /i \"%~xT\"==\".sln\" echo \"%~T\" is a normal SLN file, and not a .SLN.METAPROJ or .SLN.PROJ file | echo Dumping SLN file contents | type \"%~T\"\n\n",
"In windows 10 22H2, simply press enter after an open parenthesis gives you same result as typing ^ at the end of line.\nFor example:\n>for %f in (*) do (\n\npress enter here and you're in a new line automatically:\nMore? \n\n"
] |
[
412,
76,
13,
9,
8,
0
] |
[] |
[] |
[
"command_line",
"command_prompt",
"windows"
] |
stackoverflow_0000605686_command_line_command_prompt_windows.txt
|
Q:
What is std::views::counted?
On https://en.cppreference.com/w/cpp/ranges, std::views::counted is listed in the range adaptors section. However, it is not tagged as range adaptor object.
I guess that why I can't write using the pipe operator like:
std::vector<size_t> vec = {1, 2, 3, 4, 5};
auto view = vec | std::ranges::counted(... ; // does not compile
My questions are:
what is a std::ranges::counted? Why is it listed in the range adaptor section?
what are the use cases? what are the advantages over using take and drop?
A:
Cppreference is following the organization of the C++20 standard. And it puts views::counted into the "Range Adaptors" section. Despite the fact that the standard says:
These adaptors can be chained to create pipelines of range transformations that evaluate lazily as the resulting view is iterated.
This is not true of the behavior of views::counted. Indeed, most of the other elements in that section say that their customization points "denotes a range adaptor object" (which describes the piping functionality), but views::counted does not.
It's unclear why they put it in that section, but it is a useful type in-and-of itself. It's really just an efficient way of saying subrange(it, it + n). It is efficient in that it doesn't actually increment the iterator by n.
The advantage it has over take_view is that take_view operates on a range, while all counted needs is an iterator. The main difference is that counted assumes that there are n valid iterator positions (and will give UB if that is not the case), while take_view does not. take_view will give you up to n objects, but if the range is shorter than that (as defined by the sentinel), it doesn't try to iterate past the end of the range.
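As a rough illustration (my own sketch, not from the answer): because counted only needs an iterator, it can consume an input stream where no usable range or it + n exists:
#include <iostream>
#include <iterator>
#include <ranges>
#include <sstream>

int main()
{
    std::istringstream in{"1 2 3 4 5"};
    auto it = std::istream_iterator<int>{in};
    // counted wraps the input iterator directly; no sentinel is required.
    for (int i : std::views::counted(it, 3))
        std::cout << i << ' '; // prints: 1 2 3
}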
A:
According to the docs
A counted view presents a view of the elements of the counted range [i, n) for some iterator i and non-negative integer n.
A counted range [i, n) is the n elements starting with the element pointed to by i and up to but not including the element, if any, pointed to by the result of n applications of ++i.
So essentially it returns a slice given a starting iterator and a number of elements to include after that iterator. The example shown in the docs is
#include <ranges>
#include <iostream>
int main()
{
const int a[] = {1, 2, 3, 4, 5, 6, 7};
for(int i : std::views::counted(a, 3))
std::cout << i << ' ';
std::cout << '\n';
const auto il = {1, 2, 3, 4, 5};
for (int i : std::views::counted(il.begin() + 1, 3))
std::cout << i << ' ';
std::cout << '\n';
}
Output
1 2 3
2 3 4
Comparing the specific functions you listed, here are their summaries:
std::ranges::views::take: a view consisting of (up to) the first N elements of another view
std::ranges::views::drop: a view consisting of elements of another view, skipping (up to) the first N elements
std::ranges::views::counted: creates a subrange from an iterator and a count, always containing exactly N elements
A:
views::counted is similar to views::take. However, the former accepts an iterator instead of a range.
One of the benefits of views::counted over views::take is that it allows us to construct a sized input/output range when we already know its size:
auto ints = views::istream<int>(std::cin);
auto counted = views::counted(ints.begin(), 4);
auto take = views::take(ints, 4);
static_assert(ranges::sized_range<decltype(counted)>); // ok
static_assert(ranges::sized_range<decltype(take)>); // failed
Unlike views::take, since we omit the sentinel information, we must ensure that the iterator is still valid after incrementing n times, while the former is guaranteed to always return a valid range.
|
What is std::views::counted?
|
On https://en.cppreference.com/w/cpp/ranges, std::views::counted is listed in the range adaptors section. However, it is not tagged as range adaptor object.
I guess that why I can't write using the pipe operator like:
std::vector<size_t> vec = {1, 2, 3, 4, 5};
auto view = vec | std::ranges::counted(... ; // does not compile
My questions are:
what is a std::ranges::counted? Why is it listed in the range adaptor section?
what are the use cases? what are the advantages over using take and drop?
|
[
"Cppreference is following the organization of the C++20 standard. And it puts views::counted into the \"Range Adaptors\" section. Despite the fact that the standard says:\n\nThese adaptors can be chained to create pipelines of range transformations that evaluate lazily as the resulting view is iterated.\n\nThis is not true of the behavior of views::counted. Indeed, most of the other elements in that section say that their customization points \"denotes a range adaptor object\" (which describes the piping functionality), but views::counted does not.\nIt's unclear why they put it in that section, but it is a useful type in-and-of itself. It's really just an efficient way of saying subrange(it, it + n). It is efficient in that it doesn't actually increment the iterator by n.\nThe advantage it has over take_view is that take_view operates on a range, while all counted needs is an iterator. The main difference is that counted assumes that there are n valid iterator positions (and will give UB if that is not the case), while take_view does not. take_view will give you up to n objects, but if the range is shorter than that (as defined by the sentinel), it doesn't try to iterate past the end of the range.\n",
"According to the docs\n\nA counted view presents a view of the elements of the counted range [i, n) for some iterator i and non-negative integer n.\nA counted range [i, n) is the n elements starting with the element pointed to by i and up to but not including the element, if any, pointed to by the result of n applications of ++i.\n\nSo essentially it returns a slice given a starting iterator and a number of elements to include after that iterator. The example shown in the docs is\n#include <ranges>\n#include <iostream>\n \nint main()\n{\n const int a[] = {1, 2, 3, 4, 5, 6, 7};\n for(int i : std::views::counted(a, 3))\n std::cout << i << ' ';\n std::cout << '\\n';\n \n const auto il = {1, 2, 3, 4, 5};\n for (int i : std::views::counted(il.begin() + 1, 3))\n std::cout << i << ' ';\n std::cout << '\\n';\n}\n\nOutput\n1 2 3\n2 3 4\n\nComparing the specific functions you listed, here are their summaries:\n\nstd::ranges::views::take: a view consisting of (up to) the first N elements of another view\nstd::ranges::views::drop: a view consisting of elements of another view, skipping (up to) the first N elements\nstd::ranges::views::counted: creates a subrange from an iterator and a count, always containing exactly N elements\n\n",
"views::counted is similar to views::take. However, the former accepts an iterator instead of a range.\nOne of the benefits of views::counted over views::take is that it allows us to construct a sized input/output range when we already know its size:\nauto ints = views::istream<int>(std::cin);\n\nauto counted = views::counted(ints.begin(), 4);\nauto take = views::take(ints, 4);\n\nstatic_assert(ranges::sized_range<decltype(counted)>); // ok\nstatic_assert(ranges::sized_range<decltype(take)>); // failed\n\nUnlike views::take, since we omit the sentinel information, we must ensure that the iterator is still valid after incrementing n times, while the former is guaranteed to always return a valid range.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"c++",
"c++20",
"std",
"std_ranges"
] |
stackoverflow_0074657339_c++_c++20_std_std_ranges.txt
|
Q:
RXJS: throttleTime plus last value
I have a scenario where a lot of events can be sent to a stream in a short amount of time. I would like to have an operator that is kind of a mixture of debounceTime and throttleTime.
The following demo can be used to illustrate what I would like to have, https://stackblitz.com/edit/rxjs6-demo-jxbght?file=index.ts.
I would like the subscriber to get the first emitted event and THEN wait for x ms. If more events were emitted during the waiting time, the last event should be sent to the subscriber after the waiting time. The waiting time should be reset on each new emitted event, just like debounce do.
If you click on the button 3 times within 1 second it should print 1 and 3. If you then click only 1 time within 1 second it should print only 4. If you then click 3 times again it should print 5 and 7.
This does not work with debounceTime since that doesn't give me the first event and it doesn't work for throttleTime because that doesn't give me the last emitted value after the wait time is over.
Any suggestions on how to implement this?
UPDATE
I created a custom operator with help from Martin's answer. Not sure if it is working 100% correctly or if there are better ways to do it but it seems to do what I want it to.
import { Observable, empty } from 'rxjs';
import { exhaustMap, timeoutWith, debounceTime, take, startWith } from 'rxjs/operators';
export function takeFirstThenDebounceTime(waitTime) {
return function takeFirstThenDebounceTimeImplementation(source) {
return Observable.create(subscriber => {
const subscription = source.
pipe(
exhaustMap(val => source.pipe(
timeoutWith(waitTime, empty()),
debounceTime(waitTime),
take(1),
startWith(val)
)),
)
.subscribe(value => {
subscriber.next(value);
},
err => subscriber.error(err),
() => subscriber.complete());
return subscription;
});
}
}
A:
This operator should give you the first value, the throttled stream and the last value:
export function throunceTime<T>(duration: number): MonoTypeOperatorFunction<T> {
return (source: Observable<T>) =>
merge(source.pipe(throttleTime(duration)), source.pipe(debounceTime(duration)))
.pipe(throttleTime(0, undefined, { leading: true, trailing: false }));
}
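For reference, a minimal usage sketch (assuming RxJS 6+ and that throunceTime is exported from the same module):
import { fromEvent } from 'rxjs';

// First click fires immediately, bursts are throttled in between,
// and the last click of a burst is emitted once it settles.
fromEvent(document.querySelector('button')!, 'click')
  .pipe(throunceTime(1000))
  .subscribe(() => console.log('clicked'));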
A:
In RxJS 6 there are some additional options for the throttleTime operator that are undocumented right now and where you can make it to emit at both start and end of the duration. So maybe this could help you.
https://github.com/ReactiveX/rxjs/blob/master/src/internal/operators/throttleTime.ts#L55
https://github.com/ReactiveX/rxjs/blob/master/src/internal/operators/throttle.ts#L12
However, since you want to reset the timeout on every emission it'll be more complicated. This should be a simplified example with random emits, but I wonder if there's an easier way to do it:
const shared = source.pipe(shareReplay(1))
shared
.pipe(
exhaustMap(val => shared.pipe(
timeout(1000),
catchError(() => empty()),
debounceTime(1000),
take(1),
startWith(val),
))
)
.subscribe(v => console.log(v))
This demo debounces after 175ms gap. I hope it makes sense to you.
Demo: https://stackblitz.com/edit/rxjs6-demo-ztppwy?file=index.ts
A:
you probably don't need a custom operator if you use the trailing and leading config:
throttleTime(500, undefined, { leading: true, trailing: true })
|
RXJS: throttleTime plus last value
|
I have a scenario where a lot of events can be sent to a stream in a short amount of time. I would like to have an operator that is kind of a mixture of debounceTime and throttleTime.
The following demo can be used to illustrate what I would like to have, https://stackblitz.com/edit/rxjs6-demo-jxbght?file=index.ts.
I would like the subscriber to get the first emitted event and THEN wait for x ms. If more events were emitted during the waiting time, the last event should be sent to the subscriber after the waiting time. The waiting time should be reset on each new emitted event, just like debounce do.
If you click on the button 3 times within 1 second it should print 1 and 3. If you then click only 1 time within 1 second it should print only 4. If you then click 3 times again it should print 5 and 7.
This does not work with debounceTime since that doesn't give me the first event and it doesn't work for throttleTime because that doesn't give me the last emitted value after the wait time is over.
Any suggestions on how to implement this?
UPDATE
I created a custom operator with help from Martin's answer. Not sure if it is working 100% correctly or if there are better ways to do it but it seems to do what I want it to.
import { Observable, empty } from 'rxjs';
import { exhaustMap, timeoutWith, debounceTime, take, startWith } from 'rxjs/operators';
export function takeFirstThenDebounceTime(waitTime) {
return function takeFirstThenDebounceTimeImplementation(source) {
return Observable.create(subscriber => {
const subscription = source.
pipe(
exhaustMap(val => source.pipe(
timeoutWith(waitTime, empty()),
debounceTime(waitTime),
take(1),
startWith(val)
)),
)
.subscribe(value => {
subscriber.next(value);
},
err => subscriber.error(err),
() => subscriber.complete());
return subscription;
});
}
}
|
[
"This operator should give you the first value, the throttled stream and the last value:\nexport function throunceTime<T>(duration: number): MonoTypeOperatorFunction<T> {\n return (source: Observable<T>) =>\n merge(source.pipe(throttleTime(duration)), source.pipe(debounceTime(duration)))\n .pipe(throttleTime(0, undefined, { leading: true, trailing: false }));\n}\n\n",
"In RxJS 6 there are some additional options for the throttleTime operator that are undocumented right now and where you can make it to emit at both start and end of the duration. So maybe this could help you.\nhttps://github.com/ReactiveX/rxjs/blob/master/src/internal/operators/throttleTime.ts#L55\nhttps://github.com/ReactiveX/rxjs/blob/master/src/internal/operators/throttle.ts#L12\nHowever, since you want to reset the timeout on every emission it'll be more complicated. This should be simplified example with random emits but I wonder if there's an easier way to do it:\nconst shared = source.pipe(shareReplay(1))\n\nshared\n .pipe(\n exhaustMap(val => shared.pipe(\n timeout(1000),\n catchError(() => empty()),\n debounceTime(1000),\n take(1),\n startWith(val),\n ))\n )\n .subscribe(v => console.log(v))\n\nThis demo debounces after 175ms gap. I hope it makes sense to you.\nDemo: https://stackblitz.com/edit/rxjs6-demo-ztppwy?file=index.ts\n",
"you probably don't need a custom operator if you use the trailing and leading config:\nthrottleTime(500, undefined, { leading: true, trailing: true })\n\n"
] |
[
9,
7,
0
] |
[] |
[] |
[
"rxjs"
] |
stackoverflow_0052038067_rxjs.txt
|
Q:
How do I play music on loop in batch?
I've gotten some code that makes wmplayer play in the background, but I can't get it to loop itself after it's done. Any ideas on how to do this?
@echo off
set "file=GameMusic.mp3"
( echo Set Sound = CreateObject("WMPlayer.OCX.7"^)
echo Sound.URL = "%file%"
echo Sound.Controls.play
echo do while Sound.currentmedia.duration = 0
echo wscript.sleep 100
echo loop
echo wscript.sleep (int(Sound.currentmedia.duration^)+1^)*1000) >sound.vbs
start /min sound.vbs
exit /b
A:
Give a try for this modified code and give me your feedback :
@echo off
set "file=GameMusic.mp3"
(
echo Set Sound = CreateObject("WMPlayer.OCX.7"^)
echo Sound.URL = "%file%"
echo Sound.settings.volume = 100
echo Sound.settings.setMode "loop", True
echo Sound.Controls.play
echo do while Sound.currentmedia.duration = 0
echo wscript.sleep 100
echo loop
echo wscript.sleep (int(Sound.currentmedia.duration^)+1^)*1000
)>sound.vbs
start /min sound.vbs
exit /b
EDIT :
The second script should works give it a try ;)
@echo off
set "file=GameMusic.mp3"
( echo Set Sound = CreateObject("WMPlayer.OCX.7"^)
echo Sound.URL = "%file%"
echo Sound.settings.volume = 100
echo Sound.settings.setMode "loop", True
echo Sound.Controls.play
echo While Sound.playState ^<^> 1
echo WScript.Sleep 100
echo Wend
)>sound.vbs
start /min sound.vbs
exit /b
A:
In vbscript you can do something like that with a fixed loop number.
You can get rid of wscript.echo after testing it !
Option Explicit
Dim SoundFile,i,MaxLoop
MaxLoop = 3
SoundFile = "http://hackoo.alwaysdata.net/Matrix.mp3"
For i = 1 to MaxLoop
wscript.echo "This the "& i & " loop"
Call Play(SoundFile)
Next
'*******************************************
Sub Play(SoundFile)
Dim oPlayer
Set oPlayer = CreateObject("WMPlayer.OCX")
' Play audio
oPlayer.URL = SoundFile
oPlayer.settings.volume = 100
oPlayer.controls.play
While oPlayer.playState <> 1 ' 1 = Stopped
WScript.Sleep 100
Wend
' Release the audio file
oPlayer.close
End Sub
'*******************************************
Or something like that with an infinite loop when we add this line:
oPlayer.settings.setMode "loop", True
Option Explicit
Dim SoundFile
SoundFile = "http://hackoo.alwaysdata.net/Matrix.mp3"
Call Play(SoundFile)
'*******************************************
Sub Play(SoundFile)
Dim oPlayer
Set oPlayer = CreateObject("WMPlayer.OCX")
' Play audio
oPlayer.URL = SoundFile
oPlayer.settings.volume = 100
oPlayer.settings.setMode "loop", True
'MsgBox oPlayer.settings.getMode("loop")
oPlayer.controls.play
While oPlayer.playState <> 1 ' 1 = Stopped
WScript.Sleep 100
Wend
' Release the audio file
oPlayer.close
End Sub
'*******************************************
So, you can generate a vbscript file and call it from a batch file like this :
@echo off
Call :Play "http://hackoo.alwaysdata.net/Intro_DJ.mp3"
::***************************************************
:Play <SoundFile>
(
echo Option Explicit
echo Dim SoundFile
echo SoundFile = Wscript.arguments(0^)
echo Call Play(SoundFile^)
echo '*******************************************
echo Sub Play(SoundFile^)
echo Dim oPlayer
echo Set oPlayer = CreateObject("WMPlayer.OCX"^)
echo oPlayer.URL = SoundFile
echo oPlayer.settings.volume = 100
echo oPlayer.settings.setMode "loop", True
echo oPlayer.controls.play
echo While oPlayer.playState ^<^> 1
echo WScript.Sleep 100
echo Wend
echo oPlayer.close
echo End Sub
echo '*******************************************
)>"%tmp%\%~n0.vbs"
Start "" "%tmp%\%~n0.vbs" "%~1"
Exit /b
::***************************************************
And for more fun, i added a typewriter with speaking voice and a ShowBalloonTip function here :
@echo off
Title Playing music in loop with a typewriter, speaking voice and ShowBalloonTip by Hackoo 2016
Color 0A & Mode con cols=85 lines=3
Set "Msg= We are playing music in loop. And to stop it, just kill the wscript.exe process."
Call :TypeWriter "%Msg%"
Call :PS_Sub 'Warning' 100 "'Playing Music in loop'" "'%Msg%'" 'Warning'
Call :Play "http://hackoo.alwaysdata.net/Intro_DJ.mp3"
pause>nul
::***************************************************
:Play <SoundFile>
(
echo Option Explicit
echo Dim SoundFile
echo SoundFile = Wscript.arguments(0^)
echo Call Play(SoundFile^)
echo '*******************************************
echo Sub Play(SoundFile^)
echo Dim oPlayer
echo Set oPlayer = CreateObject("WMPlayer.OCX"^)
echo oPlayer.URL = SoundFile
echo oPlayer.settings.volume = 100
echo oPlayer.settings.setMode "loop", True
echo oPlayer.controls.play
echo While oPlayer.playState ^<^> 1
echo WScript.Sleep 100
echo Wend
echo oPlayer.close
echo End Sub
echo '*******************************************
)>"%tmp%\%~n0.vbs"
Start "" "%tmp%\%~n0.vbs" "%~1"
Exit /b
::***************************************************
:TypeWriter
Cls
echo(
(
echo strText=wscript.arguments(0^)
echo intTextLen = Len(strText^)
echo intPause = 150
echo For x = 1 to intTextLen
echo strTempText = Mid(strText,x,1^)
echo WScript.StdOut.Write strTempText
echo WScript.Sleep intPause
echo Next
echo Set Voice=CreateObject("SAPI.SpVoice"^)
echo voice.speak strText
)>%tmp%\%~n0.vbs
@cScript.EXE /noLogo "%tmp%\%~n0.vbs" "%~1"
exit /b
::***************************************************
:PS_Sub $notifyicon $time $title $text $icon
Rem Thanks to Aacini Tip here http://ss64.org/viewtopic.php?pid=8987#p8987
PowerShell ^
[reflection.assembly]::loadwithpartialname('System.Windows.Forms') ^| Out-Null; ^
[reflection.assembly]::loadwithpartialname('System.Drawing') ^| Out-Null; ^
$notify = new-object system.windows.forms.notifyicon; ^
$notify.icon = [System.Drawing.SystemIcons]::%1; ^
$notify.visible = $true; ^
$notify.showballoontip(%2,%3,%4,%5)
%End PowerShell%
exit /B
::***************************************************
A:
From using Object Browser in Office (Alt + F11, F2, Alt + T, R), add Windows Media Player (using wmp.dll).
Sub setMode(bstrMode As String, varfMode As Boolean)
Member of WMPLib.IWMPSettings
Sets the mode of the playlist
From Help https://msdn.microsoft.com/en-us/library/windows/desktop/dd564867(v=vs.85).aspx
autoRewind Tracks are restarted from the beginning after playing to the end.
loop The sequence of tracks repeats itself.
showFrame The nearest key frame is displayed when not playing. This mode is not relevant for audio tracks.
shuffle Tracks are played in random order.
varfMode [in]
A System.Boolean value specifying whether the new specified mode is active.
So Sound.SetMode "Loop", True
Set oPlayer = CreateObject("WMPlayer.OCX")
oPlayer.URL = "C:\Windows\Media\chimes.wav"
oPlayer.settings.volume = 100
oPlayer.settings.setMode "loop", True
oPlayer.controls.play
wscript.sleep 10000
A:
You can optimize your code by writing the code that i posted to you into function and you call it what you want !
This an example :
@echo off
echo We play "Intro_DJ.mp3" with a volume of 20
Call:PlayMusic http://hackoo.alwaysdata.net/Intro_DJ.mp3 20
echo(
echo Hit any key to stop the music and play another file
pause>nul
Call:StopMusic
Call:PlayMusic http://hackoo.alwaysdata.net/Matrix.mp3 100
echo(
echo We play "Matrix.mp3" with a volume of 100
echo(
echo Hit any key to stop the music in loop
pause>nul
Call:StopMusic
Exit /b
::************************************************************
:PlayMusic <Media_URL> <Volume>
(
echo Set Sound = CreateObject("WMPlayer.OCX.7"^)
echo Sound.URL = "%1"
echo Sound.settings.volume = %2
echo Sound.settings.setMode "loop", True
echo Sound.Controls.play
echo While Sound.playState ^<^> 1
echo WScript.Sleep 100
echo Wend
)>%~n0.vbs
start "" "%~n0.vbs" %1 %2
Exit /b
::************************************************************
:StopMusic
Taskkill /IM "wscript.exe" /F>nul 2>&1
Exit /b
::************************************************************
A:
Here's my collection of console players that rely only on windows build-in objects.
1) MediaRunner.bat - it uses the Windows Media Player object. To finish, it checks the player state - when the state is 1, playing is finished. Usage:
MediaRunner.bat someSound.wav
2) SpPlayer.bat - it uses the sp.voice object. Though it's pretty lightweight, it can play only .wav files. Usage is the same.
3) SoundPlayer.bat - it uses the internetexplorer.application object and its bgsound tag (and does not require an installed media player). It calculates the length of the file in advance to decide when to exit. It also accepts the music volume as an optional argument:
SoundPlayer.bat someFile.mp3 500
A:
This is my script, following Mr. Hackoo's answer (how do I tag the person?), but I am getting an error; need help, team.
@echo off
set "file=notify.wav"
( echo Set Sound = CreateObject("WMPlayer.OCX.7"^)
echo Sound.URL = "%file%"
echo Sound.settings.volume = 100
echo Sound.settings.setMode "loop", True
echo Sound.Controls.play
echo While Sound.playState ^<^> 1
echo WScript.Sleep 100
echo Wend
)>sound.vbs
start /min sound.vbs
exit /b
and the error I'm getting after I double click this file is
|
How do I play music on loop in batch?
|
I've gotten some code that makes wmplayer play in the background, but I can't get it to loop itself after it's done. Any ideas on how to do this?
@echo off
set "file=GameMusic.mp3"
( echo Set Sound = CreateObject("WMPlayer.OCX.7"^)
echo Sound.URL = "%file%"
echo Sound.Controls.play
echo do while Sound.currentmedia.duration = 0
echo wscript.sleep 100
echo loop
echo wscript.sleep (int(Sound.currentmedia.duration^)+1^)*1000) >sound.vbs
start /min sound.vbs
exit /b
|
[
"Give a try for this modified code and give me your feedback :\n@echo off\nset \"file=GameMusic.mp3\"\n( \n echo Set Sound = CreateObject(\"WMPlayer.OCX.7\"^)\n echo Sound.URL = \"%file%\"\n echo Sound.settings.volume = 100\n echo Sound.settings.setMode \"loop\", True\n echo Sound.Controls.play\n echo do while Sound.currentmedia.duration = 0\n echo wscript.sleep 100\n echo loop\n echo wscript.sleep (int(Sound.currentmedia.duration^)+1^)*1000\n)>sound.vbs\nstart /min sound.vbs\nexit /b \n\nEDIT :\nThe second script should works give it a try ;)\n@echo off\nset \"file=GameMusic.mp3\"\n( echo Set Sound = CreateObject(\"WMPlayer.OCX.7\"^)\n echo Sound.URL = \"%file%\"\n echo Sound.settings.volume = 100\n echo Sound.settings.setMode \"loop\", True\n echo Sound.Controls.play\n echo While Sound.playState ^<^> 1\n echo WScript.Sleep 100\n echo Wend\n)>sound.vbs\nstart /min sound.vbs\nexit /b\n\n",
"In vbscript you can do something like that with a fixed loop number.\nYou can get rid of wscript.echo after testing it !\nOption Explicit\nDim SoundFile,i,MaxLoop\nMaxLoop = 3\nSoundFile = \"http://hackoo.alwaysdata.net/Matrix.mp3\"\nFor i = 1 to MaxLoop\n wscript.echo \"This the \"& i & \" loop\"\n Call Play(SoundFile)\nNext \n'*******************************************\nSub Play(SoundFile)\n Dim oPlayer\n Set oPlayer = CreateObject(\"WMPlayer.OCX\")\n' Play audio\n oPlayer.URL = SoundFile\n oPlayer.settings.volume = 100\n oPlayer.controls.play \n While oPlayer.playState <> 1 ' 1 = Stopped\n WScript.Sleep 100\n Wend\n' Release the audio file\n oPlayer.close\nEnd Sub\n'*******************************************\n\nOr something like that with infnite loop when we add this line : \n\noPlayer.settings.setMode \"loop\", True\n\nOption Explicit\nDim SoundFile\nSoundFile = \"http://hackoo.alwaysdata.net/Matrix.mp3\"\nCall Play(SoundFile)\n'*******************************************\nSub Play(SoundFile)\n Dim oPlayer\n Set oPlayer = CreateObject(\"WMPlayer.OCX\")\n' Play audio\n oPlayer.URL = SoundFile\n oPlayer.settings.volume = 100\n oPlayer.settings.setMode \"loop\", True\n 'MsgBox oPlayer.settings.getMode(\"loop\") \n oPlayer.controls.play \n While oPlayer.playState <> 1 ' 1 = Stopped\n WScript.Sleep 100\n Wend\n' Release the audio file\n oPlayer.close\nEnd Sub\n'*******************************************\n\nSo, you can generate a vbscript file and call it from a batch file like this :\n@echo off\nCall :Play \"http://hackoo.alwaysdata.net/Intro_DJ.mp3\"\n::***************************************************\n:Play <SoundFile>\n(\n echo Option Explicit\n echo Dim SoundFile\n echo SoundFile = Wscript.arguments(0^)\n echo Call Play(SoundFile^)\n echo '*******************************************\n echo Sub Play(SoundFile^)\n echo Dim oPlayer\n echo Set oPlayer = CreateObject(\"WMPlayer.OCX\"^)\n echo oPlayer.URL = SoundFile\n echo oPlayer.settings.volume = 100\n echo oPlayer.settings.setMode \"loop\", True \n echo oPlayer.controls.play \n echo While oPlayer.playState ^<^> 1\n echo WScript.Sleep 100\n echo Wend\n echo oPlayer.close\n echo End Sub\n echo '*******************************************\n)>\"%tmp%\\%~n0.vbs\"\nStart \"\" \"%tmp%\\%~n0.vbs\" \"%~1\"\nExit /b\n::***************************************************\n\nAnd for more fun, i added a typewriter with speaking voice and a ShowBalloonTip function here :\n@echo off\nTitle Playing music in loop with a typewriter, speaking voice and ShowBalloonTip by Hackoo 2016\nColor 0A & Mode con cols=85 lines=3\nSet \"Msg= We are playing music in loop. 
And to stop it, just kill the wscript.exe process.\"\nCall :TypeWriter \"%Msg%\"\nCall :PS_Sub 'Warning' 100 \"'Playing Music in loop'\" \"'%Msg%'\" 'Warning'\nCall :Play \"http://hackoo.alwaysdata.net/Intro_DJ.mp3\"\npause>nul\n::***************************************************\n:Play <SoundFile>\n(\n echo Option Explicit\n echo Dim SoundFile\n echo SoundFile = Wscript.arguments(0^)\n echo Call Play(SoundFile^)\n echo '*******************************************\n echo Sub Play(SoundFile^)\n echo Dim oPlayer\n echo Set oPlayer = CreateObject(\"WMPlayer.OCX\"^)\n echo oPlayer.URL = SoundFile\n echo oPlayer.settings.volume = 100\n echo oPlayer.settings.setMode \"loop\", True \n echo oPlayer.controls.play \n echo While oPlayer.playState ^<^> 1\n echo WScript.Sleep 100\n echo Wend\n echo oPlayer.close\n echo End Sub\n echo '*******************************************\n)>\"%tmp%\\%~n0.vbs\"\nStart \"\" \"%tmp%\\%~n0.vbs\" \"%~1\"\nExit /b\n::***************************************************\n:TypeWriter\nCls\necho(\n(\necho strText=wscript.arguments(0^)\necho intTextLen = Len(strText^)\necho intPause = 150\necho For x = 1 to intTextLen\necho strTempText = Mid(strText,x,1^)\necho WScript.StdOut.Write strTempText\necho WScript.Sleep intPause\necho Next\necho Set Voice=CreateObject(\"SAPI.SpVoice\"^)\necho voice.speak strText\n)>%tmp%\\%~n0.vbs\[email protected] /noLogo \"%tmp%\\%~n0.vbs\" \"%~1\"\nexit /b\n::***************************************************\n:PS_Sub $notifyicon $time $title $text $icon\nRem Thanks to Aacini Tip here http://ss64.org/viewtopic.php?pid=8987#p8987\nPowerShell ^\n [reflection.assembly]::loadwithpartialname('System.Windows.Forms') ^| Out-Null; ^\n [reflection.assembly]::loadwithpartialname('System.Drawing') ^| Out-Null; ^\n $notify = new-object system.windows.forms.notifyicon; ^\n $notify.icon = [System.Drawing.SystemIcons]::%1; ^\n $notify.visible = $true; ^\n $notify.showballoontip(%2,%3,%4,%5)\n%End PowerShell%\nexit /B\n::***************************************************\n\n",
"From using Object Browser in Office (Alt + F11, F2, Alt + T, R, add Windows Media Player (using wmp.dll).\nSub setMode(bstrMode As String, varfMode As Boolean)\n Member of WMPLib.IWMPSettings\n Sets the mode of the playlist\n\nFrom Help https://msdn.microsoft.com/en-us/library/windows/desktop/dd564867(v=vs.85).aspx\nautoRewind Tracks are restarted from the beginning after playing to the end. \nloop The sequence of tracks repeats itself. \nshowFrame The nearest key frame is displayed when not playing. This mode is not relevant for audio tracks. \nshuffle Tracks are played in random order. \nvarfMode [in]\nA System.Boolean value specifying whether the new specified mode is active.\nSo Sound.SetMode \"Loop\", True\nSet oPlayer = CreateObject(\"WMPlayer.OCX\")\noPlayer.URL = \"C:\\Windows\\Media\\chimes.wav\"\noPlayer.settings.volume = 100\noPlayer.settings.setMode \"loop\", True \noPlayer.controls.play \nwscript.sleep 10000\n\n",
"You can optimize your code by writing the code that i posted to you into function and you call it what you want !\nThis an example :\n@echo off\necho We play \"Intro_DJ.mp3\" with a volume of 20\nCall:PlayMusic http://hackoo.alwaysdata.net/Intro_DJ.mp3 20\necho(\necho Hit any key to stop the music and play another file\npause>nul\nCall:StopMusic\nCall:PlayMusic http://hackoo.alwaysdata.net/Matrix.mp3 100\necho(\necho We play \"Matrix.mp3\" with a volume of 100\necho(\necho Hit any key to stop the music in loop\npause>nul\nCall:StopMusic\nExit /b\n::************************************************************\n:PlayMusic <Media_URL> <Volume>\n( \n echo Set Sound = CreateObject(\"WMPlayer.OCX.7\"^)\n echo Sound.URL = \"%1\"\n echo Sound.settings.volume = %2\n echo Sound.settings.setMode \"loop\", True\n echo Sound.Controls.play\n echo While Sound.playState ^<^> 1\n echo WScript.Sleep 100\n echo Wend\n)>%~n0.vbs\nstart \"\" \"%~n0.vbs\" %1 %2\nExit /b\n::************************************************************\n:StopMusic\nTaskkill /IM \"wscript.exe\" /F>nul 2>&1\nExit /b\n::************************************************************\n\n",
"Here's my collection of console players that rely only on windows build-in objects.\n1) MediaRunner.bat - it uses windows media player object.To finish it checks its state - when player state is 1 the playing is finished.Usage:\n MediaRunner.bat someSound.wav\n\n2) SpPlayer.bat it uses sp.voice objects .Though it's pretty lightweight it can play only .wav files.Usage is the same..\n3) SoundPlayer.bat - it uses the internetexplorer.application object and its bgsound tag (and does not require installed media player).It calculates in advance the length of the file to decide when to exit.Also accept as optional argument music volume:\n SoundPlayer.bat someFile.mp3 500\n\n",
"This is my scipt, followed Mr. Hackoo (how do I tag the person?) answer but getting error, need help team.\n@echo off\nset \"file=notify.wav\"\n( echo Set Sound = CreateObject(\"WMPlayer.OCX.7\"^)\n echo Sound.URL = \"%file%\"\necho Sound.settings.volume = 100\necho Sound.settings.setMode \"loop\", True\necho Sound.Controls.play\necho While Sound.playState ^<^> 1\necho WScript.Sleep 100\necho Wend\n)>sound.vbs\nstart /min sound.vbs\nexit /b\n\nand the error I'm getting after I double click this file is\n\n"
] |
[
3,
2,
1,
0,
0,
0
] |
[] |
[] |
[
"batch_file",
"vbscript"
] |
stackoverflow_0036524002_batch_file_vbscript.txt
|
Q:
Shell Script - Read line from file and do substring with special characters
Reading a txt file, I loop over every line to insert a ';' char at specific positions, but I get an error due to the presence of special characters.
I tried
while read -r line; do
if [[ $line == 62* ]]; then
newLine="${line:0:2};${line:2:7};${line:9:12};${line:22:10};${line:32:16};${line:50:1};${line:51:22};${line:73:3};${line:76:6};${line:82:1};${line:83:15};${line:98:2};${line:100:2};"
else
newLine="${line:0:2};${line:2:7};${line:9:6};${line:14:106};"
fi
sed -i -e "s/$line/$newLine/g" $newFileName
done < $newFileName
The sed command returns error :
sed: -e expression #1, char 75: unknown option to `s'
A:
Your immediate problem is that you're using / as a sigil to separate arguments to the replace command in sed, but your data contains / characters. Using a different character (s@old@new@) is the quickest route to an immediate fix, if there exists a character you can be certain won't ever exist in your data.
The larger problem with this whole approach, though, is that running sed over and over (and over, and over) is going to be slower than just doing the edit in bash.
while IFS= read -r line; do
case $line in
62*) newLine="${line:0:2};${line:2:7};${line:9:12};${line:22:10};${line:32:16};${line:50:1};${line:51:22};${line:73:3};${line:76:6};${line:82:1};${line:83:15};${line:98:2};${line:100:2};"
*) newLine="${line:0:2};${line:2:7};${line:9:6};${line:14:106};"
esac
printf '%s\n' "$newLine"
done <"$origFileName" >"$newFileName"
That's not to say bash is a good tool for this job. If this were my code, I'd use awk (not bash starting awk over and over, but just awk alone).
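For reference, a rough sketch of that awk-only approach (untested; note awk's substr() is 1-based, so each bash ${line:off:len} becomes substr($0, off+1, len)):
awk '{
  if ($0 ~ /^62/)
    print substr($0,1,2) ";" substr($0,3,7) ";" substr($0,10,12) ";" substr($0,23,10) ";" substr($0,33,16) ";" substr($0,51,1) ";" substr($0,52,22) ";" substr($0,74,3) ";" substr($0,77,6) ";" substr($0,83,1) ";" substr($0,84,15) ";" substr($0,99,2) ";" substr($0,101,2) ";"
  else
    print substr($0,1,2) ";" substr($0,3,7) ";" substr($0,10,6) ";" substr($0,15,106) ";"
}' "$origFileName" > "$newFileName"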
|
Shell Script - Read line from file and do substring with special characters
|
Reading a txt file, I loop over every line to insert a ';' char at specific positions, but I get an error due to the presence of special characters.
I tried
while read -r line; do
if [[ $line == 62* ]]; then
newLine="${line:0:2};${line:2:7};${line:9:12};${line:22:10};${line:32:16};${line:50:1};${line:51:22};${line:73:3};${line:76:6};${line:82:1};${line:83:15};${line:98:2};${line:100:2};"
else
newLine="${line:0:2};${line:2:7};${line:9:6};${line:14:106};"
fi
sed -i -e "s/$line/$newLine/g" $newFileName
done < $newFileName
The sed command returns the error:
sed: -e expression #1, char 75: unknown option to `s'
|
[
"Your immediate problem is that you're using / as a sigil to separate arguments to the replace command in sed, but your data contains / characters. Using a different character (s@old@new@) is the quickest route to an immediate fix, if there exists a character you can be certain won't ever exist in your data.\nThe larger problem with this whole approach, though, is that running sed over and over (and over, and over) is going to be slower than just doing the edit in bash.\nwhile IFS= read -r line; do\n case $line in\n 62*) newLine=\"${line:0:2};${line:2:7};${line:9:12};${line:22:10};${line:32:16};${line:50:1};${line:51:22};${line:73:3};${line:76:6};${line:82:1};${line:83:15};${line:98:2};${line:100:2};\"\n *) newLine=\"${line:0:2};${line:2:7};${line:9:6};${line:14:106};\"\n esac\n printf '%s\\n' \"$newLine\"\ndone <\"$origFileName\" >\"$newFileName\"\n\nThat's not to say bash is a good tool for this job. If this were my code, I'd use awk (not bash starting awk over and over, but just awk alone).\n"
] |
[
0
] |
[] |
[] |
[
"shell",
"special_characters",
"substring"
] |
stackoverflow_0074659103_shell_special_characters_substring.txt
|
Q:
Nuxt 3: Programmatically Access /public Files
I have a project using Nuxt 3, with a lot of images in the /public directory (and subdirectories). Now I would like to implement a simple gallery showing all images in the directory (or specified subdirectory). Is there a way to programmatically access all files in the directory, so that I can just do something like this:
<script setup>
let images = ???
</script>
<template>
<img v-for="image in images" :key="image.path" :scr="image.path" alt="" />
</template>
A:
You could do the following using a composable
composables/useAssets.js
import fs from 'fs';
import path from 'path';
const folderPath = './public';
const relativeFolderPath = path.relative(process.cwd(), folderPath);
export default function () {
const files = fs
.readdirSync(folderPath)
.filter((file) => file.match(/.*\.(jpg|png?)/gi));
const filesPaths = files.map(
(fileName) => `/_nuxt/${relativeFolderPath}/${fileName}`
);
return filesPaths;
}
YourComponent.vue
<script setup>
const assets = useAssets();
</script>
<template>
<div>
<img :src="item" v-for="item in assets" :key="item" />
</div>
</template>
You're basically reading all the files in the folder you select by configuring folderPath, then getting the path of that folder relative to the project root (process.cwd() returns the root project path) so it can be appended to the file paths later.
After getting the matched assets and storing the array in files, we're using map to construct a new array with the correct relative paths of the files, so that Nuxt can read them correctly.
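Note that fs and path are Node-only, so the composable above can only run while rendering on the server. A minimal alternative sketch (untested; assuming Nuxt 3 server routes, with hypothetical file and route names) moves the directory scan behind an API endpoint:
// server/api/images.get.ts
import { readdirSync } from 'fs';

export default defineEventHandler(() => {
  // Files under /public are served from the site root.
  return readdirSync('./public')
    .filter((file) => /\.(jpg|png)$/i.test(file))
    .map((file) => `/${file}`);
});

and in the component: const { data: images } = await useFetch('/api/images');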
|
Nuxt 3: Programmatically Access /public Files
|
I have a project using Nuxt 3, with a lot of images in the /public directory (and subdirectories). Now I would like to implement a simple gallery showing all images in the directory (or specified subdirectory). Is there a way to programmatically access all files in the directory, so that I can just do something like this:
<script setup>
let images = ???
</script>
<template>
<img v-for="image in images" :key="image.path" :scr="image.path" alt="" />
</template>
|
[
"You could do the following using a composable\ncomposables/useAssets.js\nimport fs from 'fs';\nimport path from 'path';\n\nconst folderPath = './public';\nconst relativeFolderPath = path.relative(process.cwd(), folderPath);\n\nexport default function () {\n const files = fs\n .readdirSync(folderPath)\n .filter((file) => file.match(/.*\\.(jpg|png?)/gi));\n\n const filesPaths = files.map(\n (fileName) => `/_nuxt/${relativeFolderPath}/${fileName}`\n );\n\n return filesPaths;\n}\n\nYourComponent.vue\n<script setup>\n const assets = useAssets();\n</script>\n\n<template>\n <div>\n <img :src=\"item\" v-for=\"item in assets\" :key=\"item\" />\n </div>\n</template>\n\nYour basically reading all the files in the specified folder which you select by configuring folderPath then get the relative path of that folder from the root to append it to file paths later (process.cwd() gets the root project path).\nafter getting the matched assets and storing the array in files, we're using map to construct a new array with correct relative paths of the files in order for nuxt to read it correctly\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"nuxt.js",
"nuxtjs3",
"vue.js"
] |
stackoverflow_0074658237_javascript_nuxt.js_nuxtjs3_vue.js.txt
|
Q:
Error building BGFX on windows for mingw-gcc
I am trying to build bgfx on windows 64-bit with mingw-gcc and not Visual Studio. While trying to build I got errors
I tried to build the bgfx library with make mingw-gcc-debug64 and I got errors while running the command. I got the following output: command line error. My gcc is installed at C:\llvm-mingw\bin\gcc.exe. Please let me know how I can fix this. Thanks!
A:
Try this way (note: if your OS is Linux you don't need to install MSYS2; it is for Windows only):
Install git and cmake.
Install mingw64 via the "MSYS2" devtools manager. This is a manual on how to do it: https://www.devdungeon.com/content/install-gcc-compiler-windows-msys2-cc ; install MSYS2 with all dev tools (gcc toolchain, cmake, make, mc, ...).
Clone the cmake version of bgfx into a folder on your PC via the command "git clone --recursive https://github.com/bkaradzic/bgfx.cmake.git".
Make a build folder inside bgfx.cmake.
From the build folder, in a command terminal, type cmake -G "MinGW Makefiles" .. (if your system is Linux, just type "cmake ..").
Type "cmake --build . --target examples", then type "cmake --build . --target tools".
That is all!
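Put together, the steps above look roughly like this (a sketch; on Windows, run it from an MSYS2/MinGW shell):
git clone --recursive https://github.com/bkaradzic/bgfx.cmake.git
cd bgfx.cmake
mkdir build
cd build
cmake -G "MinGW Makefiles" ..
cmake --build . --target examples
cmake --build . --target tools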
|
Error building BGFX on windows for mingw-gcc
|
I am trying to build bgfx on windows 64-bit with mingw-gcc and not Visual Studio. While trying to build I got errors
I tried to build the bgfx library with make mingw-gcc-debug64 and I got errors while running the command. I got the following output: command line error. My gcc is installed at C:\llvm-mingw\bin\gcc.exe. Please let me know how I can fix this. Thanks!
|
[
"Try this way (note if your OS Linux you dont need install MSYS2- this is for Windows only)\n\ninstall git, cmake.\n\nInstall mingw64 via \"MSYS2\" devtools manager. this is manual how to do it https://www.devdungeon.com/content/install-gcc-compiler-windows-msys2-cc , install MSYS2 with all dev tools (gcc toollchanain, cmake, make, mc.. )\n\nclone to your folder on your pc cmake-version of bgfx via command \"git clone --recursive https://github.com/bkaradzic/bgfx.cmake.git\"\n\nmake build folder inside bgfx.cmake\n\nfrom the build folder in command terminal type cmake \"-G \"MinGW Makefiles\" ..\" (if your system is Linux just type \"cmake ..\" )\n\n\"type cmake --build . --target examples\" after type \"cmake --build . --target tools\"\n\n\nThat is all!\n"
] |
[
0
] |
[] |
[] |
[
"bgfx",
"c++",
"mingw"
] |
stackoverflow_0074468593_bgfx_c++_mingw.txt
|
Q:
TypeORM ManyToOne FindOne fires multiple unnecessary queries
I have a db table design like so:
Table Appointments:
id| start_time| patientId |.. etc other fields |
And another table which is Patient table:
id| name | last_name | .. etc other fields |
On my appointment entity I have this definition:
@OneToMany(() => AppointmentEntity, (appt) => appt.patient)
appointments: Relation<AppointmentEntity>[];
Here is what I'm trying to do: given an appointment id, fetch the appointment details as well as the patient's first name; it should be very straightforward. This is what I ended up doing:
async getAppt(apptId: any) {
return this.apptRepo.findOne({
relations: ['patient'],
where: { id: apptId },
select: {
id: true,
start_time: true,
patient: {
name: true,
},
},
});
}
This does give me the expected results, but for whatever reason it runs two db queries (one of them completely unnecessary) instead of one. This is what gets run each time getAppt is executed:
query: SELECT DISTINCT "distinctAlias"."AppointmentEntity_id" AS "ids_AppointmentEntity_id" FROM (SELECT "AppointmentEntity"."id" AS "AppointmentEntity_id", "AppointmentEntity"."start_time" AS "AppointmentEntity_start_time", "AppointmentEntity__AppointmentEntity_patient"."name" AS "AppointmentEntity__AppointmentEntity_patient_name" FROM "appointments" "AppointmentEntity" LEFT JOIN "patients" "AppointmentEntity__AppointmentEntity_patient" ON "AppointmentEntity__AppointmentEntity_patient"."id"="AppointmentEntity"."patientId" WHERE ("AppointmentEntity"."id" = $1)) "distinctAlias" ORDER BY "AppointmentEntity_id" ASC LIMIT 1 -- PARAMETERS: ["appt_id_xxx"]
query: SELECT "AppointmentEntity"."id" AS "AppointmentEntity_id", "AppointmentEntity"."start_time" AS "AppointmentEntity_start_time", "AppointmentEntity__AppointmentEntity_patient"."name" AS "AppointmentEntity__AppointmentEntity_patient_name" FROM "appointments" "AppointmentEntity" LEFT JOIN "patients" "AppointmentEntity__AppointmentEntity_patient" ON "AppointmentEntity__AppointmentEntity_patient"."id"="AppointmentEntity"."patientId" WHERE ( ("AppointmentEntity"."id" = $1) ) AND ( "AppointmentEntity"."id" IN ($2) ) -- PARAMETERS: ["appt_id_xxx","appt_id_xxx"
What I really want my query to execute is this (one query):
select b.id, b.start_time, p.name from appointments b
inner join patients p on p.id = b."patientId"
where b.id = 'appt_id_xxx';
Or something similar to this; it's fine without the aliases "b" and "p" (that's just how I write queries), but this is all it takes. I don't get this distinctAlias nonsense and why there are two db queries.
Can you advise on how to accomplish one query (or similar), like shown above? thanks!
A:
For aliasing, you can use the following example in library docs, as reference:
What are aliases for?
We used createQueryBuilder("user"). But what is "user"? It's just a regular SQL alias. We use aliases
everywhere, except when we work with selected data.
createQueryBuilder("user") is equivalent to:
createQueryBuilder().select("user").from(User, "user")
Which will result in the following SQL query:
SELECT ... FROM users user
In this SQL query, users is the table name, and user is an alias we
assign to this table. Later we use this alias to access the table:
createQueryBuilder()
.select("user")
.from(User, "user")
.where("user.name = :name", { name: "Timber" })
Which produces the following SQL query:
SELECT ... FROM users user WHERE user.name = 'Timber'
See, we used the users table by using the user alias we assigned when
we created a query builder.
One query builder is not limited to one alias, they can have multiple
aliases. Each select can have its own alias, you can select from
multiple tables each with its own alias, you can join multiple tables
each with its own alias. You can use those aliases to access tables
are you selecting (or data you are selecting).
Here createQueryBuilder is a function available in the library to create custom queries.
Going through their docs, they seem to rely on left joins the most; there is a leftJoinAndSelect function available in support of createQueryBuilder.
For a complete reference, you can go through the docs How to create and use a QueryBuilder.
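Applied to the question, a minimal sketch (untested; entity, relation, and column names are taken from the question) of the single-query version could look like this:
async getAppt(apptId: any) {
  return this.apptRepo
    .createQueryBuilder('b')
    .innerJoin('b.patient', 'p')
    .select(['b.id', 'b.start_time', 'p.name'])
    .where('b.id = :apptId', { apptId })
    .getOne();
}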
A:
The keyword is eager loading here, and it seems like this is not configurable in the query as of now (if we can believe this issue: https://github.com/typeorm/typeorm/issues/7142).
However, you can set eager loading on the model level by adding it to the relationship definition.
@OneToMany(() => AppointmentEntity, (appt) => appt.patient, { eager: true })
appointments: Relation<AppointmentEntity>[];
Be careful with that, it will now always load it eagerly.
For more information have a look at this doc: https://github.com/typeorm/typeorm/blob/master/docs/eager-and-lazy-relations.md
|
TypeORM ManyToOne FindOne fires multiple unnecessary queries
|
I have a db table design like so:
Table Appointments:
id| start_time| patientId |.. etc other fields |
And another table which is Patient table:
id| name | last_name | .. etc other fields |
On my appointment entity I have this definition:
@OneToMany(() => AppointmentEntity, (appt) => appt.patient)
appointments: Relation<AppointmentEntity>[];
Here is what I'm trying to do: given an appointment id, fetch the appointment details as well as the patient's first name; it should be very straightforward. This is what I ended up doing:
async getAppt(apptId: any) {
return this.apptRepo.findOne({
relations: ['patient'],
where: { id: apptId },
select: {
id: true,
start_time: true,
patient: {
name: true,
},
},
});
}
This does give me the expected results, but for whatever reason it runs two db queries (one of them completely unnecessary) instead of one. This is what gets run each time getAppt is executed:
query: SELECT DISTINCT "distinctAlias"."AppointmentEntity_id" AS "ids_AppointmentEntity_id" FROM (SELECT "AppointmentEntity"."id" AS "AppointmentEntity_id", "AppointmentEntity"."start_time" AS "AppointmentEntity_start_time", "AppointmentEntity__AppointmentEntity_patient"."name" AS "AppointmentEntity__AppointmentEntity_patient_name" FROM "appointments" "AppointmentEntity" LEFT JOIN "patients" "AppointmentEntity__AppointmentEntity_patient" ON "AppointmentEntity__AppointmentEntity_patient"."id"="AppointmentEntity"."patientId" WHERE ("AppointmentEntity"."id" = $1)) "distinctAlias" ORDER BY "AppointmentEntity_id" ASC LIMIT 1 -- PARAMETERS: ["appt_id_xxx"]
query: SELECT "AppointmentEntity"."id" AS "AppointmentEntity_id", "AppointmentEntity"."start_time" AS "AppointmentEntity_start_time", "AppointmentEntity__AppointmentEntity_patient"."name" AS "AppointmentEntity__AppointmentEntity_patient_name" FROM "appointments" "AppointmentEntity" LEFT JOIN "patients" "AppointmentEntity__AppointmentEntity_patient" ON "AppointmentEntity__AppointmentEntity_patient"."id"="AppointmentEntity"."patientId" WHERE ( ("AppointmentEntity"."id" = $1) ) AND ( "AppointmentEntity"."id" IN ($2) ) -- PARAMETERS: ["appt_id_xxx","appt_id_xxx"
What I really want my query to execute is this (one query):
select b.id, b.start_time, p.name from appointments b
inner join patients p on p.id = b."patientId"
where b.id = 'appt_id_xxx';
Or something similar to this; it's fine without the aliases "b" and "p" (that's just how I write queries), but this is all it takes. I don't get this distinctAlias nonsense and why there are two db queries.
Can you advise on how to accomplish one query (or similar), like shown above? thanks!
|
[
"For aliasing, you can use the following example in library docs, as reference:\n\nWhat are aliases for?\nWe used createQueryBuilder(\"user\"). But what is \"user\"? It's just a regular SQL alias. We use aliases\neverywhere, except when we work with selected data.\ncreateQueryBuilder(\"user\") is equivalent to:\ncreateQueryBuilder().select(\"user\").from(User, \"user\")\n\nWhich will result in the following SQL query:\nSELECT ... FROM users user\n\nIn this SQL query, users is the table name, and user is an alias we\nassign to this table. Later we use this alias to access the table:\ncreateQueryBuilder()\n .select(\"user\")\n .from(User, \"user\")\n .where(\"user.name = :name\", { name: \"Timber\" })\n\nWhich produces the following SQL query:\nSELECT ... FROM users user WHERE user.name = 'Timber'\n\nSee, we used the users table by using the user alias we assigned when\nwe created a query builder.\nOne query builder is not limited to one alias, they can have multiple\naliases. Each select can have its own alias, you can select from\nmultiple tables each with its own alias, you can join multiple tables\neach with its own alias. You can use those aliases to access tables\nare you selecting (or data you are selecting).\n\nHere createQueryBuilder is a function available in the library to create custom queries.\nBy going through their docs I can assume they rely on left-join the most. There is a function available leftJoinAndSelect in support of createQueryBuilder.\nFor a complete reference, you can go through the docs How to create and use a QueryBuilder.\n",
"The keyword is eager loading here and it seems like this is not configurable in the query as for now (if we can belive this issue: https://github.com/typeorm/typeorm/issues/7142).\nHowever, you can set eager loading on the model level by adding it to the relationship definition.\n@OneToMany(() => AppointmentEntity, (appt) => appt.patient, { eager: true })\nappointments: Relation<AppointmentEntity>[];\n\nBe careful with that, it will now always load it eagerly.\n\nFor more information have a look at this doc: https://github.com/typeorm/typeorm/blob/master/docs/eager-and-lazy-relations.md\n"
] |
[
1,
0
] |
[] |
[] |
[
"javascript",
"typeorm",
"typescript"
] |
stackoverflow_0074531839_javascript_typeorm_typescript.txt
|
Q:
Sorting a Listbox of dates in the form dd/mm/yyyy in C#
I have a listbox and an (annual holiday) date picker. The user chooses the date and adds it to the Listbox. I then want to sort the Listbox from earliest to latest date. I tried using a sorted Listbox, but that did not work, as it sorts the items as if they were alphabetic strings. I then used an unsorted Listbox, found some code, and changed it to sort the box manually, but again this sorts alphabetically. I am using the date as dd/mm/yyyy, each date on a new line, e.g.:
If I have:
01/01/2023
02/12/2022
23/12/2022
24/12/2022
then I want the listbox to show me
02/12/2022
23/12/2022
24/12/2022
01/01/2023
What I get is the following, where it sorts from left to right rather than by year, then month, then day:
01/01/2023
02/12/2022
23/12/2022
24/12/2022
At present I use the following code to add and then sort but there must be an easy way to sort this.
void Btn_add_holidayClick(object sender, EventArgs e)
{
lstbx_annual_hol.Items.Add(DatePick_Hol_Date.Value.Day.ToString("D2") + "/" +
DatePick_Hol_Date.Value.Month.ToString("D2") + "/" +
DatePick_Hol_Date.Value.Year.ToString() +"\n");
SortAnnualHoliday();
}
void SortAnnualHoliday()
{
ArrayList arList = new ArrayList();
foreach (object obj in lstbx_annual_hol.Items)
{
arList.Add(obj);
}
arList.Sort();
lstbx_annual_hol.Items.Clear();
foreach(object obj in arList)
{
lstbx_annual_hol.Items.Add(obj);
}
}
Thanks in advance for any advice and solutions even if you think I should do it a completely different way.
A:
You get this result because it tries to sort a list of strings, not dates. For your purpose you should convert the strings back to dates, sort them, and convert the sorted list to strings again:
void SortAnnualHoliday()
{
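// Note: this snippet assumes 'using System.Linq;' and 'using System.Globalization;' at the top of the file.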
List<string> arList =lstbx_annual_hol.Items.Cast<string>()
.Select(x => { DateTime.TryParseExact(x, "dd/MM/yyyy", new CultureInfo("en-US"), DateTimeStyles.None, out var y); return y; })
.OrderBy(x => x)
.Select(x => x.ToString("dd/MM/yyyy"))
.ToList();
lstbx_annual_hol.Items.Clear();
lstbx_annual_hol.Items.AddRange(arList.ToArray());
}
A:
Create a List<DateTime> so that you can SORT it based on the dates within. You can use a BindingSource to attach the List to your ListBox, making it update whenever changes are made.
Something like:
public partial class Form2 : Form
{
private List<DateTime> dates = new List<DateTime>();
private BindingSource source = new BindingSource();
public Form2()
{
InitializeComponent();
source.DataSource = dates;
lstbx_annual_hol.FormatString = "d";
lstbx_annual_hol.DataSource = source;
}
private void Btn_add_holidayClick(object sender, EventArgs e)
{
dates.Add(DatePick_Hol_Date.Value);
dates.Sort();
source.ResetBindings(false);
}
}
|
Sorting a Listbox of dates in the form dd/mm/yyyy in C#
|
I have a listbox and an (annual holiday) date picker. The user chooses the date and adds it to the Listbox. I then want to sort the Listbox from earliest to latest date. I tried using a sorted Listbox, but that did not work, as it sorts the items as if they were alphabetic strings. I then used an unsorted Listbox, found some code, and changed it to sort the box manually, but again this sorts alphabetically. I am using the date as dd/mm/yyyy, each date on a new line, e.g.:
If I have:
01/01/2023
02/12/2022
23/12/2022
24/12/2022
then I want the listbox to show me
02/12/2022
23/12/2022
24/12/2022
01/01/2023
What I get is the following, where it sorts from left to right rather than by year, then month, then day:
01/01/2023
02/12/2022
23/12/2022
24/12/2022
At present I use the following code to add and then sort but there must be an easy way to sort this.
void Btn_add_holidayClick(object sender, EventArgs e)
{
lstbx_annual_hol.Items.Add(DatePick_Hol_Date.Value.Day.ToString("D2") + "/" +
DatePick_Hol_Date.Value.Month.ToString("D2") + "/" +
DatePick_Hol_Date.Value.Year.ToString() +"\n");
SortAnnualHoliday();
}
void SortAnnualHoliday()
{
ArrayList arList = new ArrayList();
foreach (object obj in lstbx_annual_hol.Items)
{
arList.Add(obj);
}
arList.Sort();
lstbx_annual_hol.Items.Clear();
foreach(object obj in arList)
{
lstbx_annual_hol.Items.Add(obj);
}
}
Thanks in advance for any advice and solutions even if you think I should do it a completely different way.
|
[
"You get this result because it tries to sort a list of strings not dates. For your purpose you should convert the strings back to dates, sort them, and convert the sorted list to string again:\nvoid SortAnnualHoliday()\n {\n List<string> arList =lstbx_annual_hol.Items.Cast<string>()\n .Select(x => { DateTime.TryParseExact(x, \"dd/MM/yyyy\", new CultureInfo(\"en-US\"), DateTimeStyles.None, out var y); return y; })\n .OrderBy(x => x)\n .Select(x => x.ToString(\"dd/MM/yyyy\"))\n .ToList();\n lstbx_annual_hol.Items.Clear();\n lstbx_annual_hol.Items.AddRange(arList.ToArray());\n }\n\n\n",
"Create a List<DateTime> so that you can SORT it based on the dates within. You can use a BindingSource to attach the List to your ListBox, making it update whenever changes are made.\nSomething like:\npublic partial class Form2 : Form\n{\n\n private List<DateTime> dates = new List<DateTime>();\n private BindingSource source = new BindingSource();\n\n public Form2()\n {\n InitializeComponent();\n source.DataSource = dates;\n lstbx_annual_hol.FormatString = \"d\";\n lstbx_annual_hol.DataSource = source;\n }\n\n private void Btn_add_holidayClick(object sender, EventArgs e)\n {\n dates.Add(DatePick_Hol_Date.Value);\n dates.Sort();\n source.ResetBindings(false);\n }\n\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"arraylist",
"c#",
"listbox",
"sorting"
] |
stackoverflow_0074657570_arraylist_c#_listbox_sorting.txt
|
Q:
Angular - Validate the Current Page before going to Next Step in Angular-Archwizard
I am using Angular 7 with angular-archwizard to make a multi-step form. Currently it allows me to go to the next step without entering values for all required fields; I don't want to allow that. I want to go to the next step only if I have entered values in all required fields.
<form class="form-clientquote" #clientquoteForm=ngForm (ngSubmit)="onSubmit()">
<aw-wizard #wizard [navBarLayout]="'large-empty-symbols'">
<aw-wizard-step [stepTitle]="'Transaction Details'" [navigationSymbol]="{ symbol: '', fontFamily: 'FontAwesome' }">
<div class="centered-content">
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="quote_origin" >Origin<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="quote_origin" google-place
(onSelect)="setAddress($event)" placeholder="Origin" name="quote_origin" [(ngModel)]="form.quote_origin" #quote_origin="ngModel" [ngClass]="{'is-invalid' : quote_origin.invalid && ((quote_origin.dirty || quote_origin.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="quote_origin.invalid && ((quote_origin.dirty || quote_origin.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="quote_origin.errors?.required"class="alert alert-danger">Origin is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="quote_destination">Destination<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="quote_destination" google-place
(onSelect)="setAddress($event)" placeholder="Destination" name="quote_destination" [(ngModel)]="form.quote_destination" #quote_destination="ngModel" [ngClass]="{'is-invalid' : quote_destination.invalid && ((quote_destination.dirty || quote_destination.touched) || clientquoteForm.submitted)}" required>
<!-- <ul style="list-style-type: none;">
<li *ngFor="let key of addrKeys">
<span style="font-weight: bold">{{key}}</span>: <span>{{addr[key]}}</span>
</li>
</ul> -->
<div class="form-feedback" *ngIf="quote_destination.invalid && ((quote_destination.dirty || quote_destination.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="quote_destination.errors?.required"class="alert alert-danger">Destination is required.</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="commodity">Commodity<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="commodity" placeholder="Commodity" name="commodity" [(ngModel)]="form.commodity" #commodity="ngModel" [ngClass]="{'is-invalid' : commodity.invalid && ((commodity.dirty || commodity.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="commodity.invalid && ((commodity.dirty || commodity.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="commodity.errors?.required"class="alert alert-danger">Commodity is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="truck_type">Truck Type<span style="color:red;"> *</span></label>
<select class="form-control select2" style="width: 100%;" [(ngModel)]="form.truck_type" #truckType="ngModel" name="truck_type" required>
<option [ngValue]="null">Choose a Truck Type</option>
<option [ngValue]="truck_type" *ngFor="let truck_type of truck_types">{{truck_type}}</option>
</select>
<div class="form-feedback" *ngIf="truckType.invalid && ((truckType.dirty || truckType.touched) || clientquoteForm.submitted)">
<div style="color:red;" class="alert alert-danger">Truck Type is required.</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="truck_required">Required Truck(s)<span style="color:red;"> *</span></label>
<input type="number" (keypress)="numberOnly($event)" class="form-control" id="truck_required" placeholder="1,2,3..." min="1" max="20" name="truck_required" [(ngModel)]="form.truck_required" #truck_required="ngModel" [ngClass]="{'is-invalid' : truck_required.invalid && ((truck_required.dirty || truck_required.touched) || clientquoteForm.submitted)}" required maxlength="2">
<div class="form-feedback" *ngIf="truck_required.invalid && ((truck_required.dirty || truck_required.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="truck_required.errors?.required"class="alert alert-danger">Required Truck(s) is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="loading_date">Loading Date<span style="color:red;"> *</span></label>
<div class="input-group date" style="width: 100%;" >
<mat-form-field>
<input matInput [matDatepicker] = "picker" placeholder = "Choose a date" name="loading_date" [(ngModel)]="form.loading_date" #loading_date="ngModel" [ngClass]="{'is-invalid' : loading_date.invalid && ((loading_date.dirty || loading_date.touched) || clientquoteForm.submitted)}" required>
<mat-datepicker-toggle matSuffix [for] = "picker"></mat-datepicker-toggle>
<mat-datepicker #picker></mat-datepicker>
</mat-form-field>
<div class="form-feedback" *ngIf="loading_date.invalid && ((loading_date.dirty || loading_date.touched) || clientquoteForm.submitted)">
<div style="color:red;" *ngIf="loading_date.errors?.required"class="alert alert-danger">Loading Date is required.</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<label for="comment">Comment</label>
<input class="form-control" placeholder="Comment" type="textarea" id="comment" name="comment" [(ngModel)]="form.comment" #comment="ngModel" >
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<div class="btn-group">
<button type="button" class="btn btn-primary" awNextStep> Next ></button>
</div>
</div>
</div>
</div>
</div>
</aw-wizard-step>
<aw-wizard-step [stepTitle]="'Personal Details'" [navigationSymbol]="{ symbol: '', fontFamily: 'FontAwesome' }">
<div class="centered-content">
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="first_name">First Name<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="first_name" placeholder="First Name" name="first_name" [(ngModel)]="form.first_name" #first_name="ngModel" [ngClass]="{'is-invalid' : first_name.invalid && ((first_name.dirty || first_name.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="first_name.invalid && ((first_name.dirty || first_name.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="first_name.errors?.required" class="alert alert-danger">First Name is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="last_name">Last Name<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="last_name" placeholder="Last Name" name="last_name" [(ngModel)]="form.last_name" #last_name="ngModel" [ngClass]="{'is-invalid' : last_name.invalid && ((last_name.dirty || last_name.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="last_name.invalid && ((last_name.dirty || last_name.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="last_name.errors?.required"class="alert alert-danger">Last Name is required.</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="business_name">Business Name<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="business_name" placeholder="Business Name" name="business_name" [(ngModel)]="form.business_name" #business_name="ngModel" [ngClass]="{'is-invalid' : business_name.invalid && ((business_name.dirty || business_name.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="business_name.invalid && ((business_name.dirty || business_name.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="business_name.errors?.required"class="alert alert-danger">Business Name is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="address">Office Address</label>
<input class="form-control" google-place
(onSelect)="setAddress($event)"placeholder="address" type="text" id="address" name="address" [(ngModel)]="form.address" #address="ngModel" >
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="phone">Phone<span style="color:red;"> *</span></label>
<div class="input-group">
<div class="input-group-addon">
<i class="fa fa-phone"></i>
</div>
<input type="text"
class="form-control"
id="phone"
placeholder="Phone"
name="phone"
[(ngModel)]="form.phone"
#phone="ngModel"
pattern="^[+-]?[0-9]*\.?[0-9]*"
[ngClass]="{'is-invalid' : phone.invalid && ((phone.dirty || phone.touched) || clientquoteForm.submitted)}"
required maxlength="14">
</div>
<div class="invalid-feedback"
[hidden]="phone.valid || phone.untouched">
<div style="color:red;" *ngIf="phone.errors && phone.errors.required">
Phone number is required
</div>
<div style="color:red;" *ngIf="phone.errors && phone.errors.pattern">
Valid Phone number is invalid
</div>
</div>
</div>
<div class="col-xs-6">
<label for="email">Email<span style="color:red;"> *</span></label>
<div class="input-group">
<div class="input-group-addon">
<i class="fa fa-envelope"></i>
</div>
<input type="text"
class="form-control"
id="email"
placeholder="Email"
name="email"
[(ngModel)]="form.email"
#email="ngModel"
pattern="^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$"
[ngClass]="{'is-invalid' : email.invalid && ((email.dirty || email.touched) || clientquoteForm.submitted)}"
required>
</div>
<div class="form-feedback" class="invalid-feedback"
[hidden]="email.valid || email.untouched">
<div style="color:red;" *ngIf="email.errors && email.errors.required">
Email is required
</div>
<div style="color:red;" *ngIf="email.errors && email.errors.pattern">
Valid Email is invalid
</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<div class="btn-group">
<button style="margin:5px" type="button" class="btn btn-warning" awPreviousStep>< Previous</button>
<button style="margin:5px" (keyup.enter)="onSubmit()" type="submit" class="btn btn-success" awNextStep> Get A Quote</button>
</div>
</div>
</div>
</div>
</div>
</aw-wizard-step>
<br>
<aw-wizard-step [stepTitle]="'Success'" [navigationSymbol]="{ symbol: '', fontFamily: 'FontAwesome' }">
<div class="centered-content">
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<div class="btn-group">
<!-- <button style="margin:5px" type="button" class="btn btn-warning" awPreviousStep>< Previous</button>
<button style="margin:5px" type="submit" class="btn btn-success" > Send Quote</button> -->
<!-- <button style="margin:5px" class="btn btn-success" type="button" href="/landing">Title on button</button> -->
<a style="margin:5px" routerLink="/landing" class="btn btn-primary"><i class="fa fa-home margin-right-10px"></i>
Home
</a>
</div>
</div>
</div>
</div>
</div>
</aw-wizard-step>
</aw-wizard>
</form>
I have three steps:
Step-1: Transaction Detail
Step-2: Personal Detail
Step-3: Success and Feedback Page.
This is what I want to achieve:
I want step-1 to be validated before going to Step-2
I want step-2 to save into the database.
I want step-3 to display a success message and show details of the data submitted into the database.
I want to disallow the user from clicking on the stepTitle (disable it).
So far, I have been able to save into the database. How do I achieve these?
A:
You can disable the button while the formGroup is invalid:
<button type="button" class="btn btn-primary" awNextStep [disabled]="yourFormGroup.invalid"> Next ></button>
A:
// At the top of my component class that implements the <wizard>
@ViewChild(WizardComponent)
public wizard: WizardComponent;
Submit(){
this.submittedStep1 = true;
if(this.form.valid){
this.wizard.goToNextStep();
}
}
// then later, this works great
<button type="button" uiSref="work" class="btn btn-primary ml-2" (click)="SubmitUpload()">Next <i class="ft-chevron-right"></i></button>
|
Angular - Validate the Current Page before going to Next Step in Angular-Archwizard
|
I am using Angular 7 with angular-archwizard to make a multi-step form. Currently it allows me to go to the next step without entering values for all required fields; I don't want to allow that. I want to go to the next step only if I have entered values in all required fields.
<form class="form-clientquote" #clientquoteForm=ngForm (ngSubmit)="onSubmit()">
<aw-wizard #wizard [navBarLayout]="'large-empty-symbols'">
<aw-wizard-step [stepTitle]="'Transaction Details'" [navigationSymbol]="{ symbol: '', fontFamily: 'FontAwesome' }">
<div class="centered-content">
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="quote_origin" >Origin<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="quote_origin" google-place
(onSelect)="setAddress($event)" placeholder="Origin" name="quote_origin" [(ngModel)]="form.quote_origin" #quote_origin="ngModel" [ngClass]="{'is-invalid' : quote_origin.invalid && ((quote_origin.dirty || quote_origin.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="quote_origin.invalid && ((quote_origin.dirty || quote_origin.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="quote_origin.errors?.required"class="alert alert-danger">Origin is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="quote_destination">Destination<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="quote_destination" google-place
(onSelect)="setAddress($event)" placeholder="Destination" name="quote_destination" [(ngModel)]="form.quote_destination" #quote_destination="ngModel" [ngClass]="{'is-invalid' : quote_destination.invalid && ((quote_destination.dirty || quote_destination.touched) || clientquoteForm.submitted)}" required>
<!-- <ul style="list-style-type: none;">
<li *ngFor="let key of addrKeys">
<span style="font-weight: bold">{{key}}</span>: <span>{{addr[key]}}</span>
</li>
</ul> -->
<div class="form-feedback" *ngIf="quote_destination.invalid && ((quote_destination.dirty || quote_destination.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="quote_destination.errors?.required"class="alert alert-danger">Destination is required.</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="commodity">Commodity<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="commodity" placeholder="Commodity" name="commodity" [(ngModel)]="form.commodity" #commodity="ngModel" [ngClass]="{'is-invalid' : commodity.invalid && ((commodity.dirty || commodity.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="commodity.invalid && ((commodity.dirty || commodity.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="commodity.errors?.required"class="alert alert-danger">Commodity is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="truck_type">Truck Type<span style="color:red;"> *</span></label>
<select class="form-control select2" style="width: 100%;" [(ngModel)]="form.truck_type" #truckType="ngModel" name="truck_type" required>
<option [ngValue]="null">Choose a Truck Type</option>
<option [ngValue]="truck_type" *ngFor="let truck_type of truck_types">{{truck_type}}</option>
</select>
<div class="form-feedback" *ngIf="truckType.invalid && ((truckType.dirty || truckType.touched) || clientquoteForm.submitted)">
<div style="color:red;" class="alert alert-danger">Truck Type is required.</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="truck_required">Required Truck(s)<span style="color:red;"> *</span></label>
<input type="number" (keypress)="numberOnly($event)" class="form-control" id="truck_required" placeholder="1,2,3..." min="1" max="20" name="truck_required" [(ngModel)]="form.truck_required" #truck_required="ngModel" [ngClass]="{'is-invalid' : truck_required.invalid && ((truck_required.dirty || truck_required.touched) || clientquoteForm.submitted)}" required maxlength="2">
<div class="form-feedback" *ngIf="truck_required.invalid && ((truck_required.dirty || truck_required.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="truck_required.errors?.required"class="alert alert-danger">Required Truck(s) is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="loading_date">Loading Date<span style="color:red;"> *</span></label>
<div class="input-group date" style="width: 100%;" >
<mat-form-field>
<input matInput [matDatepicker] = "picker" placeholder = "Choose a date" name="loading_date" [(ngModel)]="form.loading_date" #loading_date="ngModel" [ngClass]="{'is-invalid' : loading_date.invalid && ((loading_date.dirty || loading_date.touched) || clientquoteForm.submitted)}" required>
<mat-datepicker-toggle matSuffix [for] = "picker"></mat-datepicker-toggle>
<mat-datepicker #picker></mat-datepicker>
</mat-form-field>
<div class="form-feedback" *ngIf="loading_date.invalid && ((loading_date.dirty || loading_date.touched) || clientquoteForm.submitted)">
<div style="color:red;" *ngIf="loading_date.errors?.required"class="alert alert-danger">Loading Date is required.</div>
</div>
</div>
</div>
</div>
</div>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<label for="comment">Comment</label>
<input class="form-control" placeholder="Comment" type="textarea" id="comment" name="comment" [(ngModel)]="form.comment" #comment="ngModel" >
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<div class="btn-group">
<button type="button" class="btn btn-primary" awNextStep> Next ></button>
</div>
</div>
</div>
</div>
</div>
</aw-wizard-step>
<aw-wizard-step [stepTitle]="'Personal Details'" [navigationSymbol]="{ symbol: '', fontFamily: 'FontAwesome' }">
<div class="centered-content">
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="first_name">First Name<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="first_name" placeholder="First Name" name="first_name" [(ngModel)]="form.first_name" #first_name="ngModel" [ngClass]="{'is-invalid' : first_name.invalid && ((first_name.dirty || first_name.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="first_name.invalid && ((first_name.dirty || first_name.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="first_name.errors?.required" class="alert alert-danger">First Name is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="last_name">Last Name<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="last_name" placeholder="Last Name" name="last_name" [(ngModel)]="form.last_name" #last_name="ngModel" [ngClass]="{'is-invalid' : last_name.invalid && ((last_name.dirty || last_name.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="last_name.invalid && ((last_name.dirty || last_name.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="last_name.errors?.required"class="alert alert-danger">Last Name is required.</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="business_name">Business Name<span style="color:red;"> *</span></label>
<input type="text" class="form-control" id="business_name" placeholder="Business Name" name="business_name" [(ngModel)]="form.business_name" #business_name="ngModel" [ngClass]="{'is-invalid' : business_name.invalid && ((business_name.dirty || business_name.touched) || clientquoteForm.submitted)}" required>
<div class="form-feedback" *ngIf="business_name.invalid && ((business_name.dirty || business_name.touched) || clientquoteForm.submitted)" class="invalid-feedback">
<div style="color:red;" *ngIf="business_name.errors?.required"class="alert alert-danger">Business Name is required.</div>
</div>
</div>
<div class="col-xs-6">
<label for="address">Office Address</label>
<input class="form-control" google-place
(onSelect)="setAddress($event)"placeholder="address" type="text" id="address" name="address" [(ngModel)]="form.address" #address="ngModel" >
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-6">
<label for="phone">Phone<span style="color:red;"> *</span></label>
<div class="input-group">
<div class="input-group-addon">
<i class="fa fa-phone"></i>
</div>
<input type="text"
class="form-control"
id="phone"
placeholder="Phone"
name="phone"
[(ngModel)]="form.phone"
#phone="ngModel"
pattern="^[+-]?[0-9]*\.?[0-9]*"
[ngClass]="{'is-invalid' : phone.invalid && ((phone.dirty || phone.touched) || clientquoteForm.submitted)}"
required maxlength="14">
</div>
<div class="invalid-feedback"
[hidden]="phone.valid || phone.untouched">
<div style="color:red;" *ngIf="phone.errors && phone.errors.required">
Phone number is required
</div>
<div style="color:red;" *ngIf="phone.errors && phone.errors.pattern">
Valid Phone number is invalid
</div>
</div>
</div>
<div class="col-xs-6">
<label for="email">Email<span style="color:red;"> *</span></label>
<div class="input-group">
<div class="input-group-addon">
<i class="fa fa-envelope"></i>
</div>
<input type="text"
class="form-control"
id="email"
placeholder="Email"
name="email"
[(ngModel)]="form.email"
#email="ngModel"
pattern="^\w+([\.-]?\w+)*@\w+([\.-]?\w+)*(\.\w{2,3})+$"
[ngClass]="{'is-invalid' : email.invalid && ((email.dirty || email.touched) || clientquoteForm.submitted)}"
required>
</div>
<div class="form-feedback" class="invalid-feedback"
[hidden]="email.valid || email.untouched">
<div style="color:red;" *ngIf="email.errors && email.errors.required">
Email is required
</div>
<div style="color:red;" *ngIf="email.errors && email.errors.pattern">
Valid Email is invalid
</div>
</div>
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<div class="btn-group">
<button style="margin:5px" type="button" class="btn btn-warning" awPreviousStep>< Previous</button>
<button style="margin:5px" (keyup.enter)="onSubmit()" type="submit" class="btn btn-success" awNextStep> Get A Quote</button>
</div>
</div>
</div>
</div>
</div>
</aw-wizard-step>
<br>
<aw-wizard-step [stepTitle]="'Success'" [navigationSymbol]="{ symbol: '', fontFamily: 'FontAwesome' }">
<div class="centered-content">
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
</div>
</div>
</div>
<br>
<div class="row">
<div class="col-xs-12">
<div class="col-xs-12">
<div class="btn-group">
<!-- <button style="margin:5px" type="button" class="btn btn-warning" awPreviousStep>< Previous</button>
<button style="margin:5px" type="submit" class="btn btn-success" > Send Quote</button> -->
<!-- <button style="margin:5px" class="btn btn-success" type="button" href="/landing">Title on button</button> -->
<a style="margin:5px" routerLink="/landing" class="btn btn-primary"><i class="fa fa-home margin-right-10px"></i>
Home
</a>
</div>
</div>
</div>
</div>
</div>
</aw-wizard-step>
</aw-wizard>
</form>
I have three steps:
Step-1: Transaction Detail
Step-2: Personal Detail
Step-3: Success and Feedback Page.
This is what I want to achieve:
I want step-1 to be validated before going to Step-2
I want step-2 to save into the database.
I want step-3 to display a success message and show details of the data submitted into the database.
I want to disallow the user from clicking on the stepTitle (disable it).
So far, I have been able to save into the database. How do I achieve these?
|
[
"You can disable the button while the formGroup is invalid:\n<button type=\"button\" class=\"btn btn-primary\" awNextStep [disabled]=\"yourFormGroup.invalid\"> Next ></button>\n\n",
"// At the top of my component class that implements the <wizard>\n@ViewChild(WizardComponent)\npublic wizard: WizardComponent;\n\n\nSubmit(){\n this.submittedStep1 = true;\n if(this.form.valid){\n this.wizard.goToNextStep();\n }\n}\n// then later, this works great\n\n <button type=\"button\" uiSref=\"work\" class=\"btn btn-primary ml-2\" (click)=\"SubmitUpload()\">Next <i class=\"ft-chevron-right\"></i></button>\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"angular"
] |
stackoverflow_0058059446_angular.txt
|
Q:
performing operation on matched columns of NumPy arrays
I am new to python and programming in general and ran into a question:
I have two NumPy arrays of the same shape: they are 2D arrays, of the dimensions 1000 x 2000.
I wish to compare the values of each column in array A with the values in array B. The important part is that not every column of A should be compared to every column in B, but rather the same columns of A & B should be compared to one another, as in: A[:,0] should be compared to B[:,0], A[:,1] should be compared to B[:,1],… etc.
This was easier to do when I had one dimensional arrays: I used zip(A, B), so I could run the following for-loop:
A = np.array([2,5,6,3,7])
B = np.array([1,3,9,4,8])
res_list = []
for number1, number2 in zip(A, B):
    if number1 > number2:
        comment1 = "bigger"
        res_list.append(comment1)
    if number1 < number2:
        comment2 = "smaller"
        res_list.append(comment2)
res_list
In [702]: res_list
Out[702]: ['bigger', 'bigger', 'smaller', 'smaller', 'smaller']
However, I am not sure how best to do this on the 2D array. As output, I am aiming for a list with 2000 sublists (the 2000 cols), to later count the number of instances of "bigger" and "smaller" for each of the columns.
I am very thankful for any input.
So far I have tried to use np.nditer in a double for loop, but it returned all the possible column combinations. I would specifically desire to only combine the "matching" columns.
an approximation of the input (but I have: 1000 rows and 2000 cols)
In [709]: A
Out[709]:
array([[2, 5, 6, 3, 7],
[6, 2, 9, 2, 3],
[2, 1, 4, 5, 7]])
In [710]: B
Out[710]:
array([[1, 3, 9, 4, 8],
[4, 8, 2, 3, 1],
[3, 7, 1, 8, 9]])
As desired output, I want to compare the values of the arrays A & B column-wise (only the "matching" columns, not all columns with all columns, as I tried to explain above), and store them in a nested list (the number of "sublists" should correspond to the number of columns):
res_list = [["bigger", "bigger", "smaller"], ["bigger", "smaller", "smaller"], ["smaller", "bigger", "bigger"], ["smaller", "smaller", "smaller"], ...]
A:
From the example input and output, I see that you want to do an element-wise comparison and store the values per column. From your code you understand the 1D variant of this problem, so the question seems to be how to do it in 2D.
Solution 1
In order to achieve this, we have to make the 2D problem a 1D problem, so you can do what you already did. If, for example, the columns became rows, then you could redo your zip strategy for every row.
In other words, if we can turn:
a = np.array(
[[2, 5, 6, 3, 7],
[6, 2, 9, 2, 3],
[2, 1, 4, 5, 7]]
)
into:
array([[2, 6, 2],
[5, 2, 1],
[6, 9, 4],
[3, 2, 5],
[7, 3, 7]])
we can iterate over a and b at the same time, and get our 1D version of the problem. Swapping the x and y axes of the matrix like this is called transposing, and is very common; the operation in numpy is a.T (docs: ndarray.T).
Now we use your code once for the outer loop, iterating over all the rows (after transposing, the rows actually hold the column values), after which we use the code on those values, because every row is a 1D numpy array.
result = []
# Outer loop, to go over the columns of `a` and `b` at the same time.
for row_a, row_b in zip(a.T, b.T):
result_col = []
# Inner loop to compare a whole column element wise.
for col_a, col_b in zip(row_a, row_b):
result_col.append('bigger' if col_a > col_b else 'smaller')
result.append(result_col)
Note: I use a ternary operator to assign smaller and bigger.
Solution 2
As indicated before, you are only looking at two values that are in the same position in both arrays; this is called an elementwise comparison. Since we are only interested in the values that are at the exact same position, and we know the output shape of our result array (input 1000x2000, output will be 2000x1000), we can also iterate over all the elements using their index.
Now some quick handy shortcuts,
a.shape holds the dimensions of the array, therefore a.shape will be (1000, 2000).
using [::-1] will reverse the order, similar to reverse()
Combining a.shape[::-1] will hold (2000, 1000), our expected output shape.
np.ndindex provides indexing, based on the number of dimensions provided.
An *, performs tuple unpacking, so using it like np.ndindex(*a.shape), is equivalent to np.ndindex(1000, 2000).
Therefore we can use their index (from np.ndindex) and turn the x and y around to write the result to the correct location in the output array:
a = np.random.randint(0, 255, (1000, 2000))
b = np.random.randint(0, 255, (1000, 2000))
result = np.zeros(a.shape[::-1], dtype=object)
for rows, columns in np.ndindex(*a.shape):
result[columns, rows] = 'bigger' if a[rows, columns] > b[rows, columns] else 'smaller'
print(result)
This will lead to the same result. Similarly, we could also first transpose the a and b arrays, drop the [::-1] in the result array, and swap the assignment result[columns, rows] back to result[rows, columns].
Edit
Thinking about it a bit longer, you are only interested in doing a comparison between two arrays of the same shape (dimension). For this numpy already has a good solution, np.where(cond, <true>, <false>).
So the entire problem can be reduced to:
answer = np.where(a > b, 'bigger', 'smaller').T
Note the .T to transpose the solution, such that the answer has the columns in the rows.
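And since the stated goal is to count the 'bigger'/'smaller' instances per column afterwards, a short follow-up sketch building on that one-liner:
labels = np.where(a > b, 'bigger', 'smaller').T  # shape: (n_cols, n_rows)
bigger_per_col = (labels == 'bigger').sum(axis=1)
smaller_per_col = (labels == 'smaller').sum(axis=1)

Note that ties (equal values) get labelled 'smaller' by np.where, whereas the original loop silently skipped them.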
|
performing operation on matched columns of NumPy arrays
|
I am new to python and programming in general and ran into a question:
I have two NumPy arrays of the same shape: they are 2D arrays, of the dimensions 1000 x 2000.
I wish to compare the values of each column in array A with the values in array B. The important part is that not every column of A should be compared to every column in B, but rather the same columns of A & B should be compared to one another, as in: A[:,0] should be compared to B[:,0], A[:,1] should be compared to B[:,1],… etc.
This was easier to do when I had one dimensional arrays: I used zip(A, B), so I could run the following for-loop:
A = np.array([2,5,6,3,7])
B = np.array([1,3,9,4,8])
res_list = []
for number1, number2 in zip(A, B):
    if number1 > number2:
        comment1 = "bigger"
        res_list.append(comment1)
    if number1 < number2:
        comment2 = "smaller"
        res_list.append(comment2)
res_list
In [702]: res_list
Out[702]: ['bigger', 'bigger', 'smaller', 'smaller', 'smaller']
However, I am not sure how best to do this on the 2D array. As output, I am aiming for a list with 2000 sublists (the 2000 cols), to later count the number of instances of "bigger" and "smaller" for each of the columns.
I am very thankful for any input.
So far I have tried to use np.nditer in a double for loop, but it returned all the possible column combinations. I would specifically desire to only combine the "matching" columns.
an approximation of the input (but I have: 1000 rows and 2000 cols)
In [709]: A
Out[709]:
array([[2, 5, 6, 3, 7],
[6, 2, 9, 2, 3],
[2, 1, 4, 5, 7]])
In [710]: B
Out[710]:
array([[1, 3, 9, 4, 8],
[4, 8, 2, 3, 1],
[3, 7, 1, 8, 9]])
As desired output, I want to compare the values of the arrays A & B column-wise (only the "matching" columns, not all columns with all columns, as I tried to explain above), and store them in a nested list (the number of "sublists" should correspond to the number of columns):
res_list = [["bigger", "bigger", "smaller"], ["bigger", "smaller", "smaller"], ["smaller", "bigger", "bigger"], ["smaller", "smaller", "smaller"], ...]
|
[
"From the example input and output, I see that you want to do an element wise comparison, and store the values per columns. From your code you understand the 1D variant of this problem, so the question seems to be how to do it in 2D.\nSolution 1\nIn order to achieve this, we have to make the 2D problem, a 1D problem, so you can do what you already did. If for example the columns would become rows, then you can redo your zip strategy for every row.\nIn otherwords, if we can turn:\na = np.array(\n [[2, 5, 6, 3, 7],\n [6, 2, 9, 2, 3],\n [2, 1, 4, 5, 7]]\n)\n\ninto:\narray([[2, 6, 2],\n [5, 2, 1],\n [6, 9, 4],\n [3, 2, 5],\n [7, 3, 7]])\n\nwe can iterate over a and b, at the same time, and get our 1D version of the problem. Swapping the x and y axis of the matrix like this, is called transposing, and is very common, the operation for numpy is a.T, (docs ndarry.T).\nNow we use your code onces for the outer loop of iterating over all the rows (after transposing, all the rows actually hold the column values). After which we use the code on those values, because every row is a 1D numpy array.\nresult = []\n\n# Outer loop, to go over the columns of `a` and `b` at the same time.\nfor row_a, row_b in zip(a.T, b.T):\n\n result_col = []\n # Inner loop to compare a whole column element wise.\n for col_a, col_b in zip(row_a, row_b):\n result_col.append('bigger' if col_a > col_b else 'smaller')\n result.append(result_col)\n\nNote: I use a ternary operator to assign smaller and bigger.\nSolution 2\nAs indicated before you are only looking at 2 values that are in the same position for both arrays, this is called an elementwise comparison. Since we are only interested in the values that are at the exact same position, and we know the output shape of our result array (input 1000x2000, output will be 2000x1000), we can also iterate over all the elements using their index.\nNow some quick handy shortcuts,\n\na.shape holds the dimensions of the array, therefore a.shape will be (1000, 2000).\nusing [::-1] will reverse the order, similar to reverse()\nCombining a.shape[::-1] will hold (2000, 1000), our expected output shape.\nnp.ndindex provides indexing, based on the number of dimensions provided.\nAn *, performs tuple unpacking, so using it like np.ndindex(*a.shape), is equivalent to np.ndindex(1000, 2000).\n\nTherefore we can use their index (from np.ndindex) and turn the x and y around to write the result to the correct location in the output array:\na = np.random.randint(0, 255, (1000, 2000))\nb = np.random.randint(0, 255, (1000, 2000))\nresult = np.zeros(a.shape[::-1], dtype=object)\n\nfor rows, columns in np.ndindex(*a.shape):\n result[columns, rows] = 'bigger' if a[rows, columns] > b[rows, columns] else 'smaller'\n\nprint(result)\n\nThis will lead to the same result. Similarly we could also first transpose the a and b array, drop the [::-1] in the result array, and swap the assignment result[columns, rows] back to result[rows, columns].\nEdit\n\nThinking about it a bit longer, you are only interested in doing a comparison between two array of the same shape (dimension). For this numpy already has a good solution, np.where(cond, <true>, <false>).\nSo the entire problem can be reduced to:\nanswer = np.where(a > b, 'bigger', 'smaller').T\n\nNote the .T to transpose the solution, such that the answer has the columns in the rows.\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"numpy_ndarray",
"python",
"python_3.x"
] |
stackoverflow_0074657977_numpy_numpy_ndarray_python_python_3.x.txt
|
Q:
Grab specific strings within a for loop with variable nested length
I have the following telegram export JSON dataset:
import pandas as pd
df = pd.read_json("data/result.json")
>>> df.columns
Index(['name', 'type', 'id', 'messages'], dtype='object')
>>> type(df)
<class 'pandas.core.frame.DataFrame'>
# Sample output
sample_df = pd.DataFrame({"messages": [
{"id": 11, "from": "user3984", "text": "Do you like soccer?"},
{"id": 312, "from": "user837", "text": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]},
{"id": 4324, "from": "user3984", "text": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']}
]})
Within df, there's a "messages" column, which has the following output:
>>> df["messages"]
0 {'id': -999713937, 'type': 'service', 'date': ...
1 {'id': -999713936, 'type': 'service', 'date': ...
2 {'id': -999713935, 'type': 'message', 'date': ...
3 {'id': -999713934, 'type': 'message', 'date': ...
4 {'id': -999713933, 'type': 'message', 'date': ...
...
22377 {'id': 22102, 'type': 'message', 'date': '2022...
22378 {'id': 22103, 'type': 'message', 'date': '2022...
22379 {'id': 22104, 'type': 'message', 'date': '2022...
22380 {'id': 22105, 'type': 'message', 'date': '2022...
22381 {'id': 22106, 'type': 'message', 'date': '2022...
Name: messages, Length: 22382, dtype: object
Within messages, there's a particular key named "text", and that's the place I want to focus on. It turns out, when you explore the data, the text column can have:
A single text:
>>> df["messages"][5]["text"]
'JAJAJAJAJAJAJA'
>>> df["messages"][22262]["text"]
'No creo'
But sometimes it's nested. Like the following:
>>> df["messages"][22373]["text"]
['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']
>>> df["messages"][22189]["text"]
['The average married couple has sex roughly once a week. ', {'type': 'mention', 'text': '@googlefactss'}, ' ', {'type': 'hashtag', 'text': '#funfact'}]
>>> df["messages"][22345]["text"]
[{'type': 'mention', 'text': '@user817430'}]
In case for nested data, if I want to grab the main text, I can do the following:
>>> df["messages"][22373]["text"][0]
'O '
>>> df["messages"][22189]["text"][0]
'The average married couple has sex roughly once a week. '
>>>
From here, everything seems ok. However, the problem arises when I do the for loop. If I try the following:
for item in df["messages"]:
tg_id = item.get("id", "None")
tg_type = item.get("type", "None")
tg_date = item.get("date", "None")
tg_from = item.get("from", "None")
tg_text = item.get("text", "None")
print(tg_id, tg_from, tg_text)
A sample output is:
21263 user3984 jajajajaja
21264 user837 ['Not sure', {'type': 'hashtag', 'text': '#confused'}]
21265 user3984 What time is it?✋
MY ASK: How to flatten the rows? I need the following (and store that in a data frame):
21263 user3984 jajajajaja
21264 user837 Not sure
21265 user837 type: hashtag
21266 user837 text: #confused
21267 user3984 What time is it?✋
I tried to detect "text" type like this:
for item in df["messages"]:
tg_id = item.get("id", "None")
tg_type = item.get("type", "None")
tg_date = item.get("date", "None")
tg_from = item.get("from", "None")
tg_text = item.get("text", "None")
if type(tg_text) == list:
tg_text = tg_text[0]
print(tg_id, tg_from, tg_text)
With this I only grab the first text, but I'm expecting to grab the other fields as well or to 'flatten' the data.
I also tried:
for item in df["messages"]:
tg_id = item.get("id", "None")
tg_type = item.get("type", "None")
tg_date = item.get("date", "None")
tg_from = item.get("from", "None")
tg_text = item.get("text", "None")
if type(tg_text) == list:
tg_text = tg_text[0]
tg_second = tg_text[1]["text"]
print(tg_id, tg_from, tg_text, tg_second)
But no luck, because the indices are variable and the lengths of the messages vary too.
In addition, even though the output wasn't close to my desired solution, I also tried:
for item in df["messages"]:
tg_text = item.get("text", "None")
if type(tg_text) == list:
for i in tg_text:
print(item, i)
mydict = {}
for k, v in df.items():
print(k, v)
mydict[k] = v
# Used df["text"].explode()
# Used json_normalize but no luck
Any thoughts?
A:
Assuming a dataframe like the following:
df = pd.DataFrame({"messages": [
{"id": 21263, "from": "user3984", "text": "jajajajaja"},
{"id": 21264, "from": "user837", "text": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]},
{"id": 21265, "from": "user3984", "text": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']}
]})
First, expand the messages dictionaries into separate id, from and text columns.
expanded = pd.concat([df.drop("messages", axis=1), pd.json_normalize(df["messages"])], axis=1)
Then explode the dataframe to have a row for each entry in text:
exploded = expanded.explode("text")
Then expand the dictionaries that are in some of the entries, converting them to lists of text:
def convert_dict(entry):
if type(entry) is dict:
return [f"{k}: {v}" for k, v in entry.items()]
else:
return entry
exploded["text"] = exploded["text"].apply(convert_dict)
Finally, explode again to separate the converted dicts to separate rows.
final = exploded.explode("text")
The resulting output should look like this
id from text
0 21263 user3984 jajajajaja
1 21264 user837 Not sure
1 21264 user837 type: hashtag
1 21264 user837 text: #confused
2 21265 user3984 O
2 21265 user3984 type: mention
2 21265 user3984 text: @user87324
2 21265 user3984 really?
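For reference, the four steps can also be chained into a single pipeline. This is just a compact restatement of the same logic, assuming the sample_df from the question and the convert_dict helper defined above:
import pandas as pd

final = (
    pd.json_normalize(sample_df["messages"])                # expand id / from / text columns
      .explode("text")                                      # one row per entry of each text list
      .assign(text=lambda d: d["text"].apply(convert_dict)) # dicts -> ["key: value", ...]
      .explode("text")                                      # split the converted dicts into rows
      .reset_index(drop=True)
)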
A:
Just to share some ideas to flatten your list,
def flatlist(srclist):
flatlist=[]
if srclist: #check if srclist is not None
for item in srclist:
if(type(item) == str): #check if item is type of string
flatlist.append(item)
if(type(item) == dict): #check if item is type of dict
for x in item:
flatlist.append(x + ' ' + item[x]) #combine key and value
return flatlist
for item in df["messages"]:
tg_text = item.get("text", "None")
flat_list = flatlist(tg_text) # get the flattened list
for tg in flat_list: # loop through the list and get the data you want
tg_id = item.get("id", "None")
tg_from = item.get("from", "None")
print(tg_id, tg_from, tg)
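One caveat, using the sample_df from the question: when "text" is a plain string such as 'Do you like soccer?', the for item in srclist loop iterates over the characters of that string and prints one row per character. A small guard at the top of flatlist avoids this (the isinstance check is my addition, not part of the original answer):
def flatlist(srclist):
    if isinstance(srclist, str): # a plain string message is already flat
        return [srclist]
    flatlist = []
    if srclist: # check if srclist is not None
        for item in srclist:
            if type(item) == str: # check if item is type of string
                flatlist.append(item)
            if type(item) == dict: # check if item is type of dict
                for x in item:
                    flatlist.append(x + ' ' + item[x]) # combine key and value
    return flatlist

With this in place, message 312 from the sample prints three rows: 312 user837 Not sure, 312 user837 type hashtag and 312 user837 text #confused.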
|
Grab specific strings within a for loop with variable nested length
|
I have the following telegram export JSON dataset:
import pandas as pd
df = pd.read_json("data/result.json")
>>> df.columns
Index(['name', 'type', 'id', 'messages'], dtype='object')
>>> type(df)
<class 'pandas.core.frame.DataFrame'>
# Sample output
sample_df = pd.DataFrame({"messages": [
{"id": 11, "from": "user3984", "text": "Do you like soccer?"},
{"id": 312, "from": "user837", "text": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]},
{"id": 4324, "from": "user3984", "text": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']}
]})
Within df, there's a "messages" column, which has the following output:
>>> df["messages"]
0 {'id': -999713937, 'type': 'service', 'date': ...
1 {'id': -999713936, 'type': 'service', 'date': ...
2 {'id': -999713935, 'type': 'message', 'date': ...
3 {'id': -999713934, 'type': 'message', 'date': ...
4 {'id': -999713933, 'type': 'message', 'date': ...
...
22377 {'id': 22102, 'type': 'message', 'date': '2022...
22378 {'id': 22103, 'type': 'message', 'date': '2022...
22379 {'id': 22104, 'type': 'message', 'date': '2022...
22380 {'id': 22105, 'type': 'message', 'date': '2022...
22381 {'id': 22106, 'type': 'message', 'date': '2022...
Name: messages, Length: 22382, dtype: object
Within messages, there's a particular key named "text", and that's the place I want to focus on. It turns out, when you explore the data, the text column can have:
A single text:
>>> df["messages"][5]["text"]
'JAJAJAJAJAJAJA'
>>> df["messages"][22262]["text"]
'No creo'
But sometimes it's nested. Like the following:
>>> df["messages"][22373]["text"]
['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']
>>> df["messages"][22189]["text"]
['The average married couple has sex roughly once a week. ', {'type': 'mention', 'text': '@googlefactss'}, ' ', {'type': 'hashtag', 'text': '#funfact'}]
>>> df["messages"][22345]["text"]
[{'type': 'mention', 'text': '@user817430'}]
In case for nested data, if I want to grab the main text, I can do the following:
>>> df["messages"][22373]["text"][0]
'O '
>>> df["messages"][22189]["text"][0]
'The average married couple has sex roughly once a week. '
>>>
From here, everything seems ok. However, the problem arises when I do the for loop. If I try the following:
for item in df["messages"]:
tg_id = item.get("id", "None")
tg_type = item.get("type", "None")
tg_date = item.get("date", "None")
tg_from = item.get("from", "None")
tg_text = item.get("text", "None")
print(tg_id, tg_from, tg_text)
A sample output is:
21263 user3984 jajajajaja
21264 user837 ['Not sure', {'type': 'hashtag', 'text': '#confused'}]
21265 user3984 What time is it?✋
MY ASK: How to flatten the rows? I need the following (and store that in a data frame):
21263 user3984 jajajajaja
21264 user837 Not sure
21265 user837 type: hashtag
21266 user837 text: #confused
21267 user3984 What time is it?✋
I tried to detect "text" type like this:
for item in df["messages"]:
tg_id = item.get("id", "None")
tg_type = item.get("type", "None")
tg_date = item.get("date", "None")
tg_from = item.get("from", "None")
tg_text = item.get("text", "None")
if type(tg_text) == list:
tg_text = tg_text[0]
print(tg_id, tg_from, tg_text)
With this I only grab the first text, but I'm expecting to grab the other fields as well or to 'flatten' the data.
I also tried:
for item in df["messages"]:
tg_id = item.get("id", "None")
tg_type = item.get("type", "None")
tg_date = item.get("date", "None")
tg_from = item.get("from", "None")
tg_text = item.get("text", "None")
if type(tg_text) == list:
tg_text = tg_text[0]
tg_second = tg_text[1]["text"]
print(tg_id, tg_from, tg_text, tg_second)
But no luck, because the indices are variable and the lengths of the messages vary too.
In addition, even though the output wasn't close to my desired solution, I also tried:
for item in df["messages"]:
tg_text = item.get("text", "None")
if type(tg_text) == list:
for i in tg_text:
print(item, i)
mydict = {}
for k, v in df.items():
print(k, v)
mydict[k] = v
# Used df["text"].explode()
# Used json_normalize but no luck
Any thoughts?
|
[
"Assuming a dataframe like the following:\ndf = pd.DataFrame({\"messages\": [\n {\"id\": 21263, \"from\": \"user3984\", \"text\": \"jajajajaja\"},\n {\"id\": 21264, \"from\": \"user837\", \"text\": ['Not sure', {'type': 'hashtag', 'text': '#confused'}]}, \n {\"id\": 21265, \"from\": \"user3984\", \"text\": ['O ', {'type': 'mention', 'text': '@user87324'}, ' really?']}\n]})\n\nFirst, expand the messages dictionaries into separate id, from and text columns.\n expanded = pd.concat([df.drop(\"messages\", axis=1), pd.json_normalize(df[\"messages\"])], axis=1)\n\nThen explode the dataframe to have a row for each entry in text:\nexploded = expanded.explode(\"text\")\n\nThen expand the dictionaries that are in some of the entries, converting them to lists of text:\ndef convert_dict(entry):\n if type(entry) is dict:\n return [f\"{k}: {v}\" for k, v in entry.items()]\n else:\n return entry\n\nexploded[\"text\"] = exploded[\"text\"].apply(convert_dict)\n\nFinally, explode again to separate the converted dicts to separate rows.\nfinal = exploded.explode(\"text\")\n\nThe resulting output should look like this\n id from text\n0 21263 user3984 jajajajaja\n1 21264 user837 Not sure\n1 21264 user837 type: hashtag\n1 21264 user837 text: #confused\n2 21265 user3984 O \n2 21265 user3984 type: mention\n2 21265 user3984 text: @user87324\n2 21265 user3984 really?\n\n",
"Just to share some ideas to flatten your list,\ndef flatlist(srclist):\n flatlist=[]\n if srclist: #check if srclist is not None\n for item in srclist:\n if(type(item) == str): #check if item is type of string\n flatlist.append(item)\n if(type(item) == dict): #check if item is type of dict\n for x in item:\n flatlist.append(x + ' ' + item[x]) #combine key and value\n return flatlist\n\nfor item in df[\"messages\"]:\n tg_text = item.get(\"text\", \"None\")\n flat_list = flatlist(tg_text) # get the flattened list\n for tg in flat_list: # loop through the list and get the data you want\n tg_id = item.get(\"id\", \"None\")\n tg_from = item.get(\"from\", \"None\")\n \n print(tg_id, tg_from, tg)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"for_loop",
"json",
"list",
"nested",
"python"
] |
stackoverflow_0074650152_for_loop_json_list_nested_python.txt
|
Q:
Try to find a sublist that doesn't occur in the range of ANY of the sublists in another list
enhancerlist=[[5,8],[10,11]]
TFlist=[[6,7],[24,56]]
I have two lists of lists. I am trying to isolate the sublists in my 'TFlist' that don't fit in the range of ANY of the sublists of enhancerlist (by 'in the range' I mean: a TFlist sublist's range fits inside an enhancerlist sublist's range).
So for example, TFlist[1] will not occur in the range of any sublists in enhancerlist (whereas TFlist [6,7] fits inside the range of [5,8]), so I want this as output:
TF_notinrange=[24,56]
the problem with a nested for loop like this:
while TFlist:
TF=TFlist.pop()
for j in enhancerlist:
if ((TF[0]>= j[0]) and (TF[1]<= j[1])):
continue
else:
TF_notinrange.append(TF)
is that I get this as output:
[[24, 56], [3, 4]]
the if statement is checking one sublist in enhancerlist at a time and so will append TF even if, later on, there is a sublist it is in the range of.
Could I somehow do a while loop with the condition? Although it seems like I still have the issue of a nested loop appending things incorrectly?
A:
Alternative is to use a list comprehension:
TF_notinrange = [tf for tf in TFlist
                 if not any(istart <= tf[0] <= tf[1] <= iend
                            for istart, iend in enhancerlist)]
print(TF_notinrange)
# >>> [[24, 56]]
Explanation
Take ranges of TFlist which are not contained in any ranges of enhancerlist
A:
You can use chained comparisons along with the less-common for-else block where the else clause triggers only if the for loop was not broken out of prematurely to achieve this:
non_overlapping = []
for tf_a, tf_b in TFlist:
for enhancer_a, enhancer_b in enhancerlist:
if enhancer_a <= tf_a < tf_b <= enhancer_b:
break
else:
non_overlapping.append([tf_a, tf_b])
Note that this assumes that all pairs are already sorted and that no pair comprises a range of length zero (e.g., (2, 2)).
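As a quick sanity check with the sample lists from the question (my own verification, not part of either answer):
enhancerlist = [[5, 8], [10, 11]]
TFlist = [[6, 7], [24, 56]]

# [6, 7] fits inside [5, 8], so it is dropped;
# [24, 56] fits inside no enhancer range, so it is kept.
# Both the list comprehension and the for-else loop therefore yield:
# [[24, 56]]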
|
Try to find a sublist that doesn't occur in the range of ANY of the sublists in another list
|
enhancerlist=[[5,8],[10,11]]
TFlist=[[6,7],[24,56]]
I have two lists of lists. I am trying to isolate the sublists in my 'TFlist' that don't fit in the range of ANY of the sublists of enhancerlist (by 'in the range' I mean: a TFlist sublist's range fits inside an enhancerlist sublist's range).
So for example, TFlist[1] will not occur in the range of any sublists in enhancerlist (whereas TFlist [6,7] fits inside the range of [5,8]), so I want this as output:
TF_notinrange=[24,56]
the problem with a nested for loop like this:
while TFlist:
TF=TFlist.pop()
for j in enhancerlist:
if ((TF[0]>= j[0]) and (TF[1]<= j[1])):
continue
else:
TF_notinrange.append(TF)
is that I get this as output:
[[24, 56], [3, 4]]
the if statement is checking one sublist in enhancerlist at a time and so will append TF even if, later on, there is a sublist it is in the range of.
Could I somehow do a while loop with the condition? Although it seems like I still have the issue of a nested loop appending things incorrectly?
|
[
"Alternative is to use a list comprehension:\nTF_notinrange = [tf for tf in TFlist \n if not any(istart <= tf[0] <= tf[1] <= iend \n for istart, iend in enhancerlist)]\nprint(TF_notinrange)\n>>> TF_notinrange\n\nExplanation\nTake ranges of TFlist which are not contained in any ranges of enhancerlist\n",
"You can use chained comparisons along with the less-common for-else block where the else clause triggers only if the for loop was not broken out of prematurely to achieve this:\nnon_overlapping = []\n\nfor tf_a, tf_b in TFlist:\n for enhancer_a, enhancer_b in enhancerlist:\n if enhancer_a <= tf_a < tf_b <= enhancer_b:\n break\n else:\n non_overlapping.append([tf_a, tf_b])\n\nNote that this assumes that all pairs are already sorted and that no pair comprises a range of length zero (e.g., (2, 2)).\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074658926_python.txt
|
Q:
AnyLogic - time plot export from resourcePool utilization
In my model I simulate the work on an assembly line, and I want to export a statistic of the ResourcePool into Excel. This is what my time plot looks like:
In a way, I would like to have this statistic automated in Excel as well. I guess for this I have to create a histogram dataset and choose resourcePool.utilization() as the value. In the best case, the x-axis would also keep the time stamps in days.
But unfortunately I don't know exactly how to do that, nor how to get this dataset into Excel. I know how to export individual data to an Excel file, but do I need to do the same here with this dataset? Thanks for any help!
A:
First, use a cyclic event to write the data into a database table (more flexible than using a dataset): https://anylogic.help/anylogic/connectivity/querying.html#insert
Next, let AnyLogic write the database results into your Excel file as described here: https://anylogic.help/anylogic/connectivity/export-excel.html#exporting-data-to-ms-excel-workbook
Check some example models as well, there are several that do this stuff
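As a rough sketch of the first step, assuming a cyclic event and an Internal Database table named utilization_stats with columns model_time and utilization (all three names are placeholders; adapt them to your model):
// Action of the cyclic event, e.g. firing once per model hour:
insertInto(utilization_stats)
    .columns(utilization_stats.model_time, utilization_stats.utilization)
    .values(time(), resourcePool.utilization())
    .execute();

At the end of the run, the export step from the second link can then write this table into a sheet of your Excel workbook, which gives you the time-stamped utilization series automatically.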
|
AnyLogic - time plot export from resourcePool utilization
|
In my model I simulate the work on an assembly line, and I want to export a statistic of the ResourcePool into Excel. This is what my time plot looks like:
In a way, I would like to have this statistic automated in Excel as well. I guess for this I have to create a histogram dataset and choose resourcePool.utilization() as the value. In the best case, the x-axis would also keep the time stamps in days.
But unfortunately I don't know exactly how to do that, nor how to get this dataset into Excel. I know how to export individual data to an Excel file, but do I need to do the same here with this dataset? Thanks for any help!
|
[
"First, use a cyclic event to write the data into a database table (more flexible than using a dataset): https://anylogic.help/anylogic/connectivity/querying.html#insert\nNext, let AnyLogic write the database results into your Excel file as described here: https://anylogic.help/anylogic/connectivity/export-excel.html#exporting-data-to-ms-excel-workbook\nCheck some example models as well, there are several that do this stuff\n"
] |
[
0
] |
[] |
[] |
[
"anylogic",
"simulation"
] |
stackoverflow_0074658445_anylogic_simulation.txt
|
Q:
Google picker requires 3rd party cookies
We are migrating from deprecated Google Sign-in (basically gapi.auth and gapi.auth2 methods) into the new Google identity services (google.accounts.oauth2). More info here
We are using the authorization solely for Google Picker. The problem is that beforehand (it seems) the library didn't return an access_token from gapi.auth.authorize, which was an indication that something wrong was going on, and we displayed a "3rd party cookies blocked" message.
After the migration, Google Identity does not need any cookies whatsoever, but Google Picker is seemingly unaware of this and stops working when 3rd party cookies are blocked.
After the picker is successfully loaded, it prompts the user to sign in (even though it just received a working token via setOAuthToken). After clicking the sign-in twice in the iframe, there is some malfunction error. Nothing is ever opened. No callbacks are aware of this, and no errors can be caught.
This behavior is directly controlled by the 3rd party cookie block: if the cookies are allowed, the exact same flow (and code) opens the Google Drive picker (via build and setVisible) and everything works as expected.
The questions are:
How to catch this 3rd party cookie error? Or any errors in the iframe whatsoever.
Why the picker requires 3rd party cookies?
Should I do something on the picker side for the migration as well?
A:
Drive Picker and Drive API Third-party cookies
This has been reported as an issue by the community; I would highly suggest also providing your feedback and insight regarding your concerns and future alternatives:
https://issuetracker.google.com/issues/188699186
Replying to your questions:
How to catch this 3rd party cookie error? Or any errors in the iframe whatsoever?
As suggested over the Issue tracker, it is not possible to catch information about the matter.
Why the picker requires 3rd party cookies?
It would be a great idea to request better documentation on the why over the issue tracker, as was also suggested in a previous old post:
https://issuetracker.google.com/164130212
I notice that troubleshooting guides for other Drive API processes generally recommend enabling third-party cookies or, as an alternative, adding an exception for accounts.google.com.
Should I do something on the picker side for the migration as well?
It seems you have followed the migration process correctly. This is a limitation of Drive Picker itself, or of the Drive API needing access to those cookies to run properly; it might be a good test to try the exception suggested in the previous answer.
References:
https://developers.google.com/drive/api/troubleshoot-authentication-authorization
https://developers.google.com/identity/sign-in/web/troubleshooting#third-party_cookies_and_data_blocked
|
Google picker requires 3rd party cookies
|
We are migrating from deprecated Google Sign-in (basically gapi.auth and gapi.auth2 methods) into the new Google identity services (google.accounts.oauth2). More info here
We are using the authorization solely for Google Picker. The problem is that beforehand (it seems) the library didn't return an access_token from gapi.auth.authorize, which was an indication that something wrong was going on, and we displayed a "3rd party cookies blocked" message.
After the migration, Google Identity does not need any cookies whatsoever, but Google Picker is seemingly unaware of this and stops working when 3rd party cookies are blocked.
After the picker is successfully loaded, it prompts the user to sign in (even though it just received a working token via setOAuthToken). After clicking the sign-in twice in the iframe, there is some malfunction error. Nothing is ever opened. No callbacks are aware of this, and no errors can be caught.
This behavior is directly controlled by the 3rd party cookie block: if the cookies are allowed, the exact same flow (and code) opens the Google Drive picker (via build and setVisible) and everything works as expected.
The questions are:
How to catch this 3rd party cookie error? Or any errors in the iframe whatsoever.
Why the picker requires 3rd party cookies?
Should I do something on the picker side for the migration as well?
|
[
"Drive Picker and Drive API Third-party cookies\nThis has been reported as an issue from the community, I would highly suggest to also provide the feedback and insight regarding your concerns and future alternatives:\n\nhttps://issuetracker.google.com/issues/188699186\n\nReplying to your questions:\nHow to catch this 3rd party cookie error? Or any errors in the iframe whatsoever?\nAs suggested over the Issue tracker, it is not possible to catch information about the matter.\nWhy the picker requires 3rd party cookies?\nIt would be a great idea to request a better documentation on why over the issue tracker, as it was also suggested on a previous old post:\n\nhttps://issuetracker.google.com/164130212\n\nI notice that other types of Drive API process that can be troubleshooted generally recommend enabling or as an alternative to add an exception for accounts.google.com.\nShould I do something on the picker side for the migration as well?\nIt seems you have followed the process of migration correctly, this is only a limitation from Drive Picker itself or the Drive API needing access to those cookies to run properly, might be a good test the exception suggested in the previous answer.\nReferences:\n\nhttps://developers.google.com/drive/api/troubleshoot-authentication-authorization\n\nhttps://developers.google.com/identity/sign-in/web/troubleshooting#third-party_cookies_and_data_blocked\n\n\n"
] |
[
0
] |
[] |
[] |
[
"google_api_js_client",
"google_drive_picker",
"google_oauth",
"google_picker",
"google_signin"
] |
stackoverflow_0074654103_google_api_js_client_google_drive_picker_google_oauth_google_picker_google_signin.txt
|
Q:
Renaming Button.jsx makes the entire React web app unable to compile
I've been tasked with renaming a Button.jsx file in my React web app to 'v2_button.jsx'; however, when I do so, the entire web app fails to compile, since there are many places where 'Button' is imported but 'v2_button' is not. My question is: how do I rename the jsx file and update every instance where the original Button.jsx is imported to use 'v2_button'?
Button.jsx file:
import styled from 'styled-components';
const Button = styled.Button`
display: flex;
align-items: center;
justify-content: center;
height: 50px;
width: 300px;
background: ${(props) => {
let bg;
if (props.disabled) {
bg = props.theme.colors.disabled.background;
} else if (props.color === 'secondary') {
bg = props.theme.colors.primary.main;
} else if (props.color === 'danger') {
bg = props.theme.colors.danger.main;
} else {
bg = props.theme.colors.white;
}
return bg;
}};
font-family: ${(props) => props.theme.font.font3.new};
font-size: 18px;
font-weight: ${(props) => props.theme.fontWeight.heavy_900};
letter-spacing: 0.105em;
color: ${(props) => {
let fontColor;
if (props.color === 'secondary') {
fontColor = props.theme.colors.textPrimary.light;
} else {
fontColor = props.theme.colors.textPrimary.purple.dark;
}
return fontColor;
}};
padding: ${(props) => {
let padding;
if (props.size !== 's') {
padding = '0 39px';
}
return padding;
}};
border-radius: 40px;
border: ${(props) =>
props.disabled
? `1px solid ${props.theme.colors.disabled.border}`
: 'none'};
white-space: nowrap;
transition: background ${(props) => props.theme.animations.short} linear;
:hover {
background: ${(props) => {
let bg;
if (props.disabled) {
bg = props.theme.colors.disabled.background;
} else if (props.color === 'secondary') {
bg = props.theme.colors.primary.light;
} else if (props.color === 'danger') {
bg = props.theme.colors.danger.light;
} else {
bg = props.theme.colors.secondary.light;
}
return bg;
}};
cursor: pointer;
}
`;
export default Button;
Example where Button is called:
import styled from 'styled-components';
import Button from "../../Button";
export const Wrapper = styled.div`
display: flex;
flex-direction: column;
`;
export const Container = styled.div`
display: flex;
flex-direction: column;
`;
export const NameContainer = styled.div`
width: 100%;
display: flex;
align-content: space-between;
flex-direction: row;
font-style: normal;
font-weight: ${(props) => props.theme.fontWeight.med_500};
font-size: ${(props) => props.theme.fontSize.med_16};
`;
export const List = styled.div`
display: flex;
`;
export const FieldLabel = styled.label`
height: 36px;
color: ${(props) => props.theme.colors.white};
font-family: ${(props) => props.theme.font.font1.demi};
margin: 0 0 25px 0;
text-align: center;
font-style: normal;
font-weight: ${(props) => props.theme.fontWeight.heavy_800};
font-size: ${(props) => props.theme.fontSize.large_28};
line-height: 50px;
`;
export const Selected = styled.label`
width: 158px;
height: 40px;
margin: 0 10px 20px 10px;
padding-top: 10px;
padding-left: 20px;
font-size: ${(props) => props.theme.fontSize.med_20};
color: ${(props) => props.theme.colors.white};
font-family: ${(props) => props.theme.font.font1};
caret-color: ${(props) => props.theme.colors.textSecondary.main};
:focus {
outline: none;
}
:focus-within {
box-shadow: inset 0px 0px 2px 1px
${(props) => props.theme.colors.textSecondary.main};
}
::placeholder {
color: ${(props) => props.theme.colors.textPrimary.light};
}
background: ${(props) => props.theme.colors.primary.main};
border: ${(props) => props.theme.borders.textSecondary};
border-radius: 4px;
`;
export const AddFieldButton = styled(Button)`
margin: 30px;
font-size: ${(props) => props.theme.fontSize.med_20};
color: ${(props) => props.theme.colors.textSecondary.main};
background: none;
width: 1px;
text-align: center;
`;
Compiling error I receive after changing Button.jsx to v2_button.jsx:
https://imgur.com/a/kC0AdmG
A:
The best thing is to rename all the imports.
But a 'workaround' to renaming all the imports could be:
Keep the Button.jsx
Instead of exporting the default implementation, import the implementation from v2_button.jsx file and re-export it from the Button.jsx file.
Thus, every import of Button.jsx will actually refer to the implementation in the v2_button.jsx file.
Example:
Button_v1.tsx
const Button_v1 = () => (
<>Button v1</>
);
export default Button_v1;
Button_v2.tsx
const Button_v2 = () => (
<>Button v2</>
);
export default Button_v2;
Since all other files reference Button_v1 and you don't want OR can't update the imports, you could do this:
Button_v1.tsx (after)
import Button_v2 from "./Button_v2"; // <-- ADDED THIS
const Button_v1 = () => (
<>Button v1</>
);
// export default Button_v1; // <-- COMMENTED THIS
export default Button_v2; // <-- ADDED THIS
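If you can update the imports instead (the cleaner option mentioned at the top), a one-off search-and-replace from the project root does the job. A rough sketch, assuming the file lives under src/components and GNU sed (on macOS use sed -i ''); run it on a clean git tree so you can review the diff:
# Rename the file, then rewrite every '.../Button' import path to '.../v2_button'
git mv src/components/Button.jsx src/components/v2_button.jsx
grep -rl --include='*.jsx' "from ['\"].*/Button['\"]" src \
  | xargs sed -i "s|/Button\(['\"]\)|/v2_button\1|g"

This only rewrites the import paths; the local Button identifier in each file can keep its name, since it refers to a default export.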
|
Renaming Button.jsx makes the entire React web app unable to compile
|
I've been tasked with renaming a Button.jsx file in my React web app to 'v2_button.jsx'; however, when I do so, the entire web app fails to compile, since there are many places where 'Button' is imported but 'v2_button' is not. My question is: how do I rename the jsx file and update every instance where the original Button.jsx is imported to use 'v2_button'?
Button.jsx file:
import styled from 'styled-components';
const Button = styled.Button`
display: flex;
align-items: center;
justify-content: center;
height: 50px;
width: 300px;
background: ${(props) => {
let bg;
if (props.disabled) {
bg = props.theme.colors.disabled.background;
} else if (props.color === 'secondary') {
bg = props.theme.colors.primary.main;
} else if (props.color === 'danger') {
bg = props.theme.colors.danger.main;
} else {
bg = props.theme.colors.white;
}
return bg;
}};
font-family: ${(props) => props.theme.font.font3.new};
font-size: 18px;
font-weight: ${(props) => props.theme.fontWeight.heavy_900};
letter-spacing: 0.105em;
color: ${(props) => {
let fontColor;
if (props.color === 'secondary') {
fontColor = props.theme.colors.textPrimary.light;
} else {
fontColor = props.theme.colors.textPrimary.purple.dark;
}
return fontColor;
}};
padding: ${(props) => {
let padding;
if (props.size !== 's') {
padding = '0 39px';
}
return padding;
}};
border-radius: 40px;
border: ${(props) =>
props.disabled
? `1px solid ${props.theme.colors.disabled.border}`
: 'none'};
white-space: nowrap;
transition: background ${(props) => props.theme.animations.short} linear;
:hover {
background: ${(props) => {
let bg;
if (props.disabled) {
bg = props.theme.colors.disabled.background;
} else if (props.color === 'secondary') {
bg = props.theme.colors.primary.light;
} else if (props.color === 'danger') {
bg = props.theme.colors.danger.light;
} else {
bg = props.theme.colors.secondary.light;
}
return bg;
}};
cursor: pointer;
}
`;
export default Button;
Example where Button is called:
import styled from 'styled-components';
import Button from "../../Button";
export const Wrapper = styled.div`
display: flex;
flex-direction: column;
`;
export const Container = styled.div`
display: flex;
flex-direction: column;
`;
export const NameContainer = styled.div`
width: 100%;
display: flex;
align-content: space-between;
flex-direction: row;
font-style: normal;
font-weight: ${(props) => props.theme.fontWeight.med_500};
font-size: ${(props) => props.theme.fontSize.med_16};
`;
export const List = styled.div`
display: flex;
`;
export const FieldLabel = styled.label`
height: 36px;
color: ${(props) => props.theme.colors.white};
font-family: ${(props) => props.theme.font.font1.demi};
margin: 0 0 25px 0;
text-align: center;
font-style: normal;
font-weight: ${(props) => props.theme.fontWeight.heavy_800};
font-size: ${(props) => props.theme.fontSize.large_28};
line-height: 50px;
`;
export const Selected = styled.label`
width: 158px;
height: 40px;
margin: 0 10px 20px 10px;
padding-top: 10px;
padding-left: 20px;
font-size: ${(props) => props.theme.fontSize.med_20};
color: ${(props) => props.theme.colors.white};
font-family: ${(props) => props.theme.font.font1};
caret-color: ${(props) => props.theme.colors.textSecondary.main};
:focus {
outline: none;
}
:focus-within {
box-shadow: inset 0px 0px 2px 1px
${(props) => props.theme.colors.textSecondary.main};
}
::placeholder {
color: ${(props) => props.theme.colors.textPrimary.light};
}
background: ${(props) => props.theme.colors.primary.main};
border: ${(props) => props.theme.borders.textSecondary};
border-radius: 4px;
`;
export const AddFieldButton = styled(Button)`
margin: 30px;
font-size: ${(props) => props.theme.fontSize.med_20};
color: ${(props) => props.theme.colors.textSecondary.main};
background: none;
width: 1px;
text-align: center;
`;
Compiling error I receive after changing Button.jsx to v2_button.jsx:
https://imgur.com/a/kC0AdmG
|
[
"The best thing is to renamed all the imports.\nBut a 'workaround' to renaming all the imports could be:\n\nKeep the Button.jsx\nInstead of exporting the default implementation, import the implementation from v2_button.jsx file and re-export it from the Button.jsx file.\n\nThus, every import of Button.tsx will actually refer to the implementation of v2_button.tsx file.\n\nExample:\nButton_v1.tsx\nconst Button_v1 = () => (\n <>Button v1</>\n);\n\nexport default Button_v1; \n\nButton_v2.tsx\nconst Button_v2 = () => (\n <>Button v2</>\n);\n\nexport default Button_v2; \n\nSince all other files reference Button_v1 and you don't want OR can't update the imports, you could do this:\nButton_v1.tsx (after)\nimport Button_v2 from \"./Button_v2\"; // <-- ADDED THIS\n\nconst Button_v1 = () => (\n <>Button v1</>\n);\n\n// export default Button_v1; // <-- COMMENTED THIS\nexport default Button_v2; // <-- ADDED THIS\n\n"
] |
[
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0074658922_reactjs.txt
|
Q:
Error 404 being given on a virtual host with custom .htaccess file
I have an apache2 installation on my local Linux server. It has a virtual host called pcts.local with the root /var/www/repos/pcts/. Inside the root of pcts.local is a .htaccess file which attempts to rewrite URLs to include .php when it isn't given, like below:
http://pcts.local/ -> http://pcts.local/index.php
http://pcts.local/contact -> http://pcts.local/contact.php
The problem is, http://pcts.local/contact gives an error 404 but http://pcts.local/contact.php gives 200.
Virtual Host Configuration:
<VirtualHost *:80>
ServerName pcts.local
ServerAdmin webmaster@localhost
DocumentRoot /var/www/repos/pcts
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
.htaccess file in /var/www/repos/pcts/
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.+)$ $1.php [NC,L]
Thanks in advance of any help!
A:
In your code, REQUEST_FILENAME is expecting a file with a php extension to perform the rewrite.
Try this instead:
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME} !-f
RewriteRule ^([^\.]+)$ $1.php [NC,L]
A:
<VirtualHost *:80>
ServerName pcts.local
ServerAdmin webmaster@localhost
DocumentRoot /var/www/repos/pcts
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
If this is your complete config then your .htaccess file is not being processed.
You've not enabled .htaccess overrides for the specific directory. (ie. You've not enabled the parsing of .htaccess files.) .htaccess overrides are disabled by default.
However, you've not enabled access to this area of the filesystem either? Have you done this elsewhere in the server config?!
You should have a relevant <Directory> section like the following inside the <VirtualHost> container:
<Directory /var/www/repos/pcts>
# Enable .htaccess overrides
AllowOverride All
# Allow user access to this directory
Require all granted
</Directory>
You can restrict .htaccess overrides further if required (see the reference link below)
Reference:
https://httpd.apache.org/docs/2.4/mod/core.html#allowoverride
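Once the <Directory> block is in place, a typical verify-and-reload cycle (assuming a Debian/Ubuntu-style apache2 layout, as the question's paths suggest) is:
# Check the configuration syntax, then reload Apache to pick up the vhost change
sudo apachectl configtest
sudo systemctl reload apache2

Changes to the .htaccess file itself take effect immediately and need no reload.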
|
Error 404 being given on a virtual host with custom .htaccess file
|
I have an apache2 installation on my local Linux server. It has a virtual host called pcts.local with the root /var/www/repos/pcts/. Inside the root of pcts.local is a .htaccess file which attempts to rewrite URLs to include .php when it isn't given, like below:
http://pcts.local/ -> http://pcts.local/index.php
http://pcts.local/contact -> http://pcts.local/contact.php
The problem is, http://pcts.local/contact gives an error 404 but http://pcts.local/contact.php gives 200.
Virtual Host Configuration:
<VirtualHost *:80>
ServerName pcts.local
ServerAdmin webmaster@localhost
DocumentRoot /var/www/repos/pcts
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>
.htaccess file in /var/www/repos/pcts/
RewriteEngine On
RewriteBase /
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_FILENAME}\.php -f
RewriteRule ^(.+)$ $1.php [NC,L]
Thanks in advance of any help!
|
[
"In your code, REQUEST_FILENAME is expecting a file with a php extension to perform the rewrite.\nTry this instead:\nRewriteCond %{REQUEST_FILENAME} !-d\nRewriteCond %{REQUEST_FILENAME} !-f\nRewriteRule ^([^\\.]+)$ $1.php [NC,L]\n\n",
"\n<VirtualHost *:80>\n ServerName pcts.local\n ServerAdmin webmaster@localhost\n DocumentRoot /var/www/repos/pcts\n\n ErrorLog ${APACHE_LOG_DIR}/error.log\n CustomLog ${APACHE_LOG_DIR}/access.log combined\n</VirtualHost>\n\n\nIf this is your complete config then your .htaccess file is not being processed.\nYou've not enabled .htaccess overrides for the specific directory. (ie. You've not enabled the parsing of .htaccess files.) .htaccess overrides are disabled by default.\nHowever, you've not enabled access to this area of the filesystem either? Have you done this elsewhere in the server config?!\nYou should have a relevant <Directory> section like the following inside the <VirtualHost> container:\n<Directory /var/www/repos/pcts>\n # Enable .htaccess overrides\n AllowOverride All\n\n # Allow user access to this directory\n Require all granted\n</Directory>\n\nYou can restrict .htaccess overrides further if required (see the reference link below)\nReference:\n\nhttps://httpd.apache.org/docs/2.4/mod/core.html#allowoverride\n\n"
] |
[
1,
1
] |
[] |
[] |
[
".htaccess",
"apache2",
"linux",
"php",
"vhosts"
] |
stackoverflow_0074657684_.htaccess_apache2_linux_php_vhosts.txt
|
Q:
Failed to run sidekiq with sidekiq-cron. How to set up sidekiq & sidekiq-cron?
I'm trying to implement a background job using sidekiq & sidekiq-cron (the latter for the repeated job).
However, when I execute the app and sidekiq, both executions fail.
Although I have put in as much effort as I could, I have not found any solution.
Is there anyone who can give me some advice?
The failed logs and my configuration follow:
[The app execution failed log]
$ ./bin/rails server
=> Booting Puma
=> Rails 7.0.4 application starting in development
=> Run `bin/rails server --help` for more startup options
Exiting
/Users/test1234/projects/test-project/test_bg_job/config/initializers/sidekiq.rb:1:in `<main>': undefined method `options' for Sidekiq:Module (NoMethodError)
Sidekiq.options[:average_scheduled_poll_interval] = 10
^^^^^^^^
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:667:in `load'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:667:in `block in load_config_initializer'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/activesupport-7.0.4/lib/active_support/notifications.rb:208:in `instrument'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:666:in `load_config_initializer'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:620:in `block (2 levels) in <class:Engine>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:619:in `each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:619:in `block in <class:Engine>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:32:in `instance_exec'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:32:in `run'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:61:in `block in run_initializers'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:228:in `block in tsort_each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:422:in `block (2 levels) in each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:431:in `each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:421:in `block in each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:50:in `each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:50:in `tsort_each_child'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:415:in `call'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:415:in `each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:349:in `block in each_strongly_connected_component'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:347:in `each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:347:in `call'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:347:in `each_strongly_connected_component'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:226:in `tsort_each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:205:in `tsort_each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:60:in `run_initializers'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/application.rb:372:in `initialize!'
from /Users/test1234/projects/test-project/test_bg_job/config/environment.rb:5:in `<main>'
from config.ru:3:in `require_relative'
from config.ru:3:in `block in <main>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:116:in `eval'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:116:in `new_from_string'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:105:in `load_file'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:66:in `parse_file'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/server.rb:349:in `build_app_and_options_from_config'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/server.rb:249:in `app'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/server.rb:422:in `wrapped_app'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:76:in `log_to_stdout'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:36:in `start'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:143:in `block in perform'
from <internal:kernel>:90:in `tap'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:134:in `perform'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/command/base.rb:87:in `perform'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/command.rb:48:in `invoke'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands.rb:18:in `<main>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from ./bin/rails:4:in `<main>'
[sidekiq execution failed log]
undefined method `[]' for Sidekiq:Module
new_version? ? Sidekiq[key] : Sidekiq.options[key]
^^^^^
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/options.rb:6:in `[]'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/cron/schedule_loader.rb:7:in `block in <main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq.rb:98:in `configure_server'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/cron/schedule_loader.rb:6:in `<main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/cron.rb:4:in `<main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq-cron.rb:2:in `<main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:60:in `block (2 levels) in require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:55:in `each'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:55:in `block in require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:44:in `each'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:44:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler.rb:176:in `require'
/Users/test1234/projects/test-project/test_bg_job/config/application.rb:7:in `<top (required)>'
/Users/test1234/projects/test-project/test_bg_job/config/environment.rb:2:in `require_relative'
/Users/test1234/projects/test-project/test_bg_job/config/environment.rb:2:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq/cli.rb:301:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq/cli.rb:301:in `boot_application'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq/cli.rb:42:in `run'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/bin/sidekiq:31:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/sidekiq:25:in `load'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/sidekiq:25:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli/exec.rb:58:in `load'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli/exec.rb:58:in `kernel_load'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli/exec.rb:23:in `run'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli.rb:484:in `exec'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli.rb:31:in `dispatch'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli.rb:25:in `start'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bundler-2.3.7/libexec/bundle:48:in `block in <top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/friendly_errors.rb:103:in `with_friendly_errors'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bundler-2.3.7/libexec/bundle:36:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/bundle:25:in `load'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/bundle:25:in `<main>'
[config/sidekiq.yml]
:verbose: false
:concurrency: 5
:timeout: 8
:logfile: log/sidekiq.log
staging:
:concurrency: 5
production:
:concurrency: 5
:queues:
- default
[config/initializers/sidekiq.rb]
Sidekiq.options[:average_scheduled_poll_interval] = 10
Sidekiq.configure_server do |config|
config.redis = {url: ENV["REDIS_URL"]}
Rails.logger = Sidekiq.logger
ActiveRecord::Base.logger = Sidekiq.logger
schedule_file = "config/schedule.yml"
if File.exist?(schedule_file)
Rails.application.config.after_initialize do
Sidekiq::Cron::Job.load_from_hash! YAML.load_file(schedule_file)
end
end
end
Sidekiq.configure_client do |config|
config.redis = {url: ENV["REDIS_URL"]}
end
[config/schedule.yml]
my_bg_job:
cron: "* */1 * * * *"
class: "MyBgJob"
[Gemfile]
...
gem 'sidekiq'
gem 'sidekiq-cron'
...
A:
I had the same problem.
Please try upgrading your current sidekiq-cron version to 1.9.0.
Version 1.9.0 is compatible with Sidekiq 7.0.
I solved the problem by upgrading the version of sidekiq-cron.
Here is the link to the pull request for version 1.9.0 of sidekiq-cron.
(https://github.com/sidekiq-cron/sidekiq-cron/pull/369/files)
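For completeness: after the upgrade, the Sidekiq.options line in the initializer also has to go, since Sidekiq 7 removed Sidekiq.options entirely (that is the first error in the question). Below is a sketch of an adjusted config/initializers/sidekiq.rb; it sets the option on the server config object instead (the []= setter is how Sidekiq 7 exposes these options; double-check against the Sidekiq 7 upgrade notes), with the logger lines omitted for brevity:
Sidekiq.configure_server do |config|
  config[:average_scheduled_poll_interval] = 10 # replaces Sidekiq.options[...]
  config.redis = { url: ENV["REDIS_URL"] }

  schedule_file = "config/schedule.yml"
  if File.exist?(schedule_file)
    Rails.application.config.after_initialize do
      Sidekiq::Cron::Job.load_from_hash! YAML.load_file(schedule_file)
    end
  end
end

Sidekiq.configure_client do |config|
  config.redis = { url: ENV["REDIS_URL"] }
end

Note that sidekiq-cron's own schedule loader (visible in the stack trace) also picks up config/schedule.yml automatically, so the manual load block may be redundant in recent versions.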
|
Failed to run sidekiq with sidekiq-cron. How to set up sidekiq & sidekiq-cron?
|
I'm trying to implement a background job using sidekiq & sidekiq-cron (the latter for the repeated job).
However, when I execute the app and sidekiq, both executions fail.
Although I have put in as much effort as I could, I have not found any solution.
Is there anyone who can give me some advice?
The failed logs and my configuration follow:
[The app execution failed log]
$ ./bin/rails server
=> Booting Puma
=> Rails 7.0.4 application starting in development
=> Run `bin/rails server --help` for more startup options
Exiting
/Users/test1234/projects/test-project/test_bg_job/config/initializers/sidekiq.rb:1:in `<main>': undefined method `options' for Sidekiq:Module (NoMethodError)
Sidekiq.options[:average_scheduled_poll_interval] = 10
^^^^^^^^
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:667:in `load'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:667:in `block in load_config_initializer'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/activesupport-7.0.4/lib/active_support/notifications.rb:208:in `instrument'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:666:in `load_config_initializer'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:620:in `block (2 levels) in <class:Engine>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:619:in `each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/engine.rb:619:in `block in <class:Engine>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:32:in `instance_exec'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:32:in `run'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:61:in `block in run_initializers'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:228:in `block in tsort_each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:350:in `block (2 levels) in each_strongly_connected_component'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:422:in `block (2 levels) in each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:431:in `each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:421:in `block in each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:50:in `each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:50:in `tsort_each_child'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:415:in `call'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:415:in `each_strongly_connected_component_from'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:349:in `block in each_strongly_connected_component'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:347:in `each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:347:in `call'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:347:in `each_strongly_connected_component'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:226:in `tsort_each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/tsort.rb:205:in `tsort_each'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/initializable.rb:60:in `run_initializers'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/application.rb:372:in `initialize!'
from /Users/test1234/projects/test-project/test_bg_job/config/environment.rb:5:in `<main>'
from config.ru:3:in `require_relative'
from config.ru:3:in `block in <main>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:116:in `eval'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:116:in `new_from_string'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:105:in `load_file'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/builder.rb:66:in `parse_file'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/server.rb:349:in `build_app_and_options_from_config'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/server.rb:249:in `app'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/rack-2.2.4/lib/rack/server.rb:422:in `wrapped_app'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:76:in `log_to_stdout'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:36:in `start'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:143:in `block in perform'
from <internal:kernel>:90:in `tap'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands/server/server_command.rb:134:in `perform'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/thor-1.2.1/lib/thor/command.rb:27:in `run'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/thor-1.2.1/lib/thor/invocation.rb:127:in `invoke_command'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/thor-1.2.1/lib/thor.rb:392:in `dispatch'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/command/base.rb:87:in `perform'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/command.rb:48:in `invoke'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/railties-7.0.4/lib/rails/commands.rb:18:in `<main>'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from /Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
from ./bin/rails:4:in `<main>'
[sidekiq execution failed log]
undefined method `[]' for Sidekiq:Module
new_version? ? Sidekiq[key] : Sidekiq.options[key]
^^^^^
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/options.rb:6:in `[]'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/cron/schedule_loader.rb:7:in `block in <main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq.rb:98:in `configure_server'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/cron/schedule_loader.rb:6:in `<main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq/cron.rb:4:in `<main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-cron-1.8.0/lib/sidekiq-cron.rb:2:in `<main>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bootsnap-1.13.0/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:32:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:60:in `block (2 levels) in require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:55:in `each'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:55:in `block in require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:44:in `each'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/runtime.rb:44:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler.rb:176:in `require'
/Users/test1234/projects/test-project/test_bg_job/config/application.rb:7:in `<top (required)>'
/Users/test1234/projects/test-project/test_bg_job/config/environment.rb:2:in `require_relative'
/Users/test1234/projects/test-project/test_bg_job/config/environment.rb:2:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq/cli.rb:301:in `require'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq/cli.rb:301:in `boot_application'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/lib/sidekiq/cli.rb:42:in `run'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/sidekiq-7.0.1/bin/sidekiq:31:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/sidekiq:25:in `load'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/sidekiq:25:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli/exec.rb:58:in `load'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli/exec.rb:58:in `kernel_load'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli/exec.rb:23:in `run'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli.rb:484:in `exec'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli.rb:31:in `dispatch'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/cli.rb:25:in `start'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bundler-2.3.7/libexec/bundle:48:in `block in <top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/3.1.0/bundler/friendly_errors.rb:103:in `with_friendly_errors'
/Users/test1234/.asdf/installs/ruby/3.1.2/lib/ruby/gems/3.1.0/gems/bundler-2.3.7/libexec/bundle:36:in `<top (required)>'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/bundle:25:in `load'
/Users/test1234/.asdf/installs/ruby/3.1.2/bin/bundle:25:in `<main>'
[config/sidekiq.yml]
:verbose: false
:concurrency: 5
:timeout: 8
:logfile: log/sidekiq.log
staging:
:concurrency: 5
production:
:concurrency: 5
:queues:
- default
[config/initializers/sidekiq.rb]
Sidekiq.options[:average_scheduled_poll_interval] = 10
Sidekiq.configure_server do |config|
config.redis = {url: ENV["REDIS_URL"]}
Rails.logger = Sidekiq.logger
ActiveRecord::Base.logger = Sidekiq.logger
schedule_file = "config/schedule.yml"
if File.exist?(schedule_file)
Rails.application.config.after_initialize do
Sidekiq::Cron::Job.load_from_hash! YAML.load_file(schedule_file)
end
end
end
Sidekiq.configure_client do |config|
config.redis = {url: ENV["REDIS_URL"]}
end
[config/schedule.yml]
my_bg_job:
cron: "* */1 * * * *"
class: "MyBgJob"
[Gemfile]
...
gem 'sidekiq'
gem 'sidekiq-cron'
...
|
[
"I had the same problem.\nPlease try upgrading your current sidekiq-cron version to 1.9.0.\nVersion 1.9.0 is compatible with sidekiq7.0.\nI solved the problem by upgrading the version of sidekiq-cron.\nHere is the link to the pull request for version 1.9.0 of sidekiq-cron.\n(https://github.com/sidekiq-cron/sidekiq-cron/pull/369/files)\n"
] |
[
0
] |
[] |
[] |
[
"ruby",
"ruby_on_rails"
] |
stackoverflow_0074644253_ruby_ruby_on_rails.txt
|
Q:
The given origin is not allowed for the given client ID (GSI)
I was refactoring my "Sign in with Google" by replacing gapi with gsi on http://localhost:8080.
How can gapi work without problems while gsi claims that "The given origin is not allowed for the given client ID"?
gapi
<script src="https://apis.google.com/js/api:client.js" async defer></script>
window.gapi.load('auth2', () => {
const auth2 = window.gapi.auth2.init({ client_id })
auth2.signIn().then(console.log)
})
gsi
<script src="https://accounts.google.com/gsi/client" async defer></script>
<div id="g_id_onload"
:data-client_id="client_id"
data-login_uri="http://localhost:8080"
data-auto_prompt="false">
</div>
<div class="g_id_signin"
data-type="standard"
data-size="large"
data-theme="outline"
data-text="sign_in_with"
data-shape="rectangular"
data-logo_alignment="left">
</div>
Errors out with: The given origin is not allowed for the given client ID
A:
I added origin without port to fix this issue.
Key Point: Add both http://localhost and http://localhost:<port_number> to the Authorized JavaScript origins box for local tests or development.
Source: https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid
A:
This can also happen if your server has Referrer-Policy set to no-referrer. Google requires this HTTP header or else requests to https://accounts.google.com/gsi/button and https://accounts.google.com/gsi/iframe/select will respond with 400 and produce that error
If using helmet - the following config will fix the request
referrerPolicy: {
policy: 'strict-origin-when-cross-origin'
}
MDN article for Referrer-Policy
A:
Found a solution while testing this as I was encountering the following error messages related to this one despite adding the site domain to the Authorised JavaScript origins:
Failed to load resource: the server responded with a status of 403 ()
[GSI_LOGGER]: The given origin is not allowed for the given client ID.
If you're using a domain name that is not localhost, make sure that you are accessing your site via https (using a self-signed certificate if a certificate is not yet available / if testing in your non-prod environment).
A:
I faced the same issue. For some reason, none of the solutions discussed here did it for me.
I got it solved by adding these to the Authorized Origins in the Google cloud console
http://localhost
http://localhost:<port_number>
A:
A comment from @Behzad that deserves its own answer:
For some reason, 127.0.0.1 will not work, even if you register it in your Google Dashboard.
But as soon as you use localhost instead, it starts working.
I was using Live Server for VSCode, and had to look up how to change the host it was serving on.
Go to Code > Preferences > Settings > Live Server Config > Host and change it to localhost.
Then make sure localhost is registered in your Google Dashboard:
A:
For my live app I needed both https://example.com and https://www.example.com to be set as origins
|
The given origin is not allowed for the given client ID (GSI)
|
I was refactoring my "Sign in with Google" by replacing gapi with gsi on http://localhost:8080.
How can gapi work without problems while gsi claims that "The given origin is not allowed for the given client ID"?
gapi
<script src="https://apis.google.com/js/api:client.js" async defer></script>
window.gapi.load('auth2', () => {
const auth2 = window.gapi.auth2.init({ client_id })
auth2.signIn().then(console.log)
})
gsi
<script src="https://accounts.google.com/gsi/client" async defer></script>
<div id="g_id_onload"
:data-client_id="client_id"
data-login_uri="http://localhost:8080"
data-auto_prompt="false">
</div>
<div class="g_id_signin"
data-type="standard"
data-size="large"
data-theme="outline"
data-text="sign_in_with"
data-shape="rectangular"
data-logo_alignment="left">
</div>
Errors out with: The given origin is not allowed for the given client ID
|
[
"I added origin without port to fix this issue.\n\nKey Point: Add both http://localhost and http://localhost:<port_number> to the Authorized JavaScript origins box for local tests or development.\n\nSource: https://developers.google.com/identity/gsi/web/guides/get-google-api-clientid\n",
"This can also happen if your server has Referrer-Policy set to no-referrer. Google requires this HTTP header or else requests to https://accounts.google.com/gsi/button and https://accounts.google.com/gsi/iframe/select will respond with 400 and produce that error\nIf using helmet - the following config will fix the request\nreferrerPolicy: {\n policy: 'strict-origin-when-cross-origin'\n}\n\nMDN article for Referrer-Policy\n",
"Found a solution while testing this as I was encountering the following error messages related to this one despite adding the site domain to the Authorised JavaScript origins:\n\nFailed to load resource: the server responded with a status of 403 ()\n\n\n[GSI_LOGGER]: The given origin is not allowed for the given client ID.\n\nIf you're using a domain name that is not localhost, make sure that you are accessing your site via https (using a self-signed certificate if a certificate is not yet available / if testing in your non-prod environment).\n",
"I got faced with same issue. For some reason, none of the solutions discussed here did it for me.\nI got it solved by adding these to the Authorized Origins in the Google cloud console\n\nhttp://localhost\nhttp://localhost:<port_number>\n\n",
"A comment from @Behzad that deserves it's own answer:\nFor some reason, 127.0.0.1 will not work, even if you register it in your Google Dashboard.\nBut as soon as you use localhost instead, it starts working.\nI was using Live Server for VSCode, and had to look up how to change the host it was serving on.\nGo to Code > Preferences > Settings > Live Server Config > Host and change it to local host.\n\nThen make sure localhost is registered in your Google Dashboard:\n\n",
"For my live app I needed both https://example.com and https://www.example.com to be set as origins\n"
] |
[
149,
20,
5,
3,
2,
0
] |
[] |
[] |
[
"google_signin",
"javascript"
] |
stackoverflow_0068438293_google_signin_javascript.txt
|
Q:
Multiple Kafka Streams with different topics ApplicationId
Given a single component which can have multiple instances, and the following structure:
Stream1[Topic1, destination1]
Stream2[Topic2, destination2]
where destination is a Queue and all the links will be 1:1.
Do we need to set the same applicationId for each KafkaStream?
It is known that applicationId will generate client.id and group.id which are important for how the partitions are assigned.
Couldn't find anything in the official documentation.
A:
You can run two applications in the same JVM process with separate threads for starting both topologies, or you can simply run two independent JVM processes. In both cases, use different ids.
Or you can run one process (one id), subscribe to both topics, but use branch operator to separate streams by topic names.
https://www.confluent.io/blog/putting-events-in-their-place-with-dynamic-routing/
https://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#stateless-transformations
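Kafka Streams itself is a JVM library, but as a rough sketch of the "different ids" idea, here are two independent Python consumers using the confluent-kafka package (the broker address and group ids are my assumptions, not from the question); each gets its own group.id, so partition assignment for Topic1 and Topic2 never interferes:
from confluent_kafka import Consumer

def make_consumer(group_id, topic):
    # A distinct group.id per "stream" keeps offsets and assignments separate
    c = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": group_id,
        "auto.offset.reset": "earliest",
    })
    c.subscribe([topic])
    return c

stream1 = make_consumer("app-stream1", "Topic1")  # hypothetical ids
stream2 = make_consumer("app-stream2", "Topic2")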
|
Multiple Kafka Streams with different topics ApplicationId
|
Given a single component which can have multiple instances, and the following structure:
Stream1[Topic1, destination1]
Stream2[Topic2, destination2]
where destination is a Queue and all the links will be 1:1.
Do we need to set the same applicationId for each KafkaStream?
It is known that applicationId will generate client.id and group.id which are important for how the partitions are assigned.
Couldn't find anything in the official documentation.
|
[
"You can run two applications in the same JVM process with separate threads for starting both topologies, or you can simply run two independent JVM processes. Both cases, use different ids.\nOr you can run one process (one id), subscribe to both topics, but use branch operator to separate streams by topic names.\n\nhttps://www.confluent.io/blog/putting-events-in-their-place-with-dynamic-routing/\nhttps://docs.confluent.io/platform/current/streams/developer-guide/dsl-api.html#stateless-transformations\n\n"
] |
[
0
] |
[] |
[] |
[
"apache_kafka",
"apache_kafka_streams"
] |
stackoverflow_0074658750_apache_kafka_apache_kafka_streams.txt
|
Q:
could not init postgres with a dump file
just starting testcontainers. I love the idea. thanks for investing in this project.
I am trying to create a simple postgres 14.5 container (and succeeded) and now I am trying to populate it using the .withInitScript() method.
the file I am feeding into the init method is a dump I created with pg_dumpall.
testcontainers fails for many parsing/validation reasons. each time I delete a portion and another reason pops up.
should I be able to successfully use the withInitScript with pg_dump files?
BTW, using pg_dump for my main DB also has many similar issues.
thanks!
A:
Try copying the script to the container so postgres will execute it. Although the comment "BTW, using pg_dump for my main DB also has many similar issues." makes me wonder if it will work, because it also fails when you are using the database directly, if I understood correctly.
new PostgreSQLContainer("postgres:14.5")
.withCopyFileToContainer(
MountableFile.forClasspathResource("init.sql"),
"/docker-entrypoint-initdb.d/init.sql"
);
We recommend to use liquibase or flyway to manage database changes.
A:
hi and thanks for the help
I have managed to make things work by stripping some things from the sql dump and using the copyFileToContainer
thanks
|
could not init postgres with a dump file
|
just starting testcontainers. I love the idea. thanks for investing in this project.
I am trying to create a simple postgres 14.5 container (and succeeded) and now I am trying to populate it using the .withInitScript() method.
the file I am feeding into the init method is a dump I created with pg_dumpall.
testcontainers fails for many parsing/validation reasons. each time I delete a portion and another reason pops up.
should I be able to successfully use the withInitScript with pg_dump files?
BTW, using pg_dump for my main DB also has many similar issues.
thanks!
|
[
"Try copying the script to the container so postgres will execute. Although this comment BTW, using pg_dump for my main DB also has many similar issues. makes me wonder if it will work because it also fails when you are using the database directly if I understood correctly.\nnew PostgreSQLContainer(\"postgres:14.5\")\n .withCopyFileToContainer(\n MountableFile.forClasspathResource(\"init.sql\"), \n \"/docker-entrypoint-initdb.d/init.sql\"\n );\n\nWe recommend to use liquibase or flyway to manage database changes.\n",
"hi and thanks for the help\nI have managed to make things work by stripping some things from the sql dump and using the copyFileToContainer\nthanks\n"
] |
[
1,
0
] |
[] |
[] |
[
"testcontainers"
] |
stackoverflow_0074645915_testcontainers.txt
|
Q:
Competitive programming header file
In the book "Competitive programming 3", by the Halim brothers, it is stated that it would be best to insert all your macro's, includes and typedefs in a seperate file called 'competitive.h'. Then all you would have to do is include 'competitive.h' when you start coding. As I see it, this would work on my machine, but since I can only submit 1 file at once to a judge, it wouldn't work there.
Is there any way I would go about doing this in C++?
Thanks.
A:
Just run your c++ file through the c preprocessor using:
cpp myfile.cpp > myfileprocessed.cpp
to embed any included headers into the file directly.
EDIT:
Sorry, just noticed another similar answer was posted at the same time, shall leave this here just as it highlights both ways of invoking the preprocessor.
A:
I don't think that's a good approach. gcc -E can sometimes output a very large and quite confusing file. You should find a decent text editor which supports inserting skeleton code. Here's how to do it with Vim:
How can I automatically add some skeleton code when creating a new file with vim
It might be nice to hide the skeleton part with a kind of "folding" feature of your editor.
http://usevim.com/2012/08/31/vim101-folding/
I think Emacs supports both of them. No idea for Sublime.
The #include approach is not flexible in that you cannot easily modify some part of the codes in the common file specifically for only one of the problems set.
A:
#include<bits/stdc++.h>
This header file includes the most frequently used header files. But it's a bit slow since it includes a lot of header files.
In competitive programming, I generally use this header file for simplicity.
But for faster IO you can add the code.
ios_base::sync_with_stdio(0) ; cin.tie(0) ; cout.tie(0) ;
A:
I would do a
#ifdef LOCAL
#include "competitive.h"
#else
#include <bits/stdc++.h>
#endif
This way, only on your local machine does it run
#include "competitive.h"
Make sure to add
-DLOCAL
to your compiler flags.
A:
You can use #ifdef and #ifndef to solve this problem. Code inside an #ifdef block is only compiled when the corresponding symbol is defined. In the code below, competitive.h is included only when the program is built on your local machine; when you submit online, the judge defines ONLINE_JUDGE and <bits/stdc++.h> is included instead.
#ifndef ONLINE_JUDGE
#include "competitive.h"
#endif
#ifdef ONLINE_JUDGE
#include<bits/stdc++.h>
#endif
|
Competitive programming header file
|
In the book "Competitive programming 3", by the Halim brothers, it is stated that it would be best to insert all your macro's, includes and typedefs in a seperate file called 'competitive.h'. Then all you would have to do is include 'competitive.h' when you start coding. As I see it, this would work on my machine, but since I can only submit 1 file at once to a judge, it wouldn't work there.
Is there any way I would go about doing this in C++?
Thanks.
|
[
"Just run your c++ file through the c preprocessor using:\ncpp myfile.cpp > myfileprocessed.cpp\n\nto embed any included headers into the file directly. \nEDIT:\nSorry, just noticed another similar answer was posted at the same time, shall leave this here just as it highlights both ways of invoking the preprocessor.\n",
"I don't think that's a good approach. gcc -E can sometimes output a very large file and quite confusing. You should find a decent text editor which supports insertion of a skeleton code. Here's how to do it with Vim:\n\nHow can I automatically add some skeleton code when creating a new file with vim\n\nIt might be nice to hide the skeleton part with a kind of \"folding\" feature of your editor.\n\nhttp://usevim.com/2012/08/31/vim101-folding/\n\nI think Emacs supports both of them. No idea for Sublime.\nThe #include approach is not flexible in that you cannot easily modify some part of the codes in the common file specifically for only one of the problems set.\n",
"#include<bits/stdc++.h>\n\nThis header file includes the most frequently used header files. But it's a bit slow since it includes a lot of header files.\nIn competitive programming, I generally use this header file for simplicity. \nBut for faster IO you can add the code.\nios_base::sync_with_stdio(0) ; cin.tie(0) ; cout.tie(0) ;\n\n",
"I would do a\n#ifdef LOCAL\n#include \"competitive.h\"\n#else\n#include <bits/stdc++.h>\n#endif\n\nThis way, only on your local machine does it run\n#include \"competitive.h\"\n\nMake sure to add\n-DLOCAL\n\nto your compiler flags.\n",
"you can use #ifdef & #ifndef to solve this problem. if you write something inside #ifdef and import some condition it will run that command only when the condition fulfil.In code mention below it will include competitive.h file only it will run in local machine and whenver you will submit it online it will not include that file.\n``\n#ifndef ONLINE_JUDGE\n#include \"competitive.h\"\n#endif\n#ifdef ONLINE_JUDGE\n#include<bits/stdc++.h>\n#endif\n\n"
] |
[
2,
2,
0,
0,
0
] |
[] |
[] |
[
"c++"
] |
stackoverflow_0025291522_c++.txt
|
Q:
Algorithm to align a rectangle when a point is dragged?
I am trying to implement an algorithm that will align a rectangle on an ellipse when a point is dragged.
The data I have is:
I know what corner is being dragged
the starting position of it
the ending position of it
My old algorithm was to align the adjacent corners, but that only worked if the ellipse or rectangle weren't at an angle.
I am trying to implement something like Figma does:
My current idea is to take the sides that were changed on drag and match the other sides that weren't changed to the size of the changed sides. Though I'm not sure if that's correct.
A:
Let the rectangle be described by a center point (CX, CY) and two unit direction vectors (WX, WY) and (HX, HY); W is the half-width and H is the half-height.
As far as I understand, rectangle slope is preserved, so direction vectors remain the same.
When corner number k is shifted, its new position is (NX, NY). The opposite vertex has number (k+2)%4, and its position (PX, PY) does not change.
New center is
CX' = (PX + NX) / 2
CY' = (PY + NY) / 2
New half-width and half-height
W' = 0.5 * Abs(WX * (NX - PX) + WY * (NY - PY))
H' = 0.5 * Abs(HX * (NX - PX) + HY * (NY - PY))
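A minimal Python sketch of these formulas (the function and argument names are mine, not from the answer):
def resize_rect(cx, cy, wx, wy, hx, hy, nx, ny, px, py):
    # New center: midpoint of the dragged corner (nx, ny)
    # and the fixed opposite corner (px, py)
    cx2 = (px + nx) / 2
    cy2 = (py + ny) / 2
    # Project the diagonal onto the unchanged direction vectors
    # to get the new half-extents
    w2 = 0.5 * abs(wx * (nx - px) + wy * (ny - py))
    h2 = 0.5 * abs(hx * (nx - px) + hy * (ny - py))
    return cx2, cy2, w2, h2

# Axis-aligned example: opposite corner at (0, 0), corner dragged to (6, 4)
print(resize_rect(0, 0, 1, 0, 0, 1, 6, 4, 0, 0))  # (3.0, 2.0, 3.0, 2.0)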
|
Algorithm to align a rectangle when a point is dragged?
|
I am trying to implement an algorithm that will align a rectangle on an ellipse when a point is dragged.
The data I have is:
I know what corner is being dragged
the starting position of it
the ending position of it
My old algorithm was to align the adjacent corners, but that only worked if the ellipse or rectangle weren't at an angle.
I am trying to implement something like Figma does:
My current idea is to take the sides that were changed on drag and match the other sides that weren't changed to the size of the changed sides. Though I'm not sure if that's correct.
|
[
"Let rectangle is described by center point (CX, CY) and two unit direction vectors (WX, WY) and (HX, HY), also W is half-width, H is half-height.\nAs far as I understand, rectangle slope is preserved, so direction vectors remain the same.\nWhen corner number k was shifted, it's new position is (NX, NY). Opposite vertex has number (k+2)%4 and it's position is (PX, PY) (doesn't change)\nNew center is\nCX' = (PX + NX) / 2\nCY' = (PY + NY) / 2\n\nNew half-width and half-height\nW' = 0.5 * Abs(WX * (NX - PX) + WY * (NY - PY))\nH' = 0.5 * Abs(HX * (NX - PX) + HY * (NY - PY))\n\n"
] |
[
1
] |
[] |
[] |
[
"algorithm",
"geometry"
] |
stackoverflow_0074657149_algorithm_geometry.txt
|
Q:
Azure - Restrict Role Assignments to Managed Identities and Service Principals
Our Azure engineers need to be able to manage the identity and permissions used to run the software they deploy to the cloud.
However, granting them the ability to assign RBAC roles also allows them to assign permissions for any AD User or Group--not just system identities (Managed Identities, Service Principals).
How can I configure Azure to allow engineers to grant permissions for their software to operate but prevent them from granting permissions to other AD entities?
A:
Currently it is not possible to limit the scope or selection of principals (users or services/applications) that can be assigned roles. Usually, developers are given up to the Contributor role, which gives them access to almost all management features except user access, while selected users are given the User Access Administrator role.
|
Azure - Restrict Role Assignments to Managed Identities and Service Principals
|
Our Azure engineers need to be able to manage the identity and permissions used to run the software they deploy to the cloud.
However, granting them the ability to assign RBAC roles also allows them to assign permissions for any AD User or Group--not just system identities (Managed Identities, Service Principals).
How can I configure Azure to allow engineers to grant permissions for their software to operate but prevent them from granting permissions to other AD entities?
|
[
"Currently is not possible to limit the scope or selection of principals (users or services/applications) to be assigned roles. Usually, developers are given up to the Contributor role which give them access to almost all management features but user access while selected users are given the User Access Administrator role.\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_active_directory",
"azure_rbac"
] |
stackoverflow_0074380943_azure_azure_active_directory_azure_rbac.txt
|
Q:
How to Answer Subjective/descriptive types of Questions using BERT Model?
I am trying to implement a BERT model for question answering tasks, but it's a little different from the existing Q&A models.
The model will be given some text (3-4 pages) and will be asked questions based on the text, and the expected answer may be a short one or a descriptive, subjective one.
I tried to implement BERT, for this task.
The Problems I am facing:
The input token limit for BERT is 512.
How to get the answer in long form, which can describe any instance, process, event, etc.
A:
Try longformer which can have input length of 4096 tokens, or even 16384 tokens with gradient checkpointing. See details in https://github.com/allenai/longformer. Or on huggingface model hub https://huggingface.co/docs/transformers/model_doc/longformer.
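A minimal sketch with the Hugging Face transformers pipeline (assuming the transformers package is installed; the TriviaQA-finetuned checkpoint below is one public Longformer QA model, not something prescribed by this thread):
from transformers import pipeline

# Longformer accepts up to 4096 tokens, so a 3-4 page passage can fit in one pass
qa = pipeline(
    "question-answering",
    model="allenai/longformer-large-4096-finetuned-triviaqa",
)

context = "... your 3-4 pages of source text ..."
result = qa(question="What process is described?", context=context)
print(result["answer"], result["score"])

Note that extractive QA like this returns short spans; for long, descriptive answers you would typically pair retrieval over the document with a generative model instead.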
|
How to Answer Subjective/descriptive types of Questions using BERT Model?
|
I am trying to implement a BERT model for question answering tasks, but it's a little different from the existing Q&A models.
The model will be given some text (3-4 pages) and will be asked questions based on the text, and the expected answer may be a short one or a descriptive, subjective one.
I tried to implement BERT, for this task.
The Problems I am facing:
The input token limit for BERT is 512.
How to get the answer in long form, which can describe any instance, process, event, etc.
|
[
"Try longformer which can have input length 0f 4096 tokens, or even 16384 tokens with gradient checkpointing. See details in https://github.com/allenai/longformer. Or on huggingface model hub https://huggingface.co/docs/transformers/model_doc/longformer.\n"
] |
[
0
] |
[] |
[] |
[
"bert_language_model",
"gpt_2",
"huggingface_transformers",
"nlp",
"transformer_model"
] |
stackoverflow_0074654341_bert_language_model_gpt_2_huggingface_transformers_nlp_transformer_model.txt
|
Q:
Show h1/img elements when scrolled into viewport
I have a site where the h1 tag and an image load in when I scroll to them. I have the css set to load an animation on the tags when they load, so I really don't want them to load before they are visible.
I have it working perfectly on desktop/laptop, but on mobile the elements are just loaded automatically with everything else, and the animations don't have a chance to work. The console logs that I call show that the window.scrollY is only returning "0".
import React, { useEffect, useState } from 'react';
import Headshot from '../../../assets/images/about/Headshot';
const About = () => {
const [isVisible, setIsVisible] = useState(true);
useEffect(() => {
document.addEventListener("touchmove", listenToScroll);
window.addEventListener("scroll", listenToScroll);
return () => {
document.addEventListener("touchmove", listenToScroll);
window.removeEventListener("scroll", listenToScroll);
}
}, [])
const listenToScroll = () => {
const homeHeight = document.getElementById('Home').clientHeight;
const folioHeight = document.getElementById('Portfolio').clientHeight;
const skillsHeight = document.getElementById('Skills').clientHeight;
let heightToShow;
let vh = window.innerHeight;
if (homeHeight > vh + 100) {
heightToShow = homeHeight - vh + folioHeight + skillsHeight;
} else {
heightToShow = 100 + folioHeight + skillsHeight;
}
const winScroll = window.scrollY;
console.log("winScroll: " + winScroll);
console.log("heightToShow: "+ heightToShow);
console.log("wS > hTS: " + (winScroll > heightToShow));
if (winScroll > heightToShow) {
isVisible && setIsVisible(true);
} else {
setIsVisible(false);
}
};
return (
<>
<div className='container aboutContainer' id="About">
{ isVisible ? (
<>
<h1 className="aboutH1">This is Me</h1>
<div className="headshot">
<Headshot />
<img
src="/assets/images/about/headshot.webp"
alt=""
id="headshotImg"
/>
</div>
</>
) : ""}
</div>
</>
);
}
export default About
If there's a simpler solution, I am certainly open to it, but please don't just tell me "use this library, and put the tags in. It'll take care of it."
The point of this exercise is that I am trying to learn how to do it, so that I can tell if a library is a good choice for myself later.
A:
Better to do it with an Intersection Observer: https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API
A:
Many thanks to jorgepelcastre for pointing me in the right direction. After an entire day spent chasing ghosts to solve this, his solution fixed it in under 10 minutes, without just dumping a bloated library on me.
For anyone interested in the working solution based on his help, I have posted the updated code below:
import React, { useEffect, useRef, useState } from 'react';
import Headshot from '../../../assets/images/about/Headshot';
const About = () => {
const containerRef = useRef(null);
const [isVisible, setIsVisible] = useState(false);
const callbackFunction = (entries) => {
const [entry] = entries;
setIsVisible(entry.isIntersecting);
}
const options = {
root: null,
rootMargin: "0px",
threshold: 1.0
}
useEffect(() => {
const observer = new IntersectionObserver(callbackFunction, options);
if (containerRef.current) observer.observe(containerRef.current);
return () => {
if (containerRef.current) observer.unobserve(containerRef.current);
}
}, [containerRef, options])
return (
<>
<div className='container aboutContainer' id="About">
{ isVisible ? (
<>
<h1 className="aboutH1">This is Me</h1>
<div className="headshot">
<Headshot />
<img
src="/assets/images/about/headshot.webp"
alt="Brian Quinney - Programmer, Geek, Lifelong Learner"
id="headshotImg"
/>
</div>
</>
) : ""}
<div className="venn">
<h2 ref={ containerRef }>Who am I?</h2>
</div>
</div>
</>
);
}
export default About
Note the ref={ containerRef } in the h2 tag below the conditional rendering. The detailed description of how it works can be found in the link jorgepelcastre provided. The long and short of it is that the code checks if the tag with the 'ref' prop is fully in view. If so, it renders the conditional section, including running any attached animations.
|
Show h1/img elements when scrolled into viewport
|
I have a site where the h1 tag and an image load in when I scroll to them. I have the css set to load an animation on the tags when they load, so I really don't want them to load before they are visible.
I have it working perfectly on desktop/laptop, but on mobile the elements are just loaded automatically with everything else, and the animations don't have a chance to work. The console logs that I call show that the window.scrollY is only returning "0".
import React, { useEffect, useState } from 'react';
import Headshot from '../../../assets/images/about/Headshot';
const About = () => {
const [isVisible, setIsVisible] = useState(true);
useEffect(() => {
document.addEventListener("touchmove", listenToScroll);
window.addEventListener("scroll", listenToScroll);
return () => {
document.addEventListener("touchmove", listenToScroll);
window.removeEventListener("scroll", listenToScroll);
}
}, [])
const listenToScroll = () => {
const homeHeight = document.getElementById('Home').clientHeight;
const folioHeight = document.getElementById('Portfolio').clientHeight;
const skillsHeight = document.getElementById('Skills').clientHeight;
let heightToShow;
let vh = window.innerHeight;
if (homeHeight > vh + 100) {
heightToShow = homeHeight - vh + folioHeight + skillsHeight;
} else {
heightToShow = 100 + folioHeight + skillsHeight;
}
const winScroll = window.scrollY;
console.log("winScroll: " + winScroll);
console.log("heightToShow: "+ heightToShow);
console.log("wS > hTS: " + (winScroll > heightToShow));
if (winScroll > heightToShow) {
isVisible && setIsVisible(true);
} else {
setIsVisible(false);
}
};
return (
<>
<div className='container aboutContainer' id="About">
{ isVisible ? (
<>
<h1 className="aboutH1">This is Me</h1>
<div className="headshot">
<Headshot />
<img
src="/assets/images/about/headshot.webp"
alt=""
id="headshotImg"
/>
</div>
</>
) : ""}
</div>
</>
);
}
export default About
If there's a simpler solution, I am certainly open to it, but please don't just tell me "use this library, and put the tags in. It'll take care of it."
The point of this exercise is that I am trying to learn how to do it, so that I can tell if a library is a good choice for myself later.
|
[
"better you do it with intersection observer, https://developer.mozilla.org/en-US/docs/Web/API/Intersection_Observer_API\n",
"Many thanks to jorgepelcastre for pointing me in the right direction. After an entire day spent chasing ghosts to solve this, his solution fixed it in under 10 minutes, without just dumping a bloated library on me.\nFor anyone interested in the working solution based on his help, I have posted the updated code below:\nimport React, { useEffect, useState } from 'react';\nimport Headshot from '../../../assets/images/about/Headshot';\n\nconst About = () => {\n const containerRef = useRef(null);\n const [isVisible, setIsVisible] = useState(false);\n\n const callbackFunction = (entries) => {\n const [entry] = entries;\n setIsVisible(entry.isIntersecting);\n }\n const options = {\n root: null,\n rootMargin: \"0px\",\n threshold: 1.0\n }\n\n useEffect(() => {\n const observer = new IntersectionObserver(callbackFunction, options);\n if (containerRef.current) observer.observe(containerRef.current);\n\n return () => {\n if (containerRef.current) observer.unobserve(containerRef.current);\n }\n }, [containerRef, options])\n\n return ( \n <>\n <div className='container aboutContainer' id=\"About\">\n { isVisible ? (\n <>\n <h1 className=\"aboutH1\">This is Me</h1>\n <div className=\"headshot\">\n <Headshot />\n <img \n src=\"/assets/images/about/headshot.webp\" \n alt=\"Brian Quinney - Programmer, Geek, Lifelong Learner\" \n id=\"headshotImg\"\n />\n </div>\n </>\n ) : \"\"}\n <div className=\"venn\">\n <h2 ref={ containerRef }>Who am I?</h2>\n </div>\n </div>\n </>\n );\n}\n\nexport default About\n\nNote the ref={ containerRef } in the h2 tag below the conditional rendering. The detailed description of how it works can be found in the link jorgepelcastre provided. The long and short of it is that the code checks if the tag with the 'ref' prop is fully in view. If so, it renders the conditional section, including running any attached animations.\n"
] |
[
2,
0
] |
[] |
[] |
[
"css",
"javascript",
"reactjs"
] |
stackoverflow_0074649807_css_javascript_reactjs.txt
|
Q:
Refreshing an access token received from Google Identity Services / React App
I have a trouble right now with an access token received from Google Identity Services.
Some details about the case: I have a full stack application, with a back end based on Spring/Webflux/Hibernate-Reactive and a front end based on React. I'm using the Google login feature from Google Identity Services with the React library @react-oauth/google.
I'm using the received "credential" after a successful login for back-end access. Everything works as expected except that there is no refresh token in the response after a successful login. The token expires after 1 hour and the user must be prompted to log in again to receive a new token, which is horrible!
So, how do I refresh this token? Any ideas?
I could not find more info on Google's side, that's why I am writing here.
A:
So I found the solution by myself. I will post it here, hoping to help someone else who is struggling with this problem.
So, using the React library @react-oauth/google, I used the useGoogleLogin hook and added "flow: 'auth-code'" to the function's options object.
const login = useGoogleLogin({
onSuccess: codeResponse => console.log(codeResponse),
flow: 'auth-code',
});
The function is triggered by click on simple button.
After a successful login from the user, we can find a code property in the response object. We can exchange the code for access, refresh, and ID tokens by calling the Google OAuth2 API:
curl --location --request POST 'https://oauth2.googleapis.com/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=your_client_id' \
--data-urlencode 'client_secret=your_client_secret' \
--data-urlencode 'code=recieved_code_after_login' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'redirect_uri=one of your redirect uri's listed in your
credential'
After a successful request, the access, refresh, and ID tokens are received.
Refreshing the token is also simple:
curl --location --request POST 'https://oauth2.googleapis.com/token' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'client_id=your_client_id' \
--data-urlencode 'client_secret=your_client_secret' \
--data-urlencode 'grant_type=refresh_token' \
--data-urlencode 'refresh_token=received_refresh_token'
Here is the original Google documentation: https://developers.google.com/identity/protocols/oauth2/web-server#httprest_3
!important!
Remember that the refresh token is valid until access is revoked. When you refresh the tokens, a new refresh token does not come with the response. For further refreshes, you can keep using the same refresh token received from the original exchange.
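For programmatic use, here is the same pair of calls as a minimal Python sketch (assuming the requests package; the endpoint and parameters are exactly those of the curl commands above):
import requests

TOKEN_URL = "https://oauth2.googleapis.com/token"

def exchange_code(client_id, client_secret, code, redirect_uri):
    # Trade the one-time auth code for access, refresh and ID tokens
    resp = requests.post(TOKEN_URL, data={
        "client_id": client_id,
        "client_secret": client_secret,
        "code": code,
        "grant_type": "authorization_code",
        "redirect_uri": redirect_uri,
    })
    resp.raise_for_status()
    return resp.json()

def refresh_access_token(client_id, client_secret, refresh_token):
    # Reuse the same refresh token for every refresh; no new one is issued
    resp = requests.post(TOKEN_URL, data={
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "refresh_token",
        "refresh_token": refresh_token,
    })
    resp.raise_for_status()
    return resp.json()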
|
Refreshing an access token received from Google Identity Services / React App
|
I am having trouble right now with an access token received from Google Identity Services.
Some details about the case: I have a full stack application, with a back end based on Spring/Webflux/Hibernate-Reactive and a front end based on React. I'm using the Google login feature from Google Identity Services with the React library @react-oauth/google.
I'm using the received "credential" after a successful login for back-end access. Everything works as expected except that there is no refresh token in the response after a successful login. The token expires after 1 hour and the user must be prompted to log in again to receive a new token, which is horrible!
So, how do I refresh this token? Any ideas?
I could not find more info on Google's side, that's why I am writing here.
|
[
"So I found the solution by myself. I will post it here, in hoping to help someone else who is struggling with this problem.\nSo using the react library @react-oauth/google I used the useGoogleLogin hook. I added \"flow: 'auth-code'\" to function's options object.\nconst login = useGoogleLogin({\n onSuccess: codeResponse => console.log(codeResponse),\n flow: 'auth-code',\n});\n\nThe function is triggered by click on simple button.\nAfter successful login from the user, in the response object we can find a code property. We can exchange the code for an access,refresh and id token by calling the google oauth2 api:\ncurl --location --request POST 'https://oauth2.googleapis.com/token' \\\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--data-urlencode 'client_id=your_client_id' \\\n--data-urlencode 'client_secret=your_client_secret' \\\n--data-urlencode 'code=recieved_code_after_login' \\\n--data-urlencode 'grant_type=authorization_code' \\\n--data-urlencode 'redirect_uri=one of your redirect uri's listed in your \ncredential'\n\nafter successful request access,refresh and id token are received.\nrefreshing the token also so simple:\ncurl --location --request POST 'https://oauth2.googleapis.com/token' \\\n--header 'Content-Type: application/x-www-form-urlencoded' \\\n--data-urlencode 'client_id=your_client_id' \\\n--data-urlencode 'client_secret=your_client_secret' \\\n--data-urlencode 'grant_type=refresh_token' \\\n--data-urlencode 'refresh_token=received_refresh_token'\n\nHere is the original Google documentation: https://developers.google.com/identity/protocols/oauth2/web-server#httprest_3\n!important!\nRemember that the refresh is valid until access is revoked. When you refresh the tokens, a new refresh token is not coming with the response. For further refreshes, you can use the same refresh token, receive by exchange.\n"
] |
[
2
] |
[] |
[] |
[
"access_token",
"google_identity",
"jwt",
"reactjs",
"refresh_token"
] |
stackoverflow_0074620337_access_token_google_identity_jwt_reactjs_refresh_token.txt
|
Q:
Getting Error Data should be a "String", "Array of arrays" OR "Array of objects" when trying to export data to CSV in reactJS
I want to download some data that I have from my firebase firestore DB that I have listed in a table.
I am adding the data that is coming from my firestore in order to export to CSV and have a complete viewable file in my admin dashboard
But every time I try to follow the steps to download the data and export them to CSV format I get this error: "Data should be a "String", "Array of arrays" OR "Array of objects"
here is my code:
import { CSVLink } from 'react-csv';
const [data, setData] = useState([]);
const [csvData, setcsvData] = useState([]);
const list = []
const csvList = []
useEffect(() => {
firebase.firestore().collection("Users").get().then((userSnapshot) => {
userSnapshot.forEach((doc) => {
const {powerAccount,first_name,registerDate,email,company,country,phone} = doc.data();
setID(doc.data().usersID)
list.push({
usersID:doc.id,
powerAccount:powerAccount,
first_name:first_name,
registerDate:registerDate,
email:email,
company:company,
country:country,
phone:phone,
});
const userData = {
usersID: doc.id,
powerAccount: powerAccount,
first_name: first_name,
registerDate: registerDate,
email: email,
company: company,
country: country,
phone: phone,
};
const headers = [
{ label: 'Account', key: powerAccount },
{ label: 'Name', key: first_name },
{ label: 'RegistrationDate', key: registerDate },
{ label: 'Email', key: email },
{ label: 'Company', key: company },
{ label: 'Country', key: country },
{ label: 'Phone', key: phone },
];
const csvReport = {
filename: "userReport.csv",
headers: headers,
data: userData
}
csvList.push(csvReport)
});
setData(list);
setcsvData(csvList)
});
},[]);
return (
<CSVLink {...csvData} >
Export
</CSVLink>
)
A:
I fixed this error by adding a conditional wrapper around my <CSVLink> so that it didn't try to create that component before the data was loaded.
So, for your example, something like this could do the trick:
{csvData.length > 0 && (
<CSVLink {...csvData} >
Export
</CSVLink>
)}
|
Getting Error Data should be a "String", "Array of arrays" OR "Array of objects" when trying to export data to CSV in reactJS
|
I want to download some data that I have from my firebase firestore DB that I have listed in a table.
I am adding the data that is coming from my firestore in order to export to CSV and have a complete viewable file in my admin dashboard
But every time I try to follow the steps to download the data and export them to CSV format I get this error: "Data should be a "String", "Array of arrays" OR "Array of objects"
here is my code:
import { CSVLink } from 'react-csv';
const [data, setData] = useState([]);
const [csvData, setcsvData] = useState([]);
const list = []
const csvList = []
useEffect(() => {
firebase.firestore().collection("Users").get().then((userSnapshot) => {
userSnapshot.forEach((doc) => {
const {powerAccount,first_name,registerDate,email,company,country,phone} = doc.data();
setID(doc.data().usersID)
list.push({
usersID:doc.id,
powerAccount:powerAccount,
first_name:first_name,
registerDate:registerDate,
email:email,
company:company,
country:country,
phone:phone,
});
const userData = {
usersID: doc.id,
powerAccount: powerAccount,
first_name: first_name,
registerDate: registerDate,
email: email,
company: company,
country: country,
phone: phone,
};
const headers = [
{ label: 'Account', key: powerAccount },
{ label: 'Name', key: first_name },
{ label: 'RegistrationDate', key: registerDate },
{ label: 'Email', key: email },
{ label: 'Company', key: company },
{ label: 'Country', key: country },
{ label: 'Phone', key: phone },
];
const csvReport = {
filename: "userReport.csv",
headers: headers,
data: userData
}
csvList.push(csvReport)
});
setData(list);
setcsvData(csvList)
});
},[]);
return (
<CSVLink {...csvData} >
Export
</CSVLink>
)
|
[
"I fixed this error by adding a conditional wrapper around my so that it didn't try to create that component before the data was loaded.\nSo, for your example, something like this could do the trick:\n{csvData && (\n <CSVLink {...csvData} >\n Export\n </CSVLink>\n)}\n\n"
] |
[
0
] |
[] |
[] |
[
"firebase",
"google_cloud_firestore",
"javascript",
"reactjs"
] |
stackoverflow_0070045996_firebase_google_cloud_firestore_javascript_reactjs.txt
|
Q:
With these constraints, what's a good hash algorithm for a small word in terms of uniqueness and speed?
I'm looking for an algorithm or a function that will hash a string of 3<=N<=8 non-duplicate alphabets (may have capital letters only, lower case only, or both) and will return a number that will serve as an index into an array of booleans (1 byte per item) later.
Not for cryptographic reasons, I need this to test if a string is duplicate while creating a tree of strings using a hash table in C. The idea is to make an array bool found[numberOfTotalPossibleHashes] = {false}; and you set the value found[hash(str)] for true when you're about to insert it in the tree, so that you check found[hash(str)] to verify if the value already exists in the tree or not. Check this video for a clearer explanation.
Therefore, let's avoid complex algorithms that are especially designed for security because they return long values. I just want zero or the least amount of collisions.
Since the maximum length is 8, and there are only 16 possible characters (8 lowercase + 8 capital; we can consider each character a hex value), the number of all the possible combinations (without duplicate letters) is 8! = 40320. I thought there is no need to use an algorithm that produces a long string like MD5 or SHA-256, and it's also going to be inefficient when I want to create an array of only 8! items, or a few thousand more if necessary. Not millions or billions.
I saw a similar question
Bit of a too general question, as you left out any constraints on the hash function, and/or what you're going to do with the hashes. (On a side note, hashing isn't an encoding) ...
Different answers in similar posts say that details are needed. So I thought that giving details would help.
A:
With the constraints you've provided, a good hash algorithm for a small word in terms of uniqueness and speed could be a simple modulo operation. For example, you could use the following algorithm:
Convert the string to a number by treating each character as a base-16 digit (the code below uses str[i] - 'A', so 'A' = 0, 'B' = 1, etc.)
Take the modulo of this number with the number of possible hashes (40320 in your case)
Use the resulting value as the index for the array of booleans
For example, "ABC" maps to ((0 * 16) + 1) * 16 + 2 = 18.
This algorithm has the advantage of being simple, fast, and producing a compact, well-distributed index for each input string; note it is not collision-free, since many strings share the same bucket. It also avoids the need for complex cryptographic algorithms, which may be slower and less efficient for your purposes.
Here is an example implementation in C:
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <stdbool.h> // for the bool array used in main
// Returns the hash value for the given string
int hash(char *str) {
size_t len = strlen(str);
unsigned int value = 0; // unsigned: eight base-16 digits can exceed INT_MAX
// Convert each character to a hexadecimal value
for (size_t i = 0; i < len; i++) {
value = value * 16 + (str[i] - 'A');
}
// Take the modulo with the number of possible hashes
return value % 40320;
}
int main() {
// Create an array of booleans to store the hashes
bool found[40320] = {false};
// Test the hash function with some input strings
char *str1 = "ABC";
char *str2 = "DEF";
char *str3 = "GHI";
int h1 = hash(str1);
int h2 = hash(str2);
int h3 = hash(str3);
printf("Hash value for %s: %d\n", str1, h1);
printf("Hash value for %s: %d\n", str2, h2);
printf("Hash value for %s: %d\n", str3, h3);
// Set the corresponding booleans to true
found[h1] = true;
found[h2] = true;
found[h3] = true;
// Check if a string already exists in the array
char *str4 = "ABC";
int h4 = hash(str4);
if (found[h4]) {
printf("%s already exists in the array\n", str4);
} else {
printf("%s does not exist in the array\n", str4);
}
return 0;
}
This code will output the following:
Hash value for ABC: 18
Hash value for DEF: 837
Hash value for GHI: 1656
ABC already exists in the array
You can modify this algorithm as needed to suit your specific requirements. For example, you could use a different number for the modulo operation, or add additional steps to the hashing process.
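As one concrete modification along those lines (my suggestion, going beyond the answer above): since the question notes there are only 8! = 40320 orderings of a fixed 8-symbol alphabet, ranking the string as a permutation (a Lehmer code) yields a collision-free index, at least for length-8 strings that use each symbol of one case exactly once. A Python sketch:
from math import factorial

ALPHABET = "ABCDEFGH"  # assumed fixed 8-symbol alphabet, one case

def lehmer_index(s):
    # Rank the permutation s of ALPHABET into 0 .. 8!-1, with no collisions
    symbols = sorted(ALPHABET)
    index = 0
    for i, ch in enumerate(s):
        pos = symbols.index(ch)
        index += pos * factorial(len(s) - 1 - i)
        symbols.pop(pos)
    return index

print(lehmer_index("ABCDEFGH"))  # 0, the first permutation
print(lehmer_index("HGFEDCBA"))  # 40319, the last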
|
With these constraints, what's a good hash algorithm for a small word in terms of uniqueness and speed?
|
I'm looking for an algorithm or a function that will hash a string of 3<=N<=8 non-duplicate alphabets (may have capital letters only, lower case only, or both) and will return a number that will serve as an index into an array of booleans (1 byte per item) later.
Not for cryptographic reasons, I need this to test if a string is duplicate while creating a tree of strings using a hash table in C. The idea is to make an array bool found[numberOfTotalPossibleHashes] = {false}; and you set the value found[hash(str)] for true when you're about to insert it in the tree, so that you check found[hash(str)] to verify if the value already exists in the tree or not. Check this video for a clearer explanation.
Therefore, let's avoid complex algorithms that are especially designed for security because they return long values. I just want zero or the least amount of collisions.
Since the maximum length is 8, and there are only 16 possible characters (8 lowercase + 8 capital; we can consider each character a hex value), the number of all the possible combinations (without duplicate letters) is 8! = 40320. I thought there is no need to use an algorithm that produces a long string like MD5 or SHA-256, and it's also going to be inefficient when I want to create an array of only 8! items, or a few thousand more if necessary. Not millions or billions.
I saw a similar question
Bit of a too general question, as you left out any constraints on the hash function, and/or what you're going to do with the hashes. (On a side note, hashing isn't an encoding) ...
Different answers in similar posts say that details are needed. So I thought that giving details would help.
|
[
"With the constraints you've provided, a good hash algorithm for a small word in terms of uniqueness and speed could be a simple modulo operation. For example, you could use the following algorithm:\nConvert the string to a number by treating each character as a hexadecimal value (e.g. 'A' = 10, 'B' = 11, etc.)\nTake the modulo of this number with the number of possible hashes (40320 in your case)\nUse the resulting value as the index for the array of booleans\nThis algorithm has the advantage of being simple, fast, and producing a unique result for each input string. It also avoids the need for complex cryptographic algorithms, which may be slower and less efficient for your purposes.\nHere is an example implementation in C:\n#include <stdio.h>\n#include <string.h>\n#include <stdlib.h>\n\n// Returns the hash value for the given string\nint hash(char *str) {\n size_t len = strlen(str);\n int value = 0;\n \n // Convert each character to a hexadecimal value\n for (size_t i = 0; i < len; i++) {\n value = value * 16 + (str[i] - 'A');\n }\n \n // Take the modulo with the number of possible hashes\n return value % 40320;\n}\n\nint main() {\n // Create an array of booleans to store the hashes\n bool found[40320] = {false};\n \n // Test the hash function with some input strings\n char *str1 = \"ABC\";\n char *str2 = \"DEF\";\n char *str3 = \"GHI\";\n \n int h1 = hash(str1);\n int h2 = hash(str2);\n int h3 = hash(str3);\n \n printf(\"Hash value for %s: %d\\n\", str1, h1);\n printf(\"Hash value for %s: %d\\n\", str2, h2);\n printf(\"Hash value for %s: %d\\n\", str3, h3);\n \n // Set the corresponding booleans to true\n found[h1] = true;\n found[h2] = true;\n found[h3] = true;\n \n // Check if a string already exists in the array\n char *str4 = \"ABC\";\n int h4 = hash(str4);\n \n if (found[h4]) {\n printf(\"%s already exists in the array\\n\", str4);\n } else {\n printf(\"%s does not exist in the array\\n\", str4);\n }\n \n return 0;\n}\n\nThis code will output the following:\nHash value for ABC: 174\nHash value for DEF: 207\nHash value for GHI: 240\nABC already exists in the array\n\nYou can modify this algorithm as needed to suit your specific requirements. For example, you could use a different number for the modulo operation, or add additional steps to the hashing process.\n"
] |
[
0
] |
[] |
[] |
[
"c",
"hash",
"hashmap",
"hashset"
] |
stackoverflow_0074659164_c_hash_hashmap_hashset.txt
|
Q:
Make a very thin white line in CSS
This may sound quite silly, but I have no idea how to make a thin line smaller than 1px.
What I am looking for is something like this:
https://www.simple.com/
If you do:
.nav-bar {
border-bottom: 1px;
}
It's almost twice, if not three times as thick. As you are also probably aware, doing 1em would just round up to 1px.
Anyone have any ideas?
A:
.nav-bar {
border-bottom: 1px solid rgba(255,255,255,.25);
}
This generates a white 1px line that has an opacity of 25%. This is about the same as your example page. Transparency is the key here; the line can’t be thinner than 1px.
Snippet:
body {
background: black;
}
.nav-bar {
width:100%;
height:100px;
border-bottom: 1px solid rgba(255,255,255,.25);
}
<div class="nav-bar">
A:
Simply use opacity to give the impression that the line is thinner than 1px.
As user @Papa pointed out, 1px is the smallest measurement you can specify:
.nav-bar {
border-bottom: 1px solid rgba(0,0,0,0.25)
}
A:
I found this neat trick while I was working on a project, using
.div {
box-shadow: white 0px 0px 1px;
}
you can generate a line that appears thinner than a regular 1px line
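A further sketch, for completeness: on high-DPI screens you can keep a 1px element and scale it down with a transform (the 0.5 factor and the class name here are arbitrary choices, not from the question):
.thin-line {
  height: 1px;
  background: white;
  transform: scaleY(0.5);
  transform-origin: top;
}
On low-DPI displays the browser may still rasterize this to a faint full pixel, so the opacity trick above remains the more predictable option.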
|
Make a very thin white line in CSS
|
This may sound quite silly, but I have no idea how to make a thin line smaller than 1px.
What I am looking for is something like this:
https://www.simple.com/
If you do:
.nav-bar {
border-bottom: 1px;
}
It's almost twice, if not three times as thick. As you are also probably aware, doing 1em would just round up to 1px.
Anyone have any ideas?
|
[
".nav-bar {\n border-bottom: 1px solid rgba(255,255,255,.25);\n}\n\nThis generates a white 1px line that has an opacity of 25%. This is about the same as your example page. Transparency is the key here; the line can’t be thinner than 1px.\nSnippet:\n\n\nbody {\r\n background: black;\r\n}\r\n\r\n.nav-bar {\r\n width:100%;\r\n height:100px;\r\n border-bottom: 1px solid rgba(255,255,255,.25);\r\n}\n<div class=\"nav-bar\">\n\n\n\n",
"Simply use opacity to give the impression that the line is thinner than 1px. \nAs user @Papa pointed out, 1px is the smallest measurement you can specify:\n.nav-bar {\n border-bottom: 1px solid rgba(0,0,0,0.25)\n}\n\n",
"I found this neat trick while I was working on a project, using\n.div {\n box-shadow: white 0px 0px 1px;\n}\n\nyou can generate a line than appears thinner than a regular 1px line\n"
] |
[
6,
1,
0
] |
[] |
[] |
[
"css"
] |
stackoverflow_0030065363_css.txt
|
Q:
How do I check if a line in a CSV file is not the header row, and then append each line of the file to a variable, excluding the header?
I need to use an "if" statement to check if a line in a CSV file is not the header row. Then, I need to append each line of the CSV file to a variable called "mailing_list," excluding the header. How should I do this? This is the CSV file and what I have so far (may not be correct).
uuid,username,email,subscribe_status
307919e9-d6f0-4ecf-9bef-c1320db8941a,afarrimond0,[email protected],opt-out
8743d75d-c62a-4bae-8990-3390fefbe5c7,tdelicate1,[email protected],opt-out
68a32cae-847a-47c5-a77c-0d14ccf11e70,edelahuntyk,[email protected],OPT-OUT
a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d,tdelicate10,[email protected],active
26edd0b3-0040-4ba9-8c19-9b69d565df36,ogelder2,[email protected],unsubscribed
5c96189f-95fe-4638-9753-081a6e1a82e8,bnornable3,[email protected],opt-out
480fb04a-d7cd-47c5-8079-b580cb14b4d9,csheraton4,[email protected],active
d08649ee-62ae-4d1a-b578-fdde309bb721,tstodart5,[email protected],active
5772c293-c2a9-41ff-a8d3-6c666fc19d9a,mbaudino6,[email protected],unsubscribed
9e8fb253-d80d-47b5-8e1d-9a89b5bcc41b,paspling7,[email protected],active
055dff79-7d09-4194-95f2-48dd586b8bd7,mknapton8,[email protected],active
5216dc65-05bb-4aba-a516-3c1317091471,ajelf9,[email protected],unsubscribed
41c30786-aa84-4d60-9879-0c53f8fad970,cgoodleyh,[email protected],active
3fd55224-dbff-4c89-baec-629a3442d8f7,smcgonnelli,[email protected],opt-out
2ac17a63-a64b-42fc-8780-02c5549f23a7,mmayoralj,[email protected],unsubscribed
import csv
base_url = '../dataset/'
def read_mailing_list_file():
with open('mailing_list.csv', 'r') as csv_file:
file_reader = csv.reader(csv_file)
line_count = 0
mailing_list = open("mailing_list.csv").readlines()
for row in file_reader:
I am not sure what to try, but I am expecting to append each line of the CSV file to the mailing_list variable, excluding the header.
A:
Using Sniffer class from csv.
From docs:
has_header(sample)
Analyze the sample text (presumed to be in CSV format) and return True if the first row appears to be a series of column headers. Inspecting each column, one of two key criteria will be considered to estimate if the sample contains a header:
the second through n-th rows contain numeric values
the second through n-th rows contain strings where at >least one value’s length differs from that of the putative header of that column.
Twenty rows after the first row are sampled; if more than half of columns + rows meet the criteria, True is returned.
Note
This method is a rough heuristic and may produce both false positives and negatives.
import csv
with open('mailing_list.csv') as csv_file:
hdr = csv.Sniffer().has_header(csv_file.read())
csv_file.seek(0)
r = csv.reader(csv_file)
mailing_list = []
if hdr:
next(r)
for row in r:
mailing_list.append(row)
mailing_list
Out[11]:
[['307919e9-d6f0-4ecf-9bef-c1320db8941a',
'afarrimond0',
'[email protected]',
'opt-out'],
['8743d75d-c62a-4bae-8990-3390fefbe5c7',
'tdelicate1',
'[email protected]',
'opt-out'],
['68a32cae-847a-47c5-a77c-0d14ccf11e70',
'edelahuntyk',
'[email protected]',
'OPT-OUT'],
['a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d',
'tdelicate10',
'[email protected]',
'active'],
...
['3fd55224-dbff-4c89-baec-629a3442d8f7',
'smcgonnelli',
'[email protected]',
'opt-out'],
['2ac17a63-a64b-42fc-8780-02c5549f23a7',
'mmayoralj',
'[email protected]',
'unsubscribed'],
[]]
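If the header is known to always be present, a simpler sketch (no Sniffer needed) is to skip the first row unconditionally and keep only non-empty rows:
import csv

with open('mailing_list.csv') as csv_file:
    reader = csv.reader(csv_file)
    next(reader)  # skip the header row
    mailing_list = [row for row in reader if row]  # drop blank trailing rows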
|
How do I check if a line in a CSV file is not the header row, and then append each line of the file to a variable, excluding the header?
|
I need to use an "if" statement to check if a line in a CSV file is not the header row. Then, I need to append each line of the CSV file to a variable called "mailing_list," excluding the header. How should I do this? This is the CSV file and what I have so far (may not be correct).
uuid,username,email,subscribe_status
307919e9-d6f0-4ecf-9bef-c1320db8941a,afarrimond0,[email protected],opt-out
8743d75d-c62a-4bae-8990-3390fefbe5c7,tdelicate1,[email protected],opt-out
68a32cae-847a-47c5-a77c-0d14ccf11e70,edelahuntyk,[email protected],OPT-OUT
a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d,tdelicate10,[email protected],active
26edd0b3-0040-4ba9-8c19-9b69d565df36,ogelder2,[email protected],unsubscribed
5c96189f-95fe-4638-9753-081a6e1a82e8,bnornable3,[email protected],opt-out
480fb04a-d7cd-47c5-8079-b580cb14b4d9,csheraton4,[email protected],active
d08649ee-62ae-4d1a-b578-fdde309bb721,tstodart5,[email protected],active
5772c293-c2a9-41ff-a8d3-6c666fc19d9a,mbaudino6,[email protected],unsubscribed
9e8fb253-d80d-47b5-8e1d-9a89b5bcc41b,paspling7,[email protected],active
055dff79-7d09-4194-95f2-48dd586b8bd7,mknapton8,[email protected],active
5216dc65-05bb-4aba-a516-3c1317091471,ajelf9,[email protected],unsubscribed
41c30786-aa84-4d60-9879-0c53f8fad970,cgoodleyh,[email protected],active
3fd55224-dbff-4c89-baec-629a3442d8f7,smcgonnelli,[email protected],opt-out
2ac17a63-a64b-42fc-8780-02c5549f23a7,mmayoralj,[email protected],unsubscribed
import csv
base_url = '../dataset/'
def read_mailing_list_file():
with open('mailing_list.csv', 'r') as csv_file:
file_reader = csv.reader(csv_file)
line_count = 0
mailing_list = open("mailing_list.csv").readlines()
for row in file_reader:
I am not sure what to try, but I am expecting to append each line of the CSV file to the mailing_list variable, excluding the header.
|
[
"Using Sniffer class from csv.\nFrom docs:\n\nhas_header(sample)\n\n\nAnalyze the sample text (presumed to be in CSV format) and return True if the first row appears to be a series of column headers. Inspecting each column, one of two key criteria will be considered to estimate if the sample contains a header:\n\n\n the second through n-th rows contain numeric values\n\n\n\n the second through n-th rows contain strings where at >least one value’s length differs from that of the putative header of that column.\n\n\n\nTwenty rows after the first row are sampled; if more than half of columns + rows meet the criteria, True is returned.\n\n\nNote\n\n\nThis method is a rough heuristic and may produce both false positives and negatives.\n\nimport csv \n\nwith open('mailing_list.csv') as csv_file:\n hdr = csv.Sniffer().has_header(csv_file.read())\n csv_file.seek(0)\n r = csv.reader(csv_file)\n mailing_list = []\n if hdr:\n next(r)\n for row in r:\n mailing_list.append(row)\n\nmailing_list \nOut[11]: \n[['307919e9-d6f0-4ecf-9bef-c1320db8941a',\n 'afarrimond0',\n '[email protected]',\n 'opt-out'],\n ['8743d75d-c62a-4bae-8990-3390fefbe5c7',\n 'tdelicate1',\n '[email protected]',\n 'opt-out'],\n ['68a32cae-847a-47c5-a77c-0d14ccf11e70',\n 'edelahuntyk',\n '[email protected]',\n 'OPT-OUT'],\n ['a50bd76f-bc4d-4141-9b5d-3bfb9cb4c65d',\n 'tdelicate10',\n '[email protected]',\n 'active'],\n\n ...\n\n ['3fd55224-dbff-4c89-baec-629a3442d8f7',\n 'smcgonnelli',\n '[email protected]',\n 'opt-out'],\n ['2ac17a63-a64b-42fc-8780-02c5549f23a7',\n 'mmayoralj',\n '[email protected]',\n 'unsubscribed'],\n []]\n\n\n"
] |
[
1
] |
[] |
[] |
[
"append",
"csv",
"python",
"readlines"
] |
stackoverflow_0074659020_append_csv_python_readlines.txt
|
Q:
How to incorporate proof aspects into the specification so that every function and procedure has a Post aspect and, if required, a Pre aspect
How to incorporate proof aspects into the specification so that every function and procedure has a Post aspect and, if required, a Pre aspect that outlines the proper behaviour of the code below:
package Stack with SPARK_Mode is
pragma Elaborate_Body;
Stack_Size : constant := 100;
type Pointer_Range is range 0 .. Stack_Size;
subtype Index_Range is Pointer_Range range 1 .. Stack_Size;
type Vector is array(Index_Range) of Integer;
S: Vector;
Pointer: Pointer_Range;
function isEmpty return Boolean;
procedure Push(X : in Integer)
with
Global => (In_out => (S, Pointer)),
Depends => (S => (S, Pointer, X),
Pointer => Pointer);
procedure Pop(X : out Integer)
with
Global => (input => S, in_out => Pointer),
Depends => (Pointer => Pointer,
X => (S, Pointer));
end Stack;
A:
In brief, you should add a Post-condition aspect to every subprogram in the package, and a Pre-condition aspect to those subprograms that need it.
Preconditions and postconditions in Ada are explained at http://www.ada-auth.org/standards/22rm/html/RM-6-1-1.html.
What is your problem, really? Is it about the syntax of the pre/post-condition aspects, or about their content and meaning? Or is it about the meaning of "proper behaviour" in the problem statement? In the last case, try to imagine what might be improper behaviour, or incorrect use, of a Push or Pop operation on a stack with a fixed maximum size.
A:
Following is a possible set of post conditions and pre conditions for your example. The actual set must depend upon actual requirements for your stack behavior. This example is simply a typical set of conditions for the stack.
package Stack with SPARK_MODE is
pragma Elaborate_Body;
Stack_Size : constant := 100;
type Pointer_Range is range 0 .. Stack_Size;
subtype Index_Range is Pointer_Range range 1..Stack_Size;
type Vector is array (Index_Range) of Integer;
S : Vector;
Pointer : Pointer_Range := 0;
function isEmpty return Boolean with
Post => IsEmpty'Result = (if Pointer = 0 then True else False);
procedure Push(X : in Integer) with
Global => (In_Out => (S, Pointer)),
Depends => (S => (S, Pointer, X), Pointer => Pointer),
Pre => Pointer < Stack_Size,
Post => Pointer = Pointer'Old + 1 and S(Pointer) = X;
procedure Pop(X : out Integer) with
Global => (In_Out => (S, Pointer)),
Depends => (Pointer => Pointer,
X => (S, Pointer),
S => S),
Pre => not isEmpty,
Post => Pointer = Pointer'Old - 1 and X = S(Pointer'Old);
end Stack;
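As a hedged usage sketch (not from the question itself): with these contracts, the caller must establish the Pre conditions, e.g. by guarding Pop with isEmpty:
declare
   X : Integer;
begin
   if not isEmpty then
      Pop (X);
   end if;
end;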
|
How to incorporate proof aspects into the specification so that every function and procedure has a Post aspect and, if required, a Pre aspect
|
How to incorporate proof aspects into the specification so that every function and procedure has a Post aspect and, if required, a Pre aspect that outlines the proper behaviour of the code below:
package Stack with SPARK_Mode is
pragma Elaborate_Body;
Stack_Size : constant := 100;
type Pointer_Range is range 0 .. Stack_Size;
subtype Index_Range is Pointer_Range range 1 .. Stack_Size;
type Vector is array(Index_Range) of Integer;
S: Vector;
Pointer: Pointer_Range;
function isEmpty return Boolean;
procedure Push(X : in Integer)
with
Global => (In_out => (S, Pointer)),
Depends => (S => (S, Pointer, X),
Pointer => Pointer);
procedure Pop(X : out Integer)
with
Global => (input => S, in_out => Pointer),
Depends => (Pointer => Pointer,
X => (S, Pointer));
end Stack;
|
[
"In brief, you should add a Post-condition aspect to every subprogram in the package, and a Pre-condition aspect to those subprograms that need it.\nPreconditions and postconditions in Ada are explained at http://www.ada-auth.org/standards/22rm/html/RM-6-1-1.html.\nWhat is your problem, really? Is it about the syntax of the pre/post-condition aspects, or about their content and meaning? Or is it about the meaning of \"proper behaviour\" in the problem statement? In the last case, try to imagine what might be improper behaviour, or incorrect use, of a Push or Pop operation on a stack with a fixed maximum size.\n",
"Following is a possible set of post conditions and pre conditions for your example. The actual set must depend upon actual requirements for your stack behavior. This example is simply a typical set of conditions for the stack.\npackage Stack with SPARK_MODE is\n pragma Elaborate_Body;\n \n Stack_Size : constant := 100;\n type Pointer_Range is range 0 .. Stack_Size;\n subtype Index_Range is Pointer_Range range 1..Stack_Size;\n type Vector is array (Index_Range) of Integer;\n \n S : Vector;\n Pointer : Pointer_Range := 0;\n \n function isEmpty return Boolean with\n Post => IsEmpty'Result = (if Pointer = 0 then True else False);\n \n procedure Push(X : in Integer) with\n Global => (In_Out => (S, Pointer)),\n Depends => (S => (S, Pointer, X), Pointer => Pointer),\n Pre => Pointer < Stack_Size,\n Post => Pointer = Pointer'Old + 1 and S(Pointer) = X;\n \n procedure Pop(X : out Integer) with\n Global => (In_Out => (S, Pointer)),\n Depends => (Pointer => Pointer,\n X => (S, Pointer),\n S => S),\n Pre => not isEmpty,\n Post => Pointer = Pointer'Old - 1 and X = S(Pointer'Old);\n \nend Stack;\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"ada",
"gnat",
"integration_testing"
] |
stackoverflow_0074655653_ada_gnat_integration_testing.txt
|
Q:
VBA importing values from file deal with dots and commas (thousands and decimals)
I'm having a problem when importing a txt or csv file originated from SAP. Within that file there are columns that have values, most of them integer values; no problem with those. The problem I'm facing is with values that have decimals in them. As an example, I have a column with stock values that can range from 0,001 to 99 999,999. As I'm in Portugal, my decimals/thousands regional settings are " " for the thousands and "," for the decimals. I need to treat all my imported values to be in the same format, but I'm not being able to do so. I've tried several solutions, even solutions already on Stack Overflow, but again, with no success. I've also tried disabling the regional settings and specifying the thousands and decimal separators. There is also another problem: as SAP exports this field (column) always with 10 characters, I need to replace the " " with "" (no spaces); if I do this, the value will already be seen as a numeric value, which already changes the imported values. I even tried maintaining the spaces for Excel to see the value as a string, doing the replaces of dots and only then removing the spaces, but then Excel detects both dots and commas as dots. I don't know if it helps, but if I do the Find and Replace from the menu in Excel and select the range, it works fine. If I record a macro with those steps and then run that same macro, it doesn't work; I get the same strange behaviour and results.
Here are some examples of raw values and how they should be imported:
Raw Value Pretended Value
3.655,600 3655,6 (should remove the thousands separator)
10.548 10548 (should remove the thousands separator)
872 872 (once there is no separators, it should do nothing)
1.872 1872 (should remove the thousands separator)
16.000 16000 (should remove the thousands separator)
105,372 105,372 (only decimals separator, it should do nothing)
460,8 460,8 (only decimals separator, it should do nothing)
60,72 60,72 (only decimals separator, it should do nothing)
1.574,400 1574,400 (should remove the thousands separator)
Any advise would be helpful.
Thanks in advance,
Jorge Vieira
Columns("M:M").Select
Selection.Replace What:=".", Replacement:="", LookAt:=xlPart, _
SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _
ReplaceFormat:=False
Columns("M:M").Select
Selection.Replace What:=".", Replacement:="", LookAt:=xlPart, _
SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _
ReplaceFormat:=False
3.655,600 3 655 600 (it should be 3655,600)
10.548 10548 (correct)
872 872 (correct)
1.872 1872 (correct)
16.000 16000 (correct)
105,372 105372 (it should maintain 105,372)
460,8 4608 (it should maintain 460,8)
60,72 6072 (it should maintain 60,72)
1.574,400 1 574 400 (it should be 1574,400)
A:
With ActiveSheet.QueryTables.Add(Connection:="TEXT;C:\Test\Test.csv", Destination:=Range("$A$1"))
    .TextFileThousandsSeparator = "."
    .TextFileDecimalSeparator = ","
    .Refresh BackgroundQuery:=False
End With
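For columns that were already imported as text, an alternative sketch is Range.TextToColumns, which also accepts explicit separators (column M is taken from the question's own code; verify the other arguments against your data):
Columns("M:M").TextToColumns Destination:=Range("M1"), DataType:=xlDelimited, _
    DecimalSeparator:=",", ThousandsSeparator:=".", TrailingMinusNumbers:=True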
|
VBA importing values from file deal with dots and commas (thousands and decimals)
|
I'm having a problem when importing a txt or csv file originated from SAP. Within that file there are columns that have values, most of them integer values; no problem with those. The problem I'm facing is with values that have decimals in them. As an example, I have a column with stock values that can range from 0,001 to 99 999,999. As I'm in Portugal, my decimals/thousands regional settings are " " for the thousands and "," for the decimals. I need to treat all my imported values to be in the same format, but I'm not being able to do so. I've tried several solutions, even solutions already on Stack Overflow, but again, with no success. I've also tried disabling the regional settings and specifying the thousands and decimal separators. There is also another problem: as SAP exports this field (column) always with 10 characters, I need to replace the " " with "" (no spaces); if I do this, the value will already be seen as a numeric value, which already changes the imported values. I even tried maintaining the spaces for Excel to see the value as a string, doing the replaces of dots and only then removing the spaces, but then Excel detects both dots and commas as dots. I don't know if it helps, but if I do the Find and Replace from the menu in Excel and select the range, it works fine. If I record a macro with those steps and then run that same macro, it doesn't work; I get the same strange behaviour and results.
Here are some examples of raw values and how they should be imported:
Raw Value Pretended Value
3.655,600 3655,6 (should remove the thousands separator)
10.548 10548 (should remove the thousands separator)
872 872 (once there is no separators, it should do nothing)
1.872 1872 (should remove the thousands separator)
16.000 16000 (should remove the thousands separator)
105,372 105,372 (only decimals separator, it should do nothing)
460,8 460,8 (only decimals separator, it should do nothing)
60,72 60,72 (only decimals separator, it should do nothing)
1.574,400 1574,400 (should remove the thousands separator)
Any advise would be helpful.
Thanks in advance,
Jorge Vieira
Columns("M:M").Select
Selection.Replace What:=".", Replacement:="", LookAt:=xlPart, _
SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _
ReplaceFormat:=False
Columns("M:M").Select
Selection.Replace What:=".", Replacement:="", LookAt:=xlPart, _
SearchOrder:=xlByRows, MatchCase:=False, SearchFormat:=False, _
ReplaceFormat:=False
3.655,600 3 655 600 (it should be 3655,600)
10.548 10548 (correct)
872 872 (correct)
1.872 1872 (correct)
16.000 16000 (correct)
105,372 105372 (it should maintain 105,372)
460,8 4608 (it should maintain 460,8)
60,72 6072 (it should maintain 60,72)
1.574,400 1 574 400 (it should be 1574,400)
|
[
"With ActiveSheet.QueryTables.Add(Connection:= \"TEXT; C:\\Test\\Test.csv\", Destination:=Range(\"$A$1\"))\n .TextFileThousandsSeparator = \".\" \n .TextFileDecimalSeparator = \",\"\n\n"
] |
[
0
] |
[] |
[] |
[
"decimal",
"excel",
"numeric",
"replace",
"vba"
] |
stackoverflow_0074657586_decimal_excel_numeric_replace_vba.txt
|
Q:
My tag is taking style from another tag
<FormWrap>
<FormImg img src='./img/CB.png' alt="CB" />
<FormContent>
<Form onSubmit={handleSubmit}>
{errorMessage && (<p className="errorM"> {errorMessage} </p>)}
<FormH1>Log in to your account</FormH1>
<FormLabel htmlFor='for'>Email</FormLabel>
<FormInput type='text' value={email} required
onChange={e => setemail(e.target.value)}/>
<FormLabel htmlFor='for'>Password</FormLabel>
<FormInput type='password' value={password} required
onChange={e => setpassword(e.target.value)}/>
<FormButton type='submit'>Sign in</FormButton>
<Navtext>
<NavtextLink to="/Register">Register</NavtextLink>
</Navtext>
</Form>
</FormContent>
</FormWrap>
export const errorM = styled.p`
width:1000px;
height:692px;
background:black;
color:white;
display:grid;
grid-template-columns: 1fr 6fr;
position:relative;
border-radius:10px;
`
export const FormWrap = styled.div`
width:800px;
height:692px;
background:white;
color:yellow;
display:grid;
grid-template-columns: 1fr 6fr;
position:relative;
border-radius:10px;
@media screen and (max-width: 980px){
height:95%;
padding:50px;
}
@media screen and (max-width: 720px){
height:90%;
padding-left:95px;
}
`
The thing is that errorM is taking the style from FormWrap, even though the one is a <p> and the other is a <div>. I tried to use inline styles, but then the only thing I managed to change was the color of the text; I tried to change the font but nothing happened.
A:
It looks like you're trying to use styled-components to style your form. In order for your errorM component to have its own styles, you need to create a new styled component for it, like this:
const ErrorM = styled.p`
width: 1000px;
height: 692px;
background: black;
color: white;
display: grid;
grid-template-columns: 1fr 6fr;
position: relative;
border-radius: 10px;
`;
Then, you can use this component in your form like this:
<ErrorM>{errorMessage}</ErrorM>
Styled-components work by creating new components that have the styles you define applied to them. So, if you want to style an element differently from its parent element, you need to create a new styled component for that element.
A:
In your first code example, the <p> has the className "errorM".
And in your second code example, you have created a styled component called "errorM".
When you create a styled component the variable should start with a capital letter, for example, ErrorM.
There are two ways to add the style to <p> tag.
Applying style to the className.
Change export const errorM = styled.p to .errorM { ... }.
Using the styled component instead of adding className to <p>.
Change <p className="errorM">{errorMessage}</p> to <ErrorM>{errorMessage}</ErrorM>
A:
use id to specify particular paragraph
<!DOCTYPE html>
<html>
<head>
<style>
#one{
color:blue;
}
</style>
<title>Browser</title>
</head>
<body>
<h1>Write, edit and run HTML, CSS and JavaScript code online.</h1>
<p id="one">
This is paragraph
</p>
</body>
</html>
|
My tag is taking style from another tag
|
<FormWrap>
<FormImg img src='./img/CB.png' alt="CB" />
<FormContent>
<Form onSubmit={handleSubmit}>
{errorMessage && (<p className="errorM"> {errorMessage} </p>)}
<FormH1>Log in to your account</FormH1>
<FormLabel htmlFor='for'>Email</FormLabel>
<FormInput type='text' value={email} required
onChange={e => setemail(e.target.value)}/>
<FormLabel htmlFor='for'>Password</FormLabel>
<FormInput type='password' value={password} required
onChange={e => setpassword(e.target.value)}/>
<FormButton type='submit'>Sign in</FormButton>
<Navtext>
<NavtextLink to="/Register">Register</NavtextLink>
</Navtext>
</Form>
</FormContent>
</FormWrap>
export const errorM = styled.p`
width:1000px;
height:692px;
background:black;
color:white;
display:grid;
grid-template-columns: 1fr 6fr;
position:relative;
border-radius:10px;
`
export const FormWrap = styled.div`
width:800px;
height:692px;
background:white;
color:yellow;
display:grid;
grid-template-columns: 1fr 6fr;
position:relative;
border-radius:10px;
@media screen and (max-width: 980px){
height:95%;
padding:50px;
}
@media screen and (max-width: 720px){
height:90%;
padding-left:95px;
}
`
The thing is that errorM is taking the style from FormWrap, even though the one is a <p> and the other is a <div>. I tried to use inline styles, but then the only thing I managed to change was the color of the text; I tried to change the font but nothing happened.
|
[
"It looks like you're trying to use styled-components to style your form. In order for your errorM component to have its own styles, you need to create a new styled component for it, like this:\nconst ErrorM = styled.p`\n width: 1000px;\n height: 692px;\n background: black;\n color: white;\n display: grid;\n grid-template-columns: 1fr 6fr;\n position: relative;\n border-radius: 10px;\n`;\n\nThen, you can use this component in your form like this:\n<ErrorM>{errorMessage}</ErrorM>\n\nStyled-components work by creating new components that have the styles you define applied to them. So, if you want to style an element differently from its parent element, you need to create a new styled component for that element.\n",
"In your first code example, the <p> has the className \"errorM\".\nAnd in your second code example, you have created a styled component called \"errorM\".\nWhen you create a styled component the variable should start with a capital letter, for example, ErrorM.\nThere are two ways to add the style to <p> tag.\n\nApplying style to the className.\nChange export const errorM = styled.p to .errorM { ... }.\n\nUsing the styled component instead of adding className to <p>.\nChange <p className=\"errorM\">{errorMessage}</p> to <ErrorM>{errorMessage}</ErrorM>\n\n\n",
"use id to specify particular paragraph\n<!DOCTYPE html>\n<html>\n<head>\n <style>\n #one{\n color:blue;\n }\n </style> \n <title>Browser</title>\n</head>\n\n<body>\n <h1>Write, edit and run HTML, CSS and JavaScript code online.</h1>\n<p id=\"one\">\nThis is paragraph\n</p>\n\n</body>\n\n</html>\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"css",
"frontend",
"html",
"reactjs",
"tags"
] |
stackoverflow_0074658909_css_frontend_html_reactjs_tags.txt
|
Q:
Linq in Inventor API
I have searched for LINQ in the Inventor API, but found nothing.
Can anyone share a learning link or the syntax for it? That would be great.
I can implement LINQ with the Revit API but am unable to implement it with the Inventor API.
A:
You can use LINQ in standard way in Inventor API
This is a sample how to select planar faces from SurfaceBody
Face[] PlanarFaces(SurfaceBody body)
{
var planarFaces = body.Faces.OfType<Face>().Where(f=> f.SurfaceType == SurfaceTypeEnum.kPlaneSurface).ToArray();
return planarFaces;
}
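A hedged usage sketch (assuming a part document is active and that inventorApp is a connected Inventor.Application instance; SurfaceBodies is 1-indexed in the Inventor API):
PartDocument doc = (PartDocument)inventorApp.ActiveDocument;
Face[] planar = PlanarFaces(doc.ComponentDefinition.SurfaceBodies[1]);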
|
Linq in Inventor API
|
I have searched for LINQ in the Inventor API, but found nothing.
Can anyone share a learning link or the syntax for it? That would be great.
I can implement LINQ with the Revit API but am unable to implement it with the Inventor API.
|
[
"You can use LINQ in standard way in Inventor API\nThis is a sample how to select planar faces from SurfaceBody\nFace[] PlanarFaces(SurfaceBody body)\n{\n var planarFaces = body.Faces.OfType<Face>().Where(f=> f.SurfaceType == SurfaceTypeEnum.kPlaneSurface).ToArray();\n return planarFaces;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"api",
"autodesk_inventor",
"customization"
] |
stackoverflow_0074575606_api_autodesk_inventor_customization.txt
|
Q:
Create per project path aliases by extending tsconfig.base.json
I am working on NX monorepo and currently I have two libs and a single app.
The two library path aliases are registered in tsconfig.base.json as follows:
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@sca/config": ["libs/sca-config/src/index.ts"],
"@sca/utils": ["libs/sca-utils/src/index.ts"]
}
}
}
In library sca-config, the tsconfig.json looks like:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-config/src/*"]
}
}
}
In library sca-utils, the tsconfig.json looks like:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-utils/src/*"]
}
}
}
Now, considering that I haven't overridden the baseUrl in both the libraries, I must be able to use the path alias @sca/utils in sca-config. Also, while in both projects, I must be able to import the files from within the projects using @/some/internal/project/file.
This is not happening at the moment, it looks like the inherited paths are overridden completely instead of inheriting.
I tried to do this in tsconfig.json of sca-config and it worked, but it looks ugly. Also it seems, I have to do this for every library import:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-config/src/*"],
"@sca/utils": ["libs/sca-utils/src/index.ts"]
}
}
}
which confirms the fact that extending the tsconfig.base.json and then adding paths to the project-level tsconfig.json overrides the paths property instead of merging it.
My question is: is there a nice way of doing it? Because it will get too messy once the number of libraries and their dependence on each other grows. Not to mention, it will get repetitive (adding paths in both tsconfig.base.json and the project-level tsconfig.json).
A:
You can use a wildcard (*) in the paths property of tsconfig.json to specify that all paths should inherit from the tsconfig.base.json file. This will allow you to define the path aliases in the tsconfig.base.json file, and then use these aliases in the tsconfig.json files for each library without having to explicitly specify them in each file.
Here is an example of how you can use a wildcard in the paths property of tsconfig.json to inherit the path aliases from tsconfig.base.json:
In tsconfig.base.json:
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@sca/config": ["libs/sca-config/src/index.ts"],
"@sca/utils": ["libs/sca-utils/src/index.ts"]
}
}
}
In tsconfig.json for sca-config:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-config/src/*"]
}
}
}
In tsconfig.json for sca-utils:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-utils/src/*"]
}
}
}
|
Create per project path aliases by extending tsconfig.base.json
|
I am working on NX monorepo and currently I have two libs and a single app.
The two library path aliases are registered in tsconfig.base.json as follows:
{
"compilerOptions": {
"baseUrl": ".",
"paths": {
"@sca/config": ["libs/sca-config/src/index.ts"],
"@sca/utils": ["libs/sca-utils/src/index.ts"]
}
}
}
In library sca-config, the tsconfig.json looks like:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-config/src/*"]
}
}
}
In library sca-utils, the tsconfig.json looks like:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-utils/src/*"]
}
}
}
Now, considering that I haven't overridden the baseUrl in both the libraries, I must be able to use the path alias @sca/utils in sca-config. Also, while in both projects, I must be able to import the files from within the projects using @/some/internal/project/file.
This is not happening at the moment, it looks like the inherited paths are overridden completely instead of inheriting.
I tried to do this in tsconfig.json of sca-config and it worked, but it looks ugly. Also it seems, I have to do this for every library import:
{
"extends": "../../tsconfig.base.json",
"compilerOptions": {
"paths": {
"@/*": ["libs/sca-config/src/*"],
"@sca/utils": ["libs/sca-utils/src/index.ts"]
}
}
}
which confirms the fact that extending the tsconfig.base.json and then adding paths to the project-level tsconfig.json overrides the paths property instead of merging it.
My question is: is there a nice way of doing it? Because it will get too messy once the number of libraries and their dependence on each other grows. Not to mention, it will get repetitive (adding paths in both tsconfig.base.json and the project-level tsconfig.json).
|
[
"You can use a wildcard (*) in the paths property of tsconfig.json to specify that all paths should inherit from the tsconfig.base.json file. This will allow you to define the path aliases in the tsconfig.base.json file, and then use these aliases in the tsconfig.json files for each library without having to explicitly specify them in each file.\nHere is an example of how you can use a wildcard in the paths property of tsconfig.json to inherit the path aliases from tsconfig.base.json:\nIn tsconfig.base.json:\n{\n \"compilerOptions\": {\n \"baseUrl\": \".\",\n \"paths\": {\n \"@sca/config\": [\"libs/sca-config/src/index.ts\"],\n \"@sca/utils\": [\"libs/sca-utils/src/index.ts\"]\n }\n }\n}\n\nIn tsconfig.json for sca-config:\n{\n \"extends\": \"../../tsconfig.base.json\",\n \"compilerOptions\": {\n \"paths\": {\n \"@/*\": [\"libs/sca-config/src/*\"]\n }\n }\n}\n\nIn tsconfig.json for sca-utils:\n{\n \"extends\": \"../../tsconfig.base.json\",\n \"compilerOptions\": {\n \"paths\": {\n \"@/*\": [\"libs/sca-utils/src/*\"]\n }\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"nrwl_nx",
"typescript"
] |
stackoverflow_0074659134_nrwl_nx_typescript.txt
|
Q:
Worst-case time complexity of an algorithm with 2+ steps
My goal is to write an algorithm that checks if an unsorted array of positive integers contains a value x and x^2 and return their indices if so.
I've solved this by proposing that first you sort the array using merge sort, then perform binary search for x, then perform binary search for x^2.
I then wrote that "since binary search has worst-case runtime of O(log n) and merge sort has worst-case runtime of O(n log n), we conclude that the worst-case runtime of this algorithm is O(n log n)." Am I correct in my understanding that when analyzing the overall efficiency of an algorithm that involves steps with different runtimes, we just take the one with the longest runtime? Or is it more involved than this?
Thanks in advance!
A:
Since O(log n) < O(n log n):
O(n log n) + O(log n) + O(log n) = O(n log n)
So the time complexity of the whole algorithm is O(n log n).
A:
Your question is a bit ambiguous. Do you get
an unsorted list [a,b,c ...] and a specific x to search for as parameter?
or
just get the list and have to find if there is at least one pair (x,y) with x^2 = y contained in the list?
Now that you have clarified it's the first, the answer is O(n), because you just have to iterate over the list (no need to sort or binary search) and check for each element whether it's equal to x or x^2. If you find both, the list fulfills the condition.
function match(list, x) {
let ix = -1, ixx = -1;
    for (let i = 0; i < list.length && (ix == -1 || ixx == -1); i++) {
        if (list[i] == x) ix = i;
        if (list[i] == x*x) ixx = i;
}
return [ix, ixx];
}
This returns the indexes of x and x^2 or, if not found, -1 for the respective index. The loop stops once both values have been found in the list.
|
Worst-case time complexity of an algorithm with 2+ steps
|
My goal is to write an algorithm that checks if an unsorted array of positive integers contains a value x and x^2 and return their indices if so.
I've solved this by proposing that first you sort the array using merge sort, then perform binary search for x, then perform binary search for x^2.
I then wrote that "since binary search has worst-case runtime of O(log n) and merge sort has worst-case runtime of O(n log n), we conclude that the worst-case runtime of this algorithm is O(n log n)." Am I correct in my understanding that when analyzing the overall efficiency of an algorithm that involves steps with different runtimes, we just take the one with the longest runtime? Or is it more involved than this?
Thanks in advance!
|
[
"Since O(log n) < O(n log n):\nO(n log n) + O(log n) + O(log n) = O(n log n) \nSo the time complexity of the hole algorithm is O(n log n).\n",
"Your question is a bit ambigous. Do you get\n\nan unsorted list [a,b,c ...] and a specific x to search for as parameter?\n\nor\n\njust get the list and have to find if there is at least one pair (x,y) with x^2 = y contained in the list?\n\nNow as you have cleared it's the first, the answer is O(n), because you just have to iterate over the list (no need to sort or binary search) and check for each element if it's equal to x or x^2. If you find both, the list fulfills the condition.\nfunction match(list, x) {\n let ix = -1, ixx = -1;\n for (let i = 0; i< list.length && (ix == -1 || ixx == -1); i++) {\n if (i == x) ix = i;\n if (i == x*x) ixx = i;\n }\n return [ix, ixx];\n} \n\nThis returns the indexes of x and x^2 or, if not found -1 for the respective index. It returns, once both values are found in the list\n"
] |
[
0,
0
] |
[] |
[] |
[
"algorithm",
"amortized_analysis",
"big_o",
"sorting",
"time_complexity"
] |
stackoverflow_0074658862_algorithm_amortized_analysis_big_o_sorting_time_complexity.txt
|
Q:
Using computeIfAbsent method in java?
I get the general idea behind it, e.g. it puts a new set in the map if it's not there, but actually getting it to work has been difficult! So I currently have something like this; the example in the javadocs isn't quite sinking in.
if (!result.containsKey(someID)) {
hashy = new HashSet<>();
result.put(someID, hashy);
} else {
hashy = result.get(someID);
}
as you can see from the above, if the result (which is a map of <String, Set>) doesn't contain someID then we are putting someID and the new hashset in it.
How would I use the computeIfAbsent function here instead?
hashy = new HashSet<>();
result.computeIfAbsent(someID, k-> result.put(someID, hashy ));
I've tried this but it doesn't seem to be working.
Any ideas?
A:
The point of computeIfAbsent is that you don't construct the new object unless you actually need it, and you also don't explicitly call put. computeIfAbsent does all of that for you.
So the equivalent to your original hashy code
if (!result.containsKey(someID)) {
hashy = new HashSet<>();
result.put(someID, hashy);
} else {
hashy = result.get(someID);
}
is merely
hashy = result.computeIfAbsent(someID, k -> new HashSet<>());
Note that, depending on your use case, Java may or may not be able to infer the generic type of the HashSet, so you may have to specify the type in the angled brackets explicitly.
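A short usage sketch (someID and value are placeholders, and java.util imports are assumed): since computeIfAbsent returns the existing or newly created set, you can also chain the add directly:
Map<String, Set<String>> result = new HashMap<>();

// two-step form
Set<String> hashy = result.computeIfAbsent(someID, k -> new HashSet<>());
hashy.add(value);

// or in one line
result.computeIfAbsent(someID, k -> new HashSet<>()).add(value);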
|
Using computeIfAbsent method in java?
|
I get the general idea behind it, e.g. it puts a new set in the map if it's not there, but actually getting it to work has been difficult! So I currently have something like this; the example in the javadocs isn't quite sinking in.
if (!result.containsKey(someID)) {
hashy = new HashSet<>();
result.put(someID, hashy);
} else {
hashy = result.get(someID);
}
as you can see from the above, if the result (which is a map of <String, Set>) doesn't contain someID then we are putting someID and the new hashset in it.
How would I use the computeIfAbsent function here instead?
hashy = new HashSet<>();
result.computeIfAbsent(someID, k-> result.put(someID, hashy ));
I've tried this but it doesn't seem to be working.
Any ideas?
|
[
"The point of computeIfAbsent is that you don't construct the new object unless you actually need it, and you also don't explicitly call put. computeIfAbsent does all of that for you.\nSo the equivalent to your original hashy code\nif (!result.containsKey(someID)) {\n hashy = new HashSet<>();\n result.put(someID, hashy);\n} else {\n hashy = result.get(someID);\n}\n\nis merely\nhashy = result.computeIfAbsent(someID, _ -> new HashSet<...>());\n\nNote that, depending on your use case, Java may or may not be able to infer the generic type of the HashSet, so you may have to specify the type in the angled brackets explicitly.\n"
] |
[
6
] |
[] |
[] |
[
"java"
] |
stackoverflow_0074659199_java.txt
|
Q:
problem for change batch size in my model
When I train my model (a transformer whose input is features extracted from a T5 model and ViT),
I have a problem setting batch_size to more than 2.
The number of training images is 25000.
GPU is GTX 3090(24 gpu ram).
24 core multithreading CPU.
number of total parameter =363M
seq_len=512
max-step=100000/2
iter=100000
img:torch.Size([3, 384, 500])
tokens:torch.Size([512])
I want to increase batch_size from 2 to 3, 4, ... but I can't, and I see an error.
For example, when I set batch_size=4, I get this error:
CUDA out of memory. Tried to allocat....
(I attach an image of the error.)
But when I decrease it to 2, I don't get this error.
What am I doing wrong?
[screenshot of the CUDA out-of-memory error]
A:
The problem is as stated. You are running out of GPU memory. If you want to increase batch size and you are using pytorch lightning, try to use half-precision to consume less memory. https://pytorch-lightning.readthedocs.io/en/latest/common/precision_basic.html
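A minimal sketch of what that looks like, assuming PyTorch Lightning's Trainer API (the numbers are illustrative): half precision roughly halves activation memory, and gradient accumulation simulates a larger effective batch without holding it all in memory at once:
import pytorch_lightning as pl

trainer = pl.Trainer(
    precision=16,               # mixed/half precision to reduce GPU memory
    accumulate_grad_batches=2,  # effective batch of 4 while feeding 2 per step
)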
|
problem for change batch size in my model
|
When I train my model (a transformer whose input is features extracted from a T5 model and ViT),
I have a problem setting batch_size to more than 2.
The number of training images is 25000.
GPU is GTX 3090(24 gpu ram).
24 core multithreading CPU.
number of total parameter =363M
seq_len=512
max-step=100000/2
iter=100000
img:torch.Size([3, 384, 500])
tokens:torch.Size([512])
I want to increase batch_size from 2 to 3, 4, ... but I can't, and I see an error.
For example, when I set batch_size=4, I get this error:
CUDA out of memory. Tried to allocat....
(I attach an image of the error.)
But when I decrease it to 2, I don't get this error.
What am I doing wrong?
[screenshot of the CUDA out-of-memory error]
|
[
"The problem is as stated. You are running out of GPU memory. If you want to increase batch size and you are using pytorch lightning, try to use half-precision to consume less memory. https://pytorch-lightning.readthedocs.io/en/latest/common/precision_basic.html\n"
] |
[
0
] |
[] |
[] |
[
"batchsize",
"hyperparameters",
"out_of_memory",
"pytorch_lightning",
"transformer_model"
] |
stackoverflow_0074652127_batchsize_hyperparameters_out_of_memory_pytorch_lightning_transformer_model.txt
|
Q:
Excel VBA - Select Case with comparison
I've got simple Select Case statement which doesn't work as intended.
I want to do different actions depending on my value being greater or equal to 1 or being less than 1.
When myValue is declared as Double, the procedure doesn't find a match (nothing is returned); when myValue is declared as Long, it returns the first case "more than 1" - which is incorrect, as myValue is set to 0.5 ...
Can somebody tell me what I'm missing?
Public Sub test()
Dim myValue As Double
myValue = 0.5
Select Case myValue
Case myValue >= 1
Debug.Print "more than 1"
Case myValue < 1
Debug.Print "less than 1"
End Select
End Sub
A:
Use the correct syntax, Case Is >= 1. In your version, Select Case compares myValue for equality against the result of the Boolean expression myValue >= 1 (True = -1, False = 0): a Double of 0.5 matches neither, and a Long (0.5 rounds to 0) equals the False (0) of the first Case, which is why "more than 1" was printed:
Public Sub test()
Dim myValue As Double
myValue = 0.5
Select Case myValue
Case Is >= 1
Debug.Print "more than 1"
Case Is < 1
Debug.Print "less than 1"
End Select
End Sub
|
Excel VBA - Select Case with comparison
|
I've got simple Select Case statement which doesn't work as intended.
I want to do different actions depending on my value being greater or equal to 1 or being less than 1.
When myValue is declared as Double, the procedure doesn't find a match (nothing is returned); when myValue is declared as Long, it returns the first case "more than 1" - which is incorrect, as myValue is set to 0.5 ...
Can somebody tell me what I'm missing?
Public Sub test()
Dim myValue As Double
myValue = 0.5
Select Case myValue
Case myValue >= 1
Debug.Print "more than 1"
Case myValue < 1
Debug.Print "less than 1"
End Select
End Sub
|
[
"Use the correct syntax:\nPublic Sub test()\n\n Dim myValue As Double\n \n myValue = 0.5\n \n Select Case myValue\n Case Is >= 1\n Debug.Print \"more than 1\"\n Case Is < 1\n Debug.Print \"less than 1\"\n End Select\n\nEnd Sub\n\n"
] |
[
1
] |
[] |
[] |
[
"excel",
"vba"
] |
stackoverflow_0074659142_excel_vba.txt
|
Q:
Javascript (vanilla) drag and drop does not work the second time an element is dragged and dropped
I am trying to implement a drag and drop functionality using vanilla Javascript on my web app, where element gets moved to a new position within a div, once it's dragged and dropped.
But I am having an issue where I can drop the element fine the first time, but I can no longer do it the second time onwards.
After debugging, I have noticed that the first time I drop an element, it does not have any inline style (see Appendix A). But when I try to do it the second time, it now has an inline style (see Appendix B) and for some reason, I cannot change the values of it. That was also the case after I manually added an inline style to my draggable element - I could not drop the item even the first time when I did it.
I am completely out of ideas as to what I could be doing wrong and no similar questions yielded a solution.
Thank you very much in advance for your time.
Code (Unnecessary parts omitted)
const list = document.querySelector("#list");
const rect = list.getBoundingClientRect();
let oldLeft, oldTop, mouseXStart, mouseYStart;
function dragStart(event) {
event.dataTransfer.setData("plain/text", event.target.id);
const item = document.querySelector("#" + event.target.id);
mouseXStart = event.clientX - rect.left;
mouseYStart = event.clientY - rect.top;
oldLeft = item.style.left;
oldTop = item.style.top;
console.log(item);
}
function dragOver(event) {
event.preventDefault();
event.dataTransfer.dropEffect = "move";
}
function dropItem(event) {
event.preventDefault();
const mouseXEnd = event.clientX - rect.left;
const mouseYEnd = event.clientY - rect.top;
//Calculate by how much mouse has been moved
const newLeft = mouseXEnd - mouseXStart;
const newTop = mouseYEnd - mouseYStart;
const item = document.querySelector('#' + event.dataTransfer.getData("plain/text"));
item.style.left = oldLeft + newLeft + "px";
item.style.top = oldTop + newTop + "px";
}
#list {
position: relative;
top: 60px;
height: 600px;
width: 100%;
border: 2px solid rgb(107, 14, 14);
display: block;
}
#list>div {
position: absolute;
height: 200px;
width: 200px;
background-color: blue;
margin: 10px;
overflow: hidden;
text-align: left;
}
<div id="list" ondrop="dropItem(event)" ondragover="dragOver(event)">
<div id="test" draggable="true" ondragstart="dragStart(event)">
<button type="button"></button>
<div>
<p>Test</p>
</div>
</div>
</div>
Appendix A
console.log() output on the first dragStart() call:
<div id="test" draggable="true" ondragstart="dragStart(event)">
Appendix B
console.log() output on the second dragStart() call:
<div id="test" draggable="true" ondragstart="dragStart(event)" style="left: 853px; top: 147px;">
A:
Problem
This code
oldLeft + newLeft + "px"
evaluated to something like
123px40px
because
oldLeft = item.style.left
returned string with px at the end
Solution
Parse the value to float
oldLeft = item.style.left ? parseFloat(item.style.left) : 0;
oldTop = item.style.top ? parseFloat(item.style.top) : 0;
const list = document.querySelector("#list");
const rect = list.getBoundingClientRect();
let oldLeft, oldTop, mouseXStart, mouseYStart;
function dragStart(event) {
event.dataTransfer.setData("plain/text", event.target.id);
const item = document.querySelector("#" + event.target.id);
mouseXStart = event.clientX - rect.left;
mouseYStart = event.clientY - rect.top;
oldLeft = item.style.left ? parseFloat(item.style.left) : 0;
oldTop = item.style.top ? parseFloat(item.style.top) : 0;
}
function dragOver(event) {
event.preventDefault();
event.dataTransfer.dropEffect = "move";
}
function dropItem(event) {
event.preventDefault();
const mouseXEnd = event.clientX - rect.left;
const mouseYEnd = event.clientY - rect.top;
//Calculate by how much mouse has been moved
const newLeft = mouseXEnd - mouseXStart;
const newTop = mouseYEnd - mouseYStart;
const item = document.querySelector('#' + event.dataTransfer.getData("plain/text"));
item.style.left = oldLeft + newLeft + "px";
item.style.top = oldTop + newTop + "px";
}
#list {
position: relative;
top: 60px;
height: 600px;
width: 100%;
border: 2px solid rgb(107, 14, 14);
display: block;
}
#list>div {
position: absolute;
height: 200px;
width: 200px;
background-color: blue;
margin: 10px;
overflow: hidden;
text-align: left;
}
<div id="list" ondrop="dropItem(event)" ondragover="dragOver(event)">
<div id="test" draggable="true" ondragstart="dragStart(event)">
<button type="button"></button>
<div>
<p>Test</p>
</div>
</div>
</div>
|
Javascript (vanilla) drag and drop does not work the second time an element is dragged and dropped
|
I am trying to implement a drag and drop functionality using vanilla Javascript on my web app, where element gets moved to a new position within a div, once it's dragged and dropped.
But I am having an issue where I can drop the element fine the first time, but I can no longer do it the second time onwards.
After debugging, I have noticed that the first time I drop an element, it does not have any inline style (see Appendix A). But when I try to do it the second time, it now has an inline style (see Appendix B) and for some reason, I cannot change the values of it. That was also the case after I manually added an inline style to my draggable element - I could not drop the item even the first time when I did it.
I am completely out of ideas as to what I could be doing wrong and no similar questions yielded a solution.
Thank you very much in advance for your time.
Code (Unnecessary parts omitted)
const list = document.querySelector("#list");
const rect = list.getBoundingClientRect();
let oldLeft, oldTop, mouseXStart, mouseYStart;
function dragStart(event) {
event.dataTransfer.setData("plain/text", event.target.id);
const item = document.querySelector("#" + event.target.id);
mouseXStart = event.clientX - rect.left;
mouseYStart = event.clientY - rect.top;
oldLeft = item.style.left;
oldTop = item.style.top;
console.log(item);
}
function dragOver(event) {
event.preventDefault();
event.dataTransfer.dropEffect = "move";
}
function dropItem(event) {
event.preventDefault();
const mouseXEnd = event.clientX - rect.left;
const mouseYEnd = event.clientY - rect.top;
//Calculate by how much mouse has been moved
const newLeft = mouseXEnd - mouseXStart;
const newTop = mouseYEnd - mouseYStart;
const item = document.querySelector('#' + event.dataTransfer.getData("plain/text"));
item.style.left = oldLeft + newLeft + "px";
item.style.top = oldTop + newTop + "px";
}
#list {
position: relative;
top: 60px;
height: 600px;
width: 100%;
border: 2px solid rgb(107, 14, 14);
display: block;
}
#list>div {
position: absolute;
height: 200px;
width: 200px;
background-color: blue;
margin: 10px;
overflow: hidden;
text-align: left;
}
<div id="list" ondrop="dropItem(event)" ondragover="dragOver(event)">
<div id="test" draggable="true" ondragstart="dragStart(event)">
<button type="button"></button>
<div>
<p>Test</p>
</div>
</div>
</div>
Appendix A
console.log() output on the first dragStart() call:
<div id="test" draggable="true" ondragstart="dragStart(event)">
Appendix B
console.log() output on the second dragStart() call:
<div id="test" draggable="true" ondragstart="dragStart(event)" style="left: 853px; top: 147px;">
|
[
"Problem\nThis code\noldLeft + newLeft + \"px\"\n\nevaluated to something like\n123px40px\n\nbecause\noldLeft = item.style.left\n\nreturned string with px at the end\nSolution\nParse the value to float\noldLeft = item.style.left ? parseFloat(item.style.left) : 0;\noldTop = item.style.top ? parseFloat(item.style.top) : 0;\n\n\n\nconst list = document.querySelector(\"#list\");\nconst rect = list.getBoundingClientRect();\nlet oldLeft, oldTop, mouseXStart, mouseYStart;\n\nfunction dragStart(event) {\n event.dataTransfer.setData(\"plain/text\", event.target.id);\n const item = document.querySelector(\"#\" + event.target.id);\n\n mouseXStart = event.clientX - rect.left;\n mouseYStart = event.clientY - rect.top;\n oldLeft = item.style.left ? parseFloat(item.style.left) : 0;\n oldTop = item.style.top ? parseFloat(item.style.top) : 0;\n}\n\nfunction dragOver(event) {\n event.preventDefault();\n event.dataTransfer.dropEffect = \"move\";\n}\n\nfunction dropItem(event) {\n event.preventDefault();\n const mouseXEnd = event.clientX - rect.left;\n const mouseYEnd = event.clientY - rect.top;\n //Calculate by how much mouse has been moved\n const newLeft = mouseXEnd - mouseXStart;\n const newTop = mouseYEnd - mouseYStart;\n const item = document.querySelector('#' + event.dataTransfer.getData(\"plain/text\"));\n\n item.style.left = oldLeft + newLeft + \"px\";\n item.style.top = oldTop + newTop + \"px\";\n}\n#list {\n position: relative;\n top: 60px;\n height: 600px;\n width: 100%;\n border: 2px solid rgb(107, 14, 14);\n display: block;\n}\n\n#list>div {\n position: absolute;\n height: 200px;\n width: 200px;\n background-color: blue;\n margin: 10px;\n overflow: hidden;\n text-align: left;\n}\n<div id=\"list\" ondrop=\"dropItem(event)\" ondragover=\"dragOver(event)\">\n <div id=\"test\" draggable=\"true\" ondragstart=\"dragStart(event)\">\n <button type=\"button\"></button>\n <div>\n <p>Test</p>\n </div>\n </div>\n</div>\n\n\n\n"
] |
[
2
] |
[] |
[] |
[
"css",
"drag_and_drop",
"html",
"javascript"
] |
stackoverflow_0074659116_css_drag_and_drop_html_javascript.txt
|
Q:
Install packages on EMR via bootstrap actions not working in Jupyter notebook
I have an EMR cluster using EMR-6.3.1.
I am using the Python3 Kernel.
I have a very simple bootstrap script in S3:
#!/bin/bash
sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0
These are the bootstrap logs
+ sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0
WARNING: Running pip install with root privileges is generally not a good idea. Try `python3 -m pip install --user` instead.
WARNING: The scripts cygdb, cython and cythonize are installed in '/usr/local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts f2py, f2py3 and f2py3.7 are installed in '/usr/local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script plasma_store is installed in '/usr/local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
and
Collecting Cython==0.29.4
Downloading Cython-0.29.4-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB)
Requirement already satisfied: boto==2.49.0 in /usr/local/lib/python3.7/site-packages (2.49.0)
Collecting boto3==1.18.50
Downloading boto3-1.18.50-py3-none-any.whl (131 kB)
Collecting numpy==1.19.5
Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB)
Collecting pandas==1.3.2
Downloading pandas-1.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.3 MB)
Collecting pyarrow==5.0.0
Downloading pyarrow-5.0.0-cp37-cp37m-manylinux2014_x86_64.whl (23.6 MB)
Collecting s3transfer<0.6.0,>=0.5.0
Downloading s3transfer-0.5.2-py3-none-any.whl (79 kB)
Collecting botocore<1.22.0,>=1.21.50
Downloading botocore-1.21.65-py3-none-any.whl (8.0 MB)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.7/site-packages (from boto3==1.18.50) (0.10.0)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas==1.3.2) (2021.1)
Collecting python-dateutil>=2.7.3
Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting urllib3<1.27,>=1.25.4
Downloading urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas==1.3.2) (1.13.0)
Installing collected packages: Cython, python-dateutil, urllib3, botocore, s3transfer, boto3, numpy, pandas, pyarrow
Successfully installed Cython-0.29.4 boto3-1.18.50 botocore-1.21.65 numpy-1.19.5 pandas-1.3.2 pyarrow-5.0.0 python-dateutil-2.8.2 s3transfer-0.5.2 urllib3-1.26.13
From a notebook, importing pandas and seeing the wrong version - 1.2.3.
Further, I see pyarrow fails to import.
I've printed the import path of pandas, which python version is run, and sys.path.
import os
import pandas
import sys
print(sys.path)
print(pandas.__version__)
print(os.path.abspath(pandas.__file__))
print(os.popen('echo $PYTHONPATH').read())
print(os.popen('which python3').read())
# sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import
import pyarrow
['/', '/emr/notebook-env/lib/python37.zip', '/emr/notebook-env/lib/python3.7', '/emr/notebook-env/lib/python3.7/lib-dynload', '', '/emr/notebook-env/lib/python3.7/site-packages', '/emr/notebook-env/lib/python3.7/site-packages/awseditorssparkmonitoringwidget-1.0-py3.7.egg', '/emr/notebook-env/lib/python3.7/site-packages/IPython/extensions', '/home/emr-notebook/.ipython']
1.2.3
/emr/notebook-env/lib/python3.7/site-packages/pandas/__init__.py
/usr/bin/python3
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-aea9862499ce> in <module>
9
10 # sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import
---> 11 import pyarrow
ModuleNotFoundError: No module named 'pyarrow'
I found I can import pyarrow if I add /usr/local/lib64/python3.7/site-packages to sys.path. This seems like an improvement, but the wrong version of pandas is still imported.
I've tried:
SSH'ing into the master node and mucking with the configuration.
sudo python3 -m pip install --user ...
export PYTHONPATH=/usr/local/lib64/python3.7/site-packages && sudo python3 -m pip install ...
sudo pip3 install --upgrade setuptools && sudo python3 -m pip install ...
Using a pyspark kernel and running sc.install_pypi_package("pandas==1.3.2")
Any help is appreciated. Thank you.
A:
It looks like you are running the pip install command with sudo privileges, which is generally not recommended. The warning message is suggesting that you try running the command with the --user flag instead, which will install the packages locally for the current user instead of system-wide with root privileges.
You can try running the following command instead:
python3 -m pip install --user Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0
This should install the packages locally for the current user, without requiring sudo privileges. You may also need to add the ~/.local/bin directory to your PATH environment variable so that you can run the installed packages from the command line.
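If the packages must be installed from the bootstrap script itself, a minimal sketch of that variant (hedged: EMR bootstrap actions run as the hadoop user, so --user installs land in that user's home, and the PATH line below is my own addition, not from the original answer):
#!/bin/bash
python3 -m pip install --user Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0
# expose user-installed scripts such as cython and f2py (assumption, not from the log above)
echo 'export PATH=$PATH:$HOME/.local/bin' >> ~/.bashrc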
|
Install packages on EMR via bootstrap actions not working in Jupyter notebook
|
I have an EMR cluster using EMR-6.3.1.
I am using the Python3 Kernel.
I have a very simple bootstrap script in S3:
#!/bin/bash
sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0
These are the bootstrap logs
+ sudo python3 -m pip install Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0
WARNING: Running pip install with root privileges is generally not a good idea. Try `python3 -m pip install --user` instead.
WARNING: The scripts cygdb, cython and cythonize are installed in '/usr/local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The scripts f2py, f2py3 and f2py3.7 are installed in '/usr/local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
WARNING: The script plasma_store is installed in '/usr/local/bin' which is not on PATH.
Consider adding this directory to PATH or, if you prefer to suppress this warning, use --no-warn-script-location.
and
Collecting Cython==0.29.4
Downloading Cython-0.29.4-cp37-cp37m-manylinux1_x86_64.whl (2.1 MB)
Requirement already satisfied: boto==2.49.0 in /usr/local/lib/python3.7/site-packages (2.49.0)
Collecting boto3==1.18.50
Downloading boto3-1.18.50-py3-none-any.whl (131 kB)
Collecting numpy==1.19.5
Downloading numpy-1.19.5-cp37-cp37m-manylinux2010_x86_64.whl (14.8 MB)
Collecting pandas==1.3.2
Downloading pandas-1.3.2-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (11.3 MB)
Collecting pyarrow==5.0.0
Downloading pyarrow-5.0.0-cp37-cp37m-manylinux2014_x86_64.whl (23.6 MB)
Collecting s3transfer<0.6.0,>=0.5.0
Downloading s3transfer-0.5.2-py3-none-any.whl (79 kB)
Collecting botocore<1.22.0,>=1.21.50
Downloading botocore-1.21.65-py3-none-any.whl (8.0 MB)
Requirement already satisfied: jmespath<1.0.0,>=0.7.1 in /usr/local/lib/python3.7/site-packages (from boto3==1.18.50) (0.10.0)
Requirement already satisfied: pytz>=2017.3 in /usr/local/lib/python3.7/site-packages (from pandas==1.3.2) (2021.1)
Collecting python-dateutil>=2.7.3
Downloading python_dateutil-2.8.2-py2.py3-none-any.whl (247 kB)
Collecting urllib3<1.27,>=1.25.4
Downloading urllib3-1.26.13-py2.py3-none-any.whl (140 kB)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil>=2.7.3->pandas==1.3.2) (1.13.0)
Installing collected packages: Cython, python-dateutil, urllib3, botocore, s3transfer, boto3, numpy, pandas, pyarrow
Successfully installed Cython-0.29.4 boto3-1.18.50 botocore-1.21.65 numpy-1.19.5 pandas-1.3.2 pyarrow-5.0.0 python-dateutil-2.8.2 s3transfer-0.5.2 urllib3-1.26.13
From a notebook, importing pandas and seeing the wrong version - 1.2.3.
Further, I see pyarrow fails to import.
I've printed the import path of pandas, which python version is run, and sys.path.
import os
import pandas
import sys
print(sys.path)
print(pandas.__version__)
print(os.path.abspath(pandas.__file__))
print(os.popen('echo $PYTHONPATH').read())
print(os.popen('which python3').read())
# sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import
import pyarrow
['/', '/emr/notebook-env/lib/python37.zip', '/emr/notebook-env/lib/python3.7', '/emr/notebook-env/lib/python3.7/lib-dynload', '', '/emr/notebook-env/lib/python3.7/site-packages', '/emr/notebook-env/lib/python3.7/site-packages/awseditorssparkmonitoringwidget-1.0-py3.7.egg', '/emr/notebook-env/lib/python3.7/site-packages/IPython/extensions', '/home/emr-notebook/.ipython']
1.2.3
/emr/notebook-env/lib/python3.7/site-packages/pandas/__init__.py
/usr/bin/python3
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-aea9862499ce> in <module>
9
10 # sys.path.append('/usr/local/lib64/python3.7/site-packages') # if I add this, pyarrow can import
---> 11 import pyarrow
ModuleNotFoundError: No module named 'pyarrow'
I found I can import pyarrow if I add /usr/local/lib64/python3.7/site-packages to sys.path. This seems like an improvement, but the wrong version of pandas is still imported.
I've tried:
SSH'ing into the master node and mucking with the configuration.
sudo python3 -m pip install --user ...
export PYTHONPATH=/usr/local/lib64/python3.7/site-packages && sudo python3 -m pip install ...
sudo pip3 install --upgrade setuptools && sudo python3 -m pip install ...
Using a pyspark kernel and running sc.install_pypi_package("pandas==1.3.2")
Any help is appreciated. Thank you.
|
[
"It looks like you are running the pip install command with sudo privileges, which is generally not recommended. The warning message is suggesting that you try running the command with the --user flag instead, which will install the packages locally for the current user instead of system-wide with root privileges.\nYou can try running the following command instead:\npython3 -m pip install --user Cython==0.29.4 boto==2.49.0 boto3==1.18.50 numpy==1.19.5 pandas==1.3.2 pyarrow==5.0.0\n\nThis should install the packages locally for the current user, without requiring sudo privileges. You may also need to add the ~/.local/bin directory to your PATH environment variable so that you can run the installed packages from the command line.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_emr",
"python"
] |
stackoverflow_0074659221_amazon_emr_python.txt
|
Q:
What does IDLE stand for?
I have been using IDLE to solve Python questions.
It's a very simple question.
idle
A:
IDLE stands for Integrated Development and Learning Environment. It is a built-in development environment for writing and running Python code.
|
What does IDLE stand for?
|
I have been using IDLE to solve Python questions.
It's a very simple question.
idle
|
[
"IDLE stands for Integrated Development and Learning Environment. It is a built-in development environment for writing and running Python code.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_idle"
] |
stackoverflow_0074659121_python_python_idle.txt
|
Q:
Spreading data with a shift to the top
I have data from a tennis tournament. The column name is the name of the player, game is the game's number (it is not 1, 2, 3 because there is a second pool), and rank is the rank of the player after the game.
The structure of the data is as follow
structure(list(player = c("Bob", "Luc", "Bob", "Carl", "Alex",
"John", "Alex", "Mike", "Carl", "Alex"), game = c(1, 1, 3, 3,
4, 4, 6, 6, 8, 8), rank = c(100, 110, 110, 120, 100, 90, 110,
80, 110, 120)), class = "data.frame", row.names = c(NA, -10L))
Using
data %>% pivot_wider(names_from = player, values_from = rank)
I get the following result :
# A tibble: 5 x 7
game Bob Luc Carl Alex John Mike
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 100 110 NA NA NA NA
2 3 110 NA 120 NA NA NA
3 4 NA NA NA 100 90 NA
4 6 NA NA NA 110 NA 80
5 8 NA NA 110 120 NA NA
But I would like something looking like that :
# A tibble: 5 x 7
game Bob Luc Carl Alex John Mike
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 100 110 120 100 90 80
2 2 110 NA 110 110 NA NA
3 3 NA NA NA 120 NA NA
4 4 NA NA NA NA NA NA
5 5 NA NA NA NA NA NA
I want the column game to run from 1 to n, with i corresponding to the i-th game for each of the players, and the other columns representing all the players. For example, Alex played 3 times, so the first 3 rows of his column should be filled as above.
Any help would be appreciated
A:
data %>% group_by(player) %>% mutate(game=rank(game)) %>% pivot_wider(names_from = player, values_from = rank)
will return:
# A tibble: 3 × 7
game Bob Luc Carl Alex John Mike
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 100 110 120 100 90 80
2 2 110 NA 110 110 NA NA
3 3 NA NA NA 120 NA NA
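If the padded rows 4 and 5 from the desired output are needed too, a sketch assuming tidyr's complete() is acceptable (note the column order may come out alphabetical rather than by first appearance):
data %>%
  group_by(player) %>%
  mutate(game = rank(game)) %>%
  ungroup() %>%
  complete(player, game = 1:5) %>%   # pad every player out to games 1..5 with NA ranks
  pivot_wider(names_from = player, values_from = rank)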
|
Spreading data with a shift to the top
|
I have data from a tennis tournament. The column name is the name of the player, game is the game's number (it is not 1, 2, 3 because there is a second pool), and rank is the rank of the player after the game.
The structure of the data is as follow
structure(list(player = c("Bob", "Luc", "Bob", "Carl", "Alex",
"John", "Alex", "Mike", "Carl", "Alex"), game = c(1, 1, 3, 3,
4, 4, 6, 6, 8, 8), rank = c(100, 110, 110, 120, 100, 90, 110,
80, 110, 120)), class = "data.frame", row.names = c(NA, -10L))
Using
data %>% pivot_wider(names_from = player, values_from = rank)
I get the following result :
# A tibble: 5 x 7
game Bob Luc Carl Alex John Mike
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 100 110 NA NA NA NA
2 3 110 NA 120 NA NA NA
3 4 NA NA NA 100 90 NA
4 6 NA NA NA 110 NA 80
5 8 NA NA 110 120 NA NA
But I would like something looking like that :
# A tibble: 5 x 7
game Bob Luc Carl Alex John Mike
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 100 110 120 100 90 80
2 2 110 NA 110 110 NA NA
3 3 NA NA NA 120 NA NA
4 4 NA NA NA NA NA NA
5 5 NA NA NA NA NA NA
I want the column game to run from 1 to n, with i corresponding to the i-th game for each of the players, and the other columns representing all the players. For example, Alex played 3 times, so the first 3 rows of his column should be filled as above.
Any help would be appreciated
|
[
"data %>% group_by(player) %>% mutate(game=rank(game)) %>% pivot_wider(names_from = player, values_from = rank) \n\nwill return:\n# A tibble: 3 × 7\n game Bob Luc Carl Alex John Mike\n <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n1 1 100 110 120 100 90 80\n2 2 110 NA 110 110 NA NA\n3 3 NA NA NA 120 NA NA\n\n"
] |
[
2
] |
[] |
[] |
[
"dplyr",
"pivot_table",
"r",
"tidyr",
"tidyverse"
] |
stackoverflow_0074659208_dplyr_pivot_table_r_tidyr_tidyverse.txt
|
Q:
What is the CMake equivalent of make -k?
Is there an equivalent of make -k for CMake, in other words, so that CMake keeps compiling the other files even if an error occurs in one file?
A:
CMake does not steer the compiling itself; it delegates it to some build toolset like Make, Ninja, msbuild etc. When that tool is invoked, you have to add the arguments to the call, for example, add -k to your make call and -k 0 to your Ninja call.
If you build via the CMake command cmake --build <options>, you can add the build-toolset specific argument after a -- like in
cmake --build <options> -- -k
for make.
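For completeness, the Ninja counterpart would be the following (ninja treats -k 0 as "keep going, never stop"):
cmake --build <options> -- -k 0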
A:
So usr1234567's answer is what I was looking for. When using it in Qt Creator, the place to set this is in a collapsed dialog box, so it is not immediately obvious. You go to 'Build Settings', then press 'Details' and can insert the -k 0 on the tool settings line.
|
What is the CMake equivalent of make -k?
|
Is there an equivalent of make -k for CMake, in other words, so that CMake keeps compiling the other files even if an error occurs in one file?
|
[
    "CMake does not steer the compiling itself; it delegates it to some build toolset like Make, Ninja, msbuild etc. When that tool is invoked, you have to add the arguments to the call, for example, add -k to your make call and -k 0 to your Ninja call.\nIf you build via the CMake command cmake --build <options>, you can add the build-toolset specific argument after a -- like in\ncmake --build <options> -- -k\n\nfor make.\n",
"So usr1234567's answer is what I was looking for. In using it for QtCreator, the place to set this is in a collapsed dialog box, so it is not immediately obvious. You go to 'build settings', then press 'details' and can insert the -k 0 on the tool settings line.\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"cmake",
"makefile"
] |
stackoverflow_0069792546_cmake_makefile.txt
|
Q:
Why does the batch file I created to stop and start a Windows service not work correctly
My batch file that I created to stop and then start windows service won't run and throws back an error
Each time I run the file as Admin (I am the only user that's on this machine and the account is an admin account).
I have tested my batch file on its own, not in Task Scheduler, and it works perfectly fine when I run the batch file as admin. However, it falls over when I try to set up a daily task in Task Scheduler.
I have a simple batch file that stops and starts a service. for reference this is what it looks like:
Net Stop "StorSvc"
Net Start "StorSvc"
I have run this as admin and it worked fine. I then created a task to do this daily at a certain time. I placed the file on the C: drive and attached the file to my Task Scheduler task.
On the security options I have ticked
"Run whether user is logged on or not"
"Run with the highest privileges"
I have changed the "When running the task use the following user account" to my account, Systems and other admin options that show up. I even selected "System"
When I click ok it prompts me to sign in to the admin account. when I do this it says
"An error has occurred for task StorSvc" Error message. One or more of the specified arguments are not valid"
All of the last run results are as listed below:
0x800710E0
0x41303
0x2
A:
I have had better success with sc.exe for controlling services via batch file.
An example I found in my archive is:
sc.exe \\!SysName! config RemoteRegistry start= demand
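For the concrete stop/start case from the question, a sketch with sc.exe (service name StorSvc taken from the question; the timeout between stop and start is my own assumption, to give the service time to shut down):
sc.exe stop "StorSvc"
timeout /t 5 /nobreak
sc.exe start "StorSvc"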
|
Why does the batch file I created to stop and start a Windows service not work correctly
|
My batch file that I created to stop and then start windows service won't run and throws back an error
Each time I run the file as Admin (I am the only user that's on this machine and the account is an admin account).
I have tested my batch file on its own, not in Task Scheduler, and it works perfectly fine when I run the batch file as admin. However, it falls over when I try to set up a daily task in Task Scheduler.
I have a simple batch file that stops and starts a service. for reference this is what it looks like:
Net Stop "StorSvc"
Net Start "StorSvc"
I have run this as admin and it worked fine. I then created a task to do this daily at a certain time. I placed the file on the C: drive and attached the file to my Task Scheduler task.
On the security options I have ticked
"Run whether user is logged on or not"
"Run with the highest privileges"
I have changed the "When running the task use the following user account" to my account, Systems and other admin options that show up. I even selected "System"
When I click ok it prompts me to sign in to the admin account. when I do this it says
"An error has occurred for task StorSvc" Error message. One or more of the specified arguments are not valid"
All of the last run results are as listed below:
0x800710E0
0x41303
0x2
|
[
"I have had better success with sc.exe for controlling services via batch file.\nAn example I found in my archive is:\nsc.exe \\\\!SysName! config RemoteRegistry start= demand\n\n"
] |
[
0
] |
[] |
[] |
[
"admin",
"batch_file",
"scheduled_tasks",
"windows_services",
"windows_task_scheduler"
] |
stackoverflow_0074642088_admin_batch_file_scheduled_tasks_windows_services_windows_task_scheduler.txt
|
Q:
JavaScript/CSS: Detect when an element has been resized by the user
Say one has a div which is vertically resizable like a textarea as below. When the user resizes the div, I would like to run a javascript function, resizeHandler below.
It seems like the resize event is only for the document/window. And a ResizeObserver fires for all events; say, when the document is loading, it will fire resize observer events. So then I set a timeout to apply the ResizeObserver after the page load, but the ResizeObserver seems to remember historic resizes and fires the previous resize events.
Is there a clean way to run resizeHandler() when the user resizes the div, and not for pageloading layout shifts?
function resizeHandler() {
console.log('the div was resized')
}
div {
resize: vertical;
height: 64px;
overflow-y: auto;
background-color:gray;
}
<div id="DIV" style="max-height:180px">This is a resizable div</div>
Note: I can't remove the style attribute of the div which sets the max-height.
A:
The simplest way would be to add an event listener on the resizeable div and then use a ResizeObserver.
const resizeable_div = document.querySelector(".resizeable");
let observing = false;
resizeable_div.addEventListener("mouseup", () => {
  if (observing) return; // attach only one observer, not one per mouseup
  observing = true;
  function outputsize() {
    console.log("resized");
  }
  new ResizeObserver(outputsize).observe(resizeable_div);
});
You can use mousedown or mouseup. This ignores page reloads, as you wanted, and only triggers once the user does something with the div.
Hope this helps you.
A:
The main reason was that I wished to remove the maxHeight style property so the element can be manually resized:
DIV.get('LIST').addEventListener('mousemove', this.mouseMoveHandle.bind(this));
mouseMoveHandle(event) {
DIV.resized = DIV.style.getPropertyValue('height') !== ''
if (DIV.resized) DIV.style.maxHeight = 'unset';
}
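Another pattern worth noting (my own addition, not from the answers above): a ResizeObserver always delivers one initial callback when observe() is called, so skipping that first call filters out the page-load firing:
let first = true;
new ResizeObserver(() => {
  if (first) { first = false; return; } // ignore the initial observe() callback
  console.log('the div was resized');
}).observe(document.querySelector('#DIV'));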
|
JavaScript/CSS: Detect when an element has been resized by the user
|
Say one has a div which is vertically resizable like a textarea as below. When the user resizes the div, I would like to run a javascript function, resizeHandler below.
It seems like the resize event is only for the document/window. And a ResizeObserver fires for all events; say, when the document is loading, it will fire resize observer events. So then I set a timeout to apply the ResizeObserver after the page load, but the ResizeObserver seems to remember historic resizes and fires the previous resize events.
Is there a clean way to run resizeHandler() when the user resizes the div, and not for pageloading layout shifts?
function resizeHandler() {
console.log('the div was resized')
}
div {
resize: vertical;
height: 64px;
overflow-y: auto;
background-color:gray;
}
<div id="DIV" style="max-height:180px">This is a resizable div</div>
Note: I can't remove the style attribute of the div which sets the max-height.
|
[
"The simplest way would be to add an eventlitener on the resizeable div. And then use an resizeObserver.\nconst resizeable_div = document.querySelector(\".resizeable\");\n\n\nresizeable_div.addEventListener(\"mouseup\", () => {\n function outputsize() {\n console.log(\"resized\");\n }\n\n new ResizeObserver(outputsize).observe(resizeable_div);\n});\n\nmousedown or mouseup. But this will ignore pagereloads, what you've wanted and only triggers if the user do something with the div.\nHope this helps you.\n",
"The main reason was I wish to remove the maxHeight style property so it can be manually resized:\n DIV.get('LIST').addEventListener('mousemove', this.mouseMoveHandle.bind(this));\n\n mouseMoveHandle(event) {\n DIV.resized = DIV.style.getPropertyValue('height') !== '' \n if (DIV.resized) DIV.style.maxHeight = 'unset';\n }\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"css",
"javascript"
] |
stackoverflow_0074658707_css_javascript.txt
|
Q:
How to convert collection of entities relation into collection of primitive ids in JPA/Hibernate?
I have two entities connected with many-to-many relationship. For example:
@Entity
public class Account {
@Id
private Long id;
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(
name = "account_games",
joinColumns = {@JoinColumn(name="account_id")},
inverseJoinColumns = {@JoinColumn(name="game_id")}
)
private Set<Game> games = new HashSet<>();
}
@Entity
public class Game {
@Id
private Long id;
@ManyToMany(mappedBy = "games", fetch = FetchType.LAZY)
List<Account> accounts = new ArrayList<>();
}
So, there is a table account_games(account_id, game_id) in MySQL describing the entities' many-to-many relation.
I don't want to have the Game entity anymore. Is there a way to get rid of Game and keep only the gameId relation? So, I'd like to have code something like this:
@Entity
public class Account {
@Id
private Long id;
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(
name = "account_games",
joinColumns = {@JoinColumn(name="account_id")},
inverseJoinColumns = {@JoinColumn(name="game_id")}
)
private Set<Long> gameIds = new HashSet<>();
}
without making changes in database.
I've tried different configurations of javax.persistence annotations, but none worked
A:
You can use @ElementCollection and @CollectionTable to achieve that.
@Entity
public class Account {
@Id
private Long id;
@ElementCollection(fetch = FetchType.LAZY)
@CollectionTable(name = "account_games", joinColumns = @JoinColumn(name = "account_id"))
@Column(name = "game_id", nullable = false)
private Set<Long> gameIds = new HashSet<>();
}
You may have to change the query on how to filter data using gameId. Element Collection Query
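A hedged sketch of how such a filter could look in JPQL (names taken from the mapping above; em is an assumed EntityManager; MEMBER OF works against element collections of basic types):
// find accounts whose element collection contains a given game id
TypedQuery<Account> q = em.createQuery(
    "SELECT a FROM Account a WHERE :gameId MEMBER OF a.gameIds", Account.class);
q.setParameter("gameId", 42L);
List<Account> accounts = q.getResultList();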
|
How to convert collection of entities relation into collection of primitive ids in JPA/Hibernate?
|
I have two entities connected with many-to-many relationship. For example:
@Entity
public class Account {
@Id
private Long id;
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(
name = "account_games",
joinColumns = {@JoinColumn(name="account_id")},
inverseJoinColumns = {@JoinColumn(name="game_id")}
)
private Set<Game> games = new HashSet<>();
}
@Entity
public class Game {
@Id
private Long id;
@ManyToMany(mappedBy = "games", fetch = FetchType.LAZY)
List<Account> accounts = new ArrayList<>();
}
So, there is a table account_games(account_id, game_id) in MySQL describing the entities' many-to-many relation.
I don't want to have the Game entity anymore. Is there a way to get rid of Game and keep only the gameId relation? So, I'd like to have code something like this:
@Entity
public class Account {
@Id
private Long id;
@ManyToMany(fetch = FetchType.LAZY)
@JoinTable(
name = "account_games",
joinColumns = {@JoinColumn(name="account_id")},
inverseJoinColumns = {@JoinColumn(name="game_id")}
)
private Set<Long> gameIds = new HashSet<>();
}
without making changes in database.
I've tried different configurations of javax.persistence annotations, but none worked
|
[
"You can use @ElementCollection and @CollectionTable to achieve that.\n@Entity\npublic class Account {\n\n @Id\n private Long id;\n\n @ElementCollection(fetch = FetchType.LAZY)\n @CollectionTable(name = \"account_games\", joinColumns = @JoinColumn(name = \"account_id\"))\n @Column(name = \"game_id\", nullable = false)\n private Set<Long> gameIds = new HashSet<>();\n\n}\n\nYou may have to change the query on how to filter data using gameId. Element Collection Query\n"
] |
[
0
] |
[] |
[] |
[
"hibernate",
"java",
"jpa",
"orm",
"spring"
] |
stackoverflow_0074659081_hibernate_java_jpa_orm_spring.txt
|
Q:
Google sign in not working after publishing in play store
I went through this, and as far as the process goes I did that.
But when I installed the app from the Play Store, I was not able to sign in using the Google sign-in button.
I have used Firebase for google sign in. When I am clicking the sign in button the option to choose the account is coming but then it is not signing in or doing anything.
So where possibly am I going wrong?
A:
When you upload an APK to the Play Store, the Play Store creates a new SHA1 key called "App signing certificate". You take that SHA1 and save it in your console or Firebase account (as you need).
The new SHA1 will be found at Release Management -> App Signing on your Play Console.
A:
Update: Google changed behaviour of uploading APK, check answer below!
Release APK and debug APK has different SHA1 and different API keys for google services. Both of them must be added in Firebase Console -> Project settings. Then download google-services.json from here, add it to project and recompile with release keystore using option "Build signed APK". That should work
A:
The problem was created when Google Play App Signing was enabled for my app. Google Play App Signing changes the SHA-1 certificate fingerprint (from what is in my keystore) to their own SHA-1 certificate fingerprint.
The fix:
Goto https://play.google.com/apps/publish/
Click your application >> Release Management >> App Signing.
You will see "App signing certificate" and "Upload certificate"
Copy the SHA-1 From "App Signing Certificate." (THE TOP ONE)
Goto https://console.firebase.google.com/
Click your application >> Settings [Gearbox Icon to the right of project overview] (top of the screen) >> Project Settings >> General [Tab] >> Add Fingerprint
Paste the SHA-1 App Signing Certificate. Save.
All fixed!
A:
There are three types of SHA1 required over an app's lifecycle when you are using Firebase:
-debug SHA1
-release SHA1
-signing SHA1 (obtained from the Play Store)
You need to add the signing SHA1 to Firebase after publishing your application. Here I am attaching two screenshots, please have a look. The red-marked certificate is the required one, so copy it from the Play Store and paste it into Firebase.
In Firebase, paste it here
A:
In the latest Google play console 2022:
QUICK GUIDE:
Application Play console > Setup > App Integrity
OR
Search App Integrity in application's Search box
DETAILED GUIDE:
Step 1: Go to https://play.google.com/console/u/3/developers and open your application's Dashboard
Step 2: On the left sidebar under Release, select Setup, then App Integrity.
Optional: You could also simply search App Integrity in the Search.
A:
The issue happens because:
1) When you create / publish an app through the Google Play Console, there is an option to enable Google Play App Signing. If you enable it, it will show
Google Play App Signing is enabled for this app.
then your Upload certificate details will change and you need to rewrite SHA-1 etc. certificate details in respective places.
2) You provided debug key store / SHA-1 certificate details instead of RELEASE certificate details
3) error while generating certificates.
Solution
1) Go to google play console
Release management -> App signing
then you can see two types of certificate
1- Upload certificate ( your app certificate provided when generating signed apk)
2- App signing certificate ( because you enabled Google Play App Signing, so they provided new certificate details for your published apk)
You need to replace the uploaded certificate details with the new details
provided by Google Play wherever you used them before, such as
Google sign-in integration (change the SHA-1 of the OAuth client),
Facebook login (change the key hash; generate the key hash from the SHA-1
using this link, or copy the key hash shown on the Facebook login
error screen), Firebase, etc.
2) Provide the release SHA-1 / key hash
create / use the keystore details of the signed APK.
using command prompt:
google / firebase SHA-1 :
keytool -exportcert -keystore path-to-debug-or-production-keystore -list -v
facebook release key hash:
keytool -exportcert -alias <user alias name> -keystore < keystore path> | <openssl-path> sha1 -binary | <openssl-path> base64
if asked for password enter your signed apk keystore password.
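As a side note, when the project builds with the Android Gradle plugin, the local (debug/upload) SHA-1 fingerprints can also be listed with the built-in signingReport task; note this prints your local keystore fingerprints, not the Play App Signing one:
./gradlew signingReport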
A:
To add to existing responses, once you have the newly created SHA1:
Goto https://console.firebase.google.com:
Select project
Project Overview
Project Settings
ADD FINGERPRINT - enter SHA1 to Certificate fingerprint
Save
A:
If someone is not able to solve this issue, just open Setup > App integrity
in the Play Console, then copy the SHA1 and paste it into your Firebase console.
A:
In case anyone is facing this issue after Aug 2020: the new SHA1 can be found at Setup -> App Signing on your Play Console.
Everything else is the same as @PrinkalKumar answered.
A:
in new version of the google console:
Setup > app integrity
A:
If you enabled "Google Play App Signing" when publishing your app, you are now probably dealing with two fingerprints:
The one coming from your local keystore (keytool -exportcert -keystore path-to-production-keystore -list -v), known as the "upload cert".
The new one generated by Google when you enabled Signing (the "signing certificate").
The conflict with this situation, is that you may end up with two OAuth 2.0 client IDs:
The one that you created before publishing your app (and before enabling google signing), which is indeed the "right" one.
A NEW one created by Google when you enabled Google Signing.
You can verify this fact from: Google Play Console -> Games Services -> Select your App -> Games Details -> API Console project -> Credentials -> OAuth 2.0 client IDs
SOLUTION
In order for Google Sign-in (and all related Games Services features) to work, I had to correct the fingerprint for my pre-existing OAuth2 Client ID (the one that I created before publishing my app).
Look for the "right" OAuth 2.0 client ID: Google Play Console -> Games Services -> Select your App -> Linked Apps --> Select your App --> Take note of the "OAuth2 Client ID" at the bottom.
Look for the "Signing" certificate: Google Play Console -> Games Services -> Select your App -> Game Details -> API Console project -> Credentials -> OAuth 2.0 client IDs --> "Android client for XXXXXX (auto created by Google Service)" --> Take note of the value (xx:xx:xx:etc....) Comment: In order to reuse this value in my other OAuth 2.0 client ID, I had to replace it with some dummy number. Otherwise, you will get error: "Certificate already used in some other project".
Go to your pre-existing OAuth2 Client ID: Google Play Console -> Games Services -> Select your App -> Games Details -> API Console project -> Credentials -> OAuth 2.0 client IDs --> Select "OAuth 2.0 client ID" from step 1. Update certificate value with the one from step 2.
This solved my issue. Multiplayer is working perfectly in my app (Match4App).
A:
Update the SHA-1 key in the Firebase project settings.
Also check your Google Cloud project API key settings and add SHA-1 key credentials if you have key restrictions.
A:
All answers specify either an outdated version of the Play console, or don't specify how to do it in the Firebase console.
So here goes my answer :
Get your SHA in your Google console by navigating to Release > Setup > App integrity in the menu, then clicking "App signing" tab. Copy the SHA-1.
In your Firebase console, go to Project settings > General tab. At the very bottom of the page, click "Add fingerprint". Paste the previously copied SHA-1 and save.
|
Google sign in not working after publishing in play store
|
I went through this, and as far as the process goes I did that.
But when I installed the app from the Play Store, I was not able to sign in using the Google sign-in button.
I have used Firebase for google sign in. When I am clicking the sign in button the option to choose the account is coming but then it is not signing in or doing anything.
So where possibly am I going wrong?
|
[
"When you upload an apk to the play store then play store creates a new SHA1 key called \"App signing certificate\". You get that SHA1 and save in your console or firebase account (as you need).\nNew SHA1 will be found at Released Management->App Sigining on your play console.\n[]\n",
"Update: Google changed behaviour of uploading APK, check answer below!\nRelease APK and debug APK has different SHA1 and different API keys for google services. Both of them must be added in Firebase Console -> Project settings. Then download google-services.json from here, add it to project and recompile with release keystore using option \"Build signed APK\". That should work\n",
"The problem was created when Google Play App Signing was enabled for my app. Google Play App Signing changes the SHA-1 certificate fingerprint (from what is in my keystore) to their own SHA-1 certificate fingerprint.\nThe fix:\n\nGoto https://play.google.com/apps/publish/\nClick your application >> Release Management >> App Signing.\n\n\nYou will see \"App signing certificate\" and \"Upload certificate\"\n\n\nCopy the SHA-1 From \"App Signing Certificate.\" (THE TOP ONE)\nGoto https://console.firebase.google.com/\nClick your application >> Settings [Gearbox Icon to the right of project overview] (top of the screen) >> Project Settings >> General [Tab] >> Add Fingerprint\nPaste the SHA-1 App Signing Certificate. Save.\n\nAll fixed!\n",
"There are three types of SHA1 is required for an app lifecycle when you are using firebase\n\n-debug SHA1\n-release SHA1\n-signing SHA1 (it getting from play store)\n\nYou need to add signing SHA1 to firebase after publish your application here i am attaching two screen shots please have a look . The red marked certificates is must be required, so copy it from playstore and paste it on firbase \n\nIn firebase paste here\n",
"In the latest Google play console 2022:\nQUICK GUIDE:\nApplication Play console > Setup > App Integrity\nOR \nSearch App Integrity in application's Search box\nDETAILED GUIDE:\nStep 1: Go to https://play.google.com/console/u/3/developers and open your application's Dashboard\nStep 2: On the left sidebar under Release, select Setup, then App Integrity.\nOptional: You could also simply search App Integrity in the Search.\n\n",
"Issue happens because\n1) when you create / publish an app through google play console, there is an option for enable Google Play App Signing. if you enable it will show \n\nGoogle Play App Signing is enabled for this app.\n\n\nthen your Upload certificate details will change and you need to rewrite SHA-1 etc. certificate details in respective places.\n2) You provided debug key store / SHA-1 certificate details instead of RELEASE certificate details\n3) error while generating certificates.\nSolution\n1) Go to google play console\nRelese management -> App signing\nthen you can see two types of certificate \n1- Upload certificate ( your app certificate provided when generating signed apk)\n2- App signing certificate ( because you enabled Google Play App Signing, so they provided new certificate details for your published apk)\n\nyou need to change uploaded certificate details with new details\n provided by google play where ever you used it before. such as\n Integrating google sign in (change SHA-1 of OAuth client),\n facebook login (change key Hash (generate key hash using SHA-1\n use this link or copy the key hash provided by facebook login\n error screen), firebase etc.\n\n2) provide release SHA-1 / keyHash\ncreate / use Keystore details of signed apk.\nusing command prompt:\ngoogle / firebase SHA-1 : \nkeytool -exportcert -keystore path-to-debug-or-production-keystore -list -v\n\nfacebook release key hash:\nkeytool -exportcert -alias <user alias name> -keystore < keystore path> | <openssl-path> sha1 -binary | <openssl-path> base64 \n\nif asked for password enter your signed apk keystore password.\n",
"To add on existing responses, once you have the newly created SHA1:\nGoto https://console.firebase.google.com:\n\nSelect project\nProject Overview\nProject Settings \nADD FINGERPRINT - enter SHA1 to Certificate fingerprint\nSave\n\n",
"If someone is not able to solve this issue then just open Setup > app integrity\nin the console and then copy the SHA1 and paste in your firebase console.\n\n",
"In case of anyone facing this issue after Aug 2020. New SHA1 can be found at Setup->App Signing on your play console.\nEverything else is the same as @PrinkalKumar as answered.\n",
"in new version of the google console:\nSetup > app integrity\n",
"If you enabled \"Google Play App Signing\" when publishing your app, you are now probably dealing with two fingerprints:\nThe one coming from your local keystore (keytool -exportcert -keystore path-to-production-keystore -list -v), known as the \"upload cert\".\nThe new one generated by Google when you enabled Signing (the \"signing certificate\").\nThe conflict with this situation, is that you may end up with two OAuth 2.0 client IDs:\n\nThe one that you created before publishing your app (and before enabling google signing), which is indeed the \"right\" one.\nA NEW one created by Google when you enabled Google Signing.\n\nYou can verify this fact from: Google Play Console -> Games Services -> Select your App -> Games Details -> API Console project -> Credentials -> OAuth 2.0 client IDs\nSOLUTION\nIn order for Google Sign-in (and all related Games Services features) to work, I had to correct the fingerprint for my pre-existing OAuth2 Client ID (the one that I created before publishing my app).\n\nLook for the \"right\" OAuth 2.0 client ID: Google Play Console -> Games Services -> Select your App -> Linked Apps --> Select your App --> Take note of the \"OAuth2 Client ID\" at the bottom.\nLook for the \"Signing\" certificate: Google Play Console -> Games Services -> Select your App -> Game Details -> API Console project -> Credentials -> OAuth 2.0 client IDs --> \"Android client for XXXXXX (auto created by Google Service)\" --> Take note of the value (xx:xx:xx:etc....) Comment: In order to reuse this value in my other OAuth 2.0 client ID, I had to replace it with some dummy number. Otherwise, you will get error: \"Certificate already used in some other project\".\nGo to your pre-existing OAuth2 Client ID: Google Play Console -> Games Services -> Select your App -> Games Details -> API Console project -> Credentials -> OAuth 2.0 client IDs --> Select \"OAuth 2.0 client ID\" from step 1. Update certificate value with the one from step 2.\n\nThis solved my issue. Multiplayer is working perfectly in my app (Match4App).\n",
"Update SHA-1 Key in Firebase project setting\n\nAlso check your google cloud project api key setting and also add SHA-1 Key credentials if you have Key restrictions.\n\n",
"All answers specify either an outdated version of the Play console, or don't specify how to do it in the Firebase console.\nSo here goes my answer :\n\nGet your SHA in your Google console by navigating to Release > Setup > App integrity in the menu, then clicking \"App signing\" tab. Copy the SHA-1.\n\n\n\nIn your Firebase console, go to Project settings > General tab. At the very bottom of the page, click \"Add fingerprint\". Paste the previously copied SHA-1 and save.\n\n\n\n"
] |
[
296,
43,
38,
15,
13,
11,
7,
5,
3,
2,
1,
0,
0
] |
[
"just goto the google play console > liked account and link your firebase project. Now it works fine!\n"
] |
[
-1
] |
[
"android",
"firebase",
"firebase_authentication",
"google_signin"
] |
stackoverflow_0039318370_android_firebase_firebase_authentication_google_signin.txt
|
Q:
How to use getPreviousSibling to get the node instead of getting syntax dot in ts-morph?
I am currently trying to resolve the "definition" of "Identifier".
Note that I am using the ts-morph library.
As an example, given the following source:
const fn = () => {}
const fn2 = fn.bind(this);
I want to get the "definition" of fn Identifier (in the second line).
ts-morph is able to use "getDefinitionNodes" to get the actual fn function, but it only works on nodes of type Identifier and on the correct node.
So here I found the bind Identifier (from there I want to start).
Now I need to find the fn (it also can be this.fn sometimes).
I try to use getPreviousSibling, but it returns . (dot) and not fn.
Is there a better way to get the previous node instead to do getPreviousSibling().getPreviousSibling()?
import { Project, SyntaxKind } from "ts-morph";
console.clear();
const project = new Project();
const file = project.createSourceFile(
"foo.ts",
`
const fn = () => {}
const fn2 = fn.bind(this);
`
);
const identifiers = file.getDescendantsOfKind(SyntaxKind.Identifier);
const bind = identifiers.find((i) => i.getText() === "bind");
console.log({ bind });
const fn = bind?.getPreviousSibling();
console.log({ fn: fn?.getText() }); //<-- returns . but I expected fn.
codesandbox.io
A:
You can try to use Putout code transformer I'm working on. It has support of template values like __a and __b, and when you need to change the name, you just put something like hello in place of __a:
export const report = () => '';
export const match = () => ({
'__a.bind(__b)': ({__a, __b}) => {
console.log(__a, __b);
return true;
}
});
export const replace = () => ({
'__a.bind(__b)':'hello.bind(__b)',
});
It will output identifiers you need:
You can try it in Putout Editor.
And produce changes:
const fn = () => {}
const fn2 = hello.bind(this);
It's much simpler than using an imperative approach of searching for each previous sibling.
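For a direct ts-morph route (a sketch under the question's setup, using the library's getParentIfKind and PropertyAccessExpression.getExpression), the node before the dot can be reached through the enclosing property access instead of sibling-walking:
const access = bind?.getParentIfKind(SyntaxKind.PropertyAccessExpression);
const fnNode = access?.getExpression(); // the fn (or this.fn) part before the dot
console.log(fnNode?.getText());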
|
How to use getPreviousSibling to get the node instead of getting syntax dot in ts-morph?
|
I am currently trying to resolve the "definition" of "Identifier".
Note that I am using the ts-morph library.
As an example, given the following source:
const fn = () => {}
const fn2 = fn.bind(this);
I want to get the "definition" of fn Identifier (in the second line).
ts-morph is able to use "getDefinitionNodes" to get the actual fn function, but it only works on nodes of type Identifier and on the correct node.
So here I found the bind Identifier (from there I want to start).
Now I need to find the fn (it also can be this.fn sometimes).
I try to use getPreviousSibling, but it returns . (dot) and not fn.
Is there a better way to get the previous node instead to do getPreviousSibling().getPreviousSibling()?
import { Project, SyntaxKind } from "ts-morph";
console.clear();
const project = new Project();
const file = project.createSourceFile(
"foo.ts",
`
const fn = () => {}
const fn2 = fn.bind(this);
`
);
const identifiers = file.getDescendantsOfKind(SyntaxKind.Identifier);
const bind = identifiers.find((i) => i.getText() === "bind");
console.log({ bind });
const fn = bind?.getPreviousSibling();
console.log({ fn: fn?.getText() }); //<-- returns . but I expected fn.
codesandbox.io
|
[
"You can try to use Putout code transformer I'm working on. It has support of template values like __a and __b, and when you need to change the name, you just put something like hello in place of __a:\nexport const report = () => '';\n\nexport const match = () => ({\n '__a.bind(__b)': ({__a, __b}) => {\n console.log(__a, __b);\n return true;\n }\n});\n\nexport const replace = () => ({\n '__a.bind(__b)':'hello.bind(__b)',\n});\n\nIt will output identifiers you need:\n\nYou can try it in Putout Editor.\nAnd produce changes:\nconst fn = () => {}\nconst fn2 = hello.bind(this);\n\nIt much simpler then using imperative approach of searching for each previous sibling.\n"
] |
[
0
] |
[] |
[] |
[
"abstract_syntax_tree",
"ts_morph",
"typescript"
] |
stackoverflow_0074623929_abstract_syntax_tree_ts_morph_typescript.txt
|
Q:
ProgressBar with IProgress delegate across threads does not update
I'm trying to display the progress of unzipping a few files on a WinForms ProgressBar.
Here I create a System.Progress with a handler that updates the ProgressBar
Progress<int> progress = new Progress<int>(value => {
progressBar1.Value = value; progressBar1.Update(); });
Then I hand over my function to the thread pool.
Task t = Task.Run(() => FileUtils.UnzipTo(targetDir,
sourceDir, false, progress));
t.Wait();
Inside my unzip function I do this for every file in the archive:
progress.Report(++complete / total * 100);
This is definitely called and if I use a method for my handler the breakpoint is hit for every file (although too late I think)
I was hoping this would update the ProgressBar.
I see the dialog until the file is completely decompressed with a busy cursor above but there is no increase in progress.
What am I missing?
A:
I believe that "what you're missing" is the integer division in the expression progress.Report(++complete / total * 100). When I replicated the issue, this expression evaluated to zero on every report (until the final one, where it jumps straight to 100) unless the total variable is cast to a floating-point type.
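To make the integer-division pitfall concrete, a minimal sketch:
int complete = 5, total = 25;
Console.WriteLine(complete / total * 100);                 // 0, because 5 / 25 truncates to 0
Console.WriteLine((int)(complete / (double)total * 100));  // 20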
Other than that I was able to reproduce a mock version of your code with the desired outcome.
public partial class MainForm : Form
{
public MainForm()
{
InitializeComponent();
buttonUnzip.Click += onButtonUnzip;
}
private async void onButtonUnzip(object? sender, EventArgs e)
{
progressBar1.Visible = true;
Progress<int> progress = new Progress<int>(value => {
progressBar1.Value = value; progressBar1.Update();
});
await Task.Run(() => FileUtils.UnzipTo("mockSource", "mockDest", true, progress));
progressBar1.Visible = false;
}
}
class FileUtils
{
static string[] GetFilesMock() => new string[25];
static void UnzipOneMock(string sourceDir, string targetDir, string file)
=> Thread.Sleep(TimeSpan.FromMilliseconds(100));
public static void UnzipTo(
string sourceDir,
string targetDir,
bool option,
IProgress<int> progress)
{
var files = GetFilesMock();
int complete = 0;
int total = files.Length;
foreach (var file in files)
{
UnzipOneMock(sourceDir, targetDir, file);
progress.Report((int)(++complete / (double)total * 100));
}
}
}
|
ProgressBar with IProgress delegate across threads does not update
|
I'm trying to display the progress of unzipping a few files on a WinForms ProgressBar.
Here I create a System.Progress with a handler that updates the ProgressBar
Progress<int> progress = new Progress<int>(value => {
progressBar1.Value = value; progressBar1.Update(); });
Then I hand over my function to the thread pool.
Task t = Task.Run(() => FileUtils.UnzipTo(targetDir,
sourceDir, false, progress));
t.Wait();
Inside my unzip function I do this for every file in the archive:
progress.Report(++complete / total * 100);
This is definitely called and if I use a method for my handler the breakpoint is hit for every file (although too late I think)
I was hoping this would update the ProgressBar.
I see the dialog until the file is completely decompressed with a busy cursor above but there is no increase in progress.
What am I missing?
|
[
"I believe that \"what you're missing\" is the integer division in the expression progress.Report(++complete / total * 100). When I replicated the issue this expression always evaluates to zero unless the total variable is cast to a floating point variable type.\nOther than that I was able to reproduce a mock version of your code with the desired outcome.\n\npublic partial class MainForm : Form\n{\n public MainForm()\n {\n InitializeComponent();\n buttonUnzip.Click += onButtonUnzip;\n }\n private async void onButtonUnzip(object? sender, EventArgs e)\n {\n progressBar1.Visible = true;\n Progress<int> progress = new Progress<int>(value => {\n progressBar1.Value = value; progressBar1.Update();\n });\n await Task.Run(() => FileUtils.UnzipTo(\"mockSource\", \"mockDest\", true, progress));\n progressBar1.Visible = false;\n }\n}\nclass FileUtils\n{\n static string[] GetFilesMock() => new string[25];\n static void UnzipOneMock(string sourceDir, string targetDir, string file) \n => Thread.Sleep(TimeSpan.FromMilliseconds(100));\n public static void UnzipTo(\n string sourceDir,\n string targetDir,\n bool option, \n IProgress<int> progress)\n {\n var files = GetFilesMock();\n int complete = 0;\n int total = files.Length;\n foreach (var file in files)\n {\n UnzipOneMock(sourceDir, targetDir, file);\n progress.Report((int)(++complete / (double)total * 100));\n }\n }\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"asynchronous",
"c#",
"progress_bar",
"threadpool",
"winforms"
] |
stackoverflow_0074657921_asynchronous_c#_progress_bar_threadpool_winforms.txt
|
Q:
AWS Amplify Build Issue - StackUpdateComplete
When running "amplify push -y", my project errors with Resource is not in the state stackUpdateComplete.
How do I resolve this error?
A:
The "Resource is not in the state stackUpdateComplete" message comes from the root CloudFormation stack associated with the Amplify App ID. The Amplify CLI is just surfacing the error message that comes from the update stack operation. This indicates that Amplify's CloudFormation stack may still be in progress or stuck.
Solution 1 – “deployment-state.json”:
To fix this issue, go to the S3 bucket containing the project settings and delete the “deployment-state.json” file in the root folder, as this file holds the app deployment states. The bucket should end with, or contain the word “deployment”.
Solution 2 – “Requested resource not found”:
Check the status of the CloudFormation stack and see whether the stack failed because of a “Requested resource not found” error indicating that the DynamoDB table “tableID” was missing; confirm whether you deleted it (possibly accidentally). Manually create the above DynamoDB table and retry the push.
Solution 3A - “@auth directive with 'apiKey':
If you receive an error stating that “@auth directive with 'apiKey' provider found, but the project has no API Key authentication provider configured”, note that this error appears when you define a public authorisation in your GraphQL schema without specifying a provider. The public authorization specifies that everyone will be allowed to access the API; behind the scenes the API will be protected with an API Key. To be able to use the public API you must have an API Key configured.
The @auth directive allows the override of the default provider for a given authorization mode. To fix the issue specify “IAM” as the provider which allows to use an "Unauthenticated Role" from Cognito Identity Pools for public access instead of an API Key.
Below is the sample code for public authorisation rule:
type Todo @model @auth(rules: [{ allow: public, provider: iam, operations: [create, read, update, delete] }]) {
id: ID!
name: String!
description: String
}
After making the above changes, you can run “amplify update api” and add an IAM auth provider; the CLI generates scoped-down IAM policies for the "UnAuthenticated" role automatically.
Solution 3B - Parameters: [AuthCognitoUserPoolId] must have values:
Another issue could occur here, where the default authorization type is API Key when you run the command “amplify add api” without specifying the API type. To fix this issue, follow these steps:
Delete the API
Recreate a new one, specifying “Amazon Cognito user pool” as the authorization mode
Add IAM as an additional authorization type
Re-enable @auth directive in the newly created API Schema
Run “amplify push”
Documentation:
Public Authorisation
Troubleshoot CloudFormation stack issues in my AWS Amplify project
|
AWS Amplify Build Issue - StackUpdateComplete
|
When running "amplify push -y", my project errors with Resource is not in the state stackUpdateComplete.
How do I resolve this error?
|
[
"The \"Resource is not in the state stackUpdateComplete\" is the message that comes from the root CloudFormation stack associated with the Amplify App ID. The Amplify CLI just surfacing the error message that comes from the update stack operation. This indicates that the Amplify's CloudFormation stack may have been still be in progress or stuck.\nSolution 1 – “deployment-state.json”:\nTo fix this issue, go to the S3 bucket containing project settings and deleted the “deployment-state.json” file in root folder as this file holds the app deployment states. The bucket should end with, or contain the word “deployment”.\nSolution 2 – “Requested resource not found”:\nCheck the status of the CloudFormation stack and see if you can notice that the stack failed because of a “Requested resource not found” error indicating that the DynamoDB table “tableID” was missing and confirm that you have deleted it (possibly accidentall). Manually create the above DynamoDB table and retry to push again.\nSolution 3A - “@auth directive with 'apiKey':\nIf you recieve an error stating that “@auth directive with 'apiKey' provider found, but the project has no API Key authentication provider configured”. This error appears when you define a public authorisation in your GraphQL schema without specifying a provider. The public authorization specifies that everyone will be allowed to access the API, behind the scenes the API will be protected with an API Key. To be able to use the public API you must have API Key configured.\nThe @auth directive allows the override of the default provider for a given authorization mode. To fix the issue specify “IAM” as the provider which allows to use an \"Unauthenticated Role\" from Cognito Identity Pools for public access instead of an API Key.\nBelow is the sample code for public authorisation rule:\ntype Todo @model @auth(rules: [{ allow: public, provider: iam, operations: [create, read, update, delete] }]) {\n id: ID!\n name: String!\n description: String\n}\n\nAfter making the above changes, you can run “amplify update api” and add a IAM auth provider, the CLI generated scoped down IAM policies for the \"UnAuthenticated\" role automatically.\nSolution 3B - Parameters: [AuthCognitoUserPoolId] must have values:\nAnother issue could occur here, where the default authorization type is API Key when you run the command “amplify add api” without specifying the API type. To fix this issue, follow these steps:\n\nDeleted the the API\nRecreate a new one by specifying the “Amazon Cognito user pool” as the authorization mode\nAdd IAM as an additional authorization type\nRe-enable @auth directive in the newly created API Schema\nRun “amplify push”\n\nDocumentation:\n\nPublic Authorisation\nTroubleshoot CloudFormation stack issues in my AWS Amplify project\n\n"
] |
[
0
] |
[] |
[] |
[
"aws_amplify"
] |
stackoverflow_0074659297_aws_amplify.txt
|
Q:
Without looking at the code, is it possible to tell the size of Valgrind's VG_N segments?
I am using Valgrind on a remote server and it may have been customized, I do not know. When running certain programs I get an error saying that VG_N Segments is too low. I believe this means that the memory requested by the program is too large given Valgrind's memory pool. I'm not interested in fixing this within the install (I don't have root anyway and those who do are not likely to want to recompile it just for me) but I am interested in knowing how much memory my program tried to use before the failure.
Alternatively, is there a way to figure out memory use without Valgrind? I don't care about leaks at this point, just knowing how much memory my program requested, as a maximum, during a run.
This is on Ubuntu 20.04, Valgrind version 3.something (I'm afraid I don't know the exact Valgrind version)
A:
Not really. You may be able to work it out by using the -d argument. That will cause address space dumps at various times during execution, and you will see things like
--13467:1: aspacem <<< SHOW_SEGMENTS: Memory layout at client startup (36 segments)
--13467:1: aspacem <<< SHOW_SEGMENTS: Memory layout at client shutdown (62 segments)
Alternatively you might be able to run memcheck under gdb. See README_DEVELOPERS in the Valgrind source repo for details.
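For the "without Valgrind" part of the question, one low-tech option (a sketch; this relies on GNU time, usually invoked as /usr/bin/time rather than the shell builtin) reports the peak resident set size:
/usr/bin/time -v ./your_program
# look for "Maximum resident set size (kbytes): ..." in the output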
|
Without looking at the code, is it possible to tell the size of Valgrind's VG_N segments?
|
I am using Valgrind on a remote server and it may have been customized, I do not know. When running certain programs I get an error saying that VG_N Segments is too low. I believe this means that the memory requested by the program is too large given Valgrind's memory pool. I'm not interested in fixing this within the install (I don't have root anyway and those who do are not likely to want to recompile it just for me) but I am interested in knowing how much memory my program tried to use before the failure.
Alternatively, is there a way to figure out memory use without Valgrind? I don't care about leaks at this point, just knowing how much memory my program requested, as a maximum, during a run.
This is on Ubuntu 20.04, Valgrind version 3.something (I'm afraid I don't know the exact Valgrind version)
|
[
"Not really. You may be able to work it out by using the -d argument. That will cause address space dumps at various times during execution, and you will see things like\n--13467:1: aspacem <<< SHOW_SEGMENTS: Memory layout at client startup (36 segments)\n--13467:1: aspacem <<< SHOW_SEGMENTS: Memory layout at client shutdown (62 segments)\n\nAlternatively you might be able to run memcheck under gdb. See README_DEVELOPERS in the Valgrind source repo for details.\n"
] |
[
1
] |
[] |
[] |
[
"valgrind"
] |
stackoverflow_0074656697_valgrind.txt
|
Q:
How do I change the color texts in a listview android studio using JAVA?
Using Java, I need to change the colors of all TextViews in a ListView.
MainActivity.java:
public class MainActivity extends AppCompatActivity {
String[] programName = {"ex1", "ex2", "ex3"}
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ListView lvProgram = findViewById(R.id.listView);
ProgramAdapter programAdapter = new ProgramAdapter(this, programName);
lvProgram.setAdapter(programAdapter);
}}
ProgramAdapter.java:
public class ProgramAdapter extends ArrayAdapter<String> {
Context context;
String[] programName;
public ProgramAdapter(Context context, String[] programName) {
super(context, R.layout.single_item2, R.id.titulo, programName);
this.context = context;
this.programName = programName;
}
@Override
public View getView(final int position, View convertView, ViewGroup parent) {
View singleItem = convertView;
ProgramViewHolder holder = null;
if(singleItem == null){
LayoutInflater layoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
singleItem = layoutInflater.inflate(R.layout.single_item2, parent, false);
holder = new ProgramViewHolder(singleItem);
singleItem.setTag(holder);
}
else{
holder = (ProgramViewHolder) singleItem.getTag();
}
holder.programTitle.setText(programName[position]);
return singleItem;
}
}
MY ATTEMPT:
findViewById(R.id.TextView1).setBackgroundColor(Color.BLACK);
My attempt partly worked, but it only changed the color of some TextViews, and at random times; it didn't change everything the way I wanted.
A:
When you need to change the color of the text view -
holder.programTitle.setTextColor(Color.RED)
When you need to change the background color of the text view -
holder.programTitle.setBackgroundColor(Color.RED)
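If you want every row's text styled, a minimal sketch (the colors here are just examples) is to set it inside the adapter's getView(), right after binding the title:
holder.programTitle.setText(programName[position]);
holder.programTitle.setTextColor(Color.WHITE);
holder.programTitle.setBackgroundColor(Color.BLACK);
Doing it per row in getView() also explains the random results from your findViewById() attempt: ListView recycles row views, so styling a single view from the activity only affects whichever rows happen to reuse it.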
A:
Try this
holder.programTitle.setBackgroundColor(Color.BLACK);
|
How do I change the color texts in a listview android studio using JAVA?
|
Using Java, I need to change the colors of all TextViews in a ListView.
MainActivity.java:
public class MainActivity extends AppCompatActivity {
String[] programName = {"ex1", "ex2", "ex3"};
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
ListView lvProgram = findViewById(R.id.listView);
ProgramAdapter programAdapter = new ProgramAdapter(this, programName);
lvProgram.setAdapter(programAdapter);
}}
ProgramAdapter.java:
public class ProgramAdapter extends ArrayAdapter<String> {
Context context;
String[] programName;
public ProgramAdapter(Context context, String[] programName) {
super(context, R.layout.single_item2, R.id.titulo, programName);
this.context = context;
this.programName = programName;
}
@Override
public View getView(final int position, View convertView, ViewGroup parent) {
View singleItem = convertView;
ProgramViewHolder holder = null;
if(singleItem == null){
LayoutInflater layoutInflater = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE);
singleItem = layoutInflater.inflate(R.layout.single_item2, parent, false);
holder = new ProgramViewHolder(singleItem);
singleItem.setTag(holder);
}
else{
holder = (ProgramViewHolder) singleItem.getTag();
}
holder.programTitle.setText(programName[position]);
return singleItem;
}
}
MY ATTEMPT:
findViewById(R.id.TextView1).setBackgroundColor(Color.BLACK);
My attempt partly worked, but it only changed the color of some TextViews, and at random times; it didn't change everything the way I wanted.
|
[
"When you need to change the color of the text view -\nholder.programTitle.setTextColor(Color.RED)\n\nWhen you need to change the background color of the text view -\nholder.programTitle.setBackgroundColor(Color.RED)\n\n",
"Try this\nholder.programTitle.setBackgroundColor(Color.BLACK);\n"
] |
[
1,
0
] |
[] |
[] |
[
"android",
"android_studio",
"java"
] |
stackoverflow_0074658417_android_android_studio_java.txt
|
Q:
Having trouble training Word2Vec iteratively on Gensim
I'm attempting to train multiple texts supplied by myself iteratively. However, I keep running into an issue when I train the model more than once:
ValueError: You must specify either total_examples or total_words, for proper learning-rate and progress calculations. If you've just built the vocabulary using the same corpus, using the count cached in the model is sufficient: total_examples=model.corpus_count.
I'm currently initiating my model like this:
model = Word2Vec(sentences, min_count=0, workers=cpu_count())
model.build_vocab(sentences, update=False)
model.save('firstmodel.model')
model = Word2Vec.load('firstmodel.model')
and subsequently training it iteratively like this:
model.build_vocab(sentences, update = True)
model.train(sentences, totalexamples=model.corpus_count, epochs=model.epochs)
What am I missing here?
Somehow, it worked when I just trained one other model, so not sure why it doesn't work beyond two models...
A:
First, the error message says you need to supply either the total_examples or total_words parameter to train() (so that it has an accurate estimate of the total training-corpus size).
Your code, as currently shown, only supplies totalexamples – a parameter name missing the necessary _. Correcting this typo should remedy the immediate error.
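With the underscore restored, the call becomes:
model.train(sentences, total_examples=model.corpus_count, epochs=model.epochs)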
However, some other comments on your usage:
repeatedly calling train() with different data is an expert technique highly subject to error or other problems. It's not the usual way of using Word2Vec, nor the way most published results were reached. You can't count on it to always improve the model with new words; it might make the model worse, as new training sessions update some-but-not-all words, and alter the (usual) property that the vocabulary has one consistent set of word-frequencies from one single corpus. The best course is to train() once, with all available data, so that the full vocabulary, word-frequencies, & equally-trained word-vectors are achieved in a single consistent session.
min_count=0 is almost always a bad idea with word2vec: words with few examples in the corpus should be discarded. Trying to learn word-vectors for them not only gets weak vectors for those words, but dilutes/distracts the model from achieving better vectors for surrounding more-common words.
a count of workers up to your local cpu_count() only reliably helps up to about 4-12 workers, depending on other parameters & the efficiency of your corpus-reading; beyond that, more workers can hurt, due to inefficiencies in the Python GIL & Gensim corpus-to-worker handoffs. Finding the actual best count for your setup is, unfortunately, still just a matter of trial and error. But if you've got 16 (or more) cores, your setting is almost sure to do worse than a lower workers number.
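Putting those recommendations together, a minimal sketch of a single consistent training session (all_sentences, min_count and workers are illustrative assumptions here, not tuned values):
from multiprocessing import cpu_count
from gensim.models import Word2Vec

# one corpus containing all available data, trained in a single session
model = Word2Vec(
    all_sentences,                 # assumed: an iterable over all your texts
    min_count=5,                   # discard rare words instead of min_count=0
    workers=min(8, cpu_count()),   # cap workers; more isn't always faster
)
model.save('model.model')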
|
Having trouble training Word2Vec iteratively on Gensim
|
I'm attempting to train multiple texts supplied by myself iteratively. However, I keep running into an issue when I train the model more than once:
ValueError: You must specify either total_examples or total_words, for proper learning-rate and progress calculations. If you've just built the vocabulary using the same corpus, using the count cached in the model is sufficient: total_examples=model.corpus_count.
I'm currently initiating my model like this:
model = Word2Vec(sentences, min_count=0, workers=cpu_count())
model.build_vocab(sentences, update=False)
model.save('firstmodel.model')
model = Word2Vec.load('firstmodel.model')
and subsequently training it iteratively like this:
model.build_vocab(sentences, update = True)
model.train(sentences, totalexamples=model.corpus_count, epochs=model.epochs)
What am I missing here?
Somehow, it worked when I just trained one other model, so not sure why it doesn't work beyond two models...
|
[
"First, the error message says you need to supply either the total_examples or total_words parameter to train() (so that it has an accurate estimate of the total training-corpus size).\nYour code, as currently shown, only supplies totalexamples – a parameter name missing the necessary _. Correcting this typo should remedy the immediate error.\nHowever, some other comments on your usage:\n\nrepeatedly calling train() with different data is an expert technique highly subject to error or other problems. It's not the usual way of using Word2Vec, nor the way most published results were reached. You can't count on it to always improve the model with new words; it might make the model worse, as new training sessions update some-but-not-all words, and alter the (usual) property that the vocabulary has one consistent set of word-frequencies from one single corpus. The best course is to train() once, with all available data, so that the full vocabulary, word-frequencies, & equally-trained word-vectors are achieved in a single consistent session.\n\n\nmin_count=0 is almost always a bad idea with word2vec: words with few examples in the corpus should be discarded. Trying to learn word-vectors for them not only gets weak vectors for those words, but dilutes/distracts the model from achieving better vectors for surrounding more-common words.\na count of workers up to your local cpu_count() only reliably helps up to about 4-12 workers, depending on other parameters & the efficiency of your corpus-reading, then more workers can hurt, due to inefficiencies in the Python GIL & Gensim corpus-to-worker handoffs. (inding the actual best count for your setup is, unfortunately, still just a matter of trial and error. But if you've got 16 (or more) cores, your setting is almost sure to do worse than a lower workers number.\n\n"
] |
[
0
] |
[] |
[] |
[
"gensim",
"nlp",
"word2vec"
] |
stackoverflow_0074646922_gensim_nlp_word2vec.txt
|
Q:
Declare types without implicit conversion in C++
I want to declare my own numeric types, exactly like unsigned int, but I do not want the types to be implicitly converted. I tried this first:
typedef unsigned int firstID;
typedef unsigned int secondID;
but this is no good as the two types are just synonyms for unsigned int, so are freely interchangable.
I'd like this to cause an error:
firstID fid = 0;
secondID sid = fid; // no implicit conversion error
but this to be okay:
firstID fid = 0;
secondID sid = static_cast<secondID>(fid); // no error
My reason is so that function arguments are strongly typed, eg:
void f( firstID, secondID ); // instead of void f(unsigned int, unsigned int)
What is the mechanism I am looking for?
Thanks
Si
A:
Maybe BOOST_STRONG_TYPEDEF from boost/strong_typedef.hpp would help.
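A minimal sketch of how that macro is used (assuming Boost is available; the header path is the one named above, though newer Boost releases ship it as boost/serialization/strong_typedef.hpp, and the type names are just the ones from the question):
#include <boost/strong_typedef.hpp>

BOOST_STRONG_TYPEDEF(unsigned int, firstID)
BOOST_STRONG_TYPEDEF(unsigned int, secondID)

void f(firstID, secondID) {}

int main() {
    firstID fid(0);
    // secondID sid = fid;  // error: no implicit conversion
    secondID sid(static_cast<unsigned int>(fid));  // explicit conversion is fine
    f(fid, sid);
}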
A:
As you noted, typedef is badly named; it should really be typealias (D has explicitly added typealias, last time I looked).
So the only way you can do this is to create two unique classes.
I am not going to say you can't write a specialization of static_cast<> to do what you want, but I think (and I have not put that much thought into it yet) doing so would be a bad idea, even if it is legal. I think a better approach is to give each class an explicit constructor taking an unsigned int (so there is no automatic conversion).
struct FID
{
explicit FID(unsigned int v): value(v) {}
operator unsigned int() {return value;}
unsigned int value;
};
class SID {/* Stuff */};
FID fid(1U);
SID sid(2U);
doSomethingWithTwoSID(sid, SID(static_cast<unsigned int>(fid)));
Making the constructor explicit means no auto conversion between types.
Adding the built-in cast operator to unsigned int means it can be used anywhere an int is expected.
A:
You have to write your own classes for them, reimplementing all the operators you need.
A:
Ahhh, a fellow Ada traveler I see.
The only real way to do this in C++ is to declare classes. Something like:
class first_100 {
public:
explicit first_100(int source) {
if (source < 1 || source > 100) {
throw std::range_error("value out of range");
}
value = source;
};
// (redefine *all* the int operators here)
private:
int value;
};
You'll want to make sure to define your int constructor explicit so that C++ won't use it to implicitly convert between your types. That way this will not work:
first_100 centum = first_100(55);
int i = centum;
but something like this might (assuming you define it):
int i = centum.to_int();
A:
struct FirstID { int id; };
A:
The venerable DBJ has a very interesting solution, as always. Here is his article Cpp How to Avoid Implicit Conversions and the simple header adding basic operators for the type. Like others have mentioned, it's a templated class, but unlike the other answers here, this one is tested and works (err, without importing all of boost).
Then in practice you can declare a type without implicit conversions almost as normal by wrapping it in his template.
using my_safe_uint16_t = dbj::util::nothing_but<uint16_t>;
|
Declare types without implicit conversion in C++
|
I want to declare my own numeric types, exactly like unsigned int, but I do not want the types to be implicitly converted. I tried this first:
typedef unsigned int firstID;
typedef unsigned int secondID;
but this is no good as the two types are just synonyms for unsigned int, so are freely interchangable.
I'd like this to cause an error:
firstID fid = 0;
secondID sid = fid; // no implicit conversion error
but this to be okay:
firstID fid = 0;
secondID sid = static_cast<secondID>(fid); // no error
My reason is so that function arguments are strongly typed, eg:
void f( firstID, secondID ); // instead of void f(unsigned int, unsigned int)
What is the mechanism I am looking for?
Thanks
Si
|
[
"Maybe BOOST_STRONG_TYPEDEF form boost/strong_typedef.hpp would help.\n",
"As you noted: typedef is badly named (it should be typealias (D has explicitly added typealias (last time I looked))\nSo the only way you can do this is to create two unique classes.\nI am not going to say you can't write a specialization of static_cast<> to do what you want, but I think (and I have not put that much thought into it yet) doing so would be a bad idea (even if it legal), I think a better approach is to have each class constructor use unsigned int that are explicit (so there is no auto conversion).\n struct FID\n {\n explicit FID(unsigned int v): value(v) {}\n operator unsigned int() {return value;}\n unsigned int value;\n };\n\n class SID {/* Stuff */};\n\n FID fid(1U);\n SID sid(2U);\n\n doSomthingWithTwoSID(sid,SID(static_cast<unsigned int>(fid));\n\nMaking the constructor explicit means no auto conversion between types.\nBy adding the built in cast operator to unsigned int means that it can be used anywhere an int is expected.\n",
"You have to write your own classes for them, reimplementing all the operators you need.\n",
"Ahhh, a fellow Ada traveler I see.\nThe only real way to do this in C++ is to declare classes. Something like:\nclass first_100 {\npublic:\n explicit first_100(int source) {\n if (source < 1 || source > 100) {\n throw range_error;\n }\n value = source;\n };\n // (redefine *all* the int operators here)\nprivate:\n int value;\n};\n\nYou'll want to make sure to define your int constructor explicit so that C++ won't use it to implicitly convert between your types. That way this will not work:\nfirst_100 centum = first_100(55);\nint i = centum;\n\nbut something like this might (assuming you define it):\nint i = centum.to_int();\n\n",
"struct FirstID { int id; };\n\n",
"The venerable DBJ has a very interesting solution, as always. Here is his article Cpp How to Avoid Implicit Conversions and the simple header adding basic operators for the type. Like others have mentioned, it's a templated class, but unlike the other answers I found here, here is a solution that's tested and works (err, outside of importing all of boost).\nThen in practice you can declare a type without implicit conversions almost like normal, wrapping in his header.\nusing my_safe_uint16_t = dbj::util::nothing_but<uint16_t>;\n\n"
] |
[
8,
4,
2,
1,
1,
0
] |
[] |
[] |
[
"c++"
] |
stackoverflow_0002200025_c++.txt
|
Q:
How to get MultiFactor in firebase sdk?
I'm trying to enable/add multi-factor authentication in my firebase app.
I"ve gotten the latest firebase SDK (ver 9.1.4) and I've upgraded my firebase Authentication to include MultiFactor per the instructions.
I'm following these instructions: https://firebase.google.com/docs/auth/web/multi-factor#web-version-8_4
In my code, I get reference to the current user and I've confirmed the user object exists.
Then I try this:
const user = firebase.auth().currentUser;
user.multiFactor.getSession().then(function(multiFactorSession) {
console.log('test')
})
But when I run this I get:
Cannot read properties of undefined (reading 'getSession')
The user object doesn't seem to have the multiFactor component.
What am I missing?
here is my firebase file where I import references
import * as firebase from 'firebase/app';
import 'firebase/auth';
import "firebase/storage";
import 'firebase/messaging';
A:
The multiFactor property is not available on the User object until the User object has been reloaded with the latest data from the Firebase Auth service.
To ensure that the multiFactor property is available on the User object, you can call the reload() method on the User object before accessing the multiFactor property.
Here is an example of how you can reload the User object and then access the multiFactor property:
const user = firebase.auth().currentUser;
user.reload().then(() => {
user.multiFactor.getSession().then(function(multiFactorSession) {
console.log('test')
});
});
After calling the reload() method, the User object will be updated with the latest data from the Firebase Auth service, and the multiFactor property will be available for use.
Note that fetchSignInMethodsForEmail() is a method on the Auth instance (not on the User object) and takes an email address; it only lists first-factor sign-in methods, so it will not tell you whether second factors are enrolled. To check for enrolled second factors, read user.multiFactor.enrolledFactors instead.
Here is an example of checking for enrolled factors before requesting a session:
const user = firebase.auth().currentUser;
if (user.multiFactor.enrolledFactors.length > 0) {
  // the user already has at least one second factor enrolled
  user.multiFactor.getSession().then(function(multiFactorSession) {
    console.log('test')
  });
}
With this approach you can check for enrolled factors without an extra round trip, since enrolledFactors is populated on the User object.
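As a side note, since the question mentions SDK 9.1.4: if you switch to the v9 modular API, multiFactor is a top-level function imported from 'firebase/auth' rather than a property on the user. A rough sketch (not tested against your project setup; run it inside an async function):
import { getAuth, multiFactor } from 'firebase/auth';

const auth = getAuth();
// inside an async function:
const session = await multiFactor(auth.currentUser).getSession();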
|
How to get MultiFactor in firebase sdk?
|
I'm trying to enable/add multi-factor authentication in my firebase app.
I"ve gotten the latest firebase SDK (ver 9.1.4) and I've upgraded my firebase Authentication to include MultiFactor per the instructions.
I'm following these instructions: https://firebase.google.com/docs/auth/web/multi-factor#web-version-8_4
In my code, I get reference to the current user and I've confirmed the user object exists.
Then I try this:
const user = firebase.auth().currentUser;
user.multiFactor.getSession().then(function(multiFactorSession) {
console.log('test')
})
But when I run this I get:
Cannot read properties of undefined (reading 'getSession')
The user object doesn't seem to have the multiFactor component.
What am I missing?
here is my firebase file where I import references
import * as firebase from 'firebase/app';
import 'firebase/auth';
import "firebase/storage";
import 'firebase/messaging';
|
[
"The multiFactor property is not available on the User object until the User object has been reloaded with the latest data from the Firebase Auth service.\nTo ensure that the multiFactor property is available on the User object, you can call the reload() method on the User object before accessing the multiFactor property.\nHere is an example of how you can reload the User object and then access the multiFactor property:\nconst user = firebase.auth().currentUser;\nuser.reload().then(() => {\n user.multiFactor.getSession().then(function(multiFactorSession) {\n console.log('test')\n });\n});\n\nAfter calling the reload() method, the User object will be updated with the latest data from the Firebase Auth service, and the multiFactor property will be available for use.\nYou can also call the fetchSignInMethodsForEmail() method on the User object to fetch the list of sign-in methods for the user, which includes any multi-factor authentication methods. This method returns a list of sign-in methods, and you can check whether the list includes the MULTI_FACTOR_PROVIDER to determine whether the user has enabled multi-factor authentication.\nHere is an example of how you can use the fetchSignInMethodsForEmail() method to check whether the user has enabled multi-factor authentication:\nconst user = firebase.auth().currentUser;\nuser.fetchSignInMethodsForEmail().then(function(signInMethods) {\n if (signInMethods.includes(firebase.auth.EmailAuthProvider.EMAIL_PASSWORD_SIGN_IN_METHOD)) {\n // User has enabled multi-factor authentication\n user.multiFactor.getSession().then(function(multiFactorSession) {\n console.log('test')\n });\n }\n});\n\nWith this approach, you can check whether the user has enabled multi-factor authentication without having to reload the User object.\n"
] |
[
1
] |
[] |
[] |
[
"firebase",
"firebase_authentication",
"javascript"
] |
stackoverflow_0074659268_firebase_firebase_authentication_javascript.txt
|
Q:
Avoid KeyError while selecting several data from several groups
I am trying to sum certain values for a certain brand and month using .groupby, but I keep getting the same error: KeyError: ('Acura', '1', '2020')
These values do exist in the file I am importing:
ANIO ID_MES MARCA MODELO UNI_VEH
2020 1 Acura ILX 6
2020 1 Acura Mdx 19
2020 1 Acura Rdx 78
2020 1 Acura TLX 7
2020 1 Honda Accord- 195
2020 1 Honda BR-V 557
2020 1 Honda Civic 693
2020 1 Honda CR-V 2095
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel("HondaAcuraSales.xlsx")
def sumMonthValues (year, brand):
count = 1
sMonthSum = []
if anio == 2022:
months = 10
else:
months = 12
while count <= months:
month = 1
monthS = str(mes)
BmY = df.groupby(["BRAND","ID_MONTH","YEAR"])
honda = BmY.get_group((brand, monthS, year))
sales = honda["UNI_SOL"].sum()
sMonthSum += [sales]
month = month + 1
return sumasMes
year = 2020
brand = ('Acura')
chuck = sumMonthValues (year, brand)
print (chuck)
Is there something wrong with how I am grouping the data?
A:
The KeyError happens because the group keys are passed as strings ('1', '2020') while ID_MES and ANIO hold integers (and the column names in the posted code, BRAND/ID_MONTH/YEAR, do not match the file's MARCA/ID_MES/ANIO either). If you need to filter the DataFrame by year, brand and months, you can avoid groupby entirely and use DataFrame.loc with a boolean mask: compare a scalar with Series.eq, and test multiple values with Series.isin:
def sumMonthValues (year, brand):
months = 10 if year == 2022 else 12
mask = (df['ID_MES'].isin(range(1, months+1)) &
df['ANIO'].eq(year) &
df['MARCA'].isin(list(brand)))
return df.loc[mask, "UNI_VEH"].sum()
year = 2020
#one element tuple - added ,
brand = ('Acura', )
chuck = sumMonthValues (year, brand)
print (chuck)
110
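If you do want the per-month breakdown in one shot, a short sketch (same assumed column names as above) is to filter first and then group by month:
def month_sums(year, brand):
    months = 10 if year == 2022 else 12
    mask = (df['ID_MES'].isin(range(1, months + 1)) &
            df['ANIO'].eq(year) &
            df['MARCA'].isin(list(brand)))
    # one summed value per month, returned as a list
    return df.loc[mask].groupby('ID_MES')['UNI_VEH'].sum().tolist()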
A:
So I got around it, storing the per-month sales sums for the given year and brand:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel("ventasHondaMexico2020-2019.xlsx")
def sumMonthValues (year, brand):
sMonthSum = []
months = 10 if year == 2022 else 12
nmes = 1
mes = [nmes]
while nmes <= months:
mask = (df['ID_MES'].isin(mes) &
df['ANIO'].eq(year) &
df['MARCA'].isin(list(brand)))
nmes = nmes +1
mes = [nmes]
sumMes = df.loc[mask, "UNI_VEH"].sum()
sMonthSum += [sumMes]
return sMonthSum
year = 2020
#one element tuple - added ,
brand = ('Acura', )
conteo = 1
chuck = sumMonthValues (year, brand)
print (chuck)
|
Avoid KeyError while selecting several data from several groups
|
I am trying to sum certain values for a certain brand and month using .groupby, but I keep getting the same error: KeyError: ('Acura', '1', '2020')
These values do exist in the file I am importing:
ANIO ID_MES MARCA MODELO UNI_VEH
2020 1 Acura ILX 6
2020 1 Acura Mdx 19
2020 1 Acura Rdx 78
2020 1 Acura TLX 7
2020 1 Honda Accord- 195
2020 1 Honda BR-V 557
2020 1 Honda Civic 693
2020 1 Honda CR-V 2095
import pandas as pd
import matplotlib.pyplot as plt
df = pd.read_excel("HondaAcuraSales.xlsx")
def sumMonthValues (year, brand):
count = 1
sMonthSum = []
if anio == 2022:
months = 10
else:
months = 12
while count <= months:
month = 1
monthS = str(mes)
BmY = df.groupby(["BRAND","ID_MONTH","YEAR"])
honda = BmY.get_group((brand, monthS, year))
sales = honda["UNI_SOL"].sum()
sMonthSum += [sales]
month = month + 1
return sumasMes
year = 2020
brand = ('Acura')
chuck = sumMonthValues (year, brand)
print (chuck)
Is there something wrong with how I am grouping the data?
|
[
"If need filter DataFrame by year, brand and months you can avoid groupby and use DataFrame.loc with mask - if scalar compare by Series.eq, if multiple values use Series.isin:\ndef sumMonthValues (year, brand):\n \n months = 10 if year == 2022 else 12\n \n mask = (df['ID_MES'].isin(range(1, months+1)) &\n df['ANIO'].eq(year) & \n df['MARCA'].isin(list(brand)))\n \n return df.loc[mask, \"UNI_VEH\"].sum()\n\nyear = 2020\n#one element tuple - added ,\nbrand = ('Acura', )\n\nchuck = sumMonthValues (year, brand)\nprint (chuck)\n110\n\n",
"So i got arround it: Storing the values given from the sum of sales given year and brand per month.\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\ndf = pd.read_excel(\"ventasHondaMexico2020-2019.xlsx\")\n\ndef sumMonthValues (year, brand):\n \n sMonthSum = []\n months = 10 if year == 2022 else 12\n nmes = 1\n mes = [nmes]\n while nmes <= months:\n mask = (df['ID_MES'].isin(mes) &\n df['ANIO'].eq(year) & \n df['MARCA'].isin(list(brand)))\n nmes = nmes +1\n mes = [nmes]\n sumMes = df.loc[mask, \"UNI_VEH\"].sum()\n sMonthSum += [sumMes]\n return sMonthSum\n \n\nyear = 2020\n#one element tuple - added ,\nbrand = ('Acura', )\nconteo = 1 \nchuck = sumMonthValues (year, brand)\nprint (chuck)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074651550_pandas_python.txt
|