Q:
Trouble finding file directory using Chrome OS
I'm trying to read in a .txt file on a Chromebook to manipulate the data that is there. My code to source the file is:
def readInFile():
    arr_intValues = []
    myFile = open("#MY FILE HERE", "r")
    # you need to change the directory
    for myLine in myFile:
        arr_intValues.append(int(myLine))
    return arr_intValues

myNewList = readInFile()
print(myNewList)
Trouble is, I cannot find out where the source of the file is for this code. If I drop the file into a Chrome tab, it reads:
file:///media/fuse/crostini_9549350604ce9beeb7d5a9a66d5bf09733144d34_termina_penguin/RandomNum.txt
Meanwhile the file location that "Get info" returns is:
My files/Linux files/RandomNum.txt
Both of these options fail if I attempt to open it and print it in my code.
How do I find the correct file directory?
I attempted to find the directory of a .txt file on Chrome OS and have not been successful.
A:
If you're running the program on Linux, you'll need to specify the Linux path.
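For illustration, a minimal sketch assuming the script runs inside the Linux (Crostini) container, where the "Linux files" folder maps to the container user's home directory; the file name is taken from the question:
import os

def readInFile(path):
    arr_intValues = []
    with open(path, "r") as myFile:
        for myLine in myFile:
            arr_intValues.append(int(myLine))
    return arr_intValues

# "My files/Linux files/RandomNum.txt" should correspond to ~/RandomNum.txt here
myNewList = readInFile(os.path.expanduser("~/RandomNum.txt"))
print(myNewList)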
Q:
JS returning key:value pair as in JSON format
I am working on a problem where I'm supposed to get a two-word string from a JS object and transform it to two new key:value pairs (e.g. get from: {name: "Bob Jones", Age:34} to: {firstName:Bob, lastName:Jones, Age:34}).
Below my code:
/* enter original object, access value of "name" and split it in two; then create new
   object that will have a new property "last name" with the second part of original name value
   - code for finding where one part of strings end where the other begins, then output
   the part in separate properties */
function makeGuestList(person) {
    let oldName = person.name;
    let indexOfSpace = oldName.indexOf(" ");
    let givenName = oldName.slice(0, indexOfSpace);
    let familyName = oldName.slice(indexOfSpace);
    /////////// from here down something must be wrong
    person.firstName = givenName;
    person.lastName = familyName; // just update person
    delete person.name; // this was the original name property I need to get rid of
    // return person;
    console.log(person);
}
I tried outputting in console and I think I know what's the issue. My object return as {"first Name":Bob, "lastName":Jones, Age:34}
From what I found online (mainly here), this seems to be the JSON format, but the platform I'm doing the code on wants me to be able to output the key/property of the object without quotes.
Any ideas what I could be doing wrong? (I know there's probably a more efficient way of solving the issue than what I've done; I just want to understand the issue.)
Sorry for being too verbose, and thank you.
A:
You still have typos in your edits. FYI, programming requires that you make zero typos. You will not be able to program like that. Please keep it in mind, and try to be careful.
You wrote:
My object return as {"first Name":Bob, "lastName":Jones, Age:34}
What you meant might have been something like:
My object returns as {"firstName":"Bob", "lastName":"Jones", "Age":34}
Finally, here's how to print a JavaScript object, as JSON, without quotes:
var myPersonObject = {"firstName":"Bob", "lastName":"Jones", "Age":34};
var myPersonAsJSON = JSON.stringify( myPersonObject );
//We will use a Regular Expression and the replace() function to remove the quotes:
var myPersonAsJSONWithoutQuotes = myPersonAsJSON.replace( /"/g, "" );
console.log( myPersonAsJSONWithoutQuotes );
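As a side note, a quick check you can run in Node: console.log already prints a plain object's keys without quotes; the quotes only appear once you stringify it.
var person = { firstName: 'Bob', lastName: 'Jones', Age: 34 };
console.log(person);                 // { firstName: 'Bob', lastName: 'Jones', Age: 34 }
console.log(JSON.stringify(person)); // {"firstName":"Bob","lastName":"Jones","Age":34}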
Q:
Remove GRUB cursor (before kernel)
I am trying to remove the cursor that appears even before the boot menu and kernel (I am using GRUB2). I can remove the "GRUB loading" and "Welcome" messages, but the cursor still remains. I tried modifying registers (AH, CH) with the INT 10H routine in grub-core/boot/i386/pc/boot.S but that didn't work out. Am I on the right track? Can someone give me additional help?
A:
I got the answer from this post: https://unix.stackexchange.com/questions/548277/how-to-hide-blinking-text-cursor-in-grub-boot. All you have to do is add the parameter vt.global_cursor_default=0 to this line in /etc/default/grub:
GRUB_CMDLINE_LINUX=
... then update GRUB.
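For concreteness, a sketch of the edit (assuming the line was previously empty; merge with any existing parameters):
# /etc/default/grub
GRUB_CMDLINE_LINUX="vt.global_cursor_default=0"

# then regenerate the configuration, e.g. on Arch:
grub-mkconfig -o /boot/grub/grub.cfg
# or on Debian/Ubuntu:
update-grub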
Q:
Axios returns gibberish with JSONPlaceholder
I'm trying to learn to use Axios to fetch API data (ultimately to work with the HubSpot API).
I've set up a small testing project where I'm trying to retrieve data from JSONPlaceholder and the RapidAPI FamousQuotes API, using fetch and Axios.
Everything works fine for Rapid API using the code examples for Axios and fetch.
But when I use the same code to target JSONPlaceholder, I get weird results.
Here's the code using fetch, which works:
// import node-fetch
const url = 'https://jsonplaceholder.typicode.com/posts';
const options = {
    method: 'GET',
    headers: {}
};

fetch(url, options)
    .then(res => res.json())
    .then(json => console.log(json))
    .catch(err => console.error('error:' + err));
returns
[
  {
    userId: 1,
    id: 1,
    title: 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit',
    body: 'quia et suscipit\n' +
etc ...
However, if I'm using Axios, I'm getting something strange:
const axios = require('axios')

const url = 'https://jsonplaceholder.typicode.com/posts';
const options = {
    method: 'GET',
    url: url,
    headers: {}
};

axios.request(options).then(function (response) {
    console.log(response);
}).catch(function (error) {
    console.error(error);
});
returns gibberish text:
`\x13\x7Fk�H��\x01�\b\x1D>����\x15��T��N\x1B.\x16���r�H�v\x1E �_"9\x18'?1�\x13��J��\\���LA.\t��H\x17���\x10\x9B!�b�\x12\x04R� 9�܅��ڹ�\x0EK�}��%��A\x12�v�\x15Q*�g�dwf�\n` +
'�ئ��iQ��}���0\t\x18c�?�߾������K\x05�_��/y���\x1E/�\x1D�9��K^�,E+�v\x1C���95��6���xIyu��\x0E�]\x1C��\x07:�w��a�,�{�|��嗼�\n' +
'R���b6��\x1B�A�P\x1B\n' +
'�n�]eYG�0�w��^�ك�ee�D$\x15R)��KC�SW3��SK\x0EN-A\x04�\x92\x18���\x12<`D�+��oJ���/"��\x03j\x03�V��\x18�\x14\x11�{��]|�ĺ�@\x1ELE��_B\tZ���\t�w��ӏܠ�\x10赿5B\f���w}RSnSm0ɐ�ϺR\x13r�9.\x0F��3P\x7F�\x03v�\x06��.�^8�%�&\x037&C^��\x0B�\n' +
I've tried forcing the response type to "text" or the encoding to "utf8" with no success.
What's weird is that when targeting 'https://famous-quotes4.p.rapidapi.com/random' with both methods, I get proper responses in both cases.
I'm probably missing something but I don't know what ...
A:
Add an Accept-Encoding header with the value application/json to the Axios GET request.
This is a known defect in axios v1.2.0: the default Accept-Encoding in that version is gzip, so the response body arrives compressed and prints as gibberish.
const axios = require('axios')

const url = 'https://jsonplaceholder.typicode.com/posts';
const options = {
    method: 'GET',
    url: url,
    headers: {
        'Accept-Encoding': 'application/json'
    }
};

axios.request(options).then(function (response) {
    console.log(response.data);
}).catch(function (error) {
    console.error(error);
});
Result
$ node get-data.js
[
  {
    userId: 1,
    id: 1,
    title: 'sunt aut facere repellat provident occaecati excepturi optio reprehenderit',
    body: 'quia et suscipit\n' +
      'suscipit recusandae consequuntur expedita et cum\n' +
      'reprehenderit molestiae ut ut quas totam\n' +
      'nostrum rerum est autem sunt rem eveniet architecto'
  },
  {
    userId: 1,
    id: 2,
    title: 'qui est esse',
    body: 'est rerum tempore vitae\n' +
      'sequi sint nihil reprehenderit dolor beatae ea dolores neque\n' +
      'fugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\n' +
      'qui aperiam non debitis possimus qui neque nisi nulla'
  },
...
Q:
Is there a way to make a tic tac toe app using React functions
The main problem is that my tic tac toe board is not updating when I click on the square inside the board.
It seems that the useState setter setXIsNext is working, but I don't know whether the setGame function is working or not.
I tried to remove the slice function but it still didn't update.
I tried putting
dupGame[i] = xIsNext ? 'X' : 'O'
into the setGame function, which updated the 1st square in the board but not the square that was clicked.
Code Used:
import { useState } from 'react'
import './tiktacktoe.css'

function Square(props) {
    return (
        <div className="square" onClick={props.onClick}>
            {props.value}
        </div>
    )
}

function Board() {
    const [game, setGame] = useState(Array(9).fill(null))
    const [xIsNext, setXIsNext] = useState(true)

    function handleClick(i) {
        const dupGame = game.slice()
        if (dupGame[i]) {
            return
        }
        dupGame[i] = xIsNext ? 'X' : 'O'
        setXIsNext(!xIsNext)
        setGame(dupGame)
    }

    function renderSquare(i) {
        return (
            <Square value={game[i]} onClick={handleClick} id={i} />
        )
    }

    let board = []
    for (let i = 0; i < 9; i++) {
        board.push(renderSquare(i))
    }

    let status = "Next player: " + (xIsNext ? 'X' : 'O')

    return (
        <div>
            <div>
                {status}
            </div>
            <div className='board'>
                {board}
            </div>
        </div>
    )
}

export function TikTackToe() {
    return (
        <div>
            <Board/>
        </div>
    )
}
A:
Your onClick prop is wrong: when the square is clicked, the event object is passed to the function instead of the square's index.
Try
<Square value={game[i]} onClick={() => handleClick(i)} id={i} />
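Alternatively, a small sketch reusing the id prop that the question's code already passes to Square, so that onClick={handleClick} can stay unchanged:
function Square(props) {
    return (
        // call the parent's handler with this square's index
        <div className="square" onClick={() => props.onClick(props.id)}>
            {props.value}
        </div>
    )
}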
Q:
Java find missing numbers in array
I have been learning Java on my own for only 3 weeks (from YouTube videos and blogs), and this is my first language. I want to write a program to find missing number(s) in an ascending integer array. I found a way, but it only works if the last number in the incrementing array is less than 10, like 1, 2, 3, 4, 5, 6, 7, 8, 9, 10.
I also found other programs on the internet, but they all have the same problem.
I tried to write on my own with my limited 3 week knowledge and succeeded. But I think I took the long way. The code is almost 27 lines long.
Let me show you the code
I have an integer array with 9 elements:
[12, 13, 17, 18, 20, 21, 24, 25, 26]
and 14, 15, 16, 19, 22, 23 are missing
public class Exercises {

    public static void main(String[] args) {
        int[] arr = {12, 13, 17, 18, 20, 21, 24, 25, 26};
        int len = arr.length;
        System.out.println("\nArray source: \n" + Arrays.toString(arr));

        // for avoiding an ArrayIndexOutOfBoundsException
        // error I am creating a temp. array with 10 elements
        // and adding the main array's elements to the temp. array
        int[] tempArr = new int[len + 1];
        for (int i = 0; i < len; i++) {
            tempArr[i] = arr[i];
        }

        // adding the last element to the temp. array
        int max = 0;
        for (int i = 0; i < tempArr.length; i++) {
            if (tempArr[i] > max) {
                max = tempArr[i];
            }
        }
        tempArr[tempArr.length - 1] = max + 1;

        System.out.println("\nMissing number(S): ");
        for (int i = 0; i < len - 1; i++) {
            // if it comes to the last loop of the main array,
            // this will use the temp. array's last element to
            // compare against the main array
            if (i == (len - 1) && (tempArr[i + 1] - arr[i]) > 1) {
                System.out.println(tempArr[i]);
            } else if ((arr[i + 1] - arr[i]) > 1) {
                for (int a = 1; a <= (arr[i + 1] - arr[i]) - 1; a++) {
                    System.out.println(arr[i] + a);
                }
            }
        }
    }
}
Output:
Array source:
[12, 13, 17, 18, 20, 21, 24, 25, 26]
Missing number(S):
14
15
16
19
22
23
I got what I wanted, but is there a more optimal way to do that?
Btw, if I want to make the code more aesthetic, it becomes huge :D
public class Exercises {

    public static void main(String[] args) {
        int[] arr = {12, 13, 17, 18, 20, 21, 24, 25, 26};
        int len = arr.length;
        int[] tempArr = new int[len + 1];
        int[] correctArr = new int[MathUtils.max(arr) - MathUtils.min(arr) + 1];
        int countArr = (MathUtils.max(arr) - (MathUtils.max(arr) - MathUtils.min(arr)) - 1);

        for (int i = 0; i < correctArr.length; i++) {
            countArr++;
            correctArr[i] = countArr;
        }

        System.out.println("\nArray source: \n" + Arrays.toString(arr));
        System.out.println("Source should be: \n" + Arrays.toString(correctArr));

        for (int i = 0; i < len; i++) {
            tempArr[i] = arr[i];
        }

        int max = 0;
        for (int i = 0; i < tempArr.length; i++) {
            if (tempArr[i] > max) {
                max = tempArr[i];
            }
        }
        tempArr[tempArr.length - 1] = max + 1;

        int count = 0;
        for (int i = 0; i < len - 1; i++) {
            if (i == (len - 1) && (tempArr[i + 1] - arr[i]) > 1) {
                count++;
            } else if ((arr[i + 1] - arr[i]) > 1) {
                for (int a = 1; a <= (arr[i + 1] - arr[i]) - 1; a++) {
                    count++;
                }
            }
        }

        if (count == 1) {
            System.out.println("\nThere is only one missing number:");
        } else if (count > 1) {
            System.out.println("\nThere are " + count + " missing numbers:");
        }

        for (int i = 0; i < len - 1; i++) {
            if (i == (len - 1) && (tempArr[i + 1] - arr[i]) > 1) {
                System.out.println(tempArr[i]);
            } else if ((arr[i + 1] - arr[i]) > 1) {
                for (int a = 1; a <= (arr[i + 1] - arr[i]) - 1; a++) {
                    System.out.println(arr[i] + a);
                }
            }
        }
    }
}
Output:
Array source:
[12, 13, 17, 18, 20, 21, 24, 25, 26]
Source should be:
[12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26]
There are 6 missing numbers:
14
15
16
19
22
23
A:
You look to be overcomplicating things, since a simple nested for loop is probably all that you need. The outer loop goes through the array, up to but not including the last number, and the inner loop runs between one item in the array and the next, from arr[i] + 1 up to arr[i + 1]. Note that the inner loop won't "loop" if the two array items are contiguous, so no if blocks are needed:
public class MissingNumber {
    public static void main(String[] args) {
        int[] arr = {12, 13, 17, 18, 20, 21, 24, 25, 26};

        System.out.println("missing numbers");
        for (int i = 0; i < arr.length - 1; i++) {
            for (int j = arr[i] + 1; j < arr[i + 1]; j++) {
                System.out.println("" + j);
            }
        }
    }
}
Output:
missing numbers
14
15
16
19
22
23
That's it
A:
You can use List<Integer> to avoid repeating the same calculation.
static List<Integer> missingNumbers(int[] a) {
    List<Integer> missingNumbers = new ArrayList<>();
    for (int i = 1, length = a.length; i < length; ++i)
        if (a[i - 1] + 1 < a[i])
            for (int j = a[i - 1] + 1; j < a[i]; ++j)
                missingNumbers.add(j);
    return missingNumbers;
}

public static void main(String[] args) {
    int[] a = {12, 13, 17, 18, 20, 21, 24, 25, 26};
    List<Integer> missingNumbers = missingNumbers(a);
    if (missingNumbers.size() == 1)
        System.out.println("There is only one missing number:");
    else
        System.out.println("There are " + missingNumbers.size() + " missing numbers:");
    for (int n : missingNumbers)
        System.out.println(n);
}
output:
There are 6 missing numbers:
14
15
16
19
22
23
A:
int[] arr = { 12, 13, 17, 18, 20, 21, 24, 25, 26 };
System.out.println("Array Source:");
System.out.println(Arrays.toString(arr) + "\n");
System.out.println("Missing number(s):");
int nextNumber = arr[0] + 1;
for (int i = 1; i < arr.length; i++) {
    while (arr[i] > nextNumber) {
        System.out.println(nextNumber);
        nextNumber++;
    }
    nextNumber++;
}
To fill the array with all numbers from min to max you can write
int[] shouldBe = IntStream.range(min, max + 1).toArray();
and in this special case
int[] shouldBe = IntStream.range(arr[0], arr[arr.length - 1] + 1).toArray();
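Building on the IntStream idea, here is a short sketch that collects the missing numbers directly (assuming the array is sorted ascending and that java.util.Arrays and java.util.stream.IntStream are imported):
int[] missing = IntStream.range(arr[0], arr[arr.length - 1] + 1)
        .filter(n -> Arrays.binarySearch(arr, n) < 0) // negative means "not found"
        .toArray();
System.out.println(Arrays.toString(missing)); // prints [14, 15, 16, 19, 22, 23]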
A:
public class missingnumber {
    public static void main(String args[]) {
        int arr[] = {1, 2, 3, 4, 6, 7, 8, 9};
        // scan from the end and print the predecessor of any element that
        // does not directly follow the previous one (note: this reports only
        // one number per gap, in descending order)
        for (int i = arr.length - 1; i > 0; i--) {
            if (arr[i] - 1 != arr[i - 1]) {
                System.out.println(arr[i] - 1);
            }
        }
    }
}
Q:
Is there a way to format a json byte array and write it to a file?
I have a byte array that I made, and I am writing it to a json file. This works, but I want to have a formatted JSON file instead of a massive wall of text.
I have tried decoding the byte array with utf-8, but instead I get UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte. My plan was to then take this string and use json.dumps() to format it.
Trying json.dumps() without any other formatting gives this: TypeError: Object of type bytearray is not JSON serializable
content = bytearray()
content_idx = 0
try:
    with open(arguments.input_file, 'rb') as input_file:
        while (byte := input_file.read(1)):
            content += bytes([ord(byte) ^ xor_key[content_idx % (len(xor_key))]])
            content_idx += 1
except (IOError, OSError) as exception:
    print('Error: could not read input file')
    exit()

try:
    with open(arguments.output_file, 'wb') as output_file:
        output_file.write(json.dumps(content.decode('utf-8'), indent=4))
except (IOError, OSError) as exception:
    print('Error: could not create output file')
    exit()
A:
The error is that you are passing json.dumps() raw bytes, and it cannot serialize them, as the error output says.
To save the data as JSON you need to translate the byte stream into a Python dictionary (or another JSON-serializable type), which json will accept without problems.
It would help if you could show what the input data looks like and what you want to save to JSON.
Python has an off-the-shelf base64 library that can translate an array of bytes into a string. But a problem may arise later when parsing that string back into a dictionary, so it may be worth looking for libraries ready for such parsing; otherwise you can use regular expressions.
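For illustration, a minimal sketch of that suggestion, assuming content is the XOR-decoded bytearray from the question's code:
import base64
import json

# wrap the raw bytes in a JSON-serializable dict by BASE64-encoding them
payload = {'data': base64.b64encode(bytes(content)).decode('ascii')}
with open(arguments.output_file, 'w') as output_file:
    json.dump(payload, output_file, indent=4)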
A:
The JSON encoder and decoder can be extended to support other types. Here's one way to support byte strings by converting them to a BASE64 str and serializing it as a dict with special key. The key is used to flag the decoder to convert the JSON object with that key back to a byte string.
import json
import base64

class B64Encoder(json.JSONEncoder):
    '''Recognize a bytes object and return a dictionary with
    a special key to indicate its value is a BASE64 string.
    '''
    def default(self, obj):
        if isinstance(obj, bytes):
            return {'__B64__': base64.b64encode(obj).decode('ascii')}
        return super().default(obj)

def B64Decoder(obj):
    '''Recognize a dictionary with the special BASE64 key
    and return its BASE64-decoded value.
    '''
    if '__B64__' in obj:
        return base64.b64decode(obj['__B64__'])
    return obj

d = {'key1': bytes.fromhex('0102030405'), 'key2': b'\xaa\x55\x00\xff'}
print(f'Python IN: {d}')
print('\nJSON:')
s = json.dumps(d, indent=2, cls=B64Encoder)
print(s)
d2 = json.loads(s, object_hook=B64Decoder)
print(f'\nPython OUT: {d2}')
Output:
Python IN: {'key1': b'\x01\x02\x03\x04\x05', 'key2': b'\xaaU\x00\xff'}

JSON:
{
  "key1": {
    "__B64__": "AQIDBAU="
  },
  "key2": {
    "__B64__": "qlUA/w=="
  }
}

Python OUT: {'key1': b'\x01\x02\x03\x04\x05', 'key2': b'\xaaU\x00\xff'}
Q:
Connector NoSuchMethodError: org.bouncycastle.crypto.CryptoServicesRegistrar.isInApprovedOnlyMode()Z
I am running Confluent Platform version 7.1.0 and my Kafka Connector requires bouncy castle fips library to be present in plugin path so that it can decrypt the encrypted private key.
The BouncyCastleFipsProvider is needed at runtime to generate a PrivateKey from encryptedPrivateKey
I get the error below:
Caused by: java.lang.NoSuchMethodError: org.bouncycastle.crypto.CryptoServicesRegistrar.isInApprovedOnlyMode()Z
    at org.bouncycastle.jcajce.provider.ProvSecureHash$MD5.configure(Unknown Source)
    at org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider.<init>(Unknown Source)
    at org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider.<init>(Unknown Source)
    at org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider.<init>(Unknown Source)
    at com.snowflake.kafka.connector.internal.EncryptionUtils.parseEncryptedPrivateKey(EncryptionUtils.java:30)
This works fine on Confluent 5.5.0 but somehow doesn't work with Confluent 6.2.0 or 7.1.0.
I have made sure the fips library is present in
/usr/local/share/kafka/plugins as well as
ls confluent-7.1.0/share/java/kafka/ | grep fips
bc-fips-1.0.2.1.jar
bcpkix-fips-1.0.3.jar
I fail to understand what the root cause could be. When Kafka Connect starts, I see it loading both of the jars from the plugin path. I found this answer, but it doesn't apply in this case: the method is present in bc-fips-1.0.2.1.jar.
A:
The FIPS library is not a "Connect plugin", so despite it "seeing" the JAR, nothing will "load" from it.
You'll need to explicitly set CLASSPATH=confluent-7.1.0/share/java/kafka/*.jar environment variable for the JVM to load all the classes from those JARs.
You will also want to use jar tf to actually check content of those JARs for the class in the error.
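For example, to check for the class from the stack trace (jar name taken from the question's listing):
jar tf bc-fips-1.0.2.1.jar | grep CryptoServicesRegistrar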
Q:
How to call a Python function from Node.js
I have an Express Node.js application, but I also have a machine learning algorithm to use in Python. Is there a way I can call Python functions from my Node.js application to make use of the power of machine learning libraries?
A:
The easiest way I know of is to use the "child_process" package, which comes packaged with Node.
Then you can do something like:
const spawn = require("child_process").spawn;
const pythonProcess = spawn('python',["path/to/script.py", arg1, arg2, ...]);
Then all you have to do is make sure that you import sys in your python script, and then you can access arg1 using sys.argv[1], arg2 using sys.argv[2], and so on.
To send data back to node just do the following in the python script:
print(dataToSendBack)
sys.stdout.flush()
And then node can listen for data using:
pythonProcess.stdout.on('data', (data) => {
    // Do something with the data returned from python script
});
Since this allows multiple arguments to be passed to a script using spawn, you can restructure a python script so that one of the arguments decides which function to call, and the other argument gets passed to that function, etc.
Hope this was clear. Let me know if something needs clarification.
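To make this concrete, here is a minimal end-to-end sketch; the file name script.py and the two integer arguments are only for illustration:
// caller.js
const { spawn } = require("child_process");

const pythonProcess = spawn("python", ["script.py", "4", "5"]);

let output = "";
pythonProcess.stdout.on("data", (data) => {
    output += data; // chunks arrive as Buffers and are coerced to strings
});
pythonProcess.on("close", () => {
    console.log("python returned:", output.trim()); // -> python returned: 9
});

# script.py
import sys

# sys.argv[0] is the script path; the real arguments start at index 1
print(int(sys.argv[1]) + int(sys.argv[2]))
sys.stdout.flush()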
A:
Example for people who are from Python background and want to integrate their machine learning model in the Node.js application:
It uses the child_process core module:
const express = require('express')
const app = express()

app.get('/', (req, res) => {

    const { spawn } = require('child_process');
    const pyProg = spawn('python', ['./../pypy.py']);

    pyProg.stdout.on('data', function(data) {

        console.log(data.toString());
        res.write(data);
        res.end('end');
    });
})

app.listen(4000, () => console.log('Application listening on port 4000!'))
It doesn't require the sys module in your Python script.
Below is a more modular way of performing the task using Promise:
const express = require('express')
const app = express()

let runPy = new Promise(function(success, nosuccess) {

    const { spawn } = require('child_process');
    const pyprog = spawn('python', ['./../pypy.py']);

    pyprog.stdout.on('data', function(data) {
        success(data);
    });

    pyprog.stderr.on('data', (data) => {
        nosuccess(data);
    });
});

app.get('/', (req, res) => {

    res.write('welcome\n');

    runPy.then(function(fromRunpy) {
        console.log(fromRunpy.toString());
        res.end(fromRunpy);
    });
})

app.listen(4000, () => console.log('Application listening on port 4000!'))
A:
The python-shell module by extrabacon is a simple way to run Python scripts from Node.js with basic, but efficient inter-process communication and better error handling.
Installation:
With npm:
npm install python-shell.
Or with yarn:
yarn add python-shell
Running a simple Python script:
const PythonShell = require('python-shell').PythonShell;

PythonShell.run('my_script.py', null, function (err) {
    if (err) throw err;
    console.log('finished');
});
Running a Python script with arguments and options:
const PythonShell = require('python-shell').PythonShell;

var options = {
    mode: 'text',
    pythonPath: 'path/to/python',
    pythonOptions: ['-u'],
    scriptPath: 'path/to/my/scripts',
    args: ['value1', 'value2', 'value3']
};

PythonShell.run('my_script.py', options, function (err, results) {
    if (err)
        throw err;
    // Results is an array consisting of messages collected during execution
    console.log('results: %j', results);
});
For the full documentation and source code, check out https://github.com/extrabacon/python-shell
A:
You can now use RPC libraries that support Python and Javascript such as zerorpc
From their front page:
Node.js Client
var zerorpc = require("zerorpc");

var client = new zerorpc.Client();
client.connect("tcp://127.0.0.1:4242");

client.invoke("hello", "RPC", function(error, res, more) {
    console.log(res);
});
Python Server
import zerorpc

class HelloRPC(object):
    def hello(self, name):
        return "Hello, %s" % name

s = zerorpc.Server(HelloRPC())
s.bind("tcp://0.0.0.0:4242")
s.run()
A:
Many of the examples are years out of date and involve complex setup. You can give JSPyBridge/pythonia a try (full disclosure: I'm the author). It's vanilla JS that lets you operate on foreign Python objects as if they existed in JS. In fact, it does interoperability so Python code can in return call JS through callbacks and passed functions.
numpy + matplotlib example, with the ES6 import system:
import { py, python } from 'pythonia'
const np = await python('numpy')
const plot = await python('matplotlib.pyplot')
// Fixing random state for reproducibility
await np.random.seed(19680801)
const [mu, sigma] = [100, 15]
// Inline expression evaluation for operator overloading
const x = await py`${mu} + ${sigma} * ${np.random.randn(10000)}`
// the histogram of the data
const [n, bins, patches] = await plot.hist$(x, 50, { density: true, facecolor: 'g', alpha: 0.75 })
console.log('Distribution', await n) // Always await for all Python access
await plot.show()
python.exit()
Through CommonJS (without top level await):
const { py, python } = require('pythonia')
async function main() {
    const np = await python('numpy')
    const plot = await python('matplotlib.pyplot')
    ...
    // the rest of the code
}
main().then(() => python.exit()) // If you don't call this, the process won't quit by itself.
A:
Most of the previous answers resolve the promise in the on("data") handler. That is not the proper way to do it, because if you receive a lot of data you will only get the first chunk; instead, resolve in the end event.
const { spawn } = require('child_process');
const pythonDir = (__dirname + "/../pythonCode/"); // Path of python script folder
const python = pythonDir + "pythonEnv/bin/python"; // Path of the Python interpreter

/** remove warning that you don't care about */
function cleanWarning(error) {
    return error.replace(/Detector is not able to detect the language reliably.\n/g, "");
}

function callPython(scriptName, args) {
    return new Promise(function(success, reject) {
        const script = pythonDir + scriptName;
        const pyArgs = [script, JSON.stringify(args)]
        const pyprog = spawn(python, pyArgs);
        let result = "";
        let resultError = "";
        pyprog.stdout.on('data', function(data) {
            result += data.toString();
        });

        pyprog.stderr.on('data', (data) => {
            resultError += cleanWarning(data.toString());
        });

        pyprog.stdout.on("end", function() {
            if (resultError == "") {
                success(JSON.parse(result));
            } else {
                console.error(`Python error, you can reproduce the error with: \n${python} ${script} ${pyArgs.join(" ")}`);
                const error = new Error(resultError);
                console.error(error);
                reject(resultError);
            }
        })
    });
}
module.exports.callPython = callPython;
Call:
const pythonCaller = require("../core/pythonCaller");
const result = await pythonCaller.callPython("preprocessorSentiment.py", {"thekeyYouwant": value});
python:
try:
    argu = json.loads(sys.argv[1])
except:
    raise Exception("error while loading argument")
A:
I'm on Node 10 and child_process 1.0.2. The data from Python is a byte array and has to be converted. Just another quick example of making an HTTP request in Python.
node
const process = spawn("python", ["services/request.py", "https://www.google.com"])

return new Promise((resolve, reject) => {
    process.stdout.on("data", data => {
        resolve(data.toString()); // <------------ by default converts to utf-8
    })
    process.stderr.on("data", reject)
})
request.py
import urllib.request
import sys

def karl_morrison_is_a_pedant():
    response = urllib.request.urlopen(sys.argv[1])
    html = response.read()
    print(html)
    sys.stdout.flush()

karl_morrison_is_a_pedant()
p.s. not a contrived example since node's http module doesn't load a few requests I need to make
A:
You could take your Python, transpile it, and then call it as if it were JavaScript. I have done this successfully for Screeps and even got it to run in the browser à la Brython.
A:
Boa is good for your needs; see the example, which extends the Python tensorflow keras.Sequential class in JavaScript.
const fs = require('fs');
const boa = require('@pipcook/boa');
const { tuple, enumerate } = boa.builtins();

const tf = boa.import('tensorflow');
const tfds = boa.import('tensorflow_datasets');

const { keras } = tf;
const { layers } = keras;

const [
    [ train_data, test_data ],
    info
] = tfds.load('imdb_reviews/subwords8k', boa.kwargs({
    split: tuple([ tfds.Split.TRAIN, tfds.Split.TEST ]),
    with_info: true,
    as_supervised: true
}));

const encoder = info.features['text'].encoder;
const padded_shapes = tuple([
    [ null ], tuple([])
]);
const train_batches = train_data.shuffle(1000)
    .padded_batch(10, boa.kwargs({ padded_shapes }));
const test_batches = test_data.shuffle(1000)
    .padded_batch(10, boa.kwargs({ padded_shapes }));

const embedding_dim = 16;
const model = keras.Sequential([
    layers.Embedding(encoder.vocab_size, embedding_dim),
    layers.GlobalAveragePooling1D(),
    layers.Dense(16, boa.kwargs({ activation: 'relu' })),
    layers.Dense(1, boa.kwargs({ activation: 'sigmoid' }))
]);

model.summary();

model.compile(boa.kwargs({
    optimizer: 'adam',
    loss: 'binary_crossentropy',
    metrics: [ 'accuracy' ]
}));
The complete example is at: https://github.com/alibaba/pipcook/blob/master/example/boa/tf2/word-embedding.js
I used Boa in another project, Pipcook, which addresses machine learning problems for JavaScript developers; we implemented ML/DL models upon the Python ecosystem (tensorflow, keras, pytorch) via the boa library.
A:
/*eslint-env es6*/
/*global require*/
/*global console*/
var express = require('express');
var app = express();

// Creates a server which runs on port 3000 and
// can be accessed through localhost:3000
app.listen(3000, function() {
    console.log('server running on port 3000');
})

app.get('/name', function(req, res) {

    console.log('Running');

    // Use child_process.spawn method from
    // child_process module and assign it
    // to variable spawn
    var spawn = require("child_process").spawn;

    // Parameters passed in spawn -
    // 1. type_of_script
    // 2. list containing Path of the script
    //    and arguments for the script
    // E.g : http://localhost:3000/name?firstname=Levente
    var process = spawn('python', ['apiTest.py',
        req.query.firstname]);

    // Takes stdout data from script which executed
    // with arguments and send this data to res object
    var output = '';
    process.stdout.on('data', function(data) {
        console.log("Sending Info")
        res.end(data.toString('utf8'));
    });
    console.log(output);
});
This worked for me. Your python.exe must be added to your PATH variable for this code snippet to work. Also, make sure your Python script is in your project folder.
A:
const util = require('util');
const exec = util.promisify(require('child_process').exec);

// the function must be async so that await can be used inside it
async function runPythonFile() {
    const { stdout, stderr } = await exec('py ./path_to_python_file -s asdf -d pqrs');
    if (stdout) { /* do something */ }
    if (stderr) { /* do something */ }
}
For more information visit official Nodejs child process page: https://nodejs.org/api/child_process.html#child_processexeccommand-options-callback
A:
Yes, there are several ways that you can call Python functions from your Node.js application to use machine learning libraries. One way to do this is to use the child_process module in Node.js to run a Python script as a separate process and pass data to it through standard input (stdin) and receive data back through standard output (stdout). Here is an example of how you could do this:
// Import the child_process module
const { spawn } = require('child_process');

// Define the data to pass to the Python script
const data = {
    input: [1, 2, 3, 4, 5]
};

// Spawn a new Python process
const python = spawn('python', ['script.py']);

// Write the data to the stdin of the Python process
python.stdin.write(JSON.stringify(data));
python.stdin.end();

// Listen for data from the stdout of the Python process
python.stdout.on('data', (output) => {
    // Parse the output data as JSON
    const result = JSON.parse(output);

    // Use the result from the Python script
    console.log(result);
});
In this example, the spawn() function is used to start a new Python process and run the script.py script. The data to be passed to the script is defined in the data variable, and then written to the stdin of the Python process using the write() method. The data event is listened for on the stdout of the Python process, and the output data is parsed as JSON and used as needed.
Another way to call Python functions from your Node.js application is to use a third-party library like python-shell or python-bridge. These libraries provide an API for calling Python functions directly from Node.js, without the need to spawn a separate Python process.
I hope this helps! Let me know if you have any other questions.
"I'm on node 10 and child process 1.0.2. The data from python is a byte array and has to be converted. Just another quick example of making a http request in python.\nnode\nconst process = spawn(\"python\", [\"services/request.py\", \"https://www.google.com\"])\n\nreturn new Promise((resolve, reject) =>{\n process.stdout.on(\"data\", data =>{\n resolve(data.toString()); // <------------ by default converts to utf-8\n })\n process.stderr.on(\"data\", reject)\n})\n\nrequest.py\nimport urllib.request\nimport sys\n\ndef karl_morrison_is_a_pedant(): \n response = urllib.request.urlopen(sys.argv[1])\n html = response.read()\n print(html)\n sys.stdout.flush()\n\nkarl_morrison_is_a_pedant()\n\np.s. not a contrived example since node's http module doesn't load a few requests I need to make\n",
"You could take your python, transpile it, and then call it as if it were javascript. I have done this succesfully for screeps and even got it to run in the browser a la brython.\n",
"The Boa is good for your needs, see the example which extends Python tensorflow keras.Sequential class in JavaScript.\nconst fs = require('fs');\nconst boa = require('@pipcook/boa');\nconst { tuple, enumerate } = boa.builtins();\n\nconst tf = boa.import('tensorflow');\nconst tfds = boa.import('tensorflow_datasets');\n\nconst { keras } = tf;\nconst { layers } = keras;\n\nconst [\n [ train_data, test_data ],\n info\n] = tfds.load('imdb_reviews/subwords8k', boa.kwargs({\n split: tuple([ tfds.Split.TRAIN, tfds.Split.TEST ]),\n with_info: true,\n as_supervised: true\n}));\n\nconst encoder = info.features['text'].encoder;\nconst padded_shapes = tuple([\n [ null ], tuple([])\n]);\nconst train_batches = train_data.shuffle(1000)\n .padded_batch(10, boa.kwargs({ padded_shapes }));\nconst test_batches = test_data.shuffle(1000)\n .padded_batch(10, boa.kwargs({ padded_shapes }));\n\nconst embedding_dim = 16;\nconst model = keras.Sequential([\n layers.Embedding(encoder.vocab_size, embedding_dim),\n layers.GlobalAveragePooling1D(),\n layers.Dense(16, boa.kwargs({ activation: 'relu' })),\n layers.Dense(1, boa.kwargs({ activation: 'sigmoid' }))\n]);\n\nmodel.summary();\nmodel.compile(boa.kwargs({\n optimizer: 'adam',\n loss: 'binary_crossentropy',\n metrics: [ 'accuracy' ]\n}));\n\n\nThe complete example is at: https://github.com/alibaba/pipcook/blob/master/example/boa/tf2/word-embedding.js\n\nI used Boa in another project Pipcook, which is to address the machine learning problems for JavaScript developers, we implemented ML/DL models upon the Python ecosystem(tensorflow,keras,pytorch) by the boa library.\n",
"/*eslint-env es6*/\n/*global require*/\n/*global console*/\nvar express = require('express'); \nvar app = express();\n\n// Creates a server which runs on port 3000 and \n// can be accessed through localhost:3000\napp.listen(3000, function() { \n console.log('server running on port 3000'); \n} ) \n\napp.get('/name', function(req, res) {\n\n console.log('Running');\n\n // Use child_process.spawn method from \n // child_process module and assign it \n // to variable spawn \n var spawn = require(\"child_process\").spawn; \n // Parameters passed in spawn - \n // 1. type_of_script \n // 2. list containing Path of the script \n // and arguments for the script \n\n // E.g : http://localhost:3000/name?firstname=Levente\n var process = spawn('python',['apiTest.py', \n req.query.firstname]);\n\n // Takes stdout data from script which executed \n // with arguments and send this data to res object\n var output = '';\n process.stdout.on('data', function(data) {\n\n console.log(\"Sending Info\")\n res.end(data.toString('utf8'));\n });\n\n console.log(output);\n}); \n\nThis worked for me. Your python.exe must be added to you path variables for this code snippet. Also, make sure your python script is in your project folder.\n",
"const util = require('util');\nconst exec = util.promisify(require('child_process').exec);\n \nfunction runPythonFile() {\n const { stdout, stderr } = await exec('py ./path_to_python_file -s asdf -d pqrs');\n if (stdout) { // do something }\n if (stderr) { // do something }\n}\n\nFor more information visit official Nodejs child process page: https://nodejs.org/api/child_process.html#child_processexeccommand-options-callback\n",
"Yes, there are several ways that you can call Python functions from your Node.js application to use machine learning libraries. One way to do this is to use the child_process module in Node.js to run a Python script as a separate process and pass data to it through standard input (stdin) and receive data back through standard output (stdout). Here is an example of how you could do this:\n// Import the child_process module\nconst { spawn } = require('child_process');\n\n// Define the data to pass to the Python script\nconst data = {\n input: [1, 2, 3, 4, 5]\n};\n\n// Spawn a new Python process\nconst python = spawn('python', ['script.py']);\n\n// Write the data to the stdin of the Python process\npython.stdin.write(JSON.stringify(data));\npython.stdin.end();\n\n// Listen for data from the stdout of the Python process\npython.stdout.on('data', (output) => {\n // Parse the output data as JSON\n const result = JSON.parse(output);\n\n // Use the result from the Python script\n console.log(result);\n});\n\n\nIn this example, the spawn() function is used to start a new Python process and run the script.py script. The data to be passed to the script is defined in the data variable, and then written to the stdin of the Python process using the write() method. The data event is listened for on the stdout of the Python process, and the output data is parsed as JSON and used as needed.\nAnother way to call Python functions from your Node.js application is to use a third-party library like python-shell or python-bridge. These libraries provide an API for calling Python functions directly from Node.js, without the need to spawn a separate Python process.\nI hope this helps! Let me know if you have any other questions.\n"
] |
[
357,
201,
55,
15,
10,
9,
4,
3,
3,
2,
0,
0
] |
[] |
[] |
[
"express",
"node.js",
"python"
] |
stackoverflow_0023450534_express_node.js_python.txt
|
Q:
How to read a textbox value from a playwright
I have an input text field on the page with id "email30" and I am trying to read its value from Playwright
let dd_handle = await page.$("#email30");
let value = await dd_handle.getAttribute("value");
However, it comes back as "" although there is a value inside the input text. When I inspect, I don't see the value attribute set to the current value either.
The following normal JS code gives me the correct value:
document.getElementById("email30").value
I'm not sure how I can read the value from the Playwright framework. Can anyone please advise? Their docs are not very helpful here.
A:
There are three major approaches to retrieving input values with Playwright/Puppeteer.
page.evaluate
const value = await page.evaluate(el => el.value, await page.$('input'))
page.$eval
const value = await page.$eval('input', el => el.value)
page.evaluate with Element.getAttribute
const value = await page.evaluate(() => document.querySelector('input').getAttribute('value'))
The first two will return the same result with very similar performance; I can't think of a case where we would favor one over the other (except perhaps that the $eval version is shorter). The third one with Element.getAttribute is not advised if you are manipulating an input's value before evaluation and you want to retrieve the new value. It will always return the original attribute value, which is an empty string in most cases. It is a matter of attribute vs. property value in JavaScript.
However page.evaluate with Element.getAttribute can be handy when you need such attributes that can't be accessed with the other mentioned methods (e.g.: class names, data attributes, aria attributes etc.)
A:
Ok finally following code did the job for me!
const value = await page.$eval("#email30", (el) => el.value);
A:
This works the best so far
const somevalue = this.page.inputValue('#Name');
A:
To add to this excellent answer, with locator, you can use inputValue to get the current input value for one or more elements, or call page.inputValue(selector) directly to get the input value of the first matching selector.
Here's a couple of minimal, complete examples:
const playwright = require("playwright");
let browser;
(async () => {
browser = await playwright.chromium.launch();
const page = await browser.newPage();
await page.setContent(`<input value="foo">`);
console.log(await page.inputValue("input")); // => foo
const input = await page.locator("input");
await page.fill("input", "bar");
console.log(await input.inputValue()); // => bar
})()
.catch(err => console.error(err))
.finally(() => browser?.close())
;
If you have multiple elements:
const playwright = require("playwright");
const html = `<input value="foo"><input value="bar">`;
let browser;
(async () => {
browser = await playwright.chromium.launch();
const page = await browser.newPage();
await page.setContent(html);
console.log(await page.inputValue("input")); // => foo
const els = await page.locator("input");
await els.nth(1).fill("baz");
console.log(await els.first().inputValue()); // => foo
console.log(await els.nth(0).inputValue()); // => foo
console.log(await els.last().inputValue()); // => baz
// or iterate all:
for (const el of await els.elementHandles()) {
console.log(await el.inputValue());
}
// or make an array:
const content = await Promise.all(
[...await els.elementHandles()].map(e => e.inputValue())
);
console.log(content);
})()
.catch(err => console.error(err))
.finally(() => browser?.close())
;
A:
To read the value of an input field in Playwright, you can evaluate the value property of the ElementHandle object that represents the input element. Here is an example of how you might do this:
// get a reference to the input element
const inputElement = await page.$("#email30")
// get the value of the input element
const inputValue = await inputElement.evaluate(el => el.value)
Here it uses the $ function to get a reference to the input element with the specified selector, and then evaluates the value property of the element to get the current value of the input.
But reading the value attribute instead (for example with getAttribute) only works if the input element has a value attribute that contains the current value of the input. If the input element does not have a value attribute, or if the attribute does not contain the current value, you can use the evaluate function to execute JavaScript that retrieves the value of the input.
Here is an example of how you might do this:
// execute JavaScript to get the value of the input element
const inputValue = await page.evaluate(() => document.getElementById("email30").value)
Here, the evaluate function is used to execute JavaScript that uses the getElementById function to retrieve the input element and access its value property. This approach will work even if the input element does not have a value attribute, or if the attribute does not contain the current value of the input.
Note that with the above examples you may need to adapt them to your specific use case.
|
How to read a textbox value from a playwright
|
I have an input text field on the page with id "email30" and I am trying to read its value from Playwright
let dd_handle = await page.$("#email30");
let value = await dd_handle.getAttribute("value");
However, it comes back as "" although there is a value inside the input text. When I inspect, I don't see the value attribute set to the current value either.
The following normal JS code gives me the correct value:
document.getElementById("email30").value
I'm not sure how I can read the value from the Playwright framework. Can anyone please advise? Their docs are not very helpful here.
|
[
"There are three major trends how to retrieve input values with playwright/puppeteer.\npage.evaluate\nconst value = await page.evaluate(el => el.value, await page.$('input'))\n\npage.$eval\nconst value = await page.$eval('input', el => el.value)\n\npage.evaluate with Element.getAttribute\nconst value = await page.evaluate(() => document.querySelector('input').getAttribute('value'))\n\nThe first two will return the same result with very similar performance, I can't bring up an example when we could favor one over the other (maybe one: the one with $eval is shorter). The third one with Element.getAttribute is not advised if you are manipulating an input's value before evaluation and you want to retrieve the new value. It will always return the original attribute value, which is an empty string in most of the cases. It is a topic of attribute vs property value in JavaScript.\nHowever page.evaluate with Element.getAttribute can be handy when you need such attributes that can't be accessed with the other mentioned methods (e.g.: class names, data attributes, aria attributes etc.)\n",
"Ok finally following code did the job for me!\nconst value = await page.$eval(\"#email30\", (el) => el.value);\n\n",
"This works the best so far\nconst somevalue = this.page.inputValue('#Name');\n\n",
"To add to this excellent answer, with locator, you can use inputValue to get the current input value for one or more elements, or call page.inputValue(selector) directly to get the input value of the first matching selector.\nHere's a couple of minimal, complete examples:\nconst playwright = require(\"playwright\");\n\nlet browser;\n(async () => {\n browser = await playwright.chromium.launch();\n const page = await browser.newPage();\n await page.setContent(`<input value=\"foo\">`);\n\n console.log(await page.inputValue(\"input\")); // => foo\n const input = await page.locator(\"input\");\n\n await page.fill(\"input\", \"bar\");\n\n console.log(await input.inputValue()); // => bar\n})()\n .catch(err => console.error(err))\n .finally(() => browser?.close())\n;\n\nIf you have multiple elements:\nconst playwright = require(\"playwright\");\n\nconst html = `<input value=\"foo\"><input value=\"bar\">`;\n\nlet browser;\n(async () => {\n browser = await playwright.chromium.launch();\n const page = await browser.newPage();\n await page.setContent(html);\n\n console.log(await page.inputValue(\"input\")); // => foo\n\n const els = await page.locator(\"input\");\n await els.nth(1).fill(\"baz\");\n console.log(await els.first().inputValue()); // => foo\n console.log(await els.nth(0).inputValue()); // => foo\n console.log(await els.last().inputValue()); // => baz\n\n // or iterate all:\n for (const el of await els.elementHandles()) {\n console.log(await el.inputValue());\n }\n\n // or make an array:\n const content = await Promise.all(\n [...await els.elementHandles()].map(e => e.inputValue())\n );\n console.log(content);\n})()\n .catch(err => console.error(err))\n .finally(() => browser?.close())\n;\n\n",
"To read the value of an input field in Playwright, you can use the value property of the ElementHandle object that represents the input element. Here is an example of how you might do this:\n# get a reference to the input element\ninputElement = await page.$(\"#email30\")\n\n# get the value of the input element\ninputValue = inputElement.value\n\nHere it uses the $ function to get a reference to the input element with the specified selector, and then accesses the value property of the ElementHandle object to get the value of the input.\nBut this approach will only work if the input element has a value attribute that contains the current value of the input. If the input element does not have a value attribute, or if the attribute does not contain the current value of the input, you can use the evaluate function to execute JavaScript that retrieves the value of the input.\nHere is an example of how you might do this:\n# get a reference to the input element\ninputElement = await page.$(\"#email30\")\n\n# execute JavaScript to get the value of the input element\ninputValue = await page.evaluate(() => document.getElementById(\"email30\").value)\n\nHere, evaluate function is used to execute JavaScript that uses the getElementById function to retrieve the input element and access its value property. This approach will work even if the input element does not have a value attribute, or if the attribute does not contain the current value of the input.\nNote that with the above examples you may need to adapt them to your specific use case.\n"
] |
[
21,
7,
1,
0,
0
] |
[] |
[] |
[
"javascript",
"node.js",
"playwright"
] |
stackoverflow_0062653205_javascript_node.js_playwright.txt
|
Q:
Flutter i get an net:: ERR_UNKNOWN_URL_SCHEME when i click on a tel or mailto link on the website that is wrapped in the flutter webview
Hi, I wrapped my website inside a Flutter webview, but when I tap on a tel: or mailto: link the app throws an error:
the web page at tel: could not be loaded because net::ERR_UNKNOWN_URL_SCHEME. Can someone please help me fix the problem? The code below is my app with all the packages I am using. I am very new to app development and I would really appreciate some help.
Thanks in advance.
Kind regards.
import 'package:flutter/material.dart';
import 'package:flutter_webview_plugin/flutter_webview_plugin.dart';
import 'package:splashscreen/splashscreen.dart';
import 'package:flutter/rendering.dart';
import 'package:flutter/services.dart';
import 'package:url_launcher/url_launcher.dart';
void main() {
SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]);
runApp(MaterialApp(
home: MyApp(),
debugShowCheckedModeBanner: false,
theme: ThemeData(
fontFamily: 'Bungee',
primaryTextTheme: TextTheme(
title: TextStyle(color: Colors.yellow, fontSize: 24),
),
)));
}
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => new _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
Widget build(BuildContext context) {
return new SplashScreen(
seconds: 20,
navigateAfterSeconds: AfterSplash(),
image: new Image.asset(
'assets/images/icon.png',
width: 100,
height: 100,
),
backgroundColor: Colors.blueGrey[800],
photoSize: 100.0,
loaderColor: Colors.yellow[300],
);
}
}
class AfterSplash extends StatefulWidget {
@override
_MyAppsState createState() => _MyAppsState();
}
class _MyAppsState extends State<AfterSplash> {
@override
Widget build(BuildContext context) {
return SafeArea(
child: WebviewScaffold(
// Enter your custom url
url: "https://appdev.quintechx.tk/",
withJavascript: true,
withLocalStorage: true,
enableAppScheme: true,
primary: true,
supportMultipleWindows: true,
allowFileURLs: true,
withLocalUrl: true,
scrollBar: false,
appCacheEnabled: true,
),
);
}
}
A:
Solution from https://github.com/fluttercommunity/flutter_webview_plugin/issues/43:
_subscription = webViewPlugin.onUrlChanged.listen((String url) async {
print("navigating to...$url");
if (url.startsWith("mailto") || url.startsWith("tel") || url.startsWith("http") || url.startsWith("https"))
{
await webViewPlugin.stopLoading();
await webViewPlugin.goBack();
if (await canLaunch(url))
{
await launch(url);
return;
}
print("couldn't launch $url");
}
});
A:
launchUrl(
Uri.parse(url),
mode: LaunchMode.externalApplication,
);
|
Flutter i get an net:: ERR_UNKNOWN_URL_SCHEME when i click on a tel or mailto link on the website that is wrapped in the flutter webview
|
Hi, I wrapped my website inside a Flutter webview, but when I tap on a tel: or mailto: link the app throws an error:
the web page at tel: could not be loaded because net::ERR_UNKNOWN_URL_SCHEME. Can someone please help me fix the problem? The code below is my app with all the packages I am using. I am very new to app development and I would really appreciate some help.
Thanks in advance.
Kind regards.
import 'package:flutter/material.dart';
import 'package:flutter_webview_plugin/flutter_webview_plugin.dart';
import 'package:splashscreen/splashscreen.dart';
import 'package:flutter/rendering.dart';
import 'package:flutter/services.dart';
import 'package:url_launcher/url_launcher.dart';
void main() {
SystemChrome.setPreferredOrientations([DeviceOrientation.portraitUp]);
runApp(MaterialApp(
home: MyApp(),
debugShowCheckedModeBanner: false,
theme: ThemeData(
fontFamily: 'Bungee',
primaryTextTheme: TextTheme(
title: TextStyle(color: Colors.yellow, fontSize: 24),
),
)));
}
class MyApp extends StatefulWidget {
@override
_MyAppState createState() => new _MyAppState();
}
class _MyAppState extends State<MyApp> {
@override
Widget build(BuildContext context) {
return new SplashScreen(
seconds: 20,
navigateAfterSeconds: AfterSplash(),
image: new Image.asset(
'assets/images/icon.png',
width: 100,
height: 100,
),
backgroundColor: Colors.blueGrey[800],
photoSize: 100.0,
loaderColor: Colors.yellow[300],
);
}
}
class AfterSplash extends StatefulWidget {
@override
_MyAppsState createState() => _MyAppsState();
}
class _MyAppsState extends State<AfterSplash> {
@override
Widget build(BuildContext context) {
return SafeArea(
child: WebviewScaffold(
// Enter your custom url
url: "https://appdev.quintechx.tk/",
withJavascript: true,
withLocalStorage: true,
enableAppScheme: true,
primary: true,
supportMultipleWindows: true,
allowFileURLs: true,
withLocalUrl: true,
scrollBar: false,
appCacheEnabled: true,
),
);
}
}
|
[
"Solution from https://github.com/fluttercommunity/flutter_webview_plugin/issues/43: \n_subscription = webViewPlugin.onUrlChanged.listen((String url) async {\n print(\"navigating to...$url\");\n if (url.startsWith(\"mailto\") || url.startsWith(\"tel\") || url.startsWith(\"http\") || url.startsWith(\"https\"))\n {\n await webViewPlugin.stopLoading();\n await webViewPlugin.goBack();\n if (await canLaunch(url))\n {\n await launch(url);\n return;\n }\n print(\"couldn't launch $url\");\n }\n });\n\n",
"launchUrl(\n Uri.parse(url),\n mode: LaunchMode.externalApplication,\n);\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"flutter"
] |
stackoverflow_0060515164_flutter.txt
|
Q:
Create new column using custom function pandas df error
I want to create a new column which gives every row a category based on their value in one specific column. Here is the function:
def assign_category(df):
if df['AvgVAA'] >= -4:
return 'Elite'
elif df['AvgVAA'] <= -4 and df['AvgVAA'] > -4.5:
return 'Above Average'
elif df['AvgVAA'] <= -4.5 and df['AvgVAA'] > -5:
return 'Average'
elif df['AvgVAA'] <= -5 and df['AvgVAA'] > -5.5:
return 'Below Average'
elif df['AvgVAA'] <= -5.5:
return 'Mediocre'
I get this error message in the second line: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
This is the code to create the new column:
df_upd['VAA Category'] = df_upd.apply(assign_category(df_upd), axis = 1)
I did some research on it, but the explanations did not help me because they were mainly about the "and" operator, which is not used in that line.
I do not know why, but the error message did not come up when I first ran the code. But even at that time, the function did not work. Every row in that new column was filled with 'Unknown'.
Can someone help me out here?
A:
You're so close. Instead of:
df_upd.apply(assign_category(df_upd), axis = 1)
Use:
df_upd.apply(assign_category, axis = 1)
In the updated approach, you are applying the function to df_upd (as intended), whereas in the original approach, you are essentially doing:
x = assign_category(df)
df.apply(x, axis = 1)
The error comes when you try to calculate x since you need to apply along axis 1.
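For a quick sanity check, here is a minimal, self-contained sketch of the corrected call; the AvgVAA values are made up, and assign_category is the function from the question:
import pandas as pd

# Hypothetical data covering each category's range
df_upd = pd.DataFrame({"AvgVAA": [-3.8, -4.2, -4.7, -5.2, -5.8]})

# Pass the function itself; apply() then calls it once per row
df_upd['VAA Category'] = df_upd.apply(assign_category, axis=1)
print(df_upd)
# Expected categories: Elite, Above Average, Average, Below Average, Mediocre
Inside the function, df['AvgVAA'] is then a scalar (one row's value), so the comparisons and the and operators behave as intended.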
|
Create new column using custom function pandas df error
|
I want to create a new column which gives every row a category based on their value in one specific column. Here is the function:
def assign_category(df):
if df['AvgVAA'] >= -4:
return 'Elite'
elif df['AvgVAA'] <= -4 and df['AvgVAA'] > -4.5:
return 'Above Average'
elif df['AvgVAA'] <= -4.5 and df['AvgVAA'] > -5:
return 'Average'
elif df['AvgVAA'] <= -5 and df['AvgVAA'] > -5.5:
return 'Below Average'
elif df['AvgVAA'] <= -5.5:
return 'Mediocre'
I get this error message in the second line: ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
This is the code to create the new column:
df_upd['VAA Category'] = df_upd.apply(assign_category(df_upd), axis = 1)
I did some research on it, but the explanations did not help me because they were mainly about the "and" operator, which is not used in that line.
I do not know why, but the error message did not come up when I first ran the code. But even at that time, the function did not work. Every row in that new column was filled with 'Unknown'.
Can someone help me out here?
|
[
"You're so close. Instead of:\ndf_upd.apply(assign_category(df_upd), axis = 1)\n\nUse:\ndf_upd.apply(assign_category, axis = 1)\n\nIn the updated approach, you are applying the function to df_upd (as intended), whereas in the original approach, you are essentially doing:\nx = assign_category(df)\ndf.apply(x, axis = 1)\n\nThe error comes when you try to calculate x since you need to apply along axis 1.\n"
] |
[
2
] |
[] |
[] |
[
"apply",
"function",
"pandas",
"python"
] |
stackoverflow_0074663358_apply_function_pandas_python.txt
|
Q:
Machine-learning Overview
This may not be the type of question to ask on SO, but I just wanted to hear what other people have to say regarding what factors to consider when implementing machine-learning algorithms in a large enterprise environment.
One of my goals is to research industry machine-learning solutions that can be tailored to my company's specific needs. Being pretty much the only person on my team who has a math background and who has done some background reading on machine-learning algorithms previously, I'm tasked with explaining/comparing machine-learning solutions in the industry. From what I've gleaned by googling around, it seems that:
a. Machine-learning and predictive analytics aren't exactly the same thing, so what's inherently different when a company offers predictive analytics software vs. machine-learning software? (e.g. IBM Predictive Analytics vs. Skytree Server)
b. A lot of popular terminology often gets entangled together, especially regarding Big Data, Hadoop, machine-learning, etc. Could anyone clarify the distinction among those terms? From what I've learned, I think the conceptual separation goes like:
Machine-learning algorithms
Software Implementation
Infrastructure to run software on large datasets (Hadoop)
c. When implementing a solution, do most companies hire consultants from the solution company to help implement the algorithms, or are most algorithms pre-built and any data analyst can use them? Or do we need a team of data scientists, even with the software, to run the algorithms and understand the output?
I know this is quite a long-winded question(s), but any info would be helpful. It's kind of difficult being the only person who remotely knows anything about this stuff, so I'd love to hear what more experienced and technical people have to say.
A:
It's hard to answer your question without an idea of how much data you have and what your company's needs are. This will help you narrow down what types of solutions can meet your needs. Among those, there will likely be open source solutions (Mahout perhaps), visualization solutions, and a variety of solutions to help you manage your data.
A:
Regarding Big data/Hadoop/ML:
Big Data is a term that describes the essence of the data you need to deal with. Mostly, you can distinguish big data from "ordinary" data by something called the 3Vs - Volume, Variety and Velocity.
The thresholds that define "what volume is necessary for big data" aren't set scientifically, but rather on feasibility considerations: if you feel that the amount of data creates a large overhead for maintaining a regular DB (MySql etc.), then you might consider big data solutions.
Hadoop is just the most common tool designed to handle big data.
Machine learning is subfield in data science that evolved from statistics and computer science. The idea is to let machines learn without explicitly programming it. In a nutshell, the learning method goal is to generalize past data in order to predict new data.
Big data and machine learning are mentioned together because ML techniques require data in order to learn. There is a trend towards big data in the industry, and the nature of big data requires feeding ML algorithms a lot of (often unstructured, sparse) data in order for them to learn.
Most companies hire data scientists to deal with these tasks, as they require a lot of knowledge in statistics, computer science, algorithms etc. that regular data analysts don't have.
Most of a data scientist's job is not "running a ready algorithm"; there is a lot of preparing and statistically analyzing the data before you even start thinking about the algorithms.
You don't need to hire a team in advance but it's a function that can grow gradually over time based on needs.
A:
Regarding you third part of the question:
There is always an initial learning curve when learning something new and powerful. The same applies to data modeling using machine learning. If you are bound by constraints like budget, you will need to devote some time to learning the fundamentals of how the algorithms work and then their implementation. However, if you are bound by time, you may need to hire a team of data scientists / machine learning engineers. In the long run it always helps to start understanding a bit of machine learning yourself, so that you can collaborate easily with your team.
A:
Answering part C of your question: machine learning has prebuilt algorithms for both supervised and unsupervised methods. To build a solution for an organization, we first have to understand the client's needs, and before choosing an algorithm we decide between supervised and unsupervised learning. If the need is for supervised learning, we first have to do feature engineering - a very important part of supervised learning that finds the attributes which distinguish the subjects of interest from the rest. Then we choose a classification or prediction algorithm based, again, on the problem. There are many algorithms for that, but choosing the best one depends entirely on your hardware capacity and the data processing demands of the algorithm; comparison charts exist for that.
Unsupervised learning is best when we want to identify anomalies in the data or cluster records that have similar attributes.
Hope this will help you to understand the third part of your question.
A:
It's great that you are interested in researching machine learning solutions for your company! Machine learning and predictive analytics are related, but they are not the same thing. Machine learning algorithms are a set of mathematical models and methods that can be used to automatically learn from data, identify patterns, and make predictions. Predictive analytics, on the other hand, is a broader term that refers to the use of data, statistical algorithms, and machine learning techniques to predict future outcomes based on historical data.
In terms of software, companies that offer predictive analytics software typically provide tools for data cleaning, preparation, and visualization, as well as various algorithms for predictive modeling. Machine learning software, on the other hand, typically focuses on providing a set of algorithms and tools for building, training, and deploying machine learning models. Some machine learning software may also include features for data preparation and visualization, but the focus is typically on the algorithms and model building rather than the broader range of predictive analytics tasks.
The terms "Big Data" and "Hadoop" are also often used in conjunction with machine learning and predictive analytics. Big Data refers to large and complex datasets that are difficult to process using traditional data management and analysis tools. Hadoop is a software framework that provides a distributed computing platform for storing, processing, and analyzing large datasets. Machine learning algorithms can be run on Hadoop to allow for scalable and efficient processing of large datasets.
In terms of implementation, the specific approach will depend on the company's needs and resources. Some companies may choose to hire consultants from the solution provider to help implement the algorithms and build predictive models. Others may have their own data science teams that can use the software and algorithms to build and deploy machine learning models. Some machine learning software may also include pre-built models and algorithms that can be used by data analysts without the need for specialized expertise in machine learning.
Overall, there are many factors to consider when implementing machine learning solutions in a large enterprise environment, including the specific business needs and objectives, the available data and resources, and the expertise and capabilities of the team. I hope this information is helpful, and good luck with your research!
|
Machine-learning Overview
|
This may not be the type of question to ask on SO, but I just wanted to hear what other people have to say regarding what factors to consider when implementing machine-learning algorithms in a large enterprise environment.
One of my goals is to research industry machine-learning solutions that can be tailored to my company's specific needs. Being pretty much the only person on my team who has a math background and who has done some background reading on machine-learning algorithms previously, I'm tasked with explaining/comparing machine-learning solutions in the industry. From what I've gleaned by googling around, it seems that:
a. Machine-learning and predictive analytics aren't exactly the same thing, so what's inherently different when a company offers predictive analytics software vs. machine-learning software? (e.g. IBM Predictive Analytics vs. Skytree Server)
b. A lot of popular terminology often gets entangled together, especially regarding Big Data, Hadoop, machine-learning, etc. Could anyone clarify the distinction among those terms? From what I've learned, I think the conceptual separation goes like:
Machine-learning algorithms
Software Implementation
Infrastructure to run software on large datasets (Hadoop)
c. When implementing a solution, do most companies hire consultants from the solution company to help implement the algorithms, or are most algorithms pre-built and any data analyst can use them? Or do we need a team of data scientists, even with the software, to run the algorithms and understand the output?
I know this is quite a long-winded question(s), but any info would be helpful. It's kind of difficult being the only person who remotely knows anything about this stuff, so I'd love to hear what more experienced and technical people have to say.
|
[
"It's hard to answer your question without an idea of how much data you have and what your company's needs are. This will help you narrow down what types of solutions can meet your needs. Among those, there will likely be open source solutions (Mahout perhaps), visualization solutions, and a variety of solutions to help you manage your data. \n",
"Regarding Big data/Hadoop/ML:\nBig Data is a terminology that defines the essence of data you need to deal with. Mostly, you can define big data vs. \"ordinary\" one by something that is called 3Vs - Volume, Variety and Velocity.\nThe thresholds that defines \"what is the volume necessary for big data\" aren't defined scientifically, but rather more on feasibility considerations: if you feel that the amount of data creates large overhead on maintaining regular DB (MySql etc.), then you might consider big data solutions.\nHadoop is just the most common tool designed to handle big data.\nMachine learning is subfield in data science that evolved from statistics and computer science. The idea is to let machines learn without explicitly programming it. In a nutshell, the learning method goal is to generalize past data in order to predict new data.\nBig data and machine learning are mentioned together because the nature of ML techniques that requires data in order to learn. There is a trend towards big data in the industry and the nature in big data requires feeding ML algorithms a lot of data in order for it to learn (unstructured sparse data).\nMost companies hire data scientists in order to deal with this tasks as it requires a lot of knowledge in statistics, computer science, algorithms etc. that regular data analysts don't have.\nMost of data scientist job is not \"running a ready algorithm\" and there is a lot of preparing and statically analyzing the data before you even start thinking about the algorithms.\nYou don't need to hire a team in advance but it's a function that can grow gradually over time based on needs.\n",
"Regarding you third part of the question: \nThere is always an initial learning curve for learning some new and powerful. The same applies to Data Modeling using Machine Learning. If you are bounded by constraints like budget it would require you to devote some time in learning the fundamentals of algorithm functionality and then its implementation. However, if you are bound by time you may need to hire a team of data scientists / Machine learning engineers. However, in the long run it would always help if you started understanding a bit of machine learning, so that you can collaborate easily with your team. \n",
"Answering to your C part of the Question, Machine learning has prebuilt algorithms for both supervised and unsupervised methods. To have a solution for an organization we first have to understand the need of the client and before choosing the algorithm first we choose supervised learning or unsupervised learning. if the need is for supervised learning then first we have to do the feature engineering that is very important part of supervised learning, which find the attributes in the subjects that identified them from the rest. Then we choose the classification algorithm or prediction algorithm based on again the problem. For that, we have many algorithms, but choosing the best one, totally depends upon your hardware capacity and data processing capacity algorithm. we have the chart of comparison for that.\nUnsupervised learning is best when we want to identify anomalies in data or we want to cluster the data which has similar attributes.\nHope this will help you to understand the third part of your question.\n",
"It's great that you are interested in researching machine learning solutions for your company! Machine learning and predictive analytics are related, but they are not the same thing. Machine learning algorithms are a set of mathematical models and methods that can be used to automatically learn from data, identify patterns, and make predictions. Predictive analytics, on the other hand, is a broader term that refers to the use of data, statistical algorithms, and machine learning techniques to predict future outcomes based on historical data.\nIn terms of software, companies that offer predictive analytics software typically provide tools for data cleaning, preparation, and visualization, as well as various algorithms for predictive modeling. Machine learning software, on the other hand, typically focuses on providing a set of algorithms and tools for building, training, and deploying machine learning models. Some machine learning software may also include features for data preparation and visualization, but the focus is typically on the algorithms and model building rather than the broader range of predictive analytics tasks.\nThe terms \"Big Data\" and \"Hadoop\" are also often used in conjunction with machine learning and predictive analytics. Big Data refers to large and complex datasets that are difficult to process using traditional data management and analysis tools. Hadoop is a software framework that provides a distributed computing platform for storing, processing, and analyzing large datasets. Machine learning algorithms can be run on Hadoop to allow for scalable and efficient processing of large datasets.\nIn terms of implementation, the specific approach will depend on the company's needs and resources. Some companies may choose to hire consultants from the solution provider to help implement the algorithms and build predictive models. Others may have their own data science teams that can use the software and algorithms to build and deploy machine learning models. Some machine learning software may also include pre-built models and algorithms that can be used by data analysts without the need for specialized expertise in machine learning.\nOverall, there are many factors to consider when implementing machine learning solutions in a large enterprise environment, including the specific business needs and objectives, the available data and resources, and the expertise and capabilities of the team. I hope this information is helpful, and good luck with your research!\n"
] |
[
1,
1,
1,
0,
0
] |
[] |
[] |
[
"analytics",
"enterprise",
"machine_learning"
] |
stackoverflow_0016089013_analytics_enterprise_machine_learning.txt
|
Q:
How do I separate text after using BeautifulSoup in order to plot?
I am trying to make a program that scrapes the data from Open Insider and takes that data and plots it. Open Insider shows which company insiders are buying or selling the stock. I want to be able to show, in an easy-to-read format, the company, the insider type, and how much of the stock was purchased.
Here is my code so far:
from bs4 import BeautifulSoup
import requests
page = requests.get("http://openinsider.com/top-insider-purchases-of-the-month")
'''print(page.status_code)
checks to see if the page was downloaded successfully'''
soup = BeautifulSoup(page.content,'html.parser')
table = soup.find(class_="tinytable")
data = table.get_text()
#results = data.prettify
print(data, '\n')
Here is an example of some of the results:
X
Filing Date
Trade Date
Ticker
Company NameInsider NameTitle
Trade Type
Price
Qty
Owned
ΔOwn
Value
1d
1w
1m
6m
2022-12-01 16:10:122022-11-30 AKUSAkouos, Inc.Kearny Acquisition Corp10%P - Purchase$12.50+29,992,668100-100%+$374,908,350
2022-11-30 20:57:192022-11-29 HHCHoward Hughes CorpPershing Square Capital Management, L.P.Dir, 10%P - Purchase$70.00+1,560,20515,180,369+11%+$109,214,243
2022-12-02 17:29:182022-12-02 IOVAIovance Biotherapeutics, Inc.Rothbaum Wayne P.DirP - Purchase$6.50+10,000,00018,067,333+124%+$65,000,000
However, for me each year starts a new line.
Is there a better way to use BeautifulSoup? Or is there an easy way to sort through this data and retrieve the specific information I am looking for? Thank You in advance I have been stuck on this for a while.
A:
Do what Julian said, then store the values in a dict, load them into a Pandas DataFrame, and visualize it with plotly.express.
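A minimal sketch of that pipeline, assuming the table parses into the headers shown in the question (adjust the "Ticker" and "Value" column names to whatever df.columns actually prints, and reconcile header/cell counts against the live page):
import pandas as pd
import plotly.express as px
import requests
from bs4 import BeautifulSoup

page = requests.get("http://openinsider.com/top-insider-purchases-of-the-month")
soup = BeautifulSoup(page.content, 'html.parser')
table = soup.find(class_="tinytable")

# Collect one list of cell texts per body row instead of one flat string
headers = [th.get_text(strip=True) for th in table.find_all("th")]
rows = [[td.get_text(strip=True) for td in tr.find_all("td")]
        for tr in table.find_all("tr") if tr.find_all("td")]

df = pd.DataFrame(rows, columns=headers[:len(rows[0])])

# "Value" arrives as text like "+$374,908,350"; strip symbols before plotting
df["Value"] = df["Value"].str.replace(r"[+$,]", "", regex=True).astype(float)
fig = px.bar(df, x="Ticker", y="Value")
fig.show()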
|
How do I separate text after using BeautifulSoup in order to plot?
|
I am trying to make a program that scrapes the data from Open Insider and takes that data and plots it. Open Insider shows which company insiders are buying or selling the stock. I want to be able to show, in an easy-to-read format, the company, the insider type, and how much of the stock was purchased.
Here is my code so far:
from bs4 import BeautifulSoup
import requests
page = requests.get("http://openinsider.com/top-insider-purchases-of-the-month")
'''print(page.status_code)
checks to see if the page was downloaded successfully'''
soup = BeautifulSoup(page.content,'html.parser')
table = soup.find(class_="tinytable")
data = table.get_text()
#results = data.prettify
print(data, '\n')
Here is an example of some of the results:
X
Filing Date
Trade Date
Ticker
Company NameInsider NameTitle
Trade Type
Price
Qty
Owned
ΔOwn
Value
1d
1w
1m
6m
2022-12-01 16:10:122022-11-30 AKUSAkouos, Inc.Kearny Acquisition Corp10%P - Purchase$12.50+29,992,668100-100%+$374,908,350
2022-11-30 20:57:192022-11-29 HHCHoward Hughes CorpPershing Square Capital Management, L.P.Dir, 10%P - Purchase$70.00+1,560,20515,180,369+11%+$109,214,243
2022-12-02 17:29:182022-12-02 IOVAIovance Biotherapeutics, Inc.Rothbaum Wayne P.DirP - Purchase$6.50+10,000,00018,067,333+124%+$65,000,000
However, for me each year starts a new line.
Is there a better way to use BeautifulSoup? Or is there an easy way to sort through this data and retrieve the specific information I am looking for? Thank You in advance I have been stuck on this for a while.
|
[
"What Julian said then store values in a dict, load it into a Pandas dataframe and visualize it with plotly.express.\n"
] |
[
0
] |
[] |
[] |
[
"beautifulsoup",
"python"
] |
stackoverflow_0074663423_beautifulsoup_python.txt
|
Q:
ActivityPluginBinding is returning null
I am trying to show an AlertDialog in my Flutter plugin, so I need the Activity context. I tried using the Application context; however, I was greeted with this fine error and learned that I must use the Activity context.
android.view.WindowManager$BadTokenException: Unable to add window -- token null is not valid; is your activity running?
For some reason when I call getActivity() it always returns null. I was wondering if anyone could give me some pointers as to why this is happening. Here is my Plugin class I cleaned it up so it only contains the ActivityAware code. Did I not implement something correctly? Any help would be much appreciated!
public class MyPlugin implements FlutterPlugin, MethodCallHandler, ActivityAware {
private ActivityPluginBinding activityBinding;
private FlutterPluginBinding flutterBinding;
@Override
public void onAttachedToEngine(@NonNull FlutterPluginBinding flutterPluginBinding) {
flutterBinding = flutterPluginBinding;
}
@Override
public void onAttachedToActivity(ActivityPluginBinding binding) {
activityBinding = binding;
}
@Override
public void onDetachedFromActivity() {
activityBinding = null;
}
@Override
public void onReattachedToActivityForConfigChanges(ActivityPluginBinding binding) {
activityBinding = binding;
}
@Override
public void onDetachedFromActivityForConfigChanges() {
activityBinding = null;
}
// Implementation
public Context getApplicationContext() {
return (flutterBinding != null) ? flutterBinding.getApplicationContext() : null;
}
public Activity getActivity() {
return (activityBinding != null) ? activityBinding.getActivity() : null;
}
}
A:
Those objects might have been cleared by the time they are accessed. You can keep a WeakReference to the application context and the activity and access them later. This way you will avoid memory leaks as well.
Something like
private WeakReference<Context> weakApplicationContext;
private WeakReference<Activity> weakActivity;
and then
@Override
public void onAttachedToEngine(@NonNull FlutterPluginBinding flutterPluginBinding) {
weakApplicationContext = new WeakReference<>(flutterPluginBinding.getApplicationContext());
}
@Override
public void onAttachedToActivity(ActivityPluginBinding binding) {
weakActivity = new WeakReference<>(binding.getActivity());
}
Then access them as
public Activity getActivity() {
return weakActivity.get();
}
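With the activity stored this way, showing the dialog becomes a null-checked call against it - a sketch of the original AlertDialog use case, with placeholder title and message:
Activity activity = getActivity();
if (activity != null) {
    // Dialogs need the Activity context, not the Application context
    new AlertDialog.Builder(activity)
            .setTitle("Title")
            .setMessage("Message")
            .setPositiveButton(android.R.string.ok, null)
            .show();
}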
|
ActivityPluginBinding is returning null
|
I am trying to show an AlertDialog in my Flutter plugin, so I need the Activity context. I tried using the Application context; however, I was greeted with this fine error and learned that I must use the Activity context.
android.view.WindowManager$BadTokenException: Unable to add window -- token null is not valid; is your activity running?
For some reason when I call getActivity() it always returns null. I was wondering if anyone could give me some pointers as to why this is happening. Here is my Plugin class I cleaned it up so it only contains the ActivityAware code. Did I not implement something correctly? Any help would be much appreciated!
public class MyPlugin implements FlutterPlugin, MethodCallHandler, ActivityAware {
private ActivityPluginBinding activityBinding;
private FlutterPluginBinding flutterBinding;
@Override
public void onAttachedToEngine(@NonNull FlutterPluginBinding flutterPluginBinding) {
flutterBinding = flutterPluginBinding;
}
@Override
public void onAttachedToActivity(ActivityPluginBinding binding) {
activityBinding = binding;
}
@Override
public void onDetachedFromActivity() {
activityBinding = null;
}
@Override
public void onReattachedToActivityForConfigChanges(ActivityPluginBinding binding) {
activityBinding = binding;
}
@Override
public void onDetachedFromActivityForConfigChanges() {
activityBinding = null;
}
// Implementation
public Context getApplicationContext() {
return (flutterBinding != null) ? flutterBinding.getApplicationContext() : null;
}
public Activity getActivity() {
return (activityBinding != null) ? activityBinding.getActivity() : null;
}
}
|
[
"Those objects might have cleared by time they are accessed. You can keep WeakReference to Application context and Activity context and access them later. This way you will avoid memory leak as well.\nSomething like\nprivate WeakReference<Context> weakApplicationContext;\nprivate WeakReference<Activity> weakActivity;\n\nand then\n @Override\n public void onAttachedToEngine(@NonNull FlutterPluginBinding flutterPluginBinding) {\n weakApplicationContext = new WeakReference<>(flutterPluginBinding.getApplicationContext());\n }\n\n @Override\n public void onAttachedToActivity(ActivityPluginBinding binding) {\n weakActivit = new WeakReference<>(binding.getActivity());\n }\n\nThen access them as\npublic Activity getActivity() {\n return weakActivity.get();\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"android",
"dart",
"flutter",
"java"
] |
stackoverflow_0074663142_android_dart_flutter_java.txt
|
Q:
Machine learning-based edge detector
I have read the following blog on edge detection using machine learning. They
used a modern machine learning-based algorithm. The algorithm is trained on images where humans annotate the most significant edges and object boundaries. Given this labeled dataset, a machine learning model is trained to predict the probability of each pixel in an image belonging to an object boundary.
I would like to implement this technique using OpenCV.
Does anybody have an idea or know how this method can be implemented/developed using OpenCV?
How can we annotate the most significant edges and object boundaries for use with a machine learning algorithm?
A:
There are strong edge detection algorithms in OpenCV. The famous one is the Hough transform (detecting conjunctions of lines, as described in the blog). Most strong edge detectors are based on the gradient - in the x direction, the y direction, or both. I want to introduce you to the Sobel edge detector and the Laplacian; both are provided in OpenCV.
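For reference, a minimal sketch of those classical detectors using OpenCV's Python bindings (the file names are placeholders):
import cv2

img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Gradient-based edges: Sobel in x and y, plus the Laplacian
sobel_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
sobel_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
laplacian = cv2.Laplacian(img, cv2.CV_64F)

# Canny combines gradients with hysteresis thresholding
edges = cv2.Canny(img, 100, 200)
cv2.imwrite("edges.png", edges)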
For the learning-based case you want, the problem is tricky and depends on your dataset. Many papers have been published on this important problem; among them I provide you the following links:
Dollar slides at U. Toronto
Review Paper from U. twente
Github repository s9xie
I hope this helps you.
A:
Now you have to obtain a dataset with edges annotated in them, like Dropbox did. This would then be your starting point. Then you can learn about neural networks in the documentation section of #deep-learning. So-called U-shaped networks are the current state of the art in segmentation, as shown in (https://github.com/EdwardTyantov/ultrasound-nerve-segmentation). This can easily be adapted to your task.
Still, I would imagine that annotating thousands of images is not really what you were looking for. If you do not want to learn the edge detector, you can use something more classical like Canny or Sobel (https://en.wikipedia.org/wiki/Edge_detection), as is stated in the blog post you provided.
A:
To implement edge detection using machine learning with OpenCV, you can follow these steps:
Collect a dataset of images that contain the objects and boundaries you want to detect.
Annotate the images by manually marking the most significant edges and object boundaries. This can be done using a tool like LabelImg (https://github.com/tzutalin/labelImg) to create a set of labeled data that can be used to train a machine learning model.
Use OpenCV's machine learning libraries, such as cv2.ml, to train a machine learning model using the labeled data. This can be done using a supervised learning algorithm, such as a support vector machine (SVM) or a convolutional neural network (CNN), to predict the probability of each pixel in an image belonging to an object boundary (see the sketch after this list).
Once the machine learning model is trained, you can use OpenCV's image processing functions, such as cv2.Canny or cv2.Sobel, to apply the model to a new image and detect the edges and object boundaries.
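As a rough illustration of step 3, here is a minimal cv2.ml sketch; the per-pixel features and labels are random placeholders, and a real boundary detector would use much richer features:
import cv2
import numpy as np

# Placeholder training data: one feature vector per pixel, 0/1 boundary label
features = np.float32(np.random.rand(100, 9))
labels = np.int32(np.random.randint(0, 2, size=(100, 1)))

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_RBF)
svm.train(features, cv2.ml.ROW_SAMPLE, labels)

_, predictions = svm.predict(features)  # 0/1 boundary prediction per pixel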
I hope this helps! Let me know if you have any other questions.
|
Machine learning-based edge detector
|
I have read the following blog on edge detection using machine learning. They
used a modern machine learning-based algorithm. The algorithm is trained on images where humans annotate the most significant edges and object boundaries. Given this labeled dataset, a machine learning model is trained to predict the probability of each pixel in an image belonging to an object boundary.
I would like to implement this technique using OpenCV.
Does anybody have an idea or know how this method can be implemented/developed using OpenCV?
How can we annotate the most significant edges and object boundaries for use with a machine learning algorithm?
|
[
"There are strong edge detection algorithms on opncv. The famous on is Hough transform (conjuction of lines which is described on the blog). Most of the strong edge detectors are based on gradient; gradient in x or y or both direction. I want to introduce you Sobel edge detector and Laplacian. both are provided in opencv.\nFor the case that you want problem is tricky and depends on your date set. There are many papers published for this important problem among those I provide you the following links:\n\nDollar slides at U. Toronto\nReview Paper from U. twente\nGithub repository s9xie \n\nI hope this helps you. \n",
"Now you have to obtain a dataset with edges annotated in them, like dropbox did. This would then be your starting point. Then you can learn about neural networks in the documentation section of #deep-learning. So called U-shaped networks are the current state-of-the-art in segmentation like shown in (https://github.com/EdwardTyantov/ultrasound-nerve-segmentation). This can easily be adopted to be used for your task.\nStill I would imagine that annotating thousands of images is not really what you were looking for. If you do not want to learn the edge detector you can use something more classical like canny or sobel (https://en.wikipedia.org/wiki/Edge_detection), as is stated in the blog post you provided.\n",
"To implement edge detection using machine learning with OpenCV, you can follow these steps:\n\nCollect a dataset of images that contain the objects and boundaries you want to detect.\n\nAnnotate the images by manually marking the most significant edges and object boundaries. This can be done using a tool like LabelImg (https://github.com/tzutalin/labelImg) to create a set of labeled data that can be used to train a machine learning model.\n\nUse OpenCV's machine learning libraries, such as cv2.ml, to train a machine learning model using the labeled data. This can be done using a supervised learning algorithm, such as a support vector machine (SVM) or a convolutional neural network (CNN), to predict the probability of each pixel in an image belonging to an object boundary.\n\nOnce the machine learning model is trained, you can use OpenCV's image processing functions, such as cv2.Canny or cv2.Sobel, to apply the model to a new image and detect the edges and object boundaries.\n\n\nI hope this helps! Let me know if you have any other questions.\n"
] |
[
3,
1,
0
] |
[] |
[] |
[
"deep_learning",
"image_processing",
"machine_learning",
"opencv",
"signal_processing"
] |
stackoverflow_0043299083_deep_learning_image_processing_machine_learning_opencv_signal_processing.txt
|
Q:
KAA Machine learning
How can a machine learning algorithm (present in Spark MLlib) be applied to data collected from sensors in KAA? I haven't found any such use case built on KAA. My requirement is to collect live streams of data, process and clean them, and apply a machine learning algorithm in KAA.
I have done this by collecting the data using Apache nifi and through Kafka passing the data to Spark Streaming Application on which I am applying the machine learning algorithm.
I want to perform the same in KAA as an IoT platform.
A:
To apply a machine learning algorithm to data collected from sensors in KAA, you can follow these steps:
Collect the data from the sensors using KAA's IoT platform capabilities, such as the Device SDK and the Endpoint Registry.
Use KAA's data processing and integration features, such as the Data Collection and Integration (DCI) module, to clean and prepare the data for machine learning.
Use the KAA Spark Integration module to connect to a Spark cluster and apply the machine learning algorithm from Spark MLlib to the processed data.
Use KAA's data visualization and analytics capabilities, such as the Analytics module, to view and analyze the results of the machine learning algorithm.
I hope this helps! Let me know if you have any other questions.
|
KAA Machine learning
|
How can a machine learning algorithm (present in Spark MLlib) be applied to data collected from sensors in KAA? I haven't found any such use case built on KAA. My requirement is to collect live streams of data, process and clean them, and apply a machine learning algorithm in KAA.
I have done this by collecting the data using Apache nifi and through Kafka passing the data to Spark Streaming Application on which I am applying the machine learning algorithm.
I want to perform the same in KAA as an IoT platform.
|
[
"To apply a machine learning algorithm to data collected from sensors in KAA, you can follow these steps:\n\nCollect the data from the sensors using KAA's IoT platform capabilities, such as the Device SDK and the Endpoint Registry.\n\nUse KAA's data processing and integration features, such as the Data Collection and Integration (DCI) module, to clean and prepare the data for machine learning.\n\nUse the KAA Spark Integration module to connect to a Spark cluster and apply the machine learning algorithm from Spark MLlib to the processed data.\n\nUse KAA's data visualization and analytics capabilities, such as the Analytics module, to view and analyze the results of the machine learning algorithm.\n\n\nI hope this helps! Let me know if you have any other questions.\n"
] |
[
0
] |
[] |
[] |
[
"kaa",
"machine_learning"
] |
stackoverflow_0045300242_kaa_machine_learning.txt
|
Q:
What is a pandas.core.Frame.DataFrame, and how to convert it to pd.DataFrame?
I am currently trying to do a machine learning classification of 6 time series datasets (in .csv format) using MiniRocket, an sktime machine learning package. However, when I imported the .csv files using pd.read_csv and ran them through MiniRocket, the error "TypeError: X must be in an sktime compatible format" popped up, saying that the following data types are sktime compatible:
['pd.Series', 'pd.DataFrame', 'np.ndarray', 'nested_univ', 'numpy3D', 'pd-multiindex', 'df-list', 'pd_multiindex_hier']
Then I checked the data type of my imported .csv files and got "pandas.core.Frame.DataFrame", which is a data type that I never saw before and is obviously different from the sktime compatible pd.DataFrame. What is the difference between pandas.core.Frame.DataFrame and pd.DataFrame, and how to convert pandas.core.Frame.DataFrame to the sktime compatible pd.DataFrame?
I tried to convert pandas.core.Frame.DataFrame to pd.DataFrame using df.join and df.pop functions, but neither of them was able to convert my data from pandas.core.Frame.DataFrame to pd.DataFrame (after conversion I checked the type again and it is still the same).
A:
If you just take the values from your old DataFrame with .values, you can create a new DataFrame the standard way. If you want to keep the same columns and index values, just set those when you declare your new DataFrame.
df_new = pd.DataFrame(df_old.values, columns=df_old.columns, index=df_old.index)
A:
Most of the pandas classes are defined under the pandas.core folder: https://github.com/pandas-dev/pandas/tree/main/pandas/core.
For example, the class DataFrame is defined in pandas/core/frame.py:
class DataFrame(NDFrame, OpsMixin):
    ...

    def __init__(...):
        ...
Pandas is not yet a py.typed library (PEP 561); hence the public API documentation uses pandas.DataFrame, but internally all error messages still refer to the source file structure, such as pandas.core.frame.DataFrame.
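A quick check (data.csv is just a placeholder) showing that the two names refer to the same class:
import pandas as pd

df = pd.read_csv("data.csv")
print(type(df))                  # <class 'pandas.core.frame.DataFrame'>
print(type(df) is pd.DataFrame)  # True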
|
What is a pandas.core.Frame.DataFrame, and how to convert it to pd.DataFrame?
|
I am currently trying to do a machine learning classification of 6 time series datasets (in .csv format) using MiniRocket, an sktime machine learning package. However, when I imported the .csv files using pd.read_csv and ran them through MiniRocket, the error "TypeError: X must be in an sktime compatible format" popped up, saying that the following data types are sktime compatible:
['pd.Series', 'pd.DataFrame', 'np.ndarray', 'nested_univ', 'numpy3D', 'pd-multiindex', 'df-list', 'pd_multiindex_hier']
Then I checked the data type of my imported .csv files and got "pandas.core.Frame.DataFrame", which is a data type that I never saw before and is obviously different from the sktime compatible pd.DataFrame. What is the difference between pandas.core.Frame.DataFrame and pd.DataFrame, and how to convert pandas.core.Frame.DataFrame to the sktime compatible pd.DataFrame?
I tried to convert pandas.core.Frame.DataFrame to pd.DataFrame using df.join and df.pop functions, but neither of them was able to convert my data from pandas.core.Frame.DataFrame to pd.DataFrame (after conversion I checked the type again and it is still the same).
|
[
"If you just take the values from your old DataFrame with .values, you can create a new DataFrame the standard way. If you want to keep the same columns and index values, just set those when you declare your new DataFrame.\ndf_new = pd.DataFrame(df_old.values, columns=df_old.columns, index=df_old.index)\n\n",
"Most of the pandas classes are defined under pandas.core folder: https://github.com/pandas-dev/pandas/tree/main/pandas/core.\nFor example, class DataFrame is defined in pandas.core.frame.py:\nclass DataFrame(NDFrame, OpsMixin):\n ...\n\ndef __init__(...)\n ...\n\nPandas is not yet a py.typed library PEP 561, hence the public API documentation uses pandas.DataFrame but internally all error messages still refer to the source file structure such as pandas.core.frame.DataFrame.\n"
] |
[
0,
0
] |
[] |
[] |
[
"csv",
"dataframe",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074663328_csv_dataframe_pandas_python_python_3.x.txt
|
Q:
Machine learning in Clojure
We have theano and numpy in Python to do symbolic and numeric computations, optimising our Machine Learning computations (eg: Matrix multiplications and GPU usage). What are the relevant tools in Clojure to do Machine learning (or at least things like matrix multiplications)?
A:
An important library / tool for mathematical operations, statistics and more in Clojure is incanter. There is also clatrix wrapping jBlas for matrix operations.
With regard to machine learning in general, there are at least two libraries interfacing / wrapping Apache Spark, which includes MLlib for machine learning: there are sparkling and flambo. clj-ml is basically a wrapper around Weka plus some additions. Finally, clojure-opennlp is a wrapper around OpenNLP, an NLP toolkit comparable to NLTK in Python.
This list of ML tools provides quite some more links.
A:
For the matrix/vector side there's core.matrix, a pluggable library with an implementation at
vectorz-clj that is being actively developed; other high-performance libraries exist. Usage from the readme:
(def M (matrix [[1 2] [3 4]]))
(def v (matrix [1 2]))
(mul M v)
=> #<Matrix22 [[1.0,4.0],[3.0,8.0]]>
A 'mentor' to the project mentioned in an answer to this SO question that GPU support was a target, but there is no mention of it in the docs.
What kind of specific functionality do you need as your question is a bit broad? Have you tried anything?
A:
In Clojure, there are several tools that can be used for machine learning and numeric computations, such as matrix multiplications. Some of the most commonly used libraries include:
Incanter: Incanter is a data analysis and visualization library for Clojure, which provides a wide range of numerical and statistical functions for working with data. It includes support for matrix operations, linear algebra, and other mathematical operations that are commonly used in machine learning.
core.matrix: core.matrix is a library for working with multidimensional arrays in Clojure. It provides a uniform API over different underlying implementations (such as vectorz-clj and Clatrix), and includes support for matrix operations, linear algebra, and other mathematical operations.
Breeze: Breeze is a numerical library for Scala, which provides a wide range of mathematical and statistical functions for working with data. It includes support for matrix operations, linear algebra, and other mathematical operations that are commonly used in machine learning. Breeze can be used in Clojure through interop with the Java Virtual Machine (JVM).
I hope this helps! Let me know if you have any other questions.
|
Machine learning in Clojure
|
We have theano and numpy in Python to do symbolic and numeric computations, optimising our Machine Learning computations (eg: Matrix multiplications and GPU usage). What are the relevant tools in Clojure to do Machine learning (or at least things like matrix multiplications)?
|
[
"An important library / tool for mathematical operations, statistics and more in Clojure is incanter. There is also clatrix wrapping jBlas for matrix operations.\nWith regard to machine learning in general, there are at least two libraries interfacing / wrapping Apache Spark which includes MLlib for machine learning: there is sparking and flambo. clj-ml is basically a wrapper around Weka and some additions. Finally, clojure-opennlp is a wrapper around opennlp, an NLP toolkit comparable to NLTK in Python.\nThis list of ML tools provides quite some more links.\n",
"For the matrix/vector side there's core.matrix which is a plug-able library with an implementation at \nvectorz-clj that is being actively developed, and other high perf libs exist. Usage from the readme:\n(def M (matrix [[1 2] [3 4]]))\n(def v (matrix [1 2]))\n(mul M v)\n=> #<Matrix22 [[1.0,4.0],[3.0,8.0]]>\n\nA 'mentor' to the project mentioned on an answer to this SO question that GPU was a target, but no mention of it in the docs.\nWhat kind of specific functionality do you need as your question is a bit broad? Have you tried anything?\n",
"In Clojure, there are several tools that can be used for machine learning and numeric computations, such as matrix multiplications. Some of the most commonly used libraries include:\n\nIncanter: Incanter is a data analysis and visualization library for Clojure, which provides a wide range of numerical and statistical functions for working with data. It includes support for matrix operations, linear algebra, and other mathematical operations that are commonly used in machine learning.\n\ncore.matrix: core.matrix is a library for working with multidimensional arrays in Clojure. It provides a uniform API for working with arrays from different numerical libraries, such as NumPy, and includes support for matrix operations, linear algebra, and other mathematical operations.\n\nBreeze: Breeze is a numerical library for Scala, which provides a wide range of mathematical and statistical functions for working with data. It includes support for matrix operations, linear algebra, and other mathematical operations that are commonly used in machine learning. Breeze can be used in Clojure through interop with the Java Virtual Machine (JVM).\n\n\nI hope this helps! Let me know if you have any other questions.\n"
] |
[
6,
3,
0
] |
[] |
[] |
[
"clojure",
"machine_learning",
"numeric",
"symbolic_math"
] |
stackoverflow_0030851073_clojure_machine_learning_numeric_symbolic_math.txt
|
Q:
Using a hashmap instead of a 2d array for a grid, Javascript
Is it even possible to use an O(1)-type hash for a grid instead of a 2D array in Javascript?
2d array version that you would have to loop over every node to find a value:
let gridArr = [
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
];
Not sure how you create a grid with a hash for O(1) lookup?
let gridObj = [
{},
{},
{},
{},
{},
]
Is it possible to create a grid and have O(1) time and O(1) space complexity here?
A:
I can't tell exactly what the interview question was from your description, but O(1) coordinate space is standard in graphics programming. It works by converting between XY coordinates and a linear index.
const width = 100, height = 100;
const arrayLength = width * height;
const array = new Float32Array( arrayLength );
function getValueAt( x, y ) {
const arrayIndex = x + y * width;
return array[ arrayIndex ];
}
function setValueAt( x, y, value ) {
const arrayIndex = x + y * width;
array[ arrayIndex ] = value;
}
If you just need to look up an item's location from a hash map... You just build a hash map of its coordinates.
const width = 100, height = 100;
const arrayLength = width * height;
const array = new Array( arrayLength );
const hashMap = new Map();
function getValueAt( x, y ) {
const arrayIndex = x + y * width;
return array[ arrayIndex ];
}
function setValueAt( x, y, value ) {
const arrayIndex = x + y * width;
array[ arrayIndex ] = value;
hashMap.set( value, { x, y } );
}
function getCoordinatesOfValue( value ) {
return hashMap.get( value );
}
|
Using a hashmap instead of a 2d array for a grid, Javascript
|
Is it even possible to use an O(1)-type hash for a grid instead of a 2D array in Javascript?
2d array version that you would have to loop over every node to find a value:
let gridArr = [
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
];
Not sure how you create a grid with a hash for O(1) lookup?
let gridObj = [
{},
{},
{},
{},
{},
]
Is it possible to create a grid and have O(1) time and O(1) space complexity here?
|
[
"I can't tell exactly what the interview question was from your description, but O(1) coordinate space is standard in graphics programming. It works by converting between XY coordinates and a linear index.\n\nconst width = 100, height = 100;\n\nconst arrayLength = width * height;\n\nconst array = new Float32Array( arrayLength );\n\nfunction getValueAt( x, y ) {\n const arrayIndex = x + y * width;\n return array[ arrayIndex ];\n}\nfunction setValueAt( x, y, value ) {\n const arrayIndex = x + y * width;\n array[ arrayIndex ] = value;\n}\n\n\nIf you just need to look up an item's location from a hash map... You just build a hash map of its coordinates.\n\nconst width = 100, height = 100;\nconst arrayLength = width * height;\nconst array = new Array( arrayLength );\nconst hashMap = new Map();\n\nfunction getValueAt( x, y ) {\n const arrayIndex = x + y * width;\n return array[ arrayIndex ];\n}\nfunction setValueAt( x, y, value ) {\n const arrayIndex = x + y * width;\n array[ arrayIndex ] = value;\n hashMap.set( value, { x, y } );\n}\nfunction getCoordinatesOfValue( value ) {\n return hashMap.get( value );\n}\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"multidimensional_array",
"object"
] |
stackoverflow_0074662020_javascript_multidimensional_array_object.txt
|
Q:
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?
In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pool.
According to A guide to convolution arithmetic for deep learning, it says that there will be no padding in pool operator, i.e. just use 'VALID' of tensorflow.
But what is 'SAME' padding of max pool in tensorflow?
A:
If you like ascii art:
"VALID" = without padding:
inputs: 1 2 3 4 5 6 7 8 9 10 11 (12 13)
|________________| dropped
|_________________|
"SAME" = with zero padding:
pad| |pad
inputs: 0 |1 2 3 4 5 6 7 8 9 10 11 12 13|0 0
|________________|
|_________________|
|________________|
In this example:
Input width = 13
Filter width = 6
Stride = 5
Notes:
"VALID" only ever drops the right-most columns (or bottom-most rows).
"SAME" tries to pad evenly left and right, but if the amount of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).
Edit:
About the name:
With "SAME" padding, if you use a stride of 1, the layer's outputs will have the same spatial dimensions as its inputs.
With "VALID" padding, there's no "made-up" padding inputs. The layer only uses valid input data.
A:
When stride is 1 (more typical with convolution than pooling), we can think of the following distinction:
"SAME": output size is the same as input size. This requires the filter window to slip outside input map, hence the need to pad.
"VALID": Filter window stays at valid position inside input map, so output size shrinks by filter_size - 1. No padding occurs.
A:
I'll give an example to make it clearer:
x: input image of shape [2, 3], 1 channel
valid_pad: max pool with 2x2 kernel, stride 2 and VALID padding.
same_pad: max pool with 2x2 kernel, stride 2 and SAME padding (this is the classic way to go)
The output shapes are:
valid_pad: here, no padding so the output shape is [1, 1]
same_pad: here, we pad the image to the shape [2, 4] (with -inf) and then apply max pool, so the output shape is [1, 2]
x = tf.constant([[1., 2., 3.],
[4., 5., 6.]])
x = tf.reshape(x, [1, 2, 3, 1]) # give a shape accepted by tf.nn.max_pool
valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
valid_pad.get_shape() == [1, 1, 1, 1] # valid_pad is [5.]
same_pad.get_shape() == [1, 1, 2, 1] # same_pad is [5., 6.]
A:
The TensorFlow Convolution example gives an overview about the difference between SAME and VALID :
For the SAME padding, the output height and width are computed as:
out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
And
For the VALID padding, the output height and width are computed as:
out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
A:
Complementing YvesgereY's great answer, I found this visualization extremely helpful:
Padding 'valid' is the first figure. The filter window stays inside the image.
Padding 'same' is the third figure. The output is the same size.
Found it on this article
Visualization credits: vdumoulin@GitHub
A:
Padding is an operation to increase the size of the input data. In case of 1-dimensional data you just append/prepend the array with a constant, in 2-dim you surround matrix with these constants. In n-dim you surround your n-dim hypercube with the constant. In most of the cases this constant is zero and it is called zero-padding.
Here is an example of zero-padding with p=1 applied to a 2-d tensor: a 3x3 matrix becomes a 5x5 matrix, with a one-element border of zeros surrounding the original values.
You can use arbitrary padding for your kernel, but some padding values are used more frequently than others. They are:
VALID padding. The easiest case, means no padding at all. Just leave your data the same it was.
SAME padding, sometimes called HALF padding. It is called SAME because for a convolution with stride=1 (or for pooling) it should produce output of the same size as the input. It is called HALF because for a kernel of size k, the padding on each side is roughly half the kernel size, i.e. floor(k/2).
FULL padding is the maximum padding which does not result in a convolution over just padded elements. For a kernel of size k, this padding is equal to k - 1.
To use arbitrary padding in TF, you can use tf.pad()
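For example, explicit zero-padding with p=1 on every side of a 2-d tensor:
import tensorflow as tf

x = tf.constant([[1, 2], [3, 4]])
padded = tf.pad(x, paddings=[[1, 1], [1, 1]])  # [before, after] per dimension
print(padded.numpy())
# [[0 0 0 0]
#  [0 1 2 0]
#  [0 3 4 0]
#  [0 0 0 0]]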
A:
Quick Explanation
VALID: Don't apply any padding, i.e., assume that all dimensions are valid so that input image fully gets covered by filter and stride you specified.
SAME: Apply padding to input (if needed) so that input image gets fully covered by filter and stride you specified. For stride 1, this will ensure that output image size is same as input.
Notes
This applies to conv layers as well as max pool layers in same way
The term "valid" is bit of a misnomer because things don't become "invalid" if you drop part of the image. Sometime you might even want that. This should have probably be called NO_PADDING instead.
The term "same" is a misnomer too because it only makes sense for stride of 1 when output dimension is same as input dimension. For stride of 2, output dimensions will be half, for example. This should have probably be called AUTO_PADDING instead.
In SAME (i.e. auto-pad mode), Tensorflow will try to spread the padding evenly on both left and right.
In VALID (i.e. no padding mode), Tensorflow will drop right and/or bottom cells if your filter and stride don't fully cover the input image.
A:
I am quoting this answer from official tensorflow docs https://www.tensorflow.org/api_guides/python/nn#Convolution
For the 'SAME' padding, the output height and width are computed as:
out_height = ceil(float(in_height) / float(strides[1]))
out_width = ceil(float(in_width) / float(strides[2]))
and the padding on the top and left are computed as:
pad_along_height = max((out_height - 1) * strides[1] +
filter_height - in_height, 0)
pad_along_width = max((out_width - 1) * strides[2] +
filter_width - in_width, 0)
pad_top = pad_along_height // 2
pad_bottom = pad_along_height - pad_top
pad_left = pad_along_width // 2
pad_right = pad_along_width - pad_left
For the 'VALID' padding, the output height and width are computed as:
out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))
out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))
and the padding values are always zero.
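The same formulas as a small plain-Python helper, checked against the ascii-art example from the first answer (input width 13, filter width 6, stride 5):
import math

def same_output_and_padding(in_size, filter_size, stride):
    out = math.ceil(in_size / stride)
    pad = max((out - 1) * stride + filter_size - in_size, 0)
    return out, pad // 2, pad - pad // 2  # output size, pad_before, pad_after

print(same_output_and_padding(13, 6, 5))  # (3, 1, 2)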
A:
There are three choices of padding: valid (no padding), same (or half), full. You can find explanations (in Theano) here:
http://deeplearning.net/software/theano/tutorial/conv_arithmetic.html
Valid or no padding:
The valid padding involves no zero padding, so it covers only the valid input, not including artificially generated zeros. The length of output is ((the length of input) - (k-1)) for the kernel size k if the stride s=1.
Same or half padding:
The same padding makes the size of outputs be the same with that of inputs when s=1. If s=1, the number of zeros padded is (k-1).
Full padding:
The full padding means that the kernel runs over the whole input, so at the ends the kernel may meet only one input value, with zeros everywhere else. The number of zeros padded is 2(k-1) if s=1. The length of output is ((the length of input) + (k-1)) if s=1.
Therefore, the number of paddings: (valid) <= (same) <= (full)
A:
VALID padding: this is without zero padding. Hope there is no confusion.
x = tf.constant([[1., 2., 3.], [4., 5., 6.],[ 7., 8., 9.], [ 7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
valid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')
print (valid_pad.get_shape()) # output-->(1, 2, 1, 1)
SAME padding: This is kind of tricky to understand in the first place because we have to consider two conditions separately as mentioned in the official docs.
Let's take the input size as n, the output size as o, the padding as p, the stride as s and the kernel size as k (only a single dimension is considered).
Case 01: n mod s == 0: p = max(k - s, 0)
Case 02: n mod s != 0: p = max(k - (n mod s), 0)
p is calculated such that it is the minimum value that can be taken for padding. Since the value of o is known (o = ceil(n / s)), the value of p can be found from the formula (n - k + p) / s + 1 = o.
Let's work out this example:
x = tf.constant([[1., 2., 3.], [4., 5., 6.],[ 7., 8., 9.], [ 7., 8., 9.]])
x = tf.reshape(x, [1, 4, 3, 1])
same_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')
print (same_pad.get_shape()) # --> output (1, 2, 2, 1)
Here the dimension of x is (3, 4). If the horizontal direction (3) is taken: 3 mod 2 != 0, so p = max(2 - (3 mod 2), 0) = 1 and o = ceil(3 / 2) = 2.
If the vertical direction (4) is taken: 4 mod 2 == 0, so p = max(2 - 2, 0) = 0 and o = ceil(4 / 2) = 2.
Hope this will help to understand how SAME padding actually works in TF.
A:
To sum up, 'valid' padding means no padding. The output size of the convolutional layer shrinks depending on the input size & kernel size.
On the contrary, 'same' padding means using padding. When the stride is set to 1, the output size of the convolutional layer is kept equal to the input size by appending a certain number of '0-borders' around the input data when calculating the convolution.
Hope this intuitive description helps.
A:
Based on the explanation here and following up on Tristan's answer, I usually use these quick functions for sanity checks.
# a function to help us stay clean
def getPaddings(pad_along_height,pad_along_width):
# if even.. easy..
if pad_along_height%2 == 0:
pad_top = pad_along_height / 2
pad_bottom = pad_top
# if odd
else:
pad_top = np.floor( pad_along_height / 2 )
pad_bottom = np.floor( pad_along_height / 2 ) +1
# check if width padding is odd or even
# if even.. easy..
if pad_along_width%2 == 0:
pad_left = pad_along_width / 2
pad_right= pad_left
# if odd
else:
pad_left = np.floor( pad_along_width / 2 )
pad_right = np.floor( pad_along_width / 2 ) +1
#
return pad_top,pad_bottom,pad_left,pad_right
# strides [image index, y, x, depth]
# padding 'SAME' or 'VALID'
# bottom and right sides always get the one additional padded pixel (if padding is odd)
def getOutputDim (inputWidth,inputHeight,filterWidth,filterHeight,strides,padding):
if padding == 'SAME':
out_height = np.ceil(float(inputHeight) / float(strides[1]))
out_width = np.ceil(float(inputWidth) / float(strides[2]))
#
pad_along_height = ((out_height - 1) * strides[1] + filterHeight - inputHeight)
pad_along_width = ((out_width - 1) * strides[2] + filterWidth - inputWidth)
#
# now get padding
pad_top,pad_bottom,pad_left,pad_right = getPaddings(pad_along_height,pad_along_width)
#
print('output height', out_height)
print('output width', out_width)
print('total pad along height', pad_along_height)
print('total pad along width', pad_along_width)
print('pad at top', pad_top)
print('pad at bottom', pad_bottom)
print('pad at left', pad_left)
print('pad at right', pad_right)
elif padding == 'VALID':
out_height = np.ceil(float(inputHeight - filterHeight + 1) / float(strides[1]))
out_width = np.ceil(float(inputWidth - filterWidth + 1) / float(strides[2]))
#
print('output height', out_height)
print('output width', out_width)
print('no padding')
# use like so
getOutputDim (80,80,4,4,[1,1,1,1],'SAME')
A:
Padding on/off. Determines the effective size of your input.
VALID: No padding. Convolution etc. ops are only performed at locations that are "valid", i.e. not too close to the borders of your tensor. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the 8x8 area inside the borders.
SAME: Padding is provided. Whenever your operation references a neighborhood (no matter how big), zero values are provided when that neighborhood extends outside the original tensor to allow that operation to work also on border values. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the full 10x10 area.
A:
Here, W and H are the width and height of the input,
F are the filter dimensions,
P is the padding size (i.e., the number of rows or columns to be padded), and S is the stride.
For SAME padding: output width = ceil(W / S), with total padding P = (output width - 1) * S + F - W (and likewise for the height H).
For VALID padding: output width = floor((W - F) / S) + 1, with P = 0 (and likewise for the height H).
A:
Tensorflow 2.0 Compatible Answer: Detailed Explanations have been provided above, about "Valid" and "Same" Padding.
However, I will specify different Pooling Functions and their respective Commands in Tensorflow 2.x (>= 2.0), for the benefit of the community.
Functions in 1.x:
tf.nn.max_pool
tf.keras.layers.MaxPool2D
Average Pooling => None in tf.nn, tf.keras.layers.AveragePooling2D
Functions in 2.x:
tf.nn.max_pool if used in 2.x and tf.compat.v1.nn.max_pool_v2 or tf.compat.v2.nn.max_pool, if migrated from 1.x to 2.x.
tf.keras.layers.MaxPool2D if used in 2.x and
tf.compat.v1.keras.layers.MaxPool2D or tf.compat.v1.keras.layers.MaxPooling2D or tf.compat.v2.keras.layers.MaxPool2D or tf.compat.v2.keras.layers.MaxPooling2D, if migrated from 1.x to 2.x.
Average Pooling => tf.nn.avg_pool2d or tf.keras.layers.AveragePooling2D if used in TF 2.x and
tf.compat.v1.nn.avg_pool_v2 or tf.compat.v2.nn.avg_pool or tf.compat.v1.keras.layers.AveragePooling2D or tf.compat.v1.keras.layers.AvgPool2D or tf.compat.v2.keras.layers.AveragePooling2D or tf.compat.v2.keras.layers.AvgPool2D , if migrated from 1.x to 2.x.
For more information about Migration from Tensorflow 1.x to 2.x, please refer to this Migration Guide.
A:
valid padding is no padding.
same padding is padding in a way the output has the same size as input.
A:
In the TensorFlow function tf.nn.max_pool, the padding parameter determines how the input tensor is padded before the max pooling operation is applied. The 'VALID' padding option means that no padding will be applied to the input tensor, and the output tensor will have dimensions that are smaller than the input tensor. For example, if the input tensor has dimensions [batch_size, height, width, channels], the max pooling window has dimensions [pool_height, pool_width], and the stride is 1, then the output tensor will have dimensions [batch_size, (height - pool_height + 1), (width - pool_width + 1), channels].
The 'SAME' padding option, on the other hand, means that the input tensor will be padded with zeros in such a way that the output tensor will have the same dimensions as the input tensor. The amount of padding applied to the input tensor will depend on the dimensions of the max pooling window and the stride size. For example, if the input tensor has dimensions [batch_size, height, width, channels] and the max pooling window has dimensions [pool_height, pool_width], and the stride is set to 1, then the output tensor will also have dimensions [batch_size, height, width, channels], and the input tensor will be padded with zeros on the top, bottom, left, and right sides as needed.
In summary, the 'VALID' padding option means that no padding will be applied to the input tensor, and the output tensor will have dimensions that are smaller than the input tensor. The 'SAME' padding option means that the input tensor will be padded with zeros as needed to ensure that the output tensor has the same dimensions as the input tensor.
|
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?
|
What is the difference between 'SAME' and 'VALID' padding in tf.nn.max_pool of tensorflow?
In my opinion, 'VALID' means there will be no zero padding outside the edges when we do max pool.
According to A guide to convolution arithmetic for deep learning, it says that there will be no padding in pool operator, i.e. just use 'VALID' of tensorflow.
But what is 'SAME' padding of max pool in tensorflow?
|
[
"If you like ascii art:\n\n\"VALID\" = without padding:\n inputs: 1 2 3 4 5 6 7 8 9 10 11 (12 13)\n |________________| dropped\n |_________________|\n\n\"SAME\" = with zero padding:\n pad| |pad\n inputs: 0 |1 2 3 4 5 6 7 8 9 10 11 12 13|0 0\n |________________|\n |_________________|\n |________________|\n\n\nIn this example:\n\nInput width = 13\nFilter width = 6\nStride = 5\n\nNotes:\n\n\"VALID\" only ever drops the right-most columns (or bottom-most rows).\n\"SAME\" tries to pad evenly left and right, but if the amount of columns to be added is odd, it will add the extra column to the right, as is the case in this example (the same logic applies vertically: there may be an extra row of zeros at the bottom).\n\nEdit:\nAbout the name:\n\nWith \"SAME\" padding, if you use a stride of 1, the layer's outputs will have the same spatial dimensions as its inputs.\nWith \"VALID\" padding, there's no \"made-up\" padding inputs. The layer only uses valid input data.\n\n",
"When stride is 1 (more typical with convolution than pooling), we can think of the following distinction:\n\n\"SAME\": output size is the same as input size. This requires the filter window to slip outside input map, hence the need to pad. \n\"VALID\": Filter window stays at valid position inside input map, so output size shrinks by filter_size - 1. No padding occurs.\n\n",
"I'll give an example to make it clearer:\n\nx: input image of shape [2, 3], 1 channel\nvalid_pad: max pool with 2x2 kernel, stride 2 and VALID padding.\nsame_pad: max pool with 2x2 kernel, stride 2 and SAME padding (this is the classic way to go)\n\nThe output shapes are:\n\nvalid_pad: here, no padding so the output shape is [1, 1]\nsame_pad: here, we pad the image to the shape [2, 4] (with -inf and then apply max pool), so the output shape is [1, 2]\n\n\nx = tf.constant([[1., 2., 3.],\n [4., 5., 6.]])\n\nx = tf.reshape(x, [1, 2, 3, 1]) # give a shape accepted by tf.nn.max_pool\n\nvalid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')\nsame_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')\n\nvalid_pad.get_shape() == [1, 1, 1, 1] # valid_pad is [5.]\nsame_pad.get_shape() == [1, 1, 2, 1] # same_pad is [5., 6.]\n\n\n",
"The TensorFlow Convolution example gives an overview about the difference between SAME and VALID :\n\nFor the SAME padding, the output height and width are computed as:\n out_height = ceil(float(in_height) / float(strides[1]))\n out_width = ceil(float(in_width) / float(strides[2]))\n\n\n\nAnd\n\nFor the VALID padding, the output height and width are computed as:\n out_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))\n out_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))\n\n\n\n",
"Complementing YvesgereY's great answer, I found this visualization extremely helpful:\n\nPadding 'valid' is the first figure. The filter window stays inside the image.\nPadding 'same' is the third figure. The output is the same size.\n\nFound it on this article\nVisualization credits: vdumoulin@GitHub\n",
"Padding is an operation to increase the size of the input data. In case of 1-dimensional data you just append/prepend the array with a constant, in 2-dim you surround matrix with these constants. In n-dim you surround your n-dim hypercube with the constant. In most of the cases this constant is zero and it is called zero-padding.\nHere is an example of zero-padding with p=1 applied to 2-d tensor:\n\n\nYou can use arbitrary padding for your kernel but some of the padding values are used more frequently than others they are:\n\nVALID padding. The easiest case, means no padding at all. Just leave your data the same it was.\nSAME padding sometimes called HALF padding. It is called SAME because for a convolution with a stride=1, (or for pooling) it should produce output of the same size as the input. It is called HALF because for a kernel of size k \nFULL padding is the maximum padding which does not result in a convolution over just padded elements. For a kernel of size k, this padding is equal to k - 1.\n\n\nTo use arbitrary padding in TF, you can use tf.pad()\n",
"Quick Explanation\nVALID: Don't apply any padding, i.e., assume that all dimensions are valid so that input image fully gets covered by filter and stride you specified.\nSAME: Apply padding to input (if needed) so that input image gets fully covered by filter and stride you specified. For stride 1, this will ensure that output image size is same as input.\nNotes\n\nThis applies to conv layers as well as max pool layers in same way\nThe term \"valid\" is bit of a misnomer because things don't become \"invalid\" if you drop part of the image. Sometime you might even want that. This should have probably be called NO_PADDING instead.\nThe term \"same\" is a misnomer too because it only makes sense for stride of 1 when output dimension is same as input dimension. For stride of 2, output dimensions will be half, for example. This should have probably be called AUTO_PADDING instead.\nIn SAME (i.e. auto-pad mode), Tensorflow will try to spread padding evenly on both left and right.\nIn VALID (i.e. no padding mode), Tensorflow will drop right and/or bottom cells if your filter and stride doesn't full cover input image.\n\n",
"I am quoting this answer from official tensorflow docs https://www.tensorflow.org/api_guides/python/nn#Convolution\nFor the 'SAME' padding, the output height and width are computed as:\nout_height = ceil(float(in_height) / float(strides[1]))\nout_width = ceil(float(in_width) / float(strides[2]))\n\nand the padding on the top and left are computed as:\npad_along_height = max((out_height - 1) * strides[1] +\n filter_height - in_height, 0)\npad_along_width = max((out_width - 1) * strides[2] +\n filter_width - in_width, 0)\npad_top = pad_along_height // 2\npad_bottom = pad_along_height - pad_top\npad_left = pad_along_width // 2\npad_right = pad_along_width - pad_left\n\nFor the 'VALID' padding, the output height and width are computed as:\nout_height = ceil(float(in_height - filter_height + 1) / float(strides[1]))\nout_width = ceil(float(in_width - filter_width + 1) / float(strides[2]))\n\nand the padding values are always zero.\n",
"There are three choices of padding: valid (no padding), same (or half), full. You can find explanations (in Theano) here:\nhttp://deeplearning.net/software/theano/tutorial/conv_arithmetic.html\n\nValid or no padding:\n\nThe valid padding involves no zero padding, so it covers only the valid input, not including artificially generated zeros. The length of output is ((the length of input) - (k-1)) for the kernel size k if the stride s=1.\n\nSame or half padding:\n\nThe same padding makes the size of outputs be the same with that of inputs when s=1. If s=1, the number of zeros padded is (k-1).\n\nFull padding:\n\nThe full padding means that the kernel runs over the whole inputs, so at the ends, the kernel may meet the only one input and zeros else. The number of zeros padded is 2(k-1) if s=1. The length of output is ((the length of input) + (k-1)) if s=1.\nTherefore, the number of paddings: (valid) <= (same) <= (full)\n",
"VALID padding: this is with zero padding. Hope there is no confusion. \nx = tf.constant([[1., 2., 3.], [4., 5., 6.],[ 7., 8., 9.], [ 7., 8., 9.]])\nx = tf.reshape(x, [1, 4, 3, 1])\nvalid_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='VALID')\nprint (valid_pad.get_shape()) # output-->(1, 2, 1, 1)\n\nSAME padding: This is kind of tricky to understand in the first place because we have to consider two conditions separately as mentioned in the official docs. \nLet's take input as , output as , padding as , stride as and kernel size as (only a single dimension is considered)\nCase 01: :\nCase 02: : \n is calculated such that the minimum value which can be taken for padding. Since value of is known, value of can be found using this formula . \nLet's work out this example:\nx = tf.constant([[1., 2., 3.], [4., 5., 6.],[ 7., 8., 9.], [ 7., 8., 9.]])\nx = tf.reshape(x, [1, 4, 3, 1])\nsame_pad = tf.nn.max_pool(x, [1, 2, 2, 1], [1, 2, 2, 1], padding='SAME')\nprint (same_pad.get_shape()) # --> output (1, 2, 2, 1)\n\nHere the dimension of x is (3,4). Then if the horizontal direction is taken (3):\n\nIf the vertial direction is taken (4):\n\nHope this will help to understand how actually SAME padding works in TF. \n",
"To sum up, 'valid' padding means no padding. The output size of the convolutional layer shrinks depending on the input size & kernel size. \nOn the contrary, 'same' padding means using padding. When the stride is set as 1, the output size of the convolutional layer maintains as the input size by appending a certain number of '0-border' around the input data when calculating convolution.\nHope this intuitive description helps.\n",
"Based on the explanation here and following up on Tristan's answer, I usually use these quick functions for sanity checks.\n# a function to help us stay clean\ndef getPaddings(pad_along_height,pad_along_width):\n # if even.. easy..\n if pad_along_height%2 == 0:\n pad_top = pad_along_height / 2\n pad_bottom = pad_top\n # if odd\n else:\n pad_top = np.floor( pad_along_height / 2 )\n pad_bottom = np.floor( pad_along_height / 2 ) +1\n # check if width padding is odd or even\n # if even.. easy..\n if pad_along_width%2 == 0:\n pad_left = pad_along_width / 2\n pad_right= pad_left\n # if odd\n else:\n pad_left = np.floor( pad_along_width / 2 )\n pad_right = np.floor( pad_along_width / 2 ) +1\n #\n return pad_top,pad_bottom,pad_left,pad_right\n\n# strides [image index, y, x, depth]\n# padding 'SAME' or 'VALID'\n# bottom and right sides always get the one additional padded pixel (if padding is odd)\ndef getOutputDim (inputWidth,inputHeight,filterWidth,filterHeight,strides,padding):\n if padding == 'SAME':\n out_height = np.ceil(float(inputHeight) / float(strides[1]))\n out_width = np.ceil(float(inputWidth) / float(strides[2]))\n #\n pad_along_height = ((out_height - 1) * strides[1] + filterHeight - inputHeight)\n pad_along_width = ((out_width - 1) * strides[2] + filterWidth - inputWidth)\n #\n # now get padding\n pad_top,pad_bottom,pad_left,pad_right = getPaddings(pad_along_height,pad_along_width)\n #\n print 'output height', out_height\n print 'output width' , out_width\n print 'total pad along height' , pad_along_height\n print 'total pad along width' , pad_along_width\n print 'pad at top' , pad_top\n print 'pad at bottom' ,pad_bottom\n print 'pad at left' , pad_left\n print 'pad at right' ,pad_right\n\n elif padding == 'VALID':\n out_height = np.ceil(float(inputHeight - filterHeight + 1) / float(strides[1]))\n out_width = np.ceil(float(inputWidth - filterWidth + 1) / float(strides[2]))\n #\n print 'output height', out_height\n print 'output width' , out_width\n print 'no padding'\n\n\n# use like so\ngetOutputDim (80,80,4,4,[1,1,1,1],'SAME')\n\n",
"Padding on/off. Determines the effective size of your input.\nVALID: No padding. Convolution etc. ops are only performed at locations that are \"valid\", i.e. not too close to the borders of your tensor. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the 8x8 area inside the borders.\nSAME: Padding is provided. Whenever your operation references a neighborhood (no matter how big), zero values are provided when that neighborhood extends outside the original tensor to allow that operation to work also on border values. With a kernel of 3x3 and image of 10x10, you would be performing convolution on the full 10x10 area.\n",
"\nHere, W and H are width and height of input,\n F are filter dimensions, \n P is padding size (i.e., number of rows or columns to be padded)\nFor SAME padding: \n\nFor VALID padding: \n\n",
"Tensorflow 2.0 Compatible Answer: Detailed Explanations have been provided above, about \"Valid\" and \"Same\" Padding. \nHowever, I will specify different Pooling Functions and their respective Commands in Tensorflow 2.x (>= 2.0), for the benefit of the community.\nFunctions in 1.x:\ntf.nn.max_pool\ntf.keras.layers.MaxPool2D\nAverage Pooling => None in tf.nn, tf.keras.layers.AveragePooling2D\nFunctions in 2.x:\ntf.nn.max_pool if used in 2.x and tf.compat.v1.nn.max_pool_v2 or tf.compat.v2.nn.max_pool, if migrated from 1.x to 2.x.\ntf.keras.layers.MaxPool2D if used in 2.x and \ntf.compat.v1.keras.layers.MaxPool2D or tf.compat.v1.keras.layers.MaxPooling2D or tf.compat.v2.keras.layers.MaxPool2D or tf.compat.v2.keras.layers.MaxPooling2D, if migrated from 1.x to 2.x.\nAverage Pooling => tf.nn.avg_pool2d or tf.keras.layers.AveragePooling2D if used in TF 2.x and \ntf.compat.v1.nn.avg_pool_v2 or tf.compat.v2.nn.avg_pool or tf.compat.v1.keras.layers.AveragePooling2D or tf.compat.v1.keras.layers.AvgPool2D or tf.compat.v2.keras.layers.AveragePooling2D or tf.compat.v2.keras.layers.AvgPool2D , if migrated from 1.x to 2.x.\nFor more information about Migration from Tensorflow 1.x to 2.x, please refer to this Migration Guide.\n",
"valid padding is no padding.\nsame padding is padding in a way the output has the same size as input.\n",
"In the TensorFlow function tf.nn.max_pool, the padding parameter determines how the input tensor is padded before the max pooling operation is applied. The 'VALID' padding option means that no padding will be applied to the input tensor, and the output tensor will have dimensions that are smaller than the input tensor. For example, if the input tensor has dimensions [batch_size, height, width, channels] and the max pooling window has dimensions [pool_height, pool_width], then the output tensor will have dimensions [batch_size, (height - pool_height + 1), (width - pool_width + 1), channels].\nThe 'SAME' padding option, on the other hand, means that the input tensor will be padded with zeros in such a way that the output tensor will have the same dimensions as the input tensor. The amount of padding applied to the input tensor will depend on the dimensions of the max pooling window and the stride size. For example, if the input tensor has dimensions [batch_size, height, width, channels] and the max pooling window has dimensions [pool_height, pool_width], and the stride is set to 1, then the output tensor will also have dimensions [batch_size, height, width, channels], and the input tensor will be padded with zeros on the top, bottom, left, and right sides as needed.\nIn summary, the 'VALID' padding option means that no padding will be applied to the input tensor, and the output tensor will have dimensions that are smaller than the input tensor. The 'SAME' padding option means that the input tensor will be padded with zeros as needed to ensure that the output tensor has the same dimensions as the input tensor.\n"
] |
[
748,
200,
187,
106,
80,
59,
38,
28,
13,
12,
12,
9,
9,
9,
2,
1,
0
] |
[] |
[] |
[
"deep_learning",
"python",
"tensorflow"
] |
stackoverflow_0037674306_deep_learning_python_tensorflow.txt
|
Q:
How to color specific rows of facet_grid()
The way I have my data and figure set up now, I'm having a hard time choosing the different colors by model_type. I can get them to have different colors fairly easily by passing color = model_type to aes, but how/where can I actually specify which colors I want?
And yes, I know, there's definitely a better way of naming the facet rows and columns than the hacky way I have it set up now. But it works fairly well for everything except me trying to choose my colors.
Code:
mylabels <- c("1" = "Springfield",
"2" = "Valley",
"3" = "Glenside",
"4" = "Upper",
"5" = "Linear",
"6" = "Logit")
ggplot(dz, aes(x = var, y = coef, color = model_type,
ymin = ci_lower, ymax = ci_upper)) +
geom_point() +
geom_errorbar(width = 0.1) +
facet_grid(model_type~school_num, labeller = as_labeller(mylabels)) +
coord_flip() +
theme_bw(base_size = 14) +
theme(legend.position ="none",
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
strip.background = element_rect(fill = "white"),
)
Data:
structure(list(var = c("teacher_m", "test_score", "teacher_m",
"test_score", "teacher_m", "test_score", "teacher_m", "test_score",
"teacher_m", "test_score", "teacher_m", "test_score", "teacher_m",
"test_score", "teacher_m", "test_score"), coef = c(1.1, 0.92,
0.5, 0.45, 0.5, 0.47, 0.32, 0.5, 0.4, 0.68, 0.64, 0.25, 0.53,
0.26, 1.04, 0.62), ci_lower = c(0.99, 0.81, 0.33, 0.34, 0.34,
0.35, 0.21, 0.39, 0.29, 0.57, 0.44, 0.1, 0.42, 0.15, 0.93, 0.51
), ci_upper = c(1.21, 1.03, 0.61, 0.56, 0.61, 0.58, 0.43, 0.61,
0.51, 0.79, 0.75, 0.4, 0.6, 0.4, 1.06, 0.73), school_num = c(1,
1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4), model_type = c(5,
5, 6, 6, 5, 5, 6, 6, 5, 5, 6, 6, 5, 5, 6, 6)), class = c("spec_tbl_df",
"tbl_df", "tbl", "data.frame"), row.names = c(NA, -16L), spec = structure(list(
cols = list(var = structure(list(), class = c("collector_character",
"collector")), coef = structure(list(), class = c("collector_double",
"collector")), ci_lower = structure(list(), class = c("collector_double",
"collector")), ci_upper = structure(list(), class = c("collector_double",
"collector")), school_num = structure(list(), class = c("collector_double",
"collector")), model_type = structure(list(), class = c("collector_double",
"collector"))), default = structure(list(), class = c("collector_guess",
"collector")), skip = 1L), class = "col_spec"))
A:
You need to define model_type as a factor/character and add
+ scale_color_manual(values = vector_with_your_colors)
Code
ggplot(df, aes(x = var, y = coef, color = factor(model_type),
ymin = ci_lower, ymax = ci_upper)) +
geom_point() +
geom_errorbar(width = 0.1) +
facet_grid(model_type~school_num, labeller = as_labeller(mylabels)) +
coord_flip() +
theme_bw(base_size = 14) +
theme(#legend.position ="none",
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
strip.background = element_rect(fill = "white"),
)+
scale_color_manual(values = c("5" = "red", "6" = "blue"))
Output
|
How to color specific rows of facet_grid()
|
The way I have my data and figure set up now, I'm having a hard time choosing the different colors by model_type. I can get them to have different colors fairly easily by passing color = model_type to aes, but how/where can I actually specify which colors I want?
And yes, I know, there's definitely a better way of naming the facet rows and columns than the hacky way I have it set up now. But it works fairly well for everything except me trying to choose my colors.
Code:
mylabels <- c("1" = "Springfield",
"2" = "Valley",
"3" = "Glenside",
"4" = "Upper",
"5" = "Linear",
"6" = "Logit")
ggplot(dz, aes(x = var, y = coef, color = model_type,
ymin = ci_lower, ymax = ci_upper)) +
geom_point() +
geom_errorbar(width = 0.1) +
facet_grid(model_type~school_num, labeller = as_labeller(mylabels)) +
coord_flip() +
theme_bw(base_size = 14) +
theme(legend.position ="none",
panel.grid.major = element_blank(),
panel.grid.minor = element_blank(),
strip.background = element_rect(fill = "white"),
)
Data:
structure(list(var = c("teacher_m", "test_score", "teacher_m",
"test_score", "teacher_m", "test_score", "teacher_m", "test_score",
"teacher_m", "test_score", "teacher_m", "test_score", "teacher_m",
"test_score", "teacher_m", "test_score"), coef = c(1.1, 0.92,
0.5, 0.45, 0.5, 0.47, 0.32, 0.5, 0.4, 0.68, 0.64, 0.25, 0.53,
0.26, 1.04, 0.62), ci_lower = c(0.99, 0.81, 0.33, 0.34, 0.34,
0.35, 0.21, 0.39, 0.29, 0.57, 0.44, 0.1, 0.42, 0.15, 0.93, 0.51
), ci_upper = c(1.21, 1.03, 0.61, 0.56, 0.61, 0.58, 0.43, 0.61,
0.51, 0.79, 0.75, 0.4, 0.6, 0.4, 1.06, 0.73), school_num = c(1,
1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4), model_type = c(5,
5, 6, 6, 5, 5, 6, 6, 5, 5, 6, 6, 5, 5, 6, 6)), class = c("spec_tbl_df",
"tbl_df", "tbl", "data.frame"), row.names = c(NA, -16L), spec = structure(list(
cols = list(var = structure(list(), class = c("collector_character",
"collector")), coef = structure(list(), class = c("collector_double",
"collector")), ci_lower = structure(list(), class = c("collector_double",
"collector")), ci_upper = structure(list(), class = c("collector_double",
"collector")), school_num = structure(list(), class = c("collector_double",
"collector")), model_type = structure(list(), class = c("collector_double",
"collector"))), default = structure(list(), class = c("collector_guess",
"collector")), skip = 1L), class = "col_spec"))
|
[
"You need to define model_type as a factor/character and add\n+ scale_color_manual(values = vector_with_your_colors)\n\nCode\nggplot(df, aes(x = var, y = coef, color = factor(model_type),\n ymin = ci_lower, ymax = ci_upper)) +\n geom_point() +\n geom_errorbar(width = 0.1) +\n facet_grid(model_type~school_num, labeller = as_labeller(mylabels)) +\n coord_flip() +\n theme_bw(base_size = 14) +\n theme(#legend.position =\"none\",\n panel.grid.major = element_blank(),\n panel.grid.minor = element_blank(),\n strip.background = element_rect(fill = \"white\"),\n )+\n scale_color_manual(values = c(\"5\" = \"red\", \"6\" = \"blue\"))\n\nOutput\n\n"
] |
[
0
] |
[] |
[] |
[
"facet",
"facet_grid",
"ggplot2",
"r"
] |
stackoverflow_0074663450_facet_facet_grid_ggplot2_r.txt
|
Q:
Computer Vision Library
What are the best .NET computer vision libraries anybody has used recently for different shape detection and bar code reading, including reading PDF417? Most libraries have image processing capabilities similar to those found in Gimp and Photoshop, but not computer vision capabilities. Currently we are looking at a combination of AForge and LEAD tools; does anybody know of better alternatives, or one library that could do what these two libraries can do combined?
A:
A friend of mine has used OpenCV.net (a C# wrapper for OpenCV) for his Master's thesis (which was about hand gesture recognition) and found it very well developed and easy to use.
A:
I found EmguCV and AForge handy.
EmguCV is the .NET ported version of OpenCV, actively maintained (using the latest OpenCV at the time i checked)
AForge used to be my personal favorite; it was developed from the ground up by Andrew Kirillov (sadly, he announced the end of free public support in April 2012). Well documented and very intuitive.
A:
I will add my little answer. For computer vision projects in .NET I usually use OpenCVSharp. It's an OpenCV wrapper that wraps the OpenCV functions comprehensively, and it's under the BSD 3-Clause License. It is widely used and is usually sufficient in projects.
Another useful tool is ImageMagick, which has a .NET interface. In computer vision I use this tool to perform operations on single frames.
A:
There is hardly such a thing as a "best" library for the tasks. If you look at the major players, you will notice that there are some industrial ones, which give you quick and professional support, and a number of open source solutions where you're largely on your own.
It depends largely on how much time you got to fix potential shortcomings, i.e. what your time to money ratio is.
If you're an industrial user, commercial solutions are offered by e.g. Stemmer Imaging, Dalsa, MVTec or Cognex. The first supplier supports .NET from the beginning and the latter ones offer them too for some years now (to my knowledge).
If you prefer the open source approach, an OpenCV derivate/wrapper might be your best bet.
A:
For a barcode reader my first suggestion is ZBar in C++ with this wrapper in C#; the second one is ZXing, and ZXing.Net is its C# port.
For starting a project in computer vision, the most widely used and powerful library is OpenCV, which works in C/C++, but there are C# options such as the Emgu CV wrapper and the independently developed AForge.
A second choice for CV is the C# Dlib wrapper DlibDotNet.
There is also SimpleCV, which is easy to use, but it is for Python.
A:
Some popular .NET computer vision libraries that you could consider using for shape detection and barcode reading include:
OpenCV: OpenCV is a popular open-source computer vision library that provides a wide range of algorithms and functions for image and video processing, including support for shape detection and barcode reading. It is written in C++ and has a .NET wrapper that allows it to be used in .NET applications.
Emgu CV: Emgu CV is a .NET wrapper for the OpenCV library, which allows it to be used in .NET applications. It provides a similar range of algorithms and functions as OpenCV, including support for shape detection and barcode reading.
ZXing.NET: ZXing.NET is a .NET library for barcode reading and generation. It supports a wide range of barcode formats, including PDF417, and provides functions for detecting and decoding barcodes in images.
Barcode Reader and Generator for .NET: This is a commercial library that provides support for reading and generating a wide range of barcode formats, including PDF417. It includes functions for detecting and decoding barcodes in images, and also supports generating barcodes from text or data.
Overall, the best library for your specific needs will depend on the specific algorithms and functions you require, as well as any performance or other constraints you have. It is worth considering trying out a few different libraries to see which one works best for your specific use case.
|
Computer Vision Library
|
What is the best .net computer vision libraries anybody has used recently for different shape detection and bar code reading including reading pdf417? Most libraries have image processing capabilities similar to those found in Gimp and Photoshop, but not computer vision capabilities. Currently we are looking at a combination of Aforge and LEAD tools; anybody know of better alternatives or one library that could do what these two libraries could combined?
|
[
"A friend of mine has used OpenCV.net (a C# wrapper for OpenCV) for his Master's thesis (which was about hand gesture recognition) and found it very well developed and easy to use.\n",
"I found EmguCV and AForge handy.\nEmguCV is the .NET ported version of OpenCV, actively maintained (using the latest OpenCV at the time i checked)\nAForge used to be my personal favorite, it's developed ground up by Andrew Kirillov (sadly he announced the end of free public support on April 2012). Well documented and very intuitive.\n",
"I will add my little answer. For computer vision project in .net I usually use OpenCVSharp. It's OpenCV wrapper which complex wrap opencv function and it's under BSD 3-Clause License. It has a wide use and is usually sufficient in projects.\nAnother usefull tool is ImageMagic which has .NET interface. In computer vision I use this tool to make operation on single frame.\n",
"There is hardly such a thing as a \"best\" library for the tasks. If you look at the major players, you will notice that there are some industrial ones, which give you quick and professional support, and a number of open source solutions where you're largely on your own.\nIt depends largely on how much time you got to fix potential shortcomings, i.e. what your time to money ratio is.\nIf you're an industrial user, commercial solutions are offered by e.g. Stemmer Imaging, Dalsa, MVTec or Cognex. The first supplier supports .NET from the beginning and the latter ones offer them too for some years now (to my knowledge).\nIf you prefer the open source approach, an OpenCV derivate/wrapper might be your best bet.\n",
"For barcode reader my first suggestion is ZBar in C++ with this wrapper in C#, second one is ZXing in C++ and ZXing.Net is its C# porting.\nFor starting project in Computer Vision, most widely used and powerfull library is OpenCv that works in C/C++ but there are some wrapper in C# like AForge and EmguCv.\nSecond choice for CV is C# Dlib wrapper DlibDotNet.\nThere is also SimpleCv easy to use but for python.\n",
"Some popular .NET computer vision libraries that you could consider using for shape detection and barcode reading include:\n\nOpenCV: OpenCV is a popular open-source computer vision library that provides a wide range of algorithms and functions for image and video processing, including support for shape detection and barcode reading. It is written in C++ and has a .NET wrapper that allows it to be used in .NET applications.\n\nEmgu CV: Emgu CV is a .NET wrapper for the OpenCV library, which allows it to be used in .NET applications. It provides a similar range of algorithms and functions as OpenCV, including support for shape detection and barcode reading.\n\nZXing.NET: ZXing.NET is a .NET library for barcode reading and generation. It supports a wide range of barcode formats, including PDF417, and provides functions for detecting and decoding barcodes in images.\n\nBarcode Reader and Generator for .NET: This is a commercial library that provides support for reading and generating a wide range of barcode formats, including PDF417. It includes functions for detecting and decoding barcodes in images, and also supports generating barcodes from text or data.\n\n\nOverall, the best library for your specific needs will depend on the specific algorithms and functions you require, as well as any performance or other constraints you have. It is worth considering trying out a few different libraries to see which one works best for your specific use case.\n"
] |
[
13,
11,
1,
0,
0,
0
] |
[] |
[] |
[
".net",
"computer_vision"
] |
stackoverflow_0006910536_.net_computer_vision.txt
|
Q:
How to pass a parameter from the client side to the server in Python
I am using Flask and Flask-RESTX to try to create a protocol to get a specific string from another service. I am wondering if there is a way to pass a parameter from another function to the server side.
For example, here's my server side:
from flask_restx import Api,fields,Resource
from flask import Flask
app = Flask(__name__)
api = Api(app)
parent = api.model('Parent', {
    'name': fields.String(get_answer(a,b)),
    'class': fields.String(discriminator=True)
})

@api.route('/language')
class Language(Resource):
    # @api.marshal_with(data_stream_request)
    @api.marshal_with(parent)
    @api.response(403, "Unauthorized")
    def get(self):
        return {"happy": "good"}
get_answer function in a different file:
def get_answer(a, b):
    return a + b
What I expect is to get the result of get_answer from a file, so that my API is generated and the GET request can return it. I know that if there were a web page, we could use render_template( or a form to get it. But what if I want to get the value from another function? I know we only run the server with app.run(), but are we able to pass any value into the server? I am guessing app.run(a, b) would not work in this case. We definitely need to pass two parameters into the server. Or we could store the answer of get_answer(a,b) in main with specific values of a and b, then pass the number into the server side. But either way it will need the parameters.
One thing I've tested is wrapping up the server into a function. But in our case, is it a good idea to wrap a class inside a function as we have class Language(Resource):?
A:
You can use request parameters or the request body to pass in data to your endpoints. For example, you could define your endpoint like this:
from flask import request  # needed for request.args below

@api.route('/language')
class Language(Resource):
    @api.marshal_with(parent)
    @api.response(403, "Unauthorized")
    def get(self):
        # query-string values arrive as strings; cast them if you need numbers
        a = request.args.get('a')
        b = request.args.get('b')
        result = get_answer(a, b)
        return {"result": result}
In this example, you can call the endpoint with a query string like this: /language?a=1&b=2 and the get_answer() function will be called with a=1 and b=2.
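For a quick client-side check, here is a minimal sketch using the requests library (the localhost:5000 address is a placeholder that assumes Flask's default development server):
import requests

# hypothetical local server; adjust host/port to your setup
resp = requests.get("http://localhost:5000/language", params={"a": 1, "b": 2})
print(resp.status_code, resp.text)  # note: the view receives a and b as strings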
Alternatively, you could use the request body to pass in the data, like this:
@api.route('/language')
class Language(Resource):
    @api.expect(parent)
    @api.response(403, "Unauthorized")
    def post(self):
        data = request.get_json()
        a = data['a']
        b = data['b']
        result = get_answer(a, b)
        return {"result": result}
In this case, you would call the endpoint with a POST request and include a JSON object in the request body with the values for a and b.
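Again as a sketch, the matching client call with the requests library would send the JSON body like this:
import requests

# hypothetical local server; the endpoint expects "a" and "b" in the JSON body
resp = requests.post("http://localhost:5000/language", json={"a": 1, "b": 2})
print(resp.status_code, resp.text)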
As for your question about wrapping your Flask app in a function, this is generally not recommended. Flask is designed to be run as a standalone web application, and wrapping it in a function can cause issues with how Flask manages its internal state.
It is better to define your Flask app and its endpoints in the global scope, and then run the app with app.run() when you are ready to start the server. (just sharing my opinion)
If you still want to do this, you can define a function that creates and runs a Flask app like this:
def create_app():
    app = Flask(__name__)
    api = Api(app)

    parent = api.model('Parent', {
        'name': fields.String(get_answer(a,b)),  # a and b must be defined in this scope
        'class': fields.String(discriminator=True)
    })

    @api.route('/language')
    class Language(Resource):
        # @api.marshal_with(data_stream_request)
        @api.marshal_with(parent)
        @api.response(403, "Unauthorized")
        def get(self):
            return {"happy": "good"}

    return app
You can then call this function to create your Flask app and run it like this:
app = create_app()
app.run()
A:
get_answer function in a different file
Then import and call it
from other_file import get_answer
...
def get(self):
    return {"answer": get_answer(2, 2)}
As the other answer shows, if you want to use custom arguments, parse them from the request object
|
How to pass a parameter from client side to server in python
|
I am using flask and flask-restx try to create a protocol to get a specific string from another service. I am wonder if there is a way I can pass the parameter from another function to server side.
For example, here's my server side:
from flask_restx import Api,fields,Resource
from flask import Flask
app = Flask(__name__)
api = Api(app)
parent = api.model('Parent', {
'name': fields.String(get_answer(a,b)),
'class': fields.String(discriminator=True)
})
@api.route('/language')
class Language(Resource):
# @api.marshal_with(data_stream_request)
@api.marshal_with(parent)
@api.response(403, "Unauthorized")
def get(self):
return {"happy": "good"}
get_answer function in a different file:
get_answer(a,b):
return a + b
What I expect is to get the result of get_answer from a file, and then my API is generated so that the GET request can get it. I know that if there is a web page, we can use render_template( or form to get it. But what if I want to get the value from another function? I know we only run the server with app.run(), but are we able to pass any value into the server? Guessing app.run(a,b) should not work in this case. We definite need to pass two parameter into the server. Or we can store the answer of get_answer(a,b) in main with specific value of a and b, then pass the number into the server side. But it will need the parameter either way.
One thing I've tested is wrapping up the server into a function. But in our case, is it a good idea to wrap a class inside a function as we have class Language(Resource):?
|
[
"You can use request parameters or the request body to pass in data to your endpoints. For example, you could define your endpoint like this:\[email protected]('/language')\nclass Language(Resource):\n @api.marshal_with(parent)\n @api.response(403, \"Unauthorized\")\n def get(self):\n a = request.args.get('a')\n b = request.args.get('b')\n result = get_answer(a, b)\n return {\"result\": result}\n\nIn this example, you can call the endpoint with a query string like this: /language?a=1&b=2 and the get_answer() function will be called with a=1 and b=2.\nAlternatively, you could use the request body to pass in the data, like this:\[email protected]('/language')\nclass Language(Resource):\n @api.expect(parent)\n @api.response(403, \"Unauthorized\")\n def post(self):\n data = request.get_json()\n a = data['a']\n b = data['b']\n result = get_answer(a, b)\n return {\"result\": result}\n\nIn this case, you would call the endpoint with a POST request and include a JSON object in the request body with the values for a and b.\nAs for your question about wrapping your Flask app in a function, this is generally not recommended. Flask is designed to be run as a standalone web application, and wrapping it in a function can cause issues with how Flask manages its internal state.\nIt is better to define your Flask app and its endpoints in the global scope, and then run the app with app.run() when you are ready to start the server. (just sharing my opinion)\nIf you still want to do this, you can define a function that creates and runs a Flask app like this:\ndef create_app():\n app = Flask(__name__)\n api = Api(app)\n\n parent = api.model('Parent', {\n 'name': fields.String(get_answer(a,b)),\n 'class': fields.String(discriminator=True)\n })\n\n @api.route('/language')\n class Language(Resource):\n # @api.marshal_with(data_stream_request)\n @api.marshal_with(parent)\n @api.response(403, \"Unauthorized\")\n def get(self):\n return {\"happy\": \"good\"}\n\n return app\n\nYou can then call this function to create your Flask app and run it like this:\napp = create_app()\napp.run()\n\n",
"\nget_answer function in a different file\n\nThen import and call it\nfrom other_file import get_answer\n\n... \ndef get(self):\n return {\"answer\": get_answer(2,2)}\n\nAs the other answer shows, if you want to use custom arguments, parse them from the request object\n"
] |
[
0,
0
] |
[] |
[] |
[
"flask",
"flask_restplus",
"flask_restx",
"python"
] |
stackoverflow_0074663441_flask_flask_restplus_flask_restx_python.txt
|
Q:
How to Specify Intended Variable When two Variables Have Same Name and Different Scope in Dart?
Let's say in Dart (DartPad) we have:
String a = "I am higher in the scope chain";
void main() {
  String a = "I am lower in the scope chain, but I have the same name";
  print(a);
}
Is there a keyword we can use so that it prints the variable in the parent scope instead such as:
print(parent.a)
or
print(this.a)
?
(Those don't work).
A:
If your filename is 'some_dart_file.dart', you can do this:
import './some_dart_file.dart' as parent show a;
String a = "I am higher in the scope chain";
void main() {
  String a = "I am lower in the scope chain, but I have the same name";
  print(parent.a);
}
|
How to Specify Intended Variable When two Variables Have Same Name and Different Scope in Dart?
|
Let's say in Dart (DartPad) we have:
String a = "I am higher in the scope chain";
void main() {
String a = "I am lower in the scope chain, but I have the same name";
print(a);
}
Is there a keyword we can use so that it prints the variable in the parent scope instead such as:
print(parent.a)
or
print(this.a)
?
(Those don't work).
|
[
"If you filename is 'some_dart_file.dart', you can do this\nimport './some_dart_file.dart' as parent show a;\n\nString a = \"I am higher in the scope chain\";\n\nvoid main() {\n String a = \"I am lower in the scope chain, but I have the same name\";\n print(parent.a);\n}\n\n\n"
] |
[
2
] |
[] |
[] |
[
"dart",
"flutter",
"scope"
] |
stackoverflow_0074663073_dart_flutter_scope.txt
|
Q:
I am making a card game and want to take input so the user can choose where to cut the deck. How can I do that?
package main;

import java.util.Scanner;

public class test {

    public static String[] ranks = {"2", "3", "4", "5", "6", "7", "8", "9", "10", "Ace", "Jack", "Queen", "King"};
    public static Scanner scanner = new Scanner(System.in);

    public static void main(String[] args) {
        String[] array = new String[13];
        for (int i = 0; i < array.length; i++) {
            array[i] = ranks[i];
        }
        for (int i = 0; i < array.length; i++) {
            System.out.println(array[i]);
        }
        cutDeck(array);
        for (int i = 0; i < array.length; i++) {
            System.out.println(array[i]);
        }
    }

    public static String[] cutDeck(String[] deck) {
        System.out.println("Cut please. 'Choose between 1-51'");
        int cutPoint = scanner.nextInt();
        String[] topDeck = new String[52];
        String[] bottomDeck = new String[52];
        String[] newDeck = new String[deck.length];
        for (int i = 1; i <= cutPoint; i++) { // Topdeck
            topDeck[i - 1] = deck[deck.length - 1 * i];
        }
        for (int i = 0; i < cutPoint / 2; i++) { // Reverse topdeck
            String temp = topDeck[i];
            topDeck[i] = topDeck[topDeck.length - i - 1];
            topDeck[topDeck.length - i - 1] = temp;
        }
        for (int i = 0; i < deck.length - cutPoint; i++) { // Bottom cut point
            bottomDeck[i] = deck[i];
        }
        for (int i = 0; i < deck.length; i++) {
            if (cutPoint > i) {
                newDeck[i] = topDeck[i];
            } else {
                newDeck[i] = bottomDeck[i];
            }
        }
        return newDeck;
    }
}
I am trying to cut the deck while asking the user where to cut.
This function does not cut the deck.
Where am I going wrong?
I have tried everything, but I am stuck. Can you help me, please?
I am also open to other ideas, so feel free to improve my code.
Thanks in advance.
A:
I can see three obvious issues with this code:
The first issue is that the cutDeck method is not actually modifying the original deck array. Instead, it is creating a new array and returning it. In order to actually shuffle the original deck, you would need to modify the cutDeck method to modify the original array, rather than returning a new one.
Another issue is that the cutDeck method is currently assuming that the deck has 52 cards. However, the code in the main method is only creating an array with 13 cards. This means that the cutDeck method will not work properly with the deck created in main.
Finally, the code is currently hard-coding the number of cards in the deck as 52, and it is also assuming that the user will always enter a valid cut point when prompted. It would be better to make the code more flexible and robust by using the length of the input array to determine the number of cards in the deck, and also by checking that the user's input is within the valid range for the deck size.
An alternative way to do this would be:
public static void cutDeck(String[] deck) {
    System.out.println("Cut please. 'Choose between 1-" + (deck.length - 1) + "'");
    int cutPoint = scanner.nextInt();

    // Check that the cut point is within the valid range for the deck size
    if (cutPoint < 1 || cutPoint >= deck.length) {
        System.out.println("Invalid cut point. Please try again.");
        return;
    }

    // Create the top and bottom halves of the deck
    String[] topDeck = new String[cutPoint];
    String[] bottomDeck = new String[deck.length - cutPoint];
    for (int i = 0; i < cutPoint; i++) {
        topDeck[i] = deck[i];
    }
    for (int i = cutPoint; i < deck.length; i++) {
        bottomDeck[i - cutPoint] = deck[i];
    }

    // Reverse the top half of the deck
    for (int i = 0; i < cutPoint / 2; i++) {
        String temp = topDeck[i];
        topDeck[i] = topDeck[topDeck.length - i - 1];
        topDeck[topDeck.length - i - 1] = temp;
    }

    // Combine the top and bottom halves to create the shuffled deck
    for (int i = 0; i < deck.length; i++) {
        if (i < cutPoint) {
            deck[i] = topDeck[i];
        } else {
            deck[i] = bottomDeck[i - cutPoint];
        }
    }
}
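A short usage sketch for this in-place version (printing with Arrays.toString assumes java.util.Arrays is imported):
cutDeck(array);                             // array itself now holds the cut deck
System.out.println(Arrays.toString(array)); // print the whole deck on one line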
A:
The loops in the OP code can be replaced with existing methods in the Arrays API and the List API:
public static String[] cutDeck(String[] deck, int n) {
    List<String> cutDeck = new ArrayList<>(deck.length);
    List<String> bottom = Arrays.asList(Arrays.copyOfRange(deck, n, deck.length));
    cutDeck.addAll(bottom);
    List<String> top = Arrays.asList(Arrays.copyOfRange(deck, 0, n));
    cutDeck.addAll(top);
    return cutDeck.toArray(deck);
}
This assumes the deck is a 1D array of String. The test or other methods can call it like this:
array = cutDeck(array, cutPoint);
Note that this requires the value for cutPoint to be set before calling the cut method. Moving the code that interacts with the user to a different method makes the cutDeck method more cohesive, which is desirable.
Note that this does not reverse the order of cards in part of the deck. In real life, cutting a deck of cards does not reverse the order of part of the deck. However, if reversing is desired, one way to do it is to add a call to the reverse method from the Collections API:
List<String> top = Arrays.asList(Arrays.copyOfRange(deck, 0, n));
Collections.reverse(top);
cutDeck.addAll(top);
As noted in the previous answer, cutDeck(array); in your code causes the shuffled deck result to be ignored. If you want to fix that part of your original code, change that line to array = cutDeck(array);
The other points in the first answer are also valid:
You want to ensure the cutPoint value is valid.
The cutDeck method is more flexible if it gets the length to use from the array parameter.
By the way, the Arrays API has several toString methods. Using it can shorten code for printing array content, but with less control over formatting the output:
System.out.println (Arrays.toString(array));
Added: A simpler method:
As previously noted, in a real-life card game, cutting the deck does not reverse the order of any of the cards. In fact, if you imagine the cards as a circular array, cutting the deck results in no change of order at all; instead, it changes the starting point. With that in mind, we can write a single loop to cut the deck:
public static String[] cut2(String[] deck, final int n) {
    String[] cutDeck = new String[deck.length];
    for (int i = 0; i < cutDeck.length; ++i) {
        cutDeck[i] = deck[(i + n) % deck.length];
    }
    return cutDeck;
}
Here is a test method:
public static void cards() {
    String[] deck = {
        "two clubs", "three clubs", "four clubs"
        , "five clubs", "six clubs", "seven clubs"
        , "eight clubs", "nine clubs", "ten clubs"
        , "jack clubs", "queen clubs", "king clubs"
        , "ace clubs"
        , "two diamonds", "three diamonds", "four diamonds"
        , "five diamonds", "six diamonds", "seven diamonds"
        , "eight diamonds", "nine diamonds", "ten diamonds"
        , "jack diamonds", "queen diamonds", "king diamonds"
        , "ace diamonds"
        , "two hearts", "three hearts", "four hearts"
        , "five hearts", "six hearts", "seven hearts"
        , "eight hearts", "nine hearts", "ten hearts"
        , "jack hearts", "queen hearts", "king hearts"
        , "ace hearts"
        , " two spades", "three spades", "four spades"
        , "five spades", "six spades", "seven spades"
        , "eight spades", "nine spades", "ten spades"
        , "jack spades", "queen spades", "king spades"
        , "ace spades"
    };

    System.out.println("Original: " + Arrays.toString(deck));

    for (int i = 0; i < 10; ++i) {
        int r = (int) (Math.random() * deck.length);
        System.out.println();
        System.out.println(r + " " +
            Arrays.toString(cut(Arrays.copyOf(deck, deck.length), r)));
        System.out.println(r + " " +
            Arrays.toString(cut2(Arrays.copyOf(deck, deck.length), r)));
    }
}
The test method calls cut (the List-based cutDeck above) and cut2 from this part. If the call to Collections.reverse is not used in cut, then cut and cut2 should return the same result.
Here is an example output from a test run. Only the first three cuts are shown:
Original: [two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades]
7 [nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs]
7 [nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs]
46 [nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades]
46 [nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades]
10 [queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs]
10 [queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs]
|
Hi I am making a card game I want to take input and select where the user wants to cut in cards and how can i do that?
|
package main;
import java.util.Scanner;
public class test {
public static String[] ranks = {"2", "3", "4", "5", "6", "7", "8", "9", "10", "Ace", "Jack", "Queen", "King"};
public static Scanner scanner = new Scanner(System.in);
public static void main(String[] args) {
String[] array = new String[13];
for(int i = 0; i<array.length; i++) {
array[i] = ranks[i] ;
}
for (int i = 0; i<array.length; i++ ) {
System.out.println(array[i]);
}
cutDeck(array);
for (int i = 0; i<array.length; i++ ) {
System.out.println(array[i]);
}
}
public static String[] cutDeck(String[] deck) {
System.out.println("Cut please. 'Choose between 1-51'");
int cutPoint = scanner.nextInt();
String[] topDeck= new String[52];
String[] bottomDeck = new String[52];
String[] newDeck = new String[deck.length];
for (int i = 1; i<=cutPoint ; i++) { // Topdeck
topDeck[i-1] = deck[deck.length-1*i];
}
for (int i = 0; i < cutPoint / 2; i++) // Reverse topdeck
{
String temp = topDeck[i];
topDeck[i] = topDeck[topDeck.length - i - 1];
topDeck[topDeck.length - i - 1] = temp;
}
for (int i = 0; i<deck.length - cutPoint; i++) { //Bottom cut point
bottomDeck[i] = deck[i];
}
for (int i = 0; i<deck.length; i++) {
if (cutPoint > i) {
newDeck[i] = topDeck[i];
} else {
newDeck[i] = bottomDeck[i];
}
}
return newDeck;
}
}
I am trying to cut the deck while asking to the user.
This function does not cut the deck.
Where am I doing wrong ?
I tried everything but I lost my brain can you guys help me please?
I am open to another ideas so you can give improve my code.
Thanks in advance.
|
[
"I can see three obvious issues with this code:\nThe first issue is that the cutDeck method is not actually modifying the original deck array. Instead, it is creating a new array and returning it. In order to actually shuffle the original deck, you would need to modify the cutDeck method to modify the original array, rather than returning a new one.\nAnother issue is that the cutDeck method is currently assuming that the deck has 52 cards. However, the code in the main method is only creating an array with 13 cards. This means that the cutDeck method will not work properly with the deck created in main.\nFinally, the code is currently hard-coding the number of cards in the deck as 52, and it is also assuming that the user will always enter a valid cut point when prompted. It would be better to make the code more flexible and robust by using the length of the input array to determine the number of cards in the deck, and also by checking that the user's input is within the valid range for the deck size.\nAn alternative way to do this would be:\npublic static void cutDeck(String[] deck) {\n System.out.println(\"Cut please. 'Choose between 1-\" + (deck.length - 1) + \"'\");\n int cutPoint = scanner.nextInt();\n \n // Check that the cut point is within the valid range for the deck size\n if (cutPoint < 1 || cutPoint >= deck.length) {\n System.out.println(\"Invalid cut point. Please try again.\");\n return;\n }\n \n // Create the top and bottom halves of the deck\n String[] topDeck = new String[cutPoint];\n String[] bottomDeck = new String[deck.length - cutPoint];\n for (int i = 0; i < cutPoint; i++) {\n topDeck[i] = deck[i];\n }\n for (int i = cutPoint; i < deck.length; i++) {\n bottomDeck[i - cutPoint] = deck[i];\n }\n \n // Reverse the top half of the deck\n for (int i = 0; i < cutPoint / 2; i++) {\n String temp = topDeck[i];\n topDeck[i] = topDeck[topDeck.length - i - 1];\n topDeck[topDeck.length - i - 1] = temp;\n }\n \n // Combine the top and bottom halves to create the shuffled deck\n for (int i = 0; i < deck.length; i++) {\n if (i < cutPoint) {\n deck[i] = topDeck[i];\n } else {\n deck[i] = bottomDeck[i - cutPoint];\n }\n }\n}\n\n",
"The loops in the O/P code can be replaced with existing methods in the Arrays API and the List API :\n public static String[] cutDeck (String [] deck, int n) { \n List<String> cutDeck = new ArrayList<> (deck.length);\n List<String> bottom = Arrays.asList (Arrays.copyOfRange (deck,n, deck.length));\n cutDeck.addAll (bottom);\n List<String> top = Arrays.asList (Arrays.copyOfRange (deck, 0, n));\n cutDeck.addAll (top);\n return cutDeck.toArray(deck);\n }\n\nThis assumes the deck is 1D array of String. The test or other methods can call it like this:\n array = cutDeck (array, cutPoint);\n\nNote that this requires the value for cutPoint be set before calling the cut method. Moving the code to interact with the user to a different method makes the cutDeck method more cohesive, which is desirable.\nNote that this does not reverse the order of cards in part of the deck. In real life, cutting a deck of cards does not reverse the order of part of the deck. However, if reversing is desired, one way to do it is to add a call to the reverse method from Collections API\n List<String> top = Arrays.asList (Arrays.copyOfRange (deck, 0, n));\n Collections.reverse(top);\n cutDeck.addAll (top);\n\nAs noted in the previous answer, cutDeck(array); in your code causes the shuffled deck result to be ignored. If you want to fix that part of your original code, change that line to array = cutDeck(array);\nThe other points in the first answer are also valid:\n\nYou want to ensure the cutPoint value is valid.\nThe cutDeck method is more flexible if it gets the length to use from the array parameter.\n\nBy the way, the Arrays API has several toString methods. Using it can shorten code for printing array content, but with less control over formatting the output:\n System.out.println (Arrays.toString(array));\n\nAdded: A simpler method:\nAs previously noted, in a real-life card game, cutting of the deck does not reverse the order of any of the cards. In fact, if you imagine the cards as a circular array, cutting the deck results in no change of order at all. Instead, it changes the starting point. 
Taking that into mind, we can write a single loop to cut the deck:\npublic static String[] cut2 (String [] deck, final int n) {\n String [] cutDeck = new String [deck.length];\n for (int i = 0; i < cutDeck.length; ++i) {\n cutDeck [i] = deck [ (i + n) % deck.length];\n }\n return cutDeck;\n }\n\nHere is a test method:\n public static void cards () {\n String [] deck = {\n \"two clubs\", \"three clubs\", \"four clubs\"\n , \"five clubs\", \"six clubs\", \"seven clubs\"\n , \"eight clubs\", \"nine clubs\", \"ten clubs\"\n , \"jack clubs\", \"queen clubs\", \"king clubs\"\n , \"ace clubs\"\n , \"two diamonds\", \"three diamonds\", \"four diamonds\"\n , \"five diamonds\", \"six diamonds\", \"seven diamonds\"\n , \"eight diamonds\", \"nine diamonds\", \"ten diamonds\"\n , \"jack diamonds\", \"queen diamonds\", \"king diamonds\"\n , \"ace diamonds\"\n , \"two hearts\", \"three hearts\", \"four hearts\"\n , \"five hearts\", \"six hearts\", \"seven hearts\"\n , \"eight hearts\", \"nine hearts\", \"ten hearts\"\n , \"jack hearts\", \"queen hearts\", \"king hearts\"\n , \"ace hearts\"\n ,\" two spades\", \"three spades\", \"four spades\"\n , \"five spades\", \"six spades\", \"seven spades\"\n , \"eight spades\", \"nine spades\", \"ten spades\"\n , \"jack spades\", \"queen spades\", \"king spades\"\n , \"ace spades\"\n }; \n \n System.out.println (\"Original: \" + Arrays.toString(deck));\n \n \n for (int i = 0; i < 10; ++i) {\n int r = (int) (Math.random () * deck.length);\n System.out.println ();\n System.out.println (r + \" \" + \n Arrays.toString (cut (Arrays.copyOf (deck, deck.length), r)));\n System.out.println (r + \" \" + \n Arrays.toString (cut2 (Arrays.copyOf (deck, deck.length), r))); \n } \n }\n\nThe test method calls cut from above, and cut2 from this part. If the call to Collections.reverse is not used in cut, then cut and cut2 should return the same result.\nHere is an example output from test run. 
Only the first three cuts are shown:\nOriginal: [two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades]\n\n7 [nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs]\n7 [nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs]\n\n46 [nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades]\n46 [nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs, queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades]\n\n10 [queen clubs, king clubs, ace 
clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs]\n10 [queen clubs, king clubs, ace clubs, two diamonds, three diamonds, four diamonds, five diamonds, six diamonds, seven diamonds, eight diamonds, nine diamonds, ten diamonds, jack diamonds, queen diamonds, king diamonds, ace diamonds, two hearts, three hearts, four hearts, five hearts, six hearts, seven hearts, eight hearts, nine hearts, ten hearts, jack hearts, queen hearts, king hearts, ace hearts, two spades, three spades, four spades, five spades, six spades, seven spades, eight spades, nine spades, ten spades, jack spades, queen spades, king spades, ace spades, two clubs, three clubs, four clubs, five clubs, six clubs, seven clubs, eight clubs, nine clubs, ten clubs, jack clubs]\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"java"
] |
stackoverflow_0074661922_java.txt
|
Q:
Can the Amplify Authentication SDK be used on the server side?
I would like to use Amplify instead of amazon-cognito-identity-js in my Lambda functions (to sync the Cognito users with the profiles I store in another database).
On the client side everything works fine, but I am not able to use it on the server side.
I can't find any resources on the internet, and after fighting with this for two hours I am starting to wonder whether we are supposed to do it at all.
Does someone know how to configure Amplify by requiring only @aws-amplify/auth?
Auth.configure is not a function
A:
Amplify Auth is actually designed to work with the browser. So it's not suitable for your lambda.
If you're using node then you will need to refer to the AWS SDK for JS instead.
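For example, here is a minimal sketch of looking up a Cognito user from a Node.js Lambda with the AWS SDK v2; the pool ID and event shape are placeholders, and the Lambda role needs cognito-idp:AdminGetUser permission:
const AWS = require('aws-sdk');
const cognito = new AWS.CognitoIdentityServiceProvider();

exports.handler = async (event) => {
  // placeholder pool ID and username source
  const user = await cognito.adminGetUser({
    UserPoolId: 'us-east-1_EXAMPLE',
    Username: event.username,
  }).promise();
  return user.UserAttributes;
};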
|
Can Amplify Authentication SDK can be used on server side?
|
I would like to use Amplify instead of amazon-cognito-identity-js in my lambda functions (to sync the cognito users with their profiles i store into another database).
On the client side everything works fine, but i am not able to use it on the server side.
I don't find any resources on the internet, and i am fighting since 2 hours trying to make it works, i start wondering if we are supposed to do that.
Does someone know how to configure Amplify by requiring only @aws-amplify/auth?
Auth.configure is not a function
|
[
"Amplify Auth is actually designed to work with the browser. So it's not suitable for your lambda.\nIf you're using node then you will need to refer to the AWS SDK for JS instead.\n"
] |
[
0
] |
[] |
[] |
[
"aws_amplify"
] |
stackoverflow_0061545066_aws_amplify.txt
|
Q:
Pre-request script doesn't change the collection variable using Postman
Pre-Request Script:
let user_id = pm.collectionVariables.get("user_id");
pm.sendRequest(`http://security.postman-breakable.com/account/${user_id}/summary`, function (err, response) {
    if (response.status == "FORBIDDEN") {
        pm.collectionVariables.set("status_code", 403);
    } else if (response.status == "OK") {
        pm.collectionVariables.set("status_code", 200);
    }
});
Test:
let status_code = parseInt(pm.collectionVariables.get("status_code"));
pm.test(`Status code is ${status_code}`, function () {
    pm.response.to.have.status(status_code);
});
The response code is 200, but it reads the previous response code, which was 403.
Although I try to change my collection variable "status_code" in the pre-request script when the response code changes, it doesn't change.
A:
I was able to reproduce a similar behaviour. Any chance you did something like this, where the pre-request script calls a different endpoint than the actual request?
Also, I don't know if it helps, but in my Postman the status text was "Forbidden", not all uppercase.
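If the casing is part of the problem, comparing the numeric code instead of the status text avoids it entirely; a sketch of the pre-request script with that change:
let user_id = pm.collectionVariables.get("user_id");
pm.sendRequest(`http://security.postman-breakable.com/account/${user_id}/summary`, function (err, response) {
    // response.code is the numeric HTTP status, so no string comparison is needed
    pm.collectionVariables.set("status_code", response.code);
});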
|
Pre-request script doesn't change the collection variable using Postman
|
Pre-Request Script:
let user_id = pm.collectionVariables.get("user_id");
pm.sendRequest(`http://security.postman-breakable.com/account/${user_id}/summary`, function (err, response) {
if(response.status == "FORBIDDEN"){
pm.collectionVariables.set("status_code", 403);
}else if(response.status == "OK"){
pm.collectionVariables.set("status_code",200);
}
});
Test:
let status_code = parseInt(pm.collectionVariables.get("status_code"));
pm.test(`Status code is ${status_code}`, function () {
pm.response.to.have.status(status_code);
});
The response code is 200 but it reads the previous response code which was 403.
Although I try to change my collection variable called "status_code" by writing pre-request script when the response code changes, it doesn't change.
|
[
"I was able to reproduce a similar behaviour, any chance you did something like this? Where the pre-request script calls a different endpoint than the actual request?\n\nAlso, don't know if it can help, in my postman the Forbidden was not uppercase\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"postman",
"postman_pre_request_script",
"testing"
] |
stackoverflow_0074661016_javascript_postman_postman_pre_request_script_testing.txt
|
Q:
Livewire temporarily disable action
What's the best way of preventing an action temporarily while something else is happening?
E.g a slow file upload and prevent the action 'store' from being submitted.
Tried using a public property but this is too slow.
<form wire:submit.prevent="store" enctype="multipart/form-data">
<x-forms.filepond wire:model="file_upload" />
<button wire:click.prevent="store" wire:loading.attr="disabled">
<x-loader wire:loading wire:target="store" class="full-current mr-2 h-4 w-4" />
<x-icons.save wire:loading.remove wire:target="store" class="full-current mr-2 h-4 w-4" />
Save
</button>
</form>
A:
I'd try Alpine, or allow store to submit and check the upload in the store function.
A:
This is how I achieved it with AlpineJS.
The FilePond component:
@props(['key'])
<div
wire:ignore
x-data
x-init="
() => {
const pond = FilePond.create(document.getElementById('{{ $key }}'));
pond.setOptions({
credits: false,
disabled: {{ $attributes->has('disabled') ? 'true' : 'false' }},
allowMultiple: {{ $attributes->has('multiple') ? 'true' : 'false' }},
server: {
process:(fieldName, file, metadata, load, error, progress, abort, transfer, options) => {
@this.upload('{{ $key }}', file, load, error, progress)
},
revert: (filename, load) => {
@this.removeUpload('{{ $key }}', filename, load)
},
},
allowImagePreview: {{ $attributes->has('allowFileTypeValidation') ? 'true' : 'false' }},
imagePreviewMaxHeight: {{ $attributes->has('imagePreviewMaxHeight') ? $attributes->get('imagePreviewMaxHeight') : '256' }},
allowFileTypeValidation: {{ $attributes->has('allowFileTypeValidation') ? 'true' : 'false' }},
acceptedFileTypes: {!! $attributes->get('acceptedFileTypes') ?? 'null' !!},
allowFileSizeValidation: {{ $attributes->has('allowFileSizeValidation') ? 'true' : 'false' }},
maxFileSize: {!! $attributes->has('maxFileSize') ? "'".$attributes->get('maxFileSize')."'" : 'null' !!},
onaddfilestart: function(file) { submitButtonDisabled = true },
onprocessfiles: function() { submitButtonDisabled = false },
onupdatefiles: function (files) {
fileStatus = true;
files.forEach(function (file) {
if(file.status != 5) fileStatus = false;
});
if(fileStatus) submitButtonDisabled = false;
},
});
this.addEventListener('reset-pond', e => {
pond.removeFiles();
});
}
"
>
<input type="file" id="{{ $key }}"/>
</div>
<x-inputs.error field="{{ $attributes->has('multiple') ? $key . '.*' : $key }}"/>
(Basically, look at the onaddfilestart, onprocessfiles, and onupdatefiles FilePond events.)
Then on the file upload component,
add this to the root div x-data="{ submitButtonDisabled: false}"
add this to the submit button x-bind:disabled="submitButtonDisabled"
|
Livewire temporarily disable action
|
What's the best way of preventing an action temporarily while something else is happening?
E.g a slow file upload and prevent the action 'store' from being submitted.
Tried using a public property but this is too slow.
<form wire:submit.prevent="store" enctype="multipart/form-data">
<x-forms.filepond wire:model="file_upload" />
<button wire:click.prevent="store" wire:loading.attr="disabled">
<x-loader wire:loading wire:target="store" class="full-current mr-2 h-4 w-4" />
<x-icons.save wire:loading.remove wire:target="store" class="full-current mr-2 h-4 w-4" />
Save
</button>
</form>
|
[
"I'd try alpine or allow store to submit and check the upload on the store function\n",
"This is how I achieved with AlpineJS\nfilepond component,\n@props(['key'])\n<div\n wire:ignore\n x-data\n x-init=\"\n () => {\n const pond = FilePond.create(document.getElementById('{{ $key }}'));\n pond.setOptions({\n credits: false,\n disabled: {{ $attributes->has('disabled') ? 'true' : 'false' }},\n allowMultiple: {{ $attributes->has('multiple') ? 'true' : 'false' }},\n server: {\n process:(fieldName, file, metadata, load, error, progress, abort, transfer, options) => {\n @this.upload('{{ $key }}', file, load, error, progress)\n },\n revert: (filename, load) => {\n @this.removeUpload('{{ $key }}', filename, load)\n },\n },\n allowImagePreview: {{ $attributes->has('allowFileTypeValidation') ? 'true' : 'false' }},\n imagePreviewMaxHeight: {{ $attributes->has('imagePreviewMaxHeight') ? $attributes->get('imagePreviewMaxHeight') : '256' }},\n allowFileTypeValidation: {{ $attributes->has('allowFileTypeValidation') ? 'true' : 'false' }},\n acceptedFileTypes: {!! $attributes->get('acceptedFileTypes') ?? 'null' !!},\n allowFileSizeValidation: {{ $attributes->has('allowFileSizeValidation') ? 'true' : 'false' }},\n maxFileSize: {!! $attributes->has('maxFileSize') ? \"'\".$attributes->get('maxFileSize').\"'\" : 'null' !!},\n onaddfilestart: function(file) { submitButtonDisabled = true },\n onprocessfiles: function() { submitButtonDisabled = false },\n onupdatefiles: function (files) {\n fileStatus = true;\n files.forEach(function (file) {\n if(file.status != 5) fileStatus = false;\n });\n if(fileStatus) submitButtonDisabled = false;\n },\n });\n this.addEventListener('reset-pond', e => {\n pond.removeFiles();\n });\n }\n \"\n>\n <input type=\"file\" id=\"{{ $key }}\"/>\n</div>\n<x-inputs.error field=\"{{ $attributes->has('multiple') ? $key . '.*' : $key }}\"/>\n\n(basically look at onaddfilestart, onprocessfiles, onupdatefiles filepond events)\nThen on the file upload component,\n\nadd this to the root div x-data=\"{ submitButtonDisabled: false}\"\nadd this to the submit button x-bind:disabled=\"submitButtonDisabled\"\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"laravel",
"laravel_livewire"
] |
stackoverflow_0073802588_laravel_laravel_livewire.txt
|
Q:
Computer Vision System Toolbox & GUI
I am using the Computer Vision System Toolbox in MATLAB to grab images from two cameras simultaneously, and I want to create a graphical user interface that displays the images grabbed from the two cameras in two axes. The problem is I don't know how to create a GUI with axes while using the Computer Vision System Toolbox. Can you help me or provide MATLAB code for creating a GUI that uses the Computer Vision System Toolbox to grab images from the cameras in real time?
Kind regards
A:
The only features of the Computer Vision System Toolbox itself that can be considered GUI components are vision.VideoPlayer and vision.DeployableVideoPlayer for playing videos. You can also use functions like insertObjectAnnotation, insertShape, insertMarker, insertText to draw text and graphics into the video frames.
However, there are GUI components in the rest of MATLAB. For playing multiple video streams inside a custom GUI, take a look at this example
A:
To create a GUI with axes for displaying images from two cameras using the Computer Vision System Toolbox in MATLAB, you can follow these steps:
Open the MATLAB app and go to the "Apps" tab in the toolstrip.
In the "Apps" tab, select the "GUIDE" app and click "OK" to open the GUI Designer.
In the GUI Designer, create a new GUI by clicking on the "New" button in the toolbar.
In the GUI Designer, drag two "Axes" components from the "Components" pane onto the layout area. Position the axes in the layout so that they are side by side and have the desired dimensions.
In the "Properties" pane, set the "Tag" property of each of the axes to a unique value. This will be used to identify the axes in your code. For example, you could set the "Tag" property of the first axis to "axis1" and the "Tag" property of the second axis to "axis2".
In the "Properties" pane, set the "XLim" and "YLim" properties of each axis to the desired range of values. This will determine the range of values that the images will be displayed in. For example, you could set the "XLim" property to [0, 1024] and the "YLim" property to [0, 768] to display images with a width of 1024 pixels and a height of 768 pixels.
In the "Properties" pane, set the "XTick" and "YTick" properties of each axis to the desired tick intervals. This will determine the interval at which tick marks and labels will be displayed on the axes.
In the "Properties" pane, set the "XGrid" and "YGrid" properties of each axis to "on" to enable grid lines on the axes.
In the "Properties" pane, set the "XDir" and "YDir" properties of each axis to "reverse" to display the images with the origin at the top-left corner, as is typically the case with images.
In the "Properties" pane, set the "Box" property of each axis to "on" to enable a box around the axes.
In the "Properties" pane, set the "NextPlot" property of each axis to "add" to ensure that new images are added to the axes.
I hope this helps.
|
Computer Vision System Toolbox & GUI
|
I am using Computer Vision System Toolbox in Matlab to grabb images from two cameras simultaneously, I want to create an Graphical User Interface which include the picture grabbing from two cameras into two axis. The problem is I don't know how to create GUI with axis while using Computer Vision System Toolbox. Can you help me or provide me an matlab code for creating GUI using Computer Vision System toolbox to grabb images from camera in real time.
kind regards
|
[
"The only features of the Computer Vision System Toolbox itself that can be considered GUI components are vision.VideoPlayer and vision.DeployableVideoPlayer for playing videos. You can also use functions like insertObjectAnnotation, insertShape, insertMarker, insertText to draw text and graphics into the video frames.\nHowever, there are GUI components in the rest of MATLAB. For playing multiple video streams inside a custom GUI, take a look at this example \n",
"To create a GUI with axes for displaying images from two cameras using the Computer Vision System Toolbox in MATLAB, you can follow these steps:\n\nOpen the MATLAB app and go to the \"Apps\" tab in the toolstrip.\n\nIn the \"Apps\" tab, select the \"GUIDE\" app and click \"OK\" to open the GUI Designer.\n\nIn the GUI Designer, create a new GUI by clicking on the \"New\" button in the toolbar.\n\nIn the GUI Designer, drag two \"Axes\" components from the \"Components\" pane onto the layout area. Position the axes in the layout so that they are side by side and have the desired dimensions.\n\nIn the \"Properties\" pane, set the \"Tag\" property of each of the axes to a unique value. This will be used to identify the axes in your code. For example, you could set the \"Tag\" property of the first axis to \"axis1\" and the \"Tag\" property of the second axis to \"axis2\".\n\nIn the \"Properties\" pane, set the \"XLim\" and \"YLim\" properties of each axis to the desired range of values. This will determine the range of values that the images will be displayed in. For example, you could set the \"XLim\" property to [0, 1024] and the \"YLim\" property to [0, 768] to display images with a width of 1024 pixels and a height of 768 pixels.\n\nIn the \"Properties\" pane, set the \"XTick\" and \"YTick\" properties of each axis to the desired tick intervals. This will determine the interval at which tick marks and labels will be displayed on the axes.\n\nIn the \"Properties\" pane, set the \"XGrid\" and \"YGrid\" properties of each axis to \"on\" to enable grid lines on the axes.\n\nIn the \"Properties\" pane, set the \"XDir\" and \"YDir\" properties of each axis to \"reverse\" to display the images with the origin at the top-left corner, as is typically the case with images.\n\nIn the \"Properties\" pane, set the \"Box\" property of each axis to \"on\" to enable a box around the axes.\n\nIn the \"Properties\" pane, set the \"NextPlot\" property of each axis to \"add\" to ensure that new images are added to the axes.\n\n\nI hope this helps.\n"
] |
[
0,
0
] |
[] |
[] |
[
"matlab",
"matlab_cvst",
"vision"
] |
stackoverflow_0023655801_matlab_matlab_cvst_vision.txt
|
Q:
VS Code - How to stop it deleting whitespaces?
I am using Microsoft's VS Code to edit CSS, HTML and TS files that are shared by my team on a VSTS Git repo. However, my VS Code keeps removing empty lines/whitespace that my colleagues added whenever I save any change (image below), and this screws up the whole Git diff, as almost every single line of code shows as a diff.
I tried to disable every single config setting, but nothing works:
A:
It seems you have trailing whitespace enabled in User Preferences too.
I'd suggest opening your configuration file of VSCode using
CtrlShiftP or
CmdShiftP in Mac and then go to Open User Settings.
I'm sure the next line is around there somewhere, delete it or change it to false.
"files.trimTrailingWhitespace": true
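For reference, a minimal settings.json sketch with trimming disabled (the two extra keys are optional related settings, shown only as an example):
{
  "files.trimTrailingWhitespace": false,
  "files.trimFinalNewlines": false,
  "files.insertFinalNewline": false
}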
A:
At the end, what was causing my problem was the extension: EditorConfig for VS Code
This plugin attempts to override user/workspace settings with settings
found in .editorconfig files. No additional or vscode-specific files
are required. As with any EditorConfig plugin, if root=true is not
specified, EditorConfig will continue to look for an .editorconfig
file outside of the project.
I believe, it was overriding the options I selected inside of VS Code (such as files.trimTrailingWhitespace: false). So, no setup change I was making was actually being applied.
A:
In my case, the JS-CSS-HTML Formatter extension from lonefy
caused the problem.
A:
Set "Editor › Comments: Ignore Empty Lines" to false.
|
VS Code - How to stop it deleting whitespaces?
|
I am using Microsoft's VS Code to edit css, html and ts files that are shared by my team on a VSTS Git repo. However, my VS Code keeps removing empty lines/whitespace that my colleagues added whenever I save any change (image below), and this screws up the whole Git diff, as almost every single line of code shows as a diff.
I tried to disable every single config setting but nothing works:
|
[
"It seems you have trailing whitespace enabled in User Preferences too.\nI'd suggest opening your configuration file of VSCode using\nCtrlShiftP or\nCmdShiftP in Mac and then go to Open User Settings.\nI'm sure the next line is around there somewhere, delete it or change it to false.\nfiles.trimTrailingWhitespace\": true\n\n",
"At the end, what was causing my problem was the extension: EditorConfig for VS Code\n\nThis plugin attempts to override user/workspace settings with settings\nfound in .editorconfig files. No additional or vscode-specific files\nare required. As with any EditorConfig plugin, if root=true is not\nspecified, EditorConfig will continue to look for an .editorconfig\nfile outside of the project.\n\nI believe, it was overriding the options I selected inside of VS Code (such as files.trimTrailingWhitespace: false). So, no setup change I was making was actually being applied.\n",
"In my case, the JS-CSS-HTML Formatter extension from lonefy\ncaused the problem.\n",
"Editor › Comments: Ignore Empty Lines\n——>choose :false\n"
] |
[
5,
5,
0,
0
] |
[] |
[] |
[
"css",
"html",
"visual_studio_code"
] |
stackoverflow_0064373905_css_html_visual_studio_code.txt
|
Q:
Reinforcement Learning applications in computer vision?
As I continued to study computer vision, I felt that RL (reinforcement learning) was used relatively infrequently in computer vision tasks, compared to RL's initial impact and the expectations people had for it.
Even if you look at the list of papers accepted at top tier conferences such as CVPR, there are very few or no papers using RL.
Why is RL not well used in computer vision?
A:
RL has picked up significantly only in the last few years. Reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others.
Last year there was a nice tutorial on Deep RL in CV at CVPR:
http://ivg.au.tsinghua.edu.cn/DRLCV/
This is a list of interesting papers from various applications:
Visual Tracking
[1] James Supančič, III, Deva Ramanan, Tracking as Online Decision-Making: Learning a Policy From Streaming Videos With Reinforcement Learning, ICCV, 2017.
Visual Dialogue
[1] Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra, Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning, ICCV 2017.
Human Behaviour Analysis
[1] Nicholas Rhinehart, Kris M. Kitani, First-Person Activity Forecasting With Online Inverse Reinforcement Learning, ICCV, 2017.
Face Recognition
[1] Yongming Rao,Jiwen Lu, Jie Zhou. Attention-aware Deep Reinforcement Learning for Video Face Recognition, ICCV, 2017.
[2] Qingxing Cao, Liang Lin, Yukai Shi, Xiaodan Liang, Guanbin Li.Attention-Aware Face Hallucination via Deep Reinforcement Learning. CVPR, 2017.
Image Restoration
[1] Ke Yu, Chao Dong, Liang Lin, Chen Change Loy. Crafting a Toolchain for Image Restoration by Deep Reinforcement Learning. CVPR 2018.
Semantic Parsing
[1] Fangyu Liu, Shuaipeng Li, Liqiang Zhang, Chenghu Zhou, Rongtian Ye, Yuebin Wang, Jiwen Lu. 3DCNN-DQN-RNN: A Deep Reinforcement Learning Framework for Semantic Parsing of Large-Scale 3D Point Clouds. ICCV, 2017.
Video Summarization
[1] Kaiyang Zhou, Yu Qiao, Tao Xiang. Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity-Representativeness Reward. AAAI, 2018.
Active Object localization
[1] Juan C. Caicedo, Svetlana Lazebnik. Active Object Localization with Deep Reinforcement Learning. ICCV, 2015.
A:
There are several reasons why reinforcement learning (RL) may not be widely used in computer vision tasks compared to other machine learning techniques. Some possible reasons include:
RL algorithms require a large amount of data and computational resources to train effectively. In many computer vision tasks, it may be difficult to collect and label sufficient amounts of data, and the computational requirements of RL algorithms may be too high for the available hardware.
RL algorithms can be challenging to implement and debug, and require a deep understanding of both the algorithm and the domain in which it is being applied. In contrast, other machine learning techniques such as convolutional neural networks (CNNs) have well-established implementations and libraries that make it easier for practitioners to use them.
RL algorithms often require carefully designed reward functions and action spaces in order to learn effectively. This can be difficult to do in the context of complex computer vision tasks, where it may be difficult to define clear goals and actions for the algorithm to learn from.
In many cases, other machine learning techniques such as CNNs may be able to achieve similar or better performance on computer vision tasks without the need for RL. For example, CNNs have been shown to be highly effective at image classification tasks, and may be able to outperform RL algorithms on these tasks.
Overall, while RL has the potential to be useful for computer vision tasks, there are many challenges and limitations to its use in this domain. As a result, it may not be as widely used in computer vision as other machine learning techniques.
|
Reinforcement Learning applications in computer vision?
|
As I continued to study computer vision, I felt that RL (reinforcement learning) was used relatively infrequently in computer vision tasks, compared to RL's initial impact and the expectations people had for it.
Even if you look at the list of papers accepted at top tier conferences such as CVPR, there are very few or no papers using RL.
Why is RL not well used in computer vision?
|
[
"RL has picked up significantly only in last few years. Reinforcement learning has been used both for solving applied tasks such as processing and analysis of visual information, and for solving specific computer vision problems such as filtering, extracting image features, localizing objects in scenes, and many others.\nLast year there was a nice tutorial on Deep RL in CV at CVPR:\nhttp://ivg.au.tsinghua.edu.cn/DRLCV/\nThis is a list of interesting papers from various applications:\nVisual Tracking\n[1] James Supančič, III, Deva Ramanan, Tracking as Online Decision-Making: Learning a Policy From Streaming Videos With Reinforcement Learning, ICCV, 2017.\nVisual Dialogue\n[1] Abhishek Das, Satwik Kottur, José M. F. Moura, Stefan Lee, Dhruv Batra, earning Cooperative Visual Dialog Agents with Deep Reinforcement Learning, ICCV 2017.\nHuman Behaviour Analysis\n[1] Nicholas Rhinehart, Kris M. Kitani, First-Person Activity Forecasting With Online Inverse Reinforcement Learning, ICCV, 2017.\nFace Recognition\n[1] Yongming Rao,Jiwen Lu, Jie Zhou. Attention-aware Deep Reinforcement Learning for Video Face Recognition, ICCV, 2017.\n[2] Qingxing Cao, Liang Lin, Yukai Shi, Xiaodan Liang, Guanbin Li.Attention-Aware Face Hallucination via Deep Reinforcement Learning. CVPR, 2017.\nImage Restoration\n[1] Ke Yu, Chao Dong, Liang Lin, Chen Change Loy. Crafting a Toolchain for Image Restoration by Deep Reinforcement Learning. CVPR 2018.\nSemantic Parsing\n[1] Fangyu Liu, Shuaipeng Li, Liqiang Zhang, Chenghu Zhou, Rongtian Ye, Yuebin Wang, Jiwen Lu. 3DCNN-DQN-RNN: A Deep Reinforcement Learning Framework for Semantic Parsing of Large-Scale 3D Point Clouds. ICCV, 2017.\nVideo Summarization\n[1] Kaiyang Zhou, Yu Qiao, Tao Xiang. Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity-Representativeness Reward. AAAI, 2018.\nActive Object localization\n[1] Juan C. Caicedo, Svetlana Lazebnik. Active Object Localization with Deep Reinforcement Learning. ICCV, 2015.\n",
"There are several reasons why reinforcement learning (RL) may not be widely used in computer vision tasks compared to other machine learning techniques. Some possible reasons include:\n\nRL algorithms require a large amount of data and computational resources to train effectively. In many computer vision tasks, it may be difficult to collect and label sufficient amounts of data, and the computational requirements of RL algorithms may be too high for the available hardware.\n\nRL algorithms can be challenging to implement and debug, and require a deep understanding of both the algorithm and the domain in which it is being applied. In contrast, other machine learning techniques such as convolutional neural networks (CNNs) have well-established implementations and libraries that make it easier for practitioners to use them.\n\nRL algorithms often require carefully designed reward functions and action spaces in order to learn effectively. This can be difficult to do in the context of complex computer vision tasks, where it may be difficult to define clear goals and actions for the algorithm to learn from.\n\nIn many cases, other machine learning techniques such as CNNs may be able to achieve similar or better performance on computer vision tasks without the need for RL. For example, CNNs have been shown to be highly effective at image classification tasks, and may be able to outperform RL algorithms on these tasks.\n\nOverall, while RL has the potential to be useful for computer vision tasks, there are many challenges and limitations to its use in this domain. As a result, it may not be as widely used in computer vision as other machine learning techniques.\n\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"computer_vision",
"reinforcement_learning"
] |
stackoverflow_0063335535_computer_vision_reinforcement_learning.txt
|
Q:
setvalidator on a FormControl does not make the form invalid
HTML code
<div>
<label for=""
>No additional information flag:</label>
<rca-checkbox formControlName="noAdditionalInfoCheckbox" (checkboxChecked)="onCheckboxChecked($event)"></rca-checkbox>
</div>
<div>
<label >No additional information reasons:</label>
<textarea
formControlName="noAdditionalInformationReasons"
id=""
class="form-control"
></textarea>
</div>
TS file
onCheckboxChecked(isChecked): void {
const noAdditionalInfoReasonsControl = this.addNewRequestFormForIndividual.get('noAdditionalInformationReasons');
if (isChecked){
noAdditionalInfoReasonsControl.setValidators(Validators.required);
this.noAddInfoReasonsErrorMessage = "give reason";
}
else {
noAdditionalInfoReasonsControl.clearValidators;
this.noAddInfoReasonsErrorMessage = '';
}
noAdditionalInfoReasonsControl.updateValueAndValidity;
console.log(this.addNewRequestFormForIndividual.valid);
}
If the checkbox is checked, I want to add a required validator to the second FormControl and the Add button over the form will be disabled if the form is not valid.
Now what I see is that the last console.log prints true even though I am setting the validators above, and the Add button does not get disabled. Also, once I start modifying the field that was made required, the validation state starts to change: if I write something in the required field and then remove it, the Add button gets disabled and the form state becomes invalid.
But then even if I uncheck the checkbox, the form is still invalid. I want to understand why this happens even though I am using updateValueAndValidity.
A:
clearValidators and updateValueAndValidity are methods. You left out the (), so neither method is actually called. Add () in order to call them, as below:
noAdditionalInfoReasonsControl.clearValidators();
noAdditionalInfoReasonsControl.updateValueAndValidity();
Demo @ StackBlitz
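For completeness, a minimal sketch of the corrected handler, keeping the names from the question:
onCheckboxChecked(isChecked): void {
  const noAdditionalInfoReasonsControl = this.addNewRequestFormForIndividual.get('noAdditionalInformationReasons');
  if (isChecked) {
    noAdditionalInfoReasonsControl.setValidators(Validators.required);
    this.noAddInfoReasonsErrorMessage = 'give reason';
  } else {
    noAdditionalInfoReasonsControl.clearValidators(); // the () actually invokes the method
    this.noAddInfoReasonsErrorMessage = '';
  }
  noAdditionalInfoReasonsControl.updateValueAndValidity(); // () here too
  console.log(this.addNewRequestFormForIndividual.valid);
}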
|
setvalidator on a FormControl does not make the form invalid
|
HTML code
<div>
<label for=""
>No additional information flag:</label>
<rca-checkbox formControlName="noAdditionalInfoCheckbox" (checkboxChecked)="onCheckboxChecked($event)"></rca-checkbox>
</div>
<div>
<label >No additional information reasons:</label>
<textarea
formControlName="noAdditionalInformationReasons"
id=""
class="form-control"
></textarea>
</div>
TS file
onCheckboxChecked(isChecked): void {
const noAdditionalInfoReasonsControl = this.addNewRequestFormForIndividual.get('noAdditionalInformationReasons');
if (isChecked){
noAdditionalInfoReasonsControl.setValidators(Validators.required);
this.noAddInfoReasonsErrorMessage = "give reason";
}
else {
noAdditionalInfoReasonsControl.clearValidators;
this.noAddInfoReasonsErrorMessage = '';
}
noAdditionalInfoReasonsControl.updateValueAndValidity;
console.log(this.addNewRequestFormForIndividual.valid);
}
If the checkbox is checked, I want to add a required validator to the second FormControl and the Add button over the form will be disabled if the form is not valid.
Now what I see is that the last console.log prints true even though I am setting the validators above, and the Add button does not get disabled. Also, once I start modifying the field that was made required, the validation state starts to change: if I write something in the required field and then remove it, the Add button gets disabled and the form state becomes invalid.
But then even if I uncheck the checkbox, the form is still invalid. I want to understand why this happens even though I am using updateValueAndValidity.
|
[
"clearValidators and updateValueAndValidity are the methods. You miss out the () so both methods are not called. Add () in order to call the method as below:\nnoAdditionalInfoReasonsControl.clearValidators();\n\nnoAdditionalInfoReasonsControl.updateValueAndValidity();\n\nDemo @ StackBlitz\n"
] |
[
1
] |
[] |
[] |
[
"angular",
"angular_reactive_forms",
"checkbox",
"typescript"
] |
stackoverflow_0074660375_angular_angular_reactive_forms_checkbox_typescript.txt
|
Q:
Select always returns 0 in an input file
Select always returns 0 in an input file
I wrote a function that receives a FILE* and checks whether it is ready.
The function:
int ioManager_nextReady(FILE *IFILE) {
  // Setting input file
int inDescrp = fileno(IFILE ? IFILE : stdin);
// Setting timer to 0
struct timeval timeout;
timeout.tv_sec = timeout.tv_usec = 0;
// Variables for select
unsigned short int nfds = 1;
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(inDescrp, &readfds);
// Run select
int nReady = select(nfds, &readfds, NULL, NULL, &timeout);
if (nReady > 0) {
return inDescrp;
}
return -1;
}
I am trying to test this function with check.h.
The tests:
static FILE *tmpIn;
void before(char *line) {
tmpIn = tmpfile();
if (line) {
fprintf(tmpIn, "%s\n", line);
rewind(tmpIn);
fflush(tmpIn);
}
}
void after() { fclose(tmpIn); }
START_TEST(test_ioManager_nextReady_NULL) {
before(NULL);
int data;
data = ioManager_nextReady(tmpIn);
ck_assert_int_eq(data, -1);
after();
}
END_TEST
#define LINEIN "Sample input"
START_TEST(test_ioManager_nextReady_text) {
before(LINEIN);
int data;
data = ioManager_nextReady(tmpIn);
ck_assert_int_ne(data, -1);
after();
}
END_TEST
The result:
Running suite(s): IOManager
50%: Checks: 2, Failures: 1, Errors: 0
ioManager.test.c:42:F:Smoke:test_ioManager_nextReady_text:0: Assertion 'data != -1' failed: data == -1, -1 == -1
Select is returning 0 after I use rewind and fflush.
When I use read I can retrieve the data.
// Debug
char bff[MAXLINE];
int n = read(inDescrp, bff, MAXLINE);
bff[n] = '\0';
printf("%d\n", inDescrp);
printf("%s\n", bff);
So select is returning 0 even when I can read data.
The problem also continues if I try to set a non-zero timeout.
Why is this happening?
I need to check if a file is ready to be read.
What is a possible solution?
A:
I can see why you have been led astray by the 'nfds' parameter to select(). It reads and sounds like "number of file descriptors".
It is not that. It should be the value of the highest file descriptor that you care about, plus 1. See (for example) the Linux manpage about it
As an aside, the nfds parameter is an int - so don't use an unsigned short. It will "just work" generally, but is very confusing.
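A minimal sketch of the fix, keeping the variable names from the question:
// nfds must be the highest descriptor being watched, plus 1
int nReady = select(inDescrp + 1, &readfds, NULL, NULL, &timeout);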
|
Select always returns 0 in an input file
|
Select always returns 0 in an input file
I wrote a function that receives a FILE* and checks whether it is ready.
The function:
int ioManager_nextReady(FILE *IFILE) {
  // Setting input file
int inDescrp = fileno(IFILE ? IFILE : stdin);
// Setting timer to 0
struct timeval timeout;
timeout.tv_sec = timeout.tv_usec = 0;
// Variables for select
unsigned short int nfds = 1;
fd_set readfds;
FD_ZERO(&readfds);
FD_SET(inDescrp, &readfds);
// Run select
int nReady = select(nfds, &readfds, NULL, NULL, &timeout);
if (nReady > 0) {
return inDescrp;
}
return -1;
}
I am trying to test this function with check.h.
The tests:
static FILE *tmpIn;
void before(char *line) {
tmpIn = tmpfile();
if (line) {
fprintf(tmpIn, "%s\n", line);
rewind(tmpIn);
fflush(tmpIn);
}
}
void after() { fclose(tmpIn); }
START_TEST(test_ioManager_nextReady_NULL) {
before(NULL);
int data;
data = ioManager_nextReady(tmpIn);
ck_assert_int_eq(data, -1);
after();
}
END_TEST
#define LINEIN "Sample input"
START_TEST(test_ioManager_nextReady_text) {
before(LINEIN);
int data;
data = ioManager_nextReady(tmpIn);
ck_assert_int_ne(data, -1);
after();
}
END_TEST
The result:
Running suite(s): IOManager
50%: Checks: 2, Failures: 1, Errors: 0
ioManager.test.c:42:F:Smoke:test_ioManager_nextReady_text:0: Assertion 'data != -1' failed: data == -1, -1 == -1
Select is returning 0 after I use rewind and fflush.
When I use read I can retrieve the data.
// Debug
char bff[MAXLINE];
int n = read(inDescrp, bff, MAXLINE);
bff[n] = '\0';
printf("%d\n", inDescrp);
printf("%s\n", bff);
So select is returning 0 even when I can read data.
The problem also continues if I try to set a non-zero timeout.
Why is this happening?
I need to check if a file is ready to be read.
What is a possible solution?
|
[
"I can see why you have been led astray, by the 'nfds' parameter to select(). It reads and sounds like \"number of file descriptors\".\nIt is not that. It should be the value of the highest file descriptor that you care about, plus 1. See (for example) the Linux manpage about it\nAs an aside, the nfds parameter is an int - so don't use an unsigned short. It will \"just work\" generally, but is very confusing.\n"
] |
[
0
] |
[] |
[] |
[
"c",
"fflush",
"rewind",
"select"
] |
stackoverflow_0074658600_c_fflush_rewind_select.txt
|
Q:
Make a Computer Vision Application use Custom Vision
I built a Computer Vision Application following the Microsoft tutorial here:
https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise1
Now I want to use customvision.ai instead of the standard Computer Vision. I moved my customvision project to Azure and thought I just had to change Subscription Key and Vision Endpoint but it does not seem to work.
<add key="VisionEndpoint" value="CustomVisionEndpoint" />
<add key="SubscriptionKey" value="ValueOfCustomVisionKeyInAzure" />
When I use the Computer Vision Endpoint and Key everything works fine, but if I use Custom Vision I get an error message:
Operation returned an invalid status code 'NotFound'
Anyone have any idea? Do I have to change additional values or what is the problem?
Thank you for any help
Best regards,
Daniel
A:
On Azure Cognitive Services, Custom Vision Service is different from Computer Vision. They have different architectures, different APIs and different endpoints for calling. Therefore, their SDKs can not be compatible for each other.
You can deeper understand them via their offical samples for CustomVision and ComputerVision, even through their REST APIs in the Reference of Cognitive Services.
A:
In order to use Custom Vision in your application, you will need to make sure that you are using the correct endpoint and subscription key for your Custom Vision project. You can find these values in the Azure portal, under the "Prediction URL" and "Prediction Key" fields in your Custom Vision project's settings.
In addition to updating these values in your application's configuration, you may also need to update the code that calls the Custom Vision API to use the correct endpoint and authentication values. This may involve updating the URL and headers of the HTTP request that is sent to the Custom Vision API, as well as any other code that processes the response from the API.
It is also possible that the error you are seeing is related to the format or content of the images that you are sending to the Custom Vision API. You can try using the Custom Vision API's online test interface to test your images and see if they are being recognized correctly by your Custom Vision model. This can help you identify any issues with your images and ensure that they are being processed correctly by the API.
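As a rough sketch, a Custom Vision prediction call is a plain HTTP POST; the placeholders below are hypothetical, and the exact URL shape depends on your API version and region, so copy the real one from the "Prediction URL" field in the portal:
curl -X POST "https://<region>.api.cognitive.microsoft.com/customvision/v3.0/Prediction/<projectId>/classify/iterations/<iterationName>/image" \
  -H "Prediction-Key: <predictionKey>" \
  -H "Content-Type: application/octet-stream" \
  --data-binary @image.jpg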
|
Make a Computer Vision Application use Custom Vision
|
I built a Computer Vision Application following the Microsoft tutorial here:
https://github.com/Microsoft/computerscience/blob/master/Labs/Azure%20Services/Azure%20Storage/Azure%20Storage%20and%20Cognitive%20Services%20(MVC).md#Exercise1
Now I want to use customvision.ai instead of the standard Computer Vision. I moved my customvision project to Azure and thought I just had to change Subscription Key and Vision Endpoint but it does not seem to work.
<add key="VisionEndpoint" value="CustomVisionEndpoint" />
<add key="SubscriptionKey" value="ValueOfCustomVisionKeyInAzure" />
When I use the Computer Vision Endpoint and Key everything works fine, but if I use Custom Vision I get an error message:
Operation returned an invalid status code 'NotFound'
Anyone have any idea? Do I have to change additional values or what is the problem?
Thank you for any help
Best regards,
Daniel
|
[
"On Azure Cognitive Services, Custom Vision Service is different from Computer Vision. They have different architectures, different APIs and different endpoints for calling. Therefore, their SDKs can not be compatible for each other.\nYou can deeper understand them via their offical samples for CustomVision and ComputerVision, even through their REST APIs in the Reference of Cognitive Services.\n\n",
"In order to use Custom Vision in your application, you will need to make sure that you are using the correct endpoint and subscription key for your Custom Vision project. You can find these values in the Azure portal, under the \"Prediction URL\" and \"Prediction Key\" fields in your Custom Vision project's settings.\nIn addition to updating these values in your application's configuration, you may also need to update the code that calls the Custom Vision API to use the correct endpoint and authentication values. This may involve updating the URL and headers of the HTTP request that is sent to the Custom Vision API, as well as any other code that processes the response from the API.\nIt is also possible that the error you are seeing is related to the format or content of the images that you are sending to the Custom Vision API. You can try using the Custom Vision API's online test interface to test your images and see if they are being recognized correctly by your Custom Vision model. This can help you identify any issues with your images and ensure that they are being processed correctly by the API.\n"
] |
[
0,
0
] |
[] |
[] |
[
"azure",
"azure_cognitive_services",
"computer_vision",
"microsoft_custom_vision"
] |
stackoverflow_0055099223_azure_azure_cognitive_services_computer_vision_microsoft_custom_vision.txt
|
Q:
Wait for function is done before triggering the next one Lambda
I have a lambda function that triggers once a file is uploaded to S3. There's a slight possibility that two files might be uploaded at the same time. The thing is, I don't want the lambda function running both files at the same time. I want one file to go first and then another. Is this possible?
A:
Yes, it is possible to have one Lambda function wait for another to finish before triggering. Here are a few different ways you can do this:
Use a lock on the shared resource that both functions need to access. This can be as simple as a file on S3 that each function checks for before proceeding. When one function starts running, it acquires the lock by creating the file. When it finishes, it deletes the file, allowing the other function to acquire the lock and run.
Use the built-in concurrency controls in AWS Lambda. With concurrency controls, you can set the maximum number of concurrent executions for a Lambda function, and the function will automatically queue any additional requests until the number of concurrent executions falls below the limit. This way, if one function is already running, the other will be queued until the first one finishes.
Use AWS Step Functions to create a workflow that orchestrates the execution of your Lambda functions. With Step Functions, you can define a sequence of tasks that run in a specific order, with each task representing a Lambda function. This way, you can ensure that one function runs after another and control the flow of execution.
Each of these approaches has its own trade-offs and considerations, so you'll need to choose the one that best fits your use case. You may also want to consider the complexity of the solution and the amount of additional code or infrastructure it requires.
A:
One way to implement this is to set the concurrency limit for the Lambda function to 1, ensuring that only one instance of the function is active at any given time. Any additional invocations of the function will be throttled until the active instance completes its execution. This ensures that the function is executed serially, with only one instance running at a time. You can have your S3 bucket send a message to an SQS FIFO queue whenever a new file is uploaded, and trigger the Lambda from that queue. This approach ensures that your Lambda function processes the files in the order in which they are added to the queue and prevents the function from running multiple instances concurrently.
Another way to implement this is to use a distributed lock. When your lambda function starts, it can attempt to acquire the lock by writing a record to the table. This dynamodb write can be transactional write with optimistic locking. If the write operation succeeds, it means that the lambda function has acquired the lock and can proceed with its execution. If the write operation fails, it means that another instance of the lambda function has already acquired the lock. This has a lot of challenges that need to be handled, like how to re-trigger the lambda if the acquisition of lock fails.
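For example, a reserved concurrency of 1 can be set from the AWS CLI (the function name here is a placeholder):
aws lambda put-function-concurrency \
  --function-name my-s3-processor \
  --reserved-concurrent-executions 1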
|
Wait for function is done before triggering the next one Lambda
|
I have a lambda function that triggers once a file is uploaded to S3. There's a slight possibility that two files might be uploaded at the same time. The thing is, I don't want the lambda function running both files at the same time. I want one file to go first and then another. Is this possible?
|
[
"Yes, it is possible to have one Lambda function wait for another to finish before triggering. Here are a few different ways you can do this:\n\nUse a lock on the shared resource that both functions need to access. This can be as simple as a file on S3 that each function checks for before proceeding. When one function starts running, it acquires the lock by creating the file. When it finishes, it deletes the file, allowing the other function to acquire the lock and run.\n\nUse the built-in concurrency controls in AWS Lambda. With concurrency controls, you can set the maximum number of concurrent executions for a Lambda function, and the function will automatically queue any additional requests until the number of concurrent executions falls below the limit. This way, if one function is already running, the other will be queued until the first one finishes.\n\nUse AWS Step Functions to create a workflow that orchestrates the execution of your Lambda functions. With Step Functions, you can define a sequence of tasks that run in a specific order, with each task representing a Lambda function. This way, you can ensure that one function runs after another and control the flow of execution.\n\n\nEach of these approaches has its own trade-offs and considerations, so you'll need to choose the one that best fits your use case. You may also want to consider the complexity of the solution and the amount of additional code or infrastructure it requires.\n",
"One way to implement this is to set the concurrency limit for a Lambda function to 1 to ensure that only one instance of the function is active at any given time. Any additional invocations of the function will be throttled until the active instance completes its execution. This will ensure that the function is executed serially, with only one instance running at a time. You can have your S3 bucket send a message to the FIFO queue whenever a new file is uploaded. This approach will ensure that your Lambda function processes the files in the order in which they are added to the queue and will prevent the function from running multiple instances concurrently.\nAnother way to implement this is to use a distributed lock. When your lambda function starts, it can attempt to acquire the lock by writing a record to the table. This dynamodb write can be transactional write with optimistic locking. If the write operation succeeds, it means that the lambda function has acquired the lock and can proceed with its execution. If the write operation fails, it means that another instance of the lambda function has already acquired the lock. This has a lot of challenges that need to be handled, like how to re-trigger the lambda if the acquisition of lock fails.\n"
] |
[
3,
0
] |
[] |
[] |
[
"amazon_s3",
"amazon_web_services",
"aws_lambda"
] |
stackoverflow_0074663051_amazon_s3_amazon_web_services_aws_lambda.txt
|
Q:
Difference between Bitcoin Core Build and Umbrel Build
I recently started reading Mastering Bitcoin and have downloaded Bitcoin Core. I've run through the installation process and I am at the section titled Running a Bitcoin Core Node. I am trying to understand how Bitcoin Core on my computer is different from running a full node on an Umbrel with a Raspberry Pi and an external hard drive. The Umbrel has a full copy of the blockchain. What exactly is the Bitcoin Core Node configured from Bitcoin Core on my computer? Will this download the full blockchain to my computer? Is it different or the same thing?
A:
Bitcoin Core and Umbrel are two different implementations of the Bitcoin protocol. Bitcoin Core is the reference implementation of Bitcoin, and is maintained by the Bitcoin Core development team. Umbrel is a more user-friendly implementation of the Bitcoin protocol that is designed to make it easier for users to run a full node and interact with the Bitcoin network.
One key difference between the two implementations is the build process. Bitcoin Core uses a traditional build process, where users must compile the source code on their own machines in order to create a working version of the software. This can be a complex and time-consuming process, and requires users to have a certain level of technical expertise.
In contrast, Umbrel uses a different build process that is designed to be more user-friendly. Rather than compiling the source code themselves, users can download pre-built versions of Umbrel from the Umbrel website. This makes it easier for users to get started with Umbrel, as they don't have to go through the process of building the software from scratch.
Another key difference between the two implementations is the focus and target audience. Bitcoin Core is geared towards more technically-savvy users, who are comfortable with running a full node and participating in the Bitcoin network at a low-level. Umbrel, on the other hand, is designed for a wider audience, and is intended to make it easy for anyone to run a full node and interact with the Bitcoin network.
Overall, the choice between Bitcoin Core and Umbrel will depend on your specific needs and technical expertise. If you are a more experienced user and want to have complete control over your node and the Bitcoin network, Bitcoin Core may be the right choice for you. If you are a less experienced user and want an easier way to run a full node and interact with the Bitcoin network, Umbrel may be a better option.
What exactly is the Bitcoin Core Node configured from Bitcoin Core on my computer?
When you run the Bitcoin Core software on your computer, you are creating a full node on the Bitcoin network. This means that you are running a complete copy of the Bitcoin blockchain and are participating in the network as a full peer.
As a full node, your computer is responsible for verifying and relaying transactions on the network. This helps to ensure the integrity and security of the Bitcoin network, as it helps to prevent double spending and other forms of fraud.
In addition to running the Bitcoin Core software, you can also configure various settings and options for your node. This includes things like the maximum number of connections your node will accept, the maximum size of your transaction memory pool, and the minimum transaction fee you will accept.
These settings and options allow you to customize your node and control how it interacts with the rest of the network. For example, if you want your node to be more conservative and avoid potentially risky transactions, you can set the minimum transaction fee to a higher value. This will prevent your node from accepting and relaying transactions with low fees, which can help to reduce your exposure to potential losses.
Overall, running a Bitcoin Core node on your computer is an important way to support the Bitcoin network and help ensure its security and reliability. By participating as a full node, you are helping to decentralize the network and make it more resilient to attacks and other forms of disruption.
|
Difference between Bitcoin Core Build and Umbrel Build
|
I recently started reading Mastering Bitcoin and have downloaded Bitcoin Core. I've run through the installation process and I am at the section titled Running a Bitcoin Core Node. I am trying to understand how Bitcoin Core on my computer is different from running a full node on an Umbrel with a Raspberry Pi and an external hard drive. The Umbrel has a full copy of the blockchain. What exactly is the Bitcoin Core Node configured from Bitcoin Core on my computer? Will this download the full blockchain to my computer? Is it different or the same thing?
|
[
"Bitcoin Core and Umbrel are two different implementations of the Bitcoin protocol. Bitcoin Core is the reference implementation of Bitcoin, and is maintained by the Bitcoin Core development team. Umbrel is a more user-friendly implementation of the Bitcoin protocol that is designed to make it easier for users to run a full node and interact with the Bitcoin network.\nOne key difference between the two implementations is the build process. Bitcoin Core uses a traditional build process, where users must compile the source code on their own machines in order to create a working version of the software. This can be a complex and time-consuming process, and requires users to have a certain level of technical expertise.\nIn contrast, Umbrel uses a different build process that is designed to be more user-friendly. Rather than compiling the source code themselves, users can download pre-built versions of Umbrel from the Umbrel website. This makes it easier for users to get started with Umbrel, as they don't have to go through the process of building the software from scratch.\nAnother key difference between the two implementations is the focus and target audience. Bitcoin Core is geared towards more technically-savvy users, who are comfortable with running a full node and participating in the Bitcoin network at a low-level. Umbrel, on the other hand, is designed for a wider audience, and is intended to make it easy for anyone to run a full node and interact with the Bitcoin network.\nOverall, the choice between Bitcoin Core and Umbrel will depend on your specific needs and technical expertise. If you are a more experienced user and want to have complete control over your node and the Bitcoin network, Bitcoin Core may be the right choice for you. If you are a less experienced user and want an easier way to run a full node and interact with the Bitcoin network, Umbrel may be a better option.\nWhat exactly is the Bitcoin Core Node configured from Bitcoin Core on my computer?\nWhen you run the Bitcoin Core software on your computer, you are creating a full node on the Bitcoin network. This means that you are running a complete copy of the Bitcoin blockchain and are participating in the network as a full peer.\nAs a full node, your computer is responsible for verifying and relaying transactions on the network. This helps to ensure the integrity and security of the Bitcoin network, as it helps to prevent double spending and other forms of fraud.\nIn addition to running the Bitcoin Core software, you can also configure various settings and options for your node. This includes things like the maximum number of connections your node will accept, the maximum size of your transaction memory pool, and the minimum transaction fee you will accept.\nThese settings and options allow you to customize your node and control how it interacts with the rest of the network. For example, if you want your node to be more conservative and avoid potentially risky transactions, you can set the minimum transaction fee to a higher value. This will prevent your node from accepting and relaying transactions with low fees, which can help to reduce your exposure to potential losses.\nOverall, running a Bitcoin Core node on your computer is an important way to support the Bitcoin network and help ensure its security and reliability. By participating as a full node, you are helping to decentralize the network and make it more resilient to attacks and other forms of disruption.\n"
] |
[
0
] |
[] |
[] |
[
"bitcoind"
] |
stackoverflow_0074661597_bitcoind.txt
|
Q:
ValueError when trying to write a for loop in python
When I run this:
import pandas as pd
data = {'id': ['earn', 'earn','lose', 'earn'],
'game': ['darts', 'balloons', 'balloons', 'darts']
}
df = pd.DataFrame(data)
print(df)
print(df.loc[[1],['id']] == 'earn')
The output is:
id game
0 earn darts
1 earn balloons
2 lose balloons
3 earn darts
id
1 True
But when I try to run this loop:
for i in range(len(df)):
if (df.loc[[i],['id']] == 'earn'):
print('yes')
else:
print('no')
I get the error 'ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().'
I am not sure what the problem is. Any help or advice is appreciated -- I am just starting.
I expected the output to be 'yes' from the loop. But I just got the 'ValueError' message. But, when I run the condition by itself, the output is 'True' so I'm not sure what is wrong.
A:
for i,row in df.iterrows():
if row.id == "earn":
print("yes")
A:
It's complicated. pandas is geared towards operating on entire groups of data, not individual cells. df.loc may create a new DataFrame, a Series or a single value, depending on how it's indexed. And those produce DataFrame, Series or scalar results for the == comparison.
If the indexers are both lists, you get a new DataFrame and the compare is also a dataframe
>>> foo = df.loc[[1], ['id']]
>>> type(foo)
<class 'pandas.core.frame.DataFrame'>
>>> foo
id
1 earn
>>> foo == "earn"
id
1 True
If one indexer is scalar, you get a new Series
>>> foo = df.loc[[1], 'id']
>>> type(foo)
<class 'pandas.core.series.Series'>
>>> foo
1 earn
Name: id, dtype: object
>>> foo == 'earn'
1 True
Name: id, dtype: bool
If both indexers are scalar, you get a single cell's value
>>> foo = df.loc[1, 'id']
>>> type(foo)
<class 'str'>
>>> foo
'earn'
>>> foo == 'earn'
True
That last is the one you want. The first two produce containers where True is ambiguous (you need to decide if any or all values need to be True).
for i in range(len(df)):
if (df.loc[i,'id'] == 'earn'):
print('yes')
else:
print('no')
Or maybe not. Depending on what you intend to do next, create a series of boolean values for all of the rows at once
>>> earn = df['id'] == 'earn'
>>> earn
0 True
1 True
2 False
3 True
Name: id, dtype: bool
now you can continue to make calculations on the dataframe as a whole.
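If the goal is just a yes/no label per row, the whole loop can be replaced with one vectorized call — a sketch using numpy, which pandas already depends on (the 'earned' column name is made up):
import numpy as np
df['earned'] = np.where(df['id'] == 'earn', 'yes', 'no')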
|
ValueError when trying to write a for loop in python
|
When I run this:
import pandas as pd
data = {'id': ['earn', 'earn','lose', 'earn'],
'game': ['darts', 'balloons', 'balloons', 'darts']
}
df = pd.DataFrame(data)
print(df)
print(df.loc[[1],['id']] == 'earn')
The output is:
id game
0 earn darts
1 earn balloons
2 lose balloons
3 earn darts
id
1 True
But when I try to run this loop:
for i in range(len(df)):
if (df.loc[[i],['id']] == 'earn'):
print('yes')
else:
print('no')
I get the error 'ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().'
I am not sure what the problem is. Any help or advice is appreciated -- I am just starting.
I expected the output to be 'yes' from the loop. But I just got the 'ValueError' message. But, when I run the condition by itself, the output is 'True' so I'm not sure what is wrong.
|
[
"for i,row in df.iterrows():\n if row.id == \"earn\":\n print(\"yes\")\n\n",
"Its complicated. pandas is geared towards operating on entire groups of data, not individual cells. df.loc may create a new DataFrame, a Series or a single value, depending on how its indexed. And those produce DataFrame, Series or scalar results for the == comparison.\nIf the indexers are both lists, you get a new DataFrame and the compare is also a dataframe\n>>> foo = df.loc[[1], ['id']]\n>>> type(foo)\n<class 'pandas.core.frame.DataFrame'>\n>>> foo\n id\n1 earn\n>>> foo == \"earn\"\n id\n1 True\n\nIf one indexer is scalar, you get a new Series\n>>> foo = df.loc[[1], 'id']\n>>> type(foo)\n<class 'pandas.core.series.Series'>\n>>> foo\n1 earn\nName: id, dtype: object\n>>> foo == 'earn'\n1 True\nName: id, dtype: bool\n\nIf both indexers are scalar, you get a single cell's value\n>>> foo = df.loc[1, 'id']\n>>> type(foo)\n<class 'str'>\n>>> foo\n'earn'\n>>> foo == 'earn'\nTrue\n\nThat last is the one you want. The first two produce containers where True is ambiguous (you need to decide if any or all values need to be True).\nfor i in range(len(df)): \n if (df.loc[i,'id'] == 'earn'): \n print('yes') \n else: \n print('no')\n\nOr maybe not. Depending on what you intend to do next, create a series of boolean values for all of the rows at once\n>>> earn = df[id'] == 'earn'\n>>> earn\n0 True\n1 True\n2 False\n3 True\nName: id, dtype: bool\n\nnow you can continue to make calculations on the dataframe as a whole.\n"
] |
[
1,
0
] |
[] |
[] |
[
"loops",
"python",
"valueerror"
] |
stackoverflow_0074663367_loops_python_valueerror.txt
|
Q:
Flutter does not accept Color const
I created a file with the colors that I want to use in my project, but when I try to use them, Flutter does not accept the variables as a correct color.
This is the colors file:
import 'package:flutter/material.dart';
class ColorSkh {
static Color innovation = const Color.fromARGB(255, 59, 18, 45);
static Color fresh = const Color.fromARGB(255, 172, 155, 163);
static Color hightech = const Color.fromARGB(255, 0, 128, 157);
static Color sophisticated = const Color.fromARGB(255, 230, 231, 232);
static Color sk = const Color.fromARGB(255, 191, 205, 217);
static Color white = const Color.fromARGB(255, 255, 255, 255);
}
And this is how I am using it
import 'package:analytic_skh/ui/color_skh.dart';
.
.
.
SizedBox(
*height: height * 0.10,
child: Align(
alignment: Alignment.centerLeft,
child: ElevatedButton.icon(
onPressed: onPressed,
icon: const Icon(Icons.settings),
label: const Text('Settings',
style:
TextStyle(backgroundColor: ColorSkh.innovation)),
style: ElevatedButton.styleFrom(
backgroundColor: Colors.white,
),
)),
)*
but I get the error that this is an invalid constant
A:
Change your static colors to the const like below.
class ColorSkh {
static const Color innovation = Color.fromARGB(255, 59, 18, 45);
static const Color fresh = Color.fromARGB(255, 172, 155, 163);
static const Color hightech = Color.fromARGB(255, 0, 128, 157);
static const Color sophisticated = Color.fromARGB(255, 230, 231, 232);
static const Color sk = Color.fromARGB(255, 191, 205, 217);
static const Color white = Color.fromARGB(255, 255, 255, 255);
}
A:
You need to remove the const from the Text widget, since it can't be constant anymore: those color variables are static and not final, so they are not compile-time constants. Try this:
SizedBox(
*height: height * 0.10,
child: Align(
alignment: Alignment.centerLeft,
child: ElevatedButton.icon(
onPressed: onPressed,
icon: const Icon(Icons.settings),
label: Text('Settings',
style:
TextStyle(backgroundColor: ColorSkh.innovation)),
style: ElevatedButton.styleFrom(
backgroundColor: Colors.white,
),
)),
)*
|
Flutter does not accept Color const
|
I created a file with the colors that I want to use in my project, but when I try to use them, Flutter does not accept the variables as a correct color.
This is the colors file:
import 'package:flutter/material.dart';
class ColorSkh {
static Color innovation = const Color.fromARGB(255, 59, 18, 45);
static Color fresh = const Color.fromARGB(255, 172, 155, 163);
static Color hightech = const Color.fromARGB(255, 0, 128, 157);
static Color sophisticated = const Color.fromARGB(255, 230, 231, 232);
static Color sk = const Color.fromARGB(255, 191, 205, 217);
static Color white = const Color.fromARGB(255, 255, 255, 255);
}
And this is how I am using it
import 'package:analytic_skh/ui/color_skh.dart';
.
.
.
SizedBox(
*height: height * 0.10,
child: Align(
alignment: Alignment.centerLeft,
child: ElevatedButton.icon(
onPressed: onPressed,
icon: const Icon(Icons.settings),
label: const Text('Settings',
style:
TextStyle(backgroundColor: ColorSkh.innovation)),
style: ElevatedButton.styleFrom(
backgroundColor: Colors.white,
),
)),
)*
but I get the error that this is an invalid constant
|
[
"Change your static colors to the const like below.\nclass ColorSkh {\n static const Color innovation = Color.fromARGB(255, 59, 18, 45);\n static const Color fresh = Color.fromARGB(255, 172, 155, 163);\n static const Color hightech = Color.fromARGB(255, 0, 128, 157);\n static const Color sophisticated = Color.fromARGB(255, 230, 231, 232);\n static const Color sk = Color.fromARGB(255, 191, 205, 217);\n static const Color white = Color.fromARGB(255, 255, 255, 255);\n}\n\n",
"You need to remove the color from the Text widget since it ca be constant anymore because those color variables are static and not final, so try this:\nSizedBox(\n *height: height * 0.10,\n child: Align(\n alignment: Alignment.centerLeft,\n child: ElevatedButton.icon(\n onPressed: onPressed,\n icon: const Icon(Icons.settings),\n label: Text('Settings',\n style:\n TextStyle(backgroundColor: ColorSkh.innovation)),\n style: ElevatedButton.styleFrom(\n backgroundColor: Colors.white,\n ),\n )),\n )*\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"dart",
"flutter"
] |
stackoverflow_0074663489_dart_flutter.txt
|
Q:
Shell error: Value too great for base (error token is "168a")
I get this error message for my shell script below:
./my_script.sh line 17: f1bfab2e-168a: value too great for base (error token is "168a")
data=($( jq -r '.data' data.json | tr -d '[]," ' ))
for i in "${data[@]}"; do
echo "${data[i]}"
done
(data is made up of array elements that are a series of letters and numbers such as f1bfab2e-168a-4da7-9677-5018e5f97g0f )
I've referenced other Stack Overflow posts that provide solutions such as variable indirection or characters getting interpreted as octal numbers but I have been unable to resolve this error.
When using variable indirection such as
data=($( jq -r '.data' data.json | tr -d '[]," ' ))
for i in "${data[@]}"; do
echo "${data[${!i}]}"
done
"${data[${!i}]}" ends up only referencing the first array element for some reason. So if my array has two elements abc123 and bcd234, then what gets printed is just
abc123 abc123
instead of
abc123 bcd234.
I don't exactly understand why and what's going on there.
Additionally, I don't think that bash is interpreting any of my characters as octal numbers here so a solution for that situation does not apply to my case.
What significance does 168a have to bash?
A:
The i is already the item of the array, not the index, as you seem to believe. This should work:
for i in "${data[@]}"; do
echo "$i"
done
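If you do need the index as well as the value, iterate over the array's keys instead:
for i in "${!data[@]}"; do
    echo "$i: ${data[i]}"
done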
|
Shell error: Value too great for base (error token is "168a")
|
I get this error message for my shell script below:
./my_script.sh line 17: f1bfab2e-168a: value too great for base (error token is "168a")
data=($( jq -r '.data' data.json | tr -d '[]," ' ))
for i in "${data[@]}"; do
echo "${data[i]}"
done
(data is made up of array elements that are a series of letters and numbers such as f1bfab2e-168a-4da7-9677-5018e5f97g0f )
I've referenced other Stack Overflow posts that provide solutions such as variable indirection or characters getting interpreted as octal numbers but I have been unable to resolve this error.
When using variable indirection such as
data=($( jq -r '.data' data.json | tr -d '[]," ' ))
for i in "${data[@]}"; do
echo "${data[${!i}]}"
done
"${data[${!i}]}" ends up only referencing the first array element for some reason. So if my array has two elements abc123 and bcd234, then what gets printed is just
abc123 abc123
instead of
abc123 bcd234.
I don't exactly understand why and what's going on there.
Additionally, I don't think that bash is interpreting any of my characters as octal numbers here so a solution for that situation does not apply to my case.
What significance does 168a have to bash?
|
[
"The i is already the item of the array, not the index, as you seem to believe. This should work:\nfor i in \"${data[@]}\"; do\n echo \"$i\"\ndone\n\n"
] |
[
2
] |
[] |
[] |
[
"bash",
"shell"
] |
stackoverflow_0074663530_bash_shell.txt
|
Q:
How to select multiple options with mat-radio-group
I am using mat-radio-group to have multiple radio buttons but I am unable to select/check multiple radio buttons (radio buttons are generated by iterating through an array). I have also tried coding it so that the radio buttons all have the same name but I am still unable to select multiple values.
<mat-radio-group [(ngModel)]="question.answered" fxFlex>
<mat-radio-button [disabled]="result" name="test" class="example-radio-button" [value]="answer"
*ngFor="let answers of question.getanswers">
{{answers}}
</mat-radio-button>
</mat-radio-group>
What is it I am doing wrong?
A:
Radio buttons are not supposed to be multi-selectable. You want some checkboxes instead.
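A minimal sketch with mat-checkbox, reusing the names from the question (onToggle is a hypothetical handler, and MatCheckboxModule must be imported):
<div *ngFor="let answer of question.getanswers">
  <mat-checkbox [disabled]="result" (change)="onToggle(answer, $event.checked)">
    {{ answer }}
  </mat-checkbox>
</div>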
A:
Add a [name] attribute to your mat-radio-button so that each item gets its own name, e.g. bound to the loop index:
[name]="i"
A:
I had a similar problem, and this worked for me: I stored the option in an array, keyed by the position of my "question" (my object has an attribute called position), so I set [(ngModel)]="testAnswerDtoList[question.position]" to store the value in the array. Then, in the value of the radio button, I pass a structured object with the values I need: {position, value}.
<div *ngFor="let question of recurso.testQuestions" style="padding-left: 5%">
<b>{{ question.position }}. {{ question.description }}</b>
<mat-radio-group *ngFor="let itemQuestion of question.itemQuestions"
[(ngModel)]="testAnswerDtoList[question.position]"
>
<div style="padding-left: 5%">
<mat-radio-button [value]="{
'testQuestionId': question.id,
'value':itemQuestion.position
}">
{{ itemQuestion.position }} {{ itemQuestion.description }}
</mat-radio-button>
</div>
</mat-radio-group>
<br />
</div>
TS:
testAnswerDtoList: TestAnswerDto[] = [];
I hope I have explained well the solution that worked for me.
|
How to select multiple options with mat-radio-group
|
I am using mat-radio-group to have multiple radio buttons but I am unable to select/check multiple radio buttons (radio buttons are generated by iterating through an array). I have also tried coding it so that the radio buttons all have the same name but I am still unable to select multiple values.
<mat-radio-group [(ngModel)]="question.answered" fxFlex>
<mat-radio-button [disabled]="result" name="test" class="example-radio-button" [value]="answer"
*ngFor="let answers of question.getanswers">
{{answers}}
</mat-radio-button>
</mat-radio-group>
What is it I am doing wrong?
|
[
"Radio buttons are not supposed to be multi-selectable. You want some checkboxes instead.\n",
"You add an attribute on your mat-radio-button only change the name for only item\n[name]=\"i\"\n",
"i have a similar problem, this result worked for me, i stored the option in the array used the position of my \"question\", my object has an attribute called position, so i use to set the [(ngModel)]=\"testAnswerDtoList[question . position]\" storing the value in an array, then in the value of the radiobutton, I pass a structured object with the values I need {position, value}.\n<div *ngFor=\"let question of recurso.testQuestions\" style=\"padding-left: 5%\">\n <b>{{ question.position }}. {{ question.description }}</b>\n <mat-radio-group *ngFor=\"let itemQuestion of question.itemQuestions\"\n [(ngModel)]=\"testAnswerDtoList[question.position]\"\n >\n <div style=\"padding-left: 5%\">\n <mat-radio-button [value]=\"{\n 'testQuestionId': question.id,\n 'value':itemQuestion.position\n }\">\n {{ itemQuestion.position }} {{ itemQuestion.description }}\n </mat-radio-button>\n </div>\n </mat-radio-group>\n <br />\n </div>\n\nTS:\ntestAnswerDtoList: TestAnswerDto[] = [];\nI hope I have explained well the solution that worked for me.\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"angular",
"ionic4"
] |
stackoverflow_0064206398_angular_ionic4.txt
|
Q:
Image processing and Computer Vision
I would like to use Smalltalk (Pharo) to better refactor my image processing and computer vision code/algorithms, written in other languages. I have not found a lot of examples online where Smalltalk is used for processing images (or video frames). I would like to know whether
i) there is an opencv/image/computer vision library available for Smalltalk that is easily installed or
ii) someone could give an example of how to access the pixel data in an image and threshold it using Smalltalk.
A:
For the first question, you can maybe write your own interface using FFI to the OpenCV C-API.
For the second question, I think it's easy to use ImageReadWriter formFromFileNamed: and then can use pixelValueAt: to read the value, threshold, and then write back by pixelValueAt:put:.
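As a rough, untested sketch of that idea in Pharo (colorAt:/colorAt:put: and Color>>luminance come from the stock image; the file name and the 0.5 threshold are made up):
| form |
form := ImageReadWriter formFromFileNamed: 'input.png'.
0 to: form width - 1 do: [ :x |
    0 to: form height - 1 do: [ :y |
        | c |
        c := form colorAt: x @ y.
        form
            colorAt: x @ y
            put: (c luminance > 0.5
                ifTrue: [ Color white ]
                ifFalse: [ Color black ]) ] ].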
A:
There is a recent binding to OpenCV (for Pharo 7 a.t.m.) at https://github.com/feenkcom/gt4opencv
A:
There is a Smalltalk library called "TensorFlow" that can be used for image and video processing tasks, including computer vision. You can install this library using the following instructions:
Open a new playground in Pharo by selecting "Playground" from the "World" menu in the top toolbar.
In the playground, evaluate the following expression to load the Metacello configuration for the TensorFlow library:
Metacello new
baseline: 'TensorFlow';
repository: 'github://pharo-ai/Pharo-TensorFlow:master/src';
load.
Once the TensorFlow library has been loaded, you can access the library's classes and methods by evaluating expressions in the playground, such as:
TFSession new runWithInputs: #() outputs: #(#output) feed: #(#input -> ((TFEagerTensor newWithShape: #(1 2) dataType: TFTensorDataTypeFloat values: #(1 2)))
To access the pixel data in an image and threshold it, you can use the TensorFlow library's TFEagerTensor class to represent the image as a tensor (a multi-dimensional array of numbers), and then use the threshold method to threshold the tensor. For example:
| tensor thresholdedTensor |
tensor := TFEagerTensor newWithShape: #(10 20 3) dataType: TFTensorDataTypeUInt8 values: #(...)
thresholdedTensor := tensor threshold: 127.
The thresholdedTensor variable will then contain a tensor representing the thresholded image, with pixel values of 0 for pixels below the threshold and 255 for pixels above the threshold. You can then use the tensor's values method to access the pixel data as an array of numbers, which you can then use to display the image or process further.
|
Image processing and Computer Vision
|
I would like to use Smalltalk (Pharo) to better refactor my image processing and computer vision code/algorithms, written in other languages. I have not found a lot of examples online where Smalltalk is used for processing images (or video frames). I would like to know whether
i) there is an opencv/image/computer vision library available for Smalltalk that is easily installed or
ii) someone could give an example of how to access the pixel data in an image and threshold it using Smalltalk.
|
[
"For the first question, you can maybe write your own interface using FFI to the OpenCV C-API.\nFor the second question, I think it's easy to use ImageReadWriter formFromFileNamed: and then can use pixelValueAt: to read the value, threshold, and then write back by pixelValueAt:put:.\n",
"There is a recent binding to OpenCV (for Pharo 7 a.t.m.) at https://github.com/feenkcom/gt4opencv\n",
"There is a Smalltalk library called \"TensorFlow\" that can be used for image and video processing tasks, including computer vision. You can install this library using the following instructions:\n\nOpen a new playground in Pharo by selecting \"Playground\" from the \"World\" menu in the top toolbar.\n\nIn the playground, evaluate the following expression to load the Metacello configuration for the TensorFlow library:\n\n\nMetacello new\n baseline: 'TensorFlow';\n repository: 'github://pharo-ai/Pharo-TensorFlow:master/src';\n load.\n\n\n\nOnce the TensorFlow library has been loaded, you can access the library's classes and methods by evaluating expressions in the playground, such as:\n\nTFSession new runWithInputs: #() outputs: #(#output) feed: #(#input -> ((TFEagerTensor newWithShape: #(1 2) dataType: TFTensorDataTypeFloat values: #(1 2)))\n\n\n\nTo access the pixel data in an image and threshold it, you can use the TensorFlow library's TFEagerTensor class to represent the image as a tensor (a multi-dimensional array of numbers), and then use the threshold method to threshold the tensor. For example:\n\n| tensor thresholdedTensor |\ntensor := TFEagerTensor newWithShape: #(10 20 3) dataType: TFTensorDataTypeUInt8 values: #(...)\nthresholdedTensor := tensor threshold: 127.\n\n\n\nThe thresholdedTensor variable will then contain a tensor representing the thresholded image, with pixel values of 0 for pixels below the threshold and 255 for pixels above the threshold. You can then use the tensor's values method to access the pixel data as an array of numbers, which you can then use to display the image or process further.\n\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"computer_vision",
"image_processing",
"pharo",
"pixels",
"smalltalk"
] |
stackoverflow_0036805051_computer_vision_image_processing_pharo_pixels_smalltalk.txt
|
Q:
In R how do I change the binwidth on my graph
ggplot(missense2)+
geom_bar(aes(x=Variant.start.in.translation..aa.),stat="count")+
theme_classic()+
xlab("amino acid position")+
What do I add or use to allow the binwidth of the graph to be changed?
ggplot(missense2)+
geom_bar(aes(x=Variant.start.in.translation..aa.),stat="count")+
theme_classic()+
xlab("amino acid position")+
The command (binwidth= "2") altered nothing no matter where placed in the code.
A:
You can use the parameter width
mtcars %>%
ggplot(aes(cyl))+
geom_bar()
mtcars %>%
ggplot(aes(cyl))+
geom_bar(width = .5)
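Note that binwidth belongs to geom_histogram rather than geom_bar, and it must be numeric, not the string "2". A sketch using the question's data (assuming missense2 is a data frame with that column):
library(ggplot2)

ggplot(missense2, aes(x = Variant.start.in.translation..aa.)) +
  geom_histogram(binwidth = 2) +
  theme_classic() +
  xlab("amino acid position")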
|
In R how do I change the binwidth on my graph
|
ggplot(missense2)+
geom_bar(aes(x=Variant.start.in.translation..aa.),stat="count")+
theme_classic()+
xlab("amino acid position")+
What do I add or use to allow the binwidth of the graph to be changed?
ggplot(missense2)+
geom_bar(aes(x=Variant.start.in.translation..aa.),stat="count")+
theme_classic()+
xlab("amino acid position")+
The command (binwidth= "2") altered nothing no matter where placed in the code.
|
[
"You can use the parameter width\nmtcars %>% \n ggplot(aes(cyl))+\n geom_bar()\n\n\nmtcars %>% \n ggplot(aes(cyl))+\n geom_bar(width = .5)\n\n\n"
] |
[
0
] |
[] |
[] |
[
"data_science",
"ggplot2",
"r"
] |
stackoverflow_0074663313_data_science_ggplot2_r.txt
|
Q:
User-controlled multilingual files inside a multi-tenancy Laravel application
I have a multi-tenancy application,
and now I am starting to add multiple languages to it ... but in my case
every tenant can change the translation files that usually live inside the lang directory.
So how can I implement these changes without affecting other tenants?!
I want to make it all dynamic from the tenant dashboard
A:
If you want to make it dynamic from the dashboard, then you are better off storing the translations in the database so that each tenant has its own set;
otherwise, you can store each tenant's files in a separate folder like lang/en/{TENANT_ID}/file.php, then you can call them like:
__(getTenantId() . '.file.word');
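A sketch of that folder-based approach with a fallback to the shared file (tenantTrans and getTenantId are hypothetical helpers; Laravel's __() returns the key itself when no translation exists):
// Look up a tenant-specific translation first, then fall back to the shared one.
function tenantTrans(string $key): string
{
    $tenantKey = getTenantId() . '.' . $key;   // e.g. "42.file.word"
    $value = __($tenantKey);

    return $value === $tenantKey ? __($key) : $value;
}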
|
User-controlled multilingual files inside a multi-tenancy Laravel application
|
I have a multi-tenancy application,
and now I am starting to add multiple languages to it ... but in my case
every tenant can change the translation files that usually live inside the lang directory.
So how can I implement these changes without affecting other tenants?!
I want to make it all dynamic from the tenant dashboard
|
[
"if you want to make it dynamic from the dashboard then you better store it in the database, and each tenant will have its own translations,\notherwise, you can store each tenant in a separate folder like lang/en/{TENANT_ID}/file.php, then you can call it like:\n__(getTenantId() . '.file.word');\n\n"
] |
[
0
] |
[] |
[] |
[
"laravel",
"multilingual"
] |
stackoverflow_0074663547_laravel_multilingual.txt
|
Q:
How do I remove the last character in some text being copied from one field to another using JavaScript or other methods?
I have a form that I created to make my work easier, and recently figured out how to make certain fields automatically generate a comma and a space after every 5 letters or numbers typed into them (CPT codes for medical claims that I have to look up), using the same coding you would use for putting spaces between numbers. I also have code here that forces letters to be capitalized, since I'm a bit OCD about that stuff for work:
<input name="checkbox24" type="checkbox" id="checkbox24" onClick="FillDetails24(this.form);" /><span style="background-color:yellow">CPT</span>
<input type = "text" size="8" class = "uc-text-smooth" name="textfield24" id="textfield24" />
<script language = "JavaScript">
const forceKeyPressUppercase = (e) => {
let el = e.target;
let charInput = e.keyCode;
if((charInput >=97) && (charInput <= 122)) { // lowercase
if(!e.ctrlKey && !e.metaKey && !e.altKey) { // no modifier key
let newChar = charInput - 32;
let start = el.selectionStart;
let end = el.selectionEnd;
el.value = el.value.substring(0, start) + String.fromCharCode(newChar) + el.value.substring(end);
el.setSelectionRange(start+1, start+1);
e.preventDefault();
}
}
};
document.querySelectorAll(".uc-text-smooth").forEach(function(current) {
current.addEventListener("keypress", forceKeyPressUppercase);
});
document.getElementById('textfield24').addEventListener('input', function (g) {
g.target.value = g.target.value.replace(/[^\dA-Z]/g, '').replace(/(.{5})/g, '$1, ').trim();
});
</script>
When I use the checkbox, it automatically generates pre-written text using the following JavaScript:
function FillDetails24(f) {
const elem = f.Reason;
const x = f.Action;
const y = f.Resolution;
f.Reason.value += ("Verify CPT " + f.textfield24.value + ". " + '\n');
f.Action.value += ("Adv on how to locate and use CPT lookup tool on plan website. Information provided in resolution. " + '\n');
f.Resolution.value += ("Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT " + f.textfield24.value + ". " + '\n' + '\n' );
}
However, because of the way that I put it together, the end result would be, "Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT V2020, 99213,. " The extra comma at the end is driving me nuts.
Since it was my first time using this
document.getElementById('textfield24').addEventListener('input', function (g) {
g.target.value = g.target.value.replace(/[^\dA-Z]/g, '').replace(/(.{5})/g, '$1, ').trim();
});
with this
function FillDetails24(f) {
const elem = f.Reason;
const x = f.Action;
const y = f.Resolution;
f.Reason.value += ("Verify CPT " + f.textfield24.value + ". " + '\n');
f.Action.value += ("Adv on how to locate and use CPT lookup tool on plan website. Information provided in resolution. " + '\n');
f.Resolution.value += ("Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT " + f.textfield24.value + ". " + '\n' + '\n' );
}
I'm not certain how I can code it to eliminate the last comma generated at the end when it pulls the value of textfield24.
This is a very long, complex html form I've coded by hand for fun and for personal use at work for 4 years with only a couple years of HTML training and a little bit of JavaScript I learned in high school forever ago and I've been busting my butt to make this work perfectly so that I only have to tweak the pre-written stuff when things change at work.
I'm at a loss on how to continue. Any suggestions?
A:
You can use substring to remove the last comma, but you have to make sure that the last comma is always there; otherwise, you're going to remove something else.
const example = "Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT V2020, 99213,. ";
const result = example.substring(0, example.lastIndexOf(",")) + ".";
console.log(result);
A:
You can replace it with the regex /,$/.
f = { textfield24: { value: 'V2020, 99213,'} }
console.log(f.textfield24.value);
console.log(f.textfield24.value.replace(/,$/, ''));
function FillDetails24(f) {
const elem = f.Reason;
const x = f.Action;
const y = f.Resolution;
f.Reason.value += ("Verify CPT " + f.textfield24.value + ". " + '\n');
f.Action.value += ("Adv on how to locate and use CPT lookup tool on plan website. Information provided in resolution. " + '\n');
f.Resolution.value += ("Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT " + f.textfield24.value.replace(/,$/, '') + ". " + '\n' + '\n' );
}
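A small variant of the regex approach that also tolerates whitespace after the trailing comma (the helper name is just for illustration):
// Strip a trailing comma and any whitespace after it at the end of the string.
const stripTrailingComma = (s) => s.replace(/,\s*$/, "");

console.log(stripTrailingComma("V2020, 99213,"));  // "V2020, 99213"
console.log(stripTrailingComma("V2020, 99213, ")); // "V2020, 99213"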
|
How do I remove the last character in some text being copied from one field to another using JavaScript or other methods?
|
I have a form that I created to make my work easier, and recently figured out how to make certain fields automatically generate a comma and a space after every 5 letters or numbers typed into them (CPT codes for medical claims that I have to look up), using the same coding you would use for putting spaces between numbers. I also have code here that forces letters to be capitalized, since I'm a bit OCD about that stuff for work:
<input name="checkbox24" type="checkbox" id="checkbox24" onClick="FillDetails24(this.form);" /><span style="background-color:yellow">CPT</span>
<input type = "text" size="8" class = "uc-text-smooth" name="textfield24" id="textfield24" />
<script language = "JavaScript">
const forceKeyPressUppercase = (e) => {
let el = e.target;
let charInput = e.keyCode;
if((charInput >=97) && (charInput <= 122)) { // lowercase
if(!e.ctrlKey && !e.metaKey && !e.altKey) { // no modifier key
let newChar = charInput - 32;
let start = el.selectionStart;
let end = el.selectionEnd;
el.value = el.value.substring(0, start) + String.fromCharCode(newChar) + el.value.substring(end);
el.setSelectionRange(start+1, start+1);
e.preventDefault();
}
}
};
document.querySelectorAll(".uc-text-smooth").forEach(function(current) {
current.addEventListener("keypress", forceKeyPressUppercase);
});
document.getElementById('textfield24').addEventListener('input', function (g) {
g.target.value = g.target.value.replace(/[^\dA-Z]/g, '').replace(/(.{5})/g, '$1, ').trim();
});
</script>
When I use the checkbox, it automatically generates pre-written text using the following JavaScript:
function FillDetails24(f) {
const elem = f.Reason;
const x = f.Action;
const y = f.Resolution;
f.Reason.value += ("Verify CPT " + f.textfield24.value + ". " + '\n');
f.Action.value += ("Adv on how to locate and use CPT lookup tool on plan website. Information provided in resolution. " + '\n');
f.Resolution.value += ("Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT " + f.textfield24.value + ". " + '\n' + '\n' );
}
However, because of the way that I put it together, the end result would be, "Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT V2020, 99213,. " The extra comma at the end is driving me nuts.
Since it was my first time using this
document.getElementById('textfield24').addEventListener('input', function (g) {
g.target.value = g.target.value.replace(/[^\dA-Z]/g, '').replace(/(.{5})/g, '$1, ').trim();
});
with this
function FillDetails24(f) {
const elem = f.Reason;
const x = f.Action;
const y = f.Resolution;
f.Reason.value += ("Verify CPT " + f.textfield24.value + ". " + '\n');
f.Action.value += ("Adv on how to locate and use CPT lookup tool on plan website. Information provided in resolution. " + '\n');
f.Resolution.value += ("Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT " + f.textfield24.value + ". " + '\n' + '\n' );
}
I'm not certain how I can code it to eliminate the last comma generated at the end when it pulls the value of textfield24.
This is a very long, complex html form I've coded by hand for fun and for personal use at work for 4 years with only a couple years of HTML training and a little bit of JavaScript I learned in high school forever ago and I've been busting my butt to make this work perfectly so that I only have to tweak the pre-written stuff when things change at work.
I'm at a loss on how to continue. Any suggestions?
|
[
"you can use substring to remove the last comma, but you have to make sure that the last comma always be there. otherwise, you're gonna remove something else.\n\n\nconst example = \"Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT V2020, 99213,. \";\n\nconst result = example.substring(0, example.lastIndexOf(\",\")) + \".\";\n\nconsole.log(result);\n\n\n\n",
"You can replace with regex /,$/.\n\n\nf = { textfield24: { value: 'V2020, 99213,'} }\nconsole.log(f.textfield24.value);\nconsole.log(f.textfield24.value.replace(/,$/, ''));\n\n\n\nfunction FillDetails24(f) {\n const elem = f.Reason;\n const x = f.Action;\n const y = f.Resolution;\n f.Reason.value += (\"Verify CPT \" + f.textfield24.value + \". \" + '\\n');\n f.Action.value += (\"Adv on how to locate and use CPT lookup tool on plan website. Information provided in resolution. \" + '\\n');\n f.Resolution.value += (\"Adv on how to locate and use CPT lookup tool on plan website. Caller is looking to verify CPT \" + f.textfield24.value.replace(/,$/, '') + \". \" + '\\n' + '\\n' );\n\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"html",
"javascript",
"textfield"
] |
stackoverflow_0074663514_html_javascript_textfield.txt
|
Q:
`git pull origin main --rebase`, but also update local `main` branch
I like the succinctness of git pull origin main --rebase, but it leaves my main branch outdated. This can cause issues, as I like running git rebase -i main to do autosquashing.
A:
The most succinct command is:
git pull origin main:main --rebase
You could also run this to update your local main branch: [1]
git fetch origin main:main
Then:
git rebase -i main
If you want a memorable command, add this alias to your ~/.gitconfig:
[alias]
pluto = "!runit() {\
git pull origin $1:$1 --rebase; \
}; runit"
Then run:
git pluto main
[1]: yes origin is required
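If you prefer not to edit ~/.gitconfig by hand, the equivalent alias can be created from the command line:
git config --global alias.pluto '!f() { git pull origin "$1":"$1" --rebase; }; f'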
|
`git pull origin main --rebase`, but also update local `main` branch
|
I like the succinctness of git pull origin main --rebase, but it leaves my main branch outdated. This can cause issues, as I like running git rebase -i main to do autosquashing.
|
[
"The most succinct command is:\ngit pull origin main:main --rebase\n\nYou could also run this to update your local main branch: [1]\ngit fetch origin main:main\n\nThen:\ngit rebase -i main\n\nIf you want a memorable command, add this alias to you ~/.gitconfig:\n[alias]\n pluto = \"!runit() {\\\n git pull origin $1:$1 --rebase; \\\n }; runit\"\n\nThen run:\ngit pluto main\n\n[1]: yes origin is required\n"
] |
[
1
] |
[] |
[] |
[
"git",
"git_rebase"
] |
stackoverflow_0074663569_git_git_rebase.txt
|
Q:
Computer Vision
I am new to computer vision. I am trying to extract text from video frames and images. Most of the code provided on GitHub is only compatible with Python versions below 3. Any ideas on how to proceed and where to find related code and good papers?
Note: I have already implemented pytesseract-OCR and I haven't gotten good results.
From this image, I have to extract acer
A:
Hello TISHANT CHANDRAKAR.
First, you must understand how a text recognizer works:
1. Text-containing regions are extracted from the image
2. Text is recognized for each region
3. The text of all regions is combined to form the final result
Tesseract itself works very well for step 2. But for step 1, it only works well with text in documents. In computer vision, step 1 is called "scene text detection". So your next step is to find some good code, or a paper, that does scene text detection.
If you want to learn and read papers, there is a list here: Scene text detection list. But in my opinion, the text in your image is white text on a black background, so a simple color threshold could solve step 1 easily.
Hope that helps
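A minimal Python sketch of that simple-threshold idea, feeding the binarized image back into Tesseract (the file name and the 127 threshold are placeholders):
import cv2
import pytesseract

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
# White text on a dark background separates cleanly with a fixed threshold.
_, binary = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
text = pytesseract.image_to_string(binary, config="--psm 7")  # single text line
print(text.strip())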
A:
There are many repositories out there for text detection & recognition. Tesseract is not a bad one, but you need to configure it correctly, e.g. by choosing the right oem and psm params; follow the link below for more best practices:
https://ai-facets.org/tesseract-ocr-best-practices/
On your example image, what matters is the text detection and pre-processing steps, like deskew correction; you can check the OpenCV examples
A:
There are several approaches to extracting text from images, and the approach that works best will depend on the quality of the input image and the specific requirements of your use case. In general, the first step in text extraction from images is to perform preprocessing on the input image to improve the quality of the image and make it easier for the text extraction algorithm to identify the text. This can include steps such as denoising the image, correcting for perspective distortion, and enhancing the contrast.
Once the input image has been preprocessed, there are several techniques that can be used to extract the text. Some common techniques include:
Optical character recognition (OCR): OCR algorithms use machine learning techniques to identify and recognize text in images. OCR algorithms are generally trained on large datasets of labeled text images, and they learn to identify and classify text based on the visual features of the text. OCR algorithms can be effective at extracting text from images, but they can be sensitive to variations in font, size, and other visual features of the text.
Feature-based approaches: In some cases, it may be possible to extract text from images using feature-based approaches that identify specific visual features of the text. For example, algorithms can be trained to identify the edges of letters, the angles at which lines intersect, or other visual features that can be used to segment and classify the text. These algorithms can be effective at extracting text from images with relatively simple layouts, but they can be less effective when the text is cluttered or overlapping.
Hybrid approaches: Many text extraction algorithms use a combination of OCR and feature-based approaches to extract text from images. For example, an algorithm might first use OCR to identify the general location of the text in the image, and then use feature-based techniques to segment and classify the text within that region. Hybrid approaches can be effective at extracting text from a wide range of images, but they can also be more complex and computationally intensive.
In terms of specific libraries and resources for text extraction in Python, there are several options available. Some popular libraries include:
PyTesseract: PyTesseract is a wrapper for the Tesseract OCR engine, which is an open-source OCR engine developed by Google. PyTesseract can be used to extract text from images in a variety of languages, and it includes support for image preprocessing and post-processing to improve the accuracy of the OCR results.
OpenCV: OpenCV is an open-source computer vision library that includes many algorithms for image processing and computer vision, including text extraction. OpenCV can be used to perform a wide range of tasks, including preprocessing, feature extraction, and text recognition.
scikit-image: scikit-image is a Python library for image processing that includes algorithms for text extraction and other image analysis tasks. scikit-image includes functions for preprocessing, feature extraction, and classification, and it can be used in combination with other machine-learning libraries to build text extraction systems.
In terms of papers and other resources, there are many research papers and tutorials available on text extraction from images. Some good starting points might include the following:
The OpenCV documentation
|
Computer Vision
|
I am new to computer vision. I am trying to extract text from video frames and images. Most of the code provided on GitHub is only compatible with Python versions below 3. Any ideas on how to proceed and where to find related code and good papers?
Note: I have already implemented pytesseract-OCR and I haven't gotten good results.
From this image, I have to extract acer
|
[
"Hello TISHANT CHANDRAKAR.\nAt first, you must understand how text recognizer works.\n1. have-text-region is extracted from the image\n2. we recognize text for each region\n3. Combine the text of all regions to form final result\n\nTesseract itself work very well for step 2. But for step 1, it only work well with text in document. In computer vision, step 1 is called \"Scene text detection\". So your next step is find some good code, or paper which could do \"scene text detection\".\nIf you want to learn and read paper there is a list here Scene text detection list . But in my opinion, the text in you image is white text on black background so a simple color threshold could solve step 1 easily.\nHope that help\n",
"There are many repositories out there for text detection&recognition, the tesseract is not a bad one but you need to configure it correctly like identifying oem, psm params follow the link below for more best practice\nhttps://ai-facets.org/tesseract-ocr-best-practices/\nOn your example image, what matter is text detection and pre-processing steps, like deskew correction, you can check OpenCV examples\n",
"There are several approaches to extracting text from images, and the approach that works best will depend on the quality of the input image and the specific requirements of your use case. In general, the first step in text extraction from images is to perform preprocessing on the input image to improve the quality of the image and make it easier for the text extraction algorithm to identify the text. This can include steps such as denoising the image, correcting for perspective distortion, and enhancing the contrast.\nOnce the input image has been preprocessed, there are several techniques that can be used to extract the text. Some common techniques include:\n\nOptical character recognition (OCR): OCR algorithms use machine learning techniques to identify and recognize text in images. OCR algorithms are generally trained on large datasets of labeled text images, and they learn to identify and classify text based on the visual features of the text. OCR algorithms can be effective at extracting text from images, but they can be sensitive to variations in font, size, and other visual features of the text.\n\nFeature-based approaches: In some cases, it may be possible to extract text from images using feature-based approaches that identify specific visual features of the text. For example, algorithms can be trained to identify the edges of letters, the angles at which lines intersect, or other visual features that can be used to segment and classify the text. These algorithms can be effective at extracting text from images with relatively simple layouts, but they can be less effective when the text is cluttered or overlapping.\n\nHybrid approaches: Many text extraction algorithms use a combination of OCR and feature-based approaches to extract text from images. For example, an algorithm might first use OCR to identify the general location of the text in the image, and then use feature-based techniques to segment and classify the text within that region. Hybrid approaches can be effective at extracting text from a wide range of images, but they can also be more complex and computationally intensive.\n\n\nIn terms of specific libraries and resources for text extraction in Python, there are several options available. Some popular libraries include:\n\nPyTesseract: PyTesseract is a wrapper for the Tesseract OCR engine, which is an open-source OCR engine developed by Google. PyTesseract can be used to extract text from images in a variety of languages, and it includes support for image preprocessing and post-processing to improve the accuracy of the OCR results.\n\nOpenCV: OpenCV is an open-source computer vision library that includes many algorithms for image processing and computer vision, including text extraction. OpenCV can be used to perform a wide range of tasks, including preprocessing, feature extraction, and text recognition.\n\nscikit-image: scikit-image is a Python library for image processing that includes algorithms for text extraction and other image analysis tasks. scikit-image includes functions for preprocessing, feature extraction, and classification, and it can be used in combination with other machine-learning libraries to build text extraction systems.\n\n\nIn terms of papers and other resources, there are many research papers and tutorials available on text extraction from images. Some good starting points might include the following:\n\nThe OpenCV documentation\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"computer_vision",
"text_extraction"
] |
stackoverflow_0050566726_computer_vision_text_extraction.txt
|
Q:
ASP.Net Identity Login Redirect Enforce Protocol (Https) Part 2 (.Net 6++)
Prior reference (.Net Framework/ASP.Net MVC): ASP.Net Identity Login Redirect Enforce Protocol (Https)
It seems this is still an "issue" in .Net 6+. There are cases where the return url constructed by the infrastructure results in an http scheme/protocol instead of https for oauth/external logins (Google, etc). This obviously fails because it must be https.
While I haven't gone deep into things, because I haven't found the source code for it (yet?), it's likely the same "issue" - at the app level, it doesn't "see" a https request (because SSL is offloaded somewhere) and therefore the url created "matches" the scheme/protocol, resulting in an http redirect url.
End of day, whatever hosting infrastructure/configuration my host has in place is beyond my control. Therefore, the ultimate goal is to force https (hard code, skip/override whatever scheme/protocol check/eval is in place).
There's nothing special in my setup and it's working fine in local/dev (https) testing. It's only when the application is finally hosted (production):
In startup program.cs this is the only related code I have for external login (along with the scaffolding/templates of the identity package):
builder.Services.AddDefaultIdentity<ApplicationUser>(options => options.SignIn.RequireConfirmedAccount = true)
.AddEntityFrameworkStores<my_db_context>();
builder.Services.AddAuthentication().AddGoogle(goog =>
{
goog.ClientId = builder.Configuration["GoogleAuthClientId"];
goog.ClientSecret = builder.Configuration["GoogleAuthClientSecret"];
});
The issue:
the origin is https
the redirect uri sent to Google Auth is http - this will always fail
Can anyone point me to relevant docs/source on how to add/override options in .Net 6 and above? (similar to prior implementations in .Net Framework/MVC)?
A:
The answer is in the comment by @Tratcher:
Official Ref: Configure ASP.NET Core to work with proxy servers and load balancers
Essentially: ForwardedHeadersMiddleware
For my specific case:
In some cases, it might not be possible to add forwarded headers to
the requests proxied to the app. If the proxy is enforcing that all
public external requests are HTTPS, the scheme can be manually set
before using any type of middleware:
...

app.Use((context, next) =>
{
    context.Request.Scheme = "https";
    return next(context);
});

...
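When the proxy does forward the headers, the standard ForwardedHeadersMiddleware setup from the linked docs looks roughly like this in a .NET 6 Program.cs:
using Microsoft.AspNetCore.HttpOverrides;

builder.Services.Configure<ForwardedHeadersOptions>(options =>
{
    options.ForwardedHeaders =
        ForwardedHeaders.XForwardedFor | ForwardedHeaders.XForwardedProto;
});

var app = builder.Build();
app.UseForwardedHeaders(); // must run before authentication/redirect middleware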
|
ASP.Net Identity Login Redirect Enforce Protocol (Https) Part 2 (.Net 6++)
|
Prior reference (.Net Framework/ASP.Net MVC): ASP.Net Identity Login Redirect Enforce Protocol (Https)
It seems this is still an "issue" in .Net 6+. There are cases where the return url constructed by the infrastructure results in an http scheme/protocol instead of https for oauth/external logins (Google, etc). This obviously fails because it must be https.
While I haven't gone deep into things, because I haven't found the source code for it (yet?), it's likely the same "issue" - at the app level, it doesn't "see" a https request (because SSL is offloaded somewhere) and therefore the url created "matches" the scheme/protocol, resulting in an http redirect url.
End of day, whatever hosting infrastructure/configuration my host has in place is beyond my control. Therefore, the ultimate goal is to force https (hard code, skip/override whatever scheme/protocol check/eval is in place).
There's nothing special in my setup and it's working fine in local/dev (https) testing. It's only when the application is finally hosted (production):
In startup program.cs this is the only related code I have for external login (along with the scaffolding/templates of the identity package):
builder.Services.AddDefaultIdentity<ApplicationUser>(options => options.SignIn.RequireConfirmedAccount = true)
.AddEntityFrameworkStores<my_db_context>();
builder.Services.AddAuthentication().AddGoogle(goog =>
{
goog.ClientId = builder.Configuration["GoogleAuthClientId"];
goog.ClientSecret = builder.Configuration["GoogleAuthClientSecret"];
});
The issue:
the origin is https
the redirect uri sent to Google Auth is http - this will always fail
Can anyone point me to relevant docs/source on how to add/override options in .Net 6 and above? (similar to prior implementations in .Net Framework/MVC)?
|
[
"The answer is in the comment by @Tratcher:\nOfficial Ref: Configure ASP.NET Core to work with proxy servers and load balancers\nEssentially: ForwardedHeadersMiddleware\nFor my specific case:\n\nIn some cases, it might not be possible to add forwarded headers to\nthe requests proxied to the app. If the proxy is enforcing that all\npublic external requests are HTTPS, the scheme can be manually set\nbefore using any type of middleware:\n...\n\napp.Use((context, next) => {\ncontext.Request.Scheme = \"https\";\nreturn next(context); });\n\n...\n\n\n"
] |
[
0
] |
[] |
[] |
[
".net_6.0",
"asp.net_core",
"asp.net_identity"
] |
stackoverflow_0074630363_.net_6.0_asp.net_core_asp.net_identity.txt
|
Q:
How to create a custom sitemap of multiple subdomains?
How can I achieve a certain well-structured layout in the Google search results, as presented below? I am working with multiple WordPress instances on several subdomains (not a multisite). Google does not get along with the sitemap structure, so all sites are scattered around. Any ideas?
In the search results it should look like this (basically like any other well structured sitemap):
MAINPAGE.COM
(Meta text)
---- SUB.MAINPAGE.COM
---- SUB.MAINPAGE.COM
---- SUB.MAINPAGE.COM
I messed around with Yoast but I did not reach any results.
A:
Messed around with Yoast? You just need to install Yoast on all instances. Submit the sitemaps generated by Yoast from all sites to Search Console.
Positioning the results in Google SERPs is not driven by sitemap. It
is driven by Google's algorithmic analysis and rankings. For one
search query, it is possible that Google finds a page from Instance B
better than instance C page or other site etc.
Under Google Search Console > Indexing > check the issues you're having with indexing. Do this for each instance property OR if you've only one property for ALL sub-domains, just check the Indexing/Coverage issues.
Additional Notes:
WordPress 5+ comes with built-in sitemap functionality. If you're having issues with Yoast sitemap generation, try that as a fallback, although Yoast sitemaps are better than the built-in ones. Either way, submit them for each WordPress instance to Search Console to let Google discover pages from your submitted sitemaps.
Plus: if you want Google to analyze pages from all of your sub-domains, verify your Search Console property via DNS. That lets Google cover the main site and all of its sub-domains under the same property, including the https & http versions.
|
How to create a custom sitemap of multiple subdomains?
|
How can I achieve a certain well-structured layout in the Google search results, as presented below? I am working with multiple WordPress instances on several subdomains (not a multisite). Google does not get along with the sitemap structure, so all sites are scattered around. Any ideas?
In the search results it should look like this (basically like any other well structured sitemap):
MAINPAGE.COM
(Meta text)
---- SUB.MAINPAGE.COM
---- SUB.MAINPAGE.COM
---- SUB.MAINPAGE.COM
I messed around with Yoast but I did not reach any results.
|
[
"Messed around with Yoast? You just need to install Yoast on all instances. Submit the sitemaps generated by Yoast from all sites to Search Console.\n\nPositioning the results in Google SERPs is not driven by sitemap. It\nis driven by Google's algorithmic analysis and rankings. For one\nsearch query, it is possible that Google finds a page from Instance B\nbetter than instance C page or other site etc.\n\nUnder Google Search Console > Indexing > check the issues you're having with indexing. Do this for each instance property OR if you've only one property for ALL sub-domains, just check the Indexing/Coverage issues.\n\nAdditional Notes:\nWordpress 5+ comes with built-in sitemap functionality. If you're having issues with Yoast sitemap generation, try that as a fallback. Yoast sitemaps are better than built-in ones. Anyway. Submit them for each WordPress instances to Search Console to let Google discover pages from your submitted sitemaps.\nPlus. If you want Google to analyze pages from all of your sub-domains, verify Search Console via DNS property. It acknowledges Google to explore through all main site and its sub-domains under the same property including https & http versions.\n"
] |
[
0
] |
[] |
[] |
[
"google_search",
"seo",
"sitemap",
"wordpress",
"yoast"
] |
stackoverflow_0074633739_google_search_seo_sitemap_wordpress_yoast.txt
|
Q:
Parse colon separated values contained in a bash variable
Have done this in the past but I'm hitting a wall ...
There is a variable containing a list of words separated by colons, ":". A similar format to /etc/passwd, or other config file entries.
wordList="dog:frog:cat:bat"
Need to parse this list into an array so the elements can be referenced individually. Something like
animal=$wordArray[2]
Which should assign "cat" into the variable animal.
Have tried
IFS=:
read wordList < echo $wordList
But that is file redirection, so it doesn't accomplish it.
Could do it with a while but something must be more elegant.
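A minimal sketch of the read-based approach: read -a splits on IFS into an array, and the <<< here-string avoids any file redirection:
wordList="dog:frog:cat:bat"

IFS=':' read -r -a wordArray <<< "$wordList"

animal=${wordArray[2]}   # "cat" -- bash arrays are zero-indexed
echo "$animal"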
|
Parse colon separated values contained in a bash variable
|
Have done this in the past but I'm hitting a wall ...
There is a variable containing a list of words separated by colons, ":". A similar format to /etc/passwd, or other config file entries.
wordList="dog:frog:cat:bat"
Need to parse this list into an array so the elements can be referenced individually. Something like
animal=$wordArray[2]
Which should assign "cat" into the variable animal.
Have tried
IFS=:
read wordList < echo $wordList
But that is file redirection, so it doesn't accomplish it.
Could do it with a while but something must be more elegant.
|
[] |
[] |
[
"You can use array=(${wordList//:/ }), e.g.:\nwordList=\"dog:frog:cat:bat\"\narray=(${wordList//:/ })\n\nfor i in ${array[@]}; do\n echo $i;\ndone\n\ngives:\ndog\nfrog\ncat\nbat\n\nOr:\nIFS=':' && W=\"dog:frog:cat:bat\" && for i in $W; do echo $i; done\n\n"
] |
[
-1
] |
[
"arrays",
"bash",
"colon",
"parsing",
"string"
] |
stackoverflow_0074663540_arrays_bash_colon_parsing_string.txt
|
Q:
I am trying to figure out a grading system and can't seem to get it to work (python)
I have a problem which I am trying to solve and can't for the life of me figure out. I feel like it's the simplest answer, and yet I'm still stuck.
The instructions stated that the application must do the following:
Ask the user to input the marks for the five subjects in a list/array.
The program must ensure that the marks are between 0 and 100
Display the list/array of marks entered.
Find the sum of all the marks in the list (all five subjects) and display the output as:
The sum of your marks is: [sum]
Find the average of all the marks in the list (all five subjects) and display the output as:
The average of your marks is: [average mark]
this is what i have tried
print("please enter your 5 marks below")
# read 5 inputs
mark1 = int(input("enter mark 1: "))
if mark1 <= 0 or mark1 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark1 = int(input("enter mark 1: "))
mark2 = int(input("enter mark 2: "))
if mark2 <= 0 or mark2 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark2 = int(input("enter mark 2: "))
mark3 = int(input("enter mark 3: "))
if mark3 <= 0 or mark3 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark3 = int(input("enter mark 3: "))
mark4 = int(input("enter mark 4: "))
if mark4 <= 0 or mark4 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark4 = int(input("enter mark 4: "))
mark5 = int(input("enter mark 5: "))
if mark5 <= 0 or mark5 <= 100:
print("Mark is acceptable")
# create array/list with five marks
marksList = [mark1, mark2, mark3, mark4, mark5]
# print the array/list
print(marksList)
# calculate the sum and average
sumOfMarks = sum(marksList)
averageOfMarks = sum(marksList) / 5
# display results
print("The sum of your marks is: " + str(sumOfMarks))
print("The average of your marks is: " + str(averageOfMarks))
A:
Firstly, you can use a while loop so that you don't have to manually copy & paste for each mark. e.g.:
marksList = []
i = 1
while len(marksList) < 5:
mark = int(input(f"Input mark {i}"))
if 0 <= mark <= 100:
print("Mark is acceptable")
marksList.append(mark)
i += 1
else:
print("Mark is not acceptable")
The above while loop keeps asking for input until five valid marks have been collected.
The rest should be simple:
print(marksList)
total = sum(marksList)            # built-in sum(); avoid naming a variable sum, which shadows the builtin
average = total / len(marksList)  # Python has no built-in average(); statistics.mean() also works
print(...)
Edit: If you wish to add some error handling, you can use try-except, like this:
# ... (same code)
while len(marksList) < 5:
try:
mark = int(input(f"Input mark {i}"))
except ValueError:
print("Please enter an integer.")
continue
# ... (same code)
A:
This is really unorthodox code. If you need to get a series of inputs, use a loop and after each iteration, store the value into a list.
marks = []
for i in range(1, 6): # needs to be 1 to 6 since the 6 won't be included but the 1 will
    mark = int(input(f"Enter mark number {i}: "))
    while mark < 0 or mark > 100:
        print("Invalid mark")
        mark = int(input(f"Enter mark number {i}: "))
    marks.append(mark)  # append the validated mark to the list
sums = sum(marks)
print(f"The sum of your marks is {sums}")
print(f"The average of your marks is {sums/5}")
What did I change?
I made the print statements into formatted strings for more clarity. I also used a loop to get inputs instead of just a series of inputs.
A:
Use a while loop combined with try/except to revalidate user input. You can use the walrus operator (:=) to capture each input and check whether the value is within range():
print("please enter your 5 marks below")
marks, nth= list(), 1
while len(marks) < 5:
try:
if not (mark := int(input(f"enter mark {nth}: "))) in range(101):
print("Mark is not acceptable")
else:
print("Mark is acceptable")
marks += [mark]
nth += 1
except ValueError:
print("Mark is not acceptable")
print("Your marks: ", *marks)
print("The sum of your marks is: ", sum(marks))
print("The average of your marks is: ", sum(marks)/len(marks))
Output:
please enter your 5 marks below
enter mark 1: no
Mark is not acceptable
enter mark 1: 67
Mark is acceptable
enter mark 2: 0
Mark is acceptable
enter mark 3:
Mark is not acceptable
enter mark 3: 56
Mark is acceptable
enter mark 4: 101
Mark is not acceptable
enter mark 4: 75
Mark is acceptable
enter mark 5: 81
Mark is acceptable
Your marks: 67 0 56 75 81
The sum of your marks is: 279
The average of your marks is: 55.8
|
I am trying to figure out a grading system and can't seem to get it to work (python)
|
I have a problem which I am trying to solve and can't for the life of me figure out. I feel like it's the simplest answer, and yet I'm still stuck.
The instructions stated that the application must do the following:
Ask the user to input the marks for the five subjects in a list/array.
The program must ensure that the marks are between 0 and 100
Display the list/array of marks entered.
Find the sum of all the marks in the list (all five subjects) and display the output as:
The sum of your marks is: [sum]
Find the average of all the marks in the list (all five subjects) and display the output as:
The average of your marks is: [average mark]
this is what i have tried
print("please enter your 5 marks below")
# read 5 inputs
mark1 = int(input("enter mark 1: "))
if mark1 <= 0 or mark1 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark1 = int(input("enter mark 1: "))
mark2 = int(input("enter mark 2: "))
if mark2 <= 0 or mark2 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark2 = int(input("enter mark 2: "))
mark3 = int(input("enter mark 3: "))
if mark3 <= 0 or mark3 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark3 = int(input("enter mark 3: "))
mark4 = int(input("enter mark 4: "))
if mark4 <= 0 or mark4 <= 100:
print("Mark is acceptable")
else:
print("Mark is not acceptable")
mark4 = int(input("enter mark 4: "))
mark5 = int(input("enter mark 5: "))
if mark5 <= 0 or mark5 <= 100:
print("Mark is acceptable")
# create array/list with five marks
marksList = [mark1, mark2, mark3, mark4, mark5]
# print the array/list
print(marksList)
# calculate the sum and average
sumOfMarks = sum(marksList)
averageOfMarks = sum(marksList) / 5
# display results
print("The sum of your marks is: " + str(sumOfMarks))
print("The average of your marks is: " + str(averageOfMarks))
|
[
"Firstly, you can use a while loop so that you don't have to manually copy & paste for each mark. e.g.:\nmarksList = []\ni = 1\n\nwhile len(marksList) < 5:\n mark = int(input(f\"Input mark {i}\"))\n if 0 <= mark <= 100:\n print(\"Mark is acceptable\")\n marksList.append(mark)\n i += 1\n else:\n print(\"Mark is not acceptable\")\n\nThe above while loop keeps iterating through the loop unless the condition is met.\nThe rest should be simple:\nprint(marksList)\n\nsum = sum(marksList) # sum\naverage = average(marksList) # there is a built-in average() function!\n\nprint(...)\n\nEdit: If you wish to add some error handling, you can use try-except, like this:\n# ... (same code)\nwhile len(marksList) < 5:\n try:\n mark = int(input(f\"Input mark {i}\"))\n except ValueError:\n print(\"Please enter an integer.\")\n continue\n # ... (same code)\n\n",
"This is really unorthodox code. If you need to get a series of inputs, use a loop and after each iteration, store the value into a list.\nmarks = []\nfor i in range(1, 6): # needs to be 1 to 6 since the 6 won't be included but the 1 will \n mark = int(input(f\"Enter mark number {i}: \")\n while mark<0 or mark>100:\n print(\"Invalid mark\")\n mark = int(input(f\"Enter mark number {i}: \")\n marks+=mark\n\n\nsums = sum(marks)\nprint(f\"The sum of your marks is {sums}\")\nprint(f\"The average of your marks is {sums/5}\")\n\nWhat did I change?\nI made the print statements into formatted strings for more clarity. I also used a loop to get inputs instead of just a series of inputs.\n",
"Use while loop combine with try except to revalidate user input. You can use walrus operator (:=) to ask for each input and check if the value within range()\nprint(\"please enter your 5 marks below\")\nmarks, nth= list(), 1\nwhile len(marks) < 5:\n try:\n if not (mark := int(input(f\"enter mark {nth}: \"))) in range(101):\n print(\"Mark is not acceptable\")\n else:\n print(\"Mark is acceptable\")\n marks += [mark]\n nth += 1\n except ValueError:\n print(\"Mark is not acceptable\")\n\nprint(\"Your marks: \", *marks)\nprint(\"The sum of your marks is: \", sum(marks))\nprint(\"The average of your marks is: \", sum(marks)/len(marks))\n\nOutput:\nplease enter your 5 marks below\nenter mark 1: no\nMark is not acceptable\nenter mark 1: 67\nMark is acceptable\nenter mark 2: 0\nMark is acceptable\nenter mark 3: \nMark is not acceptable\nenter mark 3: 56\nMark is acceptable\nenter mark 4: 101\nMark is not acceptable\nenter mark 4: 75\nMark is acceptable\nenter mark 5: 81\nMark is acceptable\nYour marks: 67 0 56 75 81\nThe sum of your marks is: 279\nThe average of your marks is: 55.8\n\n"
] |
[
0,
0,
0
] |
[
"We can use the map function for the input. Map syntax looks like this: map(function, iter). By replacing function with int. We apply int to each element in input().split()\nAssuming all marks have to be between 0-100, we can use all() which checks if all items in a list are True. If x == True, we then print our results.\nn = list(map(int, input().split()))\nx = all(i >= 0 and i <= 100 for i in n)\nif x == True:\n print('The sum of your marks is: ', sum(n))\n print('The average of your marks is:', sum(n) / 5)\n\nIf you want to keep trying till you meet the requirement, you can use try/except/else. As we can see below, we raise an exception if x == False(This can be custom) and keep trying till x == True and only then print(..)\nwhile True:\n try:\n n = list(map(int, input().split()))\n x = all(i >= 0 and i <= 100 for i in n)\n if x == False:\n raise Exception\n except Exception:\n pass\n else:\n print('The sum of your marks is: ', sum(n))\n print('The average of your marks is:', sum(n) / 5)\n break\n\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074663209_python.txt
|
Q:
How to add a cooldown time in between commands so that users can't spam my bot with commands
Like the title says. I need to make a way to force users to wait maybe 15 or 30 seconds between commands. So if they run it again it will let them know how much longer they need to wait.
A:
I figured it out by referencing the following code:
from time import time
MAX_USAGE = 5
async def callback(update: Update, context: ContextTypes.DEFAULT_TYPE):
count = context.user_data.get("usageCount", 0)
restrict_since = context.user_data.get("restrictSince", 0)
if restrict_since:
if (time() - restrict_since) >= 60 * 5: # 5 minutes
del context.user_data["restrictSince"]
del context.user_data["usageCount"]
await update.effective_message.reply_text("I have unrestricted you. Please behave well.")
else:
await update.effective_message.reply_text("Back off! Wait for your restriction to expire...")
raise ApplicationHandlerStop
else:
if count == MAX_USAGE:
context.user_data["restrictSince"] = time()
await update.effective_message.reply_text("Stop flooding! Don't bother me for 5 minutes...")
raise ApplicationHandlerStop
else:
context.user_data["usageCount"] = count + 1
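For reference, a sketch of how such a callback is typically registered so it runs before the real command handlers (the token string is a placeholder):
from telegram import Update
from telegram.ext import Application, TypeHandler

application = Application.builder().token("BOT_TOKEN").build()
# Group -1 runs first; raising ApplicationHandlerStop there blocks later groups.
application.add_handler(TypeHandler(Update, callback), group=-1)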
|
How to add a cooldown time in between commands so that users can't spam my bot with commands
|
Like the title says. I need to make a way to force users to wait maybe 15 or 30 seconds between commands. So if they run it again it will let them know how much longer they need to wait.
|
[
"I figured it out by referencing the following code:\nfrom time import time\n\nMAX_USAGE = 5\n\n\nasync def callback(update: Update, context: ContextTypes.DEFAULT_TYPE):\n count = context.user_data.get(\"usageCount\", 0)\n restrict_since = context.user_data.get(\"restrictSince\", 0)\n\n if restrict_since:\n if (time() - restrict_since) >= 60 * 5: # 5 minutes\n del context.user_data[\"restrictSince\"]\n del context.user_data[\"usageCount\"]\n await update.effective_message.reply_text(\"I have unrestricted you. Please behave well.\")\n else:\n await update.effective_message.reply_text(\"Back off! Wait for your restriction to expire...\")\n raise ApplicationHandlerStop\n else:\n if count == MAX_USAGE:\n context.user_data[\"restrictSince\"] = time()\n await update.effective_message.reply_text(\"Stop flooding! Don't bother me for 5 minutes...\")\n raise ApplicationHandlerStop\n else:\n context.user_data[\"usageCount\"] = count + 1\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_telegram_bot"
] |
stackoverflow_0074661186_python_python_telegram_bot.txt
|
Q:
How to parse/encode binary message formats?
I need to parse and encode to a legacy binary message format in Java. I began by using DataOutputStream to read/write primitive types but the problem I'm having is that the message format doesn't align nicely to byte offsets and includes bit flags.
For example I have to deal with messages like this:
+----------+---+---+----------+---------+--------------+
+uint32 +b +b + uint32 +4bit enum+32 byte string+
+----------+---+---+----------+---------+--------------+
Where (b) is a one bit flag. The problem being that java primitive types don't align to byte boundaries so I wouldn't be able to use DataOutputStream to encode this since the lowest level type I can write is a byte.
Are there any libraries, standard or 3rd party, for dealing with arbitrary bit level message formats?
Edit:
Thanks to @Software Monkey for forcing me to look at my spec more closely. The spec I am using does actually align on byte boundaries so DataOutputStream is appropriate. Given my original question though I would have gone with the solution proposed by @emboss.
Edit:
Although the message format for this question was discovered to be on byte boundaries I've come across another message format that is applicable to the original question. This format defines a 6 bit character mapping where each character really only takes up 6 bits, not the full byte, so character strings do not align on byte boundaries. I have discovered several binary output streams that tackle this problem. Like this one: http://introcs.cs.princeton.edu/java/stdlib/BinaryOut.java.html
A:
There is a builtin byte type in Java, and you can read into byte[] buffers just fine using InputStream#read(byte[]) and write to an OutputStream using OutputStream#write(byte[], int, int), so there's no problem in that.
Regarding your messages - as you noted correctly, the tiniest bit of information you get at a time is a byte, so you will have to decompose your message format into 8 bit chunks first:
Let's suppose your message is in a byte[] named data. I also assume big-endian (network) byte order, which is what the shifts below implement.
A uint32 is 32 bits long -> that's four bytes. (Be careful when parsing this in Java: Java integers and longs are signed, so an easy way to avoid trouble is to store the value in a long.) data[0] fills bits 31 - 24, data[1] bits 23 - 16, data[2] bits 15 - 8 and data[3] bits 7 to 0. So you need to shift them appropriately to the left and glue them together with logical OR:
long uint32 = ((data[0]&0xFF) << 24) |
((data[1]&0xFF) << 16) |
((data[2]&0xFF) << 8) |
(data[3]&0xFF);
Next, there are two single bits. I suppose you have to check whether they are "on" (1) or "off" (0). To do this, you use bit masks and compare your byte with logical AND.
First bit: ( binary mask | 1 0 0 0 0 0 0 0 | = 128 = 0x80 )
if ( (data[4] & 0x80 ) == 0x80 ) // on
Second bit: ( binary mask | 0 1 0 0 0 0 0 0 | = 64 = 0x40 )
if ( (data[4] & 0x40 ) == 0x40 ) // on
To compose the next uint32, you will have to compose bytes across the byte boundaries of the underlying data. E.g. for its first byte, take the remaining 6 bits of data[4] and shift them two to the left (they become bits 7 to 2 of the new byte), then "add" the highest two bits of data[5] by shifting them 6 bits to the right (they take the remaining slots 1 and 0). "Adding" means logically OR'ing:
byte uint32Byte1 = (byte)( (data[4]&0xFF) << 2 | (data[5]&0xFF) >> 6);
Building your uint32 is then the same procedure as in the first example. And so on and so forth.
A:
With Java Binary Block Parser, the script to parse the message would be:
class Parsed {
@Bin int field1;
@Bin (type = BinType.BIT) boolean field2;
@Bin(type = BinType.BIT) boolean field3;
@Bin int field4;
@Bin(type = BinType.BIT) int enums;
@Bin(type = BinType.UBYTE_ARRAY) String str;
}
Parsed parsed = JBBPParser.prepare("int field1; bit field2; bit field3; int field4; bit:4 enums; ubyte [32] str;").parse(STREAM).mapTo(Parsed.class);
A:
I've heard nice things about Preon.
A:
Just to add to pholser's answer, I think the Preon version would be something like this:
class DataStructure {
@BoundNumber(size="32") long first; // uint32
@Bound boolean second; // boolean
@Bound boolean third; // boolean
@BoundNumber(size="32") long fourth; // uint32
@BoundNumber(size="4") int fifth; // enum
@BoundString(size="32") String sixth; // string
}
... but in reality, you can make your life even easier by using Preon's support for dealing with enumerations directly.
Creating a Codec for it and using it to decode some data would be something like this:
Codec<DataStructure> codec = Codecs.create(DataStructure.class)
DataStructure data = Codecs.decode(codec, ....)
A:
You need to apply bit arithmetic (the AND, OR, and AND-NOT operations) to change or read single bits within a byte in Java. The corresponding operators are &, | and ~.
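For example, a minimal sketch of those three operations on a flag byte:
int flags = 0;
flags |= 0x80;                        // OR: set bit 7
boolean isSet = (flags & 0x80) != 0;  // AND: test bit 7
flags &= ~0x80;                       // AND NOT: clear bit 7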
A:
FastProto is a 3rd-party lib that can solve the problem; it can customize offsets via annotations.
import org.indunet.fastproto.annotation.*;
public class Message {
@UInt32Type(offset = 0)
Long uint32_1;
@BoolType(byteOffset = 4, bitOffset = 0)
Boolean b1;
@BoolType(byteOffset = 5, bitOffset = 0)
Boolean b2;
@UInt32Type(offset = 6)
Long uint32_2;
...
}
// for parsing
byte[] bytes = ... // the binary
Message message = FastProto.parse(bytes, Message.class);
// for encoding
Message message = ... // the message
byte[] bytes = ... // store the binary
FastProto.toBytes(message, bytes);
If the project can solve your problem, please give a star, thanks.
GitHub Repo: https://github.com/indunet/fastproto
|
How to parse/encode binary message formats?
|
I need to parse and encode to a legacy binary message format in Java. I began by using DataOutputStream to read/write primitive types but the problem I'm having is that the message format doesn't align nicely to byte offsets and includes bit flags.
For example I have to deal with messages like this:
+----------+---+---+----------+---------+--------------+
+uint32 +b +b + uint32 +4bit enum+32 byte string+
+----------+---+---+----------+---------+--------------+
Where (b) is a one bit flag. The problem being that java primitive types don't align to byte boundaries so I wouldn't be able to use DataOutputStream to encode this since the lowest level type I can write is a byte.
Are there any libraries, standard or 3rd party, for dealing with arbitrary bit level message formats?
Edit:
Thanks to @Software Monkey for forcing me to look at my spec more closely. The spec I am using does actually align on byte boundaries so DataOutputStream is appropriate. Given my original question though I would have gone with the solution proposed by @emboss.
Edit:
Although the message format for this question was discovered to be on byte boundaries I've come across another message format that is applicable to the original question. This format defines a 6 bit character mapping where each character really only takes up 6 bits, not the full byte, so character strings do not align on byte boundaries. I have discovered several binary output streams that tackle this problem. Like this one: http://introcs.cs.princeton.edu/java/stdlib/BinaryOut.java.html
|
[
"There is a builtin byte type in Java, and you can read into byte[] buffers just fine using InputStream#read(byte[]) and write to an OutputStream using OutputStream#write(byte[], int, int), so there's no problem in that.\nRegarding your messages - as you noted correctly, the tiniest bit of information you get at a time is a byte, so you will have to decompose your message format into 8 bit chunks first:\nLet's suppose your message is in a byte[] named data. I also assume little-endianness.\nA uint32 is 32 bits long -> that's four bytes. (Be careful when parsing this in Java, Java integers and longs are signed, you need to handle that. An easy way to avoid trouble would be taking longs for that. data[0] fills bits 31 - 24, data[1] 23 - 16, data[2] bits 15 - 8 and data[3] bits 7 to 0. So you need to shift them appropriately to the left and glue them together with logical OR: \nlong uint32 = ((data[0]&0xFF) << 24) | \n ((data[1]&0xFF) << 16) | \n ((data[2]&0xFF) << 8) | \n (data[3]&0xFF);\n\nNext, there are two single bits. I suppose you have to check whether they are \"on\" (1) or \"off\" (0). To do this, you use bit masks and compare your byte with logical AND.\nFirst bit: ( binary mask | 1 0 0 0 0 0 0 0 | = 128 = 0x80 )\nif ( (data[4] & 0x80 ) == 0x80 ) // on\n\nSecond bit: ( binary mask | 0 1 0 0 0 0 0 0 | = 64 = 0x40 )\nif ( (data[4] & 0x40 ) == 0x40 ) // on\n\nTo compose the next uint32, you will have to compose bytes over byte boundaries of the underlying data. E.g. for the first byte take the remaining 6 bits of data[4], shift them two to the left (they will be bit 8 to 2 of the uint32) and \"add\" the first (highest) two of data[5] by shifting them 6 bits to the right (they will take the remaining 1 and 0 slot of the uint32). \"Adding\" means logically OR'ing:\nbyte uint32Byte1 = (byte)( (data[4]&0xFF) << 2 | (data[5]&&0xFF) >> 6);\n\nBuilding your uint32 is then the same procedure as in the first example. And so on and so forth.\n",
"with Java Binary Block Parser the script to parse the message will be\n class Parsed {\n @Bin int field1;\n @Bin (type = BinType.BIT) boolean field2;\n @Bin(type = BinType.BIT) boolean field3;\n @Bin int field4;\n @Bin(type = BinType.BIT) int enums;\n @Bin(type = BinType.UBYTE_ARRAY) String str;\n }\n\n Parsed parsed = JBBPParser.prepare(\"int field1; bit field2; bit field3; int field4; bit:4 enums; ubyte [32] str;\").parse(STREAM).mapTo(Parsed.class);\n\n",
"I've heard nice things about Preon.\n",
"Just to add to pholser's answer, I think the Preon version would be something like this:\nclass DataStructure {\n @BoundNumber(size=\"32\") long first; // uint32\n @Bound boolean second; // boolean\n @Bound boolean third; // boolean\n @BoundNumber(size=\"32\") long fourth; // uint32\n @BoundNumber(size=\"4\") int fifth; // enum\n @BoundString(size=\"32\") String sixth; // string\n}\n\n... but in reality, you can make your life even easier by using Preon's support for dealing with enumerations directly.\nCreating a Codec for it and using it to decode some data would be something like this:\nCodec<DataStructure> codec = Codecs.create(DataStructure.class)\nDataStructure data = Codecs.decode(codec, ....)\n\n",
"You need to apply bit arithmetics (AND, OR, AND NOT operators) to change or read single bits within a byte in Java. Arithmetic operators are &, | and ~\n",
"FastProto is 3rd party lib that can solve the problem, it can customize offset by annotations.\nimport org.indunet.fastproto.annotation.*;\n\npublic class Message {\n @UInt32Type(offset = 0)\n Long uint32_1;\n \n @BoolType(byteOffset = 4, bitOffset = 0)\n Boolean b1;\n\n @BoolType(byteOffset = 5, bitOffset = 0)\n Double b2;\n\n @UInt32Type(offset = 6)\n Long uint32_2;\n\n ...\n}\n\n// for parsing\nbyte[] bytes = ... // the binary\nMessage message = FastProto.parse(bytes, Message.class);\n\n\n// for encoding\nMessage message = ... // the message\nbyte[] bytes = ... // store the binary\nFastProto.toBytes(message, bytes);\n\nIf the project can solve your problem, please give a star, thanks.\nGitHub Repo: https://github.com/indunet/fastproto\n"
] |
[
5,
5,
4,
4,
2,
0
] |
[] |
[] |
[
"binaryfiles",
"java"
] |
stackoverflow_0006862855_binaryfiles_java.txt
|
Q:
Convert Integer to Roman Numeral String in Swift
I am looking to take an Integer in Swift and convert it to a Roman Numeral String. Any ideas?
A:
One could write an extension on Int, similar to the one seen below.
Please note: this code will return "" for numbers less than one. While this is probably okay in terms of Roman Numeral numbers (zero does not exist), you may want to handle this differently in your own implementation.
extension Int {
var romanNumeral: String {
var integerValue = self
var numeralString = ""
let mappingList: [(Int, String)] = [(1000, "M"), (900, "CM"), (500, "D"), (400, "CD"), (100, "C"), (90, "XC"), (50, "L"), (40, "XL"), (10, "X"), (9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
for i in mappingList {
while (integerValue >= i.0) {
integerValue -= i.0
numeralString += i.1
}
}
return numeralString
}
}
Thanks to Kenneth Bruno for some suggestions on improving the code as well.
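A quick usage sketch (assuming the extension above is in scope; the printed values follow from the mapping list):
print(1999.romanNumeral) // "MCMXCIX"
print(0.romanNumeral)    // "" for values below one, as noted above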
A:
Here's my version of an int-to-Roman converter (without a nested loop):
extension Int {
func toRoman() -> String {
let conversionTable: [(intNumber: Int, romanNumber: String)] =
[(1000, "M"),
(900, "CM"),
(500, "D"),
(400, "CD"),
(100, "C"),
(90, "XC"),
(50, "L"),
(40, "XL"),
(10, "X"),
(9, "IX"),
(5, "V"),
(4, "IV"),
(1, "I")]
var roman = ""
var remainder = 0
for entry in conversionTable {
let quotient = (self - remainder) / entry.intNumber
remainder += quotient * entry.intNumber
roman += String(repeating: entry.romanNumber, count: quotient)
}
return roman
}
}
A:
One more for good measure:
fileprivate let romanNumerals: [String] = ["M", "CM", "D", "CD", "C", "XC", "L", "XL", "X", "IX", "V", "IV", "I"]
fileprivate let arabicNumerals: [Int] = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1]
extension Int {
var romanRepresentation: String {
guard self > 0 && self < 4000 else {
return "Invalid Number"
}
var control: Int = self
return zip(arabicNumerals, romanNumerals)
.reduce(into: "") { partialResult, ar in
partialResult += String(repeating: ar.1, count: control/ar.0)
control = control % ar.0
}
}
}
A:
An addition to Brian Sachetta's version. If you want to go beyond 4999, you can use a set of Enhanced Roman Numerals. The largest number in this set is 8,999,999,999,999, which is OZZZQZUQBUGBTGRTHREHMECMXCIX. The set uses all the letters of the Latin alphabet.
extension Int {
var romanNumeral: String {
var integerValue = self
var numeralString = ""
let mappingList: [(Int, String)] = [(5000000000000, "O"), (4000000000000, "ZO"), (1000000000000, "Z"),
(900000000000, "QZ"), (500000000000, "Y"), (400000000000, "QY"), (100000000000, "Q"),
(90000000000, "UQ"), (50000000000, "W"), (40000000000, "UW"), (10000000000, "U"),
(9000000000, "BU"), (5000000000, "A"), (4000000000, "BA"), (1000000000, "B"),
(900000000, "GB"), (500000000, "J"), (400000000, "JG"), (100000000, "G"),
(90000000, "TG"), (50000000, "S"), (40000000, "TS"), (10000000, "T"),
(9000000, "RT"), (5000000, "P"), (4000000, "RP"), (1000000, "R"),
(900000, "HR"), (500000, "K"), (400000, "HK"), (100000, "H"),
(90000, "EH"), (50000, "F"), (40000, "EF"), (10000, "E"),
(9000, "ME"), (5000, "N"), (4000, "MN"), (1000, "M"),
(900, "CM"), (500, "D"), (400, "CD"), (100, "C"),
(90, "XC"), (50, "L"), (40, "XL"), (10, "X"),
(9, "IX"), (5, "V"), (4, "IV"), (1, "I")]
for i in mappingList {
while (integerValue >= i.0) {
integerValue -= i.0
numeralString += i.1
}
}
return numeralString
}
}
|
Convert Integer to Roman Numeral String in Swift
|
I am looking to take an Integer in Swift and convert it to a Roman Numeral String. Any ideas?
|
[
"One could write an extension on Int, similar to the one seen below.\nPlease note: this code will return \"\" for numbers less than one. While this is probably okay in terms of Roman Numeral numbers (zero does not exist), you may want to handle this differently in your own implementation.\nextension Int {\n var romanNumeral: String {\n var integerValue = self\n var numeralString = \"\"\n let mappingList: [(Int, String)] = [(1000, \"M\"), (900, \"CM\"), (500, \"D\"), (400, \"CD\"), (100, \"C\"), (90, \"XC\"), (50, \"L\"), (40, \"XL\"), (10, \"X\"), (9, \"IX\"), (5, \"V\"), (4, \"IV\"), (1, \"I\")]\n for i in mappingList {\n while (integerValue >= i.0) {\n integerValue -= i.0\n numeralString += i.1\n }\n }\n return numeralString\n }\n}\n\nThanks to Kenneth Bruno for some suggestions on improving the code as well.\n",
"Here's my version of an int to roman converter (without nested loop) :\nextension Int {\n func toRoman() -> String {\n let conversionTable: [(intNumber: Int, romanNumber: String)] =\n [(1000, \"M\"),\n (900, \"CM\"),\n (500, \"D\"),\n (400, \"CD\"),\n (100, \"C\"),\n (90, \"XC\"),\n (50, \"L\"),\n (40, \"XL\"),\n (10, \"X\"),\n (9, \"IX\"),\n (5, \"V\"),\n (4, \"IV\"),\n (1, \"I\")]\n var roman = \"\"\n var remainder = 0\n \n for entry in conversionTable {\n let quotient = (self - remainder) / entry.intNumber\n remainder += quotient * entry.intNumber\n roman += String(repeating: entry.romanNumber, count: quotient)\n }\n \n return roman\n }\n}\n\n",
"One more for good measure:\nfileprivate let romanNumerals: [String] = [\"M\", \"CM\", \"D\", \"CD\", \"C\", \"XC\", \"L\", \"XL\", \"X\", \"IX\", \"V\", \"IV\", \"I\"]\nfileprivate let arabicNumerals: [Int] = [1000, 900, 500, 400, 100, 90, 50, 40, 10, 9, 5, 4, 1]\n\nextension Int {\n var romanRepresentation: String {\n guard self > 0 && self < 4000 else {\n return \"Invalid Number\"\n }\n var control: Int = self\n return zip(arabicNumerals, romanNumerals)\n .reduce(into: \"\") { partialResult, ar in\n partialResult += String(repeating: ar.1, count: control/ar.0)\n control = control % ar.0\n }\n }\n}\n\n",
"An addition to Brian Sachetta's version. If you want to go beyond 4999, you can use set of Enhanced Roman Numerals. Largest number in this set is 8,999,999,999,999, which is OZZZQZUQBUGBTGRTHREHMECMXCIX. Set uses all letters in Latin Alphabet.\nextension Int {\nvar romanNumeral: String {\n var integerValue = self\n var numeralString = \"\"\n let mappingList: [(Int, String)] = [(5000000000000, \"O\"), (4000000000000, \"ZO\"), (1000000000000, \"Z\"),\n (900000000000, \"QZ\"), (500000000000, \"Y\"), (400000000000, \"QY\"), (100000000000, \"Q\"),\n (90000000000, \"UQ\"), (50000000000, \"W\"), (40000000000, \"UW\"), (10000000000, \"U\"),\n (9000000000, \"BU\"), (5000000000, \"A\"), (4000000000, \"BA\"), (1000000000, \"B\"),\n (900000000, \"GB\"), (500000000, \"J\"), (400000000, \"JG\"), (100000000, \"G\"),\n (90000000, \"TG\"), (50000000, \"S\"), (40000000, \"TS\"), (10000000, \"T\"),\n (9000000, \"RT\"), (5000000, \"P\"), (4000000, \"RP\"), (1000000, \"R\"),\n (900000, \"HR\"), (500000, \"K\"), (400000, \"HK\"), (100000, \"H\"),\n (90000, \"EH\"), (50000, \"F\"), (40000, \"EF\"), (10000, \"E\"),\n (9000, \"ME\"), (5000, \"N\"), (4000, \"MN\"), (1000, \"M\"),\n (900, \"CM\"), (500, \"D\"), (400, \"CD\"), (100, \"C\"),\n (90, \"XC\"), (50, \"L\"), (40, \"XL\"), (10, \"X\"),\n (9, \"IX\"), (5, \"V\"), (4, \"IV\"), (1, \"I\")]\n for i in mappingList {\n while (integerValue >= i.0) {\n integerValue -= i.0\n numeralString += i.1\n }\n }\n return numeralString\n}\n\n}\n"
] |
[
5,
1,
0,
0
] |
[] |
[] |
[
"nsnumber",
"swift",
"swift_extensions"
] |
stackoverflow_0036068104_nsnumber_swift_swift_extensions.txt
|
Q:
Disabling one button in a group of buttons with vanilla JS
So... basically I'm trying to make a function that reduces the value on the counter and disables a button at the same time. When there's only one button it works perfectly, but when I add more buttons it disables only the first one; the reducing part still works, but the disabling doesn't.
let subtract = () => {
let counter = document.getElementById("counter");
let button = document.getElementById("button");
let newScore = Number(counter.innerText) - 1;
console.log(newScore);
counter.innerText = newScore;
counter.classList.remove("new");
button.setAttribute("disabled", "");
}
<div class="new" id="counter">5</div>
<button id="button" onclick="subtract()">exec</button>
<button id="button" onclick="subtract()">exec</button>
What can I do to make it work as it should?
I tried using a for loop with querySelectorAll but it's still not working.
A:
So... after some time I made it work, using event bubbling and other simple functions.
I won't waste your time; see for yourself here if you're interested:
[Codepen](https://codepen.io/otaldoallan/pen/xxzQwLo)
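For reference, a minimal sketch of the event-bubbling idea (the actual Codepen may differ): duplicate id attributes are the root problem, since getElementById only ever returns the first match, so a single delegated listener on a shared parent can use event.target to disable just the clicked button. The .buttons wrapper here is an assumption, not from the original markup.
// one listener on a hypothetical parent; event.target is the clicked button
document.querySelector('.buttons').addEventListener('click', (event) => {
  if (event.target.tagName !== 'BUTTON') return;
  const counter = document.getElementById('counter');
  counter.innerText = Number(counter.innerText) - 1;
  counter.classList.remove('new');
  event.target.setAttribute('disabled', ''); // disables only the clicked button
});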
|
Disabling one button in a group of buttons with vanilla JS
|
So... basically I'm trying to make a function that reduces the value on the counter and disables a button at the same time. When there's only one button it works perfectly, but when I add more buttons it disables only the first one; the reducing part still works, but the disabling doesn't.
let subtract = () => {
let counter = document.getElementById("counter");
let button = document.getElementById("button");
let newScore = Number(counter.innerText) - 1;
console.log(newScore);
counter.innerText = newScore;
counter.classList.remove("new");
button.setAttribute("disabled", "");
}
<div class="new" id="counter">5</div>
<button id="button" onclick="subtract()">exec</button>
<button id="button" onclick="subtract()">exec</button>
What can I do to make it work as it should?
I tried using a for loop with querySelectorAll but it's still not working.
|
[
"So... after some time I made it, using event bubbling and other simple functions.\nI'll not waste your time and vision, see by yourself here if you're interested:\n[Codepen](https://codepen.io/otaldoallan/pen/xxzQwLo)\n"
] |
[
0
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0074662490_javascript.txt
|
Q:
How to use React router v6.4 with typescript properly?
I'm trying to use react-router-dom v6.4+ in my project. I implemented it as an array of route objects. It worked for routing, but then I ran into another issue related to it: I can't call any hook inside a component referenced by the element property in the route array.
In the route.ts file:
import MainLayout from './container/layouts/mainLayout/MainLayout'
import ErrorPage from './view/Error'
import Home from './view/Home'
const routes: RouteObject[] = [
{
path: '/',
element: MainLayout(),
children: [
{
index: true,
element: Home(),
},
],
},
{
path: '*',
element: ChangeRoute('/404'),
},
{
path: '/404',
element: ErrorPage(),
},
]
const router = createBrowserRouter(routes)
export default router
and in the app.ts file:
<RouterProvider router={router} fallbackElement={<React.Fragment>Loading ...</React.Fragment>} />
But if I try to use any hook inside the MainLayout component, it throws an error.
Code in the MainLayout component:
const MainLayout = () => {
const [collapsed, setCollapsed] = useState(false)
return (
<Layout className='layout'>
<SideBar collapsed={collapsed} />
<Layout>
<Topbar collapsed={collapsed} setCollapsed={setCollapsed} />
<Outlet />
</Layout>
</Layout>
)
}
export default MainLayout
I think that if I use element: <MainLayout/> instead of element: MainLayout(), this issue will be resolved, but TypeScript doesn't allow me to do this, and in the documentation everything is in plain JavaScript; there is only one type definition there.
How do I solve this? Kindly guide me.
Edit
Here is the codesandbox demo : visit sandbox
A:
The element prop expects a React.ReactNode; you are directly calling a React function component instead of passing it as JSX. Calling it as a plain function means React never mounts it as a component, so hooks inside it have no component instance to attach to, which is why the hook calls fail.
Example:
const routes: RouteObject[] = [
{
path: '/',
element: <MainLayout />,
children: [
{
index: true,
element: <Home />,
},
],
},
{
path: '*',
element: <ChangeRoute redirect='/404' />,
},
{
path: '/404',
element: <ErrorPage />,
},
]
A:
Change the name of the "route.ts" file to "route.tsx" (TypeScript only parses JSX syntax in .tsx files); now you can set components in the element property. This works for me.
|
How to use React router v6.4 with typescript properly?
|
I'm trying to use react-router-dom v6.4+ in my project. I implemented it as an array of route objects. It worked for routing, but then I ran into another issue related to it: I can't call any hook inside a component referenced by the element property in the route array.
In the route.ts file:
import MainLayout from './container/layouts/mainLayout/MainLayout'
import ErrorPage from './view/Error'
import Home from './view/Home'
const routes: RouteObject[] = [
{
path: '/',
element: MainLayout(),
children: [
{
index: true,
element: Home(),
},
],
},
{
path: '*',
element: ChangeRoute('/404'),
},
{
path: '/404',
element: ErrorPage(),
},
]
const router = createBrowserRouter(routes)
export default router
and in the app.ts file:
<RouterProvider router={router} fallbackElement={<React.Fragment>Loading ...</React.Fragment>} />
But if I try to use any hook inside the MainLayout component, it throws an error.
Code in the MainLayout component:
const MainLayout = () => {
const [collapsed, setCollapsed] = useState(false)
return (
<Layout className='layout'>
<SideBar collapsed={collapsed} />
<Layout>
<Topbar collapsed={collapsed} setCollapsed={setCollapsed} />
<Outlet />
</Layout>
</Layout>
)
}
export default MainLayout
I think that if I use element: <MainLayout/> instead of element: MainLayout(), this issue will be resolved, but TypeScript doesn't allow me to do this, and in the documentation everything is in plain JavaScript; there is only one type definition there.
How do I solve this? Kindly guide me.
Edit
Here is the codesandbox demo : visit sandbox
|
[
"The element prop expects a React.ReactNode, you are directly calling a React function instead of passing it as JSX.\nExample:\nconst routes: RouteObject[] = [\n {\n path: '/',\n element: <MainLayout />,\n children: [\n {\n index: true,\n element: <Home />,\n },\n ],\n },\n {\n path: '*',\n element: <ChangeRoute redirect='/404' />,\n },\n {\n path: '/404',\n element: <ErrorPage />,\n },\n]\n\n",
"Changes the name of the \"route.ts\" file to \"route.tsx\", now you can will set components in the \"element \"object property, this works for me.\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript",
"react_router_dom",
"reactjs",
"typescript"
] |
stackoverflow_0074609346_javascript_react_router_dom_reactjs_typescript.txt
|
Q:
Difference between Computer Vision API and Custom Vision API
I'm fairly new to using Microsoft's cognitive services. I'd like to know what is the difference between MS Computer Vision API and MS Custom Vision API?
A:
They both deal with computer vision on images, but hopefully, I can help make them more distinguishable here. :)
Computer Vision
The Computer Vision API is where Microsoft has built their own image models that can give you a few things:
Image classification - This is where the API will give you a number of tags that classify the image. It should also give you a confidence score of how strongly the model predicts the image to be of that tag.
Content Moderation - The API can give you isAdult and isRacy flags to determine if the image meets those criteria. An accompanying confidence score comes with those, too.
OCR - The API can read text within the images and will give you the text. This API can also work with handwritten text instead of just text on signs.
Facial Recognition - This API will recognize the faces of celebrities or other well-known people within images.
Landmark Recognition - This will recognize landmarks within images.
Custom Vision
The Custom Vision service is a little bit different where you can train a model of your own images based off of a prebuilt model that Microsoft has. For one thing, this can only do image classification and object detection. The object detection portion is where it will tell you not only what tag an image is, but show where in the image it is. Currently, this part of the service is in preview, but I've seen good results with it so far.
Another difference is that the Custom Vision service allows you to upload your own images. For image classification, this means you can upload your images and, for each image, give it one or multiple tags. So when you run an image through the model it will return the tag(s) it thinks it is along with the tag's confidence score. For object detection, you do the same process, but you pick in the images where the object is you want to detect and give that a tag.
Each time you upload and tag new images the model needs to be trained. From there you can evaluate how well your model performs, give it test images, or even use the REST URLs or SDKs to interact with it.
To summarize, the biggest difference between the two is the Custom Vision service can only do image classification and object detection, as well as take in your own images to perform those against. The Computer Vision APIs can do a bit more, but you don't have any control over how the models are trained.
Hope that helps! If you have any questions, just let me know.
A:
Microsoft Computer Vision API and Microsoft Custom Vision API are both cloud-based services that provide AI-powered image recognition and analysis capabilities. The main difference between the two is the type of recognition they perform.
Microsoft Computer Vision API is a general-purpose image recognition service that can identify objects, scenes, and text in images. It can also perform tasks such as optical character recognition (OCR) and generate descriptions of images in natural language.
On the other hand, Microsoft Custom Vision API is a service specifically designed for creating and training custom image classifiers. It allows users to upload their own images, tag and label them, and then train a machine learning model to recognize these custom classes in new images. This makes it particularly useful for applications that require the recognition of specific objects or scenes that may not be covered by the general-purpose models provided by the Computer Vision API.
In summary, Microsoft Computer Vision API is a broad-spectrum image recognition service, while Microsoft Custom Vision API is focused on creating and training custom image classifiers.
|
Difference between Computer Vision API and Custom Vision API
|
I'm fairly new to using Microsoft's cognitive services. I'd like to know what is the difference between MS Computer Vision API and MS Custom Vision API?
|
[
"They both deal with computer vision on images, but hopefully, I can help make them more distinguishable here. :)\nComputer Vision\nThe Computer Vision API is where Microsoft has built their own image models that can give you a few things:\n\nImage classification - This is where the API will give you a number of tags that classify the image. It should also give you a confidence score of how strongly the model predicts the image to be of that tag.\nContent Moderation - The API can give you an isAdult and isRacy flags to determine if the image meets those criteria. An accompanied confidence score is with those, too.\nOCR - The API can read text within the images and will give you the text. This API can also work with handwritten text instead of just text on signs.\nFacial Recognition - This API will recognize the faces of celebrities or other well-known people within images.\nLandmark Recognition - This will recognize landmarks within images.\n\nCustom Vision\nThe Custom Vision service is a little bit different where you can train a model of your own images based off of a prebuilt model that Microsoft has. For one thing, this can only do image classification and object detection. The object detection portion is where it will tell you not only what tag an image is, but show where in the image it is. Currently, this part of the service is in preview, but I've seen good results with it so far.\nAnother difference is that the Custom Vision service allows you to upload your own images. For image classification, this means you can upload your images and, for each image, give it one or multiple tags. So when you run an image through the model it will return the tag(s) it thinks it is along with the tag's confidence score. For object detection, you do the same process, but you pick in the images where the object is you want to detect and give that a tag.\nEach time you upload and tag new images the model needs to be trained. From there you can evaluate how well your model performs, give it test images, or even use the REST URLs or SDKs to interact with it.\nTo summarize, the biggest difference between the two is the Custom Vision service can only do image classification and object detection, as well as take in your own images to perform those against. The Computer Vision APIs can do a bit more, but you don't have any control over how the models are trained.\nHope that helps! If you have any questions, just let me know.\n",
"Microsoft Computer Vision API and Microsoft Custom Vision API are both cloud-based services that provide AI-powered image recognition and analysis capabilities. The main difference between the two is the type of recognition they perform.\nMicrosoft Computer Vision API is a general-purpose image recognition service that can identify objects, scenes, and text in images. It can also perform tasks such as optical character recognition (OCR) and generate descriptions of images in natural language.\nOn the other hand, Microsoft Custom Vision API is a service specifically designed for creating and training custom image classifiers. It allows users to upload their own images, tag and label them, and then train a machine learning model to recognize these custom classes in new images. This makes it particularly useful for applications that require the recognition of specific objects or scenes that may not be covered by the general-purpose models provided by the Computer Vision API.\nIn summary, Microsoft Computer Vision API is a broad-spectrum image recognition service, while Microsoft Custom Vision API is focused on creating and training custom image classifiers.\n"
] |
[
23,
0
] |
[] |
[] |
[
"azure_cognitive_services"
] |
stackoverflow_0052155632_azure_cognitive_services.txt
|
Q:
Add mask on ng-select Angular
I have the code below:
<ng-select [multiple]="true" [addTag]="addTag" id="fieldId" [ngClass]="formValidatorHelper.applyCssError(MyService.form.controls['fieldId'])"
formControlName="fieldId">
</ng-select>
I need to add the mask '000.000.000-00' for each value added. Can someone help me?
I tried to add mask="000.000.000-00" but it didn't work.
A:
If you are trying to add a mask to the ng-select element, it is not possible to do so directly with the mask attribute. The ng-select element does not have a built-in masking feature.
One way to add a mask to the ng-select element is to use a third-party library that provides a masking feature, such as ngx-mask or angular2-text-mask. You can then use the masking feature provided by the library to add the mask to the ng-select element.
Here is an example of how you can use the ngx-mask library to add a mask to the ng-select element:
Install the ngx-mask library by running the following command:
npm install ngx-mask
Import the NgxMaskModule in your Angular module where you want to use the ng-select element.
import { NgxMaskModule } from 'ngx-mask';
@NgModule({
// other module imports
imports: [
// other imports
NgxMaskModule.forRoot()
]
})
Use the [mask] attribute to add the mask to the ng-select element. The value of the [mask] attribute should be the mask that you want to use. In your case, the mask should be 000.000.000-00.
<ng-select [mask]="'000.000.000-00'" [multiple]="true" [addTag]="addTag" id="fieldId"
[ngClass]="formValidatorHelper.applyCssError(MyService.form.controls['fieldId'])"
formControlName="fieldId">
</ng-select>
I hope this helps!
-- Correction as per your comment --
The error message "Can't bind to 'mask' since it isn't a known property of 'ng-select'" indicates that the ng-select element does not have a property named 'mask'. This is because the ng-select element does not have a built-in masking feature, and the [mask] attribute is not a native attribute of the ng-select element.
To fix this error, you need to use a third-party library that provides a masking feature for the ng-select element. For example, you can use the ngx-mask library, which provides a mask attribute that you can use with the ng-select element.
To use the ngx-mask library, follow these steps:
Install the ngx-mask library by running the following command:
npm install ngx-mask
Import the NgxMaskModule in your Angular module where you want to use the ng-select element.
import { NgxMaskModule } from 'ngx-mask';
@NgModule({
// other module imports
imports: [
// other imports
NgxMaskModule.forRoot()
]
})
Use the [mask] attribute to add the mask to the ng-select element. The value of the [mask] attribute should be the mask that you want to use. In your case, the mask should be 000.000.000-00.
<ng-select [mask]="'000.000.000-00'" [multiple]="true" [addTag]="addTag" id="fieldId"
[ngClass]="formValidatorHelper.applyCssError(MyService.form.controls['fieldId'])"
formControlName="fieldId">
</ng-select>
After importing the NgxMaskModule and using the [mask] attribute, the ng-select element should have a masking feature that you can use to add a mask to the element. This should resolve the error message that you are seeing.
A:
To add a mask to an ng-select element in Angular, you can use the ngx-mask library. This library provides a directive that you can use to add a mask to any input element, including an ng-select element.
Here is an example of how to use the ngx-mask directive to add a mask to an ng-select element:
<!-- Import the ng-select and ngx-mask modules -->
import { NgSelectModule } from '@ng-select/ng-select';
import { NgxMaskModule } from 'ngx-mask';
@NgModule({
imports: [
// ...
NgSelectModule,
NgxMaskModule.forRoot()
],
// ...
})
export class AppModule { }
--
<!-- Use the ng-select and ngx-mask directives in your component template -->
<ng-select [items]="items" [mask]="'00-00-0000'">
In the example above, we import the NgSelectModule and NgxMaskModule and add them to the imports array of the AppModule. We then use the ng-select and ngx-mask directives in the component template to create an ng-select element with a mask applied.
You can customize the mask by setting the mask attribute on the ng-select element. In the example above, we use the mask "00-00-0000", which will require users to enter a date in the format "MM-DD-YYYY". You can use any mask pattern supported by the ngx-mask library.
|
Add mask on ng-select Angular
|
I have the code below:
<ng-select [multiple]="true" [addTag]="addTag" id="fieldId" [ngClass]="formValidatorHelper.applyCssError(MyService.form.controls['fieldId'])"
formControlName="fieldId">
</ng-select>
I need to add the mask '000.000.000-00' for each value added. Can someone help me?
I tried to add mask="000.000.000-00" but it didn't work.
|
[
"If you are trying to add a mask to the ng-select element, it is not possible to do so directly with the mask attribute. The ng-select element does not have a built-in masking feature.\nOne way to add a mask to the ng-select element is to use a third-party library that provides a masking feature, such as ngx-mask or angular2-text-mask. You can then use the masking feature provided by the library to add the mask to the ng-select element.\nHere is an example of how you can use the ngx-mask library to add a mask to the ng-select element:\nInstall the ngx-mask library by running the following command:\nnpm install ngx-mask\n\nImport the NgxMaskModule in your Angular module where you want to use the ng-select element.\nimport { NgxMaskModule } from 'ngx-mask';\n\n@NgModule({\n // other module imports\n imports: [\n // other imports\n NgxMaskModule.forRoot()\n ]\n})\n\nUse the [mask] attribute to add the mask to the ng-select element. The value of the [mask] attribute should be the mask that you want to use. In your case, the mask should be 000.000.000-00.\n<ng-select [mask]=\"'000.000.000-00'\" [multiple]=\"true\" [addTag]=\"addTag\" id=\"fieldId\"\n [ngClass]=\"formValidatorHelper.applyCssError(MyService.form.controls['fieldId'])\"\n formControlName=\"fieldId\">\n</ng-select>\n\nI hope this helps!\n-- Correction as per your comment --\nThe error message \"Can't bind to 'mask' since it isn't a known property of 'ng-select'\" indicates that the ng-select element does not have a property named 'mask'. This is because the ng-select element does not have a built-in masking feature, and the [mask] attribute is not a native attribute of the ng-select element.\nTo fix this error, you need to use a third-party library that provides a masking feature for the ng-select element. For example, you can use the ngx-mask library, which provides a mask attribute that you can use with the ng-select element.\nTo use the ngx-mask library, follow these steps:\nInstall the ngx-mask library by running the following command:\nnpm install ngx-mask\nImport the NgxMaskModule in your Angular module where you want to use the ng-select element.\nCopy code\nimport { NgxMaskModule } from 'ngx-mask';\n\n@NgModule({\n // other module imports\n imports: [\n // other imports\n NgxMaskModule.forRoot()\n ]\n})\n\nUse the [mask] attribute to add the mask to the ng-select element. The value of the [mask] attribute should be the mask that you want to use. In your case, the mask should be 000.000.000-00.\n<ng-select [mask]=\"'000.000.000-00'\" [multiple]=\"true\" [addTag]=\"addTag\" id=\"fieldId\"\n [ngClass]=\"formValidatorHelper.applyCssError(MyService.form.controls['fieldId'])\"\n formControlName=\"fieldId\">\n</ng-select>\n\nAfter importing the NgxMaskModule and using the [mask] attribute, the ng-select element should have a masking feature that you can use to add a mask to the element. This should resolve the error message that you are seeing.\n",
"To add a mask to an ng-select element in Angular, you can use the ngx-mask library. This library provides a directive that you can use to add a mask to any input element, including an ng-select element.\nHere is an example of how to use the ngx-mask directive to add a mask to an ng-select element:\n<!-- Import the ng-select and ngx-mask modules -->\nimport { NgSelectModule } from '@ng-select/ng-select';\nimport { NgxMaskModule } from 'ngx-mask';\n\n@NgModule({\n imports: [\n // ...\n NgSelectModule,\n NgxMaskModule.forRoot()\n ],\n // ...\n})\nexport class AppModule { }\n\n--\n<!-- Use the ng-select and ngx-mask directives in your component template -->\n<ng-select [items]=\"items\" [mask]=\"'00-00-0000'\">\n\nIn the example above, we import the NgSelectModule and NgxMaskModule and add them to the imports array of the AppModule. We then use the ng-select and ngx-mask directives in the component template to create an ng-select element with a mask applied.\nYou can customize the mask by setting the mask attribute on the ng-select element. In the example above, we use the mask \"00-00-0000\", which will require users to enter a date in the format \"MM-DD-YYYY\". You can use any mask pattern supported by the ngx-mask library.\n"
] |
[
0,
0
] |
[] |
[] |
[
"angular",
"angular_ngselect",
"mask"
] |
stackoverflow_0074663379_angular_angular_ngselect_mask.txt
|
Q:
storing a variable from turtle.onclick(turtle.textinput())
I'm trying to program a slide-puzzle game and I've been given several potential files to load. The files need to be loaded from a clickable button within turtle itself. I've written the following code:
def button_click(x, y):
    if (x > 247 and x < 315) and (y > -292 and y < -246):  # exit on click
        turtle.onclick(quit(1))
    elif (x > 143 and x < 213) and (y > -302 and y < -236):  # load on click
        load = turtle.onclick(turtle.textinput('Prompt', "prompt"))
        print(load)
The quit button works when clicked, and the load prompt shows up when the load button is clicked, but when the print(load) statement runs in the terminal it keeps printing "None".
Ideally, load should be returning a valid string file name (entered by the user) that I can use as input into another function to begin loading the parameters of the puzzle. No matter what you type into the prompt box now, it just returns none.
Any assistance is greatly appreciated!
A:
load now successfully holds the entered value; I just had to remove the turtle.onclick wrapper. Now the code looks like this:
def button_click(x, y):
    if (x > 247 and x < 315) and (y > -292 and y < -246):  # exit on click
        turtle.onclick(quit(1))
    elif (x > 143 and x < 213) and (y > -302 and y < -236):  # load on click
        load = turtle.textinput('prompt', 'prompt')
        print(load)
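If the filename then needs to reach the rest of the program, one minimal sketch is to hand it off directly (load_puzzle and its body are illustrative, not from the original code):
def load_puzzle(filename):
    # hypothetical loader: read the puzzle parameters from the chosen file
    with open(filename) as f:
        return f.read()

def button_click(x, y):
    if (x > 143 and x < 213) and (y > -302 and y < -236):  # load on click
        load = turtle.textinput('prompt', 'prompt')
        if load:  # textinput returns None if the dialog is cancelled
            puzzle = load_puzzle(load)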
|
storing a variable from turtle.onclick(turtle.textinput())
|
I'm trying to program a slide-puzzle game and I've been given several potential files to load. The files need to be loaded from a clickable button within turtle itself. I've written the following code:
def button_click(x, y):
    if (x > 247 and x < 315) and (y > -292 and y < -246):  # exit on click
        turtle.onclick(quit(1))
    elif (x > 143 and x < 213) and (y > -302 and y < -236):  # load on click
        load = turtle.onclick(turtle.textinput('Prompt', "prompt"))
        print(load)
The quit button works when clicked, and the load prompt shows up when the load button is clicked, but when the print(load) statement runs in the terminal it keeps printing "None".
Ideally, load should be returning a valid string file name (entered by the user) that I can use as input into another function to begin loading the parameters of the puzzle. No matter what you type into the prompt box now, it just returns none.
Any assistance is greatly appreciated!
|
[
"load has now successfully become a variable, I just had to remove the turtle.onclick- now the code looks like this.\ndef button_click(x,y):\nif (x > 247 and x < 315) and (y > -292 and y < -246): #exit on click exit\n turtle.onclick(quit(1))\nelif (x > 143 and x < 213) and (y > -302 and y < -236): #load on click\n load = turtle.textinput('prompt','prompt')\n print (load)\n\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_turtle",
"turtle_graphics"
] |
stackoverflow_0074663018_python_python_turtle_turtle_graphics.txt
|
Q:
How to scrape image src the right way using puppeteer?
I'm trying to create a function that can capture the src attribute from a website, but all of the most common ways of doing so aren't working.
This was my original attempt.
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
try {
await page.setDefaultNavigationTimeout(0);
await page.waitForTimeout(500);
await page.goto(
`https://www.sirved.com/restaurant/essex-ontario-canada/dairy-freez/1/menus/3413654`,
{
waitUntil: "domcontentloaded",
}
);
const fetchImgSrc = await page.evaluate(() => {
const img = document.querySelectorAll(
"#menus > div.tab-content >div > div > div.swiper-wrapper > div.swiper-slide > img"
);
let src = [];
for (let i = 0; i < img.length; i++) {
src.push(img[i].getAttribute("src"));
}
return src;
});
console.log(fetchImgSrc);
} catch (err) {
console.log(err);
}
await browser.close();
})();
[];
In my next attempt I tried a suggestion and was returned an empty string.
await page.setViewport({ width: 1024, height: 768 });
const imgs = await page.$$eval("#menus img", (images) =>
images.map((i) => i.src)
);
console.log(imgs);
And in my final attempt I followed another suggestion and was returned an array with two empty strings inside of it.
const fetchImgSrc = await page.evaluate(() => {
const img = document.querySelectorAll(".swiper-lazy-loaded");
let src = [];
for (let i = 0; i < img.length; i++) {
src.push(img[i].getAttribute("src"));
}
return src;
});
console.log(fetchImgSrc);
In each attempt I only replaced the function and console-log portion of the code. I've done a lot of digging and found these are the most common ways of scraping an image src using Puppeteer, and I've used them in other ways, but for some reason right now they aren't working for me. I'm not sure if I have a bug in my code or why it will not work.
A:
To return the src link for the two menu images on this page you can use
const fetchImgSrc = await page.evaluate(() => {
const img = document.querySelectorAll('.swiper-lazy-loaded');
let src = [];
for (let i = 0; i < img.length; i++) {
src.push(img[i].getAttribute("src"));
}
return src;
});
This gives us the expected output
['https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3b9eabc40.jpg', 'https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3bbe93cc6.jpg']
A:
You have two issues here:
Puppeteer by default opens the page in a smaller window and the images to be scraped are lazy loaded, while they are not in the viewport: they won't be loaded (not even have src-s). You need to set your puppeteer browser to a bigger size with page.setViewport.
Element.getAttribute is not advised if you are working with dynamically changing websites: It will always return the original attribute value, which is an empty string in the lazy loaded image. What you need is the src property that is always up-to-date in the DOM. It is a topic of attribute vs property value in JavaScript.
By the way: you can shorten your script with page.$$eval like this:
await page.setViewport({ width: 1024, height: 768 })
const imgs = await page.$$eval('#menus img', images => images.map(i => i.src))
console.log(imgs)
Output:
[
'https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3b9eabc40.jpg',
'https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3bbe93cc6.jpg'
]
A:
It looks like the issue may be that the images are not loaded yet when you are trying to get their src attributes. In your code, you navigate with waitUntil: "domcontentloaded", so page.goto returns as soon as the DOM is parsed, before the lazy-loaded images have been fetched. This means that when you try to get the src attributes of the images, they may not have been loaded yet and therefore have no src attributes to retrieve.
To fix this, you can try using the waitForSelector function to wait for the images to be loaded before trying to get their src attributes. Here is an example of how you might do this:
await page.waitForSelector("#menus img");
const imgs = await page.$$eval("#menus img", (images) =>
images.map((i) => i.src)
);
console.log(imgs);
You can also try using the waitForFunction method instead of waitForSelector if you want to wait for a specific condition (such as the images having src attributes) before continuing.
|
How to scrape image src the right way using puppeteer?
|
I'm trying to create a function that can capture the src attribute from a website, but all of the most common ways of doing so aren't working.
This was my original attempt.
(async () => {
const browser = await puppeteer.launch();
const page = await browser.newPage();
try {
await page.setDefaultNavigationTimeout(0);
await page.waitForTimeout(500);
await page.goto(
`https://www.sirved.com/restaurant/essex-ontario-canada/dairy-freez/1/menus/3413654`,
{
waitUntil: "domcontentloaded",
}
);
const fetchImgSrc = await page.evaluate(() => {
const img = document.querySelectorAll(
"#menus > div.tab-content >div > div > div.swiper-wrapper > div.swiper-slide > img"
);
let src = [];
for (let i = 0; i < img.length; i++) {
src.push(img[i].getAttribute("src"));
}
return src;
});
console.log(fetchImgSrc);
} catch (err) {
console.log(err);
}
await browser.close();
})();
[];
In my next attempt I tried a suggestion and was returned an empty string.
await page.setViewport({ width: 1024, height: 768 });
const imgs = await page.$$eval("#menus img", (images) =>
images.map((i) => i.src)
);
console.log(imgs);
And in my final attempt I followed another suggestion and was returned an array with two empty strings inside of it.
const fetchImgSrc = await page.evaluate(() => {
const img = document.querySelectorAll(".swiper-lazy-loaded");
let src = [];
for (let i = 0; i < img.length; i++) {
src.push(img[i].getAttribute("src"));
}
return src;
});
console.log(fetchImgSrc);
In each attempt I only replaced the function and console-log portion of the code. I've done a lot of digging and found these are the most common ways of scraping an image src using Puppeteer, and I've used them in other ways, but for some reason right now they aren't working for me. I'm not sure if I have a bug in my code or why it will not work.
|
[
"To return the src link for the two menu images on this page you can use\nconst fetchImgSrc = await page.evaluate(() => {\n const img = document.querySelectorAll('.swiper-lazy-loaded');\n let src = [];\n for (let i = 0; i < img.length; i++) {\n src.push(img[i].getAttribute(\"src\"));\n }\n return src;\n});\n\nThis gives us the expected output\n['https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3b9eabc40.jpg', 'https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3bbe93cc6.jpg']\n\n",
"You have two issues here:\n\nPuppeteer by default opens the page in a smaller window and the images to be scraped are lazy loaded, while they are not in the viewport: they won't be loaded (not even have src-s). You need to set your puppeteer browser to a bigger size with page.setViewport.\nElement.getAttribute is not advised if you are working with dynamically changing websites: It will always return the original attribute value, which is an empty string in the lazy loaded image. What you need is the src property that is always up-to-date in the DOM. It is a topic of attribute vs property value in JavaScript.\n\nBy the way: you can shorten your script with page.$$eval like this:\nawait page.setViewport({ width: 1024, height: 768 })\nconst imgs = await page.$$eval('#menus img', images => images.map(i => i.src))\nconsole.log(imgs)\n\nOutput:\n[\n 'https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3b9eabc40.jpg',\n 'https://images.sirved.com/ChIJq6qqqlrZOogRs_xGxBcn0_w/5caf3bbe93cc6.jpg'\n]\n\n",
"It looks like the issue may be that the images are not loaded yet when you are trying to get their src attributes. In your code, you are calling page.setDefaultNavigationTimeout(0) which disables the default navigation timeout, which means that the page.goto function will not wait for the page to finish loading before continuing. This means that when you try to get the src attributes of the images, they may not have been loaded yet and therefore do not have src attributes to retrieve.\nTo fix this, you can try using the waitForSelector function to wait for the images to be loaded before trying to get their src attributes. Here is an example of how you might do this:\nawait page.waitForSelector(\"#menus img\");\n\nconst imgs = await page.$$eval(\"#menus img\", (images) =>\n images.map((i) => i.src)\n);\n\nconsole.log(imgs);\n\nYou can also try using the waitFor function instead of waitForSelector if you want to wait for a specific condition (such as the images having src attributes) before continuing.\n"
] |
[
1,
1,
0
] |
[] |
[] |
[
"image",
"node.js",
"puppeteer",
"web_scraping"
] |
stackoverflow_0073183928_image_node.js_puppeteer_web_scraping.txt
|
Q:
Is there any Alternative of Dictionary with non contiguous memory allocation in C#
Using Dictionary<Key, Value> is causing memory-allocation issues (error: Failed to allocate memory) in the application. So is there any alternative that allocates memory in a non-contiguous form?
I tried the SortedList<Key, Value> collection but it didn't work out.
|
Is there any Alternative of Dictionary with non contiguous memory allocation in C#
|
Using Dictionary<Key, Value> is causing memory-allocation issues (error: Failed to allocate memory) in the application. So is there any alternative that allocates memory in a non-contiguous form?
I tried the SortedList<Key, Value> collection but it didn't work out.
|
[] |
[] |
[
"Memory allocation will be based on the objects you add to the dictionary (or HashSet or SortedList, or any kind of list).\nIf you want to keep fixed sized of items in your dictionary, you can pass the capacity for the dictionary when instantiating it.\nOr you can use Stack<TItem> with capacity so you can push/pop.\n"
] |
[
-2
] |
[
"c#",
"collections",
"data_structures",
"dictionary",
"sortedlist"
] |
stackoverflow_0074662529_c#_collections_data_structures_dictionary_sortedlist.txt
|
Q:
Query in ElasticSearch to match part of the word
I am trying to write a query in ElasticSearch which matches contiguous characters in the words. So, if my index has "John Doe", I should still see "John Doe" returned by Elasticsearch for the below searches.
john doe
john do
ohn do
john
n doe
I have tried the below query so far.
{
"query": {
"multi_match": {
"query": "term",
"operator": "OR",
"type": "phrase_prefix",
"max_expansions": 50,
"fields": [
"Field1",
"Field2"
]
}
}
}
But this also returns unnecessary matches; for example, I will still get "John Doe" when I type john x.
A:
As explained in my comment above, prefix wildcards should be avoided at all costs as your index grows since that will force ES to do full index scans. I'm still convinced that ngrams (more precisely edge-ngrams) is the way to go, so I'm taking a stab at it below.
The idea is to index all the suffixes of the input and then use a prefix query to match any suffix as searching for prefixes doesn't suffer the same performance issues as searching for suffixes. So the idea is to index john doe as follows:
john doe
ohn doe
hn doe
n doe
doe
oe
e
That way, using a prefix query we can match any sub-part of those tokens which effectively achieves the goal of matching partial contiguous words while at the same time ensuring good performance.
The definition of the index would go like this:
PUT my_index
{
"settings": {
"index": {
"analysis": {
"analyzer": {
"my_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"reverse",
"suffixes",
"reverse"
]
}
},
"filter": {
"suffixes": {
"type": "edgeNGram",
"min_gram": 1,
"max_gram": 20
}
}
}
}
},
"mappings": {
"doc": {
"properties": {
"name": {
"type": "text",
"analyzer": "my_analyzer",
"search_analyzer": "standard"
}
}
}
}
}
Then we can index a sample document:
PUT my_index/doc/1
{
"name": "john doe"
}
And finally all of the following searches will return the john doe document:
POST my_index/_search
{
"query": {
"prefix": {
"name": "john doe"
}
}
}
POST my_index/_search
{
"query": {
"prefix": {
"name": "john do"
}
}
}
POST my_index/_search
{
"query": {
"prefix": {
"name": "ohn do"
}
}
}
POST my_index/_search
{
"query": {
"prefix": {
"name": "john"
}
}
}
POST my_index/_search
{
"query": {
"prefix": {
"name": "n doe"
}
}
}
A:
This is what worked for me.
Instead of an ngram, index your data as keyword.
And use wildcard regex match to match the words.
"query": {
"bool": {
"should": [
{
"wildcard": { "Field1": "*" + term + "*" }
},
{
"wildcard": { "Field2": "*" + term + "*" }
}
],
"minimum_should_match": 1
}
}
A:
Here is an updated fix
link to the code
more options with tokenizers
Create the index with:
body = {
"settings": {
"analysis": {
"analyzer": {
"autocomplete": {
"tokenizer": "autocomplete",
"filter": [
"lowercase"
]
},
"autocomplete_search": {
"tokenizer": "lowercase"
}
},
"tokenizer": {
"autocomplete": {
"type": "edge_ngram",
"min_gram": 2,
"max_gram": 10,
"token_chars": [
"letter"
]
}
}
}
},
"mappings": {
"properties": {
"title": {
"type": "text",
"analyzer": "autocomplete",
"search_analyzer": "autocomplete_search"
}
}
}
}
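A quick usage sketch against that mapping (the index name and document are hypothetical). Note that this analyzer matches word prefixes such as "joh do", not arbitrary mid-word fragments:
PUT my_index/_doc/1
{
  "title": "john doe"
}

POST my_index/_search
{
  "query": {
    "match": {
      "title": {
        "query": "joh do",
        "operator": "and"
      }
    }
  }
}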
|
Query in ElasticSearch to match part of the word
|
I am trying to write a query in ElasticSearch which matches contiguous characters in the words. So, if my index has "John Doe", I should still see "John Doe" returned by Elasticsearch for the below searches.
john doe
john do
ohn do
john
n doe
I have tried the below query so far.
{
"query": {
"multi_match": {
"query": "term",
"operator": "OR",
"type": "phrase_prefix",
"max_expansions": 50,
"fields": [
"Field1",
"Field2"
]
}
}
}
But this also returns unnecessary matches; for example, I will still get "John Doe" when I type john x.
|
[
"As explained in my comment above, prefix wildcards should be avoided at all costs as your index grows since that will force ES to do full index scans. I'm still convinced that ngrams (more precisely edge-ngrams) is the way to go, so I'm taking a stab at it below.\nThe idea is to index all the suffixes of the input and then use a prefix query to match any suffix as searching for prefixes doesn't suffer the same performance issues as searching for suffixes. So the idea is to index john doe as follows:\njohn doe\nohn doe\nhn doe\nn doe\ndoe\noe\ne\n\nThat way, using a prefix query we can match any sub-part of those tokens which effectively achieves the goal of matching partial contiguous words while at the same time ensuring good performance.\nThe definition of the index would go like this:\nPUT my_index\n{\n \"settings\": {\n \"index\": {\n \"analysis\": {\n \"analyzer\": {\n \"my_analyzer\": {\n \"type\": \"custom\",\n \"tokenizer\": \"keyword\",\n \"filter\": [\n \"lowercase\",\n \"reverse\",\n \"suffixes\",\n \"reverse\"\n ]\n }\n },\n \"filter\": {\n \"suffixes\": {\n \"type\": \"edgeNGram\",\n \"min_gram\": 1,\n \"max_gram\": 20\n }\n }\n }\n }\n },\n \"mappings\": {\n \"doc\": {\n \"properties\": {\n \"name\": {\n \"type\": \"text\",\n \"analyzer\": \"my_analyzer\",\n \"search_analyzer\": \"standard\"\n }\n }\n }\n }\n}\n\nThen we can index a sample document:\nPUT my_index/doc/1\n{\n \"name\": \"john doe\"\n}\n\nAnd finally all of the following searches will return the john doe document:\nPOST my_index/_search \n{\n \"query\": {\n \"prefix\": {\n \"name\": \"john doe\"\n }\n }\n}\n\nPOST my_index/_search \n{\n \"query\": {\n \"prefix\": {\n \"name\": \"john do\"\n }\n }\n}\n\nPOST my_index/_search \n{\n \"query\": {\n \"prefix\": {\n \"name\": \"ohn do\"\n }\n }\n}\n\nPOST my_index/_search \n{\n \"query\": {\n \"prefix\": {\n \"name\": \"john\"\n }\n }\n}\n\nPOST my_index/_search \n{\n \"query\": {\n \"prefix\": {\n \"name\": \"n doe\"\n }\n }\n}\n\n",
"This is what worked for me.\nInstead of an ngram, index your data as keyword.\nAnd use wildcard regex match to match the words.\n \"query\": {\n \"bool\": {\n \"should\": [\n {\n \"wildcard\": { \"Field1\": \"*\" + term + \"*\" }\n },\n {\n \"wildcard\": { \"Field2\": \"*\" + term + \"*\" }\n }\n ],\n \"minimum_should_match\": 1\n }\n }\n\n",
"Here is a updated fix\nlink to the code\nmore options with tokenizers\ncreate index with\n body = {\n \"settings\": {\n \"analysis\": {\n \"analyzer\": {\n \"autocomplete\": {\n \"tokenizer\": \"autocomplete\",\n \"filter\": [\n \"lowercase\"\n ]\n },\n \"autocomplete_search\": {\n \"tokenizer\": \"lowercase\"\n }\n },\n \"tokenizer\": {\n \"autocomplete\": {\n \"type\": \"edge_ngram\",\n \"min_gram\": 2,\n \"max_gram\": 10,\n \"token_chars\": [\n \"letter\"\n ]\n }\n }\n }\n },\n \"mappings\": {\n \"properties\": {\n \"title\": {\n \"type\": \"text\",\n \"analyzer\": \"autocomplete\",\n \"search_analyzer\": \"autocomplete_search\"\n }\n }\n }\n}\n\n"
] |
[
14,
2,
0
] |
[] |
[] |
[
"elasticsearch"
] |
stackoverflow_0050478023_elasticsearch.txt
|
Q:
how to extract data from the database and pass it to the function in Django
I'm a beginner Django user, please help me. I have multiple records in an sqlite3 data table. Please tell me how to read this data from the database in Django and pass it to the views.py function.
This is my models.py
class Value(models.Model):
capacity = models.FloatField('Емкость конденсатора')
amplitude = models.FloatField('Амплитуда')
frequency = models.FloatField('Частота')
This is my views.py
def voltage(array, a, c, w, tim):
    t = 0
    for i in range(100):
        array.append(c * a * math.sin(w * t - math.pi / 2))
        tim.append(t)
        t = t + 0.1

someArray = []
tim = []
voltage(someArray, a, c, tim)
In c I want to store the capacity, in a the amplitude, and in w the frequency.
I hope I can get data from the database into the views.py function
A:
#views.py

from .models import Value

# in the view
vals = Value.objects.all()
for v in vals:
    c = v.capacity  # and so on
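A fuller sketch of wiring one record into the voltage function (this assumes the import path above and uses the first Value row; names follow the question, and note the question's call omits the w argument, while this sketch passes all four):
import math
from .models import Value

def voltage(array, a, c, w, tim):
    t = 0
    for i in range(100):
        array.append(c * a * math.sin(w * t - math.pi / 2))
        tim.append(t)
        t = t + 0.1

val = Value.objects.first()  # one record from the table
someArray, tim = [], []
voltage(someArray, val.amplitude, val.capacity, val.frequency, tim)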
|
how to extract data from the database and pass it to the function in Django
|
I'm a beginner Django user, please help me. I have multiple records in an sqlite3 data table. Please tell me how to read this data from the database in Django and pass it to the views.py function.
This is my models.py
class Value(models.Model):
capacity = models.FloatField('Емкость конденсатора')
amplitude = models.FloatField('Амплитуда')
frequency = models.FloatField('Частота')
This is my views.py
def voltage(array, a, c, w, tim):
    t = 0
    for i in range(100):
        array.append(c * a * math.sin(w * t - math.pi / 2))
        tim.append(t)
        t = t + 0.1

someArray = []
tim = []
voltage(someArray, a, c, tim)
In c I want to store the capacity, in a the amplitude, and in w the frequency.
I hope I can get data from the database into the views.py function
|
[
"\n#views.py\n\nfrom models import Value\n\n#in the view\nvals = Value.objects.all()\nfor v in vals:\n c = v.capacity #and so on\n\n"
] |
[
1
] |
[] |
[] |
[
"django",
"python",
"sqlite"
] |
stackoverflow_0074662676_django_python_sqlite.txt
|
Q:
Unexpected result with Conv2D layer in Keras
I am running some tests on Conv2D layers in Keras and I don't understand one of the results I am getting.
I am running a simple example to understand what is happening. I take a test array and create a Conv2D layer with 2 filters output. I use simple 3*3 kernel of 1's.
I am expecting to have the 2 filters with the same output.
Here is my minimal code sample :
import tensorflow.keras as keras
import functools
from keras import layers
import tensorflow as tf
import tensorflow.keras as keras
import keras.layers as layers
import numpy as np
###define a simple test array
test_array = np.array([[2,2,2,1],[2,1,2,2],[2,2,2,2],[2,2,1,2]],dtype=np.float32)
###reshape to simulate a filter entry of a one channel conv2D layer
test_array = test_array.reshape((1,4,4,1))
###Create conv2Dlayer and initialize
standardConv = layers.Conv2D(filters=2,kernel_size=[3,3])
standardConv(np.ones([1,4,4,1],dtype=np.float32))
###set simple weights for testing
standardConv.set_weights([ np.ones([3,3,1,2]) , np.zeros([2]) ])
###apply convolution layer to test_array
standardConv(test_array)
The result I get is the following :
Out[46]:
<tf.Tensor: shape=(1, 2, 2, 2), dtype=float32, numpy=
array([[[[17., 17.],
[16., 16.]],
[[16., 16.],
[16., 16.]]]], dtype=float32)>
I don't understand the second filter result [[16., 16.], [16., 16.]]
What I was expecting was to see the two filters with the same result [[17,17],[16,16]] which corresponds to the convolution of my test_array with a 3x3 kernel of 1's.
The convolution weights are the same for the two filters, just ones (np.ones([3,3,1,2])) and they should be applied to the same input array as far as I understood so I am probably missing something.
Can somebody explain me how we get the second filter result and why it is not the same as the first in this case ?
A:
The layout is a bit misleading; both filters give the same correct result.
First filter:
print(standardConv(test_array)[:, :, :, 0])
Output:
tf.Tensor(
[[[17. 16.]
[16. 16.]]], shape=(1, 2, 2), dtype=float32)
Second filter:
print(standardConv(test_array)[:, :, :, 1])
Output:
tf.Tensor(
[[[17. 16.]
[16. 16.]]], shape=(1, 2, 2), dtype=float32)
When you use more filters, for example 5, you will get this output:
tf.Tensor(
[[[[17. 17. 17. 17. 17.]
[16. 16. 16. 16. 16.]]
[[16. 16. 16. 16. 16.]
[16. 16. 16. 16. 16.]]]], shape=(1, 2, 2, 5), dtype=float32)
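To see why the full printout looks confusing: the output layout is channels-last, shape (batch, height, width, filters), so the innermost pairs in the printed tensor are the two filter values for the same spatial position, not two positions of one filter. A minimal check, reusing the variables from the question:
import numpy as np

out = standardConv(test_array).numpy()  # shape (1, 2, 2, 2)
# slicing the last axis separates the filters; both are identical here
assert np.array_equal(out[..., 0], out[..., 1])
print(out[..., 0])  # [[[17. 16.] [16. 16.]]]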
|
Unexpected result with Conv2D layer in Keras
|
I am running some tests on Conv2D layers in Keras and I don't understand one of the results I am getting.
I am running a simple example to understand what is happening. I take a test array and create a Conv2D layer with 2 filters output. I use simple 3*3 kernel of 1's.
I am expecting to have the 2 filters with the same output.
Here is my minimal code sample :
import tensorflow.keras as keras
import functools
from keras import layers
import tensorflow as tf
import tensorflow.keras as keras
import keras.layers as layers
import numpy as np
###define a simple test array
test_array = np.array([[2,2,2,1],[2,1,2,2],[2,2,2,2],[2,2,1,2]],dtype=np.float32)
###reshape to simulate a filter entry of a one channel conv2D layer
test_array = test_array.reshape((1,4,4,1))
###Create conv2Dlayer and initialize
standardConv = layers.Conv2D(filters=2,kernel_size=[3,3])
standardConv(np.ones([1,4,4,1],dtype=np.float32))
###set simple weights for testing
standardConv.set_weights([ np.ones([3,3,1,2]) , np.zeros([2]) ])
###apply convolution layer to test_array
standardConv(test_array)
The result I get is the following :
Out[46]:
<tf.Tensor: shape=(1, 2, 2, 2), dtype=float32, numpy=
array([[[[17., 17.],
[16., 16.]],
[[16., 16.],
[16., 16.]]]], dtype=float32)>
I don't understand the second filter result [[16., 16.], [16., 16.]]
What I was expecting was to see the two filters with the same result [[17,17],[16,16]] which corresponds to the convolution of my test_array with a 3x3 kernel of 1's.
The convolution weights are the same for the two filters, just ones (np.ones([3,3,1,2])) and they should be applied to the same input array as far as I understood so I am probably missing something.
Can somebody explain me how we get the second filter result and why it is not the same as the first in this case ?
|
[
"The layout is a bit misleading; both filters give the same correct result.\nFirst filter:\nprint(standardConv(test_array)[:, :, :, 0])\n\nOutput:\ntf.Tensor(\n[[[17. 16.]\n [16. 16.]]], shape=(1, 2, 2), dtype=float32)\n\nSecond filter:\nprint(standardConv(test_array)[:, :, :, 1])\n\nOutput:\ntf.Tensor(\n[[[17. 16.]\n [16. 16.]]], shape=(1, 2, 2), dtype=float32)\n\nWhen you use more filters, for example 5, you will get this output:\ntf.Tensor(\n[[[[17. 17. 17. 17. 17.]\n [16. 16. 16. 16. 16.]]\n\n [[16. 16. 16. 16. 16.]\n [16. 16. 16. 16. 16.]]]], shape=(1, 2, 2, 5), dtype=float32)\n\n"
] |
[
0
] |
[] |
[] |
[
"conv_neural_network",
"deep_learning",
"keras",
"python_3.x",
"tensorflow"
] |
stackoverflow_0074658992_conv_neural_network_deep_learning_keras_python_3.x_tensorflow.txt
|
Q:
CSV using '-' as NULL. Error to convert column to INT
I have a CSV
df = pd.read_csv('data.csv')
Table:
Column A
Column B
Column C
4068744
-1472525
2596219
198366
-
-
The file uses '-' for null values
I tried converting to int without handling that '-'.
My question is: how do I strip the string '-' without changing the negative values?
df['Column B'] = df['Column B'].astype(int)
ValueError: invalid literal for int() with base 10: '-'
A:
Higher version of pandas can hold integer dtypes with missing values. Normal int conversion doesn't support null values.
# replace - with null
df.replace('-', pd.NA, inplace=True)
# and use Int surrounding with ''
df['Column B'] = df['Column B'].astype('Int64')
output:
> df
Column A Column B Column C
0 4068744 -1472525 2596219
1 198366 <NA> <NA>
> df['Column B'].info
Name: Column B, dtype: Int64>
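Alternatively, pandas can treat '-' as missing already at read time via the na_values parameter, which avoids the separate replace step; a minimal sketch:
import pandas as pd

# parse '-' as NaN, then use the nullable integer dtype
df = pd.read_csv('data.csv', na_values='-')
df['Column B'] = df['Column B'].astype('Int64')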
|
CSV using '-' as NULL. Error to convert column to INT
|
I have a CSV
df = pd.read_csv('data.csv')
Table:
Column A
Column B
Column C
4068744
-1472525
2596219
198366
-
-
The file uses '-' for null values
I tried converting to int without handling that '-'.
My question is: how do I strip the string '-' without changing the negative values?
df['Column B'] = df['Column B'].astype(int)
ValueError: invalid literal for int() with base 10: '-'
|
[
"Higher version of pandas can hold integer dtypes with missing values. Normal int conversion doesn't support null values.\n# replace - with null\ndf.replace('-', pd.NA, inplace=True)\n# and use Int surrounding with ''\ndf['Column B'] = df['Column B'].astype('Int64')\n\noutput:\n> df\n\n Column A Column B Column C\n0 4068744 -1472525 2596219\n1 198366 <NA> <NA>\n\n> df['Column B'].info\n\nName: Column B, dtype: Int64>\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"nul",
"pandas",
"python"
] |
stackoverflow_0074663597_dataframe_nul_pandas_python.txt
|
Q:
Karate Framework handling radio button
I am trying to assert that a radio button is selected by default when the page is loaded. Does anyone have an idea how I can do it?
I am automating a web application with the Karate framework and I am not sure how to handle radio buttons.
A:
A lot depends on the HTML.
But this may work:
* def val = value('.my-locator')
* match val == 'on'
Note that value(), attribute() and even html() can be used to get ALL data about the HTML element. Refer the docs: https://github.com/karatelabs/karate/tree/master/karate-core#value
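If value() does not return anything useful for your HTML, another option is to read the element's checked property via script(); a sketch, where '#myRadio' is a placeholder locator:
* def isChecked = script('#myRadio', '_.checked')
* match isChecked == true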
|
Karate Framework handling radio button
|
I am trying to assert that a radio button is selected by default when the page is loaded. Does anyone have an idea how I can do it?
I am automating a web application with the Karate framework and I am not sure how to handle radio buttons.
|
[
"A lot depends on the HTML.\nBut this may work:\n* def val = value('.my-locator')\n* match val == 'on'\n\nNote that value(), attribute() and even html() can be used to get ALL data about the HTML element. Refer the docs: https://github.com/karatelabs/karate/tree/master/karate-core#value\n"
] |
[
0
] |
[] |
[] |
[
"automation",
"karate",
"radio_button",
"selenium_webdriver",
"user_interface"
] |
stackoverflow_0074662391_automation_karate_radio_button_selenium_webdriver_user_interface.txt
|
Q:
How do I fix TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'?
I'm trying to remake Tic-Tac-Toe in Python. But it won't work.
I tried
`
game_board = ['_'] * 9
print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
print(game_board[3]) + ' | ' + (game_board[4]) + ' | ' + (game_board[5])
print(game_board[6]) + ' | ' + (game_board[7]) + ' | ' + (game_board[8])
`
but it returns
`
Traceback (most recent call last):
File "C:\Users\username\PycharmProjects\pythonProject\tutorial.py", line 2, in <module>
print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
~~~~~~~~~~~~~~~~~~~~~^~~~~~~
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
`
A:
Is this you want..!?
Code:-
game_board = ['_']*9
print(game_board[0]+" | "+(game_board[1])+' | '+(game_board[2]))
print(game_board[3]+' | '+(game_board[4])+' | '+(game_board[5]))
print(game_board[6]+' | '+(game_board[7])+' | '+(game_board[8]))
Output:-
_ | _ | _
_ | _ | _
_ | _ | _
A:
This is because you put the parenthesis wrongly. It should be
game_board = ['_'] * 9
print(game_board[0] + " | " + (game_board[1]) + ' | ' + (game_board[2]))
print(game_board[3] + ' | ' + (game_board[4]) + ' | ' + (game_board[5]))
print(game_board[6] + ' | ' + (game_board[7]) + ' | ' + (game_board[8]))
A:
Please look at the error carefully to find your answer.
print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
~~~~~~~~~~~~~~~~~~~~~^~~~~~~
you have closed the bracket for game_board[0]. An additional '(' is to be used.
print( (game_board[0]) + " | " .....
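The root cause is that print() returns None, so concatenating strings onto its return value fails. With the parentheses fixed, a slightly cleaner sketch using f-strings avoids the long concatenations entirely:
game_board = ['_'] * 9
for row in range(3):
    # print one row of the 3x3 board
    print(f"{game_board[3*row]} | {game_board[3*row + 1]} | {game_board[3*row + 2]}")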
|
How do I fix TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'?
|
I'm trying to remake Tic-Tac-Toe in Python. But it won't work.
I tried
`
game_board = ['_'] * 9
print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
print(game_board[3]) + ' | ' + (game_board[4]) + ' | ' + (game_board[5])
print(game_board[6]) + ' | ' + (game_board[7]) + ' | ' + (game_board[8])
`
but it returns
`
Traceback (most recent call last):
File "C:\Users\username\PycharmProjects\pythonProject\tutorial.py", line 2, in <module>
print(game_board[0]) + " | " + (game_board[1]) + ' | ' + (game_board[2])
~~~~~~~~~~~~~~~~~~~~~^~~~~~~
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
`
|
[
"Is this you want..!?\nCode:-\ngame_board = ['_']*9\nprint(game_board[0]+\" | \"+(game_board[1])+' | '+(game_board[2]))\nprint(game_board[3]+' | '+(game_board[4])+' | '+(game_board[5]))\nprint(game_board[6]+' | '+(game_board[7])+' | '+(game_board[8]))\n\nOutput:-\n_ | _ | _\n_ | _ | _\n_ | _ | _\n\n",
"This is because you put the parenthesis wrongly. It should be\ngame_board = ['_'] * 9\nprint(game_board[0] + \" | \" + (game_board[1]) + ' | ' + (game_board[2]))\nprint(game_board[3] + ' | ' + (game_board[4]) + ' | ' + (game_board[5]))\nprint(game_board[6] + ' | ' + (game_board[7]) + ' | ' + (game_board[8]))\n\n",
"Please look at the error carefully to find your answer.\nprint(game_board[0]) + \" | \" + (game_board[1]) + ' | ' + (game_board[2])\n~~~~~~~~~~~~~~~~~~~~~^~~~~~~\n\nyou have closed the bracket for game_board[0]. An additional '(' is to be used.\nprint( (game_board[0]) + \" | \" .....\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074663591_python.txt
|
Q:
Powershell try-catch VSCode intellisense syntax question
I haven't seen this syntax before, and it is hard to google this, so I'll ask it here: what does {1: mean in
try {
}
catch {
{1:<#Do this if a terminating exception happens#>}
}
A:
There is a bug in the try-catch snippet, resulting in bad text being generated.
It should be:
try {
}
catch {
<#Do this if a terminating exception happens#>
}
Source:
https://github.com/PowerShell/vscode-powershell/issues/3964
|
Powershell try-catch VSCode intellisense syntax question
|
I haven't seen this syntax before, and it is hard to google this, so I'll ask it here: what does {1: mean in
try {
}
catch {
{1:<#Do this if a terminating exception happens#>}
}
|
[
"There is a bug in the try-catch snippet, resulting in bad text being generated.\nIt should be:\ntry {\n \n}\ncatch {\n <#Do this if a terminating exception happens#>\n}\n\nSource:\nhttps://github.com/PowerShell/vscode-powershell/issues/3964\n"
] |
[
0
] |
[] |
[] |
[
"powershell",
"try_catch"
] |
stackoverflow_0073274435_powershell_try_catch.txt
|
Q:
I'm getting a Compiler Warning Message that I don't understand?
Please help. I typed the following code, and I am getting a Compiler Warning message C4430:
missing type specifier - int assumed
Can anyone tell me if I am doing something wrong?
#include <iostream>
using namespace std;
main()
{
int age = 15;
cout << age << endl;
if (age >= 16)
{
cout << "You can drive" << endl;
}
else
{
cout << "You cannot drive yet!" << endl;
}
for (int i = 1; i <= age; i++)
{
cout << "Happy Birthday " << i << endl;
}
return 0;
}
I went step-by-step and retraced my code, but I am unable to find the error.
A:
Your main() function has no return type. According to the C++ standard, it must be declared with return type int.
Reference: §6.9.3.1
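Concretely, the fix is to declare the return type explicitly:
int main()
{
    // ... body unchanged ...
    return 0;
}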
|
I'm getting a Compiler Warning Message that I don't understand?
|
Please help. I typed the following code, and I am getting a Compiler Warning message C4430:
missing type specifier - int assumed
Can anyone tell me if I am doing something wrong?
#include <iostream>
using namespace std;
main()
{
int age = 15;
cout << age << endl;
if (age >= 16)
{
cout << "You can drive" << endl;
}
else
{
cout << "You cannot drive yet!" << endl;
}
for (int i = 1; i <= age; i++)
{
cout << "Happy Birthday " << i << endl;
}
return 0;
}
I went step-by-step and retraced my code, but I am unable to find the error.
|
[
"Your main() function has no return type.According to the C++ standard, it must have a return value with int type.\nReference: §6.9.3.1\n"
] |
[
1
] |
[] |
[] |
[
"c++",
"compiler_warnings"
] |
stackoverflow_0074663635_c++_compiler_warnings.txt
|
Q:
How to dynamically signal to Rust compiler that a given variable is non-zero?
I'd like to try to eliminate bounds checking on code generated by Rust. I have variables that are rarely zero and my code paths ensure they do not run into trouble. But because they can be, I cannot use NonZeroU64. When I am sure they are non-zero, how can I signal this to the compiler?
For example, if I have the following function, I know it will be non-zero. Can I tell the compiler this or do I have to have the unnecessary check?
pub fn f(n:u64) -> u32 {
n.trailing_zeros()
}
I can wrap the number in NonZeroU64 when I am sure, but then I've already incurred the check, which defeats the purpose ...
A:
Redundant checks within a single function body can usually be optimized out. So you just need convert the number to NonZeroU64 before calling trailing_zeros(), and rely on the compiler to optimize the bound checks.
use std::num::NonZeroU64;
pub fn g(n: NonZeroU64) -> u32 {
n.trailing_zeros()
}
pub fn other_fun(n: u64) -> u32 {
if n != 0 {
println!("Do something with non-zero!");
let n = NonZeroU64::new(n).unwrap();
g(n)
} else {
42
}
}
In the above code, the if n != 0 makes sure n cannot be zero within the block, and compiler is smart enough to remove the unwrap call, making NonZeroU64::new(n).unwrap() an zero-cost operation. You can check the asm to verify that.
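If you have already established the invariant elsewhere and want to skip even that single check, there is also the unsafe constructor; a sketch, and note that passing zero here is undefined behavior, so this is only sound if the non-zero guarantee truly holds:
use std::num::NonZeroU64;

pub fn f(n: u64) -> u32 {
    // SAFETY: the caller guarantees n != 0
    let nz = unsafe { NonZeroU64::new_unchecked(n) };
    nz.trailing_zeros()
}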
|
How to dynamically signal to Rust compiler that a given variable is non-zero?
|
I'd like to try to eliminate bounds checking on code generated by Rust. I have variables that are rarely zero and my code paths ensure they do not run into trouble. But because they can be, I cannot use NonZeroU64. When I am sure they are non-zero, how can I signal this to the compiler?
For example, if I have the following function, I know it will be non-zero. Can I tell the compiler this or do I have to have the unnecessary check?
pub fn f(n:u64) -> u32 {
n.trailing_zeros()
}
I can wrap the number in NonZeroU64 when I am sure, but then I've already incurred the check, which defeats the purpose ...
|
[
"Redundant checks within a single function body can usually be optimized out. So you just need convert the number to NonZeroU64 before calling trailing_zeros(), and rely on the compiler to optimize the bound checks.\nuse std::num::NonZeroU64;\n\npub fn g(n: NonZeroU64) -> u32 {\n n.trailing_zeros()\n}\n\npub fn other_fun(n: u64) -> u32 {\n if n != 0 {\n println!(\"Do something with non-zero!\");\n let n = NonZeroU64::new(n).unwrap();\n g(n)\n } else {\n 42\n }\n}\n\nIn the above code, the if n != 0 makes sure n cannot be zero within the block, and compiler is smart enough to remove the unwrap call, making NonZeroU64::new(n).unwrap() an zero-cost operation. You can check the asm to verify that.\n"
] |
[
0
] |
[
"core::intrinsics::assume\n\nInforms the optimizer that a condition is always true. If the\ncondition is false, the behavior is undefined.\nNo code is generated for this intrinsic, but the optimizer will try to\npreserve it (and its condition) between passes, which may interfere\nwith optimization of surrounding code and reduce performance. It\nshould not be used if the invariant can be discovered by the optimizer\non its own, or if it does not enable any significant optimizations.\nThis intrinsic does not have a stable counterpart.\n\n"
] |
[
-1
] |
[
"rust"
] |
stackoverflow_0074663148_rust.txt
|
Q:
regex_replace is returning empty string
I am trying to remove all characters that are not digit, dot (.), plus/minus sign (+/-) with empty character/string for float conversion.
When I pass my string through regex_replace function I am returned an empty string.
I believe something is wrong with my regex expression std::regex reg_exp("\\D|[^+-.]")
Code
#include <iostream>
#include <regex>
int main()
{
std::string temporary_recieve_data = " S S +456.789 tg\r\n";
std::string::size_type sz;
const std::regex reg_exp("\\D|[^+-.]"); // matches not digit, decimal point (.), plus sign, minus sign
std::string numeric_string = std::regex_replace(temporary_recieve_data, reg_exp, ""); //replace the character that are not digit, dot (.), plus-minus sign (+,-) with empty character/string for float conversion
std::cout << "Numeric String : " << numeric_string << std::endl;
if (numeric_string.empty())
{
return 0;
}
float data_value = std::stof(numeric_string, &sz);
std::cout << "Float Value : " << data_value << std::endl;
return 0;
}
I have been trying to evaluate my regex expression on regex101.com for the past 2 days, but I am unable to figure out where I am wrong with my regular expression. When I just put \D, the editor substitutes non-digit characters properly, but as soon as I add an OR condition | for not dot . or plus + or minus - sign, the editor returns an empty string.
A:
The string is empty because your regex matches each character.
\D already matches every character that is not a digit.
So plus, hyphen and the period thus far are consumed.
And digits get consumed by the negated class: [^+-.]
Further the hyphen indicates a range inside a character class.
Either escape it or put it at the start or end of the char-class.
(funnily the used range +-. 43-46 even contained a hyphen)
Remove the alternation with \D and put \d into the negated class:
[^\d.+-]+
See this demo at regex101 (attaching + for one or more is efficient)
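Applied to the code from the question, only the pattern changes:
const std::regex reg_exp("[^\\d.+-]+"); // strip everything that is not a digit, '.', '+' or '-'
std::string numeric_string = std::regex_replace(temporary_recieve_data, reg_exp, "");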
A:
The regex_replace function in C++ is used to search for a regular expression in a string and replace it with a different string. If the function is returning an empty string, it could be because the regular expression being searched for was not found in the input string, or because there was an error in the regular expression itself.
Here is an example of how the regex_replace function might be used:
#include <iostream>
#include <regex>
using namespace std;
int main() {
string input = "Hello, world!";
regex pattern("world");
string result = regex_replace(input, pattern, "there");
cout << result << endl; // Outputs "Hello, there!"
return 0;
}
If the regex_replace function is returning an empty string in your code, you can try the following steps to troubleshoot the issue:
Verify that the input string actually contains the text that you are searching for with the regular expression. You can do this by printing the input string to the console or using a debugger to inspect its value.
Check the regular expression itself to make sure that it is valid and matches the text that you are trying to replace. Regular expressions can be tricky to get right, so it's worth double-checking to make sure there are no syntax errors or other issues with the pattern.
If the regular expression is correct and the input string contains the text you are searching for, it's possible that there is an error in your code that is causing the regex_replace function to return an empty string. In this case, you can try using a debugger to step through your code and see what is happening at the point where the regex_replace function is called. This can help you identify the root cause of the problem and fix it.
|
regex_replace is returning empty string
|
I am trying to remove all characters that are not digit, dot (.), plus/minus sign (+/-) with empty character/string for float conversion.
When I pass my string through regex_replace function I am returned an empty string.
I believe something is wrong with my regex expression std::regex reg_exp("\\D|[^+-.]")
Code
#include <iostream>
#include <regex>
int main()
{
std::string temporary_recieve_data = " S S +456.789 tg\r\n";
std::string::size_type sz;
const std::regex reg_exp("\\D|[^+-.]"); // matches not digit, decimal point (.), plus sign, minus sign
std::string numeric_string = std::regex_replace(temporary_recieve_data, reg_exp, ""); //replace the character that are not digit, dot (.), plus-minus sign (+,-) with empty character/string for float conversion
std::cout << "Numeric String : " << numeric_string << std::endl;
if (numeric_string.empty())
{
return 0;
}
float data_value = std::stof(numeric_string, &sz);
std::cout << "Float Value : " << data_value << std::endl;
return 0;
}
I have been trying to evaluate my regex expression on regex101.com for the past 2 days, but I am unable to figure out where I am wrong with my regular expression. When I just put \D, the editor substitutes non-digit characters properly, but as soon as I add an OR condition | for not dot . or plus + or minus - sign, the editor returns an empty string.
|
[
"The string is empty because your regex matches each character.\n\n\\D already matches every character that is not a digit.\nSo plus, hyphen and the period thus far are consumed.\nAnd digits get consumed by the negated class: [^+-.]\nFurther the hyphen indicates a range inside a character class.\nEither escape it or put it at the start or end of the char-class.\n(funnily the used range +-. 43-46 even contained a hyphen)\n\nRemove the alternation with \\D and put \\d into the negated class:\n[^\\d.+-]+\n\nSee this demo at regex101 (attaching + for one or more is efficient)\n",
"The regex_replace function in C++ is used to search for a regular expression in a string and replace it with a different string. If the function is returning an empty string, it could be because the regular expression being searched for was not found in the input string, or because there was an error in the regular expression itself.\nHere is an example of how the regex_replace function might be used:\n#include <iostream>\n#include <regex>\n\nusing namespace std;\n\nint main() {\n string input = \"Hello, world!\";\n regex pattern(\"world\");\n\n string result = regex_replace(input, pattern, \"there\");\n cout << result << endl; // Outputs \"Hello, there!\"\n\n return 0;\n}\n\nIf the regex_replace function is returning an empty string in your code, you can try the following steps to troubleshoot the issue:\nVerify that the input string actually contains the text that you are searching for with the regular expression. You can do this by printing the input string to the console or using a debugger to inspect its value.\nCheck the regular expression itself to make sure that it is valid and matches the text that you are trying to replace. Regular expressions can be tricky to get right, so it's worth double-checking to make sure there are no syntax errors or other issues with the pattern.\nIf the regular expression is correct and the input string contains the text you are searching for, it's possible that there is an error in your code that is causing the regex_replace function to return an empty string. In this case, you can try using a debugger to step through your code and see what is happening at the point where the regex_replace function is called. This can help you identify the root cause of the problem and fix it.\n"
] |
[
3,
2
] |
[] |
[] |
[
"c++",
"regex"
] |
stackoverflow_0074663376_c++_regex.txt
|
Q:
How do I make my C program run in the background on Windows?
I need my C program to run in the background, so without any open window or without blocking the terminal if run from there.
I can't find much info on how to do it online.
edit: To do what i needed, i just added -mwindows to the gcc command.
A:
To make a C program run in the background on Windows, you can use the start command in the command prompt. This will start the program in a separate window and allow you to continue using the command prompt while the program is running.
Here is an example of how to use the start command to run a C program in the background:
Open the command prompt by pressing the Windows key + R on your keyboard, typing "cmd" into the run dialog, and pressing Enter.
Navigate to the directory that contains the C program you want to run. For example, if the program is in the "C:\Programs" folder, you would type cd C:\Programs and press Enter.
Run the start command followed by the name of the C program. For example, if the program is named "myprogram.exe", you would type start myprogram.exe and press Enter. This will start the program in a separate window and return you to the command prompt.
To check that the program is running, you can use the tasklist command. This will show you a list of all the programs currently running on your system, including the C program you just started.
Note that this method will not actually make the C program run as a true background process. The program will still have its own window and be visible in the taskbar, and it will continue to run even if you close the command prompt. However, it will allow you to continue using the command prompt while the program is running, which can be useful in certain situations.
A:
Windows supports two program types; GUI and console.
Console applications automatically get a console window if the parent process does not already have one.
GUI applications do not get a console and they can have 0, 1 or multiple windows. If you don't create a window your application is basically only visible in Task manager. GUI applications typically use WinMain as the startup function instead of main. Notepad.exe is a GUI application that creates a window.
You need to tell the compiler/linker that you are creating a GUI program. If you are using Visual Studio, it probably has a project template you can use.
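A minimal GUI-subsystem sketch matching the -mwindows flag from the question's edit (MinGW assumed); no window is created, so nothing appears on screen:
#include <windows.h>

// build with: gcc main.c -mwindows
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                   LPSTR lpCmdLine, int nCmdShow)
{
    /* do background work here */
    return 0;
}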
|
How do I make my C program run in the background on Windows?
|
I need my C program to run in the background, so without any open window or without blocking the terminal if run from there.
I can't find much info on how to do it online.
edit: To do what I needed, I just added -mwindows to the gcc command.
|
[
"To make a C program run in the background on Windows, you can use the start command in the command prompt. This will start the program in a separate window and allow you to continue using the command prompt while the program is running.\nHere is an example of how to use the start command to run a C program in the background:\nOpen the command prompt by pressing the Windows key + R on your keyboard, typing \"cmd\" into the run dialog, and pressing Enter.\nNavigate to the directory that contains the C program you want to run. For example, if the program is in the \"C:\\Programs\" folder, you would type cd C:\\Programs and press Enter.\nRun the start command followed by the name of the C program. For example, if the program is named \"myprogram.exe\", you would type start myprogram.exe and press Enter. This will start the program in a separate window and return you to the command prompt.\nTo check that the program is running, you can use the tasklist command. This will show you a list of all the programs currently running on your system, including the C program you just started.\nNote that this method will not actually make the C program run as a true background process. The program will still have its own window and be visible in the taskbar, and it will continue to run even if you close the command prompt. However, it will allow you to continue using the command prompt while the program is running, which can be useful in certain situations.\n",
"Windows supports two program types; GUI and console.\nConsole applications automatically get a console window if the parent process does not already have one.\nGUI applications do not get a console and they can have 0, 1 or multiple windows. If you don't create a window your application is basically only visible in Task manager. GUI applications typically use WinMain as the startup function instead of main. Notepad.exe is a GUI application that creates a window.\nYou need to tell the compiler/linker that you are creating a GUI program. If you are using Visual Studio, it probably has a project template you can use.\n"
] |
[
1,
0
] |
[] |
[] |
[
"background",
"c",
"windows"
] |
stackoverflow_0074663502_background_c_windows.txt
|
Q:
How would I add search bar to off canvas menu?
Hi how would I be able to add a search bar to an off canvas menu that doesn't close the menu when the searchbar is clicked on?
A:
(Next time you can show us your code so we can edit your code)
I have an example here from @W3Schools. Maybe it will help you.
@W3Schools
Click to open their website
You can replace the <a href="#">Example</a> with <a href="#"><input type="text"></a>:
<!DOCTYPE html>
<html>
<head>
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: "Lato", sans-serif;
}
.sidenav {
height: 100%;
width: 0;
position: fixed;
z-index: 1;
top: 0;
left: 0;
background-color: #111;
overflow-x: hidden;
transition: 0.5s;
padding-top: 60px;
}
.sidenav a {
padding: 8px 8px 8px 32px;
text-decoration: none;
font-size: 25px;
color: #818181;
display: block;
transition: 0.3s;
}
.sidenav a:hover {
color: #f1f1f1;
}
.sidenav .closebtn {
position: absolute;
top: 0;
right: 25px;
font-size: 36px;
margin-left: 50px;
}
#main {
transition: margin-left .5s;
padding: 16px;
}
@media screen and (max-height: 450px) {
.sidenav {padding-top: 15px;}
.sidenav a {font-size: 18px;}
}
</style>
</head>
<body>
<div id="mySidenav" class="sidenav">
<a href="javascript:void(0)" class="closebtn" onclick="closeNav()">×</a>
<a href="#"><input type="search" placeholder="Search"></a>
</div>
<div id="main">
<h2>Sidenav Push Example</h2>
<p>Click on the element below to open the side navigation menu, and push this content to the right.</p>
<span style="font-size:30px;cursor:pointer" onclick="openNav()">☰ open</span>
</div>
<script>
function openNav() {
document.getElementById("mySidenav").style.width = "250px";
document.getElementById("main").style.marginLeft = "250px";
}
function closeNav() {
document.getElementById("mySidenav").style.width = "0";
document.getElementById("main").style.marginLeft= "0";
}
</script>
</body>
</html>
Happy Coding!
A:
I'm having the same issue as OP and this is my code:
<div class="offcanvas offcanvas-end" tabindex="-1" id="offcanvasRight" aria-labelledby="offcanvasRightLabel">
<div class="offcanvas-header">
<h5 class="offcanvas-title" id="offcanvasRightLabel">offcanvas</h5>
<button type="button" class="btn-close" data-bs-dismiss="offcanvas" aria-label="Close"></button>
</div>
<div class="offcanvas-body">
<form class="d-flex" role="search" method="get">
<input class="form-control me-2" type="text" placeholder="Search" aria-label="Search" name="search" value="<?php if(isset($_GET['search'])){echo $_GET['']; }?>" >
<button class="btn btn-outline-success" type="submit" name="button">Search</button>
</form>
</div>
</div>
|
How would I add search bar to off canvas menu?
|
Hi how would I be able to add a search bar to an off canvas menu that doesn't close the menu when the searchbar is clicked on?
|
[
"(Next time you can show us your code so we can edit your code)\nI have an example here from @W3Schools. Maybe it will help you.\n@W3Schools\nClick to open their website\nYou can replace the <a href=\"#\">Example</a> with <a href=\"#\"><input type=\"text\"></a>:\n\n\n<!DOCTYPE html>\n<html>\n<head>\n<meta name=\"viewport\" content=\"width=device-width, initial-scale=1\">\n<style>\nbody {\n font-family: \"Lato\", sans-serif;\n}\n\n.sidenav {\n height: 100%;\n width: 0;\n position: fixed;\n z-index: 1;\n top: 0;\n left: 0;\n background-color: #111;\n overflow-x: hidden;\n transition: 0.5s;\n padding-top: 60px;\n}\n\n.sidenav a {\n padding: 8px 8px 8px 32px;\n text-decoration: none;\n font-size: 25px;\n color: #818181;\n display: block;\n transition: 0.3s;\n}\n\n.sidenav a:hover {\n color: #f1f1f1;\n}\n\n.sidenav .closebtn {\n position: absolute;\n top: 0;\n right: 25px;\n font-size: 36px;\n margin-left: 50px;\n}\n\n#main {\n transition: margin-left .5s;\n padding: 16px;\n}\n\n@media screen and (max-height: 450px) {\n .sidenav {padding-top: 15px;}\n .sidenav a {font-size: 18px;}\n}\n</style>\n</head>\n<body>\n\n<div id=\"mySidenav\" class=\"sidenav\">\n <a href=\"javascript:void(0)\" class=\"closebtn\" onclick=\"closeNav()\">×</a>\n <a href=\"#\"><input type=\"search\" placeholder=\"Search\"></a>\n</div>\n\n<div id=\"main\">\n <h2>Sidenav Push Example</h2>\n <p>Click on the element below to open the side navigation menu, and push this content to the right.</p>\n <span style=\"font-size:30px;cursor:pointer\" onclick=\"openNav()\">☰ open</span>\n</div>\n\n<script>\nfunction openNav() {\n document.getElementById(\"mySidenav\").style.width = \"250px\";\n document.getElementById(\"main\").style.marginLeft = \"250px\";\n}\n\nfunction closeNav() {\n document.getElementById(\"mySidenav\").style.width = \"0\";\n document.getElementById(\"main\").style.marginLeft= \"0\";\n}\n</script>\n \n</body>\n</html> \n\n\n\nHappy Coding!\n",
"I'm having the same issue as OP and this is my code:\n<div class=\"offcanvas offcanvas-end\" tabindex=\"-1\" id=\"offcanvasRight\" aria-labelledby=\"offcanvasRightLabel\">\n <div class=\"offcanvas-header\">\n <h5 class=\"offcanvas-title\" id=\"offcanvasRightLabel\">offcanvas</h5>\n <button type=\"button\" class=\"btn-close\" data-bs-dismiss=\"offcanvas\" aria-label=\"Close\"></button>\n </div>\n <div class=\"offcanvas-body\">\n\n <form class=\"d-flex\" role=\"search\" method=\"get\">\n <input class=\"form-control me-2\" type=\"text\" placeholder=\"Search\" aria-label=\"Search\" name=\"search\" value=\"<?php if(isset($_GET['search'])){echo $_GET['']; }?>\" >\n <button class=\"btn btn-outline-success\" type=\"submit\" name=\"button\">Search</button>\n </form>\n\n </div>\n</div>\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"css",
"html",
"javascript",
"menu",
"searchbar"
] |
stackoverflow_0066025510_css_html_javascript_menu_searchbar.txt
|
Q:
I want to run karate as a stand-alone executabl (or jar) and I want to configure a logModifier. How can I do that?
I downloaded Karate as a stand alone executable, and it's working well.
I want to mask sensitive headers in the 'log' output. I gather that the way to do that is via configuring a logModifier.
But, I can't figure out to make it effective.
I tried the following:
put this snippet in my karate-config.js :
var LM = Java.type('demo.headers.DemoLogModifier');
karate.configure('logModifier', LM.INSTANCE);
in the root of where the karate stand-alone binary exists, i made the directory src/demo/headers/, and then put my DemoLogModifier.java in that dir. I also tried putting the .java in a different tree: src/test/java/demo/headers. I also tried putting in that same directory hierarchy at the root of where my *.feature files are. In every case, karate fails every Scenario, saying "TypeError: Access to host class demo.headers.DemoLogModifier is not allowed or does not exist".
Is this possible to do with the stand alone executable? If so, how?
If not, how then? I'm a java newbie, and i can't tell from the documentation how to make my own stand-alone jar and include my custom DemoLogModifier.java class.
Thanks!
A:
Yes, when you need to use a LogModifier, it requires you to build Java code.
Can you refer to this answer, it should get you on your way: https://stackoverflow.com/a/56458094/143475
The good thing is that the Java side is a one-time effort, so do consider getting the help of someone else just for that part. Then you will be up and running.
Feel free to submit a feature request for making this easier for the standalone JAR in the future.
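For reference, a Java sketch along the lines of the demo in the Karate repository; the interface is com.intuit.karate.http.HttpLogModifier, but method names may differ slightly across Karate versions, so treat this as an outline rather than a drop-in:
package demo.headers;

import com.intuit.karate.http.HttpLogModifier;

public class DemoLogModifier implements HttpLogModifier {

    public static final DemoLogModifier INSTANCE = new DemoLogModifier();

    @Override
    public boolean enableForUri(String uri) {
        return true; // apply to all requests
    }

    @Override
    public String uri(String uri) {
        return uri;
    }

    @Override
    public String header(String header, String value) {
        // mask sensitive headers
        return header.toLowerCase().contains("authorization") ? "***" : value;
    }

    @Override
    public String request(String uri, String request) {
        return request;
    }

    @Override
    public String response(String uri, String response) {
        return response;
    }
}

The compiled class then has to be on the classpath when the standalone JAR runs, e.g. java -cp 'karate.jar:myclasses' com.intuit.karate.Main ..., so that the Java.type() call in karate-config.js can find it.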
|
I want to run karate as a stand-alone executabl (or jar) and I want to configure a logModifier. How can I do that?
|
I downloaded Karate as a stand alone executable, and it's working well.
I want to mask sensitive headers in the 'log' output. I gather that the way to do that is via configuring a logModifier.
But, I can't figure out to make it effective.
I tried the following:
put this snippet in my karate-config.js :
var LM = Java.type('demo.headers.DemoLogModifier');
karate.configure('logModifier', LM.INSTANCE);
in the root of where the karate stand-alone binary exists, i made the directory src/demo/headers/, and then put my DemoLogModifier.java in that dir. I also tried putting the .java in a different tree: src/test/java/demo/headers. I also tried putting in that same directory hierarchy at the root of where my *.feature files are. In every case, karate fails every Scenario, saying "TypeError: Access to host class demo.headers.DemoLogModifier is not allowed or does not exist".
Is this possible to do with the stand alone executable? If so, how?
If not, how then? I'm a java newbie, and i can't tell from the documentation how to make my own stand-alone jar and include my custom DemoLogModifier.java class.
Thanks!
|
[
"Yes, when you need to use a LogModifier, it requires you to build Java code.\nCan you refer to this answer, it should get you on your way: https://stackoverflow.com/a/56458094/143475\nThe good thing is that the Java side is a one-time effort, so do consider getting the help of someone else just for that part. Then you will be up and running.\nFeel free to submit a feature request for making this easier for the standalone JAR in the future.\n"
] |
[
0
] |
[] |
[] |
[
"java",
"karate"
] |
stackoverflow_0074660933_java_karate.txt
|
Q:
SwiftUI transition animation is only working when I use a deprecated API
I'm trying to get a very simple transition animation to work in SwiftUI. The transition works perfectly when I include the following:
.animation(.linear)
But when I do that I get the following warning message:
'animation' was deprecated in iOS 15.0: Use withAnimation or animation(_:value:) instead
I am already using withAnimation in my code (see below).
struct ShowAnimationProblemView: View {
@State var showingSubView = false
var body: some View {
VStack {
Text("Animation Transition Problem").padding()
Button("Show subView") {
withAnimation {
showingSubView.toggle()
}
}.padding()
if showingSubView {
VStack {
Text("This is")
Text("the")
Text("subView")
}
.padding(50)
.border(.red)
.transition(.slide)
}
Spacer()
}
.animation(.linear)
}
}
A picture of the view
The code, as shown above works perfectly, but shows the deprecation warning. When you tap the button, the subView slides in from the left. When you tap it again, the subView slides out to the right.
If I comment out the .animation(.linear) statement, then the first time I tap the button, the subView instantly appears with no animation. Then if I tap it again, it nicely slides out to the right.
How can I get this simple transition to work correctly without using the deprecated API?
A:
Replace the deprecated version:
.animation(.linear)
with this:
.animation(.linear, value: showingSubView)
Note: The animation works without the withAnimation { }.
|
SwiftUI transition animation is only working when I use a deprecated API
|
I'm trying to get a very simple transition animation to work in SwiftUI. The transition works perfectly when I include the following:
.animation(.linear)
But when I do that I get the following warning message:
'animation' was deprecated in iOS 15.0: Use withAnimation or animation(_:value:) instead
I am already using withAnimation in my code (see below).
struct ShowAnimationProblemView: View {
@State var showingSubView = false
var body: some View {
VStack {
Text("Animation Transition Problem").padding()
Button("Show subView") {
withAnimation {
showingSubView.toggle()
}
}.padding()
if showingSubView {
VStack {
Text("This is")
Text("the")
Text("subView")
}
.padding(50)
.border(.red)
.transition(.slide)
}
Spacer()
}
.animation(.linear)
}
}
A picture of the view
The code, as shown above works perfectly, but shows the deprecation warning. When you tap the button, the subView slides in from the left. When you tap it again, the subView slides out to the right.
If I comment out the .animation(.linear) statement, then the first time I tap the button, the subView instantly appears with no animation. Then if I tap it again, it nicely slides out to the right.
How can I get this simple transition to work correctly without using the deprecated API?
|
[
"Replace the deprecated version:\n.animation(.linear)\n\nwith this:\n.animation(.linear, value: showingSubView)\n\nNote: The animation works without the withAnimation { }.\n"
] |
[
0
] |
[] |
[] |
[
"animation",
"slide",
"swiftui",
"transition"
] |
stackoverflow_0074663285_animation_slide_swiftui_transition.txt
|
Q:
How to access a file in runtime from output directory from a different project folder
We have a set of test projects under one solution in Visual Studio. I want to read a JSON file, which is copied to the output directory from a different project folder, at runtime. It's a test project. I can get the current project directory, but I'm not sure how to get the other assembly's directory.
Solution looks as below
Project1 -> Sample.json (this file is set to copy to output directory)
Project2
While running my test in Project2 I want to access the file in Project1 from the output directory.
I used to access files in the same project folder with code as shown below, but I'm not sure how to do this for a file from a different project. For now I am able to achieve it with Replace, but I'm sure this is not the right way:
var filePath = Path.Combine("Sample.json");
var fullPath = Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), filePath).Replace("Project1", "Project2");
I'm not sure how to get it from the other project. I am sure I can't use GetExecutingAssembly(), but what is the alternative? Somehow I can access it using the above dirty way of replacing the assembly name.
A:
To get the location of another assembly, you get use a type from that assembly to get to the right Assembly instance, and thus its location:
typeof(FromOtherAssembly).Assembly.Location
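Combined with Path, a sketch where SomeTypeFromProject1 stands in for any public type defined in Project1:
using System.IO;

var asmDir = Path.GetDirectoryName(typeof(SomeTypeFromProject1).Assembly.Location);
var fullPath = Path.Combine(asmDir, "Sample.json");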
A:
First, I suggest that you could find the dll path in the solution.
Second, you can filter the json file from the path.
The next is my working code.
Please install Microsoft.Build and Microsoft.Build.Framework nugetpackages first.
string path= string.Empty;
var solutionFile =SolutionFile.Parse(@"D:\test.sln");// Your solution place.
var projectsInSolution = solutionFile.ProjectsInOrder;
foreach (var project in projectsInSolution)
{
if(project.ProjectName=="TestDLL") //your dll name
{
path = project.AbsolutePath;
DirectoryInfo di = new DirectoryInfo(string.Format(@"{0}..\..\", path));
path = di.FullName;
foreach (var item in Directory.GetFiles(path,"*.json")) // filter the josn file in the correct path
{
if(item.StartsWith("Sample"))
{
Console.WriteLine(item);// you will get the correct json file path
}
}
}
}
A:
You can use the below code to do it in a better way
//solutionpath will take you to the root directory which has the .sln file
string solutionpath = Path.Combine(Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.FullName);
string secondprojectname = @"SecondProjectName\bin";
string finalPath = Path.Combine(solutionpath, secondprojectname);
A:
you can use CopyToOutputDirectory in MSBuild
|
How to access a file in runtime from output directory from a different project folder
|
We have a set of test projects under one solution in Visual Studio. I want to read a JSON file, which is copied to the output directory from a different project folder, at runtime. It's a test project. I can get the current project directory, but I'm not sure how to get the other assembly's directory.
Solution looks as below
Project1 -> Sample.json (this file is set to copy to output directory)
Project2
While running my test in Project2 I want to access the file in Project1 from the output directory.
I used to access files in the same project folder with code as shown below, but I'm not sure how to do this for a file from a different project. For now I am able to achieve it with Replace, but I'm sure this is not the right way:
var filePath = Path.Combine("Sample.json");
var fullPath = Path.Combine(Path.GetDirectoryName(Assembly.GetExecutingAssembly().Location), filePath).Replace("Project1", "Project2");
I'm not sure how to get it from the other project. I am sure I can't use GetExecutingAssembly(), but what is the alternative? Somehow I can access it using the above dirty way of replacing the assembly name.
|
[
"To get the location of another assembly, you get use a type from that assembly to get to the right Assembly instance, and thus its location:\ntypeof(FromOtherAssembly).Assembly.Location\n\n",
"First, I suggest that you could find the dll path in the solution.\nSecond, you can filter the json file from the path.\nThe next is my working code.\nPlease install Microsoft.Build and Microsoft.Build.Framework nugetpackages first.\n string path= string.Empty;\n var solutionFile =SolutionFile.Parse(@\"D:\\test.sln\");// Your solution place.\n var projectsInSolution = solutionFile.ProjectsInOrder;\n foreach (var project in projectsInSolution)\n {\n if(project.ProjectName==\"TestDLL\") //your dll name\n {\n path = project.AbsolutePath;\n DirectoryInfo di = new DirectoryInfo(string.Format(@\"{0}..\\..\\\", path));\n path = di.FullName;\n foreach (var item in Directory.GetFiles(path,\"*.json\")) // filter the josn file in the correct path\n {\n if(item.StartsWith(\"Sample\"))\n {\n Console.WriteLine(item);// you will get the correct json file path\n }\n }\n }\n\n }\n\n",
"You can use the below code to do it in a better way\n //solutionpath will take you to the root directory which has the .sln file\n string solutionpath = Path.Combine(Directory.GetParent(Directory.GetCurrentDirectory()).Parent.Parent.FullName);\n string secondprojectname = @\"SecondProjectName\\bin\";\n string finalPath = Path.Combine(solutionpath, secondprojectname);\n\n",
"you can use CopyToOutputDirectory in MSBuild\n"
] |
[
1,
1,
1,
0
] |
[] |
[] |
[
"c#",
"visual_studio"
] |
stackoverflow_0058729105_c#_visual_studio.txt
|
Q:
Hardhat compile error with API_URL and Private Key import
Trying to follow a basic NFT tutorial, and I have to say that I am kind of a noob at programming. The problem that I am facing is that my hardhat.config.js file needs an API_KEY and a private key, which it should import from the .env file:
API_URL = "https://ethropsten.alchemyapi.io/v2/UkW3oySI7WxvFwDwopQHPOHajHaWFZFv"
PRIVATE_KEY = "8d33c2613cb63d0dc6305e57..."
the hardhat config file looks like this:
/**
* @type import('hardhat/config').HardhatUserConfig
*/
require('dotenv').config();
require("@nomiclabs/hardhat-ethers");
const { API_URL, PRIVATE_KEY } = process.env;
module.exports = {
solidity: "0.8.0",
defaultNetwork: "ropsten",
networks: {
hardhat: {},
ropsten: {
url: API_URL,
accounts: [`0x${PRIVATE_KEY}`]
}
},
}
But whenever I try to compile it and run it tru my deploy.js file I get an error message that essentially tells me, that the import was not possible, and looks like this:
* Invalid value undefined for HardhatConfig.networks.ropsten.url - Expected a value of type string.
* Invalid value {"accounts":["0xundefined"]} for HardhatConfig.networks.ropsten - Expected a value of type HttpNetworkConfig.
To learn more about Hardhat's configuration, please go to https://hardhat.org/config/
For more info go to https://hardhat.org/HH8 or run Hardhat with --show-stack-traces
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
ReferenceError: API_KEY is not defined
at Object.<anonymous> (/Users/simon/test_fractals/hardhat.config.js:37:12)
at Module._compile (internal/modules/cjs/loader.js:1072:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
at Module.load (internal/modules/cjs/loader.js:937:32)
at Function.Module._load (internal/modules/cjs/loader.js:778:12)
at Module.require (internal/modules/cjs/loader.js:961:19)
at require (internal/modules/cjs/helpers.js:92:18)
at importCsjOrEsModule (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:23:20)
at Object.loadConfigAndTasks (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:66:18)
at main (/Users/simon/test_fractals/node_modules/hardhat/src/internal/cli/cli.ts:129:20)
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
ReferenceError: API_KEY is not defined
at Object.<anonymous> (/Users/simon/test_fractals/hardhat.config.js:37:12)
at Module._compile (internal/modules/cjs/loader.js:1072:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
at Module.load (internal/modules/cjs/loader.js:937:32)
at Function.Module._load (internal/modules/cjs/loader.js:778:12)
at Module.require (internal/modules/cjs/loader.js:961:19)
at require (internal/modules/cjs/helpers.js:92:18)
at importCsjOrEsModule (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:23:20)
at Object.loadConfigAndTasks (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:66:1
I first thought that my .env file was not in the right place or that the syntax was not right, but after trying everything I could think of, I still get the same error message. Any help is appreciated.
A:
I solved the same problem by using this in the hardhat.config.js file:
require('dotenv').config({path:__dirname+'/.env'})
Instead of:
require('dotenv').config()
For more information, you can visit
dotenv file is not loading environment variables
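For completeness, the .env file itself should sit next to hardhat.config.js and look roughly like this (the values are placeholders):
API_URL="https://eth-ropsten.alchemyapi.io/v2/<your-alchemy-key>"
PRIVATE_KEY="<your-private-key-without-0x>"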
A:
I ran in the same issue and I've solved it in a next way.
The things back to normal when I firstly require the dotenv and only then set config params.
So, the final code looks like that:
const dotenv = require("dotenv");
dotenv.config({path: __dirname + '/.env'});
const { API_URL, PRIVATE_KEY } = process.env;
hope this save some time for someone =)
A:
This worked for me
/**
* @type import('hardhat/config').HardhatUserConfig
*/
require('dotenv').config({path:__dirname+'/.env'})
require("@nomiclabs/hardhat-ethers")
const { API_URL, PRIVATE_KEY } = process.env
module.exports = {
solidity: "0.7.3",
defaultNetwork: "ropsten",
networks: {
hardhat: {
},
ropsten: {
url: "https://eth-ropsten.alchemyapi.io/v2/your-api-key",
// Replace your-api-key with your API key
// Your API_URL also stored in .env file but for simplity I have directly a string here
accounts: ["klaouoq84qoir983n2nc3234xn98349nx4u2394u23998x3n3"], // This is YOUR_PRIVATE_KEY from Metamask which you can store in env file but for simplity I have directly a string here
},
},
}
A:
I had this issue because I added 'export' in front of my environment variables.
Once I removed "export" I was able to access them from hardhat config
A:
> npm i dotenv
require('dotenv').config()
...
module.exports = {
solidity: "0.8.17",
networks: {
hardhat: {
chainId: 80001
},
mumbai: {
url: `https://polygon-mumbai.g.alchemy.com/v2/${process.env.ALCHEMY_API_KEY}`,
accounts: [process.env.PRIVATE_KEY_WALLET]
}
}
};
A:
To solve these problem I install dotenv extencion
npm install dotenv --save
And in hardtat.config I use process.env.YOUR_CONST
/**
* @type import('hardhat/config').HardhatUserConfig
*/
require('dotenv').config();
require("@nomiclabs/hardhat-ethers");
/** @type import('hardhat/config').HardhatUserConfig */
module.exports = {
solidity: "0.8.17",
defaultNetwork: "goerli",
networks: {
hardhat: {},
goerli: {
url:process.env.API_URL,
accounts: [`0x${process.env.PRIVATE_KEY}`]
}
},
};
|
Hardhat compile error with API_URL and Private Key import
|
Trying to follow a basic NFT tutorial, and I have to say that I am kind of a noob at programming. The problem that I am facing is that my hardhat.config.js file needs an API_KEY and a private key, which it should import from the .env file:
API_URL = "https://ethropsten.alchemyapi.io/v2/UkW3oySI7WxvFwDwopQHPOHajHaWFZFv"
PRIVATE_KEY = "8d33c2613cb63d0dc6305e57..."
the hardhat config file looks like this:
/**
* @type import('hardhat/config').HardhatUserConfig
*/
require('dotenv').config();
require("@nomiclabs/hardhat-ethers");
const { API_URL, PRIVATE_KEY } = process.env;
module.exports = {
solidity: "0.8.0",
defaultNetwork: "ropsten",
networks: {
hardhat: {},
ropsten: {
url: API_URL,
accounts: [`0x${PRIVATE_KEY}`]
}
},
}
But whenever I try to compile it and run it tru my deploy.js file I get an error message that essentially tells me, that the import was not possible, and looks like this:
* Invalid value undefined for HardhatConfig.networks.ropsten.url - Expected a value of type string.
* Invalid value {"accounts":["0xundefined"]} for HardhatConfig.networks.ropsten - Expected a value of type HttpNetworkConfig.
To learn more about Hardhat's configuration, please go to https://hardhat.org/config/
For more info go to https://hardhat.org/HH8 or run Hardhat with --show-stack-traces
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
ReferenceError: API_KEY is not defined
at Object.<anonymous> (/Users/simon/test_fractals/hardhat.config.js:37:12)
at Module._compile (internal/modules/cjs/loader.js:1072:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
at Module.load (internal/modules/cjs/loader.js:937:32)
at Function.Module._load (internal/modules/cjs/loader.js:778:12)
at Module.require (internal/modules/cjs/loader.js:961:19)
at require (internal/modules/cjs/helpers.js:92:18)
at importCsjOrEsModule (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:23:20)
at Object.loadConfigAndTasks (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:66:18)
at main (/Users/simon/test_fractals/node_modules/hardhat/src/internal/cli/cli.ts:129:20)
simon@MacBook-Pro-von-Simon test_fractals % npx hardhat run scripts/deploy.js --network ropsten
An unexpected error occurred:
ReferenceError: API_KEY is not defined
at Object.<anonymous> (/Users/simon/test_fractals/hardhat.config.js:37:12)
at Module._compile (internal/modules/cjs/loader.js:1072:14)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1101:10)
at Module.load (internal/modules/cjs/loader.js:937:32)
at Function.Module._load (internal/modules/cjs/loader.js:778:12)
at Module.require (internal/modules/cjs/loader.js:961:19)
at require (internal/modules/cjs/helpers.js:92:18)
at importCsjOrEsModule (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:23:20)
at Object.loadConfigAndTasks (/Users/simon/test_fractals/node_modules/hardhat/src/internal/core/config/config-loading.ts:66:1
I first thought that my .env file was not in the right place or that the syntax was not right, but after trying everything I could think of, I still get the same error message. Any help is appreciated.
|
[
"I solved the same problem by using this in the hardhat.config.js file:\nrequire('dotenv').config({path:__dirname+'/.env'})\n\nInstead of:\nrequire('dotenv').config()\n\nFor more information, you can visit\ndotenv file is not loading environment variables\n",
"I ran in the same issue and I've solved it in a next way.\nThe things back to normal when I firstly require the dotenv and only then set config params.\nSo, the final code looks like that:\nconst dotenv = require(\"dotenv\");\ndotenv.config({path: __dirname + '/.env'});\nconst { API_URL, PRIVATE_KEY } = process.env;\n\nhope this save some time for someone =)\n",
"This worked for me\n\n\n /**\n * @type import('hardhat/config').HardhatUserConfig\n */\n\n require('dotenv').config({path:__dirname+'/.env'})\n require(\"@nomiclabs/hardhat-ethers\")\n \n const { API_URL, PRIVATE_KEY } = process.env\n\n module.exports = {\n solidity: \"0.7.3\",\n defaultNetwork: \"ropsten\",\n networks: {\n hardhat: {\n },\n ropsten: {\n url: \"https://eth-ropsten.alchemyapi.io/v2/your-api-key\", \n// Replace your-api-key with your API key\n// Your API_URL also stored in .env file but for simplity I have directly a string here\n accounts: [\"klaouoq84qoir983n2nc3234xn98349nx4u2394u23998x3n3\"], // This is YOUR_PRIVATE_KEY from Metamask which you can store in env file but for simplity I have directly a string here\n },\n },\n }\n\n\n\n",
"I had this issue because I added 'export' in front of my environment variables.\nOnce I removed \"export\" I was able to access them from hardhat config\n",
"> npm i dotenv\n\n\nrequire('dotenv').config()\n...\nmodule.exports = {\n solidity: \"0.8.17\",\n networks: {\n hardhat: {\n chainId: 80001\n },\n mumbai: {\n url: `https://polygon-mumbai.g.alchemy.com/v2/${process.env.ALCHEMY_API_KEY}`,\n accounts: [process.env.PRIVATE_KEY_WALLET]\n }\n }\n};\n\n",
"To solve these problem I install dotenv extencion\nnpm install dotenv --save\n\nAnd in hardtat.config I use process.env.YOUR_CONST\n /**\n* @type import('hardhat/config').HardhatUserConfig\n*/\n\nrequire('dotenv').config();\nrequire(\"@nomiclabs/hardhat-ethers\");\n/** @type import('hardhat/config').HardhatUserConfig */\nmodule.exports = {\n solidity: \"0.8.17\",\n defaultNetwork: \"goerli\",\n networks: {\n hardhat: {},\n goerli: {\n url:process.env.API_URL,\n accounts: [`0x${process.env.PRIVATE_KEY}`]\n }\n },\n};\n\n"
] |
[
13,
1,
1,
0,
0,
0
] |
[] |
[] |
[
"hardhat",
"javascript",
"nft",
"solidity"
] |
stackoverflow_0069525606_hardhat_javascript_nft_solidity.txt
|
Q:
Error installing React app in the Hyper terminal
How to solve this error?
I run npx create-react-app my-app
and get the error below:
A:
This problem is usually due to a poor network connection. Try installing the React app with a stable network.
|
Error installing React app in the Hyper terminal
|
How to solve this error?
I run npx create-react-app my-app
and get the error below:
|
[
"This problem is due to poor network connection, Try to install the react app with a stable network\n"
] |
[
0
] |
[] |
[] |
[
"hyperterminal",
"npm",
"npm_install",
"reactjs"
] |
stackoverflow_0073380706_hyperterminal_npm_npm_install_reactjs.txt
|
Q:
How to set State of functional component from unit test case
I am testing a React functional component where I need to click a button. Before the button click, it uses some state which is false; I want to make it true so that my test case will pass. Please help.
A:
const [data, setData] = useState(false)
const [message, setMessage] = useState()

function onButtonHandler() {
  setData((val) => !val)
  setMessage("set your message here")
}
And then use data and message wherever you render your button.
A:
const changeSize = jest.fn();
const wrapper = mount(<App onClick={changeSize} />);
const handleClick = jest.spyOn(React, "useState");
handleClick.mockImplementation(size => [size, changeSize]);
wrapper.find("#para1").simulate("click");
expect(changeSize).toBeCalled();
Or you can try
it('testing state ', () => {
const wrapper = mount(<Component list={[]} />)
wrapper.instance().setValue(1)
expect(wrapper.state('value')).toBe(1)
})
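For completeness, a minimal sketch of the same idea with React Testing Library instead of enzyme; Toggle is a hypothetical component that renders a button and shows "enabled" once its state flips to true:
// Toggle.test.js
import '@testing-library/jest-dom'; // provides toBeInTheDocument
import { render, screen, fireEvent } from '@testing-library/react';
import Toggle from './Toggle'; // hypothetical component under test

test('clicking the button flips the state-driven output', () => {
  render(<Toggle />);
  fireEvent.click(screen.getByRole('button')); // drive the state change through the UI
  expect(screen.getByText(/enabled/i)).toBeInTheDocument();
});
This avoids reaching into the hook internals: the state change is triggered through the rendered UI, which is what a user would do.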
|
How to set State of functional component from unit test case
|
I am testing a React functional component where I need to click a button. Before the button click, it uses some state which is false; I want to make it true so that my test case will pass. Please help.
|
[
"const [data, setData] = useState(false)\nconst [message, setMessage] = useState()\nfunction onButtonHandler(){\nsetData((val) => !val)\nsetMessage(\"set you message here\")\n}\n\nAnd use the data and message at places where you put your button\n",
" const changeSize = jest.fn();\n const wrapper = mount(<App onClick={changeSize} />);\n const handleClick = jest.spyOn(React, \"useState\");\n handleClick.mockImplementation(size => [size, changeSize]);\n\n wrapper.find(\"#para1\").simulate(\"click\");\n expect(changeSize).toBeCalled();\n\nOr you can try\nit('testing state ', () => {\n const wrapper = mount(<Component list={[]} />)\n wrapper.instance().setValue(1)\n expect(wrapper.state('value')).toBe(1)\n})\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"enzyme",
"reactjs"
] |
stackoverflow_0074637278_enzyme_reactjs.txt
|
Q:
Weird unexpected token error when trying to name a class in Code.org (ANSWERED)
I am trying to name a class in Javascript and when I initialize the class it has an error at the name of the class.
class e {. //SyntaxError: Unexpected token (13:6)
constructor(name, damage) {
this.name = name;
this.damage = damage;
}
}
Also, if I make another, identical in everything but name, class under it then there is no error on the second class.
class e { //SyntaxError: Unexpected token (13:6)
constructor(name, damage) {
this.name = name;
this.damage = damage;
}
}
class a { //all that comes up here is that a is not called in my program
constructor(and, what) {
this.and = and;
this.what = what;
}
}
When I add a third class nothing happens to the second class, no errors are coming up. So I have no idea why this is happening.
This is for a school project and we are using Code.org's App Lab if that makes a difference.
I copied a class from another program that works and pasted it into this program and changed only the names, hoping that error would go away, but the error still is there.
A:
Code.org does not support class. Here is a forum post about it: https://forum.code.org/t/oop-and-class-impementation-tech-help/31755/3
It explains that class is not implemented in the interpreter that is used; however, it gives a clever solution to create a class using functions. It appears to work the same way a class would, just without the class syntax.
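A minimal sketch of that function-based pattern, with example names:
// App Lab has no `class`, so build the object inside a plain function
function Weapon(name, damage) {
  var obj = {};       // stands in for `this`
  obj.name = name;
  obj.damage = damage;
  return obj;
}

var sword = Weapon("sword", 12);
console.log(sword.name);   // "sword"
console.log(sword.damage); // 12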
|
Weird unexpected token error when trying to name a class in Code.org (ANSWERED)
|
I am trying to name a class in Javascript and when I initialize the class it has an error at the name of the class.
class e {. //SyntaxError: Unexpected token (13:6)
constructor(name, damage) {
this.name = name;
this.damage = damage;
}
}
Also, if I make another, identical in everything but name, class under it then there is no error on the second class.
class e { //SyntaxError: Unexpected token (13:6)
constructor(name, damage) {
this.name = name;
this.damage = damage;
}
}
class a { //all that comes up here is that a is not called in my program
constructor(and, what) {
this.and = and;
this.what = what;
}
}
When I add a third class nothing happens to the second class, no errors are coming up. So I have no idea why this is happening.
This is for a school project and we are using Code.org's App Lab if that makes a difference.
I copied a class from another program that works and pasted it into this program and changed only the names, hoping that error would go away, but the error still is there.
|
[
"Code.org does not support class. Here is a forum post about it.https://forum.code.org/t/oop-and-class-impementation-tech-help/31755/3\nIt explains that class is not implemented into the interpreter that is used, however it gives a clever solution to create a class using functions. It appears to work the same way a class would, just without class syntax.\n"
] |
[
1
] |
[] |
[] |
[
"class",
"javascript",
"token"
] |
stackoverflow_0074663038_class_javascript_token.txt
|
Q:
Trying to add to an array of structs
I am trying to add these values to an array of structs using fgets and then print them in the display function, but when I go through the program, add a record, and then display, the output is just blank.
int main(void)
{
int maxsize;
printf("Enter a size for the dynamic array: ");
scanf("%d", &maxsize);
record_t *myarray = createArray(maxsize);
int currentsize = 0;
int option = -1;
while(option != 0)
{
printf("What would you like to do?\n");
printf("1: Insert a record\n");
printf("2: Display records\n");
printf("3: Remove record\n");
printf("4: Exit\n");
printf("Enter an option: ");
scanf("%d", &option);
getchar();
switch(option)
{
case 1:
printf("Insert was selected...\n");
myarray = insert(myarray, &maxsize, &currentsize);
break;
case 2:
printf("Display was selected...\n");
display(myarray, currentsize);
break;
case 3:
printf("Remove was selected...\n");
printf("Select an index of record to remove...\n");
int index;
scanf("%d", &index);
getchar();
currentsize = removeRecord(myarray,currentsize,index);
break;
case 4:
printf("Exiting...\n");
option = 0;
break;
default:
printf("Invalid Selection...\n");
}
}
freeRecords(myarray, currentsize);
free(myarray);
myarray = NULL;
return 0;
}
record_t * createArray(int maxsize)
{
record_t * arr = (record_t *) malloc(maxsize * sizeof(record_t));
if(arr == NULL)
{
printf("malloc error in createArray...program exiting\n");
exit(1);
}
return arr;
}
void display(record_t *myarray, int currentsize)
{
printf("---------------------------------\n");
for(int x = 0; x < currentsize; ++x)
{
printf("myarray[%d].fname = %s\n", x, (x + myarray)->fname); //try it with myarray[x].name
printf("myarray[%d].lname = %s\n", x, (x + myarray)->lname); //try it with myarray[x].name
printf("myarray[%d].show = %s\n", x, (x + myarray)->show); //try it with myarray[x].show
}
printf("---------------------------------\n");
}
record_t * insert(record_t *myarray, int *maxsize, int *currentsize)
{
if (*currentsize >= *maxsize)
{
printf("Array is full...\n");
printf("Need to doubleIt...\n");
myarray = doubleIt(myarray, *currentsize);
*maxsize *= 2;
}
myarray[*currentsize].fname = malloc(sizeof(char)*LIMIT);
printf("Enter the first name: ");
fgets(myarray[*currentsize].fname, LIMIT, stdin);
helper(myarray[*currentsize].fname);
free(myarray[*currentsize].fname);
myarray[*currentsize].lname = malloc(sizeof(char)*LIMIT);
printf("Enter the last name: ");
fgets(myarray[*currentsize].lname, LIMIT, stdin);
helper(myarray[*currentsize].lname);
free(myarray[*currentsize].lname);
myarray[*currentsize].show = malloc(sizeof(char)*LIMIT);
printf("Enter favorite show: ");
fgets(myarray[*currentsize].show, LIMIT, stdin);
helper(myarray[*currentsize].show);
free(myarray[*currentsize].show);
}
I have tried switching around the malloc calls and some of the syntax, but nothing has worked; changing the code just causes it to not compile.
A:
The 3 issues in insert() that prevent your display() from working are:
Don't free() the strings that you just read (fname, lname, and show).
Increment *currentsize after adding a record.
return myarray.
record_t *insert(record_t *myarray, int *maxsize, int *currentsize) {
if (*currentsize >= *maxsize) {
printf("Array is full...\n");
printf("Need to doubleIt...\n");
myarray = doubleIt(myarray, *currentsize);
*maxsize *= 2;
}
myarray[*currentsize].fname = malloc(LIMIT);
printf("Enter the first name: ");
fgets(myarray[*currentsize].fname, LIMIT, stdin);
helper(myarray[*currentsize].fname);
myarray[*currentsize].lname = malloc(LIMIT);
printf("Enter the last name: ");
fgets(myarray[*currentsize].lname, LIMIT, stdin);
helper(myarray[*currentsize].lname);
myarray[*currentsize].show = malloc(LIMIT);
printf("Enter favorite show: ");
fgets(myarray[*currentsize].show, LIMIT, stdin);
helper(myarray[*currentsize].show);
(*currentsize)++;
return myarray;
}
and example session:
Enter a size for the dynamic array: 1
What would you like to do?
1: Insert a record
2: Display records
3: Remove record
4: Exit
Enter an option: 1
Insert was selected...
Enter the first name: bob
Enter the last name: smith
Enter favorite show: 007
What would you like to do?
1: Insert a record
2: Display records
3: Remove record
4: Exit
Enter an option: 2
Display was selected...
---------------------------------
myarray[0].fname = bob
myarray[0].lname = smith
myarray[0].show = 007
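One gap remains: doubleIt is referenced but never shown. A minimal sketch, assuming it doubles the allocation and returns the (possibly relocated) array; record_t is the struct type from the question:
record_t *doubleIt(record_t *myarray, int currentsize)
{
    /* grow the backing array to twice its current capacity */
    record_t *tmp = realloc(myarray, 2 * currentsize * sizeof(record_t));
    if (tmp == NULL) {
        printf("realloc error in doubleIt...program exiting\n");
        free(myarray);
        exit(1);
    }
    return tmp;
}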
|
Trying to add to an array of structs
|
I am trying to add these values to an array of structs using fgets and then print them in the display function, but when I go through the program, add a record, and then display, the output is just blank.
int main(void)
{
int maxsize;
printf("Enter a size for the dynamic array: ");
scanf("%d", &maxsize);
record_t *myarray = createArray(maxsize);
int currentsize = 0;
int option = -1;
while(option != 0)
{
printf("What would you like to do?\n");
printf("1: Insert a record\n");
printf("2: Display records\n");
printf("3: Remove record\n");
printf("4: Exit\n");
printf("Enter an option: ");
scanf("%d", &option);
getchar();
switch(option)
{
case 1:
printf("Insert was selected...\n");
myarray = insert(myarray, &maxsize, &currentsize);
break;
case 2:
printf("Display was selected...\n");
display(myarray, currentsize);
break;
case 3:
printf("Remove was selected...\n");
printf("Select an index of record to remove...\n");
int index;
scanf("%d", &index);
getchar();
currentsize = removeRecord(myarray,currentsize,index);
break;
case 4:
printf("Exiting...\n");
option = 0;
break;
default:
printf("Invalid Selection...\n");
}
}
freeRecords(myarray, currentsize);
free(myarray);
myarray = NULL;
return 0;
}
record_t * createArray(int maxsize)
{
record_t * arr = (record_t *) malloc(maxsize * sizeof(record_t));
if(arr == NULL)
{
printf("malloc error in createArray...program exiting\n");
exit(1);
}
return arr;
}
void display(record_t *myarray, int currentsize)
{
printf("---------------------------------\n");
for(int x = 0; x < currentsize; ++x)
{
printf("myarray[%d].fname = %s\n", x, (x + myarray)->fname); //try it with myarray[x].name
printf("myarray[%d].lname = %s\n", x, (x + myarray)->lname); //try it with myarray[x].name
printf("myarray[%d].show = %s\n", x, (x + myarray)->show); //try it with myarray[x].show
}
printf("---------------------------------\n");
}
record_t * insert(record_t *myarray, int *maxsize, int *currentsize)
{
if (*currentsize >= *maxsize)
{
printf("Array is full...\n");
printf("Need to doubleIt...\n");
myarray = doubleIt(myarray, *currentsize);
*maxsize *= 2;
}
myarray[*currentsize].fname = malloc(sizeof(char)*LIMIT);
printf("Enter the first name: ");
fgets(myarray[*currentsize].fname, LIMIT, stdin);
helper(myarray[*currentsize].fname);
free(myarray[*currentsize].fname);
myarray[*currentsize].lname = malloc(sizeof(char)*LIMIT);
printf("Enter the last name: ");
fgets(myarray[*currentsize].lname, LIMIT, stdin);
helper(myarray[*currentsize].lname);
free(myarray[*currentsize].lname);
myarray[*currentsize].show = malloc(sizeof(char)*LIMIT);
printf("Enter favorite show: ");
fgets(myarray[*currentsize].show, LIMIT, stdin);
helper(myarray[*currentsize].show);
free(myarray[*currentsize].show);
}
I have tried switching around the malloc calls and some of the syntax, but nothing has worked; changing the code just causes it to not compile.
|
[
"The 3 issues in insert() that prevent your display() from working are:\n\nDon't free() the strings that you just read (fname, lname, and show).\nIncrement *currentsize after adding a record.\nreturn myarray.\n\nrecord_t *insert(record_t *myarray, int *maxsize, int *currentsize) {\n if (*currentsize >= *maxsize) {\n printf(\"Array is full...\\n\");\n printf(\"Need to doubleIt...\\n\");\n myarray = doubleIt(myarray, currentsize);\n *maxsize *= 2;\n }\n\n myarray[*currentsize].fname = malloc(LIMIT); \n printf(\"Enter the first name: \");\n fgets(myarray[*currentsize].fname, LIMIT, stdin); \n helper(myarray[*currentsize].fname);\n\n myarray[*currentsize].lname = malloc(LIMIT); \n printf(\"Enter the last name: \");\n fgets(myarray[*currentsize].lname, LIMIT, stdin);\n helper(myarray[*currentsize].lname);\n\n myarray[*currentsize].show = malloc(LIMIT);\n printf(\"Enter favorite show: \");\n fgets(myarray[*currentsize].show, LIMIT, stdin);\n helper(myarray[*currentsize].show);\n\n (*currentsize)++;\n return myarray;\n}\n\nand example session:\nEnter a size for the dynamic array: 1\nWhat would you like to do?\n1: Insert a record\n2: Display records\n3: Remove record\n4: Exit\nEnter an option: 1\nInsert was selected...\nEnter the first name: bob\nEnter the last name: smith\nEnter favorite show: 007\nWhat would you like to do?\n1: Insert a record\n2: Display records\n3: Remove record\n4: Exit\nEnter an option: 2\nDisplay was selected...\n---------------------------------\nmyarray[0].fname = bob\n\nmyarray[0].lname = smith\n\nmyarray[0].show = 007\n\n"
] |
[
0
] |
[] |
[] |
[
"c"
] |
stackoverflow_0074663585_c.txt
|
Q:
Multi Dimensional Array Calculation in php ( PHP Ranking not working )
Hi guys,
I have a result like this, but the problem is that the ranks in this result are not correct. Can anyone please help me? I have tried many times over many days to get the correct result with this code, but it has never been successful, so please help; I have put all the code below.
Array
(
[0] => Array
(
[sturoll_no] => MAX001
[stu_name] => Name01
[stu_pres] => 70.00
[rank] => 1
)
[1] => Array
(
[sturoll_no] => MAX002
[stu_name] => H.M.Dehan
[stu_pres] => 53.33
[rank] => 2
)
[2] => Array
(
[sturoll_no] => MAX003
[stu_name] => Chamudi
[stu_pres] => 30.00
[rank] => 3
)
[3] => Array
(
[sturoll_no] => MAX004
[stu_name] => D Laknara
[stu_pres] => 71.67
[rank] => 4
)
[4] => Array
(
[sturoll_no] => MAX005
[stu_name] => D M Imasha
[stu_pres] => 21.67
[rank] => 5
)
[5] => Array
(
[sturoll_no] => Max006
[stu_name] => Name05
[stu_pres] => 63.33
[rank] => 6
)
[6] => Array
(
[sturoll_no] => MAX007
[stu_name] => Name06
[stu_pres] => 66.67
[rank] => 7
)
I found this code at: https://www.techantena.com/4897/calculate-rank-array-values-php/
My code looks like this:
$student_mark = [
[
'reg_no' => 'REG_10001',
'student_name' => 'Rita Book',
'mark' => '89',
],[
'reg_no' => 'REG_10002',
'student_name' => 'A. Mused',
'mark' => '95',
],[
'reg_no' => 'REG_10003',
'student_name' => 'Rose Bush',
'mark' => '35',
],[
'reg_no' => 'REG_10004',
'student_name' => 'Greg Arias',
'mark' => '86',
],[
'reg_no' => 'REG_10005',
'student_name' => 'Skye Blue',
'mark' => '12',
],[
'reg_no' => 'REG_10006',
'student_name' => 'Don Messwidme',
'mark' => '0',
],[
'reg_no' => 'REG_10007',
'student_name' => 'Emma Grate',
'mark' => '0',
],[
'reg_no' => 'REG_10008',
'student_name' => 'Sarah Moanees',
'mark' => '75',
],[
'reg_no' => 'REG_10009',
'student_name' => 'Mal Nurrisht',
'mark' => '86',
],[
'reg_no' => 'REG_10010',
'student_name' => 'Stanley Knife',
'mark' => '35',
],
];
$student_rank = calculate_rank($student_mark);
//calculate rank for multi dimensional array
function calculate_rank($rank_values): array {
$rank = 0;
$r_last = null;
foreach ($rank_values as $key => $arr) {
if ($arr['mark'] != $r_last) {
if($arr['mark'] > 0){ //if you want to set zero rank for values zero
$rank++;
}
$r_last = $arr['mark'];
}
$rank_values[$key]['rank'] = $arr['mark'] > 0 ? $rank: 0; //if you want to set zero rank for values zero
}
return $rank_values;
}
Can anyone help me? I was really stuck on this; the output is OK, but the rank calculation is not correct.
I need the complete solution for this question.
A:
We simply need to sort this array by mark first; we can use usort with a custom comparison function on the mark key:
usort($student_mark, function ($a, $b) {
return $b['mark'] <=> $a['mark'];
});
then just add the rank field to each entry:
foreach ($student_mark as $i => &$student) { // use a reference so the rank key is added directly to the array
$student['rank'] = $i+1;
}
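Putting both steps together, a minimal sketch that also keeps zero marks at rank 0, as the original code intended (note that equal marks get successive ranks here; keep the $r_last comparison from the question if ties should share a rank):
usort($student_mark, function ($a, $b) {
    return $b['mark'] <=> $a['mark']; // highest mark first
});

$rank = 0;
foreach ($student_mark as &$student) {
    $student['rank'] = $student['mark'] > 0 ? ++$rank : 0;
}
unset($student); // break the reference after the loop

print_r($student_mark);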
|
Multi Dimensional Array Calculation in php ( PHP Ranking not working )
|
Hi guys,
I have a result like this, but the problem is that the ranks in this result are not correct. Can anyone please help me? I have tried many times over many days to get the correct result with this code, but it has never been successful, so please help; I have put all the code below.
Array
(
[0] => Array
(
[sturoll_no] => MAX001
[stu_name] => Name01
[stu_pres] => 70.00
[rank] => 1
)
[1] => Array
(
[sturoll_no] => MAX002
[stu_name] => H.M.Dehan
[stu_pres] => 53.33
[rank] => 2
)
[2] => Array
(
[sturoll_no] => MAX003
[stu_name] => Chamudi
[stu_pres] => 30.00
[rank] => 3
)
[3] => Array
(
[sturoll_no] => MAX004
[stu_name] => D Laknara
[stu_pres] => 71.67
[rank] => 4
)
[4] => Array
(
[sturoll_no] => MAX005
[stu_name] => D M Imasha
[stu_pres] => 21.67
[rank] => 5
)
[5] => Array
(
[sturoll_no] => Max006
[stu_name] => Name05
[stu_pres] => 63.33
[rank] => 6
)
[6] => Array
(
[sturoll_no] => MAX007
[stu_name] => Name06
[stu_pres] => 66.67
[rank] => 7
)
I found this code at: https://www.techantena.com/4897/calculate-rank-array-values-php/
My code looks like this:
$student_mark = [
[
'reg_no' => 'REG_10001',
'student_name' => 'Rita Book',
'mark' => '89',
],[
'reg_no' => 'REG_10002',
'student_name' => 'A. Mused',
'mark' => '95',
],[
'reg_no' => 'REG_10003',
'student_name' => 'Rose Bush',
'mark' => '35',
],[
'reg_no' => 'REG_10004',
'student_name' => 'Greg Arias',
'mark' => '86',
],[
'reg_no' => 'REG_10005',
'student_name' => 'Skye Blue',
'mark' => '12',
],[
'reg_no' => 'REG_10006',
'student_name' => 'Don Messwidme',
'mark' => '0',
],[
'reg_no' => 'REG_10007',
'student_name' => 'Emma Grate',
'mark' => '0',
],[
'reg_no' => 'REG_10008',
'student_name' => 'Sarah Moanees',
'mark' => '75',
],[
'reg_no' => 'REG_10009',
'student_name' => 'Mal Nurrisht',
'mark' => '86',
],[
'reg_no' => 'REG_10010',
'student_name' => 'Stanley Knife',
'mark' => '35',
],
];
$student_rank = calculate_rank($student_mark);
//calculate rank for multi dimensional array
function calculate_rank($rank_values): array {
$rank = 0;
$r_last = null;
foreach ($rank_values as $key => $arr) {
if ($arr['mark'] != $r_last) {
if($arr['mark'] > 0){ //if you want to set zero rank for values zero
$rank++;
}
$r_last = $arr['mark'];
}
$rank_values[$key]['rank'] = $arr['mark'] > 0 ? $rank: 0; //if you want to set zero rank for values zero
}
return $rank_values;
}
Can anyone help me? I was really stuck on this; the output is OK, but the rank calculation is not correct.
I need the complete solution for this question.
|
[
"simply we need to sort this array by mark first, we can use the custom sort function to sort by mark key:\nusort($student_mark, function ($a, $b) {\n return $b['mark'] <=> $a['mark']; \n});\n\nthen just add the field rank to the array:\nforeach ($student_mark as $i => &$student) { // we used the pointer to directly add the rank key to the array\n $student['rank'] = $i+1;\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"arrays",
"multidimensional_array",
"output",
"php",
"ranking"
] |
stackoverflow_0074661535_arrays_multidimensional_array_output_php_ranking.txt
|
Q:
How to create a :hover pseudo-class selector for the *li's* inside an *ol* [CSS]?
The closest I got was li:hover, but this targets the ol rather than the li.
Does it look more along the lines of ol:li:hover? I tried that and everything like it, but it didn't work. Any help would be greatly appreciated, thank you!
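For reference, a minimal sketch: the descendant combinator is a space, not a colon, so ol li:hover targets the hovered li inside an ol:
ol li:hover {
  background: lightyellow;
  cursor: pointer;
}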
|
How to create a :hover pseudo-class selector for the *li's* inside an *ol* [CSS]?
|
The closest I got was li:hover, but this targets the ol rather than the li.
Does it look more along the lines of ol:li:hover? I tried that and everything like it, but it didn't work. Any help would be greatly appreciated, thank you!
|
[] |
[] |
[
"li:hover should not target your ol:\n\n\nli {\n border: 3px solid lightcoral;\n margin: 5px;\n padding: 5px;\n width: fit-content;\n}\n\nol {\n list-style: none;\n border: 3px solid lightblue;\n margin: 5px;\n padding: 5px;\n width: fit-content;\n}\n\nli:hover {\n background: lightcoral;\n}\n<ol>\n <li>item 1</li>\n <li>item 2</li>\n <li>item 3</li>\n</ol>\n\n\n\nThe problem probably lies elsewhere. Please share a minimal reproducible example.\n"
] |
[
-1
] |
[
"css"
] |
stackoverflow_0074663458_css.txt
|
Q:
start expo doesn't open in web
When I type start expo into my command prompt, which is cd'ed to the directory that my app is in, it runs some code but doesn't open my Expo app in the web; it just does this to my cmd:
I was expecting it to open up in the web.
A:
It sounds like you are having trouble starting your Expo app using the start command in the command prompt. This can happen for a few different reasons, and here are some steps you can try to troubleshoot the problem:
Verify that you are in the correct directory in the command prompt. In order to run the start command, you need to be in the directory that contains your Expo app. You can use the cd command to navigate to the correct directory. For example, if your app is in the "C:\ExpoApps" folder, you would type cd C:\ExpoApps and press Enter.
Make sure that you have installed Expo and the Expo CLI on your computer. In order to use the start command, you need to have both of these tools installed and available on your system. You can use the npm install -g expo-cli command to install the Expo CLI, and then use the expo init command to create a new Expo app.
Check that your Expo app is running properly. Before you can use the start command to open your app in the web, you need to make sure that your app is running and available on your network. You can use the expo start command to start your app, and then use the QR code or URL provided by Expo to open your app in the Expo client on your phone or in a web browser.
If your Expo app is running and available, but the start command is still not opening your app in the web, it's possible that there is a problem with the start command itself. You can try running the start command with the --no-dev option, which will start your app in production mode. This can sometimes resolve issues with the start command.
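As a concrete reference, a typical flow looks like this (the project path is just an example):
npm install -g expo-cli      # one-time install of the Expo CLI
cd C:\ExpoApps\my-app        # the directory that contains app.json
expo start --web             # builds the app and opens it in the default browser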
|
start expo doesn't open in web
|
When I type start expo into my command prompt, which is cd'ed to the directory that my app is in, it runs some code but doesn't open my Expo app in the web; it just does this to my cmd:
I was expecting it to open up in the web.
|
[
"It sounds like you are having trouble starting your Expo app using the start command in the command prompt. This can happen for a few different reasons, and here are some steps you can try to troubleshoot the problem:\nVerify that you are in the correct directory in the command prompt. In order to run the start command, you need to be in the directory that contains your Expo app. You can use the cd command to navigate to the correct directory. For example, if your app is in the \"C:\\ExpoApps\" folder, you would type cd C:\\ExpoApps and press Enter.\nMake sure that you have installed Expo and the Expo CLI on your computer. In order to use the start command, you need to have both of these tools installed and available on your system. You can use the npm install -g expo-cli command to install the Expo CLI, and then use the expo init command to create a new Expo app.\nCheck that your Expo app is running properly. Before you can use the start command to open your app in the web, you need to make sure that your app is running and available on your network. You can use the expo start command to start your app, and then use the QR code or URL provided by Expo to open your app in the Expo client on your phone or in a web browser.\nIf your Expo app is running and available, but the start command is still not opening your app in the web, it's possible that there is a problem with the start command itself. You can try running the start command with the --no-dev option, which will start your app in production mode. This can sometimes resolve issues with the start command.\n"
] |
[
0
] |
[] |
[] |
[
"expo",
"javascript",
"npm"
] |
stackoverflow_0074663679_expo_javascript_npm.txt
|
Q:
Cannot return class attribute from function
I would like to get familiar with classes and attributes and defined the following minimal example in the Python console:
class test:
def __init__(self, name):
self.name = name
I initiated a class instance:
>>> first_test = test('Linus')
>>> first_test.name
'Linus'
Works fine. I also tried getattr() to get the value of the attribute:
>>> getattr(first_test, 'name')
'Linus'
No problem. Next, I tried packing getattr() into a function:
def return_name(instance, attribute):
return getattr(instance, attribute)
Again, all fine:
>>> return_name(first_test, 'name')
'Linus'
Now, I wanted to try the same thing with the instance.attribute syntax:
def return_name(instance, attribute):
return instance.attribute
But for some reason, this fails:
>>> return_name(first_test, name)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'name' is not defined
Another try, this time passing the attribute name as a string:
>>> return_name(first_test, 'name')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in return_name
AttributeError: 'test' object has no attribute 'attribute'
Fail again?
What is the problem here?
A:
In Python, the dot notation (e.g., object.attribute) is used to access an attribute of an object directly, without using a string representation of the attribute name. In your code, you are using the dot notation to access the name attribute of the first_test object like this:
first_test.name
This works because first_test is an instance of the test class, and the test class has an attribute called name.
However, in the return_name function, you are trying to use dot notation to access an attribute of the instance object, but instance.attribute looks up a literal attribute named attribute; it does not substitute the value of the attribute parameter. Here is how you are trying to use dot notation in the return_name function:
def return_name(instance, attribute):
return instance.attribute # This is the problematic line
return_name(first_test, 'name') # This will fail
In this case, you are trying to access an attribute called attribute of the instance object, which does not exist. This is why you are getting the AttributeError saying that the object has no attribute attribute.
To fix this, use the getattr function (e.g., getattr(instance, attribute)) to access the attribute from the string representation of its name. (Bracket notation like instance[attribute] would only work for a mapping or for a class that defines __getitem__, so it does not apply to a plain instance like this one.)
Here is how you can fix your code using the getattr function:
def return_name(instance, attribute):
    return getattr(instance, attribute) # Use getattr to access the attribute

return_name(first_test, 'name') # This will return 'Linus'
I also don't understand the error message AttributeError: 'test' object has no attribute 'attribute' - why the reference to test and not first_test, although I explicitly referenced the instance (and not the class)? The previous error message is equally confusing: NameError: name 'name' is not defined. Clearly, name is defined as an attribute of first_test!
In the error message AttributeError: 'test' object has no attribute 'attribute', test names the class of the instance: Python reports the object's type rather than the variable name first_test, so the message is saying that this instance of the test class has no attribute called attribute.
When you call the return_name function like this:
return_name(first_test, 'name')
The first_test object is passed to the instance parameter of the return_name function. Then, inside the return_name function, you are trying to access an attribute called attribute of the instance object using dot notation, like this:
def return_name(instance, attribute):
return instance.attribute # This is the problematic line
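To finish the thought: the NameError in return_name(first_test, name) happens before the function body even runs, because the bare identifier name is looked up as a variable in the calling scope, where no such variable exists. The attribute name has to travel as a string, and getattr turns that string back into an attribute lookup. A minimal sketch tying it together:
class Test:
    def __init__(self, name):
        self.name = name

first_test = Test('Linus')

def return_name(instance, attribute):
    # `attribute` is a string, so dot notation can't use it; getattr can
    return getattr(instance, attribute)

print(return_name(first_test, 'name'))  # Linus
# return_name(first_test, name)  # NameError: the variable `name` is undefined here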
|
Cannot return class attribute from function
|
I would like to get familiar with classes and attributes and defined the following minimal example in the Python console:
class test:
def __init__(self, name):
self.name = name
I initiated a class instance:
>>> first_test = test('Linus')
>>> first_test.name
'Linus'
Works fine. I also tried getattr() to get the value of the attribute:
>>> getattr(first_test, 'name')
'Linus'
No problem. Next, I tried packing getattr() into a function:
def return_name(instance, attribute):
return getattr(instance, attribute)
Again, all fine:
>>> return_name(first_test, 'name')
'Linus'
Now, I wanted to try the same thing with the instance.attribute syntax:
def return_name(instance, attribute):
return instance.attribute
But for some reason, this fails:
>>> return_name(first_test, name)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'name' is not defined
Another try, this time passing the attribute name as a string:
>>> return_name(first_test, 'name')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 2, in return_name
AttributeError: 'test' object has no attribute 'attribute'
Fail again?
What is the problem here?
|
[
"In Python, the dot notation (e.g., object.attribute) is used to access an attribute of an object directly, without using a string representation of the attribute name. In your code, you are using the dot notation to access the name attribute of the first_test object like this:\nfirst_test.name\n\nThis works because first_test is an instance of the test class, and the test class has an attribute called name.\nHowever, in the return_name function, you are trying to use the dot notation to access an attribute of the instance object, but you are using the string 'attribute' instead of the actual attribute name. Here is how you are trying to use the dot notation in the return_name function:\ndef return_name(instance, attribute):\n return instance.attribute # This is the problematic line\n\nreturn_name(first_test, 'name') # This will fail\n\nIn this case, you are trying to access an attribute called attribute of the instance object, which does not exist. This is why you are getting the AttributeError saying that the object has no attribute attribute.\nTo fix this, you can either use bracket notation (e.g., instance[attribute]) to access the attribute using the string representation of the attribute name, or you can use the getattr function (e.g., getattr(instance, attribute)) to access the attribute using the string representation of the attribute name.\nHere is how you can fix your code using bracket notation:\ndef return_name(instance, attribute):\n return instance[attribute] # Use bracket notation to access the attribute\n\nreturn_name(first_test, 'name') # This will return 'Linus'\n\nHere is how you can fix your code using the getattr function:\ndef return_name(instance, attribute):\n return getattr(instance, attribute) # Use getattr to access the attribute\n\nreturn_name(first_test, 'name') # This will also return 'Linus'\n\n\nI also don't understand the error message AttributeError: 'test' object has no attribute 'attribute' - why the reference to test and not first_test, although I explicitly referenced the instance (and not the class)? The previous error message is equally confusing: NameError: name 'name' is not defined. Clearly, name is defined as an attribute of first_class!\n\nIn the error message AttributeError: 'test' object has no attribute 'attribute', the reference to test is actually a reference to the class test, not the instance first_test. This is because the error is saying that the test class does not have an attribute called attribute.\nWhen you call the return_name function like this:\nreturn_name(first_test, 'name')\n\nThe first_test object is passed to the instance parameter of the return_name function. Then, inside the return_name function, you are trying to access an attribute called attribute of the instance object using dot notation, like this:\ndef return_name(instance, attribute):\n return instance.attribute # This is the problematic line\n\n"
] |
[
0
] |
[] |
[] |
[
"attributes",
"class",
"function",
"getattr",
"return"
] |
stackoverflow_0074663685_attributes_class_function_getattr_return.txt
|
Q:
Why use an else statement when the if statement returns a value?
I am wondering: when an if statement within a function returns a value, is it necessary to follow it with an else statement to return a different value? For example:
char to_lower_case(char ch) {
if(ch >= 'A' && ch <= 'Z')
return ch - 'A' + 'a';
else
return ch;
}
Couldn't the else statement be omitted with the same effect, since the if statement will return a value and exit the function when the condition is met?
char to_lower_case(char ch) {
if(ch >= 'A' && ch <= 'Z')
return ch - 'A' + 'a';
return ch;
}
A:
Both are fine and produce identical results. Since return exits the function immediately, the else is redundant; the second return is only reached when the condition is false either way.
|
Why use an else statement when the if statement returns a value?
|
I am wondering: when an if statement within a function returns a value, is it necessary to follow it with an else statement to return a different value? For example:
char to_lower_case(char ch) {
if(ch >= 'A' && ch <= 'Z')
return ch - 'A' + 'a';
else
return ch;
}
Couldn't the else statement be omitted with the same effect, since the if statement will return a value and exit the function when the condition is met?
char to_lower_case(char ch) {
if(ch >= 'A' && ch <= 'Z')
return ch - 'A' + 'a';
return ch;
}
|
[
"Both are fine and produce identical results.\n"
] |
[
1
] |
[] |
[] |
[
"c",
"if_statement"
] |
stackoverflow_0074663649_c_if_statement.txt
|
Q:
How to add two spaces before every line in a string YAML
I want the config to show up with 2 spaces before each line.
---
- hosts: localhost
vars:
filename: file1
a: aaa
config: |-
missingok
daily
compress
rotate 4
create
dateext
dateformat -%d%m%Y
dateyesterday
tasks:
- name: Creating log config file
copy:
dest: /{{ filename }}
content: |
{{ a }}
{
{{ config }}
}
The spaces show up if I add another line at the beginning of config without any spaces. Putting spaces before the config variable also doesn't work because it only affects the first line (missingok) and the rest would be without any spaces in front.
A:
Tell YAML how many spaces of indentation (relative to the parent level) the block scalar has, e.g.:
config: |2-
missingok
daily
compress
rotate 4
create
dateext
dateformat -%d%m%Y
dateyesterday
By giving 2 but having 4 spaces relative to config:, you'll get two spaces in front of every line.
A:
Given the simplified variables for testing
a: aaa
config: |-
missingok
daily
compress
Q: "Putting spaces before the config variable doesn't work because it only affects the first line."
- copy:
dest: /tmp/file1
content: |
{{ a }}
{
{{ config }}
}
gives
shell> cat /tmp/file1
aaa
{
missingok
daily
compress
}
A: Use the Jinja filter indent and format the output
- copy:
dest: /tmp/file1
content: |
{{ a }}
{
{{ config|indent(2) }}
}
gives what you want
shell> cat /tmp/file1
aaa
{
missingok
daily
compress
}
Notes
Formatting the value of a variable, i.e. prepending each line in config with 2 more spaces, isn't necessarily the best solution. Make your choice.
The Block Indentation Indicator can also solve the problem. For example, you declare the indentation of 2 spaces and prepend each line with 2 more spaces
config: |2-
missingok
daily
compress
Then the task
- copy:
dest: /tmp/file1
content: |
{{ a }}
{
{{ config }}
}
gives also what you want because each line in config starts with 2 spaces
shell> cat /tmp/file1
aaa
{
missingok
daily
compress
}
Credit to @Jefferson. I wonder why this post was deleted.
|
How to add two spaces before every line in a string YAML
|
I want the config to show up with 2 spaces before each line.
---
- hosts: localhost
vars:
filename: file1
a: aaa
config: |-
missingok
daily
compress
rotate 4
create
dateext
dateformat -%d%m%Y
dateyesterday
tasks:
- name: Creating log config file
copy:
dest: /{{ filename }}
content: |
{{ a }}
{
{{ config }}
}
The spaces show up if I add another line at the beginning of config without any spaces. Putting spaces before the config variable also doesn't work because it only affects the first line (missingok) and the rest would be without any spaces in front.
|
[
"Tell YAML how many spaces of indentation (relative to the parent level) the block scalar has, e.g.:\n config: |2-\n missingok\n daily\n compress\n rotate 4\n create\n dateext\n dateformat -%d%m%Y\n dateyesterday\n\nBy giving 2 but having 4 spaces relative to config:, you'll get two spaces in front of every line.\n",
"\nGiven the simplified variables for testing\n a: aaa\n config: |-\n missingok\n daily\n compress\n\n\n\nQ: \"Putting spaces before the config variable doesn't work because it only affects the first line.\"\n - copy:\n dest: /tmp/file1\n content: |\n {{ a }}\n {\n {{ config }}\n }\n\ngives\nshell> cat /tmp/file1 \naaa\n{\n missingok\ndaily\ncompress\n}\n\nA: Use the Jinja filter indent and format the output\n - copy:\n dest: /tmp/file1\n content: |\n {{ a }}\n {\n {{ config|indent(2) }}\n }\n\ngives what you want\nshell> cat /tmp/file1 \naaa\n{\n missingok\n daily\n compress\n}\n\n\n\nNotes\n\nFormating the value of a variable, i.e. prepending each line in config with 2 more spaces, doesn't have to be the best solution. Make your choice.\n\nThe Block Indentation Indicator can also solve the problem. For example, you declare the indentation of 2 spaces and prepend each line with 2 more spaces\n\n\n config: |2-\n missingok\n daily\n compress\n\nThen the task\n - copy:\n dest: /tmp/file1\n content: |\n {{ a }}\n {\n {{ config }}\n }\n\ngives also what you want because each line in config starts with 2 spaces\nshell> cat /tmp/file1 \naaa\n{\n missingok\n daily\n compress\n}\n\n\nCredit to @Jefferson. I wonder why this post was deleted.\n\n\n"
] |
[
4,
1
] |
[] |
[] |
[
"ansible",
"yaml"
] |
stackoverflow_0074659902_ansible_yaml.txt
|
Q:
Prevent input from overflow grid parent
Consider the following snippet:
#parent {
display: grid;
grid-template-columns: repeat(2, 1fr);
grid-template-rows: repeat(2, 1fr);
width: 275px;
border: 2px solid green;
padding: 10px;
}
input {
background-color: hotpink;
}
<div id="parent">
<input />
<input />
<input />
<input />
</div>
If you run the snippet, you will find that the inputs overflow the parent div along the x-axis and refuse to fit inside the div (tested in Chromium-based Edge). Basically, when you give the parent with display: grid a fixed width, the input children don't seem to fit.
I've tried all the properties I could think of, but none of them seemed to keep the inputs where they belong (I expect a nice 2x2 grid where the children fit evenly into the grid). How can I keep the inputs in the grid?
A:
You could use a percentage of the overall parent's width in the grid template columns of the parent's CSS. Since you have two columns, you can divide 100% by the number of columns and use that percentage: grid-template-columns: repeat(2, 50%);
* {
padding: 0;
margin: 0;
box-sizing: border-box;
}
#parent {
display: grid;
grid-template-columns: repeat(2, 50%);
grid-template-rows: repeat(2, 1fr);
width: 275px;
border: 2px solid green;
padding: 10px;
}
input {
background-color: hotpink;
}
<div id="parent">
<input />
<input />
<input />
<input />
</div>
A:
The input's width depends on its size attribute if there is no other width style on it, and the default size is 20.
MDN Link
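A sketch of another common fix: grid items default to min-width: auto, so each input's intrinsic width (driven by its size attribute) wins over the 1fr tracks. Overriding that lets the tracks control the width:
input {
  background-color: hotpink;
  width: 100%;   /* fill the grid cell */
  min-width: 0;  /* allow shrinking below the size-attribute width */
}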
|
Prevent input from overflow grid parent
|
Consider the following snippet:
#parent {
display: grid;
grid-template-columns: repeat(2, 1fr);
grid-template-rows: repeat(2, 1fr);
width: 275px;
border: 2px solid green;
padding: 10px;
}
input {
background-color: hotpink;
}
<div id="parent">
<input />
<input />
<input />
<input />
</div>
If you run the snippet, you will find that the inputs overflow the parent div along the x-axis and refuse to fit inside the div (tested in Chromium-based Edge). Basically, when you give the parent with display: grid a fixed width, the input children don't seem to fit.
I've tried all the properties I could think of, but none of them seemed to keep the inputs where they belong (I expect a nice 2x2 grid where the children fit evenly into the grid). How can I keep the inputs in the grid?
|
[
"You could use a percentage of the overall parents width in your grid template column for the parents css. Being you have two items, you can divide 100% by the amount of items and use that percentage grid-template-columns: repeat(2, 50%);\n\n\n* {\n padding: 0;\n margin: 0;\n box-sizing: border-box;\n}\n\n#parent {\n display: grid;\n grid-template-columns: repeat(2, 50%);\n grid-template-rows: repeat(2, 1fr);\n width: 275px;\n border: 2px solid green;\n padding: 10px;\n}\n\ninput {\n background-color: hotpink;\n}\n<div id=\"parent\">\n <input />\n <input />\n <input />\n <input />\n</div>\n\n\n\n",
"The input width is depends on its size if there is no other width style on it, and the default of size is 20.\n\nMDN Link\n"
] |
[
0,
0
] |
[] |
[] |
[
"css",
"css_grid",
"html"
] |
stackoverflow_0074663634_css_css_grid_html.txt
|
Q:
why does google map "containsLocation" not work in a function?
I am trying to understand why certain google map functions like "containsLocation" does not result in assigning the variable "origin" true when run in a function ( ex. getOrigin function below).
When I run just the bare code outside of a function in the switch statement it results in true.. but when I run the function getOrigin it does not set "origin" as true.
let origin;
function getOrigin(x, y) {
origin = google.maps.geometry.poly.containsLocation(
new google.maps.LatLng(x,y),
bermudaTriangle
)
}
switch(origin) {
case
origin = google.maps.geometry.poly.containsLocation(
new google.maps.LatLng(originLat, originLng),
bermudaTriangle): console.log(2+2); break;
case getOrigin(originLat, originLng): console.log(3+3); break;
}
A:
It looks like the getOrigin function is not returning anything, so the value of origin within the switch statement is undefined. This means that the case statement that calls getOrigin will never be reached. You could fix this by having the getOrigin function return the value of origin.
Additionally, it's not clear what bermudaTriangle is in the code you provided. If it's not defined, then the call to containsLocation will throw an error. You should make sure that bermudaTriangle is defined and has the correct value before calling containsLocation.
|
why does google map "containsLocation" not work in a function?
|
I am trying to understand why certain Google Maps functions like containsLocation do not result in assigning true to the variable origin when run inside a function (e.g., the getOrigin function below).
When I run just the bare code outside of a function in the switch statement, it results in true, but when I run the getOrigin function, it does not set origin to true.
let origin;
function getOrigin(x, y) {
origin = google.maps.geometry.poly.containsLocation(
new google.maps.LatLng(x,y),
bermudaTriangle
)
}
switch(origin) {
case
origin = google.maps.geometry.poly.containsLocation(
new google.maps.LatLng(originLat, originLng),
bermudaTriangle): console.log(2+2); break;
case getOrigin(originLat, originLng): console.log(3+3); break;
}
|
[
"It looks like the getOrigin function is not returning anything, so the value of origin within the switch statement is undefined. This means that the case statement that calls getOrigin will never be reached. You could fix this by having the getOrigin function return the value of origin.\nAdditionally, it's not clear what bermudaTriangle is in the code you provided. If it's not defined, then the call to containsLocation will throw an error. You should make sure that bermudaTriangle is defined and has the correct value before calling containsLocation.\n"
] |
[
1
] |
[] |
[] |
[
"function",
"google_maps",
"javascript"
] |
stackoverflow_0074663500_function_google_maps_javascript.txt
|
Q:
Packer provisioners failing on azure-arm
I have the following build in a Packer .hcl file. The first provisioner succeeds but the other two always fail. Does anyone have a working azure-arm provisioner to install software on a Windows image?
build {
sources = ["source.azure-arm.autogenerated_1"]
provisioner "powershell" {
inline = ["Add-WindowsFeature Web-Server", "while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit", "while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }"]
}
provisioner "powershell" {
# inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force", "Invoke-Expression((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"]
# inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))"]
inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"]
}
provisioner "powershell" {
inline = ["choco install -y 7zip", "choco install -y notepadplusplus"]
}
}
A:
There is a problem with the order. The following part of your provisioner is responsible for the generalization process, which should be run last
"while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit", "while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }"
Change the order of provisioners or add your commands above the generalization phase
build {
sources = ["source.azure-arm.autogenerated_1"]
provisioner "powershell" {
inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"]
}
provisioner "powershell" {
inline = ["choco install -y 7zip", "choco install -y notepadplusplus"]
}
provisioner "powershell" {
inline = [
"Add-WindowsFeature Web-Server",
"ADD YOUR COMMANDS HERE"
"while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
"while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }",
"& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit",
"while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }"
]
}
}
|
Packer provisioners failing on azure-arm
|
I have the following build in a Packer .hcl file. The first provisioner succeeds but the other two always fail. Does anyone have a working azure-arm provisioner to install software on a Windows image?
build {
sources = ["source.azure-arm.autogenerated_1"]
provisioner "powershell" {
inline = ["Add-WindowsFeature Web-Server", "while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }", "& $env:SystemRoot\\System32\\Sysprep\\Sysprep.exe /oobe /generalize /quiet /quit", "while($true) { $imageState = Get-ItemProperty HKLM:\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Setup\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }"]
}
provisioner "powershell" {
# inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force", "Invoke-Expression((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"]
# inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))"]
inline = ["Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"]
}
provisioner "powershell" {
inline = ["choco install -y 7zip", "choco install -y notepadplusplus"]
}
}
|
[
"There is a problem with the order. The following part of your provisioner is responsible for the generalization process, which should be run last\n\"while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }\", \"& $env:SystemRoot\\\\System32\\\\Sysprep\\\\Sysprep.exe /oobe /generalize /quiet /quit\", \"while($true) { $imageState = Get-ItemProperty HKLM:\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Setup\\\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }\"\n\nChange the order of provisioners or add your commands above the generalization phase\nbuild {\n sources = [\"source.azure-arm.autogenerated_1\"]\n\n provisioner \"powershell\" {\n inline = [\"Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))\"]\n }\n \n provisioner \"powershell\" {\n inline = [\"choco install -y 7zip\", \"choco install -y notepadplusplus\"]\n }\n\n provisioner \"powershell\" {\n inline = [\n \"Add-WindowsFeature Web-Server\",\n \"ADD YOUR COMMANDS HERE\"\n \"while ((Get-Service RdAgent).Status -ne 'Running') { Start-Sleep -s 5 }\",\n \"while ((Get-Service WindowsAzureGuestAgent).Status -ne 'Running') { Start-Sleep -s 5 }\",\n \"& $env:SystemRoot\\\\System32\\\\Sysprep\\\\Sysprep.exe /oobe /generalize /quiet /quit\",\n \"while($true) { $imageState = Get-ItemProperty HKLM:\\\\SOFTWARE\\\\Microsoft\\\\Windows\\\\CurrentVersion\\\\Setup\\\\State | Select ImageState; if($imageState.ImageState -ne 'IMAGE_STATE_GENERALIZE_RESEAL_TO_OOBE') { Write-Output $imageState.ImageState; Start-Sleep -s 10 } else { break } }\"\n ]\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"azure_resource_manager",
"packer"
] |
stackoverflow_0074579950_azure_resource_manager_packer.txt
|
Q:
I did not set a breakpoint but hit one; I used VS Code to debug the program
I did not set any breakpoints. My main program controls reading input and then performs verification. The first input is OK, but after the first input is verified, the second verification enters break mode. The program breaks at the line shown in the picture, and then I need to click Continue two or three times for the program to resume normal execution. I want to ask why this happens.
No breakpoints
Code of calling function with breakpoint in main program:
/* Verify whether the entered value is each value between 1 and max_value */
while(checkForDuplicationForNums(key, key_length_m) /* Check whether the entered value is repeated */
|| checkElemExceedsMax(key, key_length_m)
|| checkNumWithinMaxNotAppear(key, key_length_m)) {
illegalInputPrompt();
getInputNumArrayByGetchHaveSpaces(key_string, key);
}
Code snippet for breakpoint occurrence:
int checkForDuplicationForNums(int nums[], int max_value){
int* within_max_value = (int *)calloc(max_value, sizeof(int));
for(int i=0; i<max_value; i++) {
/* nums : 0~m-1 */
within_max_value[nums[i]]++;
}
for(int i=0; i<max_value; i++) {
if(within_max_value[i] > 1) {
return 1;
}
}
free(within_max_value);
return 0;
}
The location of the strange breakpoint:
free(within_max_value);
Program execution effect
The max_value is always 11 during the program running.
The first input makes the nums:
int nums[11] = {0, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9};
The second input makes the nums:
int nums[11] = {0, 11, 2, 3, 4, 5, 6, 7, 8, 9, 10};
Then the program breaks. nums[1] exceeds the value range of my expected input values: 1~11.
A:
In the second loop iteration of the second function call, the line
within_max_value[nums[i]]++;
is writing to the array within_max_value out of bounds.
According to the information provided in your question, in the second function call, num[1] will have the value 11. This means that the line
within_max_value[nums[i]]++;
is equivalent to
within_max_value[11]++;
in the second loop iteration of the second function call.
That expression is accessing the array within_max_value out of bounds, because valid indexes are 0 up to and including 10. The index 11 is out of bounds.
By writing to the array out of bounds, you are probably corrupting heap memory, which causes the next call to free to generate an exception, which causes the debugger to stop at that line.
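To illustrate, here is a minimal hardened sketch of the function (the validation policy is an assumption based on the stated 1~11 input range): it rejects out-of-range indexes before using them, and it frees the buffer on every path, unlike the original, which leaks the allocation on the early return 1.
int checkForDuplicationForNums(int nums[], int max_value) {
    int *within_max_value = (int *)calloc(max_value, sizeof(int));
    int duplicated = 0;
    for (int i = 0; i < max_value; i++) {
        /* guard: indexes must stay within 0..max_value-1 */
        if (nums[i] < 0 || nums[i] >= max_value) {
            duplicated = 1; /* treat out-of-range input as invalid */
            break;
        }
        within_max_value[nums[i]]++;
    }
    for (int i = 0; !duplicated && i < max_value; i++) {
        if (within_max_value[i] > 1) {
            duplicated = 1;
        }
    }
    free(within_max_value); /* freed on every path */
    return duplicated;
}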
|
I did not set a breakpoint but entered a breakpoint. I used vscode to debug the program
|
I did not set breakpoints. My main program controls the acquisition of input and then performs verification. The first input is OK, but after one input's content is verified, the second verification enters break mode. The program breaks at the line shown in the picture. Then I need to click Continue two or three times before the program continues executing normally. I want to ask why this happens?
No breakpoints
Code of calling function with breakpoint in main program:
/* Verify whether the entered value is each value between 1 and max_value */
while(checkForDuplicationForNums(key, key_length_m) /* Check whether the entered value is repeated */
|| checkElemExceedsMax(key, key_length_m)
|| checkNumWithinMaxNotAppear(key, key_length_m)) {
illegalInputPrompt();
getInputNumArrayByGetchHaveSpaces(key_string, key);
}
Code snippet for breakpoint occurrence:
int checkForDuplicationForNums(int nums[], int max_value){
int* within_max_value = (int *)calloc(max_value, sizeof(int));
for(int i=0; i<max_value; i++) {
/* nums : 0~m-1 */
within_max_value[nums[i]]++;
}
for(int i=0; i<max_value; i++) {
if(within_max_value[i] > 1) {
return 1;
}
}
free(within_max_value);
return 0;
}
The location of the strange breakpoint:
free(within_max_value);
Program execution effect
The max_value is always 11 during the program running.
The first input makes the nums:
int nums[11] = {0, 1, 1, 2, 3, 4, 5, 6, 7, 8, 9};
The second input makes the nums:
int nums[11] = {0, 11, 2, 3, 4, 5, 6, 7, 8, 9, 10};
Then the program breaks. nums[1] exceeds the value range of my expected input values: 1~11.
|
[
"In the second loop iteration of the second function call, the line\nwithin_max_value[nums[i]]++;\n\nis writing to the array within_max_value out of bounds.\nAccording to the information provided in your question, in the second function call, num[1] will have the value 11. This means that the line\nwithin_max_value[nums[i]]++;\n\nis equivalent to\nwithin_max_value[11]++;\n\nin the second loop iteration of the second function call.\nThat expression is accessing the array within_max_value out of bounds, because valid indexes are 0 up to and including 10. The index 11 is out of bounds.\nBy writing to the array out of bounds, you are probably corrupting heap memory, which causes the next call to free to generate an exception, which causes the debugger to stop at that line.\n"
] |
[
0
] |
[] |
[] |
[
"breakpoints",
"c",
"free",
"malloc",
"visual_studio_code"
] |
stackoverflow_0074663464_breakpoints_c_free_malloc_visual_studio_code.txt
|
Q:
google sheets- How to query multiple columns from a different sheet without having it combine the first column
I have a query that pulls specific ranges of columns from a different sheet. It is combining the top two rows. Any way to string together multiple ranges of columns from a different sheet and not combine the top row? thanks
example Query ( {'mainview'!, A1:L36, 'mainview'!n1:N36}) this does not work
First column not combined with second.
A:
If you are just trying to combine two ranges:
={mainview!A1:L36,mainview!N1:N36}
A:
Instead of a comma, use a semicolon:
={mainview!A1:L36; mainview!N3:N36}
|
google sheets- How to query multiple columns from a different sheet without having it combine the first column
|
I have a query that pulls specific ranges of columns from a different sheet. It is combining the top two rows. Any way to string together multiple ranges of columns from a different sheet and not combine the top row? thanks
example Query ( {'mainview'!, A1:L36, 'mainview'!n1:N36}) this does not work
First column not combined with second.
|
[
"If you are just trying to combine two ranges:\n={mainview!A1:L36,mainview!N1:N36}\n\n",
"Instead of a comma, use a semicolon:\n={mainview!A1:L36; mainview!N3:N36}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"google_sheets"
] |
stackoverflow_0074660013_google_sheets.txt
|
Q:
How to serialize a dictionary to an array of json objects
I have a dictionary as shown below:
var tableData = new Dictionary<string, string>();
tableData["name"] = "Sam";
tableData["city"] = "Wellington";
tableData["country"] = "New Zealand";
How can I convert the above dictionary to a serialized json data as below:
[
{ "key": "name", "val": "Sam" },
{ "key": "city", "val": "Wellington" },
{ "key": "country", "val": "New Zealand" }
]
A:
To convert a Dictionary to a JSON string, you can use the System.Text.Json namespace in C# to serialize the dictionary. Here is an example:
using System.Text.Json;
// ...
var tableData = new Dictionary<string, string>()
{
{ "name", "Sam" },
{ "city", "Wellington" },
{ "country", "New Zealand" },
};
// Convert the dictionary to a JSON string
string json = JsonSerializer.Serialize(tableData);
// The JSON string will actually look like this:
// { "name": "Sam", "city": "Wellington", "country": "New Zealand" }
Note that in the JSON string, the keys and values in the dictionary are represented as object properties rather than as a list of key-value pairs. If you want the JSON string to look exactly like the one in your example, you will need to convert the dictionary to a list of objects before serializing it. Here is how you can do that:
using System.Linq;
// ...
var tableData = new Dictionary<string, string>()
{
{ "name", "Sam" },
{ "city", "Wellington" },
{ "country", "New Zealand" },
};
// Convert the dictionary to a list of objects
var list = tableData.Select(kvp => new { key = kvp.Key, val = kvp.Value }).ToList();
// Convert the list to a JSON string
string json = JsonSerializer.Serialize(list);
// The JSON string should now look like this:
// [
// { "key": "name", "val": "Sam" },
// { "key": "city", "val": "Wellington" },
// { "key": "country", "val": "New Zealand" }
// ]
|
How to serialize a dictionary to an array of json objects
|
I have a dictionary as shown below:
var tableData = new Dictionary<string, string>();
tableData["name"] = "Sam";
tableData["city"] = "Wellington";
tableData["country"] = "New Zealand";
How can I convert the above dictionary to a serialized json data as below:
[
{ "key": "name", "val": "Sam" },
{ "key": "city", "val": "Wellington" },
{ "key": "country", "val": "New Zealand" }
]
|
[
"To convert a Dictionary to a JSON string, you can use the System.Text.Json namespace in C# to serialize the dictionary. Here is an example:\nusing System.Text.Json;\n\n// ...\n\nvar tableData = new Dictionary<string, string>()\n{\n { \"name\", \"Sam\" },\n { \"city\", \"Wellington\" },\n { \"country\", \"New Zealand\" },\n};\n\n// Convert the dictionary to a JSON string\nstring json = JsonSerializer.Serialize(tableData);\n\n// The JSON string should look like this:\n// [\n// { \"key\": \"name\", \"val\": \"Sam\" },\n// { \"key\": \"city\", \"val\": \"Wellington\" },\n// { \"key\": \"country\", \"val\": \"New Zealand\" }\n// ]\n\nNote that in the JSON string, the keys and values in the dictionary are represented as object properties rather than as a list of key-value pairs. If you want the JSON string to look exactly like the one in your example, you will need to convert the dictionary to a list of objects before serializing it. Here is how you can do that:\nusing System.Linq;\n\n// ...\n\nvar tableData = new Dictionary<string, string>()\n{\n { \"name\", \"Sam\" },\n { \"city\", \"Wellington\" },\n { \"country\", \"New Zealand\" },\n};\n\n// Convert the dictionary to a list of objects\nvar list = tableData.Select(kvp => new { key = kvp.Key, val = kvp.Value }).ToList();\n\n// Convert the list to a JSON string\nstring json = JsonSerializer.Serialize(list);\n\n// The JSON string should now look like this:\n// [\n// { \"key\": \"name\", \"val\": \"Sam\" },\n// { \"key\": \"city\", \"val\": \"Wellington\" },\n// { \"key\": \"country\", \"val\": \"New Zealand\" }\n// ]\n\n"
] |
[
1
] |
[] |
[] |
[
"c#",
"json",
"serialization"
] |
stackoverflow_0074663715_c#_json_serialization.txt
|
Q:
How can I count the digits of a number with leading zeroes in python
In a number without leading zeroes I would do this
import math
num = 1001
digits = int(math.log10(num))+1
print (digits)
>>> 4
but if I use a number with leading zeroes like "0001" I get
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
I would like to be able to count the digits including the leading zeroes. What would be the best way to achieve this?
A:
You can't reasonably have a number with leading zeros unless it's a string!
Therefore, if you're accepting a string, just remove them and check the difference in length
>>> value = input("enter a number: ")
enter a number: 0001
>>> value_clean = value.lstrip("0")
>>> leading_zeros = len(value) - len(value_clean)
>>> print("leading zeros: {}".format(leading_zeros))
3
If you only wanted the number from a bad input, int() can directly convert it for you instead
>>> int("0001")
1
A:
I'm dumb. The answer was simple: keep the number as a string, since leading zeros only exist in string form. All I needed was:
num_string = "0001"
print(len(num_string))
result:
>>> 4
|
How can I count the digits of a number with leading zeroes in python
|
In a number without leading zeroes I would do this
import math
num = 1001
digits = int(math.log10(num))+1
print (digits)
>>> 4
but if I use a number with leading zeroes like "0001" I get
SyntaxError: leading zeros in decimal integer literals are not permitted; use an 0o prefix for octal integers
I would like to be able to count the digits including the leading zeroes. What would be the best way to achieve this?
|
[
"You can't reasonably have a number with leading digits unless it's a string!\nTherefore, if you're accepting a string, just remove them and check the difference in length\n>>> value = input(\"enter a number: \")\nenter a number: 0001\n>>> value_clean = value.lstrip(\"0\")\n>>> leading_zeros = len(value) - len(value_clean)\n>>> print(\"leading zeros: {}\".format(leading_zeros))\n3\n\nIf you only wanted the number from a bad input, int() can directly convert it for you instead\n>>> int(\"0001\")\n1\n\n",
"I'm dumb. The answer was simple. All I needed was:\nnum = 0001\nnum_string = str(num) \nprint (len(num_string))\n\n\nresult:\n>>> 4\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"digits",
"python"
] |
stackoverflow_0074663343_digits_python.txt
|
Q:
event file is not being triggered
So I am usually used to slapping the code in a singular spaghetti-o index.js file, yet this time I have two folders:
One for commands
One for events
The main index.js only has listeners for the above two in order to execute events and commands.
I am trying to get a messageCreate event to trigger in a messageCreate.js within the events folder. I messed around with the intents in both the index.js and messageCreate.js event files, yet nothing seems to go through, unless I did something wrong.
Here's what I am trying to do:
const { Events } = require('discord.js');
module.exports = {
name: Events.MessageCreate,
async execute(messageCreate) {
if(message.content == "Give me a random phrase for no reason!") {
var ran = [("A"),
("Some"),
("Ah"),
("You"),
("They"),
("He"),
("She"),
("Was"),
("Were"),
("Weren't"),
("Were you"),
("Weren't you"),
("Are you"),
("Aren't you"),
No need to pay attention to the rest of the code; what it does is basically simple: it grabs three variables, mashes them together, and outputs a random spaghetti of words. Yet whenever the trigger message is sent within the server, the code does not execute whatsoever, and there is no error message either.
I tried putting it into the main index.js file to see if it would do anything different, and nothing seemed to work out.
Here are the intents in the index.js file, which I tried copying into the messageCreate.js event file as well with no success:
const { Client, Collection, Events, GatewayIntentBits, GuildMessages, DirectMessages } = require('discord.js');
How can I get the messageCreate.js to be executed once the trigger message is sent?
A:
Got it to work.
In the main index.js file, simply do this:
const client = new Client({ intents: [
GatewayIntentBits.Guilds,
GatewayIntentBits.MessageContent,
GatewayIntentBits.GuildMessages,
],
partials: [Partials.Channel],
});
Then in the messageCreate.js file within the "events" folder, I did this:
const { Events, MessageContent, client } = require('discord.js');
module.exports = {
name: Events.MessageCreate,
async execute(message) {
if(message.content == "Give me a random phrase for no reason!") {
var ran = [("A"),
("Some"),
("Ah"),
("You"),
("They"),
("He"),
("She"),
("Was"),
|
event file is not being triggered
|
So I am usually used to slapping the code in a singular spaghetti-o index.js file, yet this time I have two folders:
One for commands
One for events
The main index.js only has listeners for the above two in order to execute events and commands.
I am trying to get a messageCreate event to trigger in a messageCreate.js within the events folder. I messed around with the intents in both the index.js and messageCreate.js event files, yet nothing seems to go through, unless I did something wrong.
Here's what I am trying to do:
const { Events } = require('discord.js');
module.exports = {
name: Events.MessageCreate,
async execute(messageCreate) {
if(message.content == "Give me a random phrase for no reason!") {
var ran = [("A"),
("Some"),
("Ah"),
("You"),
("They"),
("He"),
("She"),
("Was"),
("Were"),
("Weren't"),
("Were you"),
("Weren't you"),
("Are you"),
("Aren't you"),
No need to pay attention to the rest of the code; what it does is basically simple: it grabs three variables, mashes them together, and outputs a random spaghetti of words. Yet whenever the trigger message is sent within the server, the code does not execute whatsoever, and there is no error message either.
I tried putting it into the main index.js file to see if it would do anything different, and nothing seemed to work out.
Here are the intents in the index.js file, which I tried copying into the messageCreate.js event file as well with no success:
const { Client, Collection, Events, GatewayIntentBits, GuildMessages, DirectMessages } = require('discord.js');
How can I get the messageCreate.js to be executed once the trigger message is sent?
|
[
"Got it to work.\nIn the main index.js file, simply do this:\nconst client = new Client({ intents: [\n GatewayIntentBits.Guilds,\n GatewayIntentBits.MessageContent,\n GatewayIntentBits.GuildMessages,\n ],\n partials: [Partials.Channel],\n });\n\nThen in the messageCreate.js file within the \"events\" folder, I did this:\nconst { Events, MessageContent, client } = require('discord.js');\n\n\nmodule.exports = {\n name: Events.MessageCreate,\n async execute(message) {\n if(message.content == \"Give me a random phrase for no reason!\") {\n var ran = [(\"A\"),\n (\"Some\"),\n (\"Ah\"),\n (\"You\"),\n (\"They\"),\n (\"He\"),\n (\"She\"),\n (\"Was\"),\n\n"
] |
[
0
] |
[] |
[] |
[
"discord.js",
"node.js"
] |
stackoverflow_0074657387_discord.js_node.js.txt
|
Q:
fread() and fwrite() using strings
I'm trying to implement a simple fread() and fwrite() example using strings. The program gives me segfault or free(): invalid pointer errors. Below is the example code I'm working with.
#include <fstream>
#include <iostream>
static bool file_read(FILE* file) {
std::string value="abcd";
std::string retrieved;
size_t read, write;
write = fwrite(&value, sizeof(value), 1, file);
fseek(file, 0, SEEK_SET);
read = fread(&retrieved, sizeof(value), 1, file);
return true;
}
int main(int argc, char *argv[]) {
FILE *file = NULL;
file = fopen("file_test", "wb+");
file_read(file);
fclose(file);
}
I checked if file is opened correctly and that retrieved has the same value as value. I don't think I'm freeing any variables in my code. I'm suspecting that fread is causing all the trouble.
fread(&retrieved[0], sizeof(char), 4, file)
This doesn't read the value into retrieved either, and this seems to be where I'm going wrong.
A:
if you want to access the actual character buffer inside std::string you need to use value.c_str()
the length of the string is not given by sizeof, use value.length()
you cannot read from a file directly into a std::string using fread. You must read it into an intermediate char [] buffer and load from there. Or use ifstream
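A minimal sketch of that approach (the fixed four-byte buffer is sized to match "abcd" purely for illustration):
std::string value = "abcd";

// write the character data, not the std::string object itself
fwrite(value.c_str(), sizeof(char), value.length(), file);
fseek(file, 0, SEEK_SET);

// read into an intermediate buffer, then construct a std::string from it
char buffer[4];
size_t got = fread(buffer, sizeof(char), sizeof(buffer), file);
std::string retrieved(buffer, got);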
A:
std::string is not a trivial type. It contains several private data members, including a pointer to character data that may reside in memory that is outside of the string object. So you can't read/write the raw bytes of the string object itself, like you are trying to do. You need to instead serialize its character data separately, eg:
#include <fstream>
#include <iostream>
#include <string>
#include <cstdio>
static bool file_writeStr(std::FILE* file, const std::string &value) {
size_t len = value.size();
bool res = (std::fwrite(reinterpret_cast<char*>(&len), sizeof(len), 1, file) == 1);
if (res) res = (std::fwrite(value.c_str(), len, 1, file) == 1);
return res;
}
static bool file_readStr(std::FILE* file, std::string &value) {
size_t len;
value.clear();
bool res = (std::fread(reinterpret_cast<char*>(&len), sizeof(len), 1, file) == 1);
if (res && len > 0) {
value.resize(len);
res = (std::fread(&value[0], len, 1, file) == 1);
}
return res;
}
static bool file_test(std::FILE* file) {
std::string value = "abcd";
std::string retrieved;
bool res = file_writeStr(file, value);
if (res) {
std::fseek(file, 0, SEEK_SET);
res = file_readStr(file, retrieved);
}
return res;
}
int main() {
std::FILE *file = std::fopen("file_test", "wb+");
if (file_test(file))
std::cout << "success";
else
std::cerr << "failed";
std::fclose(file);
}
|
fread() and fwrite() using strings
|
I'm trying to implement a simple fread() and fwrite() example using strings. The program gives me segfault or free(): invalid pointer errors. Below is the example code I'm working with.
#include <fstream>
#include <iostream>
static bool file_read(FILE* file) {
std::string value="abcd";
std::string retrieved;
size_t read, write;
write = fwrite(&value, sizeof(value), 1, file);
fseek(file, 0, SEEK_SET);
read = fread(&retrieved, sizeof(value), 1, file);
return true;
}
int main(int argc, char *argv[]) {
FILE *file = NULL;
file = fopen("file_test", "wb+");
file_read(file);
fclose(file);
}
I checked if file is opened correctly and that retrieved has the same value as value. I don't think I'm freeing any variables in my code. I'm suspecting that fread is causing all the trouble.
fread(&retrieved[0], sizeof(char), 4, file)
This doesn't read the value into retrieved either, and this seems to be where I'm going wrong.
|
[
"if you want to access the actual character buffer inside std::string you need to use value.c_str()\nthe length of the string is not given by sizeof, use value.length()\nyou cannot read from a file directly into a std::string using fread. You must read it into an intermediate char [] buffer and load from there. Or use ifstream\n",
"std::string is not a trivial type. It contains several private data members, including a pointer to character data that may reside in memory that is outside of the string object. So you can't read/write the raw bytes of the string object itself, like you are trying to do. You need to instead serialize its character data separately, eg:\n#include <fstream>\n#include <iostream>\n#include <string>\n#include <cstdio>\n\nstatic bool file_writeStr(std::FILE* file, const std::string &value) {\n size_t len = value.size();\n bool res = (std::fwrite(reinterpret_cast<char*>(&len), sizeof(len), 1, file) == 1);\n if (res) res = (std::fwrite(value.c_str(), len, 1, file) == 1);\n return res;\n}\n\nstatic bool file_readStr(std::FILE* file, std::string &value) {\n size_t len;\n value.clear();\n bool res = (std::fread(reinterpret_cast<char*>(&len), sizeof(len), 1, file) == 1);\n if (res && len > 0) {\n value.resize(len);\n res = (std::fread(&value[0], len, 1, file) == 1);\n }\n return res;\n}\n\nstatic bool file_test(std::FILE* file) {\n std::string value = \"abcd\";\n std::string retrieved;\n bool res = file_writeStr(file, value);\n if (res) {\n std::fseek(file, 0, SEEK_SET);\n res = file_readStr(file, retrieved);\n }\n return res;\n}\n\nint main() {\n std::FILE *file = std::fopen(\"file_test\", \"wb+\");\n if (file_test(file))\n std::cout << \"success\";\n else\n std::cerr << \"failed\";\n std::fclose(file);\n}\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"c++",
"fread",
"segmentation_fault"
] |
stackoverflow_0074662673_c++_fread_segmentation_fault.txt
|
Q:
Integrating Paypal payments in Flutter app
I just started on my Flutter journey and need to integrate Paypal payments into my app. However, there seems to be no standard Flutter API provided by Paypal and I couldn't find an acceptable answer anywhere.
A:
You can achieve this using WebView.
PayPal provides some APIs to do transactions. Using those APIs you can achieve this.
Read this article
Paypal Payment Gateway Integration in Flutter
This article demonstrates the steps you need to follow.
A:
Braintree is the payment processor provided by PayPal to accept safe and secure payments, with Drop-in UI and custom UI design features.
It also provides Apple Pay and Google Pay support for accepting payments.
Open Braintree Sandbox Account
Get the tokenization key from Braintree account
Add the flutter_braintree dependencies in your pubspec.yaml file
dependencies:
flutter_braintree: ^0.5.3+1
Create custom UI
Paypal Credit card: accept the following from the user
a. Card Number
b. Expiration Month
c. Expiration Year
Create a Braintree Request
final request = BraintreeCreditCardRequest(
  cardNumber: '4115511771161116',
  expirationMonth: '02',
  expirationYear: '2020',
);
Ask Braintree to tokenize it
BraintreePaymentMethodNonce result = await Braintree.tokenizeCreditCard(
'<Insert your tokenization key>',
request,
);
For PayPal
create Paypal request
final request = BraintreePayPalRequest(amount: '50.00');
then launch Paypal Request
BraintreePaymentMethodNonce result = await Braintree.requestPaypalNonce(
"Insert your tokenization key or client token here",
request,
);
Get the NONCE from Braintree after a successful payment, or a failure message if the user cancels the PayPal payment.
Save this NONCE for future reference in your database
A:
You will find how to do this in this package:
flutter PayPal package
A:
There is a package in pub.dev called flutter_paypal https://pub.dev/packages/flutter_paypal
You can also check this YouTube video, where he clears up how to use it and how it works:
https://youtu.be/QfLPdh771fA
|
Integrating Paypal payments in Flutter app
|
I just started on my Flutter journey and need to integrate Paypal payments into my app. However, there seems to be no standard Flutter API provided by Paypal and I couldn't find an acceptable answer anywhere.
|
[
"You can achieve this using WebView. \nPayPal provides some APIs to do transaction. Using those APIs you can achieve this.\nRead this article\nPaypal Payment Gateway Integration in Flutter\nThis article demonstrates the steps you need to follow.\n",
"Braintree is the payment processor provided by Paypal to accept safe and secure payments with feature Drop-in UI and custom UI design.\nIt also provides the Apple Pay, Google Pay feature to accept the payments.\n\nOpen Braintree Sandbox Account\n\nGet the tokenization key from Braintree account\n\nAdd the flutter_braintree dependencies in your pubspec.yaml file\ndependencies:\nflutter_braintree: ^0.5.3+1\n\nCreate custom UI\n\n\nPaypal Credit card: accept the followings from user\na. Card Number\nb. Expiration Month\nc. Expiration Year\nCreate a Braintree Request\nfinal request = BraintreeCreditCardRequest(\n\n card number: '4115511771161116',\n\n expiration month: '02',\n\n expiration year: '2020',\n\n);\n\nAsk Braintree to tokenization it\nBraintreePaymentMethodNonce result = await Braintree.tokenizeCreditCard(\n\n '<Insert your tokenization key>',\n\n request,\n\n);\n\nFor PayPal\ncreate Paypal request\nfinal request = BraintreePayPalRequest(amount: '50.00');\n\nthen launch Paypal Request\nBraintreePaymentMethodNonce result = await Braintree.requestPaypalNonce(\n\n \"Insert your tokenization key or client token here\",\n\n request,\n\n);\n\n\nGet the NONCE from Braintree after successful payment and get the failure message on cancel the Paypal Payment by the user.\n\nSave this NONCE for future reference in your database\n\n\n",
"you will find to do in this package\nflutter PayPal package\n",
"There is a package in pub.dev called flutter_paypal https://pub.dev/packages/flutter_paypal\nYou can also check this youtube video, he cleared up everything that how to use it or how it works\nhttps://youtu.be/QfLPdh771fA\n"
] |
[
11,
6,
0,
0
] |
[] |
[] |
[
"flutter",
"payment_gateway",
"paypal"
] |
stackoverflow_0059549242_flutter_payment_gateway_paypal.txt
|
Q:
Reduce Android App Storage Size built through MAUI
I built a basic android app that takes some inputs from the user, does some calculations and displays the output.
But the final size of the application is turning out to be 114MB, which is huge. :(
I tried deleting the user data by clicking on Clear Storage, but then I am unable to start the app itself.
Can someone help me with a way to reduce the app storage size?
A:
I am unable to start the app itself
This must be some crash; read the log and solve it.
As for reducing the app storage size, you can find all the directories you use and add a clear-cache entry point so users can clean the cache themselves, as sketched below.
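A rough sketch of that idea (assuming the app only needs to wipe its own MAUI cache folder; adapt the paths to wherever your app actually writes data):
using System.IO;
using Microsoft.Maui.Storage;

public static class CacheCleaner
{
    // Deletes everything under the app's cache directory.
    // FileSystem.CacheDirectory is the standard MAUI cache location.
    public static void ClearCache()
    {
        var cacheDir = new DirectoryInfo(FileSystem.CacheDirectory);
        foreach (var file in cacheDir.EnumerateFiles())
            file.Delete();
        foreach (var dir in cacheDir.EnumerateDirectories())
            dir.Delete(recursive: true);
    }
}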
|
Reduce Android App Storage Size built through MAUI
|
I built a basic android app that takes some inputs from the user, does some calculations and displays the output.
But the final size of the application is turning out to be 114MB, which is huge. :(
I tried deleting the user data by clicking on Clear Storage, but then I am unable to start the app itself.
Can someone help me with a way to reduce the app storage size?
|
[
"\nI am unable to start the app itself\n\nThis must be some crash, read the log and solve it.\nAs for reduce app storage size, you can find all the directories you used, and make a clean cache entrance for user to clean the cache by themselves.\n"
] |
[
0
] |
[] |
[] |
[
"android",
"maui"
] |
stackoverflow_0074663711_android_maui.txt
|
Q:
getting tensorflow to run on GPU
I've been trying to get this to work forever and still no luck
I have:
GTX 1050 Ti (on Lenovo Legion laptop)
the laptop also has an Intel UHD Graphics 630 (i'm not sure if maybe this is interfering?)
Anaconda
Visual Studio
Python 3.9.13
CUDA 11.2
cuDNN 8.1
I added these to the PATH:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp
finally I installed tensorflow and created its own environment
and I still can't get it to read my GPU
basically followed https://www.youtube.com/watch?v=hHWkvEcDBO0&t=295s
AND I'm still having no luck.
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
yields only information on the CPU
Can anyone please help?
A:
You can upgrade tensorflow to 2.0. It should solve your problem.
A:
Check your tensorflow version and its compatibility with your GPU, and update your GPU drivers. CUDA 9/10 would do the job.
follow the official tensorflow link:
https://www.tensorflow.org/install/pip#windows-native_1
Do all the steps in the same environment in anaconda.
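Whichever versions you settle on, a quick sanity check (assuming TensorFlow 2.x) is to ask TensorFlow directly which GPUs it can see:
import tensorflow as tf

print(tf.__version__)
# An empty list here means TensorFlow is running CPU-only.
print(tf.config.list_physical_devices('GPU'))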
|
getting tensorflow to run on GPU
|
I've been trying to get this to work forever and still no luck
I have:
GTX 1050 Ti (on Lenovo Legion laptop)
the laptop also has an Intel UHD Graphics 630 (i'm not sure if maybe this is interfering?)
Anaconda
Visual Studio
Python 3.9.13
CUDA 11.2
cuDNN 8.1
I added these to the PATH:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\bin
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.2\libnvvp
finally I installed tensorflow and created its own environment
and I still can't get it to read my GPU
basically followed https://www.youtube.com/watch?v=hHWkvEcDBO0&t=295s
AND I'm still having no luck.
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
yields only information on the CPU
Can anyone please help?
|
[
"You can upgrade tensorflow to 2.0. It should solve your problem.\n",
"Check your tensorflow version and compatability with GPU, update your GPU drivers. CUDA 9/10 would do the job.\nfollow the official tensorflow link:\nhttps://www.tensorflow.org/install/pip#windows-native_1\nDo all the steps in the same environment in anaconda.\n"
] |
[
0,
0
] |
[] |
[] |
[
"python",
"tensorflow"
] |
stackoverflow_0074663667_python_tensorflow.txt
|
Q:
How to parse byte stream in Java properly
Hello boys and girls.
I'm developing a terminal-based client application which communicates over TCP/IP to a server and sends and receives an arbitrary number of raw bytes. Each byte represents a command which I need to parse into Java classes representing these commands, for further use.
My question is how I should parse these bytes efficiently. I don't want to end up with a bunch of nested ifs and switch-cases.
I have the data classes for these commands ready to go. I just need to figure out the proper way of doing the parsing.
Here's some sample specifications:
A byte stream can be, for example, these integers:
[1,24,2,65,26,18,3,0,239,19,0,14,0,42,65,110,110,97,32,109,121,121,106,228,42,15,20,5,149,45,87]
First byte is 0x01 which is start of header containing only one byte.
Second one is the length which is the number of bytes in certain
commands, only one byte here also.
The next can be any command where the first byte is the command, 0x02
in this case, and it follows n number of bytes which are included in
the command.
And so on. At the end there are checksum-related bytes.
Sample class representing the set_cursor command:
/**
* Sets the cursor position.
* Syntax: 0x0E | position
*/
public class SET_CURSOR {
private final int hexCommand = 0x0e;
private int position;
public SET_CURSOR(int position) {
    this.position = position;
}
public int getPosition() {
return position;
}
public int getHexCommand() {
return hexCommand;
}
}
A:
When parsing byte streams like this the best Design Pattern to use is the Command Pattern. Each of the different Commands will act as handlers to process the next several bytes in the stream.
interface Command{
//depending on your situation,
//either use InputStream if you don't know
//how many bytes each Command will use
// or the the commands will use an unknown number of bytes
//or a large number of bytes that performance
//would be affected by copying everything.
void execute(InputStream in);
//or you can use an array if the
//if the number of bytes is known and small.
void execute( byte[] data);
}
Then you can have a map containing each Command object for each of the byte "opcodes".
Map<Byte, Command> commands = ...
commands.put(Byte.parseByte("0x0e", 16), new SetCursorCommand() );
...
Then you can parse the message and act on the Commands:
InputStream in = ... //our byte array as inputstream
byte header = (byte)in.read();
int length = in.read();
byte commandKey = (byte)in.read();
byte[] data = new byte[length]
in.read(data);
Command command = commands.get(commandKey);
command.execute(data);
Can you have multiple Commands in the same byte message? If so you could then easily wrap the Command fetching and parsing in a loop until the EOF.
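For example, a minimal version of that dispatch loop (a sketch; the handling of unknown opcodes is an assumption) could reuse the commands map and InputStream from above:
int next;
while ((next = in.read()) != -1) {
    byte commandKey = (byte) next;
    Command command = commands.get(commandKey);
    if (command == null) {
        throw new IOException("Unknown command: " + commandKey);
    }
    command.execute(in); // each Command consumes exactly the bytes it needs
}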
A:
you can try JBBP library for that https://github.com/raydac/java-binary-block-parser
@Bin class Parsed { byte header; byte command; byte [] data; int checksum;}
Parsed parsed = JBBPParser.prepare("byte header; ubyte len; byte command; byte [len] data; int checksum;").parse(theArray).mapTo(Parsed.class);
A:
This is a huge and complex subject.
It depends on the type of the data that you will read.
Is it a looooong stream ?
Is it a lot of small independent structures/objects ?
Do you have some references between structures/objects of your flow ?
I recently wrote a byte serialization/deserialization library for a proprietary software.
I took a visitor-like approach with type conversion, the same way JAXB works.
I define my object as a Java class. Initialize the parser on the class, and then pass it the bytes to unserialize or the Java object to serialize.
The type detection (based on the first byte of your flow) is done forward with a simple case matching mechanism (1 => ClassA, 15 => ClassF, ...).
EDIT: It may be complex or overloaded with code (embedding objects), but keep in mind that nowadays Java optimizes this well, and it keeps the code clear and understandable.
A:
ByteBuffer can be used for parsing byte stream - What is the use of ByteBuffer in Java?:
byte[] bytesArray = {4, 2, 6, 5, 3, 2, 1};
ByteBuffer bb = ByteBuffer.wrap(bytesArray);
int intFromBB = bb.order(ByteOrder.LITTLE_ENDIAN).getInt();
byte byteFromBB = bb.get();
short shortFromBB = bb.getShort();
A:
This is a variable-length binary data parsing problem, which usually involves multiple parsing passes and requires reverse addressing.
Recommend you a java tool(FastProto) to parse binary quickly.
import org.indunet.fastproto.annotation.*;
public class Message {
@UInt8Type(offset = 0)
Integer header; // header
@UInt8Type(offset = 1)
Integer length; // length
@BinaryType(offset = 2, length = -3)
byte[] binary; // binary without header, length and checksum
@UInt16Type(offset = -2)
Integer checksum; // 2 bytes if you use crc16
}
byte[] bytes = ... // the binary received.
Message message = FastProto.parse(bytes, Message.class);
// check data integrity
...
public class SET_CURSOR {
@UInt8Type(offset = 0)
int hexCommand;
@UInt8Type(offset = 1)
int position;
}
byte[] bytes = ... message.binary
SET_CURSOR set_cursor = FastProto.parse(bytes, SET_CURSOR.class);
If the project can solve your problem, please give a star, thanks.
GitHub Repo: https://github.com/indunet/fastproto
|
How to parse byte stream in Java properly
|
Hello boys and girls.
I'm developing a terminal-based client application which communicates over TCP/IP to a server and sends and receives an arbitrary number of raw bytes. Each byte represents a command which I need to parse into Java classes representing these commands, for further use.
My question is how I should parse these bytes efficiently. I don't want to end up with a bunch of nested ifs and switch-cases.
I have the data classes for these commands ready to go. I just need to figure out the proper way of doing the parsing.
Here's some sample specifications:
A byte stream can be, for example, these integers:
[1,24,2,65,26,18,3,0,239,19,0,14,0,42,65,110,110,97,32,109,121,121,106,228,42,15,20,5,149,45,87]
First byte is 0x01 which is start of header containing only one byte.
Second one is the length which is the number of bytes in certain
commands, only one byte here also.
The next can be any command where the first byte is the command, 0x02
in this case, and it follows n number of bytes which are included in
the command.
And so on. At the end there are checksum-related bytes.
Sample class representing the set_cursor command:
/**
* Sets the cursor position.
* Syntax: 0x0E | position
*/
public class SET_CURSOR {
private final int hexCommand = 0x0e;
private int position;
public SET_CURSOR(int position) {
    this.position = position;
}
public int getPosition() {
return position;
}
public int getHexCommand() {
return hexCommand;
}
}
|
[
"When parsing byte streams like this the best Design Pattern to use is the Command Pattern. Each of the different Commands will act as handlers to process the next several bytes in the stream.\ninterface Command{\n\n //depending on your situation, \n //either use InputStream if you don't know\n //how many bytes each Command will use\n // or the the commands will use an unknown number of bytes\n //or a large number of bytes that performance\n //would be affected by copying everything.\n void execute(InputStream in);\n\n //or you can use an array if the\n //if the number of bytes is known and small.\n void execute( byte[] data);\n\n}\n\nThen you can have a map containing each Command object for each of the byte \"opcodes\".\nMap<Byte, Command> commands = ...\n\ncommands.put(Byte.parseByte(\"0x0e\", 16), new SetCursorCommand() );\n...\n\nThen you can parse the message and act on the Commands:\nInputStream in = ... //our byte array as inputstream\nbyte header = (byte)in.read();\nint length = in.read();\nbyte commandKey = (byte)in.read(); \nbyte[] data = new byte[length]\nin.read(data);\n\nCommand command = commands.get(commandKey);\ncommand.execute(data);\n\nCan you have multiple Commands in the same byte message? If so you could then easily wrap the Command fetching and parsing in a loop until the EOF.\n",
"you can try JBBP library for that https://github.com/raydac/java-binary-block-parser\n@Bin class Parsed { byte header; byte command; byte [] data; int checksum;}\nParsed parsed = JBBPParser.prepare(\"byte header; ubyte len; byte command; byte [len] data; int checksum;\").parse(theArray).mapTo(Parsed.class);\n\n",
"This is a huge and complex subject.\nIt depends on the type of the data that you will read.\n\nIs it a looooong stream ?\nIs it a lot of small independent structures/objects ?\nDo you have some references between structures/objects of your flow ?\n\nI recently wrote a byte serialization/deserialization library for a proprietary software.\nI took a visitor-like approach with type conversion, the same way JAXB works.\nI define my object as a Java class. Initialize the parser on the class, and then pass it the bytes to unserialize or the Java object to serialize.\nThe type detection (based on the first byte of your flow) is done forward with a simple case matching mechanism (1 => ClassA, 15 => ClassF, ...).\nEDIT: It may be complex or overloaded with code (embedding objects) but keep in mind that nowadays, java optimize this well, it keeps code clear and understandable.\n",
"ByteBuffer can be used for parsing byte stream - What is the use of ByteBuffer in Java?:\nbyte[] bytesArray = {4, 2, 6, 5, 3, 2, 1};\nByteBuffer bb = ByteBuffer.wrap(bytesArray);\nint intFromBB = bb.order(ByteOrder.LITTLE_ENDIAN).getInt(); \nbyte byteFromBB = bb.get(); \nshort shortFromBB = bb.getShort(); \n\n",
"This is a variable-length binary data parsing problem, which usually involves multiple parsing and requiring reverse addressing.\nRecommend you a java tool(FastProto) to parse binary quickly.\nimport org.indunet.fastproto.annotation.*;\n\npublic class Message {\n @UInt8Type(offset = 0)\n Integer header; // header\n \n @UInt8Type(offset = 1)\n Integer length; // length\n\n @BinaryType(offset = 2, length = -3)\n byte[] binary; // binary without header, length and checksum\n\n @UInt16Type(offset = -2)\n Integer checksum; // 2 bytes if you use crc16\n}\n\n\nbyte[] bytes = ... // the binary received.\nMessage message = FastProto.parse(bytes, Message.class);\n\n// check data integrity\n...\n\npublic class SET_CURSOR {\n @UInt8Type(offset = 0)\n int hexCommand;\n \n @UInt8Type(offset = 1)\n int position;\n}\n\nbyte[] bytes = ... message.binary\nSET_CURSOR set_cursor = FastProto.parse(bytes, SET_CURSOR.class);\n\nIf the project can solve your problem, please give a star, thanks.\nGitHub Repo: https://github.com/indunet/fastproto\n"
] |
[
4,
2,
1,
0,
0
] |
[] |
[] |
[
"byte",
"java",
"parsing",
"stream"
] |
stackoverflow_0018785702_byte_java_parsing_stream.txt
|