content (string, 86-88.9k chars) | title (string, 0-150 chars) | question (string, 1-35.8k chars) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 30-130 chars)
---|---|---|---|---|---|---|---|---|
Q:
r.js with old build system (bower) error with bootstrap
I am trying to move away from Bower (since Reactjs is discontinued for it, and maybe other components are too)
Used bower-away to make package.json
It seems to get further; however, bootstrap.js produces an error while being "traced" (or whatever r.js calls it)
Command run
r.js -o $(BUILD_TMP_DIR)/static/js/build.js
r.js error: line 778 unexpected token
However, this is just the normal bootstrap.js file, installed with
npm install bootstrap --save
I also looked at bootstrap.js itself and it looks like a normal JS file
Is there a way to fix it? Maybe not even use r.js to produce build.js?
What are my options?
Update 1
I am new to Frontend, this also seems like older/legacy code.
Yes, I think the project uses RequireJS
https://github.com/marcinguy/betterscan-ce/blob/master/quantifiedcode/frontend/src/js/config.js
The code I am trying to fix is open source:
https://github.com/marcinguy/betterscan-ce/tree/master/quantifiedcode/frontend
You just need to run make to build it
https://github.com/marcinguy/betterscan-ce/blob/master/quantifiedcode/frontend/Makefile
I modified it with bower-away to use only package.json. Changed Makefile and src/js/config.js to work with node_modules instead of bower_components
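For illustration, a path remapping in a RequireJS config might look like the sketch below; the module names and relative paths are placeholders, not taken from the real config.js.
// Hypothetical fragment of a RequireJS config after switching from bower_components
// to node_modules -- the paths and module names here are examples only.
require.config({
  paths: {
    react: "../../node_modules/react/umd/react.production.min",
    bootstrap: "../../node_modules/bootstrap/dist/js/bootstrap"
  },
  shim: {
    // Older Bootstrap builds attach jQuery plugins to the global jQuery object,
    // so legacy setups typically shim them with a jQuery dependency.
    bootstrap: { deps: ["jquery"] }
  }
});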
r.js worked for reactjs and prism, but errors on bootstrap
installed
npm install reactjs --save
npm install prism --save
etc
Here is bower.json
https://github.com/marcinguy/betterscan-ce/blob/master/quantifiedcode/frontend/bower.json
Maybe I should try a different npm install of bootstrap? But which one?
I have Node 14
A:
It sounds like you're running into a problem with the RequireJS optimizer (r.js) when trying to build your project. It's difficult to say exactly what the issue is without seeing the specific code, but there are a few potential solutions you could try.
One option would be to try using a different build tool, such as Webpack or Browserify, instead of r.js. These tools may be better suited to building modern JavaScript projects that use newer technologies like React.
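For illustration, a minimal webpack setup that produces the same build.js artifact might look like the sketch below; the entry path is a placeholder and not taken from the project.
// webpack.config.js -- a minimal sketch; './src/js/main.js' is an assumed entry point
const path = require('path');

module.exports = {
  mode: 'production',
  entry: './src/js/main.js',
  output: {
    path: path.resolve(__dirname, 'static/js'),
    filename: 'build.js' // same artifact name the Makefile expects from r.js
  }
};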
Another option would be to try updating the version of bootstrap that you're using. If you're using an old version of bootstrap that is not compatible with the current version of r.js, that could be causing the error you're seeing.
Finally, if you're still having trouble, it may be helpful to post the specific code that's causing the error so that others can help you troubleshoot the issue.
|
r.js with old build system (bower) error with bootstrap
|
I am trying to move away from bower (since Reactjs is discontinued for it, maybe also other components)
Used bower-away to make package.json
Seem to went further, however bootstrap.js while ”traced” or how it is called by r.js produces error
Command run
r.js -o $(BUILD_TMP_DIR)/static/js/build.js
r.js error: line 778 unexpected token
However this is just normal bootstrap.js file
npm install bootstrap --save
I looked also at the bootstrap.js and it looks like a normal JS file
Is there a way to fix it? Maybe not even use r.js to produce build.js?
what are my options?
Update 1
I am new to Frontend, this also seems like older/legacy code.
Yes, project I think uses requirejs
https://github.com/marcinguy/betterscan-ce/blob/master/quantifiedcode/frontend/src/js/config.js
The code is open source I try to fix:
https://github.com/marcinguy/betterscan-ce/tree/master/quantifiedcode/frontend
You need to just run make to build it
https://github.com/marcinguy/betterscan-ce/blob/master/quantifiedcode/frontend/Makefile
I modified it with bower-away to use only package.json. Changed Makefile and src/js/config.js to work with node_modules instead of bower_components
r.js worked for reactjs and prism, but errors on bootstrap
installed
npm install reactjs --save
npm install prism --save
etc
Here is bower.json
https://github.com/marcinguy/betterscan-ce/blob/master/quantifiedcode/frontend/bower.json
Maybe I should try different npm install bootstrap? but which?
I have node 14
|
[
"It sounds like you're running into a problem with the RequireJS optimizer (r.js) when trying to build your project. It's difficult to say exactly what the issue is without seeing the specific code, but there are a few potential solutions you could try.\nOne option would be to try using a different build tool, such as Webpack or Browserify, instead of r.js. These tools may be better suited to building modern JavaScript projects that use newer technologies like React.\nAnother option would be to try updating the version of bootstrap that you're using. If you're using an old version of bootstrap that is not compatible with the current version of r.js, that could be causing the error you're seeing.\nFinally, if you're still having trouble, it may be helpful to post the specific code that's causing the error so that others can help you troubleshoot the issue.\n"
] |
[
1
] |
[] |
[] |
[
"bootstrap_5",
"bower",
"javascript",
"node_modules",
"r.js"
] |
stackoverflow_0074665127_bootstrap_5_bower_javascript_node_modules_r.js.txt
|
Q:
Difference between Divide and Conquer Algo and Dynamic Programming
What is the difference between Divide and Conquer Algorithms and Dynamic Programming Algorithms? How are the two terms different? I do not understand the difference between them.
Please take a simple example to explain any difference between the two and on what ground they seem to be similar.
A:
Divide and Conquer
Divide and Conquer works by dividing the problem into sub-problems, conquering each sub-problem recursively, and combining these solutions.
Dynamic Programming
Dynamic Programming is a technique for solving problems with overlapping subproblems. Each sub-problem is solved only once and the result of each sub-problem is stored in a table (generally implemented as an array or a hash table) for future reference. These sub-solutions may be used to obtain the original solution, and the technique of storing the sub-problem solutions is known as memoization.
You may think of DP = recursion + re-use
A classic example to understand the difference would be to see both these approaches towards obtaining the nth fibonacci number. Check this material from MIT.
Divide and Conquer approach
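(The linked material is not reproduced here; the sketch below shows what the naive divide and conquer recursion typically looks like, written in JavaScript for illustration.)
// Naive divide and conquer: fib(n) is split into the two sub-problems
// fib(n-1) and fib(n-2); the same sub-problems get recomputed many times.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}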
Dynamic Programming Approach
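(Again as a sketch only: the same recursion with a cache, so each sub-problem is solved once.)
// Dynamic programming via memoization: identical recursion, but results are cached.
function fibMemo(n, memo = new Map()) {
  if (n < 2) return n;
  if (!memo.has(n)) {
    memo.set(n, fibMemo(n - 1, memo) + fibMemo(n - 2, memo));
  }
  return memo.get(n);
}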
A:
Dynamic Programming and Divide-and-Conquer Similarities
As I see it for now, I can say that dynamic programming is an extension of the divide and conquer paradigm.
I would not treat them as something completely different. Because they both work by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.
So why do we still have different paradigm names then, and why did I call dynamic programming an extension? It is because the dynamic programming approach may be applied only if the problem has certain restrictions or prerequisites. And after that, dynamic programming extends the divide and conquer approach with a memoization or tabulation technique.
Let’s go step by step…
Dynamic Programming Prerequisites/Restrictions
As we’ve just discovered, there are two key attributes that a divide and conquer problem must have in order for dynamic programming to be applicable:
Optimal substructure — optimal solution can be constructed from optimal solutions of its subproblems
Overlapping sub-problems — problem can be broken down into subproblems which are reused several times or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems
Once these two conditions are met we can say that this divide and conquer problem may be solved using dynamic programming approach.
Dynamic Programming Extension for Divide and Conquer
The dynamic programming approach extends the divide and conquer approach with two techniques (memoization and tabulation) that both have the purpose of storing and re-using sub-problem solutions, which may drastically improve performance. For example, a naive recursive implementation of the Fibonacci function has a time complexity of O(2^n), whereas the DP solution does the same in only O(n) time.
Memoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. The memoized fib function would thus look like this:
memFib(n) {
if (mem[n] is undefined)
if (n < 2) result = n
else result = memFib(n-2) + memFib(n-1)
mem[n] = result
return mem[n]
}
Tabulation (bottom-up cache filling) is similar but focuses on filling the entries of the cache. Computing the values in the cache is easiest done iteratively. The tabulation version of fib would look like this:
tabFib(n) {
mem[0] = 0
mem[1] = 1
for i = 2...n
mem[i] = mem[i-2] + mem[i-1]
return mem[n]
}
You may read more about memoization and tabulation comparison here.
The main idea you should grasp here is that, because our divide and conquer problem has overlapping sub-problems, caching the sub-problem solutions becomes possible, and thus memoization/tabulation step onto the scene.
So What Is the Difference Between DP and DC After All?
Since we’re now familiar with DP prerequisites and its methodologies we’re ready to put all that was mentioned above into one picture.
If you want to see code examples you may take a look at more detailed explanation here where you'll find two algorithm examples: Binary Search and Minimum Edit Distance (Levenshtein Distance) that are illustrating the difference between DP and DC.
A:
The other difference between divide and conquer and dynamic programming could be:
Divide and conquer:
Does more work on the sub-problems and hence has more time consumption.
In divide and conquer the sub-problems are independent of each other.
Dynamic programming:
Solves the sub-problems only once and then stores the results in a table.
In dynamic programming the sub-problems are not independent.
A:
Sometimes when programming recursively, you call the function with the same parameters multiple times, which is unnecessary.
The famous example Fibonacci numbers:
index: 1,2,3,4,5,6...
Fibonacci number: 1,1,2,3,5,8...
function F(n) {
if (n < 3)
return 1
else
return F(n-1) + F(n-2)
}
Let's run F(5):
F(5) = F(4) + F(3)
= {F(3)+F(2)} + {F(2)+F(1)}
= {[F(2)+F(1)]+1} + {1+1}
= 1+1+1+1+1
So we have called :
1 times F(4)
2 times F(3)
3 times F(2)
2 times F(1)
Dynamic Programming approach: if you call a function with the same parameter more than once, save the result into a variable to directly access it on next time. The iterative way:
if (n==1 || n==2)
return 1
else
f1=1, f2=1
for i=3 to n
f = f1 + f2
f1 = f2
f2 = f
Let's call F(5) again:
fibo1 = 1
fibo2 = 1
fibo3 = (fibo1 + fibo2) = 1 + 1 = 2
fibo4 = (fibo2 + fibo3) = 1 + 2 = 3
fibo5 = (fibo3 + fibo4) = 2 + 3 = 5
As you can see, whenever you need the multiple call you just access the corresponding variable to get the value instead of recalculating it.
By the way, dynamic programming doesn't mean to convert a recursive code into an iterative code. You can also save the subresults into a variable if you want a recursive code. In this case the technique is called memoization. For our example it looks like this:
// declare and initialize a dictionary
var dict = new Dictionary<int,int>();
for i=1 to n
dict[i] = -1
function F(n) {
if (n < 3)
return 1
else
{
if (dict[n] == -1)
dict[n] = F(n-1) + F(n-2)
return dict[n]
}
}
So the relationship to Divide and Conquer is that D&C algorithms rely on recursion, and some versions of them have this "multiple function calls with the same parameter" issue. Search for "matrix chain multiplication" and "longest common subsequence" for examples where DP is needed to improve the T(n) of a D&C algorithm.
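As a sketch of that idea (not taken from the answer itself), a memoized longest common subsequence shows the same pattern on a two-parameter problem:
// Longest common subsequence length, memoized over the (i, j) sub-problems.
function lcs(a, b, i = 0, j = 0, memo = new Map()) {
  if (i === a.length || j === b.length) return 0;
  const key = i + ',' + j;
  if (!memo.has(key)) {
    const best = a[i] === b[j]
      ? 1 + lcs(a, b, i + 1, j + 1, memo)
      : Math.max(lcs(a, b, i + 1, j, memo), lcs(a, b, i, j + 1, memo));
    memo.set(key, best);
  }
  return memo.get(key);
}
// Example: lcs("ABCBDAB", "BDCAB") returns 4.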
A:
I assume you have already read Wikipedia and other academic resources on this, so I won't recycle any of that information. I must also caveat that I am not a computer science expert by any means, but I'll share my two cents on my understanding of these topics...
Dynamic Programming
Breaks the problem down into discrete subproblems. The recursive algorithm for the Fibonacci sequence is an example of Dynamic Programming, because it solves for fib(n) by first solving for fib(n-1). In order to solve the original problem, it solves a different problem.
Divide and Conquer
These algorithms typically solve similar pieces of the problem, and then put them together at the end. Mergesort is a classic example of divide and conquer. The main difference between this example and the Fibonacci example is that in a mergesort, the division can (theoretically) be arbitrary, and no matter how you slice it up, you are still merging and sorting. The same amount of work has to be done to mergesort the array, no matter how you divide it up. Solving for fib(52) requires more steps than solving for fib(2).
A:
I think of Divide & Conquer as a recursive approach and Dynamic Programming as table filling.
For example, Merge Sort is a Divide & Conquer algorithm, as in each step, you split the array into two halves, recursively call Merge Sort upon the two halves and then merge them.
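A sketch of that structure in JavaScript (for illustration, not tied to any particular implementation):
// Divide: split the array in half; conquer: sort each half recursively; combine: merge.
function mergeSort(arr) {
  if (arr.length <= 1) return arr;
  const mid = Math.floor(arr.length / 2);
  return merge(mergeSort(arr.slice(0, mid)), mergeSort(arr.slice(mid)));
}

function merge(left, right) {
  const out = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    out.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return out.concat(left.slice(i), right.slice(j));
}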
Knapsack is a Dynamic Programming algorithm as you are filling a table representing optimal solutions to subproblems of the overall knapsack. Each entry in the table corresponds to the maximum value you can carry in a bag of weight w given items 1-j.
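As an illustration of that table-filling idea, here is a sketch of 0/1 knapsack using a one-dimensional, space-optimised version of the table described above; the items and capacity in the example are made up.
// best[w] holds the maximum value achievable with capacity w using the items seen so far.
function knapsack(items, capacity) {
  const best = new Array(capacity + 1).fill(0);
  for (const { weight, value } of items) {
    // Iterate capacities downwards so each item is used at most once.
    for (let w = capacity; w >= weight; w--) {
      best[w] = Math.max(best[w], best[w - weight] + value);
    }
  }
  return best[capacity];
}
// Example: knapsack([{ weight: 2, value: 3 }, { weight: 3, value: 4 }], 5) returns 7.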
A:
Divide and Conquer involves three steps at each level of recursion:
Divide the problem into subproblems.
Conquer the subproblems by solving them recursively.
Combine the solution for subproblems into the solution for original problem.
It is a top-down approach.
It does more work on subproblems and hence has more time
consumption.
eg. n-th term of Fibonacci series can be computed in O(2^n) time complexity.
Dynamic Programming involves the following four steps:
1. Characterise the structure of optimal solutions.
2. Recursively define the values of optimal solutions.
3. Compute the value of optimal solutions.
4. Construct an Optimal Solution from computed information.
It is a Bottom-up approach.
Less time consumption than divide and conquer since we make use of the values computed earlier, rather than computing again.
eg. n-th term of Fibonacci series can be computed in O(n) time complexity.
For easier understanding, let's see divide and conquer as a brute force solution and its optimisation as dynamic programming.
N.B. divide and conquer algorithms with overlapping subproblems can only be optimised with dp.
A:
Divide and Conquer
They break the problem into non-overlapping sub-problems
Example: factorial numbers i.e. fact(n) = n*fact(n-1)
fact(5) = 5* fact(4) = 5 * (4 * fact(3))= 5 * 4 * (3 *fact(2))= 5 * 4 * 3 * 2 * (fact(1))
As we can see above, no fact(x) is repeated, so factorial has non-overlapping sub-problems.
Dynamic Programming
They break the problem into overlapping sub-problems
Example: Fibonacci numbers i.e. fib(n) = fib(n-1) + fib(n-2)
fib(5) = fib(4) + fib(3) = (fib(3)+fib(2)) + (fib(2)+fib(1))
As we can see above, fib(4) and fib(3) both use fib(2). Similarly, many fib(x) calls get repeated; that's why Fibonacci has overlapping sub-problems.
As a result of this repetition of sub-problems, in DP we can keep such results in a table and save computation effort. This is called memoization.
A:
Divide and Conquer
In this, the problem is solved in the following three steps:
1. Divide - Dividing into number of sub-problems
2. Conquer - Conquering by solving sub-problems recursively
3. Combine - Combining sub-problem solutions to get original problem's solution
Recursive approach
Top Down technique
Example: Merge Sort
Dynamic Programming
In this, the problem is solved in the following steps:
1. Defining structure of optimal solution
2. Recursively defining the values of optimal solutions.
3. Obtaining values of optimal solution in bottom-up fashion
4. Getting final optimal solution from obtained values
Non-Recursive
Bottom Up Technique
Example: Strassen's Matrix Multiplication
A:
Divide and Conquer:
This paradigm involves three stages:
Divide the problem into smaller sub-problems
Conquer, i.e., solve these smaller sub-problems
Combine these sub-problems' solutions to get the final answer.
Dynamic Programming:
DP is an optimization of recursive solutions. The primary difference it makes is that it stores the solutions to sub-problems, which can later be accessed while finding solutions to the remaining sub-problems. This is done so that we don't have to calculate the solution to a sub-problem every time; instead we can simply look it up in memory to retrieve its value, given that it has been solved earlier. We can simply add this as a base case in the recursion. For example, if we are solving a problem through recursion, we can store the solutions to sub-problems in an array and access them by adding the relevant code in one of our base cases in the recursive method.
There are two ways in which DP is done:
Consider a problem: To find factorial of x.
Tabulation: We use the bottom-up approach, that is, we go from the smallest numbers all the way up to x to find the solution.
Pseudo Code:
1. int array; array[0] = 1
2. for i=1, i<=x, i++
3. array[i] = array[i-1]*i
Memoization: We use the top-down approach, that is, we take the problem, break it down into smaller parts, and solve them to get the final solution.
Pseudo Code:
fac(x):
1. int array
2. if(x==0): return 1
3. if(array[x]!=null): return array[x]
4. return array[x] = x*fac(x-1)
|
Difference between Divide and Conquer Algo and Dynamic Programming
|
What is the difference between Divide and Conquer Algorithms and Dynamic Programming Algorithms? How are the two terms different? I do not understand the difference between them.
Please take a simple example to explain any difference between the two and on what ground they seem to be similar.
|
[
"Divide and Conquer\nDivide and Conquer works by dividing the problem into sub-problems, conquer each sub-problem recursively and combine these solutions.\nDynamic Programming\nDynamic Programming is a technique for solving problems with overlapping subproblems. Each sub-problem is solved only once and the result of each sub-problem is stored in a table ( generally implemented as an array or a hash table) for future references. These sub-solutions may be used to obtain the original solution and the technique of storing the sub-problem solutions is known as memoization.\nYou may think of DP = recursion + re-use\nA classic example to understand the difference would be to see both these approaches towards obtaining the nth fibonacci number. Check this material from MIT.\n\nDivide and Conquer approach\n\nDynamic Programming Approach\n\n",
"Dynamic Programming and Divide-and-Conquer Similarities\nAs I see it for now I can say that dynamic programming is an extension of divide and conquer paradigm.\nI would not treat them as something completely different. Because they both work by recursively breaking down a problem into two or more sub-problems of the same or related type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem.\nSo why do we still have different paradigm names then and why I called dynamic programming an extension. It is because dynamic programming approach may be applied to the problem only if the problem has certain restrictions or prerequisites. And after that dynamic programming extends divide and conquer approach with memoization or tabulation technique.\nLet’s go step by step…\nDynamic Programming Prerequisites/Restrictions\nAs we’ve just discovered there are two key attributes that divide and conquer problem must have in order for dynamic programming to be applicable:\n\nOptimal substructure — optimal solution can be constructed from optimal solutions of its subproblems\n\nOverlapping sub-problems — problem can be broken down into subproblems which are reused several times or a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems\n\n\nOnce these two conditions are met we can say that this divide and conquer problem may be solved using dynamic programming approach.\nDynamic Programming Extension for Divide and Conquer\nDynamic programming approach extends divide and conquer approach with two techniques (memoization and tabulation) that both have a purpose of storing and re-using sub-problems solutions that may drastically improve performance. For example naive recursive implementation of Fibonacci function has time complexity of O(2^n) where DP solution doing the same with only O(n) time.\nMemoization (top-down cache filling) refers to the technique of caching and reusing previously computed results. The memoized fib function would thus look like this:\nmemFib(n) {\n if (mem[n] is undefined)\n if (n < 2) result = n\n else result = memFib(n-2) + memFib(n-1)\n \n mem[n] = result\n return mem[n]\n}\n\nTabulation (bottom-up cache filling) is similar but focuses on filling the entries of the cache. Computing the values in the cache is easiest done iteratively. The tabulation version of fib would look like this:\ntabFib(n) {\n mem[0] = 0\n mem[1] = 1\n for i = 2...n\n mem[i] = mem[i-2] + mem[i-1]\n return mem[n]\n}\n\nYou may read more about memoization and tabulation comparison here.\nThe main idea you should grasp here is that because our divide and conquer problem has overlapping sub-problems the caching of sub-problem solutions becomes possible and thus memoization/tabulation step up onto the scene.\nSo What the Difference Between DP and DC After All\nSince we’re now familiar with DP prerequisites and its methodologies we’re ready to put all that was mentioned above into one picture.\n\nIf you want to see code examples you may take a look at more detailed explanation here where you'll find two algorithm examples: Binary Search and Minimum Edit Distance (Levenshtein Distance) that are illustrating the difference between DP and DC.\n",
"The other difference between divide and conquer and dynamic programming could be:\nDivide and conquer:\n\nDoes more work on the sub-problems and hence has more time consumption.\nIn divide and conquer the sub-problems are independent of each other.\n\nDynamic programming:\n\nSolves the sub-problems only once and then stores it in the table.\nIn dynamic programming the sub-problem are not independent.\n\n",
"sometimes when programming recursivly, you call the function with the same parameters multiple times which is unnecassary.\nThe famous example Fibonacci numbers: \n index: 1,2,3,4,5,6...\nFibonacci number: 1,1,2,3,5,8...\n\nfunction F(n) {\n if (n < 3)\n return 1\n else\n return F(n-1) + F(n-2)\n}\n\nLet's run F(5):\nF(5) = F(4) + F(3)\n = {F(3)+F(2)} + {F(2)+F(1)}\n = {[F(2)+F(1)]+1} + {1+1}\n = 1+1+1+1+1\n\nSo we have called :\n1 times F(4)\n2 times F(3)\n3 times F(2)\n2 times F(1)\nDynamic Programming approach: if you call a function with the same parameter more than once, save the result into a variable to directly access it on next time. The iterative way:\nif (n==1 || n==2)\n return 1\nelse\n f1=1, f2=1\n for i=3 to n\n f = f1 + f2\n f1 = f2\n f2 = f\n\nLet's call F(5) again:\nfibo1 = 1\nfibo2 = 1 \nfibo3 = (fibo1 + fibo2) = 1 + 1 = 2\nfibo4 = (fibo2 + fibo3) = 1 + 2 = 3\nfibo5 = (fibo3 + fibo4) = 2 + 3 = 5\n\nAs you can see, whenever you need the multiple call you just access the corresponding variable to get the value instead of recalculating it.\nBy the way, dynamic programming doesn't mean to convert a recursive code into an iterative code. You can also save the subresults into a variable if you want a recursive code. In this case the technique is called memoization. For our example it looks like this:\n// declare and initialize a dictionary\nvar dict = new Dictionary<int,int>();\nfor i=1 to n\n dict[i] = -1\n\nfunction F(n) {\n if (n < 3)\n return 1\n else\n {\n if (dict[n] == -1)\n dict[n] = F(n-1) + F(n-2)\n\n return dict[n] \n }\n}\n\nSo the relationship to the Divide and Conquer is that D&D algorithms rely on recursion. And some versions of them has this \"multiple function call with the same parameter issue.\" Search for \"matrix chain multiplication\" and \"longest common subsequence\" for such examples where DP is needed to improve the T(n) of D&D algo.\n",
"I assume you have already read Wikipedia and other academic resources on this, so I won't recycle any of that information. I must also caveat that I am not a computer science expert by any means, but I'll share my two cents on my understanding of these topics...\nDynamic Programming\nBreaks the problem down into discrete subproblems. The recursive algorithm for the Fibonacci sequence is an example of Dynamic Programming, because it solves for fib(n) by first solving for fib(n-1). In order to solve the original problem, it solves a different problem.\nDivide and Conquer\nThese algorithms typically solve similar pieces of the problem, and then put them together at the end. Mergesort is a classic example of divide and conquer. The main difference between this example and the Fibonacci example is that in a mergesort, the division can (theoretically) be arbitrary, and no matter how you slice it up, you are still merging and sorting. The same amount of work has to be done to mergesort the array, no matter how you divide it up. Solving for fib(52) requires more steps than solving for fib(2).\n",
"I think of Divide & Conquer as an recursive approach and Dynamic Programming as table filling. \nFor example, Merge Sort is a Divide & Conquer algorithm, as in each step, you split the array into two halves, recursively call Merge Sort upon the two halves and then merge them. \nKnapsack is a Dynamic Programming algorithm as you are filling a table representing optimal solutions to subproblems of the overall knapsack. Each entry in the table corresponds to the maximum value you can carry in a bag of weight w given items 1-j.\n",
"Divide and Conquer involves three steps at each level of recursion:\n\nDivide the problem into subproblems.\nConquer the subproblems by solving them recursively.\nCombine the solution for subproblems into the solution for original problem.\n\nIt is a top-down approach.\nIt does more work on subproblems and hence has more time\nconsumption.\neg. n-th term of Fibonacci series can be computed in O(2^n) time complexity.\n\n\n\nDynamic Programming involves the following four steps: \n 1. Characterise the structure of optimal solutions.\n 2. Recursively define the values of optimal solutions.\n 3. Compute the value of optimal solutions.\n 4. Construct an Optimal Solution from computed information.\n\nIt is a Bottom-up approach.\nLess time consumption than divide and conquer since we make use of the values computed earlier, rather than computing again.\neg. n-th term of Fibonacci series can be computed in O(n) time complexity. \n\nFor easier understanding, lets see divide and conquer as a brute force solution and its optimisation as dynamic programming.\n N.B. divide and conquer algorithms with overlapping subproblems can only be optimised with dp.\n",
"\nDivide and Conquer\n\nThey broke into non-overlapping sub-problems\nExample: factorial numbers i.e. fact(n) = n*fact(n-1)\n\n\n\nfact(5) = 5* fact(4) = 5 * (4 * fact(3))= 5 * 4 * (3 *fact(2))= 5 * 4 * 3 * 2 * (fact(1))\n\nAs we can see above, no fact(x) is repeated so factorial has non overlapping problems.\n\nDynamic Programming\n\nThey Broke into overlapping sub-problems\nExample: Fibonacci numbers i.e. fib(n) = fib(n-1) + fib(n-2)\n\n\n\nfib(5) = fib(4) + fib(3) = (fib(3)+fib(2)) + (fib(2)+fib(1))\n\nAs we can see above, fib(4) and fib(3) both use fib(2). similarly so many fib(x) gets repeated. that's why Fibonacci has overlapping sub-problems.\n\nAs a result of the repetition of sub-problem in DP, we can keep such results in a table and save computation effort. this is called as memoization\n\n",
"Divide and Conquer\n\nIn this problem is solved in following three steps:\n1. Divide - Dividing into number of sub-problems\n2. Conquer - Conquering by solving sub-problems recursively\n3. Combine - Combining sub-problem solutions to get original problem's solution\nRecursive approach\nTop Down technique\nExample: Merge Sort\n\nDynamic Programming\n\nIn this the problem is solved in following steps:\n1. Defining structure of optimal solution\n2. Defines value of optimal solutions repeatedly.\n3. Obtaining values of optimal solution in bottom-up fashion\n4. Getting final optimal solution from obtained values\nNon-Recursive\nBottom Up Technique\nExample: Strassen's Matrix Multiplication\n\n",
"Divide and Conquer:\nThis paradigm involves three stages:\n\nDivide the problem into smaller sub-problems\nConquer, i.e., solve these smaller sub-problems\nCombine these sub-problems' solutions to get the final answer.\n\nDynamic Programming:\nDP is an optimization of recursive solutions. The primary difference it makes is that it stores the solution to sub-problems, which can later be accessed during the process of finding solutions of the remaining sub-problems. This is done so that we don't have to calculate the solution to a sub-problem every time, rather we can simply look it up the computer memory to retrieve its value, given that it has been solved earlier. We can simply add this as our base case in recursion. For example, we are solving a problem through recursion, we can store the solutions to sub-problems in an array and access them by adding the relevant code in one of our base cases in the recursive method.\nThere are two ways in which DP is done:\nConsider a problem: To find factorial of x.\n\nTabulation: We use the bottom up approach, that is we go from the smallest numbers all the way upto x, to find the solution.\n\nPseudo Code:\n 1. int array\n 2. for int=1, i<=x, i++\n 3. array[i] = array[i-1]*i\n\n\nMemoization: We use the top down approach, that is we take the problem and then break it down into smaller parts and solve them, to get the final solution\n\nPseudo Code:\n fac():\n 1. int array\n 2. if(x==0): return 1\n 3. if(array[x]!=null): return array[x]\n 4. return array[x] = x*fac(x-1)\n\n"
] |
[
209,
70,
32,
19,
12,
9,
5,
3,
1,
0
] |
[] |
[] |
[
"algorithm",
"divide_and_conquer",
"dynamic_programming"
] |
stackoverflow_0013538459_algorithm_divide_and_conquer_dynamic_programming.txt
|
Q:
Can anyone give more info about the useEffect hook based on experience
I understand a bit about the useEffect hook, but I think there's still more knowledge to grasp, some of which is not in the documentation. Please, any contribution will help a lot, y'all.
Some of my questions are:
Does useEffect get called on initial render even in production just like in development?
If it does, how do we control this the best way?
How can we use a clean up function on this Hook?
How can we make asynchronous calls in useEffect?
My attempts at useEffect usually make me feel like a bad developer
A:
Yes, the useEffect hook is called on initial render in both development and production environments. To control when useEffect is called, you can provide a second argument to the hook that is an array of values. The useEffect hook will only be called if one of the values in the array has changed.
Here is an example:
useEffect(() => {
// Perform some side effect
}, [value1, value2]);
In the example above, the useEffect hook will only be called if either value1 or value2 has changed.
To use a cleanup function with useEffect, you can return a function from the callback passed to the hook. The returned function will be called when the component is unmounted or the useEffect hook is called again.
Here is an example:
useEffect(() => {
// Perform some side effect
return () => {
// Clean up the side effect
};
}, [value1, value2]);
To make asynchronous calls in useEffect, you can use the async keyword and the await keyword to wait for the asynchronous operation to complete. Here is an example:
useEffect(() => {
const fetchData = async () => {
const response = await fetch('https://my-api.com/data');
const data = await response.json();
// Use the data
};
fetchData();
}, [value1, value2]);
In the example above, the fetchData function is declared as async and the await keyword is used to wait for the fetch operation to complete and for the response to be converted to JSON.
A:
useEffect is a very powerful hook. Regarding your question:
useEffect(() => {}, []) - this version, with an empty dependency array, will run the callback once after the initial render
you can control useEffect with the dependencies in [] and, based on these dependencies, you can place some logic inside the callback function.
the clean-up function runs before your component unmounts; it is a good place to remove listeners or close connections to resources like databases, cameras, etc.
Example of async call
useEffect(() => {
// declare the data fetching function
const fetchData = async () => {
const data = await fetch('https://yourapi.com');
}
// call the function
fetchData()
// make sure to catch any error
.catch(console.error);
}, [])
A:
Please take a look at react docs and react beta docs.
It always runs when your component mounts, after the first render regardless of the environment. In development mode when strict mode is on, it runs twice:
When Strict Mode is on, React will run one extra development-only setup+cleanup cycle before the first real setup. This is a stress-test that ensures that your cleanup logic “mirrors” your setup logic and that it stops or undoes whatever the setup is doing. If this causes a problem, you need to implement the cleanup function.
I'm not really sure what you mean by controlling it the best way. Your effect or setup code runs whenever the component mounts. Maybe
How to handle the Effect firing twice in development? can help you. You sometimes might want to prevent the effect from being executed when the component mounts; you can skip the effect by using a ref. See this stackoverflow question
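One such ref-based guard (a sketch of the pattern those questions describe, not an official recommendation; the component and prop names are made up) could look roughly like this:
import { useEffect, useRef } from "react";

function OnceOnMount({ onReady }) {
  const didRun = useRef(false);

  useEffect(() => {
    // Skip the extra development-only invocation caused by Strict Mode.
    if (didRun.current) return;
    didRun.current = true;
    onReady();
  }, [onReady]);

  return null;
}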
The function you return in the useEffect does the clean-up for you. For instance, if you add an event listener inside useEffect, you remove the listener inside the function you return from it. See this link
useEffect(() => {
const listener = () => { /* Do something */ };
window.addEventListener("click", listener);
return () => {
window.removeEventListener("click", listener);
};
}, []);
Yes you can. See this stackoverflow question and fetching data in docs
useEffect(() => {
async function asyncFunction() {
/* Do something */
}
asyncFunction();
}, []);
Update:
Take a look at You Might Not Need an Effect
. It explains some situations which you might not need an effect at all.
Removing unnecessary Effects will make your code easier to follow, faster to run, and less error-prone.
Update 2:
You can probably skip this part for now, but it might help you to have a better grasp of useEffect, event handlers and what to expect in the future.
Separating Events from Effects tries to explain the differences between effects and event handlers, why distinguishing between those two is important and using event handlers inside effects.
Event handlers only re-run when you perform the same interaction again. Unlike event handlers, Effects re-synchronize if some value they read, like a prop or a state variable, is different from what it was during the last render. Sometimes, you also want a mix of both behaviors: an Effect that re-runs in response to some values but not others. This page will teach you how to do that.
Sometimes, you might use an event handler which has access to the props or the state inside an effect. But you don't want the useEffect to be triggered every time the values used in the event handler change. The following example is taken from useEffect shouldn’t re-fire when event handlers change.
function Chat({ selectedRoom }) {
const [muted, setMuted] = useState(false);
const theme = useContext(ThemeContext);
useEffect(() => {
const socket = createSocket('/chat/' + selectedRoom);
socket.on('connected', async () => {
await checkConnection(selectedRoom);
showToast(theme, 'Connected to ' + selectedRoom);
});
socket.on('message', (message) => {
showToast(theme, 'New message: ' + message);
if (!muted) {
playSound();
}
});
socket.connect();
return () => socket.dispose();
}, [selectedRoom, theme, muted]); // Re-runs when any of them change
// ...
}
As you see, you do not want to reconnect every time the theme or muted variables change. The only time you want the effect (connecting to and disconnecting from the server) to run is when the selectedRoom value changes.
So the React team has proposed an RFC, useEvent, which provides
A Hook to define an event handler with an always-stable function identity.
useEvent is an experimental and unstable API that has not yet been added to React (stable versions), so you can’t use it yet.
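For illustration only, here is roughly how the Chat example might look under the RFC's proposed API; this is a hypothetical sketch of the proposal, not a stable React API you can use today.
function Chat({ selectedRoom }) {
  const [muted, setMuted] = useState(false);
  const theme = useContext(ThemeContext);

  // Proposed (experimental) useEvent: the handlers always see the latest theme
  // and muted values, but their identity stays stable across renders.
  const onConnected = useEvent((room) => {
    showToast(theme, 'Connected to ' + room);
  });
  const onMessage = useEvent((message) => {
    showToast(theme, 'New message: ' + message);
    if (!muted) {
      playSound();
    }
  });

  useEffect(() => {
    const socket = createSocket('/chat/' + selectedRoom);
    socket.on('connected', async () => {
      await checkConnection(selectedRoom);
      onConnected(selectedRoom);
    });
    socket.on('message', onMessage);
    socket.connect();
    return () => socket.dispose();
  }, [selectedRoom]); // no longer re-runs when theme or muted change
  // ...
}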
This might be off-topic but probably helps you to understand React and its lifecycles better: there is the issue useCallback() invalidates too often in practice on GitHub. One workaround would be to create a custom hook that returns a function whose identity is stable and won't change on re-renders:
function useEventCallback(fn) {
let ref = useRef();
useLayoutEffect(() => {
ref.current = fn;
});
return useCallback(() => (0, ref.current)(), []);
}
Or you could use the use-event-callback package.
Note that useEventCallback does not mimic useEvent precisely:
A high-fidelity polyfill for useEvent is not possible because there is no lifecycle or Hook in React that we can use to switch .current at the right timing. Although use-event-callback is “close enough” for many cases, it doesn't throw during rendering, and the timing isn’t quite right. We don’t recommend to broadly adopt this pattern until there is a version of React that includes a built-in useEvent implementation.
|
Can anyone give more info about the useEffect hook based on experience
|
I understand a bit about the useEffect hook but I think there’s still more knowledge to grasp. Some of which are not in the documentation. Please any contribution will help a lot y’all.
Some of my questions are:
Does useEffect get called on initial render even in production just like in development?
If it does, how do we control this the best way?
How can we use a clean up function on this Hook?
How can we make asynchronous calls in useEffect?
My attempts on useEffect usually makes me feel like a bad developer
|
[
"Yes, the useEffect hook is called on initial render in both development and production environments. To control when useEffect is called, you can provide a second argument to the hook that is an array of values. The useEffect hook will only be called if one of the values in the array has changed.\nHere is an example:\nuseEffect(() => {\n // Perform some side effect\n}, [value1, value2]);\n\nIn the example above, the useEffect hook will only be called if either value1 or value2 has changed.\nTo use a cleanup function with useEffect, you can return a function from the callback passed to the hook. The returned function will be called when the component is unmounted or the useEffect hook is called again.\nHere is an example:\nuseEffect(() => {\n // Perform some side effect\n\n return () => {\n // Clean up the side effect\n };\n}, [value1, value2]);\n\nTo make asynchronous calls in useEffect, you can use the async keyword and the await keyword to wait for the asynchronous operation to complete. Here is an example:\nuseEffect(() => {\n const fetchData = async () => {\n const response = await fetch('https://my-api.com/data');\n const data = await response.json();\n // Use the data\n };\n fetchData();\n}, [value1, value2]);\n\nIn the example above, the fetchData function is declared as async and the await keyword is used to wait for the fetch operation to complete and for the response to be converted to JSON.\n",
"useEffect is a very powerful hook. Regarding your question:\n\nuseEffect(() => (), []) - this version without params will be called once on initial rendering\nyou can control useEffect with params [] and based on these params you can place some logic inside the callback function.\nclean up function used before unmount of your component, it is a good place to remove listeners or close connection to resources like Databases, Camera and etc.\nExample of async call\n\nuseEffect(() => {\n // declare the data fetching function\n const fetchData = async () => {\n const data = await fetch('https://yourapi.com');\n }\n\n // call the function\n fetchData()\n // make sure to catch any error\n .catch(console.error);\n}, [])\n\n",
"Please take a look at react docs and react beta docs.\n\nIt always runs when your component mounts, after the first render regardless of the environment. In development mode when strict mode is on, it runs twice:\n\n\nWhen Strict Mode is on, React will run one extra development-only setup+cleanup cycle before the first real setup. This is a stress-test that ensures that your cleanup logic “mirrors” your setup logic and that it stops or undoes whatever the setup is doing. If this causes a problem, you need to implement the cleanup function.\n\n\nI'm not really sure what you mean by controlling it the best way. Your effect or setup code runs whenever the component mounts. Maybe\nHow to handle the Effect firing twice in development? can help you. You sometimes might want to prevent the effect to be executed when the component mounts, you can skip the effect by using a ref. See this stackoverflow question\n\nThe function you return in the useEffect does the clean up for you. See. For instance if you add an event listener inside useEffect, you remove the listener inside the function you return inside of it. See this link\n\n\n useEffect(() => {\n const listener = () => { /* Do something */ };\n\n window.addEventListener(\"click\", listener);\n\n return () => {\n window.removeEventListener(\"click\", listener);\n };\n }, []);\n\n\nYes you can. See this stackoverflow question and fetching data in docs\n\n useEffect(() => {\n async function asyncFunction() {\n /* Do something */\n }\n\n asyncFunction();\n }, []);\n\nUpdate:\nTake a look at You Might Not Need an Effect\n. It explains some situations which you might not need an effect at all.\n\nRemoving unnecessary Effects will make your code easier to follow, faster to run, and less error-prone.\n\nUpdate 2:\nYou can probably skip this part for now, but it might help you to have a better grasp of useEffect, event handlers and what to expect in the future.\nSeparating Events from Effects tries to explain the differences between effects and event handlers, why distinguishing between those two is important and using event handlers inside effects.\n\nEvent handlers only re-run when you perform the same interaction again. Unlike event handlers, Effects re-synchronize if some value they read, like a prop or a state variable, is different from what it was during the last render. Sometimes, you also want a mix of both behaviors: an Effect that re-runs in response to some values but not others. This page will teach you how to do that.\n\nSometimes, you might use an event handler which has access to the props or the state inside an effect. But you don't want the useEffect to be triggered every time the values used in the event handler change. The following example is taken form useEffect shouldn’t re-fire when event handlers change\n.\nfunction Chat({ selectedRoom }) {\n const [muted, setMuted] = useState(false);\n const theme = useContext(ThemeContext);\n\n useEffect(() => {\n const socket = createSocket('/chat/' + selectedRoom);\n socket.on('connected', async () => {\n await checkConnection(selectedRoom);\n showToast(theme, 'Connected to ' + selectedRoom);\n });\n socket.on('message', (message) => {\n showToast(theme, 'New message: ' + message);\n if (!muted) {\n playSound();\n }\n });\n socket.connect();\n return () => socket.dispose();\n }, [selectedRoom, theme, muted]); // Re-runs when any of them change\n // ...\n}\n\n\nAs you see, you do not want to reconnect every time theme or muted variables change. 
The only time you want the effect(connecting and disconnecting from the server) to run is when the selectedRoom value changes.\nSo the react team has proposed a RFC: useEvent which provides\n\nA Hook to define an event handler with an always-stable function identity.\n\nuseEvent is an experimental and unstable API that has not yet been added to the React(stable versions) ye, so you can’t use it yet.\nThis might be off-topic but probably helps you to understand React and its lifecycles better: There is this issue useCallback() invalidates too often in practice issue on GitHub . One workaround would be to create a custom hook that returns a function that its identity is stable and won't change on re-renders:\nfunction useEventCallback(fn) {\n let ref = useRef();\n useLayoutEffect(() => {\n ref.current = fn;\n });\n return useCallback(() => (0, ref.current)(), []);\n}\n\nOr you could use the use-event-callback package.\nNote that useEventCallback does not mimic useEvent precisely:\n\nA high-fidelty polyfill for useEvent is not possible because there is no lifecycle or Hook in React that we can use to switch .current at the right timing. Although use-event-callback is “close enough” for many cases, it doesn't throw during rendering, and the timing isn’t quite right. We don’t recommend to broadly adopt this pattern until there is a version of React that includes a built-in useEvent implementation.\n\n"
] |
[
2,
2,
2
] |
[] |
[] |
[
"async_await",
"react_hooks",
"react_native",
"reactjs"
] |
stackoverflow_0074665078_async_await_react_hooks_react_native_reactjs.txt
|
Q:
How to prevent users from direct/manually accessing admin panel(?) using - search/address bar
We currently have 2 sides. The customer(user) side and the admin side.
Once I log in as a customer I can still access the admin side, like checking the stocks, products, and other admin-side tools, through direct access in the address bar.
<?php
if(isset($_SESSION['userid'])){
if($_SESSION['position'] != 'Admin')
header("location: 404.php");
}
?>
This is the code included on every admin page, trying to prevent users from accessing specific pages through direct access. So if a person whose position is not set to Admin tries to access the page, it should redirect him/her to 404.php, but even with this, as a customer (user) I can still access it. I did include it on every admin page, though.
A:
One way to prevent users from directly accessing the admin panel is to check for the user's permission level and redirect them if they do not have the required permission. This can be done by adding a condition to check for the user's permission level before allowing them to access the admin panel.
For example, you can add a condition to check if the user's position is set to "Admin" before allowing them to access the admin panel. If the user's position is not set to "Admin", then they will be redirected to a different page or an error message will be displayed.
Here is an example of how to implement this in your code:
<?php
if(isset($_SESSION['userid'])){
if($_SESSION['position'] != 'Admin'){
//Redirect user to a different page or display an error message
header("location: error.php");
exit;
}
}
else{
//Redirect user to login page
header("location: login.php");
exit;
}
?>
This code checks for the user's permission level and redirects them if they do not have the required permission. It also checks if the user is logged in, and if not, redirects them to the login page.
You can also add additional security measures to prevent users from accessing the admin panel through other means, such as adding a unique URL for the admin panel and using a secure login system.
|
How to prevent users from direct/manually accessing admin panel(?) using - search/address bar
|
We currently have 2 sides. The customer(user) side and the admin side.
once I log in as customer I can access the admin side, like checking the stocks, products, and other admin side/tool. through direct access in address bar.
<?php
if(isset($_SESSION['userid'])){
if($_SESSION['position'] != 'Admin')
header("location: 404.php");
}
?>
this is the code include every admin pages trying to prevent users from accessing specific pages through direct access. So if person trying to access the page that his position is not set as admin it will redirect him/her to 404.php, but even with this as customer(user) I can still access it. I did include it on every admin pages tho.
|
[
"One way to prevent users from directly accessing the admin panel is to check for the user's permission level and redirect them if they do not have the required permission. This can be done by adding a condition to check for the user's permission level before allowing them to access the admin panel.\nFor example, you can add a condition to check if the user's position is set to \"Admin\" before allowing them to access the admin panel. If the user's position is not set to \"Admin\", then they will be redirected to a different page or an error message will be displayed.\nHere is an example of how to implement this in your code:\n<?php\nif(isset($_SESSION['userid'])){\n if($_SESSION['position'] != 'Admin'){\n //Redirect user to a different page or display an error message\n header(\"location: error.php\");\n exit;\n }\n}\nelse{\n //Redirect user to login page\n header(\"location: login.php\");\n exit;\n}\n?>\n\nThis code checks for the user's permission level and redirects them if they do not have the required permission. It also checks if the user is logged in, and if not, redirects them to the login page.\nYou can also add additional security measures to prevent users from accessing the admin panel through other means, such as adding a unique URL for the admin panel and using a secure login system.\n"
] |
[
0
] |
[] |
[] |
[
"php"
] |
stackoverflow_0074665031_php.txt
|
Q:
terraform initialisation from GitHub workflow
I have a repo in which I have terraform infrastructure declared. I'm changing it by moving repeatable parts to modules and creating folders for each environment. The GitHub workflow runs init, plan and apply. As I have created new directories, I'm changing "working-directory" for the init part, but I receive the error Failed to get existing workspaces containers.Client#ListBlobs: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
I have ARM access keys declared as envs in the workflow. I tried to move them around but no luck. I don't know why terraform can initialise from the main directory but can't initialise from a child directory.
A:
I tried to reproduce the same in my environment:
I got the same error in the case of the storage account itself:
This error occurs when one doesn’t have access to the backend or the container where the terraform state is stored.
Please make sure that you are logged in to the subscription or tenant where you have access to resources.
In my case I was logged in to another subscription, which caused the error.
Set the subscription correctly.
az account set --subscription "xxx"
and then run terraform init
To reconfigure for the new working directory:
Run terraform init -reconfigure
Or run the below command to migrate the state:
terraform init -migrate-state
terraform {
  backend "azurerm" {
    resource_group_name  = "rg"
    storage_account_name = "remotestatekavstr"
    container_name       = "terraform"
    key                  = "terraform.tfstate"
  }
}
Then the terraform is initialized successfully:
Note:
1. Check for any spelling mistakes in the storage account or container name.
2. When changing to a new directory, reconfigure the terraform backend or migrate the state.
Also check this creating-azure-storage-containers-in-a-storage-account-with-network-rules-with
|
terraform initialisation from GitHub workflow
|
I have a repo in which I have terraform infrastructure declared. I'm changing it by moving repeatable parts to modules and created folders for each environment. GitHub workflow is running init, plan and apply. As I have created new directories, I'm changing "working-directory" for init part, but I receive error Failed to get existing workspaces containers.Client#ListBlobs: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure:
I have arm access keys declared as envs in workflow. I tried to move it around but no luck. I dont know why terraform can initialise from main directory but can't initialise from child directory.
|
[
"I tried to reproduce the same in my environment:\nI got same error in case of storageaccount itself:\n\nThis error occurs when one doesn’t have access to the backend or the container where the terraform state is stored.\nPlease make sure that you are logged in to the subscription or tenant where you have access to resources.\nIn my case I logged in to another subscription that caused the error .\n\nSet the subscription correctly.\naz account set --subscription \"xxx\"\nand then run terraform init\nTo reconfigure for new working directory :\nRun terraform init -reconfigure\n\nOr run below command to migratethe state:\nterraform init -migrate-state\nterraform {\nbackend \"azurerm\" {\nresource_group_name = \"rg\"\nstorage_account_name = \"remotestatekavstr\"\ncontainer_name = \"terraform\"\nkey = \"terraform.tfstate\"\n}\n}\nThen the terraform is initialized successfully:\n\n\nNote:\n1.Check for any spelling corrections of the storage account or container .\n2.When changed to new directory , reconfigure the terraform backend or migrate .\n\nAlso check this creating-azure-storage-containers-in-a-storage-account-with-network-rules-with\n"
] |
[
0
] |
[] |
[] |
[
"github_actions",
"terraform",
"terraform_provider_azure"
] |
stackoverflow_0074368015_github_actions_terraform_terraform_provider_azure.txt
|
Q:
How can I return a string from an Ienumerable?
I am very new to programming and am taking an Object Oriented Programming class. However, the professor didn't explain how to take an IEnumerable and make it into a string in order to accomplish this question of the assignment:
TODO:
Write a public static C# method named NumSquare that takes a one-dimensional array as input
and creates a LINQ statement that queries the numbers that have a square number greater than 20 and orders them ascending.
The LINQ query retrieves anonymous objects in which each object contains the number (Num) and its square number (SqrNum).
The method returns the LINQ query as an IEnumerable object.
The anonymous object contains two instance variables named Num and SqrNum.
Input: a one-dimensional integer array.
Output: a LINQ query of type IEnumerable.
Example: Given array A = [3, 4, 10, 5], invoking NumSquare(A) returns a LINQ query that, once executed, will contain:
{Num=5, SqrNum=25},
{Num=10, SqrNum=100}
Here's what I have so far, but I've tried several things over the last 2 1/2 weeks.
public static IEnumerable<object> NumSquare(int[] A)
{
//write your code here
var num = from Number in A
select Number;
var sqrnum = from Number in A
let squarenum = Number * Number
select squarenum;
return (IEnumerable<object>)sqrnum;
}
I know that this return won't get me the whole result that I need, but that's as far as I can get with no errors. I also don't know how to test anything because he didn't show us how to call an IEnumerable. Help?
A:
I think what you are looking for is not a string as output but as the exercise says an anonymous object. An anonymous object can be something like this:
var o = new { Num = 4, SqrNum = 16 };
It's just an object that basically has no explicit type and some read-only properties.
So what you want to do is convert your array into an IEnumerable<{int Num, int SqrNum}>, which you would have to declare as IEnumerable<object> and not a string.
You could do something like this:
static IEnumerable<object> NumSqr(int[] a)
{
return a
.Where(x => x * x > 20)
.OrderBy(x => x)
.Select(x => new { Num = x, SqrNum= x * x });
}
Alternatively:
static IEnumerable<object> NumSqr(int[] a)
{
return from number in a
where number * number > 20
orderby number
select new { Num = number, SqrNum = number * number };
}
In order to print out the result of the function you could do this:
var a = new int[] { 3, 4, 10, 5 };
var result = NumSqr(a);
foreach (var obj in result)
{
Console.WriteLine(obj);
}
The output should look like this:
{ Num = 5, SqrNum = 25 }
{ Num = 10, SqrNum = 100 }
|
How can I return a string from an Ienumerable?
|
I am very new to programming and am taking an Object Oriented Programming class. However, the professor didn't explain how to take an IEnumerable and make it into a string in order to accomplish this question of the assignment:
TODO:
Write a public static C# method named NumSquare that takes a one-dimensional array as input
and creates a LINQ statement that queries the numbers that have a square number greater than 20 and orders them ascending.
The LINQ query retrieves anonymous objects in which each object contains the number (Num) and its square number (SqrNum).
The method returns the LINQ query as an IEnumerable object.
The anonymous object contains two instance variables named Num and SqrNum.
Input: a one-dimensional integer array.
Output: a LINQ query of type IEnumerable.
Example: Given array A = [3, 4, 10, 5], invoking NumSquare(A) returns a LINQ query that once executed will contain:
{Num=5, SqrNum=25},
{Num=10, SqrNum=100}
Here's what I have so far, but I've tried several things over the last 2 1/2 weeks.
public static IEnumerable<object> NumSquare(int[] A)
{
//write your code here
var num = from Number in A
select Number;
var sqrnum = from Number in A
let squarenum = Number * Number
select squarenum;
return (IEnumerable<object>)sqrnum;
}
I know that this return won't get me the whole result that I need, but that's as far as I can get with no errors. I also don't know how to test anything because he didn't show us how to call an IEnumerable. Help?
|
[
"I think what you are looking for is not a string as output but as the exercise says an anonymous object. An anonymous object can be something like this:\nvar o = new { Num = 4, SqrNum = 16 };\n\nIts just an object that basically has no explicit type and some read-only variables.\nSo what you want to do is to convert your array into a IEnumerable<{int Num, int SqrNum}> which you would have to declare as IEnumerable<object> and not a string.\nYou could do something like this:\nstatic IEnumerable<object> NumSqr(int[] a)\n{\n return a\n .Where(x => x * x > 20)\n .OrderBy(x => x)\n .Select(x => new { Num = x, SqrNum= x * x });\n}\n\nAlternatively:\nstatic IEnumerable<object> NumSqr(int[] a)\n{\n return from number in a\n where number * number > 20\n orderby number\n select new { Num = number, SqrNum = number * number };\n}\n\nIn order to print out the result of the function you could do this:\nvar a = new int[] { 3, 4, 10, 5 };\nvar result = NumSqr(a);\nforeach (var obj in result)\n{\n Console.WriteLine(obj);\n}\n\nThe output should look like this:\n{ Num = 5, SqrNum = 25 }\n{ Num = 10, SqrNum = 100 }\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"c#",
"ienumerable",
"linq",
"string"
] |
stackoverflow_0074641842_arrays_c#_ienumerable_linq_string.txt
|
Q:
EC2 won't connect to internet
I have a freshly set up EC2 instance on AWS; however, it seems it won't connect to the internet... I have searched far and wide and everything I come across doesn't change anything. Here are the errors I'm getting; let me know if more info is needed.
Err:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::21). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::11). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::14). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::16). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::19). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::17). - connect (101: Network is unreachable) Could not connect to security.ubuntu.com:80 (91.189.88.149), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.162), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.23), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.26), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.24), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.31), connection timed out
Err:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.201.250.36), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.132.181), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.150.131), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.237.137.22), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.73.36.184), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.207.133.243), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.152.129.43), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.165.17.230), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.172.25.22), connection timed out
Err:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
Err:4 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/bionic/InRelease Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.201.250.36), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.132.181), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.150.131), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.237.137.22), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.73.36.184), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.207.133.243), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.152.129.43), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.165.17.230), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.172.25.22), connection timed out
W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::21). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::11). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::14). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::16). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::19). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::17). - connect (101: Network is unreachable) Could not connect to security.ubuntu.com:80 (91.189.88.149), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.162), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.23), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.26), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.24), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.31), connection timed out
W: Some index files failed to download. They have been ignored, or old ones used instead.
|
EC2 won't connect to internet
|
I have a freshly set up EC2 instance on AWS; however, it seems it won't connect to the internet... I have searched far and wide and everything I come across doesn't change anything. Here are the errors I'm getting; let me know if more info is needed.
Err:1 http://security.ubuntu.com/ubuntu bionic-security InRelease
Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::21). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::11). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::14). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::16). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::19). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::17). - connect (101: Network is unreachable) Could not connect to security.ubuntu.com:80 (91.189.88.149), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.162), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.23), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.26), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.24), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.31), connection timed out
Err:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic InRelease
Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.201.250.36), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.132.181), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.150.131), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.237.137.22), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.73.36.184), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.207.133.243), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.152.129.43), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.165.17.230), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.172.25.22), connection timed out
Err:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-updates InRelease
Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
Err:4 http://us-east-1.ec2.archive.ubuntu.com/ubuntu bionic-backports InRelease
Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
Reading package lists... Done
Building dependency tree
Reading state information... Done
All packages are up to date.
W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/bionic/InRelease Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.201.250.36), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.132.181), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.229.150.131), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (34.237.137.22), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.73.36.184), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.91.65.63), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (52.207.133.243), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.152.129.43), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.165.17.230), connection timed out Could not connect to us-east-1.ec2.archive.ubuntu.com:80 (54.172.25.22), connection timed out
W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/bionic-updates/InRelease Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
W: Failed to fetch http://us-east-1.ec2.archive.ubuntu.com/ubuntu/dists/bionic-backports/InRelease Unable to connect to us-east-1.ec2.archive.ubuntu.com:http:
W: Failed to fetch http://security.ubuntu.com/ubuntu/dists/bionic-security/InRelease Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::21). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::11). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1560:8001::14). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::16). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1562::19). - connect (101: Network is unreachable) Cannot initiate the connection to security.ubuntu.com:80 (2001:67c:1360:8001::17). - connect (101: Network is unreachable) Could not connect to security.ubuntu.com:80 (91.189.88.149), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.162), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.23), connection timed out Could not connect to security.ubuntu.com:80 (91.189.91.26), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.24), connection timed out Could not connect to security.ubuntu.com:80 (91.189.88.31), connection timed out
W: Some index files failed to download. They have been ignored, or old ones used instead.
|
[] |
[] |
[
"Please change outbound rules to allow for http and https requ\n"
] |
[
-1
] |
[
"amazon_ec2",
"apt_get",
"ubuntu"
] |
stackoverflow_0057300502_amazon_ec2_apt_get_ubuntu.txt
|
Q:
How does JMeter actually work? Does it add records to the application in real time?
Can we add data to an application (registration form/candidate details, etc.) using JMeter?
I create script
-HTTP Request where I used ${__UUID()} function to create random UID
-Create HTTP Req with parameters
-Add listener to view result
The script executed successfully, but when I check in the application the data is not added/displayed.
I create script
-HTTP Request where I used ${__UUID()} function to create random UID
-Create HTTP Req with parameters
-Add listener to view result
A:
JMeter acts on HTTP protocol level, given you properly configure JMeter to behave like a real browser it will send the same HTTP request(s) as the real browser does hence the "record" (whatever it means) will be created.
You can add View Results Tree listener to your test plan, this way you will be able to see the request and response details and from the response you should be able to figure out what's wrong with your request.
|
How does JMeter actually work? Does it add records to the application in real time?
|
Can we add data to an application (registration form/candidate details, etc.) using JMeter?
I create script
-HTTP Request where I used ${__UUID()} function to create random UID
-Create HTTP Req with parameters
-Add listener to view result
The script executed successfully, but when I check in the application the data is not added/displayed.
I create script
-HTTP Request where I used ${__UUID()} function to create random UID
-Create HTTP Req with parameters
-Add listener to view result
|
[
"JMeter acts on HTTP protocol level, given you properly configure JMeter to behave like a real browser it will send the same HTTP request(s) as the real browser does hence the \"record\" (whatever it means) will be created.\nYou can add View Results Tree listener to your test plan, this way you will be able to see the request and response details and from the response you should be able to figure out what's wrong with your request.\n"
] |
[
0
] |
[] |
[] |
[
"automation_testing",
"jmeter",
"manual_testing",
"performance_testing",
"testing"
] |
stackoverflow_0074664637_automation_testing_jmeter_manual_testing_performance_testing_testing.txt
|
Q:
Using torch.nn.DataParallel with a custom CUDA extension
To my understanding, the built-in PyTorch operations all automatically handle batches through implicit vectorization, allowing parallelism across multiple GPUs.
However, when writing a custom operation in CUDA as per the Documentation, the LLTM example given performs operations that are batch invariant, for example computing the gradient of the Sigmoid function elementwise.
However, I have a use case that is not batch element invariant and not vectorizable. Running on a single GPU, I currently (inefficiently) loop over each element in the batch, performing a kernel launch for each, like so (written in the browser, just to demonstrate):
std::vector<at::Tensor> op_cuda_forward(at::Tensor input,
at::Tensor elementSpecificParam) {
auto output = at::zeros(torch::CUDA(/* TYPE */), {/* DIMENSIONS */});
const size_t blockDim = //
const size_t gridDim = //
const size_t numBatches = //
for (size_t i = 0; i < numBatches; i++) {
op_cuda_forward_kernel<T><<<gridDim, blockDim>>>(input[i],
elementSpecificParam[i],
output[i]);
}
return {output};
}
However, I wish to split this operation over multiple GPUs by batch element.
How would the allocation of the output Tensor work in a multi-GPU scenario?
Of course, one may create intermediate Tensors on each GPU before launching the appropriate kernel, however, the overhead of copying the input data to each GPU and back again would be problematic.
Is there a simpler way to launch the kernels without first probing the environment for GPU information (# GPU's etc)?
The end goal is to have a CUDA operation that works with torch.nn.DataParallel.
A:
In order to use torch.nn.DataParallel with a custom CUDA extension, you can follow these steps:
Define your custom CUDA extension in a subclass of torch.autograd.Function, and implement the forward() and backward() methods for the forward and backward passes, respectively.
In the forward() method, create a new output Tensor and allocate it on the same device that the input Tensor is on using output.to(input.device).
In the backward() method, create a new gradient input Tensor and allocate it on the same device that the gradient output Tensor is on using grad_input.to(grad_output.device).
In your main PyTorch code, use torch.nn.DataParallel to wrap your model, and make sure to call the to() method on the input data and move it to the same device that your model is on.
Here's an example of how this could look like:
# Define your custom CUDA extension as a subclass of torch.autograd.Function
class MyCustomCudaExtension(torch.autograd.Function):
@staticmethod
def forward(ctx, input, element_specific_param):
# Create a new output Tensor and allocate it on the same device as the input Tensor
output = torch.zeros_like(input).to(input.device)
# Perform the forward pass using your custom CUDA kernel
# ...
# Save any necessary information for the backward pass
ctx.save_for_backward(output)
return output
@staticmethod
def backward(ctx, grad_output):
# Retrieve the saved information from the forward pass
output = ctx.saved_tensors[0]
# Create a new gradient input Tensor and allocate it on the same device as the gradient output Tensor
grad_input = torch.zeros_like(output).to(grad_output.device)
# Perform the backward pass using your custom CUDA kernel
# ...
return grad_input, None
# In your main PyTorch code, wrap your model in torch.nn.DataParallel
model = torch.nn.DataParallel(MyModel())
# Make sure to move the input data to the same device as the model's parameters
input = input.to(next(model.parameters()).device)
# Perform the forward pass
output = model(input)
This way, torch.nn.DataParallel will automatically handle the parallelism across multiple GPUs, and your custom CUDA extension will be executed on the correct device.
Note that in this example, we are assuming that the input and output Tensors are on the same device. If this is not the case, you can use the input.device and grad_output.device attributes to retrieve the devices of the input and gradient output Tensors, respectively, and allocate the output and gradient input Tensors on those devices.
|
Using torch.nn.DataParallel with a custom CUDA extension
|
To my understanding, the built-in PyTorch operations all automatically handle batches through implicit vectorization, allowing parallelism across multiple GPUs.
However, when writing a custom operation in CUDA as per the Documentation, the LLTM example given performs operations that are batch invariant, for example computing the gradient of the Sigmoid function elementwise.
However, I have a use case that is not batch element invariant and not vectorizable. Running on a single GPU, I currently (inefficiently) loop over each element in the batch, performing a kernel launch for each, like so (written in the browser, just to demonstrate):
std::vector<at::Tensor> op_cuda_forward(at::Tensor input,
at::Tensor elementSpecificParam) {
auto output = at::zeros(torch::CUDA(/* TYPE */), {/* DIMENSIONS */});
const size_t blockDim = //
const size_t gridDim = //
const size_t numBatches = //
for (size_t i = 0; i < numBatches; i++) {
op_cuda_forward_kernel<T><<<gridDim, blockDim>>>(input[i],
elementSpecificParam[i],
output[i]);
}
return {output};
}
However, I wish to split this operation over multiple GPUs by batch element.
How would the allocation of the output Tensor work in a multi-GPU scenario?
Of course, one may create intermediate Tensors on each GPU before launching the appropriate kernel, however, the overhead of copying the input data to each GPU and back again would be problematic.
Is there a simpler way to launch the kernels without first probing the environment for GPU information (# GPU's etc)?
The end goal is to have a CUDA operation that works with torch.nn.DataParallel.
|
[
"In order to use torch.nn.DataParallel with a custom CUDA extension, you can follow these steps:\n\nDefine your custom CUDA extension in a subclass of torch.autograd.Function, and implement the forward() and backward() methods for the forward and backward passes, respectively.\nIn the forward() method, create a new output Tensor and allocate it on the same device that the input Tensor is on using output.to(input.device).\nIn the backward() method, create a new gradient input Tensor and allocate it on the same device that the gradient output Tensor is on using grad_input.to(grad_output.device).\nIn your main PyTorch code, use torch.nn.DataParallel to wrap your model, and make sure to call the to() method on the input data and move it to the same device that your model is on.\n\nHere's an example of how this could look like:\n# Define your custom CUDA extension as a subclass of torch.autograd.Function\nclass MyCustomCudaExtension(torch.autograd.Function):\n @staticmethod\n def forward(ctx, input, element_specific_param):\n # Create a new output Tensor and allocate it on the same device as the input Tensor\n output = torch.zeros_like(input).to(input.device)\n # Perform the forward pass using your custom CUDA kernel\n # ...\n # Save any necessary information for the backward pass\n ctx.save_for_backward(output)\n return output\n\n @staticmethod\n def backward(ctx, grad_output):\n # Retrieve the saved information from the forward pass\n output = ctx.saved_tensors[0]\n # Create a new gradient input Tensor and allocate it on the same device as the gradient output Tensor\n grad_input = torch.zeros_like(output).to(grad_output.device)\n # Perform the backward pass using your custom CUDA kernel\n # ...\n return grad_input, None\n\n# In your main PyTorch code, wrap your model in torch.nn.DataParallel\nmodel = torch.nn.DataParallel(MyModel())\n\n# Make sure to call the .to() method on the input data and move it to the same device as your model\ninput = input.to(model.device)\n\n# Perform the forward pass\noutput = model(input)\n\nThis way, torch.nn.DataParallel will automatically handle the parallelism across multiple GPUs, and your custom CUDA extension will be executed on the correct device.\nNote that in this example, we are assuming that the input and output Tensors are on the same device. If this is not the case, you can use the input.device and grad_output.device attributes to retrieve the devices of the input and gradient output Tensors, respectively, and allocate the output and gradient input Tensors on those devices.\n"
] |
[
0
] |
[] |
[] |
[
"deep_learning",
"libtorch",
"neural_network",
"pytorch"
] |
stackoverflow_0051400618_deep_learning_libtorch_neural_network_pytorch.txt
|
Q:
Azure Data Studio - fails to connect and log in to server
I have created a container with Docker:
mssql is in docker with status up
> docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Passw0rd!12" -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
I am trying to connect to the SQL server on my network with Azure Data Studio, and I am getting this error:
Login failed for user 'sa'.
Details for the error:
Microsoft.Data.SqlClient.SqlException (0x80131904): Login failed for user 'sa'.
at Microsoft.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at Microsoft.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, Boolean applyTransientFaultHandling, String accessToken, DbConnectionPool pool)
at Microsoft.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionFactory.<>c__DisplayClass48_0.<CreateReplaceConnectionContinuation>b__0(Task`1 _)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
at Microsoft.SqlTools.ServiceLayer.Connection.ReliableConnection.ReliableSqlConnection.<>c__DisplayClass30_0.<<OpenAsync>b__0>d.MoveNext() in D:\a\_work\1\s\src\Microsoft.SqlTools.ManagedBatchParser\ReliableConnection\ReliableSqlConnection.cs:line 312
--- End of stack trace from previous location ---
at Microsoft.SqlTools.ServiceLayer.Connection.ConnectionService.TryOpenConnection(ConnectionInfo connectionInfo, ConnectParams connectionParams) in D:\a\_work\1\s\src\Microsoft.SqlTools.ServiceLayer\Connection\ConnectionService.cs:line 666
ClientConnectionId:7f82dc23-5d48-4820-9f58-4ccf08416472
Error Number:18456,State:1,Class:14
I have asked around and tried to restart the container. I also typed the password again multiple times and even tried to create a new container in Docker, but it still failed.
please help...and thank you.
Tried: entering the password again and again - creating a new container - restarting the container
Expected: it will successfully log in to the localhost server
Result: still failed to log in
A:
The first step is to see what the error logs are in the container,
docker logs [your container id]
If you see an error message like,
Error: The evaluation period has expired.
/opt/mssql/bin/sqlservr: PAL initialization failed. Error: 104
Try to pull the docker image again,
docker pull mcr.microsoft.com/mssql/server:2019-latest
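If the logs look clean, it is also worth ruling out a password problem from inside the container itself. A rough sketch using the sqlcmd tool bundled with the 2019 image (the container ID is a placeholder; the password is the one from the question, and the tools path may differ in other image versions):
# Check the SA login directly inside the container
docker exec -it <container id> /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P 'Passw0rd!12' -Q 'SELECT @@VERSION'
# If this also fails, recreate the container with a password that meets
# SQL Server's complexity policy (at least 8 characters, mixing cases, digits or symbols).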
|
Azure Data Studio - fails to connect and log in to server
|
I have created a container with Docker:
mssql is in docker with status up
> docker run -e "ACCEPT_EULA=Y" -e "MSSQL_SA_PASSWORD=Passw0rd!12" -p 1433:1433 -d mcr.microsoft.com/mssql/server:2019-latest
I am trying to connect to the SQL server on my network with Azure Data Studio, and I am getting this error:
Login failed for user 'sa'.
Details for the error:
Microsoft.Data.SqlClient.SqlException (0x80131904): Login failed for user 'sa'.
at Microsoft.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)
at Microsoft.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj, Boolean callerHasConnectionLock, Boolean asyncClose)
at Microsoft.Data.SqlClient.TdsParser.TryRun(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj, Boolean& dataReady)
at Microsoft.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.CompleteLogin(Boolean enlistOK)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.AttemptOneLogin(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean ignoreSniOpenTimeout, TimeoutTimer timeout, Boolean withFailover)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.LoginNoFailover(ServerInfo serverInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString connectionOptions, SqlCredential credential, TimeoutTimer timeout)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds.OpenLoginEnlist(TimeoutTimer timeout, SqlConnectionString connectionOptions, SqlCredential credential, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance)
at Microsoft.Data.SqlClient.SqlInternalConnectionTds..ctor(DbConnectionPoolIdentity identity, SqlConnectionString connectionOptions, SqlCredential credential, Object providerInfo, String newPassword, SecureString newSecurePassword, Boolean redirectedUserInstance, SqlConnectionString userConnectionOptions, SessionData reconnectSessionData, Boolean applyTransientFaultHandling, String accessToken, DbConnectionPool pool)
at Microsoft.Data.SqlClient.SqlConnectionFactory.CreateConnection(DbConnectionOptions options, DbConnectionPoolKey poolKey, Object poolGroupProviderInfo, DbConnectionPool pool, DbConnection owningConnection, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionFactory.CreateNonPooledConnection(DbConnection owningConnection, DbConnectionPoolGroup poolGroup, DbConnectionOptions userOptions)
at Microsoft.Data.ProviderBase.DbConnectionFactory.<>c__DisplayClass48_0.<CreateReplaceConnectionContinuation>b__0(Task`1 _)
at System.Threading.Tasks.ContinuationResultTaskFromResultTask`2.InnerInvoke()
at System.Threading.ExecutionContext.RunInternal(ExecutionContext executionContext, ContextCallback callback, Object state)
--- End of stack trace from previous location ---
at System.Threading.Tasks.Task.ExecuteWithThreadLocal(Task& currentTaskSlot, Thread threadPoolThread)
--- End of stack trace from previous location ---
at Microsoft.SqlTools.ServiceLayer.Connection.ReliableConnection.ReliableSqlConnection.<>c__DisplayClass30_0.<<OpenAsync>b__0>d.MoveNext() in D:\a\_work\1\s\src\Microsoft.SqlTools.ManagedBatchParser\ReliableConnection\ReliableSqlConnection.cs:line 312
--- End of stack trace from previous location ---
at Microsoft.SqlTools.ServiceLayer.Connection.ConnectionService.TryOpenConnection(ConnectionInfo connectionInfo, ConnectParams connectionParams) in D:\a\_work\1\s\src\Microsoft.SqlTools.ServiceLayer\Connection\ConnectionService.cs:line 666
ClientConnectionId:7f82dc23-5d48-4820-9f58-4ccf08416472
Error Number:18456,State:1,Class:14
I have asked around and tried to restart the container. I also typed the password again multiple times and even tried to create a new container in Docker, but it still failed.
please help...and thank you.
Tried: entering the password again and again - creating a new container - restarting the container
Expected: it will successfully log in to the localhost server
Result: still failed to log in
|
[
"The first step is to see what the error logs are in the container,\ndocker logs [your container id]\n\nIf you see an error message like,\n\nError: The evaluation period has expired.\n/opt/mssql/bin/sqlservr: PAL initialization failed. Error: 104\n\nTry to pull the docker image again,\ndocker pull mcr.microsoft.com/mssql/server:2019-latest\n\n"
] |
[
0
] |
[] |
[] |
[
"azure",
"azure_data_studio",
"connection_string",
"localhost"
] |
stackoverflow_0074665205_azure_azure_data_studio_connection_string_localhost.txt
|
Q:
react_dom_client__WEBPACK_IMPORTED_MODULE_1__.render is not a function shown in the console of localhost:3000
After I created a React.js project, whatever code I put in doesn't show on localhost,
so when I inspect and open the console tab it shows me this error:
Uncaught TypeError: react_dom_client__WEBPACK_IMPORTED_MODULE_1__.render is not a function
at Module../src/index.js (index.js:7:1)
at Module.options.factory (react refresh:6:1)
at __webpack_require__ (bootstrap:24:1)
at startup:7:1
at startup:7:1
A:
The method used above is now deprecated in favor of the newer root API in React 18.
You can use this to solve the problem.
import {StrictMode} from 'react';
import {createRoot} from 'react-dom/client';
import App from './App'
// this is the ID of the div in your index.html file
const rootElement = document.getElementById('root');
const root = createRoot(rootElement);
// if you use TypeScript, add the non-null (!) assertion operator
// const root = createRoot(rootElement!);
Then
root.render(
<StrictMode>
<App />
</StrictMode>,
);
A:
create root with const
const root = ReactDOM.createRoot(document.getElementById("root"));
and instead of ReactDOM.render use
root.render
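Put together, a minimal index.js for React 18 along these lines could look like this (assuming your index.html has a div with id "root"):
import React from 'react';
import ReactDOM from 'react-dom/client';
import App from './App';

const root = ReactDOM.createRoot(document.getElementById('root'));
root.render(<App />);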
|
react_dom_client__WEBPACK_IMPORTED_MODULE_1__.render is not a function shown in the console of localhost:3000
|
After I created a React.js project, whatever code I put in doesn't show on localhost,
so when I inspect and open the console tab it shows me this error:
Uncaught TypeError: react_dom_client__WEBPACK_IMPORTED_MODULE_1__.render is not a function
at Module../src/index.js (index.js:7:1)
at Module.options.factory (react refresh:6:1)
at __webpack_require__ (bootstrap:24:1)
at startup:7:1
at startup:7:1
|
[
"The above used method is now deprecated for newer import methods in the React 18.\nYou can use this to solve the problem.\nimport {StrictMode} from 'react';\nimport {createRoot} from 'react- \ndom/client';\n\nimport App from './App'\n\n// this is the ID of the div in your index.html file\nconst rootElement = \ndocument.getElementById('root');\nconst root = \ncreateRoot(rootElement);\n\n// ️ if you use TypeScript, add non-null (!) assertion operator\n//\nconst root = createRoot(rootElement!);\n\nThen\nroot.render(\n <StrictMode>\n <App />\n </StrictMode>,\n);\n\n",
"create root with const\nconst root = ReactDOM.createRoot(document.getElementById(\"root\"));\nand instead of ReactDOM.render use\nroot.render\n"
] |
[
4,
0
] |
[] |
[] |
[
"reactjs"
] |
stackoverflow_0071945583_reactjs.txt
|
Q:
Getting an error in applying yaml file to aks ingress controller
I'm trying to implement Wallarm, a security service, for a simple API I created behind an AKS ingress controller.
I'm getting an error like this
error: error validating ".\values.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I want this error to go away and apply this.
A:
The reason for this is that this YAML file should be added to the cloud manually, using this:
Create the values.yaml file:
vi values.yaml
Then you should use server snippets in the annotations of your ingress manifests, not in the values.yaml of the ingress.
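For illustration, a server snippet set through an ingress annotation could look roughly like this; it assumes the NGINX-based ingress controller (which Wallarm's controller builds on), and all names here are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-api
  annotations:
    # arbitrary NGINX config injected into the server block
    nginx.ingress.kubernetes.io/server-snippet: |
      large_client_header_buffers 4 16k;
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-api-svc
                port:
                  number: 80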
|
Getting an error in applying yaml file to aks ingress controller
|
I'm trying to implement Wallarm, a security service, for a simple API I created behind an AKS ingress controller.
I'm getting an error like this
error: error validating ".\values.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
I want this error to go away and apply this.
|
[
"The reason for this is this yaml file should be added to the cloud manually. using this.\nCreated the values.yaml file \nvi values.yaml\n\nThen You should use server snippets in the annotation of your ingress manifests, not in the values. yaml of the ingress.\n"
] |
[
0
] |
[] |
[] |
[
"azure_aks",
"yaml"
] |
stackoverflow_0074621231_azure_aks_yaml.txt
|
Q:
Strapi custom service overwrite find method
I'm using strapi v4 and I want to populate all nested fields by default when I retrieve a list of my objects (contact-infos). Therefore I have overwritten the contact-info service with following code:
export default factories.createCoreService('api::contact-info.contact-info', ({ strapi }): {} => ({
async find(...args) {
let { results, pagination } = await super.find(...args)
results = await strapi.entityService.findMany('api::contact-info.contact-info', {
fields: ['locale'],
populate: {
sections: {
populate: { link: true }
}
}
})
return { results, pagination }
},
}));
That works well, but I guess I execute a find-all on the database twice, which I want to avoid; however, when I try to return the result from the entityService directly I'm getting the following response:
data": null,
"error": {
"status": 404,
"name": "NotFoundError",
"message": "Not Found",
"details": {}
}
also, I have no idea how I would retrieve the pagination information if I don't call super.find(). Is there any way to find all contents with the option to populate nested objects?
A:
The recommended way of doing this would be a middleware (do it once, apply it to all controllers). There is a video, Best Practice Session 003, which describes exactly this scenario (not sure if it's Discord-only, but at the moment of writing this it wasn't yet published).
So regarding the rest of your question:
async find(...args) {
  let { results, pagination } = await super.find({ ...args, populate: { section: ['link'] } })
}
should be sufficient to fix that up in one query
custom pagination example:
async findOne(ctx) {
const { user, auth } = ctx.state;
const { id } = ctx.params;
const limit = ctx.query?.limit ?? 20;
const offset = ctx.query?.offset ?? 0;
const logs = await strapi.db.query("api::tasks-log.tasks-log").findMany({
where: { task: id },
limit,
offset,
orderBy: { updatedAt: "DESC" },
});
const total = await strapi.db
.query("api::tasks-log.tasks-log")
.count({ where: { task: id } });
return { data: logs, meta: { total, offset, limit } };
}
|
Strapi custom service overwrite find method
|
I'm using strapi v4 and I want to populate all nested fields by default when I retrieve a list of my objects (contact-infos). Therefore I have overwritten the contact-info service with following code:
export default factories.createCoreService('api::contact-info.contact-info', ({ strapi }): {} => ({
async find(...args) {
let { results, pagination } = await super.find(...args)
results = await strapi.entityService.findMany('api::contact-info.contact-info', {
fields: ['locale'],
populate: {
sections: {
populate: { link: true }
}
}
})
return { results, pagination }
},
}));
That works well, but I guess I execute a find-all on the database twice, which I want to avoid; however, when I try to return the result from the entityService directly I'm getting the following response:
data": null,
"error": {
"status": 404,
"name": "NotFoundError",
"message": "Not Found",
"details": {}
}
also, I have no idea how I would retrieve the pagination information if I don't call super.find(). Is there any way to find all contents with the option to populate nested objects?
|
[
"the recommended way of doing this, would be a middleware (do it once apply for all controllers). There would be an video Best Practice Session 003 where it's describes exactly this scenario (Not sure if it's discord only, but on moment of writing this it wasn't yet published).\nSo regarding rest of your question:\nasync find(...args) {\n let { results, pagination } = await super.find({...args, populate: {section: ['link']})\n}\n\nshould be sufficient to fix that up in one query\ncustom pagination example:\nasync findOne(ctx) {\n const { user, auth } = ctx.state;\n const { id } = ctx.params;\n\n const limit = ctx.query?.limit ?? 20;\n const offset = ctx.query?.offset ?? 0;\n\n const logs = await strapi.db.query(\"api::tasks-log.tasks-log\").findMany({\n where: { task: id },\n limit,\n offset,\n orderBy: { updatedAt: \"DESC\" },\n });\n\n const total = await strapi.db\n .query(\"api::tasks-log.tasks-log\")\n .count({ where: { task: id } });\n\n return { data: logs, meta: { total, offset, limit } };\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"strapi"
] |
stackoverflow_0074659040_strapi.txt
|
Q:
Creating a Snapchat bot in python
I am new to Python programming and I was trying to create a Snapchat bot.
Can you help me create a request-based Snapchat bot?
I will be using this for marketing with my existing clients to help schedule posts. It will also be an auto-responder to send Thank You or Welcome messages.
If you got any ideas you can share your thoughts, thank you
Basically I need a python script to handle message scheduling and auto response
A:
Snapchat is now available in the web browser, so you can take a look at tutorials for the pyautogui module in Python. You can manipulate keyboard and mouse events and respond to messages with prewritten messages of yours. Your task can be done easily.
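For illustration only, a tiny pyautogui sketch along those lines; the screen coordinates and the canned reply are placeholders you would adapt to your own layout, and this is not a Snapchat-specific API:
import time
import pyautogui

REPLY = "Thank you for reaching out! We'll get back to you soon."

def send_reply(chat_x, chat_y):
    """Click a chat at the given screen coordinates and type a canned reply."""
    pyautogui.click(chat_x, chat_y)        # focus the chat input
    time.sleep(1)                          # give the page time to respond
    pyautogui.write(REPLY, interval=0.05)  # type the message
    pyautogui.press("enter")               # send it

if __name__ == "__main__":
    send_reply(400, 300)  # example coordinates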
|
Creating a Snapchat bot in python
|
I am new to Python programming and I was trying to create a Snapchat bot.
Can you help me create a request-based Snapchat bot?
I will be using this for marketing with my existing clients to help schedule posts. It will also be an auto-responder to send Thank You or Welcome messages.
If you got any ideas you can share your thoughts, thank you
Basically I need a python script to handle message scheduling and auto response
|
[
"Snapchat now supports the web or browser. So you can take a look at the tutorials of pyautogui module in python. you can manipulate the keyboard and mouse events and respond to the messages with prewritten messages of yours. Your task can be done easily.\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074665242_python.txt
|
Q:
Java socket dwell time too long
I have two instances of the same Java program that talk to each other through a socket. They run on the same host. One waits for the other to receive a packet of data, after which this process is reversed. This constitutes a cycle interval.
The output of both instances is configured to give me some information about how long it takes to complete one of these cycles. A snippet of output from one of them follows:
Total interval duration: 85,120,197 nanoseconds.
Before receiving 5,239 0%
After receiving 83,174,365 97%
After sending 1,931,094 2%
End 9,499 0%
The percentages are of course out because I merely cast the derived doubles to an integer without rounding. This is not the problem.
The problem is that I am wondering why, when the two instances are running copies of the same program, would these instances dwell for about 97% of the time on an operation that involves blocking on a read statement from the socket they share.
As shown, the two instances take about 80 milliseconds to complete one cycle interval (about 12 cycles a second). I think this is far too long, as I have seen much shorter times before now, but I have recently made extensive changes to the program, and I think something I have put in these changes is causing the latency blowout. At the moment, I believe that whatever is doing this may also be responsible for the apparent simultaneous blocking anomaly.
The code that both instances execute where the blocking occurs is summarised here:
public final Transmitter receive(){
Transmitter tr=null;
try{
Receiver.currentReceiver().measure("Before receiving");
tr=(Transmitter)this.getInputStream().readObject();
Receiver.currentReceiver().measure("After receiving");
}catch(IOException x){
//...
}
return tr;
}
The receive method cited in the above code fragment is executed by a Receiver - a type of java thread - customised to complement and be accessible from the code it is executing. This receive method is a member of a class that extends the Java socket class; it reads a Transmitter object passed from the instance at the other end of the socket connection. The receiver thread is retrieved in a way similar to the way one might retrieve the current thread: through the call to the Receiver class's static method currentReceiver. The measure method records a place in the code where a measurement of execution time will be taken. The string given to the measure method appears in the cycle time output disclosed at the top of this question.
It seems reasonable that both instances should not simultaneously be blocked on the same socket for what appears to exceed 90% of the time. One of the instances would block reading it while the other is doing the things necessary to eventually write to it; I think it reasonable to see one instance blocked for perhaps a bit more than half the time, but I observe there is a large window of time where both are blocked. What could be happening in that window, and how might I reduce or eliminate it?
Any ideas perhaps?
A:
Would have been nice if someone who lurks on this site could have helped, but this question might have valuable information. I'm sure someone can change this answer to conform to this site's standards.
Option TCP_NODELAY=true appears to halve the cycle interval (documentation says it should account for 40 milliseconds - which accords with what I am seeing), yet the proportion of time blocking on read is still about 97% - it's just half as long which is an improvement, but I would expect this proportion to perhaps begin with a 5, possibly a 6.
The other option that seems relevant is TCP_QUICKACK=true. However, I imagine that having the Receiver write interval stats to the screen every cycle is accounting for a significant quantum of time. Hence, the use of TCP_QUICKACK may only become significant if I change the code so that, instead of writing stats every time a cycle is completed, my code samples the cycle interval every few seconds or so through a separate thread. I already have one of those - if switched on, it monitors the Receiver, letting the user know if the Receiver appears to have done nothing in five seconds.
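For completeness, a minimal sketch of setting these two options on a plain java.net.Socket (the host and port are placeholders; TCP_QUICKACK needs JDK 10+ and only takes effect on Linux, and these are the standard JDK options rather than anything from the question's custom classes):
import java.io.IOException;
import java.net.Socket;
import jdk.net.ExtendedSocketOptions;

public class SocketTuning {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("localhost", 9000)) {
            // Disable Nagle's algorithm so small writes go out immediately
            socket.setTcpNoDelay(true);
            // Ask the kernel to acknowledge segments immediately (Linux only)
            socket.setOption(ExtendedSocketOptions.TCP_QUICKACK, true);
            System.out.println("TCP_NODELAY = " + socket.getTcpNoDelay());
        }
    }
}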
|
Java socket dwell time too long
|
I have two instances of the same Java program that talk to each other through a socket. They run on the same host. One waits for the other to receive a packet of data, after which this process is reversed. This constitutes a cycle interval.
The output of both instances is configured to give me some information about how long it takes to complete one of these cycles. A snippet of output from one of them follows:
Total interval duration: 85,120,197 nanoseconds.
Before receiving 5,239 0%
After receiving 83,174,365 97%
After sending 1,931,094 2%
End 9,499 0%
The percentages are of course out because I merely cast the derived doubles to an integer without rounding. This is not the problem.
The problem is that I am wondering why, when the two instances are running copies of the same program, would these instances dwell for about 97% of the time on an operation that involves blocking on a read statement from the socket they share.
As shown, the two instances take about 80 milliseconds to complete one cycle interval (about 12 cycles a second). I think this is far too long, as I have seen much shorter times before now, but I have recently made extensive changes to the program, and I think something I have put in these changes is causing the latency blowout. At the moment, I believe that whatever is doing this may also be responsible for the apparent simultaneous blocking anomaly.
The code that both instances execute where the blocking occurs is summarised here:
public final Transmitter receive(){
Transmitter tr=null;
try{
Receiver.currentReceiver().measure("Before receiving");
tr=(Transmitter)this.getInputStream().readObject();
Receiver.currentReceiver().measure("After receiving");
}catch(IOException x){
//...
}
return tr;
}
The receive method cited in the above code fragment is executed by a Receiver - a type of java thread - customised to complement and be accessible from the code it is executing. This receive method is a member of a class that extends the Java socket class; it reads a Transmitter object passed from the instance at the other end of the socket connection. The receiver thread is retrieved in a way similar to the way one might retrieve the current thread: through the call to the Receiver class's static method currentReceiver. The measure method records a place in the code where a measurement of execution time will be taken. The string given to the measure method appears in the cycle time output disclosed at the top of this question.
It seems reasonable that both instances should not simultaneously be blocked on the same socket for what appears to exceed 90% of the time. One of the instances would block reading it while the other is doing the things necessary to eventually write to it; I think it reasonable to see one instance blocked for perhaps a bit more than half the time, but I observe there is a large window of time where both are blocked. What could be happening in that window, and how might I reduce or eliminate it?
Any ideas perhaps?
|
[
"Would have been nice if someone who lurks on this site could have helped, but this question might have valuable information. I'm sure someone can change this answer to conform to this site's standards.\nOption TCP_NODELAY=true appears to halve the cycle interval (documentation says it should account for 40 milliseconds - which accords with what I am seeing), yet the proportion of time blocking on read is still about 97% - it's just half as long which is an improvement, but I would expect this proportion to perhaps begin with a 5, possibly a 6.\nThe other option that seems relevant is TCP_QUICKACK=true. However, I imagine that having the Receiver write interval stats to the screen every cycle is accounting for a significant quantum of time. Hence, the use of TCP_QUICKACK may only become significant if I change the code so that, instead of writing stats every time a cycle is completed, my code samples the cycle interval every few seconds or so through a separate thread. I already have one of those - if switched on, it monitors the Receiver, letting the user know if the Receiver appears to have done nothing in five seconds.\n"
] |
[
0
] |
[] |
[] |
[
"java",
"sockets"
] |
stackoverflow_0074608478_java_sockets.txt
|
Q:
OfficeJS Excel Keyboard Shortcuts Not Working
I have done everything in the documentation
Manifest, Webpack, Associate, Shortcuts.json file
Everything is exactly as it should be. No keyboard shortcut seems to be working.
I am on a windows laptop.
I build the add-in, the functions work, but the shortcuts related to the functions don't do anything.
Any thoughts on things I should check for? Thanks
A:
You need to check whether keyboard shortcuts are supported by your host application. Keyboard shortcuts are currently only supported on Excel and only on these platforms and builds:
Excel on Windows: Version 2102 (Build 13801.20632) and later
Excel on Mac: 16.48 and later
Excel on the web
Also keyboard shortcuts work only on platforms that support the following requirement sets.
SharedRuntime 1.1
KeyboardShortcuts 1.1
For more about requirement sets and how to work with them, see Specify Office applications and API requirements.
Finally, you need to make sure that you did everything like described in the Add custom keyboard shortcuts to your Office Add-ins article.
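As a quick sanity check, you can also probe these requirement sets at runtime from the add-in itself; this is just a sketch using the standard Office.js requirements API:
// Log whether the current host supports the sets that keyboard shortcuts need
Office.onReady(() => {
  ["SharedRuntime", "KeyboardShortcuts"].forEach((name) => {
    const ok = Office.context.requirements.isSetSupported(name, "1.1");
    console.log(name + " 1.1 supported: " + ok);
  });
});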
|
OfficeJS Excel Keyboard Shortcuts Not Working
|
I have done everything in the documentation
Manifest, Webpack, Associate, Shortcuts.json file
Everything is exactly as it should be. No keyboard shortcut seems to be working.
I am on a windows laptop.
I build the add-in, the functions work, but the shortcuts related to the functions don't do anything.
Any thoughts on things I should check for? Thanks
|
[
"You need to check whether keyboard shortcuts are supported by your host application. Keyboard shortcuts are currently only supported on Excel and only on these platforms and builds:\n\nExcel on Windows: Version 2102 (Build 13801.20632) and later\nExcel on Mac: 16.48 and later\nExcel on the web\n\nAlso keyboard shortcuts work only on platforms that support the following requirement sets.\n\nSharedRuntime 1.1\nKeyboardShortcuts 1.1\n\nFor more about requirement sets and how to work with them, see Specify Office applications and API requirements.\nFinally, you need to make sure that you did everything like described in the Add custom keyboard shortcuts to your Office Add-ins article.\n"
] |
[
0
] |
[] |
[] |
[
"excel",
"excel_web_addins",
"javascript",
"office_addins",
"office_js"
] |
stackoverflow_0074662014_excel_excel_web_addins_javascript_office_addins_office_js.txt
|
Q:
Build fails with Error: Pear requires a 'dev' or 'nightly' version of rustc even after a successful rustup override set nightly
Windows 10
rustup 1.23.1 (3df2264a9 2020-11-30)
default rustc 1.50.0 (cb75ad5db 2021-02-10)
project rustc 1.52.0-nightly (4a8b6f708 2021-03-11)
rocket = "0.4.4"
I'm trying to build a rust project with rocket but I always get this error when compiling, even after successfully overwriting the project's toolchain:
D:\GitHub\Learning-Rust\poke_api> rustup override set nightly
info: using existing install for 'nightly-x86_64-pc-windows-msvc'
info: override toolchain for 'D:\GitHub\Learning-Rust\poke_api' set to 'nightly-x86_64-pc-windows-msvc'
nightly-x86_64-pc-windows-msvc unchanged - rustc 1.52.0-nightly (4a8b6f708 2021-03-11)
PS D:\GitHub\Learning-Rust\poke_api> cargo build
Compiling winapi v0.3.9
Compiling serde_derive v1.0.124
Compiling rocket v0.4.7
Compiling pear_codegen v0.1.4
Compiling rocket_codegen v0.4.7
Compiling proc-macro2 v1.0.24
Compiling pq-sys v0.4.6
Compiling aho-corasick v0.6.10
Compiling serde_json v1.0.64
error: failed to run custom build command for `pear_codegen v0.1.4`
Caused by:
process didn't exit successfully: `D:\GitHub\Learning-Rust\poke_api\target\debug\build\pear_codegen-e182711746033ac9\build-script-build` (exit code: 101)
--- stderr
Error: Pear requires a 'dev' or 'nightly' version of rustc.
Installed version: 1.48.0 (2020-11-16)
Minimum required: 1.31.0-nightly (2018-10-05)
thread 'main' panicked at 'Aborting compilation due to incompatible compiler.', C:\Users\gabre\.cargo\registry\src\github.com-1ecc6299db9ec823\pear_codegen-0.1.4\build.rs:24:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
A:
I had a similar issue while using rocket. Same error message too, Error: Pear requires a 'dev' or 'nightly' version of rustc.
If you go to the getting-started page on the Rocket framework website, it says, "Rocket makes abundant use of Rust's syntax extensions and other advanced, unstable features. Because of this, we'll need to use a nightly version of Rust."
My issue was I was not using a nightly version of rust. Running this on my terminal in my project directory did it for me.
rustup override set nightly
If you then check the toolchain version for that directory with
cargo version
you will see that it has switched to the nightly version.
A:
Failed compilation even with nightly
It looks like you have some outdated dependencies (pear_codegen probably being the one causing trouble); updating these may resolve the compilation issues.
General notes on how to override the toolchain
Using rustup's override works fine, but it is bound to your local rustup configuration and not specified inside the project.
In order to achieve this, thereby making the project more portable and allowing others to always use the correct toolchain, I would recommend the toolchain file. It can look something like this (example taken from linked page) and will accurately specify the required toolchain only for the containing project.
# rust-toolchain.toml
[toolchain]
channel = "nightly-2020-07-10"
components = [ "rustfmt", "rustc-dev" ]
targets = [ "wasm32-unknown-unknown", "thumbv2-none-eabi" ]
profile = "minimal"
For your purposes a simple configuration like this will most likely be all you need, although adding the components you want to use would be beneficial.
[toolchain]
channel = "nightly"
A:
My issue was with rust-analyzer, which wouldn't start because multiple Rocket dependencies needed a nightly or dev version of rustc.
These steps fixed my issue:
Switch to nightly for my rocket project by running rustup override set nightly inside the app folder.
Remove all target folders in my project. (I also had one in root)
Manually remove the faulty cached packages from cargo cache. cd ~/.cargo/registry/cache/github.com-xxxxxxxxxxxx && rm -r pear_codegen-0.1.5/
|
Build fails with Error: Pear requires a 'dev' or 'nightly' version of rustc even after a successful rustup override set nightly
|
Windows 10
rustup 1.23.1 (3df2264a9 2020-11-30)
default rustc 1.50.0 (cb75ad5db 2021-02-10)
project rustc 1.52.0-nightly (4a8b6f708 2021-03-11)
rocket = "0.4.4"
I'm trying to build a Rust project with Rocket but I always get this error when compiling, even after successfully overriding the project's toolchain:
D:\GitHub\Learning-Rust\poke_api> rustup override set nightly
info: using existing install for 'nightly-x86_64-pc-windows-msvc'
info: override toolchain for 'D:\GitHub\Learning-Rust\poke_api' set to 'nightly-x86_64-pc-windows-msvc'
nightly-x86_64-pc-windows-msvc unchanged - rustc 1.52.0-nightly (4a8b6f708 2021-03-11)
PS D:\GitHub\Learning-Rust\poke_api> cargo build
Compiling winapi v0.3.9
Compiling serde_derive v1.0.124
Compiling rocket v0.4.7
Compiling pear_codegen v0.1.4
Compiling rocket_codegen v0.4.7
Compiling proc-macro2 v1.0.24
Compiling pq-sys v0.4.6
Compiling aho-corasick v0.6.10
Compiling serde_json v1.0.64
error: failed to run custom build command for `pear_codegen v0.1.4`
Caused by:
process didn't exit successfully: `D:\GitHub\Learning-Rust\poke_api\target\debug\build\pear_codegen-e182711746033ac9\build-script-build` (exit code: 101)
--- stderr
Error: Pear requires a 'dev' or 'nightly' version of rustc.
Installed version: 1.48.0 (2020-11-16)
Minimum required: 1.31.0-nightly (2018-10-05)
thread 'main' panicked at 'Aborting compilation due to incompatible compiler.', C:\Users\gabre\.cargo\registry\src\github.com-1ecc6299db9ec823\pear_codegen-0.1.4\build.rs:24:13
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
warning: build failed, waiting for other jobs to finish...
error: build failed
|
[
"I had a similar issue while using rocket. Same error message too, Error: Pear requires a 'dev' or 'nightly' version of rustc.\nIf you get to the get-started page on rocket framework website. It says, \"Rocket makes abundant use of Rust's syntax extensions and other advanced, unstable features. Because of this, we'll need to use a nightly version of Rust.\"\nMy issue was I was not using a nightly version of rust. Running this on my terminal in my project directory did it for me.\nrustup override set nightly\n\nIf you check the cargo version for that directory after,\ncargo version\n\nyou will confirm it has switched to nightly version\n",
"Failed compilation even with nightly\nIt looks like you have some outdated dependencies (pear-codegen probably being the one that causes trouble), updating these may resolve the compilation issues.\nGeneral notes on how to override the toolchain\nUsing rustups override works fine, but it is bound to your local rustup configuration and not specified inside the project.\nIn order to achieve this, thereby making the project more portable and allowing others to always use the correct toolchain, I would recommend the toolchain file. It can look something like this (example taken from linked page) and will accurately specify the required toolchain only for the containing project.\n# rust-toolchain.toml\n[toolchain]\nchannel = \"nightly-2020-07-10\"\ncomponents = [ \"rustfmt\", \"rustc-dev\" ]\ntargets = [ \"wasm32-unknown-unknown\", \"thumbv2-none-eabi\" ]\nprofile = \"minimal\"\n\nFor your purposes a simple configuration like this will most likely be all you need, although adding the components you want to use would be beneficial.\n[toolchain]\nchannel = \"nightly\"\n\n",
"My issue was with rust-analyser that wouldn't start because multiple rocket dependencies needed nightly or dev version of rustc.\nThese steps fixed my issue:\n\nSwitch to nightly for my rocket project by running rustup override set nightly inside the app folder.\nRemove all target folders in my project. (I also had one in root)\nManually remove the faulty cached packages from cargo cache. cd ~/.cargo/registry/cache/github.com-xxxxxxxxxxxx && rm -r pear_codegen-0.1.5/\n\n"
] |
[
4,
0,
0
] |
[] |
[] |
[
"rust",
"rust_rocket",
"rustup"
] |
stackoverflow_0066605551_rust_rust_rocket_rustup.txt
|
Q:
Accessing python variable outside function scope when reassigning to update variable
I want to keep track of the current max of a calculated cosine similarity score. However, I keep getting the error UnboundLocalError: cannot access local variable 'current_max_cosine_similarity_score' where it is not associated with a value
In Javascript, I can typically do this without a problem using the let keyword when working with a variable outside of a function scope. However, in Python that doesn't seem to be the case.
What would be the pythonic way of going about this?
current_max_cosine_similarity_score = -math.inf
def func(acc, v):
calculated_cosine_similarity_score = ...
if calculated_cosine_similarity_score > current_max_cosine_similarity_score:
current_max_cosine_similarity_score = max([current_max_cosine_similarity_score, calculated_cosine_similarity_score])
acc['cosineSimilarityScore'] = calculated_cosine_similarity_score
return acc
print(reduce(func, [...], {}))
A:
You have to declare current_max_cosine_similarity_score as global (or nonlocal) in func().
But that's nevertheless a bad idea. The "pythonic" way would be to use a generator, a closure, or a class with a get_current_maximum() method.
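For completeness, here is a minimal sketch of that discouraged global approach (my own illustration, not part of the original answer; the cosine-similarity computation is replaced with a placeholder value since the original is elided):
import math
from functools import reduce

current_max_cosine_similarity_score = -math.inf

def func(acc, v):
    # declare the module-level name so the assignment below rebinds it
    global current_max_cosine_similarity_score
    calculated_cosine_similarity_score = 0.42  # placeholder for the real computation
    if calculated_cosine_similarity_score > current_max_cosine_similarity_score:
        current_max_cosine_similarity_score = calculated_cosine_similarity_score
    acc['cosineSimilarityScore'] = calculated_cosine_similarity_score
    return acc

print(reduce(func, [1, 2, 3], {}))
print(current_max_cosine_similarity_score)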
That said, a closure is probably the most "pythonic" option and solves your problem:
import math
from functools import reduce

def calc_closure():
    def _calc(value, element):
        # do calculations on element here
        if element > value:
            _calc.current_max_value = element
        return _calc.current_max_value
    # using an attribute makes current_max_value accessible from outside
    _calc.current_max_value = -math.inf
    return _calc

closure_1 = calc_closure()
closure_2 = calc_closure()

print(reduce(closure_1, [1, 2, 3, 4, 1]))
print(closure_1.current_max_value)
print(closure_2.current_max_value)
Output:
4
4
-inf
|
Accessing python variable outside function scope when reassigning to update variable
|
I want to keep track of the current max of a calculated cosine similarity score. However, I keep getting the error UnboundLocalError: cannot access local variable 'current_max_cosine_similarity_score' where it is not associated with a value
In Javascript, I can typically do this without a problem using the let keyword when working with a variable outside of a function scope. However, in Python that doesn't seem to be the case.
What would be the pythonic way of going about this?
current_max_cosine_similarity_score = -math.inf
def func(acc, v):
calculated_cosine_similarity_score = ...
if calculated_cosine_similarity_score > current_max_cosine_similarity_score:
current_max_cosine_similarity_score = max([current_max_cosine_similarity_score, calculated_cosine_similarity_score])
acc['cosineSimilarityScore'] = calculated_cosine_similarity_score
return acc
print(reduce(func, [...], {}))
|
[
"You have to declare current_max_cosine_similarity_score as global (or nonlocal) in func().\nBut that's nevertheless a bad idea. The \"pythonic\" way would be to use a generator, closure or a class with a get_current_maximum().\nProbably the most \"pythonic\" closure solves your problem:\nfrom functools import reduce\n\ndef calc_closure():\n def _calc(value, element):\n # do calculations on element here\n if element > value:\n _calc.current_max_value = element\n return _calc.current_max_value \n # using an attribute makes current_max_value accessible from outer\n _calc.current_max_value = -np.math.inf\n return _calc\n\nclosure_1 = calc_closure()\nclosure_2 = calc_closure()\n\nprint(reduce(closure_1, [1, 2, 3, 4, 1]))\nprint(closure_1.current_max_value )\nprint(closure_2.current_max_value )\n\nOutput:\n4\n4\n-inf\n"
] |
[
2
] |
[] |
[] |
[
"python",
"python_3.x"
] |
stackoverflow_0074665130_python_python_3.x.txt
|
Q:
Peer Dependency error while deploying to Vercel
I am trying to deploy my NextJS application onto Vercel but each time at the deploy stage, I am met with these errors and the deployment will fail:
Previous build cache not available
Cloning completed: 426.52ms
Running "vercel build"
Vercel CLI 28.6.0
Installing dependencies...
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/react
npm ERR! react@"18.2.0" from the root project
npm ERR! peer react@"^18.2.0" from [email protected]
npm ERR! node_modules/next
npm ERR! next@"13.0.5" from the root project
npm ERR! 3 more (react-dom, react-icons, styled-jsx)
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^16.3.0" from [email protected]
npm ERR! node_modules/react-typed
npm ERR! react-typed@"^1.2.0" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: [email protected]
npm ERR! node_modules/react
npm ERR! peer react@"^16.3.0" from [email protected]
npm ERR! node_modules/react-typed
npm ERR! react-typed@"^1.2.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /vercel/.npm/eresolve-report.txt for a full report.
npm ERR! A complete log of this run can be found in:
npm ERR! /vercel/.npm/_logs/2022-12-02T16_42_09_037Z-debug-0.log
Error: Command "npm install" exited with 1
I have tried running npm install --legacy-peer-deps and tried redeploying onto Vercel, but the same issue persists. When I run this application on my localhost:3000 using npm run dev, the application renders fine. Does anyone know what I can do?
I have tried
npm install --legacy-peer-deps
and also pushing these changes onto my GitHub repository. When redeploying, the same issue still shows. I run
npm install --legacy-peer-deps
again, but this time, there are no more changes to be made.
A:
You can override the install command used by the Vercel CLI, either by:
Going to Vercel Dashboard -> (your project) -> Settings -> General -> scroll to Build & Development Settings and put npm install --legacy-peer-deps in Install Command
Creating a vercel.json file in the root directory of your project that should contain:
// vercel.json
{
"installCommand": "npm install --legacy-peer-deps"
}
After that, push and wait for deployment; it should go through this time.
But I recommend looking for the cause of this issue. It seems like some dependency you're using is depending on react-typed, and the latter is not maintained anymore. It's better to seek an alternative
|
Peer Dependency error while deploying to Vercel
|
I am trying to deploy my NextJS application onto Vercel but each time at the deploy stage, I am met with these errors and the deployment will fail:
Previous build cache not available
Cloning completed: 426.52ms
Running "vercel build"
Vercel CLI 28.6.0
Installing dependencies...
npm ERR! code ERESOLVE
npm ERR! ERESOLVE could not resolve
npm ERR!
npm ERR! While resolving: [email protected]
npm ERR! Found: [email protected]
npm ERR! node_modules/react
npm ERR! react@"18.2.0" from the root project
npm ERR! peer react@"^18.2.0" from [email protected]
npm ERR! node_modules/next
npm ERR! next@"13.0.5" from the root project
npm ERR! 3 more (react-dom, react-icons, styled-jsx)
npm ERR!
npm ERR! Could not resolve dependency:
npm ERR! peer react@"^16.3.0" from [email protected]
npm ERR! node_modules/react-typed
npm ERR! react-typed@"^1.2.0" from the root project
npm ERR!
npm ERR! Conflicting peer dependency: [email protected]
npm ERR! node_modules/react
npm ERR! peer react@"^16.3.0" from [email protected]
npm ERR! node_modules/react-typed
npm ERR! react-typed@"^1.2.0" from the root project
npm ERR!
npm ERR! Fix the upstream dependency conflict, or retry
npm ERR! this command with --force, or --legacy-peer-deps
npm ERR! to accept an incorrect (and potentially broken) dependency resolution.
npm ERR!
npm ERR! See /vercel/.npm/eresolve-report.txt for a full report.
npm ERR! A complete log of this run can be found in:
npm ERR! /vercel/.npm/_logs/2022-12-02T16_42_09_037Z-debug-0.log
Error: Command "npm install" exited with 1
I have tried running npm install --legacy-peer-deps and tried redeploying onto Vercel, but the same issue persists. When I run this application on my localhost:3000 using npm run dev, the application renders fine. Does anyone know what I can do?
I have tried
npm install --legacy-peer-deps
and also pushing these changes onto my GitHub repository. When redeploying, the same issue still shows. I run
npm install --legacy-peer-deps
again, but this time, there are no more changes to be made.
|
[
"you can override the installing command used by Vercel CLI, either by:\n\nGoing to Vercel Dashboard -> -> Settings -> General -> Scroll to Build & Development Settings and put npm install --legacy-peer-deps in Install Command\nCreating vercel.json file in the root directory of your project that should contain:\n\n// vercel.json\n{\n \"installCommand\": \"npm install --legacy-peer-deps\"\n}\n\nafter that push and wait for deployment, it should go through this time\nBut I recommend looking for the cause of this issue. It seems like some dependency you're using is depending on react-typed, and the latter is not maintained anymore. It's better to seek an alternative\n"
] |
[
0
] |
[] |
[] |
[
"dependencies",
"deployment",
"next.js",
"vercel"
] |
stackoverflow_0074659077_dependencies_deployment_next.js_vercel.txt
|
Q:
i want here to get the product id
if(isset($_POST["action"]))
{
$average_rating = 0;
$total_review = 0;
$five_star_review = 0;
$four_star_review = 0;
$three_star_review = 0;
$two_star_review = 0;
$one_star_review = 0;
$total_user_rating = 0;
$review_content = array();
$query = "
SELECT * FROM review_table
ORDER BY review_id DESC
";
$result = $conn->query($query, PDO::FETCH_ASSOC);
foreach($result as $row)
{
$review_content[] = array(
'user_name' => $row["user_name"],
'user_review' => $row["user_review"],
'rating' => $row["user_rating"],
'datetime' => date('l jS, F Y h:i:s A', $row["datetime"])
);
if($row["user_rating"] == '5')
{
$five_star_review++;
}
if($row["user_rating"] == '4')
{
$four_star_review++;
}
if($row["user_rating"] == '3')
{
$three_star_review++;
}
if($row["user_rating"] == '2')
{
$two_star_review++;
}
if($row["user_rating"] == '1')
{
$one_star_review++;
}
$total_review++;
$total_user_rating = $total_user_rating + $row["user_rating"];
}
$average_rating = $total_user_rating / $total_review;
$output = array(
'average_rating' => number_format($average_rating, 1),
'total_review' => $total_review,
'five_star_review' => $five_star_review,
'four_star_review' => $four_star_review,
'three_star_review' => $three_star_review,
'two_star_review' => $two_star_review,
'one_star_review' => $one_star_review,
'review_data' => $review_content
);
echo json_encode($output);
}
?>
This is my submit_review.php file for the review system. I am trying to load the reviews on the product page, and for that I need the product_id. On the product page I pass it with an <input type="hidden"> field. How can I retrieve this id inside the isset() check:
($_POST["action"])
and here is my product.php page:
load_rating_data();
function load_rating_data()
{
$.ajax({
url:"submit_rating.php",
method:"POST",
data:{action:'load_data'},
dataType:"JSON",
success:function(data)
{
$('#average_rating').text(data.average_rating);
$('#total_review').text(data.total_review);
var count_star = 0;
$('.main_star').each(function(){
count_star++;
if(Math.ceil(data.average_rating) >= count_star)
{
$(this).addClass('text-warning');
$(this).addClass('star-light');
}
});
$('#total_five_star_review').text(data.five_star_review);
$('#total_four_star_review').text(data.four_star_review);
$('#total_three_star_review').text(data.three_star_review);
$('#total_two_star_review').text(data.two_star_review);
$('#total_one_star_review').text(data.one_star_review);
$('#five_star_progress').css('width', (data.five_star_review/data.total_review) * 100 + '%');
$('#four_star_progress').css('width', (data.four_star_review/data.total_review) * 100 + '%');
$('#three_star_progress').css('width', (data.three_star_review/data.total_review) * 100 + '%');
$('#two_star_progress').css('width', (data.two_star_review/data.total_review) * 100 + '%');
$('#one_star_progress').css('width', (data.one_star_review/data.total_review) * 100 + '%');
if(data.review_data.length > 0)
{
var html = '';
for(var count = 0; count < data.review_data.length; count++)
{
html += '<div class="row mb-3">';
html += '<div class="col-sm-1"><div class="rounded-circle bg-danger text-white pt-2 pb-2"><h3 class="text-center">'+data.review_data[count].user_name.charAt(0)+'</h3></div></div>';
html += '<div class="col-sm-11">';
html += '<div class="card">';
html += '<div class="card-header"><b>'+data.review_data[count].user_name+'</b></div>';
html += '<div class="card-body">';
for(var star = 1; star <= 5; star++)
{
var class_name = '';
if(data.review_data[count].rating >= star)
{
class_name = 'text-warning';
}
else
{
class_name = 'star-light';
}
html += '<i class="fa fa-star '+class_name+' mr-1"></i>';
}
html += '<br />';
html += data.review_data[count].user_review;
html += '</div>';
html += '<div class="card-footer text-right">On '+data.review_data[count].datetime+'</div>';
html += '</div>';
html += '</div>';
html += '</div>';
}
$('#review_content').html(html);
}
}
})
}
});
I want that product id in my action part so that I can easily apply a condition like where product_id = <my variable>.
A:
To retrieve the product ID in the code above, you can add a hidden input field in the HTML form on the product page with the name "product_id" and the value set to the ID of the product. Then, in the PHP code, you can access the product ID by using the $_POST["product_id"] variable.
For example, you can add the following code to the product page:
<input type="hidden" name="product_id" value="<?php echo $product_id; ?>">
Then, in the PHP code, you can access the product ID using the following code:
$product_id = $_POST["product_id"];
You can then use the $product_id variable in your SQL query to retrieve only the reviews for the specific product.
|
i want here to get the product id
|
if(isset($_POST["action"]))
{
$average_rating = 0;
$total_review = 0;
$five_star_review = 0;
$four_star_review = 0;
$three_star_review = 0;
$two_star_review = 0;
$one_star_review = 0;
$total_user_rating = 0;
$review_content = array();
$query = "
SELECT * FROM review_table
ORDER BY review_id DESC
";
$result = $conn->query($query, PDO::FETCH_ASSOC);
foreach($result as $row)
{
$review_content[] = array(
'user_name' => $row["user_name"],
'user_review' => $row["user_review"],
'rating' => $row["user_rating"],
'datetime' => date('l jS, F Y h:i:s A', $row["datetime"])
);
if($row["user_rating"] == '5')
{
$five_star_review++;
}
if($row["user_rating"] == '4')
{
$four_star_review++;
}
if($row["user_rating"] == '3')
{
$three_star_review++;
}
if($row["user_rating"] == '2')
{
$two_star_review++;
}
if($row["user_rating"] == '1')
{
$one_star_review++;
}
$total_review++;
$total_user_rating = $total_user_rating + $row["user_rating"];
}
$average_rating = $total_user_rating / $total_review;
$output = array(
'average_rating' => number_format($average_rating, 1),
'total_review' => $total_review,
'five_star_review' => $five_star_review,
'four_star_review' => $four_star_review,
'three_star_review' => $three_star_review,
'two_star_review' => $two_star_review,
'one_star_review' => $one_star_review,
'review_data' => $review_content
);
echo json_encode($output);
}
?>
This is my submit_review.php file for the review system. I am trying to load the reviews on the product page, and for that I need the product_id. On the product page I pass it with an <input type="hidden"> field. How can I retrieve this id inside the isset() check:
($_POST["action"])
and here is my product.php page:
load_rating_data();
function load_rating_data()
{
$.ajax({
url:"submit_rating.php",
method:"POST",
data:{action:'load_data'},
dataType:"JSON",
success:function(data)
{
$('#average_rating').text(data.average_rating);
$('#total_review').text(data.total_review);
var count_star = 0;
$('.main_star').each(function(){
count_star++;
if(Math.ceil(data.average_rating) >= count_star)
{
$(this).addClass('text-warning');
$(this).addClass('star-light');
}
});
$('#total_five_star_review').text(data.five_star_review);
$('#total_four_star_review').text(data.four_star_review);
$('#total_three_star_review').text(data.three_star_review);
$('#total_two_star_review').text(data.two_star_review);
$('#total_one_star_review').text(data.one_star_review);
$('#five_star_progress').css('width', (data.five_star_review/data.total_review) * 100 + '%');
$('#four_star_progress').css('width', (data.four_star_review/data.total_review) * 100 + '%');
$('#three_star_progress').css('width', (data.three_star_review/data.total_review) * 100 + '%');
$('#two_star_progress').css('width', (data.two_star_review/data.total_review) * 100 + '%');
$('#one_star_progress').css('width', (data.one_star_review/data.total_review) * 100 + '%');
if(data.review_data.length > 0)
{
var html = '';
for(var count = 0; count < data.review_data.length; count++)
{
html += '<div class="row mb-3">';
html += '<div class="col-sm-1"><div class="rounded-circle bg-danger text-white pt-2 pb-2"><h3 class="text-center">'+data.review_data[count].user_name.charAt(0)+'</h3></div></div>';
html += '<div class="col-sm-11">';
html += '<div class="card">';
html += '<div class="card-header"><b>'+data.review_data[count].user_name+'</b></div>';
html += '<div class="card-body">';
for(var star = 1; star <= 5; star++)
{
var class_name = '';
if(data.review_data[count].rating >= star)
{
class_name = 'text-warning';
}
else
{
class_name = 'star-light';
}
html += '<i class="fa fa-star '+class_name+' mr-1"></i>';
}
html += '<br />';
html += data.review_data[count].user_review;
html += '</div>';
html += '<div class="card-footer text-right">On '+data.review_data[count].datetime+'</div>';
html += '</div>';
html += '</div>';
html += '</div>';
}
$('#review_content').html(html);
}
}
})
}
});
I want that product id in my action part so that I can easily apply a condition like where product_id = <my variable>.
|
[
"To retrieve the product ID in the code above, you can add a hidden input field in the HTML form on the product page with the name \"product_id\" and the value set to the ID of the product. Then, in the PHP code, you can access the product ID by using the $_POST[\"product_id\"] variable.\nFor example, you can add the following code to the product page:\n<input type=\"hidden\" name=\"product_id\" value=\"<?php echo $product_id; ?>\">\n\nThen, in the PHP code, you can access the product ID using the following code:\n$product_id = $_POST[\"product_id\"];\n\nYou can then use the $product_id variable in your SQL query to retrieve only the reviews for the specific product.\n"
] |
[
0
] |
[] |
[] |
[
"ajax",
"javascript",
"php"
] |
stackoverflow_0074665140_ajax_javascript_php.txt
|
Q:
What's have better performance and memory usage: guava sets or java 17 set?
I have several big HashSet collections at runtime. How can I use memory effectively? Which tools or libraries do I need? A big collection here ranges from 1,000 to 1,000,000 elements, and there are from 4 to ~20 collections.
I'd try to set an initial capacity to decrease the number of hash-map resizes for performance.
Maybe there are libraries such as Guava (not sure that using it is right) or something else to decrease memory usage, or some comparison of "some library" vs Java 17 in terms of memory/performance.
A:
What's have better performance and memory usage: guava sets or java 17 set?
If you are talking about java.util.HashSet and the objects returned by Google Guava's Sets.newHashSet(), they are the same class and (therefore) have the same performance and memory characteristics.
See https://github.com/google/guava/blob/master/guava/src/com/google/common/collect/Sets.java line 180.
If they were different classes, I would recommend that you write a representative benchmark and compare the performance and memory utilization of the alternatives for yourself. Note that the overall performance of collection types often depends on how you use them; e.g. the sequence of operations that you perform, and how well you have implemented hashCode and equals for the element types. Make sure that you do the benchmarking with your classes, and that the usage patterns correspond to what your real application will do.
Also note that if your initialCapacity estimate is poor, you can actually make performance and memory usage worse.
|
What's have better performance and memory usage: guava sets or java 17 set?
|
I have several big HashSet collections at runtime. How can I use memory effectively? Which tools or libraries do I need? A big collection here ranges from 1,000 to 1,000,000 elements, and there are from 4 to ~20 collections.
I'd try to set an initial capacity to decrease the number of hash-map resizes for performance.
Maybe there are libraries such as Guava (not sure that using it is right) or something else to decrease memory usage, or some comparison of "some library" vs Java 17 in terms of memory/performance.
|
[
"\nWhat's have better performance and memory usage: guava sets or java 17 set?\n\nIf you are talking about java.util.HashSet, and the objects returned by Google Guava's Collections.newHashSet() they are the same class and (therefore) have the same performance and memory characteristics.\nSee https://github.com/google/guava/blob/master/guava/src/com/google/common/collect/Sets.java line 180.\n\nIf they were different classes, I would recommend that you write a representative benchmark and compare the performance and memory utilization of the alternatives for yourself. Note that overall performance of collection types often depends on how you use them; e.g. the sequence of operations that you perform, and how well you have implemented hashCode and equal for the element types. Make sure that you do the benchmarking with your classes, and that the usage patterns correspond to what your real application will do.\nAlso note that if your initialCapacity estimate is poor, you can actually make performance and memory usage worse.\n"
] |
[
1
] |
[] |
[] |
[
"guava",
"java",
"java_17"
] |
stackoverflow_0074665201_guava_java_java_17.txt
|
Q:
Getting custom response from twilio
I am using Twilio to make calls in my Laravel application. I have used a webhook for that. When I click the call button in the web browser, it calls a specific phone number, and when the called user presses any digit, I get the digit as a response in the webhook. I want to send question_id as a parameter to the Twilio application, so that when the user presses digit 3 on his phone, I will store 3 as the answer along with the question_id in my db.
For example:
question_id: 1, Press 1 for sales, press 2 for account or press 3 for operator service
received digit: 3,
I have to store 1 as question_id and 3 as call_response in my db
A:
It sounds like you're trying to build an Interactive Voice Response (IVR) and that you only want to store the information on the pressed keys during the call - so you wouldn't need an actual database and could leverage the same by using Twilio Studio. It's also possible to make an HTTP request to your application at the end of the Studio flow and store all the pressed keys with one call to the database.
Suppose Studio doesn't offer all the functionality you need. In that case, it might be worth checking out our serverless offering and the Sync API to store information for a limited or unlimited time.
A:
First of all, please accept this as a comment here; the comment section says my text is too long for a comment. My code is like this:
public function twilio_webhook() {
$response = new VoiceResponse();
$phone_number = $_POST['To'];
$option_digit = $_POST['Digits'];
$country = $_POST['ToCountry'];
$call_status = '1';
CallResponse::insert([
'phone_number' => $phone_number,
'option_digit' => $option_digit,
'country' => $country,
'call_status' => $call_status
]);
if(array_key_exists('Digits', $_POST)) {
switch($_POST['Digits']) {
case 1:
$response->say('Thank you for calling our sales department!');
break;
case 2:
$response->say('Thank you for calling our account team!');
break;
case 3:
$response->say('Thank you for calling our operator service team!');
break;
default:
$response->say('Please try again later!');
break;
}
header('Content-type: text/xml');
}
echo $response;
}
I am hardcoding the response here: "thank you for calling our sales department". This code assumes that the call said "Press 1 for the sales department". This function works well for that case, but there are several instructions in the database. What if the call instruction said "press 1 for connecting to the technical department"? I got the pressed digit, but I don't know what the question was...
|
Getting custom response from twilio
|
I am using Twilio to make calls in my Laravel application. I have used a webhook for that. When I click the call button in the web browser, it calls a specific phone number, and when the called user presses any digit, I get the digit as a response in the webhook. I want to send question_id as a parameter to the Twilio application, so that when the user presses digit 3 on his phone, I will store 3 as the answer along with the question_id in my db.
For example:
question_id: 1, Press 1 for sales, press 2 for account or press 3 for operator service
received digit: 3,
I have to store 1 as question_id and 3 as call_response in my db
|
[
"It sounds like you're trying to build an Interactive Voice Response (IVR) and that you only want to store the information on the pressed keys during the call - so you wouldn't need an actual database and could leverage the same by using Twilio Studio. It's also possible to make an HTTP request to your application at the end of the Studio flow and store all the pressed keys with one call to the database.\nSuppose Studio doesn't offer all the functionality you need. In that case, it might be worth checking out our serverless offering and the Sync API to store information for a limited or unlimited time.\n",
"First of all, please accept this comment here. The comment adding section is saying too long characters for commenting. My code is like this:\npublic function twilio_webhook() {\n $response = new VoiceResponse();\n $phone_number = $_POST['To'];\n $option_digit = $_POST['Digits'];\n $country = $_POST['ToCountry'];\n $call_status = '1';\n\n CallResponse::insert([\n 'phone_number' => $phone_number,\n 'option_digit' => $option_digit,\n 'country' => $country,\n 'call_status' => $call_status\n ]);\n\n if(array_key_exists('Digits', $_POST)) {\n switch($_POST['Digits']) {\n case 1:\n $response->say('Thank you for calling our sales department!');\n break;\n case 2:\n $response->say('Thank you for calling our account team!');\n break;\n case 3:\n $response->say('Thank you for calling our operator service team!');\n break;\n default:\n $response->say('Please try again later!');\n break;\n }\n header('Content-type: text/xml');\n }\n echo $response;\n}\n\nI am hardcoding the response here, thank you for calling sales department. This code assumes that, call has said, Press 1 for sales department. This function works well for this case. But there are several instructions in database. What if the call instruction has said, press 1 for connecting to the technical department. Got the pressed digit, but don't know what was the question...\n"
] |
[
0,
0
] |
[] |
[] |
[
"call",
"parameters",
"twilio"
] |
stackoverflow_0074521601_call_parameters_twilio.txt
|
Q:
Excel Interop - get table style colors
I work with Excel Interop and I am trying to get the row colors from a specific Table Style.
I get the name of the Table Style like this:
var TableStyle = table.TableStyle.Name.ToString();
But I have no idea how to get the colors from a known style.
A:
I solved my problem.
In case somebody has a similar issue in the future:
You can access the table style elements this way:
var tse = table.TableStyle.TableStyleElements;
double rs1 = ((TableStyleElement)tse[5]).Interior.Color;
double rs2 = ((TableStyleElement)tse[6]).Interior.Color;
where the indexes of tse correspond to this enum:
https://learn.microsoft.com/en-us/dotnet/api/microsoft.office.interop.excel.xltablestyleelementtype?view=excel-pia
But if you need to create a new style, you should copy an existing one and edit it.
https://learn.microsoft.com/en-us/previous-versions/office/developer/office-2010/hh273483(v=office.14)?redirectedfrom=MSDN
|
Excel Interop - get table style colors
|
I work with Excel Interop and I am trying to get the row colors from a specific Table Style.
I get the name of the Table Style like this:
var TableStyle = table.TableStyle.Name.ToString();
But I have no idea how to get the colors from a known style.
|
[
"I soloved my problem.\nIf somebody will have similar in the future.\nYou can access to table in that way:\nvar tse = table.TableStyle.TableStyleElements;\ndouble rs1 = ((TableStyleElement)tse[5]).Interior.Color;\ndouble rs2 = ((TableStyleElement)tse[6]).Interior.Color;\n\nwhen indexes of tse is an enum:\nhttps://learn.microsoft.com/en-us/dotnet/api/microsoft.office.interop.excel.xltablestyleelementtype?view=excel-pia\nBut If you need create new style, you should copy existing and edit.\nhttps://learn.microsoft.com/en-us/previous-versions/office/developer/office-2010/hh273483(v=office.14)?redirectedfrom=MSDN\n"
] |
[
0
] |
[] |
[] |
[
"com_interop",
"excel",
"office_interop"
] |
stackoverflow_0074657845_com_interop_excel_office_interop.txt
|
Q:
Pandas assign method error while trying to use lambda function
I am trying to change the origin column to 'Europe' if it is europe, else return the original column value as it is. I can do that using map, np.where, etc., but I need to do this with a lambda function using the assign method. I am getting the value error below. I would be thankful if someone can help me fix this error.
import pandas as pd
import seaborn as sns
df = sns.load_dataset("mpg")
df.assign(origin = lambda df: 'Europe' if df['origin']=='europe' else df['origin'] )
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
A:
With the following toy dataframe:
import pandas as pd
df = pd.DataFrame(
{"rank": [1, 2, 3, 4, 5], "origin": ["usa", "jpn", "europe", "euro", "usa"]}
)
print(df)
# Output
rank origin
0 1 usa
1 2 jpn
2 3 europe
3 4 euro
4 5 usa
Here is how you could fix your code:
df = df.assign(
origin=df.apply(
lambda x: "Europe" if x["origin"] == "europe" else x["origin"], axis=1
)
)
Or even better:
df = df.assign(origin=df["origin"].replace("europe", "Europe"))
Anyway, in both cases, you get:
print(df)
# Output
rank origin
0 1 usa
1 2 jpn
2 3 Europe
3 4 euro
4 5 usa
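If you specifically want to keep assign with a lambda, here is a vectorized variant of the same idea (my own sketch, continuing the toy dataframe above, not part of the original answer). Series.where evaluates the condition element-wise, which avoids the "truth value of a Series is ambiguous" error that a plain if raises:
df = df.assign(origin=lambda d: d["origin"].where(d["origin"] != "europe", "Europe"))

It yields the same result as the two approaches above.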
|
Pandas assign method error while trying to use lambda function
|
I am trying to change the origin column to 'Europe' if it is europe, else return the original column value as it is. I can do that using map, np.where, etc., but I need to do this with a lambda function using the assign method. I am getting the value error below. I would be thankful if someone can help me fix this error.
import pandas as pd
import seaborn as sns
df = sns.load_dataset("mpg")
df.assign(origin = lambda df: 'Europe' if df['origin']=='europe' else df['origin'] )
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all()
|
[
"With the following toy dataframe:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\"rank\": [1, 2, 3, 4, 5], \"origin\": [\"usa\", \"jpn\", \"europe\", \"euro\", \"usa\"]}\n)\n\nprint(df)\n# Output\n rank origin\n0 1 usa\n1 2 jpn\n2 3 europe\n3 4 euro\n4 5 usa\n\nHere is how you could fix your code:\ndf = df.assign(\n origin=df.apply(\n lambda x: \"Europe\" if x[\"origin\"] == \"europe\" else x[\"origin\"], axis=1\n )\n)\n\nOr even better:\ndf = df.assign(origin=df[\"origin\"].replace(\"europe\", \"Europe\"))\n\nAnyway, in both cases, you get:\nprint(df)\n# Output\n rank origin\n0 1 usa\n1 2 jpn\n2 3 Europe\n3 4 euro\n4 5 usa\n\n"
] |
[
0
] |
[] |
[] |
[
"pandas"
] |
stackoverflow_0074623047_pandas.txt
|
Q:
How do I get the args from a post or get with Python without using cgi.FieldStorage
I just read that cgi is deprecated and so cgi.FieldStorage will stop working.
I'm struggling to find the replacement for this functionality. All the searches I've tried refer to urllib or requests, both of which (AFAIK) are designed to create requests, not to respond to them.
Thanks in advance
A:
The reference to urllib is actually a bit misleading. The following might give some insight to the cgi interface from a python programmers point of view:
#!/usr/bin/python3
'''
preflight_cgi.py
check the preflight option call
'''
import sys
import os
if __name__ == "__main__":
    print("Content-Type: text/html")    # HTML is following
    print()
    i = 0
    for arg in sys.argv:
        print("argv{}: {}\n".format(i, arg))
        i += 1
    i = 0
    for line in sys.stdin:
        print("line {}: {}\n".format(i, line))
        i += 1

    print("<TITLE>CGI script output</TITLE>")
    print("<H1>This is the environment</H1>")
    for it in os.environ.items():
        print("<p>{} = {}</p>".format(it[0], it[1]))
Put that where your current cgi.FieldStorage based app is and call it via the address line of the browser.
You will see something like
[...]
CONTENT_LENGTH = 0
QUERY_STRING = par=meter&var=able
REQUEST_URI = /cgi-bin/preflight_cgi.py?par=meter&var=able
REDIRECT_STATUS = 200
SCRIPT_NAME = /cgi-bin/preflight_cgi.py
REQUEST_METHOD = GET
SERVER_PROTOCOL = HTTP/1.1
SERVER_SOFTWARE = lighttpd/1.4.53
GATEWAY_INTERFACE = CGI/1.1
REQUEST_SCHEME = http
SERVER_PORT = 80
[...]
The environment variables already give you most of what you need.
As an alternative you can also use one of the http.server classes to build the server completely in Python.
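To make that alternative concrete, here is a minimal sketch of my own (not part of the original answer; the class name Handler and the port are arbitrary) that reads GET query parameters and url-encoded POST bodies with only the standard library, no cgi module:
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # query-string args, e.g. /script?par=meter&var=able
        args = parse_qs(urlparse(self.path).query)
        self._reply(args)

    def do_POST(self):
        # form-encoded body args
        length = int(self.headers.get("Content-Length", 0))
        args = parse_qs(self.rfile.read(length).decode())
        self._reply(args)

    def _reply(self, args):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(repr(args).encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), Handler).serve_forever()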
|
How do I get the args from a post or get with Python without using cgi.FieldStorage
|
I just read that cgi is deprecated and so cgi.FieldStorage will stop working.
I'm struggling to find the replacement for this functionality. All the searches I've tried refer to urllib or requests, both of which (AFAIK) are designed to create requests, not to respond to them.
Thanks in advance
|
[
"The reference to urllib is actually a bit misleading. The following might give some insight to the cgi interface from a python programmers point of view:\n#!/usr/bin/python3\n'''\npreflight_cgi.py\ncheck the preflight option call\n'''\n\nimport sys\nimport os\n\nif __name__ == \"__main__\":\n print(\"Content-Type: text/html\") # HTML is following\n print() \n i = 0\n for arg in sys.argv:\n print(\"argv{}: {}\\n\".format(i, arg))\n i = 0\n for line in sys.stdin:\n print(\"line {}: {}\\n\".format(i, line))\n i += 1\n \n print(\"<TITLE>CGI script output</TITLE>\")\n print(\"<H1>This is the environmet</H1>\")\n for it in os.environ.items():\n print(\"<p>{} = {}</p>\".format(it[0], it[1]))\n\nPut that where your current cgi.FieldStorage based app is and call it via the address line of the browser.\nYou will see something like\n[...]\nCONTENT_LENGTH = 0\nQUERY_STRING = par=meter&var=able\nREQUEST_URI = /cgi-bin/preflight_cgi.py?par=meter&var=able\nREDIRECT_STATUS = 200\nSCRIPT_NAME = /cgi-bin/preflight_cgi.py\nREQUEST_METHOD = GET\nSERVER_PROTOCOL = HTTP/1.1\nSERVER_SOFTWARE = lighttpd/1.4.53\nGATEWAY_INTERFACE = CGI/1.1\nREQUEST_SCHEME = http\nSERVER_PORT = 80\n[...]\nThe environment variables have already most of done.\nAs an alternative you can also use one of the http.server classes to build the server completely in python.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"webserver"
] |
stackoverflow_0074225287_python_webserver.txt
|
Q:
How can I loop through an array of object and get one property element out of the two objects listed inside the array?
I have my array of objects below.
const Articles = [
{
id: 1,
article1: 1,
title1:'Trust',
name1:'Pericles',
date1:'Dec 2, 2022',
text1: 'Lorem ipsum dolor sit consectetur. minima in quae dolores quis fugit officia, quia at quam ipsum iste suscipit, eum, veniam eaque voluptas?',
},
{
id: 2,
article2: 2,
title2:'Love',
name2:'Billey',
date2:'Dec 2, 2022',
text2: 'minima in quae dolores quis fugit officia, quia at quam ipsum iste suscipit, eum, veniam eaque voluptas?',
}
]
I have a function and I am trying to select Articles.title1 through mapping
function arc(){
  Articles.map((element, index)=>{
    console.log(element.title1)
  })
}
I get "Trust" for title1 and that's all I needed, but I'm getting undefined for the other object.
Is there a way I can have just title1? Thank you.
I have also tried a for loop:
for(let i = 0; i < Articles.length; i++){
console.log(Articles[i].title1)
}
and get the same result.
A:
Use title as the key instead of title1 and title2.
And for other keys, you should do the same:
{
id: 1,
article: 1,
title:'Trust',
name:'Pericles',
date:'Dec 2, 2022',
text: 'Lorem ipsum dolor sit consectetur. minima in quae dolores quis fugit officia, quia at quam ipsum iste suscipit, eum, veniam eaque voluptas?',
}
|
How can I loop through an array of object and get one property element out of the two objects listed inside the array?
|
I have my array of objects below.
const Articles = [
{
id: 1,
article1: 1,
title1:'Trust',
name1:'Pericles',
date1:'Dec 2, 2022',
text1: 'Lorem ipsum dolor sit consectetur. minima in quae dolores quis fugit officia, quia at quam ipsum iste suscipit, eum, veniam eaque voluptas?',
},
{
id: 2,
article2: 2,
title2:'Love',
name2:'Billey',
date2:'Dec 2, 2022',
text2: 'minima in quae dolores quis fugit officia, quia at quam ipsum iste suscipit, eum, veniam eaque voluptas?',
}
]
I have a function and I am trying to select Articles.title1 through mapping
function arc(){
  Articles.map((element, index)=>{
    console.log(element.title1)
  })
}
I get "Trust" for title1 and that's all I needed, but I'm getting undefined for the other object.
Is there a way I can have just title1? Thank you.
I have also tried a for loop:
for(let i = 0; i < Articles.length; i++){
console.log(Articles[i].title1)
}
and get the same result.
|
[
"Use title as the key instead of title1 and title2.\nAnd for other keys, you should do the same:\n{\n id: 1,\n article: 1,\n title:'Trust',\n name:'Pericles',\n date:'Dec 2, 2022',\n text: 'Lorem ipsum dolor sit consectetur. minima in quae dolores quis fugit officia, quia at quam ipsum iste suscipit, eum, veniam eaque voluptas?',\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"arrays",
"javascript",
"loops",
"mapping",
"object"
] |
stackoverflow_0074665252_arrays_javascript_loops_mapping_object.txt
|
Q:
Change values in lists that are in pandas column
I have a dataset where a column contains lists of previously received tokenized words. I need to replace a couple of values in these lists.
Initial data set:
df
date text
2022-06-02     ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']
...
Required result:
df_res
date text
2022-06-02     ['municipal', 'districts', 'mikhailovka', '84', 'kamyshin', '56']
...
How easy is it to change the values of the elements in the list for all the values of the column?
A:
df = pd.DataFrame([['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']]], columns=['date', 'text'])
mapper = {'mikhailovsky': 'mikhailovka',
'kamyshinsky': 'kamyshin'}
for k, v in mapper.items():
df.text = df.text.apply(lambda x: [element.replace(k, v) for element in x])
The code above changes df from this:
date text
0 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]
1 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]
2 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]
into this:
date text
0 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]
1 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]
2 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]
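If the tokens should only be replaced on exact matches (the answer above uses str.replace, which also substitutes substrings), a single pass with a dict lookup works too; this is my own sketch reusing the df and mapper defined above:
df["text"] = df["text"].apply(lambda tokens: [mapper.get(tok, tok) for tok in tokens])

The result is the same for the sample data, since every key appears only as a whole token.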
|
Change values in lists that are in pandas column
|
I have a dataset where a column contains lists of previously received tokenized words. I need to replace a couple of values in these lists.
Initial data set:
df
date text
2022-06-02     ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']
...
Required result:
df_res
date text
2022-06-02     ['municipal', 'districts', 'mikhailovka', '84', 'kamyshin', '56']
...
How easy is it to change the values of the elements in the list for all the values of the column?
|
[
"df = pd.DataFrame([['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']], ['2022-06-02', ['municipal', 'districts', 'mikhailovsky', '84', 'kamyshinsky', '56']]], columns=['date', 'text'])\n\nmapper = {'mikhailovsky': 'mikhailovka',\n 'kamyshinsky': 'kamyshin'}\n\nfor k, v in mapper.items():\n df.text = df.text.apply(lambda x: [element.replace(k, v) for element in x])\n\nThe code above changes df from this:\n date text\n0 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]\n1 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]\n2 2022-06-02 [municipal, districts, mikhailovsky, 84, kamyshinsky, 56]\n\ninto this:\n date text\n0 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]\n1 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]\n2 2022-06-02 [municipal, districts, mikhailovka, 84, kamyshin, 56]\n\n"
] |
[
2
] |
[] |
[] |
[
"dataframe",
"list",
"pandas",
"python"
] |
stackoverflow_0074664930_dataframe_list_pandas_python.txt
|
Q:
Create a shared ViewModel with Factory
I have a DocumentsFragment with a TabLayout with 3 tabs:
TabRulesFragment,
TabProceduresFragment,
TabGuidanceFragment
In DocumentsFragment I initialize a shared viewModel, DocumentsSharedViewModel with a factory:
class DocumentsFragment : Fragment() {
private lateinit var sharedViewModel: DocumentsSharedViewModel
private lateinit var viewPager2: ViewPager2
private lateinit var documentsCollectionAdapter: DocumentsCollectionAdapter
override fun onCreateView(
inflater: LayoutInflater,
container: ViewGroup?,
savedInstanceState: Bundle?
): View? {
val program = DocumentsFragmentArgs.fromBundle(requireArguments()).program
val name = DocumentsFragmentArgs.fromBundle(requireArguments()).name
val viewModelFactory = DocumentsSharedViewModelFactory(program, name)
sharedViewModel = ViewModelProvider(this, viewModelFactory)[DocumentsSharedViewModel::class.java]
to share data between the documents fragment and the 3 tab fragments. When I try to connect to the shared viewModel in one of the tab fragments (TabRulesFragment for example):
class TabRulesFragment : Fragment() {
private lateinit var tabRulesRecyclerView: RecyclerView
override fun onCreateView(
inflater: LayoutInflater, container: ViewGroup?,
savedInstanceState: Bundle?
): View {
val sharedViewModel : DocumentsSharedViewModel by viewModels()
val binding = TabRulesFragmentBinding.inflate(layoutInflater)
binding.viewModel = sharedViewModel
I get an error that I can't create an instance of the DocumentsSharedViewModel:
java.lang.RuntimeException: Cannot create an instance of class com.smellydogcoding.westvirginiaelectronicfieldguide.ui.documents.DocumentsSharedViewModel
at androidx.lifecycle.ViewModelProvider$NewInstanceFactory.create(ViewModelProvider.kt:188)
at androidx.lifecycle.ViewModelProvider$AndroidViewModelFactory.create(ViewModelProvider.kt:238)
at androidx.lifecycle.SavedStateViewModelFactory.create(SavedStateViewModelFactory.java:112)
at androidx.lifecycle.ViewModelProvider.get(ViewModelProvider.kt:169)
at androidx.lifecycle.ViewModelProvider.get(ViewModelProvider.kt:139)
at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:44)
at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:31)
at com.smellydogcoding.westvirginiaelectronicfieldguide.ui.documents.rulesTab.TabRulesFragment.onCreateView$lambda-0(TabRulesFragment.kt:30)
at com.smellydogcoding.westvirginiaelectronicfieldguide.ui.documents.rulesTab.TabRulesFragment.onCreateView(TabRulesFragment.kt:32)
I'm assuming that ViewModelProvider is looking for the factory (which doesn't exist in TabRulesFragment because it's in DocumentsFragment) and throwing an error when it doesn't find it. Is there any way to use data from a shared view model without creating another instance of it?
A:
If you want to make a ViewModel scoped to the owning Activity that you can share between fragments, you can use the following to get it in both fragments.
val sharedModel: DocumentsSharedViewModel by activityViewModels()
according to the docs, which has this simple example, where both fragments can access the same ViewModel
class ListFragment : Fragment() {
// Use the 'by activityViewModels()' Kotlin property delegate
// from the fragment-ktx artifact
private val model: SharedViewModel by activityViewModels()
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
//...
}
}
class DetailFragment : Fragment() {
// Use the 'by activityViewModels()' Kotlin property delegate
// from the fragment-ktx artifact
private val model: SharedViewModel by activityViewModels()
override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
super.onViewCreated(view, savedInstanceState)
//...
}
}
As pointed out in the comments, if you wanted the ViewModel to remain scoped to the parent fragment instead of using the activity scope you could use this instead to access it in the child fragment
val sharedModel: DocumentsSharedViewModel by viewModels({ requireParentFragment() })
A:
If you have a ViewModel Factory you have to add this code to both Fragments.
val sharedModel: DocumentsSharedViewModel by activityViewModels{DocumentsSharedViewModel.Factory}
|
Create a shared ViewModel with Factory
|
I have a DocumentsFragment with a TabLayout with 3 tabs:
TabRulesFragment,
TabProceduresFragment,
TabGuidanceFragment
In DocumentsFragment I initialize a shared viewModel, DocumentsSharedViewModel with a factory:
class DocumentsFragment : Fragment() {
private lateinit var sharedViewModel: DocumentsSharedViewModel
private lateinit var viewPager2: ViewPager2
private lateinit var documentsCollectionAdapter: DocumentsCollectionAdapter
override fun onCreateView(
inflater: LayoutInflater,
container: ViewGroup?,
savedInstanceState: Bundle?
): View? {
val program = DocumentsFragmentArgs.fromBundle(requireArguments()).program
val name = DocumentsFragmentArgs.fromBundle(requireArguments()).name
val viewModelFactory = DocumentsSharedViewModelFactory(program, name)
sharedViewModel = ViewModelProvider(this, viewModelFactory)[DocumentsSharedViewModel::class.java]
to share data between the documents fragment and the 3 tab fragments. When I try to connect to the shared viewModel in one of the tab fragments (TabRulesFragment for example):
class TabRulesFragment : Fragment() {
private lateinit var tabRulesRecyclerView: RecyclerView
override fun onCreateView(
inflater: LayoutInflater, container: ViewGroup?,
savedInstanceState: Bundle?
): View {
val sharedViewModel : DocumentsSharedViewModel by viewModels()
val binding = TabRulesFragmentBinding.inflate(layoutInflater)
binding.viewModel = sharedViewModel
I get an error that I can't create an instance of the DocumentsSharedViewModel:
java.lang.RuntimeException: Cannot create an instance of class com.smellydogcoding.westvirginiaelectronicfieldguide.ui.documents.DocumentsSharedViewModel
at androidx.lifecycle.ViewModelProvider$NewInstanceFactory.create(ViewModelProvider.kt:188)
at androidx.lifecycle.ViewModelProvider$AndroidViewModelFactory.create(ViewModelProvider.kt:238)
at androidx.lifecycle.SavedStateViewModelFactory.create(SavedStateViewModelFactory.java:112)
at androidx.lifecycle.ViewModelProvider.get(ViewModelProvider.kt:169)
at androidx.lifecycle.ViewModelProvider.get(ViewModelProvider.kt:139)
at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:44)
at androidx.lifecycle.ViewModelLazy.getValue(ViewModelLazy.kt:31)
at com.smellydogcoding.westvirginiaelectronicfieldguide.ui.documents.rulesTab.TabRulesFragment.onCreateView$lambda-0(TabRulesFragment.kt:30)
at com.smellydogcoding.westvirginiaelectronicfieldguide.ui.documents.rulesTab.TabRulesFragment.onCreateView(TabRulesFragment.kt:32)
I'm assuming that ViewModelProvider is looking for the factory (which doesn't exist in TabRulesFragment because it's in DocumentsFragment) and throwing an error when it doesn't find it. Is there any way to use data from a shared view model without creating another instance of it?
|
[
"If you want to make a ViewModel scoped to the owning Activity that you can share between fragments, you can use the following to get it in both fragments.\nval sharedModel: DocumentsSharedViewModel by activityViewModels()\n\naccording to the docs, which has this simple example, where both fragments can access the same ViewModel\nclass ListFragment : Fragment() {\n\n // Use the 'by activityViewModels()' Kotlin property delegate\n // from the fragment-ktx artifact\n private val model: SharedViewModel by activityViewModels()\n\n override fun onViewCreated(view: View, savedInstanceState: Bundle?) {\n super.onViewCreated(view, savedInstanceState)\n //...\n }\n}\n\nclass DetailFragment : Fragment() {\n\n // Use the 'by activityViewModels()' Kotlin property delegate\n // from the fragment-ktx artifact\n private val model: SharedViewModel by activityViewModels()\n\n override fun onViewCreated(view: View, savedInstanceState: Bundle?) {\n super.onViewCreated(view, savedInstanceState)\n //...\n }\n}\n\nAs pointed out in the comments, if you wanted the ViewModel to remain scoped to the parent fragment instead of using the activity scope you could use this instead to access it in the child fragment\nval sharedModel: DocumentsSharedViewModel by viewModels({ requireParentFragment() })\n\n",
"If you have a ViewModel Factory you have to add this code to both Fragments.\nval sharedModel: DocumentsSharedViewModel by activityViewModels{DocumentsSharedViewModel.Factory}\n\n"
] |
[
3,
0
] |
[] |
[] |
[
"android",
"android_viewmodel"
] |
stackoverflow_0070855873_android_android_viewmodel.txt
|
Q:
How to fix the following procedures so that the parameters act as in out mode parameters?
Please solve this in Ada:
--initialize first array (My_Array) with random binary values
procedure Init_Array (Arr : BINARY_ARRAY) is
package Random_Bit is new Ada.Numerics.Discrete_Random (BINARY_NUMBER);
use Random_Bit;
G : Generator;
begin
Reset (G);
for Index in 1..16 loop
Arr(Index) := Random(G);
end loop;
end Init_Array;
--reverse binary array
procedure Reverse_Bin_Arr (Arr : BINARY_ARRAY) is
hold : BINARY_ARRAY := Arr;
begin
for Index in 1..16 loop
Arr(15 - Index) := hold(Index);
end loop;
end Reverse_Bin_Arr;
--initialize first array (My_Array) with random binary values
procedure Init_Array (Arr: in out BINARY_ARRAY);
--reverse binary array
procedure Reverse_Bin_Arr (Arr : in out BINARY_ARRAY);
I believe my above procedures are correct. I just keep getting the following error:
assgn.adb:7:15: not fully conformant with declaration at assgn.ads:7
assgn.adb:7:15: mode of "Arr" does not match
assgn.adb:19:15: not fully conformant with declaration at assgn.ads:10
assgn.adb:19:15: mode of "Arr" does not match
gnatmake: "assgn.adb" compilation error
The first procedure should initialize a random array that houses each bit of a binary number. Example: [1,0,1,0,1,1,1,1,0,0,0,1,1,0,1,0]
The second procedure should reverse the bits in the binary array.
A:
The error not fully conformant with declaration means exactly what it says; there's a mismatch between the spec and the implementation
The specification says (notice the in out parameter mode):
procedure Init_Array (Arr: in out BINARY_ARRAY);
but the implementation defaults to in:
 procedure Init_Array (Arr : BINARY_ARRAY) is
Change the parameter mode to in out in both procedure bodies so that they match the specification, and the conformance errors go away.
|
How to fix the following procedures so that the parameters act as in out mode parameters?
|
Please solve this in ADA:
--initialize first array (My_Array) with random binary values
procedure Init_Array (Arr : BINARY_ARRAY) is
package Random_Bit is new Ada.Numerics.Discrete_Random (BINARY_NUMBER);
use Random_Bit;
G : Generator;
begin
Reset (G);
for Index in 1..16 loop
Arr(Index) := Random(G);
end loop;
end Init_Array;
--reverse binary array
procedure Reverse_Bin_Arr (Arr : BINARY_ARRAY) is
hold : BINARY_ARRAY := Arr;
begin
for Index in 1..16 loop
Arr(15 - Index) := hold(Index);
end loop;
end Reverse_Bin_Arr;
--initialize first array (My_Array) with random binary values
procedure Init_Array (Arr: in out BINARY_ARRAY);
--reverse binary array
procedure Reverse_Bin_Arr (Arr : in out BINARY_ARRAY);
I believe my above procedures are correct. I just keep getting the following error:
assgn.adb:7:15: not fully conformant with declaration at assgn.ads:7
assgn.adb:7:15: mode of "Arr" does not match
assgn.adb:19:15: not fully conformant with declaration at assgn.ads:10
assgn.adb:19:15: mode of "Arr" does not match
gnatmake: "assgn.adb" compilation error
The first procedure should initialize a random array that houses each bit of a binary number. Example: [1,0,1,0,1,1,1,1,0,0,0,1,1,0,1,0]
The second procedure should reverse the bits in the binary array.
|
[
"The error not fully conformant with declaration means exactly what it says; there's a mismatch between the spec and the implementation\nThe specification says (notice the in out parameter mode):\n procedure Init_Array (Arr: in out BINARY_ARRAY);\n\nbut the implementation defaults to in:\n procedure Init_Array (Arr : BINARY_ARRAY) is\n\n"
] |
[
3
] |
[] |
[] |
[
"ada",
"arrays",
"binary"
] |
stackoverflow_0074663288_ada_arrays_binary.txt
|
Q:
Why is ford-fulkerson so ubiquitous?
When looking at max flow solutions, Ford-Fulkerson seems to be ubiquitous in that it is the algorithm most people implement to solve this problem. However, there are many more algorithms that can solve the problem, and at significantly better time complexity. So why is Ford-Fulkerson still so widely used?
A:
Ford–Fulkerson is the simplest algorithm that embodies the key idea that a flow is maximum if and only if it has no augmenting path. This makes it useful for teaching.
Since F–F doesn't specify how to find an augmenting path, it's more of a framework than an algorithm. Edmonds–Karp is an instantiation of F–F that bounds the number of augmenting paths that must be found. Dinic's algorithm improves on Edmonds–Karp by keeping a data structure that allows it to find augmenting paths more efficiently. (Off the beaten path a bit, the O(n log n)-time algorithm for s-t flow in planar networks due to Borradaile and Klein is also an F–F instantiation.)
The push-relabel algorithms take the idea behind Dinic's algorithm one step further, but they break out of the F–F mold in using preflows, not flows (preflows allow more flow to enter a vertex than leaves, but not vice versa). Historically they followed Dinic's algorithm and make more intuitive sense as a reaction to Dinic.
The other algorithms on that list are complicated and not suitable for undergraduate instruction, which explains the lack of tutorial material.
A:
The Ford-Fulkerson algorithm gives the fundamental paradigm for solving a maximum flow problem. Suppose a graph, consisting of vertices and edges, has a source and a sink defined, along with a capacity defined for each edge. The Ford-Fulkerson algorithm directs us to find an augmenting path (a path whose capacity has not reached its limit), send as much flow as we can through that path, and update the capacities accordingly.
The Ford-Fulkerson algorithm, however, doesn't exactly specify how to find these augmenting paths; that is where most of the current research in this field is done. Classic Ford-Fulkerson with simple graph-search algorithms gives a time complexity of O(n^2). There have been several attempts, from Dinitz to other researchers such as Spielman and Teng, that have managed to bring the complexity down to around O(n^1.43). The primary challenge these days is to bring the time to nearly linear.
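As a concrete illustration (not part of either answer above), here is a minimal Edmonds-Karp sketch in Python, i.e. Ford-Fulkerson where each augmenting path is found with BFS; the adjacency-matrix representation and the function name are assumptions made for the example:
from collections import deque

def max_flow(capacity, source, sink):
    # Edmonds-Karp: repeatedly find a shortest augmenting path with BFS and
    # push the bottleneck flow along it; stop when no augmenting path exists.
    n = len(capacity)
    residual = [row[:] for row in capacity]   # residual capacities
    total = 0
    while True:
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:                # no augmenting path: flow is maximum
            return total
        # bottleneck capacity along the path found by BFS
        bottleneck, v = float("inf"), sink
        while v != source:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # update forward and reverse residual edges along the path
        v = sink
        while v != source:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck
Dinic's algorithm keeps this outer loop but replaces the single BFS path with blocking flows over a level graph, which is where its speedup comes from.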
|
Why is ford-fulkerson so ubiquitous?
|
When looking at max flow solutions, ford-fulkerson seems to be ubiquitous in that it is the algorithm most people implement to solve this problem. However there are many more algorithms that can solve the problems and at a significantly better time complexity. So why is ford-fulkerson still so widely used then?
|
[
"Ford–Fulkerson is the simplest algorithm that embodies the key idea that a flow is maximum if and only if it has no augmenting path. This makes it useful for teaching.\nSince F–F doesn't specify how to find an augmenting path, it's more of a framework than an algorithm. Edmonds–Karp is an instantiation of F–F that bounds the number of augmenting paths that must be found. Dinic's algorithm improves on Edmonds–Karp by keeping a data structure that allows it to find augmenting paths more efficiently. (Off the beaten path a bit, the O(n log n)-time algorithm for s-t flow in planar networks due to Borradaile and Klein is also an F–F instantiation.)\nThe push-relabel algorithms take the idea behind Dinic's algorithm one step further, but they break out of the F–F mold in using preflows, not flows (preflows allow more flow to enter a vertex than leaves, but not vice versa). Historically they followed Dinic's algorithm and make more intuitive sense as a reaction to Dinic.\nThe other algorithms on that list are complicated and not suitable for undergraduate instruction, which explains the lack of tutorial material.\n",
"Ford-Fulkerson algorithm gives the fundamental paradigm to solve a maximum flow problem. Suppose a graph, consisting of Vs and Es, has a source and sink defined. Along with this there is a capacity defined for each E. The ford fulkerson algorithm directs us to find an augmenting path (a path whose capacity has not reached its limit), and to send as much flow as we can through that path, and accordingly update the capacities.\nFord-Fulkerson algorithm however, doesn't exactly specify how to find these augmenting paths. This is the primary research that is done today in this field. The classic ford fulkerson with simple graph searching algorithms give the time complexity of O(n^2). There have been several attempts from Dinitz, to many other CS experts such as Spielman and Teng, who have managed to bring the complexity down to a level of O(n^1.43). The primary challenge these days is to bring the time to nearly linear.\n"
] |
[
2,
0
] |
[] |
[] |
[
"algorithm",
"ford_fulkerson",
"max_flow",
"time_complexity"
] |
stackoverflow_0063587383_algorithm_ford_fulkerson_max_flow_time_complexity.txt
|
Q:
How to wrap JS library which creates elements by itself into Vue component with slot?
I have a vanilla JS library which is given a root element and a callback function (data: any) => HTMLElement. The library calls the callback and positions the returned elements within the root one.
I want to wrap this library into Vue component with slot to use it like this:
<my-component v-slot='data'>
<div>{{data.field}}</div>
</my-component>
Slots returned by useSlots are functions which, when called, return virtual DOM. How can I turn this into a real DOM element while retaining reactivity?
A:
It sounds like you want to create a custom Vue component that uses a vanilla JavaScript library to create elements within the component. To do this, you can create a custom Vue component using the Vue.component() method, and then use the library within the render function of the component.
Here is an example of how this might work:
Vue.component('my-component', {
render: function (createElement) {
// Use the createElement function to create a root element for the component
let root = createElement('div');
// Use the vanilla JS library to create elements within the root element
let elements = myLibrary(root, (data) => {
return createElement('div', data.field);
});
// Return the root element with the generated elements as children
return root;
}
});
The above code will create a custom Vue component called my-component that uses the myLibrary function to generate elements within the component. You can then use the component in your Vue app like this:
<my-component>
<!-- Use the slot to provide a template for the generated elements -->
<template v-slot="data">
<div>{{data.field}}</div>
</template>
</my-component>
This will render the elements generated by the myLibrary function using the provided template. The generated elements will be reactive, so any changes to the data will be reflected in the rendered elements.
|
How to wrap JS library which creates elements by itself into Vue component with slot?
|
I have vanilla JS library which is given root element and callback function (data: any) => HTMLElement. Library calls callback and positions elements within root one.
I want to wrap this library into Vue component with slot to use it like this:
<my-component v-slot='data'>
<div>{{data.field}}</div>
</my-component>
Slots returned by useSlots are functions which, being called, return virtual DOM. How can I turn this into real DOM element retaining reactivity?
|
[
"It sounds like you want to create a custom Vue component that uses a vanilla JavaScript library to create elements within the component. To do this, you can create a custom Vue component using the Vue.component() method, and then use the library within the render function of the component.\nHere is an example of how this might work:\nVue.component('my-component', {\n render: function (createElement) {\n // Use the createElement function to create a root element for the component\n let root = createElement('div');\n\n // Use the vanilla JS library to create elements within the root element\n let elements = myLibrary(root, (data) => {\n return createElement('div', data.field);\n });\n\n // Return the root element with the generated elements as children\n return root;\n }\n});\n\nThe above code will create a custom Vue component called my-component that uses the myLibrary function to generate elements within the component. You can then use the component in your Vue app like this:\n<my-component>\n <!-- Use the slot to provide a template for the generated elements -->\n <template v-slot=\"data\">\n <div>{{data.field}}</div>\n </template>\n</my-component>\n\nThis will render the elements generated by the myLibrary function using the provided template. The generated elements will be reactive, so any changes to the data will be reflected in the rendered elements.\n"
] |
[
0
] |
[] |
[] |
[
"javascript",
"vue.js"
] |
stackoverflow_0074665286_javascript_vue.js.txt
|
Q:
Neo4j, Graph Data Science Python Library Scaling Functions
Does anyone know whether the scaling functions mentioned here https://neo4j.com/docs/graph-data-science/current/alpha-algorithms/scale-properties/ exist within the Python library? If so, how can I call them?
A:
All the functions should be available through the GDS Python client.
The following code should work:
G, metadata = gds.graph.project(
'myGraph',
'Hotel',
'*',
    {'nodeProperties': ['avgReview', 'buildYear', 'storyCapacity']}
)
gds.alpha.scaleProperties.stream(G, {
'nodeProperties': ['buildYear', 'avgReview'],
'scaler': 'MinMax'
})
|
Neo4j, Graph Data Science Python Library Scaling Functions
|
Does anyone know whether the scaling functions mentioned here https://neo4j.com/docs/graph-data-science/current/alpha-algorithms/scale-properties/ exist within the python library, if so how can I call them?
|
[
"All the functions should be available through GDS python client.\nThe following code should work:\nG, metadata = gds.graph.project(\n 'myGraph',\n 'Hotel',\n '*',\n { nodeProperties: ['avgReview', 'buildYear', 'storyCapacity'] }\n)\n\n\ngds.alpha.scaleProperties.stream(G, {\n 'nodeProperties': ['buildYear', 'avgReview'],\n 'scaler': 'MinMax'\n})\n\n"
] |
[
0
] |
[] |
[] |
[
"graph_data_science",
"neo4j",
"neo4j_python_driver"
] |
stackoverflow_0074661896_graph_data_science_neo4j_neo4j_python_driver.txt
|
Q:
Resolve warning "A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy"?
When I import SciPy or a library dependent on it, I receive the following warning message:
UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.1
It's true that I am running NumPy version 1.23.1, however this message is a mystery to me since I am running SciPy version 1.7.3, which, according to SciPy's documentation, is compatible with NumPy <1.24.0.
Anyone having this problem or know how to resolve it?
I am using Conda as an environment manager, and all my packages are up to date as far as I know.
python: 3.9.12
numpy: 1.23.1
scipy: 1.7.3
Thanks in advance if anyone has any clues !
A:
I have the same issue.
The scipy 1.7.3 docs specify
1.16.5 <= numpy < 1.24.0, while in the scipy 1.7.3 code, setup.py and __init__.py have np_maxversion = '1.23.0'.
As I rely on the conda channel defaults to set up Intel MKL libraries for numpy and scipy, I decided to pin "numpy>=1.22.3,<1.23.0" until a newer scipy is released on conda channel defaults:
conda create -n myenv python "numpy>=1.22.3,<1.23.0" scipy
A:
According to the setup.py file of the scipy 1.7.3, numpy is indeed <1.23.0. As @Libra said, the docs must be incorrect. You can:
Ignore this warning
Use scipy 1.8
Use numpy < 1.23.0
Edit:
This is now fixed in the dev docs of scipy https://scipy.github.io/devdocs/dev/toolchain.html
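If you go with the "ignore this warning" option above, here is a minimal sketch for silencing just that message with Python's standard warnings module (the message filter text is an assumption matched against the warning quoted in the question):
import warnings
# Install the filter before importing scipy, since the warning is raised at import time.
warnings.filterwarnings(
    "ignore",
    message="A NumPy version >=",
    category=UserWarning,
)
import scipy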
A:
Since "UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required", you can update the numpy version with the specified range to remove the warning.
According to syntax guidelines of conda and pip, updating your numpy version by
conda install "numpy>=1.16.5,<1.23.0"
or
pip install "numpy>=1.16.5,<1.23.0"
inside your environment will work.
Your numpy will be overwritten by the best-match version (1.22.4) in the specified range. You can double-check the new numpy version by:
conda list numpy
or
pip show numpy
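You can also confirm the installed versions from inside Python itself:
import numpy
import scipy
# SciPy 1.7.3 expects numpy < 1.23.0, so numpy should report 1.22.x after the update.
print("numpy:", numpy.__version__)
print("scipy:", scipy.__version__)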
|
Resolve warning "A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy"?
|
When I import SciPy or a library dependent on it, I receive the following warning message:
UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required for this version of SciPy (detected version 1.23.1
It's true that I am running NumPy version 1.23.1, however this message is a mystery to me since I am running SciPy version 1.7.3, which, according to SciPy's documentation, is compatible with NumPy <1.24.0.
Anyone having this problem or know how to resolve it?
I am using Conda as an environment manager, and all my packages are up to date as far as I know.
python: 3.9.12
numpy: 1.23.1
scipy: 1.7.3
Thanks in advance if anyone has any clues !
|
[
"I have the same issue.\nThe scipy 1.7.3 docs specifies\n1.16.5 <= numpy <1.24.0 while in scipy 1.7.3 code setup.py and __init__.py we have np_maxversion = '1.23.0'.\nAs I rely on conda channel defaults to setup Intel MKL libraries for numpy and scipy I decided to pin \"numpy>=1.22.3,<1.23.0\" until a newer scipy is release on conda channel defaults:\nconda create -n myenv python \"numpy>=1.22.3,<1.23.0\" scipy\n\n",
"According to the setup.py file of the scipy 1.7.3, numpy is indeed <1.23.0. As @Libra said, the docs must be incorrect. You can:\n\nIgnore this warning\nUse scipy 1.8\nUse numpy < 1.23.0\n\nEdit:\nThis is now fixed in the dev docs of scipy https://scipy.github.io/devdocs/dev/toolchain.html\n",
"Since \"UserWarning: A NumPy version >=1.16.5 and <1.23.0 is required\", you can update the numpy version with the specified range to remove the warning.\nAccording to syntax guidelines of conda and pip, updating your numpy version by\nconda install \"numpy>=1.16.5,<1.23.0\"\nor\npip install \"numpy>=1.16.5,<1.23.0\"\ninside your environment will work.\nYour numpy will be overwritten by the best-match version (1.22.4) in the specified range. You can double-check the new numpy version by:\nconda list numpy\nor\npip show numpy\n"
] |
[
7,
5,
0
] |
[] |
[] |
[
"conda",
"numpy",
"python",
"scipy"
] |
stackoverflow_0073072257_conda_numpy_python_scipy.txt
|
Q:
Can't connect PHP and MySQL: Warning: mysqli::__construct(): (HY000/1045): Access denied for user 'root'@'localhost' (using password: YES)
Why am I getting this error message and cannot connect to the database even though I put the correct username and password?
Here is my sample code:
<?php
$host = "localhost";
$username = "root";
$password = "root";
$database = "student_db";
$con = new mysqli($host,$username,$password,$database);
if($con->connect_error){
echo "not connected";
}else{
echo "Connected";
}
By entering the credentials above, I can log into phpMyAdmin but cannot connect php and mysql.
A:
This error message indicates that the specified username and password are not correct or do not have access to the database. This could be due to a few different reasons, such as:
The username and password entered in the PHP code do not match the
actual username and password for the database.
The user specified in the PHP code does not have access to the
database. This could be due to incorrect permissions or access
restrictions in the database.
The database specified in the PHP code does not exist or is spelled
incorrectly.
To fix this issue, ensure that the username, password, and database are correct and that the specified user has access to the database. You can also try using a different username and password if necessary.
|
Can't connect PHP and MySQL: Warning: mysqli::__construct(): (HY000/1045): Access denied for user 'root'@'localhost' (using password: YES)
|
Why am I getting this error message and cannot connect to the database even though I put the correct username and password?
Here is my sample code:
<?php
$host = "localhost";
$username = "root";
$password = "root";
$database = "student_db";
$con = new mysqli($host,$username,$password,$database);
if($con->connect_error){
echo "not connected";
}else{
echo "Connected";
}
By entering the credentials above, I can log into phpMyAdmin but cannot connect php and mysql.
|
[
"This error message indicates that the specified username and password are not correct or do not have access to the database. This could be due to a few different reasons, such as:\n\nThe username and password entered in the PHP code do not match the\nactual username and password for the database.\nThe user specified in the PHP code does not have access to the\ndatabase. This could be due to incorrect permissions or access\nrestrictions in the database.\nThe database specified in the PHP code does not exist or is spelled\nincorrectly.\n\nTo fix this issue, ensure that the username, password, and database are correct and that the specified user has access to the database. You can also try using a different username and password if necessary.\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"php"
] |
stackoverflow_0074665047_mysql_php.txt
|
Q:
Change password with Firebase for Android
I want to implement change password functionality for my application.
I included com.google.firebase:firebase-auth:9.0.2 in my build.gradle file and so far everything has been working fine until I tried to implement change password functionality.
I found that the FirebaseUser object has a updatePassword method that takes a new password as the parameter. I could use this method and implement validation myself. However, I need the user's current password for comparing with the inputted one and I can't find a way to get that password.
I also found another method on the Firebase object that takes the old password, new password, and a handler. The problem is that I need to also include com.firebase:firebase-client-android:2.5.2+ to access this class, and when I try this method I get the following error:
Projects created at console.firebase.google.com must use the new Firebase Authentication SDKs available from firebase.google.com/docs/auth/
Feel like I'm missing something here. What's the recommended approach for implementing this? And when to use what dependency?
A:
I found a handy example of this in the Firebase docs:
Some security-sensitive actions—such as deleting an account, setting a
primary email address, and changing a password—require that the user
has recently signed in. If you perform one of these actions, and the
user signed in too long ago, the action fails and throws
FirebaseAuthRecentLoginRequiredException. When this happens,
re-authenticate the user by getting new sign-in credentials from the
user and passing the credentials to reauthenticate. For example:
FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser();
// Get auth credentials from the user for re-authentication. The example below shows
// email and password credentials but there are multiple possible providers,
// such as GoogleAuthProvider or FacebookAuthProvider.
AuthCredential credential = EmailAuthProvider
.getCredential("[email protected]", "password1234");
// Prompt the user to re-provide their sign-in credentials
user.reauthenticate(credential)
.addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if (task.isSuccessful()) {
user.updatePassword(newPass).addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if (task.isSuccessful()) {
Log.d(TAG, "Password updated");
} else {
Log.d(TAG, "Error password not updated")
}
}
});
} else {
Log.d(TAG, "Error auth failed")
}
}
});
A:
Changing a password in Firebase is a bit tricky. It's not like what we usually do for changing a password with server-side scripting and a database. To implement change-password functionality in your app, first you need to get the user's email from FirebaseAuth or prompt the user to input it, and after that prompt the user to input the old password, because you can't retrieve the user's password, as Frank van Puffelen said. After that you need to re-authenticate. Once re-authentication succeeds, you can use updatePassword(). I have added a sample below that I used for my own app. Hope it will help you.
private FirebaseUser user;
user = FirebaseAuth.getInstance().getCurrentUser();
final String email = user.getEmail();
AuthCredential credential = EmailAuthProvider.getCredential(email,oldpass);
user.reauthenticate(credential).addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if(task.isSuccessful()){
user.updatePassword(newPass).addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if(!task.isSuccessful()){
Snackbar snackbar_fail = Snackbar
.make(coordinatorLayout, "Something went wrong. Please try again later", Snackbar.LENGTH_LONG);
snackbar_fail.show();
}else {
Snackbar snackbar_su = Snackbar
.make(coordinatorLayout, "Password Successfully Modified", Snackbar.LENGTH_LONG);
snackbar_su.show();
}
}
});
}else {
Snackbar snackbar_su = Snackbar
.make(coordinatorLayout, "Authentication Failed", Snackbar.LENGTH_LONG);
snackbar_su.show();
}
}
});
}
}
A:
There is no way to retrieve the current password of a user from Firebase Authentication.
One way to allow your users to change their password is to show a dialog where they enter their current password and the new password they'd like. You then sign in (or re-authenticate) the user with the current password and call updatePassword() to update it.
A:
I googled something about resetting Firebase passwords and got to this page. It was helpful but didn't get me all the way to the finish line: I still had to Google around for five or ten minutes. So I'm back to improve the answer for VueJS users.
I see lots of code here using "FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser();" in the top line. That's a piece of the puzzle mentioned in the most popular two answers.
But I couldn't get that to work in my project, which is written in VueJS. So I had to go exploring.
What I found was another page of the Firebase documentation. It's the same page people are getting the quoted code from (I think), but with the documentation written for Web instead of Android/Java.
So check out the first link if you're here using VueJS. I think it'll be helpful. "Get the currently signed-in user" might contain the appropriate code for your project. The code I found there says:
firebase.auth().onAuthStateChanged(function(user) {
if (user) {
// User is signed in.
} else {
// No user is signed in.
}
});
That page I linked up above ("another page") brought me eventually to the "Set a user's password" part of the Web docs. Posters here correctly state that the user must have been authenticated recently to initiate a password update. Try this link for more on re-authenticating users.
"Set a user's password":
// You can set a user's password with the updatePassword method. For example:
var user = firebase.auth().currentUser;
var newPassword = getASecureRandomPassword();
user.updatePassword(newPassword).then(function() {
// Update successful.
}).catch(function(error) {
// An error happened.
});
"Re-authenticate a user"
var user = firebase.auth().currentUser;
var credential;
// Prompt the user to re-provide their sign-in credentials
user.reauthenticateWithCredential(credential).then(function() {
// User re-authenticated.
}).catch(function(error) {
// An error happened.
});
A:
The query revolves around users forgetting their passwords or wishing to reset them via an email, which can be achieved with Auth.sendPasswordResetEmail("[email protected]");
Begin by initializing:
private FirebaseAuth mAuth;
private FirebaseAuth.AuthStateListener mAuthListener;
private String DummyEmail = "[email protected]"
mAuth = FirebaseAuth.getInstance();
mAuthListener = new FirebaseAuth.AuthStateListener() {
@Override
public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {
if (firebaseAuth.getCurrentUser() == null) {
}
}
};
Somewhere else, when a user requests to update or reset their password, simply access mAuth:
private void PassResetViaEmail(){
if(mAuth != null) {
Log.w(" if Email authenticated", "Recovery Email has been sent to " + DummyEmail);
mAuth.sendPasswordResetEmail(DummyEmail);
} else {
Log.w(" error ", " bad entry ");
}
}
Now, there is no need to burden yourself querying your database to find whether the email exists or not; Firebase mAuth will handle that for you.
Is the Email authenticated? Is it active in your Authentication list? Then send a password-reset Email.
The content will look something like this
the reset link will prompt the following dialog on a new web page
Extra
If you're a bit put off by the reset template devised by Firebase, you can easily access and customize your own letter from the Firebase Console.
Authentication > Email templates > Password reset
A:
A simple approach to handle changing a password is to send a password reset email to the user.
FirebaseAuth.getInstance().sendPasswordResetEmail("[email protected]")
.addOnCompleteListener(new OnCompleteListener<Void>() {
@Override
public void onComplete(@NonNull Task<Void> task) {
if (task.isSuccessful()) {
            Toast.makeText(Activity.this, "Password Reset Email Sent!", Toast.LENGTH_LONG).show();
}
else {
Toast.makeText(Activity.this, task.getException().getLocalizedMessage(), Toast.LENGTH_LONG).show();
}
});
A:
This is a kotlin solution to the problem I am putting the method here Hope it helps
// The method takes the current user's email (currentUserEmail), the current user's old password (oldUserPassword), and the new password (newUserPassword) as parameters, and changes the user's password to newUserPassword
private fun fireBasePasswordChange(
currentUserEmail: String,
oldUserPassword: String,
newUserPassword: String
) {
// To re authenticate the user credentials getting current sign in credentials
val credential: AuthCredential =
EmailAuthProvider.getCredential(currentUserEmail, oldUserPassword)
// creating current users instance
val user: FirebaseUser? = FirebaseAuth.getInstance().currentUser
// after successful re-authentication, updatePassword will be called; otherwise a toast reports the error (makeToast is a user-defined helper that shows a toast to the user)
user?.reauthenticate(credential)?.addOnCompleteListener { task ->
when {
task.isSuccessful -> {
user.updatePassword(newUserPassword).addOnCompleteListener {
if (it.isSuccessful) {
makeToast("Password updated")
// This part is optional
// it is signing out the user from the current status once changing password is successful
// it is changing the activity and going to the sign in page while clearing the backstack so the user cant come to the current state by back pressing
FirebaseAuth.getInstance().signOut()
val i = Intent(activity, SignInActivity::class.java)
i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TASK)
startActivity(i)
(activity as Activity?)!!.overridePendingTransition(0, 0)
} else
makeToast("Error password not updated")
}
}
else -> {
makeToast("Incorrect old password")
}
}
}
}
|
Change password with Firebase for Android
|
I want to implement change password functionality for my application.
I included com.google.firebase:firebase-auth:9.0.2 in my build.gradle file and so far everything has been working fine until I tried to implement change password functionality.
I found that the FirebaseUser object has a updatePassword method that takes a new password as the parameter. I could use this method and implement validation myself. However, I need the user's current password for comparing with the inputted one and I can't find a way to get that password.
I also found another method on the Firebase object that takes the old password, new password, and a handler. The problem is that I need to also include com.firebase:firebase-client-android:2.5.2+ to access this class and when I am trying this method I'm getting to following error:
Projects created at console.firebase.google.com must use the new Firebase Authentication SDKs available from firebase.google.com/docs/auth/
Feel like I'm missing something here. What's the recommended approach for implementing this? And when to use what dependency?
|
[
"I found a handy example of this in the Firebase docs: \n\nSome security-sensitive actions—such as deleting an account, setting a\n primary email address, and changing a password—require that the user\n has recently signed in. If you perform one of these actions, and the\n user signed in too long ago, the action fails and throws\n FirebaseAuthRecentLoginRequiredException. When this happens,\n re-authenticate the user by getting new sign-in credentials from the\n user and passing the credentials to reauthenticate. For example:\n\nFirebaseUser user = FirebaseAuth.getInstance().getCurrentUser();\n\n// Get auth credentials from the user for re-authentication. The example below shows\n// email and password credentials but there are multiple possible providers,\n// such as GoogleAuthProvider or FacebookAuthProvider.\nAuthCredential credential = EmailAuthProvider\n .getCredential(\"[email protected]\", \"password1234\");\n\n// Prompt the user to re-provide their sign-in credentials\nuser.reauthenticate(credential)\n .addOnCompleteListener(new OnCompleteListener<Void>() {\n @Override\n public void onComplete(@NonNull Task<Void> task) {\n if (task.isSuccessful()) {\n user.updatePassword(newPass).addOnCompleteListener(new OnCompleteListener<Void>() {\n @Override\n public void onComplete(@NonNull Task<Void> task) {\n if (task.isSuccessful()) {\n Log.d(TAG, \"Password updated\");\n } else {\n Log.d(TAG, \"Error password not updated\")\n }\n }\n });\n } else {\n Log.d(TAG, \"Error auth failed\")\n }\n }\n });\n\n",
"Changing password in firebase is bit tricky. it's not like what we usually do for changing password in server side scripting and database. to implement change password functionality in your app, first you need to get the user's email from FirebaseAuth or prompt user to input email and after that prompt the user to input old password because you can't retrieve user's password as Frank van Puffelen said. After that you need to reauthenticate that. Once reauthentication is done, if successful, you can use updatePassword(). I have added a sample below that i used for my own app. Hope, it will help you.\nprivate FirebaseUser user;\nuser = FirebaseAuth.getInstance().getCurrentUser();\n final String email = user.getEmail();\n AuthCredential credential = EmailAuthProvider.getCredential(email,oldpass);\n\n user.reauthenticate(credential).addOnCompleteListener(new OnCompleteListener<Void>() {\n @Override\n public void onComplete(@NonNull Task<Void> task) {\n if(task.isSuccessful()){\n user.updatePassword(newPass).addOnCompleteListener(new OnCompleteListener<Void>() {\n @Override\n public void onComplete(@NonNull Task<Void> task) {\n if(!task.isSuccessful()){\n Snackbar snackbar_fail = Snackbar\n .make(coordinatorLayout, \"Something went wrong. Please try again later\", Snackbar.LENGTH_LONG);\n snackbar_fail.show();\n }else {\n Snackbar snackbar_su = Snackbar\n .make(coordinatorLayout, \"Password Successfully Modified\", Snackbar.LENGTH_LONG);\n snackbar_su.show();\n }\n }\n });\n }else {\n Snackbar snackbar_su = Snackbar\n .make(coordinatorLayout, \"Authentication Failed\", Snackbar.LENGTH_LONG);\n snackbar_su.show();\n }\n }\n });\n }\n }\n\n",
"There is no way to retrieve the current password of a user from Firebase Authentication.\nOne way to allow your users to change their password is to show a dialog where they enter their current password and the new password they'd like. You then sign in (or re-authenticate) the user with the current passwordand call updatePassword() to update it.\n",
"I googled something about resetting Firebase passwords and got to this page. It was helpful but didn't get me all the way to the finish line: I still had to Google around for five or ten minutes. So I'm back to improve the answer for VueJS users.\nI see lots of code here using \"FirebaseUser user = FirebaseAuth.getInstance().getCurrentUser();\" in the top line. That's a piece of the puzzle mentioned in the most popular two answers.\nBut I couldn't get that to work in my project, which is written in VueJS. So I had to go exploring.\nWhat I found was another page of the Firebase documentation. It's the same page people are getting the quoted code from (I think), but with the documentation written for Web instead of Android/Java.\nSo check out the first link if you're here using VueJS. I think it'll be helpful. \"Get the currently signed-in user\" might contain the appropriate code for your project. The code I found there says:\nfirebase.auth().onAuthStateChanged(function(user) {\n if (user) {\n // User is signed in.\n } else {\n // No user is signed in.\n }\n});\n\nThat page I linked up above (\"another page\") brought me eventually to the \"Set a user's password\" part of the Web docs. Posters here correctly state that the user must have been authenticated recently to initiate a password update. Try this link for more on re-authenticating users.\n\"Set a user's password\":\n// You can set a user's password with the updatePassword method. For example:\n\nvar user = firebase.auth().currentUser;\nvar newPassword = getASecureRandomPassword();\n\nuser.updatePassword(newPassword).then(function() {\n // Update successful.\n}).catch(function(error) {\n // An error happened.\n});\n\n\"Re-authenticate a user\"\nvar user = firebase.auth().currentUser;\nvar credential;\n\n// Prompt the user to re-provide their sign-in credentials\n\nuser.reauthenticateWithCredential(credential).then(function() {\n // User re-authenticated.\n}).catch(function(error) {\n // An error happened.\n});\n\n",
"Query revolves around users forgetting their passwords or wishing to reset their passwords via an email letter. Which can be attained by Auth.sendPasswordResetEmail(\"[email protected]\");\nbegin by initializing \n private FirebaseAuth mAuth;\n private FirebaseAuth.AuthStateListener mAuthListener;\n private String DummyEmail = \"[email protected]\"\n\n\n mAuth = FirebaseAuth.getInstance();\n mAuthListener = new FirebaseAuth.AuthStateListener() {\n @Override\n public void onAuthStateChanged(@NonNull FirebaseAuth firebaseAuth) {\n if (firebaseAuth.getCurrentUser() == null) {\n }\n }\n };\n\nSomewhere else when a user requests to update or reset their passwords simply access the mAuth,\n private void PassResetViaEmail(){\n if(mAuth != null) {\n Log.w(\" if Email authenticated\", \"Recovery Email has been sent to \" + DummyEmail);\n mAuth.sendPasswordResetEmail(DummyEmail);\n } else {\n Log.w(\" error \", \" bad entry \");\n }\n }\n\nNow, needless to burden yourself querying around your database to find whether the Email exits or not, Firebase mAuth will handle that for you. \nIs the Email authenticated? Is it active in your Authentication list? Then send a password-reset Email.\n\nThe content will look something like this \n\nthe reset link will prompt the following dialog on a new web page\n\nExtra\nif you're bit nerved by the reset-template \"devised\" by Firebase. You can easily access and customize your own letter from the Firebase Console. \nAuthentication > Email templates > Password reset\n\n",
"A simple approach to handle changing a password is to send a password reset email to the user.\nFirebaseAuth.getInstance().sendPasswordResetEmail(\"[email protected]\")\n .addOnCompleteListener(new OnCompleteListener<Void>() {\n @Override\n public void onComplete(@NonNull Task<Void> task) {\n if (task.isSuccessful()) {\n Toast.makeText(Activity.this, \"Password Reset Email Sent!\"), Toast.LENGTH_LONG).show();\n }\n else {\n Toast.makeText(Activity.this, task.getException().getLocalizedMessage(), Toast.LENGTH_LONG).show();\n }\n });\n\n",
"This is a kotlin solution to the problem I am putting the method here Hope it helps\n // The method takes current users email (currentUserEmail), current users old password (oldUserPassword), new users password (newUserPassword) as parameter and change the user password to newUserPassword\nprivate fun fireBasePasswordChange(\n currentUserEmail: String,\n oldUserPassword: String,\n newUserPassword: String\n) {\n// To re authenticate the user credentials getting current sign in credentials\n val credential: AuthCredential =\n EmailAuthProvider.getCredential(currentUserEmail, oldUserPassword)\n\n// creating current users instance \n val user: FirebaseUser? = FirebaseAuth.getInstance().currentUser\n\n\n// creating after successfully re authenticating update password will be called else it will provide a toast about the error ( makeToast is a user defined function here for providing a toast to the user)\n user?.reauthenticate(credential)?.addOnCompleteListener { task ->\n when {\n task.isSuccessful -> {\n user.updatePassword(newUserPassword).addOnCompleteListener {\n if (it.isSuccessful) {\n makeToast(\"Password updated\")\n \n // This part is optional\n // it is signing out the user from the current status once changing password is successful\n // it is changing the activity and going to the sign in page while clearing the backstack so the user cant come to the current state by back pressing\n \n FirebaseAuth.getInstance().signOut()\n val i = Intent(activity, SignInActivity::class.java)\n i.addFlags(Intent.FLAG_ACTIVITY_NEW_TASK or Intent.FLAG_ACTIVITY_CLEAR_TASK)\n startActivity(i)\n (activity as Activity?)!!.overridePendingTransition(0, 0)\n\n\n } else\n makeToast(\"Error password not updated\")\n }\n }\n else -> {\n makeToast(\"Incorrect old password\")\n }\n\n }\n }\n}\n\n"
] |
[
66,
22,
17,
7,
2,
1,
0
] |
[] |
[] |
[
"android",
"firebase",
"firebase_authentication"
] |
stackoverflow_0039866086_android_firebase_firebase_authentication.txt
|
Q:
How to extract number from a txt file
First my file
amtdec = open("amt.txt", "r+")
gc = open("gamecurrency.txt", "r+")
eg = gc.readline()
u = amtdec.readline()
The main code
user_balance = int(u)
egc = int(eg)
while True:
deposit_amount = int(input("Enter deposit amount: $"))
if deposit_amount<=user_balance:
entamount = deposit_amount * EXCHANGE_RATE
newgc = entamount + egc
newamt = user_balance - deposit_amount
This is what my error was:
user_balance = int(u)
ValueError: invalid literal for int() with base 10: ''
I was trying to compare an int in a file with my input.
A:
Usually an error like this should prompt you to check the formatting of your file. As some others mentioned, the first line could be empty for whatever reason. You can check for an empty file beforehand by doing the following:
test.txt contents:
(empty file)
import os
f = open("test.txt")
if os.path.getsize("test.txt") == 0:
print("Empty File")
f.close()
else:
print("Some content exists")
Output: "Empty File" (file is closed too since there is nothing to read)
Alternatively, you can read the entire file if you somehow can't access its contents (some schools do this). Using this technique will give you an idea of what you are dealing with in your file if you can't view it within your IDE:
f = open("test.txt")
for line in f:
print(line)
f.close()
But let's say that just the first line of your file is empty. There are several ways you can check if a line is empty. If line 1 is blank but any line following it has content, reading line 1 from file will equal '\n':
test.txt contents:
line 1 = '\n' (blank line), line 2 = 20.72
import os
f = open("test.txt")
if os.path.getsize("test.txt") == 0:
print("Empty File")
f.close()
else:
print("Some content exists")
reader = f.readline()
# The second condition is if you are using binary mode
if reader == '\n' or reader == b"\r\n":
print("Blank Line")
Output: "Some content exists" & "Blank line"
This is just my suggestion. As for your integer conversion, if you have a '.' in your currency amount, you will get a conversion error for trying to data cast it into an integer. However, I do not know if your currency will be rounded off to the nearest dollar or if you have any indication of change, so I will leave this to you.
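As a minimal sketch of that last point (the helper name is illustrative, not from the question): strip the line and go through float() first, so a blank line fails with a clear message and a value like 20.72 does not crash int():
def read_amount(path):
    # Read the first line, reject blank lines, and accept "20.72"-style values
    # by parsing as float before truncating to int.
    with open(path) as f:
        line = f.readline().strip()
    if not line:
        raise ValueError(f"{path} is empty or starts with a blank line")
    return int(float(line))

user_balance = read_amount("amt.txt")
egc = read_amount("gamecurrency.txt")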
Happy coding! Please up vote my answer if useful so I can participate in Stack Overflow in new ways :)
|
How to extract number from a txt file
|
First my file
amtdec = open("amt.txt", "r+")
gc = open("gamecurrency.txt", "r+")
eg = gc.readline()
u = amtdec.readline()
The main code
user_balance = int(u)
egc = int(eg)
while True:
deposit_amount = int(input("Enter deposit amount: $"))
if deposit_amount<=user_balance:
entamount = deposit_amount * EXCHANGE_RATE
newgc = entamount + egc
newamt = user_balance - deposit_amount
This is what my error was:
user_balance = int(u)
ValueError: invalid literal for int() with base 10: ''
I was trying to compare a int in a file with my input.
|
[
"Usually an error like this should turn you to check the formatting of your file. As some others mentioned, the first line could be empty for whatever reason. You can check for an empty file prior to this by doing the following:\ntest.txt contents:\n(empty file)\nimport os\n\nf = open(\"test.txt\")\n\nif os.path.getsize(\"test.txt\") == 0:\n print(\"Empty File\")\n f.close()\nelse:\n print(\"Some content exists\")\n\nOutput: \"Empty File\" (file is closed too since there is nothing to read)\nAlternatively, you can read the entire file if you somehow can't access its contents (some schools do this). Using this technique will give you an idea of what you are dealing with in your file if you can't view it within your IDE:\nf = open(\"test.txt\")\n\nfor line in f:\n print(line)\n\nf.close()\n\nBut let's say that just the first line of your file is empty. There are several ways you can check if a line is empty. If line 1 is blank but any line following it has content, reading line 1 from file will equal '\\n':\ntest.txt contents:\nline 1 = '\\n' (blank line), line 2 = 20.72\nimport os\n\nf = open(\"test.txt\")\n\nif os.path.getsize(\"test.txt\") == 0:\n print(\"Empty File\")\n f.close()\nelse:\n print(\"Some content exists\")\n\nreader = f.readline()\n\n# The second condition is if you are using binary mode\nif reader == '\\n' or reader == b\"\\r\\n\":\n print(\"Blank Line\")\n\nOutput: \"Some content exists\" & \"Blank line\"\nThis is just my suggestion. As for your integer conversion, if you have a '.' in your currency amount, you will get a conversion error for trying to data cast it into an integer. However, I do not know if your currency will be rounded off to the nearest dollar or if you have any indication of change, so I will leave this to you.\nHappy coding! Please up vote my answer if useful so I can participate in Stack Overflow in new ways :)\n"
] |
[
0
] |
[] |
[] |
[
"function",
"python",
"runtime_error",
"syntax_error"
] |
stackoverflow_0074664866_function_python_runtime_error_syntax_error.txt
|
Q:
.mongo' is not recognized as an internal or external command, operable program or batch file
I have installed MongoDB. Then when I tried to execute .mongo or mongo in the command prompt, it showed this error:
.mongo' is not recognized as an internal or external command, operable program or batch file
I'm following a tutorial, so I'm not able to move further because I got stuck here.
A:
For those that want a step-by-step guide:
You need to add Mongo's bin folder to the "Path" Environment Variable
Here's how on Windows 10:
Find Mongo's bin folder.
If you're not sure where it is, it's probably in C:\Program Files\MongoDB\Server\3.4\ (3.4 was the latest stable version at the time; this will probably be different for you).
It should look like this:
Notice this is the path to mongo.exe and mongod.exe. Adding this folder to the Path variable is telling Windows to search in this folder for executables matching your command when you run something in cmd. The search starts with the current working dir, and if it doesn't find your exe, goes on to search all the paths in Path till it finds it or it doesn't and it gives you that error you saw.
Copy the path to the bin folder. It should be C:\Program Files\MongoDB\Server\3.4\bin\ (Or whatever version you're using)
Press win, type env, Windows will suggest "Edit the System Environment Variables", click that.
On the Advanced tab, click "Environment Variables"
Highlight the "Path" variable, click "Edit":
This will bring up the "Edit environment variable" window, click "New"
This will start a new line in the list of folders:
Paste your path to the bin folder. Make sure it ends with a \ like so:
Press "OK", "OK", "OK"
Open a new cmd window to work with the updated path variable.
Now you should be able to run mongod and mongo from anywhere in a command window.
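If you want to double-check the result programmatically, here is a small sketch (purely illustrative, not from the original answer) that resolves commands the same way cmd does, by walking the folders listed in Path:
import shutil
# shutil.which searches the directories listed in the PATH environment variable,
# which is exactly what cmd does when you type a command.
for exe in ("mongod", "mongo", "mongosh"):
    print(exe, "->", shutil.which(exe) or "not found on PATH")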
A:
I think you might have forgotten to set the environment variables for Mongo’s
bin folder. Follow this, and try again: Set Environment variables for mongo db's bin folder path
A:
If you have installed the 6.0.1 version, then in place of mongo use mongod; for example, in place of mongo --version use mongod --v, and it will work fine.
I have set the path and everything; maybe the error is only in this version.
A:
Install the 6.0.1 version (or simply use latest version).
Set the path of "C:\Program Files\MongoDB\Server\6.0\bin" in system environment variables by editing "path".
Open "cmd" and type "mongod --version" in place of "mongo --version".
A:
If there is no mongo.exe in your bin file, then download the mongo shell - mongosh from here
Use mongosh instead of mongo in the command line.
Check this Answer from dododo : "mongo shell no longer ships with server binaries."
A:
Find the path to the installed MongoDB, from the drive name down to the bin folder.
C:\Program Files\MongoDB\Server\4.4\bin
Add to the user's path variable.
A:
mongo is deprecated in the new version;
it is better to use mongosh once it is installed.
A:
After spending 30 minutes on this error, I found that the environment variable was not the only issue. For me, version 6.0 was not working: I set the environment variable many times but it kept failing. Finally I downgraded to version 5.9, set its environment variable too, and ran the command, and it is working fine now.
A:
Installing an old version might work, and if you are on version 6,
use
mongod --version
instead of mongo
A:
If you have already downloaded and installed MongoDB version 6, then do the following:
Download the MongoDB Shell (mongosh) from the MongoDB download page.
Extract it and paste it as a separate folder inside the C drive.
Considering that you have already added your MongoDB path to the environment variables, now add the mongo shell's path there as well.
Create a data folder inside the C drive if you want to (optional).
Open a new cmd and you are good to go.
A:
If you have installed version >=6.0 and you are still facing this error even after configuring the environment variables, try this:
In cmd, enter the mongod command instead of mongo.
After entering the command, if you encounter the issue 'DataDirectory data/db not found', then in the C drive create a data folder, and inside the data folder create a db folder.
Now try executing the mongod command again.
Optional: install mongosh through the following URL for executing mongo commands:
https://www.mongodb.com/try/download/shell
A:
Download the MongoDB Shell
A:
In MongoDB latest version 6.0.2 there is no mongo.exe executable in bin folder. To execute commands you have to install mongosh shell. - Install from here
A:
If you are using version 6.0 and you're facing a problem like
"mongo' is not recognized as an internal or external command, operable program or batch file"
then just use mongod --help, or for the version, mongod --version.
People using Windows can try version 5.0.13, since the latest version is not working for Windows.
A:
I was getting the same error: 'mongo' is not recognized as an internal or external command.
I was connecting to the Atlas cluster with the mongo shell, version 4.4.
'mongo' is not recognized as an internal or external command,
I used the mongosh shell version instead of the 4.4 version and it worked for me.
A:
This is because in MongoDB version 6.0 the shell (mongosh) is downloaded separately, so we have to add mongosh to our MongoDB binary (bin) folder. (Screenshots from the original steps are omitted.)
First, download the MongoDB Community Server.
While installing the Community Server, copy the data directory path, then complete the installation.
Download the mongosh shell zip file from the MongoDB Shell download page and extract the zip.
After that, cut the mongosh file from the extracted files.
Go to the earlier-copied path (the data directory path) and paste mongosh inside it.
Add the data directory path to the environment variables.
Finish; now you can check with mongosh --version.
If you want it as mongo, rename mongosh inside the data directory path to mongo and run mongo --version.
A:
The "mongo" shell has been superseded by "mongosh"
Use this command mongosh --version to check mongo shell version
A:
If you are on version 6, use mongod --version instead of mongo.
It worked for me. If you installed everything and did every configuration right, wait a few minutes, especially if your laptop is running slow.
A:
Try the command below if you face an error like:
'mongo' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\Vishal Bramhankar>mongo
'mongo' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\Vishal Bramhankar>mongo --version
'mongo' is not recognized as an internal or external command,
operable program or batch file.
C:\Users\Vishal Bramhankar>mongod --version
db version v6.0.3
Build Info: {
"version": "6.0.3",
"gitVersion": "f803681c3ae19817d31958965850193de067c516",
"modules": [],
"allocator": "tcmalloc",
"environment": {
"distmod": "windows",
"distarch": "x86_64",
"target_arch": "x86_64"
}
}
C:\Users\Vishal Bramhankar>
|
.mongo' is not recognized as an internal or external command, operable program or batch file
|
I have installed mongo db. Then when i tried to execute .mongo or mongo in command prompt. It's showing this error:
.mongo' is not recognized as an internal or external command, operable program or batch file
I'm following some tutorial, So I'm not able to move further because i got stuck here.
|
[
"For those that want a step-by-step guide:\nYou need to add Mongo's bin folder to the \"Path\" Environment Variable\nHere's how on Windows 10:\n\nFind Mongo's bin folder.\n\nIf you're not sure where it is, it's probably in C:\\Program Files\\MongoDB\\Server\\3.4\\ 3.4 was the latest stable version at the time, this will be different for you probably.\nIt should look like this:\n\nNotice this is the path to mongo.exe and mongod.exe. Adding this folder to the Path variable is telling Windows to search in this folder for executables matching your command when you run something in cmd. The search starts with the current working dir, and if it doesn't find your exe, goes on to search all the paths in Path till it finds it or it doesn't and it gives you that error you saw.\n\nCopy the path to the bin folder. It should be C:\\Program Files\\MongoDB\\Server\\3.4\\bin\\ (Or whatever version you're using)\n\nPress win, type env, Windows will suggest \"Edit the System Environment Variables\", click that.\n\n\n\n\nOn the Advanced tab, click \"Environment Variables\"\n\n\n\nHighlight the \"Path\" variable, click \"Edit\":\n\n\n\nThis will bring up the \"Edit environment variable\" window, click \"New\"\n\n\n\nThis will start a new line in the list of folders:\n\n\n\nPaste your path to the bin folder. Make sure it ends with a \\ like so:\n\n\n\nPress \"OK\", \"OK\", \"OK\"\n\nOpen a new cmd window to work with the updated path variable.\n\n\nNow you should be able to run mongod and mongo from anywhere in a command window.\n",
"I think you might have forgotten setting Environment variables for Mongo’s\nbin folder.Follow this, and try again: Set Environment variables for mongo db's bin folder path\n",
"If you have installed the 6.0.1 version then in place of mongo use mongod for example in place of mongo --version use mongod --v and it will work fine.\nI have set the path and everything maybe the error is only in this version.\n",
"\nInstall the 6.0.1 version (or simply use latest version).\n\n\n\nSet the path of \"C:\\Program Files\\MongoDB\\Server\\6.0\\bin\" in system environment variables by editing \"path\".\n\n\n\n\nOpen \"cmd\" and type \"mongod --version\" in place of \"mongo --version\".\n\n\n",
"If there is no mongo.exe in your bin file, then download the mongo shell - mongosh from here\nUse mongosh instead of mongo in the command line.\nCheck this Answer from dododo : \"mongo shell no longer ships with server binaries.\"\n",
"\nFind the path to the MongoDB installed from driver name to .bin.\nC:\\Program Files\\MongoDB\\Server\\4.4\\bin\n\nAdd to the user's path variable.\n\n\n\n",
"mongo is deprecated in the new version\nbetter to use mongosh once installed\n\n",
"Spending 30 minutes on this error Finally, I run this command only because the environment variable was not only an issue. For me version 6.0 not working I have set the environment variable many times but failed. finally, I downgrade the version and use 5.9, and set its environment variable too, and run this command its working fine now.\n",
"Installing an old version might work, and if you are on version 6,\nuse\n mongod --version\ninstead of mongo\n",
"If you downloaded MongoDB version 6 and installed it already. Then do the followings:\n\nDownload MongoDB shell from enter link description here\n\nExtract it and paste it as a separate folder inside C Drive\nenter image description here\n\nConsidering that you have already pasted your path inside the environment variables now paste the mongo shell that path also inside the path.enter image description here\n\ncreate a data folder inside C Drive if you want to (optional)\n\nopen a new cmd and you are good to go.\n\n\n",
"If you have installed >=6.0 version and even though after configuring environmental variables you are facing this error try this\n\nIn cmd enter mongod command instead of mongo\n\nAfter entering the command if you encounter following issue 'DataDirectory data/db not found' then In C drive create data folder and inside data folder create db folder\nNow try executing mongod command again\nOptional: install mongosh through following url for executing mongocommands\nhttps://www.mongodb.com/try/download/shell\n",
"enter image description here\nDownload the MongoDB Shell\n",
"In MongoDB latest version 6.0.2 there is no mongo.exe executable in bin folder. To execute commands you have to install mongosh shell. - Install from here\n",
"If you are using version 6.0 and your facing problem like\n\"mongo' is not recognized as an internal or external command, operable program or batch file\"\n\nThen just use mongod --help or else for version mongod --version.\nAnd people using Windows, try version 5.0.13, since the latest version is not working for Windows.\n",
"I was getting the same error 'mongo' is not recognized as an internal or external command.\nI am connecting the Atlas cluster with the mongo shell with version 4.4.\n'mongo' is not recognized as an internal or external command,\nI used mongoshell version instead of the 4.4 version and it worked for me.\n",
"It is because in mongodb version 6.0 we are downloading shell or mongosh separately so we have to add mongosh to our mongodb binary or bin folder.\nCheck the link so you can see images.\nFirst Download MongoDB community server\nDownload MongoDB community server\nWhen installing mongo community server\nCopy Data Directory or Path marked by red arrow\nthen complete installation.\nDownload monogsh shell zip file and extract zip\nMongoDB Shell Download\nAfter that cut mongosh file from extracted files,\nCut this file\nGoto earlier copied path(Data Directory Path)\npaste mongosh inside this path\npaste mongosh file\nAdd the Data Directory or path to Environment variable\nfinish now you can check by mongosh --version\nif you want it mongo then rename mongosh inside data directory path to mongo and mongo --version\n",
"\nThe \"mongo\" shell has been superseded by \"mongosh\"\n\nUse this command mongosh --version to check mongo shell version\n",
"If you are on version 6, use mongod --version instead of mongo\nit worked for me. please wait for some minutes if you are installed and did every configuration right wait some minutes if your laptop is running slow.\n",
"try below command if you face like -\n'mongo' is not recognized as an internal or external command,\noperable program or batch file.\nC:\\Users\\Vishal Bramhankar>mongo\n'mongo' is not recognized as an internal or external command,\noperable program or batch file.\n\nC:\\Users\\Vishal Bramhankar>mongo --version\n'mongo' is not recognized as an internal or external command,\noperable program or batch file.\n\nC:\\Users\\Vishal Bramhankar>mongod --version\ndb version v6.0.3\nBuild Info: {\n \"version\": \"6.0.3\",\n \"gitVersion\": \"f803681c3ae19817d31958965850193de067c516\",\n \"modules\": [],\n \"allocator\": \"tcmalloc\",\n \"environment\": {\n \"distmod\": \"windows\",\n \"distarch\": \"x86_64\",\n \"target_arch\": \"x86_64\"\n }\n}\n\nC:\\Users\\Vishal Bramhankar>\n\n"
] |
[
35,
28,
5,
4,
3,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[
"for mongoDb v5.0 and latest use mongosh instead of mongo\nenter image description here\n",
"I face same Problem When i installed mongoDB Version(6.0).\nI'm following some tutorial,\nand use this command [\"C:\\Program Files\\MongoDB\\Server\\6.0\\bin\\mongo.exe\" --version] to check the version of mongoDB.\nSo I'm not able to move further because i got stuck here.Then i find a solution\nSteps\n1.open environment variable\nenter image description here\n2.click on environment variable button\nenter image description here\n3.copy a path on c drive\nenter image description here\n4.click on edit button\nenter image description here\n5.paste a path on variable name and variable value\nenter image description here\n6.then click ok\n7.afther use this command to check version\nenter image description here\n",
"\"mongo\" or \"mongosh\" didn't work in my case. I am using version 6.0 and while trying out the commands, I found \"mongod\" did the work.\n",
"Change the Version to 5.something, It will work.\n",
"Use version 5.0 because here you will get mongo.exe in your bin folder\nBin folder\n",
"I was also getting the same error on version 6.0 after that I downgraded to version 5 , the issue resolved but make sure you have edited the environment variable and at the end of your path put '\\\n"
] |
[
-1,
-1,
-1,
-2,
-2,
-2
] |
[
"mongodb",
"reactjs"
] |
stackoverflow_0051224959_mongodb_reactjs.txt
|
Q:
This error is coming and i am not able to understand why. Error = TypeError: 'NoneType' object is not subscriptable
I am using SQL connectivity , python , tkinter and
I am trying to display the record after creating it but there is an error coming
The records are created and stored in my sql but it can't display them on tkinter
here is the code
import tkinter
import mysql.connector
from tkinter import Label
from tkinter import Entry
from tkinter import messagebox
from tkinter import *
mydb = mysql.connector.connect(
host = "localhost",
user = "root",
passwd = "tiger",
database = "system42"
)
def Create():
m=Tk()
m.geometry("1000x1000")
L1=Label(m,text="Enter Name",width=20,font="ariel")
L2=Label(m,text="Enter DOB (yyyy/mm/dd)",width=20,font="ariel")
L3=Label(m,text="Enter Class",width=20,font="ariel")
L4=Label(m,text="Enter Admission No",width=20,font="ariel")
L5=Label(m,text="Enter Address",width=20,font="ariel")
L6=Label(m,text="Enter Mobile No",width=20,font="ariel")
L7=Label(m,text="Enter Transport",width=20,font="ariel")
L1.place(x=50,y=100)
L2.place(x=50,y=150)
L3.place(x=50,y=200)
L4.place(x=50,y=250)
L5.place(x=50,y=300)
L6.place(x=50,y=350)
L7.place(x=50,y=400)
a=Entry(m)
b=Entry(m)
c=Entry(m)
d=Entry(m)
e=Entry(m)
f=Entry(m)
g=Entry(m)
a.place(x=300,y=100)
b.place(x=300,y=150)
c.place(x=300,y=200)
d.place(x=300,y=250)
e.place(x=300,y=300)
f.place(x=300,y=350)
g.place(x=300,y=400)
def Creation():
mycur=mydb.cursor()
name=a.get()
dob=b.get()
Class=c.get()
admn=d.get()
add=e.get()
mob=f.get()
tra=g.get()
query3=("insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')").format(name,dob,Class,admn,add,mob,tra)
mycur.execute(query3)
mycur.execute("commit")
q=mycur.fetchone()
L11=Label(m,text=q[0],width=20,font="ariel")
L12=Label(m,text=q[1],width=15,font="ariel")
L13=Label(m,text=q[2],width=10,font="ariel")
L14=Label(m,text=q[3],width=10,font="ariel")
L15=Label(m,text=q[4],width=30,font="ariel")
L16=Label(m,text=q[5],width=15,font="ariel")
L17=Label(m,text=q[6],width=15,font="ariel")
L11.place(x=50,y=500)
L12.place(x=50,y=550)
L13.place(x=50,y=600)
L14.place(x=50,y=650)
L15.place(x=50,y=700)
L16.place(x=50,y=750)
L17.place(x=50,y=800)
button=Button(m,text="Create",command=Creation,width=10,height=2)
button.place(x=400,y=50)
I expected that after creating it also displays the records. It is creating but not displaying
A:
You are trying to create new label widgets in your function when you should just be updating the already existing ones.
Try:
def Creation():
mycur=mydb.cursor()
name=a.get()
dob=b.get()
Class=c.get()
admn=d.get()
add=e.get()
mob=f.get()
tra=g.get()
query3=("insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')").format(name,dob,Class,admn,add,mob,tra)
mycur.execute(query3)
mycur.execute("commit")
q=mycur.fetchone()
L11.config(text=q[0])
L12.config(text=q[1])
L13.config(text=q[2])
L14.config(text=q[3])
L15.config(text=q[4])
L16.config(text=q[5])
L17.config(text=q[6])
button=Button(m,text="Create",command=Creation,width=10,height=2)
button.place(x=400,y=50)
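Note, independently of the label handling, that the TypeError in the title comes from q being None: an INSERT statement does not give back the row that was just written, so fetchone() has nothing to return. A minimal sketch of one way to re-fetch the row before indexing into it (the admission_no column name in the WHERE clause is an assumption, not taken from the question):
mycur.execute(query3)
mycur.execute("commit")
# re-select the inserted row; 'admission_no' is a placeholder for the real key column of idcard
mycur.execute("select * from idcard where admission_no = %s", (admn,))
q = mycur.fetchone()
if q is not None:
    L11.config(text=q[0])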
|
This error is coming and i am not able to understand why. Error = TypeError: 'NoneType' object is not subscriptable
|
I am using SQL connectivity , python , tkinter and
I am trying to display the record after creating it but there is an error coming
The records are created and stored in my sql but it can't display them on tkinter
here is the code
import tkinter
import mysql.connector
from tkinter import Label
from tkinter import Entry
from tkinter import messagebox
from tkinter import *
mydb = mysql.connector.connect(
host = "localhost",
user = "root",
passwd = "tiger",
database = "system42"
)
def Create():
m=Tk()
m.geometry("1000x1000")
L1=Label(m,text="Enter Name",width=20,font="ariel")
L2=Label(m,text="Enter DOB (yyyy/mm/dd)",width=20,font="ariel")
L3=Label(m,text="Enter Class",width=20,font="ariel")
L4=Label(m,text="Enter Admission No",width=20,font="ariel")
L5=Label(m,text="Enter Address",width=20,font="ariel")
L6=Label(m,text="Enter Mobile No",width=20,font="ariel")
L7=Label(m,text="Enter Transport",width=20,font="ariel")
L1.place(x=50,y=100)
L2.place(x=50,y=150)
L3.place(x=50,y=200)
L4.place(x=50,y=250)
L5.place(x=50,y=300)
L6.place(x=50,y=350)
L7.place(x=50,y=400)
a=Entry(m)
b=Entry(m)
c=Entry(m)
d=Entry(m)
e=Entry(m)
f=Entry(m)
g=Entry(m)
a.place(x=300,y=100)
b.place(x=300,y=150)
c.place(x=300,y=200)
d.place(x=300,y=250)
e.place(x=300,y=300)
f.place(x=300,y=350)
g.place(x=300,y=400)
def Creation():
mycur=mydb.cursor()
name=a.get()
dob=b.get()
Class=c.get()
admn=d.get()
add=e.get()
mob=f.get()
tra=g.get()
query3=("insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')").format(name,dob,Class,admn,add,mob,tra)
mycur.execute(query3)
mycur.execute("commit")
q=mycur.fetchone()
L11=Label(m,text=q[0],width=20,font="ariel")
L12=Label(m,text=q[1],width=15,font="ariel")
L13=Label(m,text=q[2],width=10,font="ariel")
L14=Label(m,text=q[3],width=10,font="ariel")
L15=Label(m,text=q[4],width=30,font="ariel")
L16=Label(m,text=q[5],width=15,font="ariel")
L17=Label(m,text=q[6],width=15,font="ariel")
L11.place(x=50,y=500)
L12.place(x=50,y=550)
L13.place(x=50,y=600)
L14.place(x=50,y=650)
L15.place(x=50,y=700)
L16.place(x=50,y=750)
L17.place(x=50,y=800)
button=Button(m,text="Create",command=Creation,width=10,height=2)
button.place(x=400,y=50)
I expected that after creating it also displays the records. It is creating but not displaying
|
[
"You are trying to create new label widgits in your function when you should just be updating the already existing ones.\nTry:\ndef Creation():\n mycur=mydb.cursor()\n \n name=a.get()\n dob=b.get()\n Class=c.get()\n admn=d.get()\n add=e.get()\n mob=f.get()\n tra=g.get()\n query3=(\"insert into idcard values('{}' , '{}' , '{}' , {} , '{}' , {} , '{}')\").format(name,dob,Class,admn,add,mob,tra)\n mycur.execute(query3)\n mycur.execute(\"commit\")\n q=mycur.fetchone()\n\n L11.config(text=q[0])\n L12.config(text=q[1])\n L13.config(text=q[2])\n L14.config(text=q[3])\n L15.config(text=q[4])\n L16.config(text=q[5])\n L16.config(text=q[6])\n\nbutton=Button(m,text=\"Create\",command=Creation,width=10,height=2)\nbutton.place(x=400,y=50)\n\n"
] |
[
0
] |
[] |
[] |
[
"mysql",
"python",
"tkinter"
] |
stackoverflow_0074665189_mysql_python_tkinter.txt
|
Q:
Read and Display Outlook Folder Names
I am new to VB.net and appreciate any/all help. Years ago I developed a VB6 Learning Edition app to read all the entries in all Outlook folders.
To get started, I need help establishing a VB.net connection to Outlook, and sample code to access folders. Once I get that basic structure, I think I can accomplish my objective.
My ultimate objective is to add a creation date to all non-email items.
Thanks in advance.
Marvin
I tried some sample code, but had reference issues trying to get to the outlook namespace.
The sample posts provided do not seem to address VB.NET
A:
The Outlook object model is common to all kinds of applications. You just need to add a COM reference to your VB.NET application to be able to declare Outlook types and create a new Application instance. To create a new Outlook Application by using Automation from Visual Basic .NET, follow these steps:
Add a reference to the Microsoft Outlook Object Library. To do this, follow these steps:
On the Project menu, click Add Reference.
On the COM tab, locate the Microsoft Outlook Object Library and click Select.
Click OK in the Add References dialog box to accept your selections. If you receive a prompt to generate wrappers for the libraries that you selected, click Yes.
Voila! Now you can use the Outlook object model in your code.
|
Read and Display Outlook Folder Names
|
I am new to VB.net and appreciate any/all help. Years ago I developed a VB6 Learning Edition app to read all the entries in all Outlook folders.
To get started, I need help establishing a VB.net connection to Outlook, and sample code to access folders. Once I get that basic structure, I think I can accomplish my objective.
My ultimate objective is to add a creation date to all non-email items.
Thanks in advance.
Marvin
I tried some sample code, but had reference issues trying to get to the outlook namespace.
The sample posts provided do not seem to address VB.NET
|
[
"The Outlook object model is common for all kind of applications. You just need to add a COM reference to your VB.NET application to be able to declare Outlook types and create a new Application instance. To create a new Outlook Application by using Automation from Visual Basic .NET, follow these steps:\nAdd a reference to the Microsoft Outlook Object Library. To do this, follow these steps:\n\nOn the Project menu, click Add Reference.\nOn the COM tab, locate the Microsoft Outlook Object Library and click Select.\n\nClick OK in the Add References dialog box to accept your selections. If you receive a prompt to generate wrappers for the libraries that you selected, click Yes.\nVioal! Now you can use the Outlook object model in the code.\n"
] |
[
0
] |
[] |
[] |
[
".net",
"office_automation",
"office_interop",
"outlook",
"vb.net"
] |
stackoverflow_0074662944_.net_office_automation_office_interop_outlook_vb.net.txt
|
Q:
Register class as PDO driver
Can I somehow register php class as PDO driver?
To clarify my question. PDO driver works with dsn (for example mysql:host=localhost;dbname=test) where first part is which driver to use. In this example it is mysql and therefore it uses php extension pdo_mysql. Can I register new driver from php pointing on class?
Something like this:
PDO::registerDriver('foo', \Foo::class); // I know this does not exists and I'm asking if there is some way to do this
print_r(PDO::getAvailableDrivers());
// outputs: Array( [0] => mysql, [1] => foo )
I know I can extend and override php PDO and PDOStatement classes, but that is not what I'm asking.
I was looking into php source code but I'm not familiar with C/C++. Here is a link for function PDO::getAvailableDrivers if somebody is interested.
A:
No, it is not possible to register a PHP class as a PDO driver. PDO drivers are typically implemented as extensions written in C/C++ and compiled into PHP, not as PHP classes. PDO drivers must also be registered as extensions in the PHP configuration file (php.ini) in order to be available for use with PDO. There is no built-in mechanism for registering a PHP class as a PDO driver.
|
Register class as PDO driver
|
Can I somehow register php class as PDO driver?
To clarify my question. PDO driver works with dsn (for example mysql:host=localhost;dbname=test) where first part is which driver to use. In this example it is mysql and therefore it uses php extension pdo_mysql. Can I register new driver from php pointing on class?
Something like this:
PDO::registerDriver('foo', \Foo::class); // I know this does not exists and I'm asking if there is some way to do this
print_r(PDO::getAvailableDrivers());
// outputs: Array( [0] => mysql, [1] => foo )
I know I can extend and override php PDO and PDOStatement classes, but that is not what I'm asking.
I was looking into php source code but I'm not familiar with C/C++. Here is a link for function PDO::getAvailableDrivers if somebody is interested.
|
[
"No, it is not possible to register a PHP class as a PDO driver. PDO drivers are typically implemented as extensions written in C/C++ and compiled into PHP, not as PHP classes. PDO drivers must also be registered as extensions in the PHP configuration file (php.ini) in order to be available for use with PDO. There is no built-in mechanism for registering a PHP class as a PDO driver.\n"
] |
[
1
] |
[] |
[] |
[
"pdo",
"php"
] |
stackoverflow_0074665000_pdo_php.txt
|
Q:
How to install the standard environment in a computer not only in a shell
If I install the stdenv, I install a list of commands like GNU make, sed...
source
I only need to do execute
nix-shell -p
in order to load the stdenv in a shell.
I don't need to give a package name. But I don't find in the documentation (the link I've given) where this information is.
Now I would like to install the stdenv on my computer, not only in a shell.
I've tried to execute
nix-env -iA
But it doesn't work.
nix-env -iA stdenv
doesn't work either
A:
This command installs stdenv:
 nix-env -iA nixos
If you don't use NixOS, you should try this:
nix-env -iA nixpkgs
But I still don't know why
 nix-env -iA nixos 
is equivalent to
 nix-env -iA nixos.stdenv 
And I still don't know why I don't find stdenv in the package manager.
As you can see :
https://search.nixos.org/packages?channel=22.11&from=0&size=50&sort=relevance&type=packages&query=stdenv
|
How to install the standard environment in a computer not only in a shell
|
If I install the stdenv, I install a list of commands like GNU make, sed...
source
I only need to do execute
nix-shell -p
in order to load the stdenv in a shell.
I don't need to give a package name. But I don't find in the documentation (the link I've given) where this information is.
Now I would like to install the stdenv on my computer, not only in a shell.
I've tried to execute
nix-env -iA
But it doesn't work.
nix-env -iA stdenv
doesn't work either
|
[
"This command install stdenv\n nix-env -iA nixos\n\nIf you don't use nixos. I should try this:\nnix-env -iA nixpkgs\nBut I still don't know why\n nix-env -ia nixos \n\nis equivalent to\n nix-env -iA nixos.stdenv \n\nAnd I still dont't know why I don't find stdenv in the packet manager.\nAs you can see :\nhttps://search.nixos.org/packages?channel=22.11&from=0&size=50&sort=relevance&type=packages&query=stdenv\n"
] |
[
0
] |
[] |
[] |
[
"nix"
] |
stackoverflow_0074664733_nix.txt
|
Q:
No output in console.log. Beginner Question
I'm reading the Head 1st JavaScript book and trying to learn the language as much as I can. I wanted to do all the problems from the book. After I did one of them and wrote the code the way I thought it should be, I checked the solution and changed my code to match the book's, even though my version worked. The thing is, when I want to "print" to the console now, nothing shows up and I don't know why... I don't see the problem...
Any idea why the console.log will not output anything? Thanks!
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script>
function Coffee(roast, ounces) {
this.roast = roast;
this.ounces = ounces;
this.getSize = function() {
if (this.ounces === 8) {
return "small";
} else if (this.ounces === 12) {
return "medium";
} else (this.ounces === 16); {
return "large";
}
};
this.toString = function() {
return "You have ordered a " + this.getSize() + " " + this.roast + " coffee.";
};
}
var csmall = new Coffee ("House Blend", "8");
var cmedium = new Coffee ("House Blend", "12");
var clarge = new Coffee ("Dark Roast", "16");
var coffees = [csmall, cmedium, clarge];
for (var i = 0; i < coffees.length; i++) {
coffees[i].toString();
}
</script>
</head>
<body>
</body>
</html>
This is the way i wrote the code and worked.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script>
function Coffee(roast, size, ounces) {
this.roast = roast;
this.size = size;
this.ounces = ounces;
this.getSize = function() {
if (this.ounces === 8) {
console.log("You have ordered a " + this.size + " " + this.roast + " coffee.");
} else if (this.ounces === 12) {
console.log("You have ordered a " + this.size + " " + this.roast + " coffee.");
} else (this.ounces === 16); {
console.log("You have ordered a " + this.size + " " + this.roast + " coffee.");
}
};
}
var csmall = new Coffee ("House Blend", "small", "8");
var cmedium = new Coffee ("House Blend", "medium", "12");
var clarge = new Coffee ("Dark Roast", "large", "16");
var coffees = [csmall, cmedium, clarge];
for (var i = 0; i < coffees.length; i++) {
coffees[i].getSize();
}
</script>
</head>
<body>
</body>
</html>
A:
In the above code, console.log has not been used, which is why you haven't seen anything in the console
for (var i = 0; i < coffees.length; i++) {
coffees[i].toString();
console.log(String(coffees[i])); // <-- add here
}
It would still give an incorrect result because === is used in the function, which means the type of the supplied value must also match the type expected in the function.
The function compares against integers, but the values were supplied as strings; see the code below
if (this.ounces === 8) { // used === to match type as well here it is integer
return "small";
} else if (this.ounces === 12) {
return "medium";
} else (this.ounces === 16); {
return "large";
}
};
this.toString = function() {
return "You have ordered a " + this.getSize() + " " + this.roast + " coffee.";
};
}
var csmall = new Coffee ("House Blend", "8"); // passing it as string "8" it should be just 8
var cmedium = new Coffee ("House Blend", "12");
var clarge = new Coffee ("Dark Roast", "16");
Correct Code should be like this
<script>
function Coffee(roast, ounces) {
this.roast = roast;
this.ounces = ounces;
this.getSize = function() {
if (this.ounces === 8) {
return "small";
} else if (this.ounces === 12) {
return "medium";
} else if (this.ounces === 16) {
return "large";
}
};
this.toString = function() {
return "You have ordered a " + this.getSize() + " " + this.roast + " coffee.";
};
}
var csmall = new Coffee ("House Blend", 8);
var cmedium = new Coffee ("House Blend", 12);
var clarge = new Coffee ("Dark Roast", 16);
var coffees = [csmall, cmedium, clarge];
for (var i = 0; i < coffees.length; i++) {
coffees[i].toString();
console.log(String(coffees[i]));
}
</script>
But in the second code, which you wrote yourself...
I believe the real point of the exercise was to use logic to convert ounces into a size in the getSize function, whereas you used the sizes directly in your solution.
|
No output in console.log. Beginner Question
|
I'm reading the Head 1st JavaScript book and trying to learn the language as much as I can. I wanted to do all the problems from the book. After I did one of them and wrote the code the way I thought it should be, I checked the solution and changed my code to match the book's, even though my version worked. The thing is, when I want to "print" to the console now, nothing shows up and I don't know why... I don't see the problem...
Any idea why the console.log will not output anything? Thanks!
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script>
function Coffee(roast, ounces) {
this.roast = roast;
this.ounces = ounces;
this.getSize = function() {
if (this.ounces === 8) {
return "small";
} else if (this.ounces === 12) {
return "medium";
} else (this.ounces === 16); {
return "large";
}
};
this.toString = function() {
return "You have ordered a " + this.getSize() + " " + this.roast + " coffee.";
};
}
var csmall = new Coffee ("House Blend", "8");
var cmedium = new Coffee ("House Blend", "12");
var clarge = new Coffee ("Dark Roast", "16");
var coffees = [csmall, cmedium, clarge];
for (var i = 0; i < coffees.length; i++) {
coffees[i].toString();
}
</script>
</head>
<body>
</body>
</html>
This is the way i wrote the code and worked.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Document</title>
<script>
function Coffee(roast, size, ounces) {
this.roast = roast;
this.size = size;
this.ounces = ounces;
this.getSize = function() {
if (this.ounces === 8) {
console.log("You have ordered a " + this.size + " " + this.roast + " coffee.");
} else if (this.ounces === 12) {
console.log("You have ordered a " + this.size + " " + this.roast + " coffee.");
} else (this.ounces === 16); {
console.log("You have ordered a " + this.size + " " + this.roast + " coffee.");
}
};
}
var csmall = new Coffee ("House Blend", "small", "8");
var cmedium = new Coffee ("House Blend", "medium", "12");
var clarge = new Coffee ("Dark Roast", "large", "16");
var coffees = [csmall, cmedium, clarge];
for (var i = 0; i < coffees.length; i++) {
coffees[i].getSize();
}
</script>
</head>
<body>
</body>
</html>
|
[
"In above code console log has not been used so this is why you haven't seen any thing in the console\nfor (var i = 0; i < coffees.length; i++) {\n coffees[i].toString();\n console.log(String(coffees[i])); // <-- add here\n }\n\nStill it would give incorrect result becuase it used === in the function which means the type of the value supplied will also match with type in the function\nIt used integers in function but supplied value in string see the code below\nif (this.ounces === 8) { // used === to match type as well here it is integer\n return \"small\";\n } else if (this.ounces === 12) {\n return \"medium\";\n } else (this.ounces === 16); {\n return \"large\";\n }\n };\n this.toString = function() {\n return \"You have ordered a \" + this.getSize() + \" \" + this.roast + \" coffee.\";\n };\n }\n var csmall = new Coffee (\"House Blend\", \"8\"); // passing it as string \"8\" it should be just 8\n var cmedium = new Coffee (\"House Blend\", \"12\");\n var clarge = new Coffee (\"Dark Roast\", \"16\");\n\nCorrect Code should be like this\n<script>\n function Coffee(roast, ounces) {\n this.roast = roast;\n this.ounces = ounces;\n this.getSize = function() {\n if (this.ounces === 8) {\n return \"small\";\n } else if (this.ounces === 12) {\n return \"medium\";\n } else (this.ounces === 16); {\n return \"large\";\n }\n };\n this.toString = function() {\n return \"You have ordered a \" + this.getSize() + \" \" + this.roast + \" coffee.\";\n };\n }\n var csmall = new Coffee (\"House Blend\", 8);\n var cmedium = new Coffee (\"House Blend\", 12);\n var clarge = new Coffee (\"Dark Roast\", 16);\n var coffees = [csmall, cmedium, clarge];\n\n for (var i = 0; i < coffees.length; i++) {\n coffees[i].toString();\n console.log(String(coffees[i]));\n }\n</script>\n\nBut in the secon code which is done by you...\nI beleive the real problem was to use logic and convert ounces into size by getsize function where as you directly used sizes in your solution\n"
] |
[
0
] |
[] |
[] |
[
"console.log",
"javascript",
"printing"
] |
stackoverflow_0074664928_console.log_javascript_printing.txt
|
Q:
group column values with difference of 3(say) digit in python
I am new to python; the problem statement is that we have the below data as a dataframe
df = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10], 'value':[x,x,y,x,x,x,y,x,z,x,x,y,y,z]})
Diff value
1 x
1 x
2 y
3 x
4 x
4 x
5 y
6 x
7 z
7 x
8 x
9 y
9 y
10 z
we need to group the Diff column with a difference of 3 (let's say), like 0-3, 3-6, 6-9, >9, and the values should be counted
Expected output is like
Diff    x   y   z
0-3     2   1
3-6     3   1
6-9     3       1
>=9         2   1
A:
Example
The example code in the question is not runnable; anyone who wants to try it can use the following code:
df = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10],
'value':'x,x,y,x,x,x,y,x,z,x,x,y,y,z'.split(',')})
Code
labels = ['0-3', '3-6', '6-9', '>=9']
grouper = pd.cut(df['Diff'], bins=[0, 3, 6, 9, float('inf')], right=False, labels=labels)
pd.crosstab(grouper, df['value'])
output:
value x y z
Diff
0-3 2 1 0
3-6 3 1 0
6-9 3 0 1
>=9 0 2 1
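If you prefer groupby over crosstab, the same table can be produced with this sketch (it reuses the grouper defined above):
out = df.groupby(grouper)['value'].value_counts().unstack(fill_value=0)
# same table as the crosstab above, with columns x / y / z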
|
group column values with difference of 3(say) digit in python
|
I am new to python; the problem statement is that we have the below data as a dataframe
df = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10], 'value':[x,x,y,x,x,x,y,x,z,x,x,y,y,z]})
Diff value
1 x
1 x
2 y
3 x
4 x
4 x
5 y
6 x
7 z
7 x
8 x
9 y
9 y
10 z
we need to group the Diff column with a difference of 3 (let's say), like 0-3, 3-6, 6-9, >9, and the values should be counted
Expected output is like
Diff    x   y   z
0-3     2   1
3-6     3   1
6-9     3       1
>=9         2   1
|
[
"Example\nexample code is wrong. someone who want exercise, use following code\ndf = pd.DataFrame({'Diff':[1,1,2,3,4,4,5,6,7,7,8,9,9,10], \n 'value':'x,x,y,x,x,x,y,x,z,x,x,y,y,z'.split(',')})\n\nCode\nlabels = ['0-3', '3-6', '6-9', '>=9']\ngrouper = pd.cut(df['Diff'], bins=[0, 3, 6, 9, float('inf')], right=False, labels=labels)\npd.crosstab(grouper, df['value'])\n\noutput:\nvalue x y z\nDiff \n0-3 2 1 0\n3-6 3 1 0\n6-9 3 0 1\n>=9 0 2 1\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"pandas",
"python",
"python_3.x"
] |
stackoverflow_0074665214_dataframe_pandas_python_python_3.x.txt
|
Q:
How to Add Middleware in Navigation Section of Breeze Php?
I'm using Breeze for the authentication system. I've created two dashboards (admin + user), but when I log in as admin and then click on the dashboard button, it redirects to the user dashboard.
I just want a condition: if admin, then go to the admin dashboard, else go to the user dashboard.
`
<div class="shrink-0 flex items-center">
<a href="{{ route('dashboard') }}">
<x-application-logo class="block h-9 w-auto fill-current text-gray-800" />
</a>
</div>
`
if tried:
`if (auth()->user()->is_admin == 1) {
goto admin dashboard
} else {
goto user dashboard
}
`
A:
To add middleware in the navigation section of Breeze PHP, you can do the following:
In your routes file, define the route for your admin dashboard and apply the appropriate middleware to it. For example:
Route::get('admin/dashboard', 'AdminController@dashboard')->middleware('is_admin');
In your AdminController, create a method to handle the admin dashboard. For example:
public function dashboard()
{
return view('admin.dashboard');
}
In your navigation section, check whether the user is an admin to determine which dashboard to link to. For example:
<div class="shrink-0 flex items-center">
<a href="{{ auth()->user()->is_admin == 1 ? route('admin.dashboard') : route('dashboard') }}">
<x-application-logo class="block h-9 w-auto fill-current text-gray-800" />
</a>
</div>
This will redirect to the admin dashboard if the user is an admin, and to the user dashboard if they are not.
|
How to Add Middleware in Navigation Section of Breeze Php?
|
I'm using Breeze for the authentication system. I've created two dashboards (admin + user), but when I log in as admin and then click on the dashboard button, it redirects to the user dashboard.
I just want a condition: if admin, then go to the admin dashboard, else go to the user dashboard.
`
<div class="shrink-0 flex items-center">
<a href="{{ route('dashboard') }}">
<x-application-logo class="block h-9 w-auto fill-current text-gray-800" />
</a>
</div>
`
if tried:
`if (auth()->user()->is_admin == 1) {
goto admin dashboard
} else {
goto user dashboard
}
`
|
[
"To add middleware in the navigation section of Breeze PHP, you can do the following:\nIn your routes file, define the route for your admin dashboard and apply the appropriate middleware to it. For example:\nRoute::get('admin/dashboard', 'AdminController@dashboard')->middleware('is_admin');\n\nIn your AdminController, create a method to handle the admin dashboard. For example:\npublic function dashboard()\n{\nreturn view('admin.dashboard');\n}\n\nIn your navigation section, use the middleware to determine which dashboard to redirect to. For example:\n<div class=\"shrink-0 flex items-center\">\n <a href=\"{{ auth()->user()->is_admin == 1 ? route('admin.dashboard') : route('dashboard') }}\">\n <x-application-logo class=\"block h-9 w-auto fill-current text-gray-800\" />\n </a>\n</div>\n\nThis will redirect to the admin dashboard if the user is an admin, and to the user dashboard if they are not.\n"
] |
[
0
] |
[] |
[] |
[
"breeze",
"php"
] |
stackoverflow_0074664365_breeze_php.txt
|
Q:
Variable not getting defined
One of my event’s variables isn’t getting defined.
The title variable isn’t getting defined so I think I made a mistake.
var addToCartButtons = document.getElementsByClassName('item-button')
for (var i = 0; i < addToCartButtons.length; i++) {
var button = addToCartButtons[i]
button.addEventListener('click', addToCartClicked)
}
function addToCartClicked(event) {
var button = event.target
var shopItem = button.parentElement.parentElement
var title = storeitem.getElementsByClassName('product-name')[0].innerHTML
console.log('title')
}
<div class="store-item">
<span class="product-name">CPU 1</span>
<img class="cpu-image" src="Images/CPU-1.jpg">
<div class="product-details">
<span class="item-price">$229.99</span>
<button class="btn btn-primary item-button" role="button">ADD TO CART</button>
</div>
</div>
A:
It looks like there is a typo in the addToCartClicked function. The title variable is defined as storeitem.getElementsByClassName('product-name')[0].innerHTML, but it should be shopItem instead of storeitem. The shopItem variable is defined earlier in the function as button.parentElement.parentElement, so changing storeitem to shopItem should fix the issue.
Additionally, the console.log statement currently prints the string 'title' instead of the value of the title variable.
A:
Fix your code: you need shopItem, as described above by Mr. Sebastian Simon and Mr. haggbart.
var addToCartButtons = document.getElementsByClassName('item-button')
for (var i = 0; i < addToCartButtons.length; i++) {
var button = addToCartButtons[i]
button.addEventListener('click', addToCartClicked)
}
function addToCartClicked(event) {
var button = event.target
var shopItem = button.parentElement.parentElement
// error was here: the 'storeitem' variable is not defined; use shopItem instead
var title = shopItem.getElementsByClassName('product-name')[0].innerHTML
console.log('title')
console.log(title) // you mean the title variable
}
<div class="store-item">
<span class="product-name">CPU 1</span>
<img class="cpu-image" src="Images/CPU-1.jpg">
<div class="product-details">
<span class="item-price">$229.99</span>
<button class="btn btn-primary item-button" role="button">ADD TO CART</button>
</div>
</div>
A:
function addToCartClicked(event) {
var button = event.target
var shopItem = button.parentElement.parentElement
var title = shopItem.getElementsByClassName('product-name')[0].innerHTML
console.log(title)
}
|
Variable not getting defined
|
One of my event’s variables isn’t getting defined.
The title variable isn’t getting defined so I think I made a mistake.
var addToCartButtons = document.getElementsByClassName('item-button')
for (var i = 0; i < addToCartButtons.length; i++) {
var button = addToCartButtons[i]
button.addEventListener('click', addToCartClicked)
}
function addToCartClicked(event) {
var button = event.target
var shopItem = button.parentElement.parentElement
var title = storeitem.getElementsByClassName('product-name')[0].innerHTML
console.log('title')
}
<div class="store-item">
<span class="product-name">CPU 1</span>
<img class="cpu-image" src="Images/CPU-1.jpg">
<div class="product-details">
<span class="item-price">$229.99</span>
<button class="btn btn-primary item-button" role="button">ADD TO CART</button>
</div>
</div>
|
[
"It looks like there is a typo in the addToCartClicked function. The title variable is defined as storeitem.getElementsByClassName('product-name')[0].innerHTML, but it should be shopItem instead of storeitem. The shopItem variable is defined earlier in the function as button.parentElement.parentElement, so changing storeitem to shopItem should fix the issue.\nAdditionally, the console.log statement currently prints the string 'title' instead of the value of the title variable.\n",
"Fix your code. you need that shopItem as discribe above by Mr.Sebastian Simon and Mr.haggbart.\n\n\nvar addToCartButtons = document.getElementsByClassName('item-button')\n\nfor (var i = 0; i < addToCartButtons.length; i++) {\n var button = addToCartButtons[i]\n button.addEventListener('click', addToCartClicked)\n}\n\nfunction addToCartClicked(event) {\n var button = event.target\n \n var shopItem = button.parentElement.parentElement\n\n // error here 'storeitem' variable is defined.\n var title = shopItem.getElementsByClassName('product-name')[0].innerHTML\n \n console.log('title')\n console.log(title) // you means title varible\n}\n<div class=\"store-item\">\n <span class=\"product-name\">CPU 1</span>\n <img class=\"cpu-image\" src=\"Images/CPU-1.jpg\">\n <div class=\"product-details\">\n <span class=\"item-price\">$229.99</span>\n <button class=\"btn btn-primary item-button\" role=\"button\">ADD TO CART</button>\n </div>\n</div>\n\n\n\n",
"function addToCartClicked(event) {\n var button = event.target\n var shopItem = button.parentElement.parentElement\n var title = shopItem.getElementsByClassName('product-name')[0].innerHTML\n console.log(title)\n}\n\n"
] |
[
1,
0,
0
] |
[] |
[] |
[
"html",
"javascript"
] |
stackoverflow_0074665237_html_javascript.txt
|
Q:
using esm modules with ts-node
I successfully ran ts-node that transpiles to CommonJS modules. I used the official: docs
I also wanted to use esm modules following the official esm docs, but was unfortunately unsuccessful. The error I keep getting is: CustomError: Cannot find module '/module/next/to/index/ts/ModuleName' imported from /Users/mainuser/angular_apps/typescript_test/index.ts
This is tsconfig.json
{
"compilerOptions": {
"target": "ES2015",
"module": "ES2020",
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"skipLibCheck": true
},
"ts-node": {
"esm": true
}
}
and this is the content of package.json:
{
"name": "typescript_test",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {},
"devDependencies": {
"ts-node": "^10.7.0",
"typescript": "^4.6.3"
},
"type": "module"
}
I run ts-node with: ./node_modules/.bin/ts-node index.ts while being in the project root.
What am I missing?
This is my index.ts code:
import { SmokeTest } from "./SmokeTest";
SmokeTest.Log();
And this is the SmokeTest.ts file contents. Both are at the same fs level:
export module SmokeTest{
export function Log(){
console.log("Smoke test running");
}
}
I am running node v14.19.1 and ts-node v10.7.0
A:
Adding ts-node to the tsconfig.json proved ineffective for me. I updated the tsconfig slightly (see node14 base as a starting point) :
{
"compilerOptions": {
"target": "ES2020",
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"skipLibCheck": true,
"outDir": "build",
"module": "es2020",
"moduleResolution": "node"
},
"include": ["./*.ts"],
"exclude": ["node_modules"]
}
I used the --loader ts-node/esm option:
$ node --loader ts-node/esm ./index.ts
I also updated my smoke-test.ts module to look like this:
function Log() {
console.log('Smoke test running')
}
export const SmokeTest = { Log }
When importing with native esm modules, the extension must always be provided as .js for a .ts file:
import { SmokeTest } from './smoke-test.js'
SmokeTest.Log()
A:
I believe with ts-node v10.9.1 you can use the --esm flag
yarn ts-node --esm path/to/file.js
|
using esm modules with ts-node
|
I successfully ran ts-node that transpiles to CommonJS modules. I used the official: docs
I also wanted to use esm modules following the official esm docs, but was unfortunately unsuccessful. The error I keep getting is: CustomError: Cannot find module '/module/next/to/index/ts/ModuleName' imported from /Users/mainuser/angular_apps/typescript_test/index.ts
This is tsconfig.json
{
"compilerOptions": {
"target": "ES2015",
"module": "ES2020",
"esModuleInterop": true,
"forceConsistentCasingInFileNames": true,
"strict": true,
"skipLibCheck": true
},
"ts-node": {
"esm": true
}
}
and this is the content of package.json:
{
"name": "typescript_test",
"version": "1.0.0",
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1"
},
"author": "",
"license": "ISC",
"dependencies": {},
"devDependencies": {
"ts-node": "^10.7.0",
"typescript": "^4.6.3"
},
"type": "module"
}
I run ts-node with: ./node_modules/.bin/ts-node index.ts while being in the project root.
What am I missing?
This is my index.ts code:
import { SmokeTest } from "./SmokeTest";
SmokeTest.Log();
And this is the SmokeTest.ts file contents. Both are at the same fs level:
export module SmokeTest{
export function Log(){
console.log("Smoke test running");
}
}
I am running node v14.19.1 and ts-node v10.7.0
|
[
"Adding ts-node to the tsconfig.json proved ineffective for me. I updated the tsconfig slightly (see node14 base as a starting point) :\n{\n \"compilerOptions\": {\n \"target\": \"ES2020\",\n \"esModuleInterop\": true,\n \"forceConsistentCasingInFileNames\": true,\n \"strict\": true,\n \"skipLibCheck\": true,\n \"outDir\": \"build\",\n \"module\": \"es2020\",\n \"moduleResolution\": \"node\"\n },\n \"include\": [\"./*.ts\"],\n \"exclude\": [\"node_modules\"]\n}\n\nI used the --loader ts-node/esm option:\n$ node --loader ts-node/esm ./index.ts\n\nI also updated my smoke-test.ts module to look like this:\nfunction Log() {\n console.log('Smoke test running')\n}\n\nexport const SmokeTest = { Log }\n\nWhen importing with native esm modules, the extension must always be provided as .js for a .ts file:\nimport { SmokeTest } from './smoke-test.js'\nSmokeTest.Log()\n\n",
"I believe with ts-node v10.9.1 you can use the --esm flag\nyarn ts-node --esm path/to/file.js\n\n"
] |
[
6,
0
] |
[] |
[] |
[
"node.js",
"ts_node",
"typescript"
] |
stackoverflow_0071808342_node.js_ts_node_typescript.txt
|
Q:
How to start migration if connectionstring address is valid? / .Net Core 6
I get database information from the user and if this information is correct and the connection is successfully established, I create the necessary connectionstring parameter to the appsettings.json file. There is no problem so far, but after this creation, I want to migrate the DataContext structure. But I don't know how to do this.
I wrote the necessary migration codes in the Program.cs file, but the project does not start because the connectionstring is null at the beginning.
Null
Null Result on Program.cs
As you can see in the images above, because myconn is null, I cannot migrate with program.cs, so I need to start the migration process in the action where I control myconn.
myconn action
First, I want to check the database information that the user has entered, and if it is valid, enter this information into the connectionstring field in appsetting.json and then do the necessary migration.
How can I do it?
A:
Create a function to check whether your connection string is valid. If your connection string is not valid, create it with your default values; then you can run your method. If there is an error before the app is fully initialized, you can't route to your actions.
Check whether your connection string is valid:
public bool CheckConnectionIsValid(string cnnStr){
try {
//This will check your connectionstring format
DbConnectionStringBuilder builder = new DbConnectionStringBuilder();
builder.ConnectionString = cnnStr;
return true;
}
catch (Exception)
{
return false;
}
}
in your startup:
//before app build
var connectionString = builder.Configuration.GetConnectionString("myconn");
if(CheckConnectionIsValid(connectionString))
{
builder.Services.AddDbContext<ApplicationDbContext>(options =>
options.UseSqlServer(connectionString));
}
//after app build
if(CheckConnectionIsValid(connectionString))
{
//your migration code here
}
On the first run your connection string will be null, so the database is not initialized, but your app will not throw an error, and you can still call your action.
Once your app configuration is updated, your app will restart, and on the second run it will migrate the new database.
|
How to start migration if connectionstring address is valid? / .Net Core 6
|
I get database information from the user and if this information is correct and the connection is successfully established, I create the necessary connectionstring parameter to the appsettings.json file. There is no problem so far, but after this creation, I want to migrate the DataContext structure. But I don't know how to do this.
I wrote the necessary migration codes in the Program.cs file, but the project does not start because the connectionstring is null at the beginning.
Null
Null Result on Program.cs
As you can see in the images above, because myconn is null, I cannot migrate with program.cs, so I need to start the migration process in the action where I control myconn.
myconn action
First, I want to check the database information that the user has entered, and if it is valid, enter this information into the connectionstring field in appsetting.json and then do the necessary migration.
How can I do it?
|
[
"Create a function to check your connection string is valid. if your connection string is not valid then create your connection string with your default values then you can run your method. If there is a error before app fully initialized you can't route your actions.\nCheck your connectionstring is valid:\npublic bool CheckConnectionIsValid(string cnnStr){\n try {\n //This will check your connectionstring format\n DbConnectionStringBuilder builder = new DbConnectionStringBuilder();\n builder.ConnectionString = cnnStr;\n return true;\n }\n catch (Exception)\n {\n return false;\n }\n}\n\nin your startup:\n//before app build\nvar connectionString = builder.Configuration.GetConnectionString(\"myconn\");\nif(CheckConnectionIsValid(connectionString))\n{\n builder.Services.AddDbContext<ApplicationDbContext>(options => \n options.UseSqlServer(connectionString));\n}\n\n//after app build\nif(CheckConnectionIsValid(connectionString))\n{\n //your migration code here\n}\n\nIn first run your connectionstring will be null so database not initalized but your app will not throw error. So you can call your action.\nIf your app configuration updated your app will restart and in second time your app migrate new db\n"
] |
[
0
] |
[] |
[] |
[
".net",
"asp.net_core",
"c#"
] |
stackoverflow_0074662415_.net_asp.net_core_c#.txt
|
Q:
Calculating the distance between two points that are objects in a separate class
I have created a program that creates an object called "Point" in the class "Point". Now I wonder how to find the distance between two points created in the Main method. The instructions from my teacher are :
Add a new method to the Point class named "distance".
public double distance (Point other)
Returns the distance between the current Point object and the given other Point object. The distance between two points is equal to the square root of the sum of the squares of the differences of their x- and y- coordinates. In other words, the distance between two points (x1, y1) and (x2, y2) can be expressed as the square root of (x2 - x1)^2 + (y2 - y1) ^2. Two points with the same (x, y) coordinates should return a distance of 0.0.
Below you can see my code and the task is to calculate the distance between point a and point b.
//Dimitar Kapitanov 11/6
using System;
public class PointClassPt2
{
public static void Main(String[] args)
{
//main method
Point a = new Point();
Console.WriteLine("First Point");
Console.Write("X: ");
double x = Convert.ToDouble(Console.ReadLine());
Console.Write("Y: ");
double y = Convert.ToDouble(Console.ReadLine());
a.setCoordinate(x, y);
Point b = new Point(x, y);
Console.WriteLine("\nSecond Point");
Console.Write("X: ");
x = Convert.ToDouble(Console.ReadLine());
Console.Write("Y: ");
y = Convert.ToDouble(Console.ReadLine());
b.setCoordinate(x, y);
Console.WriteLine("\nPoint A: ("
+ a.getXCoordinate() + " , " + a.getYCoordinate() + ")");
Console.WriteLine("Point B: ("
+ b.getXCoordinate() + " , " + b.getYCoordinate() + ")");
Point c = new Point();
c.setCoordinate(-x, -y);
Console.WriteLine("Point C: ("
+ c.getXCoordinate() + " , " + c.getYCoordinate() + ")");
Console.WriteLine("Distance from A to B: ");
}
}
class Point
{
public double _x;
public double _y;
public Point()
{
_x = 0;
_y = 0;
}
public Point(double x, double y)
{
_x = x;
_y = y;
}
public double getXCoordinate()
{
return _x;
}
public double getYCoordinate()
{
return _y;
}
public void setCoordinate(double x, double y)
{
_x = x;
_y = y;
}
}
The thing that I don't understand is how to get the double values of the coordinates of Point a and Point b in order to calculate the distance in the method distance. And what does he mean by the sending the method distance "Point other". Can someone help me what the method distance should look like and what should I send it as parameters in the main method?
A:
Since this is your homework, I am not going to give you the full answer, but here is some pseudocode.
Inside your Point class, you will create a method distance
public double distance (Point other)
{
var otherX = other.getXCoordinate();
var otherY = other.getYCoordinate();
//you already have access to the current point _x and _y
//now you can do the distance calculation here.
var distance = //your formula
return distance;
}
A:
You could add a method, like below
public double Distance(Point other)
{
if(null == other )
return 0;
//calculate the distance with the formula you already mentioned
//double dist = Math.Sqrt( Math.Pow((other.getXCoordinate() - _x),2) + ...
return dist;
}
|
Calculating the distance between two points that are objects in a separate class
|
I have created a program that creates an object called "Point" in the class "Point". Now I wonder how to find the distance between two points created in the Main method. The instructions from my teacher are :
Add a new method to the Point class named "distance".
public double distance (Point other)
Returns the distance between the current Point object and the given other Point object. The distance between two points is equal to the square root of the sum of the squares of the differences of their x- and y- coordinates. In other words, the distance between two points (x1, y1) and (x2, y2) can be expressed as the square root of (x2 - x1)^2 + (y2 - y1) ^2. Two points with the same (x, y) coordinates should return a distance of 0.0.
Below you can see my code and the task is to calculate the distance between point a and point b.
//Dimitar Kapitanov 11/6
using System;
public class PointClassPt2
{
public static void Main(String[] args)
{
//main method
Point a = new Point();
Console.WriteLine("First Point");
Console.Write("X: ");
double x = Convert.ToDouble(Console.ReadLine());
Console.Write("Y: ");
double y = Convert.ToDouble(Console.ReadLine());
a.setCoordinate(x, y);
Point b = new Point(x, y);
Console.WriteLine("\nSecond Point");
Console.Write("X: ");
x = Convert.ToDouble(Console.ReadLine());
Console.Write("Y: ");
y = Convert.ToDouble(Console.ReadLine());
b.setCoordinate(x, y);
Console.WriteLine("\nPoint A: ("
+ a.getXCoordinate() + " , " + a.getYCoordinate() + ")");
Console.WriteLine("Point B: ("
+ b.getXCoordinate() + " , " + b.getYCoordinate() + ")");
Point c = new Point();
c.setCoordinate(-x, -y);
Console.WriteLine("Point C: ("
+ c.getXCoordinate() + " , " + c.getYCoordinate() + ")");
Console.WriteLine("Distance from A to B: ");
}
}
class Point
{
public double _x;
public double _y;
public Point()
{
_x = 0;
_y = 0;
}
public Point(double x, double y)
{
_x = x;
_y = y;
}
public double getXCoordinate()
{
return _x;
}
public double getYCoordinate()
{
return _y;
}
public void setCoordinate(double x, double y)
{
_x = x;
_y = y;
}
}
The thing that I don't understand is how to get the double values of the coordinates of Point a and Point b in order to calculate the distance in the method distance. And what does he mean by the sending the method distance "Point other". Can someone help me what the method distance should look like and what should I send it as parameters in the main method?
|
[
"Since this is your homework, i am not going to give you the full answer but some pseudo code,\nInside your Point class, you will create a method distance\npublic double distance (Point other)\n{\n var otherX = other.getXCoordinate();\n var otherY = other.getYCoordinate();\n //you already have access to the current point _x and _y\n //now you can do the distance calculation here.\n var distance = //your formula\n return distance;\n}\n\n",
"You could add a method, like below\npublic double Distance(Point other)\n{\n if(null == other )\n return 0;\n\n //calculate the distance with the formula you already mentioned\n //double dist = sqrt( Math.Pow((other.getXcoordinate() - _x),2) + ...\n return dist;\n}\n\n"
] |
[
3,
1
] |
[] |
[] |
[
"c#"
] |
stackoverflow_0074665281_c#.txt
|
Q:
Editing specific line in text file in Python
Let's say I have a text file containing:
Dan
Warrior
500
1
0
Is there a way I can edit a specific line in that text file? Right now I have this:
#!/usr/bin/env python
import io
myfile = open('stats.txt', 'r')
dan = myfile.readline()
print dan
print "Your name: " + dan.split('\n')[0]
try:
myfile = open('stats.txt', 'a')
myfile.writelines('Mage')[1]
except IOError:
myfile.close()
finally:
myfile.close()
Yes, I know that myfile.writelines('Mage')[1] is incorrect. But you get my point, right? I'm trying to edit line 2 by replacing Warrior with Mage. But can I even do that?
A:
You want to do something like this:
# with is like your try .. finally block in this case
with open('stats.txt', 'r') as file:
# read a list of lines into data
data = file.readlines()
print data
print "Your name: " + data[0]
# now change the 2nd line, note that you have to add a newline
data[1] = 'Mage\n'
# and write everything back
with open('stats.txt', 'w') as file:
file.writelines( data )
The reason for this is that you can't do something like "change line 2" directly in a file. You can only overwrite (not delete) parts of a file - that means that the new content just covers the old content. So, if you wrote 'Mage' over line 2, the resulting line would be 'Mageior'.
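A small sketch of that overwrite behaviour, assuming the five-line stats.txt from the question and Unix newlines:
with open('stats.txt', 'r+') as f:
    offset = len(f.readline())   # 'Dan\n' is 4 characters, so line 2 starts at offset 4
    f.seek(offset)
    f.write('Mage')              # covers 'Warr'; the trailing 'ior' is left in place
with open('stats.txt') as f:
    print f.readlines()[1]       # prints 'Mageior'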
A:
def replace_line(file_name, line_num, text):
lines = open(file_name, 'r').readlines()
lines[line_num] = text
out = open(file_name, 'w')
out.writelines(lines)
out.close()
And then:
replace_line('stats.txt', 1, 'Mage\n')   # line 2 has index 1; keep the newline
A:
you can use fileinput to do in-place editing
import fileinput

for line in fileinput.FileInput("stats.txt", inplace=1):
    if line.startswith('Warrior'):
        line = 'Mage\n'
    print line,   # with inplace=1, whatever is printed replaces the current line
A:
You can do it in two ways, choose what suits your requirement:
Method I.) Replacing using line number. You can use built-in function enumerate() in this case:
First, in read mode get all data in a variable
with open("your_file.txt",'r') as f:
get_all=f.readlines()
Second, write to the file (where enumerate comes to action)
with open("your_file.txt",'w') as f:
for i,line in enumerate(get_all,1): ## STARTS THE NUMBERING FROM 1 (by default it begins with 0)
if i == 2: ## OVERWRITES line:2
f.writelines("Mage\n")
else:
f.writelines(line)
Method II.) Using the keyword you want to replace:
Open file in read mode and copy the contents to a list
with open("some_file.txt","r") as f:
newline=[]
for word in f.readlines():
newline.append(word.replace("Warrior","Mage")) ## Replace the keyword while you copy.
"Warrior" has been replaced by "Mage", so write the updated data to the file:
with open("some_file.txt","w") as f:
for line in newline:
f.writelines(line)
This is what the output will be in both cases:
Dan Dan
Warrior ------> Mage
500 500
1 1
0 0
A:
If your text contains only one individual:
import re
# creation
with open('pers.txt','wb') as g:
g.write('Dan \n Warrior \n 500 \r\n 1 \r 0 ')
with open('pers.txt','rb') as h:
print 'exact content of pers.txt before treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt before treatment:\n',h.read()
# treatment
def roplo(file_name,what):
patR = re.compile('^([^\r\n]+[\r\n]+)[^\r\n]+')
with open(file_name,'rb+') as f:
ch = f.read()
f.seek(0)
f.write(patR.sub('\\1'+what,ch))
roplo('pers.txt','Mage')
# after treatment
with open('pers.txt','rb') as h:
print '\nexact content of pers.txt after treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt after treatment:\n',h.read()
If your text contains several individuals:
import re
# creation
with open('pers.txt','wb') as g:
g.write('Dan \n Warrior \n 500 \r\n 1 \r 0 \n Jim \n dragonfly\r300\r2\n10\r\nSomo\ncosmonaut\n490\r\n3\r65')
with open('pers.txt','rb') as h:
print 'exact content of pers.txt before treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt before treatment:\n',h.read()
# treatment
def ripli(file_name,who,what):
with open(file_name,'rb+') as f:
ch = f.read()
x,y = re.search('^\s*'+who+'\s*[\r\n]+([^\r\n]+)',ch,re.MULTILINE).span(1)
f.seek(x)
f.write(what+ch[y:])
ripli('pers.txt','Jim','Wizard')
# after treatment
with open('pers.txt','rb') as h:
print 'exact content of pers.txt after treatment:\n',repr(h.read())
with open('pers.txt','rU') as h:
print '\nrU-display of pers.txt after treatment:\n',h.read()
If the “job” of an individual had a constant length in the text, you could change only the portion of the text corresponding to the “job” of the desired individual:
that's the same idea as senderle's.
But in my opinion, it would be better to put the characteristics of the individuals in a dictionary recorded in a file with cPickle:
from cPickle import dump, load
with open('cards','wb') as f:
dump({'Dan':['Warrior',500,1,0],'Jim':['dragonfly',300,2,10],'Somo':['cosmonaut',490,3,65]},f)
with open('cards','rb') as g:
id_cards = load(g)
print 'id_cards before change==',id_cards
id_cards['Jim'][0] = 'Wizard'
with open('cards','w') as h:
dump(id_cards,h)
with open('cards') as e:
id_cards = load(e)
print '\nid_cards after change==',id_cards
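For what it's worth, a minimal Python 3 sketch of the same idea (pickle replaces cPickle, and the file has to be opened in binary mode for both dump and load):
import pickle

with open('cards', 'wb') as f:
    pickle.dump({'Dan': ['Warrior', 500, 1, 0]}, f)

with open('cards', 'rb') as f:
    id_cards = pickle.load(f)

id_cards['Dan'][0] = 'Mage'      # edit the record in memory

with open('cards', 'wb') as f:
    pickle.dump(id_cards, f)     # write the whole dictionary back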
A:
I have been practising working on files this evening and realised that I can build on Jochen's answer to provide greater functionality for repeated/multiple use. Unfortunately my answer does not address the issue of dealing with large files, but it does make life easier with smaller files.
with open('filetochange.txt', 'r+') as foo:
data = foo.readlines() #reads file as list
pos = int(input("Which position in list to edit? "))-1 #list position to edit
data.insert(pos, "more foo"+"\n") #inserts before item to edit
    del data[pos+1]                    #removes the original item that was just displaced
    foo.seek(0)                        #seeks beginning of file
    foo.writelines(data)               #writes the edited list back
    foo.truncate()                     #discards leftover bytes if the new content is shorter
A:
Suppose I have a file named file_name with the following contents:
this is python
it is file handling
this is editing of line
We have to replace line 2 with "modification is done":
f=open("file_name","r+")
a=f.readlines()
for line in a:
    if line.startswith("it is"):
        p=a.index(line)
#so now we have the position of the line to be modified
a[p]="modification is done\n"
f.seek(0)
f.truncate() #erasing all data from the file
f.close()
#so now we have an empty file and we will write the modified content into the file
o=open("file_name","w")
for i in a:
    o.write(i)
o.close()
#now the modification is done in the file
A:
Write the initial data, leaving an (almost) empty string as the last line so it can be updated later.
Here we insert that empty string as the last line of the file; this approach can be used for iterative updating, in other words appending data to the data.txt file.
with open("data.txt", 'w') as f:
f.write('first line\n'
'second line\n'
'third line\n'
'fourth line\n'
' \n')
updating data in the last line of the text file
my_file = open('data.txt')
string_list = my_file.readlines()
my_file.close()

string_list[-1] = "Edit the list of strings as desired\n"
my_file = open("data.txt", "w")
new_file_contents = "".join(string_list)
my_file.write(new_file_contents)
my_file.close()
A:
I used to have the same requirement and eventually ended up with Jinja templating. Change your text file to the template below, with a variable lastname; then you can render the template by passing lastname='Meg'. That's the most efficient and quickest way I can think of.
Dan
{{ lastname }}
Warrior
500
1
0
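A rough sketch of rendering that template with the jinja2 package (the file name stats.txt is just an assumption here):
from jinja2 import Template

with open('stats.txt') as f:
    template = Template(f.read())

print(template.render(lastname='Meg'))   # fills in {{ lastname }}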
|
Editing specific line in text file in Python
|
Let's say I have a text file containing:
Dan
Warrior
500
1
0
Is there a way I can edit a specific line in that text file? Right now I have this:
#!/usr/bin/env python
import io
myfile = open('stats.txt', 'r')
dan = myfile.readline()
print dan
print "Your name: " + dan.split('\n')[0]
try:
myfile = open('stats.txt', 'a')
myfile.writelines('Mage')[1]
except IOError:
myfile.close()
finally:
myfile.close()
Yes, I know that myfile.writelines('Mage')[1] is incorrect. But you get my point, right? I'm trying to edit line 2 by replacing Warrior with Mage. But can I even do that?
|
[
"You want to do something like this:\n# with is like your try .. finally block in this case\nwith open('stats.txt', 'r') as file:\n # read a list of lines into data\n data = file.readlines()\n\nprint data\nprint \"Your name: \" + data[0]\n\n# now change the 2nd line, note that you have to add a newline\ndata[1] = 'Mage\\n'\n\n# and write everything back\nwith open('stats.txt', 'w') as file:\n file.writelines( data )\n\nThe reason for this is that you can't do something like \"change line 2\" directly in a file. You can only overwrite (not delete) parts of a file - that means that the new content just covers the old content. So, if you wrote 'Mage' over line 2, the resulting line would be 'Mageior'.\n",
"def replace_line(file_name, line_num, text):\n lines = open(file_name, 'r').readlines()\n lines[line_num] = text\n out = open(file_name, 'w')\n out.writelines(lines)\n out.close()\n\nAnd then:\nreplace_line('stats.txt', 0, 'Mage')\n\n",
"you can use fileinput to do in place editing\nimport fileinput\nfor line in fileinput.FileInput(\"myfile\", inplace=1):\n if line .....:\n print line\n\n",
"You can do it in two ways, choose what suits your requirement:\nMethod I.) Replacing using line number. You can use built-in function enumerate() in this case:\nFirst, in read mode get all data in a variable\nwith open(\"your_file.txt\",'r') as f:\n get_all=f.readlines()\n\nSecond, write to the file (where enumerate comes to action) \nwith open(\"your_file.txt\",'w') as f:\n for i,line in enumerate(get_all,1): ## STARTS THE NUMBERING FROM 1 (by default it begins with 0) \n if i == 2: ## OVERWRITES line:2\n f.writelines(\"Mage\\n\")\n else:\n f.writelines(line)\n\nMethod II.) Using the keyword you want to replace:\nOpen file in read mode and copy the contents to a list\nwith open(\"some_file.txt\",\"r\") as f:\n newline=[]\n for word in f.readlines(): \n newline.append(word.replace(\"Warrior\",\"Mage\")) ## Replace the keyword while you copy. \n\n\"Warrior\" has been replaced by \"Mage\", so write the updated data to the file:\nwith open(\"some_file.txt\",\"w\") as f:\n for line in newline:\n f.writelines(line)\n\nThis is what the output will be in both cases:\nDan Dan \nWarrior ------> Mage \n500 500 \n1 1 \n0 0 \n\n",
"If your text contains only one individual:\nimport re\n\n# creation\nwith open('pers.txt','wb') as g:\n g.write('Dan \\n Warrior \\n 500 \\r\\n 1 \\r 0 ')\n\nwith open('pers.txt','rb') as h:\n print 'exact content of pers.txt before treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt before treatment:\\n',h.read()\n\n\n# treatment\ndef roplo(file_name,what):\n patR = re.compile('^([^\\r\\n]+[\\r\\n]+)[^\\r\\n]+')\n with open(file_name,'rb+') as f:\n ch = f.read()\n f.seek(0)\n f.write(patR.sub('\\\\1'+what,ch))\nroplo('pers.txt','Mage')\n\n\n# after treatment\nwith open('pers.txt','rb') as h:\n print '\\nexact content of pers.txt after treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt after treatment:\\n',h.read()\n\nIf your text contains several individuals:\nimport re\n# creation\nwith open('pers.txt','wb') as g:\n g.write('Dan \\n Warrior \\n 500 \\r\\n 1 \\r 0 \\n Jim \\n dragonfly\\r300\\r2\\n10\\r\\nSomo\\ncosmonaut\\n490\\r\\n3\\r65')\n\nwith open('pers.txt','rb') as h:\n print 'exact content of pers.txt before treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt before treatment:\\n',h.read()\n\n\n# treatment\ndef ripli(file_name,who,what):\n with open(file_name,'rb+') as f:\n ch = f.read()\n x,y = re.search('^\\s*'+who+'\\s*[\\r\\n]+([^\\r\\n]+)',ch,re.MULTILINE).span(1)\n f.seek(x)\n f.write(what+ch[y:])\nripli('pers.txt','Jim','Wizard')\n\n\n# after treatment\nwith open('pers.txt','rb') as h:\n print 'exact content of pers.txt after treatment:\\n',repr(h.read())\nwith open('pers.txt','rU') as h:\n print '\\nrU-display of pers.txt after treatment:\\n',h.read()\n\nIf the “job“ of an individual was of a constant length in the texte, you could change only the portion of texte corresponding to the “job“ the desired individual:\nthat’s the same idea as senderle’s one.\nBut according to me, better would be to put the characteristics of individuals in a dictionnary recorded in file with cPickle:\nfrom cPickle import dump, load\n\nwith open('cards','wb') as f:\n dump({'Dan':['Warrior',500,1,0],'Jim':['dragonfly',300,2,10],'Somo':['cosmonaut',490,3,65]},f)\n\nwith open('cards','rb') as g:\n id_cards = load(g)\nprint 'id_cards before change==',id_cards\n\nid_cards['Jim'][0] = 'Wizard'\n\nwith open('cards','w') as h:\n dump(id_cards,h)\n\nwith open('cards') as e:\n id_cards = load(e)\nprint '\\nid_cards after change==',id_cards\n\n",
"I have been practising working on files this evening and realised that I can build on Jochen's answer to provide greater functionality for repeated/multiple use. Unfortunately my answer does not address issue of dealing with large files but does make life easier in smaller files.\nwith open('filetochange.txt', 'r+') as foo:\n data = foo.readlines() #reads file as list\n pos = int(input(\"Which position in list to edit? \"))-1 #list position to edit\n data.insert(pos, \"more foo\"+\"\\n\") #inserts before item to edit\n x = data[pos+1]\n data.remove(x) #removes item to edit\n foo.seek(0) #seeks beginning of file\n for i in data:\n i.strip() #strips \"\\n\" from list items\n foo.write(str(i))\n\n",
"Suppose I have a file named file_name as following:\nthis is python\nit is file handling\nthis is editing of line\n\nWe have to replace line 2 with \"modification is done\":\nf=open(\"file_name\",\"r+\")\na=f.readlines()\nfor line in f:\n if line.startswith(\"rai\"):\n p=a.index(line)\n#so now we have the position of the line which to be modified\na[p]=\"modification is done\"\nf.seek(0)\nf.truncate() #ersing all data from the file\nf.close()\n#so now we have an empty file and we will write the modified content now in the file\no=open(\"file_name\",\"w\")\nfor i in a:\n o.write(i)\no.close()\n#now the modification is done in the file\n\n",
"writing initial data, print an empty str for updating it to a new data\nhere we insert an empty str in the last line of the code, this code can be used in interative updation, in other words appending data in text.txt file\nwith open(\"data.txt\", 'w') as f:\n f.write('first line\\n'\n 'second line\\n'\n 'third line\\n'\n 'fourth line\\n'\n ' \\n')\n\nupdating data in the last line of the text file\nmy_file=open('data.txt')\nstring_list = my_file.readlines()\nstring_list[-1] = \"Edit the list of strings as desired\\n\"\nmy_file = open(\"data.txt\", \"w\")\nnew_file_contents = \"\". join(string_list)\nmy_file. write(new_file_contents)\n\n",
"I used to have same request, eventually ended up with Jinja templating. Change your text file to below, and a variable lastname, then you can render the template by passing lastname='Meg', that's the most efficient and quickest way I can think of.\nDan\n{{ lastname }}\nWarrior\n500\n1\n0\n"
] |
[
162,
34,
28,
16,
3,
2,
0,
0,
0
] |
[
"#read file lines and edit specific item\n\nfile=open(\"pythonmydemo.txt\",'r')\na=file.readlines()\nprint(a[0][6:11])\n\na[0]=a[0][0:5]+' Ericsson\\n'\nprint(a[0])\n\nfile=open(\"pythonmydemo.txt\",'w')\nfile.writelines(a)\nfile.close()\nprint(a)\n\n",
"This is the easiest way to do this.\nf = open(\"file.txt\", \"wt\")\nfor line in f:\n f.write(line.replace('foo', 'bar'))\nf.close()\n\nI hope it will work for you.\n"
] |
[
-1,
-2
] |
[
"io",
"python"
] |
stackoverflow_0004719438_io_python.txt
|
Q:
python get dictionary key from value is list
I have two dictionaries:
first_dict = {'a': ['1', '2', '3'],
'b': ['4', '5'],
'c': ['6'],
}
second_dict = {'1': 'wqeewe',
'2': 'efsafa',
'4': 'fsasaf',
'6': 'kgoeew',
'7': 'fkowew'
}
I want to have a third dict that maps each key of second_dict to the first_dict key whose value list contains it. This way, I will have:
third_dict = {'1' : 'a',
'2' : 'a',
'4' : 'b',
'6' : 'c',
'7' : None,
}
here is my way:
def key_return(name):
for key, value in first_dict.items():
if name == value:
return key
if isinstance(value, list) and name in value:
return key
return None
reference:
Python return key from value, but its a list in the dictionary
However, I am wondering whether there is another way, using dict.get() or something else.
Any help would be appreciated. Thanks.
A:
you can do it like that:
Code
first_dict = {'a': ['1', '2', '3'],
'b': ['4', '5'],
'c': ['6'],
}
second_dict = {'1': 'wqeewe',
'2': 'efsafa',
'4': 'fsasaf',
'6': 'kgoeew',
'7': 'fkowew'
}
third_dict = dict()
for second_key in second_dict.keys():
found = False
for first_key, value in first_dict.items():
if second_key in value:
third_dict.setdefault(second_key, first_key )
found = True
if not found:
third_dict.setdefault(second_key, None)
print(third_dict)
Output:
{'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}
Hope this helps
A:
version with a_dict.get()
third_dict = {i: {i:k for k,v in first_dict.items() for i in v}.get(i) for i in second_dict.keys()}
this part {i:k for k,v in first_dict.items() for i in v}
creates a dict like {'1': 'a', '2': 'a', '3': 'a', '4': 'b', '5': 'b', '6': 'c'}
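Written out in two steps, so the reverse lookup dict is built only once instead of once per key, the same idea could read:
reverse_map = {v: k for k, values in first_dict.items() for v in values}
third_dict = {key: reverse_map.get(key) for key in second_dict}
# {'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}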
A:
You can map the values in the first dictionary to their keys with:
values_map = dict([a for k, v in first_dict.items() for a in zip(v, k*len(v))])
then use this map to create the third dictionary:
third_dict = {key: values_map.get(key) for key, value in second_dict.items()}
Since I get that the first_dict may contain single values instead of list you may want first to convert those values to list with:
first_dict = dict(map(lambda x: (x[0], x[1]) if isinstance(x[1], list) else (x[0], [str(x[1])]), first_dict.items()))
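Note that zip(v, k*len(v)) pairs each value with the characters of the repeated key string, so it relies on the keys being single characters (as 'a', 'b', 'c' are here); a variant that drops that assumption could use itertools.repeat:
from itertools import repeat

values_map = dict(pair for k, v in first_dict.items() for pair in zip(v, repeat(k)))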
A:
res = {
x: k
for k, xs in first_dict.items()
for x in xs
if x in second_dict
}
this creates 1:a , 2:a, 4:b etc. If you also want missing keys like 7:None join it with a dummy dict:
res = {k: None for k in second_dict} | {
x: k
for k, xs in first_dict.items()
for x in xs
if x in second_dict
}
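With the question's data this gives {'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}. Note that the dict union operator | needs Python 3.9+; on older versions the same merge can be spelled with unpacking:
defaults = {k: None for k in second_dict}
found = {x: k for k, xs in first_dict.items() for x in xs if x in second_dict}
res = {**defaults, **found}   # same result as defaults | found on 3.9+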
|
python get dictionary key from value is list
|
I have two dictionaries:
first_dict = {'a': ['1', '2', '3'],
'b': ['4', '5'],
'c': ['6'],
}
second_dict = {'1': 'wqeewe',
'2': 'efsafa',
'4': 'fsasaf',
'6': 'kgoeew',
'7': 'fkowew'
}
I want to have a third dict that maps each key of second_dict to the first_dict key whose value list contains it. This way, I will have:
third_dict = {'1' : 'a',
'2' : 'a',
'4' : 'b',
'6' : 'c',
'7' : None,
}
here is my way:
def key_return(name):
for key, value in first_dict.items():
if name == value:
return key
if isinstance(value, list) and name in value:
return key
return None
reference:
Python return key from value, but its a list in the dictionary
However, I am wondering whether there is another way, using dict.get() or something else.
Any help would be appreciated. Thanks.
|
[
"you can do it like that:\nCode\nfirst_dict = {'a': ['1', '2', '3'],\n 'b': ['4', '5'],\n 'c': ['6'],\n }\n\nsecond_dict = {'1': 'wqeewe',\n '2': 'efsafa',\n '4': 'fsasaf',\n '6': 'kgoeew',\n '7': 'fkowew'\n }\n\nthird_dict = dict()\n\nfor second_key in second_dict.keys():\n found = False\n for first_key, value in first_dict.items():\n if second_key in value:\n third_dict.setdefault(second_key, first_key )\n found = True\n if not found:\n third_dict.setdefault(second_key, None)\n \nprint(third_dict)\n\nOutput:\n{'1': 'a', '2': 'a', '4': 'b', '6': 'c', '7': None}\n\nHope this helps\n",
"version with a_dict.get()\nthird_dict = {i: {i:k for k,v in first_dict.items() for i in v}.get(i) for i in second_dict.keys()}\n\nthis part {i:k for k,v in first_dict.items() for i in v}\ncreates a dict like {'1': 'a', '2': 'a', '3': 'a', '4': 'b', '5': 'b', '6': 'c'}\n",
"You can map the values in the first dictionary to their keys with:\nvalues_map = dict([a for k, v in first_dict.items() for a in zip(v, k*len(v))])\n\nthen use this map to create the third dictionary:\nthird_dict = {key: values_map.get(key) for key, value in second_dict.items()}\n\nSince I get that the first_dict may contain single values instead of list you may want first to convert those values to list with:\nfirst_dict = dict(map(lambda x: (x[0], x[1]) if isinstance(x[1], list) else (x[0], [str(x[1])]), first_dict.items()))\n\n",
"res = {\n x: k\n for k, xs in first_dict.items()\n for x in xs\n if x in second_dict\n}\n\nthis creates 1:a , 2:a, 4:b etc. If you also want missing keys like 7:None join it with a dummy dict:\nres = {k: None for k in second_dict} | {\n x: k\n for k, xs in first_dict.items()\n for x in xs\n if x in second_dict\n}\n\n"
] |
[
1,
1,
0,
0
] |
[] |
[] |
[
"dictionary",
"list",
"python"
] |
stackoverflow_0074664870_dictionary_list_python.txt
|
Q:
How to set a variable, that isnt iter variable, to increases in each iteration and doesnt always return to its value prior to its entry into for loop?
import re, datetime
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False
aux_date = str(datetime.datetime.strptime(datestr, "%Y-%m-%d"))
print(repr(aux_date))
for i_month in range(int(months)):
# I add a unit since the months are "numerical quantities",
# that is, they are expressed in natural numbers, so I need it
# to start from 1 and not from 0 like the iter variable in python
i_month = i_month + 1
m1 = re.search(
r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})",
aux_date,
re.IGNORECASE,
)
if m1:
ref_year, ref_month = (
str(m1.groups()[0]).strip(),
str(m1.groups()[1]).strip(),
)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
if (
int(ref_year) % 4 == 0
and int(ref_year) % 100 == 0
and int(ref_year) % 400 != 0
):
            ref_year_is_leap_year = True # divisible by 4 and by 100 and not by 400, to determine whether the year is a leap year
if ref_year_is_leap_year == True and ref_month == "02":
n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
aux_date = (
datetime.datetime.strptime(datestr, "%Y-%m-%d")
+ datetime.timedelta(days=int(n_days_in_this_i_month))
).strftime("%Y-%m-%d")
print(repr(aux_date))
return aux_date
print(repr(add_months("2022-12-30", "3")))
Why does the aux_date variable, instead of progressively increasing the number of days of the elapsed months, only limit itself to adding 31 days of that month of January, and then add them back to the original amount, staying stuck there instead of advancing each iteration of this for loop?
The objective of this for loop is to achieve an incremental iteration loop where the days are added and not one that always returns to the original amount to add the same content over and over again.
Updated function Algorithm
In this edit I have modified some details and redundancies, and also fixed some bugs that are present in the original code.
def add_months(datestr, months):
ref_year, ref_month = "", ""
    ref_year_is_leap_year = False # boolean flag indicating whether or not the reference year is a leap year
aux_date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
for i_month in range(int(months)):
i_month = i_month + 1 # I add a unit since the months are "numerical quantities", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python
m1 = re.search( r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})", str(aux_date), re.IGNORECASE, )
if m1:
ref_year, ref_month = ( str(m1.groups()[0]).strip(), str( int(m1.groups()[1]) + 1).strip(), )
if( len(ref_month) == 1 ): ref_month = "0" + ref_month
if( int(ref_month) > 12 ): ref_month = "01"
print(ref_month)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
        if ( int(ref_year) % 4 == 0 and int(ref_year) % 100 != 0 ) or ( int(ref_year) % 400 == 0 ): ref_year_is_leap_year = True # divisible by 4 and not by 100, or divisible by 400: a leap year
if ref_year_is_leap_year == True and ref_month == "02": n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))
return datetime.datetime.strftime(aux_date, "%Y-%m-%d")
A:
Because at the end of every iteration of your for loop you are reconverting the value that is given in the parameter datestr and that value is never updated. You are also converting it to a string while trying to add a timedelta object. You should leave the value as a datetime object and convert to string once the for loop has finished if you still need to.
Just change the bottom assignment so it adds the timedelta to aux_date itself instead of re-parsing datestr, and remove all of the string conversions; that should at least get you going in the right direction.
for example:
import re, datetime
def add_months(datestr, months):
ref_year, ref_month = "", ""
    ref_year_is_leap_year = False # boolean flag indicating whether or not the reference year is a leap year
aux_date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
print(repr(aux_date))
for i_month in range(int(months)):
i_month = (
i_month + 1
) # I add a unit since the months are "numerical quantities", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python
m1 = re.search(
r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})",
str(aux_date),
re.IGNORECASE,
)
if m1:
ref_year, ref_month = (
str(m1.groups()[0]).strip(),
str(m1.groups()[1]).strip(),
)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
if (
int(ref_year) % 4 == 0
and int(ref_year) % 100 == 0
and int(ref_year) % 400 != 0
):
            ref_year_is_leap_year = True # divisible by 4 and by 100 and not by 400, to determine whether the year is a leap year
if ref_year_is_leap_year == True and ref_month == "02":
n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))
print(repr(aux_date))
return datetime.datetime.strftime(aux_date, "%Y-%m-%d")
print(repr(add_months("2022-12-30", "3")))
Output:
datetime.datetime(2022, 12, 30, 0, 0)
31
datetime.datetime(2023, 1, 30, 0, 0)
31
datetime.datetime(2023, 3, 2, 0, 0)
31
datetime.datetime(2023, 4, 2, 0, 0)
datetime.datetime(2023, 4, 2, 0, 0)
'2023-04-02'
A:
So, as Alexander's answer already establishes, you weren't updating the date, so you were always adding to the same beginning date on each iteration. I took the liberty of cleaning up your code: using regex and converting back and forth between strings and ints is the wrong approach here -- it misses the entire point of date-time objects, which is to encapsulate the information in a date. Just use those objects, not strings. Here is the same approach as your code using only datetime.datetime objects:
import datetime
def add_months(datestr, months):
number_of_days_in_each_month = {
1 : 31,
2 : 28,
3 : 31,
4: 30,
5: 31,
6: 30,
7: 31,
8: 31,
9: 30,
10: 31,
11: 30,
12: 31,
}
date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
is_leap_year = False
for i_month in range(1, int(months) + 1):
ref_year, ref_month = date.year, date.month
n_days = number_of_days_in_each_month[ref_month]
if (
ref_year % 4 == 0
and ref_year % 100 == 0
and ref_year % 400 != 0
):
            is_leap_year = True # divisible by 4 and by 100 and not by 400, to determine whether the year is a leap year
        if is_leap_year and ref_month == 2: # February
n_days += 1 # 28 --> 29
date += datetime.timedelta(days=n_days)
return date.strftime("%Y-%m-%d")
print(add_months("2022-12-30", "3"))
I also made some stylistic changes to variable names. This is an art not a science, naming variables, and it always comes down to subjective opinion, but may I humbly submit my opinion about more legible names.
Also note, you had a comment to the effect of:
I need the iter variable to start from 1 and not from 0 like the iter
variable in python
The iterating variable starts where you tell it to start, given the iterable you iterate over. range(N) will always start at zero, but it doesn't have to. You could iterate over [1, 2, 3], or better yet, range(1, N + 1).
Note!
Your algorithm is not working quite how one might expect, the output one would naturally expect is 2023-03-30
I'll give you a hint, though, think about precisely which month's days you need to add to the current month.... n_days = number_of_days_in_each_month[ref_month]....
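For reference, a sketch of one common way to get that naturally expected result: do the month arithmetic directly and clamp the day to the target month's length (calendar.monthrange also takes care of leap years). This is an illustrative alternative rather than the fix hinted at above.
import calendar
import datetime

def add_months(datestr, months):
    date = datetime.date.fromisoformat(datestr)
    total = date.month - 1 + int(months)
    year, month = date.year + total // 12, total % 12 + 1
    day = min(date.day, calendar.monthrange(year, month)[1])   # e.g. Jan 31 -> Feb 28/29
    return date.replace(year=year, month=month, day=day).isoformat()

print(add_months("2022-12-30", "3"))   # 2023-03-30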
|
How to set a variable, that isnt iter variable, to increases in each iteration and doesnt always return to its value prior to its entry into for loop?
|
import re, datetime
def add_months(datestr, months):
ref_year, ref_month = "", ""
ref_year_is_leap_year = False
aux_date = str(datetime.datetime.strptime(datestr, "%Y-%m-%d"))
print(repr(aux_date))
for i_month in range(int(months)):
# I add a unit since the months are "numerical quantities",
# that is, they are expressed in natural numbers, so I need it
# to start from 1 and not from 0 like the iter variable in python
i_month = i_month + 1
m1 = re.search(
r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})",
aux_date,
re.IGNORECASE,
)
if m1:
ref_year, ref_month = (
str(m1.groups()[0]).strip(),
str(m1.groups()[1]).strip(),
)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
if (
int(ref_year) % 4 == 0
and int(ref_year) % 100 == 0
and int(ref_year) % 400 != 0
):
            ref_year_is_leap_year = True # divisible by 4 and by 100 and not by 400, to determine whether the year is a leap year
if ref_year_is_leap_year == True and ref_month == "02":
n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
aux_date = (
datetime.datetime.strptime(datestr, "%Y-%m-%d")
+ datetime.timedelta(days=int(n_days_in_this_i_month))
).strftime("%Y-%m-%d")
print(repr(aux_date))
return aux_date
print(repr(add_months("2022-12-30", "3")))
Why does the aux_date variable, instead of progressively increasing the number of days of the elapsed months, only limit itself to adding 31 days of that month of January, and then add them back to the original amount, staying stuck there instead of advancing each iteration of this for loop?
The objective of this for loop is to achieve an incremental iteration loop where the days are added and not one that always returns to the original amount to add the same content over and over again.
Updated function Algorithm
In this edit I have modified some details and redundancies, and also fixed some bugs that are present in the original code.
def add_months(datestr, months):
ref_year, ref_month = "", ""
    ref_year_is_leap_year = False # boolean flag indicating whether or not the reference year is a leap year
aux_date = datetime.datetime.strptime(datestr, "%Y-%m-%d")
for i_month in range(int(months)):
i_month = i_month + 1 # I add a unit since the months are "numerical quantities", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python
m1 = re.search( r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})", str(aux_date), re.IGNORECASE, )
if m1:
ref_year, ref_month = ( str(m1.groups()[0]).strip(), str( int(m1.groups()[1]) + 1).strip(), )
if( len(ref_month) == 1 ): ref_month = "0" + ref_month
if( int(ref_month) > 12 ): ref_month = "01"
print(ref_month)
number_of_days_in_each_month = {
"01": "31",
"02": "28",
"03": "31",
"04": "30",
"05": "31",
"06": "30",
"07": "31",
"08": "31",
"09": "30",
"10": "31",
"11": "30",
"12": "31",
}
n_days_in_this_i_month = number_of_days_in_each_month[ref_month]
        if ( int(ref_year) % 4 == 0 and int(ref_year) % 100 != 0 ) or ( int(ref_year) % 400 == 0 ): ref_year_is_leap_year = True # divisible by 4 and not by 100, or divisible by 400: a leap year
if ref_year_is_leap_year == True and ref_month == "02": n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29
print(n_days_in_this_i_month) # nro days to increment in each i month iteration
aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))
return datetime.datetime.strftime(aux_date, "%Y-%m-%d")
|
[
"Because at the end of every iteration of your for loop you are reconverting the value that is given in the parameter datestr and that value is never updated. You are also converting it to a string while trying to add a timedelta object. You should leave the value as a datetime object and convert to string once the for loop has finished if you still need to.\nJust change the variable used in the bottom assignment to aux_date to aux_date and remove all of the string conversions, that should at least get you going in the right direction.\nfor example:\nimport re, datetime\n\ndef add_months(datestr, months):\n ref_year, ref_month = \"\", \"\"\n ref_year_is_leap_year = False # condicional booleano, cuya logica binaria intenta establecer si es o no bisiesto el año tomado como referencia\n\n aux_date = datetime.datetime.strptime(datestr, \"%Y-%m-%d\")\n print(repr(aux_date))\n\n for i_month in range(int(months)):\n\n i_month = (\n i_month + 1\n ) # I add a unit since the months are \"numerical quantities\", that is, they are expressed in natural numbers, so I need it to start from 1 and not from 0 like the iter variable in python\n\n m1 = re.search(\n r\"(?P<year>\\d*)-(?P<month>\\d{2})-(?P<startDay>\\d{2})\",\n str(aux_date),\n re.IGNORECASE,\n )\n if m1:\n ref_year, ref_month = (\n str(m1.groups()[0]).strip(),\n str(m1.groups()[1]).strip(),\n )\n\n number_of_days_in_each_month = {\n \"01\": \"31\",\n \"02\": \"28\",\n \"03\": \"31\",\n \"04\": \"30\",\n \"05\": \"31\",\n \"06\": \"30\",\n \"07\": \"31\",\n \"08\": \"31\",\n \"09\": \"30\",\n \"10\": \"31\",\n \"11\": \"30\",\n \"12\": \"31\",\n }\n\n n_days_in_this_i_month = number_of_days_in_each_month[ref_month]\n print(n_days_in_this_i_month) # nro days to increment in each i month iteration\n\n if (\n int(ref_year) % 4 == 0\n and int(ref_year) % 100 == 0\n and int(ref_year) % 400 != 0\n ):\n ref_year_is_leap_year = True # divisible entre 4 y 10 y no entre 400, para determinar que sea un año bisciesto\n if ref_year_is_leap_year == True and ref_month == \"02\":\n n_days_in_this_i_month = str(int(n_days_in_this_i_month) + 1) # 28 --> 29\n\n aux_date = aux_date + datetime.timedelta(days=int(n_days_in_this_i_month))\n print(repr(aux_date))\n return datetime.datetime.strftime(aux_date, \"%Y-%m-%d\")\n\n\nprint(repr(add_months(\"2022-12-30\", \"3\")))\n\n\nOutput:\ndatetime.datetime(2022, 12, 30, 0, 0)\n31\ndatetime.datetime(2023, 1, 30, 0, 0)\n31\ndatetime.datetime(2023, 3, 2, 0, 0)\n31\ndatetime.datetime(2023, 4, 2, 0, 0)\ndatetime.datetime(2023, 4, 2, 0, 0)\n'2023-04-02'\n\n",
"So, as Alexander's answer already establishes, you weren't updating the date, so you were always adding to the same beginning date on each iteration. I took the liberty to clean up your code, using regex and converting to strings and back and for with the int's is the totally wrong approach here -- it misses the entire point of date-time objects, which is to encapsulate the information in a date. Just use those objects, not strings. Here is the same approach as your code using only datetime.datetime objects:\nimport datetime\n\ndef add_months(datestr, months):\n\n number_of_days_in_each_month = {\n 1 : 31,\n 2 : 28,\n 3 : 31,\n 4: 30,\n 5: 31,\n 6: 30,\n 7: 31,\n 8: 31,\n 9: 30,\n 10: 31,\n 11: 30,\n 12: 31,\n }\n\n date = datetime.datetime.strptime(datestr, \"%Y-%m-%d\")\n is_leap_year = False\n\n for i_month in range(1, int(months) + 1):\n\n ref_year, ref_month = date.year, date.month\n\n n_days = number_of_days_in_each_month[ref_month]\n\n if (\n ref_year % 4 == 0\n and ref_year % 100 == 0\n and ref_year % 400 != 0\n ):\n is_leap_year = True # divisible entre 4 y 10 y no entre 400, para determinar que sea un año bisciesto\n\n if is_leap_year and ref_month == 2: # febrero\n n_days += 1 # 28 --> 29\n\n date += datetime.timedelta(days=n_days)\n\n\n return date.strftime(\"%Y-%m-%d\")\n\n\nprint(add_months(\"2022-12-30\", \"3\"))\n\nI also made some stylistic changes to variable names. This is an art not a science, naming variables, and it always comes down to subjective opinion, but may I humbly submit my opinion about more legible names.\nAlso note, you had a comment to the effect of:\n\nI need the iter variable to start from 1 and not from 0 like the iter\nvariable in python\n\nThe iterating variable starts where you tell it to start, given the iterable you iterate over. range(N) will always start at zero, but it doesn't have to. You could iterate over [1, 2, 3], or better yet, range(1, N + 1).\nNote!\nYour algorithm is not working quite how one might expect, the output one would naturally expect is 2023-03-30\nI'll give you a hint, though, think about precisely which month's days you need to add to the current month.... n_days = number_of_days_in_each_month[ref_month]....\n"
] |
[
2,
2
] |
[] |
[] |
[
"for_loop",
"loops",
"python",
"python_3.x",
"variables"
] |
stackoverflow_0074665124_for_loop_loops_python_python_3.x_variables.txt
|
Q:
PHP Session ID issues across domain and subdomain - Possible to have different session IDs for each domain and subdomain?
My customer has a domain, www.sample.com. I have completed a simple webapp for them and am planning to host it at webapp.sample.com. The sites are hosted on different servers (sample.com is on their own hosting, while webapp.sample.com is on my own AWS server). I am having trouble with the PHP session ID, as the browser treats the domain and subdomain as if they were the same server. Since these are different servers (and I don't have access to their web server), I am unable to log in to my webapp unless I clear the existing session ID from the main domain. Is there a way to have two separate PHP session IDs for the main domain and the subdomain?
Clearing the cookies/session ID from the main domain allows me to log in to my webapp. However, if I don't clear them, my webapp detects the existing session ID (from the main site) and fails to log in, as it is unable to access any data from it, since it is a different session ID from a different server.
A:
Yes, it is possible to have different session IDs for each domain and subdomain. This can be achieved by setting the session save path in the PHP configuration file (php.ini) to a unique value for each domain and subdomain. For example, for www.sample.com, the session save path can be set to '/var/www/sample/sessions/' and for webapp.sample.com, it can be set to '/var/www/webapp/sessions/'. This will ensure that the session data is stored in separate directories for each domain and subdomain and will prevent any conflicts between the sessions.
Additionally, you can use the session_name() function in your PHP code to specify a unique session name for each domain and subdomain. This will ensure that the session IDs generated for each domain and subdomain are unique and will prevent any conflicts between the sessions.
Overall, it is important to properly configure the PHP session settings in order to avoid any issues with session IDs across different domains and subdomains.
|
PHP Session ID issues across domain and subdomain - Possible to have different session IDs for each domain and subdomain?
|
My customer has a domain, www.sample.com. I have completed a simple webapp for them and am planning to host it at webapp.sample.com. The sites are hosted on different servers (sample.com is on their own hosting, while webapp.sample.com is on my own AWS server). I am having trouble with the PHP session ID, as the browser treats the domain and subdomain as if they were the same server. Since these are different servers (and I don't have access to their web server), I am unable to log in to my webapp unless I clear the existing session ID from the main domain. Is there a way to have two separate PHP session IDs for the main domain and the subdomain?
Clearing the cookies/session ID from the main domain allows me to log in to my webapp. However, if I don't clear them, my webapp detects the existing session ID (from the main site) and fails to log in, as it is unable to access any data from it, since it is a different session ID from a different server.
|
[
"Yes, it is possible to have different session IDs for each domain and subdomain. This can be achieved by setting the session save path in the PHP configuration file (php.ini) to a unique value for each domain and subdomain. For example, for www.sample.com, the session save path can be set to '/var/www/sample/sessions/' and for webapp.sample.com, it can be set to '/var/www/webapp/sessions/'. This will ensure that the session data is stored in separate directories for each domain and subdomain and will prevent any conflicts between the sessions.\nAdditionally, you can use the session_name() function in your PHP code to specify a unique session name for each domain and subdomain. This will ensure that the session IDs generated for each domain and subdomain are unique and will prevent any conflicts between the sessions.\nOverall, it is important to properly configure the PHP session settings in order to avoid any issues with session IDs across different domains and subdomains.\n"
] |
[
0
] |
[] |
[] |
[
"cookies",
"html",
"php",
"session",
"sessionid"
] |
stackoverflow_0074664074_cookies_html_php_session_sessionid.txt
|
Q:
Bizarre ignoring of font size when canvas created dynamically - reposted
I've been requested to repost this question. It was first flagged as a duplicate of a CSS issue, but I don't use CSS. This is a JS anomaly.
These two codes should do the same thing.
But, the first with dynamic canvas ignores the font size, and scales the font bigger in larger window!
<html>
<body>
<script>
var n=0, c=[], ct=[]
for (n=0; n<2; n++){
c[n] = document.createElement('canvas');
c[n].id = "C"+n;
c[n].style.width = (400*n+70)+"px";
c[n].style.height = (400*n+70)+"px";
c[n].style.border = "2px solid";
document.body.appendChild(c[n]);
ct[n] = c[n].getContext("2d");
ct[n].font="30px Arial"
ct[n].fillText("Hello",3,30)
}
</script></body></html>
Here two canvas are created inline, and this time it works as expected
<html>
<body>
<canvas id="C0" width="70" height="70" style="border: 2px solid;"> </canvas>
<canvas id="C1" width="470" height="470" style="border: 2px solid;"> </canvas>
<script>
var c =[], ct=[]
c[0]=document.getElementById("C0")
ct[0] = c[0].getContext("2d")
c[1] = document.getElementById("C1")
ct[1] = c[1].getContext("2d")
ct[0].font="30px Arial"
ct[1].font="30px Arial"
ct[0].fillText("Hello",3,30)
ct[1].fillText("Hello",3,30)
</script> </body></html>
JS does have some strange flaws, and I would appreciate it if this question could be kept open for opinions, rather than swept under the carpet as 'a CSS issue'!
A:
It seems related to the confusion in
style.width vs width values
both codes below set the canvas size, but one will affect how text is displayed!
c[n].style.width = (400*n+70)+"px";
c[n].style.height = (400*n+70)+"px";
or
c[n].width = 400*n+70
c[n].height = 400*n+70
|
Bizarre ignoring of font size when canvas created dynamically - reposted
|
I've been requested to repost this question. It was first flagged as a duplicate of a CSS issue, but I don't use CSS. This is a JS anomaly.
These two codes should do the same thing.
But, the first with dynamic canvas ignores the font size, and scales the font bigger in larger window!
<html>
<body>
<script>
var n=0, c=[], ct=[]
for (n=0; n<2; n++){
c[n] = document.createElement('canvas');
c[n].id = "C"+n;
c[n].style.width = (400*n+70)+"px";
c[n].style.height = (400*n+70)+"px";
c[n].style.border = "2px solid";
document.body.appendChild(c[n]);
ct[n] = c[n].getContext("2d");
ct[n].font="30px Arial"
ct[n].fillText("Hello",3,30)
}
</script></body></html>
Here two canvas are created inline, and this time it works as expected
<html>
<body>
<canvas id="C0" width="70" height="70" style="border: 2px solid;"> </canvas>
<canvas id="C1" width="470" height="470" style="border: 2px solid;"> </canvas>
<script>
var c =[], ct=[]
c[0]=document.getElementById("C0")
ct[0] = c[0].getContext("2d")
c[1] = document.getElementById("C1")
ct[1] = c[1].getContext("2d")
ct[0].font="30px Arial"
ct[1].font="30px Arial"
ct[0].fillText("Hello",3,30)
ct[1].fillText("Hello",3,30)
</script> </body></html>
JS does have some strange flaws, and I would appreciate it if this question could be kept open for opinions, rather than swept under the carpet as 'a CSS issue'!
|
[
"It seems related to the confusion in\nstyle.width vs width values\nboth codes below set the canvas size, but one will affect how text is displayed!\n c[n].style.width = (400*n+70)+\"px\";\n c[n].style.height = (400*n+70)+\"px\";\n\nor\nc[n].width = 400*n+70\nc[n].height = 400*n+70\n\n"
] |
[
0
] |
[] |
[] |
[
"canvas",
"font_size"
] |
stackoverflow_0074665167_canvas_font_size.txt
|
Q:
Hibernate OneToMany ManyToOne on delete cascade
I'm trying to use Hibernate to map the following relationship:
Each order contains 2 images. When I delete an order I want the images gone as well.
I have two entities, OrderItems and Image and they look like this
public class OrderItems {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name="ID")
private Long id;
@Transient
private String language;
@OneToMany(fetch = FetchType.EAGER ,orphanRemoval = true, cascade = CascadeType.ALL, mappedBy = "order")
private List<Image> images ;
}
public class Image implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name="ID")
private Long id;
@Column(name = "IMAGE_NAME")
private String name;
@Column(name = "IMAGE_BYTES", unique = false, nullable = true, length = 1000000)
private byte[] image;
@ManyToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
@JoinColumn(name = "order_id" , nullable = false)
private OrderItems order;
}
Inserting new orders will also insert the corresponding images, but when I try to delete an order I get a foreign key constraint error from the Image table.
Am I missing something about Hibernate ? Shouldn't the attribute cascade = CascadeType.ALL do the trick ?
Thanks for taking the time to provide any feedback. Cheers
I already tried OneToMany and ManyToOne unidirectional and bidirectional but I get the same foreign key violation error or my images are not saved at all when I save a new order.
A:
Try like this
@OneToMany(mappedBy = "order", cascade = CascadeType.ALL, orphanRemoval = true)
@ManyToOne(optional = false, fetch = FetchType.EAGER)
@JoinColumn(name = "order_id", nullable = false)
A:
I solved the issue by using Spring to delete an order and automagically it also deleted the images corresponding to that order.
So my first approach of deleting orders by executing sql queries directly on the DB was the issue.
|
Hibernate OneToMany ManyToOne on delete cascade
|
I'm trying to use Hibernate to map the following relationship:
Each order contains 2 images. When I delete an order I want the images gone as well.
I have two entities, OrderItems and Image and they look like this
public class OrderItems {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name="ID")
private Long id;
@Transient
private String language;
@OneToMany(fetch = FetchType.EAGER ,orphanRemoval = true, cascade = CascadeType.ALL, mappedBy = "order")
private List<Image> images ;
}
public class Image implements Serializable {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
@Column(name="ID")
private Long id;
@Column(name = "IMAGE_NAME")
private String name;
@Column(name = "IMAGE_BYTES", unique = false, nullable = true, length = 1000000)
private byte[] image;
@ManyToOne(fetch = FetchType.EAGER, cascade = CascadeType.ALL)
@JoinColumn(name = "order_id" , nullable = false)
private OrderItems order;
}
Inserting new orders will also insert the corresponding images, but when I try to delete an order I get a foreign key constraint error from the Image table.
Am I missing something about Hibernate ? Shouldn't the attribute cascade = CascadeType.ALL do the trick ?
Thanks for taking the time to provide any feedback. Cheers
I already tried OneToMany and ManyToOne unidirectional and bidirectional but I get the same foreign key violation error or my images are not saved at all when I save a new order.
|
[
"Try like this\n@OneToMany(mappedBy = \"order\", cascade = CascadeType.ALL, orphanRemoval = true)\n\n@ManyToOne(optional = false, fetch = FetchType.EAGER)\n@JoinColumn(name = \"order_id\", nullable = false)\n\n",
"I solved the issue by using Spring to delete an order and automagically it also deleted the images corresponding to that order.\nSo my first approach of deleting orders by executing sql queries directly on the DB was the issue.\n"
] |
[
0,
0
] |
[] |
[] |
[
"hibernate",
"java",
"many_to_one",
"one_to_many"
] |
stackoverflow_0074657805_hibernate_java_many_to_one_one_to_many.txt
|
Q:
C# and SIMD: High and low speedups. What is happening?
Introduction of the problem
I am trying to speed up the intersection code of a (2d) ray tracer that I am writing. I am using C# and the System.Numerics library to bring the speed of SIMD instructions.
The problem is that I am getting strange results, with over-the-roof speedups and rather low speedups. My question is, why is one over-the-roof whereas the other is rather low?
Context:
The RayPack struct is a series of (different) rays, packed in Vectors of System.Numerics.
The BoundingBoxPack and CirclePack struct is a single bb / circle, packed in vectors of System.Numerics.
The CPU used is an i7-4710HQ (Haswell), with SSE 4.2, AVX(2), and FMA(3) instructions.
Running in release mode (64 bit). The project runs in .Net Framework 472. No additional options set.
Attempts
I've already tried looking up whether some operations may or may not be properly supported (Take note: these are for c++. https://fgiesen.wordpress.com/2016/04/03/sse-mind-the-gap/ or http://sci.tuomastonteri.fi/programming/sse), but it seems that is not the case because the laptop I work on supports SSE 4.2.
In the current code, the following changes are applied:
Using more proper instructions (packed min, for example).
Not using the float * vector instruction (causes a lot of additional operations, see the assembly of the original).
Code ... snippets?
Apologies for the large amount of code, but I am not sure how we can discuss this concretely without this amount of code.
Code of Ray -> BoundingBox
public bool CollidesWith(Ray ray, out float t)
{
// https://gamedev.stackexchange.com/questions/18436/most-efficient-aabb-vs-ray-collision-algorithms
// r.dir is unit direction vector of ray
float dirfracx = 1.0f / ray.direction.X;
float dirfracy = 1.0f / ray.direction.Y;
// lb is the corner of AABB with minimal coordinates - left bottom, rt is maximal corner
// r.org is origin of ray
float t1 = (this.rx.min - ray.origin.X) * dirfracx;
float t2 = (this.rx.max - ray.origin.X) * dirfracx;
float t3 = (this.ry.min - ray.origin.Y) * dirfracy;
float t4 = (this.ry.max - ray.origin.Y) * dirfracy;
float tmin = Math.Max(Math.Min(t1, t2), Math.Min(t3, t4));
float tmax = Math.Min(Math.Max(t1, t2), Math.Max(t3, t4));
// if tmax < 0, ray (line) is intersecting AABB, but the whole AABB is behind us
if (tmax < 0)
{
t = tmax;
return false;
}
// if tmin > tmax, ray doesn't intersect AABB
if (tmin > tmax)
{
t = tmax;
return false;
}
t = tmin;
return true;
}
Code of RayPack -> BoundingBoxPack
public Vector<int> CollidesWith(ref RayPack ray, out Vector<float> t)
{
// ------------------------------------------------------- \\
// compute the collision. \\
Vector<float> dirfracx = Constants.ones / ray.direction.x;
Vector<float> dirfracy = Constants.ones / ray.direction.y;
Vector<float> t1 = (this.rx.min - ray.origin.x) * dirfracx;
Vector<float> t2 = (this.rx.max - ray.origin.x) * dirfracx;
Vector<float> t3 = (this.ry.min - ray.origin.y) * dirfracy;
Vector<float> t4 = (this.ry.max - ray.origin.y) * dirfracy;
Vector<float> tmin = Vector.Max(Vector.Min(t1, t2), Vector.Min(t3, t4));
Vector<float> tmax = Vector.Min(Vector.Max(t1, t2), Vector.Max(t3, t4));
Vector<int> lessThanZeroMask = Vector.GreaterThan(tmax, Constants.zeros);
Vector<int> greaterMask = Vector.LessThan(tmin, tmax);
Vector<int> combinedMask = Vector.BitwiseOr(lessThanZeroMask, greaterMask);
// ------------------------------------------------------- \\
// Keep track of the t's that collided. \\
t = Vector.ConditionalSelect(combinedMask, tmin, Constants.floatMax);
return combinedMask;
}
Code of Ray -> Circle
public bool Intersect(Circle other)
{
// Step 0: Work everything out on paper!
// Step 1: Gather all the relevant data.
float ox = this.origin.X;
float dx = this.direction.X;
float oy = this.origin.Y;
float dy = this.direction.Y;
float x0 = other.origin.X;
float y0 = other.origin.Y;
float cr = other.radius;
// Step 2: compute the substitutions.
float p = ox - x0;
float q = oy - y0;
float r = 2 * p * dx;
float s = 2 * q * dy;
// Step 3: compute the substitutions, check if there is a collision.
float a = dx * dx + dy * dy;
float b = r + s;
float c = p * p + q * q - cr * cr;
float DSqrt = b * b - 4 * a * c;
// no collision possible! Commented out to make the benchmark more fair
//if (DSqrt < 0)
//{ return false; }
// Step 4: compute the substitutions.
float D = (float)Math.Sqrt(DSqrt);
float t0 = (-b + D) / (2 * a);
float t1 = (-b - D) / (2 * a);
float ti = Math.Min(t0, t1);
if(ti > 0 && ti < t)
{
t = ti;
return true;
}
return false;
}
Code of RayPack -> CirclePack
Original, unedited, code can be found at: https://pastebin.com/87nYgZrv
public Vector<int> Intersect(CirclePack other)
{
// ------------------------------------------------------- \\
// Put all the data on the stack. \\
Vector<float> zeros = Constants.zeros;
Vector<float> twos = Constants.twos;
Vector<float> fours = Constants.fours;
// ------------------------------------------------------- \\
// Compute whether the ray collides with the circle. This \\
// is stored in the 'mask' vector. \\
Vector<float> p = this.origin.x - other.origin.x; ;
Vector<float> q = this.origin.y - other.origin.y;
Vector<float> r = twos * p * this.direction.x;
Vector<float> s = twos * q * this.direction.y; ;
Vector<float> a = this.direction.x * this.direction.x + this.direction.y * this.direction.y;
Vector<float> b = r + s;
Vector<float> c = p * p + q * q - other.radius * other.radius;
Vector<float> DSqrt = b * b - fours * a * c;
Vector<int> maskCollision = Vector.GreaterThan(DSqrt, zeros);
// Commented out to make the benchmark more fair.
//if (Vector.Dot(maskCollision, maskCollision) == 0)
//{ return maskCollision; }
// ------------------------------------------------------- \\
// Update t if and only if there is a collision. Take \\
// note of the conditional where we compute t. \\
Vector<float> D = Vector.SquareRoot(DSqrt);
Vector<float> bMinus = Vector.Negate(b);
Vector<float> twoA = twos * a;
Vector<float> t0 = (bMinus + D) / twoA;
Vector<float> t1 = (bMinus - D) / twoA;
Vector<float> tm = Vector.ConditionalSelect(Vector.LessThan(t1, t0), t1, t0);
Vector<int> maskBiggerThanZero = Vector.GreaterThan(tm, zeros);
Vector<int> maskSmallerThanT = Vector.LessThan(tm, this.t);
Vector<int> mask = Vector.BitwiseAnd(
maskCollision,
Vector.BitwiseAnd(
maskBiggerThanZero,
maskSmallerThanT)
);
this.t = Vector.ConditionalSelect(
mask, // the bit mask that allows us to choose.
tm, // the smallest of the t's.
t); // if the bit mask is false (0), then we get our original t.
return mask;
}
Assembly code
These can be found on pastebin. Take note that there is some boilerplate assembly from the benchmark tool. You need to look at the function calls.
BoundingBox(Pack): https://pastebin.com/RYMQdZMh
Circle(Pack) Tweaked: https://pastebin.com/YZHjc1vY
Circle(Pack) Original: https://pastebin.com/87nYgZrv
Benchmarking
I've been benchmarking the situation with BenchmarkDotNet.
Results for Circle / CirclePack (updated):
Method             | Mean     | Error     | StdDev
Intersection       | 9.710 ms | 0.0540 ms | 0.0505 ms
IntersectionPacked | 3.296 ms | 0.0055 ms | 0.0051 ms
Results for BoundingBox / BoundingBoxPacked:
Method             | Mean      | Error     | StdDev
Intersection       | 24.269 ms | 0.2663 ms | 0.2491 ms
IntersectionPacked | 1.152 ms  | 0.0229 ms | 0.0264 ms
Due to AVX, a speedup of roughly 6x-8x is expected. The speedup of the boundingbox is significant, whereas the speedup of the circle is rather low.
Revisiting the question at the top: Why is one speedup over-the-roof and the other rather low? And how can the lower of the two (CirclePack) become faster?
Edit(s) with regard to Peter Cordes (comments)
Made the benchmark more fair: the single ray version does not early-branch-out as soon as the ray can no longer collide. Now the speedup is roughly 2.5x.
Added the assembly code as a separate header.
With regard to the square root: This does have impact, but not as much as it seems. Removing the vector square root reduces the total time with about 0.3ms. The single ray code now always performs the square root too.
Question about FMA (Fused Multiply Addition) in C#. I think it does for scalars (see Can C# make use of fused multiply-add?), but I haven't found a similar operation within the System.Numerics.Vector struct.
About a C# instruction for packed min: Yes it does. Silly me. I even used it already.
A:
It looks like you are using the min and max methods of the Vector class incorrectly. The min and max methods of the Vector class are not mathematical min and max functions, but rather methods that return a new vector with the minimum or maximum of each pair of corresponding elements from the two input vectors.
In order to compute the minimum or maximum of the elements of a single vector, you can use the Min and Max methods of the Vector<T>. These methods take a single vector as input and return a scalar value with the minimum or maximum of the vector's elements.
For example, the following code computes the minimum and maximum of the elements of a vector:
Vector<float> vec = ...;
float minValue = Vector<float>.Min(vec);
float maxValue = Vector<float>.Max(vec);
You can use these methods to compute the minimum and maximum values of the t1, t2, t3, and t4 vectors in your CollidesWith method.
Additionally, you may want to consider using the Vector.ConditionalSelect method to select the minimum and maximum values of t1, t2, t3, and t4 in a single operation. This method takes three vectors as input: a condition vector, a true vector, and a false vector. It returns a new vector with the elements from the true vector if the corresponding element in the condition vector is true, and the elements from the false vector otherwise.
For example, the following code uses the Vector.ConditionalSelect method to compute the minimum of t1 and t2, and the maximum of t3 and t4:
Vector<float> minT1T2 = Vector.ConditionalSelect(t1 < t2, t1, t2);
Vector<float> maxT3T4 = Vector.ConditionalSelect(t3 > t4, t3, t4);
You can then use the Vector.
A:
It is difficult to say for sure without more information, but it is possible that the low speedups you are seeing are due to the overhead of using SIMD instructions in C#. SIMD instructions can be very fast when used in low-level languages like C++ that are designed to take advantage of them, but in higher-level languages like C#, the performance benefits of SIMD may be less pronounced due to the additional overhead of using them. Additionally, the System.Numerics library in C# may not be optimized for SIMD instructions, which could also be contributing to the low speedups you are seeing. It is also possible that the specific operations you are performing in your code may not be well-suited to SIMD instructions, which could be leading to the low speedups.
|
C# and SIMD: High and low speedups. What is happening?
|
Introduction of the problem
I am trying to speed up the intersection code of a (2d) ray tracer that I am writing. I am using C# and the System.Numerics library to bring the speed of SIMD instructions.
The problem is that I am getting strange results, with over-the-roof speedups and rather low speedups. My question is, why is one over-the-roof whereas the other is rather low?
Context:
The RayPack struct is a series of (different) rays, packed in Vectors of System.Numerics.
The BoundingBoxPack and CirclePack struct is a single bb / circle, packed in vectors of System.Numerics.
The CPU used is an i7-4710HQ (Haswell), with SSE 4.2, AVX(2), and FMA(3) instructions.
Running in release mode (64 bit). The project runs in .Net Framework 472. No additional options set.
Attempts
I've already tried looking up whether some operations may or may not be properly supported (Take note: these are for c++. https://fgiesen.wordpress.com/2016/04/03/sse-mind-the-gap/ or http://sci.tuomastonteri.fi/programming/sse), but it seems that is not the case because the laptop I work on supports SSE 4.2.
In the current code, the following changes are applied:
Using more proper instructions (packed min, for example).
Not using the float * vector instruction (causes a lot of additional operations, see the assembly of the original).
Code ... snippets?
Apologies for the large amount of code, but I am not sure how we can discuss this concretely without this amount of code.
Code of Ray -> BoundingBox
public bool CollidesWith(Ray ray, out float t)
{
// https://gamedev.stackexchange.com/questions/18436/most-efficient-aabb-vs-ray-collision-algorithms
// r.dir is unit direction vector of ray
float dirfracx = 1.0f / ray.direction.X;
float dirfracy = 1.0f / ray.direction.Y;
// lb is the corner of AABB with minimal coordinates - left bottom, rt is maximal corner
// r.org is origin of ray
float t1 = (this.rx.min - ray.origin.X) * dirfracx;
float t2 = (this.rx.max - ray.origin.X) * dirfracx;
float t3 = (this.ry.min - ray.origin.Y) * dirfracy;
float t4 = (this.ry.max - ray.origin.Y) * dirfracy;
float tmin = Math.Max(Math.Min(t1, t2), Math.Min(t3, t4));
float tmax = Math.Min(Math.Max(t1, t2), Math.Max(t3, t4));
// if tmax < 0, ray (line) is intersecting AABB, but the whole AABB is behind us
if (tmax < 0)
{
t = tmax;
return false;
}
// if tmin > tmax, ray doesn't intersect AABB
if (tmin > tmax)
{
t = tmax;
return false;
}
t = tmin;
return true;
}
Code of RayPack -> BoundingBoxPack
public Vector<int> CollidesWith(ref RayPack ray, out Vector<float> t)
{
// ------------------------------------------------------- \\
// compute the collision. \\
Vector<float> dirfracx = Constants.ones / ray.direction.x;
Vector<float> dirfracy = Constants.ones / ray.direction.y;
Vector<float> t1 = (this.rx.min - ray.origin.x) * dirfracx;
Vector<float> t2 = (this.rx.max - ray.origin.x) * dirfracx;
Vector<float> t3 = (this.ry.min - ray.origin.y) * dirfracy;
Vector<float> t4 = (this.ry.max - ray.origin.y) * dirfracy;
Vector<float> tmin = Vector.Max(Vector.Min(t1, t2), Vector.Min(t3, t4));
Vector<float> tmax = Vector.Min(Vector.Max(t1, t2), Vector.Max(t3, t4));
Vector<int> lessThanZeroMask = Vector.GreaterThan(tmax, Constants.zeros);
Vector<int> greaterMask = Vector.LessThan(tmin, tmax);
Vector<int> combinedMask = Vector.BitwiseOr(lessThanZeroMask, greaterMask);
// ------------------------------------------------------- \\
// Keep track of the t's that collided. \\
t = Vector.ConditionalSelect(combinedMask, tmin, Constants.floatMax);
return combinedMask;
}
Code of Ray -> Circle
public bool Intersect(Circle other)
{
// Step 0: Work everything out on paper!
// Step 1: Gather all the relevant data.
float ox = this.origin.X;
float dx = this.direction.X;
float oy = this.origin.Y;
float dy = this.direction.Y;
float x0 = other.origin.X;
float y0 = other.origin.Y;
float cr = other.radius;
// Step 2: compute the substitutions.
float p = ox - x0;
float q = oy - y0;
float r = 2 * p * dx;
float s = 2 * q * dy;
// Step 3: compute the substitutions, check if there is a collision.
float a = dx * dx + dy * dy;
float b = r + s;
float c = p * p + q * q - cr * cr;
float DSqrt = b * b - 4 * a * c;
// no collision possible! Commented out to make the benchmark more fair
//if (DSqrt < 0)
//{ return false; }
// Step 4: compute the substitutions.
float D = (float)Math.Sqrt(DSqrt);
float t0 = (-b + D) / (2 * a);
float t1 = (-b - D) / (2 * a);
float ti = Math.Min(t0, t1);
if(ti > 0 && ti < t)
{
t = ti;
return true;
}
return false;
}
Code of RayPack -> CirclePack
Original, unedited, code can be found at: https://pastebin.com/87nYgZrv
public Vector<int> Intersect(CirclePack other)
{
// ------------------------------------------------------- \\
// Put all the data on the stack. \\
Vector<float> zeros = Constants.zeros;
Vector<float> twos = Constants.twos;
Vector<float> fours = Constants.fours;
// ------------------------------------------------------- \\
// Compute whether the ray collides with the circle. This \\
// is stored in the 'mask' vector. \\
Vector<float> p = this.origin.x - other.origin.x;
Vector<float> q = this.origin.y - other.origin.y;
Vector<float> r = twos * p * this.direction.x;
Vector<float> s = twos * q * this.direction.y;
Vector<float> a = this.direction.x * this.direction.x + this.direction.y * this.direction.y;
Vector<float> b = r + s;
Vector<float> c = p * p + q * q - other.radius * other.radius;
Vector<float> DSqrt = b * b - fours * a * c;
Vector<int> maskCollision = Vector.GreaterThan(DSqrt, zeros);
// Commented out to make the benchmark more fair.
//if (Vector.Dot(maskCollision, maskCollision) == 0)
//{ return maskCollision; }
// ------------------------------------------------------- \\
// Update t if and only if there is a collision. Take \\
// note of the conditional where we compute t. \\
Vector<float> D = Vector.SquareRoot(DSqrt);
Vector<float> bMinus = Vector.Negate(b);
Vector<float> twoA = twos * a;
Vector<float> t0 = (bMinus + D) / twoA;
Vector<float> t1 = (bMinus - D) / twoA;
Vector<float> tm = Vector.ConditionalSelect(Vector.LessThan(t1, t0), t1, t0);
Vector<int> maskBiggerThanZero = Vector.GreaterThan(tm, zeros);
Vector<int> maskSmallerThanT = Vector.LessThan(tm, this.t);
Vector<int> mask = Vector.BitwiseAnd(
maskCollision,
Vector.BitwiseAnd(
maskBiggerThanZero,
maskSmallerThanT)
);
this.t = Vector.ConditionalSelect(
mask, // the bit mask that allows us to choose.
tm, // the smallest of the t's.
t); // if the bit mask is false (0), then we get our original t.
return mask;
}
Assembly code
These can be found on pastebin. Take note that there is some boilerplate assembly from the benchmark tool. You need to look at the function calls.
BoundingBox(Pack): https://pastebin.com/RYMQdZMh
Circle(Pack) Tweaked: https://pastebin.com/YZHjc1vY
Circle(Pack) Original: https://pastebin.com/87nYgZrv
Benchmarking
I've been benchmarking the situation with BenchmarkDotNet.
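For reference, a minimal BenchmarkDotNet harness of this kind looks roughly like the sketch below; the type and member names are illustrative, not the project's actual ones:
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class IntersectionBenchmarks
{
    [GlobalSetup]
    public void Setup()
    {
        // build the rays, circles, bounding boxes and their packed variants here
    }

    [Benchmark]
    public void Intersection() { /* scalar loop over single rays */ }

    [Benchmark]
    public void IntersectionPacked() { /* SIMD loop over RayPacks */ }
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<IntersectionBenchmarks>();
}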
Results for Circle / CirclePack (updated):
Method             | Mean     | Error     | StdDev
Intersection       | 9.710 ms | 0.0540 ms | 0.0505 ms
IntersectionPacked | 3.296 ms | 0.0055 ms | 0.0051 ms
Results for BoundingBox / BoundingBoxPacked:
Method             | Mean      | Error     | StdDev
Intersection       | 24.269 ms | 0.2663 ms | 0.2491 ms
IntersectionPacked | 1.152 ms  | 0.0229 ms | 0.0264 ms
Due to AVX, a speedup of roughly 6x-8x is expected. The speedup of the bounding box is significant, whereas the speedup of the circle is rather low.
Revisiting the question at the top: why is one speedup through-the-roof and the other rather low? And how can the lower of the two (CirclePack) become faster?
Edit(s) with regard to Peter Cordes (comments)
Made the benchmark more fair: the single ray version does not early-branch-out as soon as the ray can no longer collide. Now the speedup is roughly 2.5x.
Added the assembly code as a separate header.
With regard to the square root: this does have an impact, but not as much as it seems. Removing the vector square root reduces the total time by about 0.3 ms. The single-ray code now always performs the square root too.
Question about FMA (fused multiply-add) in C#: I think C# does use it for scalars (see Can C# make use of fused multiply-add?), but I haven't found a similar operation within the System.Numerics.Vector struct.
About a C# instruction for packed min: yes, it exists. Silly me. I even used it already.
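For reference, a sketch of how FMA can be written explicitly with hardware intrinsics; this requires System.Runtime.Intrinsics (.NET Core 3.0+ / .NET 5+), so it is not available on the .NET Framework 4.7.2 target mentioned above, and the fallback assumes AVX is present (as on the Haswell CPU above):
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

static class FmaSketch
{
    // Computes (a * b) + c per element, in one instruction when FMA3 is available.
    public static Vector256<float> MulAdd(Vector256<float> a, Vector256<float> b, Vector256<float> c)
    {
        if (Fma.IsSupported)
            return Fma.MultiplyAdd(a, b, c);

        // Fallback: separate multiply and add.
        return Avx.Add(Avx.Multiply(a, b), c);
    }
}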
|
[
"It looks like you are using the min and max methods of the Vector class incorrectly. The min and max methods of the Vector class are not mathematical min and max functions, but rather methods that return a new vector with the minimum or maximum of each pair of corresponding elements from the two input vectors.\nIn order to compute the minimum or maximum of the elements of a single vector, you can use the Min and Max methods of the Vector<T>. These methods take a single vector as input and return a scalar value with the minimum or maximum of the vector's elements.\nFor example, the following code computes the minimum and maximum of the elements of a vector:\nVector<float> vec = ...;\nfloat minValue = Vector<float>.Min(vec);\nfloat maxValue = Vector<float>.Max(vec);\n\nYou can use these methods to compute the minimum and maximum values of the t1, t2, t3, and t4 vectors in your CollidesWith method.\nAdditionally, you may want to consider using the Vector.ConditionalSelect method to select the minimum and maximum values of t1, t2, t3, and t4 in a single operation. This method takes three vectors as input: a condition vector, a true vector, and a false vector. It returns a new vector with the elements from the true vector if the corresponding element in the condition vector is true, and the elements from the false vector otherwise.\nFor example, the following code uses the Vector.ConditionalSelect method to compute the minimum of t1 and t2, and the maximum of t3 and t4:\nVector<float> minT1T2 = Vector.ConditionalSelect(t1 < t2, t1, t2);\nVector<float> maxT3T4 = Vector.ConditionalSelect(t3 > t4, t3, t4);\n\nYou can then use the Vector.\n",
"It is difficult to say for sure without more information, but it is possible that the low speedups you are seeing are due to the overhead of using SIMD instructions in C#. SIMD instructions can be very fast when used in low-level languages like C++ that are designed to take advantage of them, but in higher-level languages like C#, the performance benefits of SIMD may be less pronounced due to the additional overhead of using them. Additionally, the System.Numerics library in C# may not be optimized for SIMD instructions, which could also be contributing to the low speedups you are seeing. It is also possible that the specific operations you are performing in your code may not be well-suited to SIMD instructions, which could be leading to the low speedups.\n"
] |
[
0,
0
] |
[] |
[] |
[
"avx",
"c#",
"performance",
"simd",
"x86_64"
] |
stackoverflow_0056951793_avx_c#_performance_simd_x86_64.txt
|
Q:
Visual Studio Code distorts ANSI characters
I have problems with Swedish national characters when using Rust in Visual Studio Code in Windows 11. It can be shown with the following program:
fn main() {
let abc = " ååå
ööö
äää";
println!("<---{}--->", abc);
}
When the program is run from the command line using "cargo run", the output is as follows:
<--- ååå
ööö
äää--->
Strangely, spaces are added at the beginning of lines 2 and 3. However, when the program is run in Visual Studio Code, the Swedish characters get distorted.
<--- ååå
├╢├╢├╢
äää--->
How can I solve it? I work with text processing and this is a major problem.
EDIT: Since the problem doesn't appear on many systems, I add the technical data: Windows 11 Pro Version 10.0.22621 Build 22621;
Visual Studio Code Version: 1.73.1 (user setup) Date: 2022-11-09 Chromium: 102.0.5005.167 Node.js: 16.14.2 Sandboxed: No.
The VSC terminal seems to use cmd since all cmd commands work.
EDIT 2: The problem is solved by adding the following:
"terminal.integrated.profiles.windows": {
"PowerShell": {
"source": "PowerShell",
"icon": "terminal-powershell",
"args": [
"-NoExit",
"/c",
"chcp.com 65001"
]
},
},
as the first parameter in settings.json and saving the changes.
A:
- This answer is Windows specific. -
INFO: This answer describes how you can change your VSCode settings to force UTF-8 in your console. An alternative to this answer would be to force UTF-8 system-wide, as described here: Using UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows Powershell (Windows 10)
It seems that sometimes the Windows shell doesn't use the correct UTF-8 code page.
You can tell VSCode to force a codepage in its shell using the following settings.
Open the Settings page (Shortkey: Ctrl+,)
Click on the button on the top right whose mouse-over text reads "Open Settings (JSON)"
Add the following lines:
"terminal.integrated.profiles.windows": {
"PowerShell": {
"source": "PowerShell",
"icon": "terminal-powershell",
"args": [
"-NoExit",
"/c",
"chcp.com 65001"
]
},
"Command Prompt": {
"path": [
"${env:windir}\\Sysnative\\cmd.exe",
"${env:windir}\\System32\\cmd.exe"
],
"args": [
"/K",
"chcp 65001"
],
"icon": "terminal-cmd"
},
},
This will force the UTF-8 code page.
If it worked, opening a new shell should display Active code page: 65001.
Source: https://github.com/microsoft/vscode/issues/19837
Previous, deprecated settings:
If your shell is "CMD":
"terminal.integrated.shellArgs.windows": ["/K", "chcp 65001"],
If your shell is "Powershell":
"terminal.integrated.shellArgs.windows": ["-NoExit", "/c", "chcp.com 65001"],
|
Visual Studio Code distorts ANSI characters
|
I have problems with Swedish national characters when using Rust in Visual Studio Code in Windows 11. It can be shown with the following program:
fn main() {
let abc = " ååå
ööö
äää";
println!("<---{}--->", abc);
}
When the program is run from the command line using "cargo run", the output is as follows:
<--- ååå
ööö
äää--->
Strangely, spaces are added at the beginning of lines 2 and 3. However, when the program is run in Visual Studio Code, the Swedish characters get distorted.
<--- ååå
├╢├╢├╢
äää--->
How can I solve it? I work with text processing and this is a major problem.
EDIT: Since the problem doesn't appear on many systems, I add the technical data: Windows 11 Pro Version 10.0.22621 Build 22621;
Visual Studio Code Version: 1.73.1 (user setup) Date: 2022-11-09 Chromium: 102.0.5005.167 Node.js: 16.14.2 Sandboxed: No.
The VSC terminal seems to use cmd since all cmd commands work.
EDIT 2: The problem is solved by adding the following:
"terminal.integrated.profiles.windows": {
"PowerShell": {
"source": "PowerShell",
"icon": "terminal-powershell",
"args": [
"-NoExit",
"/c",
"chcp.com 65001"
]
},
},
as the first parameter in settings.json and saving the changes.
|
[
"- This answer is Windows specific. -\nINFO: This answer describes you how can change your VSCode settings to force UTF-8 in your console. An alternative to this answer would be to force UTF-8 system-wide, as described here: Using UTF-8 Encoding (CHCP 65001) in Command Prompt / Windows Powershell (Windows 10)\n\nIt seems that sometimes the Windows shell doesn't use the correct UTF-8 code page.\nYou can tell VSCode to force a codepage in its shell using the following settings.\n\nOpen the Settings page (Shortkey: Ctrl+,)\nClick on the button on the top right whose mouse-over text reads \"Open Settings (JSON)\"\nAdd the following lines:\n\n \"terminal.integrated.profiles.windows\": {\n \"PowerShell\": {\n \"source\": \"PowerShell\",\n \"icon\": \"terminal-powershell\",\n \"args\": [\n \"-NoExit\",\n \"/c\",\n \"chcp.com 65001\"\n ]\n },\n \"Command Prompt\": {\n \"path\": [\n \"${env:windir}\\\\Sysnative\\\\cmd.exe\",\n \"${env:windir}\\\\System32\\\\cmd.exe\"\n ],\n \"args\": [\n \"/K\",\n \"chcp 65001\"\n ],\n \"icon\": \"terminal-cmd\"\n },\n },\n\nThis will force the UTF-8 code page.\nIf it worked, opening a new shell should display Active code page: 65001.\nSource: https://github.com/microsoft/vscode/issues/19837\n\nPrevious, deprecated settings:\n\nIf your shell is \"CMD\":\n\"terminal.integrated.shellArgs.windows\": [\"/K\", \"chcp 65001\"],\n\n\nIf your shell is \"Powershell\":\n\"terminal.integrated.shellArgs.windows\": [\"-NoExit\", \"/c\", \"chcp.com 65001\"],\n\n\n\n"
] |
[
3
] |
[] |
[] |
[
"character_encoding",
"non_ascii_characters",
"rust",
"visual_studio_code"
] |
stackoverflow_0074665076_character_encoding_non_ascii_characters_rust_visual_studio_code.txt
|
Q:
Make a svg shape invert whats behind it
I am trying to create a SVG shape that inverts the color behind it. I played around with masking to get the SVG shape to the way I wanted it. The problem with my current code is that it only inverts the SVG rectangle I created inside the SVG element and I would like it to invert whatever is behind it. I tried to set the SVG rectangle inside the element to 0 opacity but that did not help. My current code looks like this:
<svg width="800px" height="600px">
<defs>
<filter id="invert">
<feComponentTransfer>
<feFuncR type="table" tableValues="1 0"/>
<feFuncG type="table" tableValues="1 0"/>
<feFuncB type="table" tableValues="1 0"/>
</feComponentTransfer>
</filter>
<mask id="mask-me">
<svg style="top: 50px" xmlns="http://www.w3.org/2000/svg" width="700" height="80" viewBox="0 0 700 80">
<g id="Rectangle_1" data-name="Rectangle 1" fill="rgba(255,255,255,0)" stroke="#b2bdce" stroke-linecap="round" stroke-width="3">
<rect width="700" height="80" rx="40" stroke="none"></rect>
<rect x="1.5" y="1.5" width="697" height="77" rx="38.5" fill="none"></rect>
</g>
</svg>
</mask>
</defs>
<!--the shape that gets inverted-->
<svg xmlns="http://www.w3.org/2000/svg" width="700" height="80" viewBox="0 0 700 80">
<rect id="Rectangle_5" data-name="Rectangle 5" width="700" height="80" fill="yellow"/>
</svg>
<!--the shape that inverts whats below it-->
<svg style="filter: invert(1); mix-blend-mode: difference;" mask="url(#mask-me)" filter="url(#invert)" xmlns="http://www.w3.org/2000/svg" width="700" height="80" viewBox="0 0 700 80">
<rect id="Rectangle_5" data-name="Rectangle 5" width="700" height="80" />
</svg>
</svg>
here is what I want:
Here is my codepen: https://codepen.io/Julius-olsson/pen/gOKKBKR
Any help would be appreciated.
A:
I think I have understood what you are trying to do, but I am not completely sure.
One problem you seem to have is that you are trying to use both an inverting filter and mix-blend-mode difference, which can both do similar things but here are almost certainly working against each other.
The following code has been pared down to the barest minimum to provide an example of what I think you are trying to do:
It provides an image to be inverted, in this case just a rectangle,
but it can be anything.
It provides an inverting image, that is anything overlapping the
image will be inverted, again in this case a rectangle.
The inverting image is masked by another image, so only the parts of
the inverting image that are in common with the mask actually cause
inversion.
It does not use a filter. It uses mix-blend-mode difference with its colour set to white; the difference from white is the opposite colour, i.e. inverted.
I have offset the inverted image and the inverting image just for show, so that the boundary of the effect can be readily seen.
.container {
padding: 10px;
border: 3px solid red;
background-color: wheat;
width: calc( 700px + 250px );
height: calc( 80px + 80px - 40px );
/* Stop any background colouring affecting the inversion */
isolation: isolate;
}
.inverted {
position: relative;
left: 0px;
top: 0px;
fill: yellow;
width: 700px;
height: 80px;
}
.inverter {
width: 700px;
height: 80px;
/* Set the inversion reference colour */
fill: white;
}
.masked {
position: relative;
left: 250px;
top: -40px;
/* Set up the inversion */
mix-blend-mode: difference;
/* Inversion only where the mask allows */
mask-image: url('data:image/svg+xml; utf8, <svg xmlns="http://www.w3.org/2000/svg" height="80px" viewBox="0 0 80 80"><circle cx="40" cy="40" r="40" /></svg>');
-webkit-mask-image: url('data:image/svg+xml; utf8, <svg xmlns="http://www.w3.org/2000/svg" height="80px" viewBox="0 0 80 80"><circle cx="40" cy="40" r="40" /></svg>');
}
<div class="container">
<!-- the shape that gets inverted -->
<svg class="inverted" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 700 80">
<rect width="700" height="80" />
</svg>
<!-- the shape that does the inverting -->
<div class="masked">
<svg class="inverter" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 700 80">
<rect width="700" height="80" />
</svg>
</div>
</div>
The bottom half of the circles are black because there is nothing there to invert, the background has been isolated, so difference from white is black.
[EDIT 20221204 Added example for sticky]
The code below shows an example of a sticky div which inverts material "behind" its background. It is using mix-blend-mode: difference, so if it has a border, which by definition will be a different colour from the background (if it is to be discernible), the material "behind" the border is not actually inverted but is differenced relative to the colour of the border.
An invert filter function applies to the selected element, not other material stacked with the element.
body {
font-size: 150%;
}
.container {
width: 900px;
height: 75vh;
overflow-y: scroll;
/* background-color: beige; */
isolation: isolate;
}
.sticky {
width: 700px;
padding: 20px;
text-align: center;
position: sticky;
top: 0px;
left: 180px;
border: 20px solid yellow;
font-size: 300%;
font-weight: bold;
mix-blend-mode: difference;
background-color: white;
}
svg,
img {
float: right;
width: 300px;
}
.background {
background-image: url('data:image/svg+xml; utf8, <svg xmlns="http://www.w3.org/2000/svg" height="80px" viewBox="0 0 80 80"><circle cx="40" cy="40" r="40" fill="wheat"/></svg>');
}
<body>
<div class="container">
<h1>A heading</h1>
<div class="textstuff">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="sticky">This is some sticky text with a border.</div>
<h1>A heading</h1>
<div class="textstuff" style="color: red;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: green;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<svg xmlns='http://www.w3.org/2000/svg' viewbox='65.60520541730344 -3.8540656494687013 800.3826769425817 539.7901985072457'><path fill='grey' d='M 156.40357183517773 23.19316100057085 L 83.97002318399646 188.27914171909507 L 518.4511031561435 60.897074118366035 L 799.3826769425817 214.44658030407507 L 304.1247347870089 -2.8540656494687013 L 593.7387174567936 199.93582818685073 L 773.3354502925422 66.72541735224387 L 625.6142873407109 92.7726440022834 L 428.65273673826925 127.50227953566946 L 379.41234908765887 136.184688419016 L 446.0175545049623 225.98305483689026 L 448.871620154431 530.1077896238992 L 509.768694272797 11.65668646775564 L 373.58400585378104 391.06903555541453 L 602.4211263401401 249.17621583746111 L 182.45079848521726 170.91432395240204 L 616.9318784573643 43.53225635167299 L 165.08598071852424 72.43354865118125 L 312.80714367035546 46.3863220011417 L 225.86284290194985 417.1162622054541 L 399.63123250382057 538.7901985072457 L 66.60520541730344 89.79836641787429 Z'/></svg>
<div class="textstuff" style="color: brown;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<h1>A heading</h1>
<div class="textstuff" style="color: red;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: green;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<img src='data:image/svg+xml; utf8, <svg xmlns="http://www.w3.org/2000/svg" height="80px" viewBox="0 0 160 160"><circle fill="pink" stroke="lime" stroke-width="10" cx="80" cy="80" r="70" /></svg>' />
<div class="textstuff" style="color: brown;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<h1>A heading</h1>
<div class="textstuff" style="color: red;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: green;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: brown;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<h1>A heading</h1>
<div class="background">
<div class="textstuff" style="color: red;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: green;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: brown;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
</div>
<h1>A heading</h1>
<div class="textstuff" style="color: red;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: green;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: brown;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<h1>A heading</h1>
<div class="textstuff" style="color: red;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: green;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
<div class="textstuff" style="color: brown;">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>
</div>
</body>
It will probably be informative to try different combinations of background colours and fill colours for various elements depending on how you want it to look in the end.
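A different technique worth knowing about (not used in the answer above) is backdrop-filter, which inverts whatever is painted behind an element directly; a mask then limits where the inversion applies. Browser support for mask/backdrop-filter varies, so treat this as a rough sketch rather than a drop-in solution:
.inverter-overlay {
  position: absolute;
  width: 700px;
  height: 80px;
  /* invert whatever is rendered behind this element */
  backdrop-filter: invert(1);
  -webkit-backdrop-filter: invert(1);
  /* restrict the effect to a rounded-rectangle shape */
  mask-image: url('data:image/svg+xml; utf8, <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 700 80"><rect width="700" height="80" rx="40"/></svg>');
  -webkit-mask-image: url('data:image/svg+xml; utf8, <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 700 80"><rect width="700" height="80" rx="40"/></svg>');
}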
|
Make a svg shape invert whats behind it
|
I am trying to create a SVG shape that inverts the color behind it. I played around with masking to get the SVG shape to the way I wanted it. The problem with my current code is that it only inverts the SVG rectangle I created inside the SVG element and I would like it to invert whatever is behind it. I tried to set the SVG rectangle inside the element to 0 opacity but that did not help. My current code looks like this:
<svg width="800px" height="600px">
<defs>
<filter id="invert">
<feComponentTransfer>
<feFuncR type="table" tableValues="1 0"/>
<feFuncG type="table" tableValues="1 0"/>
<feFuncB type="table" tableValues="1 0"/>
</feComponentTransfer>
</filter>
<mask id="mask-me">
<svg style="top: 50px" xmlns="http://www.w3.org/2000/svg" width="700" height="80" viewBox="0 0 700 80">
<g id="Rectangle_1" data-name="Rectangle 1" fill="rgba(255,255,255,0)" stroke="#b2bdce" stroke-linecap="round" stroke-width="3">
<rect width="700" height="80" rx="40" stroke="none"></rect>
<rect x="1.5" y="1.5" width="697" height="77" rx="38.5" fill="none"></rect>
</g>
</svg>
</mask>
</defs>
<!--the shape that gets inverted-->
<svg xmlns="http://www.w3.org/2000/svg" width="700" height="80" viewBox="0 0 700 80">
<rect id="Rectangle_5" data-name="Rectangle 5" width="700" height="80" fill="yellow"/>
</svg>
<!--the shape that inverts whats below it-->
<svg style="filter: invert(1); mix-blend-mode: difference;" mask="url(#mask-me)" filter="url(#invert)" xmlns="http://www.w3.org/2000/svg" width="700" height="80" viewBox="0 0 700 80">
<rect id="Rectangle_5" data-name="Rectangle 5" width="700" height="80" />
</svg>
</svg>
here is what I want:
Here is my codepen: https://codepen.io/Julius-olsson/pen/gOKKBKR
Any help would be appreciated.
|
[
"I think I have understood what you are trying to do, but I am not completely sure.\nOne problem you seem to have is that you are trying to use both an inverting filter and mix-blend-mode difference, which can both do similar things but here are almost certainly working against each other.\nThe following code has been pared down to the barest minimum to provide an example of what I think you are trying to do:\n\nIt provides an image to be inverted, in this case just a rectangle,\nbut it can be anything.\nIt provides an inverting image, that is anything overlapping the\nimage will be inverted, again in this case a rectangle.\nThe inverting image is masked by another image, so only the parts of\nthe inverting image that are in common with the mask actually cause\ninversion.\nIt does not use a filter. It uses mix-blend-mode difference with its\ncolour set to white, that is, a difference from white is the opposite,\nit is inverted.\n\nI have offset the inverted image and the inverting image just for show, so that the boundary of the effect can be readily seen.\n\n\n.container {\n padding: 10px;\n border: 3px solid red;\n background-color: wheat;\n\n width: calc( 700px + 250px );\n height: calc( 80px + 80px - 40px );\n\n /* Stop any bacground colouring affecting the inversion */\n isolation: isolate;\n}\n\n.inverted {\n position: relative;\n left: 0px;\n top: 0px;\n\n fill: yellow;\n\n width: 700px;\n height: 80px;\n}\n\n.inverter {\n width: 700px;\n height: 80px;\n\n /* Set the inversrion reference colour */\n fill: white;\n}\n\n.masked {\n position: relative;\n left: 250px;\n top: -40px;\n\n /* Set up the inversion */\n mix-blend-mode: difference;\n /* Inversion only where the mask allows */\n -mask-image: url('data:image/svg+xml; utf8, <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"80px\" viewBox=\"0 0 80 80\"><circle cx=\"40\" cy=\"40\" r=\"40\" /></svg>');\n -webkit-mask-image: url('data:image/svg+xml; utf8, <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"80px\" viewBox=\"0 0 80 80\"><circle cx=\"40\" cy=\"40\" r=\"40\" /></svg>');\n}\n<div class=\"container\">\n <!-- the shape that gets inverted -->\n <svg class=\"inverted\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 700 80\">\n <rect width=\"700\" height=\"80\" />\n </svg>\n <!-- the shape that does the inverting -->\n <div class=\"masked\">\n <svg class=\"inverter\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 700 80\">\n <rect width=\"700\" height=\"80\" />\n </svg>\n </div>\n</div>\n\n\n\nThe bottom half of the circles are black because there is nothing there to invert, the background has been isolated, so difference from white is black.\n[EDIT 22021204 Added example for sticky]\nThe coding below shows an example of a sticky div which inverts material \"behind\" its background. 
It is using mix-mode-difference, so if it has a border, which by definition will be a different colour from the background (if it is to be discernible) the material \"behind\" the border is not actually inverted but is differenced relative to the colour of the border.\nAn invert filter function applies to the selected element, not other material stacked with the element.\n\n\nbody {\n font-size: 150%;\n}\n\n.container {\n width: 900px;\n height: 75vh;\n overflow-y: scroll;\n /* background-color: beige; */\n isolation: isolate;\n}\n\n.sticky {\n width: 700px;\n padding: 20px;\n text-align: center;\n position: sticky;\n top: 0px;\n left: 180px;\n border: 20px solid yellow;\n font-size: 300%;\n font-weight: bold;\n mix-blend-mode: difference;\n background-color: white;\n}\n\nsvg,\nimg {\n float: right;\n width: 300px;\n}\n\n.background {\n background-image: url('data:image/svg+xml; utf8, <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"80px\" viewBox=\"0 0 80 80\"><circle cx=\"40\" cy=\"40\" r=\"40\" fill=\"wheat\"/></svg>');\n}\n<body>\n<div class=\"container\">\n <h1>A heading</h1>\n <div class=\"textstuff\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"sticky\">This is some sticky text with a border.</div>\n <h1>A heading</h1>\n <div class=\"textstuff\" style=\"color: red;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: green;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <svg xmlns='http://www.w3.org/2000/svg' viewbox='65.60520541730344 -3.8540656494687013 800.3826769425817 539.7901985072457'><path fill='grey' d='M 156.40357183517773 23.19316100057085 L 83.97002318399646 188.27914171909507 L 518.4511031561435 60.897074118366035 L 799.3826769425817 214.44658030407507 L 304.1247347870089 -2.8540656494687013 L 593.7387174567936 199.93582818685073 L 773.3354502925422 66.72541735224387 L 625.6142873407109 92.7726440022834 L 428.65273673826925 127.50227953566946 L 379.41234908765887 136.184688419016 L 446.0175545049623 225.98305483689026 L 448.871620154431 530.1077896238992 L 509.768694272797 11.65668646775564 L 373.58400585378104 391.06903555541453 L 602.4211263401401 249.17621583746111 L 182.45079848521726 170.91432395240204 L 616.9318784573643 43.53225635167299 L 165.08598071852424 72.43354865118125 L 312.80714367035546 46.3863220011417 L 225.86284290194985 417.1162622054541 L 399.63123250382057 538.7901985072457 L 66.60520541730344 89.79836641787429 Z'/></svg>\n <div class=\"textstuff\" style=\"color: brown;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. 
Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <h1>A heading</h1>\n <div class=\"textstuff\" style=\"color: red;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: green;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <img src='data:image/svg+xml; utf8, <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"80px\" viewBox=\"0 0 160 160\"><circle fill=\"pink\" stroke=\"lime\" stroke-width=\"10\" cx=\"80\" cy=\"80\" r=\"70\" /></svg>' />\n <div class=\"textstuff\" style=\"color: brown;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <h1>A heading</h1>\n <div class=\"textstuff\" style=\"color: red;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: green;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: brown;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <h1>A heading</h1>\n <div class=\"background\">\n <div class=\"textstuff\" style=\"color: red;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: green;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. 
Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: brown;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n </div>\n <h1>A heading</h1>\n <div class=\"textstuff\" style=\"color: red;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: green;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: brown;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <h1>A heading</h1>\n <div class=\"textstuff\" style=\"color: red;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: green;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n <div class=\"textstuff\" style=\"color: brown;\">Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image. Some scrollable text spread across the page to the other side of the page, wrapped around a right floated image.</div>\n</div>\n</body>\n\n\n\nIt will probably be informative to try different combinations of background colours and fill colours for various elements depending on how you want it to look in the end.\n"
] |
[
1
] |
[] |
[] |
[
"css",
"html",
"svg",
"webkit"
] |
stackoverflow_0074593131_css_html_svg_webkit.txt
|
Q:
Property 'genre_name' does not exist on type 'string'.ts(2339)
**File Detail.tsx **
interface Detail {
genre: Detail;
}
<div className="font-mono flex flex-wrap py-2 gap-1">
{data?.genre_list.map((genre: Detail, key: number) => (
<p key={key}>{genre?.genre_name} ,</p>
))}
</div>
**File Model.ts **
export interface Detail {
author: string;
chapter: string[];
genre_list: string[];
status: string;
synopsis: string;
thumb: string;
title: string;
type: string;
genre_name: string;
}
Property 'genre_name' does not exist on type 'string', even though I have typed it on the interface
A:
Use another name for the interface in Detail.tsx
...
import {Detail} from "./Model"
interface Props {
genre: Detail
}
...
• But I recommend using suitable names for objects, like genre: Genre
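A sketch of what the two files could look like after the rename, under the assumption that the API really returns genre objects carrying a genre_name field (the Genre name below is invented for illustration; adjust to the real response shape):
// Model.ts
export interface Genre {
  genre_name: string;
}

export interface Detail {
  author: string;
  genre_list: Genre[]; // was string[], which is what triggered ts(2339)
  // ...remaining fields unchanged
}

// Detail.tsx
// import { Detail, Genre } from "./Model";
// {data?.genre_list.map((genre: Genre, key: number) => (
//   <p key={key}>{genre.genre_name},</p>
// ))}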
|
Property 'genre_name' does not exist on type 'string'.ts(2339)
|
**File Detail.tsx **
interface Detail {
genre: Detail;
}
<div className="font-mono flex flex-wrap py-2 gap-1">
{data?.genre_list.map((genre: Detail, key: number) => (
<p key={key}>{genre?.genre_name} ,</p>
))}
</div>
**File Model.ts **
export interface Detail {
author: string;
chapter: string[];
genre_list: string[];
status: string;
synopsis: string;
thumb: string;
title: string;
type: string;
genre_name: string;
}
Property 'genre_name' does not exist on type 'string', even though I have typed it on the interface
|
[
"Use another name for interface in Detail.tsx\n...\n\nimport {Detail} from \"./Model\"\n\ninterface Props {\n genre: Detail\n}\n\n...\n\n\n• But i recommend you to use suitable names for objects like genre: Genre\n"
] |
[
0
] |
[] |
[] |
[
"reactjs",
"typescript"
] |
stackoverflow_0074665217_reactjs_typescript.txt
|
Q:
Protect api routes with middleware in nextJS?
I'm new to next.js and I wanted to know if I could protect a whole API route via middleware. So for example, if I wanted to protect /api/users, could I create /api/users/_middleware.ts and handle authentication in the middleware and not have to worry about authentication in the actual API endpoints? If so, how would I go about doing that? The library I'm using right now is @auth0/nextjs-auth0, so I guess it would look something like this? (Also please forgive me if I code this wrong, I am doing this in the stackoverflow editor)
export default authMiddleware(req,res)=>{
const {user,error,isLoading} = whateverTheNameOfTheAuth0HookIs()
if(user)
{
// Allow the request to the api route
}
else
{
// Deny the request with HTTP 401
}
}
Do I have the general idea correct?
A:
next-auth v4 introduced middleware for this purpose. The basic use case is pretty simple.
You can add a middleware.js file with the following:
export { default } from "next-auth/middleware"
export const config = { matcher: ["/dashboard"] }
Other use cases can be found in the documentation
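Applied to the /api/users example from the question, the same middleware can be pointed at API routes as well; the paths below are illustrative:
// middleware.ts
export { default } from "next-auth/middleware"

export const config = {
  matcher: ["/dashboard", "/api/users/:path*"],
}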
|
Protect api routes with middleware in nextJS?
|
I'm new to next.js and I wanted to know if I could protect a whole API route via middleware. So for example, if I wanted to protect /api/users, could I create /api/users/_middleware.ts and handle authentication in the middleware and not have to worry about authentication in the actual API endpoints? If so, how would I go about doing that? The library I'm using right now is @auth0/nextjs-auth0, so I guess it would look something like this? (Also please forgive me if I code this wrong, I am doing this in the stackoverflow editor)
export default authMiddleware(req,res)=>{
const {user,error,isLoading} = whateverTheNameOfTheAuth0HookIs()
if(user)
{
// Allow the request to the api route
}
else
{
// Deny the request with HTTP 401
}
}
Do I have the general idea correct?
|
[
"next-auth v4 introduced middleware for this purpose. The basic use case is pretty simple.\nYou can add a middleware.js file with the following:\nexport { default } from \"next-auth/middleware\"\nexport const config = { matcher: [\"/dashboard\"] }\n\nOther use cases can be found in the documentation\n"
] |
[
0
] |
[
"You can use middleware for that, something similar to this example from the documentation.\nFor a sub-directory inside pages, you can create a _middleware.ts file. It will run for all pages in this directory. It looks something like this:\nimport { NextRequest, NextResponse } from 'next/server'\n\nexport function middleware(req: NextRequest) {\n const basicAuth = req.headers.get('authorization')\n\n if (basicAuth) {\n // do whatever checks you need here\n const hasAccess = ...\n\n if (hasAccess) {\n // will render the specified page\n return NextResponse.next()\n }\n }\n\n // will not allow access\n return new Response('No access', {\n status: 401,\n headers: {\n 'WWW-Authenticate': 'Basic realm=\"Secure Area\"',\n },\n })\n}\n\nYou can find more info in the documentation.\n"
] |
[
-2
] |
[
"auth0",
"authentication",
"middleware",
"next.js"
] |
stackoverflow_0071071601_auth0_authentication_middleware_next.js.txt
|
Q:
How to check visibility of list item in Jetpack Compose
React Native's FlatList has a property viewabilityConfigCallbackPairs where you can set:
viewabilityConfig: {
itemVisiblePercentThreshold: 50,
waitForInteraction: true,
}
to detect visible items of the list with a threshold of 50% and after interaction or scroll.
Does Jetpack Compose also have something similar to this?
There is LazyListState with some layout info. But I wonder if there is anything built-in component/property for this use case.
Edit
I have a list of cardviews and I want to detect which card items (at least 50% of the card is visible) are visible on the display. But it needs to be detected only when the card is clicked or the list is scrolled by the user.
A:
To get an updating list of currently visible items with a certain threshold LazyListState can be used.
LazyListState exposes the list of currently visible items List<LazyListItemInfo>. It's easy to calculate visibility percent using
offset and size properties, and thus apply a filter to the visible list for visibility >= threshold.
LazyListItemInfo has index property, which can be used for mapping LazyListItemInfo to the actual data item in the list passed to LazyColumn.
fun LazyListState.visibleItems(itemVisiblePercentThreshold: Float) =
layoutInfo
.visibleItemsInfo
.filter {
visibilityPercent(it) >= itemVisiblePercentThreshold
}
fun LazyListState.visibilityPercent(info: LazyListItemInfo): Float {
val cutTop = max(0, layoutInfo.viewportStartOffset - info.offset)
val cutBottom = max(0, info.offset + info.size - layoutInfo.viewportEndOffset)
return max(0f, 100f - (cutTop + cutBottom) * 100f / info.size)
}
Usage
val list = state.visibleItems(50f) // list of LazyListItemInfo
This list has to be mapped first to corresponding items in LazyColumn.
val visibleItems = state.visibleItems(50f)
.map { listItems[it.index] }
@Composable
fun App() {
val listItems = remember { generateFakeListItems().toMutableStateList() }
val state = rememberLazyListState()
LazyColumn(Modifier.fillMaxSize(), state = state) {
items(listItems.size) {
Item(listItems[it])
}
}
val visibleItems = state.visibleItems(50f)
.map { listItems[it.index] }
Log.d(TAG, "App: $visibleItems")
}
fun generateFakeListItems() = (0..100).map { "Item $it" }
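To react only when the set of sufficiently visible items actually changes (e.g. after a scroll), the read can be wrapped in snapshotFlow inside the composable; a small sketch assuming the visibleItems() extension defined above:
// Inside the composable, after rememberLazyListState():
LaunchedEffect(state) {
    snapshotFlow { state.visibleItems(50f).map { it.index } }
        .distinctUntilChanged() // kotlinx.coroutines.flow
        .collect { indices -> Log.d(TAG, "Visible now: $indices") }
}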
A:
The LazyListState#layoutInfo contains information about the visible items.
Since you want to apply a threshold you need to check the first and last item positions and size according to viewport size. All other items are for sure visible.
It is important to note that since you are reading the state you should use derivedStateOf to avoid redundant recompositions.
Something like:
@Composable
private fun LazyListState.visibleItemsWithThreshold(percentThreshold: Float): List<Int> {
return remember(this) {
derivedStateOf {
val visibleItemsInfo = layoutInfo.visibleItemsInfo
if (layoutInfo.totalItemsCount == 0) {
emptyList()
} else {
val fullyVisibleItemsInfo = visibleItemsInfo.toMutableList()
val lastItem = fullyVisibleItemsInfo.last()
val viewportHeight = layoutInfo.viewportEndOffset + layoutInfo.viewportStartOffset
if (lastItem.offset + (lastItem.size*percentThreshold) > viewportHeight) {
fullyVisibleItemsInfo.removeLast()
}
val firstItemIfLeft = fullyVisibleItemsInfo.firstOrNull()
if (firstItemIfLeft != null &&
firstItemIfLeft.offset + (lastItem.size*percentThreshold) < layoutInfo.viewportStartOffset) {
fullyVisibleItemsInfo.removeFirst()
}
fullyVisibleItemsInfo.map { it.index }
}
}
}.value
}
and then just use:
val state = rememberLazyListState()
LazyColumn( state = state ){
//items
}
val visibleItems = state.visibleItemsWithThreshold(percentThreshold = 0.5f)
In this way you have the list of all the visible items with a threshold of 50%. You can observe the list using something:
LaunchedEffect(visibleItems){
Log.d(TAG, "App: $visibleItems")
}
|
How to check visibility of list item in Jetpack Compose
|
React Native's FlatList has a property viewabilityConfigCallbackPairs where you can set:
viewabilityConfig: {
itemVisiblePercentThreshold: 50,
waitForInteraction: true,
}
to detect visible items of the list with a threshold of 50% and after interaction or scroll.
Does Jetpack Compose also have something similar to this?
There is LazyListState with some layout info. But I wonder if there is anything built-in component/property for this use case.
Edit
I have a list of cardviews and I want to detect which card items (at least 50% of the card is visible) are visible on the display. But it needs to be detected only when the card is clicked or the list is scrolled by the user.
|
[
"To get an updating list of currently visible items with a certain threshold LazyListState can be used.\nLazyListState exposes the list of currently visible items List<LazyListItemInfo>. It's easy to calculate visibility percent using\noffset and size properties, and thus apply a filter to the visible list for visibility >= threshold.\nLazyListItemInfo has index property, which can be used for mapping LazyListItemInfo to the actual data item in the list passed to LazyColumn.\nfun LazyListState.visibleItems(itemVisiblePercentThreshold: Float) =\n layoutInfo\n .visibleItemsInfo\n .filter {\n visibilityPercent(it) >= itemVisiblePercentThreshold\n }\n\nfun LazyListState.visibilityPercent(info: LazyListItemInfo): Float {\n val cutTop = max(0, layoutInfo.viewportStartOffset - info.offset)\n val cutBottom = max(0, info.offset + info.size - layoutInfo.viewportEndOffset)\n\n return max(0f, 100f - (cutTop + cutBottom) * 100f / info.size)\n}\n\n\nUsage\nval list = state.visibleItems(50f) // list of LazyListItemInfo\n\nThis list has to be mapped first to corresponding items in LazyColumn.\nval visibleItems = state.visibleItems(50f)\n .map { listItems[it.index] }\n\n\n@Composable\nfun App() {\n val listItems = remember { generateFakeListItems().toMutableStateList() }\n\n val state = rememberLazyListState()\n\n LazyColumn(Modifier.fillMaxSize(), state = state) {\n items(listItems.size) {\n Item(listItems[it])\n }\n }\n\n val visibleItems = state.visibleItems(50f)\n .map { listItems[it.index] }\n\n Log.d(TAG, \"App: $visibleItems\")\n}\n\nfun generateFakeListItems() = (0..100).map { \"Item $it\" }\n\n",
"The LazyListState#layoutInfo contains information about the visible items.\nSince you want to apply a threshold you need to check the first and last item positions and size according to viewport size. All other items are for sure visible.\nIt is important to note that since you are reading the state you should use derivedStateOf to avoid redundant recompositions.\nSomething like:\n@Composable\nprivate fun LazyListState.visibleItemsWithThreshold(percentThreshold: Float): List<Int> {\n\n return remember(this) {\n derivedStateOf {\n val visibleItemsInfo = layoutInfo.visibleItemsInfo\n if (layoutInfo.totalItemsCount == 0) {\n emptyList()\n } else {\n val fullyVisibleItemsInfo = visibleItemsInfo.toMutableList()\n val lastItem = fullyVisibleItemsInfo.last()\n\n val viewportHeight = layoutInfo.viewportEndOffset + layoutInfo.viewportStartOffset\n\n if (lastItem.offset + (lastItem.size*percentThreshold) > viewportHeight) {\n fullyVisibleItemsInfo.removeLast()\n }\n\n val firstItemIfLeft = fullyVisibleItemsInfo.firstOrNull()\n if (firstItemIfLeft != null &&\n firstItemIfLeft.offset + (lastItem.size*percentThreshold) < layoutInfo.viewportStartOffset) {\n fullyVisibleItemsInfo.removeFirst()\n }\n\n fullyVisibleItemsInfo.map { it.index }\n }\n }\n }.value\n}\n\nand then just use:\n val state = rememberLazyListState()\n\n LazyColumn( state = state ){\n //items\n }\n val visibleItems = state.visibleItemsWithThreshold(percentThreshold = 0.5f)\n\nIn this way you have the list of all the visible items with a threshold of 50%. You can observe the list using something:\n LaunchedEffect(visibleItems){\n Log.d(TAG, \"App: $visibleItems\")\n }\n\n"
] |
[
8,
0
] |
[] |
[] |
[
"android",
"android_jetpack_compose",
"android_jetpack_compose_lazy_column",
"android_jetpack_compose_list"
] |
stackoverflow_0069252374_android_android_jetpack_compose_android_jetpack_compose_lazy_column_android_jetpack_compose_list.txt
|
Q:
Compare and replace character in js
I have a string like this:
let string = "/gb/fr/firstPage/secondPage"
I want to compare the /gb/fr/ part against an array like below:
let array = [ "/fr/fr", "/de/de", "/es/es","/ro/ro", "/it/it"]
and if it's the same, return the string, but if it is not the same, return:
string = "/gb/en/firstPage/secondPage"
Can you help me out please or explain how to do it ?
more example:
"/ee/et/firstPage/secondPage"
should return :
"/ee/en/firstPage/secondPage"`
another example :
"/lt/lt/firstPage/secondPage"
should return:
"/lt/en/firstPage/secondPage"
So it basically checks the first part (e.g. /fr/):
if it exists in the array it returns the corresponding link, otherwise it replaces the second part with `/en/`
A:
To compare the first part of the string against an array of strings, you can use the startsWith method to check if the string starts with any of the values in the array. If the string does not start with one of the values in the array, you can use the replace method to replace the first part of the string with the desired value.
Here is an example of how you could do this:
let string = "/gb/fr/firstPage/secondPage";
let array = [ "/fr/fr", "/de/de", "/es/es","/ro/ro", "/it/it"];
// check if the string starts with any of the values in the array
let isMatch = array.some(a => string.startsWith(a));
if (!isMatch) {
// replace the first part of the string with "/gb/en"
string = string.replace(/^\/\w+\/\w+/, "/gb/en");
}
console.log(string); // "/gb/en/firstPage/secondPage"
In this code, the some method is used to check if any of the values in the array match the start of the string. If none of the values match, the replace method is used to replace the first part of the string with the desired value. This method uses a regular expression to match the first two / characters, followed by one or more word characters (\w+), followed by another / character and one or more word characters. This regular expression will match a string like /gb/fr or /us/en, but not a string like /gb or /fr/fr/fr.
Finally, the updated string is logged to the console.
-chatgpt
A:
If I understand correctly, taking input "/<A>/<B>/<C>/<D>", you'd like to replace <B> with en in case <A> does not match any of the array item's first word, e.g. fr, de, es, ro, or it.
You can build a regex with a negative lookahead, e.g. the regex matches only if the lookahead condition is not met:
const input = [
"/fr/fr/firstPage/secondPage",
"/gb/fr/firstPage/secondPage",
"/ee/et/firstPage/secondPage",
"/lt/lt/firstPage/secondPage"
];
const array = ["/fr/fr", "/de/de", "/es/es","/ro/ro", "/it/it"];
let regex = new RegExp('^\\/(?!(?:' + array.map(s => s.substring(1, 3)).join('|') + '))(..)\\/..');
// resulting regex: /^\/(?!(?:fr|de|es|ro|it))(..)\/../
console.log('regex: ' + regex.toString());
input.forEach(str => {
let result = str.replace(regex, '/$1/en');
console.log(str + ' => ' + result);
});
Output:
regex: /^\/(?!(?:fr|de|es|ro|it))(..)\/../
/fr/fr/firstPage/secondPage => /fr/fr/firstPage/secondPage
/gb/fr/firstPage/secondPage => /gb/en/firstPage/secondPage
/ee/et/firstPage/secondPage => /ee/en/firstPage/secondPage
/lt/lt/firstPage/secondPage => /lt/en/firstPage/secondPage
Explanation of new RegExp():
code new RegExp('^\\/(?!(?:' + array.map(s => s.substring(1, 3)).join('|') + '))(..)\\/..') results in regex /^\/(?!(?:fr|de|es|ro|it))(..)\/../ with your given array
Explanation of resulting regex /^\/(?!(?:fr|de|es|ro|it))(..)\/../:
^ -- anchor at start of string
\/ -- literal /
(?! -- start of negative lookahead
(?:fr|de|es|ro|it) -- non-capture group containing a list of items with logical OR |
) -- end of negative lookahead
(..) -- capture group 1 for two chars
\/.. -- literal /..
Explanation of replace '/$1/en':
/ -- literal /
$1 -- capture group 1 value
/en -- literal /en
|
Compare and replace character in js
|
I have a string like this:
let string = "/gb/fr/firstPage/secondPage"
I want to compare the /gb/fr/ part against an array like below:
let array = [ "/fr/fr", "/de/de", "/es/es","/ro/ro", "/it/it"]
and if it's the same, return the string, but if it is not the same, return:
string = "/gb/en/firstPage/secondPage"
Can you help me out please or explain how to do it ?
more example:
"/ee/et/firstPage/secondPage"
should return :
"/ee/en/firstPage/secondPage"`
another example :
"/lt/lt/firstPage/secondPage"
should return:
"/lt/en/firstPage/secondPage"
So it basically checks the first part (e.g. /fr/):
if it exists in the array it returns the corresponding link, otherwise it replaces the second part with `/en/`
|
[
"To compare the first part of the string against an array of strings, you can use the startsWith method to check if the string starts with any of the values in the array. If the string does not start with one of the values in the array, you can use the replace method to replace the first part of the string with the desired value.\nHere is an example of how you could do this:\nlet string = \"/gb/fr/firstPage/secondPage\";\n\nlet array = [ \"/fr/fr\", \"/de/de\", \"/es/es\",\"/ro/ro\", \"/it/it\"];\n\n// check if the string starts with any of the values in the array\nlet isMatch = array.some(a => string.startsWith(a));\n\nif (!isMatch) {\n // replace the first part of the string with \"/gb/en\"\n string = string.replace(/^\\/\\w+\\/\\w+/, \"/gb/en\");\n}\n\nconsole.log(string); // \"/gb/en/firstPage/secondPage\"\n\n\n\nIn this code, the some method is used to check if any of the values in the array match the start of the string. If none of the values match, the replace method is used to replace the first part of the string with the desired value. This method uses a regular expression to match the first two / characters, followed by one or more word characters (\\w+), followed by another / character and one or more word characters. This regular expression will match a string like /gb/fr or /us/en, but not a string like /gb or /fr/fr/fr.\nFinally, the updated string is logged to the console.\n-chatgpt\n",
"If I understand correctly, taking input \"/<A>/<B>/<C>/<D>\", you'd like to replace <B> with en in case <A> does not match any of the array item's first word, e.g. fr, de, es, ro, or it.\nYou can build a regex with a negative lookahead, e.g. the regex matches only if the lookahead condition is not met:\n\n\nconst input = [\n \"/fr/fr/firstPage/secondPage\",\n \"/gb/fr/firstPage/secondPage\",\n \"/ee/et/firstPage/secondPage\",\n \"/lt/lt/firstPage/secondPage\"\n];\nconst array = [\"/fr/fr\", \"/de/de\", \"/es/es\",\"/ro/ro\", \"/it/it\"];\n\nlet regex = new RegExp('^\\\\/(?!(?:' + array.map(s => s.substring(1, 3)).join('|') + '))(..)\\\\/..');\n// resulting regex: /^\\/(?!(?:fr|de|es|ro|it))(..)\\/../\nconsole.log('regex: ' + regex.toString());\ninput.forEach(str => {\n let result = str.replace(regex, '/$1/en');\n console.log(str + ' => ' + result);\n});\n\n\n\nOutput:\nregex: /^\\/(?!(?:fr|de|es|ro|it))(..)\\/../\n/fr/fr/firstPage/secondPage => /fr/fr/firstPage/secondPage\n/gb/fr/firstPage/secondPage => /gb/en/firstPage/secondPage\n/ee/et/firstPage/secondPage => /ee/en/firstPage/secondPage\n/lt/lt/firstPage/secondPage => /lt/en/firstPage/secondPage\n\nExplanation of new RegExp():\n\ncode new RegExp('^\\\\/(?!(?:' + array.map(s => s.substring(1, 3)).join('|') + '))(..)\\\\/..') results in regex /^\\/(?!(?:fr|de|es|ro|it))(..)\\/../ with your given array\n\nExplanation of resulting regex /^\\/(?!(?:fr|de|es|ro|it))(..)\\/../:\n\n^ -- anchor at start of string\n\\/ -- literal /\n(?! -- start of negative lookahead\n(?:fr|de|es|ro|it) -- non-capture group containing a list of items with logical OR |\n) -- end of negative lookahead\n(..) -- capture group 1 for two chars\n\\/.. -- literal /..\n\nExplanation of replace '/$1/en':\n\n/ -- literal /\n$1 -- capture group 1 value\n/en -- literal /en\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"javascript",
"replace",
"string"
] |
stackoverflow_0074662787_javascript_replace_string.txt
|
Q:
pandas/regex: Remove the string after the hyphen or parenthesis character (including) carry string after the comma in pandas dataframe
I have a dataframe that contains one column with multiple strings separated by commas. Within each string I want to remove everything after the hyphen (including the hyphen). The main point is that in some cases there is no hyphen but an opening parenthesis instead, so I also want to remove from that point on, while keeping everything that comes after each comma. How can I do it? You can see this case in the last row.
dd = pd.DataFrame()
dd['sin'] = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']
Expected output
dd['sin']
# output
U147 U35
P01 P02
P3 P032
P034
P23F5 P04
I want to keep only the part of each string before the hyphen, parenthesis, or any other special character.
A:
The following code seems to reproduce your desired result:
dd['sin'] = dd['sin'].str.split(", ")
dd = dd.explode('sin').reset_index()
dd['sin'] = dd['sin'].str.replace('\W.*', '', regex=True)
Which gives dd['sin'] as:
0 U147
1 U35
2 P01
3 P02
4 P3
5 P032
6 P034
7 P23F5
8 P04
Name: sin, dtype: object
The call of .reset_index() in the second line is optional depending on whether you want to preserve which row that piece of the string came from.
A:
You can use the following regex:
r"-\d{2}|\([EBP]CM\)|\s"
Here is the code:
sin = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']
dd = pd.DataFrame()
dd['sin'] = sin
dd['sin'] = dd['sin'].str.replace(r'-\d{2}|\([EBP]CM\)|\s', '', regex=True)
print(dd)
OUTPUT:
sin
0 U147,U35
1 P01,P02
2 P3,P032
3 P034
4 P23F5,P04
EDIT
Or use this line to remove the comma:
dd['sin'] = dd['sin'].str.replace(r'-\d{2}|\([EBP]CM\)|\s', '', regex=True).str.replace(',',' ')
OUTPUT:
sin
0 U147 U35
1 P01 P02
2 P3 P032
3 P034
4 P23F5 P04
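As an alternative sketch that keeps the rows intact (this relies on an assumption taken from the sample data: every wanted token starts with a letter and is immediately followed by a hyphen or an opening parenthesis), the tokens can be pulled out with str.findall and joined back with a space:
import pandas as pd

dd = pd.DataFrame()
dd['sin'] = ['U147(BCM), U35(BCM)', 'P01-00(ECM), P02-00(ECM)',
             'P3-00(ECM), P032-00(ECM)', 'P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']

# keep every token that starts with a letter and is cut off at '-' or '('
dd['sin'] = dd['sin'].str.findall(r'[A-Z]\w*(?=[-(])').str.join(' ')
print(dd)
For the sample above this should give 'U147 U35', 'P01 P02', 'P3 P032', 'P034' and 'P23F5 P04', matching the expected output without exploding the frame.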
|
pandas/regex: Remove the string after the hyphen or parenthesis character (including) carry string after the comma in pandas dataframe
|
I have a dataframe that contains one column with multiple strings separated by commas. Within each string I want to remove everything after the hyphen (including the hyphen). The main point is that in some cases there is no hyphen but an opening parenthesis instead, so I also want to remove from that point on, while keeping everything that comes after each comma. How can I do it? You can see this case in the last row.
dd = pd.DataFrame()
dd['sin'] = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']
Expected output
dd['sin']
# output
U147 U35
P01 P02
P3 P032
P034
P23F5 P04
I want to keep only the part of each string before the hyphen, parenthesis, or any other special character.
|
[
"The following code seems to reproduce your desired result:\ndd['sin'] = dd['sin'].str.split(\", \")\ndd = dd.explode('sin').reset_index()\ndd['sin'] = dd['sin'].str.replace('\\W.*', '', regex=True)\n\nWhich gives dd['sin'] as:\n0 U147\n1 U35\n2 P01\n3 P02\n4 P3\n5 P032\n6 P034\n7 P23F5\n8 P04\nName: sin, dtype: object\n\nThe call of .reset_index() in the second line is optional depending on whether you want to preserve which row that piece of the string came from.\n",
"You can use the following regex:\nr\"-\\d{2}|\\([EBP]CM\\)|\\s\"\n\n\n\nHere is the code:\nsin = ['U147(BCM), U35(BCM)','P01-00(ECM), P02-00(ECM)', 'P3-00(ECM), P032-00(ECM)','P034-00(ECM)', 'P23F5(PCM), P04-00(ECM)']\n\ndd = pd.DataFrame()\ndd['sin'] = sin\ndd['sin'] = dd['sin'].str.replace(r'-\\d{2}|\\([EBP]CM\\)|\\s', '', regex=True)\nprint(dd)\n\nOUTPUT:\n sin\n0 U147,U35\n1 P01,P02\n2 P3,P032\n3 P034\n4 P23F5,P04\n\n\n\n\nEDIT\nOr use this line to remove the comma:\ndd['sin'] = dd['sin'].str.replace(r'-\\d{2}|\\([EBP]CM\\)|\\s', '', regex=True).str.replace(',',' ')\n\nOUTPUT:\n sin\n0 U147 U35\n1 P01 P02\n2 P3 P032\n3 P034\n4 P23F5 P04\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"pandas",
"python",
"replace"
] |
stackoverflow_0074664899_pandas_python_replace.txt
|
Q:
pandas.read_excel parameter "sheet_name" not working
According to pandas doc for 0.21+, pandas.read_excel has a parameter sheet_name that allows specifying which sheet is read. But when I am trying to read the second sheet from an excel file, no matter how I set the parameter (sheet_name = 1, sheet_name = 'Sheet2'), the dataframe always shows the first sheet, and passing a list of indices (sheet_name = [0, 1]) does not return a dictionary of dataframes but still the first sheet. What might be the problem here?
A:
It looks like you're using an old version of pandas.
So try to change your code
df = pd.read_excel(file_with_data, sheetname=sheet_with_data)
It should work properly.
A:
You can try to use pd.ExcelFile:
xls = pd.ExcelFile('path_to_file.xls')
df1 = pd.read_excel(xls, 'Sheet1')
df2 = pd.read_excel(xls, 'Sheet2')
A:
This works:
df = pd.read_excel(open(file_path_name, 'rb'), sheetname = sheet_name)
file_path_name = your file
sheet_name = your sheet name
This does not for me:
df = pd.read_excel(open(file_path_name, 'rb'), sheet_name = sheet_name)
Gave me only the first sheet, no matter how I defined sheet_name.
--> it is an known error:
https://github.com/pandas-dev/pandas/issues/17107
A:
Try at Terminal, type the following first, then re-run your program:
pip install xlrd
A:
I also faced this problem until I found this solution:
rd = pd.read_excel(excel_file, sheet_name=['Sheet2'])
Here excel_file means "file name".
The filename should be the full path to the file.
Make sure to use two backslashes (\\) instead of just one!
In my case, this works.
A:
I would just use double quotes like this.
# Returns a DataFrame
pd.read_excel("path_to_file.xls", sheet_name="Sheet1")
|
pandas.read_excel parameter "sheet_name" not working
|
According to pandas doc for 0.21+, pandas.read_excel has a parameter sheet_name that allows specifying which sheet is read. But when I am trying to read the second sheet from an excel file, no matter how I set the parameter (sheet_name = 1, sheet_name = 'Sheet2'), the dataframe always shows the first sheet, and passing a list of indices (sheet_name = [0, 1]) does not return a dictionary of dataframes but still the first sheet. What might be the problem here?
|
[
"It looks like you're using the old version of Python.\nSo try to change your code \ndf = pd.read_excel(file_with_data, sheetname=sheet_with_data)\n\nIt should work properly.\n",
"You can try to use pd.ExcelFile:\nxls = pd.ExcelFile('path_to_file.xls')\ndf1 = pd.read_excel(xls, 'Sheet1')\ndf2 = pd.read_excel(xls, 'Sheet2')\n\n",
"This works:\ndf = pd.read_excel(open(file_path_name), 'rb'), sheetname = sheet_name)\n\nfile_path_name = your file\nsheet_name = your sheet name\n\nThis does not for me:\ndf = pd.read_excel(open(file_path_name), 'rb'), sheet_name = sheet_name)\n\nGave me only the first sheet, no matter how I defined sheet_name.\n--> it is an known error:\nhttps://github.com/pandas-dev/pandas/issues/17107\n",
"Try at Terminal, type the following first, then re-run your program:\npip install xlrd\n",
"I also faced this problem until I found this solution:\nrd=pd.read_excel(excel_file,sheet_name=['Sheet2']),\n\nHere excel_file means \"file name\".\nThe filename should be the full path to the file.\nMake sure to use two backslashes (\\\\) instead of just one!\nIn my case, this works.\n",
"I would just use double quotes like this.\n# Returns a DataFrame\npd.read_excel(\"path_to_file.xls\", sheet_name=\"Sheet1\")\n\n"
] |
[
22,
7,
2,
1,
0,
0
] |
[] |
[] |
[
"excel",
"pandas",
"python"
] |
stackoverflow_0047975866_excel_pandas_python.txt
|
Q:
Python evdev [Error 16] Device or resource busy
I connected a 2D barcode scanner to a Raspberry Pi 4 Model B and tried to scan a few codes. Using the evdev library I got the output successfully. But the issue is that after 3 consecutive scans it throws an exception saying "[Error 16] Device or resource busy". I can't find the root cause of this issue and have tried many troubleshooting methods, but nothing seems to work. Can anyone please help me? Here is the code I used.
from evdev import InputDevice, categorize, ecodes
from datetime import datetime
import calendar
scancodes = {
# Scancode: ASCIICode
0: None, 1: u'ESC', 2: u'1', 3: u'2', 4: u'3', 5: u'4', 6: u'5', 7: u'6', 8: u'7', 9: u'8',
10: u'9', 11: u'0', 12: u'-', 13: u'=', 14: u'BKSP', 15: u'TAB', 16: u'q', 17: u'w', 18: u'e', 19: u'r',
20: u't', 21: u'y', 22: u'u', 23: u'i', 24: u'o', 25: u'p', 26: u'[', 27: u']', 28: u'CRLF', 29: u'LCTRL',
30: u'a', 31: u's', 32: u'd', 33: u'f', 34: u'g', 35: u'h', 36: u'j', 37: u'k', 38: u'l', 39: u';',
40: u'"', 41: u'`', 42: u'LSHFT', 43: u'\\', 44: u'z', 45: u'x', 46: u'c', 47: u'v', 48: u'b', 49: u'n',
50: u'm', 51: u',', 52: u'.', 53: u'/', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
capscodes = {
0: None, 1: u'ESC', 2: u'!', 3: u'@', 4: u'#', 5: u'$', 6: u'%', 7: u'^', 8: u'&', 9: u'*',
10: u'(', 11: u')', 12: u'_', 13: u'+', 14: u'BKSP', 15: u'TAB', 16: u'Q', 17: u'W', 18: u'E', 19: u'R',
20: u'T', 21: u'Y', 22: u'U', 23: u'I', 24: u'O', 25: u'P', 26: u'{', 27: u'}', 28: u'CRLF', 29: u'LCTRL',
30: u'A', 31: u'S', 32: u'D', 33: u'F', 34: u'G', 35: u'H', 36: u'J', 37: u'K', 38: u'L', 39: u':',
40: u'\'', 41: u'~', 42: u'LSHFT', 43: u'|', 44: u'Z', 45: u'X', 46: u'C', 47: u'V', 48: u'B', 49: u'N',
50: u'M', 51: u'<', 52: u'>', 53: u'?', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
class scan_barcode:
def __init__(self,devicePath):
self.devicePath = devicePath
def readBarcode(self):
dev = InputDevice(self.devicePath)
dev.grab() # grab provides exclusive access to the device
x = ''
caps = False
for event in dev.read_loop():
if event.type == ecodes.EV_KEY:
data = categorize(event) # Save the event temporarily to introspect it
if data.scancode == 42:
if data.keystate == 1:
caps = True
if data.keystate == 0:
caps = False
if data.keystate == 1: # Down events only
if caps:
key_lookup = u'{}'.format(capscodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
else:
key_lookup = u'{}'.format(scancodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
if (data.scancode != 42) and (data.scancode != 28):
x += key_lookup
if(data.scancode == 28):
return(x)
scanned_data = scan_barcode('/dev/input/event0')
def scanner_function():
try:
value = scanned_data.readBarcode()
print(f"Scanned value:{str(value)}")
except Exception as e:
print(e)
pass
while True:
scanner_function()
Even though I catch and pass on the exception, it doesn't let me move on to other tasks. The entire process stops here.
This is the output:
Scanned value: 4568hidhXGu
Scanned value: 1238fujXjje75
Scanned value: 789665
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
A:
I am not sure if the problem is related to your code. I think it is more related to your scanner. I have tested your script with the R32 QR Code reader (https://www.sycreader.com/en/3650/) and this is working perfect.
What scanner type are you using?
Result:
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.035355
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.675270
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:14.563287
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.007284
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.799299
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:20.959301
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:21.591286
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:24.515289
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:26.331292
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:31.323339
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:32.747289
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:34.495291
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:36.367294
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:37.903286
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:39.507295
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:41.099288
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:42.575295
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:44.123283
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:45.579286
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:47.055336
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:48.671301
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:49.983288
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:52.779284
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:54.755299
Scanned value:Toiletbezoek DateTime:2022-12-03 09:49:56.159286
A:
The error you're encountering is due to the fact that the barcode scanner is a "grabbed" device, meaning that it is currently in use by another process and cannot be accessed. In order to fix this, you can try releasing the device before attempting to use it again. Here is an example of how you can do this:
# Release the device
dev.ungrab()
# Wait for a few seconds to allow the device to be released
time.sleep(2)
# Attempt to grab the device again
dev.grab()
You can insert this code before the for loop in your readBarcode method to try and fix the issue.
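A related sketch, based on the assumption that the problem is the repeated grab: readBarcode() opens and grabs the device again on every call and never ungrabs or closes the previous handle, so the earlier grab can still be held when the next one is attempted, which is exactly an EBUSY situation. Grabbing once, reading every scan from a single read_loop, and releasing the device in a finally block avoids re-grabbing entirely. The sketch below reuses the scancodes table from the question and leaves out the caps/shift handling for brevity:
from evdev import InputDevice, categorize, ecodes

DEVICE_PATH = '/dev/input/event0'    # same path as in the question

dev = InputDevice(DEVICE_PATH)
dev.grab()                           # grab once, up front
try:
    code = ''
    for event in dev.read_loop():    # one loop handles every scan
        if event.type != ecodes.EV_KEY:
            continue
        data = categorize(event)
        if data.keystate != 1:       # key-down events only
            continue
        if data.scancode == 28:      # ENTER marks the end of one barcode
            print(f"Scanned value: {code}")
            code = ''
        elif data.scancode != 42:    # ignore the shift key itself
            code += scancodes.get(data.scancode) or ''
finally:
    dev.ungrab()                     # always release the device
    dev.close()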
|
Python evdev [Error 16] Device or resource busy
|
I connected a 2D barcode scanner to a Raspberry Pi 4 Model B and tried to scan a few codes. Using the evdev library I got the output successfully. But the issue is that after 3 consecutive scans it throws an exception saying "[Error 16] Device or resource busy". I can't find the root cause of this issue and have tried many troubleshooting methods, but nothing seems to work. Can anyone please help me? Here is the code I used.
from evdev import InputDevice, categorize, ecodes
from datetime import datetime
import calendar
scancodes = {
# Scancode: ASCIICode
0: None, 1: u'ESC', 2: u'1', 3: u'2', 4: u'3', 5: u'4', 6: u'5', 7: u'6', 8: u'7', 9: u'8',
10: u'9', 11: u'0', 12: u'-', 13: u'=', 14: u'BKSP', 15: u'TAB', 16: u'q', 17: u'w', 18: u'e', 19: u'r',
20: u't', 21: u'y', 22: u'u', 23: u'i', 24: u'o', 25: u'p', 26: u'[', 27: u']', 28: u'CRLF', 29: u'LCTRL',
30: u'a', 31: u's', 32: u'd', 33: u'f', 34: u'g', 35: u'h', 36: u'j', 37: u'k', 38: u'l', 39: u';',
40: u'"', 41: u'`', 42: u'LSHFT', 43: u'\\', 44: u'z', 45: u'x', 46: u'c', 47: u'v', 48: u'b', 49: u'n',
50: u'm', 51: u',', 52: u'.', 53: u'/', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
capscodes = {
0: None, 1: u'ESC', 2: u'!', 3: u'@', 4: u'#', 5: u'$', 6: u'%', 7: u'^', 8: u'&', 9: u'*',
10: u'(', 11: u')', 12: u'_', 13: u'+', 14: u'BKSP', 15: u'TAB', 16: u'Q', 17: u'W', 18: u'E', 19: u'R',
20: u'T', 21: u'Y', 22: u'U', 23: u'I', 24: u'O', 25: u'P', 26: u'{', 27: u'}', 28: u'CRLF', 29: u'LCTRL',
30: u'A', 31: u'S', 32: u'D', 33: u'F', 34: u'G', 35: u'H', 36: u'J', 37: u'K', 38: u'L', 39: u':',
40: u'\'', 41: u'~', 42: u'LSHFT', 43: u'|', 44: u'Z', 45: u'X', 46: u'C', 47: u'V', 48: u'B', 49: u'N',
50: u'M', 51: u'<', 52: u'>', 53: u'?', 54: u'RSHFT', 56: u'LALT', 57: u' ', 100: u'RALT'
}
class scan_barcode:
def __init__(self,devicePath):
self.devicePath = devicePath
def readBarcode(self):
dev = InputDevice(self.devicePath)
dev.grab() # grab provides exclusive access to the device
x = ''
caps = False
for event in dev.read_loop():
if event.type == ecodes.EV_KEY:
data = categorize(event) # Save the event temporarily to introspect it
if data.scancode == 42:
if data.keystate == 1:
caps = True
if data.keystate == 0:
caps = False
if data.keystate == 1: # Down events only
if caps:
key_lookup = u'{}'.format(capscodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
else:
key_lookup = u'{}'.format(scancodes.get(data.scancode)) or u'UNKNOWN:[{}]'.format(data.scancode) # Lookup or return UNKNOWN:XX
if (data.scancode != 42) and (data.scancode != 28):
x += key_lookup
if(data.scancode == 28):
return(x)
scanned_data = scan_barcode('/dev/input/event0')
def scanner_function():
try:
value = scanned_data.readBarcode()
print(f"Scanned value:{str(value)}")
except Exception as e:
print(e)
pass
while True:
scanner_function()
Even though I catch and pass on the exception, it doesn't let me move on to other tasks. The entire process stops here.
This is the output:
Scanned value: 4568hidhXGu
Scanned value: 1238fujXjje75
Scanned value: 789665
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
[Error 16] Device or resource busy
|
[
"I am not sure if the problem is related to your code. I think it is more related to your scanner. I have tested your script with the R32 QR Code reader (https://www.sycreader.com/en/3650/) and this is working perfect.\nWhat scanner type are you using?\nResult:\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.035355\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:11.675270\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:14.563287\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.007284\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:15.799299\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:20.959301\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:21.591286\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:24.515289\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:26.331292\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:31.323339\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:32.747289\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:34.495291\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:36.367294\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:37.903286\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:39.507295\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:41.099288\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:42.575295\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:44.123283\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:45.579286\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:47.055336\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:48.671301\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:49.983288\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:52.779284\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:54.755299\nScanned value:Toiletbezoek DateTime:2022-12-03 09:49:56.159286\n\n",
"The error you're encountering is due to the fact that the barcode scanner is a \"grabbed\" device, meaning that it is currently in use by another process and cannot be accessed. In order to fix this, you can try releasing the device before attempting to use it again. Here is an example of how you can do this:\n# Release the device\ndev.ungrab()\n\n# Wait for a few seconds to allow the device to be released\ntime.sleep(2)\n\n# Attempt to grab the device again\ndev.grab()\n\nYou can insert this code before the for loop in your readBarcode method to try and fix the issue.\n"
] |
[
0,
0
] |
[] |
[] |
[
"barcode_scanner",
"evdev",
"linux",
"python",
"raspberry_pi4"
] |
stackoverflow_0074325312_barcode_scanner_evdev_linux_python_raspberry_pi4.txt
|
Q:
Am Trying to Insert some values from Registration form. am getting this error mysqli_stmt::execute() expects exactly 0 parameters, 1 given in,
if(isset($_POST['submtbtn'])){
$firstname = $_POST['frstName'];
$lastname = $_POST['lstName'];
$password = $_POST['txtPwd'];
$email = $_POST['txtEmail'];
$sql = "Insert into MASTER_REGISTRATION (FIRST_NAME, LAST_NAME, PASSWORD, EMAIL) VALUES (?,?,?,?)";
$stmntinsert = $con->prepare($sql);
$result = $stmntinsert->execute([$firstname, $lastname, $password, $email]);
I'm getting the error on the $sql line. I don't know what the error is. I'm new to PHP, a beginner-level programmer.
Error:
Warning: mysqli_stmt::execute() expects exactly 0 parameters, 1 given in /var/www/html/arun/Registration.php on line 81
Fatal error: Uncaught Error: Call to undefined method mysqli::error() in /var/www/html/arun/Registration.php:91 Stack trace: #0 {main} thrown in /var/www/html/arun/Registration.php on line 91
|
Am Trying to Insert some values from Registration form. am getting this error mysqli_stmt::execute() expects exactly 0 parameters, 1 given in,
|
if(isset($_POST['submtbtn'])){
$firstname = $_POST['frstName'];
$lastname = $_POST['lstName'];
$password = $_POST['txtPwd'];
$email = $_POST['txtEmail'];
$sql = "Insert into MASTER_REGISTRATION (FIRST_NAME, LAST_NAME, PASSWORD, EMAIL) VALUES (?,?,?,?)";
$stmntinsert = $con->prepare($sql);
$result = $stmntinsert->execute([$firstname, $lastname, $password, $email]);
I'm getting the error on the $sql line. I don't know what the error is. I'm new to PHP, a beginner-level programmer.
Error:
Warning: mysqli_stmt::execute() expects exactly 0 parameters, 1 given in /var/www/html/arun/Registration.php on line 81
Fatal error: Uncaught Error: Call to undefined method mysqli::error() in /var/www/html/arun/Registration.php:91 Stack trace: #0 {main} thrown in /var/www/html/arun/Registration.php on line 91
|
[] |
[] |
[
"The error is occurring because you are trying to pass an array to the execute() method, but the method does not accept any parameters. Instead, you should pass the values as individual parameters to the method, like this:\n$result = $stmntinsert->execute($firstname, $lastname, $password, $email);\n\nAdditionally, the error message indicates that the error() method is not defined for the mysqli object. You should use the mysqli_stmt object to check for errors, like this:\nif (!$result) {\n printf(\"Error: %s\\n\", $stmntinsert->error);\n exit();\n}\n\nHope this helps!\n"
] |
[
-1
] |
[
"mysqli",
"php"
] |
stackoverflow_0074663998_mysqli_php.txt
|
Q:
google foobar : 'please pass the coded messages'. What is the matter in my code?
I happened to see the Google foobar challenges and
I'm struggling to solve a problem, 'please pass the coded messages'.
When submitting my solution code, I get the response that one of the 5 tests is failing.
I really scrutinized my code, but I can't discover any error in it.
--Problem--
You need to pass a message to the bunny workers, but to avoid detection, the code you agreed to use is... obscure, to say the least. The bunnies are given food on standard-issue plates that are stamped with the numbers 0-9 for easier sorting, and you need to combine sets of plates to create the numbers in the code. The signal that a number is part of the code is that it is divisible by 3. You can do smaller numbers like 15 and 45 easily, but bigger numbers like 144 and 414 are a little trickier. Write a program to help yourself quickly create large numbers for use in the code, given a limited number of plates to work with.
You have L, a list containing some digits (0 to 9). Write a function solution(L) which finds the largest number that can be made from some or all of these digits and is divisible by 3. If it is not possible to make such a number, return 0 as the solution. L will contain anywhere from 1 to 9 digits. The same digit may appear multiple times in the list, but each element in the list may only be used once.
--Samples--
Input:
solution.solution([3, 1, 4, 1])
Output:
4311
Input:
solution.solution([3, 1, 4, 1, 5, 9])
Output:
94311
--My solution--
def jointer(l):
res=0
for i, v in enumerate(l):
if v==0:
res=res*10
else:
res+=v*10**i
return res
def solution(L):
L=sorted(L)
ll=[]
s=0
for i in L:
ll.append(i%3)
s+=i
r=s%3
if r==1:
if 1 in ll:
L.pop(ll.index(1))
else:
for _ in range(2):
L.pop(ll.index(2))
elif r==2:
if 2 in ll:
L.pop(ll.index(2))
else:
for _ in range(2):
L.pop(ll.index(1))
return jointer(L)
I'd like to ask you guys what the problem in this code is.
Thank you for your help in advance.
A:
You need to be thinking in terms of permutations of the digits in the input list. Bear in mind that the challenge states that "some or all" of the values may be used. So you need to be looking at permutations from 1 to the length of the input list (inclusive).
There's probably a more efficient way to do this but:
from itertools import permutations
def solution(digits):
_max = -1
for r in range(1, len(digits)+1):
for combo in permutations(digits, r=r):
v = 0
for n in combo:
v = v * 10 + n
if v % 3 == 0:
_max = max(_max, v)
return _max
print(solution([3, 1, 4, 1, 5, 9]))
Output:
94311
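As for the original code, the failing hidden test is most likely a case where the removal pops by a stale index: for L = [2, 3, 5] the digit sum is 10, there is no digit with remainder 1, and L.pop(ll.index(2)) runs twice with ll unchanged, so it removes 2 and then 3 instead of 2 and 5, returning 5, which is not divisible by 3 (the expected answer is 3). A sketch of the greedy fix along the same lines as your attempt (sort the digits in descending order and drop the fewest, smallest digits needed to make the sum divisible by 3) would be:
def solution(l):
    digits = sorted(l, reverse=True)           # biggest digits first
    remainder = sum(digits) % 3
    if remainder:
        # try to drop one smallest digit with the same remainder
        for i in range(len(digits) - 1, -1, -1):
            if digits[i] % 3 == remainder:
                del digits[i]
                break
        else:
            # otherwise drop the two smallest digits with the other remainder
            dropped = 0
            for i in range(len(digits) - 1, -1, -1):
                if digits[i] % 3 == 3 - remainder:
                    del digits[i]
                    dropped += 1
                    if dropped == 2:
                        break
            if dropped < 2:
                return 0
    if not digits:
        return 0
    return int(''.join(str(d) for d in digits))
This returns 4311 and 94311 for the two samples and 0 when no valid number can be built.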
|
google foobar : 'please pass the coded messages'. What is the matter in my code?
|
I happened to see the Google foobar challenges and
I'm struggling to solve a problem, 'please pass the coded messages'.
When submitting my solution code, I get the response that one of the 5 tests is failing.
I really scrutinized my code, but I can't discover any error in it.
--Problem--
You need to pass a message to the bunny workers, but to avoid detection, the code you agreed to use is... obscure, to say the least. The bunnies are given food on standard-issue plates that are stamped with the numbers 0-9 for easier sorting, and you need to combine sets of plates to create the numbers in the code. The signal that a number is part of the code is that it is divisible by 3. You can do smaller numbers like 15 and 45 easily, but bigger numbers like 144 and 414 are a little trickier. Write a program to help yourself quickly create large numbers for use in the code, given a limited number of plates to work with.
You have L, a list containing some digits (0 to 9). Write a function solution(L) which finds the largest number that can be made from some or all of these digits and is divisible by 3. If it is not possible to make such a number, return 0 as the solution. L will contain anywhere from 1 to 9 digits. The same digit may appear multiple times in the list, but each element in the list may only be used once.
--Samples--
Input:
solution.solution([3, 1, 4, 1])
Output:
4311
Input:
solution.solution([3, 1, 4, 1, 5, 9])
Output:
94311
--My solution--
def jointer(l):
res=0
for i, v in enumerate(l):
if v==0:
res=res*10
else:
res+=v*10**i
return res
def solution(L):
L=sorted(L)
ll=[]
s=0
for i in L:
ll.append(i%3)
s+=i
r=s%3
if r==1:
if 1 in ll:
L.pop(ll.index(1))
else:
for _ in range(2):
L.pop(ll.index(2))
elif r==2:
if 2 in ll:
L.pop(ll.index(2))
else:
for _ in range(2):
L.pop(ll.index(1))
return jointer(L)
I'd like to ask you guys what the problem in this code is.
Thank you for your help in advance.
|
[
"You need to be thinking in terms of permutations of the digits in the input list. Bear in mind that the challenge states that \"some or all\" of the values may be used. So you need to be looking at permutations from 1 to the length of the input list (inclusive).\nThere's probably a more efficient way to do this but:\nfrom itertools import permutations\n\ndef solution(digits):\n _max = -1\n for r in range(1, len(digits)+1):\n for combo in permutations(digits, r=r):\n v = 0\n for n in combo:\n v = v * 10 + n\n if v % 3 == 0:\n _max = max(_max, v)\n return _max\n\nprint(solution([3, 1, 4, 1, 5, 9]))\n\nOutput:\n94311\n\n"
] |
[
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074665209_python.txt
|
Q:
Access scoped viewModel from TopAppBar in jetpack compose
Folks I am stuck engineering a proper solution to access a viewModel scoped to a nav graph , from a button that exists in the TopAppBar in a compose application
Scaffold{
TopAppBar-> Contains the Save Button
Body->
BioDataGraph() -> Contains 5 screens to gather biodata information , and a viewmodel scoped to the graph
}
}
My BioDataViewModel looks like this
class BioDataViewModel{
fun gatherPersonalInformation()
fun gatherPhotos()
...
fun onSaveEverything()
}
The issue I came across is, as I described above, how I should go about accessing the BioDataViewModel such that I can invoke onSaveEverything when Save is clicked in the TopAppBar.
What I have tried
private val performSave by mutableStateOf(false)
Scaffold(
topBar = {
TopAppBar(currentDestination){
//save is clicked.
performSave = true
}
})
{
NavHost(
navController = navController,
startDestination = homeNavigationRoute,
modifier = Modifier
.padding(padding)
.consumedWindowInsets(padding),
) {
composable(route = bioDataRoute) {
val viewModel = hiltViewModel<BioDataViewModel>()
if (performSave){
viewModel.onSaveEverything()
}
BioDataScreen(
viewModel
)
}
}
}
The problem with the approach above is: how and when should I reset the state of performSave? Because if I do not set it to false, onSaveEverything would get called on every recomposition.
What would be the best way to engineer a solution for this? I checked to see if a similar situation was tackled in the Jetpack samples, but I found nothing there.
A:
I'm not sure if I understand you correctly, but you can define the BioDataViewModel in activity level, and you can access it in the TopAppBar like this
class MyActivity: ComponentActivity() {
// BioDataViewModel definition here
private val viewModel: BioDataViewModel by viewModels()
override fun onCreate(savedInstanceState: Bundle?) {
super.onCreate(savedInstanceState)
setContent {
Scaffold(
topBar = {
TopAppBar(currentDestination) {
//save is clicked.
viewModel.onSaveEverything() // call onSaveEverything here
}
})
{
...
...
}
...
...
Edit:
If you want to have the same instance of ViewModel from activity and NavGraph level, you can consider this, a reference from my other answer.
You can define the ViewModelStoreOwner in the navigation graph level.
NavHost(
navController = navController,
startDestination = homeNavigationRoute,
modifier = Modifier
.padding(padding)
.consumedWindowInsets(padding),
) {
val viewModelStoreOwner = checkNotNull(LocalViewModelStoreOwner.current) {
"LocalViewModelStoreOwner not available"
}
composable(route = bioDataRoute) {
val viewModel = hiltViewModel<BioDataViewModel>(viewModelStoreOwner)
if (performSave){
viewModel.onSaveEverything()
}
BioDataScreen(
viewModel
)
}
}
|
Access scoped viewModel from TopAppBar in jetpack compose
|
Folks I am stuck engineering a proper solution to access a viewModel scoped to a nav graph , from a button that exists in the TopAppBar in a compose application
Scaffold{
TopAppBar-> Contains the Save Button
Body->
BioDataGraph() -> Contains 5 screens to gather biodata information , and a viewmodel scoped to the graph
}
}
My BioDataViewModel looks like this
class BioDataViewModel{
fun gatherPersonalInformation()
fun gatherPhotos()
...
fun onSaveEverything()
}
The issue I came across is, as I described above, how I should go about accessing the BioDataViewModel such that I can invoke onSaveEverything when Save is clicked in the TopAppBar.
What I have tried
private val performSave by mutableStateOf(false)
Scaffold(
topBar = {
TopAppBar(currentDestination){
//save is clicked.
performSave = true
}
})
{
NavHost(
navController = navController,
startDestination = homeNavigationRoute,
modifier = Modifier
.padding(padding)
.consumedWindowInsets(padding),
) {
composable(route = bioDataRoute) {
val viewModel = hiltViewModel<BioDataViewModel>()
if (performSave){
viewModel.onSaveEverything()
}
BioDataScreen(
viewModel
)
}
}
}
The problem with the approach above is: how and when should I reset the state of performSave? Because if I do not set it to false, onSaveEverything would get called on every recomposition.
What would be the best way to engineer a solution for this? I checked to see if a similar situation was tackled in the Jetpack samples, but I found nothing there.
|
[
"I'm not sure if I understand you correctly, but you can define the BioDataViewModel in activity level, and you can access it in the TopAppBar like this\nclass MyActivity: ComponentActivity() {\n\n // BioDataViewModel definition here\n private val viewModel: BioDataViewModel by viewModels()\n\n override fun onCreate(savedInstanceState: Bundle?) {\n super.onCreate(savedInstanceState)\n setContent {\n Scaffold(\n topBar = {\n TopAppBar(currentDestination) {\n //save is clicked.\n viewModel.onSaveEverything() // call onSaveEverything here\n }\n })\n {\n ...\n ...\n }\n\n ...\n ...\n\nEdit:\nIf you want to have the same instance of ViewModel from activity and NavGraph level, you can consider this, a reference from my other answer.\nYou can define the ViewModelStoreOwner in the navigation graph level.\nNavHost(\n navController = navController,\n startDestination = homeNavigationRoute,\n modifier = Modifier\n .padding(padding)\n .consumedWindowInsets(padding),\n) {\n\n val viewModelStoreOwner = checkNotNull(LocalViewModelStoreOwner.current) {\n \"LocalViewModelStoreOwner not available\"\n }\n\n composable(route = bioDataRoute) {\n\n val viewModel = hiltViewModel<BioDataViewModel>(viewModelStoreOwner)\n\n if (performSave){\n viewModel.onSaveEverything()\n }\n\n BioDataScreen(\n viewModel\n )\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"android_jetpack_compose"
] |
stackoverflow_0074665289_android_jetpack_compose.txt
|
Q:
Connecting mysql to C++ in VS Code, compiled succesfully but not running
I'm new to databases, so I wanted to make a program to perform simple queries in MySQL with C++ in VS Code, on Windows 10. Last time I had problems with linking the library, and now it seems like I managed to fix them. I have the following code, taken from another source, with my system configuration added:
#include <iostream>
#include <windows.h>
#include "C:/Program Files/MySQL/MySQL Server 8.0/include/mysql.h"
int main(){
MYSQL* conn;
conn = mysql_init(0);
conn = mysql_real_connect(conn, "localhost", "root", "password", "project", 0, NULL, 0);
if(conn){
std::cout << "Connected" << std::endl;
} else {
std::cout << "Not connected" << std::endl;
}
}
When I compile it with the command g++ main.cpp -Wall -Werror -I "C:/Program Files/MySQL/MySQL Server 8.0/include" -L "C:/Program Files/MySQL/MySQL Server 8.0/lib" -lmysql, it compiles without reporting any errors. However, if I try to run it, the program simply terminates. I don't understand what the problem could be. I suspect the problem might be with the MySQL connector, but as I said I'm new to it, so I still have doubts. So I would really appreciate it if you could help me figure out how to proceed further.
I looked for similar questions, and they helped me only for linking to the library.
A:
Fixed it. Just had to add libmysql.dll in a folder with the main.cpp file. Thank you all for your advices.
|
Connecting mysql to C++ in VS Code, compiled succesfully but not running
|
I'm new to databases, so I wanted to make a program to perform simple queries in MySQL with C++ in VS Code, on Windows 10. Last time I had problems with linking the library, and now it seems like I managed to fix them. I have the following code, taken from another source, with my system configuration added:
#include <iostream>
#include <windows.h>
#include "C:/Program Files/MySQL/MySQL Server 8.0/include/mysql.h"
int main(){
MYSQL* conn;
conn = mysql_init(0);
conn = mysql_real_connect(conn, "localhost", "root", "password", "project", 0, NULL, 0);
if(conn){
std::cout << "Connected" << std::endl;
} else {
std::cout << "Not connected" << std::endl;
}
}
When I compile it with the command g++ main.cpp -Wall -Werror -I "C:/Program Files/MySQL/MySQL Server 8.0/include" -L "C:/Program Files/MySQL/MySQL Server 8.0/lib" -lmysql, it compiles without reporting any errors. However, if I try to run it, the program simply terminates. I don't understand what the problem could be. I suspect the problem might be with the MySQL connector, but as I said I'm new to it, so I still have doubts. So I would really appreciate it if you could help me figure out how to proceed further.
I looked for similar questions, and they helped me only for linking to the library.
|
[
"Fixed it. Just had to add libmysql.dll in a folder with the main.cpp file. Thank you all for your advices.\n"
] |
[
0
] |
[
"mysql_init can return NULL:\n\nAn initialized MYSQL* handler. NULL if there was insufficient memory to allocate a new object.\n\nWouldn't be safer to check its value before second call?\n"
] |
[
-1
] |
[
"c++",
"compilation",
"libraries",
"mysql",
"visual_studio_code"
] |
stackoverflow_0074495449_c++_compilation_libraries_mysql_visual_studio_code.txt
|
Q:
How can I use Selenium, Webdriver-manager, Chromedriver on virtual environment?
I am using a GitHub Codespace to create an automated web scraping application using webdriver-manager with Selenium.
I have tried: How can we use Selenium Webdriver in collab.research.google.com?
!pip install selenium
!apt-get update # to update ubuntu to correctly run apt install
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
import sys
sys.path.insert(0,'/usr/lib/chromium-browser/chromedriver')
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('chromedriver',options=chrome_options)
wd.get("https://www.webite-url.com")
But it did not work!
Can you help me set up webdriver-manager in GitHub Codespaces or share some link?
A:
Please consider rephrasing the question in a better way. The problem is not clear.
A:
please check https://stackoverflow.com/posts/46929945
this should work for you,
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
options = Options()
options.headless = True
driver = webdriver.Chrome(CHROMEDRIVER_PATH, options=options)
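Since the question specifically mentions webdriver-manager, here is a minimal sketch of wiring it together (assuming Selenium 4 and the webdriver-manager package are installed; the URL is a placeholder):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from webdriver_manager.chrome import ChromeDriverManager

options = Options()
options.add_argument('--headless')             # no display inside a Codespace
options.add_argument('--no-sandbox')
options.add_argument('--disable-dev-shm-usage')

# webdriver-manager downloads a matching chromedriver and returns its path
service = Service(ChromeDriverManager().install())
driver = webdriver.Chrome(service=service, options=options)

driver.get("https://www.example.com")          # placeholder URL
print(driver.title)
driver.quit()
Note that webdriver-manager only manages the driver binary, not the browser itself, so a Chrome or Chromium package still has to be installed in the Codespace container.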
|
How can I use Selenium, Webdriver-manager, Chromedriver on virtual environment?
|
I am using a GitHub Codespace to create an automated web scraping application using webdriver-manager with Selenium.
I have tried: How can we use Selenium Webdriver in collab.research.google.com?
!pip install selenium
!apt-get update # to update ubuntu to correctly run apt install
!apt install chromium-chromedriver
!cp /usr/lib/chromium-browser/chromedriver /usr/bin
import sys
sys.path.insert(0,'/usr/lib/chromium-browser/chromedriver')
from selenium import webdriver
chrome_options = webdriver.ChromeOptions()
chrome_options.add_argument('--headless')
chrome_options.add_argument('--no-sandbox')
chrome_options.add_argument('--disable-dev-shm-usage')
wd = webdriver.Chrome('chromedriver',options=chrome_options)
wd.get("https://www.webite-url.com")
But it did not work!
Can you help me set up webdriver-manager in GitHub Codespaces or share some link?
|
[
"Please consider rephrasing the question in a better way. The problematic is not clear.\n",
"please check https://stackoverflow.com/posts/46929945\nthis should work for you,\nfrom selenium import webdriver\nfrom selenium.webdriver.chrome.options import Options\noptions = Options()\noptions.headless = True\ndriver = webdriver.Chrome(CHROMEDRIVER_PATH, options=options)\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"codespaces",
"jupyter_notebook",
"python"
] |
stackoverflow_0074657070_codespaces_jupyter_notebook_python.txt
|
Q:
postman prerequisite test JAVA script convert into Jmeter
I have JavaScript code from Postman that I need to convert to JMeter.
Please help me find the JMeter functions for the code below, and which sampler or pre/post processors I need to use in JMeter to capture code, codeChallenge, and state.
enter image description here
enter image description here
PFA the postman screenshot
Thanks
A:
First of all, don't post code as image. Never.
With regards to your question itself:
JMeter's equivalent to pm.CollectionVariables.set() is vars.put() so you need to use the following constructions:
vars.put('code', code)
vars.put('codeChallenge', codeChallenge)
where vars stands for JMeterVariables class instance, see Top 8 JMeter Java Classes You Should Be Using with Groovy article for more information on this and other JMeter API shorthands
if you want to generate random alphabetic string of 10 characters you can use RandomStringUtils helper class:
vars.put('state',org.apache.commons.lang3.RandomStringUtils.randomAlphabetic(10))
In general it's possible to use JavaScript in JMeter so you can just copy and paste your code changing Postman-specific APIs to JMeter-specific ones.
however Groovy is the recommended scripting language
|
postman prerequisite test JAVA script convert into Jmeter
|
I have JavaScript code from Postman that I need to convert to JMeter.
Please help me find the JMeter functions for the code below, and which sampler or pre/post processors I need to use in JMeter to capture code, codeChallenge, and state.
enter image description here
enter image description here
PFA the postman screenshot
Thanks
|
[
"First of all, don't post code as image. Never.\nWith regards to your question itself:\n\nJMeter's equivalent to pm.CollectionVariables.set() is vars.put() so you need to use the following constructions:\nvars.put('code', code)\nvars.put('codeChallenge', codeChallenge)\n\nwhere vars stands for JMeterVariables class instance, see Top 8 JMeter Java Classes You Should Be Using with Groovy article for more information on this and other JMeter API shorthands\nif you want to generate random alphabetic string of 10 characters you can use RandomStringUtils helper class:\nvars.put('state',org.apache.commons.lang3.RandomStringUtils.randomAlphabetic(10))\n\n\n\nIn general it's possible to use JavaScript in JMeter so you can just copy and paste your code changing Postman-specific APIs to JMeter-specific ones.\n\nhowever Groovy is the recommended scripting language\n"
] |
[
0
] |
[] |
[] |
[
"jmeter"
] |
stackoverflow_0074659479_jmeter.txt
|
Q:
nested looping C++
I want to ask about this problem.
This output is the expected output:
*
*#
*#%
*#%*
*#%*#
*#%*#%
and this is my solution
#include <iostream>
using namespace std;
int main(){
int a,b,n;
cout << "Input the row";
cin >> n;
for (a = 1; a <= n; a++){
for(b = 1; b <= a; b++){
if (b == 1 || b == 1 + 3){
cout << "*";
}
if (b ==2 || b == 2 + 3){
cout << "#";
}
if (b ==3 || b == 3 + 3){
cout << "%";
}
}
cout << endl;
}
}
This solution only works if n = 6. What should I do if I want this to work for every row count n that the user inputs?
Thank you in advance.
A:
To make your solution work for any value of n, you can use the modulo operator % to check whether a given value of b is the first, second, or third element of each row.
Here is one way you could modify your code to do this:
#include <iostream>
using namespace std;
int main() {
int a, b, n;
cout << "Input the row: ";
cin >> n;
for (a = 1; a <= n; a++) {
for (b = 1; b <= a; b++) {
// Use the modulo operator to check whether b is the first, second, or third element of each row
if (b % 3 == 1) {
cout << "*";
} else {
if (b % 3 == 2) {
cout << "#";
} else {
cout << "%";
}
}
}
cout << endl;
}
return 0;
}
With this change, the code will output the correct pattern for any value of n.
A:
Here, I tried using the modulo "%" on your if's
#include <iostream>
using namespace std;
int main(){
int a,b,n;
cout << "Input the row";
cin >> n;
for (a = 1; a <= n; a++){
for(b = 1; b <= a; b++){
// After every first digits will cout #
if (b % 3 == 2){
cout << "#";
}
// The first after the third digit will cout *
if (b % 3 == 1){
cout << "*";
}
// The third digit after the second digit will cout %
if (b % 3 == 0){
cout << "%";
}
}
cout << endl;
}
}
A:
Just adding a nice optimisation (note: C++ loops naturally go up from 0 to not including n, i.e. for(int i = 0; i < n; ++i) – this is especially relevant if you are indexing arrays which have a first index of 0 and last of n - 1, while n already is invalid!).
While you do use b % 3 to decide which character and you indeed can use this by chaining if(){} else if(){} else{} (where a switch() { case: case: default: } actually would have been preferrable) you can have a much more compact version as follows (and even more efficient as it avoids conditional branching):
for(int b = 0; b < a; ++b)
{
std::cout << "*#%"[b % 3];
}
The C-string literal "*#%" actually represents an array of char with length four (including the terminating null character) – and you can index it just like any other array you have explicitly defined (like int n[SOME_LIMIT]; n[7] = 1210;)...
|
nested looping C++
|
I want to ask about this problem.
This output is the expected output:
*
*#
*#%
*#%*
*#%*#
*#%*#%
and this is my solution
#include <iostream>
using namespace std;
int main(){
int a,b,n;
cout << "Input the row";
cin >> n;
for (a = 1; a <= n; a++){
for(b = 1; b <= a; b++){
if (b == 1 || b == 1 + 3){
cout << "*";
}
if (b ==2 || b == 2 + 3){
cout << "#";
}
if (b ==3 || b == 3 + 3){
cout << "%";
}
}
cout << endl;
}
}
This solution only works if n = 6. What should I do if I want this to work for every row count n that the user inputs?
Thank you in advance.
|
[
"To make your solution work for any value of n, you can use the modulo operator % to check whether a given value of b is the first, second, or third element of each row.\nHere is one way you could modify your code to do this:\n#include <iostream>\n\nusing namespace std;\n\nint main() {\n int a, b, n;\n\n cout << \"Input the row: \";\n cin >> n;\n\n for (a = 1; a <= n; a++) {\n for (b = 1; b <= a; b++) {\n // Use the modulo operator to check whether b is the first, second, or third element of each row\n if (b % 3 == 1) {\n cout << \"*\";\n } else {\n if (b % 3 == 2) {\n cout << \"#\";\n } else {\n cout << \"%\";\n }\n }\n }\n cout << endl;\n }\n\n return 0;\n}\n\nWith this change, the code will output the correct pattern for any value of n.\n",
"Here, I tried using the modulo \"%\" on your if's\n#include <iostream>\n\nusing namespace std;\n\n\nint main(){\n\n int a,b,n;\n\n cout << \"Input the row\";\n cin >> n;\n\n\n for (a = 1; a <= n; a++){\n for(b = 1; b <= a; b++){\n // After every first digits will cout #\n if (b % 3 == 2){\n cout << \"#\";\n }\n // The first after the third digit will cout *\n if (b % 3 == 1){\n cout << \"*\";\n }\n // The third digit after the second digit will cout % \n if (b % 3 == 0){\n cout << \"%\";\n }\n }\n cout << endl;\n }\n}\n\n",
"Just adding a nice optimisation (note: C++ loops naturally go up from 0 to not including n, i.e. for(int i = 0; i < n; ++i) – this is especially relevant if you are indexing arrays which have a first index of 0 and last of n - 1, while n already is invalid!).\nWhile you do use b % 3 to decide which character and you indeed can use this by chaining if(){} else if(){} else{} (where a switch() { case: case: default: } actually would have been preferrable) you can have a much more compact version as follows (and even more efficient as it avoids conditional branching):\nfor(int b = 0; b < a; ++b)\n{\n std::cout << \"*#%\"[b % 3];\n}\n\nThe C-string literal \"*#%\" actually represents an array of char with length four (including the terminating null character) – and you can index it just like any other array you have explicitly defined (like int n[SOME_LIMIT]; n[7] = 1210;)...\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"c++",
"nested_loops"
] |
stackoverflow_0074665170_c++_nested_loops.txt
|
Q:
Binance TIMESTAMP ERROR (Timestamp for this request is outside of the recvWindow. M 1)
I'm using the Binance broker API, and when executing my code,
I got this error:
data: {
code: -1021,
msg: 'Timestamp for this request is outside of the recvWindow.'
}
Then I used the Binance server timestamp and got another error:
data: { code: -1022, msg: 'Signature for this request is not valid.' }
----- My sample code -------
'use strict'
//Modules
const crypto = require('crypto');
const axios = require('axios');
const moment = require('moment');
const keys = require('./config/keys');
//Keys
const apiKey = keys.keys.apiKey;
async function api_call(config){
var response = {}
try {
response = await axios(config);
response = {
error: false,
data : response.data
}
} catch (error) {
response = {
error: true,
data : error
}
}
return response;
}
async function get_binance_server_time(recvWindow=50000){
var timestamp = moment().unix();
var serverTime = 0;
var config = {
method: 'get',
url: `https://api.binance.com/api/v3/time`,
headers: {
'Content-Type': 'application/json',
'X-MBX-APIKEY': apiKey
}
};
var response = await api_call(config);
if(response.error){
response.data = {
serverTime : serverTime
}
}
serverTime = response.data.serverTime
console.log(timestamp);
console.log(response.data.serverTime);
if (timestamp < (serverTime + 1000) && (serverTime - timestamp) <= recvWindow) {
console.log("process");
} else {
console.log("Rejected");
}
return serverTime;
}
async function get_broker_account_information( _signature = null){
// Make the api call here
var serverTime = await get_binance_server_time();
var _timestamp = serverTime;
const query_string = `timestamp=${_timestamp}`;
const recvWindow = 50000
const signature = crypto.createHmac('sha256', apiKey).update(query_string).digest('hex')
var config = {
method: 'get',
url: `https://api.binance.com/sapi/v1/broker/info?timestamp=${_timestamp}&signature=${signature}&recvWindow=${recvWindow}`,
headers: {
'Content-Type': 'application/json',
'X-MBX-APIKEY': apiKey
}
};
var response = await api_call(config)
console.log(response.data);
return response;
}
// create_sub_account()
get_broker_account_information()
// get_binance_server_time()
A:
Binance requires Unix timestamp in Milliseconds.
So you should change
var timestamp = moment().unix();
to
var timestamp = moment().unix()*1000;
Some time sync issues can be resolved by using an offset of about 15 seconds:
const offset = 15000;
var timestamp = moment().unix()*1000 - offset; // works flawlessly in my case
Also try using
const recvWindow = 60000 // maximum allowed
Important:
Your signature has to be generated from the secret key, not the API key. You need both: the API key in the header and the secret key in the signature calculation.
To generate a valid signature, all query parameters must be in the queryString that you use to generate the signature.
Make a parameters object and convert it to a queryString:
var params = {
'timestamp': _timeStamp,
'recvWindow': 60000,
// other query parameters
};
var queryString = Object.keys(params).map(key => key + '=' + params[key]).join('&');
Then generate signature:
const signature = crypto.createHmac('sha256', apiSecret).update(queryString).digest('hex')
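Putting those pieces together, a minimal sketch of the corrected request could look like this (apiKey and apiSecret are assumed to come from your config file, and crypto/axios are required as in the question; the snippet is meant to live inside an async function like the original):
const recvWindow = 60000;
const timestamp = Date.now(); // already in milliseconds

// every parameter that is sent must also be part of the signed query string
const params = {
    timestamp: timestamp,
    recvWindow: recvWindow
};
const queryString = Object.keys(params).map(key => key + '=' + params[key]).join('&');
const signature = crypto.createHmac('sha256', apiSecret).update(queryString).digest('hex');

const config = {
    method: 'get',
    // the signature is appended last, after the exact string that was signed
    url: `https://api.binance.com/sapi/v1/broker/info?${queryString}&signature=${signature}`,
    headers: {
        'Content-Type': 'application/json',
        'X-MBX-APIKEY': apiKey
    }
};
const response = await api_call(config);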
A:
Just syncing the system time with the network worked for me.
|
Binance TIMESTAMP ERROR (Timestamp for this request is outside of the recvWindow. M 1)
|
I'm using binance broker api, when executing my code.
I got this error :
data: {
code: -1021,enter code here
msg: 'Timestamp for this request is outside of the recvWindow.'
}
Then I have used binance server timestamp and get another error :
data: { code: -1022, msg: 'Signature for this request is not valid.' }
----- My sample code -------
'use strict'
//Modules
const crypto = require('crypto');
const axios = require('axios');
const moment = require('moment');
const keys = require('./config/keys');
//Keys
const apiKey = keys.keys.apiKey;
async function api_call(config){
var response = {}
try {
response = await axios(config);
response = {
error: false,
data : response.data
}
} catch (error) {
response = {
error: true,
data : error
}
}
return response;
}
async function get_binance_server_time(recvWindow=50000){
var timestamp = moment().unix();
var serverTime = 0;
var config = {
method: 'get',
url: `https://api.binance.com/api/v3/time`,
headers: {
'Content-Type': 'application/json',
'X-MBX-APIKEY': apiKey
}
};
var response = await api_call(config);
if(response.error){
response.data = {
serverTime : serverTime
}
}
serverTime = response.data.serverTime
console.log(timestamp);
console.log(response.data.serverTime);
if (timestamp < (serverTime + 1000) && (serverTime - timestamp) <= recvWindow) {
console.log("process");
} else {
console.log("Rejected");
}
return serverTime;
}
async function get_broker_account_information( _signature = null){
// Make the api call here
var serverTime = await get_binance_server_time();
var _timestamp = serverTime;
const query_string = `timestamp=${_timestamp}`;
const recvWindow = 50000
const signature = crypto.createHmac('sha256', apiKey).update(query_string).digest('hex')
var config = {
method: 'get',
url: `https://api.binance.com/sapi/v1/broker/info?timestamp=${_timestamp}&signature=${signature}&recvWindow=${recvWindow}`,
headers: {
'Content-Type': 'application/json',
'X-MBX-APIKEY': apiKey
}
};
var response = await api_call(config)
console.log(response.data);
return response;
}
// create_sub_account()
get_broker_account_information()
// get_binance_server_time()
|
[
"Binance requires Unix timestamp in Milliseconds.\nSo you should change\nvar timestamp = moment().unix();\n\nto\nvar timestamp = moment().unix()*1000;\n\nSome time sync issues can be resolved by using an offset of about 15 seconds:\nconst offset = 15000;\nvar timestamp = moment().unix()*1000 - offset; // works flawlessly in my case\n\nAlso try using\nconst recvWindow = 60000 // maximum allowed\n\nImportant:\nYour signature has to be generated from secret key, not api key. You need both, api key in header, secret key in signature calculation.\nTo gerenate a valid signature, all query parameters must be in the queryString, which you use to generate the signature.\nMake a parameters object and convert it to a queryString:\nvar params = {\n 'timestamp': _timeStamp,\n 'recvWindow: 60000,\n // other query parameters\n};\n\nvar queryString = Object.keys(params).map(key => key + '=' + params[key]).join('&');\n\nThen generate signature:\nconst signature = crypto.createHmac('sha256', apiSecret).update(queryString).digest('hex')\n\n",
"Just sync system time with network, is work for me.\n"
] |
[
2,
0
] |
[] |
[] |
[
"binance",
"unix_timestamp"
] |
stackoverflow_0069208437_binance_unix_timestamp.txt
|
Q:
Create and Remove Components Dynamically in Angular 13+
I want to delete child component, parent will delete the last child only and after that, it shows that index is -1 from hostView and can't delete the child from view
this is my Child View
<button (click)="remove_me()" >I am a Child {{unique_key}}, click to Remove
</button>
this is my Child Component
import { Component } from '@angular/core';
import { ParentComponent } from '../parent/parent.component';
@Component({
selector: 'app-child',
templateUrl: './child.component.html',
styleUrls: ['./child.component.css'],
})
export class ChildComponent {
public unique_key: number;
public parentRef: ParentComponent;
constructor() {}
remove_me() {
console.log(this.unique_key);
this.parentRef.remove(this.unique_key);
}
}
this is my Parent View
<button type="button" (click)="AddChild()">
I am Parent, Click to create Child
</button>
<div>
<ng-template #viewContainerRef></ng-template>
</div>
this is my Parent Component
import {
ComponentRef,
ViewContainerRef,
ViewChild,
Component,
} from '@angular/core';
import { ChildComponent } from '../child/child.component';
@Component({
selector: 'app-parent',
templateUrl: './parent.component.html',
styleUrls: ['./parent.component.css'],
})
export class ParentComponent {
@ViewChild('viewContainerRef', { read: ViewContainerRef })
vcr!: ViewContainerRef;
ref!: ComponentRef<ChildComponent>;
child_unique_key: number = 0;
componentsReferences = Array<ComponentRef<ChildComponent>>();
constructor() {}
AddChild() {
this.ref = this.vcr.createComponent(ChildComponent);
let childComponent = this.ref.instance;
childComponent.unique_key = ++this.child_unique_key;
childComponent.parentRef = this;
}
remove(key: number) {
const index = this.vcr.indexOf(this.ref.hostView);
console.log(index);
if (index != -1) {
this.vcr.remove(index);
}
// removing component from the list
this.componentsReferences = this.componentsReferences.filter(
(x) => x.instance.unique_key !== key
);
}
}
I tried the methods from older Angular versions that supports ComponentFactoryResolver, but I want to upgrade the version of Angular
A:
There are some obvious errors in your code:
You initialize a property
componentsReferences = Array<ComponentRef<ChildComponent>>();
but never populate it.
Even if you filter that element out of the list, you don't remove it from the view container, which is why Angular won't remove the component. (https://angular.io/guide/dynamic-component-loader)
But I have to say that there are quite a few more issues with your code.
For example, you should never have a reference from your child component back to your parent component.
I suggest you have a look at and follow through with the Angular Tour of Heroes, because dynamic component creation is quite a complex topic and it seems that you should stick to some basics first.
For example could your desired outcome be done kinda like this:
export class ParentComponent {
uniqueKeys: number [] = [];
private counter = 0;
addKey() {
this.uniqueKeys.push(this.counter);
this.counter++;
}
removeLastKey() {
this.uniqueKeys.pop();
}
}
in parent-component.html:
<app-child *ngFor="let key of uniqueKeys" [key]=key></app-child>
and in child-component.html you can have your @Input() for the key property if needed...
Just check https://angular.io/tutorial
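For completeness, if you do want to keep the dynamic createComponent() approach from the question, a minimal sketch of how the parent could track and remove a child by key might look like this (it assumes the componentsReferences array from the question is actually populated in AddChild()):
AddChild() {
  const ref = this.vcr.createComponent(ChildComponent);
  ref.instance.unique_key = ++this.child_unique_key;
  ref.instance.parentRef = this;
  this.componentsReferences.push(ref); // keep the reference so remove() can find it later
}

remove(key: number) {
  const ref = this.componentsReferences.find(r => r.instance.unique_key === key);
  if (!ref) return;
  const index = this.vcr.indexOf(ref.hostView);
  if (index !== -1) {
    this.vcr.remove(index); // actually detach the child from the view container
  }
  this.componentsReferences = this.componentsReferences.filter(r => r !== ref);
}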
A:
To remove a child component in Angular, you can use the ViewContainerRef class. In your ParentComponent, you can inject ViewContainerRef in the constructor and use its remove() method to remove the child component. The remove() method takes the index of the child component in the view container as its argument. You can use the indexOf() method to get the index of the child component.
Here is an example of how you can use ViewContainerRef to remove a child component in Angular:
import { Component, ComponentRef, ViewChild, ViewContainerRef } from '@angular/core';
import { ChildComponent } from '../child/child.component';
@Component({
selector: 'app-parent',
templateUrl: './parent.component.html',
styleUrls: ['./parent.component.css'],
})
export class ParentComponent {
@ViewChild('viewContainerRef', { read: ViewContainerRef })
vcr!: ViewContainerRef;
// reference to the dynamically created child, set when createComponent() is called
ref!: ComponentRef<ChildComponent>;
constructor() {}
remove() {
const index = this.vcr.indexOf(this.ref.hostView);
if (index != -1) {
this.vcr.remove(index);
}
}
}
In the remove() method, you can use the indexOf() method to get the index of the child component in the view container and then use the remove() method to remove the child component from the view container.
You can then call the remove() method from the child component when the user clicks the button to remove the child component.
Here is an example of how you can call the remove() method from the child component:
import { Component } from '@angular/core';
import { ParentComponent } from '../parent/parent.component';
@Component({
selector: 'app-child',
templateUrl: './child.component.html',
styleUrls: ['./child.component.css']
})
export class ChildComponent {
public unique_key: number;
public parentRef: ParentComponent;
constructor() {
}
remove_me() {
console.log(this.unique_key)
this.parentRef.remove();
}
}
In the remove_me() method, you can call the remove() method on the parentRef property to remove the child component.
I hope this helps!
|
Create and Remove Components Dynamically in Angular 13+
|
I want to delete child component, parent will delete the last child only and after that, it shows that index is -1 from hostView and can't delete the child from view
this is my Child View
<button (click)="remove_me()" >I am a Child {{unique_key}}, click to Remove
</button>
this is my Child Component
import { Component } from '@angular/core';
import { ParentComponent } from '../parent/parent.component';
@Component({
selector: 'app-child',
templateUrl: './child.component.html',
styleUrls: ['./child.component.css'],
})
export class ChildComponent {
public unique_key: number;
public parentRef: ParentComponent;
constructor() {}
remove_me() {
console.log(this.unique_key);
this.parentRef.remove(this.unique_key);
}
}
this is my Parent View
<button type="button" (click)="AddChild()">
I am Parent, Click to create Child
</button>
<div>
<ng-template #viewContainerRef></ng-template>
</div>
this is my Parent Component
import {
ComponentRef,
ViewContainerRef,
ViewChild,
Component,
} from '@angular/core';
import { ChildComponent } from '../child/child.component';
@Component({
selector: 'app-parent',
templateUrl: './parent.component.html',
styleUrls: ['./parent.component.css'],
})
export class ParentComponent {
@ViewChild('viewContainerRef', { read: ViewContainerRef })
vcr!: ViewContainerRef;
ref!: ComponentRef<ChildComponent>;
child_unique_key: number = 0;
componentsReferences = Array<ComponentRef<ChildComponent>>();
constructor() {}
AddChild() {
this.ref = this.vcr.createComponent(ChildComponent);
let childComponent = this.ref.instance;
childComponent.unique_key = ++this.child_unique_key;
childComponent.parentRef = this;
}
remove(key: number) {
const index = this.vcr.indexOf(this.ref.hostView);
console.log(index);
if (index != -1) {
this.vcr.remove(index);
}
// removing component from the list
this.componentsReferences = this.componentsReferences.filter(
(x) => x.instance.unique_key !== key
);
}
}
I tried the methods from older Angular versions that supports ComponentFactoryResolver, but I want to upgrade the version of Angular
|
[
"There are some obvious errors in your code:\n\nYou initialize a property\ncomponentsReferences = Array<ComponentRef<ChildComponent>>();\nbut never populate it.\neven if you filter out that element from that list you dont remove it from the view-container. Which is why angular wont remove the component. (https://angular.io/guide/dynamic-component-loader)\n\nBut I have to say that there are quite some more issues with your code.\nYou should never ever have a ref from your child-component to your parent component for example.\nI suggest you have a look at and follow through with the angular tour of heros because dynamic component creation is quite a complex topic and it seems that you should stick to some basics first.\nFor example could your desired outcome be done kinda like this:\nexport class ParentComponent {\n uniqueKeys: number [] = [];\n private counter = 0; \n\n addKey() {\n this.uniqueKeys.push(this.counter);\n this.counter++;\n }\n\n removeLastKey() {\n this.uniqueKeys.pop();\n }\n}\n\nin parent-component.html:\n <app-child *ngFor=\"let key of uniqueKeys\" [key]=key></app-child>\n\nand in child-component.html you can have your @Input() for the key property if needed...\nJust check https://angular.io/tutorial\n",
"To remove a child component in Angular, you can use the ViewContainerRef class. In your ParentComponent, you can inject ViewContainerRef in the constructor and use its remove() method to remove the child component. The remove() method takes the index of the child component in the view container as its argument. You can use the indexOf() method to get the index of the child component.\nHere is an example of how you can use ViewContainerRef to remove a child component in Angular:\nimport { ViewContainerRef } from '@angular/core';\n\n@Component({\n selector: 'app-parent',\n templateUrl: './parent.component.html',\n styleUrls: ['./parent.component.css'],\n})\nexport class ParentComponent {\n @ViewChild('viewContainerRef', { read: ViewContainerRef })\n vcr!: ViewContainerRef;\n\n constructor(private vcRef: ViewContainerRef) {}\n\n remove() {\n const index = this.vcr.indexOf(this.ref.hostView);\n\n if (index != -1) {\n this.vcr.remove(index);\n }\n }\n}\n\nIn the remove() method, you can use the indexOf() method to get the index of the child component in the view container and then use the remove() method to remove the child component from the view container.\nYou can then call the remove() method from the child component when the user clicks the button to remove the child component.\nHere is an example of how you can call the remove() method from the child component:\nimport { Component } from '@angular/core';\nimport { ParentComponent } from '../parent/parent.component';\n\n@Component({\n selector: 'app-child',\n templateUrl: './child.component.html',\n styleUrls: ['./child.component.css']\n})\nexport class ChildComponent {\n public unique_key: number;\n public parentRef: ParentComponent;\n\n constructor() {\n }\n\n remove_me() {\n console.log(this.unique_key)\n this.parentRef.remove();\n }\n}\n\nIn the remove_me() method, you can call the remove() method on the parentRef property to remove the child component.\nI hope this helps!\n"
] |
[
0,
0
] |
[] |
[] |
[
"angular",
"angular_components"
] |
stackoverflow_0074665141_angular_angular_components.txt
|
Q:
Shiny Slick R carousel with internal links to tab panels
I would like to create a shiny navbarPage dashboard that has a slickR carousel of images on the landing page. Each image should have an action button superimposed that links to a different tabPanel.
It should basically look like this:
Screenshot of toy app
Here is reproducible toy example that doesn't do the job:`
library(shiny)
library(slickR)
# ui
ui <- navbarPage(title = "", id = "pageid",
tabsetPanel(id="tabs",
tab1 <- tabPanel(title="Tab 1", value="tab1",
fluidRow(
slickROutput("slickr1"),
h1("Title", style =
"position: relative;
margin-top:-43%;
color:#4BACC6;
font-size:30px;
text-align: center"),
div(actionButton("action1", "Action",
style="position: relative;
margin-top: 15%;
color:#FFFEFB;
background-color:#4BACC6;
border-color:#4BACC6;"),
align="center"),
)),
tab2 <- tabPanel(title="Tab 2", value="tab2"),
tab3 <- tabPanel(title="Tab 3", value="tab3")
)
)
# server
server <- function(input, output, session) {
output$slickr1 <- renderSlickR({
slick1 <- slick_list(slick_div(
nba_player_logo$uri[1:3],
type = "img", links = NULL)
)
slickR(slick1) +
settings(dots = TRUE,
autoplay = TRUE)
})
observeEvent(input$action1, {
updateTabsetPanel(session, "tabs",
selected = "tab2")
})
}
`
This code superimposes the same action button and title on all three images on the carousel and I can't get slickR to run through both images and action buttons.
I have tried to create a second slick_div within the slick_list that runs through three different action buttons, like this:
`
# server
server <- function(input, output, session) {
buttons <- list(actionButton("action1", "Action1"),
actionButton("action2", "Action2"),
actionButton("action3", "Action3"))
output$slickr1 <- renderSlickR({
slick1 <- slick_list(slick_div(
nba_player_logo$uri[1:3],
type = "img", links = NULL),
slick_div(
buttons,
css = htmltools::css(display = "inline"),
links = NULL)
)
slickR(slick1) +
settings(dots = TRUE,
autoplay = TRUE)
})
observeEvent(input$action1, {
updateTabsetPanel(session, "tabs",
selected = "tab2")
})
}
`
But it somehow just ends up stacking all the images on top each other for slide 1 and all action buttons next to each other on slide 2, rather than running through them one by one with one image and one action button on each slide.
Alternatively, I would also be open to having the entire image link to a different tab and I thought about using the "links" option in slick_div (set to NULL in the toy example above), but I'm struggling to determine a url for each of tabs that I could assign to "links".
I'm new to shiny and would really appreciate your help!
A:
I'm not sure I understand. Is this what you want?
library(shiny)
library(slickR)
css <- "
.slidetitle {
position: relative;
color: #4BACC6;
font-size: 30px;
text-align: center;
}
.slidebutton {
position: relative;
margin-top: -15%;
color: #FFFEFB;
background-color: #4BACC6;
border-color: #4BACC6;
}
"
# ui
ui <- navbarPage(
title = "SlickR with buttons",
id = "tabs",
header = tags$head(tags$style(HTML(css))),
tabPanel(
title= "Tab 1", value = "tab1",
fluidRow(
slickR(slick_list(
tags$div(
tags$h1("Title 1", class = "slidetitle"),
tags$img(
src = "https://cdn.pixabay.com/photo/2018/07/31/22/08/lion-3576045__340.jpg",
height = 500
),
actionButton(
"action1", "Action 1", class = "slidebutton"
),
align = "center"
),
tags$div(
tags$h1("Title 2", class = "slidetitle"),
tags$img(
src = "https://cdn.pixabay.com/photo/2012/02/27/15/35/lion-17335__340.jpg",
height = 500
),
actionButton(
"action2", "Action 2", class = "slidebutton"
),
align = "center"
),
tags$div(
tags$h1("Title 3", class = "slidetitle"),
tags$img(
src = "https://www.aprenderjuntos.cl/wp-content/uploads/2020/08/LEON-SERIO-.jpg",
height = 500
),
actionButton(
"action3", "Action 3", class = "slidebutton"
),
align = "center"
)
)) + settings(autoplay = TRUE, dots = TRUE)
)
),
tabPanel(
title = "Tab 2", value = "tab2",
tags$h1("TAB 2")
),
tabPanel(
title = "Tab 3", value = "tab3",
tags$h1("TAB3")
)
)
# server
server <- function(input, output, session) {
observeEvent(input$action1, {
updateNavbarPage(session, "tabs", selected = "tab2")
})
}
shinyApp(ui, server)
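Note that the example above only wires up action1. To make every carousel button navigate, one can extend the server function with one observer per button (a hedged sketch; the target tabs are arbitrary):
server <- function(input, output, session) {
  observeEvent(input$action1, {
    updateNavbarPage(session, "tabs", selected = "tab2")
  })
  observeEvent(input$action2, {
    updateNavbarPage(session, "tabs", selected = "tab3")
  })
  observeEvent(input$action3, {
    updateNavbarPage(session, "tabs", selected = "tab2")  # pick whichever tab you need
  })
}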
|
Shiny Slick R carousel with internal links to tab panels
|
I would like to create a shiny navbarPage dashboard that has a slickR carousel of images on the landing page. Each image should have an action button superimposed that links to a different tabPanel.
It should basically look like this:
Screenshot of toy app
Here is reproducible toy example that doesn't do the job:`
library(shiny)
library(slickR)
# ui
ui <- navbarPage(title = "", id = "pageid",
tabsetPanel(id="tabs",
tab1 <- tabPanel(title="Tab 1", value="tab1",
fluidRow(
slickROutput("slickr1"),
h1("Title", style =
"position: relative;
margin-top:-43%;
color:#4BACC6;
font-size:30px;
text-align: center"),
div(actionButton("action1", "Action",
style="position: relative;
margin-top: 15%;
color:#FFFEFB;
background-color:#4BACC6;
border-color:#4BACC6;"),
align="center"),
)),
tab2 <- tabPanel(title="Tab 2", value="tab2"),
tab3 <- tabPanel(title="Tab 3", value="tab3")
)
)
# server
server <- function(input, output, session) {
output$slickr1 <- renderSlickR({
slick1 <- slick_list(slick_div(
nba_player_logo$uri[1:3],
type = "img", links = NULL)
)
slickR(slick1) +
settings(dots = TRUE,
autoplay = TRUE)
})
observeEvent(input$action1, {
updateTabsetPanel(session, "tabs",
selected = "tab2")
})
}
`
This code superimposes the same action button and title on all three images on the carousel and I can't get slickR to run through both images and action buttons.
I have tried to create a second slick_div within the slick_list that runs through three different action buttons, like this:
`
# server
server <- function(input, output, session) {
buttons <- list(actionButton("action1", "Action1"),
actionButton("action2", "Action2"),
actionButton("action3", "Action3"))
output$slickr1 <- renderSlickR({
slick1 <- slick_list(slick_div(
nba_player_logo$uri[1:3],
type = "img", links = NULL),
slick_div(
buttons,
css = htmltools::css(display = "inline"),
links = NULL)
)
slickR(slick1) +
settings(dots = TRUE,
autoplay = TRUE)
})
observeEvent(input$action1, {
updateTabsetPanel(session, "tabs",
selected = "tab2")
})
}
`
But it somehow just ends up stacking all the images on top each other for slide 1 and all action buttons next to each other on slide 2, rather than running through them one by one with one image and one action button on each slide.
Alternatively, I would also be open to having the entire image link to a different tab and I thought about using the "links" option in slick_div (set to NULL in the toy example above), but I'm struggling to determine a url for each of tabs that I could assign to "links".
I'm new to shiny and would really appreciate your help!
|
[
"I'm not sure to understand. Is it what you want?\nlibrary(shiny)\nlibrary(slickR)\n\ncss <- \"\n.slidetitle {\n position: relative;\n color: #4BACC6;\n font-size: 30px;\n text-align: center;\n}\n.slidebutton {\n position: relative;\n margin-top: -15%;\n color: #FFFEFB;\n background-color: #4BACC6;\n border-color: #4BACC6;\n}\n\"\n\n\n# ui\nui <- navbarPage(\n title = \"SlickR with buttons\", \n id = \"tabs\",\n header = tags$head(tags$style(HTML(css))),\n tabPanel(\n title= \"Tab 1\", value = \"tab1\",\n fluidRow(\n slickR(slick_list(\n tags$div(\n tags$h1(\"Title 1\", class = \"slidetitle\"),\n tags$img(\n src = \"https://cdn.pixabay.com/photo/2018/07/31/22/08/lion-3576045__340.jpg\",\n height = 500\n ),\n actionButton(\n \"action1\", \"Action 1\", class = \"slidebutton\"\n ),\n align = \"center\"\n ),\n tags$div(\n tags$h1(\"Title 2\", class = \"slidetitle\"),\n tags$img(\n src = \"https://cdn.pixabay.com/photo/2012/02/27/15/35/lion-17335__340.jpg\",\n height = 500\n ),\n actionButton(\n \"action2\", \"Action 2\", class = \"slidebutton\"\n ),\n align = \"center\"\n ),\n tags$div(\n tags$h1(\"Title 3\", class = \"slidetitle\"),\n tags$img(\n src = \"https://www.aprenderjuntos.cl/wp-content/uploads/2020/08/LEON-SERIO-.jpg\",\n height = 500\n ),\n actionButton(\n \"action3\", \"Action 3\", class = \"slidebutton\"\n ),\n align = \"center\"\n )\n )) + settings(autoplay = TRUE, dots = TRUE)\n )\n ),\n tabPanel(\n title = \"Tab 2\", value = \"tab2\",\n tags$h1(\"TAB 2\")\n ),\n tabPanel(\n title = \"Tab 3\", value = \"tab3\",\n tags$h1(\"TAB3\")\n )\n)\n\n# server\nserver <- function(input, output, session) {\n observeEvent(input$action1, {\n updateNavbarPage(session, \"tabs\", selected = \"tab2\")\n })\n}\n\nshinyApp(ui, server)\n\n"
] |
[
0
] |
[] |
[] |
[
"carousel",
"navbar",
"r",
"shiny",
"slickr"
] |
stackoverflow_0074664815_carousel_navbar_r_shiny_slickr.txt
|
Q:
TypeError: preprocess_input() got an unexpected keyword argument 'mode'
I was preprocessing the images as per the model’s requirement before passing input to the model.
I create the model as follows and I encounter the error.
TypeError: preprocess_input() got an unexpected keyword argument 'mode'
My code:
from keras.applications.vgg16 import preprocess_input
X = preprocess_input(X, mode='tf')
Can someone help me out with this?
A:
According to preprocess_input documentation, this function gets two arguments:
input(X)
data_format
you can use the following code:
X = preprocess_input(X, data_format=None)
A:
I've adjusted to tf.keras.applications.imagenet_utils.preprocess_input(X, data_format=None, mode='tf') and it worked for me.
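For reference, a minimal sketch of that variant (the array X here is only a placeholder batch of images; the import path and arguments are what matter):
import numpy as np
from tensorflow.keras.applications.imagenet_utils import preprocess_input

X = np.random.rand(4, 224, 224, 3) * 255.0             # placeholder image batch
X = preprocess_input(X, data_format=None, mode='tf')   # 'tf' mode scales pixels to [-1, 1]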
|
TypeError: preprocess_input() got an unexpected keyword argument 'mode'
|
I was preprocessing the images as per the model’s requirement before passing input to the model.
I create the model as follows and I encounter the error.
TypeError: preprocess_input() got an unexpected keyword argument 'mode'
My code:
from keras.applications.vgg16 import preprocess_input
X = preprocess_input(X, mode='tf')
Can someone help me out with this?
|
[
"According to preprocess_input documentation, this function gets two arguments:\n\ninput(X)\n\ndata_format\n\n\nyou can use the following code:\nX = preprocess_input(X, data_format=None) \n\n",
"I've adjusted to tf.keras.applications.imagenet_utils.preprocess_input(X, data_format=None, mode='tf') and it worked for me.\n"
] |
[
0,
0
] |
[] |
[] |
[
"deep_learning",
"keras",
"machine_learning",
"python_3.x"
] |
stackoverflow_0068776470_deep_learning_keras_machine_learning_python_3.x.txt
|
Q:
How can I increase my model performance in classification
Hi I am facing the problem that I have the dataset to tell if the person feels cold or not and the dataset given to me is known as the bad dataset and I want to maximize the accuracy and the precision of the model.
Right now the accuracy is 53% and the precision is 19%. The column description is:
Age AMV Met Clo Dwpt plane Rad-temp AirTemp MeanRad-temp Velocity ATurb VaporPressure Humidity PMV TaOutdoor RhOutdoor
mean 308.637202 0.100735 1.066003 0.778492 13.621447 0.217785 23.178861 23.450261 0.112439 18.265870 5.123996 42.529203 -0.073676 17.174585 61.100365
std 680.115105 1.102099 0.428978 0.221992 5.903044 1.041164 1.433390 1.502953 0.079041 25.041109 8.156136 15.061075 0.538016 10.665071 24.703896
min 0.000000 -3.000000 0.100000 0.150000 -1.953000 -7.420000 15.960000 16.610000 0.000000 0.000000 0.000000 7.400000 -4.170000 -24.900000 0.000000
25% 26.000000 -0.700000 1.000000 0.630000 9.600000 -0.230000 22.300000 22.588684 0.068000 0.320000 1.226667 29.300000 -0.400000 11.350000 53.769937
50% 35.000000 0.000000 1.100000 0.751700 14.100000 0.200000 23.136667 23.358438 0.100000 0.500000 1.550667 43.280000 -0.030000 18.200000 68.795799
75% 45.000000 1.000000 1.241468 0.880000 17.337500 0.600000 23.900000 24.250000 0.140000 38.815000 1.985333 55.500125 0.260000 26.600000 76.950000
max 1996.000000 3.000000 4.500000 2.130000 26.896750 11.700000 31.000000 37.445000 1.880000 102.450000 27.700000 79.300000 2.500000 32.350000 100.350000
I removed all the outliers using IQR and then smoothed the data using MinMax scaling.
I encoded the AMV for classification; the scale runs from -3 -2 -1 0 1 2 3 (very cold to hot), but all AMV values reside in 0 and 1. What can I do to increase accuracy and precision? Sorry if I couldn't explain it well, but I am really hoping for any help if possible.
A:
It sounds like you're trying to build a machine learning model to predict whether a person is feeling cold or not based on the dataset you provided. To improve the accuracy and precision of your model, there are several steps you can take.
First, make sure you're using the right evaluation metrics for your problem. Accuracy is not always the best metric to use, especially if your dataset is imbalanced (i.e. if there are significantly more instances of one class than the other). In this case, you may want to consider using precision and recall, which can give you a better understanding of how well your model is performing.
Next, you should try to improve the quality of your training data. This can include removing outliers, smoothing the data, and performing other preprocessing steps. You should also consider using more sophisticated techniques such as feature selection and dimensionality reduction to improve the predictive power of your model.
Finally, you should experiment with different machine learning algorithms to see which ones perform best on your dataset. This can involve trying out different model architectures, hyperparameters, and other settings to find the combination that produces the best results.
Overall, improving the accuracy and precision of your model will require a combination of data preprocessing, feature engineering, and algorithm selection. By following these steps, you should be able to improve the performance of your model and get better results.
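As a concrete starting point, a minimal sketch of evaluating precision/recall and compensating for class imbalance with scikit-learn might look like this (X and y are assumed to be your prepared feature matrix and encoded AMV labels):
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# class_weight="balanced" often helps when one class dominates the labels
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))  # per-class precision and recall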
|
How can I increase my model performance in classification
|
Hi I am facing the problem that I have the dataset to tell if the person feels cold or not and the dataset given to me is known as the bad dataset and I want to maximize the accuracy and the precision of the model.
Right now the aacuracy is 53% and precision is 19% the columns description is :-
Age AMV Met Clo Dwpt plane Rad-temp AirTemp MeanRad-temp Velocity ATurb VaporPressure Humidity PMV TaOutdoor RhOutdoor
mean 308.637202 0.100735 1.066003 0.778492 13.621447 0.217785 23.178861 23.450261 0.112439 18.265870 5.123996 42.529203 -0.073676 17.174585 61.100365
std 680.115105 1.102099 0.428978 0.221992 5.903044 1.041164 1.433390 1.502953 0.079041 25.041109 8.156136 15.061075 0.538016 10.665071 24.703896
min 0.000000 -3.000000 0.100000 0.150000 -1.953000 -7.420000 15.960000 16.610000 0.000000 0.000000 0.000000 7.400000 -4.170000 -24.900000 0.000000
25% 26.000000 -0.700000 1.000000 0.630000 9.600000 -0.230000 22.300000 22.588684 0.068000 0.320000 1.226667 29.300000 -0.400000 11.350000 53.769937
50% 35.000000 0.000000 1.100000 0.751700 14.100000 0.200000 23.136667 23.358438 0.100000 0.500000 1.550667 43.280000 -0.030000 18.200000 68.795799
75% 45.000000 1.000000 1.241468 0.880000 17.337500 0.600000 23.900000 24.250000 0.140000 38.815000 1.985333 55.500125 0.260000 26.600000 76.950000
max 1996.000000 3.000000 4.500000 2.130000 26.896750 11.700000 31.000000 37.445000 1.880000 102.450000 27.700000 79.300000 2.500000 32.350000 100.350000
I removed all the outliers using IQR and i even smoothen the data using MinMax after it
I encoded the AMV for classification we have the table from -3 -2 -1 0 1 2 3 ranges from very cold to hot but all values in AMV reside in 0 and 1 what can i do to increase accuracy and precision. Sorry if I couldnt explain well but I am really hoping for any help if possible
|
[
"It sounds like you're trying to build a machine learning model to predict whether a person is feeling cold or not based on the dataset you provided. To improve the accuracy and precision of your model, there are several steps you can take.\nFirst, make sure you're using the right evaluation metrics for your problem. Accuracy is not always the best metric to use, especially if your dataset is imbalanced (i.e. if there are significantly more instances of one class than the other). In this case, you may want to consider using precision and recall, which can give you a better understanding of how well your model is performing.\nNext, you should try to improve the quality of your training data. This can include removing outliers, smoothing the data, and performing other preprocessing steps. You should also consider using more sophisticated techniques such as feature selection and dimensionality reduction to improve the predictive power of your model.\nFinally, you should experiment with different machine learning algorithms to see which ones perform best on your dataset. This can involve trying out different model architectures, hyperparameters, and other settings to find the combination that produces the best results.\nOverall, improving the accuracy and precision of your model will require a combination of data preprocessing, feature engineering, and algorithm selection. By following these steps, you should be able to improve the performance of your model and get better results.\n"
] |
[
0
] |
[] |
[] |
[
"numpy",
"pandas",
"python"
] |
stackoverflow_0074665415_numpy_pandas_python.txt
|
Q:
React with uniqid not generating id
I'm making a todo list for my first react project. I'm using the uniqid generator for my todo object, but the way I currently have it set up is returning an id of '' instead of a random id. It seems to work if I set the todo state outside of the handleSubmit function, but I don't understand why it won't work as I have it now. I would truly appreciate any guidance on what I may be missing that's causing this problem. Thanks!
import { useState } from 'react';
import Header from './components/Header';
import List from './components/List';
import React from 'react';
import uniqid from 'uniqid';
function App() {
const [todoList, setTodoList] = useState([]);
const [todo, setTodo] = useState({
name: '',
complete: false,
id: ''
});
const [error, setError] = useState(false);
function handleChange(e) {
setTodo({
...todo,
name: e.target.value,
});
}
function handleAdd(newTodo) {
setTodoList([
...todoList,
newTodo
])
}
function handleSubmit(e) {
e.preventDefault();
if (!todo.name) {
setError(true);
} else {
handleAdd({
...todo,
id: uniqid()
})
setError(false);
setTodo({
...todo,
name: ''
})
console.log(todo);
}
}
function handleDelete(id) {
const todoListItems = todoList.filter(todo => todo.id !== id);
setTodoList(todoListItems);
}
function handleClear() {
setTodoList([]);
}
A:
The use of uniqid() seems to be correct; it is just that setTodo will update the value of todo on the next render of the component, so the following console.log(todo) in the same event handler still only has the old value from before the update.
A live example showing the unique ID generated: stackblitz
The above example also changed handleChange and handleAdd to the following format, so that the previous value can be used without potential conflicts.
function handleChange(e) {
setTodo((prev) => {
return { ...prev, name: e.target.value };
});
}
function handleAdd(newTodo) {
setTodoList((prev) => [...prev, newTodo]);
}
Also added a useRef to empty the input after submit, because the original implementation reset the state value to "" but did not seem to clear the input itself.
import { useRef } from 'react';

const newTodoRef = useRef(null); // attached to the input via <input ref={newTodoRef} ... />

function handleSubmit(e) {
e.preventDefault();
...
setTodo((prev) => {
return { ...prev, name: "" };
});
newTodoRef.current.value = "";
}
Hope this will help.
|
React with uniqid not generating id
|
I'm making a todo list for my first react project. I'm using the uniqid generator for my todo object, but the way I currently have it set up is returning an id of '' instead of a random id. It seems to work if I set the todo state outside of the handleSubmit function, but I don't understand why it won't work as I have it now. I would truly appreciate any guidance on what I may be missing that's causing this problem. Thanks!
import { useState } from 'react';
import Header from './components/Header';
import List from './components/List';
import React from 'react';
import uniqid from 'uniqid';
function App() {
const [todoList, setTodoList] = useState([]);
const [todo, setTodo] = useState({
name: '',
complete: false,
id: ''
});
const [error, setError] = useState(false);
function handleChange(e) {
setTodo({
...todo,
name: e.target.value,
});
}
function handleAdd(newTodo) {
setTodoList([
...todoList,
newTodo
])
}
function handleSubmit(e) {
e.preventDefault();
if (!todo.name) {
setError(true);
} else {
handleAdd({
...todo,
id: uniqid()
})
setError(false);
setTodo({
...todo,
name: ''
})
console.log(todo);
}
}
function handleDelete(id) {
const todoListItems = todoList.filter(todo => todo.id !== id);
setTodoList(todoListItems);
}
function handleClear() {
setTodoList([]);
}
|
[
"The use of uniqid() seems to be correct, it is just that setTodo will update the value of todo in the next render of the component, but the following console.log(todo) in the same event block still only have the old value before the update.\nA live example showing the unique ID generated: stackblitz\nThe above example also changed handleChange and handleAdd to the following format, so that the previous value can be used without potential conflicts.\nfunction handleChange(e) {\n setTodo((prev) => {\n return { ...prev, name: e.target.value };\n });\n}\n\nfunction handleAdd(newTodo) {\n setTodoList((prev) => [...prev, newTodo]);\n}\n\nAlso added a useRef to empty the input after submit, because the original implement cleaned the state value as \"\", but did not seems to do the same for input.\nimport { useRef } from 'react';\n\nfunction handleSubmit(e) {\n e.preventDefault();\n\n ...\n\n setTodo((prev) => {\n return { ...prev, name: \"\" };\n });\n newTodoRef.current.value = \"\";\n}\n\nHope this will help.\n"
] |
[
1
] |
[] |
[] |
[
"css",
"html",
"javascript",
"reactjs"
] |
stackoverflow_0074664683_css_html_javascript_reactjs.txt
|
Q:
How do I convert relative directory names to absolute ones recognized by Bash?
When I input a relative directory path name using the 'read' command in Bash, it is not recognized as a valid directory by the -d test:
Relative directory path ~/tmp fails the if/-d test:
corba@samwise:~$ read CMDLINE_FILENAME
~/tmp
corba@samwise:~$ echo "$CMDLINE_FILENAME"
~/tmp
corba@samwise:~$ if [ -d $CMDLINE_FILENAME ]; then echo "Valid directory!"; fi
corba@samwise:~$
Relative directory path ../tmp fails the if/-d test:
corba@samwise:bin$ read CMDLINE_FILENAME
../tmp
corba@samwise:bin$ echo "$CMDLINE_FILENAME"
../tmp
corba@samwise:bin$ if [ -d $CMDLINE_FILENAME ]; then echo "Valid directory!"; fi
corba@samwise:~$
But an absolute directory path succeeds:
corba@samwise:~$ read CMDLINE_FILENAME
/home/corba/tmp
corba@samwise:~$ echo "$CMDLINE_FILENAME"
/home/corba/tmp
corba@samwise:~$ if [ -d $CMDLINE_FILENAME ]; then echo "Valid directory!"; fi
Valid directory!
corba@samwise:~$
I expected if [ -d ~/tmp ] and if [ -d ../tmp ] to be recognized as valid directory names. They were not.
I tried the solution that was offered in How to manually expand a special variable (ex: ~ tilde) in bash, but it only works for tilde and not for other relative paths like ../, ../../, or ./
I tried the variations of quoting/double-quoting and single/double square brackets in the if statement and get the same error in all cases. And these errors occur on MSYS2 (Git Bash) and Ubuntu 22.
A:
~/tmp actually is an absolute path from the bash point of view. However, this relies on bash substituting the ~ with the user account's home folder path, i.e. something like /home/users/someone/tmp, which clearly is an absolute path. That is a built-in feature of the bash shell called tilde expansion. A relative path would be something like ./tmp or just tmp, i.e. something that has to be interpreted relative to the current working directory of the process.
That substitution does not get applied here apparently. You can use a slightly altered command to achieve what you are looking for by explicitly forcing such expansion in a sub shell command:
if [ -d `eval echo $CMDLINE_FILENAME` ]; then echo "Valid directory!"; fi
That one works for absolute and relative paths, but also for entries that rely on bash tilde expansion:
bash:~$ read CMDLINE_FILENAME
~/tmp
bash:~$ echo "$CMDLINE_FILENAME"
~/tmp
bash:~$ if [ -d `eval echo $CMDLINE_FILENAME` ]; then echo "Valid directory!"; fi
Valid directory!
This does carry the risk of misuse, though: eval is a pretty mighty tool ...
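A somewhat safer alternative is to avoid eval entirely and only expand a leading ~ yourself with a parameter substitution (a sketch; relative paths like ../tmp are already valid test operands and are resolved against the current working directory):
read -r CMDLINE_FILENAME
# expand only a leading "~" to the home directory
CMDLINE_FILENAME="${CMDLINE_FILENAME/#\~/$HOME}"
if [ -d "$CMDLINE_FILENAME" ]; then echo "Valid directory!"; fi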
|
How do I convert relative directory names to absolute ones recognized by Bash?
|
When I input a relative directory path name using the 'read' command in Bash, it is not recognized as a valid directory by the -d test:
Relative directory path ~/tmp fails the if/-d test:
corba@samwise:~$ read CMDLINE_FILENAME
~/tmp
corba@samwise:~$ echo "$CMDLINE_FILENAME"
~/tmp
corba@samwise:~$ if [ -d $CMDLINE_FILENAME ]; then echo "Valid directory!"; fi
corba@samwise:~$
Relative directory path ../tmp fails the if/-d test:
corba@samwise:bin$ read CMDLINE_FILENAME
../tmp
corba@samwise:bin$ echo "$CMDLINE_FILENAME"
../tmp
corba@samwise:bin$ if [ -d $CMDLINE_FILENAME ]; then echo "Valid directory!"; fi
corba@samwise:~$
But an absolute directory path succeeds:
corba@samwise:~$ read CMDLINE_FILENAME
/home/corba/tmp
corba@samwise:~$ echo "$CMDLINE_FILENAME"
/home/corba/tmp
corba@samwise:~$ if [ -d $CMDLINE_FILENAME ]; then echo "Valid directory!"; fi
Valid directory!
corba@samwise:~$
I expected if [ -d ~/tmp ] and if [ -d ../tmp ] to be recognized as valid directory names. They were not.
I tried the solution that was offered in How to manually expand a special variable (ex: ~ tilde) in bash, but it only works for tilde and not for other relative paths like ../, ../../, or ./
I tried the variations of quoting/double-quoting and single/double square brackets in the if statement and get the same error in all cases. And these errors occur on MSYS2 (Git Bash) and Ubuntu 22.
|
[
"~/tmp actually is an absolute path from the bash point of view. However this relies on bash to substitute the ~ to the user accounts home folder path, so to something like /home/users/someone/tmp which clearly is an absolute path. That is a buildin feature of the bash shell usually called \"path expansion\". A relative path would be something like ./tmp or just tmp, so something that has to be interpreted relative to the current working directory of the process.\nThat substitution does not get applied here apparently. You can use a slightly altered command to achieve what you are looking for by explicitly forcing such expansion in a sub shell command:\nif [ -d `eval echo $CMDLINE_FILENAME` ]; then echo \"Valid directory!\"; fi\n\nThat one works for absolute and relative paths, but also for entries that rely on the bash path expansion:\nbash:~$ read CMDLINE_FILENAME \n~/tmp\nbash:~$ echo \"$CMDLINE_FILENAME\"\n~/tmp\nbash:~$ if [ -d `eval echo $CMDLINE_FILENAME`]; then echo \"Valid directory!\"; fi \nValid directory!\n\nThis does carry the risk of miss usage, though: the eval directive is a pretty mighty tool ...\n"
] |
[
2
] |
[] |
[] |
[
"bash",
"msys2",
"shell",
"ubuntu"
] |
stackoverflow_0074664889_bash_msys2_shell_ubuntu.txt
|
Q:
How do i loop through the fields of a form in python?
I am trying to find out how "complete" a users profile is as a percentage.
I want to loop through the fields of a form to see which are still left blank and return a completion percentage.
My question is how do I reference each form value in the loop without having to write out the name of each field?
Is this possible?
completeness = 0
length = 20
for x in form:
if form.fields.values[x] != '':
completeness += 1
percentage = (completeness / length) * 100
print(completeness)
print(percentage)
A:
To reference each form value in a loop without having to write out the name of each field in Python, you can use the items() method on the form.fields.values dictionary to iterate over the key-value pairs in the dictionary.
Here is an example of how you could update your code to use the items() method to loop over the form:
completeness = 0
length = 20
for key, value in form.fields.values.items():
if value != "":
completeness += 1
percentage = (completeness / length) * 100
print(completeness)
print(percentage)
Note that in the code above, the items() method is used to loop over the form.fields.values dictionary, and the key and value for each iteration are assigned to the key and value variables, respectively. This allows you to reference the value for each field without having to explicitly specify the field name.
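If this is a Django form (as the tags suggest), be aware that form.fields.values is a bound method of a dict rather than a dict itself, so the snippet above is only a sketch of the idea. A hedged Django-flavoured variant using the validated data could look like this (function and variable names are illustrative):
def profile_completeness(form):
    # form must be bound and validated (form.is_valid() was called)
    values = form.cleaned_data            # dict: field name -> submitted value
    total = len(form.fields)              # number of fields declared on the form
    filled = sum(1 for v in values.values() if v not in ("", None))
    return (filled / total) * 100 if total else 0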
|
How do i loop through the fields of a form in python?
|
I am trying to find out how "complete" a users profile is as a percentage.
I want to loop through the fields of a form to see which are still left blank and return a completion percentage.
My question is how do I reference each form value in the loop without having to write out the name of each field?
Is this possible?
completeness = 0
length = 20
for x in form:
if form.fields.values[x] != '':
completeness += 1
percentage = (completeness / length) * 100
print(completeness)
print(percentage)
|
[
"To reference each form value in a loop without having to write out the name of each field in Python, you can use the items() method on the form.fields.values dictionary to iterate over the key-value pairs in the dictionary.\nHere is an example of how you could update your code to use the items() method to loop over the form:\ncompleteness = 0\nlength = 20\nfor key, value in form.fields.values.items():\n if value != \"\":\n completeness += 1\n\npercentage = (completeness / length) * 100\nprint(completeness)\nprint(percentage)\n\nNote that in the code above, the items() method is used to loop over the form.fields.values dictionary, and the key and value for each iteration are assigned to the key and value variables, respectively. This allows you to reference the value for each field without having to explicitly specify the field name.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"django_forms",
"python"
] |
stackoverflow_0074665090_django_django_forms_python.txt
|
Q:
Count the number of employees going to work each day via SQL in databricks
Count the number of employees going to work each day via SQL in databricks
I have the following input table:
Staff name | First Day  | Last day
Staff A    | 2022-07-20 | 2022-11-14
Staff B    | 2022-07-16 | 2022-10-21
Staff C    | 2022-07-31 | 2022-09-28
Staff D    | 2022-07-21 | 2022-10-05
Staff E    | 2022-07-07 | 2022-10-22
Staff F    | 2022-07-15 | 2022-11-24
Staff G    | 2022-07-18 | 2022-11-03
Staff H    | 2022-08-07 | 2022-10-11
Staff I    | 2022-07-24 | 2022-11-27
Staff J    | 2022-07-10 | 2022-11-12
Staff K    | 2022-07-24 | 2022-08-31
Staff L    | 2022-07-22 | 2022-11-07
Staff M    | 2022-07-13 | 2022-10-21
Staff N    | 2022-08-21 | 2022-10-17
Staff O    | 2022-08-08 | 2022-10-26
Staff P    | 2022-07-18 | 2022-10-01
Staff Q    | 2022-07-19 | 2022-11-06
I want the output to look like this:
Date       | Count unique staff
2022-07-02 | 17
2022-07-03 | 47
2022-07-04 | 5
2022-07-05 | 5
2022-07-06 | 25
2022-07-07 | 27
2022-07-08 | 17
2022-07-09 | 58
2022-07-10 | 23
2022-07-11 | 53
2022-07-12 | 18
2022-07-13 | 29
2022-07-14 | 52
2022-07-15 | 7
2022-07-16 | 17
2022-07-17 | 37
2022-07-18 | 33
How to write the SQL command in databricks to get the above result? many thanks
A:
To count the number of unique staff going to work each day, you can use the following SQL query in Databricks:
SELECT Date, COUNT(DISTINCT Staff_name) AS Count_unique_staff
FROM input_table
WHERE Date BETWEEN First_day AND Last_day
GROUP BY Date
ORDER BY Date ASC
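Note that the input table itself has no Date column, so the query above needs a row per day first. In Databricks/Spark SQL one common way is to generate those days from each staff member's date range with sequence() and explode() — a sketch, assuming the column names from the question (back-quoted because they contain spaces):
SELECT Date,
       COUNT(DISTINCT `Staff name`) AS Count_unique_staff
FROM (
  SELECT `Staff name`,
         explode(sequence(to_date(`First Day`), to_date(`Last day`))) AS Date
  FROM input_table
)
GROUP BY Date
ORDER BY Date;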
|
Count the number of employees going to work each day via SQL in databricks
|
Count the number of employees going to work each day via SQL in databricks
I have the following input table:
Staff name
First Day
Last day
Staff A
2022-07-20
2022-11-14
Staff B
2022-07-16
2022-10-21
Staff C
2022-07-31
2022-09-28
Staff D
2022-07-21
2022-10-05
Staff E
2022-07-07
2022-10-22
Staff F
2022-07-15
2022-11-24
Staff G
2022-07-18
2022-11-03
Staff H
2022-08-07
2022-10-11
Staff I
2022-07-24
2022-11-27
Staff J
2022-07-10
2022-11-12
Staff K
2022-07-24
2022-08-31
Staff L
2022-07-22
2022-11-07
Staff M
2022-07-13
2022-10-21
Staff N
2022-08-21
2022-10-17
Staff O
2022-08-08
2022-10-26
Staff P
2022-07-18
2022-10-01
Staff Q
2022-07-19
2022-11-06
I want the output to look like this:
Date
Count unique staff
2022-07-02
17
2022-07-03
47
2022-07-04
5
2022-07-05
5
2022-07-06
25
2022-07-07
27
2022-07-08
17
2022-07-09
58
2022-07-10
23
2022-07-11
53
2022-07-12
18
2022-07-13
29
2022-07-14
52
2022-07-15
7
2022-07-16
17
2022-07-17
37
2022-07-18
33
How to write the SQL command in databricks to get the above result? many thanks
I want the output to look like this:
Date
Count unique staff
2022-07-02
17
2022-07-03
47
2022-07-04
5
2022-07-05
5
2022-07-06
25
2022-07-07
27
2022-07-08
17
2022-07-09
58
2022-07-10
23
2022-07-11
53
2022-07-12
18
2022-07-13
29
2022-07-14
52
2022-07-15
7
2022-07-16
17
2022-07-17
37
2022-07-18
33
|
[
"To count the number of unique staff going to work each day, you can use the following SQL query in Databricks:\nSELECT Date, COUNT(DISTINCT Staff_name) AS Count_unique_staff \nFROM input_table \nWHERE Date BETWEEN First_day AND Last_day \nGROUP BY Date \nORDER BY Date ASC\n\n"
] |
[
0
] |
[] |
[] |
[
"apache_spark_sql",
"databricks_sql",
"sql"
] |
stackoverflow_0074665420_apache_spark_sql_databricks_sql_sql.txt
|
Q:
Redirect the python output to C input in Linux shell
I am testing for buffer overflow. I use a python script to generate input that exceeds the buffer size in the C file.
I want to execute the vulnerable C file (in sudo), which needs input from a Python script. The python script generates the input for the C file using something like print output.
I tried python myPythonFile.py | ./myCFile, and it worked well (no seg fault since I turned the ASLR and other protections off).
But when I run the C file in sudo: python myPythonFile.py | sudo ./myCFile under the same condition, the program failed (displays seg fault). I am confused. Is there something wrong with my way to run it in sudo?
Anybody could help me out here?
Many thanks!!
A:
Start a new shell as superuser then run your commands within that:
sudo sh -c 'python myPythonFile.py | ./myCFile'
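If the failure was only caused by sudo reading its password prompt from the same stdin as the pipe, another option (a sketch; it assumes sudo caches credentials on your system) is to authenticate first and then run the original pipeline unchanged:
sudo -v                                   # prompts for the password on the terminal
python myPythonFile.py | sudo ./myCFile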
|
Redirect the python output to C input in Linux shell
|
I am testing for buffer overflow. I use a python script to generate input that exceeds the buffer size in the C file.
I want to execute the vulnerable C file (in sudo), which needs input from a Python script. The python script generates the input for the C file using something like print output.
I tried python myPythonFile.py | ./myCFile, and it worked well (no seg fault since I turned the ASLR and other protections off).
But when I run the C file in sudo: python myPythonFile.py | sudo ./myCFile under the same condition, the program failed (displays seg fault). I am confused. Is there something wrong with my way to run it in sudo?
Anybody could help me out here?
Many thanks!!
|
[
"Start a new shell as superuser then run your commands within that:\nsudo sh -c 'python myPythonFile.py | ./myCFile'\n\n"
] |
[
0
] |
[] |
[] |
[
"c",
"linux",
"shell",
"unix"
] |
stackoverflow_0074665002_c_linux_shell_unix.txt
|
Q:
How to run docker:dind to start with a shell
I want to run docker:dind and get a shell.
If I run docker run --privileged docker:dind sh it just exit.
The workaround is to run: docker run -d --privileged docker:dind
it starts in the background and then I can run docker exec -it <container> sh and get a shell.
But I want that it will start with a shell.
I created a Dockerfile:
FROM docker:dind
ENTRYPOINT sh
I built it:
docker build -t dind2 -f Dockerfile .
When I run docker run --rm -it --privileged dind2 I get a shell but when I tried to run simple container docker run busybox echo hi it failed with:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
Any idea how make docker:dind start with a shell without the workaround of running it in the background and then using exec to get a shell.
A:
Just like Andreas Wederbrand said, you can just
docker run -it docker:dind sh
and if you want to use a Dockerfile, just write it like this.
FROM docker:dind
CMD ["sh"]
It shouldn't overwrite ENTRYPOINT. You can try to inspect docker:dind image.
docker inspect docker:dind
you can see entrypoint is a shell script file.
"Entrypoint": [
"dockerd-entrypoint.sh"
],
Of course, we can find this file in the container. Get inside the container:
docker run -it docker:dind sh
and then
cat /usr/local/bin/dockerd-entrypoint.sh
For more about ENTRYPOINT, see
https://medium.freecodecamp.org/docker-entrypoint-cmd-dockerfile-best-practices-abc591c30e21
A:
You need to tell docker to run interactive and with a tty to be able to use the shell.
docker run --interactive --tty docker:dind sh
or, for short
docker run -it docker:dind sh
A:
you need to run
docker run -it docker:dind sh
then start the dockerd service as a background process:
/usr/local/bin/dockerd-entrypoint.sh &
Don't forget the "&" at the end.
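If the goal is an image that both starts the Docker daemon and drops straight into a shell, one hedged approach is a small wrapper around the stock entrypoint (the script name here is just an illustration):
FROM docker:dind
COPY start-with-shell.sh /usr/local/bin/start-with-shell.sh
RUN chmod +x /usr/local/bin/start-with-shell.sh
ENTRYPOINT ["start-with-shell.sh"]

with start-with-shell.sh containing:
#!/bin/sh
# launch dockerd via the original entrypoint in the background,
# give it a moment to come up, then hand the terminal to a shell
dockerd-entrypoint.sh &
sleep 3
exec sh

The container still needs to be run with --privileged and -it for the daemon and the shell to work.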
|
How to run docker:dind to start with a shell
|
I want to run docker:dind and get a shell.
If I run docker run --privileged docker:dind sh it just exit.
The workaround is to run: docker run -d --privileged docker:dind
it starts in the background and then I can run docker exec -it <container> sh and get a shell.
But I want that it will start with a shell.
I created a Dockerfile:
FROM docker:dind
ENTRYPOINT sh
I built it:
docker build -t dind2 -f Dockerfile .
When I run docker run --rm -it --privileged dind2 I get a shell but when I tried to run simple container docker run busybox echo hi it failed with:
docker: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?.
Any idea how make docker:dind start with a shell without the workaround of running it in the background and then using exec to get a shell.
|
[
"Just like Andreas Wederbrand said. you can just\ndocker run -it docker:dind sh\n\nand if you want to use Dockerfile. Just write like this.\nFROM docker:dind\nCMD [\"sh\"]\n\n\nIt shouldn't overwrite ENTRYPOINT. You can try to inspect docker:dind image.\ndocker inspect docker:dind\n\nyou can see entrypoint is a shell script file.\n\"Entrypoint\": [\n \"dockerd-entrypoint.sh\"\n ],\n\nof course, we can find this file in container. get inside the docker \ndocker run -it docker:dind sh\n\nand then\ncat /usr/local/bin/dockerd-entrypoint.sh\n\nmore about entrypoint you can see \nhttps://medium.freecodecamp.org/docker-entrypoint-cmd-dockerfile-best-practices-abc591c30e21\n",
"You need to tell docker to run interactive and with a tty to be able to use the shell.\ndocker run --interactive --tty docker:dind sh\n\nor, for short\ndocker run -it docker:dind sh\n\n",
"you need to run\n\ndocker run -it docker:dind sh\n\nthen start dockerd service in background process:\n\n./usr/local/bin/dockerd-entrypoint.sh&\n\nDont forget \"&\" at the end.\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"docker",
"dockerfile"
] |
stackoverflow_0054144630_docker_dockerfile.txt
|
Q:
How to group every [0][1][2] of an Array to create a new object?
I have an array and I want to assign values in group of 3s
Name, Id, Population, Name, Id, Population, Name, Id, Population etc.
Is there a way to do that?
This is what I have
while (scanner.hasNext()) { `
scanner.useDelimiter(",");`
list.add(scanner.nextLine());}`
for(int i=0; i<list.size(); i++){
String n = list.get(i);
System.out.println("Hopefully going thru " + n);} //for me to check
String ar =list.toString();
Object [] a = ar.split(",");// splitting the array for each string
for(int h=0;h<a.length;h+=3) { // for [0] += 3 is Name
for(int j=1;j<a.length; j+=3) { // for [1] += 3 is Id
for(int k=2; k<a.length;k+=3) { //for[2]+= is Population
String name = a[h].toString();
String id = a[j].toString();
String population = a[k].toString();
System.out.println("name is "+ name);// this is just to check correct values
System.out.println("id is "+ id);// this is just to check correct values
System.out.println("population is " +population);// this is just to check correct values
CityRow cityRow = new CityRow(name,id,population); //?? I want every set of [0][1][2] to create a new object`
A:
I don't think that ar has the correct data and I don't understand why you don't work with list directly, but assuming that ar has the correct data, it should be possible to use:
for (int i = 0; i < ar.length; ) {
    var cityRow = new CityRow(
        ar[i++],
        ar[i++],
        ar[i++]
    );
    // remember to add cityRow to an
    // appropriate list
}
A:
You use Scanner so no need to split an array. You can read each separate value one-by-one directly from it.
public class Main {
public static void main(String... args) {
Scanner scan = new Scanner(System.in);
scan.useDelimiter("\\n|,");
System.out.print("Total groups: ");
int total = scan.nextInt();
List<City> cities = readCities(scan, total);
printCities(cities);
}
private static List<City> readCities(Scanner scan, int total) {
List<City> cities = new ArrayList<>(total);
System.out.println("Enter each city on a new line. Each line should be: <id>,<name>,<population>");
for (int i = 0; i < total; i++) {
String id = scan.next();
String name = scan.next();
int population = scan.nextInt();
cities.add(new City(id, name, population));
}
return cities;
}
private static void printCities(List<City> cities) {
System.out.println();
System.out.format("There are total %d cities.\n", cities.size());
int i = 1;
for (City city : cities) {
System.out.format("City №%d: id=%s, name=%s, population=%d\n", i++, city.id, city.name, city.population);
}
}
static class City {
private final String id;
private final String name;
private final int population;
public City(String id, String name, int population) {
this.id = id;
this.name = name;
this.population = population;
}
}
}
|
How to group every [0][1][2] of an Array to create a new object?
|
I have an array and I want to assign values in group of 3s
Name, Id, Population, Name, Id, Population, Name, Id, Population etc.
Is there a way to do that?
This is what I have
while (scanner.hasNext()) { `
scanner.useDelimiter(",");`
list.add(scanner.nextLine());}`
for(int i=0; i<list.size(); i++){
String n = list.get(i);
System.out.println("Hopefully going thru " + n);} //for me to check
String ar =list.toString();
Object [] a = ar.split(",");// splitting the array for each string
for(int h=0;h<a.length;h+=3) { // for [0] += 3 is Name
for(int j=1;j<a.length; j+=3) { // for [1] += 3 is Id
for(int k=2; k<a.length;k+=3) { //for[2]+= is Population
String name = a[h].toString();
String id = a[j].toString();
String population = a[k].toString();
System.out.println("name is "+ name);// this is just to check correct values
System.out.println("id is "+ id);// this is just to check correct values
System.out.println("population is " +population);// this is just to check correct values
CityRow cityRow = new CityRow(name,id,population); //?? I want every set of [0][1][2] to create a new object`
|
[
"I don‘t think that ar has the correct data and I don‘t understand why you don’t work with list directly, but assuming that ar has the correct data, it should be possible to use:\nfor(int = 0; i < ar.length ; ) {\n var cityRow = new CityRow(\n ar[i++],\n ar[i++],\n ar[i++]\n );\n // remember to add cityRow to an\n // appropriate list\n}\n\n\n",
"You use Scanner so no need to split an array. You can read each separate value one-by-one directly from it.\npublic class Main {\n\n public static void main(String... args) {\n Scanner scan = new Scanner(System.in);\n scan.useDelimiter(\"\\\\n|,\");\n\n System.out.print(\"Total groups: \");\n int total = scan.nextInt();\n List<City> cities = readCities(scan, total);\n printCities(cities);\n }\n\n private static List<City> readCities(Scanner scan, int total) {\n List<City> cities = new ArrayList<>(total);\n System.out.println(\"Enter each city on a new line. Each line should be: <id>,<name>,<population>\");\n\n for (int i = 0; i < total; i++) {\n String id = scan.next();\n String name = scan.next();\n int population = scan.nextInt();\n cities.add(new City(id, name, population));\n }\n\n return cities;\n }\n\n private static void printCities(List<City> cities) {\n System.out.println();\n System.out.format(\"There are total %d cities.\\n\", cities.size());\n\n int i = 1;\n\n for (City city : cities) {\n System.out.format(\"City №%d: id=%s, name=%s, population=%d\\n\", i++, city.id, city.name, city.population);\n }\n }\n\n static class City {\n\n private final String id;\n private final String name;\n private final int population;\n\n public City(String id, String name, int population) {\n this.id = id;\n this.name = name;\n this.population = population;\n }\n }\n}\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"arrays",
"for_loop",
"java"
] |
stackoverflow_0074665089_arrays_for_loop_java.txt
|
Q:
React flow: custom group node example
I'm studying the React Flow library documentation and trying to figure out if there is a way to create my own node that will be a group (I want to implement custom title with styles for node before children nodes). I didn't find anything in the documentation - neither in the Docs/Sub flow section, nor in the Api/Custom nodes.
As a temporary solution for implementing the node title, I created a special node for it and placed it as the first child. But I don't really like this solution.
P.S. Found the same question in the library's GitHub repository discussion section, for now without an answer - https://github.com/wbkd/react-flow/discussions/2592
P.P.S: On GitHub it was answered that any node becomes a parent if its child is given the parentNode parameter. Haven't checked yet.
A:
Indeed: any node can become a group if you specify its id to its children as parentNode
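A minimal sketch of what that looks like (the customGroup type name and the data fields below are assumptions for illustration, not from the thread):
const nodes = [
  // the parent: a custom node type registered via nodeTypes, free to render its own styled title
  { id: 'group-1', type: 'customGroup', position: { x: 0, y: 0 }, data: { title: 'My group' } },
  // children join the group simply by referencing the parent's id via parentNode
  { id: 'child-1', parentNode: 'group-1', position: { x: 20, y: 40 }, data: { label: 'Child A' } },
  { id: 'child-2', parentNode: 'group-1', position: { x: 20, y: 120 }, data: { label: 'Child B' } },
];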
|
React flow: custom group node example
|
I'm studying the React Flow library documentation and trying to figure out if there is a way to create my own node that will be a group (I want to implement custom title with styles for node before children nodes). I didn't find anything in the documentation - neither in the Docs/Sub flow section, nor in the Api/Custom nodes.
As a temporary solution for implementing the node title, I created a special node for it and placed it as the first child. But I don't really like this solution.
P.S. Found the same question in the library's GitHub repository discussion section, for now without an answer - https://github.com/wbkd/react-flow/discussions/2592
P.P.S: On GitHub it was answered that any node becomes a parent if its child is given the parentNode parameter. Haven't checked yet.
|
[
"Indeed: any node can become a group if you specify its id to its children as parentNode\n"
] |
[
0
] |
[] |
[] |
[
"react_flow",
"reactjs"
] |
stackoverflow_0074645473_react_flow_reactjs.txt
|
Q:
How to register two different service-workers in the same scope?
I have a service-worker.js file to make my reactjs app a PWA. I now also want to add push notifications using FCM, which requires me to have firebase-messaging-sw.js in the public folder. So for both to work, they are going to need to be in the same scope.
But as far as I have read from various answers on this site, we can't have two different service workers in the same scope, so how do we combine both service-worker.js and firebase-messaging-sw.js so both can function properly. One of the answers suggested that I rename the service-worker.js to firebase-messaging-sw.js. I did find one successful implementation on GitHub which I didn't understand much https://github.com/ERS-HCL/reactjs-pwa-firebase .
How can I have both the service-worker.js and firebase-messaging-sw.js work together?
firebase-messaging-sw.js
// Scripts for firebase and firebase messaging
importScripts("https://www.gstatic.com/firebasejs/8.2.0/firebase-app.js");
importScripts("https://www.gstatic.com/firebasejs/8.2.0/firebase-messaging.js");
// Initialize the Firebase app in the service worker by passing the generated config
const firebaseConfig = {
apiKey: "xxxx",
authDomain: "xxxx.firebaseapp.com",
projectId: "xxxx",
storageBucket: "xxxx",
messagingSenderId: "xxxx",
appId: "xxxx",
measurementId: "xxxx"
}
firebase.initializeApp(firebaseConfig);
// Retrieve firebase messaging
const messaging = firebase.messaging();
self.addEventListener("notificationclick", function (event) {
console.debug('SW notification click event', event)
const url = event.notification.data.link
event.waitUntil(
clients.matchAll({type: 'window'}).then( windowClients => {
// Check if there is already a window/tab open with the target URL
for (var i = 0; i < windowClients.length; i++) {
var client = windowClients[i];
// If so, just focus it.
if (client.url === url && 'focus' in client) {
return client.focus();
}
}
// If not, then open the target URL in a new window/tab.
if (clients.openWindow) {
return clients.openWindow(url);
}
})
);
})
messaging.onBackgroundMessage(async function(payload) {
console.log("Received background message ", payload)
const notificationTitle = payload.notification.title
const notificationOptions = {
body: payload.notification.body,
icon: './logo192.png',
badge: './notification-badgex24.png',
data: {
link: payload.data?.link
}
}
self.registration.showNotification(notificationTitle, notificationOptions)
})
service-worker.js
import { clientsClaim } from 'workbox-core';
import { ExpirationPlugin } from 'workbox-expiration';
import { precacheAndRoute, createHandlerBoundToURL } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';
clientsClaim();
const fileExtensionRegexp = new RegExp('/[^/?]+\\.[^/]+$');
registerRoute(
// Return false to exempt requests from being fulfilled by index.html.
({ request, url }) => {
// If this isn't a navigation, skip.
if (request.mode !== 'navigate') {
return false;
} // If this is a URL that starts with /_, skip.
if (url.pathname.startsWith('/_')) {
return false;
} // If this looks like a URL for a resource, because it contains // a file extension, skip.
if (url.pathname.match(fileExtensionRegexp)) {
return false;
} // Return true to signal that we want to use the handler.
return true;
},
createHandlerBoundToURL(process.env.PUBLIC_URL + '/index.html')
);
registerRoute(
// Add in any other file extensions or routing criteria as needed.
({ url }) => url.origin === self.location.origin && url.pathname.endsWith('.png'), // Customize this strategy as needed, e.g., by changing to CacheFirst.
new StaleWhileRevalidate({
cacheName: 'images',
plugins: [
// Ensure that once this runtime cache reaches a maximum size the
// least-recently used images are removed.
new ExpirationPlugin({ maxEntries: 50 }),
],
})
);
self.addEventListener('message', (event) => {
if (event.data && event.data.type === 'SKIP_WAITING') {
self.skipWaiting();
}
});
A:
I think one way of doing this is just having only firebase-messaging-sw.js file and using workbox to inject your service-worker.js into it
https://developer.chrome.com/docs/workbox/precaching-with-workbox/
For example:
// build-sw.js
import {injectManifest} from 'workbox-build';
injectManifest({
swSrc: './src/sw.js',
swDest: './dist/firebase-messaging-sw.js',
globDirectory: './dist',
globPatterns: [
'**/*.js',
'**/*.css',
'**/*.svg'
]
});
and all the Firebase config must be in sw.js so that it gets written into firebase-messaging-sw.js,
and in your package.json simply run build-sw.js before react-scripts start
"scripts": {
"start": "node build-sw.js && react-scripts start",
}
or you can use react-app-rewired and the workbox plugin instead
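As a follow-up sketch (not part of the original answer, and assuming the injected worker ends up served at /firebase-messaging-sw.js), registering the single merged file from the React entry point could look like this:
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/firebase-messaging-sw.js')
      .then((registration) => console.log('Service worker registered, scope:', registration.scope))
      .catch((error) => console.error('Service worker registration failed:', error));
  });
}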
|
How to register two different service-workers in the same scope?
|
I have a service-worker.js file to make my reactjs app a PWA. I now also want to add push notifications using FCM, which requires me to have firebase-messaging-sw.js in the public folder. So for both to work, they are going to need to be in the same scope.
But as far as I have read from various answers on this site, we can't have two different service workers in the same scope, so how do we combine both service-worker.js and firebase-messaging-sw.js so both can function properly. One of the answers suggested that I rename the service-worker.js to firebase-messaging-sw.js. I did find one successful implementation on GitHub which I didn't understand much https://github.com/ERS-HCL/reactjs-pwa-firebase .
How can I have both the service-worker.js and firebase-messaging-sw.js work together?
firebase-messaging-sw.js
// Scripts for firebase and firebase messaging
importScripts("https://www.gstatic.com/firebasejs/8.2.0/firebase-app.js");
importScripts("https://www.gstatic.com/firebasejs/8.2.0/firebase-messaging.js");
// Initialize the Firebase app in the service worker by passing the generated config
const firebaseConfig = {
apiKey: "xxxx",
authDomain: "xxxx.firebaseapp.com",
projectId: "xxxx",
storageBucket: "xxxx",
messagingSenderId: "xxxx",
appId: "xxxx",
measurementId: "xxxx"
}
firebase.initializeApp(firebaseConfig);
// Retrieve firebase messaging
const messaging = firebase.messaging();
self.addEventListener("notificationclick", function (event) {
console.debug('SW notification click event', event)
const url = event.notification.data.link
event.waitUntil(
clients.matchAll({type: 'window'}).then( windowClients => {
// Check if there is already a window/tab open with the target URL
for (var i = 0; i < windowClients.length; i++) {
var client = windowClients[i];
// If so, just focus it.
if (client.url === url && 'focus' in client) {
return client.focus();
}
}
// If not, then open the target URL in a new window/tab.
if (clients.openWindow) {
return clients.openWindow(url);
}
})
);
})
messaging.onBackgroundMessage(async function(payload) {
console.log("Received background message ", payload)
const notificationTitle = payload.notification.title
const notificationOptions = {
body: payload.notification.body,
icon: './logo192.png',
badge: './notification-badgex24.png',
data: {
link: payload.data?.link
}
}
self.registration.showNotification(notificationTitle, notificationOptions)
})
service-worker.js
import { clientsClaim } from 'workbox-core';
import { ExpirationPlugin } from 'workbox-expiration';
import { precacheAndRoute, createHandlerBoundToURL } from 'workbox-precaching';
import { registerRoute } from 'workbox-routing';
import { StaleWhileRevalidate } from 'workbox-strategies';
clientsClaim();
const fileExtensionRegexp = new RegExp('/[^/?]+\\.[^/]+$');
registerRoute(
// Return false to exempt requests from being fulfilled by index.html.
({ request, url }) => {
// If this isn't a navigation, skip.
if (request.mode !== 'navigate') {
return false;
} // If this is a URL that starts with /_, skip.
if (url.pathname.startsWith('/_')) {
return false;
} // If this looks like a URL for a resource, because it contains // a file extension, skip.
if (url.pathname.match(fileExtensionRegexp)) {
return false;
} // Return true to signal that we want to use the handler.
return true;
},
createHandlerBoundToURL(process.env.PUBLIC_URL + '/index.html')
);
registerRoute(
// Add in any other file extensions or routing criteria as needed.
({ url }) => url.origin === self.location.origin && url.pathname.endsWith('.png'), // Customize this strategy as needed, e.g., by changing to CacheFirst.
new StaleWhileRevalidate({
cacheName: 'images',
plugins: [
// Ensure that once this runtime cache reaches a maximum size the
// least-recently used images are removed.
new ExpirationPlugin({ maxEntries: 50 }),
],
})
);
self.addEventListener('message', (event) => {
if (event.data && event.data.type === 'SKIP_WAITING') {
self.skipWaiting();
}
});
|
[
"I think one way of doing this is just having only firebase-messaging-sw.js file and using workbox to inject your service-worker.js into it\nhttps://developer.chrome.com/docs/workbox/precaching-with-workbox/\nforexample :\n// build-sw.js\nimport {injectManifest} from 'workbox-build';\n\ninjectManifest({\n swSrc: './src/sw.js',\n swDest: './dist/firebase-messaging-sw.js',\n globDirectory: './dist',\n globPatterns: [\n '**/*.js',\n '**/*.css',\n '**/*.svg'\n ]\n});\n\nand all firebase config must be in sw.js to be written in firebase-messaging-sw.js\nand in your package.json simply run build-sw.js before react-script start\n \"scripts\": {\n \"start\": \"node build-sw.js && react-scripts start\",\n }\n\nor you can use react-app-rewired and workbox box plugin instead\n"
] |
[
0
] |
[] |
[] |
[
"firebase",
"firebase_cloud_messaging",
"progressive_web_apps",
"reactjs",
"service_worker"
] |
stackoverflow_0074651305_firebase_firebase_cloud_messaging_progressive_web_apps_reactjs_service_worker.txt
|
Q:
Getting inline CSS generated by Elementor with the Wordpress Rest API
I am using the Wordpress REST API to retrieve the rendered content of a wordpress page.
My page is constructed with Elementor, and Elementor is adding some inline CSS on the final page to make it all work. This inline CSS is not present in
the content returned by the API.
Example call:
http://website.com/guide/wp-json/wp/v2/pages/23412
Response:
<h1>My title</h1>
Real page HTML:
<h1>My title</h1> <style>h1 { color: red; }</style>
The part I am looking for (not present in the response):
<style>h1 { color: red; }</style>
Do you know how I can retrieve the inline CSS generated by Elementor with the Wordpress API ?
A:
The solution described here works pretty well:
https://wordpress.stackexchange.com/questions/292849/displaying-a-page-built-with-elementor-using-the-rest-api/292895
Once the HTML is received, you can parse the page and get the <style> tag with all the Elementor-generated inline CSS.
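If the consumer of the REST API is a JavaScript client, that parsing step could look roughly like this (a sketch only; it assumes the endpoint has been adjusted as in the linked answer so that content.rendered actually contains the <style> tags):
const response = await fetch('http://website.com/guide/wp-json/wp/v2/pages/23412');
const page = await response.json();
// parse the rendered HTML and collect Elementor's inline <style> blocks
const doc = new DOMParser().parseFromString(page.content.rendered, 'text/html');
const inlineCss = Array.from(doc.querySelectorAll('style'))
  .map((style) => style.textContent ?? '')
  .join('\n');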
A:
Here is a pretty simple way to get the content of an Elementor page into a variable along with its styles; I used this to display a single page on all subdomains of a multisite.
https://wordpress.stackexchange.com/questions/307783/echoing-elementor-page-content-in-template-but-it-doesnt-get-styles-and-some-w
I’ll add on my own that this code worked for me (with the second argument true)
$pluginElementor = \Elementor\Plugin::instance();
$contentElementor = $pluginElementor->frontend->get_builder_content($post_ID, $with_css = true);
|
Getting inline CSS generated by Elementor with the Wordpress Rest API
|
I am using the Wordpress REST API to retrieve the rendered content of a wordpress page.
My page is constructed with Elementor, and Elementor is adding some inline CSS on the final page to make it all work. This inline CSS is not present in
the content returned by the API.
Example call:
http://website.com/guide/wp-json/wp/v2/pages/23412
Response:
<h1>My title</h1>
Real page HTML:
<h1>My title</h1> <style>h1 { color: red; }</style>
The part I am looking for (not present in the response):
<style>h1 { color: red; }</style>
Do you know how I can retrieve the inline CSS generated by Elementor with the Wordpress API ?
|
[
"The solution described here works pretty well :\nhttps://wordpress.stackexchange.com/questions/292849/displaying-a-page-built-with-elementor-using-the-rest-api/292895\nOnce the HTML recieved you can parse the page and get the <style> tag with all the Elementor generated inline CSS.\n",
"Here is a pretty simple way to get the content of an Elementor into a variable along with styles, I used this to display a single page on all subdomains of a multisite.\nhttps://wordpress.stackexchange.com/questions/307783/echoing-elementor-page-content-in-template-but-it-doesnt-get-styles-and-some-w\nI’ll add on my own that this code worked for me (with the second argument true)\n$pluginElementor = \\Elementor\\Plugin::instance();\n$contentElementor = $pluginElementor->frontend->get_builder_content($post_ID, $with_css = true);\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"css",
"custom_wordpress_pages",
"elementor",
"wordpress",
"wordpress_rest_api"
] |
stackoverflow_0052669685_css_custom_wordpress_pages_elementor_wordpress_wordpress_rest_api.txt
|
Q:
why "if else" doesn't work in this piece of code
page = 1
img_count = 0
result_list = []
while True:
url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
response = requests.get(url=url, headers=headers)
data = response.json()
for item in data:
if page <= 11:
screenshots = item.get('screenshots')
img_count += len(screenshots)
for img in screenshots:
img.update({'url': f'https://landingfoliocom.imgix.net/{img.get("url")}'})
result_list.append(
{
'title': item.get('title'),
'description': item.get('slug'),
'url': item.get('url'),
'screenshots': screenshots
}
)
else:
print('test')
with open('result_list.json', 'a') as file:
json.dump(result_list, file, indent=4, ensure_ascii=False)
return f'[INFO] work finished. Images count is: {img_count}\n{"=" * 20}'
page += 1
print(f'[+] processed {page} ')
when executing code in the terminal, the page value is displayed, but even when it is greater than 11, the code for some reason does not proceed to the execution of the "else" section
what i wrote wrong?
I was expecting code execution in the "else" section when the value of the Page variable was greater than 11
A:
The code in the else block will not be executed because the return statement is inside the else block. The return statement causes the function to immediately return a value and exit, so the code in the else block will never be executed.
You can fix this by moving the return statement outside of the else block, like this:
page = 1
img_count = 0
result_list = []
while True:
url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
response = requests.get(url=url, headers=headers)
data = response.json()
for item in data:
if page <= 11:
screenshots = item.get('screenshots')
img_count += len(screenshots)
for img in screenshots:
img.update({'url': f'https://landingfoliocom.imgix.net/{img.get("url")}'})
result_list.append(
{
'title': item.get('title'),
'description': item.get('slug'),
'url': item.get('url'),
'screenshots': screenshots
}
)
else:
print('test')
with open('result_list.json', 'a') as file:
json.dump(result_list, file, indent=4, ensure_ascii=False)
if page > 11:
return f'[INFO] work finished. Images count is: {img_count}\n{"=" * 20}'
page += 1
print(f'[+] processed {page} ')
|
why "if else" doesn't work in this piece of code
|
page = 1
img_count = 0
result_list = []
while True:
url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'
response = requests.get(url=url, headers=headers)
data = response.json()
for item in data:
if page <= 11:
screenshots = item.get('screenshots')
img_count += len(screenshots)
for img in screenshots:
img.update({'url': f'https://landingfoliocom.imgix.net/{img.get("url")}'})
result_list.append(
{
'title': item.get('title'),
'description': item.get('slug'),
'url': item.get('url'),
'screenshots': screenshots
}
)
else:
print('test')
with open('result_list.json', 'a') as file:
json.dump(result_list, file, indent=4, ensure_ascii=False)
return f'[INFO] work finished. Images count is: {img_count}\n{"=" * 20}'
page += 1
print(f'[+] processed {page} ')
when executing code in the terminal, the page value is displayed, but even when it is greater than 11, the code for some reason does not proceed to the execution of the "else" section
what i wrote wrong?
I was expecting code execution in the "else" section when the value of the Page variable was greater than 11
|
[
"The code in the else block will not be executed because the return statement is inside the else block. The return statement causes the function to immediately return a value and exit, so the code in the else block will never be executed.\nYou can fix this by moving the return statement outside of the else block, like this:\npage = 1\nimg_count = 0\nresult_list = []\n\nwhile True:\n url = f'https://s3.landingfolio.com/inspiration?page={page}&sortBy=free-first'\n\n response = requests.get(url=url, headers=headers)\n data = response.json()\n\n for item in data:\n if page <= 11:\n\n screenshots = item.get('screenshots')\n img_count += len(screenshots)\n\n for img in screenshots:\n img.update({'url': f'https://landingfoliocom.imgix.net/{img.get(\"url\")}'})\n\n result_list.append(\n {\n 'title': item.get('title'),\n 'description': item.get('slug'),\n 'url': item.get('url'),\n 'screenshots': screenshots\n }\n )\n else:\n print('test')\n with open('result_list.json', 'a') as file:\n json.dump(result_list, file, indent=4, ensure_ascii=False)\n\n if page > 11:\n return f'[INFO] work finished. Images count is: {img_count}\\n{\"=\" * 20}'\n\n page += 1\n print(f'[+] processed {page} ')\n\n"
] |
[
1
] |
[] |
[] |
[
"if_statement",
"python"
] |
stackoverflow_0074665438_if_statement_python.txt
|
Q:
How to set up a throughput whereby we have multiple different requests in a time interval using JMeter
I have the following scenario: I have an OS Process Sampler that will send a request to the console app and wait for the process to finish, and I want to control the requests to be like 6 requests in an hour.
So send 1 Request wait 10 min send another request wait 10 mins...at 20 min point after the first request send 2 requests (could have a wait time of 30 seconds) in between the request then at the 40 min mark have another request. I have been reading about precise throughput timers but I am not new to Jmeter so trying to get some help if possible.
1 request of 5k at : xx:00 min
2 requests ( 1 each of 2k, 5k ) at : xx:20 min ~
1 request of 2k at : xx:40 min -
1hr no activity
repeat steps 1-4 a few ( 3 ) times
A:
There is no way to run requests with different throughput values within the bounds of a single Thread Group, the overall throughput would be the speed of the "slowest" request.
All JMeter timers like Constant Throughput Timer, Precise Throughput Timer and Throughput Shaping Timer are trying to spread the load evenly during the throughput period.
So your load pattern can be implemented only using Constant Timers or Flow Control Action samplers, whatever is more convenient to you.
Something like this:
You can use Test Fragments and Module Controllers to avoid copying and pasting the requests
|
How to set up a throughput whereby we have multiple different requests in a time interval using JMeter
|
I have the following scenario: I have an OS Process Sampler that will send a request to the console app and wait for the process to finish, and I want to control the requests to be like 6 requests in an hour.
So send 1 Request wait 10 min send another request wait 10 mins...at 20 min point after the first request send 2 requests (could have a wait time of 30 seconds) in between the request then at the 40 min mark have another request. I have been reading about precise throughput timers but I am not new to Jmeter so trying to get some help if possible.
1 request of 5k at : xx:00 min
2 requests ( 1 each of 2k, 5k ) at : xx:20 min ~
1 request of 2k at : xx:40 min -
1hr no activity
repeat steps 1-4 a few ( 3 ) times
|
[
"There is no way to run requests with different throughput values within the bounds of a single Thread Group, the overall throughput would be the speed of the \"slowest\" request.\nAll JMeter timers like Constant Throughput Timer, Precise Throughput Timer and Throughput Shaping Timer are trying to spread the load evenly during the throughput period.\nSo your load pattern can be implemented only using Constant Timers or Flow Control Action samplers, whatever is more convenient to you.\nSomething like this:\n\nYou can use Test Fragments and Module Controllers to avoid copying and pasting the requests\n"
] |
[
0
] |
[] |
[] |
[
"jmeter"
] |
stackoverflow_0074660100_jmeter.txt
|
Q:
Why does my pixel art asset look over-compressed even if compression is disabled?
I have a 30x19 PNG asset. I want to show it in a SpriteRenderer but it looks somehow over-compressed. How can I fix that?
This is what I see in the game screen:
PS: Scale=2x in the screenshot. Nothing changes if I set it to Scale=1x
This is the original asset:
Here are the settings:
Project Settings -> Anti Aliasing: Disabled (Ultra - Very Low)
Sprite Pixels Per Unit: 19
Sprite Filter Mode: Point (no filter)
Sprite Compression: None
Camera Size: 5
Sprite Settings:
Sprite Renderer Settings:
Camera Settings:
Where is the problem?
A:
It doesn't look over-compressed to me, just black but that's because you selected its color to be black in the Sprite Renderer.
Turn that into white and your sprite should look like the original asset.
|
Why does my pixel art asset look over-compressed even if compression is disabled?
|
I have a 30x19 PNG asset. I want to show it in a SpriteRenderer but it looks somehow over-compressed. How can I fix that?
This is what I see in the game screen:
PS: Scale=2x in the screenshot. Nothing changes if I set it to Scale=1x
This is the original asset:
Here are the settings:
Project Settings -> Anti Aliasing: Disabled (Ultra - Very Low)
Sprite Pixels Per Unit: 19
Sprite Filter Mode: Point (no filter)
Sprite Compression: None
Camera Size: 5
Sprite Settings:
Sprite Renderer Settings:
Camera Settings:
Where is the problem?
|
[
"It doesn't look over-compressed to me, just black but that's because you selected its color to be black in the Sprite Renderer.\nTurn that into white and your sprite should look like the original asset.\n"
] |
[
1
] |
[] |
[] |
[
"pixel",
"unity3d"
] |
stackoverflow_0074660724_pixel_unity3d.txt
|
Q:
Stopwatch reactive Shinyapp; resets/laps when key is pressed (keydown) and when key is released (keyup)
I am attempting to build a stopwatch Shiny app.
My end goal is to record "trial" times. Each trial will start when the space bar (key code == 32) is pressed, and will end when the space bar is released. I also want to record the time between my trials, which is time from when the space bar is released to when the space bar is pressed again.
I'd like to get the stopwatch to run continuously when the app is open. However, I want the stopwatch to reset to 0 when I press the space bar while continuing to count up in seconds while holding the space bar, and reset to 0 then start counting up again when I release the space bar.
Currently I am struggling to get my stopwatch (what I called timer()) to reset to 0 whenever I press spacebar or release it.
Below is the code I have tried.
#install.packages("lubdridate")
#install.packages("shiny")
library(lubridate)
library(shiny)
ui <- fluidPage(hr(),
tags$script('
$(document).on("keydown", function (e) {
Shiny.onInputChange("space_down", e.which == 32);
});'
),
## keyup
tags$script('
$(document).on("keyup", function (e) {
Shiny.onInputChange("space_released", e.which == 32);
});'
),
tags$hr(),
textOutput('stopwatch')
)
server <- function(input, output, session) {
# Initialize the stopwatch, timer starts when shiny app opens.
timer <- reactiveVal(0)
update_interval = 0.01 # each interval increases the timer by one hundrendth of a second
# Output the stopwatch.
output$stopwatch <- renderText({
paste("Time passed: ", seconds_to_period(timer()))
})
# observer that invalidates every second. Increases timer by one update_interval.
observe({
invalidateLater(10, session)
isolate({
timer(round(timer()+update_interval,2))
})
})
# observers for Keys == 32 (Spacebar)
observeEvent(input$space_down, {timer(0)})
observeEvent(input$space_released, {timer(0)})
}
shinyApp(ui, server)
Please let me know if I need to be more specific. Thank you in advance!
A:
Try to use Shiny.setInputValue instead of Shiny.onInputChange as shown below.
ui <- fluidPage(hr(),
# tags$script('
# $(document).on("keydown", function (e) {
# Shiny.onInputChange("space_down", e.which == 32);
# });'
# ),
## keyup
# tags$script('
# $(document).on("keyup", function (e) {
# Shiny.onInputChange("space_released", e.which == 32);
# });'
# ),
tags$script(HTML('
document.addEventListener("keydown", function(e) {
Shiny.setInputValue("space_down", e.key, {priority: "event"});
});
')),
tags$script(HTML('
document.addEventListener("keyup", function(e) {
Shiny.setInputValue("space_released", e.key, {priority: "event"});
});
')),
tags$hr(),
textOutput('stopwatch')
)
A:
@YBS makes a good point: you have to use Shiny.setInputValue with the option {priority: "event"}, otherwise the observer does not react if input$space_down takes the same value as the previous one.
But this JS code will also trigger an event when any key is pressed (even if the event value is FALSE, this reacts). So you have to use:
$(document).on("keydown", function(e) {
if(e.which == 32) {
Shiny.setInputValue("space_down", true, {priority: "event"});
}
});
|
Stopwatch reactive Shinyapp; resets/laps when key is pressed (keydown) and when key is released (keyup)
|
I am attempting to build a stopwatch Shiny app.
My end goal is to record "trial" times. Each trial will start when the space bar (key code == 32) is pressed, and will end when the space bar is released. I also want to record the time between my trials, which is time from when the space bar is released to when the space bar is pressed again.
I'd like to get the stopwatch to run continuously when the app is open. However, I want the stopwatch to reset to 0 when I press the space bar while continuing to count up in seconds while holding the space bar, and reset to 0 then start counting up again when I release the space bar.
Currently I am struggling to get my stopwatch (what I called timer()) to reset to 0 whenever I press spacebar or release it.
Below is the code I have tried.
#install.packages("lubdridate")
#install.packages("shiny")
library(lubridate)
library(shiny)
ui <- fluidPage(hr(),
tags$script('
$(document).on("keydown", function (e) {
Shiny.onInputChange("space_down", e.which == 32);
});'
),
## keyup
tags$script('
$(document).on("keyup", function (e) {
Shiny.onInputChange("space_released", e.which == 32);
});'
),
tags$hr(),
textOutput('stopwatch')
)
server <- function(input, output, session) {
# Initialize the stopwatch, timer starts when shiny app opens.
timer <- reactiveVal(0)
update_interval = 0.01 # each interval increases the timer by one hundrendth of a second
# Output the stopwatch.
output$stopwatch <- renderText({
paste("Time passed: ", seconds_to_period(timer()))
})
# observer that invalidates every second. Increases timer by one update_interval.
observe({
invalidateLater(10, session)
isolate({
timer(round(timer()+update_interval,2))
})
})
# observers for Keys == 32 (Spacebar)
observeEvent(input$space_down, {timer(0)})
observeEvent(input$space_released, {timer(0)})
}
shinyApp(ui, server)
Please let me know if I need to be more specific. Thank you in advance!
|
[
"Try to us Shiny.setInputValue instead of Shiny.onInputChange as shown below.\nui <- fluidPage(hr(),\n # tags$script('\n # $(document).on(\"keydown\", function (e) {\n # Shiny.onInputChange(\"space_down\", e.which == 32);\n # });'\n # ),\n ## keyup\n # tags$script('\n # $(document).on(\"keyup\", function (e) {\n # Shiny.onInputChange(\"space_released\", e.which == 32);\n # });'\n # ),\n \n tags$script(HTML('\n document.addEventListener(\"keydown\", function(e) {\n Shiny.setInputValue(\"space_down\", e.key, {priority: \"event\"});\n });\n ')),\n \n tags$script(HTML('\n document.addEventListener(\"keyup\", function(e) {\n Shiny.setInputValue(\"space_released\", e.key, {priority: \"event\"});\n });\n ')),\n tags$hr(),\n textOutput('stopwatch')\n \n)\n\n",
"@YBS has the good point: you have to use Shiny.setInputValue with the option {priority: \"event\"}, otherwise the observer does not react if input$space_down takes the same value as the previous one.\nBut this JS code will also trigger an event when any key is pressed (even if the event value is FALSE, this reacts). So you have to use:\n$(document).on(\"keydown\", function(e) {\n if(e.which == 32) {\n Shiny.setInputValue(\"space_down\", true, {priority: \"event\"});\n }\n});\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"r",
"shiny"
] |
stackoverflow_0074660283_r_shiny.txt
|
Q:
ASP.NET Core Web API - The name 'WebApplication' does not exist in the current context WebApi
In ASP.NET Core-6 Web API, I have this code in Program.cs of the WebApi Project:
WebApi
Program.cs:
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
I got this error:
Error CS0103 The name 'WebApplication' does not exist in the current context WebApi
How do I resolve this?
Thanks
A:
Usually this happens when you have updated a .Net < 5 app to .Net 6. The reason is that you have to enable ImplicitUsings in the .csproj file
Make sure the property group in your .csproj file contains this line:
<ImplicitUsings>enable</ImplicitUsings>
Here is an example PropertyGroup from one of the recent projects where I upgraded from .Net Core 3.1 to .Net 6:
<PropertyGroup>
<TargetFramework>net6.0</TargetFramework>
<Nullable>enable</Nullable>
<ImplicitUsings>enable</ImplicitUsings>
</PropertyGroup>
A:
I had the same message, but it disappeared after changing the TargetFramework from net7.0 to net6.0 in the .csproj file
<TargetFramework>net6.0</TargetFramework>
|
ASP.NET Core Web API - The name 'WebApplication' does not exist in the current context WebApi
|
In ASP.NET Core-6 Web API, I have this code in Program.cs of the WebApi Project:
WebApi
Program.cs:
var builder = WebApplication.CreateBuilder(args);
// Add services to the container.
builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();
var app = builder.Build();
// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
app.UseSwagger();
app.UseSwaggerUI();
}
app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
I got this error:
Error CS0103 The name 'WebApplication' does not exist in the current context WebApi
How do I resolve this?
Thanks
|
[
"Usually this happens when you have updated a .Net < 5 app to .Net 6. The reason is that you have to enable ImplicitUsings in the .csproj file\nMakes sure the property group for your .csproj file contains this line in it:\n<ImplicitUsings>enable</ImplicitUsings>\n\nHere is an example PropertyGroup from one of the recent projects where I upgraded from .Net Core 3.1 to .Net 6:\n<PropertyGroup>\n <TargetFramework>net6.0</TargetFramework>\n <Nullable>enable</Nullable>\n <ImplicitUsings>enable</ImplicitUsings>\n</PropertyGroup>\n\n",
"I had the same message but this disappeared after changing the TargetPackage from net7.0 to net6.0 in the cproj file\n<TargetFramework>net6.0</TargetFramework>\n\n"
] |
[
6,
0
] |
[] |
[] |
[
"asp.net_web_api",
"c#"
] |
stackoverflow_0071800753_asp.net_web_api_c#.txt
|
Q:
C/C++ macro/template blackmagic to generate unique name
Macros are fine.
Templates are fine.
Pretty much whatever works is fine.
The example is OpenGL; but the technique is C++ specific and relies on no knowledge of OpenGL.
Precise problem:
I want an expression E; where I do not have to specify a unique name; such that a constructor is called where E is defined, and a destructor is called where the block E is in ends.
For example, consider:
class GlTranslate {
GLTranslate(float x, float y, float z); {
glPushMatrix();
glTranslatef(x, y, z);
}
~GlTranslate() { glPopMatrix(); }
};
Manual solution:
{
GlTranslate foo(1.0, 0.0, 0.0); // I had to give it a name
.....
} // auto popmatrix
Now, I have this not only for glTranslate, but lots of other PushAttrib/PopAttrib calls too. I would prefer not to have to come up with a unique name for each var. Is there some trick involving macros, templates ... or something else that will automatically create a variable whose constructor is called at point of definition, and destructor called at end of block?
Thanks!
A:
I would not do this personally but just come up with unique names. But if you want to do it, one way is to use a combination of if and for:
#define FOR_BLOCK(DECL) if(bool _c_ = false) ; else for(DECL;!_c_;_c_=true)
You can use it like
FOR_BLOCK(GlTranslate t(1.0, 0.0, 0.0)) {
FOR_BLOCK(GlTranslate t(1.0, 1.0, 0.0)) {
...
}
}
Each of those names are in separate scopes and won't conflict. The inner names hide the outer names. The expressions in the if and for loops are constant and should be easily optimized by the compiler.
If you really want to pass an expression, you can use the ScopedGuard trick (see Most Important const), but it will need some more work to write it. But the nice side is, that we can get rid of the for loop, and let our object evaluate to false:
struct sbase {
operator bool() const { return false; }
};
template<typename T>
struct scont : sbase {
scont(T const& t):t(t), dismiss() {
t.enter();
}
scont(scont const&o):t(o.t), dismiss() {
o.dismiss = true;
}
~scont() { if(!dismiss) t.leave(); }
T t;
mutable bool dismiss;
};
template<typename T>
scont<T> make_scont(T const&t) { return scont<T>(t); }
#define FOR_BLOCK(E) if(sbase const& _b_ = make_scont(E)) ; else
You then provide the proper enter and leave functions:
struct GlTranslate {
GLTranslate(float x, float y, float z)
:x(x),y(y),z(z) { }
void enter() const {
glPushMatrix();
glTranslatef(x, y, z);
}
void leave() const {
glPopMatrix();
}
float x, y, z;
};
Now you can write it entirely without a name on the user side:
FOR_BLOCK(GlTranslate(1.0, 0.0, 0.0)) {
FOR_BLOCK(GlTranslate(1.0, 1.0, 0.0)) {
...
}
}
If you want to pass multiple expressions at once, it's a bit more tricky, but you can write an expression template that acts on operator, to collect all expressions into a scont.
template<typename Derived>
struct scoped_obj {
void enter() const { }
void leave() const { }
Derived const& get_obj() const {
return static_cast<Derived const&>(*this);
}
};
template<typename L, typename R> struct collect
: scoped_obj< collect<L, R> > {
L l;
R r;
collect(L const& l, R const& r)
:l(l), r(r) { }
void enter() const { l.enter(); r.enter(); }
void leave() const { r.leave(); l.leave(); }
};
template<typename D1, typename D2>
collect<D1, D2> operator,(scoped_obj<D1> const& l, scoped_obj<D2> const& r) {
return collect<D1, D2>(l.get_obj(), r.get_obj());
}
#define FOR_BLOCK(E) if(sbase const& _b_ = make_scont((E))) ; else
You need to inherit the RAII object from scoped_obj<Class> like the following shows
struct GLTranslate : scoped_obj<GLTranslate> {
GLTranslate(float x, float y, float z)
:x(x),y(y),z(z) { }
void enter() const {
std::cout << "entering ("
<< x << " " << y << " " << z << ")"
<< std::endl;
}
void leave() const {
std::cout << "leaving ("
<< x << " " << y << " " << z << ")"
<< std::endl;
}
float x, y, z;
};
int main() {
// if more than one element is passed, wrap them in parentheses
FOR_BLOCK((GLTranslate(10, 20, 30), GLTranslate(40, 50, 60))) {
std::cout << "in block..." << std::endl;
}
}
All of these involve no virtual functions, and the functions involved are transparent to the compiler. In fact, with the above GLTranslate changed to add a single integer to a global variable and when leaving subtracting it again, and the below defined GLTranslateE, i did a test:
// we will change this and see how the compiler reacts.
int j = 0;
// only add, don't subtract again
struct GLTranslateE : scoped_obj< GLTranslateE > {
GLTranslateE(int x):x(x) { }
void enter() const {
j += x;
}
int x;
};
int main() {
FOR_BLOCK((GLTranslate(10), GLTranslateE(5))) {
/* empty */
}
return j;
}
In fact, GCC at optimization level -O2 outputs this:
main:
sub $29, $29, 8
ldw $2, $0, j
add $2, $2, 5
stw $2, $0, j
.L1:
add $29, $29, 8
jr $31
I wouldn't have expected that, it optimized quite well!
A:
If your compiler supports __COUNTER__ (it probably does), you could try:
// boiler-plate
#define CONCATENATE_DETAIL(x, y) x##y
#define CONCATENATE(x, y) CONCATENATE_DETAIL(x, y)
#define MAKE_UNIQUE(x) CONCATENATE(x, __COUNTER__)
// per-transform type
#define GL_TRANSLATE_DETAIL(n, x, y, z) GlTranslate n(x, y, z)
#define GL_TRANSLATE(x, y, z) GL_TRANSLATE_DETAIL(MAKE_UNIQUE(_trans_), x, y, z)
For
{
GL_TRANSLATE(1.0, 0.0, 0.0);
// becomes something like:
GlTranslate _trans_1(1.0, 0.0, 0.0);
} // auto popmatrix
A:
I think it's now possible to do something like this:
struct GlTranslate
{
    // runs f() between the push/translate and the pop, so the usage below compiles as written
    GlTranslate(double x, double y, double z, const std::function<void()>& f)
    {
        glPushMatrix(); glTranslatef(x, y, z);
        f();
        glPopMatrix();
    }
};
then in the code
GlTranslate(x, y, z,[&]()
{
// your code goes here
});
Obviously, C++11 is needed
A:
The canonical way, as described in one answer, is to use a lambda expression as the block; in C++ you can easily write a template function
template<typename T>
void with(T instance, const std::function<void(T)> &f) {
    f(instance);
}
and use it like
with<GLTranslate>(GLTranslate(...), [] (auto translate) {
    ....
});
but the most common reason for wanting a mechanism to avoid defining names in your scope is long functions / methods that do lots of things. You might try a modern OOP / clean code inspired style with very short methods / functions for a change if this kind of problem keeps bothering you
A:
Using C++17, a very simple macro leading to an intuitive usage:
#define given(...) if (__VA_ARGS__; true)
And can be nested:
given (GlTranslate foo(1.0, 0.0, 0.0))
{
foo.stuff();
given (GlTranslate foo(1.0, 2.0, 3.0))
{
foo.stuff();
...
}
}
|
C/C++ macro/template blackmagic to generate unique name
|
Macros are fine.
Templates are fine.
Pretty much whatever works is fine.
The example is OpenGL; but the technique is C++ specific and relies on no knowledge of OpenGL.
Precise problem:
I want an expression E; where I do not have to specify a unique name; such that a constructor is called where E is defined, and a destructor is called where the block E is in ends.
For example, consider:
class GlTranslate {
GLTranslate(float x, float y, float z); {
glPushMatrix();
glTranslatef(x, y, z);
}
~GlTranslate() { glPopMatrix(); }
};
Manual solution:
{
GlTranslate foo(1.0, 0.0, 0.0); // I had to give it a name
.....
} // auto popmatrix
Now, I have this not only for glTranslate, but lots of other PushAttrib/PopAttrib calls too. I would prefer not to have to come up with a unique name for each var. Is there some trick involving macros, templates ... or something else that will automatically create a variable whose constructor is called at point of definition, and destructor called at end of block?
Thanks!
|
[
"I would not do this personally but just come up with unique names. But if you want to do it, one way is to use a combination of if and for:\n#define FOR_BLOCK(DECL) if(bool _c_ = false) ; else for(DECL;!_c_;_c_=true)\n\nYou can use it like\nFOR_BLOCK(GlTranslate t(1.0, 0.0, 0.0)) {\n FOR_BLOCK(GlTranslate t(1.0, 1.0, 0.0)) {\n ...\n }\n}\n\nEach of those names are in separate scopes and won't conflict. The inner names hide the outer names. The expressions in the if and for loops are constant and should be easily optimized by the compiler. \n\nIf you really want to pass an expression, you can use the ScopedGuard trick (see Most Important const), but it will need some more work to write it. But the nice side is, that we can get rid of the for loop, and let our object evaluate to false:\nstruct sbase { \n operator bool() const { return false; } \n};\n\ntemplate<typename T>\nstruct scont : sbase { \n scont(T const& t):t(t), dismiss() { \n t.enter();\n }\n scont(scont const&o):t(o.t), dismiss() {\n o.dismiss = true;\n }\n ~scont() { if(!dismiss) t.leave(); }\n\n T t; \n mutable bool dismiss;\n};\n\ntemplate<typename T>\nscont<T> make_scont(T const&t) { return scont<T>(t); }\n\n#define FOR_BLOCK(E) if(sbase const& _b_ = make_scont(E)) ; else\n\nYou then provide the proper enter and leave functions:\nstruct GlTranslate {\n GLTranslate(float x, float y, float z)\n :x(x),y(y),z(z) { }\n\n void enter() const {\n glPushMatrix();\n glTranslatef(x, y, z);\n }\n\n void leave() const {\n glPopMatrix();\n }\n\n float x, y, z;\n};\n\nNow you can write it entirely without a name on the user side:\nFOR_BLOCK(GlTranslate(1.0, 0.0, 0.0)) {\n FOR_BLOCK(GlTranslate(1.0, 1.0, 0.0)) {\n ...\n }\n}\n\n\nIf you want to pass multiple expressions at once, it's a bit more tricky, but you can write an expression template that acts on operator, to collect all expressions into a scont. \ntemplate<typename Derived>\nstruct scoped_obj { \n void enter() const { } \n void leave() const { } \n\n Derived const& get_obj() const {\n return static_cast<Derived const&>(*this);\n }\n};\n\ntemplate<typename L, typename R> struct collect \n : scoped_obj< collect<L, R> > {\n L l;\n R r;\n\n collect(L const& l, R const& r)\n :l(l), r(r) { }\n void enter() const { l.enter(); r.enter(); }\n void leave() const { r.leave(); l.leave(); }\n};\n\ntemplate<typename D1, typename D2> \ncollect<D1, D2> operator,(scoped_obj<D1> const& l, scoped_obj<D2> const& r) {\n return collect<D1, D2>(l.get_obj(), r.get_obj());\n}\n\n#define FOR_BLOCK(E) if(sbase const& _b_ = make_scont((E))) ; else\n\nYou need to inherit the RAII object from scoped_obj<Class> like the following shows\nstruct GLTranslate : scoped_obj<GLTranslate> {\n GLTranslate(float x, float y, float z)\n :x(x),y(y),z(z) { }\n\n void enter() const {\n std::cout << \"entering (\"\n << x << \" \" << y << \" \" << z << \")\" \n << std::endl;\n }\n\n void leave() const {\n std::cout << \"leaving (\"\n << x << \" \" << y << \" \" << z << \")\" \n << std::endl;\n }\n\n float x, y, z;\n};\n\nint main() {\n // if more than one element is passed, wrap them in parentheses\n FOR_BLOCK((GLTranslate(10, 20, 30), GLTranslate(40, 50, 60))) {\n std::cout << \"in block...\" << std::endl;\n }\n}\n\nAll of these involve no virtual functions, and the functions involved are transparent to the compiler. 
In fact, with the above GLTranslate changed to add a single integer to a global variable and when leaving subtracting it again, and the below defined GLTranslateE, i did a test:\n// we will change this and see how the compiler reacts.\nint j = 0;\n\n// only add, don't subtract again\nstruct GLTranslateE : scoped_obj< GLTranslateE > {\n GLTranslateE(int x):x(x) { }\n\n void enter() const {\n j += x;\n }\n\n int x;\n};\n\nint main() {\n FOR_BLOCK((GLTranslate(10), GLTranslateE(5))) {\n /* empty */\n }\n return j;\n}\n\nIn fact, GCC at optimization level -O2 outputs this:\nmain:\n sub $29, $29, 8\n ldw $2, $0, j\n add $2, $2, 5\n stw $2, $0, j\n.L1:\n add $29, $29, 8\n jr $31\n\nI wouldn't have expected that, it optimized quite well!\n",
"If your compiler supports __COUNTER__ (it probably does), you could try:\n// boiler-plate\n#define CONCATENATE_DETAIL(x, y) x##y\n#define CONCATENATE(x, y) CONCATENATE_DETAIL(x, y)\n#define MAKE_UNIQUE(x) CONCATENATE(x, __COUNTER__)\n\n// per-transform type\n#define GL_TRANSLATE_DETAIL(n, x, y, z) GlTranslate n(x, y, z)\n#define GL_TRANSLATE(x, y, z) GL_TRANSLATE_DETAIL(MAKE_UNIQUE(_trans_), x, y, z)\n\nFor\n{\n GL_TRANSLATE(1.0, 0.0, 0.0);\n\n // becomes something like:\n GlTranslate _trans_1(1.0, 0.0, 0.0);\n\n} // auto popmatrix\n\n",
"I think it's now possible to do something like this:\nstruct GlTranslate\n{\n operator()(double x,double y,double z, std::function<void()> f)\n {\n glPushMatrix(); glTranslatef(x, y, z);\n f();\n glPopMatrix();\n }\n};\n\nthen in the code\nGlTranslate(x, y, z,[&]()\n{\n// your code goes here\n});\n\nObviously, C++11 is needed\n",
"The canonical way as described in one answer is to use a lambda expression as the block, in C++ you can easily write a template function\nwith<T>(T instance, const std::function<void(T)> &f) {\n f(instance);\n}\n\nand use it like\nwith(GLTranslate(...), [] (auto translate) {\n ....\n});\n\nbut the most common reason for wanting a mechanism for avoiding defining names in your scope are long functions / methods that do lots of things. You might try a modern OOP / clean code inspired style with very short methods / functions for a change if this kind of problem keeps bothering you \n",
"Using C++17, a very simple macro leading to an intuitive usage:\n#define given(...) if (__VA_ARGS__; true)\n\nAnd can be nested:\ngiven (GlTranslate foo(1.0, 0.0, 0.0))\n{\n foo.stuff();\n\n given (GlTranslate foo(1.0, 2.0, 3.0))\n {\n foo.stuff();\n ...\n }\n}\n\n"
] |
[
67,
39,
11,
0,
0
] |
[] |
[] |
[
"c++",
"c_preprocessor",
"raii"
] |
stackoverflow_0002419650_c++_c_preprocessor_raii.txt
|
Q:
To find sum of 5 55 555 5555 .... n using python
We need to find the sum of the series 5 + 55 + 555 + ... up to a given number of terms n; for example, if n = 5 then the last term will be 55555.
A:
You can use the mul operator to repeat the digit, convert back to an integer and sum.
def find_sum(digit, max_repeats):
return sum(int(str(digit)*(i+1)) for i in range(max_repeats))
print(find_sum(5, 5))
#output 61725
A:
You can use the idea from the following algorithm:
def sum(n):
# This will be multiplied.
nbr=0
# This will be the return value.
ret=0
for i in range(0, n):
# Every iteration this will add 1 to the nbr: 1, 11, 111, etc.
nbr = nbr + 10 ** i
# Multiply by 5 and sum with the previous value: 5 + 55 + 555 + etc.
ret = nbr * 5 + ret
print(ret)
|
To find sum of 5 55 555 5555 .... n using python
|
We need to find the sum of the series 5 + 55 + 555 + ... up to a given number of terms n; for example, if n = 5 then the last term will be 55555.
|
[
"You can use the mul operator to repeat the digit, convert back to an integer and sum.\ndef find_sum(digit, max_repeats):\n return sum(int(str(digit)*(i+1)) for i in range(max_repeats))\n\nprint(find_sum(5, 5))\n#output 61725\n\n",
"You can use the idea from the following algorithm:\ndef sum(n):\n # This will be multiplied.\n nbr=0\n # This will be the return value.\n ret=0\n for i in range(0, n):\n # Every iteration this will add 1 to the nbr: 1, 11, 111, etc.\n nbr = nbr + 10 ** i\n # Multiply by 5 and sum with the previous value: 5 + 55 + 555 + etc.\n ret = nbr * 5 + ret\n print(ret)\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"python"
] |
stackoverflow_0074665350_python.txt
|
Q:
List field with only one element casting problem
I have a column in postgresql table that is a list with only one element and this element is always an integer or null.
I am trying to use this field in a query like this:
select
sum(case when value = 1 then 1 else 0 end) as count_of1
sum(case when value = 2 then 1 else 0 end) as count_of2
from tbl
and returns: operator does not exist: text=integer but as mentioned above I can't cast it to numeric for some unknown reason.
I am trying to cast this field and I always get an error. Tried:
value::numeric,
value::float,
value::integer
and I always get an error of casting.
pg_typeof(value) ->> 'text'
A:
If the column is defined as text, then compare it to a text value:
sum(case when value = '1' then 1 else 0 end)
alternatively:
count(*) filter (where value = '1')
But value::integer should work if all values in that column can be cast to an integer; if there is at least one row with a value that can't be converted, this will fail.
|
List field with only one element casting problem
|
I have a column in postgresql table that is a list with only one element and this element is always an integer or null.
I am trying to use this field in a query like this:
select
sum(case when value = 1 then 1 else 0 end) as count_of1
sum(case when value = 2 then 1 else 0 end) as count_of2
from tbl
and returns: operator does not exist: text=integer but as mentioned above I can't cast it to numeric for some unknown reason.
I am trying to cast this field and I always get an error. Tried:
value::numeric,
value::float,
value::integer
and I always get an error of casting.
pg_typeof(value) ->> 'text'
|
[
"If the column is defined as text, then compare it to a text value:\nsum(case when value = '1' then 1 else 0 end) \n\nalternatively:\ncount(*) filter (where value = '1')\n\n\nBut value::integer should work if all values in that column can be cast to an integer if there is at least one row with a value that can't be converted, this will fail.\n"
] |
[
1
] |
[] |
[] |
[
"postgresql"
] |
stackoverflow_0074665426_postgresql.txt
|
Q:
if else condition how to print even odd
Given an integer, , perform the following conditional actions:
If is odd, print "Weird".
If is even and in the inclusive range of to , print "Not Weird".
If is even and in the inclusive range of to , print "Weird".
If is even and greater than , print "Not Weird".
Input Format:
A single line containing a positive integer, .
|
if else condition how to print even odd
|
Given an integer, , perform the following conditional actions:
If is odd, print "Weird".
If is even and in the inclusive range of to , print "Not Weird".
If is even and in the inclusive range of to , print "Weird".
If is even and greater than , print "Not Weird".
Input Format:
A single line containing a positive integer, .
|
[] |
[] |
[
"I believe something like this:\ndef conditional_print(text):\n number = int(text)\n if number % 2 == 1:\n print(\"Weird\")\n elif ... <= number <= ...:\n print(\"Not Weird\")\n elif ... <= number <= ...:\n print(\"Weird\")\n elif ... < number:\n print(\"Not Weird\")\nconditional_print(...)\n\nBut I'm missing some information, You will need to put this information in the place of the three dots.\n"
] |
[
-1
] |
[
"python"
] |
stackoverflow_0074665401_python.txt
|
Q:
Is there a way to view my mailbox (exchange online\365) and its contents (subject, attachments) using PowerShell?
I would like to gather information from a weekly mail that is being sent to me, and download the attachments from the message automatically using PowerShell. My organization uses cloud 365 and exchange online services.
I'm new to PowerShell and currently learning it, so let me know if anymore info is needed.
A:
You can use Microsoft Graph to get the job done. See Get started with the Microsoft Graph PowerShell SDK for more information.
If you have Outlook installed you may also consider automating Outlook and get all the required data there locally. The PowerShell - Managing an Outlook Mailbox with PowerShell article explains the required steps for that.
The first piece of business is to invoke the Outlook API using code such as the following. This code gives us access to the messaging namespace of the Outlook API, in which typical objects are e-mail messages, Outlook rules and mail folders, among other objects:
Add-Type -assembly "Microsoft.Office.Interop.Outlook"
$Outlook = New-Object -comobject Outlook.Application
$namespace = $Outlook.GetNameSpace("MAPI")
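Building on that, a rough sketch of how the inbox could then be searched and the attachments saved; the subject filter and target folder are assumptions, and 6 is the olFolderInbox constant:
$inbox = $namespace.GetDefaultFolder(6)      # 6 = olFolderInbox
$targetDir = "C:\Reports"                    # hypothetical destination folder
$inbox.Items |
    Where-Object { $_.Subject -like "*Weekly Report*" } |   # assumed subject pattern
    ForEach-Object {
        foreach ($attachment in $_.Attachments) {
            $attachment.SaveAsFile((Join-Path $targetDir $attachment.FileName))
        }
    }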
|
Is there a way to view my mailbox (exchange online\365) and its contents (subject, attachments) using PowerShell?
|
I would like to gather information from a weekly mail that is being sent to me, and download the attachments from the message automatically using PowerShell. My organization uses cloud 365 and exchange online services.
I'm new to PowerShell and currently learning it, so let me know if any more info is needed.
|
[
"You can use Microsoft Graph to get the job one. See Get started with the Microsoft Graph PowerShell SDK for more information.\nIf you have Outlook installed you may also consider automating Outlook and get all the required data there locally. The PowerShell - Managing an Outlook Mailbox with PowerShell article explains the required steps for that.\nThe first piece of business is to invoke the Outlook API using code such as the following. This code gives us access to the messaging namespace of the Outlook API, in which typical objects are e-mail messages, Outlook rules and mail folders, among other objects:\nAdd-Type -assembly \"Microsoft.Office.Interop.Outlook\"\n$Outlook = New-Object -comobject Outlook.Application\n$namespace = $Outlook.GetNameSpace(\"MAPI\")\n\n"
] |
[
0
] |
[] |
[] |
[
"email",
"office365",
"office_automation",
"outlook",
"powershell"
] |
stackoverflow_0074627140_email_office365_office_automation_outlook_powershell.txt
|
Q:
Creating classical Properties.Settings in .Net 6.0 (Core) "Class Library" projects
Created a new "WPF Application" .NET 6.0 project
There creating classical Application Settings was easy in project->properties->Settings->"Create or open application settings"
Observed: the project gets a new folder "Properties" which has a yellow Folder icon with an additional black wrench symbol, okay
It contains a new item Settings.settings that can get edited via classical Settings Designer looking like it used to look in .Net 4.8, and a new App.config XML file is getting created automatically in the project's root folder which also looks like it used to in .Net 4.8, okay
Now the same procedure can apparently only be done manually in
a new "Class Library" project being added in the same solution where I would want to use that Properties.Settings / app.config feature pack for storing a DB connection string configurably:
the new sub-project does not seem to have a "Settings" option in the project Properties dialog (as opposed to what a .NET 4.x project would have had)
the new Properties folder and new Settings file can be created successfully there too manually as described in Equivalent to UserSettings / ApplicationSettings in WPF .NET 5, .NET 6 or .Net Core
but doing a "Rebuild solution" gives an
Error CS1069 The type name 'ApplicationSettingsBase' could not be found in the namespace 'System.Configuration'. This type has been forwarded to assembly 'System.Configuration.ConfigurationManager, Version=0.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' Consider adding a reference to that assembly. ClassLibrary1 C:\Users\Stefan\source\repos\WpfCorePropertiesSettings\ClassLibrary1\Properties\Settings.Designer.cs 16 Active
as a next step, adding the NuGet package "System.Configuration.Abstractions" to the Class Library project cures the symptom: "Rebuild solution" makes the error disappear.
TLDNR, actual question: is that sequence an acceptable solution or a kludge to avoid?
To me the NuGet package description does not sound as if the package was made for that purpose, and I have not heard the maintainers' names before (which might or might not matter?)
https://github.com/davidwhitney/System.Configuration.Abstractions
TIA
PS:
A:
Maybe I don't understand something...
Why create "Equivalent to UserSettings"?
My configuration is Win10+VS2022. I am creating a WPF .Net6 project. I go to the "Project Properties" menu. In the menu of the project properties tab (column on the left) there is an item Options. When selected, if the settings have not yet been created, there will be a small comment and a link to "Open or create application settings".
Unfortunately, I have Russian localization, so the screenshots are with Russian names.
Addition
But an additional "Class Library" sub-project does not seem to have that Project Properties option in my En/US localization. Does it in yours?
These are the APP settings.
Therefore, they do not make much sense in the library.
But if you need to, you can just copy the class to the library and then set up the links you need.
To do this, type in the application code the line Properties.Settings.Default.Save();. Move the cursor to Settings and press the F12 key.
You will be taken to the source code for the Settings class declaration. This code is generated by a code generator.
After navigating there, copy all the source code into a class in another project. After the migration, you may need to add references in the project, fix the namespace and add usings.
As for the parameters in the «Class Library» project, it probably depends on what type this library is.
I have such settings in the «Class Library for WPF».
But in Libraries for Standard - no.
A:
In the meantime I'm happy with a custom "AppSettings.json" approach.
After removing the previously described "classical app.config" approach, and after adding two NuGet packages:
<PackageReference Include="Microsoft.Extensions.Configuration" Version="7.0.0" />
<PackageReference Include="Microsoft.Extensions.Configuration.Json" Version="7.0.0" />
... I created a custom Json file on "Class Library" (sub)project level in Visual Studio manually, and set its CopyToOutputDirectory property
<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
And added an IConfigurationBuilder part:
using Microsoft.Extensions.Configuration;
namespace Xyz.Data
{
internal class AppSettingsConfig
{
public AppSettingsConfig()
{
IConfigurationBuilder builder = new ConfigurationBuilder();
_ = builder.AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), "AppSettings.Movies3Data.json"));
var root = builder.Build();
AttachedDb = root.GetConnectionString("AttachedDb")!;
}
public string AttachedDb { get; init; }
}
}
And then made it a "Jon Skeet singleton"
/// <summary>
/// Singleton as described by Jon Skeet
/// </summary>
/// https://csharpindepth.com/Articles/Singleton
internal sealed class AppSettingsConfigSingleton
{
private static readonly Logger log = LogManager.GetCurrentClassLogger();
private AppSettingsConfigSingleton()
{
log.Trace($"{nameof(AppSettingsConfigSingleton)} ctor is running");
IConfigurationBuilder builder = new ConfigurationBuilder();
_ = builder.AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), "AppSettings.Movies3Data.json"));
var root = builder.Build();
AttachedDb = root.GetConnectionString("AttachedDb")!;
}
static AppSettingsConfigSingleton() { }
public string? AttachedDb { get; init; }
public static AppSettingsConfigSingleton Instance { get { return Nested.instance; } }
private class Nested
{
// Explicit static constructor to tell C# compiler
// not to mark type as beforefieldinit
static Nested()
{
}
internal static readonly AppSettingsConfigSingleton instance = new();
}
}
And it "works well" by also reading JSON content just having been modified by admins at run-time. (Which would be the Entity Framework Core "localdb" location for the unit-of-work pattern in a multi-UI solution). Thanks again to you too, @EldHasp
|
Creating classical Properties.Settings in .Net 6.0 (Core) "Class Library" projects
|
Created a new "WPF Application" .NET 6.0 project
There creating classical Application Settings was easy in project->properties->Settings->"Create or open application settings"
Observed: the project gets a new folder "Properties" which has a yellow Folder icon with an additional black wrench symbol, okay
It contains a new item Settings.settings that can get edited via classical Settings Designer looking like it used to look in .Net 4.8, and a new App.config XML file is getting created automatically in the project's root folder which also looks like it used to in .Net 4.8, okay
Now the same procedure can apparently only be done manually in
a new "Class Library" project being added in the same solution where I would want to use that Properties.Settings / app.config feature pack for storing a DB connection string configurably:
the new sub-project does not seem to have a "Settings" option in the project Properties dialog (as opposed to what a .NET 4.x project would have had)
the new Properties folder and new Settings file can be created successfully there too manually as described in Equivalent to UserSettings / ApplicationSettings in WPF .NET 5, .NET 6 or .Net Core
but doing a "Rebuild solution" gives an
Error CS1069 The type name 'ApplicationSettingsBase' could not be found in the namespace 'System.Configuration'. This type has been forwarded to assembly 'System.Configuration.ConfigurationManager, Version=0.0.0.0, Culture=neutral, PublicKeyToken=cc7b13ffcd2ddd51' Consider adding a reference to that assembly. ClassLibrary1 C:\Users\Stefan\source\repos\WpfCorePropertiesSettings\ClassLibrary1\Properties\Settings.Designer.cs 16 Active
as a next step, adding the NuGet package "System.Configuration.Abstractions" to the Class Library project cures the symptom: "Rebuild solution" makes the error disappear.
TLDNR, actual question: is that sequence an acceptable solution or a kludge to avoid?
To me the NuGet package description does not sound as if the package was made for that purpose, and I have not heard the maintainers' names before (which might or might not matter?)
https://github.com/davidwhitney/System.Configuration.Abstractions
TIA
PS:
|
[
"Maybe I don't understand something...\nWhy create \"Equivalent to UserSettings\"?\nMy configuration is Win10+VS2022. I am creating a WPF .Net6 project. I go to the \"Project Properties\" menu. In the menu of the project properties tab (column on the left) there is an item Options. When selected, if the settings have not yet been created, there will be a small comment and a link to \"Open or create application settings\".\nUnfortunately, I have Russian localization, so the screenshots are with Russian names.\n\n\n\nAddition\n\nBut an additional \"Class Library\" sub-project does not seem to have that Project Properties option in my En/US localization. Does it in yours?\n\nThese are the APP settings.\nTherefore, they do not make much sense in the library.\nBut if you need to, you can just copy the class to the library and then set up the links you need.\nTo do this, type in the application code the line Properties.Settings.Default.Save();. Move the cursor to Settings and press the F12 key.\n\nYou will be taken to the source code for the Settings class declaration. This code is generated by a code generator.\n\nAfter moving to, copy all the source code into a class in another project. After the migration, you may need to add references in the project, fix the namespace and add usings.\nAs for the parameters in the «Class Library» project, it probably depends on what type this library is.\nI have such settings in the «Class Library for WPF».\n\nBut in Libraries for Standard - no.\n\n",
"In the meantime I'm happy with a custom \"AppSettings.json\" approach.\nAfter removing the previously described \"classical app.config\" approach, and after adding two NuGet packages:\n<PackageReference Include=\"Microsoft.Extensions.Configuration\" Version=\"7.0.0\" />\n<PackageReference Include=\"Microsoft.Extensions.Configuration.Json\" Version=\"7.0.0\" />\n\n... I created a custom Json file on \"Class Library\" (sub)project level in Visual Studio manually, and set its CopyToOutputDirectory property\n<CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>\n\nAnd added an 'IConfigurationBuilder` part:\nusing Microsoft.Extensions.Configuration;\n\nnamespace Xyz.Data\n{\n internal class AppSettingsConfig\n {\n public AppSettingsConfig()\n {\n IConfigurationBuilder builder = new ConfigurationBuilder();\n _ = builder.AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), \"AppSettings.Movies3Data.json\"));\n\n var root = builder.Build();\n AttachedDb = root.GetConnectionString(\"AttachedDb\")!;\n }\n\n public string AttachedDb { get; init; }\n }\n}\n\nAnd then made it a \"Jon Skeet singleton\"\n /// <summary>\n /// Singleton as described by Jon Skeet\n /// </summary>\n /// https://csharpindepth.com/Articles/Singleton\n internal sealed class AppSettingsConfigSingleton\n {\n private static readonly Logger log = LogManager.GetCurrentClassLogger();\n\n private AppSettingsConfigSingleton()\n {\n log.Trace($\"{nameof(AppSettingsConfigSingleton)} ctor is running\");\n\n IConfigurationBuilder builder = new ConfigurationBuilder();\n _ = builder.AddJsonFile(Path.Combine(Directory.GetCurrentDirectory(), \"AppSettings.Movies3Data.json\"));\n\n var root = builder.Build();\n AttachedDb = root.GetConnectionString(\"AttachedDb\")!;\n }\n static AppSettingsConfigSingleton() { }\n\n public string? AttachedDb { get; init; }\n\n public static AppSettingsConfigSingleton Instance { get { return Nested.instance; } }\n\n private class Nested\n {\n // Explicit static constructor to tell C# compiler\n // not to mark type as beforefieldinit\n static Nested()\n {\n }\n\n internal static readonly AppSettingsConfigSingleton instance = new();\n }\n }\n\nAnd it \"works well\" by also reading JSON content just having been modified by admins at run-time. (Which would be the Entity Framework Core \"localdb\" location for the unit-of-work pattern in a multi-UI solution). Thanks again to you too, @EldHasp\n"
] |
[
0,
0
] |
[] |
[] |
[
".net_6.0",
"app_config",
"class_library",
"configurationmanager",
"wpf"
] |
stackoverflow_0074309719_.net_6.0_app_config_class_library_configurationmanager_wpf.txt
|
Q:
Edit SVG color in draw.io
I'm following this guide on editing imported SVGs in draw.io but with no luck. I am not getting the style options after inserting the editableCssRules=.*; code on the SVG itself.
Has anyone else experienced this? I have the latest version installed and have restarted my machine.
I'm expecting to see additional Fill options as the guide suggests.
A:
If possible, please attach the SVG image and add the whole SVG code (Ctrl+e) with editableCssRules=.*; inserted, so I can see all the details.
Thanks,
A:
Same problem here.
Here is the whole SVG code as requested :
editableCssRules=.*;aspect=fixed;html=1;points=[];align=center;image;fontSize=12;image=img/lib/azure2/general/File.svg;imageBackground=default;sketch=0;
A:
The option editableCssRules just does what it says: it makes the specified CSS classes editable; in your case, by using editableCssRules=.*; you are using a regex expression to make all CSS classes in the SVG editable.
You are not seeing any options appear after modifying the style because, in all probability, the SVG you are using does not contain any CSS classes.
You will need to edit your SVG file and add the CSS classes that you need and refer to them in the paths, like so:
<style type="text/css">
.st0{fill:#000000;}
</style>
<path class="st0" d="YOUR_PATH_HERE"/>
Now, in draw.io import the SVG again and after adding the editableCssRules=.*; option to the style you should be able to edit its color, like so:
|
Edit SVG color in draw.io
|
I'm following this guide on editing imported SVGs in draw.io but with no luck. I am not getting the style options after inserting the editableCssRules=.*; code on the SVG itself.
Has anyone else experienced this? I have the latest version installed and have restarted my machine.
I'm expecting to see additional Fill options as the guide suggests.
|
[
"If possible, please attach SVG image and add the whole SVG code (Ctrl+e) with inserted editableCssRules=.*; so I can see all the details.\nThanks,\n",
"Same problem here.\nHere is the whole SVG code as requested :\neditableCssRules=.*;aspect=fixed;html=1;points=[];align=center;image;fontSize=12;image=img/lib/azure2/general/File.svg;imageBackground=default;sketch=0;\n\n\n",
"The option editableCssRules just does what it says: it makes the specified CSS classes editables; in your case by using editableCssRules=.*; you are using a Regex expression to make all CSS classes in the SVG editable.\nYou are not seeing any options appear after modifying the style because with all probability the SVG you are using does not contain any CSS classes.\nYou will need to edit your SVG file and add the CSS classes that you need and refer to them in the paths, like so:\n <style type=\"text/css\">\n .st0{fill:#000000;} \n </style> \n <path class=\"st0\" d=\"YOUR_PATH_HERE\"/>\n\nNow, in draw.io import the SVG again and after adding the editableCssRules=.*; option to the style you should be able to edit its color, like so:\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"draw.io"
] |
stackoverflow_0073309145_draw.io.txt
|
Q:
Checking for prime numbers - problems with understanding code - Java
I have a program which checks whether a given number from 0 to 9999 is a prime number or not (It was provided as a sample solution):
public class Primzahl {
public static void main(String[] args) {
for (int i = 0; i < 10000; i++) {
System.out.println(i + " " + isPrimzahl(i));
}
System.out.println();
}
public static boolean isPrimzahl(int zahl) {
for (int i = 2; i < (zahl / 2 + 1); i++) {
if (zahl % i == 0) {
return false;
}
}
return true;
}
}
However, I am having problems with understanding parts of the code:
i < (zahl / 2 + 1)
How is this part working?
And:
if (zahl % i == 0) {
return false;
}
With how many numbers is a given zahl checked in this program?
Edit: typo.
A:
i < (zahl / 2 + 1)
This is just setting the upper bound of the loop to half of the input number. Operator precedence means it will divide zahl by 2 before adding 1. There is no chance that the number will divide by something more than half of its value, so the loop can terminate there if no divisor has been found.
if (zahl % i == 0) {
return false;
}
This causes it to exit the loop if an exact divisor has been found. % is the modulus operator so zahl % i == 0 means that zahl divides exactly by i and so the number cannot be prime.
The program checks whether any number from 2 to zahl/2 (in increments of 1) divides exactly into the input number.
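For comparison, a common variant of the same idea is sketched below: it stops at the square root of zahl instead of at half of it, and it also treats 0 and 1 as non-prime (the sample solution above reports them as prime because its loop never runs for them):
public static boolean isPrimzahl(int zahl) {
    if (zahl < 2) {                          // 0 and 1 are not prime
        return false;
    }
    for (int i = 2; i * i <= zahl; i++) {    // no divisor of zahl can exceed sqrt(zahl)
        if (zahl % i == 0) {                 // remainder 0 means i divides zahl exactly
            return false;
        }
    }
    return true;
}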
A:
i < (zahl / 2 + 1)
divide zahl by two, then add one. Since integer division is used, any fractions are ignored. That means for 13 you would get 6, add one and the result is 7.
Since you compare whether i is smaller, the whole expression will be true for 2, 3, 4, 5, 6.
zahl % i == 0
That one performs the same integer division but just returns the remainder. If the remainder is zero, zahl cannot be prime.
See also
https://press.rebus.community/programmingfundamentals/chapter/integer-division-and-modulus/
https://de.wikipedia.org/wiki/Division_mit_Rest#Nat%C3%BCrliche_Zahlen
|
Checking for prime numbers - problems with understanding code - Java
|
I have a program which checks whether a given number from 0 to 9999 is a prime number or not (It was provided as a sample solution):
public class Primzahl {
public static void main(String[] args) {
for (int i = 0; i < 10000; i++) {
System.out.println(i + " " + isPrimzahl(i));
}
System.out.println();
}
public static boolean isPrimzahl(int zahl) {
for (int i = 2; i < (zahl / 2 + 1); i++) {
if (zahl % i == 0) {
return false;
}
}
return true;
}
}
However, I am having problems with understanding parts of the code:
i < (zahl / 2 + 1)
How is this part working?
And:
if (zahl % i == 0) {
return false;
}
With how many numbers is a given zahl checked in this program?
Edit: typo.
|
[
"i < (zahl / 2 + 1)\n\nThis is just setting the upper bound of the loop to half of the input number. Operator precedence means it will divide zahl by 2 before adding 1. There is no chance that the number will divide by something more than half of its value, so the loop can terminate there if no divisor has been found.\n if (zahl % i == 0) {\n return false;\n }\n\nThis causes it to exit the loop if an exact divisor has been found. % is the modulus operator so zahl % i == 0 means that zahl divides exactly by i and so the number cannot be prime.\nThe program checks whether any numbers from 2 to zahl/2 (in increments of 1) divides exactly into the input number.\n",
"i < (zahl / 2 + 1)\n\ndivide zahl by two, then add one. Since integer division is used, any fractions are ignored. That means for 13 you would get 6, add one and the result is 7.\nSince you compare whether i is smaller, the whole expression will be true for 2, 3, 4, 5, 6.\nzahl % i == 0\n\nThat one performs the same integer division but just returns the remainder. If the remainder is zero, the zahl cannot be a prime.\nSee also\n\nhttps://press.rebus.community/programmingfundamentals/chapter/integer-division-and-modulus/\nhttps://de.wikipedia.org/wiki/Division_mit_Rest#Nat%C3%BCrliche_Zahlen\n\n"
] |
[
1,
0
] |
[] |
[] |
[
"java"
] |
stackoverflow_0074665430_java.txt
|
Q:
Possible null reference argument while adding an item to a list
I have the following code to read columns from SQL table:
public List<string> GetColumns(string tableName)
{
var columns = new List<string>();
using (var conn = new SqlConnection(SqlServerConnectionString))
{
conn.Open();
var selectQuery = $"SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{tableName}'";
using (var cmd = new SqlCommand(selectQuery, conn))
{
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
columns.Add(reader["COLUMN_NAME"].ToString());
}
}
}
conn.Close();
}
return columns;
}
I see this warning on running the code:
Possible null reference argument for parameter 'item' in 'void List.Add(string item)'
How do I resolve this?
A:
The null warning probably means that reader["COLUMN_NAME"] might return null. Even though that should not occur in practice, the compiler cannot know it.
You could just add a variable for reader["COLUMN_NAME"] and check whether it's null before adding it to the list.
I'm not sure why the warning points at List.Add rather than at the .ToString() call.
Edit: and as was said in the comments of your question, better to use SqlParameters instead of building the query from strings directly, to avoid SQL injection.
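A minimal sketch of both suggestions; the parameter name @tableName is just an illustration:
// guard against a possible null before adding to the list
while (reader.Read())
{
    var name = reader["COLUMN_NAME"] as string;
    if (name is not null)
    {
        columns.Add(name);
    }
}

// and pass the table name as a parameter instead of concatenating it into the SQL
var selectQuery = "SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = @tableName";
using var cmd = new SqlCommand(selectQuery, conn);
cmd.Parameters.AddWithValue("@tableName", tableName);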
|
Possible null reference argument while adding an item to a list
|
I have the following code to read columns from SQL table:
public List<string> GetColumns(string tableName)
{
var columns = new List<string>();
using (var conn = new SqlConnection(SqlServerConnectionString))
{
conn.Open();
var selectQuery = $"SELECT COLUMN_NAME FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{tableName}'";
using (var cmd = new SqlCommand(selectQuery, conn))
{
using (var reader = cmd.ExecuteReader())
{
while (reader.Read())
{
columns.Add(reader["COLUMN_NAME"].ToString());
}
}
}
conn.Close();
}
return columns;
}
I see this warning on running the code:
Possible null reference argument for parameter 'item' in 'void List.Add(string item)'
How do I resolve this?
|
[
"The null warning probably warns you that reader[\"COLUMNAME\"] might returns null. Even tho it in praxis should not occur, your compiler would never know.\nMay just add a variable for the reader[\"COLUMNAME\"] and check if it's null before adding it to the list.\nI'm not sure why it won't mention that .ToString() call but the List.Add.\nEdit: and what was said in the comments of your question: better use SqlParameters instead of directly using strings in the query to avoid SQL injections.\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"sql"
] |
stackoverflow_0074662541_c#_sql.txt
|