content: string (length 86 to 88.9k)
title: string (length 0 to 150)
question: string (length 1 to 35.8k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 30 to 130)
Q: Using pd.concat to add null valued columns from list if not in dataframe? I got the below performance warning PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling frame.insert many times, which has poor performance. Consider using pd.concat instead. To get a de-fragmented frame, use newframe = frame.copy() when I tried to add columns to a dataframe from a list. The warning asks me to consider using pd.concat. But it looks like pd.concat does not take lists. I am trying to create a set of dataframes from Excel with the same columns and rows. Each file I'm working with has a slightly different set of numbered columns. I tried iterating over a list to add the columns that are missing. But since that throws a performance warning, I'd like to improve performance. A sample of my code is below. missing_columns = list(map(str, [*range(1900, 2022, 1)])) for f in files: data = pd.read_excel(f) cols = data.columns.values.tolist() new_cols[0] = 'Company' data.columns = new_cols for missed in missing_columns : if missed not in new_cols: data[missed] = np.NAN data = data.set_index('Company') data = data.reindex(sorted(data.columns), axis = 1) Any help is appreciated! A: I am trying to process a set of Excel files that have different sets of columns, and I want to add any missing columns to the dataframes so that they all have the same set of columns. One issue with my code was that it used a for loop to iterate over the missing cols list and add missing columns to the dataframes. This approach can be inefficient because it requires the code to iterate over the entire missing cols list for each dataframe, which can take a long time if the list is large and the dataframes are large. One way to improve the efficiency of the code is to use the reindex method to add missing columns to the dataframes, rather than using a for loop. The reindex method can add missing columns to a dataframe and fill them with a specified value, such as NaN, which is what I tried in the for loop. By using the reindex method, I can avoid the need to iterate over the years list and add missing columns one at a time, which can improve the performance of the code. Here how I used the reindex method to add missing columns to the dataframes: # Get the complete set of columns for the dataframes all_columns = ['Company'] + missing_columns # Reindex the dataframes to include all columns data = data.reindex(columns = all_columns, fill_value = np.NAN)
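To make the reindex approach above concrete, here is a minimal sketch of the per-file loop; the file names are hypothetical, the year range is the one from the question, and it assumes the first column of each sheet holds the company name. Note that reindex(columns=...) also drops any column not in the target list, so the list must contain every column you want to keep.

import numpy as np
import pandas as pd

files = ["report_a.xlsx", "report_b.xlsx"]        # hypothetical input files
year_columns = list(map(str, range(1900, 2022)))  # same numbered columns as in the question

frames = []
for f in files:
    data = pd.read_excel(f)
    cols = data.columns.values.tolist()
    cols[0] = "Company"                           # rename the first column
    data.columns = cols
    data = data.set_index("Company")
    # One reindex call adds every missing year column (filled with NaN) and
    # fixes the column order, replacing the per-column insert loop entirely.
    data = data.reindex(columns=year_columns, fill_value=np.nan)
    frames.append(data)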
Using pd.concat to add null valued columns from list if not in dataframe?
I got the below performance warning PerformanceWarning: DataFrame is highly fragmented. This is usually the result of calling frame.insert many times, which has poor performance. Consider using pd.concat instead. To get a de-fragmented frame, use newframe = frame.copy() when I tried to add columns to a dataframe from a list. The warning asks me to consider using pd.concat. But it looks like pd.concat does not take lists. I am trying to create a set of dataframes from Excel with the same columns and rows. Each file I'm working with has a slightly different set of numbered columns. I tried iterating over a list to add the columns that are missing. But since that throws a performance warning, I'd like to improve performance. A sample of my code is below. missing_columns = list(map(str, [*range(1900, 2022, 1)])) for f in files: data = pd.read_excel(f) cols = data.columns.values.tolist() new_cols[0] = 'Company' data.columns = new_cols for missed in missing_columns : if missed not in new_cols: data[missed] = np.NAN data = data.set_index('Company') data = data.reindex(sorted(data.columns), axis = 1) Any help is appreciated!
[ "I am trying to process a set of Excel files that have different sets of columns, and I want to add any missing columns to the dataframes so that they all have the same set of columns.\nOne issue with my code was that it used a for loop to iterate over the missing cols list and add missing columns to the dataframes. This approach can be inefficient because it requires the code to iterate over the entire missing cols list for each dataframe, which can take a long time if the list is large and the dataframes are large.\nOne way to improve the efficiency of the code is to use the reindex method to add missing columns to the dataframes, rather than using a for loop. The reindex method can add missing columns to a dataframe and fill them with a specified value, such as NaN, which is what I tried in the for loop. By using the reindex method, I can avoid the need to iterate over the years list and add missing columns one at a time, which can improve the performance of the code.\nHere how I used the reindex method to add missing columns to the dataframes:\n# Get the complete set of columns for the dataframes\nall_columns = ['Company'] + missing_columns\n\n# Reindex the dataframes to include all columns\ndata = data.reindex(columns = all_columns, fill_value = np.NAN)\n\n" ]
[ 0 ]
[]
[]
[ "dataframe" ]
stackoverflow_0074369140_dataframe.txt
Q: python re unterminated character set at position 0 CODE: import re inp=input() tup=tuple(map(str,inp.split(','))) i=0 while i<len(tup): x=tup[i] a=re.search("[0-9a-zA-Z\$#@",x) if a!="None": break else: i=i+1 if a!="None" and len(tup[i])>=6 and len(tup[i])<=12: print(tup[i]) else: print("invalid") INPUT: ABd1234@1,a F1#,2w3E*,2We3345 ERROR: unterminated character set at position 0 A: The error stems from the invalid regular expression - specifically, you've omitted the right-bracket. However, even if you fix that, based on the code shown in the question, this isn't going to work for a couple of reasons. The return value from re.search will always be unequal to 'None' The final if test in the code is outside the while loop which is almost certainly not what's wanted. Try this instead: import string VALIDCHARS = set(string.ascii_letters+string.digits+'$#@') for word in input().split(','): if 6 <= len(word) <= 12 and all(c in VALIDCHARS for c in word): print(f'{word} is valid') else: print(f'{word} is invalid') A: Your regular expression is missing a closing bracket. It should be: a=re.search("[0-9a-zA-Z\$#@]",x) Also, replace all instances of "None" the string with None the keyword. This is because .search returns None as can be seen here. https://docs.python.org/3/library/re.html#re.search
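As a small, self-contained illustration of the corrected character class from the second answer, combined with the 6-12 length rule from the question, re.fullmatch can validate each comma-separated token in one step; the sample input is the one given in the question.

import re

pattern = re.compile(r"[0-9a-zA-Z$#@]{6,12}")  # closing bracket restored; length limits built in
inp = "ABd1234@1,a F1#,2w3E*,2We3345"          # the INPUT line from the question

for word in inp.split(","):
    if pattern.fullmatch(word):
        print(word)
    else:
        print("invalid")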
python re unterminated character set at position 0
CODE: import re inp=input() tup=tuple(map(str,inp.split(','))) i=0 while i<len(tup): x=tup[i] a=re.search("[0-9a-zA-Z\$#@",x) if a!="None": break else: i=i+1 if a!="None" and len(tup[i])>=6 and len(tup[i])<=12: print(tup[i]) else: print("invalid") INPUT: ABd1234@1,a F1#,2w3E*,2We3345 ERROR: unterminated character set at position 0
[ "The error stems from the invalid regular expression - specifically, you've omitted the right-bracket.\nHowever, even if you fix that, based on the code shown in the question, this isn't going to work for a couple of reasons.\n\nThe return value from re.search will always be unequal to 'None'\nThe final if test in the code is outside the while loop which is almost certainly not what's wanted.\n\nTry this instead:\nimport string\n\nVALIDCHARS = set(string.ascii_letters+string.digits+'$#@')\n\nfor word in input().split(','):\n if 6 <= len(word) <= 12 and all(c in VALIDCHARS for c in word):\n print(f'{word} is valid')\n else:\n print(f'{word} is invalid')\n\n", "Your regular expression is missing a closing bracket. It should be:\na=re.search(\"[0-9a-zA-Z\\$#@]\",x)\n\nAlso, replace all instances of \"None\" the string with None the keyword. This is because .search returns None as can be seen here.\nhttps://docs.python.org/3/library/re.html#re.search\n" ]
[ 0, 0 ]
[]
[]
[ "python", "python_re", "regex", "search", "tuples" ]
stackoverflow_0074658401_python_python_re_regex_search_tuples.txt
Q: How to directly read the value of the `std::atomic_int64_t` without atomic operation? I have an std::atomic_int64_t that can be read by multiple threads but written by only one thread. In the one thread that writes the atomic, I want to read it directly without any atomic-related instruction since there won't be concurrent writing. How should I do that in C++? A: It's hard to tell for sure without knowing your use case if what you're trying to do is reasonable, but there's about a 99.95% chance that it's a bad idea. The reason for this is not obvious, so let me have a go. Complex runtime environments Atomics, despite the name, are not just about atomic access to a variable, they're about ordering of effects. To understand what this means, we have to understand a little bit about two things: modern CPUs, caches, and memory compiler optimizations For point 1, consider that a modern CPU consists mostly of memory management. Very little silicon is actually devoted to calculating, most of it is concerned with keeping the calculation units fed with data. When you store a value into memory, chances are it's not going to show up in main memory immediately, instead it'll go to the active CPU core's store buffer that's going to be flushed out at some point in the future, at which point it may become visible to the other CPU core on which your other thread runs. In the name of performance, we have turned our CPUs into highly asynchronous beasts. For point 2, consider that your compiler will take apart the code you give it, analyze it for data dependencies, and reorder your instructions in such a way that they'll have the same results but run faster. Consider that the compiler can only do this for the code of one thread at a time. It cannot know that another thread depends on what the first one is doing being done in a particular order (or indeed at all), and so it'll run roughshod over the assumptions of that other, unknown thread. Another thread will change a variable, so that'll need to be re-read from time to time? Well, the compiler doesn't know that and has the value in a register, so it'll generate an endless loop and call it a day. This sort of behaviour needs to be inhibited when you have multiple threads. Atomics are about synchronization The main point of atomics is synchronization. An Atomic write ensures that store buffers are flushed, things become visible in main memory, and prevent the compiler from reordering instructions across it (possibly one-way, depending on the precise boundary used). Similarly, an atomic read ensures that the values from main memory become visible in cache and prevent the compiler from reordering across it. So if your reader threads don't use atomic reads, we have a situation: The writer threads does a proper atomic store. This ensures that the compiler does not reorder operations in the writer thread across the atomic boundary, and the generated code ensures that the CPU core's store buffer is flushed. If the reader threads read the atomic variable with an atomic read and see the new value, they'll also see everything else the writer thread was instructed to do up to that point. So, if the reader does not use an atomic read to read the atomic variable, what could go wrong? Well, basically two things: the cpu core might not see the need to update its cache the compiler could reorder operations across the non-atomic read, or not see a need to re-read a value from memory that the optimizer thought it already knew and had in a register. 
In effect, what this means is that the reader thread might work under the assumption that things the writer thread did before the write have already happened, but it ends up "not seeing" those new data. Hilarity will (almost) inevitably ensue. TL;DR Atomics are about synchonisation. You have multiple threads that you need to synchronize. Use atomic reads in the reader thread, or you're not synchronizing.
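If the concern behind the question is only the cost of the atomic read in the single writer thread, a commonly suggested compromise, consistent with the answer above, is to keep every access on the std::atomic object but use a relaxed load for the writer's own reads; on mainstream 64-bit hardware that typically compiles to an ordinary load, while the release/acquire pair still synchronizes with the readers. A rough sketch; the counter and loop bodies are purely illustrative.

#include <atomic>
#include <cstdint>
#include <thread>

std::atomic<std::int64_t> counter{0};

void writer() {
    for (int i = 0; i < 1000; ++i) {
        // Only this thread writes, so a relaxed load of its own value is enough,
        // and it is still a data-race-free atomic access.
        std::int64_t current = counter.load(std::memory_order_relaxed);
        counter.store(current + 1, std::memory_order_release);  // publish to readers
    }
}

void reader() {
    // Readers keep using acquire loads so they also see everything the writer
    // did before the matching release store.
    std::int64_t seen = counter.load(std::memory_order_acquire);
    (void)seen;
}

int main() {
    std::thread w(writer);
    std::thread r(reader);
    w.join();
    r.join();
}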
How to directly read the value of the `std::atomic_int64_t` without atomic operation?
I have an std::atomic_int64_t that can be read by multiple threads but written by only one thread. In the one thread that writes the atomic, I want to read it directly without any atomic-related instruction since there won't be concurrent writing. How should I do that in C++?
[ "It's hard to tell for sure without knowing your use case if what you're trying to do is reasonable, but there's about a 99.95% chance that it's a bad idea. The reason for this is not obvious, so let me have a go.\nComplex runtime environments\nAtomics, despite the name, are not just about atomic access to a variable, they're about ordering of effects. To understand what this means, we have to understand a little bit about two things:\n\nmodern CPUs, caches, and memory\ncompiler optimizations\n\nFor point 1, consider that a modern CPU consists mostly of memory management. Very little silicon is actually devoted to calculating, most of it is concerned with keeping the calculation units fed with data. When you store a value into memory, chances are it's not going to show up in main memory immediately, instead it'll go to the active CPU core's store buffer that's going to be flushed out at some point in the future, at which point it may become visible to the other CPU core on which your other thread runs. In the name of performance, we have turned our CPUs into highly asynchronous beasts.\nFor point 2, consider that your compiler will take apart the code you give it, analyze it for data dependencies, and reorder your instructions in such a way that they'll have the same results but run faster. Consider that the compiler can only do this for the code of one thread at a time. It cannot know that another thread depends on what the first one is doing being done in a particular order (or indeed at all), and so it'll run roughshod over the assumptions of that other, unknown thread. Another thread will change a variable, so that'll need to be re-read from time to time? Well, the compiler doesn't know that and has the value in a register, so it'll generate an endless loop and call it a day. This sort of behaviour needs to be inhibited when you have multiple threads.\nAtomics are about synchronization\nThe main point of atomics is synchronization. An Atomic write ensures that store buffers are flushed, things become visible in main memory, and prevent the compiler from reordering instructions across it (possibly one-way, depending on the precise boundary used). Similarly, an atomic read ensures that the values from main memory become visible in cache and prevent the compiler from reordering across it.\nSo if your reader threads don't use atomic reads, we have a situation: The writer threads does a proper atomic store. This ensures that the compiler does not reorder operations in the writer thread across the atomic boundary, and the generated code ensures that the CPU core's store buffer is flushed. If the reader threads read the atomic variable with an atomic read and see the new value, they'll also see everything else the writer thread was instructed to do up to that point.\nSo, if the reader does not use an atomic read to read the atomic variable, what could go wrong? Well, basically two things:\n\nthe cpu core might not see the need to update its cache\nthe compiler could reorder operations across the non-atomic read, or not see a need to re-read a value from memory that the optimizer thought it already knew and had in a register.\n\nIn effect, what this means is that the reader thread might work under the assumption that things the writer thread did before the write have already happened, but it ends up \"not seeing\" those new data. Hilarity will (almost) inevitably ensue.\nTL;DR\nAtomics are about synchonisation. You have multiple threads that you need to synchronize. 
Use atomic reads in the reader thread, or you're not synchronizing.\n" ]
[ 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074656947_c++.txt
Q: Is there a way with Hasura to do a mutation based on the result of a query, within the same GraphQL call (Hasura transaction)? I tried to search for an example but, I presume it's not doable. I am looking to hopefully be proven wrong or to find an official confirmation that it's not doable. Before using Hasura, I was doing transactional SQL queries that ensured that data was kept consistent. For example, I would like to create a password reset token if a user requests it, only if the user can be found using an email address. Right now, I have to do 2 queries: Try to find a user with the specified email address Insert and assign the token to this user id In that case, it's not too bad, but now if I want to consume that token, I have to do 3 queries: Find the valid token Change the password to the user associated with that token Delete the token Obviously, if something goes wrong and the token is not deleted, this could be an issue - so I would be curious to see if there would be ways to merge these queries/mutations into transactions. A: You can apply changes to rows that you filter by certain criteria. Here is a sample mutation: mutation PasswordUpdate($id: uuid!, $token: String!, $new_password: String!) { update_user( where: {id: {_eq: $id}, token: {_eq: $token}} _set: {token: null, password: $new_password} ) { affected_rows } } That query deletes the token and sets a password for all users (hopefully just one) that have the token assigned.
Is there a way with Hasura to do a mutation based on the result of a query, within the same GraphQL call (Hasura transaction)?
I tried to search for an example but, I presume it's not doable. I am looking to hopefully be proven wrong or to find an official confirmation that it's not doable. Before using Hasura, I was doing transactional SQL queries that ensured that data was kept consistent. For example, I would like to create a password reset token if a user requests it, only if the user can be found using an email address. Right now, I have to do 2 queries: Try to find a user with the specified email address Insert and assign the token to this user id In that case, it's not too bad, but now if I want to consume that token, I have to do 3 queries: Find the valid token Change the password to the user associated with that token Delete the token Obviously, if something goes wrong and the token is not deleted, this could be an issue - so I would be curious to see if there would be ways to merge these queries/mutations into transactions.
[ "You can apply changes to rows that you filter by certain criteria. Here is a sample mutation:\nmutation PasswordUpdate($id: uuid!, $token: String!, $new_password: String!) {\n update_user(\n where: {id: {_eq: $id}, token: {_eq: $token}}\n _set: {token: null, password: $new_password}\n ) {\n affected_rows\n }\n}\n\nThat query deletes the token and sets a password for all users (hopefully just one) that have the token assigned.\n" ]
[ 0 ]
[]
[]
[ "hasura" ]
stackoverflow_0074649078_hasura.txt
Q: Avoiding data redundancy in SQL databases in a specific case I have seen some material, including on Stack Overflow, which suggests that some redundancy is to be expected in real world databases so I thought I would seek advice on this case. I have been learning about databases from a book in which one of the example databases seems to have some redundancy in the schema and there is some inconsistency in the data which presumably would not be possible if the redundancy was removed. I would like to know if I have identified this correctly and if my proposed solution would be sensible both from a theoretical and real world perspective. The two tables of interest in the database are Teams and Bowlers. These are shown below with a few rows selected from those in the actual database. The schema diagram for the database indicates that CaptainID in Teams is a foreign key referencing BowlerID in Bowlers. Teams: TeamID TeamName CaptainID 2 Sharks 5 9 Huckleberrys 7 Bowlers: BowlerID FirstName LastName TeamID 5 Ann Patterson 2 7 David Viescas 2 11 Angel Kennedy 3 It seems the data must be inconsistent as the bowler with BowlerID = 7 is in the team with TeamID = 2 but is Captain of the team with TeamID = 9. It is unlikely that would be an allowable situation (from my limited knowledge of Bowling). If I am right about that then I was wondering if a good theoretical and practical solution to this would be to remove the CaptainID column from the Teams table and create a new table called Captains which would just have two foreign keys as columns. These would be TeamID and BowlerID from the Teams and Bowlers tables respectively. One thing that struck me about the current database design is that it seems necessary to have a different name for the foreign key (CaptainID) than the key in Bowlers (BowlerID) that it references. That is because only some Bowlers are captains so if you used BowlerID instead of CaptainID as a name then it would be difficult to know what it actually meant. By implementing a new table (Captains) one can use the same name for the foreign keys inside it as are used in the tables to which they relate. I was wondering if that was a generally applicable fact in database design which is worth trying to adhere to? A: On the names of pk fields; i tend to use Id, an for an Fk playerId or captainPlayerId. There is no technical restriction, as long as they make sense. For the captainid in teams, this is not fully normalized as it is dependant on the key but not only on the key (boyce cod 3rd normal form violation but going from memory here) that said this seems like a common denormalization step. The integrity could be enforced using a trigger checking if the field is null or the corresponding player is in the same team
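A generic SQL sketch of the Captains table proposed in the question; the composite foreign key (backed by a UNIQUE constraint on Bowlers) is one way to guarantee the captain actually belongs to the team, which is the integrity rule the answer suggests enforcing with a trigger. Exact syntax varies by database, and the column types are placeholders.

CREATE TABLE Teams (
    TeamID   INT PRIMARY KEY,
    TeamName VARCHAR(50) NOT NULL
);

CREATE TABLE Bowlers (
    BowlerID  INT PRIMARY KEY,
    FirstName VARCHAR(50),
    LastName  VARCHAR(50),
    TeamID    INT NOT NULL REFERENCES Teams (TeamID),
    UNIQUE (BowlerID, TeamID)             -- lets Captains reference the pair below
);

CREATE TABLE Captains (
    TeamID   INT PRIMARY KEY,             -- at most one captain per team
    BowlerID INT NOT NULL,
    FOREIGN KEY (BowlerID, TeamID) REFERENCES Bowlers (BowlerID, TeamID)
);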
Avoiding data redundancy in SQL databases in a specific case
I have seen some material, including on Stack Overflow, which suggests that some redundancy is to be expected in real world databases so I thought I would seek advice on this case. I have been learning about databases from a book in which one of the example databases seems to have some redundancy in the schema and there is some inconsistency in the data which presumably would not be possible if the redundancy was removed. I would like to know if I have identified this correctly and if my proposed solution would be sensible both from a theoretical and real world perspective. The two tables of interest in the database are Teams and Bowlers. These are shown below with a few rows selected from those in the actual database. The schema diagram for the database indicates that CaptainID in Teams is a foreign key referencing BowlerID in Bowlers. Teams: TeamID TeamName CaptainID 2 Sharks 5 9 Huckleberrys 7 Bowlers: BowlerID FirstName LastName TeamID 5 Ann Patterson 2 7 David Viescas 2 11 Angel Kennedy 3 It seems the data must be inconsistent as the bowler with BowlerID = 7 is in the team with TeamID = 2 but is Captain of the team with TeamID = 9. It is unlikely that would be an allowable situation (from my limited knowledge of Bowling). If I am right about that then I was wondering if a good theoretical and practical solution to this would be to remove the CaptainID column from the Teams table and create a new table called Captains which would just have two foreign keys as columns. These would be TeamID and BowlerID from the Teams and Bowlers tables respectively. One thing that struck me about the current database design is that it seems necessary to have a different name for the foreign key (CaptainID) than the key in Bowlers (BowlerID) that it references. That is because only some Bowlers are captains so if you used BowlerID instead of CaptainID as a name then it would be difficult to know what it actually meant. By implementing a new table (Captains) one can use the same name for the foreign keys inside it as are used in the tables to which they relate. I was wondering if that was a generally applicable fact in database design which is worth trying to adhere to?
[ "On the names of pk fields; i tend to use Id, an for an Fk playerId or captainPlayerId. There is no technical restriction, as long as they make sense.\nFor the captainid in teams, this is not fully normalized as it is dependant on the key but not only on the key (boyce cod 3rd normal form violation but going from memory here) that said this seems like a common denormalization step.\nThe integrity could be enforced using a trigger checking if the field is null or the corresponding player is in the same team\n" ]
[ 1 ]
[]
[]
[ "database", "sql" ]
stackoverflow_0074658337_database_sql.txt
Q: How to resolve TypeError: Cannot convert undefined or null to object I've written a couple of functions that effectively replicate JSON.stringify(), converting a range of values into stringified versions. When I port my code over to JSBin and run it on some sample values, it functions just fine. But I'm getting this error in a spec runner designed to test this. My code: // five lines of comments var stringify = function(obj) { if (typeof obj === 'function') { return undefined;} // return undefined for function if (typeof obj === 'undefined') { return undefined;} // return undefined for undefined if (typeof obj === 'number') { return obj;} // number unchanged if (obj === 'null') { return null;} // null unchanged if (typeof obj === 'boolean') { return obj;} // boolean unchanged if (typeof obj === 'string') { return '\"' + obj + '\"';} // string gets escaped end-quotes if (Array.isArray(obj)) { return obj.map(function (e) { // uses map() to create new array with stringified elements return stringify(e); }); } else { var keys = Object.keys(obj); // convert object's keys into an array var container = keys.map(function (k) { // uses map() to create an array of key:(stringified)value pairs return k + ': ' + stringify(obj[k]); }); return '{' + container.join(', ') + '}'; // returns assembled object with curly brackets } }; var stringifyJSON = function(obj) { if (typeof stringify(obj) != 'undefined') { return "" + stringify(obj) + ""; } }; The error message I'm getting from the tester is: TypeError: Cannot convert undefined or null to object at Function.keys (native) at stringify (stringifyJSON.js:18:22) at stringifyJSON (stringifyJSON.js:27:13) at stringifyJSONSpec.js:7:20 at Array.forEach (native) at Context.<anonymous> (stringifyJSONSpec.js:5:26) at Test.Runnable.run (mocha.js:4039:32) at Runner.runTest (mocha.js:4404:10) at mocha.js:4450:12 at next (mocha.js:4330:14) It seems to fail with: stringifyJSON(null) for example A: Generic answer This error is caused when you call a function that expects an Object as its argument, but pass undefined or null instead, like for example Object.keys(null) Object.assign(window.UndefinedVariable, {}) As that is usually by mistake, the solution is to check your code and fix the null/undefined condition so that the function either gets a proper Object, or does not get called at all. Object.keys({'key': 'value'}) if (window.UndefinedVariable) { Object.assign(window.UndefinedVariable, {}) } Answer specific to the code in question The line if (obj === 'null') { return null;} // null unchanged will not evaluate when given null, only if given the string "null". So if you pass the actual null value to your script, it will be parsed in the Object part of the code. And Object.keys(null) throws the TypeError mentioned. To fix it, use if(obj === null) {return null} - without the qoutes around null. A: Make sure that destination object is not empty ( null or undefined ). You can initialize destination object with empty object like below: var destinationObj = {}; Object.assign(destinationObj, sourceObj); A: Make sure that object is not empty (null or undefined ). Error: let obj Object.keys(obj) Solution: Object.keys(obj || {}) A: Adding Object && works before putting the object on to map. objexts && Object.keys(objexts)?.map((objext, idx) => A: This is very useful to avoid errors when accessing properties of null or undefined objects. 
null to undefined object const obj = null; const newObj = obj || undefined; // newObj = undefined undefined to empty object const obj; const newObj = obj || {}; // newObj = {} // newObj.prop = undefined, but no error here null to empty object const obj = null; const newObj = obj || {}; // newObj = {} // newObj.prop = undefined, but no error here A: In my case, I added Lucid extension to Chrome and didn't notice the problem at that moment. After about a day of working on the problem and turning the program upside down, in a post someone had mentioned Lucid. I remembered what I had done and removed the extension from Chrome and ran the program again. The problem was gone. I am working with React. I thought this might help. A: I solved the same problem in a React Native project. I solved it using this. let data = snapshot.val(); if(data){ let items = Object.values(data); } else{ //return null } A: Replace if (typeof obj === 'undefined') { return undefined;} // return undefined for undefined if (obj === 'null') { return null;} // null unchanged with if (obj === undefined) { return undefined;} // return undefined for undefined if (obj === null) { return null;} // null unchanged A: If you're using Laravel, my problem was in the name of my Route. Instead: Route::put('/reason/update', 'REASONController@update'); I wrote: Route::put('/reason/update', 'RESONController@update'); and when I fixed the controller name, the code worked! A: In my case I had an extra pair of parenthesis () Instead of export default connect( someVariable )(otherVariable)() It had to be export default connect( someVariable )(otherVariable) A: Below snippet is sufficient to understand how I encountered the same issue but in a different scenario and how I solved it using the guidance in the accepted answer. In my case I was trying to log the keys of object present in the 0th index of the 'defaultViewData' array using Object.keys() method. defaultViewData = [{"name": "DEFAULT_VIEW_PLP","value": {"MSH25": "LIST"}}] console.log('DEFAULT_VIEW', Object.keys(this.props.defaultViewData[0])); The console.log was not getting printed and I was getting the same error as posted in this question. To prevent that error I added below condition if(this.props.defaultViewData[0]) { console.log('DEFAULT_VIEW', Object.keys(this.props.defaultViewData[0])); } Adding this check ensured that I didn't get this error. I hope this helps for someone. Note: This is React.js code. (although to understand the problem it doesn't matter). A: reactTraverser.js:6 Uncaught TypeError: Cannot convert undefined or null to object at Function.keys () at reactTraverser.js:6 If you are getting this error on typeScript Try using it without Live Server this error will not be displayed A: Easy fix Just convert whatever is passed to Object.keys to an Object using a one liner like: let variable = undefined; Object.keys(Object(variable)); // Outputs: []
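Applying the accepted fix to the function from the question, a minimal reworked version (the array and object branches are also tightened so the example runs end to end; it is not a complete JSON.stringify replacement):

var stringify = function (obj) {
  if (typeof obj === 'function' || typeof obj === 'undefined') { return undefined; }
  if (obj === null) { return 'null'; }              // compare with the value null, not the string 'null'
  if (typeof obj === 'number' || typeof obj === 'boolean') { return String(obj); }
  if (typeof obj === 'string') { return '"' + obj + '"'; }
  if (Array.isArray(obj)) {
    return '[' + obj.map(function (e) { return stringify(e); }).join(',') + ']';
  }
  // Only real objects reach Object.keys now, so the TypeError can no longer occur.
  return '{' + Object.keys(obj).map(function (k) {
    return '"' + k + '":' + stringify(obj[k]);
  }).join(',') + '}';
};

console.log(stringify(null));               // null
console.log(stringify({ a: [1, 'x'] }));    // {"a":[1,"x"]}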
How to resolve TypeError: Cannot convert undefined or null to object
I've written a couple of functions that effectively replicate JSON.stringify(), converting a range of values into stringified versions. When I port my code over to JSBin and run it on some sample values, it functions just fine. But I'm getting this error in a spec runner designed to test this. My code: // five lines of comments var stringify = function(obj) { if (typeof obj === 'function') { return undefined;} // return undefined for function if (typeof obj === 'undefined') { return undefined;} // return undefined for undefined if (typeof obj === 'number') { return obj;} // number unchanged if (obj === 'null') { return null;} // null unchanged if (typeof obj === 'boolean') { return obj;} // boolean unchanged if (typeof obj === 'string') { return '\"' + obj + '\"';} // string gets escaped end-quotes if (Array.isArray(obj)) { return obj.map(function (e) { // uses map() to create new array with stringified elements return stringify(e); }); } else { var keys = Object.keys(obj); // convert object's keys into an array var container = keys.map(function (k) { // uses map() to create an array of key:(stringified)value pairs return k + ': ' + stringify(obj[k]); }); return '{' + container.join(', ') + '}'; // returns assembled object with curly brackets } }; var stringifyJSON = function(obj) { if (typeof stringify(obj) != 'undefined') { return "" + stringify(obj) + ""; } }; The error message I'm getting from the tester is: TypeError: Cannot convert undefined or null to object at Function.keys (native) at stringify (stringifyJSON.js:18:22) at stringifyJSON (stringifyJSON.js:27:13) at stringifyJSONSpec.js:7:20 at Array.forEach (native) at Context.<anonymous> (stringifyJSONSpec.js:5:26) at Test.Runnable.run (mocha.js:4039:32) at Runner.runTest (mocha.js:4404:10) at mocha.js:4450:12 at next (mocha.js:4330:14) It seems to fail with: stringifyJSON(null) for example
[ "Generic answer\nThis error is caused when you call a function that expects an Object as its argument, but pass undefined or null instead, like for example\nObject.keys(null)\nObject.assign(window.UndefinedVariable, {})\n\nAs that is usually by mistake, the solution is to check your code and fix the null/undefined condition so that the function either gets a proper Object, or does not get called at all.\nObject.keys({'key': 'value'})\nif (window.UndefinedVariable) {\n Object.assign(window.UndefinedVariable, {})\n}\n\nAnswer specific to the code in question\nThe line if (obj === 'null') { return null;} // null unchanged will not \nevaluate when given null, only if given the string \"null\". So if you pass the actual null value to your script, it will be parsed in the Object part of the code. And Object.keys(null) throws the TypeError mentioned. To fix it, use if(obj === null) {return null} - without the qoutes around null.\n", "Make sure that destination object is not empty ( null or undefined ).\nYou can initialize destination object with empty object like below:\nvar destinationObj = {};\n\nObject.assign(destinationObj, sourceObj);\n\n", "Make sure that object is not empty (null or undefined ).\nError:\nlet obj\n\nObject.keys(obj)\n\nSolution:\nObject.keys(obj || {})\n\n", "Adding Object && works before putting the object on to map.\nobjexts && Object.keys(objexts)?.map((objext, idx) => \n\n", "This is very useful to avoid errors when accessing properties of null or undefined objects.\nnull to undefined object\nconst obj = null;\nconst newObj = obj || undefined;\n// newObj = undefined\n\nundefined to empty object\nconst obj; \nconst newObj = obj || {};\n// newObj = {} \n// newObj.prop = undefined, but no error here\n\nnull to empty object\nconst obj = null;\nconst newObj = obj || {};\n// newObj = {} \n// newObj.prop = undefined, but no error here\n\n", "In my case, I added Lucid extension to Chrome and didn't notice the problem at that moment. After about a day of working on the problem and turning the program upside down, in a post someone had mentioned Lucid. I remembered what I had done and removed the extension from Chrome and ran the program again. The problem was gone. I am working with React. I thought this might help.\n", "I solved the same problem in a React Native project. I solved it using this.\nlet data = snapshot.val();\nif(data){\n let items = Object.values(data);\n}\nelse{\n //return null\n}\n\n", "Replace\nif (typeof obj === 'undefined') { return undefined;} // return undefined for undefined\nif (obj === 'null') { return null;} // null unchanged\n\nwith\nif (obj === undefined) { return undefined;} // return undefined for undefined \nif (obj === null) { return null;} // null unchanged\n\n", "If you're using Laravel, my problem was in the name of my Route.\nInstead:\nRoute::put('/reason/update', 'REASONController@update');\n\nI wrote:\nRoute::put('/reason/update', 'RESONController@update');\n\nand when I fixed the controller name, the code worked!\n", "In my case I had an extra pair of parenthesis ()\nInstead of\nexport default connect(\n someVariable\n)(otherVariable)()\n\nIt had to be\nexport default connect(\n someVariable\n)(otherVariable)\n\n", "Below snippet is sufficient to understand how I encountered the same issue but in a different scenario and how I solved it using the guidance in the accepted answer. 
In my case I was trying to log the keys of object present in the 0th index of the 'defaultViewData' array using Object.keys() method.\ndefaultViewData = [{\"name\": \"DEFAULT_VIEW_PLP\",\"value\": {\"MSH25\": \"LIST\"}}] \n\nconsole.log('DEFAULT_VIEW', Object.keys(this.props.defaultViewData[0]));\n\nThe console.log was not getting printed and I was getting the same error as posted in this question. To prevent that error I added below condition\n if(this.props.defaultViewData[0]) {\n console.log('DEFAULT_VIEW', Object.keys(this.props.defaultViewData[0]));\n}\n\nAdding this check ensured that I didn't get this error. I hope this helps for someone.\nNote: This is React.js code. (although to understand the problem it doesn't matter).\n", "reactTraverser.js:6 Uncaught TypeError: Cannot convert undefined or null to object at Function.keys () at reactTraverser.js:6\nIf you are getting this error on typeScript Try using it without Live Server this error will not be displayed\n", "Easy fix\nJust convert whatever is passed to Object.keys to an Object using a one liner like:\nlet variable = undefined;\n\nObject.keys(Object(variable)); // Outputs: []\n\n" ]
[ 194, 6, 6, 5, 4, 1, 0, 0, 0, 0, 0, 0, 0 ]
[ "I have the same problem with a element in a webform. So what I did to fix it was validate.\nif(Object === 'null')\n do something\n" ]
[ -1 ]
[ "javascript", "null", "undefined" ]
stackoverflow_0029721205_javascript_null_undefined.txt
Q: how can i trigger OnChangeFile input without Button Click in Angular? I am trying to upload files to AWS but I don't want to press the "choose file" button. it should automatically trigger is there any way to achieve this in my .html file <div class="content"> <input (change)="onChangeFile($event)" type="file" /> </div> in my .ts file async onChangeFile(event: any) { console.log(event.target.files[0]); this.fileSelected = event.target.files[0]; console.log(environment); console.log('Uploaded'); await this.S3CustomClient.uploadFile( .uploadFile(this.fileSelected, this.fileSelected.type, undefined, this.fileSelected.name, "private") .then((data: UploadResponse) => console.log(data)) .catch((err: any) => console.error(err)) A: For security reasons a file chooser dialog can only be shown with a user activation - so for example click event. What you want is not possible.
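If the aim is only to avoid showing the default "choose file" control, a common workaround that stays within the user-activation rule mentioned in the answer is to hide the input and forward a click from some other user-initiated element. A hypothetical template sketch based on the markup in the question:

<div class="content">
  <input #fileInput type="file" hidden (change)="onChangeFile($event)" />
  <!-- The click below still counts as user activation, so the dialog is allowed to open. -->
  <button type="button" (click)="fileInput.click()">Choose a file to upload</button>
</div>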
how can i trigger OnChangeFile input without Button Click in Angular?
I am trying to upload files to AWS but I don't want to press the "choose file" button. it should automatically trigger is there any way to achieve this in my .html file <div class="content"> <input (change)="onChangeFile($event)" type="file" /> </div> in my .ts file async onChangeFile(event: any) { console.log(event.target.files[0]); this.fileSelected = event.target.files[0]; console.log(environment); console.log('Uploaded'); await this.S3CustomClient.uploadFile( .uploadFile(this.fileSelected, this.fileSelected.type, undefined, this.fileSelected.name, "private") .then((data: UploadResponse) => console.log(data)) .catch((err: any) => console.error(err))
[ "For security reasons a file chooser dialog can only be shown with a user activation - so for example click event. What you want is not possible.\n" ]
[ 0 ]
[]
[]
[ "angular", "typescript" ]
stackoverflow_0074655403_angular_typescript.txt
Q: How to change text color of a Sublime 3 predefined color-scheme/theme? Specifically, I'm using the Boxy theme for Sublime text 3. I'd like to change the color of the main text and comments and I can't find a default to change to do that - just "font_face" and "font_size". // my code in USER SETTINGS { "color_scheme": "Packages/Boxy Theme/schemes/Boxy Yesterday.tmTheme", "ignored_packages": [ "Vintage" ], "theme": "Boxy Yesterday.sublime-theme", "theme_sidebar_font_md": true, "theme_sidebar_size_xs": true } A: Install PackageResourceViewer with Package Installer go ctrl+shift+p and select PackageResourceViewer: Open Resource open the color scheme you're using from that theme, and change whatever you want. A: Preferences > Customize Color Scheme
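For the second answer, Preferences > Customize Color Scheme opens a user override file where individual colours can be changed without editing the packaged scheme. A sketch of what such an override can look like; the hex values are placeholders, "foreground" under "globals" controls the main text colour, and the "comment" scope controls comment colouring:

{
    "globals":
    {
        "foreground": "#D8DEE9"
    },
    "rules":
    [
        { "scope": "comment", "foreground": "#7F848E" }
    ]
}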
How to change text color of a Sublime 3 predefined color-scheme/theme?
Specifically, I'm using the Boxy theme for Sublime text 3. I'd like to change the color of the main text and comments and I can't find a default to change to do that - just "font_face" and "font_size". // my code in USER SETTINGS { "color_scheme": "Packages/Boxy Theme/schemes/Boxy Yesterday.tmTheme", "ignored_packages": [ "Vintage" ], "theme": "Boxy Yesterday.sublime-theme", "theme_sidebar_font_md": true, "theme_sidebar_size_xs": true }
[ "\nInstall PackageResourseViewer with Package Installer\ngo ctrl+shift+p and select PackageResourseViewer: Open Resource\nopen the color scheme your using from that theme, and change whatever you want.\n\n", "Preferences > Customize Color Scheme\n" ]
[ 3, 0 ]
[]
[]
[ "colors", "css", "json", "sublimetext", "themes" ]
stackoverflow_0039857290_colors_css_json_sublimetext_themes.txt
Q: Angular Local Storage Delete Not Removing item Hello I am learning angular and I'm working on a todo app that stores to local storage. Getting the local storage to remove a single item as well as editing an item has been a challenge and I've not found many good resources. Right now it looks like its removing the entire array and I'm not sure why. Hopefully Ive included the proper details. This is is my delete/remove in the CRUD task deleteTask(task : Task) { localStorage.removeItem(this.taskKey) } This is the delete task in the componant deleteTask(idx: number) { this.taskService.deleteTask(new Task(this.addTaskValue)) if( idx >= 0) { this.taskArr.splice(idx, 1); } } Other Details from the componant taskArr: Task[]; public addTaskValue: string = ''; constructor(private taskService: TaskService) {} ngOnInit(): void { this.addTaskValue = ''; this.taskArr = this.taskService.getAllTasks(); } Local Storage Image I've tried to just use the splice out in the deleteTask but that only removed it from the screen and not the local storage. Ive tried a bunch of other things as well but cant recall them all in detail. A: In the above code, you provided. I don't see a logic where the taskKey is even being set. I would add some console statements to see if you're even getting the right string in this.taskKey. deleteTask(task : Task) { console.log(this.taskKey); // check if this value is "tasks" localStorage.removeItem(this.taskKey) } Essentially it needs to boil down to this. localstorage.removeItem('tasks').
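One detail worth spelling out: localStorage.removeItem deletes the whole entry under that key, so removing a single task normally means reading the stored array, splicing it, and writing it back with setItem. A sketch of what the service method could look like, assuming the key is 'tasks' and the tasks are stored as a JSON array:

deleteTask(index: number): void {
  const raw = localStorage.getItem('tasks');
  const tasks: Task[] = raw ? JSON.parse(raw) : [];
  if (index >= 0 && index < tasks.length) {
    tasks.splice(index, 1);                                   // drop just the one task
    localStorage.setItem('tasks', JSON.stringify(tasks));     // persist the updated array
  }
}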
Angular Local Storage Delete Not Removing item
Hello I am learning angular and I'm working on a todo app that stores to local storage. Getting the local storage to remove a single item as well as editing an item has been a challenge and I've not found many good resources. Right now it looks like its removing the entire array and I'm not sure why. Hopefully Ive included the proper details. This is is my delete/remove in the CRUD task deleteTask(task : Task) { localStorage.removeItem(this.taskKey) } This is the delete task in the componant deleteTask(idx: number) { this.taskService.deleteTask(new Task(this.addTaskValue)) if( idx >= 0) { this.taskArr.splice(idx, 1); } } Other Details from the componant taskArr: Task[]; public addTaskValue: string = ''; constructor(private taskService: TaskService) {} ngOnInit(): void { this.addTaskValue = ''; this.taskArr = this.taskService.getAllTasks(); } Local Storage Image I've tried to just use the splice out in the deleteTask but that only removed it from the screen and not the local storage. Ive tried a bunch of other things as well but cant recall them all in detail.
[ "In the above code, you provided. I don't see a logic where the taskKey is even being set. I would add some console statements to see if you're even getting the right string in this.taskKey.\ndeleteTask(task : Task) {\n console.log(this.taskKey); // check if this value is \"tasks\"\n localStorage.removeItem(this.taskKey)\n}\n\n\nEssentially it needs to boil down to this.\nlocalstorage.removeItem('tasks').\n\n\n" ]
[ 0 ]
[]
[]
[ "angular", "crud", "java", "local_storage", "typescript" ]
stackoverflow_0074658551_angular_crud_java_local_storage_typescript.txt
Q: I have this problem while building the project in Unreal Engine. Can anyone help me with that An error occurred while trying to generate project files. Some Platforms were skipped due to invalid SDK setup: Mac, IOS, Android, Lumin. See the log file for detailed information (/Users/sidd/Library/Application Support/Epic/UnrealBuildTool/Log_GPF.txt) Discovering modules, targets and source code for project... WARNING: Failed to query Xcode version Triggered an exception while looking for SDK directory in Xcode.app System.IO.DirectoryNotFoundException: Could not find a part of the path '/Library/Developer/CommandLineTools/Platforms/MacOSX.platform/Developer/SDKs'. at System.IO.Enumeration.FileSystemEnumerator1.CreateDirectoryHandle(String path, Boolean ignoreNotFound) at System.IO.Enumeration.FileSystemEnumerator1.Init() at System.IO.Enumeration.FileSystemEnumerator1..ctor(String directory, Boolean isNormalized, EnumerationOptions options) at System.IO.Enumeration.FileSystemEnumerable1..ctor(String directory, FindTransform transform, EnumerationOptions options, Boolean isNormalized) at System.IO.Enumeration.FileSystemEnumerableFactory.UserDirectories(String directory, String expression, EnumerationOptions options) at System.IO.Directory.InternalEnumeratePaths(String path, String searchPattern, SearchTarget searchTarget, EnumerationOptions options) at System.IO.Directory.GetDirectories(String path) at UnrealBuildTool.AppleToolChainSettings.SelectSDK(String BaseSDKDir, String OSPrefix, String& PlatformSDKVersion, Boolean bVerbose) in /Users/build/Build/++UE5/Sync/Engine/Source/Programs/UnrealBuildTool/ToolChain/AppleToolChain.cs:line 87 ERROR: Invalid SDK MacOSX.sdk, not found in /Library/Developer/CommandLineTools/Platforms/MacOSX.platform/Developer/SDKs A: I was having the same problem, this post saved my life. You will need to select a valid 'Command Line Tools' in Xcode Preferences, under Locations tab. A: macOS 13.0.1 + Xcode 14.1 + UE5.1 source code Error: Invalid SDK MacOSX.sdk, not found in /Library/Developer/CommandLineTools/Platforms/MacOSX.platform/Developer/SDKs Fix: sudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms /Library/Developer/CommandLineTools/Platforms
I have this problem while building the project in Unreal Engine. Can anyone help me with that
An error occurred while trying to generate project files. Some Platforms were skipped due to invalid SDK setup: Mac, IOS, Android, Lumin. See the log file for detailed information (/Users/sidd/Library/Application Support/Epic/UnrealBuildTool/Log_GPF.txt) Discovering modules, targets and source code for project... WARNING: Failed to query Xcode version Triggered an exception while looking for SDK directory in Xcode.app System.IO.DirectoryNotFoundException: Could not find a part of the path '/Library/Developer/CommandLineTools/Platforms/MacOSX.platform/Developer/SDKs'. at System.IO.Enumeration.FileSystemEnumerator1.CreateDirectoryHandle(String path, Boolean ignoreNotFound) at System.IO.Enumeration.FileSystemEnumerator1.Init() at System.IO.Enumeration.FileSystemEnumerator1..ctor(String directory, Boolean isNormalized, EnumerationOptions options) at System.IO.Enumeration.FileSystemEnumerable1..ctor(String directory, FindTransform transform, EnumerationOptions options, Boolean isNormalized) at System.IO.Enumeration.FileSystemEnumerableFactory.UserDirectories(String directory, String expression, EnumerationOptions options) at System.IO.Directory.InternalEnumeratePaths(String path, String searchPattern, SearchTarget searchTarget, EnumerationOptions options) at System.IO.Directory.GetDirectories(String path) at UnrealBuildTool.AppleToolChainSettings.SelectSDK(String BaseSDKDir, String OSPrefix, String& PlatformSDKVersion, Boolean bVerbose) in /Users/build/Build/++UE5/Sync/Engine/Source/Programs/UnrealBuildTool/ToolChain/AppleToolChain.cs:line 87 ERROR: Invalid SDK MacOSX.sdk, not found in /Library/Developer/CommandLineTools/Platforms/MacOSX.platform/Developer/SDKs
[ "I was having the same problem, this post saved my life.\nYou will need to select a valid 'Command Line Tools' in Xcode Preferences, under Locations tab.\n\n", "macOS 13.0.1 + Xcode 14.1 + UE5.1 source code\nError:\nInvalid SDK MacOSX.sdk, not found in /Library/Developer/CommandLineTools/Platforms/MacOSX.platform/Developer/SDKs\n\nFix:\nsudo ln -s /Applications/Xcode.app/Contents/Developer/Platforms /Library/Developer/CommandLineTools/Platforms\n\n" ]
[ 2, 0 ]
[]
[]
[ "android_sdk_tools", "ios", "sdk", "unreal_development_kit", "unreal_engine4" ]
stackoverflow_0069678366_android_sdk_tools_ios_sdk_unreal_development_kit_unreal_engine4.txt
Q: How to easier to split csv data by substring using python Finally I want to split clearly like this photo *NOT replace, I want to SPLIT and not just using "," to split MUST according to substring to split it I have a csv like: date, time, ID1, ID2, ID3, "Action=xxx, ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, AddBy=xxxxxx, TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, Status=xxx, ExtOrderNo=xxx, Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx",x,x,ID4 How can I easier to split to different column by the string before "="? And if there is no relevant words in the row, the row is empty Or add "word=," or simply add "," at that position Final Result LIKE: date, time, ID1, ID2, ID3, "Action=xxx, **RetCode=,** ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, **ExtOrderNo=,** **TradeNo=,** **Ref=,** AddBy=xxxxxx, **Gateway=,** TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, **TradeNo=,** Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx **ChannelId=,**",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, **Validity=,** Status=xxx, ExtOrderNo=xxx, **TradeNo=,** Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx **ClOrderId=,** **ChannelId=,**",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, **RetCode=,** ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, **TradedQty=,** **Validity=,** **Status=,** ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx **TimeStamp=,** **ClOrderId=,** **ChannelId=,**",x,x,ID4 p.s. above just some example of the csv, maybe have other words=xxx, how can I easier to split it I want clearly in csv or excel show which data exists and which data does not A: Not sure I understand 100%, but let me try to help. The focus points are: # import the pandas library and alias as pd import pandas as pd # read a csv with the example data df = pd.read_csv("data.csv", sep=",", quoting=False, header = None) # replace any values that match the pattern "something=value" with "value" df.replace(to_replace=r"^(.*)=", value="", regex=True, inplace=True) # save to a new csv file: df.to_csv("new_data.csv", sep=",", header = None, index = False)
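Since the stated goal is to split the quoted field into one column per key (with empty cells where a key is absent) rather than to strip the keys, one hedged sketch: parse each "key=value" string into a dict and let pandas align the dicts into columns, which leaves NaN wherever a key is missing. The file name and the position of the quoted field (column 5 in the sample rows) are assumptions.

import pandas as pd

# skipinitialspace lets the quoted field be recognised even when a space follows the comma
df = pd.read_csv("data.csv", header=None, skipinitialspace=True)

def to_dict(field):
    # 'Action=xxx, ProdCode=yyy, ...' -> {'Action': 'xxx', 'ProdCode': 'yyy', ...}
    out = {}
    for part in str(field).split(","):
        if "=" in part:
            key, value = part.split("=", 1)
            out[key.strip()] = value.strip()
    return out

expanded = pd.DataFrame(df[5].map(to_dict).tolist())   # one column per key, NaN where missing
result = pd.concat([df.drop(columns=5), expanded], axis=1)
result.to_csv("split_data.csv", index=False)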
How to easier to split csv data by substring using python
Finally I want to split clearly like this photo *NOT replace, I want to SPLIT and not just using "," to split MUST according to substring to split it I have a csv like: date, time, ID1, ID2, ID3, "Action=xxx, ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, AddBy=xxxxxx, TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, Status=xxx, ExtOrderNo=xxx, Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx",x,x,ID4 How can I easier to split to different column by the string before "="? And if there is no relevant words in the row, the row is empty Or add "word=," or simply add "," at that position Final Result LIKE: date, time, ID1, ID2, ID3, "Action=xxx, **RetCode=,** ProdCode=XXXX, Cmd=xxx, Price=xxxxx, Qty=xxx, TradedQty=xxx, Validity=xxx, Status=xxx, **ExtOrderNo=,** **TradeNo=,** **Ref=,** AddBy=xxxxxx, **Gateway=,** TimeStamp=xxx, ClOrderId=xxx, ChannelId=xxx",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xxx, ProdCode=xxxx, Cmd=xxx, Price=xxxx, Qty=xxx, TradedQty=0, Validity=xxx, Status=xxx, ExtOrderNo=xxxxx, **TradeNo=,** Ref=0, AddBy=xxxxx, Gateway=xxxxx, TimeStamp=xxx, ClOrderId=xxx **ChannelId=,**",x,x,ID4 date, time, ID1, ID2, ID3, "Action=xxx, RetCode=xx, ProdCode=xxx, Cmd=xx, Price=xxx, Qty=x, TradedQty=x, **Validity=,** Status=xxx, ExtOrderNo=xxx, **TradeNo=,** Ref=xxx, AddBy=xx, Gateway=xxx, TimeStamp=xxx **ClOrderId=,** **ChannelId=,**",x,x,ID4 date,time,ID1,ID2,ID3,"Action=xxx, **RetCode=,** ProdCode=xxx, Cmd=xxx, Price=xxx, Qty=x, **TradedQty=,** **Validity=,** **Status=,** ExtOrderNo=xxx, TradeNo=xxx, Ref=@xxx, AddBy=xxx, Gateway=xxx **TimeStamp=,** **ClOrderId=,** **ChannelId=,**",x,x,ID4 p.s. above just some example of the csv, maybe have other words=xxx, how can I easier to split it I want clearly in csv or excel show which data exists and which data does not
[ "Not sure I understand 100%, but let me try to help.\nThe focus points are:\n# import the pandas library and alias as pd\nimport pandas as pd\n\n# read a csv with the example data\ndf = pd.read_csv(\"data.csv\", sep=\",\", quoting=False, header = None)\n\n# replace any values that match the pattern \"something=value\" with \"value\"\ndf.replace(to_replace=r\"^(.*)=\", value=\"\", regex=True, inplace=True)\n# save to a new csv file:\ndf.to_csv(\"new_data.csv\", sep=\",\", header = None, index = False)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "pandas", "python" ]
stackoverflow_0074658256_csv_pandas_python.txt
Q: (Terraform, Cloud Run) Error: Forbidden Your client does not have permission to get URL / from this server I'm trying to run a docker image on Cloud Run with the Terraform code below: provider "google" { credentials = file("myCredentials.json") project = "myproject-214771" region = "asia-northeast1" } resource "google_cloud_run_service" "default" { name = "hello-world" location = "asia-northeast1" template { spec { containers { image = "gcr.io/myproject-214771/hello-world:latest" } } } traffic { percent = 100 latest_revision = true } } Then, it was successful to run the docker image: But when I access the URL, it shows this: Error: Forbidden Your client does not have permission to get URL / from this server Are there any mistakes in my Terraform code? A: Add(Copy & paste) this code below to your Terraform code to allow unauthenticated invocations for public API or website: data "google_iam_policy" "noauth" { binding { role = "roles/run.invoker" members = [ "allUsers", ] } } resource "google_cloud_run_service_iam_policy" "noauth" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name policy_data = data.google_iam_policy.noauth.policy_data } So this is the full code: provider "google" { credentials = file("myCredentials.json") project = "myproject-214771" region = "asia-northeast1" } resource "google_cloud_run_service" "default" { name = "hello-world" location = "asia-northeast1" template { spec { containers { image = "gcr.io/myproject-214771/hello-world:latest" } } } traffic { percent = 100 latest_revision = true } } data "google_iam_policy" "noauth" { binding { role = "roles/run.invoker" members = [ "allUsers", ] } } resource "google_cloud_run_service_iam_policy" "noauth" { location = google_cloud_run_service.default.location project = google_cloud_run_service.default.project service = google_cloud_run_service.default.name policy_data = data.google_iam_policy.noauth.policy_data } Finally, your URL shows your website properly: Moreover, now "Authentication" is "Allow unauthenticated": Don't forget to add the role "Cloud Run Admin" to your service account: Otherwise, you cannot allow unauthenticated invocations for public API or website then you will get this error below: Error setting IAM policy for cloudrun service "v1/projects/myproject-214771/locations/asia-northeast1/services/hello-world": googleapi: Error 403: Permission 'run.services.setIamPolicy' denied on resource 'projects/myproject-214771/locations/asia-northeast1/services/hello-world' (or resource may not exist). Moreover, with these roles below, you cannot allow unauthenticated invocations for public API or website: Only the role "Cloud Run Admin" can allow unauthenticated invocations for public API or website. A: most likely you need to give the service account "Cloud Run Admin" access, it needs run.services.setIamPolicy permission to change the settings on the new cloud run
(Terraform, Cloud Run) Error: Forbidden Your client does not have permission to get URL / from this server
I'm trying to run a docker image on Cloud Run with the Terraform code below: provider "google" { credentials = file("myCredentials.json") project = "myproject-214771" region = "asia-northeast1" } resource "google_cloud_run_service" "default" { name = "hello-world" location = "asia-northeast1" template { spec { containers { image = "gcr.io/myproject-214771/hello-world:latest" } } } traffic { percent = 100 latest_revision = true } } Then, it was successful to run the docker image: But when I access the URL, it shows this: Error: Forbidden Your client does not have permission to get URL / from this server Are there any mistakes in my Terraform code?
[ "Add(Copy & paste) this code below to your Terraform code to allow unauthenticated invocations for public API or website:\ndata \"google_iam_policy\" \"noauth\" {\n binding {\n role = \"roles/run.invoker\"\n members = [\n \"allUsers\",\n ]\n }\n}\n\nresource \"google_cloud_run_service_iam_policy\" \"noauth\" {\n location = google_cloud_run_service.default.location\n project = google_cloud_run_service.default.project\n service = google_cloud_run_service.default.name\n\n policy_data = data.google_iam_policy.noauth.policy_data\n}\n\nSo this is the full code:\nprovider \"google\" {\n credentials = file(\"myCredentials.json\")\n project = \"myproject-214771\"\n region = \"asia-northeast1\"\n}\n\nresource \"google_cloud_run_service\" \"default\" {\n name = \"hello-world\"\n location = \"asia-northeast1\"\n\n template {\n spec {\n containers {\n image = \"gcr.io/myproject-214771/hello-world:latest\"\n }\n }\n }\n\n traffic {\n percent = 100\n latest_revision = true\n }\n}\n\ndata \"google_iam_policy\" \"noauth\" {\n binding {\n role = \"roles/run.invoker\"\n members = [\n \"allUsers\",\n ]\n }\n}\n\nresource \"google_cloud_run_service_iam_policy\" \"noauth\" {\n location = google_cloud_run_service.default.location\n project = google_cloud_run_service.default.project\n service = google_cloud_run_service.default.name\n\n policy_data = data.google_iam_policy.noauth.policy_data\n}\n\nFinally, your URL shows your website properly:\n\nMoreover, now \"Authentication\" is \"Allow unauthenticated\":\n\nDon't forget to add the role \"Cloud Run Admin\" to your service account:\n\nOtherwise, you cannot allow unauthenticated invocations for public API or website then you will get this error below:\n\nError setting IAM policy for cloudrun service\n\"v1/projects/myproject-214771/locations/asia-northeast1/services/hello-world\":\ngoogleapi: Error 403: Permission 'run.services.setIamPolicy' denied on\nresource\n'projects/myproject-214771/locations/asia-northeast1/services/hello-world'\n(or resource may not exist).\n\nMoreover, with these roles below, you cannot allow unauthenticated invocations for public API or website:\n\nOnly the role \"Cloud Run Admin\" can allow unauthenticated invocations for public API or website.\n\n", "most likely you need to give the service account \"Cloud Run Admin\" access, it needs run.services.setIamPolicy permission to change the settings on the new cloud run\n" ]
[ 4, 0 ]
[]
[]
[ "devops", "google_cloud_platform", "google_cloud_run", "terraform", "terraform_provider_gcp" ]
stackoverflow_0070797574_devops_google_cloud_platform_google_cloud_run_terraform_terraform_provider_gcp.txt
Q: How do I Dynamically create a Test Suite in JUnit 4? I would like to create a junit test suite using JUnit 4 where the names of the test classes to be included are not known until the test suite is run. In JUnit 3 I could do this: public final class MasterTester extends TestCase { /** * Used by junit to specify what TestCases to run. * * @return a suite containing what TestCases to run */ public static TestSuite suite() { TestSuite suite = new TestSuite(); for(Class<?> klass : gatherTestClasses()) { suite.addTestSuite(klass); } return suite; } } and let the gatherTestClasses() method deal with figuring out what test classes to run. In JUnit 4, the documentation says to use an annotation: @SuiteClasses({TestClass1.class, TestClass2.class...}) to build up my test suite. There are numerous SO answers showing how to do this. Unfortunately the examples I see do not seem to allow for passing a dynamically generated list of TestClasses. This SO answer suggested I would have to subclass BlockJUnit4ClassRunner which I do not want to do. Dynamically specified test suites seem like something that must be in JUnit 4 somewhere. Does anyone know where? A: To create a dynamic test suite, you need to use the @RunWith annotation. There are two common ways to use it: @RunWith(Suite.class) This allows you to specify, which classes compose the test suite in question. This is equivalent to the JUnit 3 style: import junit.framework.TestSuite; import junit.framework.TestCase; public final class MasterTester extends TestCase { public static TestSuite suite() { TestSuite suite = new TestSuite(); suite.addTestSuite(TestClass1.class); suite.addTestSuite(TestClass2.class); // etc... return suite; } } The equivalent JUnit 4 class will be: import org.junit.runners.Suite; @RunWith(Suite.class) @SuiteClasses({TestClass1.class, TestClass2.class}) public final class MasterTester { } @RunWith(AllTests.class) This allows you to dynamically specify the tests, which compose the test suite. If your tests are not known until runtime, you cannot specify them in the annotations. You can use this construction instead. So, if the JUnit 3 code is: import junit.framework.TestCase; import junit.framework.TestSuite; import junit.framework.Test; public final class MasterTester extends TestCase { public static TestSuite suite() { TestSuite suite = new TestSuite(); for (Test test : findAllTestCasesRuntime()) { suite.addTest(test); } return suite; } } The equivalent JUnit 4 code will be: import org.junit.runners.AllTests; import junit.framework.TestSuite; import junit.framework.Test; @RunWith(AllTests.class) public final class MasterTester { public static TestSuite suite() { TestSuite suite = new TestSuite(); for (Test test : findAllTestCasesRuntime()) { suite.addTest(test); } return suite; } } A: I've tried this using JUnit 4.8 and it works: @RunWith(AllTests.class) public class SomeTests { public static TestSuite suite() { TestSuite suite = new TestSuite(); suite.addTest(new JUnit4TestAdapter(Test1.class)); suite.addTest(new JUnit4TestAdapter(Test2.class)); return suite; } } A: I found Classpath suite quite useful when used with a naming convention on my test classes. 
https://github.com/takari/takari-cpsuite Here is an example: import org.junit.extensions.cpsuite.ClasspathSuite; import org.junit.runner.RunWith; @RunWith(ClasspathSuite.class) @ClassnameFilters({".*UnitTest"}) public class MySuite { } A: I'm not sure what gatherTestClasses() does, but let's say it returns some tests when the OS is Linux and different tests when the OS is Windows. You can replicate that in JUnit 4.4 with assumptions: @Test public void onlyOnLinux() { assumeThat(getOS(), is(OperatingSystem.LINUX)); // rest of test } @Test public void onlyOnWindows() { assumeThat(getOS(), is(OperatingSystem.WINDOWS)); // rest of test } @Test public void anyOperatingSystem() { // just don't call assumeThat(..) } The implementation of getOS() and OperatingSystem being your custom code. A: Here is a Complete example how to implement that. it combines of two testCase classes and one suite. ExampleInstrumentedTest: import android.support.test.rule.ActivityTestRule; import org.junit.Rule; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; @RunWith(JUnit4.class) public class ExampleInstrumentedTest { @Rule public ActivityTestRule<MainActivity> mActivityTestRule = new ActivityTestRule<>(MainActivity.class); @Test public void checkInputs() throws Exception { } } ExampleInstrumentedTest2: import android.support.test.rule.ActivityTestRule; import org.junit.Rule; import org.junit.Test; import org.junit.runner.RunWith; import org.junit.runners.JUnit4; @RunWith(JUnit4.class) public class ExampleInstrumentedTest2 { @Rule public ActivityTestRule<MainActivity> mActivityTestRule = new ActivityTestRule<>(MainActivity.class); @Test public void checkInputs() throws Exception { } } ExampleInstrumentedSuite: import junit.framework.TestSuite; import org.junit.runner.RunWith; import org.junit.runners.AllTests; @RunWith(AllTests.class) public class ExampleInstrumentedSuite { public static TestSuite suite() { TestSuite suite = new TestSuite(); suite.addTest(new junit.framework.JUnit4TestAdapter(ExampleInstrumentedTest.class)); suite.addTest(new junit.framework.JUnit4TestAdapter(ExampleInstrumentedTest2.class)); return suite; } } Note that you should use @RunWith(JUnit4.class) instead of default @RunWith(AndroidJUnit4.class) in testCase Class A: public class MyTestCase extends TestCase { @Override public void runTest() { // define assertion here <=== assertEquals("yes", "yes"); } } @RunWith(AllTests.class) public class DynamicTestSuite { public static TestSuite suite() { TestSuite suite = new TestSuite(); // dynamically create your test case here <==== suite.addTest(new MyTestCase()); return suite; } } A: Expanding on @kissLife's answer, here's a something you can paste and run that creates multiple tests on the fly: import junit.framework.TestCase; import junit.framework.TestSuite; public final class TestJunit4DynamicConstruction { public static TestSuite suite() { TestSuite suite = new TestSuite(); suite.addTest(new CompareInts(1, 1)); suite.addTest(new CompareInts(2, 2)); suite.addTest(new CompareInts(2, 1)); // huh, for some reason, 2 != 1 suite.addTest(new CompareInts(1, 1)); return suite; } static public class CompareInts extends TestCase { private final int got; private final int expected; CompareInts(int got, int expected) { super(Integer.toString(got) + ":" + Integer.toString(expected)); this.got = got; this.expected = expected; } @Override public void runTest() { assertEquals(got, expected); } } } You'll run these tests: TestJunit4DynamicConstruction$CompareInts.1:1 
TestJunit4DynamicConstruction$CompareInts.2:2 TestJunit4DynamicConstruction$CompareInts.2:1 TestJunit4DynamicConstruction$CompareInts.1:1 TestJunit4DynamicConstruction$CompareInts and get this error: junit.framework.AssertionFailedError: Expected :2 Actual :1 ... TestJunit4DynamicConstruction$CompareInts.runTest(TestJunit4DynamicConstruction.java:26) ... Process finished with exit code 255
How do I Dynamically create a Test Suite in JUnit 4?
I would like to create a junit test suite using JUnit 4 where the names of the test classes to be included are not known until the test suite is run. In JUnit 3 I could do this: public final class MasterTester extends TestCase { /** * Used by junit to specify what TestCases to run. * * @return a suite containing what TestCases to run */ public static TestSuite suite() { TestSuite suite = new TestSuite(); for(Class<?> klass : gatherTestClasses()) { suite.addTestSuite(klass); } return suite; } } and let the gatherTestClasses() method deal with figuring out what test classes to run. In JUnit 4, the documentation says to use an annotation: @SuiteClasses({TestClass1.class, TestClass2.class...}) to build up my test suite. There are numerous SO answers showing how to do this. Unfortunately the examples I see do not seem to allow for passing a dynamically generated list of TestClasses. This SO answer suggested I would have to subclass BlockJUnit4ClassRunner which I do not want to do. Dynamically specified test suites seem like something that must be in JUnit 4 somewhere. Does anyone know where?
[ "To create a dynamic test suite, you need to use the @RunWith annotation. There are two common ways to use it:\n@RunWith(Suite.class)\nThis allows you to specify, which classes compose the test suite in question. This is equivalent to the JUnit 3 style:\nimport junit.framework.TestSuite;\nimport junit.framework.TestCase;\n\npublic final class MasterTester extends TestCase {\n\n public static TestSuite suite() {\n TestSuite suite = new TestSuite();\n suite.addTestSuite(TestClass1.class); \n suite.addTestSuite(TestClass2.class);\n // etc...\n return suite;\n }\n}\n\nThe equivalent JUnit 4 class will be:\nimport org.junit.runners.Suite;\n\n@RunWith(Suite.class)\n@SuiteClasses({TestClass1.class, TestClass2.class})\npublic final class MasterTester {\n\n}\n\n@RunWith(AllTests.class)\nThis allows you to dynamically specify the tests, which compose the test suite. If your tests are not known until runtime, you cannot specify them in the annotations. You can use this construction instead. So, if the JUnit 3 code is:\nimport junit.framework.TestCase;\nimport junit.framework.TestSuite;\nimport junit.framework.Test;\n\npublic final class MasterTester extends TestCase {\n\n public static TestSuite suite() {\n TestSuite suite = new TestSuite();\n for (Test test : findAllTestCasesRuntime()) {\n suite.addTest(test);\n }\n return suite;\n }\n}\n\nThe equivalent JUnit 4 code will be:\nimport org.junit.runners.AllTests;\nimport junit.framework.TestSuite;\nimport junit.framework.Test;\n\n@RunWith(AllTests.class)\npublic final class MasterTester {\n\n public static TestSuite suite() {\n TestSuite suite = new TestSuite();\n for (Test test : findAllTestCasesRuntime()) {\n suite.addTest(test);\n }\n return suite;\n }\n}\n\n", "I've tried this using JUnit 4.8 and it works:\n@RunWith(AllTests.class)\npublic class SomeTests\n{\n public static TestSuite suite()\n {\n TestSuite suite = new TestSuite();\n\n suite.addTest(new JUnit4TestAdapter(Test1.class));\n suite.addTest(new JUnit4TestAdapter(Test2.class));\n\n return suite;\n }\n}\n\n", "I found Classpath suite quite useful when used with a naming convention on my test classes.\nhttps://github.com/takari/takari-cpsuite\nHere is an example:\nimport org.junit.extensions.cpsuite.ClasspathSuite;\nimport org.junit.runner.RunWith;\n\n@RunWith(ClasspathSuite.class)\n@ClassnameFilters({\".*UnitTest\"})\npublic class MySuite {\n}\n\n", "I'm not sure what gatherTestClasses() does, but let's say it returns some tests when the OS is Linux and different tests when the OS is Windows. You can replicate that in JUnit 4.4 with assumptions:\n@Test\npublic void onlyOnLinux() {\n assumeThat(getOS(), is(OperatingSystem.LINUX));\n // rest of test\n}\n\n@Test\npublic void onlyOnWindows() {\n assumeThat(getOS(), is(OperatingSystem.WINDOWS));\n // rest of test\n}\n\n@Test\npublic void anyOperatingSystem() {\n // just don't call assumeThat(..)\n}\n\nThe implementation of getOS() and OperatingSystem being your custom code.\n", "Here is a Complete example how to implement that. 
it combines of two testCase classes and one suite.\n\nExampleInstrumentedTest:\nimport android.support.test.rule.ActivityTestRule;\n\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.JUnit4;\n\n@RunWith(JUnit4.class)\npublic class ExampleInstrumentedTest {\n\n\n @Rule\n public ActivityTestRule<MainActivity> mActivityTestRule = new ActivityTestRule<>(MainActivity.class);\n\n @Test\n public void checkInputs() throws Exception {\n\n }\n}\n\nExampleInstrumentedTest2:\nimport android.support.test.rule.ActivityTestRule;\n\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.runner.RunWith;\nimport org.junit.runners.JUnit4;\n\n@RunWith(JUnit4.class)\npublic class ExampleInstrumentedTest2 {\n\n\n @Rule\n public ActivityTestRule<MainActivity> mActivityTestRule = new ActivityTestRule<>(MainActivity.class);\n\n @Test\n public void checkInputs() throws Exception {\n\n }\n}\n\nExampleInstrumentedSuite:\nimport junit.framework.TestSuite;\n\nimport org.junit.runner.RunWith;\nimport org.junit.runners.AllTests;\n\n@RunWith(AllTests.class)\npublic class ExampleInstrumentedSuite {\n\n public static TestSuite suite() {\n TestSuite suite = new TestSuite();\n suite.addTest(new junit.framework.JUnit4TestAdapter(ExampleInstrumentedTest.class));\n suite.addTest(new junit.framework.JUnit4TestAdapter(ExampleInstrumentedTest2.class));\n return suite;\n }\n}\n\n\nNote that you should use @RunWith(JUnit4.class) instead of default @RunWith(AndroidJUnit4.class) in testCase Class\n", "public class MyTestCase extends TestCase {\n @Override\n public void runTest() {\n // define assertion here <===\n assertEquals(\"yes\", \"yes\");\n }\n}\n\n@RunWith(AllTests.class)\npublic class DynamicTestSuite {\n public static TestSuite suite() {\n TestSuite suite = new TestSuite();\n\n // dynamically create your test case here <====\n suite.addTest(new MyTestCase());\n\n return suite;\n }\n}\n\n", "Expanding on @kissLife's answer, here's a something you can paste and run that creates multiple tests on the fly:\nimport junit.framework.TestCase;\nimport junit.framework.TestSuite;\n\npublic final class TestJunit4DynamicConstruction {\n public static TestSuite suite() {\n TestSuite suite = new TestSuite();\n suite.addTest(new CompareInts(1, 1));\n suite.addTest(new CompareInts(2, 2));\n suite.addTest(new CompareInts(2, 1)); // huh, for some reason, 2 != 1\n suite.addTest(new CompareInts(1, 1));\n return suite;\n }\n\n static public class CompareInts extends TestCase {\n private final int got;\n private final int expected;\n\n CompareInts(int got, int expected) {\n super(Integer.toString(got) + \":\" + Integer.toString(expected));\n this.got = got;\n this.expected = expected;\n }\n @Override\n public void runTest() {\n assertEquals(got, expected);\n }\n }\n}\n\nYou'll run these tests:\nTestJunit4DynamicConstruction$CompareInts.1:1\nTestJunit4DynamicConstruction$CompareInts.2:2\nTestJunit4DynamicConstruction$CompareInts.2:1\nTestJunit4DynamicConstruction$CompareInts.1:1\nTestJunit4DynamicConstruction$CompareInts\n\nand get this error:\njunit.framework.AssertionFailedError: \nExpected :2\nActual :1\n\n\n ...\nTestJunit4DynamicConstruction$CompareInts.runTest(TestJunit4DynamicConstruction.java:26)\n ...\n\n\nProcess finished with exit code 255\n\n" ]
[ 40, 35, 26, 6, 0, 0, 0 ]
[]
[]
[ "java", "junit", "junit4" ]
stackoverflow_0003257080_java_junit_junit4.txt
Q: How to consume external websocket API in Apache Camel since ahc-ws deprecate? ahc and ahc-ws (Async Http Client) components have been deprecated in Apache camel version 3.16: https://issues.apache.org/jira/browse/CAMEL-17667. Is there an alternative for ahc-ws? The component was very easy to use to consume external websockets API. Other libraries like Jetty, Undertow, Atmosphere, don't seem to offer this kind of features. I have not been able to configure them and the documentation remains unclear. They only provide the server part. For the websocket-jsr356 component, I can't configure the component to consume a WebSockets over SSL API (wss). The library seems to support only classic websocket (ws). I looked for alternatives on the camel doc, examples on github but I didn't find anything. Is there a viable alternative to ahc-ws to consume external websocket APIs simply with camel? Thanks a lot A: It looks like the websocket-jsr356 component in Apache Camel is the recommended alternative to the deprecated ahc-ws component. While the websocket-jsr356 component does not support consuming WebSockets over SSL (wss) out of the box, it is possible to configure it to do so by providing a custom SSLContextParameters object in the component's configuration. Here's an example taken from the Apache Camel documentation: from("websocket-jsr356://myhost.com:9292/mypath") .to("log:org.apache.camel.websocket.jsr356?level=INFO") .to("mock:result"); SSLContextParameters sslContextParameters = new SSLContextParameters(); // configure the parameters WebSocketComponent websocket = context.getComponent("websocket-jsr356", WebSocketComponent.class); websocket.setSslContextParameters(sslContextParameters); You can find more information about configuring the websocket-jsr356 component to use SSL in the Apache Camel documentation: https://camel.apache.org/manual/latest/websocket-jsr356-component.html#websocket-jsr356-using-ssl. I hope this helps! Let me know if you have any other questions. A: Looks like it's not deprecated yet. There is just a suggestion for that. ahc-wss is very useful currently and there is no viable alternative for the same. websocket component requires tedious tweaking of secure storage parameters and is just kills the purpose of wss. I hope they don't deprecate ahc-wss without a proper replacement though.
How to consume external websocket API in Apache Camel since ahc-ws deprecate?
ahc and ahc-ws (Async Http Client) components have been deprecated in Apache camel version 3.16: https://issues.apache.org/jira/browse/CAMEL-17667. Is there an alternative for ahc-ws? The component was very easy to use to consume external websockets API. Other libraries like Jetty, Undertow, Atmosphere, don't seem to offer this kind of features. I have not been able to configure them and the documentation remains unclear. They only provide the server part. For the websocket-jsr356 component, I can't configure the component to consume a WebSockets over SSL API (wss). The library seems to support only classic websocket (ws). I looked for alternatives on the camel doc, examples on github but I didn't find anything. Is there a viable alternative to ahc-ws to consume external websocket APIs simply with camel? Thanks a lot
[ "It looks like the websocket-jsr356 component in Apache Camel is the recommended alternative to the deprecated ahc-ws component. While the websocket-jsr356 component does not support consuming WebSockets over SSL (wss) out of the box, it is possible to configure it to do so by providing a custom SSLContextParameters object in the component's configuration.\nHere's an example taken from the Apache Camel documentation:\nfrom(\"websocket-jsr356://myhost.com:9292/mypath\")\n .to(\"log:org.apache.camel.websocket.jsr356?level=INFO\")\n .to(\"mock:result\");\n\nSSLContextParameters sslContextParameters = new SSLContextParameters();\n// configure the parameters\n\nWebSocketComponent websocket = context.getComponent(\"websocket-jsr356\", WebSocketComponent.class);\nwebsocket.setSslContextParameters(sslContextParameters);\n\nYou can find more information about configuring the websocket-jsr356 component to use SSL in the Apache Camel documentation: https://camel.apache.org/manual/latest/websocket-jsr356-component.html#websocket-jsr356-using-ssl.\nI hope this helps! Let me know if you have any other questions.\n", "Looks like it's not deprecated yet. There is just a suggestion for that. ahc-wss is very useful currently and there is no viable alternative for the same. websocket component requires tedious tweaking of secure storage parameters and is just kills the purpose of wss. I hope they don't deprecate ahc-wss without a proper replacement though.\n" ]
[ 1, 0 ]
[]
[]
[ "apache_camel", "asynchttpclient", "java_websocket", "spring_boot" ]
stackoverflow_0073195445_apache_camel_asynchttpclient_java_websocket_spring_boot.txt
Q: Issue with Axios / React return undefined I have an issue with React when I try to retrieve the value of return. The code: export const RuoloOnline = (jwt) => { axios.get("http://localhost:1337/api/users/me", { headers: { "Authorization": `Bearer ${jwt}` } } ).then((res) => { return (res.data.ruolo) }).catch(() => {return 0}) if I put a console.log the value is correctly viewed. If I try to call this function outside the file, it generates an undefined return. A: Try using asynchronous function calls instead of this function. Your modified code will be export const RuoloOnline = async (jwt) => { return await axios.get("http://localhost:1337/api/users/me", { headers: { "Authorization": `Bearer ${jwt}` } } ); } Hope it helps! A: The issue you're experiencing is likely because you're not returning the value correctly from the RuoloOnline function. In JavaScript, return statements immediately exit the function they are in, so the code after the return statement will never be executed. Here's one way you could fix your code: export const RuoloOnline = (jwt) => { return axios.get("http://localhost:1337/api/users/me", { headers: { "Authorization": `Bearer ${jwt}` } } ).then((res) => res.data.ruolo).catch(() => 0); } In this version of the code, we're returning the result of the axios.get call directly, so the value will be returned correctly when the function is called.
Issue with Axios / React return undefined
I have an issue with React when I try to retrieve the value of return. The code: export const RuoloOnline = (jwt) => { axios.get("http://localhost:1337/api/users/me", { headers: { "Authorization": `Bearer ${jwt}` } } ).then((res) => { return (res.data.ruolo) }).catch(() => {return 0}) if I put a console.log the value is correctly viewed. If I try to call this function outside the file, it generates an undefined return.
[ "Try using asynchronous function calls instead of this function. Your modified code will be\nexport const RuoloOnline = async (jwt) ={\n return await axios.get(\"http://localhost:1337/api/users/me\",\n {\n headers: { \n \"Authorization\": `Bearer ${jwt}`\n }\n }\n );\n}\n\nHope it helps!\n", "The issue you're experiencing is likely because you're not returning the value correctly from the RuoloOnline function. In JavaScript, return statements immediately exit the function they are in, so the code after the return statement will never be executed.\nHere's one way you could fix your code:\nexport const RuoloOnline = (jwt) => {\n return axios.get(\"http://localhost:1337/api/users/me\",\n {\n headers: {\n \"Authorization\": `Bearer ${jwt}`\n }\n }\n ).then((res) => res.data.ruolo).catch(() => 0);\n}\n\nIn this version of the code, we're returning the result of the axios.get call directly, so the value will be returned correctly when the function is called.\n" ]
[ 0, 0 ]
[]
[]
[ "axios", "get", "python_requests", "reactjs", "rest" ]
stackoverflow_0074658655_axios_get_python_requests_reactjs_rest.txt
Q: How to deal with the categorical variable of more than 33 000 cities? I work in Python. I have a problem with the categorical variable - "city". I'm building a predictive model on a large dataset-over 1 million rows. I have over 100 features. One of them is "city", consisting of 33 000 different cities. I use e.g. XGBoost where I need to convert categorical variables into numeric. Dummifying causes the number of features to increase strongly. XGBoost (and my 20 gb RAM) can't handle this. Is there any other way to deal with this variable than e.g. One Hot Encoding, dummies etc.? (When using One Hot Encoding e.g., I have performance problems, there are too many features in my model and I'm running out of memory.) Is there any way to deal with this? A: XGBoost has also since version 1.3.0 added experimental support for categorical encoding. Copying my answer from another question. Nov 23, 2020 XGBoost has since version 1.3.0 added experimental support for categorical features. From the docs: 1.8.7 Categorical Data Other than users performing encoding, XGBoost has experimental support for categorical data using gpu_hist and gpu_predictor. No special operation needs to be done on input test data since the information about categories is encoded into the model during training. https://buildmedia.readthedocs.org/media/pdf/xgboost/latest/xgboost.pdf In the DMatrix section the docs also say: enable_categorical (boolean, optional) – New in version 1.3.0. Experimental support of specializing for categorical features. Do not set to True unless you are interested in development. Currently it’s only available for gpu_hist tree method with 1 vs rest (one hot) categorical split. Also, JSON serialization format, gpu_predictor and pandas input are required. Other models option: If you don't need to use XGBoost, you can use a model like LightGBM or or CatBoost which support categorical encoding without one-hot-encoding out of the box. A: You could use some kind of embeddings that reflect better those cities (and compress the number of total features by direct OHE), maybe using some features to describe the continet where each city belongs, then some other features to describe the country/region, etc. Note that since you didn't provide any specific detail about this task, I've used only geographical data on my example, but you could use some other variables related to each city, like the mean temprature, the population, the area, etc, depending on the task you are trying to address here. Another approach could be replacing the city name with its coordinates (latitude and longitude). Again, this may be helpful depending on the task for your model. Hope this helps A: Beside the models, you could also decrease the number of the features (cities) by grouping them in geographical regions. Another option is grouping them by population size. Another option is grouping them by their frequency by using quantile bins. Target encoding might be another option for you. Feature engineering in many cases involves a lot of manual work, unfortunately you cannot always have everything sorted out automatically. A: There are already great responses here. Other technique I would use is cluster those cities into groups using K-means clustering with some of the features specific to cities in your dataset. By this way you could use the cluster number in place of the actual city. This could reduce the number of levels quite a bit.
How to deal with the categorical variable of more than 33 000 cities?
I work in Python. I have a problem with the categorical variable - "city". I'm building a predictive model on a large dataset-over 1 million rows. I have over 100 features. One of them is "city", consisting of 33 000 different cities. I use e.g. XGBoost where I need to convert categorical variables into numeric. Dummifying causes the number of features to increase strongly. XGBoost (and my 20 gb RAM) can't handle this. Is there any other way to deal with this variable than e.g. One Hot Encoding, dummies etc.? (When using One Hot Encoding e.g., I have performance problems, there are too many features in my model and I'm running out of memory.) Is there any way to deal with this?
[ "XGBoost has also since version 1.3.0 added experimental support for categorical encoding.\nCopying my answer from another question.\nNov 23, 2020\nXGBoost has since version 1.3.0 added experimental support for categorical features. From the docs:\n\n1.8.7 Categorical Data\nOther than users performing encoding, XGBoost has experimental support\nfor categorical data using gpu_hist and gpu_predictor. No special\noperation needs to be done on input test data since the information\nabout categories is encoded into the model during training.\n\nhttps://buildmedia.readthedocs.org/media/pdf/xgboost/latest/xgboost.pdf\nIn the DMatrix section the docs also say:\n\nenable_categorical (boolean, optional) – New in version 1.3.0.\nExperimental support of specializing for categorical features. Do not\nset to True unless you are interested in development. Currently it’s\nonly available for gpu_hist tree method with 1 vs rest (one hot)\ncategorical split. Also, JSON serialization format, gpu_predictor and\npandas input are required.\n\nOther models option:\nIf you don't need to use XGBoost, you can use a model like LightGBM or or CatBoost which support categorical encoding without one-hot-encoding out of the box.\n", "You could use some kind of embeddings that reflect better those cities (and compress the number of total features by direct OHE), maybe using some features to describe the continet where each city belongs, then some other features to describe the country/region, etc.\nNote that since you didn't provide any specific detail about this task, I've used only geographical data on my example, but you could use some other variables related to each city, like the mean temprature, the population, the area, etc, depending on the task you are trying to address here.\nAnother approach could be replacing the city name with its coordinates (latitude and longitude). Again, this may be helpful depending on the task for your model.\nHope this helps\n", "Beside the models, you could also decrease the number of the features (cities) by grouping them in geographical regions. Another option is grouping them by population size.\nAnother option is grouping them by their frequency by using quantile bins. Target encoding might be another option for you.\nFeature engineering in many cases involves a lot of manual work, unfortunately you cannot always have everything sorted out automatically.\n", "There are already great responses here.\nOther technique I would use is cluster those cities into groups using K-means clustering with some of the features specific to cities in your dataset.\nBy this way you could use the cluster number in place of the actual city. This could reduce the number of levels quite a bit.\n" ]
[ 2, 1, 1, 0 ]
[]
[]
[ "forecasting", "python", "xgboost" ]
stackoverflow_0061975690_forecasting_python_xgboost.txt
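To make the encoding suggestions above concrete, here is a small hedged sketch of two of them — frequency encoding and lumping rare cities into an 'other' bucket before any one-hot step. The column names, the threshold, and the toy data are assumptions for illustration only:

import pandas as pd

# Toy frame standing in for the real 1M-row dataset
df = pd.DataFrame({"city": ["London", "Paris", "London", "Smalltown", "Paris", "London"]})

# 1) Frequency encoding: replace each city with its relative frequency (one numeric column)
freq = df["city"].value_counts(normalize=True)
df["city_freq"] = df["city"].map(freq)

# 2) Group infrequent cities into a single 'other' level to shrink the cardinality
threshold = 0.2                      # arbitrary cutoff for the example
rare = freq[freq < threshold].index
df["city_grouped"] = df["city"].where(~df["city"].isin(rare), "other")

print(df)

Target encoding (replacing each city with a statistic of the target variable) and clustering cities by external features, as the other answers suggest, follow the same pattern: one numeric or low-cardinality column instead of 33,000 dummies.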
Q: Nested route doesn't render component I have a component that needs to be rendered under the Home page component. Now I have nested, but the component is not rendered. Although if you do without nesting, then everything works. How can I do this? export default function App() { return ( <div className="App"> <Routes> <Route path="/" element={<Home />}> <Route path="route1" element={<Route1 />} /> </Route> </Routes> </div> ); } export default function Route1() { return ( <> <h2>Route1</h2> <Outlet /> </> ); } export default function Home() { return ( <> <h1>Home Page</h1> </> ); } A: Try wrapping your Routes with Router Link to my stackblitz and play with it.. Also, a quick read >> https://dev.to/tywenk/how-to-use-nested-routes-in-react-router-6-4jhd import * as React from 'react'; import { BrowserRouter as Router, Route, Routes, Link, Outlet, } from 'react-router-dom'; export default function App() { return ( <div className="App"> <Router> <nav> <Link to="/">Home</Link> <Link to="route1">Route 1</Link> </nav> <Routes> <Route path="/" element={<Home />}> <Route path="route1" element={<Route1 />} /> </Route> </Routes> </Router> </div> ); } function Route1() { return <h1>"In Route 1"</h1>; } function Home() { return ( <React.Fragment> <h1>"In Home"</h1> <Outlet /> </React.Fragment> ); } Using <nav> is optional.. added just for demo.. you can use your own method for routing actions (using links, buttons, etc.) A: The Home component is rendered as a Layout Route so it should render an Outlet component for the nested routes. export default function App() { return ( <div className="App"> <Routes> <Route path="/" element={<Home />}> // <-- Layout route <Route path="route1" element={<Route1 />} /> // <-- Nested route </Route> </Routes> </div> ); } export default function Home() { return ( <> <h1>Home Page</h1> <Outlet /> // <-- Nested routes render element here </> ); } Route1 only needs to render an Outlet if it is also a layout route. The Outlet can be removed if this is not the case. export default function Route1() { return ( <> <h2>Route1</h2> </> ); }
Nested route doesn't render component
I have a component that needs to be rendered under the Home page component. Now I have nested, but the component is not rendered. Although if you do without nesting, then everything works. How can I do this? export default function App() { return ( <div className="App"> <Routes> <Route path="/" element={<Home />}> <Route path="route1" element={<Route1 />} /> </Route> </Routes> </div> ); } export default function Route1() { return ( <> <h2>Route1</h2> <Outlet /> </> ); } export default function Home() { return ( <> <h1>Home Page</h1> </> ); }
[ "Try wrapping your Routes with Router\nLink to my stackblitz and play with it..\nAlso, a quick read >> https://dev.to/tywenk/how-to-use-nested-routes-in-react-router-6-4jhd\nimport * as React from 'react';\nimport {\n BrowserRouter as Router,\n Route,\n Routes,\n Link,\n Outlet,\n} from 'react-router-dom';\n\nexport default function App() {\n return (\n <div className=\"App\">\n <Router>\n <nav>\n <Link to=\"/\">Home</Link> <Link to=\"route1\">Route 1</Link>\n </nav>\n <Routes>\n <Route path=\"/\" element={<Home />}>\n <Route path=\"route1\" element={<Route1 />} />\n </Route>\n </Routes>\n </Router>\n </div>\n );\n}\n\nfunction Route1() {\n return <h1>\"In Route 1\"</h1>;\n}\n\nfunction Home() {\n return (\n <React.Fragment>\n <h1>\"In Home\"</h1>\n <Outlet />\n </React.Fragment>\n );\n}\n\nUsing <nav> is optional.. added just for demo.. you can use your own method for routing actions (using links, buttons, etc.)\n", "The Home component is rendered as a Layout Route so it should render an Outlet component for the nested routes.\nexport default function App() {\n return (\n <div className=\"App\">\n <Routes>\n <Route path=\"/\" element={<Home />}> // <-- Layout route\n <Route path=\"route1\" element={<Route1 />} /> // <-- Nested route\n </Route>\n </Routes>\n </div>\n );\n}\n\nexport default function Home() {\n return (\n <>\n <h1>Home Page</h1>\n <Outlet /> // <-- Nested routes render element here\n </>\n );\n}\n\nRoute1 only needs to render an Outlet if it is also a layout route. The Outlet can be removed if this is not the case.\nexport default function Route1() {\n return (\n <>\n <h2>Route1</h2>\n </>\n );\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "javascript", "react_router", "reactjs" ]
stackoverflow_0074656968_javascript_react_router_reactjs.txt
Q: How do I use TypeScript and jest.requireActual() (with named exports)? I have a simple file called functions.ts which contains: export const log = console.log.bind(console); And a jest Mock for it in __mocks__/functions.ts, borrowed from the Jest requireActual() documentation: const originalModule = jest.requireActual("./functions"); // Quiet functions.log() during tests export default { __esModule: true, // Use it when dealing with esModules ...originalModule, log: jest.fn(), }; I wish to make the log() function useless, ie for the function to not do anything (people used to call this a no-op). import { runMe } from "./stackoverflow"; jest.mock("./src/backend/functions"); test(`pass, but make sure it doesn't log error messages`, () => { runMe(); expect(true).toBeTruthy(); }); And the actual function being ran: import { log } from "./src/backend/functions"; export const runMe = () => { console.log(`log is:`, log); log(`Hello`); }; The console.log(`log is:`, log); returns log is: undefined. Everything works perfectly (but still logs) if I remove the jest.mock("./src/backend/functions"); though. How do I use TypeScript and jest.requireActual()? Ie so the tests for runMe() will be quiet when I run them (but also so the other functions in funtions still work as normal)? A: I ended up having to export every named export manually, to ensure log() was mocked but everything else in functions worked. In __mocks__/functions.ts: const actualFunctions = jest.requireActual("../functions"); // Mock this export const log = jest.fn(); // Use actual implementation of everything else export const sleep = actualFunctions.sleep; export const stringify = actualFunctions.stringify; export const hexToUtf8 = actualFunctions.hexToUtf8; export const instructionDataToNote = actualFunctions.instructionDataToNote; It won't update automatically like using deconstruction would have, but it seems like this is the only way.
How do I use TypeScript and jest.requireActual() (with named exports)?
I have a simple file called functions.ts which contains: export const log = console.log.bind(console); And a jest Mock for it in __mocks__/functions.ts, borrowed from the Jest requireActual() documentation: const originalModule = jest.requireActual("./functions"); // Quiet functions.log() during tests export default { __esModule: true, // Use it when dealing with esModules ...originalModule, log: jest.fn(), }; I wish to make the log() function useless, ie for the function to not do anything (people used to call this a no-op). import { runMe } from "./stackoverflow"; jest.mock("./src/backend/functions"); test(`pass, but make sure it doesn't log error messages`, () => { runMe(); expect(true).toBeTruthy(); }); And the actual function being ran: import { log } from "./src/backend/functions"; export const runMe = () => { console.log(`log is:`, log); log(`Hello`); }; The console.log(`log is:`, log); returns log is: undefined. Everything works perfectly (but still logs) if I remove the jest.mock("./src/backend/functions"); though. How do I use TypeScript and jest.requireActual()? Ie so the tests for runMe() will be quiet when I run them (but also so the other functions in funtions still work as normal)?
[ "I ended up having to export every named export manually, to ensure log() was mocked but everything else in functions worked.\nIn __mocks__/functions.ts:\nconst actualFunctions = jest.requireActual(\"../functions\");\n\n// Mock this\nexport const log = jest.fn();\n\n// Use actual implementation of everything else\nexport const sleep = actualFunctions.sleep;\nexport const stringify = actualFunctions.stringify;\nexport const hexToUtf8 = actualFunctions.hexToUtf8;\nexport const instructionDataToNote = actualFunctions.instructionDataToNote;\n\nIt won't update automatically like using deconstruction would have, but it seems like this is the only way.\n" ]
[ 0 ]
[]
[]
[ "jestjs", "mocking", "typescript", "unit_testing" ]
stackoverflow_0074657645_jestjs_mocking_typescript_unit_testing.txt
Q: DDD: using aggregates inside another aggregates DDD: Can aggregates get other aggregates as parameters? According to this, its OK to use aggregates inside another aggregates. But its requires to change multiple aggregates at one transaction. So is it truth that this rule can be easily skipped and I can change multiple aggregates at one time (especially in case of Microservice). The only problem that I need to lock whole aggregates? Thx I have a simple situation: User, Friendship and Friendship request entities. User can be aggregate root. DDD and Homogeneous Many-to-Many Relationship But I would not like to use eventual consistency (especially inside on micro service) cause anyways when I handle that event (FriendshipRequestSent) I need to lock another dependant aggregate. And need to handle and write event on error. A: So is it truth that this rule can be easily skipped and I can change multiple aggregates at one time (especially in case of Microservice). Yes, maybe. The only problem that I need to lock whole aggregates? No - there is the additional problem that, because you are modifying multiple aggregates (or more precisely, domain entities that belong to multiple aggregates) in the same transaction, you also need to be careful to design your persistent storage so that updates to all of the entities can be committed in the same "transaction". That is simple enough when, for example, the entities are all stored in a single relational database, and you can use general purpose operations in the relational database to control your writes. But if you are working with a different kind of data storage, where you cannot easily control the writes to all entities at the same time, then it gets a bit spooky. In an "ideal" world, we could pretend that all information is local, and storing it is just an implementation detail. In practice, the actual implementations we get to use only approximate this idea, and we have to be mindful of the differences.
DDD: using aggregates inside another aggregates
DDD: Can aggregates get other aggregates as parameters? According to this, its OK to use aggregates inside another aggregates. But its requires to change multiple aggregates at one transaction. So is it truth that this rule can be easily skipped and I can change multiple aggregates at one time (especially in case of Microservice). The only problem that I need to lock whole aggregates? Thx I have a simple situation: User, Friendship and Friendship request entities. User can be aggregate root. DDD and Homogeneous Many-to-Many Relationship But I would not like to use eventual consistency (especially inside on micro service) cause anyways when I handle that event (FriendshipRequestSent) I need to lock another dependant aggregate. And need to handle and write event on error.
[ "\nSo is it truth that this rule can be easily skipped and I can change multiple aggregates at one time (especially in case of Microservice).\n\nYes, maybe.\n\nThe only problem that I need to lock whole aggregates?\n\nNo - there is the additional problem that, because you are modifying multiple aggregates (or more precisely, domain entities that belong to multiple aggregates) in the same transaction, you also need to be careful to design your persistent storage so that updates to all of the entities can be committed in the same \"transaction\".\nThat is simple enough when, for example, the entities are all stored in a single relational database, and you can use general purpose operations in the relational database to control your writes.\nBut if you are working with a different kind of data storage, where you cannot easily control the writes to all entities at the same time, then it gets a bit spooky.\nIn an \"ideal\" world, we could pretend that all information is local, and storing it is just an implementation detail. In practice, the actual implementations we get to use only approximate this idea, and we have to be mindful of the differences.\n" ]
[ 0 ]
[]
[]
[ "ddd_service", "domain_driven_design" ]
stackoverflow_0074633791_ddd_service_domain_driven_design.txt
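To illustrate the answer's point that modifying two aggregates in one transaction is straightforward when both are stored in the same relational database, here is a minimal sketch using Python's built-in sqlite3. The table layout, the pending_requests counter, and the function name are invented for the example and are not taken from the question:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE users (id INTEGER PRIMARY KEY, pending_requests INTEGER NOT NULL);
CREATE TABLE friendship_requests (id INTEGER PRIMARY KEY, sender INTEGER, receiver INTEGER, status TEXT);
""")
conn.execute("INSERT INTO users (id, pending_requests) VALUES (1, 0), (2, 0)")

def send_friendship_request(conn, sender_id, receiver_id):
    # One database transaction spans both aggregates: either both writes commit or neither does
    with conn:  # commits on success, rolls back on any exception
        conn.execute(
            "INSERT INTO friendship_requests (sender, receiver, status) VALUES (?, ?, 'PENDING')",
            (sender_id, receiver_id),
        )
        conn.execute(
            "UPDATE users SET pending_requests = pending_requests + 1 WHERE id = ?",
            (receiver_id,),
        )

send_friendship_request(conn, 1, 2)
print(conn.execute("SELECT pending_requests FROM users WHERE id = 2").fetchone())  # (1,)

With storage that cannot provide such a shared transaction, this is exactly where the "spooky" part of the answer applies and eventual consistency (or locking both aggregates) comes back into play.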
Q: How can I make Visual Studio create new C# files that take ImplicitUsings into account? I have a C# 10 project with <ImplicitUsings> enabled: <LangVersion>10</LangVersion> <ImplicitUsings>enable</ImplicitUsings> With this in place, VS will gray-out many common namespaces in code files and offer to remove them. However, when I create a new C# file it still imports all of the now-unnecessary using statements by default: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace MyNamespace { internal class Class1 { } } Is this just a limitation of VS or is there something I can do to convince it to omit these namespaces from the new file template? A: I think that "ImplicitUsings" is not what you are looking for. It only adds some pre-defined usings to your project based on the project's SDK. See Implicit-using for reference. But it will not remove the using directives from the templates.
How can I make Visual Studio create new C# files that take ImplicitUsings into account?
I have a C# 10 project with <ImplicitUsings> enabled: <LangVersion>10</LangVersion> <ImplicitUsings>enable</ImplicitUsings> With this in place, VS will gray-out many common namespaces in code files and offer to remove them. However, when I create a new C# file it still imports all of the now-unnecessary using statements by default: using System; using System.Collections.Generic; using System.Linq; using System.Text; using System.Threading.Tasks; namespace MyNamespace { internal class Class1 { } } Is this just a limitation of VS or is there something I can do to convince it to omit these namespaces from the new file template?
[ "I think that the \"ImplicitUsings\" is not for what you look for. It only add some pre-defined usings to your project based on the project's SDK. See Implicit-using for reference.\nBut will not remove the usings directives of templates.\n" ]
[ 0 ]
[]
[]
[ "c#", "visual_studio" ]
stackoverflow_0071945530_c#_visual_studio.txt
Q: Correct Way To Build Multiple Docker Versions In GitHub Actions? I have a GitHub Action that is almost like the one below. The action's purpose is to build a Dockerfile and push it to DockerHub. name: DockerHub Run on: push: branches: - "master" schedule: - cron: "0 0 * * 0" env: DOCKERHUB_USERNAME: MyUser OFFICIAL_TAG: MyUser/MyImage:latest MAIN_REPO_NAME: MyUser/MyImage DOCKERFILE_PATH: / jobs: docker: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v3 - name: Set up QEMU uses: docker/setup-qemu-action@v2 - name: Set up Docker Buildx uses: docker/setup-buildx-action@v2 - name: Login to DockerHub uses: docker/login-action@v2 with: username: ${{ env.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKER_ACCESS_TOKEN }} - name: Build and push image to DockerHub uses: docker/build-push-action@v3 with: platforms: linux/amd64,linux/arm64 file: ${{ env.GITHUB_WORKSPACE }}/Dockerfile push: true tags: ${{ env.OFFICIAL_TAG }} - name: Update repo description uses: peter-evans/dockerhub-description@v2 with: username: ${{ env.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKER_ACCESS_TOKEN }} repository: ${{ env.MAIN_REPO_NAME }} readme-filepath: ./readme.md And according to DockerHub, the architecture is listed However, I have a question about this line: uses: docker/build-push-action@v3 with: platforms: linux/amd64,linux/arm64 Im not sure if listing the platforms here actually compiles to those platforms. Keep in mind that GitHub is using ubuntu-latest which is x86-x64 and I do not have a ARM64 device to test. Am I setting up correctly to build to ARM devices? A: tldr Im not sure if listing the platforms here actually compiles to those platforms. It does (assuming your the commands defined in your Dockerfile are platform conscious) Am I setting up correctly to build to ARM devices? Your config looks correct and should build for amd64 and arm64. You can test by adding a step after the push and checking the output: # assuming your image is debian based $ docker run \ --platform linux/amd64 \ --rm \ --entrypoint='' \ MyUser/MyImage:latest \ /bin/bash -c 'dpkg --print-architecture' #output should be amd64 $ docker run \ --platform linux/arm64 \ --rm \ --entrypoint='' \ MyUser/MyImage:latest \ /bin/bash -c 'dpkg --print-architecture' #output should be arm64 long answer It "works" because of the emulators for the qemu emulators for a bunch of different platforms (aka docker/setup-qemu-action@v2) and then using docker buildx for the multi-platform images The problem is that even though everything seems to build fine in CI this way, the artifacts never really get tested on their respective native platforms, so to answer your question 'Am I setting up correctly to build to ARM devices?' ... ‍♂️ I find it similar to python and its universal2 wheels, where the cross-compilations are built, but not really ever tested (all very python and macos specific but the conversations point out challenges running integration/e2e tests for these multi platform artifacts): https://github.com/actions/setup-python/issues/197 https://github.com/actions/runner-images/issues/4133 https://github.com/actions/python-versions/pull/114 https://github.com/actions/setup-python/issues/547 This github/community discussion also provides a little more depth on the multiplatform builds as well https://github.com/community/community/discussions/38728#discussioncomment-4106829
Correct Way To Build Multiple Docker Versions In GitHub Actions?
I have a GitHub Action that is almost like the one below. The action's purpose is to build a Dockerfile and push it to DockerHub. name: DockerHub Run on: push: branches: - "master" schedule: - cron: "0 0 * * 0" env: DOCKERHUB_USERNAME: MyUser OFFICIAL_TAG: MyUser/MyImage:latest MAIN_REPO_NAME: MyUser/MyImage DOCKERFILE_PATH: / jobs: docker: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@v3 - name: Set up QEMU uses: docker/setup-qemu-action@v2 - name: Set up Docker Buildx uses: docker/setup-buildx-action@v2 - name: Login to DockerHub uses: docker/login-action@v2 with: username: ${{ env.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKER_ACCESS_TOKEN }} - name: Build and push image to DockerHub uses: docker/build-push-action@v3 with: platforms: linux/amd64,linux/arm64 file: ${{ env.GITHUB_WORKSPACE }}/Dockerfile push: true tags: ${{ env.OFFICIAL_TAG }} - name: Update repo description uses: peter-evans/dockerhub-description@v2 with: username: ${{ env.DOCKERHUB_USERNAME }} password: ${{ secrets.DOCKER_ACCESS_TOKEN }} repository: ${{ env.MAIN_REPO_NAME }} readme-filepath: ./readme.md And according to DockerHub, the architecture is listed However, I have a question about this line: uses: docker/build-push-action@v3 with: platforms: linux/amd64,linux/arm64 Im not sure if listing the platforms here actually compiles to those platforms. Keep in mind that GitHub is using ubuntu-latest which is x86-x64 and I do not have a ARM64 device to test. Am I setting up correctly to build to ARM devices?
[ "tldr\n\nIm not sure if listing the platforms here actually compiles to those\nplatforms.\n\nIt does (assuming your the commands defined in your Dockerfile are platform conscious)\n\nAm I setting up correctly to build to ARM devices?\n\nYour config looks correct and should build for amd64 and arm64. You can test by adding a step after the push and checking the output:\n# assuming your image is debian based\n$ docker run \\\n --platform linux/amd64 \\\n --rm \\\n --entrypoint='' \\\n MyUser/MyImage:latest \\\n /bin/bash -c 'dpkg --print-architecture'\n\n#output should be\namd64\n\n$ docker run \\\n --platform linux/arm64 \\\n --rm \\\n --entrypoint='' \\\n MyUser/MyImage:latest \\\n /bin/bash -c 'dpkg --print-architecture'\n\n#output should be\narm64\n\n\nlong answer\nIt \"works\" because of the emulators for the qemu emulators for a bunch of different platforms (aka docker/setup-qemu-action@v2) and then using docker buildx for the multi-platform images\nThe problem is that even though everything seems to build fine in CI this way, the artifacts never really get tested on their respective native platforms, so to answer your question 'Am I setting up correctly to build to ARM devices?' ... ‍♂️\nI find it similar to python and its universal2 wheels, where the cross-compilations are built, but not really ever tested (all very python and macos specific but the conversations point out challenges running integration/e2e tests for these multi platform artifacts):\n\nhttps://github.com/actions/setup-python/issues/197\nhttps://github.com/actions/runner-images/issues/4133\nhttps://github.com/actions/python-versions/pull/114\nhttps://github.com/actions/setup-python/issues/547\n\nThis github/community discussion also provides a little more depth on the multiplatform builds as well\nhttps://github.com/community/community/discussions/38728#discussioncomment-4106829\n" ]
[ 0 ]
[]
[]
[ "docker", "docker_registry", "dockerfile", "github_actions" ]
stackoverflow_0074632414_docker_docker_registry_dockerfile_github_actions.txt
Q: spacy Can't find model 'en_core_web_sm' on windows 10 and Python 3.5.3 :: Anaconda custom (64-bit) what is difference between spacy.load('en_core_web_sm') and spacy.load('en')? This link explains different model sizes. But i am still not clear how spacy.load('en_core_web_sm') and spacy.load('en') differ spacy.load('en') runs fine for me. But the spacy.load('en_core_web_sm') throws error i have installed spacyas below. when i go to jupyter notebook and run command nlp = spacy.load('en_core_web_sm') I get the below error --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-4-b472bef03043> in <module>() 1 # Import spaCy and load the language library 2 import spacy ----> 3 nlp = spacy.load('en_core_web_sm') 4 5 # Create a Doc object C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\__init__.py in load(name, **overrides) 13 if depr_path not in (True, False, None): 14 deprecation_warning(Warnings.W001.format(path=depr_path)) ---> 15 return util.load_model(name, **overrides) 16 17 C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\util.py in load_model(name, **overrides) 117 elif hasattr(name, 'exists'): # Path or Path-like to model data 118 return load_model_from_path(name, **overrides) --> 119 raise IOError(Errors.E050.format(name=name)) 120 121 OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. how I installed Spacy --- (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>conda install -c conda-forge spacy Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder: The following NEW packages will be INSTALLED: blas: 1.0-mkl cymem: 1.31.2-py35h6538335_0 conda-forge dill: 0.2.8.2-py35_0 conda-forge msgpack-numpy: 0.4.4.2-py_0 conda-forge murmurhash: 0.28.0-py35h6538335_1000 conda-forge plac: 0.9.6-py_1 conda-forge preshed: 1.0.0-py35h6538335_0 conda-forge pyreadline: 2.1-py35_1000 conda-forge regex: 2017.11.09-py35_0 conda-forge spacy: 2.0.12-py35h830ac7b_0 conda-forge termcolor: 1.1.0-py_2 conda-forge thinc: 6.10.3-py35h830ac7b_2 conda-forge tqdm: 4.29.1-py_0 conda-forge ujson: 1.35-py35hfa6e2cd_1001 conda-forge The following packages will be UPDATED: msgpack-python: 0.4.8-py35_0 --> 0.5.6-py35he980bc4_3 conda-forge The following packages will be DOWNGRADED: freetype: 2.7-vc14_2 conda-forge --> 2.5.5-vc14_2 Proceed ([y]/n)? y blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 0.00 B/s cymem-1.31.2-p 100% |###############################| Time: 0:00:00 1.65 MB/s msgpack-python 100% |###############################| Time: 0:00:00 5.37 MB/s murmurhash-0.2 100% |###############################| Time: 0:00:00 1.49 MB/s plac-0.9.6-py_ 100% |###############################| Time: 0:00:00 0.00 B/s pyreadline-2.1 100% |###############################| Time: 0:00:00 4.62 MB/s regex-2017.11. 100% |###############################| Time: 0:00:00 3.31 MB/s termcolor-1.1. 
100% |###############################| Time: 0:00:00 187.81 kB/s tqdm-4.29.1-py 100% |###############################| Time: 0:00:00 2.51 MB/s ujson-1.35-py3 100% |###############################| Time: 0:00:00 1.66 MB/s dill-0.2.8.2-p 100% |###############################| Time: 0:00:00 4.34 MB/s msgpack-numpy- 100% |###############################| Time: 0:00:00 0.00 B/s preshed-1.0.0- 100% |###############################| Time: 0:00:00 0.00 B/s thinc-6.10.3-p 100% |###############################| Time: 0:00:00 5.49 MB/s spacy-2.0.12-p 100% |###############################| Time: 0:00:10 7.42 MB/s (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -V Python 3.5.3 :: Anaconda custom (64-bit) (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -m spacy download en Collecting en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz (37.4MB) 100% |################################| 37.4MB ... Installing collected packages: en-core-web-sm Running setup.py install for en-core-web-sm ... done Successfully installed en-core-web-sm-2.0.0 Linking successful C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\en_core_web_sm --> C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\data\en You can now load the model via spacy.load('en') (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz> A: Initially I downloaded two en packages using following statements in anaconda prompt. python -m spacy download en_core_web_lg python -m spacy download en_core_web_sm But, I kept on getting linkage error and finally running below command helped me to establish link and solved error. python -m spacy download en Also make sure you to restart your runtime if working with Jupyter. -PS : If you get linkage error try giving admin previlages. A: The answer to your misunderstanding is a Unix concept, softlinks which we could say that in Windows are similar to shortcuts. Let's explain this. When you spacy download en, spaCy tries to find the best small model that matches your spaCy distribution. The small model that I am talking about defaults to en_core_web_sm which can be found in different variations which correspond to the different spaCy versions (for example spacy, spacy-nightly have en_core_web_sm of different sizes). When spaCy finds the best model for you, it downloads it and then links the name en to the package it downloaded, e.g. en_core_web_sm. That basically means that whenever you refer to en you will be referring to en_core_web_sm. In other words, en after linking is not a "real" package, is just a name for en_core_web_sm. However, it doesn't work the other way. You can't refer directly to en_core_web_sm because your system doesn't know you have it installed. When you did spacy download en you basically did a pip install. So pip knows that you have a package named en installed for your python distribution, but knows nothing about the package en_core_web_sm. This package is just replacing package en when you import it, which means that package en is just a softlink to en_core_web_sm. 
Of course, you can directly download en_core_web_sm, using the command: python -m spacy download en_core_web_sm, or you can even link the name en to other models as well. For example, you could do python -m spacy download en_core_web_lg and then python -m spacy link en_core_web_lg en. That would make en a name for en_core_web_lg, which is a large spaCy model for the English language. Hope it is clear now :) A: The below worked for me : import en_core_web_sm nlp = en_core_web_sm.load() A: For those who are still facing problems even after installing it as administrator from Anaconda prompt, here's a quick fix: Got to the path where it is downloaded. For e.g. C:\Users\name\AppData\Local\Continuum\anaconda3\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.2.0 Copy the path. Paste it in: nlp = spacy.load(r'C:\Users\name\AppData\Local\Continuum\anaconda3\Lib\site-packages\en_core_web_sm\en_core_web_sm-2.2.0') Works like a charm :) PS: Check for spacy version A: Using the Spacy language model in Colab requires only the following two steps: Download the model (change the name according to the size of the model) !python -m spacy download en_core_web_lg Restart the colab runtime! Perform shortcut key: Ctrl + M + . Test import spacy nlp = spacy.load("en_core_web_lg") successful!!! A: Try this method as this worked like a charm to me: In your Anaconda Prompt, run the command: !python -m spacy download en After running the above command, you should be able to execute the below in your jupyter notebook: spacy.load('en_core_web_sm') A: First of all, install spacy using the following command for jupyter notebook pip install -U spacy Then write the following code: import en_core_web_sm nlp = en_core_web_sm.load() A: I am running Jupyter Notebook on Windows. Finally, its a version issue, Need to execute below commands in conda cmd prompt( open as admin) pip install spacy==2.3.5 python -m spacy download en_core_web_sm python -m spacy download en from chatterbot import ChatBot import spacy import en_core_web_sm nlp = en_core_web_sm.load() ChatBot("hello") Output - A: Don't run !python -m spacy download en_core_web_lg from inside jupyter. Do this instead: import spacy.cli spacy.cli.download("en_core_web_lg") You may need to restart the kernel before running the above two commands for it to work. A: import spacy nlp = spacy.load('/opt/anaconda3/envs/NLPENV/lib/python3.7/site-packages/en_core_web_sm/en_core_web_sm-2.3.1') Try giving the absolute path of the package with the version as shown in the image. It works perfectly fine. A: a simple solution for this which I saw on spacy.io from spacy.lang.en import English nlp=English() https://course.spacy.io/en/chapter1 A: As for Windows based Anaconda, Open Anaconda Prompt Activate your environment. Ex: active myspacyenv pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz python -m spacy download en_core_web_sm Open Jupyter Notebook ex: active myspacyenv and then jupyter notebook on Anaconda Promt import spacy spacy.load('en_core_web_sm') and it will run peacefully! 
A: Steps to load up modules based on different versions of spacy download the best-matching version of a specific model for your spaCy installation python -m spacy download en_core_web_sm pip install .tar.gz archive from path or URL pip install /Users/you/en_core_web_sm-2.2.0.tar.gz or pip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz Add to your requirements file or environment yaml file. Theres range of version that one spacy version is comptable with you can view more under https://github.com/explosion/spacy-models/releases if your not sure running below code nlp = spacy.load('en_core_web_sm') will give off a warning telling what version model will be compatible with your installed spacy verion enironment.yml example name: root channels: - defaults - conda-forge - anaconda dependencies: - python=3.8.3 - pip - spacy=2.3.2 - scikit-learn=0.23.2 - pip: - https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz#egg=en_core_web_sm A: Open Anaconda Navigator. Click on any IDE. Run the code: !pip install -U spacy download en_core_web_sm !pip install -U spacy download en_core_web_sm It will work. If you are open IDE directly close it and follow this procedure once. A: Loading the module using the different syntax worked for me. import en_core_web_sm nlp = en_core_web_sm.load() A: Anaconda Users If you're using a conda virtual environment, be sure that its the same version of Python as that in your base environment. To verify this, run python --version in each environment. If not the same, create a new virtual environment with that version of Python (Ex. conda create --name myenv python=x.x.x). Activate the virtual environment (conda activate myenv) conda install -c conda-forge spacy python -m spacy download en_core_web_sm I just ran into this issue, and the above worked for me. This addresses the issue of the download occurring in an area that is not accessible to your current virtual environment. You should then be able to run the following: import spacy nlp = spacy.load("en_core_web_sm") A: Open command prompt or terminal and execute the below code: pip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz Execute the below chunk in your Jupiter notebook. import spacy nlp = spacy.load('en_core_web_sm') Hope the above code works for all:) A: I had also same issue as I couldnt load module using '''spacy.load()''' You can follow below steps to solve this on windows: download using !python -m spacy download en_core_web_sm import en_core_web_sm as import en_core_web_sm load using en_core_web_sm.load() to some variable Complete code will be: python -m spacy download en_core_web_sm import en_core_web_sm nlp = en_core_web_sm.load() A: This works with colab: !python -m spacy download en import en_core_web_sm nlp = en_core_web_sm.load() Or for the medium: import en_core_web_md nlp = en_core_web_md.load() A: Instead of any of the above, this solved my error. conda install -c conda-forge spacy-model-en_core_web_sm If you are an anaconda user, this is the solution. A: I'm running PyCharm on MacOS and while none of the above answers completely worked for me, they did provide enough clues and I was finally able to everything working. I am connecting to an ec2 instance and have configured PyCharm such that I can edit on my Mac and it automatically updates the files on my ec2 instance. 
Thus, the problem was on the ec2 side where it was not finding Spacy even though I installed it several different times and ways. If I ran my python script from the command line, everything worked fine. However, from within PyCharm, it was initially not finding Spacy and the models. I eventually fixed the "finding" spacy issue using the above recommendation of adding a "requirements.txt" file. But the models were still not recognized. My solution: download the models manually and place them in the file system on the ec2 instance and explicitly point to them when loaded. I downloaded the files from here: https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz https://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.0.0/en_core_web_lg-3.0.0.tar.gz After downloading, I dropped moved them to my ec2 instance, decompressed and untared them in my filesystem, e.g. /path_to_models/en_core_web_lg-3.0.0/ I then load a model using the explicit path and it worked from within PyCharm (note the path used goes all the way to en_core_web_lg-3.0.0; you will get an error if you do not use the folder with the config.cfg file): nlpObject = spacy.load('/path_to_models/en_core_web_lg-3.0.0/en_core_web_lg/en_core_web_lg-3.0.0') A: Check installed version of spacy pip show spacy You will get something like this: Name: spacy Version: 3.1.3 Summary: Industrial-strength Natural Language Processing (NLP) in Python Install the relevant version of the model using: !pip install -U https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz A: I tried all the above answers but could not succeed. Below worked for me : (Specific to WINDOWS os) Run anaconda command prompt with admin privilege(Important) Then run below commands: pip install -U --user spacy python -m spacy download en Try below command for verification: import spacy spacy.load('en') It might work for others versions as well: A: If you have already downloaded spacy and the language model (E.g., en_core_web_sm or en_core_web_md), then you can follow these steps: Open Anaconda prompt as admin Then type : python -m spacy link [package name or path] [shortcut] For E.g., python -m spacy link /Users/you/model en This will create a symlink to the your language model. Now you can load the model using spacy.load("en") in your notebooks or scripts A: This is what I did: Went to the virtual environment where I was working on Anaconda Prompt / Command Line Ran this: python -m spacy download en_core_web_sm And was done A: TRY THIS :- !python -m spacy download en_core_web_md A: Even I faced similar issue. How I resolved it start anaconda prompt in admin mode. installed both python -m spacy download en and python -m spacy download en_core_web_sm after above steps only I started jupyter notebook where I am accessing this package. Now I can access both import spacy nlp = spacy.load('en_core_web_sm') or nlp = spacy.load('en') Both are working for me. A: I faced a similar issue. I installed spacy and en_core_web_sm from a specific conda environment. However, I got two(02) differents issues as following: [Errno 2] No such file or directory: '....\en_core_web_sm\en_core_web_sm-2.3.1\vocab\lexemes.bin' or OSError: [E050] Can't find model 'en_core_web_sm'.... It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. 
I did the following: Open Command Prompt as Administrator Go to c:> Activate my Conda environment (If you work in a specific conda environment): c:\>activate <conda environment name> (conda environment name)c:\>python -m spacy download en Return to Jupyter Notebook and you can load the language library: nlp = en_core_web_sm.load() For me, it works :) A: Download en_core_web_sm tar file Open terminal from anaconda or open anaconda evn. Run this: pip3 install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz; or pip install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz; Restart jupyter, it will work. A: Run this in os console: python -m spacy download en python -m spacy link en_core_web_sm en_core_web_sm Then run this in python console or on your python IDE: import spacy spacy.load('en_core_web_sm') A: This worked for me: conda install -c conda-forge spacy-model-en_core_web_sm A: Best is to follow the official spacy docs for installation (https://spacy.io/usage): First uninstall your current spacy version pip uninstall spacy Then install pacy correctly pip install -U pip setuptools wheel pip install -U spacy python -m spacy download en_core_web_sm
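Pulling the recurring advice from the answers above together, here is a small self-contained sketch of a loader that downloads the model only when spaCy cannot find it. It illustrates typical usage rather than reproducing anyone's code; the model name is the small English model discussed throughout.

import spacy
from spacy.cli import download

MODEL_NAME = "en_core_web_sm"

def load_model(name: str = MODEL_NAME):
    # Try the installed package first; fetch it once if spaCy cannot find it.
    try:
        return spacy.load(name)
    except OSError:
        # Equivalent to: python -m spacy download en_core_web_sm
        download(name)
        return spacy.load(name)

nlp = load_model()
doc = nlp("This is a sentence.")
print([token.text for token in doc])

As several answers note, inside Jupyter or Colab you may still need to restart the kernel after the download before spacy.load() picks the package up.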
spacy Can't find model 'en_core_web_sm' on windows 10 and Python 3.5.3 :: Anaconda custom (64-bit)
what is difference between spacy.load('en_core_web_sm') and spacy.load('en')? This link explains different model sizes. But i am still not clear how spacy.load('en_core_web_sm') and spacy.load('en') differ spacy.load('en') runs fine for me. But the spacy.load('en_core_web_sm') throws error i have installed spacyas below. when i go to jupyter notebook and run command nlp = spacy.load('en_core_web_sm') I get the below error --------------------------------------------------------------------------- OSError Traceback (most recent call last) <ipython-input-4-b472bef03043> in <module>() 1 # Import spaCy and load the language library 2 import spacy ----> 3 nlp = spacy.load('en_core_web_sm') 4 5 # Create a Doc object C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\__init__.py in load(name, **overrides) 13 if depr_path not in (True, False, None): 14 deprecation_warning(Warnings.W001.format(path=depr_path)) ---> 15 return util.load_model(name, **overrides) 16 17 C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\util.py in load_model(name, **overrides) 117 elif hasattr(name, 'exists'): # Path or Path-like to model data 118 return load_model_from_path(name, **overrides) --> 119 raise IOError(Errors.E050.format(name=name)) 120 121 OSError: [E050] Can't find model 'en_core_web_sm'. It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory. how I installed Spacy --- (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>conda install -c conda-forge spacy Fetching package metadata ............. Solving package specifications: . Package plan for installation in environment C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder: The following NEW packages will be INSTALLED: blas: 1.0-mkl cymem: 1.31.2-py35h6538335_0 conda-forge dill: 0.2.8.2-py35_0 conda-forge msgpack-numpy: 0.4.4.2-py_0 conda-forge murmurhash: 0.28.0-py35h6538335_1000 conda-forge plac: 0.9.6-py_1 conda-forge preshed: 1.0.0-py35h6538335_0 conda-forge pyreadline: 2.1-py35_1000 conda-forge regex: 2017.11.09-py35_0 conda-forge spacy: 2.0.12-py35h830ac7b_0 conda-forge termcolor: 1.1.0-py_2 conda-forge thinc: 6.10.3-py35h830ac7b_2 conda-forge tqdm: 4.29.1-py_0 conda-forge ujson: 1.35-py35hfa6e2cd_1001 conda-forge The following packages will be UPDATED: msgpack-python: 0.4.8-py35_0 --> 0.5.6-py35he980bc4_3 conda-forge The following packages will be DOWNGRADED: freetype: 2.7-vc14_2 conda-forge --> 2.5.5-vc14_2 Proceed ([y]/n)? y blas-1.0-mkl.t 100% |###############################| Time: 0:00:00 0.00 B/s cymem-1.31.2-p 100% |###############################| Time: 0:00:00 1.65 MB/s msgpack-python 100% |###############################| Time: 0:00:00 5.37 MB/s murmurhash-0.2 100% |###############################| Time: 0:00:00 1.49 MB/s plac-0.9.6-py_ 100% |###############################| Time: 0:00:00 0.00 B/s pyreadline-2.1 100% |###############################| Time: 0:00:00 4.62 MB/s regex-2017.11. 100% |###############################| Time: 0:00:00 3.31 MB/s termcolor-1.1. 
100% |###############################| Time: 0:00:00 187.81 kB/s tqdm-4.29.1-py 100% |###############################| Time: 0:00:00 2.51 MB/s ujson-1.35-py3 100% |###############################| Time: 0:00:00 1.66 MB/s dill-0.2.8.2-p 100% |###############################| Time: 0:00:00 4.34 MB/s msgpack-numpy- 100% |###############################| Time: 0:00:00 0.00 B/s preshed-1.0.0- 100% |###############################| Time: 0:00:00 0.00 B/s thinc-6.10.3-p 100% |###############################| Time: 0:00:00 5.49 MB/s spacy-2.0.12-p 100% |###############################| Time: 0:00:10 7.42 MB/s (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -V Python 3.5.3 :: Anaconda custom (64-bit) (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>python -m spacy download en Collecting en_core_web_sm==2.0.0 from https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz#egg=en_core_web_sm==2.0.0 Downloading https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.0.0/en_core_web_sm-2.0.0.tar.gz (37.4MB) 100% |################################| 37.4MB ... Installing collected packages: en-core-web-sm Running setup.py install for en-core-web-sm ... done Successfully installed en-core-web-sm-2.0.0 Linking successful C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\en_core_web_sm --> C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder\lib\site-packages\spacy\data\en You can now load the model via spacy.load('en') (C:\Users\nikhizzz\AppData\Local\conda\conda\envs\tensorflowspyder) C:\Users\nikhizzz>
[ "Initially I downloaded two en packages using following statements in anaconda prompt.\npython -m spacy download en_core_web_lg\npython -m spacy download en_core_web_sm\n\nBut, I kept on getting linkage error and finally running below command helped me to establish link and solved error.\npython -m spacy download en\n\nAlso make sure you to restart your runtime if working with Jupyter.\n-PS : If you get linkage error try giving admin previlages.\n", "The answer to your misunderstanding is a Unix concept, softlinks which we could say that in Windows are similar to shortcuts. Let's explain this.\nWhen you spacy download en, spaCy tries to find the best small model that matches your spaCy distribution. The small model that I am talking about defaults to en_core_web_sm which can be found in different variations which correspond to the different spaCy versions (for example spacy, spacy-nightly have en_core_web_sm of different sizes). \nWhen spaCy finds the best model for you, it downloads it and then links the name en to the package it downloaded, e.g. en_core_web_sm. That basically means that whenever you refer to en you will be referring to en_core_web_sm. In other words, en after linking is not a \"real\" package, is just a name for en_core_web_sm.\nHowever, it doesn't work the other way. You can't refer directly to en_core_web_sm because your system doesn't know you have it installed. When you did spacy download en you basically did a pip install. So pip knows that you have a package named en installed for your python distribution, but knows nothing about the package en_core_web_sm. This package is just replacing package en when you import it, which means that package en is just a softlink to en_core_web_sm.\nOf course, you can directly download en_core_web_sm, using the command: python -m spacy download en_core_web_sm, or you can even link the name en to other models as well. For example, you could do python -m spacy download en_core_web_lg and then python -m spacy link en_core_web_lg en. That would make \nen a name for en_core_web_lg, which is a large spaCy model for the English language.\nHope it is clear now :) \n", "The below worked for me :\nimport en_core_web_sm\n\nnlp = en_core_web_sm.load()\n\n", "For those who are still facing problems even after installing it as administrator from Anaconda prompt, here's a quick fix:\n\nGot to the path where it is downloaded. 
For e.g.\nC:\\Users\\name\\AppData\\Local\\Continuum\\anaconda3\\Lib\\site-packages\\en_core_web_sm\\en_core_web_sm-2.2.0\n\n\nCopy the path.\n\nPaste it in:\nnlp = spacy.load(r'C:\\Users\\name\\AppData\\Local\\Continuum\\anaconda3\\Lib\\site-packages\\en_core_web_sm\\en_core_web_sm-2.2.0')\n\n\nWorks like a charm :)\n\n\nPS: Check for spacy version\n", "Using the Spacy language model in Colab requires only the following two steps:\n\nDownload the model (change the name according to the size of the model)\n\n!python -m spacy download en_core_web_lg \n\n\nRestart the colab runtime!\nPerform shortcut key: Ctrl + M + .\n\nTest\nimport spacy\nnlp = spacy.load(\"en_core_web_lg\")\n\nsuccessful!!!\n", "Try this method as this worked like a charm to me:\nIn your Anaconda Prompt, run the command:\n!python -m spacy download en\n\nAfter running the above command, you should be able to execute the below in your jupyter notebook:\nspacy.load('en_core_web_sm')\n\n", "First of all, install spacy using the following command for jupyter notebook\npip install -U spacy\nThen write the following code:\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\n\n", "I am running Jupyter Notebook on Windows.\nFinally, its a version issue, Need to execute below commands in conda cmd prompt( open as admin)\n\npip install spacy==2.3.5\n\npython -m spacy download en_core_web_sm\n\npython -m spacy download en\n\n\nfrom chatterbot import ChatBot\nimport spacy\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\nChatBot(\"hello\")\n\nOutput -\n\n", "Don't run !python -m spacy download en_core_web_lg from inside jupyter.\nDo this instead:\nimport spacy.cli\nspacy.cli.download(\"en_core_web_lg\")\n\nYou may need to restart the kernel before running the above two commands for it to work.\n", "import spacy\n\nnlp = spacy.load('/opt/anaconda3/envs/NLPENV/lib/python3.7/site-packages/en_core_web_sm/en_core_web_sm-2.3.1')\n\nTry giving the absolute path of the package with the version as shown in the image.\nIt works perfectly fine.\n", "a simple solution for this which I saw on spacy.io\nfrom spacy.lang.en import English\nnlp=English()\n\nhttps://course.spacy.io/en/chapter1\n", "As for Windows based Anaconda,\n\nOpen Anaconda Prompt\n\nActivate your environment. Ex: active myspacyenv\n\npip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz\n\npython -m spacy download en_core_web_sm\n\nOpen Jupyter Notebook ex: active myspacyenv and then jupyter notebook on Anaconda Promt\n\n\n\nimport spacy spacy.load('en_core_web_sm')\n\nand it will run peacefully!\n", "Steps to load up modules based on different versions of spacy\ndownload the best-matching version of a specific model for your spaCy installation\npython -m spacy download en_core_web_sm\npip install .tar.gz archive from path or URL\npip install /Users/you/en_core_web_sm-2.2.0.tar.gz\n\nor\npip install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz\n\nAdd to your requirements file or environment yaml file. 
Theres range of version that one spacy version is comptable with you can view more under https://github.com/explosion/spacy-models/releases\nif your not sure running below code\nnlp = spacy.load('en_core_web_sm') \n\nwill give off a warning telling what version model will be compatible with your installed spacy verion\nenironment.yml example\nname: root\nchannels:\n - defaults\n - conda-forge\n - anaconda\ndependencies:\n - python=3.8.3\n - pip\n - spacy=2.3.2\n - scikit-learn=0.23.2\n - pip:\n - https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.3.1/en_core_web_sm-2.3.1.tar.gz#egg=en_core_web_sm\n\n", "Open Anaconda Navigator. Click on any IDE. Run the code: \n!pip install -U spacy download en_core_web_sm\n!pip install -U spacy download en_core_web_sm\n\nIt will work. If you are open IDE directly close it and follow this procedure once.\n", "Loading the module using the different syntax worked for me.\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\n\n", "Anaconda Users\n\nIf you're using a conda virtual environment, be sure that its the same version of Python as that in your base environment. To verify this, run python --version in each environment. If not the same, create a new virtual environment with that version of Python (Ex. conda create --name myenv python=x.x.x).\nActivate the virtual environment (conda activate myenv)\nconda install -c conda-forge spacy\npython -m spacy download en_core_web_sm\n\nI just ran into this issue, and the above worked for me. This addresses the issue of the download occurring in an area that is not accessible to your current virtual environment.\nYou should then be able to run the following:\nimport spacy\nnlp = spacy.load(\"en_core_web_sm\")\n\n", "Open command prompt or terminal and execute the below code:\npip3 install https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-2.2.0/en_core_web_sm-2.2.0.tar.gz\n\nExecute the below chunk in your Jupiter notebook.\nimport spacy\nnlp = spacy.load('en_core_web_sm')\nHope the above code works for all:)\n", "I had also same issue as I couldnt load module using '''spacy.load()'''\nYou can follow below steps to solve this on windows:\n\ndownload using !python -m spacy download en_core_web_sm\nimport en_core_web_sm as import en_core_web_sm\nload using en_core_web_sm.load() to some variable\n\nComplete code will be:\npython -m spacy download en_core_web_sm\n\nimport en_core_web_sm\n\nnlp = en_core_web_sm.load()\n\n", "This works with colab:\n!python -m spacy download en\nimport en_core_web_sm\nnlp = en_core_web_sm.load()\n\nOr for the medium:\nimport en_core_web_md\nnlp = en_core_web_md.load()\n\n", "Instead of any of the above, this solved my error.\nconda install -c conda-forge spacy-model-en_core_web_sm\nIf you are an anaconda user, this is the solution.\n", "I'm running PyCharm on MacOS and while none of the above answers completely worked for me, they did provide enough clues and I was finally able to everything working. I am connecting to an ec2 instance and have configured PyCharm such that I can edit on my Mac and it automatically updates the files on my ec2 instance. Thus, the problem was on the ec2 side where it was not finding Spacy even though I installed it several different times and ways. If I ran my python script from the command line, everything worked fine. However, from within PyCharm, it was initially not finding Spacy and the models. 
I eventually fixed the \"finding\" spacy issue using the above recommendation of adding a \"requirements.txt\" file. But the models were still not recognized.\nMy solution: download the models manually and place them in the file system on the ec2 instance and explicitly point to them when loaded. I downloaded the files from here:\nhttps://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz\nhttps://github.com/explosion/spacy-models/releases/download/en_core_web_lg-3.0.0/en_core_web_lg-3.0.0.tar.gz\nAfter downloading, I dropped moved them to my ec2 instance, decompressed and untared them in my filesystem, e.g. /path_to_models/en_core_web_lg-3.0.0/\nI then load a model using the explicit path and it worked from within PyCharm (note the path used goes all the way to en_core_web_lg-3.0.0; you will get an error if you do not use the folder with the config.cfg file):\nnlpObject = spacy.load('/path_to_models/en_core_web_lg-3.0.0/en_core_web_lg/en_core_web_lg-3.0.0')\n\n", "Check installed version of spacy\npip show spacy\nYou will get something like this:\nName: spacy\nVersion: 3.1.3\nSummary: Industrial-strength Natural Language Processing (NLP) in Python\nInstall the relevant version of the model using:\n!pip install -U https://github.com/explosion/spacy-models/releases/download/en_core_web_sm-3.0.0/en_core_web_sm-3.0.0.tar.gz\n", "I tried all the above answers but could not succeed. Below worked for me :\n(Specific to WINDOWS os)\n\nRun anaconda command prompt with admin privilege(Important)\nThen run below commands:\n\n pip install -U --user spacy \n python -m spacy download en\n\n\nTry below command for verification:\n\nimport spacy\nspacy.load('en')\n\n\nIt might work for others versions as well:\n\n\n", "If you have already downloaded spacy and the language model (E.g., en_core_web_sm or en_core_web_md), then you can follow these steps:\n\nOpen Anaconda prompt as admin\n\nThen type : python -m spacy link [package name or path] [shortcut]\nFor E.g., python -m spacy link /Users/you/model en\n\n\nThis will create a symlink to the your language model. Now you can load the model using spacy.load(\"en\") in your notebooks or scripts\n", "This is what I did:\n\nWent to the virtual environment where I was working on Anaconda Prompt / Command Line\n\nRan this: python -m spacy download en_core_web_sm\n\n\nAnd was done\n", "TRY THIS :-\n!python -m spacy download en_core_web_md\n", "Even I faced similar issue. How I resolved it\n\nstart anaconda prompt in admin mode.\ninstalled both\npython -m spacy download en\nand\npython -m spacy download en_core_web_sm\nafter above steps only I started jupyter notebook where I am accessing this package.\nNow I can access both\nimport spacy\nnlp = spacy.load('en_core_web_sm')\nor\nnlp = spacy.load('en')\nBoth are working for me.\n\n", "I faced a similar issue. I installed spacy and en_core_web_sm from a specific conda environment. However, I got two(02) differents issues as following:\n[Errno 2] No such file or directory: '....\\en_core_web_sm\\en_core_web_sm-2.3.1\\vocab\\lexemes.bin'\nor\nOSError: [E050] Can't find model 'en_core_web_sm'.... 
It doesn't seem to be a shortcut link, a Python package or a valid path to a data directory.\nI did the following:\n\nOpen Command Prompt as Administrator\nGo to c:>\nActivate my Conda environment (If you work in a specific conda environment):\n\nc:\\>activate <conda environment name>\n\n\n(conda environment name)c:\\>python -m spacy download en\nReturn to Jupyter Notebook and you can load the language library:\n\nnlp = en_core_web_sm.load()\n\nFor me, it works :)\n", "Download en_core_web_sm tar file\nOpen terminal from anaconda or open anaconda evn.\nRun this:\npip3 install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz;\n\nor\npip install /Users/yourpath/Downloads/en_core_web_sm-3.1.0.tar.gz;\n\nRestart jupyter, it will work.\n", "Run this in os console:\npython -m spacy download en\npython -m spacy link en_core_web_sm en_core_web_sm\n\nThen run this in python console or on your python IDE:\nimport spacy\nspacy.load('en_core_web_sm')\n\n", "This worked for me:\nconda install -c conda-forge spacy-model-en_core_web_sm\n", "Best is to follow the official spacy docs for installation (https://spacy.io/usage):\nFirst uninstall your current spacy version\npip uninstall spacy\n\nThen install pacy correctly\npip install -U pip setuptools wheel\npip install -U spacy\npython -m spacy download en_core_web_sm\n\n" ]
[ 156, 83, 40, 17, 15, 11, 5, 5, 5, 4, 4, 3, 3, 2, 2, 2, 2, 2, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "nlp", "python", "python_3.x", "spacy" ]
stackoverflow_0054334304_nlp_python_python_3.x_spacy.txt
Q: Cannot implicitly convert type 'double' to 'int'. An explicit conversion exists I know its normally obvious where the conversion between double and int has gone wrong but im using recursion methods to add up the sum of a list to 1 decimal point but i cannot seem to find where the error is. There is no error when i use console.WriteLine but when i use return (which i would like) it comes up with the error. double[] arr = { -1.103f, 2.2f, 3.1f, 10.0f, 15.0f, 23.1f, 22, 12f }; List<double> values = new List<double>(arr.Length); foreach (double i in arr) { values.Add(i); } static double sum(List<double> values, int position = 0) { if (values[position] == values[values.Count - 1]) { return values[values.Count - 1]; } return Math.Round(values[position], 2) + sum(values, position + 1); } return sum(values); //this is the return value that causes the error Some insight onto why it works with console.WriteLine but not return would be great and how to fix it so it works with return would be great. A: Don't do this: return sum(values); Your code isn't in a method, it's a relatively new C# feature called Top-level Statements. When you return from top-level statements, you're producing an "exit code" for the application. And... Exit codes are integers (hence the error, you're returning a double) Exit codes have a meaning (you don't want to return just any random value) If you want to output the value, then output it instead of returning it from the application: Console.WriteLine(values); why it works with console.WriteLine but not return Because Console.WriteLine is a method which accepts a variety of types. return is a statement which returns a value from a method. In this case that "method" is effectively an implicit Main() method for the application, which has a return type of int. A: Good day friend Possibly your mistake is in the way you do your sum. I would recommend that you do it as follows: They start your arrangement directly as a list and convert your list into statica because your method is static the same. static List<double> Numbers = new List{ -1.103f, 2.2f, 3.1f, 10.0f, 15.0f, 23.1f, 22, 12f };* Note: Thus avoiding the unnecessary creation of the array, unless it is necessary to start the object in array and you need to make validations before adding them to the list if so, your process is good, otherwise I recommend doing it this way. Use linq expressions(using System.Linq;) for your sum with the following static void SumList() => Console.WriteLine(Numbers.Sum().ToString("N0")); Note: We generated a method called SumList that will return the sum of the list since in your code you use a static method places it like this. Your code would end as follows static List<double> Numbers = new List<double> { -1.103f, 2.2f, 3.1f, 10.0f, 15.0f, 23.1f, 22, 12f }; static void Suma() => Console.WriteLine(Numbers.Sum().ToString("N0")); PS: I hope you will excuse me for my English, and help you by example.
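To make the accepted explanation concrete, here is a short sketch of the top-level-statements file with the result printed rather than returned. It is illustrative, not a copy of anyone's code: the index-based stop condition and the final cast are choices made for the example, the cast matters only if the sum should double as the process exit code, and the usual .NET 6 implicit usings are assumed.

double[] arr = { -1.103, 2.2, 3.1, 10.0, 15.0, 23.1, 22, 12 };
List<double> values = new List<double>(arr);

// Recursive sum; stopping on the index (not on value equality) means a duplicate
// value earlier in the list cannot end the recursion too soon.
static double Sum(List<double> items, int position = 0)
{
    if (position == items.Count - 1)
        return items[position];
    return Math.Round(items[position], 2) + Sum(items, position + 1);
}

double total = Sum(values);

// Print the double instead of returning it from the implicit Main()
Console.WriteLine($"Sum: {total:F1}");

// Only if the value should also become the exit code (exit codes are ints)
return (int)Math.Round(total);

As the second answer points out, values.Sum() from System.Linq gives the same total with no recursion at all.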
Cannot implicitly convert type 'double' to 'int'. An explicit conversion exists
I know its normally obvious where the conversion between double and int has gone wrong but im using recursion methods to add up the sum of a list to 1 decimal point but i cannot seem to find where the error is. There is no error when i use console.WriteLine but when i use return (which i would like) it comes up with the error. double[] arr = { -1.103f, 2.2f, 3.1f, 10.0f, 15.0f, 23.1f, 22, 12f }; List<double> values = new List<double>(arr.Length); foreach (double i in arr) { values.Add(i); } static double sum(List<double> values, int position = 0) { if (values[position] == values[values.Count - 1]) { return values[values.Count - 1]; } return Math.Round(values[position], 2) + sum(values, position + 1); } return sum(values); //this is the return value that causes the error Some insight onto why it works with console.WriteLine but not return would be great and how to fix it so it works with return would be great.
[ "Don't do this:\nreturn sum(values);\n\nYour code isn't in a method, it's a relatively new C# feature called Top-level Statements. When you return from top-level statements, you're producing an \"exit code\" for the application. And...\n\nExit codes are integers (hence the error, you're returning a double)\nExit codes have a meaning (you don't want to return just any random value)\n\nIf you want to output the value, then output it instead of returning it from the application:\nConsole.WriteLine(values);\n\n\nwhy it works with console.WriteLine but not return\n\nBecause Console.WriteLine is a method which accepts a variety of types. return is a statement which returns a value from a method.\nIn this case that \"method\" is effectively an implicit Main() method for the application, which has a return type of int.\n", "Good day friend\nPossibly your mistake is in the way you do your sum.\nI would recommend that you do it as follows:\n\nThey start your arrangement directly as a list and convert your list into statica because your method is static the same.\n\nstatic List<double> Numbers = new List{ -1.103f, 2.2f, 3.1f, 10.0f, 15.0f, 23.1f, 22, 12f };*\nNote: Thus avoiding the unnecessary creation of the array, unless it is necessary to start the object in array and you need to make validations before adding them to the list if so, your process is good, otherwise I recommend doing it this way.\n\nUse linq expressions(using System.Linq;) for your sum with the following\n\nstatic void SumList() => Console.WriteLine(Numbers.Sum().ToString(\"N0\"));\nNote: We generated a method called SumList that will return the sum of the list since in your code you use a static method places it like this.\nYour code would end as follows\nstatic List<double> Numbers = new List<double> { -1.103f, 2.2f, 3.1f, 10.0f, 15.0f, 23.1f, 22, 12f };\nstatic void Suma() => Console.WriteLine(Numbers.Sum().ToString(\"N0\"));\nPS: I hope you will excuse me for my English, and help you by example.\n" ]
[ 2, 0 ]
[]
[]
[ "c#", "recursion" ]
stackoverflow_0074658394_c#_recursion.txt
Q: Laravel 9: Malformed characters for emails in laravel.log If I adjust in my Laravel 9 project (PHP 8), the .env - file to MAIL_MAILER=log. The mail is saved in the laravel.log file, but the problem is, that some characters are malformed. For example the mail-<head> looks like this: <head> <meta charset=3D"utf-8"> <meta name=3D="viewport" content=3D"width=3Ddevice-width, initial-scale=3D1.0"> <meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3DUTF-8"> <meta name=3D"color-scheme" content=3D"light"> <meta name=3D"supported-color-schemes" content=3D"light"> </head> The malformed characters also occur for äöü and other UTF-8 characters (e.g. ä resolves to =C3=A4). If I'm using MAIL_MAILER=smtp the mail isn't malformed at all. This makes local email debugging hard. Anyway, this probably causes another problem on production. I'm using the package (https://github.com/shvetsgroup/laravel-email-database-log) to save all sent mails of Laravel in the database. Here the malformed characters are also saved in the database. I'm sending mails like this: \Illuminate\Support\Facades\Mail::to('[email protected]')->queue(new \App\Mail\ContactConfirmation( $name, $message )); class ContactConfirmation extends Mailable { use Queueable, SerializesModels; public function __construct( public string $name, public string $text ) { // } public function build() { return $this->markdown('mails.contact_confirmation') ->subject('Your message') ->with([ 'name' => $this->name, 'text' => $this->text ]); } } This problem looks similar to https://github.com/laravel/framework/issues/32954, but the mails sent by SMTP have no problem and Laravel 9 uses the Symfony Mailer. There is also a similar question (Why does Laravel replace a tab by a "=09" string when sending mails?) from 2014, but here is SwiftMailer used. Is there some way to fix the malformed characters on production and local? And if not, are there alternatives to save the mail without the package and malformed characters into the database?
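For context, =3D and =C3=A4 are quoted-printable encoding: the message is serialized to MIME before the log (or a database logger) stores it, so the template itself is not corrupted. PHP can reverse it with the built-in quoted_printable_decode(); the snippet below is a minimal illustration with an invented $rawBody, not code from this project, and the logging-channel workaround recorded further down relies on the same function.

<?php
// $rawBody stands in for a message body read back from laravel.log or from the
// email-log table; the value here is just an example.
$rawBody = '<meta charset=3D"utf-8"> Viele Gr=C3=BC=C3=9Fe =C3=A4=C3=B6=C3=BC';

// Built-in PHP function: turns =3D back into "=" and =C3=A4 back into the UTF-8
// bytes for "ä".
$readable = quoted_printable_decode($rawBody);

echo $readable; // <meta charset="utf-8"> Viele Grüße äöü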
Laravel 9: Malformed characters for emails in laravel.log
If I adjust in my Laravel 9 project (PHP 8), the .env - file to MAIL_MAILER=log. The mail is saved in the laravel.log file, but the problem is, that some characters are malformed. For example the mail-<head> looks like this: <head> <meta charset=3D"utf-8"> <meta name=3D="viewport" content=3D"width=3Ddevice-width, initial-scale=3D1.0"> <meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3DUTF-8"> <meta name=3D"color-scheme" content=3D"light"> <meta name=3D"supported-color-schemes" content=3D"light"> </head> The malformed characters also occur for äöü and other UTF-8 characters (e.g. ä resolves to =C3=A4). If I'm using MAIL_MAILER=smtp the mail isn't malformed at all. This makes local email debugging hard. Anyway, this probably causes another problem on production. I'm using the package (https://github.com/shvetsgroup/laravel-email-database-log) to save all sent mails of Laravel in the database. Here the malformed characters are also saved in the database. I'm sending mails like this: \Illuminate\Support\Facades\Mail::to('[email protected]')->queue(new \App\Mail\ContactConfirmation( $name, $message )); class ContactConfirmation extends Mailable { use Queueable, SerializesModels; public function __construct( public string $name, public string $text ) { // } public function build() { return $this->markdown('mails.contact_confirmation') ->subject('Your message') ->with([ 'name' => $this->name, 'text' => $this->text ]); } } This problem looks similar to https://github.com/laravel/framework/issues/32954, but the mails sent by SMTP have no problem and Laravel 9 uses the Symfony Mailer. There is also a similar question (Why does Laravel replace a tab by a "=09" string when sending mails?) from 2014, but here is SwiftMailer used. Is there some way to fix the malformed characters on production and local? And if not, are there alternatives to save the mail without the package and malformed characters into the database?
[]
[]
[ "A work around to this it could be to create a new channel for logging. On laravel documentation https://laravel.com/docs/9.x/logging#creating-custom-channels-via-factories it is explained it how to do it.\nThe idea is that when a log for mailer is used for sending emails, if the message is quoted-printable, then it will be decoded it back to '8bit' string.\nTo achieve this a new Logger is created that extends from Illuminate\\Log\\Logger and override the debug method with the code that convert to '8bit'string.\nFirst, create a class that extends from Illuminate\\Log\\Logger. For example App\\Mail\\Logging\\CustomLogger.php This contains the override to debug method.\nnamespace App\\Mail\\Logging;\nuse Illuminate\\Log\\Logger;\nclass CustomLogger extends Logger\n{\n public function debug($message, array $context = []): void\n {\n $message = str_contains($message, \"=3D\") ? quoted_printable_decode($message) : $message;\n $this->writeLog(__FUNCTION__, $message, $context);\n }\n}\n\nThen create the logger class that will resolve the customLogger previously. Example app\\Logging\\CreateCustomMailLogger.php\nnamespace App\\Logging;\nuse App\\Mail\\Logging\\CustomLogger;\nclass CreateCustomMailLogger\n{\n public function __invoke(array $config)\n {\n /** @var \\Illuminate\\Log\\Logger $log **/\n $log = resolve('log');\n return new CustomLogger($log->getLogger(), $log->getEventDispatcher());\n }\n}\n\nIn config\\logging.php file add a new channel using the CreateCustomMailLogger created before.\n'maillog' => [\n 'driver' => 'custom',\n 'via' => \\App\\Logging\\CreateCustomMailLogger::class\n ],\n\nIn the .env file set MAIL_LOG_CHANNEL to maillog, the channel in logging.php created before\nMAIL_MAILER=log\nMAIL_LOG_CHANNEL=maillog \n\nThis way when you set mailer to log, then this channel will be used and it will be logged as '8bit' string and not quoted-printable.\nFor futher details, check this reply from the same issue.\nhttps://github.com/laravel/framework/issues/32954#issuecomment-1335488478\nHope this help!\n" ]
[ -2 ]
[ "laravel", "laravel_9", "php", "php_8" ]
stackoverflow_0073763164_laravel_laravel_9_php_php_8.txt
Q: i have a small problem with sorting a list of objects by date so i have a database that has a payment entity within it, the payment entity has few parameters, the most important one is the date parameter, the problem that am facing is am trying to sort the list of payments in the database into a list of lists, each mini list contains the payments made on the same day, here is an image so you can better understand what am trying to explain. i don't know how do i go about this whatsoever so am just looking for some guidance on how i should approach this. i don't think that any code is needed here but here is the code to the payment class, and am more than happy to provide more code if it's needed : public class Payment { @PrimaryKey(autoGenerate = true) @ColumnInfo(name = "id_payment") int paymentID; @Embedded SubjectTeacherCrossRef subjectTeacherCrossRef; @ColumnInfo(name = "payment_date") String paymentDate; @ColumnInfo(name = "payment_total") int paymentTotal; public void setPaymentID(int paymentID) { this.paymentID = paymentID; } public Payment(SubjectTeacherCrossRef subjectTeacherCrossRef, String paymentDate, int paymentTotal) { this.subjectTeacherCrossRef = subjectTeacherCrossRef; this.paymentDate = paymentDate; this.paymentTotal = paymentTotal; } public int getPaymentID() { return paymentID; } public SubjectTeacherCrossRef getSubjectTeacherCrossRef() { return subjectTeacherCrossRef; } public String getPaymentDate() { return paymentDate; } public int getPaymentTotal() { return paymentTotal; } } A: You should iterate on all Payment fetched from DB. Get the paymentDate from each Payment and store it in a Map<Date, List<Payment>>. A: You must first check in the map if a value (List<Payment>) already exists for given key (the date). If not, you must first create the List<Payment> instance and put it in the map put(the_date, the_list). Se method addPayment(Payment payment) below. 
Remark: you should use a java.util.Date object to represent a date rather than a String package stackoverflow; import java.util.ArrayList; import java.util.Arrays; import java.util.HashMap; import java.util.List; import java.util.Map; public class PaymentMap { private Map<String, List<Payment>> map; public PaymentMap() { super(); map = new HashMap<String, List<Payment>>(); System.out.println("map created"); } public void addPayment(Payment payment) { List<Payment> payments; String paymentDate = payment.getPaymentDate(); if ((payments = map.get(paymentDate)) == null) { payments = new ArrayList<Payment>(); map.put(paymentDate, payments); } payments.add(payment); System.out.println("Payment " + payment + " added"); } public void printMapContent() { System.out.println("Map content : " + Arrays.toString(map.entrySet().toArray())); } class Payment { private int paymentID; private SubjectTeacherCrossRef subjectTeacherCrossRef; private String paymentDate; private int paymentTotal; public Payment(SubjectTeacherCrossRef subjectTeacherCrossRef, String paymentDate, int paymentTotal) { this.subjectTeacherCrossRef = subjectTeacherCrossRef; this.paymentDate = paymentDate; this.paymentTotal = paymentTotal; } public int getPaymentID() { return paymentID; } public SubjectTeacherCrossRef getSubjectTeacherCrossRef() { return subjectTeacherCrossRef; } public String getPaymentDate() { return paymentDate; } public int getPaymentTotal() { return paymentTotal; } @Override public String toString() { return new StringBuilder("Payment :: date: "+paymentDate+ " - total : "+paymentTotal).toString(); } } public static void main(String[] args) { PaymentMap map = new PaymentMap(); // Adding 3 payments on date 1 String date1 = "20220128"; map.addPayment(map.new Payment(null, date1, 1234)); map.addPayment(map.new Payment(null, date1, 2345)); map.addPayment(map.new Payment(null, date1, 3456)); // Adding 2 payments on date 2 String date2 = "20221125"; map.addPayment(map.new Payment(null, date2, 4567)); map.addPayment(map.new Payment(null, date2, 5678)); // Adding 2 payments on date 3 String date3 = "20221202"; map.addPayment(map.new Payment(null, date3, 6789)); map.addPayment(map.new Payment(null, date3, 7890)); // Printing map content map.printMapContent(); } }
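Both answers amount to grouping the flat payment list by its date key; on Java 8+ the same result falls out of a single stream collector. The sketch below is illustrative: it assumes the Payment class from the question and an already loaded List<Payment> payments, and on Android java.util.stream needs minSdk 24 or core library desugaring.

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.stream.Collectors;

// One map entry per distinct paymentDate; TreeMap keeps the keys sorted, and with
// an ISO-style "yyyy-MM-dd" date string that order is also chronological.
Map<String, List<Payment>> byDate = payments.stream()
        .collect(Collectors.groupingBy(Payment::getPaymentDate,
                                       TreeMap::new,
                                       Collectors.toList()));

// If a List<List<Payment>> (the "list of mini lists") is what the adapter needs:
List<List<Payment>> grouped = new ArrayList<>(byDate.values());

As the answers also suggest, storing the date as a real date type (or an epoch timestamp in the Room column) instead of a String makes that ordering reliable.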
I have a small problem with sorting a list of objects by date
so i have a database that has a payment entity within it, the payment entity has few parameters, the most important one is the date parameter, the problem that am facing is am trying to sort the list of payments in the database into a list of lists, each mini list contains the payments made on the same day, here is an image so you can better understand what am trying to explain. i don't know how do i go about this whatsoever so am just looking for some guidance on how i should approach this. i don't think that any code is needed here but here is the code to the payment class, and am more than happy to provide more code if it's needed : public class Payment { @PrimaryKey(autoGenerate = true) @ColumnInfo(name = "id_payment") int paymentID; @Embedded SubjectTeacherCrossRef subjectTeacherCrossRef; @ColumnInfo(name = "payment_date") String paymentDate; @ColumnInfo(name = "payment_total") int paymentTotal; public void setPaymentID(int paymentID) { this.paymentID = paymentID; } public Payment(SubjectTeacherCrossRef subjectTeacherCrossRef, String paymentDate, int paymentTotal) { this.subjectTeacherCrossRef = subjectTeacherCrossRef; this.paymentDate = paymentDate; this.paymentTotal = paymentTotal; } public int getPaymentID() { return paymentID; } public SubjectTeacherCrossRef getSubjectTeacherCrossRef() { return subjectTeacherCrossRef; } public String getPaymentDate() { return paymentDate; } public int getPaymentTotal() { return paymentTotal; } }
[ "You should iterate on all Payment fetched from DB.\nGet the paymentDate from each Payment and store it in a Map<Date, List<Payment>>.\n", "You must first check in the map if a value (List<Payment>) already exists for given key (the date). If not, you must first create the List<Payment> instance and put it in the map put(the_date, the_list).\nSe method addPayment(Payment payment) below.\nRemark: you should use a java.util.Date object to represent a date rather than a String\npackage stackoverflow;\n\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\n\npublic class PaymentMap {\n\nprivate Map<String, List<Payment>> map;\n\npublic PaymentMap() {\n super();\n map = new HashMap<String, List<Payment>>();\n System.out.println(\"map created\");\n}\n\npublic void addPayment(Payment payment) {\n List<Payment> payments;\n String paymentDate = payment.getPaymentDate();\n if ((payments = map.get(paymentDate)) == null) {\n payments = new ArrayList<Payment>();\n map.put(paymentDate, payments);\n }\n payments.add(payment);\n System.out.println(\"Payment \" + payment + \" added\");\n}\n\npublic void printMapContent() {\n System.out.println(\"Map content : \" + \n Arrays.toString(map.entrySet().toArray()));\n}\n\nclass Payment {\n private int paymentID;\n private SubjectTeacherCrossRef subjectTeacherCrossRef;\n private String paymentDate;\n private int paymentTotal;\n\n public Payment(SubjectTeacherCrossRef subjectTeacherCrossRef, String paymentDate, int paymentTotal) {\n this.subjectTeacherCrossRef = subjectTeacherCrossRef;\n this.paymentDate = paymentDate;\n this.paymentTotal = paymentTotal;\n }\n\n public int getPaymentID() {\n return paymentID;\n }\n\n public SubjectTeacherCrossRef getSubjectTeacherCrossRef() {\n return subjectTeacherCrossRef;\n }\n\n public String getPaymentDate() {\n return paymentDate;\n }\n\n public int getPaymentTotal() {\n return paymentTotal;\n }\n\n @Override\n public String toString() {\n return new StringBuilder(\"Payment :: date: \"+paymentDate+ \" - total : \"+paymentTotal).toString();\n }\n}\n\npublic static void main(String[] args) {\n PaymentMap map = new PaymentMap();\n\n // Adding 3 payments on date 1\n String date1 = \"20220128\";\n map.addPayment(map.new Payment(null, date1, 1234));\n map.addPayment(map.new Payment(null, date1, 2345));\n map.addPayment(map.new Payment(null, date1, 3456));\n\n // Adding 2 payments on date 2\n String date2 = \"20221125\";\n map.addPayment(map.new Payment(null, date2, 4567));\n map.addPayment(map.new Payment(null, date2, 5678));\n\n // Adding 2 payments on date 3\n String date3 = \"20221202\";\n map.addPayment(map.new Payment(null, date3, 6789));\n map.addPayment(map.new Payment(null, date3, 7890));\n\n // Printing map content\n map.printMapContent();\n}\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "android", "java" ]
stackoverflow_0074634277_android_java.txt
Q: docusignapi - Failed to instantiate [com.docusign.esign.client.ApiClient] I have a Spring boot application and wanted to add docusign integration. Starting from OauthCode/OauthToken to get the authcode and auth token. I added the dependencies in gradle but docusign ApiClient instantiation is failing. build.gradle: plugins { id 'org.springframework.boot' version '2.7.3' id 'io.spring.dependency-management' version '1.0.13.RELEASE' id 'java' } dependencies { implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'org.springframework.boot:spring-boot-starter-security' implementation 'org.springframework.boot:spring-boot-starter-web' implementation 'org.springframework.cloud:spring-cloud-starter-openfeign' implementation 'com.docusign:docusign-esign-java:3.19.0' implementation 'jakarta.ws.rs:jakarta.ws.rs-api:3.1.0'*** implementation 'org.springdoc:springdoc-openapi-ui:1.6.11' implementation 'io.awspring.cloud:spring-cloud-starter-aws-secrets-manager- config:2.4.2' compileOnly 'org.projectlombok:lombok' developmentOnly 'org.springframework.boot:spring-boot-devtools' annotationProcessor 'org.projectlombok:lombok' testImplementation 'org.springframework.boot:spring-boot-starter-test' testImplementation 'org.springframework.security:spring-security-test' } @Generated @Configuration public class BeansConfig { @Value("${ds.api.base.path1}") private String basePath1; @Value("${ds.api.auth.base.path2}") private String basePath2; @Bean ApiClient getApiClient() { ApiClient apiClient = new ApiClient(basePath1); apiClient.setOAuthBasePath(basePath2); return apiClient; } } Exception trace: Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.docusign.esign.client.ApiClient]: Factory method 'apiClient' threw exception; nested exception is java.lang.NoClassDefFoundError: javax/ws/rs/ext/ContextResolver at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.22.jar:5.3.22] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) Caused by: java.lang.NoClassDefFoundError: javax/ws/rs/ext/ContextResolver A: This is a bug that was introduced in the latest version of the Java SDK. Version 3.18.0 doesn't have this bug, so for now the workaround is to use the oldest version. I'll update this answer once we have released a new version that fixes this issue. A: it's 12/2/2022 and I can confirm 3.21.0 still has this issue. Downgrading to 3.18 works. it would be REALLY NICE if the documentation published by docusign was updated to reflect this. I lost an hour chasing my tail on this. 
with 3.21.0 and boot 2.7.4 / java 11 the ApiClient instantiation fails with ava.lang.ClassNotFoundException: javax.ws.rs.ext.ContextResolver at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581) ~[na:na] at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) ~[na:na] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522) ~[na:na] at java.base/java.lang.ClassLoader.defineClass1(Native Method) ~[na:na] at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017) ~[na:na] at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174) ~[na:na] at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:800) ~[na:na] at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:698) ~[na:na] at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:621) ~[na:na] at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:579) ~[na:na] at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) ~[na:na] at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522) ~[na:na] at com.docusign.esign.client.ApiClient.<init>(ApiClient.java:91) ~[docusign-esign-java-3.21.0.jar:na] at com.docusign.esign.client.ApiClient.<init>(ApiClient.java:123) ~[docusign-esign-java-3.21.0.jar:na] at com.myapp.Application.auth(Application.java:40) ~[classes/:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na] at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.24.jar:5.3.24] at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.24.jar:5.3.24] at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1071) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:964) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.24.jar:5.3.24] at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.3.24.jar:5.3.24]
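Given the reports above (3.19.0 and 3.21.0 failing, 3.18.0 fine), the least invasive fix is pinning the dependency back until a patched SDK ships. The build.gradle fragment below sketches the two options; the javax.ws.rs line is an untested alternative inferred from the missing javax/ws/rs/ext/ContextResolver class, not something confirmed by the answers.

dependencies {
    // Workaround: last SDK version reported to instantiate ApiClient without the
    // NoClassDefFoundError
    implementation 'com.docusign:docusign-esign-java:3.18.0'

    // Alternative (untested here): keep the newer SDK but supply the javax.ws.rs
    // API it still references at runtime
    // implementation 'javax.ws.rs:javax.ws.rs-api:2.1.1'
}

The jakarta.ws.rs-api dependency already in the build does not satisfy the reference, because those classes live in the jakarta.ws.rs package rather than javax.ws.rs.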
docusignapi - Failed to instantiate [com.docusign.esign.client.ApiClient]
I have a Spring boot application and wanted to add docusign integration. Starting from OauthCode/OauthToken to get the authcode and auth token. I added the dependencies in gradle but docusign ApiClient instantiation is failing. build.gradle: plugins { id 'org.springframework.boot' version '2.7.3' id 'io.spring.dependency-management' version '1.0.13.RELEASE' id 'java' } dependencies { implementation 'org.springframework.boot:spring-boot-starter-actuator' implementation 'org.springframework.boot:spring-boot-starter-security' implementation 'org.springframework.boot:spring-boot-starter-web' implementation 'org.springframework.cloud:spring-cloud-starter-openfeign' implementation 'com.docusign:docusign-esign-java:3.19.0' implementation 'jakarta.ws.rs:jakarta.ws.rs-api:3.1.0'*** implementation 'org.springdoc:springdoc-openapi-ui:1.6.11' implementation 'io.awspring.cloud:spring-cloud-starter-aws-secrets-manager- config:2.4.2' compileOnly 'org.projectlombok:lombok' developmentOnly 'org.springframework.boot:spring-boot-devtools' annotationProcessor 'org.projectlombok:lombok' testImplementation 'org.springframework.boot:spring-boot-starter-test' testImplementation 'org.springframework.security:spring-security-test' } @Generated @Configuration public class BeansConfig { @Value("${ds.api.base.path1}") private String basePath1; @Value("${ds.api.auth.base.path2}") private String basePath2; @Bean ApiClient getApiClient() { ApiClient apiClient = new ApiClient(basePath1); apiClient.setOAuthBasePath(basePath2); return apiClient; } } Exception trace: Caused by: org.springframework.beans.BeanInstantiationException: Failed to instantiate [com.docusign.esign.client.ApiClient]: Factory method 'apiClient' threw exception; nested exception is java.lang.NoClassDefFoundError: javax/ws/rs/ext/ContextResolver at org.springframework.beans.factory.support.SimpleInstantiationStrategy.instantiate(SimpleInstantiationStrategy.java:185) ~[spring-beans-5.3.22.jar:5.3.22] at org.springframework.beans.factory.support.ConstructorResolver.instantiate(ConstructorResolver.java:653) Caused by: java.lang.NoClassDefFoundError: javax/ws/rs/ext/ContextResolver
[ "This is a bug that was introduced in the latest version of the Java SDK. Version 3.18.0 doesn't have this bug, so for now the workaround is to use the oldest version.\nI'll update this answer once we have released a new version that fixes this issue.\n", "it's 12/2/2022 and I can confirm 3.21.0 still has this issue. Downgrading to 3.18 works.\nit would be REALLY NICE if the documentation published by docusign was updated to reflect this. I lost an hour chasing my tail on this.\nwith 3.21.0 and boot 2.7.4 / java 11 the ApiClient instantiation fails with\nava.lang.ClassNotFoundException: javax.ws.rs.ext.ContextResolver\n at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581) ~[na:na]\n at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) ~[na:na]\n at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522) ~[na:na]\n at java.base/java.lang.ClassLoader.defineClass1(Native Method) ~[na:na]\n at java.base/java.lang.ClassLoader.defineClass(ClassLoader.java:1017) ~[na:na]\n at java.base/java.security.SecureClassLoader.defineClass(SecureClassLoader.java:174) ~[na:na]\n at java.base/jdk.internal.loader.BuiltinClassLoader.defineClass(BuiltinClassLoader.java:800) ~[na:na]\n at java.base/jdk.internal.loader.BuiltinClassLoader.findClassOnClassPathOrNull(BuiltinClassLoader.java:698) ~[na:na]\n at java.base/jdk.internal.loader.BuiltinClassLoader.loadClassOrNull(BuiltinClassLoader.java:621) ~[na:na]\n at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:579) ~[na:na]\n at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178) ~[na:na]\n at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522) ~[na:na]\n at com.docusign.esign.client.ApiClient.<init>(ApiClient.java:91) ~[docusign-esign-java-3.21.0.jar:na]\n at com.docusign.esign.client.ApiClient.<init>(ApiClient.java:123) ~[docusign-esign-java-3.21.0.jar:na]\n at com.myapp.Application.auth(Application.java:40) ~[classes/:na]\n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na]\n at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:na]\n at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na]\n at java.base/java.lang.reflect.Method.invoke(Method.java:566) ~[na:na]\n at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:205) ~[spring-web-5.3.24.jar:5.3.24]\n at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:150) ~[spring-web-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:117) ~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:895) ~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:808) ~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:1071) 
~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:964) ~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:1006) ~[spring-webmvc-5.3.24.jar:5.3.24]\n at org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:898) ~[spring-webmvc-5.3.24.jar:5.3.24]\n\n" ]
[ 0, 0 ]
[]
[]
[ "docusignapi", "gradle", "java", "spring_boot" ]
stackoverflow_0073974532_docusignapi_gradle_java_spring_boot.txt
Q: Iterating through IEnumerable to display values I'm trying to iterate through an IEnumerable to display the values in index 1 in textboxes if Count != 0 This is the code for the table on the right: public PartialViewResult OnGetDisplayOwnerInfoTable(int value) => Partial( "_DisplayDMVPartial", _context .ExemptionApplicationDmvinformations .Where(x => x.ExemptionApplicationOwnerId == value) .ToList() ); <tbody> <tr> <td style="border: 1px solid black; font-weight: bold; text-align: center;"> @Html.DisplayNameFor(m => m.DmvDob) </td> <td style="border: 1px solid black; font-weight: bold; text-align: center;"> @Html.DisplayNameFor(m => m.DriverLicense) </td> </tr> @foreach (Models.ExemptionApplicationDmvinformation item in Model) { <tr> <!-- <td style="border: 1px solid black; text-align: center;"> item.DmvDob.Value.ToString("MMddyyyy") </td> --> <td style="border: 1px solid black; text-align: center;"> @item.DmvDob </td> <td style="border: 1px solid black; text-align: center;"> @item.DriverLicense </td> </tr> } </tbody> This is the code for the textbox link on the left public JsonResult OnGetDisplayOwnerInfo(int value) { ExemptionApplicationDmvinformation data = _context .ExemptionApplicationDmvinformations .Where(x => x.ExemptionApplicationOwnerId == value) .FirstOrDefault(); return new JsonResult(new { DateOfBirth = data.DmvDob.Value.ToString("MMddyyyy"), DriversLicenseNumber = data.DriverLicense }); } Model code [Display(Name = "DOB")] public DateTime? DmvDob { get; set; } [Display(Name = "Driver's License #")] public string? DriverLicense { get; set; } A: I figured out the solution myself public JsonResult OnGetDisplay(int value) { IEnumerable<ModelName> model = _context.ContextModelName.Where(x => x.Id == value).ToArray(); foreach (var item in model) { if (item.Column1 == null || item.Column2 == null) { // Ignore null values } else { DateProperty = item.Column1.Value.ToString("MMddyyyy"); StringProperty = item.Column2; } } return new JsonResult(new { DateProperty, StringProperty }); }
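The accepted self-answer above assigns to DateProperty and StringProperty without showing where they are declared. A small sketch of the same handler with explicit locals — the context, model and column names are the answer's own placeholders, not the real entities from the question:

public JsonResult OnGetDisplay(int value)
{
    var rows = _context.ContextModelName.Where(x => x.Id == value).ToArray();

    string dateProperty = null;
    string stringProperty = null;

    foreach (var item in rows)
    {
        // skip rows where either column is null
        if (item.Column1 == null || item.Column2 == null)
            continue;

        dateProperty = item.Column1.Value.ToString("MMddyyyy");
        stringProperty = item.Column2;
    }

    return new JsonResult(new { dateProperty, stringProperty });
}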
Iterating through IEnumerable to display values
I'm trying to iterate through an IEnumerable to display the values in index 1 in textboxes if Count != 0 This is the code for the table on the right: public PartialViewResult OnGetDisplayOwnerInfoTable(int value) => Partial( "_DisplayDMVPartial", _context .ExemptionApplicationDmvinformations .Where(x => x.ExemptionApplicationOwnerId == value) .ToList() ); <tbody> <tr> <td style="border: 1px solid black; font-weight: bold; text-align: center;"> @Html.DisplayNameFor(m => m.DmvDob) </td> <td style="border: 1px solid black; font-weight: bold; text-align: center;"> @Html.DisplayNameFor(m => m.DriverLicense) </td> </tr> @foreach (Models.ExemptionApplicationDmvinformation item in Model) { <tr> <!-- <td style="border: 1px solid black; text-align: center;"> item.DmvDob.Value.ToString("MMddyyyy") </td> --> <td style="border: 1px solid black; text-align: center;"> @item.DmvDob </td> <td style="border: 1px solid black; text-align: center;"> @item.DriverLicense </td> </tr> } </tbody> This is the code for the textbox link on the left public JsonResult OnGetDisplayOwnerInfo(int value) { ExemptionApplicationDmvinformation data = _context .ExemptionApplicationDmvinformations .Where(x => x.ExemptionApplicationOwnerId == value) .FirstOrDefault(); return new JsonResult(new { DateOfBirth = data.DmvDob.Value.ToString("MMddyyyy"), DriversLicenseNumber = data.DriverLicense }); } Model code [Display(Name = "DOB")] public DateTime? DmvDob { get; set; } [Display(Name = "Driver's License #")] public string? DriverLicense { get; set; }
[ "I figured out the solution myself\npublic JsonResult OnGetDisplay(int value)\n{\n IEnumerable<ModelName> model = _context.ContextModelName.Where(x => x.Id == value).ToArray();\n\n foreach (var item in model)\n {\n if (item.Column1 == null || item.Column2 == null)\n {\n // Ignore null values\n }\n else\n {\n DateProperty = item.Column1.Value.ToString(\"MMddyyyy\");\n StringProperty = item.Column2;\n }\n }\n\n return new JsonResult(new { DateProperty, StringProperty });\n}\n\n" ]
[ 0 ]
[]
[]
[ "asp.net_core", "c#", "entity_framework_core", "linq", "razor_pages" ]
stackoverflow_0074642759_asp.net_core_c#_entity_framework_core_linq_razor_pages.txt
Q: implement search function (filter) in angular [Part 2] I tried to import 'CommonModule' but it gives me the same error message. If, on the other hand, I try to write of between car and cars, it underlines the word filter as an error and displays me as an error: no pipe found with name 'filter'. I need to implement a simple search function <div *ngFor="let car in cars | filter : searchText"> I expected typing this statement the search method worked properly A: Please find the attached code snippet with simple search function with given array : app.component.html : <div class="container text-center"> <h1>{{title}}</h1> </div> <div class="container"> <div class="row"> <div class="search-hero"> <input class="form-control" type="text" name="search" [(ngModel)]="searchText" autocomplete="off" placeholder="&#61442; Start searching for a hero by id or name or country"> </div> <table class="table table-striped"> <thead> <tr> <th>Id</th> <th>Hero Name</th> <th>Country</th> </tr> </thead> <tbody> <tr *ngFor="let hero of heroes | filter:searchText"> <td>{{hero.id}}</td> <td>{{hero.name}}</td> <td>{{hero.country}}</td> </tr> </tbody> </table> </div> </div> app.component.ts : import { Component } from '@angular/core'; @Component({ selector: 'my-app', templateUrl: './app.component.html', styleUrls: [ './app.component.css' ] }) export class AppComponent { title = 'Angular Search Using ng2-search-filter'; searchText; heroes = [ { id: 11, name: 'Mr. Nice', country: 'India' }, { id: 12, name: 'Narco' , country: 'USA'}, { id: 13, name: 'Bombasto' , country: 'UK'}, { id: 14, name: 'Celeritas' , country: 'Canada' }, { id: 15, name: 'Magneta' , country: 'Russia'}, { id: 16, name: 'RubberMan' , country: 'China'}, { id: 17, name: 'Dynama' , country: 'Germany'}, { id: 18, name: 'Dr IQ' , country: 'Hong Kong'}, { id: 19, name: 'Magma' , country: 'South Africa'}, { id: 20, name: 'Tornado' , country: 'Sri Lanka'} ]; } Please find the working stackblitz example using ng2-search-filter here
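The "no pipe found with name 'filter'" error in the question usually means the pipe's module was never imported; the answer's template relies on the third-party ng2-search-filter package for that pipe. A sketch of the module wiring that usage assumes (package installed via npm; exact module name per the package's docs), plus the of keyword that *ngFor requires:

// app.module.ts
import { NgModule } from '@angular/core';
import { BrowserModule } from '@angular/platform-browser';
import { FormsModule } from '@angular/forms';             // needed for [(ngModel)]
import { Ng2SearchPipeModule } from 'ng2-search-filter';  // provides the "filter" pipe
import { AppComponent } from './app.component';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, FormsModule, Ng2SearchPipeModule],
  bootstrap: [AppComponent]
})
export class AppModule {}

<!-- template: *ngFor needs "of", not "in" -->
<div *ngFor="let car of cars | filter : searchText">{{ car }}</div>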
implement search function (filter) in angular [Part 2]
I tried to import 'CommonModule' but it gives me the same error message. If, on the other hand, I try to write of between car and cars, it underlines the word filter and reports the error: no pipe found with name 'filter'. I need to implement a simple search function: <div *ngFor="let car in cars | filter : searchText"> I expected that typing this statement would make the search work properly.
[ "Please find the attached code snippet with simple search function with given array :\napp.component.html :\n<div class=\"container text-center\">\n <h1>{{title}}</h1>\n</div>\n<div class=\"container\">\n <div class=\"row\">\n <div class=\"search-hero\">\n <input class=\"form-control\" type=\"text\" name=\"search\" [(ngModel)]=\"searchText\" autocomplete=\"off\" placeholder=\"&#61442; Start searching for a hero by id or name or country\">\n </div>\n <table class=\"table table-striped\">\n <thead>\n <tr>\n <th>Id</th>\n <th>Hero Name</th>\n <th>Country</th>\n </tr>\n </thead>\n <tbody>\n <tr *ngFor=\"let hero of heroes | filter:searchText\">\n <td>{{hero.id}}</td>\n <td>{{hero.name}}</td>\n <td>{{hero.country}}</td>\n </tr>\n </tbody>\n </table>\n </div>\n</div>\n\napp.component.ts :\nimport { Component } from '@angular/core';\n\n@Component({\n selector: 'my-app',\n templateUrl: './app.component.html',\n styleUrls: [ './app.component.css' ]\n})\nexport class AppComponent {\n title = 'Angular Search Using ng2-search-filter';\n searchText;\n heroes = [\n { id: 11, name: 'Mr. Nice', country: 'India' },\n { id: 12, name: 'Narco' , country: 'USA'},\n { id: 13, name: 'Bombasto' , country: 'UK'},\n { id: 14, name: 'Celeritas' , country: 'Canada' },\n { id: 15, name: 'Magneta' , country: 'Russia'},\n { id: 16, name: 'RubberMan' , country: 'China'},\n { id: 17, name: 'Dynama' , country: 'Germany'},\n { id: 18, name: 'Dr IQ' , country: 'Hong Kong'},\n { id: 19, name: 'Magma' , country: 'South Africa'},\n { id: 20, name: 'Tornado' , country: 'Sri Lanka'}\n ];\n}\n\nPlease find the working stackblitz example using ng2-search-filter here\n" ]
[ 1 ]
[]
[]
[ "angular", "css", "html", "typescript" ]
stackoverflow_0074655710_angular_css_html_typescript.txt
Q: What is the default style of the blue focus outline in Chrome? I have a webapp that uses contenteditable div's. I like how they appear in Chrome: when I focus, Chrome displays a nice blue glow around the div. However in Firefox I get an ugly dashed outline. What I observed so far is that Chrome stops displaying its default blue frame once I change the outline of div:focus. I'd like to make my app consistently look nice, so my question is how can I replicate Chrome's default style for div[contenteditable="true"]:focus? A: To answer the question, Webkit browsers use outline: 5px auto -webkit-focus-ring-color;. On Macs -webkit-focus-ring-color is blue rgb(94, 158, 214) (or #5E9ED6), but on Windows and Linux it’s gold rgb(229, 151, 0) (or #E59700) (ref). While I understand your desire for consistency, users generally only use one browser, and are used to their browser’s default styles. Note that unless you plan to change every instance of :focus you’ll end up with inconsistency for e.g. keyboard users. Pros and cons eh! If you define outline styles and want to ‘revert’ back to the default User Agent styles on :focus, this will help .myClass:focus { outline: 1px dotted #212121; outline: 5px auto -webkit-focus-ring-color; } The -webkit-prefix color means FF, IE and Edge will ignore the second rule and use the first. Chrome, Safari and Opera will use the second rule. HTH! A: This fiddle gives a good approximation, you may want to tweak to get closer to what you're specifically after though. HTML <div contenteditable='true'>Edit Me</div> CSS div[contenteditable=true] { width:200px; border:2px solid #dadada; border-radius:7px; font-size:20px; padding:5px; margin:10px; } div[contenteditable=true]:focus { outline:none; border-color:#9ecaed; box-shadow:0 0 10px #9ecaed; } A: I think I've found the perfect one, At least for me: // Beggin button { outline: 5px auto rgba(0, 150, 255, 1); -webkit-outline: 5px auto rgba(0, 150, 255, 1); -moz-outline: 5px auto rgba(0, 150, 255, 1); -ms-outline: 5px auto rgba(0, 150, 255, 1); -o-outline: 5px auto rgba(0, 150, 255, 1); /* Use a border to apply the outline */ border: 1px solid rgba(0, 0, 0, 0); /* Unimortant styling: */ background: linear-gradient(to bottom, #fff 30%, #fcfcfc 40%, #f8f8f8 50%, #f0f0f0 100%); } <button type="button"">Outline</button> A: .myClass:focus { outline-color: Highlight; outline-color: -webkit-focus-ring-color; outline-style: auto; outline-width: 1px; } A: For Tailwind users, it's different, testing on ^3.1.8. Tailwind override the default focus color.
What is the default style of the blue focus outline in Chrome?
I have a webapp that uses contenteditable div's. I like how they appear in Chrome: when I focus, Chrome displays a nice blue glow around the div. However in Firefox I get an ugly dashed outline. What I observed so far is that Chrome stops displaying its default blue frame once I change the outline of div:focus. I'd like to make my app consistently look nice, so my question is how can I replicate Chrome's default style for div[contenteditable="true"]:focus?
[ "To answer the question, Webkit browsers use outline: 5px auto -webkit-focus-ring-color;. On Macs -webkit-focus-ring-color is blue rgb(94, 158, 214) (or #5E9ED6), but on Windows and Linux it’s gold rgb(229, 151, 0) (or #E59700) (ref).\nWhile I understand your desire for consistency, users generally only use one browser, and are used to their browser’s default styles. Note that unless you plan to change every instance of :focus you’ll end up with inconsistency for e.g. keyboard users. Pros and cons eh!\nIf you define outline styles and want to ‘revert’ back to the default User Agent styles on :focus, this will help\n\n\n.myClass:focus {\r\n outline: 1px dotted #212121;\r\n outline: 5px auto -webkit-focus-ring-color;\r\n}\n\n\n\nThe -webkit-prefix color means FF, IE and Edge will ignore the second rule and use the first. Chrome, Safari and Opera will use the second rule.\nHTH!\n", "This fiddle gives a good approximation, you may want to tweak to get closer to what you're specifically after though.\nHTML\n<div contenteditable='true'>Edit Me</div>\n\nCSS\ndiv[contenteditable=true] {\n width:200px;\n border:2px solid #dadada;\n border-radius:7px;\n font-size:20px;\n padding:5px;\n margin:10px; \n}\n\ndiv[contenteditable=true]:focus { \n outline:none;\n border-color:#9ecaed;\n box-shadow:0 0 10px #9ecaed;\n}\n\n", "I think I've found the perfect one, At least for me:\n\n\n// Beggin\nbutton {\r\n outline: 5px auto rgba(0, 150, 255, 1);\r\n -webkit-outline: 5px auto rgba(0, 150, 255, 1);\r\n -moz-outline: 5px auto rgba(0, 150, 255, 1);\r\n -ms-outline: 5px auto rgba(0, 150, 255, 1);\r\n -o-outline: 5px auto rgba(0, 150, 255, 1);\r\n /* Use a border to apply the outline */\r\n border: 1px solid rgba(0, 0, 0, 0);\r\n \r\n /* Unimortant styling: */\r\n background: linear-gradient(to bottom, #fff 30%, #fcfcfc 40%, #f8f8f8 50%, #f0f0f0 100%);\r\n}\n<button type=\"button\"\">Outline</button>\n\n\n\n", ".myClass:focus {\n outline-color: Highlight;\n outline-color: -webkit-focus-ring-color;\n outline-style: auto;\n outline-width: 1px;\n}\n\n", "For Tailwind users, it's different, testing on ^3.1.8. Tailwind override the default focus color.\n\n" ]
[ 76, 6, 5, 0, 0 ]
[]
[]
[ "contenteditable", "css", "google_chrome" ]
stackoverflow_0020609485_contenteditable_css_google_chrome.txt
Q: OpenCV HuMoments produces wrong result This piece of code: int main() { Mat input_img = imread("abcdef.png", CV_8UC1); // Image of size 1000*800 Moments moment = moments(input_img, false); double humm[7]; HuMoments(moment, humm); for (int i = 0; i<7; i++) cout << humm[i] << endl; } prints out: 0.000789284 1.24093e-07 2.37587e-15 1.48852e-15 -3.19408e-31 4.09704e-20 -2.78098e-30 which is wrong. Hu's invariant moments are not that small. I can only remember reading somewhere, the first moment is usually >100, the second >60 ... Did I miss something? A: Those values are usual values. For example, the sun in a thermal image (160X120) gives the following Hu moments: [ 6.69480755e-04] [ 9.56770429e-08] [ 1.14172836e-11] [ 3.53685429e-13] [-6.08786110e-25] [-1.05187688e-16] [ 3.66772976e-25] They could vary from only ~1% of standard deviation if there is a good correlation between shapes to ~300% for the same type of element being identified (it could be that some values can decrease to near zero sometimes, that happens frequently if the image has low resolution (small area)). To determine good HuMoments to identify an element it is better to use histograms rather than standard deviation, because negative/positive in HuMoments could show symmetries, and with standard deviation method the value could turn positive/negative (this explanation is for the use of various samples of the same type of element to determine its non-variable HuMoments).
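The answer is right that raw Hu values are this tiny; when they need to be printed or compared, a common convention (not part of the answer above) is to log-scale them. A sketch continuing the question's own main(), assuming the humm array it already computes:

#include <cmath>
// log-scaled copies of the seven Hu moments, easier to read and compare;
// the sign is kept so symmetric/asymmetric shapes stay distinguishable,
// and the +1e-30 guards against log10(0)
double humm_log[7];
for (int i = 0; i < 7; i++)
    humm_log[i] = -copysign(1.0, humm[i]) * log10(fabs(humm[i]) + 1e-30);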
OpenCV HuMoments produces wrong result
This piece of code: int main() { Mat input_img = imread("abcdef.png", CV_8UC1); // Image of size 1000*800 Moments moment = moments(input_img, false); double humm[7]; HuMoments(moment, humm); for (int i = 0; i<7; i++) cout << humm[i] << endl; } prints out: 0.000789284 1.24093e-07 2.37587e-15 1.48852e-15 -3.19408e-31 4.09704e-20 -2.78098e-30 which is wrong. Hu's invariant moments are not that small. I can only remember reading somewhere, the first moment is usually >100, the second >60 ... Did I miss something?
[ "Those values are usual values. For example, the sun in a thermal image (160X120) gives the following Hu moments:\n [ 6.69480755e-04]\n [ 9.56770429e-08]\n [ 1.14172836e-11]\n [ 3.53685429e-13]\n [-6.08786110e-25]\n [-1.05187688e-16]\n [ 3.66772976e-25]\n\nThey could vary from only ~1% of standard deviation if there is a good correlation between shapes to ~300% for the same type of element being identified (it could be that some values can decrease to near zero sometimes, that happens frequently if the image has low resolution (small area)).\nTo determine good HuMoments to identify an element it is better to use histograms rather than standard deviation, because negative/positive in HuMoments could show symmetries, and with standard deviation method the value could turn positive/negative (this explanation is for the use of various samples of the same type of element to determine its non-variable HuMoments).\n" ]
[ 0 ]
[]
[]
[ "c++", "opencv" ]
stackoverflow_0016708940_c++_opencv.txt
Q: How do I get Python to send as many concurrent HTTP requests as possible? I'm trying to send HTTPS requests as quickly as possible. I know this would have to be concurrent requests due to my goal being 150 to 500+ requests a second. I've searched everywhere, but get no Python 3.11+ answer or one that doesn't give me errors. I'm trying to avoid AIOHTTP as the rigmarole of setting it up was a pain, which didn't even work. The input should be an array or URLs and the output an array of the html string. A: It's quite unfortunate that you couldn't setup AIOHTTP properly because this is one of the most efficient way to do asynchronous requests in Python. Setup is not that hard: import asyncio import aiohttp from time import perf_counter def urls(n_reqs: int): for _ in range(n_reqs): yield "https://python.org" async def get(session: aiohttp.ClientSession, url: str): async with session.get(url) as response: _ = await response.text() async def main(n_reqs: int): async with aiohttp.ClientSession() as session: await asyncio.gather( *[get(session, url) for url in urls(n_reqs)] ) if __name__ == "__main__": n_reqs = 10_000 start = perf_counter() asyncio.run(main(n_reqs)) end = perf_counter() print(f"{n_reqs / (end - start)} req/s") You basically need to create a single ClientSession which you then reuse to send the get requests. The requests are made concurrently with to asyncio.gather(). You could also use the newer asyncio.TaskGroup: async def main(n_reqs: int): async with aiohttp.ClientSession() as session: async with asyncio.TaskGroup() as group: for url in urls(n_reqs): group.create_task(get(session, url)) This easily achieves 500+ requests per seconds on my 7+ years old bi-core computer. Contrary to what other answers suggested, this solution does not require to spawn thousands of threads, which are expensive. You may improve the speed even more my using a custom connector in order to allow more concurrent connections (default is 100) in a single session: async def main(n_reqs: int): let connector = aiohttp.TCPConnector(limit=0) async with aiohttp.ClientSession(connector=connector) as session: ... A: Hope this helps, this question asked What is the fastest way to send 10000 http requests I observed 15000 requests in 10s, using wireshark to trap on localhost and saved packets to CSV, only counted packets that had GET in them. 
FILE: a.py from treq import get from twisted.internet import reactor def done(response): if response.code == 200: get("http://localhost:3000").addCallback(done) get("http://localhost:3000").addCallback(done) reactor.callLater(10, reactor.stop) reactor.run() Run test like this: pip3 install treq python3 a.py # code from above Setup test website like this, mine was on port 3000 mkdir myapp cd myapp npm init npm install express node app.js FILE: app.js const express = require('express') const app = express() const port = 3000 app.get('/', (req, res) => { res.send('Hello World!') }) app.listen(port, () => { console.log(`Example app listening on port ${port}`) }) OUTPUT grep GET wireshark.csv | head "5","0.000418","::1","::1","HTTP","139","GET / HTTP/1.1 " "13","0.002334","::1","::1","HTTP","139","GET / HTTP/1.1 " "17","0.003236","::1","::1","HTTP","139","GET / HTTP/1.1 " "21","0.004018","::1","::1","HTTP","139","GET / HTTP/1.1 " "25","0.004803","::1","::1","HTTP","139","GET / HTTP/1.1 " grep GET wireshark.csv | tail "62145","9.994184","::1","::1","HTTP","139","GET / HTTP/1.1 " "62149","9.995102","::1","::1","HTTP","139","GET / HTTP/1.1 " "62153","9.995860","::1","::1","HTTP","139","GET / HTTP/1.1 " "62157","9.996616","::1","::1","HTTP","139","GET / HTTP/1.1 " "62161","9.997307","::1","::1","HTTP","139","GET / HTTP/1.1 " A: This works, getting around 250+ requests a second. This solution does work on Windows 10. You may have to pip install for concurrent and requests. import time import requests import concurrent.futures start = int(time.time()) # get time before the requests are sent urls = [] # input URLs/IPs array responses = [] # output content of each request as string in an array # create an list of 5000 sites to test with for y in range(5000):urls.append("https://example.com") def send(url):responses.append(requests.get(url).content) with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor: futures = [] for url in urls:futures.append(executor.submit(send, url)) end = int(time.time()) # get time after stuff finishes print(str(round(len(urls)/(end - start),0))+"/sec") # get average requests per second Output: 286.0/sec Note: If your code requires something extremely time dependent, replace the middle part with this: with concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor: futures = [] for url in urls: futures.append(executor.submit(send, url)) for future in concurrent.futures.as_completed(futures): responses.append(future.result()) This is a modified version of what this site showed in an example. The secret sauce is the max_workers=10000. Otherwise, it would average about 80/sec. Although, when setting it to beyond 1000, there wasn't any boost in speed.
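One detail in the first answer's last snippet: let connector = ... is not valid Python (it reads like JavaScript that slipped in). A corrected version of that custom-connector sketch, under the same assumptions as the answer (aiohttp installed; limit=0 lifts the default cap of 100 concurrent connections per session):

import asyncio
import aiohttp

async def get(session: aiohttp.ClientSession, url: str) -> None:
    async with session.get(url) as response:
        await response.text()

async def main(urls: list[str]) -> None:
    connector = aiohttp.TCPConnector(limit=0)  # 0 means no connection limit
    async with aiohttp.ClientSession(connector=connector) as session:
        await asyncio.gather(*(get(session, url) for url in urls))

if __name__ == "__main__":
    asyncio.run(main(["https://python.org"] * 1_000))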
How do I get Python to send as many concurrent HTTP requests as possible?
I'm trying to send HTTPS requests as quickly as possible. I know this would have to be concurrent requests due to my goal being 150 to 500+ requests a second. I've searched everywhere, but get no Python 3.11+ answer or one that doesn't give me errors. I'm trying to avoid AIOHTTP as the rigmarole of setting it up was a pain, which didn't even work. The input should be an array of URLs and the output an array of HTML strings.
[ "It's quite unfortunate that you couldn't setup AIOHTTP properly because this is one of the most efficient way to do asynchronous requests in Python.\nSetup is not that hard:\nimport asyncio\nimport aiohttp\nfrom time import perf_counter\n\n\ndef urls(n_reqs: int):\n for _ in range(n_reqs):\n yield \"https://python.org\"\n\nasync def get(session: aiohttp.ClientSession, url: str):\n async with session.get(url) as response:\n _ = await response.text()\n \nasync def main(n_reqs: int):\n async with aiohttp.ClientSession() as session:\n await asyncio.gather(\n *[get(session, url) for url in urls(n_reqs)]\n )\n\n\nif __name__ == \"__main__\":\n n_reqs = 10_000\n \n start = perf_counter()\n asyncio.run(main(n_reqs))\n end = perf_counter()\n \n print(f\"{n_reqs / (end - start)} req/s\")\n\nYou basically need to create a single ClientSession which you then reuse to send the get requests. The requests are made concurrently with to asyncio.gather(). You could also use the newer asyncio.TaskGroup:\nasync def main(n_reqs: int):\n async with aiohttp.ClientSession() as session:\n async with asyncio.TaskGroup() as group:\n for url in urls(n_reqs):\n group.create_task(get(session, url))\n\nThis easily achieves 500+ requests per seconds on my 7+ years old bi-core computer. Contrary to what other answers suggested, this solution does not require to spawn thousands of threads, which are expensive.\nYou may improve the speed even more my using a custom connector in order to allow more concurrent connections (default is 100) in a single session:\nasync def main(n_reqs: int):\n let connector = aiohttp.TCPConnector(limit=0)\n async with aiohttp.ClientSession(connector=connector) as session:\n ...\n\n\n", "Hope this helps, this question asked What is the fastest way to send 10000 http requests\nI observed 15000 requests in 10s, using wireshark to trap on localhost and saved packets to CSV, only counted packets that had GET in them.\nFILE: a.py\nfrom treq import get\nfrom twisted.internet import reactor\n\ndef done(response):\n if response.code == 200:\n get(\"http://localhost:3000\").addCallback(done)\n\nget(\"http://localhost:3000\").addCallback(done)\n\nreactor.callLater(10, reactor.stop)\nreactor.run()\n\nRun test like this:\npip3 install treq\npython3 a.py # code from above\n\nSetup test website like this, mine was on port 3000\nmkdir myapp\ncd myapp\nnpm init\nnpm install express\nnode app.js\n\nFILE: app.js\nconst express = require('express')\nconst app = express()\nconst port = 3000\n\napp.get('/', (req, res) => {\n res.send('Hello World!')\n})\n\napp.listen(port, () => {\n console.log(`Example app listening on port ${port}`)\n})\n\nOUTPUT\ngrep GET wireshark.csv | head\n\"5\",\"0.000418\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"13\",\"0.002334\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"17\",\"0.003236\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"21\",\"0.004018\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"25\",\"0.004803\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\ngrep GET wireshark.csv | tail\n\"62145\",\"9.994184\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62149\",\"9.995102\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62153\",\"9.995860\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62157\",\"9.996616\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\"62161\",\"9.997307\",\"::1\",\"::1\",\"HTTP\",\"139\",\"GET / HTTP/1.1 \"\n\n\n", "This works, getting around 250+ requests a 
second.\nThis solution does work on Windows 10. You may have to pip install for concurrent and requests.\nimport time\nimport requests\nimport concurrent.futures\n\nstart = int(time.time()) # get time before the requests are sent\n\nurls = [] # input URLs/IPs array\nresponses = [] # output content of each request as string in an array\n\n# create an list of 5000 sites to test with\nfor y in range(5000):urls.append(\"https://example.com\")\n\ndef send(url):responses.append(requests.get(url).content)\n\nwith concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor:\n futures = []\n for url in urls:futures.append(executor.submit(send, url))\n \nend = int(time.time()) # get time after stuff finishes\nprint(str(round(len(urls)/(end - start),0))+\"/sec\") # get average requests per second\n\nOutput:\n286.0/sec\nNote: If your code requires something extremely time dependent, replace the middle part with this:\nwith concurrent.futures.ThreadPoolExecutor(max_workers=10000) as executor:\n futures = []\n for url in urls:\n futures.append(executor.submit(send, url))\n for future in concurrent.futures.as_completed(futures):\n responses.append(future.result())\n\nThis is a modified version of what this site showed in an example.\nThe secret sauce is the max_workers=10000. Otherwise, it would average about 80/sec. Although, when setting it to beyond 1000, there wasn't any boost in speed.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "concurrency", "http", "https", "python", "python_3.x" ]
stackoverflow_0074567219_concurrency_http_https_python_python_3.x.txt
Q: What ONLY keyword really means in Postgresql CREATE INDEX command From the docs: "Indicates not to recurse creating indexes on partitions, if the table is partitioned. The default is to recurse.". Am I understand correctly that index will not be created on existing partitons? What kind of index will be created then (on what)? A: The objective is to build a partitioned index with as little locking as possible. Normally, you'd use CREATE INDEX CONCURRENTLY to create an index on each partition, then CREATE INDEX on the partitioned table. If the index definitions match, the previously created indexes will become partitions of the partitioned index. See this related question. The potential problem with that is that all partitions will be locked at the same time. Instead, you can do it one partition at a time: create the index ONLY on the partitioned table (the index will be invalid) use ALTER INDEX ... ATTACH PARTITION to attach the indexes on the partitions as partitions of the index once all partitions are attached, the partitioned index will become valid A: When CREATE INDEX is invoked on a partitioned table, the default behavior is to recurse to all partitions to ensure they all have matching indexes. Each partition is first checked to determine whether an equivalent index already exists, and if so, that index will become attached as a partition index to the index being created, which will become its parent index. If no matching index exists, a new index will be created and automatically attached; the name of the new index in each partition will be determined as if no index name had been specified in the command. If the ONLY option is specified, no recursion is done, and the index is marked invalid. (ALTER INDEX ... ATTACH PARTITION marks the index valid, once all partitions acquire matching indexes.) Note, however, that any partition that is created in the future using CREATE TABLE ... PARTITION OF will automatically have a matching index, regardless of whether ONLY is specified. small demo example: create table index_part (a int, b int) partition by range (a, b); create table index_part1 partition of index_part for values from (0,0) to (10, 10); create table index_part2 partition of index_part for values from (10,10) to (20, 20); create index index_part_a_b_idx on only index_part (a, b); now is INVALID: \d+ index_part_a_b_idx --- btree, for table "public.index_part", invalid Partitions: index_part2_a_b_idx Access method: btree create index idxpart1_a_b_idx on index_part1 (a, b); alter index index_part_a_b_idx attach partition idxpart1_a_b_idx; still INVALID. \d+ index_part_a_b_idx --- btree, for table "public.index_part", invalid Partitions: idxpart1_a_b_idx Access method: btree then create index idxpart2_a_b_idx on index_part2(a, b); alter index index_part_a_b_idx attach partition idxpart2_a_b_idx; now ISVALID. select indisvalid from pg_index where indexrelid = 'idxpart2_a_b_idx'::regclass; ---return true.
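Putting the two answers together — building each partition's index with CREATE INDEX CONCURRENTLY to minimise locking, then attaching it to the invalid parent created with ONLY; the table and index names below reuse the demo above rather than introducing anything new:

-- parent index only: created invalid, no recursion into partitions
CREATE INDEX index_part_a_b_idx ON ONLY index_part (a, b);

-- build each partition's index without blocking writes on that partition
CREATE INDEX CONCURRENTLY idxpart1_a_b_idx ON index_part1 (a, b);
ALTER INDEX index_part_a_b_idx ATTACH PARTITION idxpart1_a_b_idx;

CREATE INDEX CONCURRENTLY idxpart2_a_b_idx ON index_part2 (a, b);
ALTER INDEX index_part_a_b_idx ATTACH PARTITION idxpart2_a_b_idx;

-- once every partition has an attached index, the parent becomes valid
SELECT indisvalid FROM pg_index WHERE indexrelid = 'index_part_a_b_idx'::regclass;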
What ONLY keyword really means in Postgresql CREATE INDEX command
From the docs: "Indicates not to recurse creating indexes on partitions, if the table is partitioned. The default is to recurse.". Am I understand correctly that index will not be created on existing partitons? What kind of index will be created then (on what)?
[ "The objective is to build a partitioned index with as little locking as possible.\nNormally, you'd use CREATE INDEX CONCURRENTLY to create an index on each partition, then CREATE INDEX on the partitioned table. If the index definitions match, the previously created indexes will become partitions of the partitioned index. See this related question.\nThe potential problem with that is that all partitions will be locked at the same time. Instead, you can do it one partition at a time:\n\ncreate the index ONLY on the partitioned table (the index will be invalid)\n\nuse ALTER INDEX ... ATTACH PARTITION to attach the indexes on the partitions as partitions of the index\n\nonce all partitions are attached, the partitioned index will become valid\n\n\n", "\nWhen CREATE INDEX is invoked on a partitioned table, the default\nbehavior is to recurse to all partitions to ensure they all have\nmatching indexes. Each partition is first checked to determine whether\nan equivalent index already exists, and if so, that index will become\nattached as a partition index to the index being created, which will\nbecome its parent index. If no matching index exists, a new index will\nbe created and automatically attached; the name of the new index in\neach partition will be determined as if no index name had been\nspecified in the command. If the ONLY option is specified, no\nrecursion is done, and the index is marked invalid. (ALTER INDEX ...\nATTACH PARTITION marks the index valid, once all partitions acquire\nmatching indexes.) Note, however, that any partition that is created\nin the future using CREATE TABLE ... PARTITION OF will automatically\nhave a matching index, regardless of whether ONLY is specified.\n\nsmall demo example:\ncreate table index_part (a int, b int) partition by range (a, b);\ncreate table index_part1 partition of index_part for values from (0,0) to (10, 10);\ncreate table index_part2 partition of index_part for values from (10,10) to (20, 20);\ncreate index index_part_a_b_idx on only index_part (a, b);\n\nnow is INVALID:\n\\d+ index_part_a_b_idx\n---\nbtree, for table \"public.index_part\", invalid\nPartitions: index_part2_a_b_idx\nAccess method: btree\n\ncreate index idxpart1_a_b_idx on index_part1 (a, b);\nalter index index_part_a_b_idx attach partition idxpart1_a_b_idx;\n\nstill INVALID.\n\\d+ index_part_a_b_idx\n---\nbtree, for table \"public.index_part\", invalid\nPartitions: idxpart1_a_b_idx\nAccess method: btree\n\nthen\ncreate index idxpart2_a_b_idx on index_part2(a, b);\nalter index index_part_a_b_idx attach partition idxpart2_a_b_idx;\n\nnow ISVALID.\nselect indisvalid from pg_index where indexrelid = 'idxpart2_a_b_idx'::regclass; ---return true.\n\n" ]
[ 1, 1 ]
[]
[]
[ "postgresql" ]
stackoverflow_0074658555_postgresql.txt
Q: How can I pass search bar results to render on home component? It's my first time making an app that has a search feature with api and I couldn't find a way to render the search results from the search bar in the header component to home component or any other, so when the user search for something, no matter what's the location the results should render on screen instead of the actual current component's data but for now home screen is fine, since if I find a solution for this, the rest would be easy. app.js: <Container> <Route path="/" component={HomeScreen} exact /> <Route path="/login" component={LoginScreen} exact /> <Route path="/register" component={RegisterScreen} exact /> <Route path="/product/:id" component={ProductScreen} exact /> <Route path="/cart/:id?" component={CartScreen} exact /> </Container> header: function Header() { const userLogin = useSelector((state) => state.userLogin); const { userInfo } = userLogin; const [items, setItems] = useState(""); const debounce = useDebounce(items, 500); const dispatch = useDispatch(); const logoutHandler = () => { dispatch(logout()); }; useEffect(() => { const getData = setTimeout(() => { axios.get(`/api/search/?search=${items}`).then((response) => { console.log(response.data[0]); }); }, 2000); return () => clearTimeout(getData); }, [debounce]); return ( <div> <Navbar bg="dark" variant="dark" className="navCustom"> <Container> <LinkContainer to="/"> <Navbar.Brand>eCommerce</Navbar.Brand> </LinkContainer> <Form className="d-flex"> <Form.Control type="search" placeholder="Search" className="me-2" aria-label="Search" onChange={(e) => { setItems(e.target.value); }} /> <Button variant="outline-success">Search</Button> </Form> home: import React, { useEffect } from "react"; //import products from "../../products"; import { Row, Col } from "react-bootstrap"; import Product from "../Product"; import { useDispatch, useSelector } from "react-redux"; import { listProducts } from "../../actions/ProductAction"; import Message from "../Message"; import Loader from "../Loader"; function HomeScreen() { const dispatch = useDispatch(); const productList = useSelector((state) => state.productList); const { error, loading, products } = productList; useEffect(() => { dispatch(listProducts()); }, [dispatch]); return ( <div> <h1 className="text-center">Latest Products</h1> {loading ? ( <Loader /> ) : error ? ( <Message variant="danger">{error}</Message> ) : ( <Row> {products && products.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> {/* <h3>{product.name}</h3> */} <Product product={product} /> </Col> ))} </Row> )} </div> ); } export default HomeScreen; A: To render search results from the header component to the home component, you will need to add state to your header component to store the search results. Then, you can pass the search results down to the home component as a prop, and use them to render the search results instead of the regular products. 
Here's an example of how you could implement this: function Header() { // Add state to store search results const [searchResults, setSearchResults] = useState([]); // Use effect to fetch search results from API useEffect(() => { axios.get(`/api/search/?search=${items}`).then((response) => { setSearchResults(response.data); }); }, [debounce]); return ( <div> {/* Pass search results to home component as a prop */} <Home searchResults={searchResults} /> </div> ); } function HomeScreen({ searchResults }) { // Use searchResults prop to render search results instead of products return ( <div> {searchResults.length > 0 ? ( <Row> {searchResults.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> ) : ( // Fall back to rendering regular products <Row> {products && products.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> <Product product={product} /> </Col> ))} </Row> )} </div> ); } This is just one way to implement this functionality, and you may want to adjust the details to fit your specific use case. For example, you could add a loading state to display a loading spinner while the search results are being fetched from the API.
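The answer's sketch renders Home directly from Header, which does not match the question's router setup. Since the app already uses Redux, one hedged alternative is to put the search results in the store from Header and read them in HomeScreen — the action type and state slice below are placeholders, not code from the question:

// Header: dispatch results instead of keeping them local
useEffect(() => {
  const timer = setTimeout(() => {
    axios.get(`/api/search/?search=${items}`).then((response) => {
      dispatch({ type: "SEARCH_RESULTS_SET", payload: response.data }); // hypothetical action
    });
  }, 500);
  return () => clearTimeout(timer);
}, [debounce]);

// HomeScreen: prefer search results when present, otherwise the normal product list
const searchResults = useSelector((state) => (state.search ? state.search.results : [])); // hypothetical slice
const itemsToShow = searchResults.length > 0 ? searchResults : products;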
How can I pass search bar results to render on home component?
It's my first time making an app that has a search feature with api and I couldn't find a way to render the search results from the search bar in the header component to home component or any other, so when the user search for something, no matter what's the location the results should render on screen instead of the actual current component's data but for now home screen is fine, since if I find a solution for this, the rest would be easy. app.js: <Container> <Route path="/" component={HomeScreen} exact /> <Route path="/login" component={LoginScreen} exact /> <Route path="/register" component={RegisterScreen} exact /> <Route path="/product/:id" component={ProductScreen} exact /> <Route path="/cart/:id?" component={CartScreen} exact /> </Container> header: function Header() { const userLogin = useSelector((state) => state.userLogin); const { userInfo } = userLogin; const [items, setItems] = useState(""); const debounce = useDebounce(items, 500); const dispatch = useDispatch(); const logoutHandler = () => { dispatch(logout()); }; useEffect(() => { const getData = setTimeout(() => { axios.get(`/api/search/?search=${items}`).then((response) => { console.log(response.data[0]); }); }, 2000); return () => clearTimeout(getData); }, [debounce]); return ( <div> <Navbar bg="dark" variant="dark" className="navCustom"> <Container> <LinkContainer to="/"> <Navbar.Brand>eCommerce</Navbar.Brand> </LinkContainer> <Form className="d-flex"> <Form.Control type="search" placeholder="Search" className="me-2" aria-label="Search" onChange={(e) => { setItems(e.target.value); }} /> <Button variant="outline-success">Search</Button> </Form> home: import React, { useEffect } from "react"; //import products from "../../products"; import { Row, Col } from "react-bootstrap"; import Product from "../Product"; import { useDispatch, useSelector } from "react-redux"; import { listProducts } from "../../actions/ProductAction"; import Message from "../Message"; import Loader from "../Loader"; function HomeScreen() { const dispatch = useDispatch(); const productList = useSelector((state) => state.productList); const { error, loading, products } = productList; useEffect(() => { dispatch(listProducts()); }, [dispatch]); return ( <div> <h1 className="text-center">Latest Products</h1> {loading ? ( <Loader /> ) : error ? ( <Message variant="danger">{error}</Message> ) : ( <Row> {products && products.map((product) => ( <Col key={product._id} sm={12} md={6} lg={4} xl={3}> {/* <h3>{product.name}</h3> */} <Product product={product} /> </Col> ))} </Row> )} </div> ); } export default HomeScreen;
[ "To render search results from the header component to the home component, you will need to add state to your header component to store the search results. Then, you can pass the search results down to the home component as a prop, and use them to render the search results instead of the regular products.\nHere's an example of how you could implement this:\nfunction Header() {\n // Add state to store search results\n const [searchResults, setSearchResults] = useState([]);\n\n // Use effect to fetch search results from API\n useEffect(() => {\n axios.get(`/api/search/?search=${items}`).then((response) => {\n setSearchResults(response.data);\n });\n }, [debounce]);\n\n return (\n <div>\n {/* Pass search results to home component as a prop */}\n <Home searchResults={searchResults} />\n </div>\n );\n}\n\nfunction HomeScreen({ searchResults }) {\n // Use searchResults prop to render search results instead of products\n return (\n <div>\n {searchResults.length > 0 ? (\n <Row>\n {searchResults.map((product) => (\n <Col key={product._id} sm={12} md={6} lg={4} xl={3}>\n <Product product={product} />\n </Col>\n ))}\n </Row>\n ) : (\n // Fall back to rendering regular products\n <Row>\n {products &&\n products.map((product) => (\n <Col key={product._id} sm={12} md={6} lg={4} xl={3}>\n <Product product={product} />\n </Col>\n ))}\n </Row>\n )}\n </div>\n );\n}\n\nThis is just one way to implement this functionality, and you may want to adjust the details to fit your specific use case. For example, you could add a loading state to display a loading spinner while the search results are being fetched from the API.\n" ]
[ 1 ]
[]
[]
[ "javascript", "reactjs", "typescript" ]
stackoverflow_0074658418_javascript_reactjs_typescript.txt
Q: Remote debugging HiFive Unleashed in QEMU I'm trying to get remote debugging working in QEMU for the sifive_u machine. All tools are from the Arch Linux repositories: ➜ qemu-system-riscv64 --version QEMU emulator version 4.2.0 Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers ➜ riscv64-linux-gnu-gdb --version GNU gdb (GDB) 8.3.1 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. I'm starting the machine as follows: qemu-system-riscv64 -M sifive_u -m 256M -bios default -nographic -S -s When I connect the debugger, I attempt to continue execution, but nothing happens; if I detach the debugger, the OpenSBI splash prints to the serial console. A typical gdb session looks something like this: GNU gdb (GDB) 8.3.1 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "--host=x86_64-pc-linux-gnu --target=riscv64-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". (gdb) target remote :1234 Remote debugging using :1234 warning: No executable has been specified and target does not support determining executable automatically. Try using the "file" command. 0x0000000000001000 in ?? () (gdb) info thread Id Target Id Frame * 1 Thread 1.1 (sifive-e51-riscv-cpu harts[0] [running]) 0x0000000000001000 in ?? () (gdb) c Continuing. ^C Program received signal SIGINT, Interrupt. 0x0000000080005a52 in ?? () (gdb) info thread Id Target Id Frame * 1 Thread 1.1 (sifive-e51-riscv-cpu harts[0] [halted ]) 0x0000000080005a52 in ?? () (gdb) detach Detaching from program: , process 1 Ending remote debugging. [Inferior 1 (process 1) detached] It seems odd that I can only see a single thread in info thread; I would expect to see one thread per hart. My hunch is that I end up attached to a hart which loses the lottery and goes to sleep, and for some none of the other harts are allowed to continue execution. If I use the virt machine, the execution starts as expected when I run continue and I see the OpenSBI splash immediately, so it seems to be linked to the use of the sifive_u in some way. Does anyone have any idea what I'm doing wrong? A: https://www.qemu.org/docs/master/system/gdb.html#Debugging%20multicore%20machines See "Debugging multicore machines" in this page. (gdb)target extended-remote :1234 (gdb)add-inferior (gdb)inferior 2 (gdb)attach 2 (gdb)i threads See this picture
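Expanding the answer's commands with what each step does, plus one gdb setting that addresses the original symptom (only the current inferior resuming on continue). Which gdbstub process maps to which CPU cluster is an assumption here — confirm with the info threads output:

(gdb) target extended-remote :1234
(gdb) add-inferior              # second inferior for the other CPU cluster exposed by QEMU
(gdb) inferior 2
(gdb) attach 2                  # attach to gdbstub process 2
(gdb) info threads              # should now list the E51 hart and the U54 harts
(gdb) set schedule-multiple on  # let all attached inferiors resume on "continue"
(gdb) continue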
Remote debugging HiFive Unleashed in QEMU
I'm trying to get remote debugging working in QEMU for the sifive_u machine. All tools are from the Arch Linux repositories: ➜ qemu-system-riscv64 --version QEMU emulator version 4.2.0 Copyright (c) 2003-2019 Fabrice Bellard and the QEMU Project developers ➜ riscv64-linux-gnu-gdb --version GNU gdb (GDB) 8.3.1 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. I'm starting the machine as follows: qemu-system-riscv64 -M sifive_u -m 256M -bios default -nographic -S -s When I connect the debugger, I attempt to continue execution, but nothing happens; if I detach the debugger, the OpenSBI splash prints to the serial console. A typical gdb session looks something like this: GNU gdb (GDB) 8.3.1 Copyright (C) 2019 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html> This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law. Type "show copying" and "show warranty" for details. This GDB was configured as "--host=x86_64-pc-linux-gnu --target=riscv64-linux-gnu". Type "show configuration" for configuration details. For bug reporting instructions, please see: <http://www.gnu.org/software/gdb/bugs/>. Find the GDB manual and other documentation resources online at: <http://www.gnu.org/software/gdb/documentation/>. For help, type "help". Type "apropos word" to search for commands related to "word". (gdb) target remote :1234 Remote debugging using :1234 warning: No executable has been specified and target does not support determining executable automatically. Try using the "file" command. 0x0000000000001000 in ?? () (gdb) info thread Id Target Id Frame * 1 Thread 1.1 (sifive-e51-riscv-cpu harts[0] [running]) 0x0000000000001000 in ?? () (gdb) c Continuing. ^C Program received signal SIGINT, Interrupt. 0x0000000080005a52 in ?? () (gdb) info thread Id Target Id Frame * 1 Thread 1.1 (sifive-e51-riscv-cpu harts[0] [halted ]) 0x0000000080005a52 in ?? () (gdb) detach Detaching from program: , process 1 Ending remote debugging. [Inferior 1 (process 1) detached] It seems odd that I can only see a single thread in info thread; I would expect to see one thread per hart. My hunch is that I end up attached to a hart which loses the lottery and goes to sleep, and for some none of the other harts are allowed to continue execution. If I use the virt machine, the execution starts as expected when I run continue and I see the OpenSBI splash immediately, so it seems to be linked to the use of the sifive_u in some way. Does anyone have any idea what I'm doing wrong?
[ "https://www.qemu.org/docs/master/system/gdb.html#Debugging%20multicore%20machines\nSee \"Debugging multicore machines\" in this page.\n(gdb)target extended-remote :1234\n(gdb)add-inferior\n(gdb)inferior 2\n(gdb)attach 2\n(gdb)i threads\n\nSee this picture\n" ]
[ 0 ]
[]
[]
[ "bare_metal", "gdb", "gdbserver", "qemu", "riscv" ]
stackoverflow_0059828618_bare_metal_gdb_gdbserver_qemu_riscv.txt
Q: Meaning of the numbers on the side of paragraphs in the Ada Reference Manual The Reference Manual paragraphs have a "side number" (this is how I call them). For example, in the attached screenshot of the Reference Manual Introduction, the first "side numbers" are 1, 2, 3/3, 4/1, 5/3, 6/3 ,7. What is the meaning of the number after the slash sign ? I could not find the explanation in http://www.ada-auth.org. A: See the final paragraph of the Introduction of the latest Ada Reference Manual: www.ada-auth.org/standards/22rm/html/RM-0-2.html#p73 Copy-paste: Using this version of the Ada Reference Manual 72/5 This document has been revised with the corrections specified in Technical Corrigendum 1 for Ada 2012 (which corresponds to ISO/IEC 8652:2012/COR.1:2016) and other changes specifically for Ada 2022. In addition, a variety of editorial errors have been corrected. 73/5 Changes to the original 1995 version of the Ada Reference Manual can be identified by the version number following the paragraph number. Paragraphs with a version number of /1 were changed by Technical Corrigendum 1 for Ada 95 or were editorial corrections at that time, while paragraphs with a version number of /2 were changed by Amendment 1 or were more recent editorial corrections, and paragraphs with a version number of /3 were changed by the 2012 edition of the Reference Manual or were still more recent editorial corrections. Paragraphs with a version number of /4 are changed by Technical Corrigendum 1 for Ada 2012 or were editorial corrections at that time. Paragraphs with a version number of /5 are changes or editorial corrections for Ada 2022. Paragraphs not so marked are unchanged since the original 1995 edition of the Ada Reference Manual, and have the same paragraph numbers as in that edition. In addition, some versions of this document include revision bars near the paragraph numbers. Where paragraphs are inserted, the paragraph numbers are of the form pp.nn, where pp is the number of the preceding paragraph, and nn is an insertion number. For instance, the first paragraph inserted after paragraph 8 is numbered 8.1, the second paragraph inserted is numbered 8.2, and so on. Deleted paragraphs are indicated by the text This paragraph was deleted. Deleted paragraphs include empty paragraphs that were numbered in the 1995 edition of the Ada Reference Manual. 
Meaning of the numbers on the side of paragraphs in the Ada Reference Manual
The Reference Manual paragraphs have a "side number" (this is how I call them). For example, in the attached screenshot of the Reference Manual Introduction, the first "side numbers" are 1, 2, 3/3, 4/1, 5/3, 6/3 ,7. What is the meaning of the number after the slash sign ? I could not find the explanation in http://www.ada-auth.org.
[ "See the final paragraph of the Introduction of the latest Ada Reference Manual: www.ada-auth.org/standards/22rm/html/RM-0-2.html#p73\nCopy-paste:\nUsing this version of the Ada Reference Manual\n72/5\nThis document has been revised with the corrections specified in Technical Corrigendum 1 for Ada 2012 (which corresponds to ISO/IEC 8652:2012/COR.1:2016) and other changes specifically for Ada 2022. In addition, a variety of editorial errors have been corrected.\n73/5\nChanges to the original 1995 version of the Ada Reference Manual can be identified by the version number following the paragraph number. Paragraphs with a version number of /1 were changed by Technical Corrigendum 1 for Ada 95 or were editorial corrections at that time, while paragraphs with a version number of /2 were changed by Amendment 1 or were more recent editorial corrections, and paragraphs with a version number of /3 were changed by the 2012 edition of the Reference Manual or were still more recent editorial corrections. Paragraphs with a version number of /4 are changed by Technical Corrigendum 1 for Ada 2012 or were editorial corrections at that time. Paragraphs with a version number of /5 are changes or editorial corrections for Ada 2022. Paragraphs not so marked are unchanged since the original 1995 edition of the Ada Reference Manual, and have the same paragraph numbers as in that edition. In addition, some versions of this document include revision bars near the paragraph numbers. Where paragraphs are inserted, the paragraph numbers are of the form pp.nn, where pp is the number of the preceding paragraph, and nn is an insertion number. For instance, the first paragraph inserted after paragraph 8 is numbered 8.1, the second paragraph inserted is numbered 8.2, and so on. Deleted paragraphs are indicated by the text This paragraph was deleted. Deleted paragraphs include empty paragraphs that were numbered in the 1995 edition of the Ada Reference Manual. \n" ]
[ 1 ]
[]
[]
[ "ada" ]
stackoverflow_0074655010_ada.txt
Q: What is the best way to pass an extra argument to a scipy.LowLevelCallable function? I have a python script that creates a set of ctype input arguments to pass to scipy.LowLevelCallable(see the notes section) and uses it to make a call to scipy.generic_filter that only executes a single iteration for testing purposes. I also define an extra argument and pass it to the user_data void pointer as following: from scipy import LowLevelCallable, ndimage import numpy as np import ctypes clib = ctypes.cdll.LoadLibrary('path_to_my_file/my_filter.so') clib.max_filter.restype = ctypes.c_int clib.max_filter.argtypes = ( ctypes.POINTER(ctypes.c_double), ctypes.c_long, ctypes.POINTER(ctypes.c_double), ctypes.c_void_p) my_user_data = ctypes.c_double(12345) ptr = ctypes.cast(ctypes.pointer(my_user_data), ctypes.c_void_p) max_filter_llc = LowLevelCallable(clib.max_filter,ptr) #this part only executes the LowLevelCallable function once and has no special meaning image = np.random.random((1, 1)) footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool) mask = ndimage.generic_filter(image, max_filter_llc, footprint=footprint) path_to_my_file/my_filter.so corresponds to the scipy.LowLevelCallable function argument structure and simply prints the user_data variable: #include <math.h> #include <stdint.h> #include <stdio.h> int my_filter( double * buffer, intptr_t filter_size, double * return_value, void * user_data ) { double x; x = *(double *)(user_data); printf("my user_data input is: %ld", x); return 1; } This prints out my user_data input is: 0, even though I defined my_user_data as 12345 in my python script. How can I change my scripts so I can access the extra argument in my c program? A: @DavidRanieri's comment resolved my problem
What is the best way to pass an extra argument to a scipy.LowLevelCallable function?
I have a python script that creates a set of ctype input arguments to pass to scipy.LowLevelCallable(see the notes section) and uses it to make a call to scipy.generic_filter that only executes a single iteration for testing purposes. I also define an extra argument and pass it to the user_data void pointer as following: from scipy import LowLevelCallable, ndimage import numpy as np import ctypes clib = ctypes.cdll.LoadLibrary('path_to_my_file/my_filter.so') clib.max_filter.restype = ctypes.c_int clib.max_filter.argtypes = ( ctypes.POINTER(ctypes.c_double), ctypes.c_long, ctypes.POINTER(ctypes.c_double), ctypes.c_void_p) my_user_data = ctypes.c_double(12345) ptr = ctypes.cast(ctypes.pointer(my_user_data), ctypes.c_void_p) max_filter_llc = LowLevelCallable(clib.max_filter,ptr) #this part only executes the LowLevelCallable function once and has no special meaning image = np.random.random((1, 1)) footprint = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]], dtype=bool) mask = ndimage.generic_filter(image, max_filter_llc, footprint=footprint) path_to_my_file/my_filter.so corresponds to the scipy.LowLevelCallable function argument structure and simply prints the user_data variable: #include <math.h> #include <stdint.h> #include <stdio.h> int my_filter( double * buffer, intptr_t filter_size, double * return_value, void * user_data ) { double x; x = *(double *)(user_data); printf("my user_data input is: %ld", x); return 1; } This prints out my user_data input is: 0, even though I defined my_user_data as 12345 in my python script. How can I change my scripts so I can access the extra argument in my c program?
[ "@DavidRanieri's comment resolved my problem\n" ]
[ 0 ]
[]
[]
[ "c", "ctypes", "python", "scipy", "scipy.ndimage" ]
stackoverflow_0074658716_c_ctypes_python_scipy_scipy.ndimage.txt
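The comment that resolved the problem above is not quoted, so the following is only a guess at the fix: printing a double with %ld is undefined behavior, which would explain the bogus 0 even though user_data was passed correctly. A corrected sketch of the C callback (note the question loads clib.max_filter while the C function is called my_filter; presumably the real code uses one consistent name):

#include <stdint.h>
#include <stdio.h>

int my_filter(
    double *buffer,
    intptr_t filter_size,
    double *return_value,
    void *user_data
) {
    double x = *(double *)user_data;            /* read the extra argument */
    printf("my user_data input is: %f\n", x);   /* %f, not %ld, for a double */
    *return_value = 0.0;                        /* generic_filter expects a result written here */
    return 1;                                   /* non-zero signals success */
}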
Q: Set ngModel sub-property dynamically I need to set a ngModel sub-property dynamically like this inside an ngFor. <div *ngFor="let weekday of this.weekdays"> <mat-slide-toggle [(ngModel)]="openingHoursObj.[weekday].isOpen">Open</mat-slide-toggle> </div> ... where weekday can be monday, tuesday, wednesday etc. It says: Property weekday does not exist on OpeningHoursViewModel. How can you go about setting this sub-property dynamically in similar fashion? A: Try removing this. from this.weekdays and removing the first . from openingHoursObj.[weekday].isOpen. That will access the class property and object properties correctly, respectively. Result: <div *ngFor="let weekday of weekdays"> <mat-slide-toggle [(ngModel)]="openingHoursObj[weekday].isOpen">Open</mat-slide-toggle> </div>
Set ngModel sub-property dynamically
I need to set a ngModel sub-property dynamically like this inside an ngFor. <div *ngFor="let weekday of this.weekdays"> <mat-slide-toggle [(ngModel)]="openingHoursObj.[weekday].isOpen">Open</mat-slide-toggle> </div> ... where weekday can be monday, tuesday, wednesday etc. It says: Property weekday does not exist on OpeningHoursViewModel. How can you go about setting this sub-property dynamically in similar fashion?
[ "Try removing this. from this.weekdays and removing the first . from openingHoursObj.[weekday].isOpen. That will access the class property and object properties correctly, respectively.\nResult:\n<div *ngFor=\"let weekday of weekdays\">\n <mat-slide-toggle [(ngModel)]=\"openingHoursObj[weekday].isOpen\">Open</mat-slide-toggle>\n</div>\n\n" ]
[ 1 ]
[]
[]
[ "angular", "ngfor", "ngmodel" ]
stackoverflow_0074658752_angular_ngfor_ngmodel.txt
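A minimal sketch of the component class behind the accepted answer's template; the property names mirror the question, everything else (selector, file names) is illustrative. Note that [(ngModel)] also requires FormsModule (and the toggle requires MatSlideToggleModule) to be imported in the module.

import { Component } from '@angular/core';

@Component({
  selector: 'app-opening-hours',
  templateUrl: './opening-hours.component.html',
})
export class OpeningHoursComponent {
  weekdays = ['monday', 'tuesday', 'wednesday', 'thursday', 'friday', 'saturday', 'sunday'];

  // One entry per weekday, otherwise openingHoursObj[weekday] is undefined at binding time.
  openingHoursObj: Record<string, { isOpen: boolean }> =
    Object.fromEntries(this.weekdays.map(day => [day, { isOpen: false }]));
}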
Q: Windows Toast notification is not working with toastgeneric I am new to windows programming I wanted to start the notification system of my program with the documents I saw from Microsoft It works fine when I use ready-made templates XmlDocument doc= ToastNotificationManager::GetTemplateContent(ToastTemplateType::ToastText02); doc.SelectSingleNode(L"//text[1]").InnerText(L"Hellow :D"); doc.SelectSingleNode(L"//text[2]").InnerText(L"Im greate :X:X:X"); ToastNotification notif{doc}; toastNotifier_.Show(notif); But when I make my own template it doesn't work std::ifstream tro(address); std::string str((std::istreambuf_iterator<char>(tro)), std::istreambuf_iterator<char>()); XmlDocument doc; doc.LoadXml(winrt::to_hstring(str)); doc.SelectSingleNode(L"//text[1]").InnerText(L"Hellow :D"); doc.SelectSingleNode(L"//text[2]").InnerText(L"Im greate :X:X:X"); ToastNotification notif{doc}; toastNotifier_.Show(notif); XML file <toast> <visual> <binding template="ToastGeneric"> <text id="1"></text> <text id="2"></text> </binding> </visual> </toast> I noticed something, when I change the template attribute name from ToastGeneric to one of the ready template names like ToastText02, the notification is displayed, but the information is not placed in the children. A: add activationType to toast solve this issue <toast activationType="protocol"> // protocol,Background,Foreground <visual> <binding template="ToastGeneric"> <text id="1"></text> <text id="2"></text> </binding> </visual> </toast>
Windows Toast notification is not working with toastgeneric
I am new to windows programming I wanted to start the notification system of my program with the documents I saw from Microsoft It works fine when I use ready-made templates XmlDocument doc= ToastNotificationManager::GetTemplateContent(ToastTemplateType::ToastText02); doc.SelectSingleNode(L"//text[1]").InnerText(L"Hellow :D"); doc.SelectSingleNode(L"//text[2]").InnerText(L"Im greate :X:X:X"); ToastNotification notif{doc}; toastNotifier_.Show(notif); But when I make my own template it doesn't work std::ifstream tro(address); std::string str((std::istreambuf_iterator<char>(tro)), std::istreambuf_iterator<char>()); XmlDocument doc; doc.LoadXml(winrt::to_hstring(str)); doc.SelectSingleNode(L"//text[1]").InnerText(L"Hellow :D"); doc.SelectSingleNode(L"//text[2]").InnerText(L"Im greate :X:X:X"); ToastNotification notif{doc}; toastNotifier_.Show(notif); XML file <toast> <visual> <binding template="ToastGeneric"> <text id="1"></text> <text id="2"></text> </binding> </visual> </toast> I noticed something, when I change the template attribute name from ToastGeneric to one of the ready template names like ToastText02, the notification is displayed, but the information is not placed in the children.
[ "add activationType to toast solve this issue\n<toast activationType=\"protocol\"> // protocol,Background,Foreground\n <visual>\n <binding template=\"ToastGeneric\">\n <text id=\"1\"></text>\n <text id=\"2\"></text>\n </binding>\n </visual>\n</toast>\n\n\n" ]
[ 0 ]
[]
[]
[ "c++", "notifications", "toast", "windows" ]
stackoverflow_0074655219_c++_notifications_toast_windows.txt
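A minimal variant of the question's C++/WinRT code with the corrected XML built inline rather than read from a file; the activationType value is the one from the answer (the toast schema documents the allowed values in lowercase: foreground, background, protocol):

winrt::Windows::Data::Xml::Dom::XmlDocument doc;
doc.LoadXml(LR"(<toast activationType="protocol">
  <visual>
    <binding template="ToastGeneric">
      <text id="1"></text>
      <text id="2"></text>
    </binding>
  </visual>
</toast>)");
doc.SelectSingleNode(L"//text[1]").InnerText(L"Hello :D");
doc.SelectSingleNode(L"//text[2]").InnerText(L"I'm great :X:X:X");
winrt::Windows::UI::Notifications::ToastNotification notif{ doc };
toastNotifier_.Show(notif);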
Q: Can’t get the exact iPhone notch size in Unity Getting the notch size isn't supported in Unity. I've tried to first get the notch size for all iOS devices in Xcode and then convert them to pixels in Unity but it’s always wrong and I’m not sure why. No information regarding this can be found on the internet - I think this is because no games require knowing the exact notch size for every iOS device especially in Unity but in my case, I do need it so I can do something with it. It would be nice to be able to get the position as well (notch and 14 Pro's island) I know how to handle safe area in Unity. This is not about that. This is about getting the notch size.
Can’t get the exact iPhone notch size in Unity
Getting the notch size isn't supported in Unity. I've tried to first get the notch size for all iOS devices in Xcode and then convert them to pixels in Unity but it’s always wrong and I’m not sure why. No information regarding this can be found on the internet - I think this is because no games require knowing the exact notch size for every iOS device especially in Unity but in my case, I do need it so I can do something with it. It would be nice to be able to get the position as well (notch and 14 Pro's island) I know how to handle safe area in Unity. This is not about that. This is about getting the notch size.
[]
[]
[ "From this unity form post:\n“ Pre-existing apps will have black bars in non-safe parts of the screen.\nYou will have to rebuild with new Xcode and re-submit it to the app store to get access to full screen. You should be able to experiment with that in Simulator.”\nThis image was also posted to show how little effect it would have:\n\nIn general, the space is so small that if your text is that small and that far up that it is blocked, it would be a bad UI either way.\n" ]
[ -2 ]
[ "c#", "ios", "iphone", "unity3d" ]
stackoverflow_0074658826_c#_ios_iphone_unity3d.txt
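There is no Unity API I know of that reports notch dimensions per device model directly, but a hedged starting point is Screen.cutouts (added around Unity 2019.2), which reports display cutouts in screen pixels on platforms that expose them; whether a given iOS version/device reports its notch or Dynamic Island this way needs to be verified on real hardware. A sketch:

using UnityEngine;

public class NotchProbe : MonoBehaviour
{
    void Start()
    {
        // Screen.cutouts is empty on devices/platforms that do not report cutouts.
        foreach (Rect cutout in Screen.cutouts)
        {
            Debug.Log($"Cutout: x={cutout.x} y={cutout.y} w={cutout.width} h={cutout.height}");
        }

        // Fallback: infer the obstructed band at the top from the safe area
        // (assumes the usual bottom-left screen origin).
        Rect safe = Screen.safeArea;
        float topInset = Screen.height - (safe.y + safe.height);
        Debug.Log($"Top inset in pixels: {topInset}");
    }
}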
Q: setValue using an array of schema values, each schema with an array of fields, and each field with an array of "inner" fields [Edited, for simplicity] I want to set the value of a few cells in a Google Sheet from the values retrieved from a schema list (from a Google Workspace domain), using "AdminDirectory.Schemas.list('my_customer').schemas". So far I only achieve partial solutions... In detail, I want each of the Google Sheet cells U4, U5, U6, and so on (one cell for each schema), to contain a single schema, with the entire array of fields, and the entire array of inner fields, as follows (below is the expected content of a single cell): ________________ Cell U4 ________________ | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.1 of 3: | | ◦ Display Name: Field 1: | | ◦ Field_Name: Field1: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | | • Field Nr.2 of 3: | | ◦ Display Name: Field 2: | | ◦ Field_Name: Field2: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | | • Field Nr.3 of 3: | | ◦ Display Name: Field 3: | | ◦ Field_Name: Field3: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | |_______________________________________| The next cell might be slightly different, as each schema will have a different number of fields, and each field has five inner fields (the one seen in the example above). So far, the best I wasn't able to achieve better than this: ________________ Cell U4 ________________ | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.1 of 3: | | ◦ Display Name: Field 1: | | ◦ Field_Name: Field1: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | |_______________ Cell U6 _______________| | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.2 of 3: | | ◦ Display Name: Field 2: | | ◦ Field_Name: Field2: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | |_______________ Cell U6 _______________| | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.3 of 3: | | ◦ Display Name: Field 3: | | ◦ Field_Name: Field3: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | |_______________________________________| As you can see, the schema "Display Name" and "Schema Name" is repeated in every cell, and the fields spread into the follwoing cells, until the end of the fields loop. When there are no more fields in that schema, the same happens for the next one. What I want is to have everything related to each schema in a single cell. So, in short, what I need it to be able to join or concatenate the results of the fields loop (more details after the Script 2). ㅤ SCRIPT 1: [removed for simplicity. Kept this reference out of respect for who may have read it before] ㅤ SCRIPT 2: function listSchemaB() { const sheet = SpreadsheetApp.getActive().getSheetByName("Domain Schema"); const schemaLength = AdminDirectory.Schemas.list('my_customer').schemas.length; for(var i=0;i<schemaLength;i++) { var data = AdminDirectory.Schemas.list('my_customer').schemas[i]; var fieldsLenght = data.fields.length; var schemaTitles = "⏵ Display Name: " + data.displayName + "\n\⏵ Safe_Name: " + data.schemaName + "\n\⏵ Fields:"; for(var x=0;x<fieldsLenght;x++) { var schemaFields = ("\n\ • Field Nr." 
+ (x+1) + " of " + (fieldsLenght+1) + ":\n\ ◦ Display Name: " + data.fields[x].displayName + ":\n\ ◦ Field_Name: " + data.fields[x].fieldName + ":\n\ ◦ Field Type: " + data.fields[x].fieldType + ":\n\ ◦ Multi-valued? :" + data.fields[x].multiValued + ":\n\ ◦ Accessible by: " + data.fields[x].readAccessType).concat(""); } sheet.getRange(i+4,21).setValue(schemaTitles + schemaFields); } } This one almost works, but I get the results from the loop with the x variable all separated from each other, so they all go to a different cell when I use "setValue", and I can't find a way to merge/join/concatenate the results from the inner loop into a single cell. ㅤ SCRIPT 3: [removed for simplicity. Kept this reference out of respect for @doubleunary, who tried to help based on this script] Additionally - but secondary, for now -, I'd also like to know how I can use the output of "console.log(something)" as a variable to use with "setValue", to push the result to a Google Sheet. A: Note: "console.log(ret)" gives me a perfect result but I can't find a way to use the logged result inside the "setValue", to push the result to the Google Sheet. It is not entirely clear what your desired result is, but given that console.log(ret) gives what you want, try this: function loopSchemaC() { const sheet = SpreadsheetApp.getActive().getSheetByName('Domain Schema'); const data = AdminDirectory.Schemas.list('my_customer').schemas; const output = []; data.forEach(schema => { const ret = {}; ret.displayName = schema.displayName; ret.schemaName = schema.schemaName; ret.fields = []; for (let f of schema.fields) { const obj = {}; obj.readAccessType = f.readAccessType; obj.displayName = f.displayName; obj.fieldType = f.fieldType; obj.fieldName = f.fieldName; obj.multiValued = f.multiValued; ret.fields.push(obj); } output.push([JSON.stringify(ret, null, 2)]); }); sheet.getRange('U1') .offset(0, 0, output.length, output[0].length) .setValues(output); }
setValue using an array of schema values, each schema with an array of fields, and each field with an array of "inner" fields
[Edited, for simplicity] I want to set the value of a few cells in a Google Sheet from the values retrieved from a schema list (from a Google Workspace domain), using "AdminDirectory.Schemas.list('my_customer').schemas". So far I only achieve partial solutions... In detail, I want each of the Google Sheet cells U4, U5, U6, and so on (one cell for each schema), to contain a single schema, with the entire array of fields, and the entire array of inner fields, as follows (below is the expected content of a single cell): ________________ Cell U4 ________________ | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.1 of 3: | | ◦ Display Name: Field 1: | | ◦ Field_Name: Field1: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | | • Field Nr.2 of 3: | | ◦ Display Name: Field 2: | | ◦ Field_Name: Field2: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | | • Field Nr.3 of 3: | | ◦ Display Name: Field 3: | | ◦ Field_Name: Field3: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | |_______________________________________| The next cell might be slightly different, as each schema will have a different number of fields, and each field has five inner fields (the one seen in the example above). So far, the best I wasn't able to achieve better than this: ________________ Cell U4 ________________ | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.1 of 3: | | ◦ Display Name: Field 1: | | ◦ Field_Name: Field1: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | |_______________ Cell U6 _______________| | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.2 of 3: | | ◦ Display Name: Field 2: | | ◦ Field_Name: Field2: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | | | |_______________ Cell U6 _______________| | | |⏵ Display Name: Test Schema 1 | |⏵ Safe_Name: Test_Schema_1 | |⏵ Fields: | | • Field Nr.3 of 3: | | ◦ Display Name: Field 3: | | ◦ Field_Name: Field3: | | ◦ Field Type: BOOL: | | ◦ Multi-valued? :false: | | ◦ Accessible by: ADMINS_AND_SELF | |_______________________________________| As you can see, the schema "Display Name" and "Schema Name" is repeated in every cell, and the fields spread into the follwoing cells, until the end of the fields loop. When there are no more fields in that schema, the same happens for the next one. What I want is to have everything related to each schema in a single cell. So, in short, what I need it to be able to join or concatenate the results of the fields loop (more details after the Script 2). ㅤ SCRIPT 1: [removed for simplicity. Kept this reference out of respect for who may have read it before] ㅤ SCRIPT 2: function listSchemaB() { const sheet = SpreadsheetApp.getActive().getSheetByName("Domain Schema"); const schemaLength = AdminDirectory.Schemas.list('my_customer').schemas.length; for(var i=0;i<schemaLength;i++) { var data = AdminDirectory.Schemas.list('my_customer').schemas[i]; var fieldsLenght = data.fields.length; var schemaTitles = "⏵ Display Name: " + data.displayName + "\n\⏵ Safe_Name: " + data.schemaName + "\n\⏵ Fields:"; for(var x=0;x<fieldsLenght;x++) { var schemaFields = ("\n\ • Field Nr." 
+ (x+1) + " of " + (fieldsLenght+1) + ":\n\ ◦ Display Name: " + data.fields[x].displayName + ":\n\ ◦ Field_Name: " + data.fields[x].fieldName + ":\n\ ◦ Field Type: " + data.fields[x].fieldType + ":\n\ ◦ Multi-valued? :" + data.fields[x].multiValued + ":\n\ ◦ Accessible by: " + data.fields[x].readAccessType).concat(""); } sheet.getRange(i+4,21).setValue(schemaTitles + schemaFields); } } This one almost works, but I get the results from the loop with the x variable all separated from each other, so they all go to a different cell when I use "setValue", and I can't find a way to merge/join/concatenate the results from the inner loop into a single cell. ㅤ SCRIPT 3: [removed for simplicity. Kept this reference out of respect for @doubleunary, who tried to help based on this script] Additionally - but secondary, for now -, I'd also like to know how I can use the output of "console.log(something)" as a variable to use with "setValue", to push the result to a Google Sheet.
[ "\nNote: \"console.log(ret)\" gives me a perfect result but I can't find a way to use the logged result inside the \"setValue\", to push the result to the Google Sheet.\n\nIt is not entirely clear what your desired result is, but given that console.log(ret) gives what you want, try this:\nfunction loopSchemaC() {\n const sheet = SpreadsheetApp.getActive().getSheetByName('Domain Schema');\n const data = AdminDirectory.Schemas.list('my_customer').schemas;\n const output = [];\n data.forEach(schema => {\n const ret = {};\n ret.displayName = schema.displayName;\n ret.schemaName = schema.schemaName;\n ret.fields = [];\n for (let f of schema.fields) {\n const obj = {};\n obj.readAccessType = f.readAccessType;\n obj.displayName = f.displayName;\n obj.fieldType = f.fieldType;\n obj.fieldName = f.fieldName;\n obj.multiValued = f.multiValued;\n ret.fields.push(obj);\n }\n output.push([JSON.stringify(ret, null, 2)]);\n });\n sheet.getRange('U1')\n .offset(0, 0, output.length, output[0].length)\n .setValues(output);\n}\n\n" ]
[ 2 ]
[]
[]
[ "google_admin_sdk", "google_apps_script", "google_schemas", "google_sheets", "google_workspace" ]
stackoverflow_0074658598_google_admin_sdk_google_apps_script_google_schemas_google_sheets_google_workspace.txt
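The core problem in Script 2 above is that schemaFields is reassigned on every pass of the inner loop, so only the last field survives and each setValue call writes a fragment. Accumulating into one string with += keeps the whole schema in a single cell. A sketch under the same assumptions as the question (sheet name, column U, rows starting at 4); it also fetches the schema list once instead of once per iteration:

function listSchemaB() {
  const sheet = SpreadsheetApp.getActive().getSheetByName('Domain Schema');
  const schemas = AdminDirectory.Schemas.list('my_customer').schemas;
  for (let i = 0; i < schemas.length; i++) {
    const data = schemas[i];
    let cellText = '⏵ Display Name: ' + data.displayName +
                   '\n⏵ Safe_Name: ' + data.schemaName +
                   '\n⏵ Fields:';
    for (let x = 0; x < data.fields.length; x++) {
      const f = data.fields[x];
      cellText += '\n • Field Nr.' + (x + 1) + ' of ' + data.fields.length + ':' +
                  '\n   ◦ Display Name: ' + f.displayName +
                  '\n   ◦ Field_Name: ' + f.fieldName +
                  '\n   ◦ Field Type: ' + f.fieldType +
                  '\n   ◦ Multi-valued?: ' + f.multiValued +
                  '\n   ◦ Accessible by: ' + f.readAccessType;
    }
    sheet.getRange(i + 4, 21).setValue(cellText); // column 21 = U
  }
}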
Q: How to use variable inside variable in angular template? I am new to angular, can someone tell me if I can use a variable inside a variable in Angular. Explanation: I am creating one dropdown input component where it will make an api call to get data. There is an @Input() selector:string = "" which will tell what to select from the data. Inside the template it will run an *ngFor loop, then inside the html I want to display something like this: <option *ngFor="let item of data" [value]="item.id"> {{ item.{{selector}} }} </option> In other modules it will be used as: In one module <app-input [selector]="'name'"></app-input> Another one. <app-input [selector]="'id'"></app-input> How can I use selector inside this in any way? A: {{ item[selector] }} Use the bracket syntax for accessing a property with a variable key.
How to use variable inside variable in angular template?
I am new to angular, can someone tell me if I can use a variable inside a variable in Angular. Explanation: I am creating one dropdown input component where it will make an api call to get data. There is an @Input() selector:string = "" which will tell what to select from the data. Inside the template it will run an *ngFor loop, then inside the html I want to display something like this: <option *ngFor="let item of data" [value]="item.id"> {{ item.{{selector}} }} </option> In other modules it will be used as: In one module <app-input [selector]="'name'"></app-input> Another one. <app-input [selector]="'id'"></app-input> How can I use selector inside this in any way?
[ "{{ item[selector] }}\n\nuse the bracket syntax for accessing a property with a variable key.\n" ]
[ 3 ]
[]
[]
[ "angular", "javascript", "rxjs" ]
stackoverflow_0074658867_angular_javascript_rxjs.txt
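A self-contained sketch of the reusable dropdown component described above; the static data array stands in for the real API call and the item shape is an assumption:

import { Component, Input, OnInit } from '@angular/core';

@Component({
  selector: 'app-input',
  template: `
    <select>
      <option *ngFor="let item of data" [value]="item.id">{{ item[selector] }}</option>
    </select>
  `,
})
export class AppInputComponent implements OnInit {
  @Input() selector = 'name';
  data: Array<Record<string, any>> = [];

  ngOnInit(): void {
    // Replace with the real API call; static data keeps the sketch runnable.
    this.data = [
      { id: 1, name: 'First' },
      { id: 2, name: 'Second' },
    ];
  }
}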
Q: Jitsi External API configuration I'm new to Jitsi and I'm trying to configure the Jitsi External API in my Django project. I have created the video call as below. const domain = 'meet.jit.si'; const options = { roomName: 'BeehobMeetExample', width: 1100, height: 700, parentNode: document.querySelector('#meet'), userInfo: { email: '{{request.user.email}}', displayName: '{{request.user.first_name}} ' + '{{request.user.last_name}}', avatarUrl: '{{ request.user.socialaccount_set.all.0.get_avatar_url }}' }, configOverwrite: { prejoinPageEnabled: false }, interfaceConfigOverwrite: { TILE_VIEW_MAX_COLUMNS: 2 }, }; const api = new JitsiMeetExternalAPI(domain, options); Now, I'm trying to set one person, selected by the moderator, to be a guest with camera and audio on, and the other room attendees to be listeners only. I'm also trying to show the role in the userInfo. Could you please help me with these? A: You can handle the buttons of the toolbar with the conditionals you need, but in general, using the Jitsi iframe has its limitations because the UI and core functionalities come from Jitsi itself
Jitsi External API configuration
I'm new to Jitsi and I'm trying to configure the Jitsi External API in my Django project. I have created the video call as below. const domain = 'meet.jit.si'; const options = { roomName: 'BeehobMeetExample', width: 1100, height: 700, parentNode: document.querySelector('#meet'), userInfo: { email: '{{request.user.email}}', displayName: '{{request.user.first_name}} ' + '{{request.user.last_name}}', avatarUrl: '{{ request.user.socialaccount_set.all.0.get_avatar_url }}' }, configOverwrite: { prejoinPageEnabled: false }, interfaceConfigOverwrite: { TILE_VIEW_MAX_COLUMNS: 2 }, }; const api = new JitsiMeetExternalAPI(domain, options); Now, I'm trying to set one person, selected by the moderator, to be a guest with camera and audio on, and the other room attendees to be listeners only. I'm also trying to show the role in the userInfo. Could you please help me with these?
[ "You can handle the buttons of the toolbar with the conditionals you need but in general, using the Jitsi iframe has its limitation because the UI and core functionalities are from Jitsi itself\n" ]
[ 0 ]
[]
[]
[ "jitsi", "jitsi_meet" ]
stackoverflow_0067858211_jitsi_jitsi_meet.txt
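A rough sketch of what the iframe API can do here, continuing from the api object created in the question. The iframe API cannot grant or revoke moderator rights by itself, but it can react to role changes and toggle the local participant's audio; the event and command names below are taken from the Jitsi iframe API documentation, and their behavior can vary between meet.jit.si and a self-hosted deployment:

// Log role changes (e.g. when a user becomes a moderator).
api.addListener('participantRoleChanged', (event) => {
  console.log('participant', event.id, 'role is now', event.role);
});

// Example policy: join as a listener by muting local audio right after joining.
api.addListener('videoConferenceJoined', () => {
  api.isAudioMuted().then((muted) => {
    if (!muted) {
      api.executeCommand('toggleAudio');
    }
  });
});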
Q: TS4023: Exported Variable has or is using name from external module but cannot be named I've seen this answered before, but they don't seem to cover this specific use case (or they don't work/help) import {Route} from 'vue-router'; export const detailRoute = { path: '/detail/:id', component: Detail, props: (route: Route) => ({ state: route.query.state }) }; detailRoute uses Route, which I am importing, but I guess as a named import {Route} it doesn't work? Is there a different/better way to do this that will work? I tried export {Route}; as well, but that didn't help. tsconfig.json: { "compilerOptions": { "target": "ES2017", "module": "ES2015", "moduleResolution": "Node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "forceConsistentCasingInFileNames": true, "allowSyntheticDefaultImports": true, "noEmitHelpers": true, "importHelpers": true, "pretty": true, "alwaysStrict": true, "declaration": true, "declarationDir": "./types", "lib": [ "DOM", "ES2017", "DOM.Iterable", "ScriptHost" ], "baseUrl": "./client", "paths": { "styles/*": ["./app/core/styles/*"], "core/*": ["./app/core/*"], "components/*": ["./app/components/*"], "containers/*": ["./app/containers/*"], "assets/*": ["./assets/*"], "config/*": ["./config/*"] } } } Exact error: TS4023: Exported variable 'detailRoute' has or is using name 'Route' from external module "/Users/chris/<projectname>/node_modules/vue-router/types/router" but cannot be named. A: The compiler is failing to figure out the exact shape of detailRoute, because it does not know the shape of Route. Option 1 One way around this is to import Route from its source, thereby providing the information that the compiler needs to determine the shape of detailRoute. import { Route } from "./../node_modules/vue-router/types/router"; export const detailRoute = { props: (route: Route) => null, }; Since the index.d.ts file in vue-router (which you were importing in the question) re-exports Route, it does not provide the direct reference to Route that the compiler needed. Option 2 Another option is to opt detailRoute out of static typing altogether. import { Route } from 'vue-router'; // index.d.ts export const detailRoute: any = { props: (route: Route) => null, }; Since any opts-out of static typing, the compiler does not need to figure out the shape of detailRoute. Option 3 A further is option is what you did in your own answer. Since you provided the type annotation, the compiler again does not need to figure out the shape of detailRoute. import { Route, RouteConfig } from 'vue-router'; // index.d.ts export const detailRoute: RouteConfig = { props: (route: Route) => null, }; See also https://github.com/Microsoft/TypeScript/issues/5711 When trying to emit [the module], the compiler needs to write an object type literal... representing the shape of the module. But there isn't a name in scope that refers directly to [Route], so the type "cannot be named" and there's an error. If you add [a direct] import of [Route]... the error should go away. A: Apparently this is the solution to my problem: import {Route, RouteConfig} from 'vue-router'; export const detailRoute: RouteConfig = { path: '/detail/:id', component: Detail, props: (route: Route) => ({ state: route.query.state }) }; Specifying that detailRoute was a RouteConfig (which in turn uses Route) solved the problem. I must have misunderstood how this is supposed to work, but this fixed it. 
A: For me it this issue was because I was trying to build a library doing: interface Props {...}; const MyComponent = ({...}:Props)=>{<>...</>} I changed to: type Props = {...}; Issue resolved. A: I came across this when typing a rootReducer, in case anyone else is doing the same. I was importing typed reducers that were composed of other types (state, actions) that I had not also exported. Short answer: export all your action and state types from the reducers! Composite types seem not to work to well when their parts are not also exported and you rely on type inference. In this case, inferring the type of the rootReducer (which would be too much to explicitly type if you have more than just a few reducers). const rootReducer = combineReducers({ typedReducerA, typedReducerB, ... } A: An extension to this question for those looking for an answer. With the following conditions: Typescript Version installed: ^4.8.3 TSConfig { "module": "NodeNext", "moduleResolution": "NodeNext" } package.json { "type": "module" } Layout src/lib/types.ts // contains all type defs src/lib/something.ts // contains type def consumption and error I encountered this issue with my own library. The code Consumed an exported type (Box) Exported type consumed an unexported type (Dimension) Consuming exported type via implicit type (no explicit : SomeType annotation) Error saying that Box is [named but can't be] -- (read: "I can't find the name of something") The reason Typescript is looking for the type within Box called Dimension, and failed. "Cannot be named" is an unclear error, but it basically means "Yo, I have no clue what's in this thing" from the context in which it was thrown. My solution Export the nested type. export interface Box { width: Dimension; } interface Dimension { size: number; meta: any; } Should become export interface Box { width: Dimension; } // Update this: // interface Dimension { // To this: export interface Dimension { size: number; meta: any; } A: Just add this into tsconfig.json compilerOptions: { ... "declaration": false, "emitDeclarationOnly": false }
TS4023: Exported Variable has or is using name from external module but cannot be named
I've seen this answered before, but they don't seem to cover this specific use case (or they don't work/help) import {Route} from 'vue-router'; export const detailRoute = { path: '/detail/:id', component: Detail, props: (route: Route) => ({ state: route.query.state }) }; detailRoute uses Route, which I am importing, but I guess as a named import {Route} it doesn't work? Is there a different/better way to do this that will work? I tried export {Route}; as well, but that didn't help. tsconfig.json: { "compilerOptions": { "target": "ES2017", "module": "ES2015", "moduleResolution": "Node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "forceConsistentCasingInFileNames": true, "allowSyntheticDefaultImports": true, "noEmitHelpers": true, "importHelpers": true, "pretty": true, "alwaysStrict": true, "declaration": true, "declarationDir": "./types", "lib": [ "DOM", "ES2017", "DOM.Iterable", "ScriptHost" ], "baseUrl": "./client", "paths": { "styles/*": ["./app/core/styles/*"], "core/*": ["./app/core/*"], "components/*": ["./app/components/*"], "containers/*": ["./app/containers/*"], "assets/*": ["./assets/*"], "config/*": ["./config/*"] } } } Exact error: TS4023: Exported variable 'detailRoute' has or is using name 'Route' from external module "/Users/chris/<projectname>/node_modules/vue-router/types/router" but cannot be named.
[ "The compiler is failing to figure out the exact shape of detailRoute, because it does not know the shape of Route.\nOption 1\nOne way around this is to import Route from its source, thereby providing the information that the compiler needs to determine the shape of detailRoute.\nimport { Route } from \"./../node_modules/vue-router/types/router\";\n\nexport const detailRoute = {\n props: (route: Route) => null,\n};\n\nSince the index.d.ts file in vue-router (which you were importing in the question) re-exports Route, it does not provide the direct reference to Route that the compiler needed.\nOption 2\nAnother option is to opt detailRoute out of static typing altogether.\nimport { Route } from 'vue-router'; // index.d.ts\n\nexport const detailRoute: any = {\n props: (route: Route) => null,\n};\n\nSince any opts-out of static typing, the compiler does not need to figure out the shape of detailRoute.\nOption 3\nA further is option is what you did in your own answer. Since you provided the type annotation, the compiler again does not need to figure out the shape of detailRoute.\nimport { Route, RouteConfig } from 'vue-router'; // index.d.ts\n\nexport const detailRoute: RouteConfig = {\n props: (route: Route) => null,\n};\n\nSee also\nhttps://github.com/Microsoft/TypeScript/issues/5711\n\nWhen trying to emit [the module], the compiler needs to write an object type literal... representing the shape of the module. But there isn't a name in scope that refers directly to [Route], so the type \"cannot be named\" and there's an error.\nIf you add [a direct] import of [Route]... the error should go away.\n\n", "Apparently this is the solution to my problem:\n import {Route, RouteConfig} from 'vue-router';\n\n\n export const detailRoute: RouteConfig = {\n path: '/detail/:id',\n component: Detail,\n props: (route: Route) => ({\n state: route.query.state\n })\n };\n\nSpecifying that detailRoute was a RouteConfig (which in turn uses Route) solved the problem. I must have misunderstood how this is supposed to work, but this fixed it.\n", "For me it this issue was because I was trying to build a library doing:\ninterface Props {...};\nconst MyComponent = ({...}:Props)=>{<>...</>}\n\nI changed to:\ntype Props = {...};\n\nIssue resolved.\n", "I came across this when typing a rootReducer, in case anyone else is doing the same. I was importing typed reducers that were composed of other types (state, actions) that I had not also exported.\nShort answer: export all your action and state types from the reducers!\nComposite types seem not to work to well when their parts are not also exported and you rely on type inference. In this case, inferring the type of the rootReducer (which would be too much to explicitly type if you have more than just a few reducers).\nconst rootReducer = combineReducers({ typedReducerA, typedReducerB, ... 
}\n\n", "An extension to this question for those looking for an answer.\nWith the following conditions:\nTypescript\nVersion installed: ^4.8.3\nTSConfig\n{\n \"module\": \"NodeNext\",\n \"moduleResolution\": \"NodeNext\"\n}\n\npackage.json\n{\n \"type\": \"module\"\n}\n\nLayout\nsrc/lib/types.ts // contains all type defs\nsrc/lib/something.ts // contains type def consumption and error\n\nI encountered this issue with my own library.\nThe code\n\nConsumed an exported type (Box)\nExported type consumed an unexported type (Dimension)\nConsuming exported type via implicit type (no explicit : SomeType annotation)\nError saying that Box is [named but can't be] -- (read: \"I can't find the name of something\")\n\nThe reason\nTypescript is looking for the type within Box called Dimension, and failed. \"Cannot be named\" is an unclear error, but it basically means \"Yo, I have no clue what's in this thing\" from the context in which it was thrown.\nMy solution\nExport the nested type.\nexport interface Box {\n width: Dimension;\n}\n\ninterface Dimension {\n size: number;\n meta: any;\n}\n\nShould become\nexport interface Box {\n width: Dimension;\n}\n\n// Update this:\n// interface Dimension {\n// To this:\nexport interface Dimension {\n size: number;\n meta: any;\n}\n\n", "Just add this into tsconfig.json\ncompilerOptions: {\n ...\n \"declaration\": false,\n \"emitDeclarationOnly\": false\n}\n\n\n" ]
[ 33, 9, 6, 5, 0, 0 ]
[ "Adding a return type fixed the problem for me\nexport const test_setUpApp = async (args?: {\n fixtureData: SomeType;\n}) => {\n ....\n }\n\nGave me the error with SomeType\nThis fixed the problem:\nexport const test_setUpApp = async (args?: {\n fixtureData: SomeType;\n}):Promise<ReturnType> => {\n ....\n }\n\n\n\n" ]
[ -1 ]
[ "declaration", "module", "type_inference", "types", "typescript" ]
stackoverflow_0043900035_declaration_module_type_inference_types_typescript.txt
Q: Notify Users when Child is added on specific node using Cloud function Is it possible for cloud function to listen for specific node for when a child is added and then send a notification to users located on a different node, and if that is possible how so? I am using node.js with Firebase realtime database and not Firestore. This is my database: I want the cloud function to listen every time a child is added on "Emergencies", and then notify all the users in the "Registered Admins" This is the contents of the users in "Registered Admins" node, it has a child "Notification" containing the message, and I want to send that message to all the users, when a child is added on "Emergencies" node. This is my cloud function using node.js. I've deployed it however it does not work, does not send any notification at all. const functions = require("firebase-functions"); const admin = require("firebase-admin"); admin.initializeApp(); exports.listen = functions.database.ref("/Emergencies") .onWrite(async (change, context) => { change.after.val(); context.params.pushId; // Get the list of device notification tokens. const getDeviceTokensPromise = admin.database() .ref("/Registered Admins/{uid}/Token").once("value"); // The snapshot to the user's tokens. let tokensSnapshot; // The array containing all the user's tokens. let tokens; const results = await Promise.all([getDeviceTokensPromise]); tokensSnapshot = results[0]; // Check if there are any device tokens. if (!tokensSnapshot.hasChildren()) { return functions.logger.log( 'There are no notification tokens to send to.' ); } functions.logger.log( 'There are', tokensSnapshot.numChildren(), 'tokens to send notifications to.' ); // Notification details. const payload = { notification: { title: "New Emergency Request!", body: "Someone needs help check Emergenie App now!", } }; // Listing all tokens as an array. tokens = Object.keys(tokensSnapshot.val()); // Send notifications to all tokens. const response = await admin.messaging().sendToDevice(tokens, payload); // For each message check if there was an error. const tokensToRemove = []; response.results.forEach((result, index) => { const error = result.error; if (error) { functions.logger.error( 'Failure sending notification to', tokens[index], error ); // Cleanup the tokens who are not registered anymore. if (error.code === 'messaging/invalid-registration-token' || error.code === 'messaging/registration-token-not-registered') { tokensToRemove.push(tokensSnapshot.ref.child(tokens[index]).remove()); } } }); return Promise.all(tokensToRemove); }); A: Yes, that sounds possible and is in fact quite close to what the example on notifying users when something interesting happens does. To send a message to a specific device, you 'll need to know the token for that device. If you want to broadcast a message to multiple users, you could subscribe those users to a topic. Just keep in mind that anyone can subscribe to a topic if they know its name, so you can't use that to send messages that only a certain group of users is allowed to see.
Notify Users when Child is added on specific node using Cloud function
Is it possible for cloud function to listen for specific node for when a child is added and then send a notification to users located on a different node, and if that is possible how so? I am using node.js with Firebase realtime database and not Firestore. This is my database: I want the cloud function to listen every time a child is added on "Emergencies", and then notify all the users in the "Registered Admins" This is the contents of the users in "Registered Admins" node, it has a child "Notification" containing the message, and I want to send that message to all the users, when a child is added on "Emergencies" node. This is my cloud function using node.js. I've deployed it however it does not work, does not send any notification at all. const functions = require("firebase-functions"); const admin = require("firebase-admin"); admin.initializeApp(); exports.listen = functions.database.ref("/Emergencies") .onWrite(async (change, context) => { change.after.val(); context.params.pushId; // Get the list of device notification tokens. const getDeviceTokensPromise = admin.database() .ref("/Registered Admins/{uid}/Token").once("value"); // The snapshot to the user's tokens. let tokensSnapshot; // The array containing all the user's tokens. let tokens; const results = await Promise.all([getDeviceTokensPromise]); tokensSnapshot = results[0]; // Check if there are any device tokens. if (!tokensSnapshot.hasChildren()) { return functions.logger.log( 'There are no notification tokens to send to.' ); } functions.logger.log( 'There are', tokensSnapshot.numChildren(), 'tokens to send notifications to.' ); // Notification details. const payload = { notification: { title: "New Emergency Request!", body: "Someone needs help check Emergenie App now!", } }; // Listing all tokens as an array. tokens = Object.keys(tokensSnapshot.val()); // Send notifications to all tokens. const response = await admin.messaging().sendToDevice(tokens, payload); // For each message check if there was an error. const tokensToRemove = []; response.results.forEach((result, index) => { const error = result.error; if (error) { functions.logger.error( 'Failure sending notification to', tokens[index], error ); // Cleanup the tokens who are not registered anymore. if (error.code === 'messaging/invalid-registration-token' || error.code === 'messaging/registration-token-not-registered') { tokensToRemove.push(tokensSnapshot.ref.child(tokens[index]).remove()); } } }); return Promise.all(tokensToRemove); });
[ "Yes, that sounds possible and is in fact quite close to what the example on notifying users when something interesting happens does.\nTo send a message to a specific device, you 'll need to know the token for that device. If you want to broadcast a message to multiple users, you could subscribe those users to a topic. Just keep in mind that anyone can subscribe to a topic if they know its name, so you can't use that to send messages that only a certain group of users is allowed to see.\n" ]
[ 0 ]
[]
[]
[ "firebase_realtime_database", "google_cloud_functions", "node.js" ]
stackoverflow_0074652232_firebase_realtime_database_google_cloud_functions_node.js.txt
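A sketch of the function above rewritten around two fixes: listen with onCreate on a child wildcard (/Emergencies/{pushId}) so it fires once per new emergency, and read the tokens of all admins (the literal "{uid}" in the question's database path is not expanded, so that read returns nothing). Node names ("Registered Admins", "Token") are taken from the screenshots and may need adjusting:

const functions = require('firebase-functions');
const admin = require('firebase-admin');
admin.initializeApp();

exports.notifyAdminsOnEmergency = functions.database
  .ref('/Emergencies/{pushId}')
  .onCreate(async (snapshot, context) => {
    // Collect one token per registered admin.
    const adminsSnap = await admin.database().ref('/Registered Admins').once('value');
    const tokens = [];
    adminsSnap.forEach((child) => {
      const token = child.child('Token').val();
      if (token) tokens.push(token);
    });

    if (tokens.length === 0) {
      return functions.logger.log('There are no notification tokens to send to.');
    }

    const payload = {
      notification: {
        title: 'New Emergency Request!',
        body: 'Someone needs help, check the Emergenie app now!',
      },
    };
    return admin.messaging().sendToDevice(tokens, payload);
  });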
Q: Powershell: remove value from property by its value There is a JSON like: { a: { a1 : { a11:{}; a12:{}; a13:{};}; a2 : {some other values}; } b: { b1: {another values} } } so then I converted it using powershell with this command and put it into a variable $test $test = Get-Content $some_file_path | ConvertFrom-Json and then I'd like to get this JSON with some entries excluded from "a1". The result should look like this (so I remove a13): { a: { a1 : { a11: {} a12: {}}; } b: { b1: {another values} } } So, if I use: $test.a.PSObject.Properties["a1"].Remove("a13") it returns an error: Method invocation failed because [System.Management.Automation.PSNoteProperty] does not contain a method named 'Remove'. So, I clearly don't understand what I should do. Could anyone help me understand? A: .Remove() is a method on the properties collection (.psobject.Properties), not on the individual entries in that collection (which explains the error you saw). A simplified example: $obj = [pscustomobject] @{ one = 1; two = 2; three = 3 } # Remove property .two $obj.psobject.Properties.Remove('two')
Powershell: remove value from property by its value
There is a JSON like: { a: { a1 : { a11:{}; a12:{}; a13:{};}; a2 : {some other values}; } b: { b1: {another values} } } so then I converted it using powershell with this command and put it into a variable $test $test = Get-Content $some_file_path | ConvertFrom-Json and then I'd like to get this JSON with some entries excluded from "a1". The result should look like this (so I remove a13): { a: { a1 : { a11: {} a12: {}}; } b: { b1: {another values} } } So, if I use: $test.a.PSObject.Properties["a1"].Remove("a13") it returns an error: Method invocation failed because [System.Management.Automation.PSNoteProperty] does not contain a method named 'Remove'. So, I clearly don't understand what I should do. Could anyone help me understand?
[ ".Remove() is a method on the properties collection (.psobject.Properties), not on the individual entries in that collection (which explains the error you saw).\nA simplified example:\n$obj = [pscustomobject] @{ one = 1; two = 2; three = 3 }\n\n# Remove property .two\n$obj.psobject.Properties.Remove('two')\n\n" ]
[ 1 ]
[]
[]
[ "powershell", "ps", "ps1", "psobject" ]
stackoverflow_0074658470_powershell_ps_ps1_psobject.txt
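Applied to the JSON from the question above, the same idea looks like this; $some_other_path is a placeholder for wherever the trimmed JSON should be written:

$test = Get-Content $some_file_path -Raw | ConvertFrom-Json

# Remove is called on the Properties collection of the parent object (a1),
# not on the a13 property itself.
$test.a.a1.psobject.Properties.Remove('a13')

# Write the result back out; -Depth is needed so nested objects are not truncated.
$test | ConvertTo-Json -Depth 10 | Set-Content $some_other_path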
Q: Haproxy Logging At Time of Request & Time of Response We have been using haproxy for a while now and it has been great for getting detailed logs and being able to customize much of the information we want to extract out of our traffic. However, I have made some layer 8 mistakes by thinking that timestamps on log entries are the time of the request to the server. Through testing, watching log entries, and studying haproxy documentation it looks like logging takes place at the time of the response and includes information about how long in milliseconds those requests took. This can be helpful, however when trying to align requests from the haproxy log with application logs it ends up being difficult to align log entries without doing some time math to get as close as possible to the true request time. Has my own study or experience led me astray in my understanding of when haproxy populates logs and the date/time stamp found in the logs? Or is there a way to configure haproxy to make a log entry at the time of the request as well as at the time of the response? Server Type: Centos HAProxy Version: 1.8 Example Log Entries: Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.6.101:35804 [22/Feb/2022:03:07:06.283] ft-unipay-https~ unipay-api/unipay-p13.zift.io 0/0/6/13/19 200 507 - - ---- 48/48/4/2/0 0/0 {|} "GET /pingdom/index.jsp HTTP/1.1"``` Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.6.103:39836 [22/Feb/2022:03:07:06.285] ft-unipay-https~ unipay-api/unipay-p13.zift.io 0/0/5/12/17 200 507 - - ---- 48/48/3/1/0 0/0 {|} "GET /pingdom/index.jsp HTTP/1.1" Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.6.103:39836 [22/Feb/2022:03:07:06.285] ft-unipay-https~ unipay-api/unipay-p13.zift.io 0/0/5/12/17 200 507 - - ---- 48/48/3/1/0 0/0 {|} "GET /pingdom/index.jsp HTTP/1.1" Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.8.102:50888 [22/Feb/2022:03:07:06.260] ft-unipay-https~ unipay-api/unipay-p11.zift.io 0/0/5/47/52 200 507 - - ---- 46/46/2/1/0 0/0 {|} "GET /pingdom/ HTTP/1.1" Thank you James Anderson A: Unfortunately I don't have an absolute answer for you but my experience with HAProxy and a couple of documents that I've found brings me to believe that the timestamp provided in the logs (between [ and ]) is the time when HAProxy received and began processing the request. In this doc https://cdn.haproxy.com/wp-content/uploads/2017/07/aloha_load_balancer_memo_log.pdf they say: %t exact date when the TCP connection was received by haproxy Also, in the official documentation https://cbonte.github.io/haproxy-dconv/1.7/configuration.html when referring to %t - "accept_date" is the exact date when the connection was received by haproxy Additionally, when I need to know when the request ended, I add to this timestamp the value of %Tt (milliseconds). Some info here https://www.haproxy.com/blog/haproxy-log-customization/ A: I log requests using rsyslog. This first let me to believe haproxy logs chronologically. But then I took a good look, and it was the rsyslog timestamp that shows end of request and the generic haproxy timestamp (between []) that shows the start. Your question just changed my search from a 4 second network stall into a 4 second application stall. So if you use (r)syslog, the loglines will log the end of the requests on second granularity, and the haproxy shows the start of the request with ms granularity and the total request time in ms.
Haproxy Logging At Time of Request & Time of Response
We have been using haproxy for a while now and it has been great for getting detailed logs and being able to customize much of the information we want to extract out of our traffic. However, I have made some layer 8 mistakes by thinking that timestamps on log entries are the time of the request to the server. Through testing, watching log entries, and studying haproxy documentation it looks like logging takes place at the time of the response and includes information about how long in milliseconds those requests took. This can be helpful, however when trying to align requests from the haproxy log with application logs it ends up being difficult to align log entries without doing some time math to get as close as possible to the true request time. Has my own study or experience led me astray in my understanding of when haproxy populates logs and the date/time stamp found in the logs? Or is there a way to configure haproxy to make a log entry at the time of the request as well as at the time of the response? Server Type: Centos HAProxy Version: 1.8 Example Log Entries: Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.6.101:35804 [22/Feb/2022:03:07:06.283] ft-unipay-https~ unipay-api/unipay-p13.zift.io 0/0/6/13/19 200 507 - - ---- 48/48/4/2/0 0/0 {|} "GET /pingdom/index.jsp HTTP/1.1"``` Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.6.103:39836 [22/Feb/2022:03:07:06.285] ft-unipay-https~ unipay-api/unipay-p13.zift.io 0/0/5/12/17 200 507 - - ---- 48/48/3/1/0 0/0 {|} "GET /pingdom/index.jsp HTTP/1.1" Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.6.103:39836 [22/Feb/2022:03:07:06.285] ft-unipay-https~ unipay-api/unipay-p13.zift.io 0/0/5/12/17 200 507 - - ---- 48/48/3/1/0 0/0 {|} "GET /pingdom/index.jsp HTTP/1.1" Feb 22 03:07:06 unipay-p11 haproxy[2286]: 10.13.8.102:50888 [22/Feb/2022:03:07:06.260] ft-unipay-https~ unipay-api/unipay-p11.zift.io 0/0/5/47/52 200 507 - - ---- 46/46/2/1/0 0/0 {|} "GET /pingdom/ HTTP/1.1" Thank you James Anderson
[ "Unfortunately I don't have an absolute answer for you but my experience with HAProxy and a couple of documents that I've found brings me to believe that the timestamp provided in the logs (between [ and ]) is the time when HAProxy received and began processing the request.\nIn this doc https://cdn.haproxy.com/wp-content/uploads/2017/07/aloha_load_balancer_memo_log.pdf\nthey say:\n%t exact date when the TCP connection was received by haproxy\n\nAlso, in the official documentation https://cbonte.github.io/haproxy-dconv/1.7/configuration.html\nwhen referring to %t\n - \"accept_date\" is the exact date when the connection was received by haproxy\n\nAdditionally, when I need to know when the request ended, I add to this timestamp the value of %Tt (milliseconds). Some info here https://www.haproxy.com/blog/haproxy-log-customization/\n", "I log requests using rsyslog. This first let me to believe haproxy logs chronologically. But then I took a good look, and it was the rsyslog timestamp that shows end of request and the generic haproxy timestamp (between []) that shows the start.\nYour question just changed my search from a 4 second network stall into a 4 second application stall.\nSo if you use (r)syslog, the loglines will log the end of the requests on second granularity, and the haproxy shows the start of the request with ms granularity and the total request time in ms.\n" ]
[ 0, 0 ]
[]
[]
[ "configuration", "haproxy", "logging" ]
stackoverflow_0072003367_configuration_haproxy_logging.txt
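To make both timestamps derivable from the log line itself, a log-format close to the default HTTP format can be used; the variables below are from the HAProxy 1.8 documentation ([%t] is the accept date, %Tt the total duration in milliseconds, so the end of the request is %t plus %Tt, which is also roughly the syslog timestamp on the line):

frontend ft-unipay-https
    # ... existing bind / log / backend settings ...
    log-format "%ci:%cp [%t] %ft %b/%s %Tq/%Tw/%Tc/%Tr/%Tt %ST %B %{+Q}r"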
Q: How to use unsafe get a byte slice from a string without memory copy I have read about "https://github.com/golang/go/issues/25484" about no-copy conversion from []byte to string. I am wondering if there is a way to convert a string to a byte slice without memory copy? I am writing a program which processes terra-bytes data, if every string is copied twice in memory, it will slow down the progress. And I do not care about mutable/unsafe, only internal usage, I just need the speed as fast as possible. Example: var s string // some processing on s, for some reasons, I must use string here // ... // then output to a writer gzipWriter.Write([]byte(s)) // !!! Here I want to avoid the memory copy, no WriteString So the question is: is there a way to prevent from the memory copying? I know maybe I need the unsafe package, but I do not know how. I have searched a while, no answer till now, neither the SO showed related answers works. A: Getting the content of a string as a []byte without copying in general is only possible using unsafe, because strings in Go are immutable, and without a copy it would be possible to modify the contents of the string (by changing the elements of the byte slice). So using unsafe, this is how it could look like (corrected, working solution): func unsafeGetBytes(s string) []byte { return (*[0x7fff0000]byte)(unsafe.Pointer( (*reflect.StringHeader)(unsafe.Pointer(&s)).Data), )[:len(s):len(s)] } This solution is from Ian Lance Taylor. One thing to note here: the empty string "" has no bytes as its length is zero. This means there is no guarantee what the Data field may be, it may be zero or an arbitrary address shared among the zero-size variables. If an empty string may be passed, that must be checked explicitly (although there's no need to get the bytes of an empty string without copying...): func unsafeGetBytes(s string) []byte { if s == "" { return nil // or []byte{} } return (*[0x7fff0000]byte)(unsafe.Pointer( (*reflect.StringHeader)(unsafe.Pointer(&s)).Data), )[:len(s):len(s)] } Original, wrong solution was: func unsafeGetBytesWRONG(s string) []byte { return *(*[]byte)(unsafe.Pointer(&s)) // WRONG!!!! } See Nuno Cruces's answer below for reasoning. Testing it: s := "hi" data := unsafeGetBytes(s) fmt.Println(data, string(data)) data = unsafeGetBytes("gopher") fmt.Println(data, string(data)) Output (try it on the Go Playground): [104 105] hi [103 111 112 104 101 114] gopher BUT: You wrote you want this because you need performance. You also mentioned you want to compress the data. Please know that compressing data (using gzip) requires a lot more computation than just copying a few bytes! You will not see any noticeable performance gain by using this! Instead when you want to write strings to an io.Writer, it's recommended to do it via io.WriteString() function which if possible will do so without making a copy of the string (by checking and calling WriteString() method which if exists is most likely does it better than copying the string). For details, see What's the difference between ResponseWriter.Write and io.WriteString? There are also ways to access the contents of a string without converting it to []byte, such as indexing, or using a loop where the compiler optimizes away the copy: s := "something" for i, v := range []byte(s) { // Copying s is optimized away // ... } Also see related questions: []byte(string) vs []byte(*string) What are the possible consequences of using unsafe conversion from []byte to string in go? 
What is the difference between the string and []byte in Go? Does conversion between alias types in Go create copies? How does type conversion internally work? What is the memory utilization for the same? A: After some extensive investigation, I believe I've discovered the most efficient way of getting a []byte from a string as of Go 1.17 (this is for i386/x86_64 gc; I haven't tested other architectures.) The trade-off of being efficient code here is being inefficient to code, though. Before I say anything else, it should be made clear that the differences are ultimately very small and probably inconsequential -- the info below is for fun/educational purposes only. Summary With some minor alterations, the accepted answer illustrating the technique of slicing a pointer to array is the most efficient way. That being said, I wouldn't be surprised if unsafe.Slice becomes the (decisively) better choice in the future. unsafe.Slice unsafe.Slice currently has the advantage of being slightly more readable, but I'm skeptical about it's performance. It looks like it makes a call to runtime.unsafeslice. The following is the gc amd64 1.17 assembly of the function provided in Atamiri's answer (FUNCDATA omitted). Note the stack check (lack of NOSPLIT): unsafeGetBytes_pc0: TEXT "".unsafeGetBytes(SB), ABIInternal, $48-16 CMPQ SP, 16(R14) PCDATA $0, $-2 JLS unsafeGetBytes_pc86 PCDATA $0, $-1 SUBQ $48, SP MOVQ BP, 40(SP) LEAQ 40(SP), BP PCDATA $0, $-2 MOVQ BX, ""..autotmp_4+24(SP) MOVQ AX, "".s+56(SP) MOVQ BX, "".s+64(SP) MOVQ "".s+56(SP), DX PCDATA $0, $-1 MOVQ DX, ""..autotmp_5+32(SP) LEAQ type.uint8(SB), AX MOVQ BX, CX MOVQ DX, BX PCDATA $1, $1 CALL runtime.unsafeslice(SB) MOVQ ""..autotmp_5+32(SP), AX MOVQ ""..autotmp_4+24(SP), BX MOVQ BX, CX MOVQ 40(SP), BP ADDQ $48, SP RET unsafeGetBytes_pc86: NOP PCDATA $1, $-1 PCDATA $0, $-2 MOVQ AX, 8(SP) MOVQ BX, 16(SP) CALL runtime.morestack_noctxt(SB) MOVQ 8(SP), AX MOVQ 16(SP), BX PCDATA $0, $-1 JMP unsafeGetBytes_pc0 Other unimportant fun facts about the above (easily subject to change): compiled size of 3326B; has an inline cost of 7; correct escape analysis: s leaks to ~r1 with derefs=0. Carefully Modifying *reflect.SliceHeader This method has the advantage/disadvantage of letting one modify the internal state of a slice directly. Unfortunately, due it's multiline nature and use of uintptr, the GC can easily mess things up if one is not careful about keeping a reference to the original string. (Here I avoided creating temporary pointers to reduce inline cost and to avoid needing to add runtime.KeepAlive): func unsafeGetBytes(s string) (b []byte) { (*reflect.SliceHeader)(unsafe.Pointer(&b)).Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s) (*reflect.SliceHeader)(unsafe.Pointer(&b)).Len = len(s) return } The corresponding assembly on amd64 (FUNCDATA omitted): TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $32-16 SUBQ $32, SP MOVQ BP, 24(SP) LEAQ 24(SP), BP MOVQ AX, "".s+40(SP) MOVQ BX, "".s+48(SP) MOVQ $0, "".b(SP) MOVUPS X15, "".b+8(SP) MOVQ "".s+40(SP), DX MOVQ DX, "".b(SP) MOVQ "".s+48(SP), CX MOVQ CX, "".b+16(SP) MOVQ "".s+48(SP), BX MOVQ BX, "".b+8(SP) MOVQ "".b(SP), AX MOVQ 24(SP), BP ADDQ $32, SP RET Other unimportant fun facts about the above (easily subject to change): compiled size of 3700B; has an inline cost of 20; subpar escape analysis: s leaks to {heap} with derefs=0. Unsafer version of modifying SliceHeader Adapted from Nuno Cruces' answer. 
This relies on the inherent structural similarity between StringHeader and SliceHeader, so in a sense it breaks "more easily". Additionally, it temporarily creates an illegal state where cap(b) (being 0) is less than len(b). func unsafeGetBytes(s string) (b []byte) { *(*string)(unsafe.Pointer(&b)) = s (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s) return } Corresponding assembly (FUNCDATA omitted): TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $32-16 SUBQ $32, SP MOVQ BP, 24(SP) LEAQ 24(SP), BP MOVQ AX, "".s+40(FP) MOVQ $0, "".b(SP) MOVUPS X15, "".b+8(SP) MOVQ AX, "".b(SP) MOVQ BX, "".b+8(SP) MOVQ BX, "".b+16(SP) MOVQ "".b(SP), AX MOVQ BX, CX MOVQ 24(SP), BP ADDQ $32, SP NOP RET Other unimportant details: compiled size 3636B, inline cost of 11, with subpar escape analysis: s leaks to {heap} with derefs=0. Slicing a pointer to array This is the accepted answer (shown here for comparison) -- its primary disadvantage is its ugliness (viz. magic number 0x7fff0000). There's also the tiniest possibility of getting a string bigger than the array, and an unavoidable bounds check. func unsafeGetBytes(s string) []byte { return (*[0x7fff0000]byte)(unsafe.Pointer( (*reflect.StringHeader)(unsafe.Pointer(&s)).Data), )[:len(s):len(s)] } Corresponding assembly (FUNCDATA removed). TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $24-16 SUBQ $24, SP MOVQ BP, 16(SP) LEAQ 16(SP), BP PCDATA $0, $-2 MOVQ AX, "".s+32(SP) MOVQ BX, "".s+40(SP) MOVQ "".s+32(SP), AX PCDATA $0, $-1 TESTB AL, (AX) NOP CMPQ BX, $2147418112 JHI unsafeGetBytes_pc54 MOVQ BX, CX MOVQ 16(SP), BP ADDQ $24, SP RET unsafeGetBytes_pc54: MOVQ BX, DX MOVL $2147418112, BX PCDATA $1, $1 NOP CALL runtime.panicSlice3Alen(SB) XCHGL AX, AX Other unimportant details: compiled size 3142B, inline cost of 9, with correct escape analysis: s leaks to ~r1 with derefs=0 Note the runtime.panicSlice3Alen -- this is bounds check that checks that len(s) is within 0x7fff0000. Improved slicing pointer to array This is what I've concluded to be the most efficient method as of Go 1.17. I basically modified the accepted answer to eliminate the bounds check, and found a "more meaningful" constant (math.MaxInt32) to use than 0x7fff0000. Using MaxInt32 preserves 32-bit compatibility. func unsafeGetBytes(s string) []byte { const MaxInt32 = 1<<31 - 1 return (*[MaxInt32]byte)(unsafe.Pointer((*reflect.StringHeader)( unsafe.Pointer(&s)).Data))[:len(s)&MaxInt32:len(s)&MaxInt32] } Corresponding assembly (FUNCDATA removed): TEXT "".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $0-16 PCDATA $0, $-2 MOVQ AX, "".s+8(SP) MOVQ BX, "".s+16(SP) MOVQ "".s+8(SP), AX PCDATA $0, $-1 TESTB AL, (AX) ANDQ $2147483647, BX MOVQ BX, CX RET Other unimportant details: compiled size 3188B, inline cost of 13, and correct escape analysis: s leaks to ~r1 with derefs=0 A: In go 1.17, I'd recommend unsafe.Slice as more readable: unsafe.Slice((*byte)(unsafe.Pointer((*reflect.StringHeader)(unsafe.Pointer(&s)).Data)), len(s)) I think that this also works (doesn't violate any unsafe.Pointer rules), with the benefit that it works for a const s: *(*[]byte)(unsafe.Pointer(&struct{string; int}{s, len(s)})) Commentary bellow is regarding the accepted answer as it originally stood. The accepted answer now mentions an (authoritative) solution from Ian Lance Taylor. Keeping it as it points out a common error. The accepted answer is wrong, and may produce the panic @RFC mentioned in the comments. The explanation by @icza about GC and keep alive is misguided. 
The reason capacity is zero (or even an arbitrary value) is more prosaic. A slice is: type SliceHeader struct { Data uintptr Len int Cap int } A string is: type StringHeader struct { Data uintptr Len int } Converting a byte slice to a string can be "safely" done as the strings.Builder does it: func (b *Builder) String() string { return *(*string)(unsafe.Pointer(&b.buf)) } This will copy the Data pointer and Len from the slice to the string. The opposite conversion is not "safe" because Cap doesn't get set to the correct value. The following (originally by me) is also wrong because it violates unsafe.Pointer rule #1. This is the correct code, that fixes the panic: var buf = *(*[]byte)(unsafe.Pointer(&str)) (*reflect.SliceHeader)(unsafe.Pointer(&buf)).Cap = len(str) Or perhaps: var buf []byte *(*string)(unsafe.Pointer(&buf)) = str (*reflect.SliceHeader)(unsafe.Pointer(&buf)).Cap = len(str) I should add that all these conversions are unsafe in the sense that strings are expected to be immutable, and byte arrays/slices mutable. But if you know for sure that the byte slice won't be mutated, you won't get bounds (or GC) issues with the above conversions. A: In Go 1.17, one can now use unsafe.Slice, so the accepted answer can be rewritten as follows: func unsafeGetBytes(s string) []byte { return unsafe.Slice((*byte)(unsafe.Pointer((*reflect.StringHeader)(unsafe.Pointer(&s)).Data)), len(s)) } A: I managed to get the goal by this: func TestString(t *testing.T) { b := []byte{'a', 'b', 'c', '1', '2', '3', '4'} s := *(*string)(unsafe.Pointer(&b)) sb := *(*[]byte)(unsafe.Pointer(&s)) addr1 := unsafe.Pointer(&b) addr2 := unsafe.Pointer(&s) addr3 := unsafe.Pointer(&sb) fmt.Print("&b=", addr1, "\n&s=", addr2, "\n&sb=", addr3, "\n") hdr1 := (*reflect.StringHeader)(unsafe.Pointer(&b)) hdr2 := (*reflect.SliceHeader)(unsafe.Pointer(&s)) hdr3 := (*reflect.SliceHeader)(unsafe.Pointer(&sb)) fmt.Print("b.data=", hdr1.Data, "\ns.data=", hdr2.Data, "\nsb.data=", hdr3.Data, "\n") b[0] = 'X' sb[1] = 'Y' // if sb is from a string directly, this will cause nil panic fmt.Print("s=", s, "\nsb=") for _, c := range sb { fmt.Printf("%c", c) } fmt.Println() } Output: === RUN TestString &b=0xc000218000 &s=0xc00021a000 &sb=0xc000218020 b.data=824635867152 s.data=824635867152 sb.data=824635867152 s=XYc1234 sb=XYc1234 These variables all share the same memory. A: Go 1.20 (February 2023) You can use unsafe.StringData to greatly simplify YenForYang's answer: StringData returns a pointer to the underlying bytes of str. For an empty string the return value is unspecified, and may be nil. Since Go strings are immutable, the bytes returned by StringData must not be modified. func main() { str := "foobar" d := unsafe.StringData(str) b := unsafe.Slice(d, len(str)) fmt.Printf("%T, %s\n", b, b) // []uint8, foobar (byte is alias of uint8) } Go tip playground: https://go.dev/play/p/FIXe0rb8YHE?v=gotip Remember that you can't assign to b[n]. The memory is still read-only.
How to use unsafe to get a byte slice from a string without memory copy
I have read about "https://github.com/golang/go/issues/25484" about no-copy conversion from []byte to string. I am wondering if there is a way to convert a string to a byte slice without memory copy? I am writing a program which processes terra-bytes data, if every string is copied twice in memory, it will slow down the progress. And I do not care about mutable/unsafe, only internal usage, I just need the speed as fast as possible. Example: var s string // some processing on s, for some reasons, I must use string here // ... // then output to a writer gzipWriter.Write([]byte(s)) // !!! Here I want to avoid the memory copy, no WriteString So the question is: is there a way to prevent from the memory copying? I know maybe I need the unsafe package, but I do not know how. I have searched a while, no answer till now, neither the SO showed related answers works.
[ "Getting the content of a string as a []byte without copying in general is only possible using unsafe, because strings in Go are immutable, and without a copy it would be possible to modify the contents of the string (by changing the elements of the byte slice).\nSo using unsafe, this is how it could look like (corrected, working solution):\nfunc unsafeGetBytes(s string) []byte {\n return (*[0x7fff0000]byte)(unsafe.Pointer(\n (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),\n )[:len(s):len(s)]\n}\n\nThis solution is from Ian Lance Taylor.\nOne thing to note here: the empty string \"\" has no bytes as its length is zero. This means there is no guarantee what the Data field may be, it may be zero or an arbitrary address shared among the zero-size variables. If an empty string may be passed, that must be checked explicitly (although there's no need to get the bytes of an empty string without copying...):\nfunc unsafeGetBytes(s string) []byte {\n if s == \"\" {\n return nil // or []byte{}\n }\n return (*[0x7fff0000]byte)(unsafe.Pointer(\n (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),\n )[:len(s):len(s)]\n}\n\nOriginal, wrong solution was:\nfunc unsafeGetBytesWRONG(s string) []byte {\n return *(*[]byte)(unsafe.Pointer(&s)) // WRONG!!!!\n}\n\nSee Nuno Cruces's answer below for reasoning.\nTesting it:\ns := \"hi\"\ndata := unsafeGetBytes(s)\nfmt.Println(data, string(data))\n\ndata = unsafeGetBytes(\"gopher\")\nfmt.Println(data, string(data))\n\nOutput (try it on the Go Playground):\n[104 105] hi\n[103 111 112 104 101 114] gopher\n\nBUT: You wrote you want this because you need performance. You also mentioned you want to compress the data. Please know that compressing data (using gzip) requires a lot more computation than just copying a few bytes! You will not see any noticeable performance gain by using this!\nInstead when you want to write strings to an io.Writer, it's recommended to do it via io.WriteString() function which if possible will do so without making a copy of the string (by checking and calling WriteString() method which if exists is most likely does it better than copying the string). For details, see What's the difference between ResponseWriter.Write and io.WriteString?\nThere are also ways to access the contents of a string without converting it to []byte, such as indexing, or using a loop where the compiler optimizes away the copy:\ns := \"something\"\nfor i, v := range []byte(s) { // Copying s is optimized away\n // ...\n}\n\nAlso see related questions:\n[]byte(string) vs []byte(*string)\nWhat are the possible consequences of using unsafe conversion from []byte to string in go?\nWhat is the difference between the string and []byte in Go?\nDoes conversion between alias types in Go create copies?\nHow does type conversion internally work? What is the memory utilization for the same?\n", "After some extensive investigation, I believe I've discovered the most efficient way of getting a []byte from a string as of Go 1.17 (this is for i386/x86_64 gc; I haven't tested other architectures.) The trade-off of being efficient code here is being inefficient to code, though.\nBefore I say anything else, it should be made clear that the differences are ultimately very small and probably inconsequential -- the info below is for fun/educational purposes only.\n\nSummary\nWith some minor alterations, the accepted answer illustrating the technique of slicing a pointer to array is the most efficient way. 
That being said, I wouldn't be surprised if unsafe.Slice becomes the (decisively) better choice in the future.\n\nunsafe.Slice\nunsafe.Slice currently has the advantage of being slightly more readable, but I'm skeptical about it's performance. It looks like it makes a call to runtime.unsafeslice. The following is the gc amd64 1.17 assembly of the function provided in Atamiri's answer (FUNCDATA omitted). Note the stack check (lack of NOSPLIT):\nunsafeGetBytes_pc0:\n TEXT \"\".unsafeGetBytes(SB), ABIInternal, $48-16\n CMPQ SP, 16(R14)\n PCDATA $0, $-2\n JLS unsafeGetBytes_pc86\n PCDATA $0, $-1\n SUBQ $48, SP\n MOVQ BP, 40(SP)\n LEAQ 40(SP), BP\n\n PCDATA $0, $-2\n MOVQ BX, \"\"..autotmp_4+24(SP)\n MOVQ AX, \"\".s+56(SP)\n MOVQ BX, \"\".s+64(SP)\n MOVQ \"\".s+56(SP), DX\n PCDATA $0, $-1\n MOVQ DX, \"\"..autotmp_5+32(SP)\n LEAQ type.uint8(SB), AX\n MOVQ BX, CX\n MOVQ DX, BX\n PCDATA $1, $1\n CALL runtime.unsafeslice(SB)\n MOVQ \"\"..autotmp_5+32(SP), AX\n MOVQ \"\"..autotmp_4+24(SP), BX\n MOVQ BX, CX\n MOVQ 40(SP), BP\n ADDQ $48, SP\n RET\nunsafeGetBytes_pc86:\n NOP\n PCDATA $1, $-1\n PCDATA $0, $-2\n MOVQ AX, 8(SP)\n MOVQ BX, 16(SP)\n CALL runtime.morestack_noctxt(SB)\n MOVQ 8(SP), AX\n MOVQ 16(SP), BX\n PCDATA $0, $-1\n JMP unsafeGetBytes_pc0\n\nOther unimportant fun facts about the above (easily subject to change): compiled size of 3326B; has an inline cost of 7; correct escape analysis: s leaks to ~r1 with derefs=0.\n\nCarefully Modifying *reflect.SliceHeader\nThis method has the advantage/disadvantage of letting one modify the internal state of a slice directly. Unfortunately, due it's multiline nature and use of uintptr, the GC can easily mess things up if one is not careful about keeping a reference to the original string. (Here I avoided creating temporary pointers to reduce inline cost and to avoid needing to add runtime.KeepAlive):\nfunc unsafeGetBytes(s string) (b []byte) {\n (*reflect.SliceHeader)(unsafe.Pointer(&b)).Data = (*reflect.StringHeader)(unsafe.Pointer(&s)).Data\n (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s)\n (*reflect.SliceHeader)(unsafe.Pointer(&b)).Len = len(s)\n return\n}\n\nThe corresponding assembly on amd64 (FUNCDATA omitted):\n TEXT \"\".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $32-16\n SUBQ $32, SP\n MOVQ BP, 24(SP)\n LEAQ 24(SP), BP\n\n MOVQ AX, \"\".s+40(SP)\n MOVQ BX, \"\".s+48(SP)\n MOVQ $0, \"\".b(SP)\n MOVUPS X15, \"\".b+8(SP)\n MOVQ \"\".s+40(SP), DX\n MOVQ DX, \"\".b(SP)\n MOVQ \"\".s+48(SP), CX\n MOVQ CX, \"\".b+16(SP)\n MOVQ \"\".s+48(SP), BX\n MOVQ BX, \"\".b+8(SP)\n MOVQ \"\".b(SP), AX\n MOVQ 24(SP), BP\n ADDQ $32, SP\n RET\n\nOther unimportant fun facts about the above (easily subject to change): compiled size of 3700B; has an inline cost of 20; subpar escape analysis: s leaks to {heap} with derefs=0.\n\nUnsafer version of modifying SliceHeader\nAdapted from Nuno Cruces' answer. This relies on the inherent structural similarity between StringHeader and SliceHeader, so in a sense it breaks \"more easily\". 
Additionally, it temporarily creates an illegal state where cap(b) (being 0) is less than len(b).\nfunc unsafeGetBytes(s string) (b []byte) {\n *(*string)(unsafe.Pointer(&b)) = s\n (*reflect.SliceHeader)(unsafe.Pointer(&b)).Cap = len(s)\n return\n}\n\nCorresponding assembly (FUNCDATA omitted):\n TEXT \"\".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $32-16\n SUBQ $32, SP\n MOVQ BP, 24(SP)\n LEAQ 24(SP), BP\n MOVQ AX, \"\".s+40(FP)\n\n MOVQ $0, \"\".b(SP)\n MOVUPS X15, \"\".b+8(SP)\n MOVQ AX, \"\".b(SP)\n MOVQ BX, \"\".b+8(SP)\n MOVQ BX, \"\".b+16(SP)\n MOVQ \"\".b(SP), AX\n MOVQ BX, CX\n MOVQ 24(SP), BP\n ADDQ $32, SP\n NOP\n RET\n\nOther unimportant details: compiled size 3636B, inline cost of 11, with subpar escape analysis: s leaks to {heap} with derefs=0.\n\nSlicing a pointer to array\nThis is the accepted answer (shown here for comparison) -- its primary disadvantage is its ugliness (viz. magic number 0x7fff0000). There's also the tiniest possibility of getting a string bigger than the array, and an unavoidable bounds check.\nfunc unsafeGetBytes(s string) []byte {\n return (*[0x7fff0000]byte)(unsafe.Pointer(\n (*reflect.StringHeader)(unsafe.Pointer(&s)).Data),\n )[:len(s):len(s)]\n}\n\nCorresponding assembly (FUNCDATA removed).\n TEXT \"\".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $24-16\n SUBQ $24, SP\n MOVQ BP, 16(SP)\n LEAQ 16(SP), BP\n\n PCDATA $0, $-2\n MOVQ AX, \"\".s+32(SP)\n MOVQ BX, \"\".s+40(SP)\n MOVQ \"\".s+32(SP), AX\n PCDATA $0, $-1\n TESTB AL, (AX)\n NOP\n CMPQ BX, $2147418112\n JHI unsafeGetBytes_pc54\n MOVQ BX, CX\n MOVQ 16(SP), BP\n ADDQ $24, SP\n RET\nunsafeGetBytes_pc54:\n MOVQ BX, DX\n MOVL $2147418112, BX\n PCDATA $1, $1\n NOP\n CALL runtime.panicSlice3Alen(SB)\n XCHGL AX, AX\n\nOther unimportant details: compiled size 3142B, inline cost of 9, with correct escape analysis: s leaks to ~r1 with derefs=0\nNote the runtime.panicSlice3Alen -- this is bounds check that checks that len(s) is within 0x7fff0000.\n\nImproved slicing pointer to array\nThis is what I've concluded to be the most efficient method as of Go 1.17. I basically modified the accepted answer to eliminate the bounds check, and found a \"more meaningful\" constant (math.MaxInt32) to use than 0x7fff0000. Using MaxInt32 preserves 32-bit compatibility.\nfunc unsafeGetBytes(s string) []byte {\n const MaxInt32 = 1<<31 - 1\n return (*[MaxInt32]byte)(unsafe.Pointer((*reflect.StringHeader)(\n unsafe.Pointer(&s)).Data))[:len(s)&MaxInt32:len(s)&MaxInt32]\n}\n\nCorresponding assembly (FUNCDATA removed):\n TEXT \"\".unsafeGetBytes(SB), NOSPLIT|ABIInternal, $0-16\n\n PCDATA $0, $-2\n MOVQ AX, \"\".s+8(SP)\n MOVQ BX, \"\".s+16(SP)\n MOVQ \"\".s+8(SP), AX\n PCDATA $0, $-1\n TESTB AL, (AX)\n ANDQ $2147483647, BX\n MOVQ BX, CX\n RET\n\nOther unimportant details: compiled size 3188B, inline cost of 13, and correct escape analysis: s leaks to ~r1 with derefs=0\n\n", "In go 1.17, I'd recommend unsafe.Slice as more readable:\nunsafe.Slice((*byte)(unsafe.Pointer((*reflect.StringHeader)(unsafe.Pointer(&s)).Data)), len(s))\n\nI think that this also works (doesn't violate any unsafe.Pointer rules), with the benefit that it works for a const s:\n*(*[]byte)(unsafe.Pointer(&struct{string; int}{s, len(s)}))\n\n\nCommentary bellow is regarding the accepted answer as it originally stood. The accepted answer now mentions an (authoritative) solution from Ian Lance Taylor. Keeping it as it points out a common error.\n\nThe accepted answer is wrong, and may produce the panic @RFC mentioned in the comments. 
The explanation by @icza about GC and keep alive is misguided.\nThe reason capacity is zero (or even an arbitrary value) is more prosaic.\nA slice is:\ntype SliceHeader struct {\n Data uintptr\n Len int\n Cap int\n}\n\nA string is:\ntype StringHeader struct {\n Data uintptr\n Len int\n}\n\nConverting a byte slice to a string can be \"safely\" done as the strings.Builder does it:\nfunc (b *Builder) String() string {\n return *(*string)(unsafe.Pointer(&b.buf))\n}\n\nThis will copy the Data pointer and Len from the slice to the string.\nThe opposite conversion is not \"safe\" because Cap doesn't get set to the correct value.\n\nThe following (originally by me) is also wrong because it violates unsafe.Pointer rule #1.\n\nThis is the correct code, that fixes the panic:\nvar buf = *(*[]byte)(unsafe.Pointer(&str))\n(*reflect.SliceHeader)(unsafe.Pointer(&buf)).Cap = len(str)\n\nOr perhaps:\nvar buf []byte\n*(*string)(unsafe.Pointer(&buf)) = str\n(*reflect.SliceHeader)(unsafe.Pointer(&buf)).Cap = len(str)\n\n\nI should add that all these conversions are unsafe in the sense that strings are expected to be immutable, and byte arrays/slices mutable.\nBut if you know for sure that the byte slice won't be mutated, you won't get bounds (or GC) issues with the above conversions.\n", "In Go 1.17, one can now use unsafe.Slice, so the accepted answer can be rewritten as follows:\n\nfunc unsafeGetBytes(s string) []byte {\n return unsafe.Slice((*byte)(unsafe.Pointer((*reflect.StringHeader)(unsafe.Pointer(&s)).Data)), len(s))\n}\n\n", "I managed to get the goal by this:\nfunc TestString(t *testing.T) {\n\n b := []byte{'a', 'b', 'c', '1', '2', '3', '4'}\n s := *(*string)(unsafe.Pointer(&b))\n sb := *(*[]byte)(unsafe.Pointer(&s))\n\n addr1 := unsafe.Pointer(&b)\n addr2 := unsafe.Pointer(&s)\n addr3 := unsafe.Pointer(&sb)\n\n fmt.Print(\"&b=\", addr1, \"\\n&s=\", addr2, \"\\n&sb=\", addr3, \"\\n\")\n\n hdr1 := (*reflect.StringHeader)(unsafe.Pointer(&b))\n hdr2 := (*reflect.SliceHeader)(unsafe.Pointer(&s))\n hdr3 := (*reflect.SliceHeader)(unsafe.Pointer(&sb))\n\n fmt.Print(\"b.data=\", hdr1.Data, \"\\ns.data=\", hdr2.Data, \"\\nsb.data=\", hdr3.Data, \"\\n\")\n\n b[0] = 'X'\n sb[1] = 'Y' // if sb is from a string directly, this will cause nil panic\n fmt.Print(\"s=\", s, \"\\nsb=\")\n for _, c := range sb {\n fmt.Printf(\"%c\", c)\n }\n fmt.Println()\n\n}\n\nOutput:\n=== RUN TestString\n&b=0xc000218000\n&s=0xc00021a000\n&sb=0xc000218020\nb.data=824635867152\ns.data=824635867152\nsb.data=824635867152\ns=XYc1234\nsb=XYc1234\n\nThese variables all share the same memory.\n", "Go 1.20 (February 2023)\nYou can use unsafe.StringData to greatly simplify YenForYang's answer:\n\nStringData returns a pointer to the underlying bytes of str. For an empty string the return value is unspecified, and may be nil.\nSince Go strings are immutable, the bytes returned by StringData must not be modified.\n\nfunc main() {\n str := \"foobar\"\n d := unsafe.StringData(str)\n b := unsafe.Slice(d, len(str))\n fmt.Printf(\"%T, %s\\n\", b, b) // []uint8, foobar (byte is alias of uint8)\n}\n\nGo tip playground: https://go.dev/play/p/FIXe0rb8YHE?v=gotip\nRemember that you can't assign to b[n]. The memory is still read-only.\n" ]
[ 13, 11, 8, 4, 2, 1 ]
[ "Simple, no reflect, and I think it is portable. s is your string and b is your bytes slice\nvar b []byte\nbb:=(*[3]uintptr)(unsafe.Pointer(&b))[:]\ncopy(bb, (*[2]uintptr)(unsafe.Pointer(&s))[:])\nbb[2] = bb[1]\n// use b\n\nRemember, bytes value should not be modified (will panic). re-slicing is ok (for example: bytes.split(b, []byte{','} )\n" ]
[ -1 ]
[ "go", "performance", "slice", "string" ]
stackoverflow_0059209493_go_performance_slice_string.txt
Q: Binary Search Using a Recursive Function taking an intro CS class on python and was met by this lab on my textbook. It calls for binary search using recursive functions. I have the rest of the program, I simply need to define the Binary Search function. Any help on this would be greatly appreciated. Here is the problem: Binary search can be implemented as a recursive algorithm. Each call makes a recursive call on one-half of the list the call received as an argument. Complete the recursive function binary_search() with the following specifications: Parameters: a list of integers a target integer lower and upper bounds within which the recursive call will search Return value: if found, the index within the list where the target is located -1 if target is not found The algorithm begins by choosing an index midway between the lower and upper bounds. If target == nums[index] return index If lower == upper, return lower if target == nums[lower] else -1 to indicate not found Otherwise call the function recursively with half the list as an argument: If nums[index] < target, search the list from index to upper If nums[index] > target, search the list from lower to index The list must be ordered, but duplicates are allowed. Once the search algorithm works correctly, add the following to binary_search(): Count the number of calls to binary_search(). Count the number of times when the target is compared to an element of the list. Note: lower == upper should not be counted. Hint: Use a global variable to count calls and comparisons. The input of the program consists of integers on one line followed by a target integer on the second. The template provides the main program and a helper function that reads a list from input. Ex: If the input is: 1 2 3 4 5 6 7 8 9 2 the output is: index: 1, recursions: 2, comparisons: 3 Here is my code: # TODO: Declare global variables here. recursions = 0 comparisons = 0 def binary_search(nums, target, lower, upper): global recursions global comparisons if target == nums[(lower+upper)/2]: if lower == upper: if target == nums[lower]: return lower else: target == -1 elif nums[(lower+upper)/2] < target: recursions =+1 comparisons =+1 binary_search(upper) elif nums[(lower+upper)/2] > target: recursions =+1 comparisons =+1 binary_search(lower) if __name__ == '__main__': # Input a list of nums from the first line of input nums = [int(n) for n in input().split()] # Input a target value target = int(input()) # Start off with default values: full range of list indices index = binary_search(nums, target, 0, len(nums) - 1) # Output the index where target was found in nums, and the # number of recursions and comparisons performed print(f'index: {index}, recursions: {recursions}, comparisons: {comparisons}') Error output: Traceback (most recent call last): File "main.py", line 34, in <module> index = binary_search(nums, target, 0, len(nums) - 1) File "main.py", line 8, in binary_search if target == nums[(lower+upper)/2]: TypeError: list indices must be integers or slices, not float A: Your error means that lower + upper is an odd number, which when you divide by 2 results in something like 3.5, 8.5, etc., which is an invalid index for a list. To solve this, use floored division (rounding down) with the double slash // operator if target == nums[(lower+upper)//2]: A: Once you've fixed the integer division you'll have a problem when you try to make the recursive call because you're not providing enough parameters. 
You may find this helpful: recursions = 0 comparisons = 0 def binary_search(lst, t): def _binary_search(lst, lo, hi, t): global recursions, comparisons recursions += 1 if hi >= lo: mid = (hi + lo) // 2 comparisons += 1 if lst[mid] == t: return mid comparisons += 1 if lst[mid] > t: return _binary_search(lst, lo, mid - 1, t) else: return _binary_search(lst, mid + 1, hi, t) else: return -1 return _binary_search(lst, 0, len(lst)-1, t) index = binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 2) print(f'{index=} {recursions=} {comparisons=}') Output: index=1 recursions=2 comparisons=3
Binary Search Using a Recursive Function
taking an intro CS class on python and was met by this lab on my textbook. It calls for binary search using recursive functions. I have the rest of the program, I simply need to define the Binary Search function. Any help on this would be greatly appreciated. Here is the problem: Binary search can be implemented as a recursive algorithm. Each call makes a recursive call on one-half of the list the call received as an argument. Complete the recursive function binary_search() with the following specifications: Parameters: a list of integers a target integer lower and upper bounds within which the recursive call will search Return value: if found, the index within the list where the target is located -1 if target is not found The algorithm begins by choosing an index midway between the lower and upper bounds. If target == nums[index] return index If lower == upper, return lower if target == nums[lower] else -1 to indicate not found Otherwise call the function recursively with half the list as an argument: If nums[index] < target, search the list from index to upper If nums[index] > target, search the list from lower to index The list must be ordered, but duplicates are allowed. Once the search algorithm works correctly, add the following to binary_search(): Count the number of calls to binary_search(). Count the number of times when the target is compared to an element of the list. Note: lower == upper should not be counted. Hint: Use a global variable to count calls and comparisons. The input of the program consists of integers on one line followed by a target integer on the second. The template provides the main program and a helper function that reads a list from input. Ex: If the input is: 1 2 3 4 5 6 7 8 9 2 the output is: index: 1, recursions: 2, comparisons: 3 Here is my code: # TODO: Declare global variables here. recursions = 0 comparisons = 0 def binary_search(nums, target, lower, upper): global recursions global comparisons if target == nums[(lower+upper)/2]: if lower == upper: if target == nums[lower]: return lower else: target == -1 elif nums[(lower+upper)/2] < target: recursions =+1 comparisons =+1 binary_search(upper) elif nums[(lower+upper)/2] > target: recursions =+1 comparisons =+1 binary_search(lower) if __name__ == '__main__': # Input a list of nums from the first line of input nums = [int(n) for n in input().split()] # Input a target value target = int(input()) # Start off with default values: full range of list indices index = binary_search(nums, target, 0, len(nums) - 1) # Output the index where target was found in nums, and the # number of recursions and comparisons performed print(f'index: {index}, recursions: {recursions}, comparisons: {comparisons}') Error output: Traceback (most recent call last): File "main.py", line 34, in <module> index = binary_search(nums, target, 0, len(nums) - 1) File "main.py", line 8, in binary_search if target == nums[(lower+upper)/2]: TypeError: list indices must be integers or slices, not float
[ "Your error means that lower + upper is an odd number, which when you divide by 2 results in something like 3.5, 8.5, etc., which is an invalid index for a list.\nTo solve this, use floored division (rounding down) with the double slash // operator\nif target == nums[(lower+upper)//2]:\n\n", "Once you've fixed the integer division you'll have a problem when you try to make the recursive call because you're not providing enough parameters.\nYou may find this helpful:\nrecursions = 0\ncomparisons = 0\n\ndef binary_search(lst, t):\n def _binary_search(lst, lo, hi, t):\n global recursions, comparisons\n recursions += 1\n if hi >= lo:\n mid = (hi + lo) // 2\n comparisons += 1\n if lst[mid] == t:\n return mid\n\n comparisons += 1\n\n if lst[mid] > t:\n return _binary_search(lst, lo, mid - 1, t)\n else:\n return _binary_search(lst, mid + 1, hi, t)\n else:\n return -1\n return _binary_search(lst, 0, len(lst)-1, t)\n \nindex = binary_search([1, 2, 3, 4, 5, 6, 7, 8, 9], 2)\n\nprint(f'{index=} {recursions=} {comparisons=}')\n\nOutput:\nindex=1 recursions=2 comparisons=3\n\n" ]
[ 1, 0 ]
[]
[]
[ "binary_search", "python", "recursion" ]
stackoverflow_0074658593_binary_search_python_recursion.txt
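A sketch of the lab function with the exact parameter list from the assignment; it is one reading of the spec, not the graded solution. The spec's literal "search the list from index to upper" can leave a two-element range unchanged, so this version recurses from index + 1 on that side; with the sample input it still reproduces index: 1, recursions: 2, comparisons: 3. The counter placement (recursions counts recursive calls, comparisons counts probes of target against nums[index]) is an assumption chosen to match that sample output.
recursions = 0   # recursive calls made by binary_search()
comparisons = 0  # probes of target against an element; the lower == upper case is not counted

def binary_search(nums, target, lower, upper):
    global recursions, comparisons
    if lower == upper:
        return lower if target == nums[lower] else -1
    index = (lower + upper) // 2
    comparisons += 1
    if target == nums[index]:
        return index
    recursions += 1
    if nums[index] < target:
        return binary_search(nums, target, index + 1, upper)
    return binary_search(nums, target, lower, index)

if __name__ == '__main__':
    nums = [1, 2, 3, 4, 5, 6, 7, 8, 9]
    target = 2
    index = binary_search(nums, target, 0, len(nums) - 1)
    print(f'index: {index}, recursions: {recursions}, comparisons: {comparisons}')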
Q: Expand matrix based on vector I want to turn matrix A into matrix B. Is there a better/more efficient approach with NumPy than the following? import numpy as np a = np.array([[0.02, 0.05, 0.05], [0.35, 0.10, 0.45], [0.08, 0.25, 0.15]]) w = np.array([0.75, 0.25]) B = np.insert(a, 9, a[2, :]).reshape(4, 3) B = np.insert(B.T, 12, B[:, 2]).reshape(4, 4).T B[2:4, :] = np.multiply(B[2:4, :].T, w).T A: .insert isn't a good choice here because numpy needs to allocate memory to create a whole new array every time you do so. Instead, just pre-allocate the size of array you need, and then assign to its slices. a = np.array([[0.02, 0.05, 0.05], [0.35, 0.10, 0.45], [0.08, 0.25, 0.15]]) w = np.array([0.75, 0.25]) b_shape = tuple(s + 1 for s in a.shape) # We need one more row and column than a b = np.zeros(b_shape) # Create zero array of required shape b[:a.shape[0], :a.shape[1]] = a # Set a in the top left corner b[:, -1] = b[:, -2] # Set last column from second-last column b[-1, :] = b[-2, :] # Set last row from second-last row b[-w.shape[0]:, :] = b[-w.shape[0]:, :] * w[:, None] # Multiply last two rows with `w` w[:, None] makes w a column vector (a 2x1 matrix), and numpy broadcasts the shapes to do the correct elementwise multiplication. This gives us the required b: array([[0.02 , 0.05 , 0.05 , 0.05 ], [0.35 , 0.1 , 0.45 , 0.45 ], [0.06 , 0.1875, 0.1125, 0.1125], [0.02 , 0.0625, 0.0375, 0.0375]]) Putting this in a function to compare runtimes against your approach: import numpy as np import timeit from matplotlib import pyplot as plt #% Define functions def func_insert(a, w): B = np.insert(a, a.size, a[-1, :]).reshape(a.shape[0]+1, a.shape[1]) B = np.insert(B.T, B.size, B[:, -1]).reshape(a.shape[0]+1, a.shape[1]+1).T B[-w.shape[0]:, :] = np.multiply(B[-w.shape[0]:, :].T, w).T return B def func_prealloc(a, w): b_shape = tuple(s + 1 for s in a.shape) b = np.zeros(b_shape) b[:a.shape[0], :a.shape[1]] = a b[:, -1] = b[:, -2] b[-1, :] = b[-2, :] b[-w.shape[0]:, :] = b[-w.shape[0]:, :] * w[:, None] return b #% Time function calls sizes = [3, 10, 50, 100, 500, 1000, 5000, 10_000] times = np.zeros((len(sizes), 2)) for i, size in enumerate(sizes): a = np.random.random((size, size)) w = np.random.random((2,)) times[i, 0] = timeit.timeit("func_insert(a, w)", globals=globals(), number=10) / 10 print(".") times[i, 1] = timeit.timeit("func_prealloc(a, w)", globals=globals(), number=10) / 10 print("x") #% Plot results fig, ax = plt.subplots() ax.plot(sizes, times[:, 0], label="Insert") ax.plot(sizes, times[:, 1], label="Prealloc") ax.set_xscale('log') ax.set_yscale('log') ax.legend() ax.set_xlabel('Array size (NxN)') ax.set_ylabel('Time per function call (s)') ax.grid(True) fig.tight_layout() ] There's a consistent 3-5x speedup by preallocating.
Expand matrix based on vector
I want to turn matrix A into matrix B. Is there a better/more efficient approach with NumPy than the following? import numpy as np a = np.array([[0.02, 0.05, 0.05], [0.35, 0.10, 0.45], [0.08, 0.25, 0.15]]) w = np.array([0.75, 0.25]) B = np.insert(a, 9, a[2, :]).reshape(4, 3) B = np.insert(B.T, 12, B[:, 2]).reshape(4, 4).T B[2:4, :] = np.multiply(B[2:4, :].T, w).T
[ ".insert isn't a good choice here because numpy needs to allocate memory to create a whole new array every time you do so. Instead, just pre-allocate the size of array you need, and then assign to its slices.\na = np.array([[0.02, 0.05, 0.05],\n [0.35, 0.10, 0.45],\n [0.08, 0.25, 0.15]])\n\nw = np.array([0.75, 0.25])\n\nb_shape = tuple(s + 1 for s in a.shape) # We need one more row and column than a\n\nb = np.zeros(b_shape) # Create zero array of required shape\n\nb[:a.shape[0], :a.shape[1]] = a # Set a in the top left corner\n\nb[:, -1] = b[:, -2] # Set last column from second-last column\nb[-1, :] = b[-2, :] # Set last row from second-last row\n\nb[-w.shape[0]:, :] = b[-w.shape[0]:, :] * w[:, None] # Multiply last two rows with `w`\n\nw[:, None] makes w a column vector (a 2x1 matrix), and numpy broadcasts the shapes to do the correct elementwise multiplication.\nThis gives us the required b:\narray([[0.02 , 0.05 , 0.05 , 0.05 ],\n [0.35 , 0.1 , 0.45 , 0.45 ],\n [0.06 , 0.1875, 0.1125, 0.1125],\n [0.02 , 0.0625, 0.0375, 0.0375]])\n\n\nPutting this in a function to compare runtimes against your approach:\nimport numpy as np\nimport timeit\nfrom matplotlib import pyplot as plt\n\n#% Define functions\n\ndef func_insert(a, w):\n B = np.insert(a, a.size, a[-1, :]).reshape(a.shape[0]+1, a.shape[1])\n B = np.insert(B.T, B.size, B[:, -1]).reshape(a.shape[0]+1, a.shape[1]+1).T\n B[-w.shape[0]:, :] = np.multiply(B[-w.shape[0]:, :].T, w).T\n return B\n\ndef func_prealloc(a, w):\n b_shape = tuple(s + 1 for s in a.shape)\n b = np.zeros(b_shape)\n\n b[:a.shape[0], :a.shape[1]] = a\n b[:, -1] = b[:, -2]\n b[-1, :] = b[-2, :]\n\n b[-w.shape[0]:, :] = b[-w.shape[0]:, :] * w[:, None]\n return b\n\n#% Time function calls\nsizes = [3, 10, 50, 100, 500, 1000, 5000, 10_000]\ntimes = np.zeros((len(sizes), 2))\n\nfor i, size in enumerate(sizes):\n a = np.random.random((size, size))\n w = np.random.random((2,))\n \n times[i, 0] = timeit.timeit(\"func_insert(a, w)\", globals=globals(), number=10) / 10\n print(\".\")\n times[i, 1] = timeit.timeit(\"func_prealloc(a, w)\", globals=globals(), number=10) / 10\n print(\"x\")\n \n#% Plot results\n\nfig, ax = plt.subplots()\nax.plot(sizes, times[:, 0], label=\"Insert\")\nax.plot(sizes, times[:, 1], label=\"Prealloc\")\nax.set_xscale('log')\nax.set_yscale('log')\nax.legend()\nax.set_xlabel('Array size (NxN)')\nax.set_ylabel('Time per function call (s)')\nax.grid(True)\nfig.tight_layout()\n\n]\nThere's a consistent 3-5x speedup by preallocating.\n" ]
[ 1 ]
[]
[]
[ "numpy" ]
stackoverflow_0074658819_numpy.txt
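A shorter sketch of the same expansion using np.pad with mode='edge', which repeats the last row and column in one call before the weighting step. Variable names follow the question; the use of np.pad here is my suggestion rather than something from the answers, and its relative speed was not measured.
import numpy as np

a = np.array([[0.02, 0.05, 0.05],
              [0.35, 0.10, 0.45],
              [0.08, 0.25, 0.15]])
w = np.array([0.75, 0.25])

# Pad one extra row and one extra column, replicating the edge values of a.
b = np.pad(a, ((0, 1), (0, 1)), mode='edge')
# Weight the last len(w) rows, broadcasting w as a column vector.
b[-w.shape[0]:, :] *= w[:, None]
print(b)
The printed result matches the preallocated version above.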
Q: Unhandled Exception due to failure importing database file. How to fix? I have hosted a Flask website on pythonanywhere, but I keep getting the "Unhandled Exception" error when visiting the website. I checked the error logs, and the problem is with a database file, named finance.db. The exact text from the error logs are below: 2022-04-26 07:23:21,225: Error running WSGI application 2022-04-26 07:23:21,239: RuntimeError: does not exist: finance.db 2022-04-26 07:23:21,240: File "/var/www/routsiddharth_pythonanywhere_com_wsgi.py", line 16, in <module> 2022-04-26 07:23:21,240: from app import app as application # noqa 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/mysite/app.py", line 39, in <module> 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/.local/lib/python3.9/site-packages/cs50/sql.py", line 60, in __init__ 2022-04-26 07:23:21,240: raise RuntimeError("does not exist: {}".format(matches.group(1))) 2022-04-26 07:23:21,241: *************************************************** 2022-04-26 07:23:21,241: If you're seeing an import error and don't know why, 2022-04-26 07:23:21,241: we have a dedicated help page to help you debug: 2022-04-26 07:23:21,241: https://help.pythonanywhere.com/pages/DebuggingImportError/ 2022-04-26 07:23:21,241: *************************************************** Here is how I imported the file: from cs50 import SQL db = SQL("sqlite:///finance.db") The finance.db file is in the same directory as the app.py file. How do I fix this error? A: You need to reference the database with the correct path: https://help.pythonanywhere.com/pages/NoSuchFileOrDirectory/ A: You should give the absolute path with one extra '/' db = SQL("sqlite:////home/routsiddharth/mysite/finance.db")
Unhandled Exception due to failure importing database file. How to fix?
I have hosted a Flask website on pythonanywhere, but I keep getting the "Unhandled Exception" error when visiting the website. I checked the error logs, and the problem is with a database file, named finance.db. The exact text from the error logs are below: 2022-04-26 07:23:21,225: Error running WSGI application 2022-04-26 07:23:21,239: RuntimeError: does not exist: finance.db 2022-04-26 07:23:21,240: File "/var/www/routsiddharth_pythonanywhere_com_wsgi.py", line 16, in <module> 2022-04-26 07:23:21,240: from app import app as application # noqa 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/mysite/app.py", line 39, in <module> 2022-04-26 07:23:21,240: 2022-04-26 07:23:21,240: File "/home/routsiddharth/.local/lib/python3.9/site-packages/cs50/sql.py", line 60, in __init__ 2022-04-26 07:23:21,240: raise RuntimeError("does not exist: {}".format(matches.group(1))) 2022-04-26 07:23:21,241: *************************************************** 2022-04-26 07:23:21,241: If you're seeing an import error and don't know why, 2022-04-26 07:23:21,241: we have a dedicated help page to help you debug: 2022-04-26 07:23:21,241: https://help.pythonanywhere.com/pages/DebuggingImportError/ 2022-04-26 07:23:21,241: *************************************************** Here is how I imported the file: from cs50 import SQL db = SQL("sqlite:///finance.db") The finance.db file is in the same directory as the app.py file. How do I fix this error?
[ "You need to reference the database with the correct path: https://help.pythonanywhere.com/pages/NoSuchFileOrDirectory/\n", "You should give the absolute path with one extra '/'\ndb = SQL(\"sqlite:////home/routsiddharth/mysite/finance.db\")\n\n" ]
[ 2, 0 ]
[]
[]
[ "flask", "python", "pythonanywhere" ]
stackoverflow_0072011386_flask_python_pythonanywhere.txt
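A sketch of the path fix in code, assuming finance.db sits next to app.py as stated in the question; db_path is my variable name. Building the location from __file__ avoids hard-coding the /home/... prefix, and the URL gains the extra slash automatically because the joined path is absolute. cs50's SQL still requires the file to already exist at that location.
import os
from cs50 import SQL

# Resolve finance.db relative to this file, not the WSGI process's working directory.
db_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "finance.db")
db = SQL(f"sqlite:///{db_path}")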
Q: Filtering data based on columns in the result in hasura query I have two tables A and B. A [ a_id, a_num] B [ b_id, b_num, a_id ] How can we write a single hasura query to fetch rows from B where b_num < a_num joining the table based on A.a_id = B.a_id? A: What you essentially want is to compare two columns in a where-clause. That is not supported by hasura at the moment. See this issue, which has been closed: https://github.com/hasura/graphql-engine/issues/1387 They suggest you to create a generated column, a view, or a native function. That does this for you. Imo, creating a view that provides only A and B combinations where b_num is smaller than a_num is best suited for your usecase. Here is an example on how to create a view, which is called filtered_a_b_combos: CREATE OR REPLACE VIEW filtered_a_b_combos AS ( SELECT A.a_id, B.b_id FROM A JOIN B ON A.a_id = B.a_id WHERE B.b_num < A.a_num )
Filtering data based on columns in the result in Hasura query
I have two tables A and B. A [ a_id, a_num] B [ b_id, b_num, a_id ] How can we write a single hasura query to fetch rows from B where b_num < a_num joining the table based on A.a_id = B.a_id?
[ "What you essentially want is to compare two columns in a where-clause. That is not supported by hasura at the moment.\nSee this issue, which has been closed: https://github.com/hasura/graphql-engine/issues/1387\nThey suggest you to create a generated column, a view, or a native function. That does this for you.\nImo, creating a view that provides only A and B combinations where b_num is smaller than a_num is best suited for your usecase.\nHere is an example on how to create a view, which is called filtered_a_b_combos:\nCREATE OR REPLACE VIEW filtered_a_b_combos AS (\n SELECT A.a_id, B.b_id\n FROM A\n JOIN B ON A.a_id = B.a_id\n WHERE B.b_num < A.a_num\n)\n\n" ]
[ 0 ]
[]
[]
[ "hasura" ]
stackoverflow_0074638857_hasura.txt
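Assuming the view above is created in the database and then tracked in the Hasura console, a query against it could look like the following; the root field name mirrors the view name by default, and the operation name FilteredABCombos is mine. Tracked views accept where, order_by and limit arguments like any tracked table.
query FilteredABCombos {
  filtered_a_b_combos {
    a_id
    b_id
  }
}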
Q: Add a column to indicate the repetition rate of selected columns across each row I have a dataframe like this: df <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1), Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2)) I'd like to add a column to indicate the maximum of repetition rate, from Total through Me column for each row. It should be something like: rep.rate = c(1,0.6,0.8,0.8,0.6) These values indicate the rate of repetition for the most common value across the five columns in each row. A: You can try, apply(df[-1], 1, function(i)max(prop.table(table(i)))) #[1] 1.0 0.6 0.8 0.8 0.6 A: Here's a more simplified dplyr solution that does not need a user-defined function: library(dplyr) df %>% rowwise %>% mutate(rep.rate = max(table(c_across(-ID)))/(ncol(.)-1)) %>% ungroup # # A tibble: 5 x 7 # ID Total Ma Mb Md Me rep.rate # <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> # 1 1 1 1 1 1 1 1 # 2 2 1 2 2 2 1 0.6 # 3 3 2 1 1 1 1 0.8 # 4 4 1 2 2 2 2 0.8 # 5 5 2 1 2 1 2 0.6 A: library(dplyr) df <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1), Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2)) cat_mode <- function(x){ cat_levels <- unique(x) out <- cat_levels[which.max(tabulate(match(x, cat_levels)))] return(out) } df %>% rowwise() %>% mutate(rep.rate = sum(c_across(Total:Me) == cat_mode(c_across(Total:Me)),na.rm =TRUE)/5 ) # A tibble: 5 x 7 # Rowwise: ID Total Ma Mb Md Me rep.rate <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> 1 1 1 1 1 1 1 1 2 2 1 2 2 2 1 0.6 3 3 2 1 1 1 1 0.8 4 4 1 2 2 2 2 0.8 5 5 2 1 2 1 2 0.6 A: df <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1), Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2)) library(dplyr, warn.conflicts = FALSE) get_repeat_rate <- function(x){ table <- table(x) props <- table/sum(table max_prop <- max(props) return(max_prop) } df |> rowwise() |> mutate(repeat_rate = get_repeat_rate(c_across(-ID))) #> # A tibble: 5 × 7 #> # Rowwise: #> ID Total Ma Mb Md Me repeat_rate #> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> #> 1 1 1 1 1 1 1 1 #> 2 2 1 2 2 2 1 0.6 #> 3 3 2 1 1 1 1 0.8 #> 4 4 1 2 2 2 2 0.8 #> 5 5 2 1 2 1 2 0.6 Created on 2022-12-02 with reprex v2.0.2 A: The steps to approach this problem are the following: df%>% rowwise()%>% mutate(rep.rate=sum(across(Total:Me)== max(Total:Me))/5) The rowwise() make all operations row wise. Then mutate is used to create the new column which is according to this: max(Total:Me) finds the max value. Then sum(across(Total:Me)== max) finds how many occurences are there of the max value in the current row. Then we divide this number by 5 to get the needed proportion. A: If the columns take only 2 values as in the example data: 0.5 + abs(rowMeans(df[,-1] == df[1, 2]) - 0.5) #> [1] 1.0 0.6 0.8 0.8 0.6 If they take more than 2 values, a vectorized solution using matrixStats::rowTabulates: library(matrixStats) rowMaxs( rowTabulates( matrix( match( unlist(df[,-1]), unique(unlist(df[,-1])) ), nrow(df) ) ) )/(ncol(df) - 1) #> [1] 1.0 0.6 0.8 0.8 0.6
Add a column to indicate the repetition rate of selected columns across each row
I have a dataframe like this: df <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1), Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2)) I'd like to add a column to indicate the maximum of repetition rate, from Total through Me column for each row. It should be something like: rep.rate = c(1,0.6,0.8,0.8,0.6) These values indicate the rate of repetition for the most common value across the five columns in each row.
[ "You can try,\napply(df[-1], 1, function(i)max(prop.table(table(i))))\n#[1] 1.0 0.6 0.8 0.8 0.6\n\n", "Here's a more simplified dplyr solution that does not need a user-defined function:\nlibrary(dplyr)\n\ndf %>% \n rowwise %>% \n mutate(rep.rate = max(table(c_across(-ID)))/(ncol(.)-1)) %>% \n ungroup\n\n# # A tibble: 5 x 7\n# ID Total Ma Mb Md Me rep.rate\n# <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n# 1 1 1 1 1 1 1 1 \n# 2 2 1 2 2 2 1 0.6\n# 3 3 2 1 1 1 1 0.8\n# 4 4 1 2 2 2 2 0.8\n# 5 5 2 1 2 1 2 0.6\n\n", "library(dplyr)\n\ndf <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1), Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2))\n\ncat_mode <-\n function(x){\n \n cat_levels <- unique(x)\n \n out <- cat_levels[which.max(tabulate(match(x, cat_levels)))]\n \n return(out)\n \n }\n\ndf %>% \n rowwise() %>% \n mutate(rep.rate = sum(c_across(Total:Me) == cat_mode(c_across(Total:Me)),na.rm =TRUE)/5 )\n\n# A tibble: 5 x 7\n# Rowwise: \n ID Total Ma Mb Md Me rep.rate\n <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n1 1 1 1 1 1 1 1 \n2 2 1 2 2 2 1 0.6\n3 3 2 1 1 1 1 0.8\n4 4 1 2 2 2 2 0.8\n5 5 2 1 2 1 2 0.6\n\n", "df <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1), Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2))\n\nlibrary(dplyr, warn.conflicts = FALSE)\n\nget_repeat_rate <- function(x){\n table <- table(x)\n props <- table/sum(table\n max_prop <- max(props)\n return(max_prop)\n}\n\ndf |> \n rowwise() |> \n mutate(repeat_rate = get_repeat_rate(c_across(-ID)))\n\n#> # A tibble: 5 × 7\n#> # Rowwise: \n#> ID Total Ma Mb Md Me repeat_rate\n#> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>\n#> 1 1 1 1 1 1 1 1 \n#> 2 2 1 2 2 2 1 0.6\n#> 3 3 2 1 1 1 1 0.8\n#> 4 4 1 2 2 2 2 0.8\n#> 5 5 2 1 2 1 2 0.6\n\nCreated on 2022-12-02 with reprex v2.0.2\n", "The steps to approach this problem are the following:\ndf%>%\nrowwise()%>%\nmutate(rep.rate=sum(across(Total:Me)== max(Total:Me))/5)\n\nThe rowwise() make all operations row wise. Then mutate is used to create the new column which is according to this: max(Total:Me) finds the max value. Then sum(across(Total:Me)== max) finds how many occurences are there of the max value in the current row. Then we divide this number by 5 to get the needed proportion.\n", "If the columns take only 2 values as in the example data:\n0.5 + abs(rowMeans(df[,-1] == df[1, 2]) - 0.5)\n#> [1] 1.0 0.6 0.8 0.8 0.6\n\nIf they take more than 2 values, a vectorized solution using matrixStats::rowTabulates:\nlibrary(matrixStats)\n\nrowMaxs(\n rowTabulates(\n matrix(\n match(\n unlist(df[,-1]),\n unique(unlist(df[,-1]))\n ), nrow(df)\n )\n )\n)/(ncol(df) - 1)\n#> [1] 1.0 0.6 0.8 0.8 0.6\n\n" ]
[ 4, 1, 0, 0, 0, 0 ]
[]
[]
[ "dataframe", "dplyr", "lapply", "r" ]
stackoverflow_0074656782_dataframe_dplyr_lapply_r.txt
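A minimal base-R restatement of the per-row proportion; note the closing parenthesis on sum(tab), which one of the helper snippets above drops. The function name rep_rate is mine, and the logic is the same proportion-of-the-modal-value idea as the accepted apply() answer.
df <- data.frame(ID = c(1,2,3,4,5), Total = c(1,1,2,1,2), Ma = c(1,2,1,2,1),
                 Mb = c(1,2,1,2,2), Md = c(1,2,1,2,1), Me = c(1,1,1,2,2))

rep_rate <- function(x) {
  tab <- table(x)
  max(tab) / sum(tab)   # share of the most common value in the row
}

df$rep.rate <- apply(df[-1], 1, rep_rate)
df$rep.rate
# [1] 1.0 0.6 0.8 0.8 0.6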
Q: Java Selenium scraping characters not displayed I am currently trying to scrape address data from a website. I am using Selenium and the code so far is very simple: public class Scraper { public static void main(String[] args){ String baseUrl = "https://www.dhs.de/service/suchthilfeverzeichnis?tx_wwdhseinrichtung2_fe1%5Baction%5D=list&tx_wwdhseinrichtung2_fe1%5Bangebot%5D=0&tx_wwdhseinrichtung2_fe1%5Bbland%5D=0&tx_wwdhseinrichtung2_fe1%5Bcontroller%5D=Entry&tx_wwdhseinrichtung2_fe1%5Bdo%5D=search&tx_wwdhseinrichtung2_fe1%5Bplzort%5D=&tx_wwdhseinrichtung2_fe1%5Bspezi%5D=0&tx_wwdhseinrichtung2_fe1%5Bsprache%5D=0&tx_wwdhseinrichtung2_fe1%5Bumkreis%5D=0&tx_wwdhseinrichtung2_fe1%5Bzielgruppe%5D=0&cHash=69c2978df7ab94262c40a535ae021a1d"; ChromeDriver driver = new ChromeDriver(); driver.get(baseUrl); List<WebElement> addresses = driver.findElements(By.className("entryshort")); for(WebElement address : addresses) { String strasse = address.findElement(By.className("strasse")).getText(); String plzort = address.findElement(By.className("plzort")).getText(); System.out.println(strasse + " " + plzort); } WebElement nextButton = driver.findElement(By.className("next")); if (driver != null){ nextButton.click(); } } } So far so good. The output is displayed like this: Landower Stra�e 15 18573 Dreschvitz Schmalbeinstra�e 32 50674 K�ln Clearly it is not UTF-8 encoded. Expected Output: Landower Straße 15 18573 Dreschvitz Schmalbeinstraße 32 50674 Köln I have tried everything I could think of / find to solve this, but failed so far. Any ideas how I can ensure correct output? A: I think the issue is with your IDE console, which doesn't support UTF-8 encoding. At OS level you can use: -Dconsole.encoding=UTF-8 -Dfile.encoding=UTF-8 In IntelliJ Idea you can set the encoding: In Eclipse: Preferences > General > Workspace, set Text file encoding to Other : UTF-8 I have checked your code and even tried to write the data on to .txt file and it was working fine for me public static void main(String[] args) throws IOException { String baseUrl = "https://www.dhs.de/service/suchthilfeverzeichnis?tx_wwdhseinrichtung2_fe1%5Baction%5D=list&tx_wwdhseinrichtung2_fe1%5Bangebot%5D=0&tx_wwdhseinrichtung2_fe1%5Bbland%5D=0&tx_wwdhseinrichtung2_fe1%5Bcontroller%5D=Entry&tx_wwdhseinrichtung2_fe1%5Bdo%5D=search&tx_wwdhseinrichtung2_fe1%5Bplzort%5D=&tx_wwdhseinrichtung2_fe1%5Bspezi%5D=0&tx_wwdhseinrichtung2_fe1%5Bsprache%5D=0&tx_wwdhseinrichtung2_fe1%5Bumkreis%5D=0&tx_wwdhseinrichtung2_fe1%5Bzielgruppe%5D=0&cHash=69c2978df7ab94262c40a535ae021a1d"; ChromeOptions options = new ChromeOptions(); options.addArguments("--headless"); options.addArguments("--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"); ChromeDriver driver = new ChromeDriver(options); driver.get(baseUrl); List<WebElement> addresses = driver.findElements(By.className("entryshort")); FileWriter fw = new FileWriter("scrap_data.txt", true); BufferedWriter bw = new BufferedWriter(fw); for(WebElement address : addresses) { String strasse = address.findElement(By.className("strasse")).getText(); String plzort = address.findElement(By.className("plzort")).getText(); System.out.println(strasse + " " + plzort); bw.write(strasse); bw.newLine(); bw.write(plzort); bw.newLine(); } bw.close(); WebElement nextButton = driver.findElement(By.className("next")); if (driver != null){ nextButton.click(); } } My output: There is one feedback and addition you can do with your code: while scrapping you should 
add two addArguments "headless" chrome and "user-agent". ChromeOptions options = new ChromeOptions(); options.addArguments("--headless"); options.addArguments("--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36"); ChromeDriver driver = new ChromeDriver(options);
Java Selenium scraping characters not displayed
I am currently trying to scrape address data from a website. I am using Selenium and the code so far is very simple: public class Scraper { public static void main(String[] args){ String baseUrl = "https://www.dhs.de/service/suchthilfeverzeichnis?tx_wwdhseinrichtung2_fe1%5Baction%5D=list&tx_wwdhseinrichtung2_fe1%5Bangebot%5D=0&tx_wwdhseinrichtung2_fe1%5Bbland%5D=0&tx_wwdhseinrichtung2_fe1%5Bcontroller%5D=Entry&tx_wwdhseinrichtung2_fe1%5Bdo%5D=search&tx_wwdhseinrichtung2_fe1%5Bplzort%5D=&tx_wwdhseinrichtung2_fe1%5Bspezi%5D=0&tx_wwdhseinrichtung2_fe1%5Bsprache%5D=0&tx_wwdhseinrichtung2_fe1%5Bumkreis%5D=0&tx_wwdhseinrichtung2_fe1%5Bzielgruppe%5D=0&cHash=69c2978df7ab94262c40a535ae021a1d"; ChromeDriver driver = new ChromeDriver(); driver.get(baseUrl); List<WebElement> addresses = driver.findElements(By.className("entryshort")); for(WebElement address : addresses) { String strasse = address.findElement(By.className("strasse")).getText(); String plzort = address.findElement(By.className("plzort")).getText(); System.out.println(strasse + " " + plzort); } WebElement nextButton = driver.findElement(By.className("next")); if (driver != null){ nextButton.click(); } } } So far so good. The output is displayed like this: Landower Stra�e 15 18573 Dreschvitz Schmalbeinstra�e 32 50674 K�ln Clearly it is not UTF-8 encoded. Expected Output: Landower Straße 15 18573 Dreschvitz Schmalbeinstraße 32 50674 Köln I have tried everything I could think of / find to solve this, but failed so far. Any ideas how I can ensure correct output?
[ "I think the issue is with your IDE console, which doesn't support UTF-8 encoding.\nAt OS level you can use:\n-Dconsole.encoding=UTF-8\n-Dfile.encoding=UTF-8\n\nIn IntelliJ Idea you can set the encoding:\n\nIn Eclipse:\nPreferences > General > Workspace, set Text file encoding to Other : UTF-8\n\nI have checked your code and even tried to write the data on to .txt file and it was working fine for me\npublic static void main(String[] args) throws IOException {\n String baseUrl = \"https://www.dhs.de/service/suchthilfeverzeichnis?tx_wwdhseinrichtung2_fe1%5Baction%5D=list&tx_wwdhseinrichtung2_fe1%5Bangebot%5D=0&tx_wwdhseinrichtung2_fe1%5Bbland%5D=0&tx_wwdhseinrichtung2_fe1%5Bcontroller%5D=Entry&tx_wwdhseinrichtung2_fe1%5Bdo%5D=search&tx_wwdhseinrichtung2_fe1%5Bplzort%5D=&tx_wwdhseinrichtung2_fe1%5Bspezi%5D=0&tx_wwdhseinrichtung2_fe1%5Bsprache%5D=0&tx_wwdhseinrichtung2_fe1%5Bumkreis%5D=0&tx_wwdhseinrichtung2_fe1%5Bzielgruppe%5D=0&cHash=69c2978df7ab94262c40a535ae021a1d\";\n ChromeOptions options = new ChromeOptions();\n options.addArguments(\"--headless\");\n options.addArguments(\"--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36\");\n ChromeDriver driver = new ChromeDriver(options);\n driver.get(baseUrl);\n List<WebElement> addresses = driver.findElements(By.className(\"entryshort\"));\n FileWriter fw = new FileWriter(\"scrap_data.txt\", true);\n BufferedWriter bw = new BufferedWriter(fw);\n for(WebElement address : addresses) {\n String strasse = address.findElement(By.className(\"strasse\")).getText();\n String plzort = address.findElement(By.className(\"plzort\")).getText();\n System.out.println(strasse + \" \" + plzort);\n bw.write(strasse);\n bw.newLine();\n bw.write(plzort);\n bw.newLine();\n }\n bw.close();\n WebElement nextButton = driver.findElement(By.className(\"next\"));\n if (driver != null){\n nextButton.click();\n }\n}\n\nMy output:\n\nThere is one feedback and addition you can do with your code: while scrapping you should add two addArguments \"headless\" chrome and \"user-agent\".\nChromeOptions options = new ChromeOptions();\n options.addArguments(\"--headless\");\n options.addArguments(\"--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36\");\n ChromeDriver driver = new ChromeDriver(options);\n\n" ]
[ 0 ]
[]
[]
[ "encoding", "java", "selenium", "string", "utf_8" ]
stackoverflow_0074658627_encoding_java_selenium_string_utf_8.txt
Q: Powershell Loop Through Directory of Text Files and Append Filename + Last Line in new Text File I am wanting to iterate over a directory full of various text files, which have lines of content such as the below example: Mark.Stevens;Wed 03/11/2020; 8:02:23.83 Paul.Robinson;Wed 03/11/2020; 9:52:24.78 And Filenames such as 'CII1234567.txt'. And pull the filename along with the last line from the files themselves. Currently I have the below code: $textfiles = (Get-ChildItem C:\Users\KP\Downloads\Test\Workstations | Where-Object {$_.Extension -eq '.txt'}).FullName ForEach ($textfile in $textfiles) { Get-content $textfile -Tail 1 >> C:\Users\KP\Downloads\Test\Output.txt Get-content $textfile.Basename >> C:\Users\KP\Downloads\Test\Output.txt } When I run the Powershell script it successfully grabs the content of each of the files such as: Mark.Stevens;Wed 03/11/2020; 8:02:23.83 Paul.Robinson;Wed 03/11/2020; 9:52:24.78 However I have been having difficulty with successfully pulling the filename as well. Ideally the resulting text file would look something like the below example: CII1234567.txt Mark.Stevens;Wed 03/11/2020; 8:02:23.83 CII1234567.txt Paul.Robinson;Wed 03/11/2020; 9:52:24.78 Would anyone be able to help advise on how I can get the desired output? A: I think it is better you debug the code yourself. I like writing code that is simple to debug. When you try to put everything into one instruction you can't debug. You can always use foreach in powershell to debug. After you get code working you can combine statements if that is your style. I like using Format-Table for debugging because it is great for enumerating through powershell object. $children = Get-ChildItem C:\Users\KP\Downloads\Test\Workstations | Where-Object {$_.Extension -eq '.txt'} foreach($child in $children) { $child | Format-Table foreach($textfile in $child) { $textfile | Format-Table foreach($row in $textfile) { "$textfile;" + $_ >> C:\Users\KP\Downloads\Test\Output.txt } } }
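For the asker's stated goal (filename plus the last line of each file on one output line), a compact working sketch could look like the following; the paths are the ones from the question and may need adjusting.

$files = Get-ChildItem 'C:\Users\KP\Downloads\Test\Workstations' -Filter '*.txt'
foreach ($file in $files) {
    $lastLine = Get-Content $file.FullName -Tail 1                       # last line only
    "$($file.Name) $lastLine" | Add-Content 'C:\Users\KP\Downloads\Test\Output.txt'
}

Each output line then reads like "CII1234567.txt Paul.Robinson;Wed 03/11/2020; 9:52:24.78".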
Powershell Loop Through Directory of Text Files and Append Filename + Last Line in new Text File
I am wanting to iterate over a directory full of various text files, which have lines of content such as the below example: Mark.Stevens;Wed 03/11/2020; 8:02:23.83 Paul.Robinson;Wed 03/11/2020; 9:52:24.78 And Filenames such as 'CII1234567.txt'. And pull the filename along with the last line from the files themselves. Currently I have the below code: $textfiles = (Get-ChildItem C:\Users\KP\Downloads\Test\Workstations | Where-Object {$_.Extension -eq '.txt'}).FullName ForEach ($textfile in $textfiles) { Get-content $textfile -Tail 1 >> C:\Users\KP\Downloads\Test\Output.txt Get-content $textfile.Basename >> C:\Users\KP\Downloads\Test\Output.txt } When I run the Powershell script it successfully grabs the content of each of the files such as: Mark.Stevens;Wed 03/11/2020; 8:02:23.83 Paul.Robinson;Wed 03/11/2020; 9:52:24.78 However I have been having difficulty with successfully pulling the filename as well. Ideally the resulting text file would look something like the below example: CII1234567.txt Mark.Stevens;Wed 03/11/2020; 8:02:23.83 CII1234567.txt Paul.Robinson;Wed 03/11/2020; 9:52:24.78 Would anyone be able to help advise on how I can get the desired output?
[ "I think it is better you debug the code yourself. I like writing code that is simple to debug. When you try to put everything into one instruction you can't debug. You can always use foreach in powershell to debug. After you get code working you can combine statements if that is your style. I like using Format-Table for debugging because it is great for enumerating through powershell object.\n$children = Get-ChildItem C:\\Users\\KP\\Downloads\\Test\\Workstations | Where-Object {$_.Extension -eq '.txt'}\nforeach($child in $children)\n{\n $child | Format-Table\n foreach($textfile in $child)\n {\n $textfile | Format-Table\n foreach($row in $textfile)\n {\n \"$textfile;\" + $_ >> C:\\Users\\KP\\Downloads\\Test\\Output.txt\n }\n \n }\n\n}\n\n" ]
[ 0 ]
[]
[]
[ "append", "loops", "powershell", "text_files" ]
stackoverflow_0074658373_append_loops_powershell_text_files.txt
Q: Audio and Video not synced after ffmpeg filter_complex select between I am trying to trim a video shoot on an iPhone. When I execute: ffmpeg -i IMG_8555.MOV \ -filter_complex " \ [0:v] select='between(t,448.856,1279.240)', setpts=N/FR/TB; \ [0:a] aselect='between(t,448.856,1279.240)', asetpts=N/SR/TB \ " \ output.mov the output audio is out of sync - audio is faster (noticeable towards the end of the output video). I noticed that the outputs frame rate is 29.97 while the inputs is 29.98. So I did some experimenting and changed setpts to setpts=N/29.98/TB; but still the video is falling behind. So I changed it even more to setpts=N/30.00/TB; - then it feels almost ok. I tired adding -vsync 1 - no luck I tried adding -async 1 - no luck I tried adding -async 7000 - no luck edit: If i put setpts=N/29.99/TB then it is ideal. Any ideas how can I make it always synced (no matter what is the input)? A: Try this: ffmpeg -ss 448.86 -to 1279.240 -i IMG_8555.MOV output.mov <addendum> If you have more cuts, then you can try one of the following 2 approaches: specify them as different inputs then concat ffmpeg -ss 0 -to 1 -i IMG_8555.MOV \ -ss 4 to 5 -i IMG_8555.MOV \ ... -ss 448.86 -to 1279.240 -i IMG_8555.MOV \ -filter_complex [0:v][0:a][1:v][1:a]...[99:v][99:a]concat output.mov (Unverified but likely work) Use concat demuxer. First create a concat file, name it say IMG_8555_trim.ffconcat and save it on the same folder as the video file ffconcat version 1.0 file IMG_8555.MOV inpoint 0 outpoint 1 file IMG_8555.MOV inpoint 4 outpoint 5 ... file IMG_8555.MOV inpoint 448.86 outpoint 1279.240 then run ffmpeg -i IMG_8555_trim.ffconcat output.mov A: @kesh's method worked for the mentioned problem but it was causing the other overlays (not included in this example) using enable='between(...)' to be off sync, so I couldnt go with that solution. At the end I managed to still use between and setpts but without using FRAME_RATE constant to calculate new pts values. Here is an example of my approach, assuming I want to have 3 cuts like that: [start1, end1]---[start2,end2]---[start3,end3] ffmpeg -i input.mov \ [0:v] \ select='between(t,start1,end1)+between(t,start2,end2)+between(t,start3,end3)', \ setpts='PTS-STARTPTS-(gt(t,end1)*(start2-end1) + gt(t,end2)*(start3-end2) )/TB'; \ [0:a] \ aselect='between(t,start1,end1)+between(t,start2,end2)+between(t,start3,end3)', \ asetpts='PTS-STARTPTS-(gt(t,end1)*(start2-end1) + gt(t,end2)*(start3-end2) )/TB' \ output.mov Note that gt(t,100) returns 1 if it is greather than 100 and 0 otherwise. I am using it to shift PTS by the gap between previous cuts (start2-end1). If the current T is less than end1 then the value of gt(t,end1) will be 0. So start2-end1 wont be added (as it is multiplied by zero)
Audio and Video not synced after ffmpeg filter_complex select between
I am trying to trim a video shoot on an iPhone. When I execute: ffmpeg -i IMG_8555.MOV \ -filter_complex " \ [0:v] select='between(t,448.856,1279.240)', setpts=N/FR/TB; \ [0:a] aselect='between(t,448.856,1279.240)', asetpts=N/SR/TB \ " \ output.mov the output audio is out of sync - audio is faster (noticeable towards the end of the output video). I noticed that the outputs frame rate is 29.97 while the inputs is 29.98. So I did some experimenting and changed setpts to setpts=N/29.98/TB; but still the video is falling behind. So I changed it even more to setpts=N/30.00/TB; - then it feels almost ok. I tired adding -vsync 1 - no luck I tried adding -async 1 - no luck I tried adding -async 7000 - no luck edit: If i put setpts=N/29.99/TB then it is ideal. Any ideas how can I make it always synced (no matter what is the input)?
[ "Try this:\nffmpeg -ss 448.86 -to 1279.240 -i IMG_8555.MOV output.mov\n\n<addendum>\nIf you have more cuts, then you can try one of the following 2 approaches:\n\nspecify them as different inputs then concat\n\nffmpeg -ss 0 -to 1 -i IMG_8555.MOV \\\n -ss 4 to 5 -i IMG_8555.MOV \\\n ...\n -ss 448.86 -to 1279.240 -i IMG_8555.MOV \\\n -filter_complex [0:v][0:a][1:v][1:a]...[99:v][99:a]concat\n output.mov\n\n\n(Unverified but likely work) Use concat demuxer. First create a concat file, name it say IMG_8555_trim.ffconcat and save it on the same folder as the video file\n\nffconcat version 1.0\n\nfile IMG_8555.MOV\ninpoint 0 \noutpoint 1 \n\nfile IMG_8555.MOV\ninpoint 4\noutpoint 5 \n\n...\n\nfile IMG_8555.MOV\ninpoint 448.86\noutpoint 1279.240\n\nthen run\nffmpeg -i IMG_8555_trim.ffconcat output.mov\n\n", "@kesh's method worked for the mentioned problem but it was causing the other overlays (not included in this example) using enable='between(...)' to be off sync, so I couldnt go with that solution.\nAt the end I managed to still use between and setpts but without using FRAME_RATE constant to calculate new pts values.\nHere is an example of my approach, assuming I want to have 3 cuts like that:\n[start1, end1]---[start2,end2]---[start3,end3]\n\nffmpeg -i input.mov \\\n[0:v] \\\nselect='between(t,start1,end1)+between(t,start2,end2)+between(t,start3,end3)', \\\nsetpts='PTS-STARTPTS-(gt(t,end1)*(start2-end1) + gt(t,end2)*(start3-end2) )/TB'; \\\n[0:a] \\\naselect='between(t,start1,end1)+between(t,start2,end2)+between(t,start3,end3)', \\ \nasetpts='PTS-STARTPTS-(gt(t,end1)*(start2-end1) + gt(t,end2)*(start3-end2) )/TB' \\\noutput.mov\n\nNote that gt(t,100) returns 1 if it is greather than 100 and 0 otherwise. I am using it to shift PTS by the gap between previous cuts (start2-end1). If the current T is less than end1 then the value of gt(t,end1) will be 0. So start2-end1 wont be added (as it is multiplied by zero)\n" ]
[ 1, 0 ]
[]
[]
[ "audio", "ffmpeg", "filter", "video" ]
stackoverflow_0074612693_audio_ffmpeg_filter_video.txt
Q: How to compare a string array within dictionary key I created dictionary like this: Dictionary<string, int> dic = new Dictionary<string, int>(); And I had a string array like this: string[] str = new string[]{"str1","str2","str3"} Now I want to check if dic key contains all elements of str without using loop. What is the best way to do this?. Thanks. A: this is a solution with linq, at least no visible loops, but internal linq uses loops Dictionary<string, int> dic = new Dictionary<string, int>(); dic.Add("str1", 1); dic.Add("str2", 2); dic.Add("str3", 3); string[] str = new string[] { "str1", "str2", "str3" }; bool ContainsAll = str.All(dic.ContainsKey); //true A: If you want to know if all dictionaries contain all keys: bool allContainsAll = dic.All(dictonary => str.All(dictonary.ContainsKey)); If you want to know if the strings are in any of the dictionaries keys: var allDictKeys = new HashSet<string>(dic.SelectMany(d => d.Keys)); bool allContainsAll = str.All(allDictKeys.Contains); Note that LINQ also uses loops, you just don't see them. A: If you want to compare a dictionary and a string array you can use SequenceEqual: bool AreEqual = dic.Keys.SequenceEquals(str);
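Two hedged notes on the answers above: the LINQ method in the last snippet is spelled SequenceEqual (no trailing s), and it also requires the keys to come back in the same order as the array, so it is stricter than a containment check. A set-based containment variant, shown only as a sketch, is:

using System;
using System.Collections.Generic;

class ContainsAllDemo
{
    static void Main()
    {
        var dic = new Dictionary<string, int> { ["str1"] = 1, ["str2"] = 2, ["str3"] = 3 };
        string[] str = { "str1", "str2", "str3" };

        // True when every element of str appears among the dictionary keys; like LINQ, it loops internally.
        bool containsAll = new HashSet<string>(str).IsSubsetOf(dic.Keys);
        Console.WriteLine(containsAll); // True
    }
}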
How to compare a string array within dictionary key
I created dictionary like this: Dictionary<string, int> dic = new Dictionary<string, int>(); And I had a string array like this: string[] str = new string[]{"str1","str2","str3"} Now I want to check if dic key contains all elements of str without using loop. What is the best way to do this?. Thanks.
[ "this is a solution with linq, at least no visible loops, but internal linq uses loops\nDictionary<string, int> dic = new Dictionary<string, int>();\ndic.Add(\"str1\", 1);\ndic.Add(\"str2\", 2);\ndic.Add(\"str3\", 3);\n\nstring[] str = new string[] { \"str1\", \"str2\", \"str3\" };\nbool ContainsAll = str.All(dic.ContainsKey); //true\n\n", "If you want to know if all dictionaries contain all keys:\nbool allContainsAll = dic.All(dictonary => str.All(dictonary.ContainsKey));\n\nIf you want to know if the strings are in any of the dictionaries keys:\nvar allDictKeys = new HashSet<string>(dic.SelectMany(d => d.Keys));\nbool allContainsAll = str.All(allDictKeys.Contains);\n\nNote that LINQ also uses loops, you just don't see them.\n", "If you want to compare a dictionary and a string array you can use SequenceEqual:\nbool AreEqual = dic.Keys.SequenceEquals(str); \n\n" ]
[ 4, 2, 0 ]
[]
[]
[ "c#", "winforms" ]
stackoverflow_0032818244_c#_winforms.txt
Q: How to play an audio file through http response with sveltekit? I'm using a text-to-speech API and I'm trying to serve the audio response to the client and have it play from there, but I'm constantly being met with this error: GET blob:http://localhost:5173/8788f478-32ef-4e76-80a1-93c4f1a6a3a8 net::ERR_REQUEST_RANGE_NOT_SATISFIABLE localhost/:1 Uncaught (in promise) DOMException: Failed to load because no supported source was found. My client code looks like: // +page.svelte <script lang="ts"> let audio: any; const handleClick = async () => { const what = await fetch('/api/speech', { method: 'POST', headers: { 'Content-Type': 'audio/mpeg' } }); /** * Play audio from blob */ const blob = await what.blob(); const url = URL.createObjectURL(blob); audio.src = url; audio.play(); }; </script> <button on:click={handleClick}> Clicky </button> <audio bind:this={audio}> <source class="track" src="" type="audio/mpeg" /> </audio> And my server code looks like: // routes/api/speech/+server.ts import AWS from 'aws-sdk'; export const POST = async () => { const awsConfig = new AWS.Config(...); const polly = new AWS.Polly(awsConfig); const input = { Engine: 'standard', LanguageCode: 'en-US', OutputFormat: 'mp3', TextType: 'text', VoiceId: 'Ivy', Text: `hello hello` }; const speech = (await polly.synthesizeSpeech(input, (err, data: any) => { if (err) { new Response(String('err')); } /** * Return data in a way that's consumable by the browser */ if (data) { if (data.AudioStream instanceof Buffer) { // fs.writeFile('speech.mp3', data.AudioStream, function (err) { // if (err) { // return console.log(err); // } // console.log('The file was saved!'); // }); return data; } } })) as any; return new Response(speech.AudioStream, { headers: { 'Content-Type': 'audio/mpeg' } }); }; If I write the generated file to disk it works and plays properly, but if I use the same data from the generated response and serve it as a Response it doesn't work. What am I doing wrong or is there something I'm missing? A: Switching over to AWS SDK v3 made it much simpler. Just had to adjust the server code to: import { PollyClient, SynthesizeSpeechCommand } from '@aws-sdk/client-polly'; const pollyClient = new PollyClient(...); const input = { Engine: 'standard', LanguageCode: 'en-US', OutputFormat: 'mp3', TextType: 'text', VoiceId: 'Ivy', Text: `hi, I'm beepbooper` }; const pollyCommand = new SynthesizeSpeechCommand(input); const response = (await pollyClient.send(pollyCommand)) as any; return new Response(response.AudioStream, { headers: { 'Content-Type': 'audio/mpeg' } });
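For anyone who has to stay on AWS SDK v2, the likely root cause in the original handler is the mixed callback/await usage: polly.synthesizeSpeech(input, callback) returns an AWS.Request rather than the response data, so the awaited value never carries a usable AudioStream. A hedged v2-style sketch of the relevant part:

// Sketch only: same input object as above, but with .promise() instead of a callback.
const data = await polly.synthesizeSpeech(input).promise();

return new Response(data.AudioStream, {
    headers: { 'Content-Type': 'audio/mpeg' }
});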
How to play an audio file through http response with sveltekit?
I'm using a text-to-speech API and I'm trying to serve the audio response to the client and have it play from there, but I'm constantly being met with this error: GET blob:http://localhost:5173/8788f478-32ef-4e76-80a1-93c4f1a6a3a8 net::ERR_REQUEST_RANGE_NOT_SATISFIABLE localhost/:1 Uncaught (in promise) DOMException: Failed to load because no supported source was found. My client code looks like: // +page.svelte <script lang="ts"> let audio: any; const handleClick = async () => { const what = await fetch('/api/speech', { method: 'POST', headers: { 'Content-Type': 'audio/mpeg' } }); /** * Play audio from blob */ const blob = await what.blob(); const url = URL.createObjectURL(blob); audio.src = url; audio.play(); }; </script> <button on:click={handleClick}> Clicky </button> <audio bind:this={audio}> <source class="track" src="" type="audio/mpeg" /> </audio> And my server code looks like: // routes/api/speech/+server.ts import AWS from 'aws-sdk'; export const POST = async () => { const awsConfig = new AWS.Config(...); const polly = new AWS.Polly(awsConfig); const input = { Engine: 'standard', LanguageCode: 'en-US', OutputFormat: 'mp3', TextType: 'text', VoiceId: 'Ivy', Text: `hello hello` }; const speech = (await polly.synthesizeSpeech(input, (err, data: any) => { if (err) { new Response(String('err')); } /** * Return data in a way that's consumable by the browser */ if (data) { if (data.AudioStream instanceof Buffer) { // fs.writeFile('speech.mp3', data.AudioStream, function (err) { // if (err) { // return console.log(err); // } // console.log('The file was saved!'); // }); return data; } } })) as any; return new Response(speech.AudioStream, { headers: { 'Content-Type': 'audio/mpeg' } }); }; If I write the generated file to disk it works and plays properly, but if I use the same data from the generated response and serve it as a Response it doesn't work. What am I doing wrong or is there something I'm missing?
[ "Switching over to AWS SDK v3 made it much simpler. Just had to adjust the server code to:\nimport { PollyClient, SynthesizeSpeechCommand } from '@aws-sdk/client-polly';\n\nconst pollyClient = new PollyClient(...);\n\n const input = {\n Engine: 'standard',\n LanguageCode: 'en-US',\n OutputFormat: 'mp3',\n TextType: 'text',\n VoiceId: 'Ivy',\n Text: `hi, I'm beepbooper`\n };\n\n const pollyCommand = new SynthesizeSpeechCommand(input);\n\n const response = (await pollyClient.send(pollyCommand)) as any;\n\n return new Response(response.AudioStream, {\n headers: {\n 'Content-Type': 'audio/mpeg'\n }\n });\n\n\n" ]
[ 0 ]
[]
[]
[ "amazon_polly", "audio", "javascript", "svelte", "sveltekit" ]
stackoverflow_0074650265_amazon_polly_audio_javascript_svelte_sveltekit.txt
Q: Tkinter ttk update label style I am trying to update the of the background color of a text label. For this I am using the ttk module of tkinter. For some reason it doesn't want to execute the config.xx(style="xx.TLabel. from tkinter import * from tkinter import ttk win = Tk() win.geometry("1200x800") #1024*600 s = ttk.Style(win) s.configure("CustomGrey.TLabel", background="#4D4D4D", foreground="white") s.configure("CustomGreen.TLabel", background="#97D077", foreground="white") s.configure("CustomYellow.TLabel", background="#FFD966", foreground="white") s.configure("CustomRed.TLabel", background="#FF6666", foreground="white") s.configure("CustomRed.TLabel", background="#FF6666", foreground="white", font=('Time New Roman', 60), anchor= "c") def updateLabelColor(color): if color == "Green": battery_lab.config(style="CustomGreen.TLabel") elif color == "Yellow": battery_lab.config(style="CustomYellow.TLabel") elif color == "Red": battery_lab.config(style="CustomRed.TLabel") updateLabelColor("Green") The goal is that text can change color in a program. It does not matter if it is done via a tk or a ttk label. Does anyone know what to do with this? A: I'm not sure what you asked for. If you want to change a tkinter.Label's background color you can easily change it by adding 'background' attribute: from tkinter import * root = Tk() label = Label(root, text="A Text", background="yellow") label.pack(pady=30, padx=50) root.mainloop() If you want to change a tkinter.Label's color by pressing a button or getting an input you can do this: from tkinter import * root = Tk() def update_Label_color(color): if color == "Green": label.config(background="green") elif color == "Yellow": label.config(background="yellow") elif color == "Red": label.config(background="red") color_variable = StringVar(value="Green") input = Entry(root, bg="orange", textvariable=color_variable) input.pack(pady=10, padx=50) button = Button(root, width=30, height=5, text="Button", command= lambda: update_Label_color(color_variable.get())) button.pack(pady=30, padx=50) label = Label(root, text="A Text", background="white") label.pack(pady=30, padx=50) root.mainloop()
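A minimal runnable sketch of the ttk-style approach from the question; battery_lab is assumed here because it is not defined in the posted snippet, and on some platform themes (for example aqua on macOS) ttk may still ignore label backgrounds.

import tkinter as tk
from tkinter import ttk

win = tk.Tk()
s = ttk.Style(win)
s.configure("CustomGreen.TLabel", background="#97D077", foreground="white")
s.configure("CustomRed.TLabel", background="#FF6666", foreground="white")

# The label must exist before the color-switching function can reconfigure it.
battery_lab = ttk.Label(win, text="Battery", style="CustomGreen.TLabel")
battery_lab.pack(padx=20, pady=20)

def update_label_color(color):
    styles = {"Green": "CustomGreen.TLabel", "Red": "CustomRed.TLabel"}
    if color in styles:
        battery_lab.config(style=styles[color])

update_label_color("Red")   # the label switches to the red style
win.mainloop()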
Tkinter ttk update label style
I am trying to update the of the background color of a text label. For this I am using the ttk module of tkinter. For some reason it doesn't want to execute the config.xx(style="xx.TLabel. from tkinter import * from tkinter import ttk win = Tk() win.geometry("1200x800") #1024*600 s = ttk.Style(win) s.configure("CustomGrey.TLabel", background="#4D4D4D", foreground="white") s.configure("CustomGreen.TLabel", background="#97D077", foreground="white") s.configure("CustomYellow.TLabel", background="#FFD966", foreground="white") s.configure("CustomRed.TLabel", background="#FF6666", foreground="white") s.configure("CustomRed.TLabel", background="#FF6666", foreground="white", font=('Time New Roman', 60), anchor= "c") def updateLabelColor(color): if color == "Green": battery_lab.config(style="CustomGreen.TLabel") elif color == "Yellow": battery_lab.config(style="CustomYellow.TLabel") elif color == "Red": battery_lab.config(style="CustomRed.TLabel") updateLabelColor("Green") The goal is that text can change color in a program. It does not matter if it is done via a tk or a ttk label. Does anyone know what to do with this?
[ "I'm not sure what you asked for.\nIf you want to change a tkinter.Label's background color you can easily change it by adding 'background' attribute:\nfrom tkinter import *\nroot = Tk()\nlabel = Label(root, text=\"A Text\", background=\"yellow\")\nlabel.pack(pady=30, padx=50)\nroot.mainloop()\n\nIf you want to change a tkinter.Label's color by pressing a button or getting an input you can do this:\nfrom tkinter import *\n\nroot = Tk()\n\ndef update_Label_color(color):\n if color == \"Green\":\n label.config(background=\"green\")\n elif color == \"Yellow\":\n label.config(background=\"yellow\")\n elif color == \"Red\":\n label.config(background=\"red\")\n\ncolor_variable = StringVar(value=\"Green\")\ninput = Entry(root, bg=\"orange\", textvariable=color_variable)\ninput.pack(pady=10, padx=50)\nbutton = Button(root, width=30, height=5, text=\"Button\", command= lambda: update_Label_color(color_variable.get()))\nbutton.pack(pady=30, padx=50)\nlabel = Label(root, text=\"A Text\", background=\"white\")\nlabel.pack(pady=30, padx=50)\n\nroot.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "tkinter", "tkinter_label", "tkinter_text", "ttk", "ttkwidgets" ]
stackoverflow_0074656318_tkinter_tkinter_label_tkinter_text_ttk_ttkwidgets.txt
Q: How to open a PDF file by clicking on it in TreeView How do I open a file (ex. PDF) when I click on the row identify by its ID? I'm trying to make the treeview that uses a GUI to better access and open these PDFs, but I can't figure out how to actually open files using anything but a button. Can someone please tell me how to use these to find a filepath and open a pdf? Thanks the idea is basically is to open the pdf local file according to its ID in the treeview from tkinter import E, N, Frame, IntVar, LabelFrame, LEFT, RIGHT, BOTTOM, StringVar, Label, Button, END, Toplevel, Entry, Tk, font, Menu import tkinter as tk from tkinter import ttk, Spinbox class PRODUCTOS(): base_datos = "clientes_productos.db" resultado = 0.00 #valor x defecto self.resultado def __init__(self,root): self.wind = root #ventana completa self.wind.title('Facturacion principal') self.wind.geometry("850x525") #Las divisiones de la ventana, caja 1 arriba, caja 2 abajo caja1 = LabelFrame(self.wind, text="", font=("Calibri",14), padx=2, pady=2)#aleja lo q se encuentra dentro caja2 = LabelFrame(self.wind, text="Facturas", font=("Calibri",12), padx=1, pady=1) caja3 = LabelFrame(self.wind, text="", font=("Calibri",12), padx=2, pady=2) caja1.pack(fill="both", expand=True, padx=20, pady=10, ipady=10, ipadx=5)#pady = aleja a la caja 2, X aleja de la esquina derecha caja2.pack(fill="both", expand=True, padx=20, pady=10, ipady=100, ipadx=5)#ipady alarga el labelframe caja3.pack(fill="both", expand=True, padx=20, pady=10, ipady=30, ipadx=5) #los encabezados del cuadro blanco arriba #los encabezados del cuadro blanco arriba self.cuadro_blanco_facturas = ttk.Treeview(caja2, columns=("1","2","3","4","5","6","7"), show="headings", height=10)#Height largo del Scrollbar self.cuadro_blanco_facturas.pack(side=LEFT)#scrollbar self.cuadro_blanco_facturas.place(x=0, y=0)#scrollbar self.cuadro_blanco_facturas.heading("1", text="Nro_Fact.") self.cuadro_blanco_facturas.heading("2", text="ID-Cliente") self.cuadro_blanco_facturas.heading("3", text="Nombre del Cliente") #tamano de las columnas vertical self.cuadro_blanco_facturas.column("1", width=70)# width= anchura, minwidth = lo minimo de esa anchura self.cuadro_blanco_facturas.column("2", width=70) self.cuadro_blanco_facturas.column("3", width=258) #horizontal self.cuadro_blanco_facturas.column('#0', width=50, minwidth=100)#Yscrollbar1 #self.consulta_facturas() #llamada a la TABLA self.cuadro_blanco_facturas.bind("<Double 1>", self.on_double_click) #self.cuadro_blanco_facturas.bind('<Double-Button-1>', self.on_double_click) #scrollbar VERTICAL lado derecho cuadro blanco yscrollbar = ttk.Scrollbar(caja2, orient="vertical", command=self.cuadro_blanco_facturas.yview) yscrollbar.pack(side=RIGHT,fill="y") #scrollbar HORIZONTAL lado derecho cuadro blanco xscrollbar = ttk.Scrollbar(caja2, orient="horizontal", command=self.cuadro_blanco_facturas.xview) xscrollbar.pack(side=BOTTOM,fill="x") self.cuadro_blanco_facturas.configure(yscrollcommand=yscrollbar.set, xscrollcommand=xscrollbar.set) def on_double_click(self, event): iid = self.cuadro_blanco_facturas.focus() # get the iid of the selected item tags = self.cuadro_blanco_facturas.item(iid, 'tags') # get tags attached print(iid) if 'pdf' in tags: text = self.cuadro_blanco_facturas.item(iid, 'text') # get the text of selected item print(text) if __name__ == '__main__': root = Tk() product = PRODUCTOS(root) root.mainloop() A: thanks for helping me out its done! 
def treeview_click(self, event):
        iid = self.cuadro_blanco_facturas.focus() # id 
        name = self.cuadro_blanco_facturas.item(iid)["values"][2] #column name
        espacio = " "
        file = os.startfile(f"facturas\\{iid}{espacio}{name}.pdf")
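Worth adding as a hedged footnote: os.startfile exists on Windows only, so a cross-platform variant of the same idea (keeping the file naming scheme from the answer) could look like this.

import os, subprocess, sys

def open_pdf(path):
    if sys.platform.startswith("win"):
        os.startfile(path)                  # default PDF viewer on Windows
    elif sys.platform == "darwin":
        subprocess.run(["open", path])      # macOS
    else:
        subprocess.run(["xdg-open", path])  # most Linux desktops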
How to open a PDF file by clicking on it in TreeView
How do I open a file (ex. PDF) when I click on the row identify by its ID? I'm trying to make the treeview that uses a GUI to better access and open these PDFs, but I can't figure out how to actually open files using anything but a button. Can someone please tell me how to use these to find a filepath and open a pdf? Thanks the idea is basically is to open the pdf local file according to its ID in the treeview from tkinter import E, N, Frame, IntVar, LabelFrame, LEFT, RIGHT, BOTTOM, StringVar, Label, Button, END, Toplevel, Entry, Tk, font, Menu import tkinter as tk from tkinter import ttk, Spinbox class PRODUCTOS(): base_datos = "clientes_productos.db" resultado = 0.00 #valor x defecto self.resultado def __init__(self,root): self.wind = root #ventana completa self.wind.title('Facturacion principal') self.wind.geometry("850x525") #Las divisiones de la ventana, caja 1 arriba, caja 2 abajo caja1 = LabelFrame(self.wind, text="", font=("Calibri",14), padx=2, pady=2)#aleja lo q se encuentra dentro caja2 = LabelFrame(self.wind, text="Facturas", font=("Calibri",12), padx=1, pady=1) caja3 = LabelFrame(self.wind, text="", font=("Calibri",12), padx=2, pady=2) caja1.pack(fill="both", expand=True, padx=20, pady=10, ipady=10, ipadx=5)#pady = aleja a la caja 2, X aleja de la esquina derecha caja2.pack(fill="both", expand=True, padx=20, pady=10, ipady=100, ipadx=5)#ipady alarga el labelframe caja3.pack(fill="both", expand=True, padx=20, pady=10, ipady=30, ipadx=5) #los encabezados del cuadro blanco arriba #los encabezados del cuadro blanco arriba self.cuadro_blanco_facturas = ttk.Treeview(caja2, columns=("1","2","3","4","5","6","7"), show="headings", height=10)#Height largo del Scrollbar self.cuadro_blanco_facturas.pack(side=LEFT)#scrollbar self.cuadro_blanco_facturas.place(x=0, y=0)#scrollbar self.cuadro_blanco_facturas.heading("1", text="Nro_Fact.") self.cuadro_blanco_facturas.heading("2", text="ID-Cliente") self.cuadro_blanco_facturas.heading("3", text="Nombre del Cliente") #tamano de las columnas vertical self.cuadro_blanco_facturas.column("1", width=70)# width= anchura, minwidth = lo minimo de esa anchura self.cuadro_blanco_facturas.column("2", width=70) self.cuadro_blanco_facturas.column("3", width=258) #horizontal self.cuadro_blanco_facturas.column('#0', width=50, minwidth=100)#Yscrollbar1 #self.consulta_facturas() #llamada a la TABLA self.cuadro_blanco_facturas.bind("<Double 1>", self.on_double_click) #self.cuadro_blanco_facturas.bind('<Double-Button-1>', self.on_double_click) #scrollbar VERTICAL lado derecho cuadro blanco yscrollbar = ttk.Scrollbar(caja2, orient="vertical", command=self.cuadro_blanco_facturas.yview) yscrollbar.pack(side=RIGHT,fill="y") #scrollbar HORIZONTAL lado derecho cuadro blanco xscrollbar = ttk.Scrollbar(caja2, orient="horizontal", command=self.cuadro_blanco_facturas.xview) xscrollbar.pack(side=BOTTOM,fill="x") self.cuadro_blanco_facturas.configure(yscrollcommand=yscrollbar.set, xscrollcommand=xscrollbar.set) def on_double_click(self, event): iid = self.cuadro_blanco_facturas.focus() # get the iid of the selected item tags = self.cuadro_blanco_facturas.item(iid, 'tags') # get tags attached print(iid) if 'pdf' in tags: text = self.cuadro_blanco_facturas.item(iid, 'text') # get the text of selected item print(text) if __name__ == '__main__': root = Tk() product = PRODUCTOS(root) root.mainloop()
[ "thanks for helping me out\nits done!\ndef treeview_click(self, event):\n    iid = self.cuadro_blanco_facturas.focus() # id \n    name = self.cuadro_blanco_facturas.item(iid)[\"values\"][2] #column name\n    espacio = \" \"\n    file = os.startfile(f\"facturas\\\\{iid}{espacio}{name}.pdf\")\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter", "treeview" ]
stackoverflow_0074647514_python_tkinter_treeview.txt
Q: Apache commons csv skip lines How to skip lines in input file with apache commons csv. In my file first few lines are garbage useful meta-information like date, etc. Can't find any options for this. private void parse() throws Exception { Iterable<CSVRecord> records = CSVFormat.EXCEL .withQuote('"').withDelimiter(';').parse(new FileReader("example.csv")); for (CSVRecord csvRecord : records) { //do something } } A: Use FileReader.readLine() before starting the for-loop. Your example: private void parse() throws Exception { FileReader reader = new FileReader("example.csv"); reader.readLine(); // Read the first/current line. Iterable <CSVRecord> records = CSVFormat.EXCEL.withQuote('"').withDelimiter(';').parse(reader); for (CSVRecord csvRecord: records) { // do something } } A: There is no built-in facility to skip an unknown number of lines. If you want to skip only the first line (the header line), you can call withSkipHeaderRecord() while building the parser. A more general solution would be to call next() on the iterator: Iterable<CSVRecord> parser = CSVFormat.DEFAULT.parse(new FileReader("example.csv")); Iterator<CSVRecord> iterator = parser.iterator(); for (int i = 0; i < amountToSkip; i++) { if (iterator.hasNext()) { iterator.next(); } } while (iterator.hasNext()) { CSVRecord record = iterator.next(); System.out.println(record); } A: So CSVParser.iterator() should most definitely not throw an exception on iterator.hasNext() as it makes it near impossible to recover during an error condition. But where there is a will there is a way, and I present a Terrible Idea that sorta works™ public void runOnFile(Path file) { try { BufferedReader in = fixHeaders(file); CSVParser parsed = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in); Map<String, Integer> headerMap = parsed.getHeaderMap(); String line; while ((line = in.readLine()) != null) { try { CSVRecord record = CSVFormat.DEFAULT.withHeader(headerMap.keySet().toArray(new String[headerMap.keySet().size()])) .parse(new StringReader(line)).getRecords().get(0); // do something with your record } catch (Exception e) { System.out.println("ignoring line:" + line); } } } catch (Exception e) { throw new RuntimeException(e); } } A: You can skip the header line using this Reader excelInput = new FileReader("example.csv"); CSVFormat csvFormat = CSVFormat.EXCEL.withSkipHeaderRecord(true).withHeader("Arm1", "Arm2", "Arm3", "Arm4", "Arm5", "Arm6"); CSVParser csvParser = new CSVParser(excelInput, csvFormat); The key point is to set withSkipHeaderRecord() to true and also specify the headers that you want to skip inside withHeader(). If you are aware of the line numbers you want to skip, you could do something like this: for(CVSRecord csvRecord: CSVParser){ if(csvRecord.getRecordNumber() == 1){ continue; } } where line 1 is what you want to skip. A: Check out CSVRecord.isConsistent() - which returns true if the record's size matches the header. I've had good success using this in conjunction with proper header setting on my CSVFormat.
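One caveat on the first answer, added as a hedge: FileReader itself has no readLine() method, so the skipping has to go through a BufferedReader. A sketch with an assumed number of leading garbage lines:

BufferedReader reader = new BufferedReader(new FileReader("example.csv"));
int linesToSkip = 3;                    // however many meta-information lines precede the data
for (int i = 0; i < linesToSkip; i++) {
    reader.readLine();                  // discard one leading line
}
Iterable<CSVRecord> records = CSVFormat.EXCEL
        .withQuote('"').withDelimiter(';').parse(reader);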
Apache commons csv skip lines
How to skip lines in input file with apache commons csv. In my file first few lines are garbage useful meta-information like date, etc. Can't find any options for this. private void parse() throws Exception { Iterable<CSVRecord> records = CSVFormat.EXCEL .withQuote('"').withDelimiter(';').parse(new FileReader("example.csv")); for (CSVRecord csvRecord : records) { //do something } }
[ "Use FileReader.readLine() before starting the for-loop.\nYour example:\nprivate void parse() throws Exception {\n FileReader reader = new FileReader(\"example.csv\");\n reader.readLine(); // Read the first/current line.\n\n Iterable <CSVRecord> records = CSVFormat.EXCEL.withQuote('\"').withDelimiter(';').parse(reader);\n for (CSVRecord csvRecord: records) {\n // do something\n }\n}\n\n", "There is no built-in facility to skip an unknown number of lines.\nIf you want to skip only the first line (the header line), you can call withSkipHeaderRecord() while building the parser.\nA more general solution would be to call next() on the iterator:\nIterable<CSVRecord> parser = CSVFormat.DEFAULT.parse(new FileReader(\"example.csv\"));\nIterator<CSVRecord> iterator = parser.iterator();\n\nfor (int i = 0; i < amountToSkip; i++) {\n if (iterator.hasNext()) {\n iterator.next();\n }\n}\n\nwhile (iterator.hasNext()) {\n CSVRecord record = iterator.next();\n System.out.println(record);\n}\n\n", "So CSVParser.iterator() should most definitely not throw an exception on iterator.hasNext() as it makes it near impossible to recover during an error condition.\nBut where there is a will there is a way, and I present a Terrible Idea that sorta works™\n public void runOnFile(Path file) {\n try {\n BufferedReader in = fixHeaders(file);\n CSVParser parsed = CSVFormat.DEFAULT.withFirstRecordAsHeader().parse(in);\n Map<String, Integer> headerMap = parsed.getHeaderMap();\n\n String line;\n while ((line = in.readLine()) != null) {\n try {\n CSVRecord record = CSVFormat.DEFAULT.withHeader(headerMap.keySet().toArray(new String[headerMap.keySet().size()]))\n .parse(new StringReader(line)).getRecords().get(0);\n // do something with your record\n } catch (Exception e) {\n System.out.println(\"ignoring line:\" + line);\n }\n }\n } catch (Exception e) {\n throw new RuntimeException(e);\n }\n }\n\n", "You can skip the header line using this\n Reader excelInput = new FileReader(\"example.csv\");\n\n CSVFormat csvFormat = CSVFormat.EXCEL.withSkipHeaderRecord(true).withHeader(\"Arm1\", \"Arm2\", \"Arm3\", \"Arm4\",\n \"Arm5\", \"Arm6\");\n\n CSVParser csvParser = new CSVParser(excelInput, csvFormat);\n\nThe key point is to set withSkipHeaderRecord() to true and also specify the headers that you want to skip inside withHeader().\nIf you are aware of the line numbers you want to skip, you could do something like this:\nfor(CVSRecord csvRecord: CSVParser){\n if(csvRecord.getRecordNumber() == 1){\n continue;\n } \n} \n\nwhere line 1 is what you want to skip.\n", "Check out CSVRecord.isConsistent() - which returns true if the record's size matches the header. I've had good success using this in conjunction with proper header setting on my CSVFormat.\n" ]
[ 4, 3, 1, 0, 0 ]
[]
[]
[ "csv", "java" ]
stackoverflow_0033972243_csv_java.txt
Q: What is then().extract().body().jsonPath().getString("") doing? I have this method in my Cucumber test: public void validateError(String name, DataTable errorTable) { Map<String, String> error = errorTable.asMap(String.class, String.class); String result = then().extract().body().jsonPath().getString(""); then().statusCode(Integer.parseInt(error.get("errorCode"))); Assertions.assertThat(result).contains(error.get("errorMessage")); } It fails on then().extract().body().jsonPath().getString("") with: Caused by: groovy.json.JsonException: Lexing failed on line: 1, column: 1, while reading 'B', no possible valid JSON value or punctuation could be recognized. I'm trying to understand what then().extract().body().jsonPath().getString(""). Is it trying to extract the result from name? That would make sense as name is Bob in this case. I was expecting the line to extract the result from a json string though. A: The line then().extract().body().jsonPath().getString("") is trying to extract the result from the JSON response body using the JsonPath library. The empty string parameter passed to the getString() method indicates that it is trying to extract the value of the root element in the JSON response. However, this line of code is likely causing the error because it is trying to extract a value from an empty string. Since there is no JSON response body in this case, the JsonPath library is unable to parse it and throws the groovy.json.JsonException. To fix this error, you can either provide a valid JSON string to the getString() method or change the code to handle the case where there is no JSON response body. A: I don't understand how the code you provide compiles because I've not seen, then(), used liked that before, unless it's custom method you created. I've seen then() used in function programming style like the following. Response response = given() .contentType(ContentType.JSON) .when() .get("/posts") .then() .extract().response();
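A further observation, offered tentatively: the lexer complains about a leading 'B', which suggests the response body is plain text (for example a bare error message containing the name Bob) rather than JSON, so any route through jsonPath() will fail. Extracting the raw body sidesteps the parse step:

// Sketch: pull the body as a plain string instead of parsing it as JSON.
String result = then().extract().body().asString();
Assertions.assertThat(result).contains(error.get("errorMessage"));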
What is then().extract().body().jsonPath().getString("") doing?
I have this method in my Cucumber test: public void validateError(String name, DataTable errorTable) { Map<String, String> error = errorTable.asMap(String.class, String.class); String result = then().extract().body().jsonPath().getString(""); then().statusCode(Integer.parseInt(error.get("errorCode"))); Assertions.assertThat(result).contains(error.get("errorMessage")); } It fails on then().extract().body().jsonPath().getString("") with: Caused by: groovy.json.JsonException: Lexing failed on line: 1, column: 1, while reading 'B', no possible valid JSON value or punctuation could be recognized. I'm trying to understand what then().extract().body().jsonPath().getString(""). Is it trying to extract the result from name? That would make sense as name is Bob in this case. I was expecting the line to extract the result from a json string though.
[ "The line then().extract().body().jsonPath().getString(\"\") is trying to extract the result from the JSON response body using the JsonPath library. The empty string parameter passed to the getString() method indicates that it is trying to extract the value of the root element in the JSON response.\nHowever, this line of code is likely causing the error because it is trying to extract a value from an empty string. Since there is no JSON response body in this case, the JsonPath library is unable to parse it and throws the groovy.json.JsonException.\nTo fix this error, you can either provide a valid JSON string to the getString() method or change the code to handle the case where there is no JSON response body.\n", "I don't understand how the code you provide compiles because I've not seen, then(), used liked that before, unless it's custom method you created.\nI've seen then() used in function programming style like the following.\nResponse response = given()\n .contentType(ContentType.JSON)\n .when()\n .get(\"/posts\")\n .then()\n .extract().response();\n\n" ]
[ 0, 0 ]
[]
[]
[ "cucumber", "java" ]
stackoverflow_0074658276_cucumber_java.txt
Q: Fast way to detect/remove univariate outliers in R Is there a faster way to detect outliers in R than the examples below? Requirement: Outliers should by NA in the result vector. vals = c(6.4, 1.786, 5.934, 6.689, 6.098, 6.177, 6.768, 6.31, 6.164, 1.543, 6.242, 6.107, 6.708, 6.184, 6.102, 6.495, 6.423, 6.489, 5.264, 5.09, 5.915, 6.114, 5.395, 5.991, 6.732, 6.143, 6.657, 5.563, 5.173, 5.439, 4.305, 6.867, 5.007, 6.37, 6.193, 5.504, 6.333, 6.25, 0.206, 5.911, 5.496, 0.093, 6.554, 6.25, 6.526, 6.202, 6.305, 5.977, 6.476, 5.903, 5.758, 5.117, 6.985, 6.485, 0.763, 5.368, 5.146, 3.079, 5.823, 5.627, 6.077, 6.346, 5.301, 5.555, 6.02, 6.914, 5.896, 5.458, 6.473, 7.348, 7.649, 6.464, 6.545, 6.673, 6.618, 6.659) detect_outliers = function(x, na.rm = TRUE, ...) { qnt = stats::quantile(x, probs=c(.25, .75), na.rm = na.rm, ...) H = 1.5 * stats::IQR(x, na.rm = na.rm) y = x y[x < (qnt[1] - H)] = NA y[x > (qnt[2] + H)] = NA y } detect_outliers2 = function(x, ...) { out = suppressMessages(univOutl::boxB(x, ...)) x[out$outliers] = NA x } detect_outliers3 = function(x) { out = graphics::boxplot(x, plot=FALSE)$out x[fastmatch::`%fin%`(x, out)] = NA x } detect_outliers4 = function(x) { out = grDevices::boxplot.stats(x)$out x[fastmatch::`%fin%`(x, out)] = NA x } detect_outliers5 = function(x) { out = rstatix::identify_outliers(data.frame(x)) x[fastmatch::`%fin%`(x, out$x)] = NA x } detect_outliers6 = function(x) { dev = abs(x-median(x)) # absolute deviation from median MAD = median(abs(dev)) # median absolute deviation sd = MAD/0.67449 x[dev > 2*sd] = NA x } rbenchmark::benchmark("detect_outliers" = detect_outliers(vals), "detect_outliers2" = detect_outliers2(vals), "detect_outliers3" = detect_outliers3(vals), "detect_outliers4" = detect_outliers4(vals), "detect_outliers5" = detect_outliers5(vals), "detect_outliers6" = detect_outliers6(vals), replications = 1000, columns = c("test", "replications", "elapsed", "relative", "user.self", "sys.self")) Benchmark results test replications elapsed relative user.self sys.self 1 detect_outliers 1000 0.198 3.600 0.198 0.001 2 detect_outliers2 1000 0.350 6.364 0.331 0.019 3 detect_outliers3 1000 0.105 1.909 0.105 0.000 4 detect_outliers4 1000 0.070 1.273 0.070 0.000 5 detect_outliers5 1000 5.245 95.364 5.224 0.004 6 detect_outliers6 1000 0.055 1.000 0.055 0.001 Outliers removed df = data.frame(method = factor(c(rep("detect_outliers", length(vals)), rep("detect_outliers2", length(vals)), rep("detect_outliers3", length(vals)), rep("detect_outliers4", length(vals)), rep("detect_outliers5", length(vals)), rep("detect_outliers6", length(vals))), levels = rev(c("detect_outliers", "detect_outliers2", "detect_outliers3", "detect_outliers4", "detect_outliers5", "detect_outliers6"))), orig = rep(vals, 6), outlier_removed = c(detect_outliers(vals), detect_outliers2(vals), detect_outliers3(vals), detect_outliers4(vals), detect_outliers5(vals), detect_outliers6(vals))) df$is_outlier = factor(ifelse(is.na(df$outlier_removed), "yes", "no"), levels = c("yes", "no")) ggplot2::ggplot(df, ggplot2::aes(x = method, y = orig, color = is_outlier)) + ggplot2::geom_point(alpha = 0.5, size = 5) + ggplot2::theme_bw() + ggplot2::labs(x = "", y = "vals") + ggplot2::coord_flip() A: You can use identify_outliers() [rstatix package].
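A small addendum to the benchmark: detect_outliers6, the fastest variant above, can be written even more compactly with stats::mad(), whose default constant 1.4826 is essentially the same scaling as 1/0.67449, so the cut-off is identical up to rounding.

detect_outliers7 = function(x) {
  x[abs(x - stats::median(x)) > 2 * stats::mad(x)] = NA
  x
}
detect_outliers7(vals)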
Fast way to detect/remove univariate outliers in R
Is there a faster way to detect outliers in R than the examples below? Requirement: Outliers should by NA in the result vector. vals = c(6.4, 1.786, 5.934, 6.689, 6.098, 6.177, 6.768, 6.31, 6.164, 1.543, 6.242, 6.107, 6.708, 6.184, 6.102, 6.495, 6.423, 6.489, 5.264, 5.09, 5.915, 6.114, 5.395, 5.991, 6.732, 6.143, 6.657, 5.563, 5.173, 5.439, 4.305, 6.867, 5.007, 6.37, 6.193, 5.504, 6.333, 6.25, 0.206, 5.911, 5.496, 0.093, 6.554, 6.25, 6.526, 6.202, 6.305, 5.977, 6.476, 5.903, 5.758, 5.117, 6.985, 6.485, 0.763, 5.368, 5.146, 3.079, 5.823, 5.627, 6.077, 6.346, 5.301, 5.555, 6.02, 6.914, 5.896, 5.458, 6.473, 7.348, 7.649, 6.464, 6.545, 6.673, 6.618, 6.659) detect_outliers = function(x, na.rm = TRUE, ...) { qnt = stats::quantile(x, probs=c(.25, .75), na.rm = na.rm, ...) H = 1.5 * stats::IQR(x, na.rm = na.rm) y = x y[x < (qnt[1] - H)] = NA y[x > (qnt[2] + H)] = NA y } detect_outliers2 = function(x, ...) { out = suppressMessages(univOutl::boxB(x, ...)) x[out$outliers] = NA x } detect_outliers3 = function(x) { out = graphics::boxplot(x, plot=FALSE)$out x[fastmatch::`%fin%`(x, out)] = NA x } detect_outliers4 = function(x) { out = grDevices::boxplot.stats(x)$out x[fastmatch::`%fin%`(x, out)] = NA x } detect_outliers5 = function(x) { out = rstatix::identify_outliers(data.frame(x)) x[fastmatch::`%fin%`(x, out$x)] = NA x } detect_outliers6 = function(x) { dev = abs(x-median(x)) # absolute deviation from median MAD = median(abs(dev)) # median absolute deviation sd = MAD/0.67449 x[dev > 2*sd] = NA x } rbenchmark::benchmark("detect_outliers" = detect_outliers(vals), "detect_outliers2" = detect_outliers2(vals), "detect_outliers3" = detect_outliers3(vals), "detect_outliers4" = detect_outliers4(vals), "detect_outliers5" = detect_outliers5(vals), "detect_outliers6" = detect_outliers6(vals), replications = 1000, columns = c("test", "replications", "elapsed", "relative", "user.self", "sys.self")) Benchmark results test replications elapsed relative user.self sys.self 1 detect_outliers 1000 0.198 3.600 0.198 0.001 2 detect_outliers2 1000 0.350 6.364 0.331 0.019 3 detect_outliers3 1000 0.105 1.909 0.105 0.000 4 detect_outliers4 1000 0.070 1.273 0.070 0.000 5 detect_outliers5 1000 5.245 95.364 5.224 0.004 6 detect_outliers6 1000 0.055 1.000 0.055 0.001 Outliers removed df = data.frame(method = factor(c(rep("detect_outliers", length(vals)), rep("detect_outliers2", length(vals)), rep("detect_outliers3", length(vals)), rep("detect_outliers4", length(vals)), rep("detect_outliers5", length(vals)), rep("detect_outliers6", length(vals))), levels = rev(c("detect_outliers", "detect_outliers2", "detect_outliers3", "detect_outliers4", "detect_outliers5", "detect_outliers6"))), orig = rep(vals, 6), outlier_removed = c(detect_outliers(vals), detect_outliers2(vals), detect_outliers3(vals), detect_outliers4(vals), detect_outliers5(vals), detect_outliers6(vals))) df$is_outlier = factor(ifelse(is.na(df$outlier_removed), "yes", "no"), levels = c("yes", "no")) ggplot2::ggplot(df, ggplot2::aes(x = method, y = orig, color = is_outlier)) + ggplot2::geom_point(alpha = 0.5, size = 5) + ggplot2::theme_bw() + ggplot2::labs(x = "", y = "vals") + ggplot2::coord_flip()
[ "You can use identify_outliers() [rstatix package].\n" ]
[ 1 ]
[]
[]
[ "outliers", "r" ]
stackoverflow_0074658406_outliers_r.txt
Q: How to return the values in a junction as an array? Define a junction my $j = 1 | 2 | 3 | 4 | 5, now I want to get an array of its value [1 2 3 4 5], how should I implement this? I tried $j.values but Perl6 gave me the whole junction as an element: [any((1), (2), (3), (4), (5))]. A: This is intentional, as far as I know. Imagine $j containing a Junction of hashes: then $j.values would be a junction of Seq's, not the hashes themselves. If you want the array of a junction, then maybe you should start from an array, and build a junction out of that: my @a = 1,2,3,4,5; my $j = any(@a); If you really want to go the Junction -> Array way, you could, but it would involve using nqp, and that's something I would not recommend in userland code. A: As Håkon Hægland already pointed out, this is not something you're supposed to do: Junctions are meant to be used as matchers in boolean context; introspection of junctions is not supported. If you feel the urge to introspect a junction, use a Set or a related type instead.  -- docs.perl6.org/type/Junction However, it is possible. First, you can use authothreading (ie the automatic evaluation of each branch of a junction when passed to a function that expects an argument of type Any): sub unjunc(Junction $j) { gather -> Any $_ { .take }.($j); } Second, you can poke into the guts and manually extract the values: sub unjunc(Junction $j) { multi extract(Any $_) { .take } multi extract(Junction $_) { use nqp; my $list := nqp::getattr($_, Junction, '$!storage'); my int $elems = nqp::elems($list); loop (my int $i = 0; $i < $elems; $i = $i + 1) { extract nqp::atpos($list, $i); } } gather extract $j; } If your junction is non-recursive (ie does not contain other junctions you want to flatten), the latter approach can be simplified: my $j := 1|2|3; say nqp::p6bindattrinvres(nqp::create(List), List, '$!reified', nqp::getattr($j, Junction, '$!storage')); A: There's a way to thread a junction of Mu so as to build an Array of eigenstates with their containers preserved. Because Mu is a supertype of the Any that autothreading is oriented towards, we can't depend on that feature while covering all possible eigenstate types, however. Mu.ACCEPTS has a multi method ACCEPTS(Mu:U: Junction:D) is default candidate to allow for smartmatches of junctions against any type object. This allow for a manual threading if fed a thunk that can recurse over ACCEPTS to be chained with CALL-ME, which can thread over its invocant: class Unison is Mu is repr<Uninstantiable> { method CALL-ME(Mu $topic is raw --> Array:D) { my @list; proto sub push(Mu) {*} multi sub push(Mu $topic is raw) { @list.BIND-POS(@list.elems, $topic) } multi sub push(Junction:D $junction) { self.ACCEPTS($junction).(&push) } self.ACCEPTS($topic).(&push); @list } multi method ACCEPTS(Mu $topic is raw) { -> &accept { accept $topic } } } say Unison(1 | 2 | 3 | 4 | 5); # OUTPUT: [1 2 3 4 5] say Unison((1 | (2, 3) & 4) ^ 5); # OUTPUT: [1 (2 3) 4 5] ACCEPTS must not be overridden in its entirety, but since Mu is the parent and not Any, just the one Mu candidate can cover its bases. This will only work on a type object invocant, so this is given the Uninstantiable REPR.
How to return the values in a junction as an array?
Define a junction my $j = 1 | 2 | 3 | 4 | 5, now I want to get an array of its value [1 2 3 4 5], how should I implement this? I tried $j.values but Perl6 gave me the whole junction as an element: [any((1), (2), (3), (4), (5))].
[ "This is intentional, as far as I know.\nImagine $j containing a Junction of hashes: then $j.values would be a junction of Seq's, not the hashes themselves.\nIf you want the array of a junction, then maybe you should start from an array, and build a junction out of that:\nmy @a = 1,2,3,4,5;\nmy $j = any(@a);\n\nIf you really want to go the Junction -> Array way, you could, but it would involve using nqp, and that's something I would not recommend in userland code.\n", "As Håkon Hægland already pointed out, this is not something you're supposed to do:\n\nJunctions are meant to be used as matchers in boolean context; introspection of junctions is not supported. If you feel the urge to introspect a junction, use a Set or a related type instead.\n -- docs.perl6.org/type/Junction\n\nHowever, it is possible.\nFirst, you can use authothreading (ie the automatic evaluation of each branch of a junction when passed to a function that expects an argument of type Any):\nsub unjunc(Junction $j) {\n gather -> Any $_ { .take }.($j);\n}\n\nSecond, you can poke into the guts and manually extract the values:\nsub unjunc(Junction $j) {\n multi extract(Any $_) { .take }\n multi extract(Junction $_) {\n use nqp;\n my $list := nqp::getattr($_, Junction, '$!storage');\n my int $elems = nqp::elems($list);\n loop (my int $i = 0; $i < $elems; $i = $i + 1) {\n extract nqp::atpos($list, $i);\n }\n }\n gather extract $j;\n}\n\nIf your junction is non-recursive (ie does not contain other junctions you want to flatten), the latter approach can be simplified:\nmy $j := 1|2|3;\nsay nqp::p6bindattrinvres(nqp::create(List), List, '$!reified',\n nqp::getattr($j, Junction, '$!storage'));\n\n", "There's a way to thread a junction of Mu so as to build an Array of eigenstates with their containers preserved. Because Mu is a supertype of the Any that autothreading is oriented towards, we can't depend on that feature while covering all possible eigenstate types, however.\nMu.ACCEPTS has a multi method ACCEPTS(Mu:U: Junction:D) is default candidate to allow for smartmatches of junctions against any type object. This allow for a manual threading if fed a thunk that can recurse over ACCEPTS to be chained with CALL-ME, which can thread over its invocant:\nclass Unison is Mu is repr<Uninstantiable> {\n method CALL-ME(Mu $topic is raw --> Array:D) {\n my @list;\n proto sub push(Mu) {*}\n multi sub push(Mu $topic is raw) { @list.BIND-POS(@list.elems, $topic) }\n multi sub push(Junction:D $junction) { self.ACCEPTS($junction).(&push) }\n self.ACCEPTS($topic).(&push);\n @list\n }\n\n multi method ACCEPTS(Mu $topic is raw) {\n -> &accept { accept $topic }\n }\n}\n\nsay Unison(1 | 2 | 3 | 4 | 5); # OUTPUT: [1 2 3 4 5]\nsay Unison((1 | (2, 3) & 4) ^ 5); # OUTPUT: [1 (2 3) 4 5]\n\nACCEPTS must not be overridden in its entirety, but since Mu is the parent and not Any, just the one Mu candidate can cover its bases. This will only work on a type object invocant, so this is given the Uninstantiable REPR.\n" ]
[ 10, 6, 0 ]
[]
[]
[ "arrays", "junction", "raku" ]
stackoverflow_0043568394_arrays_junction_raku.txt
Q: Appscript in google sheets. Autofill cells when a checkbox is checked Spreadsheet image I would like to be able to add names to the empty numbered cells when the checkbox next to the name is checked. When I uncheck a box, i would like the name removed and replaced with the next name that I check. The order the names show in the cells does not matter. I honestly have no idea how to do this or if it is possible. A: Use filter(), like this: =filter(D4:D, E4:E) The formula assumes that the names are in column D4:E and the checkboxes in column E4:E.
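Since the question is tagged google-apps-script, a script-based sketch is shown below as well; the column letters and the target range are guesses, because the referenced screenshot is not reproduced here, so they will almost certainly need adjusting.

// Simple onEdit trigger: names assumed in D4:D, checkboxes in E4:E, numbered target cells assumed in column G from row 4.
function onEdit(e) {
  const sheet = e.range.getSheet();
  if (e.range.getColumn() !== 5 || e.range.getRow() < 4) return;    // only react to the checkbox column
  const rows = sheet.getRange('D4:E').getValues();
  const checked = rows.filter(r => r[1] === true).map(r => [r[0]]); // names whose box is ticked
  sheet.getRange(4, 7, sheet.getMaxRows() - 3, 1).clearContent();   // clear previously written names
  if (checked.length) {
    sheet.getRange(4, 7, checked.length, 1).setValues(checked);
  }
}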
Appscript in google sheets. Autofill cells when a checkbox is checked
Spreadsheet image I would like to be able to add names to the empty numbered cells when the checkbox next to the name is checked. When I uncheck a box, i would like the name removed and replaced with the next name that I check. The order the names show in the cells does not matter. I honestly have no idea how to do this or if it is possible.
[ "Use filter(), like this:\n=filter(D4:D, E4:E)\nThe formula assumes that the names are in column D4:E and the checkboxes in column E4:E.\n" ]
[ 0 ]
[]
[]
[ "autofill", "checkbox", "google_apps_script", "google_sheets" ]
stackoverflow_0074658706_autofill_checkbox_google_apps_script_google_sheets.txt
Q: plotly including multiple hyperlinks in text Is there a way to hover over data in a plotly graph and then be able to click on a choice of hyperlinks within the text? There are a number of questions (e.g., here, here) that allow the user to click on a point and that brings you to the url associated with that point but in those solutions it is restricted to only one url. For example: library(ggplot2) library(plotly) library(htmlwidgets) mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), website = c("https://www.google.com", "https://www.r-project.org/"), link = c( "https://www.google.com", "https://www.r-project.org/")) g <- ggplot(mydata, aes(x = xx, y = yy, text = paste0("xx: ", xx, "\n", "website link: ", website), customdata = link)) + geom_point() g p <- ggplotly(g, tooltip = c("text")) p onRender( p, " function(el) { el.on('plotly_click', function(d) { var url = d.points[0].customdata; window.open(url); }); } " ) You can then click on the second point and it will bring you to https://www.r-project.org/ : What I want is to be able to choice between two or more links (i.e. click on a hyperlink within the textbox): mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), website = c("https://www.google.com", "https://www.r-project.org/), website2 = c(" https://www.reddit.com/", "http://stackoverflow.com/"), link = c( "https://www.google.com, https://www.reddit.com/", "https://www.r-project.org/, http://stackoverflow.com/")) g <- ggplot(mydata, aes(x = xx, y = yy, text = paste0("xx: ", xx, "\n", "website link: ", website, "\n", "Second website: ", website2), customdata = link)) + geom_point() g p <- ggplotly(g, tooltip = c("text")) p I sense this cannot be achieved with text or tooltip from plotly but perhaps there is a different workaround using e.g. javascript (which I am not familiar with). Any ideas? 
Thanks A: Here is a way with Shiny: library(plotly) library(htmlwidgets) library(shiny) mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), website = c("https://www.google.com/", "https://www.r-project.org/"), website2 = c("https://www.reddit.com/", "http://stackoverflow.com/"), link = I(list( list("https://www.google.com", "https://www.reddit.com/"), list("https://www.r-project.org/", "http://stackoverflow.com/") )) ) g <- ggplot( mydata, aes( x = xx, y = yy, text = paste0( "xx: ", xx, "\n", "website link: ", website, "\n", "Second website: ", website2 ), customdata = link )) + geom_point() p <- ggplotly(g, tooltip = c("text")) %>% onRender( "function(el) { el.on('plotly_click', function(d) { var urls = d.points[0].customdata; Shiny.setInputValue('urls', urls); }); }" ) ui <- fluidPage( plotlyOutput("plotly") ) server <- function(input, output, session) { output[["plotly"]] <- renderPlotly({ p }) observeEvent(input[["urls"]], { url1 <- input[["urls"]][1] url2 <- input[["urls"]][2] showModal(modalDialog( tags$div( tags$a(href = url1, "First link"), tags$br(), tags$a(href = url2, "Second link") ) )) }) } shinyApp(ui, server) A: Here is a way without Shiny, using the jqueryUI library: library(plotly) library(htmlwidgets) library(htmltools) dep <- htmlDependency( name = "jquery-ui", version = "1.13.2", src = c(href = "https://cdnjs.cloudflare.com/ajax/libs/jqueryui/1.13.2"), script = "jquery-ui.min.js", stylesheet = "themes/base/jquery-ui.min.css" ) mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), website = c("https://www.google.com/", "https://www.r-project.org/"), website2 = c("https://www.reddit.com/", "http://stackoverflow.com/"), link = I(list( list("https://www.google.com", "https://www.reddit.com/"), list("https://www.r-project.org/", "http://stackoverflow.com/") )) ) g <- ggplot( mydata, aes( x = xx, y = yy, text = paste0( "xx: ", xx, "\n", "website link: ", website, "\n", "Second website: ", website2 ), customdata = link )) + geom_point() p <- ggplotly(g, tooltip = c("text")) %>% onRender( "function(el) { el.on('plotly_click', function(d) { var urls = d.points[0].customdata; $div = $('<div><p><a href=\"' + urls[0] + '\">First link</a></p><p><a href=\"' + urls[1] + '\">Second link</a></p></div>'); $div.dialog({ autoOpen: false, show: {effect: 'blind', duration: 1000}, hide: {effect: 'explode', duration: 1000} }); $div.dialog('open'); }); }" ) deps <- c(p$dependencies, list(dep)) p$dependencies <- deps p Using the SweetAlert2 library: library(shiny) library(plotly) library(htmlwidgets) library(htmltools) dep <- htmlDependency( name = "sweetalert2", version = "11.6.15", src = c(href = "https://cdnjs.cloudflare.com/ajax/libs/limonte-sweetalert2/11.6.15"), script = "sweetalert2.all.min.js" ) mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), website = c("https://www.google.com/", "https://www.r-project.org/"), website2 = c("https://www.reddit.com/", "http://stackoverflow.com/"), link = I(list( list("https://www.google.com", "https://www.reddit.com/"), list("https://www.r-project.org/", "http://stackoverflow.com/") )) ) g <- ggplot( mydata, aes( x = xx, y = yy, text = paste0( "xx: ", xx, "\n", "website link: ", website, "\n", "Second website: ", website2 ), customdata = link )) + geom_point() p <- ggplotly(g, tooltip = c("text")) %>% onRender( "function(el) { el.on('plotly_click', function(d) { var urls = d.points[0].customdata; var html = '<div><p>' + '<a href=\"' + urls[0] + '\" target=\"_blank\">First link</a>' + '</p><p>' + '<a href=\"' + urls[1] + '\" target=\"_blank\">Second link</a>' 
+ '</p></div>'; Swal.fire({ title: 'Links', html: html }); }); }" ) deps <- c(p$dependencies, list(dep)) p$dependencies <- deps p More stylish: library(shiny) library(plotly) library(htmlwidgets) library(htmltools) dep <- htmlDependency( name = "sweetalert2", version = "11.6.15", src = c(href = "https://cdnjs.cloudflare.com/ajax/libs/limonte-sweetalert2/11.6.15"), script = "sweetalert2.all.min.js" ) mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), link = I(list( list( list(title = "Google", url = "https://www.google.com"), list(title = "Reddit", url = "https://www.reddit.com/") ), list( list(title = "R project", url = "https://www.r-project.org/"), list(title = "StackOverflow", url = "http://stackoverflow.com/") ) )) ) g <- ggplot( mydata, aes( x = xx, y = yy, text = paste0("xx: ", xx), customdata = link )) + geom_point() p <- ggplotly(g, tooltip = c("text")) %>% onRender( "function(el) { el.on('plotly_click', function(d) { var urls = d.points[0].customdata; var html = '<hr/><div><p>' + '<a href=\"' + urls[0].url + '\" target=\"_blank\">' + urls[0].title + '</a>' + '</p><p>' + '<a href=\"' + urls[1].url + '\" target=\"_blank\">' + urls[1].title + '</a>' + '</p></div>'; Swal.fire({ title: '<strong>Links</strong>', html: html }); }); }" ) deps <- c(p$dependencies, list(dep)) p$dependencies <- deps p You can also animate the sweet alerts with the Animate.css library: library(shiny) library(plotly) library(htmlwidgets) library(htmltools) dep_sweetalert2 <- htmlDependency( name = "sweetalert2", version = "11.6.15", src = c(href = "https://cdnjs.cloudflare.com/ajax/libs/limonte-sweetalert2/11.6.15"), script = "sweetalert2.all.min.js" ) dep_animate.css <- htmlDependency( name = "animate.css", version = "4.1.1", src = c(href = "https://cdnjs.cloudflare.com/ajax/libs/animate.css/4.1.1"), stylesheet = "animate.min.css" ) mydata <- data.frame( xx = c(1, 2), yy = c(3, 4), link = I(list( list( list(title = "Google", url = "https://www.google.com"), list(title = "Reddit", url = "https://www.reddit.com/") ), list( list(title = "R project", url = "https://www.r-project.org/"), list(title = "StackOverflow", url = "http://stackoverflow.com/") ) )) ) g <- ggplot( mydata, aes( x = xx, y = yy, text = paste0("xx: ", xx), customdata = link )) + geom_point() p <- ggplotly(g, tooltip = c("text")) %>% onRender( "function(el) { el.on('plotly_click', function(d) { var urls = d.points[0].customdata; var html = '<hr/><div><p>' + '<a href=\"' + urls[0].url + '\" target=\"_blank\">' + urls[0].title + '</a>' + '</p><p>' + '<a href=\"' + urls[1].url + '\" target=\"_blank\">' + urls[1].title + '</a>' + '</p></div>'; Swal.fire({ title: '<strong>Links</strong>', html: html, showClass: {popup: 'animate__animated animate__rollIn'}, hideClass: {popup: 'animate__animated animate__rollOut'} }); }); }" ) deps <- c(p$dependencies, list(dep_sweetalert2, dep_animate.css)) p$dependencies <- deps p
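If a point can carry an arbitrary number of links, the dialog HTML can also be built in a loop instead of indexing urls[0] and urls[1] by hand. The snippet below is only a sketch: it assumes the "more stylish" data layout above (each customdata entry is a list of title/url records, and each point carries at least two of them), and it is meant as a drop-in replacement for the onRender call in that example, with the htmlDependency setup left unchanged.

p <- ggplotly(g, tooltip = c("text")) %>% onRender(
  "function(el) {
    el.on('plotly_click', function(d) {
      var links = d.points[0].customdata;
      var html = '<hr/><div>' + links.map(function(l) {
        return '<p><a href=\"' + l.url + '\" target=\"_blank\">' + l.title + '</a></p>';
      }).join('') + '</div>';
      Swal.fire({title: '<strong>Links</strong>', html: html});
    });
  }"
)

The same handler then works unchanged whether a point carries two, three, or more links.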
[ "ggplot2", "ggplotly", "javascript", "plotly", "r" ]
stackoverflow_0074657914_ggplot2_ggplotly_javascript_plotly_r.txt
Q: How to improve the use of function splinefun? I have my code that parts work fine: C <- c(0, 0.3, 1.5, 3.5, 19.5) v1 <- c(0.00, 0.00, 0.00, 0.26, 0.91) H <- 1 n <- 1 V <- function(C, H, n) { 1/(1 + (C/H)^n) } y_spa1 <- V(C, H, n) x_dense1 <- seq(0, 10, by=0.1) y_dense1 <- splinefun(y_spa1, C, )(x_dense1) y_dense <- approx(C, y_spa1, xout=x_dense1)$y which(y_dense1 <= 0.5) which(y_dense1 <= 0.5)[1] x_dense1[which(y_dense1 <= 0.5)[1]] It seems to me that when I try to do the same for v1 it doesn't give me the correct result maybe because some of the values are 0? y_spa1 <- V(C, H, n) x_dense1 <- seq(0, 10, by=0.1) y_dense1 <- splinefun(y_spa1, v1, )(x_dense1) y_dense <- approx(v1, y_spa1, xout=x_dense1)$y which(y_dense1 <= 0.5) which(y_dense1 <= 0.5)[1] x_dense1[which(y_dense1 <= 0.5)[1]] which(y_dense1 <= 0.5)[1] # [1] 3 x_dense1[which(y_dense1 <= 0.5)[1]] # [1] 0.2 I think the results are too low. I have no experience in this area in R, so I am asking the forum for help. A: Looks like you called splinefun with inadvertently swapped x and y arguments, thus predicting x from y: y_dense1 <- splinefun(y_spa1, C, )(x_dense1). Boiling down your code to a minimal reproducible example and plotting the results would have revealed this (and made it easier to help you). In essence: xs = .1 * 1:100 ys = 1 / (1 + xs) plot(xs, ys) ## overlay thick blue interpolated curve from splinefun: curve(splinefun(xs, ys)(x), lwd = 4, col = 'blue', add = TRUE) ## overlay thin red interpolated curve from approxfun: curve(approxfun(xs, ys)(x), lwd = 1, col = 'red', add = TRUE) Now both splinefun and approx (via approxfun to supply the function to curve) coincide with the data (plot not shown).
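As a concrete check, here is a minimal sketch using the data from the question, with splinefun and approx both called in the (x, y) order described above; finding the first concentration at which the response falls to 0.5 then gives a value close to 1, as expected for 1/(1 + C) with H = 1 and n = 1. The 0.5 threshold is taken from the question; everything else follows from its definitions.

C <- c(0, 0.3, 1.5, 3.5, 19.5)
H <- 1
n <- 1
V <- function(C, H, n) 1/(1 + (C/H)^n)

y_spa1   <- V(C, H, n)            # response at the sampled concentrations
x_dense1 <- seq(0, 10, by = 0.1)

## x first, y second: interpolate the response as a function of C
y_spline <- splinefun(C, y_spa1)(x_dense1)
y_linear <- approx(C, y_spa1, xout = x_dense1)$y

## first concentration at which the interpolated response drops to 0.5 or below
x_dense1[which(y_spline <= 0.5)[1]]   # roughly 1, since 1/(1 + C/H) = 0.5 at C = H

The same pattern applies to v1: pass C as the first argument and v1 as the second, rather than the other way round.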
[ "function", "r" ]
stackoverflow_0074655797_function_r.txt
Q: does `radix_tree_insert` need `spin_lock` to protect it radix_tree_insert is protected by spin_lock in Linux kernel source code. But the dmesg shows warning information as below: [ 667.551326] dump_backtrace.cfi_jt+0x0/0x8 [ 667.556266] show_stack+0x1c/0x2c [ 667.560415] dump_stack_lvl+0x94/0x100 [ 667.565017] ___might_sleep+0x194/0x1e4 [ 667.569688] __might_sleep+0x58/0x94 [ 667.574112] slab_pre_alloc_hook+0x5c/0xf0 [ 667.579066] kmem_cache_alloc+0x84/0x398 [ 667.583830] radix_tree_node_alloc+0x74/0x138 [ 667.589035] radix_tree_insert+0xf4/0x1fc The warning information means radix_tree_insert might sleep, and it should not be in atomic context. I also notice radix_tree_insert is not protected by spin_lock in some code. Does radix_tree_insert need to be protected by spin_lock? Do we need to care about the warning information? A: Like any other function, which modifies a radix tree, radix_tree_insert should be called under those synchronization, which (at least) prevents other modifications to operate concurrently. This is clearly written in the header include/linux/radix-tree.h which declares the radix tree: any function modifying the tree or tags (inserting or deleting items, setting or clearing tags) must exclude other modifications, and exclude any functions reading the tree. Depending on a usage scenario for a specific radix tree, such synchronization could be spinlock, mutex or something else. Normally, a synchronization mechanism for a specific radix tree is described near its declaration. E.g. the declaration in the fs/btrfs/ctree.h is following: /* * radix tree that keeps track of delayed nodes of every inode, * protected by inode_lock */ struct radix_tree_root delayed_nodes_tree; A type of synchronization for modifications should take into account gfp mask parameter, which is passed to the constructor of the radix tree and used for nodes allocation. That is, if this mask parameter is GFP_KERNEL, then modification operations shouldn't be called under a spinlock.
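For illustration, here is a minimal sketch of the two patterns described above; the tree, lock, and function names are hypothetical rather than taken from any particular driver. The point is that the gfp mask used for the tree's node allocations decides which kind of lock is safe around radix_tree_insert.

/* Tree whose nodes are allocated with GFP_ATOMIC: the allocation cannot sleep,
 * so a spinlock around the insert is fine. */
static RADIX_TREE(my_tree, GFP_ATOMIC);
static DEFINE_SPINLOCK(my_tree_lock);

static int my_insert_atomic(unsigned long index, void *item)
{
        int err;

        spin_lock(&my_tree_lock);       /* excludes other modifications (and readers) */
        err = radix_tree_insert(&my_tree, index, item);
        spin_unlock(&my_tree_lock);
        return err;
}

/* Tree declared with GFP_KERNEL: the allocation may sleep, so protect the
 * insert with a sleeping lock such as a mutex instead of a spinlock. */
static RADIX_TREE(my_other_tree, GFP_KERNEL);
static DEFINE_MUTEX(my_other_tree_lock);

static int my_insert_sleeping(unsigned long index, void *item)
{
        int err;

        mutex_lock(&my_other_tree_lock);
        err = radix_tree_insert(&my_other_tree, index, item);
        mutex_unlock(&my_other_tree_lock);
        return err;
}

If a GFP_KERNEL tree really must be modified under a spinlock, the usual idiom is to call radix_tree_preload(GFP_KERNEL) before taking the lock and radix_tree_preload_end() after dropping it, so the node allocation happens outside the atomic section; that is exactly the situation the might_sleep warning in the question points at.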
[ "linux", "linux_kernel", "locking", "radix_tree", "spinlock" ]
stackoverflow_0074656552_linux_linux_kernel_locking_radix_tree_spinlock.txt
Q: Mac OS Automator pdf to jpg quick action not working I produce pdf's from jpg's and vice versa all the time by right clicking on the file and using "Quick Actions" produced using Mac's Automator. But while the workflow I use to produce pdf's works perfectly, the workflow to produce jpg's always gives me duplicate files. I've tried everything I can think of but I always get either no jpg produced or 2 of them. The illustrations below show each of my workflows, with comment bubbles on the presence or absence of the "Copy Finder Items" action which seems to be key. Thanks. Workflow to produce jpgs (makes duplicates): Workflow to produce pdfs (works perfectly): A: I figured it out. When I removed the "Get Selected Finder Items" action I no longer got duplicate jpg's. The process of posting a question helps me a lot. Maybe this will help others. A: I figured it out too; this was happening to me as well. Of the 4 files I was trying to convert to PDF with the quick action, one was accidentally a "txt" file rather than a PDF, which is why I didn't get the "Create PDF" option. Make sure your files are correct!
[ "automator", "jpeg", "macos", "pdf" ]
stackoverflow_0054237499_automator_jpeg_macos_pdf.txt
Q: Dask worker out of memory but I don't know why I have a dask cluster with several workers each with 93 GiB = 100 GB memory, and the total cluster has more than 2 TiB of memory (see picture below). When I watch the dashboard as my job runs, it fluctuates a bit but always looks like something shown in the picture, i.e. no where near the memory limit. Then, one of the workers will die due to an out-of-memory error. What I am really baffled with is how did it happen and why is it not shown at all in the dashboard? (note my dask version is new enough that it shows unmanaged memory as light color for each worker). My task is to load a relatively large dataset defined on a 2D grid (a wavefield). First, I would like to filter it in the time domain (which means accessing the entire time axis for each point at once). Then, I would like to write the filtered data for all points at each single time to a separate file. When these two tasks are by themselves(i.e., if I only filter the data without writing; or if I don't filter the data and just write the raw wavefield), dask works very well. However, when they are combined, the OOM error occurs for large simulations (but still works fine for small simulations). The raw wavefield data (variable: wave_on_slice_channel) for my large simulation is 11.67 GiB. For a smaller test simulation (which works when the above two tasks are combined), it is only 20.75 MiB. My (simplified) code are as follows: ### Function to filter def filter_wavefield(pos, butter_filter): filtered = signal.sosfilt(butter_filter,wave_on_slice_channel[pos,:].compute()).astype("float32") return filtered ### Function to write files def save_filtered_wavefield(chunk): # Many lines omitted here for setting up the write filtered_data = ncfile.createVariable('filtered_data', np.float32, ('data','time')) filtered_data[:,:] = blocks[chunk].compute() ncfile.close() return ### Putting multiple points together into a dask bag to avoid crushing the scheduler coord_list = [i for i in range(nelem*ngll)] coord_bag = db.from_sequence(coord_list,npartitions=100) coord_bag = coord_bag.persist() wait(coord_bag) ### Submitting tasks for filtering ### and converting back to dask arrays filtered = coord_bag.map(filter_wavefield, butter_filter) filtered_waves = filtered.compute() # this is a numpy array filtered_da = da.from_array(filtered_waves,chunks=wave_on_slice_channel.chunks) # this should be exactly the same in size and shape as the raw wavefield, except this is filtered blocks = filtered_da.to_delayed().ravel() # Split filtered wavefield by the raw wavefield's original chunks so each writer only sees a portion of the whole wavefield. The above code always works fine no matter for the small or large simulations (because this is just one of the two tasks, i.e. filtering + writing). As a check, filtered_da for the small simulation is shown, and we can see it is exactly the same as the raw wavefield from the small simulation (except the number of graph layers, which I think is just the number of operations it took to get this dask array and so not important?) The problem comes now when I want to save these filtered data to files. I have something similar as above: ### Use dask bags to avoid too many tasks file_list = [i for i in range(len(blocks))] file_bag = db.from_sequence(file_list,npartitions=len(blocks)) file_bag = file_bag.persist() wait(file_bag) ### Write out expected number of files to receive ### This file is always written so up to here everything is fine. 
with open(dest_dir+'/NOF.txt','w') as f: f.write("The number of expected filtered data files is: %d" % len(blocks)+'\n') ### Submit tasks to write files ### This is where things break for i in range(len(blocks)): f.append(client.submit(save_filtered_wavefield,i)). Note the variable passed to each call to save_filtered_wavefield is simply an index i, and then within that function the data is accessed with blocks[i].compute(). I think this is fine because the filtering also has wave_on_slice_channel[pos,:].compute(). I have tried to delete some variables from memory, especially the persisted coord_bag, but the problem persists. I have also tried to read some articles about managing memory on dask, but since I can't seem to see anything on my dashboard, I am still quite lost here. Sorry for the long post, but any help will be greatly appreciated!! A: More detailed answers will follow hopefully. My first thought: it is a bad idea to access the global dask array within a function to be submitted. You should call the high level API (like array, bag) in the client only, and write functions that work on the partitions only (numpy arrays). You should never normally be calling compute() on a worker.
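To make that concrete, here is a hedged sketch of the restructuring suggested in the answer. The names wave, butter_filter, filter_block and write_block are placeholders rather than the exact objects from the question; the point is that each task receives its chunk as a plain NumPy array, and compute()/submit() are only ever called from the client.

import numpy as np
import dask.array as da
from dask import delayed
from dask.distributed import Client
from scipy import signal

client = Client()                                   # assumes a running cluster

# stand-in for the wavefield: chunk along points only, keep the full time axis per block
wave = da.random.random((10_000, 2_000), chunks=(1_000, 2_000))
butter_filter = signal.butter(4, 0.1, output="sos")

def filter_block(block, sos):
    # runs on one NumPy chunk; no global dask array, no .compute() on the worker
    return signal.sosfilt(sos, block, axis=1).astype("float32")

filtered = wave.map_blocks(filter_block, sos=butter_filter, dtype="float32")

def write_block(block, idx):
    # 'block' arrives as a NumPy array; swap np.save for the netCDF writer as needed
    np.save(f"filtered_{idx:05d}.npy", block)
    return idx

writes  = [delayed(write_block)(blk, i)
           for i, blk in enumerate(filtered.to_delayed().ravel())]
futures = client.compute(writes)                    # graphs execute on the workers
client.gather(futures)

Because each write task only ever holds one filtered chunk, the peak per-worker memory is bounded by the chunk size rather than by the whole filtered wavefield.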
[ "dask", "python_3.x" ]
stackoverflow_0074651409_dask_python_3.x.txt
Q: Type 'Ref' cannot be used as an index type I want to call myArray[refVariable] in Vue. How do I fix the following error in Vue/TypeScript: Type 'Ref<number>' cannot be used as an index type A: Instead of refVariable, use refVariable.value.
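A minimal sketch (the array contents and the ref's value are made up for illustration):

import { ref } from 'vue'

const myArray = ['a', 'b', 'c']
const refVariable = ref(1)

// In script code, unwrap the ref explicitly before using it as an index:
const item = myArray[refVariable.value]   // 'b'

// In a template, a top-level ref is unwrapped automatically,
// so {{ myArray[refVariable] }} works there without .value.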
[ "indexing", "javascript", "ref", "typescript", "vue.js" ]
stackoverflow_0074658959_indexing_javascript_ref_typescript_vue.js.txt
Q: Redirecting HTTP to HTTPS on Apache VirtualHost server not working My website is connecting through HTTP and redirecting to the HTTPS VirtualHost but there it ends. I wouldn't post if I hadn't searched for hours without result. Please see the following: Trying to connect through port 443 (With VirtualHost setup and Port Info) My ports.conf file is the following: Listen 80 <IfModule ssl_module> Listen 443 </IfModule> <IfModule mod_gnutls.c> Listen 443 </IfModule> And this is my router setup yes, ssl is enabled through apache and running with ssl_mod being enabled. All posts lead me to different types of configs in my VirtualHost for port 80, but I tried them all. Is there anything I missed? EDIT UFW config sudo ufw status To Action From -- ------ ---- WWW Full ALLOW Anywhere 443/tcp ALLOW Anywhere WWW Full (v6) ALLOW Anywhere 443/tcp (v6) ALLOW Anywhere Further description of WWW Full sudo nano /etc/ufw/applications.d/ufw-webserver ... [WWW Full] title=Web Server (HTTP,HTTPS) description=Web Server (HTTP,HTTPS) ports=80,443/tcp ... A: The redirect to HTTPS can be enabled in the Virtual Host file for port 80. If you would like to force HTTPS for all web pages, you can use the following set of directives, after running sudo a2enmod rewrite and sudo a2enmod ssl: to redirect everything to https://yourdomain.com: <VirtualHost *:80> ServerName yourdomain.com Redirect permanent / https://yourdomain.com/ </VirtualHost> <VirtualHost _default_:443> ServerName yourdomain.com DocumentRoot /usr/local/apache2/htdocs SSLEngine On ... </VirtualHost> to redirect everything to https://www.yourdomain.com: <VirtualHost *:80> ServerName www.yourdomain.com Redirect permanent / https://www.yourdomain.com/ </VirtualHost> <VirtualHost _default_:443> ServerName www.yourdomain.com DocumentRoot /usr/local/apache2/htdocs SSLEngine On ... </VirtualHost> to redirect a specific directory (/secure in our case): <VirtualHost *:80> ServerName www.yourdomain.com DocumentRoot /usr/local/apache2/htdocs Redirect permanent /secure https://yourdomain.com/secure </VirtualHost> <VirtualHost _default_:443> ServerName www.yourdomain.com DocumentRoot /usr/local/apache2/htdocs SSLEngine On ... </VirtualHost> You can read more about other approaches including .htaccess here
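If you prefer mod_rewrite over the Redirect directive (for example to keep one catch-all rule for every host and path), an equivalent port-80 virtual host looks roughly like this; yourdomain.com is a placeholder:

<VirtualHost *:80>
    ServerName yourdomain.com
    RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
</VirtualHost>

Either way, make sure the relevant modules are enabled (sudo a2enmod rewrite ssl) and reload Apache afterwards.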
[ "apache", "https", "ip", "server", "ssl" ]
stackoverflow_0074658870_apache_https_ip_server_ssl.txt
Q: unsubscribe from combineLatest I have the following code snippet. I'm not sure how to clean up all subscriptions or if I made any mistakes. Please help me improve it. I'm using it within an Angular service to initialize my application. destroy$ = new Subject(); loadData() : void { const loadData1 = this.store.select(selector); // first sets loading to true, then to false const loadData2 = this.store.select(selector); // first sets loading to true, then to false const loadData3 = this.store.select(selector); // first sets loading to true, then to false combineLatest([loadData1, loadData2, loadData3]) .pipe(takeUntil(this.destroy$)) .subscribe(data => { const a = data[0]; const b = data[1]; const c = data[2]; ... if (a.loadedSuccessfully && b.loadedSuccessfully && c.loadedSuccessfully) { ... // do something with the data ... // clean up this.destroy$.next(true); this.destroy$.complete(); } } }); } Questions: (1) Did I make any mistakes? (2) How can I improve it? (3) What about the Observables loadData1-3? There is no subscription in the beginning. So the following line does not create a memory leak, right? *const loadData1 = this.store.select(selector);* Does combineLatest create the subscriptions for loadData1-3 and unsubscribe from them? A: (1) Did I make any mistakes? Your solution will work, but it will work only once, because you can't complete destroy$ twice. Usually a Subject called destroy$ is invoked and completed once, when the component or service is destroyed, e.g. in ngOnDestroy. I believe you chose the wrong operator here. If you want exactly one emission, you should prefer forkJoin over combineLatest; it will also make destroy$ superfluous, since it emits only once or never. (2) How can I improve it? Besides changing the operator, I'd suggest expanding the tuple in the parameter declaration, e.g. .subscribe(data => { const a = data[0]; const b = data[1]; const c = data[2]; could become .subscribe(([a, b, c]) => { It's also a bit odd that the result has a boolean that indicates whether or not loading was successful. I believe the idiomatic way would be to use the operators catchError and throwError for error handling. (3) What about the Observables loadData1-3? There is no subscription in the beginning. So the following line does not create a memory leak, right? [...] Does combineLatest create the subscriptions for loadData1-3 and unsubscribe from them? Exactly! When you subscribe to a piped Observable it will automatically subscribe to all the Observables it depends on, and when you unsubscribe from it, it will also unsubscribe from all of its dependencies. A: The first operator emits only the first value (or the first value that meets some condition). If you want to do something once loadData1-3 have all loadedSuccessfully, try this: loadData(): void { const loadData1 = this.store.select(selector); const loadData2 = this.store.select(selector); const loadData3 = this.store.select(selector); combineLatest([loadData1, loadData2, loadData3]).pipe( first((arr) => arr.every(({loadedSuccessfully}) => loadedSuccessfully)) ).subscribe(/* do something with the data */); }
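Putting both suggestions together, here is a sketch of how the service method could look with forkJoin; selectorA, selectorB and selectorC are placeholders for the real selectors. One caveat worth spelling out: store selectors never complete on their own, so each source is piped through first(...) to make it complete, otherwise forkJoin would never emit.

import { forkJoin } from 'rxjs';
import { first } from 'rxjs/operators';

loadData(): void {
  const loaded = (selector: any) =>
    this.store.select(selector).pipe(
      // completes after the first successful emission
      first(({ loadedSuccessfully }) => loadedSuccessfully)
    );

  forkJoin([loaded(selectorA), loaded(selectorB), loaded(selectorC)])
    .subscribe(([a, b, c]) => {
      // do something with the data; forkJoin completes by itself,
      // so no destroy$/takeUntil bookkeeping is needed here
    });
}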
[ "rxjs" ]
stackoverflow_0074651069_rxjs.txt
Q: How to avoid skipping items in a list in zsh? I am using zsh on Mac. I created a list subjects of say 25 items in that list. Now, I would like to run all possible pairwise comparisons between the items in the list, e.g., subject1 vs. subject2, but without running repeated measurements (such as subject1 vs. subject2 and subject2 vs subject1) Here is my code for this task: subjects=(Subject1 Subject2 Subject4 Subject5 Subject6 Subject7 Subject8 Subject9 Subject10 Subject11 Subject12 Subject13 Subject14 Subject15 Subject16 Subject17 Subject18 Subject19 Subject20 Subject22 Subject23 Subject24 Subject25) for i in $subjects do for j in $subjects do if [[ $i < $j ]] then echo "Processing pair $i - $j ..." fi done done The problem is that zsh skips the subjects from subject10 to subject19 and directly jumps to subject20 after comparing subject1 vs. subject9. Where is the flaw in my code? A: The issue is with the comparison operator used in the conditional statement if [[ $i < $j ]]. The < operator compares the strings lexicographically, i.e., it compares the ASCII value of the characters in the strings. In ASCII, the value of the character '1' is less than the value of the character '9', so the strings "Subject10" to "Subject19" are considered to be less than "Subject9", and the comparison between "Subject1" and "Subject10" to "Subject19" is skipped. To fix this issue, you could use a numerical comparison operator -lt, which compares the values of the variables as integers. First, you need to extract the numerical part of the strings by using parameter expansion, and then use the -lt operator to compare the numerical values. Here's the modified code: subjects=(Subject1 Subject2 Subject4 Subject5 Subject6 Subject7 Subject8 Subject9 Subject10 Subject11 Subject12 Subject13 Subject14 Subject15 Subject16 Subject17 Subject18 Subject19 Subject20 Subject22 Subject23 Subject24 Subject25) for i in $subjects do for j in $subjects do # Extract the numerical part of the strings i_num=${i#Subject} j_num=${j#Subject} # Use the -lt operator to compare the numerical values if [[ $i_num -lt $j_num ]] then echo "Processing pair $i - $j ..." fi done done This should give you the expected output.
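An alternative sketch that sidesteps string comparison entirely: loop over array positions and start the inner loop one past the outer index, so every unordered pair is visited exactly once and nothing depends on how the subjects are named (zsh arrays are 1-indexed by default; the list is shortened here for illustration).

subjects=(Subject1 Subject2 Subject4 Subject5)

for (( a = 1; a <= ${#subjects}; a++ )); do
  for (( b = a + 1; b <= ${#subjects}; b++ )); do
    echo "Processing pair ${subjects[a]} - ${subjects[b]} ..."
  done
done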
[ "for_loop", "if_statement", "zsh" ]
stackoverflow_0074658892_for_loop_if_statement_zsh.txt
Q: Document containing array not getting inserted into mongodb using mongoose and node.js @ Norman .. thank you for reply....I am trying hard to know my error.. My code goes link this: '''' const { body, validationResult } = require("express-validator"); const Customer = require('../models/custdetails'); const async = require('async'); '''' Then I have some code related to sanitization .. then customer object is created '''' const customer = new Customer({ firm_name : req.body.firm_name, firm_feature : req.body.firm_feature, first_name: req.body.first_name, last_name: req.body.last_name, mob_no: req.body.mob_no, cust_email : req.body.cust_email, date_of_onboard: req.body.date_of_onboard, date_of_visit: req.body.date_of_visit, cust_need : req.body.cust_need, status : req.body.status, contact_back : req.body.contact_back, }); '''' here firm feature and cust_need are both arrays, then '''' const data = req.body; customer.update({$push: {customer: data},function(err, res){if (err) { console.log("This is the error while inserting data:", err); }else {console.log(res); } } }); res.redirect(customer.url); } ] '''' My data is not getting inserted into database. I have tried every method. Please help I have also tried as below '''' (async function(){ try { const filter = { first_name: req.body.first_name}; const options = {upsert: true}; const result = await customer.updateOne(filter, {$set:{data}}, options).then(function(){ console.log("data is inserted"); console.log('${result.matchedCount} document(s) matched the filter, updated ${result.modifiedCount} document(s)') }) }catch(err) { console.log("Some error has occurred in insertion"); console.log(err); }; }); res.status(200).redirect(customer.url); } '''' Below is my custdetails.js '''' const mongoose = require('mongoose'); const Schema = mongoose.Schema; const CustSchema = new Schema( { firm_name:{type: String, maxLength:50}, firm_feature: {type : { type: String }, enum: ['Private Limited Company', 'Public Limited Company', 'Partnerships Company', 'Limited Liability Partnership LLP', 'One Person Company', 'Sole Proprietorship', 'Section 8 Company']}, first_name: {type: String, required: true, maxLength: 100}, last_name: {type: String, required: true, maxLength: 100}, mob_no: {type: Number, required: true, maxLength:10}, cust_email:{type: String, lowercase: true}, //always converts emailto lowercase before saving date_of_onboard: {type: Date}, date_of_visit: {type: Date}, cust_need: {type : { type: String }, enum:['Four Wheeler Loan', 'Home Loan', 'Two Wheeler Loan']}, brperson: {type: Schema.Types.ObjectId, ref: 'Branch'}, status: {type: Schema.Types.ObjectId, ref: 'CustInstance'}, contact_back: {type: Schema.Types.ObjectId, ref: 'CustInstance' }, } ); //Export model module.exports = mongoose.model('Customer', CustSchema); '''' A: If you are trying to insert I would switch to "customer.save()" method. Here is an example from my code: customer.save((err) => { if (err) { return next(err); } // Respond to request indicating the user was created res.status(201).json({ customer: customer }).end(); });
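Two further points may help; both are sketches based on the code shown above, not a drop-in fix. First, firm_feature and cust_need are meant to hold arrays, but the schema declares them as {type: {type: String}, enum: [...]}; the usual Mongoose way to declare an array of enum strings is [{ type: String, enum: [...] }]. Second, inside the POST handler the new document can simply be saved with async/await instead of update/$push:

// custdetails.js (excerpt) -- array-of-enum fields
firm_feature: [{ type: String, enum: ['Private Limited Company', 'Public Limited Company', 'Partnerships Company', 'Limited Liability Partnership LLP', 'One Person Company', 'Sole Proprietorship', 'Section 8 Company'] }],
cust_need:    [{ type: String, enum: ['Four Wheeler Loan', 'Home Loan', 'Two Wheeler Loan'] }],

// controller (excerpt) -- assumes the express-validator middleware ran before this handler
async (req, res, next) => {
  const errors = validationResult(req);
  if (!errors.isEmpty()) {
    return res.status(400).json({ errors: errors.array() });
  }
  try {
    const customer = new Customer({
      firm_name: req.body.firm_name,
      firm_feature: req.body.firm_feature,   // array of strings
      first_name: req.body.first_name,
      last_name: req.body.last_name,
      mob_no: req.body.mob_no,
      cust_email: req.body.cust_email,
      cust_need: req.body.cust_need,         // array of strings
    });
    await customer.save();                   // inserts the new document
    return res.redirect(customer.url);       // assumes a 'url' virtual exists on the schema
  } catch (err) {
    return next(err);
  }
}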
[ "mongoose", "node.js" ]
stackoverflow_0074650737_mongoose_node.js.txt
Q: Kotlin: Why does override with additional optional arguments not work? I'm trying to override the toString function of a data class with a custom toString that has optional arguments, but it is not working as expected: data class LatLong( val latitude: Double, val longitude: Double ){ // Override keyword not allowed by compiler here fun toString(decimals: Int = 5) = "${"%.${decimals}f".format(latitude)}, ${"%.${decimals}f".format(longitude)}" } fun main() { println(LatLong(-123.0, 49.0)) // prints: "LatLong(latitude=-123.0, longitude=49.0)" i.e. does not call custom toString println(LatLong(-123.0, 49.0).toString()) // prints: "LatLong(latitude=-123.0, longitude=49.0)" i.e. does not call custom toString println(LatLong(-123.0, 49.0).toString(decimals=5)) // prints: "-123.00000, 49.00000" } Question is how should I override it to get the behaviour that you'd expect (i.e. all 3 calls above should use the custom method)?. I could obviously add override fun toString() = toString(decimals=5) But this means defining the default argument twice which is a recipe for future bugs. Of course I could define the default as a constant and reference from both toStringa, but it seems messy. It is surprising LatLong(...).toString() does not call the new method. What is the "Kotlinic" way to handle this? A: You don't need to declare the default value twice. Just declare it in the toString override, rather than in your own toString's parameter list: override fun toString() = toString(decimals = 5) // make this a required parameter fun toString(decimals: Int) = "${"%.${decimals}f".format(latitude)}, ${"%.${decimals}f".format(longitude)}" Of course if you have more format options this would get a bit complicated, but you can always just wrap everything in a (data) class, and end up with a single parameter. data class FormatOptions( val decimals: Int = 5, val someOtherOption: Int = 10 ) override fun toString() = toString(FormatOptions(/* ... */)) fun toString(options: FormatOptions): String = TODO() Just by the way, the parameter list of the call toString() exactly matches the parameterless toString overload declared automatically by the data class. On the other hand, it only matches the one you declared if it considers optional parameters. So the compiler has very good reasons to prefer to resolve LatLong(...).toString() to the parameterless toString method, instead of the one you declared.
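For completeness, a runnable sketch of that suggestion applied to the class from the question, with the expected output shown in the comments:

data class LatLong(
    val latitude: Double,
    val longitude: Double
) {
    // the default number of decimals lives only here, in the override
    override fun toString() = toString(decimals = 5)

    fun toString(decimals: Int) =
        "${"%.${decimals}f".format(latitude)}, ${"%.${decimals}f".format(longitude)}"
}

fun main() {
    println(LatLong(-123.0, 49.0))                        // -123.00000, 49.00000
    println(LatLong(-123.0, 49.0).toString())             // -123.00000, 49.00000
    println(LatLong(-123.0, 49.0).toString(decimals = 2)) // -123.00, 49.00
}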
[ "inheritance", "kotlin", "overriding" ]
stackoverflow_0074658770_inheritance_kotlin_overriding.txt
Q: MERGE statement to update or insert rows into a table My task is to insert or update rows in a table2. Table1 contains id's of all employees. That id matches the ID in the table2. Some of the employees in table2 already have the rows I need but some don't. Table2 doesn't contain the ID's of the employees that don't have those rows. My task is to update the rows for the existing ID's and insert for the ones that don't have those rows. I have tried the following statement: MERGE INTO dbo.table2 AS TGT USING (SELECT table1ID FROM dbo.table1) AS SRC ON SRC.table1ID = TGT.table2ID WHEN MATCHED AND table2Code = 'ValueToInsertOrUpdateCode' THEN UPDATE SET table2Value= 'ValueToInsertOrUpdateValue' WHEN NOT MATCHED BY TARGET THEN INSERT (table2Code, table2ID, table2Value) VALUES ('ValueToInsertOrUpdateCode', src.table1ID, 'ValueToInsertOrUpdateValue'); This currently only updates the rows that exist, but doesn't insert the rows for ID's that don't have existing rows. A: I would, honestly, suggest avoiding the MERGE operator and doing an Upsert here instead. For your scenario, what you need is most likely the following: SET XACT_ABORT ON; BEGIN TRANSACTION; UPDATE T2 WITH (UPDLOCK, SERIALIZABLE) SET table2Value = 'ValueToInsertOrUpdateValue' FROM dbo.Table2 T2 JOIN dbo.Table1 T1 ON T1.table1ID = T2.table2ID; -- You could honestly use an EXISTS here, considering that you're updating the table -- with a literal, rather than a value from the table Table1. INSERT INTO dbo.Table2 (table2Code , table2ID, table2Value) SELECT 'ValueToInsertOrUpdateCode', T1.table1ID, 'ValueToInsertOrUpdateValue' FROM dbo.Table1 T1 WHERE NOT EXISTS (SELECT 1 FROM dbo.Table2 T2 WHERE T2.table2ID = T1.table1ID); COMMIT; db<>fiddle A: Based on your comments is sounds like you want this so that the WHEN NOT MATCHED BY TARGET is executed: MERGE INTO dbo.table2 AS TGT USING (SELECT table1ID FROM dbo.table1) AS SRC ON (SRC.table1ID = TGT.table2ID AND table2Code = 'ValueToInsertOrUpdateCode') -- This is the difference WHEN MATCHED AND table2Code = 'ValueToInsertOrUpdateCode' THEN UPDATE SET table2Value= 'ValueToInsertOrUpdateValue' WHEN NOT MATCHED BY TARGET THEN INSERT (table2Code, table2ID, table2Value) VALUES ('ValueToInsertOrUpdateCode', src.table1ID, 'ValueToInsertOrUpdateValue'); WHEN NOT MATCHED BY TARGET would not execute when SRC.table1ID = TGT.table2ID (i.e. they match). Updating the ON clause to ON (SRC.table1ID = TGT.table2ID AND table2Code = 'ValueToInsertOrUpdateCode') will give you the inserts you are expecting. However you should probably not do this: ON <merge_search_condition> Caution It's important to specify only the columns from the target table to use for matching purposes. That is, specify columns from the target table that are compared to the corresponding column of the source table. Don't attempt to improve query performance by filtering out rows in the target table in the ON clause; for example, such as specifying AND NOT target_table.column_x = value. Doing so may return unexpected and incorrect results. For this reason and what others have suggested it would be safer to do separate update and insert statements.
MERGE statement to update or insert rows into a table
My task is to insert or update rows in a table2. Table1 contains id's of all employees. That id matches the ID in the table2. Some of the employees in table2 already have the rows I need but some don't. Table2 doesn't contain the ID's of the employees that don't have those rows. My task is to update the rows for the existing ID's and insert for the ones that don't have those rows. I have tried the following statement: MERGE INTO dbo.table2 AS TGT USING (SELECT table1ID FROM dbo.table1) AS SRC ON SRC.table1ID = TGT.table2ID WHEN MATCHED AND table2Code = 'ValueToInsertOrUpdateCode' THEN UPDATE SET table2Value= 'ValueToInsertOrUpdateValue' WHEN NOT MATCHED BY TARGET THEN INSERT (table2Code, table2ID, table2Value) VALUES ('ValueToInsertOrUpdateCode', src.table1ID, 'ValueToInsertOrUpdateValue'); This currently only updates the rows that exist, but doesn't insert the rows for ID's that don't have existing rows.
[ "I would, honestly, suggest avoiding the MERGE operator and doing an Upsert here instead. For your scenario, what you need is most likely the following:\nSET XACT_ABORT ON;\nBEGIN TRANSACTION;\n\nUPDATE T2 WITH (UPDLOCK, SERIALIZABLE) \nSET table2Value = 'ValueToInsertOrUpdateValue'\nFROM dbo.Table2 T2\n JOIN dbo.Table1 T1 ON T1.table1ID = T2.table2ID;\n-- You could honestly use an EXISTS here, considering that you're updating the table\n-- with a literal, rather than a value from the table Table1.\n\nINSERT INTO dbo.Table2 (table2Code , table2ID, table2Value)\nSELECT 'ValueToInsertOrUpdateCode',\n T1.table1ID,\n 'ValueToInsertOrUpdateValue'\nFROM dbo.Table1 T1\nWHERE NOT EXISTS (SELECT 1\n FROM dbo.Table2 T2\n WHERE T2.table2ID = T1.table1ID);\n\nCOMMIT;\n\ndb<>fiddle\n", "Based on your comments is sounds like you want this so that the WHEN NOT MATCHED BY TARGET is executed:\nMERGE INTO dbo.table2 AS TGT\nUSING (SELECT table1ID FROM dbo.table1) AS SRC\n ON (SRC.table1ID = TGT.table2ID AND table2Code = 'ValueToInsertOrUpdateCode') -- This is the difference\n\nWHEN MATCHED \n AND table2Code = 'ValueToInsertOrUpdateCode'\n THEN\n UPDATE \n SET table2Value= 'ValueToInsertOrUpdateValue'\n\nWHEN NOT MATCHED BY TARGET \n THEN\n INSERT (table2Code, table2ID, table2Value)\n VALUES ('ValueToInsertOrUpdateCode', src.table1ID, 'ValueToInsertOrUpdateValue'); \n\nWHEN NOT MATCHED BY TARGET would not execute when SRC.table1ID = TGT.table2ID (i.e. they match).\nUpdating the ON clause to ON (SRC.table1ID = TGT.table2ID AND table2Code = 'ValueToInsertOrUpdateCode') will give you the inserts you are expecting.\nHowever you should probably not do this:\nON <merge_search_condition> Caution\n\nIt's important to specify only the columns from the target table to use for matching purposes. That is, specify columns from the target table that are compared to the corresponding column of the source table. Don't attempt to improve query performance by filtering out rows in the target table in the ON clause; for example, such as specifying AND NOT target_table.column_x = value. Doing so may return unexpected and incorrect results.\n\nFor this reason and what others have suggested it would be safer to do separate update and insert statements.\n" ]
[ 0, 0 ]
[]
[]
[ "merge", "sql", "sql_server", "tsql" ]
stackoverflow_0074657044_merge_sql_sql_server_tsql.txt
Q: How to specify a URL slug in django template? I am trying to create a django template that has links to other pages (static images in particular, each with their own html template). I am trying to move away from hard-coding a URL and view for each one. Instead I want to capture them all with a general slug URL, and a view that takes the slug as input. My slug URL in urls.py is working fine - when I manually input the slug field in the full URL it links to the correct template and directs me to the correct page. However l, when I try to reference any of links as slugs from the 'cside' template I keep getting the following error: NoReverseMatch at /plots/cside Reverse for '2E_C' not found. '2E_C' is not a valid view function or pattern name. Basically, I want the 'cside' page to have links that are slugs. Can anyone tell me what I am missing? I have tried everything! Here is my urls.py: from django.urls import re_path, path from django.contrib.staticfiles.urls import staticfiles_urlpatterns from . import views from .views import mode urlpatterns = [ re_path(r'^cside$', views.cside, name='cside'), re_path(r'^lside$', views.lside, name='lside'), re_path(r'^home$', views.home, name='home'), #re_path(r'^2E_C$', views.m2E_C), re_path(r'^4M_C$', views.m4M_C), re_path(r'^6E_C$', views.m6E_C), re_path(r'^6M_C$', views.m6M_C), re_path(r'^8E_C$', views.m8E_C), re_path(r'^2E_L$', views.m2E_L), re_path(r'^4M_L$', views.m4M_L), re_path(r'^6E_L$', views.m6E_L), re_path(r'^6M_L$', views.m6M_L), re_path(r'^8E_L$', views.m8E_L), #re_path(r'^(?P<slug>[-\w]+)/$', views.mode, name='mode'), path('<slug:slugIn>/', views.mode, name='mode') ] Here is my views.py: from django.views.generic import TemplateView, ListView from django.http import HttpResponse, HttpResponseRedirect from django.template import loader from django.shortcuts import get_object_or_404, render from django.urls import reverse from django.views import generic, View from .models import Mode def cside(request): template = loader.get_template('cside.html') context = {} return HttpResponse(template.render(context, request)) def lside(request): template = loader.get_template('lside.html') context = {} return HttpResponse(template.render(context, request)) def home(request): template = loader.get_template('home.html') context = {} return HttpResponse(template.render(context, request)) # Slug Solution #====================================== def mode(request, slugIn=None): model = Mode #print(slugIn) slugOut = Mode.objects.all() #print(slugOut) template = loader.get_template(slugIn+'.html') context = {"slug": slugOut} return HttpResponse(template.render(context, request)) # Hardcoded Old Solution #====================================== # def m2E_C(request): # template = loader.get_template('2E_C.html') # context = {} # return HttpResponse(template.render(context, request)) def m400M_C(request): template = loader.get_template('4M.html') context = {} return HttpResponse(template.render(context, request)) def m6E_C(request): template = loader.get_template('6E_C.html') context = {} return HttpResponse(template.render(context, request)) def m6M_C(request): ... 
And here is the html template for the page I am having issues with: <!DOCTYPE html <html><head> <style type="text/css"> a { color:#005fce; text-decoration:none; font-weight:normal;} c { color:#000000; text-decoration:none; font-weight:bold;} </style> </head> <body><div class="content"> <h1>C SIDE</h1><br> <a href="{% url 'home' %}">Home</a><br><br> <c>TYPES:</c><br> <a href="{% url '2E_C' slug.slug %}">2E</a><br> <a href="{% url '4M_C' %}">4M</a><br> <a href="{% url '6E_C' %}">6E</a><br> <a href="{% url '6M_C' %}">6M</a><br> <a href="{% url '8E_C' %}">8E</a><br> I think my issue is that I can’t seem to pass the input slug field from the URL to the template file, to then know where to go. Or maybe I need to use a model to save the slug, but I couldn't figure out how to do this either. A: Your url is named mode so you need to change your link to use {% url 'mode' '2E_C' %}.
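As a concrete illustration of that fix, the link section of the cside template could look like the sketch below; the slug values are the ones from the question, and everything else about the template stays as it was:

<c>TYPES:</c><br>
<a href="{% url 'mode' '2E_C' %}">2E</a><br>
<a href="{% url 'mode' '4M_C' %}">4M</a><br>
<a href="{% url 'mode' '6E_C' %}">6E</a><br>
<a href="{% url 'mode' '6M_C' %}">6M</a><br>
<a href="{% url 'mode' '8E_C' %}">8E</a><br>

With the links reversed through the single mode pattern, the hard-coded per-page url entries and view functions become redundant, since views.mode already loads '<slug>.html' for whatever slug it receives.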
How to specify a URL slug in django template?
I am trying to create a django template that has links to other pages (static images in particular, each with their own html template). I am trying to move away from hard-coding a URL and view for each one. Instead I want to capture them all with a general slug URL, and a view that takes the slug as input. My slug URL in urls.py is working fine - when I manually input the slug field in the full URL it links to the correct template and directs me to the correct page. However l, when I try to reference any of links as slugs from the 'cside' template I keep getting the following error: NoReverseMatch at /plots/cside Reverse for '2E_C' not found. '2E_C' is not a valid view function or pattern name. Basically, I want the 'cside' page to have links that are slugs. Can anyone tell me what I am missing? I have tried everything! Here is my urls.py: from django.urls import re_path, path from django.contrib.staticfiles.urls import staticfiles_urlpatterns from . import views from .views import mode urlpatterns = [ re_path(r'^cside$', views.cside, name='cside'), re_path(r'^lside$', views.lside, name='lside'), re_path(r'^home$', views.home, name='home'), #re_path(r'^2E_C$', views.m2E_C), re_path(r'^4M_C$', views.m4M_C), re_path(r'^6E_C$', views.m6E_C), re_path(r'^6M_C$', views.m6M_C), re_path(r'^8E_C$', views.m8E_C), re_path(r'^2E_L$', views.m2E_L), re_path(r'^4M_L$', views.m4M_L), re_path(r'^6E_L$', views.m6E_L), re_path(r'^6M_L$', views.m6M_L), re_path(r'^8E_L$', views.m8E_L), #re_path(r'^(?P<slug>[-\w]+)/$', views.mode, name='mode'), path('<slug:slugIn>/', views.mode, name='mode') ] Here is my views.py: from django.views.generic import TemplateView, ListView from django.http import HttpResponse, HttpResponseRedirect from django.template import loader from django.shortcuts import get_object_or_404, render from django.urls import reverse from django.views import generic, View from .models import Mode def cside(request): template = loader.get_template('cside.html') context = {} return HttpResponse(template.render(context, request)) def lside(request): template = loader.get_template('lside.html') context = {} return HttpResponse(template.render(context, request)) def home(request): template = loader.get_template('home.html') context = {} return HttpResponse(template.render(context, request)) # Slug Solution #====================================== def mode(request, slugIn=None): model = Mode #print(slugIn) slugOut = Mode.objects.all() #print(slugOut) template = loader.get_template(slugIn+'.html') context = {"slug": slugOut} return HttpResponse(template.render(context, request)) # Hardcoded Old Solution #====================================== # def m2E_C(request): # template = loader.get_template('2E_C.html') # context = {} # return HttpResponse(template.render(context, request)) def m400M_C(request): template = loader.get_template('4M.html') context = {} return HttpResponse(template.render(context, request)) def m6E_C(request): template = loader.get_template('6E_C.html') context = {} return HttpResponse(template.render(context, request)) def m6M_C(request): ... 
And here is the html template for the page I am having issues with: <!DOCTYPE html <html><head> <style type="text/css"> a { color:#005fce; text-decoration:none; font-weight:normal;} c { color:#000000; text-decoration:none; font-weight:bold;} </style> </head> <body><div class="content"> <h1>C SIDE</h1><br> <a href="{% url 'home' %}">Home</a><br><br> <c>TYPES:</c><br> <a href="{% url '2E_C' slug.slug %}">2E</a><br> <a href="{% url '4M_C' %}">4M</a><br> <a href="{% url '6E_C' %}">6E</a><br> <a href="{% url '6M_C' %}">6M</a><br> <a href="{% url '8E_C' %}">8E</a><br> I think my issue is that I can’t seem to pass the input slug field from the URL to the template file, to then know where to go. Or maybe I need to use a model to save the slug, but I couldn't figure out how to do this either.
[ "Your url is named mode so you need to change your link to use {% url ‘mode’ ‘2E_C’ %}.\n" ]
[ 0 ]
[]
[]
[ "django", "django_templates", "django_urls", "django_views", "slug" ]
stackoverflow_0074658879_django_django_templates_django_urls_django_views_slug.txt
Q: How to achieve time-sensitive flag in APNS JSON output via FCM? iOS distinguishes between messages by UNNotificationInterruptionLevel. I would like to achieve that messages sent via FCM have the time-sensitive interruption-level. Is this equivalent to just sending messages in FCM with high priority? Unfortunately it's not super clear to me from looking at the docs. A: The interruption level is automatically handled by system, not by FCM. That's different than the high priority. You should be able to use it as it is by following Apple's documentation. FCM supports passing down the interruption-level in the payload. A: I achieved this by having this payload in my firebase console file: var message = { notification: { title: "Notification Title", body: `${initiatedUsername} sent you message`, }, "data": { "target_exec":"messaging" }, "apns": { "payload": { "aps": { "alert": { "title": "Notification Title", "body": `${initiatedUsername} sent you a message` }, "badge": 1, "sound": "default", "interruption-level": "time-sensitive" } } }, token: fcmToken, }; You can change the token attribute to topic aswell, if you'd rather like to send a message to a topic. Hope this might help somebody out here!
How to achieve time-sensitive flag in APNS JSON output via FCM?
iOS distinguishes between messages by UNNotificationInterruptionLevel. I would like to achieve that messages sent via FCM have the time-sensitive interruption-level. Is this equivalent to just sending messages in FCM with high priority? Unfortunately it's not super clear to me from looking at the docs.
[ "The interruption level is automatically handled by system, not by FCM. That's different than the high priority.\nYou should be able to use it as it is by following Apple's documentation. FCM supports passing down the interruption-level in the payload.\n", "I achieved this by having this payload in my firebase console file:\nvar message = {\n notification: {\n title: \"Notification Title\",\n body: `${initiatedUsername} sent you message`,\n },\n \"data\": {\n \"target_exec\":\"messaging\"\n },\n \"apns\": {\n \"payload\": {\n \"aps\": {\n \"alert\": {\n \"title\": \"Notification Title\",\n \"body\": `${initiatedUsername} sent you a message`\n },\n \"badge\": 1,\n \"sound\": \"default\",\n \"interruption-level\": \"time-sensitive\"\n }\n }\n },\n token: fcmToken,\n };\n\nYou can change the token attribute to topic aswell, if you'd rather like to send a message to a topic.\nHope this might help somebody out here!\n" ]
[ 2, 0 ]
[]
[]
[ "firebase", "firebase_cloud_messaging" ]
stackoverflow_0072991293_firebase_firebase_cloud_messaging.txt
Q: What could be the problem in To-do app using Streamlit in Python? to-dos.py import streamlit as st import get_todos todos = get_todos.getTodos() def add_todos(): todo1 = st.session_state["new_todo"] + "\n" todos.append(todo1) get_todos.writeTodos(todos) st.title("My TO-DO App") ... get_todos.py def getTodos(): with open("docs.txt", "r") as file: data = file.readlines() return data def writeTodos(adder): with open("docs.txt", "w") as file: file.writelines(adder) I built a TO-DO app in Python using Streamlit. While running it from the terminal, it continuously shows 'FileNotFoundError' even though the file actually exists. What could be the problem? A syntax error, or a logical error? Error Traceback: My project structure is shown below: A: The main purpose of virtual environments or venv is to manage settings and dependencies of a particular project regardless of other Python projects. The virtualenv tool comes bundled with PyCharm, so the user doesn't need to install it. It is always found in the project directory named venv, which should be a dedicated folder designed to fulfil a specific purpose. Note: No external file(s) should be added to the venv folder. This clearly indicates that your structure is not appropriate. I recommend you visit the PyCharm project structure documentation to read more about configuring virtual environments. You should restructure your project properly. It might feel like a pain in the neck, but I bet it's worth it. Attention: All the external files you added to venv should instead be in your samik folder, which is your project's main folder.
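Separately from the project-structure advice above, a common way to rule out working-directory problems with a relative path like "docs.txt" is to resolve it next to the script itself; this is a sketch of how get_todos.py could do that, not the original code:

import os

# Resolve docs.txt relative to this file, so it is found no matter
# which directory `streamlit run` was started from
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
DOCS_PATH = os.path.join(BASE_DIR, "docs.txt")

def getTodos():
    with open(DOCS_PATH, "r") as file:
        return file.readlines()

def writeTodos(adder):
    with open(DOCS_PATH, "w") as file:
        file.writelines(adder)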
What could be the problem in To-do app using Streamlit in Python?
to-dos.py import streamlit as st import get_todos todos = get_todos.getTodos() def add_todos(): todo1 = st.session_state["new_todo"] + "\n" todos.append(todo1) get_todos.writeTodos(todos) st.title("My TO-DO App") ... get_todos.py def getTodos(): with open("docs.txt", "r") as file: data = file.readlines() return data def writeTodos(adder): with open("docs.txt", "w") as file: file.writelines(adder) I built a TO-DO App in Python using streamlit While performing this task in terminal, it's continuously showing 'FileNotFoundError' meanwhile the file actually exist. What could be the problem ? Any syntax error? or Logical Error? Error Traceback: My project structure is shown below:
[ "The main purpose of virtual environments or venv is to manage settings and dependencies of a particular project regardless of other Python projects. virtualenv tool comes bundled with PyCharm, so the user doesn't need to install it. It is always found in the project directory named venv which should be a unique folder design to fulfil a specific purpose.\nNote: No external file(s) should be added to the venv folder.\nThis clearly indicates that your structure is not appropriate. I will recommend you visit pycharm project structure to read more about configuration of virtual environments. You should restructure your project properly. It might feel like a pain on the neck but I bet it worth it.\nAttention:\nAll the external files you added to venv should rather be in your samik folder which is your project main folder.\n" ]
[ 1 ]
[]
[]
[ "contextmanager", "filenotfounderror", "pycharm", "python", "streamlit" ]
stackoverflow_0074652347_contextmanager_filenotfounderror_pycharm_python_streamlit.txt
Q: ORA-22992: cannot use LOB locators selected from remote tables even without using BLOB column I have a table which has one BLOB datatype column, and I am using this table via a dblink to insert into a table in my schema. But I don't use this BLOB column in my insert query at all, yet I still get the error: ORA-22992: cannot use LOB locators selected from remote tables Below is my insert query: insert /*+ materialize */ into TOP.BKR ( SECTANRFFT, REFBEREICH ) select SECTANRFFT, REFBEREICH FROM ( select txtr.SECTANRFFT SECTANRFFT, txtr.REFBEREICH REFBEREICH from TOP.TB_ODS_LAST_DATE TB_ODS_LAST_DATE INNER JOIN BKP.ZORP@"TECD.POR" txtr ON 1=1 where (1=1) and (TB_ODS_LAST_DATE.TABLE_NAME = 'SRTPO') and (to_date('19700101','yyyymmdd') + (((txtr.DAT/60)/60)/24) > TB_ODS_LAST_DATE.LAST_DATE) ) FRT A: You can add a driving_site hint: insert ... select /*+ driving_site(TB_ODS_LAST_DATE) */ ... That will ask Oracle to send the local data to the remote site to do the join, rather than pulling the remote data (which may include the BLOB, at least nominally) to your local site. The hint is described in the documentation.
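Applied to the statement from the question, the hint goes in the SELECT part of the INSERT ... SELECT. One caveat: as I read Oracle's documentation, the hint's argument names the table at the site that should drive the join, so pushing the join to the remote database would mean naming the remote alias txtr rather than the local table; treat that choice, and how far the optimizer honours the hint inside DML, as assumptions to verify rather than something stated in the answer:

INSERT INTO TOP.BKR (SECTANRFFT, REFBEREICH)
SELECT /*+ driving_site(txtr) */
       txtr.SECTANRFFT,
       txtr.REFBEREICH
FROM   TOP.TB_ODS_LAST_DATE TB_ODS_LAST_DATE
       INNER JOIN BKP.ZORP@"TECD.POR" txtr ON 1 = 1
WHERE  TB_ODS_LAST_DATE.TABLE_NAME = 'SRTPO'
AND    to_date('19700101', 'yyyymmdd') + (((txtr.DAT/60)/60)/24) > TB_ODS_LAST_DATE.LAST_DATE;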
ORA-22992: cannot use LOB locators selected from remote tables even without using BLOB column
I have table which has one BLOB datatype column and i am using this table via dblink to insert into my schema table. But i dont use this BLOB datatype column in my insert query at all still i am getting error : ORA-22992: cannot use LOB locators selected from remote tables Below is my insert query: insert /*+ materialize */ into TOP.BKR ( SECTANRFFT, REFBEREICH ) select SECTANRFFT, REFBEREICH FROM ( select txtr.SECTANRFFT SECTANRFFT, txtr.REFBEREICH REFBEREICH from TOP.TB_ODS_LAST_DATE TB_ODS_LAST_DATE INNER JOIN BKP.ZORP@"TECD.POR" txtr ON 1=1 where (1=1) and (TB_ODS_LAST_DATE.TABLE_NAME = 'SRTPO') and (to_date('19700101','yyyymmdd') + (((txtr.DAT/60)/60)/24) > TB_ODS_LAST_DATE.LAST_DATE) ) FRT
[ "You can add a driving_site hint:\ninsert ...\nselect /*+ driving_site(TB_ODS_LAST_DATE) */\n...\n\nThat will ask Oracle to send the local data to the remote site to do the join, rather than pulling the remote data (which may include the BLOB, at least nominally) to your local site.\nThe hint is described in the documentation.\n" ]
[ 1 ]
[]
[]
[ "blob", "dblink", "lob", "oracle", "sql" ]
stackoverflow_0074658620_blob_dblink_lob_oracle_sql.txt
Q: GitLab - is it possible to get the job id of a run executed 4 days ago through the GitLab API We have a scheduled pipeline that executes once every day. With this setup, I would like to know if it's possible to get the id of the successful job from the run that was executed 4 days ago through the GitLab API.
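If each daily scheduled run shows up as its own pipeline, one way to approach this is to filter the pipelines list by date and then list that pipeline's jobs. This is only a sketch: the updated_after/updated_before parameters and the scope filter are from memory and should be checked against the API documentation for your GitLab version, and the host, project id and dates are placeholders:

# Successful pipelines updated in a one-day window four days ago
curl --header "PRIVATE-TOKEN: <your_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/pipelines?status=success&updated_after=2022-11-27T00:00:00Z&updated_before=2022-11-28T00:00:00Z"

# Job ids of one of the pipelines returned above, restricted to successful jobs
curl --header "PRIVATE-TOKEN: <your_token>" \
  "https://gitlab.example.com/api/v4/projects/<project_id>/pipelines/<pipeline_id>/jobs?scope[]=success"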
GitLab - is it possible to get the job id of a run executed 4 days ago through the GitLab API
We have a scheduled pipeline that executes once every day. With this setup, I would like to know if it's possible to get the id of the successful job from the run that was executed 4 days ago through the GitLab API.
[ "Apparently not: the GitLab Pipeline API only list, for a single pipeline, its latest state.\nIt does not list past execution occurrences.\nAnd no audit events would reflect those past executions either.\n", "From documentation: In GitLab 13.9 and later, pipeline API endpoint can include retried jobs in the response with include_retried set to true.\n" ]
[ 0, 0 ]
[]
[]
[ "continuous_integration", "gitlab", "gitlab_api", "gitlab_ci" ]
stackoverflow_0073564334_continuous_integration_gitlab_gitlab_api_gitlab_ci.txt
Q: Automating Facebook using Selenium Webdriver driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print("opened facebook") I am using this code to open Facebook and the page opens. driver.find_element(By.NAME, "email").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "pass").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "login").click() sleep(1) Then log in to my account. After successful login, my chrome window closes in a few seconds. Can someone tell me why? Full Code: import time import os import wget import shutil from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager try: usr="" pwd="" driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print ("Opened facebook") driver.find_element(By.NAME, "email").send_keys(usr) print ("Email Id entered") sleep(1) driver.find_element(By.NAME, "pass").send_keys(pwd) print ("Password entered") driver.find_element(By.NAME,"login").click() sleep(100) except Exception as e: print("The error raised is: ", e) A: The program will exit after executing code. Add below statements to keep program running: time.sleep(300) #300 seconds i.e. 5 minutes # close the browser window driver.quit() A: This will fix your problem from selenium.webdriver.chrome.options import Options # Stop Selenium from closing browser automatically chrome_options = Options() chrome_options.add_experimental_option("detach", True) # Chrome driver to run chrome driver = webdriver.Chrome(options=chrome_options)
Automating Facebook using Selenium Webdriver
driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print("opened facebook") I am using this code to open Facebook and the page opens. driver.find_element(By.NAME, "email").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "pass").send_keys("xxx") sleep(1) driver.find_element(By.NAME, "login").click() sleep(1) Then log in to my account. After successful login, my chrome window closes in a few seconds. Can someone tell me why? Full Code: import time import os import wget import shutil from time import sleep from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.chrome.options import Options from webdriver_manager.chrome import ChromeDriverManager try: usr="" pwd="" driver = webdriver.Chrome('chromedriver') driver.get('https://www.facebook.com/') print ("Opened facebook") driver.find_element(By.NAME, "email").send_keys(usr) print ("Email Id entered") sleep(1) driver.find_element(By.NAME, "pass").send_keys(pwd) print ("Password entered") driver.find_element(By.NAME,"login").click() sleep(100) except Exception as e: print("The error raised is: ", e)
[ "The program will exit after executing code. Add below statements to keep program running:\ntime.sleep(300) #300 seconds i.e. 5 minutes\n\n# close the browser window\ndriver.quit()\n\n", "This will fix your problem\nfrom selenium.webdriver.chrome.options import Options\n\n# Stop Selenium from closing browser automatically\nchrome_options = Options()\nchrome_options.add_experimental_option(\"detach\", True)\n\n# Chrome driver to run chrome\ndriver = webdriver.Chrome(options=chrome_options)\n\n" ]
[ 1, 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver" ]
stackoverflow_0073245674_python_selenium_selenium_webdriver.txt
Q: MongoServerError: Cannot create new indexes on existing collection xxxx.xxxx in a multi-document transaction I'm using migrate-mongo and get the following error when trying to run some migrations. ~$ npx migrate-mongo up ERROR: Could not migrate up 20221201223533-add-indexes.js: Cannot create new indexes on existing collection xxx.xxx in a multi-document transaction. MongoServerError: Cannot create new indexes on existing collection xxx.xxx in a multi-document transaction. Googling the error comes up with zero results. Similar questions about "Cannot do xyz in a multi-document transaction" seem to suggest the collections need to exist before running the transaction. I've checked and double checked to make sure the collections exist, but still get the above error. My code looks something like this: exports.up = async (db, client) => { const fooCollection = db.collection('foo') const barCollection = db.collection('bar') const session = client.startSession(); try { await session.withTransaction(async () => { await fooCollection.createIndex({ foo: 1 }, { session }); await barCollection.createIndex({ bar: 1 }, { session }); }); } finally { await session.endSession(); } } exports.down = async (db, client) => { ... } How can I create a new index within a transaction? A: Thanks @user20042973 for pointing out the docs that explain the issue. When creating an index inside a transaction [1], the index to create must be on either: a non-existing collection. The collection is created as part of the operation. a new empty collection created earlier in the same transaction. Although this is disappointing, apparently it is not possible to create an index within a transaction on an existing, non-empty collection.
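Given that restriction, the practical workaround in migrate-mongo is simply to create the indexes outside of any transaction; createIndex on its own is safe to re-run because it is a no-op when an identical index already exists. A sketch, assuming the same collection names as the question and the driver's default index names for the down migration:

exports.up = async (db, client) => {
  // Index builds on existing, non-empty collections cannot run inside a
  // multi-document transaction, so run them as plain operations instead.
  await db.collection('foo').createIndex({ foo: 1 });
  await db.collection('bar').createIndex({ bar: 1 });
};

exports.down = async (db, client) => {
  // 'foo_1' and 'bar_1' are the default names for these single-field indexes
  await db.collection('foo').dropIndex('foo_1');
  await db.collection('bar').dropIndex('bar_1');
};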
MongoServerError: Cannot create new indexes on existing collection xxxx.xxxx in a multi-document transaction
I'm using migrate-mongo and get the following error when trying to run some migrations. ~$ npx migrate-mongo up ERROR: Could not migrate up 20221201223533-add-indexes.js: Cannot create new indexes on existing collection xxx.xxx in a multi-document transaction. MongoServerError: Cannot create new indexes on existing collection xxx.xxx in a multi-document transaction. Googling the error comes up with zero results. Similar questions about "Cannot do xyz in a multi-document transaction" seem to suggest the collections need to exist before running the transaction. I've checked and double checked to make sure the collections exist, but still get the above error. My code looks something like this: exports.up = async (db, client) => { const fooCollection = db.collection('foo') const barCollection = db.collection('bar') const session = client.startSession(); try { await session.withTransaction(async () => { await fooCollection.createIndex({ foo: 1 }, { session }); await barCollection.createIndex({ bar: 1 }, { session }); }); } finally { await session.endSession(); } } exports.down = async (db, client) => { ... } How can I create a new index within a transaction?
[ "Thanks @user20042973 for pointing out the docs that explain the issue.\n\nWhen creating an index inside a transaction [1], the index to create must be on either:\n\na non-existing collection. The collection is created as part of the operation.\n\na new empty collection created earlier in the same transaction.\n\n\n\nAlthough this is disappointing, apparently it is not possible to create an index within a transaction on an existing, non-empty collection.\n" ]
[ 0 ]
[]
[]
[ "migrate_mongo", "mongodb" ]
stackoverflow_0074649505_migrate_mongo_mongodb.txt
Q: SPLIT_PART() entire column in redshift How do I split_part an entire column (word by word)? I am trying to split the column "answer" into each word. E.g. this is my dataset: name answer Kate i love cheese Tom i love bacon & eggs This is what I want: name split_answer Kate i Kate love Kate cheese Tom i Tom love Tom bacon Tom & Tom eggs This is my query: SELECT name, split_part(answer, ' ') AS split_answer FROM table A: Split_part() can take 3 arguments - string, delimiter, and part number. So you need to cross join with a numbers table that has all the integer values from 1 to the max number of parts in any string. You can generate this numbers table with a recursive CTE, or some people like to just keep a numbers table on hand. The query will look something like (untested and off the cuff): with recursive nums(n) as ( select 1 as n union all select n + 1 from nums where n < (select LEN(answer) - LEN(REPLACE(answer, ' ', '')) + 1 from table) ) select name, split_part(answer, ' ', n) AS split_answer FROM table cross join nums where split_answer <> '';
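Since the answer flags its query as untested, here is a slightly tightened sketch of the same idea. The only changes are an aggregate in the recursive bound so the subquery returns a single value, and repeating the split_part expression in the WHERE clause instead of relying on the column alias; table and column names are the ones from the question:

with recursive nums(n) as (
    select 1 as n
    union all
    select n + 1
    from nums
    where n < (select max(len(answer) - len(replace(answer, ' ', ''))) + 1 from table)
)
select name,
       split_part(answer, ' ', n) as split_answer
from table
cross join nums
where split_part(answer, ' ', n) <> '';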
SPLIT_PART() entire column in redshift
How do I split_part an entire column(word-by-word)? I am trying to split the column "answer" into each word. eg this is my dataset: name answer Kate i love cheese Tom i love bacon & eggs this is what i want: name split_answer Kate i Kate love Kate cheese Tom i Tom love Tom bacon Tom & Tom eggs this is my query: SELECT name, split_part(answer, ' ') AS split_asnwer FROM table
[ "Split_part() can take 3 arguments - string, delimiter, and part number.\nSo you need to cross join with a numbers table that has all the integer values from 1 to the max number of parts in any string. You can generuerate this numbers table with a recursive CTE or some like to just have a numbers table on hand.\nThe query will look something like (untested and off the cuff):\nwith recursive nums(n) as (\n select 1 as n\n union all\n select n + 1 \n from nums \n where n < (select LEN(answer) - LEN(REPLACE(answer, ' ', '')) + 1 from table)\n)\nselect name, split_part(answer, ' ', n) AS split_answer \nFROM table\ncross join nums\nwhere split_answer <> '';\n\n" ]
[ 1 ]
[]
[]
[ "amazon_redshift", "amazon_web_services", "split", "sql" ]
stackoverflow_0074656677_amazon_redshift_amazon_web_services_split_sql.txt
Q: Odoo 15 Search non-stored compute value that depends on search_count of many2one field I'm trying to display a view with certain products that have multiple BoM's. I've created a computed field that labels which records should be displayed. I'm trying to create a search function so that the records in interest can be displayed as a filter but am having trouble creating the function. Currently trying to append record.id's of interest into a list and returning the list within the search domain but that is not working. Any help would be much appreciated. Please see code below and thanks in advance! I tried the following code but it returns an empty data list. I think there's something wrong with how I'm getting the id of the current record and appending it to the list that is returned. class products_ppa_bom_check(models.Model): _inherit = ['product.template'] ppa_multi_bom = fields.Selection([ ('true', 'True'), ('false', 'False'), ('na', 'Not Applicable')], string="PPA Multi BOM Check", compute='_compute_ppa_multi_bom', search='_search_ppa_multi_bom') def _compute_ppa_multi_bom(self): for record in self: count = record.env['mrp.bom'].search_count(['|', ('product_tmpl_id', '=', record.id), ('byproduct_ids.product_id.product_tmpl_id', '=', record.id)]) if (count > 1) and ('PPA' in str(record.default_code)): record.ppa_multi_bom = 'true' elif (count == 1) and ('PPA' in str(record.default_code)): record.ppa_multi_bom = 'false' else: record.ppa_multi_bom = 'na' def _search_ppa_multi_bom(self, operator, value): ids = [] for record in self: count = record.env['mrp.bom'].search_count(['|', ('product_tmpl_id', '=', record.id), ('byproduct_ids.product_id.product_tmpl_id', '=', record.id)]) if (count > 1) and ('PPA' in str(record.default_code)): ids = ids.append(record.id) return[('id', 'in', ids)] A: If you want to use a filter in products with = operator, you can use the below code which I already tested: You can use bom_count field rather than use search_count method from odoo import api, fields, models, _ class products_ppa_bom_check(models.Model): _inherit = ['product.template'] ppa_multi_bom = fields.Selection([ ('true', 'True'), ('false', 'False'), ('na', 'Not Applicable')], string="PPA Multi BOM Check", compute='_compute_ppa_multi_bom', search='_search_ppa_multi_bom') def _compute_ppa_multi_bom(self): for record in self: if (record.bom_count > 1) and ('PPA' in str(record.default_code)): record.ppa_multi_bom = 'true' elif (record.bom_count == 1) and ('PPA' in str(record.default_code)): record.ppa_multi_bom = 'false' else: record.ppa_multi_bom = 'na' def _search_ppa_multi_bom(self, operator, value): true_ids = self.env['product.template'].search([]).filtered( lambda x: x.bom_count > 1 and 'PPA' in str(x.default_code)).ids false_ids = self.env['product.template'].search([]).filtered( lambda x: x.bom_count == 1 and x.default_code and 'PPA' in x.default_code).ids if value == 'true': ids = true_ids elif value == 'false': ids = false_ids else: all_ids = self.env['product.template'].search([]).ids ids = list(set(all_ids) - set(true_ids + false_ids)) return [('id', 'in', ids)] To add filter to search view: <record id="product_template_search_view_inherit_bom" model="ir.ui.view"> <field name="name">product.template.search.inherit.bom</field> <field name="model">product.template</field> <field name="inherit_id" ref="product.product_template_search_view"/> <field name="arch" type="xml"> <xpath expr="//filter[@name='consumable']" position="after"> <separator/> <filter string="Multi BOM" 
name="ppa_multi_bom" domain="[('ppa_multi_bom', '=', 'true')]"/> <filter string="One BOM" name="ppa_one_bom" domain="[('ppa_multi_bom', '=', 'false')]"/> <filter string="NA BOM" name="ppa_na_bom" domain="[('ppa_multi_bom', '=', 'na')]"/> </xpath> </field> </record>
Odoo 15 Search non-stored compute value that depends on search_count of many2one field
I'm trying to display a view with certain products that have multiple BoM's. I've created a computed field that labels which records should be displayed. I'm trying to create a search function so that the records in interest can be displayed as a filter but am having trouble creating the function. Currently trying to append record.id's of interest into a list and returning the list within the search domain but that is not working. Any help would be much appreciated. Please see code below and thanks in advance! I tried the following code but it returns an empty data list. I think there's something wrong with how I'm getting the id of the current record and appending it to the list that is returned. class products_ppa_bom_check(models.Model): _inherit = ['product.template'] ppa_multi_bom = fields.Selection([ ('true', 'True'), ('false', 'False'), ('na', 'Not Applicable')], string="PPA Multi BOM Check", compute='_compute_ppa_multi_bom', search='_search_ppa_multi_bom') def _compute_ppa_multi_bom(self): for record in self: count = record.env['mrp.bom'].search_count(['|', ('product_tmpl_id', '=', record.id), ('byproduct_ids.product_id.product_tmpl_id', '=', record.id)]) if (count > 1) and ('PPA' in str(record.default_code)): record.ppa_multi_bom = 'true' elif (count == 1) and ('PPA' in str(record.default_code)): record.ppa_multi_bom = 'false' else: record.ppa_multi_bom = 'na' def _search_ppa_multi_bom(self, operator, value): ids = [] for record in self: count = record.env['mrp.bom'].search_count(['|', ('product_tmpl_id', '=', record.id), ('byproduct_ids.product_id.product_tmpl_id', '=', record.id)]) if (count > 1) and ('PPA' in str(record.default_code)): ids = ids.append(record.id) return[('id', 'in', ids)]
[ "If you want to use a filter in products with = operator, you can use the below code which I already tested:\nYou can use bom_count field rather than use search_count method\nfrom odoo import api, fields, models, _\n\n\nclass products_ppa_bom_check(models.Model):\n _inherit = ['product.template']\n\n ppa_multi_bom = fields.Selection([\n ('true', 'True'),\n ('false', 'False'),\n ('na', 'Not Applicable')],\n string=\"PPA Multi BOM Check\", compute='_compute_ppa_multi_bom',\n search='_search_ppa_multi_bom')\n\n def _compute_ppa_multi_bom(self):\n for record in self:\n if (record.bom_count > 1) and ('PPA' in str(record.default_code)):\n record.ppa_multi_bom = 'true'\n elif (record.bom_count == 1) and ('PPA' in str(record.default_code)):\n record.ppa_multi_bom = 'false'\n else:\n record.ppa_multi_bom = 'na'\n\n def _search_ppa_multi_bom(self, operator, value):\n true_ids = self.env['product.template'].search([]).filtered(\n lambda x: x.bom_count > 1 and 'PPA' in str(x.default_code)).ids\n false_ids = self.env['product.template'].search([]).filtered(\n lambda x: x.bom_count == 1 and x.default_code and 'PPA' in x.default_code).ids\n if value == 'true':\n ids = true_ids\n elif value == 'false':\n ids = false_ids\n else:\n all_ids = self.env['product.template'].search([]).ids\n ids = list(set(all_ids) - set(true_ids + false_ids))\n return [('id', 'in', ids)]\n\nTo add filter to search view:\n <record id=\"product_template_search_view_inherit_bom\" model=\"ir.ui.view\">\n <field name=\"name\">product.template.search.inherit.bom</field>\n <field name=\"model\">product.template</field>\n <field name=\"inherit_id\" ref=\"product.product_template_search_view\"/>\n <field name=\"arch\" type=\"xml\">\n <xpath expr=\"//filter[@name='consumable']\" position=\"after\">\n <separator/>\n <filter string=\"Multi BOM\" name=\"ppa_multi_bom\" domain=\"[('ppa_multi_bom', '=', 'true')]\"/>\n <filter string=\"One BOM\" name=\"ppa_one_bom\" domain=\"[('ppa_multi_bom', '=', 'false')]\"/>\n <filter string=\"NA BOM\" name=\"ppa_na_bom\" domain=\"[('ppa_multi_bom', '=', 'na')]\"/>\n </xpath>\n </field>\n </record>\n\n" ]
[ 0 ]
[]
[]
[ "odoo", "odoo_15" ]
stackoverflow_0074619086_odoo_odoo_15.txt
Q: css-grid/flex float legend inside fieldset I'm working on a form where the client wants a series of fieldsets (groups) in a row. Semantically and for accessibility, it makes sense to use a <legend> from the parent <fieldset> as the label for the row, by floating it left and declaring display: grid; or display: flex; on the parent <fieldset>. This seems to work well enough for everything but Safari, which doesn't honour the float or treat the <legend> as a grid element. Is this a known bug/interoperable difference between browsers, or am I doing something wrong? I can't seem to find any reference to this on the webkit bug tracker, and my [insert-fav-search]-foo is failing me. These other two SE questions seem to be related, but do not address my issue: Can't position HTML legend tag with CSS Grid Grid layout on <fieldset>... Bug on chrome? Reduced test case: https://codepen.io/ShonenKnife/full/dyJXERG Chrome/Edge FF Safari 14 OSX & Safari 15 iOS <form class="v1-o-form"> <div class="v1-o-form__wrap"> <fieldset id="SECTOR1" class="v1-o-inputGroup"> <legend>Test input group 1</legend> <fieldset id="SECTOR1__gdp" class="v1-o-inputGroup__section"> <input type="number" id="gdp-SECTOR1" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> </fieldset> <fieldset id="SECTOR1__components" class="v1-o-inputGroup__components"> <input type="number" id="household-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="govt-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="investment-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="export-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="import-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> </fieldset> </fieldset> </div> </form> <style> :root { --focus-highlight: 0 0 0 0.15rem rgb(13 110 253 / 25%); --focus-inset-highlight: inset 0 0 0 0.15rem rgb(13 110 253 / 25%); --row-bg: #f2f2f2; --section-bg: #dadbe5; --pusherBlock-section-bg: #9398c9; --input-bg: white; } .v1-o-form { display: grid; flex-direction: column; flex-wrap: nowrap; justify-content: center; width: 100vw; } .v1-o-form__wrap { display: block; margin: 0 auto; } .v1-o-inputGroup { display: grid; grid-template-columns: min-content min-content min-content; max-width: 80vw; border: none; margin: 0.3rem 0; padding: 0; background: var(--row-bg); } .v1-o-inputGroup legend { float: left; margin: auto 0; padding: 0 0.5rem; font-size: 0.9em; min-width: 10em; } .v1-o-inputGroup__components, .v1-o-inputGroup__section { display: grid; background-color: var(--section-bg); padding: 0.3rem; border: 0; align-items: center; } .v1-o-inputGroup__components { grid-template-columns: 1fr 1fr 1fr 1fr 1fr; } input[type="number"] { text-align: center; margin: 0 3px; padding: 0; } </style> A: I eventually found a reference to a related Webkit issue: https://bugs.webkit.org/show_bug.cgi?id=220793 The bug appears to effect floats of the <legend> element more broadly and not specifically grid/flexbox layouts specifically. The approach we took was to replace the legend element, with a div, whose content we could control via grid/flexbox. 
For accessibility, we use aria-labelledby to maintain the descriptive properties of what should really be a <legend>. A: Leverage display: contents. legend { display: contents; } fieldset { display: inline-flex; gap: 16px; border: none; padding: 0; margin: 0; } <fieldset> <legend> <span>Name</span> </legend> <input type="text" aria-label="First Name" /> <input type="text" aria-label="Last Name" /> </fieldset> Result: This will also allow you to be A11y compliant.
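For the first answer's workaround, a minimal sketch of what the swap could look like, reusing the class names from the question; the id on the label div and the new class name are made up for illustration:

<fieldset id="SECTOR1" class="v1-o-inputGroup" aria-labelledby="sector1-label">
  <div id="sector1-label" class="v1-o-inputGroup__label">Test input group 1</div>
  <!-- the nested input fieldsets stay exactly as in the question -->
</fieldset>

<style>
  /* the old legend rules move to the label div; no float is needed because
     the div is already a regular grid item of the fieldset */
  .v1-o-inputGroup__label { margin: auto 0; padding: 0 0.5rem; font-size: 0.9em; min-width: 10em; }
</style>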
css-grid/flex float legend inside fieldset
I'm working on a form where the client wants a series of fieldsets (groups) in a row. Semantically and for accessibility, it makes sense to use a <legend> from the parent <fieldset> as the label for the row, by floating it left and declaring display: grid; or display: flex; on the parent <fieldset>. This seems to work well enough for everything but Safari, which doesn't honour the float or treat the <legend> as a grid element. Is this a known bug/interoperable difference between browsers, or am I doing something wrong? I can't seem to find any reference to this on the webkit bug tracker, and my [insert-fav-search]-foo is failing me. These other two SE questions seem to be related, but do not address my issue: Can't position HTML legend tag with CSS Grid Grid layout on <fieldset>... Bug on chrome? Reduced test case: https://codepen.io/ShonenKnife/full/dyJXERG Chrome/Edge FF Safari 14 OSX & Safari 15 iOS <form class="v1-o-form"> <div class="v1-o-form__wrap"> <fieldset id="SECTOR1" class="v1-o-inputGroup"> <legend>Test input group 1</legend> <fieldset id="SECTOR1__gdp" class="v1-o-inputGroup__section"> <input type="number" id="gdp-SECTOR1" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> </fieldset> <fieldset id="SECTOR1__components" class="v1-o-inputGroup__components"> <input type="number" id="household-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="govt-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="investment-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="export-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> <input type="number" id="import-SECTOR1" name="" value="0.0" class="v1-a-inputSpinner__input" placeholder="–.–" min="-200" max="200" step="0.1" /> </fieldset> </fieldset> </div> </form> <style> :root { --focus-highlight: 0 0 0 0.15rem rgb(13 110 253 / 25%); --focus-inset-highlight: inset 0 0 0 0.15rem rgb(13 110 253 / 25%); --row-bg: #f2f2f2; --section-bg: #dadbe5; --pusherBlock-section-bg: #9398c9; --input-bg: white; } .v1-o-form { display: grid; flex-direction: column; flex-wrap: nowrap; justify-content: center; width: 100vw; } .v1-o-form__wrap { display: block; margin: 0 auto; } .v1-o-inputGroup { display: grid; grid-template-columns: min-content min-content min-content; max-width: 80vw; border: none; margin: 0.3rem 0; padding: 0; background: var(--row-bg); } .v1-o-inputGroup legend { float: left; margin: auto 0; padding: 0 0.5rem; font-size: 0.9em; min-width: 10em; } .v1-o-inputGroup__components, .v1-o-inputGroup__section { display: grid; background-color: var(--section-bg); padding: 0.3rem; border: 0; align-items: center; } .v1-o-inputGroup__components { grid-template-columns: 1fr 1fr 1fr 1fr 1fr; } input[type="number"] { text-align: center; margin: 0 3px; padding: 0; } </style>
[ "I eventually found a reference to a related Webkit issue: https://bugs.webkit.org/show_bug.cgi?id=220793\nThe bug appears to effect floats of the <legend> element more broadly and not specifically grid/flexbox layouts specifically.\nThe approach we took was to replace the legend element, with a div, whose content we could control via grid/flexbox. For accessibility, we use aria-labelledby to maintain descriptive properties of what should really be a <legend>.\n", "Leverage on display: contents.\n\n\nlegend {\n display: contents;\n}\n\nfieldset {\n display: inline-flex;\n gap: 16px;\n border: none;\n padding: 0;\n margin: 0;\n}\n<fieldset>\n <legend>\n <span>Name</span>\n </legend>\n <input type=\"text\" aria-label=\"First Name\" />\n <input type=\"text\" aria-label=\"Last Name\" />\n</fieldset>\n\n\n\nResult:\n\nThis will also allow you to be A11y compliant:\n\n" ]
[ 1, 0 ]
[]
[]
[ "css", "css_float", "css_grid", "forms", "html" ]
stackoverflow_0071560733_css_css_float_css_grid_forms_html.txt
Q: Hackerrank small triangles, large triangles problem Link here is link to question. In this question we have to sort the triangles based on their areas and then print out the dimensions of triangle in sorted format. #include <stdio.h> #include <stdlib.h> #include <math.h> struct triangle { int a; int b; int c; }; typedef struct triangle triangle; void sort_by_area(triangle* tr, int n) { /** * Sort an array a of the length n */ double arr[n+1]; triangle temp; for(int i=0;i<n;i++) { double area_2,p; p=((tr[i].a+tr[i].b+tr[i].c)/2.0); area_2=(p*(p-tr[i].a)*(p-tr[i].b)*(p-tr[i].c)); arr[i]=area_2; } for(int i=0;i<n-1;i++) { for(int j=i+1;j<n;j++) { if(arr[i]>arr[j]) { temp=tr[i]; tr[i]=tr[j]; tr[j]=temp; } } } } int main() { int n; scanf("%d", &n); triangle *tr = malloc(n * sizeof(triangle)); for (int i = 0; i < n; i++) { scanf("%d%d%d", &tr[i].a, &tr[i].b, &tr[i].c); } sort_by_area(tr, n); for (int i = 0; i < n; i++) { printf("%d %d %d\n", tr[i].a, tr[i].b, tr[i].c); } return 0; } This is my code. Only sample testcase is getting passed with this code and all else testcases are wrong. can someone pls help me with this? A: Thanks a lot guys for your help. finally with lots of debugging, i understood the error in my code. while swapping the structure array i was forgetting to swap the area array as well.Here is the code for this: #include <stdio.h> #include <stdlib.h> #include <math.h> struct triangle { int a; int b; int c; }; typedef struct triangle triangle; #define longlong int; void swap(double *array,int i,int j) { int temp=array[i]; array[i]=array[j]; array[j]=temp; } void sort_by_area(triangle* tr, int n) { double arr[n]; triangle temp; for(int i=0;i<n;i++) { double area_2,p; p=((tr[i].a+tr[i].b+tr[i].c)/2.0); area_2=(p*(p-tr[i].a)*(p-tr[i].b)*(p-tr[i].c)); arr[i]=area_2; /*storing square of area in a different array of different types of triangle*/ } for(int i=0;i<n;i++) { for(int j=i+1;j<n;j++) { if(arr[i]>arr[j]) /*using bubble sort comparing the area of subsequent triangles and if area of subsequent triangles are found to be greater than the previous one then swapping the areas as well as the structure array*/ { swap(arr,i,j); temp=tr[i]; tr[i]=tr[j]; tr[j]=temp; } } } } int main() { int n; scanf("%d", &n); triangle *tr = malloc(n * sizeof(triangle)); for (int i = 0; i < n; i++) { scanf("%d%d%d", &tr[i].a, &tr[i].b, &tr[i].c); } sort_by_area(tr, n); for (int i = 0; i < n; i++) { printf("%d %d %d\n", tr[i].a, tr[i].b, tr[i].c); } return 0; }
Hackerrank small triangles, large triangles problem
Link here is link to question. In this question we have to sort the triangles based on their areas and then print out the dimensions of triangle in sorted format. #include <stdio.h> #include <stdlib.h> #include <math.h> struct triangle { int a; int b; int c; }; typedef struct triangle triangle; void sort_by_area(triangle* tr, int n) { /** * Sort an array a of the length n */ double arr[n+1]; triangle temp; for(int i=0;i<n;i++) { double area_2,p; p=((tr[i].a+tr[i].b+tr[i].c)/2.0); area_2=(p*(p-tr[i].a)*(p-tr[i].b)*(p-tr[i].c)); arr[i]=area_2; } for(int i=0;i<n-1;i++) { for(int j=i+1;j<n;j++) { if(arr[i]>arr[j]) { temp=tr[i]; tr[i]=tr[j]; tr[j]=temp; } } } } int main() { int n; scanf("%d", &n); triangle *tr = malloc(n * sizeof(triangle)); for (int i = 0; i < n; i++) { scanf("%d%d%d", &tr[i].a, &tr[i].b, &tr[i].c); } sort_by_area(tr, n); for (int i = 0; i < n; i++) { printf("%d %d %d\n", tr[i].a, tr[i].b, tr[i].c); } return 0; } This is my code. Only sample testcase is getting passed with this code and all else testcases are wrong. can someone pls help me with this?
[ "Thanks a lot guys for your help. finally with lots of debugging, i understood the error in my code. while swapping the structure array i was forgetting to swap the area array as well.Here is the code for this:\n#include <stdio.h>\n#include <stdlib.h>\n#include <math.h>\n\nstruct triangle\n{\n int a;\n int b;\n int c;\n};\n\ntypedef struct triangle triangle;\n#define longlong int;\nvoid swap(double *array,int i,int j)\n{\n int temp=array[i];\n array[i]=array[j];\n array[j]=temp; \n}\nvoid sort_by_area(triangle* tr, int n) {\n\n double arr[n];\n\n triangle temp;\n \n for(int i=0;i<n;i++)\n {\n double area_2,p;\n p=((tr[i].a+tr[i].b+tr[i].c)/2.0);\n area_2=(p*(p-tr[i].a)*(p-tr[i].b)*(p-tr[i].c));\n arr[i]=area_2;\n/*storing square of area in a different array of different types of triangle*/\n }\n for(int i=0;i<n;i++)\n {\n for(int j=i+1;j<n;j++)\n {\n if(arr[i]>arr[j])\n/*using bubble sort comparing the area of subsequent triangles and if area of subsequent triangles are found to be greater than the previous one then swapping the areas as well as the structure array*/\n { swap(arr,i,j);\n temp=tr[i];\n tr[i]=tr[j];\n tr[j]=temp;\n }\n \n }\n }\n}\n\nint main()\n{\n int n;\n scanf(\"%d\", &n);\n triangle *tr = malloc(n * sizeof(triangle));\n for (int i = 0; i < n; i++) {\n scanf(\"%d%d%d\", &tr[i].a, &tr[i].b, &tr[i].c);\n }\n sort_by_area(tr, n);\n for (int i = 0; i < n; i++) {\n printf(\"%d %d %d\\n\", tr[i].a, tr[i].b, tr[i].c);\n }\n return 0;\n}\n\n" ]
[ 0 ]
[]
[]
[ "c", "enums", "struct", "structure" ]
stackoverflow_0074623921_c_enums_struct_structure.txt
Q: How to implement separate routing for sidebar in react router v6.4 I would like to use new React router loader features, but I cannot figure out how to convert it in my application. I used Route in multiple components but since new ReactProvider needs whole tree of routes in prop I don't know how to solve it. So far I use v6 BrowserRouter with nested routes rendering Layout on the top level. <BrowserRouter> <Routes> <Route element={<Layout />}> <Route index element={<Dashboard />} /> <Route path="section-a/part1" element={<SomePage />} /> <Route path="section-a/part2" element={<AnotherPage />} /> <Route path="section-b" element={<DifferentPage />} /> </Route> </Routes> </BrowserRouter> In Layout I render SideBar which have static part which I want to appear at all times and don't want it to rerender when changing route, and dynamic part which have it's own routing and behavior when changing route. const Layout = () => ( <> <AppBar/> <SideBar> <StaticNavigation/> <DynamicSubnavigation /> </SideBar> <Outlet/> </> ); const DynamicSubnavigation = () => ( <Routes> <Route element={<Layout />}> <Route index element={<DashboardSideBar />} /> <Route path="section-a" element={<SectionASideBar />} /> <Route path="section-b" element={<SectionBSideBar />} /> </Route> </Routes> ); I don't see how I can implement this using new RouterProvider and createBrowserRouter. In docs they say that we need only one router but since it is passed as prop to ReactProvider it doesn´t help me. Only solution I see is to rerender whole SideBar and add it to nested routes, but I really don't wanna do that. Rerendering of the static part of the SideBar would cause me different problems. A: Ah, I see now that you're using React Router v6. In that case, the code you provided should work as-is. It looks like you're using the Route component to define your routes, and the Routes component to render them. This is the correct way to do it in React Router v6. The Routes component expects to receive one or more Route components as its children, and it will render the component that corresponds to the current URL. In your code, you have a top-level Route component that renders the Layout component, and a nested Route component inside the Layout component that renders the DynamicSubnavigation component. This should work as expected. I'm not sure what you mean when you say that you need to "rerender whole SideBar and add it to nested routes". If you want to avoid re-rendering the static part of the SideBar when the route changes, you can simply move that part outside of the Route component. For example: const Layout = () => ( <> <AppBar/> <SideBar> <StaticNavigation/> <Routes> <Route element={<Layout />}> <Route index element={<DashboardSideBar />} /> <Route path="section-a" element={<SectionASideBar />} /> <Route path="section-b" element={<SectionBSideBar />} /> </Route> </Routes> </SideBar> <Outlet/> </> ); This way, the static part of the SideBar will only be rendered once, and the dynamic part will be re-rendered whenever the route changes.
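Regarding the original question about the v6.4 data APIs specifically, one hedged sketch of how the tree could be expressed with createBrowserRouter/RouterProvider, while keeping the static part of the sidebar mounted, is to let each child route contribute its sidebar through a route handle and read it with useMatches; the component names are the ones from the question, and the handle/sidebar convention is my own rather than anything prescribed by the library:

import { createBrowserRouter, RouterProvider, Outlet, useMatches } from "react-router-dom";

// Renders the sidebar element contributed by the deepest matched route, if any
const DynamicSubnavigation = () => {
  const matches = useMatches();
  const match = [...matches].reverse().find((m) => m.handle?.sidebar);
  return match ? match.handle.sidebar : null;
};

const Layout = () => (
  <>
    <AppBar />
    <SideBar>
      <StaticNavigation />      {/* stays mounted across navigations */}
      <DynamicSubnavigation />  {/* swaps per route via route handles */}
    </SideBar>
    <Outlet />
  </>
);

const router = createBrowserRouter([
  {
    element: <Layout />,
    children: [
      { index: true, element: <Dashboard />, handle: { sidebar: <DashboardSideBar /> } },
      { path: "section-a/part1", element: <SomePage />, handle: { sidebar: <SectionASideBar /> } },
      { path: "section-a/part2", element: <AnotherPage />, handle: { sidebar: <SectionASideBar /> } },
      { path: "section-b", element: <DifferentPage />, handle: { sidebar: <SectionBSideBar /> } },
    ],
  },
]);

const App = () => <RouterProvider router={router} />;

Loaders can then be added per child route without touching the sidebar wiring.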
How to implement separate routing for sidebar in react router v6.4
I would like to use new React router loader features, but I cannot figure out how to convert it in my application. I used Route in multiple components but since new ReactProvider needs whole tree of routes in prop I don't know how to solve it. So far I use v6 BrowserRouter with nested routes rendering Layout on the top level. <BrowserRouter> <Routes> <Route element={<Layout />}> <Route index element={<Dashboard />} /> <Route path="section-a/part1" element={<SomePage />} /> <Route path="section-a/part2" element={<AnotherPage />} /> <Route path="section-b" element={<DifferentPage />} /> </Route> </Routes> </BrowserRouter> In Layout I render SideBar which have static part which I want to appear at all times and don't want it to rerender when changing route, and dynamic part which have it's own routing and behavior when changing route. const Layout = () => ( <> <AppBar/> <SideBar> <StaticNavigation/> <DynamicSubnavigation /> </SideBar> <Outlet/> </> ); const DynamicSubnavigation = () => ( <Routes> <Route element={<Layout />}> <Route index element={<DashboardSideBar />} /> <Route path="section-a" element={<SectionASideBar />} /> <Route path="section-b" element={<SectionBSideBar />} /> </Route> </Routes> ); I don't see how I can implement this using new RouterProvider and createBrowserRouter. In docs they say that we need only one router but since it is passed as prop to ReactProvider it doesn´t help me. Only solution I see is to rerender whole SideBar and add it to nested routes, but I really don't wanna do that. Rerendering of the static part of the SideBar would cause me different problems.
[ "Ah, I see now that you're using React Router v6. In that case, the code you provided should work as-is. It looks like you're using the Route component to define your routes, and the Routes component to render them. This is the correct way to do it in React Router v6.\nThe Routes component expects to receive one or more Route components as its children, and it will render the component that corresponds to the current URL. In your code, you have a top-level Route component that renders the Layout component, and a nested Route component inside the Layout component that renders the DynamicSubnavigation component. This should work as expected.\nI'm not sure what you mean when you say that you need to \"rerender whole SideBar and add it to nested routes\". If you want to avoid re-rendering the static part of the SideBar when the route changes, you can simply move that part outside of the Route component. For example:\nconst Layout = () => (\n <>\n <AppBar/>\n <SideBar>\n <StaticNavigation/>\n <Routes>\n <Route element={<Layout />}>\n <Route index element={<DashboardSideBar />} />\n <Route path=\"section-a\" element={<SectionASideBar />} />\n <Route path=\"section-b\" element={<SectionBSideBar />} /> \n </Route>\n </Routes>\n </SideBar>\n <Outlet/>\n </>\n);\n\nThis way, the static part of the SideBar will only be rendered once, and the dynamic part will be re-rendered whenever the route changes.\n" ]
[ 0 ]
[]
[]
[ "nested_routes", "react_router", "reactjs" ]
stackoverflow_0074658863_nested_routes_react_router_reactjs.txt
Q: Creating subset in CPLEX I'm working on a formulation related to the Concrete Delivery Problem. I implement the formulation in CPLEX but face some problems with the construct ion of subset of the sets. For example, I need to construct a set of all subsets of artificial nodes of the graph. What I do is the following: int st=...; {int} StartingLocation = asSet(1..st); int ft=...; {int} FinishingLocation = asSet(1..ft); int m2=1; // amount of customer type 2 int m4=1; // amount of customer type 4 {int} Customer2_NOTdefinedCustomers_NONSingleSource = asSet(1..m2); {int} Customer4_DefinedCustomers_NONsingleSource = asSet(m2..m4+m2); // Number of actual Customer nodes {int} Customers = Customer2_NOTdefinedCustomers_NONSingleSource union Customer4_DefinedCustomers_NONsingleSource ; float Deadline[Customers]= ...; // deadline float demand[Customers]= ...; int TimeLag[Customers] = ...; float DefaultLoadSize = ...; float LoadPerTrip = ... ; int definedORnot [Customers] = ...; int NofArtificialCustomers = 0; int maxNumber_ofDeliveries[x in Customers]; execute { for( var l in Customers){ if (definedORnot == 1) maxNumber_ofDeliveries[l] = Opl.ftoi(Opl.ceil(DefaultLoadSize/demand[l])); else maxNumber_ofDeliveries[l] = Opl.ftoi(Opl.ceil(demand[l]/LoadPerTrip)); // calculating the number of artificial customer nodes NofArtificialCustomers = NofArtificialCustomers + maxNumber_ofDeliveries[l]; } } int NoCustomers = card(Customers); // the number of Actual nodes for customers {int} SetofArtCustomerNodes[i in 1..NoCustomers] = asSet(1..maxNumber_ofDeliveries[i]); // CREATING THE ARTIFICIAL CUSTOMER NODES tuple artificialnodes {int i ; int j;} // B: the set of artificial customer nodes {artificialnodes} Setof_ArtificialCustomers = {<i,j> | i in Customers, j in SetofArtCustomerNodes[i in Customers]}; int d = ...; {int} Depots = asSet(m4+m2..d+m4+m2); {int} SetofArtDepotNodes = asSet(1..NofArtificialCustomers); // CREATING THE ARTIFICIAL DEPOT NODES {artificialnodes} Setof_ArtificialDepots = {<i,j> | i in Depots, j in SetofArtDepotNodes}; int NoV = ...; // amount of vehicles {int} Vehicles = asSet(1..NoV); // set of vehicles // CREATING THE ARTIFICIAL STARTING LOCATION NODES {artificialnodes} Setof_StartingLocations = {<i,j> | i in Vehicles, j in Customers: j==1}; // CREATING THE ARTIFICIAL FINISHING LOCATION NODES {artificialnodes} Setof_FinishingLocations = {<i,j> | i in Vehicles, j in Customers: j==1}; {artificialnodes} N0 = Setof_StartingLocations union Setof_FinishingLocations union Setof_ArtificialDepots union Setof_ArtificialCustomers; A: All subsets of a set in example https://github.com/AlexFleischerParis/howtowithopl/blob/master/powerset.mod {string} s={"A","B","C","D"}; range r=1.. ftoi(pow(2,card(s))); {string} s2 [k in r] = {i | i in s: ((k div (ftoi(pow(2,(ord(s,i))))) mod 2) == 1)}; execute { writeln(s2); }
Creating subset in CPLEX
I'm working on a formulation related to the Concrete Delivery Problem. I implement the formulation in CPLEX but face some problems with the construct ion of subset of the sets. For example, I need to construct a set of all subsets of artificial nodes of the graph. What I do is the following: int st=...; {int} StartingLocation = asSet(1..st); int ft=...; {int} FinishingLocation = asSet(1..ft); int m2=1; // amount of customer type 2 int m4=1; // amount of customer type 4 {int} Customer2_NOTdefinedCustomers_NONSingleSource = asSet(1..m2); {int} Customer4_DefinedCustomers_NONsingleSource = asSet(m2..m4+m2); // Number of actual Customer nodes {int} Customers = Customer2_NOTdefinedCustomers_NONSingleSource union Customer4_DefinedCustomers_NONsingleSource ; float Deadline[Customers]= ...; // deadline float demand[Customers]= ...; int TimeLag[Customers] = ...; float DefaultLoadSize = ...; float LoadPerTrip = ... ; int definedORnot [Customers] = ...; int NofArtificialCustomers = 0; int maxNumber_ofDeliveries[x in Customers]; execute { for( var l in Customers){ if (definedORnot == 1) maxNumber_ofDeliveries[l] = Opl.ftoi(Opl.ceil(DefaultLoadSize/demand[l])); else maxNumber_ofDeliveries[l] = Opl.ftoi(Opl.ceil(demand[l]/LoadPerTrip)); // calculating the number of artificial customer nodes NofArtificialCustomers = NofArtificialCustomers + maxNumber_ofDeliveries[l]; } } int NoCustomers = card(Customers); // the number of Actual nodes for customers {int} SetofArtCustomerNodes[i in 1..NoCustomers] = asSet(1..maxNumber_ofDeliveries[i]); // CREATING THE ARTIFICIAL CUSTOMER NODES tuple artificialnodes {int i ; int j;} // B: the set of artificial customer nodes {artificialnodes} Setof_ArtificialCustomers = {<i,j> | i in Customers, j in SetofArtCustomerNodes[i in Customers]}; int d = ...; {int} Depots = asSet(m4+m2..d+m4+m2); {int} SetofArtDepotNodes = asSet(1..NofArtificialCustomers); // CREATING THE ARTIFICIAL DEPOT NODES {artificialnodes} Setof_ArtificialDepots = {<i,j> | i in Depots, j in SetofArtDepotNodes}; int NoV = ...; // amount of vehicles {int} Vehicles = asSet(1..NoV); // set of vehicles // CREATING THE ARTIFICIAL STARTING LOCATION NODES {artificialnodes} Setof_StartingLocations = {<i,j> | i in Vehicles, j in Customers: j==1}; // CREATING THE ARTIFICIAL FINISHING LOCATION NODES {artificialnodes} Setof_FinishingLocations = {<i,j> | i in Vehicles, j in Customers: j==1}; {artificialnodes} N0 = Setof_StartingLocations union Setof_FinishingLocations union Setof_ArtificialDepots union Setof_ArtificialCustomers;
[ "All subsets of a set in example https://github.com/AlexFleischerParis/howtowithopl/blob/master/powerset.mod\n{string} s={\"A\",\"B\",\"C\",\"D\"};\nrange r=1.. ftoi(pow(2,card(s)));\n{string} s2 [k in r] = {i | i in s: ((k div (ftoi(pow(2,(ord(s,i))))) mod 2) == 1)};\n\nexecute\n{\n writeln(s2);\n}\n\n" ]
[ 0 ]
[]
[]
[ "cdp", "cplex" ]
stackoverflow_0074658780_cdp_cplex.txt
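The OPL snippet works because each integer k is read as a bit mask selecting which elements belong to subset number k. A small Python sketch of the same bit-mask enumeration, given only to make that mapping explicit (it reuses the toy elements from the answer):

    s = ["A", "B", "C", "D"]
    subsets = []
    for k in range(2 ** len(s)):            # one bit pattern per subset
        subsets.append([e for i, e in enumerate(s) if (k >> i) & 1])
    print(len(subsets))                     # 16, including the empty subset
    print(subsets[:4])                      # [[], ['A'], ['B'], ['A', 'B']]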
Q: How to see injected scripts in Chrome Developer tools I am injecting a partial into a page using $().html(content). Part of the partial is JavaScript code in an inline script block I need to inspect. When I look in the Sources tab in the Chrome Developer Tools it doesn't show the injected content. All it shows is the original source. Is there a way to gain access to the JavaScript? Update I am using Google Chrome 21.0.1180.77 but I also have Google Chrome Canary installed. A: I don't have a Sources tab (Elements, Resources, Network, Scripts, Timeline, Profiles, Audits, Console). The Elements tab always reflects the current state of the DOM, so it will show any injected scripts. EDIT: This appears to be wrong. There's a Chrome issue about this: http://code.google.com/p/chromium/issues/detail?id=95352 You can add a specially formed comment to the injected JavaScript code, and it will then show up in the Scripts tab (but it still doesn't show up in the Elements tab, for whatever reason): //@ sourceUrl=whatever.js A: How to see injected snippets: In order for injected code to be visible, you will need to add a sourceURL comment to the top of the evaluated script, like one of the following: //# sourceURL=//domain/file.js //# sourceURL=http://domain/file.js //# sourceURL=https://domain/file.js //# sourceURL=//domain/file Note, that without the // hinting at the protocol and some domain immediately following, then the injected snippet will not show up under sources by default. How to see injected snippets without a protocol and domain: Continuing, with just a file name, like so: //# sourceURL=file.js You will have to change the source settings by unchecking Group by folder. See image.
How to see injected scripts in Chrome Developer tools
I am injecting a partial into a page using $().html(content). Part of the partial is JavaScript code in an inline script block I need to inspect. When I look in the Sources tab in the Chrome Developer Tools it doesn't show the injected content. All it shows is the original source. Is there a way to gain access to the JavaScript? Update I am using Google Chrome 21.0.1180.77 but I also have Google Chrome Canary installed.
[ "I don't have a Sources tab (Elements, Resources, Network, Scripts, Timeline, Profiles, Audits, Console).\nThe Elements tab always reflects the current state of the DOM, so it will show any injected scripts. EDIT: This appears to be wrong.\nThere's a Chrome issue about this: http://code.google.com/p/chromium/issues/detail?id=95352\nYou can add a specially formed comment to the injected JavaScript code, and it will then show up in the Scripts tab (but it still doesn't show up in the Elements tab, for whatever reason):\n//@ sourceUrl=whatever.js\n\n", "How to see injected snippets:\nIn order for injected code to be visible, you will need to add a sourceURL comment to the top of the evaluated script, like one of the following:\n//# sourceURL=//domain/file.js\n//# sourceURL=http://domain/file.js\n//# sourceURL=https://domain/file.js\n//# sourceURL=//domain/file\n\nNote, that without the // hinting at the protocol and some domain immediately following, then the injected snippet will not show up under sources by default.\nHow to see injected snippets without a protocol and domain:\nContinuing, with just a file name, like so:\n//# sourceURL=file.js\n\nYou will have to change the source settings by unchecking Group by folder. See image.\n\n" ]
[ 2, 0 ]
[]
[]
[ "google_chrome_devtools" ]
stackoverflow_0011904209_google_chrome_devtools.txt
Q: Camelot - detecting hyperlinks within table I am using Camelot to extract tables from PDF files. While this works very well, it extracts the text only, it does not extract the hyperlinks that are embedded in the tables. Is there a way of using Camelot or a similar package to extract table text and hyperlinks embedded within tables? Thanks! A: most applications such as tablular text extractors simply scrape the visible surface as plain text and actually hyperlinks are often stored elsewhere in the pdf which is NOT a WTSIWYG word processor file. So, if you're lucky you can extract the co-ordinates (without their page allocation like this) C:\Users\lz02\Downloads>type "7 - 20 November 2022 (003).pdf" |findstr /i "(http" <</Subtype/Link/Rect[ 69.75 299.75 280.63 313.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/complaint/) >>/StructParent 5>> <</Subtype/Link/Rect[ 219.37 120.85 402.47 133.06] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 1>> <</Subtype/Link/Rect[ 146.23 108.64 329.33 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 2>> <</Subtype/Link/Rect[ 412.48 108.64 525.55 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 3>> <</Subtype/Link/Rect[ 69.75 96.434 95.085 108.64] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 4>> <</Subtype/Link/Rect[ 69.75 683.75 317.08 697.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/comp-reports/ecu/) >>/StructParent 7>> <</Subtype/Link/Rect[ 463.35 604.46 500.24 617.89] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/reporting-scotland-bbc-one-scotland-20-december-2021) >>/StructParent 8>> <</Subtype/Link/Rect[ 463.35 577.11 500.24 590.54] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/book-of-the-week-preventable-radio-4-19-april-2022) >>/StructParent 9>> <</Subtype/Link/Rect[ 463.35 522.4 521.41 535.83] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/the-one-show-bbc-one-6-october-2022) >>/StructParent 10>> <</Subtype/Link/Rect[ 463.35 495.04 518.04 508.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-6pm-bbc-one-22-september-2022) >>/StructParent 11>> <</Subtype/Link/Rect[ 463.35 469.04 518.04 482.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-1030am-bbc-news-channel-20-september-2022) >>/StructParent 12>> NOTE, the random order, to find which page they belong to you need to traceback to their /StructParent ## A: Yes, it's possible. Camelot, by default, only extracts the text from PDF files, but it also provides options to extract additional information, such as the position and size of text blocks, as well as the coordinates of the lines and curves that define the table cells. With this information, it is possible to identify the table cells that contain hyperlinks, and to extract the text and the hyperlink destination for each of these cells. 
Here is an example of how this can be done using Camelot: import camelot # Load the PDF file pdf = camelot.read_pdf("example.pdf") # Extract the tables, including their coordinates and text blocks tables = pdf.extract(flavor="lattice", tables=None, spreadsheets=None, str_columns_map=None, columns=None, suppress_stdout=False) # Iterate over the tables for table in tables: # Iterate over the rows in the table for row in table.data: # Iterate over the cells in the row for cell in row: # If the cell contains a hyperlink, extract the text and the hyperlink destination if cell.text.startswith("http"): text = cell.text hyperlink = cell.bbox[0] print(text, hyperlink)
Camelot - detecting hyperlinks within table
I am using Camelot to extract tables from PDF files. While this works very well, it extracts the text only; it does not extract the hyperlinks that are embedded in the tables. Is there a way of using Camelot or a similar package to extract table text and hyperlinks embedded within tables? Thanks!
[ "most applications such as tablular text extractors simply scrape the visible surface as plain text and actually hyperlinks are often stored elsewhere in the pdf which is NOT a WTSIWYG word processor file.\nSo, if you're lucky you can extract the co-ordinates (without their page allocation like this)\nC:\\Users\\lz02\\Downloads>type \"7 - 20 November 2022 (003).pdf\" |findstr /i \"(http\"\n<</Subtype/Link/Rect[ 69.75 299.75 280.63 313.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/complaint/) >>/StructParent 5>>\n<</Subtype/Link/Rect[ 219.37 120.85 402.47 133.06] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 1>>\n<</Subtype/Link/Rect[ 146.23 108.64 329.33 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/handle-complaint/) >>/StructParent 2>>\n<</Subtype/Link/Rect[ 412.48 108.64 525.55 120.85] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 3>>\n<</Subtype/Link/Rect[ 69.75 96.434 95.085 108.64] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.ofcom.org.uk/tv-radio-and-on-demand/broadcast-codes/broadcast-code) >>/StructParent 4>>\n<</Subtype/Link/Rect[ 69.75 683.75 317.08 697.18] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(http://www.bbc.co.uk/complaints/comp-reports/ecu/) >>/StructParent 7>>\n<</Subtype/Link/Rect[ 463.35 604.46 500.24 617.89] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/reporting-scotland-bbc-one-scotland-20-december-2021) >>/StructParent 8>>\n<</Subtype/Link/Rect[ 463.35 577.11 500.24 590.54] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/book-of-the-week-preventable-radio-4-19-april-2022) >>/StructParent 9>>\n<</Subtype/Link/Rect[ 463.35 522.4 521.41 535.83] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/the-one-show-bbc-one-6-october-2022) >>/StructParent 10>>\n<</Subtype/Link/Rect[ 463.35 495.04 518.04 508.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-6pm-bbc-one-22-september-2022) >>/StructParent 11>>\n<</Subtype/Link/Rect[ 463.35 469.04 518.04 482.47] /BS<</W 0>>/F 4/A<</Type/Action/S/URI/URI(https://www.bbc.co.uk/contact/ecu/news-1030am-bbc-news-channel-20-september-2022) >>/StructParent 12>>\n\nNOTE, the random order, to find which page they belong to you need to traceback to their /StructParent ##\n", "Yes, it's possible. Camelot, by default, only extracts the text from PDF files, but it also provides options to extract additional information, such as the position and size of text blocks, as well as the coordinates of the lines and curves that define the table cells. 
With this information, it is possible to identify the table cells that contain hyperlinks, and to extract the text and the hyperlink destination for each of these cells.\nHere is an example of how this can be done using Camelot:\nimport camelot\n\n# Load the PDF file\npdf = camelot.read_pdf(\"example.pdf\")\n\n# Extract the tables, including their coordinates and text blocks\ntables = pdf.extract(flavor=\"lattice\", tables=None, spreadsheets=None,\n str_columns_map=None, columns=None, suppress_stdout=False)\n\n# Iterate over the tables\nfor table in tables:\n # Iterate over the rows in the table\n for row in table.data:\n # Iterate over the cells in the row\n for cell in row:\n # If the cell contains a hyperlink, extract the text and the hyperlink destination\n if cell.text.startswith(\"http\"):\n text = cell.text\n hyperlink = cell.bbox[0]\n print(text, hyperlink)\n\n" ]
[ 0, 0 ]
[]
[]
[ "pdf", "python", "python_camelot" ]
stackoverflow_0074655135_pdf_python_python_camelot.txt
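The first answer's observation, that the hyperlinks live in the PDF's /Link annotations rather than in the text layer Camelot reads, can also be checked from Python. A hedged sketch using the pypdf package (the package choice, the file name, and the assumption that each link is a /Link annotation carrying a /URI action are illustrative, not part of Camelot's API):

    from pypdf import PdfReader

    reader = PdfReader("example.pdf")                  # hypothetical input file
    for page_number, page in enumerate(reader.pages, start=1):
        for annot in page.get("/Annots") or []:
            obj = annot.get_object()
            if obj.get("/Subtype") == "/Link":
                action = obj.get("/A")
                if action and action.get("/URI"):
                    # /Rect holds the link rectangle; it can be matched against
                    # the cell coordinates Camelot reports for a parsed table.
                    print(page_number, obj.get("/Rect"), action["/URI"])

Matching each /Rect against the cell coordinates Camelot reports is then a plain geometry check.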
Q: How to find the length of a state in a react component There is a function as follows: async function validate(value) { try { const result = await schema.validate(value, { abortEarly: false }); console.log(result); return result; } catch (error) { console.log(error.errors); setError({errors:error.errors}); console.log(setError.length); } } In line number 8, the errors are updated in the state without any problem, but when I want to find the length of the state setError array, it returns the value of 1, even though the value of the created array is greater than 1. Is there a solution to find the state length in functional components in react? A: you doing wrong man, you are assigning {} to error state and then you are checking the length of setError method, I don't know why are you doing this try like this const Component =()=>{ const [errors, setError]=useState([]); async function validate(value) { try { const result = await schema.validate(value, { abortEarly: false }); console.log(result); return result; } catch (error) { console.log(errors.errors); setError([...errors,{errors:error.errors}]); console.log(errors.length); // here you will get previous length of errors array } } // console here console.log("ERROR LENGTH",errors.length); return( //UI stuff ) } A: Your code has 2 problems: set state functions in React are async. It means that your values are not updated immediately. So you should declare a new variable and then log the new variable or write your console.log in useEffect. You should log the length of the error state not the set state function. So this would help you. Let's assume you have this state: const [error, setError] = useState({errors: []}); Using another variable: async function validate(value) { try { const result = await schema.validate(value, { abortEarly: false }); console.log(result); return result; } catch (error) { console.log(error.errors); const newError = {errors: error.errors}; setError(newError); console.log(newError.errors.length); } } With useEffect: async function validate(value) { try { const result = await schema.validate(value, { abortEarly: false }); console.log(result); return result; } catch (error) { console.log(error.errors); setError({errors: error.errors}); } useEffect(() => { console.log(error.errors.length); }, [error]); }
How to find the length of a state in a react component
There is a function as follows: async function validate(value) { try { const result = await schema.validate(value, { abortEarly: false }); console.log(result); return result; } catch (error) { console.log(error.errors); setError({errors:error.errors}); console.log(setError.length); } } In line number 8, the errors are updated in the state without any problem, but when I want to find the length of the state setError array, it returns the value of 1, even though the value of the created array is greater than 1. Is there a solution to find the state length in functional components in react?
[ "you doing wrong man, you are assigning {} to error state and then you are checking the length of setError method, I don't know why are you doing this\ntry like this\nconst Component =()=>{\nconst [errors, setError]=useState([]);\n\n async function validate(value) { \n try {\n const result = await schema.validate(value, { abortEarly: false });\n console.log(result);\n return result;\n } catch (error) {\n console.log(errors.errors);\n setError([...errors,{errors:error.errors}]);\n console.log(errors.length); // here you will get previous length of errors array\n } \n }\n// console here \nconsole.log(\"ERROR LENGTH\",errors.length);\n\nreturn(\n//UI stuff\n)\n}\n\n", "Your code has 2 problems:\n\nset state functions in React are async. It means that your values are not updated immediately. So you should declare a new variable and then log the new variable or write your console.log in useEffect.\n\nYou should log the length of the error state not the set state function.\n\n\nSo this would help you.\nLet's assume you have this state:\nconst [error, setError] = useState({errors: []});\n\nUsing another variable:\nasync function validate(value) { \n try {\n const result = await schema.validate(value, { abortEarly: false });\n console.log(result);\n return result;\n } catch (error) {\n console.log(error.errors);\n const newError = {errors: error.errors};\n setError(newError);\n console.log(newError.errors.length);\n } \n}\n\nWith useEffect:\nasync function validate(value) { \n try {\n const result = await schema.validate(value, { abortEarly: false });\n console.log(result);\n return result;\n } catch (error) {\n console.log(error.errors);\n setError({errors: error.errors});\n } \n\n useEffect(() => {\n console.log(error.errors.length);\n }, [error]);\n}\n\n" ]
[ 0, 0 ]
[]
[]
[ "react_functional_component", "react_hooks", "reactjs" ]
stackoverflow_0074653795_react_functional_component_react_hooks_reactjs.txt
Q: Google Sheets Script - How to make it copy & paste 2 rows base on the date that corresponds today's date in another cell I am working on a Lunch sheet Project for the school I work at & I need to copy data each month based on lunch & breakfast that is reimbursable. I have the sheets working but I need to copy & paste the data to another sheet so I can collect all the data for the month. I have a script working to Copy & Paste the data but I need it to only copy & paste the data for the 2 rows that corresponds with today's date & not the whole page. This is a link to the helper copy of the sheet I made: https://docs.google.com/spreadsheets/d/1cp8Y36tlzq9n4_7jX1QDhYV3AeF3MT-OO1GeZLR3uwM/edit#gid=28052480 If you can help it would be greatly appreciated. This is the code I tried but it doesn't only do the 2 columns that have the Date: I just need it to Copy & Paste the 2 Columns "Lunch" & "Breakfast" based on today's date in Row 2 & do it on a time trigger at 3:20 PM CST US time every day. This was as far as I got, I need it to roll with today's date every day, "N7" was the starting point but I don't know how to get it to only copy & paste the 2 columns under today's date every day. function runsies() { var ss = SpreadsheetApp.openById("1cp8Y36tlzq9n4_7jX1QDhYV3AeF3MT-OO1GeZLR3uwM"); var sheet = ss.getSheetByName("Nutrition Data"); var rows = sheet.getDataRange().getValues(); var dates = rows[1]; var column; var today = new Date(); for (var i = 15; i < dates.length; i++) { if (dates[i].getDate() == today.getDate() && dates[i].getMonth() == today.getMonth()) { column = i + 2; break; } } var sheet2 = ss.getSheetByName("Copy of Nutrition Data"); sheet.getRange(7,14,sheet.getLastRow(), column).copyTo(sheet2.getRange("N7"), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); } A: Try this: function runsies() { const ss = SpreadsheetApp.getActive(); const sheet = ss.getSheetByName('Nutrition Data'); const targetSheet = ss.getSheetByName('Copy of Nutrition Data'); const timezone = ss.getSpreadsheetTimeZone(); const dateStrings = sheet.getRange('A2:2') .getValues() .flat() .map(date => { if (Object.prototype.toString.call(date) === '[object Date]') { return Utilities.formatDate(date, timezone, 'yyyy-MM-dd') } }); const todayString = Utilities.formatDate(new Date(), timezone, 'yyyy-MM-dd'); const columnIndex = dateStrings.indexOf(todayString) + 1; if (!columnIndex) { throw new Error(`Cannot find today's date (${todayString}).`); } sheet.getRange(7, columnIndex, sheet.getLastRow() - 7 + 1, 2) .copyTo(targetSheet.getRange(7, columnIndex), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); }
Google Sheets Script - How to make it copy & paste 2 rows based on the date that corresponds to today's date in another cell
I am working on a Lunch sheet Project for the school I work at & I need to copy data each month based on lunch & breakfast that is reimbursable. I have the sheets working but I need to copy & paste the data to another sheet so I can collect all the data for the month. I have a script working to Copy & Paste the data but I need it to only copy & paste the data for the 2 rows that corresponds with today's date & not the whole page. This is a link to the helper copy of the sheet I made: https://docs.google.com/spreadsheets/d/1cp8Y36tlzq9n4_7jX1QDhYV3AeF3MT-OO1GeZLR3uwM/edit#gid=28052480 If you can help it would be greatly appreciated. This is the code I tried but it doesn't only do the 2 columns that have the Date: I just need it to Copy & Paste the 2 Columns "Lunch" & "Breakfast" based on today's date in Row 2 & do it on a time trigger at 3:20 PM CST US time every day. This was as far as I got, I need it to roll with today's date every day, "N7" was the starting point but I don't know how to get it to only copy & paste the 2 columns under today's date every day. function runsies() { var ss = SpreadsheetApp.openById("1cp8Y36tlzq9n4_7jX1QDhYV3AeF3MT-OO1GeZLR3uwM"); var sheet = ss.getSheetByName("Nutrition Data"); var rows = sheet.getDataRange().getValues(); var dates = rows[1]; var column; var today = new Date(); for (var i = 15; i < dates.length; i++) { if (dates[i].getDate() == today.getDate() && dates[i].getMonth() == today.getMonth()) { column = i + 2; break; } } var sheet2 = ss.getSheetByName("Copy of Nutrition Data"); sheet.getRange(7,14,sheet.getLastRow(), column).copyTo(sheet2.getRange("N7"), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false); }
[ "Try this:\nfunction runsies() {\n const ss = SpreadsheetApp.getActive();\n const sheet = ss.getSheetByName('Nutrition Data');\n const targetSheet = ss.getSheetByName('Copy of Nutrition Data');\n const timezone = ss.getSpreadsheetTimeZone();\n const dateStrings = sheet.getRange('A2:2')\n .getValues()\n .flat()\n .map(date => {\n if (Object.prototype.toString.call(date) === '[object Date]') {\n return Utilities.formatDate(date, timezone, 'yyyy-MM-dd')\n }\n });\n const todayString = Utilities.formatDate(new Date(), timezone, 'yyyy-MM-dd');\n const columnIndex = dateStrings.indexOf(todayString) + 1;\n if (!columnIndex) {\n throw new Error(`Cannot find today's date (${todayString}).`);\n }\n sheet.getRange(7, columnIndex, sheet.getLastRow() - 7 + 1, 2)\n .copyTo(targetSheet.getRange(7, columnIndex), SpreadsheetApp.CopyPasteType.PASTE_VALUES, false);\n}\n\n" ]
[ 0 ]
[]
[]
[ "google_apps_script", "google_sheets" ]
stackoverflow_0074658738_google_apps_script_google_sheets.txt
Q: What files are used for classpath? I have some classes in Eclipse which cannot be resolved to a type. I know that classes can be in .class, .jar, .par, .zip files. Are there any other file types that I have to look for? Or is there anything eclipse how I could make Eclipse recognize the classes? In my general understanding, once I have found these files and added them to the classpath, the Eclipse should be able to recognize them. A: Actually my colleague gave me an answer: If the class cannot be found or resolved to a type within Eclipse, this is only the problem with build path. And to the build path, the files should be added of type as I have said: class, jar, par, zip; no other suffix is accepted. A: Maybe I am misunderstanding your question, so please excuse me if I am. Type resolution errors are often due to incorrect imports or misspellings. Ill give an example: public class Foo extends Component{ //If you forgot to import java.awt.*; You would receive the error "Component cannot be resolved to a type" } Also if you misspelled an extension you would get the same error.When you create a class in eclipse you need to create a new file for that class. You should also post your code so we know exactly what you're talking about. I don't think your error has anything to do with the classpath. :) A: Class path entry could be jar, zip or directory (source) set CLASSPATH=classpath1;classpath2... Class paths to the .jar, .zip or .class files. Each classpath should end with a filename or directory depending on what you are setting the class path to: For a .jar or .zip file that contains .class files, the class path ends with the name of the .zip or .jar file. For .class files in an unnamed package, the class path ends with the directory that contains the .class files. For .class files in a named package, the class path ends with the directory that contains the "root" package (the first package in the full package name). Multiple path entries are separated by semi-colons. With the set command, it's important to omit spaces from around the equals sign (=). The default class path is the current directory. Setting the CLASSPATH variable or using the -classpath command-line option overrides that default, so if you want to include the current directory in the search path, you must include "." in the new settings. Classpath entries that are neither directories nor archives (.zip or .jar files) nor * are ignored.
What files are used for classpath?
I have some classes in Eclipse which cannot be resolved to a type. I know that classes can be in .class, .jar, .par, .zip files. Are there any other file types that I have to look for? Or is there anything I could do in Eclipse to make it recognize the classes? In my general understanding, once I have found these files and added them to the classpath, Eclipse should be able to recognize them.
[ "Actually my colleague gave me an answer:\nIf the class cannot be found or resolved to a type within Eclipse, this is only the problem with build path. And to the build path, the files should be added of type as I have said: class, jar, par, zip; no other suffix is accepted.\n", "Maybe I am misunderstanding your question, so please excuse me if I am. Type resolution errors are often due to incorrect imports or misspellings. Ill give an example:\npublic class Foo extends Component{\n//If you forgot to import java.awt.*; You would receive the error \n \"Component cannot be resolved to a type\"\n}\n\nAlso if you misspelled an extension you would get the same error.When you create a class in eclipse you need to create a new file for that class.\nYou should also post your code so we know exactly what you're talking about. I don't think your error has anything to do with the classpath. :)\n", "Class path entry could be jar, zip or directory (source)\nset CLASSPATH=classpath1;classpath2...\n\nClass paths to the .jar, .zip or .class files. Each classpath should end with a filename or directory depending on what you are setting the class path to:\nFor a .jar or .zip file that contains .class files, the class path ends with the name of the .zip or .jar file.\nFor .class files in an unnamed package, the class path ends with the directory that contains the .class files.\nFor .class files in a named package, the class path ends with the directory that contains the \"root\" package (the first package in the full package name).\nMultiple path entries are separated by semi-colons. With the set command, it's important to omit spaces from around the equals sign (=).\n\n\nThe default class path is the current directory. Setting the CLASSPATH variable or using the -classpath command-line option overrides that default, so if you want to include the current directory in the search path, you must include \".\" in the new settings.\n\n\nClasspath entries that are neither directories nor archives (.zip or .jar files) nor * are ignored.\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "classpath", "compilation", "eclipse", "eclipse_classpath", "java" ]
stackoverflow_0017687903_classpath_compilation_eclipse_eclipse_classpath_java.txt
Q: Fill in missing numbers in different lists twice. ArgumentOutOfRangeException I need your help to prepare data. I am reading a byte array. I make bytes to unsigned integers. I read in different blocks of that array and write the UInt32s in 5 lists in total. The data has been stored compressed; that is, some spaces are missing and I need to fill them up. To make it clear, I made a compilable test project for you and wrote the data into an excel file. This is the original data. From the left to the right: Sizes, Addresses, Indexes, Number_of_items, Description You can see that in column C the 2, 3, and 4 are missing. So I select columns C through E, and move them down 3 rows. I fill the gaps with 2, 3, 4 in column C and 1, 1, 1 in the other two columns. I do this until I reach the end of column B. Columns B, C, D, and E must have the same length. Where I have a little problem I fail because a While or For loop evaluates the List.Count property only once. That is, if I add something to a list within the loop, the loop doesn't run often enough. I've provisionally worked around this by writing While True and catching an OutOfRangeException. Maybe someone has a better idea; or even an idea that completely replaces my approach :D Step № 2 If a row has a 2 in column D, I select columns B through E below the 2, and move the contents down one row (only one, because the difference is 1). I want to do this until I get to the bottom of the table. This will make all columns the same length. Again, I have the problem that I use While True and go out using an exception. Does anyone have a better idea? FormMain.vb Public NotInheritable Class FormMain Private Sizes As New List(Of UInt32) From { 58_355UI, 20_270UI, 4_830UI, 4_443UI, 25_177UI, 8_844UI, 4_101UI, 4_200UI, 14_991UI, 12_639UI, 12_894UI, 14_165UI, 12_954UI, 26_670UI, 7_388UI} Private Addresses As New List(Of UInt32) From {4_323UI, 62_706UI, 83_646UI, 88_935UI, 93_883UI, 128_259UI, 132_718UI, 137_254UI, 152_590UI, 178_485UI, 193_022UI, 206_718UI} Private Indexes As New List(Of UInt32) From {1UI, 5UI, 6UI, 9UI, 10UI, 12UI} Private NumberOfItems As New List(Of UInt32) From {1UI, 2UI, 1UI, 2UI, 1UI, 2UI} Private Description As New List(Of UInt32) From {1UI, 1UI, 1UI, 1UI, 1UI, 1UI} Private Sub ButtonStart_Click(sender As Object, e As EventArgs) Handles ButtonStart.Click Dim RopD As New Reprocessing_of_parsed_data(Sizes, Addresses, Indexes, NumberOfItems, Description) RopD.Fill_gaps() End Sub End Class Reprocessing_of_parsed_data.vb Public NotInheritable Class Reprocessing_of_parsed_data Public Property Sizes As New List(Of UInteger) Public Property Addresses As New List(Of UInteger) Public Property Indexes As New List(Of UInteger) Public Property Number_of_items As New List(Of UInteger) Public Property Description As New List(Of UInteger) Public Sub New(sizes As List(Of UInt32), addresses As List(Of UInt32), indexes As List(Of UInt32), number_of_items As List(Of UInt32), description As List(Of UInt32)) Me.Sizes = sizes Me.Addresses = addresses Me.Indexes = indexes Me.Number_of_items = number_of_items Me.Description = description End Sub Public Sub Fill_gaps() Dim counterForAddressesList As Integer = 0 'Dim ListCount As Integer = Indexes.Count - 2 Dim i As Integer = 0 While True 'i < ListCount - 2 Try Dim delta As Integer = CInt(Indexes(i + 1) - Indexes(i)) - 1 Dim number As UInt32 = Indexes(i) While delta > 0 number += 1UI counterForAddressesList += 1 Indexes.Insert(CInt(number) - 1, number) Number_of_items.Insert(CInt(number) - 1, 1UI) 
Description.Insert(CInt(number) - 1, 1UI) delta -= 1 'ListCount += 1 End While counterForAddressesList += 1 i += 1 Catch ex As ArgumentOutOfRangeException Exit While End Try End While ' Step 2 Dim j As Integer = 0 While True Try If Number_of_items(j) > 1UI Then Dim delta As Integer = CInt(Number_of_items(j)) - 1 While delta > 0 Addresses.Insert(j + 1, UInteger.MaxValue) Indexes.Insert(j + 1, UInteger.MaxValue) Number_of_items.Insert(j + 1, UInteger.MaxValue) Description.Insert(j + 1, UInteger.MaxValue) delta -= 1 j += 1 End While End If j += 1 Catch ex As ArgumentOutOfRangeException Exit While End Try End While End Sub End Class A: It is never a good idea to catch an index out of bounds exception in a Try-Catch-statement. Only conditions you are not in control of (often I/O errors) should be handled at runtime. An index being out of bounds is a design error and must be fixed at design time. I extracted the two steps from Sub Fill_gaps into two new methods to make the code easier to read and test. Public Sub Fill_gaps() ' A better name would be "Decompress" PrintTable() 'For testing FillGaps() PrintTable() 'For testing AddMissingNumberOfItems() PrintTable() 'For testing End Sub I also added a method PrintTable for testing Private Sub PrintTable() Console.WriteLine() Console.WriteLine($" A B C D E") For i = 0 To Sizes.Count - 1 Dim A = Sizes(i) Dim B = If(i < Addresses.Count, Addresses(i), 0UI) Dim C = If(i < Indexes.Count, Indexes(i), 0UI) Dim D = If(i < NumberOfItems.Count, NumberOfItems(i), 0UI) Dim E = If(i < Description.Count, Description(i), 0UI) Console.WriteLine($"{A,10}{B,10}{C,10}{D,10}{E,10}") Next End Sub Step 1: fill the gaps (the method is self-explanatory): Private Sub FillGaps() ' Fill gaps in columns C, D and E. ' The number of Addresses B indicates the total number of indexes. ' Append empty items to C, D and E until the list counts matches the ' expected total number of indexes. Dim originalIndexCount = Indexes.Count 'Save original count Do While Indexes.Count < Addresses.Count Indexes.Add(CUInt(Indexes.Count + 1)) ' Make index 1-based NumberOfItems.Add(1) Description.Add(1) Loop 'Move the rows to where the index indicates. 'We do it backwards to not overwrite existing items. For i As Integer = originalIndexCount - 1 To 0 Step -1 Dim targetIndex = CInt(Indexes(i)) - 1 ' Subtract 1, indexes are 0-based If targetIndex <> i Then ' Copy to target position Indexes(targetIndex) = Indexes(i) NumberOfItems(targetIndex) = NumberOfItems(i) Description(targetIndex) = Description(i) 'Clear resp. initialize old row Indexes(i) = CUInt(i + 1) ' Make index 1-based NumberOfItems(i) = 1 Description(i) = 1 End If Next End Sub Step 2: Private Sub AddMissingNumberOfItems() ' Insert empty rows after items with NumberOfItems > 1. ' We do it backwards to not mess up our indexes. For i As Integer = Indexes.Count - 1 To 0 Step -1 For k As UInteger = 2 To NumberOfItems(i) Addresses.Insert(i + 1, 0) Indexes.Insert(i + 1, 0) NumberOfItems.Insert(i + 1, 0) Description.Insert(i + 1, 0) Next Next End Sub If you use the following test list for the descriptions, you will better see which rows have been moved or added Private Description As New List(Of UInt32) From {2UI, 3UI, 4UI, 5UI, 6UI, 7UI}
Fill in missing numbers in different lists twice. ArgumentOutOfRangeException
I need your help to prepare data. I am reading a byte array. I make bytes to unsigned integers. I read in different blocks of that array and write the UInt32s in 5 lists in total. The data has been stored compressed; that is, some spaces are missing and I need to fill them up. To make it clear, I made a compilable test project for you and wrote the data into an excel file. This is the original data. From the left to the right: Sizes, Addresses, Indexes, Number_of_items, Description You can see that in column C the 2, 3, and 4 are missing. So I select columns C through E, and move them down 3 rows. I fill the gaps with 2, 3, 4 in column C and 1, 1, 1 in the other two columns. I do this until I reach the end of column B. Columns B, C, D, and E must have the same length. Where I have a little problem I fail because a While or For loop evaluates the List.Count property only once. That is, if I add something to a list within the loop, the loop doesn't run often enough. I've provisionally worked around this by writing While True and catching an OutOfRangeException. Maybe someone has a better idea; or even an idea that completely replaces my approach :D Step № 2 If a row has a 2 in column D, I select columns B through E below the 2, and move the contents down one row (only one, because the difference is 1). I want to do this until I get to the bottom of the table. This will make all columns the same length. Again, I have the problem that I use While True and go out using an exception. Does anyone have a better idea? FormMain.vb Public NotInheritable Class FormMain Private Sizes As New List(Of UInt32) From { 58_355UI, 20_270UI, 4_830UI, 4_443UI, 25_177UI, 8_844UI, 4_101UI, 4_200UI, 14_991UI, 12_639UI, 12_894UI, 14_165UI, 12_954UI, 26_670UI, 7_388UI} Private Addresses As New List(Of UInt32) From {4_323UI, 62_706UI, 83_646UI, 88_935UI, 93_883UI, 128_259UI, 132_718UI, 137_254UI, 152_590UI, 178_485UI, 193_022UI, 206_718UI} Private Indexes As New List(Of UInt32) From {1UI, 5UI, 6UI, 9UI, 10UI, 12UI} Private NumberOfItems As New List(Of UInt32) From {1UI, 2UI, 1UI, 2UI, 1UI, 2UI} Private Description As New List(Of UInt32) From {1UI, 1UI, 1UI, 1UI, 1UI, 1UI} Private Sub ButtonStart_Click(sender As Object, e As EventArgs) Handles ButtonStart.Click Dim RopD As New Reprocessing_of_parsed_data(Sizes, Addresses, Indexes, NumberOfItems, Description) RopD.Fill_gaps() End Sub End Class Reprocessing_of_parsed_data.vb Public NotInheritable Class Reprocessing_of_parsed_data Public Property Sizes As New List(Of UInteger) Public Property Addresses As New List(Of UInteger) Public Property Indexes As New List(Of UInteger) Public Property Number_of_items As New List(Of UInteger) Public Property Description As New List(Of UInteger) Public Sub New(sizes As List(Of UInt32), addresses As List(Of UInt32), indexes As List(Of UInt32), number_of_items As List(Of UInt32), description As List(Of UInt32)) Me.Sizes = sizes Me.Addresses = addresses Me.Indexes = indexes Me.Number_of_items = number_of_items Me.Description = description End Sub Public Sub Fill_gaps() Dim counterForAddressesList As Integer = 0 'Dim ListCount As Integer = Indexes.Count - 2 Dim i As Integer = 0 While True 'i < ListCount - 2 Try Dim delta As Integer = CInt(Indexes(i + 1) - Indexes(i)) - 1 Dim number As UInt32 = Indexes(i) While delta > 0 number += 1UI counterForAddressesList += 1 Indexes.Insert(CInt(number) - 1, number) Number_of_items.Insert(CInt(number) - 1, 1UI) Description.Insert(CInt(number) - 1, 1UI) delta -= 1 'ListCount += 1 End While 
counterForAddressesList += 1 i += 1 Catch ex As ArgumentOutOfRangeException Exit While End Try End While ' Step 2 Dim j As Integer = 0 While True Try If Number_of_items(j) > 1UI Then Dim delta As Integer = CInt(Number_of_items(j)) - 1 While delta > 0 Addresses.Insert(j + 1, UInteger.MaxValue) Indexes.Insert(j + 1, UInteger.MaxValue) Number_of_items.Insert(j + 1, UInteger.MaxValue) Description.Insert(j + 1, UInteger.MaxValue) delta -= 1 j += 1 End While End If j += 1 Catch ex As ArgumentOutOfRangeException Exit While End Try End While End Sub End Class
[ "It is never a good idea to catch an index out of bounds exception in a Try-Catch-statement. Only conditions you are not in control of (often I/O errors) should be handled at runtime. An index being out of bounds is a design error and must be fixed at design time.\nI extracted the two steps from Sub Fill_gaps into two new methods to make the code easier to read and test.\nPublic Sub Fill_gaps() ' A better name would be \"Decompress\"\n PrintTable() 'For testing\n FillGaps()\n PrintTable() 'For testing\n AddMissingNumberOfItems()\n PrintTable() 'For testing\nEnd Sub\n\nI also added a method PrintTable for testing\nPrivate Sub PrintTable()\n Console.WriteLine()\n Console.WriteLine($\" A B C D E\")\n For i = 0 To Sizes.Count - 1\n Dim A = Sizes(i)\n Dim B = If(i < Addresses.Count, Addresses(i), 0UI)\n Dim C = If(i < Indexes.Count, Indexes(i), 0UI)\n Dim D = If(i < NumberOfItems.Count, NumberOfItems(i), 0UI)\n Dim E = If(i < Description.Count, Description(i), 0UI)\n Console.WriteLine($\"{A,10}{B,10}{C,10}{D,10}{E,10}\")\n Next\nEnd Sub\n\nStep 1: fill the gaps (the method is self-explanatory):\nPrivate Sub FillGaps()\n ' Fill gaps in columns C, D and E.\n ' The number of Addresses B indicates the total number of indexes.\n ' Append empty items to C, D and E until the list counts matches the\n ' expected total number of indexes.\n Dim originalIndexCount = Indexes.Count 'Save original count\n Do While Indexes.Count < Addresses.Count\n Indexes.Add(CUInt(Indexes.Count + 1)) ' Make index 1-based\n NumberOfItems.Add(1)\n Description.Add(1)\n Loop\n\n 'Move the rows to where the index indicates.\n 'We do it backwards to not overwrite existing items.\n For i As Integer = originalIndexCount - 1 To 0 Step -1\n Dim targetIndex = CInt(Indexes(i)) - 1 ' Subtract 1, indexes are 0-based\n\n If targetIndex <> i Then\n ' Copy to target position\n Indexes(targetIndex) = Indexes(i)\n NumberOfItems(targetIndex) = NumberOfItems(i)\n Description(targetIndex) = Description(i)\n\n 'Clear resp. initialize old row\n Indexes(i) = CUInt(i + 1) ' Make index 1-based\n NumberOfItems(i) = 1\n Description(i) = 1\n End If\n Next\nEnd Sub\n\nStep 2:\nPrivate Sub AddMissingNumberOfItems()\n ' Insert empty rows after items with NumberOfItems > 1.\n ' We do it backwards to not mess up our indexes.\n For i As Integer = Indexes.Count - 1 To 0 Step -1\n For k As UInteger = 2 To NumberOfItems(i)\n Addresses.Insert(i + 1, 0)\n Indexes.Insert(i + 1, 0)\n NumberOfItems.Insert(i + 1, 0)\n Description.Insert(i + 1, 0)\n Next\n Next\nEnd Sub\n\nIf you use the following test list for the descriptions, you will better see which rows have been moved or added\nPrivate Description As New List(Of UInt32) From {2UI, 3UI, 4UI, 5UI, 6UI, 7UI}\n\n" ]
[ 1 ]
[]
[]
[ "vb.net" ]
stackoverflow_0074657270_vb.net.txt
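The two steps in the answer, padding the index-keyed columns out to the length of the address column with default rows and then inserting one blank row for every delivery beyond the first, are easier to see stripped of the VB.NET plumbing. A hypothetical Python sketch of the same decompression (list names mirror the properties above; the filler values are placeholders):

    addresses = list(range(12))                  # stand-in for the 12 addresses
    indexes = [1, 5, 6, 9, 10, 12]
    number_of_items = [1, 2, 1, 2, 1, 2]
    description = [1, 1, 1, 1, 1, 1]

    # Step 1: one row per address, with (1, 1) defaults where an index was missing.
    known = {i: (n, d) for i, n, d in zip(indexes, number_of_items, description)}
    indexes = list(range(1, len(addresses) + 1))
    number_of_items = [known.get(i, (1, 1))[0] for i in indexes]
    description = [known.get(i, (1, 1))[1] for i in indexes]

    # Step 2: insert blank rows after multi-delivery indexes, walking backwards
    # so the positions still to be visited are not shifted by the insertions.
    for i in range(len(indexes) - 1, -1, -1):
        for _ in range(number_of_items[i] - 1):
            for column in (addresses, indexes, number_of_items, description):
                column.insert(i + 1, None)

    print(len(addresses), len(indexes), len(number_of_items), len(description))

Walking backwards in step 2 is the same trick the answer uses so that rows still to be visited keep their original positions.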
Q: Firebase Cloud Messaging Unable to Get Token on Vue 3 I am creating an App that will get real time notification with firebase cloud messaging, but it seems does not work for me. I am following their documentation an put firebase-messaging-sw.js in root of my vue project. i post the demo on stackblitz here is my App.vue <script setup lang="ts"> import { useTheme } from './services/vuestic-ui/themes' import { initializeApp } from "firebase/app" import { getMessaging, getToken } from "firebase/messaging" const firebaseConfig = { apiKey: "", authDomain: ", databaseURL: "m", projectId: "", storageBucket: "", messagingSenderId: "2", appId: "" } // Initialize Firebase const app = initializeApp(firebaseConfig) const messaging = getMessaging(app) getToken(messaging, { vapidKey: 'mykey' }).then((currentToken) => { if (currentToken) { // Send the token to your server and update the UI if necessary // ... console.log({currentToken}) } else { // Show permission request UI console.log('No registration token available. Request permission to generate one.'); // ... } }).catch((err) => { console.log('An error occurred while retrieving token. ', err); // ... }) </script> if You run the demo it will say An error occurred while retrieving token. FirebaseError: Messaging: We are unable to register the default service worker. so little documentation about Firebase cloud messaging with vue 3, I don't know how to register the service worker properly,ho to get this Firebase Cloud Messaging works on vue 3? A: Firebase is unable to access the route firebase-messaging-sw.js, that's why you are getting this error. Just create an empty JavaScript file in your public folder named firebase-messaging-sw.js and this error will be gone.
Firebase Cloud Messaging Unable to Get Token on Vue 3
I am creating an App that will get real-time notifications with Firebase Cloud Messaging, but it seems it does not work for me. I am following their documentation and put firebase-messaging-sw.js in the root of my Vue project. I posted the demo on StackBlitz; here is my App.vue: <script setup lang="ts"> import { useTheme } from './services/vuestic-ui/themes' import { initializeApp } from "firebase/app" import { getMessaging, getToken } from "firebase/messaging" const firebaseConfig = { apiKey: "", authDomain: ", databaseURL: "m", projectId: "", storageBucket: "", messagingSenderId: "2", appId: "" } // Initialize Firebase const app = initializeApp(firebaseConfig) const messaging = getMessaging(app) getToken(messaging, { vapidKey: 'mykey' }).then((currentToken) => { if (currentToken) { // Send the token to your server and update the UI if necessary // ... console.log({currentToken}) } else { // Show permission request UI console.log('No registration token available. Request permission to generate one.'); // ... } }).catch((err) => { console.log('An error occurred while retrieving token. ', err); // ... }) </script> If you run the demo it will say "An error occurred while retrieving token. FirebaseError: Messaging: We are unable to register the default service worker." There is so little documentation about Firebase Cloud Messaging with Vue 3 that I don't know how to register the service worker properly, or how to get Firebase Cloud Messaging to work on Vue 3.
[ "Firebase is unable to access the route firebase-messaging-sw.js, that's why you are getting this error.\nJust create an empty JavaScript file in your public folder named firebase-messaging-sw.js and this error will be gone.\n" ]
[ 0 ]
[]
[]
[ "firebase", "firebase_cloud_messaging", "service_worker", "vue.js", "vuejs3" ]
stackoverflow_0073290239_firebase_firebase_cloud_messaging_service_worker_vue.js_vuejs3.txt
Q: ModuleNotFoundError: No module named 'proj' I ran into this problem and searched many resources but couldn't find a solution. My Django project was running successfully on my local machine. But when I deployed it to the server, I kept getting the following error. ModuleNotFoundError: No module named 'proj' I installed all the required libraries and all the settings should be correct, as they worked fine on macOS. (venv) [root@10-10-7-140 vanilla]# python manage.py runserver Traceback (most recent call last): File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/base.py", line 414, in run_from_argv self.execute(*args, **cmd_options) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 74, in execute super().execute(*args, **options) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/base.py", line 460, in execute output = self.handle(*args, **options) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 81, in handle if not settings.DEBUG and not settings.ALLOWED_HOSTS: File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 87, in __getattr__ self._setup(name) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 74, in _setup self._wrapped = Settings(settings_module) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 183, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/usr/local/python3.8/python3.8/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'proj' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line utility.execute() File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 440, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/base.py", line 427, in run_from_argv connections.close_all() File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/db/utils.py", line 212, in close_all for alias in self: File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/connection.py", line 73, in __iter__ return iter(self.settings) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/functional.py", line 49, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/connection.py", line 45, in settings self._settings = self.configure_settings(self._settings) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/db/utils.py", line 148, in configure_settings databases = super().configure_settings(databases) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/connection.py", line 50, in configure_settings settings = getattr(django_settings, self.settings_name) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 87, in __getattr__ self._setup(name) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 74, in _setup self._wrapped = Settings(settings_module) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 183, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/usr/local/python3.8/python3.8/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'proj' I noticed the line if not settings.DEBUG and not settings.ALLOWED_HOSTS: in the traceback and checked my code in vanilla/settings.py, which includes: DEBUG = True ALLOWED_HOSTS = ['*'] Furthermore, the following line is standard in my manage.py. os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'vanilla.settings') I checked that all the libraries are installed correctly and tried to run python manage.py makemigrations and python manage.py check. I got the same error. I'm really wondering what caused this problem. Can anyone help? Thank you. A: After checking the system, I found the following line in /etc/profile. export DJANGO_SETTINGS_MODULE=proj.settings It set the DJANGO_SETTINGS_MODULE to 'proj.settings' and cannot be overwritten by manage.py. After removing it and rebooting the system, the problem is resolved.
ModuleNotFoundError: No module named 'proj'
I ran into this problem and searched many resources but couldn't find a solution. My Django project was running successfully on my local machine. But when I deployed it to the server, I kept getting the following error. ModuleNotFoundError: No module named 'proj' I installed all the required libraries and all the settings should be correct, as they worked fine on macOS. (venv) [root@10-10-7-140 vanilla]# python manage.py runserver Traceback (most recent call last): File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/base.py", line 414, in run_from_argv self.execute(*args, **cmd_options) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 74, in execute super().execute(*args, **options) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/base.py", line 460, in execute output = self.handle(*args, **options) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/commands/runserver.py", line 81, in handle if not settings.DEBUG and not settings.ALLOWED_HOSTS: File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 87, in __getattr__ self._setup(name) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 74, in _setup self._wrapped = Settings(settings_module) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 183, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/usr/local/python3.8/python3.8/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'proj' During handling of the above exception, another exception occurred: Traceback (most recent call last): File "manage.py", line 22, in <module> main() File "manage.py", line 18, in main execute_from_command_line(sys.argv) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 446, in execute_from_command_line utility.execute() File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/__init__.py", line 440, in execute self.fetch_command(subcommand).run_from_argv(self.argv) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/core/management/base.py", line 427, in run_from_argv connections.close_all() File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/db/utils.py", line 212, in close_all for alias in self: File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/connection.py", line 73, in __iter__ return iter(self.settings) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/functional.py", line 49, in __get__ res = instance.__dict__[self.name] = self.func(instance) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/connection.py", line 45, in settings self._settings = self.configure_settings(self._settings) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/db/utils.py", line 148, in configure_settings databases = super().configure_settings(databases) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/utils/connection.py", line 50, in configure_settings settings = getattr(django_settings, self.settings_name) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 87, in __getattr__ self._setup(name) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 74, in _setup self._wrapped = Settings(settings_module) File "/data/www/vanilla/vanilla/venv/lib/python3.8/site-packages/django/conf/__init__.py", line 183, in __init__ mod = importlib.import_module(self.SETTINGS_MODULE) File "/usr/local/python3.8/python3.8/lib/python3.8/importlib/__init__.py", line 127, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 961, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed File "<frozen importlib._bootstrap>", line 1014, in _gcd_import File "<frozen importlib._bootstrap>", line 991, in _find_and_load File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked ModuleNotFoundError: No module named 'proj' I noticed the line if not settings.DEBUG and not settings.ALLOWED_HOSTS: in the traceback and checked my code in vanilla/settings.py, which includes: DEBUG = True ALLOWED_HOSTS = ['*'] Furthermore, the following line is standard in my manage.py. os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'vanilla.settings') I checked that all the libraries are installed correctly and tried to run python manage.py makemigrations and python manage.py check. I got the same error. I'm really wondering what caused this problem. Can anyone help? Thank you.
[ "After checking the system, I found the following line in /etc/profile.\nexport DJANGO_SETTINGS_MODULE=proj.settings\nIt set the DJANGO_SETTINGS_MODULE to 'proj.settings' and cannot be overwritten by manage.py. After removing it and rebooting the system, the problem is resolved.\n" ]
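A short note on why the export in /etc/profile won: os.environ.setdefault() in manage.py only assigns DJANGO_SETTINGS_MODULE when the variable is not already set, so a value exported by the shell (or a systemd unit, supervisor config, etc.) silently takes precedence. The sketch below is a minimal diagnostic, not part of the original answer; the script name is hypothetical, and 'vanilla.settings' is taken from the question (run it from the project root inside the activated virtualenv).

# check_settings_env.py  (hypothetical helper script)
import importlib
import os

print("DJANGO_SETTINGS_MODULE before setdefault:", os.environ.get("DJANGO_SETTINGS_MODULE"))

# setdefault only writes the value when the key is missing, which is why
# an export in /etc/profile cannot be overridden by manage.py
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "vanilla.settings")
print("DJANGO_SETTINGS_MODULE after setdefault:", os.environ["DJANGO_SETTINGS_MODULE"])

# import whichever module won, to reproduce the ModuleNotFoundError directly
try:
    importlib.import_module(os.environ["DJANGO_SETTINGS_MODULE"])
    print("settings module imported successfully")
except ModuleNotFoundError as exc:
    print("settings module could not be imported:", exc)

If the system-wide export is intentional and cannot be removed, replacing setdefault with a plain assignment (os.environ['DJANGO_SETTINGS_MODULE'] = 'vanilla.settings') in manage.py would also work, at the cost of ignoring any deliberate override.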
[ 0 ]
[]
[]
[ "django", "django_deployment", "django_settings", "modulenotfounderror", "python_importlib" ]
stackoverflow_0074648275_django_django_deployment_django_settings_modulenotfounderror_python_importlib.txt
Q: SVG Works in Safari, Not Chrome My understanding is that SMIL animations are supported by Chrome; however, my SVG SMIL animation does not work in Chrome (v107) while it does work in Safari (v16.0). Here's a link to a codepen illustrating the issue. Why won't this <animate> animation work across browsers? <svg width="46" height="62" viewBox="0 0 46 62" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" id="shape" fill="#007AFF" stroke="white" stroke-width="4"> <animate attributename="d" begin="G.click" from="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" to="M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z" dur=".4s" /> </path> </svg> A: They apparently don't handle event-value on elements outside of the SVG namespace. This should be considered a bug and I opened BUG 1395274.
As for a workaround, the obvious one is to use JS instead: document.querySelector("button").addEventListener("click", (evt) => { document.querySelector("path animate").beginElement(); }); body { background-color: green; } svg { margin: 0 auto; } <svg width="146" height="62" viewBox="0 0 146 62" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" id="shape" fill="#007AFF" stroke="white" stroke-width="4"> <animate attributename="d" begin="indefinite" from="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" to="M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z" dur="0.4s" /> </path> </svg> <button> start animation </button> Or if you really don't want to use JS, then you can hack something around by inserting an <svg> element in your button... but probably don't do that. body { background-color: green; } svg { margin: 0 auto; } <svg width="146" height="62" viewBox="0 0 146 62" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" id="shape" fill="#007AFF" stroke="white" stroke-width="4"> <animate attributename="d" begin="G.click" from="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" to="M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z" dur="0.4s" /> </path> </svg> <!-- really just to show how Chrome's bug behave don't do that... 
--> <button style="position: relative"> <svg id=G style="width:100%; height:100%; position:absolute; left:0; top:0;"></svg> start animation </button> Ps: Note that as mentioned by Robert in the comments, you need to set the initial 0 in the dur attribute for Firefox to accept it. A: Another style work-around, that gets you tight coupling between your SVG and JS code capturing the click, is a native JavaScript Web Component, supported in all modern browsers. body { background-color: green; } svg { margin: 0 auto; } <svg-marker color="gold"></svg-marker> <script> customElements.define("svg-marker", class extends HTMLElement{ connectedCallback(){ this.innerHTML = ` <svg width="146" height="62" viewBox="0 0 146 62" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" fill="${this.getAttribute("color") || "#007AFF"}" stroke="white" stroke-width="4"> <animate begin="freeze" fill="freeze" attributename="d" dur="0.4s" from="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" to="M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z"/> </path> </svg>`; this.onclick = (evt) => { this.querySelector("animate").beginElement(); } } }); </script>
SVG Works in Safari, Not Chrome
My understanding is that SMIL animations are supported by Chrome; however, my SVG SMIL animation does not work in Chrome (v107) while it does work in Safari (v16.0). Here's a link to a codepen illustrating the issue. Why won't this <animate> animation work across browsers? <svg width="46" height="62" viewBox="0 0 46 62" fill="none" xmlns="http://www.w3.org/2000/svg"> <path d="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" id="shape" fill="#007AFF" stroke="white" stroke-width="4"> <animate attributename="d" begin="G.click" from="M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z" to="M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z" dur=".4s" /> </path> </svg>
[ "They apparently don't handle event-value on elements outside of the SVG namespace. This should be considered a bug and I opened BUG 1395274.\nAs for a workaround, the obvious one is to use JS instead:\n\n\ndocument.querySelector(\"button\").addEventListener(\"click\", (evt) => {\n document.querySelector(\"path animate\").beginElement();\n});\nbody {\n background-color: green;\n}\n\nsvg {\n margin: 0 auto;\n}\n<svg width=\"146\" height=\"62\" viewBox=\"0 0 146 62\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\">\n <path d=\"M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z\" id=\"shape\" fill=\"#007AFF\" stroke=\"white\" stroke-width=\"4\">\n <animate \n attributename=\"d\"\n begin=\"indefinite\"\n from=\"M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z\"\n to=\"M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z\" \n dur=\"0.4s\" \n />\n </path>\n</svg>\n<button>\n start animation\n</button>\n\n\n\nOr if you really don't want to use JS, then you can hack something around by inserting an <svg> element in your button... 
but probably don't do that.\n\n\nbody {\n background-color: green;\n}\n\nsvg {\n margin: 0 auto;\n}\n<svg width=\"146\" height=\"62\" viewBox=\"0 0 146 62\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\">\n <path d=\"M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z\" id=\"shape\" fill=\"#007AFF\" stroke=\"white\" stroke-width=\"4\">\n <animate \n attributename=\"d\"\n begin=\"G.click\"\n from=\"M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z\"\n to=\"M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z\" \n dur=\"0.4s\" \n />\n </path>\n</svg>\n<!-- really just to show how Chrome's bug behave don't do that... -->\n<button style=\"position: relative\">\n <svg id=G style=\"width:100%; height:100%; position:absolute; left:0; top:0;\"></svg>\n start animation\n</button>\n\n\n\nPs: Note that as mentioned by Robert in the comments, you need to set the initial 0 in the dur attribute for Firefox to accept it.\n", "Another style work-around,\nthat gets you tight coupling between your SVG and JS code capturing the click,\nis a native JavaScript Web Component, supported in all modern browsers.\n\n\nbody {\n background-color: green;\n}\n\nsvg {\n margin: 0 auto;\n}\n<svg-marker color=\"gold\"></svg-marker>\n\n<script>\ncustomElements.define(\"svg-marker\", class extends HTMLElement{\n connectedCallback(){\n this.innerHTML = `\n <svg width=\"146\" height=\"62\" viewBox=\"0 0 146 62\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\">\n <path d=\"M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z\" \n fill=\"${this.getAttribute(\"color\") || \"#007AFF\"}\" stroke=\"white\" stroke-width=\"4\">\n <animate begin=\"freeze\" fill=\"freeze\" attributename=\"d\" dur=\"0.4s\" \n from=\"M 23 40 C 26.376 40 28.889 41.717 29.586 42.414 C 31.827 44.655 33 46.675 33 50 C 33 51.479 32.678 52.894 32.096 54.17 C 31.523 55.427 30.696 56.55 29.676 57.467 C 27.932 59.032 25.618 60 22.999 60 C 20.386 60 18.076 59.036 16.334 57.477 C 15.309 56.558 14.479 55.432 13.904 54.17 C 13.322 52.894 13 51.479 13 50 C 13 46.675 14.173 44.655 16.414 42.414 C 17.111 41.717 19.624 40 23 40 Z\"\n to=\"M 23 2 C 29.018 2 34.385 4.034 38.179 7.421 C 41.794 10.647 44 15.131 44 20.365 C 44 22.74 43.043 26.026 41.334 
29.89 C 39.649 33.7 37.33 37.861 34.818 41.916 C 30.621 48.692 25.957 55.057 22.998 58.816 C 20.041 55.06 15.379 48.7 11.182 41.928 C 8.671 37.873 6.351 33.712 4.666 29.901 C 2.958 26.036 2 22.746 2 20.365 C 2 15.119 4.203 10.637 7.812 7.416 C 11.606 4.029 16.976 2 23 2 Z\"/>\n </path>\n </svg>`;\n this.onclick = (evt) => {\n this.querySelector(\"animate\").beginElement();\n }\n }\n});\n\n</script>\n\n\n\n" ]
[ 2, 0 ]
[]
[]
[ "animation", "html", "smil", "svg" ]
stackoverflow_0074649985_animation_html_smil_svg.txt
Q: Keras category predictions always same distribution New to Keras/Machine Learning. I figure I am making a dumb mistake but I don't know what. I have 3 labels. The training data for each sequence of timesteps is labeled as [1, 0, 0], [0, 1, 0], or [0, 0, 1]. I always get a distribution that looks something like this. You can't tell in the photo, but the numbers aren't the same when you zoom in or look at the actual data results. https://imgur.com/a/o04cS97 The actual results are just color coding that spot based on the category above, so the values are all 1 but the labels are always one of the above. model = Sequential() model.add(LSTM(units=50, return_sequences=False, input_shape=(num_timesteps, num_features))) model.add(Dense(3, activation="softmax")) model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"]) model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test)) results = model.predict(x_train) I can change the number of sequences, timesteps, features, and epochs, and add other LSTM layers. The distribution will change but always look like that. I'm expecting, based on the data (and even just based on making things random), that the probabilities would be varied and not always discretely layered. I originally did this with just a regular Dense layer and then a Dense(3) layer to categorize, and I was getting results that matched that expectation. I switched to an LSTM due to the type of data and am no longer getting the expected results, even though the data is the same. A: It sounds like your model is overfitting to your training data. This means that it is performing well on the data it was trained on, but not generalizing well to new data. One common cause of overfitting is using a model that is too complex for the amount of training data you have. In your case, using an LSTM with 50 units may be too complex for your data, especially if you don't have a lot of training examples. To combat overfitting, you can try using regularization techniques such as adding dropout layers to your model. You can also try using a simpler model with fewer parameters, or using more training data. Additionally, it's a good idea to monitor the performance of your model on a validation set during training, to ensure that it is not overfitting. You can do this by passing a validation set to the fit method of your model, and setting the validation_split argument to a value between 0 and 1. This will cause the model to evaluate its performance on the validation set after each epoch of training.
Keras category predictions always same distribution
New to Keras/Machine Learning. I figure I am making a dumb mistake but I don't know what. I have 3 labels. The training data for each sequence of timesteps is labeled as [1, 0, 0], [0, 1, 0], or [0, 0, 1]. I always get a distribution that looks something like this. You can't tell in the photo, but the numbers aren't the same when you zoom in or look at the actual data results. https://imgur.com/a/o04cS97 The actual results are just color coding that spot based on the category above, so the values are all 1 but the labels are always one of the above. model = Sequential() model.add(LSTM(units=50, return_sequences=False, input_shape=(num_timesteps, num_features))) model.add(Dense(3, activation="softmax")) model.compile(optimizer='adam', loss="categorical_crossentropy", metrics=["accuracy"]) model.fit(x_train, y_train, epochs=100, validation_data=(x_test, y_test)) results = model.predict(x_train) I can change the number of sequences, timesteps, features, and epochs, and add other LSTM layers. The distribution will change but always look like that. I'm expecting, based on the data (and even just based on making things random), that the probabilities would be varied and not always discretely layered. I originally did this with just a regular Dense layer and then a Dense(3) layer to categorize, and I was getting results that matched that expectation. I switched to an LSTM due to the type of data and am no longer getting the expected results, even though the data is the same.
[ "It sounds like your model is overfitting to your training data. This means that it is performing well on the data it was trained on, but not generalizing well to new data.\nOne common cause of overfitting is using a model that is too complex for the amount of training data you have. In your case, using an LSTM with 50 units may be too complex for your data, especially if you don't have a lot of training examples.\nTo combat overfitting, you can try using regularization techniques such as adding dropout layers to your model. You can also try using a simpler model with fewer parameters, or using more training data.\nAdditionally, it's a good idea to monitor the performance of your model on a validation set during training, to ensure that it is not overfitting. You can do this by passing a validation set to the fit method of your model, and setting the validation_split argument to a value between 0 and 1. This will cause the model to evaluate its performance on the validation set after each epoch of training.\n" ]
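The answer's suggestions can be sketched concretely. This is a minimal, untuned example assuming the standard tf.keras API and the same num_timesteps, num_features, x_train and y_train as in the question; the unit count, dropout rate and validation fraction are illustrative assumptions, not recommendations.

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dropout, Dense

model = Sequential()
# fewer units than the original 50, to reduce model capacity
model.add(LSTM(units=16, return_sequences=False,
               input_shape=(num_timesteps, num_features)))
# dropout randomly zeroes activations during training to limit overfitting
model.add(Dropout(0.2))
model.add(Dense(3, activation="softmax"))

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# hold out 20% of the training data so val_loss / val_accuracy can be tracked per epoch
history = model.fit(x_train, y_train, epochs=100, validation_split=0.2)

Comparing loss and val_loss in history.history across epochs shows whether the model is overfitting: if val_loss rises while loss keeps falling, a smaller model, more dropout, or more data is warranted.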
[ 0 ]
[]
[]
[ "categorical", "categories", "keras", "lstm", "python" ]
stackoverflow_0074658705_categorical_categories_keras_lstm_python.txt
Q: Airflow Task Succeeded But Not All Data Ingested I have an Airflow task to extract data with this flow PostgreSQL -> Google Cloud Storage -> BigQuery The problem I have is that it seems not all the data is ingested into BigQuery. On the PostgreSQL source, the table has 18M+ rows of data, but after ingestion it only has 4M+ rows of data. When I check on production, the data returns 18M+ rows with this query: SELECT COUNT(1) FROM my_table -- This returns 18M+ rows But after the DAG finished running, when I check on BigQuery: SELECT COUNT(1) FROM data_lake.my_table -- This returns 4M+ rows Note that not all the tables that I ingested returned like this. All of the smaller tables ingested just fine. But when it hits a certain number of rows it behaves like this. My suspicion is that it happens when the data is extracted from PostgreSQL to Google Cloud Storage. So I'll provide my function here: def create_operator_write_append_init(self, worker=10): worker_var = dict() with TaskGroup(group_id=self.task_id_init) as tg1: for i in range(worker): worker_var[f'worker_{i}'] = PostgresToGCSOperator( task_id = f'worker_{i}', postgres_conn_id = self.conn_id, sql = 'extract_init.sql', bucket = self.bucket, filename = f'{self.filename_init}_{i}.{self.export_format}', export_format = self.export_format, # the export format is json gzip = True, params = { 'worker': i } ) return tg1 and here is the SQL file: SELECT id, name, created_at, updated_at, deleted_at FROM my_table WHERE 1=1 AND ABS(MOD(hashtext(id::TEXT), 10)) = {{params.worker}}; What I did was chunk the data and split it into several workers, hence the TaskGroup. To provide more information, I use Composer: composer-2.0.32-airflow-2.3.4 Large instance Worker 8CPU Worker 32GB Memory Worker 2GB storage Worker between 1-16 What could be causing this? A: PostgresToGCSOperator inherits from BaseSQLToGCSOperator (https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/sql_to_gcs/index.html) According to the source code, approx_max_file_size_bytes=1900000000. So if you split your table into 10 parts (or workers, let's say) the maximum size of each chunk should be at most 1.9 gigabytes. In case this chunk is bigger, the previous chunk will be replaced with the new one as you did not specify to create "chunks of your chunk" by PostgresToGCSOperator. You can do it by adding the placeholder {} in the filename and the Operator will handle it. def create_operator_write_append_init(self, worker=10): worker_var = dict() with TaskGroup(group_id=self.task_id_init) as tg1: for i in range(worker): worker_var[f'worker_{i}'] = PostgresToGCSOperator( task_id = f'worker_{i}', postgres_conn_id = self.conn_id, sql = 'extract_init.sql', bucket = self.bucket, filename = f'{self.filename_init}_{i}_part_{{}}.{self.export_format}', export_format = self.export_format, # the export format is json gzip = True, params = { 'worker': i } ) return tg1
Airflow Task Succeeded But Not All Data Ingested
I have an Airflow task to extract data with this flow PostgreSQL -> Google Cloud Storage -> BigQuery The problem I have is that it seems not all the data is ingested into BigQuery. On the PostgreSQL source, the table has 18M+ rows of data, but after ingestion it only has 4M+ rows of data. When I check on production, the data returns 18M+ rows with this query: SELECT COUNT(1) FROM my_table -- This returns 18M+ rows But after the DAG finished running, when I check on BigQuery: SELECT COUNT(1) FROM data_lake.my_table -- This returns 4M+ rows Note that not all the tables that I ingested returned like this. All of the smaller tables ingested just fine. But when it hits a certain number of rows it behaves like this. My suspicion is that it happens when the data is extracted from PostgreSQL to Google Cloud Storage. So I'll provide my function here: def create_operator_write_append_init(self, worker=10): worker_var = dict() with TaskGroup(group_id=self.task_id_init) as tg1: for i in range(worker): worker_var[f'worker_{i}'] = PostgresToGCSOperator( task_id = f'worker_{i}', postgres_conn_id = self.conn_id, sql = 'extract_init.sql', bucket = self.bucket, filename = f'{self.filename_init}_{i}.{self.export_format}', export_format = self.export_format, # the export format is json gzip = True, params = { 'worker': i } ) return tg1 and here is the SQL file: SELECT id, name, created_at, updated_at, deleted_at FROM my_table WHERE 1=1 AND ABS(MOD(hashtext(id::TEXT), 10)) = {{params.worker}}; What I did was chunk the data and split it into several workers, hence the TaskGroup. To provide more information, I use Composer: composer-2.0.32-airflow-2.3.4 Large instance Worker 8CPU Worker 32GB Memory Worker 2GB storage Worker between 1-16 What could be causing this?
[ "PostgresToGCSOperator inherits from BaseSQLToGCSOperator (https://airflow.apache.org/docs/apache-airflow-providers-google/stable/_api/airflow/providers/google/cloud/transfers/sql_to_gcs/index.html)\nAccording to the source code, approx_max_file_size_bytes=1900000000. So if you split your table into 10 parts (or workers, let's say) the maximum size of each chunk should be at most 1.9 gigabytes. In case this chunk is bigger, the previous chunk will be replaced with the new one as you did not specify to create \"chunks of your chunk\" by PostgresToGCSOperator.\nYou can do it by adding the placeholder {} in the filename and the Operator will handle it.\ndef create_operator_write_append_init(self, worker=10):\n worker_var = dict()\n with TaskGroup(group_id=self.task_id_init) as tg1:\n for i in range(worker):\n worker_var[f'worker_{i}'] = PostgresToGCSOperator(\n task_id = f'worker_{i}',\n postgres_conn_id = self.conn_id,\n sql = 'extract_init.sql',\n bucket = self.bucket,\n filename = f'{self.filename_init}_{i}_part_{{}}.{self.export_format}', \n export_format = self.export_format, # the export format is json\n gzip = True,\n params = {\n 'worker': i\n }\n )\n return tg1\n\n" ]
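To make the overwrite mechanism concrete: the operator derives each chunk's object name from the filename template (roughly filename.format(file_no) for each successive part in the provider source), and str.format() on a template with no {} placeholder returns the same name every time, so each ~1.9 GB part overwrites the previous one. Below is a small plain-Python illustration with hypothetical file names; it needs no Airflow at all.

# how chunk naming behaves with and without a placeholder
template_without_placeholder = "my_table_0.json"        # hypothetical name, as in the original DAG
template_with_placeholder = "my_table_0_part_{}.json"   # hypothetical name, as in the fixed DAG

for file_no in range(3):
    # str.format() on a template with no {} returns the string unchanged,
    # so every chunk maps to the same GCS object and overwrites the last one
    print(template_without_placeholder.format(file_no))  # my_table_0.json, three times

for file_no in range(3):
    # with a placeholder, each chunk gets its own object
    print(template_with_placeholder.format(file_no))     # my_table_0_part_0.json, _1, _2

Note that the downstream GCS-to-BigQuery load then has to pick up all of the part files, for example via a wildcard such as my_table_0_part_*.json rather than a single object name; the load step is not shown in the question, so the exact change needed there is an assumption.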
[ 0 ]
[]
[]
[ "airflow", "airflow_2.x", "google_cloud_composer", "python" ]
stackoverflow_0074650653_airflow_airflow_2.x_google_cloud_composer_python.txt