Dataset schema:
- content: string (length 86 to 88.9k)
- title: string (length 0 to 150)
- question: string (length 1 to 35.8k)
- answers: list
- answers_scores: list
- non_answers: list
- non_answers_scores: list
- tags: list
- name: string (length 30 to 130)
Q: Unwanted recomposition when using Context/Toast in event - Jetpack Compose In a Jetpack Compose application, I have two composables similar to here: @Composable fun Main() { println("Composed Main") val context = LocalContext.current var text by remember { mutableStateOf("") } fun update(num: Number) { text = num.toString() Toast.makeText(context, "Toast", Toast.LENGTH_SHORT).show() } Column { Text(text) Keypad { update(it) } } } @Composable fun Keypad(onClick: (Number) -> Unit) { println("Composed Keypad") Column { for (i in 1..10) { Button(onClick = {onClick(i)}) { Text(i.toString()) } } } } Clicking each button causes the two composables to recompose and produces this output: I/System.out: Composed Main I/System.out: Composed Keypad Recomposing the Keypad composable is unneeded and makes the app freeze (for several seconds in a bigger project). Removing usages of context in the event handlers (here, commenting out the Toast) solves the problem, does not recompose the Keypad, and produces this output: I/System.out: Composed Main Is there any other way I could use context in an event without causing unneeded recompositions? A: It's actually odd that the Toast affects the recomposition of Keypad when it is shown, but considering a Toast is a side effect rather than an actual part of the composable (I'm expecting a correction here), I thought of putting it in an Effect to prevent Keypad from recomposing, and it worked. Here, SideEffect will execute after every recomposition. SideEffect { if (text.isNotEmpty()) { Toast.makeText(context, "Toast", Toast.LENGTH_SHORT).show() } } Or you can utilize LaunchedEffect with text as its key, so on succeeding recompositions, when text changes from its previous value (invalidating the effect), the LaunchedEffect will re-execute and show the toast again: LaunchedEffect(key1 = text) { if (text.isNotEmpty()) { Toast.makeText(context, "Toast", Toast.LENGTH_SHORT).show() } } Replacing your print with Log statements, this is the output when clicking any buttons, with either effect in use: E/Composable: Composed Main // first launch of screen E/Composable: Composed Keypad // first launch of screen // succeeding clicks E/Composable: Composed Main E/Composable: Composed Main E/Composable: Composed Main E/Composable: Composed Main
Unwanted recomposition when using Context/Toast in event - Jetpack Compose
In a Jetpack Compose application, I have two composables similar to here: @Composable fun Main() { println("Composed Main") val context = LocalContext.current var text by remember { mutableStateOf("") } fun update(num: Number) { text = num.toString() Toast.makeText(context, "Toast", Toast.LENGTH_SHORT).show() } Column { Text(text) Keypad { update(it) } } } @Composable fun Keypad(onClick: (Number) -> Unit) { println("Composed Keypad") Column { for (i in 1..10) { Button(onClick = {onClick(i)}) { Text(i.toString()) } } } } Clicking each button causes the two composables to recompose and produces this output: I/System.out: Composed Main I/System.out: Composed Keypad Recomposing the Keypad composable is unneeded and makes the app freeze (for several seconds in a bigger project). Removing usages of context in the event handlers (here, commenting out the Toast) solves the problem, does not recompose the Keypad, and produces this output: I/System.out: Composed Main Is there any other way I could use context in an event without causing unneeded recompositions?
[ "That's actually weird why Toast affects the recomposition of KeyPad when its showing, but considering Toast is kind of a side-effect not an actual part of the composable (I'm expecting a correction here) I thought of putting it in an Effect to prevent Keypad from recomposing, and it did.\nHere, SideEffect will execute every post-recomposition.\nSideEffect {\n if (text.isNotEmpty()) {\n Toast.makeText(context, \"Toast\", Toast.LENGTH_SHORT).show()\n }\n}\n\nor you can utilize LaunchedEffect using the text as its key, so on succeeding re-compositions, when the text changes, different from its previous value (invalidates), the LaunchedEffect will re-execute and show the toast again\nLaunchedEffect(key1 = \"text\") {\n if (text.isNotEmpty()) {\n Toast.makeText(context, \"Toast\", Toast.LENGTH_SHORT).show()\n }\n}\n\nReplacing your print with Log statements, this is the output when clicking any buttons, either of the effect being used\nE/Composable: Composed Main // first launch of screen\nE/Composable: Composed Keypad // first launch of screen\n\n// succeeding clicks\n\nE/Composable: Composed Main\nE/Composable: Composed Main\nE/Composable: Composed Main\nE/Composable: Composed Main\n\n" ]
[ 2 ]
[]
[]
[ "android", "android_jetpack_compose", "compose_recomposition", "kotlin", "state" ]
stackoverflow_0074660094_android_android_jetpack_compose_compose_recomposition_kotlin_state.txt
Q: When I link JS script with HTML I get error: ReferenceError: require is not defined I have an application built with Electron. When I add a script from a JS file to an HTML file or enter a script between <script> tags and define a new module, I get an error: ReferenceError: require is not defined This is my HTML script: <script src="../scripts/computer.js"></script> This is the JavaScript file: const os = require("os") I wanted to fetch information from my computer and then display it in my application using the "os" module, but it doesn't work. A: require is not native to the browser; it is a part of Node.js. There is a good explanation in the answers to this post: how to use react require syntax? A: const mainWindow = new BrowserWindow({ width: 800, height: 600, webPreferences: { ... nodeIntegration: true ... } }) You can add 'nodeIntegration: true' to your window's webPreferences in main.js
When I link JS script with HTML I get error: ReferenceError: require is not defined
I have an application built with Electron. When I add a script from a JS file to an HTML file or enter a script between <script> tags and define a new module, I get an error: ReferenceError: require is not defined This is my HTML script: <script src="../scripts/computer.js"></script> This is the JavaScript file: const os = require("os") I wanted to fetch information from my computer and then display it in my application using the "os" module, but it doesn't work.
[ "require is not native for browser, it is a part of Node.js\nthere is a good explanation in the answer to this post\nhow to use react require syntax?\n", "const mainWindow = new BrowserWindow({\n width: 800,\n height: 600,\n webPreferences: {\n ...\n nodeIntegration: true\n ...\n })\n\nYou can add 'nodeIntegration: true' to your window attribute in main.js\n" ]
[ 0, 0 ]
[]
[]
[ "electron", "javascript", "node.js", "operating_system", "require" ]
stackoverflow_0074655823_electron_javascript_node.js_operating_system_require.txt
Q: Looking to create a graph based off the average of two columns in my dataset Ultimately, I am very new to data analysis and am in the middle of a project that is due very soon. Of the data here: [screenshot of the data] I would like to have the Station areas grouped up, and the Time_Diff averaged out for each area. There are 35000+ entries in this dataset, hence why I want to group it up into the totals so the graph will work. Such as: Tallaght: 13:46 Blanchardstown: 14:35 etc. I have attempted to graph them, but my results were only returning the total count of the time_diff column, which made the areas with more entries show higher counts. The Time_Diff column I made by converting the 'text' value times into datetime using pandas, then subtracting the IA from the TOC to retrieve the time difference. My dataset: https://data.gov.ie/dataset/fire-brigade-and-ambulance?package_type=dataset Brownie points if you can figure out how I can remove the 0 days entry from the output. I believe this was a result of me converting the 'text' to datetime. A: subset.groupby('Station Area')['Time_Diff'].mean()
Looking to create a graph based off the average of two columns in my dataset
Ultimately, I am very new to data analysis and am in the middle of a project that is due very soon. Of the data here: [screenshot of the data] I would like to have the Station areas grouped up, and the Time_Diff averaged out for each area. There are 35000+ entries in this dataset, hence why I want to group it up into the totals so the graph will work. Such as: Tallaght: 13:46 Blanchardstown: 14:35 etc. I have attempted to graph them, but my results were only returning the total count of the time_diff column, which made the areas with more entries show higher counts. The Time_Diff column I made by converting the 'text' value times into datetime using pandas, then subtracting the IA from the TOC to retrieve the time difference. My dataset: https://data.gov.ie/dataset/fire-brigade-and-ambulance?package_type=dataset Brownie points if you can figure out how I can remove the 0 days entry from the output. I believe this was a result of me converting the 'text' to datetime.
[ "subset.groupby('Station Area')['Time_Diff'].mean()\n" ]
[ 0 ]
[]
[]
[ "data_analysis", "dataset", "graph", "jupyter_notebook", "python" ]
stackoverflow_0074659331_data_analysis_dataset_graph_jupyter_notebook_python.txt
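A note on the accepted one-liner above: the sketch below is a minimal, self-contained version of it, using made-up station names and times since the real dataset isn't shown. The astype(str) trick at the end is one way to drop the "0 days " prefix that pandas prints for timedeltas, which is what the asker saw after subtracting two datetime columns.

import pandas as pd

# Hypothetical stand-in for the asker's data; column names follow the question.
subset = pd.DataFrame({
    "Station Area": ["Tallaght", "Tallaght", "Blanchardstown"],
    "Time_Diff": pd.to_timedelta(["00:13:00", "00:14:32", "00:14:35"]),
})

# Average response time per station (what the answer's one-liner computes).
avg = subset.groupby("Station Area")["Time_Diff"].mean()

# A timedelta renders as "0 days 00:13:46"; strip the prefix for display.
print(avg.astype(str).str.replace("0 days ", "", regex=False))

# For a bar chart, plot a numeric version, e.g. average minutes per station.
avg.dt.total_seconds().div(60).plot.bar()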
Q: How do I replace every NaN value in every column by minimum value of that column in pandas? I have a dataframe and I want to replace every NaN value in every column by min() of the column, how do I do that? A: To replace all NaN values in a dataframe with the minimum value of the respective column, you can use the pandas DataFrame.fillna() method in combination with the DataFrame.min() method. For example, suppose you have a dataframe df with the following values: col1 col2 0 NaN 1 1 NaN 3 2 5.0 2 3 6.0 NaN 4 NaN 4 To replace all NaN values with the minimum value of each column, you can use the following code: df.fillna(df.min()) This will return a new dataframe with the NaN values replaced by the minimum value of each column: col1 col2 0 5.0 1 1 5.0 3 2 5.0 2 3 6.0 1 4 5.0 4 Note that the fillna() method will only replace NaN values in the original dataframe. If you want to save the changes to the original dataframe, you can use the inplace parameter like this: df.fillna(df.min(), inplace=True) This will replace the NaN values in the original dataframe df and return None.
How do I replace every NaN value in every column by minimum value of that column in pandas?
I have a dataframe and I want to replace every NaN value in every column by min() of the column, how do I do that?
[ "To replace all NaN values in a dataframe with the minimum value of the respective column, you can use the pandas DataFrame.fillna() method in combination with the DataFrame.min() method.\nFor example, suppose you have a dataframe df with the following values:\n col1 col2\n0 NaN 1\n1 NaN 3\n2 5.0 2\n3 6.0 NaN\n4 NaN 4\n\n\nTo replace all NaN values with the minimum value of each column, you can use the following code:\ndf.fillna(df.min())\n\nThis will return a new dataframe with the NaN values replaced by the minimum value of each column:\n col1 col2\n0 5.0 1\n1 5.0 3\n2 5.0 2\n3 6.0 1\n4 5.0 4\n\n\nNote that the fillna() method will only replace NaN values in the original dataframe. If you want to save the changes to the original dataframe, you can use the inplace parameter like this:\ndf.fillna(df.min(), inplace=True)\n\nThis will replace the NaN values in the original dataframe df and return None.\n" ]
[ 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074660412_dataframe_pandas_python.txt
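To make the answer above directly runnable, here is a self-contained sketch reproducing its example frame; the per-column variant at the end is a hypothetical alternative for when only some columns should be filled.

import numpy as np
import pandas as pd

df = pd.DataFrame({
    "col1": [np.nan, np.nan, 5.0, 6.0, np.nan],
    "col2": [1, 3, 2, np.nan, 4],
})

# df.min() skips NaN by default, giving each column's minimum (5.0 and 1.0);
# fillna() aligns that Series on column labels, so every column is filled
# with its own minimum.
print(df.fillna(df.min()))

# Equivalent per-column spelling, useful to limit the fill to chosen columns:
df["col1"] = df["col1"].fillna(df["col1"].min())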
Q: How to concatenate a series to a pandas dataframe in python? I would like to iterate through a dataframe rows and concatenate that row to a different dataframe basically building up a different dataframe with some rows. For example: `IPCSection and IPCClass Dataframes allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): pdList = [finalpatentclasses, pd.DataFrame(secrow), pd.DataFrame(clrow)] finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True) display(finalpatentclasses) The output is: I want the nan values to dissapear and move all the data under the correct columns. I tried axis = 1 but messes up the column names. Append does not work as well all values are placed diagonally at the table with nan values as well. A: The problem with the current implementation is that pd.concat is being called with axis=0 and ignore_index=True, resulting in the values from secrow and clrow being concatenated vertically and the original indices being ignored. This causes the values to be misaligned with the columns of the final dataframe, as shown in the output. To solve this problem, you can create a new dataframe that has the same columns as the final dataframe, and then assign the values from secrow and clrow to the appropriate columns in the new dataframe. After that, you can append the new dataframe to the final dataframe using the pd.concat function with axis=0, as before. Here is a modified version of the code that should produce the desired output: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): # Create a new dataframe with the same columns as the final dataframe newrow = pd.DataFrame(columns=allcolumns) # Assign the values from secrow and clrow to the appropriate columns in the new dataframe newrow[IPCSection.columns] = secrow.values newrow[IPCClass.columns] = clrow.values # Append the new dataframe to the final dataframe finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0) display(finalpatentclasses) This should result in a final dataframe that has the values from secrow and clrow concatenated horizontally under the correct columns, with no nan values. 
UPDATED SCRIPT: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): print("Condition met") pdList = [finalpatentclasses, secrow.to_frame().transpose(), clrow.to_frame().transpose()] finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True) display(finalpatentclasses) Final Update (Efficient for larger datasets): allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0) finalpatentclasses_list = [] for secrow in IPCSection.itertuples(index=False): for clrow in IPCClass.itertuples(index=False): if secrow[0] in clrow[0]: row = list(secrow) + list(clrow) finalpatentclasses_list.append(row) finalpatentclasses = pd.DataFrame(finalpatentclasses_list, columns=allcolumns) display(finalpatentclasses) Note how secrow and clrow are now namedtuples instead of Series, and need to be converted to lists using the list() function before concatenating them with the + operator. Also, the index=False argument is passed to itertuples() to skip the index column in the output. A: Alright, I have figured it out. The idea is that you create a new row DataFrame and concatenate all the data into a list; from there you can add it to the dataframe and then concat with the final dataframe. Here is the code: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): newrow = pd.DataFrame(columns=allcolumns) values = np.concatenate((secrow.values, clrow.values), axis=0) newrow.loc[len(newrow.index)] = values finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0) finalpatentclasses.reset_index(drop=False, inplace=True) display(finalpatentclasses) Update: the code below is more efficient: allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns, IPCSubClass.columns, IPCGroup.columns), axis = 0) newList = [] for secrow in IPCSection.itertuples(): for clrow in IPCClass.itertuples(): if (secrow[1] in clrow[1]): values = ([secrow[1], secrow[2], clrow[1], clrow[2]]) new_row = {IPCSection.columns[0]: [secrow[1]], IPCSection.columns[1]: [secrow[2]], IPCClass.columns[0]: [clrow[1]], IPCClass.columns[1]: [clrow[2]]} newList.append(values) finalpatentclasses = pd.DataFrame(newList, columns=allcolumns) display(finalpatentclasses)
How to concatenate a series to a pandas dataframe in python?
I would like to iterate through a dataframe rows and concatenate that row to a different dataframe basically building up a different dataframe with some rows. For example: `IPCSection and IPCClass Dataframes allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0) finalpatentclasses = pd.DataFrame(columns=allcolumns) for isec, secrow in IPCSection.iterrows(): for icl, clrow in IPCClass.iterrows(): if (secrow[0] in clrow[0]): pdList = [finalpatentclasses, pd.DataFrame(secrow), pd.DataFrame(clrow)] finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True) display(finalpatentclasses) The output is: I want the nan values to dissapear and move all the data under the correct columns. I tried axis = 1 but messes up the column names. Append does not work as well all values are placed diagonally at the table with nan values as well.
[ "The problem with the current implementation is that pd.concat is being called with axis=0 and ignore_index=True, resulting in the values from secrow and clrow being concatenated vertically and the original indices being ignored. This causes the values to be misaligned with the columns of the final dataframe, as shown in the output.\nTo solve this problem, you can create a new dataframe that has the same columns as the final dataframe, and then assign the values from secrow and clrow to the appropriate columns in the new dataframe. After that, you can append the new dataframe to the final dataframe using the pd.concat function with axis=0, as before.\nHere is a modified version of the code that should produce the desired output:\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0)\nfinalpatentclasses = pd.DataFrame(columns=allcolumns)\nfor isec, secrow in IPCSection.iterrows():\n for icl, clrow in IPCClass.iterrows():\n if (secrow[0] in clrow[0]):\n # Create a new dataframe with the same columns as the final dataframe\n newrow = pd.DataFrame(columns=allcolumns)\n # Assign the values from secrow and clrow to the appropriate columns in the new dataframe\n newrow[IPCSection.columns] = secrow.values\n newrow[IPCClass.columns] = clrow.values\n # Append the new dataframe to the final dataframe\n finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0)\ndisplay(finalpatentclasses)\n\nThis should result in a final dataframe that has the values from secrow and clrow concatenated horizontally under the correct columns, with no nan values.\nUPDATED SCRIPT:\n allcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0)\nfinalpatentclasses = pd.DataFrame(columns=allcolumns)\nfor isec, secrow in IPCSection.iterrows():\n for icl, clrow in IPCClass.iterrows():\n if (secrow[0] in clrow[0]):\n print(\"Condition met\")\n pdList = [finalpatentclasses, secrow.to_frame().transpose(), clrow.to_frame().transpose()]\n finalpatentclasses = pd.concat(pdList, axis=0, ignore_index=True)\ndisplay(finalpatentclasses)\n\nFinal Update (Efficient for larger datasets):\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis=0)\nfinalpatentclasses_list = []\nfor secrow in IPCSection.itertuples(index=False):\n for clrow in IPCClass.itertuples(index=False):\n if secrow[0] in clrow[0]:\n row = list(secrow) + list(clrow)\n finalpatentclasses_list.append(row)\nfinalpatentclasses = pd.DataFrame(finalpatentclasses_list, columns=allcolumns)\ndisplay(finalpatentclasses)\n\nNote how secrow and clrow are now namedtuples instead of Series, and need to be converted to lists using the list() function before concatenating them with the + operator. Also, the index=False argument is passed to itertuples() to skip the index column in the output.\n", "Alright, I have figured it out. 
The idea is that you create a new row DataFrame and concatenate all the data into a list; from there you can add it to the dataframe and then concat with the final dataframe.\nHere is the code:\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns), axis = 0)\nfinalpatentclasses = pd.DataFrame(columns=allcolumns)\nfor isec, secrow in IPCSection.iterrows():\n for icl, clrow in IPCClass.iterrows():\n newrow = pd.DataFrame(columns=allcolumns)\n values = np.concatenate((secrow.values, clrow.values), axis=0)\n newrow.loc[len(newrow.index)] = values \n finalpatentclasses = pd.concat([finalpatentclasses, newrow], axis=0)\nfinalpatentclasses.reset_index(drop=False, inplace=True)\ndisplay(finalpatentclasses)\n\nUpdate: the code below is more efficient:\nallcolumns = np.concatenate((IPCSection.columns, IPCClass.columns, IPCSubClass.columns, IPCGroup.columns), axis = 0)\nnewList = []\nfor secrow in IPCSection.itertuples():\n for clrow in IPCClass.itertuples():\n if (secrow[1] in clrow[1]):\n values = ([secrow[1], secrow[2], clrow[1], clrow[2]])\n new_row = {IPCSection.columns[0]: [secrow[1]], IPCSection.columns[1]: [secrow[2]],\n IPCClass.columns[0]: [clrow[1]], IPCClass.columns[1]: [clrow[2]]}\n newList.append(values)\nfinalpatentclasses = pd.DataFrame(newList, columns=allcolumns)\ndisplay(finalpatentclasses)\n\n" ]
[ 0, 0 ]
[]
[]
[ "dataframe", "loops", "python" ]
stackoverflow_0074659968_dataframe_loops_python.txt
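An aside on the row-collection approaches above: since pandas 1.2 the same pairing can be written without explicit nested loops by building a cross join and filtering it. The frames and column names below are made up, as the question never shows the real IPCSection/IPCClass contents.

import pandas as pd

# Hypothetical stand-ins for the question's IPCSection and IPCClass frames.
IPCSection = pd.DataFrame({"SectionCode": ["A", "B"],
                           "SectionTitle": ["Human necessities", "Operations"]})
IPCClass = pd.DataFrame({"ClassCode": ["A01", "B25"],
                         "ClassTitle": ["Agriculture", "Hand tools"]})

# Cross join (every section paired with every class), pandas >= 1.2 ...
pairs = IPCSection.merge(IPCClass, how="cross")

# ... then keep only rows where the section code occurs in the class code,
# mirroring the `if secrow[0] in clrow[0]` test from the loops above.
keep = [sec in cls for sec, cls in zip(pairs["SectionCode"], pairs["ClassCode"])]
finalpatentclasses = pairs[keep].reset_index(drop=True)
print(finalpatentclasses)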
Q: How to add a year to today's date and default it to the preceding working day If I want to default a date to the next working day I am using the following: <?php echo(date('d/m/Y',strtotime('+1 Weekdays'))); ?> For example: If a user is adding an item on a Friday it is given a default of the following Monday - the next working day. I have to create a schedule of events with a start and end date. The end date needs to be 1 year in the future on the preceding working day. For example: If a user adds a schedule that has a start day of Wednesday and the same date in a year's time happens to be a Sunday, then the end date needs to default to the previous Friday - the preceding working day. A: I found the answer: <?php echo(date(date('d/m/Y',strtotime('+1 year')),strtotime('-1 Weekdays'))); ?> A: You just need to add one year to today's date, then check the day of the week; if it is 'Sat' or 'Sun', subtract one weekday. The PHP DateTime object makes this easy with the format() and modify() methods. $inOneYear = new DateTime('+1 year'); if(in_array($inOneYear->format('D'), ['Sat', 'Sun'])){ $inOneYear->modify('-1 weekday'); } echo $inOneYear->format('D, d/m/Y'); In all these cases: today (Thursday, Dec. 1st, 2022) tomorrow (Friday, Dec. 2nd, 2022) the next day (Saturday, Dec. 3rd, 2022) the above will output: Fri, 01/12/2023 A: The strtotime() function and the DateTime constructor both take stacking relative values, and will process them in order, so you can do things like: $when = strtotime('now +1 year -1 weekdays'); $when = new DateTime('now +1 year -1 weekdays');
How to add a year to today's date and default it to the preceding working day
If I want to default a date to the next working day I am using the following: <?php echo(date('d/m/Y',strtotime('+1 Weekdays'))); ?> For example: If a user is adding an item on a Friday it is given a default of the following Monday - the next working day. I have to create a schedule of events with a start and end date. The end date needs to be 1 year in the future on the preceding working day. For example: If a user adds a schedule that has a start day of Wednesday and the same date in a year's time happens to be a Sunday, then the end date needs to default to the previous Friday - the preceding working day.
[ "I found the answer:\n<?php\n echo(date(date('d/m/Y',strtotime('+1 year')),strtotime('-1 Weekdays')));\n?>\n\n", "You just need to add one year to today's date then check the day of the week, if it is 'Sat' or 'Sun' subtract one weekday. The PHP DateTime object makes this easy with the format() and modify() methods.\n$inOneYear = new DateTime('+1 year');\nif(in_array($inOneYear->format('D'), ['Sat', 'Sun'])){\n $inOneYear->modify('-1 weekday');\n}\necho $inOneYear->format('D, d/m/Y');\n\nIn all these cases:\n\ntoday (Thursday, Dec. 1st, 2022)\ntomorrow (Friday, Dec. 2nd, 2022)\nthe next day (Saturday, Dec. 3rd, 2022)\n\nthe above will output:\nFri, 01/12/2023\n\n", "The strtotime() function and the DateTime constructor both take stacking relative values, and will process them in order, so you can do things like:\n$when = strtotime('now +1 year -1 weekdays');\n$when = new DateTime('now +1 year -1 weekdays');\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "php" ]
stackoverflow_0074646122_php.txt
Q: Error: type(Nonetype) has no len() attribute I was trying to solve a problem on leetcode but I keep incurring in an error that I don’t understand class Solution(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: longest = max(curr, longest, key=len) curr = curr[curr.index(s[i])+1:].append(s[i]) else: curr.append(s[i]) char.add(s[i]) return max(curr, longest, key=len) This is the code. The error refers to the fact that when i call the function max() one between curr or longest has no attribute len(). Aren’t both lists? I looked up the solve but it uses a slightly different method. A: As already mentioned in the comment problem When you set longest = max... it ceases to be a list by @mark-ransom already. I will purpose different way of solving this # lengthOfLongestSubstring # can be optimized to O(n) using sliding window technique class Solution2(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ if len(s) == 0: return 0 if len(s) == 1: return 1 max_len = 0 for i in range(len(s)): for j in range(i+1, len(s)+1): if len(set(s[i:j])) == len(s[i:j]): max_len = max(max_len, len(s[i:j])) else: break return max_len class SolutionWithProblem(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: longest = max(curr, longest, key=len) # problem here curr = curr[curr.index(s[i])+1:].append(s[i]) else: curr.append(s[i]) char.add(s[i]) return max(curr, longest, key=len) # call the function s = Solution2() print(s.lengthOfLongestSubstring("abcabcbb")) print(s.lengthOfLongestSubstring("bbbbb")) print(s.lengthOfLongestSubstring("pwwkew")) print(s.lengthOfLongestSubstring(" ")) print(s.lengthOfLongestSubstring("dvdf")) print(s.lengthOfLongestSubstring("anviaj")) print(s.lengthOfLongestSubstring("abba")) This might not be the best solution. Update to your solution class Solution3(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: if len(curr) > len(longest): longest = curr curr = [] char = set() char.add(s[i]) curr.append(s[i]) if len(curr) > len(longest): longest = curr return len(longest)
Error: type(Nonetype) has no len() attribute
I was trying to solve a problem on leetcode but I keep running into an error that I don’t understand class Solution(object): def lengthOfLongestSubstring(self, s): """ :type s: str :rtype: int """ char = set() longest = [] curr = [] for i in range(len(s)): if s[i] in char: longest = max(curr, longest, key=len) curr = curr[curr.index(s[i])+1:].append(s[i]) else: curr.append(s[i]) char.add(s[i]) return max(curr, longest, key=len) This is the code. The error refers to the fact that when I call the function max(), one of curr or longest has no len(). Aren't they both lists? I looked up the solution, but it uses a slightly different method.
[ "As already mentioned in the comment problem When you set longest = max... it ceases to be a list by @mark-ransom already. I will purpose different way of solving this\n# lengthOfLongestSubstring\n# can be optimized to O(n) using sliding window technique\nclass Solution2(object):\n def lengthOfLongestSubstring(self, s):\n \"\"\"\n :type s: str\n :rtype: int\n \"\"\"\n if len(s) == 0:\n return 0\n if len(s) == 1:\n return 1\n max_len = 0\n for i in range(len(s)):\n for j in range(i+1, len(s)+1):\n if len(set(s[i:j])) == len(s[i:j]):\n max_len = max(max_len, len(s[i:j]))\n else:\n break\n return max_len\n\n\nclass SolutionWithProblem(object):\n def lengthOfLongestSubstring(self, s):\n \"\"\"\n :type s: str\n :rtype: int\n \"\"\"\n char = set()\n longest = []\n curr = []\n for i in range(len(s)):\n if s[i] in char:\n longest = max(curr, longest, key=len) # problem here\n curr = curr[curr.index(s[i])+1:].append(s[i])\n else:\n curr.append(s[i])\n char.add(s[i])\n return max(curr, longest, key=len)\n\n# call the function\ns = Solution2()\nprint(s.lengthOfLongestSubstring(\"abcabcbb\"))\nprint(s.lengthOfLongestSubstring(\"bbbbb\"))\nprint(s.lengthOfLongestSubstring(\"pwwkew\"))\nprint(s.lengthOfLongestSubstring(\" \"))\nprint(s.lengthOfLongestSubstring(\"dvdf\"))\nprint(s.lengthOfLongestSubstring(\"anviaj\"))\nprint(s.lengthOfLongestSubstring(\"abba\"))\n\nThis might not be the best solution. Update to your solution\nclass Solution3(object):\n def lengthOfLongestSubstring(self, s):\n \"\"\"\n :type s: str\n :rtype: int\n \"\"\"\n char = set()\n longest = []\n curr = []\n for i in range(len(s)):\n if s[i] in char:\n if len(curr) > len(longest):\n longest = curr\n curr = []\n char = set()\n char.add(s[i])\n curr.append(s[i])\n if len(curr) > len(longest):\n longest = curr\n return len(longest)\n\n" ]
[ 0 ]
[]
[]
[ "algorithm", "python", "python_3.x" ]
stackoverflow_0074660122_algorithm_python_python_3.x.txt
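The comment at the top of Solution2 above mentions an O(n) sliding-window version without showing it; here is one hedged sketch of that idea, independent of the answer's code, with the thread's own test inputs used as assertions.

def length_of_longest_substring(s: str) -> int:
    # Sliding window: `start` is the left edge of a window with no repeats;
    # `last_seen` maps each character to the index of its latest occurrence.
    last_seen = {}
    start = 0
    best = 0
    for i, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= start:
            # Repeat inside the current window: move the left edge past it.
            start = last_seen[ch] + 1
        last_seen[ch] = i
        best = max(best, i - start + 1)
    return best

assert length_of_longest_substring("abcabcbb") == 3  # "abc"
assert length_of_longest_substring("bbbbb") == 1     # "b"
assert length_of_longest_substring("pwwkew") == 3    # "wke"
assert length_of_longest_substring("abba") == 2      # "ab" / "ba"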
Q: How to override ASP.NET Core configuration array settings using environment variables TL;DR In an ASP.NET Core app, I have an appsettings.json config file which uses a JSON array to configure a collection of settings. How do I override a setting of one of the array objects using environment variables? Background I'm using Serilog in an ASP.NET Core application and using the Serilog.Settings.Configuration, to allow it to be configured using appsettings.json. The configuration is like this: { "Serilog": { "Using": ["Serilog.Sinks.Literate"], "MinimumLevel": "Debug", "WriteTo": [ { "Name": "File", "Args": { "path": "%TEMP%\\Logs\\serilog-configuration-sample.txt" } } ], "Enrich": ["FromLogContext", "WithMachineName", "WithThreadId"], "Properties": { "Application": "Sample" } } } When deployed, I want to override some of the settings, e.g. the MinimumLevel, and the path to the log file. My preferred option is to do this via environment variables as I'm deploying to an Azure App Service, so I'll use the App settings through the Azure management portal (these are realised as environment variables). I can easily set the MinimumLevel by adding an environment variable with the name: Serilog:MinimumLevel and the application name Serilog:Properties:Application. What is the format for specifying a setting with an array? A: After looking at the configuration in the debugger I found the answer. Serilog__WriteTo__0__Args__path (All platforms) Serilog:WriteTo:0:Args:path (Windows) Serilog--WriteTo--0--Args--path (sourced From Azure Key Vault) Note: The Configuration in ASP.NET Core documentation now covers this. So I need to use the array index (zero-based) as if it were a name. Here is the screenshot of the debugger, also (thanks to Victor Hurdugaci in the comments), the unit tests are a good place to look for examples. A: I know this is an old thread, but the most relevant one I could find for my question: My appsettings file has a setting like this: { "Settings": { "UserList": [ "devuser1", "devuser2" ] } } I want to put this into a k8s secret(because the list between environments changes), so I set the key-value pair as follows: Settings__UserList: '[ "devuser1", "devuser2" ]' But that doesn't seem to be working. I just realized I have yet to try setting it without the brackets: Settings__UserList: "devuser1", "devuser2" If that doesn't work, am I going to have to setup my secret like this to really get it to work? Settings__UserList__0: "devuser1" Settings__UserList__1: "devuser2" I didn't see anything in the documentation that covered this type of array value.
How to override ASP.NET Core configuration array settings using environment variables
TL;DR In an ASP.NET Core app, I have an appsettings.json config file which uses a JSON array to configure a collection of settings. How do I override a setting of one of the array objects using environment variables? Background I'm using Serilog in an ASP.NET Core application and using the Serilog.Settings.Configuration, to allow it to be configured using appsettings.json. The configuration is like this: { "Serilog": { "Using": ["Serilog.Sinks.Literate"], "MinimumLevel": "Debug", "WriteTo": [ { "Name": "File", "Args": { "path": "%TEMP%\\Logs\\serilog-configuration-sample.txt" } } ], "Enrich": ["FromLogContext", "WithMachineName", "WithThreadId"], "Properties": { "Application": "Sample" } } } When deployed, I want to override some of the settings, e.g. the MinimumLevel, and the path to the log file. My preferred option is to do this via environment variables as I'm deploying to an Azure App Service, so I'll use the App settings through the Azure management portal (these are realised as environment variables). I can easily set the MinimumLevel by adding an environment variable with the name: Serilog:MinimumLevel and the application name Serilog:Properties:Application. What is the format for specifying a setting with an array?
[ "After looking at the configuration in the debugger I found the answer.\n\nSerilog__WriteTo__0__Args__path (All platforms)\nSerilog:WriteTo:0:Args:path (Windows)\nSerilog--WriteTo--0--Args--path (sourced From Azure Key Vault)\n\n\nNote: The Configuration in ASP.NET Core documentation now covers this.\n\nSo I need to use the array index (zero-based) as if it were a name.\nHere is the screenshot of the debugger, also (thanks to Victor Hurdugaci in the comments), the unit tests are a good place to look for examples.\n\n", "I know this is an old thread, but the most relevant one I could find for my question:\nMy appsettings file has a setting like this:\n{\n \"Settings\": {\n \"UserList\": [ \"devuser1\", \"devuser2\" ]\n }\n}\n\nI want to put this into a k8s secret(because the list between environments changes), so I set the key-value pair as follows:\n Settings__UserList: '[ \"devuser1\", \"devuser2\" ]'\nBut that doesn't seem to be working. I just realized I have yet to try setting it without the brackets:\n Settings__UserList: \"devuser1\", \"devuser2\"\nIf that doesn't work, am I going to have to setup my secret like this to really get it to work?\nSettings__UserList__0: \"devuser1\"\nSettings__UserList__1: \"devuser2\"\n\nI didn't see anything in the documentation that covered this type of array value.\n" ]
[ 138, 0 ]
[]
[]
[ ".net_core", "asp.net_core", "configuration", "serilog" ]
stackoverflow_0037657320_.net_core_asp.net_core_configuration_serilog.txt
Q: How can I change parameterized type in collection? I would like to use collection in one method in my UserServiceImpl to return the list of all users which will be parameterized by UserDTO. I have the following method: @Override public List<UserEntity> getUsers() { var usersList = userRepository.findAll(); return usersList; } But I want to change it to public List<UserDTO> getUsers()... I have map method from entity to dto and vice versa: public UserDTO mapToUserDTO(UserEntity userEntity) { var userDto = new UserDTO(); var rolesEntity = userEntity.getRoles().stream() .map(RoleEntity::getId) .map(String::valueOf) .collect(Collectors.toList()); userDto.setId(userEntity.getId()); userDto.setUsername(userEntity.getUsername()); userDto.setName(userEntity.getName()); userDto.setSurname(userEntity.getSurname()); userDto.setEmail(userEntity.getEmail()); userDto.setAge(userEntity.getAge()); userDto.setRoles(rolesEntity); return userDto; } But in this case it cannot be applied. Could you help me out please - how can I change parametrized type from UserEntity to UserDTO in my method? A: Create a stream over the list of user entities, transform each entity into a DTO via map() operation and the method you've shared and collect the result into a list. public List<UserDTO> getUsers() { return userRepository.findAll().stream() .map(this::mapToUserDTO) // or ClassName::mapToUserDTO .collect(Collectors.toList()); // or .toList() for Java 16+ }
How can I change parameterized type in collection?
I would like to use collection in one method in my UserServiceImpl to return the list of all users which will be parameterized by UserDTO. I have the following method: @Override public List<UserEntity> getUsers() { var usersList = userRepository.findAll(); return usersList; } But I want to change it to public List<UserDTO> getUsers()... I have map method from entity to dto and vice versa: public UserDTO mapToUserDTO(UserEntity userEntity) { var userDto = new UserDTO(); var rolesEntity = userEntity.getRoles().stream() .map(RoleEntity::getId) .map(String::valueOf) .collect(Collectors.toList()); userDto.setId(userEntity.getId()); userDto.setUsername(userEntity.getUsername()); userDto.setName(userEntity.getName()); userDto.setSurname(userEntity.getSurname()); userDto.setEmail(userEntity.getEmail()); userDto.setAge(userEntity.getAge()); userDto.setRoles(rolesEntity); return userDto; } But in this case it cannot be applied. Could you help me out please - how can I change parametrized type from UserEntity to UserDTO in my method?
[ "Create a stream over the list of user entities, transform each entity into a DTO via map() operation and the method you've shared and collect the result into a list.\npublic List<UserDTO> getUsers() {\n \n return userRepository.findAll().stream()\n .map(this::mapToUserDTO) // or ClassName::mapToUserDTO\n .collect(Collectors.toList()); // or .toList() for Java 16+\n}\n\n" ]
[ 1 ]
[]
[]
[ "collections", "java", "rest", "stream" ]
stackoverflow_0074660395_collections_java_rest_stream.txt
Q: Is there any way to set a recursive environment variable in Windows? I have an apps directory in my dropbox - I'd like to be able to access all of them from the command line without having to set up loads and loads of path variables. Is there any way to set up a recursive path variable? I tried putting ** at the end - no joy. A: You can't use placeholders or anything like that in the PATH environment variable. It's just a concatenation of directories, no additional features. So either add all of the app directories to the PATHenvironment variable or think about other ways to solve the problem. For example, you could add one directory to the PATH and place batch files named like the apps there that start the apps. A: Made an account for this 11 year old question. $path = Read-Host -Prompt "Enter the exact path that needs to be recursively added to the PATH env:" $items = gci -Path $path -Recurse -Directory -name $nuPath = $env:Path $r = 0 write-Host "Env started as $nuPath" foreach ($iitem in $items){ $addpath = ($path + "\" + $iitem) $executabledir = $addpath + '\' + "*.exe" if(test-path $executabledir){ Write-Host $addpath $regexAddPath = [regex]::Escape($addPath) $arrPath = $nuPath -split ';' | Where-Object {$_ -notMatch "^$regexAddPath\\?"} $nuPath = ($arrPath + $addPath) -join ';' ++$r } } $result = ($path + ";" + $nupath) -join ';' $temp = New-TemporaryFile $result.ToString() > $temp Start-Process notepad.exe -ArgumentList $temp.FullName $title = 'WARNING' $question = "Your new environmental variable for PATH will be in the notepad window that popped up. are you sure you want to continue?" $choices = '&Yes', '&No' $decision = $Host.UI.PromptForChoice($title, $question, $choices, 1) if ($decision -eq 0 -and $r -gt 5) { $title = 'Are you really sure?' $question = 'This is larger than 5 entries and this can ruin your day if you mess it up. Just doublechecking everything is OK' $choices = '&Yes', '&No' $decision = $Host.UI.PromptForChoice($title, $question, $choices, 1) if ($decision -eq 0) { $env:Path > $HOME\pathbkup.txt [Environment]::SetEnvironmentVariable("Path", $result, "Machine") } else { Write-Host 'cancelled' } } else { Write-Host 'cancelled' } Remove-Item $temp
Is there any way to set a recursive environment variable in Windows?
I have an apps directory in my dropbox - I'd like to be able to access all of them from the command line without having to set up loads and loads of path variables. Is there any way to set up a recursive path variable? I tried putting ** at the end - no joy.
[ "You can't use placeholders or anything like that in the PATH environment variable. It's just a concatenation of directories, no additional features.\nSo either add all of the app directories to the PATHenvironment variable or think about other ways to solve the problem. For example, you could add one directory to the PATH and place batch files named like the apps there that start the apps.\n", "Made an account for this 11 year old question.\n $path = Read-Host -Prompt \"Enter the exact path that needs to be recursively added to the PATH env:\"\n $items = gci -Path $path -Recurse -Directory -name\n $nuPath = $env:Path\n $r = 0\n write-Host \"Env started as $nuPath\"\n foreach ($iitem in $items){\n $addpath = ($path + \"\\\" + $iitem)\n $executabledir = $addpath + '\\' + \"*.exe\"\n if(test-path $executabledir){\n Write-Host $addpath\n $regexAddPath = [regex]::Escape($addPath)\n $arrPath = $nuPath -split ';' | Where-Object {$_ -notMatch \"^$regexAddPath\\\\?\"}\n $nuPath = ($arrPath + $addPath) -join ';'\n ++$r\n }\n }\n $result = ($path + \";\" + $nupath) -join ';'\n $temp = New-TemporaryFile\n $result.ToString() > $temp\n Start-Process notepad.exe -ArgumentList $temp.FullName\n \n $title = 'WARNING'\n $question = \"Your new environmental variable for PATH will be in the notepad window that popped up. are you sure you want to continue?\"\n $choices = '&Yes', '&No'\n \n $decision = $Host.UI.PromptForChoice($title, $question, $choices, 1)\n if ($decision -eq 0 -and $r -gt 5) {\n $title = 'Are you really sure?'\n $question = 'This is larger than 5 entries and this can ruin your day if you mess it up. Just doublechecking everything is OK'\n $choices = '&Yes', '&No'\n \n $decision = $Host.UI.PromptForChoice($title, $question, $choices, 1)\n if ($decision -eq 0) {\n $env:Path > $HOME\\pathbkup.txt\n [Environment]::SetEnvironmentVariable(\"Path\", $result, \"Machine\")\n }\n else {\n Write-Host 'cancelled'\n }\n }\n else {\n Write-Host 'cancelled'\n }\n Remove-Item $temp\n\n" ]
[ 5, 0 ]
[]
[]
[ "environment_variables", "windows" ]
stackoverflow_0005118465_environment_variables_windows.txt
Q: How to set input time limit for user in game? I was wondering how I can make a program with input of MAXIMUM 5 seconds(e.g he can send input after 2 seconds) in python I decided to do a SIMPLE game where you basically have to rewrite a word below 5 seconds. I know how to create input and make it wait EXACTLY 5 SECONDS, but what I want to achieve is to set maximum time of input to 5 seconds so if a user types an answer in let's say 2 seconds he will go the next word. Could you tell me the way to achieve my goal. Thanks in advance! for word in ["banana","earth","turtle","manchester","coctail","chicken"]: # User gets maximum of 5 seconds to write the word, # if he does it before 5 seconds pass ,he goes to next word (does not have to wait exactly 5 seconds, he # can send input in e.g 2 seconds) # if he does not do it in 5 seconds he loses game and it is finished user_input = input(f"Type word '{word}': ") #IF the word is correct go to next iteration if(user_input==word): continue #If the word is incorrect finish the game else: print("You lost") break I tried to do it with threading.Timer() but it doesn't work import threading class NoTime(Exception): pass def count_time(): raise NoTime for word in ["banana","earth","turtle","manchester","coctail","chicken"]: try: #Create timer which raises exception after 5 seconds timer = threading.Timer(5,count_time) timer.start() user_input = input(f"Type word '{word}': ") #if timer hasn't lasted 5 seconds then destroy it in order to prevent unwanted exception timer.cancel() if user_input==word: print("Correct") else: print("Incorrect, you LOSE!") break except NoTime: print("You run out of time, you lose") break The error i get Traceback (most recent call last): File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1394, in run self.function(*self.args, **self.kwargs) File "C:\Users\papit\OneDrive\Pulpit\Programming\Python Bro Course\Math\second\threading_training.py", line 7, in count_time raise NoTime NoTime
How to set input time limit for user in game?
I was wondering how I can make a program with input of MAXIMUM 5 seconds(e.g he can send input after 2 seconds) in python I decided to do a SIMPLE game where you basically have to rewrite a word below 5 seconds. I know how to create input and make it wait EXACTLY 5 SECONDS, but what I want to achieve is to set maximum time of input to 5 seconds so if a user types an answer in let's say 2 seconds he will go the next word. Could you tell me the way to achieve my goal. Thanks in advance! for word in ["banana","earth","turtle","manchester","coctail","chicken"]: # User gets maximum of 5 seconds to write the word, # if he does it before 5 seconds pass ,he goes to next word (does not have to wait exactly 5 seconds, he # can send input in e.g 2 seconds) # if he does not do it in 5 seconds he loses game and it is finished user_input = input(f"Type word '{word}': ") #IF the word is correct go to next iteration if(user_input==word): continue #If the word is incorrect finish the game else: print("You lost") break I tried to do it with threading.Timer() but it doesn't work import threading class NoTime(Exception): pass def count_time(): raise NoTime for word in ["banana","earth","turtle","manchester","coctail","chicken"]: try: #Create timer which raises exception after 5 seconds timer = threading.Timer(5,count_time) timer.start() user_input = input(f"Type word '{word}': ") #if timer hasn't lasted 5 seconds then destroy it in order to prevent unwanted exception timer.cancel() if user_input==word: print("Correct") else: print("Incorrect, you LOSE!") break except NoTime: print("You run out of time, you lose") break The error i get Traceback (most recent call last): File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner self.run() File "C:\Users\papit\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1394, in run self.function(*self.args, **self.kwargs) File "C:\Users\papit\OneDrive\Pulpit\Programming\Python Bro Course\Math\second\threading_training.py", line 7, in count_time raise NoTime NoTime
[]
[]
[ "=======\nTo create a program with a maximum input time of 5 seconds in Python, you can use the time and select modules to implement a timeout for the input operation. The time module provides functions for working with time, such as measuring elapsed time, and the select module provides functions for waiting for input from multiple sources, including a timeout.\nTo create a program with a maximum input time of 5 seconds, you can use the following steps:\nImport the time and select modules at the beginning of your program:\nimport time\nimport select\nUse the time.time() function to get the current time at the start of the input operation:\nstart_time = time.time()\nUse the select.select() function to wait for input from the user, with a timeout of 5 seconds:\ntimeout = 5 # Set the timeout to 5 seconds\ninput_ready, _, _ = select.select([sys.stdin], [], [], timeout)\nIf the input_ready variable is not empty, which indicates that input was received from the user within the timeout, read the input from the user using the input() function:\nif input_ready:\nuser_input = input()\nIf the input_ready variable is empty, which indicates that the timeout expired without receiving input from the user, handle the timeout by either displaying an error message or taking another action, as appropriate for your program:\nelse:\n# Handle the timeout, e.g. by displaying an error message or taking another action\nUse the time.time() function to get the current time at the end of the input operation, and calculate the elapsed time by subtracting the start time from the end time:\nend_time = time.time()\nelapsed_time = end_time - start_time\nUse the elapsed time to determine whether the user's input was within the maximum time limit, and take appropriate action, such as displaying the input or moving on to the next word in your game:\nif elapsed_time <= timeout:\n# The user's input was within the time limit, so display it or take another action\nelse:\n# The user's input was not within the time limit, so handle the timeout\nOverall, to create a program with a maximum input time of 5 seconds in Python, you can use the time and select modules to implement a timeout for the input operation, and to handle the timeout if the user does not provide input within the maximum time limit. This allows you to ensure that the user's input is received within the specified time limit, and to take appropriate action based on the elapsed time.\n", "import threading\n\ndef lost():\n print(\"You run out of time, you lose\")\n\n\nfor word in [\"banana\", \"earth\", \"turtle\", \"manchester\", \"coctail\", \"chicken\"]:\n\n timer = threading.Timer(5, lost)\n timer.start()\n\n user_input = input(f\"Type word '{word}': \")\n timer.cancel()\n\n if user_input == word:\n print(\"Correct\")\n else:\n print(\"Incorrect, you LOSE!\")\n\n" ]
[ -1, -1 ]
[ "python", "python_3.x" ]
stackoverflow_0074660327_python_python_3.x.txt
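For the record above: the traceback happens because an exception raised in a threading.Timer callback propagates only inside that worker thread; it never reaches the main thread blocked in input(), so it cannot act as a timeout. A workaround that stays in the standard library is to run input() in a helper thread and wait on it with a deadline, as in this minimal sketch (one known limitation is noted in the comments):

import threading

def timed_input(prompt, timeout):
    """Return the user's line, or None if `timeout` seconds pass first."""
    result = []
    reader = threading.Thread(target=lambda: result.append(input(prompt)),
                              daemon=True)
    reader.start()
    reader.join(timeout)
    # Caveat: on timeout the daemon thread is still blocked on stdin, so a
    # line typed later may be swallowed by it; fine for a lose-on-timeout game.
    return result[0] if result else None

for word in ["banana", "earth", "turtle"]:
    answer = timed_input(f"Type word '{word}': ", 5)
    if answer is None:
        print("You ran out of time, you lose")
        break
    if answer != word:
        print("Incorrect, you LOSE!")
        break
    print("Correct")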
Q: Show Filtered Stock Item in a certain Branch How do I show filtered non-stock/stock items in a certain branch in #acumatica? I have tried to limit it by restricting the branches in Users Management, Manage Restriction Group in Items, but it still didn't work. I need advice on how to show filtered non-stock/stock items in a certain transactional branch (Request, PO). A: If you are looking to force purchase orders only for the items in that user's branch, I would use the Create Purchase Orders screen (PO505000). Make a graph extension that will default the WarehouseID to the default warehouse that is in their profile. Then you would lock down the field with user security for users who need it locked. I would also lock down the default warehouse on the user profile, or else the employees could change it themselves and access another branch. The same could be done for the Prepare Replenishment screen (IN508000).
Show Filtered Stock Item in a certain Branch
How do I show filtered non-stock/stock items in a certain branch in #acumatica? I have tried to limit it by restricting the branches in Users Management, Manage Restriction Group in Items, but it still didn't work. I need advice on how to show filtered non-stock/stock items in a certain transactional branch (Request, PO).
[ "If you are looking to force purchase orders only for the items in that users branch, I would use the Create Purchase Orders screen (PO505000)\nMake a graph extension that will default the WarehouseID to the default warehouse that is in their profile\nThen you would lock down the field with user security for users who need it locked. I would also lock down the default warehouse on the user profile, or else the employees could change it themselves and access another branch.\nThe same could be done for the Prepare Replenishment screen (IN508000)\n" ]
[ 0 ]
[]
[]
[ "acumatica", "erp" ]
stackoverflow_0074370962_acumatica_erp.txt
Q: Is it possible to clear all error boundaries in React? I have an application with multiple components, each wrapped in an error boundary. Sometimes, their child components throw errors which bubble up to the error boundary. Here's a contrived example: https://codesandbox.io/s/throwing-many-errors-yirbgc Each "Failed to load" is displaying an error boundary. I want to be able to click "refresh" to reset all of them. Is it possible? A: This solution is very crude and far from perfect, but you could try something along the lines of remembering the resets of the errors given by the error boundary props const errorResets = useRef<Function[]>([]); <button onClick={() => { // I want to reset all my error boundaries alert(`resetting ${errorResets.current.length} errors`); errorResets.current.forEach((reset) => { console.log(reset); reset(); }); errorResets.current = []; }} > Refresh </button> <ErrorBoundary FallbackComponent={({ error, resetErrorBoundary }) => { errorResets.current = [...errorResets.current, resetErrorBoundary]; return <h3>Failed to load</h3>; }} > <MyRow /> </ErrorBoundary>
Is it possible to clear all error boundaries in React?
I have an application with multiple components, each wrapped in an error boundary. Sometimes, their child components throw errors which bubble up to the error boundary. Here's a contrived example: https://codesandbox.io/s/throwing-many-errors-yirbgc Each "Failed to load" is displaying an error boundary. I want to be able to click "refresh" to reset all of them. Is it possible?
[ "This solution is very crude and far from perfect but you could try something in the lines of remembering the resets of the errors given by the error boundary props\nconst errorResets = useRef<Function[]>([]);\n<button\n onClick={() => {\n // I want to reset all my error boundaries\n alert(`resetting ${errorResets.current.length} errors`);\n errorResets.current.forEach((reset) => {\n console.log(reset);\n reset();\n });\n errorResets.current = [];\n }}\n >\n Refresh\n</button>\n\n<ErrorBoundary\n FallbackComponent={({ error, resetErrorBoundary }) => {\n errorResets.current = [...errorResets.current, resetErrorBoundary];\n return <h3>Failed to load</h3>;\n }}\n >\n <MyRow />\n</ErrorBoundary>\n\n" ]
[ 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074660293_reactjs.txt
Q: Is there a way to append the output of a batch file into a second batch file? I am trying to find a way to automate the connection of cellular profiles. Currently I extract the profile names with this command: netsh mbn show profiles Which gives me this output: Profiles on interface Cellular: ------------------------------------- {4DXXXX81-XXX-XXXF-905B-438XXXXXXX94} {5XXXXX70-XXX-XX4D-XXX-EXXXXXXXXXXXE} Then I copy and paste these profiles into this batch file below, and run it. As there are 2 possible profiles, I use the Or command to pick either one. netsh mbn connect interface="Cellular" connmode=name name="{4DXXXX81-XXX-XXXF-905B-438XXXXXXX94}" || netsh mbn connect interface="Cellular" connmode=name name="{5XXXXX70-XXX-XX4D-XXX-EXXXXXXXXXXXE}" The reason for this is that I then set up a scheduled task to run this batch file every 60 minutes, to reconnect the cellular profile in case of a loss of signal at any point. As there are sometimes multiple devices that need this done, it can become time consuming. So is there a way to integrate the output of the first batch file into the second one automatically? I am fairly new to this stuff, and am not a programmer, so I have not even been able to come up with the right question to type into Google to find a possible answer. Thanks A: @ECHO OFF SETLOCAL SET "profile1=" SET "profile2=" FOR /f "SKIP=2 tokens=*" %%b IN ('netsh mbn show profiles') DO ( IF DEFINED profile1 (SET "profile2=%%b") ELSE (SET "profile1=%%b") ) netsh mbn connect interface="Cellular" connmode=name name="%profile1%" || netsh mbn connect interface="Cellular" connmode=name name="%profile2%" GOTO :EOF A simple batch file. Cut-and-paste to a file named, say, reconnect.bat. Running reconnect from the prompt should do the required task. Note: I cannot test this completely. Use caution.
Is there a way to append the output of a batch file into a second batch file?
I am trying to find a way to automate the connection of cellular profiles. Currently I extract the profile names with this command: netsh mbn show profiles Which gives me this output: Profiles on interface Cellular: ------------------------------------- {4DXXXX81-XXX-XXXF-905B-438XXXXXXX94} {5XXXXX70-XXX-XX4D-XXX-EXXXXXXXXXXXE} Then I copy and paste these profiles into this batch file below, and run it. As there are 2 possible profiles, I use the Or command to pick either one. netsh mbn connect interface="Cellular" connmode=name name="{4DXXXX81-XXX-XXXF-905B-438XXXXXXX94}" || netsh mbn connect interface="Cellular" connmode=name name="{5XXXXX70-XXX-XX4D-XXX-EXXXXXXXXXXXE}" The reason for this is that I then set up a scheduled task to run this batch file every 60 minutes, to reconnect the cellular profile in case of a loss of signal at any point. As there are sometimes multiple devices that need this done, it can become time consuming. So is there a way to integrate the output of the first batch file into the second one automatically? I am fairly new to this stuff, and am not a programmer, so I have not even been able to come up with the right question to type into Google to find a possible answer. Thanks
[ "@ECHO OFF\nSETLOCAL\n\nSET \"profile1=\"\nSET \"profile2=\"\n\nFOR /f \"SKIP=2tokens=*\" %%b IN ('netsh mbn show profiles') DO (\n IF DEFINED profile1 (SET \"profile2=%%b\") ELSE (SET \"profile1=%%b\")\n)\n\nnetsh mbn connect interface=\"Cellular\" connmode=name name=\"%profile1%\" || netsh mbn connect interface=\"Cellular\" connmode=name name=\"%profile2%\"\n\nGOTO :EOF\n\nA simple batch file. Cut-and-paste to a file named say, reconnect.bat. Running reconnect from the prompt should do the required task.\nNote : I cannot test this completely. Use caution.\n" ]
[ 0 ]
[]
[]
[ "batch_file", "cmd" ]
stackoverflow_0074656010_batch_file_cmd.txt
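A note on the accepted approach above: since the number of profiles can vary per device, the two-variable capture can be replaced by attempting each profile as it is read. This is a minimal, untested sketch; it assumes netsh mbn connect sets a nonzero exit code when the connection fails, so the && part only fires on success:
@ECHO OFF
SETLOCAL
REM Try every profile listed for the interface until one connects.
FOR /f "skip=2 tokens=*" %%b IN ('netsh mbn show profiles') DO (
    netsh mbn connect interface="Cellular" connmode=name name="%%b" && GOTO :EOF
)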
Q: array_slice throws undefined constant error I am getting an array of ids from db, shuffling them and returning the first few elements: $stmt = $this->pdo->query("SELECT `id` FROM `IDS_TABLE` WHERE `active` = 1 LIMIT 10"); $rows = $stmt->fetchAll(); if($rows){ $pageIdsArray = array_map(function($page){return $page["id"];}, $rows); shuffle($pageIdsArray); error_log(json_encode($pageIdsArray)); // looks ok $result = array_slice($pageIdsArray, 0, 3); // error here return $result; } To my surprise, this code throws: PHP Fatal error: Uncaught ErrorException: Use of undefined constant \xc2\xa00 - assumed '\xc2\xa00' (this will throw an Error in a future version of PHP) What am I doing wrong? A: There are two non-breaking space characters (u+00A0) before the 0 and the 3 in this line: $result = array_slice($pageIdsArray, 0, 3); // error here I'm guessing you may have copied this from a web page which inserted those characters for formatting. Just retype it with regular spaces: $result = array_slice($pageIdsArray, 0, 3); For future reference, some IDEs/editors like VS Code can be configured to give you a warning if such characters are present:
array_slice throws undefined constant error
I am getting an array of ids from db, shuffling them and returning the first few elements: $stmt = $this->pdo->query("SELECT `id` FROM `IDS_TABLE` WHERE `active` = 1 LIMIT 10"); $rows = $stmt->fetchAll(); if($rows){ $pageIdsArray = array_map(function($page){return $page["id"];}, $rows); shuffle($pageIdsArray); error_log(json_encode($pageIdsArray)); // looks ok $result = array_slice($pageIdsArray, 0, 3); // error here return $result; } To my surprise, this code throws: PHP Fatal error: Uncaught ErrorException: Use of undefined constant \xc2\xa00 - assumed '\xc2\xa00' (this will throw an Error in a future version of PHP) What am I doing wrong?
[ "There are two non-breaking space characters (u+00A0) before the 0 and the 3 in this line:\n$result = array_slice($pageIdsArray, 0, 3); // error here\n\nI'm guessing you may have copied this from a web page which inserted those characters for formatting.\nJust retype it with regular spaces:\n$result = array_slice($pageIdsArray, 0, 3); \n\nFor future reference, some IDEs/editors like VS Code can be configured to give you a warning if such characters are present:\n\n" ]
[ 5 ]
[]
[]
[ "php" ]
stackoverflow_0074660421_php.txt
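Before retyping anything, it can help to confirm the diagnosis above by locating every U+00A0 in the file. A small sketch (the filename is a placeholder):
<?php
// Report the byte offset of every non-breaking space (U+00A0) in a source file.
$src = file_get_contents('script.php');
preg_match_all('/\x{00A0}/u', $src, $matches, PREG_OFFSET_CAPTURE);
foreach ($matches[0] as $match) {
    echo "Non-breaking space at byte offset {$match[1]}\n";
}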
Q: WAI-ARIA Roles for Carousels (a.k.a. sliders, slideshows, galleries) What WAI-ARIA roles should be used for carousels, like the one below? Notes: I'm familiar with the slider role, but this refers to a different type of widget. The example is from USA.gov and the only aria role I see in use that's related to the carousel is aria-live on the individual slides. A: as you correctly say, role=slider is not right for a carousel. The one that you should use is role=listbox quoting from MDN (see link below): The listbox role is used to identify an element that creates a list from which a user may select one or more items which are static and, unlike HTML elements, may contain images. see https://developer.mozilla.org/en-US/docs/Accessibility/ARIA/ARIA_Techniques/Using_the_listbox_role for additional information on other ARIA roles you should use, such as role=option for each entry in the listbox. You could also have a look at this YUI plugin (YUI 2 is deprecated but the documentation is still valid for the purpose of your question) http://yui.github.io/yui2/docs/yui_2.9.0_full/examples/carousel/carousel-ariaplugin.html that adds aria roles to an existing carousel via javascript. I'm not suggesting to use it, but you can certainly infer what it does and replicate it in your markup as needed. A: I have seen many examples using role="listbox", but this feels incorrect to me. A listbox is a form control, wanting to get a selection from a user. If the purpose of your carousel is to have a user select an option, then use listbox, but most people do not use carousel components in this manner. A better role would be a tablist. Tablists are used to represent data (as opposed to capturing an option). Carousels represent data. The links to display a certain image would have the role of tab and each image with its corresponding data such as its caption would get a role of tabpanel. See http://www.w3.org/TR/2010/WD-wai-aria-practices-20100916/#tabpanel A: Here's an example of an accessible carousel using jQuery UI: http://hanshillen.github.io/jqtest/#goto_carousel The control bar is a list of images, marked up with <ul role="listbox">. Each <li> has role="option", tabindex="-1", and aria-selected="false". The two arrows are <button> elements with title="previous" and title="next", and when a button is pressed the previous or next list item is selected, in which case that list item's attributes change to aria-selected="true" and tabindex="0". The latter means the user can tab directly to the currently selected image, which of course has suitable alt text. Another option not included in this example might be to add role="alert" to the viewer div, so when the content of that div changes the new alt text is automatically read. That way users don't have to press tab to see what the image is, then shift+tab to go back to the button. A: From the official W3 tutorials: To group the carousel in a way that is perceivable for assistive technologies, the role attribute with a value of region can be used. To identify the region, the aria-label attribute can be used, as shown in the example below: <div class="carousel" role="region" aria-label="Recent news"> … </div> So you should use role="region" A: tl;dr: role="marquee" A carousel is not a listbox. From the listbox description A widget that allows the user to select one or more items from a list of choices. The listbox role is a subclass of select. While I'm sure there are exceptions, carousels are generally not used as form elements where users "select one or more items", and should not use a role that would otherwise treat them as such. A better role for a carousel is tablist so long as bullets or numbering for the carousel slides are provided and toggle depending on which one is active: A list of tab elements, which are references to tabpanel elements. An all-around better role for carousels is marquee, as there is no dependency on showing "tabs" as bullets or numbers to access different slides: A type of live region where non-essential information changes frequently. A: Now that other answers already can't agree on role "listbox", "region" or "marquee", another one pops up: role "group", in combination with role description "carousel". From the, very official, WAI-ARIA Authoring Practices 1.2 (emphasis mine): A carousel container element [...] has either role region or role group. The most appropriate role for the carousel container depends on the information architecture of the page. [...] The carousel container has the aria-roledescription property set to carousel. A11y is hard. A: This reference may not have been around when the original question was posted but it is a very official source with clear instructions. A11y is a learning process. https://www.w3.org/WAI/ARIA/apg/example-index/carousel/carousel-1-prev-next
WAI-ARIA Roles for Carousels (a.k.a. sliders, slideshows, galleries)
What WAI-ARIA roles should be used for carousels, like the one below? Notes: I'm familiar with the slider role, but this refers to a different type of widget. The example is from USA.gov and the only aria role I see in use that's related to the carousel is aria-live on the individual slides.
[ "as you correctly say, role=slider is not right for a carousel. The one that you should use is role=listbox \nquoting from MDN (see link below):\n\nThe listbox role is used to identify an element that creates a list from which a user may select one or more items which are static and, unlike HTML elements, may contain images.\n\nsee https://developer.mozilla.org/en-US/docs/Accessibility/ARIA/ARIA_Techniques/Using_the_listbox_role for additional information on other ARIA roles you should use, such as role=option for each entry in the listbox.\nYou could also have a look at this YUI plugin (YUI 2 is deprecated but the documentation is still valid for the purpose of your question)\nhttp://yui.github.io/yui2/docs/yui_2.9.0_full/examples/carousel/carousel-ariaplugin.html\nthat adds aria roles to an existing carousel via javascript. I'm not suggesting to use it, but you can certainly infer what it does and replicate it in your markup as needed.\n", "I have seen many examples using role=\"listbox\", but this feels incorrect to me. A listbox is a form control, wanting to get a selection from a user. If the purpose of your carousel is to have a user select an option, then use listbox, but most people do not use carousel components in this manner. A better role would be a tablist. Tablists are used to represent data (as opposed to capturing an option). Carousel represent data. The links to display a certain image would have the role of tab and each image with it's corresponding data such as its caption would get a role of tabpanel.\nSee http://www.w3.org/TR/2010/WD-wai-aria-practices-20100916/#tabpanel\n", "Here's an example of an accessible carousel using jQuery UI: \nhttp://hanshillen.github.io/jqtest/#goto_carousel\nThe control bar is a list of images, marked up with <ul role=\"listbox\">. Each <li> has role=\"option\", tabindex=\"-1\", and aria-selected=\"false\". \nThe two arrows are <button> elements with title=\"previous\" and title=\"next\", and when a button is pressed the previous or next list item is selected, in which case that list item's attributes change to aria-selected=\"true\" and tabindex=\"0\". The latter means the user can tab directly to the current selected image, which of course has suitable alt text. \nAnother option not included in this example might be to add role=\"alert\" to the viewer div, so when the content of that div changes the new alt text is automatically read. That way users don't have to press tab to see what the image is, then shift+tab to go back to the button. \n", "From the official W3 tutorials:\n\nTo group the carousel in a way that is perceivable for assistive technologies, the role attribute with a value of region can be used. To identify the region, the aria-label attribute can be used, as show in the example below:\n<div class=\"carousel\" role=\"region\" aria-label=\"Recent news\">\n …\n</div>\n\n\nSo you should use role=\"region\"\n", "tl;dr:\nrole=\"marquee\"\n\nA carousel is not a listbox.\nFrom the listbox description\n\nA widget that allows the user to select one or more items from a list of choices.\n\nThe listbox role is a subclass of select. 
While I'm sure there are exceptions, carousels are generally not used as form elements where users \"select one or more items\", and should not use a role that would otherwise treat them as such.\nA better role for a carousel is tablist so long as bullets or numbering for the carousel slides are provided and toggle depending on which one is active:\n\nA list of tab elements, which are references to tabpanel elements.\n\nAn all around-better role for carousels is marquee, as there is no dependency on showing \"tabs\" as bullets or numbers to access different slides:\n\nA type of live region where non-essential information changes frequently.\n\n", "Now that other answers already can't agree on role \"listbox\", \"region\" or \"marquee\", another one pops up: role \"group\", in combination with role description \"carousel\".\nFrom the, very official, WAI-ARIA Authoring Practices 1.2 (emphasis mine):\n\n\nA carousel container element [...] has either\nrole region or role group. The most appropriate role for the\ncarousel container depends on the information architecture of the\npage. [...]\n\nThe carousel container has the aria-roledescription property set to carousel.\n\n\n\nA11y is hard.\n", "This reference may not have been around when the original question was posted but it is a very official source with clear instructions. A11y is a learning process.\nhttps://www.w3.org/WAI/ARIA/apg/example-index/carousel/carousel-1-prev-next\n" ]
[ 12, 7, 6, 6, 3, 2, 0 ]
[]
[]
[ "accessibility", "html", "javascript", "wai_aria" ]
stackoverflow_0016840054_accessibility_html_javascript_wai_aria.txt
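Pulling the last two answers together, the current W3C APG pattern reduces to markup along these lines (the label text, slide count, and button wording are placeholders, not part of the spec):
<section role="group" aria-roledescription="carousel" aria-label="Recent news">
  <button type="button" aria-label="Previous slide">Previous</button>
  <div role="group" aria-roledescription="slide" aria-label="1 of 3">
    <!-- slide content -->
  </div>
  <button type="button" aria-label="Next slide">Next</button>
</section>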
Q: ReferenceError: require is not defined in ES module scope, you can use import instead gulp sass I got an error when using this line of code const sass = require('gulp-sass')(require('sass')); The error output is: Requiring external module babel-register ReferenceError: require is not defined in ES module scope, you can use import instead This file is being treated as an ES module because it has a '.js' file extension and 'C:\xampp-clean\htdocs\myfirsttheme\package.json' contains "type": "module". To treat it as a CommonJS script, rename it to use the '.cjs' file extension. I have changed const sass = require('gulp-sass')(require('sass')); the error disappeared but I am getting this error: Requiring external module babel-register TypeError: Cannot read property 'prod' of undefined and this line is causing the error: const PRODUCTION = yargs.argv.prod; Does anyone have an idea? I have tried a lot and googled this error but I did not find a solution. Does anyone have an idea about this? Thanks in advance A: Can you check the "type" part in the package.json file? If the type part is module, you will get this error. You have to make it CommonJS. A: Worked for me; the only change is in the package.json file, this line: "type":"module" to "type":"commonjs"
ReferenceError: require is not defined in ES module scope, you can use import instead gulp sass
I got an error when using this line of code const sass = require('gulp-sass')(require('sass')); The error output is: Requiring external module babel-register ReferenceError: require is not defined in ES module scope, you can use import instead This file is being treated as an ES module because it has a '.js' file extension and 'C:\xampp-clean\htdocs\myfirsttheme\package.json' contains "type": "module". To treat it as a CommonJS script, rename it to use the '.cjs' file extension. I have changed const sass = require('gulp-sass')(require('sass')); the error disappeared but I am getting this error: Requiring external module babel-register TypeError: Cannot read property 'prod' of undefined and this line is causing the error: const PRODUCTION = yargs.argv.prod; Does anyone have an idea? I have tried a lot and googled this error but I did not find a solution. Does anyone have an idea about this? Thanks in advance
[ "Can you check the \"type\" part in the package.json file? If the type part is module, you will get this error. You have to make it Commonjs.\n", "work for me , only change in package.json file, this line:\n\"type\":\"module\"\nto\n\"type\":\"commonjs\"\n" ]
[ 9, 0 ]
[ "\nFirst of all add \"type\":\"commonjs\" to package.json.\nDowngrade your package version. If you are using gulp-imagemin version 8.0.0 then install older version like npm i [email protected].\nThat's all now enjoy coding...\n\n" ]
[ -2 ]
[ "gulp", "gulp_sass", "javascript", "sass" ]
stackoverflow_0069099763_gulp_gulp_sass_javascript_sass.txt
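For completeness, the opposite direction is to keep "type": "module" and convert the gulpfile itself to ESM. A rough, untested sketch, assuming gulp-sass 5+, dart-sass, and yargs 17 (in ESM the default yargs export is a factory function, which is also why a bare yargs.argv.prod comes back as a property of undefined):
// gulpfile.mjs
import gulpSass from 'gulp-sass';
import * as dartSass from 'sass';
import yargs from 'yargs';
import { hideBin } from 'yargs/helpers';

const sass = gulpSass(dartSass);  // hand the dart-sass compiler to gulp-sass
const PRODUCTION = yargs(hideBin(process.argv)).argv.prod;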
Q: Multiplication Table Based On BOTH User Inputs I am trying to create a multiplication table that is based on BOTH user inputs. I want to ask the user the first and second integer and then to print out the first integer by the second integer that amount of times. For example, if I choose 5 for first integer and 10 for second integer, I want the results printed as such: 5 x 1 = 5 5 x 2 = 10 and so forth... I do not know whether I should be using for loops or an array for this type of program. I am about 15 weeks into learning C++. When I execute the code, nothing happens in the executable. Here is the code: cout<<" Multiplication Table"<<endl; cout<<"----------------------"<<endl; cout<<" Input a number: "; cin>>multiplyNumber; cout<<"Print the multiplication table of a number up to: "; cin>>multiply2Number; for (int a = multiplyNumber; a < multiply2Number; a++){ for (int b = multiply2Number; b < multiply2Number; b++){ cout<<a<<" X "<<" = "<<endl; } A: You don't see the numbers being outputted because your inner for loop is never entered, as b (which has the value of multiply2Number) can never be less than multiply2Number. Do you want the second number to be the number of entries to display, starting at x 1 and progressing sequentially? If so, then try something like this: cout << " Multiplication Table" << endl; cout << "----------------------" << endl; cout << " Input a number: "; cin >> number; cout << "Print the multiplication table of a number up to: "; cin >> lastMultiplier; for (int multiplier = 1; multiplier <= lastMultiplier; ++multiplier){ cout << number << " X " << multiplier << " = " << (number * multiplier) << endl; } Online Demo Or, do you want the second number to be the highest multiplied value to stop at, and you want to display however many entries is required to reach that value? If so, then try something like this instead: cout << " Multiplication Table" << endl; cout << "----------------------" << endl; cout << " Input a number: "; cin >> number; cout << "Print the multiplication table of a number up to: "; cin >> lastResult; for (int multiplier = 1, result = number; result <= lastResult; result = number * ++multiplier){ cout << number << " X " << multiplier << " = " << result << endl; } Online Demo
Multiplication Table Based On BOTH User Inputs
I am trying to create a multiplication table that is based on BOTH user inputs. I want to ask the user the first and second integer and then to print out the first integer by the second integer that amount of times. For example, if I choose 5 for first integer and 10 for second integer, I want the results printed as such: 5 x 1 = 5 5 x 2 = 10 and so forth... I do not know whether I should be using for loops or an array for this type of program. I am about 15 weeks into learning C++. When I execute the code, nothing happens in the executable. Here is the code: cout<<" Multiplication Table"<<endl; cout<<"----------------------"<<endl; cout<<" Input a number: "; cin>>multiplyNumber; cout<<"Print the multiplication table of a number up to: "; cin>>multiply2Number; for (int a = multiplyNumber; a < multiply2Number; a++){ for (int b = multiply2Number; b < multiply2Number; b++){ cout<<a<<" X "<<" = "<<endl; }
[ "You don't see the numbers being outputted because your inner for loop is never entered, as b (which has the value of multiply2Number) can never be less than multiply2Number.\nDo you want the second number to be the number of entries to display, starting at x 1 and progressing sequentially? If so, then try something like this:\ncout << \" Multiplication Table\" << endl;\ncout << \"----------------------\" << endl;\n\ncout << \" Input a number: \";\ncin >> number;\n\ncout << \"Print the multiplication table of a number up to: \";\ncin >> lastMultiplier;\n\nfor (int multiplier = 1; multiplier <= lastMultiplier; ++a){\n cout << number << \" X \" << multiplier << \" = \" << (number * multiplier) << endl;\n}\n\nOnline Demo\nOr, do you want the second number to be the highest multiplied value to stop at, and you want to display however many entries is required to reach that value? If so, then try something like this instead:\ncout << \" Multiplication Table\" << endl;\ncout << \"----------------------\" << endl;\n\ncout << \" Input a number: \";\ncin >> number;\n\ncout << \"Print the multiplication table of a number up to: \";\ncin >> lastResult;\n\nfor (int multiplier = 1, result = number; result <= lastResult; result = number * ++multiplier){\n cout << number << \" X \" << multiplier << \" = \" << result << endl;\n}\n\nOnline Demo\n" ]
[ 1 ]
[]
[]
[ "c++", "for_loop", "multiplication" ]
stackoverflow_0074660413_c++_for_loop_multiplication.txt
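Both snippets in the answer assume number and lastMultiplier are declared elsewhere; for anyone who wants to compile it directly, a self-contained version of the first variant looks like this:
#include <iostream>
using namespace std;

int main() {
    int number = 0, lastMultiplier = 0;

    cout << " Multiplication Table" << endl;
    cout << "----------------------" << endl;
    cout << " Input a number: ";
    cin >> number;
    cout << "Print the multiplication table of a number up to: ";
    cin >> lastMultiplier;

    // One line per multiplier: number X i = number*i
    for (int multiplier = 1; multiplier <= lastMultiplier; ++multiplier) {
        cout << number << " X " << multiplier << " = " << (number * multiplier) << endl;
    }
    return 0;
}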
Q: The function can't be unconditionally invoked because it can be 'null'. ZStack.clip() This piece of code is from a pre-null-safety version of Flutter. It uses VelocityX, which I know is kind of deprecated, but I am trying to make it work by modifying it. "radios": [ { "id": 1, "name": "92.7", "tagline": "Suno Sunao, Life Banao!", "color": "0xffa11431", "desc": "The chills you get when you listen to music, is mostly caused by the brain releasing dopamine while anticipating the peak moment of a song.", "url": "http://sc-bb.1.fm:8017/;", "icon": "https://mytuner.global.ssl.fastly.net/media/tvos_radios/m8afyszryaqt.png", "image": "https://mir-s3-cdn-cf.behance.net/project_modules/max_1200/b5df4c18876369.562d0d4bd94cf.jpg", "lang": "Hindi", "category": "pop", "disliked": false, "order": 1 }, ... ] final rad = radios![index]; ZStack? Z = ZStack([ Positioned( top: 0.0, right: 0.0, child: VxBox( child: rad.category.text.uppercase.white.make().p16(), ).height(40).color(Colors.black).alignCenter.withRounded(value: 10.0).make(), ), Align( alignment: Alignment.bottomCenter, child: VStack([ rad.name.text.xl3.white.bold.make(), 5.heightBox, rad.tagline.text.sm.white.semiBold.make(), ], crossAlignment: CrossAxisAlignment.center, ), ), Align( alignment: Alignment.center, child: [const Icon( CupertinoIcons.play_circle, color: Colors.white, ), 10.heightBox, "Double tap to play".text.gray300.make() ].vStack(), ), ]).clip(Clip.antiAlias); Everything works fine except the last line, that is .clip(Clip.antiAlias) It gives the following error: Error It works fine if I remove .clip(Clip.antiAlias). I resolved a similar error previously itemCount: radios?.length ?? 0, and it worked. But ZStack Z1 = Z?.clip(Clip.antiAlias) ?? Z, where Z does not have .clip() method, does not work. Even ! null check does not work. Is there any work-around for this? Or any best practice I am missing? A: please add ! null check operator before .clip() solution for your problem is -> ZStack Z1 = Z!.clip(Clip.antiAlias) ?? Z where Z must be some other object in case your object returns null
The function can't be unconditionally invoked because it can be 'null'. ZStack.clip()
This piece of code is from a pre-null-safety version of Flutter. It uses VelocityX, which I know is kind of deprecated, but I am trying to make it work by modifying it. "radios": [ { "id": 1, "name": "92.7", "tagline": "Suno Sunao, Life Banao!", "color": "0xffa11431", "desc": "The chills you get when you listen to music, is mostly caused by the brain releasing dopamine while anticipating the peak moment of a song.", "url": "http://sc-bb.1.fm:8017/;", "icon": "https://mytuner.global.ssl.fastly.net/media/tvos_radios/m8afyszryaqt.png", "image": "https://mir-s3-cdn-cf.behance.net/project_modules/max_1200/b5df4c18876369.562d0d4bd94cf.jpg", "lang": "Hindi", "category": "pop", "disliked": false, "order": 1 }, ... ] final rad = radios![index]; ZStack? Z = ZStack([ Positioned( top: 0.0, right: 0.0, child: VxBox( child: rad.category.text.uppercase.white.make().p16(), ).height(40).color(Colors.black).alignCenter.withRounded(value: 10.0).make(), ), Align( alignment: Alignment.bottomCenter, child: VStack([ rad.name.text.xl3.white.bold.make(), 5.heightBox, rad.tagline.text.sm.white.semiBold.make(), ], crossAlignment: CrossAxisAlignment.center, ), ), Align( alignment: Alignment.center, child: [const Icon( CupertinoIcons.play_circle, color: Colors.white, ), 10.heightBox, "Double tap to play".text.gray300.make() ].vStack(), ), ]).clip(Clip.antiAlias); Everything works fine except the last line, that is .clip(Clip.antiAlias) It gives the following error: Error It works fine if I remove .clip(Clip.antiAlias). I resolved a similar error previously itemCount: radios?.length ?? 0, and it worked. But ZStack Z1 = Z?.clip(Clip.antiAlias) ?? Z, where Z does not have .clip() method, does not work. Even ! null check does not work. Is there any work-around for this? Or any best practice I am missing?
[ "please add ! null check operator before .clip()\nsolution for your problem is ->\nZStack Z1 = Z!.clip(Clip.antiAlias) ?? Z\n\nwhere Z must be some other object incase if you object return null\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074660180_dart_flutter.txt
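One more avenue worth checking, given that the ! operator reportedly did not help: in some VelocityX versions, clip on ZStack is a nullable constructor parameter rather than a chainable method, and the analyzer error then refers to invoking that nullable field as a function. If that matches the version you depend on (an assumption; verify against the VelocityX source), pass the value directly instead of chaining:
// Hypothetical: clip supplied as a constructor argument instead of a chained call.
ZStack(
  [
    // ... the Positioned and Align children from the question ...
  ],
  clip: Clip.antiAlias,
);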
Q: Plotting GIF on Folium Map I have a GIF of 24 seconds showing the temperature of -119.564209,38.503915,-114.060059,41.211203 region in the square (heat map) for 24 hrs. Is there a way I can plot this GIF on the Folium map (or any other interactive map in python) by giving the mentioned coordinates? A: Here's how I solved it using folium. from folium import raster_layers m = folium.Map(location=[100, 100], zoom_start=2, tiles='OpenStreetMap') raster_layers.ImageOverlay('temp.gif', [[-119.564209,38.503915],[-114.060059,41.211203]], opacity=0.8, ).add_to(m) folium.LayerControl().add_to(m) m
Plotting GIF on Folium Map
I have a GIF of 24 seconds showing the temperature of -119.564209,38.503915,-114.060059,41.211203 region in the square (heat map) for 24 hrs. Is there a way I can plot this GIF on the Folium map (or any other interactive map in python) by giving the mentioned coordinates?
[ "Here's how I solved it using folium.\nfrom folium import raster_layers\nm = folium.Map(location=[100, 100], zoom_start=2, tiles='OpenStreetMap')\n\nraster_layers.ImageOverlay('temp.gif',\n [[-119.564209,38.503915],[-114.060059,41.211203]],\n opacity=0.8,\n ).add_to(m)\n\nfolium.LayerControl().add_to(m)\nm\n\n" ]
[ 0 ]
[]
[]
[ "folium", "gis", "python" ]
stackoverflow_0074657170_folium_gis_python.txt
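If you are working outside a notebook, the resulting interactive map can be written to disk with folium's save() and opened in a browser (the filename is arbitrary):
m.save('overlay_map.html')  # open this file in a browser to view the animated overlay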
Q: Is it possible to use AWS Cognito as a SAML-based IdP to authenticate users to AWS Workspaces with MFA? Is it possible to use AWS Cognito as a SAML-based IdP to authenticate users to AWS Workspaces with MFA? A: Amazon Cognito supports authentication with identity providers (IdPs) through Security Assertion Markup Language 2.0 (SAML 2.0). Users can access WorkSpaces with SAML 2.0 authentication with the latest version of WorkSpaces. Cognito User pool supports MFA.
Is it possible to use AWS Cognito as a SAML-based IdP to authenticate users to AWS Workspaces with MFA?
Is it possible to use AWS Cognito as a SAML-based IdP to authenticate users to AWS Workspaces with MFA?
[ "Amazon Cognito supports authentication with identity providers (IdPs) through Security Assertion Markup Language 2.0 (SAML 2.0).\nUsers can access WorkSpaces with SAML 2.0 authentication with latest version of WorkSpaces.\nCognito User pool supports MFA.\n" ]
[ 1 ]
[]
[]
[ "amazon_cognito", "amazon_web_services", "amazon_workspaces" ]
stackoverflow_0074660384_amazon_cognito_amazon_web_services_amazon_workspaces.txt
Q: How can I compare the directory lists inside directories? I have 2 directories, A and B. In A, I have subdirectories 1, 2, 3, 4 while in B I have 1 and 3. How can I highlight 2 and 4 easily? The contents of the subdirectories are the same, respectively, so I just want to compare the names. Mark -> Compare Directories only compares files, but not directories. A: I don't know whether you can use other tools, but there is another way to compare directories (unless you don't mind using a command line). In macOS you can simply type this command in the terminal: $ diff A/ B/ Common subdirectories: A/1 and B/1 Only in A/: 2 Common subdirectories: A/3 and B/3 Only in A/: 4 or $ diff -rq A B Only in A/: 2 Only in A/: 4 -r recursively compares any subdirectories -q shows only differences without any details For Windows preinstalled comp command. Here is the link preinstalled fc command. Here is the link also you can download gnuwin32 which provides diff command and you can use it in almost the same way as described above. Hope it helps somehow. A: To compare the contents of two directories and highlight the differences, you can use the diff command in a terminal or command prompt. This command will show the differences between the directories, including any files or subdirectories that are present in one directory but not the other. For example, if you have two directories named A and B, and you want to compare them and highlight the differences, you could use the following command: diff -rq A B This command will recursively compare the contents of the two directories, showing any files or subdirectories that are present in one directory but not the other. The -q flag tells the diff command to only show the names of the files and directories that are different, without showing the full contents of the files. If you only want to compare the names of the subdirectories, rather than the contents of the files within them, you can use the find command to list the subdirectories in each directory, and then use the diff command to compare the lists. For example: diff <(find A -type d) <(find B -type d) This command will list the subdirectories in each directory, and then use the diff command to compare the lists and highlight the differences. Note that the diff and find commands may vary depending on the operating system you are using. If you are unsure how to use these commands, you may want to consult the documentation for your operating system or search online for tutorials or guides on using them.
How can I compare the directory lists inside directories?
I have 2 directories, A and B. In A, I have subdirectories 1, 2, 3, 4 while in B I have 1 and 3. How can I highlight 2 and 4 easily? The contents of the subdirectories are the same, respectively, so I just want to compare the names. Mark -> Compare Directories only compares files, but not directories.
[ "I don't know whether you can use other tools, but there is another way to compare directories (unless you don't mind using a command line).\nIn macOS you can simply type this command in the terminal:\n $ diff A/ B/\n Common subdirectories: A/1 and B/1\n Only in A/: 2\n Common subdirectories: A/3 and B/3\n Only in A/: 4\n\nor\n$ diff -rq A B\nOnly in A/: 2\nOnly in A/: 4\n\n-r recursively compares any subdirectories\n-q shows only differences without any details\n\nFor Windows\n\npreinstalled comp command. Here is the link\npreinstalled fc command. Here is the link\nalso you can download gnuwin32 which provides diff command and you can use it in almost the same way as described above.\n\nHope it helps somehow.\n", "To compare the contents of two directories and highlight the differences, you can use the diff command in a terminal or command prompt. This command will show the differences between the directories, including any files or subdirectories that are present in one directory but not the other.\nFor example, if you have two directories named A and B, and you want to compare them and highlight the differences, you could use the following command:\ndiff -rq A B\n\nThis command will recursively compare the contents of the two directories, showing any files or subdirectories that are present in one directory but not the other. The -q flag tells the diff command to only show the names of the files and directories that are different, without showing the full contents of the files.\nIf you only want to compare the names of the subdirectories, rather than the contents of the files within them, you can use the find command to list the subdirectories in each directory, and then use the diff command to compare the lists. For example:\ndiff <(find A -type d) <(find B -type d)\n\nThis command will list the subdirectories in each directory, and then use the diff command to compare the lists and highlight the differences.\nNote that the diff and find commands may vary depending on the operating system you are using. If you are unsure how to use these commands, you may want to consult the documentation for your operating system or search online for tutorials or guides on using them.\n" ]
[ 1, 0 ]
[]
[]
[ "total_commander" ]
stackoverflow_0074656186_total_commander.txt
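Since only the immediate subdirectory names matter here, a variant of the second answer's idea restricts the comparison to the top level (bash/zsh process substitution assumed; with the example layout from the question this prints 2/ and 4/):
$ comm -3 <(cd A && ls -d */ | sort) <(cd B && ls -d */ | sort)
comm -3 suppresses the lines common to both listings, leaving only the directory names unique to A or to B.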
Q: gh cli create release from tagged draft template Let's say I have a release draft template tagged 1.10 Now I would like to publish a new 1.10 release from this draft with gh cli, but I see no option for that, or I am not understanding how to achieve it If I run gh release create 1.10 --generate-notes it will create new release notes, which is undesirable (my draft uses another template). If I add the -d flag, it will create a new draft. All I want is to publish from the existing tagged draft, something like (dummy command) gh release create 1.10 --from-draft - is it possible? A: Use the edit option to publish the release. Publish a release that was previously a draft $ gh release edit v1.0 --draft=false
gh cli create release from tagged draft template
Let's say I have a release draft template tagged 1.10 Now I would like to publish a new 1.10 release from this draft with gh cli, but I see no option for that, or I am not understanding how to achieve it If I run gh release create 1.10 --generate-notes it will create new release notes, which is undesirable (my draft uses another template). If I add the -d flag, it will create a new draft. All I want is to publish from the existing tagged draft, something like (dummy command) gh release create 1.10 --from-draft - is it possible?
[ "Use the edit option to publish the release.\nPublish a release that was previously a draft\n$ gh release edit v1.0 --draft=false\n\n" ]
[ 1 ]
[]
[]
[ "github", "github_cli" ]
stackoverflow_0074659860_github_github_cli.txt
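Applied to the tag from the question, publishing the existing draft (its notes and template are left untouched) is just:
gh release edit 1.10 --draft=false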
Q: Xcode error 'building for iOS Simulator, but linking in dylib built for iOS .. for architecture arm64' from Apple Silicon M1 Mac I have an app which compiles and runs fine in older Macs with Intel processors in physical devices & iOS simulators. The same app also compiles and runs fine from newer Apple Silicon Mac with M1 processor with physical iPhone devices, but it refuses to be compiled for iOS simulator. Without simulator support, debugging turnaround time gets really long so I am trying to solve this issue. Not to mention Xcode preview feature isn't working either which is annoying. The first error that I encountered without making any changes (but moved from Intel Mac to M1 Mac) is like below. building for iOS Simulator, but linking in dylib built for iOS, file '/Users/andy/workspace/app/Pods/GoogleWebRTC/Frameworks/frameworks/WebRTC.framework/WebRTC' for architecture arm64 The Cocoapods library that I am using is GoogleWebRTC, and according to its doc, arm64 should be supported so I am baffled why above error is getting thrown. As I have said before, it compiles fine in real device which I believe is running on arm64. According to the doc.. This pod contains the WebRTC iOS SDK in binary form. It is a dynamic library that contains the armv7, arm64 and x86_64 slices. Bitcode is not supported. Our currently provided API’s are Objective C only. I searched online and it appears there are 2 workarounds for this issue. The first one is by adding arm64 to Excluded Architectures The second option is to mark Build Active Architecture Only for Release build. I don't exactly understand if the above are necessary even when I am compiling my app on M1 Mac which is running under arm64 architecture, because the solution seems to be applicable only for Intel Macs which do not support the arm64 simulator, as for Intel Mac, simulators might have been running in x86_64, not with arm64, so solution #1 is not applicable in my case. When I adopt the second change only, nothing really changes and the same error is thrown. When I make both changes and tried building, I now get the following 2nd error during build. (not really 100% sure if I solved the 1st error / I might have introduced 2nd error in addition to 1st by adopting the two changes) Could not find module 'Lottie' for target 'x86_64-apple-ios-simulator'; found: arm64, arm64-apple-ios-simulator The second library that I am using is lottie-ios and I am pulling this in with a swift package manager. I guess what is happening is that because I excluded arm64 in build setting for iOS simulator, Xcode is attempting to run my app in x86_64. However, library is not supported running in x86_64 for some reason, and is throwing an error. I don't have much insights into what dictates whether or not library can run in x86_64 or arm64 so I couldn't dig to investigate this issue. My weak conclusion is that GoogleWebRTC cannot be compiled to run in iOS simulator with arm64 for some reason (unlike what its doc says), and lottie-ios cannot be compiled to run in iOS simulator with x86_64. So I cannot use them both in this case. Q1. I want to know what kind of changes I can make to resolve this issue... The app compiles and runs perfectly in both device & simulator when compiled from Intel Mac. The app compiles and runs fine in device when compiled from Apple Silicon Mac. It is just that the app refuses to be compiled and run in iOS simulator from Apple Silicon Mac, and I cannot seem to figure out why. Q2. If there is no solution available, I want to understand why this is happening in the first place. I really wish not to buy an old Intel Mac again just to make things work in simulator. A: Answering my own question in a hope to help others who are having similar problems. (and until a good answer is added from another user) I found out that GoogleWebRTC actually requires its source to be compiled with x64 based on its source depo. For builds targeting iOS devices, this should be set to either "arm" or "arm64", depending on the architecture of the device. For builds to run in the simulator, this should be set to "x64". https://webrtc.github.io/webrtc-org/native-code/ios/ This must be why I was getting the following error. building for iOS Simulator, but linking in dylib built for iOS, file '/Users/andy/workspace/app/Pods/GoogleWebRTC/Frameworks/frameworks/WebRTC.framework/WebRTC' for architecture arm64 Please correct me if I am wrong, but by default, it seems that Xcode running in Apple M1 silicon seems to launch iOS simulator with arm arch type. Since my app did run fine on simulators in Intel Mac, I did the following as a workaround for now. Quit Xcode. Go to Finder and open Application Folder. Right click on Xcode application, select Get Info In the "Xcode Info Window" check on Open using Rosetta. Open Xcode and try running again. That was all I needed to do to make my app, which relies on a library that is not yet fully supported on arm simulator, work again. (I believe launching Xcode in Rosetta mode runs simulator in x86 as well..?? which explains why things are working after making the above change) A lot of online sources (often posted before M1 Mac launch on Nov/2020) talks about "add arm64 to Excluded Architectures", but that solution seems to be only applicable to Intel Mac, and not M1 Mac, as I did not need to make that change to make things work again. Of course, running Xcode in Rosetta mode is not a permanent solution, and Xcode slows down lil bit, but it is an interim solution that gets things going in case one of libraries that you are using is not runnable in arm64 simulator.. yet. A: I found the fix here thanks @Narlei 1- Set exclude arm64 architecture in your project 2- This at the end of your Podfile post_install do |installer| installer.pods_project.build_configurations.each do |config| config.build_settings["EXCLUDED_ARCHS[sdk=iphonesimulator*]"] = "arm64" end end A: It is not supported by Apple to run Xcode simulator thru Rosetta. There are a lot of bugs. It's much better to isolate problematic libraries thru Cocoapods and build them only to supported architectures. post_install do |installer| installer.pods_project.targets.each do |target| target.build_configurations.each do |config| # Targets that do not support m1 simulator if ['Lib1', 'Lib2'].include? target.name config.build_settings['ARCHS[sdk=iphonesimulator*]'] = 'x86_64' end end end end Then you can use #if to isolate code that uses these frameworks. #if !((arch(arm64)) && targetEnvironment(simulator)) // Not M1 #else // M1 #endif P.S. Here is how you can create a pod with any framework. A: Quit Xcode. Go to Finder and open Application Folder. Right click on Xcode application, select Get Info In the "Xcode Info Window" check on Open using Rosetta. Open Xcode and try running again. A: I am running Xcode on an M2 silicon MacBook. The app I'm building has multiple modules that variably depend on cocoapods that are not configured to build on arm64-simulators, hence the error in question. Running Xcode in Rosetta worked to get the app to build onto the Sim, but only for the first build, and everything ran much slower. When I would subsequently rebuild and run the app again, it would not attach the process to the Sim! I would get a black screen and spinning wheel of death on Xcode for "Launching app" which was quite annoying. I had to find a different solution, which I will explain below. Following this link given by @Michael Long https://blog.sudeium.com/2021/06/18/build-for-x86-simulator-on-apple-silicon-macs/ I had to set the "Architectures" (not exclude option, just general Arch) for "Any iOS simulator SDK" to "x86_64" for EVERY Project/Module in my application. Also, it was important to make sure "Build Active Architecture Only" was set to "NO". I then went to my podfile and added this: config.build_settings['ARCHS[sdk=iphonesimulator*]'] = 'x86_64' to the post install part of the podfile. If you find that certain internal modules cannot be found during a build, you missed a build setting described above for that module.
Xcode error 'building for iOS Simulator, but linking in dylib built for iOS .. for architecture arm64' from Apple Silicon M1 Mac
I have an app which compiles and runs fine in older Macs with Intel processors in physical devices & iOS simulators. The same app also compiles and runs fine from newer Apple Silicon Mac with M1 processor with physical iPhone devices, but it refuses to be compiled for iOS simulator. Without simulator support, debugging turnaround time gets really long so I am trying to solve this issue. Not to mention Xcode preview feature isn't working either which is annoying. The first error that I encountered without making any changes (but moved from Intel Mac to M1 Mac) is like below. building for iOS Simulator, but linking in dylib built for iOS, file '/Users/andy/workspace/app/Pods/GoogleWebRTC/Frameworks/frameworks/WebRTC.framework/WebRTC' for architecture arm64 The Cocoapods library that I am using is GoogleWebRTC, and according to its doc, arm64 should be supported so I am baffled why above error is getting thrown. As I have said before, it compiles fine in real device which I believe is running on arm64. According to the doc.. This pod contains the WebRTC iOS SDK in binary form. It is a dynamic library that contains the armv7, arm64 and x86_64 slices. Bitcode is not supported. Our currently provided API’s are Objective C only. I searched online and it appears there are 2 workarounds for this issue. The first one is by adding arm64 to Excluded Architectures The second option is to mark Build Active Architecture Only for Release build. I don't exactly understand if the above are necessary even when I am compiling my app on M1 Mac which is running under arm64 architecture, because the solution seems to be applicable only for Intel Macs which do not support the arm64 simulator, as for Intel Mac, simulators might have been running in x86_64, not with arm64, so solution #1 is not applicable in my case. When I adopt the second change only, nothing really changes and the same error is thrown. When I make both changes and tried building, I now get the following 2nd error during build. (not really 100% sure if I solved the 1st error / I might have introduced 2nd error in addition to 1st by adopting the two changes) Could not find module 'Lottie' for target 'x86_64-apple-ios-simulator'; found: arm64, arm64-apple-ios-simulator The second library that I am using is lottie-ios and I am pulling this in with a swift package manager. I guess what is happening is that because I excluded arm64 in build setting for iOS simulator, Xcode is attempting to run my app in x86_64. However, library is not supported running in x86_64 for some reason, and is throwing an error. I don't have much insights into what dictates whether or not library can run in x86_64 or arm64 so I couldn't dig to investigate this issue. My weak conclusion is that GoogleWebRTC cannot be compiled to run in iOS simulator with arm64 for some reason (unlike what its doc says), and lottie-ios cannot be compiled to run in iOS simulator with x86_64. So I cannot use them both in this case. Q1. I want to know what kind of changes I can make to resolve this issue... The app compiles and runs perfectly in both device & simulator when compiled from Intel Mac. The app compiles and runs fine in device when compiled from Apple Silicon Mac. It is just that the app refuses to be compiled and run in iOS simulator from Apple Silicon Mac, and I cannot seem to figure out why. Q2. If there is no solution available, I want to understand why this is happening in the first place. I really wish not to buy an old Intel Mac again just to make things work in simulator.
[ "Answering my own question in a hope to help others who are having similar problems. (and until a good answer is added from another user)\nI found out that GoogleWebRTC actually requires its source to be compiled with x64 based on its source depo.\n\nFor builds targeting iOS devices, this should be set to either \"arm\" or \"arm64\", depending on the architecture of the device. For builds to run in the simulator, this should be set to \"x64\".\n\nhttps://webrtc.github.io/webrtc-org/native-code/ios/\nThis must be why I was getting the following error.\nbuilding for iOS Simulator, but linking in dylib built for iOS, file '/Users/andy/workspace/app/Pods/GoogleWebRTC/Frameworks/frameworks/WebRTC.framework/WebRTC' for architecture arm64\nPlease correct me if I am wrong, but by default, it seems that Xcode running in Apple M1 silicon seems to launch iOS simulator with arm arch type. Since my app did run fine on simulators in Intel Mac, I did the following as a workaround for now.\n\nQuit Xcode.\nGo to Finder and open Application Folder.\nRight click on Xcode application, select Get Info\nIn the \"Xcode Info Window\" check on Open using Rosetta.\nOpen Xcode and try running again.\n\nThat was all I needed to do to make my app, which relies on a library that is not yet fully supported on arm simulator, work again. (I believe launching Xcode in Rosetta mode runs simulator in x86 as well..?? which explains why things are working after making the above change)\nA lot of online sources (often posted before M1 Mac launch on Nov/2020) talks about \"add arm64 to Excluded Architectures\", but that solution seems to be only applicable to Intel Mac, and not M1 Mac, as I did not need to make that change to make things work again.\nOf course, running Xcode in Rosetta mode is not a permanent solution, and Xcode slows down lil bit, but it is an interim solution that gets things going in case one of libraries that you are using is not runnable in arm64 simulator.. yet.\n", "I found the fix here thanks @Narlei\n1- Set exclude arm64 architecture in your project\n\n2- This at the end of your Podfile\npost_install do |installer|\n installer.pods_project.build_configurations.each do |config|\n config.build_settings[\"EXCLUDED_ARCHS[sdk=iphonesimulator*]\"] = \"arm64\"\n end\nend\n\n", "It is not supported by Apple to run Xcode simulator thru Rosetta. There are a lot of bugs.\nIt's much better to isolate problematic libraries thru Cocoapods and build them only to supported architectures.\npost_install do |installer| \n installer.pods_project.targets.each do |target|\n target.build_configurations.each do |config|\n # Targets that do not support m1 simulator\n if ['Lib1', 'Lib2'].include? target.name\n config.build_settings['ARCHS[sdk=iphonesimulator*]'] = 'x86_64'\n end\n end\n end\nend\n\nThen you can use #if to isolate code, that uses this frameworks.\n#if !((arch(arm64)) && targetEnvironment(simulator))\n// Not M1\n#else\n// M1\n#endif\n\nP.S.\nHere is how you can create pod with any framework.\n", "Quit Xcode.\nGo to Finder and open Application Folder.\nRight click on Xcode application, select Get Info\nIn the \"Xcode Info Window\" check on Open using Rosetta.\nOpen Xcode and try running again.enter image description here\n", "I am running Xcode on a M2 silicon macbook. 
The app I'm building has mutiple modules that variably depend on cocoapods that are not configured to build on arm64-simulators, hence the error in question.\nRunning Xcode in Rosetta worked to get the app to build onto the Sim, but only for the first build, and everything ran much slower. When I would subsequently rebuild and run the app again, it would not attach the process to the Sim! I would get a black screen and spinning wheel of death on Xcode for \"Launching app\" which was quite annoying. I had to find a different solution, which I will explain below.\nFollowing this link given by @Michael Long\nhttps://blog.sudeium.com/2021/06/18/build-for-x86-simulator-on-apple-silicon-macs/\nI had to set the \"Architectures\" (not exclude option, just general Arch) for \"Any iOS simulator SDK\" to \"x86_64\" for EVERY Project/Module in my application. Also, it was important to make sure \"Build Active Architecure Only\" was set to \"NO\". I then went to my podfile and added this:\n config.build_settings['ARCHS[sdk=iphonesimulator*]'] = 'x86_64'\n\nto the post install part of the podfile. If you find that certain internal modules cannot be found during a build, you missed a build setting described above for that module.\n" ]
[ 64, 5, 2, 0, 0 ]
[]
[]
[ "apple_silicon", "cocoa_touch", "ios", "xcode" ]
stackoverflow_0065978359_apple_silicon_cocoa_touch_ios_xcode.txt
Q: I can't create an order on the exchange bybit using python (pybit) I need help. from pybit.usdt_perpetual import HTTP session_auth_ = HTTP( endpoint='https://api.bybit.com', api_key=api_key, api_secret=secret_key ) res=session_auth_.place_active_order( symbol='LTCUSDT', side='Sell', order_type='Limit', qty=1, price=56.01, time_in_force='GoodTillCancel', reduce_only=False, close_on_trigger=False ) Error: pybit.exceptions.InvalidRequestError: Oc_diff[568068600], new_oc[568068600] with ob[0]+ab[0] (ErrCode: 130021) (ErrTime: 20:24:51). Request → POST https://api.bybit.com/private/linear/order/create: {'api_key': '.........', 'close_on_trigger': False, 'order_type': 'Limit', 'price': 56.1, 'qty': 1, 'recv_window': 5000, 'reduce_only': False, 'side': 'Sell', 'symbol': 'LTCUSDT', 'time_in_force': 'GoodTillCancel', 'timestamp': 1666815890695, 'sign': 'cf8c055049303634c8c6aa17077689ddb6d8ca490302e392b0590b3dbd02ca19'}. I tried to change the quantity and the price, but it did not help.
I can't create an order on the exchange bybit using python (pybit)
I need help. from pybit.usdt_perpetual import HTTP session_auth_ = HTTP( endpoint='https://api.bybit.com', api_key=api_key, api_secret=secret_key ) res=session_auth_.place_active_order( symbol='LTCUSDT', side='Sell', order_type='Limit', qty=1, price=56.01, time_in_force='GoodTillCancel', reduce_only=False, close_on_trigger=False ) Error: pybit.exceptions.InvalidRequestError: Oc_diff[568068600], new_oc[568068600] with ob[0]+ab[0] (ErrCode: 130021) (ErrTime: 20:24:51). Request → POST https://api.bybit.com/private/linear/order/create: {'api_key': '.........', 'close_on_trigger': False, 'order_type': 'Limit', 'price': 56.1, 'qty': 1, 'recv_window': 5000, 'reduce_only': False, 'side': 'Sell', 'symbol': 'LTCUSDT', 'time_in_force': 'GoodTillCancel', 'timestamp': 1666815890695, 'sign': 'cf8c055049303634c8c6aa17077689ddb6d8ca490302e392b0590b3dbd02ca19'}. I tried to change the quantity and the price, but it did not help.
[ "130021 : order cost not available\nI am not 100% sure what this means.\nNot having enough funds can be an issue.\n" ]
[ 0 ]
[]
[]
[ "python_bybit" ]
stackoverflow_0074213930_python_bybit.txt
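To test the funds hypothesis from the answer, the wallet can be queried on the same session before placing the order. A sketch against pybit's v2 usdt_perpetual HTTP class (the method name and response layout follow Bybit's v2 wallet-balance endpoint; verify both against the pybit version you have installed):
# Hypothetical pre-check: confirm there is enough available USDT margin.
balance = session_auth_.get_wallet_balance(coin="USDT")
print(balance["result"]["USDT"]["available_balance"])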
Q: Testing: When writing a HTTP response to the socket, write the headers, then sleep before writing the body This is surely a weird one ... I'm doing some extreme integration style testing on a custom Java HTTP client for a backend service I'm working with. For reasons which don't matter here, the client has some specific quirks and a custom solution was the only real option. For automated testing, I've built a "fake" version of the backend service by spinning up a Jetty server locally and having it behave in different ways e.g. return 500, wait e.g. 4 seconds before giving a response to simulate latency etc and firing off a battery of tests against it with the client at build time. Given the nature of this client, there is an unusual and specific scenario which I need to test and I'm trying to find a way to make my Jetty server behave in the correct fashion. Basically, when returning the HTTP response, I need to immediately return the HTTP Headers and the first few bytes of the HTTP body and then sleep. The goal is to trigger a socket timeout in the client specifically when reading the HTTP body. Anyone know where in Jetty I could plug something in to force this behaviour? Was looking at the Connector interface but not so sure that's the right place. Thanks for any suggestions. A: Write a few bytes to the HttpServletResponse.getOutputStream(), then call HttpServletResponse.flushBuffer() to immediately commit the response. Bonus tip: use HttpServletResponse.sendError(-1) to terminate the connection abruptly.
Testing: When writing a HTTP response to the socket, write the headers, then sleep before writing the body
This is surely a weird one ... I'm doing some extreme integration style testing on a custom Java HTTP client for a backend service I'm working with. For reasons which don't matter here, the client has some specific quirks and a custom solution was the only real option. For automated testing, I've built a "fake" version of the backend service by spinning up a Jetty server locally and having it behave in different ways e.g. return 500, wait e.g. 4 seconds before giving a response to simulate latency etc and firing off a battery of tests against it with the client at build time. Given the nature of this client, there is an unusual and specific scenario which I need to test and I'm trying to find a way to make my Jetty server behave in the correct fashion. Basically, when returning the HTTP response, I need to immediately return the HTTP Headers and the first few bytes of the HTTP body and then sleep. The goal is to trigger a socket timeout in the client specifically when reading the HTTP body. Anyone know where in Jetty I could plug something in to force this behaviour? Was looking at the Connector interface but not so sure that's the right place. Thanks for any suggestions.
[ "Write a few bytes to the HttpServletResponse.getOutputStream(), then call HttpServletResponse.flushBuffer() to immediately commit the response.\nBonus tip: use HttpServletResponse.sendError(-1) to terminate the connection abruptly.\n" ]
[ 0 ]
[]
[]
[ "jetty" ]
stackoverflow_0074657081_jetty.txt
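A minimal sketch of the answer's suggestion as a servlet you could mount in the fake Jetty server (the class name is hypothetical; javax.servlet is shown, use jakarta.servlet on Jetty 11+). Declaring a Content-Length larger than what gets flushed is what keeps the client blocked reading the body until its socket timeout fires:
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SlowBodyServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        resp.setContentLength(1024);                      // promise more body than we send
        resp.getOutputStream().write("first bytes".getBytes());
        resp.flushBuffer();                               // commit headers + partial body immediately
        try {
            Thread.sleep(30_000);                         // stall; the client should time out mid-body
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}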
Q: How to show the lowest price of a variable in woocommerce product by specifying the price I have a WordPress woo-commerce site I'm working on which has variable products of 1pack, 5pack, 10pack & 30pack. The price for 1pack if you choose the 30pack variant is the cheapest, and this is the price I wish to show to people who visit the site. I'm trying to have a place in the back end of the site where this price can be specified and displayed, as the calculation varies for each product. I modified this code here Display prefixed price and additional unit price in Woocommerce simple products. add_filter( 'woocommerce_get_price_html', 'unit_product_price_on_archives', 10, 2 ); function unit_product_price_on_archives( $price, $product ) { if ( is_product() || is_product_category() || is_product_tag() ) { $unit_divider = 6; $group_suffix = ' '. __('pack', 'woocommerce'); $unit_suffix = ' '. __('(per pack)', 'woocommerce'); if( $product->is_on_sale() ) { $regular_price_unit = $product->get_regular_price() / $unit_divider; $regular_price_unit = wc_get_price_to_display( $product, array( 'price' => $regular_price_unit ) ); $regular_price_group = $product->get_regular_price(); $regular_price_group = wc_get_price_to_display( $product, array( 'price' => $regular_price_group ) ); $group_price_sale = $product->get_sale_price(); $group_price_sale = wc_get_price_to_display( $product, array( 'price' => $group_price_sale ) ); $group_price_sale = wc_format_sale_price( $regular_price_group, $group_price_sale ) . $group_suffix; $unit_price_sale = $product->get_sale_price() / $unit_divider; $unit_price_sale = wc_get_price_to_display( $product, array( 'price' => $unit_price_sale ) ); $unit_price_sale = wc_format_sale_price( $regular_price_unit, $unit_price_sale ) . $unit_suffix; $price = $group_price_sale . '<br>' . $unit_price_sale; } else { $group_price = $price; $group_price = $group_price . $group_suffix; $unit_price = $product->get_price() / $unit_divider; $unit_price = wc_get_price_to_display( $product, array( 'price' => $unit_price ) ); $unit_price = $price = wc_price($unit_price) . $unit_suffix; $price = $group_price . '<br>' . $unit_price; } } return $price; } But the code only divided the original simple product by 6. A: this should achieve the result you're looking for: /** Variable Product Price Range: "From: $$$min_price" */ add_filter( 'woocommerce_variable_price_html', 'bbloomer_variation_price_format_min', 9999, 2 ); function bbloomer_variation_price_format_min( $price, $product ) { $prices = $product->get_variation_prices( true ); $min_price = current( $prices['price'] ); $price = sprintf( __( 'From: %1$s', 'woocommerce' ), wc_price( $min_price ) ); return $price; }
How to show the lowest price of a variable in woocommerce product by specifying the price
I have a WordPress woo-commerce site I'm working on which has variable products of 1pack, 5pack, 10pack & 30pack. The price for 1pack if you choose the 30pack variant is the cheapest, and this is the price I wish to show to people who visit the site. I'm trying to have a place in the back end of the site where this price can be specified and displayed, as the calculation varies for each product. I modified this code here Display prefixed price and additional unit price in Woocommerce simple products. add_filter( 'woocommerce_get_price_html', 'unit_product_price_on_archives', 10, 2 ); function unit_product_price_on_archives( $price, $product ) { if ( is_product() || is_product_category() || is_product_tag() ) { $unit_divider = 6; $group_suffix = ' '. __('pack', 'woocommerce'); $unit_suffix = ' '. __('(per pack)', 'woocommerce'); if( $product->is_on_sale() ) { $regular_price_unit = $product->get_regular_price() / $unit_divider; $regular_price_unit = wc_get_price_to_display( $product, array( 'price' => $regular_price_unit ) ); $regular_price_group = $product->get_regular_price(); $regular_price_group = wc_get_price_to_display( $product, array( 'price' => $regular_price_group ) ); $group_price_sale = $product->get_sale_price(); $group_price_sale = wc_get_price_to_display( $product, array( 'price' => $group_price_sale ) ); $group_price_sale = wc_format_sale_price( $regular_price_group, $group_price_sale ) . $group_suffix; $unit_price_sale = $product->get_sale_price() / $unit_divider; $unit_price_sale = wc_get_price_to_display( $product, array( 'price' => $unit_price_sale ) ); $unit_price_sale = wc_format_sale_price( $regular_price_unit, $unit_price_sale ) . $unit_suffix; $price = $group_price_sale . '<br>' . $unit_price_sale; } else { $group_price = $price; $group_price = $group_price . $group_suffix; $unit_price = $product->get_price() / $unit_divider; $unit_price = wc_get_price_to_display( $product, array( 'price' => $unit_price ) ); $unit_price = $price = wc_price($unit_price) . $unit_suffix; $price = $group_price . '<br>' . $unit_price; } } return $price; } But the code only divided the original simple product by 6.
[ "this should achieve the result you're looking for:\n/** Variable Product Price Range: \"From: $$$min_price\" */\nadd_filter( 'woocommerce_variable_price_html', ' \nbbloomer_variation_price_format_min', 9999, 2 );\nfunction bbloomer_variation_price_format_min( $price, $product ) {\n$prices = $product->get_variation_prices( true );\n$min_price = current( $prices['price'] );\n$price = sprintf( __( 'From: %1$s', 'woocommerce' ), wc_price( $min_price ) );\nreturn $price;\n}\n\n" ]
[ 0 ]
[]
[]
[ "php", "price", "product", "woocommerce", "wordpress" ]
stackoverflow_0074092943_php_price_product_woocommerce_wordpress.txt
Q: How to explicitly add role to a user in discord bot I'm relatively new to programming and am trying to code a bot for a server I'm in. I'd ideally like to assign a user to a specific role based on them sending a message containing 'gm' or 'good morning'. Right now, the bot can read the message and send a reply. But I'm a bit lost trying to figure out how to actually add the role to a user once the 'gm' message is read. `@client.event async def on_ready(): print(f'We have logged in as {client.user}') async def addRole(user : discord.Member, role : discord.Role = BagChaser): if role in user.roles: return else: await user.add_roles(role) @client.event async def on_message(message): if message.author == client.user: return msg = message.content.lower() words_list = ['gm', 'good morning'] if any(word in msg for word in words_list): # await addRole(message.author, BagChaser) await message.channel.send(f'Lets get this bag, {message.author}') await message.author.add_roles(BagChaser)` the commented line and the last line were some ideas of how to add the role 'BagChaser' to the author of the message. I tried setting the role parameter in the addRole function to BagChaser since that will never change, but this seems incorrect. The role is already made in my server, but I'm not sure how I can make the bot aware of that role in the code. Any help would be greatly appreciated! I tried explicitly calling out my role but I can't get it recognized. A: You need a role object, and to do that, you need a guild object, which you can get with message.author.guild. From this, you can get the Role object: role = message.author.guild.get_role(ROLE_ID) (get_role is a synchronous cache lookup, so it is not awaited). Note that you need to get the role ID yourself. The easiest method to do so is to go into Discord and enable Developer settings, then right click the role on someone's profile and click "Copy ID". Once you have this role object, you can just apply it with await message.author.add_roles(role). Complete code: role_id = ... author = message.author role = author.guild.get_role(role_id) await author.add_roles(role) Make sure your bot has the Manage Roles permission
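Putting the answer's pieces together, a minimal sketch of the handler could look like the following. ROLE_ID is a placeholder for the ID copied from Discord, and on discord.py 2.x the message content intent has to be enabled in addition to the Manage Roles permission:

```python
import discord

intents = discord.Intents.default()
intents.message_content = True  # required to read message text on discord.py 2.x
client = discord.Client(intents=intents)

ROLE_ID = 123456789012345678  # placeholder: copy the real role ID from Discord

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    if any(word in message.content.lower() for word in ("gm", "good morning")):
        role = message.guild.get_role(ROLE_ID)  # cache lookup, not a coroutine
        if role is not None and role not in message.author.roles:
            await message.author.add_roles(role)
            await message.channel.send(f"Lets get this bag, {message.author.mention}")

client.run("YOUR_BOT_TOKEN")  # placeholder token
```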
How to explicitly add role to a user in discord bot
I'm relatively new to programming and am trying to code a bot for a server I'm in. I'd ideally like to assign a user to a specific role based on them sending a message containing 'gm' or 'good morning'. Right now, the bot can read the message and send a reply. But I'm a bit lost trying to figure out how to actually add the role to a user once the 'gm' message is read. `@client.event async def on_ready(): print(f'We have logged in as {client.user}') async def addRole(user : discord.Member, role : discord.Role = BagChaser): if role in user.roles: return else: await user.add_roles(role) @client.event async def on_message(message): if message.author == client.user: return msg = message.content.lower() words_list = ['gm', 'good morning'] if any(word in msg for word in words_list): # await addRole(message.author, BagChaser) await message.channel.send(f'Lets get this bag, {message.author}') await message.author.add_roles(BagChaser)` the commented line and the last line were some ideas of how to add the role 'BagChaser' to the author of the message. I tried setting the role parameter in the addRole function to BagChaser since that will never change, but this seems incorrect. The role is already made in my server, but I'm not sure how I can make the bot aware of that role in the code. Any help would be greatly appreciated! I tried explicitly calling out my role but i can't get it recognized.
[ "You need a role object, and to do that, you need a guild object, which you can get with message.author.guild.\nFrom this, you can get the Role object:\nrole = await message.author.guild.get_role(ROLE_ID)\n\nNote that you need to get the role ID yourself. The easiest method to do so is to go into Discord and enable Developer settings, then right click the role on someone's profile and click \"Copy ID\". Once you have this role object, you can just apply it with message.author.add_roles(role).\nComplete code:\nrole_id = ...\n\nauthor = message.author;\nrole = await author.guild.get_role(role_id)\nawait author.add_roles(role)\n\nMake sure your bot has the Manage Roles permission\n" ]
[ 0 ]
[]
[]
[ "bots", "discord", "python" ]
stackoverflow_0074660454_bots_discord_python.txt
Q: I'd like to convert my data from rows to columns in PL/SQL. So, can anyone assist me with this? My Input data is : Vechical_NUM Vehical_name Period Year cost 1111 Car Aug 2022 $5000 1111 Car Sep 2022 $5100 1111 Car Oct 2022 $5300 1111 Car Nov 2022 $5000 1111 Car Dec 2022 $5000 1111 Car Jan 2023 $5000 1111 Car Feb 2023 $5000 : : : : : : : : : : 1111 Car Aug 2023 $10000 1111 Car sep 2023 $10000 1111 Car Oct 2023 $10000 1111 Car Nov 2023 $10000 1111 Car Dec 2023 $10000 2222 Truck Aug 2022 $10000 2222 Truck Sep 2022 $10000 2222 Truck Oct 2022 $10000 2222 Truck Nov 2022 $10000 2222 Truck Dec 2022 $10000 2222 Truck Jan 2023 $10000 2222 Truck Feb 2023 $10000 2222 Truck Mar 2023 $10000 : : : : : 2222 Truck Dec 2023 $5000 Expected Out put Vechical_NUM Vehical_name Aug_2022 Sep_2022 Oct_2022 ....... Dec-2023 1111 Car $5000 $5100 $5300 ....... $5000 2222 Truck $10000 $10000 $10000 ----- $10000 A: Use a PIVOT: SELECT * FROM table_name PIVOT ( SUM(cost) FOR (period, year) IN ( ('Aug', 2022) AS aug_2022, ('Sep', 2022) AS sep_2022, ('Oct', 2022) AS oct_2022, ('Nov', 2022) AS nov_2022, ('Dec', 2022) AS dec_2022, ('Jan', 2023) AS jan_2023, ('Feb', 2023) AS feb_2023, -- ... ('Dec', 2023) AS dec_2023 ) ) Which, for the sample data: CREATE TABLE table_name (Vechical_NUM, Vehical_name, Period, Year, cost) AS SELECT 1111, 'Car', 'Aug', 2022, 5000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Sep', 2022, 5100 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Oct', 2022, 5300 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Nov', 2022, 5000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Dec', 2022, 5000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Jan', 2023, 5000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Feb', 2023, 5000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Aug', 2023, 10000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'sep', 2023, 10000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Oct', 2023, 10000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Nov', 2023, 10000 FROM DUAL UNION ALL SELECT 1111, 'Car', 'Dec', 2023, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Aug', 2022, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Sep', 2022, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Oct', 2022, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Nov', 2022, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Dec', 2022, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Jan', 2023, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Feb', 2023, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Mar', 2023, 10000 FROM DUAL UNION ALL SELECT 2222, 'Truck', 'Dec', 2023, 5000 FROM DUAL; Outputs: VECHICAL_NUM VEHICAL_NAME AUG_2022 SEP_2022 OCT_2022 NOV_2022 DEC_2022 JAN_2023 FEB_2023 DEC_2023 1111 Car 5000 5100 5300 5000 5000 5000 5000 10000 2222 Truck 10000 10000 10000 10000 10000 10000 10000 5000 fiddle I need to pass the period and year dynamically. 
If you want the current month and some previous and future months then use: SELECT vechical_num, vehical_name, SUM( CASE WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -4), 'Mon', 'NLS_DATE_LANGUAGE=English') AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -4)) THEN cost END ) AS prev_month_4, SUM( CASE WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -3), 'Mon', 'NLS_DATE_LANGUAGE=English') AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -3)) THEN cost END ) AS prev_month_3, SUM( CASE WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -2), 'Mon', 'NLS_DATE_LANGUAGE=English') AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -2)) THEN cost END ) AS prev_month_2, SUM( CASE WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -1), 'Mon', 'NLS_DATE_LANGUAGE=English') AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -1)) THEN cost END ) AS prev_month_1, SUM( CASE WHEN period = TO_CHAR(SYSDATE, 'Mon', 'NLS_DATE_LANGUAGE=English') AND year = EXTRACT(YEAR FROM SYSDATE) THEN cost END ) AS current_month, SUM( CASE WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, 1), 'Mon', 'NLS_DATE_LANGUAGE=English') AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, 1)) THEN cost END ) AS next_month_1 FROM table_name GROUP BY vechical_num, vehical_name; Which outputs: VECHICAL_NUM VEHICAL_NAME PREV_MONTH_4 PREV_MONTH_3 PREV_MONTH_2 PREV_MONTH_1 CURRENT_MONTH NEXT_MONTH_1 1111 Car 5000 5100 5300 5000 5000 5000 2222 Truck 10000 10000 10000 10000 10000 10000 fiddle If you want to name the columns then don't do it in SQL. In SQL, you need to know the columns and identifiers beforehand and cannot generate them dynamically in the query. Instead, transpose the columns in your table to rows in whatever third-party client application (Java, C#, PHP, etc.) that you are using to access the database.
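For the follow-up about passing the period and year dynamically: a common workaround within the database is to build the PIVOT IN-list as a string in PL/SQL and open the result with dynamic SQL. A rough sketch, assuming a fixed window of the previous 12 and next 6 months and that a SYS_REFCURSOR is acceptable to the caller; the column aliases and window size are arbitrary choices:

```sql
DECLARE
  v_in_list  VARCHAR2(4000);
  v_sql      VARCHAR2(32767);
  v_cur      SYS_REFCURSOR;
BEGIN
  -- Build ('Aug', 2022) AS m_Aug_2022, ... for each month in the window
  FOR i IN -12 .. 5 LOOP
    v_in_list := v_in_list
      || CASE WHEN i > -12 THEN ', ' END
      || '(''' || TO_CHAR(ADD_MONTHS(SYSDATE, i), 'Mon', 'NLS_DATE_LANGUAGE=English') || ''', '
      || EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, i)) || ') AS '
      || TO_CHAR(ADD_MONTHS(SYSDATE, i), '"m_"Mon"_"YYYY', 'NLS_DATE_LANGUAGE=English');
  END LOOP;

  v_sql := 'SELECT * FROM table_name PIVOT (SUM(cost) FOR (period, year) IN (' || v_in_list || '))';
  OPEN v_cur FOR v_sql;
  -- hand v_cur back to the caller, e.g. through an OUT parameter
END;
/
```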
I'd like to convert my data from rows to columns in PL/SQL. So, can anyone assist me with this?
My Input data is : Vechical_NUM Vehical_name Period Year cost 1111 Car Aug 2022 $5000 1111 Car Sep 2022 $5100 1111 Car Oct 2022 $5300 1111 Car Nov 2022 $5000 1111 Car Dec 2022 $5000 1111 Car Jan 2023 $5000 1111 Car Feb 2023 $5000 : : : : : : : : : : 1111 Car Aug 2023 $10000 1111 Car sep 2023 $10000 1111 Car Oct 2023 $10000 1111 Car Nov 2023 $10000 1111 Car Dec 2023 $10000 2222 Truck Aug 2022 $10000 2222 Truck Sep 2022 $10000 2222 Truck Oct 2022 $10000 2222 Truck Nov 2022 $10000 2222 Truck Dec 2022 $10000 2222 Truck Jan 2023 $10000 2222 Truck Feb 2023 $10000 2222 Truck Mar 2023 $10000 : : : : : 2222 Truck Dec 2023 $5000 Expected Out put Vechical_NUM Vehical_name Aug_2022 Sep_2022 Oct_2022 ....... Dec-2023 1111 Car $5000 $5100 $5300 ....... $5000 2222 Truck $10000 $10000 $10000 ----- $10000
[ "Use a PIVOT:\nSELECT *\nFROM table_name\nPIVOT (\n SUM(cost) FOR (period, year) IN (\n ('Aug', 2022) AS aug_2022,\n ('Sep', 2022) AS sep_2022,\n ('Oct', 2022) AS oct_2022,\n ('Nov', 2022) AS nov_2022,\n ('Dec', 2022) AS dec_2022,\n ('Jan', 2023) AS jan_2023,\n ('Feb', 2023) AS feb_2023,\n -- ...\n ('Dec', 2023) AS dec_2023\n )\n)\n\nWhich, for the sample data:\nCREATE TABLE table_name (Vechical_NUM, Vehical_name, Period, Year, cost) AS\nSELECT 1111, 'Car', 'Aug', 2022, 5000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Sep', 2022, 5100 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Oct', 2022, 5300 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Nov', 2022, 5000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Dec', 2022, 5000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Jan', 2023, 5000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Feb', 2023, 5000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Aug', 2023, 10000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'sep', 2023, 10000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Oct', 2023, 10000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Nov', 2023, 10000 FROM DUAL UNION ALL\nSELECT 1111, 'Car', 'Dec', 2023, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Aug', 2022, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Sep', 2022, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Oct', 2022, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Nov', 2022, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Dec', 2022, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Jan', 2023, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Feb', 2023, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Mar', 2023, 10000 FROM DUAL UNION ALL\nSELECT 2222, 'Truck', 'Dec', 2023, 5000 FROM DUAL;\n\nOutputs:\n\n\n\n\nVECHICAL_NUM\nVEHICAL_NAME\nAUG_2022\nSEP_2022\nOCT_2022\nNOV_2022\nDEC_2022\nJAN_2023\nFEB_2023\nDEC_2023\n\n\n\n\n1111\nCar\n5000\n5100\n5300\n5000\n5000\n5000\n5000\n10000\n\n\n2222\nTruck\n10000\n10000\n10000\n10000\n10000\n10000\n10000\n5000\n\n\n\n\nfiddle\n\n\nI need to pass the period and year dynamically.\n\nIf you want the current month and some previous and future months then use:\nSELECT vechical_num,\n vehical_name,\n SUM(\n CASE\n WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -4), 'Mon', 'NLS_DATE_LANGUAGE=English')\n AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -4))\n THEN cost\n END\n ) AS prev_month_4,\n SUM(\n CASE\n WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -3), 'Mon', 'NLS_DATE_LANGUAGE=English')\n AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -3))\n THEN cost\n END\n ) AS prev_month_3,\n SUM(\n CASE\n WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -2), 'Mon', 'NLS_DATE_LANGUAGE=English')\n AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -2))\n THEN cost\n END\n ) AS prev_month_2,\n SUM(\n CASE\n WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, -1), 'Mon', 'NLS_DATE_LANGUAGE=English')\n AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, -1))\n THEN cost\n END\n ) AS prev_month_1,\n SUM(\n CASE\n WHEN period = TO_CHAR(SYSDATE, 'Mon', 'NLS_DATE_LANGUAGE=English')\n AND year = EXTRACT(YEAR FROM SYSDATE)\n THEN cost\n END\n ) AS current_month,\n SUM(\n CASE\n WHEN period = TO_CHAR(ADD_MONTHS(SYSDATE, 1), 'Mon', 'NLS_DATE_LANGUAGE=English')\n AND year = EXTRACT(YEAR FROM ADD_MONTHS(SYSDATE, 1))\n THEN cost\n END\n ) AS next_month_1\nFROM table_name\nGROUP BY\n vechical_num,\n vehical_name;\n\nWhich 
outputs:\n\n\n\n\nVECHICAL_NUM\nVEHICAL_NAME\nPREV_MONTH_4\nPREV_MONTH_3\nPREV_MONTH_2\nPREV_MONTH_1\nCURRENT_MONTH\nNEXT_MONTH_1\n\n\n\n\n1111\nCar\n5000\n5100\n5300\n5000\n5000\n5000\n\n\n2222\nTruck\n10000\n10000\n10000\n10000\n10000\n10000\n\n\n\n\nfiddle\nIf you want to name the columns then don't do it in SQL. In SQL, you need to know the columns and identifiers beforehand and cannot generate them dynamically in the query. Instead, transpose the columns in your table to rows in whatever third-party client application (Java, C#, PHP, etc.) that you are using to access the database.\n" ]
[ 0 ]
[]
[]
[ "oracle", "sql" ]
stackoverflow_0074660411_oracle_sql.txt
Q: bash sh script with user permissions 755, cannot be run Why can't I run it? If I run it in the following way, it works: [usuario@MyPC ~]$ sh ./x11vnc.sh PORT=5900 First, the permissions, so that you can see that it is in 755. ls -l -rw-rw-rw- 1 usuario users 4485 dic 2 11:35 x11vnc.log -rwxr-xr-x 1 usuario users 117 nov 7 14:06 x11vnc.sh Second, the script file cat x11vnc.sh #!/bin/bash x11vnc -nap -wait 30 -noxdamage -passwd somepass -display :0 -forever -o ~/x11vnc.log -bg -rfbport 5900 Third, I must clarify the structure of the disks lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 3,6T 0 disk ├─md126 9:126 0 3,6T 0 raid1 │ ├─md126p1 259:3 0 3,6T 0 part /home/usuario │ └─md126p2 259:4 0 8G 0 part [SWAP] └─md127 9:127 0 0B 0 md sdb 8:16 0 3,6T 0 disk ├─md126 9:126 0 3,6T 0 raid1 │ ├─md126p1 259:3 0 3,6T 0 part /home/usuario │ └─md126p2 259:4 0 8G 0 part [SWAP] └─md127 9:127 0 0B 0 md nvme0n1 259:0 0 232,9G 0 disk ├─nvme0n1p1 259:1 0 232,6G 0 part / └─nvme0n1p2 259:2 0 256M 0 part /boot I am the user usuario. I can edit and modify the x11vnc.sh file as I wish, but I can't run it, and I need to run it to include in the auto-start session of the plasma. [usuario@MyPC ~]$ ~/x11vnc.sh -bash: /home/usuario/x11vnc.sh: permission denied Why can't I run it? If I run it in the following way, it works: [usuario@MyPC ~]$ sh ./x11vnc.sh PORT=5900 A: Thank you all, especially to @CharlesDuffy I changed the fstab line from UUID=16b711b6-789f-4c27-9d6c-d0f744407f00 /home/usuario ext4 auto,exec,rw,user,relatime 0 2 to UUID=16b711b6-789f-4c27-9d6c-d0f744407f00 /home/usuario ext4 auto,rw,user,exec,relatime 0 2 The position of exec is important, since user also applies noexec. By putting exec after user, you ensure that exec is set. The most important options should be listed last.
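A quick way to confirm this diagnosis, and to test the fix without editing fstab and rebooting, is to inspect the effective mount options and remount with exec. A sketch using the mount point from the question:

```sh
# Show the options the kernel actually applied to the mount
findmnt -no OPTIONS /home/usuario | tr ',' '\n' | grep -i exec

# Temporarily allow execution on the filesystem (root required)
sudo mount -o remount,exec /home/usuario

# The script should now run directly
/home/usuario/x11vnc.sh
```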
bash sh script with user permissions 755, cannot be run
Why can't I run it? If I run it in the following way, it works: [usuario@MyPC ~]$ sh ./x11vnc.sh PORT=5900 First, the permissions, so that you can see that it is in 755. ls -l -rw-rw-rw- 1 usuario users 4485 dic 2 11:35 x11vnc.log -rwxr-xr-x 1 usuario users 117 nov 7 14:06 x11vnc.sh Second, the script file cat x11vnc.sh #!/bin/bash x11vnc -nap -wait 30 -noxdamage -passwd somepass -display :0 -forever -o ~/x11vnc.log -bg -rfbport 5900 Third, I must clarify the structure of the disks lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS sda 8:0 0 3,6T 0 disk ├─md126 9:126 0 3,6T 0 raid1 │ ├─md126p1 259:3 0 3,6T 0 part /home/usuario │ └─md126p2 259:4 0 8G 0 part [SWAP] └─md127 9:127 0 0B 0 md sdb 8:16 0 3,6T 0 disk ├─md126 9:126 0 3,6T 0 raid1 │ ├─md126p1 259:3 0 3,6T 0 part /home/usuario │ └─md126p2 259:4 0 8G 0 part [SWAP] └─md127 9:127 0 0B 0 md nvme0n1 259:0 0 232,9G 0 disk ├─nvme0n1p1 259:1 0 232,6G 0 part / └─nvme0n1p2 259:2 0 256M 0 part /boot I am the user usuario. I can edit and modify the x11vnc.sh file as I wish, but I can't run it, and I need to run it to include in the auto-start session of the plasma. [usuario@MyPC ~]$ ~/x11vnc.sh -bash: /home/usuario/x11vnc.sh: permission denied Why can't I run it? If I run it in the following way, it works: [usuario@MyPC ~]$ sh ./x11vnc.sh PORT=5900
[ "Thank you all, specially to @CharlesDuffy\nI change the fstab line from\nUUID=16b711b6-789f-4c27-9d6c-d0f744407f00 /home/usuario ext4 auto,exec,rw,user,relatime 0 2\n\nto\nUUID=16b711b6-789f-4c27-9d6c-d0f744407f00 /home/usuario ext4 auto,rw,user,exec,relatime 0 2\n\nThe position of exec is important, since user also applies noexec. By putting exec after user, you ensure that exec is set. The most important options should be listed last\n" ]
[ 3 ]
[]
[]
[ "bash", "chmod", "linux", "sh", "shell" ]
stackoverflow_0074658091_bash_chmod_linux_sh_shell.txt
Q: Comparing files and seeing the differences I have two Excel files, each with four columns of data. I need to compare the two Excel files and see the difference for each row. To achieve this, I'm thinking of: Using a VLookup feature or Writing Python code to achieve this. I was wondering if there is any better approach for this, please? Thank you. A: Just an idea. Here is an example of how you could use the =IF function to compare two Excel files: 1. Open both Excel files in separate tabs in Excel. 2. In the first Excel file, select the cell where you want to display the result of the comparison. 3. Type the =IF function in the selected cell, followed by the cell reference for the corresponding cell in the second Excel file. For example, if you want to compare the values in cell A1 of the first Excel file with cell A1 of the second Excel file, you would type =IF(A1=Sheet2!A1, "Match", "Difference") in the selected cell. 4. Press Enter to execute the =IF function and display the result of the comparison. If the values in the two cells are the same, the =IF function will display "Match". If the values are different, the =IF function will display "Difference". 5. Repeat the above steps for each cell that you want to compare in the two Excel files. And when you've done the first two you might be able to select the two created IF's and drag them down so they will fill out the rest (if you someday have more than 4 columns).
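For the Python route the asker mentions, pandas can produce a row-by-row diff in a few lines. A sketch, assuming both workbooks have identical columns and row order; the file names are hypothetical, and the result_names argument needs pandas 1.5+ (drop it on older versions):

```python
import pandas as pd

# Hypothetical file names; pass sheet_name=... if the data is not on the first sheet
df1 = pd.read_excel("file_a.xlsx")
df2 = pd.read_excel("file_b.xlsx")

# Cell-by-cell differences; cells that match are dropped from the result
diff = df1.compare(df2, result_names=("file_a", "file_b"))
diff.to_excel("differences.xlsx")
print(diff)
```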
Comparing files and seeing the differences
I have two Excel files, each with four columns of data. I need to compare the two Excel files and see the difference for each row. To achieve this, I'm thinking of: Using a VLookup feature or Writing Python code to achieve this. I was wondering if there is any better approach for this, please? Thank you.
[ "Just an idea.\nHere is an example of how you could use the =IF function to compare two Excel files:\n\n1. Open both Excel files in separate tabs in Excel.\n\n2. In the first Excel file, select the cell where you want to display the result of the comparison.\n\n3. Type the =IF function in the selected cell, followed by the cell reference for the corresponding cell in the second Excel file. For example, if you want to compare the values in cell A1 of the first Excel file with cell A1 of the second Excel file, you would type =IF(A1=Sheet2!A1, \"Match\", \"Difference\") in the selected cell.\n\n4. Press Enter to execute the =IF function and display the result of the comparison. If the values in the two cells are the same, the =IF function will display \"Match\". If the values are different, the =IF function will display \"Difference\".\n\n5. Repeat the above steps for each cell that you want to compare in the two Excel files. And when you've done it to two you mgiht be able to select the two created IF's and drag it down so it will fill out the rest (if you someday have more than 4 columns).\n\n" ]
[ 0 ]
[]
[]
[ "compare", "excel", "vlookup" ]
stackoverflow_0074660422_compare_excel_vlookup.txt
Q: How can I add a volume to an existing Docker container? I have a Docker container that I've created simply by installing Docker on Ubuntu and doing: sudo docker run -i -t ubuntu /bin/bash I immediately started installing Java and some other tools, spent some time with it, and stopped the container by exit Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker -v /somedir run ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume. All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume. Is this indeed the only way to add a volume to an existing container? A: You can commit your existing container (that is create a new image from container’s changes) and then run it with your new mounts. Example: $ docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 5a8f89adeead ubuntu:14.04 "/bin/bash" About a minute ago Exited (0) About a minute ago agitated_newton $ docker commit 5a8f89adeead newimagename $ docker run -ti -v "$PWD/somedir":/somedir newimagename /bin/bash If it's all OK, stop your old container, and use this new one. You can also commit a container using its name, for example: docker commit agitated_newton newimagename That's it :) A: We don't have any way to add volume in running container, but to achieve this objective you may use the below commands: Copy files/folders between a container and the local filesystem: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH docker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH For reference see: https://docs.docker.com/engine/reference/commandline/cp/ A: I've successfully mount /home/<user-name> folder of my host to the /mnt folder of the existing (not running) container. You can do it in the following way: Open configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (may be config.json for older versions of docker). Find MountPoints section, which was empty in my case: "MountPoints":{}. Next replace the contents with something like this (you can copy proper contents from another container with proper settings): "MountPoints":{"/mnt":{"Source":"/home/<user-name>","Destination":"/mnt","RW":true,"Name":"","Driver":"","Type":"bind","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/home/<user-name>","Target":"/mnt"},"SkipMountpointCreation":false}} or the same (formatted): "MountPoints": { "/mnt": { "Source": "/home/<user-name>", "Destination": "/mnt", "RW": true, "Name": "", "Driver": "", "Type": "bind", "Propagation": "rprivate", "Spec": { "Type": "bind", "Source": "/home/<user-name>", "Target": "/mnt" }, "SkipMountpointCreation": false } } Restart the docker service: service docker restart This works for me with Ubuntu 18.04.1 and Docker 18.09.0 A: Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. This isn't something that's built into Docker out of the box, but possible to accomplish. As he also points out This will not work on filesystems which are not based on block devices. 
It will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true). Also, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that YMMV A: Unfortunately the switch option to mount a volume is only found in the run command. docker run --help -v, --volume list Bind mount a volume (default []) There is a way you can work around this though so you won't have to reinstall the applications you've already set up on your container. Export your container docker container export -o ./myimage.docker mycontainer Import as an image docker import ./myimage.docker myimage Then docker run -i -t -v /somedir --name mycontainer myimage /bin/bash A: A note for using Docker Windows containers after I had to look for this problem for a long time! Conditions: Windows 10 Docker Desktop (latest version) using Docker Windows Container for image microsoft/mssql-server-windows-developer Problem: I wanted to mount a host directory into my windows container. Solution as partially described here: create docker container docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer go to command shell in container docker exec -it <CONTAINERID> cmd.exe create DIR mkdir DirForMount stop container docker container stop <CONTAINERID> commit container docker commit <CONTAINERID> <NEWIMAGENAME> delete old container docker container rm <CONTAINERID> create new container with new image and volume mounting docker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\DirToMount:C:\DirForMount <NEWIMAGENAME> After this I solved this problem on docker windows containers. A: My answer will be a little different. You can stop your container, add the volume and restart it. How to do it, follow the steps. docker volume create ubuntu-volume docker stop <container-name> sudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash A: You can stop and remove the container, append the existing volume in a startup script, and restart from the image. If the already existing partitions do keep the data, you shouldn't experience any loss of information. This should also work the same way with Dockerfile and Docker Compose. eg (solr image). (initial script) #!/bin/sh docker pull solr:8.5 docker stop my_solr docker rm solr:8.5 docker create \ --name my_solr \ -v "/XXXX/docker/solr/solrdata":/var/solr \ -p 8983:8983 \ --restart unless-stopped \ --user 1000:1000 \ -e SOLR_HEAP=1g \ --log-opt max-size=10m \ --log-opt max-file=3 \ solr:8.5 docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib docker start my_solr file with the second volume #!/bin/sh docker pull solr:8.5 docker stop my_solr docker rm solr:8.5 docker create \ --name my_solr \ -v "/XXXX/docker/solr/solrdata":/var/solr \ -v "/XXXX/backups/solr_snapshot_folder":/var/solr_snapshots \ -p 8983:8983 \ --restart unless-stopped \ --user 1000:1000 \ -e SOLR_HEAP=1g \ --log-opt max-size=10m \ --log-opt max-file=3 \ solr:8.5 docker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib docker start my_solr
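Whichever of these approaches is used, it is worth verifying afterwards that the bind mount actually landed on the new container. A small sketch with docker inspect; mycontainer stands in for the real container name:

```sh
# List every mount of the container as "source -> destination (type)"
docker inspect --format \
  '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }} ({{ .Type }}){{ "\n" }}{{ end }}' \
  mycontainer
```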
How can I add a volume to an existing Docker container?
I have a Docker container that I've created simply by installing Docker on Ubuntu and doing: sudo docker run -i -t ubuntu /bin/bash I immediately started installing Java and some other tools, spent some time with it, and stopped the container by exit Then I wanted to add a volume and realised that this is not as straightforward as I thought it would be. If I use sudo docker -v /somedir run ... then I end up with a fresh new container, so I'd have to install Java and do what I've already done before just to arrive at a container with a mounted volume. All the documentation about mounting a folder from the host seems to imply that mounting a volume is something that can be done when creating a container. So the only option I have to avoid reconfiguring a new container from scratch is to commit the existing container to a repository and use that as the basis of a new one whilst mounting the volume. Is this indeed the only way to add a volume to an existing container?
[ "You can commit your existing container (that is create a new image from container’s changes) and then run it with your new mounts.\nExample:\n$ docker ps -a\n\nCONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES\n5a8f89adeead ubuntu:14.04 \"/bin/bash\" About a minute ago Exited (0) About a minute ago agitated_newton\n\n$ docker commit 5a8f89adeead newimagename\n$ docker run -ti -v \"$PWD/somedir\":/somedir newimagename /bin/bash\n\nIf it's all OK, stop your old container, and use this new one.\nYou can also commit a container using its name, for example:\ndocker commit agitated_newton newimagename\n\nThat's it :)\n", "We don't have any way to add volume in running container, but to achieve this objective you may use the below commands:\nCopy files/folders between a container and the local filesystem:\ndocker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH\n\ndocker cp [OPTIONS] SRC_PATH CONTAINER:DEST_PATH\n\nFor reference see:\nhttps://docs.docker.com/engine/reference/commandline/cp/\n", "I've successfully mount /home/<user-name> folder of my host to the /mnt folder of the existing (not running) container. You can do it in the following way:\n\nOpen configuration file corresponding to the stopped container, which can be found at /var/lib/docker/containers/99d...1fb/config.v2.json (may be config.json for older versions of docker).\nFind MountPoints section, which was empty in my case: \"MountPoints\":{}. Next replace the contents with something like this (you can copy proper contents from another container with proper settings): \n\n\"MountPoints\":{\"/mnt\":{\"Source\":\"/home/<user-name>\",\"Destination\":\"/mnt\",\"RW\":true,\"Name\":\"\",\"Driver\":\"\",\"Type\":\"bind\",\"Propagation\":\"rprivate\",\"Spec\":{\"Type\":\"bind\",\"Source\":\"/home/<user-name>\",\"Target\":\"/mnt\"},\"SkipMountpointCreation\":false}}\n\nor the same (formatted):\n \"MountPoints\": {\n \"/mnt\": {\n \"Source\": \"/home/<user-name>\",\n \"Destination\": \"/mnt\",\n \"RW\": true,\n \"Name\": \"\",\n \"Driver\": \"\",\n \"Type\": \"bind\",\n \"Propagation\": \"rprivate\",\n \"Spec\": {\n \"Type\": \"bind\",\n \"Source\": \"/home/<user-name>\",\n \"Target\": \"/mnt\"\n },\n \"SkipMountpointCreation\": false\n }\n }\n\n\nRestart the docker service: service docker restart\n\nThis works for me with Ubuntu 18.04.1 and Docker 18.09.0\n", "Jérôme Petazzoni has a pretty interesting blog post on how to Attach a volume to a container while it is running. 
This isn't something that's built into Docker out of the box, but possible to accomplish.\nAs he also points out\n\nThis will not work on filesystems which are not based on block devices.\nIt will only work if /proc/mounts correctly lists the block device node (which, as we saw above, is not necessarily true).\nAlso, I only tested this on my local environment; I didn’t even try on a cloud instance or anything like that\n\nYMMV\n", "Unfortunately the switch option to mount a volume is only found in the run command.\ndocker run --help\n-v, --volume list Bind mount a volume (default [])\nThere is a way you can work around this though so you won't have to reinstall the applications you've already set up on your container.\n\nExport your container\ndocker container export -o ./myimage.docker mycontainer\nImport as an image\ndocker import ./myimage.docker myimage\nThen docker run -i -t -v /somedir --name mycontainer myimage /bin/bash\n\n", "A note for using Docker Windows containers after I had to look for this problem for a long time!\nConditions:\n\nWindows 10 \nDocker Desktop (latest version)\nusing Docker Windows Container for image microsoft/mssql-server-windows-developer\n\nProblem:\n\nI wanted to mount a host directory into my windows container.\n\nSolution as partially described here:\n\ncreate docker container\n\ndocker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y microsoft/mssql-server-windows-developer\n\ngo to command shell in container\n\ndocker exec -it <CONTAINERID> cmd.exe\n\ncreate DIR\n\nmkdir DirForMount\n\nstop container\n\ndocker container stop <CONTAINERID>\n\ncommit container\n\ndocker commit <CONTAINERID> <NEWIMAGENAME>\n\ndelete old container\n\ndocker container rm <CONTAINERID>\n\ncreate new container with new image and volume mounting\n\ndocker run -d -p 1433:1433 -e sa_password=<STRONG_PASSWORD> -e ACCEPT_EULA=Y -v C:\\DirToMount:C:\\DirForMount <NEWIMAGENAME>\nAfter this I solved this problem on docker windows containers.\n", "My answer will be a little different. You can stop your container, add the volume and restart it. How to do it, follow the steps.\ndocker volume create ubuntu-volume\ndocker stop <container-name>\nsudo docker run -i -t --mount source=ubuntu-volume,target=<target-path-in-container> ubuntu /bin/bash \n\n", "You can stop and remove the container, append the existing volume in a startup script, and restart from the image. If the already existing partitions do keep the data, you shouldn't experience any loss of information.
This should also work the same way with Dockerfile and Docker composer.\neg (solr image).\n(initial script)\n#!/bin/sh\ndocker pull solr:8.5\ndocker stop my_solr\ndocker rm solr:8.5\n\ndocker create \\\n --name my_solr \\\n -v \"/XXXX/docker/solr/solrdata\":/var/solr \\\n -p 8983:8983 \\\n --restart unless-stopped \\\n --user 1000:1000 \\\n -e SOLR_HEAP=1g \\\n --log-opt max-size=10m \\\n --log-opt max-file=3 \\\n solr:8.5\ndocker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib\ndocker start my_solr\n\nfile with the second volume\n#!/bin/sh\ndocker pull solr:8.5\ndocker stop my_solr\ndocker rm solr:8.5\n\ndocker create \\\n --name my_solr \\\n -v \"/XXXX/docker/solr/solrdata\":/var/solr \\\n -v \"/XXXX/backups/solr_snapshot_folder\":/var/solr_snapshots \\\n -p 8983:8983 \\\n --restart unless-stopped \\\n --user 1000:1000 \\\n -e SOLR_HEAP=1g \\\n --log-opt max-size=10m \\\n --log-opt max-file=3 \\\n solr:8.5\ndocker cp /home/XXXX/docker/solr/XXXXXXXX.jar my_solr:/opt/solr/contrib/dataimporthandler-extras/lib\ndocker start my_solr\n\n" ]
[ 626, 163, 93, 25, 14, 7, 0, 0 ]
[ "Use symlink to the already mounted drive:\nln -s Source_path targer_path_which_is_already_mounted_on_the_running_docker\n", "The best way is to copy all the files and folders inside a directory on your local file system by: docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH\nSRC_PATH is on container\nDEST_PATH is on localhost\nThen do docker-compose down attach a volume to the same DEST_PATH and run Docker containers by using docker-compose up -d\nAdd volume by following in docker-compose.yml\nvolumes:\n - DEST_PATH:SRC_PATH\n\n" ]
[ -2, -6 ]
[ "docker" ]
stackoverflow_0028302178_docker.txt
Q: pip install . creates only the dist-info not the package I am trying to make a python package which I want to install using pip install . locally. The package name is listed in pip freeze but import <package> results in an error No module named <package>. Also the site-packages folder does only contain a dist-info folder. find_packages() is able to find packages. What am I missing? import io import os import sys from shutil import rmtree from setuptools import find_packages, setup, Command # Package meta-data. NAME = '<package>' DESCRIPTION = 'description' URL = '' EMAIL = 'email' AUTHOR = 'name' # What packages are required for this module to be executed? REQUIRED = [ # 'requests', 'maya', 'records', ] # The rest you shouldn't have to touch too much :) # ------------------------------------------------ # Except, perhaps the License and Trove Classifiers! # If you do change the License, remember to change the Trove Classifier for that! here = os.path.abspath(os.path.dirname(__file__)) # Where the magic happens: setup( name=NAME, #version=about['__version__'], description=DESCRIPTION, # long_description=long_description, author=AUTHOR, author_email=EMAIL, url=URL, packages=find_packages(), # If your package is a single module, use this instead of 'packages': # py_modules=['mypackage'], # entry_points={ # 'console_scripts': ['mycli=mymodule:cli'], # }, install_requires=REQUIRED, include_package_data=True, license='MIT', classifiers=[ # Trove classifiers # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers 'License :: OSI Approved :: MIT License', 'Programming Language :: Python', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy' ], ) A: Since the question has become quite popular, here are the diagnosis steps to go through when you're missing files after installation. Imagine having an example project with the following structure: root ├── spam │ ├── __init__.py │ ├── data.txt │ ├── eggs.py │ └── fizz │ ├── __init__.py │ └── buzz.py ├── bacon.py └── setup.py Now I run pip install ., check that the package is installed: $ pip list Package Version ---------- ------- mypkg 0.1 pip 19.0.1 setuptools 40.6.3 wheel 0.32.3 but see neither spam, nor spam/eggs.py nor bacon.py nor spam/fizz/buzz.py in the list of files belonging to the installed package: $ pip show -f mypkg Name: mypkg Version: 0.1 ... Files: mypkg-0.1.dist-info/DESCRIPTION.rst mypkg-0.1.dist-info/INSTALLER mypkg-0.1.dist-info/METADATA mypkg-0.1.dist-info/RECORD mypkg-0.1.dist-info/WHEEL mypkg-0.1.dist-info/metadata.json mypkg-0.1.dist-info/top_level.txt So what to do now? Diagnose by inspecting the wheel build log Unless told not to do so, pip will always try to build a wheel file and install your package from it. We can inspect the log for the wheel build process if reinstalling in the verbose mode. First step is to uninstall the package: $ pip uninstall -y mypkg ... then install it again, but now with an additional argument: $ pip install . -vvv ... Now if I inspect the log: $ pip install . 
-vvv | grep 'adding' adding 'mypkg-0.1.dist-info/METADATA' adding 'mypkg-0.1.dist-info/WHEEL' adding 'mypkg-0.1.dist-info/top_level.txt' adding 'mypkg-0.1.dist-info/RECORD' I notice that no files from the spam directory or bacon.py are mentioned anywhere. This means they were simply not included in the wheel file and hence not installed by pip. The most common error sources are: Missing packages: check the packages argument Verify you have passed the packages argument to the setup function. Check that you have mentioned all of the packages that should be installed. Subpackages will not be collected automatically if only the parent package is mentioned! For example, in the setup script from setuptools import setup setup( name='mypkg', version='0.1', packages=['spam'] ) spam will be installed, but not spam.fizz because it is a package itself and must be mentioned explicitly. Fixing it: from setuptools import setup setup( name='mypkg', version='0.1', packages=['spam', 'spam.fizz'] ) If you have lots of packages, use setuptools.find_packages to automate the process: from setuptools import find_packages, setup setup( name='mypkg', version='0.1', packages=find_packages() # will return a list ['spam', 'spam.fizz'] ) In case you are missing a module: Missing modules: check the py_modules argument In the above examples, I will be missing bacon.py after installation since it doesn't belong to any package. I have to provide its module name in the separate argument py_modules: from setuptools import find_packages, setup setup( name='mypkg', version='0.1', packages=find_packages(), py_modules=['bacon'] ) Missing data files: check the package_data argument I have all the source code files in place now, but the data.txt file is still not installed. Data files located under package directories should be added via the package_data argument. Fixing the above setup script: from setuptools import find_packages, setup setup( name='mypkg', version='0.1', packages=find_packages(), package_data={'spam': ['data.txt']}, py_modules=['bacon'] ) Don't be tempted to use the data_files argument. Place the data files under a package and configure package_data instead. After fixing the setup script, verify the package files are in place after installation If I now reinstall the package, I will notice all of the files are added to the wheel: $ pip install . -vvv | grep 'adding' adding 'bacon.py' adding 'spam/__init__.py' adding 'spam/data.txt' adding 'spam/eggs.py' adding 'spam/fizz/__init__.py' adding 'spam/fizz/buzz.py' adding 'mypkg-0.1.dist-info/METADATA' adding 'mypkg-0.1.dist-info/WHEEL' adding 'mypkg-0.1.dist-info/top_level.txt' adding 'mypkg-0.1.dist-info/RECORD' They will also be visible in the list of files belonging to mypkg: $ pip show -f mypkg Name: mypkg Version: 0.1 ... Files: __pycache__/bacon.cpython-36.pyc bacon.py mypkg-0.1.dist-info/INSTALLER mypkg-0.1.dist-info/METADATA mypkg-0.1.dist-info/RECORD mypkg-0.1.dist-info/WHEEL mypkg-0.1.dist-info/top_level.txt spam/__init__.py spam/__pycache__/__init__.cpython-36.pyc spam/__pycache__/eggs.cpython-36.pyc spam/data.txt spam/eggs.py spam/fizz/__init__.py spam/fizz/__pycache__/__init__.cpython-36.pyc spam/fizz/__pycache__/buzz.cpython-36.pyc spam/fizz/buzz.py A: For me, I noticed something weird if you do this: # Not in the setup.py directory python /path/to/folder/setup.py bdist_wheel It will only install the .dist-info folder in your site-packages folder when you install the wheel. 
However, if you do this: cd /path/to/folder \ && python setup.py bdist_wheel The wheel will include all your files. A: If you are on Windows 10+, one way you could make sure that you had all the correct installations was to click start in the bottom left-hand corner and search cmd.exe and right-click on "Command Prompt" (Make sure you choose "Run as Administrator"). Type "cd path to your Python 3.X installation". You can find this path in File Explorer (go to the folder where Python is installed) and then at the top. Copy this, and put it in where I wrote above path to your Python 3.X installation. Once you do that and click enter, type "python -m pip install package" (package signifies the package you would like to install). Your Python program should now work perfectly. A: I had the same problem, and updating setuptools helped: python3 -m pip install --upgrade pip setuptools wheel After that, reinstall the package, and it should work fine :) A: Make certain that your src files are in example_package_YOUR_USERNAME_HERE (this is the example package name that is used in the docs) and not in src. Errantly putting the files in src can have the effect described in the question. Reference: https://packaging.python.org/en/latest/tutorials/packaging-projects/ The package should be set up like this: packaging_tutorial/ └── src/ └── example_package_YOUR_USERNAME_HERE/ ├── __init__.py └── example.py
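After applying any of these fixes, a quick sanity check that the package really imports from site-packages (rather than being shadowed by the local source tree) can save another debugging round. A sketch using the spam/mypkg names from the answer above:

```sh
# Run from OUTSIDE the project directory, otherwise the local sources shadow the install
cd /tmp
python -c "import spam; print(spam.__file__)"   # should point into site-packages
pip show -f mypkg                               # should list spam/*.py and bacon.py
```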
pip install . creates only the dist-info not the package
I am trying to make a python package which I want to install using pip install . locally. The package name is listed in pip freeze but import <package> results in an error No module named <package>. Also the site-packages folder does only contain a dist-info folder. find_packages() is able to find packages. What am I missing? import io import os import sys from shutil import rmtree from setuptools import find_packages, setup, Command # Package meta-data. NAME = '<package>' DESCRIPTION = 'description' URL = '' EMAIL = 'email' AUTHOR = 'name' # What packages are required for this module to be executed? REQUIRED = [ # 'requests', 'maya', 'records', ] # The rest you shouldn't have to touch too much :) # ------------------------------------------------ # Except, perhaps the License and Trove Classifiers! # If you do change the License, remember to change the Trove Classifier for that! here = os.path.abspath(os.path.dirname(__file__)) # Where the magic happens: setup( name=NAME, #version=about['__version__'], description=DESCRIPTION, # long_description=long_description, author=AUTHOR, author_email=EMAIL, url=URL, packages=find_packages(), # If your package is a single module, use this instead of 'packages': # py_modules=['mypackage'], # entry_points={ # 'console_scripts': ['mycli=mymodule:cli'], # }, install_requires=REQUIRED, include_package_data=True, license='MIT', classifiers=[ # Trove classifiers # Full list: https://pypi.python.org/pypi?%3Aaction=list_classifiers 'License :: OSI Approved :: MIT License', 'Programming Language :: Python', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Programming Language :: Python :: Implementation :: CPython', 'Programming Language :: Python :: Implementation :: PyPy' ], )
[ "Since the question has become quite popular, here are the diagnosis steps to go through when you're missing files after installation. Imagine having an example project with the following structure:\nroot\n├── spam\n│ ├── __init__.py\n│ ├── data.txt\n│ ├── eggs.py\n│ └── fizz\n│ ├── __init__.py\n│ └── buzz.py\n├── bacon.py\n└── setup.py\n\nNow I run pip install ., check that the package is installed:\n$ pip list\nPackage Version\n---------- -------\nmypkg 0.1 \npip 19.0.1 \nsetuptools 40.6.3 \nwheel 0.32.3 \n\nbut see neither spam, nor spam/eggs.py nor bacon.py nor spam/fizz/buzz.py in the list of files belonging to the installed package:\n$ pip show -f mypkg\nName: mypkg\nVersion: 0.1\n...\nFiles:\n mypkg-0.1.dist-info/DESCRIPTION.rst\n mypkg-0.1.dist-info/INSTALLER\n mypkg-0.1.dist-info/METADATA\n mypkg-0.1.dist-info/RECORD\n mypkg-0.1.dist-info/WHEEL\n mypkg-0.1.dist-info/metadata.json\n mypkg-0.1.dist-info/top_level.txt\n\nSo what to do now?\nDiagnose by inspecting the wheel build log\nUnless told not to do so, pip will always try to build a wheel file and install your package from it. We can inspect the log for the wheel build process if reinstalling in the verbose mode. First step is to uninstall the package:\n$ pip uninstall -y mypkg\n...\n\nthen install it again, but now with an additional argument:\n$ pip install . -vvv\n...\n\nNow if I inspect the log:\n$ pip install . -vvv | grep 'adding'\n adding 'mypkg-0.1.dist-info/METADATA'\n adding 'mypkg-0.1.dist-info/WHEEL'\n adding 'mypkg-0.1.dist-info/top_level.txt'\n adding 'mypkg-0.1.dist-info/RECORD'\n\nI notice that no files from the spam directory or bacon.py are mentioned anywhere. This means they were simply not included in the wheel file and hence not installed by pip. The most common error sources are:\nMissing packages: check the packages argument\nVerify you have passed the packages argument to the setup function. Check that you have mentioned all of the packages that should be installed. Subpackages will not be collected automatically if only the parent package is mentioned! For example, in the setup script\nfrom setuptools import setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=['spam']\n)\n\nspam will be installed, but not spam.fizz because it is a package itself and must be mentioned explicitly. Fixing it:\nfrom setuptools import setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=['spam', 'spam.fizz']\n)\n\nIf you have lots of packages, use setuptools.find_packages to automate the process:\nfrom setuptools import find_packages, setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=find_packages() # will return a list ['spam', 'spam.fizz']\n)\n\nIn case you are missing a module:\nMissing modules: check the py_modules argument\nIn the above examples, I will be missing bacon.py after installation since it doesn't belong to any package. I have to provide its module name in the separate argument py_modules:\nfrom setuptools import find_packages, setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=find_packages(),\n py_modules=['bacon']\n)\n\nMissing data files: check the package_data argument\nI have all the source code files in place now, but the data.txt file is still not installed. Data files located under package directories should be added via the package_data argument. 
Fixing the above setup script:\nfrom setuptools import find_packages, setup\n\nsetup(\n name='mypkg',\n version='0.1',\n packages=find_packages(),\n package_data={'spam': ['data.txt']},\n py_modules=['bacon']\n)\n\nDon't be tempted to use the data_files argument. Place the data files under a package and configure package_data instead.\nAfter fixing the setup script, verify the package files are in place after installation\nIf I now reinstall the package, I will notice all of the files are added to the wheel:\n$ pip install . -vvv | grep 'adding'\n adding 'bacon.py'\n adding 'spam/__init__.py'\n adding 'spam/data.txt'\n adding 'spam/eggs.py'\n adding 'spam/fizz/__init__.py'\n adding 'spam/fizz/buzz.py'\n adding 'mypkg-0.1.dist-info/METADATA'\n adding 'mypkg-0.1.dist-info/WHEEL'\n adding 'mypkg-0.1.dist-info/top_level.txt'\n adding 'mypkg-0.1.dist-info/RECORD'\n\nThey will also be visible in the list of files belonging to mypkg:\n$ pip show -f mypkg\nName: mypkg\nVersion: 0.1\n...\nFiles:\n __pycache__/bacon.cpython-36.pyc\n bacon.py\n mypkg-0.1.dist-info/INSTALLER\n mypkg-0.1.dist-info/METADATA\n mypkg-0.1.dist-info/RECORD\n mypkg-0.1.dist-info/WHEEL\n mypkg-0.1.dist-info/top_level.txt\n spam/__init__.py\n spam/__pycache__/__init__.cpython-36.pyc\n spam/__pycache__/eggs.cpython-36.pyc\n spam/data.txt\n spam/eggs.py\n spam/fizz/__init__.py\n spam/fizz/__pycache__/__init__.cpython-36.pyc\n spam/fizz/__pycache__/buzz.cpython-36.pyc\n spam/fizz/buzz.py\n\n", "For me, I noticed something weird if you do this:\n# Not in the setup.py directory\npython /path/to/folder/setup.py bdist_wheel\n\nIt will only install the .dist-info folder in your site-packages folder when you install the wheel.\nHowever, if you do this:\ncd /path/to/folder \\\n&& python setup.py bdist_wheel\n\nThe wheel will include all your files.\n", "If you are on Windows 10+, one way you could make sure that you had all the correct installations was to click start in the bottom left-hand corner and search cmd.exe and right-click on \"Command Prompt\" (Make sure you choose \"Run as Administrator\"). Type \"cd path to your Python 3.X installation\". You can find this path in File Explorer (go to the folder where Python is installed) and then at the top. Copy this, and put it in where I wrote above path to your Python 3.X installation. Once you do that and click enter, type \"python -m pip install package\" (package signifies the package you would like to install). Your Python program should now work perfectly. \n", "I had the same problem, and updating setuptools helped:\npython3 -m pip install --upgrade pip setuptools wheel\n\nAfter that, reinstall the package, and it should work fine :)\n", "Make certain that your src files are in example_package_YOUR_USERNAME_HERE (this is the example package name that is used in the docs) and not in src. Errantly putting the files in src can have the effect described in the question.\nReference: https://packaging.python.org/en/latest/tutorials/packaging-projects/\nThe package should be set up like this:\npackaging_tutorial/\n└── src/\n └── example_package_YOUR_USERNAME_HERE/\n ├── __init__.py\n └── example.py\n\n" ]
[ 127, 1, 0, 0, 0 ]
[]
[]
[ "package", "pip", "python", "setup.py", "setuptools" ]
stackoverflow_0050585246_package_pip_python_setup.py_setuptools.txt
Q: Array to string, I get [object HTMLParagraphElement][object HTMLUListElement] instead of value / content I have an array from a parse odg = tinymce.activeEditor.getContent({ format: "HTML" }); const parser = new DOMParser(); const parsedDocument = parser.parseFromString(odg, "text/html"); const parsedContent = parsedDocument.querySelectorAll( "body:not(first-child) > *" ); let array2 = Array.from(parsedContent); I get: Array [ p, ul] 0: <p> 1: <ul> length: 2 <prototype>: Array [] When I convert it into a string let contentString = array2.join(""); I get: [object HTMLParagraphElement][object HTMLUListElement] Instead of: <p>something</p><ul><li>something else</li></ul> What am I missing? A: You can access outerHTML let odg = document.querySelector('container').innerHTML const parser = new DOMParser(); const parsedDocument = parser.parseFromString(odg, "text/html"); const parsedContent = parsedDocument.querySelectorAll( ".findme > *" ); let array2 = Array.from(parsedContent); console.log(array2.map(m => m.outerHTML).join('')); <container> Ignore me <div class='findme'> <div>This </div> <p>That</p> </div> </container>
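As a shorthand, the mapping and joining can be collapsed into the NodeList conversion itself, or skipped entirely by serialising the parsed body. A sketch building on the question's own variables:

```javascript
// One-pass variant: map each element to its HTML while converting the NodeList
const contentString = Array.from(parsedContent, (el) => el.outerHTML).join("");

// Or, if every child of <body> is wanted anyway, serialise the body directly
const wholeBody = parsedDocument.body.innerHTML;
```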
Array to string, I get [object HTMLParagraphElement][object HTMLUListElement] instead of value / content
I have an array from a parse odg = tinymce.activeEditor.getContent({ format: "HTML" }); const parser = new DOMParser(); const parsedDocument = parser.parseFromString(odg, "text/html"); const parsedContent = parsedDocument.querySelectorAll( "body:not(first-child) > *" ); let array2 = Array.from(parsedContent); I get: Array [ p, ul] 0: <p> 1: <ul> length: 2 <prototype>: Array [] When I convert it into a string let contentString = array2.join(""); I get: [object HTMLParagraphElement][object HTMLUListElement] Instead of: <p>something</p><ul><li>something else</li></ul> What am I missing?
[ "You can access outerHTML\n\n\nlet odg = document.querySelector('container').innerHTML\nconst parser = new DOMParser();\nconst parsedDocument = parser.parseFromString(odg, \"text/html\");\nconst parsedContent = parsedDocument.querySelectorAll(\n \".findme > *\"\n);\n\n\nlet array2 = Array.from(parsedContent);\nconsole.log(array2.map(m => m.outerHTML).join(''));\n<container>\n Ignore me\n <div class='findme'>\n <div>This </div>\n <p>That</p>\n </div>\n</container>\n\n\n\n" ]
[ 2 ]
[]
[]
[ "arrays", "html", "javascript", "string" ]
stackoverflow_0074660488_arrays_html_javascript_string.txt
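For context on why join produced "[object HTMLParagraphElement]": Array.prototype.join stringifies each element, and a DOM node's default toString is just its class tag. Below is a hedged, standalone sketch (it hard-codes an HTML string in place of the TinyMCE content, which is an assumption) showing two equivalent serializations:

const parser = new DOMParser();
const doc = parser.parseFromString("<p>something</p><ul><li>something else</li></ul>", "text/html");

// Option 1: serialize each element explicitly, as in the answer above.
const html1 = Array.from(doc.body.children, el => el.outerHTML).join("");

// Option 2: if everything inside <body> is wanted anyway, skip the array entirely.
const html2 = doc.body.innerHTML;

console.log(html1); // <p>something</p><ul><li>something else</li></ul>
console.log(html2); // same output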
Q: Custom Claims Principal Factory within ABP I am using an iDP that provides the username in the 'sub' claim, which inside ABP does not set CurrentUser.IsAuthenticated to true, and the id for the user is also null because it is not a GUID. From the documentation I found that you can inherit AbpUserClaimsPrincipalFactory and add your custom claims. I thought I could override the NameIdentifier claim, pull the mapped user from the database, and use that GUID, but my custom UserClaimsPrincipalFactory never gets called. Below is my code: I have tried the implementation below, but CreateAsync never gets called. public class BaselineUserClaimsPrincipalFactory : AbpUserClaimsPrincipalFactory { private readonly ICoreApiService _coreApiService; public BaselineUserClaimsPrincipalFactory( UserManager<IdentityUser> userManager, RoleManager<IdentityRole> roleManager, IOptions<IdentityOptions> options, ICurrentPrincipalAccessor currentPrincipalAccessor, IAbpClaimsPrincipalFactory abpClaimsPrincipalFactory, ICoreApiService coreApiService) : base( userManager, roleManager, options, currentPrincipalAccessor, abpClaimsPrincipalFactory) { _coreApiService = coreApiService; } [UnitOfWork] public async override Task<ClaimsPrincipal> CreateAsync(IdentityUser user) { var principal = await base.CreateAsync(user); var identity = principal.Identities.First(); if (identity != null) { . . identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, user_guid)); } return principal; } } Inside BaselineHttpApiHostModule public override void PreConfigureServices(ServiceConfigurationContext context) { PreConfigure<IdentityBuilder>(builder => builder.AddClaimsPrincipalFactory<BaselineUserClaimsPrincipalFactory>()); } A: It seems you forgot to register your factory to DI. [Dependency(ReplaceServices = true)] [ExposeServices(typeof(AbpUserClaimsPrincipalFactory))] public class BaselineUserClaimsPrincipalFactory : AbpUserClaimsPrincipalFactory Similar answer here: ABP Add User's Custom Property from AppUser to Claims A: using IdentityUser = Volo.Abp.Identity.IdentityUser; public class ServiceClaimTransformation : IClaimsTransformation { private readonly UserManager<IdentityUser> _userManager; public ServiceClaimTransformation(UserManager<IdentityUser> userManager) { _userManager = userManager; } public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal) { var identity = principal.Identities.FirstOrDefault(); if (identity != null) { var usernameClaim = identity.Claims.FirstOrDefault(t => t.Type == ClaimTypes.NameIdentifier); if (usernameClaim != null) { var dbUser = await _userManager.FindByNameAsync(usernameClaim.Value); if (dbUser != null) { identity.RemoveClaim(usernameClaim); identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, dbUser.Id.ToString())); } } } return principal; } } Above is what I ended up doing. This will find the user within the db by username and add their user id to the claims. Below is how you would register in your startup. context.Services.AddTransient<IClaimsTransformation, ServiceClaimTransformation>();
Custom Claims Principal Factory within ABP
I am using an iDP that provides the username in the 'sub' claim, which inside ABP does not set CurrentUser.IsAuthenticated to true, and the id for the user is also null because it is not a GUID. From the documentation I found that you can inherit AbpUserClaimsPrincipalFactory and add your custom claims. I thought I could override the NameIdentifier claim, pull the mapped user from the database, and use that GUID, but my custom UserClaimsPrincipalFactory never gets called. Below is my code: I have tried the implementation below, but CreateAsync never gets called. public class BaselineUserClaimsPrincipalFactory : AbpUserClaimsPrincipalFactory { private readonly ICoreApiService _coreApiService; public BaselineUserClaimsPrincipalFactory( UserManager<IdentityUser> userManager, RoleManager<IdentityRole> roleManager, IOptions<IdentityOptions> options, ICurrentPrincipalAccessor currentPrincipalAccessor, IAbpClaimsPrincipalFactory abpClaimsPrincipalFactory, ICoreApiService coreApiService) : base( userManager, roleManager, options, currentPrincipalAccessor, abpClaimsPrincipalFactory) { _coreApiService = coreApiService; } [UnitOfWork] public async override Task<ClaimsPrincipal> CreateAsync(IdentityUser user) { var principal = await base.CreateAsync(user); var identity = principal.Identities.First(); if (identity != null) { . . identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, user_guid)); } return principal; } } Inside BaselineHttpApiHostModule public override void PreConfigureServices(ServiceConfigurationContext context) { PreConfigure<IdentityBuilder>(builder => builder.AddClaimsPrincipalFactory<BaselineUserClaimsPrincipalFactory>()); }
[ "It seems you forgot to register your factory to DI.\n[Dependency(ReplaceServices = true)]\n[ExposeServices(typeof(AbpUserClaimsPrincipalFactory))]\npublic class BaselineUserClaimsPrincipalFactory : AbpUserClaimsPrincipalFactory\n\nSimilar answer here: ABP Add User's Custom Property from AppUser to Claims\n", "using IdentityUser = Volo.Abp.Identity.IdentityUser;\n\npublic class ServiceClaimTransformation : IClaimsTransformation\n{\n private readonly UserManager<IdentityUser> _userManager;\n\n public ServiceClaimTransformation(UserManager<IdentityUser> userManager)\n {\n _userManager = userManager;\n }\n\n public async Task<ClaimsPrincipal> TransformAsync(ClaimsPrincipal principal)\n {\n var identity = principal.Identities.FirstOrDefault();\n\n if (identity != null)\n {\n var usernameClaim = identity.Claims.FirstOrDefault(t => t.Type == ClaimTypes.NameIdentifier);\n if (usernameClaim != null)\n {\n var dbUser = await _userManager.FindByNameAsync(usernameClaim.Value);\n if (dbUser != null)\n {\n identity.RemoveClaim(usernameClaim);\n identity.AddClaim(new Claim(ClaimTypes.NameIdentifier, dbUser.Id.ToString()));\n }\n }\n }\n\n return principal;\n }\n}\n\nAbove is what I ended up doing. This will find the user within the db by username and add their user id to the claims. Below is how you would register in your startup.\ncontext.Services.AddTransient<IClaimsTransformation, ServiceClaimTransformation>();\n\n" ]
[ 0, 0 ]
[]
[]
[ "abp", "claims", "identity" ]
stackoverflow_0074466974_abp_claims_identity.txt
Q: diff between executing coroutine with and without suspend-ed IO methods Let's assume I have the code: // Android ViewModel class MyVM(val dao: MyDao) : ViewModel() { fun onButtonInsert() { // point 1 viewModelScope.launch { // point 2 dao.insert_SuspendFun(MyData(id=1, "hello, it works")) // point 3 dao.insert_NOT_SuspendFun(MyData(id=2, "hello, it fails")) // point 4 } } } // Room DAO @Dao interface MyDao { @Insert suspend fun insert_SuspendFun(md: MyData) @Insert fun insert_NOT_SuspendFun(md: MyData) } Now when fun onButtonInsert runs then: 1st line works: dao.insert_SuspendFun(MyData(id=1, "hello, it works")) but 2nd line: dao.insert_NOT_SuspendFun(MyData(id=2, "hello, it fails")) fails with the exception: java.lang.IllegalStateException: Cannot access database on the main thread since it may potentially lock the UI for a long period of time. "The only difference" is that fun insert_NOT_SuspendFun has no suspend keyword. Both methods run in the same coroutine. Can someone explain what happens under the hood? How are threads working in this coroutine? Why does the 1st call use a non-UI thread while the 2nd uses the UI thread? Thanks! ;] A: Room generates the code for the implementation of your DAO, so the behaviors are based on their design decisions. The non-suspend function runs on whatever thread you call it from, which is the only possible way a non-suspend function can run. Since Google doesn’t want people making janky apps that do IO on the main thread, they decided to make the generated implementation under the hood check the thread and throw an exception if it’s the main thread. The suspend function implementation that Room generates suspends while the IO is done on a background thread. This follows the Kotlin coroutines convention that a suspend function should never block the thread it is called from. Any suspend function can internally suspend the calling code until they finish and resume. Other than by calling other suspend functions, this is most commonly done using withContext, coroutineScope, suspendCoroutine, or suspendCancellableCoroutine. Those last two are low level and allow the coroutine to be suspended and then resumed at will from any other thread. I have not checked the generated source, but I think it’s likely Room generates the suspend function implementation by using suspendCancellableCoroutine and runs the IO work using an internal thread pool before resuming the coroutine. An alternative way they could have done it is by using withContext(Dispatchers.IO) and then done the blocking IO work within it. This is possibly the most common and easiest way to do blocking work in a coroutine. It uses the IO Dispatcher (a thread pool provided by Kotlin coroutines). As for why your coroutine runs on the main thread, it’s because viewModelScope’s attached dispatcher uses the main thread. You can override it by passing a specific dispatcher to your launch call, but this is not often done if you’re following conventions. Android is full of functions that must be called from the main thread, so it usually looks cleanest to keep coroutines on the main dispatcher and only do the background work using suspend functions and withContext blocks. Exception would be a coroutine that runs blocking code and doesn’t need to do anything on the main thread. A: You're trying to run a long-running synchronous task inside the main thread, as it does not have the suspend keyword to make it async. You can run both methods on the IO thread instead of main: viewModelScope.launch(Dispatchers.IO)
diff between executing coroutine with and without suspend-ed IO methods
Let's assume I have the code: // Android ViewModel class MyVM(val dao: MyDao) : ViewModel() { fun onButtonInsert() { // point 1 viewModelScope.launch { // point 2 dao.insert_SuspendFun(MyData(id=1, "hello, it works")) // point 3 dao.insert_NOT_SuspendFun(MyData(id=2, "hello, it fails")) // point 4 } } } // Room DAO @Dao interface MyDao { @Insert suspend fun insert_SuspendFun(md: MyData) @Insert fun insert_NOT_SuspendFun(md: MyData) } Now when fun onButtonInsert runs then: 1st line works: dao.insert_SuspendFun(MyData(id=1, "hello, it works")) but 2nd line: dao.insert_NOT_SuspendFun(MyData(id=2, "hello, it fails")) fails with the exception: java.lang.IllegalStateException: Cannot access database on the main thread since it may potentially lock the UI for a long period of time. "The only difference" is that fun insert_NOT_SuspendFun has no suspend keyword. Both methods run in the same coroutine. Can someone explain what happens under the hood? How are threads working in this coroutine? Why does the 1st call use a non-UI thread while the 2nd uses the UI thread? Thanks! ;]
[ "Room generates the code for the implementation of your DAO, so the behaviors are based on their design decisions.\nThe non-suspend function runs on whatever thread you call it from, which is the only possible way a non-suspend function can run. Since Google doesn’t want people making janky apps that do IO on the main thread, they decided to make the generated implementation under the hood check the thread and throw an exception if it’s the main thread.\nThe suspend function implementation that Room generates suspends while the IO is done on a background thread. This follows the Kotlin coroutines convention that a suspend function should never block the thread it is called from.\nAny suspend function can internally suspend the calling code until they finish and resume. Other than by calling other suspend functions, this is most commonly done using withContext, coroutineScope, suspendCoroutine, or suspendCancellableCoroutine. Those last two are low level and allow the coroutine to be suspended and then resumed at will from any other thread.\nI have not checked the generated source, but I think it’s likely Room generates the suspend function implementation by using suspendCancellableCoroutine and runs the IO work using an internal thread pool before resuming the coroutine.\nAn alternative way they could have done it is by using withContext(Dispatchers.IO) and then done the blocking IO work within it. This is possibly the most common and easiest way to do blocking work in a coroutine. It uses the IO Dispatcher (a thread pool provided by Kotlin coroutines).\nAs for why your coroutine runs on the main thread, it’s because viewModelScope’s attached dispatcher uses the main thread. You can override it by passing a specific dispatcher to your launch call, but this is not often done if you’re following conventions. Android is full of functions that must be called from the main thread, so it usually looks cleanest to keep coroutines on the main dispatcher and only do the background work using suspend functions and withContext blocks. Exception would be a coroutine that runs blocking code and doesn’t need to do anything on the main thread.\n", "You're trying to run a long-running synchronize task in side main thread as it does not mention suspend keyword to make it async.\nYou can use both methods in the IO thread instead main;\nviewmodelscope.launch(dispatchers.io)\n\n\n" ]
[ 2, 0 ]
[]
[]
[ "kotlin", "kotlin_coroutines" ]
stackoverflow_0074659167_kotlin_kotlin_coroutines.txt
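To make the accepted answer's point concrete: if the non-suspend DAO method must be kept, the caller has to leave the main dispatcher itself. A minimal Kotlin sketch, assuming the MyVM/MyDao/MyData types from the question; only the withContext wrapper is new.

import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.launch
import kotlinx.coroutines.withContext

class MyVM(val dao: MyDao) : ViewModel() {
    fun onButtonInsert() {
        viewModelScope.launch { // runs on Dispatchers.Main by default
            // Room's generated suspend implementation hops threads for us.
            dao.insert_SuspendFun(MyData(id = 1, "suspend: safe from main"))
            // The non-suspend method runs on the calling thread, so hop manually.
            withContext(Dispatchers.IO) {
                dao.insert_NOT_SuspendFun(MyData(id = 2, "non-suspend: now off main"))
            }
        }
    }
}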
Q: Update treemodel in real time PySide How can I make it so when I click the Randomize button, for the selected treeview items, the treeview updates to show the changes to the data, while maintaining the expanded item states and the user's selection? Is this accomplished by subclassing the StandardItemModel or ProxyModel class? Help is much appreciated as I'm not sure how to resolve this issue. It's a very simple example demonstrating the issue. When clicking Randomize, all it's doing is randomly assigning a new string (name) to each coach's position on the selected Team. import os import sys import random from PySide2 import QtGui, QtWidgets, QtCore class Team(object): def __init__(self, name='', nameA='', nameB='', nameC='', nameD=''): super(Team, self).__init__() self.name = name self.headCoach = nameA self.assistantCoach = nameB self.offensiveCoach = nameC self.defensiveCoach = nameD def randomize(self): names = ['doug', 'adam', 'seth', 'emily', 'kevin', 'mike', 'sarah', 'cassy', 'courtney', 'henry'] cnt = len(names)-1 self.headCoach = names[random.randint(0, cnt)] self.assistantCoach = names[random.randint(0, cnt)] self.offensiveCoach = names[random.randint(0, cnt)] self.defensiveCoach = names[random.randint(0, cnt)] print('TRADED PLAYERS') TEAMS = [ Team('Cowboys', 'doug', 'adam', 'seth', 'emily'), Team('Packers'), Team('Lakers', 'kevin', 'mike', 'sarah', 'cassy'), Team('Yankees', 'courtney', 'henry'), Team('Gators'), ] class MainDialog(QtWidgets.QMainWindow): def __init__(self, parent=None): super(MainDialog, self).__init__(parent) self.resize(600,400) self.button = QtWidgets.QPushButton('Randomize') self.itemModel = QtGui.QStandardItemModel() self.proxyModel = QtCore.QSortFilterProxyModel() self.proxyModel.setSourceModel(self.itemModel) self.proxyModel.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setDynamicSortFilter(True) self.proxyModel.setFilterKeyColumn(0) self.treeView = QtWidgets.QTreeView() self.treeView.setModel(self.proxyModel) self.treeView.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers) self.treeView.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) self.treeView.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) self.treeView.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows) self.treeView.setAlternatingRowColors(True) self.treeView.setSortingEnabled(True) self.treeView.setUniformRowHeights(False) self.treeView.header().setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents) self.treeView.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) self.selectionModel = self.treeView.selectionModel() # layout self.mainLayout = QtWidgets.QVBoxLayout() self.mainLayout.addWidget(self.treeView) self.mainLayout.addWidget(self.button) self.mainWidget = QtWidgets.QWidget() self.mainWidget.setLayout(self.mainLayout) self.setCentralWidget(self.mainWidget) # connections self.selectionModel.selectionChanged.connect(self.updateControls) self.button.clicked.connect(self.randomizeTeams) # begin self.populateModel() self.updateControls() def randomizeTeams(self): for proxyIndex in self.selectionModel.selectedRows(): sourceIndex = self.proxyModel.mapToSource(proxyIndex) item = self.itemModel.itemFromIndex(sourceIndex) team = item.data(QtCore.Qt.UserRole) team.randomize() # UPDATE UI...
def updateControls(self): self.button.setEnabled(self.selectionModel.hasSelection()) def populateModel(self): self.itemModel.clear() self.itemModel.setHorizontalHeaderLabels(['Position', 'Name']) # add teams for ts in TEAMS: col1 = QtGui.QStandardItem(ts.name) col1.setData(ts, QtCore.Qt.UserRole) # add coaches childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.headCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.assistantCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Offensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.offensiveCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Defensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.defensiveCoach) col1.appendRow([childCol1, childCol2]) self.itemModel.appendRow([col1]) self.itemModel.setSortRole(QtCore.Qt.DisplayRole) self.itemModel.sort(0, QtCore.Qt.AscendingOrder) self.proxyModel.sort(0, QtCore.Qt.AscendingOrder) def main(): app = QtWidgets.QApplication(sys.argv) window = MainDialog() window.show() app.exec_() if __name__ == '__main__': pass main() A: Your Team class should be a subclass of QStandardItem, which will be the top-level parent in the model. This class should create its own child items (as you are currently doing in the for-loop of populateModel), and its randomize method should directly reset the item-data of those children. This will ensure the changes are immediately reflected in the model. So - it's really just a matter of taking the code you already have and refactoring it accordingly. For example, something like this should work: TEAMS = { 'Cowboys': ('doug', 'adam', 'seth', 'emily'), 'Packers': (), 'Lakers': ('kevin', 'mike', 'sarah', 'cassy'), 'Yankees': ('courtney', 'henry'), 'Gators': (), } class Team(QtGui.QStandardItem): def __init__(self, name): super(Team, self).__init__(name) for coach in ('Head', 'Assistant', 'Offensive', 'Defensive'): childCol1 = QtGui.QStandardItem(f'{coach} Coach') childCol2 = QtGui.QStandardItem() self.appendRow([childCol1, childCol2]) def populate(self, head='', assistant='', offensive='', defensive=''): self.child(0, 1).setText(head) self.child(1, 1).setText(assistant) self.child(2, 1).setText(offensive) self.child(3, 1).setText(defensive) def randomize(self, names): self.populate(*random.sample(names, 4)) class MainDialog(QtWidgets.QMainWindow): ... def randomizeTeams(self): for proxyIndex in self.selectionModel.selectedRows(): sourceIndex = self.proxyModel.mapToSource(proxyIndex) item = self.itemModel.itemFromIndex(sourceIndex) if not isinstance(item, Team): item = item.parent() item.randomize(self._coaches) def populateModel(self): self.itemModel.clear() self.itemModel.setHorizontalHeaderLabels(['Position', 'Name']) self._coaches = [] # add teams for name, coaches in TEAMS.items(): team = Team(name) team.populate(*coaches) self._coaches.extend(coaches) self.itemModel.appendRow([team]) self.itemModel.setSortRole(QtCore.Qt.DisplayRole) self.itemModel.sort(0, QtCore.Qt.AscendingOrder) self.proxyModel.sort(0, QtCore.Qt.AscendingOrder)
Update treemodel in real time PySide
How can I make it so when I click the Randomize button, for the selected treeview items, the treeview updates to show the changes to the data, while maintaining the expanded item states and the user's selection? Is this accomplished by subclassing the StandardItemModel or ProxyModel class? Help is much appreciated as I'm not sure how to resolve this issue. It's a very simple example demonstrating the issue. When clicking Randomize, all it's doing is randomly assigning a new string (name) to each coach's position on the selected Team. import os import sys import random from PySide2 import QtGui, QtWidgets, QtCore class Team(object): def __init__(self, name='', nameA='', nameB='', nameC='', nameD=''): super(Team, self).__init__() self.name = name self.headCoach = nameA self.assistantCoach = nameB self.offensiveCoach = nameC self.defensiveCoach = nameD def randomize(self): names = ['doug', 'adam', 'seth', 'emily', 'kevin', 'mike', 'sarah', 'cassy', 'courtney', 'henry'] cnt = len(names)-1 self.headCoach = names[random.randint(0, cnt)] self.assistantCoach = names[random.randint(0, cnt)] self.offensiveCoach = names[random.randint(0, cnt)] self.defensiveCoach = names[random.randint(0, cnt)] print('TRADED PLAYERS') TEAMS = [ Team('Cowboys', 'doug', 'adam', 'seth', 'emily'), Team('Packers'), Team('Lakers', 'kevin', 'mike', 'sarah', 'cassy'), Team('Yankees', 'courtney', 'henry'), Team('Gators'), ] class MainDialog(QtWidgets.QMainWindow): def __init__(self, parent=None): super(MainDialog, self).__init__(parent) self.resize(600,400) self.button = QtWidgets.QPushButton('Randomize') self.itemModel = QtGui.QStandardItemModel() self.proxyModel = QtCore.QSortFilterProxyModel() self.proxyModel.setSourceModel(self.itemModel) self.proxyModel.setSortCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setFilterCaseSensitivity(QtCore.Qt.CaseInsensitive) self.proxyModel.setDynamicSortFilter(True) self.proxyModel.setFilterKeyColumn(0) self.treeView = QtWidgets.QTreeView() self.treeView.setModel(self.proxyModel) self.treeView.setEditTriggers(QtWidgets.QAbstractItemView.NoEditTriggers) self.treeView.setSelectionMode(QtWidgets.QAbstractItemView.ExtendedSelection) self.treeView.setVerticalScrollMode(QtWidgets.QAbstractItemView.ScrollPerPixel) self.treeView.setSelectionBehavior(QtWidgets.QAbstractItemView.SelectRows) self.treeView.setAlternatingRowColors(True) self.treeView.setSortingEnabled(True) self.treeView.setUniformRowHeights(False) self.treeView.header().setSectionResizeMode(QtWidgets.QHeaderView.ResizeToContents) self.treeView.setContextMenuPolicy(QtCore.Qt.CustomContextMenu) self.selectionModel = self.treeView.selectionModel() # layout self.mainLayout = QtWidgets.QVBoxLayout() self.mainLayout.addWidget(self.treeView) self.mainLayout.addWidget(self.button) self.mainWidget = QtWidgets.QWidget() self.mainWidget.setLayout(self.mainLayout) self.setCentralWidget(self.mainWidget) # connections self.selectionModel.selectionChanged.connect(self.updateControls) self.button.clicked.connect(self.randomizeTeams) # begin self.populateModel() self.updateControls() def randomizeTeams(self): for proxyIndex in self.selectionModel.selectedRows(): sourceIndex = self.proxyModel.mapToSource(proxyIndex) item = self.itemModel.itemFromIndex(sourceIndex) team = item.data(QtCore.Qt.UserRole) team.randomize() # UPDATE UI...
def updateControls(self): self.button.setEnabled(self.selectionModel.hasSelection()) def populateModel(self): self.itemModel.clear() self.itemModel.setHorizontalHeaderLabels(['Position', 'Name']) # add teams for ts in TEAMS: col1 = QtGui.QStandardItem(ts.name) col1.setData(ts, QtCore.Qt.UserRole) # add coaches childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.headCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Head Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.assistantCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Offensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.offensiveCoach) col1.appendRow([childCol1, childCol2]) childCol1 = QtGui.QStandardItem('Defensive Coach') childCol1.setData(ts, QtCore.Qt.UserRole) childCol2 = QtGui.QStandardItem(ts.defensiveCoach) col1.appendRow([childCol1, childCol2]) self.itemModel.appendRow([col1]) self.itemModel.setSortRole(QtCore.Qt.DisplayRole) self.itemModel.sort(0, QtCore.Qt.AscendingOrder) self.proxyModel.sort(0, QtCore.Qt.AscendingOrder) def main(): app = QtWidgets.QApplication(sys.argv) window = MainDialog() window.show() app.exec_() if __name__ == '__main__': pass main()
[ "Your Team class should be a subclass of QStandardItem, which will be the top-level parent in the model. This class should create its own child items (as you are currently doing in the for-loop of populateModel), and its randomize method should directly reset the item-data of those children. This will ensure the changes are immediately reflected in the model.\nSo - it's really just a matter of taking the code you already have and refactoring it accordingly. For example, something like this should work:\nTEAMS = {\n 'Cowboys': ('doug', 'adam', 'seth', 'emily'),\n 'Packers': (),\n 'Lakers': ('kevin', 'mike', 'sarah', 'cassy'),\n 'Yankees': ('courtney', 'henry'),\n 'Gators': (),\n }\n\nclass Team(QtGui.QStandardItem):\n def __init__(self, name):\n super(Team, self).__init__(name)\n for coach in ('Head', 'Assistant', 'Offensive', 'Defensive'):\n childCol1 = QtGui.QStandardItem(f'{coach} Coach')\n childCol2 = QtGui.QStandardItem()\n self.appendRow([childCol1, childCol2])\n\n def populate(self, head='', assistant='', offensive='', defensive=''):\n self.child(0, 1).setText(head)\n self.child(1, 1).setText(assistant)\n self.child(2, 1).setText(offensive)\n self.child(3, 1).setText(defensive)\n\n def randomize(self, names):\n self.populate(*random.sample(names, 4))\n\nclass MainDialog(QtWidgets.QMainWindow):\n ...\n def randomizeTeams(self):\n for proxyIndex in self.selectionModel.selectedRows():\n sourceIndex = self.proxyModel.mapToSource(proxyIndex)\n item = self.itemModel.itemFromIndex(sourceIndex)\n if not isinstance(item, Team):\n item = item.parent()\n item.randomize(self._coaches)\n\n def populateModel(self):\n self.itemModel.clear()\n self.itemModel.setHorizontalHeaderLabels(['Position', 'Name'])\n self._coaches = []\n # add teams\n for name, coaches in TEAMS.items():\n team = Team(name)\n team.populate(*coaches)\n self._coaches.extend(coaches)\n self.itemModel.appendRow([team])\n self.itemModel.setSortRole(QtCore.Qt.DisplayRole)\n self.itemModel.sort(0, QtCore.Qt.AscendingOrder)\n self.proxyModel.sort(0, QtCore.Qt.AscendingOrder)\n\n" ]
[ 0 ]
[]
[]
[ "pyside", "python", "qstandarditemmodel" ]
stackoverflow_0074646878_pyside_python_qstandarditemmodel.txt
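A small self-contained check of the claim in the answer: QStandardItem.setText() updates the model in place (emitting dataChanged internally), so an attached QTreeView repaints without losing expansion or selection state. This is a hedged sketch, independent of the question's app; it assumes PySide2 is installed.

import sys
from PySide2 import QtGui, QtWidgets, QtCore

app = QtWidgets.QApplication(sys.argv)

model = QtGui.QStandardItemModel()
parent = QtGui.QStandardItem("Team")
nameItem = QtGui.QStandardItem("initial")
parent.appendRow([QtGui.QStandardItem("Head Coach"), nameItem])
model.appendRow(parent)

view = QtWidgets.QTreeView()
view.setModel(model)
view.expandAll()
view.show()

# Mutate the existing item once per second; the view follows along,
# keeping whatever the user has expanded or selected.
timer = QtCore.QTimer()
timer.timeout.connect(lambda: nameItem.setText(QtCore.QTime.currentTime().toString()))
timer.start(1000)

sys.exit(app.exec_())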
Q: Nodejs: How to reverse proxy + TLS certificates with Caddy2? this is my first time deploying nodejs from localhost to the live server. I am using aapanel for my live server. Here is the relevant code in node server.js file: const hostname = 'localhost'; // const hostname = 'www.thespacebar.io'; // set port, listen for requests const PORT = process.env.PORT || 8080; app.listen(PORT, hostname, () => { console.log(`Server is running on port ${PORT}.`); }); Here is my pm2 settings: I am unable to open my nodejs app with GET https://www.thespacebar.io:8080, but it works for GET http://www.thespacebar.io:8080 GET https://www.thespacebar.io:8080 does not work with error: This site can’t provide a secure connection ERR_SSL_PROTOCOL_ERROR Anyone know what I did wrong? EDIT: I have installed Caddy and setup the Caddyfile in /etc/caddy like this: # The Caddyfile is an easy way to configure your Caddy web server. # # Unless the file starts with a global options block, the first # uncommented line is always the address of your site. # # To use your own domain name (with automatic HTTPS), first make # sure your domain's A/AAAA DNS records are properly pointed to # this machine's public IP, then replace ":80" below with your # domain name. import ./thespacebar.io :80 { # Set this path to your site's directory. root * /usr/share/caddy # Enable the static file server. file_server # Another common task is to set up a reverse proxy: # reverse_proxy localhost:8080 # Or serve a PHP site through php-fpm: # php_fastcgi localhost:9000 } # Refer to the Caddy docs for more information: # https://caddyserver.com/docs/caddyfile and created the adjacent file thespacebar.io: thespacebar.io { reverse_proxy localhost:8080 } but when I visit https://thespacebar.io/, I end up at index.html instead of the JSON { message: "Welcome to bezkoder application." } and POST http://www.thespacebar.io/api/verification/callback with body param verify_token:abcde is supposed to show the JSON: { "message": "Callback called successfully." } instead of 404 Not Found EDIT 2: I have removed the portion: # :80 { # Set this path to your site's directory. # root * /usr/share/caddy # Enable the static file server. 
# file_server # Another common task is to set up a reverse proxy: # reverse_proxy localhost:8080 # Or serve a PHP site through php-fpm: # php_fastcgi localhost:9000 # } # Refer to the Caddy docs for more information: # https://caddyserver.com/docs/caddyfile from etc/caddy/Caddyfile but when I run caddy run Caddyfile and caddy reload Caddyfile, I am getting this error: [root@vultrguest caddy]# caddy run Caddyfile 2022/12/02 08:11:44.132 INFO using adjacent Caddyfile 2022/12/02 08:11:44.132 WARN Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies {"adapter": "caddyfile", "file": "Caddyfile", "line": 12} 2022/12/02 08:11:44.133 INFO admin admin endpoint started {"address": "localhost:2019", "enforce_origin": false, "origins": ["//localhost:2019", "//[::1]:2019", "//127.0.0.1:2019"]} 2022/12/02 08:11:44.133 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443} 2022/12/02 08:11:44.133 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"} 2022/12/02 08:11:44.133 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000151030"} 2022/12/02 08:11:44.133 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc000151030"} Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use [root@vultrguest caddy]# caddy reload Caddyfile 2022/12/02 08:11:49.875 INFO using adjacent Caddyfile 2022/12/02 08:11:49.876 WARN Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies {"adapter": "caddyfile", "file": "Caddyfile", "line": 12} Error: sending configuration to instance: performing request: Post "http://localhost:2019/load": dial tcp [::1]:2019: connect: connection refused [root@vultrguest caddy]# If I run GET http://www.thespacebar.io:8080 I get: Web server is down Error code 521 Visit cloudflare.com for more information. 2022-12-02 08:22:13 UTC You A: Caddy is simple to setup as a reverse proxy, and it gets letsencrypt SSL certs for you with minimal fuss { email [email protected] } thespacebar.io { reverse_proxy localhost:8080 } I see you've posted an update - the one thing I would remove is :80 { ..... 
} If you read the text you posted it does say to replace :80 with your domain (but don't add :80 or caddy won't do the certificate for the domain) I also see you haven't set up a global section with an email address - I'm fairly sure that needs to be there (don't quote me on that) for letsencrypt to work - at least it used to when I first started using caddy A: here is some pseudo code for a generic caddyfile for caddy v2 for the code below replace "\*" with "*" "\" is used below to escape "/*" in the markdown this code will add basic security headers and cors to the response it will proxy pass thru to a process on localhost port 9883 if you have a dns record for your server it will set up the letsencrypt certs for you and renew them when required see caddy snippets https://caddyserver.com/docs/caddyfile/concepts#snippets # begin common code block snippet to be imported into the server block, # for example here we set common security headers # see the markdown escape comment above for "/\*" should be "/*" (common) { header /\* { -Server -X-Powered-By +X-Permitted-Cross-Domain-Policies none +X-Frame-Options DENY +X-Content-Type-Options nosniff +Strict-Transport-Security "max-age=63072000 includeSubDomains preload" +Referrer-Policy no-referrer } } # cors snippet (cors) { @cors_preflight method OPTIONS # "{args.0}" is an input value used when calling the snippet @cors header Origin "{args.0}" handle @cors_preflight { header Access-Control-Allow-Origin "{args.0}" header Access-Control-Allow-Methods "GET, POST, PUT, PATCH, DELETE" header Access-Control-Allow-Headers "Content-Type" header Access-Control-Max-Age "3600" respond "" 204 } } # main server block # dns record for server is myserver.edu myserver.edu { # import common sec headers snippet import common # import cors snippet passing server name parameter, wildcard cors poor sec import cors myserver.edu # proxy redirect see handle_path directive # https://caddyserver.com/docs/caddyfile/directives/handle_path # see the markdown escape comment above for "/\*" should be "/*" handle_path /somepath/\* { reverse_proxy localhost:9883 { header_up X-Real-IP {remote_host} # caddy will add X-Forwarded-For for you so not need this one #header_up X-Forwarded-For {remote_host} header_down Content-Security-Policy "media-src blob:" } } }
Nodejs: How to reverse proxy + TLS certificates with Caddy2?
this is my first time deploying nodejs from localhost to the live server. I am using aapanel for my live server. Here is the relevant code in node server.js file: const hostname = 'localhost'; // const hostname = 'www.thespacebar.io'; // set port, listen for requests const PORT = process.env.PORT || 8080; app.listen(PORT, hostname, () => { console.log(`Server is running on port ${PORT}.`); }); Here is my pm2 settings: I am unable to open my nodejs app with GET https://www.thespacebar.io:8080, but it works for GET http://www.thespacebar.io:8080 GET https://www.thespacebar.io:8080 does not work with error: This site can’t provide a secure connection ERR_SSL_PROTOCOL_ERROR Anyone know what I did wrong? EDIT: I have installed Caddy and setup the Caddyfile in /etc/caddy like this: # The Caddyfile is an easy way to configure your Caddy web server. # # Unless the file starts with a global options block, the first # uncommented line is always the address of your site. # # To use your own domain name (with automatic HTTPS), first make # sure your domain's A/AAAA DNS records are properly pointed to # this machine's public IP, then replace ":80" below with your # domain name. import ./thespacebar.io :80 { # Set this path to your site's directory. root * /usr/share/caddy # Enable the static file server. file_server # Another common task is to set up a reverse proxy: # reverse_proxy localhost:8080 # Or serve a PHP site through php-fpm: # php_fastcgi localhost:9000 } # Refer to the Caddy docs for more information: # https://caddyserver.com/docs/caddyfile and created the adjacent file thespacebar.io: thespacebar.io { reverse_proxy localhost:8080 } but when I visit https://thespacebar.io/, I end up at index.html instead of the JSON { message: "Welcome to bezkoder application." } and POST http://www.thespacebar.io/api/verification/callback with body param verify_token:abcde is supposed to show the JSON: { "message": "Callback called successfully." } instead of 404 Not Found EDIT 2: I have removed the portion: # :80 { # Set this path to your site's directory. # root * /usr/share/caddy # Enable the static file server. 
# file_server # Another common task is to set up a reverse proxy: # reverse_proxy localhost:8080 # Or serve a PHP site through php-fpm: # php_fastcgi localhost:9000 # } # Refer to the Caddy docs for more information: # https://caddyserver.com/docs/caddyfile from etc/caddy/Caddyfile but when I run caddy run Caddyfile and caddy reload Caddyfile, I am getting this error: [root@vultrguest caddy]# caddy run Caddyfile 2022/12/02 08:11:44.132 INFO using adjacent Caddyfile 2022/12/02 08:11:44.132 WARN Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies {"adapter": "caddyfile", "file": "Caddyfile", "line": 12} 2022/12/02 08:11:44.133 INFO admin admin endpoint started {"address": "localhost:2019", "enforce_origin": false, "origins": ["//localhost:2019", "//[::1]:2019", "//127.0.0.1:2019"]} 2022/12/02 08:11:44.133 INFO http server is listening only on the HTTPS port but has no TLS connection policies; adding one to enable TLS {"server_name": "srv0", "https_port": 443} 2022/12/02 08:11:44.133 INFO http enabling automatic HTTP->HTTPS redirects {"server_name": "srv0"} 2022/12/02 08:11:44.133 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc000151030"} 2022/12/02 08:11:44.133 INFO tls.cache.maintenance stopped background certificate maintenance {"cache": "0xc000151030"} Error: loading initial config: loading new config: http app module: start: listening on :80: listen tcp :80: bind: address already in use [root@vultrguest caddy]# caddy reload Caddyfile 2022/12/02 08:11:49.875 INFO using adjacent Caddyfile 2022/12/02 08:11:49.876 WARN Caddyfile input is not formatted; run the 'caddy fmt' command to fix inconsistencies {"adapter": "caddyfile", "file": "Caddyfile", "line": 12} Error: sending configuration to instance: performing request: Post "http://localhost:2019/load": dial tcp [::1]:2019: connect: connection refused [root@vultrguest caddy]# If I run GET http://www.thespacebar.io:8080 I get: Web server is down Error code 521 Visit cloudflare.com for more information. 2022-12-02 08:22:13 UTC You
[ "Caddy is simple to setup as a reverse proxy, and it gets letsencrypt SSL certs for you with minimal fuss\n{\n email [email protected]\n}\nthespacebar.io {\n reverse_proxy localhost:8080\n}\n\nI see you've posted an update - the one thing I would remove is\n:80 {\n .....\n}\n\nIf you read the text you posted it does say to replace :80 with your domain (but don't add :80 or caddy won't do the certificate for the domain)\nI also see you haven't set up a global section with an email address - I'm fairly sure that needs to be there (don't quote me on that) for letsencrypt to work - at least it used to when I first started using caddy\n", "here is some pseudo code for a generic caddyfile for caddy v2\nfor the code below replace \"\\*\" with \"*\" \"\\\" is used below to escape \"/*\" in the markdown\nthis code will add basic security headers and cors to the response\nit will proxy pass thru to a process on localhost port 9883\nif you have a dns record for your server it will set up the letsencrypt certs for you and renew them when required\nsee caddy snippets https://caddyserver.com/docs/caddyfile/concepts#snippets\n# begin common code block snippet to be imported into the server block, \n# for example here we set common security headers\n# see the markdown escape comment above for \"/\\*\" should be \"/*\"\n(common) {\n header /\\* {\n -Server\n -X-Powered-By\n +X-Permitted-Cross-Domain-Policies none\n +X-Frame-Options DENY\n +X-Content-Type-Options nosniff\n +Strict-Transport-Security \"max-age=63072000 includeSubDomains preload\"\n +Referrer-Policy no-referrer\n }\n}\n# cors snippet\n(cors) { \n @cors_preflight method OPTIONS\n# \"{args.0}\" is an input value used when calling the snippet\n \n @cors header Origin \"{args.0}\" \n handle @cors_preflight {\n header Access-Control-Allow-Origin \"{args.0}\"\n header Access-Control-Allow-Methods \"GET, POST, PUT, PATCH, DELETE\"\n header Access-Control-Allow-Headers \"Content-Type\"\n header Access-Control-Max-Age \"3600\"\n respond \"\" 204\n }\n}\n\n# main server block\n# dns record for server is myserver.edu\nmyserver.edu { \n\n# import common sec headers snippet\n import common \n# import cors snippet passing server name parameter, wildcard cors poor sec\n import cors myserver.edu \n\n# proxy redirect see handle_path directive \n# https://caddyserver.com/docs/caddyfile/directives/handle_path\n# see the markdown escape comment above for \"/\\*\" should be \"/*\"\n handle_path /somepath/\\* {\n reverse_proxy localhost:9883 {\n header_up X-Real-IP {remote_host}\n # caddy will add X-Forwarded-For for you so not need this one\n #header_up X-Forwarded-For {remote_host}\n header_down Content-Security-Policy \"media-src blob:\"\n }\n }\n\n}\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "caddy", "javascript", "node.js", "ssl" ]
stackoverflow_0074650981_caddy_javascript_node.js_ssl.txt
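Pulling the answers together, the end state of /etc/caddy/Caddyfile might look like the sketch below (the email is a placeholder, and the import line plus the :80 block from the question are removed). Note that the "address already in use" error in the question usually means another process, often the Caddy service itself or the panel's web server, already holds port 80; when Caddy is installed as a service, systemctl reload caddy applies the config without starting a second listener.

# /etc/caddy/Caddyfile — consolidated sketch (placeholder email)
{
    email admin@example.com
}

thespacebar.io, www.thespacebar.io {
    reverse_proxy localhost:8080
}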
Q: How to change color of input range slider with tailwind css? Hope you are having a wonderful day. I'm currently trying to apply some color to a simple range slider. <Input class="range pr-6 bg-red-500" type="range" value="0" min="0" max="1000" onChange="rangeSlide(this.value)" onmousemove="rangeSlide(this.value)"></Input> Changing the color like this doesn't work at all; I also tried with text and border classes. I did some research and assume I should use this to change the slider thumb: -webkit-slider-thumb Like this: .range::-webkit-slider-thumb { background: #00fd0a; } However, I wish to only use Tailwind and not apply styles with pure CSS. If anyone has any insight to give me, I would really appreciate it. A: I finally found how to do it. My bad for not searching the documentation more before asking. Here is what works for me, using accent-<color>-<number>: <Input class="range pr-6 accent-red-500" type="range" value="0" min="0" max="1000" onChange="rangeSlide(this.value)" onmousemove="rangeSlide(this.value)"></Input> Also, here is a link to the documentation if someone passes by and needs more info: https://tailwindcss.com/docs/accent-color
How to change color of input range slider with tailwind css?
Hope you are having a wonderful day. I'm currently trying to apply some color to a simple range slider. <Input class="range pr-6 bg-red-500" type="range" value="0" min="0" max="1000" onChange="rangeSlide(this.value)" onmousemove="rangeSlide(this.value)"></Input> Changing the color like this doesn't work at all; I also tried with text and border classes. I did some research and assume I should use this to change the slider thumb: -webkit-slider-thumb Like this: .range::-webkit-slider-thumb { background: #00fd0a; } However, I wish to only use Tailwind and not apply styles with pure CSS. If anyone has any insight to give me, I would really appreciate it.
[ "I finally find how to do it. My bad to not search more the documentation before asking.\nHere is what work for me, using accent-<color>-<number> :\n<Input class=\"range pr-6 accent-red-500\" \n type=\"range\" \n value=\"0\" min=\"0\" \n max=\"1000\" \n onChange=\"rangeSlide(this.value)\" \n onmousemove=\"rangeSlide(this.value)\"></Input>\n\nAlso, link to the documentation if someone pass by and need more info : https://tailwindcss.com/docs/accent-color\n" ]
[ 0 ]
[]
[]
[ "range", "slider", "tailwind_css" ]
stackoverflow_0074648250_range_slider_tailwind_css.txt
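One addition to the self-answer: when the built-in palette isn't enough (for example the exact #00fd0a from the question), Tailwind's arbitrary-value syntax also works with the accent utility. A minimal sketch, assuming Tailwind >= 3.0, where the accent-* utilities were introduced:

<input type="range" min="0" max="1000" value="0"
       class="range pr-6 accent-[#00fd0a]">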
Q: How do I make this VBA code more efficient so it does not crash due to lack of memory? How can I prevent this code from taking up too much memory? Would like to learn for this project and for future reference. Sub Part5() 'Updating the classification columns all the way to the right 'Use a long if statement pulling from labels Dim LastRow As Integer Dim LastRowJira As Integer Dim Temp As String Dim row_index As String Dim x As Integer LastRow = Worksheets("Sheet1").Range("B" & Worksheets("Sheet1").Rows.Count).End(xlUp).Row Lastrowpaste = LastRowJira + 1 LastRowJira = Worksheets("Sheet0").Range("B" & Worksheets("Sheet0").Rows.Count).End(xlUp).Row Dim AA, AB, AC, AD, AE, AF, AG, AH, AI, AJ, AK, AL, AM, AN, AO As String Dim NN As String NN = "Non-Nuclear" Dim Nuc As String Nuc = "Nuclear" Dim PM As String Dim MV As String PM = "PostMVP" MV = "MVP" Dim SC As String Dim WM As String SC = "GEAM-Supply Chain" WM = "GEAM-Work Mgmt" Dim Conv As String Conv = "Conversion" Lookup = Worksheets("Sheet0").Range("$B$2:$B$" & LastRowJira) WholeLookup = Worksheets("Sheet0").Range("$B$2:$GQ$" & LastRowJira) On Error Resume Next For x = 2 To LastRow Temp = Worksheets("Sheet1").Range("B" & x).Value row_index = Application.WorksheetFunction.Index(Lookup, WorksheetFunction.Match(Temp, Lookup, 0), 1) AA = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 26) AB = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 27) AC = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 28) AD = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 29) AE = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 30) AF = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 31) AG = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 32) AH = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 33) AI = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 34) AJ = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 35) AK = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 36) AL = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 37) AM = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 38) AN = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 39) U = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 20) Comp = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 21) Sev = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 198) Priority = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 11) 'NN WM (NC S3) MVP , AR 44 -- Where we will paste if true If U = "GEAM-Work Mgmt" And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or 
AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AR" & x).Value = row_index Else Worksheets("Sheet1").Range("AR" & x).Value = vbNullString End If 'NN SCM (NC S3) MVP, AS 45 If U = "GEAM-Supply Chain" And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AS" & x).Value = row_index Else Worksheets("Sheet1").Range("AS" & x).Value = vbNullString End If 'NN All (NC S3) MVP, AT 46 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AT" & x).Value = row_index Else Worksheets("Sheet1").Range("AT" & x).Value = vbNullString End If 'NN WM (NC S3), AU 47 If U = "GEAM-Work Mgmt" And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AU" & x).Value = row_index Else Worksheets("Sheet1").Range("AU" & x).Value = vbNullString End If 'NN SCM (NC S3), AV 48 If U = SC And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AV" & x).Value = row_index Else Worksheets("Sheet1").Range("AV" & x).Value = vbNullString End If 'NN All (NC S3) All, AW 49 If (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AW" & x).Value = row_index Else Worksheets("Sheet1").Range("AW" & x).Value = vbNullString End If 'NUC WM (NC S3) MVP, AX 50 If U = WM And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AX" & x).Value = row_index Else Worksheets("Sheet1").Range("AX" & x).Value = vbNullString End If 'NUC 
SCM (NC S3) MVP, AY 51 If U = SC And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AY" & x).Value = row_index Else Worksheets("Sheet1").Range("AY" & x).Value = vbNullString End If 'NUC All (NC S3) MVP, AZ 52 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AZ" & x).Value = row_index Else Worksheets("Sheet1").Range("AZ" & x).Value = vbNullString End If 'NUC WM (NC S3), BA 53 If U = WM And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BA" & x).Value = row_index Else Worksheets("Sheet1").Range("BA" & x).Value = vbNullString End If 'NUC SCM (NC S3), BB 54 If U = SC And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BB" & x).Value = row_index Else Worksheets("Sheet1").Range("BB" & x).Value = vbNullString End If 'NUC All (NC S3), BC 55 If (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BC" & x).Value = row_index Else Worksheets("Sheet1").Range("BC" & x).Value = vbNullString End If 'MVP S3, BD 56 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BD" & x).Value = row_index Else Worksheets("Sheet1").Range("BD" & x).Value = vbNullString End If 'PostMVP S3, BE 57 If (AA = PM Or AB = PM Or AC = PM Or AD = PM Or AE = PM Or AF = PM Or AG = PM Or AH = PM Or AI = PM Or AJ = PM Or AK = PM Or AL = PM Or AM = PM Or AN = PM Or AO = PM) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BE" & x).Value = row_index Else Worksheets("Sheet1").Range("BE" & x).Value = vbNullString End If 'NN 
Conv WM S3 MVP, BF 58 If U = WM And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BF" & x).Value = row_index Else Worksheets("Sheet1").Range("BF" & x).Value = vbNullString End If 'NN Conv SCM S3 MVP, BG 59 If U = SC And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BG" & x).Value = row_index Else Worksheets("Sheet1").Range("BG" & x).Value = vbNullString End If 'NN Conv All S3 MVP, BH 60 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BH" & x).Value = row_index Else Worksheets("Sheet1").Range("BH" & x).Value = vbNullString End If 'NN Conv WM S3, BI 61 If U = WM And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BI" & x).Value = row_index Else Worksheets("Sheet1").Range("BI" & x).Value = vbNullString End If 'NN Conv SCM S3, BJ 62 If U = SC And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BJ" & x).Value = row_index Else Worksheets("Sheet1").Range("BJ" & x).Value = vbNullString End If 'NN Conv All S3, BL 63 If (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BL" & x).Value = row_index Else Worksheets("Sheet1").Range("BL" & x).Value = vbNullString End If 'Nuc Conv SCM S3 MVP, BM 64 If U = SC And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = 
"MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BM" & x).Value = row_index Else Worksheets("Sheet1").Range("BM" & x).Value = vbNullString End If 'Nuc Conv All S3 MVP, BN 65 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BN" & x).Value = row_index Else Worksheets("Sheet1").Range("BN" & x).Value = vbNullString End If 'Nuc Conv WM S3, BO 66 If U = WM And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BO" & x).Value = row_index Else Worksheets("Sheet1").Range("BO" & x).Value = vbNullString End If 'Nuc Conv SCM S3, BP 67 If U = SC And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BP" & x).Value = row_index Else Worksheets("Sheet1").Range("BP" & x).Value = vbNullString End If 'Nuc Conv All S3, BQ 68 If (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BQ" & x).Value = row_index Else Worksheets("Sheet1").Range("BQ" & x).Value = vbNullString End If Next x End Sub I was considering a nested for loop? I tried separating loops for each column? I am a novice if you could not tell by the code. A: I will guess: If the issue is memory then instead of creating arrays for Lookup and WholeLookup, try using the Range. Replace this: Lookup = Worksheets("Sheet0").Range("$B$2:$B$" & LastRowJira) WholeLookup = Worksheets("Sheet0").Range("$B$2:$GQ$" & LastRowJira) With this: Set Lookup = Worksheets("Sheet0").Range("$B$2:$B$" & LastRowJira) Set WholeLookup = Worksheets("Sheet0").Range("$B$2:$GQ$" & LastRowJira)
How do I make this VBA code more efficient so it does not crash due to lack of memory?
How can I prevent this code from taking up too much memory? Would like to learn for this project and for future reference. Sub Part5() 'Updating the classification columns all the way to the right 'Use a long if statement pulling from labels Dim LastRow As Integer Dim LastRowJira As Integer Dim Temp As String Dim row_index As String Dim x As Integer LastRow = Worksheets("Sheet1").Range("B" & Worksheets("Sheet1").Rows.Count).End(xlUp).Row Lastrowpaste = LastRowJira + 1 LastRowJira = Worksheets("Sheet0").Range("B" & Worksheets("Sheet0").Rows.Count).End(xlUp).Row Dim AA, AB, AC, AD, AE, AF, AG, AH, AI, AJ, AK, AL, AM, AN, AO As String Dim NN As String NN = "Non-Nuclear" Dim Nuc As String Nuc = "Nuclear" Dim PM As String Dim MV As String PM = "PostMVP" MV = "MVP" Dim SC As String Dim WM As String SC = "GEAM-Supply Chain" WM = "GEAM-Work Mgmt" Dim Conv As String Conv = "Conversion" Lookup = Worksheets("Sheet0").Range("$B$2:$B$" & LastRowJira) WholeLookup = Worksheets("Sheet0").Range("$B$2:$GQ$" & LastRowJira) On Error Resume Next For x = 2 To LastRow Temp = Worksheets("Sheet1").Range("B" & x).Value row_index = Application.WorksheetFunction.Index(Lookup, WorksheetFunction.Match(Temp, Lookup, 0), 1) AA = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 26) AB = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 27) AC = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 28) AD = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 29) AE = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 30) AF = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 31) AG = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 32) AH = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 33) AI = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 34) AJ = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 35) AK = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 36) AL = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 37) AM = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 38) AN = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 39) U = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 20) Comp = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 21) Sev = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 198) Priority = Application.WorksheetFunction.Index(WholeLookup, WorksheetFunction.Match(Temp, Lookup, 0), 11) 'NN WM (NC S3) MVP , AR 44 -- Where we will paste if true If U = "GEAM-Work Mgmt" And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = 
"SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AR" & x).Value = row_index Else Worksheets("Sheet1").Range("AR" & x).Value = vbNullString End If 'NN SCM (NC S3) MVP, AS 45 If U = "GEAM-Supply Chain" And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AS" & x).Value = row_index Else Worksheets("Sheet1").Range("AS" & x).Value = vbNullString End If 'NN All (NC S3) MVP, AT 46 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AT" & x).Value = row_index Else Worksheets("Sheet1").Range("AT" & x).Value = vbNullString End If 'NN WM (NC S3), AU 47 If U = "GEAM-Work Mgmt" And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AU" & x).Value = row_index Else Worksheets("Sheet1").Range("AU" & x).Value = vbNullString End If 'NN SCM (NC S3), AV 48 If U = SC And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AV" & x).Value = row_index Else Worksheets("Sheet1").Range("AV" & x).Value = vbNullString End If 'NN All (NC S3) All, AW 49 If (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AW" & x).Value = row_index Else Worksheets("Sheet1").Range("AW" & x).Value = vbNullString End If 'NUC WM (NC S3) MVP, AX 50 If U = WM And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AX" & x).Value = row_index Else Worksheets("Sheet1").Range("AX" & x).Value = vbNullString End If 'NUC SCM (NC S3) MVP, AY 51 If U = SC And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" 
Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AY" & x).Value = row_index Else Worksheets("Sheet1").Range("AY" & x).Value = vbNullString End If 'NUC All (NC S3) MVP, AZ 52 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("AZ" & x).Value = row_index Else Worksheets("Sheet1").Range("AZ" & x).Value = vbNullString End If 'NUC WM (NC S3), BA 53 If U = WM And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BA" & x).Value = row_index Else Worksheets("Sheet1").Range("BA" & x).Value = vbNullString End If 'NUC SCM (NC S3), BB 54 If U = SC And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BB" & x).Value = row_index Else Worksheets("Sheet1").Range("BB" & x).Value = vbNullString End If 'NUC All (NC S3), BC 55 If (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BC" & x).Value = row_index Else Worksheets("Sheet1").Range("BC" & x).Value = vbNullString End If 'MVP S3, BD 56 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BD" & x).Value = row_index Else Worksheets("Sheet1").Range("BD" & x).Value = vbNullString End If 'PostMVP S3, BE 57 If (AA = PM Or AB = PM Or AC = PM Or AD = PM Or AE = PM Or AF = PM Or AG = PM Or AH = PM Or AI = PM Or AJ = PM Or AK = PM Or AL = PM Or AM = PM Or AN = PM Or AO = PM) And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BE" & x).Value = row_index Else Worksheets("Sheet1").Range("BE" & x).Value = vbNullString End If 'NN Conv WM S3 MVP, BF 58 If U = WM And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or 
AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BF" & x).Value = row_index Else Worksheets("Sheet1").Range("BF" & x).Value = vbNullString End If 'NN Conv SCM S3 MVP, BG 59 If U = SC And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BG" & x).Value = row_index Else Worksheets("Sheet1").Range("BG" & x).Value = vbNullString End If 'NN Conv All S3 MVP, BH 60 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BH" & x).Value = row_index Else Worksheets("Sheet1").Range("BH" & x).Value = vbNullString End If 'NN Conv WM S3, BI 61 If U = WM And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BI" & x).Value = row_index Else Worksheets("Sheet1").Range("BI" & x).Value = vbNullString End If 'NN Conv SCM S3, BJ 62 If U = SC And (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BJ" & x).Value = row_index Else Worksheets("Sheet1").Range("BJ" & x).Value = vbNullString End If 'NN Conv All S3, BL 63 If (AA = NN Or AB = NN Or AC = NN Or AD = NN Or AE = NN Or AF = NN Or AG = NN Or AH = NN Or AI = NN Or AJ = NN Or AK = NN Or AL = NN Or AM = NN Or AN = NN Or AO = NN) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BL" & x).Value = row_index Else Worksheets("Sheet1").Range("BL" & x).Value = vbNullString End If 'Nuc Conv SCM S3 MVP, BM 64 If U = SC And (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc 
Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BM" & x).Value = row_index Else Worksheets("Sheet1").Range("BM" & x).Value = vbNullString End If 'Nuc Conv All S3 MVP, BN 65 If (AA = "MVP" Or AB = "MVP" Or AC = "MVP" Or AD = "MVP" Or AE = "MVP" Or AF = "MVP" Or AG = "MVP" Or AH = "MVP" Or AI = "MVP" Or AJ = "MVP" Or AK = "MVP" Or AL = "MVP" Or AM = "MVP" Or AN = "MVP" Or AO = "MVP") And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BN" & x).Value = row_index Else Worksheets("Sheet1").Range("BN" & x).Value = vbNullString End If 'Nuc Conv WM S3, BO 66 If U = WM And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BO" & x).Value = row_index Else Worksheets("Sheet1").Range("BO" & x).Value = vbNullString End If 'Nuc Conv SCM S3, BP 67 If U = SC And (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BP" & x).Value = row_index Else Worksheets("Sheet1").Range("BP" & x).Value = vbNullString End If 'Nuc Conv All S3, BQ 68 If (AA = Nuc Or AB = Nuc Or AC = Nuc Or AD = Nuc Or AE = Nuc Or AF = Nuc Or AG = Nuc Or AH = Nuc Or AI = Nuc Or AJ = Nuc Or AK = Nuc Or AL = Nuc Or AM = Nuc Or AN = Nuc Or AO = Nuc) And Comp = Conv And (Sev = "SEV 2" Or (Sev = "SEV 3" And Priority = "High") Or (Sev = "SEV 3" And Priority = "Highest")) Then Worksheets("Sheet1").Range("BQ" & x).Value = row_index Else Worksheets("Sheet1").Range("BQ" & x).Value = vbNullString End If Next x End Sub I was considering a nested for loop? I tried separating loops for each column? I am a novice if you could not tell by the code.
[ "I will guess: If the issue is memory then instead of creating arrays for Lookup and WholeLookup, try using the Range.\nReplace this:\nLookup = Worksheets(\"Sheet0\").Range(\"$B$2:$B$\" & LastRowJira)\nWholeLookup = Worksheets(\"Sheet0\").Range(\"$B$2:$GQ$\" & LastRowJira)\n\nWith this:\nSet Lookup = Worksheets(\"Sheet0\").Range(\"$B$2:$B$\" & LastRowJira)\nSet WholeLookup = Worksheets(\"Sheet0\").Range(\"$B$2:$GQ$\" & LastRowJira)\n\n" ]
[ 0 ]
[]
[]
[ "excel", "loops", "memory", "memory_management", "vba" ]
stackoverflow_0074659661_excel_loops_memory_memory_management_vba.txt
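Beyond switching to Set, the main cost in the loop above is that every WorksheetFunction.Match and Index call re-scans Sheet0, and every result is written cell by cell. A minimal sketch of the usual fix (one bulk read into an array, a Dictionary for the key lookup, one bulk write per output column) follows. Sheet names and column offsets are taken from the question; the single "MVP" test is a placeholder for the full classification conditions.

Option Explicit

' Minimal sketch: one bulk read, an in-memory index, one bulk write.
Sub Part5_Fast()
    Dim ws0 As Worksheet, ws1 As Worksheet
    Set ws0 = Worksheets("Sheet0")
    Set ws1 = Worksheets("Sheet1")

    Dim lastRowJira As Long, lastRow As Long
    lastRowJira = ws0.Cells(ws0.Rows.Count, "B").End(xlUp).Row
    lastRow = ws1.Cells(ws1.Rows.Count, "B").End(xlUp).Row

    Dim lookupData As Variant            ' one read of B2:GQ<last row>
    lookupData = ws0.Range("B2:GQ" & lastRowJira).Value

    Dim idx As Object, r As Long         ' key (column B) -> array row
    Set idx = CreateObject("Scripting.Dictionary")
    For r = 1 To UBound(lookupData, 1)
        idx(CStr(lookupData(r, 1))) = r
    Next r

    Dim out As Variant, x As Long, key As String, hit As Long
    ReDim out(1 To lastRow - 1, 1 To 1)  ' one output column (AR here)
    For x = 2 To lastRow
        key = CStr(ws1.Cells(x, "B").Value)
        If idx.Exists(key) Then
            hit = idx(key)
            ' Array column 26 is sheet column AA, matching Index(..., 26).
            If lookupData(hit, 26) = "MVP" Then out(x - 1, 1) = key
        End If
    Next x
    ws1.Range("AR2").Resize(UBound(out, 1), 1).Value = out   ' one write
End Sub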
Q: Copying files in java (Doesn't work) I have tried many examples from the same question that has already been asked including: IOUtils.copy(); (copy is a non-existent method) Files.copy(source, target, REPLACE_EXISTING); (REPLACE_EXISTING "Cannot find Symbol") FileUtils.copyFile(); (FileUtils doesn't exist) The problems with using them are in brackets. Here is the code for the most repeated method for copying: import static java.nio.file.Files; public void Install() { CrtFol(); CrtImgFol(); CrtSaveFol(); CrtSaveFile(); open.runmm(); //I have added the import for "Files" Files.copy(img1, d4, REPLACE_EXISTING); //Compiler says "Cannot find symbol" when I go over REPLACE_EXISTING //img1 is a File and d4 is a File as a directory } Are there any other ways to copy or a way to fix the one above? A: With Java 7's standard library, you can use java.nio.file.Files.copy(Path source, Path target, CopyOption... options). No need to add additional dependencies or implement your own. try { Files.copy( Paths.get( sFrom ), Paths.get( sTo ), StandardCopyOption.REPLACE_EXISTING); } catch (IOException e) { // Handle exception } A: Not sure if Java actually has anything to copy a file. The simplest way would be to convert the file into a byte stream and then write this stream to another file. Something like this: InputStream inStream = null; OutputStream outStream = null; File inputFile =new File("inputFile.txt"); File outputFile =new File("outputFile.txt"); inStream = new FileInputStream(inputFile); outStream = new FileOutputStream(outputFile); byte[] buffer = new byte[1024]; int fileLength; while ((fileLength = inStream.read(buffer)) > 0){ outStream.write(buffer, 0, fileLength ); } inStream.close(); outStream.close(); where inputFile is the file being copied from, and outputFile is the name of the copy. A: I use this code: import java.io.*; public class CopyTest { public CopyTest() { } public static void main(String[] args) { try { File stockInputFile = new File("C://test.txt"); File StockOutputFile = new File("C://output.txt"); FileInputStream fis = new FileInputStream(stockInputFile); FileOutputStream fos = new FileOutputStream(StockOutputFile); int count = 0; while((count = fis.read()) > -1){ fos.write(count); } fis.close(); fos.close(); } catch (FileNotFoundException e) { System.err.println("FileStreamsReadnWrite: " + e); } catch (IOException e) { System.err.println("FileStreamsReadnWrite: " + e); } } } A: Use this code to upload file, I am working on SpringBoot... import org.springframework.stereotype.Component; import org.springframework.web.multipart.MultipartFile; import java.nio.file.Files; import java.nio.file.Paths; import java.nio.file.StandardCopyOption; @Component public class FileUploadhelper { public final String uploadDirectory = "D:\\SpringBoot Project\\BootRestBooks\\src\\main\\resources\\static\\image"; public boolean uploadFile(MultipartFile mf) { boolean flag = false; try { Files.copy(mf.getInputStream(), Paths.get(uploadDirectory + "\\" + mf.getOriginalFilename()), StandardCopyOption.REPLACE_EXISTING); flag = true; } catch (Exception e) { e.printStackTrace(); } return flag; } }
Copying files in java (Doesn't work)
I have tried many examples from the same question that has already been asked including: IOUtils.copy(); (copy is a non-existent method) Files.copy(source, target, REPLACE_EXISTING); (REPLACE_EXISTING "Cannot find Symbol") FileUtils.copyFile(); (FileUtils doesn't exist) The problems with using them are in brackets. Here is the code for the most repeated method for copying: import static java.nio.file.Files; public void Install() { CrtFol(); CrtImgFol(); CrtSaveFol(); CrtSaveFile(); open.runmm(); //I have added the import for "Files" Files.copy(img1, d4, REPLACE_EXISTING); //Compiler says "Cannot find symbol" when I go over REPLACE_EXISTING //img1 is a File and d4 is a File as a directory } Are there any other ways to copy or a way to fix the one above?
[ "With Java 7's standard library, you can use java.nio.file.Files.copy(Path source, Path target, CopyOption... options). No need to add additional dependencies or implement your own.\ntry {\n Files.copy( Paths.get( sFrom ), \n Paths.get( sTo ),\n StandardCopyOption.REPLACE_EXISTING);\n } catch (IOException e) { \n // Handle exception\n }\n\n", "Not sure if Java actually has anything to copy a file. The simplest way would be to convert the file into a byte stream and then write this stream to another file. Something like this:\n InputStream inStream = null;\n OutputStream outStream = null;\n\n File inputFile =new File(\"inputFile.txt\");\n File outputFile =new File(\"outputFile.txt\");\n\n inStream = new FileInputStream(inputFile);\n outStream = new FileOutputStream(outputFile);\n\n byte[] buffer = new byte[1024];\n\n\n int fileLength;\n while ((fileLength = inStream.read(buffer)) > 0){\n\n outStream.write(buffer, 0, fileLength );\n\n }\n\n inStream.close();\n outStream.close();\n\nwhere inputFile is the file being copied from, and outputFile is the name of the copy.\n", "I use this code:\nimport java.io.*;\n\npublic class CopyTest {\n\n public CopyTest() {\n\n }\n\n public static void main(String[] args) {\n try {\n File stockInputFile = new File(\"C://test.txt\");\n File StockOutputFile = new File(\"C://output.txt\");\n\n FileInputStream fis = new FileInputStream(stockInputFile);\n FileOutputStream fos = new FileOutputStream(StockOutputFile);\n int count = 0;\n\n while((count = fis.read()) > -1){\n fos.write(count);\n }\n fis.close();\n fos.close();\n } catch (FileNotFoundException e) {\n System.err.println(\"FileStreamsReadnWrite: \" + e);\n } catch (IOException e) {\n System.err.println(\"FileStreamsReadnWrite: \" + e);\n }\n\n }\n\n}\n\n", "Use this code to upload file, I am working on SpringBoot...\nimport org.springframework.stereotype.Component;\nimport org.springframework.web.multipart.MultipartFile;\n\n\nimport java.nio.file.Files;\nimport java.nio.file.Paths;\nimport java.nio.file.StandardCopyOption;\n\n\n@Component\npublic class FileUploadhelper {\n public final String uploadDirectory = \"D:\\\\SpringBoot Project\\\\BootRestBooks\\\\src\\\\main\\\\resources\\\\static\\\\image\";\n\n public boolean uploadFile(MultipartFile mf) {\n\n boolean flag = false;\n\n\n try {\n\n Files.copy(mf.getInputStream(), Paths.get(uploadDirectory + \"\\\\\" + mf.getOriginalFilename()), StandardCopyOption.REPLACE_EXISTING);\n flag = true;\n\n } catch (Exception e) {\n e.printStackTrace();\n }\n\n return flag;\n }\n}\n\n" ]
[ 3, 1, 1, 0 ]
[]
[]
[ "copy", "directory", "file", "java" ]
stackoverflow_0015973066_copy_directory_file_java.txt
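On the question's two compile errors: import static java.nio.file.Files; is invalid because a static import must name a member, not a type, and REPLACE_EXISTING is only resolved when qualified as StandardCopyOption.REPLACE_EXISTING (or statically imported). Also note that Files.copy takes Path, not File, so img1.toPath() would be needed. A minimal runnable sketch with placeholder file names:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class CopyDemo {
    public static void main(String[] args) {
        try {
            // Qualifying the constant is what fixes "Cannot find symbol".
            Files.copy(Paths.get("inputFile.txt"),
                       Paths.get("outputFile.txt"),
                       StandardCopyOption.REPLACE_EXISTING);
            System.out.println("Copied.");
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}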
Q: How to save all the weights of an LSTM Keras model in a 1D array I have a saved LSTM model and its weights. I want to extract all the weights from each layer of this LSTM model and save them in a 1D array. I tried for layer in model.layers: l = layer.get_weights() but it only returns the weights of the last layer A: weights = [] for layer in model.layers: layer_weights = layer.get_weights()[0] weights.append(layer_weights.flatten()) In this code, the weights variable is used to store the 1D arrays of weights for each layer. The loop iterates over the layers of the model, and for each layer, the get_weights() method is used to extract the weights. The flatten() method is then used to convert the weights from a 2D array to a 1D array, and the resulting array is added to the weights list. After the loop completes, the weights list will contain a 1D array for each layer of the model, containing the weights for that layer. You can then use this list to access the weights for each layer, or save it to a file for later use.
How to save all the weights of an LSTM Keras model in a 1D array
I have a saved LSTM model and its weights. I want to extract all the weights from each layer of this LSTM model and save them in a 1D array. I tried for layer in model.layers: l = layer.get_weights() but it only returns the weights of the last layer
[ "weights = []\n\nfor layer in model.layers:\n layer_weights = layer.get_weights()[0]\n weights.append(layer_weights.flatten())\n\nIn this code, the weights variable is used to store the 1D arrays of weights for each layer. The loop iterates over the layers of the model, and for each layer, the get_weights() method is used to extract the weights. The flatten() method is then used to convert the weights from a 2D array to a 1D array, and the resulting array is added to the weights list.\nAfter the loop completes, the weights list will contain a 1D array for each layer of the model, containing the weights for that layer. You can then use this list to access the weights for each layer, or save it to a file for later use.\n" ]
[ 0 ]
[]
[]
[ "keras", "keras_layer", "lstm", "numpy_ndarray", "tf.keras" ]
stackoverflow_0074652790_keras_keras_layer_lstm_numpy_ndarray_tf.keras.txt
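One caution about the answer above: layer.get_weights()[0] keeps only the first array of each layer (an LSTM layer actually returns its kernel, recurrent kernel, and bias) and raises IndexError on weightless layers such as Dropout. A minimal sketch that flattens every array of every layer into a single 1D vector; the model here is illustrative:

import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.layers.LSTM(8, input_shape=(5, 3)),
    keras.layers.Dense(1),
])

# get_weights() returns a list per layer (possibly empty), so iterating it
# fully captures the kernel, recurrent kernel, and bias of the LSTM layer.
flat = np.concatenate(
    [w.ravel() for layer in model.layers for w in layer.get_weights()]
)
print(flat.shape)                   # total parameter count as one 1D array
np.save("all_weights_1d.npy", flat)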
Q: Django form send emails in console but not to my mailbox I'm getting some practice using Django and now I'm stuck trying to get a contact form to work. What I'm trying to do is: once the "Submit" button is clicked, I should receive the form data in an email arriving at the email address linked to my website. Instead what happens is: once I clicked on the "Submit" button, the page loads for some time and at the end I get a SMTPServerDisconnected error. Can you tell me if I made a mistake in writing some logic or if it is a problem that I have to solve with my hosting service? This is forms.py: from django import forms class ContactForm(forms.Form): name = forms.CharField(label='Your name', max_length=200, widget=forms.TextInput(attrs={'class': 'form-control', 'id': 'name'})) from_email = forms.EmailField(label='Your email', max_length=200, widget=forms.TextInput( attrs={'class': 'form-control', 'id': 'email'})) subject = forms.CharField(label='Enter a subject', max_length=200, widget=forms.TextInput(attrs={'class': 'form-control', 'id': 'subject'})) message = forms.CharField(label='Write here your message', max_length=500, widget=forms.TextInput(attrs={'class': 'form-control', 'id': 'message'})) This is view.py (I replaced each address with dummy addresses): def home(request): if request.method == 'POST': form = ContactForm(request.POST) if form.is_valid(): subject = form.cleaned_data['subject'] body = { 'first_name': form.cleaned_data['name'], 'email_from': form.cleaned_data['from_email'], 'message': form.cleaned_data['message'], } message = "\n".join(body.values()) try: send_mail(subject, message, '[email protected]', ['[email protected]'], fail_silently=True) except BadHeaderError: return HttpResponse('Invalid header found') form = ContactForm() return render(request, 'index.html', {'form': form}) And this is settings.py: EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_USE_TLS = True EMAIL_HOST = 'mail.mydomain.net' EMAIL_PORT = SmtpPort EMAIL_HOST_USER = '[email protected]' EMAIL_HOST_PASSWORD = 'my email password' I tried to see if in the VS Code terminal the form works and that's why I replaced EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' with EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' and it works. So I tried to replace the EMAIL_PORT with the IMAP PORT and POP3 PORT, but nothing happened. A: One reason this doesn't work is that many mail providers use two-step authentication; in my case it was Outlook that had the issue. I suggest creating a Gmail account, adding an app password, and using that email to send emails. Here you can find how to use that.
Django form send emails in console but not to my mailbox
I'm getting some practice using Django and now I'm stuck trying to get a contact form to work. What I'm trying to do is: once the "Submit" button is clicked, I should receive the form data in an email arriving at the email address linked to my website. Instead what happens is: once I clicked on the "Submit" button, the page loads for some time and at the end I get a SMTPServerDisconnected error. Can you tell me if I made a mistake in writing some logic or if it is a problem that I have to solve with my hosting service? This is forms.py: from django import forms class ContactForm(forms.Form): name = forms.CharField(label='Your name', max_length=200, widget=forms.TextInput(attrs={'class': 'form-control', 'id': 'name'})) from_email = forms.EmailField(label='Your email', max_length=200, widget=forms.TextInput( attrs={'class': 'form-control', 'id': 'email'})) subject = forms.CharField(label='Enter a subject', max_length=200, widget=forms.TextInput(attrs={'class': 'form-control', 'id': 'subject'})) message = forms.CharField(label='Write here your message', max_length=500, widget=forms.TextInput(attrs={'class': 'form-control', 'id': 'message'})) This is view.py (I replaced each address with dummy addresses): def home(request): if request.method == 'POST': form = ContactForm(request.POST) if form.is_valid(): subject = form.cleaned_data['subject'] body = { 'first_name': form.cleaned_data['name'], 'email_from': form.cleaned_data['from_email'], 'message': form.cleaned_data['message'], } message = "\n".join(body.values()) try: send_mail(subject, message, '[email protected]', ['[email protected]'], fail_silently=True) except BadHeaderError: return HttpResponse('Invalid header found') form = ContactForm() return render(request, 'index.html', {'form': form}) And this is settings.py: EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' EMAIL_USE_TLS = True EMAIL_HOST = 'mail.mydomain.net' EMAIL_PORT = SmtpPort EMAIL_HOST_USER = '[email protected]' EMAIL_HOST_PASSWORD = 'my email password' I tried to see if in the VS Code terminal the form works and that's why I replaced EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend' with EMAIL_BACKEND = 'django.core.mail.backends.console.EmailBackend' and it works. So I tried to replace the EMAIL_PORT with the IMAP PORT and POP3 PORT, but nothing happened.
[ "One of the reason why that doesnt work is a lot of mail providers use two step authentication. for my case it was outlook that have the issue. i suggest creating a gmail account and add app password and use that email to send emails. here you can find how to use that.\n" ]
[ 0 ]
[]
[]
[ "django", "django_forms", "django_settings", "django_views" ]
stackoverflow_0074659323_django_django_forms_django_settings_django_views.txt
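A minimal settings sketch for the Gmail-plus-app-password route the answer suggests. The host and port are Gmail's documented SMTP values; the address and the 16-character app password are placeholders. Note also that the question's fail_silently=True suppresses exactly the SMTP errors being investigated, so keep it False while debugging.

# settings.py: SMTP via Gmail with an app password (placeholders below).
EMAIL_BACKEND = "django.core.mail.backends.smtp.EmailBackend"
EMAIL_HOST = "smtp.gmail.com"
EMAIL_PORT = 587
EMAIL_USE_TLS = True
EMAIL_HOST_USER = "you@gmail.com"
EMAIL_HOST_PASSWORD = "abcd efgh ijkl mnop"  # app password, not the login password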
Q: What is the correct way of setting state on a Vue 3 Pinia store? When initializing state on a Vue 3 (composition API) Pinia store which pattern is more "correct" or idiomatic? Option 1: state: () => ({ user: {}, }), Option 2: state: () => { return { user: {}, }; }, Option 3: Maybe something else? A: They are the same. Option 1 and 2 are functions that returns an object. In arrow functions, the { stands for the content of the function (like function x () {). So if you want to return an object (like return {), you would use ({ as option 1 does. Reference: Arrow functions advanced syntax A: Option 2. It should be a functions that returns the state. https://pinia.vuejs.org/core-concepts/state.html A: For the Vue 3 Composition API in Pinia, the correct pattern is: export const useUserStore = defineStore('users', () => { const user = ref({}); return { user }; }) It's called "Setup Store". In Setup Stores: ref()s become state properties computed()s become getters function()s become actions More info: https://pinia.vuejs.org/core-concepts/#setup-stores
What is the correct way of setting state on a Vue 3 Pinia store?
When initializing state on a Vue 3 (composition API) Pinia store which pattern is more "correct" or idiomatic? Option 1: state: () => ({ user: {}, }), Option 2: state: () => { return { user: {}, }; }, Option 3: Maybe something else?
[ "They are the same. Option 1 and 2 are functions that returns an object. In arrow functions, the { stands for the content of the function (like function x () {). So if you want to return an object (like return {), you would use ({ as option 1 does.\nReference: Arrow functions advanced syntax\n", "Option 2. It should be a functions that returns the state. https://pinia.vuejs.org/core-concepts/state.html\n", "For the Vue 3 Composition API in Pinia, the correct pattern is:\nexport const useUserStore = defineStore('users', () => {\n const user = ref({});\n\n return { user };\n})\n\nIt's called \"Setup Store\".\n\nIn Setup Stores:\nref()s become state properties\ncomputed()s become getters\nfunction()s become actions\n\nMore info: https://pinia.vuejs.org/core-concepts/#setup-stores\n" ]
[ 5, 1, 0 ]
[]
[]
[ "pinia", "vue.js" ]
stackoverflow_0071336397_pinia_vue.js.txt
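A minimal sketch of the setup-store form from the last answer, with a getter and an action added to show how each piece maps; the names are illustrative.

// stores/user.ts: "setup store" variant of the user store.
import { ref, computed } from 'vue'
import { defineStore } from 'pinia'

export const useUserStore = defineStore('users', () => {
  const user = ref<{ name?: string }>({})            // state property
  const hasName = computed(() => !!user.value.name)  // getter
  function setName(name: string) {                   // action
    user.value = { ...user.value, name }
  }
  return { user, hasName, setName }
})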
Q: How to implement a YouTube like bottom sheet in Flutter? I’m working on a video app that basically mimics YouTube. I need to implement a bottom sheet identically to theirs with fading and scaling animations. Could you recommend me how to begin or what sheet to use (currently I am using snapping sheet package)? I tried to use Draggable scrollable bottom sheet, however I couldn’t position the video on top without it being scrolled out of view. I tried snapping sheet package, however after I scaled it to the max 1.0 size I couldn’t start to drag it down (the draggable widget disappeared) and I could minimise it only through the controller programmatically and not with user drag interaction. A: it is simple BottomNavigationBar bottomNavigationBar: BottomNavigationBar( items: const <BottomNavigationBarItem>[ BottomNavigationBarItem( icon: Icon(Icons.home), label: 'Home', backgroundColor: Colors.red, ), BottomNavigationBarItem( icon: Icon(Icons.business), label: 'Business', backgroundColor: Colors.green, ), BottomNavigationBarItem( icon: Icon(Icons.school), label: 'School', backgroundColor: Colors.purple, ), BottomNavigationBarItem( icon: Icon(Icons.settings), label: 'Settings', backgroundColor: Colors.pink, ), ], currentIndex: _selectedIndex, selectedItemColor: Colors.amber[800], onTap: _onItemTapped, ) as mentioned above it will create a simple bottomNavigationBar required by your side
How to implement a YouTube like bottom sheet in Flutter?
I’m working on a video app that basically mimics YouTube. I need to implement a bottom sheet identically to theirs with fading and scaling animations. Could you recommend me how to begin or what sheet to use (currently I am using snapping sheet package)? I tried to use Draggable scrollable bottom sheet, however I couldn’t position the video on top without it being scrolled out of view. I tried snapping sheet package, however after I scaled it to the max 1.0 size I couldn’t start to drag it down (the draggable widget disappeared) and I could minimise it only through the controller programmatically and not with user drag interaction.
[ "it is simple BottomNavigationBar\nbottomNavigationBar: BottomNavigationBar(\n items: const <BottomNavigationBarItem>[\n BottomNavigationBarItem(\n icon: Icon(Icons.home),\n label: 'Home',\n backgroundColor: Colors.red,\n ),\n BottomNavigationBarItem(\n icon: Icon(Icons.business),\n label: 'Business',\n backgroundColor: Colors.green,\n ),\n BottomNavigationBarItem(\n icon: Icon(Icons.school),\n label: 'School',\n backgroundColor: Colors.purple,\n ),\n BottomNavigationBarItem(\n icon: Icon(Icons.settings),\n label: 'Settings',\n backgroundColor: Colors.pink,\n ),\n ],\n currentIndex: _selectedIndex,\n selectedItemColor: Colors.amber[800],\n onTap: _onItemTapped,\n )\n\nas mentioned above it will create a simple bottomNavigationBar required by your side\n" ]
[ 0 ]
[]
[]
[ "bottom_sheet", "dart", "flutter" ]
stackoverflow_0074660108_bottom_sheet_dart_flutter.txt
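The answer above describes BottomNavigationBar, which is a different widget from the sheet the question asks about. For the YouTube-style behavior, a minimal DraggableScrollableSheet sketch is below: the video stays pinned because it sits outside the scrollable, and the sheet's controller drives the list so that list drags move the sheet. Drags must start on the list (or a dedicated handle), not on the pinned area, and all sizes are illustrative.

import 'package:flutter/material.dart';

// Sketch: pinned "video" strip on top, scrollable details below.
class PlayerSheet extends StatelessWidget {
  const PlayerSheet({super.key});

  @override
  Widget build(BuildContext context) {
    return DraggableScrollableSheet(
      minChildSize: 0.12,
      initialChildSize: 0.12,
      maxChildSize: 1.0,
      snap: true, // snaps between the mini player and full screen
      builder: (context, scrollController) => Material(
        child: Column(
          children: [
            Container(height: 90, color: Colors.black), // pinned video area
            Expanded(
              child: ListView.builder(
                controller: scrollController, // lets list drags move the sheet
                itemCount: 30,
                itemBuilder: (_, i) => ListTile(title: Text('Item $i')),
              ),
            ),
          ],
        ),
      ),
    );
  }
}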
Q: Swift/iOS: Generics Error: protocol requires nested type 'V'; do you want to add it? I am using Playground, Swift version 5.7.1. I have two protocols, and two classes. This first one is simple and it works: protocol TestSomeB: ObservableObject { associatedtype V: View func returnSome() -> V } class TestSomeBImp: TestSomeB { init() {} func returnSome() -> some View { Text("Hello World") } } let testSomeBImp = TestSomeBImp() testSomeBImp.returnSome() This works and gives me the value {{SwiftUI.AnyTextStorage, {key "Hello World", hasFormatting false, []}, nil, nil}} The second one doesn't work even though the basic code structure is the same: struct TestModel { var title: String } struct TestView: View { var body: some View { Text("Hello, World!") } } // similar to protocol TestSomeB protocol TestSomeA: ObservableObject { associatedtype V: View func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> V } class TestSomeAImp: TestSomeA { init() {} // returns `some View` similar to returnSome() method above func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> some View { NavigationLink(destination: routeToPage(data: data)) { content() } } private func routeToPage(data: TestModel) -> some View { TestView() } } let testSomeImp = TestSomeAImp() testSomeImp.linkBuilder( data: TestModel(title: "Hello "), content: { Text("World!") } ) Sadly this gives me the error: protocol requires nested type 'V'; do you want to add it? associatedtype V: View I need to return some View but I also need to abstract my implementation. I tried using Content instead of V in the return type but that gives me error too. I tried using only associatedtype V in the protocol without specifying the type, but that also gives me an error. I tried creating two associated types, one V, and the other for Content, but that actually gave me the same nested error I tried adding the typealias V, but since it's nested, the error keeps occurring again and again. Please advise. Thanks! A: Your protocol requirement is that you offer this: func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> V Given a Content, you promise to always return a value of type V, which is determined by the implementation of the protocol. There is precisely one type V for a given implementation type. You then implement it as: func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> some View { This says that linkBuilder will return an opaque type that depends on Content, which is determined by the caller. Each call to linkBuilder can return a different specific type, depending on what Content is. This does not match "will always return V, which is a specific, concrete type defined at compile time for this implementation." In your first example, the type returned is exactly Text. In your failing example, your type depends on Content. Try getting rid of all of the some types, and it will be more clear what's going on. (A some type must always be replaceable with a known-at-compile-time type. It's opaque; it's not dynamic.)
func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> NavigationLink<Content, TestView> { NavigationLink(destination: routeToPage(data: data), label: content) } private func routeToPage(data: TestModel) -> TestView { TestView() } So you could make a protocol for that certainly, though I expect this isn't what you want: protocol TestSomeA: ObservableObject { func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> NavigationLink<Content, TestView> } Another important thing to understand here is that NavigationLink<Text, TestView> is a completely different type than NavigationLink<Image, TestView>. There is no way to directly express "a NavigationLink with arbitrary parameters." It's not clear what the point of the protocol is here. It's not clear what another implementation of TestSomeA would look like, and how callers would use it without knowing the types. Your linkBuilder is fine; it's just throwing it in a protocol that makes it messy. What problem is this solving?
Swift/iOS: Generics Error: protocol requires nested type 'V'; do you want to add it?
I am using Playground, Swift version 5.7.1. I have two protocols, and two classes. This first one is simple and it works: protocol TestSomeB: ObservableObject { associatedtype V: View func returnSome() -> V } class TestSomeBImp: TestSomeB { init() {} func returnSome() -> some View { Text("Hello World") } } let testSomeBImp = TestSomeBImp() testSomeBImp.returnSome() This works and gives me the value {{SwiftUI.AnyTextStorage, {key "Hello World", hasFormatting false, []}, nil, nil}} The second one doesn't work even though the basic code structure is the same: struct TestModel { var title: String } struct TestView: View { var body: some View { Text("Hello, World!") } } // similar to protocol TestSomeB protocol TestSomeA: ObservableObject { associatedtype V: View func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> V } class TestSomeAImp: TestSomeA { init() {} // returns `some View` similar to returnSome() method above func linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> some View { NavigationLink(destination: routeToPage(data: data)) { content() } } private func routeToPage(data: TestModel) -> some View { TestView() } } let testSomeImp = TestSomeAImp() testSomeImp.linkBuilder( data: TestModel(title: "Hello "), content: { Text("World!") } ) Sadly this gives me the error: protocol requires nested type 'V'; do you want to add it? associatedtype V: View I need to return some View but I also need to abstract my implementation. I tried using Content instead of V in the return type but that gives me error too. I tried using only associatedtype V in the protocol without specifying the type, but that also gives me an error. I tried creating two associated types, one V, and the other for Content, but that actually gave me the same nested error I tried adding the typealias V, but since it's nested, the error keeps occurring again and again. Please advise. Thanks!
[ "Your protocol requirement is that you offer this:\nfunc linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> V\n\nGiven a Content, you promise to always return a value of type V, which is determined by the implementation of the protocol. There is precisely one type V for a given implementation type.\nYou then you implement it as:\nfunc linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content) -> some View {\n\nThis says that linkBuilder will return an opaque type that depends on Content, which is determined by the caller. Each call to linkBuilder can return a different specific type, depending on what Content is. This does not match \"will always return V, which is a specific, concrete type defined at compile time for this implementation.\"\nIn your first example, the type returned is exactly Text. In your failing example, your type depends on Content.\nTry getting rid of all of the some types, and it will be more clear what's going on. (A some type must always be replaceable with a known-at-compiletime, type. It's opaque; it's not dynamic.)\nfunc linkBuilder<Content: View>(data: TestModel, @ViewBuilder content: () -> Content)\n -> NavigationLink<Content, TestView> {\n NavigationLink(destination: routeToPage(data: data), label: content)\n}\n\nprivate func routeToPage(data: TestModel) -> TestView {\n TestView()\n}\n\nSo you could make a protocol for that certainly, though I expect this isn't what you want:\nprotocol TestSomeA: ObservableObject {\n func linkBuilder<Content: View>(data: TestModel, \n @ViewBuilder content: () -> Content) \n -> NavigationLink<Content, TestView>\n}\n\nAnother important thing to understand here is that NavigationLink<Text, TestView> is a completely different type than NavigationLink<Image, TestViw>. There is no way to directly express \"a NavigationLink with arbitrary parameters.\"\nIt's not clear here what the point of the protocol is here. It's not clear what another implementation of TestSomeA would look like, and how callers would use it without knowing the types. Your linkBuilder is fine; it's just the throwing in a protocol that makes it messy. What problem is this solving?\n" ]
[ 1 ]
[]
[]
[ "generics", "ios", "swift", "swift_playground" ]
stackoverflow_0074657362_generics_ios_swift_swift_playground.txt
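If the concrete NavigationLink type must stay hidden from callers, the usual alternative to the answer's concrete return type is type erasure: returning AnyView removes the associatedtype entirely, at the cost of hiding type information from SwiftUI. A minimal sketch reusing the question's TestModel and TestView:

import SwiftUI

protocol LinkBuilding: ObservableObject {
    func linkBuilder<Content: View>(data: TestModel,
                                    @ViewBuilder content: () -> Content) -> AnyView
}

final class LinkBuilder: LinkBuilding {
    func linkBuilder<Content: View>(data: TestModel,
                                    @ViewBuilder content: () -> Content) -> AnyView {
        AnyView(NavigationLink(destination: TestView(), label: content))
    }
}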
Q: Delete IndexedDb database and recreate with button click it doesn't work after 2 or 3 click With the code below, when the html button is pressed, I delete the entire database and create it from scratch. (I need this.) But after a few button clicks (2-3 click) it gets corrupted and doesn't delete and gets stuck. Since I have autoIncrement columns, just resetting the data without deleting the database doesn't work for me. I have to delete the database and recreate it to reset it in autoIncrement. let db; //DELETE DATABASE var req = indexedDB.deleteDatabase("AdminDatabase"); req.onsuccess = function () { console.log("Deleted database successfully"); } //CREATE DATABASE const request = window.indexedDB.open("AdminDatabase", 3); request.onupgradeneeded = function (event) { db = event.target.result; if (event.oldVersion < 1) { var objectStore = db.createObjectStore("floors", { keyPath: "id", autoIncrement: true }); } if (event.oldVersion < 2) { var objectStore = db.createObjectStore("rooms", { keyPath: "id", autoIncrement: true }); } if (event.oldVersion < 3) { var objectStore = db.createObjectStore("components", { keyPath: "id", autoIncrement: true }); let index = objectStore.createIndex('componentFloorId, componentRoomId', ['componentFloorId', 'componentRoomId']); // let index2 = objectStore.createIndex('roomId', 'roomId', { unique: false}); } // objectStore.createIndex("name", "name", { unique: false }); }; //CONNECT DATABASE const request2 = indexedDB.open("AdminDatabase"); request2.onerror = (event) => { console.error("Connect Db Error!"); }; request2.onsuccess = (event) => { db = event.target.result; //DB TRANSACTIONS ... ... ... }; A: After the //DB TRANSACTIONS refreshing the page with the location.reload(); solved my problem.
Delete IndexedDb database and recreate with button click it doesn't work after 2 or 3 click
With the code below, when the html button is pressed, I delete the entire database and create it from scratch. (I need this.) But after a few button clicks (2-3 click) it gets corrupted and doesn't delete and gets stuck. Since I have autoIncrement columns, just resetting the data without deleting the database doesn't work for me. I have to delete the database and recreate it to reset it in autoIncrement. let db; //DELETE DATABASE var req = indexedDB.deleteDatabase("AdminDatabase"); req.onsuccess = function () { console.log("Deleted database successfully"); } //CREATE DATABASE const request = window.indexedDB.open("AdminDatabase", 3); request.onupgradeneeded = function (event) { db = event.target.result; if (event.oldVersion < 1) { var objectStore = db.createObjectStore("floors", { keyPath: "id", autoIncrement: true }); } if (event.oldVersion < 2) { var objectStore = db.createObjectStore("rooms", { keyPath: "id", autoIncrement: true }); } if (event.oldVersion < 3) { var objectStore = db.createObjectStore("components", { keyPath: "id", autoIncrement: true }); let index = objectStore.createIndex('componentFloorId, componentRoomId', ['componentFloorId', 'componentRoomId']); // let index2 = objectStore.createIndex('roomId', 'roomId', { unique: false}); } // objectStore.createIndex("name", "name", { unique: false }); }; //CONNECT DATABASE const request2 = indexedDB.open("AdminDatabase"); request2.onerror = (event) => { console.error("Connect Db Error!"); }; request2.onsuccess = (event) => { db = event.target.result; //DB TRANSACTIONS ... ... ... };
[ "After the //DB TRANSACTIONS refreshing the page with the location.reload(); solved my problem.\n" ]
[ 0 ]
[]
[]
[ "indexeddb", "javascript" ]
stackoverflow_0074602127_indexeddb_javascript.txt
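Besides reloading the page, the underlying race can be fixed directly: deleteDatabase stays blocked while any connection is open, and the question fires the delete and the open in parallel. A sketch that closes the old handle and chains the steps (store names from the question; the indexes are omitted for brevity):

let db = null;

function resetDatabase(onReady) {
  if (db) db.close(); // an open connection keeps deleteDatabase "blocked"
  const del = indexedDB.deleteDatabase("AdminDatabase");
  del.onblocked = () => console.warn("Delete blocked: a connection is still open");
  del.onsuccess = () => {
    const req = indexedDB.open("AdminDatabase", 3); // only after the delete
    req.onupgradeneeded = (e) => {
      const d = e.target.result;
      d.createObjectStore("floors", { keyPath: "id", autoIncrement: true });
      d.createObjectStore("rooms", { keyPath: "id", autoIncrement: true });
      d.createObjectStore("components", { keyPath: "id", autoIncrement: true });
    };
    req.onsuccess = (e) => {
      db = e.target.result;
      onReady(db); // run the transactions here, on the fresh handle
    };
  };
}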
Q: Project Doesn't Build After Installed Sitecore TDS After setting up Sitecore TDS, my project will not build. I'm new to Visual Studio and also new to working with Sitecore. It seems that it cannot find a particular setting, but a Google search is not coming up with anything: Severity Code Description Project Path File Line Suppression State Error The "AnalyzeProject" task failed unexpectedly. System.MissingFieldException: Field not found: 'HedgehogDevelopment.SitecoreProject.Tasks.SitecoreDeployInfo.ParsedItem'. at HedgehogDevelopment.SitecoreProject.Analysis.TemplateStructure.Validate(Dictionary`2 projectItems, XDocument scprojDocument) at HedgehogDevelopment.SitecoreProject.Tasks.ProjectAnalysis.AnalysisEngine.<>c__DisplayClass4_1.<GetReport>b__0() at HedgehogDevelopment.SitecoreProject.Tasks.ProjectAnalysis.ExecutionTimer.Time(Action action) at HedgehogDevelopment.SitecoreProject.Tasks.ProjectAnalysis.AnalysisEngine.GetReport(Dictionary`2 projectItems, XDocument scprojDocument) at HedgehogDevelopment.SitecoreProject.Tasks.AnalyzeProject.Execute() at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext() B2B.Core C:\Program Files (x86)\MSBuild\HedgehogDevelopment\SitecoreProject\v9.0 C:\Program Files (x86)\MSBuild\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets 144 Apparently my project does still build and will run, but that error pops up each time regardless. A: This can happen when you have TDS validations enabled and are missing some DLLs. In this directory: C:\Program Files (x86)\MSBuild\HedgehogDevelopment\SitecoreProject\v9.0\ Add the following DLLs: Microsoft.Web.Infrastructure.dll TDSWebDeploy.Services.Contracts.dll If you have TDS installed, you may be able to source those DLLs from somewhere in your C:\Program Files (x86)\MSBuild\HedgehogDevelopment directory. If not, someone else on your team may have them.
Project Doesn't Build After Installed Sitecore TDS
After setting up Sitecore TDS, my project will not build. I'm new to Visual Studio and also new to working with Sitecore. It seems that it cannot find a particular setting, but a Google search is not coming up with anything: Severity Code Description Project Path File Line Suppression State Error The "AnalyzeProject" task failed unexpectedly. System.MissingFieldException: Field not found: 'HedgehogDevelopment.SitecoreProject.Tasks.SitecoreDeployInfo.ParsedItem'. at HedgehogDevelopment.SitecoreProject.Analysis.TemplateStructure.Validate(Dictionary`2 projectItems, XDocument scprojDocument) at HedgehogDevelopment.SitecoreProject.Tasks.ProjectAnalysis.AnalysisEngine.<>c__DisplayClass4_1.<GetReport>b__0() at HedgehogDevelopment.SitecoreProject.Tasks.ProjectAnalysis.ExecutionTimer.Time(Action action) at HedgehogDevelopment.SitecoreProject.Tasks.ProjectAnalysis.AnalysisEngine.GetReport(Dictionary`2 projectItems, XDocument scprojDocument) at HedgehogDevelopment.SitecoreProject.Tasks.AnalyzeProject.Execute() at Microsoft.Build.BackEnd.TaskExecutionHost.Microsoft.Build.BackEnd.ITaskExecutionHost.Execute() at Microsoft.Build.BackEnd.TaskBuilder.<ExecuteInstantiatedTask>d__26.MoveNext() B2B.Core C:\Program Files (x86)\MSBuild\HedgehogDevelopment\SitecoreProject\v9.0 C:\Program Files (x86)\MSBuild\HedgehogDevelopment\SitecoreProject\v9.0\HedgehogDevelopment.SitecoreProject.targets 144 Apparently my project does still build and will run, but that error pops up each time regardless.
[ "This can happen when you have TDS validations enabled and are missing some DLLs.\nIn this directory:\nC:\\Program Files (x86)\\MSBuild\\HedgehogDevelopment\\SitecoreProject\\v9.0\\\nAdd the following DLLs:\n\nMicrosoft.Web.Infrastructure.dll\nTDSWebDeploy.Services.Contracts.dll\n\nIf you have TDS installed, you may be able to source those DLLs from somewhere in your C:\\Program Files (x86)\\MSBuild\\HedgehogDevelopment directory. If not, someone else on your team may have them.\n" ]
[ 0 ]
[]
[]
[ "msbuild", "sitecore", "tds" ]
stackoverflow_0072266337_msbuild_sitecore_tds.txt
Q: Recover AWS Root email given AWS CLI credential I have another PC which has AWS CLI setup. I can still access the account and when I do aws s3 ls it lists all the buckets that I have. However, this is not connected to my current AWS root account. I don't know which account this AWS CLI credential belong to Is there any possible way to recover or a hint so I can get the email address of the root account this CLI profile associated with? I have tried aws sts get-caller-identity but I can't still figure out what my root email is Edit: I found a user that I can sign in to the web console, but is there any way to recover the root's email address? I have tried live chat with AWS billing and account, but they are unable to help. In summary my situation is: I have an account with access to AWS CLI and web console This account is not the root account How can I recover my root account? A: If your AWS accounts are attached to an AWS Organisation, you can also see the aws root email address in the OU screen in the tree view: https://us-east-1.console.aws.amazon.com/organizations/v2/home/accounts This should list the root email of all of your Organisations associated accounts. A: You can get the account id of your credentials with STS. With AWS IAM CLI you can get the users which you can try to login with. It might be that a email address is used as username A: To answer my own question, no it is not possible to recover the email of the root account for security reason Not even AWS account support is able to help. I'm not sure if it can be escalated somewhere, but I can imagine the process will not be straightforward and taking into account it's a personal account, I don't think it's worth it So my advice to those having the same question: If you have a non root user with admin privilege, disable all running services to avoid incurring future charges simply create a new account I know it's not the ideal solution, but I don't think there's any other way A: you can recover the root email, by creating an Organization if the account is not in Organization yet. it will show the account's root email then in the Organization interface.
Recover AWS Root email given AWS CLI credential
I have another PC which has AWS CLI set up. I can still access the account, and when I do aws s3 ls it lists all the buckets that I have. However, this is not connected to my current AWS root account, and I don't know which account this AWS CLI credential belongs to. Is there any possible way, or a hint, to recover the email address of the root account this CLI profile is associated with? I have tried aws sts get-caller-identity, but I still can't figure out what my root email is. Edit: I found a user that I can sign in to the web console with, but is there any way to recover the root's email address? I have tried live chat with AWS billing and account support, but they are unable to help. In summary, my situation is: I have an account with access to the AWS CLI and web console. This account is not the root account. How can I recover my root account?
[ "If your AWS accounts are attached to an AWS Organisation, you can also see the aws root email address in the OU screen in the tree view:\nhttps://us-east-1.console.aws.amazon.com/organizations/v2/home/accounts\nThis should list the root email of all of your Organisations associated accounts.\n", "You can get the account id of your credentials with STS.\nWith AWS IAM CLI you can get the users which you can try to login with. It might be that a email address is used as username\n", "To answer my own question, no it is not possible to recover the email of the root account for security reason\nNot even AWS account support is able to help. I'm not sure if it can be escalated somewhere, but I can imagine the process will not be straightforward and taking into account it's a personal account, I don't think it's worth it\nSo my advice to those having the same question:\n\nIf you have a non root user with admin privilege, disable all running services to avoid incurring future charges\nsimply create a new account\n\nI know it's not the ideal solution, but I don't think there's any other way\n", "you can recover the root email, by creating an Organization if the account is not in Organization yet.\nit will show the account's root email then in the Organization interface.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "amazon_web_services" ]
stackoverflow_0069062288_amazon_web_services.txt
Q: Python: What does this error message mean and why are we getting it? We are trying to create a new Excel file with nested data using Python code. Here is the code for reference: `import glob import pandas as pd import re import openpyxl dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1") line_numbers = [4, 7] print("Heey, we read") dp_max = dp.groupby(['Subject', 'Date & Time', 'Trees Again', 'DifficultyLevel', 'Block', 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', 'Update'], sort=False).max() dp_max = dp_max[["Total Training Time"]] print("This worked. Good start. Yaaaay.s") dp_max.to_excel('unpredictable_grouped_max_heregoesnothing.xlsx', index=True) print("This worked. Yaaaay.s") dp['Signal_Detection2'] = dp.loc[:, 'Signal_Detection'] dp_count = dp.groupby(['Subject', 'Signal_Detection'], sort=False).count()[["Signal_Detection2"]] dp_count.to_excel('unpredictable_grouped_signal_count_heregoesnothing.xlsx', index=True) Unexpected exception formatting exception. Falling back to standard exception Output exceeds the size limit. Open the full output data in a text editor Traceback (most recent call last): File "C:\Users\mxa210135\AppData\Roaming\Python\Python38\site-packages\IPython\core\interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-9-853a8bf5b14e>", line 5, in <module> dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1")` The code above is what we had tried and it had worked previously. We only added the 'Trees Again' variable and 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', and lastly 'Update'. Please let me know if more information is needed and I will happily provide it. We tried splitting the large file in half and run the code on both, but it did not work and gave us the same error message. A: The error message indicates that an exception occurred while trying to read an Excel file using the pd.read_excel function. The most likely cause of the error is that the file "UnpredictableDataMerge.xlsx" was not found in the current working directory. You can check the current working directory by running the following code: import os print(os.getcwd()) This will print the current working directory, which is the directory where Python is looking for the file. Make sure that the file "UnpredictableDataMerge.xlsx" is located in the current working directory, or provide the full path to the file in the pd.read_excel function. Another possible cause of the error are: the file is open in another program, such as Excel, and is locked for editing. In that case, you can try closing the file in Excel, and then run the code again. The sheet "Sheet1" does not exist in the Excel file. Make sure that the sheet name is spelled correctly, and that it exists in the file. You can also try omitting the sheet_name parameter to read the first sheet in the file by default. There is not enough memory available to read the Excel file. Make sure that you have enough free memory to read the file. You can try closing other programs to free up memory, or increasing the amount of memory available to your Python environment. If neither of these suggestions solves the problem, please provide the full error message and traceback, as well as the version of pandas and openpyxl that you are using. You can check the version of pandas by running pd.__version__ and the version of openpyxl by running openpyxl.__version__. 
Also, please provide more information about the file "UnpredictableDataMerge.xlsx", such as its size, the number of rows and columns, and the type of data it contains, as well as the versions of pandas and openpyxl that you are using. This will help to narrow down the possible causes of the error. A: I have run into this before with large data sets. Try installing lxml, as openpyxl will auto-detect whether the library is installed. Be sure to install it to the correct interpreter you are using. py -m pip install lxml Alternatively: Convert the data to a CSV file and use pd.read_csv()
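To apply the checks suggested in the first answer in one go, a small Python sketch (the absolute path below is a placeholder, not the asker's real path; adjust it to wherever the workbook actually lives):

import os
import pandas as pd
import openpyxl

print(os.getcwd())                           # directory where pandas resolves relative paths
print(pd.__version__, openpyxl.__version__)  # versions, as requested above

# hypothetical absolute path -- adjust before running
dp = pd.read_excel(r"C:\data\UnpredictableDataMerge.xlsx", sheet_name="Sheet1")
print(dp.shape)                              # rows/columns actually loaded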
Python: What does this error message mean and why are we getting it?
We are trying to create a new Excel file with nested data using Python code. Here is the code for reference: `import glob import pandas as pd import re import openpyxl dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1") line_numbers = [4, 7] print("Heey, we read") dp_max = dp.groupby(['Subject', 'Date & Time', 'Trees Again', 'DifficultyLevel', 'Block', 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', 'Update'], sort=False).max() dp_max = dp_max[["Total Training Time"]] print("This worked. Good start. Yaaaay.s") dp_max.to_excel('unpredictable_grouped_max_heregoesnothing.xlsx', index=True) print("This worked. Yaaaay.s") dp['Signal_Detection2'] = dp.loc[:, 'Signal_Detection'] dp_count = dp.groupby(['Subject', 'Signal_Detection'], sort=False).count()[["Signal_Detection2"]] dp_count.to_excel('unpredictable_grouped_signal_count_heregoesnothing.xlsx', index=True) Unexpected exception formatting exception. Falling back to standard exception Output exceeds the size limit. Open the full output data in a text editor Traceback (most recent call last): File "C:\Users\mxa210135\AppData\Roaming\Python\Python38\site-packages\IPython\core\interactiveshell.py", line 3433, in run_code exec(code_obj, self.user_global_ns, self.user_ns) File "<ipython-input-9-853a8bf5b14e>", line 5, in <module> dp = pd.read_excel("UnpredictableDataMerge.xlsx", sheet_name ="Sheet1")` The code above is what we had tried and it had worked previously. We only added the 'Trees Again' variable and 'UpdatevsNonupdate', 'responsetimerecodeforACC', 'Nonupdate', and lastly 'Update'. Please let me know if more information is needed and I will happily provide it. We tried splitting the large file in half and run the code on both, but it did not work and gave us the same error message.
[ "The error message indicates that an exception occurred while trying to read an Excel file using the pd.read_excel function. The most likely cause of the error is that the file \"UnpredictableDataMerge.xlsx\" was not found in the current working directory.\nYou can check the current working directory by running the following code:\nimport os\nprint(os.getcwd())\n\nThis will print the current working directory, which is the directory where Python is looking for the file. Make sure that the file \"UnpredictableDataMerge.xlsx\" is located in the current working directory, or provide the full path to the file in the pd.read_excel function.\nAnother possible cause of the error are:\n\nthe file is open in another program, such as Excel, and is locked for editing. In that case, you can try closing the file in Excel, and then run the code again.\nThe sheet \"Sheet1\" does not exist in the Excel file. Make sure that the sheet name is spelled correctly, and that it exists in the file. You can also try omitting the sheet_name parameter to read the first sheet in the file by default.\nThere is not enough memory available to read the Excel file. Make sure that you have enough free memory to read the file. You can try closing other programs to free up memory, or increasing the amount of memory available to your Python environment.\n\nIf neither of these suggestions solves the problem, please provide the full error message and traceback, as well as the version of pandas and openpyxl that you are using. You can check the version of pandas by running pd.__version__ and the version of openpyxl by running openpyxl.__version__. Also, please provide more information about the file \"UnpredictableDataMerge.xlsx\", such as its size, the number of rows and columns, and the type of data it contains, as well as the versions of pandas and openpyxl that you are using. This will help to narrow down the possible causes of the error.\n", "I have ran into this before with large data sets. Try installing lxml as openpyxl will auto detect if the library is installed. Be sure to install it to the correct interpreter you are using.\npy -m pip install lxml\n\nAlternatively:\n\nConvert data to CSV file and use pd.read_csv()\n\n" ]
[ 0, 0 ]
[]
[]
[ "data_analysis", "database", "excel", "output", "python" ]
stackoverflow_0074659693_data_analysis_database_excel_output_python.txt
Q: Vega-Lite Calculated Scale domainMax I'm trying to calculate a value for domainMax on the Y-axis scale. I tried the following example where I want the Y-axis domainMax to be one greater than the maximum value in the dataset field named "value". The example produces the error 'Unrecognized signal name: "domMax"'. How can I get it to work? { "data": { "values": [ {"date": "2021-03-01T00:00:00", "value": 1}, {"date": "2021-04-01T00:00:00", "value": 3}, {"date": "2021-05-01T00:00:00", "value": 2} ] }, "transform": [ { "calculate": "max(datum.value)+1","as": "domMax"} ], "mark": "line", "encoding": { "x": { "field": "date", "type": "temporal" }, "y": {"field": "value", "type": "quantitative", "scale": {"domainMax": {"expr": "domMax"}} } } } A: This transform "transform": [ { "calculate": "max(datum.value)+1","as": "domMax"} ] adds a new column to your data set - it does not create a new signal. You can check that in the editor. Go to the DataViewer tab and select data_0 from the drop down. Can you see the new domMax column? Signals are a different thing entirely - have a look here in the documentation. Note that the link points to Vega, not Vega-Lite. (Vega-Lite specifications are compiled to Vega.) Vega-Lite does not let you declare signals; you declare parameters instead. Here is another example using the domMax parameter. Vega-Lite parameters are translated to Vega signals. It looks like you are trying to derive the value of your parameter/signal from the data. I am not sure you can do that in Vega-Lite. On the other hand it's very easy in Vega. For example you could use the extent transform: https://vega.github.io/vega/docs/transforms/extent/ Side comment - while Vega specifications are more verbose you can sometimes find their primitives simpler and a good way to understand how the visualisation works. (You can see compiled Vega in the editor.) A: I tried to get a custom domain based on the data but hit the same limitations as you did. In my case, I update the data from the outside a bit like the streaming example. I compute the domain from the outside and modify them in the visualization with params. This is quite easy as vega-lite params are exposed as vega signals. This is the gist of the layout: { "$schema": "https://vega.github.io/schema/vega-lite/v5.json", "params": [ { "name": "lowBound", "value": -10 }, { "name": "highBound", "value": 100 } ], ../.. "vconcat": [ { "name": "detailed", ../.. "layer": [ { ../.. "mark": "line", "encoding": { "y": { "field": "value", "title": "Temperature", "type": "quantitative", "scale": { "domainMin": { "expr": "lowBound" }, "domainMax": { "expr": "highBound" } } }, ... The lowBound and highBound are dynamically changed through vega signals. I change them with the regular JS API. A: You can add a param to pan and zoom in case your hard coded values are less than ideal. "params": [{"name": "grid", "select": "interval", "bind": "scales"}], Open the Chart in the Vega Editor
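To make the Vega suggestion in the first answer concrete, here is a rough sketch of a data-driven domain in a Vega (not Vega-Lite) spec; the signal name valExtent is arbitrary, and the data values are elided:

"data": [{"name": "table", "values": [...], "transform": [{"type": "extent", "field": "value", "signal": "valExtent"}]}],
"scales": [{"name": "yscale", "type": "linear", "range": "height", "domain": {"data": "table", "field": "value"}, "domainMax": {"signal": "valExtent[1] + 1"}}]

The extent transform stores [min, max] of the field in the named signal, which the scale's domainMax can then reference as an expression.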
Vega-Lite Calculated Scale domainMax
I'm trying to calculate a value for domainMax on the Y-axis scale. I tried the following example where I want the Y-axis domainMax to be one greater than the maximum value in the dataset field named "value". The example produces the error 'Unrecognized signal name: "domMax"'. How can I get it to work? { "data": { "values": [ {"date": "2021-03-01T00:00:00", "value": 1}, {"date": "2021-04-01T00:00:00", "value": 3}, {"date": "2021-05-01T00:00:00", "value": 2} ] }, "transform": [ { "calculate": "max(datum.value)+1","as": "domMax"} ], "mark": "line", "encoding": { "x": { "field": "date", "type": "temporal" }, "y": {"field": "value", "type": "quantitative", "scale": {"domainMax": {"expr": "domMax"}} } } }
[ "\nThis transform\n\"transform\": [\n { \"calculate\": \"max(datum.value)+1\",\"as\": \"domMax\"}\n ]\n\nadds a new column to your data set - it does not create a new signal. You can check that in the editor. Go to the DataViewer tab and select data_0 from the drop down. Can you see the new domMax column?\n\n\n\nSignals are a different thing entirely - have a look here in the documentation. Note that the link points to Vega, not Vega-Lite. (Vega-Lite specifications are compiled to Vega.)\n\nVega-Lite does not let you declare signals; you declare parameters instead. Here is another example using the domMax parameter. Vega-Lite parameters are translated to Vega signals.\n\nIt looks like you are trying to derive the value of your parameter/signal from the data. I am not sure you can do that in Vega-Lite.\n\nOn the other hand it's very easy in Vega. For example you could use the extent transform:\nhttps://vega.github.io/vega/docs/transforms/extent/\n\n\nSide comment - while Vega specifications are more verbose you can sometimes find their primitives simpler and a good way to understand how the visualisation works. (You can see compiled Vega in the editor.)\n", "I tried to get a custom domain based on the data but hit the same limitations as you did.\nIn my case, I update the data from the outside a bit like the streaming example. I compute the domain from the outside and modify them in the visualization with params. This is quite easy as vega-lite params are exposed as vega signals.\nThis is the gist of the layout:\n {\n \"$schema\": \"https://vega.github.io/schema/vega-lite/v5.json\",\n \"params\": [\n {\n \"name\": \"lowBound\",\n \"value\": -10\n },\n {\n \"name\": \"highBound\",\n \"value\": 100\n }\n ],\n ../..\n \"vconcat\": [\n {\n \"name\": \"detailed\",\n ../..\n \"layer\": [\n { \n ../..\n \"mark\": \"line\",\n \"encoding\": {\n \"y\": {\n \"field\": \"value\",\n \"title\": \"Temperature\",\n \"type\": \"quantitative\",\n \"scale\": {\n \"domainMin\": {\n \"expr\": \"lowBound\"\n },\n \"domainMax\": {\n \"expr\": \"highBound\"\n }\n }\n },\n...\n\nThe lowBound and highBound are dynamically changed through vega signals. I change them with the regular JS API.\n", "You can add a param to pan and zoom in case your hard coded values are less than ideal.\n\"params\": [{\"name\": \"grid\", \"select\": \"interval\", \"bind\": \"scales\"}],\nOpen the Chart in the Vega Editor\n" ]
[ 2, 1, 0 ]
[]
[]
[ "vega_lite", "yaxis" ]
stackoverflow_0071028396_vega_lite_yaxis.txt
Q: How to use toLocaleString() and tofixed(2) in JavaScript How can I do this in JavaScript? var num = 2046430; num.toLocaleString(); will give you "2,046,430"; What I have tried is: var num = 2046430; num.toLocaleString().toFixed(2); Expected Output "2,046,430.00" A: Taken from MDN: Syntax numObj.toLocaleString([locales [, options]]) toLocaleString takes 2 arguments. The first is the locale, the second are the options. As for the options, you are looking for: minimumFractionDigits The minimum number of fraction digits to use. Possible values are from 0 to 20; the default for plain number and percent formatting is 0; the default for currency formatting is the number of minor unit digits provided by the ISO 4217 currency code list (2 if the list doesn't provide that information). https://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString To be able to set the options without setting the locale, you can pass undefined as first argument: var num = 2046430; num.toLocaleString(undefined, {minimumFractionDigits: 2}) // 2,046,430.00 However this also allows the fraction to be longer than 2 digits. So we need to look for one more option called maximumFractionDigits. (Also on that MDN page) var num = 2046430.123; num.toLocaleString(undefined, { minimumFractionDigits: 2, maximumFractionDigits: 2 }) // 2,046,430.12 A: @Sebastian Nette's accepted answer was, for some reason not working for me to handle numbers like "1,230.05", where it was a string with a comma. I ended up going with the following: var num = "1,230.05"; parseFloat(num.replace(",", "")).toFixed(2); It just manually strips the commas before parsing as a float and running toFixed(). A: I came here because I would like to show the currency symbol (R$) in addition show two digits after the decimal point in the result. Initially I was trying the following code: `Amount ${(gas * litros).toFixed(2) .toLocaleString('pt-BR', {style: 'currency', currency: 'BRL'})}` Expected output: Total a pagar R$ 20.95 Output: Total a pagar 20.95 So, with the answers above, I tried without the toFixed(): `Total a pagar ${(gas * litros) .toLocaleString('pt-BR', {style: 'currency', currency: 'BRL', minimumFractionDigits: 2})}` Output: Total a pagar R$ 15,80
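As a small aside to the accepted answer, the same options can be packaged in an Intl.NumberFormat instance, which avoids re-parsing the options on every call (output shown for an en-US style locale):

const fmt = new Intl.NumberFormat(undefined, { minimumFractionDigits: 2, maximumFractionDigits: 2 });
fmt.format(2046430);      // "2,046,430.00"
fmt.format(2046430.129);  // "2,046,430.13"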
How to use toLocaleString() and tofixed(2) in JavaScript
How can I do this in JavaScript? var num = 2046430; num.toLocaleString(); will give you "2,046,430"; What I have tried is: var num = 2046430; num.toLocaleString().toFixed(2); Expected Output "2,046,430.00"
[ "Taken from MDN:\nSyntax\nnumObj.toLocaleString([locales [, options]])\ntoLocaleString takes 2 arguments. The first is the locale, the second are the options. As for the options, you are looking for:\n\nminimumFractionDigits\nThe minimum number of fraction digits to use.\n Possible values are from 0 to 20; the default for plain number and\n percent formatting is 0; the default for currency formatting is the\n number of minor unit digits provided by the ISO 4217 currency code\n list (2 if the list doesn't provide that information).\n\nhttps://developer.mozilla.org/de/docs/Web/JavaScript/Reference/Global_Objects/Number/toLocaleString\nTo be able to set the options without setting the locale, you can pass undefined as first argument:\nvar num = 2046430;\nnum.toLocaleString(undefined, {minimumFractionDigits: 2}) // 2,046,430.00\n\nHowever this also allows the fraction to be longer than 2 digits. So we need to look for one more option called maximumFractionDigits. (Also on that MDN page)\nvar num = 2046430.123;\nnum.toLocaleString(undefined, {\n minimumFractionDigits: 2,\n maximumFractionDigits: 2\n}) // 2,046,430.12\n\n", "@Sebastian Nette's accepted answer was, for some reason not working for me to handle numbers like \"1,230.05\", where it was a string with a comma.\nI ended up going with the following:\nvar num = \"1,230.05\";\nparseFloat(num.replace(\",\", \"\")).toFixed(2);\n\nIt just manually strips the commas before parsing as a float and running toFixed().\n", "I came here because I would like to show the currency symbol (R$) in addition show two digits after the decimal point in the result.\nInitially I was trying the following code:\n`Amount ${(gas * litros).toFixed(2)\n .toLocaleString('pt-BR', {style: 'currency', currency: 'BRL'})}`\n\nExpected output: Total a pagar R$ 20.95\nOutput: Total a pagar 20.95\nSo, with the answers above, I tried without the toFixed():\n`Total a pagar ${(gas * litros)\n .toLocaleString('pt-BR', {style: 'currency', currency: 'BRL', minimumFractionDigits: 2})}`\n\nOutput: Total a pagar R$ 15,80\n" ]
[ 306, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0031581011_javascript.txt
Q: Trying to deserialize JSON returns null values Using the method JsonConvert.DeserializeObject returns the default values for all properties. var current = JsonConvert.DeserializeObject<Current>(myJson); { "location": { "name": "London" }, "current": { "temp_c": 5.0, "cloud": 50 } } public class Current { public double Temp_c { get; set; } public double Cloud { get; set; } } The expected current object should have the values: 50 for Cloud, and 5.0 for Temp_c, but returns the default values for all properties. A: You need to define a class model that matches the JSON object and then deserialize to it public class YourModel { //create location class that has Name property public Location Location { get; set; } //create current class that has Temp_c and Cloud property public Current Current { get; set; } } and then var data = JsonConvert.DeserializeObject<YourModel>(myJson); and get the current value from the data object var current = data.Current; A: Your 'Current' class is far from matching the JSON you posted. You need to convert the JSON string to a C# class. You can use QuickType to convert it (Newtonsoft compatible). Note: I am using System.Text.Json.Serialization but the class model is the same; just change: [JsonPropertyName("temp_c")] // .Net serializer (I prefer this) to [JsonProperty("temp_c")] // Newtonsoft.Json serializer (replace "temp_c" accordingly for every property name) Here is the class model (a complete console application) you need: using System; using System.Text.Json; using System.Text.Json.Serialization; #nullable disable namespace test { public class Weather { [JsonPropertyName("location")] public Location Location { get; set; } [JsonPropertyName("current")] public Current Current { get; set; } } public class Location { [JsonPropertyName("name")] public string Name { get; set; } } public class Current { [JsonPropertyName("temp_c")] public double TempC { get; set; } [JsonPropertyName("cloud")] public int Cloud { get; set; } } class Program { static void Main(string[] args) { string json = "{\"location\": { \"name\": \"London\" }, \"current\": { \"temp_c\": 5.0, \"cloud\": 50 }}"; Weather myWeather = JsonSerializer.Deserialize<Weather>(json); Console.WriteLine("Location: {0} - Temp: {1:F}", myWeather.Location.Name, myWeather.Current.TempC); } } } Now the Deserializer will work OK.
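A side note on the answers above: Json.NET matches property names case-insensitively by default, so the wrapper-model approach works without attributes as long as the C# names spell the same words as the JSON keys (Temp_c does match "temp_c"); attributes are only required when the names genuinely differ, as with TempC vs temp_c. A minimal Newtonsoft-flavoured sketch, with Root as an arbitrary wrapper name:

using System;
using Newtonsoft.Json;

public class Root
{
    public Location Location { get; set; }   // matches "location" case-insensitively
    public Current Current { get; set; }     // matches "current"
}

public class Location
{
    public string Name { get; set; }
}

// reuses the question's Current class (Temp_c, Cloud)

var root = JsonConvert.DeserializeObject<Root>(myJson);
Console.WriteLine($"{root.Location.Name}: {root.Current.Temp_c}");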
Trying to deserialize JSON returns null values
Using the method JsonConvert.DeserializeObject returns the default values for all properties. var current = JsonConvert.DeserializeObject<Current>(myJson); { "location": { "name": "London" }, "current": { "temp_c": 5.0, "cloud": 50 } } public class Current { public double Temp_c { get; set; } public double Cloud { get; set; } } The expected current object should have the values: 50 for Cloud, and 5.0 for Temp_c, but returns the default values for all properties.
[ "You need to define a class model like json object\nand then deserialize to it\npublic class YourModel {\n\n //create location class that has Name property\n public Location Location { get; set; }\n\n //create current class that has Temp_c and Cloud property\n public Current Current { get; set; }\n\n}\n\nand then\nvar data = JsonConvert.DeserializeObject<YourModel>(myJson);\n\nand get the current value from data object\nvar current = data.Current;\n\n", "your 'Current' class is far of been like the JSON you post.\nYou need to convert the JSON string to a C# class. You can use QuickType to convert it (Newtonsoft compatible).\nNote: I am using System.Text.Json.Serialization but the class model should be equal, just change:\n[JsonPropertyName(\"temp_c\")] // .Net serializer (I prefer this)\n\nto\n[JsonProperty(\"temp_c\")] // Newtonsoft.Json serializer\n\n(replace \"temp_c\" for every name)\nHere is the class model (a complete console application) you need:\nusing System;\nusing System.Text.Json;\nusing System.Text.Json.Serialization;\n\n#nullable disable\n\nnamespace test\n{\n public class Weather\n {\n [JsonPropertyName(\"location\")]\n public Location Location { get; set; }\n\n [JsonPropertyName(\"current\")]\n public Current Current { get; set; }\n }\n\n public class Location\n {\n [JsonPropertyName(\"name\")]\n public string Name { get; set; }\n }\n\n public class Current\n {\n [JsonPropertyName(\"temp_c\")]\n public double TempC { get; set; }\n\n [JsonPropertyName(\"cloud\")]\n public int Cloud { get; set; }\n }\n\n\n class Program\n {\n static void Main(string[] args)\n {\n string json = \"{\\\"location\\\": { \\\"name\\\": \\\"London\\\" }, \\\"current\\\": { \\\"temp_c\\\": 5.0, \\\"cloud\\\": 50 }}\";\n Weather myWeather = JsonSerializer.Deserialize<Weather>(json);\n\n Console.WriteLine(\"Location: {0} - Temp: {1:F}\", myWeather.Location.Name, myWeather.Current.TempC);\n }\n }\n}\n\nNow the Deserializer will work OK.\n" ]
[ 3, 1 ]
[]
[]
[ "c#", "deserialization", "json", "json.net", "json_deserialization" ]
stackoverflow_0074660121_c#_deserialization_json_json.net_json_deserialization.txt
Q: LO Calc Basic What is the right property name for axis major/minor settings and how to set them correctly? Using a LibreOffice Basic macro for charts, we can control the maximum value of an axis and turn the automatic mode on/off: oAxis.AutoMax = False oAxis.Max = 12345 But what are the right property names for Major Interval, Major Auto, and Major Time, which you can set manually? A: First, I created a chart with Insert Chart > Line > Points and Lines. Modifying the Y axis with code was fairly straightforward. For both axes, I went into formatting and specified Positioning > Interval Marks > Minor > Outer so that the minor ticks are shown. oCharts = ThisComponent.getSheets().getByIndex(0).getCharts() oEmbeddedObject = oCharts.getByIndex(0).getEmbeddedObject() oDiagram = oEmbeddedObject.getDiagram() oYAxis = oDiagram.getYAxis() oYAxis.StepMain = 40.0 oYAxis.StepHelpCount = 3 Here is what the Y Axis properties looked like after running the code: AutoStepMain (and the corresponding Major interval checkbox) started out as True, but setting the StepMain value via macro changed it to False. With the X axis, things were more complex. For the scale, there was a choice of Type, and selecting Date seemed to be the only way to control step settings. After that, instead of StepMain (which didn't seem to be relevant in this case), there is a complex structure called ExplicitTimeIncrement that specifies the type of increment (Days or Months) along with each value. I didn't dig too far into it, but it looks like it has all of the values you were asking about. EDIT: I tried the following code, but none of the values were changed, and the last line throws an error stating that the property is read-only (as also shown by MRI). So perhaps the values cannot be modified via the API. sTimeIntervalMajor = CreateUnoStruct("com.sun.star.chart.TimeInterval") sTimeIntervalMajor.Number = 4 sTimeIntervalMajor.TimeUnit = 0 sTimeIntervalMinor = CreateUnoStruct("com.sun.star.chart.TimeInterval") sTimeIntervalMinor.Number = 1 sTimeIntervalMinor.TimeUnit = 0 sTimeIncrement = CreateUnoStruct("com.sun.star.chart.TimeIncrement") sTimeIncrement.MajorTimeInterval = sTimeIntervalMajor sTimeIncrement.MinorTimeInterval = sTimeIntervalMinor sTimeIncrement.TimeResolution = 1 oXAxis = oDiagram.getXAxis() oXAxis.ExplicitTimeIncrement.MajorTimeInterval = sTimeIntervalMajor oXAxis.setPropertyValue("ExplicitTimeIncrement", sTimeIncrement) oXAxis.ExplicitTimeIncrement = sTimeIncrement It might also be worth posting at ask.libreoffice.org or forum.openoffice.org to see if anyone there can find a way to modify the values, with a link to this question. Of course, the UNO API isn't the only possibility. You could write a script to unzip the .ods file and modify the XML code with a parsing library such as xml.etree or regular expressions.
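For completeness, a short Basic sketch of the Y-axis properties discussed above. AutoStepMain/AutoStepHelp are the automatic flags exposed by the (older) com.sun.star.chart.ChartAxis service, but treat the exact names as something to confirm with an introspection tool such as MRI on your own LibreOffice version:

oYAxis = oDiagram.getYAxis()
oYAxis.AutoStepMain = False   ' the "Major interval - Automatic" checkbox
oYAxis.StepMain = 40.0        ' major interval value
oYAxis.AutoStepHelp = False   ' minor interval automatic flag
oYAxis.StepHelpCount = 3      ' number of minor intervals per major interval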
LO Calc Basic What is the right property name for axis major/minor settings and how to set them correctly?
Using a LibreOffice Basic macro for charts, we can control the maximum value of an axis and turn the automatic mode on/off: oAxis.AutoMax = False oAxis.Max = 12345 But what are the right property names for Major Interval, Major Auto, and Major Time, which you can set manually?
[ "First, I created a chart with Insert Chart > Line > Points and Lines.\nModifying the Y axis with code was fairly straightforward.\nFor both axes, I went into formatting and specified Positioning > Interval Marks > Minor > Outer so that the minor ticks are shown.\noCharts = ThisComponent.getSheets().getByIndex(0).getCharts()\noEmbeddedObject = oCharts.getByIndex(0).getEmbeddedObject()\noDiagram = oEmbeddedObject.getDiagram()\noYAxis = oDiagram.getYAxis()\noYAxis.StepMain = 40.0\noYAxis.StepHelpCount = 3\n\nHere is what the Y Axis properties looked like after running the code:\n\nAutoStepMain (and the corresponding Major interval checkbox) started out as True, but setting the StepMain value via macro changed it to False.\nWith the X axis, things were more complex. For the scale, there was a choice of Type, and selecting Date seemed to be the only way to control step settings.\nAfter that, instead of StepMain (which didn't seem to be relevant in this case), there is a complex structure called ExplicitTimeIncrement that specifies the type of increment (Days or Months) along with each value. I didn't dig too far into it, but it looks like it has all of the values you were asking about.\nEDIT:\nI tried the following code, but none of the values were changed, and the last line throws an error stating that the property is read-only (as also shown by MRI). So perhaps the values cannot be modified via the API.\nsTimeIntervalMajor = CreateUnoStruct(\"com.sun.star.chart.TimeInterval\")\nsTimeIntervalMajor.Number = 4\nsTimeIntervalMajor.TimeUnit = 0\nsTimeIntervalMinor = CreateUnoStruct(\"com.sun.star.chart.TimeInterval\")\nsTimeIntervalMinor.Number = 1\nsTimeIntervalMinor.TimeUnit = 0\nsTimeIncrement = CreateUnoStruct(\"com.sun.star.chart.TimeIncrement\")\nsTimeIncrement.MajorTimeInterval = sTimeIntervalMajor\nsTimeIncrement.MinorTimeInterval = sTimeIntervalMinor\nsTimeIncrement.TimeResolution = 1\n\noXAxis = oDiagram.getXAxis()\noXAxis.ExplicitTimeIncrement.MajorTimeInterval = sTimeIntervalMajor\noXAxis.setPropertyValue(\"ExplicitTimeIncrement\", sTimeIncrement)\noXAxis.ExplicitTimeIncrement = sTimeIncrement\n\nIt might also be worth posting at ask.libreoffice.org or forum.openoffice.org to see if anyone there can find a way to modify the values, with a link to this question.\nOf course, the UNO API isn't the only possibility. You could write a script to unzip the .ods file and modify the XML code with a parsing library such as xml.etree or regular expressions.\n" ]
[ 2 ]
[]
[]
[ "basic", "charts", "libreoffice_basic" ]
stackoverflow_0074639398_basic_charts_libreoffice_basic.txt
Q: Using the value of a loop outside of the loop I'm trying to get the data after I restructure it in a loop, so that when the loop completes I have a variable with the value in it so I can download it or whatever. const handleData = async (e) => { if (e.key === "Enter") { await fetch(`http://localhost:3000/api/getsubs`, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ getInput }), }).then((res) => res.json()).then((res) => { let len = res.length; for (let i = 0; i < len; i++) { let subNum = i + 1; let start = res[i].start; let end = res[i].dur; let text = res[i].text; const secondsToHms = (start) => { // ** starting restructuring the text ** var h = Math.floor(start / 3600); var m = Math.floor((start % 3600) / 60); var s = Math.floor((start % 3600) % 60); var ms = Math.round((start % 1) * 1000); return ( (h < 10 ? "0" : "") + h + ":" + (m < 10 ? "0" : "") + m + ":" + (s < 10 ? "0" : "") + s + "," + (ms < 100 ? "0" : "") + (ms < 10 ? "0" : "") + ms); }; // ** still in the loop ** const inputToSRT = () => { return (subNum + "\r\n" + secondsToHms(start) + " --> " + secondsToHms(end) + "\r\n" + text + "\r\n\r\n"); const srtData = inputToSRT(); console.log(srtData); }; } /* ** here the loop ends ** ** Im looking to create a var here with the loop data stored in it so I could download it after ** */ let blob = srtData; // ** download function outside the loop ** // console.log(blob); // const download = () => { // Create blob link to download if (srtData != -1) { const url = window.URL.createObjectURL(new Blob([srtData])); const link = document.createElement("a"); link.href = url; link.setAttribute("download", `FileName.srt`); // Append to html link element page document.body.appendChild(link); // Start download link.click(); // Clean up and remove the link link.parentNode.removeChild(link); } }); } }; I have tried adding the download function in the loop, but it would download multiple files, one per iteration, instead of one file with all the values A: I moved secondsToHms outside of the loop as it isn't needed in there. I also moved and updated inputToSRT to accept arguments. Then I declared srtData outside of the loop, initialised to an empty string, so that each iteration can append to it and it can still be accessed outside of the loop. const secondsToHms = (start) => { var h = Math.floor(start / 3600); var m = Math.floor((start % 3600) / 60); var s = Math.floor((start % 3600) % 60); var ms = Math.round((start % 1) * 1000); return ( (h < 10 ? "0" : "") + h + ":" + (m < 10 ? "0" : "") + m + ":" + (s < 10 ? "0" : "") + s + "," + (ms < 100 ? "0" : "") + (ms < 10 ? "0" : "") + ms ); }; const inputToSRT = (start, end, subNum, text) => { return ( subNum + "\r\n" + secondsToHms(start) + " --> " + secondsToHms(end) + "\r\n" + text + "\r\n\r\n" ); } const handleData = async (e) => { if (e.key === "Enter") { await fetch(`http://localhost:3000/api/getsubs`, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ getInput }), }) .then((res) => res.json()) .then((res) => { let len = res.length; let srtData = ""; for (let i = 0; i < len; i++) { let subNum = i + 1; let start = res[i].start; let end = res[i].dur; let text = res[i].text; srtData += inputToSRT(start, end, subNum, text); } let blob = srtData; // console.log(blob); // const download = () => { // Create blob link to download if (srtData !== "") { const url = window.URL.createObjectURL(new Blob([srtData])); const link = document.createElement("a"); link.href = url; link.setAttribute("download", `FileName.srt`); // Append to html link element page document.body.appendChild(link); // Start download link.click(); // Clean up and remove the link link.parentNode.removeChild(link); } }); } }; Below is a test showing the variables passed and accessed outside of the loop. const secondsToHms = (start) => { var h = Math.floor(start / 3600); var m = Math.floor((start % 3600) / 60); var s = Math.floor((start % 3600) % 60); var ms = Math.round((start % 1) * 1000); return ( (h < 10 ? "0" : "") + h + ":" + (m < 10 ? "0" : "") + m + ":" + (s < 10 ? "0" : "") + s + "," + (ms < 100 ? "0" : "") + (ms < 10 ? "0" : "") + ms ); }; const inputToSRT = (start, end, subNum, text) => { return ( subNum + "\r\n" + secondsToHms(start) + " --> " + secondsToHms(end) + "\r\n" + text + "\r\n\r\n" ); } let res = [ {"start": 0, "dur": 1, "text": "test"} ]; let srtData = ""; let len = res.length; for (let i = 0; i < len; i++) { let subNum = i + 1; let start = res[i].start; let end = res[i].dur; let text = res[i].text; srtData += inputToSRT(start, end, subNum, text); } console.log(srtData)
Using the value of a loop outside of the loop
I'm trying to get the data after I restructure it in a loop, so that when the loop completes I have a variable with the value in it so I can download it or whatever. const handleData = async (e) => { if (e.key === "Enter") { await fetch(`http://localhost:3000/api/getsubs`, { method: "POST", headers: { "Content-Type": "application/json", }, body: JSON.stringify({ getInput }), }).then((res) => res.json()).then((res) => { let len = res.length; for (let i = 0; i < len; i++) { let subNum = i + 1; let start = res[i].start; let end = res[i].dur; let text = res[i].text; const secondsToHms = (start) => { // ** starting restructuring the text ** var h = Math.floor(start / 3600); var m = Math.floor((start % 3600) / 60); var s = Math.floor((start % 3600) % 60); var ms = Math.round((start % 1) * 1000); return ( (h < 10 ? "0" : "") + h + ":" + (m < 10 ? "0" : "") + m + ":" + (s < 10 ? "0" : "") + s + "," + (ms < 100 ? "0" : "") + (ms < 10 ? "0" : "") + ms); }; // ** still in the loop ** const inputToSRT = () => { return (subNum + "\r\n" + secondsToHms(start) + " --> " + secondsToHms(end) + "\r\n" + text + "\r\n\r\n"); const srtData = inputToSRT(); console.log(srtData); }; } /* ** here the loop ends ** ** Im looking to create a var here with the loop data stored in it so I could download it after ** */ let blob = srtData; // ** download function outside the loop ** // console.log(blob); // const download = () => { // Create blob link to download if (srtData != -1) { const url = window.URL.createObjectURL(new Blob([srtData])); const link = document.createElement("a"); link.href = url; link.setAttribute("download", `FileName.srt`); // Append to html link element page document.body.appendChild(link); // Start download link.click(); // Clean up and remove the link link.parentNode.removeChild(link); } }); } }; I have tried adding the download function in the loop, but it would download multiple files, one per iteration, instead of one file with all the values
[ "I moved secondsToHms outside of the loop as it isn't needed in there. I also moved and updated inputToSRT to accept arguments. Then I declared srtData outside of the loop so that the scope allows it to be modified in the loop then accessed outside of the loop.\nconst secondsToHms = (start) => {\n var h = Math.floor(start / 3600);\n var m = Math.floor((start % 3600) / 60);\n var s = Math.floor((start % 3600) % 60);\n var ms = Math.round((start % 1) * 1000);\n return (\n (h < 10 ? \"0\" : \"\") +\n h +\n \":\" +\n (m < 10 ? \"0\" : \"\") +\n m +\n \":\" +\n (s < 10 ? \"0\" : \"\") +\n s +\n \",\" +\n (ms < 100 ? \"0\" : \"\") +\n (ms < 10 ? \"0\" : \"\") +\n ms\n );\n};\n\n const inputToSRT = (start, end, subNum, text) => {\n return (\n subNum +\n \"\\r\\n\" +\n secondsToHms(start) +\n \" --> \" +\n secondsToHms(end) +\n \"\\r\\n\" +\n text +\n \"\\r\\n\\r\\n\"\n );\n }\nconst handleData = async (e) => {\n if (e.key === \"Enter\") {\n await fetch(`http://localhost:3000/api/getsubs`, {\n method: \"POST\",\n headers: {\n \"Content-Type\": \"application/json\",\n },\n body: JSON.stringify({\n getInput\n }),\n })\n .then((res) => res.json())\n .then((res) => {\n let len = res.length;\n let srtData;\n\n for (let i = 0; i < len; i++) {\n let subNum = i + 1;\n let start = res[i].start;\n let end = res[i].dur;\n let text = res[i].text;\n srtData = inputToSRT(start, end, subNum, text);\n }\n\n let blob = srtData;\n // console.log(blob);\n // const download = () => {\n // Create blob link to download\n if (fetchData != -1) {\n const url = window.URL.createObjectURL(new Blob([fetchData]));\n const link = document.createElement(\"a\");\n link.href = url;\n link.setAttribute(\"download\", `FileName.srt`);\n // Append to html link element page\n document.body.appendChild(link);\n // Start download\n link.click();\n // Clean up and remove the link\n link.parentNode.removeChild(link);\n }\n\n });\n }\n};\n\nBelow is a test showing the variables passed and accessed outside of the loop.\n\n\nconst secondsToHms = (start) => {\n var h = Math.floor(start / 3600);\n var m = Math.floor((start % 3600) / 60);\n var s = Math.floor((start % 3600) % 60);\n var ms = Math.round((start % 1) * 1000);\n return (\n (h < 10 ? \"0\" : \"\") +\n h +\n \":\" +\n (m < 10 ? \"0\" : \"\") +\n m +\n \":\" +\n (s < 10 ? \"0\" : \"\") +\n s +\n \",\" +\n (ms < 100 ? \"0\" : \"\") +\n (ms < 10 ? \"0\" : \"\") +\n ms\n );\n};\n\nconst inputToSRT = (start, end, subNum, text) => {\n return (\n subNum +\n \"\\r\\n\" +\n secondsToHms(start) +\n \" --> \" +\n secondsToHms(end) +\n \"\\r\\n\" +\n text +\n \"\\r\\n\\r\\n\"\n );\n}\n\nlet res = [\n {\"start\": 0, \"dur\": 1, \"text\": \"test\"}\n];\nlet srtData;\nlet len = res.length;\nfor (let i = 0; i < len; i++) {\n let subNum = i + 1;\n let start = res[i].start;\n let end = res[i].dur;\n let text = res[i].text;\n srtData = inputToSRT(start, end, subNum, text);\n}\nconsole.log(srtData)\n\n\n\n" ]
[ 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0074660290_javascript.txt
Q: Slickgrid header menu sorting is not working properly In my case, once I clear the column sorting and try to re-sort ascending via the SlickGrid header menu, the column values are shuffled again and again. I don't know why sorting is not working properly in SlickGrid. Our expectation is that, when using the SlickGrid header menu sort (ascending) and clicking sort ascending repeatedly, the column values should not keep shuffling. So kindly provide a proper solution for this sorting issue. Software versions: Angular: 13.3.9 Angular-Slickgrid: 4.3.1 slickgrid-universal: 1.4.0 TypeScript: 4.6.4 Operating System: Windows 10 Node: 14.18.3 NPM: 6.14.8 A: Please note that I'm the author of Angular-Slickgrid That was a bug, so you could have opened a GitHub issue for that (I opened one for your issue 1052). I looked into this and it was challenging to fix, the lib uses MomentJS (moment-mini actually) and date sorting with Moment is not exactly straightforward when dealing with potential null values. Anyway, long story short, I have a fix in this Slickgrid-Universal PR (to be released in the next few days for Angular-Slickgrid 5.x only). However please note that I only support the latest version, and I see you are on an older version, so you won't get the fix unless you upgrade to latest. It is simply too much work for a single person (me) to support more than 1 version at a time, since the lib is a free Open Source project and I don't get any money from this work (apart from the very rare ko-fi contributions). What if you can't upgrade just yet? You can define your own sortComparer on each column, so you could add the fix manually, but that means that you will have to add the sortComparer on every date field of your grid(s)... so it would be easier to upgrade, but at least you get a workaround in the meantime. this.columnDefinitions = [ { id: 'finish', name: 'Finish', field: 'finish', sortable: true, // ... other options sortComparer: (val1, val2, sortDirection) => { let diff = 0; if (val1 === val2) { diff = 0; } else { let date1: Moment | Date = moment(val1, 'YYYY-MM-DD', true); let date2: Moment | Date = moment(val2, 'YYYY-MM-DD', true); // when date is invalid, we'll create a temporary old date if (!date1.isValid()) { date1 = new Date(1001, 1, 1); } if (!date2.isValid()) { date2 = new Date(1001, 1, 1); } diff = date1.valueOf() - date2.valueOf(); } return sortDirection * diff; }, }] if you plan to reuse it multiple times then extract it to a separate function & file function sortDateWithFix(val1, val2, sortDirection) { let diff = 0; if (val1 === val2) { diff = 0; } else { let date1: Moment | Date = moment(val1, 'YYYY-MM-DD', true); let date2: Moment | Date = moment(val2, 'YYYY-MM-DD', true); // when date is invalid, we'll create a temporary old date if (!date1.isValid()) { date1 = new Date(1001, 1, 1); } if (!date2.isValid()) { date2 = new Date(1001, 1, 1); } diff = date1.valueOf() - date2.valueOf(); } return sortDirection * diff; } this.columnDefinitions = [ { id: 'start', name: 'Start', field: 'start', sortable: true, sortComparer: sortDateWithFix, }, { id: 'finish', name: 'Finish', field: 'finish', sortable: true, sortComparer: sortDateWithFix, }] Note that the new code takes slightly more time to execute because we now create a temp date to sort nullish values; however it seems negligible, 1240ms vs 1125ms on first sort of 50k unsorted rows (so about 100-150ms longer).
Slickgrid header menu sorting is not working properly
In my case, once I clear the column sorting and try to re-sort ascending via the SlickGrid header menu, the column values are shuffled again and again. I don't know why sorting is not working properly in SlickGrid. Our expectation is that, when using the SlickGrid header menu sort (ascending) and clicking sort ascending repeatedly, the column values should not keep shuffling. So kindly provide a proper solution for this sorting issue. Software versions: Angular: 13.3.9 Angular-Slickgrid: 4.3.1 slickgrid-universal: 1.4.0 TypeScript: 4.6.4 Operating System: Windows 10 Node: 14.18.3 NPM: 6.14.8
[ "Please note that I'm the author of Angular-Slickgrid\nThat was a bug, so you could have open a GitHub issue for that (I opened one for your issue 1052). I looked into this and it was challenging to fix, the lib uses MomentJS (moment-mini actually) and date sorting with Moment is not exactly straightforward when dealing with potential null values. Anyway long story short, I have a fix in this Slickgrid-Universal PR (to be released in the next few days for Angular-Slickgrid 5.x only).\nHowever please note that I only support latest version and I see you are on older version so you won't get the fix unless you upgrade to latest. It is simply too much work for a single person (me) to support more than 1 version at a time, since the lib is a free Open Source project and I don't get any money from this work (apart from the very rare ko-fi contributions).\nWhat if you can't upgrade just yet? You can define your own sortComparer on each, so you could add the fix manually but that means that you will have to add the sortComparer on every date field of your grid(s)... so it would be easier to upgrade but at least you get a workaround in the meantime.\nthis.columnDefinitions = [\n{\n id: 'finish', name: 'Finish', field: 'finish', sortable: true,\n // ... other options\n sortComparer: (val1, val2, sortDirection) => {\n let diff = 0;\n if (val1 === val2) {\n diff = 0;\n } else {\n let date1: Moment | Date = moment(val1, 'YYYY-MM-DD', true);\n let date2: Moment | Date = moment(val2, 'YYYY-MM-DD', true);\n // when date is invalid, we'll create a temporary old date\n if (!date1.isValid()) {\n date1 = new Date(1001, 1, 1);\n }\n if (!date2.isValid()) {\n date2 = new Date(1001, 1, 1);\n }\n diff = date1.valueOf() - date2.valueOf();\n }\n\n return sortDirection * diff;\n },\n}]\n\nif you plan to reuse it multiple time then extract it to a separate function & file\nfunction sortDateWithFix(val1, val2, sortDirection) {\n let diff = 0;\n if (val1 === val2) {\n diff = 0;\n } else {\n let date1: Moment | Date = moment(val1, 'YYYY-MM-DD', true);\n let date2: Moment | Date = moment(val2, 'YYYY-MM-DD', true);\n // when date is invalid, we'll create a temporary old date\n if (!date1.isValid()) {\n date1 = new Date(1001, 1, 1);\n }\n if (!date2.isValid()) {\n date2 = new Date(1001, 1, 1);\n }\n diff = date1.valueOf() - date2.valueOf();\n }\n\n return sortDirection * diff;\n}\n\nthis.columnDefinitions = [\n{\n id: 'start', name: 'Start', field: 'start', sortable: true,\n sortComparer: sortDateWithFix,\n},\n{\n id: 'finish', name: 'Finish', field: 'finish', sortable: true,\n sortComparer: sortDateWithFix,\n}]\n\nNote that the new code is taking slightly more time to execute because we now create a temp date to sort nullish values, however it seems negligeable 1240ms vs 1125ms on first sort of 50k unsorted rows (so about 100-150ms longer).\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_slickgrid", "ionic_framework", "slickgrid" ]
stackoverflow_0074657131_angular_angular_slickgrid_ionic_framework_slickgrid.txt
Q: (Macbook M1 Pro) flutter run problem with cocoapods After installing the camera dependancy and running flutter run on my new Macbook M1, i got problems with cocoapods, which I successfully installed. After trying the recommended command "gem pristine ffi --version 1.15.5", i received no error: output: Restoring gems to pristine condition... Building native extensions. This could take a while... Restored ffi-1.15.5 But when running flutter run I got the same error flutter run output: ` Launching lib/main.dart on iPhone 14 Pro Max in debug mode... Running pod install... 668ms CocoaPods' output: ↳ Preparing Analyzing dependencies Inspecting targets to integrate Using `ARCHS` setting to build architectures of target `Pods-Runner`: (``) Fetching external sources -> Fetching podspec for `Flutter` from `Flutter` -> Fetching podspec for `camera_avfoundation` from `.symlinks/plugins/camera_avfoundation/ios` Resolving dependencies of `Podfile` CDN: trunk Relative path: CocoaPods-version.yml exists! Returning local because checking is only performed in repo update Comparing resolved specification to the sandbox manifest A Flutter A camera_avfoundation Downloading dependencies -> Installing Flutter (1.0.0) -> Installing camera_avfoundation (0.0.1) - Running pre install hooks - Running pre integrate hooks Generating Pods project - Creating Pods project - Installing files into Pods project - Adding source files - Adding frameworks - Adding libraries - Adding resources - Adding development pod helper files - Linking headers - Installing Pod Targets - Installing target `Flutter` iOS 9.0 - Installing target `camera_avfoundation` iOS 9.0 - Copying module map file to `Pods/Target Support Files/camera_avfoundation/camera_avfoundation.modulemap` - Generating Info.plist file at `Pods/Target Support Files/camera_avfoundation/camera_avfoundation-Info.plist` - Generating dummy source at `Pods/Target Support Files/camera_avfoundation/camera_avfoundation-dummy.m` - Installing Aggregate Targets - Installing target `Pods-Runner` iOS 11.0 - Generating Info.plist file at `Pods/Target Support Files/Pods-Runner/Pods-Runner-Info.plist` - Generating module map file at `Pods/Target Support Files/Pods-Runner/Pods-Runner.modulemap` - Generating umbrella header at `Pods/Target Support Files/Pods-Runner/Pods-Runner-umbrella.h` - Generating dummy source at `Pods/Target Support Files/Pods-Runner/Pods-Runner-dummy.m` - Generating deterministic UUIDs - Stabilizing target UUIDs - Running post install hooks [!] An error occurred while processing the post-install hook of the Podfile. 
undefined local variable or method `continue' for #<Pod::Podfile:0x000000010725cbc0
@defined_in_file=#<Pathname:/Users/laurent/development/personal/flutter/jempi/ios/Podfile>,
@internal_hash={}, @root_target_definitions=[#<Pod::Podfile::TargetDefinition label=Pods>],
@current_target_definition=#<Pod::Podfile::TargetDefinition label=Pods>,
@post_install_callback=#<Proc:0x000000010727dde8 /Users/laurent/development/personal/flutter/jempi/ios/Podfile:37>,
@installation_options=#<Pod::Installer::InstallationOptions:0x000000010732c910
@clean=true, @deduplicate_targets=true, @deterministic_uuids=true, @integrate_targets=true,
@lock_pod_sources=true, @warn_for_multiple_pod_sources=true, @warn_for_unused_master_specs_repo=true,
@share_schemes_for_development_pods=false, @disable_input_output_paths=false,
@preserve_pod_file_structure=false, @generate_multiple_pod_projects=false,
@incremental_installation=false, @skip_pods_project_generation=false>>

    continue if xcframework_file.start_with?(".") # Hidden file, possibly on external disk.
    ^^^^^^^^

/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:63:in `block (2 levels) in flutter_additional_ios_build_settings'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:62:in `each_child'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:62:in `block in flutter_additional_ios_build_settings'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:51:in `each'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:51:in `flutter_additional_ios_build_settings'
/Users/laurent/development/personal/flutter/jempi/ios/Podfile:39:in `block (3 levels) in from_ruby'
/Users/laurent/development/personal/flutter/jempi/ios/Podfile:38:in `each'
/Users/laurent/development/personal/flutter/jempi/ios/Podfile:38:in `block (2 levels) in from_ruby'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-core-1.11.3/lib/cocoapods-core/podfile.rb:196:in `post_install!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:945:in `run_podfile_post_install_hook'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:933:in `block in run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:149:in `message'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:932:in `run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:331:in `block (2 levels) in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer/xcode/pods_project_generator/pods_project_writer.rb:61:in `write!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:330:in `block in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:64:in `section'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:309:in `create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:301:in `generate_pods_project'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:180:in `integrate'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:167:in `install!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command/install.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/claide-1.1.0/lib/claide/command.rb:334:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/bin/pod:55:in `<top (required)>'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `load'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `<main>'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:947:in `rescue in run_podfile_post_install_hook'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:944:in `run_podfile_post_install_hook'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:933:in `block in run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:149:in `message'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:932:in `run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:331:in `block (2 levels) in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer/xcode/pods_project_generator/pods_project_writer.rb:61:in `write!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:330:in `block in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:64:in `section'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:309:in `create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:301:in `generate_pods_project'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:180:in `integrate'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:167:in `install!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command/install.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/claide-1.1.0/lib/claide/command.rb:334:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/bin/pod:55:in `<top (required)>'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `load'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `<main>'

Error output from CocoaPods:
↳ Ignoring ffi-1.15.5 because its extensions are not built. Try: gem pristine ffi --version 1.15.5
Error running pod install
Error launching application on iPhone 14 Pro Max.
`

Here is my Podfile:

# Uncomment this line to define a global platform for your project
platform :ios, '11.0'

# CocoaPods analytics sends network stats synchronously affecting flutter build latency.
ENV['COCOAPODS_DISABLE_STATS'] = 'true'

project 'Runner', {
  'Debug' => :debug,
  'Profile' => :release,
  'Release' => :release,
}

def flutter_root
  generated_xcode_build_settings_path = File.expand_path(File.join('..', 'Flutter', 'Generated.xcconfig'), __FILE__)
  unless File.exist?(generated_xcode_build_settings_path)
    raise "#{generated_xcode_build_settings_path} must exist. If you're running pod install manually, make sure flutter pub get is executed first"
  end

  File.foreach(generated_xcode_build_settings_path) do |line|
    matches = line.match(/FLUTTER_ROOT\=(.*)/)
    return matches[1].strip if matches
  end
  raise "FLUTTER_ROOT not found in #{generated_xcode_build_settings_path}. Try deleting Generated.xcconfig, then run flutter pub get"
end

require File.expand_path(File.join('packages', 'flutter_tools', 'bin', 'podhelper'), flutter_root)

flutter_ios_podfile_setup

target 'Runner' do
  use_frameworks!
  use_modular_headers!

  flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
  end
end

Here is my pubspec:

name: jempi
description: A new Flutter project.

# The following line prevents the package from being accidentally published to
# pub.dev using `flutter pub publish`. This is preferred for private packages.
publish_to: "none" # Remove this line if you wish to publish to pub.dev

# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
version: 1.0.0+1

environment:
  sdk: ">=2.17.6 <3.0.0"

# Dependencies specify other packages that your package needs in order to work.
# To automatically upgrade your package dependencies to the latest versions
# consider running `flutter pub upgrade --major-versions`. Alternatively,
# dependencies can be manually updated by changing the version numbers below to
# the latest version available on pub.dev. To see which dependencies have newer
# versions available, run `flutter pub outdated`.
dependencies:
  camera: ^0.10.0+4
  cupertino_icons: ^1.0.2
  flutter:
    sdk: flutter

dev_dependencies:
  flutter_lints: ^2.0.0
  flutter_test:
    sdk: flutter

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter packages.
flutter:

  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true

  # To add assets to your application, add an assets section, like this:
  assets:
    - assets/img/

  # An image asset can refer to one or more resolution-specific "variants", see
  # https://flutter.dev/assets-and-images/#resolution-aware

  # For details regarding adding assets from package dependencies, see
  # https://flutter.dev/assets-and-images/#from-packages

  # To add custom fonts to your application, add a fonts section here,
  # in this "flutter" section. Each entry in this list should have a
  # "family" key with the font family name, and a "fonts" key with a
  # list giving the asset and other descriptors for the font. For
  # example:
  # fonts:
  #   - family: Schyler
  #     fonts:
  #       - asset: fonts/Schyler-Regular.ttf
  #       - asset: fonts/Schyler-Italic.ttf
  #         style: italic
  #   - family: Trajan Pro
  #     fonts:
  #       - asset: fonts/TrajanPro.ttf
  #       - asset: fonts/TrajanPro_Bold.ttf
  #         weight: 700
  #
  # For details regarding fonts from package dependencies,
  # see https://flutter.dev/custom-fonts/#from-packages

I tried reinstalling cocoapods and ffi several times, also using:

sudo gem install cocoapods
sudo arch -x86_64 gem install ffi

A: The basic steps for solving pod issues in Flutter are:

pod deintegrate
pod clean
pod upgrade
pod install

Then solve the specific pod errors if you find any. I'd also advise working in Xcode while solving pod issues.
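A note on the root cause, for anyone hitting the same trace: `continue` is not a Ruby keyword (Ruby's loop-skip keyword is `next`), and the failing line sits in Flutter's own podhelper.rb, so upgrading the Flutter SDK to a build where that helper was fixed is usually the real cure. Independently of that, a common CocoaPods reset sequence is sketched below; the paths are assumptions about a standard project layout, and only stock CocoaPods commands are used (the `pod clean`/`pod upgrade` steps named in the answer come from plugins, not built-in CocoaPods):

# Run from the Flutter project root; adjust paths to your checkout.
flutter clean               # drop stale build artifacts
flutter pub get             # regenerate ios/Flutter/Generated.xcconfig
cd ios
pod deintegrate             # strip CocoaPods traces from Runner.xcodeproj
rm -f Podfile.lock          # force dependency re-resolution
pod install --repo-update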
(MacBook M1 Pro) flutter run problem with CocoaPods
After installing the camera dependency and running flutter run on my new MacBook M1, I got problems with CocoaPods, which I had successfully installed. After trying the recommended command "gem pristine ffi --version 1.15.5", I received no error:

output:
Restoring gems to pristine condition...
Building native extensions. This could take a while...
Restored ffi-1.15.5

But when running flutter run I got the same error.

flutter run output:
`
Launching lib/main.dart on iPhone 14 Pro Max in debug mode...
Running pod install... 668ms
CocoaPods' output:
↳
Preparing

Analyzing dependencies

Inspecting targets to integrate
Using `ARCHS` setting to build architectures of target `Pods-Runner`: (``)

Fetching external sources
-> Fetching podspec for `Flutter` from `Flutter`
-> Fetching podspec for `camera_avfoundation` from `.symlinks/plugins/camera_avfoundation/ios`

Resolving dependencies of `Podfile`
CDN: trunk Relative path: CocoaPods-version.yml exists! Returning local because checking is only performed in repo update

Comparing resolved specification to the sandbox manifest
A Flutter
A camera_avfoundation

Downloading dependencies
-> Installing Flutter (1.0.0)
-> Installing camera_avfoundation (0.0.1)
- Running pre install hooks
- Running pre integrate hooks

Generating Pods project
- Creating Pods project
- Installing files into Pods project
- Adding source files
- Adding frameworks
- Adding libraries
- Adding resources
- Adding development pod helper files
- Linking headers
- Installing Pod Targets
- Installing target `Flutter` iOS 9.0
- Installing target `camera_avfoundation` iOS 9.0
- Copying module map file to `Pods/Target Support Files/camera_avfoundation/camera_avfoundation.modulemap`
- Generating Info.plist file at `Pods/Target Support Files/camera_avfoundation/camera_avfoundation-Info.plist`
- Generating dummy source at `Pods/Target Support Files/camera_avfoundation/camera_avfoundation-dummy.m`
- Installing Aggregate Targets
- Installing target `Pods-Runner` iOS 11.0
- Generating Info.plist file at `Pods/Target Support Files/Pods-Runner/Pods-Runner-Info.plist`
- Generating module map file at `Pods/Target Support Files/Pods-Runner/Pods-Runner.modulemap`
- Generating umbrella header at `Pods/Target Support Files/Pods-Runner/Pods-Runner-umbrella.h`
- Generating dummy source at `Pods/Target Support Files/Pods-Runner/Pods-Runner-dummy.m`
- Generating deterministic UUIDs
- Stabilizing target UUIDs
- Running post install hooks
[!] An error occurred while processing the post-install hook of the Podfile.
undefined local variable or method `continue' for #<Pod::Podfile:0x000000010725cbc0
@defined_in_file=#<Pathname:/Users/laurent/development/personal/flutter/jempi/ios/Podfile>,
@internal_hash={}, @root_target_definitions=[#<Pod::Podfile::TargetDefinition label=Pods>],
@current_target_definition=#<Pod::Podfile::TargetDefinition label=Pods>,
@post_install_callback=#<Proc:0x000000010727dde8 /Users/laurent/development/personal/flutter/jempi/ios/Podfile:37>,
@installation_options=#<Pod::Installer::InstallationOptions:0x000000010732c910
@clean=true, @deduplicate_targets=true, @deterministic_uuids=true, @integrate_targets=true,
@lock_pod_sources=true, @warn_for_multiple_pod_sources=true, @warn_for_unused_master_specs_repo=true,
@share_schemes_for_development_pods=false, @disable_input_output_paths=false,
@preserve_pod_file_structure=false, @generate_multiple_pod_projects=false,
@incremental_installation=false, @skip_pods_project_generation=false>>

    continue if xcframework_file.start_with?(".") # Hidden file, possibly on external disk.
    ^^^^^^^^

/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:63:in `block (2 levels) in flutter_additional_ios_build_settings'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:62:in `each_child'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:62:in `block in flutter_additional_ios_build_settings'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:51:in `each'
/Users/laurent/development/flutter/packages/flutter_tools/bin/podhelper.rb:51:in `flutter_additional_ios_build_settings'
/Users/laurent/development/personal/flutter/jempi/ios/Podfile:39:in `block (3 levels) in from_ruby'
/Users/laurent/development/personal/flutter/jempi/ios/Podfile:38:in `each'
/Users/laurent/development/personal/flutter/jempi/ios/Podfile:38:in `block (2 levels) in from_ruby'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-core-1.11.3/lib/cocoapods-core/podfile.rb:196:in `post_install!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:945:in `run_podfile_post_install_hook'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:933:in `block in run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:149:in `message'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:932:in `run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:331:in `block (2 levels) in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer/xcode/pods_project_generator/pods_project_writer.rb:61:in `write!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:330:in `block in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:64:in `section'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:309:in `create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:301:in `generate_pods_project'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:180:in `integrate'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:167:in `install!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command/install.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/claide-1.1.0/lib/claide/command.rb:334:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/bin/pod:55:in `<top (required)>'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `load'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `<main>'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:947:in `rescue in run_podfile_post_install_hook'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:944:in `run_podfile_post_install_hook'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:933:in `block in run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:149:in `message'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:932:in `run_podfile_post_install_hooks'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:331:in `block (2 levels) in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer/xcode/pods_project_generator/pods_project_writer.rb:61:in `write!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:330:in `block in create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/user_interface.rb:64:in `section'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:309:in `create_and_save_projects'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:301:in `generate_pods_project'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:180:in `integrate'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/installer.rb:167:in `install!'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command/install.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/claide-1.1.0/lib/claide/command.rb:334:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/lib/cocoapods/command.rb:52:in `run'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/gems/cocoapods-1.11.3/bin/pod:55:in `<top (required)>'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `load'
/opt/homebrew/Cellar/cocoapods/1.11.3/libexec/bin/pod:25:in `<main>'

Error output from CocoaPods:
↳ Ignoring ffi-1.15.5 because its extensions are not built. Try: gem pristine ffi --version 1.15.5
Error running pod install
Error launching application on iPhone 14 Pro Max.
`

Here is my Podfile:

# Uncomment this line to define a global platform for your project
platform :ios, '11.0'

# CocoaPods analytics sends network stats synchronously affecting flutter build latency.
ENV['COCOAPODS_DISABLE_STATS'] = 'true'

project 'Runner', {
  'Debug' => :debug,
  'Profile' => :release,
  'Release' => :release,
}

def flutter_root
  generated_xcode_build_settings_path = File.expand_path(File.join('..', 'Flutter', 'Generated.xcconfig'), __FILE__)
  unless File.exist?(generated_xcode_build_settings_path)
    raise "#{generated_xcode_build_settings_path} must exist. If you're running pod install manually, make sure flutter pub get is executed first"
  end

  File.foreach(generated_xcode_build_settings_path) do |line|
    matches = line.match(/FLUTTER_ROOT\=(.*)/)
    return matches[1].strip if matches
  end
  raise "FLUTTER_ROOT not found in #{generated_xcode_build_settings_path}. Try deleting Generated.xcconfig, then run flutter pub get"
end

require File.expand_path(File.join('packages', 'flutter_tools', 'bin', 'podhelper'), flutter_root)

flutter_ios_podfile_setup

target 'Runner' do
  use_frameworks!
  use_modular_headers!

  flutter_install_all_ios_pods File.dirname(File.realpath(__FILE__))
end

post_install do |installer|
  installer.pods_project.targets.each do |target|
    flutter_additional_ios_build_settings(target)
  end
end

Here is my pubspec:

name: jempi
description: A new Flutter project.

# The following line prevents the package from being accidentally published to
# pub.dev using `flutter pub publish`. This is preferred for private packages.
publish_to: "none" # Remove this line if you wish to publish to pub.dev

# The following defines the version and build number for your application.
# A version number is three numbers separated by dots, like 1.2.43
# followed by an optional build number separated by a +.
# Both the version and the builder number may be overridden in flutter
# build by specifying --build-name and --build-number, respectively.
# In Android, build-name is used as versionName while build-number used as versionCode.
# Read more about Android versioning at https://developer.android.com/studio/publish/versioning
# In iOS, build-name is used as CFBundleShortVersionString while build-number used as CFBundleVersion.
# Read more about iOS versioning at
# https://developer.apple.com/library/archive/documentation/General/Reference/InfoPlistKeyReference/Articles/CoreFoundationKeys.html
version: 1.0.0+1

environment:
  sdk: ">=2.17.6 <3.0.0"

# Dependencies specify other packages that your package needs in order to work.
# To automatically upgrade your package dependencies to the latest versions
# consider running `flutter pub upgrade --major-versions`. Alternatively,
# dependencies can be manually updated by changing the version numbers below to
# the latest version available on pub.dev. To see which dependencies have newer
# versions available, run `flutter pub outdated`.
dependencies:
  camera: ^0.10.0+4
  cupertino_icons: ^1.0.2
  flutter:
    sdk: flutter

dev_dependencies:
  flutter_lints: ^2.0.0
  flutter_test:
    sdk: flutter

# For information on the generic Dart part of this file, see the
# following page: https://dart.dev/tools/pub/pubspec

# The following section is specific to Flutter packages.
flutter:

  # The following line ensures that the Material Icons font is
  # included with your application, so that you can use the icons in
  # the material Icons class.
  uses-material-design: true

  # To add assets to your application, add an assets section, like this:
  assets:
    - assets/img/

  # An image asset can refer to one or more resolution-specific "variants", see
  # https://flutter.dev/assets-and-images/#resolution-aware

  # For details regarding adding assets from package dependencies, see
  # https://flutter.dev/assets-and-images/#from-packages

  # To add custom fonts to your application, add a fonts section here,
  # in this "flutter" section. Each entry in this list should have a
  # "family" key with the font family name, and a "fonts" key with a
  # list giving the asset and other descriptors for the font. For
  # example:
  # fonts:
  #   - family: Schyler
  #     fonts:
  #       - asset: fonts/Schyler-Regular.ttf
  #       - asset: fonts/Schyler-Italic.ttf
  #         style: italic
  #   - family: Trajan Pro
  #     fonts:
  #       - asset: fonts/TrajanPro.ttf
  #       - asset: fonts/TrajanPro_Bold.ttf
  #         weight: 700
  #
  # For details regarding fonts from package dependencies,
  # see https://flutter.dev/custom-fonts/#from-packages

I tried reinstalling cocoapods and ffi several times, also using:

sudo gem install cocoapods
sudo arch -x86_64 gem install ffi
[ "basic steps for solving pod issues in flutter are:-\n\npod deintegrate\npod clean\npod upgrade\npod install\nand solve the specific pod errors if you find any\nadvice to go with Xcode while solving pod issues\n\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter", "ios", "macos", "mobile" ]
stackoverflow_0074659919_dart_flutter_ios_macos_mobile.txt
Q: R: Faster way to create a variable with values assigned from an iterative rule (SAS retain-type programming) I have the following data.frame in R which consists of about 100k rows. Imagine in this scenario that ID represents a patient, date is the date of testing for a specific disease, A is an indicator of a positive test from test-type A, and B is a similar indicator but from test-type B. We can assume every row will have at least one positive test and that the dataset is sorted by ID and date. head(data, 10) ID date A B 1 1 2022-01-01 1 0 2 1 2022-01-05 1 0 3 1 2022-02-17 0 1 4 1 2022-05-13 1 0 5 1 2022-09-20 0 1 6 2 2021-02-02 0 1 7 2 2021-02-06 0 1 8 2 2021-04-18 1 0 9 2 2021-04-19 0 1 10 2 2022-08-21 1 0 I have the following rule to determine dates/counts (infec_date and n_infec) of consecutive infections for a patient: The very first date value will be a patients' initial infection The first value of date that is 45 days or more after a previous infection where a patient is positive due to test-type A (A==1) will be the date of next infection. Otherwise, the first value of date that is 90 days or more after a previous infection where a patient is positive due to test-type B (B==1) will be the date of next infection. If no criteria for infection are met, then the most recent infection number/date are set. My output dataset using this rule will need to look like: ID date A B infec_date n_infec 1 1 2022-01-01 1 0 2022-01-01 1 2 1 2022-01-05 1 0 2022-01-01 1 3 1 2022-02-17 0 1 2022-01-01 1 4 1 2022-05-13 1 0 2022-05-13 2 5 1 2022-09-20 0 1 2022-09-20 3 6 2 2021-02-02 0 1 2021-02-02 1 7 2 2021-02-06 0 1 2021-02-02 1 8 2 2021-04-18 1 0 2021-04-18 2 9 2 2021-04-19 0 1 2021-04-18 2 10 2 2022-08-21 1 0 2022-08-21 3 MY GOAL: Is to find a way to do this that is faster than having to iterate through each individual row and checking a string of logic to set the values manually. I would really like to know if there is any functionality for ease and speed of this type of programming using packages dplyr/tidyr. The for loop I've been using in R is as follows: for(i in 1:nrow(data)){ if(i==1){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=1 }else if(data[i,"ID"]!=data[i-1,"ID"]){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=1 }else{ if(data[i,"A"]==1&data[i,"date"]>=data[i-1,"infec_date"]+45){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=data[i-1,"n_infec"]+1 }else if(data[i,"B"]==1&data[i,"date"]>=(data[i-1,"infec_date"]+90)){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=data[i-1,"n_infec"]+1 }else{ data[i,"infec_date"]=data[i-1,"infec_date"] data[i,"n_infec"]=data[i-1,"n_infec"] } } } As one can imagine, this gets incredibly slow and takes forever to run when dealing with 100k rows of data. I have programmed this in SAS before using the retain statement with first./last. notation, which would make this much easier/faster. But unfortunately, I do not have access to the software. I know a similar SAS call would look like: data new_data; set data; by id date; length infec_date n_infec 8.; format infec_date mmddyy10.; retain infec_date n_infec; if first.id then do; infec_date=date; n_infec=1; end; if A=1 and date>=infec_date+45 then do; infec_date=date; n_infec=n_infec+1; end; else if B=1 and date>=infec_date+90 then do; infec_date=date; n_infec=n_infec+1; end; run; Thanks in advance! A: This should be faster. 
library(dplyr) df %>% group_by(ID) %>% mutate(before_date = date - if_else(A==1, 45, 90), new_infec = coalesce(before_date > lag(date), TRUE), infec_date = if_else(new_infec, date, NA_real_), n_infec = cumsum(new_infec)) %>% fill(infec_date) %>% ungroup() %>% select(-c(5:6)) Result ID date A B infec_date n_infec <int> <date> <int> <int> <date> <int> 1 1 2022-01-01 1 0 2022-01-01 1 2 1 2022-01-05 1 0 2022-01-01 1 3 1 2022-02-17 0 1 2022-01-01 1 4 1 2022-05-13 1 0 2022-05-13 2 5 1 2022-09-20 0 1 2022-09-20 3 6 2 2021-02-02 0 1 2021-02-02 1 7 2 2021-02-06 0 1 2021-02-02 1 8 2 2021-04-18 1 0 2021-04-18 2 9 2 2021-04-19 0 1 2021-04-18 2 10 2 2022-08-21 1 0 2022-08-21 3
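One caveat worth flagging (an observation, not part of the accepted answer): the lag(date) comparison above measures the gap from the previous test, whereas the original loop and the SAS retain step measure it from the previous infection date, so the two can disagree when several tests fall between infections. A sketch that reproduces the retain semantics exactly, carrying the running infection date with purrr::accumulate2 — it assumes date is already class Date and rows are sorted by ID and date:

library(dplyr)
library(purrr)

out <- df %>%
  group_by(ID) %>%
  mutate(
    infec_date = as.Date(unlist(accumulate2(
      as.numeric(date)[-1], A[-1],
      function(prev, d, a) {
        gap <- if (a == 1) 45 else 90   # A tests reset after 45 days, B after 90
        if (d >= prev + gap) d else prev
      },
      .init = as.numeric(date)[1]       # the first test is always an infection
    )), origin = "1970-01-01"),
    n_infec = match(infec_date, unique(infec_date))
  ) %>%
  ungroup()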
R: Faster way to create a variable with values assigned from an iterative rule (SAS retain-type programming)
I have the following data.frame in R which consists of about 100k rows. Imagine in this scenario that ID represents a patient, date is the date of testing for a specific disease, A is an indicator of a positive test from test-type A, and B is a similar indicator but from test-type B. We can assume every row will have at least one positive test and that the dataset is sorted by ID and date. head(data, 10) ID date A B 1 1 2022-01-01 1 0 2 1 2022-01-05 1 0 3 1 2022-02-17 0 1 4 1 2022-05-13 1 0 5 1 2022-09-20 0 1 6 2 2021-02-02 0 1 7 2 2021-02-06 0 1 8 2 2021-04-18 1 0 9 2 2021-04-19 0 1 10 2 2022-08-21 1 0 I have the following rule to determine dates/counts (infec_date and n_infec) of consecutive infections for a patient: The very first date value will be a patients' initial infection The first value of date that is 45 days or more after a previous infection where a patient is positive due to test-type A (A==1) will be the date of next infection. Otherwise, the first value of date that is 90 days or more after a previous infection where a patient is positive due to test-type B (B==1) will be the date of next infection. If no criteria for infection are met, then the most recent infection number/date are set. My output dataset using this rule will need to look like: ID date A B infec_date n_infec 1 1 2022-01-01 1 0 2022-01-01 1 2 1 2022-01-05 1 0 2022-01-01 1 3 1 2022-02-17 0 1 2022-01-01 1 4 1 2022-05-13 1 0 2022-05-13 2 5 1 2022-09-20 0 1 2022-09-20 3 6 2 2021-02-02 0 1 2021-02-02 1 7 2 2021-02-06 0 1 2021-02-02 1 8 2 2021-04-18 1 0 2021-04-18 2 9 2 2021-04-19 0 1 2021-04-18 2 10 2 2022-08-21 1 0 2022-08-21 3 MY GOAL: Is to find a way to do this that is faster than having to iterate through each individual row and checking a string of logic to set the values manually. I would really like to know if there is any functionality for ease and speed of this type of programming using packages dplyr/tidyr. The for loop I've been using in R is as follows: for(i in 1:nrow(data)){ if(i==1){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=1 }else if(data[i,"ID"]!=data[i-1,"ID"]){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=1 }else{ if(data[i,"A"]==1&data[i,"date"]>=data[i-1,"infec_date"]+45){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=data[i-1,"n_infec"]+1 }else if(data[i,"B"]==1&data[i,"date"]>=(data[i-1,"infec_date"]+90)){ data[i,"infec_date"]=data[i,"date"] data[i,"n_infec"]=data[i-1,"n_infec"]+1 }else{ data[i,"infec_date"]=data[i-1,"infec_date"] data[i,"n_infec"]=data[i-1,"n_infec"] } } } As one can imagine, this gets incredibly slow and takes forever to run when dealing with 100k rows of data. I have programmed this in SAS before using the retain statement with first./last. notation, which would make this much easier/faster. But unfortunately, I do not have access to the software. I know a similar SAS call would look like: data new_data; set data; by id date; length infec_date n_infec 8.; format infec_date mmddyy10.; retain infec_date n_infec; if first.id then do; infec_date=date; n_infec=1; end; if A=1 and date>=infec_date+45 then do; infec_date=date; n_infec=n_infec+1; end; else if B=1 and date>=infec_date+90 then do; infec_date=date; n_infec=n_infec+1; end; run; Thanks in advance!
[ "This should be faster.\nlibrary(dplyr)\ndf %>%\n group_by(ID) %>%\n mutate(before_date = date - if_else(A==1, 45, 90),\n new_infec = coalesce(before_date > lag(date), TRUE),\n infec_date = if_else(new_infec, date, NA_real_),\n n_infec = cumsum(new_infec)) %>%\n fill(infec_date) %>%\n ungroup() %>% \n select(-c(5:6))\n\nResult\n ID date A B infec_date n_infec\n <int> <date> <int> <int> <date> <int>\n 1 1 2022-01-01 1 0 2022-01-01 1\n 2 1 2022-01-05 1 0 2022-01-01 1\n 3 1 2022-02-17 0 1 2022-01-01 1\n 4 1 2022-05-13 1 0 2022-05-13 2\n 5 1 2022-09-20 0 1 2022-09-20 3\n 6 2 2021-02-02 0 1 2021-02-02 1\n 7 2 2021-02-06 0 1 2021-02-02 1\n 8 2 2021-04-18 1 0 2021-04-18 2\n 9 2 2021-04-19 0 1 2021-04-18 2\n10 2 2022-08-21 1 0 2022-08-21 3\n\n" ]
[ 0 ]
[]
[]
[ "dplyr", "loops", "r", "retain", "tidyr" ]
stackoverflow_0074658486_dplyr_loops_r_retain_tidyr.txt
Q: Insert specific N blank rows depending on column value I have a data frame looking like this : > df <- data.frame(x = c(1,0,2,0,1,3,1), + y = c("lima","chicago","new york","Miami","havana","Colon","la paz")) > df x y 1 1 lima 2 0 chicago 3 2 new york 4 0 Miami 5 1 havana 6 3 Colon 7 1 la paz I would like to find a way to insert blank N rows depending on the value of column x so if x is 1, 1 blank row would be inserted above, if x is 3, 3 blank rows would be inserted above. The desired output for the data frame above should be this: > df x y 1 NA <NA> 2 1 lima 3 0 chicago 4 NA <NA> 5 NA <NA> 6 2 new york 7 0 Miami 8 NA <NA> 9 1 havana 10 NA <NA> 11 NA <NA> 12 NA <NA> 13 3 Colon 14 NA <NA> 15 1 la paz A: We could do it this way: We group by row number and add to each row number x + 1 rows. Using a trick we can show the NA's first: library(dplyr) df %>% group_by(ID = row_number()) %>% summarise(cur_data()[seq(x+1),]) %>% arrange(!is.na(x), x, .by_group = TRUE) %>% ungroup() %>% select(-ID) x y <dbl> <chr> 1 NA NA 2 1 lima 3 0 chicago 4 NA NA 5 NA NA 6 2 new york 7 0 Miami 8 NA NA 9 1 havana 10 NA NA 11 NA NA 12 NA NA 13 3 Colon 14 NA NA 15 1 la paz A: Using R base do.call(rbind, c(make.row.names=FALSE, lapply(split(df, df$y), function(z){ x <- y <- rep(NA, z$x) rbind(cbind(x, y), z) }) )) x y 1 0 chicago 2 NA <NA> 3 NA <NA> 4 NA <NA> 5 3 Colon 6 NA <NA> 7 1 havana 8 NA <NA> 9 1 la paz 10 NA <NA> 11 1 lima 12 0 Miami 13 NA <NA> 14 NA <NA> 15 2 new york A: Here is a base R option i1 <-with(df, rep(seq_along(x), ifelse(x >0, x + 1, 1))) out <- df[NA^(duplicated(i1, fromLast = TRUE)) * i1,] row.names(out) <- NULL -output > out x y 1 NA <NA> 2 1 lima 3 0 chicago 4 NA <NA> 5 NA <NA> 6 2 new york 7 0 Miami 8 NA <NA> 9 1 havana 10 NA <NA> 11 NA <NA> 12 NA <NA> 13 3 Colon 14 NA <NA> 15 1 la paz
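One caution on the split-based answer above: split(df, df$y) regroups the rows by y (which is why its output starts at chicago instead of lima). The duplicated(fromLast) idea from the last answer keeps the original order; here is a slightly more spelled-out sketch of that same trick, assuming x holds non-negative integer counts:

reps <- df$x + 1                          # each row plus the x blanks above it
idx  <- rep(seq_len(nrow(df)), reps)      # 1,1, 2, 3,3,3, 4, ...
keep <- !duplicated(idx, fromLast = TRUE) # TRUE only on the last copy of each index
out  <- df[ifelse(keep, idx, NA), ]       # an NA index yields an all-NA (blank) row
row.names(out) <- NULL
out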
Insert specific N blank rows depending on column value
I have a data frame looking like this : > df <- data.frame(x = c(1,0,2,0,1,3,1), + y = c("lima","chicago","new york","Miami","havana","Colon","la paz")) > df x y 1 1 lima 2 0 chicago 3 2 new york 4 0 Miami 5 1 havana 6 3 Colon 7 1 la paz I would like to find a way to insert blank N rows depending on the value of column x so if x is 1, 1 blank row would be inserted above, if x is 3, 3 blank rows would be inserted above. The desired output for the data frame above should be this: > df x y 1 NA <NA> 2 1 lima 3 0 chicago 4 NA <NA> 5 NA <NA> 6 2 new york 7 0 Miami 8 NA <NA> 9 1 havana 10 NA <NA> 11 NA <NA> 12 NA <NA> 13 3 Colon 14 NA <NA> 15 1 la paz
[ "We could do it this way:\nWe group by row number and add to each row number x + 1 rows.\nUsing a trick we can show the NA's first:\nlibrary(dplyr)\n\ndf %>% \n group_by(ID = row_number()) %>% \n summarise(cur_data()[seq(x+1),]) %>% \n arrange(!is.na(x), x, .by_group = TRUE) %>% \n ungroup() %>% \n select(-ID)\n\n x y \n <dbl> <chr> \n 1 NA NA \n 2 1 lima \n 3 0 chicago \n 4 NA NA \n 5 NA NA \n 6 2 new york\n 7 0 Miami \n 8 NA NA \n 9 1 havana \n10 NA NA \n11 NA NA \n12 NA NA \n13 3 Colon \n14 NA NA \n15 1 la paz \n\n", "Using R base\ndo.call(rbind, c(make.row.names=FALSE, lapply(split(df, df$y), function(z){\n x <- y <- rep(NA, z$x)\n rbind(cbind(x, y), z)\n}) ))\n x y\n1 0 chicago\n2 NA <NA>\n3 NA <NA>\n4 NA <NA>\n5 3 Colon\n6 NA <NA>\n7 1 havana\n8 NA <NA>\n9 1 la paz\n10 NA <NA>\n11 1 lima\n12 0 Miami\n13 NA <NA>\n14 NA <NA>\n15 2 new york\n\n", "Here is a base R option\ni1 <-with(df, rep(seq_along(x), ifelse(x >0, x + 1, 1)))\nout <- df[NA^(duplicated(i1, fromLast = TRUE)) * i1,]\nrow.names(out) <- NULL\n\n-output\n> out\n x y\n1 NA <NA>\n2 1 lima\n3 0 chicago\n4 NA <NA>\n5 NA <NA>\n6 2 new york\n7 0 Miami\n8 NA <NA>\n9 1 havana\n10 NA <NA>\n11 NA <NA>\n12 NA <NA>\n13 3 Colon\n14 NA <NA>\n15 1 la paz\n\n" ]
[ 4, 2, 1 ]
[]
[]
[ "dataframe", "r", "tidyverse" ]
stackoverflow_0074659143_dataframe_r_tidyverse.txt
Q: Is it possible to use GEKKO for curve fitting if complex numbers can appear in intermediate steps? I am trying to use GEKKO for optimization purposes, particularly for curve-fitting, functions have a relatively large number of parameters. For example: f1 = f(par_1, par_2, par_3, ... , par_n) Functions can be defined explicitly (not black-box functions) and all parameters have real values. par_1, par_2, ..., par_n have real values. All outputs of the f1 and the data for fitting are always real, but some intermediate calculation steps can result in complex values (real and non-zero imaginary parts). I am using Steady-State (MPU) mode for the estimation of parameters. Is it possible to use GEKKO in this case? Or maybe it depends on the solver selected? A: Gekko does not natively handle complex numbers. The automatic differentiation and gradient-based solvers haven't been programmed with that in mind. There are workarounds to solve complex number problems by splitting the variable. Below is an example that maximizes the imaginary number yr + yi i as the square root of x. The variable x is a decision that the optimizer can adjust to maximize the objective. from gekko import GEKKO m = GEKKO() x = m.Var(2,lb=-9,ub=16) # -9<=x<=16 b = m.if3(x,0,1) # binary switch s = m.Intermediate(m.sqrt(m.abs3(x))) # sqrt(x) yr = m.Intermediate(b*s) # real yi = m.Intermediate((1-b)*s) # imaginary m.Maximize(yi**2) # maximize imaginary number m.solve() print(yr.value[0],'+',yi.value[0],'i') This produces the correct solution: --------------------------------------------------- Solver : APOPT (v1.0) Solution time : 3.800000000046566E-002 sec Objective : -9.00000000000000 Successful solution --------------------------------------------------- 0.0 + 3.0 i There are additional suggestions at Application of complex numbers in Linear Programming?
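Carrying the splitting idea over to the curve-fitting setting from the question: keep the real and imaginary parts of each complex intermediate as two real m.Intermediate expressions and build a real-valued objective from them. The sketch below is a hypothetical regression (IMODE=2) with a made-up model y = (p1 + i*p2)*exp(i*x) and synthetic data, not the asker's actual function:

from gekko import GEKKO
import numpy as np

# synthetic data for the made-up model (true p1=2, p2=3)
x_data  = np.linspace(0, 3, 30)
yr_data = 2*np.cos(x_data) - 3*np.sin(x_data)   # real part of measurements
yi_data = 2*np.sin(x_data) + 3*np.cos(x_data)   # imaginary part of measurements

m = GEKKO(remote=False)
m.options.IMODE = 2                  # steady-state regression (MPU)
x  = m.Param(value=x_data)
yr = m.Param(value=yr_data)
yi = m.Param(value=yi_data)
p1 = m.FV(value=1.0); p1.STATUS = 1  # parameters the solver may adjust
p2 = m.FV(value=1.0); p2.STATUS = 1

# complex intermediate z = (p1 + i*p2)*exp(i*x), split into two real parts
zr = m.Intermediate(p1*m.cos(x) - p2*m.sin(x))
zi = m.Intermediate(p1*m.sin(x) + p2*m.cos(x))

m.Minimize((zr - yr)**2 + (zi - yi)**2)
m.solve(disp=False)
print(p1.value[0], p2.value[0])      # should approach 2 and 3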
Is it possible to use GEKKO for curve fitting if complex numbers can appear in intermediate steps?
I am trying to use GEKKO for optimization purposes, particularly for curve-fitting, functions have a relatively large number of parameters. For example: f1 = f(par_1, par_2, par_3, ... , par_n) Functions can be defined explicitly (not black-box functions) and all parameters have real values. par_1, par_2, ..., par_n have real values. All outputs of the f1 and the data for fitting are always real, but some intermediate calculation steps can result in complex values (real and non-zero imaginary parts). I am using Steady-State (MPU) mode for the estimation of parameters. Is it possible to use GEKKO in this case? Or maybe it depends on the solver selected?
[ "Gekko does not natively handle complex numbers. The automatic differentiation and gradient-based solvers haven't been programmed with that in mind. There are workarounds to solve complex number problems by splitting the variable. Below is an example that maximizes the imaginary number yr + yi i as the square root of x. The variable x is a decision that the optimizer can adjust to maximize the objective.\nfrom gekko import GEKKO\nm = GEKKO()\nx = m.Var(2,lb=-9,ub=16) # -9<=x<=16\nb = m.if3(x,0,1) # binary switch\ns = m.Intermediate(m.sqrt(m.abs3(x))) # sqrt(x)\nyr = m.Intermediate(b*s) # real\nyi = m.Intermediate((1-b)*s) # imaginary\n\nm.Maximize(yi**2) # maximize imaginary number\nm.solve()\nprint(yr.value[0],'+',yi.value[0],'i')\n\nThis produces the correct solution:\n ---------------------------------------------------\n Solver : APOPT (v1.0)\n Solution time : 3.800000000046566E-002 sec\n Objective : -9.00000000000000 \n Successful solution\n ---------------------------------------------------\n \n0.0 + 3.0 i\n\nThere are additional suggestions at Application of complex numbers in Linear Programming?\n" ]
[ 1 ]
[]
[]
[ "complex_numbers", "gekko", "optimization" ]
stackoverflow_0074645998_complex_numbers_gekko_optimization.txt
Q: TypeError: 'NoneType' object is not iterable in Python
What does TypeError: 'NoneType' object is not iterable mean? Example:

for row in data:  # Gives TypeError!
    print(row)

A: It means the value of data is None.

A: Explanation of error: 'NoneType' object is not iterable
In Python 2, NoneType is the type of None. In Python 3, NoneType is the class of None, for example:

>>> print(type(None))     #Python2
<type 'NoneType'>         #In Python2 the type of None is the 'NoneType' type.

>>> print(type(None))     #Python3
<class 'NoneType'>        #In Python3, the type of None is the 'NoneType' class.

Iterating over a variable that has value None fails:

for a in None:
    print("k")   #TypeError: 'NoneType' object is not iterable

Python methods return NoneType if they don't return a value:

def foo():
    print("k")
a, b = foo()     #TypeError: 'NoneType' object is not iterable

You need to check your looping constructs for NoneType like this:

a = None
print(a is None)              #prints True
print(a is not None)          #prints False
print(a == None)              #prints True
print(a != None)              #prints False
print(isinstance(a, object))  #prints True
print(isinstance(a, str))     #prints False

Guido says to use only is to check for None, because is is more robust for identity checking. Don't use equality operations because those can spit out bubble-up implementationitis of their own. Python's Coding Style Guidelines - PEP-008
NoneTypes are sneaky, and can sneak in from lambdas:

import sys
b = lambda x : sys.stdout.write("k")
for a in b(10):
    pass   #TypeError: 'NoneType' object is not iterable

NoneType is not a valid keyword:

a = NoneType   #NameError: name 'NoneType' is not defined

Concatenation of None and a string:

bar = "something"
foo = None
print foo + bar   #TypeError: cannot concatenate 'str' and 'NoneType' objects

What's going on here?
Python's interpreter converted your code to pyc bytecode. The Python virtual machine processed the bytecode; it encountered a looping construct that said to iterate over a variable containing None. The operation was performed by invoking the __iter__ method on None. None has no __iter__ method defined, so Python's virtual machine tells you what it sees: that NoneType has no __iter__ method.
This is why Python's duck-typing ideology is considered bad. The programmer does something completely reasonable with a variable, and at runtime it gets contaminated by None; the Python virtual machine attempts to soldier on, and pukes up a bunch of unrelated nonsense all over the carpet.
Java and C++ don't have these problems, because such a program wouldn't be allowed to compile since you haven't defined what to do when None occurs. Python gives the programmer lots of rope to hang himself by allowing you to do lots of things that cannot be expected to work under exceptional circumstances. Python is a yes-man, saying yes-sir when it ought to be stopping you from harming yourself, like Java and C++ do.

A: Code: for row in data:
Error message: TypeError: 'NoneType' object is not iterable
Which object is it complaining about? Choice of two, row and data.
In for row in data, which needs to be iterable? Only data.
What's the problem with data? Its type is NoneType. Only None has type NoneType. So data is None.
You can verify this in an IDE, or by inserting e.g. print "data is", repr(data) before the for statement, and re-running.
Think about what you need to do next:
How should "no data" be represented? Do we write an empty file? Do we raise an exception or log a warning or keep silent?
A: Another thing that can produce this error is when you are setting something equal to the return from a function, but forgot to actually return anything. Example:

def foo(dict_of_dicts):
    for key, row in dict_of_dicts.items():
        for key, inner_row in row.items():
            Do SomeThing
    #Whoops, forgot to return all my stuff

return1, return2, return3 = foo(dict_of_dicts)

This is a little bit of a tough error to spot, because the error can also be produced if the row variable happens to be None on one of the iterations. The way to spot it is that the trace fails on the last line and not inside the function.
If you're only returning one variable from a function, I am not sure whether the error would be produced... I suspect the error "'NoneType' object is not iterable in Python" in this case is actually implying "Hey, I'm trying to iterate over the return values to assign them to these three variables in order, but I'm only getting None to iterate over".

A: It means that the data variable is passing None (which is of type NoneType), its equivalent of nothing. So it can't be iterated over as a list, as you are trying to do.

A: You're calling write_file with arguments like this:

write_file(foo, bar)

But you haven't defined 'foo' correctly, or you have a typo in your code so that it's creating a new empty variable and passing it in.

A: For me it was a case of having my Groovy hat on instead of the Python 3 one.
Forgot the return keyword at the end of a def function.
Had not been coding Python 3 in earnest for a couple of months. Was thinking the last statement evaluated in the routine was being returned, per the Groovy (or Rust) way.
Took a few iterations — looking at the stack trace, inserting a try: ... except TypeError: ... block, and debugging/stepping through the code — to figure out what was wrong.
The error message certainly did not make the mistake jump out at me.

A: It also depends on the Python version you are using. Python 3.6 and Python 3.8 throw different error messages, as follows, which was the issue in my case:

Python 3.6

(a,b) = None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object is not iterable

Python 3.8

(a,b) = None
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: cannot unpack non-iterable NoneType object

A: This means that the value of data is None.

A: This can also happen because a for loop is used while the result is just one value, not a set of values.

pola.py

@app.route("/search")
def search():
    title = 'search'
    search_name = request.form.get('search')

    search_item = User.query.filter_by(id=search_name).first()

    return render_template('search.html', title=title, search_item=search_item)

search.html (wrong)

{% for p in search %}
{{ p }}

search.html (correct)

<td>{{ search_item }}</td>

A: I had this error with pandas in Databricks. The solution for this error was to install the library on the cluster.

A: It means data is None, which is not an iterable. Adding an or []* prevents the exception and doesn't print anything:

for row in data or []:  # no more TypeError!
    print(row)

* credits to some earlier comments; please beware that raising an exception may be a desired behavior too and/or an indicator of improper data setting.
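Pulling the recurring advice in these answers into one runnable sketch (the function and data here are made up for illustration): return the value explicitly, and guard with an identity check before iterating:

def load_rows():
    rows = [[1, "a"], [2, "b"]]
    return rows                 # omit this return and the caller receives None

data = load_rows()
if data is None:                # identity check with `is`, per the answers above
    data = []                   # or raise / log, whatever fits the caller
for row in data:
    print(row)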
TypeError: 'NoneType' object is not iterable in Python
What does TypeError: 'NoneType' object is not iterable mean? Example: for row in data: # Gives TypeError! print(row)
[ "It means the value of data is None.\n", "Explanation of error: 'NoneType' object is not iterable\nIn python2, NoneType is the type of None. In Python3 NoneType is the class of None, for example:\n>>> print(type(None)) #Python2\n<type 'NoneType'> #In Python2 the type of None is the 'NoneType' type.\n\n>>> print(type(None)) #Python3\n<class 'NoneType'> #In Python3, the type of None is the 'NoneType' class.\n\nIterating over a variable that has value None fails:\nfor a in None:\n print(\"k\") #TypeError: 'NoneType' object is not iterable\n\nPython methods return NoneType if they don't return a value:\ndef foo():\n print(\"k\")\na, b = foo() #TypeError: 'NoneType' object is not iterable\n\nYou need to check your looping constructs for NoneType like this:\na = None \nprint(a is None) #prints True\nprint(a is not None) #prints False\nprint(a == None) #prints True\nprint(a != None) #prints False\nprint(isinstance(a, object)) #prints True\nprint(isinstance(a, str)) #prints False\n\nGuido says only use is to check for None because is is more robust to identity checking. Don't use equality operations because those can spit bubble-up implementationitis of their own. Python's Coding Style Guidelines - PEP-008\nNoneTypes are Sneaky, and can sneak in from lambdas:\nimport sys\nb = lambda x : sys.stdout.write(\"k\") \nfor a in b(10): \n pass #TypeError: 'NoneType' object is not iterable \n\nNoneType is not a valid keyword:\na = NoneType #NameError: name 'NoneType' is not defined\n\nConcatenation of None and a string:\nbar = \"something\"\nfoo = None\nprint foo + bar #TypeError: cannot concatenate 'str' and 'NoneType' objects\n\nWhat's going on here?\nPython's interpreter converted your code to pyc bytecode. The Python virtual machine processed the bytecode, it encountered a looping construct which said iterate over a variable containing None. The operation was performed by invoking the __iter__ method on the None. \nNone has no __iter__ method defined, so Python's virtual machine tells you what it sees: that NoneType has no __iter__ method. \nThis is why Python's duck-typing ideology is considered bad. The programmer does something completely reasonable with a variable and at runtime it gets contaminated by None, the python virtual machine attempts to soldier on, and pukes up a bunch of unrelated nonsense all over the carpet. \nJava or C++ doesn't have these problems because such a program wouldn't be allowed to compile since you haven't defined what to do when None occurs. Python gives the programmer lots of rope to hang himself by allowing you to do lots of things that should cannot be expected to work under exceptional circumstances. Python is a yes-man, saying yes-sir when it out to be stopping you from harming yourself, like Java and C++ does.\n", "Code: for row in data:\nError message: TypeError: 'NoneType' object is not iterable\nWhich object is it complaining about? Choice of two, row and data.\nIn for row in data, which needs to be iterable? Only data.\nWhat's the problem with data? Its type is NoneType. Only None has type NoneType. So data is None.\nYou can verify this in an IDE, or by inserting e.g. print \"data is\", repr(data) before the for statement, and re-running.\nThink about what you need to do next: \nHow should \"no data\" be represented? Do we write an empty file? 
Do we raise an exception or log a warning or keep silent?\n", "Another thing that can produce this error is when you are setting something equal to the return from a function, but forgot to actually return anything.\nExample:\ndef foo(dict_of_dicts):\n for key, row in dict_of_dicts.items():\n for key, inner_row in row.items():\n Do SomeThing\n #Whoops, forgot to return all my stuff\n\nreturn1, return2, return3 = foo(dict_of_dicts)\n\nThis is a little bit of a tough error to spot because the error can also be produced if the row variable happens to be None on one of the iterations. The way to spot it is that the trace fails on the last line and not inside the function.\nIf your only returning one variable from a function, I am not sure if the error would be produced... I suspect error \"'NoneType' object is not iterable in Python\" in this case is actually implying \"Hey, I'm trying to iterate over the return values to assign them to these three variables in order but I'm only getting None to iterate over\"\n", "It means that the data variable is passing None (which is type NoneType), its equivalent for nothing. So it can't be iterable as a list, as you are trying to do.\n", "You're calling write_file with arguments like this:\nwrite_file(foo, bar)\n\nBut you haven't defined 'foo' correctly, or you have a typo in your code so that it's creating a new empty variable and passing it in.\n", "For me it was a case of having my Groovy hat on instead of the Python 3 one.\nForgot the return keyword at the end of a def function.\nHad not been coding Python 3 in earnest for a couple of months. Was thinking last statement evaluated in routine was being returned per the Groovy (or Rust) way.\nTook a few iterations, looking at the stack trace, inserting try: ... except TypeError: ... block debugging/stepping thru code to figure out what was wrong.\nThe solution for the message certainly did not make the error jump out at me.\n", "It also depends on Python version you are using. Seeing different error message thrown in python 3.6 and python 3.8 as following which was the issue in my case\n\nPython 3.6\n\n\n(a,b) = None\nTraceback (most recent call last):\nFile \"<stdin>\", line 1, in <module>\nTypeError: 'NoneType' object is not iterable\n\n\n\nPython 3.8\n\n\n(a,b) = None\nTraceback (most recent call last):\n File \"<stdin>\", line 1, in <module>\nTypeError: cannot unpack non-iterable NoneType object\n\n\n", "This means that the value of data is None.\n", "because using for loop while the result it is just one value not a set of value\n\npola.py\n\[email protected](\"/search\")\ndef search():\n title='search'\n search_name = request.form.get('search')\n \n search_item = User.query.filter_by(id=search_name).first()\n\n return render_template('search.html', title=title, search_item=search_item ) \n\n\nsearch.html (wrong)\n\n{% for p in search %}\n{{ p }}\n\n\n\nsearch.html (correct)\n\n\n<td>{{ search_item }}</td>\n\n", "i had this error with pandas in databricks.\nThe solution for this error was install the library in the cluster\nenter image description here\n", "It means data is None, which is not an iterable. Adding an or []* prevents the exception and doesn't print anything:\nfor row in data or []: # no more TypeError!\n print(row)\n\n* credits to some earlier comments; please beware that raising an exception may be a desired behavior too and/or an indicator of improper data setting.\n" ]
[ 261, 114, 63, 20, 8, 7, 2, 1, 0, 0, 0, 0 ]
[ "Just continue the loop when you get None Exception,\nexample:\n a = None\n if a is None:\n continue\n else:\n print(\"do something\")\n\nThis can be any iterable coming from DB or an excel file.\n" ]
[ -3 ]
[ "nonetype", "python" ]
stackoverflow_0003887381_nonetype_python.txt
Q: how to make a portrait image responsive and keep ratio?
I have a portrait picture (original size: width 2500px, height 4500px) and I want to place this image in a modal div. The modal has width: 100vw and height: 100vh. This is the structure:

<div class="modal">
  <div class="image-wrapper">
    <img src="..." />
  </div>
</div>

My question is: how can I make this portrait image responsive in both width and height while also keeping the aspect ratio? What should the CSS look like? What should the width and the height be for image-wrapper and also for the img (image)?

A:

.modal {
  width: 200px;
  height: 300px;
  border: 1px solid red;
}

.modal--horizont {
  width: 300px;
  height: 200px;
}

.image-wrapper {
  width: inherit;
  height: inherit;
  display: flex;
  justify-content: center;
  align-items: center;
}

img {
  width: 100%;
  height: 100%;
  display: block;
  object-fit: contain;
}

<div class="modal">
  <div class="image-wrapper">
    <img src="https://picsum.photos/450/250" />
  </div>
</div>

<div class="modal modal--horizont">
  <div class="image-wrapper">
    <img src="https://picsum.photos/450/250" />
  </div>
</div>
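For the exact full-screen case in the question (a 100vw/100vh modal), here is a minimal variant of the answer's idea: letting max-width/max-height cap the image instead of forcing 100%/100% keeps the 2500x4500 aspect ratio even without object-fit. The class names are the question's own:

.modal { width: 100vw; height: 100vh; }

.image-wrapper {
  width: 100%;
  height: 100%;
  display: flex;
  align-items: center;     /* center the image vertically */
  justify-content: center; /* ...and horizontally */
}

.image-wrapper img {
  max-width: 100%;   /* never wider than the wrapper */
  max-height: 100%;  /* never taller than the wrapper */
  width: auto;
  height: auto;      /* auto on both axes preserves the intrinsic ratio */
}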
how to make a portrait image responsive and keep ratio?
I have a portrait picture (original size: width 2500px, height 4500px) and I want to place this image in a modal div. The modal has width: 100vw and height: 100vh. This is the structure:

<div class="modal">
  <div class="image-wrapper">
    <img src="..." />
  </div>
</div>

My question is: how can I make this portrait image responsive in both width and height while also keeping the aspect ratio? What should the CSS look like? What should the width and the height be for image-wrapper and also for the img (image)?
[ "\n\n.modal {\n width: 200px;\n height: 300px;\n border: 1px solid red;\n}\n\n.modal--horizont {\n width: 300px;\n height: 200px;\n}\n\n.image-wrapper {\n width: inherit;\n height: inherit;\n display: flex;\n justify-content: center;\n align-items: center;\n}\n\nimg {\n width: 100%;\n height: 100%;\n display: block;\n object-fit: contain;\n}\n<div class=\"modal\">\n <div class=\"image-wrapper\">\n <img src=\"https://picsum.photos/450/250\" />\n </div>\n</div>\n\n<div class=\"modal modal--horizont\">\n <div class=\"image-wrapper\">\n <img src=\"https://picsum.photos/450/250\" />\n </div>\n</div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "image", "responsive" ]
stackoverflow_0074660314_css_image_responsive.txt
Q: how to remove buttons off of a message discord

@client.command()
async def test(ctx):
    message = await ctx.send("**TEST**\n**IS THIS WORKING?**")
    await asyncio.sleep(3)
    button = Button(style=discord.ButtonStyle.green, emoji="◀", custom_id="button")
    view = View()
    view.add_item(button)

    async def button_callback(interaction):
        await message.edit(content="**edited message and removed buttons!**")

    button.callback = button_callback
    await ctx.send("test", view=view)

So far, when it edits the message to "edited message and removed buttons", the buttons aren't removed. How can I make it so the buttons are removed when it is edited to that?

A: Make sure you set view=None when you edit your message (or if you only want a few buttons removed, create a new view without those buttons and set view to that).
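Worth noting about the snippet above: the buttons live on the second message (the one sent with view=view), while the callback edits message, the first one — so even a correct view=None on message would not touch the buttons. A sketch of a callback that edits the very message the button is attached to, assuming a pycord/discord.py 2.x-style interaction response API:

async def button_callback(interaction):
    # edit the message that carries the button, and drop its view
    await interaction.response.edit_message(
        content="**edited message and removed buttons!**",
        view=None,
    )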
how to remove buttons off of a message discord
@client.command()
async def test(ctx):
    message = await ctx.send("**TEST**\n**IS THIS WORKING?**")
    await asyncio.sleep(3)
    button = Button(style=discord.ButtonStyle.green, emoji="◀", custom_id="button")
    view = View()
    view.add_item(button)

    async def button_callback(interaction):
        await message.edit(content="**edited message and removed buttons!**")

    button.callback = button_callback
    await ctx.send("test", view=view)

So far, when it edits the message to "edited message and removed buttons", the buttons aren't removed. How can I make it so the buttons are removed when it is edited to that?
[ "Make sure you set view=None when you edit your message (or if you only want a few buttons removed, create a new view without those buttons and set view to that).\n" ]
[ 0 ]
[]
[]
[ "discord", "pycord", "python" ]
stackoverflow_0074612394_discord_pycord_python.txt
Q: Default Branch on Payments and Applications (AR302000)
I want to display the 'Default Branch' (screenshot 1) from the Customers' (AR303000) Shipping tab in the Payments and Applications' (AR302000) 'LOAD DOCUMENTS' dialog box's 'Company Branch' (screenshot 2).
Screenshot 1: Customers Default Branch
Screenshot 2: Load Document Company Branch (Payments and Applications)
The code I have does not seem to be displaying the Default Branch from the Customers Shipping tab in the Payments and Applications Company Branch.
Code snippet:

public class ARPaymentEntry_Extension : PXGraphExtension<PX.Objects.AR.ARPaymentEntry>
{
    #region Event Handlers

    protected void LoadOptions_RowSelected(PXCache cache, PXRowSelectedEventArgs e)
    {
        var row = (LoadOptions)e.Row;

        if (row == null)
        {
            return;
        }

        ARPayment aRPayment = Base.Document.Current;

        if (aRPayment == null)
        {
            return;
        }

        Customer customer = PXSelect<Customer,
            Where<Customer.bAccountID, Equal<Required<ARPayment.customerID>>>>
            .Select(Base, aRPayment.CustomerID);

        if (customer == null)
        {
            return;
        }

        Location location = PXSelect<Location,
            Where<Location.bAccountID, Equal<Required<ARPayment.customerID>>,
                And<Location.locationID, Equal<Required<Customer.defLocationID>>>>>
            .Select(Base, aRPayment.CustomerID, customer.DefLocationID);

        if (location == null)
        {
            return;
        }

        row.BranchID = location.CBranchID;
    }

    [PXMergeAttributes(Method = MergeMethod.Replace)]
    // [OrganizationTree(typeof(organizationID), typeof(branchID), onlyActive: true)]
    protected virtual void LoadOptions_OrgBAccountID_CacheAttached(PXCache cache)
    {
    }

A: I would move your code into FieldDefaulting for the LoadOptions, rather than RowSelected. (Note that in this signature the row and the new value come from the args parameter, not e.)

protected virtual void LoadOptions_OrgBAccountID_FieldDefaulting(PXCache cache, PXFieldDefaultingEventArgs args, PXFieldDefaulting del)
{
    var row = (LoadOptions)args.Row;

    if (row == null)
    {
        return;
    }

    ARPayment aRPayment = Base.Document.Current;

    if (aRPayment == null)
    {
        return;
    }

    Customer customer = PXSelect<Customer,
        Where<Customer.bAccountID, Equal<Required<ARPayment.customerID>>>>
        .Select(Base, aRPayment.CustomerID);

    if (customer == null)
    {
        return;
    }

    Location location = PXSelect<Location,
        Where<Location.bAccountID, Equal<Required<ARPayment.customerID>>,
            And<Location.locationID, Equal<Required<Customer.defLocationID>>>>>
        .Select(Base, aRPayment.CustomerID, customer.DefLocationID);

    if (location == null)
    {
        return;
    }

    //set the default with the customer's branch
    args.NewValue = location.CBranchID;
}
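As a side note, the two BQL lookups used in both versions could be collapsed into a single join. This is a sketch, untested against a live instance:

Location location = PXSelectJoin<Location,
    InnerJoin<Customer,
        On<Customer.bAccountID, Equal<Location.bAccountID>,
        And<Customer.defLocationID, Equal<Location.locationID>>>>,
    Where<Customer.bAccountID, Equal<Required<ARPayment.customerID>>>>
    .Select(Base, aRPayment.CustomerID);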
Default Branch on Payments and Applications (AR302000)
I want to display the 'Default Branch' (screenshot 1) from the Customers' (AR303000) Shipping tab to the Payment and Applications'(AR302000) 'LOAD DOCUMENTS' dialog box's, 'Company Branch'(screenshot 2) Screenshot 1: Customers Default Branch Screenshot 2: Load Document Company Branch (Payments and Applications) The code I have does not seem to be displaying the Default Branch from the Customers Shipping tab to the Payments and Applications Company Branch. Code snippet: public class ARPaymentEntry_Extension : PXGraphExtension<PX.Objects.AR.ARPaymentEntry> { #region Event Handlers protected void LoadOptions_RowSelected(PXCache cache, PXRowSelectedEventArgs e) { var row = (LoadOptions)e.Row; if (row == null) { return; } ARPayment aRPayment = Base.Document.Current; if (aRPayment == null) { return; } Customer customer = PXSelect<Customer, Where<Customer.bAccountID, Equal<Required<ARPayment.customerID>>>> .Select(Base, aRPayment.CustomerID); if (customer == null) { return; } Location location = PXSelect<Location, Where<Location.bAccountID, Equal<Required<ARPayment.customerID>>, And<Location.locationID, Equal<Required<Customer.defLocationID>>>>> .Select(Base, aRPayment.CustomerID, customer.DefLocationID); if (location == null) { return; } row.BranchID = location.CBranchID; } [PXMergeAttributes(Method = MergeMethod.Replace)] // [OrganizationTree(typeof(organizationID), typeof(branchID), onlyActive: true)] protected virtual void LoadOptions_OrgBAccountID_CacheAttached(PXCache cache) { }
[ "I would move your code into FieldDefaulting for the LoadOptions, rather than row selected.\n protected virtual void LoadOptions_OrgBAccountID_FieldDefaulting(PXCache cache, PXFieldDefaultingEventArgs args, PXFieldDefaulting del)\n {\n var row = (LoadOptions)e.Row;\n\n if (row == null)\n {\n return;\n }\n\n ARPayment aRPayment = Base.Document.Current;\n\n if (aRPayment == null)\n {\n return;\n }\n\n Customer customer = PXSelect<Customer,\n Where<Customer.bAccountID, Equal<Required<ARPayment.customerID>>>>\n .Select(Base, aRPayment.CustomerID);\n\n\n if (customer == null)\n {\n return;\n }\n\n Location location = PXSelect<Location,\n Where<Location.bAccountID, Equal<Required<ARPayment.customerID>>,\n And<Location.locationID, Equal<Required<Customer.defLocationID>>>>>\n .Select(Base, aRPayment.CustomerID, customer.DefLocationID);\n\n if (location == null)\n {\n return;\n }\n\n //set the default with customers branch\n args.NewValue = location.CBranchID;\n }\n\n" ]
[ 0 ]
[]
[]
[ "acumatica", "acumatica_kb" ]
stackoverflow_0074042791_acumatica_acumatica_kb.txt
Q: What I have is a list that was fetched from an api. Support me with react filter What I have is a list that was fetched from an api. This list will be filtered based on the input. But at the first render it will render nothing, unless I press space or add anything to the input. Using React, when I click on the button it filters, but I want the first render to show the whole list before clicking
import Button from "react-bootstrap/Button";
import Card from "react-bootstrap/Card";
import Container from "react-bootstrap/Container";
import Row from "react-bootstrap/Row";
import Col from "react-bootstrap/Col";
import { useEffect, useState } from "react";

function News() {
  const [news, setNews] = useState([]);
  const [listing, setLists] = useState([]);
  const fetchDataList = () => {
    return fetch("https://api.npoint.io/d275425a434e02acf2f7")
      .then((response) => response.json())
      .then((data) => {
        setLists(data.News);
        // data.News[0].id = 0;
        console.log(listing);
      });
  };
  useEffect(() => {
    fetchDataList();
  }, []);
  const fetchData = () => {
    return fetch("https://api.npoint.io/91298d970c27e9a06518")
      .then((response) => response.json())
      .then((data) => {
        setNews(data.newsCategory);
        console.log(news);
      });
  };
  useEffect(() => {
    fetchData();
  }, []);
  const [filteredCat, setFiltredCat] = useState(null);
  useEffect(() => {
    setFiltredCat(setLists());
  }, []);
  function filteredCategory(typecat) {
    let filteredCategory = listing.filter(
      (type) => type.categoryID === typecat
    );
    return filteredCategory;
  }
  function handleCategory(e) {
    let typeCategory = e.target.id
    typeCategory
      ? setFiltredCat(filteredCategory(typeCategory))
      : filteredCategory(setLists);
  }
  return (
    <>
      <p>Media</p>
      <h2>Top </h2>
      <div>
        {news &&
          news.map((idx) => (
            <Button id={idx.id} variant="secondary" onClick={handleCategory}>
              {idx.id} {idx.name}
            </Button>
          ))}
      </div>
      <Container>
        <Row>
          <Col>
            {filteredCat &&
              filteredCat.map((list) => (
                <Card style={{ width: "18rem" }}>
                  <Card.Img variant="top" src="holder.js/100px180" />
                  <Card.Body>
                    <Card.Title>{list.title}</Card.Title>
                    <Card.Text>{list.description}</Card.Text>
                    <Button variant="primary">{list.categoryID}</Button>
                  </Card.Body>
                </Card>
              ))}
          </Col>
        </Row>
      </Container>
    </>
  );
}
export default News;

A: You need to set the category as well when the data comes in,
Below your setNews(data.newsCategory); you can add setFiltredCat(filteredCategory(news[0].id.toString()));
edit:
To be sure the news has been set before setting your initial filter you could also add a hook which is watching for changes to news. Checkout this codesandbox for an example
useEffect(() => {
  if (news.length <= 0) return;

  setFiltredCat(filteredCategory(news[0].id.toString()));
}, [news]);

Edit: to return all listings initially you could use the same concept
useEffect(() => {
    setFiltredCat(listing);
  }, [listing]);
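Pulling the accepted fix together, a minimal sketch of just the hooks involved (state and function names reused from the question; the broken setFiltredCat(setLists()) effect is dropped):

const [filteredCat, setFiltredCat] = useState(null);

// show the whole list as soon as it arrives, before any button click
useEffect(() => {
  setFiltredCat(listing);
}, [listing]);

// a category button still narrows the list; no id falls back to the full list
function handleCategory(e) {
  const typeCategory = e.target.id;
  setFiltredCat(typeCategory ? filteredCategory(typeCategory) : listing);
}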
What I have is a list that was fetched from an api. Support me with react filter
What I have is a list that was fetched from an api. This list will be filtered based on the input. But at the first render it will render nothing, unless I press space or add anything to the input. Using React, when I click on the button it filters, but I want the first render to show the whole list before clicking
import Button from "react-bootstrap/Button";
import Card from "react-bootstrap/Card";
import Container from "react-bootstrap/Container";
import Row from "react-bootstrap/Row";
import Col from "react-bootstrap/Col";
import { useEffect, useState } from "react";

function News() {
  const [news, setNews] = useState([]);
  const [listing, setLists] = useState([]);
  const fetchDataList = () => {
    return fetch("https://api.npoint.io/d275425a434e02acf2f7")
      .then((response) => response.json())
      .then((data) => {
        setLists(data.News);
        // data.News[0].id = 0;
        console.log(listing);
      });
  };
  useEffect(() => {
    fetchDataList();
  }, []);
  const fetchData = () => {
    return fetch("https://api.npoint.io/91298d970c27e9a06518")
      .then((response) => response.json())
      .then((data) => {
        setNews(data.newsCategory);
        console.log(news);
      });
  };
  useEffect(() => {
    fetchData();
  }, []);
  const [filteredCat, setFiltredCat] = useState(null);
  useEffect(() => {
    setFiltredCat(setLists());
  }, []);
  function filteredCategory(typecat) {
    let filteredCategory = listing.filter(
      (type) => type.categoryID === typecat
    );
    return filteredCategory;
  }
  function handleCategory(e) {
    let typeCategory = e.target.id
    typeCategory
      ? setFiltredCat(filteredCategory(typeCategory))
      : filteredCategory(setLists);
  }
  return (
    <>
      <p>Media</p>
      <h2>Top </h2>
      <div>
        {news &&
          news.map((idx) => (
            <Button id={idx.id} variant="secondary" onClick={handleCategory}>
              {idx.id} {idx.name}
            </Button>
          ))}
      </div>
      <Container>
        <Row>
          <Col>
            {filteredCat &&
              filteredCat.map((list) => (
                <Card style={{ width: "18rem" }}>
                  <Card.Img variant="top" src="holder.js/100px180" />
                  <Card.Body>
                    <Card.Title>{list.title}</Card.Title>
                    <Card.Text>{list.description}</Card.Text>
                    <Button variant="primary">{list.categoryID}</Button>
                  </Card.Body>
                </Card>
              ))}
          </Col>
        </Row>
      </Container>
    </>
  );
}
export default News;
[ "You need to set the category as well when the data comes in,\nBelow your setNews(data.newsCategory); you can add setFiltredCat(filteredCategory(news[0].id.toString()));\nedit:\nTo be sure the news has been set before setting your initial filter you could also add a hook which is watching for changes to news. Checkout this codesandbox for an example\nuseEffect(() => {\n if (news.length <= 0) return;\n\n setFiltredCat(filteredCategory(news[0].id.toString()));\n}, [news]);\n\nEdit: to return all listings initially you could use the same concept\nuseEffect(() => {\n setFiltredCat(listing);\n }, [listing]);\n\n" ]
[ 0 ]
[]
[]
[ "react_hooks", "reactjs" ]
stackoverflow_0074660461_react_hooks_reactjs.txt
Q: In Python why is my "for entry in csv_compare:" loop iterating only once and getting stuck on the last input I'm trying to compare 2 csv files and then put the common entries in a 3rd csv to write to file. For some reason the outer for row in csv_input loop iterates over every row, but the inner for entry in csv_compare loop only runs through once and then stays stuck on the last entry. I want to compare every row from csv_input with every entry from csv_compare.
import csv
finalCSV = {}
with open('input.csv', newline='') as csvfile, open('compare.csv', newline='') as keyCSVFile, open('output.csv', 'w' ,newline='') as OutputCSV:
    csv_input = csv.reader(csvfile)
    csv_compare = csv.reader(keyCSVFile)
    csv_output = csv.writer(OutputCSV)
    csv_output.writerow(next(csv_input))

    for row in csv_input:
        for entry in csv_compare:
            print(row[0] + ' ' + entry[0])
            if row[0] == entry[0]:
                csv_output.writerow(row)
                break

print('wait...')

A: When you break the inner loop and start the next iteration of the outer loop, csv_compare doesn't reset to the beginning. It picks up where you left off. Once you have exhausted the iterator, that's it.
You would need to reset the iterator at the top of each iteration of the outer loop, which is most easily done by simply opening the file there.
with open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV:
    csv_input = csv.reader(csvfile)
    csv_output = csv.writer(OutputCSV)
    csv_output.writerow(next(csv_input))

    for row in csv_input:
        with open('compare.csv', newline='') as keyCSVFile:
            csv_compare = csv.reader(keyCSVFile)
            for entry in csv_compare:
                if row[0] == entry[0]:
                    csv_output.writerow(row)
                    break

A: I suggest to read the first column from csv_compare to list or a set and then use only single for-loop:
import csv

finalCSV = {}
with open("input.csv", newline="") as csvfile, open(
    "compare.csv", newline=""
) as keyCSVFile, open("output.csv", "w", newline="") as OutputCSV:
    csv_input = csv.reader(csvfile)
    csv_compare = csv.reader(keyCSVFile)
    csv_output = csv.writer(OutputCSV)
    csv_output.writerow(next(csv_input))

    compare = {entry[0] for entry in csv_compare}  # <--- read csv_compare to a set

    for row in csv_input:
        if row[0] in compare:  # <--- use `in` operator
            csv_output.writerow(row)

A: You could skip the inner loop completely. You add rows from input.csv when the first column matches any of the first column values in compare.csv. So put those values in a set for easy lookup.
import csv

with open('compare.csv', newline='') as keyCSVFile:
    key_set = {row[0] for row in csv.reader(keyCSVFile)}

with open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV:
    csv_input = csv.reader(csvfile)
    csv_output = csv.writer(OutputCSV)
    csv_output.writerow(next(csv_input))
    csv_output.writerows(row for row in csv_input if row[0] in key_set)

del key_set
print('wait...')
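One more option the answers don't mention: rather than reopening compare.csv on every outer pass, the underlying file object can be rewound, since csv.reader just pulls lines from it. A small sketch using the same handles as the question:

for row in csv_input:
    keyCSVFile.seek(0)                    # rewind the compare file
    for entry in csv.reader(keyCSVFile):  # fresh reader over the rewound file
        if row[0] == entry[0]:
            csv_output.writerow(row)
            break

The set-based answers remain the faster choice, since this still re-reads the whole file once per input row.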
In Python why is my "for entry in csv_compare:" loop iterating only once and getting stuck on the last input
I'm trying to compare 2 csv files and then put the common entries in a 3rd csv to write to file. For some reason the outer for row in csv_input loop iterates over every row, but the inner for entry in csv_compare loop only runs through once and then stays stuck on the last entry. I want to compare every row from csv_input with every entry from csv_compare.
import csv
finalCSV = {}
with open('input.csv', newline='') as csvfile, open('compare.csv', newline='') as keyCSVFile, open('output.csv', 'w' ,newline='') as OutputCSV:
    csv_input = csv.reader(csvfile)
    csv_compare = csv.reader(keyCSVFile)
    csv_output = csv.writer(OutputCSV)
    csv_output.writerow(next(csv_input))

    for row in csv_input:
        for entry in csv_compare:
            print(row[0] + ' ' + entry[0])
            if row[0] == entry[0]:
                csv_output.writerow(row)
                break

print('wait...')
[ "When you break the inner loop and start the next iteration of the outer loop, csv_compare doesn't reset to the beginning. It picks up where you left off. Once you have exhausted the iterator, that's it.\nYou would need to reset the iterator at the top of each iteration of the outer loop, which is most easily done by simply opening the file there.\nwith open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV:\n csv_input = csv.reader(csvfile)\n csv_output = csv.writer(OutputCSV)\n csv_output.writerow(next(csv_input))\n\n for row in csv_input:\n with open('compare.csv', newline='') as keyCSVFile:\n csv_compare = csv.reader(keyCSVFile)\n for entry in csv_compare:\n if row[0] == entry[0]:\n csv_output.writerow(row)\n break\n\n", "I suggest to read the first column from csv_compare to list or a set and then use only single for-loop:\nimport csv\n\nfinalCSV = {}\nwith open(\"input.csv\", newline=\"\") as csvfile, open(\n \"compare.csv\", newline=\"\"\n) as keyCSVFile, open(\"output.csv\", \"w\", newline=\"\") as OutputCSV:\n csv_input = csv.reader(csvfile)\n csv_compare = csv.reader(keyCSVFile)\n csv_output = csv.writer(OutputCSV)\n csv_output.writerow(next(csv_input))\n\n compare = {entry[0] for entry in csv_compare} # <--- read csv_compare to a set\n\n for row in csv_input:\n if row[0] in compare: # <--- use `in` operator\n csv_output.writerow(row)\n\n", "You could skip the inner loop completely. You add rows from input.csv when the first column matches any of the first column values in compare.csv. So put those values in a set for easy lookup.\nimport csv\n\nwith open('compare.csv', newline='') as keyCSVFile:\n key_set = {row[0] for row in csv.reader(keyCSVFile)}\n\nwith open('input.csv', newline='') as csvfile, open('output.csv', 'w' ,newline='') as OutputCSV:\n csv_input = csv.reader(csvfile)\n csv_output = csv.writer(OutputCSV)\n csv_output.writerow(next(csv_input))\n csv_output.writerows(row for row in csv_input if row[0] in key_set)\n\ndel key_set\nprint('wait...')\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "csv", "for_loop", "python" ]
stackoverflow_0074660417_csv_for_loop_python.txt
Q: How to use a library created using Bazel I'm relatively new to Cpp and I was trying to build a simple app for a quant project. I've managed to use Bazel to convert a chunk of my code to a library residing in the bazel-bin directory in my repository as a .lib file. However, I'm not able to figure out how to use this library in my main.cpp which is outside the library I created. Could anyone help with this? Thanks! A: I'm not sure if I understood your question correctly, but I think, there is a .lib file that you want to use it (technically speaking, you need to link that .lib file to another app, or accessing in the main.cpp). Ok, it depends on what build-system you use. for MSVC: Add the full path of the .lib file to Project Property Page -> Linker -> Input -> Additional Dependencies. and don't forget to add the path to .h files (include path) to Project Property Page -> C/C++ -> General -> Additional Include Directories It's the same for other build systems, you have to link it to the final built item. for gcc there is a -L flag for the lib and a -h flag to point to the include directories. in CMake: include_directories( path/to/include/directory ) link_directories( path/to/directory/that/lib/lives ) add_executable(main main.cpp) target_link_libraries(main lib_file_name) if the lib file name is lib_file_name.lib you should not include .lib (just for cmake)
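For completeness on the Bazel side: within a workspace, the idiomatic way to consume a Bazel-built C++ library is a deps edge between targets, rather than hand-linking the .lib from bazel-bin. A hypothetical BUILD sketch (target and file names are made up for illustration):

cc_library(
    name = "quant_lib",
    srcs = ["quant_lib.cpp"],
    hdrs = ["quant_lib.h"],
)

cc_binary(
    name = "main",
    srcs = ["main.cpp"],
    deps = [":quant_lib"],
)

bazel build //:main then handles the include paths and linking automatically; the manual flags described in the answer are only needed when consuming the library from outside Bazel.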
How to use a library created using Bazel
I'm relatively new to Cpp and I was trying to build a simple app for a quant project. I've managed to use Bazel to convert a chunk of my code to a library residing in the bazel-bin directory in my repository as a .lib file. However, I'm not able to figure out how to use this library in my main.cpp which is outside the library I created. Could anyone help with this? Thanks!
[ "I'm not sure if I understood your question correctly, but I think, there is a .lib file that you want to use it (technically speaking, you need to link that .lib file to another app, or accessing in the main.cpp).\nOk, it depends on what build-system you use.\nfor MSVC:\nAdd the full path of the .lib file to Project Property Page -> Linker -> Input -> Additional Dependencies.\nand don't forget to add the path to .h files (include path) to Project Property Page -> C/C++ -> General -> Additional Include Directories\nIt's the same for other build systems, you have to link it to the final built item.\nfor gcc there is a -L flag for the lib and a -h flag to point to the include directories.\nin CMake:\ninclude_directories(\n path/to/include/directory\n)\nlink_directories(\n path/to/directory/that/lib/lives\n)\n\nadd_executable(main main.cpp)\n\ntarget_link_libraries(main lib_file_name)\n\nif the lib file name is lib_file_name.lib you should not include .lib (just for cmake)\n" ]
[ 0 ]
[]
[]
[ "bazel", "c++", "shared_libraries" ]
stackoverflow_0074660477_bazel_c++_shared_libraries.txt
Q: XPATH ORACLE how to get values from parent node My query ought to transform XML into a table (data set):
with UFD_data as (select xmltype(
'<Document>
  <Lines>
    <Line>
      <LineNumber>1</LineNumber>
      <Fields>
        <Field>
          <FieldName>A1</FieldName>
          <FieldType>B1</FieldType>
          <FieldValue>C1</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>2</LineNumber>
      <Fields>
        <Field>
          <FieldName>A2</FieldName>
          <FieldType>B2</FieldType>
          <FieldValue>C2</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>3</LineNumber>
    </Line>
  </Lines>
</Document>
') xml_data from dual)
select xml_data, x.*
from UFD_data sd
     cross join xmltable('/Document/Lines/Line' passing sd.xml_data
                 columns
                 "LineNumber" varchar2(20) path 'LineNumber',
                 "FieldName" varchar2(255) path 'Fields/Field/FieldName',
                 "FieldType" varchar2(255) path 'Fields/Field/FieldType',
                 "FieldValue" varchar2(4000) path 'Fields/Field/FieldValue'
                 ) x

but there is a problem when more than one group with fields is added:
with UFD_data as (select xmltype(
'<Document>
  <Lines>
    <Line>
      <LineNumber>1</LineNumber>
      <Fields>
        <Field>
          <FieldName>A1</FieldName>
          <FieldType>B1</FieldType>
          <FieldValue>C1</FieldValue>
        </Field>
        <Field>
          <FieldName>D1</FieldName>
          <FieldType>E1</FieldType>
          <FieldValue>F1</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>2</LineNumber>
      <Fields>
        <Field>
          <FieldName>A2</FieldName>
          <FieldType>B2</FieldType>
          <FieldValue>C2</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>3</LineNumber>
    </Line>
  </Lines>
</Document>
') xml_data from dual)
select xml_data, x.*
from UFD_data sd
     cross join xmltable('/Document/Lines/Line' passing sd.xml_data
                 columns
                 "LineNumber" varchar2(20) path 'LineNumber',
                 "FieldName" varchar2(255) path 'Fields/Field/FieldName',
                 "FieldType" varchar2(255) path 'Fields/Field/FieldType',
                 "FieldValue" varchar2(4000) path 'Fields/Field/FieldValue'
                 ) x

The error returned: ORA-19279: XPTY0004 - XQuery dynamic TYPE mismatch: expected singleton SEQUENCE
How to change the XPath expression to get results like below:
LineNumber FieldName FieldType FieldValue
1 A1 B2 C1
1 D1 E2 F1
2 A2 B2 C2
3

A: Use two XMLTABLEs, passing Fields from the first to the second:
SELECT l."LineNumber",
       f.*
FROM UFD_data sd
     CROSS JOIN XMLTABLE(
       '/Document/Lines/Line'
       PASSING sd.xml_data
       COLUMNS
         "LineNumber" varchar2(20) PATH 'LineNumber',
         Fields XMLTYPE PATH 'Fields'
     ) l
     LEFT OUTER JOIN XMLTABLE(
       '/Fields/Field'
       PASSING l.fields
       COLUMNS
         "FieldName" varchar2(255) PATH 'FieldName',
         "FieldType" varchar2(255) PATH 'FieldType',
         "FieldValue" varchar2(4000) PATH 'FieldValue'
     ) f
     ON (1 = 1)

Which outputs:
LineNumber FieldName FieldType FieldValue
1 A1 B1 C1
1 D1 E1 F1
2 A2 B2 C2
3 null null null

fiddle
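A possible alternative, offered tentatively: a single XMLTABLE over the Field nodes can reach back to the parent Line with the XPath parent axis, at the cost of dropping lines that have no Fields at all (so line 3 would vanish from the output):

select x.*
from UFD_data sd
     cross join xmltable('/Document/Lines/Line/Fields/Field' passing sd.xml_data
                 columns
                 "LineNumber" varchar2(20) path './../../LineNumber',
                 "FieldName" varchar2(255) path 'FieldName',
                 "FieldType" varchar2(255) path 'FieldType',
                 "FieldValue" varchar2(4000) path 'FieldValue'
                 ) x

The accepted two-XMLTABLE answer is the right choice whenever the empty line must be preserved.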
XPATH ORACLE how to get values from parent node
My query ought to transform XML into a table (data set):
with UFD_data as (select xmltype(
'<Document>
  <Lines>
    <Line>
      <LineNumber>1</LineNumber>
      <Fields>
        <Field>
          <FieldName>A1</FieldName>
          <FieldType>B1</FieldType>
          <FieldValue>C1</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>2</LineNumber>
      <Fields>
        <Field>
          <FieldName>A2</FieldName>
          <FieldType>B2</FieldType>
          <FieldValue>C2</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>3</LineNumber>
    </Line>
  </Lines>
</Document>
') xml_data from dual)
select xml_data, x.*
from UFD_data sd
     cross join xmltable('/Document/Lines/Line' passing sd.xml_data
                 columns
                 "LineNumber" varchar2(20) path 'LineNumber',
                 "FieldName" varchar2(255) path 'Fields/Field/FieldName',
                 "FieldType" varchar2(255) path 'Fields/Field/FieldType',
                 "FieldValue" varchar2(4000) path 'Fields/Field/FieldValue'
                 ) x

but there is a problem when more than one group with fields is added:
with UFD_data as (select xmltype(
'<Document>
  <Lines>
    <Line>
      <LineNumber>1</LineNumber>
      <Fields>
        <Field>
          <FieldName>A1</FieldName>
          <FieldType>B1</FieldType>
          <FieldValue>C1</FieldValue>
        </Field>
        <Field>
          <FieldName>D1</FieldName>
          <FieldType>E1</FieldType>
          <FieldValue>F1</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>2</LineNumber>
      <Fields>
        <Field>
          <FieldName>A2</FieldName>
          <FieldType>B2</FieldType>
          <FieldValue>C2</FieldValue>
        </Field>
      </Fields>
    </Line>
    <Line>
      <LineNumber>3</LineNumber>
    </Line>
  </Lines>
</Document>
') xml_data from dual)
select xml_data, x.*
from UFD_data sd
     cross join xmltable('/Document/Lines/Line' passing sd.xml_data
                 columns
                 "LineNumber" varchar2(20) path 'LineNumber',
                 "FieldName" varchar2(255) path 'Fields/Field/FieldName',
                 "FieldType" varchar2(255) path 'Fields/Field/FieldType',
                 "FieldValue" varchar2(4000) path 'Fields/Field/FieldValue'
                 ) x

The error returned: ORA-19279: XPTY0004 - XQuery dynamic TYPE mismatch: expected singleton SEQUENCE
How to change the XPath expression to get results like below:
LineNumber FieldName FieldType FieldValue
1 A1 B2 C1
1 D1 E2 F1
2 A2 B2 C2
3
[ "Use two XMLTABLEs, passing Fields from the first to the second:\nSELECT l.\"LineNumber\",\n f.*\nFROM UFD_data sd\n CROSS JOIN XMLTABLE(\n '/Document/Lines/Line'\n PASSING sd.xml_data\n COLUMNS\n \"LineNumber\" varchar2(20) PATH 'LineNumber',\n Fields XMLTYPE PATH 'Fields'\n ) l\n LEFT OUTER JOIN XMLTABLE(\n '/Fields/Field'\n PASSING l.fields\n COLUMNS \n \"FieldName\" varchar2(255) PATH 'FieldName',\n \"FieldType\" varchar2(255) PATH 'FieldType', \n \"FieldValue\" varchar2(4000) PATH 'FieldValue'\n ) f\n ON (1 = 1)\n\nWhich outputs:\n\n\n\n\nLineNumber\nFieldName\nFieldType\nFieldValue\n\n\n\n\n1\nA1\nB1\nC1\n\n\n1\nD1\nE1\nF1\n\n\n2\nA2\nB2\nC2\n\n\n3\nnull\nnull\nnull\n\n\n\n\nfiddle\n" ]
[ 1 ]
[]
[]
[ "oracle", "xpath" ]
stackoverflow_0074660515_oracle_xpath.txt
Q: How to override sshd_config defaults with custom_override file inside /etc/ssh/sshd_config.d folder via puppet I have installed the below version of openssh rpms
openssh-clients-8.0p1-13.el8.x86_64
openssh-8.0p1-13.el8.x86_64
openssh-server-8.0p1-13.el8.x86_64

In the default sshd_config file, I do not see "Include" directive mentioned in it. grep -nr "Include" /etc/ssh/sshd_config returns nothing. Also "/etc/ssh/sshd_config.d" is not created by rpm.
So what I did is created /etc/ssh/sshd_config.d directory and added this "Include /etc/ssh/sshd_config.d/*.conf" in last line of /etc/ssh/sshd_config.
I am using puppet to override the default sshd_config file by setting sshd_config_path parameter in puppet ssh module to "/etc/sshd_config.d/custom_sshd_config.conf". ssh module of puppet is just take a copy of sshd_config file and replacing the lines as per puppet configurations.
With this I face issues like having conflicting & duplicate values for many sshd_config configurations. One such example:
sshd_config:
HostKey /etc/ssh/ssh_host_rsa_key # duplicate
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
Subsystem sftp /usr/libexec/openssh/sftp-server

sshd_config.d/custom_sshd_config.conf:
HostKey /etc/ssh/ssh_host_rsa_key # duplicate (gets added by default puppet ssh module itself)
Subsystem sftp /usr/libexec/openssh/sftp-server # When Subsystem in defined in both the places,
# sshd service is failed to start, So I am
# manually commenting this out in one file

sshd -T output has duplicates:
...
hostkey /etc/ssh/ssh_host_rsa_key
hostkey /etc/ssh/ssh_host_ecdsa_key
hostkey /etc/ssh/ssh_host_ed25519_key
hostkey /etc/ssh/ssh_host_rsa_key
...

Adding the Include directive in the top also doesn't solve my problem. I am aware of the sshd man page note first obtained value for each parameter is used in sshd :
Order matters only when conflicting parameters exist, as the first obtained value for each parameter is used
Also directly overriding sshd_config file itself by puppet will solve this duplication/conflict issue, but my need is to maintain a separate custom_sshd_config file without overriding directly the original sshd_config file.
What I expect is "Include directive file should behave like overrides of default sshd_config". Is there any way to automate this in puppet like whenever an sshd configuration is overridden in custom_sshd_config file that needs to be commented in default sshd_config so that it will be overridden in real.
It would be really helpful if someone helps me out with this issue. Thanks in advance!!

A: In the default sshd_config file, I do not see "Include" directive mentioned in it. grep -nr "Include" /etc/ssh/sshd_config returns nothing. Also "/etc/ssh/sshd_config.d" is not created by rpm.
I don't find that particularly surprising. The logical contents of sshd_config are order- and context-sensitive, so although there is an Include directive available, using it to provide for generic drop-in sshd configuration files doesn't work very well. I could see a more targeted approach involving drop-in files, perhaps, but not what you're actually trying to do.
Nevertheless, ...
what I did is created /etc/ssh/sshd_config.d directory and added this "Include /etc/ssh/sshd_config.d/*.conf" in last line of /etc/ssh/sshd_config.
... sure, you can do that if you want. But this ...
I am using puppet to override the default sshd_config file by setting sshd_config_path parameter in puppet ssh module to "/etc/sshd_config.d/custom_sshd_config.conf".
... seems both to misrepresent fact and to be unwise. In the first place, no, you are not overriding the default config file. That suggests that sshd would use the config file you specify instead of /etc/sshd/sshd_config, but clearly that's not happening. What you are doing is simply telling Puppet to manage a different file instead.
In the second place, doing that in the way you are doing it is downright begging for exactly the kind of problem you observe: duplicate / inconsistent configuration. You're managing etc/sshd_config.d/custom_sshd_config.conf as if it were a complete sshd configuration file (because that's what the module does), yet the only way it gets used at all is by being included by the main config file.
It's not clear how you even expect to gain anything from this, when you could simply manage the regular config file directly. You say that you can't do that, but you already are doing it, in the sense that you are placing an Include directive in it that was not provided by the RPM.
What I expect is "Include directive file should behave like overrides of default sshd_config". Is there any way to automate this in puppet like whenever an sshd configuration is overridden in custom_sshd_config file that needs to be commented in default sshd_config so that it will be overridden in real.
The module you're using (see also below) does not do this, and I don't see why it would. If you're going to modify the main config file anyway, then why would you not put the configuration directives you want there? Or if indeed you must not modify that file, then why are you proposing an approach that involves modifying it (further)?
One way to move forward would be to indeed change which file sshd uses for its main config file. You could do that on EL8 by managing sshd's systemd unit file to add an appropriate -f option to the sshd command line it uses.
Or if you're ok with modifying /etc/ssh/sshd_config after all, but you still want drop-in files, then you could consider removing everything but the Include directive from the main config file, and otherwise proceeding as you already are doing.
But if you want exactly what you describe, then you'll need to write your own module to handle it.
Speaking of modules, though, the one you linked to is a less-maintained and little used fork of the massively popular, Puppet-approved one by ghoneycutt. At this point, the two are out of sync. Unless you need something that's not in ghoneycutt's version, I would strongly suggest switching.
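If the -f route is taken on EL8, a hypothetical Puppet sketch of the systemd drop-in (the drop-in path and the alternate config path are assumptions, not something the ssh module provides):

file { '/etc/systemd/system/sshd.service.d/override.conf':
  ensure  => file,
  content => "[Service]\nExecStart=\nExecStart=/usr/sbin/sshd -D -f /etc/ssh/custom_sshd_config \$OPTIONS\n",
  notify  => Service['sshd'],
}

The empty ExecStart= line is required so systemd clears the packaged command before adding the replacement; a systemctl daemon-reload (for example via an Exec resource) must also run before the service restart picks the drop-in up.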
How to override sshd_config defaults with custom_override file inside /etc/ssh/sshd_config.d folder via puppet
I have installed the below version of openssh rpms openssh-clients-8.0p1-13.el8.x86_64 openssh-8.0p1-13.el8.x86_64 openssh-server-8.0p1-13.el8.x86_64 In the default sshd_config file, I do not see "Include" directive mentioned in it. grep -nr "Include" /etc/ssh/sshd_config returns nothing. Also "/etc/ssh/sshd_config.d" is not created by rpm. So what I did is created /etc/ssh/sshd_config.d directory and added this "Include /etc/ssh/sshd_config.d/*.conf" in last line of /etc/ssh/sshd_config. I am using puppet to override the default sshd_config file by setting sshd_config_path parameter in puppet ssh module to "/etc/sshd_config.d/custom_sshd_config.conf". ssh module of puppet is just take a copy of sshd_config file and replacing the lines as per puppet configurations. With this I face issues like having conflicting & duplicate values for many sshd_config configurations. One such example: sshd_config: HostKey /etc/ssh/ssh_host_rsa_key # duplicate HostKey /etc/ssh/ssh_host_ecdsa_key HostKey /etc/ssh/ssh_host_ed25519_key Subsystem sftp /usr/libexec/openssh/sftp-server sshd_config.d/custom_sshd_config.conf: HostKey /etc/ssh/ssh_host_rsa_key # duplicate (gets added by default puppet ssh module itself) Subsystem sftp /usr/libexec/openssh/sftp-server # When Subsystem in defined in both the places, # sshd service is failed to start, So I am # manually commenting this out in one file sshd -T output has duplicates: ... hostkey /etc/ssh/ssh_host_rsa_key hostkey /etc/ssh/ssh_host_ecdsa_key hostkey /etc/ssh/ssh_host_ed25519_key hostkey /etc/ssh/ssh_host_rsa_key ... Adding the Include directive in the top also doesn't solve my problem. I am aware of the sshd man page note first obtained value for each parameter is used in sshd : Order matters only when conflicting parameters exist, as the first obtained value for each parameter is used Also directly overriding sshd_config file itself by puppet will solve this duplication/conflict issue, but my need is to maintain a separate custom_sshd_config file without overriding directly the original sshd_config file. What I expect is "Include directive file should behave like overrides of default sshd_config". Is there any way to automate this in puppet like whenever an sshd configuration is overridden in custom_sshd_config file that needs to be commented in default sshd_config so that it will be overridden in real. It would be really helpful if someone helps me out with this issue. Thanks in advance!!
[ "\nIn the default sshd_config file, I do not see \"Include\" directive mentioned in it. grep -nr \"Include\" /etc/ssh/sshd_config returns nothing. Also \"/etc/ssh/sshd_config.d\" is not created by rpm.\n\nI don't find that particularly surprising. The logical contents of sshd_config are order- and context-sensitive, so although there is an Include directive available, using it to provide for generic drop-in sshd configuration files doesn't work very well. I could see a more targeted approach involving drop-in files, perhaps, but not what you're actually trying to do.\nNevertheless, ...\n\nwhat I did is created /etc/ssh/sshd_config.d directory and added this \"Include /etc/ssh/sshd_config.d/*.conf\" in last line of /etc/ssh/sshd_config.\n\n... sure, you can do that if you want. But this ...\n\nI am using puppet to override the default sshd_config file by setting sshd_config_path parameter in puppet ssh module to \"/etc/sshd_config.d/custom_sshd_config.conf\".\n\n... seems both to misrepresent fact and to be unwise. In the first place, no, you are not overriding the default config file. That suggests that sshd would use the config file you specify instead of /etc/sshd/sshd_config, but clearly that's not happening. What you are doing is simply telling Puppet to manage a different file instead.\nIn the second place, doing that in the way you are doing it is downright begging for exactly the kind of problem you observe: duplicate / inconsistent configuration. You're managing etc/sshd_config.d/custom_sshd_config.conf as if it were a complete sshd configuration file (because that's what the module does), yet the only way it gets used at all is by being included by the main config file.\nIt's not clear how you even expect to gain anything from this, when you could simply manage the regular config file directly. You say that you can't do that, but you already are doing it, in the sense that you are placing an Include directive in it that was not provided by the RPM.\n\nWhat I expect is \"Include directive file should behave like overrides of default sshd_config\". Is there any way to automate this in puppet like whenever an sshd configuration is overridden in custom_sshd_config file that needs to be commented in default sshd_config so that it will be overridden in real.\n\nThe module you're using (see also below) does not do this, and I don't see why it would. If you're going to modify the main config file anyway, then why would you not put the configuration directives you want there? Or if indeed you must not modify that file, then why are you proposing an approach that involves modifying it (further)?\nOne way to move forward would be to indeed change which file sshd uses for its main config file. You could do that on EL8 by managing sshd's systemd unit file to add an appropriate -f option to the sshd command line it uses.\nOr if you're ok with modifying /etc/ssh/sshd_config after all, but you still want drop-in files, then you could consider removing everything but the Include directive from the main config file, and otherwise proceeding as you already are doing.\nBut if you want exactly what you describe, then you'll need to write your own module to handle it.\nSpeaking of modules, though, the one you linked to is a less-maintained and little used fork of the massively popular, Puppet-approved one by ghoneycutt. At this point, the two are out of sync. Unless you need something that's not in ghoneycutt's version, I would strongly suggest switching.\n" ]
[ 0 ]
[]
[]
[ "puppet", "ssh", "sshd" ]
stackoverflow_0074654971_puppet_ssh_sshd.txt
Q: What does pg_replication_slots.active = f mean for the WAL? Postgres documentation states the following regarding pg_replication_slots.active:
True if this slot is currently actively being used.
What does this actually imply for a false value? For example, will the WAL still stick around if the value is false? Can I then set to true and it will continue to replicate from where it left off?

A: From all of my personal exploration it appears that even with an active=False replication slot:
The WAL is still pinned and will stick around even if the replication slot is active=False.
It's not possible to explicitly set active to True.
Deactivation generally occurs because the follower had some issue which made it fail to safely read from the WAL, replicate, and move the LSN pointer.
Restarting the follower, assuming it then succeeds in safely replicating, will generally fix the issue, and the slot will automatically change to active=True.

A: In my case I was using AWS and because I had automated backups turned off the wal_level was set to minimal and I couldn't bump up the max_replication_slots (even though it looked like I could...). I had to create a new subscriber DB instance and then it worked.
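To make the retention point concrete: a quick way to see which slots are pinning WAL, and to drop one whose consumer will never return (the slot name is hypothetical):

SELECT slot_name, active, restart_lsn
FROM pg_replication_slots
WHERE NOT active;

-- only if the consumer is truly gone; this releases the pinned WAL
SELECT pg_drop_replication_slot('my_dead_slot');

Dropping the slot is the only manual lever here; there is no SQL that flips active back to true.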
What does pg_replication_slots.active = f mean for the WAL?
Postgres documentation states the following regarding pg_replication_slots.active: True if this slot is currently actively being used. What does this actually imply for a false value? For example, will the WAL still stick around if the value is false? Can I then set to true and it will continue to replicate from where it left off?
[ "From all of my personal exploration it appears that even with an active=False replication slot:\n\nThe WAL is still pinned and will stick around even if the replication slot is active=False.\nIt's not possible to explicitly set active to True.\nDeactivation general occurs because the follower had some issue which made it fail to safely read from the WAL, replicate, and move the LSN pointer.\nRestarting the follower, assuming it then succeeds in safely replicating, will generally fix the issue, and the slot will automatically change to active=True.\n\n", "In my case I was using AWS and because I had automated backups turned off the wal_level was set to minimal and I couldn't bump up the max_replication_slots (even though it looked like I could...). I had to create a new subscriber DB instance and then it worked.\n" ]
[ 2, 0 ]
[]
[]
[ "postgresql", "wal" ]
stackoverflow_0065637856_postgresql_wal.txt
Q: Swift: Error converting type 'Binding' when passing Observed object's property to child view I want to load data from an API, then pass that data to several child views. Here's a minimal example with one child view (DetailsView). I am getting this error:
Cannot convert value of type 'Binding<Subject>' to expected argument type 'BusinessDetails'
import Foundation
import SwiftUI
import Alamofire

struct BusinessView: View {
    var shop: Business
    
    class Observer : ObservableObject{
        @Published public var shop = BusinessDetails()
        @Published public var loading = false
        
        init(){
            shop = await getDetails(id: shop.id)
        }
        
        func getDetails(id: String) async -> (BusinessDetails) {
            let params = [
                id: id
            ]
            
            self.loading = true
            
            self.shop = try await AF.request("https://api.com/details", parameters: params).serializingDecodable(BusinessDetails.self).value
            
            self.loading = false
            
            return self.shop
        }
    }
    
    @StateObject var observed = Observer()
    
    var body: some View {
        if !observed.loading {
            TabView {
                DetailsView(shop: $observed.shop)
                    .tabItem {
                        Label("Details", systemImage: "")
                    }
            }
        }
    }
}

This has worked before when the Observed object's property wasn't an object itself (like how the loading property doesn't cause an error).

A: When using async/await you should use the .task modifier and remove the object. The task will be started when the view appears, cancelled when it disappears and restarted when the id changes. This saves you a lot of effort trying to link async task lifecycle to object lifecycle. e.g.
struct BusinessView: View {
    let shop: Business
    @State var shopDetails = BusinessDetails()
    @State var loading = false
    
    var body: some View {
        Group {
            if loading {
                Text("Loading")
            }
            else {
                TabView {
                    DetailsView(shop: shopDetails)
                        .tabItem {
                            Label("Details", systemImage: "")
                        }
                }
            }
        }
        .task(id: shop.id) {
            loading = true
            do {
                shopDetails = try await Self.getDetails(id: shop.id)
            } catch {
                // usually we show an error message here
            }
            loading = false
        }
    }

    // you can move this func somewhere else if you like
    static func getDetails(id: String) async throws -> BusinessDetails {
        let params = [
            id: id
        ]

        let result = try await AF.request("https://api.com/details", parameters: params).serializingDecodable(BusinessDetails.self).value
        
        return result
    }
}
Swift: Error converting type 'Binding' when passing Observed object's property to child view
I want to load data from an API, then pass that data to several child views. Here's a minimal example with one child view (DetailsView). I am getting this error: Cannot convert value of type 'Binding<Subject>' to expected argument type 'BusinessDetails' import Foundation import SwiftUI import Alamofire struct BusinessView: View { var shop: Business class Observer : ObservableObject{ @Published public var shop = BusinessDetails() @Published public var loading = false init(){ shop = await getDetails(id: shop.id) } func getDetails(id: String) async -> (BusinessDetails) { let params = [ id: id ] self.loading = true self.shop = try await AF.request("https://api.com/details", parameters: params).serializingDecodable(BusinessDetails.self).value self.loading = false return self.shop } } @StateObject var observed = Observer() var body: some View { if !observed.loading { TabView { DetailsView(shop: $observed.shop) .tabItem { Label("Details", systemImage: "") } } } } } This has worked before when the Observed object's property wasn't an object itself (like how the loading property doesn't cause an error).
[ "When using async/await you should use the .task modifier and remove the object. The task will be started when the view appears, cancelled when it disappears and restarted when the id changes. This saves you a lot of effort trying to link async task lifecycle to object lifecycle. e.g.\nstruct BusinessView: View {\n let shop: Business\n @State var shopDetails = BusinessDetails()\n @State var loading = false\n \n var body: some View {\n if loading {\n Text(\"Loading\")\n } \n else {\n TabView {\n DetailsView(shop: shopDetails)\n .tabItem {\n Label(\"Details\", systemImage: \"\")\n }\n }\n }\n .task(id: shop.id) {\n loading = true\n shopDetails = await Self.getDetails(id: shop.id) // usually we have a try catch around this so we can show an error message\n loading = false\n }\n }\n\n // you can move this func somewhere else if you like\n static func getDetails(id: String) async -> BusinessDetails{\n let params = [\n id: id\n ]\n\n let result = try await AF.request(\"https://api.com/details\", parameters: params).serializingDecodable(BusinessDetails.self).value\n \n return result\n }\n }\n}\n\n" ]
[ 1 ]
[]
[]
[ "asynchronous", "swift", "swiftui" ]
stackoverflow_0074651723_asynchronous_swift_swiftui.txt
Q: How can I return a list I wanted to do a discord command scraper in python for the raffles available on https://releases.footshop.com/ and I almost finished it, but when I want to return a list of sizes (and stock also) it returns an error "IndexError: list index out of range" and I can't find what to do :/ thanks for your help guys!
I tried this code but it returns an error (I tried other things too, but I don't remember them; I had been trying to find a solution by myself for some time now lol)
here is the code:
def searchsizefootshop(size):
    item = searchfootshop(size)
    if len(item['sizeSets']['Men']['sizes']) !=0:
        size1 = int(item['sizeSets']['Men']['sizes'][X])
        size = item['sizeSets']['Men']['sizes'][X]['eur']
        count = 0
        while count <= len(size1):
            print(size[count])
            count += 1

searchsizefootshop('hGZrRYMB3xHSyCfZ4BFw')

[scraper]
[Footshop API]

A: len(size1) returns the length of the list, not the index of the last item. To loop through a list using len and index access, you need to compare it to len(size1) - 1 or just use count < len(size1).
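For illustration, the simplest rewrite of the loop itself (assuming the sizes list really is a list of dicts with an 'eur' key, as the question's indexing suggests; the undefined X is left out):

sizes = item['sizeSets']['Men']['sizes']
for s in sizes:
    print(s['eur'])

Iterating directly over the list sidesteps the off-by-one entirely, since there is no index to run past the end.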
How can I return a list
I wanted to do a discord command scraper in python for the raffles available on https://releases.footshop.com/ and I almost finished it, but when I want to return a list of sizes (and stock also) it returns an error "IndexError: list index out of range" and I can't find what to do :/ thanks for your help guys!
I tried this code but it returns an error (I tried other things too, but I don't remember them; I had been trying to find a solution by myself for some time now lol)
here is the code:
def searchsizefootshop(size):
    item = searchfootshop(size)
    if len(item['sizeSets']['Men']['sizes']) !=0:
        size1 = int(item['sizeSets']['Men']['sizes'][X])
        size = item['sizeSets']['Men']['sizes'][X]['eur']
        count = 0
        while count <= len(size1):
            print(size[count])
            count += 1

searchsizefootshop('hGZrRYMB3xHSyCfZ4BFw')

[scraper]
[Footshop API]
[ "len(size1) returns the length of the list, not the index of the last item. To loop through a list using len and index access, you need to compare it to len(size1) - 1 or just use count < len(size1).\n" ]
[ 0 ]
[]
[]
[ "discord", "list", "python" ]
stackoverflow_0074605388_discord_list_python.txt
Q: How can we modify the ER diagrams using DBeaver I already have ER diagrams with me, which were last updated a month ago. Meanwhile, some schema changes have happened in the database. How can I merge those changes using DBeaver, or what are the alternate options to merge those changes?

A: To update an existing ER diagram to reflect subsequent changes made to the schema, follow these steps:
Ensure that the .erd file tab is selected
Click File > Refresh in the options bar on the top-left (or hit F5)
Keep in mind that you can't undo this change, and the layout of the elements in the diagram will be rearranged.
This was tested with DBeaver Version 21.1.5.202108160727
How can we modify the ER diagrams using DBeaver
I already have ER diagrams with me, which were last updated a month ago. Meanwhile, some schema changes have happened in the database. How can I merge those changes using DBeaver, or what are the alternate options to merge those changes?
[ "To update an existing ER diagram to reflect subsequent changes made to the schema, follow these steps:\n\nEnsure that the .erd file tab is selected\nClick File > Refresh in the options bar on the top-left (or hit F5)\n\nKeep in mind that you can't undo this change, and the layout of the elements in the diagram will be rearranged.\nThis was tested with DBeaver Version 21.1.5.202108160727\n" ]
[ 0 ]
[]
[]
[ "dbeaver", "entity_relationship" ]
stackoverflow_0070314092_dbeaver_entity_relationship.txt
Q: copy on premise sql server hosted on linux server to Azure datalake using azure data factory I have a requirement to copy a table from an on-premise SQL database hosted on a Linux server to Azure Data Lake using Azure Data Factory. The self-hosted integration runtime works natively only on Windows systems. Can someone share your thoughts or a workaround to achieve this requirement?
Regards
Aravindan

A: Unfortunately, this cannot be achieved, as the SHIR has the below system requirements, and in order to connect to on-prem data sources ADF requires the SHIR.
A workaround could be using SSIS packages in Linux to extract the data. For more information, please refer to this documentation: Extract, transform, and load data on Linux with SSIS
copy on premise sql server hosted on linux server to Azure datalake using azure data factory
I have a requirement to copy a table from an on-premise SQL database hosted on a Linux server to Azure Data Lake using Azure Data Factory. The self-hosted integration runtime works natively only on Windows systems. Can someone share your thoughts or a workaround to achieve this requirement?
Regards
Aravindan
[ "Unfortunately, this cannot be achieved as the SHIR has below system requirements and in order to connect to onPrem data sources ADF requires SHIR.\n\nA workaround could be using SSIS packages in Linux to extract the data. For more information, please refer to this documentation: Extract, transform, and load data on Linux with SSIS\n" ]
[ 0 ]
[]
[]
[ "azure_data_factory" ]
stackoverflow_0074557524_azure_data_factory.txt
Q: How to extract an object from a json response and store it in a var with jQuery I get this response in the console and I want to store the "text" part to a variable using jQuery or JS. It gets logged as console.log(response); A: var text = response["generations"][0]["text"];
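If the response shape can vary, a slightly defensive version avoids a TypeError when generations is missing or empty (plain JS, no jQuery required):

const text = response?.generations?.[0]?.text ?? "";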
How to extract an object from a json response and store it in a var with jQuery
I get this response in the console and I want to store the "text" part to a variable using jQuery or JS. It gets logged as console.log(response);
[ "\n\nvar text = response[\"generations\"][0][\"text\"];\n\n\n\n" ]
[ 0 ]
[]
[]
[ "jquery", "json" ]
stackoverflow_0074660505_jquery_json.txt
Q: connect the front and the back via a REST API in Maven project I would like to connect the front and the back of my Web App via a REST API, but I don't know how it works, technically speaking. I know that I have to make them communicate through the links first, but I don't know exactly where to place these links. I also don't know how I know that the communication is established, nor how to display data from the back in the front. I know that the front communicates with the Resource of the back, but I do not see when the link is made with the front.

A: This example is a good starter sample for REST APIs (CRUD - Create/Read/Update/Delete) with Spring Boot and Angular
https://github.com/bezkoder/spring-boot-angular-14-crud-example
#1 clone it
#2 Open it at the root of the project directory in VS Code
#3 Install JDK - it detects the OS and version automatically

#4 run the backend (Spring Boot) project
in the spring-boot-server sub directory
Run Spring Boot application
mvn spring-boot:run

The Spring Boot server will export the API at port 8080.

#5 run the frontend (Angular) project
npm install
ng serve --port 8081

#6 launch a browser and open the frontend URL
http://localhost:8081

You can add a tutorial, list, delete and update

You can call the list API from a browser without Angular

#7 review & study the code
This is an overview

The list API server in Spring Boot, in the TutorialController.java file
@CrossOrigin(origins = "http://localhost:8081")
@RestController
@RequestMapping("/api")
public class TutorialController {

  @Autowired
  TutorialRepository tutorialRepository;

  @GetMapping("/tutorials")
  public ResponseEntity<List<Tutorial>> getAllTutorials(@RequestParam(required = false) String title) {
    try {
      List<Tutorial> tutorials = new ArrayList<Tutorial>();

      if (title == null)
        tutorialRepository.findAll().forEach(tutorials::add);
      else
        tutorialRepository.findByTitleContaining(title).forEach(tutorials::add);

      if (tutorials.isEmpty()) {
        return new ResponseEntity<>(HttpStatus.NO_CONTENT);
      }

      return new ResponseEntity<>(tutorials, HttpStatus.OK);
    } catch (Exception e) {
      return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);
    }
  }

Getting the list API in the Angular part, in the tutorial.service.ts file
const baseUrl = 'http://localhost:8080/api/tutorials';

@Injectable({
  providedIn: 'root'
})
export class TutorialService {

  constructor(private http: HttpClient) { }

  getAll(): Observable<Tutorial[]> {
    return this.http.get<Tutorial[]>(baseUrl);
  }

  get(id: any): Observable<Tutorial> {
    return this.http.get<Tutorial>(`${baseUrl}/${id}`);
  }

  create(data: any): Observable<any> {
    return this.http.post(baseUrl, data);
  }

  update(id: any, data: any): Observable<any> {
    return this.http.put(`${baseUrl}/${id}`, data);
  }

  delete(id: any): Observable<any> {
    return this.http.delete(`${baseUrl}/${id}`);
  }

  deleteAll(): Observable<any> {
    return this.http.delete(baseUrl);
  }

  findByTitle(title: any): Observable<Tutorial[]> {
    return this.http.get<Tutorial[]>(`${baseUrl}?title=${title}`);
  }
}

Display in Angular for the list, in tutorials-list.component.html
  <div class="col-md-6">
    <h4>Tutorials List</h4>
    <ul class="list-group">
      <li
        class="list-group-item"
        *ngFor="let tutorial of tutorials; let i = index"
        [class.active]="i == currentIndex"
        (click)="setActiveTutorial(tutorial, i)"
      >
        {{ tutorial.title }}
      </li>
    </ul>

    <button class="m-3 btn btn-sm btn-danger" (click)="removeAllTutorials()">
      Remove All
    </button>
  </div>

The HTML's tutorials array variable binds to TutorialsListComponent's tutorials?: Tutorial[];
Its values are assigned by tutorialService: TutorialService,
which is injected in the constructor()
export class TutorialsListComponent implements OnInit {

  tutorials?: Tutorial[];
  currentTutorial: Tutorial = {};
  currentIndex = -1;
  title = '';

  constructor(private tutorialService: TutorialService) { }

  ngOnInit(): void {
    this.retrieveTutorials();
  }

  retrieveTutorials(): void {
    this.tutorialService.getAll()
      .subscribe({
        next: (data) => {
          this.tutorials = data;
          console.log(data);
        },
        error: (e) => console.error(e)
      });
  }

Enjoy happy coding!
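A quick smoke test of the backend without the Angular app (assuming the API is on 8080, matching baseUrl):

curl http://localhost:8080/api/tutorials

An empty JSON array [] on a fresh database means the endpoint is up and reachable.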
connect the front and the back via a REST API in Maven project
I would like to connect the front and the back of my Web App via a REST API, but I don't know how it works, technically speaking. I know that I have to make them communicate through the links first, but I don't know exactly where to place these links. I also don't know how I know that the communication is established, nor how to display data from the back in the front. I know that the front communicates with the Resource of the back, but I do not see when the link is made with the front.
[ "This example is good start sample for REST-APIs (CRUD-Create/Read/Update/Delete) with spring boot and angular\nhttps://github.com/bezkoder/spring-boot-angular-14-crud-example\n#1 clone it\n#2 Open it at root of project directory by VS code\n#3 Install JDK - it detect automatically OS and version\n\n#4 run backend (Spring boot) project\nin spring-boot-server sub directory\nRun Spring Boot application\nmvn spring-boot:run\n\nThe Spring Boot Server will export API at port 8081.\n\n#5 run frontend (Angular) project\nnpm install\nng serve --port 8081\n\n\n#6 launch browser end open frontend URL\nhttp://localhost:8081\n\nYou can add tutorial , list, delete and update\n\n\nYou can call the list API by browser without angular\n\n#7 review & study code\nThis overview\n\nlist API server in Spring Boot at TutorialController.java file\n@CrossOrigin(origins = \"http://localhost:8081\")\n@RestController\n@RequestMapping(\"/api\")\npublic class TutorialController {\n\n @Autowired\n TutorialRepository tutorialRepository;\n\n @GetMapping(\"/tutorials\")\n public ResponseEntity<List<Tutorial>> getAllTutorials(@RequestParam(required = false) String title) {\n try {\n List<Tutorial> tutorials = new ArrayList<Tutorial>();\n\n if (title == null)\n tutorialRepository.findAll().forEach(tutorials::add);\n else\n tutorialRepository.findByTitleContaining(title).forEach(tutorials::add);\n\n if (tutorials.isEmpty()) {\n return new ResponseEntity<>(HttpStatus.NO_CONTENT);\n }\n\n return new ResponseEntity<>(tutorials, HttpStatus.OK);\n } catch (Exception e) {\n return new ResponseEntity<>(null, HttpStatus.INTERNAL_SERVER_ERROR);\n }\n }\n\nGet list API in Angular part at tutorial.service.ts file\nconst baseUrl = 'http://localhost:8080/api/tutorials';\n\n@Injectable({\n providedIn: 'root'\n})\nexport class TutorialService {\n\n constructor(private http: HttpClient) { }\n\n getAll(): Observable<Tutorial[]> {\n return this.http.get<Tutorial[]>(baseUrl);\n }\n\n get(id: any): Observable<Tutorial> {\n return this.http.get<Tutorial>(`${baseUrl}/${id}`);\n }\n\n create(data: any): Observable<any> {\n return this.http.post(baseUrl, data);\n }\n\n update(id: any, data: any): Observable<any> {\n return this.http.put(`${baseUrl}/${id}`, data);\n }\n\n delete(id: any): Observable<any> {\n return this.http.delete(`${baseUrl}/${id}`);\n }\n\n deleteAll(): Observable<any> {\n return this.http.delete(baseUrl);\n }\n\n findByTitle(title: any): Observable<Tutorial[]> {\n return this.http.get<Tutorial[]>(`${baseUrl}?title=${title}`);\n }\n}\n\nDisplay in Angular for list at tutorials-list.component.html\nby tutorials-list.component.html\n <div class=\"col-md-6\">\n <h4>Tutorials List</h4>\n <ul class=\"list-group\">\n <li\n class=\"list-group-item\"\n *ngFor=\"let tutorial of tutorials; let i = index\"\n [class.active]=\"i == currentIndex\"\n (click)=\"setActiveTutorial(tutorial, i)\"\n >\n {{ tutorial.title }}\n </li>\n </ul>\n\n <button class=\"m-3 btn btn-sm btn-danger\" (click)=\"removeAllTutorials()\">\n Remove All\n </button>\n </div>\n\nIt's html's tutorials array variable binding with TutorialsListComponent's tutorials?: Tutorial[];\nIt was assigned values by tutorialService: TutorialService\nIt injected at consructor()\nexport class TutorialsListComponent implements OnInit {\n\n tutorials?: Tutorial[];\n currentTutorial: Tutorial = {};\n currentIndex = -1;\n title = '';\n\n constructor(private tutorialService: TutorialService) { }\n\n ngOnInit(): void {\n this.retrieveTutorials();\n }\n\n retrieveTutorials(): void {\n 
this.tutorialService.getAll()\n .subscribe({\n next: (data) => {\n this.tutorials = data;\n console.log(data);\n },\n error: (e) => console.error(e)\n });\n }\n\nEnjoy happy coding!\n" ]
[ 0 ]
[]
[]
[ "angular", "api", "java", "maven", "rest" ]
stackoverflow_0074646381_angular_api_java_maven_rest.txt
Q: How to use `ListCtrl` on wxpython How can I append a row and its corresponding data into ListCtrl? I've just finished learning how to use TreeCtrl (relatively easier than ListCtrl); it shows a clear usage of matching a single GUI object with data. But ListCtrl does not. How can I append or insert a single row with its corresponding data?
How can I access a row and its data?
How can I manipulate them (editing data/row, deleting data/row)?
Can you explain a summary of them? Thank you.
I know my question is simple and I could get some of this from the docs. I read the docs, but still got no clue.

A: The wxPython docs are sparse and don't give much help; here are some quick tips below,
I added explanations in comments:
# create new list control
listctrl = wx.dataview.DataViewListCtrl( my_panel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.dataview.DV_SINGLE )

# setup listctrl columns
listctrl.AppendTextColumn('first name', width=220) # normal text column
listctrl.AppendBitmapColumn('my images', 0, width=35) # you can add images in this col
listctrl.AppendProgressColumn('Progress', align=wx.ALIGN_CENTER) # a progress bar

listctrl.SetRowHeight(30) # define all rows height

# add data, note myList is a list or tuple contains the exact type of data for each columns and same length as col numbers
listctrl.AppendItem(myList)

# to modify an entry "a single cell located at row x col"
listctrl.SetValue(myNewValue, row, column)

A: this is what works for me:
import wx

il_icons = wx.ImageList(16, 16, mask=True, initialCount=2)
il_icons.Add(wx.Bitmap('icon01.png'))
il_icons.Add(wx.Bitmap('icon02.png'))

lc_list = wx.ListCtrl(self, wx.ID_ANY, style=wx.LC_REPORT | wx.LC_SINGLE_SEL | wx.LC_EDIT_LABELS | wx.LC_VRULES, name='lc_list')
lc_list.AssignImageList(il_icons, which=wx.IMAGE_LIST_SMALL)
lc_list.AppendColumn('col01', format=wx.LIST_FORMAT_LEFT, width=64)
lc_list.AppendColumn('col02', format=wx.LIST_FORMAT_RIGHT, width=64)
lc_list.Append(('item01',100))
lc_list.Append(('item02',200))
lc_list.SetItemColumnImage(0,0,0)
lc_list.SetItemColumnImage(1,0,1)

lc_list.Bind(wx.EVT_LIST_ITEM_SELECTED, OnItemSelected)

lc_list.Show(True)
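To round out the editing/deleting part of the question for a plain wx.ListCtrl in report mode, a short sketch; these are standard Phoenix methods, but the row/column indices here are illustrative:

# read a cell (row 0, column 1)
value = lc_list.GetItemText(0, 1)

# edit a cell in place (row 0, column 1 -> new text)
lc_list.SetItem(0, 1, '999')

# delete a single row, or everything
lc_list.DeleteItem(1)
# lc_list.DeleteAllItems()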
How to use `ListCtrl` in wxPython
How can I append a row and its corresponding data into a ListCtrl? I've just finished learning how to use TreeCtrl (relatively easier than ListCtrl), which shows a clear way of matching a single GUI object with its data. ListCtrl does not.

How can I append or insert a single row with its corresponding data?
How can I access a row and its data?
How can I manipulate them (editing data/rows, deleting data/rows)?

Can you give a summary of these? Thank you. I know my question is simple and I could probably get this from the docs, but I have read them and still have no clue.
[ "I know that wxPython docs are retarded and gives no much help, here is some quick tips below,\ni added explanations in comments:\n# create new list control\nlistctrl = wx.dataview.DataViewListCtrl( my_panel, wx.ID_ANY, wx.DefaultPosition, wx.DefaultSize, wx.dataview.DV_SINGLE )\n\n# setup listctrl columns\nlistctrl.AppendTextColumn('first name', width=220) # normal text column\nlistctrl.AppendBitmapColumn('my images', 0, width=35) # you can add images in this col\nlistctrl.AppendProgressColumn('Progress', align=wx.ALIGN_CENTER) # a progress bar\n\nlistctrl.SetRowHeight(30) # define all rows height\n\n# add data, note myList is a list or tuple contains the exact type of data for each columns and same length as col numbers\nlistctrl.AppendItem(myList)\n\n# to modify an entry \"a single cell located at row x col\"\nlistctrl.SetValue(myNewValue, row, column)\n\n", "this is what works for me:\nimport wx\n\nil_icons = wx.ImageList(16, 16, mask=True, initialCount=2)\nil_icons.Add(wx.Bitmap('icon01.png'))\nil_icons.Add(wx.Bitmap('icon02.png'))\n\nlc_list = wx.ListCtrl(self, wx.ID_ANY, style=wx.LC_REPORT | wx.LC_SINGLE_SEL | wx.LC_EDIT_LABELS | wx.LC_VRULES, name='lc_list')\nlc_list.AssignImageList(il_icons, which=wx.IMAGE_LIST_SMALL)\nlc_list.AppendColumn('col01', format=wx.LIST_FORMAT_LEFT, width=64)\nlc_list.AppendColumn('col02', format=wx.LIST_FORMAT_RIGHT, width=64)\nlc_list.Append(('item01',100))\nlc_list.Append(('item02',200))\nlc_list.SetItemColumnImage(0,0,0)\nlc_list.SetItemColumnImage(1,0,1)\n\nlc_list.Bind(wx.EVT_LIST_ITEM_SELECTED, OnItemSelected)\n\nlc_list.Show(True)\n\n" ]
[ 1, 0 ]
[]
[]
[ "listctrl", "python", "wxpython" ]
stackoverflow_0055789154_listctrl_python_wxpython.txt
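Neither answer above walks through the full append/read/edit/delete cycle the question asks about on the classic wx.ListCtrl, so here is a minimal runnable sketch in report mode; the column names and sample row are made up for illustration:

import wx

class Frame(wx.Frame):
    def __init__(self):
        super().__init__(None, title="ListCtrl CRUD demo")
        self.lc = wx.ListCtrl(self, style=wx.LC_REPORT)
        self.lc.AppendColumn("name", width=120)
        self.lc.AppendColumn("value", format=wx.LIST_FORMAT_RIGHT, width=80)

        # Append a row: InsertItem creates the row (filling column 0),
        # SetItem fills the remaining columns.
        row = self.lc.InsertItem(self.lc.GetItemCount(), "item01")
        self.lc.SetItem(row, 1, "100")

        # Read a cell back.
        print(self.lc.GetItemText(row, 1))  # -> "100"

        # Edit a cell in place.
        self.lc.SetItem(row, 1, "150")

        # Delete the row (or DeleteAllItems() to clear everything).
        # self.lc.DeleteItem(row)

        # To match a row with your own Python data (as TreeCtrl does),
        # keep a dict keyed by row index, or use SetItemData(row, int_key).

app = wx.App()
Frame().Show()
app.MainLoop()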
Q: Puppeteer: remove attribute <div> <button class="button--primary button--filled" disabled="disabled" data-action="publish"> <span class="ButtonText">Save and publish </span> </button> </div> I have the above button element on browser and I couldn't enable it unless I remove the disabled="disabled" attribute. Setting another value didn't help. Is there a way to remove the attribute completely? (Using page.$eval() is preferable) A: As pointed out in the comments, you can use Element.removeAttribute(attrName): const puppeteer = require("puppeteer"); // ^19.1.0 const html = ` <div> <button class="button--primary button--filled" disabled="disabled" data-action="publish"> <span class="ButtonText">Save and publish </span> </button> </div> `; let browser; (async () => { browser = await puppeteer.launch(); const [page] = await browser.pages(); await page.setContent(html); const sel = '.button--primary.button--filled[disabled="disabled"]'; const el = await page.$(sel); await page.$eval(sel, el => el.removeAttribute("disabled")); console.log(await el.evaluate(el => el.outerHTML)); })() .catch(err => console.error(err)) .finally(() => browser?.close()); Output (with disabled="disabled" removed): <button class="button--primary button--filled" data-action="publish"> <span class="ButtonText">Save and publish </span> </button>
Puppeteer: remove attribute
<div>
  <button class="button--primary button--filled" disabled="disabled" data-action="publish">
    <span class="ButtonText">Save and publish
    </span>
  </button>
</div>

I have the above button element in the browser and I couldn't enable it unless I removed the disabled="disabled" attribute. Setting another value didn't help. Is there a way to remove the attribute completely? (Using page.$eval() is preferable.)
[ "As pointed out in the comments, you can use Element.removeAttribute(attrName):\nconst puppeteer = require(\"puppeteer\"); // ^19.1.0\n\nconst html = `\n<div>\n <button class=\"button--primary button--filled\" disabled=\"disabled\" data-action=\"publish\">\n <span class=\"ButtonText\">Save and publish\n </span>\n </button>\n</div>\n`;\n\nlet browser;\n(async () => {\n browser = await puppeteer.launch();\n const [page] = await browser.pages();\n await page.setContent(html);\n const sel = '.button--primary.button--filled[disabled=\"disabled\"]';\n const el = await page.$(sel);\n await page.$eval(sel, el => el.removeAttribute(\"disabled\"));\n console.log(await el.evaluate(el => el.outerHTML));\n})()\n .catch(err => console.error(err))\n .finally(() => browser?.close());\n\nOutput (with disabled=\"disabled\" removed):\n<button class=\"button--primary button--filled\" data-action=\"publish\">\n <span class=\"ButtonText\">Save and publish\n </span>\n </button>\n\n" ]
[ 0 ]
[]
[]
[ "dom", "javascript", "puppeteer" ]
stackoverflow_0065235001_dom_javascript_puppeteer.txt
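If several elements need the same treatment, the same idea extends to page.$$eval, which runs the callback over every match; this is a small sketch on top of the answer above and assumes page is an open Puppeteer Page:

// Remove the attribute from every matching element at once.
await page.$$eval('button[disabled]', els =>
  els.forEach(el => el.removeAttribute('disabled'))
);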
Q: When request to httpserver in android app wait the click listener for button in alertdialog and return response for client When the http server (on AndroidApp) receives a request, I show an alert dialog to the user about this request. After the user responds to this alert dialog, I want it to return to the client (Browser). I also want to add a 10 second timeout in case the user doesn't press any button. Create HttpServer ` private HttpServerManager() { try { InetSocketAddress address = new InetSocketAddress(8080); httpServer = HttpServer.create(address, 0); httpServer.createContext("/getDeviceRegister", new EchoGetHandlerForDeviceRegister()); httpServer.setExecutor(null); httpServer.start(); Log.i(TAG, "HttpServer Start"); } catch (Exception e) { e.printStackTrace(); } } ` HttpHandler for -> EchoGetHandlerForDeviceRegister ` class EchoGetHandlerForDeviceRegister implements HttpHandler { @RequiresApi(api = Build.VERSION_CODES.O) @Override public void handle(HttpExchange he) throws IOException { // parse request Map<String, Object> parameters = new HashMap<String, Object>(); URI requestedUri = he.getRequestURI(); String query = requestedUri.getRawQuery(); HttpServerManager.parseQuery(query, parameters); new Handler(Looper.getMainLooper()).post(new Runnable() { @Override public void run() { //SHOW DIALOG HERE TestApplication.instance().showAdminRegisterDialog(he.getRemoteAddress()); } }); // send response String response = "<h1>Device Register</h1>"; for (String key : parameters.keySet()) response += key + " = " + parameters.get(key) + "\n"; he.sendResponseHeaders(200, response.length()); OutputStream os = he.getResponseBody(); os.write(response.toString().getBytes()); os.close(); } } ` ShowDialog Method ` public void showAdminRegisterDialog(InetSocketAddress clientAdress){ Log.i(TAG, "showAdminRegisterDialog()"); if (adminRegisterDialog != null) adminRegisterDialog.cancel(); Context context = MainActivity.instance(); AlertDialog.Builder builder = new AlertDialog.Builder(context, R.style.NewDialog2); builder = new AlertDialog.Builder(context); LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); adminRegisterView = li.inflate(R.layout.register_dialog, null); builder.setView(adminRegisterView); builder.setCancelable(false); TextView deviceNameText = adminRegisterView.findViewById(R.id.deviceNameText); TextView infoText = adminRegisterView.findViewById(R.id.infoText); deviceNameText.setText(clientAdress.toString()); infoText.setText(R.string.register_admin_allow_text); AppCompatButton allowButton = adminRegisterView.findViewById(R.id.allowButton); AppCompatButton notAllowButton = adminRegisterView.findViewById(R.id.notAllowButton); allowButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.i(TAG,"allowButton"); adminRegisterDialog.dismiss(); } }); notAllowButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.i(TAG,"not allowButton"); adminRegisterDialog.dismiss(); } }); adminRegisterDialog = builder.create(); adminRegisterDialog.show(); adminRegisterDialog.getWindow().setLayout(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); } ` -I tried using AsyncTask but I couldn't because it was too complicated. -Thread.sleep didn't work as it stopped all processes. A: I solved the problem as follows: I am using Alertdialog in class where I defined HttpServer. 
I keep it in a while loop until the user responds when the alertdialog is shown. After the user click alertdialog button, I finish the while loop and send a response to the client. boolean isClick = false; //SHOW ALERT DIALOG HERE //ALERT DILOG CLICK LISTENER result.getAllowButton().setOnClickListener(new View.OnClickListener() { isClick = true; }); while (!isClick) { Log.i(TAG, "in while loop"); } Log.i(TAG, "out while loop"); isClick = false; // send response String response = "<h1>Alert Dialog Clicked</h1>"; for (String key : parameters.keySet()) response += key + " = " + parameters.get(key) + "\n"; he.sendResponseHeaders(200, response.length()); OutputStream os = he.getResponseBody(); os.write(response.toString().getBytes()); os.close();
How to make an HTTP server in an Android app wait for an AlertDialog button click and return the response to the client
When the http server (on AndroidApp) receives a request, I show an alert dialog to the user about this request. After the user responds to this alert dialog, I want it to return to the client (Browser). I also want to add a 10 second timeout in case the user doesn't press any button. Create HttpServer ` private HttpServerManager() { try { InetSocketAddress address = new InetSocketAddress(8080); httpServer = HttpServer.create(address, 0); httpServer.createContext("/getDeviceRegister", new EchoGetHandlerForDeviceRegister()); httpServer.setExecutor(null); httpServer.start(); Log.i(TAG, "HttpServer Start"); } catch (Exception e) { e.printStackTrace(); } } ` HttpHandler for -> EchoGetHandlerForDeviceRegister ` class EchoGetHandlerForDeviceRegister implements HttpHandler { @RequiresApi(api = Build.VERSION_CODES.O) @Override public void handle(HttpExchange he) throws IOException { // parse request Map<String, Object> parameters = new HashMap<String, Object>(); URI requestedUri = he.getRequestURI(); String query = requestedUri.getRawQuery(); HttpServerManager.parseQuery(query, parameters); new Handler(Looper.getMainLooper()).post(new Runnable() { @Override public void run() { //SHOW DIALOG HERE TestApplication.instance().showAdminRegisterDialog(he.getRemoteAddress()); } }); // send response String response = "<h1>Device Register</h1>"; for (String key : parameters.keySet()) response += key + " = " + parameters.get(key) + "\n"; he.sendResponseHeaders(200, response.length()); OutputStream os = he.getResponseBody(); os.write(response.toString().getBytes()); os.close(); } } ` ShowDialog Method ` public void showAdminRegisterDialog(InetSocketAddress clientAdress){ Log.i(TAG, "showAdminRegisterDialog()"); if (adminRegisterDialog != null) adminRegisterDialog.cancel(); Context context = MainActivity.instance(); AlertDialog.Builder builder = new AlertDialog.Builder(context, R.style.NewDialog2); builder = new AlertDialog.Builder(context); LayoutInflater li = (LayoutInflater) context.getSystemService(Context.LAYOUT_INFLATER_SERVICE); adminRegisterView = li.inflate(R.layout.register_dialog, null); builder.setView(adminRegisterView); builder.setCancelable(false); TextView deviceNameText = adminRegisterView.findViewById(R.id.deviceNameText); TextView infoText = adminRegisterView.findViewById(R.id.infoText); deviceNameText.setText(clientAdress.toString()); infoText.setText(R.string.register_admin_allow_text); AppCompatButton allowButton = adminRegisterView.findViewById(R.id.allowButton); AppCompatButton notAllowButton = adminRegisterView.findViewById(R.id.notAllowButton); allowButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.i(TAG,"allowButton"); adminRegisterDialog.dismiss(); } }); notAllowButton.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View view) { Log.i(TAG,"not allowButton"); adminRegisterDialog.dismiss(); } }); adminRegisterDialog = builder.create(); adminRegisterDialog.show(); adminRegisterDialog.getWindow().setLayout(LinearLayout.LayoutParams.WRAP_CONTENT, LinearLayout.LayoutParams.WRAP_CONTENT); } ` -I tried using AsyncTask but I couldn't because it was too complicated. -Thread.sleep didn't work as it stopped all processes.
[ "I solved the problem as follows: I am using Alertdialog in class where I defined HttpServer. I keep it in a while loop until the user responds when the alertdialog is shown. After the user click alertdialog button, I finish the while loop and send a response to the client.\nboolean isClick = false;\n\n//SHOW ALERT DIALOG HERE\n\n\n//ALERT DILOG CLICK LISTENER\n result.getAllowButton().setOnClickListener(new View.OnClickListener() {\n isClick = true;\n });\n\n\n\nwhile (!isClick) {\n Log.i(TAG, \"in while loop\");\n}\n\nLog.i(TAG, \"out while loop\");\n isClick = false;\n\n // send response\n String response = \"<h1>Alert Dialog Clicked</h1>\";\n for (String key : parameters.keySet())\n response += key + \" = \" + parameters.get(key) + \"\\n\";\n he.sendResponseHeaders(200, response.length());\n OutputStream os = he.getResponseBody();\n os.write(response.toString().getBytes());\n os.close();\n\n\n" ]
[ 0 ]
[]
[]
[ "android", "http", "httphandler", "httpserver", "java" ]
stackoverflow_0074288969_android_http_httphandler_httpserver_java.txt
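The accepted busy-wait loop works but spins a CPU core until the click arrives. A gentler sketch of the same idea — including the 10-second timeout the question asked for — is a CountDownLatch counted down by the dialog's click listeners and awaited on the handler thread (never the main thread). The names below are illustrative, not from the original code:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

final CountDownLatch latch = new CountDownLatch(1);
final AtomicBoolean allowed = new AtomicBoolean(false);

// Inside showAdminRegisterDialog(), wire both buttons to the latch:
allowButton.setOnClickListener(v -> { allowed.set(true); latch.countDown(); });
notAllowButton.setOnClickListener(v -> { allowed.set(false); latch.countDown(); });

// Back on the HttpHandler thread, after posting the dialog to the main looper:
boolean answered = false;
try {
    answered = latch.await(10, TimeUnit.SECONDS); // false on timeout
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
String response = !answered ? "<h1>Timed out</h1>"
        : allowed.get() ? "<h1>Allowed</h1>" : "<h1>Denied</h1>";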
Q: multiple javascript files are not working in a sinlge page As you can see in my index.html page, I am adding two javascript file in a single page, one is for menu drop down and another is for readmore option.i'm new to javascript please help.when i add both last one file is work only first file cannot work . "index.html" <!DOCTYPE html> <html> <head> <title>united</title> <link rel="stylesheet" href="/static/css/dps.css"/> </head> <body> <div class="dropdown"> <button onclick="myFunction()" class="dropbtn">MENU</button> <div id="myDropdown" class="dropdown-content"> <a href='..'>Home</a> <a href="aboutus">About</a> <a href="#contact">Contact</a> <a href="Question-bank">Question Bank</a> <a href="updated-calender">calender</a> <a href="calculator">calculator</a> </div> </div><br> <script src="/static/js/menu.js"></script> <script src="/static/js/readmore.js" ></script> </Body> </html> "menu.js" /* When the user clicks on the button, toggle between hiding and showing the dropdown content */ function myFunction() { document.getElementById("myDropdown").classList.toggle("show"); } // Close the dropdown if the user clicks outside of it window.onclick = function(event) { if (!event.target.matches('.dropbtn')) { var dropdowns = document.getElementsByClassName("dropdown-content"); var i; for (i = 0; i < dropdowns.length; i++) { var openDropdown = dropdowns[i]; if (openDropdown.classList.contains('show')) { openDropdown.classList.remove('show'); } } } } "readmore.js" function myFunction() { var dots = document.getElementById("dots"); var moreText = document.getElementById("more"); var btnText = document.getElementById("myBtn"); if (dots.style.display === "none") { dots.style.display = "inline"; btnText.innerHTML = "Read more"; moreText.style.display = "none"; } else { dots.style.display = "none"; btnText.innerHTML = "Read less"; moreText.style.display = "inline"; } } please solve my problem why two js files cannot work in a single page. A: Welcome to Stack Overflow! Both your scripts define a function with the same name: myFunction. If you define 2 functions with the same name, then the second function will shadow the first one, meaning that the first one is inaccessible. When calling myFunction you will always call the second function. If you make sure that the functions have different names, then it should work. A: be carefull about the "onClick" EventListener, the good way to do that is to add an EventListener to the element. For exemple for your button const element = document.getElementById('dropbtnId'); element.addEventListener("click", function() { myFunction(); }); Here is an example with your code : const element = document.getElementById('dropbtnId'); element.addEventListener("click", function() { alert("triggered"); }); <html> <head> <title>united</title> <link rel="stylesheet" href="/static/css/dps.css"/> </head> <body> <div class="dropdown"> <button id="dropbtnId" class="dropbtn">MENU</button> <div id="myDropdown" class="dropdown-content"> <a href='..'>Home</a> <a href="aboutus">About</a> <a href="#contact">Contact</a> <a href="Question-bank">Question Bank</a> <a href="updated-calender">calender</a> <a href="calculator">calculator</a> </div> </div><br> Next step is to rename your functions, and then call it inside the EventListener
Multiple JavaScript files are not working in a single page
As you can see in my index.html page, I am adding two JavaScript files to a single page: one is for the menu drop-down and the other is for the read-more option. I'm new to JavaScript, please help. When I add both, only the last file works; the first file does not.

"index.html"
<!DOCTYPE html>
<html>
<head>
<title>united</title>

<link rel="stylesheet" href="/static/css/dps.css"/>
</head>
<body>

<div class="dropdown">
<button onclick="myFunction()" class="dropbtn">MENU</button>
<div id="myDropdown" class="dropdown-content">
<a href='..'>Home</a>
<a href="aboutus">About</a>
<a href="#contact">Contact</a>
<a href="Question-bank">Question Bank</a>
<a href="updated-calender">calender</a>
<a href="calculator">calculator</a>
</div>
</div><br>

<script src="/static/js/menu.js"></script>
<script src="/static/js/readmore.js"></script>

</body>
</html>

"menu.js"
/* When the user clicks on the button,
toggle between hiding and showing the dropdown content */
function myFunction() {
  document.getElementById("myDropdown").classList.toggle("show");
}

// Close the dropdown if the user clicks outside of it
window.onclick = function(event) {
  if (!event.target.matches('.dropbtn')) {
    var dropdowns = document.getElementsByClassName("dropdown-content");
    var i;
    for (i = 0; i < dropdowns.length; i++) {
      var openDropdown = dropdowns[i];
      if (openDropdown.classList.contains('show')) {
        openDropdown.classList.remove('show');
      }
    }
  }
}

"readmore.js"
function myFunction() {
  var dots = document.getElementById("dots");
  var moreText = document.getElementById("more");
  var btnText = document.getElementById("myBtn");

  if (dots.style.display === "none") {
    dots.style.display = "inline";
    btnText.innerHTML = "Read more";
    moreText.style.display = "none";
  } else {
    dots.style.display = "none";
    btnText.innerHTML = "Read less";
    moreText.style.display = "inline";
  }
}

Please help me understand why the two JS files cannot work on a single page.
[ "Welcome to Stack Overflow!\nBoth your scripts define a function with the same name: myFunction. If you define 2 functions with the same name, then the second function will shadow the first one, meaning that the first one is inaccessible. When calling myFunction you will always call the second function.\nIf you make sure that the functions have different names, then it should work.\n", "be carefull about the \"onClick\" EventListener, the good way to do that is to add an EventListener to the element.\nFor exemple for your button\nconst element = document.getElementById('dropbtnId');\nelement.addEventListener(\"click\", function() {\n myFunction();\n});\n\nHere is an example with your code :\n\n\nconst element = document.getElementById('dropbtnId');\nelement.addEventListener(\"click\", function() {\n alert(\"triggered\");\n});\n <html>\n <head>\n <title>united</title>\n \n <link rel=\"stylesheet\" href=\"/static/css/dps.css\"/>\n </head>\n <body>\n \n \n <div class=\"dropdown\">\n <button id=\"dropbtnId\" class=\"dropbtn\">MENU</button>\n <div id=\"myDropdown\" class=\"dropdown-content\">\n <a href='..'>Home</a>\n <a href=\"aboutus\">About</a>\n <a href=\"#contact\">Contact</a>\n <a href=\"Question-bank\">Question Bank</a>\n <a href=\"updated-calender\">calender</a>\n <a href=\"calculator\">calculator</a>\n </div>\n </div><br>\n\n\n\nNext step is to rename your functions, and then call it inside the EventListener\n" ]
[ 1, 0 ]
[]
[]
[ "html", "javascript" ]
stackoverflow_0074656116_html_javascript.txt
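Putting the two answers together, the concrete fix is to give the functions distinct names and wire them up with addEventListener instead of the inline onclick; the names toggleMenu and toggleReadMore below are illustrative:

// menu.js — renamed from myFunction to avoid the collision
function toggleMenu() {
  document.getElementById("myDropdown").classList.toggle("show");
}

// readmore.js — renamed as well
function toggleReadMore() {
  var dots = document.getElementById("dots");
  var moreText = document.getElementById("more");
  var btnText = document.getElementById("myBtn");
  if (dots.style.display === "none") {
    dots.style.display = "inline";
    btnText.innerHTML = "Read more";
    moreText.style.display = "none";
  } else {
    dots.style.display = "none";
    btnText.innerHTML = "Read less";
    moreText.style.display = "inline";
  }
}

// In a script at the end of index.html (drop onclick="myFunction()" from the button):
document.querySelector(".dropbtn").addEventListener("click", toggleMenu);
document.getElementById("myBtn").addEventListener("click", toggleReadMore);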
Q: Return multiple arrays with same name object So I have a JSON File with multiple arrays "ID" that I want to extract. [ { "ID": 318, "Type": 0, "Value": 3, }, { "ID": 580, "Type": 5, "Value": 8, }, { "ID": 23, "Type": 3, "Value": 40, }, { "ID": 13, "Type": 2, "Value": 58, }, { "ID": 60, "Type": 54, "Value": 22, }, function getID(address){ var url = 'https://data.com/api/v1/users/'; var res = UrlFetchApp.fetch(url); var content = res.getContentText(); var json = JSON.parse(content); var result = json[0]['ID']+", "+json[1]['ID']+", "+json[3]['ID']; return result; } I have this code that parses the arrays one by one but I don't know how to get all the "ID" arrays at once. Anyone that can help? A: To create the array: var result = json.map(({ ID }) => ID); to return the result as string: return JSON.stringify(result); And if you dont want the brackets: return JSON.stringify(result).slice(1, -1); A: The OP might have a look into the documentation of ... the fetch API, Promises and Promise.prototype.then as well as into ... async ... await ... and into ... Array.prototype.map, Array.prototype.join. /*async */function getFetchedApiData(apiUrl) { return fetch(apiUrl) .then(response => response.json()); } getFetchedApiData('https://jsonplaceholder.typicode.com/users') .then(data => { console.log({ data }); return data; }) .then(data => data.map(item => item.id) ) .then(idList => { console.log({ idList }); return idList; }) .then(idList => { console.log({ concatenatedIds: idList.join(', ') }); // return idList; }); .as-console-wrapper { min-height: 100%!important; top: 0; } ... or with a slightly different syntax ... async function getFetchedApiData(apiUrl) { return fetch(apiUrl) .then(response => response.json()); } (async () => { const data = await getFetchedApiData('https://jsonplaceholder.typicode.com/users'); const idList = data.map(({ id }) => id); const concatenatedIds = idList.join(', '); console.log({ data, idList, concatenatedIds }); })(); .as-console-wrapper { min-height: 100%!important; top: 0; }
Return multiple arrays with same name object
So I have a JSON file with multiple objects, each carrying an "ID" value that I want to extract.

[
  {
    "ID": 318,
    "Type": 0,
    "Value": 3
  },
  {
    "ID": 580,
    "Type": 5,
    "Value": 8
  },
  {
    "ID": 23,
    "Type": 3,
    "Value": 40
  },
  {
    "ID": 13,
    "Type": 2,
    "Value": 58
  },
  {
    "ID": 60,
    "Type": 54,
    "Value": 22
  }
]

function getID(address){
  var url = 'https://data.com/api/v1/users/';
  var res = UrlFetchApp.fetch(url);
  var content = res.getContentText();
  var json = JSON.parse(content);
  var result = json[0]['ID']+", "+json[1]['ID']+", "+json[3]['ID'];
  return result;
}

I have this code that reads the IDs one by one, but I don't know how to get all the "ID" values at once. Can anyone help?
[ "To create the array:\nvar result = json.map(({ ID }) => ID);\n\nto return the result as string:\nreturn JSON.stringify(result);\n\nAnd if you dont want the brackets:\nreturn JSON.stringify(result).slice(1, -1);\n\n", "The OP might have a look into the documentation of ...\n\nthe fetch API,\n\nPromises and Promise.prototype.then as well as into ...\n\nasync ... await\n\n\n... and into ...\n\nArray.prototype.map,\n\nArray.prototype.join.\n\n\n\n\n/*async */function getFetchedApiData(apiUrl) {\n return fetch(apiUrl)\n .then(response => response.json());\n}\n\ngetFetchedApiData('https://jsonplaceholder.typicode.com/users')\n .then(data => {\n console.log({ data });\n return data;\n })\n .then(data =>\n data.map(item => item.id)\n )\n .then(idList => {\n console.log({ idList });\n return idList;\n })\n .then(idList => {\n console.log({ concatenatedIds: idList.join(', ') });\n // return idList;\n });\n.as-console-wrapper { min-height: 100%!important; top: 0; }\n\n\n\n... or with a slightly different syntax ...\n\n\nasync function getFetchedApiData(apiUrl) {\n return fetch(apiUrl)\n .then(response => response.json());\n}\n\n(async () => {\n\n const data = await getFetchedApiData('https://jsonplaceholder.typicode.com/users');\n const idList = data.map(({ id }) => id);\n const concatenatedIds = idList.join(', ');\n\n console.log({ data, idList, concatenatedIds });\n \n})();\n.as-console-wrapper { min-height: 100%!important; top: 0; }\n\n\n\n" ]
[ 1, 0 ]
[]
[]
[ "arrays", "javascript", "json" ]
stackoverflow_0074656987_arrays_javascript_json.txt
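Applied inside the question's Apps Script function, the map approach from the first answer looks like this sketch (the URL is the question's placeholder):

function getID(address) {
  var url = 'https://data.com/api/v1/users/';
  var res = UrlFetchApp.fetch(url);
  var json = JSON.parse(res.getContentText());
  // Collect every ID instead of indexing them one by one.
  var ids = json.map(function (item) { return item.ID; });
  return ids.join(", "); // e.g. "318, 580, 23, 13, 60"
}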
Q: Convert DynamoDB JSON to AttributeValue, Go Object or Json I am trying to convert simple DynamoDB Object string: { "Item": { "Id": { "S": "db31" }, "CreateTime": { "N": "1647882237618915000" } } to either dynamodb.AttributeValue and then map to a go object (go type structure) or convert to a simple JSON go object. I think, there are similar answers (1, 2, 3) in Java, but I didn't find a similar implementation in Golang. A: You could create a struct type and use json.Unmarshal to unmarshal the JSON string like this: package main import ( "encoding/json" "fmt" "os" ) type Record struct { Item struct { Id struct { S string } CreateTime struct { N string } } } func main() { str := `{ "Item": { "Id": { "S": "db31" }, "CreateTime": { "N": "1647882237618915000" } } }` var record Record if err := json.Unmarshal([]byte(str), &record); err != nil { fmt.Fprintf(os.Stderr, "unmarshal failed: %v", err) os.Exit(1) } fmt.Printf("%s %s", record.Item.Id.S, record.Item.CreateTime.N) } If you want a different approach, and want to transform the result into a structure that is different than the JSON, you could use a library like gjson. Here is an example "flattening" the result into a simpler struct: package main import ( "fmt" "github.com/tidwall/gjson" ) type Record struct { Id string CreateTime string } func main() { str := `{ "Item": { "Id": { "S": "db31" }, "CreateTime": { "N": "1647882237618915000" } } }` values := gjson.GetMany(str, "Item.Id.S", "Item.CreateTime.N") record := Record{ Id: values[0].Str, CreateTime: values[1].Str, } fmt.Printf("%s %s", record.Id, record.CreateTime) } A: I hope this is not against the TOS. Please let me know if it is, and I will delete the answer. I tried this question on the OpenAI Chatbot and here is it's answer. Works locally for me [https://chat.openai.com/chat]. Here is an example of how you can convert a DynamoDB object string to a JSON object in Go: // Import the necessary packages import ( "encoding/json" "fmt" ) // Define a struct to represent the DynamoDB object type DynamoDBObject struct { Item struct { Id struct { S string `json:"S"` } `json:"Id"` CreateTime struct { N string `json:"N"` } `json:"CreateTime"` } `json:"Item"` } func main() { // Define the DynamoDB object string dynamoDBObjectString := `{ "Item": { "Id": { "S": "db31" }, "CreateTime": { "N": "1647882237618915000" } } }` // Unmarshal the DynamoDB object string into a DynamoDBObject struct var dynamoDBObject DynamoDBObject json.Unmarshal([]byte(dynamoDBObjectString), &dynamoDBObject) // Marshal the DynamoDBObject struct into a JSON object jsonObject, _ := json.Marshal(dynamoDBObject) // Print the JSON object fmt.Println(string(jsonObject)) } In this example, the DynamoDBObject struct is defined to represent the DynamoDB object, with the fields corresponding to the keys and values in the object. Then, the json.Unmarshal function is used to convert the DynamoDB object string into a DynamoDBObject struct. Finally, the json.Marshal function is used to convert the DynamoDBObject struct into a JSON object, which can then be printed or used as needed. Here is the output of the code: {"Item":{"Id":{"S":"db31"},"CreateTime":{"N":"1647882237618915000"}}} This is the JSON representation of the DynamoDB object string, with the keys and values of the object represented as fields and values in the JSON object.
Convert DynamoDB JSON to AttributeValue, Go Object or Json
I am trying to convert a simple DynamoDB object string:

{
  "Item": {
    "Id": {
      "S": "db31"
    },
    "CreateTime": {
      "N": "1647882237618915000"
    }
  }
}

to either dynamodb.AttributeValue and then map it to a Go object (a Go struct), or convert it to a simple JSON Go object. I think there are similar answers (1, 2, 3) in Java, but I didn't find a similar implementation in Go.
[ "You could create a struct type and use json.Unmarshal to unmarshal the JSON string like this:\npackage main\n\nimport (\n \"encoding/json\"\n \"fmt\"\n \"os\"\n)\n\ntype Record struct {\n Item struct {\n Id struct {\n S string\n }\n CreateTime struct {\n N string\n }\n }\n}\n\nfunc main() {\n\n str := `{\n \"Item\": {\n \"Id\": {\n \"S\": \"db31\"\n },\n \"CreateTime\": {\n \"N\": \"1647882237618915000\"\n }\n }\n}`\n\n var record Record\n if err := json.Unmarshal([]byte(str), &record); err != nil {\n fmt.Fprintf(os.Stderr, \"unmarshal failed: %v\", err)\n os.Exit(1)\n }\n\n fmt.Printf(\"%s %s\", record.Item.Id.S, record.Item.CreateTime.N)\n}\n\n\n\nIf you want a different approach, and want to transform the result into a structure that is different than the JSON, you could use a library like gjson.\nHere is an example \"flattening\" the result into a simpler struct:\npackage main\n\nimport (\n \"fmt\"\n \"github.com/tidwall/gjson\"\n)\n\ntype Record struct {\n Id string\n CreateTime string\n}\n\nfunc main() {\n\n str := `{\n \"Item\": {\n \"Id\": {\n \"S\": \"db31\"\n },\n \"CreateTime\": {\n \"N\": \"1647882237618915000\"\n }\n }\n}`\n\n values := gjson.GetMany(str, \"Item.Id.S\", \"Item.CreateTime.N\")\n\n record := Record{\n Id: values[0].Str,\n CreateTime: values[1].Str,\n }\n\n fmt.Printf(\"%s %s\", record.Id, record.CreateTime)\n}\n\n", "I hope this is not against the TOS. Please let me know if it is, and I will delete the answer. I tried this question on the OpenAI Chatbot and here is it's answer. Works locally for me [https://chat.openai.com/chat].\nHere is an example of how you can convert a DynamoDB object string to a JSON object in Go:\n// Import the necessary packages\nimport (\n \"encoding/json\"\n \"fmt\"\n)\n\n// Define a struct to represent the DynamoDB object\ntype DynamoDBObject struct {\n Item struct {\n Id struct {\n S string `json:\"S\"`\n } `json:\"Id\"`\n CreateTime struct {\n N string `json:\"N\"`\n } `json:\"CreateTime\"`\n } `json:\"Item\"`\n}\n\nfunc main() {\n // Define the DynamoDB object string\n dynamoDBObjectString := `{\n \"Item\": {\n \"Id\": {\n \"S\": \"db31\"\n },\n \"CreateTime\": {\n \"N\": \"1647882237618915000\"\n }\n }\n }`\n\n // Unmarshal the DynamoDB object string into a DynamoDBObject struct\n var dynamoDBObject DynamoDBObject\n json.Unmarshal([]byte(dynamoDBObjectString), &dynamoDBObject)\n\n // Marshal the DynamoDBObject struct into a JSON object\n jsonObject, _ := json.Marshal(dynamoDBObject)\n\n // Print the JSON object\n fmt.Println(string(jsonObject))\n}\n\nIn this example, the DynamoDBObject struct is defined to represent the DynamoDB object, with the fields corresponding to the keys and values in the object. Then, the json.Unmarshal function is used to convert the DynamoDB object string into a DynamoDBObject struct. Finally, the json.Marshal function is used to convert the DynamoDBObject struct into a JSON object, which can then be printed or used as needed.\nHere is the output of the code:\n {\"Item\":{\"Id\":{\"S\":\"db31\"},\"CreateTime\":{\"N\":\"1647882237618915000\"}}}\n\nThis is the JSON representation of the DynamoDB object string, with the keys and values of the object represented as fields and values in the JSON object.\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_dynamodb", "aws_sdk_go", "aws_sdk_go_v2", "go", "json" ]
stackoverflow_0074635931_amazon_dynamodb_aws_sdk_go_aws_sdk_go_v2_go_json.txt
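One detail neither answer touches: the N value looks like a Unix timestamp in nanoseconds, so after unmarshalling you likely want it as a time.Time. A stdlib-only sketch, assuming nanosecond precision:

package main

import (
	"fmt"
	"strconv"
	"time"
)

func main() {
	// Value taken from the question's CreateTime.N field.
	n := "1647882237618915000"
	ns, err := strconv.ParseInt(n, 10, 64)
	if err != nil {
		panic(err)
	}
	created := time.Unix(0, ns) // interpret as nanoseconds since the epoch
	fmt.Println(created.UTC())  // 2022-03-21 17:03:57.618915 +0000 UTC
}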
Q: Python: 'float' object is not iterable when computing an average

life_max = -5
life_min = 999
country_max = ""
country_min = ""

answer = int(input("Which year would you like to enter? "))

with open("life.csv") as f:
    next(f)
    for line in f:
        parts = line.split(",")
        life = float(parts[3])
        year = int(parts[2])
        country = parts[0].strip()
        code = parts[1].strip()

        if life > life_max:
            life_max = life
            country_max = country
        if life < life_min:
            life_min = life
            country_min = country

average = range(sum(life)) / range(len(life))
print(f"The average is {average}")

print(f"The country with the worst life expectancy is {country_min} at {life_min} years.")
print(f"The country with the best life expectancy is {country_max} at {life_max} years.")

I'm having some trouble finding the average life expectancy for a given year; it raises a 'float' object is not iterable error and I'm pretty lost.
Python: 'float' object is not iterable when computing an average
life_max = -5
life_min = 999
country_max = ""
country_min = ""

answer = int(input("Which year would you like to enter? "))

with open("life.csv") as f:
    next(f)
    for line in f:
        parts = line.split(",")
        life = float(parts[3])
        year = int(parts[2])
        country = parts[0].strip()
        code = parts[1].strip()

        if life > life_max:
            life_max = life
            country_max = country
        if life < life_min:
            life_min = life
            country_min = country

average = range(sum(life)) / range(len(life))
print(f"The average is {average}")

print(f"The country with the worst life expectancy is {country_min} at {life_min} years.")
print(f"The country with the best life expectancy is {country_max} at {life_max} years.")

I'm having some trouble finding the average life expectancy for a given year; it raises a 'float' object is not iterable error and I'm pretty lost.
[]
[]
[ "sum needs a list of values to add to each other ;)\n", "A bit confusing answering this without your input and what line throws the error, but I'm guessing it's due to the 'sum(life)' - life seems to be a float while sum expects an iterable\n" ]
[ -2, -2 ]
[ "python", "python_3.x" ]
stackoverflow_0074660613_python_python_3.x.txt
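For reference, a sketch of one way to make the average work: sum() needs an iterable, and life here is a single float, so accumulate a running total and count instead — filtering on the year the user typed, which seems to be the intent:

total = 0.0
count = 0

answer = int(input("Which year would you like to enter? "))

with open("life.csv") as f:
    next(f)  # skip the header row
    for line in f:
        parts = line.split(",")
        life = float(parts[3])
        year = int(parts[2])
        if year == answer:  # only rows for the requested year
            total += life
            count += 1

if count:
    print(f"The average is {total / count}")
else:
    print("No rows found for that year")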
Q: How to write a javascript algorithm to solve a logical boolean and/or/xor circuit diagram I'm playing a game with the following circuit diagram. 3 inputs can be turned on. The solution of the game is to turn ON inputs 3, 4 and 8. Here are the games rules: The 3 wires of this circuit have been torn off. You have to choose where to plug them in to make the laser work again. The integrated circuit has AND, OR and XOR logic connectors. AND: The 2 wires on the left must be powered for the right wire to be powered. OR : At least one of the 2 wires on the left must be powered for the right wire to be powered. XOR : Only one of the 2 wires on the left must be powered for the right wire to be powered. I would like to write an JavaScript algorithm to solve it, but I have no clue on how to write it... I tried brute forcing using permutations but didn't manage to write the logical condition test. A: I didn't manage to write the logical condition test Label each net of wires, and use variables. It's unclear how the backfeed of the output into the input should be handled, you may assume a diode or an implicit OR condition. const circuit(inputs) { const and0 = inputs[0] && inputs[1]; const input2 = inputs[2] || and0; // implicit const and1 = input2 && inputs[3]; const and2 = inputs[4] && inputs[5]; const input6 = inputs[6] || and2; // implicit const or0 = input6 || inputs[7]; const or1 = and1 || and2; // or `and1 || input6`? const xor = or0 != and2; // or `or0 != input6`? const and3 = or1 && xor; return and3; } You can now bruteforce this with an array of 8 booleans. A: No easy way to do that I guess Here is my attempt: function test(a, b, c, d, e, f, g, h) { const ab = a && b; const abcd = (ab || c) && d; const ef = e && f; const efgh = ef || g || h; const abcdef = abcd || ef; const efgh2 = ef !== efgh; const abcdefgh = abcdef && efgh2; return abcdefgh; } Playground: const result = document.querySelector('#result') const wrapper = document.querySelector('#wrapper') const inputs = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'].map(e => { const div = document.createElement('div') const label = document.createElement('label') label.textContent = e label.setAttribute('for', e) div.appendChild(label) const input = document.createElement('input') input.setAttribute('type', 'checkbox') input.setAttribute('id', e) div.appendChild(input) wrapper.appendChild(div) return input }) update() wrapper.addEventListener('change', update) function update() { const value = test(...inputs.map(e => e.checked)) result.textContent = value ? 'true' : 'false' } function test(a, b, c, d, e, f, g, h) { const ab = a && b const abcd = (ab || c) && d const ef = e && f const efgh = (ef || g || h) const abcdef = abcd || ef const efgh2 = ef !== efgh const abcdefgh = abcdef && efgh2 return abcdefgh } <div id="wrapper"></div> <p>result: <span id="result"></span></p> Auto-solver: for (const inp of getInputs()) { const result = test(...inp) if (result) { const values = inp.map((e, i) => e ? i + 1 : null).filter(e => e !== null) console.log(values) break } } function test(a, b, c, d, e, f, g, h) { const ab = a && b const abcd = (ab || c) && d const ef = e && f const efgh = (ef || g || h) const abcdef = abcd || ef const efgh2 = ef !== efgh const abcdefgh = abcdef && efgh2 return abcdefgh } function* getInputs() { for (let i = 0; i < 2 ** 8; i += 1) { yield [...i.toString(2).padStart(8, '0')].map(e => e === '1') } }
How to write a javascript algorithm to solve a logical boolean and/or/xor circuit diagram
I'm playing a game with the following circuit diagram. 3 inputs can be turned on. The solution of the game is to turn ON inputs 3, 4 and 8.
Here are the game's rules:

The 3 wires of this circuit have been torn off. You have to choose where to plug them in to make the laser work again. The integrated circuit has AND, OR and XOR logic connectors.

AND: The 2 wires on the left must be powered for the right wire to be powered.
OR: At least one of the 2 wires on the left must be powered for the right wire to be powered.
XOR: Only one of the 2 wires on the left must be powered for the right wire to be powered.

I would like to write a JavaScript algorithm to solve it, but I have no clue how to write it. I tried brute-forcing using permutations but didn't manage to write the logical condition test.
[ "\nI didn't manage to write the logical condition test\n\nLabel each net of wires, and use variables. It's unclear how the backfeed of the output into the input should be handled, you may assume a diode or an implicit OR condition.\nconst circuit(inputs) {\n const and0 = inputs[0] && inputs[1];\n const input2 = inputs[2] || and0; // implicit\n const and1 = input2 && inputs[3];\n const and2 = inputs[4] && inputs[5];\n const input6 = inputs[6] || and2; // implicit\n const or0 = input6 || inputs[7];\n const or1 = and1 || and2; // or `and1 || input6`?\n const xor = or0 != and2; // or `or0 != input6`?\n const and3 = or1 && xor;\n return and3;\n}\n\nYou can now bruteforce this with an array of 8 booleans.\n", "No easy way to do that I guess\nHere is my attempt:\nfunction test(a, b, c, d, e, f, g, h) {\n const ab = a && b;\n const abcd = (ab || c) && d;\n const ef = e && f;\n const efgh = ef || g || h;\n\n const abcdef = abcd || ef;\n const efgh2 = ef !== efgh;\n\n const abcdefgh = abcdef && efgh2;\n\n return abcdefgh;\n}\n\nPlayground:\n\n\nconst result = document.querySelector('#result')\nconst wrapper = document.querySelector('#wrapper')\nconst inputs = ['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h'].map(e => {\n const div = document.createElement('div')\n\n const label = document.createElement('label')\n label.textContent = e\n label.setAttribute('for', e)\n div.appendChild(label)\n\n const input = document.createElement('input')\n input.setAttribute('type', 'checkbox')\n input.setAttribute('id', e)\n div.appendChild(input)\n\n wrapper.appendChild(div)\n\n return input\n})\n\nupdate()\n\nwrapper.addEventListener('change', update)\n\nfunction update() {\n const value = test(...inputs.map(e => e.checked))\n result.textContent = value ? 'true' : 'false'\n}\n\nfunction test(a, b, c, d, e, f, g, h) {\n const ab = a && b\n const abcd = (ab || c) && d\n const ef = e && f\n const efgh = (ef || g || h)\n\n const abcdef = abcd || ef\n const efgh2 = ef !== efgh\n\n const abcdefgh = abcdef && efgh2\n\n return abcdefgh\n}\n<div id=\"wrapper\"></div>\n<p>result: <span id=\"result\"></span></p>\n\n\n\nAuto-solver:\n\n\nfor (const inp of getInputs()) {\n const result = test(...inp)\n if (result) {\n const values = inp.map((e, i) => e ? i + 1 : null).filter(e => e !== null)\n console.log(values)\n break\n }\n}\n\nfunction test(a, b, c, d, e, f, g, h) {\n const ab = a && b\n const abcd = (ab || c) && d\n const ef = e && f\n const efgh = (ef || g || h)\n\n const abcdef = abcd || ef\n const efgh2 = ef !== efgh\n\n const abcdefgh = abcdef && efgh2\n\n return abcdefgh\n}\n\nfunction* getInputs() {\n for (let i = 0; i < 2 ** 8; i += 1) {\n yield [...i.toString(2).padStart(8, '0')].map(e => e === '1')\n }\n}\n\n\n\n" ]
[ 3, 2 ]
[]
[]
[ "algorithm", "javascript", "logic", "logical_operators", "logical_or" ]
stackoverflow_0074660240_algorithm_javascript_logic_logical_operators_logical_or.txt
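To double-check the puzzle's stated answer (inputs 3, 4 and 8), here is a self-contained sketch that reuses the second answer's test() logic but keeps enumerating instead of stopping at the first hit, restricted to exactly three powered inputs as the game requires:

function test(a, b, c, d, e, f, g, h) {
  const ab = a && b;
  const abcd = (ab || c) && d;
  const ef = e && f;
  const efgh = ef || g || h;
  const abcdef = abcd || ef;
  const efgh2 = ef !== efgh;
  return abcdef && efgh2;
}

for (let i = 0; i < 2 ** 8; i += 1) {
  const inputs = [...i.toString(2).padStart(8, "0")].map(c => c === "1");
  if (inputs.filter(Boolean).length !== 3) continue; // exactly 3 wires plugged in
  if (test(...inputs)) {
    console.log(inputs.flatMap((v, idx) => (v ? [idx + 1] : [])).join(", "));
  }
}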
Q: Fill in gaps between irregular end and start timestamps I am working on a simple timesheet/time tracking app in Shiny for personal use. The app will record timestamps for when I start and stop activities. However, there are times during the day where there are natural gaps between specific tasks where you still do stuff. These in-between periods are not captured by the app, but are present in the data as "gaps" between the timestamps. Sample data looks like this (dput of the data at the end of the post): # A tibble: 9 x 3 start end activity <dttm> <dttm> <chr> 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2 3 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2 4 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1 5 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4 6 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5 7 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2 8 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1 9 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2 The gaps in the data are obvious. For example, on the 28th there is no gap between the first and the second entry (end time of the first entry is equal to the start time of the second entry). There is, however, a gap between the second entry and the third entry (the end time of the second entry differs from the third entry). We can find similar gaps for the other days in the sample data. What I want to do is fill in these gaps with an activity called "Other", such that for each day there are no gaps between the start of the first and the end of the last entry. That is, all existing gaps are filled in. The desired output would look like this: # A tibble: 13 x 3 start end activity <dttm> <dttm> <chr> 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2 3 2022-11-28 11:39:05 2022-11-28 12:01:00 Other 4 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2 5 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1 6 2022-11-29 10:50:30 2022-11-29 10:50:31 Other 7 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4 8 2022-11-29 11:00:15 2022-11-29 12:00:00 Other 9 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5 10 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2 11 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1 12 2022-11-30 10:00:00 2022-11-30 16:03:05 Other 13 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2 The data will be created daily for the forseeable future, so potentially the solution will have to work on larger datasets and a vectorized approach is preferable. Sofar, I've been working within the tidyverse and with lubridate. I am not sure if there is something simple/easy that I've overlookd (I hope so). The first thing I thought about was to write a loop or using a lapply type expression. This can quicly get out hand as the data grows unless I remember to always fill in or run checks and fill in data regularly (I will probably get to this part of the app eventually). Alternatively, I started thinking about pivoting the data longer creating groups of 2 matches with start and end times for each day to work out the gaps. This could potentially be quick, but I struggled to find a good way of setting up the problem. If it matters, the data is submitted to a local SQLite database everytime an entry is added. Any help/input on this is much appreciated. 
Sample data: library(tidyverse) library(lubridate) db <- structure(list(start = structure(c(1669629600, 1669632630, 1669636860, 1669716000, 1669719031, 1669723200, 1669726800, 1669795205, 1669824185 ), class = c("POSIXct", "POSIXt"), tzone = "UTC"), end = structure(c(1669632630, 1669635545, 1669651845, 1669719030, 1669719615, 1669726800, 1669737600, 1669802400, 1669827600), class = c("POSIXct", "POSIXt"), tzone = "UTC"), activity = c("Activity 1", "Activity 2", "Activity 2", "Activity 1", "Activity 4", "Activity 5", "Activity 2", "Activity 1", "Activity 2" )), row.names = c(NA, -9L), class = c("tbl_df", "tbl", "data.frame" )) A: Here a way to do it: library(dplyr) library(lubridate) db %>% bind_rows( data.frame( start = db %>% group_by(day(start)) %>% filter(end != lead(start)) %>% pull(end), end = NA, activity= "Other" ) ) %>% arrange(start) %>% mutate(end = if_else(is.na(end),lead(start),end)) # A tibble: 13 x 3 start end activity <dttm> <dttm> <chr> 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2 3 2022-11-28 11:39:05 2022-11-28 12:01:00 Other 4 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2 5 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1 6 2022-11-29 10:50:30 2022-11-29 10:50:31 Other 7 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4 8 2022-11-29 11:00:15 2022-11-29 12:00:00 Other 9 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5 10 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2 11 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1 12 2022-11-30 10:00:00 2022-11-30 16:03:05 Other 13 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2 A: Here is another approach using tidyverse. Add a column to indicate next row start time. Then, filter for rows where there are gaps (end time is less than the next row start time). Change those rows to "Other" and revise the times for the gap. Then, use bind_rows to add back to original data.frame. library(tidyverse) library(lubridate) db %>% group_by(day(start)) %>% mutate(nextstart = lead(start)) %>% ungroup() %>% filter(end < nextstart) %>% transmute(start = end, end = nextstart, activity = "Other") %>% bind_rows(db) %>% arrange(start) Output start end activity <dttm> <dttm> <chr> 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2 3 2022-11-28 11:39:05 2022-11-28 12:01:00 Other 4 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2 5 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1 6 2022-11-29 10:50:30 2022-11-29 10:50:31 Other 7 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4 8 2022-11-29 11:00:15 2022-11-29 12:00:00 Other 9 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5 10 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2 11 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1 12 2022-11-30 10:00:00 2022-11-30 16:03:05 Other 13 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2
Fill in gaps between irregular end and start timestamps
I am working on a simple timesheet/time tracking app in Shiny for personal use. The app will record timestamps for when I start and stop activities. However, there are times during the day where there are natural gaps between specific tasks where you still do stuff. These in-between periods are not captured by the app, but are present in the data as "gaps" between the timestamps. Sample data looks like this (dput of the data at the end of the post): # A tibble: 9 x 3 start end activity <dttm> <dttm> <chr> 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2 3 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2 4 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1 5 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4 6 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5 7 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2 8 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1 9 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2 The gaps in the data are obvious. For example, on the 28th there is no gap between the first and the second entry (end time of the first entry is equal to the start time of the second entry). There is, however, a gap between the second entry and the third entry (the end time of the second entry differs from the third entry). We can find similar gaps for the other days in the sample data. What I want to do is fill in these gaps with an activity called "Other", such that for each day there are no gaps between the start of the first and the end of the last entry. That is, all existing gaps are filled in. The desired output would look like this: # A tibble: 13 x 3 start end activity <dttm> <dttm> <chr> 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2 3 2022-11-28 11:39:05 2022-11-28 12:01:00 Other 4 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2 5 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1 6 2022-11-29 10:50:30 2022-11-29 10:50:31 Other 7 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4 8 2022-11-29 11:00:15 2022-11-29 12:00:00 Other 9 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5 10 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2 11 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1 12 2022-11-30 10:00:00 2022-11-30 16:03:05 Other 13 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2 The data will be created daily for the forseeable future, so potentially the solution will have to work on larger datasets and a vectorized approach is preferable. Sofar, I've been working within the tidyverse and with lubridate. I am not sure if there is something simple/easy that I've overlookd (I hope so). The first thing I thought about was to write a loop or using a lapply type expression. This can quicly get out hand as the data grows unless I remember to always fill in or run checks and fill in data regularly (I will probably get to this part of the app eventually). Alternatively, I started thinking about pivoting the data longer creating groups of 2 matches with start and end times for each day to work out the gaps. This could potentially be quick, but I struggled to find a good way of setting up the problem. If it matters, the data is submitted to a local SQLite database everytime an entry is added. Any help/input on this is much appreciated. 
Sample data: library(tidyverse) library(lubridate) db <- structure(list(start = structure(c(1669629600, 1669632630, 1669636860, 1669716000, 1669719031, 1669723200, 1669726800, 1669795205, 1669824185 ), class = c("POSIXct", "POSIXt"), tzone = "UTC"), end = structure(c(1669632630, 1669635545, 1669651845, 1669719030, 1669719615, 1669726800, 1669737600, 1669802400, 1669827600), class = c("POSIXct", "POSIXt"), tzone = "UTC"), activity = c("Activity 1", "Activity 2", "Activity 2", "Activity 1", "Activity 4", "Activity 5", "Activity 2", "Activity 1", "Activity 2" )), row.names = c(NA, -9L), class = c("tbl_df", "tbl", "data.frame" ))
[ "Here a way to do it:\nlibrary(dplyr)\nlibrary(lubridate)\n\ndb %>% \n bind_rows(\n data.frame(\n start = db %>% \n group_by(day(start)) %>% \n filter(end != lead(start)) %>% \n pull(end),\n end = NA,\n activity= \"Other\"\n )\n ) %>% \n arrange(start) %>% \n mutate(end = if_else(is.na(end),lead(start),end))\n\n# A tibble: 13 x 3\n start end activity \n <dttm> <dttm> <chr> \n 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1\n 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2\n 3 2022-11-28 11:39:05 2022-11-28 12:01:00 Other \n 4 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2\n 5 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1\n 6 2022-11-29 10:50:30 2022-11-29 10:50:31 Other \n 7 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4\n 8 2022-11-29 11:00:15 2022-11-29 12:00:00 Other \n 9 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5\n10 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2\n11 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1\n12 2022-11-30 10:00:00 2022-11-30 16:03:05 Other \n13 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2\n\n", "Here is another approach using tidyverse. Add a column to indicate next row start time. Then, filter for rows where there are gaps (end time is less than the next row start time). Change those rows to \"Other\" and revise the times for the gap. Then, use bind_rows to add back to original data.frame.\nlibrary(tidyverse)\nlibrary(lubridate)\n\ndb %>%\n group_by(day(start)) %>%\n mutate(nextstart = lead(start)) %>%\n ungroup() %>%\n filter(end < nextstart) %>%\n transmute(start = end, end = nextstart, activity = \"Other\") %>%\n bind_rows(db) %>%\n arrange(start)\n\nOutput\n start end activity \n <dttm> <dttm> <chr> \n 1 2022-11-28 10:00:00 2022-11-28 10:50:30 Activity 1\n 2 2022-11-28 10:50:30 2022-11-28 11:39:05 Activity 2\n 3 2022-11-28 11:39:05 2022-11-28 12:01:00 Other \n 4 2022-11-28 12:01:00 2022-11-28 16:10:45 Activity 2\n 5 2022-11-29 10:00:00 2022-11-29 10:50:30 Activity 1\n 6 2022-11-29 10:50:30 2022-11-29 10:50:31 Other \n 7 2022-11-29 10:50:31 2022-11-29 11:00:15 Activity 4\n 8 2022-11-29 11:00:15 2022-11-29 12:00:00 Other \n 9 2022-11-29 12:00:00 2022-11-29 13:00:00 Activity 5\n10 2022-11-29 13:00:00 2022-11-29 16:00:00 Activity 2\n11 2022-11-30 08:00:05 2022-11-30 10:00:00 Activity 1\n12 2022-11-30 10:00:00 2022-11-30 16:03:05 Other \n13 2022-11-30 16:03:05 2022-11-30 17:00:00 Activity 2\n\n" ]
[ 1, 1 ]
[]
[]
[ "lubridate", "r" ]
stackoverflow_0074643676_lubridate_r.txt
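Since the data grows daily, it may help to wrap the second answer's pipeline in a reusable function. One caveat worth noting: day(start) alone repeats across months (Nov 30 and Dec 30 land in the same group), so this sketch groups by the full date instead; columns are assumed to be start/end/activity as above:

library(dplyr)
library(lubridate)

fill_gaps <- function(db) {
  db %>%
    group_by(day = as_date(start)) %>%
    mutate(nextstart = lead(start)) %>%
    ungroup() %>%
    filter(end < nextstart) %>%
    transmute(start = end, end = nextstart, activity = "Other") %>%
    bind_rows(db) %>%
    arrange(start)
}

filled <- fill_gaps(db)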
Q: SciTE4AutoHotkey debugger not working

I installed SciTE4AutoHotkey with the latest AutoHotkey version and everything works fine, but I cannot activate the debugger. When I set a breakpoint next to a line number and enter debug mode, the code does not stop on [F5] or Run. Any suggestions?

A: End the process InternalAHK.exe*32

A: Save your file first. Set your stop bullets (red dots) by clicking just next to the line numbers. Go to the first line of code, press debug (the little insect) first (the icon changes and new single-line icons appear), then press Run (blue triangle) or single line (blue arrows). "Run" will now run the code until the first red dot; you can then continue with single-line execution (blue arrows) or continue with Run (blue triangle) until the next red stop bullet. Works flawlessly on my system (Win 7, AHK_L).

A: Fix this issue by simply selecting a platform: just click this and select Default from the dropdown menu. (I was not able to run and debug scripts inside SciTE4AutoHotkey otherwise.)

A: Sometimes I need to close the file and reopen it, and then I'm able to create breakpoints.
SciTE4AutoHotkey debugger not working
I installed SciTE4AutoHotkey with the latest AutoHotkey version and everything works fine, but I cannot activate the debugger. When I set a breakpoint next to a line number and enter debug mode, the code does not stop on [F5] or Run. Any suggestions?
[ "End the process InternalAHK.exe*32\n", "Save your file first.\nSet your stop bullets (red dots) by clicking just next to the line numbers.\nGo to first line of code, press debug (little insect) first (Icon changes and new single line icons appear), then press Run (blue triangle) or single line (blue arrows).\n\"Run\" will now run the code until the first red dot, you can then continue with single line execution (Blue arrows) or continue with run (blue triangle) until the next red stop bullet.\nWorks flawlessly with my system (Win-7, AHK_L).\n", "Fix this issue by simply selecting a platform, just click this \n and select Default from the dropdown menu.\n(I was not able to run and debug scripts inside SciTE4AutoHotkey)\n", "Sometimes, I need to close the file and reopen it and then I'm able to create breakpoints.\n" ]
[ 1, 0, 0, 0 ]
[]
[]
[ "autohotkey" ]
stackoverflow_0014170170_autohotkey.txt
Q: UICollectionViewDragDelegate isn't working I'm successfully implementing a UICollectionView that display some dummy texts on screen. Everything is working fine and now I added the drag feature implementing self.collectionView.dragDelegate = self and the UICollectionViewDelegateFlowLayout, but it doesn't work. Am I missing something more? final class XptoComponent: UIView { private var titles = ["Xpto 1", "Xpto 2", "Xpto 3", "Xpto 4", "Xpto 5", "Xpto 6", "Xpto 7", "Xpto 8", "Xpto 9"] private var collectionView: UICollectionView = { let collectionViewLayout = UICollectionViewFlowLayout() collectionViewLayout.minimumInteritemSpacing = 0 collectionViewLayout.minimumLineSpacing = 0 collectionViewLayout.itemSize = CGSize(width: UIScreen.main.bounds.size.width, height: NumericConstant.ViewCell.height) collectionViewLayout.sectionInset = .zero let collectionView = UICollectionView(frame: .zero, collectionViewLayout: collectionViewLayout).usingAutoLayout() collectionView.registerCell(XptoCollectionViewCell.self) return collectionView }() init() { super.init(frame: .zero) self.collectionView.dataSource = self self.collectionView.delegate = self self.collectionView.dragDelegate = self self.defineSubviews() self.defineSubviewsConstraints() } private func defineSubviewsConstraints() { NSLayoutConstraint.activate([ self.collectionView.topAnchor.constraint(equalTo: self.topAnchor), self.collectionView.leadingAnchor.constraint(equalTo: self.leadingAnchor), self.collectionView.trailingAnchor.constraint(equalTo: self.trailingAnchor), self.collectionView.bottomAnchor.constraint(equalTo: self.safeAreaLayoutGuide.bottomAnchor) ]) } private func defineSubviews() { self.addSubview(self.collectionView) } required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") } } extension XptoComponent: UICollectionViewDataSource { func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return self.titles.count } func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCell(XptoCollectionViewCell.self, for: indexPath) cell?.titleLabel.text = self.titles[indexPath.row] return cell ?? UICollectionViewCell() } } extension XptoComponent: UICollectionViewDelegateFlowLayout { func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize { return CGSize(width: self.frame.width, height: 100) } } extension XptoComponent: UICollectionViewDragDelegate { func collectionView(_ collectionView: UICollectionView, itemsForBeginning session: UIDragSession, at indexPath: IndexPath) -> [UIDragItem] { let title = self.titles[indexPath.item] let itemProvider = NSItemProvider(object: title as NSString) let dragItem = UIDragItem(itemProvider: itemProvider) return [dragItem] } } A: You need to set this variable dragInteractionEnabled to true collectionView.dragInteractionEnabled = true A: Make sure to return something in the collectionView(_:itemsForBeginning:at:) function. Returning an empty array works for a UITableView but not for a UICollectionView.
UICollectionViewDragDelegate isn't working
I'm successfully implementing a UICollectionView that display some dummy texts on screen. Everything is working fine and now I added the drag feature implementing self.collectionView.dragDelegate = self and the UICollectionViewDelegateFlowLayout, but it doesn't work. Am I missing something more? final class XptoComponent: UIView { private var titles = ["Xpto 1", "Xpto 2", "Xpto 3", "Xpto 4", "Xpto 5", "Xpto 6", "Xpto 7", "Xpto 8", "Xpto 9"] private var collectionView: UICollectionView = { let collectionViewLayout = UICollectionViewFlowLayout() collectionViewLayout.minimumInteritemSpacing = 0 collectionViewLayout.minimumLineSpacing = 0 collectionViewLayout.itemSize = CGSize(width: UIScreen.main.bounds.size.width, height: NumericConstant.ViewCell.height) collectionViewLayout.sectionInset = .zero let collectionView = UICollectionView(frame: .zero, collectionViewLayout: collectionViewLayout).usingAutoLayout() collectionView.registerCell(XptoCollectionViewCell.self) return collectionView }() init() { super.init(frame: .zero) self.collectionView.dataSource = self self.collectionView.delegate = self self.collectionView.dragDelegate = self self.defineSubviews() self.defineSubviewsConstraints() } private func defineSubviewsConstraints() { NSLayoutConstraint.activate([ self.collectionView.topAnchor.constraint(equalTo: self.topAnchor), self.collectionView.leadingAnchor.constraint(equalTo: self.leadingAnchor), self.collectionView.trailingAnchor.constraint(equalTo: self.trailingAnchor), self.collectionView.bottomAnchor.constraint(equalTo: self.safeAreaLayoutGuide.bottomAnchor) ]) } private func defineSubviews() { self.addSubview(self.collectionView) } required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") } } extension XptoComponent: UICollectionViewDataSource { func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int { return self.titles.count } func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell { let cell = collectionView.dequeueReusableCell(XptoCollectionViewCell.self, for: indexPath) cell?.titleLabel.text = self.titles[indexPath.row] return cell ?? UICollectionViewCell() } } extension XptoComponent: UICollectionViewDelegateFlowLayout { func collectionView(_ collectionView: UICollectionView, layout collectionViewLayout: UICollectionViewLayout, sizeForItemAt indexPath: IndexPath) -> CGSize { return CGSize(width: self.frame.width, height: 100) } } extension XptoComponent: UICollectionViewDragDelegate { func collectionView(_ collectionView: UICollectionView, itemsForBeginning session: UIDragSession, at indexPath: IndexPath) -> [UIDragItem] { let title = self.titles[indexPath.item] let itemProvider = NSItemProvider(object: title as NSString) let dragItem = UIDragItem(itemProvider: itemProvider) return [dragItem] } }
[ "You need to set this variable dragInteractionEnabled to true\n collectionView.dragInteractionEnabled = true\n\n", "Make sure to return something in the collectionView(_:itemsForBeginning:at:) function. Returning an empty array works for a UITableView but not for a UICollectionView.\n" ]
[ 6, 0 ]
[]
[]
[ "drag_and_drop", "ios", "swift", "uicollectionview", "uicollectionviewdragdelegate" ]
stackoverflow_0063845841_drag_and_drop_ios_swift_uicollectionview_uicollectionviewdragdelegate.txt
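To make the dragInteractionEnabled fix above concrete: it matters because the property defaults to true on iPad but false on iPhone, so on an iPhone the drag delegate is never consulted without it. Below is a minimal self-contained sketch of the setup; it uses a plain UICollectionViewCell instead of the question's custom XptoCollectionViewCell and layout helpers, which are assumptions of the original code.

import UIKit

final class DragDemoView: UIView, UICollectionViewDataSource, UICollectionViewDragDelegate {
    private let titles = ["Xpto 1", "Xpto 2", "Xpto 3"]
    private let collectionView: UICollectionView = {
        let cv = UICollectionView(frame: .zero, collectionViewLayout: UICollectionViewFlowLayout())
        cv.register(UICollectionViewCell.self, forCellWithReuseIdentifier: "cell")
        cv.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        return cv
    }()

    override init(frame: CGRect) {
        super.init(frame: frame)
        collectionView.frame = bounds
        collectionView.dataSource = self
        collectionView.dragDelegate = self
        // Without this line the drag delegate is never called on iPhone.
        collectionView.dragInteractionEnabled = true
        addSubview(collectionView)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int {
        return titles.count
    }

    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        return collectionView.dequeueReusableCell(withReuseIdentifier: "cell", for: indexPath)
    }

    func collectionView(_ collectionView: UICollectionView, itemsForBeginning session: UIDragSession, at indexPath: IndexPath) -> [UIDragItem] {
        // Returning a non-empty array here is what allows the drag to start.
        let provider = NSItemProvider(object: titles[indexPath.item] as NSString)
        return [UIDragItem(itemProvider: provider)]
    }
}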
Q: Python, replace a word in a string from a list and iterate over it I have a simple string and a list: string = "the secret key is A" list = ["123","234","345"] I need to replace one item ("A") combining that item with another item from the list ("A123") as many times as the number of items in the list. Basically the result I would like to achieve is: "the secret key is A123" "the secret key is A234" "the secret key is A345" I know I need to use a for loop but I fail in joining together the items. A: Please don't clobber built-in names. s = "the secret key is A" lst = ["123","234","345"] item = 'A' newlst = [s.replace(item, f'{item}{tok}') for tok in lst] >>> newlst ['the secret key is A123', 'the secret key is A234', 'the secret key is A345'] Edit As rightly noted by @JohnnyMopp, the above will over-enthusiastically replace any occurrence of the item in a string such as 'And the secret key is A'. We can specify that only words matching the item should be replaced, using regex: import re s = 'And the secret key is A, I repeat: A.' lst = ['123', '234', '345'] item = 'A' newlst = [re.sub(fr'\b{item}\b', f'{item}{e}', s) for e in lst] >>> newlst ['And the secret key is A123, I repeat: A123.', 'And the secret key is A234, I repeat: A234.', 'And the secret key is A345, I repeat: A345.'] A: You can use str.replace. st = "the secret key is A" lst = ["123","234","345"] key_rep = "A" for l in lst: print(st.replace(key_rep, key_rep+l)) # Or as list_comprehension # [st.replace(key_rep, key_rep+l) for l in lst] Output: the secret key is A123 the secret key is A234 the secret key is A345 A: If I understood you correctly you can try this. string = "the secret key is A" lst = ["123", "234", "345"] res = list(map(lambda x: string + x, lst)) #You can print it in any way you want, here are some examples: print(*res) [print(i) for i in res] #...
Python, replace a word in a string from a list and iterate over it
I have a simple string and a list: string = "the secret key is A" list = ["123","234","345"] I need to replace one item ("A") combining that item with another item from the list ("A123") as many times as the number of items in the list. Basically the result I would like to achieve is: "the secret key is A123" "the secret key is A234" "the secret key is A345" I know I need to use a for loop but I fail in joining together the items.
[ "Please don't clobber reserved keywords.\ns = \"the secret key is A\"\nlst = [\"123\",\"234\",\"345\"]\n\nitem = 'A'\nnewlst = [s.replace(item, f'{item}{tok}') for tok in lst]\n\n>>> newlst\n['the secret key is A123', 'the secret key is A234', 'the secret key is A345']\n\nEdit\nAs rightly noted by @JohnnyMopp, the above will over-enthusiastically replace any occurrence of the item in a string such as 'And the secret key is A'. We can specify that only words matching the item should be replaced, using regex:\nimport re\n\ns = 'And the secret key is A, I repeat: A.'\nlst = ['123', '234', '345']\n\nitem = 'A'\nnewlst = [re.sub(fr'\\b{item}\\b', f'{item}{e}', s) for e in lst]\n\n>>> newlst\n['And the secret key is A123, I repeat: A123.',\n 'And the secret key is A234, I repeat: A234.',\n 'And the secret key is A345, I repeat: A345.']\n\n", "You can use str.replace.\nst = \"the secret key is A\"\n\nlst = [\"123\",\"234\",\"345\"]\n\nkey_rep = \"A\"\n\nfor l in lst:\n print(st.replace(key_rep, key_rep+l))\n\n# Or as list_comprehension\n# [st.replace(key_rep, key_rep+l) for l in lst]\n\nOutput:\nthe secret key is A123\nthe secret key is A234\nthe secret key is A345\n\n", "If I understood you correctly you can try this.\nstring = \"the secret key is A\"\nlst = [\"123\", \"234\", \"345\"]\n\nres = list(map(lambda x: string + x, lst))\n\n#You can print it in any way you want, here are some examples:\nprint(*res)\n[print(i for i in res)]\n#...\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "list", "python", "replace" ]
stackoverflow_0074660234_list_python_replace.txt
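One caveat on the regex variant above, which only matters if the replaced token can contain regex metacharacters: a token like "A.1" would otherwise be interpreted as a pattern, so it should be passed through re.escape before being embedded. A small sketch (the "A.1" token here is an invented example, not from the question):

import re

s = "the secret key is A.1"
item = "A.1"
# re.escape turns 'A.1' into 'A\.1' so the dot is matched literally
pattern = rf"\b{re.escape(item)}\b"
print([re.sub(pattern, f"{item}{tok}", s) for tok in ["123", "234", "345"]])
# ['the secret key is A.1123', 'the secret key is A.1234', 'the secret key is A.1345']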
Q: git checkout remote branch if it exist, else create it I can't be the first one to ask this, but I'm having trouble finding the right search results. These terms are so overloaded. Eventually I want to make some additions to a remote branch. Remote branch may not exist. So first I clone remote repo, I only have default branch locally. Then: so far I have: git checkout -b ${BRANCHNAME} origin/${BRANCHNAME} --track || git checkout -b ${BRANCHNAME} git add ... git commit -m "new stuff" git push origin ${BRANCHNAME} Is there a nicer way to do the first line? A: All branches must have a starting point; your sequence: attempt to create branch B at origin/B with origin/B as upstream; if that fails, create branch B at HEAD with no upstream can be rewritten as: if start=$(git rev-parse refs/remotes/origin/${BRANCHNAME}); then extra="--track origin/${BRANCHNAME}" else start=HEAD extra= fi git checkout -b ${BRANCHNAME} ${start} $extra (or the equivalent with git switch -c as the final command). But I don't think it can be made any shorter than the two-step || variant you already have. Using the if ... then ... else ... fi sequence, you can add --quiet --verify to the rev-parse step and avoid generating error messages, and of course you can also set additional variables as desired so that you can make or early-encode future tests (e.g., decide whether you want to use git push -u for instance). A: There is a more concise way to accomplish the same thing as the first line in your code. Instead of using git checkout with the -b flag to create a new branch and then specifying the remote branch to track, you can use the git push command with the -u flag (short for "upstream") to create the new branch and track the remote branch in one step. Here is an example of how you can use this command: git push -u origin ${BRANCHNAME} This command will create a new local branch named ${BRANCHNAME} and track the remote branch with the same name. If the remote branch does not yet exist, the command will create it for you. You can then add and commit your changes as usual, and use git push without the -u flag to push your changes to the remote branch. Here is an example of how you can use this command to combine the first three lines of your code into a single line: git push -u origin ${BRANCHNAME} && git add ... && git commit -m "new stuff" This command will create the new branch, add your changes, and commit them in one step. You can then use git push without the -u flag to push your changes to the remote branch.
git checkout remote branch if it exist, else create it
I can't be the first one to ask this, but I'm having trouble finding the right search results. These terms are so overloaded. Eventually I want to make some additions to a remote branch. Remote branch may not exist. So first I clone remote repo, I only have default branch locally. Then: so far I have: git checkout -b ${BRANCHNAME} origin/${BRANCHNAME} --track || git checkout -b ${BRANCHNAME} git add ... git commit -m "new stuff" git push origin ${BRANCHNAME} Is there a nicer way to do the first line?
[ "All branches must have a starting point; your sequence:\n\nattempt to create branch B at origin/B with origin/B as upstream;\nif that fails, create branch B at HEAD with no upstream\n\ncan be rewritten as:\nif start=$(git rev-parse refs/remotes/origin/${BRANCHNAME}); then\n extra=\"--track origin/${BRANCHNAME}\"\nelse\n start=HEAD\n extra=\nfi\ngit checkout -b ${BRANCHNAME} ${start} $extra\n\n(or the equivalent with git switch -c as the final command). But I don't think it can be made any shorter than the two-step || variant you already have.\nUsing the if ... then ... else ... fi sequence, you can add --quiet --verify to the rev-parse step and avoid generating error messages, and of course you can also set additional variables as desired so that you can make or early-encode future tests (e.g., decide whether you want to use git push -u for instance).\n", "There is a more concise way to accomplish the same thing as the first line in your code. Instead of using git checkout with the -b flag to create a new branch and then specifying the remote branch to track, you can use the git push command with the -u flag (short for \"upstream\") to create the new branch and track the remote branch in one step. Here is an example of how you can use this command:\ngit push -u origin ${BRANCHNAME}\n\nThis command will create a new local branch named ${BRANCHNAME} and track the remote branch with the same name. If the remote branch does not yet exist, the command will create it for you. You can then add and commit your changes as usual, and use git push without the -u flag to push your changes to the remote branch.\nHere is an example of how you can use this command to combine the first three lines of your code into a single line:\ngit push -u origin ${BRANCHNAME} && git add ... && git commit -m \"new stuff\"\n\nThis command will create the new branch, add your changes, and commit them in one step. You can then use git push without the -u flag to push your changes to the remote branch.\n" ]
[ 1, 1 ]
[]
[]
[ "checkout", "git", "git_branch" ]
stackoverflow_0074650458_checkout_git_git_branch.txt
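A shorter first line is possible with git switch; this is a sketch assuming git 2.23+ and a single remote (with one remote, git switch "guesses" origin/${BRANCHNAME} and creates a local tracking branch from it automatically):

git fetch origin
git switch ${BRANCHNAME} 2>/dev/null || git switch -c ${BRANCHNAME}
git add ...
git commit -m "new stuff"
git push -u origin ${BRANCHNAME}

One caution on the git push -u approach in the second answer: push needs the local branch ${BRANCHNAME} to exist already (otherwise it fails with "src refspec ... does not match any"), and it does not check out an existing remote branch, so it cannot replace the checkout step on its own.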
Q: Android Studio won't build release build because of Compose Preview functions - NonExistentClass cannot be converted to Annotation I get the following error error: incompatible types: NonExistentClass cannot be converted to Annotation @error.NonExistentClass() at @Preview("Drawer contents") @Preview("Drawer contents (dark)", uiMode = UI_MODE_NIGHT_YES) @Composable fun PreviewAppDrawer() { AppTheme { Surface { AppDrawer( currentScreen = Screen.Home, onTopLevelScreenNavigate = {}, closeDrawer = { } ) } } } during release gradle build How can I fix it? Works fine for debug build A: Fixed by adding missing dependency: implementation "androidx.compose.ui:ui-tooling-preview:$composeVersion" Final: implementation "androidx.compose.ui:ui-tooling-preview:$composeVersion" debugImplementation "androidx.compose.ui:ui-tooling:$composeVersion"
Android Studio won't build release build because of Compose Preview functions - NonExistentClass cannot be converted to Annotation
I get the following error error: incompatible types: NonExistentClass cannot be converted to Annotation @error.NonExistentClass() at @Preview("Drawer contents") @Preview("Drawer contents (dark)", uiMode = UI_MODE_NIGHT_YES) @Composable fun PreviewAppDrawer() { AppTheme { Surface { AppDrawer( currentScreen = Screen.Home, onTopLevelScreenNavigate = {}, closeDrawer = { } ) } } } during release gradle build How can I fix it? Works fine for debug build
[ "Fixed by adding missing dependency:\nimplementation \"androidx.compose.ui:ui-tooling-preview:$composeVersion\"\n\nFinal:\nimplementation \"androidx.compose.ui:ui-tooling-preview:$composeVersion\"\ndebugImplementation \"androidx.compose.ui:ui-tooling:$composeVersion\"\n\n" ]
[ 1 ]
[]
[]
[ "android", "android_jetpack_compose" ]
stackoverflow_0074660623_android_android_jetpack_compose.txt
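The likely mechanism behind the error: the @Preview annotation lives in ui-tooling-preview, while ui-tooling (which depends on it) is typically declared only as debugImplementation. A release build therefore has no @Preview class on its compile classpath, and annotation processing reports it as NonExistentClass. If the project uses the Compose BOM, a sketch of the equivalent dependency block (Groovy DSL; the BOM version shown is an assumption, use whatever matches your Compose setup):

dependencies {
    implementation platform("androidx.compose:compose-bom:2022.10.00")

    // compile-time preview annotations, safe to keep in release builds
    implementation "androidx.compose.ui:ui-tooling-preview"

    // full preview tooling, only needed while developing
    debugImplementation "androidx.compose.ui:ui-tooling"
}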
Q: Buttons don't do their intended command at click I'm making a turtle race game; it's a game where there are a few turtles who are assigned random speeds and then one turtle wins. However, just for fun, I'm trying to add a few things to the game. For example, a button to exit the game and a button to restart the race. I have made only the exit button for now, and gave it the command to exit the game. The button works, however, not at the right time. The problem is that I have a piece of code that makes the canvas (background), which is just the turtle drawing. I have another piece of code that places the buttons and tells them what to do when being clicked. And then I have a piece of code that assigns random speeds to the turtles. This is the buttons code. (The Try Again button command is not finished yet.) screen = Screen() screen.setup(width=600, height=400) def exit_game(): exit() canvas = screen.getcanvas() button = Button(canvas.master, text="Exit Game", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue") button.pack() button.place(x=150, y=530) canvas2 = screen.getcanvas() button2 = Button(canvas2.master, text="Try Again", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue" ) button2.pack() button2.place(x=50, y=530) And here is the code for assigning random numbers to the turtles. for movement in range (230): red.forward(randint(1,8)) blue.forward(randint(1,8)) purple.forward(randint(1,8)) orange.forward(randint(1,8)) The problem is that when, for example, the turtles are moving, I can press the button, but it does not do the command. Only after the movement loop goes through 230 times does it exit the game. So basically my code is just busy assigning speeds to the turtles and forgets about the button commands. Is there a way to override this somehow and make my button exit the game whenever it is clicked? Also, I did try to put the button into an infinite loop, but it did not work (maybe I did it wrong).
import turtle import time from random import randint from tkinter import * from turtle import Screen, Turtle import tkinter import tkinter as tk # Window Customization Window = turtle.Screen() Window.title('Turtle Race Game') #Complete back canvas for the game def back_canvas(): # Main drawing turtle pen = turtle.Turtle() pen.speed(0) # far left -640; far right 633 #top 330; bottom -320 # Landscape making #Making the ground pen.hideturtle() pen.color("sienna") pen.penup() pen.left(90) pen.setpos(-640, -320) pen.pendown() pen.begin_fill() pen.color("sienna") for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() #Making Racing Area for i in range(2): pen.forward(162.5) pen.color("lime") pen.begin_fill() for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() #Making Top Area pen.color("dodgerblue") pen.begin_fill() pen.forward(162.5) for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() pen.penup() # Writing "Turtle Race Game" pen.color('lime') pen.setpos(-170,250) pen.color("black") pen.write("Turtle Race Game",pen, font=("Arial", 27, 'normal')) # Making the first finishline pen.setpos(500,143) pen.right(180) for i in range(7): pen.color('black') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() pen.color('white') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() # Making the second finishline pen.setpos(520,143) for i in range(7): pen.color('white') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() pen.color('black') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() # placing main pen to right place to say who won pen.setpos(520,180) # Making all the turtles def race(): # Making the turtles, turtle 1 red = turtle.Turtle() red.speed(0) red.shape("turtle") red.penup() red.color("red") red.setpos(-550, 90) red.pendown() # Making the turtles, turtle 2 blue = turtle.Turtle() blue.shape("turtle") blue.speed(0) blue.penup() blue.color("blue") blue.setpos(-550,30) blue.pendown() # Making the turtles, turtle 3 purple = turtle.Turtle() purple.speed(0) purple.shape("turtle") purple.penup() purple.color("purple") purple.setpos(-550,-30) purple.pendown() # Making the turtles, turtle 4 orange = turtle.Turtle() orange.speed(0) orange.shape("turtle") orange.penup() orange.color("orange") orange.setpos(-550,-90) orange.pendown() race_step_count = 230 if race_step_count: red.forward(randint(1,8)) blue.forward(randint(1,8)) purple.forward(randint(1,8)) orange.forward(randint(1,8)) race_step_count -= 1 next_step = Window.after(100, race) # call this function again after 100mS else: # no more steps - the race is over! 
Window.after_cancel(next_step) # stop calling the race function def main_game(): run = True screen = Screen() screen.setup(width=600, height=400) def exit_game(): exit() canvas = screen.getcanvas() button = Button(canvas.master, text="Exit Game",command = exit_game ,width= 10, height = 4, fg = "white", bg = "dodgerblue") button.place(x=150, y=530) canvas2 = screen.getcanvas() button2 = Button(canvas2.master, text="Try Again",command = exit_game, width= 10, height = 4,fg = "white", bg = "dodgerblue" ) button2.place(x=50, y=530) #Complete back canvas for the game back_canvas() # Making all the turtles race() main_game() # Making my button do something when being clicked # Making the turtles stop when hitting the finish line time.sleep(1) #Writing who won def who_won(): for i in range(1): if blue.xcor() > red.xcor() and blue.xcor() > purple.xcor() and blue.xcor() > orange.xcor(): time.sleep(1) pen.write('Blue won!', align = "center", font =("Arial", 25, "bold")) elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor(): time.sleep(1) pen.write('Red won!', align = "center", font =("Arial", 25, "bold")) elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor(): time.sleep(1) pen.write('Purple won!', align = "center", font =("Arial", 25, "bold")) elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor(): time.sleep(1) pen.write('Orange won!', align = "center", font =("Arial", 25, "bold")) else: continue # Window doesnt close on its own Window.mainloop() A: What's happing is that your application is getting "hung up" on the long-running movement loop. Tkinter is registering your button press, but it can't do anything about it until it's done with the for loop. A quick solution to this is to define a function that handles the movements, and uses tkinter.after() to call it periodically until the "race" is over, since the built-in after method allows the UI's event loop to continue uninterrupted. # I don't know what your imports look like, so this is a boilerplate example import tkinter as tk root = tk.Tk() # this is whatever you're calling 'mainloop()' on right now race_step_count = 230 # define how 'long' the race is def race(): global race_step_count if race_step_count: red.forward(randint(1,8)) blue.forward(randint(1,8)) purple.forward(randint(1,8)) orange.forward(randint(1,8)) race_step_count -= 1 next_step = root.after(100, race) # call this function again after 100mS else: # no more steps - the race is over! root.after_cancel(next_step) # stop calling the race function To start the race, just call the function when you're ready: race() A: Looking at your code, I'm surprised it runs. Running your code, I find it doesn't. It bombs out with: AttributeError: '_Screen' object has no attribute 'after' Turtle works in two modes, standalone and embeded in a larger tkinter program. You're trying to embed a standalone turtle program. Below, I've taken apart and reassembled your turtle program to be embedded in tkinter and fully implement the functionality you describe. (It has a tkinter "Exit Game" button.) 
from random import randint from turtle import TurtleScreen, RawTurtle import tkinter as tk import sys def back_canvas(): # Landscape making # Making the ground pen.color('sienna') pen.penup() pen.setpos(-640, -162.5) pen.pendown() pen.begin_fill() for _ in range(2): pen.forward(1280) pen.right(90) pen.forward(162.5) pen.right(90) pen.end_fill() # Making Racing Area pen.color('lime') pen.begin_fill() for _ in range(2): pen.forward(1280) pen.left(90) pen.forward(325) pen.left(90) pen.end_fill() # Making Top Area pen.color('dodgerblue') pen.begin_fill() pen.left(90) pen.forward(325) for _ in range(2): pen.forward(162.5) pen.right(90) pen.forward(1280) pen.right(90) pen.end_fill() pen.penup() # Writing "Turtle Race Game" pen.color('lime') pen.setpos(0, 250) pen.color('black') pen.write("Turtle Race Game", align='center', font=('Arial', 27, 'normal')) # Making the first finish line pen.right(90) pen.setpos(500, 143) def flag(): pen.color('black') pen.begin_fill() for _ in range(4): pen.forward(20) pen.right(90) pen.end_fill() pen.forward(20) pen.color('white') pen.begin_fill() for _ in range(4): pen.forward(20) pen.right(90) pen.end_fill() pen.forward(20) for _ in range(7): flag() pen.right(90) pen.forward(40) pen.right(90) flag() pen.right(180) # placing main pen to right place to say who won pen.setpos(520, 180) race_step_count = 230 def race(): global race_step_count if race_step_count > 0: red.forward(randint(1, 8)) blue.forward(randint(1, 8)) purple.forward(randint(1, 8)) orange.forward(randint(1, 8)) race_step_count -= 1 screen.ontimer(race, 100) # call this function again after 100mS else: who_won() def who_won(): if blue.xcor() > red.xcor() and blue.xcor() > purple.xcor() and blue.xcor() > orange.xcor(): pen.write("Blue won!", align='center', font=('Arial', 25, 'bold')) elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor(): pen.write("Red won!", align='center', font=('Arial', 25, 'bold')) elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor(): pen.write("Purple won!", align='center', font=('Arial', 25, 'bold')) elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor(): pen.write("Orange won!", align='center', font=('Arial', 25, 'bold')) master = tk.Tk() master.title("Turtle Race Game") canvas = tk.Canvas(master, width=1280, height=650) canvas.pack() screen = TurtleScreen(canvas) tk.Button(master, text="Exit Game", command=sys.exit, width=0, height=4, fg='gold', bg='dodgerblue').pack() # Main drawing turtle pen = RawTurtle(screen) pen.hideturtle() pen.speed('fastest') back_canvas() red = RawTurtle(screen) red.speed('fastest') red.shape('turtle') red.penup() red.color('red') red.setpos(-550, 90) blue = red.clone() blue.color('blue') blue.setpos(-550, 30) purple = red.clone() purple.color('purple') purple.setpos(-550, -30) orange = red.clone() orange.color('orange') orange.setpos(-550, -90) race() screen.mainloop() Whenever you import the same library multiple ways, you're probably in trouble. (When you import mulitple libraries multiple ways, you're definitely in trouble.)
Buttons don't do their intended command at click
I'm making a turtle race game; it's a game where there are a few turtles who are assigned random speeds and then one turtle wins. However, just for fun, I'm trying to add a few things to the game. For example, a button to exit the game and a button to restart the race. I have made only the exit button for now, and gave it the command to exit the game. The button works, however, not at the right time. The problem is that I have a piece of code that makes the canvas (background), which is just the turtle drawing. I have another piece of code that places the buttons and tells them what to do when being clicked. And then I have a piece of code that assigns random speeds to the turtles. This is the buttons code. (The Try Again button command is not finished yet.) screen = Screen() screen.setup(width=600, height=400) def exit_game(): exit() canvas = screen.getcanvas() button = Button(canvas.master, text="Exit Game", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue") button.pack() button.place(x=150, y=530) canvas2 = screen.getcanvas() button2 = Button(canvas2.master, text="Try Again", command=exit_game, width=10, height=4, fg="white", bg="dodgerblue" ) button2.pack() button2.place(x=50, y=530) And here is the code for assigning random numbers to the turtles. for movement in range (230): red.forward(randint(1,8)) blue.forward(randint(1,8)) purple.forward(randint(1,8)) orange.forward(randint(1,8)) The problem is that when, for example, the turtles are moving, I can press the button, but it does not do the command. Only after the movement loop goes through 230 times does it exit the game. So basically my code is just busy assigning speeds to the turtles and forgets about the button commands. Is there a way to override this somehow and make my button exit the game whenever it is clicked? Also, I did try to put the button into an infinite loop, but it did not work (maybe I did it wrong).
import turtle import time from random import randint from tkinter import * from turtle import Screen, Turtle import tkinter import tkinter as tk # Window Customization Window = turtle.Screen() Window.title('Turtle Race Game') #Complete back canvas for the game def back_canvas(): # Main drawing turtle pen = turtle.Turtle() pen.speed(0) # far left -640; far right 633 #top 330; bottom -320 # Landscape making #Making the ground pen.hideturtle() pen.color("sienna") pen.penup() pen.left(90) pen.setpos(-640, -320) pen.pendown() pen.begin_fill() pen.color("sienna") for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() #Making Racing Area for i in range(2): pen.forward(162.5) pen.color("lime") pen.begin_fill() for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() #Making Top Area pen.color("dodgerblue") pen.begin_fill() pen.forward(162.5) for i in range(2): pen.forward(162.5) pen.right(90) pen.forward(1272) pen.right(90) pen.end_fill() pen.penup() # Writing "Turtle Race Game" pen.color('lime') pen.setpos(-170,250) pen.color("black") pen.write("Turtle Race Game",pen, font=("Arial", 27, 'normal')) # Making the first finishline pen.setpos(500,143) pen.right(180) for i in range(7): pen.color('black') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() pen.color('white') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() # Making the second finishline pen.setpos(520,143) for i in range(7): pen.color('white') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() pen.color('black') pen.begin_fill() pen.left(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(90) pen.forward(20) pen.right(180) pen.forward(20) pen.end_fill() # placing main pen to right place to say who won pen.setpos(520,180) # Making all the turtles def race(): # Making the turtles, turtle 1 red = turtle.Turtle() red.speed(0) red.shape("turtle") red.penup() red.color("red") red.setpos(-550, 90) red.pendown() # Making the turtles, turtle 2 blue = turtle.Turtle() blue.shape("turtle") blue.speed(0) blue.penup() blue.color("blue") blue.setpos(-550,30) blue.pendown() # Making the turtles, turtle 3 purple = turtle.Turtle() purple.speed(0) purple.shape("turtle") purple.penup() purple.color("purple") purple.setpos(-550,-30) purple.pendown() # Making the turtles, turtle 4 orange = turtle.Turtle() orange.speed(0) orange.shape("turtle") orange.penup() orange.color("orange") orange.setpos(-550,-90) orange.pendown() race_step_count = 230 if race_step_count: red.forward(randint(1,8)) blue.forward(randint(1,8)) purple.forward(randint(1,8)) orange.forward(randint(1,8)) race_step_count -= 1 next_step = Window.after(100, race) # call this function again after 100mS else: # no more steps - the race is over! 
Window.after_cancel(next_step) # stop calling the race function def main_game(): run = True screen = Screen() screen.setup(width=600, height=400) def exit_game(): exit() canvas = screen.getcanvas() button = Button(canvas.master, text="Exit Game",command = exit_game ,width= 10, height = 4, fg = "white", bg = "dodgerblue") button.place(x=150, y=530) canvas2 = screen.getcanvas() button2 = Button(canvas2.master, text="Try Again",command = exit_game, width= 10, height = 4,fg = "white", bg = "dodgerblue" ) button2.place(x=50, y=530) #Complete back canvas for the game back_canvas() # Making all the turtles race() main_game() # Making my button do something when being clicked # Making the turtles stop when hitting the finish line time.sleep(1) #Writing who won def who_won(): for i in range(1): if blue.xcor() > red.xcor() and blue.xcor() > purple.xcor() and blue.xcor() > orange.xcor(): time.sleep(1) pen.write('Blue won!', align = "center", font =("Arial", 25, "bold")) elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor(): time.sleep(1) pen.write('Red won!', align = "center", font =("Arial", 25, "bold")) elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor(): time.sleep(1) pen.write('Purple won!', align = "center", font =("Arial", 25, "bold")) elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor(): time.sleep(1) pen.write('Orange won!', align = "center", font =("Arial", 25, "bold")) else: continue # Window doesnt close on its own Window.mainloop()
[ "What's happing is that your application is getting \"hung up\" on the long-running movement loop. Tkinter is registering your button press, but it can't do anything about it until it's done with the for loop. A quick solution to this is to define a function that handles the movements, and uses tkinter.after() to call it periodically until the \"race\" is over, since the built-in after method allows the UI's event loop to continue uninterrupted.\n# I don't know what your imports look like, so this is a boilerplate example\nimport tkinter as tk\n\nroot = tk.Tk() # this is whatever you're calling 'mainloop()' on right now\nrace_step_count = 230 # define how 'long' the race is\n\n\ndef race():\n global race_step_count\n if race_step_count:\n red.forward(randint(1,8))\n blue.forward(randint(1,8))\n purple.forward(randint(1,8))\n orange.forward(randint(1,8))\n race_step_count -= 1\n next_step = root.after(100, race) # call this function again after 100mS\n else: # no more steps - the race is over!\n root.after_cancel(next_step) # stop calling the race function\n\nTo start the race, just call the function when you're ready: race()\n", "Looking at your code, I'm surprised it runs. Running your code, I find it doesn't. It bombs out with:\nAttributeError: '_Screen' object has no attribute 'after'\n\nTurtle works in two modes, standalone and embeded in a larger tkinter program. You're trying to embed a standalone turtle program. Below, I've taken apart and reassembled your turtle program to be embedded in tkinter and fully implement the functionality you describe. (It has a tkinter \"Exit Game\" button.)\nfrom random import randint\nfrom turtle import TurtleScreen, RawTurtle\nimport tkinter as tk\nimport sys\n\ndef back_canvas():\n # Landscape making\n # Making the ground\n\n pen.color('sienna')\n\n pen.penup()\n pen.setpos(-640, -162.5)\n pen.pendown()\n\n pen.begin_fill()\n\n for _ in range(2):\n pen.forward(1280)\n pen.right(90)\n pen.forward(162.5)\n pen.right(90)\n\n pen.end_fill()\n\n # Making Racing Area\n\n pen.color('lime')\n pen.begin_fill()\n\n for _ in range(2):\n pen.forward(1280)\n pen.left(90)\n pen.forward(325)\n pen.left(90)\n\n pen.end_fill()\n\n # Making Top Area\n\n pen.color('dodgerblue')\n pen.begin_fill()\n pen.left(90)\n pen.forward(325)\n\n for _ in range(2):\n pen.forward(162.5)\n pen.right(90)\n pen.forward(1280)\n pen.right(90)\n\n pen.end_fill()\n pen.penup()\n\n # Writing \"Turtle Race Game\"\n pen.color('lime')\n pen.setpos(0, 250)\n pen.color('black')\n pen.write(\"Turtle Race Game\", align='center', font=('Arial', 27, 'normal'))\n\n # Making the first finish line\n pen.right(90)\n pen.setpos(500, 143)\n\n def flag():\n pen.color('black')\n pen.begin_fill()\n\n for _ in range(4):\n pen.forward(20)\n pen.right(90)\n\n pen.end_fill()\n pen.forward(20)\n\n pen.color('white')\n pen.begin_fill()\n\n for _ in range(4):\n pen.forward(20)\n pen.right(90)\n\n pen.end_fill()\n pen.forward(20)\n\n for _ in range(7):\n flag()\n\n pen.right(90)\n pen.forward(40)\n pen.right(90)\n\n flag()\n\n pen.right(180)\n\n # placing main pen to right place to say who won\n pen.setpos(520, 180)\n\nrace_step_count = 230\n\ndef race():\n global race_step_count\n\n if race_step_count > 0:\n red.forward(randint(1, 8))\n blue.forward(randint(1, 8))\n purple.forward(randint(1, 8))\n orange.forward(randint(1, 8))\n\n race_step_count -= 1\n screen.ontimer(race, 100) # call this function again after 100mS\n else:\n who_won()\n\ndef who_won():\n if blue.xcor() > red.xcor() and blue.xcor() > 
purple.xcor() and blue.xcor() > orange.xcor():\n pen.write(\"Blue won!\", align='center', font=('Arial', 25, 'bold'))\n elif red.xcor() > blue.xcor() and red.xcor() > purple.xcor() and red.xcor() > orange.xcor():\n pen.write(\"Red won!\", align='center', font=('Arial', 25, 'bold'))\n elif purple.xcor() > blue.xcor() and purple.xcor() > red.xcor() and purple.xcor() > orange.xcor():\n pen.write(\"Purple won!\", align='center', font=('Arial', 25, 'bold'))\n elif orange.xcor() > blue.xcor() and orange.xcor() > red.xcor() and orange.xcor() > purple.xcor():\n pen.write(\"Orange won!\", align='center', font=('Arial', 25, 'bold'))\n\nmaster = tk.Tk()\nmaster.title(\"Turtle Race Game\")\n\ncanvas = tk.Canvas(master, width=1280, height=650)\ncanvas.pack()\n\nscreen = TurtleScreen(canvas)\n\ntk.Button(master, text=\"Exit Game\", command=sys.exit, width=0, height=4, fg='gold', bg='dodgerblue').pack()\n\n# Main drawing turtle\npen = RawTurtle(screen)\npen.hideturtle()\npen.speed('fastest')\n\nback_canvas()\n\nred = RawTurtle(screen)\nred.speed('fastest')\nred.shape('turtle')\nred.penup()\n\nred.color('red')\nred.setpos(-550, 90)\n\nblue = red.clone()\nblue.color('blue')\nblue.setpos(-550, 30)\n\npurple = red.clone()\npurple.color('purple')\npurple.setpos(-550, -30)\n\norange = red.clone()\norange.color('orange')\norange.setpos(-550, -90)\n\nrace()\n\nscreen.mainloop()\n\nWhenever you import the same library multiple ways, you're probably in trouble. (When you import mulitple libraries multiple ways, you're definitely in trouble.)\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_turtle", "tkinter" ]
stackoverflow_0074646592_python_python_turtle_tkinter.txt
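A stripped-down sketch of the timer pattern both answers build on, assuming standalone turtle (so screen.ontimer is used instead of a raw Tk after call). Note that no after_cancel bookkeeping is needed: once the counter runs out, the function simply stops rescheduling itself, which keeps the Tk event loop free to service the buttons the whole time.

from random import randint
from turtle import Screen, Turtle

screen = Screen()
racers = [Turtle(shape="turtle") for _ in range(4)]
for y, racer in zip((90, 30, -30, -90), racers):
    racer.penup()
    racer.setpos(-200, y)
    racer.pendown()

steps_left = 230

def race():
    global steps_left
    if steps_left > 0:
        for racer in racers:
            racer.forward(randint(1, 8))
        steps_left -= 1
        screen.ontimer(race, 100)  # reschedule; buttons stay responsive in between
    # when steps_left hits 0 we simply don't reschedule - the race is over

race()
screen.mainloop()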
Q: Jmeter and openldap I am trying to use JMeter to load test an OpenLDAP master/slave cluster. I want to simulate a large number of users searching OpenLDAP but connecting to the slave. My slave is behind an LB but not the master, so I only have access to the slave. JMeter tries to update OpenLDAP, and since it's a replica, it fails. Does anyone know how to configure JMeter to do only searches and no updates? Thanks Jack A: Just make sure to use only the Search test in the LDAP Extended Request sampler and JMeter won't modify any data there: You may also be interested in How to Load Test LDAP with Apache JMeter which covers all the aspects of the OpenLDAP (and compatible directory servers) load testing using JMeter A: I believe you first need to open the "context", i.e.: Add LDAP Extended Request with Thread Bind Then you can add another LDAP Extended request (e.g. Search) Finally, a last LDAP Extended Request to do Thread Unbind.
Jmeter and openldap
I am trying to use JMeter to load test an OpenLDAP master/slave cluster. I want to simulate a large number of users searching OpenLDAP but connecting to the slave. My slave is behind an LB but not the master, so I only have access to the slave. JMeter tries to update OpenLDAP, and since it's a replica, it fails. Does anyone know how to configure JMeter to do only searches and no updates? Thanks Jack
[ "Just make sure to use only Search test in the LDAP Extended Request sampler and JMeter won't modify any data there:\n\nYou may also be interested in How to Load Test LDAP with Apache JMeter which covers all the aspects of the OpenLDAP (and compatible directory servers) load testing using JMeter\n", "I believe you first need to open the \"context\", i.e.:\n\nAdd LDAP Extended Request with Thread Bind\nThen you can add another LDAP Extended request (e.g. Search)\nFinally last LDAP Extended Request to do Thread Unbind.\n\n" ]
[ 0, 0 ]
[]
[]
[ "jmeter" ]
stackoverflow_0061551017_jmeter.txt
Q: How to turn a doc with 10k independent JSON objects into a JSON array in Java I am currently working on migrating a database from a non-SQL source to an SQL database. The non-SQL source outputs the data in a JSON doc that is just a series of independent JSON objects. I am using JSONObject within Java, and that (to my understanding) can only recognize the topmost object within the document. In order to get around this issue I am writing code to convert the independent objects into an array. The current method I am using involves converting the JSON doc into a string, counting curly brackets to find objects, and then inserting them into an array. for (int i = 0; i < doc.length(); i++) { char currentChar = doc.charAt(i); if (currentChar == '{') { Integer jsonStart = i; Integer openBrace = 1; Integer closeBrace = 0; while (openBrace > closeBrace) { i++; currentChar = doc.charAt(i); if (currentChar == '{') { openBrace++; } if (currentChar == '}') { closeBrace++; } } Integer jsonEnd = i; String currentString = doc.substring(jsonStart, jsonEnd + 1); JSONObject currentJSONObject = new JSONObject(currentString); returnJSONArray.put(currentJSONObject); Due to size, the database had to be divided into multiple 10k object documents. The code worked well until one of the documents had braces stored within the value. So I added some code to watch for values and ignore those based on quotation marks beneath the close curly bracket counter. if (currentChar == '"') { i++; currentChar = mongoExport.charAt(i); while (!(currentChar == '"')) { i++; currentChar = mongoExport.charAt(i); } This worked for the document with value pairs that contained curly brackets, but upon testing it against the rest of the documents I experienced a "String index out of range: big number" error in one of the other documents that traces back to the while loop looking for an end quotation mark. From what I can figure, this means that there are also values that contain quotation marks. I tried some code to check for escape characters before quotation marks, but that changed nothing. I can't check through these documents manually, they are far too long for that. Is there a way for me to handle these strings? Also, was there a far easier method I could have used that I was unaware of from the beginning? A: Even using the java.json package doesn't require manual parsing. something like: import java.io.ByteArrayInputStream; import java.io.InputStream; import javax.json.Json; import javax.json.JsonArray; import javax.json.JsonObject; import javax.json.JsonReader; import javax.json.JsonValue; ... private static final String jsonString = "[" + "{\n" + "\"id\":123,\n" + "\"name\":\"Bob Marley\",\n" + "\"address\":{\n" + "\"street\":\"123 Main St\",\n" + "\"city\":\"Anytown\",\n" + "\"state\":\"CO\",\n" + "\"zipcode\":80205\n" + "},\n" + "\"phoneNumbers\":[\"3032920200\"],\n" + "\"role\":\"Developer\"\n" + "},\n" + "{\n" + "\"id\":456,\n" + "\"name\":\"Tommy Tutone\",\n" + "\"address\":{\n" + "\"street\":\"456 Main St\",\n" + "\"city\":\"Sometown\",\n" + "\"state\":\"CO\",\n" + "\"zipcode\":80205\n" + "},\n" + "\"phoneNumbers\":[\"1238675309\"],\n" + "\"role\":\"Developer\"\n" + "}\n" + "]"; ...
@GET @Produces("text/plain") public String hello() { InputStream inputStream = new ByteArrayInputStream(jsonString.getBytes()); JsonReader jsonReader = Json.createReader(inputStream); JsonArray jsonArray = jsonReader.readArray(); for (JsonValue jsonValue : jsonArray) { JsonObject jsonObject = jsonValue.asJsonObject(); System.out.println("next object id is " + jsonObject.getInt("id")); JsonObject addressObject = jsonObject.getJsonObject("address"); System.out.println("next object city is " + addressObject.getString("city")); } return "Hello, World!"; } This gets the first level objects (for example, "id") and nested objects ("address" in this example). I intentionally did not create a POJO type object that would represent the JSON object - you can do that but you'll have to decide if it's worthwhile to have a full object of your data or just pull it with things like getString().
How to turn a doc with 10k independent JSON objects into a JSON array in Java
I am currently working on migrating a database from a non-SQL source to an SQL database. The non-SQL source outputs the data in a JSON doc that is just a series of independent JSON objects. I am using JSONObject within Java, and that (to my understanding) can only recognize the topmost object within the document. In order to get around this issue I am writing code to convert the independent objects into an array. The current method I am using involves converting the JSON doc into a string, counting curly brackets to find objects, and then inserting them into an array. for (int i = 0; i < doc.length(); i++) { char currentChar = doc.charAt(i); if (currentChar == '{') { Integer jsonStart = i; Integer openBrace = 1; Integer closeBrace = 0; while (openBrace > closeBrace) { i++; currentChar = doc.charAt(i); if (currentChar == '{') { openBrace++; } if (currentChar == '}') { closeBrace++; } } Integer jsonEnd = i; String currentString = doc.substring(jsonStart, jsonEnd + 1); JSONObject currentJSONObject = new JSONObject(currentString); returnJSONArray.put(currentJSONObject); Due to size, the database had to be divided into multiple 10k object documents. The code worked well until one of the documents had braces stored within the value. So I added some code to watch for values and ignore those based on quotation marks beneath the close curly bracket counter. if (currentChar == '"') { i++; currentChar = mongoExport.charAt(i); while (!(currentChar == '"')) { i++; currentChar = mongoExport.charAt(i); } This worked for the document with value pairs that contained curly brackets, but upon testing it against the rest of the documents I experienced a "String index out of range: big number" error in one of the other documents that traces back to the while loop looking for an end quotation mark. From what I can figure, this means that there are also values that contain quotation marks. I tried some code to check for escape characters before quotation marks, but that changed nothing. I can't check through these documents manually, they are far too long for that. Is there a way for me to handle these strings? Also, was there a far easier method I could have used that I was unaware of from the beginning?
[ "Even using the java.json package doesn't require manual parsing. something like:\nimport java.io.ByteArrayInputStream;\nimport java.io.InputStream;\n\nimport javax.json.Json;\nimport javax.json.JsonArray;\nimport javax.json.JsonObject;\nimport javax.json.JsonReader;\nimport javax.json.JsonValue;\n\n...\n\nprivate static final String jsonString = \"[\" +\n \"{\\n\" +\n \"\\\"id\\\":123,\\n\" +\n \"\\\"name\\\":\\\"Bob Marley\\\",\\n\" +\n \"\\\"address\\\":{\\n\" +\n \"\\\"street\\\":\\\"123 Main St\\\",\\n\" +\n \"\\\"city\\\":\\\"Anytown\\\",\\n\" +\n \"\\\"state\\\":\\\"CO\\\",\\n\" +\n \"\\\"zipcode\\\":80205\\n\" +\n \"},\\n\" +\n \"\\\"phoneNumbers\\\":[\\\"3032920200\\\"],\\n\" +\n \"\\\"role\\\":\\\"Developer\\\"\\n\" +\n \"},\\n\" +\n \"{\\n\" +\n \"\\\"id\\\":456,\\n\" +\n \"\\\"name\\\":\\\"Tommy Tutone\\\",\\n\" +\n \"\\\"address\\\":{\\n\" +\n \"\\\"street\\\":\\\"456 Main St\\\",\\n\" +\n \"\\\"city\\\":\\\"Sometown\\\",\\n\" +\n \"\\\"state\\\":\\\"CO\\\",\\n\" +\n \"\\\"zipcode\\\":80205\\n\" +\n \"},\\n\" +\n \"\\\"phoneNumbers\\\":[\\\"1238675309\\\"],\\n\" +\n \"\\\"role\\\":\\\"Developer\\\"\\n\" +\n \"}\\n\" +\n \"]\";\n\n... \n\n@GET\n@Produces(\"text/plain\")\npublic String hello() {\n InputStream inputStream = new ByteArrayInputStream(jsonString.getBytes());\n JsonReader jsonReader = Json.createReader(inputStream);\n JsonArray jsonArray = jsonReader.readArray();\n\n for (JsonValue jsonValue : jsonArray) {\n JsonObject jsonObject = jsonValue.asJsonObject();\n\n System.out.println(\"next object id is \" + jsonObject.getInt(\"id\"));\n JsonObject addressObject = jsonObject.getJsonObject(\"address\");\n\n System.out.println(\"next object city is \" + addressObject.getString(\"city\"));\n }\n\n return \"Hello, World!\";\n}\n\nThis gets the first level objects (for example, \"id\") and nested objects (\"address\" in this example). I intentionally did not create a POJO type object that would represent the JSON object - you can do that but you'll have to decide if it's worthwhile to have a full object of your data or just pull it with things like getString().\n" ]
[ 0 ]
[]
[]
[ "java", "json" ]
stackoverflow_0074659202_java_json.txt
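For the input actually described in the question - a file of concatenated root-level JSON objects rather than a single array - no manual brace counting is needed at all: a streaming parser already understands quotes, escapes, and nesting. A sketch using Jackson (assumptions: com.fasterxml.jackson.databind is on the classpath and the input lives in a file named dump.json):

import java.io.File;

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.MappingIterator;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;

public class ConcatenatedJsonReader {
    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        ArrayNode result = mapper.createArrayNode();

        // readValues iterates over every root-level value in the stream,
        // so a file of back-to-back objects is consumed one object at a time
        try (JsonParser parser = mapper.getFactory().createParser(new File("dump.json"))) {
            MappingIterator<JsonNode> it = mapper.readValues(parser, JsonNode.class);
            while (it.hasNext()) {
                result.add(it.next()); // collect each object into a single array
            }
        }
        System.out.println(result.size() + " objects wrapped into one JSON array");
    }
}

Braces and quotes inside string values are handled by the tokenizer, which is exactly the failure mode the hand-rolled scanner ran into.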
Q: How to insert missing years in temporary table in MS SQL Server I work with Sales, and the problem is that this table does not have records for each client for every year. Records are missing randomly. Instead I need to have those years there and put 0 for sales in those years for my analysis. I have limited knowledge of SQL. Can anybody help on this one? What I have as of now and what I would like to have is shown below. I have thought about using the LAG() function, but records can be missing for 2 or 3 years in a row. I am not sure how to tackle such a problem. What I have now:
Client_ID SalesYear Sales
1 2010 12
1 2012 20
1 2013 21
1 2016 14
What I need to have:
Client_ID SalesYear Sales
1 2010 12
1 2011 0
1 2012 20
1 2013 21
1 2014 0
1 2015 0
1 2016 14
A: You need a complete list of years to outer-join with. You can do this a number of ways, the basic principle would be: with y as ( select * from (values(2010),(2011),(2012),(2013),(2014),(2015),(2016))y(y) ) insert into t (Client_Id, SalesYear, Sales) select 1, y.y, 0 from y where not exists (select * from t where t.SalesYear = y.y); A: Something like this might help: DECLARE @Sales TABLE (Client_ID int, SalesYear int, Sales money) INSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2010, 12 INSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2012, 20 INSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2013, 21 INSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2016, 14; with years as ( select 2000 as theYear UNION ALL select y.theYear + 1 as theYear from years y where y.theYear + 1 <= YEAR(GetDate()) ) select Y.theYear, S.Client_ID, S.Sales FROM Years Y LEFT JOIN @Sales S ON S.SalesYear = Y.theYear option (maxrecursion 0) You can change "2000" to something more appropriate.
How to insert missing years in temporary table in MS SQL Server
I work with Sales, and the problem is that this table does not have records for each client for every year. Records are missing randomly. Instead I need to have those years there and put 0 for sales in those years for my analysis. I have limited knowledge of SQL. Can anybody help on this one? What I have as of now and what I would like to have is shown below. I have thought about using the LAG() function, but records can be missing for 2 or 3 years in a row. I am not sure how to tackle such a problem. What I have now:
Client_ID SalesYear Sales
1 2010 12
1 2012 20
1 2013 21
1 2016 14
What I need to have:
Client_ID SalesYear Sales
1 2010 12
1 2011 0
1 2012 20
1 2013 21
1 2014 0
1 2015 0
1 2016 14
[ "You need a complete list of years to outer-join with.\nYou can do this a number of ways, the basic principle would be:\nwith y as (\n select * from (values(2010),(2011),(2012),(2013),(2014),(2015),(2016))y(y)\n)\ninsert into t (Client_Id, SalesYear, Sales)\nselect 1, y.y, 0\nfrom y\nwhere not exists (select * from t where t.SalesYear = y.y);\n\n", "Something like this might help:\nDECLARE @Sales TABLE\n(Client_ID int, SalesYear int, Sales money)\n\nINSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2010, 12 \nINSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2012, 20 \nINSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2013, 21 \nINSERT INTO @Sales(Client_ID, SalesYear, Sales) SELECT 1, 2016, 14;\n\n\n\nwith years as \n(\n select 2000 as theYear \n UNION ALL\n select y.theYear + 1 as theYear\n from years y\n where y.theYear + 1 <= YEAR(GetDate())\n)\n\nselect \n Y.theYear, S.Client_ID, S.Sales\nFROM \n Years Y\nLEFT JOIN\n @Sales S ON S.SalesYear = Y.theYear\noption (maxrecursion 0)\n\nYou can change \"2000\" to something more appropriate.\n" ]
[ 1, 1 ]
[]
[]
[ "sql_server" ]
stackoverflow_0074660537_sql_server.txt
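Combining the two answers into one sketch that also handles multiple clients (assumptions: the table is named Sales, and the 2010-2016 bounds from the example are hard-coded - widen them as needed):

;WITH years AS
(
    SELECT 2010 AS theYear
    UNION ALL
    SELECT theYear + 1 FROM years
    WHERE theYear + 1 <= 2016
),
clients AS
(
    SELECT DISTINCT Client_ID FROM Sales
)
SELECT c.Client_ID,
       y.theYear AS SalesYear,
       COALESCE(s.Sales, 0) AS Sales
FROM clients c
CROSS JOIN years y
LEFT JOIN Sales s
       ON s.Client_ID = c.Client_ID
      AND s.SalesYear = y.theYear
ORDER BY c.Client_ID, y.theYear
OPTION (MAXRECURSION 0);

The CROSS JOIN gives every client a row for every year, and the LEFT JOIN plus COALESCE fills the gaps with 0 instead of inserting rows, which is usually enough for analysis queries.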
Q: Springboot + MS Sql Server (Azure Cloud) selecting 100K records failing due to Connection reset I am using Springboot framework with MyBatis & JDBCTemplate to perform select operation over SQL Server Database. Following code snippet posting here only JDBCTemplate List<TableName> tnames = new ArrayList<>(); jdbcTemplate.query("select * from testdb.table_name", new RowCallbackHandler() { public void processRow(ResultSet resultSet) throws SQLException { while (resultSet.next()) { TableName tn = new TableName(); tn.setStatus(resultSet.getString("status")); tnames.add(tn); // process it } } }); return tnames; Note: Code is shortened Query too having 4-5 tables join , which performs well on SQL server side and fetching almost 100K rows. But integration with Springboot when I apply TOP(n) criteria I found that beyond 47000 rows it wont work and throws following error {"@timestamp":"2022-07-06T04:12:24.161-04:00","message":"InternalServerException Exception : ","loggerSource":"c.m.d.d.a.s.i.r.h.GlobalExceptionHandler","level":"ERROR","stacktrace":"java.io.IOException: Connection reset by peer sun.nio.ch.SocketDispatcher.write0(SocketDispatcher.java) sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54) sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) sun.nio.ch.IOUtil.write(IOUtil.java:79) sun.nio.ch.IOUtil.write(IOUtil.java:50) sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:506)... 10 frames excluded... 114 common frames omittedWrapped by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353) org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)... 7 frames excluded com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeFieldName(UTF8JsonGenerator.java:289) com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:726) com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:774) com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:178) com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:145) com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:107) com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:25) com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480) com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:400) com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1514) com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1007)... 13 frames excluded javax.servlet.http.HttpServlet.service(HttpServlet.java:655) org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) javax.servlet.http.HttpServlet.service(HttpServlet.java:764)... 
77 frames excluded java.lang.Thread.run(Thread.java:832)Suppressed: java.io.IOException: Connection reset by peer sun.nio.ch.SocketDispatcher.write0(SocketDispatcher.java) sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54) sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) sun.nio.ch.IOUtil.write(IOUtil.java:79) sun.nio.ch.IOUtil.write(IOUtil.java:50) sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:506)... 10 frames excluded... 104 common frames omittedWrapped by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353) org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)... 7 frames excluded com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) com.fasterxml.jackson.core.json.UTF8JsonGenerator.close(UTF8JsonGenerator.java:1214)... 1 frames excluded... 93 common frames omitted","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.164-04:00","message":"Found 'Content-Type:application/json' in response","loggerSource":"o.s.w.s.m.m.a.HttpEntityMethodProcessor","level":"DEBUG","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.166-04:00","message":"Writing [ErrorApiResponse(code=HTTP500, message=HTTP 500 - Internal Server Error, description=java.io.IOExcep (truncated)...]","loggerSource":"o.s.w.s.m.m.a.HttpEntityMethodProcessor","level":"DEBUG","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.167-04:00","message":"Failure in @ExceptionHandler com.dev.infra.rest.handlers.GlobalExceptionHandler#handleInternalException(Exception)","loggerSource":"o.s.w.s.m.m.a.ExceptionHandlerExceptionResolver","level":"WARN","stacktrace":"java.io.IOException: Connection reset by peer sun.nio.ch.SocketDispatcher.write0(SocketDispatcher.java) sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54) sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) sun.nio.ch.IOUtil.write(IOUtil.java:79) sun.nio.ch.IOUtil.write(IOUtil.java:50) sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:506)... 10 frames excluded... 108 common frames omittedWrapped by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353) org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)... 7 frames excluded com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1184) com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1009)... 16 frames excluded javax.servlet.http.HttpServlet.service(HttpServlet.java:655) org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) javax.servlet.http.HttpServlet.service(HttpServlet.java:764)... 77 frames excluded java.lang.Thread.run(Thread.java:832)","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.168-04:00","message":"Failed to complete request: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer","loggerSource":"o.s.web.servlet.DispatcherServlet","level":"DEBUG","logType":"SERVICE"} Query is executing in 12-17 Seconds I tried with applying queryTimeout wont work. Any suggestions here would be great !! Thanks in advance :) A: I would suggest use Pagination than selecting all records at a glance. 
The connection was kept open too long and was reset by the peer. Note that the stack trace actually shows Tomcat failing to write the JSON response (ClientAbortException): the HTTP client gave up while the large result was being serialized. If the table eventually holds more records, this will fail even if you change the timeout settings. Please visit how can i implement a pagination in spring jdbcTemplate to learn how to paginate and retrieve the records in batches
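A minimal sketch of that paginated approach, built on the query from the question. Anything not shown in the question is an assumption here: in particular the id ordering column is hypothetical, and SQL Server requires an ORDER BY for OFFSET/FETCH to be valid:

List<TableName> tnames = new ArrayList<>();
int pageSize = 5000; // tune so each page fetches and serializes quickly
int offset = 0;
while (true) {
    List<TableName> page = jdbcTemplate.query(
        "select * from testdb.table_name order by id "
            + "offset ? rows fetch next ? rows only",
        (rs, rowNum) -> {
            TableName tn = new TableName();
            tn.setStatus(rs.getString("status"));
            return tn;
        },
        offset, pageSize);
    tnames.addAll(page);
    if (page.size() < pageSize) {
        break; // a short page means the last rows have been read
    }
    offset += pageSize;
}
return tnames;

If the endpoint really must return all 100K rows in one response, keep in mind that the reset happens while Tomcat writes the JSON, not while reading from the database, so streaming or compressing the response body is the other half of the fix.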
Springboot + MS Sql Server (Azure Cloud) selecting 100K records failing due to Connection reset
I am using Springboot framework with MyBatis & JDBCTemplate to perform select operation over SQL Server Database. Following code snippet posting here only JDBCTemplate List<TableName> tnames = new ArrayList<>(); jdbcTemplate.query("select * from testdb.table_name", new RowCallbackHandler() { public void processRow(ResultSet resultSet) throws SQLException { while (resultSet.next()) { TableName tn = new TableName(); tn.setStatus(resultSet.getString("status")); tnames.add(tn); // process it } } }); return tnames; Note: Code is shortened Query too having 4-5 tables join , which performs well on SQL server side and fetching almost 100K rows. But integration with Springboot when I apply TOP(n) criteria I found that beyond 47000 rows it wont work and throws following error {"@timestamp":"2022-07-06T04:12:24.161-04:00","message":"InternalServerException Exception : ","loggerSource":"c.m.d.d.a.s.i.r.h.GlobalExceptionHandler","level":"ERROR","stacktrace":"java.io.IOException: Connection reset by peer sun.nio.ch.SocketDispatcher.write0(SocketDispatcher.java) sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54) sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) sun.nio.ch.IOUtil.write(IOUtil.java:79) sun.nio.ch.IOUtil.write(IOUtil.java:50) sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:506)... 10 frames excluded... 114 common frames omittedWrapped by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353) org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)... 7 frames excluded com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) com.fasterxml.jackson.core.json.UTF8JsonGenerator.writeFieldName(UTF8JsonGenerator.java:289) com.fasterxml.jackson.databind.ser.BeanPropertyWriter.serializeAsField(BeanPropertyWriter.java:726) com.fasterxml.jackson.databind.ser.std.BeanSerializerBase.serializeFields(BeanSerializerBase.java:774) com.fasterxml.jackson.databind.ser.BeanSerializer.serialize(BeanSerializer.java:178) com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serializeContents(CollectionSerializer.java:145) com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:107) com.fasterxml.jackson.databind.ser.std.CollectionSerializer.serialize(CollectionSerializer.java:25) com.fasterxml.jackson.databind.ser.DefaultSerializerProvider._serialize(DefaultSerializerProvider.java:480) com.fasterxml.jackson.databind.ser.DefaultSerializerProvider.serializeValue(DefaultSerializerProvider.java:400) com.fasterxml.jackson.databind.ObjectWriter$Prefetch.serialize(ObjectWriter.java:1514) com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1007)... 13 frames excluded javax.servlet.http.HttpServlet.service(HttpServlet.java:655) org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) javax.servlet.http.HttpServlet.service(HttpServlet.java:764)... 77 frames excluded java.lang.Thread.run(Thread.java:832)Suppressed: java.io.IOException: Connection reset by peer sun.nio.ch.SocketDispatcher.write0(SocketDispatcher.java) sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54) sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) sun.nio.ch.IOUtil.write(IOUtil.java:79) sun.nio.ch.IOUtil.write(IOUtil.java:50) sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:506)... 10 frames excluded... 
104 common frames omittedWrapped by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353) org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)... 7 frames excluded com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) com.fasterxml.jackson.core.json.UTF8JsonGenerator.close(UTF8JsonGenerator.java:1214)... 1 frames excluded... 93 common frames omitted","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.164-04:00","message":"Found 'Content-Type:application/json' in response","loggerSource":"o.s.w.s.m.m.a.HttpEntityMethodProcessor","level":"DEBUG","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.166-04:00","message":"Writing [ErrorApiResponse(code=HTTP500, message=HTTP 500 - Internal Server Error, description=java.io.IOExcep (truncated)...]","loggerSource":"o.s.w.s.m.m.a.HttpEntityMethodProcessor","level":"DEBUG","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.167-04:00","message":"Failure in @ExceptionHandler com.dev.infra.rest.handlers.GlobalExceptionHandler#handleInternalException(Exception)","loggerSource":"o.s.w.s.m.m.a.ExceptionHandlerExceptionResolver","level":"WARN","stacktrace":"java.io.IOException: Connection reset by peer sun.nio.ch.SocketDispatcher.write0(SocketDispatcher.java) sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:54) sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:113) sun.nio.ch.IOUtil.write(IOUtil.java:79) sun.nio.ch.IOUtil.write(IOUtil.java:50) sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:506)... 10 frames excluded... 108 common frames omittedWrapped by: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer org.apache.catalina.connector.OutputBuffer.realWriteBytes(OutputBuffer.java:353) org.apache.catalina.connector.OutputBuffer.flushByteBuffer(OutputBuffer.java:783)... 7 frames excluded com.fasterxml.jackson.core.json.UTF8JsonGenerator._flushBuffer(UTF8JsonGenerator.java:2171) com.fasterxml.jackson.core.json.UTF8JsonGenerator.flush(UTF8JsonGenerator.java:1184) com.fasterxml.jackson.databind.ObjectWriter.writeValue(ObjectWriter.java:1009)... 16 frames excluded javax.servlet.http.HttpServlet.service(HttpServlet.java:655) org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:883) javax.servlet.http.HttpServlet.service(HttpServlet.java:764)... 77 frames excluded java.lang.Thread.run(Thread.java:832)","logType":"SERVICE"} {"@timestamp":"2022-07-06T04:12:24.168-04:00","message":"Failed to complete request: org.apache.catalina.connector.ClientAbortException: java.io.IOException: Connection reset by peer","loggerSource":"o.s.web.servlet.DispatcherServlet","level":"DEBUG","logType":"SERVICE"} Query is executing in 12-17 Seconds I tried with applying queryTimeout wont work. Any suggestions here would be great !! Thanks in advance :)
[ "I would suggest use Pagination than selecting all records at a glance. The connection was kept open long and was terminated by db server. If you eventually have more records this will fail even if you change connection timeout settings. Please visit how can i implement a pagination in spring jdbcTemplate to know how to do Pagination and retrieve all records in batch\n" ]
[ 0 ]
[]
[]
[ "java", "spring", "spring_boot", "sql_server" ]
stackoverflow_0072880615_java_spring_spring_boot_sql_server.txt
Q: Only keep the characters in front of "_" per row in a single column then create a new column with the output I have a large data frame called data_frame with two columns PRE and STATUS that look like this: PRE STATUS 1_752566 GAINED 1_776546 LOST 1_832918 NA 1_842013 LOST 1_846864 GAINED 11_8122943 NA 11_8188699 GAINED 11_8321128 NA 23_95137734 NA 23_95146814 GAINED What I would like is to create a new column CHR with only the number(s) before the underscore and make sure they are matched up next to the original column correctly like this: PRE STATUS CHR 1_752566 GAINED 1 1_776546 LOST 1 1_832918 NA 1 1_842013 LOST 1 1_846864 GAINED 1 11_8122943 NA 11 11_8188699 GAINED 11 11_8321128 NA 11 23_95137734 NA 23 23_95146814 GAINED 23 From here I'd like to group CHR by number and then find the sum of each group. If possible, I would like a new data table showing the sums of each group number like this: NUM SUM 1 5 11 3 23 2 I would then plot this to visualize the sums of each number where my x-axis is NUM and my y-axis is SUM A: We could extract the prefix before the _ library(dplyr) df <- df %>% mutate(CHR = trimws(PRE, whitespace = "_.*")) -output df PRE STATUS CHR 1 1_752566 GAINED 1 2 1_776546 LOST 1 3 1_832918 <NA> 1 4 1_842013 LOST 1 5 1_846864 GAINED 1 6 11_8122943 <NA> 11 7 11_8188699 GAINED 11 8 11_8321128 <NA> 11 9 23_95137734 <NA> 23 10 23_95146814 GAINED 23 Then, based on the expected output showed, it will be the count we need df %>% count(CHR, name = "SUM") CHR SUM 1 1 5 2 11 3 3 23 2 For the plot, do library(ggplot2) df %>% count(CHR, name = "SUM") %>% ggplot(aes(x = CHR, y = SUM)) + geom_col() data df <- structure(list(PRE = c("1_752566", "1_776546", "1_832918", "1_842013", "1_846864", "11_8122943", "11_8188699", "11_8321128", "23_95137734", "23_95146814"), STATUS = c("GAINED", "LOST", NA, "LOST", "GAINED", NA, "GAINED", NA, NA, "GAINED")), class = "data.frame", row.names = c(NA, -10L)) A: Here is an alternative approach using parse_number() from readr package: parse_number will extract only the first numbers. Here only the number before _: library(dplyr) library(readr) df %>% group_by(CHR = parse_number(PRE)) %>% summarise(NUM = first(CHR), SUM =n()) %>% select(-CHR) NUM SUM <dbl> <int> 1 1 5 2 11 3 3 23 2
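For completeness, a dependency-free base R sketch of the same idea, reusing the df object constructed in the data block of the first answer; the barplot call is just one plain way to get the NUM versus SUM chart the question asks for:

df$CHR <- sub("_.*", "", df$PRE)       # keep everything before the first "_"
res <- as.data.frame(table(df$CHR))    # one row per CHR with its count
names(res) <- c("NUM", "SUM")
res
barplot(res$SUM, names.arg = res$NUM, xlab = "NUM", ylab = "SUM")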
Only keep the characters in front of "_" per row in a single column then create a new column with the output
I have a large data frame called data_frame with two columns PRE and STATUS that look like this: PRE STATUS 1_752566 GAINED 1_776546 LOST 1_832918 NA 1_842013 LOST 1_846864 GAINED 11_8122943 NA 11_8188699 GAINED 11_8321128 NA 23_95137734 NA 23_95146814 GAINED What I would like is to create a new column CHR with only the number(s) before the underscore and make sure they are matched up next to the original column correctly like this: PRE STATUS CHR 1_752566 GAINED 1 1_776546 LOST 1 1_832918 NA 1 1_842013 LOST 1 1_846864 GAINED 1 11_8122943 NA 11 11_8188699 GAINED 11 11_8321128 NA 11 23_95137734 NA 23 23_95146814 GAINED 23 From here I'd like to group CHR by number and then find the sum of each group. If possible, I would like a new data table showing the sums of each group number like this: NUM SUM 1 5 11 3 23 2 I would then plot this to visualize the sums of each number where my x-axis is NUM and my y-axis is SUM
[ "We could extract the prefix before the _\nlibrary(dplyr)\ndf <- df %>%\n mutate(CHR = trimws(PRE, whitespace = \"_.*\"))\n\n-output\ndf\n PRE STATUS CHR\n1 1_752566 GAINED 1\n2 1_776546 LOST 1\n3 1_832918 <NA> 1\n4 1_842013 LOST 1\n5 1_846864 GAINED 1\n6 11_8122943 <NA> 11\n7 11_8188699 GAINED 11\n8 11_8321128 <NA> 11\n9 23_95137734 <NA> 23\n10 23_95146814 GAINED 23\n\nThen, based on the expected output showed, it will be the count we need\ndf %>%\n count(CHR, name = \"SUM\")\n CHR SUM\n1 1 5\n2 11 3\n3 23 2\n\nFor the plot, do\nlibrary(ggplot2)\ndf %>%\n count(CHR, name = \"SUM\") %>%\n ggplot(aes(x = CHR, y = SUM)) +\n geom_col()\n\ndata\ndf <- structure(list(PRE = c(\"1_752566\", \"1_776546\", \"1_832918\", \"1_842013\", \n\"1_846864\", \"11_8122943\", \"11_8188699\", \"11_8321128\", \"23_95137734\", \n\"23_95146814\"), STATUS = c(\"GAINED\", \"LOST\", NA, \"LOST\", \"GAINED\", \nNA, \"GAINED\", NA, NA, \"GAINED\")), class = \"data.frame\", row.names = c(NA, \n-10L))\n\n", "Here is an alternative approach using parse_number() from readr package:\nparse_number will extract only the first numbers. Here only the number before _:\nlibrary(dplyr)\nlibrary(readr)\ndf %>% \n group_by(CHR = parse_number(PRE)) %>% \n summarise(NUM = first(CHR), SUM =n()) %>% \n select(-CHR)\n\n NUM SUM\n <dbl> <int>\n1 1 5\n2 11 3\n3 23 2\n\n" ]
[ 2, 1 ]
[]
[]
[ "dataframe", "multiple_columns", "mutate", "r", "row" ]
stackoverflow_0074660599_dataframe_multiple_columns_mutate_r_row.txt
Q: Why is my PXFormula with PXDBShorts Throwing Specified Cast Error? I have a formula I am using to calculate a non-persisting field. I'm redefining a base DAC field: POVendorItem.VLeadTime. This is my new definition: #region VLeadTime [PXShort(MinValue = 0, MaxValue = 100000)] [PXUIField(DisplayName = "Vendor Lead Time (Days)", Enabled = false)] [PXFormula(typeof(Sub<Current<usrCFAvgLeadTime>, Current<POVendorInventory.addLeadTimeDays>>))] public virtual short? VLeadTime { get; set; } #endregion Both fields in the formula are defined as PXDBShorts, yet I keep getting a "Specified cast is not valid" error when trying to access the grid the field is on. Here is the trace: I've also noticed that if I type out a statement such as short? = short? - short? I also get an error in VS stating I can't convert an int? to a short?, which doesn't make sense to me. For me to calculate it correctly, VS auto-corrects the formula to short? = (short?)((short)(short?) - (short)(short?)), adding in a ton of type casts. A: To get around the formula, I would attach events: FieldDefaulting for VLeadTime, as well as FieldUpdated for usrCFAvgLeadTime and addLeadTimeDays, and then recalculate and set the value. I know it is redundant, but as long as you can set the value to a short in Visual Studio in the code, it will be set correctly for VLeadTime.
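The reason for the VS behavior is worth spelling out: in C#, arithmetic on short operands is promoted to int, so the subtraction most likely produces a boxed int? that cannot be unboxed into a short?, which matches the "Specified cast is not valid" message. A rough sketch of the event-based workaround described in the answer follows; everything except the field names taken from the question is an assumption, and in particular it pretends both source values are reachable through the POVendorItem cache:

protected virtual void POVendorItem_VLeadTime_FieldDefaulting(
    PXCache cache, PXFieldDefaultingEventArgs e)
{
    var row = (POVendorItem)e.Row;
    if (row == null) return;

    short? avg = (short?)cache.GetValue(row, "UsrCFAvgLeadTime");
    short? add = (short?)cache.GetValue(row, "AddLeadTimeDays");

    // short? - short? yields int? in C#, so cast back explicitly
    // before handing the value to the framework.
    e.NewValue = (short?)(avg - add);
    e.Cancel = true;
}

Wire the same recalculation into FieldUpdated handlers for the two source fields so the value refreshes whenever either of them changes.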
Why is my PXFormula with PXDBShorts Throwing Specified Cast Error?
I have a formula I am using to calculate a non-persisting field. I'm redefining a base DAC field: POVendorItem.VLeadTime. This is my new definition: #region VLeadTime [PXShort(MinValue = 0, MaxValue = 100000)] [PXUIField(DisplayName = "Vendor Lead Time (Days)", Enabled = false)] [PXFormula(typeof(Sub<Current<usrCFAvgLeadTime>, Current<POVendorInventory.addLeadTimeDays>>))] public virtual short? VLeadTime { get; set; } #endregion Both fields in the formula are defined as PXDBShorts, yet I keep getting a "Specified cast is not valid" error when trying to access the grid the field is on. Here is the trace: I've also noticed that if I type out a statement such as short? = short? - short? I also get an error in VS stating I can't convert an int? to a short?, which doesn't make sense to me. For me to calculate it correctly, VS auto-corrects the formula to short? = (short?)((short)(short?) - (short)(short?)), adding in a ton of type casts.
[ "To get around the formula, I would attach events to field defaulting for VLeadTime, as well as field updating for usrCFAvgLeadTime and addLeadTimeDays and then recalculate it and set the value. I know it is redundant, but as long as you can set the value to a short in visual studio in the code, it would set for VLeadTime.\n" ]
[ 0 ]
[]
[]
[ "acumatica" ]
stackoverflow_0073623271_acumatica.txt
Q: Gatling Cannot resolve symbol 'http' I am having an issue specifically with the 'http' imports shown below import static io.gatling.javaapi.http.HttpDsl.http; import static io.gatling.javaapi.http.HttpDsl.status; They are not recognised in the following block: .exec(http("${testType}") .post(RequestBuilder.launch1p0) .formParam(LTIParam.context_id.name(), "${district_pid}") ) My simplified version of my pom.xml looks like this: <properties> <java.version>17</java.version> <gatling.version>3.8.4</gatling.version> <gatling-maven-plugin.version>4.2.7</gatling-maven-plugin.version> <scala.version>2.12.8</scala.version> <scala-logging_2.11.version>3.7.2</scala-logging_2.11.version> <scala-maven-plugin.version>4.7.2</scala-maven-plugin.version> <performance-base.version>1.0</performance-base.version> </properties> <dependency> <groupId>io.gatling.highcharts</groupId> <artifactId>gatling-charts-highcharts</artifactId> <version>${gatling.version}</version> </dependency> <dependency> <groupId>io.gatling</groupId> <artifactId>gatling-app</artifactId> <version>${gatling.version}</version> </dependency> <dependency> <groupId>io.gatling</groupId> <artifactId>gatling-recorder</artifactId> <version>${gatling.version}</version> </dependency> <plugin> <groupId>io.gatling</groupId> <artifactId>gatling-maven-plugin</artifactId> <version>${gatling-maven-plugin.version}</version> </plugin> I have tried reloading the maven projects, deleting the gatling-core.jar files and retrying mvn clean install -DskipTests If anyone knows why it has an issue with this specific import, and not the following: import io.gatling.javaapi.core.ChainBuilder; import static io.gatling.javaapi.core.CoreDsl.*; please explain to me. Thank you. A: What you provided is very suspicious. You mention that you have in your pom.xml: <scala.version>2.12.8</scala.version> <scala-logging_2.11.version>3.7.2</scala-logging_2.11.version> but those properties are not used in what you provided. If you are really pulling those libraries, those versions are completely wrong and break your build. Gatling 3.8 requires Scala 2.13 (since 3.5) you're trying to force Scala 2.12.8 => not compatible scala-logging_2.11 means "compiles for Scala 2.11" 3 different and incompatible Scala versions! You should be pulling Scala 2.13.10 (but that shouldn't be necessary) and scala-logging_2.13 3.9.5.
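If those properties really are wired into dependencies elsewhere in the pom, a consistent set would look roughly like this. This is a sketch rather than a drop-in fix; the artifact coordinates outside the question are assumptions based on the versions named in the answer:

<properties>
    <gatling.version>3.8.4</gatling.version>
    <gatling-maven-plugin.version>4.2.7</gatling-maven-plugin.version>
    <!-- Gatling 3.8 is built against Scala 2.13 -->
    <scala.version>2.13.10</scala.version>
</properties>

<!-- the _2.13 suffix must match the Scala binary version Gatling uses -->
<dependency>
    <groupId>com.typesafe.scala-logging</groupId>
    <artifactId>scala-logging_2.13</artifactId>
    <version>3.9.5</version>
</dependency>

After aligning the versions, re-import the Maven project so the IDE re-resolves io.gatling.javaapi.http.HttpDsl.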
Gatling Cannot resolve symbol 'http'
I am having an issue specifically with the 'http' imports shown below import static io.gatling.javaapi.http.HttpDsl.http; import static io.gatling.javaapi.http.HttpDsl.status; They are not recognised in the following block: .exec(http("${testType}") .post(RequestBuilder.launch1p0) .formParam(LTIParam.context_id.name(), "${district_pid}") ) My simplified version of my pom.xml looks like this: <properties> <java.version>17</java.version> <gatling.version>3.8.4</gatling.version> <gatling-maven-plugin.version>4.2.7</gatling-maven-plugin.version> <scala.version>2.12.8</scala.version> <scala-logging_2.11.version>3.7.2</scala-logging_2.11.version> <scala-maven-plugin.version>4.7.2</scala-maven-plugin.version> <performance-base.version>1.0</performance-base.version> </properties> <dependency> <groupId>io.gatling.highcharts</groupId> <artifactId>gatling-charts-highcharts</artifactId> <version>${gatling.version}</version> </dependency> <dependency> <groupId>io.gatling</groupId> <artifactId>gatling-app</artifactId> <version>${gatling.version}</version> </dependency> <dependency> <groupId>io.gatling</groupId> <artifactId>gatling-recorder</artifactId> <version>${gatling.version}</version> </dependency> <plugin> <groupId>io.gatling</groupId> <artifactId>gatling-maven-plugin</artifactId> <version>${gatling-maven-plugin.version}</version> </plugin> I have tried reloading the maven projects, deleting the gatling-core.jar files and retrying mvn clean install -DskipTests If anyone knows why it has an issue with this specific import, and not the following: import io.gatling.javaapi.core.ChainBuilder; import static io.gatling.javaapi.core.CoreDsl.*; please explain to me. Thank you.
[ "What you provided is very suspicious.\nYou mention that you have in your pom.xml:\n<scala.version>2.12.8</scala.version>\n<scala-logging_2.11.version>3.7.2</scala-logging_2.11.version>\n\nbut those properties are not used in what you provided.\nIf you are really pulling those libraries, those versions are completely wrong and break your build.\n\nGatling 3.8 requires Scala 2.13 (since 3.5)\nyou're trying to force Scala 2.12.8 => not compatible\nscala-logging_2.11 means \"compiles for Scala 2.11\"\n\n3 different and incompatible Scala versions!\nYou should be pulling Scala 2.13.10 (but that shouldn't be necessary) and scala-logging_2.13 3.9.5.\n" ]
[ 0 ]
[]
[]
[ "gatling", "java", "maven", "scala" ]
stackoverflow_0074659210_gatling_java_maven_scala.txt
Q: Converting for() loops to apply() functions in R for a custom XIRR function I've currently been looking for functions to calculate the XIRR of a large database of cashflows/balances in R, and I've come across this function, which I've been trying to modify to fit my code: library(tidyverse) xirr2 <- function(exflow, date) { if(as.numeric(max(date) - min(date)) <= 2) { return(0) } else if(abs(sum(exflow, na.rm = TRUE)) < 1e-12) { return(0) } else { npv <- function(range, exflow, date){ for(test.rate in range) { temp <- as.data.frame(cbind(exflow, date)) %>% mutate(npv = exflow * ((1 + test.rate/100)^(as.numeric(max(date) - date)/365))) %>% select(npv) %>% .[1] if(sum(exflow, na.rm = TRUE) > 0) { if(sum(temp, na.rm = TRUE) > 0) { min.rate <- test.rate next } else { max.rate <- test.rate break } } else { if(sum(temp, na.rm = TRUE) < 0) { min.rate <- test.rate next } else { max.rate <- test.rate break } } } return(list(min.rate = min.rate, max.rate = max.rate)) } max.rate <- c() min.rate <- c() if(sum(exflow, na.rm = TRUE) >= 1e-12) { range <- seq(from = 0, to = 1e8, by = 1e3) hundreds <- npv(range, exflow, date) range <- seq(from = hundreds$min.rate, to = hundreds$max.rate, by = 10) tens <- npv(range, exflow, date) range <- seq(from = tens$min.rate, to = tens$max.rate, by = 1) ones <- npv(range, exflow, date) range <- seq(from = ones$min.rate, to = ones$max.rate, by = 0.01) decimals <- npv(range, exflow, date) return(mean(unlist(decimals))/100) } else { range <- seq(from = 0, to = -1e8, by = -1e3) hundreds <- npv(range, exflow, date) range <- seq(from = hundreds$min.rate, to = hundreds$max.rate, by = -10) tens <- npv(range, exflow, date) range <- seq(from = tens$min.rate, to = tens$max.rate, by = -1) ones <- npv(range, exflow, date) range <- seq(from = ones$min.rate, to = ones$max.rate, by = -0.01) decimals <- npv(range, exflow, date) return(mean(unlist(decimals))/100) } } } Basically, given a vector of cashflows and a vector of corresponding dates, this function returns the annualized XIRR of a investment. While it works great and generates consistently correct answers when cross-referenced with MS Excel and LibreOffice Calc, it is a bit on the slower side, and I feel that it could be improved by replacing the for() loop with an apply() function or something from the data.table package. The speed issues are barely noticeable on small examples, but on large datasets like mine with a ton of edge cases, the slow-down can be pretty substantial. For what it's worth, I've tried a multitude of other XIRR functions from various packages, including tvm, FinancialMath, and FinCal. For whatever reason, these functions tend to break down over time: solutions eventually stop converging and become inaccurate, particularly with large cashflows and rapid changes between positive/negative returns. This might possibly be due to a common reliance on the uniroot() or polyroot() functions in R to calculate XIRR, but I'm not sure. In any case, the above function actually gets me the numbers I want---I just need some help optimizing it for larger datasets. Thank you in advance! EDIT Thank you for the help so far. Here are some minimum examples: Some deposits, some withdrawals, and then a complete withdrawal for a positive return. 
MS Excel shows XIRR = 15.32%: > flow1 <- c(-1000,-100,100,1200) > date1 <- as.Date(c("2018-01-01","2018-10-31","2019-03-31","2019-03-31"), format = "%Y-%m-%d") > tvm::xirr(flow1,date1) Error in uniroot(xnpv, interval = interval, cf = cf, d = d, tau = tau, : f.lower = f(lower) is NA > xirr2(flow1,date1) [1] 0.15315 An account receiving regular contributions with a poor return. MS Excel shows XIRR = -27.54%: > flow2 <- c(-200,-200,-200,-200,-200,800) > date2 <- as.Date(c("2018-01-01","2018-03-01","2018-06-01","2018-09-01","2019-01-01","2019-03-01"), format = "%Y-%m-%d") > tvm::xirr(flow2,date2) Error in uniroot(xnpv, interval = interval, cf = cf, d = d, tau = tau, : f.lower = f(lower) is NA > xirr2(flow2,date2) [1] -0.27535 Maybe I'm just using tvm::xirr() wrong? I'm not sure how to correct that uniroot() error. A: OK, I figured it out thanks to this answer. Turns out that tvm::xirr() does work well (and is significantly faster than the above function), but I had been calling it incorrectly. Here is a working example: > flow2 <- c(-200,-200,-200,-200,-200,800) > date2 <- as.Date(c("2018-01-01","2018-03-01","2018-06-01","2018-09-01","2019-01-01","2019-03-01"), format = "%Y-%m-%d") > tvm::xirr(flow2, date2, comp_freq = 1, maxiter = 100, tol = 1e-8, lower = -0.999, upper = 100) [1] -0.2753857 I had tried this earlier with lower = -1, which gives the same error as above. So I was off by 0.001... so close. Thank you all again for your help!
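For anyone hitting the same f.lower = f(lower) is NA error, a two-line illustration of why lower = -1 specifically breaks: at a rate of exactly -100% the discount factor is zero, so the NPV sums opposing infinities into NaN, which uniroot() reports as NA. The cash flows below reuse values from flow2:

(1 + (-1))^1          # 0: the discount factor vanishes at rate = -1
-200 / 0 + 800 / 0    # -Inf + Inf = NaN, hence f(lower) is NA

Any lower bound strictly greater than -1, such as the -0.999 above, keeps the function finite.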
Converting for() loops to apply() functions in R for a custom XIRR function
I've currently been looking for functions to calculate the XIRR of a large database of cashflows/balances in R, and I've come across this function, which I've been trying to modify to fit my code: library(tidyverse) xirr2 <- function(exflow, date) { if(as.numeric(max(date) - min(date)) <= 2) { return(0) } else if(abs(sum(exflow, na.rm = TRUE)) < 1e-12) { return(0) } else { npv <- function(range, exflow, date){ for(test.rate in range) { temp <- as.data.frame(cbind(exflow, date)) %>% mutate(npv = exflow * ((1 + test.rate/100)^(as.numeric(max(date) - date)/365))) %>% select(npv) %>% .[1] if(sum(exflow, na.rm = TRUE) > 0) { if(sum(temp, na.rm = TRUE) > 0) { min.rate <- test.rate next } else { max.rate <- test.rate break } } else { if(sum(temp, na.rm = TRUE) < 0) { min.rate <- test.rate next } else { max.rate <- test.rate break } } } return(list(min.rate = min.rate, max.rate = max.rate)) } max.rate <- c() min.rate <- c() if(sum(exflow, na.rm = TRUE) >= 1e-12) { range <- seq(from = 0, to = 1e8, by = 1e3) hundreds <- npv(range, exflow, date) range <- seq(from = hundreds$min.rate, to = hundreds$max.rate, by = 10) tens <- npv(range, exflow, date) range <- seq(from = tens$min.rate, to = tens$max.rate, by = 1) ones <- npv(range, exflow, date) range <- seq(from = ones$min.rate, to = ones$max.rate, by = 0.01) decimals <- npv(range, exflow, date) return(mean(unlist(decimals))/100) } else { range <- seq(from = 0, to = -1e8, by = -1e3) hundreds <- npv(range, exflow, date) range <- seq(from = hundreds$min.rate, to = hundreds$max.rate, by = -10) tens <- npv(range, exflow, date) range <- seq(from = tens$min.rate, to = tens$max.rate, by = -1) ones <- npv(range, exflow, date) range <- seq(from = ones$min.rate, to = ones$max.rate, by = -0.01) decimals <- npv(range, exflow, date) return(mean(unlist(decimals))/100) } } } Basically, given a vector of cashflows and a vector of corresponding dates, this function returns the annualized XIRR of a investment. While it works great and generates consistently correct answers when cross-referenced with MS Excel and LibreOffice Calc, it is a bit on the slower side, and I feel that it could be improved by replacing the for() loop with an apply() function or something from the data.table package. The speed issues are barely noticeable on small examples, but on large datasets like mine with a ton of edge cases, the slow-down can be pretty substantial. For what it's worth, I've tried a multitude of other XIRR functions from various packages, including tvm, FinancialMath, and FinCal. For whatever reason, these functions tend to break down over time: solutions eventually stop converging and become inaccurate, particularly with large cashflows and rapid changes between positive/negative returns. This might possibly be due to a common reliance on the uniroot() or polyroot() functions in R to calculate XIRR, but I'm not sure. In any case, the above function actually gets me the numbers I want---I just need some help optimizing it for larger datasets. Thank you in advance! EDIT Thank you for the help so far. Here are some minimum examples: Some deposits, some withdrawals, and then a complete withdrawal for a positive return. 
MS Excel shows XIRR = 15.32%: > flow1 <- c(-1000,-100,100,1200) > date1 <- as.Date(c("2018-01-01","2018-10-31","2019-03-31","2019-03-31"), format = "%Y-%m-%d") > tvm::xirr(flow1,date1) Error in uniroot(xnpv, interval = interval, cf = cf, d = d, tau = tau, : f.lower = f(lower) is NA > xirr2(flow1,date1) [1] 0.15315 An account receiving regular contributions with a poor return. MS Excel shows XIRR = -27.54%: > flow2 <- c(-200,-200,-200,-200,-200,800) > date2 <- as.Date(c("2018-01-01","2018-03-01","2018-06-01","2018-09-01","2019-01-01","2019-03-01"), format = "%Y-%m-%d") > tvm::xirr(flow2,date2) Error in uniroot(xnpv, interval = interval, cf = cf, d = d, tau = tau, : f.lower = f(lower) is NA > xirr2(flow2,date2) [1] -0.27535 Maybe I'm just using tvm::xirr() wrong? I'm not sure how to correct that uniroot() error.
[ "OK, I figured it out thanks to this answer. Turns out that tvm::xirr() does work well (and is significantly faster than the above function), but I had been calling it incorrectly. Here is a working example:\n> flow2 <- c(-200,-200,-200,-200,-200,800)\n> date2 <- as.Date(c(\"2018-01-01\",\"2018-03-01\",\"2018-06-01\",\"2018-09-01\",\"2019-01-01\",\"2019-03-01\"), format = \"%Y-%m-%d\")\n> tvm::xirr(flow2, date2, comp_freq = 1, maxiter = 100, tol = 1e-8, lower = -0.999, upper = 100)\n[1] -0.2753857\n\nI had tried this earlier with lower = -1, which gives the same error as above. So I was off by 0.001... so close. Thank you all again for your help!\n" ]
[ 0 ]
[]
[]
[ "apply", "for_loop", "lapply", "r", "xirr" ]
stackoverflow_0074652277_apply_for_loop_lapply_r_xirr.txt
Q: WinSCP: How to make sure SFTP upload gets renamed from .zip.filepart to .zip? Using the .NET assembly of WinSCP to upload a file. OperationResultBase.Check() is throwing the following error: WinSCP.SessionRemoteException: Transfer was successfully finished, but temporary transfer file 'testfile.zip.filepart' could not be renamed to target file name 'testfile.zip'. If the problem persists, you may want to turn off transfer resume support. It seems that this happens with any zip file that I try to send. If it makes a difference, these are zip files that were created using the DotNetZip library. Code that I'm using, taken pretty much directly from the example in the WinSCP documentation: public void uploadFile(string filePath, string remotePath) { TransferOptions transferOptions = new TransferOptions(); transferOptions.TransferMode = TransferMode.Binary; TransferOperationResult transferResult; transferResult = currentSession.PutFiles(filePath, remotePath, false, transferOptions); transferResult.Check(); foreach (TransferEventArgs transfer in transferResult.Transfers) { Console.WriteLine("Upload of {0} succeeded", transfer.FileName); } } Discussion over at the WinSCP forum indicates that the assembly doesn't yet allow programmatic control of transfer resume support. Is there a workaround for this? A: It sounds as if the filesystem on the destination server where the file is getting uploaded to does not allow file change permissions. This could be causing the renaming of the file at the finish of the upload to fail despite the fact that the complete file was uploaded and written to the filesystem with the temporary file name used while the transfer was in progress. If you don't have administrative access to the destination server, you can test that by trying to rename a file that is already on the destination server. If that fails also, then you will either need to have the proper permissions on the destination server changed in order for that to work, or you might have to use the advice provided in your error message to turn off the resume support so the file is initially opened for writing with the desired filename instead of the temporary filename (with the .filepart extension). A: Turn off the resume support: put *.txt -nopreservetime -nopermissions -resumesupport=off A: It would help if you included the full error message, including the root cause as returned by the server. My guess is that there's an antivirus application (or similar) running on the server side. The antivirus application checks any file once the upload finishes. That conflicts with WinSCP's attempt to rename the file once the upload is finished. The problem may tend to occur more frequently for .ZIP archives, either because they tend to be larger or simply because they need to get extracted before the check (which takes time). Anyway, you can disable the transfer to a temporary file name using TransferOptions.ResumeSupport. See also the documentation for the error message "Transfer was successfully finished, but temporary transfer file ... could not be renamed to target file name ..." A: All you have to do is disable transfer resume support using the code below. transferOptions.ResumeSupport.State = TransferResumeSupportState.Off;
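Putting the accepted advice together with the uploadFile method from the question. This is just a sketch; per the WinSCP .NET documentation, resume support is configured through the State property of TransferOptions.ResumeSupport:

public void uploadFile(string filePath, string remotePath)
{
    TransferOptions transferOptions = new TransferOptions();
    transferOptions.TransferMode = TransferMode.Binary;
    // Write straight to the target file name instead of a .filepart
    // temporary file, so no server-side rename is needed at the end.
    transferOptions.ResumeSupport.State = TransferResumeSupportState.Off;

    TransferOperationResult transferResult =
        currentSession.PutFiles(filePath, remotePath, false, transferOptions);
    transferResult.Check();

    foreach (TransferEventArgs transfer in transferResult.Transfers)
    {
        Console.WriteLine("Upload of {0} succeeded", transfer.FileName);
    }
}

The trade-off is that interrupted transfers of large files can no longer be resumed, which is usually acceptable for moderately sized archives.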
WinSCP: How to make sure SFTP upload gets renamed from .zip.filepart to .zip?
Using the .NET assembly of WinSCP to upload a file. OperationResultBase.Check() is throwing the following error: WinSCP.SessionRemoteException: Transfer was successfully finished, but temporary transfer file 'testfile.zip.filepart' could not be renamed to target file name 'testfile.zip'. If the problem persists, you may want to turn off transfer resume support. It seems that this happens with any zip file that I try to send. If it makes a difference, these are zip files that were created using the DotNetZip library. Code that I'm using, taken pretty much directly from the example in the WinSCP documentation: public void uploadFile(string filePath, string remotePath) { TransferOptions transferOptions = new TransferOptions(); transferOptions.TransferMode = TransferMode.Binary; TransferOperationResult transferResult; transferResult = currentSession.PutFiles(filePath, remotePath, false, transferOptions); transferResult.Check(); foreach (TransferEventArgs transfer in transferResult.Transfers) { Console.WriteLine("Upload of {0} succeeded", transfer.FileName); } } Discussion over at the WinSCP forum indicates that the assembly doesn't yet allow programmatic control of transfer resume support. Is there a workaround for this?
[ "It sounds as if the filesystem on the destination server where the file is getting uploaded to does not allow file change permissions. This could be causing the renaming of the file at the finish of the upload to fail despite the fact that the complete file was uploaded and written to the filesystem with the temporary file name used while the transfer was in progress. If you don't have administrative access to the destination server, you can test that by trying to rename a file that is already on the destination server. If that fails also, then you will either need to have the proper permissions on the destination server changed in order for that to work. Otherwise you might have to use the advice provided in your error message to turn off the resume support so it is initially opened for writing with the desired filename instead of the temporary filename (with the .filepart extension).\n", "Turn off the resumesupport:\nput *.txt -nopreservetime -nopermissions -resumesupport=off\n\n", "It would help, if you included full error message, including root cause as returned by the server.\nMy guess is that there's an antivirus application (or similar) running on the server-side. The antivirus application checks any file once upload finishes. That conflicts with WinSCP attempt to rename the file once the upload is finished. The problem may tend to occur more frequently for .ZIP archives, either because they tend to be larger or simply because they need to get extracted before the check (what takes time).\nAnyway, you can disable the transfer to temporary file name using the TransferOptions.ResumeSupport. \nSee also the documentation for the error message \"Transfer was successfully finished, but temporary transfer file ... could not be renamed to target file name ...\"\n", "All you have to do is to disable TransferResumeSupport using the below code.\ntransferOptions.ResumeSupport = new TransferResumeSuppor {State = TransferResumeSupportState.Off };\n" ]
[ 4, 2, 1, 0 ]
[]
[]
[ "c#", "scp", "sftp", "winscp", "winscp_net" ]
stackoverflow_0010858143_c#_scp_sftp_winscp_winscp_net.txt