Dataset columns:
  content: string (86 to 88.9k characters)
  title: string (0 to 150 characters)
  question: string (1 to 35.8k characters)
  answers: list
  answers_scores: list
  non_answers: list
  non_answers_scores: list
  tags: list
  name: string (30 to 130 characters)
Q: How to pull out only one beta coefficient for each group separately from the regression equation and calculate mean by variables in R here little example of my data. sales.data=structure(list(MDM_Key = c(370L, 370L, 370L, 370L, 370L, 370L, 370L, 371L, 371L, 371L, 371L, 371L, 371L, 371L), sale_count = c(30L, 32L, 32L, 24L, 20L, 15L, 23L, 30L, 32L, 32L, 24L, 20L, 15L, 23L ), iek_disc_price = c(38227.08, 38227.08, 33739.7, 38227.08, 38227.08, 28844.16, 31649.255, 38227.08, 38227.08, 33739.7, 38227.08, 38227.08, 28844.16, 31649.255)), class = "data.frame", row.names = c(NA, -14L)) i perform regression analysis str(sales.data) m1<-lm(formula=sale_count~iek_disc_price,data=sales.data) summary(m1) But the main difficulty is that for each group (MDM_Key) I don't need all the regression results from the summary, but only one beta coefficient. here B=0.0008559. but then i need calculate mean value for sale_count and the mean for iek_disc_price (also for each mdm key group) so the desired result would be like this MDM_Key beta mean(sale_count) mean(iek_disc_price) 370 0.0008559 25.14 35305 371 0.0008559 25.14 35305 How to take only beta (nor intercept)regression coefficient for each group mdm_key and also for each group, calculate the mean values for sale_count and iek_disc_price to get the summary table indicated above. Thank you for your help. A: If I understood correctly, you want to apply one regression per MDM_Key. library(dplyr) library(purrr) library(broom) sales.data %>% group_by(MDM_Key) %>% mutate( mean_sale_count = mean(sale_count), mean_iek_disc_price = mean(iek_disc_price) ) %>% nest(-MDM_Key,-mean_sale_count,-mean_iek_disc_price) %>% mutate( coefs = map(.x = data,.f = ~tidy(lm(formula=.$sale_count~.$iek_disc_price,data=.))) ) %>% unnest(coefs) %>% filter(term != "(Intercept)") %>% select(MDM_Key,beta = estimate,mean_sale_count,mean_iek_disc_price) # A tibble: 2 x 4 # Groups: MDM_Key [2] MDM_Key beta mean_sale_count mean_iek_disc_price <int> <dbl> <dbl> <dbl> 1 370 0.000856 25.1 35306. 2 371 0.000856 25.1 35306. A: Get the means using aggregate and the beta values using lmList and then put them together and rearrange the columns in the order shown in the question. Omit [, c(2:1, 3:4)] if the column order doesn't matter. Note that nlme comes with R and does not have to be installed. library(nlme) # lmList means <- aggregate(. ~ MDM_Key, sales.data, mean) fm <- lmList(sale_count ~ iek_disc_price | MDM_Key, sales.data) cbind(beta = coef(fm)[, 2], means)[, c(2:1, 3:4)] ## MDM_Key beta sale_count iek_disc_price ## 1 370 0.0008558854 25.14286 35305.92 ## 2 371 0.0008558854 25.14286 35305.92 A: Using R base and the split + apply + combine strategy: do.call(rbind, lapply(split(sales.data, sales.data$MDM_Key), function(i) { c(beta=coef(lm(sale_count~iek_disc_price, data=i))[2], sale_count_mean=mean(i$sale_count), iek_disc_price_mean=mean(i$iek_disc_price)) } )) beta.iek_disc_price sale_count_mean iek_disc_price_mean 370 0.0008558854 25.14286 35305.92 371 0.0008558854 25.14286 35305.92
How to pull out only one beta coefficient for each group separately from the regression equation and calculate mean by variables in R
here little example of my data. sales.data=structure(list(MDM_Key = c(370L, 370L, 370L, 370L, 370L, 370L, 370L, 371L, 371L, 371L, 371L, 371L, 371L, 371L), sale_count = c(30L, 32L, 32L, 24L, 20L, 15L, 23L, 30L, 32L, 32L, 24L, 20L, 15L, 23L ), iek_disc_price = c(38227.08, 38227.08, 33739.7, 38227.08, 38227.08, 28844.16, 31649.255, 38227.08, 38227.08, 33739.7, 38227.08, 38227.08, 28844.16, 31649.255)), class = "data.frame", row.names = c(NA, -14L)) i perform regression analysis str(sales.data) m1<-lm(formula=sale_count~iek_disc_price,data=sales.data) summary(m1) But the main difficulty is that for each group (MDM_Key) I don't need all the regression results from the summary, but only one beta coefficient. here B=0.0008559. but then i need calculate mean value for sale_count and the mean for iek_disc_price (also for each mdm key group) so the desired result would be like this MDM_Key beta mean(sale_count) mean(iek_disc_price) 370 0.0008559 25.14 35305 371 0.0008559 25.14 35305 How to take only beta (nor intercept)regression coefficient for each group mdm_key and also for each group, calculate the mean values for sale_count and iek_disc_price to get the summary table indicated above. Thank you for your help.
[ "If I understood correctly, you want to apply one regression per MDM_Key.\nlibrary(dplyr)\nlibrary(purrr)\nlibrary(broom)\n\nsales.data %>% \n group_by(MDM_Key) %>% \n mutate(\n mean_sale_count = mean(sale_count),\n mean_iek_disc_price = mean(iek_disc_price)\n ) %>% \n nest(-MDM_Key,-mean_sale_count,-mean_iek_disc_price) %>% \n mutate(\n coefs = map(.x = data,.f = ~tidy(lm(formula=.$sale_count~.$iek_disc_price,data=.)))\n ) %>%\n unnest(coefs) %>% \n filter(term != \"(Intercept)\") %>% \n select(MDM_Key,beta = estimate,mean_sale_count,mean_iek_disc_price)\n\n\n# A tibble: 2 x 4\n# Groups: MDM_Key [2]\n MDM_Key beta mean_sale_count mean_iek_disc_price\n <int> <dbl> <dbl> <dbl>\n1 370 0.000856 25.1 35306.\n2 371 0.000856 25.1 35306.\n\n", "Get the means using aggregate and the beta values using lmList and then put them together and rearrange the columns in the order shown in the question. Omit [, c(2:1, 3:4)] if the column order doesn't matter. Note that nlme comes with R and does not have to be installed.\nlibrary(nlme) # lmList\n\nmeans <- aggregate(. ~ MDM_Key, sales.data, mean)\nfm <- lmList(sale_count ~ iek_disc_price | MDM_Key, sales.data)\ncbind(beta = coef(fm)[, 2], means)[, c(2:1, 3:4)]\n\n## MDM_Key beta sale_count iek_disc_price\n## 1 370 0.0008558854 25.14286 35305.92\n## 2 371 0.0008558854 25.14286 35305.92\n\n", "Using R base and the split + apply + combine strategy:\ndo.call(rbind, lapply(split(sales.data, sales.data$MDM_Key), function(i) {\n c(beta=coef(lm(sale_count~iek_disc_price, data=i))[2],\n sale_count_mean=mean(i$sale_count), \n iek_disc_price_mean=mean(i$iek_disc_price))\n} ))\n\n beta.iek_disc_price sale_count_mean iek_disc_price_mean\n370 0.0008558854 25.14286 35305.92\n371 0.0008558854 25.14286 35305.92\n\n" ]
[ 2, 2, 1 ]
[]
[]
[ "r" ]
stackoverflow_0074657266_r.txt
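A side note on the thread above: the split + apply + combine strategy named in the base-R answer carries over directly to Python's pandas, which may help readers outside R. The following is an illustrative sketch only: it rebuilds the same sample data, and the use of np.polyfit to recover the slope is my assumption (any per-group least-squares fit would do).

import numpy as np
import pandas as pd

# Same data as the R example: two MDM_Key groups with identical values.
sales_df = pd.DataFrame({
    "MDM_Key": [370] * 7 + [371] * 7,
    "sale_count": [30, 32, 32, 24, 20, 15, 23] * 2,
    "iek_disc_price": [38227.08, 38227.08, 33739.7, 38227.08,
                       38227.08, 28844.16, 31649.255] * 2,
})

def per_group(g):
    # Degree-1 polyfit returns [slope, intercept]; the slope is the beta.
    slope, _intercept = np.polyfit(g["iek_disc_price"], g["sale_count"], 1)
    return pd.Series({
        "beta": slope,
        "mean_sale_count": g["sale_count"].mean(),
        "mean_iek_disc_price": g["iek_disc_price"].mean(),
    })

print(sales_df.groupby("MDM_Key").apply(per_group))
# Matches the thread: beta ~ 0.000856, means ~ 25.14 and ~ 35305.92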
Q: Pandas - Dataframe dates subtraction I am dealing with a dataframe like this: mydata['TS_START'] 0 2022-11-09 00:00:00 1 2022-11-09 00:00:30 2 2022-11-09 00:01:00 3 2022-11-09 00:01:30 4 2022-11-09 00:02:00 ... I would like to create a new column where: mydata['delta_t'] 0 2022-11-09 00:00:30 - 2022-11-09 00:00:00 1 2022-11-09 00:01:00 - 2022-11-09 00:00:30 2 2022-11-09 00:01:30 - 2022-11-09 00:01:00 3 2022-11-09 00:02:00 - 2022-11-09 00:01:30 ... Obtaining something like this (in decimals units hour based): mydata['delta_t'] 0 30/3600 1 30/3600 2 30/3600 3 30/3600 ... I obtained this result using a for cycle, but it is very slow. I would like to obtain a faster solution, using a vectorized form. Do you have any suggestion? A: here is one way : df['date'] = pd.to_datetime(df['date']) df['delta_t'] = (df['date'] - df['date'].shift(1)).dt.total_seconds() print(df) output : >> date delta_t 0 2022-11-09 00:00:00 NaN 1 2022-11-09 00:00:30 30.0 2 2022-11-09 00:01:00 30.0 3 2022-11-09 00:01:30 30.0 4 2022-11-09 00:02:00 30.0
Pandas - Dataframe dates subtraction
I am dealing with a dataframe like this: mydata['TS_START'] 0 2022-11-09 00:00:00 1 2022-11-09 00:00:30 2 2022-11-09 00:01:00 3 2022-11-09 00:01:30 4 2022-11-09 00:02:00 ... I would like to create a new column where: mydata['delta_t'] 0 2022-11-09 00:00:30 - 2022-11-09 00:00:00 1 2022-11-09 00:01:00 - 2022-11-09 00:00:30 2 2022-11-09 00:01:30 - 2022-11-09 00:01:00 3 2022-11-09 00:02:00 - 2022-11-09 00:01:30 ... Obtaining something like this (in decimals units hour based): mydata['delta_t'] 0 30/3600 1 30/3600 2 30/3600 3 30/3600 ... I obtained this result using a for cycle, but it is very slow. I would like to obtain a faster solution, using a vectorized form. Do you have any suggestion?
[ "here is one way :\ndf['date'] = pd.to_datetime(df['date'])\n\ndf['delta_t'] = (df['date'] - df['date'].shift(1)).dt.total_seconds()\nprint(df)\n\noutput :\n>>\n date delta_t\n0 2022-11-09 00:00:00 NaN\n1 2022-11-09 00:00:30 30.0\n2 2022-11-09 00:01:00 30.0\n3 2022-11-09 00:01:30 30.0\n4 2022-11-09 00:02:00 30.0\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "date", "pandas", "python" ]
stackoverflow_0074658022_dataframe_date_pandas_python.txt
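One detail worth flagging in the answer above: it returns the difference in seconds, whereas the question asked for hour-based decimals (30/3600). Dividing by 3600 closes that gap. A short sketch, using .diff() as the vectorized shorthand for subtracting .shift(1):

import pandas as pd

mydata = pd.DataFrame({"TS_START": pd.to_datetime([
    "2022-11-09 00:00:00", "2022-11-09 00:00:30", "2022-11-09 00:01:00",
    "2022-11-09 00:01:30", "2022-11-09 00:02:00",
])})

# diff() == column minus column.shift(1); total_seconds()/3600 gives
# the decimal hours the question asked for.
mydata["delta_t"] = mydata["TS_START"].diff().dt.total_seconds() / 3600
print(mydata["delta_t"])  # NaN, then 0.008333... (== 30/3600) per row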
Q: mysql: i want to authenticate as user on remote host but..give me localhost I want to login to mysql db with another user I create the user mysql -u root create USER 'myuser'@'%' IDENTIFIED BY 'password' REQUIRE SSL; flush privileges; grant privileges grant all on *.* to 'myuser'@'%' IDENTIFIED BY 'password'; flush privileges; but mysql refuse connect and "force" localhost ! mysql -h myhost -p Enter password: ERROR 1045 (28000): Access denied for user 'myuser'@'localhost' (using password: YES) the tcp port 3306 is open telnet-ssl myshost 3306 Escape character is '^]'. Y ******-MariaDB... ss -tulnepona|grep 3306 tcp LISTEN 0 80 *:3306 *:* users:(("mariadbd",pid=13501,fd=20)) uid:27 ino:39020 sk:45 cgroup:unreachable:1 v6only:0 <-> tcp TIME-WAIT 0 0 [::1]:59876 [::1]:3306 timer:(timewait,53sec,0) ino:0 sk:47 why force the localhost and don't connect to remote server? Server is mariadb on slackware-15, i don't use $HOME/.my.cnf A: Solution found, I have to test it from a remote site, not from localsite
mysql: i want to authenticate as user on remote host but..give me localhost
I want to login to mysql db with another user I create the user mysql -u root create USER 'myuser'@'%' IDENTIFIED BY 'password' REQUIRE SSL; flush privileges; grant privileges grant all on *.* to 'myuser'@'%' IDENTIFIED BY 'password'; flush privileges; but mysql refuse connect and "force" localhost ! mysql -h myhost -p Enter password: ERROR 1045 (28000): Access denied for user 'myuser'@'localhost' (using password: YES) the tcp port 3306 is open telnet-ssl myshost 3306 Escape character is '^]'. Y ******-MariaDB... ss -tulnepona|grep 3306 tcp LISTEN 0 80 *:3306 *:* users:(("mariadbd",pid=13501,fd=20)) uid:27 ino:39020 sk:45 cgroup:unreachable:1 v6only:0 <-> tcp TIME-WAIT 0 0 [::1]:59876 [::1]:3306 timer:(timewait,53sec,0) ino:0 sk:47 why force the localhost and don't connect to remote server? Server is mariadb on slackware-15, i don't use $HOME/.my.cnf
[ "Solution found, I have to test it from a remote site, not from localsite\n" ]
[ 0 ]
[]
[]
[ "mariadb" ]
stackoverflow_0074658217_mariadb.txt
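To spell out the terse self-answer: MariaDB matches accounts by the most specific host entry, so a connection made on the server itself is evaluated as 'myuser'@'localhost' and can be denied even though 'myuser'@'%' exists; only a connection from another machine exercises the wildcard account. One way to verify that from a genuinely remote host is a minimal script such as this sketch (the host name and credentials are placeholders from the question, and the mysql-connector-python package is an assumed dependency):

import mysql.connector  # pip install mysql-connector-python

# Run this from a remote machine, not on the database host itself.
conn = mysql.connector.connect(
    host="myhost",        # placeholder: the MariaDB server's address
    port=3306,
    user="myuser",
    password="password",
    ssl_disabled=False,   # the account was created with REQUIRE SSL
)
print("connected:", conn.is_connected())
conn.close()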
Q: How to fire onChange event and get the result JestJS I have an input in one of my classes which onChange updates some of the properties, according to what the user typed. So I want to call that input, give it a value, then it should go through the onChange method and then get the result from one of the properties. Here is my test case it("test-input-value-1", async () => { const { getByTestId } = render( <> <Home /> <Try typeracertext="dsa dsa"/> </> ); const input = getByTestId("add-word-input"); const inputWord = "For"; userEvent.type(input, 'For') const userText = await getByTestId("userText"); const typeracertext = getByTestId("typeracertext"); await expect(userText.innerHTML).toBe(inputWord); }); and here is what I got I don't have an idea why the result is empty when it has to be changed into the same word "For" that the input has. EDIT: Here is the JSX Code as requested Home.js: const Game = () => { if (cantType === false) { return ( <Try typeracertext={typeracertext} setWholeText={setWholeText} setStartTyping={setStartTyping} setEndTyping={setEndTyping} setCountWords={setCountWords} newGame={newGame} /> ) } else { return ( <input readOnly /> ) } } return ( <span data-testid="userText" className="userTextHome">{wholeText}</span><div data-testid="typeracertext"> </div> <div data-testid="add-word-input2" className="box d"> {Game()} </div> ... Try.js: //here is also the onChange method but it is not needed in this case as it is very long and I have explained what it does in the end (make a property to be equal to the input data) return ( <div data-testid="add-word-input"><input name="add-word-input" placeholder="Message..." onChange={onChange}></input> </div> ); A: You are executing user event on the div element, instead of input element. Try moving attribute data-testid to input element in Try.js file: return ( <div><input data-testid="add-word-input" name="add-word-input" placeholder="Message..." onChange={onChange}></input> </div> ); Working example: Codesandbox Input.js: import { useState } from "react"; export const Input = () => { const [text, setText] = useState(); const handleChange = (e) => { setText(e.target.value); }; return ( <> <div> <input data-testid="add-word-input" name="add-word-input" placeholder="Message..." onChange={handleChange} ></input> </div> <label data-testId="test-label">{text}</label> </> ); }; Input.test.js: import { render } from "@testing-library/react"; import userEvent from "@testing-library/user-event"; import "@testing-library/jest-dom"; import { Input } from "./Input"; describe("Input", () => { it("should change", async () => { const { getByTestId } = render(<Input />); let input = getByTestId("add-word-input"); expect(input).not.toBe(null); await userEvent.type(input, "for"); let label = getByTestId("test-label"); expect(label.textContent).toBe("for"); }); });
How to fire onChange event and get the result JestJS
I have an input in one of my classes which onChange updates some of the properties, according to what the user typed. So I want to call that input, give it a value, then it should go through the onChange method and then get the result from one of the properties. Here is my test case it("test-input-value-1", async () => { const { getByTestId } = render( <> <Home /> <Try typeracertext="dsa dsa"/> </> ); const input = getByTestId("add-word-input"); const inputWord = "For"; userEvent.type(input, 'For') const userText = await getByTestId("userText"); const typeracertext = getByTestId("typeracertext"); await expect(userText.innerHTML).toBe(inputWord); }); and here is what I got I don't have an idea why the result is empty when it has to be changed into the same word "For" that the input has. EDIT: Here is the JSX Code as requested Home.js: const Game = () => { if (cantType === false) { return ( <Try typeracertext={typeracertext} setWholeText={setWholeText} setStartTyping={setStartTyping} setEndTyping={setEndTyping} setCountWords={setCountWords} newGame={newGame} /> ) } else { return ( <input readOnly /> ) } } return ( <span data-testid="userText" className="userTextHome">{wholeText}</span><div data-testid="typeracertext"> </div> <div data-testid="add-word-input2" className="box d"> {Game()} </div> ... Try.js: //here is also the onChange method but it is not needed in this case as it is very long and I have explained what it does in the end (make a property to be equal to the input data) return ( <div data-testid="add-word-input"><input name="add-word-input" placeholder="Message..." onChange={onChange}></input> </div> );
[ "You are executing user event on the div element, instead of input element.\nTry moving attribute data-testid to input element in Try.js file:\nreturn (\n <div><input data-testid=\"add-word-input\" name=\"add-word-input\" placeholder=\"Message...\" onChange={onChange}></input> </div>\n);\n\nWorking example: Codesandbox\nInput.js:\nimport { useState } from \"react\";\n\nexport const Input = () => {\n const [text, setText] = useState();\n const handleChange = (e) => {\n setText(e.target.value);\n };\n return (\n <>\n <div>\n <input\n data-testid=\"add-word-input\"\n name=\"add-word-input\"\n placeholder=\"Message...\"\n onChange={handleChange}\n ></input>\n </div>\n <label data-testId=\"test-label\">{text}</label>\n </>\n );\n};\n\nInput.test.js:\nimport { render } from \"@testing-library/react\";\nimport userEvent from \"@testing-library/user-event\";\nimport \"@testing-library/jest-dom\";\nimport { Input } from \"./Input\";\n\ndescribe(\"Input\", () => {\n it(\"should change\", async () => {\n const { getByTestId } = render(<Input />);\n\n let input = getByTestId(\"add-word-input\");\n expect(input).not.toBe(null);\n await userEvent.type(input, \"for\");\n\n let label = getByTestId(\"test-label\");\n\n expect(label.textContent).toBe(\"for\");\n });\n});\n\n" ]
[ 0 ]
[]
[]
[ "jestjs", "reactjs" ]
stackoverflow_0074546200_jestjs_reactjs.txt
Q: TextBox Bound to double Unexpected Behavior I have a TextBox bound to a double which must be validated on each keystroke. I would like to allow the user to enter any character. For some reason I don't understand the behavior is not as desired. Examples: .3 - it is accepted, but changed to 0.3; abc123. - accepted as it is; 12.3 – . is not accepted, the final result is 123; 123, place cursor between 2 and 3, enter . – accepted as 12.3. I cannot imagine a scenario in which this behavior is desire. How do I fix it? <TextBox Name="txtPrice" Text="{Binding Model.Price, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" /> A: Andy (in comments) is right - the problem is from UpdateSourceTrigger=PropertyChanged - it means that whenever you type something, it sends it to the backing property, gets converted to a double, then sent back to the text box as that double. So if you're typing 12.3, when you've typed 12. it will convert it to the double 12 and then place that in the box, which effectively to the user looks like it dropped the decimal point. The other results you're seeing are similar. EDIT: I misread your question earlier and thought you were asking to block invalid text, rather than simply validating it. Here's an updated answer You can set this to only bind one way, from the textbox to the binding target, so the converted value doesn't go back to the textbox, by doing this: <TextBox Name="txtPrice" Text="{Binding Model.Price, ValidatesOnDataErrors=True, Mode=OneWayToSource, UpdateSourceTrigger=PropertyChanged}" /> The downside to this is that changing the Model.Price no longer updates the value shown in the textbox - it only propagates TextBox -> Model, not the other way round. If this isn't an issue for you (eg it's just for user input rather than editing), then use that. If it is an issue, then you're going to have to write you're own value converter to specify your desired behaviour. Original answer, about blocking non-numeric input: If you need to validate on every keystroke then you're going to have use something like PreviewTextInput to validate what text can go into the textbox: XAML <TextBox Name="txtPrice" PreviewTextInput="ValidateNumericInput" Text="{Binding Model.Price, ValidatesOnDataErrors=True, UpdateSourceTrigger=Default}" /> C# private void ValidateNumericInput(object sender, TextCompositionEventArgs e) { var textBox = (TextBox)sender; var oldText = textBox.Text; var newText = oldText.Substring(0, textBox.SelectionStart) + e.Text + oldText.Substring(textBox.SelectionStart + textBox.SelectionLength); // If it's not valid, then ignore this text input if (!double.TryParse(newText, out var _) && newText != "." && newText != "-") { e.Handled = true; } } What this does is works out if the text you are trying to enter is a valid double (or the start of one), and ignores it if it's not. Also changed the UpdateSourceTrigger to it's default value, which means the double value won't back-propogate to the textbox. Note that PreviewTextInput does not get hit when you paste something into the text box rather than type it (absolutely no idea which genius at MS decided that), so you can still paste invalid inputs. If that's an issue to you, look into PasteHandlers as to how you can mitigate that. As a side note, you're right that WPF is bad at this particular area - you're not the first person to deal with this issue and you won't be the last. Text handling in TextBoxes has a lot of weird gotchas and edge cases, and it's a pain in the butt. I don't know why they don't add a text control that lets you set a validity regex or something similar.
TextBox Bound to double Unexpected Behavior
I have a TextBox bound to a double which must be validated on each keystroke. I would like to allow the user to enter any character. For some reason I don't understand the behavior is not as desired. Examples: .3 - it is accepted, but changed to 0.3; abc123. - accepted as it is; 12.3 – . is not accepted, the final result is 123; 123, place cursor between 2 and 3, enter . – accepted as 12.3. I cannot imagine a scenario in which this behavior is desire. How do I fix it? <TextBox Name="txtPrice" Text="{Binding Model.Price, ValidatesOnDataErrors=True, UpdateSourceTrigger=PropertyChanged}" />
[ "Andy (in comments) is right - the problem is from UpdateSourceTrigger=PropertyChanged - it means that whenever you type something, it sends it to the backing property, gets converted to a double, then sent back to the text box as that double. So if you're typing 12.3, when you've typed 12. it will convert it to the double 12 and then place that in the box, which effectively to the user looks like it dropped the decimal point. The other results you're seeing are similar.\nEDIT: I misread your question earlier and thought you were asking to block invalid text, rather than simply validating it. Here's an updated answer\nYou can set this to only bind one way, from the textbox to the binding target, so the converted value doesn't go back to the textbox, by doing this:\n<TextBox Name=\"txtPrice\"\n Text=\"{Binding Model.Price,\n ValidatesOnDataErrors=True,\n Mode=OneWayToSource,\n UpdateSourceTrigger=PropertyChanged}\" />\n\nThe downside to this is that changing the Model.Price no longer updates the value shown in the textbox - it only propagates TextBox -> Model, not the other way round. If this isn't an issue for you (eg it's just for user input rather than editing), then use that. If it is an issue, then you're going to have to write you're own value converter to specify your desired behaviour.\nOriginal answer, about blocking non-numeric input:\nIf you need to validate on every keystroke then you're going to have use something like PreviewTextInput to validate what text can go into the textbox:\nXAML\n<TextBox Name=\"txtPrice\"\nPreviewTextInput=\"ValidateNumericInput\"\nText=\"{Binding Model.Price,\nValidatesOnDataErrors=True,\nUpdateSourceTrigger=Default}\" />\n\nC#\nprivate void ValidateNumericInput(object sender, TextCompositionEventArgs e)\n{\n var textBox = (TextBox)sender;\n var oldText = textBox.Text;\n\n var newText = oldText.Substring(0, textBox.SelectionStart) + e.Text + oldText.Substring(textBox.SelectionStart + textBox.SelectionLength);\n\n // If it's not valid, then ignore this text input\n if (!double.TryParse(newText, out var _) && newText != \".\" && newText != \"-\")\n {\n e.Handled = true;\n }\n}\n\nWhat this does is works out if the text you are trying to enter is a valid double (or the start of one), and ignores it if it's not.\nAlso changed the UpdateSourceTrigger to it's default value, which means the double value won't back-propogate to the textbox.\nNote that PreviewTextInput does not get hit when you paste something into the text box rather than type it (absolutely no idea which genius at MS decided that), so you can still paste invalid inputs. If that's an issue to you, look into PasteHandlers as to how you can mitigate that.\nAs a side note, you're right that WPF is bad at this particular area - you're not the first person to deal with this issue and you won't be the last. Text handling in TextBoxes has a lot of weird gotchas and edge cases, and it's a pain in the butt. I don't know why they don't add a text control that lets you set a validity regex or something similar.\n" ]
[ 1 ]
[]
[]
[ "c#", "wpf", "xaml" ]
stackoverflow_0074646493_c#_wpf_xaml.txt
Q: Convert string to List so square bracket will be eliminated and will be a list '[1, 2]' It is a string. How to I make it List [1,2] So convertion from '[1, 2]' to [1,2] A: You have a couple of options eval eval('[1, 2]') # [1, 2] ast.literal_eval import ast ast.literal_eval('[1, 2]') # [1, 2] string parsing list(map(int, '[1, 2]'.strip('[]').split(', '))) # [1, 2]
Convert string to List so square bracket will be eliminated and will be a list
'[1, 2]' It is a string. How to I make it List [1,2] So convertion from '[1, 2]' to [1,2]
[ "You have a couple of options\neval\neval('[1, 2]')\n# [1, 2]\n\nast.literal_eval\nimport ast\nast.literal_eval('[1, 2]')\n# [1, 2]\n\nstring parsing\nlist(map(int, '[1, 2]'.strip('[]').split(', ')))\n# [1, 2]\n\n" ]
[ 2 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074658231_list_python.txt
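A word on choosing between the three options above: eval executes arbitrary Python, so for any input you do not fully control, ast.literal_eval is the safer pick: it parses plain literals only and refuses everything else. A small demonstration:

import ast

print(ast.literal_eval("[1, 2]"))  # [1, 2]

# Anything beyond a literal is rejected rather than executed,
# which is exactly what you want for untrusted strings:
try:
    ast.literal_eval("__import__('os').getcwd()")
except ValueError as err:
    print("rejected:", err)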
Q: Not able to get the value of summernote in asp.net I am setting the value of summernote code to hidden field but in code behind hidden field returns empty string Javascript code $("#<%= txtQuestion.ClientID %>").on('summernote.blur', function () { debugger; let a = $('#<%= txtQuestion.ClientID %>').summernote('code'); document.getElementById("<%= HiddenField1.ClientID %>").innerText = a; console.log(a); }); Code Behind string a = HiddenField1.Value;
Not able to get the value of summernote in asp.net
I am setting the value of summernote code to hidden field but in code behind hidden field returns empty string Javascript code $("#<%= txtQuestion.ClientID %>").on('summernote.blur', function () { debugger; let a = $('#<%= txtQuestion.ClientID %>').summernote('code'); document.getElementById("<%= HiddenField1.ClientID %>").innerText = a; console.log(a); }); Code Behind string a = HiddenField1.Value;
[]
[]
[ "You're setting innerText, you should set value.\nhttps://www.w3schools.com/tags/att_input_value.asp\n" ]
[ -1 ]
[ "asp.net", "c#", "javascript", "summernote" ]
stackoverflow_0074656752_asp.net_c#_javascript_summernote.txt
Q: Dynamic links is not connected properly : Path prefix is not configured I keep getting the following message in Firebase dynamic links so can someone help me? https://deep.example_url.com is not connected properly Your Dynamic Links path prefix is not configured in firebase.json. Check your firebase.json configuration and try again. A: Make sure that the Dynamic link rewrite rule comes before the existing rule as rewrites are processed in the order they are listed. See this guide for additional details. Here is an example of a fixed firebase.json: { "hosting": { "public": "public", "ignore": ["firebase.json", "**/.*", "**/node_modules/**"], "appAssociation": "AUTO", "rewrites": [ { "source": "/urlPrefix/**", "dynamicLinks": true}, { "source": "/**", "destination": "index.html" } ] } }
Dynamic links is not connected properly : Path prefix is not configured
I keep getting the following message in Firebase dynamic links so can someone help me? https://deep.example_url.com is not connected properly Your Dynamic Links path prefix is not configured in firebase.json. Check your firebase.json configuration and try again.
[ "Make sure that the Dynamic link rewrite rule comes before the existing rule as rewrites are processed in the order they are listed. See this guide for additional details.\nHere is an example of a fixed firebase.json:\n{\n\n \"hosting\": {\n\n \"public\": \"public\",\n\n \"ignore\": [\"firebase.json\", \"**/.*\", \"**/node_modules/**\"],\n\n \"appAssociation\": \"AUTO\",\n\n \"rewrites\": [\n\n { \"source\": \"/urlPrefix/**\", \"dynamicLinks\": true},\n\n { \"source\": \"/**\", \"destination\": \"index.html\" }\n\n ]\n\n }\n\n}\n\n" ]
[ 0 ]
[]
[]
[ "firebase_dynamic_links" ]
stackoverflow_0072214937_firebase_dynamic_links.txt
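Because rewrites are processed in listed order, a misordered firebase.json is easy to reintroduce later. As a throwaway sanity check (a sketch assuming the file layout shown in the answer), one can assert that the dynamicLinks rule precedes the catch-all before deploying:

import json

with open("firebase.json") as f:
    rewrites = json.load(f)["hosting"]["rewrites"]

dl = next(i for i, r in enumerate(rewrites) if r.get("dynamicLinks"))
catch_all = next(i for i, r in enumerate(rewrites) if r.get("source") == "/**")

# next() raises StopIteration if either rule is missing entirely.
assert dl < catch_all, "dynamicLinks rule must come before the /** catch-all"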
Q: How can I connect to Memgraph Cloud from Python I have Memgraph Lab installed on my computer as well as mgconsole. I know how to connect to Memgraph Cloud using them, but I'd like to connect to them using Python. How can I do that? A: First, you will to install a Python driver. You will actually use GQLAlchemy. You can use pip (pip install gqlalchemy) or poetry (poetry add gqlalchemy). In the following code, replace YOUR_MEMGRAPH_PASSWORD, YOUR_MEMGRAPH_USERNAME and MEMGRAPH_HOST_ADDRESS with your values: from gqlalchemy import Memgraph MEMGRAPH_HOST = 'MEMGRAPH_HOST_ADDRESS' MEMGRAPH_PORT = 7687 MEMGRAPH_USERNAME = 'YOUR_MEMGRAPH_USERNAME' # Place your Memgraph password that was created during Project creation MEMGRAPH_PASSWORD = 'YOUR_MEMGRAPH_PASSWORD' def hello_memgraph(host: str, port: int, username: str, password: str): connection = Memgraph(host, port, username, password, encrypted=True) results = connection.execute_and_fetch( 'CREATE (n:FirstNode { message: "Hello Memgraph from Python!" }) RETURN n.message AS message' ) print("Created node with message:", next(results)["message"]) if __name__ == "__main__": hello_memgraph(MEMGRAPH_HOST, MEMGRAPH_PORT, MEMGRAPH_USERNAME, MEMGRAPH_PASSWORD)
How can I connect to Memgraph Cloud from Python
I have Memgraph Lab installed on my computer as well as mgconsole. I know how to connect to Memgraph Cloud using them, but I'd like to connect to them using Python. How can I do that?
[ "First, you will to install a Python driver. You will actually use GQLAlchemy.\nYou can use pip (pip install gqlalchemy) or poetry (poetry add gqlalchemy).\nIn the following code, replace YOUR_MEMGRAPH_PASSWORD, YOUR_MEMGRAPH_USERNAME and MEMGRAPH_HOST_ADDRESS with your values:\nfrom gqlalchemy import Memgraph\n\nMEMGRAPH_HOST = 'MEMGRAPH_HOST_ADDRESS'\nMEMGRAPH_PORT = 7687\nMEMGRAPH_USERNAME = 'YOUR_MEMGRAPH_USERNAME'\n# Place your Memgraph password that was created during Project creation\nMEMGRAPH_PASSWORD = 'YOUR_MEMGRAPH_PASSWORD'\n\ndef hello_memgraph(host: str, port: int, username: str, password: str):\n connection = Memgraph(host, port, username, password, encrypted=True)\n results = connection.execute_and_fetch(\n 'CREATE (n:FirstNode { message: \"Hello Memgraph from Python!\" }) RETURN n.message AS message'\n )\n print(\"Created node with message:\", next(results)[\"message\"])\n\nif __name__ == \"__main__\":\n hello_memgraph(MEMGRAPH_HOST, MEMGRAPH_PORT, MEMGRAPH_USERNAME, MEMGRAPH_PASSWORD)\n\n" ]
[ 0 ]
[]
[]
[ "memgraphdb" ]
stackoverflow_0074658294_memgraphdb.txt
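Reading data back uses the same execute_and_fetch call as the write in the answer; a brief follow-up sketch with the same placeholder credentials:

from gqlalchemy import Memgraph

connection = Memgraph("MEMGRAPH_HOST_ADDRESS", 7687,
                      "YOUR_MEMGRAPH_USERNAME", "YOUR_MEMGRAPH_PASSWORD",
                      encrypted=True)

# Fetch the node created in the answer above.
for record in connection.execute_and_fetch(
        "MATCH (n:FirstNode) RETURN n.message AS message"):
    print(record["message"])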
Q: How to add items to array in react Code: export default function App() { const [name,setName] = useState(""); var myArray = []; const handleAdd = () => { myArray = [...myArray,name] setName("") } return ( <div className="App"> <input placeholder="type a name" onChange={(e) => setName(e.target.value)}/> <button onClick={handleAdd}>add</button> <button onClick={() => console.log(myArray)}>test</button> {myArray.map((n) => { return <h2>{n}</h2> })} </div> ); } OnClick it isn't adding the name to the array. A: this is how you "push" to an array with useState const [array, setArray] = useState([]) setArray(previous => [...previous, newItem]) A: You should use a state for your array and set that state to see the changes reflected: export default function App() { const [name, setName] = useState(''); const [myArray, setMyArray] = useState([]); const handleAdd = () => { setMyArray([...myArray, name]); setName(''); }; return ( <div className="App"> <input placeholder="type a name" onChange={(e) => setName(e.target.value)} /> <button onClick={handleAdd}>add</button> <button onClick={() => console.log(myArray)}>test</button> {myArray.map((n) => { return <h2>{n}</h2>; })} </div> ); } A: We can also set the state of "myArr" to be an empty array initially, making it easier to manipulate the subsequent state of that array. The onClick event handler does not fire the handleAdd function, for some reason, it only resets the form and does not provide any state. To submit the form and materialize the state, we can also use the onSubmit event handler instead of onClick. Lastly, we can use the "name" state as a value/prop for the name input, which will be used by the onChange handler. import React, { useState } from 'react' const App = () => { const [name, setName] = useState('') const [myArr, setMyArr] = useState([]) const submit = (event) => { event.preventDefault() setMyArr(myArr.concat(name)) setName('') } //console.log(myArr) return ( <div className="App"> <form onSubmit={submit}> <div> <label htmlFor="name">Name</label> <input placeholder="type a name" type="text" value={name} onChange={({ target }) => setName(target.value)} /> </div> <div> <button type="submit">Add</button> </div> </form> <div> {myArr.map((arr, index) => ( <div key={index}> <p>{arr}</p> </div> ))} </div> </div> ) } export default App Happy coding!
How to add items to array in react
Code: export default function App() { const [name,setName] = useState(""); var myArray = []; const handleAdd = () => { myArray = [...myArray,name] setName("") } return ( <div className="App"> <input placeholder="type a name" onChange={(e) => setName(e.target.value)}/> <button onClick={handleAdd}>add</button> <button onClick={() => console.log(myArray)}>test</button> {myArray.map((n) => { return <h2>{n}</h2> })} </div> ); } OnClick it isn't adding the name to the array.
[ "this is how you \"push\" to an array with useState\nconst [array, setArray] = useState([])\nsetArray(previous => [...previuous, newItem])\n\n", "You should use a state for your array and set that state to see the changes reflected:\nexport default function App() {\n const [name, setName] = useState('');\n const [myArray, setMyArray] = useState([]);\n const handleAdd = () => {\n setMyArray([...myArray, name]);\n setName('');\n };\n return (\n <div className=\"App\">\n <input\n placeholder=\"type a name\"\n onChange={(e) => setName(e.target.value)}\n />\n <button onClick={handleAdd}>add</button>\n <button onClick={() => console.log(myArray)}>test</button>\n {myArray.map((n) => {\n return <h2>{n}</h2>;\n })}\n </div>\n );\n}\n\n", "We can also set the state of \"myArr\" to be an empty array initially, making it easier to manipulate the subsequent state of the that array. The onClick event handler does not fires the handleAdd function, for some reason, it only resets the form and does not provide any state. To submit the form and materialize the state, we can also use the onSubmit event handler instead of onClick. Lastly, we can use the \"name\" state as a value/prop for the name input, which will be used by the onChange handler.\nimport React, { useState } from 'react'\n\nconst App = () => {\nconst [name, setName] = useState('')\nconst [myArr, setMyArr] = useState([])\n\nconst submit = (event) => {\n event.preventDefault()\n setMyArr(myArr.concat(name))\n setName('')\n}\n\n//console.log(myArr)\nreturn (\n <div className=\"App\">\n <form onSubmit={submit}>\n <div>\n <label htmlFor=\"name\">Name</label>\n <input\n placeholder=\"type a name\"\n type=\"text\"\n value={name}\n onChange={({ target }) => setName(target.value)}\n />\n </div>\n <div>\n <button type=\"submit\">Add</button>\n </div>\n </form>\n <div>\n {myArr.map((arr, index) => (\n <div key={index}>\n <p>{arr}</p>\n </div>\n ))}\n </div>\n </div>\n )\n }\n\nexport default App\n\nHappy coding!\n" ]
[ 2, 1, 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0072225342_reactjs.txt
Q: Compiling js via webpacker results in: SassError: expected "{" I'm trying to use scss in my rails application, configured by webpacker. Whenever I run rails webpacker:compile, I get the following error: ERROR in ./app/javascript/stylesheets/application.scss Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js): ModuleBuildError: Module build failed (from ./node_modules/sass-loader/dist/cjs.js): SassError: expected "{". ╷ 1 │ import api from "!../../../node_modules/style-loader/dist/runtime/injectStylesIntoStyleTag.js"; │ ^ ╵ app/javascript/stylesheets/application.scss 1:95 root stylesheet I'm having trouble debugging this problem and would appreciate any help. Dependencies rails: 6.1 webpacker: 6.0.0.pre1 @webpack-cli/serve webpack: 5.11 webpack-cli: 4.2 webpack-dev-server: 3.11 package.json { "name": "ostor", "private": true, "dependencies": { "@popperjs/core": "^2.6.0", "@rails/actioncable": "^6.1.2-1", "@rails/activestorage": "^6.1.2-1", "@rails/ujs": "^6.1.2-1", "@rails/webpacker": "^6.0.0-beta.5", "autoprefixer": "^10.2.4", "bootstrap": "^v5.0.0-beta2", "css-loader": "^5.0.2", "css-minimizer-webpack-plugin": "^1.2.0", "d3": "^6.2.0", "jquery": "^3.5.1", "mini-css-extract-plugin": "^1.3.7", "postcss": "^8.2.6", "postcss-loader": "^5.0.0", "sass": "^1.32.7", "sass-loader": "^11.0.1", "style-loader": "^2.0.0", "turbolinks": "^5.2.0", "webpack": "^5.11.0", "webpack-cli": "^4.2.0" }, "version": "0.1.0", "devDependencies": { "@webpack-cli/serve": "^1.3.0", "webpack-dev-server": "^3.11.2" }, "babel": { "presets": [ "./node_modules/@rails/webpacker/package/babel/preset.js" ] }, "browserslist": [ "defaults" ] } config/webpack/base.js: const { webpackConfig, merge } = require('@rails/webpacker') const customConfig = { module: { rules: [ { test: /\.s[ac]ss$/i, exclude: /node_modules/, use: [ // Creates `style` nodes from JS strings "style-loader", // Translates CSS into CommonJS "css-loader", // Compiles Sass to CSS "sass-loader", ], }, ], }, } module.exports = merge(webpackConfig, customConfig) app/javascript/packs/application.js import ActiveStorage from "@rails/activestorage"; import * as RailsUjs from "@rails/ujs"; import Turbolinks from "turbolinks"; ActiveStorage.start(); RailsUjs.start(); Turbolinks.start(); import "channels"; import "bootstrap"; import "../stylesheets/application.scss"; A: Remove the custom config rules you added for SASS/SCSS. Webpacker 6 will provide the appropriate CSS rules for you when it detects you've installed css-loader, postcss-loader, mini-css-extract-plugin, etc. A: For the shakapacker users: You don't need to add the option: test: /\.s[ac]ss$/i in the "rules" section. You just need to add: yarn add sass sass-loader and: extensions: - .sass - .scss in your webpacker.yml file, and shakapacker will transpile sass/scss files.
Compiling js via webpacker results in: SassError: expected "{"
I'm trying to use scss in my rails application, configured by webpacker. Whenever I run rails webpacker:compile, I get the following error: ERROR in ./app/javascript/stylesheets/application.scss Module build failed (from ./node_modules/mini-css-extract-plugin/dist/loader.js): ModuleBuildError: Module build failed (from ./node_modules/sass-loader/dist/cjs.js): SassError: expected "{". ╷ 1 │ import api from "!../../../node_modules/style-loader/dist/runtime/injectStylesIntoStyleTag.js"; │ ^ ╵ app/javascript/stylesheets/application.scss 1:95 root stylesheet I'm having trouble debugging this problem and would appreciate any help. Dependencies rails: 6.1 webpacker: 6.0.0.pre1 @webpack-cli/serve webpack: 5.11 webpack-cli: 4.2 webpack-dev-server: 3.11 package.json { "name": "ostor", "private": true, "dependencies": { "@popperjs/core": "^2.6.0", "@rails/actioncable": "^6.1.2-1", "@rails/activestorage": "^6.1.2-1", "@rails/ujs": "^6.1.2-1", "@rails/webpacker": "^6.0.0-beta.5", "autoprefixer": "^10.2.4", "bootstrap": "^v5.0.0-beta2", "css-loader": "^5.0.2", "css-minimizer-webpack-plugin": "^1.2.0", "d3": "^6.2.0", "jquery": "^3.5.1", "mini-css-extract-plugin": "^1.3.7", "postcss": "^8.2.6", "postcss-loader": "^5.0.0", "sass": "^1.32.7", "sass-loader": "^11.0.1", "style-loader": "^2.0.0", "turbolinks": "^5.2.0", "webpack": "^5.11.0", "webpack-cli": "^4.2.0" }, "version": "0.1.0", "devDependencies": { "@webpack-cli/serve": "^1.3.0", "webpack-dev-server": "^3.11.2" }, "babel": { "presets": [ "./node_modules/@rails/webpacker/package/babel/preset.js" ] }, "browserslist": [ "defaults" ] } config/webpack/base.js: const { webpackConfig, merge } = require('@rails/webpacker') const customConfig = { module: { rules: [ { test: /\.s[ac]ss$/i, exclude: /node_modules/, use: [ // Creates `style` nodes from JS strings "style-loader", // Translates CSS into CommonJS "css-loader", // Compiles Sass to CSS "sass-loader", ], }, ], }, } module.exports = merge(webpackConfig, customConfig) app/javascript/packs/application.js import ActiveStorage from "@rails/activestorage"; import * as RailsUjs from "@rails/ujs"; import Turbolinks from "turbolinks"; ActiveStorage.start(); RailsUjs.start(); Turbolinks.start(); import "channels"; import "bootstrap"; import "../stylesheets/application.scss";
[ "Remove the custom config rules you added for SASS/SCSS. Webpacker 6 will provide the appropriate CSS rules for you when it detects you've installed css-loader, postcss-loader, mini-css-extract-plugin, etc.\n", "For the shakapacker users:\nYou don't need to add the option:\n test: /\\.s[ac]ss$/i\n\nin the \"rules\" section. You just need to add:\nyarn add sass sass-loader\n\nand:\n extensions:\n - .sass\n - .scss\n\nin your webpacker.yml file, and shakapacker will transpile sass/scss files.\n" ]
[ 11, 0 ]
[]
[]
[ "ruby_on_rails", "webpack", "webpacker" ]
stackoverflow_0066216033_ruby_on_rails_webpack_webpacker.txt
Q: what is missing for DocumentNode function to show text in datagridview? What i´m trying to do is to use the documentnode method to find a specific table from the internet and put it into datagridview. My code can be seen below: ` List<string> list = new List<string>(); DataTable dt1 = new DataTable(); var table = doc.DocumentNode.SelectNodes("xpath link") .Descendants("tr") .Where(tr=>tr.Elements("td").Count()>1) .Select(td => td.InnerText.Trim()) .ToList(); foreach (var tables in table) { list.Add(tables.ToString());} dataGridView1.DataSource = list ` The result I get in the table is a list of numbers instead of text (datagridview table). As I have tried to see it the text actually appears I changed the foreach with the following code: ` foreach (var tables in table) { list.Add(tables.ToString()); richTextBox1.Text += tables; } ` The result I get from the change is a string of the table in richTextBox1 but still a table of numbers in datagridview1 richtextbox1 text. This means I´m getting the right table from the internet and its being loaded correctly but i´m still missing something for the datagridview1 as I get a list of numbers instead of text that is being shown in richtextbox1. I followed this up by changing the DocumentNode function with removing parts in the .select part of the code and the datagridview1 stilled showed numbers (I added for example .ToString, .ToList() etc.). What exactly have I missed in my code that makes this happen and should I have added something else to make it show the text instead of numbers? Edit: New code. ` List<string> list = new List<string>(); DataTable dt1 = new DataTable(); dt1.Columns.Add("td", typeof(int)); var table = doc.DocumentNode.SelectNodes("//div[@id=\"cr_cashflow\"]/div[2]/div/table") .Descendants("tr") .Select(td => td.InnerText.Trim()) .ToList(); foreach (var tables in table) { dt1.Rows.Add(new object[] { int.Parse(tables) }); } dataGridView1.DataSource= dt1; ` A: Try something like this List<string> list = new List<string>(); DataTable dt1 = new DataTable(); dt1.Columns.Add("td",typeof(int)); var rows = doc.DocumentNode.SelectNodes("xpath link") .Descendants("tr") .Where(tr=>tr.Elements("td").Count()>1) .Select(td => td.InnerText.Trim()) .ToList(); foreach (var row in rows) { dt1.Rows.Add(new object[] { int.Parse(row)}); }
what is missing for DocumentNode function to show text in datagridview?
What i´m trying to do is to use the documentnode method to find a specific table from the internet and put it into datagridview. My code can be seen below: ` List<string> list = new List<string>(); DataTable dt1 = new DataTable(); var table = doc.DocumentNode.SelectNodes("xpath link") .Descendants("tr") .Where(tr=>tr.Elements("td").Count()>1) .Select(td => td.InnerText.Trim()) .ToList(); foreach (var tables in table) { list.Add(tables.ToString());} dataGridView1.DataSource = list ` The result I get in the table is a list of numbers instead of text (datagridview table). As I have tried to see it the text actually appears I changed the foreach with the following code: ` foreach (var tables in table) { list.Add(tables.ToString()); richTextBox1.Text += tables; } ` The result I get from the change is a string of the table in richTextBox1 but still a table of numbers in datagridview1 richtextbox1 text. This means I´m getting the right table from the internet and its being loaded correctly but i´m still missing something for the datagridview1 as I get a list of numbers instead of text that is being shown in richtextbox1. I followed this up by changing the DocumentNode function with removing parts in the .select part of the code and the datagridview1 stilled showed numbers (I added for example .ToString, .ToList() etc.). What exactly have I missed in my code that makes this happen and should I have added something else to make it show the text instead of numbers? Edit: New code. ` List<string> list = new List<string>(); DataTable dt1 = new DataTable(); dt1.Columns.Add("td", typeof(int)); var table = doc.DocumentNode.SelectNodes("//div[@id=\"cr_cashflow\"]/div[2]/div/table") .Descendants("tr") .Select(td => td.InnerText.Trim()) .ToList(); foreach (var tables in table) { dt1.Rows.Add(new object[] { int.Parse(tables) }); } dataGridView1.DataSource= dt1; `
[ "Try something like this\nList<string> list = new List<string>();\nDataTable dt1 = new DataTable();\ndt1.Columns.Add(\"td\",typeof(int));\nvar rows = doc.DocumentNode.SelectNodes(\"xpath link\")\n .Descendants(\"tr\")\n .Where(tr=>tr.Elements(\"td\").Count()>1)\n .Select(td => td.InnerText.Trim())\n .ToList();\n\nforeach (var row in rows)\n{ \n dt.Rows.Add(new object[] { int.Parse(row)});\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "datagridview", "foreach", "html_agility_pack", "string" ]
stackoverflow_0074656378_c#_datagridview_foreach_html_agility_pack_string.txt
Q: useEffect not working with local storage and forEach In my project, I am working with the redux store and that data will store in local storage. On every new update, local storage will update and as per local storage, my component will render with forEach method. With the below code, I am getting the data from local storage. const ShowMyFlashcard = () => { let cardValueObj=[] useEffect(() => { let cardValue = localStorage.getItem("cardValue"); if (cardValue == null) { cardValueObj= []; } else { cardValueObj= JSON.parse(cardValue); } }); return ( // {/* SECTION TO SHOW CREATED ALL CARDS*/} <div className="mx-10 my-10 grid grid-cols-3 gap-4 place-content-around flex flex-wrap justify-items-center"> {cardValueObj.forEach((element) => { <div className=" shadow-md bg-white p-5 h-64 w-64 m-4 mx-1.5 my-1.5"> <div> <h1 className="text-center font-bold mx-1.5 my-1.5"> {element.createGroup} </h1> <div className="text-center bg-white mx-1.5 my-1.5 h-32 w-48"> <span>{element.groupDescription}</span> </div> <div className="flex justify-center"> <button className="rounded-md text-red-600 border-solid border-2 bg-white border-red-700 mx-2 my-2 h-8 w-40"> View Cards </button> </div> </div> </div>; })} </div> ); }; export default ShowMyFlashcard; A: You haven't passed any dependency array const [cardValue, setCardValue] = useState([]); useEffect(() => { let cardValueFromLocalStorage = localStorage.getItem("cardValue"); if (cardValueFromLocalStorage === null) { setCardValue([]); } else { setCardValue(JSON.parse(cardValueFromLocalStorage)); }}, []); Do add some dependency in the array. Also using let is not a proper way try using the useState Hook A: .forEach() has no return value, so that operation doesn't output anything to the page. Use .map() instead. Additionally, the callback to your .forEach() never returns anything. When you use .map(), also make sure the callback returns the value you want. {cardValueObj.map((element) => ( <div className=" shadow-md bg-white p-5 h-64 w-64 m-4 mx-1.5 my-1.5"> the rest of your markup... </div>; ))} Or with an explicit return: {cardValueObj.map((element) => { return (<div className=" shadow-md bg-white p-5 h-64 w-64 m-4 mx-1.5 my-1.5"> the rest of your markup... </div>); })}
useEffect not working with local storage and forEach
In my project, I am working with the redux store and that data will store in local storage. On every new update, local storage will update and as per local storage, my component will render with forEach method. With the below code, I am getting the data from local storage. const ShowMyFlashcard = () => { let cardValueObj=[] useEffect(() => { let cardValue = localStorage.getItem("cardValue"); if (cardValue == null) { cardValueObj= []; } else { cardValueObj= JSON.parse(cardValue); } }); return ( // {/* SECTION TO SHOW CREATED ALL CARDS*/} <div className="mx-10 my-10 grid grid-cols-3 gap-4 place-content-around flex flex-wrap justify-items-center"> {cardValueObj.forEach((element) => { <div className=" shadow-md bg-white p-5 h-64 w-64 m-4 mx-1.5 my-1.5"> <div> <h1 className="text-center font-bold mx-1.5 my-1.5"> {element.createGroup} </h1> <div className="text-center bg-white mx-1.5 my-1.5 h-32 w-48"> <span>{element.groupDescription}</span> </div> <div className="flex justify-center"> <button className="rounded-md text-red-600 border-solid border-2 bg-white border-red-700 mx-2 my-2 h-8 w-40"> View Cards </button> </div> </div> </div>; })} </div> ); }; export default ShowMyFlashcard;
[ "You haven't passed any dependency array\nconst [cardValue, setCardValue] = useState([]);\n\nuseEffect(() => {\n let cardValueFromLocalStorage = localStorage.getItem(\"cardValue\");\n if (cardValueFromLocalStorage === null) {\n setCardValue([]);\n } else {\n setCardValue(JSON.parse(cardValueFromLocalStorage));\n }}, []);\n\nDo add some dependency in the array. Also using let is not a proper way try using the useState Hook\n", ".forEach() has no return value, so that operation doesn't output anything to the page. Use .map() instead.\nAdditionally, the callback to your .forEach() never returns anything. When you use .map(), also make sure the callback returns the value you want.\n{cardValueObj.map((element) => (\n <div className=\" shadow-md bg-white p-5 h-64 w-64 m-4 mx-1.5 my-1.5\">\n the rest of your markup...\n </div>;\n))}\n\nOr with an explicit return:\n{cardValueObj.map((element) => {\n return (<div className=\" shadow-md bg-white p-5 h-64 w-64 m-4 mx-1.5 my-1.5\">\n the rest of your markup...\n </div>);\n})}\n\n" ]
[ 0, 0 ]
[]
[]
[ "local_storage", "react_hooks", "reactjs" ]
stackoverflow_0074655963_local_storage_react_hooks_reactjs.txt
Q: Wildfly: Unexpected element '{urn:jboss:domain:4.2}server' Error: 17:42:50,333 INFO [org.jboss.as] (MSC service thread 1-6) WFLYSRV0049: WildFly Full 10.0.0.Final (WildFly Core 2.0.10.Final) starting 17:42:50,732 ERROR [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0055: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: WFLYCTL0085: Failed to parse configuration at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:131) at org.jboss.as.server.ServerService.boot(ServerService.java:356) at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:299) at java.lang.Thread.run(Thread.java:745) Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[2,1] Message: Unexpected element '{urn:jboss:domain:4.2}server' at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:108) at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69) at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:123) ... 3 more 17:42:50,733 FATAL [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting. See previous messages for details. I'm getting this error from the beginning of my standalone-full.xml. I've used an xml validator on the file and its syntax is correct. I assume it's a problem with my environment. A: The 4.2 version of the server URN is for WildFly 10.1.0.Final. It looks like you're using WildFly 10.0.0.Final. You'd need to use version 4.0 of the URN or upgrade your WildFly server to 10.1.0.Final. A: Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[2,1] Message: Unexpected element '{urn:jboss:domain:4.2}server' Remove server from {urn:jboss:domain:4.2}server line number 2 in standalone-full.xml and try to start server. A: Check your standalone.xml which you are using as configuration for your Jboss. In the beginning of standalone.xml and standalone-full.xml look out for <server xmlns="urn:jboss:domain:x.y"> make sure the x.y matches to the server URN for the version of wildfly you are using. A: I've encountered this error message with WildFly 26.1.2. The standalone-full.xml was fine, the problem was with my modules. I had a bug with my Docker image, that deploys WildFly locally. A step in the Dockerfile copies a tar.gz file with our custom modules. Since I was in the process of upgrading from WildFly 10.1.0 to 26.1.2, an old zip with old modules was added to the modules folder. I have no clue why this happens, but removing old modules.tar.gz did the job.
Wildfly: Unexpected element '{urn:jboss:domain:4.2}server'
Error: 17:42:50,333 INFO [org.jboss.as] (MSC service thread 1-6) WFLYSRV0049: WildFly Full 10.0.0.Final (WildFly Core 2.0.10.Final) starting 17:42:50,732 ERROR [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0055: Caught exception during boot: org.jboss.as.controller.persistence.ConfigurationPersistenceException: WFLYCTL0085: Failed to parse configuration at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:131) at org.jboss.as.server.ServerService.boot(ServerService.java:356) at org.jboss.as.controller.AbstractControllerService$1.run(AbstractControllerService.java:299) at java.lang.Thread.run(Thread.java:745) Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[2,1] Message: Unexpected element '{urn:jboss:domain:4.2}server' at org.jboss.staxmapper.XMLMapperImpl.processNested(XMLMapperImpl.java:108) at org.jboss.staxmapper.XMLMapperImpl.parseDocument(XMLMapperImpl.java:69) at org.jboss.as.controller.persistence.XmlConfigurationPersister.load(XmlConfigurationPersister.java:123) ... 3 more 17:42:50,733 FATAL [org.jboss.as.server] (Controller Boot Thread) WFLYSRV0056: Server boot has failed in an unrecoverable manner; exiting. See previous messages for details. I'm getting this error from the beginning of my standalone-full.xml. I've used an xml validator on the file and its syntax is correct. I assume it's a problem with my environment.
[ "The 4.2 version of the server URN is for WildFly 10.1.0.Final. It looks like you're using WildFly 10.0.0.Final. You'd need to use version 4.0 of the URN or upgrade your WildFly server to 10.1.0.Final.\n", "\nCaused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[2,1]\n Message: Unexpected element '{urn:jboss:domain:4.2}server'\n\nRemove server from {urn:jboss:domain:4.2}server line number 2 in standalone-full.xml and try to start server. \n", "Check your standalone.xml which you are using as configuration for your Jboss.\nIn the beginning of standalone.xml and standalone-full.xml look out for\n<server xmlns=\"urn:jboss:domain:x.y\">\n\nmake sure the x.y matches to the server URN for the version of wildfly you are using.\n", "I've encountered this error message with WildFly 26.1.2. The standalone-full.xml was fine, the problem was with my modules.\nI had a bug with my Docker image, that deploys WildFly locally. A step in the Dockerfile copies a tar.gz file with our custom modules. Since I was in the process of upgrading from WildFly 10.1.0 to 26.1.2, an old zip with old modules was added to the modules folder.\nI have no clue why this happens, but removing old modules.tar.gz did the job.\n" ]
[ 6, 1, 0, 0 ]
[]
[]
[ "jboss", "undertow", "wildfly" ]
stackoverflow_0044161521_jboss_undertow_wildfly.txt
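A quick way to see which URN a configuration file actually declares, and therefore which server release it was written for, is to inspect the root element's namespace. A small standard-library sketch (the file path is a placeholder):

import xml.etree.ElementTree as ET

root = ET.parse("standalone-full.xml").getroot()
# ElementTree folds the namespace into the tag, e.g.
# "{urn:jboss:domain:4.0}server" is what WildFly 10.0.0.Final expects.
print(root.tag)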
Q: Snowflake - convert timestamp to date serial number I have a SQL table containing timestamps of the following format: 2022-02-07 12:57:45.000 In SQL Server, you can convert this to a floating point date serial number (days since 1900-01-01): select convert(float, my_timestamp_field) as float_serial_number which yields an output of: float_serial_number 44597.5401041667 I am trying to get the same output in Snowflake, but cant figure out how to return a float. This is as close as I have gotten: select datediff(day, '1900-01-01', '2022-02-07 12:57:45.000') as integer_only, timestampdiff(day, '1900-01-01', '2022-02-07 12:57:45.000') as same_as_above which yields an output of: integer_only same_as_above 44,597 44,597 The output I need in Snowflake (this takes the time into account where 12pm = 0.5): desired_Snowflake_output 44597.5401041667 A: You could calculate the diff in decimals using either these- (seconds, milliseconds, nanoseconds) in datediff and dividing that by the appropriate denominator set mydate='2022-02-07 12:57:45.000'::timestamp; select datediff(seconds, '1900-01-01', $mydate)::float/86400 A: Here's an sql function that does the job create or replace function dayfloat_ts(ts TIMESTAMP) returns float language sql as $$ (datediff(day, '1900-01-01', TS)::float + (datediff(nanoseconds, '1900-01-01', TS)::float - datediff(nanoseconds, '1900-01-01', date_trunc(day, TS)))::float / 86400000000000::float) $$;
Snowflake - convert timestamp to date serial number
I have a SQL table containing timestamps of the following format: 2022-02-07 12:57:45.000 In SQL Server, you can convert this to a floating point date serial number (days since 1900-01-01): select convert(float, my_timestamp_field) as float_serial_number which yields an output of: float_serial_number 44597.5401041667 I am trying to get the same output in Snowflake, but can't figure out how to return a float. This is as close as I have gotten: select datediff(day, '1900-01-01', '2022-02-07 12:57:45.000') as integer_only, timestampdiff(day, '1900-01-01', '2022-02-07 12:57:45.000') as same_as_above which yields an output of: integer_only same_as_above 44,597 44,597 The output I need in Snowflake (this takes the time into account where 12pm = 0.5): desired_Snowflake_output 44597.5401041667
[ "You could calculate the diff in decimals using either these- (seconds, milliseconds, nanoseconds) in datediff and dividing that by the appropriate denominator\nset mydate='2022-02-07 12:57:45.000'::timestamp;\n\nselect datediff(seconds, '1900-01-01', $mydate)::float/86400\n\n", "Here's an sql function that does the job\ncreate or replace function dayfloat_ts(ts TIMESTAMP)\nreturns float\nlanguage sql\nas\n$$\n (datediff(day, '1900-01-01', TS)::float +\n (datediff(nanoseconds, '1900-01-01', TS)::float -\n datediff(nanoseconds, '1900-01-01', date_trunc(day, TS)))::float / 86400000000000::float)\n$$;\n\n" ]
[ 3, 1 ]
[]
[]
[ "snowflake_cloud_data_platform", "sql" ]
stackoverflow_0074647892_snowflake_cloud_data_platform_sql.txt
Q: Save JPEG comment using Pillow I need to save an Image in Python (created as a Numpy array) as a JPEG file, while including a "comment" in the file with some specific metadata. This metadata will be used by another (third-party) application and is a simple ASCII string. I have a sample image including such a "comment", which I can read out using Pillow (PIL), via the image.info['comment'] or the image.app['COM'] property. However, when I try a simple round-trip, i.e. loading my sample image and save it again using a different file name, the comment is no longer preserved. Equally, I found no way to include a comment in a newly created image. I am aware that EXIF tags are the preferred way to save metadata in JPEG images, but as mentioned, the third-party application only accepts this data as a "comment", not as EXIF, which I cannot change. After reading this question, I looked into the binary structure of my sample file and found the comment at the start of the file, after a few bytes of some other (meta)data. I do however not know a lot about binary file manipulation, and also I was wondering if there is a more elegant way, other than messing with the binary... EDIT: minimum example: from PIL import Image img = Image.open(path) # where path is the path to the sample image # this prints the desired metadata if it is correctly saved in loaded image print(img.info["comment"]) img.save(new_path) # save with different file name img.close() # now open to see if it has been saved correctly new_img = Image.open(new_path) print(new_img.info['comment']) # now results in KeyError I also tried img.save(new_path, info=img.info), but this does not seem to have an effect. Since img.info['comment'] appears identical to img.app['COM'], I tried img.save(new_path, app=img.app), again does not work. A: To save the "comment" metadata in the JPEG file, you can use the Image.save() method with the save_all=True and exif=img.app arguments. This will preserve the metadata in the JPEG file. Here is an example: from PIL import Image # open the image img = Image.open(path) # save the image with the comment metadata preserved img.save(new_path, save_all=True, exif=img.app) img.close() # now open the new image to see if the metadata has been preserved new_img = Image.open(new_path) print(new_img.info['comment']) You can also specify the comment metadata as a dictionary in the Image.save() method directly, instead of using the img.app property: from PIL import Image # open the image img = Image.open(path) # create a dictionary with the comment metadata comment_metadata = {'comment': "this is my comment metadata"} # save the image with the comment metadata preserved img.save(new_path, save_all=True, exif=comment_metadata) img.close() # now open the new image to see if the metadata has been preserved new_img = Image.open(new_path) print(new_img.info['comment']) A: Just been having a play with this and I couldn't see anything directly in Pillow to support this. I've found that the save() method supports a parameter called extra that can be used to pass arbitrary bytes to the output file. 
We then just need a simple method to turn a comment into a valid JPEG segment, for example: import struct from PIL import Image def make_jpeg_variable_segment(marker: int, payload: bytes) -> bytes: "make a JPEG segment from the given payload" return struct.pack('>HH', marker, 2 + len(payload)) + payload def make_jpeg_comment_segment(comment: bytes) -> bytes: "make a JPEG comment/COM segment" return make_jpeg_variable_segment(0xFFFE, comment) # open source image with Image.open("foo.jpeg") as im: # save out with new JPEG comment im.save('bar.jpeg', extra=make_jpeg_comment_segment("hello world".encode())) # read file back in to ensure comment round-trips with Image.open('bar.jpeg') as im: print(im.app['COM']) print(im.info['comment']) Note that in my initial attempts I tried appending the comment segment at the end of the file, but Pillow wouldn't load this comment even after calling the .load() method to force it to load the entire JPEG file. It would be nice if this was supported natively, but it doesn't seem to do it yet!
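To make the segment layout in the second answer concrete, here is the raw output of the same struct.pack call for an example comment (a standalone check only; bytes.hex with a separator needs Python 3.8+):

import struct

payload = "hello world".encode()
# marker 0xFFFE, then a big-endian length that counts itself (2 bytes) plus the payload
segment = struct.pack('>HH', 0xFFFE, 2 + len(payload)) + payload
print(segment.hex(' '))  # ff fe 00 0d 68 65 6c 6c 6f 20 77 6f 72 6c 64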
Save JPEG comment using Pillow
I need to save an Image in Python (created as a Numpy array) as a JPEG file, while including a "comment" in the file with some specific metadata. This metadata will be used by another (third-party) application and is a simple ASCII string. I have a sample image including such a "comment", which I can read out using Pillow (PIL), via the image.info['comment'] or the image.app['COM'] property. However, when I try a simple round-trip, i.e. loading my sample image and save it again using a different file name, the comment is no longer preserved. Equally, I found no way to include a comment in a newly created image. I am aware that EXIF tags are the preferred way to save metadata in JPEG images, but as mentioned, the third-party application only accepts this data as a "comment", not as EXIF, which I cannot change. After reading this question, I looked into the binary structure of my sample file and found the comment at the start of the file, after a few bytes of some other (meta)data. I do however not know a lot about binary file manipulation, and also I was wondering if there is a more elegant way, other than messing with the binary... EDIT: minimum example: from PIL import Image img = Image.open(path) # where path is the path to the sample image # this prints the desired metadata if it is correctly saved in loaded image print(img.info["comment"]) img.save(new_path) # save with different file name img.close() # now open to see if it has been saved correctly new_img = Image.open(new_path) print(new_img.info['comment']) # now results in KeyError I also tried img.save(new_path, info=img.info), but this does not seem to have an effect. Since img.info['comment'] appears identical to img.app['COM'], I tried img.save(new_path, app=img.app), again does not work.
[ "To save the \"comment\" metadata in the JPEG file, you can use the Image.save() method with the save_all=True and exif=img.app arguments. This will preserve the metadata in the JPEG file.\nHere is an example:\nfrom PIL import Image\n\n# open the image\nimg = Image.open(path)\n\n# save the image with the comment metadata preserved\nimg.save(new_path, save_all=True, exif=img.app)\nimg.close()\n\n# now open the new image to see if the metadata has been preserved\nnew_img = Image.open(new_path)\nprint(new_img.info['comment'])\n\nYou can also specify the comment metadata as a dictionary in the Image.save() method directly, instead of using the img.app property:\nfrom PIL import Image\n\n# open the image\nimg = Image.open(path)\n\n# create a dictionary with the comment metadata\ncomment_metadata = {'comment': \"this is my comment metadata\"}\n\n# save the image with the comment metadata preserved\nimg.save(new_path, save_all=True, exif=comment_metadata)\nimg.close()\n\n# now open the new image to see if the metadata has been preserved\nnew_img = Image.open(new_path)\nprint(new_img.info['comment'])\n\n", "Just been having a play with this and I couldn't see anything directly in Pillow to support this. I've found that the save() method supports a parameter called extra that can be used to pass arbitrary bytes to the output file.\nWe then just need a simple method to turn a comment into a valid JPEG segment, for example:\nimport struct\nfrom PIL import Image\n\ndef make_jpeg_variable_segment(marker: int, payload: bytes) -> bytes:\n \"make a JPEG segment from the given payload\"\n return struct.pack('>HH', marker, 2 + len(payload)) + payload\n\ndef make_jpeg_comment_segment(comment: bytes) -> bytes:\n \"make a JPEG comment/COM segment\"\n return make_jpeg_variable_segment(0xFFFE, comment)\n\n# open source image\nwith Image.open(\"foo.jpeg\") as im:\n # save out with new JPEG comment\n im.save('bar.jpeg', extra=make_jpeg_comment_segment(\"hello world\".encode()))\n\n# read file back in to ensure comment round-trips\nwith Image.open('bar.jpeg') as im:\n print(im.app['COM'])\n print(im.info['comment'])\n\nNote that in my initial attempts I tried appending the comment segment at the end of the file, but Pillow wouldn't load this comment even after calling the .load() method to force it to load the entire JPEG file.\nIt would be nice if this was supported natively, but it doesn't seem to do it yet!\n" ]
[ 1, 1 ]
[]
[]
[ "jpeg", "python", "python_imaging_library" ]
stackoverflow_0074653239_jpeg_python_python_imaging_library.txt
Q: Can the password in HTTP Basic be optional? I'd like to use HTTP Basic auth to do password-less authentication between trusted services in a private network. Is it acceptable to leave out the password field entirely when using Basic auth? Is there a better authentication mechanism I should research? A: RFC 2617 defines them as user-pass = userid ":" password userid = *<TEXT excluding ":"> password = *TEXT and * in the ABNF means "0 or more." Meaning the userid and password can be empty, in which case : is encoded into Base64. A: In HTTP Basic auth, the username and password are concatenated using a colon then encoded in base64 and the resulting header looks something like: Authorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ== The Basic part specifies basic authentication and the second part is the base64 encoded token. It doesn't have to be a username/password combo, but can just be a username with a blank password, or a username alone. You just have to be aware of that when decoding the authorization header.
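A short illustration of both answers (the service name here is made up; only the Python standard library is used): the credential for a username with an empty password is just the username plus a trailing colon, Base64-encoded, and the decoding side should split on the first colon only, since RFC 2617 allows colons in the password but not in the userid.

import base64

token = base64.b64encode(b"serviceuser:").decode()
print("Authorization: Basic " + token)  # Authorization: Basic c2VydmljZXVzZXI6

# decode: partition on the FIRST colon; the password comes back as ''
user, _, password = base64.b64decode(token).decode().partition(":")
print(user, repr(password))  # serviceuser ''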
Can the password in HTTP Basic be optional?
I'd like to use HTTP Basic auth to do password-less authentication between trusted services in a private network. Is it acceptable to leave out the password field entirely when using Basic auth? Is there a better authentication mechanism I should research?
[ "RFC 2617 defines them as\nuser-pass = userid \":\" password\nuserid = *<TEXT excluding \":\">\npassword = *TEXT\n\nand * in the ABNF means \"0 or more.\"\nMeaning the userid and password can be empty, in which case : is encoded into Base64.\n", "In HTTP Basic auth, the username and password are concatenated using a colon then encoded in base64 and the resulting header looks something like:\nAuthorization: Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==\n\nThe Basic part specified basic authentication and the second part is the base64 encoded token. It doesn't have to be a username/password combo, but can just be a username with a blank password, or a username alone. You just have to be aware of that when decoding the authorization header.\n" ]
[ 0, -1 ]
[]
[]
[ "basic_authentication", "http", "soa" ]
stackoverflow_0014247637_basic_authentication_http_soa.txt
Q: Oracle - Why is CHAR Column is automatically adding a leading zero? I am working with an Oracle DB 11g I have a database table with the primary key being a CHAR(4) - Though only numbers are used for this column. I noticed that there are some records that for example show '0018' or '0123'. So few things I noticed odd and needed some help on -Does a CHAR column "automatically" pad zeros to a value? -Also I noticed when writing a SQL that if I DONT use quotes in my where clause that it returns results, but if I do use quotes it does not? So for example DB CHAR(4) column has a key of '0018' I use this query SELECT * FROM TABLE_A WHERE COLUMN_1=18; I get the row as expected. But when I try the following SELECT * FROM TABLE_A WHERE COLUMN_1='18'; This does NOT work but this does work again SELECT * FROM TABLE_A WHERE COLUMN_1='0018'; So I am a bit confused how the first query can work as expected without quotes? A: Does a CHAR column "automatically" pad zeros to a value? No. From the documentation: If you insert a value that is shorter than the column length, then Oracle blank-pads the value to column length. So if you insert the number 18 it will be implicitly converted to the string '18 ', with two trailing spaces. You can see that in this fiddle, which also shows the comparisons. That means something else is zero-padding your data - either your application/code before inserting, or possibly in a trigger. Also I noticed when writing a SQL that if I DONT use quotes in my where clause that it returns results, but if I do use quotes it does not The data type comparison and conversion rules are shown in the documentation too: When comparing a character value with a numeric value, Oracle converts the character data to a numeric value. When you do: SELECT * FROM TABLE_A WHERE COLUMN_1=18; the string '0018' is implicitly converted to the number 18 so that it can be compared with your numeric literal. The leading zeros are meaningless once it's converted, so '0018', '018 ' and 18 ' would all match. With your zero-padded column value that matches and you do get a result: 18 ('0018' converted to a number) = 18 That means that every value in the table has to be converted before it can be compared; which also means that if you has a normal index on column_1 then it wouldn't be utilised in that comparison. When you do: SELECT * FROM TABLE_A WHERE COLUMN_1='18'; the column and literal are the same data type so no conversion has to be applied (so a normal index can be used). Oracle will use blank-padded comparison semantics here, because the column is char, padding the shorter literal value to the column size as '18 ', and then it will only match if the strings match exactly - so '18 ' would match but '0018' or ' 18 ' or anything else would not. With your zero-padded column value that does not match and you don't get a result: '0018' != '18 ' ('18' padded to length 4) When you do: SELECT * FROM TABLE_A WHERE COLUMN_1='0018'; the column and literal are the same data type so no conversion, no padding is applied as the literal is already the same length as the column value, and again it will only match if the strings match exactly - so '0018' would match but '18 ' or ' 18 ' or anything else would not. With your zero-padded column value that matches and you do get a result: '0018' = '0018' A: Does a CHAR column "automatically" pad zeros to a value? Not always zero's sometimes spaces. if all characters values are numeric yes it will pad zeros up to a fixed size of the character field. 
So I am a bit confused how the first query can work as expected without quotes? Because of implicit type conversions. The system is casting either the char to numeric or the numeric to char in which case it either drops the leading zeros and compares numeric values or it pads to be of the same data type and then compares. I'm pretty sure it's going character to numeric and thus the leading zeros are dropped when comparing. See: https://docs.oracle.com/cd/B13789_01/server.101/b10759/sql_elements002.htm for more details on data type comparison and implicit casting More: in the case of SELECT * FROM TABLE_A WHERE COLUMN_1='18'; I think the '18' is already character data so it becomes '18 ' (note 2 spaces after 18) compared to '0018' SELECT * FROM TABLE_A WHERE COLUMN_1=18; COLUMN_1 gets cast to numeric so 18=18 SELECT * FROM TABLE_A WHERE COLUMN_1='0018'; COLUMN_1 is already a char(4) so '0018' = '0018'
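The two comparison modes described in the answers can be modelled outside the database; this is a toy Python sketch of the semantics, not Oracle code:

def char_compare(a, b, width=4):
    # blank-padded comparison: Oracle pads the shorter CHAR operand with spaces
    return a.ljust(width) == b.ljust(width)

def numeric_compare(char_value, number):
    # with a numeric literal, the CHAR side is implicitly converted to a number
    return float(char_value) == number

print(char_compare("0018", "18"))    # False -> WHERE COLUMN_1 = '18' finds nothing
print(char_compare("0018", "0018"))  # True  -> WHERE COLUMN_1 = '0018' matches
print(numeric_compare("0018", 18))   # True  -> WHERE COLUMN_1 = 18 matches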
Oracle - Why is CHAR Column is automatically adding a leading zero?
I am working with an Oracle DB 11g I have a database table with the primary key being a CHAR(4) - Though only numbers are used for this column. I noticed that there are some records that for example show '0018' or '0123'. So few things I noticed odd and needed some help on -Does a CHAR column "automatically" pad zeros to a value? -Also I noticed when writing a SQL that if I DONT use quotes in my where clause that it returns results, but if I do use quotes it does not? So for example DB CHAR(4) column has a key of '0018' I use this query SELECT * FROM TABLE_A WHERE COLUMN_1=18; I get the row as expected. But when I try the following SELECT * FROM TABLE_A WHERE COLUMN_1='18'; This does NOT work but this does work again SELECT * FROM TABLE_A WHERE COLUMN_1='0018'; So I am a bit confused how the first query can work as expected without quotes?
[ "\nDoes a CHAR column \"automatically\" pad zeros to a value?\n\nNo. From the documentation:\n\nIf you insert a value that is shorter than the column length, then Oracle blank-pads the value to column length.\n\nSo if you insert the number 18 it will be implicitly converted to the string '18 ', with two trailing spaces. You can see that in this fiddle, which also shows the comparisons.\nThat means something else is zero-padding your data - either your application/code before inserting, or possibly in a trigger.\n\nAlso I noticed when writing a SQL that if I DONT use quotes in my where clause that it returns results, but if I do use quotes it does not\n\nThe data type comparison and conversion rules are shown in the documentation too:\n\nWhen comparing a character value with a numeric value, Oracle converts the character data to a numeric value.\n\nWhen you do:\nSELECT * FROM TABLE_A WHERE COLUMN_1=18;\n\nthe string '0018' is implicitly converted to the number 18 so that it can be compared with your numeric literal. The leading zeros are meaningless once it's converted, so '0018', '018 ' and 18 ' would all match.\nWith your zero-padded column value that matches and you do get a result: 18 ('0018' converted to a number) = 18\nThat means that every value in the table has to be converted before it can be compared; which also means that if you has a normal index on column_1 then it wouldn't be utilised in that comparison.\nWhen you do:\nSELECT * FROM TABLE_A WHERE COLUMN_1='18';\n\nthe column and literal are the same data type so no conversion has to be applied (so a normal index can be used). Oracle will use blank-padded comparison semantics here, because the column is char, padding the shorter literal value to the column size as '18 ', and then it will only match if the strings match exactly - so '18 ' would match but '0018' or ' 18 ' or anything else would not.\nWith your zero-padded column value that does not match and you don't get a result: '0018' != '18 ' ('18' padded to length 4)\nWhen you do:\nSELECT * FROM TABLE_A WHERE COLUMN_1='0018';\n\nthe column and literal are the same data type so no conversion, no padding is applied as the literal is already the same length as the column value, and again it will only match if the strings match exactly - so '0018' would match but '18 ' or ' 18 ' or anything else would not.\nWith your zero-padded column value that matches and you do get a result: '0018' = '0018'\n", "Does a CHAR column \"automatically\" pad zeros to a value?\nNot always zero's sometimes spaces. if all characters values are numeric yes it will pad zeros up to a fixed size of the character field.\nSo I am a bit confused how the first query can work as expected without quotes?\nBecause of implicit type conversions. The system is casting either the char to numeric or the numeric to char in which case it either drops the leading zeros and compares numeric values or it pads to be of the same data type and then compares. 
I'm pretty sure it's going character to numeric and thus the leading zeros are dropped when comparing.\nSee: https://docs.oracle.com/cd/B13789_01/server.101/b10759/sql_elements002.htm for more details on data type comparison and implicit casting\nMore:\n\nin the case of SELECT * FROM TABLE_A WHERE COLUMN_1='18'; I\nthink the '18' is already character data so it becomes '18 ' (note 2 spaces after 18)\ncompared to '0018'\nSELECT * FROM TABLE_A WHERE COLUMN_1=18; COLUMN_1 gets cast to numeric so 18=18\nSELECT * FROM TABLE_A WHERE COLUMN_1='0018'; COLUMN_1 is already a char(4) so '0018' = '0018'\n\n" ]
[ 4, 0 ]
[]
[]
[ "char", "oracle", "oracle11g", "oracle_sqldeveloper", "sql" ]
stackoverflow_0074657888_char_oracle_oracle11g_oracle_sqldeveloper_sql.txt
Q: Array reversal in Excel I got 8 bytes of hex values in a cell as below which is in little-endian format 00 00 08 04 22 00 40 00 With text split I could get individual hex values in an array. = TEXTSPLIT(A1, , " ") 00 00 08 04 22 00 40 00 Is there an Excel formula that I can use to grab the values in reverse order from an array to do below? 00 40 00 22 04 08 00 00 I don't want to use LEFT or MID or RIGHT extractors as I want to create a generic formula that works on all data types. A: For this very specific case you could use =TRIM(CONCAT(MID(" "&A1,SEQUENCE(8,,22,-3),3))) but to be more generic, try: Formula in A2: =TEXTJOIN(" ",,SORTBY(TEXTSPLIT(A1,," "),ROW(1:8),-1)) I suppose you can make this even more generic for any string you split on space: =LET(r,TEXTSPLIT(A1,," "),TEXTJOIN(" ",,SORTBY(r,SEQUENCE(ROWS(r)),-1))) Note this is almost an exact copy of this question where you could also use the technique shown by @ScottCraner using INDEX(). A: =MID(SUBSTITUTE(A1, " ", ""), SEQUENCE(1, LEN(SUBSTITUTE(A1, " ", ""))/2, LEN(SUBSTITUTE(A1, " ", ""))-1, -2), 2)
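For comparison only (this is outside Excel, but it makes the little-endian reversal goal of the formulas unambiguous), the same byte reversal is a one-line slice in Python; bytes.hex with a separator needs Python 3.8+:

value = "00 00 08 04 22 00 40 00"
# parse the hex pairs, reverse the byte order, and re-join with spaces
print(bytes.fromhex(value)[::-1].hex(" "))  # 00 40 00 22 04 08 00 00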
Array reversal in Excel
I got 8 bytes of hex values in a cell as below which is in little-endian format 00 00 08 04 22 00 40 00 With text split I could get individual hex values in an array. = TEXTSPLIT(A1, , " ") 00 00 08 04 22 00 40 00 Is there an Excel formula that I can use to grab the values in reverse order from an array to do below? 00 40 00 22 04 08 00 00 I don't want to use LEFT or MID or RIGHT extractors as I want to create a generic formula that works on all data types.
[ "For this very specific case you could use =TRIM(CONCAT(MID(\" \"&A1,SEQUENCE(8,,22,-3),3))) but to be more generic, try:\n\nFormula in A2:\n=TEXTJOIN(\" \",,SORTBY(TEXTSPLIT(A1,,\" \"),ROW(1:8),-1))\n\n\nI suppose you can make this even more generic for any string you split on space:\n =LET(r,TEXTSPLIT(A1,,\" \"),TEXTJOIN(\" \",,SORTBY(r,SEQUENCE(ROWS(r)),-1)))\n\n\nNote this is almost an exact copy of this question where you could also use the technique shown by @ScottCraner using INDEX().\n", "=MID(SUBSTITUTE(A1, \" \", \"\"), SEQUENCE(1, LEN(SUBSTITUTE(A1, \" \", \"\"))/2, LEN(SUBSTITUTE(A1, \" \", \"\"))-1, -2), 2)\n\n" ]
[ 7, 2 ]
[]
[]
[ "excel", "excel_formula" ]
stackoverflow_0074658113_excel_excel_formula.txt
Q: What mechanism can I use for an optional function parameter that gets a value assigned if not provided? In Python I can do something like: def add_postfix(name: str, postfix: str = None): if base is None: postfix = some_computation_based_on_name(name) return name + postfix So I have an optional parameter which, if not provided, gets assigned a value. Notice that I don't have a constant default for postfix. It needs to be calculated. (which is why I can't just have a default value). In C++ I reached for std::optional and tried: std::string add_postfix(const std::string& name, std::optional<const std::string&> postfix) { if (!postfix.has_value()) { postfix.emplace("2") }; return name + postfix; } I'm now aware that this won't work because std::optional<T&> is not a thing in C++. I'm fine with that. But now what mechanism should I use to achieve the following: Maintain the benefits of const T&: no copy and don't modify the original. Don't have to make some other postfix_ so that I have the optional one and the final one. Don't have to overload. Have multiple of these optional parameters in one function signature. A: You do this with two functions: std::string add_postfix(const std::string& name, const std::string& postfix) { // whatever } std::string add_default_postfix(const std::string& name) { return add_postfix(name, "2"); } Or, if you're into overloading, you can write the second one as an overload by naming it add_postfix. A: One possibility is to use a std::string const* (a non-constant pointer to a const std::string) as a function argument. std::string add_postfix(const std::string& name, std::string const* postfix = nullptr) { std::string derivedSuffix; if(!postfix) { derivedSuffix = some_computation(name); postfix = &derivedSuffix; } return name + *postfix; } Some care is required with the details here. derivedSuffix needs to be an object that lasts at least as long as the pointer postfix refers to it. Therefore it cannot be contained entirely within the if(!postfix) block, because if it did then using *postfix outside of it would be invalid. There's technically still a bit of overhead here where we create an empty std::string even when postfix isn't nullptr, but we never have to make a copy of a std::string with actual values in it. A: You simply can write: std::string add_postfix(const std::string& name, const std::string& postfix = "default value") { return name + postfix; } A: With your usage, value_or seems to do the job: std::string add_postfix(const std::string& name, const std::optional<std::string>& postfix) { return name + postfix.value_or("2"); } If you really want optional<T&>, optional<reference_wrapper<T>> might do the job. std::string add_postfix(const std::string& name, const std::optional<std::reference_wrapper<const std::string>>& postfix) { #if 1 const std::string postfix_ = "2"; return name + postfix.value_or(postfix_).get(); #else // or return name + (postfix.has_value() ? postfix->get() : "2"); #endif } Demo
What mechanism can I use for an optional function parameter that gets a value assigned if not provided?
In Python I can do something like: def add_postfix(name: str, postfix: str = None): if postfix is None: postfix = some_computation_based_on_name(name) return name + postfix So I have an optional parameter which, if not provided, gets assigned a value. Notice that I don't have a constant default for postfix. It needs to be calculated (which is why I can't just have a default value). In C++ I reached for std::optional and tried: std::string add_postfix(const std::string& name, std::optional<const std::string&> postfix) { if (!postfix.has_value()) { postfix.emplace("2") }; return name + postfix; } I'm now aware that this won't work because std::optional<T&> is not a thing in C++. I'm fine with that. But now what mechanism should I use to achieve the following: Maintain the benefits of const T&: no copy and don't modify the original. Don't have to make some other postfix_ so that I have the optional one and the final one. Don't have to overload. Have multiple of these optional parameters in one function signature.
[ "You do this with two functions:\nstd::string add_postfix(const std::string& name, const std::string& postfix) {\n// whatever\n}\n\nstd::string add_default_postfix(const std::string& name) {\nreturn add_postfix(name, \"2\");\n}\n\nOr, if you're into overloading, you can write the second one as an overload by naming it add_postfix.\n", "One possibility is to use a std::string const* (a non-constant pointer to a const std::string) as a function argument.\nstd::string add_postfix(const std::string& name, std::string const* postfix = nullptr) \n{\n std::string derivedSuffix;\n if(!postfix) \n { \n derivedSuffix = some_computation(name); \n postfix = &derivedSuffix;\n }\n return name + *postfix;\n}\n\nSome care is required with the details here. derivedSuffix needs to be an object that lasts at least as long as the pointer postfix refers to it. Therefore it cannot be contained entirely within the if(!postfix) block, because if it did then using *postfix outside of it would be invalid. There's technically still a bit of overhead here where we create an empty std::string even when postfix isn't nullptr, but we never have to make a copy of a std::string with actual values in it.\n", "You simply can write:\nstd::string add_postfix(const std::string& name, const std::string& postfix = \"default value\")\n{\n return name + postfix;\n}\n\n", "With your usage, value_or seems to do the job:\nstd::string add_postfix(const std::string& name,\n const std::optional<std::string>& postfix)\n{\n return name + postfix.value_or(\"2\");\n}\n\nIf you really want optional<T&>, optional<reference_wrapper<T>> might do the job.\nstd::string add_postfix(const std::string& name,\n const std::optional<std::reference_wrapper<const std::string>>& postfix)\n{\n#if 1\n const std::string postfix_ = \"2\";\n return name + postfix.value_or(postfix_).get();\n#else // or\n return name + (postfix.has_value() ? postfix->get() : \"2\");\n#endif\n}\n\nDemo\n" ]
[ 3, 2, 1, 1 ]
[]
[]
[ "c++" ]
stackoverflow_0074657605_c++.txt
Q: “Unable to load DLL 'SQLite.Interop.dll' or one of its dependencies” running an application in Docker, but locally it works fine Introduction There seems to be frequent issues with the SQLite.Interop.dll library. Just look at these other StackOverflow questions: Unable to load DLL 'SQLite.Interop.dll' Visual Studio C# - SQLite.Interop.dll not found "Unable to load DLL 'SQLite.Interop.dll' error on client machine I’ve gone through all of the above and more: I installed the System.Data.SQLite.Core NuGet package, I specified these elements in the *.csproj, I set x64 instead of Any CPU, and so on with no success. The key element is that the application runs just fine locally, but I need to make it run on Docker, and that’s where the problems start. The situation is that I’m working on a .NET Core 3.1 console application with Visual Studio 2022. The application will be finally deployed on AKS (Azure Kubernetes Service). Of course, first I need to make it work on the local installation of Docker in my development machine. Building the console application project on Visual Studio gives me no error and it runs fine from Visual Studio. Building its Docker image with the appropriate Dockerfile also gives no problems, but running the same image on Docker raises the following exception: Unable to load DLL 'SQLite.Interop.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E). Currently, the project has the System.Data.SQLite.Core NuGet package installed and doesn’t have these elements in the *.csproj. The exception is raised by a project referenced by the one of the console application; this project both has the System.Data.SQLite.Core and System.Data.SQLite NuGet packages installed. The console application’s project has only the System.Data.SQLite.Core package installed (I already tried to install also the System.Data.SQLite, with no success). All the projects, now, has the System.Data.SQLite NuGet package installed (to give it a try), which causes the System.Data.SQLite.Core to be installed as a dependency. What I tried I used the Dockerfile pre-assembled by Visual Studio 2022, which deals with all the various referenced projects: FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base WORKDIR /app FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build WORKDIR /src COPY ["ProjectA/ProjectA.csproj", "ProjectA/"] COPY ["ProjectB/ProjectB.csproj", "ProjectB/"] #COPY all the other referenced projects RUN dotnet restore "ProjectA/ProjectA.csproj" COPY . . WORKDIR "/src/ProjectA" RUN dotnet build "ProjectA.csproj" -c Release -o /app/build FROM build AS publish RUN dotnet publish "ProjectA.csproj" -c Release -o /app/publish FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "ProjectA.dll"] Since this Dockerfile is based on a multi-stage build, it has the following lines: ... RUN dotnet build "MyProject.csproj" -c Release -o /app/build ... RUN dotnet publish "MyProject.csproj" -c Release -o /app/publish ... to first build the code, and then publish it. Given that dotnet publish, I decided to take a step back and be sure that, at least, I was able to publish the project myself, to narrow down the issue. I found that publishing it locally on a folder also gave me some problems, in particular the NU1605 error, which was preventing me to end the process successfully. I resolved the issue using what Microsoft suggests. Now, if I run the .exe located inside the publish destination folder, the application works as expected. 
It was time to finally make it a Docker image. The image building process went well, but then the Unable... error appeared at runtime. Reading online, to solve the SQLite.Interop.dll issue, they suggest to move the SQLite.Interop.dll file directly in the bin folder; in fact, originally, I found it lived inside the bin\Release\netcoreapp3.1\runtimes\ folder, both in its win-x86 and win-x64 sub-directories. To “get rid” of this runtimes folder, I tried to publish the project locally on a folder, with the following settings: This in fact made the SQLite.Interop.dll appear directly inside the bin folder and got rid of the runtimes folder. So, to obtain the same result with Docker, I modified the Dockerfile like this: ... RUN dotnet build "MyProject.csproj" -c Release -o /app/build -r win-x64 ... RUN dotnet publish "MyProject.csproj" -c Release -o /app/publish -r win-x64 ... and now the SQLite.Interop.dll file appears directly also inside the bin folder in the Docker running container, so I thought everything was just fine at that time, but still I got the exception. One last thing I found that running the application locally (not in Docker) without specifying the destination runtime (so, keeping the build process as Any CPU instead of x64), raised no exceptions (that is, keeping the runtimes/ folder and the two SQLite.Interop.dll files inside of it, in the win-x86 and win-x64 sub-folders). A: I ended up switching to Linux containers. A: Solution To run a dotnet application (e.g. dotnet test) inside a windows docker container with the System.Data.SQLite.Core nuget package you can use the following docker file # escape=` FROM mcr.microsoft.com/windows/servercore:ltsc2019 RUN setx /M PATH "%PATH%;C:\Program Files\dotnet;C:\nodejs" # Install Microsoft Visual C++ Runtime Library SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';$ProgressPreference='silentlyContinue';"] # Install .NET 6.0 for x86 (just change it to the platform you need) RUN Invoke-WebRequest ` -UseBasicParsing ` -Uri https://dot.net/v1/dotnet-install.ps1 ` -OutFile dotnet-install.ps1; ` ./dotnet-install.ps1 ` -InstallDir '/Program Files/dotnet' ` -Channel 6.0 ` -Architecture x86 Here is the releveant part of my csproj file: <ItemGroup> ... <PackageReference Include="System.Data.SQLite.Core" Version="1.0.117" /> </ItemGroup> What worked for me After switching to the script method, to install the .NET framework (and not using the binary) it worked for me. Beforehand, like you, I tried a ton of different things! What you can try What might work is also to add the PrivateAssets="none" in the csproi. See this answer. After adding this to the csproj files I got a different error, because I must use a x86-dll in my project. After installing .NET 6 x86 correctly, I reverted even this change in the csproj and it still worked! What else What I also tried, but was not necessary in the end, was Microsoft Visual C++ Runtime Library. Quote from https://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki All downloadable packages on this web page that do not include the word "static" in their file name require the appropriate version (e.g. 2005, 2008, 2010, 2012, 2013, 2015, 2017) of the Microsoft Visual C++ Runtime Library, to be successfully installed on the target machine, prior to making use of the executables contained therein. ... 
# Install Microsoft Visual C++ Runtime Library SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';$ProgressPreference='silentlyContinue';"] RUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; ` Invoke-WebRequest "https://aka.ms/vs/17/release/vc_redist.x64.exe" -OutFile "vc_redist.x64.exe"; ` Start-Process -filepath C:\vc_redist.x64.exe -ArgumentList "/install", "/passive", "/norestart" -Passthru | Wait-Process; ` Remove-Item -Force vc_redist.x64.exe; More information on How-To install .NET in a Docker container can be found at the dotnet-docker GitHub repo TL;DR Use dotnet-install.ps1 in your Dockerfile to install .NET and the SDK. Somehow, it does some magic, so that you can run dotnet apps which are using the SQLite.Interop.dll library (included via NuGet)
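The first answer's switch to Linux containers can look roughly like the sketch below. It is untested here and assumes your System.Data.SQLite.Core version ships linux-x64 native binaries and that the project layout matches the question; publishing with an explicit runtime identifier is what places the matching native interop library next to the app.

FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build
WORKDIR /src
COPY . .
# framework-dependent publish for the linux-x64 RID so the Linux native lib is selected
RUN dotnet publish "ProjectA/ProjectA.csproj" -c Release -r linux-x64 --self-contained false -o /app/publish

FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "ProjectA.dll"]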
“Unable to load DLL 'SQLite.Interop.dll' or one of its dependencies” running an application in Docker, but locally it works fine
Introduction There seems to be frequent issues with the SQLite.Interop.dll library. Just look at these other StackOverflow questions: Unable to load DLL 'SQLite.Interop.dll' Visual Studio C# - SQLite.Interop.dll not found "Unable to load DLL 'SQLite.Interop.dll' error on client machine I’ve gone through all of the above and more: I installed the System.Data.SQLite.Core NuGet package, I specified these elements in the *.csproj, I set x64 instead of Any CPU, and so on with no success. The key element is that the application runs just fine locally, but I need to make it run on Docker, and that’s where the problems start. The situation is that I’m working on a .NET Core 3.1 console application with Visual Studio 2022. The application will be finally deployed on AKS (Azure Kubernetes Service). Of course, first I need to make it work on the local installation of Docker in my development machine. Building the console application project on Visual Studio gives me no error and it runs fine from Visual Studio. Building its Docker image with the appropriate Dockerfile also gives no problems, but running the same image on Docker raises the following exception: Unable to load DLL 'SQLite.Interop.dll': The specified module could not be found. (Exception from HRESULT: 0x8007007E). Currently, the project has the System.Data.SQLite.Core NuGet package installed and doesn’t have these elements in the *.csproj. The exception is raised by a project referenced by the one of the console application; this project both has the System.Data.SQLite.Core and System.Data.SQLite NuGet packages installed. The console application’s project has only the System.Data.SQLite.Core package installed (I already tried to install also the System.Data.SQLite, with no success). All the projects, now, has the System.Data.SQLite NuGet package installed (to give it a try), which causes the System.Data.SQLite.Core to be installed as a dependency. What I tried I used the Dockerfile pre-assembled by Visual Studio 2022, which deals with all the various referenced projects: FROM mcr.microsoft.com/dotnet/aspnet:3.1 AS base WORKDIR /app FROM mcr.microsoft.com/dotnet/sdk:3.1 AS build WORKDIR /src COPY ["ProjectA/ProjectA.csproj", "ProjectA/"] COPY ["ProjectB/ProjectB.csproj", "ProjectB/"] #COPY all the other referenced projects RUN dotnet restore "ProjectA/ProjectA.csproj" COPY . . WORKDIR "/src/ProjectA" RUN dotnet build "ProjectA.csproj" -c Release -o /app/build FROM build AS publish RUN dotnet publish "ProjectA.csproj" -c Release -o /app/publish FROM base AS final WORKDIR /app COPY --from=publish /app/publish . ENTRYPOINT ["dotnet", "ProjectA.dll"] Since this Dockerfile is based on a multi-stage build, it has the following lines: ... RUN dotnet build "MyProject.csproj" -c Release -o /app/build ... RUN dotnet publish "MyProject.csproj" -c Release -o /app/publish ... to first build the code, and then publish it. Given that dotnet publish, I decided to take a step back and be sure that, at least, I was able to publish the project myself, to narrow down the issue. I found that publishing it locally on a folder also gave me some problems, in particular the NU1605 error, which was preventing me to end the process successfully. I resolved the issue using what Microsoft suggests. Now, if I run the .exe located inside the publish destination folder, the application works as expected. It was time to finally make it a Docker image. The image building process went well, but then the Unable... error appeared at runtime. 
Reading online, to solve the SQLite.Interop.dll issue, they suggest to move the SQLite.Interop.dll file directly in the bin folder; in fact, originally, I found it lived inside the bin\Release\netcoreapp3.1\runtimes\ folder, both in its win-x86 and win-x64 sub-directories. To “get rid” of this runtimes folder, I tried to publish the project locally on a folder, with the following settings: This in fact made the SQLite.Interop.dll appear directly inside the bin folder and got rid of the runtimes folder. So, to obtain the same result with Docker, I modified the Dockerfile like this: ... RUN dotnet build "MyProject.csproj" -c Release -o /app/build -r win-x64 ... RUN dotnet publish "MyProject.csproj" -c Release -o /app/publish -r win-x64 ... and now the SQLite.Interop.dll file appears directly also inside the bin folder in the Docker running container, so I thought everything was just fine at that time, but still I got the exception. One last thing I found that running the application locally (not in Docker) without specifying the destination runtime (so, keeping the build process as Any CPU instead of x64), raised no exceptions (that is, keeping the runtimes/ folder and the two SQLite.Interop.dll files inside of it, in the win-x86 and win-x64 sub-folders).
[ "I ended up switching to Linux containers.\n", "Solution\nTo run a dotnet application (e.g. dotnet test) inside a windows docker container with the System.Data.SQLite.Core nuget package you can use the following docker file\n# escape=`\nFROM mcr.microsoft.com/windows/servercore:ltsc2019\n\nRUN setx /M PATH \"%PATH%;C:\\Program Files\\dotnet;C:\\nodejs\"\n\n# Install Microsoft Visual C++ Runtime Library\nSHELL [\"powershell\", \"-Command\", \"$ErrorActionPreference = 'Stop';$ProgressPreference='silentlyContinue';\"]\n\n# Install .NET 6.0 for x86 (just change it to the platform you need)\nRUN Invoke-WebRequest `\n -UseBasicParsing `\n -Uri https://dot.net/v1/dotnet-install.ps1 `\n -OutFile dotnet-install.ps1; `\n ./dotnet-install.ps1 `\n -InstallDir '/Program Files/dotnet' `\n -Channel 6.0 `\n -Architecture x86 \n\n\nHere is the releveant part of my csproj file:\n <ItemGroup>\n ...\n <PackageReference Include=\"System.Data.SQLite.Core\" Version=\"1.0.117\" />\n </ItemGroup>\n\nWhat worked for me\nAfter switching to the script method, to install the .NET framework (and not using the binary) it worked for me.\nBeforehand, like you, I tried a ton of different things!\nWhat you can try\nWhat might work is also to add the PrivateAssets=\"none\" in the csproi. See this answer. After adding this to the csproj files I got a different error, because I must use a x86-dll in my project.\nAfter installing .NET 6 x86 correctly, I reverted even this change in the csproj and it still worked!\nWhat else\nWhat I also tried, but was not necessary in the end, was Microsoft Visual C++ Runtime Library.\nQuote from https://system.data.sqlite.org/index.html/doc/trunk/www/downloads.wiki\n\n\nAll downloadable packages on this web page that do not include the word \"static\" in their file name require the appropriate version (e.g. 2005, 2008, 2010, 2012, 2013, 2015, 2017) of the Microsoft Visual C++ Runtime Library, to be successfully installed on the target machine, prior to making use of the executables contained therein.\n\n\n...\n# Install Microsoft Visual C++ Runtime Library\nSHELL [\"powershell\", \"-Command\", \"$ErrorActionPreference = 'Stop';$ProgressPreference='silentlyContinue';\"]\nRUN [Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; `\n Invoke-WebRequest \"https://aka.ms/vs/17/release/vc_redist.x64.exe\" -OutFile \"vc_redist.x64.exe\"; `\n Start-Process -filepath C:\\vc_redist.x64.exe -ArgumentList \"/install\", \"/passive\", \"/norestart\" -Passthru | Wait-Process; '\n Remove-Item -Force vc_redist.x64.exe;\n\nMore information on How-To install .NET in a docker container can be found at the dotnet-docker github repo\nTL/DR\nUse dotnet-install.ps1 in your dockerfile to install .NET and the SDK. Somehow, it does some magic, so that you can run dotnet apps which are using the SQLite.Interop.dll library (included via nuget)\n" ]
[ 0, 0 ]
[]
[]
[ "dll", "docker", "dockerfile", "sqlite", "system.data.sqlite" ]
stackoverflow_0071799985_dll_docker_dockerfile_sqlite_system.data.sqlite.txt
Q: Encrypt with 'window.crypto.subtle', decrypt in c# I want to encrypt with window.crypto.subtle and decrypt in C#. The crypt / decrypt in js are working. In C#, The computed authentification tag don't match the input. I don't know if I can put any 12 bytes as salt nor if I need to derive the password. export async function deriveKey(password, salt) { const buffer = utf8Encoder.encode(password); const key = await crypto.subtle.importKey( 'raw', buffer, { name: 'PBKDF2' }, false, ['deriveKey'], ); const privateKey = crypto.subtle.deriveKey( { name: 'PBKDF2', hash: { name: 'SHA-256' }, iterations, salt, }, key, { name: 'AES-GCM', length: 256, }, false, ['encrypt', 'decrypt'], ); return privateKey; } const buff_to_base64 = (buff) => btoa(String.fromCharCode.apply(null, buff)); const base64_to_buf = (b64) => Uint8Array.from(atob(b64), (c) => c.charCodeAt(null)); export async function encrypt(key, data) { const salt = crypto.getRandomValues(new Uint8Array(12)); const iv = crypto.getRandomValues(new Uint8Array(12)); console.log('encrypt'); console.log('iv', iv); console.log('salt', salt); const buffer = new TextEncoder().encode(data); const privatekey = await deriveKey(key, salt); const encrypted = await crypto.subtle.encrypt( { name: 'AES-GCM', iv, tagLength: 128, }, privatekey, buffer, ); const bytes = new Uint8Array(encrypted); console.log('concat'); const buff = new Uint8Array(iv.byteLength + encrypted.byteLength + salt.byteLength); buff.set(iv, 0); buff.set(salt, iv.byteLength); buff.set(bytes, iv.byteLength + salt.byteLength); console.log('iv', iv); console.log('salt', salt); console.log('buff', buff); const base64Buff = buff_to_base64(buff); console.log(base64Buff); return base64Buff; } export async function decrypt(key, data) { console.log('decryption'); console.log('buff', base64_to_buf(data)); const d = base64_to_buf(data); const iv = d.slice(0, 12); const salt = d.slice(12, 24); const ec = d.slice(24); console.log('iv', iv); console.log('salt', salt); console.log(ec); const decrypted = await window.crypto.subtle.decrypt( { name: 'AES-GCM', iv, tagLength: 128, }, await deriveKey(key, salt), ec, ); return new TextDecoder().decode(new Uint8Array(decrypted)); } Span<byte> encryptedData = Convert.FromBase64String(enc).AsSpan(); Span<byte> nonce = encryptedData[..12]; Span<byte> salt = encryptedData.Slice(12, 12); Span<byte> data = encryptedData.Slice(12 + 12, encryptedData.Length - 16 - 12 - 12); Span<byte> tag = encryptedData[^16..]; Span<byte> result = new byte[data.Length]; using Rfc2898DeriveBytes pbkdf2 = new(Encoding.UTF8.GetBytes(password), salt.ToArray(), 1000, HashAlgorithmName.SHA256); using AesGcm aes = new(pbkdf2.GetBytes(16)); aes.Decrypt(nonce, data, tag, result); A: There are a few inconsistencies and/or minor flaws in both codes. Concerning the JavaScript code: The salt should be concatenated like the IV along with the ciphertext/tag (ciphertext/tag = implicit concatenation of the actual ciphertext and tag), e.g. salt|IV|ciphertext|tag. The IV should be randomly generated like the salt. In both codes the same iteration count must be used for key derivation with PBKDF2, e.g. 25000 (in practice the value should be set as high as possible while maintaining acceptable performance). In both codes, the PBKDF2 key derivation must generate AES keys of the same length, so that the same AES variant is used, e.g. a 32 bytes key for AES-256. 
With these changes, the JavaScript code (async () => { const utf8Encoder = new TextEncoder('utf-8'); const salt = crypto.getRandomValues(new Uint8Array(16)); // Fix 1: consider salt const iv = crypto.getRandomValues(new Uint8Array(12)); const iterations = 25000; // Fix 2: apply the same iteration count async function deriveKey(password) { const buffer = utf8Encoder.encode(password); const key = await crypto.subtle.importKey( 'raw', buffer, { name: 'PBKDF2' }, false, ['deriveKey'], ); const privateKey = crypto.subtle.deriveKey( { name: 'PBKDF2', hash: { name: 'SHA-256' }, iterations, salt, }, key, { name: 'AES-GCM', length: 256, // Fix 3: use the same key size }, false, ['encrypt', 'decrypt'], ); return privateKey; } const buff_to_base64 = (buff) => btoa(String.fromCharCode.apply(null, buff)); const base64_to_buf = (b64) => Uint8Array.from(atob(b64), (c) => c.charCodeAt(null)); async function encrypt(key, data, iv, salt) { const buffer = new TextEncoder().encode(data); const privatekey = await deriveKey(key); const encrypted = await crypto.subtle.encrypt( { name: 'AES-GCM', iv, tagLength: 128, }, privatekey, buffer, ); const bytes = new Uint8Array(encrypted); let buff = new Uint8Array(salt.byteLength + iv.byteLength + encrypted.byteLength); buff.set(salt, 0); // Fix 1: consider salt buff.set(iv, salt.byteLength); buff.set(bytes, salt.byteLength + iv.byteLength); const base64Buff = buff_to_base64(buff); return base64Buff; } async function decrypt(key, data) { const d = base64_to_buf(data); const salt = d.slice(0, 16); // Fix 1: consider salt const iv = d.slice(16, 16 + 12) const ec = d.slice(16 + 12); const decrypted = await window.crypto.subtle.decrypt( { name: 'AES-GCM', iv, tagLength: 128, }, await deriveKey(key), ec ); return new TextDecoder().decode(new Uint8Array(decrypted)); } var data = 'The quick brown fox jumps over the lazy dog'; var passphrase = 'my passphrase'; var ct = await encrypt(passphrase, data, iv, salt); var dt = await decrypt(passphrase, ct); console.log(ct); console.log(dt); })(); returns, e.g.: P/y3nrZU70XtanEUvubyVUp+LzOVHLGAl55cd+N6T0c9ak15KVXh5UxFEjMYGsvGWzf286wAGc5HgEjmwxWCkdjSt5vt42Anb4jwKlVMdLyYoP9Gg/be In the C# code, salt, IV, and ciphertext/tag must be correctly separated, and keysize and iteration count of the JavaScript code must be used: string ciphertext = "P/y3nrZU70XtanEUvubyVUp+LzOVHLGAl55cd+N6T0c9ak15KVXh5UxFEjMYGsvGWzf286wAGc5HgEjmwxWCkdjSt5vt42Anb4jwKlVMdLyYoP9Gg/be"; Span<byte> encryptedData = Convert.FromBase64String(ciphertext).AsSpan(); Span<byte> salt = encryptedData[..16]; // Fix 1: consider salt (and apply the correct parameters) Span<byte> nonce = encryptedData[16..(16 + 12)]; Span<byte> data = encryptedData[(16 + 12)..^16]; Span<byte> tag = encryptedData[^16..]; string password = "my passphrase"; using Rfc2898DeriveBytes pbkdf2 = new(Encoding.UTF8.GetBytes(password), salt.ToArray(), 25000, HashAlgorithmName.SHA256); // Fix 2: apply the same iteration count using AesGcm aes = new(pbkdf2.GetBytes(32)); // Fix 3: use the same key size (e.g. 32 bytes for AES-256) Span<byte> result = new byte[data.Length]; aes.Decrypt(nonce, data, tag, result); Console.WriteLine(Encoding.UTF8.GetString(result)); // The quick brown fox jumps over the lazy dog Then the ciphertext of the JavaScript code can be successfully decrypted with the C# code.
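For another cross-check of the answer's salt|IV|ciphertext|tag layout, the same decryption can be written in Python with the third-party cryptography package; this sketch mirrors the answer's parameters (16-byte salt, 12-byte IV, 25000 iterations, AES-256) and is illustrative only:

import base64
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt(password: str, b64_blob: str) -> str:
    blob = base64.b64decode(b64_blob)
    salt, iv, ct_and_tag = blob[:16], blob[16:28], blob[28:]
    key = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                     iterations=25000).derive(password.encode())
    # AESGCM expects the 16-byte tag appended to the ciphertext, which is
    # exactly what WebCrypto's encrypt() emits
    return AESGCM(key).decrypt(iv, ct_and_tag, None).decode()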
Encrypt with 'window.crypto.subtle', decrypt in c#
I want to encrypt with window.crypto.subtle and decrypt in C#. The crypt / decrypt in js are working. In C#, The computed authentification tag don't match the input. I don't know if I can put any 12 bytes as salt nor if I need to derive the password. export async function deriveKey(password, salt) { const buffer = utf8Encoder.encode(password); const key = await crypto.subtle.importKey( 'raw', buffer, { name: 'PBKDF2' }, false, ['deriveKey'], ); const privateKey = crypto.subtle.deriveKey( { name: 'PBKDF2', hash: { name: 'SHA-256' }, iterations, salt, }, key, { name: 'AES-GCM', length: 256, }, false, ['encrypt', 'decrypt'], ); return privateKey; } const buff_to_base64 = (buff) => btoa(String.fromCharCode.apply(null, buff)); const base64_to_buf = (b64) => Uint8Array.from(atob(b64), (c) => c.charCodeAt(null)); export async function encrypt(key, data) { const salt = crypto.getRandomValues(new Uint8Array(12)); const iv = crypto.getRandomValues(new Uint8Array(12)); console.log('encrypt'); console.log('iv', iv); console.log('salt', salt); const buffer = new TextEncoder().encode(data); const privatekey = await deriveKey(key, salt); const encrypted = await crypto.subtle.encrypt( { name: 'AES-GCM', iv, tagLength: 128, }, privatekey, buffer, ); const bytes = new Uint8Array(encrypted); console.log('concat'); const buff = new Uint8Array(iv.byteLength + encrypted.byteLength + salt.byteLength); buff.set(iv, 0); buff.set(salt, iv.byteLength); buff.set(bytes, iv.byteLength + salt.byteLength); console.log('iv', iv); console.log('salt', salt); console.log('buff', buff); const base64Buff = buff_to_base64(buff); console.log(base64Buff); return base64Buff; } export async function decrypt(key, data) { console.log('decryption'); console.log('buff', base64_to_buf(data)); const d = base64_to_buf(data); const iv = d.slice(0, 12); const salt = d.slice(12, 24); const ec = d.slice(24); console.log('iv', iv); console.log('salt', salt); console.log(ec); const decrypted = await window.crypto.subtle.decrypt( { name: 'AES-GCM', iv, tagLength: 128, }, await deriveKey(key, salt), ec, ); return new TextDecoder().decode(new Uint8Array(decrypted)); } Span<byte> encryptedData = Convert.FromBase64String(enc).AsSpan(); Span<byte> nonce = encryptedData[..12]; Span<byte> salt = encryptedData.Slice(12, 12); Span<byte> data = encryptedData.Slice(12 + 12, encryptedData.Length - 16 - 12 - 12); Span<byte> tag = encryptedData[^16..]; Span<byte> result = new byte[data.Length]; using Rfc2898DeriveBytes pbkdf2 = new(Encoding.UTF8.GetBytes(password), salt.ToArray(), 1000, HashAlgorithmName.SHA256); using AesGcm aes = new(pbkdf2.GetBytes(16)); aes.Decrypt(nonce, data, tag, result);
[ "There are a few inconsistencies and/or minor flaws in both codes. Concerning the JavaScript code:\n\nThe salt should be concatenated like the IV along with the ciphertext/tag (ciphertext/tag = implicit concatenation of the actual ciphertext and tag), e.g. salt|IV|ciphertext|tag. The IV should be randomly generated like the salt.\nIn both codes the same iteration count must be used for key derivation with PBKDF2, e.g. 25000 (in practice the value should be set as high as possible while maintaining acceptable performance).\nIn both codes, the PBKDF2 key derivation must generate AES keys of the same length, so that the same AES variant is used, e.g. a 32 bytes key for AES-256.\n\nWith these changes, the JavaScript code\n\n\n(async () => {\n\n const utf8Encoder = new TextEncoder('utf-8');\n const salt = crypto.getRandomValues(new Uint8Array(16)); // Fix 1: consider salt\n const iv = crypto.getRandomValues(new Uint8Array(12));\n const iterations = 25000; // Fix 2: apply the same iteration count\n\n async function deriveKey(password) {\n const buffer = utf8Encoder.encode(password);\n const key = await crypto.subtle.importKey(\n 'raw',\n buffer,\n { name: 'PBKDF2' },\n false,\n ['deriveKey'],\n );\n\n const privateKey = crypto.subtle.deriveKey(\n {\n name: 'PBKDF2',\n hash: { name: 'SHA-256' },\n iterations,\n salt,\n },\n key,\n {\n name: 'AES-GCM',\n length: 256, // Fix 3: use the same key size\n },\n false,\n ['encrypt', 'decrypt'],\n );\n\n return privateKey;\n }\n \n const buff_to_base64 = (buff) => btoa(String.fromCharCode.apply(null, buff));\n const base64_to_buf = (b64) => Uint8Array.from(atob(b64), (c) => c.charCodeAt(null));\n\n async function encrypt(key, data, iv, salt) {\n const buffer = new TextEncoder().encode(data);\n\n const privatekey = await deriveKey(key);\n const encrypted = await crypto.subtle.encrypt(\n {\n name: 'AES-GCM',\n iv,\n tagLength: 128,\n },\n privatekey,\n buffer,\n );\n\n const bytes = new Uint8Array(encrypted);\n let buff = new Uint8Array(salt.byteLength + iv.byteLength + encrypted.byteLength);\n buff.set(salt, 0); // Fix 1: consider salt\n buff.set(iv, salt.byteLength);\n buff.set(bytes, salt.byteLength + iv.byteLength);\n\n const base64Buff = buff_to_base64(buff);\n return base64Buff;\n }\n\n async function decrypt(key, data) {\n const d = base64_to_buf(data);\n const salt = d.slice(0, 16); // Fix 1: consider salt\n const iv = d.slice(16, 16 + 12)\n const ec = d.slice(16 + 12);\n\n const decrypted = await window.crypto.subtle.decrypt(\n {\n name: 'AES-GCM',\n iv,\n tagLength: 128,\n },\n await deriveKey(key),\n ec\n );\n\n return new TextDecoder().decode(new Uint8Array(decrypted));\n }\n\n var data = 'The quick brown fox jumps over the lazy dog';\n var passphrase = 'my passphrase';\n var ct = await encrypt(passphrase, data, iv, salt);\n var dt = await decrypt(passphrase, ct);\n console.log(ct);\n console.log(dt);\n\n})();\n\n\n\nreturns, e.g.:\nP/y3nrZU70XtanEUvubyVUp+LzOVHLGAl55cd+N6T0c9ak15KVXh5UxFEjMYGsvGWzf286wAGc5HgEjmwxWCkdjSt5vt42Anb4jwKlVMdLyYoP9Gg/be\n\nIn the C# code, salt, IV, and ciphertext/tag must be correctly separated, and keysize and iteration count of the JavaScript code must be used:\nstring ciphertext = \"P/y3nrZU70XtanEUvubyVUp+LzOVHLGAl55cd+N6T0c9ak15KVXh5UxFEjMYGsvGWzf286wAGc5HgEjmwxWCkdjSt5vt42Anb4jwKlVMdLyYoP9Gg/be\";\nSpan<byte> encryptedData = Convert.FromBase64String(ciphertext).AsSpan();\nSpan<byte> salt = encryptedData[..16]; // Fix 1: consider salt (and apply the correct parameters)\nSpan<byte> nonce = encryptedData[16..(16 
+ 12)]; \nSpan<byte> data = encryptedData[(16 + 12)..^16]; \nSpan<byte> tag = encryptedData[^16..];\n\nstring password = \"my passphrase\";\nusing Rfc2898DeriveBytes pbkdf2 = new(Encoding.UTF8.GetBytes(password), salt.ToArray(), 25000, HashAlgorithmName.SHA256); // Fix 2: apply the same iteration count\n\nusing AesGcm aes = new(pbkdf2.GetBytes(32)); // Fix 3: use the same key size (e.g. 32 bytes for AES-256)\nSpan<byte> result = new byte[data.Length];\naes.Decrypt(nonce, data, tag, result);\n\nConsole.WriteLine(Encoding.UTF8.GetString(result)); // The quick brown fox jumps over the lazy dog\n\nThen the ciphertext of the JavaScript code can be successfully decrypted with the C# code.\n" ]
[ 1 ]
[]
[]
[ "aes_gcm", "c#", "cryptography", "subtlecrypto", "window.crypto" ]
stackoverflow_0074656707_aes_gcm_c#_cryptography_subtlecrypto_window.crypto.txt
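As a companion to the C# decryption snippet in the answer above, a minimal sketch of the matching encryption side in C# might look as follows. This is an illustration rather than part of the original answer: it assumes the same salt|IV|ciphertext|tag layout, 25000 PBKDF2-SHA256 iterations, a 32-byte AES-256 key, and .NET 6 or later (for RandomNumberGenerator.GetBytes).

using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

class EncryptSketch
{
    static void Main()
    {
        string password = "my passphrase";
        byte[] plaintext = Encoding.UTF8.GetBytes("The quick brown fox jumps over the lazy dog");

        byte[] salt = RandomNumberGenerator.GetBytes(16);   // random salt per message
        byte[] nonce = RandomNumberGenerator.GetBytes(12);  // random GCM IV per message

        // Same KDF parameters as the JavaScript side
        using Rfc2898DeriveBytes pbkdf2 = new(Encoding.UTF8.GetBytes(password), salt, 25000, HashAlgorithmName.SHA256);
        using AesGcm aes = new(pbkdf2.GetBytes(32));        // 32-byte key => AES-256

        byte[] ciphertext = new byte[plaintext.Length];
        byte[] tag = new byte[16];                          // 128-bit authentication tag
        aes.Encrypt(nonce, plaintext, ciphertext, tag);

        // Concatenate salt | nonce | ciphertext | tag and Base64-encode the result
        byte[] combined = salt.Concat(nonce).Concat(ciphertext).Concat(tag).ToArray();
        Console.WriteLine(Convert.ToBase64String(combined));
    }
}

The resulting Base64 string can then be decrypted by either the JavaScript decrypt function or the C# decryption snippet shown in the answer.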
Q: Invalid JDK configuration found, while importing a project via Gradle I have installed IntelliJ and I need to import a Gradle project. I have build the gradle project using command prompt with the gradlew build command. At the IntelliJ welcome page, I have proceeded with proper instructions, and when I choose "Finish". I get the following error: Invalid Gradle JDK configuration found. Open Gradle Settings JAVA_HOME ennvironment variable not set. When I click on "Open Gradle Settings" it pop up with error of Not found with a path under IntelliJ directory in Program files and searching in jre/jre/bin/....etc. A: Deleting .gradle and .idea will likely solve the problem. So: Close the project Go to the project dir and delete .gradle and .idea Get back and re-open the project using the IDE These two must be generated locally on your PC (Some content of .idea might be version controlled though) and not pulled from a remote or somewhere else (Also they should be in .gitignore). In my case the reason was that these two folders were generated on another computer and I had opened a project with these two folders existing before. A: Just found the solution : Create an empty Gradle project, then go to "Project Structure" and check the path to JDK (it should be valid, if it isn't you can add your own path). Then build this empty project, wait and once done, close IntelliJ. Relaunch it and try to import/open your Gradle project, now it should work. A: You don't need to create a new project to fix this. You can do it from the main window (Configure -> Project Defaults -> Project Structure): Then, on SDKs, set the appropriate JDK home path: If you are on a Mac, click on the button with 3 dots and select the folder /Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home. I've found this here: https://intellij-support.jetbrains.com/hc/en-us/community/posts/115000266650-invalid-gradle-jdk-configuration-found A: Mac OS X Solution: I had the same issue and fixed it by setting the JAVA_HOME environment variable using the command: launchctl setenv JAVA_HOME /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/ Refer to this answer on how to set environment variables in Mac OS X: Setting environment variables in OS X? A: Close the project Go to the project dir and delete .gradle .idea Get back and re-open the project using the IDE A: I recently had the same problem while importing a Gradle project. The trick was the remove the quotes from the JAVA_HOME variable. So instead of "C:\Program Files\Java\jdk1.8.0_66" my path now contains only the plain path C:\Program Files\Java\jdk1.8.0_66. A: To add to the previous responses, if you want to prevent this problem when cloning a repository in Git, you can simply remove .idea/misc.xml from your .gitignore file. This contains information about the project jar. For example: <?xml version="1.0" encoding="UTF-8"?> <project version="4"> <component name="ProjectRootManager" version="2" languageLevel="JDK_10" default="false" project-jdk-name="1.8" project-jdk-type="JavaSDK" /> </project> A: For my case, I just restart the IDE and it works. It automatically download Gradle to suit the project version. A: I had the same problem on the fresh installed Windows OS. I did not have a JDK at all and forgot to check it invalid JDK configuration . By default, you can check the Project configuration. 
If it is empty NO_SDK_ProjectStructure try to download JDK from Oracle web site and configure your project structure A: I have faced same problem for tomcat 9 with my project based on Gradle. You can easily rectify the problem by configuring the application.properties file with the following code. location - src/main/resources/application.properties server.port = 9090 spring.security.user.name= admin spring.security.user.password= password A: My issue was not addressed by the above solution, instead root cause was that I've imported settings from my old system and internal Intellij configuration was invalid because first jdk that it had in the list in jdk.table.xml pointed to a wrong path. To fix this you should find this file in the intellij config folder and then simply open it with an editor and remove whole block related to the bad jdk version. A: Close the project you are working on and then create another new project and build it and then close it and go back to your old project and it will work. A: Comment this code on gradle.properties, the Issue was gone. #org.gradle.java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home A: I was getting: Invalid JDK: /home/sz97/idea-IC-223.7571.182/jbr/bin/jlink is not a file! Ensure JAVA_HOME or buildSettings.javaHome is set to JDK 15 or newer As mentioned by Mahdi-Malv, delete .gradle & .idea folders from the project directory. Then delete other SDKs from File > Project Structure > Platform Settings > SDKs keeping the only required one. Finally change the SDK version from Project Structure > Project Settings > Project. This may solve the problem.
Invalid JDK configuration found, while importing a project via Gradle
I have installed IntelliJ and I need to import a Gradle project. I have built the Gradle project from the command prompt with the gradlew build command. At the IntelliJ welcome page, I followed the proper instructions, and when I choose "Finish" I get the following error: Invalid Gradle JDK configuration found. Open Gradle Settings JAVA_HOME environment variable not set. When I click on "Open Gradle Settings" it pops up with a "Not found" error pointing to a path under the IntelliJ directory in Program Files, searching in jre/jre/bin/....etc.
[ "Deleting .gradle and .idea will likely solve the problem.\nSo:\n\nClose the project\nGo to the project dir and delete .gradle and .idea\nGet back and re-open the project using the IDE\n\n\nThese two must be generated locally on your PC (Some content of .idea might be version controlled though) and not pulled from a remote or somewhere else (Also they should be in .gitignore).\nIn my case the reason was that these two folders were generated on another computer and I had opened a project with these two folders existing before.\n", "Just found the solution :\n\nCreate an empty Gradle project, then go to \"Project Structure\" and check the path to JDK (it should be valid, if it isn't you can add your own path).\nThen build this empty project, wait and once done, close IntelliJ.\nRelaunch it and try to import/open your Gradle project, now it should work.\n\n", "You don't need to create a new project to fix this. You can do it from the main window (Configure -> Project Defaults -> Project Structure):\n\nThen, on SDKs, set the appropriate JDK home path:\n\nIf you are on a Mac, click on the button with 3 dots and select the folder /Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home.\nI've found this here:\nhttps://intellij-support.jetbrains.com/hc/en-us/community/posts/115000266650-invalid-gradle-jdk-configuration-found\n", "Mac OS X Solution:\nI had the same issue and fixed it by setting the JAVA_HOME environment variable using the command:\nlaunchctl setenv JAVA_HOME /Library/Java/JavaVirtualMachines/jdk1.8.0_45.jdk/Contents/Home/\n\nRefer to this answer on how to set environment variables in Mac OS X:\nSetting environment variables in OS X?\n", "Close the project\nGo to the project dir and delete\n\n.gradle\n.idea\n\nGet back and re-open the project using the IDE\n", "I recently had the same problem while importing a Gradle project. The trick was the remove the quotes from the JAVA_HOME variable. So instead of \"C:\\Program Files\\Java\\jdk1.8.0_66\" my path now contains only the plain path C:\\Program Files\\Java\\jdk1.8.0_66.\n", "To add to the previous responses, if you want to prevent this problem when cloning a repository in Git, you can simply remove .idea/misc.xml from your .gitignore file. This contains information about the project jar. For example:\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<project version=\"4\">\n <component name=\"ProjectRootManager\" version=\"2\" languageLevel=\"JDK_10\" default=\"false\" project-jdk-name=\"1.8\" project-jdk-type=\"JavaSDK\" />\n</project>\n\n", "For my case, I just restart the IDE and it works. It automatically download Gradle to suit the project version.\n", "I had the same problem on the fresh installed Windows OS. \nI did not have a JDK at all and forgot to check it invalid JDK configuration .\nBy default, you can check the Project configuration. 
If it is empty NO_SDK_ProjectStructure try to download JDK from Oracle web site and configure your project structure\n", "I have faced same problem for tomcat 9 with my project based on Gradle.\nYou can easily rectify the problem by configuring the application.properties file with the following code.\nlocation - src/main/resources/application.properties\nserver.port = 9090\n\nspring.security.user.name= admin\n\nspring.security.user.password= password\n\n", "My issue was not addressed by the above solution, instead root cause was that I've imported settings from my old system and internal Intellij configuration was invalid because first jdk that it had in the list in jdk.table.xml pointed to a wrong path.\nTo fix this you should find this file in the intellij config folder and then simply open it with an editor and remove whole block related to the bad jdk version.\n", "Close the project you are working on and then create another new project and build it and then close it and go back to your old project and it will work.\n", "Comment this code on gradle.properties, the Issue was gone.\n#org.gradle.java.home=/Library/Java/JavaVirtualMachines/jdk1.8.0_141.jdk/Contents/Home\n", "I was getting:\nInvalid JDK: /home/sz97/idea-IC-223.7571.182/jbr/bin/jlink is not a file! \nEnsure JAVA_HOME or buildSettings.javaHome is set to JDK 15 or newer\n\n\nAs mentioned by Mahdi-Malv, delete .gradle & .idea folders from the project directory.\nThen delete other SDKs from File > Project Structure > Platform Settings > SDKs keeping the only required one.\nFinally change the SDK version from Project Structure > Project Settings > Project.\n\nThis may solve the problem.\n" ]
[ 138, 76, 38, 19, 9, 3, 3, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "gradle", "intellij_idea" ]
stackoverflow_0032654016_gradle_intellij_idea.txt
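As a small illustration of the JAVA_HOME fixes suggested above (the JDK path below is an example; substitute your actual install location), the variable can be set persistently from a Windows command prompt, or Gradle can be pointed at a JDK per project:

REM Set JAVA_HOME for the current user; setx stores the value without the surrounding quotes
setx JAVA_HOME "C:\Program Files\Java\jdk1.8.0_66"

REM Verify in a *new* command prompt (setx does not affect the current session):
echo %JAVA_HOME%

Alternatively, in the project's gradle.properties (forward slashes work on Windows):

org.gradle.java.home=C:/Program Files/Java/jdk1.8.0_66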
Q: Map function to show the data received from fetch function is not working I am a beginner in writing React applications. Please help me where I have gone wrong in writing the code. This is the API- https://api.coindesk.com/v1/bpi/currentprice.json. I am not able to iterate over the json format that I have received from fetch function. Below is the code. //import logo from './logo.svg'; import './App.css'; import {useEffect, useState} from 'react'; function App() { const[bitData, setbitData]=useState([]); useEffect(()=>{ fetch("https://api.coindesk.com/v1/bpi/currentprice.json",{ method:'GET' }).then(result=>result.json()) .then(result=>setbitData(result)) },[]) return ( <div className="App"> { bitData && <table className="table"> <thead> <tr> <th scope="col">Code</th> <th scope="col">Symbol</th> <th scope="col">Rate</th> <th scope="col">Description</th> <th scope="col">Rate_float</th> </tr> </thead> <tbody> { bitData.map(draw=> <tr> <th scope="row">{draw.code}</th> <td>{draw.symbol}</td> <td>{draw.rate}</td> <td>{draw.description}</td> </tr> )} </tbody> </table> } </div> ); } export default App; This is the error: Uncaught TypeError: bitData.map is not a function at App (App.js:28:1) at renderWithHooks (react-dom.development.js:16305:1) at updateFunctionComponent (react-dom.development.js:19588:1) at beginWork (react-dom.development.js:21601:1) at beginWork$1 (react-dom.development.js:27426:1) at performUnitOfWork (react-dom.development.js:26557:1) at workLoopSync (react-dom.development.js:26466:1) at renderRootSync (react-dom.development.js:26434:1) at recoverFromConcurrentError (react-dom.development.js:25850:1) at performConcurrentWorkOnRoot (react-dom.development.js:25750:1) App @ App.js:28 renderWithHooks @ react-dom.development.js:16305 updateFunctionComponent @ react-dom.development.js:19588 beginWork @ react-dom.development.js:21601 beginWork$1 @ react-dom.development.js:27426 performUnitOfWork @ react-dom.development.js:26557 workLoopSync @ react-dom.development.js:26466 renderRootSync @ react-dom.development.js:26434 recoverFromConcurrentError @ react-dom.development.js:25850 performConcurrentWorkOnRoot @ react-dom.development.js:25750 workLoop @ scheduler.development.js:266 flushWork @ scheduler.development.js:239 performWorkUntilDeadline @ scheduler.development.js:533 A: The issue is bitData is a JSON object. you can use Object.values() to get the actual values in it. 
Try like below: const App = () => { const [bitData, setbitData] = React.useState(null); React.useEffect(() => { fetch("https://api.coindesk.com/v1/bpi/currentprice.json", { method: "GET" }) .then((result) => result.json()) .then((result) => setbitData(result)); }, []); return ( <div className="App"> {bitData && ( <table className="table"> <thead> <tr> <th scope="col">Code</th> <th scope="col">Symbol</th> <th scope="col">Rate</th> <th scope="col">Description</th> <th scope="col">Rate_float</th> </tr> </thead> <tbody> {bitData && Object.values(bitData.bpi).map( ({ code, symbol, rate, description, rate_float }) => ( <tr key={code}> <th scope="row">{code}</th> <td>{symbol}</td> <td>{rate}</td> <td>{description}</td> <td>{rate_float}</td> </tr> ) )} </tbody> </table> )} </div> ); } ReactDOM.render(<App />, document.querySelector('.react')); table, td, th { border: 1px solid; padding: 5px; } table { border-collapse: collapse; } <script crossorigin src="https://unpkg.com/react@16/umd/react.development.js"></script> <script crossorigin src="https://unpkg.com/react-dom@16/umd/react-dom.development.js"></script> <div class='react'></div> Explanation Object.values(bitData.bpi) gives the list of values of the JSON input. And in your case value is again an object with the format like: { code: "", symbol: "", rate: "", description: "", rate_float: "" } So then we can destructure these JSON properties in the input of map function. A: The API link you are using returns a JSON response (JavaScript Object Notations), but the .map() function is used for arrays. From your code and the JSON provided by the API I believe you are trying to reach the elements under the bpi. I edited the last .then method in your fetch request so you can get an array in bitData instead of an object: fetch("https://api.coindesk.com/v1/bpi/currentprice.json",{ method:'GET' }).then(result=>result.json()) .then(result=>setbitData(Object.values(result.bpi)))
Map function to show the data received from fetch function is not working
I am a beginner in writing React applications. Please help me where I have gone wrong in writing the code. This is the API- https://api.coindesk.com/v1/bpi/currentprice.json. I am not able to iterate over the json format that I have received from fetch function. Below is the code. //import logo from './logo.svg'; import './App.css'; import {useEffect, useState} from 'react'; function App() { const[bitData, setbitData]=useState([]); useEffect(()=>{ fetch("https://api.coindesk.com/v1/bpi/currentprice.json",{ method:'GET' }).then(result=>result.json()) .then(result=>setbitData(result)) },[]) return ( <div className="App"> { bitData && <table className="table"> <thead> <tr> <th scope="col">Code</th> <th scope="col">Symbol</th> <th scope="col">Rate</th> <th scope="col">Description</th> <th scope="col">Rate_float</th> </tr> </thead> <tbody> { bitData.map(draw=> <tr> <th scope="row">{draw.code}</th> <td>{draw.symbol}</td> <td>{draw.rate}</td> <td>{draw.description}</td> </tr> )} </tbody> </table> } </div> ); } export default App; This is the error: Uncaught TypeError: bitData.map is not a function at App (App.js:28:1) at renderWithHooks (react-dom.development.js:16305:1) at updateFunctionComponent (react-dom.development.js:19588:1) at beginWork (react-dom.development.js:21601:1) at beginWork$1 (react-dom.development.js:27426:1) at performUnitOfWork (react-dom.development.js:26557:1) at workLoopSync (react-dom.development.js:26466:1) at renderRootSync (react-dom.development.js:26434:1) at recoverFromConcurrentError (react-dom.development.js:25850:1) at performConcurrentWorkOnRoot (react-dom.development.js:25750:1) App @ App.js:28 renderWithHooks @ react-dom.development.js:16305 updateFunctionComponent @ react-dom.development.js:19588 beginWork @ react-dom.development.js:21601 beginWork$1 @ react-dom.development.js:27426 performUnitOfWork @ react-dom.development.js:26557 workLoopSync @ react-dom.development.js:26466 renderRootSync @ react-dom.development.js:26434 recoverFromConcurrentError @ react-dom.development.js:25850 performConcurrentWorkOnRoot @ react-dom.development.js:25750 workLoop @ scheduler.development.js:266 flushWork @ scheduler.development.js:239 performWorkUntilDeadline @ scheduler.development.js:533
[ "The issue is bitData is a JSON object. you can use Object.values() to get the actual values in it.\nTry like below:\n\n\nconst App = () => {\n const [bitData, setbitData] = React.useState(null);\n\n React.useEffect(() => {\n fetch(\"https://api.coindesk.com/v1/bpi/currentprice.json\", {\n method: \"GET\"\n })\n .then((result) => result.json())\n .then((result) => setbitData(result));\n }, []);\n\n return (\n <div className=\"App\">\n {bitData && (\n <table className=\"table\">\n <thead>\n <tr>\n <th scope=\"col\">Code</th>\n <th scope=\"col\">Symbol</th>\n <th scope=\"col\">Rate</th>\n <th scope=\"col\">Description</th>\n <th scope=\"col\">Rate_float</th>\n </tr>\n </thead>\n <tbody>\n {bitData &&\n Object.values(bitData.bpi).map(\n ({ code, symbol, rate, description, rate_float }) => (\n <tr key={code}>\n <th scope=\"row\">{code}</th>\n <td>{symbol}</td>\n <td>{rate}</td>\n <td>{description}</td>\n <td>{rate_float}</td>\n </tr>\n )\n )}\n </tbody>\n </table>\n )}\n </div>\n );\n}\n\nReactDOM.render(<App />, document.querySelector('.react'));\ntable,\ntd,\nth {\n border: 1px solid;\n padding: 5px;\n}\n\ntable {\n border-collapse: collapse;\n}\n<script crossorigin src=\"https://unpkg.com/react@16/umd/react.development.js\"></script>\n<script crossorigin src=\"https://unpkg.com/react-dom@16/umd/react-dom.development.js\"></script>\n<div class='react'></div>\n\n\n\nExplanation\nObject.values(bitData.bpi) gives the list of values of the JSON input. And in your case value is again an object with the format like:\n{ code: \"\", symbol: \"\", rate: \"\", description: \"\", rate_float: \"\" }\n\nSo then we can destructure these JSON properties in the input of map function.\n", "The API link you are using returns a JSON response (JavaScript Object Notations), but the .map() function is used for arrays.\nFrom your code and the JSON provided by the API I believe you are trying to reach the elements under the bpi.\nI edited the last .then method in your fetch request so you can get an array in bitData instead of an object:\nfetch(\"https://api.coindesk.com/v1/bpi/currentprice.json\",{\n method:'GET'\n }).then(result=>result.json())\n .then(result=>setbitData(Object.values(result.bpi))) \n\n" ]
[ 1, 1 ]
[]
[]
[ "api", "arrays", "components", "jsx", "reactjs" ]
stackoverflow_0074658130_api_arrays_components_jsx_reactjs.txt
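For reference, the CoinDesk endpoint returns a single JSON object rather than an array, shaped roughly like the abbreviated sketch below (the numbers are illustrative). This is why bitData.map is not a function on the raw response, and why both answers reach into bpi with Object.values() first:

{
  "time": { "updated": "..." },
  "disclaimer": "...",
  "chartName": "Bitcoin",
  "bpi": {
    "USD": { "code": "USD", "symbol": "&#36;", "rate": "17,528.01", "description": "United States Dollar", "rate_float": 17528.01 },
    "GBP": { "code": "GBP", "symbol": "&pound;", "rate": "14,609.32", "description": "British Pound Sterling", "rate_float": 14609.32 },
    "EUR": { "code": "EUR", "symbol": "&euro;", "rate": "16,851.01", "description": "Euro", "rate_float": 16851.01 }
  }
}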
Q: Couldn't find ffmpeg or avconv - Python I'm working on a captcha solver and I need to use ffmpeg, though nothing works. Windows 10 user. Warning when running the code for the first time: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning) Then, when I tried running the script anyway and it required ffprobe, I got the following error: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:198: RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning) Traceback (most recent call last): File "D:\Scripts\captcha\main.py", line 164, in <module> main() File "D:\Scripts\captcha\main.py", line 155, in main captchaSolver() File "D:\Scripts\captcha\main.py", line 106, in captchaSolver sound = pydub.AudioSegment.from_mp3( File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 796, in from_mp3 return cls.from_file(file, 'mp3', parameters=parameters) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 728, in from_file info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py", line 274, in mediainfo_json res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE) File "C:\Program Files\Python310\lib\subprocess.py", line 966, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Program Files\Python310\lib\subprocess.py", line 1435, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified I tried downloading it normally, editing environment variables, pasting them in the same folder as the script, installing with pip, tested ffmpeg manually to see if it works and indeed it converted a mkv to mp4, however my script has no intention of running A: As you can see by the error message, the issue is with ffprobe and not ffmpeg. Make sure that both ffprobe.exe and ffmpeg.exe are in the executable path. One option is placing ffprobe.exe and ffmpeg.exe in the same folder as the Python script (D:\Scripts\captcha\ in your case). Other option is adding the folder of ffprobe.exe and ffmpeg.exe to Windows system path. (Placing the EXE files in a folder that is already in the system path may also work). 
Debugging Pydub source code (pydub-0.25.1): The code fails at the following call: hp, ht, pid, tid = _winapi.CreateProcess(executable, args, # no special security None, None, int(not close_fds), creationflags, env, os.fspath(cwd) if cwd is not None else None, startupinfo) Where args = 'ffprobe -of json -v info -show_format -show_streams test.mp3' We are getting there from: info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) From: prober = get_prober_name() Here is the source code of the get_prober_name method: def get_prober_name(): """ Return probe application, either avconv or ffmpeg """ if which("avprobe"): return "avprobe" elif which("ffprobe"): return "ffprobe" else: # should raise exception warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning) return "ffprobe" The which method checks whether the command is in the execution path (returns None if it is not found). As you can see from the Pydub sources, ffprobe.exe should be in the execution path. Note For setting the FFmpeg path we may also apply something like: import pydub pydub.AudioSegment.converter = 'c:\\FFmpeg\\bin\\ffmpeg.exe' sound = pydub.AudioSegment.from_mp3("test.mp3") But there is no such option for FFprobe. A: If you haven't installed ffmpeg/ffprobe as @Rotem answered, you can use my ffmpeg-downloader package: pip install ffmpeg-downloader ffdl install --add-path The --add-path option adds the installed FFmpeg folder to the user's system path. Re-open the Python window and both ffmpeg and ffprobe will be available to your program.
Couldn't find ffmpeg or avconv - Python
I'm working on a captcha solver and I need to use ffmpeg, though nothing works. Windows 10 user. Warning when running the code for the first time: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:170: RuntimeWarning: Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work warn("Couldn't find ffmpeg or avconv - defaulting to ffmpeg, but may not work", RuntimeWarning) Then, when I tried running the script anyway and it required ffprobe, I got the following error: C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py:198: RuntimeWarning: Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work warn("Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work", RuntimeWarning) Traceback (most recent call last): File "D:\Scripts\captcha\main.py", line 164, in <module> main() File "D:\Scripts\captcha\main.py", line 155, in main captchaSolver() File "D:\Scripts\captcha\main.py", line 106, in captchaSolver sound = pydub.AudioSegment.from_mp3( File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 796, in from_mp3 return cls.from_file(file, 'mp3', parameters=parameters) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\audio_segment.py", line 728, in from_file info = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit) File "C:\Users\user\AppData\Roaming\Python\Python310\site-packages\pydub\utils.py", line 274, in mediainfo_json res = Popen(command, stdin=stdin_parameter, stdout=PIPE, stderr=PIPE) File "C:\Program Files\Python310\lib\subprocess.py", line 966, in __init__ self._execute_child(args, executable, preexec_fn, close_fds, File "C:\Program Files\Python310\lib\subprocess.py", line 1435, in _execute_child hp, ht, pid, tid = _winapi.CreateProcess(executable, args, FileNotFoundError: [WinError 2] The system cannot find the file specified I tried downloading it normally, editing environment variables, pasting them in the same folder as the script, installing with pip, tested ffmpeg manually to see if it works and indeed it converted a mkv to mp4, however my script has no intention of running
[ "As you can see by the error message, the issue is with ffprobe and not ffmpeg.\nMake sure that both ffprobe.exe and ffmpeg.exe are in the executable path.\n\nOne option is placing ffprobe.exe and ffmpeg.exe in the same folder as the Python script (D:\\Scripts\\captcha\\ in your case).\nOther option is adding the folder of ffprobe.exe and ffmpeg.exe to Windows system path.\n(Placing the EXE files in a folder that is already in the system path may also work).\n\n\nDebugging Pydub source code (pydub-0.25.1):\nThe code fails in the following code:\nhp, ht, pid, tid = _winapi.CreateProcess(executable, args,\n # no special security\n None, None,\n int(not close_fds),\n creationflags,\n env,\n os.fspath(cwd) if cwd is not None else None,\n startupinfo)\n\nWhere args = 'ffprobe -of json -v info -show_format -show_streams test.mp3'\nWe are getting there from:\ninfo = mediainfo_json(orig_file, read_ahead_limit=read_ahead_limit)\nFrom:\nprober = get_prober_name()\nHere is the source code of get_prober_name method:\ndef get_prober_name():\n \"\"\"\n Return probe application, either avconv or ffmpeg\n \"\"\"\n if which(\"avprobe\"):\n return \"avprobe\"\n elif which(\"ffprobe\"):\n return \"ffprobe\"\n else:\n # should raise exception\n warn(\"Couldn't find ffprobe or avprobe - defaulting to ffprobe, but may not work\", RuntimeWarning)\n return \"ffprobe\"\n\nThe which method looks for the command is in the execution path (returns None if it is not found).\nAs you can see from Pydub sources, ffprobe.exe should be in the execution path.\n\n\nNote\nFor setting FFmpeg path we may also apply something like:\n import pydub\n pydub.AudioSegment.converter = 'c:\\\\FFmpeg\\\\bin\\\\ffmpeg.exe'\n sound = pydub.AudioSegment.from_mp3(\"test.mp3\")\n\n\n\nBut there is no such option for FFprobe.\n", "If you haven't installed the ffmpeg/ffprobe as @Rotem answered, you can use my ffmpeg-downloader package:\npip install ffmpeg-downloader\nffdl install --add-path\n\nThe --add-path option adds the installed FFmpeg folder to the user's system path. Re-open the Python window and both ffmpeg and ffprobe will be available to your program.\n" ]
[ 0, 0 ]
[]
[]
[ "ffmpeg", "ffprobe", "pydub", "python", "selenium" ]
stackoverflow_0074651215_ffmpeg_ffprobe_pydub_python_selenium.txt
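A minimal sketch consistent with the first answer: make both ffmpeg.exe and ffprobe.exe discoverable by prepending their folder to the process PATH before pydub is imported (the folder below is an assumption; use your actual install location):

import os

FFMPEG_DIR = r"C:\FFmpeg\bin"  # assumed folder containing ffmpeg.exe and ffprobe.exe

# Prepend so pydub's which() lookup can find both executables
os.environ["PATH"] = FFMPEG_DIR + os.pathsep + os.environ.get("PATH", "")

import pydub  # import only after PATH is set, since pydub resolves the tools with which()

sound = pydub.AudioSegment.from_mp3("test.mp3")
print(len(sound))  # AudioSegment length in milliseconds

Setting PATH this way only affects the current Python process, so it does not require changing the system environment variables.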
Q: running github actions on push of a tag not executing I'm new to the YAML language and just set up a GitHub Actions workflow which works well manually; however, when I try to make it run on pushing a specific tag it doesn't work: I'm trying to have the YAML execute after doing git commit -am "my commit" git tag cicd11 git push and my .yml starts with this: on: push: tags: - cicd** I've read many questions on SO, but no one seems to be doing push on tags, and this is supposed to be possible according to the manual. Thank you. A: You need to fully qualify the tag. Per docs: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpushbranchestagsbranches-ignoretags-ignore A tag named v2 (refs/tags/v2) So you can try **cicd** or refs/tags/cicd** A: After rereading the doc carefully as suggested by myz540, I realized that my YAML was correct; what was needed was a --tags flag during the push, i.e.: git commit -am "my commit" git tag cicd11 git push --tags That's it; the YAML stays the same: on: push: tags: - cicd**
running github actions on push of a tag not executing
I'm new to the YAML language and just set up a GitHub Actions workflow which works well manually; however, when I try to make it run on pushing a specific tag it doesn't work: I'm trying to have the YAML execute after doing git commit -am "my commit" git tag cicd11 git push and my .yml starts with this: on: push: tags: - cicd** I've read many questions on SO, but no one seems to be doing push on tags, and this is supposed to be possible according to the manual. Thank you.
[ "You need to fully qualify the tag. Per docs: https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#onpushbranchestagsbranches-ignoretags-ignore\nA tag named v2 (refs/tags/v2)\nSo you can try **cicd** or refs/tags/cicd**\n", "after rereading the doc carefully as suggested by myz540, I realized that my yaml was correct what was needed is a --tags during push i.e.:\ngit commit -am \"my commit\"\ngit tag cicd11\ngit push --tags\n\nThat's it, the yaml stays the same:\non:\n push:\n tags:\n - cicd**\n\n" ]
[ 1, 0 ]
[]
[]
[ "github_actions", "yaml" ]
stackoverflow_0074657456_github_actions_yaml.txt
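For context, a complete minimal workflow using this trigger could look like the sketch below. The job body is illustrative and not from the original question; quoting the glob pattern avoids any YAML parsing ambiguity:

name: tag-ci
on:
  push:
    tags:
      - 'cicd**'

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Show the tag that triggered the run
        run: echo "Triggered by tag $GITHUB_REF_NAME"

With this in place, git push --tags (or git push origin cicd11) on a tag matching cicd** starts the workflow.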
Q: One producer, multiple consumers. How to use condition variables with unbounded buffer? Despite the number of similar questions on Stackoverflow, I can't come up with a solution for the following Producer-Consumer problem: My program has three threads: One writer thread that reads from a file, saves data into a sensor_data_t struct, and writes it into a dynamic pointer-based buffer using sbuffer_insert(buffer, &sensor_data). Once this thread finishes reading it sends an end-of-stream data struct represented by data->id == 0. Two reader threads that remove data from the buffer head (FIFO-style), and store it into a temporary data struct using sbuffer_remove(buffer, &data) and then print it to the cmd line for testing purposes. I think I have to avoid at the least: My reader threads to try to consume/remove from the buffer while it is empty. My reader threads to consume/remove from the buffer at the same time. On the other hand, I don't think my writer thread in sbuffer_insert() needs to worry if the readers are changing the head because it only appends to the tail. Is this reasoning correct or am I missing something? Here's what I've done so far: My main function: sbuffer_t *buffer; void *writer(void *fp); void *reader(void *fp); int main() { // Initialize the buffer sbuffer_init(&buffer); // Open sensor_data file FILE *sensor_data_fp; sensor_data_fp = fopen("sensor_data", "rb"); // Start thread for reading sensor_data file adding elements to the sbuffer pthread_t writer_thread; pthread_create(&writer_thread, NULL, &writer, sensor_data_fp); // Open sensor_data_out file FILE *sensor_data_out_fp; sensor_data_out_fp = fopen("sensor_data_out", "w"); // Start thread 1 and 2 for writing sensor_data_out file pthread_t reader_thread1; pthread_t reader_thread2; pthread_create(&reader_thread1, NULL, &reader, sensor_data_out_fp); pthread_create(&reader_thread2, NULL, &reader, sensor_data_out_fp); // Wait for threads to finish and join them pthread_join(reader_thread1, NULL); pthread_join(reader_thread2, NULL); pthread_join(writer_thread, NULL); // Close sensor_data file fclose(sensor_data_fp); // Close sensor_data_out file fclose(sensor_data_out_fp); // free buffer sbuffer_free(&buffer); return 0; } My reader and writer threads: typedef uint16_t sensor_id_t; typedef double sensor_value_t; typedef time_t sensor_ts_t; // UTC timestamp as returned by time() - notice that the size of time_t is different on 32/64 bit machine typedef struct { sensor_id_t id; sensor_value_t value; sensor_ts_t ts; } sensor_data_t; void *writer(void *fp) { // cast fp to FILE FILE *sensor_data_fp = (FILE *)fp; // make char buffers of size sensor_id_t, sensor_value_t and sensor_ts_t char sensor_id_buffer[sizeof(sensor_id_t)]; char sensor_value_buffer[sizeof(sensor_value_t)]; char sensor_ts_buffer[sizeof(sensor_ts_t)]; // parse sensor_data file into sensor_id_buffer, sensor_value_buffer and sensor_ts_buffer while(fread(sensor_id_buffer, sizeof(sensor_id_t), 1, sensor_data_fp) == 1 && fread(sensor_value_buffer, sizeof(sensor_value_t), 1, sensor_data_fp) == 1 && fread(sensor_ts_buffer, sizeof(sensor_ts_t), 1, sensor_data_fp)) { // create sensor_data_t sensor_data_t sensor_data; // copy sensor_id_buffer to sensor_data.id memcpy(&sensor_data.id, sensor_id_buffer, sizeof(sensor_id_t)); // copy sensor_value_buffer to sensor_data.value memcpy(&sensor_data.value, sensor_value_buffer, sizeof(sensor_value_t)); // copy sensor_ts_buffer to sensor_data.ts memcpy(&sensor_data.ts, sensor_ts_buffer, sizeof(sensor_ts_t)); // print sensor_data 
for testing // printf("sensor_data.id: %d, sensor_data.value: %f, sensor_data.ts: %ld\n", sensor_data.id, sensor_data.value, sensor_data.ts); // insert sensor_data into buffer sbuffer_insert(buffer, &sensor_data); } // Add dummy data to buffer to signal end of file sensor_data_t sensor_data; sensor_data.id = 0; sensor_data.value = 0; sensor_data.ts = 0; sbuffer_insert(buffer, &sensor_data); return NULL; } void *reader(void *fp) { // cast fp to FILE //FILE *sensor_data_out_fp = (FILE *)fp; // Init data as sensor_data_t sensor_data_t data; do{ // read data from buffer if (sbuffer_remove(buffer, &data) == 0) { // SBUFFER_SUCCESS 0 // write data to sensor_data_out file // fwrite(&data, sizeof(sensor_data_t), 1, sensor_data_out_fp); // print data for testing printf("data.id: %d, data.value: %f, data.ts: %ld \n", data.id, data.value, data.ts); } } while(data.id != 0); // free allocated memory // free(fp); return NULL; } Global variables and buffer initialization: typedef struct sbuffer_node { struct sbuffer_node *next; sensor_data_t data; } sbuffer_node_t; struct sbuffer { sbuffer_node_t *head; sbuffer_node_t *tail; }; pthread_mutex_t mutex; pthread_cond_t empty, removing; int count = 0; // reader count int sbuffer_init(sbuffer_t **buffer) { *buffer = malloc(sizeof(sbuffer_t)); if (*buffer == NULL) return SBUFFER_FAILURE; (*buffer)->head = NULL; (*buffer)->tail = NULL; // Initialize mutex and condition variables pthread_mutex_init(&mutex, NULL); pthread_cond_init(&empty, NULL); pthread_cond_init(&removing, NULL); return SBUFFER_SUCCESS; } sbuffer_remove (Consumer) int sbuffer_remove(sbuffer_t *buffer, sensor_data_t *data) { sbuffer_node_t *dummy; if (buffer == NULL) return SBUFFER_FAILURE; // while the count is 0, wait pthread_mutex_lock(&mutex); while (count > 0) { pthread_cond_wait(&removing, &mutex); } pthread_mutex_unlock(&mutex); pthread_mutex_lock(&mutex); while (buffer->head == NULL){ pthread_cond_wait(&empty, &mutex); // Wait until buffer is not empty if (data->id == 0){ // end-of-stream pthread_mutex_unlock(&mutex); return SBUFFER_NO_DATA; } } count++; *data = buffer->head->data; dummy = buffer->head; if (buffer->head == buffer->tail) // buffer has only one node { buffer->head = buffer->tail = NULL; } else // buffer has many nodes empty { buffer->head = buffer->head->next; } free(dummy); count--; pthread_cond_signal(&removing); // Signal that data is removed pthread_mutex_unlock(&mutex); return SBUFFER_SUCCESS; } sbuffer_insert (Producer) int sbuffer_insert(sbuffer_t *buffer, sensor_data_t *data) { sbuffer_node_t *dummy; if (buffer == NULL) return SBUFFER_FAILURE; dummy = malloc(sizeof(sbuffer_node_t)); if (dummy == NULL) return SBUFFER_FAILURE; dummy->data = *data; dummy->next = NULL; if (buffer->tail == NULL) // buffer empty (buffer->head should also be NULL { pthread_mutex_lock(&mutex); buffer->head = buffer->tail = dummy; pthread_cond_signal(&empty); // Signal that buffer is not empty pthread_mutex_unlock(&mutex); } else // buffer not empty { buffer->tail->next = dummy; buffer->tail = buffer->tail->next; } return SBUFFER_SUCCESS; } Currently, the code has very unstable behavior. Sometimes it runs and prints everything, sometimes it doesn't print anything and gets stuck in a loop, sometimes it prints everything but the last value comes after the end-of-stream code and it doesn't terminate. I would really appreciate a solution that explains what I'm doing wrong or a comment that redirects me to a duplicate of my question. 
A: Not a complete answer here, but I see this in your sbuffer_remove function: // while the count is 0, wait pthread_mutex_lock(&mutex); while (count > 0) { pthread_cond_wait(&removing, &mutex); } pthread_mutex_unlock(&mutex); That looks suspicious to me. What is the purpose of waiting for the count to become zero? Your code waits for the count to become zero, but then it does nothing else before it unlocks the mutex. I don't know what count represents, but if the other reader thread is concurrently manipulating it, then there is no guarantee that it will still be zero once you've unlocked the mutex. But, maybe that hasn't caused a problem for you because... ...This also looks suspicious: pthread_mutex_unlock(&mutex); pthread_mutex_lock(&mutex); Why do you unlock the mutex and immediately lock it again? Are you thinking that will afford the other consumer a chance to lock it? Technically speaking, it does that, but in practical terms, the chance it offers is known as, "a snowball's chance in Hell." If thread A is waiting for a mutex that is locked by thread B, and thread B unlocks and then immediately tries to re-lock, then in most languages/libraries/operating systems, thread B will almost always succeed while thread A goes back to try again. Mutexes work best when they are rarely contested. If you have a program in which threads spend any significant amount of time waiting for mutexes, then you probably are not getting much benefit from using multiple threads. A: I think I have to avoid at the least: My reader threads to try to consume/remove from the buffer while it is empty. My reader threads to consume/remove from the buffer at the same time. Yes, you must avoid those. And more. On the other hand, I don't think my writer thread in sbuffer_insert() needs to worry if the readers are changing the head because it only appends to the tail. Is this reasoning correct or am I missing something? You are missing at least that when the buffer contains fewer than two nodes, there is no distinction between the head node and the tail node. This manifests at the code level at least in the fact that your sbuffer_insert() and sbuffer_remove() functions both access both buffer->head and buffer->tail. From the perspective of synchronization requirements, it is this lower-level view that matters. Insertion and removal modifies the node objects themselves, not just the overall buffer object. Synchronization is not just, nor even primarily, about avoiding threads directly interfering with each other. It is even more about the consistency of different threads' views of memory. You need appropriate synchronization to ensure that one thread's writes to memory are (ever) observed by other threads, and to establish ordering relationships among operations on memory by different threads. Currently, the code has very unstable behavior. Sometimes it runs and prints everything, sometimes it doesn't print anything and gets stuck in a loop, sometimes it prints everything but the last value comes after the end-of-stream code and it doesn't terminate. This is unsurprising, because your program contains data races, and its behavior is therefore undefined. Do ensure that neither the reader nor the writer accesses any member of the buffer object without holding the mutex locked. As the code is presently structured, that will synchronize access not only to the buffer structure itself, but also to the data in the nodes, which presently are involved in their own data races. Now note that here ... 
while (buffer->head == NULL){ pthread_cond_wait(&empty, &mutex); // Wait until buffer is not empty if (data->id == 0){ // end-of-stream pthread_mutex_unlock(&mutex); return SBUFFER_NO_DATA; } } ... you are testing for an end-of-data marker before actually reading an item from the buffer. It looks like that's useless in practice. In prarticular, it does not prevent the end-of-stream item from being removed from the buffer, so only one reader will see it. The other(s) will then end up waiting indefinitely for data that will never arrive. Next, consider this code executed by the readers: // while the count is 0, wait pthread_mutex_lock(&mutex); while (count > 0) { pthread_cond_wait(&removing, &mutex); } pthread_mutex_unlock(&mutex); Note well that the reader unlocks the mutex while count is 0, so there is an opportunity for another reader to reach the while loop and pass through. I'm not sure that two threads both getting past that point at the same time produces a problem in practice, but the point seems to be to avoid that, so do it right: move the count++ from later in the function to right after the while loop, prior to unlocking the mutex. Alternatively, once you've done that, it should be clear(er) that you've effectively hand-rolled a binary semaphore. You could simplify your code by switching to an actual POSIX semaphore for this purpose. Or if you want to continue with a mutex + CV for this, then consider using a different mutex, as the data to be protected for this purpose are disjoint from the buffer and its contents. That would get rid of the weirdness of re-locking the mutex immediately after unlocking it. Or on the third hand, consider whether you need to do any of that at all. How is the (separate) mutex protection of the rest of the body of sbuffer_remove() not sufficient by itself? I propose to you that it is sufficient. After all, you're presently using your hand-rolled semaphore exactly as (another) mutex. The bones of this code seem reasonably good, so I don't think repairs will be too hard. First, add the needed mutex protection in sbuffer_insert(). 
Or really, just expand the scope of the critical section that's already there: int sbuffer_insert(sbuffer_t *buffer, sensor_data_t *data) { sbuffer_node_t *dummy; if (buffer == NULL) return SBUFFER_FAILURE; dummy = malloc(sizeof(sbuffer_node_t)); if (dummy == NULL) return SBUFFER_FAILURE; dummy->data = *data; dummy->next = NULL; pthread_mutex_lock(&mutex); if (buffer->tail == NULL) // buffer empty (buffer->head should also be NULL { assert(buffer->head == NULL); buffer->head = buffer->tail = dummy; pthread_cond_signal(&empty); // Signal that buffer is not empty } else // buffer not empty { buffer->tail->next = dummy; buffer->tail = buffer->tail->next; } pthread_mutex_unlock(&mutex); return SBUFFER_SUCCESS; } Second, simplify and fix sbuffer_remove(): int sbuffer_remove(sbuffer_t *buffer, sensor_data_t *data) { if (buffer == NULL) { return SBUFFER_FAILURE; } pthread_mutex_lock(&mutex); // Wait until the buffer is nonempty while (buffer->head == NULL) { pthread_cond_wait(&empty, &mutex); } // Copy the first item from the buffer *data = buffer->head->data; if (data->id == 0) { // end-of-stream: leave the item in the buffer for other readers to see pthread_mutex_unlock(&mutex); pthread_cond_signal(&empty); // other threads can consume this item return SBUFFER_NO_DATA; } // else remove the item sbuffer_node_t *dummy = buffer->head; buffer->head = buffer->head->next; if (buffer->head == NULL) { // buffer is now empty buffer->tail = NULL; } pthread_mutex_unlock(&mutex); free(dummy); return SBUFFER_SUCCESS; }
One producer, multiple consumers. How to use condition variables with unbounded buffer?
Despite the number of similar questions on Stackoverflow, I can't come up with a solution for the following Producer-Consumer problem: My program has three threads: One writer thread that reads from a file, saves data into a sensor_data_t struct, and writes it into a dynamic pointer-based buffer using sbuffer_insert(buffer, &sensor_data). Once this thread finishes reading it sends an end-of-stream data struct represented by data->id == 0. Two reader threads that remove data from the buffer head (FIFO-style), and store it into a temporary data struct using sbuffer_remove(buffer, &data) and then print it to the cmd line for testing purposes. I think I have to avoid at the least: My reader threads to try to consume/remove from the buffer while it is empty. My reader threads to consume/remove from the buffer at the same time. On the other hand, I don't think my writer thread in sbuffer_insert() needs to worry if the readers are changing the head because it only appends to the tail. Is this reasoning correct or am I missing something? Here's what I've done so far: My main function: sbuffer_t *buffer; void *writer(void *fp); void *reader(void *fp); int main() { // Initialize the buffer sbuffer_init(&buffer); // Open sensor_data file FILE *sensor_data_fp; sensor_data_fp = fopen("sensor_data", "rb"); // Start thread for reading sensor_data file adding elements to the sbuffer pthread_t writer_thread; pthread_create(&writer_thread, NULL, &writer, sensor_data_fp); // Open sensor_data_out file FILE *sensor_data_out_fp; sensor_data_out_fp = fopen("sensor_data_out", "w"); // Start thread 1 and 2 for writing sensor_data_out file pthread_t reader_thread1; pthread_t reader_thread2; pthread_create(&reader_thread1, NULL, &reader, sensor_data_out_fp); pthread_create(&reader_thread2, NULL, &reader, sensor_data_out_fp); // Wait for threads to finish and join them pthread_join(reader_thread1, NULL); pthread_join(reader_thread2, NULL); pthread_join(writer_thread, NULL); // Close sensor_data file fclose(sensor_data_fp); // Close sensor_data_out file fclose(sensor_data_out_fp); // free buffer sbuffer_free(&buffer); return 0; } My reader and writer threads: typedef uint16_t sensor_id_t; typedef double sensor_value_t; typedef time_t sensor_ts_t; // UTC timestamp as returned by time() - notice that the size of time_t is different on 32/64 bit machine typedef struct { sensor_id_t id; sensor_value_t value; sensor_ts_t ts; } sensor_data_t; void *writer(void *fp) { // cast fp to FILE FILE *sensor_data_fp = (FILE *)fp; // make char buffers of size sensor_id_t, sensor_value_t and sensor_ts_t char sensor_id_buffer[sizeof(sensor_id_t)]; char sensor_value_buffer[sizeof(sensor_value_t)]; char sensor_ts_buffer[sizeof(sensor_ts_t)]; // parse sensor_data file into sensor_id_buffer, sensor_value_buffer and sensor_ts_buffer while(fread(sensor_id_buffer, sizeof(sensor_id_t), 1, sensor_data_fp) == 1 && fread(sensor_value_buffer, sizeof(sensor_value_t), 1, sensor_data_fp) == 1 && fread(sensor_ts_buffer, sizeof(sensor_ts_t), 1, sensor_data_fp)) { // create sensor_data_t sensor_data_t sensor_data; // copy sensor_id_buffer to sensor_data.id memcpy(&sensor_data.id, sensor_id_buffer, sizeof(sensor_id_t)); // copy sensor_value_buffer to sensor_data.value memcpy(&sensor_data.value, sensor_value_buffer, sizeof(sensor_value_t)); // copy sensor_ts_buffer to sensor_data.ts memcpy(&sensor_data.ts, sensor_ts_buffer, sizeof(sensor_ts_t)); // print sensor_data for testing // printf("sensor_data.id: %d, sensor_data.value: %f, sensor_data.ts: %ld\n", 
sensor_data.id, sensor_data.value, sensor_data.ts); // insert sensor_data into buffer sbuffer_insert(buffer, &sensor_data); } // Add dummy data to buffer to signal end of file sensor_data_t sensor_data; sensor_data.id = 0; sensor_data.value = 0; sensor_data.ts = 0; sbuffer_insert(buffer, &sensor_data); return NULL; } void *reader(void *fp) { // cast fp to FILE //FILE *sensor_data_out_fp = (FILE *)fp; // Init data as sensor_data_t sensor_data_t data; do{ // read data from buffer if (sbuffer_remove(buffer, &data) == 0) { // SBUFFER_SUCCESS 0 // write data to sensor_data_out file // fwrite(&data, sizeof(sensor_data_t), 1, sensor_data_out_fp); // print data for testing printf("data.id: %d, data.value: %f, data.ts: %ld \n", data.id, data.value, data.ts); } } while(data.id != 0); // free allocated memory // free(fp); return NULL; } Global variables and buffer initialization: typedef struct sbuffer_node { struct sbuffer_node *next; sensor_data_t data; } sbuffer_node_t; struct sbuffer { sbuffer_node_t *head; sbuffer_node_t *tail; }; pthread_mutex_t mutex; pthread_cond_t empty, removing; int count = 0; // reader count int sbuffer_init(sbuffer_t **buffer) { *buffer = malloc(sizeof(sbuffer_t)); if (*buffer == NULL) return SBUFFER_FAILURE; (*buffer)->head = NULL; (*buffer)->tail = NULL; // Initialize mutex and condition variables pthread_mutex_init(&mutex, NULL); pthread_cond_init(&empty, NULL); pthread_cond_init(&removing, NULL); return SBUFFER_SUCCESS; } sbuffer_remove (Consumer) int sbuffer_remove(sbuffer_t *buffer, sensor_data_t *data) { sbuffer_node_t *dummy; if (buffer == NULL) return SBUFFER_FAILURE; // while the count is 0, wait pthread_mutex_lock(&mutex); while (count > 0) { pthread_cond_wait(&removing, &mutex); } pthread_mutex_unlock(&mutex); pthread_mutex_lock(&mutex); while (buffer->head == NULL){ pthread_cond_wait(&empty, &mutex); // Wait until buffer is not empty if (data->id == 0){ // end-of-stream pthread_mutex_unlock(&mutex); return SBUFFER_NO_DATA; } } count++; *data = buffer->head->data; dummy = buffer->head; if (buffer->head == buffer->tail) // buffer has only one node { buffer->head = buffer->tail = NULL; } else // buffer has many nodes empty { buffer->head = buffer->head->next; } free(dummy); count--; pthread_cond_signal(&removing); // Signal that data is removed pthread_mutex_unlock(&mutex); return SBUFFER_SUCCESS; } sbuffer_insert (Producer) int sbuffer_insert(sbuffer_t *buffer, sensor_data_t *data) { sbuffer_node_t *dummy; if (buffer == NULL) return SBUFFER_FAILURE; dummy = malloc(sizeof(sbuffer_node_t)); if (dummy == NULL) return SBUFFER_FAILURE; dummy->data = *data; dummy->next = NULL; if (buffer->tail == NULL) // buffer empty (buffer->head should also be NULL { pthread_mutex_lock(&mutex); buffer->head = buffer->tail = dummy; pthread_cond_signal(&empty); // Signal that buffer is not empty pthread_mutex_unlock(&mutex); } else // buffer not empty { buffer->tail->next = dummy; buffer->tail = buffer->tail->next; } return SBUFFER_SUCCESS; } Currently, the code has very unstable behavior. Sometimes it runs and prints everything, sometimes it doesn't print anything and gets stuck in a loop, sometimes it prints everything but the last value comes after the end-of-stream code and it doesn't terminate. I would really appreciate a solution that explains what I'm doing wrong or a comment that redirects me to a duplicate of my question.
[ "Not a complete answer here, but I see this in your sbuffer_remove function:\n // while the count is 0, wait\n pthread_mutex_lock(&mutex);\n while (count > 0) {\n pthread_cond_wait(&removing, &mutex);\n }\n pthread_mutex_unlock(&mutex);\n\nThat looks suspicious to me. What is the purpose of waiting for the count to become zero? Your code waits for the count to become zero, but then it does nothing else before it unlocks the mutex.\nI don't know what count represents, but if the other reader thread is concurrently manipulating it, then there is no guarantee that it will still be zero once you've unlocked the mutex.\nBut, maybe that hasn't caused a problem for you because...\n\n...This also looks suspicious:\n pthread_mutex_unlock(&mutex);\n\n pthread_mutex_lock(&mutex);\n\nWhy do you unlock the mutex and immediately lock it again? Are you thinking that will afford the other consumer a chance to lock it? Technically speaking, it does that, but in practical terms, the chance it offers is known as, \"a snowball's chance in Hell.\" If thread A is waiting for a mutex that is locked by thread B, and thread B unlocks and then immediately tries to re-lock, then in most languages/libraries/operating systems, thread B will almost always succeed while thread A goes back to try again.\nMutexes work best when they are rarely contested. If you have a program in which threads spend any significant amount of time waiting for mutexes, then you probably are not getting much benefit from using multiple threads.\n", "\nI think I have to avoid at the least:\n\nMy reader threads to try to consume/remove from the buffer while it is empty.\nMy reader threads to consume/remove from the buffer at the same time.\n\n\nYes, you must avoid those. And more.\n\nOn the other hand, I don't think my writer thread in sbuffer_insert()\nneeds to worry if the readers are changing the head because it only\nappends to the tail. Is this reasoning correct or am I missing\nsomething?\n\nYou are missing at least that\n\nwhen the buffer contains fewer than two nodes, there is no distinction between the head node and the tail node. This manifests at the code level at least in the fact that your sbuffer_insert() and sbuffer_remove() functions both access both buffer->head and buffer->tail. From the perspective of synchronization requirements, it is this lower-level view that matters.\n\nInsertion and removal modifies the node objects themselves, not just the overall buffer object.\n\nSynchronization is not just, nor even primarily, about avoiding threads directly interfering with each other. It is even more about the consistency of different threads' views of memory. You need appropriate synchronization to ensure that one thread's writes to memory are (ever) observed by other threads, and to establish ordering relationships among operations on memory by different threads.\n\n\n\nCurrently, the code has very unstable behavior. Sometimes it runs and\nprints everything, sometimes it doesn't print anything and gets stuck\nin a loop, sometimes it prints everything but the last value comes\nafter the end-of-stream code and it doesn't terminate.\n\nThis is unsurprising, because your program contains data races, and its behavior is therefore undefined.\nDo ensure that neither the reader nor the writer accesses any member of the buffer object without holding the mutex locked. 
As the code is presently structured, that will synchronize access not only to the buffer structure itself, but also to the data in the nodes, which presently are involved in their own data races.\n\nNow note that here ...\n\n while (buffer->head == NULL){\n\n pthread_cond_wait(&empty, &mutex); // Wait until buffer is not empty\n\n if (data->id == 0){ // end-of-stream\n pthread_mutex_unlock(&mutex);\n return SBUFFER_NO_DATA;\n }\n }\n\n\n... you are testing for an end-of-data marker before actually reading an item from the buffer. It looks like that's useless in practice. In prarticular, it does not prevent the end-of-stream item from being removed from the buffer, so only one reader will see it. The other(s) will then end up waiting indefinitely for data that will never arrive.\n\nNext, consider this code executed by the readers:\n\n // while the count is 0, wait\n pthread_mutex_lock(&mutex);\n while (count > 0) {\n pthread_cond_wait(&removing, &mutex);\n }\n pthread_mutex_unlock(&mutex);\n\n\nNote well that the reader unlocks the mutex while count is 0, so there is an opportunity for another reader to reach the while loop and pass through. I'm not sure that two threads both getting past that point at the same time produces a problem in practice, but the point seems to be to avoid that, so do it right: move the count++ from later in the function to right after the while loop, prior to unlocking the mutex.\nAlternatively, once you've done that, it should be clear(er) that you've effectively hand-rolled a binary semaphore. You could simplify your code by switching to an actual POSIX semaphore for this purpose. Or if you want to continue with a mutex + CV for this, then consider using a different mutex, as the data to be protected for this purpose are disjoint from the buffer and its contents. That would get rid of the weirdness of re-locking the mutex immediately after unlocking it.\nOr on the third hand, consider whether you need to do any of that at all. How is the (separate) mutex protection of the rest of the body of sbuffer_remove() not sufficient by itself? I propose to you that it is sufficient. After all, you're presently using your hand-rolled semaphore exactly as (another) mutex.\n\nThe bones of this code seem reasonably good, so I don't think repairs will be too hard.\nFirst, add the needed mutex protection in sbuffer_insert(). 
Or really, just expand the scope of the critical section that's already there:\nint sbuffer_insert(sbuffer_t *buffer, sensor_data_t *data) {\n sbuffer_node_t *dummy;\n if (buffer == NULL) return SBUFFER_FAILURE;\n dummy = malloc(sizeof(sbuffer_node_t));\n if (dummy == NULL) return SBUFFER_FAILURE;\n dummy->data = *data;\n dummy->next = NULL;\n\n pthread_mutex_lock(&mutex);\n if (buffer->tail == NULL) // buffer empty (buffer->head should also be NULL\n {\n assert(buffer->head == NULL);\n buffer->head = buffer->tail = dummy;\n pthread_cond_signal(&empty); // Signal that buffer is not empty\n\n } else // buffer not empty\n {\n buffer->tail->next = dummy;\n buffer->tail = buffer->tail->next;\n }\n pthread_mutex_unlock(&mutex);\n\n return SBUFFER_SUCCESS;\n}\n\nSecond, simplify and fix sbuffer_remove():\nint sbuffer_remove(sbuffer_t *buffer, sensor_data_t *data) {\n if (buffer == NULL) {\n return SBUFFER_FAILURE;\n }\n\n pthread_mutex_lock(&mutex);\n\n // Wait until the buffer is nonempty\n while (buffer->head == NULL) {\n pthread_cond_wait(&empty, &mutex);\n }\n\n // Copy the first item from the buffer\n *data = buffer->head->data;\n\n if (data->id == 0) {\n // end-of-stream: leave the item in the buffer for other readers to see\n pthread_mutex_unlock(&mutex);\n pthread_cond_signal(&empty); // other threads can consume this item\n return SBUFFER_NO_DATA;\n } // else remove the item\n\n sbuffer_node_t *dummy = buffer->head;\n\n buffer->head = buffer->head->next;\n if (buffer->head == NULL) {\n // buffer is now empty\n buffer->tail = NULL;\n }\n\n pthread_mutex_unlock(&mutex);\n\n free(dummy);\n\n return SBUFFER_SUCCESS;\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "c", "multithreading", "producer_consumer" ]
stackoverflow_0074656303_c_multithreading_producer_consumer.txt
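For readers following the synchronization discussion above from another language, the same mutex-plus-condition-variable protocol maps directly onto Python's threading module. The sketch below is an illustrative reconstruction, not the poster's code: the class and method names are invented, and a None item stands in for the end-of-stream marker that the C version signals with data->id == 0. Note how the end-of-stream item is left in the queue and the condition re-signaled, so that every consumer gets to observe it.

import threading
from collections import deque

class SharedBuffer:
    """Minimal bounded-buffer sketch: one lock guards all shared state."""
    def __init__(self):
        self._items = deque()
        self._lock = threading.Lock()
        self._not_empty = threading.Condition(self._lock)

    def insert(self, item):
        with self._not_empty:            # acquires the underlying lock
            self._items.append(item)
            self._not_empty.notify()     # wake one waiting consumer

    def remove(self):
        with self._not_empty:
            while not self._items:       # while-loop guards against spurious wakeups
                self._not_empty.wait()
            if self._items[0] is None:   # end-of-stream: leave it for other readers
                self._not_empty.notify()
                return None
            return self._items.popleft()

if __name__ == "__main__":
    buf = SharedBuffer()
    results = []

    def consumer():
        while (item := buf.remove()) is not None:
            results.append(item)

    threads = [threading.Thread(target=consumer) for _ in range(2)]
    for th in threads:
        th.start()
    for i in range(5):
        buf.insert(i)
    buf.insert(None)   # end-of-stream marker, observed by every consumer
    for th in threads:
        th.join()
    print(sorted(results))

The design point from the answer carries over: every access to the deque happens with the lock held, and the wait sits in a while loop so a woken thread re-checks the predicate before proceeding.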
Q: How to store a language in a database I'm working with a couple of volunteers on creating the first online dictionary for our language, Tarifit (an Amazigh language spoken in Northern Morocco). I'm still a CS student, currently learning Python and C#, but I also know HTML/CSS/JS. My question is: what is the best way to store all the words in a database, and how can the people I work with who don't know anything about programming edit the database and add more words, etc.? I'm already using JavaScript to work on the dictionary site, but I could also use Python or any other programming language if it has a better solution for the database. I have been looking at some SQL databases and Redis, but I don't have experience with them, so I don't know if they will be useful to learn for this exact type of project.
How to store a language in a database
I'm working with a couple of volunteers on creating the first online dictionary for our language, Tarifit (an Amazigh language spoken in Northern Morocco). I'm still a CS student, currently learning Python and C#, but I also know HTML/CSS/JS. My question is: what is the best way to store all the words in a database, and how can the people I work with who don't know anything about programming edit the database and add more words, etc.? I'm already using JavaScript to work on the dictionary site, but I could also use Python or any other programming language if it has a better solution for the database. I have been looking at some SQL databases and Redis, but I don't have experience with them, so I don't know if they will be useful to learn for this exact type of project.
[]
[]
[ "1-) Classical solution: Rent a server, Deploy MySQL to server, Use bootstrap templates for web application design in which users add words, use PHP to connect MySQL database server (for adding and displaying words\n2-) Modern solution: Open 3 month free usage Google cloud account, create Firebase database, create program or web app and connect to firebase using credential keys with python\n", "TLDR: I recommend Python + SQLite\nReasoning for a relational DB\n\nYour data is very structured - therefore you can utilize the structure guarantees of a relational DB.\nJoins are very effective with relational DBs and can be done all in one query.\nThe size of your data should not exceed 1M rows, which is the perfect size for relational DBs.\nData is persisted on the hard drive, and you have all the nice ACID properties.\n\nReasoning for SQLite\n\nTakes about 5 Minutes to set it up.\n\nResoning for Python\nThis is entirely opinionated, in my experience the JS-database-connectors feel a lot less stable. Python is a perfect language for a beginner and is very rewarding when prototyping a project such as yours.\n\nAdditional: Redis will be overkill - that's for very high-speed applications with very small access times.\n" ]
[ -1, -1 ]
[ "database", "dictionary", "javascript", "python", "web" ]
stackoverflow_0074658078_database_dictionary_javascript_python_web.txt
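As a concrete starting point for the SQLite recommendation above, here is a minimal sketch using Python's built-in sqlite3 module. The table layout and the sample entry are purely illustrative assumptions, not a prescribed schema for the Tarifit dictionary; non-programmers could then edit the database through a small web form or a GUI tool such as DB Browser for SQLite.

import sqlite3

conn = sqlite3.connect("dictionary.db")   # a single file on disk, no server needed
conn.execute("""
    CREATE TABLE IF NOT EXISTS words (
        id INTEGER PRIMARY KEY,
        tarifit TEXT NOT NULL,
        english TEXT,
        part_of_speech TEXT
    )
""")
# Parameterized insert: safe against injection and handles Unicode out of the box
conn.execute("INSERT INTO words (tarifit, english) VALUES (?, ?)", ("azul", "hello"))
conn.commit()

for row in conn.execute("SELECT tarifit, english FROM words ORDER BY tarifit"):
    print(row)
conn.close()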
Q: How to get multiple plots per variable I'm not sure how to make multiple plots per variable. So I would like to have something like this: My data looks like this > head(Captiv_mean) Participant.Code Condition Class.1 Upper_Left_Arm_RULA Upper_Right_Arm_RULA Neck_RULA Trunk_RULA 1 AE1_01 DBG Calibration 1.187500 1.2155172 3.3225575 0.4798851 2 AE1_01 DBG Sitting 2.962401 3.0016527 5.1971110 2.8696135 3 AE1_01 DBG Stepping 2.494737 1.9894737 4.6052632 1.4052632 4 AE1_01 PRE Calibration 1.678552 1.2618384 5.5771588 0.6072423 5 AE1_01 PRE Other 0.132678 0.1103238 0.6377426 0.2530313 6 AE1_01 PRE Sitting 2.013686 1.6693523 5.8169352 1.7554690 I would like to have one graph per "Class.1" (there are 4 different ones in total), so that all the data per "Class.1" is grouped within its own graph (you can ignore the first column). It is important to note that I need the data within each graph to be grouped by the column "Condition". Please let me know if you need more information. A: With ggplot2 especially, it's often good to "melt" (reshape from "wide" to "long") the data. I'll use tidyr::pivot_longer for this, though it's easily done using reshape2::melt (and data.table::melt). library(tidyr) library(ggplot2) pivot_longer(Captiv_mean, -c("Participant.Code", "Condition", "Class.1")) |> ggplot(aes(Class.1, value, fill = Condition)) + geom_boxplot(position = position_dodge(preserve = "single")) The position_dodge(..) is needed because the sample data does not include values for all combinations of Class.1 and Condition, causing some groups to have different widths. Without position=.., we see: Data Captiv_mean <- structure(list(Participant.Code = c("AE1_01", "AE1_01", "AE1_01", "AE1_01", "AE1_01", "AE1_01"), Condition = c("DBG", "DBG", "DBG", "PRE", "PRE", "PRE"), Class.1 = c("Calibration", "Sitting", "Stepping", "Calibration", "Other", "Sitting"), Upper_Left_Arm_RULA = c(1.1875, 2.962401, 2.494737, 1.678552, 0.132678, 2.013686), Upper_Right_Arm_RULA = c(1.2155172, 3.0016527, 1.9894737, 1.2618384, 0.1103238, 1.6693523), Neck_RULA = c(3.3225575, 5.197111, 4.6052632, 5.5771588, 0.6377426, 5.8169352), Trunk_RULA = c(0.4798851, 2.8696135, 1.4052632, 0.6072423, 0.2530313, 1.755469)), class = "data.frame", row.names = c("1", "2", "3", "4", "5", "6"))
How to get multiple plots per variable
I'm not sure how to make multiple plots per variable. So I would like to have something like this: My data looks like this > head(Captiv_mean) Participant.Code Condition Class.1 Upper_Left_Arm_RULA Upper_Right_Arm_RULA Neck_RULA Trunk_RULA 1 AE1_01 DBG Calibration 1.187500 1.2155172 3.3225575 0.4798851 2 AE1_01 DBG Sitting 2.962401 3.0016527 5.1971110 2.8696135 3 AE1_01 DBG Stepping 2.494737 1.9894737 4.6052632 1.4052632 4 AE1_01 PRE Calibration 1.678552 1.2618384 5.5771588 0.6072423 5 AE1_01 PRE Other 0.132678 0.1103238 0.6377426 0.2530313 6 AE1_01 PRE Sitting 2.013686 1.6693523 5.8169352 1.7554690 I would like to have one graph per "Class.1" (there are 4 different ones in total), so that all the data per "Class.1" is grouped within its own graph (you can ignore the first column). It is important to note that I need the data within each graph to be grouped by the column "Condition". Please let me know if you need more information.
[ "With ggplot2 especially, it's often good to \"melt\" (reshape from \"wide\" to \"long\") the data. I'll use tidyr::pivot_longer for this, though it's easily done using reshape2::melt (and data.table::melt).\nlibrary(tidyr)\nlibrary(ggplot2)\npivot_longer(Captiv_mean, -c(\"Participant.Code\", \"Condition\", \"Class.1\")) |>\n ggplot(aes(Class.1, value, fill = Condition)) +\n geom_boxplot(position = position_dodge(preserve = \"single\"))\n\n\nThe position_dodge(..) is needed because the sample data does not include values for all combinations of Class.1 and Condition, causing some groups to have different widths. Without position=.., we see:\n\n\nData\nCaptiv_mean <- structure(list(Participant.Code = c(\"AE1_01\", \"AE1_01\", \"AE1_01\", \"AE1_01\", \"AE1_01\", \"AE1_01\"), Condition = c(\"DBG\", \"DBG\", \"DBG\", \"PRE\", \"PRE\", \"PRE\"), Class.1 = c(\"Calibration\", \"Sitting\", \"Stepping\", \"Calibration\", \"Other\", \"Sitting\"), Upper_Left_Arm_RULA = c(1.1875, 2.962401, 2.494737, 1.678552, 0.132678, 2.013686), Upper_Right_Arm_RULA = c(1.2155172, 3.0016527, 1.9894737, 1.2618384, 0.1103238, 1.6693523), Neck_RULA = c(3.3225575, 5.197111, 4.6052632, 5.5771588, 0.6377426, 5.8169352), Trunk_RULA = c(0.4798851, 2.8696135, 1.4052632, 0.6072423, 0.2530313, 1.755469)), class = \"data.frame\", row.names = c(\"1\", \"2\", \"3\", \"4\", \"5\", \"6\"))\n\n" ]
[ 1 ]
[]
[]
[ "ggplot2", "r" ]
stackoverflow_0074658064_ggplot2_r.txt
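The melt-then-plot idiom in the answer above is not R-specific; pandas has the same wide-to-long reshape. The sketch below is a cross-language illustration only, not part of the original answer: the few rows are made up to stand in for Captiv_mean, and pandas, seaborn, and matplotlib are assumed to be installed.

import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

wide = pd.DataFrame({
    "Condition":  ["DBG", "DBG", "PRE", "PRE"],
    "Class.1":    ["Sitting", "Stepping", "Sitting", "Other"],
    "Neck_RULA":  [5.20, 4.61, 5.82, 0.64],
    "Trunk_RULA": [2.87, 1.41, 1.76, 0.25],
})

# Wide -> long: one row per (Condition, Class.1, measurement), like pivot_longer
long = wide.melt(id_vars=["Condition", "Class.1"], var_name="measure")

# Grouped boxplot: one group per Class.1, split (hue) by Condition
sns.boxplot(data=long, x="Class.1", y="value", hue="Condition")
plt.show()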
Q: How can I inspect the Highcharts tooltip in the Chrome inspector? I want to be able to target the highcharts tooltip using Protractor in the Chrome DOM inspector, but I need to be able to capture the class name of the tooltip to be able to do so. When a point in a data series is hovered over in a Highcharts chart, a tooltip is displayed as you can see here: https://jsfiddle.net/gh/get/library/pure/highcharts/highcharts/tree/master/samples/highcharts/demo/line-basic/ I am using a shared tooltip: tooltip: { shared: true, }, However, using the inspector's 'Force element state :hover' does not work. I can't even see the tooltip appear as a DOM element in the inspector at all. How is it possible to inspect the Highcharts tooltip in the Chrome DOM inspector? A: To inspect the Highcharts tooltip in the Chrome DOM inspector you need to keep the tooltip visible. You can achieve it by wrapping the hide method: Highcharts.wrap(Highcharts.Tooltip.prototype, 'hide', function(proceed) {}); Demo: https://jsfiddle.net/BlackLabel/mke7Lh3b/ Now you can easily find the tooltip class which is (for the point in the first series): highcharts-label highcharts-tooltip highcharts-color-0 highcharts-color-0 this part is added dynamically. The number depends on which color from the default Highcharts color array the series has been assigned. API: https://api.highcharts.com/highcharts/colors A: An alternative is to use Ctrl + F in the developer inspector. After you type a specific term from your tooltip, press Enter and you will find it easily
How can I inspect the Highcharts tooltip in the Chrome inspector?
I want to be able to target the highcharts tooltip using Protractor in the Chrome DOM inspector, but I need to be able to capture the class name of the tooltip to be able to do so. When a point in a data series is hovered over in a Highcharts chart, a tooltip is displayed as you can see here: https://jsfiddle.net/gh/get/library/pure/highcharts/highcharts/tree/master/samples/highcharts/demo/line-basic/ I am using a shared tooltip: tooltip: { shared: true, }, However, using the inspector's 'Force element state :hover' does not work. I can't even see the tooltip appear as a DOM element in the inspector at all. How is it possible to inspect the Highcharts tooltip in the Chrome DOM inspector?
[ "To inspect the Highcharts tooltip in the Chrome DOM inspector you need to keep the tooltip visible. You can achieve it by wrapping the hide method:\nHighcharts.wrap(Highcharts.Tooltip.prototype, 'hide', function(proceed) {});\n\nDemo: https://jsfiddle.net/BlackLabel/mke7Lh3b/\nNow you can easily find the tooltip class which is (for the point in the first series): \nhighcharts-label highcharts-tooltip highcharts-color-0\n\nhighcharts-color-0 this part is added dynamically. The number depends on which color from default Highcharts color array series have assigned. \nAPI: https://api.highcharts.com/highcharts/colors\n", "An alternative is to use ctrl + F in the developper inspecteur. After you use a specific term in your tooltil press enter and you will find it easily\n" ]
[ 2, 0 ]
[]
[]
[ "google_chrome", "highcharts", "html" ]
stackoverflow_0060987692_google_chrome_highcharts_html.txt
Q: How do I create a React Native project using Yarn? I am running the following commands in the DOS console on a Windows 7 (64-bit) machine. npm install -g yarn yarn add global react-native yarn add global react-native-cli react-native init sample After running react-native init sample, the console was closed. The error log shows: D:\Mobile>"$basedir/../../Users/pramaswamy/AppData/Local/Yarn/.global/node_modules/.bin/react-native.cmd" "$@" D:\Mobile>exit $? A: I think you're adding global dependencies wrong, and you shouldn't need to install react-native, globally or locally. react-native init will create a package.json with react-native listed as a dependency. You should be able to install react-native-cli globally with yarn global add react-native-cli, not yarn add global react-native-cli. You should be fine with running the following: npm install -g yarn yarn global add react-native-cli react-native init sample A: NEW SEP 2019, now it's more simple, use node10 and expo: (easy way) npm install -g expo-cli *to create project: expo init AwesomeProject cd AwesomeProject npm start *install the app 'expo' on your phone, and scan the qr code for the project and you can start to view your app more info: https://facebook.github.io/react-native/docs/getting-started.html UPDATE OCT 2018 Create React Native App (now discontinued) has been merged with Expo CLI You can now use expo init to create your project. See Quick Start in the Expo documentation for instructions on getting started using Expo CLI. Unfortunately, react-native-cli is outdated. Starting 13 March 2017, use create-react-native-app instead. Moreover, you shouldn't install Yarn with NPM. Instead, use one of the methods on the yarn installation page. 1. Install yarn Via NPM. According to its installation docs, you shouldn't install yarn via npm, but if necessary, you can still install it with a pre-v5 version of npm. UPDATE 2018 - OCTOBER Node 8.12.0 and NPM 6.4.1 is already compatible with create-react-native-app. Really some minors previous versions too. You don't need more downgrade your npm. On Ubuntu. curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add - echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list On macOS, use Homebrew or MacPorts. brew install yarn sudo port install yarn 2. Install the Create React Native App yarn global add create-react-native-app 3. Update your shell environment source ~/.bashrc 4. Create a React native project create-react-native-app myreactproj A: You got the order wrong. You should be yarn add global react-native-cli yarn add react-native react-native init sample A: Please You visit Bug yarn global add react-native-cli with react-native --version and I get "$basedir/../../Users/juvasquezg/AppData/Local/Yarn/config/global/node_modules/.bin/react-native.cmd" "$@" the system cannot find the path specified Go to C:\Program Files\nodejs and I saw: react-native react-native.cmd react-native.cmd.cmd The fix is to delete react-native.cmd and rename react-native.cmd.cmd to react-native.cmd The Solution #1324 (comment) A: It is now: yarn dlx create-react-native-app then follow the instructions. cd project_name into the project folder and do: yarn install then try with: yarn web A: You could also do yarn dlx expo-cli then: expo init project_name and follow the instructions after finishing cd project_name and try yarn web A: If you want to create app using yarn instead of npx; yarn dlx react-native init ExampleApp this command will be helpful. 
According to the react native documentation we should use npx react-native init AwesomeProject command For more info about "yarn dlx": https://yarnpkg.com/cli/dlx Yarn global is deprecated. If you run yarn global you will get an error. Usage Error: The 'yarn global' commands have been removed in 2.x - consider using 'yarn dlx' or a third-party plugin instead But yarn dlx won't work like yarn global or npm install <module> --global
How do I create a React Native project using Yarn?
I am running the following commands in the DOS console on a Windows 7 (64-bit) machine. npm install -g yarn yarn add global react-native yarn add global react-native-cli react-native init sample After running react-native init sample, the console was closed. The error log shows: D:\Mobile>"$basedir/../../Users/pramaswamy/AppData/Local/Yarn/.global/node_modules/.bin/react-native.cmd" "$@" D:\Mobile>exit $?
[ "I think you're adding global dependencies wrong, and you shouldn't need to install react-native, globally or locally. react-native init will create a package.json with react-native listed as a dependency.\nYou should be able to install react-native-cli globally with yarn global add react-native-cli, not yarn add global react-native-cli.\nYou should be fine with running the following:\nnpm install -g yarn\nyarn global add react-native-cli\nreact-native init sample\n\n", "NEW SEP 2019,\nnow it's more simple, use node10 and expo: (easy way)\nnpm install -g expo-cli\n\n*to create project:\n\n expo init AwesomeProject\n\n cd AwesomeProject\n npm start\n\n*install the app 'expo' on your phone, and scan the qr code for the project and you can start to view your app\n\nmore info:\n https://facebook.github.io/react-native/docs/getting-started.html\n\nUPDATE OCT 2018 Create React Native App (now discontinued) has been merged with Expo CLI\n You can now use expo init to create your project. See Quick Start in\n the Expo documentation for instructions on getting started using Expo\n CLI.\n\nUnfortunately, react-native-cli is outdated. Starting 13 March 2017, use create-react-native-app instead. Moreover, you shouldn't install Yarn with NPM. Instead, use one of the methods on the yarn installation page.\n1. Install yarn\nVia NPM. According to its installation docs, you shouldn't install yarn via npm, but if necessary, you can still install it with a pre-v5 version of npm. \n\n\nUPDATE 2018 - OCTOBER\nNode 8.12.0 and NPM 6.4.1 is already compatible with create-react-native-app. Really some minors previous versions too. You don't need more downgrade your npm.\nOn Ubuntu.\ncurl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -\necho \"deb https://dl.yarnpkg.com/debian/ stable main\" | sudo tee /etc/apt/sources.list.d/yarn.list\n\nOn macOS, use Homebrew or MacPorts.\nbrew install yarn\nsudo port install yarn\n2. Install the Create React Native App\nyarn global add create-react-native-app\n3. Update your shell environment\nsource ~/.bashrc\n4. Create a React native project\ncreate-react-native-app myreactproj\n", "You got the order wrong. You should be\nyarn add global react-native-cli\nyarn add react-native\nreact-native init sample\n\n", "Please You visit Bug\nyarn global add react-native-cli\n\nwith \nreact-native --version\n\nand I get \"$basedir/../../Users/juvasquezg/AppData/Local/Yarn/config/global/node_modules/.bin/react-native.cmd\" \"$@\" \n\nthe system cannot find the path specified\n\nGo to C:\\Program Files\\nodejs and I saw:\n\nreact-native\nreact-native.cmd\nreact-native.cmd.cmd\n\nThe fix is to delete react-native.cmd and rename react-native.cmd.cmd to react-native.cmd\nThe Solution #1324 (comment)\n", "It is now:\nyarn dlx create-react-native-app\n\nthen follow the instructions.\ncd project_name into the project folder and do:\nyarn install\n\nthen try with:\nyarn web\n\n", "You could also do\nyarn dlx expo-cli\n\nthen:\nexpo init project_name\n\nand follow the instructions\nafter finishing cd project_name and try\nyarn web\n\n", "If you want to create app using yarn instead of npx;\nyarn dlx react-native init ExampleApp this command will be helpful.\nAccording to the react native documentation we should use npx react-native init AwesomeProject command\nFor more info about \"yarn dlx\": https://yarnpkg.com/cli/dlx\nYarn global is deprecated. 
If you run yarn global you will get an error.\n\n\nUsage Error: The 'yarn global' commands have been removed in 2.x -\nconsider using 'yarn dlx' or a third-party plugin instead\n\nBut yarn dlx won't work like yarn global or npm install <module> --global\n" ]
[ 60, 21, 8, 2, 1, 1, 0 ]
[]
[]
[ "android", "npm", "react_native", "yarnpkg" ]
stackoverflow_0040011693_android_npm_react_native_yarnpkg.txt
Q: Why assign method in pandas method chaining behave differently if it is applied in chain after group by? I am trying to chain some methods in pandas, but it seems like the order of methods is restrictive in pandas. Let me explain this with mpg data. In two of the below options, I have changed the order of the assign method. In option 1, it is before group by and it works as expected. While in option 2, it is after group by and it produces garbage output. In R/tidyverse I could simply do ungroup() and use mutate() either before or after group by and it would still produce the same output. import pandas as pd import seaborn as sns df = sns.load_dataset("mpg") Option 1 ( df .assign(origin=df.origin.map({'europe':'Europe'}).fillna(df.origin)) .query(("origin=='Europe' & model_year==80")) .groupby(['origin','cylinders'],dropna=False) .mpg .sum() .reset_index() ) Option 2 ( df .query(("origin=='europe' & model_year==80")) .groupby(['origin','cylinders'],dropna=False) .mpg .sum() .reset_index() .assign(origin=df.origin.map({'europe':'Europe'}).fillna(df.origin)) ) The whole thing can also be done quite neatly without method chaining in Pandas but I am trying to see if I can make method chaining work for myself. How can I ensure the assign method in the above two options produces the same output regardless of where it is in the chain of methods? A: The key thing here is actually the .reset_index(). In the original data, the first two rows have "usa" as their origin, so those get applied to the transformed data. To illustrate, we can join (on the index): tra = ( df .query("origin=='europe' & model_year==80") .groupby(['origin', 'cylinders'], dropna=False) ['mpg'].sum() .reset_index() ) tra.join(df['origin'], rsuffix='_2') origin cylinders mpg origin_2 0 europe 4 299.2 usa 1 europe 5 36.4 usa To fix it, you could use a lambda to make use of the transformed data (as sammywemmy wrote in a comment): tra.assign(origin=lambda df_: df_['origin'].map({'europe':'Europe'}).fillna(df_['origin']) ) origin cylinders mpg 0 Europe 4 299.2 1 Europe 5 36.4
Why assign method in pandas method chaining behave differently if it is applied in chain after group by?
I am trying to chain some methods in pandas, but it seems like the order of methods is restrictive in pandas. Let me explain this with mpg data. In two of the below options, I have changed the order of the assign method. In option 1, it is before group by and it works as expected. While in option 2, it is after group by and it produces garbage output. In R/tidyverse I could simply do ungroup() and use mutate() either before or after group by and it would still produce the same output. import pandas as pd import seaborn as sns df = sns.load_dataset("mpg") Option 1 ( df .assign(origin=df.origin.map({'europe':'Europe'}).fillna(df.origin)) .query(("origin=='Europe' & model_year==80")) .groupby(['origin','cylinders'],dropna=False) .mpg .sum() .reset_index() ) Option 2 ( df .query(("origin=='europe' & model_year==80")) .groupby(['origin','cylinders'],dropna=False) .mpg .sum() .reset_index() .assign(origin=df.origin.map({'europe':'Europe'}).fillna(df.origin)) ) The whole thing can also be done quite neatly without method chaining in Pandas but I am trying to see if I can make method chaining work for myself. How can I ensure the assign method in the above two options produces the same output regardless of where it is in the chain of methods?
[ "The key thing here is actually the .reset_index(). In the original data, the first two rows have \"usa\" as their origin, so those get applied to the transformed data.\nTo illustrate, we can join (on the index):\ntra = (\n df\n .query(\"origin=='europe' & model_year==80\")\n .groupby(['origin', 'cylinders'], dropna=False)\n ['mpg'].sum()\n .reset_index()\n)\ntra.join(df['origin'], rsuffix='_2')\n\n origin cylinders mpg origin_2\n0 europe 4 299.2 usa\n1 europe 5 36.4 usa\n\nTo fix it, you could use a lambda to make use of the transformed data (as sammywemmy wrote in a comment):\ntra.assign(origin=lambda df_:\n df_['origin'].map({'europe':'Europe'}).fillna(df_['origin'])\n)\n\n origin cylinders mpg\n0 Europe 4 299.2\n1 Europe 5 36.4\n\n" ]
[ 0 ]
[]
[]
[ "pandas" ]
stackoverflow_0074622393_pandas.txt
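To experiment with the fix above without downloading seaborn's mpg dataset, here is a self-contained sketch with a few hand-made rows (the values are invented). The point carries over unchanged: inside a chain, assign must receive a lambda so it operates on the frame produced by the previous step, whose index was rebuilt by reset_index, rather than on the original df.

import pandas as pd

df = pd.DataFrame({
    "origin":     ["usa", "europe", "europe", "usa"],
    "cylinders":  [8, 4, 5, 8],
    "mpg":        [14.0, 30.0, 36.4, 13.0],
    "model_year": [80, 80, 80, 79],
})

out = (
    df
    .query("origin == 'europe' & model_year == 80")
    .groupby(["origin", "cylinders"], dropna=False)["mpg"]
    .sum()
    .reset_index()
    # The lambda receives the *chained* frame, so row alignment stays correct
    .assign(origin=lambda d: d["origin"].map({"europe": "Europe"}).fillna(d["origin"]))
)
print(out)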
Q: R - remove rows from data frame that do not match (exactly) elements of list Imagine a data frame... df <- rbind("A*YOU 1.000 0.780", "A*YOUR 1.000 0.780", "B*USE 0.800 0.678", "B*USER 0.700 1.000") df <- as.data.frame(df) df ... which prints... > df V1 1 A*YOU 1.000 0.780 2 A*YOUR 1.000 0.780 3 B*USE 0.800 0.678 4 B*USER 0.700 1.000 ... and of which I would like to remove any row that does not contain exactly any element of a list (called tenables here) tenables <- c("A*YOU", "B*USE"), so that the outcome becomes: > df V1 1 A*YOU 1.000 0.780 2 B*USE 0.800 0.678 Any ideas on how to solve this? Many thanks in advance. A: > df[gsub("\\s*\\d+\\.*", "", df$V1) %in% tenables, ,drop=FALSE] V1 1 A*YOU 1.000 0.780 3 B*USE 0.800 0.678 A: Since you have regex specials in tenables (* means "0 or more of the previous character/class/group"), we cannot use fixed=TRUE in the grep call. As such, we need to find those specials and backslash-escape them. From there, we'll add \\b (word-boundary) to differentiate between YOU and YOUR, where adding a space or any other character may be over-constraining. ## clean up tenables to be regex-friendly and precise gsub("([].*+(){}[])", "\\\\\\1", tenables) # [1] "A\\*YOU" "B\\*USE" ## combine into a single pattern for simple use in grep paste0("\\b(", paste(gsub("([].*+(){}[])", "\\\\\\1", tenables), collapse = "|"), ")\\b") # [1] "\\b(A\\*YOU|B\\*USE)\\b" ## subset your frame subset(df, !grepl(paste0("\\b(", paste(gsub("([].*+(){}[])", "\\\\\\1", tenables), collapse = "|"), ")\\b"), V1)) # V1 # 2 A*YOUR 1.000 0.780 # 4 B*USER 0.700 1.000 Regex explanation: \\b(A\\*YOU|B\\*USE)\\b ^^^ ^^^ "word boundary", meaning the previous/next chars are begin/end of string or from A-Z, a-z, 0-9, or _ ^ ^ parens "group" the pattern so we can reference it in the replacement string ^^^^^^^ literal "A", "*", "Y", "O", "U" (same with other string) ^ the "|" means "OR", so either the "A*" or the "B*" strings A: One approach using sapply on the strsplit column of df, only looking at the first entry of A*YOU 1.000 0.780, respectively. df[sapply(strsplit(df$V1, " "), function(x) any(grepl(x[1], tenables))), , drop=F] V1 2 A*YOU 1.000 0.780 4 B*USE 0.800 0.678
R - remove rows from data frame that do not match (exactly) elements of list
Imagine a data frame... df <- rbind("A*YOU 1.000 0.780", "A*YOUR 1.000 0.780", "B*USE 0.800 0.678", "B*USER 0.700 1.000") df <- as.data.frame(df) df ... which prints... > df V1 1 A*YOU 1.000 0.780 2 A*YOUR 1.000 0.780 3 B*USE 0.800 0.678 4 B*USER 0.700 1.000 ... and of which I would like to remove any row that does not contain exactly any element of a list (called tenables here) tenables <- c("A*YOU", "B*USE"), so that the outcome becomes: > df V1 1 A*YOU 1.000 0.780 2 B*USE 0.800 0.678 Any ideas on how to solve this? Many thanks in advance.
[ "> df[gsub(\"\\\\s*\\\\d+\\\\.*\", \"\", df$V1) %in% tenables, ,drop=FALSE]\n V1\n1 A*YOU 1.000 0.780\n3 B*USE 0.800 0.678\n\n", "Since you have regex specials in tenables (* means \"0 or more of the previous character/class/group\"), we cannot use fixed=TRUE in the grep call. As such, we need to find those specials and backslash-escape them. From there, we'll add \\\\b (word-boundary) to differentiate between YOU and YOUR, where adding a space or any other character may be over-constraining.\n## clean up tenables to be regex-friendly and precise\ngsub(\"([].*+(){}[])\", \"\\\\\\\\\\\\1\", tenables)\n# [1] \"A\\\\*YOU\" \"B\\\\*USE\"\n\n## combine into a single pattern for simple use in grep\npaste0(\"\\\\b(\", paste(gsub(\"([].*+(){}[])\", \"\\\\\\\\\\\\1\", tenables), collapse = \"|\"), \")\\\\b\")\n# [1] \"\\\\b(A\\\\*YOU|B\\\\*USE)\\\\b\"\n\n## subset your frame\nsubset(df, !grepl(paste0(\"\\\\b(\", paste(gsub(\"([].*+(){}[])\", \"\\\\\\\\\\\\1\", tenables), collapse = \"|\"), \")\\\\b\"), V1))\n# V1\n# 2 A*YOUR 1.000 0.780\n# 4 B*USER 0.700 1.000\n\nRegex explanation:\n\\\\b(A\\\\*YOU|B\\\\*USE)\\\\b\n^^^ ^^^ \"word boundary\", meaning the previous/next chars\n are begin/end of string or from A-Z, a-z, 0-9, or _\n ^ ^ parens \"group\" the pattern so we can reference it\n in the replacement string\n ^^^^^^^ literal \"A\", \"*\", \"Y\", \"O\", \"U\" (same with other string)\n ^ the \"|\" means \"OR\", so either the \"A*\" or the \"B*\" strings\n\n", "One approach using sapply on the strsplit column of df, only looking at the first entry of A*YOU 1.000 0.780, respectively.\ndf[sapply(strsplit(df$V1, \" \"), function(x) \n any(grepl(x[1], tenables))), , drop=F]\n V1\n2 A*YOU 1.000 0.780\n4 B*USE 0.800 0.678\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dataframe", "list", "r", "row" ]
stackoverflow_0074657825_dataframe_list_r_row.txt
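The escaping trick in the second answer has a one-line counterpart in Python, where re.escape replaces the hand-written gsub for backslash-escaping regex metacharacters. A sketch against the same toy data, purely for comparison:

import re

tenables = ["A*YOU", "B*USE"]
rows = ["A*YOU 1.000 0.780", "A*YOUR 1.000 0.780",
        "B*USE 0.800 0.678", "B*USER 0.700 1.000"]

# re.escape backslash-escapes the '*' so it matches literally;
# \b word boundaries reject partial tokens like "A*YOUR"
pattern = r"\b(" + "|".join(re.escape(t) for t in tenables) + r")\b"

kept = [row for row in rows if re.search(pattern, row)]
print(kept)  # ['A*YOU 1.000 0.780', 'B*USE 0.800 0.678']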
Q: Apache Camel and InfluxDB 2.x Up to now I used Apache Camel (JAVA) to route data from an Apache Kafka broker to an InfluxDB 1.8. Now I upgraded the database to InfluxDB 2.5. The two InfluxDB-Versions are incompatible in terms of their read/write API. For example, it is not possible to inject the required security token required for reading/writing. InfluxDB 1.8 requires a dependency to <groupId>org.influxdb</groupId> <artifactId>influxdb-java</artifactId> <version>XXX</version> InfluxDB 2.5 requires <groupId>com.influxdb</groupId> <artifactId>influxdb-client-java</artifactId> <version>YYY</version> In Apache Camel an InfluxDB component is available: <groupId>org.apache.camel</groupId> <artifactId>camel-influxdb</artifactId> <version>ZZZ</version> Which has a dependency to the influx-clientlibrary. Does that mean there is no InfluxDB 2.x component anymore? How do I build an InfluxDB 2.5 endpoint then? A: Unfortunately, influxdb v2 has fully support only in influxdb-client-java. However according to documentation, you can use InfluxDB 1.x compatibility API to continue working with influx 2.X using influxdb-java lib. A: I ended up building a custom Camel Component providing native support for InfluxDB 2x. As a starting point I used the structure and code from original Apache Camel InfluxDB jar-file. As a route for those who want to do something similar: Create a Maven project with a POM like this: <project xmlns="http://maven.apache.org/POM/4.0.0 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <!-- NOTE: I would like to get rid of this camel parent and replace it with our own. didn manage yet --> <parent> <groupId>org.apache.camel</groupId> <artifactId>components</artifactId> <version>3.19.0</version> </parent> <groupId>my.group.name</groupId> <artifactId>my-component-name</artifactId> <version>3.19.0</version> <packaging>jar</packaging> <name>Camel :: InfluxDBClient</name> <description>A Camel Component</description> <url>...</url> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-support</artifactId> </dependency> <!-- InfluxDB driver dependency --> <dependency> <groupId>com.influxdb</groupId> <artifactId>influxdb-client-java</artifactId> <version>${version.influx-java-driver}</version> <exclusions> <exclusion> <groupId>com.squareup.okhttp3</groupId> <artifactId>logging-interceptor</artifactId> </exclusion> </exclusions> </dependency> <!-- test dependencies --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.mockito</groupId> <artifactId>mockito-core</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter</artifactId> <scope>test</scope> </dependency> ... 
Create the following methods in your Camel component project (all based on original camel-influxdb), where prefix Influx2Db can be replaced with something to you liking: CamelInfluxDbException Influx2DbComponent (replace InfluxDB with InfluxDBClient (2.x) and adjust code) Influx2DbConstants Influx2DbEndpoint Influx2DbOperations Influx2DbProducer (replace InfluxDB with InfluxDBClient (2.x) and adjust code) Create a file named <last part of your package name> at: src/main/resources/META-INF/service/<package name minus last part>/<last part of your package name> with content: class=<package name>.Influx2DbComponent Furthermore, in your Apache Camel application you need Spring-Boot-Autoconfigure classes in your spring lookup path based on spring-boot-autoconfigure:influxdb (if you use Spring-Boot): Influx2DbAutoConfiguration Influx2DbCustomizer (FunctionalInterface) Influx2DbOkHttpClientBuilderProvider (FunctionalInterface) Influx2DbProperties A: I ended up building a custom Camel component providing native support for InfluxDB 2x in a separate Maven project. As a starting point I used the structure and code from original Apache Camel InfluxDB jar-file. As a route for those who want to do something similar: Create a Maven project with a POM like this: <project xmlns="http://maven.apache.org/POM/4.0.0 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd"> <modelVersion>4.0.0</modelVersion> <!-- NOTE: I would like to get rid of this camel parent and replace it with our own. didn manage yet --> <parent> <groupId>org.apache.camel</groupId> <artifactId>components</artifactId> <version>3.19.0</version> </parent> <groupId>my.group.name</groupId> <artifactId>my-component-name</artifactId> <version>3.19.0</version> <packaging>jar</packaging> <name>Camel :: InfluxDBClient</name> <description>A Camel Component</description> <url>...</url> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-support</artifactId> </dependency> <!-- InfluxDB driver dependency --> <dependency> <groupId>com.influxdb</groupId> <artifactId>influxdb-client-java</artifactId> <version>${version.influx-java-driver}</version> <exclusions> <exclusion> <groupId>com.squareup.okhttp3</groupId> <artifactId>logging-interceptor</artifactId> </exclusion> </exclusions> </dependency> <!-- test dependencies --> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-test-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.mockito</groupId> <artifactId>mockito-core</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.apache.logging.log4j</groupId> <artifactId>log4j-slf4j-impl</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.junit.jupiter</groupId> <artifactId>junit-jupiter</artifactId> <scope>test</scope> </dependency> ... 
Create the following methods in your Camel component project (all based on original camel-influxdb), where prefix Influx2Db can be replaced with something to you liking: CamelInfluxDbException Influx2DbComponent (replace InfluxDB with InfluxDBClient (2.x) and adjust code) Influx2DbConstants Influx2DbEndpoint Influx2DbOperations Influx2DbProducer (replace InfluxDB with InfluxDBClient (2.x) and adjust code) Create a file named <last part of your package name> at: src/main/resources/META-INF/service/<package name minus last part>/<last part of your package name> with content: class=<package name>.Influx2DbComponent Furthermore, in your Apache Camel application you need Spring-Boot-Autoconfigure classes in your spring lookup path based on spring-boot-autoconfigure:influxdb (if you use Spring-Boot): Influx2DbAutoConfiguration Influx2DbCustomizer (FunctionalInterface) Influx2DbOkHttpClientBuilderProvider (FunctionalInterface) Influx2DbProperties
Apache Camel and InfluxDB 2.x
Up to now I used Apache Camel (JAVA) to route data from an Apache Kafka broker to an InfluxDB 1.8. Now I upgraded the database to InfluxDB 2.5. The two InfluxDB-Versions are incompatible in terms of their read/write API. For example, it is not possible to inject the required security token required for reading/writing. InfluxDB 1.8 requires a dependency to <groupId>org.influxdb</groupId> <artifactId>influxdb-java</artifactId> <version>XXX</version> InfluxDB 2.5 requires <groupId>com.influxdb</groupId> <artifactId>influxdb-client-java</artifactId> <version>YYY</version> In Apache Camel an InfluxDB component is available: <groupId>org.apache.camel</groupId> <artifactId>camel-influxdb</artifactId> <version>ZZZ</version> Which has a dependency to the influx-clientlibrary. Does that mean there is no InfluxDB 2.x component anymore? How do I build an InfluxDB 2.5 endpoint then?
[ "Unfortunately, influxdb v2 has fully support only in influxdb-client-java. However according to documentation, you can use InfluxDB 1.x compatibility API to continue working with influx 2.X using influxdb-java lib.\n", "I ended up building a custom Camel Component providing native support for InfluxDB 2x. As a starting point I used the structure and code from original Apache Camel InfluxDB jar-file.\nAs a route for those who want to do something similar:\nCreate a Maven project with a POM like this:\n<project xmlns=\"http://maven.apache.org/POM/4.0.0 xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n<modelVersion>4.0.0</modelVersion>\n\n<!-- NOTE: I would like to get rid of this camel parent and replace it with our own. didn manage yet -->\n<parent>\n <groupId>org.apache.camel</groupId>\n <artifactId>components</artifactId>\n <version>3.19.0</version>\n</parent>\n\n<groupId>my.group.name</groupId>\n<artifactId>my-component-name</artifactId>\n<version>3.19.0</version>\n<packaging>jar</packaging>\n<name>Camel :: InfluxDBClient</name>\n<description>A Camel Component</description>\n<url>...</url>\n\n<dependencies>\n <dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-support</artifactId>\n </dependency>\n\n <!-- InfluxDB driver dependency -->\n <dependency>\n <groupId>com.influxdb</groupId>\n <artifactId>influxdb-client-java</artifactId>\n <version>${version.influx-java-driver}</version>\n <exclusions>\n <exclusion>\n <groupId>com.squareup.okhttp3</groupId>\n <artifactId>logging-interceptor</artifactId>\n </exclusion>\n </exclusions>\n </dependency>\n\n <!-- test dependencies -->\n <dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-test-junit5</artifactId>\n <scope>test</scope>\n </dependency>\n\n <dependency>\n <groupId>org.mockito</groupId>\n <artifactId>mockito-core</artifactId>\n <scope>test</scope>\n </dependency>\n\n <dependency>\n <groupId>org.apache.logging.log4j</groupId>\n <artifactId>log4j-slf4j-impl</artifactId>\n <scope>test</scope>\n </dependency>\n\n <dependency>\n <groupId>org.junit.jupiter</groupId>\n <artifactId>junit-jupiter</artifactId>\n <scope>test</scope>\n </dependency>\n ...\n\nCreate the following methods in your Camel component project (all based on original camel-influxdb), where prefix Influx2Db can be replaced with something to you liking:\n\nCamelInfluxDbException\nInflux2DbComponent (replace InfluxDB with InfluxDBClient (2.x) and adjust code)\nInflux2DbConstants\nInflux2DbEndpoint\nInflux2DbOperations\nInflux2DbProducer (replace InfluxDB with InfluxDBClient (2.x) and adjust code)\n\nCreate a file named <last part of your package name> at:\n\nsrc/main/resources/META-INF/service/<package name minus last part>/<last part of your package name>\n\nwith content:\nclass=<package name>.Influx2DbComponent\n\nFurthermore, in your Apache Camel application you need Spring-Boot-Autoconfigure classes in your spring lookup path based on spring-boot-autoconfigure:influxdb (if you use Spring-Boot):\n\nInflux2DbAutoConfiguration\nInflux2DbCustomizer (FunctionalInterface)\nInflux2DbOkHttpClientBuilderProvider (FunctionalInterface)\nInflux2DbProperties\n\n", "I ended up building a custom Camel component providing native support for InfluxDB 2x in a separate Maven project. 
As a starting point I used the structure and code from original Apache Camel InfluxDB jar-file.\nAs a route for those who want to do something similar:\nCreate a Maven project with a POM like this:\n<project xmlns=\"http://maven.apache.org/POM/4.0.0 xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd\">\n<modelVersion>4.0.0</modelVersion>\n\n<!-- NOTE: I would like to get rid of this camel parent and replace it with our own. didn manage yet -->\n<parent>\n <groupId>org.apache.camel</groupId>\n <artifactId>components</artifactId>\n <version>3.19.0</version>\n</parent>\n\n<groupId>my.group.name</groupId>\n<artifactId>my-component-name</artifactId>\n<version>3.19.0</version>\n<packaging>jar</packaging>\n<name>Camel :: InfluxDBClient</name>\n<description>A Camel Component</description>\n<url>...</url>\n\n<dependencies>\n <dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-support</artifactId>\n </dependency>\n\n <!-- InfluxDB driver dependency -->\n <dependency>\n <groupId>com.influxdb</groupId>\n <artifactId>influxdb-client-java</artifactId>\n <version>${version.influx-java-driver}</version>\n <exclusions>\n <exclusion>\n <groupId>com.squareup.okhttp3</groupId>\n <artifactId>logging-interceptor</artifactId>\n </exclusion>\n </exclusions>\n </dependency>\n\n <!-- test dependencies -->\n <dependency>\n <groupId>org.apache.camel</groupId>\n <artifactId>camel-test-junit5</artifactId>\n <scope>test</scope>\n </dependency>\n\n <dependency>\n <groupId>org.mockito</groupId>\n <artifactId>mockito-core</artifactId>\n <scope>test</scope>\n </dependency>\n\n <dependency>\n <groupId>org.apache.logging.log4j</groupId>\n <artifactId>log4j-slf4j-impl</artifactId>\n <scope>test</scope>\n </dependency>\n\n <dependency>\n <groupId>org.junit.jupiter</groupId>\n <artifactId>junit-jupiter</artifactId>\n <scope>test</scope>\n </dependency>\n ...\n\nCreate the following methods in your Camel component project (all based on original camel-influxdb), where prefix Influx2Db can be replaced with something to you liking:\n\nCamelInfluxDbException\nInflux2DbComponent (replace InfluxDB with InfluxDBClient (2.x) and adjust code)\nInflux2DbConstants\nInflux2DbEndpoint\nInflux2DbOperations\nInflux2DbProducer (replace InfluxDB with InfluxDBClient (2.x) and adjust code)\n\nCreate a file named <last part of your package name> at:\n\nsrc/main/resources/META-INF/service/<package name minus last part>/<last part of your package name>\n\nwith content:\nclass=<package name>.Influx2DbComponent\n\nFurthermore, in your Apache Camel application you need Spring-Boot-Autoconfigure classes in your spring lookup path based on spring-boot-autoconfigure:influxdb (if you use Spring-Boot):\n\nInflux2DbAutoConfiguration\nInflux2DbCustomizer (FunctionalInterface)\nInflux2DbOkHttpClientBuilderProvider (FunctionalInterface)\nInflux2DbProperties\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "apache_camel", "influxdb", "java" ]
stackoverflow_0074625810_apache_camel_influxdb_java.txt
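For a sense of what the 2.x client API looks like outside Java, here is a hedged sketch using InfluxDB's official Python client (the influxdb-client package). The URL, token, org, and bucket values are placeholders; the point is that, unlike the 1.x API, every call authenticates with a token and targets an org/bucket pair, which is exactly what a custom Camel component like the one above has to pass through.

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

# Token-based authentication is mandatory in the 2.x API
client = InfluxDBClient(url="http://localhost:8086",
                        token="my-token", org="my-org")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One data point: measurement + tag + field, written to an org/bucket pair
point = (Point("sensor_reading")
         .tag("source", "kafka-bridge")
         .field("value", 42.0))
write_api.write(bucket="my-bucket", record=point)
client.close()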
Q: Spring Cloud Config File System Backend Issue (not reading properties from the file) I have an issue where properties are not being read from a file used as a file-system backend. I tried to use these lines shown below to reach the file, but it didn't work. Here is my APIConfig.properties file, located beneath file-system-backend-config token.expiration_time = 8640000 token.secret = hfgry463hf746hf573ydh475fhy57414141 login.url.path = /users/login Here is my application.properties under PhotoAppAPIConfigServer server.port=8042 # File System Backend spring.profiles.active= native spring.cloud.config.server.native.search-locations= file:///C:/Users/Noyan/Desktop/dev/file-system-backend-config # Bus Refresh management.endpoints.web.exposure.include=busrefresh # Rabbit MQ spring.rabbitmq.host=localhost spring.rabbitmq.port=5672 spring.rabbitmq.username=guest spring.rabbitmq.password=guest I get an error when I run this config server. Here is my error message shown below. java.lang.IllegalArgumentException: Pattern cannot be null or empty How can I fix it? A: To use a file system backend, just write the following into the application.properties file: server.port=8042 spring.application.name=photo-app-config # Not mandatory spring.profiles.active=native spring.cloud.config.server.native.search-locations=classpath:/config # spring.cloud.config.server.native.search-locations=file:///C:/config # Or your preferred absolute path Put your configurations into: classpath:/config:(project directory) resources/config/api-config.properties file:///C:/config:(your system directory) C:/config/api-config.properties Now read your configurations through the URL: http://localhost:8042/api-config/default 100% tested.
Spring Cloud Config File System Backend Issue (not reading properties from the file)
I have an issue where properties are not being read from a file used as a file-system backend. I tried to use these lines shown below to reach the file, but it didn't work. Here is my APIConfig.properties file, located beneath file-system-backend-config token.expiration_time = 8640000 token.secret = hfgry463hf746hf573ydh475fhy57414141 login.url.path = /users/login Here is my application.properties under PhotoAppAPIConfigServer server.port=8042 # File System Backend spring.profiles.active= native spring.cloud.config.server.native.search-locations= file:///C:/Users/Noyan/Desktop/dev/file-system-backend-config # Bus Refresh management.endpoints.web.exposure.include=busrefresh # Rabbit MQ spring.rabbitmq.host=localhost spring.rabbitmq.port=5672 spring.rabbitmq.username=guest spring.rabbitmq.password=guest I get an error when I run this config server. Here is my error message shown below. java.lang.IllegalArgumentException: Pattern cannot be null or empty How can I fix it?
[ "To use a file system backend, just write into application.properties file:\nserver.port=8042\nspring.application.name=photo-app-config # Not mandatory \nspring.profiles.active=native\nspring.cloud.config.server.native.search-locations=classpath:/config\n# spring.cloud.config.server.native.search-locations=file:///C:/config # Or your preferred absolute path\n\nPut your configurations into:\n\nclasspath:/config:(project directory)\n resources/config/api-config.properties\n\n\nfile:///C:/config:(your system directory)\n C:/config/api-config.properties\n\n\n\nNow read your configurations through the URL:\nhttp://localhost:8042/api-config/default\n\n100% tested.\n" ]
[ 1 ]
[]
[]
[ "java", "spring_boot", "spring_cloud" ]
stackoverflow_0071081637_java_spring_boot_spring_cloud.txt
Q: How to fix the error name 'phi' is not defined? I'm trying to solve the following Laplace transform: f(t) = sin(ωt + φ) I wrote the following code to solve the problem import sympy as sym from sympy.abc import s,t,x,y,z from sympy.integrals import laplace_transform from sympy.integrals import inverse_laplace_transform omega = sympy.Symbol('omega', real=True) sin = sympy.sin function = (sin(omega*t + phi)) function U = laplace_transform(function, t, s) U[0] As you can see, I tried the code above to solve the problem; however, I get the error that the name 'phi' is not defined. Could someone give me an idea of what I would have to fix to make it work? A: add an import for phi from sympy.abc import phi
How to fix the error name 'phi' is not defined?
I'm trying to solve the following Laplace transform: f(t) = sin(ωt + φ) I wrote the following code to solve the problem import sympy as sym from sympy.abc import s,t,x,y,z from sympy.integrals import laplace_transform from sympy.integrals import inverse_laplace_transform omega = sympy.Symbol('omega', real=True) sin = sympy.sin function = (sin(omega*t + phi)) function U = laplace_transform(function, t, s) U[0] As you can see, I tried the code above to solve the problem; however, I get the error that the name 'phi' is not defined. Could someone give me an idea of what I would have to fix to make it work?
[ "add a import for phi\nfrom sympy.abc import phi\n\n" ]
[ 2 ]
[]
[]
[ "jupyter_notebook", "math", "python", "python_3.x", "symbolic_math" ]
stackoverflow_0074658323_jupyter_notebook_math_python_python_3.x_symbolic_math.txt
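Putting the accepted fix together, a complete runnable version follows. Note that the question's own snippet also mixes import sympy as sym with bare sympy.Symbol calls, which would fail before the phi error is even reached; the sketch below uses a consistent import. The positive=True assumption on omega is my addition to keep the convergence conditions simple.

import sympy
from sympy.abc import s, t, phi               # phi is now defined
from sympy.integrals import laplace_transform

omega = sympy.Symbol("omega", real=True, positive=True)
f = sympy.sin(omega * t + phi)

U = laplace_transform(f, t, s)
print(U[0])   # the transform itself; U[1] and U[2] carry convergence info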
Q: Launching specific fragment of an activity from a background service on Android 5.1.1 I have an app with a working background service running on a custom made Android 5 device (so I can use regular background service and I am ok with it) My goal is that background service can send some kind of intent start or open the app and specific fragment from the activity. My activity_main.xml looks as follows <?xml version="1.0" encoding="utf-8"?> <androidx.fragment.app.FragmentContainerView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:id="@+id/main_content" android:name="androidx.navigation.fragment.NavHostFragment" android:layout_width="match_parent" android:layout_height="match_parent" app:defaultNavHost="true" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" app:navGraph="@navigation/nav_graph" /> And the MainActivity.kt looks as follows @AndroidEntryPoint class MainActivity : AppCompatActivity() { private lateinit var binding: ActivityMainBinding override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) binding = ActivityMainBinding.inflate(layoutInflater) setContentView(binding.root) val intent = Intent(this, BackgroundService::class.java) // Service restarts from here! this.startService(intent) } } I have tried adding intent to just start the app from the service based on the advice from stack overflow val intent = Intent(this, MainActivity::class.java) // Service restarts from here! this.startService(intent) but it didn't work... I have also added <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/> In AndroidManifest So it looks like the problem is twofold, how to start/open activity and how to switch to the correct fragment. I will be thankful for any advice A: Try to add new task flag: val intent = Intent(this, MainActivity::class.java) // Service restarts from here! intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK) this.startActivity(intent)
Launching specific fragment of an activity from a background service on Android 5.1.1
I have an app with a working background service running on a custom-made Android 5 device (so I can use a regular background service and I am OK with it). My goal is for the background service to send some kind of intent to start or open the app at a specific fragment of the activity. My activity_main.xml looks as follows <?xml version="1.0" encoding="utf-8"?> <androidx.fragment.app.FragmentContainerView xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" android:id="@+id/main_content" android:name="androidx.navigation.fragment.NavHostFragment" android:layout_width="match_parent" android:layout_height="match_parent" app:defaultNavHost="true" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" app:navGraph="@navigation/nav_graph" /> And the MainActivity.kt looks as follows @AndroidEntryPoint class MainActivity : AppCompatActivity() { private lateinit var binding: ActivityMainBinding override fun onCreate(savedInstanceState: Bundle?) { super.onCreate(savedInstanceState) binding = ActivityMainBinding.inflate(layoutInflater) setContentView(binding.root) val intent = Intent(this, BackgroundService::class.java) // Service restarts from here! this.startService(intent) } } I have tried adding an intent to just start the app from the service, based on advice from Stack Overflow val intent = Intent(this, MainActivity::class.java) // Service restarts from here! this.startService(intent) but it didn't work... I have also added <uses-permission android:name="android.permission.SYSTEM_ALERT_WINDOW"/> in the AndroidManifest. So it looks like the problem is twofold: how to start/open the activity and how to switch to the correct fragment. I will be thankful for any advice
[ "Try to add new task flag:\nval intent = Intent(this, MainActivity::class.java) // Service restarts from here!\n\nintent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK)\n\nthis.startActivity(intent)\n\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_fragments", "android_intent", "android_service" ]
stackoverflow_0074658032_android_android_fragments_android_intent_android_service.txt
Q: core/no-options Firebase error in flutter web firebase storage I want to store file in firebase storage when i use firebaseStorage instance to put data i got this error : Uncaught (in promise) Error: [core/no-options] Firebase: Need to provide options, when not being deployed to hosting via source.. note that i dont have firebase opitions file Future main() async { runApp(StartPoint()); WidgetsFlutterBinding.ensureInitialized(); await Firebase.initializeApp( name: "myapp", options: kIsWeb || Platform.isAndroid ? FirebaseOptions( apiKey: "api_key", appId: "appID", messagingSenderId: "messaging", projectId: "proID", storageBucket: "myapp.appspot.com", ):null, ); } void UploadFiles() async{ UploadTask task = FirebaseStorage.instance.ref().child("files/$filename").putData(bfile); task.snapshotEvents.listen((event) { setState(() { progress = ((event.bytesTransferred.toDouble()/event.totalBytes.toDouble())*100).roundToDouble(); print(progress); }); }); A: It looks like you are trying to initialize the Firebase app with a null options object. It seems that you are checking for the kIsWeb and Platform.isAndroid flags before providing options, but you are not providing options if those flags are false. This is causing the error you are seeing, because Firebase requires options to be provided in order to initialize the app. To fix this error, you can provide options for the Firebase app in all cases, rather than only providing options when kIsWeb or Platform.isAndroid is true. For example, you could provide default options as shown below: Future main() async { runApp(StartPoint()); WidgetsFlutterBinding.ensureInitialized(); FirebaseOptions defaultOptions = FirebaseOptions( apiKey: "api_key", appId: "appID", messagingSenderId: "messaging", projectId: "proID", storageBucket: "myapp.appspot.com", ); await Firebase.initializeApp( name: "myapp", options: defaultOptions, ); } Alternatively, you could remove the kIsWeb || Platform.isAndroid check and provide options in all cases. This would allow you to initialize the Firebase app regardless of the platform or deployment environment. Future main() async { runApp(StartPoint()); WidgetsFlutterBinding.ensureInitialized(); await Firebase.initializeApp( name: "myapp", options: FirebaseOptions( apiKey: "api_key", appId: "appID", messagingSenderId: "messaging", projectId: "proID", storageBucket: "myapp.appspot.com", ), ); }
core/no-options Firebase error in flutter web firebase storage
I want to store a file in Firebase Storage. When I use the FirebaseStorage instance to put data I get this error: Uncaught (in promise) Error: [core/no-options] Firebase: Need to provide options, when not being deployed to hosting via source.. Note that I don't have a firebase_options file. Future main() async { runApp(StartPoint()); WidgetsFlutterBinding.ensureInitialized(); await Firebase.initializeApp( name: "myapp", options: kIsWeb || Platform.isAndroid ? FirebaseOptions( apiKey: "api_key", appId: "appID", messagingSenderId: "messaging", projectId: "proID", storageBucket: "myapp.appspot.com", ):null, ); } void UploadFiles() async{ UploadTask task = FirebaseStorage.instance.ref().child("files/$filename").putData(bfile); task.snapshotEvents.listen((event) { setState(() { progress = ((event.bytesTransferred.toDouble()/event.totalBytes.toDouble())*100).roundToDouble(); print(progress); }); });
[ "It looks like you are trying to initialize the Firebase app with a null options object. It seems that you are checking for the kIsWeb and Platform.isAndroid flags before providing options, but you are not providing options if those flags are false. This is causing the error you are seeing, because Firebase requires options to be provided in order to initialize the app.\nTo fix this error, you can provide options for the Firebase app in all cases, rather than only providing options when kIsWeb or Platform.isAndroid is true. For example, you could provide default options as shown below:\nFuture main() async {\n runApp(StartPoint());\n WidgetsFlutterBinding.ensureInitialized();\n\n FirebaseOptions defaultOptions = FirebaseOptions(\n apiKey: \"api_key\",\n appId: \"appID\",\n messagingSenderId: \"messaging\",\n projectId: \"proID\",\n storageBucket: \"myapp.appspot.com\",\n );\n\n await Firebase.initializeApp(\n name: \"myapp\",\n options: defaultOptions,\n );\n}\n\nAlternatively, you could remove the kIsWeb || Platform.isAndroid check and provide options in all cases. This would allow you to initialize the Firebase app regardless of the platform or deployment environment.\nFuture main() async {\n runApp(StartPoint());\n WidgetsFlutterBinding.ensureInitialized();\n\n await Firebase.initializeApp(\n name: \"myapp\",\n options: FirebaseOptions(\n apiKey: \"api_key\",\n appId: \"appID\",\n messagingSenderId: \"messaging\",\n projectId: \"proID\",\n storageBucket: \"myapp.appspot.com\",\n ),\n );\n}\n\n" ]
[ 0 ]
[]
[]
[ "dart", "firebase", "flutter" ]
stackoverflow_0074657959_dart_firebase_flutter.txt
Q: Unable to use css transitions With the code below I can get a transition, but there are two issues: the transition seems to be going the wrong way from what I defined (right to left instead of left to right), and it is supposed to transition the old widget out while the new one enters, but it immediately replaces the old one with the new one and only then does the transition begin. By swapping .fadeEnterActive with .fadeExitActive I had the second issue fixed, but the entering element now starts with the .fadeEnterDone className for some reason. Can anyone spot the issue? How can I fix that? import { useRef, useState } from "react"; import { SwitchTransition, CSSTransition } from "react-transition-group"; import PayContainer from "../payContainer"; import PersonalInfoForm from "../persnalInfoForm"; import styles from './drawer_container.module.scss' export default function DrawerContainer() { const [state, setState] = useState('pay-fill'); const payRef = useRef(null); const personalInfoRef = useRef(null); const previewRef = useRef(null); const nodeRef = () => { switch (state) { case 'pay-fill': return payRef case 'personalinfo-fill': return personalInfoRef case 'preview': return previewRef default: return null; } } const setActiveState = (curState: string) => { setState(curState) } const getWidget = () => { switch (state) { case 'pay-fill': return <PayContainer setState={setActiveState}/> case 'personalinfo-fill': return <PersonalInfoForm /> case 'preview': return <div>preview</div> default: return <div>default</div>; } } return <div className="w-full h-full"> <SwitchTransition mode={'out-in'}> <CSSTransition key={state} nodeRef={nodeRef()} addEndListener={(done: () => void) => { nodeRef()?.current?.addEventListener("transitionend", done, false); }} // in={focus} timeout={500} classNames={{ enterActive: styles.fadeEnterActive, enterDone: styles.fadeEnterDone, exitActive: styles.fadeExitActive, exitDone: styles.fadeExitDone }} > <div ref={nodeRef()} className={['w-full h-full', styles.fade].join(' ')}> {getWidget()} </div> </CSSTransition> </SwitchTransition> </div>; } styles .fadeEnterActive { opacity: 0; transform: translateX(-100%); } .fadeEnterDone { opacity: 1; transform: translateX(0%); } .fadeExitActive { opacity: 1; transform: translateX(0%); } .fadeExitDone { opacity: 0; transform: translateX(100%); } .fadeEnterActive, .fadeExitActive { transition: opacity 500ms, transform 500ms; } .fade { } A: By looking at the react-transition-group documentation, I am guessing that the problem is occurring because the starting point of the transition for fade-enter and fade-exit is not defined. And also the styles of fadeEnterActive and fadeExitActive that you provided seem to be in opposite directions. This code should work.
React snippet <SwitchTransition mode={'out-in'}> <CSSTransition key={state} nodeRef={nodeRef()} addEndListener={(done: () => void) => { nodeRef()?.current?.addEventListener("transitionend", done, false); }} // in={focus} timeout={500} classNames={{ enterActive: styles.fadeEnterActive, enter: styles.fadeEnter, exitActive: styles.fadeExitActive, exit: styles.fadeExit }} > <div ref={nodeRef()} className={['w-full h-full', styles.fade].join(' ')}> {getWidget()} </div> </CSSTransition> </SwitchTransition> Styles .fadeEnter { opacity: 0; transform: translateX(-100%); } .fadeEnterActive { opacity: 1; transform: translateX(0%); } .fadeExit { opacity: 1; transform: translateX(0%); } .fadeExitActive { opacity: 0; transform: translateX(100%); } .fadeEnterActive, .fadeExitActive { transition: opacity 500ms, transform 500ms; }
Unable to use css transitions
With the code below I can get a transition, but there are two issues: the transition seems to be going the wrong way from what I defined (right to left instead of left to right), and it is supposed to transition the old widget out while the new one enters, but it immediately replaces the old one with the new one and only then does the transition begin. By swapping .fadeEnterActive with .fadeExitActive I had the second issue fixed, but the entering element now starts with the .fadeEnterDone className for some reason. Can anyone spot the issue? How can I fix that? import { useRef, useState } from "react"; import { SwitchTransition, CSSTransition } from "react-transition-group"; import PayContainer from "../payContainer"; import PersonalInfoForm from "../persnalInfoForm"; import styles from './drawer_container.module.scss' export default function DrawerContainer() { const [state, setState] = useState('pay-fill'); const payRef = useRef(null); const personalInfoRef = useRef(null); const previewRef = useRef(null); const nodeRef = () => { switch (state) { case 'pay-fill': return payRef case 'personalinfo-fill': return personalInfoRef case 'preview': return previewRef default: return null; } } const setActiveState = (curState: string) => { setState(curState) } const getWidget = () => { switch (state) { case 'pay-fill': return <PayContainer setState={setActiveState}/> case 'personalinfo-fill': return <PersonalInfoForm /> case 'preview': return <div>preview</div> default: return <div>default</div>; } } return <div className="w-full h-full"> <SwitchTransition mode={'out-in'}> <CSSTransition key={state} nodeRef={nodeRef()} addEndListener={(done: () => void) => { nodeRef()?.current?.addEventListener("transitionend", done, false); }} // in={focus} timeout={500} classNames={{ enterActive: styles.fadeEnterActive, enterDone: styles.fadeEnterDone, exitActive: styles.fadeExitActive, exitDone: styles.fadeExitDone }} > <div ref={nodeRef()} className={['w-full h-full', styles.fade].join(' ')}> {getWidget()} </div> </CSSTransition> </SwitchTransition> </div>; } styles .fadeEnterActive { opacity: 0; transform: translateX(-100%); } .fadeEnterDone { opacity: 1; transform: translateX(0%); } .fadeExitActive { opacity: 1; transform: translateX(0%); } .fadeExitDone { opacity: 0; transform: translateX(100%); } .fadeEnterActive, .fadeExitActive { transition: opacity 500ms, transform 500ms; } .fade { }
[ "By looking at the react-transition-group documentation, I am guessing that the problem is occurring because the starting point of the transition for fade-enter and fade-exit is not defined. And also the styles of fadeEnterActive and fadeExitActive that you provided seem to be in opposite directions.\nThis code should work.\nReact snippet\n <SwitchTransition mode={'out-in'}>\n <CSSTransition\n key={state}\n nodeRef={nodeRef()}\n addEndListener={(done: () => void) => {\n nodeRef()?.current?.addEventListener(\"transitionend\", done, false);\n }}\n // in={focus}\n timeout={500}\n classNames={{\n enterActive: styles.fadeEnterActive,\n enter: styles.fadeEnter,\n exitActive: styles.fadeExitActive,\n exit: styles.fadeExit\n }}\n >\n <div ref={nodeRef()} className={['w-full h-full', styles.fade].join(' ')}>\n {getWidget()}\n </div>\n </CSSTransition>\n </SwitchTransition>\n\nStyles\n .fadeEnter {\n opacity: 0;\n transform: translateX(-100%);\n }\n .fadeEnterActive {\n opacity: 1;\n transform: translateX(0%);\n }\n .fadeExit {\n opacity: 1;\n transform: translateX(0%);\n }\n .fadeExitActive {\n opacity: 0;\n transform: translateX(100%);\n }\n .fadeEnterActive,\n .fadeExitActive {\n transition: opacity 500ms, transform 500ms;\n }\n\n" ]
[ 1 ]
[]
[]
[ "css", "javascript", "react_transition_group" ]
stackoverflow_0074608401_css_javascript_react_transition_group.txt
Q: The LINQ expression '...' could not be translated I am getting the following error: The LINQ expression 'DbSet<Rule>()\r\n .Where(r => True && True && False || r.Title.Contains(\r\n value: \"i\", \r\n comparisonType: OrdinalIgnoreCase) && True && True && True)' could not be translated. Additional information: Translation of method 'string.Contains' failed. If this method can be mapped to your custom function, see https://go.microsoft.com/fwlink/?linkid=2132413 for more information. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See https://go.microsoft.com/fwlink/?linkid=2101038 for more information. This happens when I am using the following code: return x => (!model.Status.HasValue || x.Status == model.Status) && (!model.RuleTypeId.HasValue || x.RuleTypeId == model.RuleTypeId) && (string.IsNullOrWhiteSpace(model.Title) || x.Title.Contains(model.Title, StringComparison.OrdinalIgnoreCase)) && (!model.UpdateDateFrom.HasValue || x.UpdateDate >= model.UpdateDateFrom) && (!model.UpdateDateTo.HasValue || x.UpdateDate <= model.UpdateDateTo) && (!model.UpdatedBy.HasValue || x.UpdatedBy == model.UpdatedBy); Used versions: .NET 6, EF Core 6.0.11 The problem was solved when I used EF.Functions.Like(x.Title, $"%{model.Title}%") instead of x.Title.Contains(...). Why am I not able to use Contains?
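A note that is not part of the original post: EF Core can translate the single-argument string.Contains(string) overload to SQL, but the overload taking a StringComparison has no SQL translation, which is exactly what the error reports. A hedged sketch of translatable alternatives (db and title are placeholder names, not from the question):

    // 1. Single-argument Contains -> LIKE; case sensitivity follows the column's collation
    var a = await db.Rules.Where(r => r.Title.Contains(title)).ToListAsync();

    // 2. Explicit LIKE with wildcards (the fix from the question)
    var b = await db.Rules.Where(r => EF.Functions.Like(r.Title, $"%{title}%")).ToListAsync();

    // 3. Forced case-insensitivity regardless of collation; translates to LOWER(...),
    //    but can prevent index use
    var c = await db.Rules.Where(r => r.Title.ToLower().Contains(title.ToLower())).ToListAsync();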
The LINQ expression '...' could not be translated
I am getting the following error: The LINQ expression 'DbSet<Rule>()\r\n .Where(r => True && True && False || r.Title.Contains(\r\n value: \"i\", \r\n comparisonType: OrdinalIgnoreCase) && True && True && True)' could not be translated. Additional information: Translation of method 'string.Contains' failed. If this method can be mapped to your custom function, see https://go.microsoft.com/fwlink/?linkid=2132413 for more information. Either rewrite the query in a form that can be translated, or switch to client evaluation explicitly by inserting a call to 'AsEnumerable', 'AsAsyncEnumerable', 'ToList', or 'ToListAsync'. See https://go.microsoft.com/fwlink/?linkid=2101038 for more information. This happens when I am using the following code: return x => (!model.Status.HasValue || x.Status == model.Status) && (!model.RuleTypeId.HasValue || x.RuleTypeId == model.RuleTypeId) && (string.IsNullOrWhiteSpace(model.Title) || x.Title.Contains(model.Title, StringComparison.OrdinalIgnoreCase)) && (!model.UpdateDateFrom.HasValue || x.UpdateDate >= model.UpdateDateFrom) && (!model.UpdateDateTo.HasValue || x.UpdateDate <= model.UpdateDateTo) && (!model.UpdatedBy.HasValue || x.UpdatedBy == model.UpdatedBy); Used versions: .NET 6, EF Core 6.0.11 The problem was solved when I used EF.Functions.Like(x.Title, $"%{model.Title}%") instead of x.Title.Contains(...). Why am I not able to use Contains?
[]
[]
[ ".Contains has no equivalent in SQL, so it is impossible to translate it into a clause to be evaluated on the server.\n" ]
[ -1 ]
[ ".net_6.0", "c#", "ef_core_6.0", "entity_framework_core" ]
stackoverflow_0074658198_.net_6.0_c#_ef_core_6.0_entity_framework_core.txt
Q: TailwindCSS animations on conditionally rendered components in React If I am conditionally rendering a component depending on some state, how can I animate its transition between its open and closed states in React with TailwindCSS? {successMessage && ( <div className="flex transition-all ease-in-out duration-300 bg-gray-200 w-44 items-center justify-between px-2 rounded"> <p>Added to watchlist!</p> <button onClick={() => setSuccessMessage(false)}>X</button> </div> )} This code half works but there is no animation or transition period to it. How can I fix this? A: Try something like this : <div className={`flex transition-all ease-in-out duration-300 bg-gray-200 w-44 items-center justify-between px-2 rounded ${your_state ? 'opacity-100' : 'opacity-0'}`}> ... </div> A: Two states works great. With first one add/remove "hidden" class. And with second one change opacity/height/translate or what you need for animation. Use useEffect and setTimeout with 0 delay for changing the state of secondState. like below: useEffect(() => { setTimeout(() => { setSecondState(firstState) }, 0) }, [firstState]) <div className={`${firstState ? "hidden" : ""} ${secondState ? "opacity-100" : "opacity-0"}`} />
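As a further illustration of the second answer's two-state idea (my own sketch; component and class names are illustrative, not from the question): render keeps the node mounted while the exit transition plays, and shown drives the Tailwind classes.

    import React from "react";

    function Toast({ open, children }) {
      const [render, setRender] = React.useState(open);
      const [shown, setShown] = React.useState(open);

      React.useEffect(() => {
        if (open) setRender(true);
        const t = setTimeout(() => setShown(open), 0); // next tick so the transition can fire
        return () => clearTimeout(t);
      }, [open]);

      if (!render) return null;
      return (
        <div
          className={`transition-opacity duration-300 ${shown ? "opacity-100" : "opacity-0"}`}
          onTransitionEnd={() => { if (!open) setRender(false); }}
        >
          {children}
        </div>
      );
    }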
TailwindCSS animations on conditionally rendered components in React
If I am conditionally rendering a component depending on some state, how can I animate its transition between its open and closed states in React with TailwindCSS? {successMessage && ( <div className="flex transition-all ease-in-out duration-300 bg-gray-200 w-44 items-center justify-between px-2 rounded"> <p>Added to watchlist!</p> <button onClick={() => setSuccessMessage(false)}>X</button> </div> )} This code half works but there is no animation or transition period to it. How can I fix this?
[ "Try something like this :\n <div className={`flex transition-all ease-in-out duration-300 bg-gray-200 w-44 items-center justify-between px-2 rounded ${your_state ? 'opacity-100' : 'opacity-0'}`}>\n ...\n </div>\n\n", "Two states works great.\nWith first one add/remove \"hidden\" class.\nAnd with second one change opacity/height/translate or what you need for animation.\nUse useEffect and setTimeout with 0 delay for changing the state of secondState. like below:\nuseEffect(() => {\nsetTimeout(() => {\n setSecondState(firstState)\n}, 0) }, [firstState])\n\n<div className={`${firstState ? \"hidden\" : \"\"} ${secondState ? \"opacity-100\" : \"opacity-0\"}`} />\n\n" ]
[ 2, 0 ]
[ "I would recommend AOS libary to do this, as you would need to do some more work to make this work with TailwindCSS.\nTry this\nInstall Aos\nnpm install aos --save\n\nPut in your file\nimport React, { useEffect } from \"react\";\nimport AOS from 'aos';\nimport \"aos/dist/aos.css\";\n\nuseEffect(() => {\n AOS.init({\n duration : 1000\n });\n}, []);\n\nYou can change duration as you like.\n\nAdd as attribute to HTML tag\ndata-aos=\"animation_name\"\n\nExample\n <div> data-aos=\"fadeIn\" </div>\n\nExtra Attributes you can use\n data-aos=\"fade-up\"\n data-aos-offset=\"200\"\n data-aos-delay=\"50\"\n data-aos-duration=\"1000\"\n data-aos-easing=\"ease-in-out\"\n data-aos-mirror=\"true\"\n data-aos-once=\"false\"\n data-aos-anchor-placement=\"top-center\"\n\nList of all animation\nDocumentation\n" ]
[ -1 ]
[ "css", "javascript", "next.js", "reactjs", "tailwind_css" ]
stackoverflow_0069874496_css_javascript_next.js_reactjs_tailwind_css.txt
Q: How to get the total number of branches ever created in a git repository? I'm working on a large git repository and am trying to get some metrics. One such metric is the number of branches ever created and in use. From this post: Get total remote branches in git, one can get the number of current branches. But how can I get the number of branches ever created in a repository, including deleted ones? A: As a possible substitute metric, you could find out how many commits that are ~extra~ children of a commit have single parents, those were new branches off an existing root. Count those and the roots, you've got total actual branches in history. This won't tell you about any branches that never wound up contributing any commits. ( git rev-list --all --children; echo; git rev-list --all --parents --no-merges ) \ | awk ' !doneloading && NF>2 { i=2; while(++i<=NF) branchchild[$i]=1 } /^$/ { doneloading=1 } doneloading && (NF==1 || $1 in branchchild) { print $1 } ' | wc A: Since nothing is ever really deleted in a repo Not quite. The git reflog (for 90 days by default) or git fsck commands can help you list old commits, but you would still have to remember the branch name in order to restore it (as in here). And that supposes a direct access to the common repository where those 100 contributors are pushing to. If the remote repository is managed by a repository hosting service, like for instance GitHub, you would use the Events API in order to find back the trace of deleted branches. In both instances though, once a branch is deleted, it is not easy to find it back, and that would be even trickier for all past deleted branch. A: I am pretty late here but giving answer as it may help others. You can print total number of remote branches. git branch -r | wc -l git branch -r command is for remote branches and wc -l will print total number of branches. Note: It will give you existing branches, if you deleted permanently then you can not get them by executing above command. happy coding and learning
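To make the reflog idea from the second answer concrete, a rough sketch (a lower bound only: it sees just the branches checked out locally, and only as far back as the reflog reaches, about 90 days by default):

    # current remote branches, as in the last answer
    git branch -r | wc -l

    # distinct branch names ever checked out locally, scraped from the reflog
    git reflog | grep 'checkout: moving from' | awk '{print $NF}' | sort -u | wc -l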
How to get the total number of branches ever created in a git repository?
I'm working on a large git repository and am trying to get some metrics. One such metric is the number of branches ever created and in use. From this post: Get total remote branches in git, one can get the number of current branches. But how can I get the number of branches ever created in a repository, including deleted ones?
[ "As a possible substitute metric, you could find out how many commits that are ~extra~ children of a commit have single parents, those were new branches off an existing root. Count those and the roots, you've got total actual branches in history. This won't tell you about any branches that never wound up contributing any commits.\n( git rev-list --all --children; echo; git rev-list --all --parents --no-merges ) \\\n| awk ' !doneloading && NF>2 { i=2; while(++i<=NF) branchchild[$i]=1 }\n /^$/ { doneloading=1 }\n doneloading && (NF==1 || $1 in branchchild) { print $1 }\n' | wc\n\n", "\nSince nothing is ever really deleted in a repo\n\nNot quite. The git reflog (for 90 days by default) or git fsck commands can help you list old commits, but you would still have to remember the branch name in order to restore it (as in here).\nAnd that supposes a direct access to the common repository where those 100 contributors are pushing to.\nIf the remote repository is managed by a repository hosting service, like for instance GitHub, you would use the Events API in order to find back the trace of deleted branches.\nIn both instances though, once a branch is deleted, it is not easy to find it back, and that would be even trickier for all past deleted branch.\n", "I am pretty late here but giving answer as it may help others.\nYou can print total number of remote branches.\ngit branch -r | wc -l\n\ngit branch -r command is for remote branches and wc -l will print total number of branches.\nNote: It will give you existing branches, if you deleted permanently then you can not get them by executing above command.\nhappy coding and learning\n" ]
[ 3, 0, 0 ]
[]
[]
[ "git" ]
stackoverflow_0065603593_git.txt
Q: How to get rid of the in place FutureWarning when setting an entire column from an array? In pandas v.1.5.0 a new warning has been added, which is shown, when a column is set from an array of different dtype. The FutureWarning informs about a planned semantic change, when using iloc: the change will be done in-place in future versions. The changelog instructs what to do to get the old behavior, but there is no hint how to handle the situation, when in-place operation is in fact the right choice. The example from the changelog: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices df.iloc[:, 0] This is the warning, which is printed in pandas 1.5.0: FutureWarning: In a future version, df.iloc[:, i] = newvals will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either df[df.columns[i]] = newvals or, if columns are non-unique, df.isetitem(i, newvals) How to get rid of the warning, if I don't care about in-place or not, but want to get rid of the warning? Am I supposed to change dtype explicitly? Do I really need to catch the warning every single time I need to use this feature? Isn't there a better way? A: I haven't found any better way than suppressing the warning using the warnings module: import numpy as np import pandas as pd import warnings df = pd.DataFrame({"price": [11.1, 12.2]}, index=["book1", "book2"]) original_prices = df["price"] new_prices = np.array([98, 99]) with warnings.catch_warnings(): # Setting values in-place is fine, ignore the warning in Pandas >= 1.5.0 # This can be removed, if Pandas 1.5.0 does not need to be supported any longer. # See also: https://stackoverflow.com/q/74057367/859591 warnings.filterwarnings( "ignore", category=FutureWarning, message=( ".*will attempt to set the values inplace instead of always setting a new array. " "To retain the old behavior, use either.*" ), ) df.iloc[:, 0] = new_prices df.iloc[:, 0] A: Post here because I can't comment yet. For now I think I will also suppress the warnings, because I don't want the old behavior, never expected to use it that way. And the suggested syntax has the danger to trigger the SettingWithCopyWarning warning. A: As the changelog states, the warning is printed when setting an entire column from an array with different dtype, so adjusting the dtype is one way to silence it: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]).astype(float) df.iloc[:, 0] = new_prices df.iloc[:, 0] Note the additional .astype(float). Not an ideal solution, but a solution.
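For completeness, a sketch of the two replacements named in the warning text itself; both set the column positionally without ambiguity and without triggering the FutureWarning:

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2'])
    new_prices = np.array([98, 99])

    # documented replacement for df.iloc[:, i] = vals when column labels are unique
    df[df.columns[0]] = new_prices

    # positional setter that also works with duplicate labels (pandas >= 1.5)
    df.isetitem(0, new_prices)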
How to get rid of the in place FutureWarning when setting an entire column from an array?
In pandas v.1.5.0 a new warning has been added, which is shown, when a column is set from an array of different dtype. The FutureWarning informs about a planned semantic change, when using iloc: the change will be done in-place in future versions. The changelog instructs what to do to get the old behavior, but there is no hint how to handle the situation, when in-place operation is in fact the right choice. The example from the changelog: df = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2']) original_prices = df['price'] new_prices = np.array([98, 99]) df.iloc[:, 0] = new_prices df.iloc[:, 0] This is the warning, which is printed in pandas 1.5.0: FutureWarning: In a future version, df.iloc[:, i] = newvals will attempt to set the values inplace instead of always setting a new array. To retain the old behavior, use either df[df.columns[i]] = newvals or, if columns are non-unique, df.isetitem(i, newvals) How to get rid of the warning, if I don't care about in-place or not, but want to get rid of the warning? Am I supposed to change dtype explicitly? Do I really need to catch the warning every single time I need to use this feature? Isn't there a better way?
[ "I haven't found any better way than suppressing the warning using the warnings module:\nimport numpy as np\nimport pandas as pd\nimport warnings\n\ndf = pd.DataFrame({\"price\": [11.1, 12.2]}, index=[\"book1\", \"book2\"])\noriginal_prices = df[\"price\"]\nnew_prices = np.array([98, 99])\nwith warnings.catch_warnings():\n # Setting values in-place is fine, ignore the warning in Pandas >= 1.5.0\n # This can be removed, if Pandas 1.5.0 does not need to be supported any longer.\n # See also: https://stackoverflow.com/q/74057367/859591\n warnings.filterwarnings(\n \"ignore\",\n category=FutureWarning,\n message=(\n \".*will attempt to set the values inplace instead of always setting a new array. \"\n \"To retain the old behavior, use either.*\"\n ),\n )\n\n df.iloc[:, 0] = new_prices\n\ndf.iloc[:, 0]\n\n", "Post here because I can't comment yet.\nFor now I think I will also suppress the warnings, because I don't want the old behavior, never expected to use it that way. And the suggested syntax has the danger to trigger the SettingWithCopyWarning warning.\n", "As the changelog states, the warning is printed when setting an entire column from an array with different dtype, so adjusting the dtype is one way to silence it:\ndf = pd.DataFrame({'price': [11.1, 12.2]}, index=['book1', 'book2'])\noriginal_prices = df['price']\nnew_prices = np.array([98, 99]).astype(float)\ndf.iloc[:, 0] = new_prices\ndf.iloc[:, 0]\n\nNote the additional .astype(float). Not an ideal solution, but a solution.\n" ]
[ 4, 0, 0 ]
[ "I am just filtering all future warnings for now:\nimport warnings\nwarnings.simplefilter(\"ignore\", category=FutureWarning)\n\n" ]
[ -2 ]
[ "pandas", "python" ]
stackoverflow_0074057367_pandas_python.txt
Q: content project elements through directive in angular 13 I have the following scenario: page.component.html <app-component-w-directive> <child-component></child-component> </app-component-w-directive> component-w-directive.component.html <ng-template myCustomDirective [someInputs]="someValues" [someInputs]="someValues" [someInputs]="someValues" > <!-- my failed attempt --> <ng-content></ng-content> </ng-template> I use the component-w-directive component to cast different components dynamically depending on some information, and I want them all to share the same <child-component> from page.component.html. Currently, within component-w-directive.component I have full access to <child-component>. I tried the following to drill the ng-content down to one of the components generated dynamically with the directive, with no success: in any of the "cast-able" components, the ng-content is undefined. casted-from-directive.component.html <!-- some html --> <ng-content></ng-content> <!-- (expected to be the child-component from page.component.html) --> <!-- some html --> How can I project the <child-component> into the dynamically generated ones through the directive? EDIT: here's an example https://stackblitz.com/edit/angular-ivy-qgbslk
content project elements through directive in angular 13
I have the following scenario: page.component.html <app-component-w-directive> <child-component></child-component> </app-component-w-directive> component-w-directive.component.html <ng-template myCustomDirective [someInputs]="someValues" [someInputs]="someValues" [someInputs]="someValues" > <!-- my failed attempt --> <ng-content></ng-content> </ng-template> I use the component-w-directive component to cast different components dynamically depending on some information, and I want them all to share the same <child-component> from page.component.html. Currently, within component-w-directive.component I have full access to <child-component>. I tried the following to drill the ng-content down to one of the components generated dynamically with the directive, with no success: in any of the "cast-able" components, the ng-content is undefined. casted-from-directive.component.html <!-- some html --> <ng-content></ng-content> <!-- (expected to be the child-component from page.component.html) --> <!-- some html --> How can I project the <child-component> into the dynamically generated ones through the directive? EDIT: here's an example https://stackblitz.com/edit/angular-ivy-qgbslk
[ "I did not find an elegant way for any component, so it will work only when you have an access to injected components (Today, Tomorrow)\n\nyou need to change this components as well as MyComponent interface\nTodayComponent HTML:\n\n<ng-container *ngTemplateOutlet=\"content\"></ng-container>\n\nTodayComponent.ts\ncontent?: TemplateRef<any>;\n\n\nNow we need to add same input to creating directive and pass it to the creating component\n\n @Input() contentTemplate: TemplateRef<any>;\n...\n\n this.cmpRef = this.viewContainerRef.createComponent<MyComponent>(component);\n this.cmpRef.instance.content = this.contentTemplate;\n\n\nFinal step is to make a bridge between Hook component and directive. Notice that we need to wrap ng-content into ng-template and pass it as ref to the creating directive.\n\n<ng-container\n appDynamicComponent\n [compSelector]=\"compSelector\"\n [color]=\"color\"\n [contentTemplate]=\"content\"\n></ng-container>\n\n<ng-template #content>\n <ng-content></ng-content>\n</ng-template>\n\n\nWorking sample\n\n\n\n" ]
[ 0 ]
[]
[]
[ "angular", "angular13", "angular_content_projection", "ng_content", "ng_template" ]
stackoverflow_0074630351_angular_angular13_angular_content_projection_ng_content_ng_template.txt
Q: Select specific/all columns in rowwise I have the following table: col1 col2 col3 col4 1 2 1 4 5 6 6 3 My goal is to find the max value per each row, and then find how many times it was repeated in the same row. The resulting table should look like this: col1 col2 col3 col4 max_val repetition 1 2 1 4 4 1 5 6 6 3 6 2 Now to achieve this, I am doing the following for Max: df%>% rowwise%>% mutate(max=max(col1:col4)) However, I am struggling to find the repetition. My idea is to use this pseudo code in mutate: sum( "select current row entirely or only for some columns"==max). But I don't know how to select entire row or only some columns of it and use its content to do the check, i.e.: is it equal to the max. How can we do this in dplyr? A: A dplyr approach: library(dplyr) df %>% rowwise() %>% mutate(max_val = max(across(everything())), repetition = sum(across(col1:col4) == max_val)) # A tibble: 2 × 6 # Rowwise: col1 col2 col3 col4 max_val repetition <int> <int> <int> <int> <int> <int> 1 1 2 1 4 4 1 2 5 6 6 3 6 2 An R base approach: df$max_val <- apply(df,1,max) df$repetition <- rowSums(df[, 1:4] == df[, 5]) A: For other (non-tidyverse) readers, a base R approach could be: df$max_val <- apply(df, 1, max) df$repetition <- apply(df, 1, function(x) sum(x[1:4] == x[5])) Output: # col1 col2 col3 col4 max_val repetition # 1 1 2 1 4 4 1 # 2 5 6 6 3 6 2 A: Although dplyr has added many tools for working across rows of data, it remains, in my mind at least, much easier to adhere to tidy principles and always convert the data to "long" format for these kinds of operations. Thus, here is a tidy approach: df %>% mutate(row = row_number()) %>% pivot_longer(cols = -row) %>% group_by(row) %>% mutate(max_val = max(value), repetitions = sum(value == max(value))) %>% pivot_wider(id_cols = c(row, max_val, repetitions)) %>% select(col1:col4, max_val, repetitions) The last select() is just to get the columns in the order you want.
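One more option, not from the answers above: with a fixed handful of columns you can skip rowwise() entirely and stay vectorized, which scales better on large tables.

    library(dplyr)

    df %>%
      mutate(
        max_val = pmax(col1, col2, col3, col4),
        repetition = rowSums(cbind(col1, col2, col3, col4) == max_val)
      )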
Select specific/all columns in rowwise
I have the following table: col1 col2 col3 col4 1 2 1 4 5 6 6 3 My goal is to find the max value per each row, and then find how many times it was repeated in the same row. The resulting table should look like this: col1 col2 col3 col4 max_val repetition 1 2 1 4 4 1 5 6 6 3 6 2 Now to achieve this, I am doing the following for Max: df%>% rowwise%>% mutate(max=max(col1:col4)) However, I am struggling to find the repetition. My idea is to use this pseudo code in mutate: sum( "select current row entirely or only for some columns"==max). But I don't know how to select entire row or only some columns of it and use its content to do the check, i.e.: is it equal to the max. How can we do this in dplyr?
[ "A dplyr approach:\nlibrary(dplyr)\ndf %>% \n rowwise() %>% \n mutate(max_val = max(across(everything())),\n repetition = sum(across(col1:col4) == max_val))\n\n# A tibble: 2 × 6\n# Rowwise: \n col1 col2 col3 col4 max_val repetition\n <int> <int> <int> <int> <int> <int>\n1 1 2 1 4 4 1\n2 5 6 6 3 6 2\n\nAn R base approach:\ndf$max_val <- apply(df,1,max)\ndf$repetition <- rowSums(df[, 1:4] == df[, 5])\n\n", "For other (non-tidyverse) readers, a base R approach could be:\ndf$max_val <- apply(df, 1, max)\ndf$repetition <- apply(df, 1, function(x) sum(x[1:4] == x[5]))\n\nOutput:\n# col1 col2 col3 col4 max_val repetition\n# 1 1 2 1 4 4 1\n# 2 5 6 6 3 6 2\n\n", "Although dplyr has added many tools for working across rows of data, it remains, in my mind at least, much easier to adhere to tidy principles and always convert the data to \"long\" format for these kinds of operations.\nThus, here is a tidy approach:\ndf %>%\n mutate(row = row_number()) %>%\n pivot_longer(cols = -row) %>%\n group_by(row) %>%\n mutate(max_val = max(value), repetitions = sum(value == max(value))) %>%\n pivot_wider(id_cols = c(row, max_val, repetitions)) %>%\n select(col1:col4, max_val, repetitions)\n\nThe last select() is just to get the columns in the order you want.\n" ]
[ 4, 1, 1 ]
[]
[]
[ "dplyr", "r" ]
stackoverflow_0074658183_dplyr_r.txt
Q: How Can I Pass a dataLayer Variable From One Domain to Another Domain Using GTM How can I pass a dataLayer variable from one domain to another domain using Google Tag Manager? Both domains use the same GTM container. Thank you. A: Whether the domains use the same GTM is not important. What's more important is whether they're the same TLD (top-level domain), or different TLDs. If it's the same TLD, then passing a variable would be easily done via a cookie. If it's a different TLD, then you could use a third party cookie. However, there has been a world-wide witch hunt after the third party cookies, due to which it's not a good idea to use them anymore. As the result, there are two fiable options to pass variables between TLDs, whether through GTM or just front-end. Employ backend. Make the backend keep and pass the variable. Based on its ability to sync sessions of users on different TLDs, if the backend is the same on all TLDs. Pass your variables by appending them to the url on navigations from one TLD to the other. Once GTM on the other side sees them, it can store them in a cookie and clean urls from the query params to make it look neater. The latter option is how GTM (and GA4) does cross-domain tracking by default.
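To make the second option concrete, a minimal front-end sketch; every name here (other-tld.example, dl_var, myDataLayerValue) is a hypothetical placeholder, not a GTM built-in:

    // sender, on TLD A: append the value to outbound links
    document.querySelectorAll('a[href^="https://other-tld.example"]').forEach(function (a) {
      var url = new URL(a.href);
      url.searchParams.set('dl_var', window.myDataLayerValue); // hypothetical source variable
      a.href = url.toString();
    });

    // receiver, on TLD B: read it back, push it to the dataLayer,
    // and persist it in a first-party cookie
    var value = new URL(location.href).searchParams.get('dl_var');
    if (value) {
      window.dataLayer = window.dataLayer || [];
      window.dataLayer.push({ event: 'cross_domain_var', dl_var: value });
      document.cookie = 'dl_var=' + encodeURIComponent(value) + '; path=/; max-age=2592000';
    }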
How Can I Pass a dataLayer Variable From One Domain to Another Domain Using GTM
How can I pass a dataLayer variable from one domain to another domain using Google Tag Manager? Both domains use the same GTM container. Thank you.
[ "Whether the domains use the same GTM is not important. What's more important is whether they're the same TLD (top-level domain), or different TLDs.\nIf it's the same TLD, then passing a variable would be easily done via a cookie. If it's a different TLD, then you could use a third party cookie. However, there has been a world-wide witch hunt after the third party cookies, due to which it's not a good idea to use them anymore.\nAs the result, there are two fiable options to pass variables between TLDs, whether through GTM or just front-end.\n\nEmploy backend. Make the backend keep and pass the variable. Based on its ability to sync sessions of users on different TLDs, if the backend is the same on all TLDs.\nPass your variables by appending them to the url on navigations from one TLD to the other. Once GTM on the other side sees them, it can store them in a cookie and clean urls from the query params to make it look neater.\n\nThe latter option is how GTM (and GA4) does cross-domain tracking by default.\n" ]
[ 0 ]
[]
[]
[ "cross_domain", "data_layers", "google_tag_manager" ]
stackoverflow_0074651384_cross_domain_data_layers_google_tag_manager.txt
Q: Nextcloud and Google SAML SSO: Error parsing the request, No SAML message present in request I struggle a bit to get Nextcloud to work with Google as the SSO provider. The URL target of the IdP is https://accounts.google.com/o/saml2/idp?idpid=xxxxxxx and then I get 403. That’s an error. Error: app_not_configured_for_user So, following the suggestion here, I changed the URL to https://accounts.google.com/accountchooser?continue=https://accounts.google.com/o/saml2/idp?idpid=xxxxxx which redirects me to the Google account chooser, but then, after selecting my account, I get null. That’s an error. Error parsing the request, No SAML message present in request That’s all we know. Sometimes I am not asked for a user account, and then I get the following from Nextcloud: Account not provisioned. Your account is not provisioned, access to this service is thus not possible. A: What worked for me is to configure Google having these attribute mappings While on Nextcloud I configure SAML as follows
Nextcloud and Google SAML SSO: Error parsing the request, No SAML message present in request
I struggle a bit to get Nextcloud to work with Google as the SSO provider. The URL target of the IdP is https://accounts.google.com/o/saml2/idp?idpid=xxxxxxx and then I get 403. That’s an error. Error: app_not_configured_for_user So, following the suggestion here, I changed the URL to https://accounts.google.com/accountchooser?continue=https://accounts.google.com/o/saml2/idp?idpid=xxxxxx which redirects me to the Google account chooser, but then, after selecting my account, I get null. That’s an error. Error parsing the request, No SAML message present in request That’s all we know. Sometimes I am not asked for a user account, and then I get the following from Nextcloud: Account not provisioned. Your account is not provisioned, access to this service is thus not possible.
[ "What worked for me is to configure Google having these attribute mappings\n\nWhile on Nextcloud I configure SAML as follows\n\n" ]
[ 0 ]
[]
[]
[ "google_workspace", "nextcloud", "single_sign_on" ]
stackoverflow_0074601212_google_workspace_nextcloud_single_sign_on.txt
Q: My asynchronous function is showing promise pending I awaited my function in many places, but it is still showing a pending promise. I am trying to get YouTube video thumbnails by URL. I created an index.js file with this code: const checkurl = require('./checkurl.js'); console.log(checkurl('https://youtu.be/NbT4NcLkly8')); and checkurl.js has: const getvideoid = require('get-video-id'); const https = require('https'); const GOOGLEKEY = process.env['GOOGLEKEY']; module.exports = async function(url) { const urlinfo = getvideoid(url) if (urlinfo.service == 'youtube' && urlinfo.id !== undefined) { const result = await checkid(urlinfo.id) return result } return false }; function checkid(id) { return new Promise((resolve, reject) => { const url = 'https://www.googleapis.com/youtube/v3/videos?key=' + GOOGLEKEY + '&part=snippet&id=' + id const req = https.request(url, (res) => { res.setEncoding('utf8'); let responseBody = ''; res.on('data', (chunk) => { responseBody += chunk; }); res.on('end', () => { const data = JSON.parse(responseBody); if (data.items[0]) { const thumbnail = data.items[0].snippet.thumbnails resolve(thumbnail); } else { resolve(undefined); }; }); }); req.on('error', (err) => { reject(err); }); req.end(); }); }; I awaited all my functions that return promises, but I am still getting a pending promise and I don't know why. I also tried to resolve the promise in the second function, but it's still the same. A: checkurl returns a Promise because it's an async function. You either need to await or .then it before you can console.log its value. // in an async function console.log(await checkurl('https://youtu.be/NbT4NcLkly8')); checkurl('https://youtu.be/NbT4NcLkly8').then(console.log)
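Spelling the fix out in index.js (CommonJS has no top-level await, hence the async wrapper):

    const checkurl = require('./checkurl.js');

    (async () => {
      const thumbnails = await checkurl('https://youtu.be/NbT4NcLkly8');
      console.log(thumbnails); // thumbnail object, or false / undefined per checkurl.js
    })();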
My asynchronous function is showing promise pending
I awaited my function in many places, but it is still showing a pending promise. I am trying to get YouTube video thumbnails by URL. I created an index.js file with this code: const checkurl = require('./checkurl.js'); console.log(checkurl('https://youtu.be/NbT4NcLkly8')); and checkurl.js has: const getvideoid = require('get-video-id'); const https = require('https'); const GOOGLEKEY = process.env['GOOGLEKEY']; module.exports = async function(url) { const urlinfo = getvideoid(url) if (urlinfo.service == 'youtube' && urlinfo.id !== undefined) { const result = await checkid(urlinfo.id) return result } return false }; function checkid(id) { return new Promise((resolve, reject) => { const url = 'https://www.googleapis.com/youtube/v3/videos?key=' + GOOGLEKEY + '&part=snippet&id=' + id const req = https.request(url, (res) => { res.setEncoding('utf8'); let responseBody = ''; res.on('data', (chunk) => { responseBody += chunk; }); res.on('end', () => { const data = JSON.parse(responseBody); if (data.items[0]) { const thumbnail = data.items[0].snippet.thumbnails resolve(thumbnail); } else { resolve(undefined); }; }); }); req.on('error', (err) => { reject(err); }); req.end(); }); }; I awaited all my functions that return promises, but I am still getting a pending promise and I don't know why. I also tried to resolve the promise in the second function, but it's still the same.
[ "checkurl returns a Promise because it's an async function.\nYou either need to await or .then it before you can console.log its value.\n// in an async function\nconsole.log(await checkurl('https://youtu.be/NbT4NcLkly8'));\n\ncheckurl('https://youtu.be/NbT4NcLkly8').then(console.log)\n\n" ]
[ 2 ]
[]
[]
[ "asynchronous", "https", "javascript", "node.js", "youtube_api" ]
stackoverflow_0074658367_asynchronous_https_javascript_node.js_youtube_api.txt
Q: Manually place the ticks on x-axis, at the beginning, middle and end - Matplotlib Is there a way to always place the ticks on the x-axis in Matplotlib at the beginning, middle and end of the axis, instead of Matplotlib automatically placing them? For example, I have a plot shown below. matplotlib plot Is there a way to always place 25 at the very beginning, 80 in the middle and 95 at the very end? This is the code I tried in Jupyter Notebook: import matplotlib.pyplot as plt def box(ax_position, percentile_values, label, label_position): fig = plt.figure(figsize = (10, 2)) ax1 = fig.add_axes(ax_position) ax1.set_xticks(percentile_values) ax1.tick_params(axis="x",direction="out", pad=-15, colors='b') ax1.set_yticks([]) ax1.text(*label_position, label, size=8) plt.show() box((0.4, 0.5, 0.4, 0.1), (25,80,95), "CA",(0.01, 1.3)) The number of values passed in percentile_values will always be 3, and these 3 always need to be placed at the beginning, middle, and end - but Matplotlib automatically places these ticks as per the numerical value. This is what I am looking for: what I need I tried using matplotlib.ticker.FixedLocator, but that does not help me: though I can display only 3 ticks, the position of the ticks is chosen by Matplotlib and not placed at the beginning, middle and end. A: You need to split the set_xticks() to have only the number of entries - (0,1,2) and use set_xticklabels() to give the text you want to display - (25,80,95). Note that I have used numpy's linspace to get the list of numbers based on the length of percentile_values. Also, I have removed the pad=-15 as you have indicated that you want the numbers below the ticks. Hope this is what you are looking for. import matplotlib.pyplot as plt import numpy as np def box(ax_position, percentile_values, label, label_position): fig = plt.figure(figsize = (10, 2)) ax1 = fig.add_axes(ax_position) ax1.set_xticks(np.linspace(0,len(percentile_values)-1,len(percentile_values))) ax1.set_xticklabels(percentile_values) ax1.tick_params(axis="x",direction="out", colors='b') ax1.set_yticks([]) ax1.text(*label_position, label, size=8) plt.show() box((0.4, 0.5, 0.4, 0.1), (25,80,95), "CA",(0.01, 1.3))
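A side note beyond the answer: on Matplotlib 3.5 or newer the same positions-plus-labels idea collapses to a single call, since set_xticks() accepts a labels keyword (ax1 and percentile_values as in the function above):

    # pin ticks at fixed positions 0..n-1 and show the percentile values as labels
    ax1.set_xticks(range(len(percentile_values)), labels=percentile_values)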
Manually place the ticks on x-axis, at the beginning, middle and end - Matplotlib
Is there a way to always place the ticks on the x-axis in Matplotlib at the beginning, middle and end of the axis, instead of Matplotlib automatically placing them? For example, I have a plot shown below. matplotlib plot Is there a way to always place 25 at the very beginning, 80 in the middle and 95 at the very end? This is the code I tried in Jupyter Notebook: import matplotlib.pyplot as plt def box(ax_position, percentile_values, label, label_position): fig = plt.figure(figsize = (10, 2)) ax1 = fig.add_axes(ax_position) ax1.set_xticks(percentile_values) ax1.tick_params(axis="x",direction="out", pad=-15, colors='b') ax1.set_yticks([]) ax1.text(*label_position, label, size=8) plt.show() box((0.4, 0.5, 0.4, 0.1), (25,80,95), "CA",(0.01, 1.3)) The number of values passed in percentile_values will always be 3, and these 3 always need to be placed at the beginning, middle, and end - but Matplotlib automatically places these ticks as per the numerical value. This is what I am looking for: what I need I tried using matplotlib.ticker.FixedLocator, but that does not help me: though I can display only 3 ticks, the position of the ticks is chosen by Matplotlib and not placed at the beginning, middle and end.
[ "You need to split the set_xticks() to have only the number of entries - (0,1,2) and use set_xticklables() to give the text you want to display - (25,80,95). Note that I have used numpy's linspace to get the list of numbers based on length of percentile_value. Also, I have removed the pad=-15 as you have indicated that you want the numbers below the ticks. Hope this is what you are looking for.\nimport matplotlib.pyplot as plt\nimport numpy as np\ndef box(ax_position, percentile_values, label, label_position):\n fig = plt.figure(figsize = (10, 2))\n ax1 = fig.add_axes(ax_position) \n ax1.set_xticks(np.linspace(0,len(percentile_values)-1,len(percentile_values)))\n ax1.set_xticklabels(percentile_values)\n ax1.tick_params(axis=\"x\",direction=\"out\", colors='b')\n ax1.set_yticks([])\n ax1.text(*label_position, label, size=8)\n plt.show()\n\nbox((0.4, 0.5, 0.4, 0.1), (25,80,95), \"CA\",(0.01, 1.3))\n\n\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074657894_matplotlib_python.txt
Q: What have I done wrong? File "F:\2д шутер на питоне\main.py", line 402, in <module> world_data[x][y] = int(tile) ValueError: invalid literal for int() with base 10: '-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\ a fragment of the problematic code: world_data = [] for row in range(ROWS): r = [-1] * COLS world_data.append(r) with open(f'level{level}_data.csv', newline='') as csvfile: reader = csv.reader(csvfile, delimiter=',') for x, row in enumerate(reader): for y, tile in enumerate(row): world_data[x][y] = int(tile) I don't even know what the problem is A: It looks like you have a tab delimiter, not a comma, so replace reader = csv.reader(csvfile, delimiter=',') with reader = csv.reader(csvfile, delimiter='\t') I would note that you could replace this whole block with import numpy as np world_data = np.genfromtxt(f'level{level}_data.csv', delimiter='\t')
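If the delimiter is not known up front, csv.Sniffer can detect it from a sample; a sketch along the lines of the code above (world_data and level as in the question):

    import csv

    with open(f'level{level}_data.csv', newline='') as csvfile:
        # guess the delimiter from the first kilobyte, then rewind
        dialect = csv.Sniffer().sniff(csvfile.read(1024), delimiters=',\t;')
        csvfile.seek(0)
        reader = csv.reader(csvfile, dialect)
        for x, row in enumerate(reader):
            for y, tile in enumerate(row):
                world_data[x][y] = int(tile)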
What have I done wrong?
File "F:\2д шутер на питоне\main.py", line 402, in <module> world_data[x][y] = int(tile) ValueError: invalid literal for int() with base 10: '-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\t-1\ a fragment of the problematic code: world_data = [] for row in range(ROWS): r = [-1] * COLS world_data.append(r) with open(f'level{level}_data.csv', newline='') as csvfile: reader = csv.reader(csvfile, delimiter=',') for x, row in enumerate(reader): for y, tile in enumerate(row): world_data[x][y] = int(tile) I don't even know what the problem is
[ "It looks like you have tab delimiter, not comma, so replace\nreader = csv.reader(csvfile, delimiter=',')\n\nwith\nreader = csv.reader(csvfile, delimiter='\\t')\n\nI would note that you could replace this whole block with\nimport numpy as np\nworld_data = np.genfromtext(f'level{level}_data.csv', delimiter='\\t')\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074658374_python.txt
Q: Quarterly forecast data across multiple departments I want to forecast some data, here is an example of the csv table: Time Period Fin Legal Leadership Overall 2021Q2 42 36 66 53 2021Q3 52 43 64 67 2021Q4 65 47 71 73 2022Q1 68 50 75 74 2022Q2 72 57 77 81 2022Q3 79 62 75 78 I want to make predictions for every quarter until the end of Q4 2023. I found an article which does something similar but doesn't have multiple value columns (Y axis). I tried tailoring my code to allow for this, but I get an error. Here is my code (I've altered the contents to simplify my table; there were originally 12 columns, not 5): import pandas as pd from datetime import date, timedelta import datetime import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.graphics.tsaplots import plot_pacf from statsmodels.tsa.arima_model import ARIMA import statsmodels.api as sm import warnings import plotly.graph_objects as go # import make_subplots function from plotly.subplots # to make grid of plots from plotly.subplots import make_subplots 'set filepath' inputfilepath = 'C:/Documents/' \ 'Forecast/Input/' \ 'Forecast Data csv.csv' df = pd.read_csv(inputfilepath) print(df) import plotly.express as px figure = px.line(df, x="Time Period", y=("Fin","Legal","Leadership","Overall"), title='Quarterly scores') figure.show() However, I am met with the following error: ValueError: All arguments should have the same length. The length of argument y is 4, whereas the length of previously-processed arguments ['Time Period'] is 6 How would I alter my code to produce a graph that contains multiple y variables (Fin, Legal, Leadership, Overall)? Additionally, this is the link to the article I found: https://thecleverprogrammer.com/2022/09/05/business-forecasting-using-python/ A: Looks like your "y" argument accepts only list [ele1, ele2], not a tuple(ele1, ele2). I changed the brackets to squares and I ran your code just fine: import plotly.express as px figure = px.line(df, x="Time Period", y=["Fin","Legal","Leadership","Overall"], title='Quarterly scores') figure.show() produces: this
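An equivalent long-format variant, offered as a sketch rather than part of the answer: melting the frame first is the shape Plotly Express is designed around, and it turns the department into a real column you can later facet or filter on.

    import plotly.express as px

    # reshape wide -> long: one row per (quarter, department) pair
    long_df = df.melt(id_vars="Time Period",
                      value_vars=["Fin", "Legal", "Leadership", "Overall"],
                      var_name="Department", value_name="Score")

    figure = px.line(long_df, x="Time Period", y="Score",
                     color="Department", title="Quarterly scores")
    figure.show()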
Quarterly forecast data across multiple departments
I want to forecast some data, here is an example of the csv table: Time Period Fin Legal Leadership Overall 2021Q2 42 36 66 53 2021Q3 52 43 64 67 2021Q4 65 47 71 73 2022Q1 68 50 75 74 2022Q2 72 57 77 81 2022Q3 79 62 75 78 I want to make predictions for every quarter until the end of Q4 2023. I found an article which does something similar but doesn't have multiple value columns (Y axis). I tried tailoring my code to allow for this, but I get an error. Here is my code (I've altered the contents to simplify my table; there were originally 12 columns, not 5): import pandas as pd from datetime import date, timedelta import datetime import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') from statsmodels.tsa.seasonal import seasonal_decompose from statsmodels.graphics.tsaplots import plot_pacf from statsmodels.tsa.arima_model import ARIMA import statsmodels.api as sm import warnings import plotly.graph_objects as go # import make_subplots function from plotly.subplots # to make grid of plots from plotly.subplots import make_subplots 'set filepath' inputfilepath = 'C:/Documents/' \ 'Forecast/Input/' \ 'Forecast Data csv.csv' df = pd.read_csv(inputfilepath) print(df) import plotly.express as px figure = px.line(df, x="Time Period", y=("Fin","Legal","Leadership","Overall"), title='Quarterly scores') figure.show() However, I am met with the following error: ValueError: All arguments should have the same length. The length of argument y is 4, whereas the length of previously-processed arguments ['Time Period'] is 6 How would I alter my code to produce a graph that contains multiple y variables (Fin, Legal, Leadership, Overall)? Additionally, this is the link to the article I found: https://thecleverprogrammer.com/2022/09/05/business-forecasting-using-python/
[ "Looks like your \"y\" argument accepts only list [ele1, ele2], not a tuple(ele1, ele2). I changed the brackets to squares and I ran your code just fine:\n import plotly.express as px\n figure = px.line(df, x=\"Time Period\", \n y=[\"Fin\",\"Legal\",\"Leadership\",\"Overall\"],\n title='Quarterly scores')\n\nfigure.show()\n\nproduces:\nthis\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "pandas", "python" ]
stackoverflow_0074658092_matplotlib_pandas_python.txt
Q: component(separatedBy:) versus .split(separator: ) In Swift 4, the new method .split(separator:) was introduced by Apple in the String struct. So, to split a string on whitespace, which is faster? For example: let str = "My name is Sudhir" str.components(separatedBy: " ") //or str.split(separator: " ") A: Performance aside, there is an important difference between split(separator:) and components(separatedBy:) in how they treat empty subsequences. They will produce different results if your input contains a trailing whitespace: let str = "My name is Sudhir " // trailing space str.split(separator: " ") // ["My", "name", "is", "Sudhir"] str.components(separatedBy: " ") // ["My", "name", "is", "Sudhir", ""] ← Additional empty string To have both produce the same result, use the omittingEmptySubsequences:false argument (which defaults to true): // To get the same behavior: str.split(separator: " ", omittingEmptySubsequences: false) // ["My", "name", "is", "Sudhir", ""] Details here: https://developer.apple.com/documentation/swift/string/2894564-split A: I made a sample test with the following code. var str = """ One of those refinements is to the String API, which has been made a lot easier to use (while also gaining power) in Swift 4. In past versions of Swift, the String API was often brought up as an example of how Swift sometimes goes too far in favoring correctness over ease of use, with its cumbersome way of handling characters and substrings. This week, let’s take a look at how it is to work with strings in Swift 4, and how we can take advantage of the new, improved API in various situations. Sometimes we have longer, static strings in our apps or scripts that span multiple lines. Before Swift 4, we had to do something like inline \n across the string, add an appendOnNewLine() method through an extension on String or - in the case of scripting - make multiple print() calls to add newlines to a long output. For example, here is how TestDrive’s printHelp() function (which is used to print usage instructions for the script) looks like in Swift 3 One of those refinements is to the String API, which has been made a lot easier to use (while also gaining power) in Swift 4. In past versions of Swift, the String API was often brought up as an example of how Swift sometimes goes too far in favoring correctness over ease of use, with its cumbersome way of handling characters and substrings. This week, let’s take a look at how it is to work with strings in Swift 4, and how we can take advantage of the new, improved API in various situations. Sometimes we have longer, static strings in our apps or scripts that span multiple lines. Before Swift 4, we had to do something like inline \n across the string, add an appendOnNewLine() method through an extension on String or - in the case of scripting - make multiple print() calls to add newlines to a long output. For example, here is how TestDrive’s printHelp() function (which is used to print usage instructions for the script) looks like in Swift 3 """ var newString = String() for _ in 1..<9999 { newString.append(str) } var methodStart = Date() _ = newString.components(separatedBy: " ") print("Execution time Separated By: \(Date().timeIntervalSince(methodStart))") methodStart = Date() _ = newString.split(separator: " ") print("Execution time Split By: \(Date().timeIntervalSince(methodStart))") I ran the above code on an iPhone 6; here are the results: Execution time Separated By: 8.27463299036026 Execution time Split By: 4.06880903244019 Conclusion: split(separator:) is faster than components(separatedBy:). A: Maybe a little late to answer: split is a native swift method components is NSString Foundation method When you play with them, they behave a little differently: str.components(separatedBy: "\n\n") This call can give you some interesting results str.split(separator: "\n\n") This leads to a compile error as you must provide a single character.
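One practical footnote to the answers, not from the original post: split(separator:) returns [Substring] views into the original string, while components(separatedBy:) returns [String]; map the substrings to String when they must outlive the source.

    let words = str.split(separator: " ").map(String.init) // [String]
    let parts = str.components(separatedBy: " ") // already [String]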
components(separatedBy:) versus .split(separator:)
In Swift 4, a new method, .split(separator:), was introduced by Apple on the String struct. So, to split a string on whitespace, which one is faster? For example: let str = "My name is Sudhir" str.components(separatedBy: " ") //or str.split(separator: " ")
[ "Performance aside, there is an important difference between split(separator:) and components(separatedBy:) in how they treat empty subsequences.\nThey will produce different results if your input contains a trailing whitespace:\n let str = \"My name is Sudhir \" // trailing space\n\n str.split(separator: \" \")\n // [\"My\", \"name\", \"is\", \"Sudhir\"]\n\n str.components(separatedBy: \" \")\n // [\"My\", \"name\", \"is\", \"Sudhir\", \"\"] ← Additional empty string\n\nTo have both produce the same result, use the omittingEmptySubsequences:false argument (which defaults to true):\n // To get the same behavior:\n str.split(separator: \" \", omittingEmptySubsequences: false)\n // [\"My\", \"name\", \"is\", \"Sudhir\", \"\"]\n\nDetails here:\nhttps://developer.apple.com/documentation/swift/string/2894564-split\n", "I have made sample test with following Code.\n var str = \"\"\"\n One of those refinements is to the String API, which has been made a lot easier to use (while also gaining power) in Swift 4. In past versions of Swift, the String API was often brought up as an example of how Swift sometimes goes too far in favoring correctness over ease of use, with its cumbersome way of handling characters and substrings. This week, let’s take a look at how it is to work with strings in Swift 4, and how we can take advantage of the new, improved API in various situations. Sometimes we have longer, static strings in our apps or scripts that span multiple lines. Before Swift 4, we had to do something like inline \\n across the string, add an appendOnNewLine() method through an extension on String or - in the case of scripting - make multiple print() calls to add newlines to a long output. For example, here is how TestDrive’s printHelp() function (which is used to print usage instructions for the script) looks like in Swift 3 One of those refinements is to the String API, which has been made a lot easier to use (while also gaining power) in Swift 4. In past versions of Swift, the String API was often brought up as an example of how Swift sometimes goes too far in favoring correctness over ease of use, with its cumbersome way of handling characters and substrings. This week, let’s take a look at how it is to work with strings in Swift 4, and how we can take advantage of the new, improved API in various situations. Sometimes we have longer, static strings in our apps or scripts that span multiple lines. Before Swift 4, we had to do something like inline \\n across the string, add an appendOnNewLine() method through an extension on String or - in the case of scripting - make multiple print() calls to add newlines to a long output. 
For example, here is how TestDrive’s printHelp() function (which is used to print usage instructions for the script) looks like in Swift 3\n \"\"\"\n\n var newString = String()\n\n for _ in 1..<9999 {\n newString.append(str)\n }\n\n var methodStart = Date()\n\n _ = newString.components(separatedBy: \" \")\n print(\"Execution time Separated By: \\(Date().timeIntervalSince(methodStart))\")\n\n methodStart = Date()\n _ = newString.split(separator: \" \")\n print(\"Execution time Split By: \\(Date().timeIntervalSince(methodStart))\")\n\nI run above code on iPhone6 , Here are the results \nExecution time Separated By: 8.27463299036026\n Execution time Split By: 4.06880903244019\nConclusion : split(separator:) is faster than components(separatedBy:).\n", "Maybe a little late to answer:\n\nsplit is a native swift method\ncomponents is NSString Foundation method\n\nWhen you play with them, they behave a little bit different:\nstr.components(separatedBy: \"\\n\\n\")\n\nThis call can give you some interesting results\nstr.split(separator: \"\\n\\n\")\n\nThis leads to an compile error as you must provide a single character.\n" ]
[ 17, 13, 0 ]
[]
[]
[ "swift" ]
stackoverflow_0046344649_swift.txt
Q: How to convert a currency string to a double with Javascript? I have a text box that will have a currency string in it that I then need to convert that string to a double to perform some operations on it. "$1,100.00" → 1100.00 This needs to occur all client side. I have no choice but to leave the currency string as a currency string as input but need to cast/convert it to a double to allow some mathematical operations. A: Remove all non dot / digits: var currency = "-$4,400.50"; var number = Number(currency.replace(/[^0-9.-]+/g,"")); A: accounting.js is the way to go. I used it at a project and had very good experience using it. accounting.formatMoney(4999.99, "€", 2, ".", ","); // €4.999,99 accounting.unformat("€ 1.000.000,00", ","); // 1000000 You can find it at GitHub A: Use a regex to remove the formating (dollar and comma), and use parseFloat to convert the string to a floating point number.` var currency = "$1,100.00"; currency.replace(/[$,]+/g,""); var result = parseFloat(currency) + .05; A: I know this is an old question but wanted to give an additional option. The jQuery Globalize gives the ability to parse a culture specific format to a float. https://github.com/jquery/globalize Given a string "$13,042.00", and Globalize set to en-US: Globalize.culture("en-US"); You can parse the float value out like so: var result = Globalize.parseFloat(Globalize.format("$13,042.00", "c")); This will give you: 13042.00 And allows you to work with other cultures. A: I know this is an old question, but CMS's answer seems to have one tiny little flaw: it only works if currency format uses "." as decimal separator. For example, if you need to work with russian rubles, the string will look like this: "1 000,00 rub." My solution is far less elegant than CMS's, but it should do the trick. var currency = "1 000,00 rub."; //it works for US-style currency strings as well var cur_re = /\D*(\d+|\d.*?\d)(?:\D+(\d{2}))?\D*$/; var parts = cur_re.exec(currency); var number = parseFloat(parts[1].replace(/\D/,'')+'.'+(parts[2]?parts[2]:'00')); console.log(number.toFixed(2)); Assumptions: currency value uses decimal notation there are no digits in the string that are not a part of the currency value currency value contains either 0 or 2 digits in its fractional part * The regexp can even handle something like "1,999 dollars and 99 cents", though it isn't an intended feature and it should not be relied upon. Hope this will help someone. A: This example run ok var currency = "$1,123,456.00"; var number = Number(currency.replace(/[^0-9\.]+/g,"")); console.log(number); A: This is my function. Works with all currencies.. function toFloat(num) { dotPos = num.indexOf('.'); commaPos = num.indexOf(','); if (dotPos < 0) dotPos = 0; if (commaPos < 0) commaPos = 0; if ((dotPos > commaPos) && dotPos) sep = dotPos; else { if ((commaPos > dotPos) && commaPos) sep = commaPos; else sep = false; } if (sep == false) return parseFloat(num.replace(/[^\d]/g, "")); return parseFloat( num.substr(0, sep).replace(/[^\d]/g, "") + '.' 
+ num.substr(sep+1, num.length).replace(/[^0-9]/, "") ); } Usage : toFloat("$1,100.00") or toFloat("1,100.00$") A: // "10.000.500,61 TL" price_to_number => 10000500.61 // "10000500.62" number_to_price => 10.000.500,62 JS FIDDLE: https://jsfiddle.net/Limitlessisa/oxhgd32c/ var price="10.000.500,61 TL"; document.getElementById("demo1").innerHTML = price_to_number(price); var numberPrice="10000500.62"; document.getElementById("demo2").innerHTML = number_to_price(numberPrice); function price_to_number(v){ if(!v){return 0;} v=v.split('.').join(''); v=v.split(',').join('.'); return Number(v.replace(/[^0-9.]/g, "")); } function number_to_price(v){ if(v==0){return '0,00';} v=parseFloat(v); v=v.toFixed(2).replace(/(\d)(?=(\d\d\d)+(?!\d))/g, "$1,"); v=v.split('.').join('*').split(',').join('.').split('*').join(','); return v; } A: You can try this var str = "$1,112.12"; str = str.replace(",", ""); str = str.replace("$", ""); console.log(parseFloat(str)); A: For anyone looking for a solution in 2021 you can use Currency.js. After much research this was the most reliable method I found for production, I didn't have any issues so far. In addition it's very active on Github. currency(123); // 123.00 currency(1.23); // 1.23 currency("1.23") // 1.23 currency("$12.30") // 12.30 var value = currency("123.45"); currency(value); // 123.45 typescript import currency from "currency.js"; currency("$12.30").value; // 12.30 A: let thousands_seps = '.'; let decimal_sep = ','; let sanitizeValue = "R$ 2.530,55".replace(thousands_seps,'') .replace(decimal_sep,'.') .replace(/[^0-9.-]+/, ''); // Converting to float // Result 2530.55 let stringToFloat = parseFloat(sanitizeValue); // Formatting for currency: "R$ 2.530,55" // BRL in this case let floatTocurrency = Number(stringToFloat).toLocaleString('pt-BR', {style: 'currency', currency: 'BRL'}); // Output console.log(stringToFloat, floatTocurrency); A: I know you've found a solution to your question, I just wanted to recommend that maybe you look at the following more extensive jQuery plugin for International Number Formats: International Number Formatter A: How about simply Number(currency.replace(/[^0-9-]+/g,""))/100; Works with all currencies and locales. replaces all non-numeric chars (you can have €50.000,00 or $50,000.00) input must have 2 decimal places A: jQuery.preferCulture("en-IN"); var price = jQuery.format(39.00, "c"); output is: Rs. 39.00 use jquery.glob.js, jQuery.glob.all.js A: Here's a simple function - function getNumberFromCurrency(currency) { return Number(currency.replace(/[$,]/g,'')) } console.log(getNumberFromCurrency('$1,000,000.99')) // 1000000.99 A: For currencies that use the ',' separator mentioned by Quethzel Diaz Currency is in Brazilian. var currency_br = "R$ 1.343,45"; currency_br = currency_br.replace('.', "").replace(',', '.'); var number_formated = Number(currency_br.replace(/[^0-9.-]+/g,"")); A: var parseCurrency = function (e) { if (typeof (e) === 'number') return e; if (typeof (e) === 'string') { var str = e.trim(); var value = Number(e.replace(/[^0-9.-]+/g, "")); return str.startsWith('(') && str.endsWith(')') ? 
-value: value; } return e; } A: This worked for me and covers most edge cases :) function toFloat(num) { const cleanStr = String(num).replace(/[^0-9.,]/g, ''); let dotPos = cleanStr.indexOf('.'); let commaPos = cleanStr.indexOf(','); if (dotPos < 0) dotPos = 0; if (commaPos < 0) commaPos = 0; const dotSplit = cleanStr.split('.'); const commaSplit = cleanStr.split(','); const isDecimalDot = dotPos && ( (commaPos && dotPos > commaPos) || (!commaPos && dotSplit[dotSplit.length - 1].length === 2) ); const isDecimalComma = commaPos && ( (dotPos && dotPos < commaPos) || (!dotPos && commaSplit[commaSplit.length - 1].length === 2) ); let integerPart = cleanStr; let decimalPart = '0'; if (isDecimalComma) { integerPart = commaSplit[0]; decimalPart = commaSplit[1]; } if (isDecimalDot) { integerPart = dotSplit[0]; decimalPart = dotSplit[1]; } return parseFloat( `${integerPart.replace(/[^0-9]/g, '')}.${decimalPart.replace(/[^0-9]/g, '')}`, ); } toFloat('USD 1,500.00'); // 1500 toFloat('USD 1,500'); // 1500 toFloat('USD 500.00'); // 500 toFloat('USD 500'); // 500 toFloat('EUR 1.500,00'); // 1500 toFloat('EUR 1.500'); // 1500 toFloat('EUR 500,00'); // 500 toFloat('EUR 500'); // 500 A: Such a headache and so less consideration to other cultures for nothing... here it is folks: let floatPrice = parseFloat(price.replace(/(,|\.)([0-9]{3})/g,'$2').replace(/(,|\.)/,'.')); as simple as that. A: $ 150.00 Fr. 150.00 € 689.00 I have tested for above three currency symbols .You can do it for others also. var price = Fr. 150.00; var priceFloat = price.replace(/[^\d\.]/g, ''); Above regular expression will remove everything that is not a digit or a period.So You can get the string without currency symbol but in case of " Fr. 150.00 " if you console for output then you will get price as console.log('priceFloat : '+priceFloat); output will be like priceFloat : .150.00 which is wrong so you check the index of "." then split that and get the proper result. if (priceFloat.indexOf('.') == 0) { priceFloat = parseFloat(priceFloat.split('.')[1]); }else{ priceFloat = parseFloat(priceFloat); } A: function NumberConvertToDecimal (number) { if (number == 0) { return '0.00'; } number = parseFloat(number); number = number.toFixed(2).replace(/(\d)(?=(\d\d\d)+(?!\d))/g, "$1"); number = number.split('.').join('*').split('*').join('.'); return number; } A: This function should work whichever the locale and currency settings : function getNumPrice(price, decimalpoint) { var p = price.split(decimalpoint); for (var i=0;i<p.length;i++) p[i] = p[i].replace(/\D/g,''); return p.join('.'); } This assumes you know the decimal point character (in my case the locale is set from PHP, so I get it with <?php echo cms_function_to_get_decimal_point(); ?>). A: You should be able to handle this using vanilla JS. The Internationalization API is part of JS core: ECMAScript Internationalization API https://www.w3.org/International/wiki/JavaScriptInternationalization This answer worked for me: How to format numbers as currency strings
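One more option, building on the Internationalization API mentioned in the last answer above: rather than hard-coding "," and ".", you can ask Intl.NumberFormat itself which separators a locale uses, and strip everything else. This is a minimal sketch assuming a runtime that supports Intl.NumberFormat.prototype.formatToParts (all modern browsers and Node.js):

// Derive the group/decimal separators for a locale, then normalize and parse.
function parseCurrency(value, locale = "en-US") {
  const parts = new Intl.NumberFormat(locale).formatToParts(12345.6);
  const group = parts.find(p => p.type === "group").value;     // e.g. ","
  const decimal = parts.find(p => p.type === "decimal").value; // e.g. "."
  const normalized = value
    .split(group).join("")         // drop thousands separators
    .split(decimal).join(".")      // normalize the decimal separator
    .replace(/[^0-9.-]+/g, "");    // drop currency symbols and other text
  return parseFloat(normalized);
}

parseCurrency("$1,100.00");           // 1100
parseCurrency("1.100,00 €", "de-DE"); // 1100

As with the other regex-based answers, this assumes the input really is a formatted number; it will not catch garbage input, so validate separately if you need to.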
How to convert a currency string to a double with Javascript?
I have a text box that will contain a currency string, which I then need to convert to a double to perform some operations on it. "$1,100.00" → 1100.00 This all needs to occur client-side. I have no choice but to accept the currency string as input, but I need to cast/convert it to a double to allow some mathematical operations.
[ "Remove all non dot / digits:\nvar currency = \"-$4,400.50\";\nvar number = Number(currency.replace(/[^0-9.-]+/g,\"\"));\n\n", "accounting.js is the way to go. I used it at a project and had very good experience using it.\naccounting.formatMoney(4999.99, \"€\", 2, \".\", \",\"); // €4.999,99\naccounting.unformat(\"€ 1.000.000,00\", \",\"); // 1000000\n\nYou can find it at GitHub\n", "Use a regex to remove the formating (dollar and comma), and use parseFloat to convert the string to a floating point number.`\nvar currency = \"$1,100.00\";\ncurrency.replace(/[$,]+/g,\"\");\nvar result = parseFloat(currency) + .05;\n\n", "I know this is an old question but wanted to give an additional option. \nThe jQuery Globalize gives the ability to parse a culture specific format to a float. \nhttps://github.com/jquery/globalize\nGiven a string \"$13,042.00\", and Globalize set to en-US:\nGlobalize.culture(\"en-US\");\n\nYou can parse the float value out like so:\nvar result = Globalize.parseFloat(Globalize.format(\"$13,042.00\", \"c\"));\n\nThis will give you:\n13042.00\n\nAnd allows you to work with other cultures. \n", "I know this is an old question, but CMS's answer seems to have one tiny little flaw: it only works if currency format uses \".\" as decimal separator.\nFor example, if you need to work with russian rubles, the string will look like this: \n\"1 000,00 rub.\"\nMy solution is far less elegant than CMS's, but it should do the trick. \n\n\nvar currency = \"1 000,00 rub.\"; //it works for US-style currency strings as well\r\nvar cur_re = /\\D*(\\d+|\\d.*?\\d)(?:\\D+(\\d{2}))?\\D*$/;\r\nvar parts = cur_re.exec(currency);\r\nvar number = parseFloat(parts[1].replace(/\\D/,'')+'.'+(parts[2]?parts[2]:'00'));\r\nconsole.log(number.toFixed(2));\n\n\n\nAssumptions:\n\ncurrency value uses decimal notation\nthere are no digits in the string that are not a part of the currency value\ncurrency value contains either 0 or 2 digits in its fractional part *\n\nThe regexp can even handle something like \"1,999 dollars and 99 cents\", though it isn't an intended feature and it should not be relied upon.\nHope this will help someone.\n", "This example run ok\n\n\nvar currency = \"$1,123,456.00\";\nvar number = Number(currency.replace(/[^0-9\\.]+/g,\"\"));\nconsole.log(number);\n\n\n\n", "This is my function. Works with all currencies..\nfunction toFloat(num) {\n dotPos = num.indexOf('.');\n commaPos = num.indexOf(',');\n\n if (dotPos < 0)\n dotPos = 0;\n\n if (commaPos < 0)\n commaPos = 0;\n\n if ((dotPos > commaPos) && dotPos)\n sep = dotPos;\n else {\n if ((commaPos > dotPos) && commaPos)\n sep = commaPos;\n else\n sep = false;\n }\n\n if (sep == false)\n return parseFloat(num.replace(/[^\\d]/g, \"\"));\n\n return parseFloat(\n num.substr(0, sep).replace(/[^\\d]/g, \"\") + '.' 
+ \n num.substr(sep+1, num.length).replace(/[^0-9]/, \"\")\n );\n\n}\n\nUsage : toFloat(\"$1,100.00\") or toFloat(\"1,100.00$\")\n", "// \"10.000.500,61 TL\" price_to_number => 10000500.61\n// \"10000500.62\" number_to_price => 10.000.500,62\nJS FIDDLE: https://jsfiddle.net/Limitlessisa/oxhgd32c/\nvar price=\"10.000.500,61 TL\";\ndocument.getElementById(\"demo1\").innerHTML = price_to_number(price);\n\nvar numberPrice=\"10000500.62\";\ndocument.getElementById(\"demo2\").innerHTML = number_to_price(numberPrice);\n\nfunction price_to_number(v){\n if(!v){return 0;}\n v=v.split('.').join('');\n v=v.split(',').join('.');\n return Number(v.replace(/[^0-9.]/g, \"\"));\n}\n\nfunction number_to_price(v){\n if(v==0){return '0,00';}\n v=parseFloat(v);\n v=v.toFixed(2).replace(/(\\d)(?=(\\d\\d\\d)+(?!\\d))/g, \"$1,\");\n v=v.split('.').join('*').split(',').join('.').split('*').join(',');\n return v;\n}\n\n", "You can try this\n\n\nvar str = \"$1,112.12\";\r\nstr = str.replace(\",\", \"\");\r\nstr = str.replace(\"$\", \"\");\r\nconsole.log(parseFloat(str));\n\n\n\n", "For anyone looking for a solution in 2021 you can use Currency.js.\nAfter much research this was the most reliable method I found for production, I didn't have any issues so far. In addition it's very active on Github.\ncurrency(123); // 123.00\ncurrency(1.23); // 1.23\ncurrency(\"1.23\") // 1.23\ncurrency(\"$12.30\") // 12.30\n\nvar value = currency(\"123.45\");\ncurrency(value); // 123.45\n\ntypescript\nimport currency from \"currency.js\";\n\ncurrency(\"$12.30\").value; // 12.30\n\n", "let thousands_seps = '.';\nlet decimal_sep = ',';\n\nlet sanitizeValue = \"R$ 2.530,55\".replace(thousands_seps,'')\n .replace(decimal_sep,'.')\n .replace(/[^0-9.-]+/, '');\n\n// Converting to float\n// Result 2530.55\nlet stringToFloat = parseFloat(sanitizeValue);\n\n\n// Formatting for currency: \"R$ 2.530,55\"\n// BRL in this case\nlet floatTocurrency = Number(stringToFloat).toLocaleString('pt-BR', {style: 'currency', currency: 'BRL'});\n\n// Output\nconsole.log(stringToFloat, floatTocurrency);\n\n", "I know you've found a solution to your question, I just wanted to recommend that maybe you look at the following more extensive jQuery plugin for International Number Formats:\nInternational Number Formatter\n", "How about simply\nNumber(currency.replace(/[^0-9-]+/g,\"\"))/100;\n\nWorks with all currencies and locales. replaces all non-numeric chars (you can have €50.000,00 or $50,000.00) input must have 2 decimal places\n", "jQuery.preferCulture(\"en-IN\");\nvar price = jQuery.format(39.00, \"c\");\n\noutput is: Rs. 39.00\nuse jquery.glob.js,\n jQuery.glob.all.js\n\n", "Here's a simple function -\n\n\nfunction getNumberFromCurrency(currency) {\n return Number(currency.replace(/[$,]/g,''))\n}\n\nconsole.log(getNumberFromCurrency('$1,000,000.99')) // 1000000.99\n\n\n\n", "For currencies that use the ',' separator mentioned by Quethzel Diaz\nCurrency is in Brazilian.\nvar currency_br = \"R$ 1.343,45\";\ncurrency_br = currency_br.replace('.', \"\").replace(',', '.');\nvar number_formated = Number(currency_br.replace(/[^0-9.-]+/g,\"\"));\n\n", "var parseCurrency = function (e) {\n if (typeof (e) === 'number') return e;\n if (typeof (e) === 'string') {\n var str = e.trim();\n var value = Number(e.replace(/[^0-9.-]+/g, \"\"));\n return str.startsWith('(') && str.endsWith(')') ? 
-value: value;\n }\n\n return e;\n} \n\n", "This worked for me and covers most edge cases :)\nfunction toFloat(num) {\n const cleanStr = String(num).replace(/[^0-9.,]/g, '');\n let dotPos = cleanStr.indexOf('.');\n let commaPos = cleanStr.indexOf(',');\n\n if (dotPos < 0) dotPos = 0;\n\n if (commaPos < 0) commaPos = 0;\n\n const dotSplit = cleanStr.split('.');\n const commaSplit = cleanStr.split(',');\n\n const isDecimalDot = dotPos\n && (\n (commaPos && dotPos > commaPos)\n || (!commaPos && dotSplit[dotSplit.length - 1].length === 2)\n );\n\n const isDecimalComma = commaPos\n && (\n (dotPos && dotPos < commaPos)\n || (!dotPos && commaSplit[commaSplit.length - 1].length === 2)\n );\n\n let integerPart = cleanStr;\n let decimalPart = '0';\n if (isDecimalComma) {\n integerPart = commaSplit[0];\n decimalPart = commaSplit[1];\n }\n if (isDecimalDot) {\n integerPart = dotSplit[0];\n decimalPart = dotSplit[1];\n }\n\n return parseFloat(\n `${integerPart.replace(/[^0-9]/g, '')}.${decimalPart.replace(/[^0-9]/g, '')}`,\n );\n}\n\n\ntoFloat('USD 1,500.00'); // 1500\ntoFloat('USD 1,500'); // 1500\ntoFloat('USD 500.00'); // 500\ntoFloat('USD 500'); // 500\n\ntoFloat('EUR 1.500,00'); // 1500\ntoFloat('EUR 1.500'); // 1500\ntoFloat('EUR 500,00'); // 500\ntoFloat('EUR 500'); // 500\n\n", "Such a headache and so less consideration to other cultures for nothing...\nhere it is folks:\nlet floatPrice = parseFloat(price.replace(/(,|\\.)([0-9]{3})/g,'$2').replace(/(,|\\.)/,'.'));\n\nas simple as that.\n", " $ 150.00\n Fr. 150.00\n € 689.00\n\nI have tested for above three currency symbols .You can do it for others also.\n var price = Fr. 150.00;\n var priceFloat = price.replace(/[^\\d\\.]/g, '');\n\nAbove regular expression will remove everything that is not a digit or a period.So You can get the string without currency symbol but in case of \" Fr. 150.00 \" if you console for output then you will get price as\n console.log('priceFloat : '+priceFloat);\n\n output will be like priceFloat : .150.00\n\nwhich is wrong so you check the index of \".\" then split that and get the proper result.\n if (priceFloat.indexOf('.') == 0) {\n priceFloat = parseFloat(priceFloat.split('.')[1]);\n }else{\n priceFloat = parseFloat(priceFloat);\n }\n\n", "function NumberConvertToDecimal (number) {\n if (number == 0) {\n return '0.00'; \n }\n number = parseFloat(number);\n number = number.toFixed(2).replace(/(\\d)(?=(\\d\\d\\d)+(?!\\d))/g, \"$1\");\n number = number.split('.').join('*').split('*').join('.');\n return number;\n}\n\n", "This function should work whichever the locale and currency settings :\nfunction getNumPrice(price, decimalpoint) {\n var p = price.split(decimalpoint);\n for (var i=0;i<p.length;i++) p[i] = p[i].replace(/\\D/g,'');\n return p.join('.');\n}\n\nThis assumes you know the decimal point character (in my case the locale is set from PHP, so I get it with <?php echo cms_function_to_get_decimal_point(); ?>).\n", "You should be able to handle this using vanilla JS. The Internationalization API is part of JS core: ECMAScript Internationalization API\nhttps://www.w3.org/International/wiki/JavaScriptInternationalization\nThis answer worked for me: How to format numbers as currency strings\n" ]
[ 571, 28, 25, 20, 17, 12, 6, 6, 5, 5, 4, 3, 3, 2, 2, 2, 1, 1, 1, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "jquery" ]
stackoverflow_0000559112_javascript_jquery.txt
Q: Insert into temporary table from select returns blank table
I'm trying to fill two temporary tables with ids that arrive from outside as a single string, which I split and save to a third temporary table:

CREATE TABLE #TempProdotti (Id int NULL);
CREATE TABLE #TempProdottiAggregati (Id int NULL);
CREATE TABLE #TempCorsiSingoli (Id int NULL);

-- split ids and cast them as INT
INSERT INTO #TempProdotti (Id)
    (SELECT CAST(value AS int) AS Id
     FROM string_split('3116,3122,3090', ','));

-- then search the products table to see whether the ids match any aggregated (or not) product,
-- saving aggregated product ids in one table and non-aggregated ones in another
INSERT INTO #TempCorsiSingoli (Id)
    (SELECT Id FROM mod_SHOP_Prodotti
     WHERE Id IN (SELECT Id FROM #TempProdotti) AND ProdottoAggregato = 0);

INSERT INTO #TempProdottiAggregati (Id)
    (SELECT Id FROM mod_SHOP_Prodotti
     WHERE Id IN (SELECT Id FROM #TempProdotti) AND ProdottoAggregato = 1);

SELECT * FROM #TempProdotti;
SELECT * FROM #TempProdottiAggregati;
SELECT * FROM #TempCorsiSingoli;

DROP TABLE #TempProdotti;
DROP TABLE #TempProdottiAggregati;
DROP TABLE #TempCorsiSingoli;

When I run the query, if it doesn't find anything for one of the two temporary tables, it just returns an empty table. Is there a clean way to return NULL in Id when the condition is not met?
A: One method would be to LEFT JOIN from a data set with a row of NULL values:

SELECT TP.*
FROM (VALUES(NULL))V(N)
    LEFT JOIN #TempProdotti TP ON 1 = 1;

If #TempProdotti contains rows, then the data in the table will be returned. If not, a single row of NULLs will be returned.

CREATE TABLE #TempProdotti (Id int NULL);
CREATE TABLE #TempProdottiAggregati (Id int NULL);
CREATE TABLE #TempCorsiSingoli (Id int NULL);
GO

INSERT INTO #TempProdotti (Id)
SELECT CAST(value AS int) AS Id
FROM string_split('3116,3122,3090', ',');

INSERT INTO #TempCorsiSingoli (Id)
SELECT Id
FROM mod_SHOP_Prodotti
WHERE Id IN (SELECT Id FROM #TempProdotti)
    AND ProdottoAggregato = 0;

INSERT INTO #TempProdottiAggregati (Id)
SELECT Id
FROM mod_SHOP_Prodotti
WHERE Id IN (SELECT Id FROM #TempProdotti)
    AND ProdottoAggregato = 1;

GO

SELECT TP.*
FROM (VALUES (NULL)) V (N)
    LEFT JOIN #TempProdotti TP ON 1 = 1;
SELECT TPA.*
FROM (VALUES (NULL)) V (N)
    LEFT JOIN #TempProdottiAggregati TPA ON 1 = 1;
SELECT TCS.*
FROM (VALUES (NULL)) V (N)
    LEFT JOIN #TempCorsiSingoli TCS ON 1 = 1;
GO
DROP TABLE #TempProdotti;
DROP TABLE #TempProdottiAggregati;
DROP TABLE #TempCorsiSingoli;
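An alternative sketch of the same idea: after the inserts have run, pad any still-empty temp table with a single NULL row, so every subsequent SELECT returns at least one row. This keeps the final SELECT statements unchanged (table and column names as in the question):

IF NOT EXISTS (SELECT 1 FROM #TempCorsiSingoli)
    INSERT INTO #TempCorsiSingoli (Id) VALUES (NULL);

IF NOT EXISTS (SELECT 1 FROM #TempProdottiAggregati)
    INSERT INTO #TempProdottiAggregati (Id) VALUES (NULL);

SELECT * FROM #TempCorsiSingoli;
SELECT * FROM #TempProdottiAggregati;

The trade-off versus the LEFT JOIN approach above is that this variant actually mutates the temp tables, so it must run only after all the real inserts are done.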
Insert into temporary table from select returns blank table
I'm trying to fill two temporary tables with ids coming from outside as a single string that I split and save to a third temporary table: CREATE TABLE #TempProdotti (Id int NULL); CREATE TABLE #TempProdottiAggregati (Id int NULL); CREATE TABLE #TempCorsiSingoli (Id int NULL); -- split ids and cast them as INT INSERT INTO #TempProdotti (Id) (SELECT CAST(value AS int) AS Id FROM string_split('3116,3122,3090', ',')); -- then search into products table if the ids match any aggregated (or not) product. -- then save aggegated products id in one table and the not aggregated ones into another INSERT INTO #TempCorsiSingoli (Id) (SELECT Id FROM mod_SHOP_Prodotti WHERE Id IN (SELECT Id FROM #TempProdotti) AND ProdottoAggregato = 0); INSERT INTO #TempProdottiAggregati (Id) (SELECT Id FROM mod_SHOP_Prodotti WHERE Id IN (SELECT Id FROM #TempProdotti) AND ProdottoAggregato = 1); SELECT * FROM #TempProdotti; SELECT * FROM #TempProdottiAggregati; SELECT * FROM #TempCorsiSingoli; DROP TABLE #TempProdotti; DROP TABLE #TempProdottiAggregati; DROP TABLE #TempCorsiSingoli; When I run the query, if it doesn't find anything in one of the two temporary tables, it just returns an empty table: Is there a clean way to return NULL on Id in case the condition is not met?
[ "One method would be to LEFT JOIN from a data set with row of NULL values:\nSELECT TP.*\nFROM (VALUES(NULL))V(N)\n LEFT JOIN #TempProdotti TP ON 1 = 1;\n\nIf #TempProdotti contains rows, then the data in the table will be returned. If not a single row, of NULLs will be returned.\n\nCREATE TABLE #TempProdotti (Id int NULL);\nCREATE TABLE #TempProdottiAggregati (Id int NULL);\nCREATE TABLE #TempCorsiSingoli (Id int NULL);\nGO\n\nINSERT INTO #TempProdotti (Id)\nSELECT CAST(value AS int) AS Id\nFROM string_split('3116,3122,3090', ',');\n\nINSERT INTO #TempCorsiSingoli (Id)\nSELECT Id\nFROM mod_SHOP_Prodotti\nWHERE Id IN (SELECT Id FROM #TempProdotti)\n AND ProdottoAggregato = 0;\n\nINSERT INTO #TempProdottiAggregati (Id)\nSELECT Id\nFROM mod_SHOP_Prodotti\nWHERE Id IN (SELECT Id FROM #TempProdotti)\n AND ProdottoAggregato = 1;\n\nGO\n\nSELECT TP.*\nFROM (VALUES (NULL)) V (N)\n LEFT JOIN #TempProdotti TP ON 1 = 1;\nSELECT TPA.*\nFROM (VALUES (NULL)) V (N)\n LEFT JOIN #TempProdottiAggregati TPA ON 1 = 1;\nSELECT TCS.*\nFROM (VALUES (NULL)) V (N)\n LEFT JOIN #TempCorsiSingoli TCS ON 1 = 1;\nGO\nDROP TABLE #TempProdotti;\nDROP TABLE #TempProdottiAggregati;\nDROP TABLE #TempCorsiSingoli;\n\n" ]
[ 0 ]
[]
[]
[ "split", "temp_tables", "tsql" ]
stackoverflow_0074658088_split_temp_tables_tsql.txt
Q: Syntax error in .cabal file created by cabal init, at the name of the package I have a directory called day-2 (unrelated to the issue, just working through the Advent of Code), in which I ran cabal init. This init command generated a day-2.cabal file: cabal-version: 3.4 -- The cabal-version field refers to the version of the .cabal specification, -- and can be different from the cabal-install (the tool) version and the -- Cabal (the library) version you are using. As such, the Cabal (the library) -- version used must be equal or greater than the version stated in this field. -- Starting from the specification version 2.2, the cabal-version field must be -- the first thing in the cabal file. -- Initial package description 'day-2' generated by -- 'cabal init'. For further documentation, see: -- http://haskell.org/cabal/users-guide/ -- -- The name of the package. name: day-2 -- The package version. -- See the Haskell package versioning policy (PVP) for standards -- guiding when and how versions should be incremented. -- https://pvp.haskell.org -- PVP summary: +-+------- breaking API changes -- | | +----- non-breaking API additions -- | | | +--- code changes with no API change version: 0.1.0.0 -- A short (one-line) description of the package. -- synopsis: -- A longer description of the package. -- description: -- The license under which the package is released. license: AGPL-3.0-or-later -- The file containing the license text. license-file: LICENSE -- The package author(s). author: <redacted> -- An email address to which users can send suggestions, bug reports, and patches. maintainer: <redacted> -- A copyright notice. -- copyright: build-type: Simple -- Extra doc files to be distributed with the package, such as a CHANGELOG or a README. extra-doc-files: CHANGELOG.md -- Extra source files to be distributed with the package, such as examples, or a tutorial module. -- extra-source-files: common warnings ghc-options: -Wall executable day-2 -- Import common warning flags. import: warnings -- .hs or .lhs file containing the Main module. main-is: Main.hs -- Modules included in this executable, other than Main. -- other-modules: -- LANGUAGE extensions used by modules in this package. -- other-extensions: -- Other library packages from which modules are imported. build-depends: base ^>=4.12.0.0 -- Directories containing source files. hs-source-dirs: app -- Base language which the package is written in. default-language: Haskell2010 Here are a list of files it created: $ find . -type f ./app/Main.hs ./LICENSE ./CHANGELOG.md ./day-2.cabal ./dist-newstyle/cache/compiler ./README.md # I created this one myself, manually I went to compile my app/Main.hs file, which looks like: module Main where main :: IO () main = putStrLn "hello" When I run cabal build I get this error: $ cabal build Errors encountered when parsing cabal file ./day-2.cabal: day-2.cabal:14:26: error: unexpected end of input 1 | cabal-version: 3.4 2 | -- The cabal-version field refers to the version of the .cabal specification, 3 | -- and can be different from the cabal-install (the tool) version and the 4 | -- Cabal (the library) version you are using. As such, the Cabal (the library) 5 | -- version used must be equal or greater than the version stated in this field. 6 | -- Starting from the specification version 2.2, the cabal-version field must be 7 | -- the first thing in the cabal file. 8 | 9 | -- Initial package description 'day-2' generated by 10 | -- 'cabal init'. 
For further documentation, see: 11 | -- http://haskell.org/cabal/users-guide/ 12 | -- 13 | -- The name of the package. 14 | name: day-2 | ^ I don't see anyone else talking about this particular issue. Here is my cabal version: $ cabal --version cabal-install version 3.8.1.0 compiled using version 3.8.1.0 of the Cabal library Not sure if it's relevant, but here is my ghc version as well: $ ghc --version The Glorious Glasgow Haskell Compilation System, version 8.6.5 I should say, yesterday I ran cabal init in a different directory (for day-1) and it made my Main.hs and day-1.cabal files just fine, and it was able to build and run without issue. Not sure what the problem is here; I was hoping someone knew what is wrong.
A: Cabal package names may contain letters, numbers and hyphens, but not spaces and may also not contain a hyphened section consisting of only numbers. https://cabal.readthedocs.io/en/stable/developing-packages.html#package-names-and-versions Otherwise the syntax $PACKAGE-$VERSION would be ambiguous.
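In other words, the directory-derived name day-2 is rejected because its final hyphen-separated section ("2") consists only of digits, which would make the $PACKAGE-$VERSION syntax ambiguous. A minimal sketch of the fix is to edit the generated day-2.cabal and pick a name whose last section contains a letter (the replacement names below are just examples):

-- invalid: the trailing section is all digits
-- name: day-2

-- valid alternatives:
name: day2
-- or:
-- name: aoc-day2

After renaming, cabal build should get past the parse error reported at the name field.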
Syntax error in .cabal file created by cabal init, at the name of the package
I have a directory called day-2 (unrelated to the issue, just working through the Advent of Code), in which I ran cabal init. This init command generated a day-2.cabal file: cabal-version: 3.4 -- The cabal-version field refers to the version of the .cabal specification, -- and can be different from the cabal-install (the tool) version and the -- Cabal (the library) version you are using. As such, the Cabal (the library) -- version used must be equal or greater than the version stated in this field. -- Starting from the specification version 2.2, the cabal-version field must be -- the first thing in the cabal file. -- Initial package description 'day-2' generated by -- 'cabal init'. For further documentation, see: -- http://haskell.org/cabal/users-guide/ -- -- The name of the package. name: day-2 -- The package version. -- See the Haskell package versioning policy (PVP) for standards -- guiding when and how versions should be incremented. -- https://pvp.haskell.org -- PVP summary: +-+------- breaking API changes -- | | +----- non-breaking API additions -- | | | +--- code changes with no API change version: 0.1.0.0 -- A short (one-line) description of the package. -- synopsis: -- A longer description of the package. -- description: -- The license under which the package is released. license: AGPL-3.0-or-later -- The file containing the license text. license-file: LICENSE -- The package author(s). author: <redacted> -- An email address to which users can send suggestions, bug reports, and patches. maintainer: <redacted> -- A copyright notice. -- copyright: build-type: Simple -- Extra doc files to be distributed with the package, such as a CHANGELOG or a README. extra-doc-files: CHANGELOG.md -- Extra source files to be distributed with the package, such as examples, or a tutorial module. -- extra-source-files: common warnings ghc-options: -Wall executable day-2 -- Import common warning flags. import: warnings -- .hs or .lhs file containing the Main module. main-is: Main.hs -- Modules included in this executable, other than Main. -- other-modules: -- LANGUAGE extensions used by modules in this package. -- other-extensions: -- Other library packages from which modules are imported. build-depends: base ^>=4.12.0.0 -- Directories containing source files. hs-source-dirs: app -- Base language which the package is written in. default-language: Haskell2010 Here are a list of files it created: $ find . -type f ./app/Main.hs ./LICENSE ./CHANGELOG.md ./day-2.cabal ./dist-newstyle/cache/compiler ./README.md # I created this one myself, manually I went to compile my app/Main.hs file, which looks like: module Main where main :: IO () main = putStrLn "hello" When I run cabal build I get this error: $ cabal build Errors encountered when parsing cabal file ./day-2.cabal: day-2.cabal:14:26: error: unexpected end of input 1 | cabal-version: 3.4 2 | -- The cabal-version field refers to the version of the .cabal specification, 3 | -- and can be different from the cabal-install (the tool) version and the 4 | -- Cabal (the library) version you are using. As such, the Cabal (the library) 5 | -- version used must be equal or greater than the version stated in this field. 6 | -- Starting from the specification version 2.2, the cabal-version field must be 7 | -- the first thing in the cabal file. 8 | 9 | -- Initial package description 'day-2' generated by 10 | -- 'cabal init'. For further documentation, see: 11 | -- http://haskell.org/cabal/users-guide/ 12 | -- 13 | -- The name of the package. 
14 | name: day-2 | ^ I don't see anyone else talking about this particular issue. Here is my cabal version: $ cabal --version cabal-install version 3.8.1.0 compiled using version 3.8.1.0 of the Cabal library Not sure if it's relevant, but here is my ghc version as well: $ ghc --version The Glorious Glasgow Haskell Compilation System, version 8.6.5 I should say, yesterday I ran cabal init in a different directory (for day-1) and it made my Main.hs and day-1.cabal files just fine, and it was able to build and run without issue. Not sure what the problem is here; I was hoping someone knew what is wrong.
[ "\nCabal package names may contain letters, numbers and hyphens, but not spaces and may also not contain a hyphened section consisting of only numbers.\nhttps://cabal.readthedocs.io/en/stable/developing-packages.html#package-names-and-versions\n\nOtherwise the syntax $PACKAGE-$VERSION would be ambiguous.\n" ]
[ 1 ]
[]
[]
[ "cabal", "compiler_errors", "haskell" ]
stackoverflow_0074658335_cabal_compiler_errors_haskell.txt
Q: Snakemake not interpreting wildcard correctly?
I am trying to run a Snakemake file, but it is producing a weird result:

refseq = 'refseq.fasta'
reads = '_R1_001'
reads2 = '_R2_001'

configfile: "config.yaml"  ## Add config

def getsamples():
    import glob
    test = (glob.glob("*.fastq.gz"))
    samples = []
    for i in test:
        samples.append(i.rsplit('_', 2)[0])
    print(samples)
    return(samples)

def getbarcodes():
    with open('unique.barcodes.txt') as file:
        lines = [line.rstrip() for line in file]
    return(lines)

rule all:
    input:
        expand("called/{barcodes}{sample}_called.vcf", barcodes=getbarcodes(), sample=getsamples()),
        expand("mosdepth/{barcodes}{sample}.mosdepth.summary.txt", barcodes=getbarcodes(), sample=getsamples())

rule fastq_grep:
    input:
        R1 = "{sample}_R1_001.fastq.gz",
        R2 = "{sample}_R2_001.fastq.gz"
    output:
        "grepped/{barcodes}{sample}_R1_001.plate.fastq",
        "grepped/{barcodes}{sample}_R2_001.plate.fastq"
    shell:
        "fastq-grep -i '{wildcards.barcodes}' {input.R1} > {output} && fastq-grep -i '{wildcards.barcodes}' {input.R2} > {output}"

I have files in my directory ending with *.fastq.gz, but I get this:

Missing input files for rule fastq_grep:
0_R1_001.fastq.gz
0_R2_001.fastq.gz

Those two files do not exist; where is it getting them from? I would expect to see the many fastq files that are in my directory, but it is only listing one file that does not exist.
A: It's a common problem, due to the {barcodes}{sample} pattern. Snakemake won't know where {barcodes} ends and where {sample} starts without a wildcard_constraint. Right now, Snakemake thinks that your sample wildcard is just a 0.
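Following on from that answer, a minimal sketch of the fix: add a wildcard_constraints block (the global form applies to every rule) so Snakemake can tell the two wildcards apart. The exact regexes below are assumptions; adjust them to your real barcode and sample naming scheme:

wildcard_constraints:
    barcodes = "[ACGT]+",       # e.g. barcodes are fixed-alphabet DNA strings
    sample = "[A-Za-z0-9-]+"    # sample prefixes taken from the fastq filenames

With constraints like these, Snakemake no longer accepts the degenerate parse that produced the 0_R1_001.fastq.gz ghost input.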
Snakemake not interpreting wildcard correctly?
I am trying to run a snakemake file but it is producing a weird result refseq = 'refseq.fasta' reads = '_R1_001' reads2 = '_R2_001' configfile: "config.yaml" ## Add config def getsamples(): import glob test = (glob.glob("*.fastq.gz")) samples = [] for i in test: samples.append(i.rsplit('_', 2)[0]) print(samples) return(samples) def getbarcodes(): with open('unique.barcodes.txt') as file: lines = [line.rstrip() for line in file] return(lines) rule all: input: expand("called/{barcodes}{sample}_called.vcf", barcodes=getbarcodes(), sample=getsamples()), expand("mosdepth/{barcodes}{sample}.mosdepth.summary.txt", barcodes=getbarcodes(), sample=getsamples()) rule fastq_grep: input: R1 = "{sample}_R1_001.fastq.gz", R2 = "{sample}_R2_001.fastq.gz" output: "grepped/{barcodes}{sample}_R1_001.plate.fastq", "grepped/{barcodes}{sample}_R2_001.plate.fastq" shell: "fastq-grep -i '{wildcards.barcodes}' {input.R1} > {output} && fastq-grep -i '{wildcards.barcodes}' {input.R2} > {output}" I have files in my directory with *.fastq.gz on the end of them but I get this: Missing input files for rule fastq_grep: 0_R1_001.fastq.gz 0_R2_001.fastq.gz Those two files do not exist, where is it getting them from? I would expect to see a lot of fastq files that are in my directory but it is only listing one file that does not exist.
[ "It's a common problem due to {barcodes}{sample} pattern.\nSnakemake won't know where {barcodes} ends and where {sample} starts without a wildcard_constraint. Right now, snakemake is thinking that your sample wildcard is just a 0.\n" ]
[ 1 ]
[]
[]
[ "bash", "bioinformatics", "python", "snakemake" ]
stackoverflow_0074658260_bash_bioinformatics_python_snakemake.txt
Q: How to create a datepicker search in a Dashboard in Angular 14? Or, how to filter search values by clicking on a Datepicker?
How to create a datepicker search in a Dashboard in Angular 14? Or, how to filter search values by clicking on a Datepicker?
Html:

<input type="date" style="width: 100%; height: 34px" (change)="SendDataonChange($event)"/>

ts:

SendDataonChange(event: any) {
    debugger;
    clearTimeout(this.timeout);
    var $this = this;
    this.timeout = setTimeout(function () {
        if (event.keyCode != 13) {
            $this.applySearchByDateFilter(event.target.value);
        }
    }, 500);
}

Html:

<input type="date" style="width: 100%; height: 34px" (change)="SendDataonChange($event)"/>

I expect results (a dashboard records view) based on clicking a particular date in the datepicker.
A: Does this work for you?

<input type="date" [ngModel]="dt | date:'yyyy-MM-dd'" (change)="dt = $event">

I'm showing you how you would convert your date to a desirable format (my guess is the format is not right for your service call). Please let me know if it does not work; I am only answering your question based on my understanding.
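A sketch of what applySearchByDateFilter could look like on the component, since the question leaves it undefined. The records, filteredRecords, and createdAt names are assumptions made for illustration; <input type="date"> emits its value as a "yyyy-MM-dd" string, which makes the comparison straightforward:

// Hypothetical component members: this.records holds all dashboard rows,
// this.filteredRecords is what the template actually renders.
applySearchByDateFilter(selected: string): void {
    this.filteredRecords = this.records.filter(r =>
        // assumes each record carries a createdAt timestamp; compare date part only
        new Date(r.createdAt).toISOString().slice(0, 10) === selected
    );
}

Note that toISOString() compares in UTC; if your records carry local timestamps near midnight, a timezone-aware comparison may be needed instead.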
How to create a datepicker search in a Dashboard in Angular 14? Or, how to filter search values by clicking on a Datepicker?
How to create a datepicker search in a Dashboard in Angular 14? Or, how to filter search values by clicking on a Datepicker? Html: <input type="date" style="width: 100%; height: 34px" (change)="SendDataonChange($event)"/> ts: SendDataonChange(event: any) { debugger; clearTimeout(this.timeout); var $this = this; this.timeout = setTimeout(function () { if (event.keyCode != 13) { $this.applySearchByDateFilter(event.target.value); } }, 500); } Html: <input type="date" style="width: 100%; height: 34px" (change)="SendDataonChange($event)"/> I expect results (a dashboard records view) based on clicking a particular date in the datepicker.
[ "do this work for you?\n<input type=\"date\" [ngModel] =\"dt | date:'yyyy-MM-dd'\" (change)=\"dt = $event\">\n\nIm showing you how you would convert your date to a desirable format (My guess is the format is not right for your service call).\nPlease let me know if it does not work, i am only answering your question based on my understanding\n" ]
[ 0 ]
[]
[]
[ ".net", "angular", "asp.net", "c#", "webapi" ]
stackoverflow_0074651344_.net_angular_asp.net_c#_webapi.txt
Q: Split one file into multiple files based on delimiter I have one file with -| as delimiter after each section...need to create separate files for each section using unix. example of input file wertretr ewretrtret 1212132323 000232 -| ereteertetet 232434234 erewesdfsfsfs 0234342343 -| jdhg3875jdfsgfd sjdhfdbfjds 347674657435 -| Expected result in File 1 wertretr ewretrtret 1212132323 000232 -| Expected result in File 2 ereteertetet 232434234 erewesdfsfsfs 0234342343 -| Expected result in File 3 jdhg3875jdfsgfd sjdhfdbfjds 347674657435 -| A: A one liner, no programming. (except the regexp etc.) csplit --digits=2 --quiet --prefix=outfile infile "/-|/+1" "{*}" tested on: csplit (GNU coreutils) 8.30 Notes about usage on Apple Mac "For OS X users, note that the version of csplit that comes with the OS doesn't work. You'll want the version in coreutils (installable via Homebrew), which is called gcsplit." — @Danial "Just to add, you can get the version for OS X to work (at least with High Sierra). You just need to tweak the args a bit csplit -k -f=outfile infile "/-\|/+1" "{3}". Features that don't seem to work are the "{*}", I had to be specific on the number of separators, and needed to add -k to avoid it deleting all outfiles if it can't find a final separator. Also if you want --digits, you need to use -n instead." — @Pebbl A: awk '{f="file" NR; print $0 " -|"> f}' RS='-\\|' input-file Explanation (edited): RS is the record separator, and this solution uses a gnu awk extension which allows it to be more than one character. NR is the record number. The print statement prints a record followed by " -|" into a file that contains the record number in its name. A: Debian has csplit, but I don't know if that's common to all/most/other distributions. If not, though, it shouldn't be too hard to track down the source and compile it... A: I solved a slightly different problem, where the file contains a line with the name where the text that follows should go. This perl code does the trick for me: #!/path/to/perl -w #comment the line below for UNIX systems use Win32::Clipboard; # Get command line flags #print ($#ARGV, "\n"); if($#ARGV == 0) { print STDERR "usage: ncsplit.pl --mff -- filename.txt [...] \n\nNote that no space is allowed between the '--' and the related parameter.\n\nThe mff is found on a line followed by a filename. All of the contents of filename.txt are written to that file until another mff is found.\n"; exit; } # this package sets the ARGV count variable to -1; use Getopt::Long; my $mff = ""; GetOptions('mff' => \$mff); # set a default $mff variable if ($mff eq "") {$mff = "-#-"}; print ("using file switch=", $mff, "\n\n"); while($_ = shift @ARGV) { if(-f "$_") { push @filelist, $_; } } # Could be more than one file name on the command line, # but this version throws away the subsequent ones. $readfile = $filelist[0]; open SOURCEFILE, "<$readfile" or die "File not found...\n\n"; #print SOURCEFILE; while (<SOURCEFILE>) { /^$mff (.*$)/o; $outname = $1; # print $outname; # print "right is: $1 \n"; if (/^$mff /) { open OUTFILE, ">$outname" ; print "opened $outname\n"; } else {print OUTFILE "$_"}; } A: The following command works for me. Hope it helps. awk 'BEGIN{file = 0; filename = "output_" file ".txt"} /-|/ {getline; file ++; filename = "output_" file ".txt"} {print $0 > filename}' input A: You can also use awk. I'm not very familiar with awk, but the following did seem to work for me. It generated part1.txt, part2.txt, part3.txt, and part4.txt. 
Do note, that the last partn.txt file that this generates is empty. I'm not sure how fix that, but I'm sure it could be done with a little tweaking. Any suggestions anyone? awk_pattern file: BEGIN{ fn = "part1.txt"; n = 1 } { print > fn if (substr($0,1,2) == "-|") { close (fn) n++ fn = "part" n ".txt" } } bash command: awk -f awk_pattern input.file A: Here's a Python 3 script that splits a file into multiple files based on a filename provided by the delimiters. Example input file: # Ignored ######## FILTER BEGIN foo.conf This goes in foo.conf. ######## FILTER END # Ignored ######## FILTER BEGIN bar.conf This goes in bar.conf. ######## FILTER END Here's the script: #!/usr/bin/env python3 import os import argparse # global settings start_delimiter = '######## FILTER BEGIN' end_delimiter = '######## FILTER END' # parse command line arguments parser = argparse.ArgumentParser() parser.add_argument("-i", "--input-file", required=True, help="input filename") parser.add_argument("-o", "--output-dir", required=True, help="output directory") args = parser.parse_args() # read the input file with open(args.input_file, 'r') as input_file: input_data = input_file.read() # iterate through the input data by line input_lines = input_data.splitlines() while input_lines: # discard lines until the next start delimiter while input_lines and not input_lines[0].startswith(start_delimiter): input_lines.pop(0) # corner case: no delimiter found and no more lines left if not input_lines: break # extract the output filename from the start delimiter output_filename = input_lines.pop(0).replace(start_delimiter, "").strip() output_path = os.path.join(args.output_dir, output_filename) # open the output file print("extracting file: {0}".format(output_path)) with open(output_path, 'w') as output_file: # while we have lines left and they don't match the end delimiter while input_lines and not input_lines[0].startswith(end_delimiter): output_file.write("{0}\n".format(input_lines.pop(0))) # remove end delimiter if present if not input_lines: input_lines.pop(0) Finally here's how you run it: $ python3 script.py -i input-file.txt -o ./output-folder/ A: Use csplit if you have it. If you don't, but you have Python... don't use Perl. Lazy reading of the file Your file may be too large to hold in memory all at once - reading line by line may be preferable. 
Assume the input file is named "samplein": $ python3 -c "from itertools import count with open('samplein') as file: for i in count(): firstline = next(file, None) if firstline is None: break with open(f'out{i}', 'w') as out: out.write(firstline) for line in file: out.write(line) if line == '-|\n': break" A: cat file| ( I=0; echo -n "">file0; while read line; do echo $line >> file$I; if [ "$line" == '-|' ]; then I=$[I+1]; echo -n "" > file$I; fi; done ) and the formated version: #!/bin/bash cat FILE | ( I=0; echo -n"">file0; while read line; do echo $line >> file$I; if [ "$line" == '-|' ]; then I=$[I+1]; echo -n "" > file$I; fi; done; ) A: Here is a perl code that will do the thing #!/usr/bin/perl open(FI,"file.txt") or die "Input file not found"; $cur=0; open(FO,">res.$cur.txt") or die "Cannot open output file $cur"; while(<FI>) { print FO $_; if(/^-\|/) { close(FO); $cur++; open(FO,">res.$cur.txt") or die "Cannot open output file $cur" } } close(FO); A: This is the sort of problem I wrote context-split for: http://stromberg.dnsalias.org/~strombrg/context-split.html $ ./context-split -h usage: ./context-split [-s separator] [-n name] [-z length] -s specifies what regex should separate output files -n specifies how output files are named (default: numeric -z specifies how long numbered filenames (if any) should be -i include line containing separator in output files operations are always performed on stdin A: Try this python script: import os import argparse delimiter = '-|' parser = argparse.ArgumentParser() parser.add_argument("-i", "--input-file", required=True, help="input txt") parser.add_argument("-o", "--output-dir", required=True, help="output directory") args = parser.parse_args() counter = 1; output_filename = 'part-'+str(counter) with open(args.input_file, 'r') as input_file: for line in input_file.read().split('\n'): if delimiter in line: counter = counter+1 output_filename = 'part-'+str(counter) print('Section '+str(counter)+' Started') else: #skips empty lines (change the condition if you want empty lines too) if line.strip() : output_path = os.path.join(args.output_dir, output_filename+'.txt') with open(output_path, 'a') as output_file: output_file.write("{0}\n".format(line)) ex: python split.py -i ./to-split.txt -o ./output-dir
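One more variant for completeness, since the top awk answer relies on GNU awk's multi-character RS extension: a sketch in plain POSIX awk that needs no extension and keeps the -| line in each output file, matching the expected results above:

awk '{ print > ("file" n) } /^-\|/ { close("file" n); n++ }' n=1 input-file

Each input line is appended to the current fileN; whenever a line starting with -| is seen, that file is closed and the counter advances, so the next section opens a fresh file (file1, file2, file3, ...).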
Split one file into multiple files based on delimiter
I have one file with -| as delimiter after each section...need to create separate files for each section using unix. example of input file wertretr ewretrtret 1212132323 000232 -| ereteertetet 232434234 erewesdfsfsfs 0234342343 -| jdhg3875jdfsgfd sjdhfdbfjds 347674657435 -| Expected result in File 1 wertretr ewretrtret 1212132323 000232 -| Expected result in File 2 ereteertetet 232434234 erewesdfsfsfs 0234342343 -| Expected result in File 3 jdhg3875jdfsgfd sjdhfdbfjds 347674657435 -|
[ "A one liner, no programming. (except the regexp etc.)\ncsplit --digits=2 --quiet --prefix=outfile infile \"/-|/+1\" \"{*}\"\n\n\ntested on:\ncsplit (GNU coreutils) 8.30\nNotes about usage on Apple Mac\n\"For OS X users, note that the version of csplit that comes with the OS doesn't work. You'll want the version in coreutils (installable via Homebrew), which is called gcsplit.\" — @Danial\n\"Just to add, you can get the version for OS X to work (at least with High Sierra). You just need to tweak the args a bit csplit -k -f=outfile infile \"/-\\|/+1\" \"{3}\". Features that don't seem to work are the \"{*}\", I had to be specific on the number of separators, and needed to add -k to avoid it deleting all outfiles if it can't find a final separator. Also if you want --digits, you need to use -n instead.\" — @Pebbl\n", "awk '{f=\"file\" NR; print $0 \" -|\"> f}' RS='-\\\\|' input-file\n\nExplanation (edited):\nRS is the record separator, and this solution uses a gnu awk extension which allows it to be more than one character. NR is the record number.\nThe print statement prints a record followed by \" -|\" into a file that contains the record number in its name.\n", "Debian has csplit, but I don't know if that's common to all/most/other distributions. If not, though, it shouldn't be too hard to track down the source and compile it...\n", "I solved a slightly different problem, where the file contains a line with the name where the text that follows should go. This perl code does the trick for me: \n#!/path/to/perl -w\n\n#comment the line below for UNIX systems\nuse Win32::Clipboard;\n\n# Get command line flags\n\n#print ($#ARGV, \"\\n\");\nif($#ARGV == 0) {\n print STDERR \"usage: ncsplit.pl --mff -- filename.txt [...] \\n\\nNote that no space is allowed between the '--' and the related parameter.\\n\\nThe mff is found on a line followed by a filename. All of the contents of filename.txt are written to that file until another mff is found.\\n\";\n exit;\n}\n\n# this package sets the ARGV count variable to -1;\n\nuse Getopt::Long;\nmy $mff = \"\";\nGetOptions('mff' => \\$mff);\n\n# set a default $mff variable\nif ($mff eq \"\") {$mff = \"-#-\"};\nprint (\"using file switch=\", $mff, \"\\n\\n\");\n\nwhile($_ = shift @ARGV) {\n if(-f \"$_\") {\n push @filelist, $_;\n } \n}\n\n# Could be more than one file name on the command line, \n# but this version throws away the subsequent ones.\n\n$readfile = $filelist[0];\n\nopen SOURCEFILE, \"<$readfile\" or die \"File not found...\\n\\n\";\n#print SOURCEFILE;\n\nwhile (<SOURCEFILE>) {\n /^$mff (.*$)/o;\n $outname = $1;\n# print $outname;\n# print \"right is: $1 \\n\";\n\nif (/^$mff /) {\n\n open OUTFILE, \">$outname\" ;\n print \"opened $outname\\n\";\n }\n else {print OUTFILE \"$_\"};\n }\n\n", "The following command works for me. Hope it helps.\nawk 'BEGIN{file = 0; filename = \"output_\" file \".txt\"}\n /-|/ {getline; file ++; filename = \"output_\" file \".txt\"}\n {print $0 > filename}' input\n\n", "You can also use awk. I'm not very familiar with awk, but the following did seem to work for me. It generated part1.txt, part2.txt, part3.txt, and part4.txt. Do note, that the last partn.txt file that this generates is empty. I'm not sure how fix that, but I'm sure it could be done with a little tweaking. 
Any suggestions anyone?\nawk_pattern file:\nBEGIN{ fn = \"part1.txt\"; n = 1 }\n{\n print > fn\n if (substr($0,1,2) == \"-|\") {\n close (fn)\n n++\n fn = \"part\" n \".txt\"\n }\n}\n\nbash command:\nawk -f awk_pattern input.file\n", "Here's a Python 3 script that splits a file into multiple files based on a filename provided by the delimiters. Example input file:\n# Ignored\n\n######## FILTER BEGIN foo.conf\nThis goes in foo.conf.\n######## FILTER END\n\n# Ignored\n\n######## FILTER BEGIN bar.conf\nThis goes in bar.conf.\n######## FILTER END\n\nHere's the script:\n#!/usr/bin/env python3\n\nimport os\nimport argparse\n\n# global settings\nstart_delimiter = '######## FILTER BEGIN'\nend_delimiter = '######## FILTER END'\n\n# parse command line arguments\nparser = argparse.ArgumentParser()\nparser.add_argument(\"-i\", \"--input-file\", required=True, help=\"input filename\")\nparser.add_argument(\"-o\", \"--output-dir\", required=True, help=\"output directory\")\n\nargs = parser.parse_args()\n\n# read the input file\nwith open(args.input_file, 'r') as input_file:\n input_data = input_file.read()\n\n# iterate through the input data by line\ninput_lines = input_data.splitlines()\nwhile input_lines:\n # discard lines until the next start delimiter\n while input_lines and not input_lines[0].startswith(start_delimiter):\n input_lines.pop(0)\n\n # corner case: no delimiter found and no more lines left\n if not input_lines:\n break\n\n # extract the output filename from the start delimiter\n output_filename = input_lines.pop(0).replace(start_delimiter, \"\").strip()\n output_path = os.path.join(args.output_dir, output_filename)\n\n # open the output file\n print(\"extracting file: {0}\".format(output_path))\n with open(output_path, 'w') as output_file:\n # while we have lines left and they don't match the end delimiter\n while input_lines and not input_lines[0].startswith(end_delimiter):\n output_file.write(\"{0}\\n\".format(input_lines.pop(0)))\n\n # remove end delimiter if present\n if not input_lines:\n input_lines.pop(0)\n\nFinally here's how you run it:\n$ python3 script.py -i input-file.txt -o ./output-folder/\n\n", "Use csplit if you have it. \nIf you don't, but you have Python... don't use Perl.\nLazy reading of the file\nYour file may be too large to hold in memory all at once - reading line by line may be preferable. 
Assume the input file is named \"samplein\":\n$ python3 -c \"from itertools import count\nwith open('samplein') as file:\n for i in count():\n firstline = next(file, None)\n if firstline is None:\n break\n with open(f'out{i}', 'w') as out:\n out.write(firstline)\n for line in file:\n out.write(line)\n if line == '-|\\n':\n break\"\n\n", "cat file| ( I=0; echo -n \"\">file0; while read line; do echo $line >> file$I; if [ \"$line\" == '-|' ]; then I=$[I+1]; echo -n \"\" > file$I; fi; done )\n\nand the formated version:\n#!/bin/bash\ncat FILE | (\n I=0;\n echo -n\"\">file0;\n while read line; \n do\n echo $line >> file$I;\n if [ \"$line\" == '-|' ];\n then I=$[I+1];\n echo -n \"\" > file$I;\n fi;\n done;\n)\n\n", "Here is a perl code that will do the thing\n#!/usr/bin/perl\nopen(FI,\"file.txt\") or die \"Input file not found\";\n$cur=0;\nopen(FO,\">res.$cur.txt\") or die \"Cannot open output file $cur\";\nwhile(<FI>)\n{\n print FO $_;\n if(/^-\\|/)\n {\n close(FO);\n $cur++;\n open(FO,\">res.$cur.txt\") or die \"Cannot open output file $cur\"\n }\n}\nclose(FO);\n\n", "This is the sort of problem I wrote context-split for:\nhttp://stromberg.dnsalias.org/~strombrg/context-split.html\n$ ./context-split -h\nusage:\n./context-split [-s separator] [-n name] [-z length]\n -s specifies what regex should separate output files\n -n specifies how output files are named (default: numeric\n -z specifies how long numbered filenames (if any) should be\n -i include line containing separator in output files\n operations are always performed on stdin\n\n", "Try this python script:\nimport os\nimport argparse\n\ndelimiter = '-|'\n\nparser = argparse.ArgumentParser()\nparser.add_argument(\"-i\", \"--input-file\", required=True, help=\"input txt\")\nparser.add_argument(\"-o\", \"--output-dir\", required=True, help=\"output directory\")\n\nargs = parser.parse_args()\n\ncounter = 1;\noutput_filename = 'part-'+str(counter)\nwith open(args.input_file, 'r') as input_file:\n for line in input_file.read().split('\\n'):\n if delimiter in line:\n counter = counter+1\n output_filename = 'part-'+str(counter)\n print('Section '+str(counter)+' Started')\n else:\n #skips empty lines (change the condition if you want empty lines too)\n if line.strip() :\n output_path = os.path.join(args.output_dir, output_filename+'.txt')\n with open(output_path, 'a') as output_file:\n output_file.write(\"{0}\\n\".format(line))\n\nex:\n\npython split.py -i ./to-split.txt -o ./output-dir\n\n" ]
[ 108, 44, 8, 5, 4, 3, 2, 2, 1, 0, 0, 0 ]
[]
[]
[ "awk", "linux", "split", "unix" ]
stackoverflow_0011313852_awk_linux_split_unix.txt
Q: Informatica Powercenter - Load contents of XML file to CLOB column Is it possible to pick up an XML file via Powercenter and load the entirety of it into a target table's CLOB column? I have some code which is currently doing this completely in Oracle, but would like to perform the same process in Powercenter. Or is the only option to call a procedure from within Powercenter to perform this insert? A: Of course it's possible. Set the DB target field as clob if not done already. Set the FILE source field as text with length like 100000. Pls note there can be some issues if it's too large. Need to test it out. All transformations should use string/text data type with length as 100000. Pls note, there can be some perf problem if you are trying to do some transformations to this field.
Informatica Powercenter - Load contents of XML file to CLOB column
Is it possible to pick up an XML file via Powercenter and load the entirety of it into a target table's CLOB column? I have some code which is currently doing this completely in Oracle, but would like to perform the same process in Powercenter. Or is the only option to call a procedure from within Powercenter to perform this insert?
[ "of course its possible.\n\nSet the DB target field as clob if not done already.\nSet the FILE source field as text with length like 100000. Pls note there can be some issues if its too large. Need to test it out.\nAll transformations should use string/text data type with length as 100000.\n\nPls note, there can be some perf problem if you are trying to do some transformations to this field.\n" ]
[ 1 ]
[]
[]
[ "clob", "file_handling", "informatica", "informatica_powercenter", "xml" ]
stackoverflow_0074656699_clob_file_handling_informatica_informatica_powercenter_xml.txt
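For readers who want to sanity-check the target outside PowerCenter: a minimal Python sketch of the same load, reading the whole XML file as one string and binding it into a CLOB. The python-oracledb driver, the xml_staging table, and its column names are assumptions, not part of the original question.
import oracledb  # python-oracledb driver (assumed available)

def load_xml_to_clob(path, user, password, dsn):
    # Read the entire XML document as a single text value
    with open(path, "r", encoding="utf-8") as f:
        xml_text = f.read()
    with oracledb.connect(user=user, password=password, dsn=dsn) as conn:
        with conn.cursor() as cur:
            # Hint that the second bind variable targets a CLOB column
            cur.setinputsizes(None, oracledb.DB_TYPE_CLOB)
            cur.execute(
                "INSERT INTO xml_staging (doc_name, doc_body) VALUES (:1, :2)",
                [path, xml_text],
            )
        conn.commit()
If the full document round-trips this way, the remaining work is purely the PowerCenter mapping configuration described in the answer.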
Q: making a new column in R dataframe based on a selection of values from other columns I have a dataframe as below: data <- data.table (x= c(1,2,3,4,5), y= c(7,6,8,9,10), z= c('a','b','c','d','e'), w=c(01.05, 03.04, 11.08, 05.07, 09.18)) I need to make a new column containing values as I describe below: whenever x equals 3 or 5, or y equals 7, then make a new column called M and insert the value in the corresponding row from w into column M, and leave the other rows empty. The data should then look like this: data <- data.table (x= c(1,2,3,4,5), y= c(7, 6,8,9,10), z= c('a','b','c','d','e'), w=c(01.05, 03.04, 11.08, 05.07, 09.18), M=c(01.05, NA, 11.08, NA, 09.18)) I am a beginner in R and searched a lot but could not find a solution. I tried using case_when and apply functions but no luck. I would really appreciate it if anyone can help me with this. A: As you are using data.table, this is the way to assign := a new variable. You can use ifelse or even better fifelse (to ensure M will be the same class as w) to check for your conditions, using %in% and |. library(data.table) data <- data.table (x = c(1:5), y = c(7,6,8,9,10), z = letters[1:5], w = c(01.05, 03.04, 11.08, 05.07, 09.18)) data[, M := fifelse(x %in% c(3,5) | y == 7, w, NA_real_)] data #> x y z w M #> 1: 1 7 a 1.05 1.05 #> 2: 2 6 b 3.04 NA #> 3: 3 8 c 11.08 11.08 #> 4: 4 9 d 5.07 NA #> 5: 5 10 e 9.18 9.18 Edit: r2evans' approach in the comment below this answer does the same but is more elegant: data[x %in% c(3,5) | y == 7, M := w]
making a new column in R dataframe based on a selection of values from other columns
I have a dataframe as below: data <- data.table (x= c(1,2,3,4,5), y= c(7,6,8,9,10), z= c('a','b','c','d','e'), w=c(01.05, 03.04, 11.08, 05.07, 09.18)) I need to make a new column containing values as I describe below: whenever x equals 3 or 5, or y equals 7, then make a new column called M and insert the value in the corresponding row from w into column M, and leave the other rows empty. The data should then look like this: data <- data.table (x= c(1,2,3,4,5), y= c(7, 6,8,9,10), z= c('a','b','c','d','e'), w=c(01.05, 03.04, 11.08, 05.07, 09.18), M=c(01.05, NA, 11.08, NA, 09.18)) I am a beginner in R and searched a lot but could not find a solution. I tried using case_when and apply functions but no luck. I would really appreciate it if anyone can help me with this.
[ "As you are using data.table, this is the way to assign := a new variable. You can use ifelse or even better fifelse (to ensure M will be the same class as w) to check for your conditions, using %in% and |.\nlibrary(data.table)\ndata <- data.table (x = c(1:5), y = c(7,6,8,9,10), z = letters[1:5], \n w = c(01.05, 03.04, 11.08, 05.07, 09.18))\n\ndata[, M := fifelse(x %in% c(3,5) | y == 7, w, NA_real_)]\ndata\n#> x y z w M\n#> 1: 1 7 a 1.05 1.05\n#> 2: 2 6 b 3.04 NA\n#> 3: 3 8 c 11.08 11.08\n#> 4: 4 9 d 5.07 NA\n#> 5: 5 10 e 9.18 9.18\n\nEdit: r2evans' approach in the comment below this answer does the same but is more elegant:\ndata[x %in% c(3,5) | y == 7, M := w]\n" ]
[ 1 ]
[]
[]
[ "apply", "data.table", "dplyr", "r", "select" ]
stackoverflow_0074658230_apply_data.table_dplyr_r_select.txt
Q: Finding specific text with selenium Okay, so I need to search in a search engine for (person's name) net worth and from the first 5 links get all the values and find the average one. So... When I search, for example, Elon Musk net worth and open, say, the first result, which happens to be Wikipedia, my thought process was to search, for example, for a string that ends in billion and get that value, but there happen to be many strings that end in billion and I can't be sure if I found the one I need. Any advice? I haven't tried anything yet. I was just brainstorming but couldn't find any solutions to my problem. A: I think you're going to have to use some sort of machine learning based text analysis to figure out if it's the context you're looking for.
Finding specific text with selenium
Okay, so I need to search in a search engine for (person's name) net worth and from the first 5 links get all the values and find the average one. So... When I search, for example, Elon Musk net worth and open, say, the first result, which happens to be Wikipedia, my thought process was to search, for example, for a string that ends in billion and get that value, but there happen to be many strings that end in billion and I can't be sure if I found the one I need. Any advice? I haven't tried anything yet. I was just brainstorming but couldn't find any solutions to my problem.
[ "I think you're going to have to use some sort of machine learning based text analysis to figure out if it's the context you're looking for.\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "web_scraping" ]
stackoverflow_0074657852_python_selenium_web_scraping.txt
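One non-ML alternative worth noting: restrict the match to amounts that appear near the phrase "net worth", then average whatever matches. The sketch below is a simplification (requests instead of Selenium, a fixed URL list, an arbitrary 80-character proximity window are all assumptions); real result pages vary.
import re
import statistics
import requests

# Only accept "$X billion" amounts within ~80 chars after "net worth"
AMOUNT_NEAR_NET_WORTH = re.compile(
    r"net worth\D{0,80}?\$?\s*(\d+(?:\.\d+)?)\s*billion",
    re.IGNORECASE | re.DOTALL,
)

def net_worth_estimates(urls):
    values = []
    for url in urls:
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip pages that fail to load
        values += [float(m) for m in AMOUNT_NEAR_NET_WORTH.findall(html)]
    return values

urls = ["https://en.wikipedia.org/wiki/Elon_Musk"]  # in practice: the first 5 search hits
estimates = net_worth_estimates(urls)
if estimates:
    print(f"average estimate: {statistics.mean(estimates):.1f} billion")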
Q: How to automate triggering of a deploy hook at a specified time I have a Nuxt project where I scrape data on the server and send it back to the client on request. The app is SSG so the scraping happens at build time. The data changes once every 12 hours. I deployed it on vercel and it's working correctly but don't know how to set up an automation to trigger vercel deploy hooks to redeploy the app with the new data. I prefer to do it with GitHub Actions if it's possible so that all the projects are in one place. A: You can use crontab guru to find out how to make a cronjob every 12 hours aka 0 */12 * * *. Then, you can schedule it in your Github actions like the following on: # Triggers the workflow every 12 hours schedule: - cron: "0 */12 * * *" Check this dev article or the official documentation.
How to automate triggering of a deploy hook at a specified time
I have a Nuxt project where I scrape data on the server and send it back to the client on request. The app is SSG so the scraping happens at build time. The data changes once every 12 hours. I deployed it on vercel and it's working correctly but don't know how to set up an automation to trigger vercel deploy hooks to redeploy the app with the new data. I prefer to do it with GitHub Actions if it's possible so that all the projects are in one place.
[ "You can use crontab guru to find out how to make a cronjob every 12 hours aka 0 */12 * * *.\nThen, you can schedule it in your Github actions like the following\non:\n # Triggers the workflow every 12 hours\n schedule:\n - cron: \"0 */12 * * *\"\n\nCheck this dev article or the official documentation.\n" ]
[ 1 ]
[]
[]
[ "continuous_integration", "github_actions", "nuxt.js", "nuxtjs3", "vercel" ]
stackoverflow_0074656936_continuous_integration_github_actions_nuxt.js_nuxtjs3_vercel.txt
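Putting the pieces together, a complete workflow might look like the sketch below. The secret name VERCEL_DEPLOY_HOOK_URL is an assumption; store your deploy hook URL there rather than hard-coding it.
name: redeploy-every-12h
on:
  schedule:
    - cron: "0 */12 * * *"   # every 12 hours
  workflow_dispatch:         # manual trigger, handy for testing
jobs:
  trigger-vercel:
    runs-on: ubuntu-latest
    steps:
      - name: Call the Vercel deploy hook
        run: curl -fsS -X POST "${{ secrets.VERCEL_DEPLOY_HOOK_URL }}"
Each run POSTs to the hook, Vercel rebuilds the SSG site, and the freshly scraped data is baked into the new deployment.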
Q: What's the recommended way to delay Kotlin's buildSequence? I'm trying to poll a paginated API and provide new items to the user as they appear. fun connect(): Sequence<T> = buildSequence { while (true) { // result is a List<T> val result = dataSource.getFirstPage() yieldAll(/* the new data in `result` */) // Block the thread for a little bit } } Here's the sample usage: for (item in connect()) { // do something as each item is made available } My first thought was to use the delay function, but I get this message: Restricted suspended functions can only invoke member or extension suspending functions on their restricted coroutine scope This is the signature for buildSequence: public fun <T> buildSequence(builderAction: suspend SequenceBuilder<T>.() -> Unit): Sequence<T> I think this message means that I can only use the suspend functions in SequenceBuilder: yield and yieldAll and that using arbitrary suspend function calls aren't allowed. Right now I'm using this to block the sequence building by one second after every time the API is polled: val resumeTime = System.nanoTime() + TimeUnit.SECONDS.toNanos(1) while (resumeTime > System.nanoTime()) { // do nothing } This works, but it really doesn't seem like a good solution. Has anybody encountered this issue before? A: Why does it not work? Some research When we look at buildSequence, we can see that it takes an builderAction: suspend SequenceBuilder<T>.() -> Unit as its argument. As a client of that method, you'll be able to hand on a suspend lambda that has SequenceBuilder as its receiver (read about lambda with receiver here). The SequenceBuilder itself is annotated with RestrictSuspension: @RestrictsSuspension @SinceKotlin("1.1") public abstract class SequenceBuilder<in T> ... The annotation is defined and commented like this: /** * Classes and interfaces marked with this annotation are restricted * when used as receivers for extension `suspend` functions. * These `suspend` extensions can only invoke other member or extension * `suspend` functions on this particular receiver only * and are restricted from calling arbitrary suspension functions. */ @SinceKotlin("1.1") @Target(AnnotationTarget.CLASS) @Retention(AnnotationRetention.BINARY) public annotation class RestrictsSuspension As the RestrictSuspension documentation tells, in the case of buildSequence, you can pass a lambda with SequenceBuilder as its receiver but with restricted possibilities since you'll only be able to call "other member or extension suspend functions on this particular receiver". That means, the block passed to buildSequence may call any method defined on SequenceBuilder (like yield, yieldAll). Since, on the other hand, the block is "restricted from calling arbitrary suspension functions", using delay does not work. The resulting compiler error verifies it: Restricted suspended functions can only invoke member or extension suspending functions on their restricted coroutine scope. Ultimately, you need to be aware that the buildSequence creates a coroutine that is an example of a synchronous coroutine. In your example, the sequence code will be executed in the same thread that consumes the sequence by calling connect(). How to delay the sequence? As we learned, The buildSequence creates a synchronous sequence. It's fine to use regular Thread blocking here: fun connect(): Sequence<T> = buildSequence { while (true) { val result = dataSource.getFirstPage() yieldAll(result) Thread.sleep(1000) } } But, do you really want an entire thread to be blocked? Alternatively, you can implement asynchronous sequences as described here. As a result, using delay and other suspending functions will be valid. A: Just for an alternate solution... If what you're really trying to do is asynchronously produce elements, you can use Flows which are basically asynchronous sequences. Here is a quick table: Sync Async Single Normal value: fun example(): String suspending: suspend fun example(): String or fun example(): Deferred<String> Many Sequence: fun example(): Sequence<String> Flow: fun example(): Flow<String> You can convert your Sequence<T> to a Flow<T> by replacing the sequence { ... } builder with the flow { ... } builder and then replace yield/yieldAll with emit/emitAll: fun example(): Flow<String> = flow { (1..5).forEach { getString().let { emit(it) } } } suspend fun getString(): String = { ... } So, for your example: fun connect(): Flow<T> = flow { while (true) { // Call suspend function to get data from dataSource val result: List<T> = dataSource.getFirstPage() emitAll(result) // _Suspend_ for a little bit delay(1000) } }
What's the recommended way to delay Kotlin's buildSequence?
I'm trying to poll a paginated API and provide new items to the user as they appear. fun connect(): Sequence<T> = buildSequence { while (true) { // result is a List<T> val result = dataSource.getFirstPage() yieldAll(/* the new data in `result` */) // Block the thread for a little bit } } Here's the sample usage: for (item in connect()) { // do something as each item is made available } My first thought was to use the delay function, but I get this message: Restricted suspended functions can only invoke member or extension suspending functions on their restricted coroutine scope This is the signature for buildSequence: public fun <T> buildSequence(builderAction: suspend SequenceBuilder<T>.() -> Unit): Sequence<T> I think this message means that I can only use the suspend functions in SequenceBuilder: yield and yieldAll and that using arbitrary suspend function calls aren't allowed. Right now I'm using this to block the sequence building by one second after every time the API is polled: val resumeTime = System.nanoTime() + TimeUnit.SECONDS.toNanos(1) while (resumeTime > System.nanoTime()) { // do nothing } This works, but it really doesn't seem like a good solution. Has anybody encountered this issue before?
[ "Why does it not work? Some research\nWhen we look at buildSequence, we can see that it takes an builderAction: suspend SequenceBuilder<T>.() -> Unit as its argument. As a client of that method, you'll be able to hand on a suspend lambda that has SequenceBuilder as its receiver (read about lambda with receiver here). \nThe SequenceBuilder itself is annotated with RestrictSuspension:\n@RestrictsSuspension\n@SinceKotlin(\"1.1\")\npublic abstract class SequenceBuilder<in T> ...\n\nThe annotation is defined and commented like this:\n/**\n * Classes and interfaces marked with this annotation are restricted\n * when used as receivers for extension `suspend` functions. \n * These `suspend` extensions can only invoke other member or extension \n * `suspend` functions on this particular receiver only \n * and are restricted from calling arbitrary suspension functions.\n */\n@SinceKotlin(\"1.1\") @Target(AnnotationTarget.CLASS) @Retention(AnnotationRetention.BINARY)\npublic annotation class RestrictsSuspension\n\nAs the RestrictSuspension documentation tells, in the case of buildSequence, you can pass a lambda with SequenceBuilder as its receiver but with restricted possibilities since you'll only be able to call \"other member or extension suspend functions on this particular receiver\". That means, the block passed to buildSequence may call any method defined on SequenceBuilder (like yield, yieldAll). Since, on the other hand, the block is \"restricted from calling arbitrary suspension functions\", using delay does not work. The resulting compiler error verifies it: \n\nRestricted suspended functions can only invoke member or extension suspending functions on their restricted coroutine scope. \n\nUltimately, you need to be aware that the buildSequence creates a coroutine that is an example of a synchronous coroutine. In your example, the sequence code will be executed in the same thread that consumes the sequence by calling connect().\nHow to delay the sequence?\nAs we learned, The buildSequence creates a synchronous sequence. It's fine to use regular Thread blocking here:\nfun connect(): Sequence<T> = buildSequence {\n while (true) {\n val result = dataSource.getFirstPage()\n yieldAll(result)\n Thread.sleep(1000)\n }\n}\n\nBut, do you really want an entire thread to be blocked? Alternatively, you can implement asynchronous sequences as described here. As a result, using delay and other suspending functions will be valid.\n", "Just for an alternate solution...\nIf what you're really trying to do is asynchronously produce elements, you can use Flows which are basically asynchronous sequences.\nHere is a quick table:\n\n\n\n\n\nSync\nAsync\n\n\n\n\nSingle\nNormal valuefun example(): String\nsuspendingsuspend fun example(): Stringorfun example(): Deferred<String>\n\n\nMany\nSequencefun example(): Sequence<String>\nFlowfun example(): Flow<String>\n\n\n\n\nYou can convert your Sequence<T> to a Flow<T> by replacing the sequence { ... } builder with the flow { ... } builder and then replace yield/yieldAll with emit/emitAll:\nfun example(): Flow<String> = flow {\n (1..5).forEach { getString().let { emit(it) } }\n}\n\nsuspend fun getString(): String = { ... }\n\nSo, for your example:\nfun connect(): Flow<T> = flow {\n while (true) {\n\n // Call suspend function to get data from dataSource\n val result: List<T> = dataSource.getFirstPage()\n emitAll(result)\n\n // _Suspend_ for a little bit\n delay(1000)\n }\n}\n\n" ]
[ 11, 1 ]
[]
[]
[ "kotlin", "kotlinx.coroutines" ]
stackoverflow_0049262609_kotlin_kotlinx.coroutines.txt
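To round out the Flow answer, here is a hedged, self-contained sketch of producing and collecting such a polling flow; a counter stands in for dataSource.getFirstPage(), and the loop is bounded so the demo terminates.
import kotlinx.coroutines.delay
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.flow
import kotlinx.coroutines.runBlocking

fun connect(): Flow<Int> = flow {
    var page = 0
    while (page < 3) {      // bounded stand-in for the real polling loop
        emit(page++)        // emit suspends the producer instead of blocking a thread
        delay(1000)         // legal inside flow { }, unlike sequence { }
    }
}

fun main() = runBlocking {
    connect().collect { item -> println("got $item") }
}
Requires kotlinx-coroutines on the classpath; the consumer loop mirrors the question's for (item in connect()) usage.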
Q: BiLSTM hidden layers, and memory cells I have a BiLSTM model, as the following: tf.keras.models.Sequential([ tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(A, return_sequences=True), input_shape=x), tf.keras.layers.Dense(B, activation='tanh'), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(A)), tf.keras.layers.Dense(B, activation='tanh'), tf.keras.layers.Dropout(0.25), tf.keras.layers.Dense(output), ]) if the total parameters = 1 million, what values should be A and B? how many hidden layers should I add to let the model train in a proper way? I tried the following: A = 265 B = 64 Used three Dense layers, but the forecasting is still weak! A: LSTM layer is long-short-term memory it can process input as sequences you do not need to chop input into small of pieces. Sample: Single shape and double sharp, you can apply BiDirection or domain property as well. I example as a single trip because of its dimension. import tensorflow as tf class MyLSTMLayer( tf.keras.layers.LSTM ): def __init__(self, units, return_sequences, return_state): super(MyLSTMLayer, self).__init__( units, return_sequences=True, return_state=False ) self.num_units = units def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_units]) def call(self, inputs): lstm = tf.keras.layers.LSTM(self.num_units) return lstm(inputs) start = 3 limit = 93 delta = 3 sample = tf.range(start, limit, delta) sample = tf.cast( sample, dtype=tf.float32 ) sample = tf.constant( sample, shape=( 30, 1, 1 ) ) layer = MyLSTMLayer(10, True, True) layer_2 = MyLSTMLayer(20, True, False) temp = layer(sample) print( temp ) temp = tf.expand_dims(temp, -1) temp = layer_2(temp) print( temp ) Operation: ( 10, 1, 1 ) x ( 10, 1, 1 ) layer = MyLSTMLayer(10, True, True) sample = tf.constant( sample, shape=( 10, 1, 1 ) ) Output: (10, 10) ... 1, 1, 1, 1]], shape=(10, 10), dtype=float32) Operation: ( 20, 1, 1 ) x ( 10, 1, 1 ) layer = MyLSTMLayer(20, True, True) sample = tf.constant( sample, shape=( 10, 1, 1 ) ) Output: (20, 10) ... 1, 1, 1, 1, 1, 1]], shape=(20, 10), dtype=float32) Operation: ( 30, 1, 1 ) x ( 10, 1, 1 ) layer = MyLSTMLayer(30, True, True) sample = tf.constant( sample, shape=( 10, 1, 1 ) ) Output: (30, 10) ... 1, 1, 1, 1, 1, 1]], shape=(30, 10), dtype=float32) Operation: ( 30, 1, 1 ) x ( 10, 1, 1 ) layer = MyLSTMLayer(10, True, True) layer_2 = MyLSTMLayer(20, True, False) sample = tf.constant( sample, shape=( 30, 1, 1 ) ) Output: (30, 20) ... 1, 1, 1, 1]]], shape=(30, 20), dtype=float32) Sample: Implementation, Discrete sequence import tensorflow as tf class MyLSTMLayer( tf.keras.layers.LSTM ): def __init__(self, units, return_sequences, return_state): super(MyLSTMLayer, self).__init__( units, return_sequences=True, return_state=False ) self.num_units = units def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_units]) def call(self, inputs): lstm = tf.keras.layers.LSTM(self.num_units) temp = lstm(inputs) temp = tf.nn.softmax(temp) temp = tf.math.argmax(temp).numpy() return temp sample = tf.constant( [1.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], shape=( 10, 1, 1 ) ) layer = MyLSTMLayer(10, True, False) temp = layer(sample) print( temp ) Output: As a sequence [1 0 1 1 1 0 0 0 1 0]
BiLSTM hidden layers, and memory cells
I have a BiLSTM model, as the following: tf.keras.models.Sequential([ tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(A, return_sequences=True), input_shape=x), tf.keras.layers.Dense(B, activation='tanh'), tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(A)), tf.keras.layers.Dense(B, activation='tanh'), tf.keras.layers.Dropout(0.25), tf.keras.layers.Dense(output), ]) if the total parameters = 1 million, what values should be A and B? how many hidden layers should I add to let the model train in a proper way? I tried the following: A = 265 B = 64 Used three Dense layers, but the forecasting is still weak!
[ "LSTM layer is long-short-term memory it can process input as sequences you do not need to chop input into small of pieces.\nSample: Single shape and double sharp, you can apply BiDirection or domain property as well. I example as a single trip because of its dimension.\nimport tensorflow as tf\n\nclass MyLSTMLayer( tf.keras.layers.LSTM ):\ndef __init__(self, units, return_sequences, return_state):\n super(MyLSTMLayer, self).__init__( units, return_sequences=True, return_state=False )\n self.num_units = units\n\ndef build(self, input_shape):\n self.kernel = self.add_weight(\"kernel\",\n shape=[int(input_shape[-1]),\n self.num_units])\n\ndef call(self, inputs):\n lstm = tf.keras.layers.LSTM(self.num_units)\n return lstm(inputs)\n\n\nstart = 3\nlimit = 93\ndelta = 3\nsample = tf.range(start, limit, delta)\nsample = tf.cast( sample, dtype=tf.float32 )\nsample = tf.constant( sample, shape=( 30, 1, 1 ) )\nlayer = MyLSTMLayer(10, True, True)\nlayer_2 = MyLSTMLayer(20, True, False)\n\ntemp = layer(sample)\nprint( temp )\ntemp = tf.expand_dims(temp, -1)\ntemp = layer_2(temp)\nprint( temp )\n\nOperation: ( 10, 1, 1 ) x ( 10, 1, 1 )\nlayer = MyLSTMLayer(10, True, True)\nsample = tf.constant( sample, shape=( 10, 1, 1 ) )\n\nOutput: (10, 10)\n...\n 1, 1, 1, 1]], shape=(10, 10), dtype=float32)\n\nOperation: ( 20, 1, 1 ) x ( 10, 1, 1 )\nlayer = MyLSTMLayer(20, True, True)\nsample = tf.constant( sample, shape=( 10, 1, 1 ) )\n\nOutput: (20, 10)\n...\n 1, 1, 1, 1, 1, 1]], shape=(20, 10), dtype=float32)\n\nOperation: ( 30, 1, 1 ) x ( 10, 1, 1 )\nlayer = MyLSTMLayer(30, True, True)\nsample = tf.constant( sample, shape=( 10, 1, 1 ) )\n\nOutput: (30, 10)\n...\n 1, 1, 1, 1, 1, 1]], shape=(30, 10), dtype=float32)\n\nOperation: ( 30, 1, 1 ) x ( 10, 1, 1 )\nlayer = MyLSTMLayer(10, True, True)\nlayer_2 = MyLSTMLayer(20, True, False)\nsample = tf.constant( sample, shape=( 30, 1, 1 ) )\n\nOutput: (30, 20)\n...\n 1, 1, 1, 1]]], shape=(30, 20), dtype=float32)\n\nSample: Implementation, Discrete sequence\nimport tensorflow as tf\n\nclass MyLSTMLayer( tf.keras.layers.LSTM ):\n def __init__(self, units, return_sequences, return_state):\n super(MyLSTMLayer, self).__init__( units, return_sequences=True, return_state=False )\n self.num_units = units\n\n def build(self, input_shape):\n self.kernel = self.add_weight(\"kernel\",\n shape=[int(input_shape[-1]),\n self.num_units])\n\n def call(self, inputs):\n lstm = tf.keras.layers.LSTM(self.num_units)\n temp = lstm(inputs)\n temp = tf.nn.softmax(temp)\n temp = tf.math.argmax(temp).numpy()\n return temp\n \nsample = tf.constant( [1.0, 2.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0], shape=( 10, 1, 1 ) )\nlayer = MyLSTMLayer(10, True, False)\ntemp = layer(sample)\nprint( temp )\n\nOutput: As a sequence\n[1 0 1 1 1 0 0 0 1 0]\n\n" ]
[ 1 ]
[]
[]
[ "bilstm", "deep_learning", "lstm", "tensorflow" ]
stackoverflow_0074658189_bilstm_deep_learning_lstm_tensorflow.txt
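Since the question asks which A and B reach roughly one million parameters, a practical approach is to let Keras count them instead of guessing. The input shape below is an assumption; substitute your own x.
import tensorflow as tf

def build(a, b, x=(30, 8), output=1):
    # Same architecture as the question, parameterized by A and B
    return tf.keras.models.Sequential([
        tf.keras.layers.Bidirectional(
            tf.keras.layers.LSTM(a, return_sequences=True), input_shape=x),
        tf.keras.layers.Dense(b, activation="tanh"),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(a)),
        tf.keras.layers.Dense(b, activation="tanh"),
        tf.keras.layers.Dropout(0.25),
        tf.keras.layers.Dense(output),
    ])

for a in (64, 128, 192, 256):
    print(a, build(a, 64).count_params())  # pick the A that lands near 1e6
Note that the parameter budget says little about forecast quality; weak forecasts are more often a data or feature problem than a layer-size problem.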
Q: How to increase font size for Edge and Nodes using Diagrams I'm using Diagrams as code which uses the Diagrams Python module. I'm trying to increase the font size for Edge labels but can't seem to figure out how to do it. Edge only seems to accept attr instead of graph_attr so I've tried variations with no luck. Examples I've tried are: Edge(style="dotted", label="patches", attr="fontsize=20") Edge(style="dotted", label="patches", attr={"fontsize": "20"}) Edge(style="dotted", label="patches", fontsize="20") Internet(label="Internet", attr="fontsize=20") A: I'm not sure what I was doing wrong before but I was able to get the following variations working. Edge(color="black", label="texthere", fontsize="20") font = "20" Edge(color="black", label="texthere", fontsize=font) There is also a head and tail label. See the following examples: Edge(color="red", headlabel="port info", labelfontsize="20")
How to increase font size for Edge and Nodes using Diagrams
I'm using Diagrams as code which uses the Diagrams Python module. I'm trying to increase the font size for Edge labels but can't seem to figure out how to do it. Edge only seems to accept attr instead of graph_attr so I've tried variations with no luck. Examples I've tried are: Edge(style="dotted", label="patches", attr="fontsize=20") Edge(style="dotted", label="patches", attr={"fontsize": "20"}) Edge(style="dotted", label="patches", fontsize="20") Internet(label="Internet", attr="fontsize=20")
[ "I'm not sure what I was doing wrong before but I was able to get the following variations working.\nEdge(color=\"black\", label=\"texthere\", fontsize=\"20\")\n\nfont = \"20\"\nEdge(color=\"black\", label=\"texthere\", fontsize=font)\n\nThere is also a head and tail label. See the following examples:\nEdge(color=\"red\", headlabel=\"port info\", labelfontsize=\"20\")\n\n" ]
[ 0 ]
[]
[]
[ "diagram", "python" ]
stackoverflow_0074648806_diagram_python.txt
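For context, a complete minimal script using the fontsize argument the answer confirms; the Internet node class is just an example pick from the diagrams.onprem.network module, and node-level fontsize is an assumption extrapolated from the edge-level fix.
from diagrams import Diagram, Edge
from diagrams.onprem.network import Internet

with Diagram("fontsize demo", show=False):
    src = Internet("Internet", fontsize="20")   # node label size (assumed to pass through to Graphviz)
    dst = Internet("Backend")
    # edge label rendered at 20pt, matching the answer
    src >> Edge(style="dotted", label="patches", fontsize="20") >> dst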
Q: Rendering "pop up" and similar comments annotations in Ghostscript I'm using Ghostscript to convert searchable PDFs to image PDFs so that they can be viewed using an imaging toolkit using a command line like this: gswin32 -o c:\temp\output%d.png -r300 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -dDOINTERPOLATE -dSAFER -sDEVICE=png16m c:\temp\test.pdf If I add the -dDOPDFMARKS command line parameter it renders the annotation mark showing that there's an annotation but not the annotation text. Does anyone know how to get Ghostscript to render annotation text? I've googled the life out of it without any luck. A: It's possible, but it would be hacky. Ghostscript is an open source postscript interpreter. PDF's are just postscript files which use a special pre-defined dictionary. In ghostscript 8.62 or prior, the dictionaries are located as postscript text files in the directory /lib. Inside of /lib is a file pdf_draw.ps, which is used to render the PDF into what you see in the .png file. Inside pdf_draw.ps is a definition for /drawidget, which draws the little symbol you see which represents the annotation. At his place in the code, the entire annotation is available, it is just not being used. A simple demonstration is to add the 2 lines shown below (just after the /drawwidget { %...), directly below the /drawwidget line and run gswin32 in a console with gswin32c. This will result in 2 lines being displayed as the PDF is rendered in the console window. /drawwidget { % <scalefactor> <annot> drawwidget - dup /Contents known {dup /Contents get == } if dup /T known { dup /T get == } if Output (This is a test sticky note) (Laurie Shufeldt) Where it gets tricky is defining what to do with the annotations, which is why they are not being displayed. In this case, one method might be to place a footnote reference on top of the widget and placing a footnote at the bottom of the page with the text formatted which makes sense to the intent of the sticky. Alternatively the stickies could be images in place, similar to how they look when expanded in acrobat, but this would cover the content below the sticky. How hacky the implementation would be would depend on how much effort is put into the implementation. editing pdf_draw.ps is very hacky, but quick and easy. It should be possible to put the hacks into their own file and call them as part of the command line. If putting the change into a header works, "it should" work in the current version of ghostscript, not just the old one. Pre-defining the number of allowable stickies and a fixed location of the footnotes would ease the placing of footnotes. If stickies have extra long text, the text would need to have special formatting to allow line breaks, where short text which assumed no line breaks would be easier to program. Perhaps you just want to extract the data from the stickies and put them in the database. If that's the case, the above code is close to what you need. A: Without seeing your PDF file I can't be sure but there are several possible reasons. Your annotation may be closed, ie not displaying anything when you open the PDF file. It may not have an appearance stream, Ghostscript does not manufacture appearance streams for all annotation types. Update What the provided PDF sample contains is 2 annotations: the first is the 'popup' annotation; the second is a text annotation. Popup annotations are basically interactive, because you can open and close them, move them around etc. However, Ghostscript doesn't support interactive elements. 
So what you get is the icon for the popup but you don't get the associated text annotation. There is no way currently to render this text with Ghostscript. A: You can solve this task in two steps: Create an intermediate PDF with GhostScript's PDFwrite using -dPreserveAnnots=false. This will render the annotations as normal PDF content into the new PDF. Here is an example: gswin64c -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dPreserveAnnots=false -sOutputFile="OUTPUT_FILE_NAME.pdf" "INPUT_FILE_NAME.pdf Then use this intermediate PDF to render your PNGs as you did.
Rendering "pop up" and similar comments annotations in Ghostscript
I'm using Ghostscript to convert searchable PDFs to image PDFs so that they can be viewed using an imaging toolkit using a command line like this: gswin32 -o c:\temp\output%d.png -r300 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 -dDOINTERPOLATE -dSAFER -sDEVICE=png16m c:\temp\test.pdf If I add the -dDOPDFMARKS command line parameter it renders the annotation mark showing that there's an annotation but not the annotation text. Does anyone know how to get Ghostscript to render annotation text? I've googled the life out of it without any luck.
[ "It's possible, but it would be hacky. \nGhostscript is an open source postscript interpreter. PDF's are just postscript files which use a special pre-defined dictionary. In ghostscript 8.62 or prior, the dictionaries are located as postscript text files in the directory /lib. Inside of /lib is a file pdf_draw.ps, which is used to render the PDF into what you see in the .png file. Inside pdf_draw.ps is a definition for /drawidget, which draws the little symbol you see which represents the annotation. At his place in the code, the entire annotation is available, it is just not being used. \nA simple demonstration is to add the 2 lines shown below (just after the /drawwidget { %...), directly below the /drawwidget line and run gswin32 in a console with gswin32c. This will result in 2 lines being displayed as the PDF is rendered in the console window. \n/drawwidget { % <scalefactor> <annot> drawwidget -\n dup /Contents known {dup /Contents get == } if \n dup /T known { dup /T get == } if \n\nOutput\n(This is a test sticky note)\n(Laurie Shufeldt)\n\nWhere it gets tricky is defining what to do with the annotations, which is why they are not being displayed. \nIn this case, one method might be to place a footnote reference on top of the widget and placing a footnote at the bottom of the page with the text formatted which makes sense to the intent of the sticky. \nAlternatively the stickies could be images in place, similar to how they look when expanded in acrobat, but this would cover the content below the sticky. \nHow hacky the implementation would be would depend on how much effort is put into the implementation. editing pdf_draw.ps is very hacky, but quick and easy. It should be possible to put the hacks into their own file and call them as part of the command line. If putting the change into a header works, \"it should\" work in the current version of ghostscript, not just the old one. \nPre-defining the number of allowable stickies and a fixed location of the footnotes would ease the placing of footnotes. If stickies have extra long text, the text would need to have special formatting to allow line breaks, where short text which assumed no line breaks would be easier to program. \nPerhaps you just want to extract the data from the stickies and put them in the database. If that's the case, the above code is close to what you need. \n", "Without seeing your PDF file I can't be sure but there are several possible reasons. Your annotation may be closed, ie not displaying anything when you open the PDF file. It may not have an appearance stream, Ghostscript does not manufacture appearance streams for all annotation types.\nUpdate\nWhat the provided PDF sample contains is 2 annotations: the first is the 'popup' annotation; the second is a text annotation. \nPopup annotations are basically interactive, because you can open and close them, move them around etc. However, Ghostscript doesn't support interactive elements. So what you get is the icon for the popup but you don't get the associated text annotation. \nThere is no way currently to render this text with Ghostscript.\n", "You can solve this task in two steps:\n\nCreate an intermediate PDF with GhostScript's PDFwrite using -dPreserveAnnots=false. This will render the annotations as normal PDF content into the new PDF. 
Here is an example:\ngswin64c -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dPreserveAnnots=false -sOutputFile=\"OUTPUT_FILE_NAME.pdf\" \"INPUT_FILE_NAME.pdf\n\n\nThen use this intermediate PDF to render your PNGs as you did.\n\n\n" ]
[ 5, 4, 0 ]
[]
[]
[ "ghostscript" ]
stackoverflow_0015259730_ghostscript.txt
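Combining the question's rendering command with the two-step answer, the full Windows pipeline might look like this sketch; file names are examples, and gswin32c is the console binary matching the question's gswin32.
gswin32c -dNOPAUSE -dBATCH -sDEVICE=pdfwrite -dPreserveAnnots=false ^
  -sOutputFile=c:\temp\flattened.pdf c:\temp\test.pdf

gswin32c -o c:\temp\output%d.png -r300 -dTextAlphaBits=4 -dGraphicsAlphaBits=4 ^
  -dDOINTERPOLATE -dSAFER -sDEVICE=png16m c:\temp\flattened.pdf
The first pass bakes the annotation appearances into ordinary page content; the second pass is the question's original PNG rendering, pointed at the flattened file.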
Q: Why does writing a bean to CSV file fail with OpenCSV (5.6 or 5.7)? I'm using OpenCSV in version 5.6, and have followed sample https://www.geeksforgeeks.org/mapping-java-beans-to-csv-using-opencsv/ but not able to write mine to csv file. public static void main(String[] args){ List<MyPartbean> mybeans = new List<MyPartbean>(); MyPartbean b1 = new MyPartbean("123", "Red"); MyPartbean b2 = new MyPartbean("456", "Blue"); mybeans.add(b1); mybeans.add(b2); file_location = "/tmp/out.csv"; String[] columns = new String[]{"Number", "Description"}; CSVUtils.writeToCSV(String file_location, MyPartbean.class, mybeans, columns) } Bean: public class MyPartbean extends HashMap { String number=""; String description=""; public MyPartbean(String number, String desc){ this.number = number; this.description = desc; } public void setNumber(String number){ this.number = number;} public void setDescription(String description){ this.description = description;} public String getNumber() {return number;} public String getDescription() {return description;} } Write to CSV: public class CSVUtils { public static void writeToCSV(String file_location, Class type, List<MyPartbean> records, String[] columns) throws IOException, CsvRequiredFieldEmptyException, CsvDataTypeMismatchException { FileWriter writer = new FileWriter(file_location); ColumnPositionMappingStrategy mappingStrategy = new ColumnPositionMappingStrategy(); mappingStrategy.setType(type); mappingStrategy.setColumnMapping(columns); debug("mapping: " +mappingStrategy.getColumnMapping().length); StatefulBeanToCsv<MyPartbean> beanToCsv = new StatefulBeanToCsvBuilder<MyPartbean>(writer) .withMappingStrategy(mappingStrategy) . withSeparator(',') .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER) .build(); beanToCsv.write(records); for(int i=0; i<records.size(); i++){ MyPartbean item = (MyPartbean) records.get(i); debug(i + " " + item.getNumber() + " :: " + item.getDescription()); } writer.close(); } } Output file has two "," represented by the number of columns[] . But there's no columns and values ,, ,, Any suggestion? A: It works with OpenCSV 4.x and 5.x (tested for 4.1, 5.6 and 5.7) like so: Bean keep it unchanged, as in OP. Note: I have generalized the writeToCSV method a bit, so it accepts any bean you pass to this method. Serialize & write public class CSVUtils { public static <T> void writeToCSV(String location, Class<T> type, List<T> records, String[] columns) throws IOException, CsvRequiredFieldEmptyException, CsvDataTypeMismatchException { ColumnPositionMappingStrategy<T> mappingStrategy = new ColumnPositionMappingStrategy<>(); mappingStrategy.setType(type); mappingStrategy.setColumnMapping(columns); try (Writer writer = new FileWriter(location)) { StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer) .withMappingStrategy(mappingStrategy) .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER) .build(); beanToCsv.write(records); } } } Runner public class OpenCSV56Demo { public static void main(String[] args){ List<MyPartbean> mybeans = List.of(new MyPartbean("123", "Red"), new MyPartbean("456", "Blue")); String location = "myPartbeans.csv"; String[] columns = new String[]{"number", "description"}; try { CSVUtils.writeToCSV(location, MyPartbean.class, mybeans, columns); } catch (IOException | CsvRequiredFieldEmptyException | CsvDataTypeMismatchException e) { e.printStackTrace(); } } } Difference: String[] columns = new String[]{"number", "description"} with lower-cased 'n' and 'd'! vs. String[] columns = new String[]{"Number", "Description"}; (OP) Seems, OpenCSV 5.x uses a mapping that does not tolerate upper-cased column names any longer.
Why does writing a bean to CSV file fail with OpenCSV (5.6 or 5.7)?
I'm using OpenCSV in version 5.6, and have followed sample https://www.geeksforgeeks.org/mapping-java-beans-to-csv-using-opencsv/ but not able to write mine to csv file. public static void main(String[] args){ List<MyPartbean> mybeans = new List<MyPartbean>(); MyPartbean b1 = new MyPartbean("123", "Red"); MyPartbean b2 = new MyPartbean("456", "Blue"); mybeans.add(b1); mybeans.add(b2); file_location = "/tmp/out.csv"; String[] columns = new String[]{"Number", "Description"}; CSVUtils.writeToCSV(String file_location, MyPartbean.class, mybeans, columns) } Bean: public class MyPartbean extends HashMap { String number=""; String description=""; public MyPartbean(String number, String desc){ this.number = number; this.description = desc; } public void setNumber(String number){ this.number = number;} public void setDescription(String description){ this.description = description;} public String getNumber() {return number;} public String getDescription() {return description;} } Write to CSV: public class CSVUtils { public static void writeToCSV(String file_location, Class type, List<MyPartbean> records, String[] columns) throws IOException, CsvRequiredFieldEmptyException, CsvDataTypeMismatchException { FileWriter writer = new FileWriter(file_location); ColumnPositionMappingStrategy mappingStrategy = new ColumnPositionMappingStrategy(); mappingStrategy.setType(type); mappingStrategy.setColumnMapping(columns); debug("mapping: " +mappingStrategy.getColumnMapping().length); StatefulBeanToCsv<MyPartbean> beanToCsv = new StatefulBeanToCsvBuilder<MyPartbean>(writer) .withMappingStrategy(mappingStrategy) . withSeparator(',') .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER) .build(); beanToCsv.write(records); for(int i=0; i<records.size(); i++){ MyPartbean item = (MyPartbean) records.get(i); debug(i + " " + item.getNumber() + " :: " + item.getDescription()); } writer.close(); } } Output file has two "," represented by the number of columns[] . But there's no columns and values ,, ,, Any suggestion?
[ "It works with OpenCSV 4.x and 5.x (tested for 4.1, 5.6 and 5.7) like so:\nBean\nkeep it unchanged, as in OP.\nNote: I have generalized the writeToCSV method a bit, so it accepts any bean you pass to this method.\nSerialize & write\npublic class CSVUtils {\n\n public static <T> void writeToCSV(String location, Class<T> type, List<T> records, String[] columns)\n throws IOException, CsvRequiredFieldEmptyException, CsvDataTypeMismatchException {\n\n ColumnPositionMappingStrategy<T> mappingStrategy = new ColumnPositionMappingStrategy<>();\n mappingStrategy.setType(type);\n mappingStrategy.setColumnMapping(columns);\n\n try (Writer writer = new FileWriter(file_location)) {\n StatefulBeanToCsv<T> beanToCsv = new StatefulBeanToCsvBuilder<T>(writer)\n .withMappingStrategy(mappingStrategy)\n .withQuotechar(CSVWriter.NO_QUOTE_CHARACTER)\n .build();\n beanToCsv.write(records);\n }\n }\n}\n\nRunner\npublic class OpenCSV56Demo {\n\n public static void main(String[] args){\n List<MyPartbean> mybeans = List.of(new MyPartbean(\"123\", \"Red\"),\n new MyPartbean(\"456\", \"Blue\"));\n\n String location = \"myPartbeans.csv\";\n String[] columns = new String[]{\"number\", \"description\"};\n try {\n CSVUtils.writeToCSV(location, MyPartbean.class, mybeans, columns);\n } catch (IOException | CsvRequiredFieldEmptyException | CsvDataTypeMismatchException e) {\n e.printStackTrace();\n }\n }\n\n}\n\nDifference:\n\nString[] columns = new String[]{\"number\", \"description\"}\nwith lower-cased 'n' and 'd'!\n\nvs.\n\nString[] columns = new String[]{\"Number\", \"Description\"}; (OP)\n\nSeems, OpenCSV 5.x uses a mapping that does not tolerate upper-cased column names any longer.\n" ]
[ 1 ]
[]
[]
[ "csv", "java", "opencsv" ]
stackoverflow_0073451914_csv_java_opencsv.txt
Q: how to add a profile object by using objects.create method simply my error is this Exception has occurred: TypeError User() got an unexpected keyword argument 'User' here is the data I receive from the post request in view.py if request.method == "POST": student_surname = request.POST.get('student_surname') student_initials = request.POST.get('student_initials') student_birthday = request.POST.get('student_birthday') student_username = request.POST.get('student_username') student_email = request.POST.get('student_email') student_entrance = request.POST.get('student_entrance') student_contact = request.POST.get('student_contact') student_residence = request.POST.get('student_residence') student_father = request.POST.get('student_father') student_other_skills = request.POST.get('student_skills') student_sports = request.POST.get('student_sports') student_password = request.POST.get('student_password') I can create user object it's working in view.py user = User.objects.create_user( username=student_username, email=student_email, password=student_password ) some data is related to profile in view.py student_profile = User.objects.create( User=user, #Error line surname=student_surname, initials=student_initials, entrance_number=student_entrance, email=student_email, father=student_father, skills=student_other_skills, sports=student_sports, birthday=student_birthday, contact=student_contact, address=student_residence, ) student_profile.save() profile definition in models.py class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) surname = models.CharField(max_length=50) initials = models.CharField(max_length=10, blank=True) entrance_number = models.CharField(max_length=10, blank=True) email = models.EmailField(max_length=254, blank=True) father = models.CharField(max_length=50, blank=True) skills = models.CharField(max_length=50, blank=True) sports = models.CharField(max_length=50, blank=True) birthday = models.DateField(null=True, blank=True) contact = models.CharField(max_length=100, null=True, blank=True) address = models.CharField(max_length=100, null=True, blank=True) # other fields here def __str__(self): return self.user.username I believe the error is in User = user line can somebody tell me how to initialize this profile object correctly AND save record in the database at the time of creating the user. A: student_profile = Profile.objects.create( # Profile user=user, #user surname=student_surname, initials=student_initials, entrance_number=student_entrance, email=student_email, father=student_father, skills=student_other_skills, sports=student_sports, birthday=student_birthday, contact=student_contact, address=student_residence, ) Not User model, must be Profile model, your model field is user, but you have used User
how to add a profile object by using objects.create method
simply my error is this Exception has occurred: TypeError User() got an unexpected keyword argument 'User' here is the data I receive from the post request in view.py if request.method == "POST": student_surname = request.POST.get('student_surname') student_initials = request.POST.get('student_initials') student_birthday = request.POST.get('student_birthday') student_username = request.POST.get('student_username') student_email = request.POST.get('student_email') student_entrance = request.POST.get('student_entrance') student_contact = request.POST.get('student_contact') student_residence = request.POST.get('student_residence') student_father = request.POST.get('student_father') student_other_skills = request.POST.get('student_skills') student_sports = request.POST.get('student_sports') student_password = request.POST.get('student_password') I can create user object it's working in view.py user = User.objects.create_user( username=student_username, email=student_email, password=student_password ) some data is related to profile in view.py student_profile = User.objects.create( User=user, #Error line surname=student_surname, initials=student_initials, entrance_number=student_entrance, email=student_email, father=student_father, skills=student_other_skills, sports=student_sports, birthday=student_birthday, contact=student_contact, address=student_residence, ) student_profile.save() profile definition in models.py class Profile(models.Model): user = models.OneToOneField(User, on_delete=models.CASCADE) surname = models.CharField(max_length=50) initials = models.CharField(max_length=10, blank=True) entrance_number = models.CharField(max_length=10, blank=True) email = models.EmailField(max_length=254, blank=True) father = models.CharField(max_length=50, blank=True) skills = models.CharField(max_length=50, blank=True) sports = models.CharField(max_length=50, blank=True) birthday = models.DateField(null=True, blank=True) contact = models.CharField(max_length=100, null=True, blank=True) address = models.CharField(max_length=100, null=True, blank=True) # other fields here def __str__(self): return self.user.username I believe the error is in User = user line can somebody tell me how to initialize this profile object correctly AND save record in the database at the time of creating the user.
[ "student_profile = Profile.objects.create( # Profile\n user=user, #user\n surname=student_surname,\n initials=student_initials,\n entrance_number=student_entrance,\n email=student_email,\n father=student_father,\n skills=student_other_skills,\n sports=student_sports,\n birthday=student_birthday,\n contact=student_contact,\n address=student_residence,\n )\n\nNot User model, must be Profile model, your model field is user, but you have used User\n" ]
[ 1 ]
[]
[]
[ "authentication", "django", "django_models", "python", "python_3.x" ]
stackoverflow_0074658376_authentication_django_django_models_python_python_3.x.txt
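Two follow-up notes on the accepted fix. Profile.objects.create() already persists the row, so the trailing student_profile.save() is redundant. And if the goal is "create the profile whenever a user is created", a post_save signal is a common pattern; the sketch below assumes Profile lives in the same app's models module.
from django.contrib.auth.models import User
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Profile  # assumed location of the Profile model

@receiver(post_save, sender=User)
def create_profile(sender, instance, created, **kwargs):
    # Runs after every User save; only act on the initial insert
    if created:
        Profile.objects.create(user=instance)  # other fields can be filled later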
Q: Git tries to lstat files outside its work directory I recently started experiencing weird behavior with git. I'm on Windows 7 and using MINGW32. My git repository is in /d/www/project1/app When I type in git pull origin master, I used to get: fatal: cannot lstat '\/$RECYCLE.BIN': No such file or directory Which is a hidden system folder in /d. After removing it, I noticed it just moved on to the next file it could find in /d (root folder of my D: drive) and is still throwing the same lstat error. So the question is simple - why on earth does it try to lstat every file and folder on my D drive? Thanks. A: There must be a .git folder (created accidentally, maybe) in the root of drive D: (and you are not running the command in the correct folder where your actual repo is) A: Note that with Git 2.5+ (Q2 2015), this error message should not pop up anymore. See commit 838d6a9 by David Turner (csusbdt), 18 May 2015. (Merged by Junio C Hamano -- gitster -- in commit f93a393, 01 Jun 2015) clean: only lstat files in pathspec Even though "git clean" takes pathspec to limit the part of the working tree to be cleaned, it checked the paths it encounters during its directory traversal with lstat(2), before checking if the path is within the pathspec. Ignore paths outside pathspec and proceed without checking with lstat(2). Even if such a path is unreadable due to e.g. EPERM, "git clean" should not care. And with git-for-windows releases, a Git for Windows will be only days after the Linux Git release itself. A: I had an issue where my local gitlab runner stored builds on an ExFat formatted external hard drive. After formatting it to Mac OS Extended Journaled, it worked.
Git tries to lstat files outside its work directory
I recently started experiencing weird behavior with git. I'm on Windows 7 and using MINGW32. My git repository is in /d/www/project1/app When I type in git pull origin master, I used to get: fatal: cannot lstat '\/$RECYCLE.BIN': No such file or directory Which is a hidden system folder in /d. After removing it, I noticed it just moved on to the next file it could find in /d (root folder of my D: drive) and is still throwing the same lstat error. So the question is simple - why on earth does it try to lstat every file and folder on my D drive? Thanks.
[ "There must be a .git folder ( created accidentally maybe) in the root of drive D: ( and you are not doing the command in the correct folder where your actual repo)\n", "Note that with Git 2.5+ (Q2 2015), this error message should not pop up anymore.\nSee commit 838d6a9 by David Turner (csusbdt), 18 May 2015.\n(Merged by Junio C Hamano -- gitster -- in commit f93a393, 01 Jun 2015)\n\nclean: only lstat files in pathspec\nEven though \"git clean\" takes pathspec to limit the part of the working tree to be cleaned, it checked the paths it encounters during its directory traversal with lstat(2), before checking if the path is within the pathspec.\nIgnore paths outside pathspec and proceed without checking with\n lstat(2). Even if such a path is unreadable due to e.g. EPERM,\n \"git clean\" should not care.\n\nAnd with git-for-windows releases, a Git for Windows will be only days after the Linux Git release itself.\n", "I had an issue where my local gitlab runner stored builds on an ExFat formatted external hard drive. After formatting it to Mac OS Extended Journaled, it worked\n" ]
[ 0, 0, 0 ]
[]
[]
[ "git", "mingw32" ]
stackoverflow_0008590669_git_mingw32.txt
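To confirm the stray-repository theory from the accepted answer, these MINGW32 shell commands show which repository Git actually resolves; paths match the question's setup.
cd /d/www/project1/app
git rev-parse --show-toplevel   # should print /d/www/project1/app, not /d
ls -a /d | grep '^\.git$'       # look for an accidental .git at the drive root
# if an unwanted repo exists there and you are sure it is junk: rm -rf /d/.git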
Q: Error: Invalid hook call. Hooks can only be called inside of the body of a function component. When using next/router I have a specific scenario where I need to redirect to a specific page using next/router But the below error occurred: Error: Invalid hook call. Hooks can only be called inside of the body of a function component. All my code in index.tsx is: import { Axios } from '../axios' import { Languages } from '../bussiness/language' import { useRouter } from 'next/router' export async function DispatchApi({ address, sendedData, lan = Languages.fa, method, }: { address: string sendedData: string lan: Languages method: 'POST' | 'GET' }) { let data = JSON.parse(sendedData) const router = useRouter() let useAxios = Axios.interceptors.request.use((config) => { router.push('login') return config }) return await Axios({ url: address, data, method, headers: { accept: '*/*', 'Accept-Language': lan, 'Content-Type': 'application/json', }, }).then(({ data }) => data) } But how can I fix the problem? Is there any specific configuration?
Error: Invalid hook call. Hooks can only be called inside of the body of a function component. When using next/router
I have a specific scenario where I need to redirect to a specific page using next/router But the below error occurred: Error: Invalid hook call. Hooks can only be called inside of the body of a function component. All my code in index.tsx is: import { Axios } from '../axios' import { Languages } from '../bussiness/language' import { useRouter } from 'next/router' export async function DispatchApi({ address, sendedData, lan = Languages.fa, method, }: { address: string sendedData: string lan: Languages method: 'POST' | 'GET' }) { let data = JSON.parse(sendedData) const router = useRouter() let useAxios = Axios.interceptors.request.use((config) => { router.push('login') return config }) return await Axios({ url: address, data, method, headers: { accept: '*/*', 'Accept-Language': lan, 'Content-Type': 'application/json', }, }).then(({ data }) => data) } But how can I fix the problem? Is there any specific configuration?
[ "You can avoid this by using\nimport Router from 'next/router';\nIt has the same properties as the object returned from useRouter()\n" ]
[ 2 ]
[]
[]
[ "axios", "next.js", "nextjs_dynamic_routing", "react_hooks", "reactjs" ]
stackoverflow_0074658029_axios_next.js_nextjs_dynamic_routing_react_hooks_reactjs.txt
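Applied to the asker's helper, the fix looks roughly like this TypeScript sketch. Note that Router.push only works client-side, and the '/login' route is inferred from the question's router.push('login') call.
import Router from 'next/router' // singleton, safe outside React components
import { Axios } from '../axios'

Axios.interceptors.request.use((config) => {
  // No hook call here, so the "Invalid hook call" error disappears
  Router.push('/login')
  return config
})
Registering the interceptor once at module load (rather than inside DispatchApi) also avoids stacking a new interceptor on every request.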
Q: Testing a material ui slider with @testing-library/react Hi there, I'm trying to test a Slider component created with Material-UI, but I cannot get my tests to pass. I would like to test the value changes using fireEvent with @testing-library/react. I've been following this post to properly query the DOM, but I cannot get the correct DOM nodes. Thanks in advance. <Slider /> component // @format // @flow import * as React from "react"; import styled from "styled-components"; import { Slider as MaterialUISlider } from "@material-ui/core"; import { withStyles, makeStyles } from "@material-ui/core/styles"; import { priceRange } from "../../../domain/Search/PriceRange/priceRange"; const Wrapper = styled.div` width: 93%; display: inline-block; margin-left: 0.5em; margin-right: 0.5em; margin-bottom: 0.5em; `; // omitted code pertaining to props and styles for simplicity function Slider(props: SliderProps) { const initialState = [1, 100]; const [value, setValue] = React.useState(initialState); function onHandleChangeCommitted(e, latestValue) { e.preventDefault(); const { onUpdate } = props; const newPriceRange = priceRange(latestValue); onUpdate(newPriceRange); } function onHandleChange(e, newValue) { e.preventDefault(); setValue(newValue); } return ( <Wrapper aria-label="range-slider" > <SliderWithStyles aria-labelledby="range-slider" defaultValue={initialState} // getAriaLabel={index => // index === 0 ? "Minimum Price" : "Maximum Price" // } getAriaValueText={valueText} onChange={onHandleChange} onChangeCommitted={onHandleChangeCommitted} valueLabelDisplay="auto" value={value} /> </Wrapper> ); } export default Slider; Slider.test.js // @flow import React from "react"; import { cleanup, render, getAllByAltText, fireEvent, waitForElement } from "@testing-library/react"; import "@testing-library/jest-dom/extend-expect"; import Slider from "../Slider"; afterEach(cleanup); describe("<Slider /> specs", () => { // [NOTE]: Works, but maybe there is a better way to do it? xdescribe("<Slider /> component aria-label", () => { it("renders without crashing", () => { const { container } = render(<Slider />); expect(container.firstChild).toBeInTheDocument(); }); }); // [ASK]: How to test the event handlers with fireEvent. describe("<Slider /> props", () => { it("display an initial min value of '1'", () => { const renderResult = render(<Slider />); // TODO }); it("display an initial max value of '100'", () => { const renderResult = render(<Slider />); // TODO }); xit("display two values via the onHandleChangeCommitted event when dragging stops", () => { const renderResult = render(<Slider />); console.log(renderResult) // fireEvent.change(renderResult.getByText("1")) // expect(onChange).toHaveBeenCalled(0); }); // [NOTE]: Does not work, returns undefined xit("display two values via the onHandleChange event when dragging stops", () => { const renderResult = render(<Slider />); console.log(renderResult.container); const spanNodeWithAriaAttribute = renderResult.container.firstChild.getElementsByTagName("span")[0].getAttribute('aria-label') expect(spanNodeWithAriaAttribute).toBe(/range-slider/) }); }); // [ASK]: Works, but a snapshot is overkill; is there a better way of doing this? xdescribe("<Slider /> snapshot", () => { it("renders without crashing", () => { const { container } = render(<Slider />); expect(container.firstChild).toMatchSnapshot(); }); }); }); A: I would recommend not writing tests for a custom component and instead believing that this component works for all our cases. Read through this article for more details. 
In it they have mentioned how to write unit tests for a component that wraps react-select. I followed a similar approach and wrote a mock for my third-party slider component. in setupTests.js: jest.mock('@material-ui/core/Slider', () => (props) => { const { id, name, min, max, onChange, testid } = props; return ( <input data-testid={testid} type="range" id={id} name={name} min={min} max={max} onChange={(event) => onChange(event.target.value)} /> ); }); With this mock, you can simply fire a change event in your tests like this: fireEvent.change(getByTestId(`slider`), { target: { value: 25 } }); Make sure to pass a proper testid as a prop to your SliderWithStyles component A: After battling this for hours I was able to solve my case related to testing the MUI slider. It really depends on how you need to test yours; in my case I had to check whether a label's text content changed after clicking a mark, using the marks slider prop. The problems 1) The slider component computes the return value based on the element's getBoundingClientRect and the MouseEvent 2) How to query the slider and fire the event. 3) A JSDOM limitation on reading an element's actual height and width, which causes problem no. 1 The solution 1) Mock getBoundingClientRect, which should also fix problem no. 3 2) Add a test id to the slider and use fireEvent.mouseDown(container, {....}) const sliderLabel = screen.getByText("Default text that the user should see") // add data-testid to slider const sliderInput = screen.getByTestId("slider") // mock the getBoundingClientRect sliderInput.getBoundingClientRect = jest.fn(() => { return { bottom: 286.22918701171875, height: 28, left: 19.572917938232422, right: 583.0937919616699, top: 258.22918701171875, width: 563.5208740234375, x: 19.572917938232422, y: 258.22918701171875, } }) expect(sliderInput).toBeInTheDocument() expect(sliderLabel).toHaveTextContent("Default text that the user should see") await fireEvent.mouseDown(sliderInput, { clientX: 162, clientY: 302 }) expect(sliderLabel).toHaveTextContent( "New text that the user should see" ) A: I turned the solution mentioned above into a simple (TypeScript) helper export class Slider { private static height = 10 // For simplicity pretend that slider's width is 100 private static width = 100 private static getBoundingClientRectMock() { return { bottom: Slider.height, height: Slider.height, left: 0, right: Slider.width, top: 0, width: Slider.width, x: 0, y: 0, } as DOMRect } static change(element: HTMLElement, value: number, min: number = 0, max: number = 100) { const getBoundingClientRect = element.getBoundingClientRect element.getBoundingClientRect = Slider.getBoundingClientRectMock fireEvent.mouseDown( element, { clientX: ((value - min) / (max - min)) * Slider.width, clientY: Slider.height } ) element.getBoundingClientRect = getBoundingClientRect } } Usage: Slider.change(getByTestId('mySlider'), 40) // When min=0, max=100 (default) // Otherwise Slider.change(getByTestId('mySlider'), 4, 0, 5) // Sets 4 with scale set to 0-5 A: Building on @rehman_00001's answer, I created a file mock for the component. I wrote it in TypeScript, but it should work just as well without the types. 
__mocks__/@material-ui/core/Slider.tsx import { SliderTypeMap } from '@material-ui/core'; import React from 'react'; export default function Slider(props: SliderTypeMap['props']): JSX.Element { const { onChange, ...others } = props; return ( <input type="range" onChange={(event) => { onChange && onChange(event, parseInt(event.target.value)); }} {...(others as any)} /> ); } Now every usage of the Material UI <Slider/> component will be rendered as a simple HTML <input/> element during testing which is much easier to work with using Jest and react-testing-library. {...(others as any)} is a hack that allows me to avoid worrying about ensuring that every possible prop of the original component is handled properly. Depending on what Slider props you rely on, you may need to pull out additional props during destructuring so that you can properly translate them into something that makes sense on a vanilla <input/> element. See this page in the Material UI docs for info on each possible property. A: Here is an easy and flexible approach: Import Mui like this: import * as MuiModule from '@mui/material'; Mock the @mui/material library and specify a function for Slider jest.mock('@mui/material', () => ({ __esModule: true, ...jest.requireActual('@mui/material'), Slider: () => {}, // Important: resolves issues where Jest sees Slider as an object instead of a function and allows jest.spy to work })); Spy on Slider and write your own implementation describe('Given a MyCustomSlider component', () => { beforeEach(() => { jest.spyOn(MuiModule, 'Slider').mockImplementation((props) => { const { value, onChange, onChangeCommitted } = props; const simulateChange = (event) => { if (!onChange) return; onChange(event, [0, 10], 0); }; const simulateChangeCommitted = (event) => { if (!onChangeCommitted) return; onChangeCommitted(event, [0, 10]); }; return ( <> Value: {value} <button onClick={simulateChange}>Trigger Change</button> <button onClick={simulateChangeCommitted}> Trigger Change Committed </button> </> ); }); }); // ... tests here }); Render your component and trigger the change event: describe('When it is rendered', () => { beforeEach(() => { jest.clearAllMocks(); render(<MyCustomSlider />); }); describe('When a change event occurs', () => { beforeEach(async () => { jest.clearAllMocks(); await userEvent.click(screen.getByText('Trigger Change')); }); test('Then ... 
(your assertion here)', () => { // your test }); }); }); Full example: import userEvent from '@testing-library/user-event'; import * as MuiModule from '@mui/material'; import { render } from '../../../libs/test-utils'; jest.mock('@mui/material', () => ({ __esModule: true, ...jest.requireActual('@mui/material'), Slider: () => {}, })); describe('Given a MyCustomSlider component', () => { beforeEach(() => { jest.spyOn(MuiModule, 'Slider').mockImplementation((props) => { const { value, onChange, onChangeCommitted } = props; const simulateChange = (event) => { if (!onChange) return; onChange(event, [0, 10], 0); }; const simulateChangeCommitted = (event) => { if (!onChangeCommitted) return; onChangeCommitted(event, [0, 10]); }; return ( <> Value: {value} <button onClick={simulateChange}>Trigger Change</button> <button onClick={simulateChangeCommitted}> Trigger Change Committed </button> </> ); }); }); describe('When it is rendered', () => { beforeEach(() => { jest.clearAllMocks(); render(<MyCustomSlider />); }); describe('When a change event occurs', () => { beforeEach(async () => { jest.clearAllMocks(); await userEvent.click(screen.getByText('Trigger Change')); }); test('Then ... (your assertion here)', () => { // your test }); }); }); });
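For completeness, a small hedged sketch of how a test might exercise the input-based mock from the first answer; the Volume wrapper component, its test id, and the string-value assertion are all made up for illustration (the mock forwards event.target.value, which is a string):

import React from 'react';
import { render, fireEvent, screen } from '@testing-library/react';
import Slider from '@material-ui/core/Slider'; // resolved to the mock from setupTests.js

// The mock accepts a custom `testid` prop the real Slider types don't know
// about, so the type is loosened for this sketch.
const MockedSlider = Slider as any;

// Hypothetical component under test that forwards slider changes.
function Volume({ onChange }: { onChange: (value: string) => void }) {
  return <MockedSlider testid="slider" min={0} max={100} onChange={onChange} />;
}

test('forwards slider changes to onChange', () => {
  const handleChange = jest.fn();
  render(<Volume onChange={handleChange} />);
  // The mock renders <input type="range">, so a plain change event works.
  fireEvent.change(screen.getByTestId('slider'), { target: { value: '25' } });
  expect(handleChange).toHaveBeenCalledWith('25');
});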
Testing a material ui slider with @testing-library/react
Hi there, I'm trying to test a Slider component created with Material-UI, but I cannot get my tests to pass. I would like to test the value changes using fireEvent with @testing-library/react. I've been following this post to properly query the DOM, but I cannot get the correct DOM nodes. Thanks in advance. <Slider /> component // @format // @flow import * as React from "react"; import styled from "styled-components"; import { Slider as MaterialUISlider } from "@material-ui/core"; import { withStyles, makeStyles } from "@material-ui/core/styles"; import { priceRange } from "../../../domain/Search/PriceRange/priceRange"; const Wrapper = styled.div` width: 93%; display: inline-block; margin-left: 0.5em; margin-right: 0.5em; margin-bottom: 0.5em; `; // omitted code pertaining to props and styles for simplicity function Slider(props: SliderProps) { const initialState = [1, 100]; const [value, setValue] = React.useState(initialState); function onHandleChangeCommitted(e, latestValue) { e.preventDefault(); const { onUpdate } = props; const newPriceRange = priceRange(latestValue); onUpdate(newPriceRange); } function onHandleChange(e, newValue) { e.preventDefault(); setValue(newValue); } return ( <Wrapper aria-label="range-slider" > <SliderWithStyles aria-labelledby="range-slider" defaultValue={initialState} // getAriaLabel={index => // index === 0 ? "Minimum Price" : "Maximum Price" // } getAriaValueText={valueText} onChange={onHandleChange} onChangeCommitted={onHandleChangeCommitted} valueLabelDisplay="auto" value={value} /> </Wrapper> ); } export default Slider; Slider.test.js // @flow import React from "react"; import { cleanup, render, getAllByAltText, fireEvent, waitForElement } from "@testing-library/react"; import "@testing-library/jest-dom/extend-expect"; import Slider from "../Slider"; afterEach(cleanup); describe("<Slider /> specs", () => { // [NOTE]: Works, but maybe there is a better way to do it? xdescribe("<Slider /> component aria-label", () => { it("renders without crashing", () => { const { container } = render(<Slider />); expect(container.firstChild).toBeInTheDocument(); }); }); // [ASK]: How to test the event handlers with fireEvent. describe("<Slider /> props", () => { it("display an initial min value of '1'", () => { const renderResult = render(<Slider />); // TODO }); it("display an initial max value of '100'", () => { const renderResult = render(<Slider />); // TODO }); xit("display two values via the onHandleChangeCommitted event when dragging stops", () => { const renderResult = render(<Slider />); console.log(renderResult) // fireEvent.change(renderResult.getByText("1")) // expect(onChange).toHaveBeenCalled(0); }); // [NOTE]: Does not work, returns undefined xit("display two values via the onHandleChange event when dragging stops", () => { const renderResult = render(<Slider />); console.log(renderResult.container); const spanNodeWithAriaAttribute = renderResult.container.firstChild.getElementsByTagName("span")[0].getAttribute('aria-label') expect(spanNodeWithAriaAttribute).toBe(/range-slider/) }); }); // [ASK]: Works, but a snapshot is overkill; is there a better way of doing this? xdescribe("<Slider /> snapshot", () => { it("renders without crashing", () => { const { container } = render(<Slider />); expect(container.firstChild).toMatchSnapshot(); }); }); });
[ "I would recommend not to write tests for a custom component and believe that this component works for all our cases.\nRead through this article for more details. In that they have mentioned how to write unit tests for a component that wraps react-select.\nI followed the similar approach and wrote a mock for my third-party slider component.\nin setupTests.js:\njest.mock('@material-ui/core/Slider', () => (props) => {\n const { id, name, min, max, onChange, testid } = props;\n return (\n <input\n data-testid={testid}\n type=\"range\"\n id={id}\n name={name}\n min={min}\n max={max}\n onChange={(event) => onChange(event.target.value)}\n />\n );\n});\n\nWith this mock, you can simply fire a change event in your tests like this:\nfireEvent.change(getByTestId(`slider`), { target: { value: 25 } });\n\nMake sure to pass proper testid as a prop to your SliderWithStyles component\n", "After battling this for hours I was able to solve my case related to testing MUI slider\nIt really depends on how you need to test yours, in my case I have to check if a label text content has changed after clicking a mark using marks slider props.\nThe problems\n1) The slider component computes the return value base on elements getBoundingClientRect and MouseEvent\n2) How to query the slider and fire the event.\n3) JSDOM limitation on reading element actual height and width which causes the problem no.1 \nThe solution\n1) mock getBoundingClientRect should also fix the problem no.3\n2) add test id to slider and use use fireEvent.mouseDown(contaner, {....})\nconst sliderLabel = screen.getByText(\"Default text that the user should see\")\n\n// add data-testid to slider\nconst sliderInput = screen.getByTestId(\"slider\")\n\n// mock the getBoundingClientRect\n sliderInput.getBoundingClientRect = jest.fn(() => {\n return {\n bottom: 286.22918701171875,\n height: 28,\n left: 19.572917938232422,\n right: 583.0937919616699,\n top: 258.22918701171875,\n width: 563.5208740234375,\n x: 19.572917938232422,\n y: 258.22918701171875,\n }\n })\n\n expect(sliderInput).toBeInTheDocument()\n\n expect(sliderLabel).toHaveTextContent(\"Default text that the user should see\")\n await fireEvent.mouseDown(sliderInput, { clientX: 162, clientY: 302 })\n expect(sliderLabel).toHaveTextContent(\n \"New text that the user should see\"\n )\n\n\n", "I turned the solution mentioned above into simple (Typescript) helper\nexport class Slider {\n private static height = 10\n\n // For simplicity pretend that slider's width is 100\n private static width = 100\n\n private static getBoundingClientRectMock() {\n return {\n bottom: Slider.height,\n height: Slider.height,\n left: 0,\n right: Slider.width,\n top: 0,\n width: Slider.width,\n x: 0,\n y: 0,\n } as DOMRect\n }\n\n static change(element: HTMLElement, value: number, min: number = 0, max: number = 100) {\n const getBoundingClientRect = element.getBoundingClientRect\n element.getBoundingClientRect = Slider.getBoundingClientRectMock\n fireEvent.mouseDown(\n element,\n {\n clientX: ((value - min) / (max - min)) * Slider.width,\n clientY: Slider.height\n }\n )\n element.getBoundingClientRect = getBoundingClientRect\n }\n}\n\nUsage:\nSlider.change(getByTestId('mySlider'), 40) // When min=0, max=100 (default)\n// Otherwise\nSlider.change(getByTestId('mySlider'), 4, 0, 5) // Sets 4 with scale set to 0-5\n\n", "Building on @rehman_00001's answer, I created a file mock for the component. 
I wrote it in TypeScript, but it should work just as well without the types.\n__mocks__/@material-ui/core/Slider.tsx\nimport { SliderTypeMap } from '@material-ui/core';\nimport React from 'react';\n\nexport default function Slider(props: SliderTypeMap['props']): JSX.Element {\n const { onChange, ...others } = props;\n return (\n <input\n type=\"range\"\n onChange={(event) => {\n onChange && onChange(event, parseInt(event.target.value));\n }}\n {...(others as any)}\n />\n );\n}\n\nNow every usage of the Material UI <Slider/> component will be rendered as a simple HTML <input/> element during testing which is much easier to work with using Jest and react-testing-library.\n{...(others as any)} is a hack that allows me to avoid worrying about ensuring that every possible prop of the original component is handled properly. Depending on what Slider props you rely on, you may need to pull out additional props during destructuring so that you can properly translate them into something that makes sense on a vanilla <input/> element. See this page in the Material UI docs for info on each possible property.\n", "Here is an easy and flexible approach:\n\nImport Mui like this:\n\nimport * as MuiModule from '@mui/material';\n\n\nMock the @mui/material library and specify a function for Slider\n\njest.mock('@mui/material', () => ({\n __esModule: true,\n ...jest.requireActual('@mui/material'),\n Slider: () => {}, // Important: resolves issues where Jest sees Slider as an object instead of a function and allows jest.spy to work\n}));\n\n\nSpy on Slider and write your own implementation\n\ndescribe('Given a MyCustomSlider component', () => {\n beforeEach(() => {\n jest.spyOn(MuiModule, 'Slider').mockImplementation((props) => {\n const { value, onChange, onChangeCommitted } = props;\n const simulateChange = (event) => {\n if (!onChange) return;\n onChange(event, [0, 10], 0);\n };\n const simulateChangeCommitted = (event) => {\n if (!onChangeCommitted) return;\n onChangeCommitted(event, [0, 10]);\n };\n return (\n <>\n Value: {value}\n <button onClick={simulateChange}>Trigger Change</button>\n <button onClick={simulateChangeCommitted}>\n Trigger Change Committed\n </button>\n </>\n );\n });\n });\n\n // ... tests here\n});\n\n\nRender your component and trigger the change event:\n\ndescribe('When it is rendered', () => {\n beforeEach(() => {\n jest.clearAllMocks();\n render(<MyCustomSlider />);\n });\n\n describe('When a change event occurs', () => {\n beforeEach(async () => {\n jest.clearAllMocks();\n await userEvent.click(screen.getByText('Trigger Change'));\n });\n\n test('Then ... 
(your assertion here)', () => {\n // your test\n });\n });\n});\n\nFull example:\nimport userEvent from '@testing-library/user-event';\nimport * as MuiModule from '@mui/material';\nimport { render } from '../../../libs/test-utils';\n\njest.mock('@mui/material', () => ({\n __esModule: true,\n ...jest.requireActual('@mui/material'),\n Slider: () => {},\n}));\n\ndescribe('Given a MyCustomSlider component', () => {\n beforeEach(() => {\n jest.spyOn(MuiModule, 'Slider').mockImplementation((props) => {\n const { value, onChange, onChangeCommitted } = props;\n const simulateChange = (event) => {\n if (!onChange) return;\n onChange(event, [0, 10], 0);\n };\n const simulateChangeCommitted = (event) => {\n if (!onChangeCommitted) return;\n onChangeCommitted(event, [0, 10]);\n };\n return (\n <>\n Value: {value}\n <button onClick={simulateChange}>Trigger Change</button>\n <button onClick={simulateChangeCommitted}>\n Trigger Change Committed\n </button>\n </>\n );\n });\n });\n\n describe('When it is rendered', () => {\n beforeEach(() => {\n jest.clearAllMocks();\n render(<MyCustomSlider />);\n });\n\n describe('When a change event occurs', () => {\n beforeEach(async () => {\n jest.clearAllMocks();\n await userEvent.click(screen.getByText('Trigger Change'));\n });\n\n test('Then ... (your assertion here)', () => {\n // your test\n });\n });\n });\n});\n\n\n" ]
[ 8, 8, 3, 1, 0 ]
[]
[]
[ "jestjs", "material_ui", "react_testing_library", "reactjs", "testing" ]
stackoverflow_0058856094_jestjs_material_ui_react_testing_library_reactjs_testing.txt
Q: save InAppWebView Cookies in Shared_Preferences Flutter I'm using shared_preferences ^2.0.15. I'm facing a webview logout problem. Every time the user restarts the app, the user is logged out of the webview. I'm trying to save cookies when the user logs in to the webview and cookies are generated. So I'm having trouble saving the cookies in Shared_Preferences. final cookies = <dynamic>[].obs; void saveCookies(dynamic value) async { final store = await SharedPreferences.getInstance(); store.clear(); cookies.value = value; await store.setString('cookies', convert.jsonEncode(cookies.toJson())); // Fail to add } var cookieList = await _cookieManager.getCookies(url); saveCookies(cookieList); A: It looks like you are trying to save a list of cookies in Shared Preferences, but you are having trouble with the conversion to and from JSON. In the code you provided, you are calling toJson() on the cookies object, but this method is not available on the ObservableList class. Instead, you can convert the list to a JSON-encodable object, such as a List, by calling toList() on the ObservableList and then encode the resulting list to a JSON string using the jsonEncode function from the dart:convert library. Here is an example of how you could save the cookies in Shared Preferences: import 'package:flutter/services.dart' show rootBundle; import 'package:flutter/widgets.dart'; import 'package:shared_preferences/shared_preferences.dart'; import 'package:webview_flutter/webview_flutter.dart'; final cookies = <dynamic>[].obs; void saveCookies(dynamic value) async { final store = await SharedPreferences.getInstance(); store.clear(); cookies.value = value; await store.setString('cookies', jsonEncode(cookies.toList())); } void main() async { WidgetsFlutterBinding.ensureInitialized(); // Initialize the webview and load a page final webView = WebView( initialUrl: 'https://www.example.com', javascriptMode: JavascriptMode.unrestricted, ); // Save the cookies when the webview finishes loading webView.onPageFinished.listen((url) async { final cookieManager = CookieManager(); final cookieList = await cookieManager.getCookies(url); saveCookies(cookieList); }); runApp(webView); } I hope this helps! Let me know if you have any other questions.
save InAppWebView Cookies in Shared_Preferences Flutter
I'm using shared_preferences ^2.0.15. I'm facing a webview logout problem. Every time the user restarts the app, the user is logged out of the webview. I'm trying to save cookies when the user logs in to the webview and cookies are generated. So I'm having trouble saving the cookies in Shared_Preferences. final cookies = <dynamic>[].obs; void saveCookies(dynamic value) async { final store = await SharedPreferences.getInstance(); store.clear(); cookies.value = value; await store.setString('cookies', convert.jsonEncode(cookies.toJson())); // Fail to add } var cookieList = await _cookieManager.getCookies(url); saveCookies(cookieList);
[ "It looks like you are trying to save a list of cookies in Shared Preferences, but you are having trouble with the conversion to and from JSON. In the code you provided, you are calling toJson() on the cookies object, but this method is not available on the ObservableList class. Instead, you can convert the list to a JSON-encodable object, such as a List, by calling toList() on the ObservableList and then encode the resulting list to a JSON string using the jsonEncode function from the dart:convert library.\nHere is an example of how you could save the cookies in Shared Preferences:\nimport 'package:flutter/services.dart' show rootBundle;\nimport 'package:flutter/widgets.dart';\nimport 'package:shared_preferences/shared_preferences.dart';\nimport 'package:webview_flutter/webview_flutter.dart';\n\nfinal cookies = <dynamic>[].obs;\n\nvoid saveCookies(dynamic value) async {\n final store = await SharedPreferences.getInstance();\n store.clear();\n cookies.value = value;\n await store.setString('cookies', jsonEncode(cookies.toList()));\n}\n\nvoid main() async {\n WidgetsFlutterBinding.ensureInitialized();\n\n // Initialize the webview and load a page\n final webView = WebView(\n initialUrl: 'https://www.example.com',\n javascriptMode: JavascriptMode.unrestricted,\n );\n\n // Save the cookies when the webview finishes loading\n webView.onPageFinished.listen((url) async {\n final cookieManager = CookieManager();\n final cookieList = await cookieManager.getCookies(url);\n saveCookies(cookieList);\n });\n\n runApp(webView);\n}\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074657956_dart_flutter.txt
Q: How to create a visual family tree by reading data from excel or txt file? The information in the file includes name, surname, gender, spouse's name, mother's name and father's name. How do I create a pedigree chart accordingly? How do I create the chart with a Java GUI form? I read the file and assigned each row to a person object, and also assigned each column to a list based on that column's property. A: You can try Family Tree JS: https://balkan.app/FamilyTreeJS/Docs/GettingStarted Feel free to ask any further questions.
How to create a visual family tree by reading data from excel or txt file?
The information in the file includes name, surname, gender, spouse's name, mother's name and father's name. How do I create a pedigree chart accordingly? How do I create the chart with a Java GUI form? I read the file and assigned each row to a person object, and also assigned each column to a list based on that column's property.
[ "You can try Family Tree JS:\nhttps://balkan.app/FamilyTreeJS/Docs/GettingStarted\nFeel free to ask any further questions.\n" ]
[ 0 ]
[]
[]
[ "family_tree", "java", "nodes", "oop", "user_interface" ]
stackoverflow_0074642531_family_tree_java_nodes_oop_user_interface.txt
Q: Nest can't resolve dependencies of the services When I used the PersonService in the OrganizationService, I got an error like this: Nest can't resolve dependencies of the PersonService (PersonModel, ?). Please make sure that the argument at index [1] is available in the OrganizationModule context. Organization.service.ts @Injectable() export class OrganizationService { constructor( @InjectModel('Organization') private readonly organizationModel: Model<Organization>, @Inject(forwardRef(() => UsersService)) private readonly usersService: UsersService, private readonly mailerService: MailerService, @Inject(forwardRef(() => PersonService)) private readonly personService: PersonService, ) {} Organization.module.ts @Module({ imports: [ RateLimiterModule.register({ type: 'Memory', points: 100, duration: 60 * 5, keyPrefix: 'organization' }), MongooseModule.forFeature([ { name: 'Organization', schema: OrganizationSchema }, { name: 'User', schema: UserSchema }, { name: 'Person', schema: PersonSchema }, ]), PassportModule.register({ defaultStrategy: 'jwt', session: false }), forwardRef(() => UsersModule), forwardRef(() => PersonModule), ], exports: [OrganizationService], controllers: [OrganizationController], providers: [OrganizationService, UsersService, PersonService] }) Person.module.ts @Module({ imports: [ RateLimiterModule.register({ type: 'Memory', points: 100, duration: 60 * 5 }), MongooseModule.forFeature([ { name: 'Person', schema: PersonSchema }, { name: 'User', schema: UserSchema }, ]), PassportModule.register({ defaultStrategy: 'jwt', session: false }), forwardRef(() => UsersModule), ], exports: [PersonService], controllers: [PersonController], providers: [PersonService, UsersService] }) export class PersonModule { public configure(consumer: MiddlewareConsumer) { consumer .apply(LoggerMiddleware) .forRoutes(PersonController); consumer .apply(SiteMiddleware) .forRoutes(PersonController); } } What is the error in this code? A: You shouldn't re-add dependencies to providers arrays. You already define the PersonService provider in the PersonModule, and the PersonModule properly exports that provider, so all that needs to happen is the OrganizationModule needs to have PersonModule in the imports. By putting PersonService in the OrganizationModule's providers, Nest will try to recreate the provider in the context of the OrganizationModule, meaning it will need access to getModelToken('Person') and UsersService as other providers in the current context.
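A minimal sketch of the corrected wiring along those lines (an illustration, not the definitive fix): the schema and class imports are elided, the extra schema registrations are dropped on the assumption that UsersModule and PersonModule register their own models, and both forwardRef calls are kept as in the question:

import { forwardRef, Module } from '@nestjs/common';
import { MongooseModule } from '@nestjs/mongoose';
// OrganizationSchema, OrganizationService, OrganizationController,
// UsersModule and PersonModule are the classes from the question.

@Module({
  imports: [
    MongooseModule.forFeature([
      { name: 'Organization', schema: OrganizationSchema },
    ]),
    // These modules already provide and export UsersService and PersonService,
    // so importing them is enough -- do not re-declare their providers here.
    forwardRef(() => UsersModule),
    forwardRef(() => PersonModule),
  ],
  exports: [OrganizationService],
  controllers: [OrganizationController],
  providers: [OrganizationService], // only the providers this module owns
})
export class OrganizationModule {}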
Nest can't resolve dependencies of the services
When I used the PersonService in the OrganizationService, I got an error like this: Nest can't resolve dependencies of the PersonService (PersonModel, ?). Please make sure that the argument at index [1] is available in the OrganizationModule context. Organization.service.ts @Injectable() export class OrganizationService { constructor( @InjectModel('Organization') private readonly organizationModel: Model<Organization>, @Inject(forwardRef(() => UsersService)) private readonly usersService: UsersService, private readonly mailerService: MailerService, @Inject(forwardRef(() => PersonService)) private readonly personService: PersonService, ) {} Organization.module.ts @Module({ imports: [ RateLimiterModule.register({ type: 'Memory', points: 100, duration: 60 * 5, keyPrefix: 'organization' }), MongooseModule.forFeature([ { name: 'Organization', schema: OrganizationSchema }, { name: 'User', schema: UserSchema }, { name: 'Person', schema: PersonSchema }, ]), PassportModule.register({ defaultStrategy: 'jwt', session: false }), forwardRef(() => UsersModule), forwardRef(() => PersonModule), ], exports: [OrganizationService], controllers: [OrganizationController], providers: [OrganizationService, UsersService, PersonService] }) Person.module.ts @Module({ imports: [ RateLimiterModule.register({ type: 'Memory', points: 100, duration: 60 * 5 }), MongooseModule.forFeature([ { name: 'Person', schema: PersonSchema }, { name: 'User', schema: UserSchema }, ]), PassportModule.register({ defaultStrategy: 'jwt', session: false }), forwardRef(() => UsersModule), ], exports: [PersonService], controllers: [PersonController], providers: [PersonService, UsersService] }) export class PersonModule { public configure(consumer: MiddlewareConsumer) { consumer .apply(LoggerMiddleware) .forRoutes(PersonController); consumer .apply(SiteMiddleware) .forRoutes(PersonController); } } What is the error in this code?
[ "You shouldn't re-add dependencies to providers arrays. You already define the PersonService provider in the PersonModule, and the PersonModule properly exports that provider, so all that needs to happen is the OrganizationModule needs to have PersonModule in the imports.\nBy putting PersonService in the OrganizationModule's providers, Nest will try to recreate the provider in the context of the OrganiztionModule, meaning it will need access to getModelToken('Person') and UsersService as other providers in the current context.\n" ]
[ 0 ]
[]
[]
[ "nestjs", "node.js" ]
stackoverflow_0074656013_nestjs_node.js.txt
Q: SocketException:Connection failed (OS Error: Network is unreachable, errno = 101), address = 10.0.2.2, port = 80 Making a login screen in Flutter; when I tap the login it gives the error 'network is unreachable'. I have changed the IP addresses "10.0.2.2", "8.7.7.7" but it doesn't work. Error : E/flutter (16082): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: SocketException: Connection failed (OS Error: Network is unreachable, errno = 101), address = 10.0.2.2, port = 80 CODE : TextEditingController user=new TextEditingController(); TextEditingController pass=new TextEditingController(); Future<List> _login() async{ final response = await http.post("http://127.0.0.1/my_store/login.php", body: { "username": user.text, "password": pass.text, }); print(response.body); } A: In my case turning on the wifi and making sure that it is connected solved the issue. A: If you are using a physical device, make sure the IP address is the IP of your computer. You can find it by running ipconfig in cmd. Remember you have to be connected to the internet to have this IP address. A: I have searched some threads about this and similar connection problems. In my case, sometimes the connection works, sometimes it refuses to work. The process I used to solve this problem was the following: Open cmd -> ipconfig; the IPv4 address shown there is the IP that is relevant for my solution. Since I am using an Apache server, and I have a php file that handles the request I make in Flutter, I set the url to the following: String url="http://192.168.0.137/login.php" In your case, the code would be TextEditingController user=new TextEditingController(); TextEditingController pass=new TextEditingController(); Future<List> _login() async{ final response = await http.post("http://<your_ipv4_of_ipconfig>/my_store/login.php", body: { "username": user.text, "password": pass.text, }); print(response.body); } A: After many hours of head scratching, in my case it was that the data connection icon had been turned off by mistake, cutting all sorts of data streams to the VM :~ A: Turning OFF the Wi-Fi on the emulator helped me!
SocketException:Connection failed (OS Error: Network is unreachable, errno = 101), address = 10.0.2.2, port = 80
Making a login screen in Flutter; when I tap the login it gives the error 'network is unreachable'. I have changed the IP addresses "10.0.2.2", "8.7.7.7" but it doesn't work. Error : E/flutter (16082): [ERROR:flutter/lib/ui/ui_dart_state.cc(148)] Unhandled Exception: SocketException: Connection failed (OS Error: Network is unreachable, errno = 101), address = 10.0.2.2, port = 80 CODE : TextEditingController user=new TextEditingController(); TextEditingController pass=new TextEditingController(); Future<List> _login() async{ final response = await http.post("http://127.0.0.1/my_store/login.php", body: { "username": user.text, "password": pass.text, }); print(response.body); }
[ "In my case turning on the wifi and making sure that it is connected solved the issue.\n", "if you are using a physical device make sure the ip address is the ip of your computer. you can find it by running ipconfig in cmd. Remember you have to be connected to the internet to have this ip address.\n", "I have searched some threads about this and similar connection problems. In my case, sometimes the connection works, sometimes it refuses to work. The process I used to solve this problem was the following:\nOpen cmd -> ipconfig\nthe ip that is relevant for my solution\nSince I am using an Apache server, and I have a php file that handles the request I make in Flutter, I set the url to the following:\n String url=\"http://192.168.0.137/login.php\"\n\nIn your case, the code would be\n TextEditingController user=new TextEditingController();\n TextEditingController pass=new TextEditingController();\n\n Future<List> _login() async{\n final response = await http.post(\"http://<your_ipv4_of_ipconfig>/my_store/login.php\", body: {\n \"username\": user.text,\n \"password\": pass.text,\n\n });\n\n print(response.body);\n }\n\n", "After many hours of head scratching, in my case it was the data connection icon was turned off by mistake cutting all sorts of data streams to vm :~\n", "Turning OFF the Wi-fi on emulator helped me!\n" ]
[ 11, 3, 1, 0, 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0056757712_dart_flutter.txt
Q: Estimate elapsed time in Db2 I wanted to calculate the elapsed time of a script in Db2 LUW. I need to write the code to get the start time and end time, then return the difference. select current timestamp as startdate from sysibm.sysdummy1; -- my queries select current timestamp as enddate from sysibm.sysdummy1; select timestampdiff (enddate , startdate); A: Firstly, you are using timestampdiff() incorrectly -- please check the manual. Secondly, you cannot select from nothing in Db2; you seem to know how to use sysibm.sysdummy1, so apply the same technique to your elapsed time calculation. Alternatively, you could use the values statement. But worst of all, you don't save the result of your select current timestamp... queries anywhere, so you can't reference them later. You could do something like this if you don't want to write SQL/PL code: create table t (starttime timestamp, endtime timestamp); -- you could also declare a global temporary table instead insert into t (starttime) values (current timestamp); -- your statements update t set endtime=current timestamp; select timestampdiff(1, char(endtime-starttime)) as elapsed_microseconds from t; drop table t; -- if it's not a temp table
Estimate elapsed time in Db2
I wanted to calculate the elapsed time of a script in Db2 LUW. I need to write the code to get the start time and end time, then return the difference. select current timestamp as startdate from sysibm.sysdummy1; -- my queries select current timestamp as enddate from sysibm.sysdummy1; select timestampdiff (enddate , startdate);
[ "Firstly, you are using timestampdiff() incorrectly -- please check the manual.\nSecondly, you cannot select from nothing in Db2; you seem to know how to use sysibm.sysdummy1, so apply the same technique to your elapsed time calculation. Alternatively, you could use the values statement.\nBut worst of all, you don't save the result of your select current timestamp... queries anywhere, so you can't reference them later.\nYou could do something like this if you don't want to write SQL/PL code:\ncreate table t (starttime timestamp, endtime timestamp);\n-- you could also declare a global temporary table instead\n\ninsert into t (starttime) values (current timestamp);\n\n-- your statements\n\nupdate t set endtime=current timestamp;\nselect timestampdiff(1, char(endtime-starttime)) as elapsed_microseconds from t;\ndrop table t; -- if it's not a temp table\n\n" ]
[ 0 ]
[]
[]
[ "db2", "db2_luw" ]
stackoverflow_0074648046_db2_db2_luw.txt
Q: How can VB.NET make an inherited class with selected variables from a base class I need to make an inherited class from a base class with selected variables. For example, if the base class has 3 variables (name, age, marks) but the inherited class must have name and marks only, how can we do it? A: When designing object-oriented code, subclasses should be the specializations. The situation you describe makes the base class the specialization because it has more specific requirements than the subclass. There is a principle called the "Liskov Substitution Principle" that says all subclasses should work where the base class works - and this wouldn't be the case as calling subclass.age would fail. Instead, the base class should have the two common properties, and there should be a subclass that represents the extended situation where an age would be used. Score Name Marks AgedScore extends Score Age The names are examples here, ideally you'd name them after what they relate to in the business domain.
How can VB.NET make an inherited class with selected variables from a base class
I need to make an inherited class from a base class with selected variables. For example, if the base class has 3 variables (name, age, marks) but the inherited class must have name and marks only, how can we do it?
[ "When designing object-oriented code, subclasses should be the specializations. The situation you describe makes the base class the specialization because it has more specific requirements than the subclass.\nThere is a principle called the \"Liskov Substitution Principle\" that says all subclasses should work where the base class works - and this wouldn't be the case as calling subclass.age would fail.\nInstead, the base class should have the two common properties, and there should be a subclass that represents the extended class that represents the situation where an age would be used.\nScore\n\nName\nMarks\n\nAgedScore extends Score\n\nAge\n\nThe names are examples here, ideally you'd name them after what they relate to in the business domain.\n" ]
[ 0 ]
[]
[]
[ "abstract_base_class", "class", "inheritance", "oop", "vb.net" ]
stackoverflow_0074651635_abstract_base_class_class_inheritance_oop_vb.net.txt
Q: Transactional: controller vs service Consider I have a controller method get() which calls a few service methods working with the database. Is it correct to make the entire controller method transactional or just every service method? It seems to me that we must make get() transactional because it performs associated operations. A: That's entirely up to you, and how you interpret your own business logic. Spring doesn't really care where you put the transaction boundaries, and certainly doesn't limit you to putting them on your DAO classes. So yes, adding @Transactional to your controller methods is perfectly valid. A: I prefer to make transactional only the service methods that need to be transactional, and to control the transactionality in the service, not in the controller. You can create a service method which englobes other service methods and, with Spring transactions, manage the transaction via the propagation setting in the @Transactional annotation. @Transactional(propagation =...) Edit If I had 2 methods, for example saveUser() and saveEmail() (because I store the emails in a database to send them later - like a queue), I would create in my service a method saveUserAndSendEmail(User user) which would be transactional. This method would call saveUser() and saveEmail(), each one in a @Repository component because they deal with the database. So I would put the methods that deal with the database in the @Repository components, and then I control the transactionality in the @Service component. Then the controller only has to worry about providing the data and calling the services whenever they are needed. But I use a transaction because I don't want to commit changes in the database until the whole method has executed successfully. But this is the style I usually use, I'm not saying that this must be the way to go. A: You also need to consider that if you add it at the controller level then you may hold a connection within a transaction for longer than needed. So it is good practice to only wrap in a transaction what is needed. I agree the service level is a more appropriate place.
Transactional: controller vs service
Consider I have a controller method get() which calls a few service methods working with the database. Is it correct to make the entire controller method transactional or just every service method? It seems to me that we must make get() transactional because it performs associated operations.
[ "That's entirely up to you, and how you interpret your own business logic.\nSpring doesn't really care where you put the transaction boundaries, and certainly doesn't limit you to putting them on your DAO classes.\nSo yes, adding @Transactional to your controller methods is perfectly valid.\n", "I prefer to make only transactional the service methods that need to be transactional and control the transactionality in the service not in the controller. You can create a service method which englobes other service methods and with the spring transaction manage the transaction with propagation in @Transactional annotation.\n@Transactional(propagation =...)\n\nEdit\nIf I had 2 methods for example saveUser() and saveEmail() (because I store the emails in a database to send them later - like a queue) I would create in my service a method saveUserAndSendEmail(User user) which would be transactional. This method would call saveUser and saveEmail() each one in a @Repository component because they deal with the database. So I would put them in the @Repository components the methods to handle with the database and then I control the transactionality in the @Service component. Then the controller will only have to worry about providing the data and calling whenever they are needed. But I make a transaction because I don't want to commit changes in thedatabase until the whole method is executed successfully.\nBut this is the style I usually use, I'm not saying that this must be the way to go.\n", "You also need to consider that if you add it on the controller level then you may hold connection withing transaction for longer then needed. So it is good practice only wrap in transaction what is needed. I agree service level is more appropriate place.\n" ]
[ 10, 9, 0 ]
[]
[]
[ "spring_mvc", "transactions" ]
stackoverflow_0004462785_spring_mvc_transactions.txt
Q: CSS: Center block, but align contents to the left I want a whole block to be centered in its parent, but I want the contents of the block to be left aligned. Examples serve best. On this page: http://yaml-online-parser.appspot.com/?yaml=%23+ASCII+Art%0d%0a---+%7c%0d%0a++%5c%2f%2f%7c%7c%5c%2f%7c%7c%0d%0a++%2f%2f+%7c%7c++%7c%7c__%0d%0a&type=python the ASCII art should be centered (as it appears) but it should line up and look like "YAML". Or this: http://yaml-online-parser.appspot.com/?yaml=%3f+-+Detroit+Tigers%0d%0a++-+Chicago+cubs%0d%0a%3a%0d%0a++-+2001-07-23%0d%0a%0d%0a%3f+%5b+New+York+Yankees%2c%0d%0a++++Atlanta+Braves+%5d%0d%0a%3a+%5b+2001-07-02%2c+2001-08-12%2c%0d%0a++++2001-08-14+%5d%0d%0a the error message should all line up as it does in a console. A: First, create a parent div that centers its child content with text-align: center. Next, create a child div that uses display: inline-block to adapt to the width of its children and text-align: left to make the content it holds align to the left as desired. <div style="text-align: center;"> <div style="display: inline-block; text-align: left;"> Centered<br /> Content<br /> That<br /> Is<br /> Left<br /> Aligned </div> </div> If you wish to ensure that a long line does not widen everything too much, you may also apply the max-width property (with a value of your choice) to the inner tag: max-width: 250px; A: Reposting the working answer from the other question: How to horizontally center a floating element of a variable width? Assuming the element which is floated and will be centered is a div with an id="content" ... <body> <div id="wrap"> <div id="content"> This will be centered </div> </div> </body> And apply the following CSS #wrap { float: left; position: relative; left: 50%; } #content { float: left; position: relative; left: -50%; } Here is a good reference regarding that http://dev.opera.com/articles/view/35-floats-and-clearing/#centeringfloats A: If I understand you well, to center a container (or block) you need to use margin-left: auto; margin-right: auto; and to left-align its contents: text-align: left; A: I've found the easiest way to centre and left-align text inside a container is the following: HTML: <div> <p>Some interesting text.</p> </div> CSS: P { width: 50%; //or whatever looks best margin: auto; //top and bottom margin can be added for aesthetic effect } Hope this is what you were looking for as it took me quite a bit of searching just to figure out this pretty basic solution. A: Normally you should use margin: 0 auto on the div as mentioned in the other answers, but you'll have to specify a width for the div. If you don't want to specify a width you could (depending on what you're trying to do) use margins, something like margin: 0 200px;, which should make your content seem as if it's centered; you could also see Leyu's answer to my question A: <div> <div style="text-align: left; width: 400px; border: 1px solid black; margin: 0 auto;"> <pre> Hello Testing Beep </pre> </div> </div> A: Is this what you are looking for? Flexbox... 
.container{ display: flex; flex-flow: row wrap; justify-content: center; align-content: center; align-items: center; } .inside{ height:100px; width:100px; background:gray; border:1px solid; } <section class="container"> <section class="inside"> A </section> <section class="inside"> B </section> <section class="inside"> C </section> </section> A: For those of us still working with older browsers, here's some extended backwards compatibility: <div style="text-align: center;"> <div style="display:-moz-inline-stack; display:inline-block; zoom:1; *display:inline; text-align: left;"> Line 1: Testing<br> Line 2: More testing<br> Line 3: Even more testing<br> </div> </div> Partially inspired by this post: https://stackoverflow.com/a/12567422/14999964. A: Use the CSS text-align and display properties to see the changes accordingly. Margins are also helpful. For me, in the case of SweetAlert, the following code works to center and left-align. For you it may be a different scenario. .format-pre pre { font-size: 18px; text-align: left; display: inline-block; } In the ts file: showPasswordHints(){ var message = 'Your password must contain:<br>'+ '1. At least 8 characters in length<br>'+ '2. At least 3 Lowercase letters<br>'+ '3. At least 1 Uppercase letter<br>'+ '4. At least 1 Number<br>'+ '5. At least 1 Special character<br>'+ '6. Maximum 16 characters in length'; Swal.fire({ html:'<pre>' + message + '</pre>', customClass: { popup: 'format-pre' } , showClass: { popup: 'animate__animated animate__fadeInDown' }, hideClass: { popup: 'animate__animated animate__fadeOutUp' }, icon: 'info', confirmButtonText: 'Got it', confirmButtonColor: '#3f51b5', }); }
CSS: Center block, but align contents to the left
I want a whole block to be centered in its parent, but I want the contents of the block to be left aligned. Examples serve best. On this page: http://yaml-online-parser.appspot.com/?yaml=%23+ASCII+Art%0d%0a---+%7c%0d%0a++%5c%2f%2f%7c%7c%5c%2f%7c%7c%0d%0a++%2f%2f+%7c%7c++%7c%7c__%0d%0a&type=python the ASCII art should be centered (as it appears) but it should line up and look like "YAML". Or this: http://yaml-online-parser.appspot.com/?yaml=%3f+-+Detroit+Tigers%0d%0a++-+Chicago+cubs%0d%0a%3a%0d%0a++-+2001-07-23%0d%0a%0d%0a%3f+%5b+New+York+Yankees%2c%0d%0a++++Atlanta+Braves+%5d%0d%0a%3a+%5b+2001-07-02%2c+2001-08-12%2c%0d%0a++++2001-08-14+%5d%0d%0a the error message should all line up as it does in a console.
[ "First, create a parent div that centers its child content with text-align: center. Next, create a child div that uses display: inline-block to adapt to the width of its children and text-align: left to make the content it holds align to the left as desired.\n\n\n<div style=\"text-align: center;\">\n <div style=\"display: inline-block; text-align: left;\">\n Centered<br />\n Content<br />\n That<br />\n Is<br />\n Left<br />\n Aligned\n </div>\n</div>\n\n\n\nIf you wish to ensure that a long line does not widen everything too much, you may also apply the max-width property (with a value of your choice) to the inner tag:\nmax-width: 250px;\n\n", "Reposting the working answer from the other question: How to horizontally center a floating element of a variable width?\nAssuming the element which is floated and will be centered is a div with an id=\"content\" ...\n<body>\n<div id=\"wrap\">\n <div id=\"content\">\n This will be centered\n </div>\n</div>\n</body>\n\nAnd apply the following CSS\n#wrap {\n float: left;\n position: relative;\n left: 50%;\n}\n\n#content {\n float: left;\n position: relative;\n left: -50%;\n}\n\nHere is a good reference regarding that http://dev.opera.com/articles/view/35-floats-and-clearing/#centeringfloats\n", "If I understand you well, you need to use to center a container (or block)\nmargin-left: auto;\nmargin-right: auto;\n\nand to left align it's contents:\ntext-align: left;\n\n", "I've found the easiest way to centre and left-align text inside a container is the following:\nHTML:\n<div>\n <p>Some interesting text.</p>\n</div>\n\nCSS:\nP {\n width: 50%; //or whatever looks best\n margin: auto; //top and bottom margin can be added for aesthetic effect\n}\n\nHope this is what you were looking for as it took me quite a bit of searching just to figure out this pretty basic solution.\n", "Normally you should use margin: 0 auto on the div as mentioned in the other answers, but you'll have to specify a width for the div. If you don't want to specify a width you could either (this is depending on what you're trying to do) use margins, something like margin: 0 200px; , this should make your content seems as if it's centered, you could also see the answer of Leyu to my question\n", "<div>\n <div style=\"text-align: left; width: 400px; border: 1px solid black; margin: 0 auto;\">\n <pre>\nHello\nTesting\nBeep\n </pre>\n </div>\n</div>\n\n", "Is this what you are looking for? Flexbox...\n\n\n.container{\r\n display: flex;\r\n flex-flow: row wrap;\r\n justify-content: center;\r\n align-content: center;\r\n align-items: center;\r\n}\r\n.inside{\r\n height:100px;\r\n width:100px;\r\n background:gray;\r\n border:1px solid;\r\n}\n<section class=\"container\">\r\n <section class=\"inside\">\r\n A\r\n </section>\r\n <section class=\"inside\">\r\n B\r\n </section>\r\n <section class=\"inside\">\r\n C\r\n </section>\r\n</section>\n\n\n\n", "For those of us still working with older browsers, here's some extended backwards compatibility:\n\n\n<div style=\"text-align: center;\">\n <div style=\"display:-moz-inline-stack; display:inline-block; zoom:1; *display:inline; text-align: left;\">\n Line 1: Testing<br>\n Line 2: More testing<br>\n Line 3: Even more testing<br>\n </div>\n</div>\n\n\n\nPartially inspired by this post: https://stackoverflow.com/a/12567422/14999964.\n", "use CSS text-align and display properties to see changes accordingly. Margins are also helpful. For me in the case of SweetAlert to center and align left the following code works. 
For you may be a different scenario.\n.format-pre pre {\n font-size: 18px;\n text-align: left;\n display: inline-block;\n }\n\nin ts file\n showPasswordHints(){\n var message = 'Your password mist contain:<br>'+\n '1. At least 8 characters in length<br>'+\n '2. At least 3 Lowercase letters<br>'+\n '3. At least 1 Uppercase letter<br>'+\n '4. At least 1 Numbers<br>'+\n '5. At least 1 Special characters<br>'+\n '5. Maximum 16 characters in length';\nSwal.fire({\n html:'<pre>' + message + '</pre>',\n customClass: {\n popup: 'format-pre'\n }\n ,\n showClass: {\n popup: 'animate__animated animate__fadeInDown'\n },\n hideClass: {\n popup: 'animate__animated animate__fadeOutUp'\n },\n icon: 'info',\n confirmButtonText: 'Got it',\n confirmButtonColor: '#3f51b5',\n });\n }\n\n" ]
[ 212, 22, 5, 5, 2, 1, 1, 1, 0 ]
[ "THIS works\n<div style=\"display:inline-block;margin:10px auto;\">\n <ul style=\"list-style-type:none;\">\n <li style=\"text-align:left;\"><span class=\"red\">❶</span> YouTube AutoComplete Keyword Scraper software <em>root keyword text box</em>.</li>\n <li style=\"text-align:left;\"><span class=\"red\">❷</span> YouTube.com website <em>video search text box</em>.</li>\n <li style=\"text-align:left;\"><span class=\"red\">❸</span> YouTube AutoComplete Keyword Scraper software <em>scraped keywords listbox</em>.</li>\n <li style=\"text-align:left;\"><span class=\"red\">❹</span> YouTube AutoComplete Keyword Scraper software <em>right click context menu</em>.</li>\n </ul>\n</div>\n\n" ]
[ -1 ]
[ "alignment", "center", "css" ]
stackoverflow_0001269589_alignment_center_css.txt
Q: Kubernetes with Cluster Autoscaler & Statefulset: node(s) had volume node affinity conflict I have a Kubernetes cluster on AWS with Cluster Autoscaler (a component that automatically adjusts the desired number of nodes based on usage). The cluster previously had node A on AZ-1 and node B on AZ-2. When I deploy my statefulset with dynamic PVC, the PVC and PV are created on AZ-2, and the pods are created on node B. I deleted the statefulset to perform some testing. The Cluster Autoscaler decided that one node was now enough and adjusted the desired number down to 1. Now that node B is deleted, when I redeploy my statefulset, the pods are in pending state and can't be created on node A with the following error: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m8s (x997 over 18h) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict. Normal NotTriggerScaleUp 95s (x6511 over 18h) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict I know it is because the PVs are created in AZ-2 and can't be attached to pods in AZ-1, but how do I overcome this issue? A: Use multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-similar-node-groups feature. https://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html Important If you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes Cluster Autoscaler, you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-similar-node-groups feature.
Kubernetes with Cluster Autoscaler & Statefulset: node(s) had volume node affinity conflict
I have a Kubernetes cluster on AWS with Cluster Autoscaler (a component that automatically adjusts the desired number of nodes based on usage). The cluster previously had node A on AZ-1 and node B on AZ-2. When I deploy my statefulset with dynamic PVC, the PVC and PV are created on AZ-2, and the pods are created on node B. I deleted the statefulset to perform some testing. The Cluster Autoscaler decides that one node is now enough and adjusts the desired number down to 1. Now that node B is deleted, when I redeploy my statefulset, the pods are in a pending state and can't be created on node A, with the following error: Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedScheduling 2m8s (x997 over 18h) default-scheduler 0/1 nodes are available: 1 node(s) had volume node affinity conflict. Normal NotTriggerScaleUp 95s (x6511 over 18h) cluster-autoscaler pod didn't trigger scale-up: 1 node(s) had volume node affinity conflict I know it is because the PVs are created in AZ-2 and can't be attached to pods in AZ-1, but how do I overcome this issue?
[ "Use multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-similar-node-groups feature.\nhttps://docs.aws.amazon.com/eks/latest/userguide/managed-node-groups.html\nImportant\nIf you are running a stateful application across multiple Availability Zones that is backed by Amazon EBS volumes and using the Kubernetes Cluster Autoscaler, you should configure multiple node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-similar-node-groups feature.\n" ]
[ 0 ]
[]
[]
[ "amazon_eks", "kubernetes" ]
stackoverflow_0072863300_amazon_eks_kubernetes.txt
Q: How can I convert string date number to datetime in JavaScript? I have this date format that I wanna show it rightly formatted on the page: var dt = "/Date(570333600000-0200)/"; I'm tired of try Date.parse() and variations, someone knows the best to way to achieve this? A: That looks like a combination of milliseconds-since-The-Epoch and a timezone. ("The Epoch" is midnight, January 1st, 1970, UTC.) First, you get the parts: var str = "/Date(570333600000-0200)/"; var parts = str.match(/Date\((\d+)([\-+])(\d{2})(\d{2})/); // /^^^\/^^^^^\/^^^^^\/^^^^^\ // | | | \-- tz minutes // | | \--------- tz hours // | \---------------- plus or minus // \---------------------- raw milliseconds value Now, constructing a date from the milliseconds-since-The-Epoch part is easy: var msSinceTheEpoch = parseInt(parts[1], 10); var dt = new Date(msSinceTheEpoch); ...BUT we need to handle the timezone. JavaScript's Date object only has UTC, and "local time". So we're best off adding the timezone to the milliseconds value before we construct the date. I'm guessing -0200 means "UTC minus two hours", so assuming that's true and the - might be a + (for "plus X ..."), we get this: var hours = parseInt(parts[3], 10); var minutes = parseInt(parts[4], 10); var msOffset = ((hours * 60) + minutes) * 60 * 1000; msSinceTheEpoch += parts[2] === "-" ? -msOffset : msOffset; Now we can create the date: var dt = new Date(msSinceTheEpoch); Live Example | Source A: moment(new Date(1667145600000)).format("DD/MM/YYYY")
How can I convert string date number to datetime in JavaScript?
I have this date format that I want to show correctly formatted on the page: var dt = "/Date(570333600000-0200)/"; I'm tired of trying Date.parse() and variations; does someone know the best way to achieve this?
[ "That looks like a combination of milliseconds-since-The-Epoch and a timezone. (\"The Epoch\" is midnight, January 1st, 1970, UTC.)\nFirst, you get the parts:\nvar str = \"/Date(570333600000-0200)/\";\nvar parts = str.match(/Date\\((\\d+)([\\-+])(\\d{2})(\\d{2})/);\n// /^^^\\/^^^^^\\/^^^^^\\/^^^^^\\\n// | | | \\-- tz minutes\n// | | \\--------- tz hours\n// | \\---------------- plus or minus\n// \\---------------------- raw milliseconds value\n\nNow, constructing a date from the milliseconds-since-The-Epoch part is easy:\nvar msSinceTheEpoch = parseInt(parts[1], 10);\nvar dt = new Date(msSinceTheEpoch);\n\n...BUT we need to handle the timezone. JavaScript's Date object only has UTC, and \"local time\". So we're best off adding the timezone to the milliseconds value before we construct the date.\nI'm guessing -0200 means \"UTC minus two hours\", so assuming that's true and the - might be a + (for \"plus X ...\"), we get this:\nvar hours = parseInt(parts[3], 10);\nvar minutes = parseInt(parts[4], 10);\nvar msOffset = ((hours * 60) + minutes) * 60 * 1000;\nmsSinceTheEpoch += parts[2] === \"-\" ? -msOffset : msOffset;\n\nNow we can create the date:\nvar dt = new Date(msSinceTheEpoch);\n\nLive Example | Source\n", "moment(new Date(1667145600000)).format(\"DD/MM/YYYY\")\n" ]
[ 3, 0 ]
[]
[]
[ "date_format", "javascript" ]
stackoverflow_0022289474_date_format_javascript.txt
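Putting the accepted answer's steps together, here is a hedged TypeScript sketch of a complete parser for the "/Date(ms±hhmm)/" format; it assumes, as that answer does, that "-0200" means UTC minus two hours:

// Parse a WCF-style "/Date(570333600000-0200)/" string into a Date.
function fromWcfDate(raw: string): Date {
  const m = raw.match(/\/Date\((\d+)([+-])(\d{2})(\d{2})\)\//);
  if (!m) throw new Error(`not a WCF date: ${raw}`);
  let ms = parseInt(m[1], 10);
  const offsetMs = (parseInt(m[3], 10) * 60 + parseInt(m[4], 10)) * 60 * 1000;
  ms += m[2] === "-" ? -offsetMs : offsetMs;
  return new Date(ms);
}

console.log(fromWcfDate("/Date(570333600000-0200)/").toISOString());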
Q: Setting component height to 100% in react-native I can give the height element of style numeric values such as 40 but these are required to be integers. How can I make my component to have a height of 100%? A: check out the flexbox doc. in the stylesheet, use: flex:1, A: Grab the window height into a variable, then assign it as the height of the flex container you want to target : let ScreenHeight = Dimensions.get("window").height; In your styles : var Styles = StyleSheet.create({ ... height: ScreenHeight }); Note that you have to import Dimensions before using it: import { ... Dimensions } from 'react-native' A: flex:1 should work for almost any case. However, remember that for ScrollView, it's contentContainerStyle that controls the height of view: WRONG const styles = StyleSheet.create({ outer: { flex: 1, }, inner: { flex: 1 } }); <ScrollView style={styles.outer}> <View style={styles.inner}> </View> </ScrollView> CORRECT const styles = StyleSheet.create({ outer: { flex: 1, }, inner: { flex: 1 } }); <ScrollView contentContainerStyle={styles.outer}> <View style={styles.inner}> </View> </ScrollView> A: You can simply add height: '100%' into your item's stylesheet. it works for me A: most of the time should be using flexGrow: 1 or flex: 1 or you can use import { Dimensions } from 'react-native'; const { Height } = Dimensions.get('window'); styleSheet({ classA: { height: Height - 40, }, }); if none of them work for you try it: container: { position: 'absolute', top: 0, bottom: 0, left: 0, right: 0, } A: Try this: <View style={{flex: 1}}> <View style={{flex: 1, backgroundColor: 'skyblue'}} /> </View> You can have more help in react-native online documentation (https://facebook.github.io/react-native/docs/height-and-width). A: I was using a ScrollView, so none of these solutions solved my problem. Until I tried contentContainerStyle={{flexGrow: 1}} prop on my scrollview. Seems like without it -scrollviews will just always be as tall as their content. My solution was found here: React native, children of ScrollView wont fill full height A: <View style={styles.container}> </View> const styles = StyleSheet.create({ container: { flex: 1 } }) A: I looked at lots of these solutions, and none worked across React Native mobile and web. Tracking the screen height using Dimensions API is one way that does work, but this can be innacurate on some mobile devices. The best solution I found was to use this on your element: <View style={{ height:Platform.OS === 'web' ? '100vh' : '100%' }} /* ... your application */ </View> Please also note the caveat with ScrollView as mentioned here. A: I would say <View style={{ ...StyleSheet.absoluteFillObject, }}></View> In this way, you can fill the entire screen without caring about, flex, width, or height
Setting component height to 100% in react-native
I can give the height element of style numeric values such as 40 but these are required to be integers. How can I make my component to have a height of 100%?
[ "check out the flexbox doc. in the stylesheet, use:\nflex:1,\n\n", "Grab the window height into a variable, then assign it as the height of the flex container you want to target :\nlet ScreenHeight = Dimensions.get(\"window\").height;\n\nIn your styles :\nvar Styles = StyleSheet.create({ ... height: ScreenHeight });\n\nNote that you have to import Dimensions before using it: \nimport { ... Dimensions } from 'react-native'\n\n", "flex:1 should work for almost any case. However, remember that for ScrollView, it's contentContainerStyle that controls the height of view:\nWRONG\nconst styles = StyleSheet.create({\n outer: {\n flex: 1,\n },\n inner: {\n flex: 1\n }\n});\n\n<ScrollView style={styles.outer}>\n <View style={styles.inner}>\n </View>\n</ScrollView>\n\nCORRECT\nconst styles = StyleSheet.create({\n outer: {\n flex: 1,\n },\n inner: {\n flex: 1\n }\n});\n\n<ScrollView contentContainerStyle={styles.outer}>\n <View style={styles.inner}>\n </View>\n</ScrollView>\n\n\n", "You can simply add height: '100%' into your item's stylesheet.\nit works for me\n", "most of the time should be using flexGrow: 1 or flex: 1\nor you can use\nimport { Dimensions } from 'react-native';\nconst { Height } = Dimensions.get('window');\n\nstyleSheet({\n classA: {\n height: Height - 40,\n },\n});\n\nif none of them work for you try it:\ncontainer: {\n position: 'absolute',\n top: 0,\n bottom: 0,\n left: 0,\n right: 0,\n}\n\n", "Try this:\n <View style={{flex: 1}}>\n <View style={{flex: 1, backgroundColor: 'skyblue'}} />\n </View>\n\nYou can have more help in react-native online documentation (https://facebook.github.io/react-native/docs/height-and-width).\n", "I was using a ScrollView, so none of these solutions solved my problem. Until I tried contentContainerStyle={{flexGrow: 1}} prop on my scrollview. Seems like without it -scrollviews will just always be as tall as their content.\nMy solution was found here: React native, children of ScrollView wont fill full height\n", "<View style={styles.container}> \n</View>\n\nconst styles = StyleSheet.create({\n container: {\n flex: 1\n }\n})\n\n", "I looked at lots of these solutions, and none worked across React Native mobile and web.\nTracking the screen height using Dimensions API is one way that does work, but this can be innacurate on some mobile devices. The best solution I found was to use this on your element:\n <View style={{ height:Platform.OS === 'web' ? '100vh' : '100%' }}\n /* ... your application */\n </View>\n\nPlease also note the caveat with ScrollView as mentioned here.\n", "I would say\n<View\n style={{\n ...StyleSheet.absoluteFillObject,\n }}></View>\n\nIn this way, you can fill the entire screen without caring about, flex, width, or height\n" ]
[ 74, 58, 37, 34, 7, 4, 1, 0, 0, 0 ]
[]
[]
[ "react_native" ]
stackoverflow_0029326998_react_native.txt
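The Dimensions approach from the answers, sketched in TypeScript; "FullHeightBox" is a hypothetical component name:

import React from "react";
import { Dimensions, StyleSheet, View } from "react-native";

// Window height measured once at module load.
const screenHeight = Dimensions.get("window").height;

const styles = StyleSheet.create({
  box: { height: screenHeight }, // or simply { flex: 1 } inside a flex parent
});

export const FullHeightBox = () => <View style={styles.box} />;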
Q: How to convert OLE Automation Date to readable format using javascript? You must be aware of the .NET method "DateTime.FromOADate(double d)". I need to implement the same functionality in javascript. i.e given a double value like "40967.6424503935" it has to be converted to "2/28/2012 3:25:07 PM" Can someone please help me out? Thanks in advance! A: The automation date is the number of days since January 1, 1900 (with the year 1900 strangely being treated as a leap year). So a conversion is: var oaDate = 40967.6424503935; var date = new Date(); date.setTime((oaDate - 25569) * 24 * 3600 * 1000); alert(date); This solution creates a UTC date. When you display it, it'll be displayed in your local timezone. Depending on whether your date is a local date or a UTC date, this is correct or will require some additional timezone fiddling. A: As @donovan shares, it is actually from 12/30/1899 .net function documentation An OLE Automation date is implemented as a floating-point number whose integral component is the number of days before or after midnight, 30 December 1899, and whose fractional component represents the time on that day divided by 24. For example, midnight, 31 December 1899 is represented by 1.0; 6 A.M., 1 January 1900 is represented by 2.25; midnight, 29 December 1899 is represented by -1.0; and 6 A.M., 29 December 1899 is represented by -1.25. A: I think this is a simple solution as well (you can easily collapse it into a single line.): // distance from ole to epoch in milliseconds. // We need to invert sign later (bc it is before epoch!) const oleToEpoch = new Date("12/30/1899")).getTime() // distance from ole to target (our number const oleToTarget = 40967.6424503935*24*60*60*1000 // + because of sign change const resultInMs = oleToTarget + oleToEpoch // useIndia timezone console.log(new Date(resultInMs).toLocaleString("en-IN")) // distance from ole to epoc in milliseconds const oleToEpoch = new Date("12/30/1899").getTime() const oleToTarget = 40967.6424503935*24*60*60*1000 // + because of sign change const resultInMs = oleToTarget + oleToEpoch console.log(new Date(resultInMs).toLocaleString("en-IN")) I've made it into a little function: // distance from ole to epoc in milliseconds function oleToDate(ole, timezone) { const oleToEpoch = new Date("12/30/1899").getTime() const oleToTarget = ole * 24 * 60 * 60 * 1000 // + because of sign change const resultInMs = oleToTarget + oleToEpoch const result = new Date(resultInMs) if (timezone) { return result.toLocaleString(timezone) } return result }
How to convert OLE Automation Date to readable format using javascript?
You must be aware of the .NET method "DateTime.FromOADate(double d)". I need to implement the same functionality in javascript. i.e given a double value like "40967.6424503935" it has to be converted to "2/28/2012 3:25:07 PM" Can someone please help me out? Thanks in advance!
[ "The automation date is the number of days since January 1, 1900 (with the year 1900 strangely being treated as a leap year). So a conversion is:\nvar oaDate = 40967.6424503935;\nvar date = new Date();\ndate.setTime((oaDate - 25569) * 24 * 3600 * 1000);\nalert(date);\n\nThis solution creates a UTC date. When you display it, it'll be displayed in your local timezone. Depending on whether your date is a local date or a UTC date, this is correct or will require some additional timezone fiddling.\n", "As @donovan shares, it is actually from 12/30/1899\n.net function documentation\nAn OLE Automation date is implemented as a floating-point number whose integral component is the number of days before or after midnight, 30 December 1899, and whose fractional component represents the time on that day divided by 24. For example, midnight, 31 December 1899 is represented by 1.0; 6 A.M., 1 January 1900 is represented by 2.25; midnight, 29 December 1899 is represented by -1.0; and 6 A.M., 29 December 1899 is represented by -1.25.\n", "I think this is a simple solution as well (you can easily collapse it into a single line.):\n // distance from ole to epoch in milliseconds. \n // We need to invert sign later (bc it is before epoch!) \n const oleToEpoch = new Date(\"12/30/1899\")).getTime() \n // distance from ole to target (our number\n const oleToTarget = 40967.6424503935*24*60*60*1000\n // + because of sign change\n const resultInMs = oleToTarget + oleToEpoch \n // useIndia timezone\n console.log(new Date(resultInMs).toLocaleString(\"en-IN\"))\n\n\n\n // distance from ole to epoc in milliseconds\n const oleToEpoch = new Date(\"12/30/1899\").getTime() \n const oleToTarget = 40967.6424503935*24*60*60*1000\n // + because of sign change\n const resultInMs = oleToTarget + oleToEpoch \n console.log(new Date(resultInMs).toLocaleString(\"en-IN\"))\n\n\n\nI've made it into a little function:\n // distance from ole to epoc in milliseconds\n function oleToDate(ole, timezone) {\n const oleToEpoch = new Date(\"12/30/1899\").getTime()\n const oleToTarget = ole * 24 * 60 * 60 * 1000\n // + because of sign change\n const resultInMs = oleToTarget + oleToEpoch\n const result = new Date(resultInMs)\n if (timezone) {\n return result.toLocaleString(timezone)\n }\n return result\n }\n\n" ]
[ 13, 0, 0 ]
[]
[]
[ "date", "javascript", "ole" ]
stackoverflow_0010443325_date_javascript_ole.txt
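The same conversion as the accepted answer, written as a small reusable TypeScript function; the 25569-day constant is the gap between the OLE epoch (1899-12-30) and the Unix epoch (1970-01-01):

const MS_PER_DAY = 24 * 3600 * 1000;

// Hedged sketch of .NET's DateTime.FromOADate for JavaScript dates.
function fromOADate(oa: number): Date {
  return new Date((oa - 25569) * MS_PER_DAY);
}

console.log(fromOADate(40967.6424503935)); // ~2012-02-28, shown in UTC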
Q: Plotly dash callback order I have a dash app, where I query a database. Some of my queries are quick, some are slow. I would like to show the results of these queries in a table in a way, that first I would populate the table with the quickly fetchable columns, then add the resulting columns of the slower queries gradually. My problem is that the rendering callback of the aggragate data only runs after all the queries are done, whereas I would like to see it firing after each query callback result. Here is a minimal example, where I fetch some quick data, then based on the quick query I fetch a slower one. There is a rendering callback, which is supposed to run after each query callback, but in reality runs only once in the end. (For the sake of simplicity I did not add the table here, just a basic div. I run Dash within a larger django project using django_plotly_dash, but probably it is not key regarding the question here.) from django_plotly_dash import DjangoDash import time from dash import html from dash.dependencies import Output, Input app = DjangoDash("Minimal",) app.layout = html.Div( id='main-container', children = [ html.Div(id='user-id'), html.Div(id='quick-data'), html.Div(id='slow-data'), html.Div(id='aggregate-data'), ], ) @app.callback( Output('quick-data', 'children'), Input('user-id', 'children'), ) def query_quick_data(user_id,): print("--------- query quick data ----------") return "quick data" @app.callback( Output('slow-data', 'children'), Input('quick-data', 'children'), ) def query_slow_data(slow_data,): print("--------- query slow data ----------") time.sleep(3) return "slow data" @app.callback( Output('aggregate-data', 'children'), Input('quick-data', 'children'), Input('slow-data', 'children'), ) def render_data(quick_data,slow_data): print("--------- render aggregate data ----------") return quick_data + " | " + slow_data Upon opening the app, the terminal looks as follows, while I would expect therender aggregate data to run twice (once straight after the quick query): backend_1 | --------- query quick data ---------- backend_1 | --------- query slow data ---------- backend_1 | --------- render aggregate data ---------- My guess is that the query_slow_data callback is called first and the render_data is only fired after. So the question is, how could I force the render_data to be called first. A: Your guess is correct, this is related to how Dash actually works. At the initial call, dash-render will recursively look at all the callbacks in your app, and will order them in by the availability of the input (read more from here to prevent unnecessary re-rendering. This is really important when you have a big dashboard. Imagine a dashboard with many callbacks, in which many are related to each other, if dash-render start calling randomly without proper organization, some callbacks might run in a loop, breaking the application or in the best scenario, making the application very heavy. For your given example, assume that dash-render first saw render_data callback. When it recursively checks the callback inputs with other available callbacks, it will find out that query_quick_data callback's is ready to use (not output from another callback) so it will give it a priority. 
The query_slow_data has only one input, and that input already came from the executed callback, so everything is ready for it to be called, while your render_data callback is asking for two inputs, one of which is available and one of which isn't; therefore, dash-render will run query_slow_data as you guessed. I have been reading the Dash documents for a while and I have not run into any method by which you can customize the callback order to make a callback fire twice during the initial call.
Plotly dash callback order
I have a Dash app where I query a database. Some of my queries are quick, some are slow. I would like to show the results of these queries in a table in a way that first populates the table with the quickly fetchable columns, then adds the resulting columns of the slower queries gradually. My problem is that the rendering callback of the aggregate data only runs after all the queries are done, whereas I would like to see it firing after each query callback result. Here is a minimal example, where I fetch some quick data, then based on the quick query I fetch a slower one. There is a rendering callback, which is supposed to run after each query callback, but in reality runs only once at the end. (For the sake of simplicity I did not add the table here, just a basic div. I run Dash within a larger Django project using django_plotly_dash, but that is probably not key to the question here.) from django_plotly_dash import DjangoDash import time from dash import html from dash.dependencies import Output, Input app = DjangoDash("Minimal",) app.layout = html.Div( id='main-container', children = [ html.Div(id='user-id'), html.Div(id='quick-data'), html.Div(id='slow-data'), html.Div(id='aggregate-data'), ], ) @app.callback( Output('quick-data', 'children'), Input('user-id', 'children'), ) def query_quick_data(user_id,): print("--------- query quick data ----------") return "quick data" @app.callback( Output('slow-data', 'children'), Input('quick-data', 'children'), ) def query_slow_data(slow_data,): print("--------- query slow data ----------") time.sleep(3) return "slow data" @app.callback( Output('aggregate-data', 'children'), Input('quick-data', 'children'), Input('slow-data', 'children'), ) def render_data(quick_data,slow_data): print("--------- render aggregate data ----------") return quick_data + " | " + slow_data Upon opening the app, the terminal looks as follows, while I would expect the render aggregate data callback to run twice (once straight after the quick query): backend_1 | --------- query quick data ---------- backend_1 | --------- query slow data ---------- backend_1 | --------- render aggregate data ---------- My guess is that the query_slow_data callback is called first and the render_data is only fired after. So the question is, how could I force the render_data to be called first.
[ "Your guess is correct, this is related to how Dash actually works. At the initial call, dash-render will recursively look at all the callbacks in your app, and will order them in by the availability of the input (read more from here to prevent unnecessary re-rendering. This is really important when you have a big dashboard. Imagine a dashboard with many callbacks, in which many are related to each other, if dash-render start calling randomly without proper organization, some callbacks might run in a loop, breaking the application or in the best scenario, making the application very heavy.\nFor your given example, assume that dash-render first saw render_data callback. When it recursively checks the callback inputs with other available callbacks, it will find out that query_quick_data callback's is ready to use (not output from another callback) so it will give it a priority. The query_slow_data has only one input and that input already came from the executed callback, so everything is ready for it can be called, while your render-data callback is asking for two inputs, one is available and the other isn't therefore, the dash-render will run the query_slow_data as you guessed.\nI have been reading dash documents for a while and I have not run into any method in which you can custom the callback order to make a callback fire twice during the initial_call.\n" ]
[ 0 ]
[]
[]
[ "plotly_dash" ]
stackoverflow_0074517561_plotly_dash.txt
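The ordering the answer describes is essentially a dependency sort: a callback runs once all of its inputs are available. A TypeScript illustration of that idea (not Dash's actual implementation):

type Callback = { id: string; inputs: string[]; outputs: string[] };

// Run each callback only after every one of its inputs exists.
function order(callbacks: Callback[], available: Set<string>): string[] {
  const done: string[] = [];
  const pending = [...callbacks];
  while (pending.length > 0) {
    const i = pending.findIndex(cb => cb.inputs.every(x => available.has(x)));
    if (i < 0) break; // unresolved dependency
    const cb = pending.splice(i, 1)[0];
    cb.outputs.forEach(o => available.add(o));
    done.push(cb.id);
  }
  return done;
}

console.log(order(
  [
    { id: "render_data", inputs: ["quick", "slow"], outputs: ["agg"] },
    { id: "query_quick_data", inputs: ["user"], outputs: ["quick"] },
    { id: "query_slow_data", inputs: ["quick"], outputs: ["slow"] },
  ],
  new Set(["user"]),
)); // ["query_quick_data", "query_slow_data", "render_data"]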
Q: List of all suffixes of a list on Prolog I need a predicate that gives me all the suffixes of a list on Prolog. For example: ?- suffixes([1,2,3], X). X = [[1, 2, 3], [2, 3], [3], []]. I tried this and it works, but I can't use the findall function to get all of them in a single list. suffix(Xs, Ys) :- append(_,Ys,Xs). suffixes(Xs, Ss) :- findall(S, suffix(Xs,S), Ss). A: You may roll your own implementation of a list walking that yields every suffix: suffixes([], [[]]). suffixes([Item|Tail], [[Item|Tail]|Suffixes]):- suffixes(Tail, Suffixes). Sample run: ?- suffixes([1,2,3], Suffixes). Suffixes = [[1, 2, 3], [2, 3], [3], []]. A: In page 5 of Representation Sharing for Prolog: all_tails([],[[]]). all_tails(L,[L|S]) :- L = [_|R], all_tails(R,S).
List of all suffixes of a list on Prolog
I need a predicate that gives me all the suffixes of a list on Prolog. For example: ?- suffixes([1,2,3], X). X = [[1, 2, 3], [2, 3], [3], []]. I tried this and it works, but I can't use the findall function to get all of them in a single list. suffix(Xs, Ys) :- append(_,Ys,Xs). suffixes(Xs, Ss) :- findall(S, suffix(Xs,S), Ss).
[ "You may roll your own implementation of a list walking that yields every suffix:\nsuffixes([], [[]]).\nsuffixes([Item|Tail], [[Item|Tail]|Suffixes]):-\n suffixes(Tail, Suffixes).\n\nSample run:\n?- suffixes([1,2,3], Suffixes).\nSuffixes = [[1, 2, 3], [2, 3], [3], []].\n\n", "In page 5 of Representation Sharing for Prolog:\nall_tails([],[[]]).\nall_tails(L,[L|S]) :- L = [_|R], all_tails(R,S).\n\n" ]
[ 2, 2 ]
[]
[]
[ "prolog" ]
stackoverflow_0074657502_prolog.txt
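The accepted clause pair maps directly onto a recursive function. The same logic sketched in TypeScript: the suffixes of a list are the list itself followed by the suffixes of its tail, bottoming out at [[]]:

function suffixes<T>(xs: T[]): T[][] {
  if (xs.length === 0) return [[]];      // suffixes([], [[]]).
  return [xs, ...suffixes(xs.slice(1))]; // [L | suffixes of tail]
}

console.log(suffixes([1, 2, 3])); // [[1,2,3],[2,3],[3],[]]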
Q: Scrolling to a element inside a scrollable DIV with pure Javascript I have a div that has overflow: scroll and I have some elements inside the DIV that are hidden. On click of a button on the page, I want to make the DIV scroll to a particular element inside the DIV. How do I achieve this? A: You need to read the offsetTop property of the div you need to scroll to and then set that offset to the scrollTop property of the container div. Bind this function the event you want to : function scrollToElementD(){ var topPos = document.getElementById('inner-element').offsetTop; document.getElementById('container').scrollTop = topPos-10; } div { height: 200px; width: 100px; border: 1px solid black; overflow: auto; } p { height: 80px; background: blue; } #inner-element { background: red; } <div id="container"><p>A</p><p>B</p><p>C</p><p id="inner-element">D</p><p>E</p><p>F</p></div> <button onclick="scrollToElementD()">SCROLL TO D</button> function scrollToElementD(){ var topPos = document.getElementById('inner-element').offsetTop; document.getElementById('container').scrollTop = topPos-10; } Fiddle : http://jsfiddle.net/p3kar5bb/322/ (courtesy @rofrischmann) A: Just improved it by setting a smooth auto scrolling inside a list contained in a div https://codepen.io/rebosante/pen/eENYBv var topPos = elem.offsetTop document.getElementById('mybutton').onclick = function () { console.log('click') scrollTo(document.getElementById('container'), topPos-10, 600); } function scrollTo(element, to, duration) { var start = element.scrollTop, change = to - start, currentTime = 0, increment = 20; var animateScroll = function(){ currentTime += increment; var val = Math.easeInOutQuad(currentTime, start, change, duration); element.scrollTop = val; if(currentTime < duration) { setTimeout(animateScroll, increment); } }; animateScroll(); } //t = current time //b = start value //c = change in value //d = duration Math.easeInOutQuad = function (t, b, c, d) { t /= d/2; if (t < 1) return c/2*t*t + b; t--; return -c/2 * (t*(t-2) - 1) + b; }; I guess it may help someone :) A: Here's a simple pure JavaScript solution that works for a target Number (value for scrollTop), target DOM element, or some special String cases: /** * target - target to scroll to (DOM element, scrollTop Number, 'top', or 'bottom' * containerEl - DOM element for the container with scrollbars */ var scrollToTarget = function(target, containerEl) { // Moved up here for readability: var isElement = target && target.nodeType === 1, isNumber = Object.prototype.toString.call(target) === '[object Number]'; if (isElement) { containerEl.scrollTop = target.offsetTop; } else if (isNumber) { containerEl.scrollTop = target; } else if (target === 'bottom') { containerEl.scrollTop = containerEl.scrollHeight - containerEl.offsetHeight; } else if (target === 'top') { containerEl.scrollTop = 0; } }; And here are some examples of usage: // Scroll to the top var scrollableDiv = document.getElementById('scrollable_div'); scrollToTarget('top', scrollableDiv); or // Scroll to 200px from the top var scrollableDiv = document.getElementById('scrollable_div'); scrollToTarget(200, scrollableDiv); or // Scroll to targetElement var scrollableDiv = document.getElementById('scrollable_div'); var targetElement= document.getElementById('target_element'); scrollToTarget(targetElement, scrollableDiv); A: You need a ref to the div you wish to scroll to inner-div and a ref to the scrollable div scrollable-div: const scrollToDiv = () => { const innerDivPos = 
document.getElementById('inner-div').offsetTop document .getElementById('scrollable-div') .scrollTo({ top: innerDivPos, behavior: 'smooth' }) } <div id="scrollable-div" style="height:100px; overflow-y:auto;"> <button type="button" style="margin-bottom:500px" onclick="scrollToDiv()">Scroll To Div</button> <div id="inner-div">Inner Div</div> </div>
Scrolling to an element inside a scrollable DIV with pure JavaScript
I have a div that has overflow: scroll and I have some elements inside the DIV that are hidden. On click of a button on the page, I want to make the DIV scroll to a particular element inside the DIV. How do I achieve this?
[ "You need to read the offsetTop property of the div you need to scroll to and then set that offset to the scrollTop property of the container div. Bind this function the event you want to : \n\n\nfunction scrollToElementD(){\r\n var topPos = document.getElementById('inner-element').offsetTop;\r\n document.getElementById('container').scrollTop = topPos-10;\r\n}\ndiv {\r\n height: 200px;\r\n width: 100px;\r\n border: 1px solid black;\r\n overflow: auto;\r\n}\r\n\r\np {\r\n height: 80px;\r\n background: blue;\r\n}\r\n#inner-element {\r\n background: red;\r\n}\n<div id=\"container\"><p>A</p><p>B</p><p>C</p><p id=\"inner-element\">D</p><p>E</p><p>F</p></div>\r\n<button onclick=\"scrollToElementD()\">SCROLL TO D</button>\n\n\n\nfunction scrollToElementD(){\n var topPos = document.getElementById('inner-element').offsetTop;\n document.getElementById('container').scrollTop = topPos-10;\n}\n\nFiddle : http://jsfiddle.net/p3kar5bb/322/ (courtesy @rofrischmann)\n", "Just improved it by setting a smooth auto scrolling inside a list contained in a div\nhttps://codepen.io/rebosante/pen/eENYBv\nvar topPos = elem.offsetTop\n\ndocument.getElementById('mybutton').onclick = function () {\n console.log('click')\n scrollTo(document.getElementById('container'), topPos-10, 600); \n}\n\nfunction scrollTo(element, to, duration) {\n var start = element.scrollTop,\n change = to - start,\n currentTime = 0,\n increment = 20;\n\n var animateScroll = function(){ \n currentTime += increment;\n var val = Math.easeInOutQuad(currentTime, start, change, duration);\n element.scrollTop = val;\n if(currentTime < duration) {\n setTimeout(animateScroll, increment);\n }\n };\n animateScroll();\n}\n\n//t = current time\n//b = start value\n//c = change in value\n//d = duration\nMath.easeInOutQuad = function (t, b, c, d) {\n t /= d/2;\n if (t < 1) return c/2*t*t + b;\n t--;\n return -c/2 * (t*(t-2) - 1) + b;\n};\n\nI guess it may help someone :)\n", "Here's a simple pure JavaScript solution that works for a target Number (value for scrollTop), target DOM element, or some special String cases:\n/**\n * target - target to scroll to (DOM element, scrollTop Number, 'top', or 'bottom'\n * containerEl - DOM element for the container with scrollbars\n */\nvar scrollToTarget = function(target, containerEl) {\n // Moved up here for readability:\n var isElement = target && target.nodeType === 1,\n isNumber = Object.prototype.toString.call(target) === '[object Number]';\n\n if (isElement) {\n containerEl.scrollTop = target.offsetTop;\n } else if (isNumber) {\n containerEl.scrollTop = target;\n } else if (target === 'bottom') {\n containerEl.scrollTop = containerEl.scrollHeight - containerEl.offsetHeight;\n } else if (target === 'top') {\n containerEl.scrollTop = 0;\n }\n};\n\nAnd here are some examples of usage:\n// Scroll to the top\nvar scrollableDiv = document.getElementById('scrollable_div');\nscrollToTarget('top', scrollableDiv);\n\nor\n// Scroll to 200px from the top\nvar scrollableDiv = document.getElementById('scrollable_div');\nscrollToTarget(200, scrollableDiv);\n\nor\n// Scroll to targetElement\nvar scrollableDiv = document.getElementById('scrollable_div');\nvar targetElement= document.getElementById('target_element');\nscrollToTarget(targetElement, scrollableDiv);\n\n", "You need a ref to the div you wish to scroll to inner-div and a ref to the scrollable div scrollable-div:\n\n\nconst scrollToDiv = () => {\n const innerDivPos = document.getElementById('inner-div').offsetTop\n\n document\n .getElementById('scrollable-div')\n 
.scrollTo({ top: innerDivPos, behavior: 'smooth' })\n}\n<div id=\"scrollable-div\" style=\"height:100px; overflow-y:auto;\">\n <button type=\"button\" style=\"margin-bottom:500px\" onclick=\"scrollToDiv()\">Scroll To Div</button>\n <div id=\"inner-div\">Inner Div</div>\n</div>\n\n\n\n" ]
[ 75, 10, 9, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0027980084_javascript.txt
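The offsetTop/scrollTop technique from the answers, condensed into one hedged TypeScript helper; the two ids are hypothetical, and smooth scrolling needs a reasonably modern browser:

function scrollToChild(containerId: string, targetId: string): void {
  const container = document.getElementById(containerId) as HTMLElement;
  const target = document.getElementById(targetId) as HTMLElement;
  // Scroll the container so the child's top edge reaches the container's top.
  container.scrollTo({ top: target.offsetTop, behavior: "smooth" });
}

scrollToChild("container", "inner-element");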
Q: While loop doesn't exit when eof is detected I'm having troubles with the eof sequence at the while loop. Basically I have to read a txt file (sequence) and each character has a different character that will be printed on an exit.txt file. But my while loop doesn't recognize the eof. Here's my code. program LaboratorioPascal; uses crt; var sec, sal: Textfile; v: char; por_especial, cont_palabra, cont_caracter, cont_especial: integer; vocales2: set of char; pares: set of char; impares: set of char; consonantes: set of char; consonantes2: set of char; procedure numeros(var x: char); begin case x of '0': Write(sal, '0'); '1': Write(sal, '1'); '2': Write(sal, '4'); '3': begin Write(sal, '2'); Write(sal, '7'); end; '4': Write(sal, '8'); '5': begin Write(sal, '1'); Write(sal, '2'); Write(sal, '5'); end; '6': begin Write(sal, '1'); Write(sal, '2'); end; '7': begin Write(sal, '3'); Write(sal, '4'); Write(sal, '3'); end; '8': begin Write(sal, '1'); Write(sal, '6'); end; '9': begin Write(sal, '7'); Write(sal, '2'); Write(sal, '9'); end; else Exit; end; end; function vocales(var s: char): char; begin case s of 'e': vocales := 'u'; 'a': vocales := 'o'; 'i': vocales := 'a'; 'o': vocales := 'e'; else vocales := 'i'; end; end; begin assign(sec, 'input.txt'); // Le asignamos un archivo del cual lea reset(sec); // arrancamos la secuencia read(sec, v); // leemos la secuencia. avz(sec, v) assign(sal, 'salida.txt'); rewrite(sal); vocales2 := ['a', 'e', 'i', 'o', 'u']; pares := ['0', '2', '4', '6', '8']; impares := ['1', '3', '5', '7', '9']; consonantes := ['b', 'c', 'd', 'f', 'g', 'h', 'j','k','l','m', 'n']; consonantes2 := ['p', 'q', 'r', 's', 't', 'v', 'w', 'x', 'y', 'z']; por_especial := 0; cont_palabra := 0; cont_caracter := 0; cont_especial := 0; writeln('El objetivo de este programa es cifrar un mensaje para favorecer a la inteligencia Rusa.'); while not eof(sec) do begin while v = ' ' do begin write(sal, ' '); read(sec, v); end; cont_palabra := cont_palabra + 1; while v <> ' ' do begin if (v in consonantes) or (v in consonantes2) then begin write(sal, '1'); end else begin if v in vocales2 then begin Write(sal, vocales(v)); end else begin if v in pares then; begin numeros(v); end; begin if v in impares then begin numeros(v); end else begin cont_especial := cont_especial + 1; Write(sal, '@'); end; end; end; end; read(sec, v); end; end; write(cont_palabra, ' se crifraon con [Exito]'); close(sec); close(sal); end. But the result I have in the exit file (salida.txt) is 1o1ao i1o 1u1 i1 1e1111ie 1iu 1u 1e1ae o i1o 11a11u1o@@@ 1a1@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ I've done my research about the eof topic, but I can't find anything about pascal. And if I try to put an if eof then Exit; end; inside the while loop, and it just read one character from the input.txt file. A: The problem is that you are in the inner loop ("while v <> ' ' do") when you come to the end of your input file. If the last character in the input file is a space, you jump out of the inner loop and out of the outer loop, because you reached eof. But if it isn't, you stay in the inner loop, and keep reading beyond the eof, until you encounter a space or a problem. You can change the inner loop's "while v <> ' ' do" to "while (v <> ' ') and (not eof(sec)) do". Or make it one loop and handle the space in an if statement.
While loop doesn't exit when eof is detected
I'm having troubles with the eof sequence at the while loop. Basically I have to read a txt file (sequence) and each character has a different character that will be printed on an exit.txt file. But my while loop doesn't recognize the eof. Here's my code. program LaboratorioPascal; uses crt; var sec, sal: Textfile; v: char; por_especial, cont_palabra, cont_caracter, cont_especial: integer; vocales2: set of char; pares: set of char; impares: set of char; consonantes: set of char; consonantes2: set of char; procedure numeros(var x: char); begin case x of '0': Write(sal, '0'); '1': Write(sal, '1'); '2': Write(sal, '4'); '3': begin Write(sal, '2'); Write(sal, '7'); end; '4': Write(sal, '8'); '5': begin Write(sal, '1'); Write(sal, '2'); Write(sal, '5'); end; '6': begin Write(sal, '1'); Write(sal, '2'); end; '7': begin Write(sal, '3'); Write(sal, '4'); Write(sal, '3'); end; '8': begin Write(sal, '1'); Write(sal, '6'); end; '9': begin Write(sal, '7'); Write(sal, '2'); Write(sal, '9'); end; else Exit; end; end; function vocales(var s: char): char; begin case s of 'e': vocales := 'u'; 'a': vocales := 'o'; 'i': vocales := 'a'; 'o': vocales := 'e'; else vocales := 'i'; end; end; begin assign(sec, 'input.txt'); // Le asignamos un archivo del cual lea reset(sec); // arrancamos la secuencia read(sec, v); // leemos la secuencia. avz(sec, v) assign(sal, 'salida.txt'); rewrite(sal); vocales2 := ['a', 'e', 'i', 'o', 'u']; pares := ['0', '2', '4', '6', '8']; impares := ['1', '3', '5', '7', '9']; consonantes := ['b', 'c', 'd', 'f', 'g', 'h', 'j','k','l','m', 'n']; consonantes2 := ['p', 'q', 'r', 's', 't', 'v', 'w', 'x', 'y', 'z']; por_especial := 0; cont_palabra := 0; cont_caracter := 0; cont_especial := 0; writeln('El objetivo de este programa es cifrar un mensaje para favorecer a la inteligencia Rusa.'); while not eof(sec) do begin while v = ' ' do begin write(sal, ' '); read(sec, v); end; cont_palabra := cont_palabra + 1; while v <> ' ' do begin if (v in consonantes) or (v in consonantes2) then begin write(sal, '1'); end else begin if v in vocales2 then begin Write(sal, vocales(v)); end else begin if v in pares then; begin numeros(v); end; begin if v in impares then begin numeros(v); end else begin cont_especial := cont_especial + 1; Write(sal, '@'); end; end; end; end; read(sec, v); end; end; write(cont_palabra, ' se crifraon con [Exito]'); close(sec); close(sal); end. But the result I have in the exit file (salida.txt) is 1o1ao i1o 1u1 i1 1e1111ie 1iu 1u 1e1ae o i1o 11a11u1o@@@ 1a1@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ I've done my research about the eof topic, but I can't find anything about pascal. And if I try to put an if eof then Exit; end; inside the while loop, and it just read one character from the input.txt file.
[ "The problem is that you are in the inner loop (\"while v <> ' ' do\") when you come to the end of your input file.\nIf the last character in the input file is a space, you jump out of the inner loop and out of the outer loop, because you reached eof.\nBut if it isn't, you stay in the inner loop, and keep reading beyond the eof, until you encounter a space or a problem.\nYou can change the inner loop's\n\"while v <> ' ' do\"\nto\n\"while (v <> ' ') and (not eof(sec)) do\".\nOr make it one loop and handle the space in an if statement.\n" ]
[ 0 ]
[]
[]
[ "eof", "pascal", "while_loop" ]
stackoverflow_0073179049_eof_pascal_while_loop.txt
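The fix generalizes beyond Pascal: every inner loop over a stream needs its own end-of-input guard. A language-neutral sketch of the corrected word-scanning loop, written over a string in TypeScript:

function countWords(text: string): number {
  let i = 0, words = 0;
  while (i < text.length) {
    while (i < text.length && text[i] === " ") i++; // skip spaces
    if (i >= text.length) break;
    words++;
    while (i < text.length && text[i] !== " ") i++; // the added eof guard
  }
  return words;
}

console.log(countWords("hola mundo ")); // 2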
Q: C# Component Model RegularExpression validator rejecting valid regex data This REGEX [Required] [RegularExpression("^[VB]", ErrorMessage = "The barcode must start with B or V")] public string Barcode { get; set; } fails with the following: "Barcode": { "rawValue": "B6761126229752008155", "attemptedValue": "B6761126229752008155", "errors": [ { "exception": null, "errorMessage": "The barcode must start with B or V" } ], "validationState": 1, "isContainerNode": false, "children": null }, even though the values are shown to be correct..... The regex passes in Regex101.com I'm not sure where to go with this. Any ideas? If I remove the validator the code runs through to my controller with the correct barcode value. A: You can use [Required] [RegularExpression("^[VB].*", ErrorMessage = "The barcode must start with B or V")] public string Barcode { get; set; } By adding .*, you allow the whole string to match the regex pattern. Basically, the ^ is redundant in the current context since the pattern used in RegularExpression attribute just had to match the whole string. A: You are only matching the first word not the entire 'barcode'. So you need to add something to match the rest of the 'barcode'. One form is to add \d+ at the end. It tells to you to match one or more digits after the 'V' or 'B' that is required. Full regex could be: "^[VB]\d+" That will match the whole 'barcode' and solve your problem.
C# Component Model RegularExpression validator rejecting valid regex data
This REGEX [Required] [RegularExpression("^[VB]", ErrorMessage = "The barcode must start with B or V")] public string Barcode { get; set; } fails with the following: "Barcode": { "rawValue": "B6761126229752008155", "attemptedValue": "B6761126229752008155", "errors": [ { "exception": null, "errorMessage": "The barcode must start with B or V" } ], "validationState": 1, "isContainerNode": false, "children": null }, even though the values are shown to be correct..... The regex passes in Regex101.com I'm not sure where to go with this. Any ideas? If I remove the validator the code runs through to my controller with the correct barcode value.
[ "You can use\n[Required]\n[RegularExpression(\"^[VB].*\", ErrorMessage = \"The barcode must start with B or V\")]\npublic string Barcode { get; set; }\n\nBy adding .*, you allow the whole string to match the regex pattern. Basically, the ^ is redundant in the current context since the pattern used in RegularExpression attribute just had to match the whole string.\n", "You are only matching the first word not the entire 'barcode'. So you need to add something to match the rest of the 'barcode'.\nOne form is to add \\d+ at the end. It tells to you to match one or more digits after the 'V' or 'B' that is required.\nFull regex could be: \"^[VB]\\d+\"\nThat will match the whole 'barcode' and solve your problem.\n" ]
[ 1, 1 ]
[]
[]
[ ".net", "regex", "system.componentmodel" ]
stackoverflow_0074658191_.net_regex_system.componentmodel.txt
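The point of both answers is that the .NET validation attribute effectively requires the pattern to match the entire string, while tools like Regex101 report partial matches. Emulating full-match semantics in TypeScript makes the difference visible:

const fullMatch = (re: string, s: string) => new RegExp(`^(?:${re})$`).test(s);

const barcode = "B6761126229752008155";
console.log(fullMatch("[VB]", barcode));     // false - matches only one character
console.log(fullMatch("[VB].*", barcode));   // true
console.log(fullMatch("[VB]\\d+", barcode)); // true - digits required after B/V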
Q: Flutter_contacts update contacts function I'm quite new to flutter/dart and working in Flutterflow, where trying to create a custom function that takes as input data about the contact in the app (name, surname, cell number, email, photo) and then updates/creates this contact in the user's phone using the next logic: if a user has a contact with the same name (displayName), then it updates this contact if a user has a contact with the same phone number, then it updates this contact if there is no match based on two parameters before, then new contact is created So far, I have only managed to create a function that checks displayName match, updates contact if there is a match, and if there is no match, it creates a new contact. But I don't know how to do the cell number match/update part (it doesn't work in my code). The way I'm doing the search for the right contact to update is through the search of contact ID using lists of contacts' names and contacts' IDs to index the right ID when I found the necessary name. It's probably a very ugly and inefficient way to do it, but I don't know the other way. Will be super happy if someone could give advice on the contact search/update part and overall code optimization (cause I think my approach is long and inefficient). Thank you! My code is below: import 'package:flutter_contacts/flutter_contacts.dart'; import 'dart:typed_data'; import 'package:flutter/services.dart'; Future updateContactOnPhone( String name, String surname, String cellnumber, String email, String photo, ) async { String searchingParameterName = name + " " + surname; String searchingParameterCellphone = cellnumber; List<String> phoneContactsIDs = []; List<String> phoneContactsNames = []; List<String> phoneContactsNumbers = []; Uint8List bytes = (await NetworkAssetBundle(Uri.parse(photo)).load(photo)) .buffer .asUint8List(); if (await FlutterContacts.requestPermission()) { List<dynamic> contacts = await FlutterContacts.getContacts(); contacts.forEach((contact) {phoneContactsIDs.add(contact.id);}); contacts.forEach((contact) {phoneContactsNames.add(contact.displayName);}); contacts.forEach((contact) {if (contact.phones != null) { phoneContactsNumbers.add(contact.phones.first);} {phoneContactsNumbers.add("");}}); if (phoneContactsNames.contains(searchingParameterName)) { int index = phoneContactsNames.indexOf(searchingParameterName); String contactID = phoneContactsIDs.elementAt(index); dynamic contact = await FlutterContacts.getContact(contactID); contact.name.first = name; contact.name.last = surname; contact.phones = [Phone(cellnumber)]; contact.emails = [Email(email)]; await contact.update(); } else if (phoneContactsNumbers.contains(searchingParameterCellphone)) { int index = phoneContactsNumbers.indexOf(searchingParameterCellphone); String contactID = phoneContactsIDs.elementAt(index); dynamic contact = await FlutterContacts.getContact(contactID); contact.name.first = name; contact.name.last = surname; contact.phones = [Phone(cellnumber)]; contact.emails = [Email(email)]; await contact.update(); } else { final newContact = Contact() ..name.first = name ..name.last = surname ..phones = [Phone(cellnumber)] ..emails = [Email(email)] ..photo = bytes; await newContact.insert(); }}} I tried various combinations of the code and searched for similar examples on forums, but nothing helped. A: Here is an updated version of your code that should implement the functionality you described. I have made a few changes to your code to make it more efficient and easier to read. 
First, I removed the phoneContactsNumbers list and added a phoneContacts list, which contains Contact objects instead of strings. This allows us to access the contact's displayName and phones directly from the list instead of having to search for them in separate lists. Next, I added a matchContact() function that searches the phoneContacts list for a contact with either the specified displayName or phoneNumber. If a match is found, the function returns the matching contact. Otherwise, it returns null. Finally, I updated the updateContactOnPhone() function to use the matchContact() function to find the contact to update, and to use the update() method on the Contact object to update the contact's fields. I also added a try-catch block to handle any exceptions that may be thrown when interacting with the contacts API. Here is the updated code: import 'package:flutter_contacts/flutter_contacts.dart'; import 'dart:typed_data'; import 'package:flutter/services.dart'; Future updateContactOnPhone( String name, String surname, String cellnumber, String email, String photo, ) async { String displayName = name + " " + surname; String phoneNumber = cellnumber; List<Contact> phoneContacts = []; Uint8List bytes = (await NetworkAssetBundle(Uri.parse(photo)).load(photo)) .buffer .asUint8List(); if (await FlutterContacts.requestPermission()) { List<dynamic> contacts = await FlutterContacts.getContacts(); phoneContacts = contacts.map((c) => Contact.fromMap(c)).toList(); try { Contact contactToUpdate = matchContact(displayName, phoneNumber, phoneContacts); if (contactToUpdate == null) { final newContact = Contact() ..name.first = name ..name.last = surname ..phones = [Phone(cellnumber)] ..emails = [Email(email)] ..photo = bytes; await newContact.insert(); } else { contactToUpdate.name.first = name; contactToUpdate.name.last = surname; contactToUpdate.phones = [Phone(cellnumber)]; contactToUpdate.emails = [Email(email)]; await contactToUpdate.update(); } } catch (e) { print(e); } } } Contact matchContact(String displayName, String phoneNumber, List<Contact> contacts) { return contacts.firstWhere((c) => c.displayName == displayName || c.phones.contains(phoneNumber), orElse: () => null); } I hope this helps! Let me know if you have any other questions.
Flutter_contacts update contacts function
I'm quite new to flutter/dart and working in Flutterflow, where trying to create a custom function that takes as input data about the contact in the app (name, surname, cell number, email, photo) and then updates/creates this contact in the user's phone using the next logic: if a user has a contact with the same name (displayName), then it updates this contact if a user has a contact with the same phone number, then it updates this contact if there is no match based on two parameters before, then new contact is created So far, I have only managed to create a function that checks displayName match, updates contact if there is a match, and if there is no match, it creates a new contact. But I don't know how to do the cell number match/update part (it doesn't work in my code). The way I'm doing the search for the right contact to update is through the search of contact ID using lists of contacts' names and contacts' IDs to index the right ID when I found the necessary name. It's probably a very ugly and inefficient way to do it, but I don't know the other way. Will be super happy if someone could give advice on the contact search/update part and overall code optimization (cause I think my approach is long and inefficient). Thank you! My code is below: import 'package:flutter_contacts/flutter_contacts.dart'; import 'dart:typed_data'; import 'package:flutter/services.dart'; Future updateContactOnPhone( String name, String surname, String cellnumber, String email, String photo, ) async { String searchingParameterName = name + " " + surname; String searchingParameterCellphone = cellnumber; List<String> phoneContactsIDs = []; List<String> phoneContactsNames = []; List<String> phoneContactsNumbers = []; Uint8List bytes = (await NetworkAssetBundle(Uri.parse(photo)).load(photo)) .buffer .asUint8List(); if (await FlutterContacts.requestPermission()) { List<dynamic> contacts = await FlutterContacts.getContacts(); contacts.forEach((contact) {phoneContactsIDs.add(contact.id);}); contacts.forEach((contact) {phoneContactsNames.add(contact.displayName);}); contacts.forEach((contact) {if (contact.phones != null) { phoneContactsNumbers.add(contact.phones.first);} {phoneContactsNumbers.add("");}}); if (phoneContactsNames.contains(searchingParameterName)) { int index = phoneContactsNames.indexOf(searchingParameterName); String contactID = phoneContactsIDs.elementAt(index); dynamic contact = await FlutterContacts.getContact(contactID); contact.name.first = name; contact.name.last = surname; contact.phones = [Phone(cellnumber)]; contact.emails = [Email(email)]; await contact.update(); } else if (phoneContactsNumbers.contains(searchingParameterCellphone)) { int index = phoneContactsNumbers.indexOf(searchingParameterCellphone); String contactID = phoneContactsIDs.elementAt(index); dynamic contact = await FlutterContacts.getContact(contactID); contact.name.first = name; contact.name.last = surname; contact.phones = [Phone(cellnumber)]; contact.emails = [Email(email)]; await contact.update(); } else { final newContact = Contact() ..name.first = name ..name.last = surname ..phones = [Phone(cellnumber)] ..emails = [Email(email)] ..photo = bytes; await newContact.insert(); }}} I tried various combinations of the code and searched for similar examples on forums, but nothing helped.
[ "Here is an updated version of your code that should implement the functionality you described. I have made a few changes to your code to make it more efficient and easier to read.\nFirst, I removed the phoneContactsNumbers list and added a phoneContacts list, which contains Contact objects instead of strings. This allows us to access the contact's displayName and phones directly from the list instead of having to search for them in separate lists.\nNext, I added a matchContact() function that searches the phoneContacts list for a contact with either the specified displayName or phoneNumber. If a match is found, the function returns the matching contact. Otherwise, it returns null.\nFinally, I updated the updateContactOnPhone() function to use the matchContact() function to find the contact to update, and to use the update() method on the Contact object to update the contact's fields. I also added a try-catch block to handle any exceptions that may be thrown when interacting with the contacts API.\nHere is the updated code:\nimport 'package:flutter_contacts/flutter_contacts.dart';\nimport 'dart:typed_data';\nimport 'package:flutter/services.dart';\n\nFuture updateContactOnPhone(\n String name,\n String surname,\n String cellnumber,\n String email,\n String photo,\n) async {\n String displayName = name + \" \" + surname;\n String phoneNumber = cellnumber;\n List<Contact> phoneContacts = [];\n Uint8List bytes = (await NetworkAssetBundle(Uri.parse(photo)).load(photo))\n .buffer\n .asUint8List();\n\n if (await FlutterContacts.requestPermission()) {\n List<dynamic> contacts = await FlutterContacts.getContacts();\n phoneContacts = contacts.map((c) => Contact.fromMap(c)).toList();\n\n try {\n Contact contactToUpdate = matchContact(displayName, phoneNumber, phoneContacts);\n\n if (contactToUpdate == null) {\n final newContact = Contact()\n ..name.first = name\n ..name.last = surname\n ..phones = [Phone(cellnumber)]\n ..emails = [Email(email)]\n ..photo = bytes;\n await newContact.insert();\n } else {\n contactToUpdate.name.first = name;\n contactToUpdate.name.last = surname;\n contactToUpdate.phones = [Phone(cellnumber)];\n contactToUpdate.emails = [Email(email)];\n await contactToUpdate.update();\n }\n } catch (e) {\n print(e);\n }\n }\n}\n\nContact matchContact(String displayName, String phoneNumber, List<Contact> contacts) {\n return contacts.firstWhere((c) => c.displayName == displayName || c.phones.contains(phoneNumber), orElse: () => null);\n}\n\nI hope this helps! Let me know if you have any other questions.\n" ]
[ 0 ]
[]
[]
[ "contacts", "dart", "flutter" ]
stackoverflow_0074657704_contacts_dart_flutter.txt
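The match-then-update rule from the question, reduced to its core in TypeScript with a deliberately simplified Contact shape (the real flutter_contacts model differs):

type Contact = { id: string; displayName: string; phones: string[] };

// First match by display name or by phone number wins; undefined means insert.
function matchContact(
  contacts: Contact[],
  displayName: string,
  phoneNumber: string,
): Contact | undefined {
  return contacts.find(
    c => c.displayName === displayName || c.phones.includes(phoneNumber),
  );
}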
Q: What are Foursquare place price options? I am integrating the Foursquare Places API. They provide the place price as a numeric value (1~4). This is their official doc. I want to know the exact price value. For example, does 1 (Cheap) mean $1 ~ $100? They have no explanation for this. Where can I get this info? Thanks A: The price value is a relative allocation. The algorithm scraped menu prices and does a comparison of nearby places (simplest way to explain it). So there isn't an actual dollar amount assigned to any tier.
What are Foursquare place price options?
I am integrating the Foursquare Places API. They provide the place price as a numeric value (1~4). This is their official doc. I want to know the exact price value. For example, does 1 (Cheap) mean $1 ~ $100? They have no explanation for this. Where can I get this info? Thanks
[ "The price value is a relative allocation. The algorithm scraped menu prices and does a comparison of nearby places (simplest way to explain it). So there isn't an actual dollar amount assigned to any tier.\n" ]
[ 0 ]
[]
[]
[ "foursquare", "price" ]
stackoverflow_0071475807_foursquare_price.txt
Q: How to find an event in sentry by job arguments? I have a rails app, sidekiq and sentry. I want to find event in sentry by job arguments. Sample: I have SomeJob which executed with arguments [{some_arg: 'Arg1'}] Job failed with error and send event to sentry. How I can find event by job arguments? I try full-text search, but it doesn't work A: Search in sentry is limited by what they allow you to search by. From reading their Search docs briefly you can either use: sentry tags messages Either way, you would want to enrich your sentry events. For example, let's assume you will rescue from the error raised in your job class SomeJob include Sidekiq::Worker def perform(args) # do stuff with args rescue StandardError SentryError.new(args: args) end end SentryJobError is really just a PORO that would be called by your job classes. class SentryJobError def initialize(args:) return if Rails.env.development? Sentry.configure_scope do |scope| scope.set_context('job_args', { args: args }) scope.set_context('message', 'job ${args[:some_arg]} failed') end end end
How to find an event in Sentry by job arguments?
I have a Rails app, Sidekiq and Sentry. I want to find an event in Sentry by job arguments. Sample: I have SomeJob, which executed with the arguments [{some_arg: 'Arg1'}]. The job failed with an error and sent an event to Sentry. How can I find the event by job arguments? I tried full-text search, but it doesn't work.
[ "Search in sentry is limited by what they allow you to search by.\nFrom reading their Search docs briefly you can either use:\n\nsentry tags\nmessages\n\nEither way, you would want to enrich your sentry events.\nFor example, let's assume you will rescue from the error raised in your job\nclass SomeJob\n\n include Sidekiq::Worker\n \n def perform(args)\n\n # do stuff with args\n\n rescue StandardError\n SentryError.new(args: args)\n end\nend\n\nSentryJobError is really just a PORO that would be called by your job classes.\nclass SentryJobError\n\n def initialize(args:)\n return if Rails.env.development?\n \n Sentry.configure_scope do |scope|\n scope.set_context('job_args', { args: args })\n scope.set_context('message', 'job ${args[:some_arg]} failed')\n end\n end\n\nend\n\n" ]
[ 0 ]
[]
[]
[ "ruby", "ruby_on_rails", "sentry", "sidekiq" ]
stackoverflow_0074640510_ruby_ruby_on_rails_sentry_sidekiq.txt
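A related detail, offered as a hedged note: data attached with set_context is shown on the event page but is not indexed for search, whereas tags are, so tagging is the more reliable way to make a job argument searchable (e.g. job_arg:Arg1). A minimal sketch of the same idea using Sentry's Python SDK (the question is Ruby/Sidekiq, but the tagging concept carries over); the DSN and tag name are placeholders.

import sentry_sdk

sentry_sdk.init(dsn="https://examplePublicKey@o0.ingest.sentry.io/0")  # placeholder DSN

def perform(args: dict) -> None:
    # Tags are indexed by Sentry, so the event becomes searchable as `job_arg:Arg1`.
    sentry_sdk.set_tag("job_arg", str(args.get("some_arg")))
    try:
        ...  # do stuff with args
    except Exception:
        sentry_sdk.capture_exception()  # report, then re-raise so the job framework can retry
        raise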
Q: read properties from multiple property files iteratively and do some action using a batch script I have 2 directories one has a single inputs.properties file and one of the property in this file is script_properties_path. My task is to get the path from this property and list all the files which are again different set of property files. Now I have to read all properties from each file in the path and perform some action. I am able to read the properties but when I set them to some variables I get empty string. inputs.properties: WORKSPACE_DIR=C:\Workspace_LSV\ PACKAGES_PATH=QA_AUTO_SELENIUM\Selenium\Automation2.0\ RESULTS_DIR=C:\Results\ PROPERTIES_DIR=D:\Work\Projects\xECM\BatchRunner\temp file in D:\Work\Projects\xECM\BatchRunner\temp: LSV1.properties LSV2.properties LSV-437_LSV-436.properties: FILENAME=/cs/gui/tests/admin/auditing/AuditTestVersionAndFuncMenuItems.java CLASSNAME=cs.gui.tests.admin.auditing.AuditTestVersionAndFuncMenuItems LS_435.properties: FILENAME=/cs/gui/tests/admin/auditing/TestAuditingShortcut.java CLASSNAME=cs.gui.tests.admin.auditing.TestAuditingShortcut My Code: @echo "batch program" @echo off For /F "tokens=1* delims==" %%A IN (inputs.properties) DO (SET %%A=%%B) IF "%%A"=="WORKSPACE_DIR" SET WORKSPACE_DIR=%%B IF "%%A"=="PACKAGES_PATH" SET PACKAGES_PATH=%%B IF "%%A"=="PROPERTIES_DIR" SET PROPERTIES_DIR=%%B IF "%%A"=="RESULTS_DIR" SET RESULTS_DIR=%%B IF "%%A"=="BROWSER" SET BROWSER=%%B IF "%%A"=="APP_URL" SET APP_URL=%%B IF "%%A"=="CLASSPATH" SET CLASSPATH=%%B @echo "WORKSPACE_DIR %WORKSPACE_DIR%" @echo "PACKAGES_PATH %PACKAGES_PATH%" @echo "PROPERTIES_DIR %PROPERTIES_DIR%" @echo "RESULTS_DIR %RESULTS_DIR%" @echo "BROWSER %BROWSER%" @echo "APP_URL %APP_URL%" FOR %%D in (%PROPERTIES_DIR%\*.*) DO ( @echo "FILE:%%D " For /F "tokens=1* delims==" %%E IN (%%D) DO (set %%E=%%F) IF "%%E"=="FILENAME" SET FILENAME=%%F IF "%%E"=="CLASSNAME" SET CLASSNAME=%%F @echo %FILENAME% @echo %CLASSNAME% ) cmd /k Output: "batch program" "WORKSPACE_DIR C:\Workspace_LSV\" "PACKAGES_PATH products\comp\qa.auto.selenium\22.4.0-branch\pkg\QA_AUTO_SELENIUM\OT_Selenium\Automation2.0\" "PROPERTIES_DIR D:\Work\Projects\xECM\BatchRunner\temp" "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LSV-437_LSV-436.properties " ECHO is off. ECHO is off. "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LS_435.properties " ECHO is off. ECHO is off. When I run the following code: FOR %%D in (%PROPERTIES_DIR%\*.*) DO ( @echo "FILE:%%D " For /F "tokens=1* delims==" %%E IN (%%D) DO ( IF "%%E"=="FILENAME" SET FILENAME=%%F IF "%%E"=="CLASSNAME" SET CLASSNAME=%%F @echo %%E @echo %%F @echo %FILENAME% @echo %CLASSNAME% ) ) output is: "batch program" "WORKSPACE_DIR C:\Workspace_LSV\" "PACKAGES_PATH products\comp\qa.auto.selenium\22.4.0-branch\pkg\QA_AUTO_SELENIUM\OT_Selenium\Automation2.0\" "PROPERTIES_DIR D:\Work\Projects\xECM\BatchRunner\temp" "RESULTS_DIR C:\Results\" "BROWSER chrome" "APP_URL https://otcs2.xsogpreprod.opentext.cloud/cs/cs" "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LSV-437_LSV-436.properties " FILENAME /CSTests16_2/src/com/opentext/auto/cs/gui/tests/admin/auditing/AuditTestVersionAndFuncMenuItems.java ECHO is off. ECHO is off. CLASSNAME com.opentext.auto.cs.gui.tests.admin.auditing.AuditTestVersionAndFuncMenuItems ECHO is off. ECHO is off. "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LS_435.properties " FILENAME /CSTests16_2/src/com/opentext/auto/cs/gui/tests/admin/auditing/TestAuditingShortcut.java ECHO is off. ECHO is off. 
CLASSNAME com.opentext.auto.cs.gui.tests.admin.auditing.TestAuditingShortcut ECHO is off. ECHO is off. A: ... FOR %%D in (%PROPERTIES_DIR%\*.*) DO ( @echo "FILE:%%D " For /F "tokens=1* delims==" %%E IN (%%D) DO (set %%E=%%F) rem Redundant IF "%%E"=="FILENAME" SET FILENAME=%%F rem Redundant IF "%%E"=="CLASSNAME" SET CLASSNAME=%%F ) @echo %FILENAME% @echo %CLASSNAME% cmd /k
read properties from multiple property files iteratively and do some action using a batch script
I have 2 directories one has a single inputs.properties file and one of the property in this file is script_properties_path. My task is to get the path from this property and list all the files which are again different set of property files. Now I have to read all properties from each file in the path and perform some action. I am able to read the properties but when I set them to some variables I get empty string. inputs.properties: WORKSPACE_DIR=C:\Workspace_LSV\ PACKAGES_PATH=QA_AUTO_SELENIUM\Selenium\Automation2.0\ RESULTS_DIR=C:\Results\ PROPERTIES_DIR=D:\Work\Projects\xECM\BatchRunner\temp file in D:\Work\Projects\xECM\BatchRunner\temp: LSV1.properties LSV2.properties LSV-437_LSV-436.properties: FILENAME=/cs/gui/tests/admin/auditing/AuditTestVersionAndFuncMenuItems.java CLASSNAME=cs.gui.tests.admin.auditing.AuditTestVersionAndFuncMenuItems LS_435.properties: FILENAME=/cs/gui/tests/admin/auditing/TestAuditingShortcut.java CLASSNAME=cs.gui.tests.admin.auditing.TestAuditingShortcut My Code: @echo "batch program" @echo off For /F "tokens=1* delims==" %%A IN (inputs.properties) DO (SET %%A=%%B) IF "%%A"=="WORKSPACE_DIR" SET WORKSPACE_DIR=%%B IF "%%A"=="PACKAGES_PATH" SET PACKAGES_PATH=%%B IF "%%A"=="PROPERTIES_DIR" SET PROPERTIES_DIR=%%B IF "%%A"=="RESULTS_DIR" SET RESULTS_DIR=%%B IF "%%A"=="BROWSER" SET BROWSER=%%B IF "%%A"=="APP_URL" SET APP_URL=%%B IF "%%A"=="CLASSPATH" SET CLASSPATH=%%B @echo "WORKSPACE_DIR %WORKSPACE_DIR%" @echo "PACKAGES_PATH %PACKAGES_PATH%" @echo "PROPERTIES_DIR %PROPERTIES_DIR%" @echo "RESULTS_DIR %RESULTS_DIR%" @echo "BROWSER %BROWSER%" @echo "APP_URL %APP_URL%" FOR %%D in (%PROPERTIES_DIR%\*.*) DO ( @echo "FILE:%%D " For /F "tokens=1* delims==" %%E IN (%%D) DO (set %%E=%%F) IF "%%E"=="FILENAME" SET FILENAME=%%F IF "%%E"=="CLASSNAME" SET CLASSNAME=%%F @echo %FILENAME% @echo %CLASSNAME% ) cmd /k Output: "batch program" "WORKSPACE_DIR C:\Workspace_LSV\" "PACKAGES_PATH products\comp\qa.auto.selenium\22.4.0-branch\pkg\QA_AUTO_SELENIUM\OT_Selenium\Automation2.0\" "PROPERTIES_DIR D:\Work\Projects\xECM\BatchRunner\temp" "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LSV-437_LSV-436.properties " ECHO is off. ECHO is off. "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LS_435.properties " ECHO is off. ECHO is off. When I run the following code: FOR %%D in (%PROPERTIES_DIR%\*.*) DO ( @echo "FILE:%%D " For /F "tokens=1* delims==" %%E IN (%%D) DO ( IF "%%E"=="FILENAME" SET FILENAME=%%F IF "%%E"=="CLASSNAME" SET CLASSNAME=%%F @echo %%E @echo %%F @echo %FILENAME% @echo %CLASSNAME% ) ) output is: "batch program" "WORKSPACE_DIR C:\Workspace_LSV\" "PACKAGES_PATH products\comp\qa.auto.selenium\22.4.0-branch\pkg\QA_AUTO_SELENIUM\OT_Selenium\Automation2.0\" "PROPERTIES_DIR D:\Work\Projects\xECM\BatchRunner\temp" "RESULTS_DIR C:\Results\" "BROWSER chrome" "APP_URL https://otcs2.xsogpreprod.opentext.cloud/cs/cs" "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LSV-437_LSV-436.properties " FILENAME /CSTests16_2/src/com/opentext/auto/cs/gui/tests/admin/auditing/AuditTestVersionAndFuncMenuItems.java ECHO is off. ECHO is off. CLASSNAME com.opentext.auto.cs.gui.tests.admin.auditing.AuditTestVersionAndFuncMenuItems ECHO is off. ECHO is off. "FILE:D:\Work\Projects\xECM\BatchRunner\temp\LS_435.properties " FILENAME /CSTests16_2/src/com/opentext/auto/cs/gui/tests/admin/auditing/TestAuditingShortcut.java ECHO is off. ECHO is off. CLASSNAME com.opentext.auto.cs.gui.tests.admin.auditing.TestAuditingShortcut ECHO is off. ECHO is off.
[ "...\nFOR %%D in (%PROPERTIES_DIR%\\*.*) DO (\n @echo \"FILE:%%D \"\n \n For /F \"tokens=1* delims==\" %%E IN (%%D) DO (set %%E=%%F)\n rem Redundant IF \"%%E\"==\"FILENAME\" SET FILENAME=%%F\n rem Redundant IF \"%%E\"==\"CLASSNAME\" SET CLASSNAME=%%F\n) \n\n@echo %FILENAME%\n@echo %CLASSNAME%\n\ncmd /k\n\n" ]
[ 0 ]
[]
[]
[ "batch_file", "cmd", "windows" ]
stackoverflow_0074657572_batch_file_cmd_windows.txt
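A note on why the echoes printed "ECHO is off." inside the loop: within a parenthesized block, cmd expands %FILENAME% once when the block is parsed, so values assigned inside the same block are not visible yet; the usual remedies are setlocal enabledelayedexpansion with !FILENAME!, or echoing after the block, as the answer above does. For cross-checking the expected output, here is a small Python restatement of the intended flow; the file names match the question, and the KEY=VALUE parsing is a simplified sketch.

from pathlib import Path

def read_props(path) -> dict:
    """Parse simple KEY=VALUE lines into a dict, skipping blanks and comments."""
    props = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            props[key.strip()] = value.strip()
    return props

inputs = read_props("inputs.properties")
for prop_file in sorted(Path(inputs["PROPERTIES_DIR"]).glob("*.properties")):
    job = read_props(prop_file)
    print(prop_file.name, job.get("FILENAME"), job.get("CLASSNAME"))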
Q: Databricks Jobs Webhook Notification ID I want to set a webhook notification for a job using the Jobs API. According to the documentation I need the notification ID, but where do I get that from? Is that essentially the webhook configuration name? The docs say: "An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the on_failure property."
A: These IDs are the IDs of the Alert Destinations that you created via the UI. You can fetch a destination's ID from the URL when you're viewing that specific alert destination in the UI. This feature is currently in private preview, so for the detailed specification please reach out to someone from Databricks.
Databricks Jobs Webhook Notification ID
I want to set a webhook notification for a job using the Jobs API. According to the documentation I need the notification ID, but where do I get that from? Is that essentially the webhook configuration name? The docs say: "An optional list of notification IDs to call when the run fails. A maximum of 3 destinations can be specified for the on_failure property."
[ "These IDs are IDs of the Alerts Destinations that you created via UI. You can fetch the destination ID from the URL when you're accessing specific alert in UI. This feature right now is in the private preview, so for detailed specification please reach someone from Databricks.\n" ]
[ 1 ]
[]
[]
[ "databricks", "databricks_rest_api" ]
stackoverflow_0074652564_databricks_databricks_rest_api.txt
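To make the answer concrete, here is a hedged Python sketch of creating a job with a failure webhook through the Jobs API 2.1; the workspace URL, token, job body and destination ID are placeholders, and the destination ID is the UUID visible in the alert destination's URL in the UI, as described above.

import requests

HOST = "https://<your-workspace>.cloud.databricks.com"   # placeholder workspace URL
TOKEN = "dapiXXXXXXXXXXXX"                               # placeholder access token
DESTINATION_ID = "0481e838-0a59-4eff-9541-a4ca0f34d5d9"  # placeholder, read from the destination's URL

payload = {
    "name": "nightly-job",
    # Up to 3 destinations may be listed per event type (on_failure here).
    "webhook_notifications": {"on_failure": [{"id": DESTINATION_ID}]},
    # ...plus your tasks/cluster settings, omitted in this sketch...
}

resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"job_id": ...}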
Q: Get url function returns duplicate urls multiple times my geturl function returns the same url multiple times[Firebase consoleconsole.log()](https://i.stack.imgur.com/5vKEK.jpg) and when I retrieve and display the url from firestore the image is duplicated and that messes up my UI. I tried to use once: true, on the button that sends the url to the firebase database. and also I tried stopPropagation(); with no success. constructor() { this.posts = []; this.files = []; this.post = { id: cuid(), username: "", caption: "", image: "", }; this.$app = document.querySelector("#app"); this.$firebaseAuthContainer = document.querySelector( "#firebaseui-auth-container" ); this.$authUser = document.querySelector(".auth-user"); this.$uploadBtn = document.querySelector(".upload-container"); this.$postContainer = document.querySelector(".post-container"); this.$filesToUpload = document.querySelector("#files"); this.$sendBtn = document.querySelector("#send"); this.$progress = document.querySelector("#progress"); this.$uploadingBar = document.querySelector("#uploading"); this.$captionText = document.querySelector("#caption-text"); this.$posts = document.querySelector(".posts"); this.$postTime = document.querySelector(".posted-time"); this.ui = new firebaseui.auth.AuthUI(auth); this.handleAuth(); this.addEventListener(); this.displayPost(); } handleAuth() { firebase.auth().onAuthStateChanged((user) => { if (user) { this.username = user.displayName; this.userId = user.uid; this.redirectToApp(); } else { this.redirectToAuth(); } }); } redirectToApp() { this.$firebaseAuthContainer.style.display = "none"; this.$postContainer.style.display = "none"; this.$app.style.display = "block"; this.fetchPostsFromDB(); } redirectToAuth() { this.$firebaseAuthContainer.style.display = "block"; this.$app.style.display = "none"; this.$postContainer.style.display = "none"; this.ui.start("#firebaseui-auth-container", { signInOptions: [ firebase.auth.EmailAuthProvider.PROVIDER_ID, firebase.auth.GoogleAuthProvider.PROVIDER_ID, ], // Other config options... 
}); } addEventListener() { document.body.addEventListener("click", (event) => { this.handleClick(event); this.handleUploadClick(event); this.handlePostClick(event); }); this.$filesToUpload.addEventListener("change", (event) => { this.handleFileChosen(event); }); this.$captionText.addEventListener("change", (event) => { this.post.caption = event.target.value; }); this.$authUser.addEventListener("change", (event) => { this.post.username = event.target.value; }); } handleClick() { this.$authUser.addEventListener("click", (event) => { this.handleLogout(event); }); } handleLogout() { firebase .auth() .signOut() .then(() => { this.redirectToAuth(); }) .catch((error) => { console.log("ERROR OCCURED", error); }); } handleUploadClick() { this.$uploadBtn.addEventListener("click", () => { this.redirectToPost(); }); } redirectToPost() { this.$postContainer.style.display = "block"; this.$firebaseAuthContainer.style.display = "none"; this.$app.style.display = "none"; } handleFileChosen(event) { this.files = event.target.files; if (this.files.length > 0) { alert("File chosen!"); } else { alert("No file chosen!"); } } handlePostClick(e) { this.$sendBtn.addEventListener("click", () => { this.uploadToFB(); }); } uploadToFB() { for (let i = 0; i < this.files.length; i++) { const name = this.files[i].name; const upload = storage.ref(name).put(this.files[i]); upload .then((snapshot) => { console.log("'Successfully uploaded image"); this.progressBar(snapshot); this.getFileUrl(name); }) .catch((error) => { console.log(error, "Error Loading File Occured"); }); } } progressBar(snapshot) { const percentage = (snapshot.bytesTransferred / snapshot.totalBytes) * 100; this.$progress.value = percentage; if (percentage) { this.$uploadingBar.innerHTML = `${this.files[0].name} Uploaded`; } } getFileUrl(name) { const imageRef = storage.ref(name); imageRef .getDownloadURL() .then((url) => { (this.post.image = url), this.posts.push(this.post); console.log(this.post.image); }) .catch((error) => { console.log(error, "Error Occured"); }); } fetchPostsFromDB() { var docRef = db.collection("users").doc(this.userId); docRef .get() .then((doc) => { if (doc.exists) { console.log("Document data:", doc.data().posts); this.posts = doc.data().posts; this.displayPost(); } else { // doc.data() will be undefined in this case console.log("No such document!"); db.collection("users") .doc(this.userId) .set({ posts: this.posts, }) .then(() => { console.log("User successfully Created!"); }) .catch((error) => { console.error("Error writing document: ", error); }); } }) .catch((error) => { console.log("Error getting document:", error); }); } savePosts() { db.collection("users") .doc(this.userId) .set({ posts: this.posts, }) .then(() => { console.log("Document successfully written!"); }) .catch((error) => { console.error("Error writing document: ", error); }); } displayPost() { this.$posts.innerHTML = this.posts .map( (post) => `<div class="post" id="${(post, this.userId)}"> <div class="header"> <div class="profile-area"> <div class="post-pic"> <img alt="jayshetty's profile picture" class="_6q-tv" data-testid="user-avatar" draggable="false" src="assets/akhil.png" /> </div> <span class="profile-name">${post.username}</span> </div> <div class="options" id="${(post, this.userId)}"> <button type="button" class="more-btn"> <div class="Igw0E rBNOH YBx95 _4EzTm" style="height: 24px; width: 24px" > <svg aria-label="More options" class="_8-yf5" fill="#262626" height="16" viewBox="0 0 48 48" width="16" > <circle clip-rule="evenodd" cx="8" cy="24" 
fill-rule="evenodd" r="4.5" ></circle> <circle clip-rule="evenodd" cx="24" cy="24" fill-rule="evenodd" r="4.5" ></circle> <circle clip-rule="evenodd" cx="40" cy="24" fill-rule="evenodd" r="4.5" ></circle> </svg> </div> </button> </div> </div> <div class="body"> <img alt="Photo by Jay Shetty on September 12, 2020. Image may contain: 2 people." class="FFVAD" decoding="auto" sizes="614px" src="${post.image}" style="object-fit: cover" /> </div> <div class="footer"> <div class="user-actions"> <div class="like-comment-share"> <div> <span class="" ><svg aria-label="Like" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path d="M34.6 6.1c5.7 0 10.4 5.2 10.4 11.5 0 6.8-5.9 11-11.5 16S25 41.3 24 41.9c-1.1-.7-4.7-4-9.5-8.3-5.7-5-11.5-9.2-11.5-16C3 11.3 7.7 6.1 13.4 6.1c4.2 0 6.5 2 8.1 4.3 1.9 2.6 2.2 3.9 2.5 3.9.3 0 .6-1.3 2.5-3.9 1.6-2.3 3.9-4.3 8.1-4.3m0-3c-4.5 0-7.9 1.8-10.6 5.6-2.7-3.7-6.1-5.5-10.6-5.5C6 3.1 0 9.6 0 17.6c0 7.3 5.4 12 10.6 16.5.6.5 1.3 1.1 1.9 1.7l2.3 2c4.4 3.9 6.6 5.9 7.6 6.5.5.3 1.1.5 1.6.5.6 0 1.1-.2 1.6-.5 1-.6 2.8-2.2 7.8-6.8l2-1.8c.7-.6 1.3-1.2 2-1.7C42.7 29.6 48 25 48 17.6c0-8-6-14.5-13.4-14.5z" ></path> </svg> </span> </div> <div class="margin-left-small"> <svg aria-label="Comment" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path clip-rule="evenodd" d="M47.5 46.1l-2.8-11c1.8-3.3 2.8-7.1 2.8-11.1C47.5 11 37 .5 24 .5S.5 11 .5 24 11 47.5 24 47.5c4 0 7.8-1 11.1-2.8l11 2.8c.8.2 1.6-.6 1.4-1.4zm-3-22.1c0 4-1 7-2.6 10-.2.4-.3.9-.2 1.4l2.1 8.4-8.3-2.1c-.5-.1-1-.1-1.4.2-1.8 1-5.2 2.6-10 2.6-11.4 0-20.6-9.2-20.6-20.5S12.7 3.5 24 3.5 44.5 12.7 44.5 24z" fill-rule="evenodd" ></path> </svg> </div> <div class="margin-left-small"> <svg aria-label="Share Post" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path d="M47.8 3.8c-.3-.5-.8-.8-1.3-.8h-45C.9 3.1.3 3.5.1 4S0 5.2.4 5.7l15.9 15.6 5.5 22.6c.1.6.6 1 1.2 1.1h.2c.5 0 1-.3 1.3-.7l23.2-39c.4-.4.4-1 .1-1.5zM5.2 6.1h35.5L18 18.7 5.2 6.1zm18.7 33.6l-4.4-18.4L42.4 8.6 23.9 39.7z" ></path> </svg> </div> </div> <div class="bookmark"> <div class="QBdPU rrUvL"> <svg aria-label="Save" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path d="M43.5 48c-.4 0-.8-.2-1.1-.4L24 29 5.6 47.6c-.4.4-1.1.6-1.6.3-.6-.2-1-.8-1-1.4v-45C3 .7 3.7 0 4.5 0h39c.8 0 1.5.7 1.5 1.5v45c0 .6-.4 1.2-.9 1.4-.2.1-.4.1-.6.1zM24 26c.8 0 1.6.3 2.2.9l15.8 16V3H6v39.9l15.8-16c.6-.6 1.4-.9 2.2-.9z" ></path> </svg> </div> </div> </div> <span class="caption"> <span id="cap-username" class="caption-username"><b>${ post.username }</b></span> <span class="caption-text" id="caption-text"> ${post.caption}</span > </span> <span class="comment"> <span class="caption-username"><b>akhilboddu</b></span> <span class="caption-text">Thank you</span> </span> <span class="comment"> <span class="caption-username"><b>imharjot</b></span> <span class="caption-text"> Great stuff</span> </span> <span class="posted-time">${post.timestamp}</span> </div> <div class="add-comment"> <input type="text" placeholder="Add a comment..." /> <a class="post-btn">Post</a> </div> </div> ` ) .join(""); } } const app = new App(); A: You are repeatedly attaching same event listeners. Below is your code stripped down to demonstrate that after several clicks on body (anywhere) one click on $sendButton will trigger console.log('SEND BUTTON CLICKED') several times. I believe that when you test your full code, you must make several clicks on auth, file selection — whatever — before you reach your button. 
All these clicks are handled by your document.body click listener, which attaches more listeners to sendButton. See addEventListener on MDN.

class App {
  constructor() {
    this.$app = document.querySelector('#app');
    this.$sendBtn = document.querySelector('#send');
    this.addEventListener();
    console.log('constructor finished');
  }

  addEventListener() { // I'd rename this method to avoid confusion, BTW
    console.log('add ev listeners IN');
    document.body.addEventListener('click', (event) => {
      console.log('DOC BODY click');
      this.handlePostClick(event);
    });
  }

  handlePostClick(e) {
    console.log('handlePostClick ATTACHER');
    this.$sendBtn.addEventListener('click', () => {
      console.log('handlePostClick LISTENER');
      this.uploadToFB();
    });
  }

  uploadToFB() {
    console.log('—————→ SEND BUTTON CLICKED')
  }
}

const app = new App();

<!doctype html>
<html style="height: 100%;">
<head>
<meta charset="utf-8">
<title>so 74615759</title>
</head>
<body style="height: 100%; margin: 0; padding: 0;">
<h5>click anywhere, see the console.</h5>
<div id="app">
  <button id="send">Send</button>
</div>
</body>
</html>

You have to fix it either by attaching handlers using sendButton.onclick = handlerFunction instead of addEventListener, or by marking sendButton with some class name after one addEventListener has been attached:

if (someButton.classList.contains('has-click-listener') === false) {
  someButton.addEventListener('click', (event) => {
    /* do stuff on click */
  });
  someButton.classList.add('has-click-listener');
}

Or use named functions instead of anonymous ones as event handlers, as per the addEventListener docs.
Get URL function returns duplicate URLs multiple times
my geturl function returns the same url multiple times[Firebase consoleconsole.log()](https://i.stack.imgur.com/5vKEK.jpg) and when I retrieve and display the url from firestore the image is duplicated and that messes up my UI. I tried to use once: true, on the button that sends the url to the firebase database. and also I tried stopPropagation(); with no success. constructor() { this.posts = []; this.files = []; this.post = { id: cuid(), username: "", caption: "", image: "", }; this.$app = document.querySelector("#app"); this.$firebaseAuthContainer = document.querySelector( "#firebaseui-auth-container" ); this.$authUser = document.querySelector(".auth-user"); this.$uploadBtn = document.querySelector(".upload-container"); this.$postContainer = document.querySelector(".post-container"); this.$filesToUpload = document.querySelector("#files"); this.$sendBtn = document.querySelector("#send"); this.$progress = document.querySelector("#progress"); this.$uploadingBar = document.querySelector("#uploading"); this.$captionText = document.querySelector("#caption-text"); this.$posts = document.querySelector(".posts"); this.$postTime = document.querySelector(".posted-time"); this.ui = new firebaseui.auth.AuthUI(auth); this.handleAuth(); this.addEventListener(); this.displayPost(); } handleAuth() { firebase.auth().onAuthStateChanged((user) => { if (user) { this.username = user.displayName; this.userId = user.uid; this.redirectToApp(); } else { this.redirectToAuth(); } }); } redirectToApp() { this.$firebaseAuthContainer.style.display = "none"; this.$postContainer.style.display = "none"; this.$app.style.display = "block"; this.fetchPostsFromDB(); } redirectToAuth() { this.$firebaseAuthContainer.style.display = "block"; this.$app.style.display = "none"; this.$postContainer.style.display = "none"; this.ui.start("#firebaseui-auth-container", { signInOptions: [ firebase.auth.EmailAuthProvider.PROVIDER_ID, firebase.auth.GoogleAuthProvider.PROVIDER_ID, ], // Other config options... 
}); } addEventListener() { document.body.addEventListener("click", (event) => { this.handleClick(event); this.handleUploadClick(event); this.handlePostClick(event); }); this.$filesToUpload.addEventListener("change", (event) => { this.handleFileChosen(event); }); this.$captionText.addEventListener("change", (event) => { this.post.caption = event.target.value; }); this.$authUser.addEventListener("change", (event) => { this.post.username = event.target.value; }); } handleClick() { this.$authUser.addEventListener("click", (event) => { this.handleLogout(event); }); } handleLogout() { firebase .auth() .signOut() .then(() => { this.redirectToAuth(); }) .catch((error) => { console.log("ERROR OCCURED", error); }); } handleUploadClick() { this.$uploadBtn.addEventListener("click", () => { this.redirectToPost(); }); } redirectToPost() { this.$postContainer.style.display = "block"; this.$firebaseAuthContainer.style.display = "none"; this.$app.style.display = "none"; } handleFileChosen(event) { this.files = event.target.files; if (this.files.length > 0) { alert("File chosen!"); } else { alert("No file chosen!"); } } handlePostClick(e) { this.$sendBtn.addEventListener("click", () => { this.uploadToFB(); }); } uploadToFB() { for (let i = 0; i < this.files.length; i++) { const name = this.files[i].name; const upload = storage.ref(name).put(this.files[i]); upload .then((snapshot) => { console.log("'Successfully uploaded image"); this.progressBar(snapshot); this.getFileUrl(name); }) .catch((error) => { console.log(error, "Error Loading File Occured"); }); } } progressBar(snapshot) { const percentage = (snapshot.bytesTransferred / snapshot.totalBytes) * 100; this.$progress.value = percentage; if (percentage) { this.$uploadingBar.innerHTML = `${this.files[0].name} Uploaded`; } } getFileUrl(name) { const imageRef = storage.ref(name); imageRef .getDownloadURL() .then((url) => { (this.post.image = url), this.posts.push(this.post); console.log(this.post.image); }) .catch((error) => { console.log(error, "Error Occured"); }); } fetchPostsFromDB() { var docRef = db.collection("users").doc(this.userId); docRef .get() .then((doc) => { if (doc.exists) { console.log("Document data:", doc.data().posts); this.posts = doc.data().posts; this.displayPost(); } else { // doc.data() will be undefined in this case console.log("No such document!"); db.collection("users") .doc(this.userId) .set({ posts: this.posts, }) .then(() => { console.log("User successfully Created!"); }) .catch((error) => { console.error("Error writing document: ", error); }); } }) .catch((error) => { console.log("Error getting document:", error); }); } savePosts() { db.collection("users") .doc(this.userId) .set({ posts: this.posts, }) .then(() => { console.log("Document successfully written!"); }) .catch((error) => { console.error("Error writing document: ", error); }); } displayPost() { this.$posts.innerHTML = this.posts .map( (post) => `<div class="post" id="${(post, this.userId)}"> <div class="header"> <div class="profile-area"> <div class="post-pic"> <img alt="jayshetty's profile picture" class="_6q-tv" data-testid="user-avatar" draggable="false" src="assets/akhil.png" /> </div> <span class="profile-name">${post.username}</span> </div> <div class="options" id="${(post, this.userId)}"> <button type="button" class="more-btn"> <div class="Igw0E rBNOH YBx95 _4EzTm" style="height: 24px; width: 24px" > <svg aria-label="More options" class="_8-yf5" fill="#262626" height="16" viewBox="0 0 48 48" width="16" > <circle clip-rule="evenodd" cx="8" cy="24" 
fill-rule="evenodd" r="4.5" ></circle> <circle clip-rule="evenodd" cx="24" cy="24" fill-rule="evenodd" r="4.5" ></circle> <circle clip-rule="evenodd" cx="40" cy="24" fill-rule="evenodd" r="4.5" ></circle> </svg> </div> </button> </div> </div> <div class="body"> <img alt="Photo by Jay Shetty on September 12, 2020. Image may contain: 2 people." class="FFVAD" decoding="auto" sizes="614px" src="${post.image}" style="object-fit: cover" /> </div> <div class="footer"> <div class="user-actions"> <div class="like-comment-share"> <div> <span class="" ><svg aria-label="Like" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path d="M34.6 6.1c5.7 0 10.4 5.2 10.4 11.5 0 6.8-5.9 11-11.5 16S25 41.3 24 41.9c-1.1-.7-4.7-4-9.5-8.3-5.7-5-11.5-9.2-11.5-16C3 11.3 7.7 6.1 13.4 6.1c4.2 0 6.5 2 8.1 4.3 1.9 2.6 2.2 3.9 2.5 3.9.3 0 .6-1.3 2.5-3.9 1.6-2.3 3.9-4.3 8.1-4.3m0-3c-4.5 0-7.9 1.8-10.6 5.6-2.7-3.7-6.1-5.5-10.6-5.5C6 3.1 0 9.6 0 17.6c0 7.3 5.4 12 10.6 16.5.6.5 1.3 1.1 1.9 1.7l2.3 2c4.4 3.9 6.6 5.9 7.6 6.5.5.3 1.1.5 1.6.5.6 0 1.1-.2 1.6-.5 1-.6 2.8-2.2 7.8-6.8l2-1.8c.7-.6 1.3-1.2 2-1.7C42.7 29.6 48 25 48 17.6c0-8-6-14.5-13.4-14.5z" ></path> </svg> </span> </div> <div class="margin-left-small"> <svg aria-label="Comment" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path clip-rule="evenodd" d="M47.5 46.1l-2.8-11c1.8-3.3 2.8-7.1 2.8-11.1C47.5 11 37 .5 24 .5S.5 11 .5 24 11 47.5 24 47.5c4 0 7.8-1 11.1-2.8l11 2.8c.8.2 1.6-.6 1.4-1.4zm-3-22.1c0 4-1 7-2.6 10-.2.4-.3.9-.2 1.4l2.1 8.4-8.3-2.1c-.5-.1-1-.1-1.4.2-1.8 1-5.2 2.6-10 2.6-11.4 0-20.6-9.2-20.6-20.5S12.7 3.5 24 3.5 44.5 12.7 44.5 24z" fill-rule="evenodd" ></path> </svg> </div> <div class="margin-left-small"> <svg aria-label="Share Post" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path d="M47.8 3.8c-.3-.5-.8-.8-1.3-.8h-45C.9 3.1.3 3.5.1 4S0 5.2.4 5.7l15.9 15.6 5.5 22.6c.1.6.6 1 1.2 1.1h.2c.5 0 1-.3 1.3-.7l23.2-39c.4-.4.4-1 .1-1.5zM5.2 6.1h35.5L18 18.7 5.2 6.1zm18.7 33.6l-4.4-18.4L42.4 8.6 23.9 39.7z" ></path> </svg> </div> </div> <div class="bookmark"> <div class="QBdPU rrUvL"> <svg aria-label="Save" class="_8-yf5" fill="#262626" height="24" viewBox="0 0 48 48" width="24" > <path d="M43.5 48c-.4 0-.8-.2-1.1-.4L24 29 5.6 47.6c-.4.4-1.1.6-1.6.3-.6-.2-1-.8-1-1.4v-45C3 .7 3.7 0 4.5 0h39c.8 0 1.5.7 1.5 1.5v45c0 .6-.4 1.2-.9 1.4-.2.1-.4.1-.6.1zM24 26c.8 0 1.6.3 2.2.9l15.8 16V3H6v39.9l15.8-16c.6-.6 1.4-.9 2.2-.9z" ></path> </svg> </div> </div> </div> <span class="caption"> <span id="cap-username" class="caption-username"><b>${ post.username }</b></span> <span class="caption-text" id="caption-text"> ${post.caption}</span > </span> <span class="comment"> <span class="caption-username"><b>akhilboddu</b></span> <span class="caption-text">Thank you</span> </span> <span class="comment"> <span class="caption-username"><b>imharjot</b></span> <span class="caption-text"> Great stuff</span> </span> <span class="posted-time">${post.timestamp}</span> </div> <div class="add-comment"> <input type="text" placeholder="Add a comment..." /> <a class="post-btn">Post</a> </div> </div> ` ) .join(""); } } const app = new App();
[ "You are repeatedly attaching same event listeners.\nBelow is your code stripped down to demonstrate that after several clicks on body (anywhere) one click on $sendButton will trigger console.log('SEND BUTTON CLICKED') several times.\nI believe that when you test your full code, you must make several clicks on auth, file selection — whatever — before you reach your button. All these clicks are handled by your document.body click listener, which attaches more listeners to sendButton.\nSee addEventListener on MDN.\n\n\nclass App {\n\n constructor() {\n\n this.$app = document.querySelector('#app');\n\n this.$sendBtn = document.querySelector('#send');\n\n this.addEventListener();\n\n console.log('constructor finished');\n }\n\n addEventListener() { // I'd rename this method to avoid confusion, BTW\n console.log('add ev listeners IN');\n \n document.body.addEventListener('click', (event) => {\n console.log('DOC BODY click');\n this.handlePostClick(event);\n });\n }\n\n handlePostClick(e) {\n console.log('handlePostClick ATTACHER');\n this.$sendBtn.addEventListener('click', () => {\n console.log('handlePostClick LISTENER');\n this.uploadToFB();\n });\n }\n\n uploadToFB() {\n console.log('—————→ SEND BUTTON CLICKED')\n }\n\n}\n\nconst app = new App();\n<!doctype html>\n<html style=\"height: 100%;\">\n<head>\n<meta charset=\"utf-8\">\n<title>so 74615759</title>\n</head>\n<body style=\"height: 100%; margin: 0; padding: 0;\">\n<h5>click anywhere, see the console.</h5>\n\n<div id=\"app\">\n <button id=\"send\">Send</button>\n</div>\n\n</body>\n</html>\n\n\n\nYou have to fix it either by attaching handlers using sendButton.onclick = handlerFunction instead of addEventListener, or by marking sendButton with some class name after one addEventListener has been attached:\nif (someButton.classList.contains('has-click-listener') === false) {\n someButton.addEventListener('click'), (event) => { \n /* do stuff on click */ \n });\n someButton.classList.add('has-click-listener');\n}\n\nOr use named functions instead of anonymous as event handlers as per addEventListener docs.\n" ]
[ 0 ]
[]
[]
[ "firebase_storage", "google_cloud_firestore", "javascript" ]
stackoverflow_0074614759_firebase_storage_google_cloud_firestore_javascript.txt
Q: How do i trigger an event in angular after a date has been entered in datepicker element I am trying to calculate the hours between a begindate and an enddate and show this to the user in realtime. So if both begindate and enddate have been entered, the hours should be calculated and presented automatically without the user having to press a button or something like this. Right now i'm trying to use the (mouseup) event to trigger the calculation but I get a not defined error when i'm calling getHours method or getDate method on the endDate and startDate variables. This makes sense because the values of these 2 date variables don't have a value yet when the mouseup event is triggered. I could circumvent this problem by assigning a new Date() default object to startDate and endDate but that gives strange output as soon as I press one of the datepickers. The output needs to remain 0 as long as both datepickers haven't been set a value yet. So how do I do this? See my template and component below. The method that is handling the hours calculation is called computeLeave() import {Component, OnInit} from '@angular/core'; import {FormBuilder, FormGroup, Validators} from "@angular/forms"; import {AccountService} from "../../account/account.service"; @Component({ selector: 'app-leave-create', templateUrl: './leave-create.component.html', styleUrls: ['./leave-create.component.css'], providers: [AccountService] }) export class LeaveCreateComponent{ title: string; leaveCreateForm: FormGroup; leaveRemaining: [number, number]; leaveRemainingTotal: number; leaveRequested: [number, number]; leaveRequestedTotal: number = 0; leaveRequestedStatutory: number = 0; leaveRequestedNonstatutory: number = 0; private leaveHours: [number, number]; leaveStartDate: Date; //= new Date(); leaveEndDate: Date; //= new Date(); constructor(private readonly fb: FormBuilder, private accountService: AccountService) { this.title = 'Verlofuren aanvragen'; this.leaveCreateForm = fb.group({ startDate: ['', [Validators.required]], endDate: ['', [Validators.required]], }); this.leaveHours = [180, 120]; } // ngOnInit(): void { // console.log("oninit called") // this.computeLeave(); // } // ngOnChanges(): void{ // console.log("changing") // this.computeLeave(); // } onSubmit() { // console.log(localStorage.getItem('account').toString()) // this.accountService.grantLeaveHours(this.leaveCreateForm.get('startDate').value, this.leaveCreateForm.get('endDate').value, JSON.parse(localStorage.getItem("account"))) console.log('%s: startDate=%s, endDate=%s', this.constructor.name, this.leaveCreateForm.get('startDate').value, this.leaveCreateForm.get('endDate').value ); } // onReset() { // this.leaveCreateForm.reset(); // this.computeLeave(); // } private computeLeave() { console.log("computeLeave called..") //console.log("test" + this.leaveCreateForm.get('startDate').value.toString()) //this.leaveStartDate = new Date(this.leaveCreateForm.get('startDate').value); //this.leaveEndDate = new Date(this.leaveCreateForm.get('endDate').value); // console.log("Startdate: " + this.leaveStartDate.toString()) // console.log("EndDate: " + this.leaveEndDate.toString()) if (this.leaveStartDate === undefined || this.leaveEndDate === undefined) { this.leaveRequestedTotal = 0; this.leaveRequestedStatutory = 0; this.leaveRequestedNonstatutory = 0; } if(this.leaveStartDate.getDate() !== this.leaveEndDate.getDate()) { console.log("Startdate: " + this.leaveStartDate.toString()) console.log("EndDate: " + this.leaveEndDate.toString()) console.log("not undefined") 
console.log("hour" + this.leaveStartDate.getHours()) console.log("hour" + this.leaveEndDate.getHours()) this.leaveRequestedTotal = this.leaveStartDate.getHours() + this.leaveEndDate.getHours(); this.leaveRequestedStatutory = this.leaveRequestedTotal * 0.8; this.leaveRequestedNonstatutory = this.leaveRequestedTotal * 0.2; } else console.log("undefined") } } Heres my template <h2>{{title}}</h2> <form [formGroup]="leaveCreateForm" (ngSubmit)="onSubmit()"> <div class="row"> <div class="col-sm-4 form-group"> <label for="startDate">Begin datum</label> <!--<input type="datetime-local" id="startDate" class="form-control"--> <!--placeholder="01-01-2017" [(ngModel)]="leaveStartDate" formControlName="startDate"--> <!--(mouseup)="computeLeave()">--> <input type="date" id="startDate" name="startDate" class="form-control" placeholder="01-01-2017" [(ngModel)]="leaveStartDate" formControlName="startDate" (focus)="computeLeave()"> </div> <div class="col-sm-4 form-group"> <label class="form-check-label"> <input #intraDay class="form-check-input" type="checkbox" (change)="0"> Verlof duurt minder dan 1 dag </label> <input *ngIf= "intraDay.checked" type="text" class="form-control" placeholder="voer het aantal uren in" formControlName="intraDayHours"> </div> </div> <div class="row"> <div class="col-sm-4 form-group"> <label *ngIf= "!intraDay.checked" for="endDate">Eind datum</label> <input *ngIf= "!intraDay.checked" type="date" id="endDate" name="endDate" class="form-control" placeholder="02-02-2017" [(ngModel)]="leaveEndDate" formControlName="endDate" (mouseup)="computeLeave()"> </div> </div> <div class="form-group"> <table class="table"> <thead> <tr> <th>Type</th> <th>Berekend</th> <th>Beschikbaar</th> </tr> </thead> <tbody> <tr> <td>Wettelijk</td> <td>{{leaveRequestedStatutory | number:"1.0-0"}}</td> <!--<td>{{leaveRemaining[0] | number:"1.0-0"}}</td>--> </tr> <tr> <td>Bovenwettelijk</td> <td>{{leaveRequestedNonstatutory | number:"1.0-0"}}</td> <!--<td>{{leaveRemaining[1] | number:"1.0-0"}}</td>--> </tr> <tr> <td>Totaal</td> <td>{{leaveRequestedTotal | number:"1.0-0"}}</td> <!--<td>{{leaveRemainingTotal | number:"1.0-0"}}</td>--> </tr> </tbody> </table> </div> <div class="form-group"> <button type="submit" class="btn btn-default">Aanvragen</button> <button type="button" class="btn btn-default" (click)="onReset()">Reset</button> </div> </form> ERROR TypeError: this.leaveStartDate.getDate is not a function at LeaveCreateComponent.webpackJsonp.../../../../../src/app/app/leave/create/leave-create.component.ts.LeaveCreateComponent.computeLeave (leave-create.component.ts:74) at Object.eval [as handleEvent] (LeaveCreateComponent.html:24) at handleEvent (core.es5.js:11998) at callWithDebugContext (core.es5.js:13467) at Object.debugHandleEvent [as handleEvent] (core.es5.js:13055) at dispatchEvent (core.es5.js:8614) at core.es5.js:10770 at SafeSubscriber.schedulerFn [as _next] (core.es5.js:3647) at SafeSubscriber.webpackJsonp.../../../../rxjs/Subscriber.js.SafeSubscriber.__tryOrUnsub (Subscriber.js:238) at SafeSubscriber.webpackJsonp.../../../../rxjs/Subscriber.js.SafeSubscriber.next (Subscriber.js:185) A: Bind change event to the input and compute difference. 
<input type="date" id="startDate" name="startDate" [(ngModel)]="leaveStartDate" (change)="onChange($event)"> <input type="date" id="endDate" name="endDate" [(ngModel)]="leaveEndDate" (change)="onChange($event)"> component isValidDate(date:any){ let checkDate: any = new Date(date); return checkDate != "Invalid Date"; } onChange(value){ if(this.leaveStartDate && this.leaveEndDate && this.isValidDate(this.leaveStartDate) && this.isValidDate(this.leaveEndDate)) { //custom logic goes here //this.leaveRequestedTotal = moment.duration(this.leaveEndDate.diff(this.leaveStartDate)); } else { this.leaveRequestedTotal = 0; } } A: If you use ngModel for binding your datepicker's data to a model, you should try using ngModelChange event to compute date difference, like this: (ngModelChange)="computeLeave()" A: so i have issues understanding your code but i am seeing a particular error. have you tried to use <input (change)="computeLeave()".. /> mouseup might not work in this implementation below <input *ngIf= "!intraDay.checked" type="date" id="endDate" name="endDate" class="form-control" placeholder="02-02-2017" [(ngModel)]="leaveEndDate" formControlName="endDate" (mouseup)="computeLeave() /> Also, please note that you did not call your function in the snippet you commented out. If this does not work for you, can you share a codepen here so i can fork and check?
How do I trigger an event in Angular after a date has been entered in a datepicker element
I am trying to calculate the hours between a begindate and an enddate and show this to the user in realtime. So if both begindate and enddate have been entered, the hours should be calculated and presented automatically without the user having to press a button or something like this. Right now i'm trying to use the (mouseup) event to trigger the calculation but I get a not defined error when i'm calling getHours method or getDate method on the endDate and startDate variables. This makes sense because the values of these 2 date variables don't have a value yet when the mouseup event is triggered. I could circumvent this problem by assigning a new Date() default object to startDate and endDate but that gives strange output as soon as I press one of the datepickers. The output needs to remain 0 as long as both datepickers haven't been set a value yet. So how do I do this? See my template and component below. The method that is handling the hours calculation is called computeLeave() import {Component, OnInit} from '@angular/core'; import {FormBuilder, FormGroup, Validators} from "@angular/forms"; import {AccountService} from "../../account/account.service"; @Component({ selector: 'app-leave-create', templateUrl: './leave-create.component.html', styleUrls: ['./leave-create.component.css'], providers: [AccountService] }) export class LeaveCreateComponent{ title: string; leaveCreateForm: FormGroup; leaveRemaining: [number, number]; leaveRemainingTotal: number; leaveRequested: [number, number]; leaveRequestedTotal: number = 0; leaveRequestedStatutory: number = 0; leaveRequestedNonstatutory: number = 0; private leaveHours: [number, number]; leaveStartDate: Date; //= new Date(); leaveEndDate: Date; //= new Date(); constructor(private readonly fb: FormBuilder, private accountService: AccountService) { this.title = 'Verlofuren aanvragen'; this.leaveCreateForm = fb.group({ startDate: ['', [Validators.required]], endDate: ['', [Validators.required]], }); this.leaveHours = [180, 120]; } // ngOnInit(): void { // console.log("oninit called") // this.computeLeave(); // } // ngOnChanges(): void{ // console.log("changing") // this.computeLeave(); // } onSubmit() { // console.log(localStorage.getItem('account').toString()) // this.accountService.grantLeaveHours(this.leaveCreateForm.get('startDate').value, this.leaveCreateForm.get('endDate').value, JSON.parse(localStorage.getItem("account"))) console.log('%s: startDate=%s, endDate=%s', this.constructor.name, this.leaveCreateForm.get('startDate').value, this.leaveCreateForm.get('endDate').value ); } // onReset() { // this.leaveCreateForm.reset(); // this.computeLeave(); // } private computeLeave() { console.log("computeLeave called..") //console.log("test" + this.leaveCreateForm.get('startDate').value.toString()) //this.leaveStartDate = new Date(this.leaveCreateForm.get('startDate').value); //this.leaveEndDate = new Date(this.leaveCreateForm.get('endDate').value); // console.log("Startdate: " + this.leaveStartDate.toString()) // console.log("EndDate: " + this.leaveEndDate.toString()) if (this.leaveStartDate === undefined || this.leaveEndDate === undefined) { this.leaveRequestedTotal = 0; this.leaveRequestedStatutory = 0; this.leaveRequestedNonstatutory = 0; } if(this.leaveStartDate.getDate() !== this.leaveEndDate.getDate()) { console.log("Startdate: " + this.leaveStartDate.toString()) console.log("EndDate: " + this.leaveEndDate.toString()) console.log("not undefined") console.log("hour" + this.leaveStartDate.getHours()) console.log("hour" + 
this.leaveEndDate.getHours()) this.leaveRequestedTotal = this.leaveStartDate.getHours() + this.leaveEndDate.getHours(); this.leaveRequestedStatutory = this.leaveRequestedTotal * 0.8; this.leaveRequestedNonstatutory = this.leaveRequestedTotal * 0.2; } else console.log("undefined") } } Heres my template <h2>{{title}}</h2> <form [formGroup]="leaveCreateForm" (ngSubmit)="onSubmit()"> <div class="row"> <div class="col-sm-4 form-group"> <label for="startDate">Begin datum</label> <!--<input type="datetime-local" id="startDate" class="form-control"--> <!--placeholder="01-01-2017" [(ngModel)]="leaveStartDate" formControlName="startDate"--> <!--(mouseup)="computeLeave()">--> <input type="date" id="startDate" name="startDate" class="form-control" placeholder="01-01-2017" [(ngModel)]="leaveStartDate" formControlName="startDate" (focus)="computeLeave()"> </div> <div class="col-sm-4 form-group"> <label class="form-check-label"> <input #intraDay class="form-check-input" type="checkbox" (change)="0"> Verlof duurt minder dan 1 dag </label> <input *ngIf= "intraDay.checked" type="text" class="form-control" placeholder="voer het aantal uren in" formControlName="intraDayHours"> </div> </div> <div class="row"> <div class="col-sm-4 form-group"> <label *ngIf= "!intraDay.checked" for="endDate">Eind datum</label> <input *ngIf= "!intraDay.checked" type="date" id="endDate" name="endDate" class="form-control" placeholder="02-02-2017" [(ngModel)]="leaveEndDate" formControlName="endDate" (mouseup)="computeLeave()"> </div> </div> <div class="form-group"> <table class="table"> <thead> <tr> <th>Type</th> <th>Berekend</th> <th>Beschikbaar</th> </tr> </thead> <tbody> <tr> <td>Wettelijk</td> <td>{{leaveRequestedStatutory | number:"1.0-0"}}</td> <!--<td>{{leaveRemaining[0] | number:"1.0-0"}}</td>--> </tr> <tr> <td>Bovenwettelijk</td> <td>{{leaveRequestedNonstatutory | number:"1.0-0"}}</td> <!--<td>{{leaveRemaining[1] | number:"1.0-0"}}</td>--> </tr> <tr> <td>Totaal</td> <td>{{leaveRequestedTotal | number:"1.0-0"}}</td> <!--<td>{{leaveRemainingTotal | number:"1.0-0"}}</td>--> </tr> </tbody> </table> </div> <div class="form-group"> <button type="submit" class="btn btn-default">Aanvragen</button> <button type="button" class="btn btn-default" (click)="onReset()">Reset</button> </div> </form> ERROR TypeError: this.leaveStartDate.getDate is not a function at LeaveCreateComponent.webpackJsonp.../../../../../src/app/app/leave/create/leave-create.component.ts.LeaveCreateComponent.computeLeave (leave-create.component.ts:74) at Object.eval [as handleEvent] (LeaveCreateComponent.html:24) at handleEvent (core.es5.js:11998) at callWithDebugContext (core.es5.js:13467) at Object.debugHandleEvent [as handleEvent] (core.es5.js:13055) at dispatchEvent (core.es5.js:8614) at core.es5.js:10770 at SafeSubscriber.schedulerFn [as _next] (core.es5.js:3647) at SafeSubscriber.webpackJsonp.../../../../rxjs/Subscriber.js.SafeSubscriber.__tryOrUnsub (Subscriber.js:238) at SafeSubscriber.webpackJsonp.../../../../rxjs/Subscriber.js.SafeSubscriber.next (Subscriber.js:185)
[ "Bind change event to the input and compute difference.\n<input type=\"date\" id=\"startDate\" name=\"startDate\" [(ngModel)]=\"leaveStartDate\" (change)=\"onChange($event)\">\n<input type=\"date\" id=\"endDate\" name=\"endDate\" [(ngModel)]=\"leaveEndDate\" (change)=\"onChange($event)\">\n\ncomponent\n isValidDate(date:any){\n let checkDate: any = new Date(date);\n return checkDate != \"Invalid Date\";\n }\n\n onChange(value){\n if(this.leaveStartDate && this.leaveEndDate && this.isValidDate(this.leaveStartDate) && this.isValidDate(this.leaveEndDate)) {\n //custom logic goes here\n //this.leaveRequestedTotal = moment.duration(this.leaveEndDate.diff(this.leaveStartDate));\n } else {\n this.leaveRequestedTotal = 0;\n }\n }\n\n", "If you use ngModel for binding your datepicker's data to a model, you should try using ngModelChange event to compute date difference, like this: \n(ngModelChange)=\"computeLeave()\"\n", "so i have issues understanding your code but i am seeing a particular error. have you tried to use\n<input (change)=\"computeLeave()\".. />\n\nmouseup might not work in this implementation below\n<input *ngIf= \"!intraDay.checked\" type=\"date\" id=\"endDate\" name=\"endDate\" class=\"form-control\"\n placeholder=\"02-02-2017\" [(ngModel)]=\"leaveEndDate\" formControlName=\"endDate\" (mouseup)=\"computeLeave() />\n\nAlso, please note that you did not call your function in the snippet you commented out.\nIf this does not work for you, can you share a codepen here so i can fork and check?\n" ]
[ 1, 0, 0 ]
[]
[]
[ "angular", "javascript", "twitter_bootstrap" ]
stackoverflow_0047411909_angular_javascript_twitter_bootstrap.txt
Q: How does "Insert documentation comment stub" work in Pycharm for getting method parameters? I have enabled Insert documentation comment stub within Editor | General |Smart keys : But then how to get the method parameters type stubs? Adding the docstring triple quotes and then enter does open up the docstring - but with nothing in it: def get_self_join_clause(self, df, alias1='o', alias2 = 'n'): """ """ # pycharm added this automatically A: how to get the method parameters type stubs? PyCharm does generate the docstring stub with the type placeholders, but the placeholders aren't currently (using PyCharm 2022.1) populated from the __annotations__ with the types. This has been marked with the state "To be discussed" in the JetBrains bugtracker, see issues PY-23400 and PY-54930. Inclusion of the type placeholders is configured by checking File > Settings > Editor > General > Smart Keys > Python > Insert type placeholders in the documentation comment stub. It's worth noting the insert docstring stub intention is selected on the function name itself In this example with Napoleon style docstring format selected in File > Settings > Tools > Python Integrated Tools > Docstrings, the result:
How does "Insert documentation comment stub" work in Pycharm for getting method parameters?
I have enabled Insert documentation comment stub within Editor | General | Smart Keys. But then how do I get the method parameter type stubs? Adding the docstring triple quotes and then pressing Enter does open up the docstring - but with nothing in it:

def get_self_join_clause(self, df, alias1='o', alias2='n'):
    """ """  # pycharm added this automatically
[ "\nhow to get the method parameters type stubs?\n\nPyCharm does generate the docstring stub with the type placeholders, but the placeholders aren't currently (using PyCharm 2022.1) populated from the __annotations__ with the types. This has been marked with the state \"To be discussed\" in the JetBrains bugtracker, see issues PY-23400 and PY-54930.\nInclusion of the type placeholders is configured by checking File > Settings > Editor > General > Smart Keys > Python > Insert type placeholders in the documentation comment stub.\n\n\nIt's worth noting the insert docstring stub intention is selected on the function name itself\n\nIn this example with Napoleon style docstring format selected in File > Settings > Tools > Python Integrated Tools > Docstrings, the result:\n\n" ]
[ 1 ]
[]
[]
[ "docstring", "pycharm", "python" ]
stackoverflow_0074657042_docstring_pycharm_python.txt
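As the answer notes, PyCharm inserts the placeholders but (as of 2022.1) does not fill them from annotations, so the types are completed by hand. A hypothetical example of what the stub for the method above could look like once filled in, using the Google/Napoleon docstring style; the parameter descriptions are invented for illustration.

def get_self_join_clause(self, df, alias1: str = 'o', alias2: str = 'n') -> str:
    """Build a self-join clause for the given dataframe.

    Args:
        df: The dataframe to be joined with itself.
        alias1 (str): Alias for the left ("old") side of the join.
        alias2 (str): Alias for the right ("new") side of the join.

    Returns:
        str: The rendered join clause.
    """
    ...  # body omitted; only the docstring shape matters here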
Q: What is this long error in React from material ui? Compiled with problems:X ERROR in ./src/Pages/Crypto_transactions.js 184:35-43 export 'default' (imported as 'DataGrid') was not found in '@material-ui/data-grid' (possible exports: DATA_GRID_PROPTYPES, DEFAULT_GRID_COL_TYPE_KEY, DEFAULT_GRID_OPTIONS, DEFAULT_GRID_PROPS_FROM_OPTIONS, DEFAULT_GRID_SLOTS_COMPONENTS, DataGrid, GRID_BOOLEAN_COLUMN_TYPE, GRID_CELL_CSS_CLASS, GRID_CELL_CSS_CLASS_SUFFIX, GRID_COLUMN_HEADER_CSS_CLASS, GRID_COLUMN_HEADER_CSS_CLASS_SUFFIX, GRID_COLUMN_HEADER_DRAGGING_CSS_CLASS, GRID_COLUMN_HEADER_DROP_ZONE_CSS_CLASS, GRID_COLUMN_HEADER_SEPARATOR_RESIZABLE_CSS_CLASS, GRID_COLUMN_HEADER_TITLE_CSS_CLASS, GRID_CSS_CLASS_PREFIX, GRID_DATETIME_COLUMN_TYPE, GRID_DATETIME_COL_DEF, GRID_DATE_COLUMN_TYPE, GRID_DATE_COL_DEF, GRID_DEFAULT_LOCALE_TEXT, GRID_EXPERIMENTAL_ENABLED, GRID_NUMBER_COLUMN_TYPE, GRID_NUMERIC_COL_DEF, GRID_ROOT_CSS_CLASS_SUFFIX, GRID_ROW_CSS_CLASS, GRID_ROW_CSS_CLASS_SUFFIX, GRID_STRING_COLUMN_TYPE, GRID_STRING_COL_DEF, GridAddIcon, GridApiContext, GridArrowDownwardIcon, GridArrowUpwardIcon, GridAutoSizer, GridBody, GridCell, GridCellCheckboxForwardRef, GridCellCheckboxRenderer, GridCheckCircleIcon, GridCheckIcon, GridCloseIcon, GridColumnHeaderItem, GridColumnHeaderMenu, GridColumnHeaderSeparator, GridColumnHeaderSortIcon, GridColumnHeaderTitle, GridColumnHeadersItemCollection, GridColumnIcon, GridColumnMenu, GridColumnMenuContainer, GridColumnsContainer, GridColumnsHeader, GridColumnsMenuItem, GridColumnsPanel, GridDataContainer, GridDensityTypes, GridDragIcon, GridEditInputCell, GridEditSingleSelectCell, GridEmptyCell, GridErrorHandler, GridEvents, GridFeatureModeConstant, GridFilterAltIcon, GridFilterForm, GridFilterInputValue, GridFilterListIcon, GridFilterMenuItem, GridFilterPanel, GridFooter, GridFooterContainer, GridFooterPlaceholder, GridHeader, GridHeaderCheckbox, GridHeaderPlaceholder, GridLinkOperator, GridLoadIcon, GridLoadingOverlay, GridMenu, GridMenuIcon, GridNoRowsOverlay, GridOverlay, GridOverlays, GridPagination, GridPanel, GridPanelContent, GridPanelFooter, GridPanelHeader, GridPanelWrapper, GridPreferencePanelsValue, GridPreferencesPanel, GridRenderingZone, GridRoot, GridRow, GridRowCells, GridRowCount, GridSaveAltIcon, GridScrollArea, GridSearchIcon, GridSelectedRowCount, GridSeparatorIcon, GridStickyContainer, GridTableRowsIcon, GridToolbar, GridToolbarColumnsButton, GridToolbarContainer, GridToolbarDensitySelector, GridToolbarExport, GridToolbarFilterButton, GridTripleDotsVerticalIcon, GridViewHeadlineIcon, GridViewStreamIcon, GridViewport, GridWindow, HideGridColMenuItem, MAX_PAGE_SIZE, SUBMIT_FILTER_STROKE_TIME, Signature, SortGridMenuItems, activeGridFilterItemsSelector, allGridColumnsFieldsSelector, allGridColumnsSelector, arSD, bgBG, checkGridRowIdIsValid, convertGridRowsPropToState, csCZ, deDE, elGR, enUS, esES, filterGridColumnLookupSelector, filterGridItemsCounterSelector, filterGridStateSelector, filterableGridColumnsIdsSelector, filterableGridColumnsSelector, frFR, getGridColDef, getGridDateOperators, getGridDefaultColumnTypes, getGridNumericColumnOperators, getGridStringOperators, getInitialGridColumnReorderState, getInitialGridColumnResizeState, getInitialGridColumnsState, getInitialGridFilterState, getInitialGridRenderingState, getInitialGridRowState, getInitialGridSortingState, getInitialGridState, getInitialVisibleGridRowsState, gridCheckboxSelectionColDef, gridColumnLookupSelector, gridColumnMenuStateSelector, gridColumnReorderDragColSelector, 
gridColumnReorderSelector, gridColumnResizeSelector, gridColumnsMetaSelector, gridColumnsSelector, gridColumnsTotalWidthSelector, gridDateFormatter, gridDateTimeFormatter, gridEditRowsStateSelector, gridFocusCellSelector, gridFocusColumnHeaderSelector, gridFocusStateSelector, gridPaginatedVisibleSortedGridRowIdsSelector, gridPaginationSelector, gridPanelClasses, gridPreferencePanelStateSelector, gridResizingColumnFieldSelector, gridRowCountSelector, gridRowsLookupSelector, gridRowsStateSelector, gridScrollbarStateSelector, gridSelectionStateSelector, gridSortColumnLookupSelector, gridSortModelSelector, gridTabIndexCellSelector, gridTabIndexColumnHeaderSelector, gridTabIndexStateSelector, gridViewportSizeStateSelector, itIT, jaJP, nlNL, plPL, plPLGrid, ptBR, renderEditInputCell, renderEditSingleSelectCell, ruRU, ruRUGrid, selectedGridRowsCountSelector, selectedGridRowsSelector, selectedIdsLookupSelector, skSK, skSKGrid, sortedGridRowIdsSelector, sortedGridRowsSelector, trTR, ukUA, ukUAGrid, unorderedGridRowIdsSelector, unorderedGridRowModelsSelector, useApi, useDataGridComponent, useGridApi, useGridApiEventHandler, useGridApiMethod, useGridApiOptionHandler, useGridApiRef, useGridColumnMenu, useGridColumnReorder, useGridColumnResize, useGridColumns, useGridComponents, useGridContainerProps, useGridControlState, useGridEditRows, useGridFilter, useGridFocus, useGridKeyboard, useGridKeyboardNavigation, useGridPage, useGridPageSize, useGridParamsApi, useGridPreferencesPanel, useGridReducer, useGridRows, useGridScrollFn, useGridSelection, useGridSelector, useGridSlotComponentProps, useGridSorting, useGridState, useGridVirtualRows, useLogger, useLoggerFactory, useNativeEventListener, visibleGridColumnsLengthSelector, visibleGridColumnsSelector, visibleGridRowCountSelector, visibleGridRowsStateSelector, visibleSortedGridRowIdsSelector, visibleSortedGridRowsAsArraySelector, visibleSortedGridRowsSelector) A: You used import DataGrid from '@material-ui/data-grid', which doesn't have a default export. You probably want import { DataGrid } from '@material-ui/data-grid' A: Use import {DataGrid} from "@mui/material"; instead of import DataGrid from "@mui/material";. It will solve the problem. When It wants to import a named component, It must be included inside curly braces. {DataGrid} instead of DataGrid. A Component with braces specifies a particular element of the React package. Without braces it will import default react package.
What is this long error in React from material ui?
Compiled with problems:X ERROR in ./src/Pages/Crypto_transactions.js 184:35-43 export 'default' (imported as 'DataGrid') was not found in '@material-ui/data-grid' (possible exports: DATA_GRID_PROPTYPES, DEFAULT_GRID_COL_TYPE_KEY, DEFAULT_GRID_OPTIONS, DEFAULT_GRID_PROPS_FROM_OPTIONS, DEFAULT_GRID_SLOTS_COMPONENTS, DataGrid, GRID_BOOLEAN_COLUMN_TYPE, GRID_CELL_CSS_CLASS, GRID_CELL_CSS_CLASS_SUFFIX, GRID_COLUMN_HEADER_CSS_CLASS, GRID_COLUMN_HEADER_CSS_CLASS_SUFFIX, GRID_COLUMN_HEADER_DRAGGING_CSS_CLASS, GRID_COLUMN_HEADER_DROP_ZONE_CSS_CLASS, GRID_COLUMN_HEADER_SEPARATOR_RESIZABLE_CSS_CLASS, GRID_COLUMN_HEADER_TITLE_CSS_CLASS, GRID_CSS_CLASS_PREFIX, GRID_DATETIME_COLUMN_TYPE, GRID_DATETIME_COL_DEF, GRID_DATE_COLUMN_TYPE, GRID_DATE_COL_DEF, GRID_DEFAULT_LOCALE_TEXT, GRID_EXPERIMENTAL_ENABLED, GRID_NUMBER_COLUMN_TYPE, GRID_NUMERIC_COL_DEF, GRID_ROOT_CSS_CLASS_SUFFIX, GRID_ROW_CSS_CLASS, GRID_ROW_CSS_CLASS_SUFFIX, GRID_STRING_COLUMN_TYPE, GRID_STRING_COL_DEF, GridAddIcon, GridApiContext, GridArrowDownwardIcon, GridArrowUpwardIcon, GridAutoSizer, GridBody, GridCell, GridCellCheckboxForwardRef, GridCellCheckboxRenderer, GridCheckCircleIcon, GridCheckIcon, GridCloseIcon, GridColumnHeaderItem, GridColumnHeaderMenu, GridColumnHeaderSeparator, GridColumnHeaderSortIcon, GridColumnHeaderTitle, GridColumnHeadersItemCollection, GridColumnIcon, GridColumnMenu, GridColumnMenuContainer, GridColumnsContainer, GridColumnsHeader, GridColumnsMenuItem, GridColumnsPanel, GridDataContainer, GridDensityTypes, GridDragIcon, GridEditInputCell, GridEditSingleSelectCell, GridEmptyCell, GridErrorHandler, GridEvents, GridFeatureModeConstant, GridFilterAltIcon, GridFilterForm, GridFilterInputValue, GridFilterListIcon, GridFilterMenuItem, GridFilterPanel, GridFooter, GridFooterContainer, GridFooterPlaceholder, GridHeader, GridHeaderCheckbox, GridHeaderPlaceholder, GridLinkOperator, GridLoadIcon, GridLoadingOverlay, GridMenu, GridMenuIcon, GridNoRowsOverlay, GridOverlay, GridOverlays, GridPagination, GridPanel, GridPanelContent, GridPanelFooter, GridPanelHeader, GridPanelWrapper, GridPreferencePanelsValue, GridPreferencesPanel, GridRenderingZone, GridRoot, GridRow, GridRowCells, GridRowCount, GridSaveAltIcon, GridScrollArea, GridSearchIcon, GridSelectedRowCount, GridSeparatorIcon, GridStickyContainer, GridTableRowsIcon, GridToolbar, GridToolbarColumnsButton, GridToolbarContainer, GridToolbarDensitySelector, GridToolbarExport, GridToolbarFilterButton, GridTripleDotsVerticalIcon, GridViewHeadlineIcon, GridViewStreamIcon, GridViewport, GridWindow, HideGridColMenuItem, MAX_PAGE_SIZE, SUBMIT_FILTER_STROKE_TIME, Signature, SortGridMenuItems, activeGridFilterItemsSelector, allGridColumnsFieldsSelector, allGridColumnsSelector, arSD, bgBG, checkGridRowIdIsValid, convertGridRowsPropToState, csCZ, deDE, elGR, enUS, esES, filterGridColumnLookupSelector, filterGridItemsCounterSelector, filterGridStateSelector, filterableGridColumnsIdsSelector, filterableGridColumnsSelector, frFR, getGridColDef, getGridDateOperators, getGridDefaultColumnTypes, getGridNumericColumnOperators, getGridStringOperators, getInitialGridColumnReorderState, getInitialGridColumnResizeState, getInitialGridColumnsState, getInitialGridFilterState, getInitialGridRenderingState, getInitialGridRowState, getInitialGridSortingState, getInitialGridState, getInitialVisibleGridRowsState, gridCheckboxSelectionColDef, gridColumnLookupSelector, gridColumnMenuStateSelector, gridColumnReorderDragColSelector, gridColumnReorderSelector, gridColumnResizeSelector, 
gridColumnsMetaSelector, gridColumnsSelector, gridColumnsTotalWidthSelector, gridDateFormatter, gridDateTimeFormatter, gridEditRowsStateSelector, gridFocusCellSelector, gridFocusColumnHeaderSelector, gridFocusStateSelector, gridPaginatedVisibleSortedGridRowIdsSelector, gridPaginationSelector, gridPanelClasses, gridPreferencePanelStateSelector, gridResizingColumnFieldSelector, gridRowCountSelector, gridRowsLookupSelector, gridRowsStateSelector, gridScrollbarStateSelector, gridSelectionStateSelector, gridSortColumnLookupSelector, gridSortModelSelector, gridTabIndexCellSelector, gridTabIndexColumnHeaderSelector, gridTabIndexStateSelector, gridViewportSizeStateSelector, itIT, jaJP, nlNL, plPL, plPLGrid, ptBR, renderEditInputCell, renderEditSingleSelectCell, ruRU, ruRUGrid, selectedGridRowsCountSelector, selectedGridRowsSelector, selectedIdsLookupSelector, skSK, skSKGrid, sortedGridRowIdsSelector, sortedGridRowsSelector, trTR, ukUA, ukUAGrid, unorderedGridRowIdsSelector, unorderedGridRowModelsSelector, useApi, useDataGridComponent, useGridApi, useGridApiEventHandler, useGridApiMethod, useGridApiOptionHandler, useGridApiRef, useGridColumnMenu, useGridColumnReorder, useGridColumnResize, useGridColumns, useGridComponents, useGridContainerProps, useGridControlState, useGridEditRows, useGridFilter, useGridFocus, useGridKeyboard, useGridKeyboardNavigation, useGridPage, useGridPageSize, useGridParamsApi, useGridPreferencesPanel, useGridReducer, useGridRows, useGridScrollFn, useGridSelection, useGridSelector, useGridSlotComponentProps, useGridSorting, useGridState, useGridVirtualRows, useLogger, useLoggerFactory, useNativeEventListener, visibleGridColumnsLengthSelector, visibleGridColumnsSelector, visibleGridRowCountSelector, visibleGridRowsStateSelector, visibleSortedGridRowIdsSelector, visibleSortedGridRowsAsArraySelector, visibleSortedGridRowsSelector)
[ "You used import DataGrid from '@material-ui/data-grid', which doesn't have a default export.\nYou probably want\nimport { DataGrid } from '@material-ui/data-grid'\n\n", "Use import {DataGrid} from \"@mui/material\"; instead of import DataGrid from \"@mui/material\";. It will solve the problem.\nWhen It wants to import a named component, It must be included inside curly braces. {DataGrid} instead of DataGrid.\nA Component with braces specifies a particular element of the React package. Without braces it will import default react package.\n" ]
[ 0, 0 ]
[]
[]
[ "datagrid", "javascript", "material_ui", "reactjs" ]
stackoverflow_0072028451_datagrid_javascript_material_ui_reactjs.txt
Q: BeautifulSoup giving me many error lines when used I've installed beautifulsoup (file named bs4) into my pythonproject folder which is the same folder as the python file I am running. The .py file contains the following code, and for input I am using this URL to a simple page with 1 link which the code is supposed to retrieve. URL used as url input: http://data.pr4e.org/page1.htm .py code: import urllib.request, urllib.parse, urllib.error from bs4 import BeautifulSoup import ssl ctx = ssl.create_default_context() ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE url = input('Enter - ') html = urllib.request.urlopen(url, context=ctx).read() soup = BeautifulSoup(html, 'html.parser') # Retrieve all of the anchor tags tags = soup('a') for tag in tags: print(tag.get('href', None)) Though I could be wrong, it appears to me that bs4 imports correctly because my IDE program suggests BeautifulSoup when I begin typing it. After all, it is installed in the same directory as the .py file. However, it spits out the following lines of error when I run it using the previously provided url: Traceback (most recent call last): File "C:\Users\Thomas\PycharmProjects\pythonProject\main.py", line 16, in <module> soup = BeautifulSoup(html, 'html.parser') File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 215, in __init__ self._feed() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 241, in _feed self.endData() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 315, in endData self.object_was_parsed(o) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 320, in object_was_parsed previous_element = most_recent_element or self._most_recent_element File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1001, in __getattr__ return self.find(tag) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1238, in find l = self.find_all(name, attrs, recursive, text, 1, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1259, in find_all return self._find_all(name, attrs, text, limit, generator, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 516, in _find_all strainer = SoupStrainer(name, attrs, text, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1560, in __init__ self.text = self._normalize_search_value(text) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1565, in _normalize_search_value if (isinstance(value, str) or isinstance(value, collections.Callable) or hasattr(value, 'match') AttributeError: module 'collections' has no attribute 'Callable' Process finished with exit code 1 The lines being referred to in the error messages are from files inside bs4 that were downloaded as part of it. I haven't edited any of the bs4 contained files or even touched them. Can anyone help me figure out why bs4 isn't working? A: Are you using Python 3.10? It looks like the beautifulsoup library is using removed deprecated aliases to Collections Abstract Base Classes. More info here: https://docs.python.org/3/whatsnew/3.10.html#removed A quick fix is to paste these 2 lines just below your imports: import collections collections.Callable = collections.abc.Callable A: Andrey, I cannot comment yet, but I tried your fix. I'm using Thonny, and 3.10 in the terminal. After adding the two import collections and Callable lines, I get another error in Thonny that isn't shown in the terminal; when I run the program in the terminal it simply seems to do nothing. In Thonny it suggests that "Module has no attribute "Callable"
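To make the suggested fix concrete, a minimal sketch of the workaround applied to a script like the one above (assuming Python 3.10+, where the deprecated alias was removed, and an old bundled copy of bs4 that still references collections.Callable; the HTML string is illustrative):

import collections
import collections.abc

# Restore the alias that Python 3.10 removed, so the old bs4 code can find it.
# This assignment must run before BeautifulSoup parses anything.
collections.Callable = collections.abc.Callable

from bs4 import BeautifulSoup

html = '<p><a href="http://data.pr4e.org/page2.htm">link</a></p>'
soup = BeautifulSoup(html, 'html.parser')
for tag in soup('a'):
    print(tag.get('href', None))

The cleaner long-term fix is to install a current release (pip install beautifulsoup4) rather than keeping a downloaded copy in the project folder, since recent versions no longer use the removed alias.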
BeautifulSoup giving me many error lines when used
I've installed beautifulsoup (file named bs4) into my pythonproject folder which is the same folder as the python file I am running. The .py file contains the following code, and for input I am using this URL to a simple page with 1 link which the code is supposed to retrieve. URL used as url input: http://data.pr4e.org/page1.htm .py code: import urllib.request, urllib.parse, urllib.error from bs4 import BeautifulSoup import ssl ctx = ssl.create_default_context() ctx.check_hostname = False ctx.verify_mode = ssl.CERT_NONE url = input('Enter - ') html = urllib.request.urlopen(url, context=ctx).read() soup = BeautifulSoup(html, 'html.parser') # Retrieve all of the anchor tags tags = soup('a') for tag in tags: print(tag.get('href', None)) Though I could be wrong, it appears to me that bs4 imports correctly because my IDE program suggests BeautifulSoup when I begin typing it. After all, it is installed in the same directory as the .py file. however, It spits out the following lines of error when I run it using the previously provided url: Traceback (most recent call last): File "C:\Users\Thomas\PycharmProjects\pythonProject\main.py", line 16, in <module> soup = BeautifulSoup(html, 'html.parser') File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 215, in __init__ self._feed() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 241, in _feed self.endData() File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 315, in endData self.object_was_parsed(o) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\__init__.py", line 320, in object_was_parsed previous_element = most_recent_element or self._most_recent_element File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1001, in __getattr__ return self.find(tag) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1238, in find l = self.find_all(name, attrs, recursive, text, 1, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1259, in find_all return self._find_all(name, attrs, text, limit, generator, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 516, in _find_all strainer = SoupStrainer(name, attrs, text, **kwargs) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1560, in __init__ self.text = self._normalize_search_value(text) File "C:\Users\Thomas\PycharmProjects\pythonProject\bs4\element.py", line 1565, in _ normalize_search_value if (isinstance(value, str) or isinstance(value, collections.Callable) or hasattr(value, 'match') AttributeError: module 'collections' has no attribute 'Callable' Process finished with exit code 1 The lines being referred to in the error messages are from files inside bs4 that were downloaded as part of it. I haven't edited any of the bs4 contained files or even touched them. Can anyone help me figure out why bs4 isn't working?
[ "Are you using python 3.10? Looks like beautifulsoup library is using removed deprecated aliases to Collections Abstract Base Classes. More info here: https://docs.python.org/3/whatsnew/3.10.html#removed\nA quick fix is to paste these 2 lines just below your imports:\nimport collections\ncollections.Callable = collections.abc.Callable\n\n", "Andrey, i cannot comment yet. But i tried your fix and Im using Thonny and using 3.10 in terminal. But after adding the two import collections and callable lines. i get another error in Thonny that isnt shown in terminal. when i run the program in terminal it simply seems to do nothing. In Thonny it suggests that \"Module has no attribute \"Callable\"\n" ]
[ 2, 0 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0070677261_beautifulsoup_python.txt
Q: Unable to locate a class or view for component, livewire - jetstream I had a new project in Laravel 8; I just installed jetstream:livewire, but when I try to call some component, for example: <x-jet-dropdown> it throws the following error that I can't solve A: Seems to be a typo. In your error message it says jet-dropdowm <- with an "m" instead of an "n".
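For reference, a minimal sketch of the corrected tag in a Blade view (the slot contents and link target are illustrative; it assumes Jetstream's stock Livewire components, where x-jet-dropdown takes trigger and content slots):

<x-jet-dropdown align="right" width="48">
    <x-slot name="trigger">
        <button type="button">Open menu</button>
    </x-slot>
    <x-slot name="content">
        <x-jet-dropdown-link href="#">Profile</x-jet-dropdown-link>
    </x-slot>
</x-jet-dropdown>

Note the component name ends in "n" (dropdown); the jet-dropdowm in the error ends in "m", so Laravel cannot resolve it to any class or view.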
Unable to locate a class or view for component, livewire - jetstream
I had a new project in Laravel 8; I just installed jetstream:livewire, but when I try to call some component, for example: <x-jet-dropdown> it throws the following error that I can't solve
[ "Seem to be a typo.\nIn your error message is says jet-dropdowm <- with \"m\".\n" ]
[ 0 ]
[]
[]
[ "jetstream", "laravel_8", "laravel_livewire" ]
stackoverflow_0072624160_jetstream_laravel_8_laravel_livewire.txt
Q: How do I make an @Input property readonly in an Angular component? How would I make this @Input property read-only in an Angular component? @Input() name: string; A: You simply need to use the readonly qualifier as below, between the @Input decorator and the name identifier. @Input() readonly name: string;
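In context, a minimal sketch (the component name, selector, and template are illustrative; the initializer is an addition to satisfy strictPropertyInitialization, if your tsconfig enables it):

import { Component, Input } from '@angular/core';

@Component({
  selector: 'app-greeting',
  template: '<p>Hello, {{ name }}!</p>',
})
export class GreetingComponent {
  // readonly constrains your own TypeScript code: the component cannot
  // reassign this.name after Angular sets it. Note that with strict
  // template checking enabled, the compiler may also flag [name]
  // bindings on a readonly input, so verify against your configuration.
  @Input() readonly name: string = '';
}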
How do I make an @Input property readonly in an Angular component?
How would I make this @Input property read-only in an Angular component? @Input() name: string;
[ "You simply need to use the readonly qualifier as below, between the @Input decorator and the name identifier.\n@Input() readonly name: string;\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_components", "frontend", "javascript", "typescript" ]
stackoverflow_0074658526_angular_angular_components_frontend_javascript_typescript.txt