Dataset schema (one record per question):

content: string (length 86 to 88.9k)
title: string (length 0 to 150)
question: string (length 1 to 35.8k)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (length 30 to 130)
R: apply regression's model parameters at another spatial scale
I have three raster layers, two coarse resolution and one fine resolution. I perform simple linear regression using the coarse resolution rasters and then I extract the coefficients of the regression (slope and intercept) and I use them with my fine resolution raster. So far, I am doing this manually by printing the coefficients to the console. Is there any way I can do this automatically (i.e. use the coefficients without having to print and copy-paste them)? I have 10-20 rasters to process. Here is what I am doing so far: library(terra) library(sp) ntl = rast("path/ntl.tif") vals_ntl <- as.data.frame(values(ntl)) ntl_coords = as.data.frame(xyFromCell(ntl, 1:ncell(ntl))) combine <- as.data.frame(cbind(ntl_coords,vals_ntl)) ebbi = rast("path/tirs010.tif") ebbi <- resample(ebbi, ntl, method="bilinear") vals_ebbi <- as.data.frame(values(ebbi)) s = c(ntl, ebbi) names(s) = c('ntl', 'ebbi') block.data <- as.data.frame(cbind(combine, vals_ebbi)) names(block.data)[3] <- "ntl" names(block.data)[4] <- "ebbi" block.data <- na.omit(block.data) model <- lm(formula = ntl ~ ebbi, data = block.data) #predict to a raster summary(model) model$coefficients # here I print the coefficients to the console and then use them manually below pop = rast("path/pop.tif") lm_pred010 = 19.0540153 + 0.2797187 * pop plot(lm_pred010) res(lm_pred010) writeRaster(lm_pred010, filename = "path/lm_pred010.tif", overwrite = T) The rasters I am using: ntl = rast(ncols=101, nrows=84, nlyrs=1, xmin=509634.6325, xmax=550034.6325, ymin=161998.158, ymax=195598.158, names=c('ntl'), crs='EPSG:27700') ebbi = rast(ncols=101, nrows=84, nlyrs=1, xmin=509634.6325, xmax=550034.6325, ymin=161998.158, ymax=195598.158, names=c('focal_sum'), crs='EPSG:27700') pop = rast(ncols=407, nrows=343, nlyrs=1, xmin=509615.9028, xmax=550315.9028, ymin=161743.6035, ymax=196043.6035, names=c('pop'), crs='EPSG:27700')
[ "You can use predict to apply that fitted model to any raster you want. You just need to check that the names (band names) of the raster from which the model was fitted are the same as the names of the raster to which the model will be applied.\nlibrary(terra)\nlm_pred010 <- predict(pop, model)\n\n" ]
[ 3 ]
[]
[]
[ "coefficients", "linear_regression", "r", "raster" ]
stackoverflow_0074661475_coefficients_linear_regression_r_raster.txt
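A note on automating this further: since terra's predict() only needs the layer names to match the model's predictor, the loop below sketches how the accepted approach could be applied to the 10-20 rasters mentioned in the question. The file list and output naming are hypothetical.

```r
library(terra)

# hypothetical list of fine-resolution rasters to run the model over
fine_paths <- c("path/pop.tif", "path/pop2.tif")

for (p in fine_paths) {
  r <- rast(p)
  names(r) <- "ebbi"            # must match the predictor name in the model
  pred <- predict(r, model)     # applies intercept + slope cell by cell
  writeRaster(pred,
              filename = sub("\\.tif$", "_pred.tif", p),
              overwrite = TRUE)
}
```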
How to get Hour from Timestamps ending in Z?
The timestamps my database returns are in this format: '2022-11-25T17:54:29.819Z' I want to do hour(timestamp) to return just the hour but get this error 'error:"function hour(timestamp with time zone) does not exist"' How do I get around this? Thanks!
[ "Try the function extract(...):\nselect extract(hour from my_column) from my_table\n\n", "We can use EXTRACT for that:\nSELECT EXTRACT(HOUR FROM yourtimestamp) FROM yourtable;\n\nWhether a Z occurs in the timestamp, doesn't matter, see db<>fiddle\nSee the documentation\n" ]
[ 0, 0 ]
[]
[]
[ "postgresql", "sql" ]
stackoverflow_0074661460_postgresql_sql.txt
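A short worked example of the EXTRACT approach from the answers above; the events table and created_at column are hypothetical:

```sql
-- EXTRACT works on both timestamp and timestamptz; the trailing Z simply
-- marks UTC and is parsed as part of the timestamptz value.
SELECT created_at,
       EXTRACT(HOUR FROM created_at) AS hour_of_day
FROM events
WHERE EXTRACT(HOUR FROM created_at) = 17;
```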
How to scrape this database site?
I wanted to scrape this site, but it seems like the information is not in the HTML code. How can I scrape this site/information? https://golden.com/query/list-of-incubator-companies-NMB3 I have tried normal HTML scraping, but I am currently not very familiar with scraping at all.
[ "This site uses javascript to render it's content, however you can use it's api to scrape all of the data in json format.\nThe api endpoint is:\nurl = f\"https://golden.com/api/v1/queries/list-of-incubators-and-accelerators-NMB3/results/?page={page_number}&per_page=25&order=&search=\"\n\nAnd a simple scrapy example could look something like this.\nimport scrapy\n\nclass MySpider(scrapy.Spider):\n name = 'golden'\n\n def start_requests(self):\n for page_num in range(1,4):\n url = f\"https://golden.com/api/v1/queries/list-of-incubators-and-accelerators-NMB3/results/?page={page_num}&per_page=25&order=&search=\"\n yield scrapy.Request(url)\n\n def parse(self, response):\n data = response.json()\n yield {\"data\": data[\"results\"]}\n\n" ]
[ 2 ]
[]
[]
[ "scrape", "scrapy", "screen_scraping", "web_scraping" ]
stackoverflow_0074658085_scrape_scrapy_screen_scraping_web_scraping.txt
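For completeness, a dependency-light sketch of the same idea using requests instead of scrapy, assuming the endpoint shape and results key shown in the answer above:

```python
import requests

BASE = ("https://golden.com/api/v1/queries/"
        "list-of-incubators-and-accelerators-NMB3/results/")

for page in range(1, 4):
    resp = requests.get(BASE, params={"page": page, "per_page": 25}, timeout=30)
    resp.raise_for_status()
    for row in resp.json().get("results", []):
        print(row)  # each row is one incubator record
```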
Swing how to close JPanel programmatically
My main class extends JPanel and I create a table and a button on this panel. Now I want to close this panel when the user presses it. On the internet, closing examples are about JFrame. Is there a solution for JPanel? There is a panel; on the panel there is a table and a button; I add an action listener to the button; I want to close the panel if the user presses the button. This is my code; now I want the panel to close when the user presses btnDelete: public class ListUsers extends JPanel { ResultSet rs; ClientDAO dao; JScrollPane scrollPane; JTable table; Object columnId; public ListUsers() throws SQLException { dao = new ClientDAO(); rs=dao.getUsers(); ResultSetMetaData md = rs.getMetaData(); int columnCount = md.getColumnCount(); Vector<String> columns = new Vector(columnCount); //store column names for(int i=1; i<=columnCount; i++) columns.add(md.getColumnName(i)); Vector data = new Vector(); Vector row; //store row data while(rs.next()) { row = new Vector(columnCount); for(int i=1; i<=columnCount; i++) { row.add(rs.getString(i)); } data.add(row); } table = new JTable(data, columns); table.setPreferredScrollableViewportSize(new Dimension(500, 70)); table.setFillsViewportHeight(true); table.setVisible(true); table.validate(); table.setEnabled(true); add(table); table.addMouseListener(new MouseAdapter() { public void mouseClicked(MouseEvent e) { final JDialog dialog = new JDialog(); dialog.setSize(300, 200); dialog.setLayout(null); columnId = table.getValueAt(table.getSelectedRow(),0); Integer no = new Integer(columnId.toString()); final int i =no.intValue(); String columnNo =columnId.toString(); String name = table.getValueAt(table.getSelectedRow(),1).toString(); String surname = table.getValueAt(table.getSelectedRow(),2).toString(); String gender = table.getValueAt(table.getSelectedRow(),3).toString(); String labelText ="<html>Name :<b>"+name+"</b><br>Surname :<b>"+surname+"</b><br>Gender :<b>"+gender+"</b></html>"; JLabel label=new JLabel(labelText); label.setVisible(true); label.setBounds(10, 10,300, 100); dialog.add(label); JButton btnUpdate= new JButton("Update"); btnUpdate.setVisible(true); btnUpdate.setBounds(10,100,100,35); JButton btnDelete= new JButton("Delete"); btnDelete.setVisible(true); btnDelete.setBounds(150,100,100,35); dialog.add(btnDelete); dialog.add(btnUpdate); btnUpdate.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { dialog.setVisible(false); } }); btnDelete.addActionListener(new ActionListener() { @Override public void actionPerformed(ActionEvent e) { try { dao.deleteUser(i); } catch (SQLException e1) { // TODO Auto-generated catch block e1.printStackTrace(); } dialog.setVisible(false); setVisible(false); System.exit(0); } }); dialog.setVisible(true); } }); } }
[ "public void actionPerformed(ActionEvent e) {\n JComponent comp = (JComponent) e.getSource();\n Window win = SwingUtilities.getWindowAncestor(comp);\n win.dispose();\n}\n\ne.getSource(): Get Java Component\ngetWindowAncestor: Returns the first Window ancestor of Component\nwin.dispose(): \nReleases all of the native screen resources used by this\n its subcomponents, and all of its owned children.\n", "If you want to 'close' a JPanel, you can hide it.\nmyPanel.setVisible(false);\n\nAnd if/when you want to 'open' it again:\nmyPanel.setVisible(true);\n\n", "Assuming you want to close your swing application when you press the button. You can just use:\nSystem.exit(0);\n", "You may try using Frame.pack() again it worked for me.\n", "When you extends a JPanel you import the necesary :\n JComponent comp = (JComponent) e.getSource();\n Window win = SwingUtilities.getWindowAncestor(comp);\n win.dispose();\n\nbut when you extend for a JFrame you just use this.dispose() or yourFrameInstance.dispose();\n" ]
[ 8, 7, 3, 3, 0 ]
[]
[]
[ "java", "swing" ]
stackoverflow_0026762324_java_swing.txt
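A self-contained sketch tying the accepted answer together: a button inside a JPanel that disposes whatever window happens to contain it. All names are illustrative.

```java
import javax.swing.*;
import java.awt.*;

public class ClosablePanelDemo {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Demo");
            JPanel panel = new JPanel();
            JButton close = new JButton("Close");
            close.addActionListener(e -> {
                // walk up from the clicked component to its enclosing Window
                Window win = SwingUtilities.getWindowAncestor((Component) e.getSource());
                if (win != null) {
                    win.dispose(); // releases the window and everything inside it
                }
            });
            panel.add(close);
            frame.add(panel);
            frame.setDefaultCloseOperation(JFrame.DISPOSE_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
        });
    }
}
```

Note that dispose() closes the containing window; if the panel should merely disappear while the window stays open, setVisible(false) on the panel (plus revalidate()/repaint() on its parent) is the lighter option.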
Pandas equivalent of pyspark reduce and add?
I have a dataframe like the following, where Day_1, Day_2, Day_3 are the number of impressions in the past 3 days. df = pd.DataFrame({'Day_1': [2, 4, 8, 0], 'Day_2': [2, 0, 0, 0], 'Day_3': [1, 1, 0, 0]}, index=['user1', 'user2', 'user3', 'user4']) df Day_1 Day_2 Day_3 user1 2 2 1 user2 4 0 1 user3 8 0 0 user4 0 0 0 Now, I need to check if a user had any impression in the past n days. For example, if num_days = 2, I need to add a new column, impression, where it gets 1 if the sum of Day_1 and Day_2 is greater than zero, and 0 otherwise. Here is what I expect to see: Day_1 Day_2 Day_3 impression user1 2 2 1 1 user2 4 0 1 1 user3 8 0 0 1 user4 0 0 0 0 It is a straightforward process in pyspark, where I use something like this: imp_cols = ['Day_'+str(i) for i in range(1, num_days+1)] df = df.withColumn("impression",reduce(add, [F.col(x) for x in imp_cols]))
[ "IIUC, you can use numpy.where with pandas.DataFrame.sum.\nTry this :\ndf[\"impression\"] = np.where(df.sum(axis=1).gt(0), 1, 0)\n\n# Output :\nprint(df)\n​\n Day_1 Day_2 Day_3 impression\nuser1 2 2 1 1\nuser2 4 0 1 1\nuser3 8 0 0 1\nuser4 0 0 0 0\n\nIf you want to select a specific columns/days, you can use pandas.DataFrame.filter :\nnum_days = 2\nl = list(range(1, num_days+1))\npat= \"|\".join([str(x) for x in l])\n\nsub_df = df.filter(regex=\"Day_[{}]\".format(pat))\n\ndf[\"impression\"] = np.where(sub_df.sum(axis=1).gt(0), 1, 0)\n\n", "You can use the DataFrame.loc method to select the columns you want to sum, and then use the DataFrame.sum method to compute the sum of these columns. You can then use the DataFrame.clip method to set values less than 1 to 0 and values greater than or equal to 1 to 1. Finally, you can use the DataFrame.assign method to add the new impression column to the dataframe.\nimport pandas as pd\n\ndf = pd.DataFrame({'Day_1': [2, 4, 8, 0],\n 'Day_2': [2, 0, 0, 0],\n 'Day_3': [1, 1, 0, 0],\n index=['user1', 'user2', 'user3', 'user4'])\n\nnum_days = 2\nimp_cols = ['Day_'+str(i) for i in range(1, num_days+1)]\n\ndf = df.loc[:, imp_cols].sum(axis=1).clip(0, 1).to_frame(\"impression\")\n\ndf = df.assign(impression=impression)\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074661465_dataframe_pandas_python.txt
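A corrected, runnable version of the approach in the answers above. The column layout comes from the question; using clip(upper=1) to map any positive count to 1 is an assumption (impression counts are non-negative here).

```python
import pandas as pd

df = pd.DataFrame(
    {"Day_1": [2, 4, 8, 0], "Day_2": [2, 0, 0, 0], "Day_3": [1, 1, 0, 0]},
    index=["user1", "user2", "user3", "user4"],
)

num_days = 2
imp_cols = [f"Day_{i}" for i in range(1, num_days + 1)]

# row-wise sum over the selected days; clipping at 1 yields the 0/1 flag
df["impression"] = df[imp_cols].sum(axis=1).clip(upper=1)
print(df)
```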
QueryParameter in WireMock
I am trying to implement a basic query param using WireMock. Here is my code stubFor(get(urlEqualTo("/getWithQueryParam?searchFor=WireMock")) .withQueryParam("searchFor" ,equalTo("WireMock")) .willReturn(aResponse() .withStatusMessage("This Page is Authenticated And having query Parameter"))); When or why do we use withQueryParam("searchFor", equalTo("WireMock")) if we are giving the params in the urlEqualTo method itself? And is there any way I can read the queryParam and put it in the response body?
[ "ReponseTemplating should be used to handle to request params. and urlPathEqualTo will used instead if urlEqualTo when we are using withQueryParam\n", "Per the prior answer, this is correct and here's how to do it. First, ensure you have the 'response-template' transformer set.\nwm.stubFor(get(urlPathEqualTo(\"/getWithQueryParam\"))\n .willReturn(aResponse()\n .withBody(\"Your value for searchFor is {{request.query.searchFor}}\")\n .withTransformers(\"response-template\")));\n\n" ]
[ 1, 0 ]
[]
[]
[ "api", "java", "mocking", "wiremock" ]
stackoverflow_0050491518_api_java_mocking_wiremock.txt
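The {{request.query.searchFor}} template in the second answer only renders when the response-template transformer is registered. A sketch of enabling it when constructing the server, based on the WireMock 2.x Java API:

```java
import com.github.tomakehurst.wiremock.WireMockServer;
import com.github.tomakehurst.wiremock.core.WireMockConfiguration;
import com.github.tomakehurst.wiremock.extension.responsetemplating.ResponseTemplateTransformer;

public class WireMockSetup {
    public static WireMockServer start() {
        // "false" = templating applies only to stubs that opt in
        // via withTransformers("response-template")
        WireMockServer wm = new WireMockServer(
                WireMockConfiguration.options()
                        .dynamicPort()
                        .extensions(new ResponseTemplateTransformer(false)));
        wm.start();
        return wm;
    }
}
```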
Unity: animation resets scale
My problem is that I have an animator with two animations. The first animation ([1]: https://i.stack.imgur.com/wZnDr.png) doesn't change the scale but the second ([2]: https://i.stack.imgur.com/BpDIJ.png) does. If the first animation plays and I change the scale via code or the inspector, it is instantly reset to 1. This however doesn't happen if I delete the second animation out of the animator. It seems like the animator knows there is an animation that changes the scale and therefore forbids this for everything else, even though that animation isn't currently playing. Is there any way to stop that from happening? Thanks for any feedback
[ "Make sure in the animation inspector you have \"write defaults\" UNCHECKED for that specific animation\nhttps://i.stack.imgur.com/yGvR6.png\n" ]
[ 0 ]
[]
[]
[ "animation", "scale", "unity3d" ]
stackoverflow_0070792513_animation_scale_unity3d.txt
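If unchecking Write Defaults is not an option, a common workaround is to re-apply the desired scale after the Animator has written its values for the frame. A minimal sketch; the component and field names are illustrative:

```csharp
using UnityEngine;

public class ScaleOverride : MonoBehaviour
{
    public Vector3 targetScale = Vector3.one;

    // The Animator writes animated properties during the animation update,
    // which runs before LateUpdate, so this assignment wins every frame.
    void LateUpdate()
    {
        transform.localScale = targetScale;
    }
}
```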
swift - Sign in With Apple "Sign Up not completed" error
I made all the setup for Sign in with Apple, but when pressing Continue and passing the Face ID check I am getting a "Sign Up not completed" error. However, no error is thrown to the delegate. Then I tried to create a test project with my friend's paid developer account and everything was fine, no errors. I haven't connected to the API, just trying to print Email and Fullname. Maybe something is wrong with my company's Developer account?
[ "I had the same issue. I did followings and it was working fine.\n\nGo to developer apple and choose app's bundle id\nRemove Sign In With Apple and save\nAgain go to your app's bundle id and add Sign In With Apple and save.\n\nHope this works!\n", "I had created Auth Key with Apple Sign selected in which is never used. When I remove the unused Auth Key it worked.\n", "I recently encountered this error and the root problem turned out to be some back-end data problem with my Apple account.\nI ultimately had to raise an Apple Technical Support Request, generate some debug information for them, and then they ultimately fixed their back-end data for my account. The overall process took just under three weeks.\n", "For me the reason was , I was using my bundle id in the place where Apple services ID should be entered in cognito user pool.\n", "As suggested on this SO thread, this is sometimes an issue one Apple's end. I have tested with Apple's own project from the documentation and ran into the same issue.\nI reached out to Apple DTS support and they replied the following:\n\nThe underlying issue was unexpected. We have confirmed the issue was\ncaused by an internal data feed sync operation and was resolved today.\n\nSo in my case, the issue went away after 2 days without any changes.\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "ios", "sign_in_with_apple", "swift" ]
stackoverflow_0060834369_ios_sign_in_with_apple_swift.txt
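Since the question notes that no error reaches the delegate, it can still help to log whatever the authorization controller does report. A sketch of the relevant ASAuthorizationControllerDelegate method; SignInCoordinator is a hypothetical class that owns the flow:

```swift
import AuthenticationServices

extension SignInCoordinator: ASAuthorizationControllerDelegate {
    func authorizationController(controller: ASAuthorizationController,
                                 didCompleteWithError error: Error) {
        // Configuration problems often surface as a generic code rather
        // than a descriptive message, so log both.
        if let authError = error as? ASAuthorizationError {
            print("Sign in with Apple failed: \(authError.code) - \(authError.localizedDescription)")
        } else {
            print("Sign in with Apple failed: \(error)")
        }
    }
}
```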
How to create an array forEach of the playlist card ids on an HTML landing page and assert order via Cypress
How to create an array forEach of the playlist card ids and test to assert array positioning/order on the page, ex. 0=130, 1=100 etc., in Cypress. I think I could use this selector, but unsure how to create a forEach array for it... data-object-type="playlists" Test it('verify expected order on program landing page', () => { cy.visit(learnerLandingPage); }); I only tried asserting against the number of playlist cards present, which works as expected, but unsure how/if I can use the playlists data-object-type cy.get('[data-object-type="playlists"]').should('have.length', 3); Looking for the cleanest way to accomplish this while also being able to work across different environments where the ids in the array will be different
[ "The cleanest way is using cy.each()\nconst order = ['130', '100' , '1']\n\ncy.get('[data-object-type=\"playlists\"]').each(($el, index) => {\n expect($el.attr('data-object-id')).to.eq(order[index])\n})\n\nwhere $el.attr('data-object-id') uses jQuery to extract the id attribute.\n\nAn equivalent using .invoke() would be\nconst order = ['130', '100' , '1']\n\ncy.get('[data-object-type=\"playlists\"]').each(($el, index) => {\n cy.wrap($el)\n .invoke('attr', 'data-object-id')\n .should('eq', order[index])\n})\n\nThis has the advantage that .should() uses retry if the table is loaded asynchronously.\n", "Cypress comes bundled with lodash so you can use the _.map() util to get only the values of each data-object-type.\nThe data-object-type should be unique and stable across environments. You will only need to know the expected values of each.\nHere is a working example.\nconst expectedOrder = [\"130\", \"100\", \"1\"];\ncy.get(\"[data-object-type]\")\n .should(\"have.length\", 3)\n // get array of data-object-type attr\n .then(($list) =>\n Cypress._.map($list, ($el) => $el.getAttribute(\"data-object-type\"))\n )\n .should(\"deep.equal\", expectedOrder);\n\n" ]
[ 2, 1 ]
[]
[]
[ "arrays", "automation", "cypress", "foreach", "javascript" ]
stackoverflow_0074660475_arrays_automation_cypress_foreach_javascript.txt
Why are headings and paragraphs considered as clickable elements (HTML)
I am trying to make my webapp (created with Python Dash) more accessible, e.g. for users relying on screen readers. In order to test my webapp I use the Accessibility Inspector built into Firefox. Here I run into issues concerning focusability and interactivity of elements which are supposed to show text (e.g. headings and paragraphs). First I created my elements like this: html.H1( children="this is my header" ) html.P( children="this is text" ) In these cases the Accessibility Inspector yields the following warning: Clickable elements must be focusable and should have interactive semantics In order to resolve this, I added 'tabIndex' to my elements: html.H1( children="this is my header", tabIndex='0' ) html.P( children="this is text", tabIndex='0' ) This eliminated one part of the warning. Now I got this warning: Focusable elements should have interactive semantics From what I gathered on accessibility so far, it is bad practice to give non-interactive elements like headings a 'tabIndex'. So my second approach is probably already going in the wrong direction. Can you please tell me how to deal with the first warning instead? And why are headings and paragraphs considered clickable elements? They do not contain a link or anything else. Update: If I take a look at the HTML in Firefox, the header looks like this: screenshot HTML
[ "I looked at some sample Dash applications, https://dash.gallery/dash-clinical-analytics/, and indeed the headings on that demo are \"clickable\".\n\n\"Clinical Analytics\" is an <h5> and \"Welcome to the Clinical Analytics Dashboard\" is an <h3>. Both have an event handler for the click event.\n\nThat will cause some screen readers to say those elements are \"clickable\". Here's what NVDA says when I use the H key to navigate to the headings:\n\n\"Clinical Analytics, heading, clickable, level 5\"\n\"Welcome to the Clinical Analytics Dashboard, heading, clickable, level 3\"\n\nThe screen reader is trying to help the user let them know that an event handler is on that element and that it can be clicked on. However, headings are not normally considered interactive elements so it'll be confusing.\nThis seems like a bug in Dash.\nThe error you're getting, \"Clickable elements must be focusable and should have interactive semantics\" is accurate. It has two parts:\n\n\"elements must be focusable\" - anything that you can interact with using a mouse should be interactable (word?) with the keyboard. In order to interact with the element, you first have to be able to get your keyboard focus to it, thus the error message.\n\n\"elements should have interactive semantics\" - when a screen reader user navigates to an interactive element, they should hear the role of that element and that role should convey that the element is an interactive element. The headings do have semantics, NVDA says they're \"headings\", as noted above, but the error message says it should be \"interactive semantics\", not just \"semantics\".\n\n\nAll this info doesn't explain how to fix your problem but I wanted to post the \"why\" of the error message, and I couldn't fit all this into a simple comment :-)\n" ]
[ 0 ]
[]
[]
[ "accessibility", "html", "plotly_dash" ]
stackoverflow_0074657472_accessibility_html_plotly_dash.txt
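If a heading genuinely needs to be clickable, the usual fix is to put the interactive element inside the heading rather than attaching the click handler to the heading itself; that satisfies both halves of the warning. A small sketch (the handler name is illustrative):

```html
<!-- The button is focusable, keyboard-operable, and announced as a button,
     while the h3 keeps its heading semantics for screen-reader navigation. -->
<h3>
  <button type="button" onclick="toggleSection()">
    Welcome to the Clinical Analytics Dashboard
  </button>
</h3>
```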
What is the difference between SUMX(ALL...) vs CALCULATE(SUMX.., ALL..)?
Following are 2 measures: SUMX ( ALL ( SALES ) , SALES[AMT] ) CALCULATE ( SUMX ( SALES, SALES[AMT] ), ALL (SALES) ) Similarly for the following 2 measures: SUMX ( FILTER ( SALES, SALES[QTY]>1 ), SALES[QTY] * SALES[AMT] ) CALCULATE ( SUMX ( SALES, SALES[QTY] * SALES[AMT] ), FILTER ( SALES, SALES[QTY]>1 ) ) Both above examples clear the natural filters on the SALES table and perform the aggregation. I'm trying to understand what is the significance/use case of using either approach maybe in terms of logic or performance?
[ "In DAX you can achieve the same results from different DAX queries/syntax.\nSo based on my understanding both the DAX provide the same result :\nSUMX ( ALL ( SALES ) , SALES[AMT] )\n\nCALCULATE ( SUMX ( SALES, SALES[AMT] ), ALL (SALES) )\n\nAnd the 1st one is a more concise way to achieve way rather than the 2nd one in all cases/scenarios.\nCurrently when I tested these out with <100 records in a table ; the performance was the same for both the measures.\nBut ideally the 1st scenario would be quicker then the 2nd one which we can test out by >1 million record through DAX studio.\nCan you please share your thoughts on the same?\n", "SUMX ( ALL ( SALES ) , SALES[AMT] )\n\nCALCULATE ( SUMX ( SALES, SALES[AMT] ), ALL (SALES) )\n\nIn these two DAX functions, ALL() is doing two very different things and it is unfortunate the same name was used. In the first one, ALL() is being used as a table function and returns a table. In the second one, ALL() is being used to remove filters and could be replaced with REMOVEFILTERS() (the first one cannot be replaced this same way).\nThis is a lengthy and detailed topic and I suggest you make a cup of coffee and have a read here: https://www.sqlbi.com/articles/managing-all-functions-in-dax-all-allselected-allnoblankrow-allexcept/\n", "At the risk of stating the obvious, you are wrapping a function (SUMX) inside a process represented by CALCULATE function.\nIt is an actual process, which will attempt a context transition.\nThe answer to your question would really depend on where/how these measures get injected into the model.\nFor reference, here are just some of the relevant SQLBI articles: https://www.sqlbi.com/articles/introducing-calculate-in-dax/\nhttps://www.sqlbi.com/articles/understanding-context-transition-in-dax/\n" ]
[ 3, 0, 0 ]
[]
[]
[ "dax", "powerbi" ]
stackoverflow_0065742878_dax_powerbi.txt
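To make the second answer's distinction concrete, the filter-removal form can be written with REMOVEFILTERS, which states the intent explicitly. A sketch; measure names are illustrative:

```dax
// Table form: ALL ( SALES ) returns the whole table, which SUMX iterates
Sales Amt All v1 := SUMX ( ALL ( SALES ), SALES[AMT] )

// Filter-removal form: equivalent to CALCULATE ( ..., ALL ( SALES ) )
Sales Amt All v2 :=
CALCULATE (
    SUMX ( SALES, SALES[AMT] ),
    REMOVEFILTERS ( SALES )
)
```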
Stuck with woocommerce_rest_authentication_error: Invalid signature - provided signature does not match
The issue below was posted by me on https://github.com/XiaoFaye/WooCommerce.NET/issues/414, but since it may not be related to WooCommerce.Net at all but rather, at a lower level, to Apache/WordPress/WooCommerce itself, I am posting the same question here. I am really stuck with the famous error: WebException: {"code":"woocommerce_rest_authentication_error","message":"Invalid signature - provided signature does not match.","data":{"status":401}} FYI: I have two WordPress instances running. One on my local machine and one on a remote server. The remote server is, like my local machine, in our company's LAN I am running WAMP on both machines to run Apache and host WordPress on port 80 The error ONLY occurs when trying to call the Rest api on the remote server. Connecting to the local rest api, the Rest Api/WooCommerceNet is working like a charm :-) From my local browser I can log in to the remote WooCommerce instance without any problem On the remote server I have defined WP_SITEURL as 'http://[ip address]/webshop/' and WP_HOME as 'http://[ip address]/webshop' in wp-config.php Calling the api url (http://[ip address]/webshop/wp-json/wc/v3/) from my local browser works OK. I get the normal JSON response Authentication is done through the WooCommerce.Net wrapper which only requires a consumer key, consumer secret and the api url. I am sure I am using the right consumer key and secret and the proper api url http://[ip address]/webshop/wp-json/wc/v3/ (see previous bullet) I already played around with the authorizedHeader variable (true/false) when instantiating a WooCommerce RestApi but this has no effect Is there anybody who can point me in the direction of a solution? Your help will be much appreciated!
[ "In my case, the problem was in my url adress. The URL Adress had two // begin wp-json\nUrl Before the solution: http://localhost:8080/wordpress//wp-json/wc/v3/\nURL Now, and works ok: http://localhost:8080/wordpress/wp-json/wc/v3/\nI use with this sentence.\nRestAPI rest = new RestAPI(cUrlApi, Funciones.CK, Funciones.CS,false);\nWCObject wc = new WCObject(rest);\nvar lstWooCategorias = await wc.Category.GetAll();\nI hope my answer helps you.\n", "Had the same issue. My fault was to define my url incorrect: http:// instead of https://.\n" ]
[ 1, 0 ]
[]
[]
[ "apache", "wamp", "woocommerce", "woocommerce_rest_api", "wordpress" ]
stackoverflow_0058554329_apache_wamp_woocommerce_woocommerce_rest_api_wordpress.txt
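A sketch of the kind of URL double-check the answers point at, using the WooCommerce.NET wrapper from the question. The host, keys, and surrounding async context are placeholders:

```csharp
// Common signature pitfalls: doubled slashes, an http/https mismatch,
// and a missing trailing slash after v3.
string apiUrl = "http://192.0.2.10/webshop/wp-json/wc/v3/";

RestAPI rest = new RestAPI(apiUrl, consumerKey, consumerSecret);
WCObject wc = new WCObject(rest);
var products = await wc.Product.GetAll(); // a 401 here usually means the URL or keys are off
```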
How to run Firebase Firestore Cloud Functions locally and debug them, with WebStorm and Ngrok?
It's known that you have to test your functions before deploying them to Firebase, to avoid loops and unwanted behavior. I need to run a local environment to test them first. How can I do that?
[ "//How to Debug Firebase Functions locally\n1 - Start EMU for debugging functions(required Java installed)\nfirebase emulators:start --inspect-functions\n2 - Connect Webstorm Debugger\nEdit Run/Debug configurations -> Attach to Node/Chrome (select the correct port(just in case))(9229 by default)\n3 - Run ngrok and server the functions port\n(http://127.0.0.1:5001/your-firebase-app)\n| here |\n./ngrok http 127.0.0.1:5001\n4 - Finish output\n|ngrok tunnel| |rest of your API endpoints|\nhttp://4386-2600-8801-192-4500-4dcd-e70f-e09f-63cb.ngrok.io/wherever-you-app-is/app\n", "To run Firebase Firestore Cloud Functions locally and debug them, you can use the firebase emulators:start command, this will allow you to test your functions on your local machine, using the same runtime and dependencies as the production environment.\nTo debug your functions, you can use the console.log method, and use the debug command in the Cloud Functions shell to attach a debugger to the running function. This will allow you to step through your code, set breakpoints, and inspect variables, which can help you identify and fix any issues with your functions.\n$ firebase emulators:start\n\n# Output\ni emulators: Starting emulators: functions, firestore, hosting\ni functions: Using Node.js version: 12\ni functions: Emulator started at http://localhost:5001\ni firestore: Emulator started at http://localhost:8080\ni hosting: Emulator started at http://localhost:5000\n\n$ firebase functions:shell\n\n# In the Cloud Functions shell\n> debug functions/helloWorld\n\n# Output\n[debug] functions:helloWorld: Listening on port 5001.\n[debug] functions:helloWorld: Stopped the emulator.\n\nA bit more documentation: https://firebase.google.com/docs/emulator-suite\n" ]
[ 0, 0 ]
[]
[]
[ "firebase", "google_cloud_firestore", "reactjs" ]
stackoverflow_0074661541_firebase_google_cloud_firestore_reactjs.txt
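A minimal HTTPS function to exercise the emulator flow described in the first answer. The export name app matches the .../wherever-you-app-is/app path shape shown there, but is otherwise an assumption:

```js
// functions/index.js (firebase-functions v1-style API)
const functions = require("firebase-functions");

exports.app = functions.https.onRequest((req, res) => {
  console.log("request path:", req.path); // shows up in the emulator log
  res.send("hello from the emulator");
});
```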
Background color of bokeh layout
I'm playing around with the Bokeh sliders demo (source code here), and I'm trying to change the background color of the entire page. Though changing the color of the figure is easy using background_fill_color and border_fill_color, the rest of the layout still appears on top of a white background. Is there an attribute I can add to the theme that will allow me to set the color via curdoc().theme?
[ "There's not currently any Python property that would control the HTML background color. HTML and CSS is vast territory, so instead of trying to make a corresponding Python property for every possible style option, Bokeh provides a general mechanism for supplying your own HMTL templates so that any standard familiar CSS can be applied. \nThis is most easily accomplished by adding a templates/index.html file to a Directory-style Bokeh App. The template should be Jinja2 template. There are two substitutions required to be defined in the <head>:\n\n{{ bokeh_css }}\n{{ bokeh_js }}\n\nas well as two required in <body>:\n\n{{ plot_div }}\n{{ plot_script }}\n\nThe app will appear wherever the plot_script appears in the template. Apart from this, you can apply whatever HTML and CSS you need. You can see a concrete example here:\nhttps://github.com/bokeh/bokeh/blob/master/examples/app/crossfilter\nA boiled down template that changes the page background might look like this:\n<!DOCTYPE html>\n<html lang=\"en\">\n <head>\n <style>\n body { background: #2F2F2F; }\n </style>\n <meta charset=\"utf-8\">\n {{ bokeh_css }}\n {{ bokeh_js }}\n </head>\n <body>\n {{ plot_div|indent(8) }}\n {{ plot_script|indent(8) }}\n </body>\n</html>\n\n", "Changing the .bk-root style worked for me:\nfrom bokeh.resources import Resources\nfrom bokeh.io.state import curstate\nfrom bokeh.io import curdoc, output_file, save\nfrom bokeh.util.browser import view\nfrom bokeh.models.widgets import Panel, Tabs\nfrom bokeh.plotting import figure \n\nclass MyResources(Resources):\n @property\n def css_raw(self):\n return super().css_raw + [\n \"\"\".bk-root {\n background-color: #000000;\n border-color: #000000;\n }\n \"\"\"\n ]\n\nf = figure(height=200, width=200)\nf.line([1,2,3], [1,2,3])\n\ntabs = Tabs( tabs=[ Panel( child=f, title=\"TabTitle\" ) ], height=500 )\n\noutput_file(\"BlackBG.html\")\ncurstate().file['resources'] = MyResources(mode='cdn')\nsave(tabs)\nview(\"./BlackBG.html\")\n\n", "If you are using row or column for displaying several figures in the document, a workaround is setting the background attribute like this:\ncurdoc().add_root(row(fig1, fig2, background=\"beige\"))\n", "I know it is not the cleanest way to do it, but a workaround would be to modify file.html inside bokeh template folder\nFILE PATH\nCODE SNIPPET\n" ]
[ 6, 3, 1, 0 ]
[ "From Bokeh documentation:\n\nThe background fill style is controlled by the background_fill_color\n and background_fill_alpha properties of the Plot object:\nfrom bokeh.plotting import figure, output_file, show\n\noutput_file(\"background.html\")\n\np = figure(plot_width=400, plot_height=400)\np.background_fill_color = \"beige\"\np.background_fill_alpha = 0.5\n\np.circle([1, 2, 3, 4, 5], [2, 5, 8, 2, 7], size=10)\n\nshow(p)\n\n\n\n" ]
[ -1 ]
[ "bokeh", "python" ]
stackoverflow_0044607084_bokeh_python.txt
Q: Is it best practice to have a RecyclerView inside RecyclerView In my case I have a list of objects and each object has a list of other objects, like:
class Parent(val title, val childs: List<Child>)
class Child(val title)
The above code is an example and there are a lot more fields to implement, like images (for the sake of the question let's keep it simple). Now what I want is:
To use a custom layout for both parent and child items
The child items must be displayed inside the parent item
Based on the above, I decided to use a RecyclerView inside a RecyclerView. Some other points:
The parent RecyclerView will be scrollable but the child RecyclerView will have a fixed height (so there is no need to implement nested scrolling).
Both RecyclerViews have a vertical orientation.
When clicking the parent item the visibility of the child RecyclerView will be toggled (this is not very relevant to the actual question, but just to get an idea of what I have in mind).
Using my poor skills, I would initialize the child RecyclerView inside the parent RecyclerView.ViewHolder, and in the onBindViewHolder method (of the parent RecyclerView.Adapter) I would assign the LayoutManager and initialize/assign the child RecyclerView.Adapter to the child RecyclerView. I would also implement a Listener so that when a child item is clicked, a callback is called with two parameters: the index of the parent and the index of the child.
I know how to implement what I described above; the actual question is whether this is "correct", or better said, best practice. As far as I can tell, this implementation seems dirty and not very optimized (maybe I am wrong). Please explain whether there is a better way to implement what I described above (with examples if possible). If not, an explanation of why there is no better way would also be useful.
A: There are multiple ways to go about this.
The method I would use is the one you described above; however, you should be careful to "set up" the Child's RecyclerView in the init block of the Parent's ViewHolder, so that you don't do too much work during the binding process of the Parent's items. I would use this method because it makes it easier to implement the:
"When clicking the parent item the visibility of the child RecyclerView will be toggled (this is not very relevant to the actual question, but just to get an idea of what I have in mind)."
feature, so this is somewhat relevant, in that you may have a harder time implementing it with the next method.
Another way of doing this is by implementing an adapter that takes the RecyclerView.Adapter's viewType into consideration, which you can use to distinguish between Parent and Child model types so that the corresponding ViewHolder is used for each type, making the list look like a list with clear sections whose headers are the Parent models. This method is great for making a list with sections; however, the toggle functionality requires careful consideration of how you want to hide the ViewHolders (by looking them up specifically, or by using extra variables to save the state of the sections) and careful conditions in the ViewHolder's bind() method, which may or may not cause a headache.
So basically I would use the first method. Hope this helps!
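As a rough illustration of the first approach described in the answer, a minimal Kotlin sketch could look like the following. The layout IDs (R.layout.item_parent, R.id.parentTitle, R.id.childRecyclerView), the ChildAdapter class and the shape of the click callback are illustrative assumptions, not taken from the question:

import android.view.LayoutInflater
import android.view.View
import android.view.ViewGroup
import android.widget.TextView
import androidx.core.view.isVisible
import androidx.recyclerview.widget.LinearLayoutManager
import androidx.recyclerview.widget.RecyclerView

class ParentAdapter(
    private val parents: List<Parent>,
    private val onChildClick: (parentIndex: Int, childIndex: Int) -> Unit
) : RecyclerView.Adapter<ParentAdapter.ParentViewHolder>() {

    inner class ParentViewHolder(view: View) : RecyclerView.ViewHolder(view) {
        val title: TextView = view.findViewById(R.id.parentTitle)
        val childList: RecyclerView = view.findViewById(R.id.childRecyclerView)

        init {
            // One-time setup lives here instead of onBindViewHolder so it
            // does not run again every time the row is rebound
            childList.layoutManager = LinearLayoutManager(view.context)
            itemView.setOnClickListener {
                // Toggle the child list when the parent row is clicked
                childList.isVisible = !childList.isVisible
            }
        }
    }

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ParentViewHolder =
        ParentViewHolder(
            LayoutInflater.from(parent.context)
                .inflate(R.layout.item_parent, parent, false)
        )

    override fun onBindViewHolder(holder: ParentViewHolder, position: Int) {
        val item = parents[position]
        holder.title.text = item.title
        // Per-item work only: hand the child list its own small adapter
        holder.childList.adapter = ChildAdapter(item.childs) { childIndex ->
            onChildClick(holder.bindingAdapterPosition, childIndex)
        }
    }

    override fun getItemCount() = parents.size
}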
Is it best practice to have a RecyclerView inside RecyclerView
In my case I have a list of objects and each object has a list of other objects, like: class Parent(val title, val childs: List<Child>) class Child(val title) The above code is an example and there are a lot more fields to implement, like images (for the sake of the question lets keep it simple). Now what I want is: To use custom layout for both parent and child items The child items must be displayed inside the parent item Based on the above, I decided to use a RecyclerView inside a RecyclerView. Some other options are: The parent RecyclerView will be scrollable but the child RecyclerView will have fixed height (so there is no need to implement nested scrolling). Both RecyclerViews have a vertical orientation. When clicking the parent item the visibility of the child RecyclerView will be toggled (this is not very relevant to the actual question, but just to get an idea of what I have in mind). Using my poor skills, I would initialize the child RecyclerView inside the parent RecyclerView.ViewHolder and onBindViewHolder method (of the parent RecyclerView.Adapter) I would assign the LayoutManager and initialize/assign the child RecyclerView.Adapter to the child RecyclerView. I would also implement a Listener that when clicking a child item, a callback will be called containing two parameters, the index of parent and index of child. I know how to implement what I described above, the actual question is if this is "correct" or better say, the best practice. As far as I can think, this implementation seems dirty and not very optimized (maybe I am wrong). I want you to explain me if there is better way to implement what I described above (with examples if possible). If not, providing an explanation why there is no better way will be also useful.
[ "There are multiple ways to go on about this.\n\nThe method I would use is the one you described above, however, you should be careful so that you \"set up\" Child's RecyclerView in the init block of the Parent's ViewHolder, so you don't do too much work on the binding process of the Parent's items binding process. And, I would use this method, because it is easier to implement the:\n\n\nWhen clicking the parent item the visibility of the child RecyclerView will be toggled (this is not very relevant to the actual question, but just to get an idea of what I have in mind).\n\nfeature, so this is kinda relevant, in the aspect that you can have a harder time to implement it using the next method.\n\nAnother way of doing this, is by implementing an adapter that takes into consideration the RecyclerView.Adapter's viewType, which you can use to distinguish between Parent and Child model types so you use the corresponding ViewHolder for each type, making the list look like a list with clear sections, the section headers of which are the Parent's model. This method is great for making a list with sections, however, making a toggle functionality requires careful considerations on how you want to hide the ViewHolders by looking for them specifically or using extra variables to save the state of the sections and careful conditions on the RecyclerView.Adapter's ViewHolder's bind() method, which may or may not cause a headache.\n\nSo basically I would use the first method. Hope this helps!\n" ]
[ 1 ]
[]
[]
[ "android", "android_recyclerview", "kotlin" ]
stackoverflow_0074661232_android_android_recyclerview_kotlin.txt
Q: commitizen: No commit found with range: 'origin/HEAD..HEAD' I've recently started using commitizen in my day-to-day development, however I don't understand why I get the following error with the first commit to a new branch, i.e.:
...on current main branch...
git checkout -b fix/my-new-branch
...make some changes...
git commit -am "fix: did the thing"

commitizen check.........................................................Passed
commitizen check branch..................................................Failed
- hook id: commitizen-branch
- exit code: 3

No commit found with range: 'origin/HEAD..HEAD'

My pre-commit file looks like this:
---
repos:
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v2.37.1
    hooks:
      - id: commitizen
      - id: commitizen-branch
        stages: [commit-msg]

Is there something I'm missing here? My git remote is set up:
origin  git@github.com:myuser/my_repo.git (fetch)
origin  git@github.com:myuser/my_repo.git (push)

Git branch shows:
  fix/my-new-branch
* main

A: the commitizen-branch hook is intended for after-the-fact usage and not during the commit-msg stage -- you probably don't need it / don't want it and can have it removed
notably, the stages: [commit-msg] is incorrect to set for that hook since it is not designed to run during commit-msg (where no commits exist between origin/HEAD and HEAD)
personally I'd probably set that one as stages: [manual] such that it never automatically runs, but can be run on demand

disclaimer: I wrote pre-commit
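Following the answer, the hook block from the question might be rewritten like this (same repo and rev, only the stage changes):

---
repos:
  - repo: https://github.com/commitizen-tools/commitizen
    rev: v2.37.1
    hooks:
      - id: commitizen
      - id: commitizen-branch
        stages: [manual]

The manual hook can then be run on demand with, for example, pre-commit run --hook-stage manual commitizen-branch.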
commitizen: No commit found with range: 'origin/HEAD..HEAD'
I've recently started using commitizen in my day-to-day development, however I don't understand why I get the following error with the first commit to a new branch, ie: ...on current main branch... git checkout -b fix/my-new-branch ...make some changes... git commit -am "fix: did the thing" commitizen check.........................................................Passed commitizen check branch..................................................Failed - hook id: commitizen-branch - exit code: 3 No commit found with range: 'origin/HEAD..HEAD' My pre-commit file looks like this: --- repos: - repo: https://github.com/commitizen-tools/commitizen rev: v2.37.1 hooks: - id: commitizen - id: commitizen-branch stages: [commit-msg] Is there something I'm missing here? My git remote is setup: origin [email protected]:myuser/my_repo.git (fetch) origin [email protected]:myuser/my_repo.git (push) Git branch shows: fix/my-new-branch * main
[ "the commitizen-branch hook is intended for after-the-fact usage and not during the commit-msg stage -- you probably don't need it / don't want it and can have it removed\nnotably, the stages: [commit-msg] is incorrect to set for that hook since it is not designed to run during commit-msg (where no commits exist between origin/HEAD and HEAD)\npersonally I'd probably set that ones as stages: [manual] such that it never automatically runs, but can be run on demand\n\ndisclaimer: I wrote pre-commit\n" ]
[ 0 ]
[]
[]
[ "commitizen", "pre_commit.com" ]
stackoverflow_0074658100_commitizen_pre_commit.com.txt
Q: How do I remove buttons from a message?
button = Button(style = discord.ButtonStyle.green, emoji = ":arrow_backward:", custom_id = "button1")
button2 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_up_small:", custom_id = "button2")
button3 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_forward:", custom_id = "button3")
view = View()
view.add_item(button)
view.add_item(button2)
view.add_item(button3)
async def button_callback(interaction):
    if number != ("⠀⠀1"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")
async def button_callback2(interaction):
    if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀2"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content="**response 2**")
async def button_callback3(interaction):
    if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3"):
        await message.edit(content="**response 1**")
    else:
        await message.edit(content= "**response 2**")
button.callback = button_callback
button2.callback = button_callback2
button3.callback = button_callback3
await message.edit(content= f"⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:\n{number}", view=view)

The code creates and sends a message with buttons on it. Once you press a button, it'll edit the message to say "response1" or "response2", depending on whether the button had the 1, 2, 3 over it (if it didn't have the number over it, it prints "response1"; if it did have the number over it, it prints "response2"). I would like it so that when the message is edited to either of the responses, the buttons are also removed, which currently doesn't happen.
A: Set view=None in your message.edit function call to remove all of the buttons.
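Applied to one of the callbacks from the question, the suggestion would look like this sketch; only the view=None argument is new:

async def button_callback(interaction):
    if number != ("⠀⠀1"):
        # view=None strips all components (the three buttons) from the message
        await message.edit(content="**response 1**", view=None)
    else:
        await message.edit(content="**response 2**", view=None)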
How do I remove buttons from a message?
button = Button(style = discord.ButtonStyle.green, emoji = ":arrow_backward:", custom_id = "button1") button2 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_up_small:", custom_id = "button2") button3 = Button(style = discord.ButtonStyle.green, emoji = ":arrow_forward:", custom_id = "button3") view = View() view.add_item(button) view.add_item(button2) view.add_item(button3) async def button_callback(interaction): if number != ("⠀⠀1"): await message.edit(content="**response 1**") else: await message.edit(content="**response 2**") async def button_callback2(interaction): if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀2"): await message.edit(content="**response 1**") else: await message.edit(content="**response 2**") async def button_callback3(interaction): if number != ("⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3"): await message.edit(content="**response 1**") else: await message.edit(content= "**response 2**") button.callback = button_callback button2.callback = button_callback2 button3.callback = button_callback3 await message.edit(content= f"⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:⠀⠀⠀⠀⠀:watermelon:\n{number}", view=view) In the code it creates and sends a message with buttons on it, once you press the button itll either edit the message to say "response1" or "response2" depending if the button had the 1, 2 ,3 over it (if it didnt have the number over it, it prints "response1" if it did have the number over it, it prints "response2") i would like it so when it edits it to either of the responses it also removes the buttons, as it currently doesnt.
[ "Set view=None in your message.edit function call to remove all of the buttons.\n" ]
[ 0 ]
[]
[]
[ "discord", "pycord", "python" ]
stackoverflow_0074607302_discord_pycord_python.txt
Q: Calculate co-occurrences without any overlap in pandas I have the following dataframe import pandas as pd df = pd.DataFrame({'TFD' : ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'], 'Snack' : [1, 0, 1, 1, 0, 0], 'Trans' : [1, 1, 1, 0, 0, 1], 'Dop' : [1, 0, 1, 0, 1, 1]}).set_index('TFD') df Snack Trans Dop TFD AA 1 1 1 SL 0 1 0 BB 1 1 1 D0 1 0 0 Dk 0 0 1 FF 0 1 1 By using this I can calculate the following co-occurrence matrix: df_asint = df.astype(int) coocc = df_asint.T.dot(df_asint) coocc Snack Trans Dop Snack 3 2 2 Trans 2 4 3 Dop 2 3 4 Though, I want the occurrences to not overlap. What I mean is this: in the original df there is only 1 TFD that has only Snack, so the [Snack, Snack] value at the cooc table should be 1. Moreover the [Dop, Trans] should be equal to 1 and not equal to 3(the above calculation gives as output 3 because it takes into account the [Dop, Snack, Trans] combination, which is what I want to avoid) Moreover the order shouldnt matter -> [Dop, Trans] is the same as [Trans, Dop] Having an ['all', 'all'] [row, column] which would indicate how many times an occurrence contains all elements My solution contains the following steps: First, for each row of the df get the list of columns for which the column has value equal to 1: llist = [] for k,v in df.iterrows(): llist.append((list(v[v==1].index))) llist [['Snack', 'Trans', 'Dop'], ['Trans'], ['Snack', 'Trans', 'Dop'], ['Snack'], ['Dop'], ['Trans', 'Dop']] Then I duplicate the lists (inside the list) which have only 1 element: llist2 = llist.copy() for i,l in enumerate(llist2): if len(l) == 1: llist2[i] = l + l if len(l) == 3: llist2[i] = ['all', 'all'] # this is to see how many triple elements I have in the list llist2.append(['Dop', 'Trans']) # This is to test that the order of the elements of the sublists doesnt matter llist2 [['all', 'all'], ['Trans', 'Trans'], ['all', 'all'], ['Snack', 'Snack'], ['Dop', 'Dop'], ['Trans', 'Dop'], ['Dop', 'Trans']] Later I create an empty dataframe with the indexes and columns of interest: elements = ['Trans', 'Dop', 'Snack', 'all'] foo = pd.DataFrame(columns=elements, index=elements) foo.fillna(0,inplace=True) foo Trans Dop Snack all Trans 0 0 0 0 Dop 0 0 0 0 Snack 0 0 0 0 all 0 0 0 0 Then I check and count, which combination is included in the original llist2 from itertools import combinations_with_replacement import collections comb = combinations_with_replacement(elements, 2) for l in comb: val = foo.loc[l[0],l[1]] foo.loc[l[0],l[1]] = val + llist2.count(list(l)) if (set(l).__len__() != 1) and (list(reversed(list(l))) in llist2): # check if the reversed element exists as well, but do not double count the diagonal elements val = foo.loc[l[0],l[1]] foo.loc[l[0],l[1]] = val + llist2.count(list(reversed(list(l)))) foo Trans Dop Snack all Trans 1 2 0 0 Dop 0 1 0 0 Snack 0 0 1 0 all 0 0 0 2 Last step would be to make foo symmetrical: import numpy as np foo = np.maximum( foo, foo.transpose() ) foo Trans Dop Snack all Trans 1 2 0 0 Dop 2 1 0 0 Snack 0 0 1 0 all 0 0 0 2 Looking for a more efficient/faster (avoiding all these loops) solution A: Managed to shrink it to one "for" loop. I am using "any" and "all" in combination with "mask". 
import pandas as pd
import itertools


df = pd.DataFrame({'TFD': ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'],
                   'Snack': [1, 0, 1, 1, 0, 0],
                   'Trans': [1, 1, 1, 0, 0, 1],
                   'Dop': [1, 0, 1, 0, 1, 1]}).set_index('TFD')

df["all"] = 0  # adding an artificial column so the result contains "all"
list_of_columns = list(df.columns)
my_result_list = []  # empty list where we put the results
comb = itertools.combinations_with_replacement(list_of_columns, 2)
for item in comb:
    temp_list = list_of_columns[:]  # temp_list holds the columns of interest
    if item[0] == item[1]:
        temp_list.remove(item[0])
        my_col_list = [item[0]]  # my_col_list holds which occurrence we count
    else:
        temp_list.remove(item[0])
        temp_list.remove(item[1])
        my_col_list = [item[0], item[1]]

    mask = df.loc[:, temp_list].any(axis=1)  # creating a mask so we know which rows to look at
    distance = df.loc[~mask, my_col_list].all(axis=1).sum()  # calculating the occurrence
    my_result_list.append([item[0], item[1], distance])  # occurrence info recorded in the list
    my_result_list.append([item[1], item[0], distance])  # occurrence put in reverse so we get square form in the end

result = pd.DataFrame(my_result_list).drop_duplicates().pivot(index=1, columns=0, values=2)  # construct DataFrame in square form
list_of_columns.remove("all")
result.loc["all", "all"] = df.loc[:, list_of_columns].all(axis=1).sum()  # fill in the all/all occurrences
print(result)
Calculate co-occurrences without any overlap in pandas
I have the following dataframe import pandas as pd df = pd.DataFrame({'TFD' : ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'], 'Snack' : [1, 0, 1, 1, 0, 0], 'Trans' : [1, 1, 1, 0, 0, 1], 'Dop' : [1, 0, 1, 0, 1, 1]}).set_index('TFD') df Snack Trans Dop TFD AA 1 1 1 SL 0 1 0 BB 1 1 1 D0 1 0 0 Dk 0 0 1 FF 0 1 1 By using this I can calculate the following co-occurrence matrix: df_asint = df.astype(int) coocc = df_asint.T.dot(df_asint) coocc Snack Trans Dop Snack 3 2 2 Trans 2 4 3 Dop 2 3 4 Though, I want the occurrences to not overlap. What I mean is this: in the original df there is only 1 TFD that has only Snack, so the [Snack, Snack] value at the cooc table should be 1. Moreover the [Dop, Trans] should be equal to 1 and not equal to 3(the above calculation gives as output 3 because it takes into account the [Dop, Snack, Trans] combination, which is what I want to avoid) Moreover the order shouldnt matter -> [Dop, Trans] is the same as [Trans, Dop] Having an ['all', 'all'] [row, column] which would indicate how many times an occurrence contains all elements My solution contains the following steps: First, for each row of the df get the list of columns for which the column has value equal to 1: llist = [] for k,v in df.iterrows(): llist.append((list(v[v==1].index))) llist [['Snack', 'Trans', 'Dop'], ['Trans'], ['Snack', 'Trans', 'Dop'], ['Snack'], ['Dop'], ['Trans', 'Dop']] Then I duplicate the lists (inside the list) which have only 1 element: llist2 = llist.copy() for i,l in enumerate(llist2): if len(l) == 1: llist2[i] = l + l if len(l) == 3: llist2[i] = ['all', 'all'] # this is to see how many triple elements I have in the list llist2.append(['Dop', 'Trans']) # This is to test that the order of the elements of the sublists doesnt matter llist2 [['all', 'all'], ['Trans', 'Trans'], ['all', 'all'], ['Snack', 'Snack'], ['Dop', 'Dop'], ['Trans', 'Dop'], ['Dop', 'Trans']] Later I create an empty dataframe with the indexes and columns of interest: elements = ['Trans', 'Dop', 'Snack', 'all'] foo = pd.DataFrame(columns=elements, index=elements) foo.fillna(0,inplace=True) foo Trans Dop Snack all Trans 0 0 0 0 Dop 0 0 0 0 Snack 0 0 0 0 all 0 0 0 0 Then I check and count, which combination is included in the original llist2 from itertools import combinations_with_replacement import collections comb = combinations_with_replacement(elements, 2) for l in comb: val = foo.loc[l[0],l[1]] foo.loc[l[0],l[1]] = val + llist2.count(list(l)) if (set(l).__len__() != 1) and (list(reversed(list(l))) in llist2): # check if the reversed element exists as well, but do not double count the diagonal elements val = foo.loc[l[0],l[1]] foo.loc[l[0],l[1]] = val + llist2.count(list(reversed(list(l)))) foo Trans Dop Snack all Trans 1 2 0 0 Dop 0 1 0 0 Snack 0 0 1 0 all 0 0 0 2 Last step would be to make foo symmetrical: import numpy as np foo = np.maximum( foo, foo.transpose() ) foo Trans Dop Snack all Trans 1 2 0 0 Dop 2 1 0 0 Snack 0 0 1 0 all 0 0 0 2 Looking for a more efficient/faster (avoiding all these loops) solution
[ "Managed to shrink it to one \"for\" loop. I am using \"any\" and \"all\" in combination with \"mask\".\nimport pandas as pd\nimport itertools\n\n\ndf = pd.DataFrame({'TFD': ['AA', 'SL', 'BB', 'D0', 'Dk', 'FF'],\n 'Snack': [1, 0, 1, 1, 0, 0],\n 'Trans': [1, 1, 1, 0, 0, 1],\n 'Dop': [1, 0, 1, 0, 1, 1]}).set_index('TFD')\n\ndf[\"all\"] = 0 # adding artifical columns so the results contains \"all\"\nlist_of_columns = list(df.columns)\nmy_result_list = [] # empty list where we put the results\ncomb = itertools.combinations_with_replacement(list_of_columns, 2)\nfor item in comb:\n temp_list = list_of_columns[:] # temp_list holds columns of interest\n if item[0] == item[1]:\n temp_list.remove(item[0])\n my_col_list = [item[0]] # my_col_list holds which occurance we count\n else:\n temp_list.remove(item[0])\n temp_list.remove(item[1])\n my_col_list = [item[0], item[1]]\n\n mask = df.loc[:, temp_list].any(axis=1) # creating mask so we know which rows to look at\n distance = df.loc[~mask, my_col_list].all(axis=1).sum() # calculating ocurrance\n my_result_list.append([item[0], item[1], distance]) # occurance info recorded in the list\n my_result_list.append([item[1], item[0], distance]) # occurance put in reverse so we get square form in the end\n\nresult = pd.DataFrame(my_result_list).drop_duplicates().pivot(index=1, columns=0, values=2) # construc DataFrame in squareform\nlist_of_columns.remove(\"all\")\nresult.loc[\"all\", \"all\"] = df.loc[:, list_of_columns].all(axis=1).sum() # fill in all/all occurances\nprint(result)\n\n" ]
[ 0 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074641437_pandas_python.txt
Q: Flutter: Stop playing video in list view when another video is started I am using Better Player (https://pub.dev/packages/better_player) to create several video players in a list view.
ListView.builder(
  shrinkWrap: true,
  physics: const NeverScrollableScrollPhysics(),
  addAutomaticKeepAlives: true,
  itemCount: awaitedContents!.length,
  itemBuilder: (context, index) {
    Content content = awaitedContents[index];
    ...
    } else if (content.type == 'VIDEO') {
      return SizedBox(
        height: MediaQuery.of(context).size.width * 9 / 16,
        child: VideoContent(content.value, content.image,
            content.videoSubtitle, subtitlesEnabled),
      );
    }

How can I stop one video player from playing when the user starts another one?
A: I guess you could use AutomaticKeepAliveClientMixin and the KeepAlive widgets:
ListView.builder(
  shrinkWrap: true,
  physics: const NeverScrollableScrollPhysics(),
  itemCount: awaitedContents!.length,
  itemBuilder: (context, index) {
    Content content = awaitedContents[index];
    ...
    if (content.type == 'VIDEO') {
      return KeepAlive(
        keepAlive: true, // KeepAlive requires this argument
        child: VideoContent(content.value, content.image,
            content.videoSubtitle, subtitlesEnabled),
      );
    }
  }
)
The KeepAlive widget is used to wrap the VideoContent widget for each video in the list. This will cause the VideoContent widget to be kept alive and its children to be retained when the list view is scrolled. When a new video is started, the KeepAlive widget will dispose of the previous VideoContent widget and its children, stopping any videos that were playing.
A: To stop one video player from playing when the user starts another one, you can use the dispose method of the BetterPlayerController class. This method will stop the video player and release any resources it is using.
Here's an example of how you might update your code to use the dispose method:

import 'package:better_player/better_player.dart';

ListView.builder(
  shrinkWrap: true,
  physics: const NeverScrollableScrollPhysics(),
  addAutomaticKeepAlives: true,
  itemCount: awaitedContents!.length,
  itemBuilder: (context, index) {
    Content content = awaitedContents[index];
    if (content.type == 'VIDEO') {
      // Create a new BetterPlayerController instance
      BetterPlayerController playerController = BetterPlayerController();

      return SizedBox(
        height: MediaQuery.of(context).size.width * 9 / 16,
        child: VideoContent(content.value, content.image,
            content.videoSubtitle, subtitlesEnabled, playerController),
      );
    }
  }
)

class VideoContent extends StatefulWidget {
  final String videoUrl;
  final String videoImage; // holds content.image passed in by the caller
  final String videoSubtitle;
  final bool subtitlesEnabled;
  final BetterPlayerController playerController;

  const VideoContent(this.videoUrl, this.videoImage, this.videoSubtitle,
      this.subtitlesEnabled, this.playerController);

  @override
  _VideoContentState createState() => _VideoContentState();
}

class _VideoContentState extends State<VideoContent> {
  @override
  Widget build(BuildContext context) {
    return BetterPlayer(
      widget.playerController,
      src: widget.videoUrl,
      autoplay: false,
      aspectRatio: 16 / 9,
      fit: BoxFit.contain,
      placeholder: Container(
        child: Center(
          child: CircularProgressIndicator(),
        ),
      ),
      onVideoStart: () {
        // Stop the previous video player and release its resources when the user starts a new video
        widget.playerController.dispose();
      },
    );
  }
}

In this example, when the user starts a new video, the onVideoStart callback is triggered, and the dispose method is called on the previous BetterPlayerController instance. This will stop the previous video player and release its resources.
Flutter: Stop playing video in list view when another video is started
i am using Better Player (https://pub.dev/packages/better_player) to create several video players in list view. ListView.builder( shrinkWrap: true, physics: const NeverScrollableScrollPhysics(), addAutomaticKeepAlives: true, itemCount: awaitedContents!.length, itemBuilder: (context, index) { Content content = awaitedContents[index]; ... } else if (content.type == 'VIDEO') { return SizedBox( height: MediaQuery.of(context).size.width * 9 / 16, child: VideoContent(content.value, content.image, content.videoSubtitle, subtitlesEnabled), ); } How can I stop one Video player from playing when users starts another one?
[ "I guess you could use AutomaticKeepAliveClientMixin and the KeepAlive widgets:\nListView.builder(\n shrinkWrap: true,\n physics: const NeverScrollableScrollPhysics(),\n itemCount: awaitedContents!.length,\n itemBuilder: (context, index) {\n Content content = awaitedContents[index];\n ...\n if (content.type == 'VIDEO') {\n return KeepAlive(\n child: VideoContent(content.value, content.image,\n content.videoSubtitle, subtitlesEnabled),\n );\n }\n }\n)\n\nKeepAlive widget is used to wrap the VideoContent widget for each video in the list. This will cause the VideoContent widget to be kept alive and its children to be retained when the list view is scrolled. When a new video is started, the KeepAlive widget will dispose of the previous VideoContent widget and its children, stopping any videos that were playing.\n", "To stop one video player from playing when the user starts another one, you can use the dispose method of the BetterPlayerController class. This method will stop the video player and release any resources it is using.\nHere's an example of how you might update your code to use the dispose method:\n\n\nimport 'package:better_player/better_player.dart';\n\nListView.builder(\n shrinkWrap: true,\n physics: const NeverScrollableScrollPhysics(),\n addAutomaticKeepAlives: true,\n itemCount: awaitedContents!.length,\n itemBuilder: (context, index) {\n Content content = awaitedContents[index];\n if (content.type == 'VIDEO') {\n // Create a new BetterPlayerController instance\n BetterPlayerController playerController = BetterPlayerController();\n\n return SizedBox(\n height: MediaQuery.of(context).size.width * 9 / 16,\n child: VideoContent(content.value, content.image,\n content.videoSubtitle, subtitlesEnabled, playerController),\n );\n }\n }\n)\n\nclass VideoContent extends StatefulWidget {\n final String videoUrl;\n final String videoSubtitle;\n final bool subtitlesEnabled;\n final BetterPlayerController playerController;\n\n const VideoContent(this.videoUrl, this.videoImage, this.videoSubtitle,\n this.subtitlesEnabled, this.playerController);\n\n @override\n _VideoContentState createState() => _VideoContentState();\n}\n\nclass _VideoContentState extends State<VideoContent> {\n @override\n Widget build(BuildContext context) {\n return BetterPlayer(\n widget.playerController,\n src: widget.videoUrl,\n autoplay: false,\n aspectRatio: 16 / 9,\n fit: BoxFit.contain,\n placeholder: Container(\n child: Center(\n child: CircularProgressIndicator(),\n ),\n ),\n onVideoStart: () {\n // Stop the previous video player and release its resources when the user starts a new video\n widget.playerController.dispose();\n },\n );\n }\n}\n\n\n\nIn this example, when the user starts a new video, the onVideoStart callback is triggered, and the dispose method is called on the previous BetterPlayerController instance. This will stop the previous video player and release its resources.\n" ]
[ 0, 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074661427_dart_flutter.txt
Q: How to get Balance field value from VendorBalanceSummary DAC and show it in Checks and Payments screen in Acumatica I want to get balance value from VendorBalanceSummary DAC and show it in Checks and Payments screen. I tried to do this with a selector by passing vendorId. But it gives and error saying "Invalid Object Name VendorBalanceSummary". How Can I Do this? A: The DAC is a calculated DAC on the vendor screen. I made a quick extension that should work for you. public class APPaymentEntryExt : PXGraphExtension<APPaymentEntry> { [PXCopyPasteHiddenView] public PXSelect<PX.Objects.AP.VendorMaint.VendorBalanceSummary> VendorBalance; protected virtual IEnumerable vendorBalance() { //modify to get from current payments vendor, not current vendor. List<VendorBalanceSummary> list = new List<VendorBalanceSummary>(1); PXSelectBase<APVendorBalanceEnq.APLatestHistory> sel = new PXSelectJoinGroupBy<APVendorBalanceEnq.APLatestHistory, LeftJoin<CuryAPHistory, On<APVendorBalanceEnq.APLatestHistory.branchID, Equal<CuryAPHistory.branchID>, And<APVendorBalanceEnq.APLatestHistory.accountID, Equal<CuryAPHistory.accountID>, And<APVendorBalanceEnq.APLatestHistory.vendorID, Equal<CuryAPHistory.vendorID>, And<APVendorBalanceEnq.APLatestHistory.subID, Equal<CuryAPHistory.subID>, And<APVendorBalanceEnq.APLatestHistory.curyID, Equal<CuryAPHistory.curyID>, And<APVendorBalanceEnq.APLatestHistory.lastActivityPeriod, Equal<CuryAPHistory.finPeriodID>>>>>>>>, Where<APVendorBalanceEnq.APLatestHistory.vendorID, Equal<Current<APPayment.vendorID>>>, Aggregate< Sum<CuryAPHistory.finBegBalance, Sum<CuryAPHistory.curyFinBegBalance, Sum<CuryAPHistory.finYtdBalance, Sum<CuryAPHistory.curyFinYtdBalance, Sum<CuryAPHistory.tranBegBalance, Sum<CuryAPHistory.curyTranBegBalance, Sum<CuryAPHistory.tranYtdBalance, Sum<CuryAPHistory.curyTranYtdBalance, Sum<CuryAPHistory.finPtdPayments, Sum<CuryAPHistory.finPtdPurchases, Sum<CuryAPHistory.finPtdDiscTaken, Sum<CuryAPHistory.finPtdWhTax, Sum<CuryAPHistory.finPtdCrAdjustments, Sum<CuryAPHistory.finPtdDrAdjustments, Sum<CuryAPHistory.finPtdRGOL, Sum<CuryAPHistory.finPtdDeposits, Sum<CuryAPHistory.finYtdDeposits, Sum<CuryAPHistory.finPtdRetainageWithheld, Sum<CuryAPHistory.finYtdRetainageWithheld, Sum<CuryAPHistory.finPtdRetainageReleased, Sum<CuryAPHistory.finYtdRetainageReleased, Sum<CuryAPHistory.tranPtdPayments, Sum<CuryAPHistory.tranPtdPurchases, Sum<CuryAPHistory.tranPtdDiscTaken, Sum<CuryAPHistory.tranPtdWhTax, Sum<CuryAPHistory.tranPtdCrAdjustments, Sum<CuryAPHistory.tranPtdDrAdjustments, Sum<CuryAPHistory.tranPtdRGOL, Sum<CuryAPHistory.tranPtdDeposits, Sum<CuryAPHistory.tranYtdDeposits, Sum<CuryAPHistory.tranPtdRetainageWithheld, Sum<CuryAPHistory.tranYtdRetainageWithheld, Sum<CuryAPHistory.tranPtdRetainageReleased, Sum<CuryAPHistory.tranYtdRetainageReleased, Sum<CuryAPHistory.curyFinPtdPayments, Sum<CuryAPHistory.curyFinPtdPurchases, Sum<CuryAPHistory.curyFinPtdDiscTaken, Sum<CuryAPHistory.curyFinPtdWhTax, Sum<CuryAPHistory.curyFinPtdCrAdjustments, Sum<CuryAPHistory.curyFinPtdDrAdjustments, Sum<CuryAPHistory.curyFinPtdDeposits, Sum<CuryAPHistory.curyFinYtdDeposits, Sum<CuryAPHistory.curyFinPtdRetainageWithheld, Sum<CuryAPHistory.curyFinYtdRetainageWithheld, Sum<CuryAPHistory.curyFinPtdRetainageReleased, Sum<CuryAPHistory.curyFinYtdRetainageReleased, Sum<CuryAPHistory.curyTranPtdPayments, Sum<CuryAPHistory.curyTranPtdPurchases, Sum<CuryAPHistory.curyTranPtdDiscTaken, Sum<CuryAPHistory.curyTranPtdWhTax, Sum<CuryAPHistory.curyTranPtdCrAdjustments, 
Sum<CuryAPHistory.curyTranPtdDrAdjustments, Sum<CuryAPHistory.curyTranPtdDeposits, Sum<CuryAPHistory.curyTranYtdDeposits, Sum<CuryAPHistory.curyTranPtdRetainageWithheld, Sum<CuryAPHistory.curyTranYtdRetainageWithheld, Sum<CuryAPHistory.curyTranPtdRetainageReleased, Sum<CuryAPHistory.curyTranYtdRetainageReleased >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>(Base); VendorBalanceSummary res = new VendorBalanceSummary(); foreach (PXResult<APVendorBalanceEnq.APLatestHistory, CuryAPHistory> it in sel.Select()) { CuryAPHistory iHst = it; Aggregate(res, iHst); } list.Add(res); return list; } //copied from vendor maintenance protected virtual void Aggregate(VendorBalanceSummary aRes, CuryAPHistory aSrc) { if (!aRes.Balance.HasValue) aRes.Balance = Decimal.Zero; if (!aRes.DepositsBalance.HasValue) aRes.DepositsBalance = Decimal.Zero; if (!aRes.RetainageBalance.HasValue) aRes.RetainageBalance = Decimal.Zero; aRes.VendorID = aSrc.VendorID; aRes.Balance += aSrc.FinYtdBalance ?? Decimal.Zero; aRes.DepositsBalance += aSrc.FinYtdDeposits ?? Decimal.Zero; aRes.RetainageBalance += aSrc.FinYtdRetainageWithheld - aSrc.FinYtdRetainageReleased; } }
How to get Balance field value from VendorBalanceSummary DAC and show it in Checks and Payments screen in Acumatica
I want to get the balance value from the VendorBalanceSummary DAC and show it on the Checks and Payments screen. I tried to do this with a selector by passing vendorId, but it gives an error saying "Invalid Object Name VendorBalanceSummary". How can I do this?
[ "The DAC is a calculated DAC on the vendor screen. I made a quick extension that should work for you.\npublic class APPaymentEntryExt : PXGraphExtension<APPaymentEntry>\n{\n\n [PXCopyPasteHiddenView]\n public PXSelect<PX.Objects.AP.VendorMaint.VendorBalanceSummary> VendorBalance;\n\n protected virtual IEnumerable vendorBalance()\n {\n //modify to get from current payments vendor, not current vendor.\n List<VendorBalanceSummary> list = new List<VendorBalanceSummary>(1);\n PXSelectBase<APVendorBalanceEnq.APLatestHistory> sel = new PXSelectJoinGroupBy<APVendorBalanceEnq.APLatestHistory,\n LeftJoin<CuryAPHistory, On<APVendorBalanceEnq.APLatestHistory.branchID, Equal<CuryAPHistory.branchID>,\n And<APVendorBalanceEnq.APLatestHistory.accountID, Equal<CuryAPHistory.accountID>,\n And<APVendorBalanceEnq.APLatestHistory.vendorID, Equal<CuryAPHistory.vendorID>,\n And<APVendorBalanceEnq.APLatestHistory.subID, Equal<CuryAPHistory.subID>,\n And<APVendorBalanceEnq.APLatestHistory.curyID, Equal<CuryAPHistory.curyID>,\n And<APVendorBalanceEnq.APLatestHistory.lastActivityPeriod, Equal<CuryAPHistory.finPeriodID>>>>>>>>,\n Where<APVendorBalanceEnq.APLatestHistory.vendorID, Equal<Current<APPayment.vendorID>>>,\n Aggregate<\n Sum<CuryAPHistory.finBegBalance,\n Sum<CuryAPHistory.curyFinBegBalance,\n Sum<CuryAPHistory.finYtdBalance,\n Sum<CuryAPHistory.curyFinYtdBalance,\n Sum<CuryAPHistory.tranBegBalance,\n Sum<CuryAPHistory.curyTranBegBalance,\n Sum<CuryAPHistory.tranYtdBalance,\n Sum<CuryAPHistory.curyTranYtdBalance,\n\n Sum<CuryAPHistory.finPtdPayments,\n Sum<CuryAPHistory.finPtdPurchases,\n Sum<CuryAPHistory.finPtdDiscTaken,\n Sum<CuryAPHistory.finPtdWhTax,\n Sum<CuryAPHistory.finPtdCrAdjustments,\n Sum<CuryAPHistory.finPtdDrAdjustments,\n Sum<CuryAPHistory.finPtdRGOL,\n Sum<CuryAPHistory.finPtdDeposits,\n Sum<CuryAPHistory.finYtdDeposits,\n\n Sum<CuryAPHistory.finPtdRetainageWithheld,\n Sum<CuryAPHistory.finYtdRetainageWithheld,\n Sum<CuryAPHistory.finPtdRetainageReleased,\n Sum<CuryAPHistory.finYtdRetainageReleased,\n\n Sum<CuryAPHistory.tranPtdPayments,\n Sum<CuryAPHistory.tranPtdPurchases,\n Sum<CuryAPHistory.tranPtdDiscTaken,\n Sum<CuryAPHistory.tranPtdWhTax,\n Sum<CuryAPHistory.tranPtdCrAdjustments,\n Sum<CuryAPHistory.tranPtdDrAdjustments,\n Sum<CuryAPHistory.tranPtdRGOL,\n Sum<CuryAPHistory.tranPtdDeposits,\n Sum<CuryAPHistory.tranYtdDeposits,\n\n Sum<CuryAPHistory.tranPtdRetainageWithheld,\n Sum<CuryAPHistory.tranYtdRetainageWithheld,\n Sum<CuryAPHistory.tranPtdRetainageReleased,\n Sum<CuryAPHistory.tranYtdRetainageReleased,\n\n Sum<CuryAPHistory.curyFinPtdPayments,\n Sum<CuryAPHistory.curyFinPtdPurchases,\n Sum<CuryAPHistory.curyFinPtdDiscTaken,\n Sum<CuryAPHistory.curyFinPtdWhTax,\n Sum<CuryAPHistory.curyFinPtdCrAdjustments,\n Sum<CuryAPHistory.curyFinPtdDrAdjustments,\n Sum<CuryAPHistory.curyFinPtdDeposits,\n Sum<CuryAPHistory.curyFinYtdDeposits,\n\n Sum<CuryAPHistory.curyFinPtdRetainageWithheld,\n Sum<CuryAPHistory.curyFinYtdRetainageWithheld,\n Sum<CuryAPHistory.curyFinPtdRetainageReleased,\n Sum<CuryAPHistory.curyFinYtdRetainageReleased,\n\n Sum<CuryAPHistory.curyTranPtdPayments,\n Sum<CuryAPHistory.curyTranPtdPurchases,\n Sum<CuryAPHistory.curyTranPtdDiscTaken,\n Sum<CuryAPHistory.curyTranPtdWhTax,\n Sum<CuryAPHistory.curyTranPtdCrAdjustments,\n Sum<CuryAPHistory.curyTranPtdDrAdjustments,\n Sum<CuryAPHistory.curyTranPtdDeposits,\n Sum<CuryAPHistory.curyTranYtdDeposits,\n\n Sum<CuryAPHistory.curyTranPtdRetainageWithheld,\n Sum<CuryAPHistory.curyTranYtdRetainageWithheld,\n 
Sum<CuryAPHistory.curyTranPtdRetainageReleased,\n Sum<CuryAPHistory.curyTranYtdRetainageReleased\n >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>(Base);\n\n VendorBalanceSummary res = new VendorBalanceSummary();\n foreach (PXResult<APVendorBalanceEnq.APLatestHistory, CuryAPHistory> it in sel.Select())\n {\n CuryAPHistory iHst = it;\n Aggregate(res, iHst);\n }\n list.Add(res);\n\n\n return list;\n }\n\n //copied from vendor maintenance\n protected virtual void Aggregate(VendorBalanceSummary aRes, CuryAPHistory aSrc)\n {\n if (!aRes.Balance.HasValue) aRes.Balance = Decimal.Zero;\n if (!aRes.DepositsBalance.HasValue) aRes.DepositsBalance = Decimal.Zero;\n if (!aRes.RetainageBalance.HasValue) aRes.RetainageBalance = Decimal.Zero;\n\n aRes.VendorID = aSrc.VendorID;\n aRes.Balance += aSrc.FinYtdBalance ?? Decimal.Zero;\n aRes.DepositsBalance += aSrc.FinYtdDeposits ?? Decimal.Zero;\n aRes.RetainageBalance += aSrc.FinYtdRetainageWithheld - aSrc.FinYtdRetainageReleased;\n }\n\n}\n\n" ]
[ 0 ]
[]
[]
[ "acumatica" ]
stackoverflow_0073150288_acumatica.txt
Q: PhpStorm: set file type for a single file I've inherited a project where all Smarty templates (which should be .tpl) are named .html. The problem is that there are plain HTML files too, so I can't globally redefine .html to always be Smarty. Also, with 10+ years of legacy code, I can't just rename the files. Is there any way I can get PhpStorm to recognize a single file (or a directory) as a format other than the one its file name suggests?
A: Better late than never: there is an option in the file's context menu (right mouse button) where you can set the type for one (or multiple) files. The rest is self-explanatory.
PhpStorm: set file type for a single file
I've inherited a project where all Smarty templates (which should be .tpl) are named .html. The problem is that there are plain HTML files too, so I can't globally redefine .html to always be Smarty. Also, with 10+ years of legacy code, I can't just rename the files. Is there any way I can get PhpStorm to recognize a single file (or a directory) as a format other than the one its file name suggests?
[ "Better late than never: there is an option in the file's contextmenu (right mousebutton) where you can set the type for one (or multiple files). rest is self explaining.\n\n" ]
[ 0 ]
[]
[]
[ "ide", "phpstorm" ]
stackoverflow_0029918820_ide_phpstorm.txt
Q: Visual Studio 2022 auto prefix properties with "this" I would like it so that when I write the name of a property and press Tab to autocomplete, it puts the "this" prefix in automatically. For example: If I tab, I get this: I would like to have this instead: Here are my Visual Studio settings, ReSharper settings and editorConfig:
A: This would need to be a feature request. You can submit that through VS with the Suggest A Feature on the feedback menu.
When you press tab to commit the completion item, the text to insert (or "insertion text") for that item has already been calculated. In most cases, it is the text seen (known as the display text), but sometimes the insertion text can be different (or even include edits other places in the document!). There is also the possibility, depending on how each language implements the completion feature, that the language can make additional calculations and edits as part of committing the item.
In either event (pre-computed or computed during commit), the language feature would need to make changes to include the this. prefix as part of the completion commit edit.
A possible alternative to waiting for a new feature would be to write an editor extension that monitors the buffer changes, and when it sees an applicable member inserted without the required this. prefix, it could then trigger its own edit automatically to insert it.
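For the extension alternative suggested at the end of the answer, a bare skeleton of the buffer-monitoring part is sketched below in C#. The MEF attributes and editor interfaces are from the Visual Studio SDK; the actual detection step (deciding that the inserted text is an instance member missing the this. prefix, which needs Roslyn analysis) is deliberately left as a comment:

using System.ComponentModel.Composition;
using Microsoft.VisualStudio.Text;
using Microsoft.VisualStudio.Text.Editor;
using Microsoft.VisualStudio.Utilities;

[Export(typeof(IWpfTextViewCreationListener))]
[ContentType("CSharp")]
[TextViewRole(PredefinedTextViewRoles.Editable)]
internal sealed class ThisPrefixListener : IWpfTextViewCreationListener
{
    public void TextViewCreated(IWpfTextView textView)
    {
        // Watch every edit made to the buffer, including completion commits
        textView.TextBuffer.Changed += (sender, args) =>
        {
            foreach (ITextChange change in args.Changes)
            {
                // Inspect change.NewText here: if it is an identifier that
                // resolves to an instance member and is not already preceded
                // by "this.", queue an edit inserting "this." at
                // change.NewPosition (semantic check omitted in this sketch).
            }
        };
    }
}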
Visual Studio 2022 auto prefix properties with "this"
I would like it so that when I write the name of a property and press Tab to autocomplete, it puts the "this" prefix in automatically. For example: If I tab, I get this: I would like to have this instead: Here are my Visual Studio settings, ReSharper settings and editorConfig:
[ "This would need to be a feature request. You can submit that through VS with the Suggest A Feature on the feedback menu.\nWhen you press tab to commit the completion item, the text to insert (or \"insertion text\") for that item has already been calculated. In most cases, it is the text seen (known as the display text), but sometimes the insertion text can be different (or even include edits other places in the document!). There is also the possibility, depending on how each language implements the completion feature, that the language can make additional calculations and edits as part of committing the item.\nIn either event (pre-computed or computed during commit), the language feature would need to make changes to include the this. prefix as part of the completion commit edit.\nA possible alternative to waiting for a new feature would be to write an editor extension that monitors the buffer changes, and when it sees an applicable member inserted without the required this. prefix, it could then trigger its own edit automatically to insert it.\n" ]
[ 0 ]
[]
[]
[ "visual_studio_2022" ]
stackoverflow_0074661048_visual_studio_2022.txt
Q: R: using filter in a for loop and changing the data value I have a list called 'col_missing' like below.
col_missing <- list(
  RAgeCat = c(-1, 8)
  , Married = c(-1, 9)
  , skipmeal = c(-1, 8, 9)
)

and I want to filter the dataframe if a column has the above values, and replace them with NA.
The code below works, but once it is inside a for loop, it does not.
Working
temp <- df %>% filter(skipmeal %in% col_missing$skipmeal)
temp$skipmeal <- NA

Not working
for (col in names(col_missing)) {
  temp <- df %>% filter(col %in% col_missing$col)
  temp$col <- NA
}

Could you tell me how to do this in a for loop?
A: We may do this using across and modify the values without any filtering
library(dplyr)
df <- df %>%
  mutate(across(all_of(names(col_missing)),
                ~ replace(.x, .x %in% col_missing[[cur_column()]], NA)))

In the for loop, temp is getting updated in each iteration. Perhaps it should be created outside. Also, use [[ instead of $, i.e. col_missing[[col]] instead of col_missing$col.
If we need to filter and create multiple datasets in a list
library(purrr)
imap(col_missing, ~ df %>%
       filter(.data[[.y]] %in% .x) %>%
       mutate(!! .y := NA)
)
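If the loop version is still wanted, a minimal corrected sketch based on the answer's two fixes (create temp once outside the loop, and index with [[ instead of $) could be:

temp <- df
for (col in names(col_missing)) {
  # [[ lets the loop variable index both the list and the data frame columns
  temp[[col]][temp[[col]] %in% col_missing[[col]]] <- NA
}

Unlike the filter() version, this replaces the flagged values with NA in place instead of dropping the other rows.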
R: using filter in a for loop and changing the data value
I have a list called 'col_missing' like below. col_missing <- list( RAgeCat = c(-1, 8) , Married = c(-1, 9) , skipmeal = c(-1, 8, 9) ) and I want to filter the dataframe if the column has above values, and replace them as NA. Below code is working but if it is in a for loop, it is not working. Working temp <- df %>% filter(skipmeal %in% col_missing$skipmeal) temp$skipmeal <- NA Not working for (col in names(col_missing)) { temp <- df %>% filter(col %in% col_missing$col) temp$col <- NA } Could you tell me how to do this in a for loop?
[ "We may do this using across and modify the values without any filtering\nlibrary(dplyr)\ndf <- df %>%\n mutate(across(all_of(names(col_missing)),\n ~ replace(.x, .x %in% col_missing[[cur_column()]], NA)))\n\n\nIn the for loop, temp is getting updated in each iteration. Perhaps it should be created outside Also, use [[ instead of $ i.e. col_missing[[col]] instead of col_missing$col\nIf we need to filter and create multiple datasets in a list\nlibrary(purrr)\nimap(col_missing, ~ df %>%\n filter(.data[[.y]] %in% .x) %>%\n mutate(!! .y := NA)\n)\n\n" ]
[ 1 ]
[]
[]
[ "dplyr", "filter", "for_loop", "r" ]
stackoverflow_0074661581_dplyr_filter_for_loop_r.txt
Q: Virtualenv not compatible with this system or executable Simply trying to create a virtual environment on my mac OSX 10.10.05 Running from the Terminal, already successfully made VirtualEnv on linux and windows OS on other computers. Tried troubleshooting this by adding a WORK_ON path to my bash profile, did not resolve. Online forums doesn't seem to address this, suggestions are to use mkvirtualenv which does not seem to be a downloadable package per pip, conda and easy_install... Anyways, if you're able to help that would be super appreciated. here's the terminal output: joshua ~ $ pip install --upgrade virtualenv Requirement already up-to-date: virtualenv in ./anaconda/lib/python3.5/site-packages joshua ~ $ virtualenv -p python3 test Running virtualenv with interpreter /Users/joshua/anaconda/bin/python3 Using base prefix '/Users/joshua/anaconda' New python executable in /Users/joshua/test/bin/python3 Also creating executable in /Users/joshua/test/bin/python ERROR: The executable /Users/joshua/test/bin/python3 is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable ...tried uninstalling virtualenv Successfully uninstalled virtualenv-15.1.0 joshua ~ $ pip install virtualenv Collecting virtualenv Using cached virtualenv-15.1.0-py2.py3-none-any.whl Installing collected packages: virtualenv Successfully installed virtualenv-15.1.0 joshua ~ $ virtualenv test -v Using base prefix '/Users/joshua/anaconda' Creating /Users/joshua/test/lib/python3.5 Symlinking Python bootstrap modules Symlinking /Users/joshua/test/lib/python3.5/config-3.5m Symlinking /Users/joshua/test/lib/python3.5/lib-dynload Symlinking /Users/joshua/test/lib/python3.5/plat-darwin Symlinking /Users/joshua/test/lib/python3.5/os.py Ignoring built-in bootstrap module: posix Symlinking /Users/joshua/test/lib/python3.5/posixpath.py Cannot import bootstrap module: nt Symlinking /Users/joshua/test/lib/python3.5/ntpath.py Symlinking /Users/joshua/test/lib/python3.5/genericpath.py Symlinking /Users/joshua/test/lib/python3.5/fnmatch.py Symlinking /Users/joshua/test/lib/python3.5/locale.py Symlinking /Users/joshua/test/lib/python3.5/encodings Symlinking /Users/joshua/test/lib/python3.5/codecs.py Symlinking /Users/joshua/test/lib/python3.5/stat.py Cannot import bootstrap module: UserDict Cannot import bootstrap module: copy_reg Symlinking /Users/joshua/test/lib/python3.5/types.py Symlinking /Users/joshua/test/lib/python3.5/re.py Cannot import bootstrap module: sre Symlinking /Users/joshua/test/lib/python3.5/sre_parse.py Symlinking /Users/joshua/test/lib/python3.5/sre_constants.py Symlinking /Users/joshua/test/lib/python3.5/sre_compile.py Cannot import bootstrap module: _abcoll Symlinking /Users/joshua/test/lib/python3.5/warnings.py Symlinking /Users/joshua/test/lib/python3.5/linecache.py Symlinking /Users/joshua/test/lib/python3.5/abc.py Symlinking /Users/joshua/test/lib/python3.5/io.py Symlinking /Users/joshua/test/lib/python3.5/_weakrefset.py Symlinking /Users/joshua/test/lib/python3.5/copyreg.py Symlinking /Users/joshua/test/lib/python3.5/tempfile.py Symlinking /Users/joshua/test/lib/python3.5/random.py Symlinking /Users/joshua/test/lib/python3.5/__future__.py Symlinking /Users/joshua/test/lib/python3.5/collections Symlinking /Users/joshua/test/lib/python3.5/keyword.py Symlinking /Users/joshua/test/lib/python3.5/tarfile.py Symlinking /Users/joshua/test/lib/python3.5/shutil.py Symlinking 
/Users/joshua/test/lib/python3.5/struct.py Symlinking /Users/joshua/test/lib/python3.5/copy.py Symlinking /Users/joshua/test/lib/python3.5/tokenize.py Symlinking /Users/joshua/test/lib/python3.5/token.py Symlinking /Users/joshua/test/lib/python3.5/functools.py Symlinking /Users/joshua/test/lib/python3.5/heapq.py Symlinking /Users/joshua/test/lib/python3.5/bisect.py Symlinking /Users/joshua/test/lib/python3.5/weakref.py Symlinking /Users/joshua/test/lib/python3.5/reprlib.py Symlinking /Users/joshua/test/lib/python3.5/base64.py Symlinking /Users/joshua/test/lib/python3.5/_dummy_thread.py Symlinking /Users/joshua/test/lib/python3.5/hashlib.py Symlinking /Users/joshua/test/lib/python3.5/hmac.py Symlinking /Users/joshua/test/lib/python3.5/imp.py Symlinking /Users/joshua/test/lib/python3.5/importlib Symlinking /Users/joshua/test/lib/python3.5/rlcompleter.py Symlinking /Users/joshua/test/lib/python3.5/operator.py Symlinking /Users/joshua/test/lib/python3.5/_collections_abc.py Symlinking /Users/joshua/test/lib/python3.5/_bootlocale.py Creating /Users/joshua/test/lib/python3.5/site-packages Writing /Users/joshua/test/lib/python3.5/site.py Writing /Users/joshua/test/lib/python3.5/orig-prefix.txt Writing /Users/joshua/test/lib/python3.5/no-global-site-packages.txt Creating parent directories for /Users/joshua/test/include Symlinking /Users/joshua/test/include/python3.5m Creating /Users/joshua/test/bin New python executable in /Users/joshua/test/bin/python Changed mode of /Users/joshua/test/bin/python to 0o755 Testing executable with /Users/joshua/test/bin/python -c "import sys;out=sys.stdout;getattr(out, "buffer", out).write(sys.prefix.encode("utf-8"))" ERROR: The executable /Users/joshua/test/bin/python is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable here's current bash_profile: # Enable tab completion source ~/git-completion.bash # colors! green="\[\033[0;32m\]" blue="\[\033[0;34m\]" purple="\[\033[0;35m\]" reset="\[\033[0m\]" # Change command prompt source ~/git-prompt.sh export GIT_PS1_SHOWDIRTYSTATE=1 # '\u' adds the name of the current user to the prompt # '\$(__git_ps1)' adds git-related stuff # '\W' adds the name of the current directory export PS1="$purple\u$green\$(__git_ps1)$blue \W $ $reset" alias subl="/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl" # Add Path export PATH="$HOME/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH" # export PATH=$PATH:/users/Joshua/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin # Locale $ export LC_ALL=en_US.UTF-8 $ export LANG=en_US.UTF-8 A: My limited undestanding is that my python interpreter and packages are managed under Anaconda using Conda package manager, and my virtualenv was originally installed using pip.. uninstalling virtualenv with pip and re-installing with conda fixed the issue pip uninstall virtualenv conda install virtualenv A: I had this same issue when trying to install py2.7 on a newer system. 
The root issue was that virtualenv was part of py3.7 and thus was not compatible: $ virtualenv -p python2.7 env Running virtualenv with interpreter /usr/local/bin/python2.7 New python executable in /Users/blah/env/bin/python ERROR: The executable /Users/blah/env/bin/python is not functioning ERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env') ERROR: virtualenv is not compatible with this system or executable $ which virtualenv /Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv # install proper version of virtualenv $ pip2.7 install virtualenv $ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env $ . ./env/bin/activate (env) $
Virtualenv not compatible with this system or executable
Simply trying to create a virtual environment on my mac OSX 10.10.05 Running from the Terminal, already successfully made VirtualEnv on linux and windows OS on other computers. Tried troubleshooting this by adding a WORK_ON path to my bash profile, did not resolve. Online forums doesn't seem to address this, suggestions are to use mkvirtualenv which does not seem to be a downloadable package per pip, conda and easy_install... Anyways, if you're able to help that would be super appreciated. here's the terminal output: joshua ~ $ pip install --upgrade virtualenv Requirement already up-to-date: virtualenv in ./anaconda/lib/python3.5/site-packages joshua ~ $ virtualenv -p python3 test Running virtualenv with interpreter /Users/joshua/anaconda/bin/python3 Using base prefix '/Users/joshua/anaconda' New python executable in /Users/joshua/test/bin/python3 Also creating executable in /Users/joshua/test/bin/python ERROR: The executable /Users/joshua/test/bin/python3 is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable ...tried uninstalling virtualenv Successfully uninstalled virtualenv-15.1.0 joshua ~ $ pip install virtualenv Collecting virtualenv Using cached virtualenv-15.1.0-py2.py3-none-any.whl Installing collected packages: virtualenv Successfully installed virtualenv-15.1.0 joshua ~ $ virtualenv test -v Using base prefix '/Users/joshua/anaconda' Creating /Users/joshua/test/lib/python3.5 Symlinking Python bootstrap modules Symlinking /Users/joshua/test/lib/python3.5/config-3.5m Symlinking /Users/joshua/test/lib/python3.5/lib-dynload Symlinking /Users/joshua/test/lib/python3.5/plat-darwin Symlinking /Users/joshua/test/lib/python3.5/os.py Ignoring built-in bootstrap module: posix Symlinking /Users/joshua/test/lib/python3.5/posixpath.py Cannot import bootstrap module: nt Symlinking /Users/joshua/test/lib/python3.5/ntpath.py Symlinking /Users/joshua/test/lib/python3.5/genericpath.py Symlinking /Users/joshua/test/lib/python3.5/fnmatch.py Symlinking /Users/joshua/test/lib/python3.5/locale.py Symlinking /Users/joshua/test/lib/python3.5/encodings Symlinking /Users/joshua/test/lib/python3.5/codecs.py Symlinking /Users/joshua/test/lib/python3.5/stat.py Cannot import bootstrap module: UserDict Cannot import bootstrap module: copy_reg Symlinking /Users/joshua/test/lib/python3.5/types.py Symlinking /Users/joshua/test/lib/python3.5/re.py Cannot import bootstrap module: sre Symlinking /Users/joshua/test/lib/python3.5/sre_parse.py Symlinking /Users/joshua/test/lib/python3.5/sre_constants.py Symlinking /Users/joshua/test/lib/python3.5/sre_compile.py Cannot import bootstrap module: _abcoll Symlinking /Users/joshua/test/lib/python3.5/warnings.py Symlinking /Users/joshua/test/lib/python3.5/linecache.py Symlinking /Users/joshua/test/lib/python3.5/abc.py Symlinking /Users/joshua/test/lib/python3.5/io.py Symlinking /Users/joshua/test/lib/python3.5/_weakrefset.py Symlinking /Users/joshua/test/lib/python3.5/copyreg.py Symlinking /Users/joshua/test/lib/python3.5/tempfile.py Symlinking /Users/joshua/test/lib/python3.5/random.py Symlinking /Users/joshua/test/lib/python3.5/__future__.py Symlinking /Users/joshua/test/lib/python3.5/collections Symlinking /Users/joshua/test/lib/python3.5/keyword.py Symlinking /Users/joshua/test/lib/python3.5/tarfile.py Symlinking /Users/joshua/test/lib/python3.5/shutil.py Symlinking /Users/joshua/test/lib/python3.5/struct.py Symlinking /Users/joshua/test/lib/python3.5/copy.py 
Symlinking /Users/joshua/test/lib/python3.5/tokenize.py Symlinking /Users/joshua/test/lib/python3.5/token.py Symlinking /Users/joshua/test/lib/python3.5/functools.py Symlinking /Users/joshua/test/lib/python3.5/heapq.py Symlinking /Users/joshua/test/lib/python3.5/bisect.py Symlinking /Users/joshua/test/lib/python3.5/weakref.py Symlinking /Users/joshua/test/lib/python3.5/reprlib.py Symlinking /Users/joshua/test/lib/python3.5/base64.py Symlinking /Users/joshua/test/lib/python3.5/_dummy_thread.py Symlinking /Users/joshua/test/lib/python3.5/hashlib.py Symlinking /Users/joshua/test/lib/python3.5/hmac.py Symlinking /Users/joshua/test/lib/python3.5/imp.py Symlinking /Users/joshua/test/lib/python3.5/importlib Symlinking /Users/joshua/test/lib/python3.5/rlcompleter.py Symlinking /Users/joshua/test/lib/python3.5/operator.py Symlinking /Users/joshua/test/lib/python3.5/_collections_abc.py Symlinking /Users/joshua/test/lib/python3.5/_bootlocale.py Creating /Users/joshua/test/lib/python3.5/site-packages Writing /Users/joshua/test/lib/python3.5/site.py Writing /Users/joshua/test/lib/python3.5/orig-prefix.txt Writing /Users/joshua/test/lib/python3.5/no-global-site-packages.txt Creating parent directories for /Users/joshua/test/include Symlinking /Users/joshua/test/include/python3.5m Creating /Users/joshua/test/bin New python executable in /Users/joshua/test/bin/python Changed mode of /Users/joshua/test/bin/python to 0o755 Testing executable with /Users/joshua/test/bin/python -c "import sys;out=sys.stdout;getattr(out, "buffer", out).write(sys.prefix.encode("utf-8"))" ERROR: The executable /Users/joshua/test/bin/python is not functioning ERROR: It thinks sys.prefix is '/Users/joshua' (should be '/Users/joshua/test') ERROR: virtualenv is not compatible with this system or executable here's current bash_profile: # Enable tab completion source ~/git-completion.bash # colors! green="\[\033[0;32m\]" blue="\[\033[0;34m\]" purple="\[\033[0;35m\]" reset="\[\033[0m\]" # Change command prompt source ~/git-prompt.sh export GIT_PS1_SHOWDIRTYSTATE=1 # '\u' adds the name of the current user to the prompt # '\$(__git_ps1)' adds git-related stuff # '\W' adds the name of the current directory export PS1="$purple\u$green\$(__git_ps1)$blue \W $ $reset" alias subl="/Applications/Sublime\ Text.app/Contents/SharedSupport/bin/subl" # Add Path export PATH="$HOME/anaconda/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH" # export PATH=$PATH:/users/Joshua/anaconda/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin # Locale $ export LC_ALL=en_US.UTF-8 $ export LANG=en_US.UTF-8
[ "My limited undestanding is that my python interpreter and packages are managed under Anaconda using Conda package manager, and my virtualenv was originally installed using pip..\nuninstalling virtualenv with pip and re-installing with conda fixed the issue\npip uninstall virtualenv\n\nconda install virtualenv\n\n", "I had this same issue when trying to install py2.7 on a newer system.\nThe root issue was that virtualenv was part of py3.7 and thus was not compatible:\n$ virtualenv -p python2.7 env\nRunning virtualenv with interpreter /usr/local/bin/python2.7\nNew python executable in /Users/blah/env/bin/python\nERROR: The executable /Users/blah/env/bin/python is not functioning\nERROR: It thinks sys.prefix is u'/Library/Frameworks/Python.framework/Versions/2.7' (should be u'/Users/blah/env')\nERROR: virtualenv is not compatible with this system or executable\n\n$ which virtualenv\n/Library/Frameworks/Python.framework/Versions/3.7/bin/virtualenv\n\n# install proper version of virtualenv \n$ pip2.7 install virtualenv\n\n$ /Library/Frameworks/Python.framework/Versions/2.7/bin/virtualenv -p python2.7 env\n\n$ . ./env/bin/activate\n(env) $ \n\n" ]
[ 34, 0 ]
[]
[]
[ "bash", "python", "virtualenv" ]
stackoverflow_0044575994_bash_python_virtualenv.txt
Q: defineConfig is undefined when required from @vue/cli-service - vue.config.js I'm trying to set up my vue.config.js file and ran into this issue when I use the defineConfig method from @vue/cli-service. TypeError: defineConfig is not a function vue.config.js const { defineConfig } = require('@vue/cli-service') module.exports = defineConfig({ devServer: { port: 3001, }, }); defineConfig is undefined. I tried looking for it in the cli-service folder, but (as I supposed) there is no such method. I'm using Vue.js 3, yarn 1.22.17, and my @vue/cli version is 4.5.15. Is it possible that defineConfig is for @vue/cli version 5? A: The defineConfig macro is used by Vite, not by Vue CLI; the right syntax for Vue CLI is: module.exports = { devServer: { port: 3001, }, }; A: The defineConfig function is available in @vue/cli-service v5 but not v4. I had the same problem, and updating to v5 fixed it. You can also see it explicitly mentioned in the Vue CLI docs (so it isn't specific to Vite).
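A quick way to confirm and apply the second answer's fix; these commands are a sketch, assuming a yarn 1.x project as described in the question:

# check the installed version (defineConfig is exported from v5 onward)
yarn list --pattern @vue/cli-service
# upgrade the project to Vue CLI 5
yarn upgrade @vue/cli-service@^5.0.0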
defineConfig is undefined when required from @vue/cli-service - vue.config.js
I'm trying to set up my vue.config.js file and ran into this issue when I use the defineConfig method from @vue/cli-service. TypeError: defineConfig is not a function vue.config.js const { defineConfig } = require('@vue/cli-service') module.exports = defineConfig({ devServer: { port: 3001, }, }); defineConfig is undefined. I tried looking for it in the cli-service folder, but (as I supposed) there is no such method. I'm using Vue.js 3, yarn 1.22.17, and my @vue/cli version is 4.5.15. Is it possible that defineConfig is for @vue/cli version 5?
[ "defineConfig macro is used by Vite not by Vue CLI, the right syntax for vue cli :\nmodule.exports = {\n devServer: {\n port: 3001,\n },\n};\n\n", "The defineConfig function is available in @vue/cli-service v5 but not v4. I had the same problem and updating to v5 fixed the problem. You can also see it explicitly mentioned in the Vue CLI docs (so it isn't specific to Vite).\n" ]
[ 3, 0 ]
[]
[]
[ "vue.js", "vuejs3" ]
stackoverflow_0072663598_vue.js_vuejs3.txt
Q: Script @php artisan package:discover handling the post-autoload-dump event returned with error code 255 I moved my project from one desktop to another. When I run php artisan, it does not work. I tried to run composer update, but it returns the error Script @php artisan package:discover handling the post-autoload-dump event returned with error code 255 A: This is how I solved this after an upgrade from Laravel version 6.x to 7.x: In App\Exceptions\Handler I changed //Use Exception; Use Throwable; Then the methods to accept instances of Throwable instead of Exception, as follows: //public function report(Exception $exception); public function report(Throwable $exception); //public function render($request, Exception $exception); public function render($request, Throwable $exception); In config\session.php: //'secure' => env('SESSION_SECURE_COOKIE', false), 'secure' => env('SESSION_SECURE_COOKIE', null), Then run composer update A: I solved the problem this way: cd bootstrap/cache/ rm -rf *.php The bootstrap directory contains the app.php file that initializes the framework. This directory also houses a cache directory that contains framework-generated files for performance optimization, such as the route and services cache files. Laravel stores cached configuration, provider, and services files there to optimize the fetching of this information. The problem in my case was that another developer had run the 'php artisan config:cache' command on my machine, and since the cache folder contains files that can safely be deleted, I deleted them and solved the problem. A: If this happened after a Laravel update from 6.x to 7.x, then this could be due to the update of Symfony. See this part of the upgrade guide: https://laravel.com/docs/7.x/upgrade#symfony-5-related-upgrades A: I was upgrading my Laravel from 5.8 to 8.0 and I got this error. So my fixes were: First, as @nobuhiroharada mentioned, I had missed the .env file in my project. Second, Laravel removed Exception and replaced it with Throwable, so we need to fix that in our app\Exceptions\Handler.php. One can refer to Medium.com for the error fix. In the upgrade guide of Laravel 8.x you need to update the dependencies like this. Next, in your composer.json file, remove the classmap block from the autoload section and add the new namespaced class directory mappings: "autoload": { "psr-4": { "App\\": "app/", "Database\\Factories\\": "database/factories/", "Database\\Seeders\\": "database/seeders/" } }, Finally, from bootstrap\cache delete the cache files and run composer update. These 5 steps might help you remove the error you are facing in your Laravel project. A: This happens because you have upgraded to Laravel 7. To fix it, update app/Exceptions/Handler.php like so: <?php namespace App\Exceptions; use Illuminate\Foundation\Exceptions\Handler as ExceptionHandler; use Throwable; // <-- ADD THIS class Handler extends ExceptionHandler { public function report(Throwable $exception) // <-- USE Throwable HERE { parent::report($exception); } public function render($request, Throwable $exception) // AND HERE { return parent::render($request, $exception); } } This is documented in the official upgrade guide here: https://laravel.com/docs/7.x/upgrade#symfony-5-related-upgrades A: I got the same problem in Win 8 and solved it. Here are the steps. Step-1: Go to your project directory Step-2: And type the command cd bootstrap/cache/ Step-3: Again type the command del -rf *.php Step-4: Update your composer: composer update Step-5: Now you are done: php artisan serve Thanks. 
A: Do you have a .env file in your new project? I had the same error message. When I added the .env file, the error was gone. The success message looks like this: Generating optimized autoload files > Illuminate\Foundation\ComposerScripts::postAutoloadDump > @php artisan package:discover Discovered Package: fideloper/proxy Discovered Package: ixudra/curl Discovered Package: laravel/tinker Discovered Package: nesbot/carbon Discovered Package: socialiteproviders/manager Package manifest generated successfully. I hope this will help you. A: Maybe you have an error in the project code (for example, in a route or controller); this may be one of the reasons for this error. In my project, the web.php file had a syntax error. I identified this when I ran the php artisan command: C:\OSPanel\domains\lara.shop.loc>php artisan In web.php line syntax error, unexpected end of file A: Same issue when I updated Laravel from 6.x to 7.x. I tried the most-voted answer but it didn't work; then I ran php artisan serve and noticed: RuntimeException In order to use the Auth::routes() method, please install the laravel/ui package. Try composer require laravel/ui; maybe it will work. A: I solved this error by deleting the vendor folder and then running composer update. I'm using Laravel 7. So, if you are not updating from an older Laravel version, maybe this is the solution. A: I had this same problem when running composer update in a Laravel project. In the composer.json it's configured to run artisan package:discover, which failed with: Class 'Symfony\Component\Translation\Translator' not found in vendor/nesbot/carbon/src/Carbon/Translator.php on line 18 When I looked in the vendor/symfony/translation directory I found that it was completely empty, which explained the error. The solution was to completely delete the vendor directory and then re-run composer update. This was the only way that I was able to make composer install the missing files. A: I deleted the composer.lock file and ran composer update. That solved mine. A: I had the same issue; my problem was that the PHP version of the server account did not match my Docker container. The SSH terminal was using the global php version for the server. php -v Confirm it's the version your project needs. Composer did warn me that a higher php version was required, but I rm -rf'd /vendor and ./composer.lock without paying too much attention to the warnings! A: This is not an actual error. If you look a bit above you'll see the actual error. In my case, there was an error in my code: PHP Fatal error: Declaration of App\Exceptions\Handler::render($request, App\Exceptions\Exception $exception) must be compatible with Illuminate\Foundation\Exceptions\Handler::render($request, Throwable $e) It is not possible to tell you what the actual problem in your code is, so you have to look for the real reason for this error in your stack trace. A: Nothing worked, so I installed a new project and read Handler.php in App\Exceptions; it was different, probably because I had copied some solution from the Internet and deleted the following: protected $dontReport = [ // ]; protected $dontFlash = [ 'password', 'password_confirmation', ]; I copy here all of the Handler.php generated by Laravel 7.5; it may be useful for someone: <?php namespace App\Exceptions; use Illuminate\Foundation\Exceptions\Handler as ExceptionHandler; use Throwable; class Handler extends ExceptionHandler { /** * A list of the exception types that are not reported. 
* * @var array */ protected $dontReport = [ // ]; /** * A list of the inputs that are never flashed for validation exceptions. * * @var array */ protected $dontFlash = [ 'password', 'password_confirmation', ]; /** * Report or log an exception. * * @param \Throwable $exception * @return void * * @throws \Exception */ public function report(Throwable $exception) { parent::report($exception); } /** * Render an exception into an HTTP response. * * @param \Illuminate\Http\Request $request * @param \Throwable $exception * @return \Symfony\Component\HttpFoundation\Response * * @throws \Throwable */ public function render($request, Throwable $exception) { return parent::render($request, $exception); } } A: I deleted my project, created a new folder, cloned the repository again, and after that I ran composer install / update. A: In my case the app/Console folder and its Kernel.php file were missing, so I created app/Console/Kernel.php using code from a previous project. Now everything works fine. A: Make sure your config\constants.php (and/or resources\lang\en\local.php) has no syntax errors. I get this error a lot from missing commas in the constants.php file. A: I got the same problem in Win 10 and solved it. Here are the steps. Step-1: Go to your project directory Step-2: Update your composer: composer update Step-3: Now you are done: php artisan serve A: If you have this error, the simplest fix is to try using composer install instead of composer update A: For me it was related to Kernel.php. I was adding a new schedule task into the Kernel; I also updated some controllers and views, and installed the Twilio extension. The error did not provide more information than Script @php artisan package:discover handling the post-autoload-dump event returned with error code 255 @Suresh Pangeni references the Kernel.php doc, so I checked my doc, which is in PROJECTFOLDER\app\Console\Kernel.php protected $commands = [ Commands\Inspire::class, Commands\Test::class \App\Console\Commands\Message::class, ]; A missing comma between Commands\Test::class and the next line didn't let me proceed. It provided no further warning or information when I ran composer dump-autoload. Hope this can help someone else that has a similar issue! A: Got the same problem: php artisan doesn't work, and composer install got the error: Script @php artisan package:discover handling the post-autoload-dump event returned with error code 255 And this works for me: when I switched to another Linux user, it worked; some files were owned by another Linux user. So I used the root account, changed all the project files to the specific user, chown -R www:www project/ and used that user to execute the composer command, and then it worked. A: I had the same error today; the causes are as follows: cause 1: the env file contains a space in one of the configuration values. cause 2: incorrect configuration of the Handler file belonging to the App\Exceptions namespace; cause 3: incorrect configuration of a file inheriting ExceptionHandler A: My case/solution, in case it helps anyone... I copied my repo over from my old Windows computer to a new one, and installed the latest php. composer install was returning: Root composer.json requires php ^7.1.3 but your php version (8.1.10) does not satisfy that requirement ...which I thought was odd (assuming 8 satisfied ^7), so I continued on with composer install --ignore-platform-reqs, and ended up with this particular issue. 
After trying a bunch of other possible solutions, what ended up working for me was simply downgrading to the same PHP version from my old machine (7.4.33). A: I was using Laravel 9.x and got the same error after trying to install the package maatwebsite/excel! Thanks to @samuel-terra and @dqureshiumar, here is the solution that worked for me: clear bootstrap/cache: cd bootstrap/cache/ rm -rf *.php then run composer update: composer update
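Since exit code 255 only says that the artisan subprocess died, a common first step, consistent with several of the answers above, is to run the failing script directly and clear the stale bootstrap caches by hand. A sketch:

# surface the real exception instead of composer's generic error code 255
php artisan package:discover
# remove cached config/services files that may reference old code
rm bootstrap/cache/*.php
# rebuild the autoloader once the underlying error is fixed
composer dump-autoload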
Script @php artisan package:discover handling the post-autoload-dump event returned with error code 255
I moved my project from one desktop to another. When I run php artisan, it does not work. I tried to run composer update, but it returns the error Script @php artisan package:discover handling the post-autoload-dump event returned with error code 255
[ "This is how I solved this after an upgrade from laravel version 6.x - 7.x:\nIn App\\Exceptions\\Handler changed \n//Use Exception;\nUse Throwable;\n\nThen methods to accept instances of Throwable instead of Exceptions as follows:\n//public function report(Exception$exception);\npublic function report(Throwable $exception);\n\n//public function render($request, Exception $exception);\npublic function render($request, Throwable $exception);\n\nIn config\\session.php:\n//'secure' => env('SESSION_SECURE_COOKIE', false),\n'secure' => env('SESSION_SECURE_COOKIE', null),\n\nThen run composer update\n", "I solved the problem this way:\ncd bootstrap/cache/\nrm -rf *.php\n\nThe bootstrap directory contains the app.php file that initializes the structure. This directory also houses a cache directory that contains structure-generated files for performance optimization, such as files and route cache services. Laravel stores configuration files, provider, and cached services to optimize the fetching of this information. The problem with me was when the other developer ran the 'php artisan config: cache' command on your machine and since the cache folder contains files that can be deleted, I deleted them and solved the problem.\n", "If this happened after Laravel update from 6.x to 7.x, then this could be due to the update of Symfony. See the upgrade guide of this part:\nhttps://laravel.com/docs/7.x/upgrade#symfony-5-related-upgrades\n", "I was upgrading my Laravel from 5.8 to 8.0 and I got this error.\nSo my fixes were\n\nAs @nobuhiroharada mentioned that I had missed .env file in my project\n\nSecond is that Laravel removed Exception and replaced it with Throwable. So we need to fix that in our app\\Exceptions\\Handler.php. One can refer Medium.com for the error fix.\n\nIn the upgrade guide of Laravel 8.x you need to update the dependencies like this\n\nNext, in your composer.json file, remove classmap block from the autoload section and add the new namespaced class directory mappings:\n\n\n\n\n\"autoload\": {\n \"psr-4\": {\n \"App\\\\\": \"app/\",\n \"Database\\\\Factories\\\\\": \"database/factories/\",\n \"Database\\\\Seeders\\\\\": \"database/seeders/\"\n }\n},\n\n\n\n\nFinally from bootstrap\\cache delete the cache files and run composer update.\n\nThese 5 steps might help you remove the error you are facing in your Laravel Project.\n", "This happens because you have upgraded to Laravel 7.\nTo fix it, update app/Exceptions/Handler.php like so:\n<?php\n\nnamespace App\\Exceptions;\n\nuse Illuminate\\Foundation\\Exceptions\\Handler as ExceptionHandler;\nuse Throwable; // <-- ADD THIS\n\nclass Handler extends ExceptionHandler\n{\n public function report(Throwable $exception) // <-- USE Throwable HERE\n {\n parent::report($exception);\n }\n public function render($request, Throwable $exception) // AND HERE\n {\n return parent::render($request, $exception);\n }\n}\n\nThis is documented in the official upgrade guide here:\nhttps://laravel.com/docs/7.x/upgrade#symfony-5-related-upgrades\n", "I got the same problem in Win 8 and solve it:\nHere is the steps.\nStep-1: Go to your project directory\nStep-2: And type command cd bootstrap/cache/\nStep-3: Again type command del -rf *.php\nStep-4: Update your composer composer update\nStep-5: Now you are done: php artisan serve\nThanks.\n", "Do you have .env file in your new project?\nI had same error message. 
When I add .env file, error is gone.\nsuccess message like this.\nGenerating optimized autoload files\n> Illuminate\\Foundation\\ComposerScripts::postAutoloadDump\n> @php artisan package:discover\nDiscovered Package: fideloper/proxy\nDiscovered Package: ixudra/curl\nDiscovered Package: laravel/tinker\nDiscovered Package: nesbot/carbon\nDiscovered Package: socialiteproviders/manager\nPackage manifest generated successfully.\n\nI hope this will help you.\n", "maybe you have an error in the project code (for example, in routes or controller). This may be one of the reasons for this error.\nIn my project, the web.php file has a syntax error. I defined this when I started the php artisan command\nC:\\OSPanel\\domains\\lara.shop.loc>php artisan\nIn web.php line \n syntax error, unexpected end of file \n\n", "Same issue when I update laravel from 6.x to 7.x\nI tried the most voted answer but it didn't work, then I used php artisan serve I noticed that:\nRuntimeException\n\nIn order to use the Auth::routes() method, please install the laravel/ui package.\n\nTry composer require laravel/ui maybe it will work.\n", "I solve this error by deleting the vendor table then run composer update. I'm using Laravel 7. So, if you are not updating from the older Laravel version, maybe this is the solution.\n", "I had this same problem when running composer update in a Laravel project. In the package.json it's configured to run artisan package:discover, which failed with:\nClass 'Symfony\\Component\\Translation\\Translator' not found in vendor/nesbot/carbon/src/Carbon/Translator.php on line 18\nWhen I looked in the vendor/symfony/translation directory I found that it was completely empty, which explained the error.\nThe solution was to completely delete the vendor directory and then re-run composer update. This was the only way that I was able to make composer install the missing files.\n", "I deleted composer.lock file and ran composer update.\nThat solved mine\n", "I had the same issue, my problem was the PHP version of the server account did not match my Docker container. The SSH terminal was using the global php version for the server.\nphp -v\n\nConfirm it's the version your project needs.\n\nComposer did warn me that a higher php version was required but I rm -rf'd /vendor and ./composer.lock without paying too much attention to the warnings!\n", "This is not an actual error. 
If you look a bit above you'll see the actual error.\nIn my case, there was an error in my code:\nPHP Fatal error: Declaration of \nApp\\Exceptions\\Handler::render($request, App\\Exceptions\\Exception $exception)\nmust be compatible with \nIlluminate\\Foundation\\Exceptions\\Handler::render($request, Throwable $e)\n\nIt is not possible to tell you what is actually a problem in your code, so you have to look real reason for this error in your stack trace.\n", "Nothing worked, so I installed a new project, and I read Handler.php in App\\Exceptions, it was different, probably because I copied some solution and Internet and deleted the following:\nprotected $dontReport = [\n //\n];\n\nprotected $dontFlash = [\n 'password',\n 'password_confirmation',\n];\n\nI copy here all of Handler.php generated by laravel 7.5, may be useful for someone:\n<?php\n\nnamespace App\\Exceptions;\n\nuse Illuminate\\Foundation\\Exceptions\\Handler as ExceptionHandler;\nuse Throwable;\n\nclass Handler extends ExceptionHandler\n{\n /**\n * A list of the exception types that are not reported.\n *\n * @var array\n */\n protected $dontReport = [\n //\n ];\n\n/**\n * A list of the inputs that are never flashed for validation exceptions.\n *\n * @var array\n */\nprotected $dontFlash = [\n 'password',\n 'password_confirmation',\n];\n\n/**\n * Report or log an exception.\n *\n * @param \\Throwable $exception\n * @return void\n *\n * @throws \\Exception\n */\npublic function report(Throwable $exception)\n{\n parent::report($exception);\n}\n\n/**\n * Render an exception into an HTTP response.\n *\n * @param \\Illuminate\\Http\\Request $request\n * @param \\Throwable $exception\n * @return \\Symfony\\Component\\HttpFoundation\\Response\n *\n * @throws \\Throwable\n */\npublic function render($request, Throwable $exception)\n{\n return parent::render($request, $exception);\n}\n}\n\n", "I deleted my project I created a new folder and cloned the repository again and after that I gave composer install / update.\n", "In my case there is missing folder and its file Kernel.php in\n\napp/Console\n\nSo I created app/Console/Kernel.php using code from previous project.\nNow everything working fine.\n", "Make sure your config\\constants.php (and/or resources\\lang\\en\\local.php) has no syntax errors. I get this error a lot by missing commas in constants.php file.\n", "I got the same problem in Win 10 and solve it:\nHere is the steps.\nStep-1: Go to your project directory\nStep-2: Update your composer\n\ncomposer update\n\nStep-3: Now you are done: php artisan serve\n", "If you have this error the simplest way is you can try using composer install instead of composer update\n", "For me it was related to Kernel.php\nI was adding new schedule task into the Kernel. I also updated some controllers, views, and installed twilio extension.\nThe error did not provide more information than\nScript @php artisan package:discover handling the post-autoload-dump event returned with error code 255\n\n@Suresh Pangeni refences the Kernel.php doc so I checked by doc that is in PROJECTFOLDER\\app\\Console\\Kernel.php\nprotected $commands = [\n Commands\\Inspire::class,\n Commands\\Test::class\n \\App\\Console\\Commands\\Message::class,\n];\n\nMissing Comma between Commands\\Test::class and the next line didn't let me proceed. 
It provided no further warning or information when I ran composer dump-autoload.\nHope this can help someone else that has a similar issue!\n", "Got the same problem.\n\nphp artisan doesn't work.\n\ncomposer install got error:\n\n\nScript @php artisan package:discover handling the post-autoload-dump event returned with error code 255\n\nAnd this works for me.\nWhen I switch another linux user. It works.\nsome files are owned by another linux user.\nSo I use root account and change all the project file to the specific user,\nchown -R www:www project/\n\nand use that user to execute composer cmd\nand then it works.\n", "I had the same error today, the causes are as follows:\ncause 1: the env file contains space in one of the configuration.\ncause 2: incorrect configuration of the Handler file belonging to the App\\Exceptions namespace;\ncause 3: incorrect configuration of a file inheriting ExceptionHandler\n", "My case/solution, in case it helps anyone...\nI copied my repo over from my old Windows computer to a new one, and installed the latest php.\ncomposer install was returning:\nRoot composer.json requires php ^7.1.3 but your php version (8.1.10) does not satisfy that requirement\n\n...which I thought was odd (assuming 8 satisfied ^7), so I continued on with composer install --ignore-platform-reqs, and ended up with this particular issue.\nAfter trying a bunch of other possible solutions, what ended up working for me was simply downgrading to the same PHP version from my old machine (7.4.33).\n", "I was using Laravel 9.x and got the same error after trying to install this package maatwebsite/excel!\nthanks to @samuel-terra and @dqureshiumar there is the solution worked for me:\n\nclear bootstrap/cache:\n\ncd bootstrap/cache/\nrm -rf *.php\n\n\nthen run composer update:\n\ncomposer update\n\n" ]
[ 124, 31, 24, 23, 18, 10, 7, 3, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Getting this error when my composer version 2.x then i rollback this\ncomposer self-update --1\n\nNow its perfectly working\n" ]
[ -5 ]
[ "command", "console", "laravel", "laravel_5", "php" ]
stackoverflow_0050840960_command_console_laravel_laravel_5_php.txt
Q: Prolog performance and recursion type I was playing with permutation in a couple of programs and stumbled upon this little experiment: Permutation method 1: permute([], []). permute([X|Rest], L) :- permute(Rest, L1), select(X, L, L1). Permutation method 2: permute([], []). permute(L, [P | P1]) :- select(P, L, L1), permute(L1, P1). Permutation method 3 (use the built-in): permute(L, P) :- permutation(L, P). I understand that it's good practice to use tail recursion, and generally using built-ins is supposed to be efficient. But when I run the following: time(findall(P, permute([1,2,3,4,5,6,7,8,9], P), L)). I got the following results, which are relatively consistent across several runs: Method 1: % 772,064 inferences, 1.112 CPU in 2.378 seconds (47% CPU, 694451 Lips) Method 2: % 3,322,118 inferences, 2.126 CPU in 4.660 seconds (46% CPU, 1562923 Lips) Method 3: % 2,959,245 inferences, 1.967 CPU in 4.217 seconds (47% CPU, 1504539 Lips) So the non-tail recursive method is quite significantly more real-time efficient. Is a particular recursion type generally more real time efficient, all other things being equal (I know that's not always a simple premise)? What this experiment tells me is that I may not want to be always striving for tail recursion, but I may need to do a performance analysis first, then weigh performance benefit against other benefits that tail recursion does have. A: Really nice question, +1! Tail call (and, as a special case, tail recursion) optimization only applies if the predicate is deterministic! This is not the case here, so your predicate will always require the local stack space, no matter in which order you place the goals. The non-tail recursive version is more (time-)efficient here when generating all solutions because it needs to do fewer unifications on backtracking. EDIT: I am expanding on this point since it is well worth studying the performance difference in more detail. First, for clarity, I rename the two different versions to make clear which version I am talking about: Variant 1: Non-tail recursive: permute1([], []). permute1([X|Rest], L) :- permute1(Rest, L1), select(X, L, L1). Variant 2: Tail-recursive: permute2([], []). permute2(L, [P|P1]) :- select(P, L, L1), permute2(L1, P1). Note again that, although the second version is clearly tail recursive, tail call (and hence also tail recursion) optimisation only helps if the predicate is deterministic, and hence cannot help when we generate all permutations, because choice points are still left in that case. Note also that I am deliberately retaining the original variable naming and main predicate name to avoid introducing more variants. Personally, I prefer a naming convention that makes clear which variables denote lists by appending an s to their names, in analogy to regular English plural. Also, I prefer predicate names that more clearly exhibit the (at least intended and desirable) declarative, relational nature of the code, and recommend to avoid imperative names for this reason. Consider now unfolding the first variant and partially evaluating it for a list of 3 elements. We start with a simple goal: ?- Xs = [A,B,C], permute1(Xs, L). and then gradually unfold it by plugging in the definition of permute1/2, while making all head unifications explicit. In the first iteration, we obtain: ?- Xs = [A,B,C], Xs1 = [B,C], permute1(Xs1, L1), select(A, L, L1). I am marking the head unifications in bold. Now, still one goal of permute1/2 is left. 
So we repeat the process, again plugging in the predicate's only applicable rule body in place of its head: ?- Xs = [A,B,C], Xs1 = [B,C], Xs2 = [C], permute1(Xs2, L2), select(B, L1, L2), select(A, L, L1). One more pass of this, and we obtain: ?- Xs = [A,B,C], Xs1 = [B,C], Xs2 = [C], select(C, L2, []), select(B, L1, L2), select(A, L, L1). This is what the original goal looks like if we just unfold the definition of permute1/2 repeatedly. Now, what about the second variant? Again, we start with a simple goal: ?- Xs = [A,B,C], permute2(Xs, Ys). One iteration of unfolding permute2/2 yields the equivalent version: ?- Xs = [A,B,C], Ys = [P|P1], select(P, Xs, L1), permute2(L1, P1). and a second iteration yields: ?- Xs = [A,B,C], Ys = [P|P1], select(P, Xs, L1), Ys1 = [P1|P2], select(P1, L1, L2), permute2(L2, P2). I leave the third and last iteration as a simple exercise that I strongly recommend you do. And from this it is clear what we initially probably hadn't expected: A big difference lies in the head unifications, which the first version performs deterministically right at the start, and the second version performs over and over on backtracking. This famous example nicely shows that, somewhat contrary to common expectation, tail recursion can be quite slow if the code is not deterministic. A: Really nice question. While waiting for someone to post a time/space analysis, the only caveat I can offer is that methods 1 & 2 don't terminate when the first argument is free, while method 3 does. Anyway, method 1 seems really much more efficient than the builtin. Good to know. edit: and given that the library implementation merely adjusts the instantiation of arguments and calls method 1, I'm going to bring up your method 2 as an alternative on the SWI-Prolog mailing list (or, if you prefer to do it yourself, let me know). more edit: I previously forgot to point out that permutation/3 (let's say, method 2) gives lexicographically ordered solutions, while method 1 doesn't. I think that could be a strong preferential requirement, but it should be expressed as an option, given the performance gain that method 1 allows. ?- time(call_nth(permute1([0,1,2,3,4,5,6,7,8,9],P),1000000)). % 3,112,758 inferences, 3,160 CPU in 3,162 seconds (100% CPU, 984974 Lips) P = [1, 4, 8, 3, 7, 6, 5, 9, 2|...] . ?- time(call_nth(permute2([0,1,2,3,4,5,6,7,8,9],P),1000000)). % 10,154,843 inferences, 9,779 CPU in 9,806 seconds (100% CPU, 1038398 Lips) P = [2, 7, 8, 3, 9, 1, 5, 4, 6|...] . YAP gives still more gain! ?- time(call_nth(permute1([0,1,2,3,4,5,6,7,8,9],P),1000000)). % 0.716 CPU in 0.719 seconds ( 99% CPU) P = [1,4,8,3,7,6,5,9,2,0] ?- time(call_nth(permute2([0,1,2,3,4,5,6,7,8,9],P),1000000)). % 8.357 CPU in 8.368 seconds ( 99% CPU) P = [2,7,8,3,9,1,5,4,6,0] edit: I've posted a comment on the SWI-Prolog doc page about this theme. A: I suspect what triggered this investigation was the discussion about tail-recursive sum/2 using an accumulator versus not. The sum/2 example is very cut-and-dry; one version is doing the arithmetic on the stack, the other is using an accumulator. However, like most things in the real world, the general truth is "it depends." For instance, compare the efficiency of methods 1 and 2 using full instantiation: ?- time(permute([1,2,3,4,5,6,7,8,9], [1,2,3,4,5,6,7,8,9])). % 18 inferences, 0.000 CPU in 0.000 seconds (66% CPU, 857143 Lips) true ; % 86,546 inferences, 0.022 CPU in 0.022 seconds (100% CPU, 3974193 Lips) false. ?- time(permute([1,2,3,4,5,6,7,8,9], [1,2,3,4,5,6,7,8,9])). 
% 18 inferences, 0.000 CPU in 0.000 seconds (62% CPU, 857143 Lips) true ; % 47 inferences, 0.000 CPU in 0.000 seconds (79% CPU, 940000 Lips) false. Method 1 beats method 2 when you're generating solutions (as in your tests), but method 2 beats method 1 when you're simply checking. Looking at the code it's easy to see why: the first one has to re-permute the whole tail of the list, while the second one just has to try selecting out one item. In this case it may be easy to point to the generating case and say it's more desired. That determination is simply one of the tradeoffs one must keep track of when dealing with Prolog. It's very difficult to make predicates that are all things to all people and always perform great; you must decide which are the "privileged paths" and which are not. I do vaguely recall someone recently showed an example of appending lists "during the return" and how you could take something that isn't or shouldn't be tail recursive and make it work thanks to unification, but I don't have the link handy. Hopefully whoever brought it up last time (Will?) will show up and share it. Great question, by the way. Your investigation method is valid, you'll just need to take into account other instantiation patterns as well. Speaking personally, I usually try to worry harder about correctness and generality than performance up-front. If I see immediately how to use an accumulator instead I will, but otherwise I won't do it that way until I run into an actual need for better performance. Tail recursion is just one method for improving performance; frequently there are other things that need to be addressed as badly or worse. A: Problem: ?- permute1(L, [a]). L = [a] ; <Infinite Loop> Solution: permute(Lst, Perm) :- % Adding Perm, to prevent infinite loop in: permute(L, [a]) permute_(Lst, Perm, Perm). permute_([], [], []). permute_([H|T], [_|TC], [P|TP]) :- permute_(T, TC, L1), select(H, [P|TP], L1). A: Nice example. But I would rather use the following, since it doesn't leave a choice point in permute([], []): permute3([], []). permute3([H|T], [P|P1]) :- select(P, [H|T], L1), permute3(L1, P1). It's tail recursive and about 20% faster than permute2/2, but still not as fast as permute1/2. ?- time((permute2([1,2,3,4,5,6,7,8,9,0],_), fail; true)). % 29,592,302 inferences, 1.653 CPU in 1.667 seconds (99% CPU, 17896885 Lips) true. ?- time((permute3([1,2,3,4,5,6,7,8,9,0],_), fail; true)). % 25,963,501 inferences, 1.470 CPU in 1.480 seconds (99% CPU, 17662390 Lips) true. But I am not sure whether the explanation by mat is correct. It could also be the case that permute1/2 performs LCO less often than permute3/2 does. Namely, of the n! results of the sub-call permute1/2, only the last redo doesn't leave a choice point. On the other hand, in permute3/2 every select/3 call has n results and doesn't leave a choice point in the last redo. I did a little test, writing a period for each LCO: ?- permute1([1,2,3],_), fail; nl. ... ?- permute3([1,2,3],_), fail; nl. .......... LCO does not have an extreme benefit in a fail loop. But the Prolog system doesn't know about it. So I guess that's where unnecessary time is spent, to a larger amount in permute3/2. A: Solution suggested by this comment. Using a meta-interpreter to count the number of calls: mi(G, S) :- mi([[G]-G], G, [], S). mi([], G, S, S). mi([[]-G|_], G, S, S). mi([Gs-G|Gss0], G0, S0, S) :- step(Gs, G, Gss, Gss0, S0, S1), mi(Gss, G0, S1, S). statistics_(PI, S0, S) :- select(PI-C0, S0, S1), !, succ(C0, C), S = [PI-C|S1]. statistics_(PI, S0, S) :- S = [PI-1|S0]. 
step([], _, Gss, Gss, S, S). step([G0|Gs0], G, Gss, Gss0, S0, S) :- functor(G0, A, N), statistics_(A/N, S0, S), findall(Gs-G, head_body(G0, Gs, Gs0), Gss, Gss0). head_body(true, Gs, Gs). head_body((G0,G1), [G0,G1|Gs], Gs). head_body(A=A, Gs, Gs). head_body(select(E0,[E0|Es],Es), Gs, Gs). head_body(select(E0,[E|Es0],[E|Es]), [select(E0,Es0,Es)|Gs], Gs). head_body(permute1([],[]), Gs, Gs). head_body(permute1([X|Rest],L), [permute1(Rest,L1),select(X,L,L1)|Gs], Gs). % head_body(permute1(L0,L), [L0=[],L=[]|Gs], Gs). % head_body(permute1(L0,L), [L0=[X|Rest],permute1(Rest,L1),select(X,L,L1)|Gs], Gs). head_body(permute2([],[]), Gs, Gs). head_body(permute2(L,[P|P1]), [select(P,L,L1),permute2(L1,P1)|Gs], Gs). % head_body(permute2(L,P0), [L=[],P0=[]|Gs], Gs). % head_body(permute2(L,P0), [P0=[P|P1],select(P,L,L1),permute2(L1,P1)|Gs], Gs). The result is the following: ?- mi((permute1([1,2,3,4,5,6,7,8,9],_),false), S). S=[false/0-362880,select/3-409113,permute1/2-10,(',')/2-1]. ?- mi((permute2([1,2,3,4,5,6,7,8,9],_),false), S). S=[select/3-1972819,false/0-362880,permute2/2-986410,(',')/2-1]. And if head unification is delayed with (=)/2: ?- mi((permute1([1,2,3,4,5,6,7,8,9],_),false), S). S=[(=)/2-21,false/0-362880,select/3-409113,permute1/2-10,(',')/2-1]. ?- mi((permute2([1,2,3,4,5,6,7,8,9],_),false), S). S=[select/3-1972819,(=)/2-2335700,false/0-362880,permute2/2-986410,(',')/2-1]. The predicate permute1/2 makes fewer calls than permute2/2, which explains why permute1/2 is faster than permute2/2.
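To reproduce the comparison side by side, a minimal timing harness using the renamed variants permute1/2 and permute2/2 from the first answer (the failure-driven loop mirrors the one used above; exact counts will vary by Prolog system):

?- time((permute1([1,2,3,4,5,6,7,8,9], _), fail; true)).
?- time((permute2([1,2,3,4,5,6,7,8,9], _), fail; true)).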
Prolog performance and recursion type
I was playing with permutation in a couple of programs and stumbled upon this little experiment: Permutation method 1: permute([], []). permute([X|Rest], L) :- permute(Rest, L1), select(X, L, L1). Permutation method 2: permute([], []). permute(L, [P | P1]) :- select(P, L, L1), permute(L1, P1). Permutation method 3 (use the built-in): permute(L, P) :- permutation(L, P). I understand that it's good practice to use tail recursion, and generally using built-ins is supposed to be efficient. But when I run the following: time(findall(P, permute([1,2,3,4,5,6,7,8,9], P), L)). I got the following results, which are relatively consistent across several runs: Method 1: % 772,064 inferences, 1.112 CPU in 2.378 seconds (47% CPU, 694451 Lips) Method 2: % 3,322,118 inferences, 2.126 CPU in 4.660 seconds (46% CPU, 1562923 Lips) Method 3: % 2,959,245 inferences, 1.967 CPU in 4.217 seconds (47% CPU, 1504539 Lips) So the non-tail recursive method is quite significantly more real-time efficient. Is a particular recursion type generally more real time efficient, all other things being equal (I know that's not always a simple premise)? What this experiment tells me is that I may not want to be always striving for tail recursion, but I may need to do a performance analysis first, then weigh performance benefit against other benefits that tail recursion does have.
[ "Really nice question, +1!\nTail call (and, as a special case, tail recursion) optimization only applies if the predicate is deterministic! This is not the case here, so your predicate will always require the local stack space, no matter in which order you place the goals. The non-tail recursive version is more (time-)efficient here when generating all solutions because it needs to do fewer unifications on backtracking.\nEDIT: I am expanding on this point since it is well worth studying the performance difference in more detail.\nFirst, for clarity, I rename the two different versions to make clear which version I am talking about:\nVariant 1: Non-tail recursive:\npermute1([], []).\npermute1([X|Rest], L) :-\n permute1(Rest, L1),\n select(X, L, L1).\n\nVariant 2: Tail-recursive:\npermute2([], []).\npermute2(L, [P|P1]) :-\n select(P, L, L1),\n permute2(L1, P1).\n\nNote again that, although the second version is clearly tail recursive, tail call (and hence also tail recursion) optimisation only helps if the predicate is deterministic, and hence cannot help when we generate all permutations, because choice points are still left in that case.\nNote also that I am deliberately retaining the original variable naming and main predicate name to avoid introducing more variants. Personally, I prefer a naming convention that makes clear which variables denote lists by appending an s to their names, in analogy to regular English plural. Also, I prefer predicate names that more clearly exhibit the (at least intended and desirable) declarative, relational nature of the code, and recommend to avoid imperative names for this reason.\n\nConsider now unfolding the first variant and partially evaluating it for a list of 3 elements. We start with a simple goal:\n?- Xs = [A,B,C], permute1(Xs, L).\n\nand then gradually unfold it by plugging in the definition of permute1/2, while making all head unifications explicit. In the first iteration, we obtain:\n\n?- Xs = [A,B,C], Xs1 = [B,C], permute1(Xs1, L1), select(A, L, L1).\n\nI am marking the head unifications in bold.\nNow, still one goal of permute1/2 is left. So we repeat the process, again plugging in the predicate's only applicable rule body in place of its head:\n\n?- Xs = [A,B,C], Xs1 = [B,C], Xs2 = [C], permute1(Xs2, L2), select(B, L1, L2), select(A, L, L1).\n\nOne more pass of this, and we obtain:\n\n?- Xs = [A,B,C], Xs1 = [B,C], Xs2 = [C], select(C, L2, []), select(B, L1, L2), select(A, L, L1).\n\nThis is what the original goal looks like if we just unfold the definition of permute1/2 repeatedly.\n\nNow, what about the second variant? 
Again, we start with a simple goal:\n?- Xs = [A,B,C], permute2(Xs, Ys).\n\nOne iteration of unfolding permute2/2 yields the equivalent version:\n\n?- Xs = [A,B,C], Ys = [P|P1], select(P, Xs, L1), permute2(L1, P1).\n\nand a second iteration yields:\n\n?- Xs = [A,B,C], Ys = [P|P1], select(P, Xs, L1), Ys1 = [P1|P2], select(P1, L1, L2), permute2(L2, P2).\n\nI leave the third and last iteration as a simple exercise that I strongly recommend you do.\n\nAnd from this it is clear what we initially probably hadn't expected: A big difference lies in the head unifications, which the first version performs deterministically right at the start, and the second version performs over and over on backtracking.\nThis famous example nicely shows that, somewhat contrary to common expectation, tail recursion can be quite slow if the code is not deterministic.\n", "Really nice question.\nWaiting for someone to post a time/space analysis, the only caveat I can offer is that method 1 & 2 don't terminate when first argument is free, while method 3 does. \nAnyway, method 1 seems really much more efficient than the builtin. Good to know.\nedit: and given that the library implementation merely adjust instantiation of arguments and calls method 1, I'm going to discuss on SWI-Prolog mailing list your method 2 as alternative (or, you prefer to do it yourself, let me know).\nmore edit: I forgot previously to point out that permutation/3 (let's say, method 2), gives lexicographically ordered solutions, while method 1 doesn't. I think that could be a strong preferential requirement, but should be expressed as an option, given the performance gain that method 1 allows.\n?- time(call_nth(permute1([0,1,2,3,4,5,6,7,8,9],P),1000000)).\n% 3,112,758 inferences, 3,160 CPU in 3,162 seconds (100% CPU, 984974 Lips)\nP = [1, 4, 8, 3, 7, 6, 5, 9, 2|...] .\n\n?- time(call_nth(permute2([0,1,2,3,4,5,6,7,8,9],P),1000000)).\n% 10,154,843 inferences, 9,779 CPU in 9,806 seconds (100% CPU, 1038398 Lips)\nP = [2, 7, 8, 3, 9, 1, 5, 4, 6|...] .\n\nYAP gives still more gain!\n?- time(call_nth(permute1([0,1,2,3,4,5,6,7,8,9],P),1000000)).\n% 0.716 CPU in 0.719 seconds ( 99% CPU)\nP = [1,4,8,3,7,6,5,9,2,0]\n\n?- time(call_nth(permute2([0,1,2,3,4,5,6,7,8,9],P),1000000)).\n% 8.357 CPU in 8.368 seconds ( 99% CPU)\nP = [2,7,8,3,9,1,5,4,6,0]\n\nedit: I've posted a comment on SWI-Prolog doc page about this theme.\n", "I suspect what triggered this investigation was the discussion about tail-recursive sum/2 using an accumulator versus not. The sum/2 example is very cut-and-dry; one version is doing the arithmetic on the stack, the other is using an accumulator. However, like most things in the real world, the general truth is \"it depends.\" For instance, compare the efficiency of methods 1 and 2 using full instantiation:\n?- time(permute([1,2,3,4,5,6,7,8,9], [1,2,3,4,5,6,7,8,9])).\n% 18 inferences, 0.000 CPU in 0.000 seconds (66% CPU, 857143 Lips)\ntrue ;\n% 86,546 inferences, 0.022 CPU in 0.022 seconds (100% CPU, 3974193 Lips)\nfalse.\n\n?- time(permute([1,2,3,4,5,6,7,8,9], [1,2,3,4,5,6,7,8,9])).\n% 18 inferences, 0.000 CPU in 0.000 seconds (62% CPU, 857143 Lips)\ntrue ;\n% 47 inferences, 0.000 CPU in 0.000 seconds (79% CPU, 940000 Lips)\nfalse.\n\nMethod 1 beats method 2 when you're generating solutions (as in your tests), but method 2 beats method 1 when you're simply checking. Looking at the code it's easy to see why: the first one has to re-permute the whole tail of the list, while the second one just has to try selecting out one item. 
In this case it may be easy to point to the generating case and say it's more desired. That determination is simply one of the tradeoffs one must keep track of when dealing with Prolog. It's very difficult to make predicates that are all things to all people and always perform great; you must decide which are the \"privileged paths\" and which are not.\nI do vaguely recall someone recently showed an example of appending lists \"during the return\" and how you could take something that isn't or shouldn't be tail recursive and make it work thanks to unification, but I don't have the link handy. Hopefully whoever brought it up last time (Will?) will show up and share it.\nGreat question, by the way. Your investigation method is valid, you'll just need to take into account other instantiation patterns as well. Speaking personally, I usually try to worry harder about correctness and generality than performance up-front. If I see immediately how to use an accumulator instead I will, but otherwise I won't do it that way until I run into an actual need for better performance. Tail recursion is just one method for improving performance; frequently there are other things that need to be addressed as badly or worse.\n", "Problem:\n?- permute1(L, [a]).\nL = [a] ;\n<Infinite Loop>\n\nSolution:\npermute(Lst, Perm) :-\n % Adding Perm, to prevent infinite loop in: permute(L, [a])\n permute_(Lst, Perm, Perm).\n\npermute_([], [], []).\npermute_([H|T], [_|TC], [P|TP]) :-\n permute_(T, TC, L1),\n select(H, [P|TP], L1).\n\n", "Nice example. But I would rather use, \nit doesn't leave a choice point in permute([], []): \npermute3([], []). \npermute3([H|T], [P|P1]) :- \n select(P, [H|T], L1), \n permute3(L1, P1). \n\nIts tail recursive and like 20% faster than \npermute2/2, but still not as fast as permute1/2.\n?- time((permute2([1,2,3,4,5,6,7,8,9,0],_), fail; true)).\n% 29,592,302 inferences, 1.653 CPU in 1.667 seconds (99% CPU, 17896885 Lips)\ntrue.\n\n?- time((permute3([1,2,3,4,5,6,7,8,9,0],_), fail; true)).\n% 25,963,501 inferences, 1.470 CPU in 1.480 seconds (99% CPU, 17662390 Lips)\ntrue.\n\nBut I am not sure whether the explanation by mat is correct. \nIt could be also the case that permute1/2 does less often \nperform LCO than permute3/2 does. \nNamely of n! results of the sub call permute1/2, only the \nlast redo doesn't leave a choice point. On the other hand in \npermute3/2 every select/3 call has n results and doesn't \nleave a choice point in the last redo. I did a little test, \nwrite a period for each LCO:\n?- permute1([1,2,3],_), fail; nl.\n...\n?- permute3([1,2,3],_), fail; nl.\n..........\n\nLCO has not an extreme benefit in a fail loop. But the Prolog\nsystem doesn't know about it. 
So I guess thats where unnecessary\ntime is spent, to a larger amount in permute3/2.\n", "Solution suggested by this comment.\nUsing a meta-interpreter to count the number of calls:\nmi(G, S) :-\n mi([[G]-G], G, [], S).\n\nmi([], G, S, S).\nmi([[]-G|_], G, S, S).\nmi([Gs-G|Gss0], G0, S0, S) :-\n step(Gs, G, Gss, Gss0, S0, S1),\n mi(Gss, G0, S1, S).\n\nstatistics_(PI, S0, S) :-\n select(PI-C0, S0, S1), !,\n succ(C0, C),\n S = [PI-C|S1].\nstatistics_(PI, S0, S) :-\n S = [PI-1|S0].\n\nstep([], _, Gss, Gss, S, S).\nstep([G0|Gs0], G, Gss, Gss0, S0, S) :-\n functor(G0, A, N),\n statistics_(A/N, S0, S),\n findall(Gs-G, head_body(G0, Gs, Gs0), Gss, Gss0).\n\n\nhead_body(true, Gs, Gs).\nhead_body((G0,G1), [G0,G1|Gs], Gs).\nhead_body(A=A, Gs, Gs).\n\nhead_body(select(E0,[E0|Es],Es), Gs, Gs).\nhead_body(select(E0,[E|Es0],[E|Es]), [select(E0,Es0,Es)|Gs], Gs).\n\nhead_body(permute1([],[]), Gs, Gs).\nhead_body(permute1([X|Rest],L), [permute1(Rest,L1),select(X,L,L1)|Gs], Gs).\n% head_body(permute1(L0,L), [L0=[],L=[]|Gs], Gs).\n% head_body(permute1(L0,L), [L0=[X|Rest],permute1(Rest,L1),select(X,L,L1)|Gs], Gs).\n\nhead_body(permute2([],[]), Gs, Gs).\nhead_body(permute2(L,[P|P1]), [select(P,L,L1),permute2(L1,P1)|Gs], Gs).\n% head_body(permute2(L,P0), [L=[],P0=[]|Gs], Gs).\n% head_body(permute2(L,P0), [P0=[P|P1],select(P,L,L1),permute2(L1,P1)|Gs], Gs).\n\nThe result is the following:\n?- mi((permute1([1,2,3,4,5,6,7,8,9],_),false), S).\n S=[false/0-362880,select/3-409113,permute1/2-10,(',')/2-1].\n?- mi((permute2([1,2,3,4,5,6,7,8,9],_),false), S).\n S=[select/3-1972819,false/0-362880,permute2/2-986410,(',')/2-1].\n\nAnd if head unification is delayed with (=)/2:\n?- mi((permute1([1,2,3,4,5,6,7,8,9],_),false), S).\n S=[(=)/2-21,false/0-362880,select/3-409113,permute1/2-10,(',')/2-1].\n?- mi((permute2([1,2,3,4,5,6,7,8,9],_),false), S).\n S=[select/3-1972819,(=)/2-2335700,false/0-362880,permute2/2-986410,(',')/2-1].\n\nThe predicate permute1/2 does less calls than permute2/2 which explains why permute1/2 is faster than permute2/2.\n" ]
[ 8, 4, 4, 2, 1, 1 ]
[]
[]
[ "list", "performance", "permutation", "prolog", "tail_recursion" ]
stackoverflow_0017016221_list_performance_permutation_prolog_tail_recursion.txt
Q: How can I write lines and bytes alternatively from one mixed file? I have a file that has normal lines mixed with weird binary characters. I need to write it all back out exactly as it should look. I am doing a readline() in a try-except, and parsing the successfully read lines. But I am not sure how to switch modes in the exception handler to write out the non-ASCII characters in binary, and then switch back to line mode... without losing the file pointer position :) Do you have an idea on that? A: Yes, you can use the io.BytesIO class to read and write binary data in Python, and the io.TextIOWrapper class to read and write text data. The io.TextIOWrapper class allows you to specify the encoding of the text data, and it also provides methods for dealing with Unicode errors, such as replacing or ignoring invalid characters. Here is an example of how you can use these classes to read and write a file that contains both text and binary data: import io # Open the file in binary mode with open('file.bin', 'rb') as f: # Create a BytesIO object that wraps the file byte_stream = io.BytesIO(f.read()) # Create a TextIOWrapper object that wraps the BytesIO object # Use the 'utf-8' encoding and 'ignore' errors text_stream = io.TextIOWrapper(byte_stream, encoding='utf-8', errors='ignore') # Read and write the file using the TextIOWrapper object for line in text_stream: try: # Process the text line print(line) except: # Handle binary data binary_data = byte_stream.read(8) print(binary_data) In this example, the io.BytesIO object is used to read the file in binary mode, and the io.TextIOWrapper object is used to read the file in text mode, using the utf-8 encoding and ignoring Unicode errors. The for loop iterates over the lines in the file and the try-except block is used to handle lines that contain binary data. In the try block, the text line is processed, and in the except block, the binary data is read from the io.BytesIO object and processed.
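Note that with errors='ignore' the decode in the answer silently drops invalid bytes rather than raising, so the except branch will rarely fire. A byte-level loop may be closer to what the question asks for; this is only a sketch, and the utf-8 encoding and file name are assumptions:

# read the file in binary and decode line by line, falling back to raw
# bytes whenever a line is not valid UTF-8
with open('file.bin', 'rb') as f:
    for raw in f:  # iterating a binary file yields bytes split on b'\n'
        try:
            print(raw.decode('utf-8'), end='')
        except UnicodeDecodeError:
            print(raw)  # keep the undecodable line as a bytes literal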
How can I write lines and bytes alternatively from one mixed file?
I have a file that has normal lines mixed with weird binary characters. I need to write it all back out exactly as it should look. I am doing a readline() in a try-except, and parsing the successfully read lines. But I am not sure how to switch modes in the exception handler to write out the non-ASCII characters in binary, and then switch back to line mode... without losing the file pointer position :) Do you have an idea on that?
[ "Yes, you can use the io.BytesIO class to read and write binary data in Python, and the io.TextIOWrapper class to read and write text data in Python. The io.TextIOWrapper class allows you to specify the encoding of the text data, and it also provides methods for dealing with Unicode errors, such as replacing or ignoring invalid characters.\nHere is an example of how you can use these classes to read and write a file that contains both text and binary data:\nimport io\n\n# Open the file in binary mode\nwith open('file.bin', 'rb') as f:\n # Create a BytesIO object that wraps the file\n byte_stream = io.BytesIO(f.read())\n\n# Create a TextIOWrapper object that wraps the BytesIO object\n# Use the 'utf-8' encoding and 'ignore' errors\ntext_stream = io.TextIOWrapper(byte_stream, encoding='utf-8', errors='ignore')\n\n# Read and write the file using the TextIOWrapper object\nfor line in text_stream:\n try:\n # Process the text line\n print(line)\n except:\n # Handle binary data\n binary_data = byte_stream.read(8)\n print(binary_data)\n\nIn this example, the io.BytesIO object is used to read the file in binary mode, and the io.TextIOWrapper object is used to read the file in text mode, using the utf-8 encoding and ignoring Unicode errors. The for loop iterates over the lines in the file and the try-except block is used to handle lines that contain binary data. In the try block, the text line is processed, and in the except block, the binary data is read from the io.BytesIO object and processed.\n" ]
[ 0 ]
[]
[]
[ "binary", "readline" ]
stackoverflow_0074661536_binary_readline.txt
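A sketch of a version where the decode attempt itself drives the branching (note that with errors='ignore' as in the answer above, the except block can never fire); it assumes the text lines are newline-delimited ASCII, per the question:

with open('file.bin', 'rb') as f:
    data = f.read()

for chunk in data.split(b'\n'):
    try:
        print(chunk.decode('ascii'))  # a normal text line
    except UnicodeDecodeError:
        print(chunk)  # a non-ASCII run: handle as raw bytes

Reading everything up front also sidesteps the original worry about losing the file position while switching modes.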
Q: How to stop DockerOperator after another one is completed Airflow So, basically I have an Airflow DAG that is as follows: Operator T1 executes a container that listens on a port forever; Operator T2 executes a container that uses the container from T1 inside a Python script; Operator T3 executes a Python script that doesn't need the other 2 containers, but needs to be executed after T2. The problem is, how can I stop the container from Operator T1 after Operator T2 has finished its task (failed or succeeded)? Basically I have the following graph [T1, T2] T2 >> T3 One solution was to add a fourth operator that tries to kill the first container created, e.g. using docker stop container_name, but I don't know how to do this. The problem is that, since T1 runs forever, the DAG won't stop. PS: I can't set a time limit; I don't know how much time T2 will take. A: Why don't you run your docker stop container_name inside a BashOperator?
How to stop DockerOperator after another one is completed Airflow
So, basically I have an Airflow DAG that is as follows: Operator T1 executes a container that listens on a port forever; Operator T2 executes a container that uses the container from T1 inside a Python script; Operator T3 executes a Python script that doesn't need the other 2 containers, but needs to be executed after T2. The problem is, how can I stop the container from Operator T1 after Operator T2 has finished its task (failed or succeeded)? Basically I have the following graph [T1, T2] T2 >> T3 One solution was to add a fourth operator that tries to kill the first container created, e.g. using docker stop container_name, but I don't know how to do this. The problem is that, since T1 runs forever, the DAG won't stop. PS: I can't set a time limit; I don't know how much time T2 will take.
[ "Why dont you run your docker stop container_name inside BashOperator?\n" ]
[ 0 ]
[]
[]
[ "airflow", "directed_acyclic_graphs", "docker", "dockeroperator" ]
stackoverflow_0074661089_airflow_directed_acyclic_graphs_docker_dockeroperator.txt
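A minimal sketch of that suggestion, assuming an Airflow 2.x import path and placeholder task/container names; trigger_rule="all_done" makes the stop task run whether T2 failed or succeeded:

from airflow.operators.bash import BashOperator

stop_t1 = BashOperator(
    task_id="stop_t1_container",
    bash_command="docker stop t1_container_name",  # hypothetical container name
    trigger_rule="all_done",  # run once T2 finishes, regardless of success or failure
)

t2 >> [stop_t1, t3]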
Q: Redirect on submit with React-Hook-Form I have a simple signup form and am using react-hook-form and react-router. I want to redirect within onSubmit after the Axios post request. I can't make it happen and I can't find anything online explaining why. I have tried both redirect and useNavigate from react-router. Nothing happens with redirect and I get this error when using navigate: Error: Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons: 1. You might have mismatching versions of React and the renderer (such as React DOM) 2. You might be breaking the Rules of Hooks 3. You might have more than one copy of React in the same app I found a post where the maintainer of Hook-Form suggests using props.history.push("./my-route") as is done here in 'Step1.' But this doesn't work for me. I don't know where history is coming from. The only thing that works is window.location.replace("/my-route") but I don't like this solution at all. Can someone help me or explain why the react-router methods are not working? Is it because Hook-Form is uncontrolled? My code: import axios from "axios"; import { useForm } from "react-hook-form"; import { redirect, useNavigate } from "react-router-dom"; export function Login() { const onSubmit = async (data) => { try { console.log(data); await axios.post("http://localhost:5000/signup", data); // window.location.replace("/my-route"); } catch (error) { console.error("There was an error!", error); } }; return ( <div> <h1>Blah</h1> <h4>Signup</h4> <form key={1} onSubmit={handleSubmit(onSubmitSignup)}> <input {...register("email", { required: true, minLength: 1 })} placeholder="email" /> <input type="password" {...register("password", { required: true, minLength: 1 })} placeholder="password" /> <button type="submit">Send</button> </form> </div> ); } A: The redirect utility function only works within a Data router (introduced in react-router-dom@6.4) and only in the route loader or action functions, not within a React component. You should use the useNavigate hook in this case. React hooks are only callable from React function components and Custom React Hooks, not in any callbacks. See the Rules of Hooks for more details. Use the navigate function that is returned by the hook to issue the imperative redirect. Example: import axios from "axios"; import { useForm } from "react-hook-form"; import { useNavigate } from "react-router-dom"; export function Login() { const navigate = useNavigate(); // <-- access navigate function const onSubmit = async (data) => { try { console.log(data); await axios.post("http://localhost:5000/signup", data); navigate("/my-route", { replace: true }); // <-- redirect } catch (error) { console.error("There was an error!", error); } }; return ( <div> <h1>Blah</h1> <h4>Signup</h4> <form key={1} onSubmit={handleSubmit(onSubmitSignup)}> <input {...register("email", { required: true, minLength: 1 })} placeholder="email" /> <input type="password" {...register("password", { required: true, minLength: 1 })} placeholder="password" /> <button type="submit">Send</button> </form> </div> ); }
Redirect on submit with React-Hook-Form
I have a simple signup form and am using react-hook-form and react-router. I want to redirect within onSubmit after the Axios post request. I can't make it happen and I can't find anything online explaining why. I have tried both redirect and useNavigate from react-router. Nothing happens with redirect and I get this error when using navigate: Error: Invalid hook call. Hooks can only be called inside of the body of a function component. This could happen for one of the following reasons: 1. You might have mismatching versions of React and the renderer (such as React DOM) 2. You might be breaking the Rules of Hooks 3. You might have more than one copy of React in the same app I found a post where the maintainer of Hook-Form suggests using props.history.push("./my-route") as is done here in 'Step1.' But this doesn't work for me. I don't know where history is coming from. The only thing that works is window.location.replace("/my-route") but I don't like this solution at all. Can someone help me or explain why the react-router methods are not working? Is it because Hook-Form is uncontrolled? My code: import axios from "axios"; import { useForm } from "react-hook-form"; import { redirect, useNavigate } from "react-router-dom"; export function Login() { const onSubmit = async (data) => { try { console.log(data); await axios.post("http://localhost:5000/signup", data); // window.location.replace("/my-route"); } catch (error) { console.error("There was an error!", error); } }; return ( <div> <h1>Blah</h1> <h4>Signup</h4> <form key={1} onSubmit={handleSubmit(onSubmitSignup)}> <input {...register("email", { required: true, minLength: 1 })} placeholder="email" /> <input type="password" {...register("password", { required: true, minLength: 1 })} placeholder="password" /> <button type="submit">Send</button> </form> </div> ); }
[ "The redirect utility function only works within a Data router (introduced in `[email protected]) and only in the route loader or action functions, not within a React component.\nYou should use the useNavigate hook in this case. React hooks are only callable from React function components and Custom React Hooks, not in any callbacks. See the Rules of Hooks for more details. Use the navigate function that is returned by the hook to issue the imperative redirect.\nExample:\nimport axios from \"axios\";\nimport { useForm } from \"react-hook-form\";\nimport { useNavigate } from \"react-router-dom\";\n\nexport function Login() {\n const navigate = useNavigate(); // <-- access navigate function\n\n const onSubmit = async (data) => {\n try {\n console.log(data);\n await axios.post(\"http://localhost:5000/signup\", data);\n navigate(\"/my-route\", { replace: true }); // <-- redirect\n } catch (error) {\n console.error(\"There was an error!\", error);\n }\n };\n\n return (\n <div>\n <h1>Blah</h1>\n <h4>Signup</h4>\n <form key={1} onSubmit={handleSubmit(onSubmitSignup)}>\n <input\n {...register(\"email\", { required: true, minLength: 1 })}\n placeholder=\"email\"\n />\n <input\n type=\"password\"\n {...register(\"password\", { required: true, minLength: 1 })}\n placeholder=\"password\"\n />\n <button type=\"submit\">Send</button>\n </form>\n </div>\n );\n}\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "react_hook_form", "react_router", "react_router_dom", "reactjs" ]
stackoverflow_0074661187_javascript_react_hook_form_react_router_react_router_dom_reactjs.txt
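One detail worth flagging in both snippets above: register and handleSubmit are used but never obtained from the hook, and the handler is defined as onSubmit while the form references onSubmitSignup. A minimal sketch of the missing wiring, assuming the rest of the component stays as shown:

const { register, handleSubmit } = useForm(); // inside Login(), before the return
// ...
<form onSubmit={handleSubmit(onSubmit)}>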
Q: When do I use the TestFixtureSetUp attribute instead of a default constructor? The NUnit documentation doesn't tell me when to use a method with a TestFixtureSetup and when to do the setup in the constructor. public class MyTest { private MyClass myClass; public MyTest() { myClass = new MyClass(); } [TestFixtureSetUp] public void Init() { myClass = new MyClass(); } } Are there any good/bad practices about the TestFixtureSetup versus default constructor or isn't there any difference? A: Why would you need to use a constructor in your test classes? I use [SetUp] and [TearDown] marked methods for code to be executed before and after each test, and similarly [TestFixtureSetUp] and [TestFixtureTearDown] marked methods for code to be executed only once before and after all tests in the fixture have been run. I guess you could probably substitute the [TestFixtureSetUp] for a constructor (although I haven't tried), but this only seems to break from the clear convention that the marked methods provide. A: I think this has been one of the issues that hasn't been addressed by the nUnit team. However, there is the excellent xUnit project that saw this exact issue and decided that constructors were a good thing to use on test fixture initialization. For nunit, my best practice in this case has been to use the TestFixtureSetUp, TestFixtureTearDown, SetUp, and TearDown methods as described in the documentation. I think it also helps me when I don't think of an nUnit test fixture as a normal class, even though you are defining it with that construct. I think of them as fixtures, and that gets me over the mental hurdle and allows me to overlook this issue. A: One thing you can't do with [TestFixtureSetup] that you can do in the constructor is receive parameters from the [TestFixture]. If you want to parameterise your test fixture, then you will have to use the constructor for at least some of the set-up. So far, I've only used this for integration tests, e.g. for testing a data access layer with multiple data providers: [TestFixture("System.Data.SqlClient", "Server=(local)\\SQLEXPRESS;Initial Catalog=MyTestDatabase;Integrated Security=True;Pooling=False")] [TestFixture("System.Data.SQLite", "Data Source=MyTestDatabase.s3db")] internal class MyDataAccessLayerIntegrationTests { MyDataAccessLayerIntegrationTests( string dataProvider, string connectionString) { ... } } A: I have often wondered what the need for [TestFixtureSetUp] was, given that there is a simple, well understood first class language construct that does exactly the same thing. My preference is to use constructors, to take advantage of the readonly keyword ensuring member variables cannot be reinitialised. A: There is a difference between constructor and method marked with [TestFixtureSetUp] attribute. According to NUnit documentation: It is advisable that the constructor not have any side effects, since NUnit may construct the object multiple times in the course of a session. So if you have any expensive initialization it is better to use TestFixtureSetUp. A: [TestFixtureSetUp] and [TestFixtureTearDown] are for the whole test class and run only once. [SetUp] and [TearDown] are for every test method (test) and run for every test. A: An important difference between constructor and TestFixtureSetUp is that, in NUnit 2 at least, constructor code is actually executed on test enumeration, not just test running, so basically you want to limit ctor code to only populating readonly, i.e. parameter, values. Anything that causes side-effects or does any actual work needs to either be wrapped in a Lazy or done in the TestFixtureSetUp / OneTimeSetUp. So, you can think of the constructor as solely a place to configure the test. Whereas the TestFixtureSetUp is where the test fixture, the required initial state of the system before tests are run, is initialized. A: I think I have a negative good answer - the reason to use a constructor instead of the attribute is when you have an inheritance between test classes. Only one method annotated with [TestFixtureSetup] will be called (on the concrete class only), but the other fixture initializers will not. In this case I'd rather put the initialization in the constructor, which has a well-defined semantics for inheritance :) A: The method with [SetUp] may be async. The constructor can not be.
When do I use the TestFixtureSetUp attribute instead of a default constructor?
The NUnit documentation doesn't tell me when to use a method with a TestFixtureSetup and when to do the setup in the constructor. public class MyTest { private MyClass myClass; public MyTest() { myClass = new MyClass(); } [TestFixtureSetUp] public void Init() { myClass = new MyClass(); } } Are there any good/bad practices about the TestFixtureSetup versus default constructor or isn't there any difference?
[ "Why would you need to use a constructor in your test classes?\nI use [SetUp] and [TearDown] marked methods for code to be executed before and after each test, and similarly [TestFixtureSetUp] and [TestFixtureTearDown] marked methods for code to be executed only once before and after all test in the fixture have been run.\nI guess you could probably substitute the [TestFixtureSetUp] for a constructor (although I haven't tried), but this only seems to break from the clear convention that the marked methods provide.\n", "I think this has been one of the issues that hasn't been addressed by the nUnit team. However, there is the excellent xUnit project that saw this exact issue and decided that constructors were a good thing to use on test fixture initialization.\nFor nunit, my best practice in this case has been to use the TestFixtureSetUp, TestFixtureTearDown, SetUp, and TearDown methods as described in the documentation. \nI think it also helps me when I don't think of an nUnit test fixture as a normal class, even though you are defining it with that construct. I think of them as fixtures, and that gets me over the mental hurdle and allows me to overlook this issue. \n", "One thing you can't do with [TestFixtureSetup] that you can do in the constructor is receive parameters from the [TestFixture] .\nIf you want to parameterise your test fixture, then you will have to use the constructor for at least some of the set-up. So far, I've only used this for integration tests, e.g. for testing a data access layer with multiple data providers:\n[TestFixture(\"System.Data.SqlClient\",\n \"Server=(local)\\\\SQLEXPRESS;Initial Catalog=MyTestDatabase;Integrated Security=True;Pooling=False\"))]\n[TestFixture(\"System.Data.SQLite\", \"Data Source=MyTestDatabase.s3db\")])]\ninternal class MyDataAccessLayerIntegrationTests\n{\n MyDataAccessLayerIntegrationTests(\n string dataProvider,\n string connectionString)\n {\n ...\n }\n}\n\n", "I have often wondered what the need for [TestFixtureSetUp] was, given that there is a simple, well understood first class language construct that does exactly the same thing.\nMy preference is to use constructors, to take advantage of the readonly keyword ensuring member variables cannot be reinitialised.\n", "There is difference between constructor and method marked with [TestFixtureSetUp] attribute. According to NUnit documentation:\n\nIt is advisable that the constructor not have any side effects, since NUnit may construct the object multiple times in the course of a session.\n\nSo if you have any expensive initialization it is better to use TestFixtureSetUp.\n", "[TestFixtureSetUp] and [TestFixtureTearDown] are for whole test class. runs only once.\n[SetUp] and [TearDown] are for every test method(test). runs for every test.\n", "An important difference between constructor and TestFixtureSetUp is that, in NUnit 2 at least, constructor code is actually executed on test enumeration, not just test running, so basically you want to limit ctor code to only populating readonly, i.e. parameter, values. Anything that causes side-effects or does any actual work needs to either be wrapped in a Lazy or done in the TestFixtureSetUp / OneTimeSetUp. So, you can think of the constructor as solely a place to configure the test. 
Whereas the TestFixtureSetUp is where the test fixture, the required initial state of the system before tests are run, is initialized.\n", "I think I have a negative good answer - the reason to use a constructor instead of the attribute is when you have an inheritence between test classes.\nOnly one method annotated with [TestFixtureSetup] will be called (on the concrete class only), but the other fixture initializers will not. In this case I'd rather put the initialization in the constructor, which has a well-defined semantics for inheritance :)\n", "The method with [SetUp] may be async. The constructor can not be.\n" ]
[ 65, 21, 13, 11, 11, 4, 3, 2, 0 ]
[ "The constructor and the SetUp methods are used differently:\nThe constructor is run only once.\nHowever, the SetUp methods are run multiple times, before every test case is executed.\n" ]
[ -2 ]
[ "c#", "nunit", "unit_testing" ]
stackoverflow_0000212718_c#_nunit_unit_testing.txt
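A small C# sketch of the split the answers converge on, using the NUnit 3 name [OneTimeSetUp] (mentioned above as the successor of [TestFixtureSetUp]); the readonly field shows what the constructor is still good for:

[TestFixture]
public class MyTest
{
    private readonly string _configName = "default"; // cheap, side-effect-free: fine in the ctor
    private MyClass _myClass;

    [OneTimeSetUp]
    public void Init()
    {
        _myClass = new MyClass(); // expensive or side-effecting work: run once before all tests
    }
}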
Q: Keep observations with two or more consecutive years of data by group I have a dataset consisting of directorid, match_id, and calyear. I would like to keep only observations by director_id and match_id that have at least 2 consecutive years of data. I have tried a few different ways to do this, and haven't been able to get it quite right. The few different things I have tried have also required multiple steps and weren't particularly clean. Here is what I have: director_id match_id calyear 282 1111 2006 282 1111 2007 356 2222 2005 356 2222 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2013 600 3355 2015 600 3355 2016 753 4444 2005 753 4444 2008 753 4444 2009 Here is what I want: director_id match_id calyear 282 1111 2006 282 1111 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2015 600 3355 2016 753 4444 2008 753 4444 2009 I started by creating a variable equal to one: df['tosum'] = 1 And then count the number of observations where the difference in calyear by group is equal to 1. df['num_years'] = ( df.groupby(['directorid','match_id'])['tosum'].transform('sum').where(df.groupby(['match_id'])['calyear'].diff()==1, np.nan) ) And then I keep all observations with 'num_years' greater than 1. However, the first observation per director_id match_id gets set equal to NaN. In general, I think I am going about this in a convoluted way...it feels like there should be a simpler way to achieve my goal. Any help is greatly appreciated! A: Yes you need to groupby 'director_id', 'match_id' and then do a transform but the transform just needs to look at the difference between next element in both directions. In one direction you need to see if it equals 1 and in another -1 and then subset using the resulting True/False values. df = df[ df.groupby(["director_id", "match_id"])["calyear"].transform( lambda x: (x.diff().eq(1)) | (x[::-1].diff().eq(-1)) ) ] print(df): director_id match_id calyear 0 282 1111 2006 1 282 1111 2007 4 600 3333 2010 5 600 3333 2011 6 600 3333 2012 8 600 3355 2015 9 600 3355 2016 11 753 4444 2008 12 753 4444 2009
Keep observations with two or more consecutive years of data by group
I have a dataset consisting of directorid, match_id, and calyear. I would like to keep only observations by director_id and match_id that have at least 2 consecutive years of data. I have tried a few different ways to do this, and haven't been able to get it quite right. The few different things I have tried have also required multiple steps and weren't particularly clean. Here is what I have: director_id match_id calyear 282 1111 2006 282 1111 2007 356 2222 2005 356 2222 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2013 600 3355 2015 600 3355 2016 753 4444 2005 753 4444 2008 753 4444 2009 Here is what I want: director_id match_id calyear 282 1111 2006 282 1111 2007 600 3333 2010 600 3333 2011 600 3333 2012 600 3355 2015 600 3355 2016 753 4444 2008 753 4444 2009 I started by creating a variable equal to one: df['tosum'] = 1 And then count the number of observations where the difference in calyear by group is equal to 1. df['num_years'] = ( df.groupby(['directorid','match_id'])['tosum'].transform('sum').where(df.groupby(['match_id'])['calyear'].diff()==1, np.nan) ) And then I keep all observations with 'num_years' greater than 1. However, the first observation per director_id match_id gets set equal to NaN. In general, I think I am going about this in a convoluted way...it feels like there should be a simpler way to achieve my goal. Any help is greatly appreciated!
[ "Yes you need to groupby 'director_id', 'match_id' and then do a transform but the transform just needs to look at the difference between next element in both directions. In one direction you need to see if it equals 1 and in another -1 and then subset using the resulting True/False values.\ndf = df[\n df.groupby([\"director_id\", \"match_id\"])[\"calyear\"].transform(\n lambda x: (x.diff().eq(1)) | (x[::-1].diff().eq(-1))\n )\n]\n\nprint(df):\n director_id match_id calyear\n0 282 1111 2006\n1 282 1111 2007\n4 600 3333 2010\n5 600 3333 2011\n6 600 3333 2012\n8 600 3355 2015\n9 600 3355 2016\n11 753 4444 2008\n12 753 4444 2009\n\n" ]
[ 2 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074661298_pandas_python.txt
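A variant of the same idea that generalizes to runs of any length: label each stretch of consecutive years with a cumulative count of the breaks, then keep stretches of size two or more. A sketch assuming calyear is sorted within each group:

g = df.groupby(["director_id", "match_id"])["calyear"]
run_id = g.transform(lambda x: x.diff().ne(1).cumsum())
run_size = df.groupby(["director_id", "match_id", run_id])["calyear"].transform("size")
out = df[run_size >= 2]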
Q: nodejs mysql not inserting data So I have been getting two errors whenever I'm running my code through the VS Code terminal. 1- I have created a connection to a database through XAMPP. 2- I have an HTML form to insert. 3- The server works but I get these errors: Server Started! Listening on port 5001.. Error while creating Tourist! Access denied for user 'root'@'localhost' (using password: YES) C:\Users\may\Desktop\projectRun\routes\tourist.js:10 next(err); ^ ReferenceError: next is not defined at C:\Users\may\Desktop\projectRun\routes\tourist.js:10:9 at processTicksAndRejections (node:internal/process/task_queues:96:5) The route: router.post("/", async (req, res) => { try { res.json(await tourist.create(req.body)); } catch (err) { console.error(`Error while creating Tourist!`, err.message); next(err); } }) The HTML button that is supposed to insert data and move to another page: <input type="submit" value="Next" name="submit" onclick="submitTouristForm();"> A: The error references next, which is not defined in your route handler. Note also that the name of the input is submit, so it submits a variable with name submit and value 'Next'; try changing the name of the input. As for the next(err) the error points at, try commenting it out.
nodejs mysql not inserting data
So I have been getting two errors whenever I'm running my code through the VS Code terminal. 1- I have created a connection to a database through XAMPP. 2- I have an HTML form to insert. 3- The server works but I get these errors: Server Started! Listening on port 5001.. Error while creating Tourist! Access denied for user 'root'@'localhost' (using password: YES) C:\Users\may\Desktop\projectRun\routes\tourist.js:10 next(err); ^ ReferenceError: next is not defined at C:\Users\may\Desktop\projectRun\routes\tourist.js:10:9 at processTicksAndRejections (node:internal/process/task_queues:96:5) The route: router.post("/", async (req, res) => { try { res.json(await tourist.create(req.body)); } catch (err) { console.error(`Error while creating Tourist!`, err.message); next(err); } }) The HTML button that is supposed to insert data and move to another page: <input type="submit" value="Next" name="submit" onclick="submitTouristForm();">
[ "The error reference to next not defined because the name of the input is submit,\nso it submits a variable with name submit and value = 'Next',\ntry change the name of the input .\nThe next it refer cuold have been even thatn with Next(err) try comment it\n" ]
[ 0 ]
[]
[]
[ "html", "mysql", "node.js" ]
stackoverflow_0074661448_html_mysql_node.js.txt
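A minimal sketch of the handler with next actually declared, assuming an Express-style router as in the question:

router.post("/", async (req, res, next) => {
  try {
    res.json(await tourist.create(req.body));
  } catch (err) {
    console.error(`Error while creating Tourist!`, err.message);
    next(err); // now defined: forwards the error to the error-handling middleware
  }
});

The separate "Access denied for user 'root'@'localhost'" error is MySQL rejecting the credentials, so the connection config (user/password) still needs fixing on its own.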
Q: Why does CsvToBeanBuilder struggle mapping boolean csv entries to List? I have a method that reads the CSV into a bean class: public void readCSV() throws FileNotFoundException { String fileName = "<path-to-csv>/question.csv"; List<Question> beans = new CsvToBeanBuilder(new FileReader(fileName)) .withType(Question.class) .withSeparator(';') .build() .parse(); } Question: public class Question { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @CsvBindByPosition(position = 0) private String questionNumber; @CsvBindByPosition(position = 1) private String question; @CsvBindByPosition(position = 2) private String answer1; @CsvBindByPosition(position = 3) private String answer2; @CsvBindByPosition(position = 4) private String answer3; @CsvBindByPosition(position = 5) private String answer4; @CsvBindByPosition(position = 6) private String answer5; @CsvBindByPosition(position = 7) private String answer6; @CsvBindByPosition(position = 8) private String answer7; @CsvBindByPosition(position = 9) private String answer8; @CsvBindByPosition(position = 10) private String answer9; @CsvBindByPosition(position = 11) @ElementCollection private List<Boolean> answers = new ArrayList<>(); } and a line of the CSV: 1.1 ; Question? ; 1a ; 2b ; 3c ; 4d ; "" ; "" ; "" ; "" ; "" ; True, False, False, False Unfortunately the logs report an incorrect type for the list: Caused by: org.apache.commons.beanutils.ConversionException: Can't convert value ' True, False, False, False' to type interface java.util.List Could you help me? A: Fix In your scenario, make use of the annotation @CsvBindAndSplitByPosition. For your requirements, it's important to override the annotation parameter splitOn to a regular expression that meets your specific splitting needs. Change the mapping for the List to: @ElementCollection @CsvBindAndSplitByPosition(position = 11, elementType = Boolean.class, splitOn = "\\s?,\\s?", writeDelimiter = ",\\s+") private List<Boolean> answers = new ArrayList<>(); Important Your input csv file requires an important change/fix: 1.1 ; Question? ; 1a ; 2b ; 3c ; 4d ; "" ; "" ; "" ; "" ; "" ;True, False, False, False Note: The whitespace in front of the first boolean element is removed. If this is not removed (and ensured), the parsing will fail, as the split token will evaluate to ' True', which can't be converted to java.lang.Boolean. I tested the above code with OpenCSV 5.6/5.7 under OpenJDK 17.
Why does CsvToBeanBuilder struggle mapping boolean csv entries to List?
I have method to read CSV class: public void readCSV() throws FileNotFoundException { String fileName = "<path-to-csv>/question.csv"; List<Question> beans = new CsvToBeanBuilder(new FileReader(fileName)) .withType(Question.class) .withSeparator(';') .build() .parse(); } Question: public class Question { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @CsvBindByPosition(position = 0) private String questionNumber; @CsvBindByPosition(position = 1) private String question; @CsvBindByPosition(position = 2) private String answer1; @CsvBindByPosition(position = 3) private String answer2; @CsvBindByPosition(position = 4) private String answer3; @CsvBindByPosition(position = 5) private String answer4; @CsvBindByPosition(position = 6) private String answer5; @CsvBindByPosition(position = 7) private String answer6; @CsvBindByPosition(position = 8) private String answer7; @CsvBindByPosition(position = 9) private String answer8; @CsvBindByPosition(position = 10) private String answer9; @CsvBindByPosition(position = 11) @ElementCollection private List<Boolean> answers = new ArrayList<>(); } and line of csv question 1.1 ; Question? ; 1a ; 2b ; 3c ; 4d ; "" ; "" ; "" ; "" ; "" ; True, False, False, False Unfortunately logs give me information about incorrect type of list Caused by: org.apache.commons.beanutils.ConversionException: Can't convert value ' True, False, False, False' to type interface java.util.List Could you help me?
[ "Fix\nIn your scenario, make use of the annotation @CsvBindAndSplitByPosition. For your requirements, it's important to override the annotation parameter splitOn to a regular expression that meets your specific splitting needs.\nChange the mapping for the List to:\n@ElementCollection\n@CsvBindAndSplitByPosition(position = 11, \n elementType = Boolean.class, \n splitOn = \"\\\\s?,\\\\s?\", \n writeDelimiter = \",\\\\s+\")\nprivate List<Boolean> answers = new ArrayList<>();\n\nImportant\nYour input csv file requires an important change/fix:\n1.1 ; Question? ; 1a ; 2b ; 3c ; 4d ; \"\" ; \"\" ; \"\" ; \"\" ; \"\" ;True, False, False, False\n\nNote: The whitespace in front of the first boolean element is removed. If this is not removed (and ensured), the parsing will fail as it the split token will evaluate to True which can't be converted to java.lang.Boolean.\nI tested the above code with OpenCSV 5.6/5.7 under OpenJDK 17.\n" ]
[ 0 ]
[]
[]
[ "csv", "data_conversion", "java", "opencsv" ]
stackoverflow_0070427901_csv_data_conversion_java_opencsv.txt
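As a quick check of the fix, a minimal harness that parses one corrected line in memory, using only calls already shown in the question (StringReader stands in for the file; getAnswers() is an assumed standard getter, since the class listing doesn't show accessors):

String line = "1.1 ; Question? ; 1a ; 2b ; 3c ; 4d ; \"\" ; \"\" ; \"\" ; \"\" ; \"\" ;True, False, False, False";
List<Question> beans = new CsvToBeanBuilder<Question>(new java.io.StringReader(line))
        .withType(Question.class)
        .withSeparator(';')
        .build()
        .parse();
System.out.println(beans.get(0).getAnswers()); // expected: [true, false, false, false]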
Q: Mongoose save multiple updated documents I have a situation where I need to update multiple documents in a collection, but each update is logic-driven and is not uniform. In other words, I don't want to apply the same update to every affected document. Each document's changes will be driven by some logic on the system. After this logic has been executed, I have an array of documents that have been edited in application space using standard JS operations like: someDocument.someField = "some new value"; someDocument.someRef = otherDocument._id; editedDocs.push(someDocument); Then I normally do something like: for (let i = 0; i < editedDocs.length; i++) { await editedDocs[i].save({session}); } I put the updates into a transaction so they succeed/fail atomically. But I've noticed this approach is slow and one edit of ~200 documents took about 15 seconds to complete. What I'm looking for is the ability to save all the documents in the editedDocs array in one function call. Does such a call exist? Thanks! A: @R2D2 answered my question in the form of a comment. It actually pointed me in the right direction. The operation I wanted was actually Model.bulkSave() which uses Model.bulkWrite() ultimately. I also had to update Mongoose from 5.x to the latest version because Model.bulkSave() in version 5.x doesn't respect transactions. In version 6.x it does, so I can save multiple documents that were edited in application space in one call without looping within a transaction, which met all my requirements. See here for more information: https://mongoosejs.com/docs/api/model.html#model_Model-bulkSave Thanks @R2D2
Mongoose save multiple updated documents
I have a situation where I need to update multiple documents in a collection, but each update is logic-driven and is not uniform. In other words, I don't want to apply the same update to every affected document. Each document's changes will be driven by some logic on the system. After this logic has been executed, I have an array of documents that have been edited in application space using standard JS operations like: someDocument.someField = "some new value"; someDocument.someRef = otherDocument._id; editedDocs.push(someDocument); Then I normally do something like: for (let i = 0; i < editedDocs.length; i++) { await editedDocs[i].save({session}); } I put the updates into a transaction so they succeed/fail atomically. But I've noticed this approach is slow and one edit of ~200 documents took about 15 seconds to complete. What I'm looking for is the ability to save all the documents in the editedDocs array in one function call. Does such a call exist? Thanks!
[ "@R2D2 answered my question in the form of a comment.\nIt actually pointed me in the right direction. The operation I wanted was actually Model.bulkSave() which uses Model.bulkWrite() ultimately.\nI also had to update Mongoose from 5.x to the latest version because Model.bulkSave() in version 5.x doesn't respect transactions. In version 6.x it does, so I can save multiple documents that were edited in application space in one call without looping within a transaction, which met all my requirements.\nSee here for more information: https://mongoosejs.com/docs/api/model.html#model_Model-bulkSave\nThanks @R2D2\n" ]
[ 0 ]
[]
[]
[ "mongodb", "mongoose" ]
stackoverflow_0074659236_mongodb_mongoose.txt
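A sketch of the resulting pattern under Mongoose 6.x, where the edited documents are flushed in one round trip; passing the session this way is an assumption to verify against the bulkSave docs linked above:

const editedDocs = [];
for (const doc of docs) {
  doc.someField = computeNewValue(doc); // hypothetical per-document logic
  editedDocs.push(doc);
}
await MyModel.bulkSave(editedDocs, { session });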
Q: Flutter how do you return value or error from stream I want to use this method, but instead of returning Future<void>, I want to return a value from it. This value is either a final list of bytes or an error (if an error happens at any time). I don't know how to do that, since you can't return a value from onDone or onError. If I use await for instead of listen, the errors behave strangely. How to do that? Future<void> _downloadImage() async { _response = await http.Client() .send(http.Request('GET', Uri.parse('https://upload.wikimedia.org/wikipedia/commons/f/ff/Pizigani_1367_Chart_10MB.jpg'))); _total = _response.contentLength ?? 0; _response.stream.listen((value) { setState(() { _bytes.addAll(value); _received += value.length; }); }).onDone(() async { final file = File('${(await getApplicationDocumentsDirectory()).path}/image.png'); await file.writeAsBytes(_bytes); setState(() { _image = file; }); }); } A: You can bridge the stream's callbacks to a Future with a Completer: complete it with the collected bytes in onDone, and complete it with the error in onError. Callers can then await the returned Future (or use then/catchError) and receive either the final list of bytes or the error. Here is an example of how you might update your code: import 'dart:async'; import 'package:http/http.dart' as http; Future<List<int>> _downloadImage() async { // Send the HTTP request to download the image final response = await http.Client() .send(http.Request('GET', Uri.parse('https://upload.wikimedia.org/wikipedia/commons/f/ff/Pizigani_1367_Chart_10MB.jpg'))); final completer = Completer<List<int>>(); final bytes = <int>[]; // Collect data events; hand the result (or the error) to the Completer response.stream.listen( bytes.addAll, onDone: () => completer.complete(bytes), onError: completer.completeError, cancelOnError: true, ); return completer.future; } In this example, _downloadImage returns a Future<List<int>> that completes with the list of bytes downloaded from the HTTP response stream, or completes with an error if one occurred.
Flutter how do you return value or error from stream
I want to use this method, but instead of returning Future<void>, I want to return a value from it. This value is either a final list of bytes or an error (if an error happens at any time). I don't know how to do that, since you can't return a value from onDone or onError. If I use await for instead of listen, the errors behave strangely. How to do that? Future<void> _downloadImage() async { _response = await http.Client() .send(http.Request('GET', Uri.parse('https://upload.wikimedia.org/wikipedia/commons/f/ff/Pizigani_1367_Chart_10MB.jpg'))); _total = _response.contentLength ?? 0; _response.stream.listen((value) { setState(() { _bytes.addAll(value); _received += value.length; }); }).onDone(() async { final file = File('${(await getApplicationDocumentsDirectory()).path}/image.png'); await file.writeAsBytes(_bytes); setState(() { _image = file; }); }); }
[ "To return a value from a Future, you can use the then method. The then method takes two arguments: a callback function that will be called when the Future completes successfully, and another callback function that will be called if the Future completes with an error.\nHere's an example of how you might update your code to use the then method to return a value from your _downloadImage function:\n\n\nimport 'dart:io';\nimport 'package:path_provider/path_provider.dart';\nimport 'package:http/http.dart' as http;\n\nFuture<List<int>> _downloadImage() async {\n // Send the HTTP request to download the image\n final response = await http.Client()\n .send(http.Request('GET', Uri.parse('https://upload.wikimedia.org/wikipedia/commons/f/ff/Pizigani_1367_Chart_10MB.jpg')));\n\n // Get the total size of the image and create an empty list of bytes\n final total = response.contentLength ?? 0;\n final bytes = <int>[];\n\n // Listen for data events from the response stream and add them to the bytes list\n response.stream.listen((value) {\n bytes.addAll(value);\n });\n\n // Return a Future that completes with the list of bytes\n return bytes.then((value) {\n // When the Future completes successfully, return the list of bytes\n return value;\n }, onError: (error) {\n // If the Future completes with an error, return the error\n return error;\n });\n}\n\n\n\nIn this example, the _downloadImage function returns a Future<List> that will contain the list of bytes that were downloaded from the HTTP response stream, or an error if one occurred.\n" ]
[ 0 ]
[]
[]
[ "flutter", "flutter_http", "flutter_streambuilder" ]
stackoverflow_0074661077_flutter_flutter_http_flutter_streambuilder.txt
Q: Filter all dicts in a list to be with specific keys and ignore others? I have the following list of dicts: lst = [{'a':1, 'b':2, 'c':3}, {'a':1, 'b':2, 'd':3}, {'a':1, 'c':2, 'k':3}, {'d':1, 'k':2, 'l':3}] I want to filter the list of dicts (in my case it's a list of thousands or even more dicts, with different keys with some overlap) to be a list containing all the dicts that have keys: ["a", "b"]. I want to filter each dict only to these a and b keys, and if they don't exist, don't include the dictionary in the final list. I am using: [{"a": d.get("a"), "b": d.get("b")} for d in lst] Please advise for an elegant way to solve it. A: The dictionary keys-view is set-like, so it supports subset comparisons by using <= operator: >>> keys = set("ab") >>> [{k: d[k] for k in keys} for d in lst if keys <= d.keys()] [{'a': 1, 'b': 2}, {'a': 1, 'b': 2}] A: I have figured it out and here is my alternative: lst = [{'a':1, 'b':2, 'c':3}, {'a':1, 'b':2, 'd':3}, {'a':1, 'c':2, 'k':3}, {'d':1, 'k':2, 'l':3}] keys = set("ab") [i for i in [{k: d.get(k) for k in keys if k in d} for d in lst] if i] Gives the desired answer: [{'b': 2, 'a': 1}, {'b': 2, 'a': 1}, {'a': 1}]
Filter all dicts in a list to be with specific keys and ignore others?
I have the following list of dicts: lst = [{'a':1, 'b':2, 'c':3}, {'a':1, 'b':2, 'd':3}, {'a':1, 'c':2, 'k':3}, {'d':1, 'k':2, 'l':3}] I want to filter the list of dicts (in my case it's a list of thousands or even more dicts, with different keys with some overlap) to be a list containing all the dicts that have keys: ["a", "b"]. I want to filter each dict only to these a and b keys, and if they don't exist, don't include the dictionary in the final list. I am using: [{"a": d.get("a"), "b": d.get("b")} for d in lst] Please advise for an elegant way to solve it.
[ "The dictionary keys-view is set-like, so it supports subset comparisons by using <= operator:\n>>> keys = set(\"ab\")\n>>> [{k: d[k] for k in keys} for d in lst if keys <= d.keys()]\n[{'a': 1, 'b': 2}, {'a': 1, 'b': 2}]\n\n", "I have figured it out and here is my alternative:\nlst = [{'a':1, 'b':2, 'c':3}, {'a':1, 'b':2, 'd':3}, {'a':1, 'c':2, 'k':3}, {'d':1, 'k':2, 'l':3}]\nkeys = set(\"ab\")\n[i for i in [{k: d.get(k) for k in keys if k in d} for d in lst] if i]\n\nGives the desired answer:\n[{'b': 2, 'a': 1}, {'b': 2, 'a': 1}, {'a': 1}]\n\n" ]
[ 2, 0 ]
[]
[]
[ "dictionary", "python_3.x" ]
stackoverflow_0074636869_dictionary_python_3.x.txt
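If the required keys come in as an arbitrary iterable rather than a set, the same filter can be written with all(), at the cost of a per-dict scan; a small sketch with the expected output:

keys = ["a", "b"]
result = [{k: d[k] for k in keys} for d in lst if all(k in d for k in keys)]
print(result)  # [{'a': 1, 'b': 2}, {'a': 1, 'b': 2}]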
Q: Invalid argument(s): No host specified in URI images/27567f904a64ba79ae95672e4ddf10c8.png I want to retrieve the images from an API and I get this error. Can anyone help me? Thanks in advance. Note that I am on localhost. CachedNetworkImage( imageUrl: AppConstants.BASE_URL + getCartHistoryList[listCounter - 1].img!, ), A: First of all, you are trying to fetch the image using CachedNetworkImage, which takes the imageUrl argument as a String in https:// format, i.e. it should be a full link. Now, coming to the error part: it shows images/27567... and so on, which is not a URL. So first print AppConstants.BASE_URL + getCartHistoryList[listCounter - 1].img! and you will get to know the reason and will solve that easily.
Invalid argument(s): No host specified in URI images/27567f904a64ba79ae95672e4ddf10c8.png
I want to retrieve the images from an API and I get this error. Can anyone help me? Thanks in advance. Note that I am on localhost. CachedNetworkImage( imageUrl: AppConstants.BASE_URL + getCartHistoryList[listCounter - 1].img!, ),
[ "First of all you are trying to fetch image using\n\nCachedNetworkImage\n\nwhich will takes\n\nimageUrl\n\nargument as a String and in\n\nhttps://\n\nformat i.e. it should be a link\nNow come to the error part where error show\n\nimages/27567... so on\n\nwhich is not the url link so first print this\nAppConstants.BASE_URL +\n getCartHistoryList[listCounter - 1].img!, \n\nand you will get to know the reason and will solve that easily\n" ]
[ 0 ]
[]
[]
[ "flutter", "networkimageview" ]
stackoverflow_0074661167_flutter_networkimageview.txt
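The error means the concatenated string has no scheme or host, so a sketch of the usual fix: make sure BASE_URL carries both, and mind that a localhost backend is reached as 10.0.2.2 from the Android emulator (the exact host and port here are assumptions):

// e.g. AppConstants.BASE_URL = 'http://10.0.2.2:8000/';
final url = AppConstants.BASE_URL + getCartHistoryList[listCounter - 1].img!;
debugPrint(url); // should print a full http(s)://host/... link, not 'images/...'
CachedNetworkImage(imageUrl: url),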
Q: Simple AJAX form failing to submit I've written a simple form & AJAX function that should post the data of the form. Can anyone help me determine why this is failing to post? Fiddle: https://jsfiddle.net/onyeLp4d/2/ HTML <div id="lp-pom-form-25"> <form method="POST"> <div class="row"> <div class="col-6"> <label>First Name <input id="your_first_name" name="your_first_name" type="text" /></label> <label>Last Name <input id="your_last_name" name="your_last_name" type="text" /></label> <label>Company Name <input id="your_company_name" name="your_company_name" type="text" /></label> <label>Email Address <input id="your_work_email" name="your_work_email" type="text" /></label> </div> <div class="col-6"> <label>Referrals First Name <input id="your_referrals_first_name" name="your_referrals_first_name" type="text" /></label> <label>Referrals Last Name <input id="your_referrals_last_name" name="your_referrals_last_name" type="text" /></label> <label>Referrals Company Name <input id="your_referrals_company_name" name="your_referrals_company_name" type="text" /></label> <label>Referrals Email Address <input id="your_referrals_email_address" name="your_referrals_email_address" type="text" /></label> <label>Referrals Phone Number <input id="your_referrals_phone_number" name="your_referrals_phone_number" type="text" /></label> <label>How did you hear about us? <input id="how_did_you_hear_about_electrics_referral_program" name="how_did_you_hear_about_electrics_referral_program" type="text" /></label> </div> <button type="submit"> Submit </button> </div> </form> </div> jQuery jQuery(document).ready(function () { jQuery("form").submit(function (e) { e.preventDefault(); var form = jQuery(this); var formData = { your_first_name: jQuery("#your_first_name").val(), your_last_name: jQuery("#your_last_name").val(), your_company_name: jQuery("#your_company_name").val(), your_work_email: jQuery("#your_work_email").val(), your_referrals_first_name: jQuery("#your_referrals_first_name").val(), your_referrals_last_name: jQuery("#your_referrals_last_name").val(), your_referrals_company_name: jQuery("#your_referrals_company_name").val(), your_referrals_email_address: jQuery("#your_referrals_email_address").val(), your_referrals_phone_Number: jQuery("#your_referrals_phone_number").val(), how_did_you_hear_about_electrics_referral_program: jQuery("#how_did_you_hear_about_electrics_referral_program").val() } console.log(formData); $.ajax ({ type: form.attr('method'), data: formData, dataType: 'json', success: function (data) { console.log("Success!"); console.log(data); }, error: function (data) { console.log("An error has occured!"); console.log(data); } }); }); }); It seems quite simple: Prevent form from submitting as normal Get form data (saving to variable so I can remap some of it eventually) Post the form data via AJAX so that the page does not reload I am expecting the form to submit successfully using AJAX. I have tried numerous alterations of my code but it is quite simple so I must be missing something small or silly. A: In your code you are specifying that the URL where you will make the post request is defined in the action attribute of your form, but in your html code, your form tag does not have this attribute.
Simple AJAX form failing to submit
I've written a simple form & AJAX function that should post the data of the form. Can anyone help me determine why this is failing to post? Fiddle: https://jsfiddle.net/onyeLp4d/2/ HTML <div id="lp-pom-form-25"> <form method="POST"> <div class="row"> <div class="col-6"> <label>First Name <input id="your_first_name" name="your_first_name" type="text" /></label> <label>Last Name <input id="your_last_name" name="your_last_name" type="text" /></label> <label>Company Name <input id="your_company_name" name="your_company_name" type="text" /></label> <label>Email Address <input id="your_work_email" name="your_work_email" type="text" /></label> </div> <div class="col-6"> <label>Referrals First Name <input id="your_referrals_first_name" name="your_referrals_first_name" type="text" /></label> <label>Referrals Last Name <input id="your_referrals_last_name" name="your_referrals_last_name" type="text" /></label> <label>Referrals Company Name <input id="your_referrals_company_name" name="your_referrals_company_name" type="text" /></label> <label>Referrals Email Address <input id="your_referrals_email_address" name="your_referrals_email_address" type="text" /></label> <label>Referrals Phone Number <input id="your_referrals_phone_number" name="your_referrals_phone_number" type="text" /></label> <label>How did you hear about us? <input id="how_did_you_hear_about_electrics_referral_program" name="how_did_you_hear_about_electrics_referral_program" type="text" /></label> </div> <button type="submit"> Submit </button> </div> </form> </div> jQuery jQuery(document).ready(function () { jQuery("form").submit(function (e) { e.preventDefault(); var form = jQuery(this); var formData = { your_first_name: jQuery("#your_first_name").val(), your_last_name: jQuery("#your_last_name").val(), your_company_name: jQuery("#your_company_name").val(), your_work_email: jQuery("#your_work_email").val(), your_referrals_first_name: jQuery("#your_referrals_first_name").val(), your_referrals_last_name: jQuery("#your_referrals_last_name").val(), your_referrals_company_name: jQuery("#your_referrals_company_name").val(), your_referrals_email_address: jQuery("#your_referrals_email_address").val(), your_referrals_phone_Number: jQuery("#your_referrals_phone_number").val(), how_did_you_hear_about_electrics_referral_program: jQuery("#how_did_you_hear_about_electrics_referral_program").val() } console.log(formData); $.ajax ({ type: form.attr('method'), data: formData, dataType: 'json', success: function (data) { console.log("Success!"); console.log(data); }, error: function (data) { console.log("An error has occured!"); console.log(data); } }); }); }); It seems quite simple: Prevent form from submitting as normal Get form data (saving to variable so I can remap some of it eventually) Post the form data via AJAX so that the page does not reload I am expecting the form to submit successfully using AJAX. I have tried numerous alterations of my code but it is quite simple so I must be missing something small or silly.
[ "In your code you are specifying that the URL where you will make the post request is defined in the action attribute of your form, but in your html code, your form tag does not have this attribute.\n" ]
[ 1 ]
[]
[]
[ "ajax", "javascript", "jquery" ]
stackoverflow_0074661588_ajax_javascript_jquery.txt
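A sketch of the missing piece — give the form an action and pass it as the Ajax url (the endpoint path here is an assumption):

<form method="POST" action="/signup">

$.ajax({
  type: form.attr('method'),
  url: form.attr('action'), // without this, jQuery posts to the current page URL
  data: formData,
  dataType: 'json',
  // success/error handlers as before
});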
Q: how do I trigger a workflow dispatch event with gh api with inputs so right now I have: gh api --method POST -H "Accept: application/vnd.github+json" /repos/${{ github.repository }}/actions/workflows/30721645/dispatches -F run_id=${{ github.run_id }} my workflow_dispatch event takes run_id as input, but the problem is that I get invalid_key for this request, how do I properly pass in the run_id to gh api? A: It is a bit more tricky as you have to pass a JSON to gh api and you also need to pass a ref. This should work: jq -n '{"ref":"main","inputs":{"run_id":"${{github.run_id}}"}}' | gh api -H "Accept: application/vnd.github+json" --method POST /repos/${{ github.repository }}/actions/workflows/30721645/dispatches --input - A: If you don't have any inputs to pass in and are just running the workflow, you can use: gh api /repos/joshjohanning-org/bash-testing/actions/workflows # get id gh api -X POST /repos/joshjohanning-org/bash-testing/actions/workflows/19595110/dispatches -f ref='main' Otherwise if you want inputs, you can use (similar to @Grzegorz Krukowski above): gh api -X POST /repos/joshjohanning-org/bash-testing/actions/workflows/19595110/dispatches \ --input - <<< '{"ref":"main","inputs":{"message":"all"}}' Or use gh workflow run: gh workflow run -R joshjohanning-org/bash-testing blank.yml echo '{"name":"scully", "greeting":"hello"}' | gh workflow run -R joshjohanning-org/bash-testing blank.yml --json
how do I trigger a workflow dispatch event with gh api with inputs
so right now I have: gh api --method POST -H "Accept: application/vnd.github+json" /repos/${{ github.repository }}/actions/workflows/30721645/dispatches -F run_id=${{ github.run_id }} my workflow_dispatch event takes run_id as input, but the problem is that I get invalid_key for this request, how do I properly pass in the run_id to gh api?
[ "It is a bit more tricky as you have to pass a JSON to gh api and you also need to pass a ref.\nThis should work:\njq -n '{\"ref\":\"main\",\"inputs\":{\"run_id\":\"${{github.run_id}}\"}}' | gh api -H \"Accept: application/vnd.github+json\" --method POST /repos/${{ github.repository }}/actions/workflows/30721645/dispatches --input -\n\n", "If you don't have any inputs to pass in and are just running the workflow, you can use:\ngh api /repos/joshjohanning-org/bash-testing/actions/workflows # get id\ngh api -X POST /repos/joshjohanning-org/bash-testing/actions/workflows/19595110/dispatches -f ref='main'\n\nOtherwise if you want inputs, you can use (similar to @Grzegorz Krukowski above):\ngh api -X POST /repos/joshjohanning-org/bash-testing/actions/workflows/19595110/dispatches \\\n --input - <<< '{\"ref\":\"main\",\"inputs\":{\"message\":\"all\"}}'\n\nOr use gh workflow run:\ngh workflow run -R joshjohanning-org/bash-testing blank.yml \necho '{\"name\":\"scully\", \"greeting\":\"hello\"}' | gh workflow run -R joshjohanning-org/bash-testing blank.yml --json\n\n" ]
[ 1, 0 ]
[]
[]
[ "github_actions" ]
stackoverflow_0073069456_github_actions.txt
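For completeness, the same dispatch without hand-writing JSON, using gh workflow run's --field flag (the branch name is a placeholder):

gh workflow run build_dev.yml --ref my-branch -f BuildTarget=all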
Q: Add ACF fields to search results page WordPress I have a file named search.php, where all the search results are added. The search form is located on the homepage. The problem is, I have no special page for the search results, but I want to add ACF fields to this page. I have searched in the dropdown menu 'location' in the ACF plugin, but I can't find the search.php file to add groups to. I am sorry if my explanation is a bit vague, but I don't know how to explain this exactly. In summary, I want to add ACF fields to my search.php search results page. Terminology: ACF stands for the Advanced Custom Fields plugin in WordPress. A: You can also do this with a plugin, but use the code below to avoid one. You need to add this code to your theme's functions.php file: <?php /** * [list_searcheable_acf list all the custom fields we want to include in our search query] * @return [array] [list of custom fields] */ function list_searcheable_acf(){ $list_searcheable_acf = array("title", "sub_title", "excerpt_short", "excerpt_long", "xyz", "myACF"); return $list_searcheable_acf; } /** * [advanced_custom_search search that encompasses ACF/advanced custom fields and taxonomies and split expression before request] * @param [query-part/string] $where [the initial "where" part of the search query] * @param [object] $wp_query [] * @return [query-part/string] $where [the "where" part of the search query as we customized] * see https://vzurczak.wordpress.com/2013/06/15/extend-the-default-wordpress-search/ * credits to Vincent Zurczak for the base query structure/spliting tags section */ function advanced_custom_search( $where, &$wp_query ) { global $wpdb; if ( empty( $where )) return $where; // get search expression $terms = $wp_query->query_vars[ 's' ]; // explode search expression to get search terms $exploded = explode( ' ', $terms ); if( $exploded === FALSE || count( $exploded ) == 0 ) $exploded = array( 0 => $terms ); // reset search in order to rebuild it as we wish $where = ''; // get searcheable_acf, a list of advanced custom fields you want to search content in $list_searcheable_acf = list_searcheable_acf(); foreach( $exploded as $tag ) : $where .= " AND ( (wp_posts.post_title LIKE '%$tag%') OR (wp_posts.post_content LIKE '%$tag%') OR EXISTS ( SELECT * FROM wp_postmeta WHERE post_id = wp_posts.ID AND ("; foreach ($list_searcheable_acf as $searcheable_acf) : if ($searcheable_acf == $list_searcheable_acf[0]): $where .= " (meta_key LIKE '%" . $searcheable_acf . "%' AND meta_value LIKE '%$tag%') "; else : $where .= " OR (meta_key LIKE '%" . $searcheable_acf . "%' AND meta_value LIKE '%$tag%') "; endif; endforeach; $where .= ") ) OR EXISTS ( SELECT * FROM wp_comments WHERE comment_post_ID = wp_posts.ID AND comment_content LIKE '%$tag%' ) OR EXISTS ( SELECT * FROM wp_terms INNER JOIN wp_term_taxonomy ON wp_term_taxonomy.term_id = wp_terms.term_id INNER JOIN wp_term_relationships ON wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id WHERE ( taxonomy = 'post_tag' OR taxonomy = 'category' OR taxonomy = 'myCustomTax' ) AND object_id = wp_posts.ID AND wp_terms.name LIKE '%$tag%' ) )"; endforeach; return $where; } add_filter( 'posts_search', 'advanced_custom_search', 500, 2 ); For reference - https://gist.github.com/charleslouis/5924863 Try this solution if the first one is not working: https://support.advancedcustomfields.com/forums/topic/making-customfields-searchable/ A: It works for me: function custom_search_query( $query ) { if ( !is_admin() && $query->is_search ) { $result = $query->query_vars['s']; $query->query_vars['s'] = ''; $query->set('meta_query', array('relation' => 'OR', array( 'key' => 'acf_name', // ACF FIELD NAME OR POST META 'value' => $result, 'compare' => 'LIKE', ) )); $query->set('post_type', 'post'); // optional POST TYPE } } add_filter( 'pre_get_posts', 'custom_search_query');
Add ACF fields to search results page WordPress
I have a file named search.php, where all the search results are added. The search form is located on the homepage. The problem is, I have no special page for the search results, but I want to add ACF fields to this page. I have searched in the dropdown menu 'location' in the ACF plugin, but I can't find the search.php file to add groups to. I am sorry if my explanation is a bit vague, but I don't know how to explain this exactly. In summary, I want to add ACF fields to my search.php search results page. Terminology: ACF stands for the Advanced Custom Fields plugin in WordPress.
[ "You can also do it using plugin but use below code to avoide plugin. \nYou need to add this code in you function.php file\n<?php\n/**\n * [list_searcheable_acf list all the custom fields we want to include in our search query]\n * @return [array] [list of custom fields]\n */\nfunction list_searcheable_acf(){\n $list_searcheable_acf = array(\"title\", \"sub_title\", \"excerpt_short\", \"excerpt_long\", \"xyz\", \"myACF\");\n return $list_searcheable_acf;\n}\n/**\n * [advanced_custom_search search that encompasses ACF/advanced custom fields and taxonomies and split expression before request]\n * @param [query-part/string] $where [the initial \"where\" part of the search query]\n * @param [object] $wp_query []\n * @return [query-part/string] $where [the \"where\" part of the search query as we customized]\n * see https://vzurczak.wordpress.com/2013/06/15/extend-the-default-wordpress-search/\n * credits to Vincent Zurczak for the base query structure/spliting tags section\n */\nfunction advanced_custom_search( $where, &$wp_query ) {\n global $wpdb;\n\n if ( empty( $where ))\n return $where;\n\n // get search expression\n $terms = $wp_query->query_vars[ 's' ];\n\n // explode search expression to get search terms\n $exploded = explode( ' ', $terms );\n if( $exploded === FALSE || count( $exploded ) == 0 )\n $exploded = array( 0 => $terms );\n\n // reset search in order to rebuilt it as we whish\n $where = '';\n\n // get searcheable_acf, a list of advanced custom fields you want to search content in\n $list_searcheable_acf = list_searcheable_acf();\n foreach( $exploded as $tag ) :\n $where .= \" \n AND (\n (wp_posts.post_title LIKE '%$tag%')\n OR (wp_posts.post_content LIKE '%$tag%')\n OR EXISTS (\n SELECT * FROM wp_postmeta\n WHERE post_id = wp_posts.ID\n AND (\";\n foreach ($list_searcheable_acf as $searcheable_acf) :\n if ($searcheable_acf == $list_searcheable_acf[0]):\n $where .= \" (meta_key LIKE '%\" . $searcheable_acf . \"%' AND meta_value LIKE '%$tag%') \";\n else :\n $where .= \" OR (meta_key LIKE '%\" . $searcheable_acf . \"%' AND meta_value LIKE '%$tag%') \";\n endif;\n endforeach;\n $where .= \")\n )\n OR EXISTS (\n SELECT * FROM wp_comments\n WHERE comment_post_ID = wp_posts.ID\n AND comment_content LIKE '%$tag%'\n )\n OR EXISTS (\n SELECT * FROM wp_terms\n INNER JOIN wp_term_taxonomy\n ON wp_term_taxonomy.term_id = wp_terms.term_id\n INNER JOIN wp_term_relationships\n ON wp_term_relationships.term_taxonomy_id = wp_term_taxonomy.term_taxonomy_id\n WHERE (\n taxonomy = 'post_tag'\n OR taxonomy = 'category' \n OR taxonomy = 'myCustomTax'\n )\n AND object_id = wp_posts.ID\n AND wp_terms.name LIKE '%$tag%'\n )\n )\";\n endforeach;\n return $where;\n}\n\nadd_filter( 'posts_search', 'advanced_custom_search', 500, 2 );\n\nfor reference - https://gist.github.com/charleslouis/5924863\nTry This solution if first is not working.\nhttps://support.advancedcustomfields.com/forums/topic/making-customfields-searchable/\n", "it works for me\nfunction custom_search_query( $query ) {\n if ( !is_admin() && $query->is_search ) {\n $result = $query->query_vars['s'];\n $query->query_vars['s'] = '';\n $query->set('meta_query', array('relation' => 'OR',\n array(\n 'key' => 'acf_name', // ACF FIELD NAME OR POST META\n 'value' => $result,\n 'compare' => 'LIKE',\n )\n ));\n $query->set('post_type', 'post'); // optional POST TYPE\n }\n}\nadd_filter( 'pre_get_posts', 'custom_search_query');\n\n" ]
[ 9, 0 ]
[]
[]
[ "advanced_custom_fields", "php", "wordpress" ]
stackoverflow_0036787391_advanced_custom_fields_php_wordpress.txt
Q: How to apply filters on spark scala dataframe view? I am pasting a snippet here where I am facing issues with the BigQuery read. The "wherePart" has a large number of records, and hence the BQ call is invoked again and again. Keeping the filter outside of the BQ read would help. The idea is: first read the "mainTable" from BQ, store it in a Spark view, then apply the "wherePart" filter to this view in Spark. ["subDate" is a function to subtract one date from another and return the number of days in between] val Df = getFb(config, mainTable, ds) def getFb(config: DataFrame, mainTable: String, ds: String) : DataFrame = { val fb = config.map(row => Target.Pfb( row.getAs[String]("m1"), row.getAs[String]("m2"), row.getAs[Seq[Int]]("days"))) .collect val wherePart = fb.map(x => (x.m1, x.m2, subDate(ds, x.days.max - 1))). map(x => s"(idata_${x._1} = '${x._2}' AND ds BETWEEN '${x._3}' AND '${ds}')"). mkString(" OR ") val q = new Q() val tempView = "tempView" spark.readBigQueryTable(mainTable, wherePart).createOrReplaceTempView(tempView) val Df = q.mainTableLogs(tempView) Df } Could someone please help me here. A: Are you using the spark-bigquery-connector? If so, the right syntax is spark.read.format("bigquery") .load(mainTable) .where(wherePart) .createOrReplaceTempView(tempView)
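A minimal Scala sketch of the read-then-filter approach the answer describes; it assumes the spark-bigquery-connector is on the classpath and that mainTable and wherePart are built exactly as in the question (the table and view names here are placeholders):

import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch only: load the table once, let Spark apply the OR-joined predicate,
// and expose the result as a temp view for downstream SQL.
def readFiltered(spark: SparkSession, mainTable: String, wherePart: String): DataFrame = {
  val df = spark.read
    .format("bigquery")
    .load(mainTable)   // e.g. "project.dataset.table"
    .where(wherePart)  // simple predicates may also be pushed down by the connector
  df.createOrReplaceTempView("tempView")
  df
}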
How to apply filters on spark scala dataframe view?
I am pasting a snippet here where I am facing issues with the BigQuery read. The "wherePart" has a large number of records, and hence the BQ call is invoked again and again. Keeping the filter outside of the BQ read would help. The idea is: first read the "mainTable" from BQ, store it in a Spark view, then apply the "wherePart" filter to this view in Spark. ["subDate" is a function to subtract one date from another and return the number of days in between] val Df = getFb(config, mainTable, ds) def getFb(config: DataFrame, mainTable: String, ds: String) : DataFrame = { val fb = config.map(row => Target.Pfb( row.getAs[String]("m1"), row.getAs[String]("m2"), row.getAs[Seq[Int]]("days"))) .collect val wherePart = fb.map(x => (x.m1, x.m2, subDate(ds, x.days.max - 1))). map(x => s"(idata_${x._1} = '${x._2}' AND ds BETWEEN '${x._3}' AND '${ds}')"). mkString(" OR ") val q = new Q() val tempView = "tempView" spark.readBigQueryTable(mainTable, wherePart).createOrReplaceTempView(tempView) val Df = q.mainTableLogs(tempView) Df } Could someone please help me here.
[ "Are you using the spark-bigquery-connector? If so the right syntax is\nspark.read.format(\"bigquery\")\n .load(mainTable)\n .where(wherePart)\n .createOrReplaceTempView(tempView)\n\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "apache_spark_sql", "google_bigquery", "scala", "view" ]
stackoverflow_0074659329_apache_spark_apache_spark_sql_google_bigquery_scala_view.txt
Q: How to trigger github actions with dispatch using gh cli I have an action which has the following yaml in it: on: workflow_dispatch: inputs: BuildTarget: description: "Targets to rebuild. Set to all to rebuild everything." required: false default: "" Which I can trigger with: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches -F ref=":branch" But I can't seem to figure out how to pass inputs into the action from the CLI. I have tried: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches -F ref=":branch" -F BuildTarget=all Which tells me "BuildTarget" is not a permitted key. (HTTP 422) And trying this: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches -F ref=":branch" -F inputs='{ "BuildTarget": "all" }' Gives me For 'properties/inputs', "{ \"BuildTarget\": \"all\" }" is not an object. (HTTP 422) Any idea how to call this API from the CLI and pass input properties to a workflow? A: You can send the raw body directly using --input - to read from standard input: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches \ --input - <<< '{"ref":"master","inputs":{"BuildTarget":"all"}}' Check out this documentation A: If you don't have any inputs to pass in and are just running the workflow, you can use: gh api /repos/joshjohanning-org/bash-testing/actions/workflows # get id gh api -X POST /repos/joshjohanning-org/bash-testing/actions/workflows/19595110/dispatches -f ref='main' Otherwise, if you want inputs, use @Bertrand Martel's example in this post. Or use gh workflow run: gh workflow run -R joshjohanning-org/bash-testing blank.yml echo '{"name":"scully", "greeting":"hello"}' | gh workflow run -R joshjohanning-org/bash-testing blank.yml --json
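For the workflow in the question specifically, gh's purpose-built subcommand is a simpler sketch than hitting the REST endpoint directly (the branch name below is a placeholder; each -f flag passes one workflow_dispatch input as key=value):

# Sketch: trigger build_dev.yml on a branch and set the BuildTarget input.
gh workflow run build_dev.yml --ref my-branch -f BuildTarget=all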
How to trigger github actions with dispatch using gh cli
I have an action which has the following yaml in it: on: workflow_dispatch: inputs: BuildTarget: description: "Targets to rebuild. Set to all to rebuild everything." required: false default: "" Which I can trigger with: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches -F ref=":branch" But I can't seem to figure out how to pass inputs into the action from the CLI. I have tried: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches -F ref=":branch" -F BuildTarget=all Which tells me "BuildTarget" is not a permitted key. (HTTP 422) And trying this: gh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches -F ref=":branch" -F inputs='{ "BuildTarget": "all" }' Gives me For 'properties/inputs', "{ \"BuildTarget\": \"all\" }" is not an object. (HTTP 422) Any idea how to call this API from the CLI and pass input properties to a workflow?
[ "You can send the raw body directly using --input - to read from standard input :\ngh api /repos/:owner/:repo/actions/workflows/build_dev.yml/dispatches \\\n --input - <<< '{\"ref\":\"master\",\"inputs\":{\"BuildTarget\":\"all\"}}'\n\nCheckout this documentation\n", "If you don't have any inputs to pass in and are just running the workflow, you can use:\ngh api /repos/joshjohanning-org/bash-testing/actions/workflows # get id\ngh api -X POST /repos/joshjohanning-org/bash-testing/actions/workflows/19595110/dispatches -f ref='main'\n\nOtherwise if you want inputs, use @Bertrand Martel's example in this post.\nOr use gh workflow run:\ngh workflow run -R joshjohanning-org/bash-testing blank.yml \necho '{\"name\":\"scully\", \"greeting\":\"hello\"}' | gh workflow run -R joshjohanning-org/bash-testing blank.yml --json\n\n" ]
[ 1, 0 ]
[]
[]
[ "github", "github_actions", "github_api" ]
stackoverflow_0066416590_github_github_actions_github_api.txt
Q: Print output includes 'None' def backwards_alphabet(curr_letter): if curr_letter == 'a': print(curr_letter) else: print(curr_letter) prev_letter = chr(ord(curr_letter) - 1) backwards_alphabet(prev_letter) starting_letter = input() print (backwards_alphabet(starting_letter)) #this is the code I wrote The output includes "None" but I have no idea why. Image of output A: The function print takes a parameter - you are giving it the result of backwards_alphabet(starting_letter). Since you aren't explicit about what backwards_alphabet() returns - which you would do by including return 'this is what I am returning', it will return None by default. So, you are calling print(None) and it prints 'None'. Since your function backwards_alphabet() already has all of the printing within it, you don't want to do print(backwards_alphabet(...)), you just want to call backwards_alphabet(...) by itself.
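A tiny self-contained demonstration of the behaviour the answer describes: a function with no return statement implicitly returns None, so printing its result prints None.

def says_nothing():
    print("side effect only")   # this line prints, but nothing is returned

result = says_nothing()  # result is None because of the implicit return
print(result)            # prints: None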
Print output includes 'None'
def backwards_alphabet(curr_letter): if curr_letter == 'a': print(curr_letter) else: print(curr_letter) prev_letter = chr(ord(curr_letter) - 1) backwards_alphabet(prev_letter) starting_letter = input() print (backwards_alphabet(starting_letter)) #this is the code i wrote The output includes "None" but I have no idea why. Image of output
[ "The function print takes a parameter - you are giving it the result of backwards_alphabet(starting_letter).\nSince you aren't explicit about what backwards_alphabet() returns - which you do would with by including return 'this is what I am returning', it will return None by default.\nSo, you are calling print(None) and it prints 'None'.\nSince your function backwards_alphabet() already has all of the printing within it, you don't want to do print(backwards_alphabet(...)), you just want to call backwards_alphabet(...) by itself.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074661626_python.txt
Q: How to make a reusable component which renders data based on object structure? I'm trying to make a reusable InfoBlock component. This component renders a list of items. Each item has a label, icon and value. The problem is that I don't know how to map the available INFO_BLOCK_ITEMS to the data object and render only the items which are present in the data object. The full list of all available labels and icons looks like this: const INFO_BLOCK_ITEMS = { author: { label: "Author", icon: AccountTreeIcon }, date: { label: "Created", icon: FaceRetouchingNaturalIcon }, rawMedias: { label: "RAW Medias", icon: InsertDriveFileOutlinedIcon }, face: { label: "Faces", icon: ScheduleOutlinedIcon, faceAction: true, }, s3Source: { label: "S3 Source", icon: AcUnitIcon } }; This is the data object which I pass to the InfoBlock component along with dataType (for another page, the data structure will be different but it will contain the keys from INFO_BLOCK_ITEMS): const DATASET = { author: "[email protected]", date: 1669208819, rawMedias: "Video name 1, Video name 2, Video name 3", face: "" }; <InfoBlock data={DATASET} type={"dataset"} /> The result should be a list like this for every key in the data object: <Stack> <AccountTreeIcon /> <Stack> <Typography>Author</Typography> <Typography>[email protected]</Typography> </Stack> </Stack> Here's a Codesandbox with a hardcoded list: https://codesandbox.io/s/info-block-forked-0bwrze?file=/src/InfoBlock.tsx A: You don't need to pass type to the InfoBlock component. date in the RawMedia type should be a number. Use icons as React components. Hope it helps. Types: export type Dataset = { author: string; date: number; rawMedias: string; face: string; }; export type RawMedia = { s3Source: string; author: string; date: number; face: string; }; InfoBlock Component: const INFO_BLOCK_ITEMS = { author: { label: "Author", icon: <AccountTreeIcon /> }, date: { label: "Created", icon: <FaceRetouchingNaturalIcon /> }, rawMedias: { label: "RAW Medias", icon: <InsertDriveFileOutlinedIcon /> }, face: { label: "Faces", icon: <ScheduleOutlinedIcon />, action: () => console.log("If no data then button renders") }, s3Source: { label: "S3 Source", icon: <AcUnitIcon /> } }; interface IInfoBlockProps { data: Dataset | RawMedia; } function InfoBlock({ data }: IInfoBlockProps) { return( <Stack gap={"20px"}> { Object.keys(data).map((key, _index) => { const infoBlockItem = INFO_BLOCK_ITEMS[key]; return ( <Stack key={_index} direction={"row"} gap={"10px"}> {infoBlockItem.icon} <Stack direction={"row"} gap={"20px"}> <Typography>{infoBlockItem.label}</Typography> <Typography>{data[key]}</Typography> </Stack> </Stack> ); }) } </Stack> ) } App Component: const DATASET = { author: "[email protected]", date: 1669208819.837662, rawMedias: "Video name 1, Video name 2, Video name 3", face: "" }; const RAW_MEDIA = { s3Source: "https://example.com", author: "[email protected]", date: 1669208819.837662, face: "Some face" }; function App() { return ( <div> <InfoBlock data={DATASET} /> <InfoBlock data={RAW_MEDIA} /> </div> ); }
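One TypeScript detail worth flagging for the approach above, hedged since it depends on compiler settings: Object.keys() is typed as string[], so the INFO_BLOCK_ITEMS[key] lookup may produce an implicit-any error under strict mode. A small sketch of the usual narrowing cast:

// Sketch: narrow the key type so the indexed lookup type-checks.
const keys = Object.keys(data) as Array<keyof typeof INFO_BLOCK_ITEMS>;
keys.forEach((key) => {
  const item = INFO_BLOCK_ITEMS[key]; // typed lookup instead of implicit any
  console.log(item.label);
});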
How to make a reusable component which renders data based on object structure?
I'm trying to make a reusable InfoBlock component. This component renders a list of items. Each item has a label, icon and value. The problem is that I don't know how to map the available INFO_BLOCK_ITEMS to the data object and render only the items which are present in the data object. The full list of all available labels and icons looks like this: const INFO_BLOCK_ITEMS = { author: { label: "Author", icon: AccountTreeIcon }, date: { label: "Created", icon: FaceRetouchingNaturalIcon }, rawMedias: { label: "RAW Medias", icon: InsertDriveFileOutlinedIcon }, face: { label: "Faces", icon: ScheduleOutlinedIcon, faceAction: true, }, s3Source: { label: "S3 Source", icon: AcUnitIcon } }; This is the data object which I pass to the InfoBlock component along with dataType (for another page, the data structure will be different but it will contain the keys from INFO_BLOCK_ITEMS): const DATASET = { author: "[email protected]", date: 1669208819, rawMedias: "Video name 1, Video name 2, Video name 3", face: "" }; <InfoBlock data={DATASET} type={"dataset"} /> The result should be a list like this for every key in the data object: <Stack> <AccountTreeIcon /> <Stack> <Typography>Author</Typography> <Typography>[email protected]</Typography> </Stack> </Stack> Here's a Codesandbox with a hardcoded list: https://codesandbox.io/s/info-block-forked-0bwrze?file=/src/InfoBlock.tsx
[ "\nYou don't need to pass type to InfoBlock component.\n\ndata in Rawmedia type should be number.\n\nUse icons as React component.\n\n\nHope it helps.\nTypes:\nexport type Dataset = {\n author: string;\n date: number;\n rawMedias: string;\n face: string;\n};\n\nexport type RawMedia = {\n s3Source: string;\n author: string;\n date: number;\n face: string;\n};\n\nInfoBlock Component:\nconst INFO_BLOCK_ITEMS = {\n author: {\n label: \"Author\",\n icon: <AccountTreeIcon />\n },\n date: {\n label: \"Created\",\n icon: <FaceRetouchingNaturalIcon />\n },\n rawMedias: {\n label: \"RAW Medias\",\n icon: <InsertDriveFileOutlinedIcon />\n },\n face: {\n label: \"Faces\",\n icon: <ScheduleOutlinedIcon />,\n action: () => console.log(\"If no data then button renders\")\n },\n s3Source: {\n label: \"S3 Source\",\n icon: <AcUnitIcon />\n }\n};\n\ninterface IInfoBlockProps {\n data: Dataset | RawMedia;\n}\n\nfunction InfoBlock({ data }: IInfoBlockProps) {\n return(\n <Stack gap={\"20px\"}>\n {\n Object.keys(data).map((key, _index) => {\n const infoBlockItem = INFO_BLOCK_ITEMS[key];\n return (\n <Stack key={_index} direction={\"row\"} gap={\"10px\"}>\n {infoBlockItem.icon}\n <Stack direction={\"row\"} gap={\"20px\"}>\n <Typography>{infoBlockItem.label}</Typography>\n <Typography>{data[key]}</Typography>\n </Stack>\n </Stack>\n );\n })\n }\n </Stack>\n )\n}\n\nApp Component:\nconst DATASET = {\n author: \"[email protected]\",\n date: 1669208819.837662,\n rawMedias: \"Video name 1, Video name 2, Video name 3\",\n face: \"\"\n};\n\nconst RAW_MEDIA = {\n s3Source: \"https://example.com\",\n author: \"[email protected]\",\n date: 1669208819.837662,\n face: \"Some face\"\n};\n\nfunction App() {\n return (\n <div>\n <InfoBlock data={DATASET} />\n <InfoBlock data={RAW_MEDIA} />\n </div>\n );\n}\n\n" ]
[ 1 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074661390_reactjs.txt
Q: Idiomatic way to accept owned and referenced iterables? Consider this rust code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=d6f2075a8872305334a8ba513241950b fn main() { let v: Vec<i32> = vec![1, 2, 3]; // This works, of course. println!("{}", foo(&v)); // Now let's create an "extra conversion" step: let vs: Vec<&str> = vec!["1", "2", "3"]; // We want to "stream" straight from this vec. Let's create an // iterator that converts: let converting_iterator = vs.iter().map(|s| s.parse::<i32>().unwrap()); // This does not work println!("{}", foo(converting_iterator)); } fn foo<'a>(it: impl IntoIterator<Item=&'a i32>) -> i32 { it.into_iter().sum() } I understand why the second line doesn't work. It creates an iterator over i32, not &i32. I can't just slap a & into the closure, because that would attempt to reference a temporary value. What I'd be curious about though is if there is any way to write foo in such a way that it can deal with both types of iterables? If I were to just add the .sum() to the end of creating the converting_iterator, it would just work. So I feel that there should be some way to "intercept" the result (i.e. the converting iterator), pass that to something, and have that something call .sum on it. Maybe something with Borrow or AsRef, but I couldn't figure that out from the documentation of those traits. A: For sum in particular, the following works: fn foo<Item>(it: impl IntoIterator<Item=Item>) -> i32 where i32: Sum<Item> { it.into_iter().sum() }
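To confirm the answer's signature covers both call sites from the question, here is a small self-contained check; it compiles because the standard library implements both Sum<i32> and Sum<&i32> for i32:

use std::iter::Sum;

fn foo<Item>(it: impl IntoIterator<Item = Item>) -> i32
where
    i32: Sum<Item>,
{
    it.into_iter().sum()
}

fn main() {
    let v: Vec<i32> = vec![1, 2, 3];
    let vs: Vec<&str> = vec!["1", "2", "3"];
    let converting_iterator = vs.iter().map(|s| s.parse::<i32>().unwrap());
    println!("{}", foo(&v));                  // Item = &i32
    println!("{}", foo(converting_iterator)); // Item = i32
}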
Idiomatic way to accept owned and referenced iterables?
Consider this rust code: https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=d6f2075a8872305334a8ba513241950b fn main() { let v: Vec<i32> = vec![1, 2, 3]; // This works, of course. println!("{}", foo(&v)); // Now let's create an "extra conversion" step: let vs: Vec<&str> = vec!["1", "2", "3"]; // We want to "stream" straight from this vec. Let's create an // iterator that converts: let converting_iterator = vs.iter().map(|s| s.parse::<i32>().unwrap()); // This does not work println!("{}", foo(converting_iterator)); } fn foo<'a>(it: impl IntoIterator<Item=&'a i32>) -> i32 { it.into_iter().sum() } I understand why the second line doesn't work. It creates an iterator over i32, not &i32. I can't just slap a & into the closure, because that would attempt to reference a temporary value. What I'd be curious about though is if there is any way to write foo in such a way that it can deal with both types of iterables? If I were to just add the .sum() to the end of creating the converting_iterator, it would just work. So I feel that there should be some way to "intercept" the result (i.e. the converting iterator), pass that to something, and have that something call .sum on it. Maybe something with Borrow or AsRef, but I couldn't figure that out from the documentation of those traits.
[ "For sum in particular, the following works:\nfn foo<Item>(it: impl IntoIterator<Item=Item>) -> i32\nwhere i32: Sum<Item>\n{\n it.into_iter().sum()\n}\n\n" ]
[ 1 ]
[]
[]
[ "iterator", "ownership", "rust", "traits" ]
stackoverflow_0074661414_iterator_ownership_rust_traits.txt
Q: Adding a nickname to an R package? When developing an R package, is it possible to add a nickname to a certain version? How would this be added in the DESCRIPTION file? That cannot be in the version, as only numbers are accepted. It could of course be in the description, but it would not be real metadata. A: You can add a field to the DESCRIPTION file for this: Package: coolpkg Nickname: Coolest Nickname Ever Version: 0.1.0 After installing your package you can then get the nickname back like this: packageDescription("coolpkg")[["Nickname"]] FYI I think CRAN has standards around what can go in the DESCRIPTION file, just something to keep in mind if that's what you're working towards. Alternatively, you could just save the nickname as a data object in the package.
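A small defensive sketch building on the answer, assuming the field behaves like an ordinary list element (indexing a missing field with [[ ]] yields NULL):

# Sketch: read the custom field and fall back when a package has no Nickname.
nick <- packageDescription("coolpkg")[["Nickname"]]
if (is.null(nick)) {
  nick <- "(no nickname set)"
}
print(nick)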
Adding a nickname to an R package?
When developing an R package, is it possible to add a nickname to a certain version? How would this be added in the DESCRIPTION file? That cannot be in the version, as only numbers are accepted. It could of course be in the description, but it would not be real metadata.
[ "You can add a field to the DESCRIPTION file for this:\nPackage: coolpkg\nNickname: Coolest Nickname Ever\nVersion: 0.1.0\n\nAfter installing your package you can then get the nickname back like this:\npackageDescription(\"coolpkg\")[[\"Nickname\"]]\nFYI I think CRAN has standards around what can go in the DESCRIPTION file, just something to keep in mind if that's what you're working towards.\nAlternatively, you could just save the nickname as a data object in the package.\n" ]
[ 0 ]
[]
[]
[ "r", "r_package" ]
stackoverflow_0066912975_r_r_package.txt
Q: Simplify Python array retrieval of values I have the following code at the moment, which works perfectly: my_array = [ ['var1', 1], ['var2', 2], ['var3', 3], ['var4', 4], ['var5', 5] ] for i in range(len(my_array)): if my_array[i][0] == "var1": var_a = my_array[i][1] elif my_array[i][0] == "var2": var_b = my_array[i][1] elif my_array[i][0] == "var3": var_c = my_array[i][1] elif my_array[i][0] == "var4": var_d = my_array[i][1] elif my_array[i][0] == "var5": var_e = my_array[i][1] print(var_a) print(var_b) print(var_c) print(var_d) print(var_e) Is there a way I can simplify the way I get the values, instead of using multiple "elif's"? I tried something like this: var_f = my_array[i]["var1"] print(var_f) but I get an error: TypeError: list indices must be integer, not str Any help would be very much appreciated! Thank you A: You can convert my_array to a dict to simplify the retrieval of values: my_array = [["var1", 1], ["var2", 2], ["var3", 3], ["var4", 4], ["var5", 5]] dct = dict(my_array) # print var1 print(dct["var1"]) Prints: 1
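Building on the dict conversion in the answer, dict.get() is a handy variant when a key might be missing, since it returns a default instead of raising KeyError:

my_array = [["var1", 1], ["var2", 2], ["var3", 3], ["var4", 4], ["var5", 5]]
dct = dict(my_array)

print(dct.get("var1"))         # 1
print(dct.get("var6", "n/a"))  # "n/a" instead of a KeyError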
Simplify Python array retrieval of values
I have the following code at the moment, which works perfectly: my_array = [ ['var1', 1], ['var2', 2], ['var3', 3], ['var4', 4], ['var5', 5] ] for i in range(len(my_array)): if my_array[i][0] == "var1": var_a = my_array[i][1] elif my_array[i][0] == "var2": var_b = my_array[i][1] elif my_array[i][0] == "var3": var_c = my_array[i][1] elif my_array[i][0] == "var4": var_d = my_array[i][1] elif my_array[i][0] == "var5": var_e = my_array[i][1] print(var_a) print(var_b) print(var_c) print(var_d) print(var_e) Is there a way I can simplify the way I get the values, instead of using multiple "elif's"? I tried something like this: var_f = my_array[i]["var1"] print(var_f) but I get an error: TypeError: list indices must be integer, not str Any help would be very much appreciated! Thank you
[ "You can convert my_array to dict to simplify the retrieval of values:\nmy_array = [[\"var1\", 1], [\"var2\", 2], [\"var3\", 3], [\"var4\", 4], [\"var5\", 5]]\n\ndct = dict(my_array)\n\n# print var1\nprint(dct[\"var1\"])\n\nPrints:\n1\n\n" ]
[ 0 ]
[]
[]
[ "list", "python", "python_3.x" ]
stackoverflow_0074661638_list_python_python_3.x.txt
Q: make API call before another on angular interceptor In my app I have multiple forms where the result JSON object may vary in its structure and has nested objects and arrays at different levels. These forms also allow the user to upload files, and the object stores an array with the url to download, name, etc. What I do now is turn the file into a base64 string, then before any request that has files, I upload them to my backend. What I want to do is to make that file-upload API call, wait until it finishes, and once I get a response, modify the user's request body; only then make the main post request with these modifications. But it is not pausing: the queries are being executed in parallel. I know this because on the backend the file is uploaded but the user's object is not modified, and in addition the file-upload query is for some reason being executed several times. export class FilesCheckerInterceptor implements HttpInterceptor { constructor(private filesService: FilesService) {} intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { const data = request.body; if (data) { const uploaded: File[] = []; this.traverse(data, files => { files.forEach(file => { const base64 = file.data; const result = this.filesService.toFile(base64, file.name); uploaded.push(result); }); }); console.log(request); return this.filesService.uploadFile(uploaded).pipe( mergeMap(response => { this.traverse(data, files => { for (let i = 0; i < response.length; i++) { files[i] = response[i]; } }); return next.handle(request.clone({ body: data })); }), ); } else { return next.handle(request); } } traverse(obj: any, action: (value: InternalFile[]) => void) { if (obj !== null && typeof obj === 'object') { Object.entries(obj).forEach(([key, value]) => { if (key === 'attachments' && Array.isArray(value)) { // const files = value as InternalFile[]; action(value); } else { this.traverse(value, action); } }) } } }
make API call before another on angular interceptor
In my app I have multiple forms where the result JSON object may vary in its structure and has nested objects and arrays at different levels. These forms also allow the user to upload files, and the object stores an array with the url to download, name, etc. What I do now is turn the file into a base64 string, then before any request that has files, I upload them to my backend. What I want to do is to make that file-upload API call, wait until it finishes, and once I get a response, modify the user's request body; only then make the main post request with these modifications. But it is not pausing: the queries are being executed in parallel. I know this because on the backend the file is uploaded but the user's object is not modified, and in addition the file-upload query is for some reason being executed several times. export class FilesCheckerInterceptor implements HttpInterceptor { constructor(private filesService: FilesService) {} intercept(request: HttpRequest<any>, next: HttpHandler): Observable<HttpEvent<any>> { const data = request.body; if (data) { const uploaded: File[] = []; this.traverse(data, files => { files.forEach(file => { const base64 = file.data; const result = this.filesService.toFile(base64, file.name); uploaded.push(result); }); }); console.log(request); return this.filesService.uploadFile(uploaded).pipe( mergeMap(response => { this.traverse(data, files => { for (let i = 0; i < response.length; i++) { files[i] = response[i]; } }); return next.handle(request.clone({ body: data })); }), ); } else { return next.handle(request); } } traverse(obj: any, action: (value: InternalFile[]) => void) { if (obj !== null && typeof obj === 'object') { Object.entries(obj).forEach(([key, value]) => { if (key === 'attachments' && Array.isArray(value)) { // const files = value as InternalFile[]; action(value); } else { this.traverse(value, action); } }) } } }
[ "I will suggest using forkJoin.\n\nforkJoin will wait for all passed observables to emit and complete and then it will emit an array or an object with the last values from corresponding observables. This operator is best used when you have a group of observables and only care about the final emitted value of each. One common use case for this is if you wish to issue multiple requests on page load (or some other event) and only want to take action when a response has been received for all. In this way, it is similar to how you might use Promise.all.\n\nOr you can use async-await.\nFurther explanation on forkjoin https://ultimatecourses.com/blog/rxjs-forkjoin-combine-observables\n", "I tried many things but this is the only one that works, I do not know what it is or how it works but by pure try-fail that I achieved the goal.\nthis goes on the return statement of the initial code`\n const observable = this.filesService.uploadFile(uploaded).pipe(\n concatMap(response => {\n //replace current representations with server-given representations of files\n this.traverse(data, files => {\n files.forEach((_, index) => {\n files[index] = response[index];\n });\n });\n \n //make actual request\n const clone = request.clone({ body: data });\n return next.handle(clone);\n }),\n );\n \n return observable;\n\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "angular", "rxjs", "typescript" ]
stackoverflow_0074644650_angular_rxjs_typescript.txt
Q: C# ExcelDataReader Invalid File Signature I've already looked at this and it did not solve my issue https://stackoverflow.com/questions/51079664/c-sharp-error-with-exceldatareader I've tried building a method that reads an XLS file and converts it to string[]. But I get an error when trying to run it: ExcelDataReader.Exceptions.HeaderException: Invalid file signature. I have tried running it with XLSX and it works fine. The files that I am using have worked before. Note: I have run the same method that worked with XLS before, so I'm confused as to why this error is occurring (using ExcelDataReader version 3.6.0). Here is the code: private static List<string[]> GetExcelRecords(string path, bool hasHeaders) { var records = new List<string[]>(); using (var stream = File.Open(path, FileMode.Open, FileAccess.Read)) { using (var reader = ExcelReaderFactory.CreateReader(stream)) { var sheetFile = reader.AsDataSet().Tables[0]; for (int i = 0; i < sheetFile.Rows.Count; i++) { var record = sheetFile.Rows[i]; if (hasHeaders) { hasHeaders = false; continue; } var row = record.ItemArray.Select(o => o.ToString()).ToArray(); records.Add(row); } } } return records; } The exception occurs on line 4. I have tried using ExcelReaderFactory.CreateBinaryReader and ExcelReaderFactory.CreateOpenXlmReader A: This is more for fun (so community wiki), but here's how I'd write it: private static IEnumerable<string[]> GetExcelRecords(string path, int headerRowsToSkip = 0) { using (var stream = File.Open(path, FileMode.Open, FileAccess.Read)) using (var reader = ExcelReaderFactory.CreateReader(stream)) { while(headerRowsToSkip-- > 0 && reader.Read()) {} //intentionally empty while(reader.Read()) { object[] temp = new object[reader.FieldCount]; reader.GetValues(temp); yield return temp.Select(o => o.ToString()).ToArray(); } } } I'm not happy with it yet, specifically this part: object[] temp = new object[reader.FieldCount]; reader.GetValues(temp); yield return temp.Select(o => o.ToString()).ToArray(); The problem is we end up with three copies of the data in each iteration: the copy included in the reader, the copy in the object[], and the copy in the string[]. It's also worth mentioning that, thanks to cultural/internationalization issues, converting back and forth between strings and date/numeric values is among the slowest things we do in a computer on a regular basis. If the ExcelDataReader is respecting type information in the sheet and giving you numeric and date values, it is usually a mistake that costs a LOT more performance than you realize to convert everything to string. We can't avoid copying the data once out of the reader, so we're gonna need two copies no matter what. With that in mind, I think this might be better: object[] temp = new object[reader.FieldCount]; reader.GetValues(temp); yield return temp; Then we also need to change the method signature. This leaves us returning a set of object[]s, which is still less than ideal.
The other option is even more transformational: use generics and functional programming concepts to create an actual typed object, instead of just an array: private static IEnumerable<T> GetExcelRecords<T>(string path, Func<IDataRecord, T> transform, int headerRowsToSkip = 0) { using (var stream = File.Open(path, FileMode.Open, FileAccess.Read)) using (var reader = ExcelReaderFactory.CreateReader(stream)) { while(headerRowsToSkip-- > 0 && reader.Read()) {} //intentionally empty while(reader.Read()) { yield return transform(reader); } } } This punts the hard work to a lambda expression. To explain how to use it, let's say you have a sheet with one header row and columns (in order) for an integer ID, a Date, a decimal price, a double quantity, and a string description. You might call the method like this: var rows = GetExcelRecords("file path", row => { return (row.GetInt32(0), row.GetDateTime(1), row.GetDecimal(2), row.GetDouble(3), row.GetString(4) ); }, 1); That used a ValueTuple to avoid declaring a class, while still preserving the original data types and avoiding two (2!) array allocations for every single row. Since we're using IEnumerable<> instead of List<> it also means we only need to keep one row in memory at a time. I think we've ended up in a good place. The trick I'm using to read past the header rows is maybe a little too clever — relying on boolean short circuit ordering, an operator precedence subtlety, and an empty while loop all in the same line — but I like symmetry here. A: The problem appears that my laptop is corrupting XLS files (I'm unsure why) The error isn't due to the file type, but instead the fact that my files are corrupted.
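Since the eventual resolution was file corruption, one hedged triage step for an "Invalid file signature" error is to inspect the first bytes of the file yourself: legacy .xls files are OLE2 compound documents whose header starts with D0 CF 11 E0, while .xlsx files are ZIP archives starting with PK (50 4B). A small sketch:

using System;
using System.IO;
using System.Linq;

static class SignatureSniffer
{
    // Sketch: report what the leading bytes of a spreadsheet file look like.
    public static string Sniff(string path)
    {
        byte[] header = new byte[4];
        using (var fs = File.OpenRead(path))
        {
            if (fs.Read(header, 0, 4) < 4) return "file too short";
        }
        if (header.SequenceEqual(new byte[] { 0xD0, 0xCF, 0x11, 0xE0 }))
            return "looks like XLS (OLE2 compound file)";
        if (header[0] == 0x50 && header[1] == 0x4B)
            return "looks like XLSX (ZIP container)";
        return "unknown signature: " + BitConverter.ToString(header);
    }
}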
C# ExcelDataReader Invalid File Signature
I've already looked at this and it did not solve my issue https://stackoverflow.com/questions/51079664/c-sharp-error-with-exceldatareader I've tried building a method that reads an XLS file and converts it to string[]. But I get an error when trying to run it: ExcelDataReader.Exceptions.HeaderException: Invalid file signature. I have tried running it with XLSX and it works fine. The files that I am using have worked before. Note: I have run the same method that worked with XLS before, so I'm confused as to why this error is occurring (using ExcelDataReader version 3.6.0). Here is the code: private static List<string[]> GetExcelRecords(string path, bool hasHeaders) { var records = new List<string[]>(); using (var stream = File.Open(path, FileMode.Open, FileAccess.Read)) { using (var reader = ExcelReaderFactory.CreateReader(stream)) { var sheetFile = reader.AsDataSet().Tables[0]; for (int i = 0; i < sheetFile.Rows.Count; i++) { var record = sheetFile.Rows[i]; if (hasHeaders) { hasHeaders = false; continue; } var row = record.ItemArray.Select(o => o.ToString()).ToArray(); records.Add(row); } } } return records; } The exception occurs on line 4. I have tried using ExcelReaderFactory.CreateBinaryReader and ExcelReaderFactory.CreateOpenXlmReader
[ "This is more for fun (so community wiki), but here's how I'd write it:\nprivate static IEnumerable<string[]> GetExcelRecords(string path, int headerRowsToSkip = 0)\n{\n using (var stream = File.Open(path, FileMode.Open, FileAccess.Read))\n using (var reader = ExcelReaderFactory.CreateReader(stream))\n {\n while(headerRowsToSkip-- > 0 && reader.Read()) {} //intentionally empty\n while(reader.Read())\n {\n object[] temp = new object[reader.FieldCount];\n reader.GetValues(temp);\n yield return temp.Select(o => o.ToString()).ToArray();\n }\n }\n}\n\nI'm not happy with it yet, specifically this part:\nobject[] temp = new object[reader.FieldCount];\nreader.GetValues(temp);\nyield return temp.Select(o => o.ToString()).ToArray();\n\nThe problem is we end up with three copies of the data in each iteration: the copy included in the reader, the copy in the object[], and the copy in the string[].\nIt's also worth mentioning that, thanks to cultural/internationalization issues, converting back and forth between strings and date/numeric values is among the slowest things we do in a computer on a regular basis. If the ExcelDataReader is respecting type information in the sheet and giving you numeric and date values, it is usually a mistake that costs a LOT more performance than you realize to convert everything to string.\nWe can't avoid copying the data once out of the reader, so we're gonna need two copies no matter what. With that in mind, I think this might be better:\nobject[] temp = new object[reader.FieldCount];\nreader.GetValues(temp);\nyield return temp;\n\nThen we also need to change the method signature. This leaves us returning a set ofobject[]s, which is still less than idea.\nThe other option is even more transformational: use generics and functional programming concepts to create an actual typed object, instead of just an array:\nprivate static IEnumerable<T> GetExcelRecords<T>(string path, Func<IDataRecord, T> transform, int headerRowsToSkip = 0)\n{\n using (var stream = File.Open(path, FileMode.Open, FileAccess.Read))\n using (var reader = ExcelReaderFactory.CreateReader(stream))\n {\n while(headerRowsToSkip-- > 0 && reader.Read()) {} //intentionally empty\n while(reader.Read())\n {\n yield return transform(reader);\n }\n }\n}\n\nThis punts the hard work to a lambda expression. To explain how to use it, let's say you have a sheet with one header row and columns (in order) for an integer ID, a Date, a decimal price, a double quantity, and a string description. You might call the method like this:\nvar rows = GetExcelRecords(\"file path\", row => \n {\n return (row.GetInt32(0), row.GetDateTime(1), row.GetDecimal(2), row.GetDouble(3), row.GetString(4) );\n }, 1);\n\nThat used a ValueTuple to avoid declaring a class, while still preserving the original data types and avoiding two (2!) array allocations for every single row. Since we're using IEnumerable<> instead of List<> it also means we only need to keep one row in memory at a time.\nI think we've ended up in a good place. The trick I'm using to read past the header rows is maybe a little too clever — relying on boolean short circuit ordering, an operator precedence subtlety, and an empty while loop all in the same line — but I like symmetry here.\n", "The problem appears that my laptop is corrupting XLS files (I'm unsure why)\nThe error isn't due to the file type, but instead the fact that my files are corrupted.\n" ]
[ 0, 0 ]
[]
[]
[ "c#", "excel", "exceldatareader" ]
stackoverflow_0074660257_c#_excel_exceldatareader.txt
Q: Reload image without refreshing page I have little experience with JavaScript; I'm trying to make the image update but I'm not succeeding. This image comes through an API (image linked in the original post). <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"> <script type="text/javascript"> window.onload = function(){ setInterval("atualizarqr()",3000); // swap the image every 3 seconds } function atualizarqr(){ var img = document.getElementById('img').src; document.getElementById('img').src = "https://santoaleixo.com.br/wp-content/uploads/2018/11/Logo-Santo-Aleixo-3.png"; } </script> <body onload="atualizarqr"> <img src="" id="img"/> </body> A: A couple of issues with this code: You create an img variable but do not use it. You can just remove the variable declaration altogether. You are passing the function to setInterval as a string, which is incorrect. Here is the updated code: function atualizarqr() { document.getElementById("img").src = "https://santoaleixo.com.br/wp-content/uploads/2018/11/Logo-Santo-Aleixo-3.png"; } window.onload = function () { // swap the image every 3 seconds setInterval(atualizarqr, 3000); };
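One extra point, hedged because the question's real API endpoint is not shown: browsers typically will not re-download an image whose src string has not changed, so when polling an API-generated image it is common to append a cache-busting query parameter:

function atualizarqr() {
  // The timestamp makes each src unique, forcing the browser to re-fetch.
  var url = "https://example.com/api/qr.png"; // hypothetical endpoint
  document.getElementById("img").src = url + "?t=" + Date.now();
}
setInterval(atualizarqr, 3000); // poll every 3 seconds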
Reload image without refreshing page
I have little experience with JavaScript; I'm trying to make the image update but I'm not succeeding. This image comes through an API (image linked in the original post). <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"> <script type="text/javascript"> window.onload = function(){ setInterval("atualizarqr()",3000); // swap the image every 3 seconds } function atualizarqr(){ var img = document.getElementById('img').src; document.getElementById('img').src = "https://santoaleixo.com.br/wp-content/uploads/2018/11/Logo-Santo-Aleixo-3.png"; } </script> <body onload="atualizarqr"> <img src="" id="img"/> </body>
[ "A couple of issues with this code:\n\nYou create an img variable but do not use it. You can just remove the variable declaration altogether\nYou are passing in the function to setInterval but as a string, which is incorrect\n\nHere is the updated code:\nfunction atualizarqr() {\n document.getElementById(\"img\").src = \"https://santoaleixo.com.br/wp-content/uploads/2018/11/Logo-Santo-Aleixo-3.png\";\n}\n\nwindow.onload = function () {\n // troca imagem a cada 3 segundos\n setInterval(atualizarqr, 3000);\n};\n\n" ]
[ 0 ]
[ "Try this\nwindow.onload = function() {\n let counter = 0;\n setInterval(() => atualizarqr(), 3000); // troca imagem a cada 3 segundos } \n function atualizarqr() {\n counter++\n console.log(\"called \" + counter + \" times\")\n var img = document.getElementById('img').src;\n document.getElementById('img').src = \"https://santoaleixo.com.br/wp-content/uploads/2018/11/Logo-Santo-Aleixo-3.png\";\n }\n}\n\n" ]
[ -1 ]
[ "ajax", "javascript" ]
stackoverflow_0074661568_ajax_javascript.txt
Q: CUDA - problem with counting prime numbers from 1-N I just started with CUDA and wanted to write a simple C++ program using Visual Studio that can find the total number of prime numbers from 1 to N. I did this: #include "cuda_runtime.h" #include "device_launch_parameters.h" #include <cstdio> #include <cmath> static void HandleError(cudaError_t err, const char* file, int line) { if (err != cudaSuccess) { printf("%s in %s at line %d\n", cudaGetErrorString(err), file, line); exit(EXIT_FAILURE); } } #define HANDLE_ERROR( err ) (HandleError( err, __FILE__, __LINE__ )) __global__ void countPrimeGPU(const int* array, int* count) { int number = array[threadIdx.x + blockIdx.x * blockDim.x]; if (number <= 1) return; for (int i = 2; i <= floor(pow(number, 0.5)); i++) if (!(number % i)) return; atomicAdd(count, 1); } int main() { int* host_array; int* dev_array; const int size = 9; // prime numbers from 1-9 // allocating & initializing host array host_array = new int[size]; for (int i = 0; i < size; i++) host_array[i] = i + 1; // 1, 2, ..., size int host_primeCount = 0; int* dev_pprimeCount; HANDLE_ERROR(cudaSetDevice(0)); HANDLE_ERROR(cudaMalloc((void**)&dev_array, size * sizeof(int))); HANDLE_ERROR(cudaMemcpy(dev_array, host_array, size * sizeof(int), cudaMemcpyHostToDevice)); HANDLE_ERROR(cudaMalloc((void**)&dev_pprimeCount, sizeof(int))); HANDLE_ERROR(cudaMemcpy(dev_pprimeCount, &host_primeCount, sizeof(int), cudaMemcpyHostToDevice)); countPrimeGPU <<< size, 1 >>> (dev_array, dev_pprimeCount); // !!! HANDLE_ERROR(cudaGetLastError()); HANDLE_ERROR(cudaDeviceSynchronize()); HANDLE_ERROR(cudaMemcpy(&host_primeCount, dev_pprimeCount, sizeof(int), cudaMemcpyDeviceToHost)); printf("Prime count for the first %d numbers: %d\n", size, host_primeCount); cudaFree(dev_array); cudaFree(dev_pprimeCount); HANDLE_ERROR(cudaDeviceReset()); delete[] host_array; return 0; } The problem here is that I get the correct result only when size is in the interval [1, 8]. But when setting it to 9 or above, it is always incorrect. What am I doing wrong? I suspect I have incorrectly set the configuration (number of blocks/threads) when calling countPrimeGPU, but was unable to fix it. Ultimately, I would like to test it with size=10'000'000, and compare it with my multi-threaded CPU implementation. Thank you. A: The proximal issue is that when number is 9, floor(pow(number, 0.5)) is giving you 2, not 3 as you expect. As a result, 9 is incorrectly flagged as a prime. Here is a similar question. pow() (at least, in CUDA device code) does not have the absolute accuracy you desire/need when using floor() (i.e. truncation). You might have a workable path using ordinary rounding rather than truncation (since testing against a factor slightly larger than the square root is not going to disrupt the correctness of your approach), but the method I would suggest to address this is to change your for-loop as follows: for (int i = 2; (i*i) <= number; i++) This avoids the floating-point head scratching, and should be computationally simpler as well. For your desired range of number (up to 10,000,000) the quantity i*i will fit in an int type/value, so I don't foresee issues there. Since this is just a learning exercise, I'm not trying to scrub your code for all sorts of optimization improvements. To pick one example, launching blocks of 1 thread each: countPrimeGPU <<< size, 1 >>> (dev_array, dev_pprimeCount); // !!! is not particularly efficient on the GPU, but it is not algorithmically incorrect to do so.
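For completeness, a compact sketch of the kernel with the integer square-root test applied, plus a more conventional launch shape (256 threads per block with a bounds check); this assumes the same host-side setup as in the question:

__global__ void countPrimeGPU(const int* array, int* count, int size)
{
    int idx = threadIdx.x + blockIdx.x * blockDim.x;
    if (idx >= size) return;               // guard the partial final block
    int number = array[idx];
    if (number <= 1) return;
    for (int i = 2; i * i <= number; i++)  // integer test, no floor/pow
        if (number % i == 0) return;
    atomicAdd(count, 1);
}

// host side: countPrimeGPU<<<(size + 255) / 256, 256>>>(dev_array, dev_pprimeCount, size);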
CUDA - problem with counting prime numbers from 1-N
I just started with CUDA and wanted to write a simple C++ program using Visual Studio that can find the total number of prime numbers from 1 to N. I did this: #include "cuda_runtime.h" #include "device_launch_parameters.h" #include <cstdio> #include <cmath> static void HandleError(cudaError_t err, const char* file, int line) { if (err != cudaSuccess) { printf("%s in %s at line %d\n", cudaGetErrorString(err), file, line); exit(EXIT_FAILURE); } } #define HANDLE_ERROR( err ) (HandleError( err, __FILE__, __LINE__ )) __global__ void countPrimeGPU(const int* array, int* count) { int number = array[threadIdx.x + blockIdx.x * blockDim.x]; if (number <= 1) return; for (int i = 2; i <= floor(pow(number, 0.5)); i++) if (!(number % i)) return; atomicAdd(count, 1); } int main() { int* host_array; int* dev_array; const int size = 9; // prime numbers from 1-9 // allocating & initializing host array host_array = new int[size]; for (int i = 0; i < size; i++) host_array[i] = i + 1; // 1, 2, ..., size int host_primeCount = 0; int* dev_pprimeCount; HANDLE_ERROR(cudaSetDevice(0)); HANDLE_ERROR(cudaMalloc((void**)&dev_array, size * sizeof(int))); HANDLE_ERROR(cudaMemcpy(dev_array, host_array, size * sizeof(int), cudaMemcpyHostToDevice)); HANDLE_ERROR(cudaMalloc((void**)&dev_pprimeCount, sizeof(int))); HANDLE_ERROR(cudaMemcpy(dev_pprimeCount, &host_primeCount, sizeof(int), cudaMemcpyHostToDevice)); countPrimeGPU <<< size, 1 >>> (dev_array, dev_pprimeCount); // !!! HANDLE_ERROR(cudaGetLastError()); HANDLE_ERROR(cudaDeviceSynchronize()); HANDLE_ERROR(cudaMemcpy(&host_primeCount, dev_pprimeCount, sizeof(int), cudaMemcpyDeviceToHost)); printf("Prime count for the first %d numbers: %d\n", size, host_primeCount); cudaFree(dev_array); cudaFree(dev_pprimeCount); HANDLE_ERROR(cudaDeviceReset()); delete[] host_array; return 0; } The problem here is that I get the correct result only when size is in the interval [1, 8]. But when setting it to 9 or above, it is always incorrect. What am I doing wrong? I suspect I have incorrectly set the configuration (number of blocks/threads) when calling countPrimeGPU, but was unable to fix it. Ultimately, I would like to test it with size=10'000'000, and compare it with my multi-threaded CPU implementation. Thank you.
[ "The proximal issue is that when number is 9, floor(pow(number, 0.5)) is giving you 2, not 3 as you expect. As a result, 9 is incorrectly flagged as a prime.\nhere is a similar question. pow() (at least, in CUDA device code) does not have the absolute accuracy you desire/need when using floor() (i.e. truncation). You might have a workable path using ordinary rounding rather than truncation (since testing against a factor slightly larger than the square root is not going to disrupt the correctness of your approach), but the method I would suggest to address this is to change your for-loop as follows:\nfor (int i = 2; (i*i) <= number; i++)\n\nThis avoids the floating-point head scratching, and should be computationally simpler as well. For your desired range of number (up to 10,000,000) the quantity i*i will fit in a int type/value, so I don't forsee issues there.\nSince this is just a learning exercise, I'm not trying to scrub your code for all sorts of optimization improvements. To pick one example, launching blocks of 1 thread each:\ncountPrimeGPU <<< size, 1 >>> (dev_array, dev_pprimeCount); // !!!\n\nis not particularly efficient on the GPU, but it is not algorithmically incorrect to do so.\n" ]
[ 2 ]
[]
[]
[ "c++", "cuda", "nvidia" ]
stackoverflow_0074661183_c++_cuda_nvidia.txt
Q: Google cloud vision with Google storage I am making a text detection application using the Google Vision API. I want to figure out how an OCR detection function can load the jpg file. This is a code reference I got from a Google codelab, but when I try to open the url = gs:// as the diagram demonstrates, the error message says invalid arguments. I wonder if I have missed anything? Then I found out that, when it is deployed on Cloud Functions, Google Vision will load the image from Storage. But how? I cannot find any relevant documents giving a detailed process for this. I am new to the code and bad at finding these instructions. Does anyone know how I can successfully read/connect to the jpg file, or maybe provide a reference link regarding this? Thank you! A: The code on my end is running fine and appears to be correct; I just copied the code and ran it through Google Cloud Shell. Be sure to install the Vision API Python client library in your Cloud Shell: pip install --upgrade google-cloud-vision If your intention is to open the image, you can access the image sample provided in the reference you mentioned here: https://storage.cloud.google.com/cloud-samples-data/vision/text/screen.jpg The uri provided in the code is the resource location of the image that is stored in Google Cloud Storage; the link above is the URL equivalent of it. Output: (screenshot omitted) I would suggest reading through the official documents for more information about the API using client libraries here, and although a different implementation, you can view this OCR usage here.
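For reference, a minimal Python sketch of text detection straight from a Cloud Storage URI with the official client library; the bucket path is a placeholder, and the caller (for example, the Cloud Function's service account) needs read access to the object:

from google.cloud import vision

client = vision.ImageAnnotatorClient()
image = vision.Image()
image.source.image_uri = "gs://my-bucket/path/to/file.jpg"  # placeholder URI

response = client.text_detection(image=image)
if response.error.message:
    raise RuntimeError(response.error.message)  # surfaces e.g. permission errors
for text in response.text_annotations:
    print(text.description)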
Google cloud vision with Google storage
I am making a text detection application using the Google Vision API. I want to figure out how an OCR detection function can load the jpg file. This is a code reference I got from a Google codelab, but when I try to open the url = gs:// as the diagram demonstrates, the error message says invalid arguments. I wonder if I have missed anything? Then I found out that, when it is deployed on Cloud Functions, Google Vision will load the image from Storage. But how? I cannot find any relevant documents giving a detailed process for this. I am new to the code and bad at finding these instructions. Does anyone know how I can successfully read/connect to the jpg file, or maybe provide a reference link regarding this? Thank you!
[ "The code on my end is running fine and appears to be correct, I just copied the code and run it through Google Cloud Shell, Be sure to install the Vision API python client library in your cloud shell: pip install --upgrade google-cloud-vision If your intention is to open the the image you can access the image sample provided in the reference you mentioned here: https://storage.cloud.google.com/cloud-samples-data/vision/text/screen.jpg the uri provided in the code is the resource location of the image that is stored in Google Cloud Storage, the link above is the url equivalent of it .\nOutput:\n\nI would suggest reading through official documents for more information about the API using client libraries here and although different implementation you can view this OCR usage here.\n" ]
[ 2 ]
[]
[]
[ "google_cloud_functions", "google_cloud_platform", "google_cloud_storage", "ocr", "vision_api" ]
stackoverflow_0074659285_google_cloud_functions_google_cloud_platform_google_cloud_storage_ocr_vision_api.txt
Q: How to reference and upload a file in Azure Blob Storage? I’m working on an Azure Function App that will grab a .pgp file off of Blob Storage, decrypt it, and then upload that decrypted file back to Blob Storage. I’ve done quite a bit of research and everything usually assumes you are downloading a file to a local drive, decrypting, then uploading. However, in my case I’m trying to do everything in Azure. This is the code I’ve come up with so far. This will connect to and download the file to a stream successfully, but I haven’t figured out how to wire it up with the output stream. The line for UploadAsync() is the one I'm having issues with; it needs a value passed into the method, but I’m assuming the targetBlobClient already has a reference to the blob container and file name. I’m lost here and can’t seem to find any kind of examples to help me figure out what to do. I’m sure this code could be reduced and I will look into that once I can get it to work. var outputStream = await targetBlobClient.UploadAsync(); Here is the code I've come up with so far: try { var privateKeyValue = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretName); var privateKeyPassword = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretPassword); var storageConnString = m.InputStorageConnection; var containerName = m.InputStorageContainer; var sourceFile = m.InputFileName; var targetFile = m.OutputFileName; var sourceFolder = Path.Combine(m.InputStorageContainer, m.InputStorageFolder); var targetFolder = Path.Combine(m.OutputStorageContainer, m.OutputFolder); Console.WriteLine(@"Source full path: " + sourceFolder + "\\" + sourceFile); BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnString); BlobContainerClient sourceContainerClient = blobServiceClient.GetBlobContainerClient(sourceFolder); BlobClient sourceBlobClient = sourceContainerClient.GetBlobClient(sourceFile); BlobContainerClient targetContainerClient = blobServiceClient.GetBlobContainerClient(targetFolder); BlobClient targetBlobClient = sourceContainerClient.GetBlobClient(targetFile); if (await sourceBlobClient.ExistsAsync()) { var inputStream = await sourceBlobClient.DownloadAsync(); var outputStream = await targetBlobClient.UploadAsync(); EncryptionKeys encryptionKeys = new EncryptionKeys(privateKeyValue, privateKeyPassword); PGP pgp = new PGP(encryptionKeys); await pgp.DecryptStreamAsync(inputStream, outputStream); } else { Console.WriteLine(@"Error finding file. " + sourceFolder + "\\" + sourceFile); _log.LogError("Error find file {0}\\{1}.", sourceFolder, sourceFile); } } catch (Exception ex) { _log.LogError("Error decrypting file. EventType: {0} | File: {1} | {2} | {3} | {4}", m.EventName, m.InputFileName, ex.Message, ex.StackTrace, ex.InnerException); Console.WriteLine("Error: " + ex.Message); } A: The UploadAsync method takes a stream.
Here is the code from our Encryption routine that writes to Blob Storage: // TMP file names var temp_sourceFileName = IOHelper.BuildTempFileName(BlobHelper.StripPath(sourceBlobName)); var temp_targetFileName = IOHelper.BuildTempFileName(BlobHelper.StripPath(targetBlobName)); var temp_keyFileName = IOHelper.BuildTempFileName(BlobHelper.StripPath(keyBlobName)); // download Blob to TMP using (var sourceStream = new FileStream(temp_sourceFileName, FileMode.Create)) { var sourceBlobClient = new BlobClient(blobAccountConnStr, sourceContainerName, sourceBlobName); await sourceBlobClient.DownloadToAsync(sourceStream); } // download key to TMP using (var keyStream = new FileStream(temp_keyFileName, FileMode.Create)) { var keyBlobClient = new BlobClient(blobAccountConnStr, sourceContainerName, keyBlobName); await keyBlobClient.DownloadToAsync(keyStream); } // Encrypt stream using (var pgp = new PGP()) { using (FileStream inputFileStream = new FileStream(temp_sourceFileName, FileMode.Open)) { using (Stream outputFileStream = File.Create(temp_targetFileName)) { using (Stream publicKeyStream = new FileStream(temp_keyFileName, FileMode.Open)) { pgp.EncryptStream(inputFileStream, outputFileStream, publicKeyStream, true, true); } } } } // write to target blob using (var encryptStream = new FileStream(temp_targetFileName, FileMode.Open)) { var targetBlobClient = new BlobClient(blobAccountConnStr, targetContainerName, targetBlobName); await targetBlobClient.UploadAsync(encryptStream, true); return new OkObjectResult(targetBlobClient); } A: I worked with someone and we finally figured it out. The code could probably be cleaned up some but this seems to work for now. This uses all memory streams to download, decrypt, upload. I'll eventually create an encryption side of the Function App as well. In PROD we will be using a Service Bus message to kick this off but I'm using an HTTP Trigger to make it easy to test with Postman. Hopefully it helps out someone else. Also note that there is a known bug currently (Nov 2022) with adding Key Vault secret values through the Azure Portal. It strips out all of the formatting, which renders the PGP Keys invalid. You have to use Azure Cloud Shell or similar to upload a multi-line file first and then use Powershell (or Bash) to insert the file value into Key Vault in order to keep the formatting. public async Task DecryptFileAsync(PGPmessage m) { _log.LogInformation("Start decryption process for Event: {0}", m.EventName); //========= Get PGP Keys ================================================================ var privateKeyValue = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretName); var privateKeyPassword = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretPassword); try { var storageConnString = m.InputStorageConnection; var sourceFolder = m.InputStorageContainer; var sourceFile = m.InputFileName; var targetFolder = m.OutputStorageContainer; var targetFile = m.OutputFileName; _log.LogInformation(@"Looking for file {0}\{1}.", sourceFolder, sourceFile); Console.WriteLine(@"Source full path: " + sourceFolder + "\\" + sourceFile); // Create the connections to Blob Storage for both Source and Target files.
BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnString); BlobContainerClient sourceContainerClient = blobServiceClient.GetBlobContainerClient(sourceFolder); BlobClient sourceBlobClient = sourceContainerClient.GetBlobClient(sourceFile); BlobContainerClient targetContainerClient = blobServiceClient.GetBlobContainerClient(targetFolder); BlobClient targetBlobClient = targetContainerClient.GetBlobClient(targetFile); if (await sourceBlobClient.ExistsAsync()) { // Use a memory stream because a Stream type is not seekable, a memory stream is. var inputStream = new MemoryStream(); var outputStream = new MemoryStream(); // Download from blob storage. Copy to memory stream so that it can be seekable. // After copying to memory stream, reset the position to 0 so that it will be able to read from the beginning. var blobDownloadStream = await sourceBlobClient.DownloadAsync(); blobDownloadStream.Value.Content.CopyTo(inputStream); inputStream.Position = 0; // Create PGP Core object passing in the PGP Keys. EncryptionKeys encryptionKeys = new EncryptionKeys(privateKeyValue, privateKeyPassword); PGP pgp = new PGP(encryptionKeys); await pgp.DecryptStreamAsync(inputStream, outputStream); _log.LogInformation(@"Uploading file to storage: {0}\{1}", targetFolder, targetFile); // Reset to the beginning of the stream since it will be at the end due to writing the decrypted value to the stream. outputStream.Position = 0; await targetBlobClient.UploadAsync(outputStream, true); //Set to overwrite=true _log.LogInformation(@"Uploading file to storage complete. {0}\{1}", targetFolder, targetFile); Console.WriteLine(@"Uploading file to storage complete. {0}\{1}", targetFolder, targetFile); } else { Console.WriteLine(@"Error finding file: " + sourceFolder + "\\" + sourceFile); _log.LogError("Error finding file: {0}\\{1}.", sourceFolder, sourceFile); } } catch (Exception ex) { _log.LogError("Error decrypting file. EventType: {0} | File: {1} | {2} | {3} | {4}", m.EventName, m.InputFileName, ex.Message, ex.StackTrace, ex.InnerException); Console.WriteLine("Error: " + ex.Message); } } private string GetKeyVaultSecretValue(string keyVaultURL, string secretName) { var kvSecretValue = string.Empty; try { var secretsClient = new SecretClient(new Uri(keyVaultURL), new DefaultAzureCredential()); kvSecretValue = secretsClient.GetSecret(secretName).Value.Value; //https://scottgeek.technology/the-azure-vault-pgp-and-other-matters-part-2/ } catch (Exception ex) { _log.LogError("Error getting Key Vault secret. SecretName: {0} | {1} | {2} | {3}", secretName, ex.Message, ex.StackTrace, ex.InnerException); } return kvSecretValue; }
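Distilled down, the working pattern above is: download into a seekable MemoryStream, rewind, decrypt, rewind again, upload with overwrite. A minimal sketch of just that round trip, assuming the Azure.Storage.Blobs and PgpCore packages and reusing the client and key variables from above:

using var inputStream = new MemoryStream();
using var outputStream = new MemoryStream();

// Copy the downloaded blob content into a seekable stream and rewind it.
var download = await sourceBlobClient.DownloadAsync();
await download.Value.Content.CopyToAsync(inputStream);
inputStream.Position = 0;

// Decrypt from the input stream into the output stream.
var pgp = new PGP(new EncryptionKeys(privateKeyValue, privateKeyPassword));
await pgp.DecryptStreamAsync(inputStream, outputStream);

// Rewind the decrypted stream before uploading, overwriting any existing blob.
outputStream.Position = 0;
await targetBlobClient.UploadAsync(outputStream, overwrite: true);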
How to reference and upload a file in Azure Blob Storage?
I’m working on an Azure Function App that will grab a .pgp file off of Blob Storage, decrypt it, and then upload that decrypted file back to Blob Storage. I’ve done quite a bit of research and everything usually assumes you are downloading a file to a local drive, decrypt, then upload. However, in my case I’m trying to do everything in Azure. This is the code I’ve come up with so far. This will connect to and download the file to a stream successfully but I’m not figuring out how to wire it up with the output stream. The line for the UploadAsync() is the one I'm having issues with and it needs a value passed into the method but I’m assuming the targetBlobClient already has reference to the Blob Container and file name. I’m lost here and can’t seem to find any kind of examples to help me figure out what to do. I’m sure this code could be reduced and I will look into that once I can get it to work. var outputStream = await targetBlobClient.UploadAsync(); Here is the code I've come up with so far: try { var privateKeyValue = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretName); var privateKeyPassword = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretPassword); var storageConnString = m.InputStorageConnection; var containerName = m.InputStorageContainer; var sourceFile = m.InputFileName; var targetFile = m.OutputFileName; var sourceFolder = Path.Combine(m.InputStorageContainer, m.InputStorageFolder); var targetFolder = Path.Combine(m.OutputStorageContainer, m.OutputFolder); Console.WriteLine(@"Source full path: " + sourceFolder + "\\" + sourceFile); BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnString); BlobContainerClient sourceContainerClient = blobServiceClient.GetBlobContainerClient(sourceFolder); BlobClient sourceBlobClient = sourceContainerClient.GetBlobClient(sourceFile); BlobContainerClient targetContainerClient = blobServiceClient.GetBlobContainerClient(targetFolder); BlobClient targetBlobClient = sourceContainerClient.GetBlobClient(targetFile); if (await sourceBlobClient.ExistsAsync()) { var inputStream = await sourceBlobClient.DownloadAsync(); var outputStream = await targetBlobClient.UploadAsync(); EncryptionKeys encryptionKeys = new EncryptionKeys(privateKeyValue, privateKeyPassword); PGP pgp = new PGP(encryptionKeys); await pgp.DecryptStreamAsync(inputStream, outputStream); } else { Console.WriteLine(@"Error finding file. " + sourceFolder + "\\" + sourceFile); _log.LogError("Error find file {0}\\{1}.", sourceFolder, sourceFile); } } catch (Exception ex) { _log.LogError("Error decrypting file. EventType: {0} | File: {1} | {2} | {3} | {4}", m.EventName, m.InputFileName, ex.Message, ex.StackTrace, ex.InnerException); Console.WriteLine("Error: " + ex.Message); }
[ "The UploadAsync method takes a stream. Here is the code from our Encryption routine that writes to Blob Storage:\n// TMP file names\nvar temp_sourceFileName = IOHelper.BuildTempFileName(BlobHelper.StripPath(sourceBlobName));\nvar temp_targetFileName = IOHelper.BuildTempFileName(BlobHelper.StripPath(targetBlobName));\nvar temp_keyFileName = IOHelper.BuildTempFileName(BlobHelper.StripPath(keyBlobName));\n\n\n// download Blob to TMP\nusing (var sourceStream = new FileStream(temp_sourceFileName, FileMode.Create))\n{\n var sourceBlobClient = new BlobClient(blobAccountConnStr, sourceContainerName, sourceBlobName);\n await sourceBlobClient.DownloadToAsync(sourceStream);\n}\n\n// download key to TMP\nusing (var keyStream = new FileStream(temp_keyFileName, FileMode.Create))\n{\n var keyBlobClient = new BlobClient(blobAccountConnStr, sourceContainerName, keyBlobName);\n await keyBlobClient.DownloadToAsync(keyStream);\n}\n\n\n// Encrypt stream\nusing (var pgp = new PGP())\n{\n using (FileStream inputFileStream = new FileStream(temp_sourceFileName, FileMode.Open))\n {\n using (Stream outputFileStream = File.Create(temp_targetFileName))\n {\n using (Stream publicKeyStream = new FileStream(temp_keyFileName, FileMode.Open))\n {\n pgp.EncryptStream(inputFileStream, outputFileStream, publicKeyStream, true, true);\n }\n }\n }\n}\n\n// write to target blob\n// write to target blob\nusing (var encryptStream = new FileStream(temp_targetFileName, FileMode.Open))\n{\n var targetBlobClient = new BlobClient(blobAccountConnStr, targetContainerName, targetBlobName);\n await targetBlobClient.UploadAsync(encryptStream, true);\n\n return new OkObjectResult(targetBlobClient);\n}\n\n", "I worked with someone and we finally figured it out. The code could probably be cleaned up some but this seems to work for now. This uses all memory streams to download, decrypt, upload. I'll eventually create an encryption side of the Function App as well. In PROD we will be using a Service Bus message to kick this off but I'm using an HTTP Trigger to make it easy to test with Postman. Hopefully it helps out someone else.\nAlso note that there is a known bug currently (Nov 2022) with adding Key Vault secret values through the Azure Portal. It strips out all of the formatting which renders the PGP Keys invalid. 
You have to use Azure Cloud Shell or similar to upload a multi-line file first and then use Powershell (or Bash) to insert the file value into Key Vault in order to keep the formatting.\npublic async Task DecryptFileAsync(PGPmessage m)\n{\n _log.LogInformation(\"Start decryption process for Event: {0}\", m.EventName);\n\n\n //========= Get PGP Keys ================================================================\n var privateKeyValue = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretName);\n var privateKeyPassword = GetKeyVaultSecretValue(m.KeyVaultURL, m.KeyVaultPrivateSecretPassword);\n\n\n try\n {\n var storageConnString = m.InputStorageConnection;\n\n var sourceFolder = m.InputStorageContainer;\n var sourceFile = m.InputFileName;\n \n var targetFolder = m.OutputStorageContainer;\n var targetFile = m.OutputFileName;\n\n _log.LogInformation(@\"Looking for file {0}\\{1}.\", sourceFolder, sourceFile);\n Console.WriteLine(@\"Source full path: \" + sourceFolder + \"\\\\\" + sourceFile);\n\n\n // Create the connections to Blob Storage for both Source and Target files.\n BlobServiceClient blobServiceClient = new BlobServiceClient(storageConnString);\n\n BlobContainerClient sourceContainerClient = blobServiceClient.GetBlobContainerClient(sourceFolder); \n BlobClient sourceBlobClient = sourceContainerClient.GetBlobClient(sourceFile); \n\n BlobContainerClient targetContainerClient = blobServiceClient.GetBlobContainerClient(targetFolder);\n BlobClient targetBlobClient = sourceContainerClient.GetBlobClient(targetFile);\n\n if (await sourceBlobClient.ExistsAsync())\n {\n // Use a memory stream because a Stream type is not seekable, a memory stream is. \n var inputStream = new MemoryStream();\n var outputStream = new MemoryStream();\n\n // Download from blob storage. Copy to memory stream so that it can be seakable. \n // After copying to memory stream, reset the position to 0 so that it will be able to read from the begining.\n var blobDownloadStream = await sourceBlobClient.DownloadAsync();\n blobDownloadStream.Value.Content.CopyTo(inputStream);\n inputStream.Position = 0;\n\n // Create PGP Core object passing in the PGP Keys. \n EncryptionKeys encryptionKeys = new EncryptionKeys(privateKeyValue, privateKeyPassword);\n PGP pgp = new PGP(encryptionKeys);\n\n await pgp.DecryptStreamAsync(inputStream, outputStream);\n\n _log.LogInformation(@\"Uploading file to storage: {0}\\{1}\", targetFolder, targetFile);\n\n // Reset to the beginning of the stream since it will be at the end due to writing the decrypted value to the stream.\n outputStream.Position = 0; \n await targetBlobClient.UploadAsync(outputStream, true); //Set to overwrite=true\n\n _log.LogInformation(@\"Uploading to file to storage Complete. {0}\\{1}\", targetFolder, targetFile);\n Console.WriteLine(@\"Uploading to file to storage Complete. {0}\\{1}\", targetFolder, targetFile);\n }\n else\n {\n Console.WriteLine(@\"Error finding file: \" + sourceFolder + \"\\\\\" + sourceFile);\n _log.LogError(\"Error finding file: {0}\\\\{1}.\", sourceFolder, sourceFile);\n }\n }\n catch (Exception ex)\n {\n _log.LogError(\"Error decrypting file. 
EventType: {0} | File: {1} | {2} | {3} | {4}\", m.EventName, m.InputFileName, ex.Message, ex.StackTrace, ex.InnerException);\n\n Console.WriteLine(\"Error: \" + ex.Message);\n }\n}\n\nprivate string GetKeyVaultSecretValue(string keyVaultURL, string secretName)\n{\n var kvSecretValue = string.Empty;\n\n try\n {\n var secretsClient = new SecretClient(new Uri(keyVaultURL), new DefaultAzureCredential());\n kvSecretValue = secretsClient.GetSecret(secretName).Value.Value;\n\n //https://scottgeek.technology/the-azure-vault-pgp-and-other-matters-part-2/\n }\n catch (Exception ex)\n {\n _log.LogError(\"Error getting Key Vault secret. SecretName: {0} | {1} | {2} | {3}\", secretName, ex.Message, ex.StackTrace, ex.InnerException);\n }\n\n return kvSecretValue;\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ ".net_core", "azure", "azure_blob_storage", "c#", "pgp" ]
stackoverflow_0074644919_.net_core_azure_azure_blob_storage_c#_pgp.txt
Q: savedInstanceState causes a memory leak when returning to fragment (The whole code can be found here (without the leakCanary dependencies): https://github.com/Dawwit0001/HiltMultiModule) I created 2 fragments, a login fragment and a register fragment. Whenever the user opens the app, the login screen is displayed; if the user navigates to the register screen, creates an account, and then navigates back to the login screen, a leak happens. I am not sure why that is, but I discovered that when I replace the "savedInstanceState" in the login fragment with null (inside onViewCreated), it does not happen. The whole leak: ┬─── │ GC Root: Input or output parameters in native code │ ├─ dalvik.system.PathClassLoader instance │ Leaking: NO (InternalLeakCanary↓ is not leaking and A ClassLoader is never │ leaking) │ ↓ ClassLoader.runtimeInternalObjects ├─ java.lang.Object[] array │ Leaking: NO (InternalLeakCanary↓ is not leaking) │ ↓ Object[728] ├─ leakcanary.internal.InternalLeakCanary class │ Leaking: NO (MainActivity↓ is not leaking and a class is never leaking) │ ↓ static InternalLeakCanary.resumedActivity ├─ winged.example.hiltmultimodule.MainActivity instance │ Leaking: NO (RegisterFragment↓ is not leaking and Activity#mDestroyed is │ false) │ mApplication instance of winged.example.hiltmultimodule.di. │ HiltMultiModuleApplication │ mBase instance of androidx.appcompat.view.ContextThemeWrapper │ ↓ ComponentActivity.mOnConfigurationChangedListeners ├─ java.util.concurrent.CopyOnWriteArrayList instance │ Leaking: NO (RegisterFragment↓ is not leaking) │ ↓ CopyOnWriteArrayList[4] ├─ androidx.fragment.app.FragmentManager$$ExternalSyntheticLambda0 instance │ Leaking: NO (RegisterFragment↓ is not leaking) │ ↓ FragmentManager$$ExternalSyntheticLambda0.f$0 ├─ androidx.fragment.app.FragmentManagerImpl instance │ Leaking: NO (RegisterFragment↓ is not leaking) │ ↓ FragmentManager.mParent ├─ winged.example.feature_login.register.RegisterFragment instance │ Leaking: NO (Fragment#mFragmentManager is not null) │ componentContext instance of dagger.hilt.android.internal.managers. │ ViewComponentManager$FragmentContextWrapper, wrapping activity winged.
│ example.hiltmultimodule.MainActivity with mDestroyed = false │ ↓ Fragment.mSavedViewState │ ~~~~~~~~~~~~~~~ ├─ android.util.SparseArray instance │ Leaking: UNKNOWN │ Retaining 417.7 kB in 4154 objects │ ↓ SparseArray.mValues │ ~~~~~~~ ├─ java.lang.Object[] array │ Leaking: UNKNOWN │ Retaining 417.6 kB in 4152 objects │ ↓ Object[9] │ ~~~ ├─ android.widget.TextView$SavedState instance │ Leaking: UNKNOWN │ Retaining 416.1 kB in 4113 objects │ ↓ TextView$SavedState.text │ ~~~~ ├─ android.text.SpannableStringBuilder instance │ Leaking: UNKNOWN │ Retaining 416.0 kB in 4109 objects │ ↓ SpannableStringBuilder.mSpans │ ~~~~~~ ├─ java.lang.Object[] array │ Leaking: UNKNOWN │ Retaining 36 B in 1 objects │ ↓ Object[0] │ ~~~ ├─ android.text.method.PasswordTransformationMethod$Visible instance │ Leaking: UNKNOWN │ Retaining 415.4 kB in 4099 objects │ ↓ PasswordTransformationMethod$Visible.mText │ ~~~~~ ├─ androidx.emoji2.text.SpannableBuilder instance │ Leaking: UNKNOWN │ Retaining 415.4 kB in 4098 objects │ ↓ SpannableStringBuilder.mSpans │ ~~~~~~ ├─ java.lang.Object[] array │ Leaking: UNKNOWN │ Retaining 76 B in 1 objects │ ↓ Object[0] │ ~~~ ├─ android.widget.TextView$ChangeWatcher instance │ Leaking: UNKNOWN │ Retaining 16 B in 1 objects │ ↓ TextView$ChangeWatcher.this$0 │ ~~~~~~ ├─ com.google.android.material.textfield.TextInputEditText instance │ Leaking: UNKNOWN │ Retaining 410.0 kB in 3980 objects │ View not part of a window view hierarchy │ View.mAttachInfo is null (view detached) │ View.mID = R.id.repeatPasswordTIET │ View.mWindowAttachCount = 1 │ mContext instance of androidx.appcompat.view.ContextThemeWrapper, wrapping │ activity winged.example.hiltmultimodule.MainActivity with mDestroyed = │ false │ ↓ View.mParent │ ~~~~~~~ ├─ android.widget.FrameLayout instance │ Leaking: UNKNOWN │ Retaining 1.0 kB in 15 objects │ View not part of a window view hierarchy │ View.mAttachInfo is null (view detached) │ View.mWindowAttachCount = 1 │ mContext instance of androidx.appcompat.view.ContextThemeWrapper, wrapping │ activity winged.example.hiltmultimodule.MainActivity with mDestroyed = │ false │ ↓ View.mParent │ ~~~~~~~ ├─ com.google.android.material.textfield.TextInputLayout instance │ Leaking: UNKNOWN │ Retaining 381.0 kB in 3284 objects │ View not part of a window view hierarchy │ View.mAttachInfo is null (view detached) │ View.mID = R.id.repeatPasswordTIL │ View.mWindowAttachCount = 1 │ mContext instance of androidx.appcompat.view.ContextThemeWrapper, wrapping │ activity winged.example.hiltmultimodule.MainActivity with mDestroyed = │ false │ ↓ View.mParent │ ~~~~~~~ ╰→ androidx.constraintlayout.widget.ConstraintLayout instance ​ Leaking: YES (ObjectWatcher was watching this because winged.example. ​ feature_login.register.RegisterFragment received Fragment#onDestroyView() ​ callback (references to its views should be cleared to prevent leaks)) ​ Retaining 2.5 kB in 59 objects ​ key = 16bf9a7e-c3de-4737-a5c2-8933c6fed9d3 ​ watchDurationMillis = 132084 ​ retainedDurationMillis = 127081 ​ View not part of a window view hierarchy ​ View.mAttachInfo is null (view detached) ​ View.mID = R.id.mainCL ​ View.mWindowAttachCount = 1 ​ mContext instance of dagger.hilt.android.internal.managers. ​ ViewComponentManager$FragmentContextWrapper, wrapping activity winged. 
​ example.hiltmultimodule.MainActivity with mDestroyed = false METADATA Build.VERSION.SDK_INT: 30 Build.MANUFACTURER: unknown LeakCanary version: 2.10 App process name: winged.example.hiltmultimodule Class count: 18527 Instance count: 115319 Primitive array count: 86210 Object array count: 17808 Thread count: 21 Heap total bytes: 16303680 Bitmap count: 4 Bitmap total bytes: 228214 Large bitmap count: 0 Large bitmap total bytes: 0 Db 1: open /data/user/0/winged.example. hiltmultimodule/databases/HiltMultiModuleDB Stats: LruCache[maxSize=3000,hits=40347,misses=84973,hitRate=32%] RandomAccess[bytes=4231371,reads=84973,travel=25038680029,range=19100784,size=25 202710] Analysis duration: 6049 ms I'm still learning so any info / possible reasons / solutions will be appreciated, thanks :) Edit: BaseFragment: abstract class BaseFragment<T : ViewDataBinding>(@LayoutRes private val fragmentRes: Int) : Fragment() { private var _binding: T? = null val binding get() = _binding!! override fun onCreateView( inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle? ): View? { _binding = DataBindingUtil.inflate(inflater, fragmentRes, container, false) return binding.root } override fun onDestroyView() { super.onDestroyView() _binding = null } fun navigateTo(targetDestination: Int) { findNavController().navigate(targetDestination) } fun navigateUp() { findNavController().navigateUp() } } loginFragment: @AndroidEntryPoint class LoginFragment : BaseFragment<FragmentLoginBinding>(R.layout.fragment_login) { private val viewModel: LoginViewModel by viewModels() override fun onViewCreated(view: View, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) setUpLogInButton() setUpTextRedirection() observeForLoginEvents() } private fun setUpTextRedirection() { binding.signUpTV.setOnClickListener { navigateTo(R.id.registerFragment) } } private fun setUpLogInButton() { binding.logInBTN.setOnClickListener { val email = binding.emailTIET.extractText() val password = binding.passwordTIET.extractText() if(email.isAValidEmail() && password.isNotBlank()) { viewModel.logIn(LoginCredentials(mail = email, password = password)) } } } private fun observeForLoginEvents() { viewModel.loginEvent.observe(viewLifecycleOwner) { result -> if(result.isSuccess) { /* Adding some kind of "Main Screen" module would be an idea but as I've stated previously, this is just a small "test" project showing off architecture, so I hope you will forgive me <3 (PS: if you are reading this and there still isn't that module, you can make a PR and add it)*/ Toast.makeText(requireContext(), "Success!", Toast.LENGTH_SHORT).show() } else { Toast.makeText(requireContext(), "No matching account", Toast.LENGTH_SHORT).show() } } } } registerFragment: @AndroidEntryPoint class RegisterFragment: BaseFragment<FragmentRegisterBinding>(R.layout.fragment_register) { private val viewModel: RegisterViewModel by viewModels() override fun onViewCreated(view: View, savedInstanceState: Bundle?) 
{ super.onViewCreated(view, savedInstanceState) setUpCreateAccountButton() setUpTextRedirection() observeRegisterEvents() } private fun setUpTextRedirection() { binding.logInTV.setOnClickListener { navigateTo(R.id.loginFragment) } } private fun setUpCreateAccountButton() { binding.createAnAccountBTN.setOnClickListener { val email = binding.emailTIET.extractText() val password = binding.passwordTIET.extractText() val repeatedPassword = binding.repeatPasswordTIET.extractText() if(email.isAValidEmail() && (password == repeatedPassword) && password.isNotEmpty()) { viewModel.saveUser( LoginCredentials(mail = email, password = password) ) } } } private fun observeRegisterEvents() { viewModel.registerEvent.observe(viewLifecycleOwner) { result -> if(result.isSuccess) { navigateTo(R.id.loginFragment) } else { Toast.makeText(requireContext(), "Something went wrong", Toast.LENGTH_SHORT).show() } } } As you may notice, the BaseFragment class has a reference to a view (the binding variable), but it releases it in onDestroyView, so I think that should be working; also, in the leak it isn't "complaining" about the binding itself A: Found the culprit. The problem comes from an EditText that has the input type android:inputType="textPassword" or any other variant that has a password. In this case, it is one of the TextInputLayout instances, which has a TextInputEditText. But: it may need to be combined with the usage of the Emoji library, because the class has an element with the type androidx.emoji2.text.SpannableBuilder that belongs to the Emoji library. The TextInputEditText's text is spannable, which means it's not a simple string, it's an object. An object that can be Parcelable, which means its state can be saved. And it looks like it's actually saved here. No idea how though, since Parcelable limits which types can be saved. The memory leak appears to be on the TextInputEditText with the ID R.id.repeatPasswordTIET. In your layout file, you can also search for @+id/repeatPasswordTIET or @id/repeatPasswordTIET to find the specific one. Why the leak? TextViews (or more likely EditTexts) have a tendency to not remove their listeners once they are not needed. It's just not configured that way, maybe due to expecting the callers to remove the listeners themselves once they are not needed. A lot of other listeners get cleared once they are not needed, but the TextWatcher is an exception unfortunately. Examining the LeakCanary trace, the android.text.method.PasswordTransformationMethod$Visible instance has an androidx.emoji2.text.SpannableBuilder which contains an array, and one of the entries points to an android.widget.TextView$ChangeWatcher instance which then shows the TextInputEditText that is leaked. It is leaked because, in the same trace, you can see that the listener is saved to an android.widget.TextView$SavedState instance, which I assume gets restored in a future fragment. I actually tried to fetch the value myself, but wasn't able to do it. The saved state did not hold the listener. Although, I have a potential solution: delete every listener when the view is not necessary anymore. Potential solution: import android.content.Context import android.text.TextWatcher import android.util.AttributeSet import com.google.android.material.textfield.TextInputEditText class ListenerAwareEditText @JvmOverloads constructor( context: Context, attrs: AttributeSet?
= null, defStyleAttr: Int = 0 ): TextInputEditText(context, attrs, defStyleAttr) { private companion object { val textChangedListenersStatic: MutableList<TextWatcher> = ArrayList() } private val textChangedListeners: MutableList<TextWatcher> = ArrayList() /** * Swap the listeners added in the companion object list with the actual. */ init { textChangedListeners.addAll(textChangedListenersStatic) textChangedListenersStatic.clear() } /** * Overridden to hold a reference to the listener */ override fun addTextChangedListener(watcher: TextWatcher?) { super.addTextChangedListener(watcher) watcher?.let { // NullPointerException may happen because this method // can be called before the object itself is constructed, // from the super classes. // So, to hold the values, a static list in a // companion object was used, and then the elements // get transferred to the actual list, clearing the // static one. try { textChangedListeners.add(it) } catch (ignore: NullPointerException) { textChangedListenersStatic.add(it) } } } /** * Overridden to release the listener in our list */ override fun removeTextChangedListener(watcher: TextWatcher?) { super.removeTextChangedListener(watcher) watcher?.let { // NullPointerException may happen because this method // can be called before the object itself is constructed, // from the super classes. // So, to hold the values, a static list in a // companion object was used, and then the elements // get transferred to the actual list, clearing the // static one. try { textChangedListeners.remove(it) } catch (ignore: NullPointerException) { textChangedListenersStatic.remove(it) } } } /** * Clears the text changed listeners. Call this from the * fragment's [onDestroyView] or Activity's [onDestroy]. */ fun clearTextChangedListeners() { textChangedListeners.forEach { super.removeTextChangedListener(it) } textChangedListeners.clear() } } What the class does: it caches all the added listeners in a list, and allows you to call clearTextChangedListeners() once they are not needed. (I tried to do this automatically but the lifecycle got confusing once fragments, nested recyclerviews etc... got involved so I left it here) Usage: replace your layouts' TextInputEditText with this class, and in your fragment's onDestroyView, call editText.clearTextChangedListeners(). It should solve your problem, however it's the Android world. It might not.
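For example, a hypothetical fragment override, assuming the layout swaps repeatPasswordTIET to ListenerAwareEditText so the binding exposes the new type:

override fun onDestroyView() {
    // Release the cached TextWatchers before BaseFragment nulls the binding in super.
    binding.repeatPasswordTIET.clearTextChangedListeners()
    super.onDestroyView()
}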
savedInstanceState causes a memory leak when returning to fragment
(The whole code can be found here (without the leakCanary dependencies): https://github.com/Dawwit0001/HiltMultiModule) I created 2 fragments, a login fragment and a register fragment, whenever the user opens the app, the login screen is displayed if the user navigates to the register screen, creates an account and then navigates back to the login screen a leak happens. I am not sure why is that, but I discovered that when I replace the "savedInstanceState" in the login fragment with null (inside onViewCreated), it does not happen. The whole leak: ┬─── │ GC Root: Input or output parameters in native code │ ├─ dalvik.system.PathClassLoader instance │ Leaking: NO (InternalLeakCanary↓ is not leaking and A ClassLoader is never │ leaking) │ ↓ ClassLoader.runtimeInternalObjects ├─ java.lang.Object[] array │ Leaking: NO (InternalLeakCanary↓ is not leaking) │ ↓ Object[728] ├─ leakcanary.internal.InternalLeakCanary class │ Leaking: NO (MainActivity↓ is not leaking and a class is never leaking) │ ↓ static InternalLeakCanary.resumedActivity ├─ winged.example.hiltmultimodule.MainActivity instance │ Leaking: NO (RegisterFragment↓ is not leaking and Activity#mDestroyed is │ false) │ mApplication instance of winged.example.hiltmultimodule.di. │ HiltMultiModuleApplication │ mBase instance of androidx.appcompat.view.ContextThemeWrapper │ ↓ ComponentActivity.mOnConfigurationChangedListeners ├─ java.util.concurrent.CopyOnWriteArrayList instance │ Leaking: NO (RegisterFragment↓ is not leaking) │ ↓ CopyOnWriteArrayList[4] ├─ androidx.fragment.app.FragmentManager$$ExternalSyntheticLambda0 instance │ Leaking: NO (RegisterFragment↓ is not leaking) │ ↓ FragmentManager$$ExternalSyntheticLambda0.f$0 ├─ androidx.fragment.app.FragmentManagerImpl instance │ Leaking: NO (RegisterFragment↓ is not leaking) │ ↓ FragmentManager.mParent ├─ winged.example.feature_login.register.RegisterFragment instance │ Leaking: NO (Fragment#mFragmentManager is not null) │ componentContext instance of dagger.hilt.android.internal.managers. │ ViewComponentManager$FragmentContextWrapper, wrapping activity winged. 
│ example.hiltmultimodule.MainActivity with mDestroyed = false │ ↓ Fragment.mSavedViewState │ ~~~~~~~~~~~~~~~ ├─ android.util.SparseArray instance │ Leaking: UNKNOWN │ Retaining 417.7 kB in 4154 objects │ ↓ SparseArray.mValues │ ~~~~~~~ ├─ java.lang.Object[] array │ Leaking: UNKNOWN │ Retaining 417.6 kB in 4152 objects │ ↓ Object[9] │ ~~~ ├─ android.widget.TextView$SavedState instance │ Leaking: UNKNOWN │ Retaining 416.1 kB in 4113 objects │ ↓ TextView$SavedState.text │ ~~~~ ├─ android.text.SpannableStringBuilder instance │ Leaking: UNKNOWN │ Retaining 416.0 kB in 4109 objects │ ↓ SpannableStringBuilder.mSpans │ ~~~~~~ ├─ java.lang.Object[] array │ Leaking: UNKNOWN │ Retaining 36 B in 1 objects │ ↓ Object[0] │ ~~~ ├─ android.text.method.PasswordTransformationMethod$Visible instance │ Leaking: UNKNOWN │ Retaining 415.4 kB in 4099 objects │ ↓ PasswordTransformationMethod$Visible.mText │ ~~~~~ ├─ androidx.emoji2.text.SpannableBuilder instance │ Leaking: UNKNOWN │ Retaining 415.4 kB in 4098 objects │ ↓ SpannableStringBuilder.mSpans │ ~~~~~~ ├─ java.lang.Object[] array │ Leaking: UNKNOWN │ Retaining 76 B in 1 objects │ ↓ Object[0] │ ~~~ ├─ android.widget.TextView$ChangeWatcher instance │ Leaking: UNKNOWN │ Retaining 16 B in 1 objects │ ↓ TextView$ChangeWatcher.this$0 │ ~~~~~~ ├─ com.google.android.material.textfield.TextInputEditText instance │ Leaking: UNKNOWN │ Retaining 410.0 kB in 3980 objects │ View not part of a window view hierarchy │ View.mAttachInfo is null (view detached) │ View.mID = R.id.repeatPasswordTIET │ View.mWindowAttachCount = 1 │ mContext instance of androidx.appcompat.view.ContextThemeWrapper, wrapping │ activity winged.example.hiltmultimodule.MainActivity with mDestroyed = │ false │ ↓ View.mParent │ ~~~~~~~ ├─ android.widget.FrameLayout instance │ Leaking: UNKNOWN │ Retaining 1.0 kB in 15 objects │ View not part of a window view hierarchy │ View.mAttachInfo is null (view detached) │ View.mWindowAttachCount = 1 │ mContext instance of androidx.appcompat.view.ContextThemeWrapper, wrapping │ activity winged.example.hiltmultimodule.MainActivity with mDestroyed = │ false │ ↓ View.mParent │ ~~~~~~~ ├─ com.google.android.material.textfield.TextInputLayout instance │ Leaking: UNKNOWN │ Retaining 381.0 kB in 3284 objects │ View not part of a window view hierarchy │ View.mAttachInfo is null (view detached) │ View.mID = R.id.repeatPasswordTIL │ View.mWindowAttachCount = 1 │ mContext instance of androidx.appcompat.view.ContextThemeWrapper, wrapping │ activity winged.example.hiltmultimodule.MainActivity with mDestroyed = │ false │ ↓ View.mParent │ ~~~~~~~ ╰→ androidx.constraintlayout.widget.ConstraintLayout instance ​ Leaking: YES (ObjectWatcher was watching this because winged.example. ​ feature_login.register.RegisterFragment received Fragment#onDestroyView() ​ callback (references to its views should be cleared to prevent leaks)) ​ Retaining 2.5 kB in 59 objects ​ key = 16bf9a7e-c3de-4737-a5c2-8933c6fed9d3 ​ watchDurationMillis = 132084 ​ retainedDurationMillis = 127081 ​ View not part of a window view hierarchy ​ View.mAttachInfo is null (view detached) ​ View.mID = R.id.mainCL ​ View.mWindowAttachCount = 1 ​ mContext instance of dagger.hilt.android.internal.managers. ​ ViewComponentManager$FragmentContextWrapper, wrapping activity winged. 
​ example.hiltmultimodule.MainActivity with mDestroyed = false METADATA Build.VERSION.SDK_INT: 30 Build.MANUFACTURER: unknown LeakCanary version: 2.10 App process name: winged.example.hiltmultimodule Class count: 18527 Instance count: 115319 Primitive array count: 86210 Object array count: 17808 Thread count: 21 Heap total bytes: 16303680 Bitmap count: 4 Bitmap total bytes: 228214 Large bitmap count: 0 Large bitmap total bytes: 0 Db 1: open /data/user/0/winged.example. hiltmultimodule/databases/HiltMultiModuleDB Stats: LruCache[maxSize=3000,hits=40347,misses=84973,hitRate=32%] RandomAccess[bytes=4231371,reads=84973,travel=25038680029,range=19100784,size=25 202710] Analysis duration: 6049 ms I'm still learning so any info / possible reasons / solutions will be appreciated, thanks :) Edit: BaseFragment: abstract class BaseFragment<T : ViewDataBinding>(@LayoutRes private val fragmentRes: Int) : Fragment() { private var _binding: T? = null val binding get() = _binding!! override fun onCreateView( inflater: LayoutInflater, container: ViewGroup?, savedInstanceState: Bundle? ): View? { _binding = DataBindingUtil.inflate(inflater, fragmentRes, container, false) return binding.root } override fun onDestroyView() { super.onDestroyView() _binding = null } fun navigateTo(targetDestination: Int) { findNavController().navigate(targetDestination) } fun navigateUp() { findNavController().navigateUp() } } loginFragment: @AndroidEntryPoint class LoginFragment : BaseFragment<FragmentLoginBinding>(R.layout.fragment_login) { private val viewModel: LoginViewModel by viewModels() override fun onViewCreated(view: View, savedInstanceState: Bundle?) { super.onViewCreated(view, savedInstanceState) setUpLogInButton() setUpTextRedirection() observeForLoginEvents() } private fun setUpTextRedirection() { binding.signUpTV.setOnClickListener { navigateTo(R.id.registerFragment) } } private fun setUpLogInButton() { binding.logInBTN.setOnClickListener { val email = binding.emailTIET.extractText() val password = binding.passwordTIET.extractText() if(email.isAValidEmail() && password.isNotBlank()) { viewModel.logIn(LoginCredentials(mail = email, password = password)) } } } private fun observeForLoginEvents() { viewModel.loginEvent.observe(viewLifecycleOwner) { result -> if(result.isSuccess) { /* Adding some kind of "Main Screen" module would be an idea but as I've stated previously, this is just a small "test" project showing off architecture, so I hope you will forgive me <3 (PS: if you are reading this and there still isn't that module, you can make a PR and add it)*/ Toast.makeText(requireContext(), "Success!", Toast.LENGTH_SHORT).show() } else { Toast.makeText(requireContext(), "No matching account", Toast.LENGTH_SHORT).show() } } } } registerFragment: @AndroidEntryPoint class RegisterFragment: BaseFragment<FragmentRegisterBinding>(R.layout.fragment_register) { private val viewModel: RegisterViewModel by viewModels() override fun onViewCreated(view: View, savedInstanceState: Bundle?) 
{ super.onViewCreated(view, savedInstanceState) setUpCreateAccountButton() setUpTextRedirection() observeRegisterEvents() } private fun setUpTextRedirection() { binding.logInTV.setOnClickListener { navigateTo(R.id.loginFragment) } } private fun setUpCreateAccountButton() { binding.createAnAccountBTN.setOnClickListener { val email = binding.emailTIET.extractText() val password = binding.passwordTIET.extractText() val repeatedPassword = binding.repeatPasswordTIET.extractText() if(email.isAValidEmail() && (password == repeatedPassword) && password.isNotEmpty()) { viewModel.saveUser( LoginCredentials(mail = email, password = password) ) } } } private fun observeRegisterEvents() { viewModel.registerEvent.observe(viewLifecycleOwner) { result -> if(result.isSuccess) { navigateTo(R.id.loginFragment) } else { Toast.makeText(requireContext(), "Something went wrong", Toast.LENGTH_SHORT).show() } } } As you may notice, the BaseFragment class has a reference to a view (the binding variable), but it releases it in onDestroyView, so I think that should be working; also, in the leak it isn't "complaining" about the binding itself
[ "Found the culprit. The problem comes from an EditText that has the input type android:inputType=\"textPassword\" or any other variant that has a password. In this case, it is one of the TextInputLayout instance, that has a TextInputEditText. But: It may need to be combined with the usage of Emoji Library, because the class has an element with the type androidx.emoji2.text.SpannableBuilder that belongs to the Emoji library.\nThe TextInputEditText's text is spannable, which means it's not a simple string, it's an object. An object, that can be Parcelable, which means its state can be saved. And, it looks like its actually saved here. No idea how though, since Parcelable limits which types can be saved.\nThe memory leak appears to be on the TextInputEditText with the ID R.id.repeatPasswordTIET. In your layout file, you can also search for @+id/repeatPasswordTIET or @id/repeatPasswordTIET to find the specific one.\nWhy the leak?\nTextView's (or more likely EditText's) have a tendency to not remove their listeners once they are not needed. It's just not configured that way, maybe due to expecting the callers to remove the listeners themselves once they are not needed. A lot of other listeners get cleared once they are not needed, but the TextWatcher is an exception unfortunately.\nExamining the leak canary trace, android.text.method.PasswordTransformationMethod$Visible instance has a androidx.emoji2.text.SpannableBuilder which contains an array, and one of the entries points to android.widget.TextView$ChangeWatcher instance which then shows the TextInputEditText that is leaked. It is leaked because in the same trace, you can see that the listener is saved to android.widget.TextView$SavedState instance, which I assume gets restored in a future fragment.\nI actually tried to fetch the value myself, but wasn't able to do it. The saved state did not hold the listener.\nAlthough, I have a potential solution: Delete every listener when the view is not necessary anymore.\nPotential solution:\nimport android.content.Context\nimport android.text.TextWatcher\nimport android.util.AttributeSet\nimport com.google.android.material.textfield.TextInputEditText\n\nclass ListenerAwareEditText @JvmOverloads constructor(\n context: Context,\n attrs: AttributeSet? = null,\n defStyleAttr: Int = 0\n): TextInputEditText(context, attrs, defStyleAttr) {\n\n private companion object {\n val textChangedListenersStatic: MutableList<TextWatcher> = ArrayList()\n }\n\n private val textChangedListeners: MutableList<TextWatcher> = ArrayList()\n\n /** \n * Swap the listeners added in the companion object list with the actual.\n */\n init {\n textChangedListeners.addAll(textChangedListenersStatic)\n textChangedListenersStatic.clear()\n }\n\n /**\n * Overridden to hold a reference of the listener\n */\n override fun addTextChangedListener(watcher: TextWatcher?) {\n super.addTextChangedListener(watcher)\n watcher?.let {\n // NullPointerException may happen because this method\n // can be called before the object itself is constructed,\n // from the super classes.\n // So, to hold the values, a static list in a\n // companion object was used, and then the elements\n // get transferred to the actual list, clearing the\n // static one.\n try {\n textChangedListeners.add(it)\n } catch (ignore: NullPointerException) {\n textChangedListenersStatic.add(it)\n }\n }\n }\n\n /**\n * Overridden to release the listener in our list\n */\n override fun removeTextChangedListener(watcher: TextWatcher?) 
{\n super.removeTextChangedListener(watcher)\n watcher?.let {\n // NullPointerException may happen because this method\n // can be called before the object itself is constructed,\n // from the super classes.\n // So, to hold the values, a static list in a\n // companion object was used, and then the elements\n // get transferred to the actual list, clearing the\n // static one.\n try {\n textChangedListeners.remove(it)\n } catch (ignore: NullPointerException) {\n textChangedListenersStatic.remove(it)\n }\n }\n }\n\n /**\n * Clears the text changed listeners. Call this from the\n * fragment's [onDestroyView] or Activity's [onDestroy].\n */\n fun clearTextChangedListeners() {\n textChangedListeners.forEach {\n super.removeTextChangedListener(it)\n }\n textChangedListeners.clear()\n }\n}\n\nWhat the class does: It caches all the listeners added in a list, and allows you to call clearTextChangedListeners() once it is not needed. (I tried to do this automatically but the lifecycle got confusing once fragments, nested recyclerviews etc... got involved so I left it here)\nUsage:\nSwap with your layouts' TextInputEditText with this class, and at your fragment's onDestroyView, call editText.clearTextChangedListeners().\nIt should solve your problem, however it's the Android world. It might not.\n" ]
[ 1 ]
[]
[]
[ "android", "android_fragments", "kotlin", "memory_leaks" ]
stackoverflow_0074581947_android_android_fragments_kotlin_memory_leaks.txt
Q: "vite is not recognized ..." on "npm run dev" I'm using Node.js and npm for the first time, I'm trying to get Vite working, following the tutorials and documentation. But every time I run into the problem 'vite' is not recognized as an internal or external command, operable program or batch file. I have been trying to find a solution for 4 hours now but with no results. I tried restarting pc, reinstalling node.js, several procedures to create vite project but in vain. I suppose it's my beginner's mistake, but I really don't know what to do anymore. Commands and responses I run when I try to create a vite project: npm create vite@latest >> my-portfolio >> vanilla & vanilla cd my-portfolio npm install >>resp: up to date, audited 1 package in 21s found 0 vulnerabilities npm run dev resp: > [email protected] dev > vite 'vite' is not recognized as an internal or external command, operable program or batch file. A: try to install the packages to make it work npm install or npm i A: For this error use the following command on your terminal in the present working directory of the project npm install npm run dev first, try to install a node package manager and then run npm run dev hope it will work A: yarn add vite on project folder to add vite, and run npm run dev again. remember to update your node version to 18, LTS from 17 might not support this installation. update: I try to fresh install again my Laravel 9.19, since i had update my node to version 18, npm install & npm run dev just work fine without yarn. A: According to documentation https://vitejs.dev/guide/#community-templates npm install npm run dev npx vite build A: I found myself in the same situation. The problem is vite.cmd is not in the system or user PATH variable, so it cannot be found when it is executed from your project folder. To fix it, you should temporarily add the folder where vite.cmd is in your PATH variable (either for the entire system or your user). I recommend adding it just for your user, and keep in mind you should probably remove it after you stop working on that project, because this could affect future projects using the same build tools. To do this: My PC > Properties > Advanced system settings > Click on Environment Variables (alternatively just use the start button and begin typing Environment, you should get a direct link) On "User variables" find "Path" and edit it. Add a new entry for the folder where vite.cmd is. Example "C:\dev\reactplayground\firsttest\test01\node_modules.bin" Check your project folder to find the right path. Make sure your close and open your console for this change to affect. Go back to your project root folder and run "vite build", it should work now. A: for me I've: 1 - excuted yarn add vite 2- and then npm install work fine ! A: For me I had a project I created on one computer and it had this in devDependencies: "vite": "^3.1.0" I did pnpm install and it reported everything was fine, but I was getting the error. I ran pnpm install vite and it installed it again with this: "vite": "^3.1.8" After that it worked fine. So try using npm, yarn, or pnpm to install the vite package again and see if that works. A: You need Node version 15 or higher, I had the same problem because I was using an older version of it. 
A: You need to install all the packages in package.json and run again: npm i npm run dev A: For me this worked: I changed the NODE_ENV environment variable to development ( earlier it was production - which should not be the case, as dev-dependencies won't get installed by npm install or yarn ) Here is what to make sure of before running npm install or yarn: Make sure the `NODE_ENV` environment variable is not set to `production` if you are running locally for dev purposes. A: try npm install then npm run build A: I recently faced this error and ran npm install npm run dev then the output was VITE v3.2.4 ready in 1913 ms THAT'S COOL reference LINK A: 'vite' is not recognized as an internal or external command, operable program or batch file. > vite 'vite' is not recognized as an internal or external command, operable program or batch file. try to install the packages to make it work npm install or npm i
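As another quick check that sidesteps PATH entirely, npx resolves the locally installed binary from node_modules/.bin, for example:

npx vite          # start the dev server using the local install
npx vite build    # build using the local install
ls node_modules/.bin     # macOS/Linux: confirm vite was actually installed
dir node_modules\.bin    # Windows: same check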
"vite is not recognized ..." on "npm run dev"
I'm using Node.js and npm for the first time, and I'm trying to get Vite working, following the tutorials and documentation. But every time, I run into the problem 'vite' is not recognized as an internal or external command, operable program or batch file. I have been trying to find a solution for 4 hours now but with no results. I tried restarting the PC, reinstalling Node.js, and several procedures to create a Vite project, but in vain. I suppose it's my beginner's mistake, but I really don't know what to do anymore. Commands and responses I run when I try to create a vite project: npm create vite@latest >> my-portfolio >> vanilla & vanilla cd my-portfolio npm install >>resp: up to date, audited 1 package in 21s found 0 vulnerabilities npm run dev resp: > [email protected] dev > vite 'vite' is not recognized as an internal or external command, operable program or batch file.
[ "try to install the packages to make it work\nnpm install or npm i\n\n", "For this error use the following command on your terminal in the present working directory of the project\nnpm install\nnpm run dev\n\nfirst, try to install a node package manager and then run npm run dev hope it will work\n", "yarn add vite\n\non project folder to add vite,\nand run\nnpm run dev\n\nagain.\n\nremember to update your node version to 18, LTS from 17 might not support this installation.\n\nupdate:\nI try to fresh install again my Laravel 9.19, since i had update my node to version 18, npm install & npm run dev just work fine without yarn.\n", "According to documentation https://vitejs.dev/guide/#community-templates\nnpm install\nnpm run dev\nnpx vite build\n\n", "I found myself in the same situation.\nThe problem is vite.cmd is not in the system or user PATH variable, so it cannot be found when it is executed from your project folder.\nTo fix it, you should temporarily add the folder where vite.cmd is in your PATH variable (either for the entire system or your user). I recommend adding it just for your user, and keep in mind you should probably remove it after you stop working on that project, because this could affect future projects using the same build tools.\nTo do this:\n\nMy PC > Properties > Advanced system settings > Click on Environment Variables (alternatively just use the start button and begin typing Environment, you should get a direct link)\nOn \"User variables\" find \"Path\" and edit it.\nAdd a new entry for the folder where vite.cmd is. Example \"C:\\dev\\reactplayground\\firsttest\\test01\\node_modules.bin\" Check your project folder to find the right path.\nMake sure your close and open your console for this change to affect.\nGo back to your project root folder and run \"vite build\", it should work now.\n\n", "for me I've:\n1 - excuted yarn add vite \n2- and then npm install\nwork fine !\n", "For me I had a project I created on one computer and it had this in devDependencies:\n\"vite\": \"^3.1.0\"\n\nI did pnpm install and it reported everything was fine, but I was getting the error. I ran pnpm install vite and it installed it again with this:\n\"vite\": \"^3.1.8\"\n\nAfter that it worked fine. So try using npm, yarn, or pnpm to install the vite package again and see if that works.\n", "You need Node version 15 or higher, I had the same problem because I was using an older version of it.\n", "Needs to install all the packages in package.json and run again\nnpm i\nnpm run dev\n\n", "For me this worked:\nI changed NODE_ENV environment variable to development ( earlier it was production - which should not be the case, as dev-dependencies won't get installed by npm install or yarn )\nHere is what to make sure before running npm install or yarn:\n Make sure `NODE_ENV` environment variable is not set to `production` if you running locally for dev purpose.\n\n\n \n\n", "try npm install\nthen npm run build\n", "Recently faced this error and I run\nnpm install\n\nnpm run dev\n\nthen the output was\nVITE v3.2.4 ready in 1913 ms\nTHAT'S COOL \nreference LINK\n", "'vite' is not recognized as an internal or external command, operable program or batch file.\n> vite\n'vite' is not recognized as an internal or external command,\noperable program or batch file.\ntry to install the packages to make it work\nnpm install or npm i\n" ]
[ 27, 14, 2, 2, 1, 1, 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "node.js", "npm" ]
stackoverflow_0071844271_node.js_npm.txt
Q: Instant search, facetFilter does not work with tilde (~) I'm trying to implement a Typesense search with instantsearch, and trying to use facetFilters, and it works very well. The problem is that the names of some items that were sent to Typesense contain a tilde (~), which is very common in Brazilian Portuguese, the language I'm working in. But whenever I need to filter an element with a tilde, such as "composição", it doesn't return any results. How can I include the tilde in my searches? I hope someone can help me solve this problem. A: Algolia performs a series of normalizations on queries that includes removing diacritics like "~": https://www.algolia.com/doc/guides/managing-results/optimize-search-results/handling-natural-languages-nlp/in-depth/normalization/ You may have to modify some of the normalization settings (like keepDiacriticsOnCharacters) to get the result you are looking for.
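If you are on Algolia itself (the question mentions Typesense, whose relevant settings differ), that setting can be changed through the API; a hypothetical sketch with the algoliasearch v4 client, where the index name and character list are placeholders:

const algoliasearch = require('algoliasearch');
const client = algoliasearch('APP_ID', 'ADMIN_API_KEY');
const index = client.initIndex('products');
// Keep these accented characters intact instead of normalizing them away.
index.setSettings({ keepDiacriticsOnCharacters: 'ãõçáéíóú' }).then(() => {
  console.log('settings saved');
});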
Instant search, facetFilter does not work with tilde (~)
I'm trying to implement a Typesense search with instantsearch, and trying to use facetFilters, and it works very well. The problem is that the names of some items that were sent to Typesense contain a tilde (~), which is very common in Brazilian Portuguese, the language I'm working in. But whenever I need to filter an element with a tilde, such as "composição", it doesn't return any results. How can I include the tilde in my searches? I hope someone can help me solve this problem.
[ "Algolia performs a series of normalizations on queries that includes removing diacritics like \"~\":\nhttps://www.algolia.com/doc/guides/managing-results/optimize-search-results/handling-natural-languages-nlp/in-depth/normalization/\nYou may have to modify some of the normalization settings (like keepDiacriticsOnCharacters) to get the result you are looking for.\n" ]
[ 0 ]
[]
[]
[ "algolia", "instantsearch", "typesense" ]
stackoverflow_0074225138_algolia_instantsearch_typesense.txt
Q: Shallow Copy, Map and Spread Operator I am trying to understand how shallow copy works with map and without map on an array. const arr = [ { name: 'example', inner: { title: 'test' } } ] const copy = arr.map(a => ({ ...a })) copy[0].name = 'name1' copy[0].inner.title = 'title1' Updating the name prop on the copy will not affect the original array arr. However, updating inner.title will modify both copy and arr. As far as I understand, in a shallow copy only top/first-level props are copied; nested objects still share the same reference. However, if I create a shallow copy without using map, then even the first/top-level props seem to share the same reference, or at least updating first/top-level props on the copy will affect the original one. // same arr above const copy2 = [...arr]; copy2[0].name = 'name2'; copy2[0].inner.title = 'title2'; Now, the original arr's name value is 'name2' and its inner.title value is 'title2'. Why is that? What is the exact difference between the two code snippets, and what role does map play in the shallow copy? A: It's easiest to start with your copy2 as that's simpler. When you do this: const copy2 = [...arr]; that is, use the "spread" operator on an array, you make exactly a "shallow copy" of the array. This means that copy2 and arr are references to different arrays (arrays that are in different memory locations, if you want to think of it that way, although memory management is just an implementation detail of the JS engine rather than something JS developers have to think about), but their contents, if they are objects (which they are here), are still referencing the same things. That is, arr[0] and copy2[0] are literally the same object - in the same memory location, if you want to think of it that way - and mutating one will mutate the other just the same (as your code snippet and its result proves). copy though is doing something rather different: const copy = arr.map(a => ({ ...a })) While you're also using the "spread" operator here to again make "shallow copies", what it is that you're copying is different. It's not the array you're copying - it's a, which stands for an element of the array. (This affects all elements because that's what map does.) So what you're doing here is making a new array called copy, that isn't a reference to any previously existing array, whose elements are shallow copies of the corresponding elements in arr. Which is why changing copy[0].name doesn't affect arr[0].name, because those 2 objects are references to different things - due to the shallow copy. TLDR: the key difference is that in copy2 it's the whole array that you've made a shallow copy of, and because it's shallow, mutations to its object elements mutate the elements of the original. Whereas with copy you've actually shallowly copied each individual element in it, thereby going "one level deeper" if you like.
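You can see the difference directly by comparing references:

const arr = [{ name: 'example', inner: { title: 'test' } }];
const copy = arr.map(a => ({ ...a }));
const copy2 = [...arr];

console.log(copy2[0] === arr[0]); // true - spreading copied the array, not its elements
console.log(copy[0] === arr[0]); // false - map built a brand new object per element
console.log(copy[0].inner === arr[0].inner); // true - nested objects are still shared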
Shallow Copy, Map and Spread Operator
I am trying to understand how shallow copy works with map and without map on an array. const arr = [ { name: 'example', inner: { title: 'test' } } ] const copy = arr.map(a => ({ ...a })) copy[0].name = 'name1' copy[0].inner.title = 'title1' Updating the name prop on the copy will not affect the original array arr. However, updating inner.title will modify both copy and arr. As far as I understand, in a shallow copy only top/first-level props are copied; nested objects still share the same reference. However, if I create a shallow copy without using map, then even the first/top-level props seem to share the same reference, or at least updating first/top-level props on the copy will affect the original one. // same arr above const copy2 = [...arr]; copy2[0].name = 'name2'; copy2[0].inner.title = 'title2'; Now, the original arr's name value is 'name2' and its inner.title value is 'title2'. Why is that? What is the exact difference between the two code snippets, and what role does map play in the shallow copy?
[ "It's easiest to start with your copy2 as that's simpler. When you do this:\nconst copy2 = [...arr];\n\nthat is use the \"spread\" operator on an array, you make exactly a \"shallow copy\" of the array. This means that copy2 and arr are references to different arrays (arrays that are in different memory locations, if you want to think of it that way, although memory management is just an implementation detail of the JS engine rather than something JS developers have to think about), but their contents, if they are objects (which they are here) are still referencing the same things. That is arr[0] and copy2[0] are literally the same object - in the same memory location, if you want to think of it that way - and mutating one will mutate the other just the same (as your code snippet and its result proves).\ncopy though is doing something rather different:\nconst copy = arr.map(a => ({\n ...a\n}))\n\nWhile you're also using the \"spread\" operator to here to again make \"shallow copies\", what it is that you're copying is different. It's not the array you're copying - it's a, which stands for an element of the array. (This affects all elements because that's what map does.) So what you're doing here is making a new array called copy, that isn't a reference to any previously existing array, whose elements are shallow copies of the corresponding elements in a. Which is why changing copy[0].name doesn't affect arr[0].name, because those 2 objects are references to different things - due to the shallow copy.\nTLDR: the key difference is that in copy2 it's the whole array that you've made a shallow copy of, and because it's shallow, mutations to its object elements mutate the elements of the original. Whereas with copy you've actually shallowly copied each individual element in in, thereby going \"one level deeper\" if you like.\n" ]
[ 3 ]
[]
[]
[ "javascript", "shallow_copy" ]
stackoverflow_0074661551_javascript_shallow_copy.txt
Q: When is GC run for the first time? As far as I understand, by default the GC runs every time the heap doubles relative to its size after the previous GC cycle. But the GC must run at least once before it can continue that way. When does it run for the first time? A: It seems to be an implementation detail, but for reference, as of go1.19 the source code says:
 // heapMinimum is the minimum heap size at which to trigger GC.
 // ...
 // During initialization this is set to 4MB*GOGC/100. ...
 heapMinimum uint64

 // defaultHeapMinimum is the value of heapMinimum for GOGC==100.
 defaultHeapMinimum = (goexperiment.HeapMinimum512KiBInt)*(512<<10) +
 (1-goexperiment.HeapMinimum512KiBInt)*(4<<20)

 // ... 

 c.heapMinimum = defaultHeapMinimum
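For anyone who wants to observe the first collection empirically, here is a minimal sketch; the exact trigger point is an implementation detail that may vary by Go version and GOGC setting, so treat the ~4MB figure as indicative only:
package main

import (
    "fmt"
    "runtime"
)

func main() {
    var ms runtime.MemStats
    runtime.ReadMemStats(&ms)
    fmt.Println("GC cycles at startup:", ms.NumGC)

    // Allocate well past the ~4MB initial trigger (default GOGC=100).
    var data [][]byte
    for i := 0; i < 32; i++ {
        data = append(data, make([]byte, 1<<20)) // 1 MiB each
    }

    runtime.ReadMemStats(&ms)
    fmt.Println("GC cycles after allocating ~32MiB:", ms.NumGC)
    _ = data
}
Running the program with GODEBUG=gctrace=1 also prints one line per collection, which shows when the first cycle fires.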
When is GC run for the first time?
As far as I understand, by default the GC runs every time the heap doubles relative to its size after the previous GC cycle. But the GC must run at least once before it can continue that way. When does it run for the first time?
[ "It seems to be an implementation detail, but for reference to others, at the moment of go1.19 source code says:\n // heapMinimum is the minimum heap size at which to trigger GC.\n // ...\n // During initialization this is set to 4MB*GOGC/100. ...\n heapMinimum uint64\n\n // defaultHeapMinimum is the value of heapMinimum for GOGC==100.\n defaultHeapMinimum = (goexperiment.HeapMinimum512KiBInt)*(512<<10) +\n (1-goexperiment.HeapMinimum512KiBInt)*(4<<20)\n\n // ... \n\n c.heapMinimum = defaultHeapMinimum\n\n" ]
[ 0 ]
[]
[]
[ "garbage_collection", "go" ]
stackoverflow_0074661333_garbage_collection_go.txt
Q: Including Advanced Custom Fields (ACF) in a Custom Search - Wordpress I have created / mashed together a cool search through a specified category for my blog. Using Ajax to load the results without the reload. When I search - no matter the term I search. I receive all posts. I use ACF for the content & the author. I also reference products using the field featured_product_title. These fields are used within my page like this: <?php if ( have_rows('case_study_page_content') ): ?> <?php while (have_rows('case_study_page_content')): the_row(); $title = get_sub_field('title'); $author = get_sub_field('author'); $content = get_sub_field('content'); ?> <div class=""> <h1 class=""><?php echo $title; ?></h3> <h3 class=""><?php echo $author; ?></h4> <p><?php echo $content; ?></p> </div> <?php endwhile; ?> <?php endif; ?> <?php while (have_rows('featured_products')): the_row(); $featured_product_title = get_sub_field('featured_product_title', 'featured_products'); ?> With these in mind my current search looks like this (functions.php): // CASE STUDY SEARCH function my_search(){ $args = array( 'orderby' => 'date', 'order' => $_POST['date'] ); if( isset( $_POST['s'] ) ): /* $args = array( 'post_type' => 'post', 'posts_per_page' => -1, 's' => $_POST['s'] ); */ if( have_rows('case_study_page_content') ): while( have_rows('case_study_page_content') ) : the_row(); $title = get_sub_field('title'); $author = get_sub_field('author'); $content = get_sub_field('content'); $args = array( 'post_type' => 'post', 'posts_per_page' => -1, 'meta_query' => array( 'relation' => 'OR', array( 'key' => $title, 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ), array( 'key' => $author, 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ), array( 'key' => $content, 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ) ) ); endwhile; endif; $query = new WP_Query($args); if( $query->have_posts() ): while( $query->have_posts() ): $query->the_post(); echo "<article class=\"post-box " . get_post_class() . "\">"; echo "<a href=\"" . get_the_permalink() . "\" class=\"box-link\"></a>"; $url = wp_get_attachment_url( get_post_thumbnail_id($post->ID), 'thumbnail' ); echo "<img src=\"" . $url . "\" />"; echo "<h2>" . get_the_title() . "</h2>"; $case_study = get_field('case_study_page_content'); if( $case_study ): while( have_rows('case_study_page_content') ): the_row(); $case_study_author = get_sub_field('author'); echo "<p>" . $case_study_author . "</p>"; endwhile; endif; echo "</article>"; endwhile; wp_reset_postdata(); else : echo 'No case studies found'; endif; die(); endif; } add_action('wp_ajax_customsearch', 'my_search'); add_action('wp_ajax_nopriv_customsearch', 'my_search'); I guess my question is how do I add ACF's into the $args array...? Please can someone help me successfully compare the 'key' to the 'value' in my WP_Query($args)? Thanks everyone, Jason. 
A: Try this, though I'm not certain it will work: // args $args = array( 'numberposts' => -1, 'post_type' => 'post', 'meta_query' => array( 'relation' => 'OR', array( 'key' => 'case_study_page_content_title', 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ), array( 'key' => 'case_study_page_content_author', 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ), array( 'key' => 'case_study_page_content_content', 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ) ) ); A: function custom_search_query( $query ) { if ( !is_admin() && $query->is_search ) { $result = $query->query_vars['s']; $query->query_vars['s'] = ''; $query->set('meta_query', array('relation' => 'OR', array( 'key' => 'acf_name', // ACF FIELD NAME OR POST META 'value' => $result, 'compare' => 'LIKE', ) )); $query->set('post_type', 'post'); // optional POST TYPE } } add_filter( 'pre_get_posts', 'custom_search_query');
Including Advanced Custom Fields (ACF) in a Custom Search - Wordpress
I have created / mashed together a cool search through a specified category for my blog. Using Ajax to load the results without the reload. When I search - no matter the term I search. I receive all posts. I use ACF for the content & the author. I also reference products using the field featured_product_title. These fields are used within my page like this: <?php if ( have_rows('case_study_page_content') ): ?> <?php while (have_rows('case_study_page_content')): the_row(); $title = get_sub_field('title'); $author = get_sub_field('author'); $content = get_sub_field('content'); ?> <div class=""> <h1 class=""><?php echo $title; ?></h3> <h3 class=""><?php echo $author; ?></h4> <p><?php echo $content; ?></p> </div> <?php endwhile; ?> <?php endif; ?> <?php while (have_rows('featured_products')): the_row(); $featured_product_title = get_sub_field('featured_product_title', 'featured_products'); ?> With these in mind my current search looks like this (functions.php): // CASE STUDY SEARCH function my_search(){ $args = array( 'orderby' => 'date', 'order' => $_POST['date'] ); if( isset( $_POST['s'] ) ): /* $args = array( 'post_type' => 'post', 'posts_per_page' => -1, 's' => $_POST['s'] ); */ if( have_rows('case_study_page_content') ): while( have_rows('case_study_page_content') ) : the_row(); $title = get_sub_field('title'); $author = get_sub_field('author'); $content = get_sub_field('content'); $args = array( 'post_type' => 'post', 'posts_per_page' => -1, 'meta_query' => array( 'relation' => 'OR', array( 'key' => $title, 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ), array( 'key' => $author, 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ), array( 'key' => $content, 'compare' => 'like', 'value' => '%'.$_POST['s'].'%', ) ) ); endwhile; endif; $query = new WP_Query($args); if( $query->have_posts() ): while( $query->have_posts() ): $query->the_post(); echo "<article class=\"post-box " . get_post_class() . "\">"; echo "<a href=\"" . get_the_permalink() . "\" class=\"box-link\"></a>"; $url = wp_get_attachment_url( get_post_thumbnail_id($post->ID), 'thumbnail' ); echo "<img src=\"" . $url . "\" />"; echo "<h2>" . get_the_title() . "</h2>"; $case_study = get_field('case_study_page_content'); if( $case_study ): while( have_rows('case_study_page_content') ): the_row(); $case_study_author = get_sub_field('author'); echo "<p>" . $case_study_author . "</p>"; endwhile; endif; echo "</article>"; endwhile; wp_reset_postdata(); else : echo 'No case studies found'; endif; die(); endif; } add_action('wp_ajax_customsearch', 'my_search'); add_action('wp_ajax_nopriv_customsearch', 'my_search'); I guess my question is how do I add ACF's into the $args array...? Please can someone help me successfully compare the 'key' to the 'value' in my WP_Query($args)? Thanks everyone, Jason.
[ "test this but without conviction\n// args\n$args = array(\n 'numberposts' => -1,\n 'post_type' => 'post',\n 'meta_query' => array(\n 'relation' => 'OR',\n array(\n 'key' => 'case_study_page_content_title',\n 'compare' => 'like',\n 'value' => '%'.$_POST['s'].'',\n ),\n array(\n 'key' => 'case_study_page_content_author',\n 'compare' => 'like',\n 'value' => '%'.$_POST['s'].'%',\n ),\n array(\n 'key' => 'case_study_page_content_content',\n 'compare' => 'like',\n 'value' => '%'.$_POST['s'].'%',\n )\n )\n);\n\n", "function custom_search_query( $query ) {\n if ( !is_admin() && $query->is_search ) {\n $result = $query->query_vars['s'];\n $query->query_vars['s'] = '';\n $query->set('meta_query', array('relation' => 'OR',\n array(\n 'key' => 'acf_name', // ACF FIELD NAME OR POST META\n 'value' => $result,\n 'compare' => 'LIKE',\n )\n ));\n $query->set('post_type', 'post'); // optional POST TYPE\n }\n}\nadd_filter( 'pre_get_posts', 'custom_search_query');\n\n" ]
[ 0, 0 ]
[]
[]
[ "ajax", "php", "search", "wordpress" ]
stackoverflow_0053193101_ajax_php_search_wordpress.txt
Q: Dynamic Importing with Pyinstaller Executable I’m trying to write a script that dynamically imports and uses any modules a user places in a folder. The dynamic importing works fine when I’m running it via python, but when I try to compile it into a Pyinstaller executable, it breaks down and throws me a ModuleNotFoundError, saying it can't find a module with the same name as the folder the modules are placed in. The executable sits alongside this folder, which contains all the modules to be dynamically imported, so my import statements look like __import__("FOLDERNAME.MODULENAME"). The script must be able to run the modules dropped in this folder without being recompiled. What's strange is that the ModuleNotFoundError says No module named 'FOLDERNAME', despite that just being the name of the folder containing the modules; I'd expect it to complain about No module named 'FOLDERNAME.MODULENAME' instead. In my googling, I found this question (pyinstaller: adding dynamically loaded modules), which is pretty similar, but the answer they provided from the docs doesn’t really help. How do I give additional files on the command line if I don’t know what files are going to be in the folder in the first place? That kind of defeats the purpose of dynamic importing. I've attempted to use the hidden-import command line flag, but the compiler output said Hidden import '[X]' not found. Maybe I'm just using it wrong? And I have no idea how to modify the spec file or write a hook file to do what I need. Any help would be greatly appreciated. A: I was working on similar functionality to implement a Plugin Architecture and ran into the same issue. Quoting @Gao Yuan from a similar question: Pyinstaller (currently v 3.4) can't detect imports like importlib.import_module(). The issue and solutions are detailed in Pyinstaller's documentation, which I pasted below as an entry point. But of course there is always a way. Instead you can use importlib.util.spec_from_file_location to load and then compile the module. Minimal working example interface.py # from dependency import VARIABLE # from PySide6.QtCore import Qt def hello_world(): print(f"this is a plugin calling QT {Qt.AlignmentFlag.AlignAbsolute}") print(f"this is a plugin calling DEPENDENCY {VARIABLE}") cli.py import sys import types from pprint import pprint import importlib.util import sys if __name__ == "__main__": module_name = "dependency" module_file = "plugins/abcplugin/dependency.py" if spec:=importlib.util.spec_from_file_location(module_name, module_file): dependency = importlib.util.module_from_spec(spec) sys.modules[module_name] = dependency spec.loader.exec_module(dependency) module_name = "interface" module_file = "plugins/abcplugin/interface.py" if spec:=importlib.util.spec_from_file_location(module_name, module_file): interface = importlib.util.module_from_spec(spec) sys.modules[module_name] = interface spec.loader.exec_module(interface) sys.modules[module_name].hello_world() project structure cli.exe plugins abcplugin __init__.py interface.py dependency.py complus __init__.py ... Thumb Rules Plugins must always be relative to the .exe As you can notice, I commented out # from dependency import VARIABLE in line one of interface.py. If your scripts depend on scripts in the same plugin, then you must load dependency.py before loading interface.py. You can then un-comment the line. 
In the pyinstaller.spec file you need to add hiddenimports, in this case PySide6, and then un-comment # from PySide6.QtCore import Qt Always use absolute imports when designing a plugin, in reference to your project root folder. You can then set the module names to plugins.abcplugin.interface and plugins.abcplugin.dependency, and also update from dependency import VARIABLE to from plugins.abcplugin.dependency import VARIABLE Hope people find this useful, cheers!!
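For reference, this is a hedged sketch of what that spec change can look like. A generated .spec file contains an Analysis(...) call, and hidden imports go into its hiddenimports list; the script and module names below are illustrative, and the other generated arguments are omitted:
# excerpt of a generated PyInstaller .spec file (other arguments omitted)
a = Analysis(
    ['cli.py'],
    hiddenimports=['PySide6.QtCore'],  # modules only loaded at runtime go here
)
The same effect should be achievable on the command line with pyinstaller --hidden-import PySide6.QtCore cli.py.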
Dynamic Importing with Pyinstaller Executable
I’m trying to write a script that dynamically imports and uses any modules a user places in a folder. The dynamic importing works fine when I’m running it via python, but when I try to compile it into a Pyinstaller executable, it breaks down and throws me a ModuleNotFoundError, saying it can't find a module with the same name as the folder the modules are placed in. The executable sits alongside this folder, which contains all the modules to be dynamically imported, so my import statements look like __import__("FOLDERNAME.MODULENAME"). The script must be able to run the modules dropped in this folder without being recompiled. What's strange is that the ModuleNotFoundError says No module named 'FOLDERNAME', despite that just being the name of the folder containing the modules, I'd expect it to complain about No module named 'FOLDERNAME.MODULENAME' instead. In my googling, I found this question (pyinstaller: adding dynamically loaded modules), which is pretty similar, but the answer they provided from the docs doesn’t really help. How do I give additional files on the command line if I don’t know what files are going to be in the folder in the first place? That kind of beats the purpose of dynamic importing. I've attempted to use the hidden-import command line flag, but the compiler output said Hidden import '[X]' not found. Maybe I'm just using it wrong? And I have no idea how to modify the spec file or write a hook file to do what I need. Any help would be greatly appreciated.
[ "I was working on a similar functionality to implement a Plugin Architecture and ran into the same issue. Quoting @Gao Yuan from a similar question :-\n\nPyinstaller (currently v 3.4) can't detect imports like importlib.import_module(). The issue and solutions are detailed in Pyinstaller's documentation, which I pasted below as an entry point.\n\nBut of-course there is always a way. Instead you can use importlib.util.spec_from_file_location to load and then compile the module\nMinimum wokring example\niterface.py\n# from dependency import VARIABLE\n# from PySide6.QtCore import Qt\n\ndef hello_world():\n print(f\"this is a plugin calling QT {Qt.AlignmentFlag.AlignAbsolute}\")\n print(f\"this is a plugin calling DEPENDENCY {VARIABLE}\")\n\ncli.py\nimport sys\nimport types\nfrom pprint import pprint\nimport importlib.util\nimport sys\nif __name__ == \"__main__\":\n module_name = \"dependency\"\n module_file = \"plugins/abcplugin/dependency.py\"\n if spec:=importlib.util.spec_from_file_location(module_name, module_file):\n dependency = importlib.util.module_from_spec(spec)\n sys.modules[module_name] = dependency\n spec.loader.exec_module(dependency)\n\n module_name = \"interface\"\n module_file = \"plugins/abcplugin/interface.py\"\n if spec:=importlib.util.spec_from_file_location(module_name, module_file):\n interface = importlib.util.module_from_spec(spec)\n sys.modules[module_name] = interface\n spec.loader.exec_module(interface)\n\n sys.modules[module_name].hello_world()\n\nproject structure\ncli.exe\nplugins\n abcplugin\n __init__.py\n interface.py\n dependency.py\n\n complus\n __init__.py\n ...\n\nThumb Rules\n\nPlugin must always be relative to .exe\nAs you can notice I commented out # from dependency import VARIABLE in line one of interface.py. If you scripts depend on scripts in the same plugin, then you must load dependency.py before loading interface.py. You can then un-comment the line.\nIn pyinstaller.spec file you need to add hiddenimports in this case PySide6 and then un-comment # from PySide6.QtCore import Qt\nAlways use absolute imports when designing a plugin in reference to your project root folder. You can then set the module name to plugins.abcplugin.interface and plugins.abcplugin.dependency and also update from dependency import VARIABLE to from plugins.abcplugin.dependency import VARIABLE \n\nHope people find this usefull, cheers!!\n" ]
[ 0 ]
[]
[]
[ "dynamic_import", "pyinstaller", "python" ]
stackoverflow_0071162951_dynamic_import_pyinstaller_python.txt
Q: Merge lists within dictionaries with the same keys I have the following three dictionaries within a list like so: dict1 = {'key1':'x', 'key2':['one', 'two', 'three']} dict2 = {'key1':'x', 'key2':['four', 'five', 'six']} dict3 = {'key1':'y', 'key2':['one', 'two', 'three']} list = [dict1, dict2, dict3] I'd like to merge the dictionaries that have the same value for key1 into a single dictionary with merged values (lists in this case) for key2 like so: new_dict = {'key1':'x', 'key2':['one', 'two', 'three', 'four', 'five', 'six']} list = [new_dict, dict3] I've come up with a very brutish solution riddled with hard codes and loops. I'd like to employ some higher-order functions but I'm new to those. A: With the help of itertools.groupby and itertools.chain, your goal can be achieved in a single line: from itertools import groupby from itertools import chain dict1 = {'key1':'x', 'key2':['one', 'two', 'three']} dict2 = {'key1':'x', 'key2':['four', 'five', 'six']} dict3 = {'key1':'y', 'key2':['one', 'two', 'three']} list_of_dicts = [dict1, dict2, dict3] result = [{'key1': k, 'key2': list(chain(*[x['key2'] for x in v]))} for k, v in groupby(list_of_dicts, lambda x: x['key1'])] print(result) [{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}] A: Build an intermediate dict that uses key1 as a key to aggregate the key2 lists, and then build the final list of dicts out of that: >>> my_dicts = [ ... {'key1':'x', 'key2':['one', 'two', 'three']}, ... {'key1':'x', 'key2':['four', 'five', 'six']}, ... {'key1':'y', 'key2':['one', 'two', 'three']}, ... ] >>> agg_dict = {} >>> for d in my_dicts: ... agg_dict.setdefault(d['key1'], []).extend(d['key2']) ... >>> [{'key1': key1, 'key2': key2} for key1, key2 in agg_dict.items()] [{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}]
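One caveat about the groupby approach: itertools.groupby only merges consecutive items with equal keys, and the example above only works because the two 'x' dicts happen to be adjacent. For arbitrary input, sort by the grouping key first; a minimal sketch using the same names as the first answer:
list_of_dicts = [dict1, dict2, dict3]
list_of_dicts.sort(key=lambda d: d['key1'])  # make equal 'key1' values adjacent
# ...then apply the groupby/chain one-liner from the first answer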
Merge lists within dictionaries with the same keys
I have the following three dictionaries within a list like so: dict1 = {'key1':'x', 'key2':['one', 'two', 'three']} dict2 = {'key1':'x', 'key2':['four', 'five', 'six']} dict3 = {'key1':'y', 'key2':['one', 'two', 'three']} list = [dict1, dict2, dict3] I'd like to merge the dictionaries that have the same value for key1 into a single dictionary with merged values (lists in this case) for key2 like so: new_dict = {'key1':'x', 'key2':['one', 'two', 'three', 'four', 'five', 'six']} list = [new_dict, dict3] I've come up with a very brutish solution riddled with hard codes and loops. I'd like to employ some higher-order functions but I'm new to those.
[ "With the help of itertools.groupby and itertools.chain, your goal can be achieved in a single line:\nfrom itertools import groupby\nfrom itertools import chain\n\ndict1 = {'key1':'x', 'key2':['one', 'two', 'three']}\ndict2 = {'key1':'x', 'key2':['four', 'five', 'six']}\ndict3 = {'key1':'y', 'key2':['one', 'two', 'three']}\nlist_of_dicts = [dict1, dict2, dict3]\n\nresult = [{'key1': k, 'key2': list(chain(*[x['key2'] for x in v]))} for k, v in groupby(list_of_dicts, lambda x: x['key1'])]\n\nprint(result)\n[{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}]\n\n", "Build an intermediate dict that uses key1 as a key to aggregate the key2 lists, and then build the final list of dicts out of that:\n>>> my_dicts = [\n... {'key1':'x', 'key2':['one', 'two', 'three']},\n... {'key1':'x', 'key2':['four', 'five', 'six']},\n... {'key1':'y', 'key2':['one', 'two', 'three']},\n... ]\n>>> agg_dict = {}\n>>> for d in my_dicts:\n... agg_dict.setdefault(d['key1'], []).extend(d['key2'])\n...\n>>> [{'key1': key1, 'key2': key2} for key1, key2 in agg_dict.items()]\n[{'key1': 'x', 'key2': ['one', 'two', 'three', 'four', 'five', 'six']}, {'key1': 'y', 'key2': ['one', 'two', 'three']}]\n\n" ]
[ 2, 0 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074661388_dictionary_list_python.txt
Q: Redis NodeJs server error,client is closed I am developing an application where chats has to cached and monitored, currently it is an local application where i have installed redis and redis-cli. The problem i'm facing is (node:5368) UnhandledPromiseRejectionWarning: Error: The client is closed Attaching code snippet below //redis setup const redis = require('redis'); const client = redis.createClient()//kept blank so that default options are available //runs when client connects io.on("connect", function (socket) { //this is client side socket //console.log("a new user connected..."); socket.on("join", function ({ name, room }, callback) { //console.log(name, room); const { msg, user } = addUser({ id: socket.id, name, room }); // console.log(user); if (msg) return callback(msg); //accessible in frontend //emit to all users socket.emit("message", { user: "Admin", text: `Welcome to the room ${user.name}`, }); //emit to all users except current one socket.broadcast .to(user.room) .emit("message", { user: "Admin", text: `${user.name} has joined` }); socket.join(user.room); //pass the room that user wants to join //get all users in the room io.to(user.room).emit("roomData", { room: user.room, users: getUsersInRoom(user.room), }); callback(); }); //end of join //user generated messages socket.on("sendMessage", async(message, callback)=>{ const user = getUser(socket.id); //this is where we can store the messages in redis await client.set("messages",message); io.to(user.room).emit("message", { user: user.name, text: message }); console.log(client.get('messages')); callback(); }); //end of sendMessage //when user disconnects socket.on("disconnect", function () { const user = removeUser(socket.id); if (user) { console.log(client) io.to(user.room).emit("message", { user: "Admin", text: `${user.name} has left `, }); } }); //end of disconnect I am getting above error when user sends a message to the room or when socket.on("sendMessage") is called. Where am I going wrong? Thank you in advance. A: You should await client.connect() before using the client A: In node-redis V4, the client does not automatically connect to the server, you need to run .connect() before any command, or you will receive error ClientClosedError: The client is closed. import { createClient } from 'redis'; const client = createClient(); await client.connect(); Or you can use legacy mode to preserve the backwards compatibility const client = createClient({ legacyMode: true }); A: client.connect() returns a promise. You gotta use .then() because you cannot call await outside of a function. const client = createClient(); client.connect().then(() => { ... }) A: You cannot call await outside of a function. const redis = require('redis'); const client = redis.createClient(); client .connect() .then(async (res) => { console.log('connected'); // Write your own code here // Example const value = await client.lRange('data', 0, -1); console.log(value.length); console.log(value); client.quit(); }) .catch((err) => { console.log('err happened' + err); }); A: I was having a similar problem and was able to change the connect code as follows. const client = redis.createClient({ legacyMode: true, PORT: 5001 }) client.connect().catch(console.error)
Redis NodeJs server error,client is closed
I am developing an application where chats has to cached and monitored, currently it is an local application where i have installed redis and redis-cli. The problem i'm facing is (node:5368) UnhandledPromiseRejectionWarning: Error: The client is closed Attaching code snippet below //redis setup const redis = require('redis'); const client = redis.createClient()//kept blank so that default options are available //runs when client connects io.on("connect", function (socket) { //this is client side socket //console.log("a new user connected..."); socket.on("join", function ({ name, room }, callback) { //console.log(name, room); const { msg, user } = addUser({ id: socket.id, name, room }); // console.log(user); if (msg) return callback(msg); //accessible in frontend //emit to all users socket.emit("message", { user: "Admin", text: `Welcome to the room ${user.name}`, }); //emit to all users except current one socket.broadcast .to(user.room) .emit("message", { user: "Admin", text: `${user.name} has joined` }); socket.join(user.room); //pass the room that user wants to join //get all users in the room io.to(user.room).emit("roomData", { room: user.room, users: getUsersInRoom(user.room), }); callback(); }); //end of join //user generated messages socket.on("sendMessage", async(message, callback)=>{ const user = getUser(socket.id); //this is where we can store the messages in redis await client.set("messages",message); io.to(user.room).emit("message", { user: user.name, text: message }); console.log(client.get('messages')); callback(); }); //end of sendMessage //when user disconnects socket.on("disconnect", function () { const user = removeUser(socket.id); if (user) { console.log(client) io.to(user.room).emit("message", { user: "Admin", text: `${user.name} has left `, }); } }); //end of disconnect I am getting above error when user sends a message to the room or when socket.on("sendMessage") is called. Where am I going wrong? Thank you in advance.
[ "You should await client.connect() before using the client\n", "In node-redis V4, the client does not automatically connect to the server, you need to run .connect() before any command, or you will receive error ClientClosedError: The client is closed.\nimport { createClient } from 'redis';\n\nconst client = createClient();\n\nawait client.connect();\n\nOr you can use legacy mode to preserve the backwards compatibility\nconst client = createClient({\n legacyMode: true\n});\n\n", "client.connect() returns a promise. You gotta use .then() because you cannot call await outside of a function.\nconst client = createClient(); \nclient.connect().then(() => {\n ...\n})\n\n", "You cannot call await outside of a function.\nconst redis = require('redis');\nconst client = redis.createClient();\n\nclient\n .connect()\n .then(async (res) => {\n console.log('connected');\n // Write your own code here\n\n // Example\n const value = await client.lRange('data', 0, -1);\n console.log(value.length);\n console.log(value);\n client.quit();\n })\n .catch((err) => {\n console.log('err happened' + err);\n });\n\n", "I was having a similar problem and was able to change the connect code as follows.\nconst client = redis.createClient({\n legacyMode: true,\n PORT: 5001\n})\nclient.connect().catch(console.error)\n\n" ]
[ 44, 27, 2, 1, 1 ]
[]
[]
[ "node.js", "node_redis", "redis", "sockets" ]
stackoverflow_0070185436_node.js_node_redis_redis_sockets.txt
Q: TypeScript: get the type of a property from an object I am trying to write a generic React form reducer. For that, "value" must have the type of T at the key given by "name". I'm not sure I'm explaining this clearly, so here is an example: Example: type RandomObject = { test1: number, test2: string, test3: string[] } type ActionChangeInput<T> = { name: keyof T value: typeof T["name"] // Get type of name } I want value to be a number because the value of name is "test1" and test1's type is number. const a: ActionChangeInput<RandomObject> = { name: "test1", // type is "test1" | "test2" | "test3" value: 12931 // Expect number here } Sorry if it's not clear, but it's hard to explain something I don't understand (that's why I'm here). A: What you want is for ActionChangeInput<T> to resolve to a union of all possibilities. You can do that with a mapped type that maps over each property of T and creates a type that pairs name and value types for just that property. Something like: type ActionChangeInput<T> = { [K in keyof T]: { name: K value: T[K] } }[keyof T] This maps over each property in T with [K in keyof T], and creates an object type for each combination of name as K and value as T[K]. You then index that resulting object by its own keys to get a union of its values. With the RandomObject type, that should resolve to: type RandomObject = { test1: number, test2: string, test3: string[] } type Test = ActionChangeInput<RandomObject> /* | { name: 'test1', value: number } | { name: 'test2', value: string } | { name: 'test3', value: string[] } */ So any valid value of that type must have a name and value type that match in that list of unions. The rest now behaves as you would expect: type RandomObject = { test1: number, test2: string, test3: string[] } // fine const a: ActionChangeInput<RandomObject> = { name: "test1", value: 12931, } // error const b: ActionChangeInput<RandomObject> = { name: "test2", value: 12931, // Type 'number' is not assignable to type 'string'.(2322) } See Playground
TypeScript: get the type of a property from an object
I am trying to do a generic react form reducer. For that "Value" must have the type of T by "name". Im not sure to be comprehensive so here is an example that will be clearer: Example: type RandomObject = { test1: number, test2: string, test3: string[] } type ActionChangeInput<T> = { name: keyof T value: typeof T["name"] // Get type of name } I want value to be a number because value of name is "test1" and "test1" type is number. const a: ActionChangeInput<RandomObject> = { name: "test1", // type is "test1" | "test2" | "test3" value: 12931 // Expect number here } Sorry if it's not clear but it's hard to explain something I don't understand (that's why im here).
[ "What you want is for ActionChangeInput<T> to resolve to a union of all possibilities. You can do that with a mapped type that maps over each property of T and creates type that pairs name and value types for just that property.\nSomething like:\ntype ActionChangeInput<T> = {\n [K in keyof T]: {\n name: K\n value: T[K]\n }\n}[keyof T]\n\nThis maps over each property in T with [K in keyof T], and creates an object type for each combination of name as K and value as T[K].\nYou then index that resulting object by it's own keys to get a union of its values.\nWith the RandomObject type, that should resolve to:\ntype RandomObject = { test1: number, test2: string, test3: string[] }\n\ntype Test = ActionChangeInput<RandomObject>\n/*\n| { name: 'test1', value: number }\n| { name: 'test2', value: string }\n| { name: 'test3', value: string[] }\n*/\n\nSo any valid value of that type must have a name and value type that match in that list of unions.\nThe rest now behaves as you would expect:\ntype RandomObject = { test1: number, test2: string, test3: string[] }\n\n// fine\nconst a: ActionChangeInput<RandomObject> = {\n name: \"test1\",\n value: 12931,\n}\n\n// error\nconst b: ActionChangeInput<RandomObject> = {\n name: \"test2\",\n value: 12931, // Type 'number' is not assignable to type 'string'.(2322)\n}\n\nSee Playground\n" ]
[ 7 ]
[]
[]
[ "typescript" ]
stackoverflow_0074661578_typescript.txt
Q: Python - Beginner ATM project I want it so that my current balance will be the as the amount withdrawn for the next loop. Enter pin:123 1 – Balance Inquiry 2 – Withdraw 3 – Deposit X – Exit Enter your choice:2 Hi name1. Your current balance is 50000 Enter amount to withdraw: 400 Transaction Successful Your current balance is 49600 Enter pin:123 1 – Balance Inquiry 2 – Withdraw 3 – Deposit X – Exit Enter your choice:2 Hi name1. Your current balance is 50000 *** Problem *** Enter amount to withdraw: This is currently my code. (sorry for the messy code as I am a beginner) pin = [123, 456, 789] balance = [50000, 2000, 500] name = ["name1", "name2", "name3"] def main(): while True: pin_input = input('Enter pin:') try: n = int(pin_input) except: break if n in pin: print('1 – Balance Inquiry') print('2 – Withdraw') print('3 – Deposit') print('X – Exit') choice = input('Enter your choice:') c = int(choice) if choice == 'X': print('Thank you for banking with us!') break else: pol = pin.index(n) if c == 1: print(f'Hi {name[pol]}.') print(f'Your current balance is {balance[pol]} ') elif c == 2: print(f'Hi {name[pol]}.') print(f'Your current balance is {balance[pol]} ') withdraw = int(input('Enter amount to withdraw: ')) if withdraw > balance[pol]: print('Not enough amount') else: difference = balance[pol] - withdraw print('Transaction Successful') print(f'Your current balance is {difference}') elif c == 3: print(f'Hi {name[pol]}.') print(f'Your current balance is {balance[pol]} ') deposit = int(input('Enter amount to deposit: ')) sums = deposit + balance[pol] print('') print(f'Your current balance is {sums}') main() A: welcome to Python. I see what is your problem. When you handle the withdrawal you create a new variable and performed the subtraction you just displayed the new result and never updated it. So to solve it you need to replace this code: difference = balance[pol] - withdraw print(f'Transaction Successful.\nYour current balance is {difference}') with: balance[pol] -= withdraw print(f'Transaction Successful.\nYour current balance is {balance[pol]}') I took the liberty to edit your code a bit and make it more "professional" but also so that you could read and understand it (I added comments for you to read). pin = [123, 456, 789] balance = [50000, 2000, 500] name = ["name1", "name2", "name3"] def main(): while True: try: pin_input = int(input('Enter pin:')) except ValueError: #It's bad practice to leave it just as "except". break if pin_input in pin: print('1 – Balance Inquiry') print('2 – Withdraw') print('3 – Deposit') print('X – Exit') choice = input('Enter your choice:') #As you can see, I have removed the conversion because no one will care if it is int or str, it's happening behind the scene. if choice == 'X': print('Thank you for banking with us!') break #You don't need the else statement because if the choice would be 'X' it would automatically exit. pol = pin.index(pin_input) if choice == '1': print(f'Hi {name[pol]}.\nYour current balance is {balance[pol]}') #Using \n to downline instead of using two prints. elif choice == '2': print(f'Hi {name[pol]}.\nYour current balance is {balance[pol]}') #Using \n to downline instead of using two prints. withdraw = int(input('Enter amount to withdraw: ')) #Assuming the user will write an integer. if withdraw > balance[pol]: print('Not enough amount') break # Let's just add that here (easier to read) and we don't need the else statement anymore. 
balance[pol] -= withdraw print(f'Transaction Successful.\nYour current balance is {balance[pol]}') elif choice == '3': print(f'Hi {name[pol]}.\nYour current balance is {balance[pol]}')#Using \n to downline instead of using two prints. deposit = int(input('Enter amount to deposit: ')) #Assuming the user will write an integer. the user sums = deposit + balance[pol] print(f'\nYour current balance is {sums}') #\n instead of a whole print function. if __name__ == "__main__": #Use this script as the main script. main() Nice work, keep up with this good job! Also, I want to add my own way of creating an ATM machine. I hope that one day when you will learn and have more knowledge you would open this again and try to understand this. (This code will work only in py-3.10 or higher) Code: class ATM: def __init__(self, pin) -> None: self.pin = pin def Balance(self) -> str: return f"{data[self.pin][1]}, Your balance is: {data[self.pin][0]}" def Withdraw(self) -> str: try: withdraw = int(input('Enter amount to withdraw: ')) if withdraw > data[self.pin][0]: return f"{data[self.pin][1]}, Looks like you can't withdraw {withdraw}$ due to lower balance.\nYou can withdraw up to {data[self.pin][0]}$." data[self.pin][0] -= withdraw except ValueError: return f"{data[self.pin][1]}, Looks like there was an error with your request to withdraw. Please try again." return f"{data[self.pin][1]}, You've successfully withdrawn {withdraw}$. Your remaining balance is: {data[self.pin][0]}$." def Deposit(self) -> str: try: deposit = int(input('Enter amount to Deposit: ')) data[self.pin][0] += deposit except ValueError: return f"{data[self.pin][1]}, Looks like there was an error with your request to deposit. Please try again." return f"{data[self.pin][1]}, You've deposited {deposit}$ into your account. Your balance right now is {data[self.pin][0]}" if __name__ == "__main__": data = { 123 : [50000, "name1"], 456 : [2000, "name2"], 789 : [500, "name3"] } options = { 1 : "Balance Inquiry", 2 : "Withdraw", 3 : "Deposit", 'X' : "Exit" } running = True while running: try: pin = int(input("Enter your pin: ")) if pin in data: user = ATM(pin) else: print(f"User {pin} Doesn't exist. Please check your input and try again.") continue for key, value in options.items(): print(f"Press '{key}' for {value}") while True: action = input("Please enter your action: ") match action: case '1': print(user.Balance()) break case '2': print(user.Withdraw()) break case '3': print(user.Deposit()) break case 'X' | 'x': running = False break case _: print("This action doesn't exist. Please try again.") continue except ValueError: print("There is an error in the given pin. Please try again.") continue
Python - Beginner ATM project
I want it so that my current balance will be the as the amount withdrawn for the next loop. Enter pin:123 1 – Balance Inquiry 2 – Withdraw 3 – Deposit X – Exit Enter your choice:2 Hi name1. Your current balance is 50000 Enter amount to withdraw: 400 Transaction Successful Your current balance is 49600 Enter pin:123 1 – Balance Inquiry 2 – Withdraw 3 – Deposit X – Exit Enter your choice:2 Hi name1. Your current balance is 50000 *** Problem *** Enter amount to withdraw: This is currently my code. (sorry for the messy code as I am a beginner) pin = [123, 456, 789] balance = [50000, 2000, 500] name = ["name1", "name2", "name3"] def main(): while True: pin_input = input('Enter pin:') try: n = int(pin_input) except: break if n in pin: print('1 – Balance Inquiry') print('2 – Withdraw') print('3 – Deposit') print('X – Exit') choice = input('Enter your choice:') c = int(choice) if choice == 'X': print('Thank you for banking with us!') break else: pol = pin.index(n) if c == 1: print(f'Hi {name[pol]}.') print(f'Your current balance is {balance[pol]} ') elif c == 2: print(f'Hi {name[pol]}.') print(f'Your current balance is {balance[pol]} ') withdraw = int(input('Enter amount to withdraw: ')) if withdraw > balance[pol]: print('Not enough amount') else: difference = balance[pol] - withdraw print('Transaction Successful') print(f'Your current balance is {difference}') elif c == 3: print(f'Hi {name[pol]}.') print(f'Your current balance is {balance[pol]} ') deposit = int(input('Enter amount to deposit: ')) sums = deposit + balance[pol] print('') print(f'Your current balance is {sums}') main()
[ "welcome to Python. I see what is your problem. When you handle the withdrawal you create a new variable and performed the subtraction you just displayed the new result and never updated it.\nSo to solve it you need to replace this code:\ndifference = balance[pol] - withdraw\nprint(f'Transaction Successful.\\nYour current balance is {difference}')\n\nwith:\nbalance[pol] -= withdraw\nprint(f'Transaction Successful.\\nYour current balance is {balance[pol]}')\n\nI took the liberty to edit your code a bit and make it more \"professional\" but also so that you could read and understand it (I added comments for you to read).\npin = [123, 456, 789]\nbalance = [50000, 2000, 500]\nname = [\"name1\", \"name2\", \"name3\"]\n\ndef main():\n while True:\n try:\n pin_input = int(input('Enter pin:'))\n except ValueError: #It's bad practice to leave it just as \"except\".\n break\n if pin_input in pin:\n print('1 – Balance Inquiry')\n print('2 – Withdraw')\n print('3 – Deposit')\n print('X – Exit')\n choice = input('Enter your choice:')\n #As you can see, I have removed the conversion because no one will care if it is int or str, it's happening behind the scene.\n if choice == 'X':\n print('Thank you for banking with us!')\n break\n #You don't need the else statement because if the choice would be 'X' it would automatically exit.\n pol = pin.index(pin_input)\n if choice == '1':\n print(f'Hi {name[pol]}.\\nYour current balance is {balance[pol]}') #Using \\n to downline instead of using two prints.\n elif choice == '2':\n print(f'Hi {name[pol]}.\\nYour current balance is {balance[pol]}') #Using \\n to downline instead of using two prints.\n withdraw = int(input('Enter amount to withdraw: ')) #Assuming the user will write an integer. \n if withdraw > balance[pol]:\n print('Not enough amount')\n break # Let's just add that here (easier to read) and we don't need the else statement anymore. \n balance[pol] -= withdraw\n print(f'Transaction Successful.\\nYour current balance is {balance[pol]}')\n elif choice == '3':\n print(f'Hi {name[pol]}.\\nYour current balance is {balance[pol]}')#Using \\n to downline instead of using two prints.\n deposit = int(input('Enter amount to deposit: ')) #Assuming the user will write an integer. the user \n sums = deposit + balance[pol]\n print(f'\\nYour current balance is {sums}') #\\n instead of a whole print function. \n if __name__ == \"__main__\": #Use this script as the main script. \n main() \n\nNice work, keep up with this good job!\nAlso, I want to add my own way of creating an ATM machine. I hope that one day when you will learn and have more knowledge you would open this again and try to understand this. (This code will work only in py-3.10 or higher)\nCode:\nclass ATM:\n def __init__(self, pin) -> None:\n self.pin = pin\n\n def Balance(self) -> str:\n return f\"{data[self.pin][1]}, Your balance is: {data[self.pin][0]}\"\n\n def Withdraw(self) -> str:\n try:\n withdraw = int(input('Enter amount to withdraw: '))\n if withdraw > data[self.pin][0]:\n return f\"{data[self.pin][1]}, Looks like you can't withdraw {withdraw}$ due to lower balance.\\nYou can withdraw up to {data[self.pin][0]}$.\"\n data[self.pin][0] -= withdraw\n except ValueError:\n return f\"{data[self.pin][1]}, Looks like there was an error with your request to withdraw. Please try again.\"\n\n return f\"{data[self.pin][1]}, You've successfully withdrawn {withdraw}$. 
Your remaining balance is: {data[self.pin][0]}$.\"\n\n def Deposit(self) -> str:\n try:\n deposit = int(input('Enter amount to Deposit: '))\n data[self.pin][0] += deposit\n except ValueError:\n return f\"{data[self.pin][1]}, Looks like there was an error with your request to deposit. Please try again.\"\n\n return f\"{data[self.pin][1]}, You've deposited {deposit}$ into your account. Your balance right now is {data[self.pin][0]}\"\n\nif __name__ == \"__main__\":\n data = {\n 123 : [50000, \"name1\"],\n 456 : [2000, \"name2\"],\n 789 : [500, \"name3\"]\n }\n\n options = {\n 1 : \"Balance Inquiry\",\n 2 : \"Withdraw\",\n 3 : \"Deposit\",\n 'X' : \"Exit\"\n }\n\n running = True\n\n while running:\n try:\n pin = int(input(\"Enter your pin: \"))\n if pin in data:\n user = ATM(pin)\n else:\n print(f\"User {pin} Doesn't exist. Please check your input and try again.\")\n continue\n for key, value in options.items():\n print(f\"Press '{key}' for {value}\")\n while True:\n action = input(\"Please enter your action: \")\n match action:\n case '1':\n print(user.Balance())\n break\n case '2':\n print(user.Withdraw())\n break\n case '3':\n print(user.Deposit())\n break\n case 'X' | 'x':\n running = False\n break\n case _:\n print(\"This action doesn't exist. Please try again.\")\n continue\n except ValueError:\n print(\"There is an error in the given pin. Please try again.\")\n continue\n\n" ]
[ 0 ]
[]
[]
[ "list", "python_3.x" ]
stackoverflow_0074660457_list_python_3.x.txt
Q: GCP Cloud Armor deny main domain https://mma.mydomain.com/ Is there a way to deny the main domain https://mma.mydomain.com/ and allow the below web services in GCP Cloud Armor?
1. https://mma.mydomain.com/v1/teststudio/developer - POST
2. https://mma.mydomain.com/v1/teststudio/developer - GET
3. https://mma.mydomain.com/v1/teststudio/developer - PATCH
4. https://mma.mydomain.com/v1/teststudio/developer/app - POST
5. https://mma.mydomain.com/v1/teststudio/developer/app - GET
I have set the below rules in Google Cloud Armor Network Security services:
Action: deny | Match: request.path.matches('https://mma.mydomain.com/') | Description: Deny access from Internet to https://mma.mydomain.com | Priority: 28
Action: allow | Match: request.path.matches('/v1/devstudio/developer') | Description: Allow access from Internet to /v1/teststudio/developer | Priority: 31
Action: allow | Match: request.path.matches('/v1/devstudio/developer') | Description: Allow access from Internet to /v1/teststudio/developer/app | Priority: 32
I am referring to https://cloud.google.com/armor/docs/rules-language-reference. Please guide with examples. Thanks in advance. Best Regards, Kaushal
A: Assuming your numbers to the right are the rule priorities, Cloud Armor will match the first rule and stop. In your case, it will match the hostname value and deny the request, and never consider the other rules. Consider reversing the flow: have the more specific allow rules first and then fire the "default" hostname rule.
Consider a rule like this:
request.headers['host'].matches('mma.mydomain.com') && request.path.lower().urlDecode().contains('/v1/devstudio/developer') && request.method == "GET"
And if you want to block other requests, have your request.path.matches('https://mma.mydomain.com/') rule fire after.
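For a concrete sketch of how such an ordered rule set could be created, the commands below use placeholder names and flags as I recall them, so verify against the current gcloud reference before relying on them:
gcloud compute security-policies rules create 31 \
    --security-policy my-policy \
    --expression "request.headers['host'].matches('mma.mydomain.com') && request.path.lower().urlDecode().contains('/v1/teststudio/developer')" \
    --action allow

gcloud compute security-policies rules create 100 \
    --security-policy my-policy \
    --expression "request.headers['host'].matches('mma.mydomain.com')" \
    --action deny-403
Lower priority numbers are evaluated first, so the allow rule at 31 fires before the catch-all deny at 100.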
GCP Cloud Armor deny main domain https://mma.mydomain.com/
Is there a way to deny https://mma.mydomain.com/ main domain and allow the below Web sevices in GCP Cloud armor. 1. https://mma.mydomain.com/v1/teststudio/developer - POST 2. https://mma.mydomain.com/v1/teststudio/developer - GET 3. https://mma.mydomain.com/v1/teststudio/developer - PATCH 4. https://mma.mydomain.com/v1/teststudio/developer/app - POST 5. https://mma.mydomain.com/v1/teststudio/developer/app - GET I have set the below rules in Google Cloud Armor Network Security services deny request.path.matches('https://mma.mydomain.com/') Deny access from Internet to https://mma.mydomain.com 28 Allow request.path.matches('/v1/devstudio/developer') Allow access from Internet to /v1/teststudio/developer 31 Allow request.path.matches('/v1/devstudio/developer') Allow access from Internet to /v1/teststudio/developer/app 32 I am referring to https://cloud.google.com/armor/docs/rules-language-reference. Please guide with examples. Thanks in Advance. Best Regards, Kaushal
[ "Assuming your numbers to the right are the rule priorities, Cloud Armor will match the first rule and stop. In your case, it will match the hostname value and deny the request and never consider the other rules. Consider reversing the flow and have the more specific allow rules first and then fire the \"default\" hostname rule.\nconsider a rule like this:\nrequest.headers['host'].matches('mma.mydomain.com') && request.path.lower().urlDecode().contains('/v1/devstudio/developer') && request.method == \"GET\"\nAnd if you want to block other requests, have your request.path.matches('https://mma.mydomain.com/') rule fire after\n" ]
[ 0 ]
[]
[]
[ "google_cloud_armor", "google_cloud_platform", "re2" ]
stackoverflow_0074604548_google_cloud_armor_google_cloud_platform_re2.txt
Q: Can FastAPI guarantee a sync handler will never block the main application thread? I have the following FastAPI application: from fastapi import FastAPI import socket app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} @app.get("/healthcheck") def health_check(): result = some_network_operation() return result def some_network_operation(): HOST = "192.168.30.12" # This host does not exist so the connection will time out PORT = 4567 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.settimeout(10) s.connect((HOST, PORT)) s.sendall(b"Are you ok?") data = s.recv(1024) print(data) This is a simple application with two routes: / handler that is async /healthcheck handler that is sync With this particular example, if you call /healthcheck, it won't complete until after 10 seconds because the socket connection will timeout. However, if you make a call to / in the meantime, it will return the response right away because FastAPI's main thread is not blocked. This makes sense because according to the docs, FastAPI runs sync handlers on an external threadpool. My question is, if it is at all possible for us to block the application (block FastAPI's main thread) by doing something inside the health_check method. Perhaps by acquiring the global interpreter lock? Some other kind of lock? A: Yes, if you try to do sync work in a async method it will block FastAPI, something like this: @router.get("/healthcheck") async def health_check(): result = some_network_operation() return result Where some_network_operation() is blocking the event loop because it is a synchronous method. A: I think I may have an answer to my question, which is that there are some weird edge cases where a sync endpoint handler can block FastAPI. For instance, if we adjust the some_network_operation in my example to the following, it will block the entire application. def some_network_operation(): """ No, this is not a network operation, but it illustrates the point """ block = pow (363,100000000000000) I reached this conclusion based on this question: pow function blocking all threads with ThreadPoolExecutor. So, it looks like the GIL maybe the culprit here. That SO question suggests using the multiprocessing module (which will get around GIL). However, I tried this, and it still resulted in the same behavior. So my root problem remains unsolved. Either way, here is the entire example in the question edited to reproduce the problem: from fastapi import FastAPI app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} @app.get("/healthcheck") def health_check(): result = some_network_operation() return result def some_network_operation(): block = pow(363,100000000000000)
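One standard way around the GIL problem described in the second answer is to push CPU-bound work into a separate process, which has its own interpreter and its own GIL. This is a minimal sketch rather than a verified fix for the exact case above, and it assumes the work function and its result can be pickled:
import asyncio
from concurrent.futures import ProcessPoolExecutor

from fastapi import FastAPI

app = FastAPI()
pool = ProcessPoolExecutor(max_workers=1)


def cpu_heavy() -> int:
    # deliberately expensive, pure-CPU work
    return len(str(pow(363, 1_000_000)))


@app.get("/")
async def root():
    return {"message": "Hello World"}


@app.get("/healthcheck")
async def health_check():
    loop = asyncio.get_running_loop()
    # runs in a worker process, so the event loop thread stays free
    digits = await loop.run_in_executor(pool, cpu_heavy)
    return {"digits": digits}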
Can FastAPI guarantee a sync handler will never block the main application thread?
I have the following FastAPI application: from fastapi import FastAPI import socket app = FastAPI() @app.get("/") async def root(): return {"message": "Hello World"} @app.get("/healthcheck") def health_check(): result = some_network_operation() return result def some_network_operation(): HOST = "192.168.30.12" # This host does not exist so the connection will time out PORT = 4567 with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.settimeout(10) s.connect((HOST, PORT)) s.sendall(b"Are you ok?") data = s.recv(1024) print(data) This is a simple application with two routes: / handler that is async /healthcheck handler that is sync With this particular example, if you call /healthcheck, it won't complete until after 10 seconds because the socket connection will timeout. However, if you make a call to / in the meantime, it will return the response right away because FastAPI's main thread is not blocked. This makes sense because according to the docs, FastAPI runs sync handlers on an external threadpool. My question is, if it is at all possible for us to block the application (block FastAPI's main thread) by doing something inside the health_check method. Perhaps by acquiring the global interpreter lock? Some other kind of lock?
[ "Yes, if you try to do sync work in a async method it will block FastAPI, something like this:\[email protected](\"/healthcheck\")\nasync def health_check():\n result = some_network_operation()\n return result\n\nWhere some_network_operation() is blocking the event loop because it is a synchronous method.\n", "I think I may have an answer to my question, which is that there are some weird edge cases where a sync endpoint handler can block FastAPI.\nFor instance, if we adjust the some_network_operation in my example to the following, it will block the entire application.\ndef some_network_operation():\n \"\"\" No, this is not a network operation, but it illustrates the point \"\"\"\n block = pow (363,100000000000000)\n\nI reached this conclusion based on this question: pow function blocking all threads with ThreadPoolExecutor.\nSo, it looks like the GIL maybe the culprit here.\nThat SO question suggests using the multiprocessing module (which will get around GIL). However, I tried this, and it still resulted in the same behavior. So my root problem remains unsolved.\nEither way, here is the entire example in the question edited to reproduce the problem:\nfrom fastapi import FastAPI\n\napp = FastAPI()\n\n\[email protected](\"/\")\nasync def root():\n return {\"message\": \"Hello World\"}\n\n\[email protected](\"/healthcheck\")\ndef health_check():\n result = some_network_operation()\n return result\n\n\ndef some_network_operation():\n block = pow(363,100000000000000)\n\n" ]
[ 0, 0 ]
[]
[]
[ "fastapi", "python", "sockets", "tcp" ]
stackoverflow_0074636003_fastapi_python_sockets_tcp.txt
Q: Trying to write a program that asks for input and inserts it into an array, keeping ascending order, using pointers in C I wrote this code that sorts the given elements while inserting them, then asks for an element and inserts it while keeping ascending order
#include<stdio.h>
#include<stdlib.h>
int main() {
    int i, j, n, * p, v;
    printf("\n Entrer la taille du tableau:");
    scanf("%d", & n);
    p = (int * ) malloc(n * sizeof(int));
    printf("Enter l\'élément(s) 1:");
    scanf("%d", p);
    for (i = 1; i < n; i++) { /*the condition for the input*/
        do {
            printf("Enter l\'élément(s) %d:", i + 1);
            scanf("%d", (p + i));
        } while (p[i] < p[i - 1]);
    }
    printf("\n Affichage du tableau\n");
    for (i = 0; i < n; i++) {
        printf("%d ", *(p + i));
    }
    /*INTEGRATE A VALUE IN THE ARRAY 'tab'*/
    printf("\n Entrer la valeur de v: ");
    /*scanf("%d", p);
    printf("\n Insertion de v=%d dans le tableau\n", *p);*/
    for (int * current = p; current != p + 1; ++current) {
        int value;
        scanf("%d", & value);
        int * pos = current;
        for (; pos != p && value < * (pos - 1); --pos) {
            * pos = * (pos - 1);
        }
        * pos = value;
    }
    for (int * current = p; current != p + n; ++current) {
        printf("%d ", * current);
    }
    putchar( '\n' );
    free(p);
    return 0;
}
What I get is that if I entered a list like ---3 5 7 8--- and then insert a number like 4, I get ---4 5 7 8---. How do I make it display ---3 4 5 7 8---?
A: The following only iterates once:
for (int * current = p; current != p + 1; ++current)
If current is p, then current + 1 == p + 1, and ++current increments current to current + 1 after the first iteration.
This loop never runs:
int * pos = current;
for (; pos != p && value < * (pos - 1); --pos)
If current is p, and pos is current, then pos is p.
So the value is always assigned to the first element of p.
*pos = value;
To insert a value to an already sorted (ascending) array, you need to find the position at which the element is greater than the given value, and move it and all successive elements ahead (towards the end of the array) by one. Then assign the new value to the position found.
If you never find an element greater than the value provided, you assign it to the end of the array.
Here is an example of inserting a single element into an array. Note we initially allocate space for one additional int. If you wanted to repeatedly insert elements in a loop, you will need to realloc the array as you go, to make room for more elements.
#include <stdio.h>
#include <stdlib.h>

int get_int(void)
{
    int x;

    if (1 != scanf("%d", &x))
        exit(EXIT_FAILURE);

    return x;
}

void print_array(int *a, int len)
{
    printf("Contents of the array: \n");
    for (int i = 0; i < len; i++) {
        printf("%d ", a[i]);
    }
    printf("\n");
}

int main(void)
{
    printf("Enter the size of the array: ");

    int n = get_int();

    if (n < 1) 
        exit(EXIT_FAILURE);

    int *p = malloc((1 + n) * sizeof *p);

    printf("Enter the first element: ");
    *p = get_int();

    for (int i = 1; i < n; i++) {
        do {
            printf("Enter element #%d: ", i + 1);
            p[i] = get_int();
        } while (p[i] < p[i - 1]);
    }

    print_array(p, n);

    printf("Enter a value to insert: ");
    int v = get_int();
    int inserted = 0;

    for (int i = 0; !inserted && i < n; i++) {
        if (p[i] > v) {
            int l = n;

            while (l > i) {
                p[l] = p[l - 1];
                l--;
            }

            p[i] = v;
            inserted = 1;
        }
    }

    if (!inserted)
        p[n] = v;

    print_array(p, n + 1);
    free(p);
}
Using this program:
Enter the size of the array: 5 
Enter the first element: 11
Enter element #2: 22
Enter element #3: 33
Enter element #4: 44
Enter element #5: 55
Contents of the array: 
11 22 33 44 55 
Enter a value to insert: 40
Contents of the array: 
11 22 33 40 44 55
Trying to write a program that asks for input and inserts it into an array, keeping ascending order, using pointers in C
I wrote this code that sorts the given elements while inserting, then asks for an element and inserts it while keeping ascending order #include<stdio.h> #include<stdlib.h> int main() { int i, j, n, * p, v; printf("\n Entrer la taille du tableau:"); scanf("%d", & n); p = (int * ) malloc(n * sizeof(int)); printf("Enter l\'élément(s) 1:"); scanf("%d", p); for (i = 1; i < n; i++) { /*the condition for the input*/ do { printf("Enter l\'élément(s) %d:", i + 1); scanf("%d", (p + i)); } while (p[i] < p[i - 1]); } printf("\n Affichage du tableau\n"); for (i = 0; i < n; i++) { printf("%d ", *(p + i)); } /*INTEGRATE A VALUE IN THE ARRAY 'tab'*/ printf("\n Entrer la valeur de v: "); /*scanf("%d", p); printf("\n Insertion de v=%d dans le tableau\n", *p);*/ for (int * current = p; current != p + 1; ++current) { int value; scanf("%d", & value); int * pos = current; for (; pos != p && value < * (pos - 1); --pos) { * pos = * (pos - 1); } * pos = value; } for (int * current = p; current != p + n; ++current) { printf("%d ", * current); } putchar( '\n' ); free(p); return 0; } What I get is that if I enter a list like ---3 5 7 8--- and then insert a number like 4, I get ---4 5 7 8---. How do I make it display ---3 4 5 7 8---?
[ "The following only iterates once:\nfor (int * current = p; current != p + 1; ++current)\n\nIf current is p, then current + 1 == p + 1, and ++current increments current to current + 1 after the first iteration.\nThis loop never runs:\nint * pos = current;\nfor (; pos != p && value < * (pos - 1); --pos)\n\nIf current is p, and pos is current, then pos is p.\nSo the value is always assigned to the first element of p.\n*pos = value;\n\n\nTo insert a value to an already sorted (ascending) array, you need to find the position at which the element is greater than the given value, and move it and all successive elements ahead (towards the end of the array) by one. Then assign the new value to the position found.\nIf you never find an element greater than the value provided, you assign it to the end of the array.\nHere is an example of inserting a single element into an array. Note we initially allocate space for one additional int. If you wanted to repeatedly insert elements in a loop, you will need to realloc the array as you go, to make room for more elements.\n#include <stdio.h>\n#include <stdlib.h>\n\nint get_int(void)\n{\n int x;\n\n if (1 != scanf(\"%d\", &x))\n exit(EXIT_FAILURE);\n\n return x;\n}\n\nvoid print_array(int *a, int len)\n{\n printf(\"Contents of the array: \\n\");\n for (int i = 0; i < len; i++) {\n printf(\"%d \", a[i]);\n }\n printf(\"\\n\");\n}\n\nint main(void)\n{\n printf(\"Enter the size of the array: \");\n\n int n = get_int();\n\n if (n < 1) \n exit(EXIT_FAILURE);\n\n int *p = malloc((1 + n) * sizeof *p);\n\n printf(\"Enter the first element: \");\n *p = get_int();\n\n for (int i = 1; i < n; i++) {\n do {\n printf(\"Enter element #%d: \", i + 1);\n p[i] = get_int();\n } while (p[i] < p[i - 1]);\n }\n\n print_array(p, n);\n\n printf(\"Enter a value to insert: \");\n int v = get_int();\n int inserted = 0;\n\n for (int i = 0; !inserted && i < n; i++) {\n if (p[i] > v) {\n int l = n;\n\n while (l > i) {\n p[l] = p[l - 1];\n l--;\n }\n\n p[i] = v;\n inserted = 1;\n }\n }\n\n if (!inserted)\n p[n] = v;\n\n print_array(p, n + 1);\n free(p);\n}\n\nUsing this program:\nEnter the size of the array: 5 \nEnter the first element: 11\nEnter element #2: 22\nEnter element #3: 33\nEnter element #4: 44\nEnter element #5: 55\nContents of the array: \n11 22 33 44 55 \nEnter a value to insert: 40\nContents of the array: \n11 22 33 40 44 55\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "c", "pointers" ]
stackoverflow_0074661362_arrays_c_pointers.txt
Q: Not able to filter check by bill reference number in Acumatica I am integrating Acumatica in my software and I need to void a bill. To void the bill I need to void the check. Can anyone help me with how to get the check by bill reference number via the Acumatica REST API? A: You can pull it based on the applications of the bill. We will use bill # 000007 as an example. Make your GET request: http://localhost/dev/entity/Default/20.200.001/Bill/Bill/000007?$expand=Applications It would return the bill amount (in this case 325) and then the check with the amount paid. You would get it by Applications => Array => ReferenceNbr { "id": "xxxxx-ba7f-ec11x-xxx-xxxxx", "rowNumber": 1, "note": { "value": "" }, "Amount": { "value": 325.0000 }, "Applications": [ { "id": "xxxxxxxx-7ba0-ec11-b83e-xxxxx", "rowNumber": 1, "note": null, "AmountPaid": { "value": 325.0000 }, "Balance": { "value": 0.0000 }, "DocType": { "value": "Check" }, "ReferenceNbr": { "value": "000006" }, "Status": { "value": "Closed" }, "custom": {} } ], From there, you can run the action on the check to void it
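A sketch of that final step over the same endpoint, kept deliberately tentative: actions are invoked by POSTing to the entity's action URL, but the action name (VoidPayment here) and the key fields of the Check entity are assumptions, so check which actions your endpoint actually exposes:

curl -X POST "http://localhost/dev/entity/Default/20.200.001/Check/VoidPayment" \
  -H "Content-Type: application/json" \
  -b cookies.txt \
  -d '{ "entity": { "Type": { "value": "Check" }, "ReferenceNbr": { "value": "000006" } } }'

A 202 Accepted response with a Location header means the action was queued; poll that URL until the void completes.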
Not able to filter check by bill reference number in Acumatica
I am integrating Acumatica in my software and I need to void a bill. To void the bill I need to void the check. Can anyone help me with how to get the check by bill reference number via the Acumatica REST API?
[ "You can pull it based on the applications of the bill. We will use bill # 000007 as an example.\nMake your GET request: http://localhost/dev/entity/Default/20.200.001/Bill/Bill/000007?$expand=Applications\nIt would return the bill amount (in this case 325) and then the check with the amount paid. You would get it by Applications => Array => ReferenceNbr\n{\n \"id\": \"xxxxx-ba7f-ec11x-xxx-xxxxx\",\n \"rowNumber\": 1,\n \"note\": {\n \"value\": \"\"\n },\n \"Amount\": {\n \"value\": 325.0000\n },\n \"Applications\": [\n {\n \"id\": \"xxxxxxxx-7ba0-ec11-b83e-xxxxx\",\n \"rowNumber\": 1,\n \"note\": null,\n \"AmountPaid\": {\n \"value\": 325.0000\n },\n \"Balance\": {\n \"value\": 0.0000\n },\n \"DocType\": {\n \"value\": \"Check\"\n },\n \"ReferenceNbr\": {\n \"value\": \"000006\"\n },\n \"Status\": {\n \"value\": \"Closed\"\n },\n \"custom\": {}\n }\n ],\n\nFrom there, you can run the action on the check to void it\n\n" ]
[ 0 ]
[]
[]
[ "acumatica" ]
stackoverflow_0072956134_acumatica.txt
Q: Adding two items to a list in list comprehension I wrote a for loop which adds the letters of the input into the list "word" and also adds a space before the letter if the letter is capital. Like so: def solution(s): word = [] for letter in s: print(letter.isupper()) if letter.isupper(): word.append(" ") word.append(letter) else: word.append(letter) return ''.join(word) print(solution("helloWorld")) output: hello World I want to convert this to a list comprehension but it won't take both items I would like to add to the list. I tried the following: def solution(s): word = [" " and letter if letter.isupper() else letter for letter in s] return ''.join(word) print(solution("helloWorld")) output: helloWorld wanted output: hello World How can I add the space along with the letter if it is uppercase, as done in the for loop? EDIT: Found out it can be done the following way. def solution(s): word = [" " + letter if letter.isupper() else letter for letter in s] return ''.join(word) A: The following code works: def solution(s): word = [f" {letter}" if letter.isupper() else letter for letter in s] return ''.join(word) print(solution("helloWorldFooBar")) result: hello World Foo Bar
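For what it's worth, the intermediate list can be skipped entirely by handing join a generator expression; this is equivalent to the accepted fix:

def solution(s):
    return ''.join(f" {c}" if c.isupper() else c for c in s)

print(solution("helloWorld"))  # hello World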
Adding two items to a list in list comprehension
I wrote a for loop which adds the letters of the input into the list "word" and also adds a space before the letter if the letter is capital. Like so: def solution(s): word = [] for letter in s: print(letter.isupper()) if letter.isupper(): word.append(" ") word.append(letter) else: word.append(letter) return ''.join(word) print(solution("helloWorld")) output: hello World I want to convert this to a list comprehension but it won't take both items I would like to add to the list. I tried the following: def solution(s): word = [" " and letter if letter.isupper() else letter for letter in s] return ''.join(word) print(solution("helloWorld")) output: helloWorld wanted output: hello World How can I add the space along with the letter if it is uppercase, as done in the for loop? EDIT: Found out it can be done the following way. def solution(s): word = [" " + letter if letter.isupper() else letter for letter in s] return ''.join(word)
[ "The following code works:\ndef solution(s):\n word = [f\" {letter}\" if letter.isupper() else letter for letter in s]\n return ''.join(word)\n\nprint(solution(\"helloWorldFooBar\"))\n\nresult: hello World Foo Bar\n" ]
[ 0 ]
[]
[]
[ "for_loop", "function", "list", "list_comprehension", "python_3.x" ]
stackoverflow_0074659467_for_loop_function_list_list_comprehension_python_3.x.txt
Q: Can't create a Compute Engine instance template that assigns ephemeral external IPs I am running the following command (mostly copied from the GCP console) to create an instance template with a custom boot disk: gcloud --project=$PROJECT \ compute instance-templates create indesign-server-template-$TIMESTAMP \ --machine-type=$MACHINE_TYPE \ --network-interface=network=default,network-tier=PREMIUM \ --no-restart-on-failure --maintenance-policy=TERMINATE --provisioning-model=STANDARD \ --service-account=$SVC_ACCOUNT \ --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \ --tags=http-server,https-server \ --create-disk=auto-delete=yes,boot=yes,device-name=indesign-server-template-$TIMESTAMP,image=projects/$PROJECT/global/images/indesign-server-image-$TIMESTAMP,mode=rw,size=100,type=pd-balanced \ --reservation-affinity=any When I view the template in the console, here's what I see under "Network Interfaces": But here's what I want to see (from a template I created by point-and-click): From reading the gcloud docs I am passing the right options to --network-interface, what am I missing? A: You should try adding an address flag and leave the string empty. So it should look like this --network-interface=network=default,network-tier=PREMIUM,address=''
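Applied to the original command, the fix is just the extra address key; trimmed here to the relevant flags, with the same placeholders as in the question:

gcloud --project=$PROJECT \
  compute instance-templates create indesign-server-template-$TIMESTAMP \
  --machine-type=$MACHINE_TYPE \
  --network-interface=network=default,network-tier=PREMIUM,address='' \
  ...

Per the gcloud docs, an empty address value requests an ephemeral external IP, while leaving the key out entirely creates the interface with no external IP.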
Can't create a Compute Engine instance template that assigns ephemeral external IPs
I am running the following command (mostly copied from the GCP console) to create an instance template with a custom boot disk: gcloud --project=$PROJECT \ compute instance-templates create indesign-server-template-$TIMESTAMP \ --machine-type=$MACHINE_TYPE \ --network-interface=network=default,network-tier=PREMIUM \ --no-restart-on-failure --maintenance-policy=TERMINATE --provisioning-model=STANDARD \ --service-account=$SVC_ACCOUNT \ --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \ --tags=http-server,https-server \ --create-disk=auto-delete=yes,boot=yes,device-name=indesign-server-template-$TIMESTAMP,image=projects/$PROJECT/global/images/indesign-server-image-$TIMESTAMP,mode=rw,size=100,type=pd-balanced \ --reservation-affinity=any When I view the template in the console, here's what I see under "Network Interfaces": But here's what I want to see (from a template I created by point-and-click): From reading the gcloud docs I am passing the right options to --network-interface, what am I missing?
[ "You should try adding an address flag and leave the string empty. So it should look like this\n--network-interface=network=default,network-tier=PREMIUM,address=''\n\n" ]
[ 3 ]
[]
[]
[ "gcloud", "google_cloud_platform", "google_compute_engine" ]
stackoverflow_0074661472_gcloud_google_cloud_platform_google_compute_engine.txt
Q: import custom python module in azure ml deployment environment I have an sklearn k-means model. I am training the model and saving it in a pickle file so I can deploy it later using azure ml library. The model that I am training uses a custom Feature Encoder called MultiColumnLabelEncoder. The pipeline model is defined as follow : # Pipeline kmeans = KMeans(n_clusters=3, random_state=0) pipe = Pipeline([ ("encoder", MultiColumnLabelEncoder()), ('k-means', kmeans), ]) #Training the pipeline model = pipe.fit(visitors_df) prediction = model.predict(visitors_df) #save the model in pickle/joblib format filename = 'k_means_model.pkl' joblib.dump(model, filename) The model saving works fine. The Deployment steps are the same as the steps in this link : https://notebooks.azure.com/azureml/projects/azureml-getting-started/html/how-to-use-azureml/deploy-to-cloud/model-register-and-deploy.ipynb However the deployment always fails with this error : File "/var/azureml-server/create_app.py", line 3, in <module> from app import main File "/var/azureml-server/app.py", line 27, in <module> import main as user_main File "/var/azureml-app/main.py", line 19, in <module> driver_module_spec.loader.exec_module(driver_module) File "/structure/azureml-app/score.py", line 22, in <module> importlib.import_module("multilabelencoder") File "/azureml-envs/azureml_b707e8c15a41fd316cf6c660941cf3d5/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'multilabelencoder' I understand that pickle/joblib has some problems unpickling the custom function MultiLabelEncoder. That's why I defined this class in a separate python script (which I executed also). I called this custom function in the training python script, in the deployment script and in the scoring python file (score.py). The importing in the score.py file is not successful. So my question is how can I import custom python module to azure ml deployment environment ? Thank you in advance. EDIT: This is my .yml file name: project_environment dependencies: # The python interpreter version. # Currently Azure ML only supports 3.5.2 and later. - python=3.6.2 - pip: - multilabelencoder==1.0.4 - scikit-learn - azureml-defaults==1.0.74.* - pandas channels: - conda-forge A: In fact, the solution was to import my customized class MultiColumnLabelEncoder as a pip package (You can find it through pip install multilllabelencoder==1.0.5). Then I passed the pip package to the .yml file or in the InferenceConfig of the azure ml environment. In the score.py file, I imported the class as follows : from multilabelencoder import multilabelencoder def init(): global model # Call the custom encoder to be used dfor unpickling the model encoder = multilabelencoder.MultiColumnLabelEncoder() # Get the path where the deployed model can be found. model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'k_means_model_45.pkl') model = joblib.load(model_path) Then the deployment was successful. 
One more important thing is I had to use the same pip package (multilabelencoder) in the training pipeline as here : from multilabelencoder import multilabelencoder pipe = Pipeline([ ("encoder", multilabelencoder.MultiColumnLabelEncoder(columns)), ('k-means', kmeans), ]) #Training the pipeline trainedModel = pipe.fit(df) A: I am facing the same problem, trying to deploy a model that has dependency on some of my own scripts and got the error message: ModuleNotFoundError: No module named 'my-own-module-name' Found this "Private wheel files" solution in MS documentation and it works. The difference from the solution above is now I do not need to publish my scripts to pip. I think many people might face the same situation that for some reason you cannot or do not want to publish your scripts. Instead, your own wheel file is saved under your own blob storage. Following the documentation, I did the following steps and it worked for me. Now I can deploy my model that has dependency in my own scripts. Package your own scripts that the model is dependent on into wheel file, and the wheel file is saved locally. "your_path/your-wheel-file-name.whl" Follow the instructions in the "Private wheel files" solution in MS documentation. Below is the code that worked for me. from azureml.core.environment import Environment from azureml.core.conda_dependencies import CondaDependencies whl_url = Environment.add_private_pip_wheel(workspace=ws,file_path = "your_pathpath/your-wheel-file-name.whl") myenv = CondaDependencies() myenv.add_pip_package("scikit-learn==0.22.1") myenv.add_pip_package("azureml-defaults") myenv.add_pip_package(whl_url) with open("myenv.yml","w") as f: f.write(myenv.serialize_to_string()) My environment file now looks like: name: project_environment dependencies: # The python interpreter version. # Currently Azure ML only supports 3.5.2 and later. - python=3.6.2 - pip: - scikit-learn==0.22.1 - azureml-defaults - https://myworkspaceid.blob.core/azureml/Environment/azureml-private-packages/my-wheel-file-name.whl channels: - conda-forge I'm new to Azure ml. Learning by doing and communicating with the community. This solution works fine for me, hope that it helps. A: An alternative method that works for me is to register a "model_src"-directory containing both the pickled model and a custom module, instead of registering only the pickled model. Then, specify the custom module in the scoring script during deployment, e.g., using python's os module. 
Example below using sdk-v1: Example of "model_src"-directory model_src │ ├─ utils # your custom module │ └─ multilabelencoder.py │ └─ models # your pickled files └─ k_means_model_45.pkl Register "model_src" in sdk-v1 model = Model.register(model_path="./model_src", model_name="kmeans", description="model registered as a directory", workspace=ws ) Correspondingly, when defining the inference config deployment_folder = './model_src' script_file = 'models/score.py' service_env = Environment.from_conda_specification(kmeans-service, './environment.yml' # wherever yml is located locally ) inference_config = InferenceConfig(source_directory=deployment_folder, entry_script=script_file, environment=service_env ) Content of scoring script, e.g., score.py # Specify model_src as your parent import os deploy_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'),'model_src') # Import custom module import sys sys.path.append("{0}/utils".format(deploy_dir)) from multilabelencoder import MultiColumnLabelEncoder import joblib def init(): global model # Call the custom encoder to be used dfor unpickling the model encoder = MultiColumnLabelEncoder() # Use as intended downstream # Get the path where the deployed model can be found. model = joblib.load('{}/models/k_means_model_45.pkl'.format(deploy_dir)) This method provides flexibility in importing various custom scripts in my scoring script.
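For the first approach, here is a minimal sketch of declaring the published pip package on an Environment object in code instead of a hand-written .yml; the package name and version mirror the ones above, so substitute your own:

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

env = Environment(name="kmeans-env")
env.python.conda_dependencies = CondaDependencies.create(
    pip_packages=[
        "multilabelencoder==1.0.5",   # the custom encoder published to pip
        "scikit-learn",
        "azureml-defaults",
        "pandas",
    ]
)
# env can then be passed to InferenceConfig(..., environment=env)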
import custom python module in azure ml deployment environment
I have an sklearn k-means model. I am training the model and saving it in a pickle file so I can deploy it later using azure ml library. The model that I am training uses a custom Feature Encoder called MultiColumnLabelEncoder. The pipeline model is defined as follow : # Pipeline kmeans = KMeans(n_clusters=3, random_state=0) pipe = Pipeline([ ("encoder", MultiColumnLabelEncoder()), ('k-means', kmeans), ]) #Training the pipeline model = pipe.fit(visitors_df) prediction = model.predict(visitors_df) #save the model in pickle/joblib format filename = 'k_means_model.pkl' joblib.dump(model, filename) The model saving works fine. The Deployment steps are the same as the steps in this link : https://notebooks.azure.com/azureml/projects/azureml-getting-started/html/how-to-use-azureml/deploy-to-cloud/model-register-and-deploy.ipynb However the deployment always fails with this error : File "/var/azureml-server/create_app.py", line 3, in <module> from app import main File "/var/azureml-server/app.py", line 27, in <module> import main as user_main File "/var/azureml-app/main.py", line 19, in <module> driver_module_spec.loader.exec_module(driver_module) File "/structure/azureml-app/score.py", line 22, in <module> importlib.import_module("multilabelencoder") File "/azureml-envs/azureml_b707e8c15a41fd316cf6c660941cf3d5/lib/python3.6/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) ModuleNotFoundError: No module named 'multilabelencoder' I understand that pickle/joblib has some problems unpickling the custom function MultiLabelEncoder. That's why I defined this class in a separate python script (which I executed also). I called this custom function in the training python script, in the deployment script and in the scoring python file (score.py). The importing in the score.py file is not successful. So my question is how can I import custom python module to azure ml deployment environment ? Thank you in advance. EDIT: This is my .yml file name: project_environment dependencies: # The python interpreter version. # Currently Azure ML only supports 3.5.2 and later. - python=3.6.2 - pip: - multilabelencoder==1.0.4 - scikit-learn - azureml-defaults==1.0.74.* - pandas channels: - conda-forge
[ "In fact, the solution was to import my customized class MultiColumnLabelEncoder as a pip package (You can find it through pip install multilllabelencoder==1.0.5).\nThen I passed the pip package to the .yml file or in the InferenceConfig of the azure ml environment.\nIn the score.py file, I imported the class as follows :\nfrom multilabelencoder import multilabelencoder\ndef init():\n global model\n\n # Call the custom encoder to be used dfor unpickling the model\n encoder = multilabelencoder.MultiColumnLabelEncoder() \n # Get the path where the deployed model can be found.\n model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'k_means_model_45.pkl')\n model = joblib.load(model_path)\n\nThen the deployment was successful. \nOne more important thing is I had to use the same pip package (multilabelencoder) in the training pipeline as here :\nfrom multilabelencoder import multilabelencoder \npipe = Pipeline([\n (\"encoder\", multilabelencoder.MultiColumnLabelEncoder(columns)),\n ('k-means', kmeans),\n])\n#Training the pipeline\ntrainedModel = pipe.fit(df)\n\n", "I am facing the same problem, trying to deploy a model that has dependency on some of my own scripts and got the error message:\n ModuleNotFoundError: No module named 'my-own-module-name'\n\nFound this \"Private wheel files\" solution in MS documentation and it works. The difference from the solution above is now I do not need to publish my scripts to pip. I think many people might face the same situation that for some reason you cannot or do not want to publish your scripts. Instead, your own wheel file is saved under your own blob storage.\nFollowing the documentation, I did the following steps and it worked for me. Now I can deploy my model that has dependency in my own scripts.\n\nPackage your own scripts that the model is dependent on into wheel file, and the wheel file is saved locally.\n\"your_path/your-wheel-file-name.whl\"\n\nFollow the instructions in the \"Private wheel files\" solution in MS documentation. Below is the code that worked for me.\n\n\n\nfrom azureml.core.environment import Environment\nfrom azureml.core.conda_dependencies import CondaDependencies\n\nwhl_url = Environment.add_private_pip_wheel(workspace=ws,file_path = \"your_pathpath/your-wheel-file-name.whl\")\n\nmyenv = CondaDependencies()\nmyenv.add_pip_package(\"scikit-learn==0.22.1\")\nmyenv.add_pip_package(\"azureml-defaults\")\nmyenv.add_pip_package(whl_url)\n\nwith open(\"myenv.yml\",\"w\") as f:\n f.write(myenv.serialize_to_string())\n\nMy environment file now looks like:\nname: project_environment\ndependencies:\n # The python interpreter version.\n\n # Currently Azure ML only supports 3.5.2 and later.\n\n- python=3.6.2\n\n- pip:\n - scikit-learn==0.22.1\n - azureml-defaults\n - https://myworkspaceid.blob.core/azureml/Environment/azureml-private-packages/my-wheel-file-name.whl\nchannels:\n- conda-forge\n\nI'm new to Azure ml. Learning by doing and communicating with the community. This solution works fine for me, hope that it helps.\n", "An alternative method that works for me is to register a \"model_src\"-directory containing both the pickled model and a custom module, instead of registering only the pickled model. Then, specify the custom module in the scoring script during deployment, e.g., using python's os module. 
Example below using sdk-v1:\nExample of \"model_src\"-directory\nmodel_src\n │\n ├─ utils # your custom module\n │ └─ multilabelencoder.py\n │\n └─ models # your pickled files\n └─ k_means_model_45.pkl \n\nRegister \"model_src\" in sdk-v1\nmodel = Model.register(model_path=\"./model_src\",\n model_name=\"kmeans\", \n description=\"model registered as a directory\",\n workspace=ws\n)\n\nCorrespondingly, when defining the inference config\ndeployment_folder = './model_src'\nscript_file = 'models/score.py'\nservice_env = Environment.from_conda_specification(kmeans-service,\n './environment.yml' # wherever yml is located locally\n)\ninference_config = InferenceConfig(source_directory=deployment_folder,\n entry_script=script_file,\n environment=service_env\n)\n\nContent of scoring script, e.g., score.py\n# Specify model_src as your parent\nimport os\ndeploy_dir = os.path.join(os.getenv('AZUREML_MODEL_DIR'),'model_src')\n\n# Import custom module\nimport sys\nsys.path.append(\"{0}/utils\".format(deploy_dir)) \nfrom multilabelencoder import MultiColumnLabelEncoder\n\nimport joblib\n\ndef init():\n global model\n\n # Call the custom encoder to be used dfor unpickling the model\n encoder = MultiColumnLabelEncoder() # Use as intended downstream \n \n # Get the path where the deployed model can be found.\n model = joblib.load('{}/models/k_means_model_45.pkl'.format(deploy_dir))\n\n\nThis method provides flexibility in importing various custom scripts in my scoring script.\n" ]
[ 4, 4, 0 ]
[]
[]
[ "azure_machine_learning_service", "azure_machine_learning_studio", "pickle", "python" ]
stackoverflow_0059176241_azure_machine_learning_service_azure_machine_learning_studio_pickle_python.txt
Q: How can I count the occurrences greater than a value for each year of a data frame I have a data frame with daily precipitation values. I would like to do a sort of resample, so that instead of day by day the data is collected year by year, and every year has a value that counts the number of times it rained more than a certain value. Date Precipitation 2000-01-01 1 2000-01-03 6 2000-01-03 5 2001-01-01 3 2001-01-02 1 2001-01-03 0 2002-01-01 10 2002-01-02 8 2002-01-03 12 What I want is to count, for every year, how many times Precipitation > 2 Date Count 2000 2 2001 1 2002 3 I tried using resample() but with no results A: @Tatthew you can do this with GroupBy.apply: import pandas as pd df = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03', '2000-01-03', '2001-01-01', '2001-01-02', '2001-01-03', '2002-01-01', '2002-01-02', '2002-01-03'], 'Precipitation': [1, 6, 5, 3, 1, 0, 10, 8, 12]}) df = df.astype({'Date': 'datetime64[ns]'}) df.groupby(df.Date.dt.year).apply(lambda df: df.Precipitation[df.Precipitation > 2].count()) A: You can use this bit of code: # convert "Precipitation" and "Date" values to proper types df['Precipitation'] = df['Precipitation'].astype(int) df["Date"] = pd.to_datetime(df["Date"]) # find rows that have "Precipitation" > 2 df['Count']= df.apply(lambda x: x["Precipitation"] > 2, axis=1) # group df by year and drop the "Precipitation" column df.groupby(df['Date'].dt.year).sum().drop(columns=['Precipitation']) A: @Tatthew you can do this with query and GroupBy.size too. import pandas as pd df = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03', '2000-01-03', '2001-01-01', '2001-01-02', '2001-01-03', '2002-01-01', '2002-01-02', '2002-01-03'], 'Precipitation': [1, 6, 5, 3, 1, 0, 10, 8, 12]}) df = df.astype({'Date': 'datetime64[ns]'}) above_threshold = df.query('Precipitation > 2') above_threshold.groupby(above_threshold.Date.dt.year).size()
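Since the question mentions resample(), here are two compact sketches that produce the desired counts once Date is converted to a real datetime (column names match the sample data):

df['Date'] = pd.to_datetime(df['Date'])

# group by year:
df['Precipitation'].gt(2).groupby(df['Date'].dt.year).sum()

# or with resample, after moving Date into the index:
df.set_index('Date')['Precipitation'].gt(2).resample('Y').sum()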
How can I count the occurrences greater than a value for each year of a data frame
I have a data frame with the values of precipitations day per day. I would like to do a sort of resample, so instead of day per day the data is collected year per year and every year has a column that contains the number of times it rained more than a certain value. Date Precipitation 2000-01-01 1 2000-01-03 6 2000-01-03 5 2001-01-01 3 2001-01-02 1 2001-01-03 0 2002-01-01 10 2002-01-02 8 2002-01-03 12 what I want is to count every year how many times Precipitation > 2 Date Count 2000 2 2001 1 2002 3 I tried using resample() but with no results
[ "@Tatthew you can do this with GroupBy.apply:\nimport pandas as pd\ndf = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03',\n '2000-01-03', '2001-01-01',\n '2001-01-02', '2001-01-03',\n '2002-01-01', '2002-01-02',\n '2002-01-03'],\n 'Precipitation': [1, 6, 5, 3, 1, 0,\n 10, 8, 12]})\ndf = df.astype({'Date': datetime64})\ndf.groupby(df.Date.dt.year).apply(lambda df: df.Precipitation[df.Precipitation > 2].count())\n\n", "You can use this bit of code:\n# convert \"Precipitation\" and \"date\" values to proper types\ndf['Precipitation'] = df['Precipitation'].astype(int)\ndf[\"date\"] = pd.to_datetime(df[\"date\"])\n\n# find rows that have \"Precipitation\" > 2\ndf['Count']= df.apply(lambda x: x[\"Precipitation\"] > 2, axis=1)\n\n# group df by year and drop the \"Precipitation\" column\ndf.groupby(df['date'].dt.year).sum().drop(columns=['Precipitation'])\n\n", "@Tatthew you can do this with query and Groupby.size too.\nimport pandas as pd\ndf = pd.DataFrame({'Date': ['2000-01-01', '2000-01-03',\n '2000-01-03', '2001-01-01',\n '2001-01-02', '2001-01-03',\n '2002-01-01', '2002-01-02',\n '2002-01-03'],\n 'Precipitation': [1, 6, 5, 3, 1, 0,\n 10, 8, 12]})\ndf = df.astype({'Date': datetime64})\nabove_threshold = df.query('Precipitation > 2')\nabove_threshold.groupby(above_threshold.Date.dt.year).size()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074659756_dataframe_pandas_python.txt
Q: Accessing Mailman 3 list members via Python/Django management console I am trying to access members of an existing Mailman 3 mailing list directly from Django Management console on a Debian Bullseye where Mailman is installed from deb packages (mailman3-full). I can connect to the Django admin console like this (all 3 variants seem to work fine): $ /usr/share/mailman3-web/manage.py shell $ mailman-web shell $ mailman-web shell --settings /etc/mailman3/mailman-web.py Python 3.9.2 (default, Feb 28 2021, 17:03:44) >>> But inside the Django admin console, some mailman components seem to be missing. I try to access the list manager as described here: Docs > Models > The mailing list manager: >>> from mailman.interfaces.listmanager import IListManager >>> from zope.component import getUtility >>> list_manager = getUtility(IListManager) Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/lib/python3/dist-packages/zope/component/_api.py", line 169, in getUtility raise ComponentLookupError(interface, name) zope.interface.interfaces.ComponentLookupError: (<InterfaceClass mailman.interfaces.listmanager.IListManager>, '') Can't figure out why this ComponentLookupError happens. Also tried to acccess a list with the ListManager implementation: >>> from mailman.config import config >>> from mailman.model.listmanager import ListManager >>> list_manager = ListManager() >>> list_manager.get('[email protected]') Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/lib/python3/dist-packages/mailman/database/transaction.py", line 85, in wrapper return function(args[0], config.db.store, *args[1:], **kws) AttributeError: 'NoneType' object has no attribute 'store' >>> list_manager.get_by_list_id('mynews.example.com') Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/lib/python3/dist-packages/mailman/database/transaction.py", line 85, in wrapper return function(args[0], config.db.store, *args[1:], **kws) AttributeError: 'NoneType' object has no attribute 'store' What am I doing wrong here? None of the examples in the Mailman 3 models documentation is working if I don't even get that far. any help greatly appreciated! A: It's just the wrong shell you are using. You should use Mailman core shell instead. It is accessible via just mailman shell in your system most probably. A: So mailman shell works great and I could run this interactively: from mailman.interfaces.listmanager import IListManager from zope.component import getUtility from mailman.testing.documentation import dump_list from operator import attrgetter def dump_members(roster): all_addresses = list(member.address for member in roster) sorted_addresses = sorted(all_addresses, key=attrgetter('email')) dump_list(sorted_addresses) list_manager = getUtility(IListManager) mlist = list_manager.get('[email protected]') dump_members(mlist.members.members) but how could I put this into a script that could be run with mailman withlist -r listmembers -l [email protected]? from mailman.testing.documentation import dump_list from operator import attrgetter def listmembers(mlist): roster = mlist.members.members all_addresses = list(member.address for member in roster) sorted_addresses = sorted(all_addresses, key=attrgetter('email')) dump_list(sorted_addresses) where would I put such a listmembers.py runner? 
I tried to put it into /usr/lib/python3/dist-packages/mailman/runners directory, but didn't work: $ mailman withlist -r listmembers -l [email protected] ModuleNotFoundError: No module named 'listmembers' Thanks!
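On the closing question: withlist loads the runner through Python's normal import machinery, so instead of copying the file into the installed mailman package, listmembers.py needs to live in a directory the interpreter can import from. A sketch, with a hypothetical directory name:

$ export PYTHONPATH=/home/me/mailman-scripts    # directory containing listmembers.py
$ mailman withlist -r listmembers -l [email protected]

If the bare module name is not resolved on your version, the explicit module.function form (-r listmembers.listmembers) is worth trying.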
Accessing Mailman 3 list members via Python/Django management console
I am trying to access members of an existing Mailman 3 mailing list directly from Django Management console on a Debian Bullseye where Mailman is installed from deb packages (mailman3-full). I can connect to the Django admin console like this (all 3 variants seem to work fine): $ /usr/share/mailman3-web/manage.py shell $ mailman-web shell $ mailman-web shell --settings /etc/mailman3/mailman-web.py Python 3.9.2 (default, Feb 28 2021, 17:03:44) >>> But inside the Django admin console, some mailman components seem to be missing. I try to access the list manager as described here: Docs > Models > The mailing list manager: >>> from mailman.interfaces.listmanager import IListManager >>> from zope.component import getUtility >>> list_manager = getUtility(IListManager) Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/lib/python3/dist-packages/zope/component/_api.py", line 169, in getUtility raise ComponentLookupError(interface, name) zope.interface.interfaces.ComponentLookupError: (<InterfaceClass mailman.interfaces.listmanager.IListManager>, '') Can't figure out why this ComponentLookupError happens. Also tried to acccess a list with the ListManager implementation: >>> from mailman.config import config >>> from mailman.model.listmanager import ListManager >>> list_manager = ListManager() >>> list_manager.get('[email protected]') Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/lib/python3/dist-packages/mailman/database/transaction.py", line 85, in wrapper return function(args[0], config.db.store, *args[1:], **kws) AttributeError: 'NoneType' object has no attribute 'store' >>> list_manager.get_by_list_id('mynews.example.com') Traceback (most recent call last): File "<console>", line 1, in <module> File "/usr/lib/python3/dist-packages/mailman/database/transaction.py", line 85, in wrapper return function(args[0], config.db.store, *args[1:], **kws) AttributeError: 'NoneType' object has no attribute 'store' What am I doing wrong here? None of the examples in the Mailman 3 models documentation is working if I don't even get that far. any help greatly appreciated!
[ "It's just the wrong shell you are using. You should use Mailman core shell instead.\nIt is accessible via just mailman shell in your system most probably.\n", "So mailman shell works great and I could run this interactively:\nfrom mailman.interfaces.listmanager import IListManager\nfrom zope.component import getUtility\nfrom mailman.testing.documentation import dump_list\nfrom operator import attrgetter\n\ndef dump_members(roster):\n all_addresses = list(member.address for member in roster)\n sorted_addresses = sorted(all_addresses, key=attrgetter('email'))\n dump_list(sorted_addresses)\n\nlist_manager = getUtility(IListManager)\nmlist = list_manager.get('[email protected]')\ndump_members(mlist.members.members)\n\nbut how could I put this into a script that could be run with mailman withlist -r listmembers -l [email protected]?\nfrom mailman.testing.documentation import dump_list\nfrom operator import attrgetter\n\ndef listmembers(mlist):\n roster = mlist.members.members\n all_addresses = list(member.address for member in roster)\n sorted_addresses = sorted(all_addresses, key=attrgetter('email'))\n dump_list(sorted_addresses)\n\nwhere would I put such a listmembers.py runner? I tried to put it into /usr/lib/python3/dist-packages/mailman/runners directory, but didn't work:\n$ mailman withlist -r listmembers -l [email protected]\nModuleNotFoundError: No module named 'listmembers'\n\nThanks!\n" ]
[ 1, 0 ]
[]
[]
[ "django", "mailman" ]
stackoverflow_0074654097_django_mailman.txt
Q: Laravel hasOne() Function Using $this when not in object context I have 2 models named AdminContent, AdminCategory. I have content_category_id in my admin_contents table. I have category_id and category_name in my admin_categories table. I linked category_id with content_category_id foreign. I am using the hasOne() function in my Admin Content model. But I get the error Using $this when not in object context! My main goal is to get content_category_id value from admin_categories table name column Migrations // Admin Categories Migration Schema::create( 'admin_categories', function(Blueprint $table) { $table->bigIncrements('ctgry_id')->unique(); $table->string('category_name', 50)->unique(); $table->timestamps(); }); // Admin Contents Migration Schema::create('admin_contents', function (Blueprint $table) { $table->bigIncrements('cntnt_id')->unique(); $table->string('content_title'); $table->text('content_content'); $table->string('content_slug'); $table->bigInteger('content_category_id'); $table->foreign('content_category_id')->references('ctgry_id')->on('admin_categories'); $table->string('content_status'); $table->string('create_user'); $table->string('content_tags'); $table->string('content_excerpt'); $table->dateTime('posted_at'); $table->timestamps(); }); Models // AdminContent Model protected $table = "admin_contents"; protected $fillable = [ 'content_title', 'content_content', 'content_category_id', 'content_status', 'create_user','content_tags', 'content_excerpt', 'created_at', 'updated_at' ]; protected $guards = [ 'cntnt_id', ]; public function setCategoryName() { return $this->hasOne(AdminCategory::class); } When I want to access with $this->hasOne(AdminCategory::class) I get this error! A: First: relationships in Laravel are based in standardize models, using 'id' as column name for ids. If you are using another name for firstKey, you should add it to relationship definition, as stated in documentation. I mean, your relationship should not work because Eloquent doesn't know which are your tables first keys. Second: when you define a relationship you should call id from your model. So how are you accessing to $this->hasOne(AdminCategory::class)? It should be something like AdminContent::with('setCategoryName') Maybe showing some code from your controller we can give you a more accurate reply. A: What I want is this, I get the blog content with query and print it. But I am printing the content_category_id value as the id value in category table. What I need to do is get the content_category_id and the id value in the category table, the category name linked to that id. Thanks in advance for your help. 
Admin Content Model namespace App\Models\Admin; use Illuminate\Database\Eloquent\Factories\HasFactory; use Illuminate\Database\Eloquent\Model; use Illuminate\Support\Facades\DB; class AdminContent extends Model { use HasFactory; protected $table = "admin_contents"; protected $primaryKey = 'cntnt_id'; protected $fillable = [ 'content_title', 'content_content', 'content_category_id', 'content_status', 'create_user','content_tags', 'content_excerpt', 'created_at', 'updated_at' ]; protected $guards = [ 'cntnt_id', ]; public function _all() { return self::all(); } public static function setCategoryName() { return $this->hasOne(AdminCategory::class, 'content_category_id', 'ctgry_id'); } } Admin Category Model namespace App\Models\Admin; use Illuminate\Database\Eloquent\Factories\HasFactory; use Illuminate\Database\Eloquent\Model; class AdminCategory extends Model { use HasFactory; protected $table = 'admin_categories'; protected $primaryKey = 'ctgry_id'; protected $fillable = [ 'category_name', 'updated_at' ]; protected $quards = [ 'ctgry_id', 'created_at' ]; } Post Controller namespace App\Http\Controllers; use Illuminate\Http\Request; use App\Models\Admin\AdminContent; class PostController extends Controller { public function index() { return view('frontend.blog'); } public function getCategoryName() { return AdminContent::find(1)->setCategoryName; } } MySQL Tables https://www.hizliresim.com/2z0337a
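Two things stand out in the model code above. First, setCategoryName() is declared static while its body uses $this, and $this does not exist in a static method; that is exactly what raises "Using $this when not in object context". Second, since admin_contents holds the foreign key, the natural relationship from the content side is belongsTo rather than hasOne. A sketch of one possible fix (the method name category is illustrative):

// In App\Models\Admin\AdminContent
public function category()
{
    // admin_contents.content_category_id references admin_categories.ctgry_id
    return $this->belongsTo(AdminCategory::class, 'content_category_id', 'ctgry_id');
}

// Usage, e.g. in PostController:
$name = AdminContent::with('category')->find(1)->category->category_name;

As a side note, Laravel's mass-assignment property is $guarded; the $guards and $quards properties in these models are silently ignored.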
Laravel hasOne() Function Using $this when not in object context
I have 2 models named AdminContent, AdminCategory. I have content_category_id in my admin_contents table. I have category_id and category_name in my admin_categories table. I linked category_id with content_category_id foreign. I am using the hasOne() function in my Admin Content model. But I get the error Using $this when not in object context! My main goal is to get content_category_id value from admin_categories table name column Migrations // Admin Categories Migration Schema::create( 'admin_categories', function(Blueprint $table) { $table->bigIncrements('ctgry_id')->unique(); $table->string('category_name', 50)->unique(); $table->timestamps(); }); // Admin Contents Migration Schema::create('admin_contents', function (Blueprint $table) { $table->bigIncrements('cntnt_id')->unique(); $table->string('content_title'); $table->text('content_content'); $table->string('content_slug'); $table->bigInteger('content_category_id'); $table->foreign('content_category_id')->references('ctgry_id')->on('admin_categories'); $table->string('content_status'); $table->string('create_user'); $table->string('content_tags'); $table->string('content_excerpt'); $table->dateTime('posted_at'); $table->timestamps(); }); Models // AdminContent Model protected $table = "admin_contents"; protected $fillable = [ 'content_title', 'content_content', 'content_category_id', 'content_status', 'create_user','content_tags', 'content_excerpt', 'created_at', 'updated_at' ]; protected $guards = [ 'cntnt_id', ]; public function setCategoryName() { return $this->hasOne(AdminCategory::class); } When I want to access with $this->hasOne(AdminCategory::class) I get this error!
[ "First: relationships in Laravel are based in standardize models, using 'id' as column name for ids. If you are using another name for firstKey, you should add it to relationship definition, as stated in documentation. I mean, your relationship should not work because Eloquent doesn't know which are your tables first keys.\nSecond: when you define a relationship you should call id from your model. So how are you accessing to $this->hasOne(AdminCategory::class)?\nIt should be something like AdminContent::with('setCategoryName')\nMaybe showing some code from your controller we can give you a more accurate reply.\n", "What I want is this,\nI get the blog content with query and print it. But I am printing the content_category_id value as the id value in category table. What I need to do is get the content_category_id and the id value in the category table, the category name linked to that id. Thanks in advance for your help.\nAdmin Content Model\nnamespace App\\Models\\Admin;\n\nuse Illuminate\\Database\\Eloquent\\Factories\\HasFactory;\nuse Illuminate\\Database\\Eloquent\\Model;\n\nuse Illuminate\\Support\\Facades\\DB;\n\nclass AdminContent extends Model\n{\n use HasFactory;\n\n protected $table = \"admin_contents\";\n\n protected $primaryKey = 'cntnt_id';\n\n protected $fillable = [\n 'content_title', 'content_content',\n 'content_category_id', 'content_status', 'create_user','content_tags',\n 'content_excerpt',\n 'created_at', 'updated_at'\n ];\n\n protected $guards = [\n 'cntnt_id',\n ];\n\n public function _all()\n {\n return self::all();\n }\n\n public static function setCategoryName()\n {\n return $this->hasOne(AdminCategory::class, 'content_category_id', 'ctgry_id');\n }\n}\n\nAdmin Category Model\nnamespace App\\Models\\Admin;\n\nuse Illuminate\\Database\\Eloquent\\Factories\\HasFactory;\nuse Illuminate\\Database\\Eloquent\\Model;\n\nclass AdminCategory extends Model\n{\n use HasFactory;\n\n protected $table = 'admin_categories';\n\n protected $primaryKey = 'ctgry_id';\n\n protected $fillable = [\n 'category_name', 'updated_at'\n ];\n\n protected $quards = [\n 'ctgry_id', 'created_at'\n ];\n\n}\n\nPost Controller\nnamespace App\\Http\\Controllers;\n\nuse Illuminate\\Http\\Request;\n\nuse App\\Models\\Admin\\AdminContent;\n\nclass PostController extends Controller\n{\n public function index()\n {\n return view('frontend.blog');\n }\n\n public function getCategoryName()\n {\n return AdminContent::find(1)->setCategoryName;\n }\n}\n\nMySQL Tables\nhttps://www.hizliresim.com/2z0337a\n" ]
[ 0, 0 ]
[]
[]
[ "laravel", "laravel_8", "oop", "php" ]
stackoverflow_0074658900_laravel_laravel_8_oop_php.txt
Q: Remove duplicate keys in multiple arrays PHP I have an array ($datas) with sub-arrays like this: I need to remove sub-arrays with the same [0] value, but I can't manage it. I tested with array_unique() and nested foreach loops, but I haven't found the right approach. Any suggestions are welcome! A: OK, I found a solution! $datasCounter = count($curlDatas["data"]); //datas counts $subArrays = array_slice($datas, 0, $datasCounter); foreach ($datas as $k => $v) { foreach ($subArrays as $key => $value) { if ($k != $key && $v[0] == $value[0]) { unset($subArrays[$k]); } } } A: Here is a solution that uses a map. This is a very efficient solution I think for your case because it is not O(n^2) complexity like a double foreach, it is simply O(n) which is much faster, and is the fastest complexity for this I believe. // First, create an array of sub-arrays. $arr = [ [1, 2, 3], [1, 5, 6], [3, 3, 4], [1, 7, 8] ]; // We create a 'map' in PHP, which is basically just an array but with non-sequential (non-ordered) keys. $map = []; // We loop through all the sub-arrays and save the pair (first element, sub-array) // since it's a 'map', it will only keep 1. foreach($arr as $subarr) { // The 'idx' is the first element (sub-array[0]) $first = $subarr[0]; // If you want the first appearance of the first element (in this case [1,2,3] for '1') // then you check if the first element of this sub-array was already found (is in the map) if (!array_key_exists($first, $map)) $map[$first] = $subarr; // Set the } // Now we convert the 'map' into an array with sequential keys, // since the 'map' is just an array with non-sequential keys. $arr = array_values($map); // You can print the output. print_r($arr); The output in this case will be: Array ( [0] => Array ( [0] => 1 [1] => 2 [2] => 3 ) [1] => Array ( [0] => 3 [1] => 3 [2] => 4 ) ) A: All you have to do is re-index the main array on the value in 0 of each subarray and it will remove the duplicates: $datas = array_column($datas, null, 0); That should be usable for you and even better with the new indexes. But if you want to get back to how it was, just get the values (not necessary): $datas = array_values(array_column($datas, null, 0));
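One nuance worth knowing when choosing between the last two answers: array_column keeps the last sub-array for each duplicate key, while the map-based loop keeps the first. A quick illustration:

$datas = [
    [1, 2, 3],
    [1, 5, 6],
    [3, 3, 4],
];

print_r(array_values(array_column($datas, null, 0)));
// [[1, 5, 6], [3, 3, 4]]  -- the last duplicate wins

Swap in the map-based loop if the first occurrence should win instead.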
Remove duplicate keys in multiple arrays PHP
I have an array ($datas) with sub-arrays like this: I need to remove sub-arrays with the same [0] value, but I can't manage it. I tested with array_unique() and nested foreach loops, but I haven't found the right approach. Any suggestions are welcome!
[ "Ok, i found a solution !\n $datasCounter = count($curlDatas[\"data\"]); //datas counts\n\n $subArrays = array_slice($datas, 0, $datasCounter);\n foreach ($datas as $k => $v) {\n foreach ($subArrays as $key => $value) {\n if ($k != $key && $v[0] == $value[0]) {\n unset($subArrays[$k]);\n }\n }\n }\n\n", "Here is a solution that uses a map. This is a very efficient solution I think for your case because it is not O(n^2) complexity like a double foreach, it is simply O(n) which is much faster, and is the fastest complexity for this I believe.\n// First, create an array of sub-arrays.\n$arr = [\n [1, 2, 3],\n [1, 5, 6],\n [3, 3, 4],\n [1, 7, 8]\n];\n\n// We create a 'map' in PHP, which is basically just an array but with non-sequential (non-ordered) keys.\n$map = [];\n\n// We loop through all the sub-arrays and save the pair (first element, sub-array)\n// since it's a 'map', it will only keep 1.\nforeach($arr as $subarr)\n{\n // The 'idx' is the first element (sub-array[0])\n $first = $subarr[0];\n // If you want the first appearance of the first element (in this case [1,2,3] for '1')\n // then you check if the first element of this sub-array was already found (is in the map)\n if (!array_key_exists($first, $map))\n $map[$first] = $subarr; // Set the \n}\n\n// Now we convert the 'map' into an array with sequential keys,\n// since the 'map' is just an array with non-sequential keys.\n$arr = array_values($map);\n\n// You can print the output.\nprint_r($arr);\n\nThe output in this case will be:\nArray\n(\n [0] => Array\n (\n [0] => 1\n [1] => 2\n [2] => 3\n )\n\n [1] => Array\n (\n [0] => 3\n [1] => 3\n [2] => 4\n )\n)\n\n", "All you have to do is re-index the main array on the value in 0 of each subarray and it will remove the duplicates:\n$datas = array_column($datas, null, 0);\n\nThat should be usable for you and even better with the new indexes. But if you want to get back to how it was, just get the values (not necessary):\n$datas = array_values(array_column($datas, null, 0));\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "arrays", "loops", "php" ]
stackoverflow_0074659294_arrays_loops_php.txt
Q: I am having trouble trying to fix TypeError: string indices must be integers Grades.txt file I am currently trying to finish an assignment but I am confused about how to fix this error. I am creating a program that analyzes grades from a file and calculates the average score for each distinct section (given). I receive the error for sections[sec]["total"] = grade[grade] grades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69} # this section reads the file def calculate_average(): file = open("grades.txt", "r") sections = {} for line in file: [_, sec, grade] = line.split("\t") grade = grade.strip() if sec in sections: sections[sec]["count"] += 1 sections[sec]["total"] += grade[grade] else: sections[sec] = {} sections[sec]["count"] = 1 sections[sec]["total"] = grade[grade] file.close() # This section calculates the average data based on file for sec, secdata in sections.items(): avg = secdata[" total "] / secdata[" count"] print(" {0} : {1}".format(sec, round(avg, 2))) if __name__ == "__main__": calculate_average() A: It looks like you are trying to access the value of the grades dictionary by using the value of the grade variable as a key. This won't work because the keys of the grades dictionary are strings (e.g. 'A', 'B', 'C'), but the value of the grade variable is also a string (e.g. 'A', 'B', 'C'), so you are trying to use a string as an index for a string. To fix this error, you should use the grade variable directly to access the value in the grades dictionary, like this: sections[sec]["total"] += grades[grade] This will add the value associated with the grade string in the grades dictionary to the total field in the sections dictionary. Here is the updated code with this change: grades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69} # this section reads the file def calculate_average(): file = open("grades.txt", "r") sections = {} for line in file: [_, sec, grade] = line.split("\t") grade = grade.strip() if sec in sections: sections[sec]["count"] += 1 sections[sec]["total"] += grades[grade] else: sections[sec] = {} sections[sec]["count"] = 1 sections[sec]["total"] = grades[grade] file.close() # This section calculates the average data based on file for sec, secdata in sections.items(): avg = secdata["total"] / secdata["count"] print(" {0} : {1}".format(sec, round(avg, 2))) if __name__ == "__main__": calculate_average() A: You probably mean: sections[sec]["total"] = grades[grade]
I am having trouble trying to fix TypeError: string indices must be integers
Grades.txt file I am currently trying to finish an assignment but I am confused about how to fix this error. I am creating a program that analyzes grades from a file and calculates the average score for each distinct section (given). I receive the error for sections[sec]["total"] = grade[grade] grades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69} # this section reads the file def calculate_average(): file = open("grades.txt", "r") sections = {} for line in file: [_, sec, grade] = line.split("\t") grade = grade.strip() if sec in sections: sections[sec]["count"] += 1 sections[sec]["total"] += grade[grade] else: sections[sec] = {} sections[sec]["count"] = 1 sections[sec]["total"] = grade[grade] file.close() # This section calculates the average data based on file for sec, secdata in sections.items(): avg = secdata[" total "] / secdata[" count"] print(" {0} : {1}".format(sec, round(avg, 2))) if __name__ == "__main__": calculate_average()
[ "It looks like you are trying to access the value of the grade dictionary by using the value of the grade variable as a key. This won't work because the keys of the grade dictionary are strings (e.g. 'A', 'B', 'C'), but the value of the grade variable is also a string (e.g. 'A', 'B', 'C'), so you are trying to use a string as an index for a dictionary.\nTo fix this error, you should use the grade variable directly to access the value in the grade dictionary, like this:\nsections[sec][\"total\"] += grades[grade]\n\nThis will add the value associated with the grade string in the grades dictionary to the total field in the sections dictionary.\nHere is the updated code with this change:\ngrades = {'A': 100, 'B': 89, 'C': 79, 'D': 74, 'F': 69}\n\n# this section reads the file\n\n\ndef calculate_average():\n file = open(\"grades.txt\", \"r\")\n sections = {}\n for line in file:\n [_, sec, grade] = line.split(\"\\t\")\n grade = grade.strip()\n if sec in sections:\n sections[sec][\"count\"] += 1\n sections[sec][\"total\"] += grades[grade]\n else:\n sections[sec] = {}\n sections[sec][\"count\"] = 1\n sections[sec][\"total\"] = grades[grade]\n file.close()\n\n# This section calculates the average data based on file\n\n for sec, secdata in sections.items():\n avg = secdata[\" total \"] / secdata[\" count\"]\n print(\" {0} : {1}\".format(sec, round(avg, 2)))\n\n\nif __name__ == \"__main__\":\n calculate_average()\n\n", "You probably mean:\nsections[sec][\"total\"] = grades[grade]\n\n" ]
[ 0, 0 ]
[ "Welcome to SO\nThe error message you are getting is because you are trying to use the grade as a key to access the value in the grades dictionary. However, the grade variable contains the actual grade (e.g. 'A', 'B', etc.), not the key. To fix this, you need to use the grade variable to access the corresponding value in the grades dictionary, like this:\nsections[sec][\"total\"] = grades[grade]\n\nHere is the complete calculate_average function with this change applied:\ndef calculate_average():\n file = open(\"grades.txt\", \"r\")\n sections = {}\n for line in file:\n [_, sec, grade] = line.split(\"\\t\")\n grade = grade.strip()\n if sec in sections:\n sections[sec][\"count\"] += 1\n sections[sec][\"total\"] += grades[grade]\n else:\n sections[sec] = {}\n sections[sec][\"count\"] = 1\n sections[sec][\"total\"] = grades[grade]\n file.close()\n\n for sec, secdata in sections.items():\n avg = secdata[\" total \"] / secdata[\" count\"]\n print(\" {0} : {1}\".format(sec, round(avg, 2)))\n\nHowever, I think you can optimise this function a bit better.\nIt is good practice to use the with statement when working with files, as it ensures that the file is properly closed even if an error occurs. This ensures that the file is automatically closed when the with block is exited, even if an error occurs.\nSecondly, you can eliminate the need for the if statement inside the for loop that processes the lines in the file. Instead, the if statement is moved outside the for loop and only executed once for each section.\nA third improvement would be within the format of the string. You can use the below code to round which is more pythonic.\nprint(\" {sec} : {avg:.2f}\".format(sec=sec, avg=avg))\n\nFinally, you don't need to unpack your variables into a list you can remove this.\nI also think your variables could be renamed better to give you a final result of:\ndef calculate_average():\n with open(\"grades.txt\", \"r\") as file:\n sections = {}\n for line in file:\n _, section, grade = line.split(\"\\t\")\n grade = grade.strip()\n if section not in sections:\n sections[section] = {\"count\": 0, \"total\": 0}\n sections[section][\"count\"] += 1\n sections[section][\"total\"] += grades[grade]\n\n for section, section_data in sections.items():\n avg = section_data[\"total\"] / section_data[\"count\"]\n print(\" {section} : {avg:.2f}\".format(section=section, avg=avg))\n\nAnother possible improvement could be to use:\nsections[section][\"total\"] += grades.get(grade, 0)\n# rather than\nsections[section][\"total\"] += grades[grade]\n\nThis could stop exceptions being raised if the grade is not in the dictionary but it depends on your desired behaviour.\n" ]
[ -1 ]
[ "python" ]
stackoverflow_0074661176_python.txt
Q: Create progress bar based on interval timer I have some simple code that will run a macro every xxx seconds based on a value the user puts into a cell. For example, if the user puts in "30", it will run the macro once every 30 seconds. Here is the code: Public interval As Double Sub Start_Import() Set sht = ThisWorkbook.Sheets("Timing") 'Tells where to find the interval value interval = Now + TimeValue(sht.Range("X6").Text) 'Tells Excel when to next run the macro. Application.OnTime interval, "RunMacro" End Sub All of this works OK. I want to add something that looks like a progress bar or a series of "....." or a circle that will progress based on the timer interval. For example, if the interval is 30 then the bar would take 30 seconds to move left to right. Everything I am finding is related to how long a task takes to run which I do not think is the same. I have tried to adopt some task timers but cannot make them work. Any ideas or suggestions would be great. Many thanks in advance. A: I created this bare bones loading bar for the status bar. I'm sure you could adapt it to your code no problem. Sub StatusbarPercent() Dim I As Long Dim N As Long Dim TotalSeconds As Long Dim Filler As String Dim spaces As String TotalSeconds = 15 For I = 1 To 100 spaces = spaces & " " Next I For I = 1 To TotalSeconds Filler = "" For N = 1 To CInt(I / TotalSeconds * 100) Filler = Filler & "|" Next N Application.StatusBar = "Working : <" & Left(Filler & spaces, 100) & "> " & CInt(I / TotalSeconds * 100) & "%" Application.Wait Now() + TimeValue("00:00:01") Next I End Sub
Create progress bar based on interval timer
I have some simple code that will run a macro every xxx seconds based on a value the user puts into a cell. For example, if the user puts in "30", it will run the macro once every 30 seconds. Here is the code: Public interval As Double Sub Start_Import() Set sht = ThisWorkbook.Sheets("Timing") 'Tells where to find the interval value interval = Now + TimeValue(sht.Range("X6").Text) 'Tells Excel when to next run the macro. Application.OnTime interval, "RunMacro" End Sub All of this works OK. I want to add something that looks like a progress bar or a series of "....." or a circle that will progress based on the timer interval. For example, if the interval is 30 then the bar would take 30 seconds to move left to right. Everything I am finding is related to how long a task takes to run which I do not think is the same. I have tried to adopt some task timers but cannot make them work. Any ideas or suggestions would be great. Many thanks in advance.
[ "I created this bare bones loading bar for the status bar. I'm sure you could adapt it to your code no problem.\nSub StatusbarPercent()\n\n Dim I As Long\n Dim N As Long\n Dim TotalSeconds As Long\n Dim Filler As String\n Dim spaces As String\n \n TotalSeconds = 15\n \n For I = 1 To 100\n spaces = spaces & \" \"\n Next I\n \n For I = 1 To TotalSeconds\n Filler = \"\"\n For N = 1 To CInt(I / TotalSeconds * 100)\n Filler = Filler & \"|\"\n Next N\n Application.StatusBar = \"Working : <\" & Left(Filler & spaces, 100) & \"> \" & CInt(I / TotalSeconds * 100) & \"%\"\n Application.Wait Now() + TimeValue(\"00:00:01\")\n Next I\n \nEnd Sub\n\n\n\n\n" ]
[ 0 ]
[]
[]
[ "excel", "progress_bar", "timer", "vba" ]
stackoverflow_0074659867_excel_progress_bar_timer_vba.txt
Q: Highcharts heatmap not showing data I'm trying to build a Highcharts heatmap but I cannot display the data. I'm fairly new to front end and js so maybe I'm missing something. highchart heatmap The data shows up as a label but is not displayed.
<!doctype html>
<html>
<head>
<script src="https://code.highcharts.com/highcharts.js"></script>
    <script src="https://code.highcharts.com/modules/heatmap.js"></script>
    <script src="https://code.highcharts.com/modules/exporting.js"></script>
    <script src="https://code.highcharts.com/modules/data.js"></script>
    <script src="https://code.highcharts.com/modules/boost-canvas.js"></script>
    <script src="https://code.highcharts.com/modules/boost.js"></script>
    <script src="https://code.highcharts.com/modules/accessibility.js"></script>
</head>
<body>
<div id="container"></div>
<script>
    Highcharts.chart('container', {
        chart: {
            type: 'heatmap'
        },
        boost: {
            useGPUTranslations: true
        },
        xAxis: {
            type: 'datetime',
        },
        yAxis: {
            title: {
                text: null
            },
            labels: {
                format: '{value}'
            },
            tickWidth: 1,
            min: 17000,
            max: 25000
        },
        colorAxis: {
            stops: [
                [-10000, '#3060cf'],
                [0, '#ffffff'],
                [10000, '#c4463a']
            ],
            min: -10000,
            max: 10000,
        },
        series: [{
            name: 'Stop losses',
            data: {{ csv }},
            nullColor: '#FFFFFF'
        }]
    });
</script>
</body>
</html>

You can see the example at the jsfiddle: https://jsfiddle.net/zx6ukdtn/
A: Your example doesn't work for a few reasons:
First of all, you have to set series.colsize and series.rowsize which are 1 by default. In datetime axis, you should set them equal to your time period/unit on a particular axis.
Example code based on your data:
 series: [{
    colsize: 36e5, // one hour
    rowsize: 50,
    data: [..]
 }]

Secondly, colorAxis.stops are defined incorrectly. As API says:
The stops is an array of tuples, where the first item is a float between 0 and 1 assigning the relative position in the gradient, and the second item is the color.
Example code:
 colorAxis: {
    stops: [
      [0, '#3060cf'], // corresponds to the lowest value (-382.249)
      [0.5, '#ffffff'],
      [1, '#c4463a'] // corresponds to the highest value (0)
    ],
    /* min: - 300,
    max: -10 */ // You can set min and max in the scope of values
 },

Demo:
http://jsfiddle.net/BlackLabel/veo4dybq/
You can read more about colorAxis.stops in the following article:
https://highcharts.freshdesk.com/support/solutions/articles/44002324217
API References:
https://api.highcharts.com/highcharts/colorAxis.stops
https://api.highcharts.com/highcharts/series.heatmap.colsize
https://api.highcharts.com/highcharts/series.heatmap.rowsize
Highcharts heatmap not showing data
I'm trying to build a Highcharts heatmap but I cannot display the data. I'm fairly new to front end and js so maybe I'm missing something. highchart heatmap The data shows up as a label but is not displayed.
<!doctype html>
<html>
<head>
<script src="https://code.highcharts.com/highcharts.js"></script>
    <script src="https://code.highcharts.com/modules/heatmap.js"></script>
    <script src="https://code.highcharts.com/modules/exporting.js"></script>
    <script src="https://code.highcharts.com/modules/data.js"></script>
    <script src="https://code.highcharts.com/modules/boost-canvas.js"></script>
    <script src="https://code.highcharts.com/modules/boost.js"></script>
    <script src="https://code.highcharts.com/modules/accessibility.js"></script>
</head>
<body>
<div id="container"></div>
<script>
    Highcharts.chart('container', {
        chart: {
            type: 'heatmap'
        },
        boost: {
            useGPUTranslations: true
        },
        xAxis: {
            type: 'datetime',
        },
        yAxis: {
            title: {
                text: null
            },
            labels: {
                format: '{value}'
            },
            tickWidth: 1,
            min: 17000,
            max: 25000
        },
        colorAxis: {
            stops: [
                [-10000, '#3060cf'],
                [0, '#ffffff'],
                [10000, '#c4463a']
            ],
            min: -10000,
            max: 10000,
        },
        series: [{
            name: 'Stop losses',
            data: {{ csv }},
            nullColor: '#FFFFFF'
        }]
    });
</script>
</body>
</html>

You can see the example at the jsfiddle: https://jsfiddle.net/zx6ukdtn/
[ "Your example doesn't work because of a few reasons:\nFirst of all, you have to set series.colsize and series.rowsize which are 1 by default. In datetime axis, you should set them to equal to your time period/unit on a particular axis.\nExample code based on your data:\n series: [{\n colsize: 36e5, // one hour\n rowsize: 50,\n data: [..]\n }]\n\nSecondly, colorAxis.stops are defined incorrectly. As API says:\nThe stops is an array of tuples, where the first item is a float between 0 and 1 assigning the relative position in the gradient, and the second item is the color.\nExample code:\n colorAxis: {\n stops: [\n [0, '#3060cf'], // corresponds to the lowest value (-382.249)\n [0.5, '#ffffff'],\n [1, '#c4463a'] // corresponds to the highest value (0)\n ],\n /* min: - 300,\n max: -10 */ // You can set min and max in the scope of values\n },\n\nDemo:\nhttp://jsfiddle.net/BlackLabel/veo4dybq/\nYou can read more about colorAxis.stops in the following article:\nhttps://highcharts.freshdesk.com/support/solutions/articles/44002324217\nAPI References:\nhttps://api.highcharts.com/highcharts/colorAxis.stops\nhttps://api.highcharts.com/highcharts/series.heatmap.colsize\nhttps://api.highcharts.com/highcharts/series.heatmap.rowsize\n" ]
[ 0 ]
[]
[]
[ "highcharts", "html", "javascript" ]
stackoverflow_0074638642_highcharts_html_javascript.txt
Q: how to automatically change IP got from proxy api to use for selenium from selenium import webdriver
from selenium.webdriver.common.proxy import *
from selenium.webdriver.common.by import By
from time import sleep
import requests

response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0")
print(response.json())

proxy_url = "127.0.0.1:9009"

proxy = Proxy({
    'proxyType': ProxyType.MANUAL,
    'httpProxy': proxy_url,
    'sslProxy': proxy_url,
    'noProxy': ''})

capabilities = webdriver.DesiredCapabilities.CHROME
proxy.add_to_capabilities(capabilities)

driver = webdriver.Chrome(desired_capabilities=capabilities)
driver.get("https://whoer.net/zh")

I am having a problem. I have rented a rotating proxy from a web service and I want to change the IP continuously via the api. I used the requests module to get a new IP, so how can my script automatically pick up that new IP and use it in place of the old one? I'm a newbie and really don't know much, hope someone can help me. Thanks very much!
I read the instructions of the service site but they don't have a tutorial for Python
A: The code appears to be correctly importing the necessary modules and using them to create a Proxy object and a webdriver.Chrome object.
However, there are a few issues with the code that may cause it to not work as expected:
The proxy_url variable is set to "127.0.0.1:9009", which is the localhost IP address and port number. This is not a valid proxy server, and it will not allow the webdriver.Chrome object to access the internet. You should replace this with a valid proxy server IP address and port number.
The response variable is not being used in the code. The requests.get() method is used to get a response from the proxy API, but the response is not being saved or used in any way. You should either use the response to get a valid proxy server IP address and port number, or remove the requests.get() method from the code.
The sleep() method is imported but not used in the code. This is not causing any errors, but it is unnecessary and can be removed from the code.
Here is the corrected code:
from selenium import webdriver
from selenium.webdriver.common.proxy import *
from selenium.webdriver.common.by import By
import requests

response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0")
ip = response.json()["ip"]
port = response.json()["port"]
proxy_url = f"{ip}:{port}"

proxy = Proxy({
    'proxyType': ProxyType.MANUAL,
    'httpProxy': proxy_url,
    'sslProxy': proxy_url,
    'noProxy': ''})

capabilities = webdriver.DesiredCapabilities.CHROME
proxy.add_to_capabilities(capabilities)

driver = webdriver.Chrome(desired_capabilities=capabilities)
driver.get("https://whoer.net/zh")

A: To change the IP address used by your Selenium webdriver, you can do the following:
from selenium import webdriver
from selenium.webdriver.common.proxy import *
from selenium.webdriver.common.by import By
from time import sleep
import requests

# Get the latest IP address from the proxy service
response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0")
ip = response.json()["ip"]
port = response.json()["port"]

# Update the proxy URL with the new IP address
proxy_url = f"{ip}:{port}"

# Create a proxy object with the updated proxy URL
proxy = Proxy({
    'proxyType': ProxyType.MANUAL,
    'httpProxy': proxy_url,
    'sslProxy': proxy_url,
    'noProxy': ''})

# Update the capabilities object to use the new proxy
capabilities = webdriver.DesiredCapabilities.CHROME
proxy.add_to_capabilities(capabilities)

# Create a new instance of the webdriver with the updated capabilities
driver = webdriver.Chrome(desired_capabilities=capabilities)
driver.get("https://whoer.net/zh")
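To keep rotating while the script runs — a minimal sketch, not the answerers' code; the "ip"/"port" response keys and the rotation behaviour of the API are assumptions carried over from the answers above, so check your provider's documentation — you can wrap the setup in a function and call it once per rotation:

import time
import requests
from selenium import webdriver
from selenium.webdriver.common.proxy import Proxy, ProxyType

API_URL = "http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0"

def new_driver_with_fresh_proxy():
    # ask the rotation API for a fresh endpoint (assumed "ip"/"port" keys)
    data = requests.get(API_URL).json()
    proxy_url = f"{data['ip']}:{data['port']}"
    proxy = Proxy({'proxyType': ProxyType.MANUAL,
                   'httpProxy': proxy_url,
                   'sslProxy': proxy_url,
                   'noProxy': ''})
    capabilities = webdriver.DesiredCapabilities.CHROME
    proxy.add_to_capabilities(capabilities)
    return webdriver.Chrome(desired_capabilities=capabilities)

for _ in range(3):   # three rotations as an example
    driver = new_driver_with_fresh_proxy()
    driver.get("https://whoer.net/zh")
    time.sleep(5)    # do the actual scraping here
    driver.quit()    # a proxy set at startup needs a new browser session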
how to automatically change IP got from proxy api to use for selenium
from selenium import webdriver
from selenium.webdriver.common.proxy import *
from selenium.webdriver.common.by import By
from time import sleep
import requests

response = requests.get("http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0")
print(response.json())

proxy_url = "127.0.0.1:9009"

proxy = Proxy({
    'proxyType': ProxyType.MANUAL,
    'httpProxy': proxy_url,
    'sslProxy': proxy_url,
    'noProxy': ''})

capabilities = webdriver.DesiredCapabilities.CHROME
proxy.add_to_capabilities(capabilities)

driver = webdriver.Chrome(desired_capabilities=capabilities)
driver.get("https://whoer.net/zh")

I am having a problem. I have rented a rotating proxy from a web service and I want to change the IP continuously via the api. I used the requests module to get a new IP, so how can my script automatically pick up that new IP and use it in place of the old one? I'm a newbie and really don't know much, hope someone can help me. Thanks very much!
I read the instructions of the service site but they don't have a tutorial for Python
[ "The code appears to be correctly importing the necessary modules and using them to create a Proxy object and a webdriver.Chrome object.\nHowever, there are a few issues with the code that may cause it to not work as expected:\nThe proxy_url variable is set to \"127.0.0.1:9009\", which is the localhost IP address and port number. This is not a valid proxy server, and it will not allow the webdriver.Chrome object to access the internet. You should replace this with a valid proxy server IP address and port number.\nThe response variable is not being used in the code. The requests.get() method is used to get a response from the proxy API, but the response is not being saved or used in any way. You should either use the response to get a valid proxy server IP address and port number, or remove the requests.get() method from the code.\nThe sleep() method is imported but not used in the code. This is not causing any errors, but it is unnecessary and can be removed from the code.\nHere is the corrected code:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.proxy import *\nfrom selenium.webdriver.common.by import By\nimport requests\n\nresponse = requests.get(\"http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0\")\nproxy_url = response.json()[\"ip\"]\n\nproxy = Proxy({\n 'proxyType': ProxyType.MANUAL,\n 'httpProxy': proxy_url,\n 'sslProxy': proxy_url,\n 'noProxy': ''})\n\ncapabilities = webdriver.DesiredCapabilities.CHROME\nproxy.add_to_capabilities(capabilities)\n\ndriver = webdriver.Chrome(desired_capabilities=capabilities)\ndriver.get(\"https://whoer.net/zh\")\n\n", "To change the IP address used by your Selenium webdriver, you can do the following:\nfrom selenium import webdriver\nfrom selenium.webdriver.common.proxy import *\nfrom selenium.webdriver.common.by import By\nfrom time import sleep\nimport requests\n\n# Get the latest IP address from the proxy service\nresponse = requests.get(\"http://proxy.tinsoftsv.com/api/changeProxy.php?key=mykey_apiG&location=0\")\nip = response.json()[\"ip\"]\nport = response.json()[\"port\"]\n\n# Update the proxy URL with the new IP address\nproxy_url = f\"{ip}:{port}\"\n\n# Create a proxy object with the updated proxy URL\nproxy = Proxy({\n 'proxyType': ProxyType.MANUAL,\n 'httpProxy': proxy_url,\n 'sslProxy': proxy_url,\n 'noProxy': ''})\n\n# Update the capabilities object to use the new proxy\ncapabilities = webdriver.DesiredCapabilities.CHROME\nproxy.add_to_capabilities(capabilities)\n\n# Create a new instance of the webdriver with the updated capabilities\ndriver = webdriver.Chrome(desired_capabilities=capabilities)\ndriver.get(\"https://whoer.net/zh\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "api", "python", "python_3.x", "selenium", "selenium_webdriver" ]
stackoverflow_0074661524_api_python_python_3.x_selenium_selenium_webdriver.txt
Q: What is a good way to think of these layout constraints I am learning about layout constraints and find it a bit confusing why the last line of NSLayout Constraints for the trailing anchor mentions a view instead of loginView? Is there any good logical way to think of this? Struggling to imagine what is written.
let loginView = LoginView()
view.addSubview(loginView)

NSLayoutConstraint.activate([
    loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
    loginView.leadingAnchor.constraint(equalToSystemSpacingAfter: view.leadingAnchor, multiplier: 1),
    view.trailingAnchor.constraint(equalToSystemSpacingAfter: loginView.trailingAnchor, multiplier: 1)
])

A: To clarify the "flipping" between:
loginView.leadingAnchor.constraint(...)

and:
view.trailingAnchor.constraint(...)

Both of these sets of constraints will give the same result:
NSLayoutConstraint.activate([

    loginView.heightAnchor.constraint(equalToConstant: 120.0),
    loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),

    loginView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 8.0),

    loginView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -8.0),

])


NSLayoutConstraint.activate([

    loginView.heightAnchor.constraint(equalToConstant: 120.0),
    loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),

    loginView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 8.0),

    view.trailingAnchor.constraint(equalTo: loginView.trailingAnchor, constant: 8.0),

])

In each case, we're telling auto-layout to put the trailing-edge of loginView 8-points from the trailing-edge of view.
Which approach to use really comes down to individual preference: Do I like using all Positive values, with order-flipping? Or do I like using Positive values for "left-side" constraints and Negative values for "right-side" constraints without order-flipping (obviously, flip the terminology for RTL locales).
Starting with iOS 11, Apple added the concept of system spacing - which changes based on device size, accessibility options, etc - which we can use instead of hard-coded values.
We have equalToSystemSpacingAfter (and equalToSystemSpacingBelow), but we do not have equalToSystemSpacingBefore (or equalToSystemSpacingAbove).
So, if we want to use system spacing, we must "flip" the constraint order:
NSLayoutConstraint.activate([

    loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),
    loginView.heightAnchor.constraint(equalToConstant: 120.0),

    loginView.leadingAnchor.constraint(equalToSystemSpacingAfter: view.leadingAnchor, multiplier: 1),
    view.trailingAnchor.constraint(equalToSystemSpacingAfter: loginView.trailingAnchor, multiplier: 1),

])

A: The code you posted is defining a set of layout constraints for the loginView object. The constraints specify how the loginView should be positioned within its parent view.
In the last line of the code, the view.trailingAnchor is being used as the reference for the trailing edge of the loginView. This means that the loginView will be positioned such that its trailing edge is aligned with the trailing edge of the parent view.
In general, when working with layout constraints, it is important to think about the relationship between the views being constrained and the constraints themselves. In this case, the loginView is the view being constrained, and the constraints are defining how the loginView should be positioned relative to its parent view.
A: view means self.view. This is a UIViewController; it has a view. 
This is the view that will contain the loginView; you can actually see the loginView being added to the view controller's view as a subview, right there in the code. So this code inserts the loginView into self.view and then proceeds to describe the physical relationship between their sizes and positions.
What is a good way to think of these layout constraints
I am learning about layout constraints and find it a bit confusing why the last line of NSLayout Constraints for the trailing anchor mentions a view instead of loginView? Is there any good logical way to think of this? Struggling to imagine what is written. let loginView = LoginView() view.addSubview(loginView) NSLayoutConstraint.activate([ loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor), loginView.leadingAnchor.constraint(equalToSystemSpacingAfter: view.leadingAnchor, multiplier: 1), view.trailingAnchor.constraint(equalToSystemSpacingAfter: loginView.trailingAnchor, multiplier: 1) ])
[ "To clarify the \"flipping\" between:\nloginView.leadingAnchor.constraint(...)\n\nand:\nview.trailingAnchor.constraint(...)\n\nBoth of these sets of constraints will give the same result:\nNSLayoutConstraint.activate([\n\n loginView.heightAnchor.constraint(equalToConstant: 120.0),\n loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),\n\n loginView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 8.0),\n\n loginView.trailingAnchor.constraint(equalTo: view.trailingAnchor, constant: -8.0),\n \n])\n\n\nNSLayoutConstraint.activate([\n \n loginView.heightAnchor.constraint(equalToConstant: 120.0),\n loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),\n \n loginView.leadingAnchor.constraint(equalTo: view.leadingAnchor, constant: 8.0),\n\n view.trailingAnchor.constraint(equalTo: loginView.trailingAnchor, constant: 8.0),\n \n])\n\nIn each case, we're telling auto-layout to put the trailing-edge of loginView 8-points from the trailing-edge of view.\nWhich approach to use really comes down to individual preference: Do I like using all Positive values, with order-flipping? Or do I like using Positive values for \"left-side\" constraints and Negative values for \"right-side\" constraints without order-flipping (obviously, flip the terminology for LTR locales).\nStarting with iOS 11, Apple added the concept of system spacing - which changes based on device size, accessibility options, etc - which we can use instead of hard-coded values.\nWe have equalToSystemSpacingAfter (and equalToSystemSpacingBelow), but we do not have equalToSystemSpacingBefore (or equalToSystemSpacingAbove).\nSo, if we want to use system spacing, we must \"flip\" the constraint order:\nNSLayoutConstraint.activate([\n \n loginView.centerYAnchor.constraint(equalTo: view.centerYAnchor),\n loginView.heightAnchor.constraint(equalToConstant: 120.0),\n \n loginView.leadingAnchor.constraint(equalToSystemSpacingAfter: view.leadingAnchor, multiplier: 1),\n view.trailingAnchor.constraint(equalToSystemSpacingAfter: loginView.trailingAnchor, multiplier: 1),\n \n])\n\n", "The code you posted is defining a set of layout constraints for the loginView object. The constraints specify how the loginView should be positioned within its parent view.\nIn the last line of the code, the view.trailingAnchor is being used as the reference for the trailing edge of the loginView. This means that the loginView will be positioned such that its trailing edge is aligned with the trailing edge of the parent view.\nIn general, when working with layout constraints, it is important to think about the relationship between the views being constrained and the constraints themselves. In this case, the loginView is the view being constrained, and the constraints are defining how the loginView should be positioned relative to its parent view.\n", "view means self.view. This is a UIViewController; it has a view. This is the view that will contain the loginView; you can actually see the loginView being added to the view controller's view as a subview, right there in the code.\nSo this code inserts the loginView into self.view and then proceeds to describe the physical relationship between their sizes and positions.\n" ]
[ 2, 0, 0 ]
[]
[]
[ "autolayout", "swift" ]
stackoverflow_0074660368_autolayout_swift.txt
Q: How to create a function using for loop Hi, I am a beginner learning for loops and functions. I am trying to scrape the job count given on each career URL. Is there a way I can create one function instead of the following two individual code blocks, one for each website?
import os
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
import re

os.environ['PATH'] += "/Users/monicayadav/PycharmProjects/pythonProject4/selenium/venv/bin"
driver = webdriver.Firefox()
driver.implicitly_wait(10)

# 1 Benteler Automotive
driver.get('https://career.benteler.jobs/go/All-Jobs/3197201/')
WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, "(//b)[11]")))

# Matches `1,234`, `1`, `12`, `1,234,567`
r = re.compile(r'^([0-9,]+).*$')

JobCountBenteler = WebDriverWait(driver, 10).until(
    lambda _: (e := driver.find_element(By.XPATH, "(//b)[11]")) \
              and (m := r.match(e.text)) \
              and m.group(1)
)
print(JobCountBenteler)

# 2 Best Buy
driver.get('https://jobs.bestbuy.com/bby?id=all_jobs&spa=1&s=req_id_num')
WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, "//p[contains(@class, 'font-wt-500 ng-binding')]")))

# Matches `1,234`, `1`, `12`, `1,234,567`
r = re.compile(r'^([0-9,]+).*$')

JobCountBESTBUY = WebDriverWait(driver, 10).until(
    lambda _: (e := driver.find_element(By.XPATH, "//p[contains(@class, 'font-wt-500 ng-binding')]")) \
              and (m := r.match(e.text)) \
              and m.group(1)
)
print(JobCountBESTBUY)
driver.quit()

A: You may try this way:
def your_function_name(url, xpath):
    driver.get(url)
    WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, xpath)))

    # Matches `1,234`, `1`, `12`, `1,234,567`
    r = re.compile(r'^([0-9,]+).*$')
    jobCount = WebDriverWait(driver, 10).until(
        lambda _: (e := driver.find_element(By.XPATH, xpath))
                  and (m := r.match(e.text))
                  and m.group(1)
    )
    print(jobCount)


# Benteler Automotive
your_function_name('https://career.benteler.jobs/go/All-Jobs/3197201/', "(//b)[11]")

# Best Buy
your_function_name('https://jobs.bestbuy.com/bby?id=all_jobs&spa=1&s=req_id_num',
                   "//p[contains(@class, 'font-wt-500 ng-binding')]")
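Once such a function exists, scaling to many career pages reduces to a plain for loop over (url, xpath) pairs — a minimal sketch reusing the your_function_name helper from the answer above:

sites = [
    ("https://career.benteler.jobs/go/All-Jobs/3197201/", "(//b)[11]"),
    ("https://jobs.bestbuy.com/bby?id=all_jobs&spa=1&s=req_id_num",
     "//p[contains(@class, 'font-wt-500 ng-binding')]"),
]

# one loop instead of one copy-pasted block per website
for url, xpath in sites:
    your_function_name(url, xpath)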
How to create a function using for loop
Hi, I am a beginner learning for loops and functions. I am trying to scrape the job count given on each career URL. Is there a way I can create one function instead of the following two individual code blocks, one for each website?
import os
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.wait import WebDriverWait
from selenium.webdriver.support import expected_conditions as ec
import re

os.environ['PATH'] += "/Users/monicayadav/PycharmProjects/pythonProject4/selenium/venv/bin"
driver = webdriver.Firefox()
driver.implicitly_wait(10)

# 1 Benteler Automotive
driver.get('https://career.benteler.jobs/go/All-Jobs/3197201/')
WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, "(//b)[11]")))

# Matches `1,234`, `1`, `12`, `1,234,567`
r = re.compile(r'^([0-9,]+).*$')

JobCountBenteler = WebDriverWait(driver, 10).until(
    lambda _: (e := driver.find_element(By.XPATH, "(//b)[11]")) \
              and (m := r.match(e.text)) \
              and m.group(1)
)
print(JobCountBenteler)

# 2 Best Buy
driver.get('https://jobs.bestbuy.com/bby?id=all_jobs&spa=1&s=req_id_num')
WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, "//p[contains(@class, 'font-wt-500 ng-binding')]")))

# Matches `1,234`, `1`, `12`, `1,234,567`
r = re.compile(r'^([0-9,]+).*$')

JobCountBESTBUY = WebDriverWait(driver, 10).until(
    lambda _: (e := driver.find_element(By.XPATH, "//p[contains(@class, 'font-wt-500 ng-binding')]")) \
              and (m := r.match(e.text)) \
              and m.group(1)
)
print(JobCountBESTBUY)
driver.quit()
[ "You may try this way:\ndef your_function_name(url, xpath):\n driver.get(url)\n WebDriverWait(driver, 10).until(ec.presence_of_element_located((By.XPATH, xpath)))\n\n # Matches `1,234`, `1`, `12`, `1,234,567`\n r = re.compile(r'^([0-9,]+).*$')\n jobCount = WebDriverWait(driver, 10).until(\n lambda _: (e := driver.find_element(By.XPATH, xpath))\n and (m := r.match(e.text))\n and m.group(1)\n )\n print(jobCount)\n\n\n# Benteler Automotive\nyour_function_name('https://career.benteler.jobs/go/All-Jobs/3197201/', \"(//b)[11]\")\n\n# Best Buy\nyour_function_name('https://jobs.bestbuy.com/bby?id=all_jobs&spa=1&s=req_id_num',\n \"//p[contains(@class, 'font-wt-500 ng-binding')]\")\n\n" ]
[ 0 ]
[]
[]
[ "function", "python_3.x", "selenium", "selenium_webdriver", "xpath_2.0" ]
stackoverflow_0074660215_function_python_3.x_selenium_selenium_webdriver_xpath_2.0.txt
Q: 'DataFrame' object does not support item assignment I imported a df into Databricks as a pyspark.sql.dataframe.DataFrame. Within this df I have 3 columns (which I have verified to be strings) that I wish to concatenate. I have tried to use a simple "+" function first, eg.
df["fullname"] = df["firstname"] + df["middlename"] + df["lastname"] 

But I keep receiving the error "'DataFrame' object does not support item assignment".
So I tried to add .astype(str) after every column, to no avail. Finally I tried to simply add another column full of the number 5:
df['new_col'] = 5
and received the same error. So now I'm thinking maybe this dataframe is immutable. But I even tried to make a copy of the original df hoping I could modify it
df2 = df.select('*')
But once again I could not concatenate or modify the new dataframe. Any help is greatly appreciated!
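Two standard PySpark idioms cover both failures described above — a hedged sketch, assuming the column names from the question: functions.lit() wraps a Python constant as a column expression, and concat_ws() joins strings with a separator while skipping NULLs (plain concat() returns NULL if any input column is NULL):

from pyspark.sql import functions as F

# pandas-style assignment (df["new_col"] = 5) fails on Spark DataFrames;
# lit() turns the constant 5 into a column expression instead
df = df.withColumn("new_col", F.lit(5))

# concat_ws() inserts a separator between values and ignores NULL columns
df = df.withColumn("fullname",
                   F.concat_ws(" ", "firstname", "middlename", "lastname"))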
'DataFrame' object does not support item assignment
I imported a df into Databricks as a pyspark.sql.dataframe.DataFrame. Within this df I have 3 columns (which I have verified to be strings) that I wish to concatenate. I have tried to use a simple "+" function first, eg.
df["fullname"] = df["firstname"] + df["middlename"] + df["lastname"] 

But I keep receiving the error "'DataFrame' object does not support item assignment".
So I tried to add .astype(str) after every column, to no avail. Finally I tried to simply add another column full of the number 5:
df['new_col'] = 5
and received the same error. So now I'm thinking maybe this dataframe is immutable. But I even tried to make a copy of the original df hoping I could modify it
df2 = df.select('*')
But once again I could not concatenate or modify the new dataframe. Any help is greatly appreciated!
[ "The error message you are getting suggests that the DataFrame object you are trying to modify is immutable, which means that it cannot be changed. To solve this problem, you will need to create a new DataFrame object that contains the concatenated column. You can do this using the withColumn method, which creates a new DataFrame by adding a new column to the existing DataFrame. Here is an example of how you can use withColumn to concatenate the three columns in your DataFrame and create a new DataFrame:\nfrom pyspark.sql.functions import concat\n\n# Concatenate the columns and create a new DataFrame\ndf2 = df.withColumn(\"fullname\", concat(df[\"firstname\"], df[\"middlename\"], df[\"lastname\"]))\n\nThis will create a new DataFrame called df2 that contains the concatenated column. You can then use this new DataFrame for any further operations you need to perform on your data.\nAlternatively, if you don't need to keep the original DataFrame and want to modify the existing DataFrame, you can use the withColumn method in place of the assignment operator (=) to add the new column to the existing DataFrame:\nfrom pyspark.sql.functions import concat\n\n# Concatenate the columns and add the new column to the existing DataFrame\ndf = df.withColumn(\"fullname\", concat(df[\"firstname\"], df[\"middlename\"], df[\"lastname\"]))\n\nThis will add the new column to the existing DataFrame and you can then use the updated DataFrame for any further operations you need to perform.\n" ]
[ 1 ]
[]
[]
[ "databricks", "dataframe", "pandas", "pyspark", "python" ]
stackoverflow_0074661704_databricks_dataframe_pandas_pyspark_python.txt
Q: BeautifulSoup find partial string in section I am trying to use BeautifulSoup to scrape a particular download URL from a web page, based on a partial text match. There are many links on the page, and it changes frequently. The html I'm scraping is full of sections that look something like this: <section class="onecol habonecol"> <a href="https://longGibberishDownloadURL" title="Download"> <img src="\azure_storage_blob\includes\download_for_windows.png"/> </a> sentinel-3.2022335.1201.1507_1608C.ab.L3.FL3.v951T202211_1_3.CIcyano.LakeOkee.tif </section> The second to last line (sentinel-3.2022335...LakeOkee.tif) is the part I need to search using a partial string to pull out the correct download url. The code I have attempted so far looks something like this: import requests, re from bs4 import BeautifulSoup reqs = requests.get(url) soup = BeautifulSoup(reqs.text, 'html.parser') result = soup.find('section', attrs={'class':'onecol habonecol'}, string=re.compile(?)) I've been searching StackOverflow a long time now and while there are similar questions and answers, none of the proposed solutions have worked for me so far (re.compile, lambdas, etc.). I am able to pull up a section if I remove the string argument, but when I try to include a partial matching string I get None for my result. I'm unsure what to put for the string argument (? above) to find a match based on partial text, say if I wanted to find the filename that has "CIcyano" somewhere in it (see second to last line of html example at top). I've tried multiple methods using re.compile and lambdas, but I don't quite understand how either of those functions really work. I was able to pull up other sections from the html using these solutions, but something about this filename string with all the periods seems to be preventing it from working. Or maybe it's the way it is positioned within the section? Perhaps I'm going about this the wrong way entirely. Is this perhaps considered part of the section id, and so the string argument can't find it?? An example of a section on the page that I AM able to find has html like the one below, and I'm easily able to find it using the string argument and re.compile using "Name", "^N", etc. <section class="onecol habonecol"> <h3> Name </h3> </section> Appreciate any advice on how to go about this! Once I get the correct section, I know how to pull out the URL via the a tag. Here is the full html of the page I'm scraping, if that helps clarify the structure I'm working against. A: I believe you are overthinking. Just remove the regular expression part, take the text and you will be fine. import requests from bs4 import BeautifulSoup reqs = requests.get(url) soup = BeautifulSoup(reqs.text, 'html.parser') result = soup.find('section', attrs={'class':'onecol habonecol'}).text print(result) A: You can query inside every section for the string you want. Like so: s.find('section', attrs={'class':'onecol habonecol'}).find(string=re.compile(r'.sentinel.*')) Using this regular expression you will match any text that has sentinel in it, be careful that you will have to match some characters like spaces, that's why there is a . at beginning of the regex, you might want a more robust regex which you can test here: https://regex101.com/ A: I ended up finding another method not using the string argument in find(), instead using something like the code below, which pulls the first instance of a section that contains a partial text match. 
sections = soup.find_all('section', attrs={'class':'onecol habonecol'}) for s in sections: text = s.text if 'CIcyano' in text: print(s) break links = s.find('a') dwn_url = links.get('href') This works for my purposes and fetches the first instance of the matching filename, and grabs the URL.
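An equivalent route — a sketch, assuming reqs holds the page response as in the question — is to match the text node directly and climb back to its enclosing section with find_parent():

import re
from bs4 import BeautifulSoup

soup = BeautifulSoup(reqs.text, "html.parser")

# find(string=...) returns the matching text node itself;
# find_parent() then walks up to the surrounding <section>
hit = soup.find(string=re.compile(r"CIcyano"))
if hit is not None:
    section = hit.find_parent("section", attrs={"class": "onecol habonecol"})
    dwn_url = section.find("a").get("href")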
BeautifulSoup find partial string in section
I am trying to use BeautifulSoup to scrape a particular download URL from a web page, based on a partial text match. There are many links on the page, and it changes frequently. The html I'm scraping is full of sections that look something like this: <section class="onecol habonecol"> <a href="https://longGibberishDownloadURL" title="Download"> <img src="\azure_storage_blob\includes\download_for_windows.png"/> </a> sentinel-3.2022335.1201.1507_1608C.ab.L3.FL3.v951T202211_1_3.CIcyano.LakeOkee.tif </section> The second to last line (sentinel-3.2022335...LakeOkee.tif) is the part I need to search using a partial string to pull out the correct download url. The code I have attempted so far looks something like this: import requests, re from bs4 import BeautifulSoup reqs = requests.get(url) soup = BeautifulSoup(reqs.text, 'html.parser') result = soup.find('section', attrs={'class':'onecol habonecol'}, string=re.compile(?)) I've been searching StackOverflow a long time now and while there are similar questions and answers, none of the proposed solutions have worked for me so far (re.compile, lambdas, etc.). I am able to pull up a section if I remove the string argument, but when I try to include a partial matching string I get None for my result. I'm unsure what to put for the string argument (? above) to find a match based on partial text, say if I wanted to find the filename that has "CIcyano" somewhere in it (see second to last line of html example at top). I've tried multiple methods using re.compile and lambdas, but I don't quite understand how either of those functions really work. I was able to pull up other sections from the html using these solutions, but something about this filename string with all the periods seems to be preventing it from working. Or maybe it's the way it is positioned within the section? Perhaps I'm going about this the wrong way entirely. Is this perhaps considered part of the section id, and so the string argument can't find it?? An example of a section on the page that I AM able to find has html like the one below, and I'm easily able to find it using the string argument and re.compile using "Name", "^N", etc. <section class="onecol habonecol"> <h3> Name </h3> </section> Appreciate any advice on how to go about this! Once I get the correct section, I know how to pull out the URL via the a tag. Here is the full html of the page I'm scraping, if that helps clarify the structure I'm working against.
[ "I believe you are overthinking. Just remove the regular expression part, take the text and you will be fine.\nimport requests\nfrom bs4 import BeautifulSoup\n\nreqs = requests.get(url)\nsoup = BeautifulSoup(reqs.text, 'html.parser')\nresult = soup.find('section', attrs={'class':'onecol habonecol'}).text\nprint(result)\n\n", "You can query inside every section for the string you want. Like so:\ns.find('section', attrs={'class':'onecol habonecol'}).find(string=re.compile(r'.sentinel.*'))\n\nUsing this regular expression you will match any text that has sentinel in it, be careful that you will have to match some characters like spaces, that's why there is a . at beginning of the regex, you might want a more robust regex which you can test here:\nhttps://regex101.com/\n", "I ended up finding another method not using the string argument in find(), instead using something like the code below, which pulls the first instance of a section that contains a partial text match.\nsections = soup.find_all('section', attrs={'class':'onecol habonecol'})\n\n\nfor s in sections:\n text = s.text\n if 'CIcyano' in text:\n print(s)\n break\n\nlinks = s.find('a')\ndwn_url = links.get('href')\n\nThis works for my purposes and fetches the first instance of the matching filename, and grabs the URL.\n" ]
[ 0, 0, 0 ]
[]
[]
[ "beautifulsoup", "html", "partial", "python" ]
stackoverflow_0074648666_beautifulsoup_html_partial_python.txt
Q: My code is not doing what I want it to do and I can't get it out of the while loop. Please explain why it's like that val = [*range(1,51)]

print("Now, I need aaato know how many state Capitals you would like to practice")
user = input("chose a number from 1 to 50")

while user not in val:
    print("There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type \"EXIT\"")
    user = input("I needbbb to know how many state Capitals you would like to practice")
    if user.capitalize() == "EXIT":
        break
    if user == 0:
        print("There are more than zero States in the United States. That means that you do not want to play today")
        user = input("I needccc to know how many state Capitals you would like to practice. If you want to exit the game, type \"EXIT\"")

print("Hello")

output:
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice0
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice5
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice123
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice5
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice0
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practiceexit
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice

I created a list with ints between 1 and 50. I want the user to pick a number from the list (val). If it's not there, I want the user to keep trying, unless the user wants to quit with "EXIT". It just keeps getting stuck in my user input print statement and I don't understand why.
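Beyond the capitalize/upper confusion addressed in the answer below, note that input() always returns a str, so "user not in val" compares a string against a list of ints and can never be satisfied — a minimal corrected sketch of the loop, not the asker's exact program:

val = list(range(1, 51))

user = input("Choose a number from 1 to 50, or type EXIT to quit: ")
while True:
    if user.upper() == "EXIT":
        break
    # convert the string to int before testing membership in a list of ints
    if user.isdigit() and int(user) in val:
        break
    user = input("Pick a number between 1 and 50, or type EXIT: ")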
My code is not doing what I want it to do and I can't get it out of the while loop. Please explain why it's like that
val = [*range(1,51)]

print("Now, I need aaato know how many state Capitals you would like to practice")
user = input("chose a number from 1 to 50")

while user not in val:
    print("There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type \"EXIT\"")
    user = input("I needbbb to know how many state Capitals you would like to practice")
    if user.capitalize() == "EXIT":
        break
    if user == 0:
        print("There are more than zero States in the United States. That means that you do not want to play today")
        user = input("I needccc to know how many state Capitals you would like to practice. If you want to exit the game, type \"EXIT\"")

print("Hello")

output:
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice0
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice5
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice123
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice5
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice0
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practiceexit
There are 50 States in the United States. You need to pick a number between 1-50. If you want to exit the game, type "EXIT"
I needbbb to know how many state Capitals you would like to practice

I created a list with ints between 1 and 50. I want the user to pick a number from the list (val). If it's not there, I want the user to keep trying, unless the user wants to quit with "EXIT". It just keeps getting stuck in my user input print statement and I don't understand why.
[ "You want user.upper(), not user.capitalize().\nFrom the help-text:\n>>> help(str.capitalize)\nHelp on method_descriptor:\n\ncapitalize(self, /)\n Return a capitalized version of the string.\n\n More specifically, make the first character have upper case and the rest lower\n case.\n\n>>> help(str.upper)\nHelp on method_descriptor:\n\nupper(self, /)\n Return a copy of the string converted to uppercase.\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074661763_python.txt
Q: Pass whole row to DB function as an argument SQLAlchemy I need to implement the following SQL expression using SQLAlchemy 1.4.41, Postgres 13.6
SELECT book.name, my_func(book) AS func_result 
FROM book 
WHERE book.name = 'The Adventures of Tom Sawyer';

Is there a way to implement such a SQL expression?
Function is the following and I'm not supposed to change it:
create function my_func(table_row anyelement) returns json

I assume that passing Book to func.my_func is not correct as SQLAlchemy unpacks it to a list of Book attributes (ex. book.id, book.name, book.total_pages)
from db.models import Book
from sqlalchemy import func, select


function = func.my_func(Book)
query = select(Book.name, function).where(Book.name == 'The Adventures of Tom Sawyer')

A: In PostgreSQL, you would do this by passing a row object to the function. For example, row_to_json is a function that accepts a row and returns JSON, so given this table
 Table "public.users"
 Column │ Type │ Collation │ Nullable │ Default 
═══════════════════╪═══════════════════╪═══════════╪══════════╪══════════════════════════════
 id │ integer │ │ not null │ generated always as identity
 name │ character varying │ │ │ 
 registration_date │ date │ │ │ 

this query
select name, row_to_json(users) from users;

could return
 name │ row_to_json 
═══════╪══════════════════════════════════════════════════════════
 Alice │ {"id":1,"name":"Alice","registration_date":"2022-10-27"}
 Bob │ {"id":2,"name":"Bob","registration_date":"2022-10-27"}

Translating this to SQLAlchemy, if you are only passing the columns from the table underlying the model (so no relationships), you can use FromClause.table_valued as a shortcut.
q = sa.select(User.name, sa.func.row_to_json(User.__table__.table_valued()))

If you do require values from relationships, or only a subset of the table's fields, you need to use a subquery to specify them*:
subq = sa.select(User.id, User.name).subquery() 
q = sa.select(sa.func.row_to_json(subq.table_valued()))

* This part of the answer is inspired by this answer by Anatoly Ressin.
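Tying the accepted approach back to the Book model from the question — a sketch, assuming Book maps to the book table exactly as shown — the whole-row argument is produced with table_valued() and passed straight to the user-defined function:

import sqlalchemy as sa
from db.models import Book  # the model from the question

# table_valued() yields a column expression that renders as the table
# name itself, so the generated SQL comes out as my_func(book)
book_row = Book.__table__.table_valued()
query = sa.select(Book.name, sa.func.my_func(book_row).label("func_result")).where(
    Book.name == 'The Adventures of Tom Sawyer'
)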
Pass whole row to DB function as an argument SQLAlchemy
I need to implement the following SQL expression using SQLAlchemy 1.4.41, Postgres 13.6
SELECT book.name, my_func(book) AS func_result 
FROM book 
WHERE book.name = 'The Adventures of Tom Sawyer';

Is there a way to implement such a SQL expression?
Function is the following and I'm not supposed to change it:
create function my_func(table_row anyelement) returns json

I assume that passing Book to func.my_func is not correct as SQLAlchemy unpacks it to a list of Book attributes (ex. book.id, book.name, book.total_pages)
from db.models import Book
from sqlalchemy import func, select


function = func.my_func(Book)
query = select(Book.name, function).where(Book.name == 'The Adventures of Tom Sawyer')
[ "In PostgreSQL, you would do this by passing a row object to the function. For example, row_to_json is a function that accepts a row and returns JSON, so given this table\n Table \"public.users\"\n Column │ Type │ Collation │ Nullable │ Default \n═══════════════════╪═══════════════════╪═══════════╪══════════╪══════════════════════════════\n id │ integer │ │ not null │ generated always as identity\n name │ character varying │ │ │ \n registration_date │ date │ │ │ \n\nthis query\nselect name, row_to_json(users) from users;\n\ncould return\n name │ row_to_json \n═══════╪══════════════════════════════════════════════════════════\n Alice │ {\"id\":1,\"name\":\"Alice\",\"registration_date\":\"2022-10-27\"}\n Bob │ {\"id\":2,\"name\":\"Bob\",\"registration_date\":\"2022-10-27\"}\n\nTranslating this to SQLAlchemy, if you are only passing the columns from the table underlying the model (so no relationships), you can use FromClause.table_valued as a shortcut.\nq = sa.select(User.name, sa.func.row_to_json(User.__table__.table_valued()))\n\nIf you do require values from relationships, or only a subset of the table's fields, you need to use a subquery to specify the them*:\nsubq = sa.select(User.id, User.name).subquery() \nq = sa.select(sa.func.row_to_json(subq.table_valued()))\n\n\n* This part of the answer is inspired by this answer by Anatoly Ressin.\n" ]
[ 1 ]
[]
[]
[ "python", "sql", "sqlalchemy" ]
stackoverflow_0074654755_python_sql_sqlalchemy.txt
Q: How to create a column that records how many times a person_id occurs? I want to create a column called visit_occurrance that counts the number of times each person_id reappears in the dataset. For example,
> dput(df)
structure(list(Person_ID = c(123L, 123L, 110L, 145L, 345L, 345L, 
345L, 345L, 300L, 234L, 234L, 111L, 110L)), class = "data.frame", row.names = c(NA, 
-13L))

Desired output:
> dput(df)
structure(list(Person_ID = c(123L, 123L, 110L, 145L, 345L, 345L, 
345L, 345L, 300L, 234L, 234L, 111L, 110L), Visit_occurrance = c(1L, 
2L, 1L, 1L, 1L, 2L, 3L, 4L, 1L, 1L, 2L, 1L, 2L)), class = "data.frame", row.names = c(NA, 
-13L))

A: library(dplyr)
df %>% 
  group_by(Person_ID) %>% 
  mutate(Visit_occurrance = row_number())

   Person_ID Visit_occurrance
       <int>            <int>
 1       123                1
 2       123                2
 3       110                1
 4       145                1
 5       345                1
 6       345                2
 7       345                3
 8       345                4
 9       300                1
10       234                1
11       234                2
12       111                1
13       110                2
How to create a column that records how many times a person_id occurs?
I want to create a column called visit_occurrance that counts the number of times each person_id reappears in the dataset. For example,
> dput(df)
structure(list(Person_ID = c(123L, 123L, 110L, 145L, 345L, 345L, 
345L, 345L, 300L, 234L, 234L, 111L, 110L)), class = "data.frame", row.names = c(NA, 
-13L))

Desired output:
> dput(df)
structure(list(Person_ID = c(123L, 123L, 110L, 145L, 345L, 345L, 
345L, 345L, 300L, 234L, 234L, 111L, 110L), Visit_occurrance = c(1L, 
2L, 1L, 1L, 1L, 2L, 3L, 4L, 1L, 1L, 2L, 1L, 2L)), class = "data.frame", row.names = c(NA, 
-13L))
[ "library(dplyr)\n -13L))\ndf %>% \n group_by(Person_ID) %>% \n mutate(Visit_occurrance = row_number())\n\n Person_ID Visit_occurrance\n <int> <int>\n 1 123 1\n 2 123 2\n 3 110 1\n 4 145 1\n 5 345 1\n 6 345 2\n 7 345 3\n 8 345 4\n 9 300 1\n10 234 1\n11 234 2\n12 111 1\n13 110 2\n\n" ]
[ 1 ]
[]
[]
[ "r" ]
stackoverflow_0074661769_r.txt
Q: Efficiently convert rows to columns in sql server I'm looking for an efficient way to convert rows to columns in SQL Server. I heard that PIVOT is not very fast, and I need to deal with a lot of records.
This is my example:
-------------------------------
| Id | Value  | ColumnName    |
-------------------------------
| 1  | John   | FirstName     |
| 2  | 2.4    | Amount        |
| 3  | ZH1E4A | PostalCode    |
| 4  | Fork   | LastName      |
| 5  | 857685 | AccountNumber |
-------------------------------

This is my result:
---------------------------------------------------------------------
| FirstName |Amount| PostalCode | LastName | AccountNumber           |
---------------------------------------------------------------------
| John      | 2.4  | ZH1E4A     | Fork     | 857685                  |
---------------------------------------------------------------------

How can I build the result?
A: There are several ways that you can transform data from multiple rows into columns.
Using PIVOT
In SQL Server you can use the PIVOT function to transform the data from rows to columns:
select Firstname, Amount, PostalCode, LastName, AccountNumber
from
(
  select value, columnname
  from yourtable
) d
pivot
(
  max(value)
  for columnname in (Firstname, Amount, PostalCode, LastName, AccountNumber)
) piv;

See Demo.
Pivot with unknown number of columnnames
If you have an unknown number of columnnames that you want to transpose, then you can use dynamic SQL:
DECLARE @cols AS NVARCHAR(MAX),
    @query  AS NVARCHAR(MAX)

select @cols = STUFF((SELECT ',' + QUOTENAME(ColumnName) 
                    from yourtable
                    group by ColumnName, id
                    order by id
            FOR XML PATH(''), TYPE
            ).value('.', 'NVARCHAR(MAX)') 
        ,1,1,'')

set @query = N'SELECT ' + @cols + N' from 
             (
                select value, ColumnName
                from yourtable
            ) x
            pivot 
            (
                max(value)
                for ColumnName in (' + @cols + N')
            ) p '

exec sp_executesql @query;

See Demo.
Using an aggregate function
If you do not want to use the PIVOT function, then you can use an aggregate function with a CASE expression:
select
  max(case when columnname = 'FirstName' then value end) Firstname,
  max(case when columnname = 'Amount' then value end) Amount,
  max(case when columnname = 'PostalCode' then value end) PostalCode,
  max(case when columnname = 'LastName' then value end) LastName,
  max(case when columnname = 'AccountNumber' then value end) AccountNumber
from yourtable

See Demo.
Using multiple joins
This could also be completed using multiple joins, but you will need some column to associate each of the rows which you do not have in your sample data. But the basic syntax would be:
select fn.value as FirstName,
  a.value as Amount,
  pc.value as PostalCode,
  ln.value as LastName,
  an.value as AccountNumber
from yourtable fn
left join yourtable a
  on fn.somecol = a.somecol
  and a.columnname = 'Amount'
left join yourtable pc
  on fn.somecol = pc.somecol
  and pc.columnname = 'PostalCode'
left join yourtable ln
  on fn.somecol = ln.somecol
  and ln.columnname = 'LastName'
left join yourtable an
  on fn.somecol = an.somecol
  and an.columnname = 'AccountNumber'
where fn.columnname = 'Firstname'

A: This is rather a method than just a single script but gives you much more flexibility.
First of all, there are 3 objects:
User defined TABLE type [ColumnActionList] -> holds data as a parameter
SP [proc_PivotPrepare] -> prepares our data
SP [sp_PivotExecute] -> executes the script
CREATE TYPE [dbo].[ColumnActionList] AS TABLE
(
    [ID] [smallint] NOT NULL,
    [ColumnName] nvarchar(128) NOT NULL,
    [Action] nchar(1) NOT NULL
);
GO

CREATE PROCEDURE [dbo].[proc_PivotPrepare] 
(
    @DB_Name        nvarchar(128),
    @TableName      nvarchar(128)
)
AS
SELECT @DB_Name = ISNULL(@DB_Name,db_name())
DECLARE @SQL_Code nvarchar(max)

DECLARE @MyTab TABLE (ID smallint identity(1,1), [Column_Name] nvarchar(128), [Type] nchar(1), [Set Action SQL] nvarchar(max));

SELECT @SQL_Code        =   'SELECT [<| SQL_Code |>] = '' '' '
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
                        +   'UNION ALL '
                        +   'SELECT ''-----| Declare user defined type [ID] / [ColumnName] / [PivotAction] '' '
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
                        +   'UNION ALL '
                        +   'SELECT ''DECLARE @ColumnListWithActions ColumnActionList;'''
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
                        +   'UNION ALL '
                        +   'SELECT ''-----| Set [PivotAction] (''''S'''' as default) to select dimentions and values '' '
                        +   'UNION ALL '
                        +   'SELECT ''-----|'''
                        +   'UNION ALL '
                        +   'SELECT ''-----| ''''S'''' = Stable column || ''''D'''' = Dimention column || ''''V'''' = Value column '' '
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
                        +   'UNION ALL '
                        +   'SELECT ''INSERT INTO @ColumnListWithActions VALUES ('' + CAST( ROW_NUMBER() OVER (ORDER BY [NAME]) as nvarchar(10)) + '', '' + '''''''' + [NAME] + ''''''''+ '', ''''S'''');'''
                        +   'FROM [' + @DB_Name + '].sys.columns '
                        +   'WHERE object_id = object_id(''[' + @DB_Name + ']..[' + @TableName + ']'') '
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
                        +   'UNION ALL '
                        +   'SELECT ''-----| Execute sp_PivotExecute with parameters: columns and dimentions and main table name'' '
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
                        +   'UNION ALL '
                        +   'SELECT ''EXEC [dbo].[sp_PivotExecute] @ColumnListWithActions, ' + '''''' + @TableName + '''''' + ';'''
                        +   'UNION ALL '
                        +   'SELECT ''----------------------------------------------------------------------------------------------------'' '
EXECUTE SP_EXECUTESQL @SQL_Code;
GO
CREATE PROCEDURE [dbo].[sp_PivotExecute]
(
@ColumnListWithActions  ColumnActionList ReadOnly
,@TableName nvarchar(128)
)
AS
--#######################################################################################################################
--###| Step 1 - Select our user-defined-table-variable into temp table
--#######################################################################################################################
IF OBJECT_ID('tempdb.dbo.#ColumnListWithActions', 'U') IS NOT NULL DROP TABLE #ColumnListWithActions;
SELECT * INTO #ColumnListWithActions FROM @ColumnListWithActions;
--#######################################################################################################################
--###| Step 2 - Preparing lists of column groups as strings:
--####################################################################################################################### DECLARE @ColumnName nvarchar(128) DECLARE @Destiny nchar(1) DECLARE @ListOfColumns_Stable nvarchar(max) DECLARE @ListOfColumns_Dimension nvarchar(max) DECLARE @ListOfColumns_Variable nvarchar(max) --############################ --###| Cursor for List of Stable Columns --############################ DECLARE ColumnListStringCreator_S CURSOR FOR SELECT [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'S' OPEN ColumnListStringCreator_S; FETCH NEXT FROM ColumnListStringCreator_S INTO @ColumnName WHILE @@FETCH_STATUS = 0 BEGIN SELECT @ListOfColumns_Stable = ISNULL(@ListOfColumns_Stable, '') + ' [' + @ColumnName + '] ,'; FETCH NEXT FROM ColumnListStringCreator_S INTO @ColumnName END CLOSE ColumnListStringCreator_S; DEALLOCATE ColumnListStringCreator_S; --############################ --###| Cursor for List of Dimension Columns --############################ DECLARE ColumnListStringCreator_D CURSOR FOR SELECT [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'D' OPEN ColumnListStringCreator_D; FETCH NEXT FROM ColumnListStringCreator_D INTO @ColumnName WHILE @@FETCH_STATUS = 0 BEGIN SELECT @ListOfColumns_Dimension = ISNULL(@ListOfColumns_Dimension, '') + ' [' + @ColumnName + '] ,'; FETCH NEXT FROM ColumnListStringCreator_D INTO @ColumnName END CLOSE ColumnListStringCreator_D; DEALLOCATE ColumnListStringCreator_D; --############################ --###| Cursor for List of Variable Columns --############################ DECLARE ColumnListStringCreator_V CURSOR FOR SELECT [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'V' OPEN ColumnListStringCreator_V; FETCH NEXT FROM ColumnListStringCreator_V INTO @ColumnName WHILE @@FETCH_STATUS = 0 BEGIN SELECT @ListOfColumns_Variable = ISNULL(@ListOfColumns_Variable, '') + ' [' + @ColumnName + '] ,'; FETCH NEXT FROM ColumnListStringCreator_V INTO @ColumnName END CLOSE ColumnListStringCreator_V; DEALLOCATE ColumnListStringCreator_V; SELECT @ListOfColumns_Variable = LEFT(@ListOfColumns_Variable, LEN(@ListOfColumns_Variable) - 1); SELECT @ListOfColumns_Dimension = LEFT(@ListOfColumns_Dimension, LEN(@ListOfColumns_Dimension) - 1); SELECT @ListOfColumns_Stable = LEFT(@ListOfColumns_Stable, LEN(@ListOfColumns_Stable) - 1); --####################################################################################################################### --###| Step 3 - Preparing table with all possible connections between Dimension columns excluding NULLs --####################################################################################################################### DECLARE @DIM_TAB TABLE ([DIM_ID] smallint, [ColumnName] nvarchar(128)) INSERT INTO @DIM_TAB SELECT [DIM_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'D'; DECLARE @DIM_ID smallint; SELECT @DIM_ID = 1; DECLARE @SQL_Dimentions nvarchar(max); IF OBJECT_ID('tempdb.dbo.##ALL_Dimentions', 'U') IS NOT NULL DROP TABLE ##ALL_Dimentions; SELECT @SQL_Dimentions = 'SELECT [xxx_ID_xxx] = ROW_NUMBER() OVER (ORDER BY ' + @ListOfColumns_Dimension + '), ' + @ListOfColumns_Dimension + ' INTO ##ALL_Dimentions ' + ' FROM (SELECT DISTINCT' + @ListOfColumns_Dimension + ' FROM ' + @TableName + ' WHERE ' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @DIM_ID) + ' IS NOT NULL '; SELECT @DIM_ID = @DIM_ID + 1; WHILE @DIM_ID <= (SELECT MAX([DIM_ID]) FROM @DIM_TAB) BEGIN SELECT @SQL_Dimentions = @SQL_Dimentions + 'AND ' + 
(SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @DIM_ID) + ' IS NOT NULL '; SELECT @DIM_ID = @DIM_ID + 1; END SELECT @SQL_Dimentions = @SQL_Dimentions + ' )x'; EXECUTE SP_EXECUTESQL @SQL_Dimentions; --####################################################################################################################### --###| Step 4 - Preparing table with all possible connections between Stable columns excluding NULLs --####################################################################################################################### DECLARE @StabPos_TAB TABLE ([StabPos_ID] smallint, [ColumnName] nvarchar(128)) INSERT INTO @StabPos_TAB SELECT [StabPos_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'S'; DECLARE @StabPos_ID smallint; SELECT @StabPos_ID = 1; DECLARE @SQL_MainStableColumnTable nvarchar(max); IF OBJECT_ID('tempdb.dbo.##ALL_StableColumns', 'U') IS NOT NULL DROP TABLE ##ALL_StableColumns; SELECT @SQL_MainStableColumnTable = 'SELECT xxx_ID_xxx = ROW_NUMBER() OVER (ORDER BY ' + @ListOfColumns_Stable + '), ' + @ListOfColumns_Stable + ' INTO ##ALL_StableColumns ' + ' FROM (SELECT DISTINCT' + @ListOfColumns_Stable + ' FROM ' + @TableName + ' WHERE ' + (SELECT [ColumnName] FROM @StabPos_TAB WHERE [StabPos_ID] = @StabPos_ID) + ' IS NOT NULL '; SELECT @StabPos_ID = @StabPos_ID + 1; WHILE @StabPos_ID <= (SELECT MAX([StabPos_ID]) FROM @StabPos_TAB) BEGIN SELECT @SQL_MainStableColumnTable = @SQL_MainStableColumnTable + 'AND ' + (SELECT [ColumnName] FROM @StabPos_TAB WHERE [StabPos_ID] = @StabPos_ID) + ' IS NOT NULL '; SELECT @StabPos_ID = @StabPos_ID + 1; END SELECT @SQL_MainStableColumnTable = @SQL_MainStableColumnTable + ' )x'; EXECUTE SP_EXECUTESQL @SQL_MainStableColumnTable; --####################################################################################################################### --###| Step 5 - Preparing table with all options ID --####################################################################################################################### DECLARE @FULL_SQL_1 NVARCHAR(MAX) SELECT @FULL_SQL_1 = '' DECLARE @i smallint IF OBJECT_ID('tempdb.dbo.##FinalTab', 'U') IS NOT NULL DROP TABLE ##FinalTab; SELECT @FULL_SQL_1 = 'SELECT t.*, dim.[xxx_ID_xxx] ' + ' INTO ##FinalTab ' + 'FROM ' + @TableName + ' t ' + 'JOIN ##ALL_Dimentions dim ' + 'ON t.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = 1) + ' = dim.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = 1); SELECT @i = 2 WHILE @i <= (SELECT MAX([DIM_ID]) FROM @DIM_TAB) BEGIN SELECT @FULL_SQL_1 = @FULL_SQL_1 + ' AND t.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @i) + ' = dim.' 
+ (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @i) SELECT @i = @i +1 END EXECUTE SP_EXECUTESQL @FULL_SQL_1 --####################################################################################################################### --###| Step 6 - Selecting final data --####################################################################################################################### DECLARE @STAB_TAB TABLE ([STAB_ID] smallint, [ColumnName] nvarchar(128)) INSERT INTO @STAB_TAB SELECT [STAB_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'S'; DECLARE @VAR_TAB TABLE ([VAR_ID] smallint, [ColumnName] nvarchar(128)) INSERT INTO @VAR_TAB SELECT [VAR_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'V'; DECLARE @y smallint; DECLARE @x smallint; DECLARE @z smallint; DECLARE @FinalCode nvarchar(max) SELECT @FinalCode = ' SELECT ID1.*' SELECT @y = 1 WHILE @y <= (SELECT MAX([xxx_ID_xxx]) FROM ##FinalTab) BEGIN SELECT @z = 1 WHILE @z <= (SELECT MAX([VAR_ID]) FROM @VAR_TAB) BEGIN SELECT @FinalCode = @FinalCode + ', [ID' + CAST((@y) as varchar(10)) + '.' + (SELECT [ColumnName] FROM @VAR_TAB WHERE [VAR_ID] = @z) + '] = ID' + CAST((@y + 1) as varchar(10)) + '.' + (SELECT [ColumnName] FROM @VAR_TAB WHERE [VAR_ID] = @z) SELECT @z = @z + 1 END SELECT @y = @y + 1 END SELECT @FinalCode = @FinalCode + ' FROM ( SELECT * FROM ##ALL_StableColumns)ID1'; SELECT @y = 1 WHILE @y <= (SELECT MAX([xxx_ID_xxx]) FROM ##FinalTab) BEGIN SELECT @x = 1 SELECT @FinalCode = @FinalCode + ' LEFT JOIN (SELECT ' + @ListOfColumns_Stable + ' , ' + @ListOfColumns_Variable + ' FROM ##FinalTab WHERE [xxx_ID_xxx] = ' + CAST(@y as varchar(10)) + ' )ID' + CAST((@y + 1) as varchar(10)) + ' ON 1 = 1' WHILE @x <= (SELECT MAX([STAB_ID]) FROM @STAB_TAB) BEGIN SELECT @FinalCode = @FinalCode + ' AND ID1.' + (SELECT [ColumnName] FROM @STAB_TAB WHERE [STAB_ID] = @x) + ' = ID' + CAST((@y+1) as varchar(10)) + '.' + (SELECT [ColumnName] FROM @STAB_TAB WHERE [STAB_ID] = @x) SELECT @x = @x +1 END SELECT @y = @y + 1 END SELECT * FROM ##ALL_Dimentions; EXECUTE SP_EXECUTESQL @FinalCode; From executing the first query (by passing source DB and table name) you will get a pre-created execution query for the second SP, all you have to do is define is the column from your source: + Stable + Value (will be used to concentrate values based on that) + Dim (column you want to use to pivot by) Names and datatypes will be defined automatically! I cant recommend it for any production environments but does the job for adhoc BI requests. A: Please try CREATE TABLE pvt (Present int, [Absent] int); GO INSERT INTO pvt VALUES (10,40); GO --Unpivot the table. SELECT Code, Value FROM (SELECT Present, Absent FROM pvt) p UNPIVOT (Value FOR Code IN (Present, [Absent]) )AS unpvt; GO DROP TABLE pvt A: One more option which could be very useful is using CROSS APPLY -- Original data SELECT * FROM (VALUES ('1', 1, 2, 3),('2', 11, 22, 33)) AS Stage(id,col1,col2,col3) -- row to columns using CROSS APPLY SELECT Stage.id,v.idd, v.colc FROM (VALUES ('1', 1, 2, 3),('2', 11, 22, 33)) AS Stage(id,col1,col2,col3) CROSS APPLY (VALUES ('col1', col1),('col2', col2),('col3', col3)) AS v(idd,colc) GO A: I modified Taryn's answer ("Pivot with unknown number of columnnames" version) to show more than 1 row in the result. 
This requires to have an additional "Group" column DROP TABLE #yourtable CREATE table #yourtable ([Id] int,[Group] int, [Value] varchar(6), [ColumnName] varchar(13)) ; INSERT INTO #yourtable ([Id],[Group], [Value], [ColumnName]) VALUES (1,1, 'John', 'FirstName'), (2,1, '2.4', 'Amount'), (3,1, 'ZH1E4A', 'PostalCode'), (4,1, 'Fork', 'LastName'), (5,1, '857685', 'AccountNumber'), (6,2, 'Pedro', 'FirstName'), (7,2, '5.1', 'Amount'), (8,2, '123456', 'PostalCode'), (9,2, 'Torres', 'LastName'), (10,2, '857686', 'AccountNumber') ; ; DECLARE @cols AS NVARCHAR(MAX), @query AS NVARCHAR(MAX) select @cols = STUFF((SELECT ',' + QUOTENAME(ColumnName) from #yourtable group by [Group], ColumnName, id having [group] = (SELECT TOP 1 MIN([Group])FROM #yourtable) order by id FOR XML PATH(''), TYPE ).value('.', 'NVARCHAR(MAX)') ,1,1,'') set @query = N'SELECT ' + @cols + N' from ( select value, ColumnName,[Group] from #yourtable GROUP BY [Group],ColumnName,Value ) x pivot ( max(value) for ColumnName in (' + @cols + N') ) p ' exec sp_executesql @query;
Efficiently convert rows to columns in sql server
I'm looking for an efficient way to convert rows to columns in SQL server, I heard that PIVOT is not very fast, and I need to deal with lot of records. This is my example: ------------------------------- | Id | Value | ColumnName | ------------------------------- | 1 | John | FirstName | | 2 | 2.4 | Amount | | 3 | ZH1E4A | PostalCode | | 4 | Fork | LastName | | 5 | 857685 | AccountNumber | ------------------------------- This is my result: --------------------------------------------------------------------- | FirstName |Amount| PostalCode | LastName | AccountNumber | --------------------------------------------------------------------- | John | 2.4 | ZH1E4A | Fork | 857685 | --------------------------------------------------------------------- How can I build the result?
[ "There are several ways that you can transform data from multiple rows into columns.\nUsing PIVOT\nIn SQL Server you can use the PIVOT function to transform the data from rows to columns:\nselect Firstname, Amount, PostalCode, LastName, AccountNumber\nfrom\n(\n select value, columnname\n from yourtable\n) d\npivot\n(\n max(value)\n for columnname in (Firstname, Amount, PostalCode, LastName, AccountNumber)\n) piv;\n\nSee Demo.\nPivot with unknown number of columnnames\nIf you have an unknown number of columnnames that you want to transpose, then you can use dynamic SQL:\nDECLARE @cols AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT ',' + QUOTENAME(ColumnName) \n from yourtable\n group by ColumnName, id\n order by id\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\nset @query = N'SELECT ' + @cols + N' from \n (\n select value, ColumnName\n from yourtable\n ) x\n pivot \n (\n max(value)\n for ColumnName in (' + @cols + N')\n ) p '\n\nexec sp_executesql @query;\n\nSee Demo.\nUsing an aggregate function\nIf you do not want to use the PIVOT function, then you can use an aggregate function with a CASE expression:\nselect\n max(case when columnname = 'FirstName' then value end) Firstname,\n max(case when columnname = 'Amount' then value end) Amount,\n max(case when columnname = 'PostalCode' then value end) PostalCode,\n max(case when columnname = 'LastName' then value end) LastName,\n max(case when columnname = 'AccountNumber' then value end) AccountNumber\nfrom yourtable\n\nSee Demo.\nUsing multiple joins\nThis could also be completed using multiple joins, but you will need some column to associate each of the rows which you do not have in your sample data. But the basic syntax would be:\nselect fn.value as FirstName,\n a.value as Amount,\n pc.value as PostalCode,\n ln.value as LastName,\n an.value as AccountNumber\nfrom yourtable fn\nleft join yourtable a\n on fn.somecol = a.somecol\n and a.columnname = 'Amount'\nleft join yourtable pc\n on fn.somecol = pc.somecol\n and pc.columnname = 'PostalCode'\nleft join yourtable ln\n on fn.somecol = ln.somecol\n and ln.columnname = 'LastName'\nleft join yourtable an\n on fn.somecol = an.somecol\n and an.columnname = 'AccountNumber'\nwhere fn.columnname = 'Firstname'\n\n", "This is rather a method than just a single script but gives you much more flexibility.\nFirst of all There are 3 objects:\n\nUser defined TABLE type [ColumnActionList] -> holds data as\nparameter \nSP [proc_PivotPrepare] -> prepares our data\nSP [proc_PivotExecute] -> execute the script\n\nCREATE TYPE [dbo].[ColumnActionList] AS TABLE\n(\n[ID] [smallint] NOT NULL,\n[ColumnName] nvarchar NOT NULL,\n[Action] nchar NOT NULL\n);\nGO\n CREATE PROCEDURE [dbo].[proc_PivotPrepare] \n (\n @DB_Name nvarchar(128),\n @TableName nvarchar(128)\n )\n AS\n SELECT @DB_Name = ISNULL(@DB_Name,db_name())\n DECLARE @SQL_Code nvarchar(max)\n\n DECLARE @MyTab TABLE (ID smallint identity(1,1), [Column_Name] nvarchar(128), [Type] nchar(1), [Set Action SQL] nvarchar(max));\n\n SELECT @SQL_Code = 'SELECT [<| SQL_Code |>] = '' '' '\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' '\n + 'UNION ALL '\n + 'SELECT ''-----| Declare user defined type [ID] / [ColumnName] / [PivotAction] '' '\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' '\n + 'UNION ALL '\n + 'SELECT ''DECLARE @ColumnListWithActions 
ColumnActionList;'''\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' '\n + 'UNION ALL '\n + 'SELECT ''-----| Set [PivotAction] (''''S'''' as default) to select dimentions and values '' '\n + 'UNION ALL '\n + 'SELECT ''-----|'''\n + 'UNION ALL '\n + 'SELECT ''-----| ''''S'''' = Stable column || ''''D'''' = Dimention column || ''''V'''' = Value column '' '\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' '\n + 'UNION ALL '\n + 'SELECT ''INSERT INTO @ColumnListWithActions VALUES ('' + CAST( ROW_NUMBER() OVER (ORDER BY [NAME]) as nvarchar(10)) + '', '' + '''''''' + [NAME] + ''''''''+ '', ''''S'''');'''\n + 'FROM [' + @DB_Name + '].sys.columns '\n + 'WHERE object_id = object_id(''[' + @DB_Name + ']..[' + @TableName + ']'') '\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' '\n + 'UNION ALL '\n + 'SELECT ''-----| Execute sp_PivotExecute with parameters: columns and dimentions and main table name'' '\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' '\n + 'UNION ALL '\n + 'SELECT ''EXEC [dbo].[sp_PivotExecute] @ColumnListWithActions, ' + '''''' + @TableName + '''''' + ';'''\n + 'UNION ALL '\n + 'SELECT ''----------------------------------------------------------------------------------------------------'' ' \nEXECUTE SP_EXECUTESQL @SQL_Code;\n\nGO\n\nCREATE PROCEDURE [dbo].[sp_PivotExecute]\n(\n@ColumnListWithActions ColumnActionList ReadOnly\n,@TableName nvarchar(128)\n)\nAS\n\n\n--#######################################################################################################################\n--###| Step 1 - Select our user-defined-table-variable into temp table\n--#######################################################################################################################\n\nIF OBJECT_ID('tempdb.dbo.#ColumnListWithActions', 'U') IS NOT NULL DROP TABLE #ColumnListWithActions; \nSELECT * INTO #ColumnListWithActions FROM @ColumnListWithActions;\n\n--#######################################################################################################################\n--###| Step 2 - Preparing lists of column groups as strings:\n--#######################################################################################################################\n\nDECLARE @ColumnName nvarchar(128)\nDECLARE @Destiny nchar(1)\n\nDECLARE @ListOfColumns_Stable nvarchar(max)\nDECLARE @ListOfColumns_Dimension nvarchar(max)\nDECLARE @ListOfColumns_Variable nvarchar(max)\n--############################\n--###| Cursor for List of Stable Columns\n--############################\n\nDECLARE ColumnListStringCreator_S CURSOR FOR\nSELECT [ColumnName]\nFROM #ColumnListWithActions\nWHERE [Action] = 'S'\nOPEN ColumnListStringCreator_S;\nFETCH NEXT FROM ColumnListStringCreator_S\nINTO @ColumnName\n WHILE @@FETCH_STATUS = 0\n\n BEGIN\n SELECT @ListOfColumns_Stable = ISNULL(@ListOfColumns_Stable, '') + ' [' + @ColumnName + '] ,';\n FETCH NEXT FROM ColumnListStringCreator_S INTO @ColumnName\n END\n\nCLOSE ColumnListStringCreator_S;\nDEALLOCATE ColumnListStringCreator_S;\n\n--############################\n--###| Cursor for List of Dimension Columns\n--############################\n\nDECLARE ColumnListStringCreator_D CURSOR FOR\nSELECT [ColumnName]\nFROM #ColumnListWithActions\nWHERE 
[Action] = 'D'\nOPEN ColumnListStringCreator_D;\nFETCH NEXT FROM ColumnListStringCreator_D\nINTO @ColumnName\n WHILE @@FETCH_STATUS = 0\n\n BEGIN\n SELECT @ListOfColumns_Dimension = ISNULL(@ListOfColumns_Dimension, '') + ' [' + @ColumnName + '] ,';\n FETCH NEXT FROM ColumnListStringCreator_D INTO @ColumnName\n END\n\nCLOSE ColumnListStringCreator_D;\nDEALLOCATE ColumnListStringCreator_D;\n\n--############################\n--###| Cursor for List of Variable Columns\n--############################\n\nDECLARE ColumnListStringCreator_V CURSOR FOR\nSELECT [ColumnName]\nFROM #ColumnListWithActions\nWHERE [Action] = 'V'\nOPEN ColumnListStringCreator_V;\nFETCH NEXT FROM ColumnListStringCreator_V\nINTO @ColumnName\n WHILE @@FETCH_STATUS = 0\n\n BEGIN\n SELECT @ListOfColumns_Variable = ISNULL(@ListOfColumns_Variable, '') + ' [' + @ColumnName + '] ,';\n FETCH NEXT FROM ColumnListStringCreator_V INTO @ColumnName\n END\n\nCLOSE ColumnListStringCreator_V;\nDEALLOCATE ColumnListStringCreator_V;\n\nSELECT @ListOfColumns_Variable = LEFT(@ListOfColumns_Variable, LEN(@ListOfColumns_Variable) - 1);\nSELECT @ListOfColumns_Dimension = LEFT(@ListOfColumns_Dimension, LEN(@ListOfColumns_Dimension) - 1);\nSELECT @ListOfColumns_Stable = LEFT(@ListOfColumns_Stable, LEN(@ListOfColumns_Stable) - 1);\n\n--#######################################################################################################################\n--###| Step 3 - Preparing table with all possible connections between Dimension columns excluding NULLs\n--#######################################################################################################################\nDECLARE @DIM_TAB TABLE ([DIM_ID] smallint, [ColumnName] nvarchar(128))\nINSERT INTO @DIM_TAB \nSELECT [DIM_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'D';\n\nDECLARE @DIM_ID smallint;\nSELECT @DIM_ID = 1;\n\n\nDECLARE @SQL_Dimentions nvarchar(max);\n\nIF OBJECT_ID('tempdb.dbo.##ALL_Dimentions', 'U') IS NOT NULL DROP TABLE ##ALL_Dimentions; \n\nSELECT @SQL_Dimentions = 'SELECT [xxx_ID_xxx] = ROW_NUMBER() OVER (ORDER BY ' + @ListOfColumns_Dimension + '), ' + @ListOfColumns_Dimension\n + ' INTO ##ALL_Dimentions '\n + ' FROM (SELECT DISTINCT' + @ListOfColumns_Dimension + ' FROM ' + @TableName\n + ' WHERE ' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @DIM_ID) + ' IS NOT NULL ';\n SELECT @DIM_ID = @DIM_ID + 1;\n WHILE @DIM_ID <= (SELECT MAX([DIM_ID]) FROM @DIM_TAB)\n BEGIN\n SELECT @SQL_Dimentions = @SQL_Dimentions + 'AND ' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @DIM_ID) + ' IS NOT NULL ';\n SELECT @DIM_ID = @DIM_ID + 1;\n END\n\nSELECT @SQL_Dimentions = @SQL_Dimentions + ' )x';\n\nEXECUTE SP_EXECUTESQL @SQL_Dimentions;\n\n--#######################################################################################################################\n--###| Step 4 - Preparing table with all possible connections between Stable columns excluding NULLs\n--#######################################################################################################################\nDECLARE @StabPos_TAB TABLE ([StabPos_ID] smallint, [ColumnName] nvarchar(128))\nINSERT INTO @StabPos_TAB \nSELECT [StabPos_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName] FROM #ColumnListWithActions WHERE [Action] = 'S';\n\nDECLARE @StabPos_ID smallint;\nSELECT @StabPos_ID = 1;\n\n\nDECLARE @SQL_MainStableColumnTable nvarchar(max);\n\nIF OBJECT_ID('tempdb.dbo.##ALL_StableColumns', 'U') IS NOT NULL DROP TABLE 
##ALL_StableColumns; \n\nSELECT @SQL_MainStableColumnTable = 'SELECT xxx_ID_xxx = ROW_NUMBER() OVER (ORDER BY ' + @ListOfColumns_Stable + '), ' + @ListOfColumns_Stable\n + ' INTO ##ALL_StableColumns '\n + ' FROM (SELECT DISTINCT' + @ListOfColumns_Stable + ' FROM ' + @TableName\n + ' WHERE ' + (SELECT [ColumnName] FROM @StabPos_TAB WHERE [StabPos_ID] = @StabPos_ID) + ' IS NOT NULL ';\n SELECT @StabPos_ID = @StabPos_ID + 1;\n WHILE @StabPos_ID <= (SELECT MAX([StabPos_ID]) FROM @StabPos_TAB)\n BEGIN\n SELECT @SQL_MainStableColumnTable = @SQL_MainStableColumnTable + 'AND ' + (SELECT [ColumnName] FROM @StabPos_TAB WHERE [StabPos_ID] = @StabPos_ID) + ' IS NOT NULL ';\n SELECT @StabPos_ID = @StabPos_ID + 1;\n END\n\nSELECT @SQL_MainStableColumnTable = @SQL_MainStableColumnTable + ' )x';\n\nEXECUTE SP_EXECUTESQL @SQL_MainStableColumnTable;\n\n--#######################################################################################################################\n--###| Step 5 - Preparing table with all options ID\n--#######################################################################################################################\n\nDECLARE @FULL_SQL_1 NVARCHAR(MAX)\nSELECT @FULL_SQL_1 = ''\n\nDECLARE @i smallint\n\nIF OBJECT_ID('tempdb.dbo.##FinalTab', 'U') IS NOT NULL DROP TABLE ##FinalTab; \n\nSELECT @FULL_SQL_1 = 'SELECT t.*, dim.[xxx_ID_xxx] '\n + ' INTO ##FinalTab '\n + 'FROM ' + @TableName + ' t '\n + 'JOIN ##ALL_Dimentions dim '\n + 'ON t.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = 1) + ' = dim.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = 1);\n SELECT @i = 2 \n WHILE @i <= (SELECT MAX([DIM_ID]) FROM @DIM_TAB)\n BEGIN\n SELECT @FULL_SQL_1 = @FULL_SQL_1 + ' AND t.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @i) + ' = dim.' + (SELECT [ColumnName] FROM @DIM_TAB WHERE [DIM_ID] = @i)\n SELECT @i = @i +1\n END\nEXECUTE SP_EXECUTESQL @FULL_SQL_1\n\n--#######################################################################################################################\n--###| Step 6 - Selecting final data\n--#######################################################################################################################\nDECLARE @STAB_TAB TABLE ([STAB_ID] smallint, [ColumnName] nvarchar(128))\nINSERT INTO @STAB_TAB \nSELECT [STAB_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName]\nFROM #ColumnListWithActions WHERE [Action] = 'S';\n\nDECLARE @VAR_TAB TABLE ([VAR_ID] smallint, [ColumnName] nvarchar(128))\nINSERT INTO @VAR_TAB \nSELECT [VAR_ID] = ROW_NUMBER() OVER(ORDER BY [ColumnName]), [ColumnName]\nFROM #ColumnListWithActions WHERE [Action] = 'V';\n\nDECLARE @y smallint;\nDECLARE @x smallint;\nDECLARE @z smallint;\n\n\nDECLARE @FinalCode nvarchar(max)\n\nSELECT @FinalCode = ' SELECT ID1.*'\n SELECT @y = 1\n WHILE @y <= (SELECT MAX([xxx_ID_xxx]) FROM ##FinalTab)\n BEGIN\n SELECT @z = 1\n WHILE @z <= (SELECT MAX([VAR_ID]) FROM @VAR_TAB)\n BEGIN\n SELECT @FinalCode = @FinalCode + ', [ID' + CAST((@y) as varchar(10)) + '.' + (SELECT [ColumnName] FROM @VAR_TAB WHERE [VAR_ID] = @z) + '] = ID' + CAST((@y + 1) as varchar(10)) + '.' 
+ (SELECT [ColumnName] FROM @VAR_TAB WHERE [VAR_ID] = @z)\n SELECT @z = @z + 1\n END\n SELECT @y = @y + 1\n END\n SELECT @FinalCode = @FinalCode + \n ' FROM ( SELECT * FROM ##ALL_StableColumns)ID1';\n SELECT @y = 1\n WHILE @y <= (SELECT MAX([xxx_ID_xxx]) FROM ##FinalTab)\n BEGIN\n SELECT @x = 1\n SELECT @FinalCode = @FinalCode \n + ' LEFT JOIN (SELECT ' + @ListOfColumns_Stable + ' , ' + @ListOfColumns_Variable \n + ' FROM ##FinalTab WHERE [xxx_ID_xxx] = ' \n + CAST(@y as varchar(10)) + ' )ID' + CAST((@y + 1) as varchar(10)) \n + ' ON 1 = 1' \n WHILE @x <= (SELECT MAX([STAB_ID]) FROM @STAB_TAB)\n BEGIN\n SELECT @FinalCode = @FinalCode + ' AND ID1.' + (SELECT [ColumnName] FROM @STAB_TAB WHERE [STAB_ID] = @x) + ' = ID' + CAST((@y+1) as varchar(10)) + '.' + (SELECT [ColumnName] FROM @STAB_TAB WHERE [STAB_ID] = @x)\n SELECT @x = @x +1\n END\n SELECT @y = @y + 1\n END\n\nSELECT * FROM ##ALL_Dimentions;\nEXECUTE SP_EXECUTESQL @FinalCode;\n\nFrom executing the first query (by passing source DB and table name) you will get a pre-created execution query for the second SP, all you have to do is define is the column from your source:\n+ Stable\n+ Value (will be used to concentrate values based on that)\n+ Dim (column you want to use to pivot by)\nNames and datatypes will be defined automatically!\nI cant recommend it for any production environments but does the job for adhoc BI requests. \n", "Please try\nCREATE TABLE pvt (Present int, [Absent] int);\nGO\nINSERT INTO pvt VALUES (10,40);\nGO\n--Unpivot the table.\nSELECT Code, Value\nFROM \n (SELECT Present, Absent\n FROM pvt) p\nUNPIVOT\n (Value FOR Code IN \n (Present, [Absent])\n)AS unpvt;\nGO\n\nDROP TABLE pvt\n\n", "One more option which could be very useful is using CROSS APPLY\n-- Original data\nSELECT * FROM (VALUES ('1', 1, 2, 3),('2', 11, 22, 33)) AS Stage(id,col1,col2,col3)\n\n-- row to columns using CROSS APPLY\nSELECT Stage.id,v.idd, v.colc\nFROM (VALUES ('1', 1, 2, 3),('2', 11, 22, 33)) AS Stage(id,col1,col2,col3)\nCROSS APPLY (VALUES ('col1', col1),('col2', col2),('col3', col3)) AS v(idd,colc)\nGO\n\n", "I modified Taryn's answer (\"Pivot with unknown number of columnnames\" version) to show more than 1 row in the result. This requires to have an additional \"Group\" column\nDROP TABLE #yourtable\nCREATE table #yourtable\n ([Id] int,[Group] int, [Value] varchar(6), [ColumnName] varchar(13))\n;\n \nINSERT INTO #yourtable\n ([Id],[Group], [Value], [ColumnName])\nVALUES\n (1,1, 'John', 'FirstName'),\n (2,1, '2.4', 'Amount'),\n (3,1, 'ZH1E4A', 'PostalCode'),\n (4,1, 'Fork', 'LastName'),\n (5,1, '857685', 'AccountNumber'),\n (6,2, 'Pedro', 'FirstName'),\n (7,2, '5.1', 'Amount'),\n (8,2, '123456', 'PostalCode'),\n (9,2, 'Torres', 'LastName'),\n (10,2, '857686', 'AccountNumber')\n;\n;\n\n\n\nDECLARE @cols AS NVARCHAR(MAX),\n @query AS NVARCHAR(MAX)\n\nselect @cols = STUFF((SELECT ',' + QUOTENAME(ColumnName) \n from #yourtable\n group by [Group], ColumnName, id\n having [group] = (SELECT TOP 1 MIN([Group])FROM #yourtable)\n order by id\n FOR XML PATH(''), TYPE\n ).value('.', 'NVARCHAR(MAX)') \n ,1,1,'')\n\n\n \n\nset @query = N'SELECT ' + @cols + N' from \n (\n select value, ColumnName,[Group]\n from #yourtable\n GROUP BY [Group],ColumnName,Value\n ) x\n pivot \n (\n max(value)\n for ColumnName in (' + @cols + N')\n ) p '\n\n\n\nexec sp_executesql @query;\n\n\n\n" ]
[ 694, 13, 0, 0, 0 ]
[]
[]
[ "pivot", "sql", "sql_server", "sql_server_2008" ]
stackoverflow_0015745042_pivot_sql_sql_server_sql_server_2008.txt
Q: Can you work out page size in physical memory without knowing the number of pages? Suppose physical memory is split into physical pages (aka page frames), which have equal size. Suppose we know that on our system, the number of page frames in the physical memory equals the number of bytes in a page frame. We also know that physical addresses have 26 bits. How many bytes are in a page frame?
I'm really struggling with this question and I would be glad for any pointers on how to work this out. Thanks!
I have tried working out the size of the address space (2^26) but that didn't really work.

A: Would you be able to provide a more in-depth explanation of the formula for calculating this? I am struggling to understand why it's 2^(half of the bits) / 8. Thanks.
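For reference, here is a quick sketch of the arithmetic, assuming byte-addressable memory (the usual reading of a 26-bit physical address):

total physical bytes = 2^26
(number of frames) * (bytes per frame) = 2^26
number of frames = bytes per frame = S        (given in the question)
S * S = 2^26  =>  S = 2^13 = 8192 bytes per frame

So a physical address splits into 13 bits of frame number and 13 bits of byte offset, which is where "2^(half of the bits)" comes from; no division by 8 is needed under this reading, since the addresses already count bytes rather than bits.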
Can you work out page size in physical memory without knowing the number of pages?
Suppose physical memory is split into physical pages (aka page frames), which have equal size. Suppose we know that on our system, the number of page frames in the physical memory equals the number of bytes in a page frame. We also know that physical addresses have 26 bits. How many bytes are in a page frame? I'm really struggling with this question and I would be glad for any pointers on how to work this out. Thanks! I have tried working out the size of the address space (2^26) but that didn't really work.
[ "would you be able provide a more in depth explanation on the formula for calculating this as I am struggling to understand why its 2^half of bits / 8. Thanks.\n" ]
[ 0 ]
[]
[]
[ "cpu_architecture", "memory", "virtual_memory" ]
stackoverflow_0074548146_cpu_architecture_memory_virtual_memory.txt
Q: Terraform error: Provider produced inconsistent final plan when expanding the plan I got an error when creating an IAM policy for an EC2 role via a Bamboo pipeline.

Error: Provider produced inconsistent final plan when expanding the plan for aws_iam_policy.this[xx] to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an invalid new value for policy: was cty.StringVal(xx), but now cty.StringVal(xx). This is a bug in the provider, which should be reported in the provider's own issue tracker.

It worked when I ran Terraform from my local machine, but the error occurred when deploying via the Bamboo pipeline.
Versions on my local machine:

Terraform v1.2.5
AWS provider v4.29.0

I tried to pin the AWS provider to version = 4.29.0 but got another error: "Provider requirements cannot be satisfied by locked dependencies".

A: I believe this type of error should be reported to the AWS provider's GitHub issue tracker.
Also, I suggest double-checking your Terraform code and perhaps simplifying it a bit.
About:

Provider requirements cannot be satisfied by locked dependencies

It looks like you forgot to run terraform init after changing the provider version.
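A minimal sketch of the re-initialization step (run in the working directory of the configuration; the -upgrade flag re-resolves providers against the new version constraint):

# re-resolve providers and update .terraform.lock.hcl after changing the pinned version
terraform init -upgrade

If the Bamboo agent caches the workspace, make sure the updated .terraform.lock.hcl is committed (or regenerated in the pipeline) so that local runs and pipeline runs resolve the same provider build.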
Terraform error: Provider produced inconsistent final plan when expanding the plan
I got an error when create iam policy for ec2 role via bamboo pipeline. Error: Provider produced inconsistent final plan when expanding the plan for aws_iam_policy.this[xx] to include new values learned so far during apply, provider "registry.terraform.io/hashicorp/aws" produced an invalid new value for policy: was cty.StringVal(xx), but now cty.StringVal(xx). This is a bug in the provider, which should be reported in the provider's own issue tracker. It was good when ran terraform from local machine but the error occurred when deployed via bamboo pipeline. Versions on my local machine: Terraform v1.2.5 AWS v4.29.0 I tried to specify the aws provider version=4.29.0 but got another error: "Provider requirements cannot be satisfied by locked dependencies".
[ "I believe you need to write about this types of errors to the AWS provider github\nAlso, I suggest to double check your terraform code, maybe a little bit simplify it.\nAbout:\nProvider requirements cannot be satisfied by locked dependencies\n\nlooks like your forget to run terraform init after provider version changing\n" ]
[ 0 ]
[]
[]
[ "amazon_ec2", "amazon_iam", "amazon_web_services", "terraform", "terraform_provider_aws" ]
stackoverflow_0074649508_amazon_ec2_amazon_iam_amazon_web_services_terraform_terraform_provider_aws.txt
Q: Using Java Collectors and Stream to return a map with 1 key and multiple values I have an entity UserId that has both a name and an email. I need to return a map where the UserId can be mapped to either the name or the email. Currently this is how it looks, where we are able to obtain only the email.

Map<UserId, String> userIdToEmailMap = futureList.stream()
        .map(CompletableFuture::join)
        .filter(Optional::isPresent)
        .map(userProfile -> userProfile.get())
        .collect(Collectors.toMap(UserProfile::getUserId, UserProfile::getEmail));

And I need the name to be retrieved as well. Is there a way I can do that without having to create another separate map and having to return a list of Maps? This is wrong, but something like this -

Map<UserId, String> userIdToEmailMap = futureList.stream()
        .map(CompletableFuture::join)
        .filter(Optional::isPresent)
        .map(userProfile -> userProfile.get())
        .collect(Collectors.toMap(UserProfile::getUserId, UserProfile::getEmail, UserProfile::getName));

A: If you want to map both a name and an email to a particular key, then you need a value type that holds references to both the name and the email.
So you might consider creating a map of type Map<UserId, UserProfile>, associating each UserProfile with its UserId. Name and email would then be accessible through map.get(userId).getEmail() and map.get(userId).getName() respectively.

Map<UserId, UserProfile> userIdToProfileMap = futureList.stream()
        .map(CompletableFuture::join)
        .filter(Optional::isPresent)
        .map(Optional::get)
        .collect(Collectors.toMap(
                UserProfile::getUserId,
                Function.identity()
        ));
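A short usage sketch, assuming the getters shown in the question (and Java 9+ for Map.entry in the alternative below):

UserProfile profile = userIdToProfileMap.get(someUserId);
if (profile != null) {
    String email = profile.getEmail(); // the whole profile is in the map, so either field works
    String name = profile.getName();
}

If only the two strings are needed downstream, an alternative is collecting with Collectors.toMap(UserProfile::getUserId, p -> Map.entry(p.getName(), p.getEmail())), which yields a Map<UserId, Map.Entry<String, String>> without introducing a dedicated value type.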
Using Java Collectors and Stream to return a map with 1 key and multiple values
I have an entity UserId that has both a name and an email. I need to return a map where the UserId can be mapped to either the name or email. Currently this is how it looks where we are able to obtain only the email. Map<UserId, String> userIdToEmailMap = futureList.stream() .map(CompletableFuture::join) .filter(Optional::isPresent) .map(userProfile -> userProfile.get()) .collect(Collectors.toMap(UserProfile::getUserId, UserProfile::getEmail)); And I need even the name to be retrieved. Is there a way I can do that without having to create another separate map and having to return a list of Maps? This is wrong but something like this - Map<UserId, String> userIdToEmailMap = futureList.stream() .map(CompletableFuture::join) .filter(Optional::isPresent) .map(userProfile -> userProfile.get()) .collect(Collectors.toMap(UserProfile::getUserId, UserProfile::getEmail, UserProfile::getName));
[ "If you want to map both a name and an email to a particular Key, then you need a type that would hold references to both name and an email.\nSo you might consider creating a map of type Map<UserId, UserProfile> associating UserProfile with its UserId. Name and email would be accessible through get().getEmail() and get().getName() respectively.\nMap<UserId, UserProfile> userIdToEmailMap = futureList.stream()\n .map(CompletableFuture::join)\n .filter(Optional::isPresent)\n .map(Optional::get)\n .collect(Collectors.toMap(\n UserProfile::getUserId,\n Function.identity()\n ));\n\n" ]
[ 2 ]
[]
[]
[ "collections", "java", "java_stream", "maps" ]
stackoverflow_0074661689_collections_java_java_stream_maps.txt
Q: What is the proper way to determine if any duplicate elements are present between multiple arrays in C++ I am having an issue trying to determine if my arrays contain any duplicate integers. For my Lo Shu Magic Square project, we are to create three different 1-dimensional arrays along with different functions to determine if the input is magic square numbers. I was able to make all other functions work, but I can't seem to figure out how to check if the combined array inputs are all unique. Can anyone help? Here is my source code for bool checkUnique.

bool checkUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)
{
    int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],
                            arrayRow2[0], arrayRow2[1], arrayRow3[2],
                            arrayRow3[0], arrayRow3[1], arrayRow3[2] };
    for (int counter = 0; counter < TOTAL_NUM; counter++)
    {
        for (int j = counter; j < TOTAL_NUM; j++)
        {
            if (j != counter) {

                if (combinedArray[counter] == combinedArray[j])
                {
                    return true;
                }
            }
            return false;
        }
    }
}

I added all elements (TOTAL_NUM = 9) from three different arrays into a new array called combinedArray. When I ran my code and entered 1 2 3 4 5 6 7 8 9, the result still showed that there are duplicates. I tried different methods I found online but still can't get this function to work. Any help would be greatly appreciated.

A: You're quite close to a correct solution, which might look like this:

bool checkUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)
{
    int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],
                            arrayRow2[0], arrayRow2[1], arrayRow2[2],
                            arrayRow3[0], arrayRow3[1], arrayRow3[2] };
    for (int counter = 0; counter < TOTAL_NUM; counter++)
    {
        for (int j = counter; j < TOTAL_NUM; j++)
        {
            if (j != counter) {

                if (combinedArray[counter] == combinedArray[j])
                {
                    return true;
                }
            }
        }
    }
    return false;
}

There are two changes relative to your code. First, I moved return false; behind the loops. Why? Because you need to check all pairs before you can assert that there are no duplicates. Second, I fixed a copy-paste slip in the initializer: your version lists arrayRow3[2] twice and never includes arrayRow2[2], so the combined array always contains a duplicate — that alone explains why the input 1 2 3 4 5 6 7 8 9 was reported as containing duplicates.
This solution might be further improved by changing the starting index of the inner loop:

bool checkUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)
{
    int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],
                            arrayRow2[0], arrayRow2[1], arrayRow2[2],
                            arrayRow3[0], arrayRow3[1], arrayRow3[2] };
    for (int counter = 0; counter < TOTAL_NUM; counter++)
    {
        for (int j = counter + 1; j < TOTAL_NUM; j++)
        {
            if (combinedArray[counter] == combinedArray[j])
                return true;
        }
    }
    return false;
}

Here I changed the initializer of the inner loop into int j = counter + 1 so that I'm sure that j will never be equal to counter.
In this solution you need to make up to 36 comparisons. Alternative approaches: sort combinedArray and check via std::unique whether it contains duplicates, or insert the elements into a std::set and check whether its size is 9. Since your array is small, these more general solutions may not be optimal; you'd need to test them.
Finally, a side remark: try to use consistent names for your variables. counter looks very different from j, which suggests that there's a fundamental difference between the two loop control variables. But there's none: they're very similar to each other. So give them similar names. In the same spirit, please use more useful function names. For example, I'd prefer allUnique, which would return true if and only if all input numbers are unique. Compare if (checkUnique(a, b, c, 9)) with if (allUnique(a, b, c, 9)). Of course this, in fact, should be called as allUnique(a, b, c, 3), because the information about the array lengths is more fundamental than the effective buffer length.
EDIT
Actually, you have not defined precisely what the expected output of your function is. If you assume that checkUnique should return true if all numbers are different, then rename it to something more meaningful and swap all true and false:

bool allUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)
{
    int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],
                            arrayRow2[0], arrayRow2[1], arrayRow2[2],
                            arrayRow3[0], arrayRow3[1], arrayRow3[2] };
    for (int counter = 0; counter < TOTAL_NUM; counter++)
    {
        for (int j = counter + 1; j < TOTAL_NUM; j++)
        {
            if (combinedArray[counter] == combinedArray[j])
                return false;
        }
    }
    return true;
}
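A sketch of the std::set alternative mentioned above (it assumes the fixed 3x3 layout of the assignment rather than the TOTAL_NUM parameter, so it is not a drop-in replacement for the exact signature used in the question):

#include <set>

bool allUnique(const int row1[], const int row2[], const int row3[])
{
    std::set<int> seen; // a set silently discards duplicate insertions
    for (int i = 0; i < 3; ++i)
    {
        seen.insert(row1[i]);
        seen.insert(row2[i]);
        seen.insert(row3[i]);
    }
    return seen.size() == 9; // fewer than 9 distinct values means some value repeated
}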
What is the proper way to determine if any duplicate elements are present between multiple arrays in C++
I am having an issue trying to determine if my arrays contain any duplicate integers. For my Lo Shu Magic Square project, we are to create three different 1-dimensional arrays along with different functions to determine if the input is magic square numbers. I was able to make all other functions work but I cant seem to figure out how to check if the combined array inputs are all unique. Can anyone help? Here is my source code for bool checkUnique. bool checkUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM) { int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2], arrayRow2[0], arrayRow2[1], arrayRow3[2], arrayRow3[0], arrayRow3[1], arrayRow3[2] }; for (int counter = 0; counter < TOTAL_NUM; counter++) { for (int j = counter; j < TOTAL_NUM; j++) { if (j != counter) { if (combinedArray[counter] == combinedArray[j]) { return true; } } return false; } } } I added all elements(TOTAL_NUM = 9) from three different arrays into a new array called combinedArray. When I ran my code and entered 1 2 3 4 5 6 7 8 9, result is still showing that there are duplicates. I tried different methods I found online but still cant get this function to work. Any help would be greatly appreciated
[ "You're quite close to a correct solution, which might look like this:\nbool checkUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)\n{\n int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],\n arrayRow2[0], arrayRow2[1], arrayRow3[2],\n arrayRow3[0], arrayRow3[1], arrayRow3[2] };\n for (int counter = 0; counter < TOTAL_NUM; counter++)\n {\n for (int j = counter; j < TOTAL_NUM; j++)\n {\n if (j != counter) {\n\n if (combinedArray[counter] == combinedArray[j])\n {\n return true;\n }\n }\n }\n \n }\n return false; \n}\n\nThe only change relative to your code is that I moved return false; behind the loops. Why? Because you need to check all pairs before you can assert that there are no duplicates.\nThis solution might be further improved by changing the starting index of the inner loop:\nbool checkUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)\n{\n int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],\n arrayRow2[0], arrayRow2[1], arrayRow3[2],\n arrayRow3[0], arrayRow3[1], arrayRow3[2] };\n for (int counter = 0; counter < TOTAL_NUM; counter++)\n {\n for (int j = counter + 1; j < TOTAL_NUM; j++)\n {\n if (combinedArray[counter] == combinedArray[j]) \n return true;\n } \n }\n return false; \n}\n\nHere I changed the initializer of the inner loop into int j = counter + 1 so that I'm sure that j will never be equal to counter.\nIn this solution you need to make up to 36 comparisons. Alternative approaches:\n\nsort combinedArray and check via std::unique whether it contains duplicates.\ninsert the elements into std::set and check if its size is 9\n\nSince your array is small, these more universal solutions may be not optimal, you'd need to make tests.\nFinally a side remark: try to use consistent names to your variables. counter looks very different from j, which suggests that there's a fundamental difference between the two loop control variables. But there's none: they're very similar to each other. So give them similar names. In the same spirit, please use more useful function names. For example, I'd prefer allUnique that would return true if and only if all input umbers are unique. Compare if (checkUnique(a, b, c, 9)) with if (allUnique(a, b, c, 9)). Of course this, in fact, should be called if allUnique(a, b, c, 3), because the information about array lengths is more fundamental than about the effective buffer length.\nEDIT\nActually, you have not defined precisely what the expected output of your function is. If you assume that checkUnique should return true if all numbers are different, then rename it to something more significant and swap all true and false:\nbool allUnique(int arrayRow1[], int arrayRow2[], int arrayRow3[], int TOTAL_NUM)\n{\n int combinedArray[] = { arrayRow1[0], arrayRow1[1], arrayRow1[2],\n arrayRow2[0], arrayRow2[1], arrayRow3[2],\n arrayRow3[0], arrayRow3[1], arrayRow3[2] };\n for (int counter = 0; counter < TOTAL_NUM; counter++)\n {\n for (int j = counter + 1; j < TOTAL_NUM; j++)\n {\n if (combinedArray[counter] == combinedArray[j]) \n return false;\n } \n }\n return true; \n} \n\n" ]
[ 0 ]
[]
[]
[ "arrays", "c++", "function" ]
stackoverflow_0074661235_arrays_c++_function.txt
Q: Multiselect listbox with checkbox in WPF I have a WPF project and I would like to have a ListBox with a CheckBox on each ListBoxItem, and an ObservableCollection to store the checked ListBoxItems. I need to move the checked chars from one ObservableCollection to another ObservableCollection. With this code I can select multiple checkboxes, but when I trigger the command MoveChar (the command that moves chars to the other ObservableCollection) via the button, only one ListBoxItem moves and I need to click the button multiple times to move all the checked chars.
View

<ListBox SelectionMode="Multiple" ItemsSource="{Binding Chars}"
         SelectedIndex="{Binding SelectedCharsIndex, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"
         SelectedItem="{Binding SelectedChars, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}">
    <ListBox.ItemTemplate>
        <DataTemplate>
            <Grid HorizontalAlignment="Stretch">
                <Grid.ColumnDefinitions>
                    <ColumnDefinition Width="*" />
                    <ColumnDefinition Width="*" />
                    <ColumnDefinition Width="*" />
                </Grid.ColumnDefinitions>
                <TextBlock Text="{Binding Char}" HorizontalAlignment="Center" />
                <TextBlock Text="{Binding Description}" Grid.Column="1" HorizontalAlignment="Left"/>
                <CheckBox Grid.Column="2" IsChecked="{Binding RelativeSource={RelativeSource AncestorType={x:Type ListBoxItem}}, Path=IsSelected}" HorizontalAlignment="Right" />
            </Grid>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>

ViewModel

public CharModel SelectedChars
{
    get { return _selectedChars; }
    set
    {
        _selectedChars = value;
        NotifyPropertyChanged();
    }
}

MoveChar = new RelayCommand(
    () =>
    {
        if (SelectedChars != null)
        {
            TestChars.Add(SelectedChars);
            Chars.Remove(SelectedChars);
            Selected = false;
        }
    });

When I change SelectedChars to ObservableCollection<CharModel> it doesn't work at all. In the ViewModel I iterate each item via foreach.

A: You are binding SelectedChars to the SelectedItem property, which is meant for single-selection list boxes and will only ever hold a single object, not a collection. The full selection lives in the SelectedItems property — be sure the "Items" part is plural. Note, however, that in WPF SelectedItems is a read-only property rather than a bindable dependency property, so you cannot two-way bind it in XAML the way you bind SelectedItem; a common workaround is to pass it to your command as a CommandParameter. It is typed as IList, so you can iterate it the same way (copy it first if you remove items from the source collection while looping).
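A minimal sketch of the CommandParameter approach (the element name CharsList, the button markup, and the generic RelayCommand<T> are illustrative — a parameterized command of this shape exists in toolkits such as CommunityToolkit.Mvvm, but not necessarily in the RelayCommand used in the post):

<ListBox x:Name="CharsList" SelectionMode="Multiple" ItemsSource="{Binding Chars}" ... />
<Button Content="Move"
        Command="{Binding MoveChar}"
        CommandParameter="{Binding ElementName=CharsList, Path=SelectedItems}" />

MoveChar = new RelayCommand<System.Collections.IList>(selected =>
{
    if (selected == null) return;
    // copy first: removing from Chars mutates the live SelectedItems collection
    foreach (var item in selected.Cast<CharModel>().ToList())
    {
        TestChars.Add(item);
        Chars.Remove(item);
    }
});

Cast and ToList come from System.Linq.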
Multiselect listbox with checkbox in WPF
I have a WPF project and I would like to have a listbox with checkbox on each ListboxItem and have an ObservableCollection to store checked ListboxItems. I need to move checked chars from one ObservableCollection to another ObservableCollection. With this code I can select multiple checkboxes but when I trigger the command MoveChar (command to move chars to another ObservableCollection) via the button only one ListboxItem moves and I need to click it more times to move all the checked chars. View <ListBox SelectionMode="Multiple" ItemsSource="{Binding Chars}" SelectedIndex="{Binding SelectedCharsIndex, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}" SelectedItem="{Binding SelectedChars, Mode=TwoWay, UpdateSourceTrigger=PropertyChanged}"> <ListBox.ItemTemplate> <DataTemplate> <Grid HorizontalAlignment="Stretch"> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <TextBlock Text="{Binding Char}" HorizontalAlignment="Center" /> <TextBlock Text="{Binding Description}" Grid.Column="1" HorizontalAlignment="Left"/> <CheckBox Grid.Column="2" IsChecked="{Binding RelativeSource={RelativeSource AncestorType={x:Type ListBoxItem}}, Path=IsSelected}" HorizontalAlignment="Right" /> </Grid> </DataTemplate> </ListBox.ItemTemplate> </ListBox> ViewModel public CharModel SelectedChars { get { return _selectedChars; } set { _selectedChars = value; NotifyPropertyChanged(); } } MoveChar = new RelayCommand( () => { if (SelectedChars != null) { TestChars.Add(SelectedChars); Chars.Remove(SelectedChars); Selected = false; } }); When I change SelectedChars to ObservableCollection<CharModel> it doesn´t work at all. In ViewModel I iterate each item via foreach.
[ "You are binding SelectedChars to the SelectedItem property which is for use with Single selection listboxes and will only return a single object, not a collection. Try binding to the SelectedItems property. Be sure the \"Items\" part is plural. You will need to refactor the property type to be a SelectedObjectCollection as well but this inherits IList so you can iterate the same.\n" ]
[ 0 ]
[]
[]
[ ".net", "c#", "data_binding", "wpf" ]
stackoverflow_0074657080_.net_c#_data_binding_wpf.txt
Q: ASP.NET Razor Page Select List Loses Data after Server-Side Validation Failed I am using .NET Identity for authentication and authorization. For my registration page, I added two SelectListItem properties in the InputModel class for dropdown lists. The problem is, when server-side validation fails, the dropdown lists lose their data as the page reloads. The other basic fields are preserved. I consulted several old posts on how to repopulate the dropdown list but still can't solve the problem. I don't know what exactly is being executed after return Page() is called. Thanks in advance.
Here are the page model and methods:

public class InputModel
{
    ......

    [Required]
    public string Name { get; set; }

    ......

    [ValidateNever]
    public IEnumerable<SelectListItem> RoleList { get; set; }
    [ValidateNever]
    public IEnumerable<SelectListItem> CompanyList { get; set; }
}

public async Task OnGetAsync(string returnUrl = null)
{
    ......
    ......
    Input = new InputModel()
    {
        RoleList = _roleManager.Roles.Select(x => x.Name).Select(i => new SelectListItem
        {
            Text = i,
            Value = i
        }),
        CompanyList = _unitOfWork.Company.GetAll().Select(i => new SelectListItem
        {
            Text = i.Name,
            Value = i.Id.ToString()
        })
    };
}

public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
    ......

    if (ModelState.IsValid)
    {
        var user = CreateUser();

        await _userStore.SetUserNameAsync(user, Input.Email, CancellationToken.None);
        await _emailStore.SetEmailAsync(user, Input.Email, CancellationToken.None);
        user.StreetAddress = Input.StreetAddress;
        user.City = Input.City;
        user.State = Input.State;
        user.PostalCode = Input.PostalCode;
        user.Name = Input.Name;
        user.PhoneNumber = Input.PhoneNumber;

        if (Input.Role == SD.Role_User_Comp)
        {
            user.CompanyId = Input.CompanyId;
        }
        var result = await _userManager.CreateAsync(user, Input.Password);

        if (result.Succeeded)
        {
            ......
            ......
        }
        foreach (var error in result.Errors)
        {
            ModelState.AddModelError(string.Empty, error.Description);
        }
    }

    // If we got this far, something failed, redisplay form
    return Page();
}

A: You can try repopulating RoleList and CompanyList in OnPostAsync before returning the page:

public async Task<IActionResult> OnPostAsync(string returnUrl = null)
{
    ......

    if (ModelState.IsValid)
    {
        var user = CreateUser();

        await _userStore.SetUserNameAsync(user, Input.Email, CancellationToken.None);
        await _emailStore.SetEmailAsync(user, Input.Email, CancellationToken.None);
        user.StreetAddress = Input.StreetAddress;
        user.City = Input.City;
        user.State = Input.State;
        user.PostalCode = Input.PostalCode;
        user.Name = Input.Name;
        user.PhoneNumber = Input.PhoneNumber;

        if (Input.Role == SD.Role_User_Comp)
        {
            user.CompanyId = Input.CompanyId;
        }
        var result = await _userManager.CreateAsync(user, Input.Password);

        if (result.Succeeded)
        {
            ......
            ......
        }
        foreach (var error in result.Errors)
        {
            ModelState.AddModelError(string.Empty, error.Description);
        }
    }
    Input.RoleList = _roleManager.Roles.Select(x => x.Name).Select(i => new SelectListItem
    {
        Text = i,
        Value = i
    });
    Input.CompanyList = _unitOfWork.Company.GetAll().Select(i => new SelectListItem
    {
        Text = i.Name,
        Value = i.Id.ToString()
    });

    // If we got this far, something failed, redisplay form
    return Page();
}

A: I was also struggling with the same problem for the last few days and solved it using Yiyi You's method, though I made a separate function for assigning values to the properties (in your case RoleList and CompanyList) and called it in both OnPostAsync and OnGetAsync.
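A sketch of the shared helper described in the second answer (the method name PopulateSelectLists is illustrative; it assumes Input has already been constructed or model-bound):

private void PopulateSelectLists()
{
    Input.RoleList = _roleManager.Roles.Select(x => x.Name).Select(i => new SelectListItem
    {
        Text = i,
        Value = i
    });
    Input.CompanyList = _unitOfWork.Company.GetAll().Select(i => new SelectListItem
    {
        Text = i.Name,
        Value = i.Id.ToString()
    });
}

Call PopulateSelectLists() in OnGetAsync after constructing Input, and again in OnPostAsync just before return Page(). The underlying reason the lists vanish is that only posted form values round-trip between requests; select list items are never posted back, so they must be rebuilt on every request that renders the page.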
ASP.NET Razor Page Select List Loses Data after Server-Side Validation Failed
I am using .NET Identity for authentication and authorization. For my registration page, I added two selectListItem properties in the InputModel class for dropdown lists. The problem is, when the server-side validation failed, the dropdown lists lost their data as the page reloaded. Other basic data are saved. I consulted several old posts on how to repopulate the dropdown list but still can't solve the problem. I don't know what exactly is being executed after the return Page() is called. Thanks in advance. Here's page model and methods: public class InputModel { ...... [Required] public string Name { get; set; } ...... [ValidateNever] public IEnumerable<SelectListItem> RoleList { get; set; } [ValidateNever] public IEnumerable<SelectListItem> CompanyList { get; set; } } public async Task OnGetAsync(string returnUrl = null) { ...... ...... Input = new InputModel() { RoleList = _roleManager.Roles.Select(x => x.Name).Select(i => new SelectListItem { Text = i, Value = i }), CompanyList = _unitOfWork.Company.GetAll().Select(i => new SelectListItem { Text = i.Name, Value = i.Id.ToString() }) }; } public async Task<IActionResult> OnPostAsync(string returnUrl = null) { ...... if (ModelState.IsValid) { var user = CreateUser(); await _userStore.SetUserNameAsync(user, Input.Email, CancellationToken.None); await _emailStore.SetEmailAsync(user, Input.Email, CancellationToken.None); user.StreetAddress = Input.StreetAddress; user.City = Input.City; user.State = Input.State; user.PostalCode = Input.PostalCode; user.Name = Input.Name; user.PhoneNumber = Input.PhoneNumber; if(Input.Role == SD.Role_User_Comp) { user.CompanyId = Input.CompanyId; } var result = await _userManager.CreateAsync(user, Input.Password); if (result.Succeeded) { ...... ...... } foreach (var error in result.Errors) { ModelState.AddModelError(string.Empty, error.Description); } } // If we got this far, something failed, redisplay form return Page(); }
[ "You can try to set RoleList and CompanyList into OnPostAsync:\npublic async Task<IActionResult> OnPostAsync(string returnUrl = null)\n {\n ......\n \n if (ModelState.IsValid)\n {\n var user = CreateUser();\n\n await _userStore.SetUserNameAsync(user, Input.Email, CancellationToken.None);\n await _emailStore.SetEmailAsync(user, Input.Email, CancellationToken.None);\n user.StreetAddress = Input.StreetAddress;\n user.City = Input.City;\n user.State = Input.State;\n user.PostalCode = Input.PostalCode;\n user.Name = Input.Name;\n user.PhoneNumber = Input.PhoneNumber;\n \n if(Input.Role == SD.Role_User_Comp)\n {\n user.CompanyId = Input.CompanyId;\n }\n var result = await _userManager.CreateAsync(user, Input.Password);\n\n if (result.Succeeded)\n {\n ......\n ......\n }\n foreach (var error in result.Errors)\n {\n ModelState.AddModelError(string.Empty, error.Description);\n }\n \n \n }\n RoleList = _roleManager.Roles.Select(x => x.Name).Select(i => new SelectListItem\n {\n Text = i,\n Value = i\n });\n CompanyList = _unitOfWork.Company.GetAll().Select(i => new SelectListItem\n {\n Text = i.Name,\n Value = i.Id.ToString()\n });\n \n // If we got this far, something failed, redisplay form\n return Page();\n }\n\n", "Also suffering from the same problem for the last few days. I solved it by using Yiyi You's method. Though I made a different function for assigning values to the Properties (in your case RoleList and CompanyList) then call it in both OnPostAsync and OnGetAsync\n" ]
[ 0, 0 ]
[]
[]
[ "asp.net", "asp.net_core", "asp.net_identity", "razor" ]
stackoverflow_0073365162_asp.net_asp.net_core_asp.net_identity_razor.txt
Q: Make element visible when Validation.HasError is true I have the following xaml markup: <GroupBox Margin="10" Padding="20" Header="Case 5 - Custom Error Object"> <GroupBox.DataContext> <local:ViewModel4/> </GroupBox.DataContext> <StackPanel> <Label Name="AdornerElement" Style="{StaticResource ResourceKey=AdornerElementStyle}"/> <TextBox Text="{Binding Path=UserName, UpdateSourceTrigger=PropertyChanged, ValidatesOnNotifyDataErrors=True}" Margin="0 10" Validation.ValidationAdornerSite="{Binding ElementName=AdornerElement}"> </TextBox> </StackPanel> </GroupBox> and the following style: <Style x:Key="AdornerElementStyle" TargetType="Label"> <Setter Property="Visibility" Value="Collapsed"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Label"> <StackPanel Orientation="Horizontal" Background="LightCoral"> <Image Source="/clipart.png" Width="24" Margin="10"/> <ItemsControl ItemsSource="{Binding ElementName=AdornerElement, Path=(Validation.ValidationAdornerSiteFor).(Validation.Errors)}" VerticalAlignment="Center"> <ItemsControl.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding Path=ErrorContent.ValidationMessage}" Style="{StaticResource CustomErrorTypeStyle}"/> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> <ControlTemplate.Triggers> <DataTrigger Binding="{Binding ElementName=AdornerElement, Path=Validation.HasError}" Value="True"> <Setter Property="Visibility" Value="Visible"/> </DataTrigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> Everything works fine except Trigger. If I set up the property "Visibility" to "Visible" initially, then I can see that error messages are shown correctly. If I use style shown before, then the Label remains to be collapsed. Please, help me to use triggers correctly to achieve the final result. A: This is not how Validation.ValidationAdornerSite works. This attached property just defines the element that will be adorned with the validation error template. By default this is the element that is validating, the Binding.Target to be more precise. When you set Validation.ValidationAdornerSite the binding engine will automatically set the attached Validation.ValidationAdornerSiteFor property to reference the element that Validation.ValidationAdornerSite was originally set on. This means that Validation.HasError and other related attached properties are always set on the Binding.Target and not on the adorner site. That's why your triggers don't work: they are triggering on the Label instead of the TextBox (where the validation/binding error is registered). To fix it, the DataTrigger must get the Validation.HasErrors attached property value of the Validation.ValidationAdornerSiteFor attached property, which references the original Binding.Target (the TextBox). 
<Style x:Key="AdornerElementStyle" TargetType="Label"> <Setter Property="Visibility" Value="Collapsed" /> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Label"> <StackPanel Orientation="Horizontal" Background="LightCoral"> <Image Source="/clipart.png" Width="24" Margin="10" /> <ItemsControl ItemsSource="{Binding ElementName=AdornerElement, Path=(Validation.ValidationAdornerSiteFor).(Validation.Errors)}" VerticalAlignment="Center"> <ItemsControl.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding ErrorContent.ValidationMessage}" /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> <ControlTemplate.Triggers> <DataTrigger Binding="{Binding ElementName=AdornerElement, Path=(Validation.ValidationAdornerSiteFor).(Validation.HasError)}" Value="True"> <Setter Property="Visibility" Value="Visible" /> </DataTrigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> Remarks It's not clear why you are using a Label to display the error messages. You could simply define an error template, which is a ControlTemplate that is referenced by the attached Validation.ErrorTemplate property. This would simplify your code significantly. The following example creates the exact same visual error feedback but without the hassle for an additional Label and associated triggers to manage the visibility of the error messages. All this is all handled by the WPF binding engine. <Window> <Window.Resources> <!-- The Validation.Errors property is the DataContext of this template --> <ControlTemplate x:Key="ErrorTemplate"> <StackPanel> <Border BorderBrush="Red" BorderThickness="1" Background="LightCoral"> <StackPanel Orientation="Horizontal"> <Image Source="/clipart.png" Width="24" Margin="10" /> <ItemsControl ItemsSource="{Binding}" VerticalAlignment="Center"> <ItemsControl.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding ErrorContent.ValidationMessage}" /> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> </Border> <Border BorderBrush="Transparent" BorderThickness="1" HorizontalAlignment="Left"> <!-- Placeholder for the Binding.Target --> <AdornedElementPlaceholder x:Name="AdornedElement" /> </Border> </StackPanel> </ControlTemplate> </Window.Resource> <StackPanel> <TextBox Text="{Binding Path=UserName, UpdateSourceTrigger=PropertyChanged, ValidatesOnNotifyDataErrors=True}" Validation.ErrorTemplate="{StaticResoucre ErrorTemplate}" /> </StackPanel> </Window>
Make element visible when Validation.HasError is true
I have the following xaml markup: <GroupBox Margin="10" Padding="20" Header="Case 5 - Custom Error Object"> <GroupBox.DataContext> <local:ViewModel4/> </GroupBox.DataContext> <StackPanel> <Label Name="AdornerElement" Style="{StaticResource ResourceKey=AdornerElementStyle}"/> <TextBox Text="{Binding Path=UserName, UpdateSourceTrigger=PropertyChanged, ValidatesOnNotifyDataErrors=True}" Margin="0 10" Validation.ValidationAdornerSite="{Binding ElementName=AdornerElement}"> </TextBox> </StackPanel> </GroupBox> and the following style: <Style x:Key="AdornerElementStyle" TargetType="Label"> <Setter Property="Visibility" Value="Collapsed"/> <Setter Property="Template"> <Setter.Value> <ControlTemplate TargetType="Label"> <StackPanel Orientation="Horizontal" Background="LightCoral"> <Image Source="/clipart.png" Width="24" Margin="10"/> <ItemsControl ItemsSource="{Binding ElementName=AdornerElement, Path=(Validation.ValidationAdornerSiteFor).(Validation.Errors)}" VerticalAlignment="Center"> <ItemsControl.ItemTemplate> <DataTemplate> <TextBlock Text="{Binding Path=ErrorContent.ValidationMessage}" Style="{StaticResource CustomErrorTypeStyle}"/> </DataTemplate> </ItemsControl.ItemTemplate> </ItemsControl> </StackPanel> <ControlTemplate.Triggers> <DataTrigger Binding="{Binding ElementName=AdornerElement, Path=Validation.HasError}" Value="True"> <Setter Property="Visibility" Value="Visible"/> </DataTrigger> </ControlTemplate.Triggers> </ControlTemplate> </Setter.Value> </Setter> </Style> Everything works fine except the trigger. If I set the "Visibility" property to "Visible" initially, I can see that the error messages are shown correctly. If I use the style shown above, the Label remains collapsed. Please help me use the triggers correctly to achieve the desired result.
[ "This is not how Validation.ValidationAdornerSite works. This attached property just defines the element that will be adorned with the validation error template. By default this is the element that is validating, the Binding.Target to be more precise.\nWhen you set Validation.ValidationAdornerSite the binding engine will automatically set the attached Validation.ValidationAdornerSiteFor property to reference the element that Validation.ValidationAdornerSite was originally set on.\nThis means that Validation.HasError and other related attached properties are always set on the Binding.Target and not on the adorner site.\nThat's why your triggers don't work: they are triggering on the Label instead of the TextBox (where the validation/binding error is registered).\nTo fix it, the DataTrigger must get the Validation.HasErrors attached property value of the Validation.ValidationAdornerSiteFor attached property, which references the original Binding.Target (the TextBox).\n<Style x:Key=\"AdornerElementStyle\"\n TargetType=\"Label\">\n <Setter Property=\"Visibility\"\n Value=\"Collapsed\" />\n <Setter Property=\"Template\">\n <Setter.Value>\n <ControlTemplate TargetType=\"Label\">\n <StackPanel Orientation=\"Horizontal\"\n Background=\"LightCoral\">\n <Image Source=\"/clipart.png\"\n Width=\"24\"\n Margin=\"10\" />\n <ItemsControl ItemsSource=\"{Binding ElementName=AdornerElement, Path=(Validation.ValidationAdornerSiteFor).(Validation.Errors)}\"\n VerticalAlignment=\"Center\">\n <ItemsControl.ItemTemplate>\n <DataTemplate>\n <TextBlock Text=\"{Binding ErrorContent.ValidationMessage}\" />\n </DataTemplate>\n </ItemsControl.ItemTemplate>\n </ItemsControl>\n </StackPanel>\n <ControlTemplate.Triggers>\n <DataTrigger Binding=\"{Binding ElementName=AdornerElement, Path=(Validation.ValidationAdornerSiteFor).(Validation.HasError)}\"\n Value=\"True\">\n <Setter Property=\"Visibility\"\n Value=\"Visible\" />\n </DataTrigger>\n </ControlTemplate.Triggers>\n </ControlTemplate>\n </Setter.Value>\n </Setter>\n</Style>\n\nRemarks\nIt's not clear why you are using a Label to display the error messages. You could simply define an error template, which is a ControlTemplate that is referenced by the attached Validation.ErrorTemplate property. This would simplify your code significantly.\nThe following example creates the exact same visual error feedback but without the hassle for an additional Label and associated triggers to manage the visibility of the error messages. 
All of this is handled by the WPF binding engine.\n<Window>\n <Window.Resources>\n\n <!-- The Validation.Errors property is the DataContext of this template -->\n <ControlTemplate x:Key=\"ErrorTemplate\">\n <StackPanel>\n <Border BorderBrush=\"Red\"\n BorderThickness=\"1\"\n Background=\"LightCoral\">\n <StackPanel Orientation=\"Horizontal\">\n <Image Source=\"/clipart.png\"\n Width=\"24\"\n Margin=\"10\" />\n <ItemsControl ItemsSource=\"{Binding}\"\n VerticalAlignment=\"Center\">\n <ItemsControl.ItemTemplate>\n <DataTemplate>\n <TextBlock Text=\"{Binding ErrorContent.ValidationMessage}\" />\n </DataTemplate>\n </ItemsControl.ItemTemplate>\n </ItemsControl>\n </StackPanel>\n </Border>\n\n <Border BorderBrush=\"Transparent\"\n BorderThickness=\"1\"\n HorizontalAlignment=\"Left\">\n\n <!-- Placeholder for the Binding.Target -->\n <AdornedElementPlaceholder x:Name=\"AdornedElement\" />\n </Border>\n </StackPanel>\n </ControlTemplate>\n </Window.Resources>\n\n <StackPanel>\n <TextBox Text=\"{Binding Path=UserName, UpdateSourceTrigger=PropertyChanged, ValidatesOnNotifyDataErrors=True}\"\n Validation.ErrorTemplate=\"{StaticResource ErrorTemplate}\" />\n </StackPanel>\n</Window>\n\n" ]
[ 0 ]
[]
[]
[ "c#", "triggers", "visibility", "wpf", "xaml" ]
stackoverflow_0074645422_c#_triggers_visibility_wpf_xaml.txt
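The binding path ErrorContent.ValidationMessage used throughout the answer implies a custom error object surfaced through INotifyDataErrorInfo. A minimal C# sketch of what that view-model side could look like; the CustomError class and everything except UserName are assumptions rather than code from the question:

using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;

// Hypothetical custom error object; its ValidationMessage property is what the
// ErrorContent.ValidationMessage binding path in the templates resolves to.
public class CustomError
{
    public string ValidationMessage { get; set; }
}

public class ViewModel4 : INotifyDataErrorInfo
{
    private readonly Dictionary<string, List<object>> _errors = new Dictionary<string, List<object>>();
    private string _userName;

    public string UserName
    {
        get { return _userName; }
        set { _userName = value; Validate(nameof(UserName)); }
    }

    public bool HasErrors { get { return _errors.Count > 0; } }

    public event EventHandler<DataErrorsChangedEventArgs> ErrorsChanged;

    // WPF wraps each returned object in a ValidationError whose ErrorContent
    // is that object, which is why ErrorContent.ValidationMessage resolves.
    public IEnumerable GetErrors(string propertyName)
    {
        if (string.IsNullOrEmpty(propertyName)) return null;
        List<object> list;
        return _errors.TryGetValue(propertyName, out list) ? list : null;
    }

    private void Validate(string propertyName)
    {
        _errors.Remove(propertyName);
        if (string.IsNullOrWhiteSpace(_userName))
        {
            _errors[propertyName] = new List<object>
            {
                new CustomError { ValidationMessage = "User name must not be empty." }
            };
        }
        var handler = ErrorsChanged;
        if (handler != null) handler(this, new DataErrorsChangedEventArgs(propertyName));
    }
}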
Q: docker image cp-kafka and schema registry I have the following docker-compose file: --- version: '3' services: zookeeper: image: confluentinc/cp-zookeeper:7.3.0 container_name: zookeeper environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 networks: - kafka schemaregistry: image: confluentinc/cp-schema-registry:7.3.0 container_name: schemaregistry restart: always depends_on: - zookeeper environment: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092 SCHEMA_REGISTRY_HOST_NAME: schemaregistry SCHEMA_REGISTRY_LISTENERS: http://schemaregistry:8085 ports: - "8085:8085" networks: - kafka broker: image: confluentinc/cp-kafka:7.3.0 container_name: broker ports: - "9092:9092" depends_on: - zookeeper environment: KAFKA_BROKER_ID: 1 KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: schemaregistry:8085 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092 KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1 KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1 KAFKA_AUTO_CREATE_TOPICS_ENABLE: false networks: - kafka networks: kafka: When I try to create a topic I receive the following error. docker exec broker kafka-topics --create --bootstrap-server broker:29092 --replication-factor 1 --partitions 1 --topic my-new-topic --config confluent.value.schema.validation=true Error while executing topic command : Unknown topic config name: confluent.value.schema.validation [2022-12-02 14:05:53,318] ERROR org.apache.kafka.common.errors.InvalidConfigurationException: Unknown topic config name: confluent.value.schema.validation (kafka.admin.TopicCommand$) I've looked in https://github.com/confluentinc/kafka/blob/master/core/src/main/scala/kafka/log/LogConfig.scala and there is no schema config there. Is that expected? Is the schema validation feature supported by cp-kafka, or maybe only by cp-server? A: Indeed, that config option is only applicable to topics managed by the cp-server image (which does not have openly available source code).
docker image cp-kafka and schema registry
I have the following docker-compose file: --- version: '3' services: zookeeper: image: confluentinc/cp-zookeeper:7.3.0 container_name: zookeeper environment: ZOOKEEPER_CLIENT_PORT: 2181 ZOOKEEPER_TICK_TIME: 2000 networks: - kafka schemaregistry: image: confluentinc/cp-schema-registry:7.3.0 container_name: schemaregistry restart: always depends_on: - zookeeper environment: SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: broker:29092 SCHEMA_REGISTRY_HOST_NAME: schemaregistry SCHEMA_REGISTRY_LISTENERS: http://schemaregistry:8085 ports: - "8085:8085" networks: - kafka broker: image: confluentinc/cp-kafka:7.3.0 container_name: broker ports: - "9092:9092" depends_on: - zookeeper environment: KAFKA_BROKER_ID: 1 KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: schemaregistry:8085 KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_INTERNAL:PLAINTEXT KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092,PLAINTEXT_INTERNAL://broker:29092 KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1 KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1 KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1 KAFKA_AUTO_CREATE_TOPICS_ENABLE: false networks: - kafka networks: kafka: When I try to create a topic I receive the following error. docker exec broker kafka-topics --create --bootstrap-server broker:29092 --replication-factor 1 --partitions 1 --topic my-new-topic --config confluent.value.schema.validation=true Error while executing topic command : Unknown topic config name: confluent.value.schema.validation [2022-12-02 14:05:53,318] ERROR org.apache.kafka.common.errors.InvalidConfigurationException: Unknown topic config name: confluent.value.schema.validation (kafka.admin.TopicCommand$) I've looked in https://github.com/confluentinc/kafka/blob/master/core/src/main/scala/kafka/log/LogConfig.scala and there is no schema config there. Is that expected? Is the schema validation feature supported by cp-kafka, or maybe only by cp-server?
[ "Indeed, that config option only is applicable for topics managed by cp-server image (which does not have available open-source code).\n" ]
[ 0 ]
[]
[]
[ "apache_kafka", "confluent_schema_registry" ]
stackoverflow_0074661403_apache_kafka_confluent_schema_registry.txt
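An untested sketch of the swap the answer implies: run the broker from the cp-server image (Confluent's commercial build, which includes broker-side schema validation) and give it a full Schema Registry URL; everything else in the compose file stays as in the question, and the version tag simply mirrors it:

  broker:
    image: confluentinc/cp-server:7.3.0   # cp-server instead of cp-kafka
    container_name: broker
    depends_on:
      - zookeeper
    environment:
      # ...same broker settings as in the question, but with a full URL:
      KAFKA_CONFLUENT_SCHEMA_REGISTRY_URL: http://schemaregistry:8085

With that image, the original kafka-topics command with --config confluent.value.schema.validation=true should be accepted; check Confluent's licensing terms before relying on it.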
Q: Silent push notification for web OneSignal Can I use silent web push notifications with OneSignal? I need to update some chunks of data without showing a notification to the user. I went through the documentation but couldn't find anything useful in it. I am using NestJS as the backend and React as the frontend. A: Based on the documentation I see from the API, you can edit the external user id data tags. Now, using a silent web push notification won't do what you want, because a silent web push is still shown. https://developer.mozilla.org/en-US/docs/Web/API/Notification/silent There is a userVisibleOnly setting that can be set to false, which means you don't have to display a notification. However, with Chrome, when you try to disable this, Chrome does not allow it. https://tests.peter.sh/push-message-generator/ Come join us in our Discord server and learn more about our community: https://onesignal.com/onesignal-developers A: Sorry for posting my answer late; I sent a message to the OneSignal team via the Support chat option on their website. This is the reply I got, as of July 20th, 2021. Maybe in the future they will integrate this feature, but unfortunately it is not available now. You cannot send notifications on web without displaying a notification to the user. You can add additional Data to the notification and access that data in the notificationDisplay event. Here are the docs on this: https://documentation.onesignal.com/docs/web-push-sdk#section-notification-display Thanks Ana from OneSignal (just for reference) A: How do you trigger an event when a web notification is displayed in React.js? I want to update the UI when a new web push notification is received from OneSignal. I am using the OneSignal npm package. import OneSignal from "react-onesignal"; export async function runOneSignalOnLogin(id) { await OneSignal.init({ appId: process.env.REACT_APP_ONE_SIGNAL_ID, allowLocalhostAsSecureOrigin: true, }); OneSignal.showSlidedownPrompt(); OneSignal.setExternalUserId(id); OneSignal.on("notificationDisplay", function (val) { console.log("Value:", val); }); }
Silent push notification for web OneSignal
Can I use silent web push notifications with OneSignal? I need to update some chunks of data without showing a notification to the user. I went through the documentation but couldn't find anything useful in it. I am using NestJS as the backend and React as the frontend.
[ "Based on the documentation I see from the API, you can edit the external user id data tags.\nNow, using a silent web push notification, it won't do what you want because silent web push is still being shown.\nhttps://developer.mozilla.org/en-US/docs/Web/API/Notification/silent\nThere is a userVisibleOnly setting that can be set to false which means you don't have to display a notification. However with Chrome when you try to disable this Chrome does not allow it.\nhttps://tests.peter.sh/push-message-generator/\nCome join us in our discord server and learn more about our community:\nhttps://onesignal.com/onesignal-developers\n", "Sorry for posting my answer late, I sent a message to the onesignal team on their website's Support chat option. This is the reply I get, latest of July 20th, 2021. May be in future, they may integrate this feature, but unfortunately not available now.\n\nYou cannot send notifications on web without displaying a\nnotification to the user.\nYou can add additional Data to the notification and access that data\nin the notificationDisplay event.\nHere is the docs on this:\nhttps://documentation.onesignal.com/docs/web-push-sdk#section-notification-display\nThanks\n\nAna From Onesignal (just for reference)\n", "how to trigger an event when a web notification is displayed in React.js. I want to update the ui when a new web push notification is received from OneSignal. I am using the npm package of OneSignal\n`\nimport OneSignal from \"react-onesignal\";\nexport async function runOneSignalOnLogin(data) {\n await OneSignal.init({\n appId: process.env.REACT_APP_ONE_SIGNAL_ID,\n allowLocalhostAsSecureOrigin: true,\n });\n OneSignal.showSlidedownPrompt();\n OneSignal.setExternalUserId(id);\n OneSignal.on(\"notificationDisplay\", function (val) {\n console.log(\"Value:\", val);\n});\n}\n\n`\n" ]
[ 2, 1, 0 ]
[]
[]
[ "node.js", "onesignal", "push_notification", "web_push" ]
stackoverflow_0068455894_node.js_onesignal_push_notification_web_push.txt
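Since web pushes cannot be fully silent, the practical workaround in the answers is to ship the payload in the notification's data field and react to it in the notificationDisplay handler. A hedged Node-style sketch of the sending side using OneSignal's REST API; the function name, variable names and message text are illustrative, not from the question:

const axios = require("axios");

// Sends a (necessarily visible) web push whose custom `data` payload can be
// read in the OneSignal notificationDisplay handler shown in the third answer.
async function sendPushWithData(appId, restApiKey, externalUserId, payload) {
  return axios.post(
    "https://onesignal.com/api/v1/notifications",
    {
      app_id: appId,
      include_external_user_ids: [externalUserId],
      contents: { en: "New data available" }, // web push must display something
      data: payload,                          // arrives alongside the notification
    },
    { headers: { Authorization: `Basic ${restApiKey}` } }
  );
}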
Q: Which ports needed to be set for a k8s deployment? I do not understand how to configure ports correctly for a k8s deployment. Assume there is a nextJS application which listens to port 3003 (default is 3000). I build the docker image: FROM node:16.14.0 RUN apk add dumb-init # ... EXPOSE 3003 ENTRYPOINT ["/usr/bin/dumb-init", "--"] CMD npx next start -p 3003 So in this Dockerfile there are two places defining the port value 3003. Is this needed? Then I define this k8s manifest: apiVersion: apps/v1 kind: Deployment metadata: name: example spec: spec: containers: - name: example image: "hub.domain.com/example:1.0.0" imagePullPolicy: IfNotPresent ports: - containerPort: 3003 --- apiVersion: v1 kind: Service metadata: name: example spec: ports: - protocol: TCP port: 80 targetPort: 3003 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example namespace: default annotations: cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: tls: - hosts: - domain.com secretName: tls-key rules: - host: domain.com http: paths: - pathType: Prefix path: "/" backend: service: name: example port: number: 80 The deployment is not working correctly. Calling domain.com shows me a 503 Service Temporarily Unavailable error. If I do a port forward on the pod, I can see the working app at localhost:3003. I cannot create a port forward on the service. So obviously I'm doing something wrong with the ports. Can someone explain which value has to be set and why? A: You are missing labels from the deployment and the selector from the service. Try this: apiVersion: apps/v1 kind: Deployment metadata: name: example labels: app: example spec: selector: matchLabels: app: example template: metadata: labels: app: example spec: containers: - name: example image: "hub.domain.com/example:1.0.0" imagePullPolicy: IfNotPresent ports: - containerPort: 3003 --- apiVersion: v1 kind: Service metadata: name: example spec: selector: app: example ports: - protocol: TCP port: 80 targetPort: 3003 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example namespace: default annotations: cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: tls: - hosts: - domain.com secretName: tls-key rules: - host: domain.com http: paths: - pathType: Prefix path: "/" backend: service: name: example port: number: 80 Deployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/ Service: https://kubernetes.io/docs/concepts/services-networking/service/ Labels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ You can name your label keys and values anything you like, you could even have a label as whatever: something instead of app: example but these are some recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/ https://kubernetes.io/docs/reference/labels-annotations-taints/
Which ports needed to be set for a k8s deployment?
I do not understand how to configure ports correctly for a k8s deployment. Assume there is a nextJS application which listens to port 3003 (default is 3000). I build the docker image: FROM node:16.14.0 RUN apk add dumb-init # ... EXPOSE 3003 ENTRYPOINT ["/usr/bin/dumb-init", "--"] CMD npx next start -p 3003 So in this Dockerfile there are two places defining the port value 3003. Is this needed? Then I define this k8s manifest: apiVersion: apps/v1 kind: Deployment metadata: name: example spec: spec: containers: - name: example image: "hub.domain.com/example:1.0.0" imagePullPolicy: IfNotPresent ports: - containerPort: 3003 --- apiVersion: v1 kind: Service metadata: name: example spec: ports: - protocol: TCP port: 80 targetPort: 3003 --- apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: example namespace: default annotations: cert-manager.io/cluster-issuer: letsencrypt-prod kubernetes.io/ingress.class: nginx kubernetes.io/tls-acme: "true" nginx.ingress.kubernetes.io/ssl-passthrough: "true" spec: tls: - hosts: - domain.com secretName: tls-key rules: - host: domain.com http: paths: - pathType: Prefix path: "/" backend: service: name: example port: number: 80 The deployment is not working correctly. Calling domain.com shows me a 503 Service Temporarily Unavailable error. If I do a port forward on the pod, I can see the working app at localhost:3003. I cannot create a port forward on the service. So obviously I'm doing something wrong with the ports. Can someone explain which value has to be set and why?
[ "You are missing labels from the deployment and the selector from the service. Try this:\napiVersion: apps/v1\nkind: Deployment\nmetadata:\n name: example\n labels:\n app: example\nspec:\n selector:\n matchLabels:\n app: example\n template:\n metadata:\n labels:\n app: example\n spec:\n containers:\n - name: example\n image: \"hub.domain.com/example:1.0.0\"\n imagePullPolicy: IfNotPresent\n ports:\n - containerPort: 3003\n---\napiVersion: v1\nkind: Service\nmetadata:\n name: example\nspec:\n selector:\n app: example\n ports:\n - protocol: TCP\n port: 80\n targetPort: 3003\n---\napiVersion: networking.k8s.io/v1\nkind: Ingress\nmetadata:\n name: example\n namespace: default\n annotations:\n cert-manager.io/cluster-issuer: letsencrypt-prod\n kubernetes.io/ingress.class: nginx\n kubernetes.io/tls-acme: \"true\"\n nginx.ingress.kubernetes.io/ssl-passthrough: \"true\"\nspec:\n tls:\n - hosts:\n - domain.com\n secretName: tls-key\n rules:\n - host: domain.com\n http:\n paths:\n - pathType: Prefix\n path: \"/\"\n backend:\n service:\n name: example\n port:\n number: 80\n\nDeployment: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/\nService: https://kubernetes.io/docs/concepts/services-networking/service/\nLabels and selectors: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/\nYou can name your label keys and values anything you like, you could even have a label as whatever: something instead of app: example but these are some recommended labels: https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels/\nhttps://kubernetes.io/docs/reference/labels-annotations-taints/\n" ]
[ 3 ]
[]
[]
[ "docker", "kubernetes", "nginx_ingress", "port" ]
stackoverflow_0074661323_docker_kubernetes_nginx_ingress_port.txt
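Two quick diagnostics make this failure mode visible: when the Service selector matches no pods, its endpoints list is empty and the ingress answers 503. After applying the labelled manifest above, the endpoints should list the pod IP and a port-forward to the service starts working (commands assume the default namespace):

kubectl get endpoints example              # an empty ENDPOINTS column means the selector matches no pods
kubectl port-forward svc/example 8080:80   # succeeds once the service has endpoints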
Q: Shape.Copy Series.Paste, without touching clipboard Behold minimal VBA code that successfully does one instance of using a shape as the marker for an Excel chart series. Function TestFormatting(Optional ignoredParameter As Variant) With ThisWorkbook.Sheets("Sheet1") .Shapes("Shape1").Copy .ChartObjects("Chart1").Chart.SeriesCollection(1).Paste End With End Function ' TestFormatting Alas, this code vandalises the clipboard. Please, do readers know of any way to achieve this task without changing the clipboard? My two candidate solutions have both failed. Not damaging the clipboard by some sort of .Copy Dest:=. Happily that compiles; unhappily does not execute. Some means of, with respect to the clipboard, that which a PostScript programmer might call save … restore. It might be possible to save the clipboard if I knew it contained just text (which I haven’t managed to make work), but anyway the clipboard might contain anything: perhaps small, perhaps big, perhaps not from Excel. Aside, for those wondering why it is a function. So that the correct formatting of a chart can be part of Excel’s recalc sequence: if the name of a series changes, its formatting would correctly follow. The actual function would take some parameters including ChartObject.Name and Series.Name. The ignoredParameter might be something that would change if the desired formatting were to change, so triggering re-execution. My target is that it work on most modern(ish) Excel versions, but anyway I’m using Mac Excel 16.43. A: Please try the following approaches: Load the SVG file from a path on disk: Sub ApplySVGPictOnSeriesCollection() Dim ch As ChartObject, rng As Range Set rng = ActiveSheet.Range("I2:J4") 'place here the SourceData Set ch = ActiveSheet.ChartObjects.Add(left:=1, top:=1, width:=100, height:=100) ch.Chart.SetSourceData rng With ch.Chart.SeriesCollection(1).Fill .UserPicture PictureFile:=ThisWorkbook.Path & "\sample_640×426.svg" 'use here your svg file path .Visible = True End With End Sub Convert the SVG files to bmp (or jpg, gif, wmf, emf), then insert ActiveX Image controls on the sheet and place the pictures in the above formats on them. They can be used as in the next piece of code: Sub copyPictureToSeriesCollectionNoClipboard() Dim s As Shape, ch As ChartObject, rng As Range, img As Image Set rng = ActiveSheet.Range("I2:J4") Set s = ActiveSheet.Shapes("Image1") 'an ActiveX Image control shape Set img = s.OLEFormat.Object.Object 'image stdole.SavePicture img.Picture, ThisWorkbook.Path & "\myPicture123.jpg" 'create a new chart: Set ch = ActiveSheet.ChartObjects.Add(left:=1, top:=1, width:=100, height:=100) ch.Chart.SetSourceData rng With ch.Chart.SeriesCollection(1).Fill 'place the pictures: .UserPicture PictureFile:=ThisWorkbook.Path & "\myPicture123.jpg" .Visible = True End With End Sub Please send some feedback after testing them.
Shape.Copy Series.Paste, without touching clipboard
Behold minimal VBA code that successfully does one instance of using a shape as the marker for an Excel chart series. Function TestFormatting(Optional ignoredParameter As Variant) With ThisWorkbook.Sheets("Sheet1") .Shapes("Shape1").Copy .ChartObjects("Chart1").Chart.SeriesCollection(1).Paste End With End Function ' TestFormatting Alas, this code vandalises the clipboard. Please, do readers know of any way to achieve this task without changing the clipboard? My two candidate solutions have both failed. Not damaging the clipboard by some sort of .Copy Dest:=. Happily that compiles; unhappily does not execute. Some means of, with respect to the clipboard, that which a PostScript programmer might call save … restore. It might be possible to save the clipboard if I knew it contained just text (which I haven’t managed to make work), but anyway the clipboard might contain anything: perhaps small, perhaps big, perhaps not from Excel. Aside, for those wondering why it is a function. So that the correct formatting of a chart can be part of Excel’s recalc sequence: if the name of a series changes, its formatting would correctly follow. The actual function would take some parameters including ChartObject.Name and Series.Name. The ignoredParameter might be something that would change if the desired formatting were to change, so triggering re-execution. My target is that it work on most modern(ish) Excel versions, but anyway I’m using Mac Excel 16.43.
[ "Please, try the next ways:\n\nLoad SVG file from computer path:\n\nSub ApplySVGPictOnSeriesCollection()\n Dim ch As ChartObject, rng As Range\n \n Set rng = ActiveSheet.Range(\"I2:J4\") 'place here the SourceData\n \n Set ch = ActiveSheet.ChartObjects.Add(left:=1, top:=1, width:=100, height:=100)\n ch.Chart.SetSourceData rng\n \n With ch.Chart.SeriesCollection(1).Fill\n .UserPicture PictureFile:=ThisWorkbook.Path & \"\\sample_640×426.svg\" 'use here your svg file path\n .Visible = True\n End With\nEnd Sub\n\n\nConvert the SVG files to bmp or (jpg, gif, Wmf, emf) then, insert sheet ActiveX Image controls and place the pictures in the above format on them. They can be used as in the next piece of code:\n\nSub copyPictureToSeriesCollectionNoClipboard()\n Dim s As Shape, ch As ChartObject, rng As Range, img As Image\n \n Set rng = ActiveSheet.Range(\"I2:J4\")\n \n Set s = ActiveSheet.Shapes(\"Image1\") 'an ActiveX Image control shape \n \n Set img = s.OLEFormat.Object.Object 'image\n stdole.SavePicture img.Picture, ThisWorkbook.Path & \"\\myPicture123.jpg\"\n \n 'create a new chart:\n Set ch = ActiveSheet.ChartObjects.Add(left:=1, top:=1, width:=100, height:=100)\n \n ch.Chart.SetSourceData rng\n \n With ch.Chart.SeriesCollection(1).Fill 'place the pictures:\n .UserPicture PictureFile:=ThisWorkbook.Path & \"\\myPicture123.jpg\"\n .Visible = True\n End With\nEnd Sub\n\nPlease, send some feedback after testing them.\n" ]
[ 2 ]
[]
[]
[ "charts", "clipboard", "excel", "vba" ]
stackoverflow_0074659795_charts_clipboard_excel_vba.txt
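To keep the asker's recalc-driven design, the clipboard-free UserPicture approach from the answer can be wrapped in a parameterised function. This is only a sketch, under the assumption that it is invoked the same way as the original TestFormatting; the sheet name and argument list are placeholders:

' Clipboard-free series formatting driven by Excel's recalc chain.
Function FormatSeriesMarker(chartName As String, seriesIndex As Long, _
                            picturePath As String, _
                            Optional ignoredParameter As Variant) As Boolean
    With ThisWorkbook.Sheets("Sheet1").ChartObjects(chartName) _
            .Chart.SeriesCollection(seriesIndex).Fill
        .UserPicture PictureFile:=picturePath    ' no Copy/Paste, clipboard untouched
        .Visible = True
    End With
    FormatSeriesMarker = True
End Function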
Q: Postman Error: self signed certificate in certificate chain I am trying to test a REST API in Postman, but every time I try to POST I get the following error: Error: self signed certificate in certificate chain. I have tried with the SSL certificate verification on and off, but both methods don't work. Postman is also updated to the latest v7.3.6. To be honest, I don't know what to try anymore and would really appreciate any tip. A: Go to Postman Settings > General > turn OFF SSL certificate verification A: "make ssl certificate verification on and make it still work" If you're in an organization environment, you can: Export your organization self-signed certificate as a Base-64 encoded X.509 (.CER) format flat file. This can be done from Chrome. Go back to Postman: Settings -> Certificates -> CA Certificates, switch it on and select the file you just exported. A: Adding CA certificates doesn't work for me. My certs are not self-signed, but I got the same error. Adding client certificates solved my problem. Quoting the Postman docs: To send requests to an API that uses mutual TLS authentication, add your client certificate to Postman.
Postman Error: self signed certificate in certificate chain
I am trying to test a REST API in Postman, but every time I try to POST I get the following error: Error: self signed certificate in certificate chain. I have tried with the SSL certificate verification on and off, but both methods don't work. Postman is also updated to the latest v7.3.6. To be honest, I don't know what to try anymore and would really appreciate any tip.
[ "Go to Postman Settings > General > turn OFF SSL certificate verification\n\n", "\"make ssl certificate verification on and make it still work\"\nIf you're under organization environment, you can:\n\nExport your organization self-signed certificate as Base-64 encoded X.509 (.CER) format flat file. It could be done from Chrome.\nGo back to Postman: Settings -> Certificates -> CA Certificates, switch on and select the file you just exported.\n\n", "Adding CA certificates doesn't work for me. My certs are not self-signed but got the same error. Adding client certificates solved my problem. Quoted docs from Postman here: To send requests to an API that uses mutual TLS authentication, add your client certificate to Postman.\n\n" ]
[ 67, 7, 0 ]
[]
[]
[ "postman" ]
stackoverflow_0057424532_postman.txt
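A quick way to reproduce both fixes outside Postman and see which one applies to a given endpoint: curl's -k flag mirrors the "verification off" toggle, while --cacert mirrors importing the exported organization CA (the URL and file name are placeholders):

curl -k https://api.example.com/resource                        # skip verification (diagnostic only)
curl --cacert org-root-ca.cer https://api.example.com/resource  # trust the exported org CA instead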