Dataset schema (one record per Stack Overflow thread):
  content             string, 86 to 88.9k characters
  title               string, 0 to 150 characters
  question            string, 1 to 35.8k characters
  answers             list of strings
  answers_scores      list of integers
  non_answers         list of strings
  non_answers_scores  list of integers
  tags                list of strings
  name                string, 30 to 130 characters
Q: Assign data in JSON file to a variable based on condition python I am trying to grab data from JSON file based on what quarter the dates represent. My goal is to assign the data to a variable so I should have Q1, Q2, Q3, Q4 variables holding the data inside. Below is the JSON: { "lastDate":{ "0":"2022Q4", "1":"2022Q4", "2":"2022Q4", "7":"2022Q4", "8":"2022Q4", "9":"2022Q4", "18":"2022Q3", "19":"2022Q3", "22":"2022Q3", "24":"2022Q2" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "18":"Automatic Sell", "19":"Automatic Sell", "22":"Automatic Sell", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "7":"167,889", "8":"13,250", "9":"176,299", "18":"96,735", "19":"15,366", "22":"25,000", "24":"25,000" } } Now if i try to use the following code: import json data = json.load(open("AAPL22data.json")) Q2data = [item for item in data if '2022Q2' in data['lastDate']] print(Q2data) My ideal output should be: { "lastDate":{ "24":"2022Q2" }, "transactionType":{ "24":"Automatic Sell" }, "sharesTraded":{ "24":"25,000" } } And then repeat the same structure for the other quarters. However, my current output gives me "[ ]" A: With pandas you can read this nested dictionary a transform it to a table representation. Then the aggregation you are required becomes quite natural. import pandas as pd sample_dict = { "lastDate":{ "0":"2022Q4", "1":"2022Q4", "2":"2022Q4", "7":"2022Q4", "8":"2022Q4", "9":"2022Q4", "18":"2022Q3", "19":"2022Q3", "22":"2022Q3", "24":"2022Q2" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "18":"Automatic Sell", "19":"Automatic Sell", "22":"Automatic Sell", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "7":"167,889", "8":"13,250", "9":"176,299", "18":"96,735", "19":"15,366", "22":"25,000", "24":"25,000" } } print(pd.DataFrame.from_dict(sample_dict)) returns Output: lastDate transactionType sharesTraded 0 2022Q4 Sell 20,200 1 2022Q4 Automatic Sell 176,299 2 2022Q4 Automatic Sell 8,053 7 2022Q4 Automatic Sell 167,889 8 2022Q4 Sell 13,250 9 2022Q4 Automatic Sell 176,299 18 2022Q3 Automatic Sell 96,735 19 2022Q3 Automatic Sell 15,366 22 2022Q3 Automatic Sell 25,000 24 2022Q2 Automatic Sell 25,000 then a simple group_by should do the trick. 
A: Use a dictionary comprehension: import json my_json = """{ "lastDate":{ "0":"2022Q4", "1":"2022Q4", "2":"2022Q4", "7":"2022Q4", "8":"2022Q4", "9":"2022Q4", "18":"2022Q3", "19":"2022Q3", "22":"2022Q3", "24":"2022Q2" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "18":"Automatic Sell", "19":"Automatic Sell", "22":"Automatic Sell", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "7":"167,889", "8":"13,250", "9":"176,299", "18":"96,735", "19":"15,366", "22":"25,000", "24":"25,000" } }""" data = json.loads(my_json) var = "24" #This corresponds to 2022 Q2 in your example data = {k:{var: v[var]} for k, v in data.items()} data = json.dumps(data, indent = 2) print(data) Output: { "lastDate": { "24": "2022Q2" }, "transactionType": { "24": "Automatic Sell" }, "sharesTraded": { "24": "25,000" } } A: Thanks to @FrancoMilanese for the info on Pandas group_by here is the answer below: import json import pandas as pd data = json.load(open("AAPL22data.json")) df = pd.DataFrame.from_dict(data) q2df = df.groupby('lastDate') q2df.get_group('2022Q2') #change '2022q2' for others & assign to a different variable
Assign data in JSON file to a variable based on condition python
I am trying to grab data from JSON file based on what quarter the dates represent. My goal is to assign the data to a variable so I should have Q1, Q2, Q3, Q4 variables holding the data inside. Below is the JSON: { "lastDate":{ "0":"2022Q4", "1":"2022Q4", "2":"2022Q4", "7":"2022Q4", "8":"2022Q4", "9":"2022Q4", "18":"2022Q3", "19":"2022Q3", "22":"2022Q3", "24":"2022Q2" }, "transactionType":{ "0":"Sell", "1":"Automatic Sell", "2":"Automatic Sell", "7":"Automatic Sell", "8":"Sell", "9":"Automatic Sell", "18":"Automatic Sell", "19":"Automatic Sell", "22":"Automatic Sell", "24":"Automatic Sell" }, "sharesTraded":{ "0":"20,200", "1":"176,299", "2":"8,053", "7":"167,889", "8":"13,250", "9":"176,299", "18":"96,735", "19":"15,366", "22":"25,000", "24":"25,000" } } Now if i try to use the following code: import json data = json.load(open("AAPL22data.json")) Q2data = [item for item in data if '2022Q2' in data['lastDate']] print(Q2data) My ideal output should be: { "lastDate":{ "24":"2022Q2" }, "transactionType":{ "24":"Automatic Sell" }, "sharesTraded":{ "24":"25,000" } } And then repeat the same structure for the other quarters. However, my current output gives me "[ ]"
[ "With pandas you can read this nested dictionary a transform it to a table representation. Then the aggregation you are required becomes quite natural.\nimport pandas as pd \n\nsample_dict = {\n \"lastDate\":{\n \"0\":\"2022Q4\",\n \"1\":\"2022Q4\",\n \"2\":\"2022Q4\",\n \"7\":\"2022Q4\",\n \"8\":\"2022Q4\",\n \"9\":\"2022Q4\",\n \"18\":\"2022Q3\",\n \"19\":\"2022Q3\",\n \"22\":\"2022Q3\",\n \"24\":\"2022Q2\"\n },\n \"transactionType\":{\n \"0\":\"Sell\",\n \"1\":\"Automatic Sell\",\n \"2\":\"Automatic Sell\",\n \"7\":\"Automatic Sell\",\n \"8\":\"Sell\",\n \"9\":\"Automatic Sell\",\n \"18\":\"Automatic Sell\",\n \"19\":\"Automatic Sell\",\n \"22\":\"Automatic Sell\",\n \"24\":\"Automatic Sell\"\n },\n \"sharesTraded\":{\n \"0\":\"20,200\",\n \"1\":\"176,299\",\n \"2\":\"8,053\",\n \"7\":\"167,889\",\n \"8\":\"13,250\",\n \"9\":\"176,299\",\n \"18\":\"96,735\",\n \"19\":\"15,366\",\n \"22\":\"25,000\",\n \"24\":\"25,000\"\n }\n}\n\nprint(pd.DataFrame.from_dict(sample_dict))\n\nreturns\nOutput:\n\n lastDate transactionType sharesTraded\n0 2022Q4 Sell 20,200\n1 2022Q4 Automatic Sell 176,299\n2 2022Q4 Automatic Sell 8,053\n7 2022Q4 Automatic Sell 167,889\n8 2022Q4 Sell 13,250\n9 2022Q4 Automatic Sell 176,299\n18 2022Q3 Automatic Sell 96,735\n19 2022Q3 Automatic Sell 15,366\n22 2022Q3 Automatic Sell 25,000\n24 2022Q2 Automatic Sell 25,000\n\nthen a simple group_by should do the trick.\n", "Use a dictionary comprehension:\nimport json\n\nmy_json = \"\"\"{\n \"lastDate\":{\n \"0\":\"2022Q4\",\n \"1\":\"2022Q4\",\n \"2\":\"2022Q4\",\n \"7\":\"2022Q4\",\n \"8\":\"2022Q4\",\n \"9\":\"2022Q4\",\n \"18\":\"2022Q3\",\n \"19\":\"2022Q3\",\n \"22\":\"2022Q3\",\n \"24\":\"2022Q2\"\n },\n \"transactionType\":{\n \"0\":\"Sell\",\n \"1\":\"Automatic Sell\",\n \"2\":\"Automatic Sell\",\n \"7\":\"Automatic Sell\",\n \"8\":\"Sell\",\n \"9\":\"Automatic Sell\",\n \"18\":\"Automatic Sell\",\n \"19\":\"Automatic Sell\",\n \"22\":\"Automatic Sell\",\n \"24\":\"Automatic Sell\"\n },\n \"sharesTraded\":{\n \"0\":\"20,200\",\n \"1\":\"176,299\",\n \"2\":\"8,053\",\n \"7\":\"167,889\",\n \"8\":\"13,250\",\n \"9\":\"176,299\",\n \"18\":\"96,735\",\n \"19\":\"15,366\",\n \"22\":\"25,000\",\n \"24\":\"25,000\"\n }\n}\"\"\"\n\ndata = json.loads(my_json)\n\nvar = \"24\" #This corresponds to 2022 Q2 in your example\n\ndata = {k:{var: v[var]} for k, v in data.items()}\ndata = json.dumps(data, indent = 2)\n\nprint(data)\n\nOutput:\n{\n \"lastDate\": {\n \"24\": \"2022Q2\"\n },\n \"transactionType\": {\n \"24\": \"Automatic Sell\"\n },\n \"sharesTraded\": {\n \"24\": \"25,000\"\n }\n}\n\n", "Thanks to @FrancoMilanese for the info on Pandas group_by here is the answer below:\nimport json\nimport pandas as pd \n\ndata = json.load(open(\"AAPL22data.json\"))\n\ndf = pd.DataFrame.from_dict(data)\n\nq2df = df.groupby('lastDate')\n\nq2df.get_group('2022Q2') #change '2022q2' for others & assign to a different variable\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "for_loop", "json", "python" ]
stackoverflow_0074666206_for_loop_json_python.txt
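For reference, a minimal sketch of the groupby step that the first answer only names and the accepted follow-up partly shows; it assumes the same AAPL22data.json layout from the question, and the quarters dictionary name is only illustrative:

import json
import pandas as pd

# Load the nested dict and turn it into a table with one row per transaction.
data = json.load(open("AAPL22data.json"))
df = pd.DataFrame.from_dict(data)

# One DataFrame per quarter, e.g. quarters["2022Q2"] holds only the Q2 rows.
quarters = {quarter: rows for quarter, rows in df.groupby("lastDate")}

# Convert back to the nested-dict shape the question asked for.
q2_data = quarters["2022Q2"].to_dict()
print(json.dumps(q2_data, indent=2))

The same quarters dictionary gives the Q1, Q3 and Q4 variables without repeating the groupby call.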
Q: NextJS: Run a Typescript script on the server I have a NextJS/Typescript project where I want to add a CLI script which should process some files on the server. Unfortunately I don't manage to run the script. Example script src/cli.ts: console.log("Hello world"); // Process files I tried to run the script with: ts-node src/cli.ts But I get this error message: src/cli.ts:1:1 - error TS1208: 'cli.ts' cannot be compiled under '--isolatedModules' because it is considered a global script file. Add an import, export, or an empty 'export {}' statement to make it a module. When I add an empty 'export {}' statement I get this error message: (node:15923) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension. As far as I know it is now not possible to use NextJS with ES modules. Is there another way to run the script in a NextJS project? Maybe I need to change the webpack config? I'm using the latest versions: Next 11, Typescript 4.3, Node 14.18, ts-node 10.13 with the default tsconfig.json, package.json. A: Run this instead: npx ts-node --skip-project src/cli.ts Reference: https://github.com/TypeStrong/ts-node#tsconfig The --skip-project flag will not resolve/load your tsconfig.json and, thus, ignore the "isolatedModules": true required by Next.js. You can also create a separate tsconfig file for ts-node and use the --project [path] option. Another way is to override configuration for ts-node in your tsconfig.json itself like this: { "extends": "ts-node/next/tsconfig.json", "ts-node": { "compilerOptions": { // compilerOptions specified here will override those declared below, // but *only* in ts-node. Useful if you want ts-node and tsc to use // different options with a single tsconfig.json. "isolatedModules": false } }, "compilerOptions": { // typescript options here } } Reference: https://github.com/TypeStrong/ts-node#via-tsconfigjson-recommended A: You can use TSX, which plays nicely with NextJS, for both JavaScript and TypeScript. No need for extra configuration or anything. Install it: yarn add -D tsx Declare your script in package.json: ... "scripts": { ... "my-cli-script": "tsx src/cli.ts" }, Run it: yarn my-cli-script
NextJS: Run a Typescript script on the server
I have a NextJS/Typescript project where I want to add a CLI script which should process some files on the server. Unfortunately I don't manage to run the script. Example script src/cli.ts: console.log("Hello world"); // Process files I tried to run the script with: ts-node src/cli.ts But I get this error message: src/cli.ts:1:1 - error TS1208: 'cli.ts' cannot be compiled under '--isolatedModules' because it is considered a global script file. Add an import, export, or an empty 'export {}' statement to make it a module. When I add an empty 'export {}' statement I get this error message: (node:15923) Warning: To load an ES module, set "type": "module" in the package.json or use the .mjs extension. As far as I know it is now not possible to use NextJS with ES modules. Is there another way to run the script in a NextJS project? Maybe I need to change the webpack config? I'm using the latest versions: Next 11, Typescript 4.3, Node 14.18, ts-node 10.13 with the default tsconfig.json, package.json.
[ "Run this instead:\nnpx ts-node --skip-project src/cli.ts\n\nReference: https://github.com/TypeStrong/ts-node#tsconfig\nThe --skip-project flag will not resolve/load your tsconfig.json and, thus, ignore the \"isolatedModules\": true required by Next.js.\nYou can also create a separate tsconfig file for ts-node and use the --project [path] option.\nAnother way is to override configuration for ts-node in your tsconfig.json itself like this:\n{\n \"extends\": \"ts-node/next/tsconfig.json\",\n\n \"ts-node\": {\n \"compilerOptions\": {\n // compilerOptions specified here will override those declared below,\n // but *only* in ts-node. Useful if you want ts-node and tsc to use\n // different options with a single tsconfig.json.\n\n \"isolatedModules\": false\n }\n },\n\n \"compilerOptions\": {\n // typescript options here\n }\n}\n\nReference: https://github.com/TypeStrong/ts-node#via-tsconfigjson-recommended\n", "You can use TSX, which plays nicely with NextJS, for both JavaScript and TypeScript. No need for extra configuration or anything.\nInstall it:\nyarn add -D tsx\n\nDeclare your script in package.json:\n...\n\"scripts\": {\n ...\n \"my-cli-script\": \"tsx src/cli.ts\"\n},\n\nRun it:\nyarn my-cli-script\n\n" ]
[ 8, 0 ]
[]
[]
[ "next.js", "ts_node", "typescript" ]
stackoverflow_0069580704_next.js_ts_node_typescript.txt
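For completeness, a sketch of the separate-tsconfig route that the first answer mentions but does not spell out; the tsconfig.cli.json file name is only an example and the compiler options shown are assumptions about a default Next.js setup:

// tsconfig.cli.json, used only by ts-node and kept next to the normal tsconfig.json
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "isolatedModules": false,
    "module": "commonjs"
  }
}

npx ts-node --project tsconfig.cli.json src/cli.ts

Overriding module to commonjs also sidesteps the "To load an ES module" warning from the question, since ts-node then compiles the script as CommonJS instead of inheriting the Next.js esnext setting.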
Q: Disable Safe Area Layout Guides For UIView Programmatically Safe area layout guides can be disabled in Interface Builder by unchecking the Use Safe Area Layout Guides. How can this be done in code? I didn't notice an iOS11-available boolean that directly corresponds with the checkbox. A: I think the only way to accomplish this programmatically is to override the safeAreaLayoutGuide property. override var safeAreaLayoutGuide: UILayoutGuide { return UILayoutGuide() } When you disable it through IB it still returns a UILayoutGuide but with zero layoutFrame, by returning an instance of UILayoutGuide, you are basically doing the same.
Disable Safe Area Layout Guides For UIView Programmatically
Safe area layout guides can be disabled in Interface Builder by unchecking the Use Safe Area Layout Guides. How can this be done in code? I didn't notice an iOS11-available boolean that directly corresponds with the checkbox.
[ "I think the only way to accomplish this programmatically is to override the safeAreaLayoutGuide property.\noverride var safeAreaLayoutGuide: UILayoutGuide {\n return UILayoutGuide()\n}\n\nWhen you disable it through IB it still returns a UILayoutGuide but with zero layoutFrame, by returning an instance of UILayoutGuide, you are basically doing the same.\n" ]
[ 2 ]
[ "You can do it in size inspector pane of the view:\n\n" ]
[ -1 ]
[ "ios", "ios11", "xcode9" ]
stackoverflow_0047228989_ios_ios11_xcode9.txt
Q: How can I determine if a column is a temporal column I have this table with temporal columns: CREATE TABLE [dbo].[Profile]( [Id] [int] IDENTITY(1,1) NOT NULL, [CreatedDate] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL, [UpdatedDate] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL, PERIOD FOR SYSTEM_TIME (CreatedDate, UpdatedDate), CONSTRAINT [PK_Profile] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY] I'd like to determine if one of my table columns is a "GENERATED ALWAYS" type of column, this does not work and returns 0 declare @TABLE_SCHEMA as nvarchar(255) = 'dbo' declare @TABLE_NAME as nvarchar(255) = 'Profile' declare @COLUMN_NAME as nvarchar(255) = 'CreatedDate' select COLUMNPROPERTY(object_id('[' + @TABLE_SCHEMA + '].[' + @TABLE_NAME + ']'), @COLUMN_NAME, 'IsComputed') A: Based on another answer, I was able to figure it out, the code below will return 1 (for row last updated date) or 2 (for row start date) if it's a generated always column, or 0 if it's not https://learn.microsoft.com/en-us/dotnet/api/microsoft.sqlserver.management.smo.generatedalwaystype?view=sql-smo-160 declare @TABLE_SCHEMA as nvarchar(255) = 'dbo' declare @TABLE_NAME as nvarchar(255) = 'Profile' declare @COLUMN_NAME as nvarchar(255) = 'CreatedDate' select COLUMNPROPERTY(object_id('[' + @TABLE_SCHEMA + '].[' + @TABLE_NAME + ']'), @COLUMN_NAME, 'GeneratedAlwaysType')
How can I determine if a column is a temporal column
I have this table with temporal columns: CREATE TABLE [dbo].[Profile]( [Id] [int] IDENTITY(1,1) NOT NULL, [CreatedDate] [datetime2](7) GENERATED ALWAYS AS ROW START NOT NULL, [UpdatedDate] [datetime2](7) GENERATED ALWAYS AS ROW END NOT NULL, PERIOD FOR SYSTEM_TIME (CreatedDate, UpdatedDate), CONSTRAINT [PK_Profile] PRIMARY KEY CLUSTERED ( [Id] ASC )WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY] I'd like to determine if one of my table columns is a "GENERATED ALWAYS" type of column, this does not work and returns 0 declare @TABLE_SCHEMA as nvarchar(255) = 'dbo' declare @TABLE_NAME as nvarchar(255) = 'Profile' declare @COLUMN_NAME as nvarchar(255) = 'CreatedDate' select COLUMNPROPERTY(object_id('[' + @TABLE_SCHEMA + '].[' + @TABLE_NAME + ']'), @COLUMN_NAME, 'IsComputed')
[ "Based on another answer, I was able to figure it out, the code below will return 1 (for row last updated date) or 2 (for row start date) if it's a generated always column, or 0 if it's not\nhttps://learn.microsoft.com/en-us/dotnet/api/microsoft.sqlserver.management.smo.generatedalwaystype?view=sql-smo-160\ndeclare @TABLE_SCHEMA as nvarchar(255) = 'dbo'\ndeclare @TABLE_NAME as nvarchar(255) = 'Profile'\ndeclare @COLUMN_NAME as nvarchar(255) = 'CreatedDate'\n\nselect COLUMNPROPERTY(object_id('[' + @TABLE_SCHEMA + '].[' + @TABLE_NAME + ']'), @COLUMN_NAME, 'GeneratedAlwaysType')\n\n" ]
[ 1 ]
[]
[]
[ "sql", "sql_server" ]
stackoverflow_0074668111_sql_sql_server.txt
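An equivalent check through the catalog views, for anyone who prefers querying sys.columns instead of calling COLUMNPROPERTY; the generated_always_type and generated_always_type_desc columns are available from SQL Server 2016 onward:

declare @TABLE_SCHEMA as nvarchar(255) = 'dbo'
declare @TABLE_NAME as nvarchar(255) = 'Profile'

select c.name, c.generated_always_type, c.generated_always_type_desc
from sys.columns c
where c.object_id = object_id('[' + @TABLE_SCHEMA + '].[' + @TABLE_NAME + ']')
  and c.generated_always_type <> 0

The _desc column comes back as AS_ROW_START or AS_ROW_END, which saves mapping the numeric codes by hand.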
Q: Call model from another component controller in Joomla 3.4 I am developing Joomla 3.4 application where I have to call one component model into another component controller but not call from there. Support, i have 2 component >> comp1 model: m1 controller: c1 >> comp2 model: m2 controller: c2 I want to call comp1 model (m1) into comp2 controller (c2). I tried using below code: $model = $this->getModel('m1', '', array()); But in $model get null value if above code use in comp1 controller (c1) then run perfect. What actually issue is not getting. Any one have a perfect idea. Thanks A: This is an old question but better late than never and I hope it will help other developers. To call a model from another component you need firstly to include the path of this model: JModelLegacy::addIncludePath(JPATH_SITE . '/components/comp1/models', 'Comp1Model'); Secondly you have to create an instance of your model: $model = JModelLegacy::getInstance('Model1', 'Comp1Model'); After that you should be able to use the methods of your model. A: Joomla 4 has a excellent dependency injection for this. Old technique not worked for me. I use this //frontend component Atricle model loaded $model = Factory::getApplication()->bootComponent('com_content')->getMVCFactory()->createModel('Article', 'Site'); $result = $model->getItem('put ID here');
Call model from another component controller in Joomla 3.4
I am developing Joomla 3.4 application where I have to call one component model into another component controller but not call from there. Support, i have 2 component >> comp1 model: m1 controller: c1 >> comp2 model: m2 controller: c2 I want to call comp1 model (m1) into comp2 controller (c2). I tried using below code: $model = $this->getModel('m1', '', array()); But in $model get null value if above code use in comp1 controller (c1) then run perfect. What actually issue is not getting. Any one have a perfect idea. Thanks
[ "This is an old question but better late than never and I hope it will help other developers.\nTo call a model from another component you need firstly to include the path of this model:\nJModelLegacy::addIncludePath(JPATH_SITE . '/components/comp1/models', 'Comp1Model');\n\nSecondly you have to create an instance of your model:\n$model = JModelLegacy::getInstance('Model1', 'Comp1Model');\n\nAfter that you should be able to use the methods of your model.\n", "Joomla 4 has a excellent dependency injection for this. Old technique not worked for me.\nI use this\n//frontend component Atricle model loaded\n$model = Factory::getApplication()->bootComponent('com_content')->getMVCFactory()->createModel('Article', 'Site');\n\n$result = $model->getItem('put ID here');\n\n" ]
[ 5, 0 ]
[]
[]
[ "joomla3.4", "model" ]
stackoverflow_0030024597_joomla3.4_model.txt
Q: Importxml data-href to GSheet Good day everyone; I am trying to import the data-href attribute from this url. I tried using importxml in gsheet but nothing works. Please helpp https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/ <a id="visit-website" class="button primary md visit-website ppc " rel="nofollow noopener" data-href="https://jobboss.com/job-shop-manufacturing-software/jobboss" data-text="Visit Website" target="_blank">Visit Website Thanks in advance I tried using this formula but can't seem to figure out. =IMPORTXML(https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/,"//section[@class='button-cta']") A: You can use the IMPORTXML function in Google Sheets to extract the data-href attribute from the HTML code of the URL you provided. Here's how you can do it: In Google Sheets, enter the IMPORTXML function in a cell, like this: =IMPORTXML(<url>, <xpath>) Replace with the URL of the page you want to extract the data from (in your case, this would be https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/), and replace with the XPath of the element you want to extract. To get the XPath of the element you want to extract, right-click on the element in your browser and select "Inspect" (or "Inspect Element"). This will open the HTML code of the page in the Developer Tools. In the Developer Tools, find the element that contains the data-href attribute you want to extract. It should look something like this: <a id="visit-website" class="button primary md visit-website ppc " rel="nofollow noopener" data-href="https://jobboss.com/job-shop-manufacturing-software/jobboss" data-text="Visit Website" target="_blank">Visit Website Then, right-click on the element and select "Copy > Copy XPath". This will copy the XPath of the element to your clipboard. Paste the XPath you copied in step 3 into the IMPORTXML function in place of , like this: =IMPORTXML(https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/, "//a[@id='visit-website']") Finally, to extract the data-href attribute from the element, you can use the @ symbol followed by the attribute name, like this: =IMPORTXML(https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/, "//a[@id='visit-website']/@data-href") This should return the value of the data-href attribute, which is the URL you are looking for.
Importxml data-href to GSheet
Good day everyone; I am trying to import the data-href attribute from this url. I tried using importxml in gsheet but nothing works. Please helpp https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/ <a id="visit-website" class="button primary md visit-website ppc " rel="nofollow noopener" data-href="https://jobboss.com/job-shop-manufacturing-software/jobboss" data-text="Visit Website" target="_blank">Visit Website Thanks in advance I tried using this formula but can't seem to figure out. =IMPORTXML(https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/,"//section[@class='button-cta']")
[ "You can use the IMPORTXML function in Google Sheets to extract the data-href attribute from the HTML code of the URL you provided. Here's how you can do it:\nIn Google Sheets, enter the IMPORTXML function in a cell, like this:\n=IMPORTXML(<url>, <xpath>)\n\nReplace with the URL of the page you want to extract the data from (in your case, this would be https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/), and replace with the XPath of the element you want to extract.\nTo get the XPath of the element you want to extract, right-click on the element in your browser and select \"Inspect\" (or \"Inspect Element\"). This will open the HTML code of the page in the Developer Tools.\nIn the Developer Tools, find the element that contains the data-href attribute you want to extract. It should look something like this:\n<a id=\"visit-website\" class=\"button primary md visit-website ppc \" rel=\"nofollow noopener\" data-href=\"https://jobboss.com/job-shop-manufacturing-software/jobboss\" data-text=\"Visit Website\" target=\"_blank\">Visit Website\n\nThen, right-click on the element and select \"Copy > Copy XPath\". This will copy the XPath of the element to your clipboard.\nPaste the XPath you copied in step 3 into the IMPORTXML function in place of , like this:\n=IMPORTXML(https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/, \"//a[@id='visit-website']\")\n\nFinally, to extract the data-href attribute from the element, you can use the @ symbol followed by the attribute name, like this:\n=IMPORTXML(https://www.softwareadvice.com/manufacturing/exact-jobboss-profile/, \"//a[@id='visit-website']/@data-href\")\n\nThis should return the value of the data-href attribute, which is the URL you are looking for.\n" ]
[ 0 ]
[]
[]
[ "google_sheets_formula" ]
stackoverflow_0074668838_google_sheets_formula.txt
Q: Change SQLite database mode to read-write How can I change an SQLite database from read-only to read-write? When I executed the update statement, I always got: SQL error: attempt to write a readonly database The SQLite file is a writeable file on the filesystem. A: There can be several reasons for this error message: Several processes have the database open at the same time (see the FAQ). There is a plugin to compress and encrypt the database. It doesn't allow to modify the DB. Lastly, another FAQ says: "Make sure that the directory containing the database file is also writable to the user executing the CGI script." I think this is because the engine needs to create more files in the directory. The whole filesystem might be read only, for example after a crash. On Unix systems, another process can replace the whole file. A: I solved this by changing owner from "root" to my own user, on all files in Database's folder. Just do ls -l on said folder, and if any of the files is owned by root, just change it to your user, like: # For each file do: sudo chown "$USER":"$USER" /path/to/my-folder/file.txt # Or "R"ecursive. sudo chown -R "$USER":"$USER" /path/to/my-folder A: (this error message is typically misleading, and is usually a general permissions error) On Windows If you're issuing SQL directly against the database, make sure whatever application you're using to run the SQL is running as administrator If an application is attempting the update, the account that it uses to access the database may need permissions on the folder containing your database file. For example, if IIS is accessing the database, the IUSR and IIS_IUSRS may both need appropriate permissions (you can try this by temporarily giving these accounts full control over the folder, checking if this works, then tying down the permissions as appropriate) A: This error usually happens when your database is accessed by one application already, and you're trying to access it with another application. A: To share personal experience I encountered with this error that eventually fix both. Might not necessarily be related to your issue but it appears this error is so generic that it can be attributed to gazillion things. Database instance open in another application. My DB appeared to have been in a "locked" state so it transition to read only mode. I was able to track it down by stopping the a 2nd instance of the application sharing the DB. Directory tree permission - please be sure to ensure user account has permission not just at the file level but at the entire upper directory level all the way to / level. Thanks A: If using Android. Make sure you have added the permission to write to your EXTERNAL_STORAGE to your AndroidManifest.xml. Add this line to your AndroidManifest.xml file above and outside your <application> tag. <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE"/> This will allow your application to write to the sdcard. This will help if your EXTERNAL_STORAGE is where you have stored your database on the device. A: On win10 after a system crash, try to open db with DB Browser, but read only. Simply delete the journal file. A: In Linux command shell, I did: chmod 777 <db_folder> Where contains the database file. It works. Now I can access my database and make insert queries. 
A: On Ubuntu, change the owner to the Apache group and grant the right permissions (no, it's not 777): sudo chgrp www-data <path to db.sqlite3> sudo chmod 664 <path to db.sqlite3> Update You can set the permissions for group and user as well. sudo chown www-data:www-data <path to db.sqlite3> A: On Windows: tl;dr: Try opening the file again. Our system was suffering this problem, and it definitely wasn't a permissions issue, since the program itself would be able to open the database as writable from many threads most of the time, but occasionally (only on Windows, not on OSX), a thread would get these errors even though all the other threads in the program were having no difficulties. We eventually discovered that the threads that were failing were only those that were trying to open the database immediately after another thread had closed it (within 3 ms). We speculated that the problem was due to the fact that Windows (or the sqlite implementation under windows) doesn't always immediately clean up up file resources upon closing of a file. We got around this by running a test write query against the db upon opening (e.g., creating then dropping a table with a silly name). If the create/drop failed, we waited for 50 ms and tried again, repeating until we succeeded or 5 seconds elapsed. It worked; apparently there just needed to be enough time for the resources to flush out to disk. A: If <db_name>.sqlite-journal file exists in the same folder with DB file, that means your DB is opened currently and in the middle of some changes (or it had been at the moment when DB folder was copied). If you try to open DB at this moment error attempt to write a readonly database (or similar) could appear. As a solution, wait till <db_name>.sqlite-journal disappears or remove it (is not recommended on the working system) A: I had this problem today, too. It was caused by ActiveSync on Windows Mobile - the folder I was working in was synced so the AS process grabbed the DB file from time to time causing this error. A: On Linux, give read/write permissions to the entire folder containing the database file. Also, SELinux might be blocking the write. You need to set the correct permissions. In my SELinux Management GUI (on Fedora 19), I checked the box on the line labelled httpd_unified (Unify HTTPD handling of all content files), and I was good to go. A: I'm using SQLite on ESP32 and all answers here are "very strange".... When I look at the data on the flash of the ESP I notice there is only one file for the whole db (there is also a temp file). In this db file we have of course the user tables but also the system tables so "sqlite_master" for example which contain the definiton of the tables. So, it's seems hard to belive this can be a "chmod" problem, because if the file is read only, even creating table would be impossible as SQLite would be unable to write the "sqlite_master" data... So I think our friend user143482 is trying to acesse a "read only" table. In SQLite source code we can see a function named tabIsReadOnly with this comment: /* Return true if table pTab is read-only. ** ** A table is read-only if any of the following are true: ** ** 1) It is a virtual table and no implementation of the xUpdate method ** has been provided ** ** 2) It is a system table (i.e. 
sqlite_master), this call is not ** part of a nested parse and writable_schema pragma has not ** been specified ** ** 3) The table is a shadow table, the database connection is in ** defensive mode, and the current sqlite3_prepare() ** is for a top-level SQL statement. */
Change SQLite database mode to read-write
How can I change an SQLite database from read-only to read-write? When I executed the update statement, I always got: SQL error: attempt to write a readonly database The SQLite file is a writeable file on the filesystem.
[ "There can be several reasons for this error message:\n\nSeveral processes have the database open at the same time (see the FAQ).\nThere is a plugin to compress and encrypt the database. It doesn't allow to modify the DB.\nLastly, another FAQ says: \"Make sure that the directory containing the database file is also writable to the user executing the CGI script.\" I think this is because the engine needs to create more files in the directory.\nThe whole filesystem might be read only, for example after a crash.\nOn Unix systems, another process can replace the whole file.\n\n", "I solved this by changing owner from \"root\" to my own user, on all files in Database's folder.\nJust do ls -l on said folder, and if any of the files is owned by root, just change it to your user, like:\n# For each file do:\nsudo chown \"$USER\":\"$USER\" /path/to/my-folder/file.txt\n\n# Or \"R\"ecursive.\nsudo chown -R \"$USER\":\"$USER\" /path/to/my-folder\n\n", "(this error message is typically misleading, and is usually a general permissions error)\nOn Windows\n\nIf you're issuing SQL directly against the database, make sure whatever application you're using to run the SQL is running as administrator\nIf an application is attempting the update, the account that it uses to access the database may need permissions on the folder containing your database file. For example, if IIS is accessing the database, the IUSR and IIS_IUSRS may both need appropriate permissions (you can try this by temporarily giving these accounts full control over the folder, checking if this works, then tying down the permissions as appropriate)\n\n", "This error usually happens when your database is accessed by one application already, and you're trying to access it with another application.\n", "To share personal experience I encountered with this error that eventually fix both. Might not necessarily be related to your issue but it appears this error is so generic that it can be attributed to gazillion things.\n\nDatabase instance open in another application. My DB appeared to have been in a \"locked\" state so it transition to read only mode. I was able to track it down by stopping the a 2nd instance of the application sharing the DB.\nDirectory tree permission - please be sure to ensure user account has permission not just at the file level but at the entire upper directory level all the way to / level.\n\nThanks\n", "If using Android.\nMake sure you have added the permission to write to your EXTERNAL_STORAGE to your AndroidManifest.xml.\nAdd this line to your AndroidManifest.xml file above and outside your <application> tag.\n<uses-permission android:name=\"android.permission.WRITE_EXTERNAL_STORAGE\"/>\n\nThis will allow your application to write to the sdcard. This will help if your EXTERNAL_STORAGE is where you have stored your database on the device.\n", "On win10 after a system crash, try to open db with DB Browser, but read only.\nSimply delete the journal file.\n", "In Linux command shell, I did: \nchmod 777 <db_folder>\n\nWhere contains the database file. \nIt works. 
Now I can access my database and make insert queries.\n", "On Ubuntu, change the owner to the Apache group and grant the right permissions (no, it's not 777):\nsudo chgrp www-data <path to db.sqlite3>\nsudo chmod 664 <path to db.sqlite3>\n\nUpdate\nYou can set the permissions for group and user as well.\nsudo chown www-data:www-data <path to db.sqlite3>\n\n", "On Windows:\ntl;dr: Try opening the file again.\nOur system was suffering this problem, and it definitely wasn't a permissions issue, since the program itself would be able to open the database as writable from many threads most of the time, but occasionally (only on Windows, not on OSX), a thread would get these errors even though all the other threads in the program were having no difficulties.\nWe eventually discovered that the threads that were failing were only those that were trying to open the database immediately after another thread had closed it (within 3 ms). We speculated that the problem was due to the fact that Windows (or the sqlite implementation under windows) doesn't always immediately clean up up file resources upon closing of a file. We got around this by running a test write query against the db upon opening (e.g., creating then dropping a table with a silly name). If the create/drop failed, we waited for 50 ms and tried again, repeating until we succeeded or 5 seconds elapsed.\nIt worked; apparently there just needed to be enough time for the resources to flush out to disk.\n", "If <db_name>.sqlite-journal file exists in the same folder with DB file, that means your DB is opened currently and in the middle of some changes (or it had been at the moment when DB folder was copied). If you try to open DB at this moment error attempt to write a readonly database (or similar) could appear.\nAs a solution, wait till <db_name>.sqlite-journal disappears or remove it (is not recommended on the working system)\n", "I had this problem today, too. \nIt was caused by ActiveSync on Windows Mobile - the folder I was working in was synced so the AS process grabbed the DB file from time to time causing this error.\n", "On Linux, give read/write permissions to the entire folder containing the database file.\nAlso, SELinux might be blocking the write. You need to set the correct permissions.\nIn my SELinux Management GUI (on Fedora 19), I checked the box on the line labelled httpd_unified (Unify HTTPD handling of all content files), and I was good to go.\n", "I'm using SQLite on ESP32 and all answers here are \"very strange\"....\nWhen I look at the data on the flash of the ESP I notice there is only one file for the whole db (there is also a temp file).\nIn this db file we have of course the user tables but also the system tables so \"sqlite_master\" for example which contain the definiton of the tables.\nSo, it's seems hard to belive this can be a \"chmod\" problem, because if the file is read only, even creating table would be impossible as SQLite would be unable to write the \"sqlite_master\" data...\nSo I think our friend user143482 is trying to acesse a \"read only\" table. In SQLite source code we can see a function named tabIsReadOnly with this comment:\n /* Return true if table pTab is read-only.\n **\n ** A table is read-only if any of the following are true:\n **\n ** 1) It is a virtual table and no implementation of the xUpdate method\n ** has been provided\n **\n ** 2) It is a system table (i.e. 
sqlite_master), this call is not\n ** part of a nested parse and writable_schema pragma has not \n ** been specified\n **\n ** 3) The table is a shadow table, the database connection is in\n ** defensive mode, and the current sqlite3_prepare()\n ** is for a top-level SQL statement.\n */\n\n" ]
[ 125, 15, 10, 8, 5, 4, 3, 2, 2, 2, 2, 1, 1, 1 ]
[ "From the command line, enter the folder where your database file is located and execute the following command:\nchmod 777 databasefilename\n\nThis will grant all permissions to all users.\n", "Edit the DB: I was having problems editing the db. I ended up having to\n sudo chown 'non root username' ts3server.sqlitedb\nas long as it wasn't root, i could edit the file. Username is the username of my non root account. \nAuto start TeamSpeak: as your non root account\n crontab -e\n@reboot /path to ts3server/ aka /home/ts3server/ts3server_startscript.sh start\n", "In the project path Terminal\ndjango_project# \nsudo chown django:django *\n\n", "After hours of hit and trial, I solved my issue. Even though I had changed my permissions (used chmod 777 db.sqlite3 as well, sadly), but the issue was something else altogether.\nFinally this thing worked (probably because I used bitnami)\n$ chown :daemon /path/to/your/sqlite/file\n$ chmod 664 /path/to/your/sqlite/file\n$ chown :daemon /path/to/your/project\n$ chmod 775 /path/to/your/project\n", "\"chmod 777 databasefilename\" worked well on my debian 10 credit:Dennis\n\"chmod 775 databasefilename\" is the cause of the error\n", "remove the db journal file\nin my case it was: sudo rm universal3a.db-journal\n", "I have got this message when trying to open the DB in the folder Programs files (the same folder where DB Broser Sqlite is). I have tried opening the DB in a different folder and the problem has disapeared. Everything is OK.\n", "I was accessing the database within WSL terminal. Instead of \"sqlite3 database.db\" I ran the command \"sudo sqlite3 database.db\", which worked for me.\n", "Open Sqlite Studio as an administrator and try again\n" ]
[ -1, -1, -1, -1, -1, -1, -1, -1, -3 ]
[ "sqlite" ]
stackoverflow_0001518729_sqlite.txt
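The retry-on-open idea from the Windows answer above, written out as a small sketch; that answer names no language, so Python's sqlite3 module is used here purely for illustration, and the probe table name and timings follow the answer's description:

import sqlite3
import time

def open_writable(path, timeout_s=5.0, retry_delay_s=0.05):
    deadline = time.monotonic() + timeout_s
    while True:
        conn = sqlite3.connect(path)
        try:
            # Probe write access with a throwaway table, as the answer suggests.
            conn.execute("create table if not exists __write_probe__ (x integer)")
            conn.execute("drop table __write_probe__")
            conn.commit()
            return conn
        except sqlite3.OperationalError:
            # "attempt to write a readonly database" surfaces as OperationalError here.
            conn.close()
            if time.monotonic() > deadline:
                raise
            time.sleep(retry_delay_s)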
Q: Why do I get nearly the same number out of the rand() function in the first iteration? in my C99-programme I want to use pseudo-random numbers between 0 an 1. Unfortunatly, my programme always generates a first number that is almost identical. It just steps up ever so slightly every time I rerun my programme. This is the relevant part of my programme: srand(time(NULL)); for(int i = 0; i < 10; i++){ float a = (float)rand()/RAND_MAX; printf("%f\n",a); And here are the results of two Iterations with a time difference of below 10 seconds: 0.717103 0.357464 0.903628 0.271930 0.327478 0.917489 0.231215 0.026307 0.135259 0.290941 0.717221 0.330531 0.237708 0.151682 0.318986 0.201876 0.936884 0.209277 0.324705 0.311334 I tried to generate completely different numbers no matter how many I computed before, but the first is alway close to the same. A: Patterns like this are typically caused by low-quality linear congruential pseudorandom number generators. Each random number is computed as Xnew = (aXold + c) mod m. When you seed this with a number of seconds, the change between the first value generated after seeding at one time and the first value generated after seeding one second later is a modulo m. If you are using a Unix system (such as macOS or Linux), you can use srandom and random instead. A: The problem appears to be that your system's rand() implementation is sensitive to the fact that successive return values from time() don't differ by much -- that is, aren't supplying much entropy. I tried your code on my machine and got remarkably similar results. You need to provide a bit more entropy, to "mix things up". One technique I like to use (at least, on Unix-like systems) is to add the process ID into the seed, since the pid will be different each time. Usually I do it like this: srand(time(NULL) + getpid()); This is a good way to make sure your program generates different results even if you invoke it twice in the same second. Unfortunately, this wasn't enough to make a difference with your code on my machine, and I suspect it won't help on yours, either. The problem is that (especially on a non-loaded system) the process ID typically just increments by 1 or 2 each time also, and it's adding in to the same low-order bits of time's return value which also aren't changing my much. (And the high order bits of the pid don't change at all, and are being added to the high-order bits of the time of day, which also aren't changing.) On my system, I got decent results with srand(time(NULL) ^ (getpid() << 16)); and I suggest you try this. If that doesn't help, you'll want to use either a significantly better entropy source, or a significantly better pseudorandom number generator. These days, virtually all Unix-like systems have a good source of entropy accessible via the special file /dev/random. So a superior way of seeding a pseudorandom number generator is with code like this: FILE *fp = fopen("/dev/random", "rb"); if(fp != NULL) { int seed; fread(&seed, sizeof(seed), 1, fp); srand(seed); } (For a production program, obviously you'd have to decide what to do if the fopen call failed.) Eric Postpischil's answer mentions random, a significantly better PRNG available on most systems.
Why do I get nearly the same number out of the rand() function in the first iteration?
in my C99-programme I want to use pseudo-random numbers between 0 an 1. Unfortunatly, my programme always generates a first number that is almost identical. It just steps up ever so slightly every time I rerun my programme. This is the relevant part of my programme: srand(time(NULL)); for(int i = 0; i < 10; i++){ float a = (float)rand()/RAND_MAX; printf("%f\n",a); And here are the results of two Iterations with a time difference of below 10 seconds: 0.717103 0.357464 0.903628 0.271930 0.327478 0.917489 0.231215 0.026307 0.135259 0.290941 0.717221 0.330531 0.237708 0.151682 0.318986 0.201876 0.936884 0.209277 0.324705 0.311334 I tried to generate completely different numbers no matter how many I computed before, but the first is alway close to the same.
[ "Patterns like this are typically caused by low-quality linear congruential pseudorandom number generators. Each random number is computed as Xnew = (aXold + c) mod m. When you seed this with a number of seconds, the change between the first value generated after seeding at one time and the first value generated after seeding one second later is a modulo m.\nIf you are using a Unix system (such as macOS or Linux), you can use srandom and random instead.\n", "The problem appears to be that your system's rand() implementation is sensitive to the fact that successive return values from time() don't differ by much -- that is, aren't supplying much entropy.\nI tried your code on my machine and got remarkably similar results.\nYou need to provide a bit more entropy, to \"mix things up\".\nOne technique I like to use (at least, on Unix-like systems) is to add the process ID into the seed, since the pid will be different each time. Usually I do it like this:\nsrand(time(NULL) + getpid());\n\nThis is a good way to make sure your program generates different results even if you invoke it twice in the same second.\nUnfortunately, this wasn't enough to make a difference with your code on my machine, and I suspect it won't help on yours, either. The problem is that (especially on a non-loaded system) the process ID typically just increments by 1 or 2 each time also, and it's adding in to the same low-order bits of time's return value which also aren't changing my much. (And the high order bits of the pid don't change at all, and are being added to the high-order bits of the time of day, which also aren't changing.)\nOn my system, I got decent results with\nsrand(time(NULL) ^ (getpid() << 16));\n\nand I suggest you try this.\nIf that doesn't help, you'll want to use either a significantly better entropy source, or a significantly better pseudorandom number generator.\nThese days, virtually all Unix-like systems have a good source of entropy accessible via the special file /dev/random. So a superior way of seeding a pseudorandom number generator is with code like this:\nFILE *fp = fopen(\"/dev/random\", \"rb\");\nif(fp != NULL) {\n int seed;\n fread(&seed, sizeof(seed), 1, fp);\n srand(seed);\n}\n\n(For a production program, obviously you'd have to decide what to do if the fopen call failed.)\nEric Postpischil's answer mentions random, a significantly better PRNG available on most systems.\n" ]
[ 2, 2 ]
[]
[]
[ "c", "clion", "iteration", "random", "srand" ]
stackoverflow_0074668873_c_clion_iteration_random_srand.txt
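A small sketch of the srandom()/random() variant that the first answer recommends for Unix-like systems; it mirrors the loop from the question and only swaps the generator:

/* Feature-test macro so random()/srandom() are declared under a strict -std=c99 build. */
#define _DEFAULT_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    srandom((unsigned)time(NULL));
    for (int i = 0; i < 10; i++) {
        /* POSIX random() returns a long in [0, 2^31 - 1]; divide by 2^31 to land in [0, 1). */
        double a = (double)random() / 2147483648.0;
        printf("%f\n", a);
    }
    return 0;
}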
Q: Use of undeclared crate or module with workspaces I have the following structure: -- project/ | |-- Cargo.toml |-- Cargo.lock |-- src/ | |-- main.rs |-- crate1/ |-- lib.rs |-- Cargo.toml |-- tests |-- Cargo.toml |-- test.rs and this are the content of the Cargo.toml # Cargo.toml ... [workspace] members = [ "crate1", "tests" ] ... # crate1/Cargo.toml [package] name = "crate1" ... [lib] path = "lib.rs" ... here I'm using another lib for my tests, I don't think the problem is here, because I already used this way to do my tests several times, and it worked perfectly, but for some reason, now this is happening to me, I don't know if everything is a typo error of my self or not, but I already checked it a lot of times # tests/Cargo.tom [package] name = "tests" version = "0.1.0" edition = "2021" publish = false [dev-dependencies] crate1 = { path = "../crate1" } [[test]] name = "crate1_test" path = "crate1_test.rs" [[test]] name = "other_crate1_test" path = "other_crate1_test.rs" this is how one of the tests looks like // tests/crate1_test.rs use crate1::random_func; [test] fn random_func_test() { assert!(random_func()); } And for some reason cargo don't recognize the "crate1" crate and throws me this error each time I import the crate: error[E0433]: failed to resolve: use of undeclared crate or module `crate1` --> tests/crate1_test.rs:1:5 | 1 | use crate1::random_func; | ^^^^^^ use of undeclared crate or module `crate1` For more information about this error, try `rustc --explain E0433`. error: could not compile `project-manager` due to previous error A: I found the problem, it was that I didn't make the crate1 as a dependency of the main project, this is how my root Cargo.toml looks now: [package] name = "project" version = "0.1.0" edition = "2021" [workspace] members = [ "crate1", "tests" ] [dependencies] crate1 = { path = "crate1" } reqwest = "0.11.13" tokio = { version = "1", features = ["full"] } and now I can build my tests correctly
Use of undeclared crate or module with workspaces
I have the following structure: -- project/ | |-- Cargo.toml |-- Cargo.lock |-- src/ | |-- main.rs |-- crate1/ |-- lib.rs |-- Cargo.toml |-- tests |-- Cargo.toml |-- test.rs and this are the content of the Cargo.toml # Cargo.toml ... [workspace] members = [ "crate1", "tests" ] ... # crate1/Cargo.toml [package] name = "crate1" ... [lib] path = "lib.rs" ... here I'm using another lib for my tests, I don't think the problem is here, because I already used this way to do my tests several times, and it worked perfectly, but for some reason, now this is happening to me, I don't know if everything is a typo error of my self or not, but I already checked it a lot of times # tests/Cargo.tom [package] name = "tests" version = "0.1.0" edition = "2021" publish = false [dev-dependencies] crate1 = { path = "../crate1" } [[test]] name = "crate1_test" path = "crate1_test.rs" [[test]] name = "other_crate1_test" path = "other_crate1_test.rs" this is how one of the tests looks like // tests/crate1_test.rs use crate1::random_func; [test] fn random_func_test() { assert!(random_func()); } And for some reason cargo don't recognize the "crate1" crate and throws me this error each time I import the crate: error[E0433]: failed to resolve: use of undeclared crate or module `crate1` --> tests/crate1_test.rs:1:5 | 1 | use crate1::random_func; | ^^^^^^ use of undeclared crate or module `crate1` For more information about this error, try `rustc --explain E0433`. error: could not compile `project-manager` due to previous error
[ "I found the problem, it was that I didn't make the crate1 as a dependency of the main project, this is how my root Cargo.toml looks now:\n[package]\nname = \"project\"\nversion = \"0.1.0\"\nedition = \"2021\"\n\n[workspace]\nmembers = [\n \"crate1\",\n\n \"tests\"\n]\n\n[dependencies]\ncrate1 = { path = \"crate1\" }\nreqwest = \"0.11.13\"\ntokio = { version = \"1\", features = [\"full\"] }\n\n\nand now I can build my tests correctly\n" ]
[ 0 ]
[ "In src/main.rs you have to add mod crate1; and potentially mod tests;\n" ]
[ -1 ]
[ "rust" ]
stackoverflow_0074663990_rust.txt
Q: Odd HashMap lifetime requirement on key when value is a new type wrapper I'm trying to understand how changing the value type in a HashMap from &'t str to Value<'t>(&'t str) leads to a stricter requirement on the Key type passed in to get below. #![allow(dead_code, unused)] use std::collections::HashMap; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] struct Key<'t>(&'t str); #[derive(Debug, Clone, Copy)] struct Value<'t>(&'t str); #[derive(Debug)] struct Map1<'t>(HashMap<Key<'t>, &'t str>); #[derive(Debug)] struct Map2<'t>(HashMap<Key<'t>, Value<'t>>); impl<'t> Map1<'t> { fn get<'map>(&'map self, key: &Key<'_>) -> Option<&'map str> { self.0.get(key).map(|x| *x) } } impl<'t> Map2<'t> { fn get<'map>(&'map self, key: &Key<'_>) -> Option<&'map Value<'t>> { // Doesn't work, says: -------- help: add explicit lifetime `'map` to the type of `key`: `&Key<'map>` self.0.get(key) } } In Map1 with value type &'t str it's fine to pass in a Key with any lifetime, whereas in Map2 with value type Value<'t> (a new type wrapper around &'t str) it is no longer fine and I'm expected to pass a key whose inner lifetime is as long as the map itself. Could you help me understand why this the case? Is there anything I can do to make the new type wrapped Value(&str) work the same as a &str? A: The two get implementations are not equivalent: self.0.get(key).map(|x| *x) // vs self.0.get(key) what the map is doing is essentially a copy(). What would be the type of the Map1::map method without that copy? impl<'t> Map1<'t> { fn get<'map>(&'map self, key: &Key<'_>) -> Option<&'map &'t str> { self.0.get(key) } } ... which gives you the same error. So, what you really need is to copy that Value, e.g. like this: impl<'t> Map2<'t> { fn get<'map>(&'map self, key: &Key<'_>) -> Option<Value<'t>> { self.0.get(key).copied() } } If you want to not do that, then naturally you will need to change the lifetime declaration compared to Map1.
Odd HashMap lifetime requirement on key when value is a new type wrapper
I'm trying to understand how changing the value type in a HashMap from &'t str to Value<'t>(&'t str) leads to a stricter requirement on the Key type passed in to get below. #![allow(dead_code, unused)] use std::collections::HashMap; #[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)] struct Key<'t>(&'t str); #[derive(Debug, Clone, Copy)] struct Value<'t>(&'t str); #[derive(Debug)] struct Map1<'t>(HashMap<Key<'t>, &'t str>); #[derive(Debug)] struct Map2<'t>(HashMap<Key<'t>, Value<'t>>); impl<'t> Map1<'t> { fn get<'map>(&'map self, key: &Key<'_>) -> Option<&'map str> { self.0.get(key).map(|x| *x) } } impl<'t> Map2<'t> { fn get<'map>(&'map self, key: &Key<'_>) -> Option<&'map Value<'t>> { // Doesn't work, says: -------- help: add explicit lifetime `'map` to the type of `key`: `&Key<'map>` self.0.get(key) } } In Map1 with value type &'t str it's fine to pass in a Key with any lifetime, whereas in Map2 with value type Value<'t> (a new type wrapper around &'t str) it is no longer fine and I'm expected to pass a key whose inner lifetime is as long as the map itself. Could you help me understand why this the case? Is there anything I can do to make the new type wrapped Value(&str) work the same as a &str?
[ "The two get implementations are not equivalent:\n self.0.get(key).map(|x| *x)\n // vs\n self.0.get(key)\n\nwhat the map is doing is essentially a copy(). What would be the type of the Map1::map method without that copy?\nimpl<'t> Map1<'t> {\n fn get<'map>(&'map self, key: &Key<'_>) -> Option<&'map &'t str> {\n self.0.get(key)\n }\n}\n\n... which gives you the same error.\nSo, what you really need is to copy that Value, e.g. like this:\nimpl<'t> Map2<'t> {\n fn get<'map>(&'map self, key: &Key<'_>) -> Option<Value<'t>> {\n self.0.get(key).copied()\n }\n}\n\nIf you want to not do that, then naturally you will need to change the lifetime declaration compared to Map1.\n" ]
[ 2 ]
[]
[]
[ "rust" ]
stackoverflow_0074669097_rust.txt
Q: kubernetes consumes preserved network ip address I am learning kubernetes in my home network. which configures like this -192.168.1.1(router) -192.168.1.30(ubuntu machine 1, master node) -192.168.1.71(ubuntu machine 2, worker node) And router ip 192.168.1.1 cannot be modified. When I do sudo kubeadm init --pod-network-cidr=192.168.1.0/24 in master node, master node iptable adds cni0 like below, and other nodes inside network(like ubuntu machine 2) becomes unavailable to connect 361: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000 link/ether 32:61:29:d2:1d:16 brd ff:ff:ff:ff:ff:ff inet 192.168.1.1/24 brd 192.168.1.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::3061:29ff:fed2:1d16/64 scope link valid_lft forever preferred_lft forever what could be the problem? thanks in advance A: It sounds like you are running into an issue with the IP address you are using for your Kubernetes pod network. The pod-network-cidr flag you are using with kubeadm init specifies the range of IP addresses that will be used for the pods in your Kubernetes cluster. In your case, you are using 192.168.1.0/24 as the CIDR, which means that any pods you create will be assigned an IP address in the range of 192.168.1.0 to 192.168.1.255. However, it appears that your router has the IP address 192.168.1.1, which is within this range. This means that when you create a pod and try to assign it an IP address in the 192.168.1.0/24 range, it will conflict with your router's IP address and cause connectivity issues. To fix this issue, you will need to choose a different CIDR for your pod network that does not overlap with your router's IP address. For example, you could use 192.168.2.0/24 instead. You can specify this CIDR when you run kubeadm init like this: sudo kubeadm init --pod-network-cidr=192.168.2.0/24 This should allow you to create pods with IP addresses in the 192.168.2.0 to 192.168.2.255 range without conflicts.
kubernetes consumes reserved network ip address
I am learning kubernetes in my home network. which configures like this -192.168.1.1(router) -192.168.1.30(ubuntu machine 1, master node) -192.168.1.71(ubuntu machine 2, worker node) And router ip 192.168.1.1 cannot be modified. When I do sudo kubeadm init --pod-network-cidr=192.168.1.0/24 in master node, master node iptable adds cni0 like below, and other nodes inside network(like ubuntu machine 2) becomes unavailable to connect 361: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default qlen 1000 link/ether 32:61:29:d2:1d:16 brd ff:ff:ff:ff:ff:ff inet 192.168.1.1/24 brd 192.168.1.255 scope global cni0 valid_lft forever preferred_lft forever inet6 fe80::3061:29ff:fed2:1d16/64 scope link valid_lft forever preferred_lft forever what could be the problem? thanks in advance
[ "It sounds like you are running into an issue with the IP address you are using for your Kubernetes pod network. The pod-network-cidr flag you are using with kubeadm init specifies the range of IP addresses that will be used for the pods in your Kubernetes cluster. In your case, you are using 192.168.1.0/24 as the CIDR, which means that any pods you create will be assigned an IP address in the range of 192.168.1.0 to 192.168.1.255.\nHowever, it appears that your router has the IP address 192.168.1.1, which is within this range. This means that when you create a pod and try to assign it an IP address in the 192.168.1.0/24 range, it will conflict with your router's IP address and cause connectivity issues.\nTo fix this issue, you will need to choose a different CIDR for your pod network that does not overlap with your router's IP address. For example, you could use 192.168.2.0/24 instead. You can specify this CIDR when you run kubeadm init like this:\nsudo kubeadm init --pod-network-cidr=192.168.2.0/24\n\nThis should allow you to create pods with IP addresses in the 192.168.2.0 to 192.168.2.255 range without conflicts.\n" ]
[ 0 ]
[]
[]
[ "kubernetes" ]
stackoverflow_0074668104_kubernetes.txt
Q: Is there a performance difference between null-check and null-conditional-operator in C#? I am wondering whether there is a difference in performance between these two if-statements: if(myObject != null && myObject.someBoolean) { // do something } if (myObject?.someBoolean ?? false) { // do something } If there is a difference in performance, in favor of which approach and why? Edit: This is not a bottleneck in my application, I am not trying to over-optimize, I am simply curious. A: When the code will be compiled, both if-statements will be the same. You can easily check it with sharplab.io
Is there a performance difference between null-check and null-conditional-operator in C#?
I am wondering whether there is a difference in performance between these two if-statements: if(myObject != null && myObject.someBoolean) { // do something } if (myObject?.someBoolean ?? false) { // do something } If there is a difference in performance, in favor of which approach and why? Edit: This is not a bottleneck in my application, I am not trying to over-optimize, I am simply curious.
[ "When the code will be compiled, both if-statements will be the same. You can easily check it with sharplab.io\n\n" ]
[ 4 ]
[ "Yes, there is a performance difference between null-checking and using the null-conditional operator in C#. In general, using the null-conditional operator (?.) is slightly faster than using a null-check (if (obj != null)) because the null-conditional operator is a more efficient way of checking for a null value.\nThe null-conditional operator is a shorthand syntax for checking for a null value and accessing a property or calling a method on an object only if the object is not null. It allows you to write code that is more concise and easier to read, while still ensuring that null-checks are performed.\nFor example, consider the following code that uses a null-check to access a property on an object:\nif (obj != null)\n{\n int x = obj.Value;\n}\n\nThis code checks if the obj variable is not null, and if it is not null, it accesses the Value property of the obj object. This is equivalent to the following code that uses the null-conditional operator:\nint x = obj?.Value;\n\nBoth of these pieces of code perform the same null-check and access the Value property of the obj object if it is not null. However, the second version is more concise and easier to read.\nIn terms of performance, the null-conditional operator is slightly faster than using a null-check because it is a more efficient way of checking for a null value. The null-conditional operator uses a special instruction in the generated machine code to perform the null-check, which is more efficient than the method used by a null-check. This means that using the null-conditional operator can result in slightly better performance in some cases.\nHowever, the performance difference between null-checking and using the null-conditional operator is usually very small and not noticeable in most cases. In general, you should use the null-conditional operator because it makes your code more concise and readable, and the performance difference is not significant enough to warrant using a null-check instead.\n" ]
[ -2 ]
[ "c#", "null_check", "null_conditional_operator", "performance" ]
stackoverflow_0074668816_c#_null_check_null_conditional_operator_performance.txt
Q: `gradlew jar` is not producing a jar I'm building a Java command line application using gradle and have it running when I use gradlew run, however I would like to generate a jar -- which I would assume I would then have users download to invoke the CLI. However, when I run gradlew jar, nothing is produced (build/lib dir doesn't even exist) even though the build runs with no errors and finishes with BUILD_SUCCESSFUL. Two questions: Why is no jar being produced? Is having users download a jar the best way to ship a CLI for Java? Below is my full build.gradle.kts plugins { // Apply the application plugin to add support for building a CLI application in Java. application id("com.diffplug.spotless") version "6.12.0" } repositories { // Use Maven Central for resolving dependencies. mavenCentral() } dependencies { // Use JUnit test framework. testImplementation("junit:junit:4.13.2") // This dependency is used by the application. implementation("com.google.guava:guava:30.1-jre") implementation("info.picocli:picocli:4.7.0") annotationProcessor("info.picocli:picocli-codegen:4.7.0") implementation("io.vavr:vavr:0.10.4") } application { // Define the main class for the application. mainClass.set("testlauncher.command.Runner") } subprojects { apply { plugin("com.diffplug.spotless") } } spotless { java { importOrder() removeUnusedImports() googleJavaFormat() } } project.tasks.findByName("build")?.dependsOn(project.tasks.findByName("spotlessApply")) A: I'm dumb. I thought the jar would be in ./build/libs but it's actually in ./app/build/libs.
`gradlew jar` is not producing a jar
I'm building a Java command line application using gradle and have it running when I use gradlew run, however I would like to generate a jar -- which I would assume I would then have users download to invoke the CLI. However, when I run gradlew jar, nothing is produced (build/lib dir doesn't even exist) even though the build runs with no errors and finishes with BUILD_SUCCESSFUL. Two questions: Why is no jar being produced? Is having users download a jar the best way to ship a CLI for Java? Below is my full build.gradle.kts plugins { // Apply the application plugin to add support for building a CLI application in Java. application id("com.diffplug.spotless") version "6.12.0" } repositories { // Use Maven Central for resolving dependencies. mavenCentral() } dependencies { // Use JUnit test framework. testImplementation("junit:junit:4.13.2") // This dependency is used by the application. implementation("com.google.guava:guava:30.1-jre") implementation("info.picocli:picocli:4.7.0") annotationProcessor("info.picocli:picocli-codegen:4.7.0") implementation("io.vavr:vavr:0.10.4") } application { // Define the main class for the application. mainClass.set("testlauncher.command.Runner") } subprojects { apply { plugin("com.diffplug.spotless") } } spotless { java { importOrder() removeUnusedImports() googleJavaFormat() } } project.tasks.findByName("build")?.dependsOn(project.tasks.findByName("spotlessApply"))
[ "I'm dumb.\nI thought the jar would be in ./build/libs but it's actually in ./app/build/libs.\n" ]
[ 0 ]
[]
[]
[ "command_line_interface", "gradle", "java" ]
stackoverflow_0074661758_command_line_interface_gradle_java.txt
Q: Sum numbers in one cell that contains line break - Google Sheets Image of Gift List Here I'm looking to enter the SUM of the listed values in D4 to C4. However, I used a line break between the list of numbers in D4. I found a post from 5 years ago with various SUM methods, but they must all be specific to Excel and not to Google Sheets. Any ideas? I tried using the following functions: =SumAlt(D4) =SumLines(D4) I included 0s in the blank spaces and still no luck. It reads as an error and unknown function. A: Try this formula: =sum(split(D4,char(10)))
Sum numbers in one cell that contains line break - Google Sheets
Image of Gift List Here I'm looking to enter the SUM of the listed values in D4 to C4. However, I used a line break between the list of numbers in D4. I found a post from 5 years ago with various SUM methods, but they must all be specific to Excel and not to Google Sheets. Any ideas? I tried using the following functions: =SumAlt(D4) =SumLines(D4) I included 0s in the blank spaces and still no luck. It reads as an error and unknown function.
[ "Try this formula:\n =sum(split(D4,char(10)))\n\n\n\n\n\n" ]
[ 0 ]
[]
[]
[ "google_sheets", "sum" ]
stackoverflow_0074669194_google_sheets_sum.txt
Q: JPA SpannerRepository findById returns null in cloud spanner and Spring boot I'll go straight to the point. I'm writing an API in Spring boot using Spring Data to connect to GCP Spanner in the back end. I have followed all required annotations, but when I call the findById on the repository interface it returns null with this error: at jdk.proxy2/jdk.proxy2.$Proxy146.findById(unknown Source) Part of My Repository Interface is: public interface FileImportStatsRepository extends SpannerRepository<FileImportHistoryRecord, String>{ } Part of My FileImportHistoryRecord DTO is: @AllArgsConstructor @Data @Builder @Table(name = "FileImportStats") public class FileImportHistoryRecord { @PrimaryKey @Column(name = "FileImportId") private String fileImportId; . . . Part of My Service class is: @Service public class FileErrorServiceImpl implements FileErrorService { @Autowired FileImportStatsRepository fileImportStatsRepository; Optional<FileImportHistoryRecord> fileImportHistoryRecord = fileImportStatsRepository.findById(fileImportId); The findById(fileImportId) method returns null with bunch of stracktraces pointing to the method with the error highlighted above (at jdk.proxy2/jdk.proxy2.$Proxy146.findById(unknown Source)) being the one that stands out. I appreciate any help to solve this issue. Thanks in advance! A: I finally figured out what my issue was, so I'm closing this thread or admin can close it. The issue is that I didn't add a spanner db configuration in the config file.
JPA SpannerRepository findById returns null in cloud spanner and Spring boot
I'll go straight to the point. I'm writing an API in Spring boot using Spring Data to connect to GCP Spanner in the back end. I have followed all required annotations, but when I call the findById on the repository interface it returns null with this error: at jdk.proxy2/jdk.proxy2.$Proxy146.findById(unknown Source) Part of My Repository Interface is: public interface FileImportStatsRepository extends SpannerRepository<FileImportHistoryRecord, String>{ } Part of My FileImportHistoryRecord DTO is: @AllArgsConstructor @Data @Builder @Table(name = "FileImportStats") public class FileImportHistoryRecord { @PrimaryKey @Column(name = "FileImportId") private String fileImportId; . . . Part of My Service class is: @Service public class FileErrorServiceImpl implements FileErrorService { @Autowired FileImportStatsRepository fileImportStatsRepository; Optional<FileImportHistoryRecord> fileImportHistoryRecord = fileImportStatsRepository.findById(fileImportId); The findById(fileImportId) method returns null with bunch of stracktraces pointing to the method with the error highlighted above (at jdk.proxy2/jdk.proxy2.$Proxy146.findById(unknown Source)) being the one that stands out. I appreciate any help to solve this issue. Thanks in advance!
[ "I finally figured out what my issue was, so I'm closing this thread or admin can close it. The issue is that I didn't add a spanner db configuration in the config file.\n" ]
[ 1 ]
[]
[]
[ "google_cloud_platform", "google_cloud_spanner", "java", "spring_boot", "spring_data_jpa" ]
stackoverflow_0074636842_google_cloud_platform_google_cloud_spanner_java_spring_boot_spring_data_jpa.txt
Q: How does angular handle comparison during default change detection, when it comes to objects or array of objects? Given this statement (source) : [...] How does the default change detection mechanism work? This method might look strange at first, with all the strangely named variables. But by digging deeper into it, we notice that it's doing something very simple: for each expression used in the template, it's comparing the current value of the property used in the expression with the previous value of that property. If the property value before and after is different, it will set isChanged to true, and that's it! Well almost, it's comparing values by using a method called looseNotIdentical(), which is really just a === comparison with special logic for the NaN case (see here). I have read many similar articles, and I understand how comparison is done when it's a simple type, but I can't figure out how it is done when the property is an object or an array of objects. Looking at the implementation code from angular, I found out that it does more than just comparing the values, depending on the type of the property. So my question is : During detection change, how does angular handle comparisons by values when the property's type is an object or an array of objects, given that the ChangeDetectionStrategy is Default (and not Push) ? For example and in case of arrays of objects, does it iterate over the objects and compare the references ? A: The equality operator === works just fine on objects (arrays are objects as well). There's no difference in how these are handled. When you initialize an object, it gets an identity, so an object is only equal to itself. Example: const obj1 = {}; const obj2 = {}; console.log("obj1 === obj2",obj1 === obj2) // false, two different objects console.log("obj1 === obj1",obj1 === obj1) // true, same object const arr1 = []; const arr2 = []; console.log("arr1 === arr2",arr1 === arr2) // false, two different arrays console.log("arr1 === arr1",arr1 === arr1) // true, same array Nowadays Angular uses Object.is instead of looseNotIdentical() which has the same behaviour: https://github.com/angular/angular/blob/main/packages/core/src/util/comparison.ts#L22 If you use a directive like ngFor to iterate an array, then the change detector will check the identity of each entry according to the trackByFn provided. They also track index changes if an item moved from one place to another. The source for that is here: https://github.com/angular/angular/blob/main/packages/core/src/change_detection/differs/default_iterable_differ.ts
How does angular handle comparison during default change detection, when it comes to objects or array of objects?
Given this statement (source) : [...] How does the default change detection mechanism work? This method might look strange at first, with all the strangely named variables. But by digging deeper into it, we notice that it's doing something very simple: for each expression used in the template, it's comparing the current value of the property used in the expression with the previous value of that property. If the property value before and after is different, it will set isChanged to true, and that's it! Well almost, it's comparing values by using a method called looseNotIdentical(), which is really just a === comparison with special logic for the NaN case (see here). I have read many similar articles, and I understand how comparison is done when it's a simple type, but I can't figure out how it is done when the property is an object or an array of objects. Looking at the implementation code from angular, I found out that it does more than just comparing the values, depending on the type of the property. So my question is : During detection change, how does angular handle comparisons by values when the property's type is an object or an array of objects, given that the ChangeDetectionStrategy is Default (and not Push) ? For example and in case of arrays of objects, does it iterate over the objects and compare the references ?
[ "The equality operator === works just fine on objects (arrays are objects as well). There's no difference in how these are handled. When you initialize an object, it gets an identity, so an object is only equal to itself.\nExample:\n\n\nconst obj1 = {};\nconst obj2 = {};\n\nconsole.log(\"obj1 === obj2\",obj1 === obj2) // false, two different objects\nconsole.log(\"obj1 === obj1\",obj1 === obj1) // true, same object\n\nconst arr1 = [];\nconst arr2 = [];\n\nconsole.log(\"arr1 === arr2\",arr1 === arr2) // false, two different arrays\nconsole.log(\"arr1 === arr1\",arr1 === arr1) // true, same array\n\n\n\nNowadays Angular uses Object.is instead of looseNotIdentical() which has the same behaviour: https://github.com/angular/angular/blob/main/packages/core/src/util/comparison.ts#L22\nIf you use a directive like ngFor to iterate an array, then the change detector will check the identity of each entry according to the trackByFn provided. They also track index changes if an item moved from one place to another. The source for that is here: https://github.com/angular/angular/blob/main/packages/core/src/change_detection/differs/default_iterable_differ.ts\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_changedetection" ]
stackoverflow_0074667251_angular_angular_changedetection.txt
Q: Can't retrieve data from Firebase Realtime db in Android Studio I want to access the img but when I try nothing works FirebaseDatabase database = FirebaseDatabase.getInstance(); DatabaseReference myRef = database.getReference("med"); System.out.println(myRef); System.out.println(database.getReference().child("med")); database.getReference().child("med").addValueEventListener(new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { System.out.println("11111111"); } @Override public void onCancelled(DatabaseError error) { // Failed to read value Log.w(TAG, "Failed to read value.", error.toException()); } }); Even 11111111 is not printed when I enter the onDataChange function What's the problem and how do I get the data I need? A: Try this //Getter and Setter Class public class GetSet { public String img; public String getImg() { return img; } public void setImg(String img) { this.img = img; } } OnCreate final FirebaseDatabase database = FirebaseDatabase.getInstance(); DatabaseReference ref = database.getReference().child("med").child("1"); ref.addValueEventListener(new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { GetSet adapter = dataSnapshot.getValue(GetSet.class); String image=adapter.getImg(); } @Override public void onCancelled(DatabaseError databaseError) { } });
Can't retrieve data from Firebase Realtime db in Android Studio
I want to access the img but when I try nothing works FirebaseDatabase database = FirebaseDatabase.getInstance(); DatabaseReference myRef = database.getReference("med"); System.out.println(myRef); System.out.println(database.getReference().child("med")); database.getReference().child("med").addValueEventListener(new ValueEventListener() { @Override public void onDataChange(DataSnapshot dataSnapshot) { System.out.println("11111111"); } @Override public void onCancelled(DatabaseError error) { // Failed to read value Log.w(TAG, "Failed to read value.", error.toException()); } }); Even 11111111 is not printed when I enter the onDataChange function What's the problem and how do I get the data I need?
[ "Try this\n//Getter and Setter Class\n public class GetSet {\n public String img;\n \n public String getImg() {\n return img;\n }\n \n public void setImg(String img) {\n this.img = img;\n }\n }\n\nOnCreate\nfinal FirebaseDatabase database = FirebaseDatabase.getInstance();\n DatabaseReference ref = database.getReference().child(\"med\").child(\"1\");\n ref.addValueEventListener(new ValueEventListener() {\n @Override\n public void onDataChange(DataSnapshot dataSnapshot) {\n GetSet adapter = dataSnapshot.getValue(GetSet.class);\n String image=adapter.getImg();\n }\n\n @Override\n public void onCancelled(DatabaseError databaseError) {\n }\n });\n\n" ]
[ 0 ]
[]
[]
[ "android", "firebase", "firebase_realtime_database", "java" ]
stackoverflow_0074669155_android_firebase_firebase_realtime_database_java.txt
Q: Why dummy variable speeding up my SIMD codes? I have a function that takes about 500us to execute. Somehow I found that the presence or absence of a function F (which would have no effect on the benchmark) greatly varied the benchmark results (by about 10%). I did some searching to see how this is possible, and came across C++ code execution time varies with small source change that shouldn't introduce any extra work . Since the function I implemented uses SIMD, I thought that memory alignment could have an effect. However, I knew from experience that the program would fail if the variables to be used in SIMD were not aligned, so I thought that the alignment of local variables would have an impact. To align local variables, I defined the following dummy variables that do not optimized-out. int local_var = ...; const void* volatile dummy = &local_var; The dummy variable made the benchmark results consistent regardless of whether F was present or not. Q. Can you tell me if this is really due to alignment or if it's just a coincidence (by compiler's optimization)? If it's because of alignment, can you tell me what effect the dummy variable had? I tried to debug to see if this was really related to alignment, but it failed because I couldn't find the address of the local variable in the optimized code. I attach that function and difference of assembly code. template<Model M> struct Builder<M, impl::assign_mode::PclXYZIT> { inline static void build(const detail::BuildConfig& build_config, impl::PcdPacketView<M> pcd_packet, std::byte* cursor) { using AM = impl::assign_mode::PclXYZIT; using Spec = typename impl::SpecOf<M>; const auto timestamp_base = pcd_packet.timestamp(); auto timing_offset_it = build_config.timing_offsets_.begin(); // const void* volatile dummy = &timestamp_base; for(const auto block: pcd_packet) { auto basis_scale_it = build_config.basis_scales_.begin() + (block.azimuthRaw() * Spec::NumLasers); for(const auto channel: block) { const size_t distance_raw = channel.distanceRaw(); const auto distance = static_cast<float>(distance_raw < build_config.distance_lower_bound_ ? size_t(0) : distance_raw); const auto basis_scale = _mm_load_ps(reinterpret_cast<const float*>(&*basis_scale_it)); const auto weights = _mm_set1_ps(distance); const auto xyz = _mm_mul_ps(basis_scale, weights); _mm_store_ps(reinterpret_cast<float*>(cursor), xyz); *(reinterpret_cast<float*>(cursor) + 3) = 1.0f; *reinterpret_cast<float*>(cursor + AM::I) = channel.reflectivity(); *reinterpret_cast<std::int64_t*>(cursor + AM::T) = timestamp_base + timing_offset_it->count(); cursor += AM::Step; ++timing_offset_it; ++basis_scale_it; } } } }; More instructions are added with the dummy variable. @@ -5210,8 +5230,10 @@ addq $320, %rdi cmpq %r10, %r9 jne .L429 - popq %rbx + addq $48, %rsp .cfi_remember_state + .cfi_def_cfa_offset 32 + popq %rbx .cfi_def_cfa_offset 24 popq %rbp .cfi_def_cfa_offset 16 @@ -5259,6 +5281,8 @@ imull $1000, 19(%r10), %eax imulq $1000000000, %r9, %r9 addq %rax, %r9 + leaq 40(%rsp), %rax + movq %rax, 32(%rsp) .p2align 4,,10 .p2align 3 .L434:
Why does a dummy variable speed up my SIMD code?
I have a function that takes about 500us to execute. Somehow I found that the presence or absence of a function F (which would have no effect on the benchmark) greatly varied the benchmark results (by about 10%). I did some searching to see how this is possible, and came across C++ code execution time varies with small source change that shouldn't introduce any extra work . Since the function I implemented uses SIMD, I thought that memory alignment could have an effect. However, I knew from experience that the program would fail if the variables to be used in SIMD were not aligned, so I thought that the alignment of local variables would have an impact. To align local variables, I defined the following dummy variables that do not optimized-out. int local_var = ...; const void* volatile dummy = &local_var; The dummy variable made the benchmark results consistent regardless of whether F was present or not. Q. Can you tell me if this is really due to alignment or if it's just a coincidence (by compiler's optimization)? If it's because of alignment, can you tell me what effect the dummy variable had? I tried to debug to see if this was really related to alignment, but it failed because I couldn't find the address of the local variable in the optimized code. I attach that function and difference of assembly code. template<Model M> struct Builder<M, impl::assign_mode::PclXYZIT> { inline static void build(const detail::BuildConfig& build_config, impl::PcdPacketView<M> pcd_packet, std::byte* cursor) { using AM = impl::assign_mode::PclXYZIT; using Spec = typename impl::SpecOf<M>; const auto timestamp_base = pcd_packet.timestamp(); auto timing_offset_it = build_config.timing_offsets_.begin(); // const void* volatile dummy = &timestamp_base; for(const auto block: pcd_packet) { auto basis_scale_it = build_config.basis_scales_.begin() + (block.azimuthRaw() * Spec::NumLasers); for(const auto channel: block) { const size_t distance_raw = channel.distanceRaw(); const auto distance = static_cast<float>(distance_raw < build_config.distance_lower_bound_ ? size_t(0) : distance_raw); const auto basis_scale = _mm_load_ps(reinterpret_cast<const float*>(&*basis_scale_it)); const auto weights = _mm_set1_ps(distance); const auto xyz = _mm_mul_ps(basis_scale, weights); _mm_store_ps(reinterpret_cast<float*>(cursor), xyz); *(reinterpret_cast<float*>(cursor) + 3) = 1.0f; *reinterpret_cast<float*>(cursor + AM::I) = channel.reflectivity(); *reinterpret_cast<std::int64_t*>(cursor + AM::T) = timestamp_base + timing_offset_it->count(); cursor += AM::Step; ++timing_offset_it; ++basis_scale_it; } } } }; More instructions are added with the dummy variable. @@ -5210,8 +5230,10 @@ addq $320, %rdi cmpq %r10, %r9 jne .L429 - popq %rbx + addq $48, %rsp .cfi_remember_state + .cfi_def_cfa_offset 32 + popq %rbx .cfi_def_cfa_offset 24 popq %rbp .cfi_def_cfa_offset 16 @@ -5259,6 +5281,8 @@ imull $1000, 19(%r10), %eax imulq $1000000000, %r9, %r9 addq %rax, %r9 + leaq 40(%rsp), %rax + movq %rax, 32(%rsp) .p2align 4,,10 .p2align 3 .L434:
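One way to test the alignment hypothesis directly (my own diagnostic sketch, not part of the post): log the low address bits of the pointers actually handed to _mm_load_ps/_mm_store_ps in both builds; if any of them turn out not to be 16-byte aligned, switching those accesses to the unaligned _mm_loadu_ps/_mm_storeu_ps variants and re-running the benchmark separates data alignment from mere code-layout effects:

#include <cstdint>
#include <cstdio>

// prints a pointer and its offset from 16-byte alignment (0 means aligned for SSE loads/stores)
inline void report_alignment(const char* tag, const void* p) {
    const auto rem = reinterpret_cast<std::uintptr_t>(p) & 0xFu;
    std::printf("%s: %p (mod 16 = %u)\n", tag, p, static_cast<unsigned>(rem));
}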
[]
[]
[ "It is possible that the difference in performance between the two versions of the code is due to memory alignment issues. When working with SIMD instructions, it is important to ensure that data is properly aligned in memory, as misaligned data can cause the CPU to run slower. By defining a dummy variable and storing a reference to it in a volatile pointer, you are ensuring that the local variable is properly aligned in memory, which can improve the performance of your code.\nIt is also possible that the difference in performance is due to other factors, such as compiler optimization or other hardware-specific issues. Without seeing the full code and the exact performance measurements, it is difficult to say for certain why the difference in performance occurs. In general, it is best to use a proper performance testing framework to measure the performance of your code, as this can help isolate the source of performance differences and ensure that the results are reliable and repeatable.\n" ]
[ -1 ]
[ "c++", "memory_alignment", "optimization", "simd" ]
stackoverflow_0074668779_c++_memory_alignment_optimization_simd.txt
Q: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory I am trying to install kubeadm and for this i am trying to create vagrant environment i clone this link "https://github.com/kodekloudhub/certified-kubernetes-administrator-course" to my server and then run the command "vagrant up". I take this error.I am using Ubuntu 20.04.5 LTS ==> kubemaster: Clearing any previously set network interfaces... There was an error while executing VBoxManage, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below. Command: ["hostonlyif", "create"] Stderr: 0%... Progress state: NS_ERROR_FAILURE VBoxManage: error: Failed to create the host-only adapter VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg*)" at line 95 of file VBoxManageHostonly.cpp i want to create vagrant environment A: It looks like the issue is with VirtualBox, which is a dependency for Vagrant. It seems that the vboxnetctl file is missing, which is necessary for Vagrant to create the host-only network adapter. To fix this issue, you may need to reinstall VirtualBox. You can do this by running the following command: sudo apt-get install virtualbox After reinstalling VirtualBox, try running the vagrant up command again. If the issue persists, you may need to delete any existing network interfaces in VirtualBox and try creating them again using Vagrant. You can do this by running the following commands: VBoxManage list hostonlyifs VBoxManage hostonlyif remove <name_of_interface> Replace <name_of_interface> with the name of the network interface that you want to delete. You can then try running the vagrant up command again. If the issue still persists, it may be helpful to check the VirtualBox log file for more detailed information about the error. The log file is located at /var/log/VBoxHardening.log. You can also try searching online for solutions or posting your issue on a forum or support channel for VirtualBox or Vagrant.
VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory
I am trying to install kubeadm and for this i am trying to create vagrant environment i clone this link "https://github.com/kodekloudhub/certified-kubernetes-administrator-course" to my server and then run the command "vagrant up". I take this error.I am using Ubuntu 20.04.5 LTS ==> kubemaster: Clearing any previously set network interfaces... There was an error while executing VBoxManage, a CLI used by Vagrant for controlling VirtualBox. The command and stderr is shown below. Command: ["hostonlyif", "create"] Stderr: 0%... Progress state: NS_ERROR_FAILURE VBoxManage: error: Failed to create the host-only adapter VBoxManage: error: VBoxNetAdpCtl: Error while adding new interface: failed to open /dev/vboxnetctl: No such file or directory VBoxManage: error: Details: code NS_ERROR_FAILURE (0x80004005), component HostNetworkInterfaceWrap, interface IHostNetworkInterface VBoxManage: error: Context: "RTEXITCODE handleCreate(HandlerArg*)" at line 95 of file VBoxManageHostonly.cpp i want to create vagrant environment
[ "It looks like the issue is with VirtualBox, which is a dependency for Vagrant. It seems that the vboxnetctl file is missing, which is necessary for Vagrant to create the host-only network adapter.\nTo fix this issue, you may need to reinstall VirtualBox. You can do this by running the following command:\nsudo apt-get install virtualbox\n\nAfter reinstalling VirtualBox, try running the vagrant up command again. If the issue persists, you may need to delete any existing network interfaces in VirtualBox and try creating them again using Vagrant.\nYou can do this by running the following commands:\nVBoxManage list hostonlyifs\nVBoxManage hostonlyif remove <name_of_interface>\n\nReplace <name_of_interface> with the name of the network interface that you want to delete. You can then try running the vagrant up command again.\nIf the issue still persists, it may be helpful to check the VirtualBox log file for more detailed information about the error. The log file is located at /var/log/VBoxHardening.log. You can also try searching online for solutions or posting your issue on a forum or support channel for VirtualBox or Vagrant.\n" ]
[ 0 ]
[]
[]
[ "kubernetes" ]
stackoverflow_0074666734_kubernetes.txt
Q: Spring database can't create entity from Model I am learning Spring boot. I am trying to work with the Spring database. But I don't why it shows errors while creating entity from Model. I am adding my spring application properties and Model here. I really appreciate any help you can provide. import jakarta.persistence.Entity; import jakarta.persistence.Id; import lombok.Data; @Data @Entity public class Order { @Id private Long id; private double price; } Here is my application properties file # Database configuration spring.datasource.url=jdbc:h2:file:./database spring.datasource.driverClassName=org.h2.Driver spring.datasource.username=dsamgroup1 spring.datasource.password=43HJ23jlkm456ndY spring.jpa.database-platform=org.hibernate.dialect.H2Dialect # Update schema when database changed spring.jpa.hibernate.ddl-auto=update # Enabling H2 Console spring.h2.console.enabled=true # Custom H2 Console URL spring.h2.console.path=/h2-console spring.jpa.show-sql=true It shows the following error in console. WARN 8513 --- [ restartedMain] o.h.t.s.i.ExceptionHandlerLoggedImpl : GenerationTarget encountered exception accepting command : Error executing DDL "create table order (id bigint not null, price float(53) not null, primary key (id))" via JDBC Statement org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL "create table order (id bigint not null, price float(53) not null, primary key (id))" via JDBC Statement at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:67) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlString(AbstractSchemaMigrator.java:587) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlStrings(AbstractSchemaMigrator.java:532) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.createTable(AbstractSchemaMigrator.java:307) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.GroupedSchemaMigratorImpl.performTablesMigration(GroupedSchemaMigratorImpl.java:79) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.performMigration(AbstractSchemaMigrator.java:225) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.doMigration(AbstractSchemaMigrator.java:126) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:284) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.lambda$process$5(SchemaManagementToolCoordinator.java:143) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at java.base/java.util.HashMap.forEach(HashMap.java:1421) ~[na:na] at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:140) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:334) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:415) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1425) 
~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:66) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:376) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:409) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:396) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:352) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1797) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1747) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:599) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:521) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:326) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:324) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1130) ~[spring-context-6.0.2.jar:6.0.2] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:905) ~[spring-context-6.0.2.jar:6.0.2] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:584) ~[spring-context-6.0.2.jar:6.0.2] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:730) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:432) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1302) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1291) ~[spring-boot-3.0.0.jar:3.0.0] at dsg.unibamberg.assignment1.Assignment1Application.main(Assignment1Application.java:10) ~[main/:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na] 
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na] at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-3.0.0.jar:3.0.0] Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "create table [*]order (id bigint not null, price float(53) not null, primary key (id))"; expected "identifier"; SQL statement: create table order (id bigint not null, price float(53) not null, primary key (id)) [42001-214] at org.h2.message.DbException.getJdbcSQLException(DbException.java:502) ~[h2-2.1.214.jar:2.1.214] at org.h2.message.DbException.getJdbcSQLException(DbException.java:477) ~[h2-2.1.214.jar:2.1.214] at org.h2.message.DbException.getSyntaxError(DbException.java:261) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.readIdentifier(Parser.java:5656) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.readIdentifierWithSchema(Parser.java:5616) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.readIdentifierWithSchema(Parser.java:5645) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parseCreateTable(Parser.java:9253) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parseCreate(Parser.java:6784) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parsePrepared(Parser.java:763) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parse(Parser.java:689) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parse(Parser.java:661) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.prepareCommand(Parser.java:569) ~[h2-2.1.214.jar:2.1.214] at org.h2.engine.SessionLocal.prepareLocal(SessionLocal.java:631) ~[h2-2.1.214.jar:2.1.214] at org.h2.engine.SessionLocal.prepareCommand(SessionLocal.java:554) ~[h2-2.1.214.jar:2.1.214] at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1116) ~[h2-2.1.214.jar:2.1.214] at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:237) ~[h2-2.1.214.jar:2.1.214] at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:223) ~[h2-2.1.214.jar:2.1.214] at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:94) ~[HikariCP-5.0.1.jar:na] at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-5.0.1.jar:na] at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:54) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] ... 41 common frames omitted A: Order is a keyword in SQL hence the error. Try using any other name for the table or use back ticks. Refer this for possible solutions.
Spring database can't create entity from Model
I am learning Spring boot. I am trying to work with the Spring database. But I don't why it shows errors while creating entity from Model. I am adding my spring application properties and Model here. I really appreciate any help you can provide. import jakarta.persistence.Entity; import jakarta.persistence.Id; import lombok.Data; @Data @Entity public class Order { @Id private Long id; private double price; } Here is my application properties file # Database configuration spring.datasource.url=jdbc:h2:file:./database spring.datasource.driverClassName=org.h2.Driver spring.datasource.username=dsamgroup1 spring.datasource.password=43HJ23jlkm456ndY spring.jpa.database-platform=org.hibernate.dialect.H2Dialect # Update schema when database changed spring.jpa.hibernate.ddl-auto=update # Enabling H2 Console spring.h2.console.enabled=true # Custom H2 Console URL spring.h2.console.path=/h2-console spring.jpa.show-sql=true It shows the following error in console. WARN 8513 --- [ restartedMain] o.h.t.s.i.ExceptionHandlerLoggedImpl : GenerationTarget encountered exception accepting command : Error executing DDL "create table order (id bigint not null, price float(53) not null, primary key (id))" via JDBC Statement org.hibernate.tool.schema.spi.CommandAcceptanceException: Error executing DDL "create table order (id bigint not null, price float(53) not null, primary key (id))" via JDBC Statement at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:67) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlString(AbstractSchemaMigrator.java:587) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.applySqlStrings(AbstractSchemaMigrator.java:532) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.createTable(AbstractSchemaMigrator.java:307) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.GroupedSchemaMigratorImpl.performTablesMigration(GroupedSchemaMigratorImpl.java:79) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.performMigration(AbstractSchemaMigrator.java:225) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.internal.AbstractSchemaMigrator.doMigration(AbstractSchemaMigrator.java:126) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.performDatabaseAction(SchemaManagementToolCoordinator.java:284) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.lambda$process$5(SchemaManagementToolCoordinator.java:143) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at java.base/java.util.HashMap.forEach(HashMap.java:1421) ~[na:na] at org.hibernate.tool.schema.spi.SchemaManagementToolCoordinator.process(SchemaManagementToolCoordinator.java:140) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.internal.SessionFactoryImpl.<init>(SessionFactoryImpl.java:334) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.boot.internal.SessionFactoryBuilderImpl.build(SessionFactoryBuilderImpl.java:415) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at org.hibernate.jpa.boot.internal.EntityManagerFactoryBuilderImpl.build(EntityManagerFactoryBuilderImpl.java:1425) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] at 
org.springframework.orm.jpa.vendor.SpringHibernateJpaPersistenceProvider.createContainerEntityManagerFactory(SpringHibernateJpaPersistenceProvider.java:66) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.createNativeEntityManagerFactory(LocalContainerEntityManagerFactoryBean.java:376) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.buildNativeEntityManagerFactory(AbstractEntityManagerFactoryBean.java:409) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.AbstractEntityManagerFactoryBean.afterPropertiesSet(AbstractEntityManagerFactoryBean.java:396) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean.afterPropertiesSet(LocalContainerEntityManagerFactoryBean.java:352) ~[spring-orm-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.invokeInitMethods(AbstractAutowireCapableBeanFactory.java:1797) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.initializeBean(AbstractAutowireCapableBeanFactory.java:1747) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:599) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:521) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:326) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:234) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:324) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) ~[spring-beans-6.0.2.jar:6.0.2] at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:1130) ~[spring-context-6.0.2.jar:6.0.2] at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:905) ~[spring-context-6.0.2.jar:6.0.2] at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:584) ~[spring-context-6.0.2.jar:6.0.2] at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:146) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:730) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:432) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.run(SpringApplication.java:308) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1302) ~[spring-boot-3.0.0.jar:3.0.0] at org.springframework.boot.SpringApplication.run(SpringApplication.java:1291) ~[spring-boot-3.0.0.jar:3.0.0] at dsg.unibamberg.assignment1.Assignment1Application.main(Assignment1Application.java:10) ~[main/:na] at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:na] at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:77) ~[na:na] at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:na] at java.base/java.lang.reflect.Method.invoke(Method.java:568) ~[na:na] at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49) ~[spring-boot-devtools-3.0.0.jar:3.0.0] Caused by: org.h2.jdbc.JdbcSQLSyntaxErrorException: Syntax error in SQL statement "create table [*]order (id bigint not null, price float(53) not null, primary key (id))"; expected "identifier"; SQL statement: create table order (id bigint not null, price float(53) not null, primary key (id)) [42001-214] at org.h2.message.DbException.getJdbcSQLException(DbException.java:502) ~[h2-2.1.214.jar:2.1.214] at org.h2.message.DbException.getJdbcSQLException(DbException.java:477) ~[h2-2.1.214.jar:2.1.214] at org.h2.message.DbException.getSyntaxError(DbException.java:261) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.readIdentifier(Parser.java:5656) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.readIdentifierWithSchema(Parser.java:5616) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.readIdentifierWithSchema(Parser.java:5645) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parseCreateTable(Parser.java:9253) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parseCreate(Parser.java:6784) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parsePrepared(Parser.java:763) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parse(Parser.java:689) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.parse(Parser.java:661) ~[h2-2.1.214.jar:2.1.214] at org.h2.command.Parser.prepareCommand(Parser.java:569) ~[h2-2.1.214.jar:2.1.214] at org.h2.engine.SessionLocal.prepareLocal(SessionLocal.java:631) ~[h2-2.1.214.jar:2.1.214] at org.h2.engine.SessionLocal.prepareCommand(SessionLocal.java:554) ~[h2-2.1.214.jar:2.1.214] at org.h2.jdbc.JdbcConnection.prepareCommand(JdbcConnection.java:1116) ~[h2-2.1.214.jar:2.1.214] at org.h2.jdbc.JdbcStatement.executeInternal(JdbcStatement.java:237) ~[h2-2.1.214.jar:2.1.214] at org.h2.jdbc.JdbcStatement.execute(JdbcStatement.java:223) ~[h2-2.1.214.jar:2.1.214] at com.zaxxer.hikari.pool.ProxyStatement.execute(ProxyStatement.java:94) ~[HikariCP-5.0.1.jar:na] at com.zaxxer.hikari.pool.HikariProxyStatement.execute(HikariProxyStatement.java) ~[HikariCP-5.0.1.jar:na] at org.hibernate.tool.schema.internal.exec.GenerationTargetToDatabase.accept(GenerationTargetToDatabase.java:54) ~[hibernate-core-6.1.5.Final.jar:6.1.5.Final] ... 41 common frames omitted
[ "Order is a keyword in SQL hence the error.\nTry using any other name for the table or use back ticks.\nRefer this for possible solutions.\n" ]
[ 1 ]
[]
[]
[ "hibernate", "java", "jpa", "spring", "spring_boot" ]
stackoverflow_0074668506_hibernate_java_jpa_spring_spring_boot.txt
Q: Why won't list memorize previous inputs and sum them? With each iteration the list only presents the last appended input and not the sum of the last input + previous appended inputs. def main_program(): n = [] n.append(int(input("insert:\n"))) print(sum(n)) while True: main_program() if input("Add another number? (Y/N):\n") == "N": break I'm trying to create a "snowball effect" for lack of a better description. I wanted the program to store each appended input and sum them all together. A: Define n just once. def main_program(): n.append(int(input("insert:\n"))) print(sum(n)) n = [] while True: main_program() if input("Add another number? (Y/N):\n") == "N": break Passing the list as parameter in the function main_program also works, since lists are call-by-reference.
Why won't list memorize previous inputs and sum them?
With each iteration the list only presents the last appended input and not the sum of the last input + previous appended inputs. def main_program(): n = [] n.append(int(input("insert:\n"))) print(sum(n)) while True: main_program() if input("Add another number? (Y/N):\n") == "N": break I'm trying to create a "snowball effect" for lack of a better description. I wanted the program to store each appended input and sum them all together.
[ "Define n just once.\ndef main_program():\n n.append(int(input(\"insert:\\n\")))\n print(sum(n))\n\nn = []\nwhile True:\n main_program()\n if input(\"Add another number? (Y/N):\\n\") == \"N\":\n break\n\nPassing the list as parameter in the function main_program also works, since lists are call-by-reference.\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074669171_python_python_3.x.txt
Q: Find if all characters in one string occur within another string I am new to bash. I have a question about determining if all characters of one string occur within another string. For example, if the variables are: var_1="abcdefg" var_2="bcg" Then I want to write an if statement of the form: if [all characters of var_2 occur within var_1] then echo "All characters of var_2 occur in var_1." else echo "Not all characters of var_2 occur in var_1." fi In this example, the output should be All characters of var_2 occur in var_1. What would go in the if statement here? This is what I tried: if [[ $var_1 == *$var_2* ]] ... but I think this is only determines if var_2 is a substring of var_1. What I want is to determine if the characters of var_2 occur within var_1 in no particular order. A: The following oneliner should work: echo -e "$var_2\0$var_1" | sed -E ':a;s/(.)(.*\x0)(.*)\1(.*)/\2\3\4/;ta;s/^\x0.*/1/;s/.*\x0.*/0/' It will print 0 or 1 to mean false or true respectively. This is how it works: echo -e allows using escape sequences, and \0 represents the null character, which I'm using to mark the separation between the two strings bcg and abcdefg. The Sed script is not that complex: -E is a non POSIX option allowing to use ( and ) instead of \( and \) to write capturing groups (and other similar simplifications which I'm not using here); ;s separate commands; :a is a label, and allows one jumping here via ta or ba (I use only the former, keep reading); s/(.)(.*\x0)(.*)\1(.*)/\2\3\4/ does the following (which succeedes if there's at least one character in common between var_2 and var_1): matches and captures the first character of var_2 with (.), matches and captures the following part of var_2 together with the null character, (.*\x0) (yes, what you write as \0 in Bash is \x0 in Sed), matches and captures 0 or more characters, matches what was captured by first group, i.e. by (.), matches and captures 0 or more characters up to the end of var_1, substitutes all that was matched with what was captured by the 2nd, 3rd, and 4th capturing groups: in fact, we've got rid of one character in common between var_2 and var_1; ta test if the previous substitution was successful and, if that's the case, it jumps to :a: this way we are running a loop as long as there's a characters in common between var_2 and var_1; when ther's no characters in common between var_2 and var_1, the test will fail, and the control will fall through ta; s/^\x0.*/1/ matches whatever is left, but only if the null character \x0 is leading, which happens if all letters of var_2 were found in var_1, and changes everything to just 1; s/.*\x0.*/0/ will match everything, as long as there's still \x0 in the string, which happens only if the previous substitution failed, which means that some letter from var_2 was not found in var_1, and change it to 0. A: Not really an if clause/statement, something like. #!/usr/bin/env bash i=0 var_2="bcg" var_1="abcdefg" total_str=${#var_2} while (( i < total_str )); do [[ $var_1 = *"${var_2:i++:1}"* ]] || { printf >&2 'Not all characters of the string "%s" occur in the string "%s".\n' "$var_2" "$var_1" exit 1 } done printf 'All characters of the string "%s" occur in the string "%s".\n' "$var_2" "$var_1" Output All characters of the string "bcg" occur in the string "abcdefg". Changing the value of var_2 to something like var_2="bxg" The output should be: Not all characters of the string "bxg" occur in the string "abcdefg". 
A: A very simple method in pure bash: #!/bin/bash var_1="abcdefg" var_2="bcg" if [[ ${var_2//[$var_1]} ]]; then echo "Not all characters of var_2 occur in var_1." else echo "All characters of var_2 occur in var_1." fi The ${var_2//[$var_1]} expands to the value of var_2 with all characters that occur in var_1 deleted. All characters of var_2 occur in var_1 only if that expansion is the empty string.
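As a small follow-up to the pure-bash answer above, the same parameter-expansion test can be wrapped in a reusable function. This is only a sketch under the same assumptions as the answer (the function name is made up here, and characters such as ] or ^ in the second string can still upset the bracket expression):
#!/bin/bash
# Succeeds (exit status 0) when every character of $1 also occurs in $2.
all_chars_in() {
    local needle=$1 haystack=$2
    [[ -z ${needle//[$haystack]} ]]
}

var_1="abcdefg"
var_2="bcg"
if all_chars_in "$var_2" "$var_1"; then
    echo "All characters of var_2 occur in var_1."
else
    echo "Not all characters of var_2 occur in var_1."
fi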
Find if all characters in one string occur within another string
I am new to bash. I have a question about determining if all characters of one string occur within another string. For example, if the variables are: var_1="abcdefg" var_2="bcg" Then I want to write an if statement of the form: if [all characters of var_2 occur within var_1] then echo "All characters of var_2 occur in var_1." else echo "Not all characters of var_2 occur in var_1." fi In this example, the output should be All characters of var_2 occur in var_1. What would go in the if statement here? This is what I tried: if [[ $var_1 == *$var_2* ]] ... but I think this is only determines if var_2 is a substring of var_1. What I want is to determine if the characters of var_2 occur within var_1 in no particular order.
[ "The following oneliner should work:\necho -e \"$var_2\\0$var_1\" | sed -E ':a;s/(.)(.*\\x0)(.*)\\1(.*)/\\2\\3\\4/;ta;s/^\\x0.*/1/;s/.*\\x0.*/0/'\n\nIt will print 0 or 1 to mean false or true respectively.\nThis is how it works:\n\necho -e allows using escape sequences, and \\0 represents the null character, which I'm using to mark the separation between the two strings bcg and abcdefg.\nThe Sed script is not that complex:\n\n-E is a non POSIX option allowing to use ( and ) instead of \\( and \\) to write capturing groups (and other similar simplifications which I'm not using here);\n;s separate commands;\n:a is a label, and allows one jumping here via ta or ba (I use only the former, keep reading);\ns/(.)(.*\\x0)(.*)\\1(.*)/\\2\\3\\4/ does the following (which succeedes if there's at least one character in common between var_2 and var_1):\n\nmatches and captures the first character of var_2 with (.),\nmatches and captures the following part of var_2 together with the null character, (.*\\x0) (yes, what you write as \\0 in Bash is \\x0 in Sed),\nmatches and captures 0 or more characters,\nmatches what was captured by first group, i.e. by (.),\nmatches and captures 0 or more characters up to the end of var_1,\nsubstitutes all that was matched with what was captured by the 2nd, 3rd, and 4th capturing groups: in fact, we've got rid of one character in common between var_2 and var_1;\n\n\nta test if the previous substitution was successful and, if that's the case, it jumps to :a: this way we are running a loop as long as there's a characters in common between var_2 and var_1;\nwhen ther's no characters in common between var_2 and var_1, the test will fail, and the control will fall through ta;\ns/^\\x0.*/1/ matches whatever is left, but only if the null character \\x0 is leading, which happens if all letters of var_2 were found in var_1, and changes everything to just 1;\ns/.*\\x0.*/0/ will match everything, as long as there's still \\x0 in the string, which happens only if the previous substitution failed, which means that some letter from var_2 was not found in var_1, and change it to 0.\n\n\n\n", "Not really an if clause/statement, something like.\n#!/usr/bin/env bash\n\ni=0\nvar_2=\"bcg\"\nvar_1=\"abcdefg\"\ntotal_str=${#var_2}\n\nwhile (( i < total_str )); do\n [[ $var_1 = *\"${var_2:i++:1}\"* ]] || {\n printf >&2 'Not all characters of the string \"%s\" occur in the string \"%s\".\\n' \"$var_2\" \"$var_1\"\n exit 1\n }\ndone\n\nprintf 'All characters of the string \"%s\" occur in the string \"%s\".\\n' \"$var_2\" \"$var_1\"\n\nOutput\nAll characters of the string \"bcg\" occur in the string \"abcdefg\".\n\n\nChanging the value of var_2 to something like\nvar_2=\"bxg\"\n\nThe output should be:\nNot all characters of the string \"bxg\" occur in the string \"abcdefg\".\n\n", "A very simple method in pure bash:\n#!/bin/bash\n\nvar_1=\"abcdefg\"\nvar_2=\"bcg\"\n\nif [[ ${var_2//[$var_1]} ]]; then\n echo \"Not all characters of var_2 occur in var_1.\"\nelse\n echo \"All characters of var_2 occur in var_1.\"\nfi\n\nThe ${var_2//[$var_1]} expands to the value of var_2 with all characters that occur in var_1 deleted. All characters of var_2 occur in var_1 only if that expansion is null string.\n" ]
[ 2, 1, 0 ]
[]
[]
[ "bash", "if_statement", "string" ]
stackoverflow_0074661445_bash_if_statement_string.txt
Q: Tabula-py: specify parameters for tabula.io.build_options I am trying to understand how the build_options function defined in tabula.io module and the java_options in function convert_into work. To understand it I wrote my code with just the page options specified: import tabula options = tabula.io.build_options(pages="all") dfs = tabula.io.convert_into('input.pdf',"output.csv",output_format="csv",java_options=options) but I get this error: Error from tabula-java: Unrecognized option: --pages Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. What's the correct way to use the build_options function? A: java_options expects list of string. tabula.convert_into('input.pdf',"output.csv",output_format="csv", pages="all") You don't have to use build_options. See also: https://tabula-py.readthedocs.io/en/latest/tabula.html#tabula.io.convert_into
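A hedged aside on the answer above: if the goal is simply to work with DataFrames in Python rather than writing a CSV, tabula.read_pdf accepts the same pages argument directly, so build_options is unnecessary there as well. A minimal sketch (the file name is a placeholder):
import tabula

# pages="all" is passed straight through to tabula-java;
# read_pdf returns a list of pandas DataFrames, one per detected table.
dfs = tabula.read_pdf("input.pdf", pages="all")
for df in dfs:
    print(df.head())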
Tabula-py: specify parameters for tabula.io.build_options
I am trying to understand how the build_options function defined in tabula.io module and the java_options in function convert_into work. To understand it I wrote my code with just the page options specified: import tabula options = tabula.io.build_options(pages="all") dfs = tabula.io.convert_into('input.pdf',"output.csv",output_format="csv",java_options=options) but I get this error: Error from tabula-java: Unrecognized option: --pages Error: Could not create the Java Virtual Machine. Error: A fatal exception has occurred. Program will exit. What's the correct way to use the build_options function?
[ "java_options expects list of string.\ntabula.convert_into('input.pdf',\"output.csv\",output_format=\"csv\", pages=\"all\")\n\nYou don't have to use build_options.\nSee also:\nhttps://tabula-py.readthedocs.io/en/latest/tabula.html#tabula.io.convert_into\n" ]
[ 0 ]
[]
[]
[ "python", "tabula_py" ]
stackoverflow_0072317873_python_tabula_py.txt
Q: Incompatible shapes Mean Squared Error Keras I want to train a RNN with Keras, the shape for the X is (4413, 71, 19) while for y is (4413,2) Code model = Sequential() model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(Dense(32, activation='relu')) model.add(Dropout(.2)) model.add(Dense(2, activation='softmax')) model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error']) When I fit the model I got this error, seems that the loss function can't fit with this kind of data Incompatible shapes: [64,2] vs. [64,71,2] [[{{node mean_squared_error/SquaredDifference}}]] [Op:__inference_train_function_157671] A: Try setting the parameter return_sequences of the last LSTM layer to False: model = Sequential() model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(LSTM(128, return_sequences=True)) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(LSTM(128, return_sequences=False)) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(Dense(32, activation='relu')) model.add(Dropout(.2)) model.add(Dense(2, activation='linear')) model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error']) I have also changed the activation function in the output layer to linear, since a softmax layer does not make much sense in your case. Also refer to this answer.
Incompatible shapes Mean Squared Error Keras
I want to train a RNN with Keras, the shape for the X is (4413, 71, 19) while for y is (4413,2) Code model = Sequential() model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(LSTM(128, return_sequences=True, input_shape=(None,19))) model.add(Dropout(.2)) model.add(BatchNormalization()) model.add(Dense(32, activation='relu')) model.add(Dropout(.2)) model.add(Dense(2, activation='softmax')) model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error']) When I fit the model I got this error, seems that the loss function can't fit with this kind of data Incompatible shapes: [64,2] vs. [64,71,2] [[{{node mean_squared_error/SquaredDifference}}]] [Op:__inference_train_function_157671]
[ "Try setting the parameter return_sequences of the last LSTM layer to False:\nmodel = Sequential()\nmodel.add(LSTM(128, return_sequences=True, input_shape=(None,19)))\nmodel.add(Dropout(.2))\nmodel.add(BatchNormalization())\n\nmodel.add(LSTM(128, return_sequences=True))\nmodel.add(Dropout(.2))\nmodel.add(BatchNormalization())\n\nmodel.add(LSTM(128, return_sequences=False))\nmodel.add(Dropout(.2))\nmodel.add(BatchNormalization())\n\nmodel.add(Dense(32, activation='relu'))\nmodel.add(Dropout(.2))\n\nmodel.add(Dense(2, activation='linear'))\n\nmodel.compile(optimizer='adam', loss='mean_squared_error', metrics=['mean_squared_error'])\n\nI have also changed the activation function in the output layer to linear, since a softmax layer does not make much sense in your case. Also refer to this answer.\n" ]
[ 1 ]
[]
[]
[ "keras", "lstm", "python", "tensorflow" ]
stackoverflow_0074669249_keras_lstm_python_tensorflow.txt
Q: How to change icon in notification iOS? I am creating a navigation-based application in which there is a function that will alert the user about some events that will occur. For that we are using push notifications. I know how to implement push notifications, but my question is: can I change the icon in Notification Centre when a push notification arrives? Note: I am not talking about the badge icon. Check the desired images of Notification Centre: If not possible, please provide a reference for that. A: The layout and presentation of push notifications is defined by iOS; you are able to supply an app icon that is always used with your push notifications (always the same) and the textual message. The icon you use must be based on the same app icon you use for the app itself. This is part of the guidelines. Here is the reference for it https://developer.apple.com/ios/human-interface-guidelines/features/notifications/ A: No, you cannot. The push notification icon is something which you specify in the Icons list of your Info.plist. This cannot be changed for each notification. A: iOS displays the small version of your app icon in a banner, so that people can see at a glance which app is notifying them. Click here to read Apple Notification Guidelines. A: You actually can change the push notification icon, but it's recommended that you don't, so that users don't get confused about which app they are getting the notification from. In order to change the push notification icon, in the app icon set you need to change the spotlight icons. AppIcon -> Change both spotlight icons. push notification icon change
How to change icon in notification iOS?
I am creating a navigation based application, in that there is function that will alert the user for some events that will occur. For that we are using Push notification, i know how to implement push notification, But My question is can i change the icon in notification centre when push notification comes? Note: i am not talk about badge icon, Check desired images of notificaiton centre: If not please provide reference for that.
[ "The layout and presentation of push notifications is defined by iOS you are able to supply an app icon that is always used with your push notifications (always the same) and the textual message. The icon you use must be based on the same app icon you use for the app itself. This is part of the guidelines.\nHere is the reference for it\nhttps://developer.apple.com/ios/human-interface-guidelines/features/notifications/\n", "No, you cannot. The push notification icon is something which you specify in the Icons list of your Info.plist. This cannot be changed for each notification.\n", "iOS displays the small version of your app icon in a banner, so that people can see at a glance which app is notifying them.\nClick here to read Apple Notification Guidelines. \n", "You actually can change push notification icon but its recommended that you dont change the push notification icon so that user don't get confused from which app they are getting the notification.\nIn order to change the push notification icon, on the appicon set you need to change the spotlight icons.\nAppIcon -> Change both spotlight icons.\npush notification icon change\n" ]
[ 15, 5, 1, 0 ]
[]
[]
[ "ios", "local_storage", "push_notification" ]
stackoverflow_0023101758_ios_local_storage_push_notification.txt
Q: How do I download TheTVDB JSON data using Java I have the appropriate API Key and Subscriber Pin but I'm not sure how to create the command line to login or to pull actual data from TheTVDB database. Does anyone have an example of how to do that? This is the code that I am using but it gives me a 405 response code. URL url = new URL("https://api4.thetvdb.com/v4/login?apikey=xxxxxxxx&pin=XXXXXXXX"); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("accept", "application/json"); conn.setRequestMethod("GET"); conn.connect(); //Getting the response code int responsecode = conn.getResponseCode(); System.out.println("ResponseCode: " + responsecode); if (responsecode != 200) { throw new RuntimeException("HttpResponseCode: " + responsecode); } else { String inline = ""; Scanner scanner = new Scanner(url.openStream()); //Write all the JSON data into a string using a scanner while (scanner.hasNext()) { inline += scanner.nextLine(); } System.out.println("inline: " + inline); } A: This part is mostly correct: URL url = new URL("https://api4.thetvdb.com/v4/login?apikey=xxxxxxxx&pin=XXXXXXXX"); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("accept", "application/json"); conn.setRequestMethod("GET"); conn.connect(); //Getting the response code int responsecode = conn.getResponseCode(); System.out.println("ResponseCode: " + responsecode); if (responsecode != 200) { throw new RuntimeException("HttpResponseCode: " + responsecode); …except that, according to the documentation, you need to pass the key and PIN as JSON values in the request body, not as URL query parameters. The documentation has this example request body: { "apikey": "string", "pin": "string" } So, remove those from your URL: URL url = new URL("https://api4.thetvdb.com/v4/login"); And send them as a JSON object. (I’m using Java EE’s javax.json package, but there are several other JSON libraries that will work.) HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("Accept", "application/json"); // According to the documentation, /login is a POST call. conn.setRequestMethod("POST"); conn.setDoOutput(true); JsonObjectBuilder builder = Json.createObjectBuilder(); JsonObject requestBody = builder.add("apikey", myAPIKey).add("pin", myPIN).build(); try (JsonWriter writer = new Json.createWriter(conn.getOutputStream())) { writer.writeObject(requestBody); } Similarly, you must parse the response body as JSON. 
The documentation has this example response body: { "data": { "token": "string" }, "status": "string" } So you want to parse it and read the data/token attribute: //Getting the response code int responsecode = conn.getResponseCode(); System.out.println("ResponseCode: " + responsecode); if (responsecode != 200) { throw new RuntimeException("HttpResponseCode: " + responsecode); } JsonObject responseBody; try (JsonReader reader = Json.createReader(conn.getInputStream())) { responseBody = reader.readObject(); } String token = responseBody.getJsonObject("data").getString("token"); Now you can use the token for future requests, as demonstrated at the top of the documentation page: url = new URL("https://api4.thetvdb.com/v4/movies?page=1"); conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Authorization", "Bearer " + token); conn.setRequestProperty("Accept", "application/json"); responsecode = conn.getResponseCode(); if (responsecode != 200) { throw new RuntimeException("HttpResponseCode: " + responsecode); } try (JsonReader reader = Json.createReader(conn.getInputStream())) { responseBody = reader.readObject(); } JsonArray movieEntries = responseBody.getJsonArray("data"); int count = movieEntries.size(); for (int i = 0; i < count; i++) { JsonObject movieEntry = movieEntries.getJsonObject(i); String movieName = movieEntry.getString("name"); System.out.println("Found movie \"" + movieName + "\""); } The specific JSON attribute names, like "data" and "name", are all listed in the documentation page. Click on any URL path in a colored box on that page to expand it and see all of the details.
How do I download TheTVDB JSON data using Java
I have the appropriate API Key and Subscriber Pin but I'm not sure how to create the command line to login or to pull actual data from TheTVDB database. Does anyone have an example of how to do that? This is the code that I am using but it gives me a 405 response code. URL url = new URL("https://api4.thetvdb.com/v4/login?apikey=xxxxxxxx&pin=XXXXXXXX"); HttpURLConnection conn = (HttpURLConnection) url.openConnection(); conn.setRequestProperty("Content-Type", "application/json"); conn.setRequestProperty("accept", "application/json"); conn.setRequestMethod("GET"); conn.connect(); //Getting the response code int responsecode = conn.getResponseCode(); System.out.println("ResponseCode: " + responsecode); if (responsecode != 200) { throw new RuntimeException("HttpResponseCode: " + responsecode); } else { String inline = ""; Scanner scanner = new Scanner(url.openStream()); //Write all the JSON data into a string using a scanner while (scanner.hasNext()) { inline += scanner.nextLine(); } System.out.println("inline: " + inline); }
[ "This part is mostly correct:\nURL url = new URL(\"https://api4.thetvdb.com/v4/login?apikey=xxxxxxxx&pin=XXXXXXXX\");\n\nHttpURLConnection conn = (HttpURLConnection) url.openConnection();\nconn.setRequestProperty(\"Content-Type\", \"application/json\");\nconn.setRequestProperty(\"accept\", \"application/json\");\nconn.setRequestMethod(\"GET\");\nconn.connect();\n\n//Getting the response code\nint responsecode = conn.getResponseCode();\nSystem.out.println(\"ResponseCode: \" + responsecode);\n\nif (responsecode != 200) {\n throw new RuntimeException(\"HttpResponseCode: \" + responsecode);\n\n…except that, according to the documentation, you need to pass the key and PIN as JSON values in the request body, not as URL query parameters. The documentation has this example request body:\n{\n \"apikey\": \"string\",\n \"pin\": \"string\"\n}\n\nSo, remove those from your URL:\nURL url = new URL(\"https://api4.thetvdb.com/v4/login\");\n\nAnd send them as a JSON object. (I’m using Java EE’s javax.json package, but there are several other JSON libraries that will work.)\nHttpURLConnection conn = (HttpURLConnection) url.openConnection();\nconn.setRequestProperty(\"Content-Type\", \"application/json\");\nconn.setRequestProperty(\"Accept\", \"application/json\");\n\n// According to the documentation, /login is a POST call.\nconn.setRequestMethod(\"POST\");\nconn.setDoOutput(true);\n\nJsonObjectBuilder builder = Json.createObjectBuilder();\nJsonObject requestBody =\n builder.add(\"apikey\", myAPIKey).add(\"pin\", myPIN).build();\n\ntry (JsonWriter writer = new Json.createWriter(conn.getOutputStream())) {\n writer.writeObject(requestBody);\n}\n\nSimilarly, you must parse the response body as JSON. The documentation has this example response body:\n{\n \"data\": {\n \"token\": \"string\"\n },\n \"status\": \"string\"\n}\n\nSo you want to parse it and read the data/token attribute:\n//Getting the response code\nint responsecode = conn.getResponseCode();\nSystem.out.println(\"ResponseCode: \" + responsecode);\n\nif (responsecode != 200) {\n throw new RuntimeException(\"HttpResponseCode: \" + responsecode);\n}\n\nJsonObject responseBody;\ntry (JsonReader reader = Json.createReader(conn.getInputStream())) {\n responseBody = reader.readObject();\n}\n\nString token = responseBody.getJsonObject(\"data\").getString(\"token\");\n\nNow you can use the token for future requests, as demonstrated at the top of the documentation page:\nurl = new URL(\"https://api4.thetvdb.com/v4/movies?page=1\");\nconn = (HttpURLConnection) url.openConnection();\nconn.setRequestProperty(\"Authorization\", \"Bearer \" + token);\nconn.setRequestProperty(\"Accept\", \"application/json\");\n\nresponsecode = conn.getResponseCode();\nif (responsecode != 200) {\n throw new RuntimeException(\"HttpResponseCode: \" + responsecode);\n}\n\ntry (JsonReader reader = Json.createReader(conn.getInputStream())) {\n responseBody = reader.readObject();\n}\n\nJsonArray movieEntries = responseBody.getJsonArray(\"data\");\nint count = movieEntries.size();\nfor (int i = 0; i < count; i++) {\n JsonObject movieEntry = movieEntries.getJsonObject(i);\n String movieName = movieEntry.getString(\"name\");\n\n System.out.println(\"Found movie \\\"\" + movieName + \"\\\"\");\n}\n\nThe specific JSON attribute names, like \"data\" and \"name\", are all listed in the documentation page. Click on any URL path in a colored box on that page to expand it and see all of the details.\n" ]
[ 1 ]
[]
[]
[ "api", "java" ]
stackoverflow_0074656753_api_java.txt
Q: PySpark join iteration time increasing exponentially I have a table named "table1" and I'm splitting it based on a criterion, and then joining the split parts one by one in a for loop. The following is a representation of what I am trying to do. When I joined them, the joining time increased exponentially. 0.7423694133758545 join 0.4046192169189453 join 0.5775985717773438 join 5.664674758911133 join 1.0985417366027832 join 2.2664384841918945 join 3.833379030227661 join 12.762675762176514 join 44.14520192146301 join 124.86295890808105 join 389.46189188957214 . Following are my parameters spark = SparkSession.builder.appName("xyz").getOrCreate() sqlContext = HiveContext(spark) sqlContext.setConf("spark.sql.join.preferSortMergeJoin", "true") sqlContext.setConf("spark.serializer","org.apache.spark.serializer.KryoSerializer") sqlContext.setConf("spark.sql.shuffle.partitions", "48") and --executor-memory 16G --num-executors 8 --executor-cores 8 --driver-memory 32G Source table Desired output table In the join iteration, I also increased the partitions to 2000 and decreased it to 4, and cached the DF data frame by df.cached(), but nothing worked. I know I am doing something terribly wrong but I don't know what. Please can you guide me on how to correct this. I would really appreciate any help :) code: df = spark.createDataFrame([], schema=SCHEMA) for i, column in enumerate(columns): df.cache() df_part = df_to_transpose.where(col('key') == column) df_part = df_part.withColumnRenamed("value", column) if (df_part.count() != 0 and df.count() != 0): df = df_part.join(broadcast(df), 'tuple') A: I had the same problem a while ago. If you check your PySpark web UI, go to the Stages section, and look at the DAG visualization of your task, you can see the DAG growing exponentially, and the waiting time you see is spent building this DAG rather than actually doing the work. I don't know why, but it seems that when you join a table made from a dataframe with itself, PySpark can't handle the partitions and the plan gets a lot bigger. However, the solution I found at that moment was to save each join result to a separate file and, at the end, after restarting the kernel, load and join all the files again. It seems that if the dataframes you want to join are not made from each other, you don't see this problem. A: Add a checkpoint every loop, or every so many loops, so as to break lineage.
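To make the second answer concrete, here is a minimal sketch of breaking the lineage with a checkpoint inside the loop. It assumes the spark session, columns, df_to_transpose and SCHEMA from the question; the checkpoint directory and the every-five-iterations cadence are arbitrary assumptions to adjust for your cluster:
from pyspark.sql.functions import broadcast, col

spark.sparkContext.setCheckpointDir("/tmp/spark-checkpoints")  # assumed path

df = spark.createDataFrame([], schema=SCHEMA)
for i, column in enumerate(columns):
    df_part = df_to_transpose.where(col('key') == column)
    df_part = df_part.withColumnRenamed("value", column)
    if df_part.count() != 0 and df.count() != 0:
        df = df_part.join(broadcast(df), 'tuple')
    if i % 5 == 0:
        # Materializes df and truncates its logical plan, so the DAG stops growing;
        # df.localCheckpoint() does the same without needing a reliable filesystem.
        df = df.checkpoint()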
PySpark join iteration time increasing exponentially
I have a table named "table1" and I'm splitting it based on a criterion, and then joining the split parts one by one in for loop. The following is a representation of what I am trying to do. When I joined them, the joining time increased exponentially. 0.7423694133758545 join 0.4046192169189453 join 0.5775985717773438 join 5.664674758911133 join 1.0985417366027832 join 2.2664384841918945 join 3.833379030227661 join 12.762675762176514 join 44.14520192146301 join 124.86295890808105 join 389.46189188957214 . Following are my parameters spark = SparkSession.builder.appName("xyz").getOrCreate() sqlContext = HiveContext(spark) sqlContext.setConf("spark.sql.join.preferSortMergeJoin", "true") sqlContext.setConf("spark.serializer","org.apache.spark.serializer.KryoSerializer") sqlContext.setConf("spark.sql.shuffle.partitions", "48") and --executor-memory 16G --num-executors 8 --executor-cores 8 --driver-memory 32G Source table Desired output table In the join iteration, I also increased the partitions to 2000 and decreased it to 4, and cached the DF data frame by df.cached(), but nothing worked. I know I am doing something terribly wrong but I don't know what. Please can you guide me on how to correct this. I would really appreciate any help :) code: df = spark.createDataFrame([], schema=SCHEMA) for i, column in enumerate(columns): df.cache() df_part = df_to_transpose.where(col('key') == column) df_part = df_part.withColumnRenamed("value", column) if (df_part.count() != 0 and df.count() != 0): df = df_part.join(broadcast(df), 'tuple')
[ "I had same problem a while ago. if you check your pyspark web ui and go in stages section and checkout dag visualization of your task you can see the dag is growing exponentialy and the waiting time you see is for making this dag not doing the task acutally. I dont know why but it seams when you join table made of a dataframe with it self pyspark cant handle partitions and it's getting a lot bigger. how ever the solution i found at that moment was to save each of join results on seperated files and at the end after restarting the kernel load and join all the files again. It seams if dataframes you want to join are not made from each other you dont see this problem.\n", "Add a checkpoint every loop, or every so many loops, so as to break lineage.\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark", "hive", "pyspark" ]
stackoverflow_0074665888_apache_spark_hive_pyspark.txt
Q: Unable to load tables with "Load more" options in a website using Python Need to scrape the full table from this site with "Load more" option. As of now when I`m scraping , I only get the one that shows up by default on when loading the page. import pandas as pd import requests from six.moves import urllib URL2 = "https://www.mykhel.com/football/indian-super-league-player-stats-l750/" header = {'Accept-Language': "en-US,en;q=0.9", 'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " "(KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" } resp2 = requests.get(url=URL2, headers=header).text tables2 = pd.read_html(resp2) overview_table2= tables2[0] overview_table2 Player Name Team Matches Goals Time Played Unnamed: 5 0 Jorge Pereyra Diaz Mumbai City 9 6 538 Mins NaN 1 Cleiton Silva SC East Bengal 8 5 707 Mins NaN 2 Abdenasser El Khayati Chennaiyin FC 5 4 231 Mins NaN 3 Lallianzuala Chhangte Mumbai City 9 4 737 Mins NaN 4 Nandhakumar Sekar Odisha 8 4 673 Mins NaN 5 Ivan Kalyuzhnyi Kerala Blasters 7 4 428 Mins NaN 6 Bipin Singh Mumbai City 9 4 806 Mins NaN 7 Noah Sadaoui Goa 8 4 489 Mins NaN 8 Diego Mauricio Odisha 8 3 526 Mins NaN 9 Pedro Martin Odisha 8 3 263 Mins NaN 10 Dimitri Petratos ATK Mohun Bagan 6 3 517 Mins NaN 11 Petar Sliskovic Chennaiyin FC 8 3 662 Mins NaN 12 Holicharan Narzary Hyderabad 9 3 705 Mins NaN 13 Dimitrios Diamantakos Kerala Blasters 7 3 529 Mins NaN 14 Alberto Noguera Mumbai City 9 3 371 Mins NaN 15 Jerry Mawihmingthanga Odisha 8 3 611 Mins NaN 16 Hugo Boumous ATK Mohun Bagan 7 2 580 Mins NaN 17 Javi Hernandez Bengaluru 6 2 397 Mins NaN 18 Borja Herrera Hyderabad 9 2 314 Mins NaN 19 Mohammad Yasir Hyderabad 9 2 777 Mins NaN 20 Load More.... Load More.... Load More.... Load More.... Load More.... Load More.... But I need the full table , including the data under "Load more", please help. A: import requests import pandas as pd from bs4 import BeautifulSoup headers = { 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0' } def main(url): params = { "action": "stats", "league_id": "750", "limit": "300", "offset": "0", "part": "leagues", "season_id": "2022", "section": "football", "stats_type": "player", "tab": "overview" } r = requests.get(url, headers=headers, params=params) soup = BeautifulSoup(r.text, 'lxml') goal = [(x['title'], *[i.get_text(strip=True) for i in x.find_all_next('td', limit=4)]) for x in soup.select('a.player_link')] df = pd.DataFrame( goal, columns=['Name', 'Team', 'Matches', 'Goals', 'Time Played']) print(df) main('https://www.mykhel.com/src/index.php') Output: Name Team Matches Goals Time Played 0 Jorge Pereyra Diaz Mumbai City 9 6 538 Mins 1 Cleiton Silva SC East Bengal 8 5 707 Mins 2 Abdenasser El Khayati Chennaiyin FC 5 4 231 Mins 3 Lallianzuala Chhangte Mumbai City 9 4 737 Mins 4 Nandhakumar Sekar Odisha 8 4 673 Mins .. ... ... ... ... ... 268 Sarthak Golui SC East Bengal 6 0 402 Mins 269 Ivan Gonzalez SC East Bengal 8 0 683 Mins 270 Michael Jakobsen NorthEast United 8 0 676 Mins 271 Pratik Chowdhary Jamshedpur FC 6 0 495 Mins 272 Chungnunga Lal SC East Bengal 8 0 720 Mins [273 rows x 5 columns] A: This is a dynamically loaded page, so you can not parse all the contents without hitting a button. Well… may be you can with XHR or smth like that, may be someone will contribute to the answers here. I'll stick to working with dynamically loaded pages with Selenium browser automation suite. 
Installation To get started, you'll need to install selenium bindings: pip install selenium You seem to already have beautifulsoup, but for anyone who might come across this answer, we'll also need it and html5lib, we'll need them later to parse the table: pip install html5lib BeautifulSoup4 Now, for selenium to work you'll need a driver installed for a browser of your choice. To get the drivers you may use Selenium Manager, Driver Management Software or download the drivers manually. The above mentioned options are something new, I have my manually downloaded drivers for ages, so I'll stick to them. I'll duplicate here the download links: Browser Link to driver download Chrome: https://sites.google.com/chromium.org/driver/ Edge: https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/ Firefox: https://github.com/mozilla/geckodriver/releases Safari: https://webkit.org/blog/6900/webdriver-support-in-safari-10/ Opera: https://github.com/operasoftware/operachromiumdriver/releases You can use any browser, e.g. Brave browser, Yandex Browser, basically any Chromium based browser of your choice or even Tor browser Anyway, it's a bit out of this answer scope, just keep in mind, for any browser and it's family you'll need a driver. I'll stick with Firefox. Hence you need Firefox installed and driver placed somewhere. The best option would be to add this folder to PATH variable. If you choose chromium, you'll have to strictly stick to Chrome browser version. As for Firefox, I have a pretty old geckodriver 0.29.1 and it works like a charm with the latest update. Hands on import pandas as pd from selenium import webdriver URL2 = "https://www.mykhel.com/football/indian-super-league-player-stats-l750/" driver = webdriver.Firefox() driver.get(URL2) element = driver.find_element_by_xpath("//a[text()=' Load More.... ']") while(element.is_displayed()): driver.execute_script("arguments[0].click();", element) table = driver.find_element_by_css_selector('table') tables2 = pd.read_html(table.get_attribute('outerHTML')) driver.close() overview_table2 = tables2[0].dropna(how='all').dropna(axis='columns', how='all') overview_table2.drop_duplicates().reset_index(drop=True) overview_table2 We only need pandas for our resulting table and selenium for web automation. URL2 — is the same variable you used driver = webdriver.Firefox() — here we instantiate Firefox and the browser will get opened. This is where selenium magic will happen. Note: If you decided to skip adding driver to a PATH variable, you can directly reference your here, e.g.: webdriver.Firefox(r"C:\WebDriver\bin") webdriver.Chrome(service=Service(executable_path="/path/to/chromedriver")) driver.get(URL2) — open the desired page element = driver.find_element_by_xpath("//a[text()=' Load More.... ']") Using xpath selector we find a link that has the same text as your 20th row. With that stored element we click it all the time till it disappears. It would be more sensible and easy to just use element.click(), but it results in an error. More info on other stack overflow question. Assign table variable with a corresponding element. tables2 I left this weird variable name as is in your question. Here we get outerHTML as innnerHTML would render contents of the <table> tag, but not the tag itself. We should not forget to .close() our driver as we don't need it anymore. As a result of html parsing there will be a list just like in question provided. I drop here the unnamed column and last empty row. 
The resulting overview_table2 looks like: Player Name Team Matches Goals Time Played 0 Jorge Pereyra Diaz Mumbai City 9.0 6.0 538 Mins 1 Cleiton Silva SC East Bengal 8.0 5.0 707 Mins 2 Abdenasser El Khayati Chennaiyin FC 5.0 4.0 231 Mins ... ... ... ... ... ... 270 Michael Jakobsen NorthEast United 8.0 0.0 676 Mins 271 Pratik Chowdhary Jamshedpur FC 6.0 0.0 495 Mins 272 Chungnunga Lal SC East Bengal 8.0 0.0 720 Mins Side note Job done. As a further improvement you may play with different browsers and try headless mode, a mode in which the browser does not open in your desktop environment but rather runs silently in the background; a minimal sketch follows below.
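The headless-mode sketch referenced above might look like the following with Firefox; option names can differ slightly between Selenium versions, so treat the exact flag as an assumption to verify against your installation:
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

options = Options()
options.add_argument("-headless")  # run Firefox without opening a window
driver = webdriver.Firefox(options=options)
driver.get("https://www.mykhel.com/football/indian-super-league-player-stats-l750/")
print(driver.title)  # quick sanity check that the page loaded
driver.quit()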
Unable to load tables with "Load more" options in a website using Python
Need to scrape the full table from this site with "Load more" option. As of now when I`m scraping , I only get the one that shows up by default on when loading the page. import pandas as pd import requests from six.moves import urllib URL2 = "https://www.mykhel.com/football/indian-super-league-player-stats-l750/" header = {'Accept-Language': "en-US,en;q=0.9", 'User-Agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 " "(KHTML, like Gecko) Chrome/107.0.0.0 Safari/537.36" } resp2 = requests.get(url=URL2, headers=header).text tables2 = pd.read_html(resp2) overview_table2= tables2[0] overview_table2 Player Name Team Matches Goals Time Played Unnamed: 5 0 Jorge Pereyra Diaz Mumbai City 9 6 538 Mins NaN 1 Cleiton Silva SC East Bengal 8 5 707 Mins NaN 2 Abdenasser El Khayati Chennaiyin FC 5 4 231 Mins NaN 3 Lallianzuala Chhangte Mumbai City 9 4 737 Mins NaN 4 Nandhakumar Sekar Odisha 8 4 673 Mins NaN 5 Ivan Kalyuzhnyi Kerala Blasters 7 4 428 Mins NaN 6 Bipin Singh Mumbai City 9 4 806 Mins NaN 7 Noah Sadaoui Goa 8 4 489 Mins NaN 8 Diego Mauricio Odisha 8 3 526 Mins NaN 9 Pedro Martin Odisha 8 3 263 Mins NaN 10 Dimitri Petratos ATK Mohun Bagan 6 3 517 Mins NaN 11 Petar Sliskovic Chennaiyin FC 8 3 662 Mins NaN 12 Holicharan Narzary Hyderabad 9 3 705 Mins NaN 13 Dimitrios Diamantakos Kerala Blasters 7 3 529 Mins NaN 14 Alberto Noguera Mumbai City 9 3 371 Mins NaN 15 Jerry Mawihmingthanga Odisha 8 3 611 Mins NaN 16 Hugo Boumous ATK Mohun Bagan 7 2 580 Mins NaN 17 Javi Hernandez Bengaluru 6 2 397 Mins NaN 18 Borja Herrera Hyderabad 9 2 314 Mins NaN 19 Mohammad Yasir Hyderabad 9 2 777 Mins NaN 20 Load More.... Load More.... Load More.... Load More.... Load More.... Load More.... But I need the full table , including the data under "Load more", please help.
[ "import requests\nimport pandas as pd\nfrom bs4 import BeautifulSoup\n\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0'\n}\n\n\ndef main(url):\n params = {\n \"action\": \"stats\",\n \"league_id\": \"750\",\n \"limit\": \"300\",\n \"offset\": \"0\",\n \"part\": \"leagues\",\n \"season_id\": \"2022\",\n \"section\": \"football\",\n \"stats_type\": \"player\",\n \"tab\": \"overview\"\n }\n r = requests.get(url, headers=headers, params=params)\n soup = BeautifulSoup(r.text, 'lxml')\n goal = [(x['title'], *[i.get_text(strip=True) for i in x.find_all_next('td', limit=4)])\n for x in soup.select('a.player_link')]\n df = pd.DataFrame(\n goal, columns=['Name', 'Team', 'Matches', 'Goals', 'Time Played'])\n print(df)\n\n\nmain('https://www.mykhel.com/src/index.php')\n\nOutput:\n Name Team Matches Goals Time Played\n0 Jorge Pereyra Diaz Mumbai City 9 6 538 Mins\n1 Cleiton Silva SC East Bengal 8 5 707 Mins\n2 Abdenasser El Khayati Chennaiyin FC 5 4 231 Mins\n3 Lallianzuala Chhangte Mumbai City 9 4 737 Mins\n4 Nandhakumar Sekar Odisha 8 4 673 Mins\n.. ... ... ... ... ...\n268 Sarthak Golui SC East Bengal 6 0 402 Mins\n269 Ivan Gonzalez SC East Bengal 8 0 683 Mins\n270 Michael Jakobsen NorthEast United 8 0 676 Mins\n271 Pratik Chowdhary Jamshedpur FC 6 0 495 Mins\n272 Chungnunga Lal SC East Bengal 8 0 720 Mins\n\n[273 rows x 5 columns]\n\n", "This is a dynamically loaded page, so you can not parse all the contents without hitting a button.\nWell… may be you can with XHR or smth like that, may be someone will contribute to the answers here.\nI'll stick to working with dynamically loaded pages with Selenium browser automation suite.\nInstallation\nTo get started, you'll need to install selenium bindings:\npip install selenium\n\nYou seem to already have beautifulsoup, but for anyone who might come across this answer, we'll also need it and html5lib, we'll need them later to parse the table:\npip install html5lib BeautifulSoup4\n\nNow, for selenium to work you'll need a driver installed for a browser of your choice. To get the drivers you may use Selenium Manager, Driver Management Software or download the drivers manually. The above mentioned options are something new, I have my manually downloaded drivers for ages, so I'll stick to them. I'll duplicate here the download links:\n\n\n\n\nBrowser\nLink to driver download\n\n\n\n\nChrome:\nhttps://sites.google.com/chromium.org/driver/\n\n\nEdge:\nhttps://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/\n\n\nFirefox:\nhttps://github.com/mozilla/geckodriver/releases\n\n\nSafari:\nhttps://webkit.org/blog/6900/webdriver-support-in-safari-10/\n\n\nOpera:\nhttps://github.com/operasoftware/operachromiumdriver/releases\n\n\n\n\nYou can use any browser, e.g. Brave browser, Yandex Browser, basically any Chromium based browser of your choice or even Tor browser \nAnyway, it's a bit out of this answer scope, just keep in mind, for any browser and it's family you'll need a driver.\nI'll stick with Firefox. Hence you need Firefox installed and driver placed somewhere. The best option would be to add this folder to PATH variable.\nIf you choose chromium, you'll have to strictly stick to Chrome browser version. 
As for Firefox, I have a pretty old geckodriver 0.29.1 and it works like a charm with the latest update.\nHands on\nimport pandas as pd\nfrom selenium import webdriver\n\nURL2 = \"https://www.mykhel.com/football/indian-super-league-player-stats-l750/\"\n\ndriver = webdriver.Firefox()\ndriver.get(URL2)\n\nelement = driver.find_element_by_xpath(\"//a[text()=' Load More.... ']\")\nwhile(element.is_displayed()):\n driver.execute_script(\"arguments[0].click();\", element)\n\ntable = driver.find_element_by_css_selector('table')\ntables2 = pd.read_html(table.get_attribute('outerHTML'))\ndriver.close()\n\noverview_table2 = tables2[0].dropna(how='all').dropna(axis='columns', how='all')\noverview_table2.drop_duplicates().reset_index(drop=True)\noverview_table2\n\n\nWe only need pandas for our resulting table and selenium for web automation.\nURL2 — is the same variable you used\ndriver = webdriver.Firefox() — here we instantiate Firefox and the browser will get opened. This is where selenium magic will happen.\nNote: If you decided to skip adding driver to a PATH variable, you can directly reference your here, e.g.:\n\nwebdriver.Firefox(r\"C:\\WebDriver\\bin\")\nwebdriver.Chrome(service=Service(executable_path=\"/path/to/chromedriver\"))\n\n\ndriver.get(URL2) — open the desired page\nelement = driver.find_element_by_xpath(\"//a[text()=' Load More.... ']\")\nUsing xpath selector we find a link that has the same text as your 20th row.\nWith that stored element we click it all the time till it disappears.\nIt would be more sensible and easy to just use element.click(), but it results in an error. More info on other stack overflow question.\nAssign table variable with a corresponding element.\ntables2 I left this weird variable name as is in your question.\nHere we get outerHTML as innnerHTML would render contents of the <table> tag, but not the tag itself.\nWe should not forget to .close() our driver as we don't need it anymore.\nAs a result of html parsing there will be a list just like in question provided. I drop here the unnamed column and last empty row.\n\nThe resulting overview_table2 looks like:\n\n\n\n\n\nPlayer Name\nTeam\nMatches\nGoals\nTime Played\n\n\n\n\n0\nJorge Pereyra Diaz\nMumbai City\n9.0\n6.0\n538 Mins\n\n\n1\nCleiton Silva\nSC East Bengal\n8.0\n5.0\n707 Mins\n\n\n2\nAbdenasser El Khayati\nChennaiyin FC\n5.0\n4.0\n231 Mins\n\n\n...\n...\n...\n...\n...\n...\n\n\n270\nMichael Jakobsen\nNorthEast United\n8.0\n0.0\n676 Mins\n\n\n271\nPratik Chowdhary\nJamshedpur FC\n6.0\n0.0\n495 Mins\n\n\n272\nChungnunga Lal\nSC East Bengal\n8.0\n0.0\n720 Mins\n\n\n\nSide note\nJob done. As some further improvement you may play with different browsers and try the headless mode, a mode when browser does not open on you desktop environment, but rather runs silently in the background.\n" ]
[ 3, 0 ]
[]
[]
[ "beautifulsoup", "dataframe", "pandas", "python", "web_scraping" ]
stackoverflow_0074668149_beautifulsoup_dataframe_pandas_python_web_scraping.txt
Q: Add own functions in views.py wagtail I have setup a wagtail website. It works great for postings like a blog and simply add new pages. But what if I want to add some extra functions to a page. Like showing values from my own database in a table. Normally i use a models.py, views.py and template.py. But now I don’t see any views.py to add functions or a urls.py to redirect to an url? Don’t know where to start! Or is this not the meaning of a wagtail site, to customize it that way? Thnx in advanced. A: You can certainly add additional data to pages. One option is to add the additional information to the context of a page type by overriding its get_context method. For example, this page is just a place to display a bunch of links. The links and the collections they belong to are plain old Django models (managed as snippets). And then there is a page model that queries the database like this: def get_context(self, request, *args, **kwargs): context = super().get_context(request, *args, **kwargs) collection_tuples = [] site = Site.find_for_request(request) for collection in Collection.objects.filter(links__audiences=self.audience, site=site).distinct(): links = Link.objects.filter(audiences=self.audience, collections=collection, site=site) collection_tuples.append((collection.name, links.order_by('text'))) # sort collection tuples by the collection name before sending to the template context['collection_tuples'] = sorted(collection_tuples, key=lambda x: x[0], reverse=False) return context Another option is to do basically the same thing - but in a StructBlock. Then you can include the StructBlock in a StreamField on your page. Most of the Caltech site is written using blocks that can be included in one large StreamField on a page. Some of those blocks manage their own content, e.g. rich text blocks or image blocks, but others query data and render it in a block template. A: To add to @cnk's excellent answer - you can absolutely use views.py and urls.py just as you would in an ordinary Django project. However, any views you define in that way will be available at a fixed URL, which means they'll be distinct from the Wagtail page system (where the URL for a page is determined by the page slug that the editor chooses within the Wagtail admin). If you're defining URLs this way, make sure they appear above the include(wagtail_urls) route in your project's top-level urls.py.
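To illustrate the second answer with a sketch: a plain Django view can live alongside Wagtail as long as its URL is registered before Wagtail's catch-all. The view, model, template path, and URL name below are hypothetical, and the wagtail_urls import path varies between Wagtail versions, so adapt this to your project:
# myapp/views.py
from django.shortcuts import render
from .models import Measurement  # hypothetical model holding your table data

def measurement_table(request):
    rows = Measurement.objects.order_by("-id")[:100]
    return render(request, "myapp/measurement_table.html", {"rows": rows})

# project/urls.py
from django.urls import include, path
from wagtail import urls as wagtail_urls  # "from wagtail.core import urls" on older versions

from myapp.views import measurement_table

urlpatterns = [
    path("measurements/", measurement_table, name="measurement_table"),
    # Wagtail's page-serving routes must come last so they do not shadow the view above.
    path("", include(wagtail_urls)),
]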
Add own functions in views.py wagtail
I have setup a wagtail website. It works great for postings like a blog and simply add new pages. But what if I want to add some extra functions to a page. Like showing values from my own database in a table. Normally i use a models.py, views.py and template.py. But now I don’t see any views.py to add functions or a urls.py to redirect to an url? Don’t know where to start! Or is this not the meaning of a wagtail site, to customize it that way? Thnx in advanced.
[ "You can certainly add additional data to pages. One option is to add the additional information to the context of a page type by overriding its get_context method. For example, this page is just a place to display a bunch of links. The links and the collections they belong to are plain old Django models (managed as snippets). And then there is a page model that queries the database like this:\ndef get_context(self, request, *args, **kwargs):\n context = super().get_context(request, *args, **kwargs)\n collection_tuples = []\n site = Site.find_for_request(request)\n for collection in Collection.objects.filter(links__audiences=self.audience, site=site).distinct():\n links = Link.objects.filter(audiences=self.audience, collections=collection, site=site)\n collection_tuples.append((collection.name, links.order_by('text')))\n # sort collection tuples by the collection name before sending to the template\n context['collection_tuples'] = sorted(collection_tuples, key=lambda x: x[0], reverse=False)\n return context\n\nAnother option is to do basically the same thing - but in a StructBlock. Then you can include the StructBlock in a StreamField on your page. Most of the Caltech site is written using blocks that can be included in one large StreamField on a page. Some of those blocks manage their own content, e.g. rich text blocks or image blocks, but others query data and render it in a block template.\n", "To add to @cnk's excellent answer - you can absolutely use views.py and urls.py just as you would in an ordinary Django project. However, any views you define in that way will be available at a fixed URL, which means they'll be distinct from the Wagtail page system (where the URL for a page is determined by the page slug that the editor chooses within the Wagtail admin).\nIf you're defining URLs this way, make sure they appear above the include(wagtail_urls) route in your project's top-level urls.py.\n" ]
[ 1, 1 ]
[]
[]
[ "django_views", "wagtail" ]
stackoverflow_0074666361_django_views_wagtail.txt
Q: neovis.js cannot display caption of nodes I just started to learn how to use neovis recently, but when I wrote an HTML page following the tutorial, the result was very different from the effect shown in the tutorial. The nodes in the tutorial had captions, but mine did not. I wonder what the problem is (screenshots: mine vs. the tutorial's). I want to know why, even though I am using the same code as the tutorial, my nodes have no captions while the tutorial's do. Has neovis.js been updated? This tutorial is 2 years old, so I guess it is because of an update, but I can't find a newer tutorial. Here is the tutorial link: https://www.youtube.com/watch?v=0-1A7f8993M A: neovis.js is version 2.0 now and most (all) tutorials are outdated. I could fix all my problems using the GitHub example files and the API reference.
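For anyone hitting the same issue: in neovis.js 2.x the configuration keys changed, and the 1.x caption option used in older tutorials was replaced by a per-label label property, which is the usual reason captions silently disappear. The sketch below follows the 2.x examples but should be treated as an assumption to verify against the current API reference; the server URL, credentials, label name, and Cypher query are placeholders:
// neovis.js 2.x style configuration (key names to be double-checked against the docs).
const config = {
  containerId: "viz",
  neo4j: {
    serverUrl: "bolt://localhost:7687",
    serverUser: "neo4j",
    serverPassword: "password",
  },
  labels: {
    Person: {
      label: "name",   // 2.x: the node caption comes from this property ("caption" in 1.x)
    },
  },
  relationships: {
    KNOWS: {
      value: "weight",
    },
  },
  initialCypher: "MATCH (n)-[r]->(m) RETURN n, r, m LIMIT 50",
};

// Depending on how the bundle is loaded this may be `new NeoVis.default(config)`
// or `new NeoVis(config)`.
const viz = new NeoVis.default(config);
viz.render();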
neovis.js cannot display caption of nodes
I just started to learn how to use neovis recently, but when I wrote an html according to the tutorial, I found it was very different from the effect shown in the tutorial. The node in the tutorial had caption, but mine did not. I wonder what the problem is.MineTutorial I want to know why I am using the same code as the tutorial, my nodes has no caption and the tutorial does. Is neovis.js updated? Because this tutorial is 2 years old, so I guess it is because of the update, but I can't find the newest tutorial. Here is the tutorial link:https://www.youtube.com/watch?v=0-1A7f8993M
[ "neovis.js is version 2.0 now and most (all) tutorials are outdated.\nI could fix all my problems using the Github example files and the API reference.\n" ]
[ 0 ]
[]
[]
[ "javascript", "neo4j", "neovis" ]
stackoverflow_0074487872_javascript_neo4j_neovis.txt
Q: Why I can't use named parameters in this function? i wrote this code in main.dart Map<String, bool> filters = { 'gluten': false, 'lactose': false, 'vegan': false, 'vegetarian': false, }; void _filteringFunction({bool gluten, bool lactose, bool vegan, bool vegetarian}) { setState(() { filters['gluten'] = gluten; filters['lactose'] = lactose; filters['vegan'] = vegan; filters['vegetarian'] = vegetarian; selectedMeals = DUMMY_MEALS.where((meal) { if (filters['gluten'] && !meal.isGlutenFree) {return false;} if (filters['lactose'] && !meal.isLactoseFree) {return false;} if (filters['vegan'] && !meal.isVegan) {return false;} if (filters['vegetarian'] && !meal.isVegetarian) {return false;} return true; }).toList(); }); } and here I call that function from another screen (the Filters Screen) , SaveFilters is the name that i gave to the function reciver in "filters screen" IconButton( onPressed: () { widget.SaveFilters( _isGlutenFree, _isLactoseFree, _isVegan, _isVegetarian);}, icon: Icon( Icons.save, color: Colors.white, )) And i was getting this error I tried to remove the curly braces that was encountering the function parameters so now the function header is like that: void _filteringFunction(bool gluten, bool lactose, bool vegan, bool vegetarian) that worked and the problem is fixed. But my question is why we can't use named parameters in this case A: In your code, you are calling the _filteringFunction method with named arguments, but the method definition does not include named parameters. This means that the names of the arguments you are passing in the method call do not match the names of the parameters in the method definition, causing the error. To fix this, you can either remove the names of the arguments in the method call, or update the method definition to include named parameters. Here is an example of how you can update the method definition to include named parameters: void _filteringFunction({bool gluten, bool lactose, bool vegan, bool vegetarian}) { setState(() { filters['gluten'] = gluten; filters['lactose'] = lactose; filters['vegan'] = vegan; filters['vegetarian'] = vegetarian; selectedMeals = DUMMY_MEALS.where((meal) { if (filters['gluten'] && !meal.isGlutenFree) {return false;} if (filters['lactose'] && !meal.isLactoseFree) {return false;} if (filters['vegan'] && !meal.isVegan) {return false;} if (filters['vegetarian'] && !meal.isVegetarian) {return false;} return true; }).toList(); }); } With this change, you can call the _filteringFunction method with named arguments, as you were doing before, and the error will not occur. It is important to note that named parameters are optional in Dart. You can choose to include them in your method definition if you want to allow calling the method with named arguments, but you don't have to include them if you don't need them.
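A small standalone sketch of the underlying rule, with illustrative names rather than the original widget code: parameters declared inside { } are named and must be passed as name: value at the call site, which is why the positional call failed until the braces were removed.
// Named parameters are declared inside { } and passed by name.
void setFilters({bool gluten = false, bool lactose = false}) {
  print('gluten=$gluten lactose=$lactose');
}

void main() {
  setFilters(gluten: true, lactose: false); // OK: arguments passed by name
  // setFilters(true, false);               // Error: positional arguments for named parameters
}
Applied to the question, keeping the named-parameter definition would have required calling widget.SaveFilters(gluten: _isGlutenFree, lactose: _isLactoseFree, vegan: _isVegan, vegetarian: _isVegetarian).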
Why I can't use named parameters in this function?
i wrote this code in main.dart Map<String, bool> filters = { 'gluten': false, 'lactose': false, 'vegan': false, 'vegetarian': false, }; void _filteringFunction({bool gluten, bool lactose, bool vegan, bool vegetarian}) { setState(() { filters['gluten'] = gluten; filters['lactose'] = lactose; filters['vegan'] = vegan; filters['vegetarian'] = vegetarian; selectedMeals = DUMMY_MEALS.where((meal) { if (filters['gluten'] && !meal.isGlutenFree) {return false;} if (filters['lactose'] && !meal.isLactoseFree) {return false;} if (filters['vegan'] && !meal.isVegan) {return false;} if (filters['vegetarian'] && !meal.isVegetarian) {return false;} return true; }).toList(); }); } and here I call that function from another screen (the Filters Screen) , SaveFilters is the name that i gave to the function reciver in "filters screen" IconButton( onPressed: () { widget.SaveFilters( _isGlutenFree, _isLactoseFree, _isVegan, _isVegetarian);}, icon: Icon( Icons.save, color: Colors.white, )) And i was getting this error I tried to remove the curly braces that was encountering the function parameters so now the function header is like that: void _filteringFunction(bool gluten, bool lactose, bool vegan, bool vegetarian) that worked and the problem is fixed. But my question is why we can't use named parameters in this case
[ "In your code, you are calling the _filteringFunction method with named arguments, but the method definition does not include named parameters. This means that the names of the arguments you are passing in the method call do not match the names of the parameters in the method definition, causing the error.\nTo fix this, you can either remove the names of the arguments in the method call, or update the method definition to include named parameters. Here is an example of how you can update the method definition to include named parameters:\nvoid _filteringFunction({bool gluten, bool lactose, bool vegan, bool vegetarian}) {\n setState(() {\n filters['gluten'] = gluten;\n filters['lactose'] = lactose;\n filters['vegan'] = vegan;\n filters['vegetarian'] = vegetarian;\n\n selectedMeals = DUMMY_MEALS.where((meal) {\n if (filters['gluten'] && !meal.isGlutenFree) {return false;}\n if (filters['lactose'] && !meal.isLactoseFree) {return false;}\n if (filters['vegan'] && !meal.isVegan) {return false;}\n if (filters['vegetarian'] && !meal.isVegetarian) {return false;}\n\n return true;\n }).toList();\n });\n}\n\nWith this change, you can call the _filteringFunction method with named arguments, as you were doing before, and the error will not occur.\nIt is important to note that named parameters are optional in Dart. You can choose to include them in your method definition if you want to allow calling the method with named arguments, but you don't have to include them if you don't need them.\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074668772_dart_flutter.txt
Q: TouchableOpacity with multiple child with onPress I have a FlatList with a custom row item wrapped inside a TouchableOpacity. I would like to have multiple onPress handlers inside the TouchableOpacity so that each child view can handle its respective job. The problem is that when pressing the child view it does its job, but the parent onPress also gets executed. How do I stop that? <TouchableOpacity onPress={ () => this.doSomething()}> <Text>Some content</Text> <Icon name="trash" size={20} onPress={this.onDeleteItem(item)}/> </TouchableOpacity> In other words, how do I execute only onDeleteItem when the user presses the trash icon? Any suggestion appreciated, thanks. A: You will have to use Containers from native-base in order to use zIndex properly. I also suffered a lot because of zIndex. Then with containers it was working fine. A: Add e.preventDefault() inside the child handler, as in this example: <TouchableOpacity onPress={() => this.doSomething()}> <Text>Some content</Text> <Icon name="trash" size={20} onPress={(e) => { e.preventDefault() this.onDeleteItem(item) }}/> </TouchableOpacity> A: Move your first onPress to the Text, or wherever is more appropriate than the outermost container. <TouchableOpacity> <Text onPress={ () => this.doSomething()} >Some content</Text> <Icon name="trash" size={20} onPress={this.onDeleteItem(item)}/> </TouchableOpacity> There's no way to avoid an event listener on the parent TouchableOpacity because it's wrapping all content. You would need to add some Containers/Contents (NativeBase) to modularize it more.
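One more option, sketched here as a possible approach rather than the definitive fix: give the trash icon its own touchable wrapper, so the inner responder claims the press and the row's handler is not invoked. Note also that onPress={this.onDeleteItem(item)} in the question calls the handler immediately during render, so an arrow function is used below; component names follow the question.
<TouchableOpacity onPress={() => this.doSomething()}>
  <Text>Some content</Text>
  {/* The nested touchable handles the press, so the outer onPress does not fire. */}
  <TouchableOpacity onPress={() => this.onDeleteItem(item)}>
    <Icon name="trash" size={20} />
  </TouchableOpacity>
</TouchableOpacity>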
TouchableOpacity with multiple children with onPress
I have a FlatList with a custom row item wrapped inside a TouchableOpacity. I would like to have multiple onPress handlers inside the TouchableOpacity so that each child view can handle its respective job. The problem is that when pressing the child view it does its job, but the parent onPress also gets executed. How do I stop that? <TouchableOpacity onPress={ () => this.doSomething()}> <Text>Some content</Text> <Icon name="trash" size={20} onPress={this.onDeleteItem(item)}/> <TouchableOpacity> In other words, how do I execute only onDeleteItem when the user presses the trash icon? Any suggestions appreciated, thanks.
[ "You will have to use Containers from native-base in order to use zIndex properly. I also suffered allot because of zIndex. Then with containers it was working fine.\n", "Add inside child function\ne.preventDefault()\nin suc an example\n<TouchableOpacity onPress={() => this.doSomething()}>\n <Text>Some content</Text>\n <Icon name=\"trash\" size={20} onPress={(e) => {\n e.preventDefault()\n this.onDeleteItem(item)\n }}/>\n<TouchableOpacity>\n\n", "Move your first onPress to the Text, or where-ever is more appropriate than the outer most container. \n<TouchableOpacity>\n <Text onPress={ () => this.doSomething()} >Some content</Text>\n <Icon name=\"trash\" size={20} onPress={this.onDeleteItem(item)}/>\n<TouchableOpacity>\n\nTheres no way to avoid an event listener on the parent TouchableOpacity because it's wrapping all content. You would need to add some Containers/Contents (NativeBase) to modularize it more. \n" ]
[ 1, 1, 0 ]
[]
[]
[ "react_native" ]
stackoverflow_0047860387_react_native.txt
Q: nestjs + Passport + GqlAuthGuard produces Cannot read property 'logIn' of undefined I have followed the nestjs example of how to integrate passport with apollo but it constantly crashes with the following exception when I call a guarded resolver. Looking into it in detail when the @nestjs/passport auth-guard class is extended, it does not call the getRequest function of the child class, instead it calls the pre-existing one in the class (as if inheritance never took place) [Nest] 25029 - 05/26/2022, 8:29:40 PM ERROR [ExceptionsHandler] Cannot read property 'logIn' of undefined TypeError: Cannot read property 'logIn' of undefined at authenticate (.../node_modules/passport/lib/middleware/authenticate.js:96:21) at ...node_modules/@nestjs/passport/dist/auth.guard.js:96:3 at new Promise (<anonymous>) at ...node_modules/@nestjs/passport/dist/auth.guard.js:88:83 at JwtAuthGuard.<anonymous> (...node_modules/@nestjs/passport/dist/auth.guard.js:49:36) at Generator.next (<anonymous>) at fulfilled (.../node_modules/@nestjs/passport/dist/auth.guard.js:17:58) at processTicksAndRejections (internal/process/task_queues.js:93:5) I have the following setup import { ExecutionContext, Injectable } from '@nestjs/common'; import { AuthGuard } from '@nestjs/passport'; import { GqlExecutionContext } from '@nestjs/graphql'; @Injectable() export class GqlAuthGuard extends AuthGuard('jwt') { getRequest(context: ExecutionContext) { const ctx = GqlExecutionContext.create(context); return ctx.getContext().req; } } @Injectable() export class JwtAuthGuard extends AuthGuard('jwt') { constructor(private readonly reflector: Reflector) { super(); } canActivate(context: ExecutionContext) { const isGuestAllowed = this.reflector.getAllAndOverride<boolean>(IS_GUEST_ALLOWED_KEY, [ context.getHandler(), context.getClass(), ]); if (isGuestAllowed) { return true; } // Add your custom authentication logic here // for example, call super.login(request) to establish a session. return super.canActivate(context); } handleRequest(err, user, info) { // You can throw an exception based on either "info" or "err" arguments if (err || !user) { throw err || new UnauthorizedException(); } return user; } } @Injectable() export class JwtStrategy extends PassportStrategy(Strategy) { constructor() { super({ jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(), ignoreExpiration: false, secretOrKey: JwtConstants.secret, }); } async validate(payload: any) { return { userId: payload.sub, username: payload.username }; } } @Module({ imports: [ ConfigModule.forRoot({ isGlobal: true, envFilePath: `.env.${process.env.NODE_ENV}`, }), TypeOrmModule.forRoot(), GraphQLModule.forRoot<ApolloDriverConfig>({ driver: ApolloDriver, debug: true, playground: true, autoSchemaFile: join(process.cwd(), 'src/schema.gql'), installSubscriptionHandlers: true, context: ({ req }) => ({ req }), }), RecipesModule, AuthModule, UsersModule, ], controllers: [AppController], providers: [AppService], }) export class AppModule {} @Resolver((of) => Recipe) export class RecipesResolver { constructor(private readonly recipesService: RecipesService) {} @UseGuards(GqlAuthGuard) @Query((returns) => Recipe) async recipe(@CurrentUser() user: any, @Args('id') id: string): Promise<Recipe> { const recipe = await this.recipesService.findOneById(id); if (!recipe) { throw new NotFoundException(`Recipe with ID "${id}" not found`); } return recipe; } } Using the following package versions. 
"dependencies": { "@nestjs/apollo": "^10.0.12", "@nestjs/common": "^8.0.0", "@nestjs/config": "^2.0.1", "@nestjs/core": "^8.0.0", "@nestjs/graphql": "^10.0.12", "@nestjs/jwt": "^8.0.1", "@nestjs/passport": "^8.2.1", "@nestjs/platform-express": "^8.0.0", "@nestjs/typeorm": "^8.0.4", "apollo-server-express": "^3.8.0", "class-transformer": "^0.5.1", "class-validator": "^0.13.2", "cross-env": "^7.0.3", "graphql": "^16.5.0", "graphql-query-complexity": "^0.11.0", "graphql-subscriptions": "^2.0.0", "passport": "^0.6.0", "passport-local": "^1.0.0", "pg": "^8.7.3", "reflect-metadata": "^0.1.13", "rimraf": "^3.0.2", "rxjs": "^7.2.0", "typeorm": "^0.3.6", "uuid": "^8.3.2" }, A: The issue was making the JwtGuard global (not visible in the source code above). It was getting queued before the JqlGuard which it would seem as if my getRequest was never called (was running a different class but being js there was no clear indication as to which class is actually running). A: I've got the same problem with a custom GraphQL driver. I forgot to add a setter for context and to override getRequest in global JwtAuthGuard : import { Module } from '@nestjs/common'; import { GraphQLModule as NestGraphQLModule } from '@nestjs/graphql'; import { AbstractGraphQLDriver, GqlModuleOptions } from '@nestjs/graphql'; import { createHandler } from 'graphql-http/lib/use/express'; import { TestResolver } from './resolvers/test'; class GraphQLDriver extends AbstractGraphQLDriver { async start(options: GqlModuleOptions): Promise<void> { const { schema } = await this.graphQlFactory.mergeWithSchema(options); const { httpAdapter } = this.httpAdapterHost; httpAdapter.use( '/api/graphql', createHandler({ schema, context(req, params) { // <-- Context was missing return { req, params }; } }) ); } async stop() {} } @Module({ imports: [ NestGraphQLModule.forRoot<GqlModuleOptions>({ driver: GraphQLDriver, typePaths: ['../types/graphql/**/*.gql'] }) ], providers: [TestResolver] }) export class GraphQLModule {} import { ExecutionContext, Injectable } from '@nestjs/common'; import { AuthGuard } from '@nestjs/passport'; import { GqlContextType, GqlExecutionContext } from '@nestjs/graphql'; @Injectable() export class JwtAuthGuard extends AuthGuard('access-token') { constructor(private reflector: Reflector) { super(); } // Must override getRequest to handle graphql context type getRequest(context: ExecutionContext) { switch (context.getType<GqlContextType>()) { case 'graphql': const ctx = GqlExecutionContext.create(context); return ctx.getContext().req; default: // 'http' | 'ws' | 'rpc' return context.switchToHttp().getRequest(); } } }
nestjs + Passport + GqlAuthGuard produces Cannot read property 'logIn' of undefined
I have followed the nestjs example of how to integrate passport with apollo but it constantly crashes with the following exception when I call a guarded resolver. Looking into it in detail when the @nestjs/passport auth-guard class is extended, it does not call the getRequest function of the child class, instead it calls the pre-existing one in the class (as if inheritance never took place) [Nest] 25029 - 05/26/2022, 8:29:40 PM ERROR [ExceptionsHandler] Cannot read property 'logIn' of undefined TypeError: Cannot read property 'logIn' of undefined at authenticate (.../node_modules/passport/lib/middleware/authenticate.js:96:21) at ...node_modules/@nestjs/passport/dist/auth.guard.js:96:3 at new Promise (<anonymous>) at ...node_modules/@nestjs/passport/dist/auth.guard.js:88:83 at JwtAuthGuard.<anonymous> (...node_modules/@nestjs/passport/dist/auth.guard.js:49:36) at Generator.next (<anonymous>) at fulfilled (.../node_modules/@nestjs/passport/dist/auth.guard.js:17:58) at processTicksAndRejections (internal/process/task_queues.js:93:5) I have the following setup import { ExecutionContext, Injectable } from '@nestjs/common'; import { AuthGuard } from '@nestjs/passport'; import { GqlExecutionContext } from '@nestjs/graphql'; @Injectable() export class GqlAuthGuard extends AuthGuard('jwt') { getRequest(context: ExecutionContext) { const ctx = GqlExecutionContext.create(context); return ctx.getContext().req; } } @Injectable() export class JwtAuthGuard extends AuthGuard('jwt') { constructor(private readonly reflector: Reflector) { super(); } canActivate(context: ExecutionContext) { const isGuestAllowed = this.reflector.getAllAndOverride<boolean>(IS_GUEST_ALLOWED_KEY, [ context.getHandler(), context.getClass(), ]); if (isGuestAllowed) { return true; } // Add your custom authentication logic here // for example, call super.login(request) to establish a session. return super.canActivate(context); } handleRequest(err, user, info) { // You can throw an exception based on either "info" or "err" arguments if (err || !user) { throw err || new UnauthorizedException(); } return user; } } @Injectable() export class JwtStrategy extends PassportStrategy(Strategy) { constructor() { super({ jwtFromRequest: ExtractJwt.fromAuthHeaderAsBearerToken(), ignoreExpiration: false, secretOrKey: JwtConstants.secret, }); } async validate(payload: any) { return { userId: payload.sub, username: payload.username }; } } @Module({ imports: [ ConfigModule.forRoot({ isGlobal: true, envFilePath: `.env.${process.env.NODE_ENV}`, }), TypeOrmModule.forRoot(), GraphQLModule.forRoot<ApolloDriverConfig>({ driver: ApolloDriver, debug: true, playground: true, autoSchemaFile: join(process.cwd(), 'src/schema.gql'), installSubscriptionHandlers: true, context: ({ req }) => ({ req }), }), RecipesModule, AuthModule, UsersModule, ], controllers: [AppController], providers: [AppService], }) export class AppModule {} @Resolver((of) => Recipe) export class RecipesResolver { constructor(private readonly recipesService: RecipesService) {} @UseGuards(GqlAuthGuard) @Query((returns) => Recipe) async recipe(@CurrentUser() user: any, @Args('id') id: string): Promise<Recipe> { const recipe = await this.recipesService.findOneById(id); if (!recipe) { throw new NotFoundException(`Recipe with ID "${id}" not found`); } return recipe; } } Using the following package versions. 
"dependencies": { "@nestjs/apollo": "^10.0.12", "@nestjs/common": "^8.0.0", "@nestjs/config": "^2.0.1", "@nestjs/core": "^8.0.0", "@nestjs/graphql": "^10.0.12", "@nestjs/jwt": "^8.0.1", "@nestjs/passport": "^8.2.1", "@nestjs/platform-express": "^8.0.0", "@nestjs/typeorm": "^8.0.4", "apollo-server-express": "^3.8.0", "class-transformer": "^0.5.1", "class-validator": "^0.13.2", "cross-env": "^7.0.3", "graphql": "^16.5.0", "graphql-query-complexity": "^0.11.0", "graphql-subscriptions": "^2.0.0", "passport": "^0.6.0", "passport-local": "^1.0.0", "pg": "^8.7.3", "reflect-metadata": "^0.1.13", "rimraf": "^3.0.2", "rxjs": "^7.2.0", "typeorm": "^0.3.6", "uuid": "^8.3.2" },
[ "The issue was making the JwtGuard global (not visible in the source code above). It was getting queued before the JqlGuard which it would seem as if my getRequest was never called (was running a different class but being js there was no clear indication as to which class is actually running).\n", "I've got the same problem with a custom GraphQL driver. I forgot to add a setter for context and to override getRequest in global JwtAuthGuard :\nimport { Module } from '@nestjs/common';\nimport { GraphQLModule as NestGraphQLModule } from '@nestjs/graphql';\nimport { AbstractGraphQLDriver, GqlModuleOptions } from '@nestjs/graphql';\nimport { createHandler } from 'graphql-http/lib/use/express';\nimport { TestResolver } from './resolvers/test';\n\nclass GraphQLDriver extends AbstractGraphQLDriver {\n async start(options: GqlModuleOptions): Promise<void> {\n const { schema } = await this.graphQlFactory.mergeWithSchema(options);\n\n const { httpAdapter } = this.httpAdapterHost;\n\n httpAdapter.use(\n '/api/graphql',\n createHandler({\n schema,\n context(req, params) { // <-- Context was missing\n return { req, params };\n }\n })\n );\n }\n\n async stop() {}\n}\n\n@Module({\n imports: [\n NestGraphQLModule.forRoot<GqlModuleOptions>({\n driver: GraphQLDriver,\n typePaths: ['../types/graphql/**/*.gql']\n })\n ],\n providers: [TestResolver]\n})\nexport class GraphQLModule {}\n\nimport { ExecutionContext, Injectable } from '@nestjs/common';\nimport { AuthGuard } from '@nestjs/passport';\nimport { GqlContextType, GqlExecutionContext } from '@nestjs/graphql';\n\n@Injectable()\nexport class JwtAuthGuard extends AuthGuard('access-token') {\n constructor(private reflector: Reflector) {\n super();\n }\n\n // Must override getRequest to handle graphql context type\n getRequest(context: ExecutionContext) {\n switch (context.getType<GqlContextType>()) {\n case 'graphql':\n const ctx = GqlExecutionContext.create(context);\n return ctx.getContext().req;\n default: // 'http' | 'ws' | 'rpc'\n return context.switchToHttp().getRequest();\n }\n }\n}\n\n" ]
[ 1, 0 ]
[]
[]
[ "nestjs_graphql", "nestjs_passport" ]
stackoverflow_0072395981_nestjs_graphql_nestjs_passport.txt
Q: Letter combinations of a phone keypad key I have a question about letter combinations of a phone keypad key in JavaScript. I wrote a solution using DFS recursion. But it does not work as expected. I am new to JavaScript but similarly written code in Ruby works. The problem is about getting all possible letter combination from a phone keypad. Input: "23" Output: ["ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf"]. With the code below, it stops at "af". Output is ["ad", "ae", "af"]. I am not sure why this code does not move to the second letter of "2", which is "b". const map = { "2": ["a", "b", "c"], "3": ["d", "e", "f"], "4": ["g", "h", "i"], "5": ["j", "k", "l"], "6": ["m", "n", "o"], "7": ["p", "q", "r", "s"], "8": ["t", "u", "v"], "9": ["w", "x", "y", "z"] }; let result = []; let letterCombinations = function(digits) { if (digits.length == 0) { return [] }; let stack = []; dfs(digits.split(''), 0, stack) return result }; function dfs(digits, index, stack) { const currentLetters = map[digits[index]] for (i = 0; i < currentLetters.length; i++) { stack.push(currentLetters[i]) if (index == digits.length - 1) { result.push(stack.join('')) stack.pop() } else { dfs(digits, index + 1, stack) stack.pop() } } } console.log(letterCombinations("23")); A: You need to declare i in your for loop otherwise it's global and keeps getting incremented on each recursion step. Use for (let i = 0; i < currentLetters.length; i++) const map = { "2": ["a", "b", "c"], "3": ["d", "e", "f"], "4": ["g", "h", "i"], "5": ["j", "k", "l"], "6": ["m", "n", "o"], "7": ["p", "q", "r", "s"], "8": ["t", "u", "v"], "9": ["w", "x", "y", "z"] }; let result = []; let letterCombinations = function(digits) { if (digits.length == 0) { return [] }; let stack = []; dfs(digits.split(''), 0, stack) return result }; function dfs(digits, index, stack) { const currentLetters = map[digits[index]] // declare the loop variable! for (let i = 0; i < currentLetters.length; i++) { stack.push(currentLetters[i]) if (index == digits.length - 1) { result.push(stack.join('')) stack.pop() } else { dfs(digits, index + 1, stack) stack.pop() } } } console.log(letterCombinations("23")); A: Here is a less complex implementation. I hope you find it useful! 
const map = { "2": ["a", "b", "c"], "3": ["d", "e", "f"], "4": ["g", "h", "i"], "5": ["j", "k", "l"], "6": ["m", "n", "o"], "7": ["p", "q", "r", "s"], "8": ["t", "u", "v"], "9": ["w", "x", "y", "z"] }; function letterCombinations(digits) { digits = digits.split(''); const firstArray = map[digits[0]]; const secondArray = map[digits[1]]; const result = []; for (let i = 0; i < firstArray.length; i++) { for (let j = 0; j < secondArray.length; j++) { result.push(firstArray[i] + secondArray[j]); } } return result; }; console.log(letterCombinations("23")); A: const DailNumbers = { 2: ['a', 'b', 'c'], 3: ['d', 'e', 'f'], 4: ['g', 'h', 'i'], 5: ['j', 'k', 'l'], 6: ['m', 'n', 'o'], 7: ['p', 'q', 'r', 's'], 8: ['t', 'u', 'v'], 9: ['w', 'x', 'y', 'z'], } const LoopNum =(arg, TeleNumbers)=>{ // Number to String conversion and splitting the values let splitnum = arg.toString().split(""); // If No values Just pass empty array if(splitnum.length < 1) return [] const combinedArray = splitnum.map( (val) => TeleNumbers[val]); const temp = []; // combined array is greater than one value if (combinedArray [1]) { for(let i = 0; i < combinedArray[0].length; i++){ for(let j = 0; j < combinedArray[1].length; j++){ temp.push(combinedArray[0][i] +""+ combinedArray[1][j]) } } } // combined array is greater than one value else { for(let i = 0; i < combinedArray[0].length; i++){ temp.push(combinedArray[0][i]) } } return temp } console.log(LoopNum('23', DailNumbers)) // result will ["ad", "ae", "af", "bd", "be", "bf","cd", "ce", "cf"] A: Letter combination of a phone number(keypad key) Javascript Solution Create a phone keypad dictionary first. Combine each digit from the dictionary array using nested loops. /** * @param {string} digits "23" * @return {string[]} ["ad","ae","af","bd","be","bf","cd","ce","cf"] */ var letterCombinations = function(digits) { let tel = { 2:['a','b','c'], 3:['d','e','f'], 4:['g','h','i'], 5:['j','k','l'], 6:['m','n','o'], 7:['p','q','r','s'], 8:['t','u','v'], 9:['w','x','y','z'] }, arr1=tel[digits[0]],arr2=tel[digits[1]],array=[]; if(!digits.trim().length){ return []; } if(digits.length == 1){ return tel[digits]; } for(let d=0;d<digits.length-1;d++) { arr1 = array.length ? array : tel[digits[d]]; arr2 = tel[digits[d+1]]; if(array.length){ array =[]; } if(digits[d+1]) { for(let i=0;i<arr1.length;i++) { for(let j=0;j<arr2.length;j++) { array.push(arr1[i]+arr2[j]) } } } else { return array; } } return array; }; A: Since this old question has been resurrected, here's an easier recursive version: const allNbrs = (chars) => ([c, ...cs]) => c == undefined ? [''] : allNbrs (chars) (cs) .flatMap (s => (chars [c] || [c]) .map (c => c + s)) const tel = {2: ['a', 'b', 'c'], 3: ['d', 'e', 'f'], 4: ['g', 'h', 'i'], 5: ['j', 'k', 'l'], 6: ['m', 'n', 'o'], 7: ['p', 'q', 'r', 's'], 8: ['t', 'u', 'v'], 9: ['w', 'x', 'y', 'z']} const letterCombinations = allNbrs (tel) console .log (letterCombinations ('23')) We separate the first letter of the string (c) from the remaining ones (cs), recur on those remaining ones, then, for every result, we combine it with each entry from our table (chars). Note that (chars || [c]) .map (...) is used for those cases in which the character supplied is no in our table. We could alternately do (chars || []) .map (...), which would return an empty array if we had any bad characters. Just chars .map (...) would throw an error if passed a zero. 
I prefer the version above, as it gives an interesting answer for, say, 304: letterCombinations ('304') //=> ["d0w","e0w","f0w","d0x","e0x","f0x","d0y","e0y","f0y","d0z","e0z","f0z"]
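For completeness, here is a compact iterative variant (a sketch, not from the thread) that handles any number of digits by folding over the input; it reuses the same keypad map as the question.

const map = {
  "2": ["a", "b", "c"], "3": ["d", "e", "f"], "4": ["g", "h", "i"],
  "5": ["j", "k", "l"], "6": ["m", "n", "o"], "7": ["p", "q", "r", "s"],
  "8": ["t", "u", "v"], "9": ["w", "x", "y", "z"]
};

// Fold over the digits, extending every partial string with each letter of the next digit.
const letterCombinations = (digits) =>
  digits === ""
    ? []
    : [...digits].reduce(
        (acc, d) => acc.flatMap((prefix) => map[d].map((ch) => prefix + ch)),
        [""]
      );

console.log(letterCombinations("23"));
// ["ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf"]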
Letter combinations of a phone keypad key
I have a question about letter combinations of a phone keypad key in JavaScript. I wrote a solution using DFS recursion. But it does not work as expected. I am new to JavaScript but similarly written code in Ruby works. The problem is about getting all possible letter combination from a phone keypad. Input: "23" Output: ["ad", "ae", "af", "bd", "be", "bf", "cd", "ce", "cf"]. With the code below, it stops at "af". Output is ["ad", "ae", "af"]. I am not sure why this code does not move to the second letter of "2", which is "b". const map = { "2": ["a", "b", "c"], "3": ["d", "e", "f"], "4": ["g", "h", "i"], "5": ["j", "k", "l"], "6": ["m", "n", "o"], "7": ["p", "q", "r", "s"], "8": ["t", "u", "v"], "9": ["w", "x", "y", "z"] }; let result = []; let letterCombinations = function(digits) { if (digits.length == 0) { return [] }; let stack = []; dfs(digits.split(''), 0, stack) return result }; function dfs(digits, index, stack) { const currentLetters = map[digits[index]] for (i = 0; i < currentLetters.length; i++) { stack.push(currentLetters[i]) if (index == digits.length - 1) { result.push(stack.join('')) stack.pop() } else { dfs(digits, index + 1, stack) stack.pop() } } } console.log(letterCombinations("23"));
[ "You need to declare i in your for loop otherwise it's global and keeps getting incremented on each recursion step.\nUse for (let i = 0; i < currentLetters.length; i++)\n\n\nconst map = {\r\n \"2\": [\"a\", \"b\", \"c\"],\r\n \"3\": [\"d\", \"e\", \"f\"],\r\n \"4\": [\"g\", \"h\", \"i\"],\r\n \"5\": [\"j\", \"k\", \"l\"],\r\n \"6\": [\"m\", \"n\", \"o\"],\r\n \"7\": [\"p\", \"q\", \"r\", \"s\"],\r\n \"8\": [\"t\", \"u\", \"v\"],\r\n \"9\": [\"w\", \"x\", \"y\", \"z\"]\r\n};\r\n\r\nlet result = [];\r\n\r\nlet letterCombinations = function(digits) {\r\n if (digits.length == 0) {\r\n return []\r\n };\r\n\r\n let stack = [];\r\n dfs(digits.split(''), 0, stack)\r\n\r\n return result\r\n};\r\n\r\nfunction dfs(digits, index, stack) {\r\n const currentLetters = map[digits[index]]\r\n \r\n // declare the loop variable!\r\n for (let i = 0; i < currentLetters.length; i++) {\r\n stack.push(currentLetters[i])\r\n\r\n if (index == digits.length - 1) {\r\n result.push(stack.join(''))\r\n stack.pop()\r\n } else {\r\n dfs(digits, index + 1, stack)\r\n stack.pop()\r\n }\r\n }\r\n}\r\n\r\nconsole.log(letterCombinations(\"23\"));\n\n\n\n", "Here is a less complex implementation. I hope you find it useful!\n\n\nconst map = {\r\n \"2\": [\"a\", \"b\", \"c\"],\r\n \"3\": [\"d\", \"e\", \"f\"],\r\n \"4\": [\"g\", \"h\", \"i\"],\r\n \"5\": [\"j\", \"k\", \"l\"],\r\n \"6\": [\"m\", \"n\", \"o\"],\r\n \"7\": [\"p\", \"q\", \"r\", \"s\"],\r\n \"8\": [\"t\", \"u\", \"v\"],\r\n \"9\": [\"w\", \"x\", \"y\", \"z\"]\r\n};\r\n\r\nfunction letterCombinations(digits) {\r\n digits = digits.split('');\r\n \r\n const firstArray = map[digits[0]];\r\n const secondArray = map[digits[1]];\r\n const result = [];\r\n \r\n for (let i = 0; i < firstArray.length; i++)\r\n {\r\n for (let j = 0; j < secondArray.length; j++)\r\n {\r\n result.push(firstArray[i] + secondArray[j]);\r\n }\r\n }\r\n \r\n return result;\r\n};\r\n\r\nconsole.log(letterCombinations(\"23\"));\n\n\n\n", "\n\nconst DailNumbers = {\n 2: ['a', 'b', 'c'],\n 3: ['d', 'e', 'f'],\n 4: ['g', 'h', 'i'],\n 5: ['j', 'k', 'l'],\n 6: ['m', 'n', 'o'],\n 7: ['p', 'q', 'r', 's'],\n 8: ['t', 'u', 'v'],\n 9: ['w', 'x', 'y', 'z'],\n}\n\n\nconst LoopNum =(arg, TeleNumbers)=>{\n \n // Number to String conversion and splitting the values\n let splitnum = arg.toString().split(\"\");\n \n // If No values Just pass empty array\n if(splitnum.length < 1) return []\n \n const combinedArray = splitnum.map( (val) => TeleNumbers[val]);\n const temp = []; \n \n // combined array is greater than one value\n if (combinedArray [1]) {\n for(let i = 0; i < combinedArray[0].length; i++){ \n for(let j = 0; j < combinedArray[1].length; j++){\n temp.push(combinedArray[0][i] +\"\"+ combinedArray[1][j])\n }\n }\n }\n \n // combined array is greater than one value\n else {\n for(let i = 0; i < combinedArray[0].length; i++){ \n temp.push(combinedArray[0][i])\n } \n }\n\n return temp\n}\nconsole.log(LoopNum('23', DailNumbers)) \n\n// result will [\"ad\", \"ae\", \"af\", \"bd\", \"be\", \"bf\",\"cd\", \"ce\", \"cf\"]\n\n\n\n", "Letter combination of a phone number(keypad key) Javascript Solution\n\nCreate a phone keypad dictionary first.\nCombine each digit from the dictionary array using nested loops.\n\n/**\n * @param {string} digits \"23\"\n * @return {string[]} [\"ad\",\"ae\",\"af\",\"bd\",\"be\",\"bf\",\"cd\",\"ce\",\"cf\"]\n */\nvar letterCombinations = function(digits) {\n let tel = {\n 2:['a','b','c'],\n 3:['d','e','f'],\n 4:['g','h','i'],\n 5:['j','k','l'],\n 6:['m','n','o'],\n 7:['p','q','r','s'],\n 
8:['t','u','v'],\n 9:['w','x','y','z']\n }, arr1=tel[digits[0]],arr2=tel[digits[1]],array=[];\n if(!digits.trim().length){\n return [];\n }\n if(digits.length == 1){\n return tel[digits];\n }\n for(let d=0;d<digits.length-1;d++) {\n arr1 = array.length ? array : tel[digits[d]];\n arr2 = tel[digits[d+1]];\n if(array.length){\n array =[];\n }\n if(digits[d+1]) {\n for(let i=0;i<arr1.length;i++) {\n for(let j=0;j<arr2.length;j++) {\n array.push(arr1[i]+arr2[j])\n }\n }\n } else {\n return array;\n }\n\n }\n return array;\n \n};\n\n", "Since this old question has been resurrected, here's an easier recursive version:\n\n\nconst allNbrs = (chars) => ([c, ...cs]) =>\n c == undefined\n ? ['']\n : allNbrs (chars) (cs) .flatMap (s => (chars [c] || [c]) .map (c => c + s))\n\nconst tel = {2: ['a', 'b', 'c'], 3: ['d', 'e', 'f'], 4: ['g', 'h', 'i'], 5: ['j', 'k', 'l'], 6: ['m', 'n', 'o'], 7: ['p', 'q', 'r', 's'], 8: ['t', 'u', 'v'], 9: ['w', 'x', 'y', 'z']}\n\nconst letterCombinations = allNbrs (tel)\n\nconsole .log (letterCombinations ('23'))\n\n\n\nWe separate the first letter of the string (c) from the remaining ones (cs), recur on those remaining ones, then, for every result, we combine it with each entry from our table (chars).\nNote that (chars || [c]) .map (...) is used for those cases in which the character supplied is no in our table. We could alternately do (chars || []) .map (...), which would return an empty array if we had any bad characters. Just chars .map (...) would throw an error if passed a zero. I prefer the version above, as it gives an interesting answer for, say, 304:\nletterCombinations ('304')\n//=> [\"d0w\",\"e0w\",\"f0w\",\"d0x\",\"e0x\",\"f0x\",\"d0y\",\"e0y\",\"f0y\",\"d0z\",\"e0z\",\"f0z\"]\n\n" ]
[ 6, 0, 0, 0, 0 ]
[]
[]
[ "javascript", "microsoft_distributed_file_system", "recursion" ]
stackoverflow_0053840679_javascript_microsoft_distributed_file_system_recursion.txt
Q: How can I have two not arguments in an if statement I want to check if the following value is not a digit and is not "a" or "b", but I'm met with a syntax error. It says it expects ":" after not in the second argument. if not char.isdigit() and not in ('a', 'b'): I don't know what I can try to fix this. I could nest the if statement, but that leads to bad code and I know there must be some solution. A: The line should be: if not char.isdigit() and char not in ('a', 'b'): You have to state which variable is not in ('a', 'b'). Furthermore, I would take a look at how to structure questions on StackOverflow.
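A runnable illustration of the suggested fix (the sample characters are made up): both sides of the and must be complete boolean expressions, so the value being tested has to be repeated before not in.

for char in "7ax":
    # "not in" needs an operand on its left, so char is repeated in the second test.
    if not char.isdigit() and char not in ('a', 'b'):
        print(char, "is neither a digit nor 'a'/'b'")   # only 'x' is printed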
How can I have two not arguments in an if statement
I want to check if the following value is not a digit and is not "a" or "b", but I'm met with a syntax error. It says it expects ":" after not in the second argument. if not char.isdigit() and not in ('a', 'b'): I don't know what I can try to fix this. I could nest the if statement, but that leads to bad code and I know there must be some solution.
[ "The line should be:\nif not char.isdigit() and char not in ('a', 'b'):\n\nYou have to declare what variable is not in ('a', 'b')\nFurthermore, I would take a look at how to structure questions on StackOverflow.\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x", "string" ]
stackoverflow_0074669271_python_python_3.x_string.txt
Q: Azure Function App Missing Function in Console UI After Publishing I'm programming automation in my GitLab CI/CD to deploy my Azure Functions projects programmatically. But I'm having an issue when I publish a new function to a function app created previously using the az cli, like the example below: $ func azure functionapp publish $AZURE_APP_NAME ${SLOT_PARAMETER} ${FUNCTION_LANGUAGE} --nozip $ az webapp config appsettings set -g $AZURE_RG_NAME -n $AZURE_APP_NAME ${SLOT_PARAMETER} --settings "WEBSITE_RUN_FROM_PACKAGE=0" The command line shows that the new function was built, created and deployed successfully. But when I check whether the new function was created in my app using the Azure Console UI, nothing was shown. I even tried deploying the function as a zip package using the default publish command, or, like the example above, using --nozip and setting WEBSITE_RUN_FROM_PACKAGE=0 to deploy files. It's strange because I could see the deployed function for another function app using the same script. In this way the behavior of the function app's console UI seems erratic.
Azure Function App Missing Function in Console UI After Publishing
I'm programming automation in my GitLab CI/CD to deploy my Azure Functions projects programmatically. But I'm having an issue when I publish a new function to a function app created previously using the az cli, like the example below: $ func azure functionapp publish $AZURE_APP_NAME ${SLOT_PARAMETER} ${FUNCTION_LANGUAGE} --nozip $ az webapp config appsettings set -g $AZURE_RG_NAME -n $AZURE_APP_NAME ${SLOT_PARAMETER} --settings "WEBSITE_RUN_FROM_PACKAGE=0" The command line shows that the new function was built, created and deployed successfully. But when I check whether the new function was created in my app using the Azure Console UI, nothing was shown. I even tried deploying the function as a zip package using the default publish command, or, like the example above, using --nozip and setting WEBSITE_RUN_FROM_PACKAGE=0 to deploy files. It's strange because I could see the deployed function for another function app using the same script. In this way the behavior of the function app's console UI seems erratic.
[ "When you are doing continuous deployments for the Azure Function App, you have to enable one of the configuration settings in the Azure Portal Function App > Configuration Menu i.e., SCM_DO_BUILD_DURING_DEPLOYMENT=true.\naz functionapp config appsettings set --name PravuFunctionApp \\\n--resource-group PraviRG \\\n--settings SCM_DO_BUILD_DURING_DEPLOYMENT=true\n\nAfter configuring this setting, then deploy using az cli command or function core tools command func azure functionapp publish, it will show the Functions in the Azure Portal Function App.\nRefer to this SO Thread regarding the similar issue i.e., No Functions showing after deployment/publishing.\n", "As was mentioned by @pravallika-kothaveerannagari I needed to enable SCM_DO_BUILD_DURING_DEPLOYMENT=true during the deployment of my function to build it on the App Service.\nBesides, I've changed the deployment of the function to az functionapp deployment source config-zip, and not using Azure Function core tools anymore.\necho '[config] SCM_DO_BUILD_DURING_DEPLOYMENT = true' > .deployment\nzip -r build.zip MyFunction\naz functionapp deployment source config-zip -g $AZURE_RG_NAME -n $AZURE_APP_NAME ${SLOT_PARAMETER} --src build.zip\n\n" ]
[ 1, 0 ]
[]
[]
[ "azure", "azure_cli", "azure_functions", "cicd", "gitlab_ci" ]
stackoverflow_0074593738_azure_azure_cli_azure_functions_cicd_gitlab_ci.txt
Q: Create a weighted graph from an adjacency matrix in graph-tool, python interface How should I create a graph using graph-tool in python, out of an adjacency matrix? Assume we have adj matrix as the adjacency matrix. What I do now is like this: g = graph_tool.Graph(directed = False) g.add_vertex(len(adj)) edge_weights = g.new_edge_property('double') for i in range(adj.shape[0]): for j in range(adj.shape[1]): if i > j and adj[i,j] != 0: e = g.add_edge(i, j) edge_weights[e] = adj[i,j] But it doesn't feel right, do we have any better solution for this? (and I guess a proper tag for this would be graph-tool, but I can't add it, some kind person with enough privileges could make the tag?) A: Graph-tool now includes a function to add a list of edges to the graph. You can now do, for instance: import graph_tool as gt import numpy as np g = gt.Graph(directed=False) adj = np.random.randint(0, 2, (100, 100)) g.add_edge_list(np.transpose(adj.nonzero())) A: this is the extension of Tiago's answer for the weighted graph: adj = numpy.random.randint(0, 10, (100, 100)) # a random directed graph idx = adj.nonzero() weights = adj[idx] g = Graph() g.add_edge_list(transpose(idx))) #add weights as an edge propetyMap ew = g.new_edge_property("double") ew.a = weights g.ep['edge_weight'] = ew A: This should be a comment to Tiago's answer, but I don't have enough reputation for that. For the latest version (2.26) of graph_tool I believe there is a missing transpose there. The i,j entry of the adjacency matrix denotes the weight of the edge going from vertex j to vertex i, so it should be g.add_edge_list(transpose(transpose(adj).nonzero())) A: import numpy as np import graph_tool.all as gt g = gt.Graph(directed=False) adj = np.tril(adj) g.add_edge_list(np.transpose(adj.nonzero())) Without np.tril the adjacency matrix will contain entries with 2s instead one 1s because every edge is counted twice. Things like gt.num_edges() will be incorrect too.
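As a follow-up to the weighted example, newer graph-tool releases let you load the weights in the same call, assuming add_edge_list accepts the eprops argument in your version; a sketch:

import numpy as np
import graph_tool.all as gt

adj = np.random.randint(0, 10, (100, 100))   # weighted adjacency matrix
i, j = adj.nonzero()
edges = np.column_stack((i, j, adj[i, j]))   # rows of (source, target, weight)

g = gt.Graph(directed=True)
weight = g.new_edge_property("double")
g.add_edge_list(edges, eprops=[weight])      # third column fills the property map
g.ep["weight"] = weight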
Create a weighted graph from an adjacency matrix in graph-tool, python interface
How should I create a graph using graph-tool in python, out of an adjacency matrix? Assume we have adj matrix as the adjacency matrix. What I do now is like this: g = graph_tool.Graph(directed = False) g.add_vertex(len(adj)) edge_weights = g.new_edge_property('double') for i in range(adj.shape[0]): for j in range(adj.shape[1]): if i > j and adj[i,j] != 0: e = g.add_edge(i, j) edge_weights[e] = adj[i,j] But it doesn't feel right, do we have any better solution for this? (and I guess a proper tag for this would be graph-tool, but I can't add it, some kind person with enough privileges could make the tag?)
[ "Graph-tool now includes a function to add a list of edges to the graph. You can now do, for instance:\nimport graph_tool as gt\nimport numpy as np\ng = gt.Graph(directed=False)\nadj = np.random.randint(0, 2, (100, 100))\ng.add_edge_list(np.transpose(adj.nonzero()))\n\n", "this is the extension of Tiago's answer for the weighted graph:\nadj = numpy.random.randint(0, 10, (100, 100)) # a random directed graph\nidx = adj.nonzero()\nweights = adj[idx]\ng = Graph()\ng.add_edge_list(transpose(idx)))\n\n#add weights as an edge propetyMap\new = g.new_edge_property(\"double\")\new.a = weights \ng.ep['edge_weight'] = ew\n\n", "This should be a comment to Tiago's answer, but I don't have enough reputation for that.\nFor the latest version (2.26) of graph_tool I believe there is a missing transpose there. The i,j entry of the adjacency matrix denotes the weight of the edge going from vertex j to vertex i, so it should be\ng.add_edge_list(transpose(transpose(adj).nonzero()))\n\n", "import numpy as np\nimport graph_tool.all as gt\n\ng = gt.Graph(directed=False)\nadj = np.tril(adj)\ng.add_edge_list(np.transpose(adj.nonzero()))\n\nWithout np.tril the adjacency matrix will contain entries with 2s instead one 1s because every edge is counted twice. Things like gt.num_edges() will be incorrect too.\n" ]
[ 13, 4, 2, 0 ]
[]
[]
[ "graph", "graph_tool", "python" ]
stackoverflow_0023288661_graph_graph_tool_python.txt
Q: How can I search MySQL via Timestamp range? I'm trying to search a table by timestamp range. select created_on from reservations limit 10; 2012-11-28 19:54:12 2012-12-03 17:19:40 2012-12-04 22:13:30 2012-12-04 22:14:04 2012-12-05 17:31:15 2012-12-05 17:39:34 2012-12-05 17:39:44 2012-12-05 18:12:27 2012-12-05 18:16:06 2012-12-07 15:03:02 select created_on from reservations where created_on > unix_timestamp('2012-11-28 19:54:00') and created_on < unix_timestamp('2012-11-28 19:55:00'); No results desc reservations; Field|Type|Null|Key|Default|Extra ... created_on,timestamp,YES,"",CURRENT_TIMESTAMP What am I doing wrong? A: please use DATE_FORMAT instead of unix_timestamp select created_on from reservations where created_on BETWEEN DATE_FORMAT('2012-11-28','%y-%m-%d') AND DATE_FORMAT('2012-11-28','%y-%m-%d');
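A hedged note on the original query, separate from the DATE_FORMAT answer: created_on is already a TIMESTAMP column, so it can be compared directly against datetime string literals; wrapping the boundaries in UNIX_TIMESTAMP() produces epoch seconds, which MySQL does not interpret as the intended datetimes, hence the empty result. A sketch of the range query:

SELECT created_on
FROM reservations
WHERE created_on >= '2012-11-28 19:54:00'
  AND created_on <  '2012-11-28 19:55:00';

-- or, inclusive on both ends:
SELECT created_on
FROM reservations
WHERE created_on BETWEEN '2012-11-28 19:54:00' AND '2012-11-28 19:55:00';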
How can I search MySQL via Timestamp range?
I'm trying to search a table by timestamp range. select created_on from reservations limit 10; 2012-11-28 19:54:12 2012-12-03 17:19:40 2012-12-04 22:13:30 2012-12-04 22:14:04 2012-12-05 17:31:15 2012-12-05 17:39:34 2012-12-05 17:39:44 2012-12-05 18:12:27 2012-12-05 18:16:06 2012-12-07 15:03:02 select created_on from reservations where created_on > unix_timestamp('2012-11-28 19:54:00') and created_on < unix_timestamp('2012-11-28 19:55:00'); No results desc reservations; Field|Type|Null|Key|Default|Extra ... created_on,timestamp,YES,"",CURRENT_TIMESTAMP What am I doing wrong?
[ "please use DATE_FORMAT instead of unix_timestamp\nselect created_on \nfrom reservations \nwhere created_on BETWEEN DATE_FORMAT('2012-11-28','%y-%m-%d') AND DATE_FORMAT('2012-11-28','%y-%m-%d');\n\n" ]
[ 0 ]
[]
[]
[ "mysql" ]
stackoverflow_0074669217_mysql.txt
Q: Trying to create a responsive quiz using JavaScript, but quiz is not responsive I'm trying to create a trivia. When a user clicks on an option (a button) within the MCQ portion, i want the site to respond immediately, by responding to the user's click (by changing color either to red or green depending on whether the answer is correct). It will also reset color of all buttons to its original color when the user clicks a button, before responding with the respective response of the button that was clicked I have tried looking through the code but cannot determine my error - whether its a syntax one or a problem with the logic of my code. Appreciate it Here is the code: <!DOCTYPE html> <html lang="en"> <head> <link href="https://fonts.googleapis.com/css2?family=Montserrat:wght@500&display=swap" rel="stylesheet"> <link href="styles.css" rel="stylesheet"> <title>Trivia!</title> </head> <body> <div class="header"> <h1>Trivia!</h1> </div> <script> // Run script once DOM is loaded document.addEventListener('DOMContentLoaded', function() { // When correct button is clicked, turns green let correct = document.querySelectorAll('.correct'); let feedbackmcq = document.querySelector('.feedbackmcq'); let feedbackfrq = document.querySelector('.feedbackfrq'); let wrongs = document.querySelectorAll('.wrong'); // Reset colors of the button (1st question) function resetColor() { for (let i = 0; i < correct.length; i++) { correct[i].style.backgroundColor = '#d9edff'; for (let i = 0; i < wrongs.length; i++) { wrongs[i].style.backgroundColor = '#d9edff'; } } // When correct button is clicked, turns green for (let i = 0; i < correct.length; i++) { correct[i].addEventListener('click', function() { resetColor(); correct[i].style.backgroundColor = 'green'; }); } // When wrong button is clicked, turns red for (let i = 0; i < wrongs.length; i++) { wrongs[i].addEventListener('click', function() { resetColor(); wrongs[i].style.backgroundColor = '#red'; }); }} }); </script> <div class="container"> <div class="section"> <h2>Part 1: Multiple Choice </h2> <hr> <div> <h3>Who sent you this link?</h3> <button class="correct">Jonathan</button> <button class="wrong">Thomas</button> <button class="wrong">Kiara</button> <button class="wrong">James</button> <p class="feedbackmcq" id="feedback1"></p> </div> <div> <h3>Who created this form?</h3> <button class="wrong">Joel</button> <button class="correct">Jonathan</button> <button class="wrong">Kiara</button> <button class="wrong">James</button> <p class="feedbackmcq" id="feedback2"></p> </div> </div> </div> </body> </html> After creating my script, I checked my syntax to ensure that there was not any semicolon / spelling error. I also thought through the logic, but it really seems logical to me. Here is my pseudocode: For loop looping through all correct buttons, constantly checking for a click on any one of its buttons. If a click is identified, colors of button is resetted Then, I will change the style of the respective button that was clicked (same for the wrong buttons when pressed) A: The approach your taking is incorrect - don't set eventListeners on the fly based on conditions - rather set them up in the beginning and then use logic to determine what they should or shouldn't do. use a single loop on all buttons when they're clicked, test if they are 'correct' or not based on the data-is property (You had these as classes, but really they are more appropriate as datasets Then we remove any right/wrong classes that may have been applied from that group of buttons. 
We do that using closest() and querySelectorAll() together finally, we apply the correct/incorrect class based on the dataset of the clicked button you had styles being applied directly, but using classes is a better, more extendable and easier to maintain approach. ** Further help with the code: buttons.forEach(b => b.addEventListener('click', e => { Here, we are looping through buttons which we already defined as 'all the buttons on the page' with buttons = document.querySelectorAll('button'). This is arrow function syntax, which isn't too important here. For each button (assigned as b in the loop) we add an event listener which also uses arrow syntax. Again, not important here but I find it easier to read and code. The event it is listening to is click and the task it will perform on click is (as you mentioned) an anonymous function. Unless you need to reuse that code in other parts of the site, it's fine to leave it as an anonymous function. e.target.closest('div').querySelectorAll('button') which says 'Find the closest containing <div> tag to event.target (the button that was clicked) - and then find all the <button> elements inside that div. That becomes an HTML collection that we can iterate through with forEach like before. Then you'll see ....forEach(bb => bb.classList.remove("isCorrect", "isWrong")). Again, the bb inside is a placeholder for the thing you're iterating through. In this case each of the buttons inside the div. At that point, we are just adding or removing CSS classes for the effect // Run script once DOM is loaded document.addEventListener('DOMContentLoaded', function() { let buttons = document.querySelectorAll('button'); buttons.forEach(b => b.addEventListener('click', e => { // first lets clear out any existing colorations e.target.closest('div').querySelectorAll('button').forEach(bb => bb.classList.remove("isCorrect", "isWrong")) if (e.target.dataset.is=='correct') e.target.classList.add('isCorrect') else e.target.classList.add('isWrong'); })) }); button { background: #d9edff; } button.isCorrect { background: green; color: #fff; } button.isWrong { background: red; color: #fff; } <link href="https://fonts.googleapis.com/css2?family=Montserrat:wght@500&display=swap" rel="stylesheet"> <link href="styles.css" rel="stylesheet"> <div class="header"> <h1>Trivia!</h1> </div> <div class="container"> <div class="section"> <h2>Part 1: Multiple Choice </h2> <hr> <div> <h3>Who sent you this link?</h3> <button data-is="correct">Jonathan</button> <button data-is="wrong">Thomas</button> <button data-is="wrong">Kiara</button> <button data-is="wrong">James</button> <p class="feedbackmcq" id="feedback1"></p> </div> <div> <h3>Who created this form?</h3> <button data-is="wrong">Joel</button> <button data-is="correct">Jonathan</button> <button data-is="wrong">Kiara</button> <button data-is="wrong">James</button> <p class="feedbackmcq" id="feedback2"></p> </div> </div> </div>
Trying to create a responsive quiz using JavaScript, but quiz is not responsive
I'm trying to create a trivia. When a user clicks on an option (a button) within the MCQ portion, i want the site to respond immediately, by responding to the user's click (by changing color either to red or green depending on whether the answer is correct). It will also reset color of all buttons to its original color when the user clicks a button, before responding with the respective response of the button that was clicked I have tried looking through the code but cannot determine my error - whether its a syntax one or a problem with the logic of my code. Appreciate it Here is the code: <!DOCTYPE html> <html lang="en"> <head> <link href="https://fonts.googleapis.com/css2?family=Montserrat:wght@500&display=swap" rel="stylesheet"> <link href="styles.css" rel="stylesheet"> <title>Trivia!</title> </head> <body> <div class="header"> <h1>Trivia!</h1> </div> <script> // Run script once DOM is loaded document.addEventListener('DOMContentLoaded', function() { // When correct button is clicked, turns green let correct = document.querySelectorAll('.correct'); let feedbackmcq = document.querySelector('.feedbackmcq'); let feedbackfrq = document.querySelector('.feedbackfrq'); let wrongs = document.querySelectorAll('.wrong'); // Reset colors of the button (1st question) function resetColor() { for (let i = 0; i < correct.length; i++) { correct[i].style.backgroundColor = '#d9edff'; for (let i = 0; i < wrongs.length; i++) { wrongs[i].style.backgroundColor = '#d9edff'; } } // When correct button is clicked, turns green for (let i = 0; i < correct.length; i++) { correct[i].addEventListener('click', function() { resetColor(); correct[i].style.backgroundColor = 'green'; }); } // When wrong button is clicked, turns red for (let i = 0; i < wrongs.length; i++) { wrongs[i].addEventListener('click', function() { resetColor(); wrongs[i].style.backgroundColor = '#red'; }); }} }); </script> <div class="container"> <div class="section"> <h2>Part 1: Multiple Choice </h2> <hr> <div> <h3>Who sent you this link?</h3> <button class="correct">Jonathan</button> <button class="wrong">Thomas</button> <button class="wrong">Kiara</button> <button class="wrong">James</button> <p class="feedbackmcq" id="feedback1"></p> </div> <div> <h3>Who created this form?</h3> <button class="wrong">Joel</button> <button class="correct">Jonathan</button> <button class="wrong">Kiara</button> <button class="wrong">James</button> <p class="feedbackmcq" id="feedback2"></p> </div> </div> </div> </body> </html> After creating my script, I checked my syntax to ensure that there was not any semicolon / spelling error. I also thought through the logic, but it really seems logical to me. Here is my pseudocode: For loop looping through all correct buttons, constantly checking for a click on any one of its buttons. If a click is identified, colors of button is resetted Then, I will change the style of the respective button that was clicked (same for the wrong buttons when pressed)
[ "The approach your taking is incorrect - don't set eventListeners on the fly based on conditions - rather set them up in the beginning and then use logic to determine what they should or shouldn't do.\n\nuse a single loop on all buttons\nwhen they're clicked, test if they are 'correct' or not based on the data-is property (You had these as classes, but really they are more appropriate as datasets\nThen we remove any right/wrong classes that may have been applied from that group of buttons. We do that using closest() and querySelectorAll() together\nfinally, we apply the correct/incorrect class based on the dataset of the clicked button\nyou had styles being applied directly, but using classes is a better, more extendable and easier to maintain approach.\n\n** Further help with the code:\nbuttons.forEach(b => b.addEventListener('click', e => {\n\nHere, we are looping through buttons which we already defined as 'all the buttons on the page' with buttons = document.querySelectorAll('button'). This is arrow function syntax, which isn't too important here. For each button (assigned as b in the loop) we add an event listener which also uses arrow syntax. Again, not important here but I find it easier to read and code. The event it is listening to is click and the task it will perform on click is (as you mentioned) an anonymous function. Unless you need to reuse that code in other parts of the site, it's fine to leave it as an anonymous function.\ne.target.closest('div').querySelectorAll('button')\n\nwhich says 'Find the closest containing <div> tag to event.target (the button that was clicked) - and then find all the <button> elements inside that div. That becomes an HTML collection that we can iterate through with forEach like before.\nThen you'll see ....forEach(bb => bb.classList.remove(\"isCorrect\", \"isWrong\")). Again, the bb inside is a placeholder for the thing you're iterating through. 
In this case each of the buttons inside the div.\nAt that point, we are just adding or removing CSS classes for the effect\n\n\n// Run script once DOM is loaded\ndocument.addEventListener('DOMContentLoaded', function() {\n let buttons = document.querySelectorAll('button');\n buttons.forEach(b => b.addEventListener('click', e => {\n // first lets clear out any existing colorations\n e.target.closest('div').querySelectorAll('button').forEach(bb => bb.classList.remove(\"isCorrect\", \"isWrong\"))\n if (e.target.dataset.is=='correct') e.target.classList.add('isCorrect')\n else e.target.classList.add('isWrong');\n }))\n});\nbutton {\n background: #d9edff;\n}\n\nbutton.isCorrect {\n background: green;\n color: #fff;\n}\n\nbutton.isWrong {\n background: red;\n color: #fff;\n}\n<link href=\"https://fonts.googleapis.com/css2?family=Montserrat:wght@500&display=swap\" rel=\"stylesheet\">\n<link href=\"styles.css\" rel=\"stylesheet\">\n<div class=\"header\">\n <h1>Trivia!</h1>\n</div>\n<div class=\"container\">\n <div class=\"section\">\n <h2>Part 1: Multiple Choice </h2>\n <hr>\n <div>\n <h3>Who sent you this link?</h3>\n <button data-is=\"correct\">Jonathan</button>\n <button data-is=\"wrong\">Thomas</button>\n <button data-is=\"wrong\">Kiara</button>\n <button data-is=\"wrong\">James</button>\n <p class=\"feedbackmcq\" id=\"feedback1\"></p>\n </div>\n <div>\n <h3>Who created this form?</h3>\n <button data-is=\"wrong\">Joel</button>\n <button data-is=\"correct\">Jonathan</button>\n <button data-is=\"wrong\">Kiara</button>\n <button data-is=\"wrong\">James</button>\n <p class=\"feedbackmcq\" id=\"feedback2\"></p>\n </div>\n </div>\n</div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "button", "html", "javascript" ]
stackoverflow_0074669112_button_html_javascript.txt
Q: Converting dgCMatrix to data frame and getting error in R I am trying to convert a matrix to a data frame using as.data.frame() and I am getting an error: Error in as.data.frame.default(coef) : cannot coerce class ‘structure("dgCMatrix", package = "Matrix")’ to a data.frame Is there a simple solution to this? Matrix class: class(coef) [1] "dgCMatrix" attr(,"package") [1] "Matrix" A: First convert to regular matrix and then use as.data.frame library(Matrix) as.data.frame.matrix(Matrix(0, 3, 2)) -output V1 V2 1 0 0 2 0 0 3 0 0 Instead of > as.data.frame(Matrix(0, 3, 2)) Error in as.data.frame.default(Matrix(0, 3, 2)) : cannot coerce class ‘structure("dgCMatrix", package = "Matrix")’ to a data.frame
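If the object really is a sparse dgCMatrix (as in the question), another route, sketched here with a made-up matrix, is to densify it first with as.matrix() and then build the data frame. Keep in mind that densifying a very large sparse matrix can be memory hungry.

library(Matrix)

m <- Matrix(c(0, 1, 0, 2, 0, 3), nrow = 3, sparse = TRUE)  # a dgCMatrix
df <- as.data.frame(as.matrix(m))   # densify, then coerce to data.frame
str(df)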
Converting dgCMatrix to data frame and getting error in R
I am trying to convert a matrix to a data frame using as.data.frame() and I am getting an error: Error in as.data.frame.default(coef) : cannot coerce class ‘structure("dgCMatrix", package = "Matrix")’ to a data.frame Is there a simple solution to this? Matrix class: class(coef) [1] "dgCMatrix" attr(,"package") [1] "Matrix"
[ "First convert to regular matrix and then use as.data.frame\nlibrary(Matrix)\nas.data.frame.matrix(Matrix(0, 3, 2))\n\n-output\n V1 V2\n1 0 0\n2 0 0\n3 0 0\n\nInstead of\n> as.data.frame(Matrix(0, 3, 2))\nError in as.data.frame.default(Matrix(0, 3, 2)) : \n cannot coerce class ‘structure(\"dgCMatrix\", package = \"Matrix\")’ to a data.frame\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "matrix", "r" ]
stackoverflow_0074669286_dataframe_matrix_r.txt
Q: Append row to DataFrame in Pandas and putting it on bottom I want to add a row to a multi-index dataframe and I want to group it in its outer index where the alphabetical order is important, i.e, I can't use df.sort_index(). Here is the problem. Code: import pandas as pd import numpy as np categories = {"A":["c", "b", "a"] , "B": ["a", "b", "c"], "C": ["a", "b", "d"] } array = [] expected_fields = [] for key, value in categories.items(): array.extend([key]* len(value)) expected_fields.extend(value) arrays = [array ,expected_fields] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples) df = pd.Series(np.random.randn(9), index=index) df["A", "d"] = 2 print(df) Output: A c 0.887137 b -0.105262 a -0.180093 B a -0.687134 b -1.120895 c 2.398962 C a -2.226126 b -0.203238 d 0.036068 A d 2.000000 <------------ dtype: float64 Expected output: A c 0.887137 b -0.105262 a -0.180093 d 2.000000 <-------------- B a -0.687134 b -1.120895 c 2.398962 C a -2.226126 b -0.203238 d 0.036068 dtype: float64 A: df.loc[['A', 'B', 'C']] output: A c 0.887137 b -0.105262 a -0.180093 d 2.000000 B a -0.687134 b -1.120895 c 2.398962 C a -2.226126 b -0.203238 d 0.036068 dtype: float64 if you want get ['A', 'B', 'C'] by code, use following idx0 = df.index.get_level_values(0).unique() df.loc[idx0] same result
Append row to DataFrame in Pandas and putting it on bottom
I want to add a row to a multi-index dataframe and I want to group it in its outer index where the alphabetical order is important, i.e, I can't use df.sort_index(). Here is the problem. Code: import pandas as pd import numpy as np categories = {"A":["c", "b", "a"] , "B": ["a", "b", "c"], "C": ["a", "b", "d"] } array = [] expected_fields = [] for key, value in categories.items(): array.extend([key]* len(value)) expected_fields.extend(value) arrays = [array ,expected_fields] tuples = list(zip(*arrays)) index = pd.MultiIndex.from_tuples(tuples) df = pd.Series(np.random.randn(9), index=index) df["A", "d"] = 2 print(df) Output: A c 0.887137 b -0.105262 a -0.180093 B a -0.687134 b -1.120895 c 2.398962 C a -2.226126 b -0.203238 d 0.036068 A d 2.000000 <------------ dtype: float64 Expected output: A c 0.887137 b -0.105262 a -0.180093 d 2.000000 <-------------- B a -0.687134 b -1.120895 c 2.398962 C a -2.226126 b -0.203238 d 0.036068 dtype: float64
[ "df.loc[['A', 'B', 'C']]\n\noutput:\nA c 0.887137\n b -0.105262\n a -0.180093\n d 2.000000\nB a -0.687134\n b -1.120895\n c 2.398962\nC a -2.226126\n b -0.203238\n d 0.036068\ndtype: float64\n\nif you want get ['A', 'B', 'C'] by code, use following\nidx0 = df.index.get_level_values(0).unique()\ndf.loc[idx0]\n\nsame result\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python", "python_2.7", "python_3.x" ]
stackoverflow_0074669264_dataframe_pandas_python_python_2.7_python_3.x.txt
Q: How can I sort an array by date I have a multidimensional array and I would like to order it by expiryDate ascending and keep the ones with a null value last. Here is my array: $lessons = [ [ "expiryDate" => Null ], [ "expiryDate" => "2023-11-27" ], [ "expiryDate" => "2022-11-27" ] ]; What I tried: $price = array_column($lessons, 'expiryDate'); array_multisort($price, SORT_ASC, $lessons); It's working fine, but I don't know how I can move the ones with a null expiryDate last. The expected result: $lessons = [ [ "expiryDate" => "2022-11-27" ], [ "expiryDate" => "2023-11-27" ], [ "expiryDate" => Null ] ];
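One possible approach (a sketch; it needs PHP 7.4+ for the arrow function syntax): sort with usort and compare [isNull, date] pairs so that null expiry dates go last, while the ISO-formatted dates compare correctly as plain strings.

// false sorts before true, so rows with a real date come first;
// equal flags fall through to the string comparison of the dates.
usort($lessons, fn ($a, $b) =>
    [$a['expiryDate'] === null, $a['expiryDate']]
    <=> [$b['expiryDate'] === null, $b['expiryDate']]
);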
How I can sort an array by date
I have a dimensional array and I would like to order it by expiryDate ascending and keep the ones with the null value last here is my array: $lessons = [ [ "expiryDate" => Null ], [ "expiryDate" => "2023-11-27" ], [ "expiryDate" => "2022-11-27" ] ]; What I tried : $price = array_column($lessons, 'expiryDate'); array_multisort($price, SORT_ASC, $lessons); Its working fine, but I don't know how I can move the ones with expiryDate last The expected Result : $lessons = [ [ "expiryDate" => "2022-11-27" ], [ "expiryDate" => "2023-11-27" ], [ "expiryDate" => Null ] ];
[ "Normally in a comparison function NULL will sort as less than all other values [except 0 for some reason] so if you want it to sort the other way we need to write our own comparison function.\nThe Cliff's Notes version is that a compare function for (a,b) should return a value less than zero for a<b, zero for a==b, and greater than zero for a>b. There is a simple shorthand for this in PHP 7+, which is <=>.\n$lessons = [\n [ \"expiryDate\" => null ],\n [ \"expiryDate\" => \"2023-11-27\" ],\n [ \"expiryDate\" => \"2022-11-27\" ]\n];\n\nfunction compareWithNullGreater($a, $b) {\n if( is_null($a) && is_null($b)) {\n return 0;\n } else if( is_null($a) ) {\n return 1;\n } else if( is_null($b) ) {\n return -1;\n } else {\n return $a <=> $b;\n }\n}\n\nusort(\n $lessons,\n function($a, $b) { return compareWithNullGreater($a['expiryDate'], $b['expiryDate']); }\n);\n\nvar_dump($lessons);\n\nOutput:\narray(3) {\n [0]=>\n array(1) {\n [\"expiryDate\"]=>\n string(10) \"2022-11-27\"\n }\n [1]=>\n array(1) {\n [\"expiryDate\"]=>\n string(10) \"2023-11-27\"\n }\n [2]=>\n array(1) {\n [\"expiryDate\"]=>\n NULL\n }\n}\n\nRef: https://www.php.net/manual/en/function.usort\n", "public function sortArray($orderBy = SORT_ASC): array\n {\n \n\n $sortByASC = array_column($lessons, 'expiryDate');\n array_multisort($sortByASC, $orderBy, $lessons);\n\n foreach ($lessons as $key => $value)\n {\n if ($value['expiryDate'] === null)\n {\n unset($lessons[$key]);\n $x[] = $value;\n }\n }\n\n return array_values($lessons);\n\n }\n\n" ]
[ 0, 0 ]
[]
[]
[ "php" ]
stackoverflow_0074661100_php.txt
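For the PHP sorting question above, a shorter variant of the usort idea from the first answer, for PHP 7.4+ only (arrow functions); it relies on the spaceship operator comparing arrays element by element, so rows are ordered first by "is the date null" and then by the date string itself, which works because the dates are in Y-m-d format:

usort($lessons, fn($a, $b) =>
    [$a['expiryDate'] === null, $a['expiryDate']]
    <=>
    [$b['expiryDate'] === null, $b['expiryDate']]
);
// Non-null dates come first (false < true), then ascending by date; nulls end up last.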
Q: How to transfer data from another file into json file? I have a json file similar to the following structure. { "anakategori": [ { "name": "Kadın", "url": "example.com/kadin", "img": "..." }, { "name": "Erkek", "url": "example.com/erkek", "img": "..." }, { "name": "Ayakkabı &Çanta", "url": "example.com/ayakkabi-canta", "img": "..." }, { "name": "Anne &Bebek", "url": "example.com/anne-bebek", "img": "..." }, { "name": "Elektronik", "url": "example.com/elektronik", "img": "..." }, { "name": "Ev &Yaşam", "url": "example.com/ev-yasam", "img": "..." }, { "name": "Kozmetik", "url": "example.com/kozmetik-kisisel-bakim", "img": "..." }, { "name": "Saat &Aksesuar", "url": "example.com/saat-aksesuar", "img": "..." }, { "name": "Kitap,Müzik,Film,Oyun", "url": "example.com/kitap-muzik-film-oyun", "img": "..." }, { "name": "Bahçe,Yapı Market,Oto", "url": "example.com/bahce-yapi-market,-oto", "img": "..." }, { "name": "Çok Satan Ürünler", "url": "example.com/cok-satan-urunler", "img": "..." }, { "name": "İndirimli Ürünler", "url": "example.com/indirimli-urunler", "img": "..." }, { "name": "En Çok Görüntülenenler", "url": "example.com/en-cok-goruntulenenler", "img": "..." }, { "name": "Markalar", "url": "example.com/markalar", "img": "..." } ] } In this file, I want to automatically fill the "img": "..." field with the url sorted in the order of the above. The urls in the file are like this: example.com/category/kadin.png example.com/category/erkek.png example.com/category/ayakkabicanta.png Thanks in advance for your help. I got the urls in order with regex from the json file and I got the img urls from the metadata, now how can I add them to this file in order? A: When you have stored your file in movies.json, you can do this: $ jq jq '.["anakategori"][0]' movies.json { "name": "Kadın", "url": "example.com/kadin", "img": "..." } $ jq '.["anakategori"][0]["img"]' movies.json "..." and to update this value (inplace, using the Q/A Modify a key-value in a json using jq in-place): $ jq '.["anakategori"][0]["img"] = "test"' movies.json >abc && mv abc movies.json jq '.["anakategori"][0]' movies.json { "name": "Kadın", "url": "example.com/kadin", "img": "test" } Now you see the url repleace by the text "test". The 0 in above examples refers to the first occurrence of a movie in your movies.json file. This should help you get started, the rest of the things needed can be found on Stackoverflow too, like Looping through the content of a file in Bash
How to transfer data from another file into json file?
I have a json file similar to the following structure. { "anakategori": [ { "name": "Kadın", "url": "example.com/kadin", "img": "..." }, { "name": "Erkek", "url": "example.com/erkek", "img": "..." }, { "name": "Ayakkabı &Çanta", "url": "example.com/ayakkabi-canta", "img": "..." }, { "name": "Anne &Bebek", "url": "example.com/anne-bebek", "img": "..." }, { "name": "Elektronik", "url": "example.com/elektronik", "img": "..." }, { "name": "Ev &Yaşam", "url": "example.com/ev-yasam", "img": "..." }, { "name": "Kozmetik", "url": "example.com/kozmetik-kisisel-bakim", "img": "..." }, { "name": "Saat &Aksesuar", "url": "example.com/saat-aksesuar", "img": "..." }, { "name": "Kitap,Müzik,Film,Oyun", "url": "example.com/kitap-muzik-film-oyun", "img": "..." }, { "name": "Bahçe,Yapı Market,Oto", "url": "example.com/bahce-yapi-market,-oto", "img": "..." }, { "name": "Çok Satan Ürünler", "url": "example.com/cok-satan-urunler", "img": "..." }, { "name": "İndirimli Ürünler", "url": "example.com/indirimli-urunler", "img": "..." }, { "name": "En Çok Görüntülenenler", "url": "example.com/en-cok-goruntulenenler", "img": "..." }, { "name": "Markalar", "url": "example.com/markalar", "img": "..." } ] } In this file, I want to automatically fill the "img": "..." field with the url sorted in the order of the above. The urls in the file are like this: example.com/category/kadin.png example.com/category/erkek.png example.com/category/ayakkabicanta.png Thanks in advance for your help. I got the urls in order with regex from the json file and I got the img urls from the metadata, now how can I add them to this file in order?
[ "When you have stored your file in movies.json, you can do this:\n$ jq jq '.[\"anakategori\"][0]' movies.json\n{\n \"name\": \"Kadın\",\n \"url\": \"example.com/kadin\",\n \"img\": \"...\"\n}\n$ jq '.[\"anakategori\"][0][\"img\"]' movies.json\n\"...\"\n\nand to update this value (inplace, using the Q/A Modify a key-value in a json using jq in-place):\n$ jq '.[\"anakategori\"][0][\"img\"] = \"test\"' movies.json >abc && mv abc movies.json\njq '.[\"anakategori\"][0]' movies.json\n{\n \"name\": \"Kadın\",\n \"url\": \"example.com/kadin\",\n \"img\": \"test\"\n}\n\nNow you see the url repleace by the text \"test\".\nThe 0 in above examples refers to the first occurrence of a movie in your movies.json file.\nThis should help you get started, the rest of the things needed can be found on Stackoverflow too, like Looping through the content of a file in Bash\n" ]
[ 0 ]
[]
[]
[ "json" ]
stackoverflow_0074668504_json.txt
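The answer to the JSON question above shows how to patch one entry with jq; if a small script is an option, the ordered fill the asker describes can also be done in a few lines of Python. This is only a sketch — the file name and the URL list are assumptions, and the URLs must already be in the same order as the "anakategori" entries:

import json

img_urls = [
    "example.com/category/kadin.png",
    "example.com/category/erkek.png",
    "example.com/category/ayakkabicanta.png",
    # ... one URL per category, in the same order as the JSON file
]

with open("kategoriler.json", encoding="utf-8") as f:
    data = json.load(f)

# Walk the categories and the URLs side by side and fill each "img" field
for category, url in zip(data["anakategori"], img_urls):
    category["img"] = url

with open("kategoriler.json", "w", encoding="utf-8") as f:
    json.dump(data, f, ensure_ascii=False, indent=2)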
Q: How to apply function to entire dataset in R home_last3 = function(team,game){ g1 = filter(g, Team == team & Game > game) g1 = head(g1, n=3) return(sum(g1$goals)) } print(home_last3("Arsenal","5")) [1] 11 So I have this function above. The idea is that in my dataset of all Premier League matches, I want to know how many goals the home team scored in their previous 3 matches. That dataframe g has all the games and goals scored for each team listed. So my function does exactly what I want it to when I plug stuff in manually. However, I want to apply this function to a new column (Home_Last_3) in my original dataset, where for any given match it can look at the name of the home team (col = "HomeTeam") and the number of the game (col="Game") and spit out that number. However, when I inputed this: df$Home_Last_3 = home_last3(df$HomeTeam,df$Game) It just spit back zeros in all columns. I think it has something to do with using as.data.frame, or sapply (but sapply got mad at me for having that second argument in there). A: Try mapply which is a multivariate version of sapply. If I understand your function correctly this should work: df$Home_Last_3 = mapply(FUN=home_last3, df$HomeTeam, df$Game)
How to apply function to entire dataset in R
home_last3 = function(team,game){ g1 = filter(g, Team == team & Game > game) g1 = head(g1, n=3) return(sum(g1$goals)) } print(home_last3("Arsenal","5")) [1] 11 So I have this function above. The idea is that in my dataset of all Premier League matches, I want to know how many goals the home team scored in their previous 3 matches. That dataframe g has all the games and goals scored for each team listed. So my function does exactly what I want it to when I plug stuff in manually. However, I want to apply this function to a new column (Home_Last_3) in my original dataset, where for any given match it can look at the name of the home team (col = "HomeTeam") and the number of the game (col="Game") and spit out that number. However, when I inputed this: df$Home_Last_3 = home_last3(df$HomeTeam,df$Game) It just spit back zeros in all columns. I think it has something to do with using as.data.frame, or sapply (but sapply got mad at me for having that second argument in there).
[ "Try mapply which is a multivariate version of sapply. If I understand your function correctly this should work:\ndf$Home_Last_3 = mapply(FUN=home_last3, df$HomeTeam, df$Game)\n\n" ]
[ 0 ]
[]
[]
[ "function", "r" ]
stackoverflow_0074668344_function_r.txt
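For the R question above, the same thing can be written with base R's Vectorize(), which is just a thin wrapper around mapply(); shown here only as an equivalent sketch, assuming the df, g and home_last3 from the question:

# Vectorize() turns home_last3 into a function that maps over whole columns
home_last3_vec <- Vectorize(home_last3)
df$Home_Last_3 <- home_last3_vec(df$HomeTeam, df$Game)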
Q: Selecting multiple random objects from an html database (Entity Framework) I need to select 3 random events from the database by a certain parameter (their type). How can I do this? Right now, I use foreach to select everything, but I need to select 3 objects and select them randomly. @foreach (Event entity in Model) { @if (entity.Type=="Концерт") { <img class="slider" src="~/images/@entity.TitleImagePath"/> } } A: Use the Random class to generate random indexes and select random items from the list of objects: public ActionResult Test() { // Prepare a random list var list = new List<Event>(); var cnt = db.Events.Count(); var entities = db.Events.Where(e => e.Type=="something")).ToList(); var rnd = new Random(); for (int i=0; i < 3 && cnt > 0; i++) list.Add(entities[rnd.Next(cnt)]); // Render the view return View(list); } And then enumerate the prepared list in the view: @model List<Event> @foreach (Event entity in Model) { <img class="slider" src="~/images/@entity.TitleImagePath"/> } A: Try this way var result = await this.context.Events.Where(x => x.Type == "Концерт").OrderBy(x => Guid.NewGuid()).Take(3).ToListAsync();
Selecting multiple random objects from an html database (Entity Framework)
I need to select 3 random events from the database by a certain parameter (their type). How can I do this? Right now, I use foreach to select everything, but I need to select 3 objects and select them randomly. @foreach (Event entity in Model) { @if (entity.Type=="Концерт") { <img class="slider" src="~/images/@entity.TitleImagePath"/> } }
[ "Use the Random class to generate random indexes and select random items from the list of objects:\npublic ActionResult Test()\n{\n // Prepare a random list\n var list = new List<Event>();\n var cnt = db.Events.Count();\n var entities = db.Events.Where(e => e.Type==\"something\")).ToList();\n var rnd = new Random();\n for (int i=0; i < 3 && cnt > 0; i++) list.Add(entities[rnd.Next(cnt)]);\n // Render the view\n return View(list); \n}\n\nAnd then enumerate the prepared list in the view:\n@model List<Event>\n@foreach (Event entity in Model)\n{ \n <img class=\"slider\" src=\"~/images/@entity.TitleImagePath\"/> \n}\n\n", "Try this way\nvar result = await this.context.Events.Where(x => x.Type == \"Концерт\").OrderBy(x => Guid.NewGuid()).Take(3).ToListAsync();\n\n" ]
[ 0, 0 ]
[]
[]
[ "asp.net_mvc", "c#", "entity_framework", "foreach", "html" ]
stackoverflow_0074668849_asp.net_mvc_c#_entity_framework_foreach_html.txt
Q: How is HTML structured, compared to WPF? I'm getting more into learning HTML but i'm stuck not knowing nor understanding how it lays elements out. I've used WPF quite a lot, and know that this code here: <Grid> <Button HorizontalAlignment="Left" Width="200" Content="A"/> <Grid HorizontalAlignment="Stretch" Margin="200,0,0,0"> <Button Content="B" HorizontalAlignment="Stretch" VerticalAlignment="Top" Height="150"/> <Button Content="C" HorizontalAlignment="Left" Margin="0,150,0,0" Width="200"/> <Button Content="D" Margin="200,150,0,0"/> </Grid> </Grid> Will result in a UI looking something like: But i have no idea how to replicate something like this in HTML, where A's width is constantly 200 (could be pixels or dp or a more standard unit for web design) but is latched to the left, B is latched to the top constantly 150 pixels, but both B and D are resized when the web browser/view is resized (as their horizontal alignment is stretch) From what i've learned online, HTML is structured similarly to a stack panel so I just don't get how you get that horizontally stretching behaviour A: As I said, flexbox is the tool of choice for implementing such things: .a, .b, .c, .d { outline: 1px solid white; background: black; color: white; text-align: center; } .content { /* content div is now a flexbox */ display: flex; /* div is size of screen for demonstration */ width: 100vw; height: 100vh; } .sub-content { /* set width to remaining width not taken up by .a */ width: calc(100% - 200px); /* set height to 100% of parent height */ height: 100%; } .sub-sub-content { display: flex; /* set width to 100% of parent width */ width: 100%; /* set height to remaining height not taken up by .b */ height: calc(100% - 150px); } .a { width: 200px; height: 100%; } /* b, c, and d grow but are b is constant height */ .b { flex: 1; height: 150px; } .c, .d { flex: 1; } <div class="content"> <div class="a">A</div> <div class="sub-content"> <div class="b">B</div> <div class="sub-sub-content"> <div class="c">C</div> <div class="d">D</div> </div> </div> </div>
How is HTML structured, compared to WPF?
I'm getting more into learning HTML but i'm stuck not knowing nor understanding how it lays elements out. I've used WPF quite a lot, and know that this code here: <Grid> <Button HorizontalAlignment="Left" Width="200" Content="A"/> <Grid HorizontalAlignment="Stretch" Margin="200,0,0,0"> <Button Content="B" HorizontalAlignment="Stretch" VerticalAlignment="Top" Height="150"/> <Button Content="C" HorizontalAlignment="Left" Margin="0,150,0,0" Width="200"/> <Button Content="D" Margin="200,150,0,0"/> </Grid> </Grid> Will result in a UI looking something like: But i have no idea how to replicate something like this in HTML, where A's width is constantly 200 (could be pixels or dp or a more standard unit for web design) but is latched to the left, B is latched to the top constantly 150 pixels, but both B and D are resized when the web browser/view is resized (as their horizontal alignment is stretch) From what i've learned online, HTML is structured similarly to a stack panel so I just don't get how you get that horizontally stretching behaviour
[ "As I said, flexbox is the tool of choice for implementing such things:\n\n\n.a,\n.b,\n.c,\n.d {\n outline: 1px solid white;\n background: black;\n color: white;\n text-align: center;\n}\n\n.content {\n /* content div is now a flexbox */\n display: flex;\n /* div is size of screen for demonstration */\n width: 100vw;\n height: 100vh;\n}\n\n.sub-content {\n /* set width to remaining width not taken up by .a */\n width: calc(100% - 200px);\n /* set height to 100% of parent height */\n height: 100%;\n}\n\n.sub-sub-content {\n display: flex;\n /* set width to 100% of parent width */\n width: 100%;\n /* set height to remaining height not taken up by .b */\n height: calc(100% - 150px);\n}\n\n.a {\n width: 200px;\n height: 100%;\n}\n\n\n/* b, c, and d grow but are b is constant height */\n\n.b {\n flex: 1;\n height: 150px;\n}\n\n.c,\n.d {\n flex: 1;\n}\n<div class=\"content\">\n <div class=\"a\">A</div>\n <div class=\"sub-content\">\n <div class=\"b\">B</div>\n <div class=\"sub-sub-content\">\n <div class=\"c\">C</div>\n <div class=\"d\">D</div>\n </div>\n </div>\n</div>\n\n\n\n" ]
[ 1 ]
[]
[]
[ "css", "html", "wpf" ]
stackoverflow_0074668866_css_html_wpf.txt
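Since the XAML in the question above is literally a Grid, CSS Grid is an even closer analogue than flexbox; a rough sketch of the same A/B/C/D layout (class names are made up, sizes taken from the question):

.layout {
  display: grid;
  grid-template-columns: 200px 200px 1fr;  /* A | C | D */
  grid-template-rows: 150px 1fr;           /* B row | C+D row */
  width: 100vw;
  height: 100vh;
}
.a { grid-row: 1 / 3; grid-column: 1; }     /* fixed 200px wide, full height */
.b { grid-row: 1; grid-column: 2 / 4; }     /* fixed 150px tall, stretches across */
.c { grid-row: 2; grid-column: 2; }         /* fixed 200px wide */
.d { grid-row: 2; grid-column: 3; }         /* takes the remaining space */

<div class="layout">
  <div class="a">A</div>
  <div class="b">B</div>
  <div class="c">C</div>
  <div class="d">D</div>
</div>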
Q: Removing html tags from string in React I am trying to remove tags from a string but keeping the order in tact using Regular expression. The string I have is this <p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p> What I have tried so far is const [string, setString] = useState( `<p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p>` ); useEffect(() => { const regex = /(<([^>]+)>)/gi; const newString = string.replace(regex, " "); setString(newString); }, []); What I get is this General Power This contains some info. ! enable bus at terminal: Name Terminal enable bus name Switch Bus no ip domain-lookup ip domain name region .google.com ! The order I want is: General Power This contains some info. ! enable bus at terminal: Name Terminal enable bus name Switch Bus no ip domain-lookup ip domain name region.google.com ! const [string, setString] = useState( `<p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p>` ); useEffect(() => { const regex = /(<([^>]+)>)/gi; const newString = string.replace(regex, " "); setString(newString); }, []); This is what I have tried so far A: DOMParser combined with a method to retrieve all text nodes would be more appropriate. And rather than an effect hook for this, consider passing a function to useState - that way, the state never gets set to the (undesirable) value with the HTML markup, but starts out as the replaced string without any re-renderings - or avoid state entirely now that it isn't being set anywhere. 
const getTextContentOnly = (html) => { const doc = new DOMParser().parseFromString(html, 'text/html'); const walker = document.createTreeWalker( doc.body, NodeFilter.SHOW_TEXT, null, false ); const texts = []; let node; while(node = walker.nextNode()) { texts.push(node.nodeValue); } return texts; } const App = () => { const texts = React.useMemo(() => getTextContentOnly( `<p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p>` ), []); return texts.map((text, i) => <div key={i}>{text}</div>); }; ReactDOM.createRoot(document.querySelector('.react')).render(<App />); <script crossorigin src="https://unpkg.com/react@18/umd/react.development.js"></script> <script crossorigin src="https://unpkg.com/react-dom@18/umd/react-dom.development.js"></script> <div class='react'></div>
Removing html tags from string in React
I am trying to remove tags from a string but keeping the order in tact using Regular expression. The string I have is this <p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p> What I have tried so far is const [string, setString] = useState( `<p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p>` ); useEffect(() => { const regex = /(<([^>]+)>)/gi; const newString = string.replace(regex, " "); setString(newString); }, []); What I get is this General Power This contains some info. ! enable bus at terminal: Name Terminal enable bus name Switch Bus no ip domain-lookup ip domain name region .google.com ! The order I want is: General Power This contains some info. ! enable bus at terminal: Name Terminal enable bus name Switch Bus no ip domain-lookup ip domain name region.google.com ! const [string, setString] = useState( `<p><span style="color: blue;">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style="color: red;">Name&nbsp;Terminal</span></p><p>enable bus name <span style="color: red;">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style="color: rgb(0, 0, 0);">ip domain name </span><span style="color: red;">region</span><span style="color: rgb(0, 0, 0);">.google.com</span></p><p><br></p><p>!</p>` ); useEffect(() => { const regex = /(<([^>]+)>)/gi; const newString = string.replace(regex, " "); setString(newString); }, []); This is what I have tried so far
[ "DOMParser combined with a method to retrieve all text nodes would be more appropriate. And rather than an effect hook for this, consider passing a function to useState - that way, the state never gets set to the (undesirable) value with the HTML markup, but starts out as the replaced string without any re-renderings - or avoid state entirely now that it isn't being set anywhere.\n\n\nconst getTextContentOnly = (html) => {\n const doc = new DOMParser().parseFromString(html, 'text/html');\n const walker = document.createTreeWalker(\n doc.body, \n NodeFilter.SHOW_TEXT, \n null, \n false\n );\n const texts = [];\n let node;\n while(node = walker.nextNode()) {\n texts.push(node.nodeValue);\n }\n return texts;\n}\nconst App = () => {\n const texts = React.useMemo(() => getTextContentOnly(\n `<p><span style=\"color: blue;\">General Power</span></p><p>This contains some info.</p><p><br></p><p>!</p><p>enable</p><p>bus at</p><p>terminal: <span style=\"color: red;\">Name&nbsp;Terminal</span></p><p>enable bus name <span style=\"color: red;\">Switch Bus</span></p><p>no ip domain-lookup</p><p><span style=\"color: rgb(0, 0, 0);\">ip domain name </span><span style=\"color: red;\">region</span><span style=\"color: rgb(0, 0, 0);\">.google.com</span></p><p><br></p><p>!</p>`\n ), []);\n return texts.map((text, i) => <div key={i}>{text}</div>);\n};\n\nReactDOM.createRoot(document.querySelector('.react')).render(<App />);\n<script crossorigin src=\"https://unpkg.com/react@18/umd/react.development.js\"></script>\n<script crossorigin src=\"https://unpkg.com/react-dom@18/umd/react-dom.development.js\"></script>\n<div class='react'></div>\n\n\n\n" ]
[ 1 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0074669280_javascript_reactjs.txt
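For the React question above, if the markup is always a flat list of <p> elements like the sample, a shorter variant of the same DOMParser idea is to map over the paragraphs; textContent already merges the inner <span> pieces, so "region.google.com" stays in one piece. Only a sketch under that assumption:

const htmlToLines = (html) => {
  const doc = new DOMParser().parseFromString(html, 'text/html');
  return Array.from(doc.body.querySelectorAll('p'), p => p.textContent.trim())
    .filter(line => line.length > 0); // drop the empty <p><br></p> paragraphs
};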
Q: Style date picker on Jetpack Compose I'm using DatePickerDialog in Jetpack Compose. I wanted to customize it with colors that fit my application instead of the default colors. I know I have to use the styles and the ContextThemeWrapper, but I don't know exactly how and what I need to change. So, how can I customize my date picker with the colors I want? Below is the code for my DatePickerDialog: private var dateFormat = "dd/MM/yyyy" fun showDatePickerDialog(context: Context, dateOfBirth: MutableState<TextFieldValue>, onValueChanged: () -> Unit) { val calendar = getCalendar(dateOfBirth.value.text) DatePickerDialog( context, { _, year, month, day -> dateOfBirth.value = TextFieldValue(getPickedDateAsString(year, month, day)) onValueChanged.invoke() }, calendar.get(Calendar.YEAR), calendar.get(Calendar.MONTH), calendar.get(Calendar.DAY_OF_MONTH) ) .show() } private fun getCalendar(dateOfBirth: String): Calendar { return if (dateOfBirth.isEmpty()) { Calendar.getInstance() } else { getLastPickedDateCalendar(dateOfBirth) } } private fun getLastPickedDateCalendar(dateOfBirth: String): Calendar { val dateFormat = SimpleDateFormat(dateFormat, Locale.getDefault()) val calendar = Calendar.getInstance() calendar.time = dateFormat.parse(dateOfBirth) return calendar } private fun getPickedDateAsString(year: Int, month: Int, day: Int): String { val calendar = Calendar.getInstance() calendar.set(year, month, day) val dateFormat = SimpleDateFormat(dateFormat, Locale.getDefault()) return dateFormat.format(calendar.time) } A: You can use a library available https://github.com/vanpra/compose-material-dialogs
Style date picker on Jetpack Compose
I'm using DatePickerDialog in Jetpack Compose. I wanted to customize it with colors that fit my application instead of the default colors. I know I have to use the styles and the ContextThemeWrapper, but I don't know exactly how and what I need to change. So, how can I customize my date picker with the colors I want? Below is the code for my DatePickerDialog: private var dateFormat = "dd/MM/yyyy" fun showDatePickerDialog(context: Context, dateOfBirth: MutableState<TextFieldValue>, onValueChanged: () -> Unit) { val calendar = getCalendar(dateOfBirth.value.text) DatePickerDialog( context, { _, year, month, day -> dateOfBirth.value = TextFieldValue(getPickedDateAsString(year, month, day)) onValueChanged.invoke() }, calendar.get(Calendar.YEAR), calendar.get(Calendar.MONTH), calendar.get(Calendar.DAY_OF_MONTH) ) .show() } private fun getCalendar(dateOfBirth: String): Calendar { return if (dateOfBirth.isEmpty()) { Calendar.getInstance() } else { getLastPickedDateCalendar(dateOfBirth) } } private fun getLastPickedDateCalendar(dateOfBirth: String): Calendar { val dateFormat = SimpleDateFormat(dateFormat, Locale.getDefault()) val calendar = Calendar.getInstance() calendar.time = dateFormat.parse(dateOfBirth) return calendar } private fun getPickedDateAsString(year: Int, month: Int, day: Int): String { val calendar = Calendar.getInstance() calendar.set(year, month, day) val dateFormat = SimpleDateFormat(dateFormat, Locale.getDefault()) return dateFormat.format(calendar.time) }
[ "You can use a library available https://github.com/vanpra/compose-material-dialogs\n" ]
[ 0 ]
[]
[]
[ "android", "android_datepicker", "android_jetpack_compose", "android_styles" ]
stackoverflow_0074667515_android_android_datepicker_android_jetpack_compose_android_styles.txt
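The answer to the Compose date-picker question above points at a third-party library; if you want to keep the platform DatePickerDialog from the question, it also has a constructor overload that takes a theme resource id, which is the usual hook for recoloring it. A rough sketch — the style name, parent theme, attributes and colors below are placeholders, and which attributes actually take effect varies a bit by API level:

<!-- res/values/styles.xml (illustrative) -->
<style name="MyDatePickerDialogTheme" parent="android:Theme.Material.Light.Dialog">
    <item name="android:colorAccent">@color/my_brand_color</item>
</style>

// Pass the theme as the second constructor argument
DatePickerDialog(
    context,
    R.style.MyDatePickerDialogTheme,
    { _, year, month, day ->
        dateOfBirth.value = TextFieldValue(getPickedDateAsString(year, month, day))
        onValueChanged.invoke()
    },
    calendar.get(Calendar.YEAR),
    calendar.get(Calendar.MONTH),
    calendar.get(Calendar.DAY_OF_MONTH)
).show()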
Q: I want to make a timer for a screen after timeout it back to previous page i want to make a question app in flutter.in this app there is a section where in a button when button clicked it go to question screen also a timer will start after time out it back to previous page.it will be able for all button question. i will try many way bt i can't solve this problem please help me A: Full Code import 'dart:async'; import 'package:flutter/material.dart'; void main() => runApp(const MyApp()); class MyApp extends StatelessWidget { const MyApp({super.key}); @override Widget build(BuildContext context) { const appTitle = 'Help with a meal....'; return MaterialApp( debugShowCheckedModeBanner: false, title: appTitle, home: const Page1(), ); } } class Page1 extends StatefulWidget { const Page1({super.key}); @override State<Page1> createState() => _Page1State(); } class _Page1State extends State<Page1> { @override Widget build(BuildContext context) { return Scaffold( body: Center( child: ElevatedButton( child: Text("Go to Page2"), onPressed: () { Navigator.push( context, MaterialPageRoute(builder: (context) => Page2())); }, ), ), ); } } class Page2 extends StatefulWidget { const Page2({super.key}); @override State<Page2> createState() => _Page2State(); } class _Page2State extends State<Page2> { late Timer timer; @override void initState() { timer = Timer(const Duration(seconds: 10), () { Navigator.push(context, MaterialPageRoute(builder: (context) => Page1())); print("changedpage"); }); super.initState(); } @override Widget build(BuildContext context) { return Scaffold( extendBodyBehindAppBar: true, body: Container( height: double.infinity, width: double.infinity, color: Colors.black, child: Center( child: ElevatedButton( child: Text("Go to Page1"), onPressed: () { timer.cancel(); Navigator.push( context, MaterialPageRoute(builder: (context) => Page1())); }, ), ), ), ); } } Hope this helps. Happy Coding :)
I want to make a timer for a screen that goes back to the previous page after timeout
I want to make a question (quiz) app in Flutter. In this app there is a section with buttons; when a button is clicked it goes to a question screen and a timer starts, and after the timer runs out it should go back to the previous page. This should work for every question button. I have tried many ways but I can't solve this problem, please help me.
[ "Full Code\nimport 'dart:async';\nimport 'package:flutter/material.dart';\n\nvoid main() => runApp(const MyApp());\n\nclass MyApp extends StatelessWidget {\n const MyApp({super.key});\n\n @override\n Widget build(BuildContext context) {\n const appTitle = 'Help with a meal....';\n return MaterialApp(\n debugShowCheckedModeBanner: false,\n title: appTitle,\n home: const Page1(),\n );\n }\n}\n\nclass Page1 extends StatefulWidget {\n const Page1({super.key});\n\n @override\n State<Page1> createState() => _Page1State();\n}\n\nclass _Page1State extends State<Page1> {\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n body: Center(\n child: ElevatedButton(\n child: Text(\"Go to Page2\"),\n onPressed: () {\n Navigator.push(\n context, MaterialPageRoute(builder: (context) => Page2()));\n },\n ),\n ),\n );\n }\n}\n\n\nclass Page2 extends StatefulWidget {\n const Page2({super.key});\n\n @override\n State<Page2> createState() => _Page2State();\n}\n\nclass _Page2State extends State<Page2> {\n late Timer timer;\n @override\n void initState() {\n timer = Timer(const Duration(seconds: 10), () {\n Navigator.push(context, MaterialPageRoute(builder: (context) => Page1()));\n print(\"changedpage\");\n });\n super.initState();\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n extendBodyBehindAppBar: true,\n body: Container(\n height: double.infinity,\n width: double.infinity,\n color: Colors.black,\n child: Center(\n child: ElevatedButton(\n child: Text(\"Go to Page1\"),\n onPressed: () {\n timer.cancel();\n Navigator.push(\n context, MaterialPageRoute(builder: (context) => Page1()));\n },\n ),\n ),\n ),\n );\n }\n}\n\nHope this helps. Happy Coding :)\n" ]
[ 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074669108_dart_flutter.txt
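A small variation on the answer to the Flutter question above: since the goal is to go back to the previous page, popping the route (rather than pushing Page1 again) keeps the navigation stack clean, and cancelling the timer in dispose() covers the case where the user leaves before the timeout. Only the question-screen state is sketched, assuming the same imports as the answer; the widget name is a placeholder:

class _QuestionPageState extends State<QuestionPage> {
  Timer? _timer;

  @override
  void initState() {
    super.initState();
    // After 10 seconds, return to whichever screen pushed this one
    _timer = Timer(const Duration(seconds: 10), () {
      if (mounted) Navigator.of(context).pop();
    });
  }

  @override
  void dispose() {
    _timer?.cancel(); // don't pop if the user already navigated away
    super.dispose();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(body: Center(child: Text('Question goes here')));
  }
}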
Q: How to Create New Macro in New File using VBA I need to assign new macro to new Bottom in new File with VBA in Excel In other words, I want to create a new file on pressing a button (run a macro) that contains a button that will run a predefined macro. Sub Shapes() 'Shape 1 Test_Shape ActiveSheet.Shapes.AddShape(msoShapeBevel, 85.25, 0, 120, 30).Select Selection.OnAction = "Macro1" End sub '___________________________________________________________________________ Sub Macro1() 'For Example Cells.Select End Sub Can anyone help me? TNX I searched many sites in English and Farsi, but I did not get a good result As long as the file is not saved with a new name, everything is fine; But when I save the file with a new name, the macros no longer work (they are not transferred from the original file to the new file). A: There is an answer on Mr. Excel that can help you: Exporting module from workbook to workbook Here is the code, just adapt it to your specific needs 'Copy macro from main file to extract Dim strModuleName As String Dim strFolder As String Dim strTempFile As String ThisWorkbook.Activate '<= changed strFolder = ThisWorkbook.Path '<= changed If Len(strFolder) = 0 Then strFolder = CurDir strFolder = strFolder & "\" strTempFile = strFolder & "~tmpexport.bas" On Error Resume Next ThisWorkbook.VBProject.VBComponents("Alloc_Date_Button").Export strTempFile '<= changed 'AMEND THE EXPORTED MACRO TO NEW NAME Dim objFSO Const ForReading = 1 Const ForWriting = 2 Dim objTS 'define a TextStream object Dim strContents As String Dim fileSpec As String fileSpec = ThisWorkbook.Path & "\~tmpexport.bas" Set objFSO = CreateObject("Scripting.FileSystemObject") Set objTS = objFSO.OpenTextFile(fileSpec, ForReading) strContents = objTS.ReadAll strContents = Replace(strContents, "Alloc_Date_Button", "Alloc_Date_Button_Export") strContents = Replace(strContents, "ALLOC_DATE_UPDATE", "ALLOC_DATE_UPDATE_EXPORT") objTS.Close Set objTS = objFSO.OpenTextFile(fileSpec, ForWriting) objTS.Write strContents objTS.Close Output.VBProject.VBComponents.Import strTempFile '<= changed Kill strTempFile On Error GoTo 0
How to Create New Macro in New File using VBA
I need to assign new macro to new Bottom in new File with VBA in Excel In other words, I want to create a new file on pressing a button (run a macro) that contains a button that will run a predefined macro. Sub Shapes() 'Shape 1 Test_Shape ActiveSheet.Shapes.AddShape(msoShapeBevel, 85.25, 0, 120, 30).Select Selection.OnAction = "Macro1" End sub '___________________________________________________________________________ Sub Macro1() 'For Example Cells.Select End Sub Can anyone help me? TNX I searched many sites in English and Farsi, but I did not get a good result As long as the file is not saved with a new name, everything is fine; But when I save the file with a new name, the macros no longer work (they are not transferred from the original file to the new file).
[ "There is an answer on Mr. Excel that can help you:\nExporting module from workbook to workbook\nHere is the code, just adapt it to your specific needs\n 'Copy macro from main file to extract\n Dim strModuleName As String\n Dim strFolder As String\n Dim strTempFile As String\n ThisWorkbook.Activate '<= changed\n strFolder = ThisWorkbook.Path '<= changed\n If Len(strFolder) = 0 Then strFolder = CurDir\n strFolder = strFolder & \"\\\"\n strTempFile = strFolder & \"~tmpexport.bas\"\n On Error Resume Next\n ThisWorkbook.VBProject.VBComponents(\"Alloc_Date_Button\").Export strTempFile '<= changed\n \n 'AMEND THE EXPORTED MACRO TO NEW NAME\n Dim objFSO\n Const ForReading = 1\n Const ForWriting = 2\n Dim objTS 'define a TextStream object\n Dim strContents As String\n Dim fileSpec As String\n \n fileSpec = ThisWorkbook.Path & \"\\~tmpexport.bas\"\n Set objFSO = CreateObject(\"Scripting.FileSystemObject\")\n Set objTS = objFSO.OpenTextFile(fileSpec, ForReading)\n strContents = objTS.ReadAll\n strContents = Replace(strContents, \"Alloc_Date_Button\", \"Alloc_Date_Button_Export\")\n strContents = Replace(strContents, \"ALLOC_DATE_UPDATE\", \"ALLOC_DATE_UPDATE_EXPORT\")\n \n objTS.Close\n \n Set objTS = objFSO.OpenTextFile(fileSpec, ForWriting)\n objTS.Write strContents\n objTS.Close\n\n \n Output.VBProject.VBComponents.Import strTempFile '<= changed\n Kill strTempFile\n On Error GoTo 0\n\n" ]
[ 0 ]
[]
[]
[ "excel", "module", "save_as", "vba" ]
stackoverflow_0074668509_excel_module_save_as_vba.txt
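One extra detail for the VBA question above, since the reported symptom is that the macros disappear after saving under a new name: besides copying the module across as in the answer, the new workbook has to be saved in a macro-enabled format, otherwise Excel strips the VBA project on save. A minimal sketch — the path, workbook variable and shape index are made up:

' Save the new workbook as .xlsm so the copied macros survive
Dim newWb As Workbook
Set newWb = Workbooks.Add
' ... add the shape, import the module, etc. here ...
newWb.SaveAs Filename:="C:\Temp\NewFile.xlsm", _
             FileFormat:=xlOpenXMLWorkbookMacroEnabled

' If the button should call the copy of Macro1 inside the new file, re-point it:
newWb.Worksheets(1).Shapes(1).OnAction = "'" & newWb.Name & "'!Macro1"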
Q: Can I pass parameters in computed properties in Vue.Js is this possible to pass parameter in computed properties in Vue.Js. I can see when having getters/setter using computed, they can take a parameter and assign it to a variable. like here from documentation: computed: { fullName: { // getter get: function () { return this.firstName + ' ' + this.lastName }, // setter set: function (newValue) { var names = newValue.split(' ') this.firstName = names[0] this.lastName = names[names.length - 1] } } } Is this also possible: computed: { fullName: function (salut) { return salut + ' ' + this.firstName + ' ' + this.lastName } } Where computed property takes an argument and returns the desired output. However, when I try this, I am getting this error: vue.common.js:2250 Uncaught TypeError: fullName is not a function(…) Should I be using methods for such cases? A: Most probably you want to use a method <span>{{ fullName('Hi') }}</span> methods: { fullName(salut) { return `${salut} ${this.firstName} ${this.lastName}` } } Longer explanation Technically you can use a computed property with a parameter like this: computed: { fullName() { return salut => `${salut} ${this.firstName} ${this.lastName}` } } (Thanks Unirgy for the base code for this.) The difference between a computed property and a method is that computed properties are cached and change only when their dependencies change. A method will evaluate every time it's called. If you need parameters, there are usually no benefits of using a computed property function over a method in such a case. Though it allows you to have a parametrized getter function bound to the Vue instance, you lose caching so not really any gain there, in fact, you may break reactivity (AFAIU). You can read more about this in Vue documentation https://v2.vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods The only useful situation is when you have to use a getter and need to have it parametrized. For instance, this situation happens in Vuex. In Vuex it's the only way to synchronously get parametrized result from the store (actions are async). Thus this approach is listed by official Vuex documentation for its getters https://vuex.vuejs.org/guide/getters.html#method-style-access A: You can use methods, but I prefer still to use computed properties instead of methods, if they're not mutating data or do not have external effects. You can pass arguments to computed properties this way (not documented, but suggested by maintainers, don't remember where): computed: { fullName: function () { var vm = this; return function (salut) { return salut + ' ' + vm.firstName + ' ' + vm.lastName; }; } } EDIT: Please do not use this solution, it only complicates code without any benefits. A: Well, technically speaking we can pass a parameter to a computed function, the same way we can pass a parameter to a getter function in vuex. Such a function is a function that returns a function. For instance, in the getters of a store: { itemById: function(state) { return (id) => state.itemPool[id]; } } This getter can be mapped to the computed functions of a component: computed: { ...mapGetters([ 'ids', 'itemById' ]) } And we can use this computed function in our template as follows: <div v-for="id in ids" :key="id">{{itemById(id).description}}</div> We can apply the same approach to create a computed method that takes a parameter. 
computed: { ...mapGetters([ 'ids', 'itemById' ]), descriptionById: function() { return (id) => this.itemById(id).description; } } And use it in our template: <div v-for="id in ids" :key="id">{{descriptionById(id)}}</div> This being said, I'm not saying here that it's the right way of doing things with Vue. However, I could observe that when the item with the specified ID is mutated in the store, the view does refresh its contents automatically with the new properties of this item (the binding seems to be working just fine). A: computed: { fullName: (app)=> (salut)=> { return salut + ' ' + this.firstName + ' ' + this.lastName } } when you want use <p>{{fullName('your salut')}}</p> A: [Vue2] Filters are a functionality provided by Vue components that let you apply formatting and transformations to any part of your template dynamic data. They don’t change a component’s data or anything, but they only affect the output. Say you are printing a name: new Vue({ el: '#container', data() { return { name: 'Maria', lastname: 'Silva' } }, filters: { prepend: (name, lastname, prefix) => { return `${prefix} ${name} ${lastname}` } } }); <script src="https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js"></script> <div id="container"> <p>{{ name, lastname | prepend('Hello') }}!</p> </div> Notice the syntax to apply a filter, which is | filterName. If you're familiar with Unix, that's the Unix pipe operator, which is used to pass the output of an operation as an input to the next one. The filters property of the component is an object. A single filter is a function that accepts a value and returns another value. The returned value is the one that’s actually printed in the Vue.js template. Filters were removed in Vue3 A: You can pass parameters but either it is not a vue.js way or the way you are doing is wrong. However there are cases when you need to do so.I am going to show you a simple example passing value to computed property using getter and setter. <template> <div> Your name is {{get_name}} <!-- John Doe at the beginning --> <button @click="name = 'Roland'">Change it</button> </div> </template> And the script export default { data: () => ({ name: 'John Doe' }), computed:{ get_name: { get () { return this.name }, set (new_name) { this.name = new_name } }, } } When the button clicked we are passing to computed property the name 'Roland' and in set() we are changing the name from 'John Doe' to 'Roland'. Below there is a common use case when computed is used with getter and setter. Say you have the follow vuex store: export default new Vuex.Store({ state: { name: 'John Doe' }, getters: { get_name: state => state.name }, mutations: { set_name: (state, payload) => state.name = payload }, }) And in your component you want to add v-model to an input but using the vuex store. <template> <div> <input type="text" v-model="get_name"> {{get_name}} </div> </template> <script> export default { computed:{ get_name: { get () { return this.$store.getters.get_name }, set (new_name) { this.$store.commit('set_name', new_name) } }, } } </script> A: You can also pass arguments to getters by returning a function. This is particularly useful when you want to query an array in the store: getters: { // ... getTodoById: (state) => (id) => { return state.todos.find(todo => todo.id === id) } } store.getters.getTodoById(2) // -> { id: 2, text: '...', done: false } Note that getters accessed via methods will run each time you call them, and the result is not cached. 
That is called Method-Style Access and it is documented on the Vue.js docs. A: I'd like to first reiterate the previous caveats that using computed (which is cached) with a parameter simply makes the computed not cached, effectively just making it a method. However, that being said, here are all the variations that I can think of which may have edge-cases for use. If you cut & paste this into a demo app it should be clear what is going on: <template> <div> <div style="background: violet;"> someData, regularComputed: {{ someData }}, {{ regularComputed }} </div> <div style="background: cornflowerblue;"> someComputedWithParameterOneLine: {{ someComputedWithParameterOneLine('hello') }} </div> <div style="background: lightgreen;"> someComputedWithParameterMultiLine: {{ someComputedWithParameterMultiLine('Yo') }} </div> <div style="background: yellow"> someComputedUsingGetterSetterWithParameterMultiLine: {{ someComputedUsingGetterSetterWithParameterMultiLine('Tadah!') }} </div> <div> <div style="background: orangered;"> inputData: {{ inputData }} </div> <input v-model="inputData" /> <button @click="someComputedUsingGetterSetterWithParameterMultiLine = inputData"> Update 'someComputedUsingGetterSetterWithParameterMultiLine' with 'inputData'. </button> </div> <div style="background: red"> newConcatenatedString: {{ newConcatenatedString }} </div> </div> </template> <script> export default { data() { return { someData: 'yo', inputData: '', newConcatenatedString: '' } }, computed: { regularComputed(){ return 'dude.' }, someComputedWithParameterOneLine(){ return (theParam) => `The following is the Parameter from *One* Line Arrow Function >>> ${theParam}` }, someComputedWithParameterMultiLine(){ return (theParam) => { return `The following is the Parameter from *Multi* Line Arrow Function >>> ${theParam}` } }, // NOTICE that Computed with GETTER/SETTER is now an Object, that has 2 methods, get() and set(), so after the name of the computed we use : instead of () // thus we do: "someComputedUsingGetterSetterWithParameterMultiLine: {...}" NOT "someComputedUsingGetterSetterWithParameterMultiLine(){...}" someComputedUsingGetterSetterWithParameterMultiLine: { get () { return (theParam) => { return `As part of the computed GETTER/SETTER, the following is inside get() which receives a Parameter (using a multi-line Arrow Function) >>> ${theParam}` } }, set(newSetValue) { console.log('Accessing get() from within the set()', this.someComputedUsingGetterSetterWithParameterMultiLine('hello from inside the Setter, using the Getter.')) console.log('Accessing newSetValue in set() >>>>', JSON.stringify(newSetValue)) this.newConcatenatedString = `**(1)${this.someComputedUsingGetterSetterWithParameterMultiLine('hello from inside the Setter, using the Getter.')}** This is a concatenation of get() value that had a Parameter, with newSetValue **(2)${newSetValue}** that came into the set().` } }, }, } </script> A: Computed could be considered as a function. So for an example on validation you could clearly do something like : methods: { validation(attr){ switch(attr) { case 'email': const re = /^(([^<>()\[\]\.,;:\s@\"]+(\.[^<>()\[\]\.,;:\s@\"]+)*)|(\".+\"))@(([^<>()[\]\.,;:\s@\"]+\.)+[^<>()[\]\.,;:\s@\"]{2,})$/i; return re.test(this.form.email); case 'password': return this.form.password.length > 4 } }, ... 
} Which you'll be using like : <b-form-input id="email" v-model="form.email" type="email" :state="validation('email')" required placeholder="Enter email" ></b-form-input> Just keep in mind that you will still miss the caching specific to computed. A: Yes methods are there for using params. Like answers stated above, in your example it's best to use methods since execution is very light. Only for reference, in a situation where the method is complex and cost is high, you can cache the results like so: data() { return { fullNameCache:{} }; } methods: { fullName(salut) { if (!this.fullNameCache[salut]) { this.fullNameCache[salut] = salut + ' ' + this.firstName + ' ' + this.lastName; } return this.fullNameCache[salut]; } } note: When using this, watchout for memory if dealing with thousands A: You need to be careful with the vue/no-side-effects-in-computed-properties ESlint rule and not making any inside of computed. Meanwhile, if you're looking towards having a memoization kind of approach, you can give a read to this article or useMemoize from Vueuse. Or even v-memo since Vue3.2.
Can I pass parameters in computed properties in Vue.Js
is this possible to pass parameter in computed properties in Vue.Js. I can see when having getters/setter using computed, they can take a parameter and assign it to a variable. like here from documentation: computed: { fullName: { // getter get: function () { return this.firstName + ' ' + this.lastName }, // setter set: function (newValue) { var names = newValue.split(' ') this.firstName = names[0] this.lastName = names[names.length - 1] } } } Is this also possible: computed: { fullName: function (salut) { return salut + ' ' + this.firstName + ' ' + this.lastName } } Where computed property takes an argument and returns the desired output. However, when I try this, I am getting this error: vue.common.js:2250 Uncaught TypeError: fullName is not a function(…) Should I be using methods for such cases?
[ "Most probably you want to use a method\n<span>{{ fullName('Hi') }}</span>\n\nmethods: {\n fullName(salut) {\n return `${salut} ${this.firstName} ${this.lastName}`\n }\n}\n\n\nLonger explanation\nTechnically you can use a computed property with a parameter like this:\ncomputed: {\n fullName() {\n return salut => `${salut} ${this.firstName} ${this.lastName}`\n }\n}\n\n(Thanks Unirgy for the base code for this.)\nThe difference between a computed property and a method is that computed properties are cached and change only when their dependencies change. A method will evaluate every time it's called.\nIf you need parameters, there are usually no benefits of using a computed property function over a method in such a case. Though it allows you to have a parametrized getter function bound to the Vue instance, you lose caching so not really any gain there, in fact, you may break reactivity (AFAIU). You can read more about this in Vue documentation https://v2.vuejs.org/v2/guide/computed.html#Computed-Caching-vs-Methods\nThe only useful situation is when you have to use a getter and need to have it parametrized. For instance, this situation happens in Vuex. In Vuex it's the only way to synchronously get parametrized result from the store (actions are async). Thus this approach is listed by official Vuex documentation for its getters\nhttps://vuex.vuejs.org/guide/getters.html#method-style-access\n", "You can use methods, but I prefer still to use computed properties instead of methods, if they're not mutating data or do not have external effects.\nYou can pass arguments to computed properties this way (not documented, but suggested by maintainers, don't remember where):\ncomputed: {\n fullName: function () {\n var vm = this;\n return function (salut) {\n return salut + ' ' + vm.firstName + ' ' + vm.lastName; \n };\n }\n}\n\nEDIT: Please do not use this solution, it only complicates code without any benefits.\n", "Well, technically speaking we can pass a parameter to a computed function, the same way we can pass a parameter to a getter function in vuex. 
Such a function is a function that returns a function.\nFor instance, in the getters of a store:\n{\n itemById: function(state) {\n return (id) => state.itemPool[id];\n }\n}\n\nThis getter can be mapped to the computed functions of a component:\ncomputed: {\n ...mapGetters([\n 'ids',\n 'itemById'\n ])\n}\n\nAnd we can use this computed function in our template as follows:\n<div v-for=\"id in ids\" :key=\"id\">{{itemById(id).description}}</div>\n\nWe can apply the same approach to create a computed method that takes a parameter.\ncomputed: {\n ...mapGetters([\n 'ids',\n 'itemById'\n ]),\n descriptionById: function() {\n return (id) => this.itemById(id).description;\n }\n}\n\nAnd use it in our template:\n<div v-for=\"id in ids\" :key=\"id\">{{descriptionById(id)}}</div>\n\nThis being said, I'm not saying here that it's the right way of doing things with Vue.\nHowever, I could observe that when the item with the specified ID is mutated in the store, the view does refresh its contents automatically with the new properties of this item (the binding seems to be working just fine).\n", "computed: {\n fullName: (app)=> (salut)=> {\n return salut + ' ' + this.firstName + ' ' + this.lastName \n }\n}\n\nwhen you want use\n<p>{{fullName('your salut')}}</p>\n\n", "[Vue2] Filters are a functionality provided by Vue components that let you apply formatting and transformations to any part of your template dynamic data.\nThey don’t change a component’s data or anything, but they only affect the output.\nSay you are printing a name:\n\n\nnew Vue({\n el: '#container',\n data() {\n return {\n name: 'Maria',\n lastname: 'Silva'\n }\n },\n filters: {\n prepend: (name, lastname, prefix) => {\n return `${prefix} ${name} ${lastname}`\n }\n }\n});\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/vue/2.5.17/vue.js\"></script>\n<div id=\"container\">\n <p>{{ name, lastname | prepend('Hello') }}!</p>\n</div>\n\n\n\nNotice the syntax to apply a filter, which is | filterName. 
If you're familiar with Unix, that's the Unix pipe operator, which is used to pass the output of an operation as an input to the next one.\nThe filters property of the component is an object.\nA single filter is a function that accepts a value and returns another value.\nThe returned value is the one that’s actually printed in the Vue.js template.\n\nFilters were removed in Vue3\n\n", "\nYou can pass parameters but either it is not a vue.js way or the way you are doing is wrong.\n\nHowever there are cases when you need to do so.I am going to show you a simple example passing value to computed property using getter and setter.\n<template>\n <div>\n Your name is {{get_name}} <!-- John Doe at the beginning -->\n <button @click=\"name = 'Roland'\">Change it</button>\n </div>\n</template>\n\nAnd the script\nexport default {\n data: () => ({\n name: 'John Doe'\n }),\n computed:{\n get_name: {\n get () {\n return this.name\n },\n set (new_name) {\n this.name = new_name\n }\n },\n } \n}\n\nWhen the button clicked we are passing to computed property the name 'Roland' and in set() we are changing the name from 'John Doe' to 'Roland'.\nBelow there is a common use case when computed is used with getter and setter.\nSay you have the follow vuex store:\nexport default new Vuex.Store({\n state: {\n name: 'John Doe'\n },\n getters: {\n get_name: state => state.name\n },\n mutations: {\n set_name: (state, payload) => state.name = payload\n },\n})\n\nAnd in your component you want to add v-model to an input but using the vuex store.\n<template>\n <div>\n <input type=\"text\" v-model=\"get_name\">\n {{get_name}}\n </div>\n</template>\n<script>\nexport default {\n computed:{\n get_name: {\n get () {\n return this.$store.getters.get_name\n },\n set (new_name) {\n this.$store.commit('set_name', new_name)\n }\n },\n } \n}\n</script>\n\n", "You can also pass arguments to getters by returning a function. This is particularly useful when you want to query an array in the store:\ngetters: {\n // ...\n getTodoById: (state) => (id) => {\n return state.todos.find(todo => todo.id === id)\n }\n}\nstore.getters.getTodoById(2) // -> { id: 2, text: '...', done: false }\n\nNote that getters accessed via methods will run each time you call them, and the result is not cached.\nThat is called Method-Style Access and it is documented on the Vue.js docs.\n", "I'd like to first reiterate the previous caveats that using computed (which is cached) with a parameter simply makes the computed not cached, effectively just making it a method.\nHowever, that being said, here are all the variations that I can think of which may have edge-cases for use. 
If you cut & paste this into a demo app it should be clear what is going on:\n<template>\n <div>\n\n <div style=\"background: violet;\"> someData, regularComputed: {{ someData }}, {{ regularComputed }} </div>\n <div style=\"background: cornflowerblue;\"> someComputedWithParameterOneLine: {{ someComputedWithParameterOneLine('hello') }} </div>\n <div style=\"background: lightgreen;\"> someComputedWithParameterMultiLine: {{ someComputedWithParameterMultiLine('Yo') }} </div>\n <div style=\"background: yellow\"> someComputedUsingGetterSetterWithParameterMultiLine: {{ someComputedUsingGetterSetterWithParameterMultiLine('Tadah!') }} </div>\n\n <div>\n <div style=\"background: orangered;\"> inputData: {{ inputData }} </div>\n <input v-model=\"inputData\" />\n <button @click=\"someComputedUsingGetterSetterWithParameterMultiLine = inputData\">\n Update 'someComputedUsingGetterSetterWithParameterMultiLine' with 'inputData'.\n </button>\n </div>\n\n <div style=\"background: red\"> newConcatenatedString: {{ newConcatenatedString }} </div>\n\n </div>\n</template>\n\n<script>\n\n export default {\n\n data() {\n return {\n someData: 'yo',\n inputData: '',\n newConcatenatedString: ''\n }\n },\n\n computed: {\n\n regularComputed(){\n return 'dude.'\n },\n\n someComputedWithParameterOneLine(){\n return (theParam) => `The following is the Parameter from *One* Line Arrow Function >>> ${theParam}`\n },\n\n someComputedWithParameterMultiLine(){\n return (theParam) => {\n return `The following is the Parameter from *Multi* Line Arrow Function >>> ${theParam}`\n }\n },\n\n // NOTICE that Computed with GETTER/SETTER is now an Object, that has 2 methods, get() and set(), so after the name of the computed we use : instead of ()\n // thus we do: \"someComputedUsingGetterSetterWithParameterMultiLine: {...}\" NOT \"someComputedUsingGetterSetterWithParameterMultiLine(){...}\"\n someComputedUsingGetterSetterWithParameterMultiLine: {\n get () {\n return (theParam) => {\n return `As part of the computed GETTER/SETTER, the following is inside get() which receives a Parameter (using a multi-line Arrow Function) >>> ${theParam}`\n }\n },\n set(newSetValue) {\n console.log('Accessing get() from within the set()', this.someComputedUsingGetterSetterWithParameterMultiLine('hello from inside the Setter, using the Getter.'))\n console.log('Accessing newSetValue in set() >>>>', JSON.stringify(newSetValue))\n this.newConcatenatedString = `**(1)${this.someComputedUsingGetterSetterWithParameterMultiLine('hello from inside the Setter, using the Getter.')}** This is a concatenation of get() value that had a Parameter, with newSetValue **(2)${newSetValue}** that came into the set().`\n }\n },\n\n },\n\n }\n\n</script>\n\n", "Computed could be considered as a function. So for an example on validation you could clearly do something like :\n methods: {\n validation(attr){\n switch(attr) {\n case 'email':\n const re = /^(([^<>()\\[\\]\\.,;:\\s@\\\"]+(\\.[^<>()\\[\\]\\.,;:\\s@\\\"]+)*)|(\\\".+\\\"))@(([^<>()[\\]\\.,;:\\s@\\\"]+\\.)+[^<>()[\\]\\.,;:\\s@\\\"]{2,})$/i;\n return re.test(this.form.email);\n case 'password':\n return this.form.password.length > 4\n }\n },\n ...\n }\n\nWhich you'll be using like :\n <b-form-input\n id=\"email\"\n v-model=\"form.email\"\n type=\"email\"\n :state=\"validation('email')\"\n required\n placeholder=\"Enter email\"\n ></b-form-input>\n\nJust keep in mind that you will still miss the caching specific to computed.\n", "Yes methods are there for using params. 
Like answers stated above, in your example it's best to use methods since execution is very light.\nOnly for reference, in a situation where the method is complex and cost is high, you can cache the results like so:\ndata() {\n return {\n fullNameCache:{}\n };\n}\n\nmethods: {\n fullName(salut) {\n if (!this.fullNameCache[salut]) {\n this.fullNameCache[salut] = salut + ' ' + this.firstName + ' ' + this.lastName;\n }\n return this.fullNameCache[salut];\n }\n}\n\nnote: When using this, watchout for memory if dealing with thousands\n", "You need to be careful with the vue/no-side-effects-in-computed-properties ESlint rule and not making any inside of computed.\nMeanwhile, if you're looking towards having a memoization kind of approach, you can give a read to this article or useMemoize from Vueuse.\nOr even v-memo since Vue3.2.\n" ]
[ 437, 47, 12, 6, 6, 5, 5, 4, 1, 0, 0 ]
[ "I didn't see a clear Vue 3 example so I am adding one from an app I worked on. You call a method first that then returns a computed value. So the method will be called when Vue re-renders but then the cached computed value is returned and only executed if the reactive input variables change.\n<script setup>\nimport { computed, ref } from 'vue'\n\nconst itemOne = ref(1);\nconst itemTwo = ref(2);\n\nconst getItemDoubled: (key) => {\n return computed(()=> item[key].value * 2);\n}\n</script>\n\n<template>\n <p>get dynamic name computed value: {{ getItemDoubled('One') }}\n <p>get dynamic name computed value: {{ getItemDoubled('Two') }}\n</template\n\n" ]
[ -1 ]
[ "javascript", "vue.js", "vuejs2" ]
stackoverflow_0040522634_javascript_vue.js_vuejs2.txt
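A note on the Vue 3 snippet in the non-answer above: it will not compile as written (const getItemDoubled: (key) => {...} is a type annotation rather than an assignment, item[key] is never defined, and the closing </template tag is malformed). A hedged sketch of the same idea follows; it assumes the two values live in one reactive object so the key lookup works, and it memoizes one computed per key so the cached value actually survives re-renders:

<script setup>
// Sketch only: one reactive object replaces the separate itemOne/itemTwo refs,
// and each per-key computed is created once and reused on later renders.
import { computed, reactive } from 'vue'

const items = reactive({ One: 1, Two: 2 })
const cache = {}

const getItemDoubled = (key) =>
  (cache[key] ??= computed(() => items[key] * 2))
</script>

<template>
  <!-- a computed returned from a function call is not auto-unwrapped, hence .value -->
  <p>doubled: {{ getItemDoubled('One').value }}</p>
  <p>doubled: {{ getItemDoubled('Two').value }}</p>
</template>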
Q: How to find the union and intersection of linked lists of strings? I have tried to code the union and intersection of the 2 linked lists. However, I'm unsuccessful in creating the linked list of strings as its function fails to display and also insert nodes into the list. The code is only able to read the strings of linked list 1. I would want to know the approach to creating a linked list of strings. #include<stdio.h> #include<stdlib.h> #include<string.h> typedef struct node{ char* data; struct node *next; } NODE; NODE* create(char* data){ NODE* newnode = (NODE*)malloc(sizeof(NODE)); newnode->data = data; newnode->next = NULL; return newnode; } void insertAtBeg(NODE **head, char* data){ NODE *newnode = create(data); newnode->data = data; newnode->next = (*head); (*head) = newnode; } void display(NODE *temp){ while(temp != NULL){ printf("%d->",temp->data); temp = temp->next; } } int isPresent(NODE *head, char* data){ NODE *temp = head; while(temp != NULL){ if(temp->data == data) return 1; temp = temp->next; } return 0; } NODE* getUnion(NODE* head1, NODE* head2){ NODE* result = NULL; NODE *t1 = head1; NODE *t2 = head2; while(t1 != NULL){ insertAtBeg(&result, t1->data); t1 = t1->next; } while(t2 != NULL){ if(!isPresent(result,t2->data)) insertAtBeg(&result, t2->data); t2 = t2->next; } return result; } NODE* getIntersection(NODE *head1,NODE *head2){ NODE* result = NULL; NODE *t1 = head1; while(t1 != NULL){ if(isPresent(head2,t1->data)) insertAtBeg(&result,t1->data); t1 = t1->next; } return result; } int main(){ NODE* head1 = NULL; NODE* head2 = NULL; NODE* intersection = NULL; NODE* unin = NULL; int m,n; char *arr; printf("Enter the size of the linked list1:\n"); scanf("%d",&m); printf("Enter the elements:\n"); for(int i = 0; i< m ; i++){ scanf("%s",arr[i]); insertAtBeg(&head1,arr[i]); } printf("\nDisplaying linked list1:\n"); display(head1); printf("\nEnter the size of linked list 2:\n"); scanf("%d",&n); printf("\nEnter the elements:\n"); for(int i = 0; i< n ; i++){ scanf("%s",arr[i]); insertAtBeg(&head2,arr[i]); } printf("\nDisplaying linked list2:\n"); display(head2); unin = getUnion(head1,head2); intersection = getIntersection(head1,head2); printf("\nUnion list:\n"); display(unin); printf("\nIntersection list:\n"); display(intersection); return 0; } Errors: warning: passing argument 2 of ‘insertAtBeg’ makes pointer from integer without a cast [-Wint-conversion] 84 | insertAtBeg(&head1,arr[i]); | ~~~^~~ | | | char q.c:16:37: note: expected ‘char *’ but argument is of type ‘char’ 16 | void insertAtBeg(NODE **head, char* data){ | ~~~~~~^~~~ q.c:94:31: warning: passing argument 2 of ‘insertAtBeg’ makes pointer from integer without a cast [-Wint-conversion] 94 | insertAtBeg(&head2,arr[i]); | ~~~^~~ | | | char q.c:16:37: note: expected ‘char *’ but argument is of type ‘char’ 16 | void insertAtBeg(NODE **head, char* data){ A: Several issues with the code so I made some changes, at minimum, to point out one issue and also placed printf statement to help in debugging (needs to b done as you're iteratively developing the code). 
#include <stdio.h> #include<stdlib.h> #include<string.h> typedef struct node{ char* data; struct node *next; } NODE; NODE* create(char* data){ NODE* newnode = (NODE*)malloc(sizeof(NODE)); newnode->data = data; newnode->next = NULL; return newnode; } void insertAtBeg(NODE **head, char* data){ printf("<in insertAtBeg(NODE **head, char* data) \n"); NODE *newnode = create(data); newnode->data = data; newnode->next = (*head); (*head) = newnode; } void display(NODE *temp){ while(temp != NULL){ printf("%d->",temp->data); temp = temp->next; } } int isPresent(NODE *head, char* data){ NODE *temp = head; while(temp != NULL){ if(temp->data == data) return 1; temp = temp->next; } return 0; } NODE* getUnion(NODE* head1, NODE* head2){ NODE* result = NULL; NODE *t1 = head1; NODE *t2 = head2; while(t1 != NULL){ insertAtBeg(&result, t1->data); t1 = t1->next; } while(t2 != NULL){ if(!isPresent(result,t2->data)) insertAtBeg(&result,t2->data); t2 = t2->next; } return result; } NODE* getIntersection(NODE *head1,NODE *head2){ NODE* result = NULL; NODE *t1 = head1; while(t1 != NULL){ if(isPresent(head2,t1->data)) insertAtBeg(&result, t1->data); t1 = t1->next; } return result; } int main(){ NODE* head1 = NULL; NODE* head2 = NULL; NODE* intersection = NULL; NODE* unin = NULL; int m,n; char *arr; NODE* result = NULL; insertAtBeg(&head2,&arr[0]); insertAtBeg(&result, "c"); // t1->data); was data allocated memory? /* printf("Enter the size of the linked list1:\n"); scanf("%d",&m); printf("Enter the elements:\n"); for(int i = 0; i< m ; i++){ scanf("%s",arr[i]); insertAtBeg(&head1,arr[i]); } printf("\nDisplaying linked list1:\n"); display(head1); printf("\nEnter the size of linked list 2:\n"); scanf("%d",&n); printf("\nEnter the elements:\n"); for(int i = 0; i< n ; i++){ scanf("%s",arr[i]); insertAtBeg(&head2,arr[i]); } printf("\nDisplaying linked list2:\n"); display(head2); unin = getUnion(head1,head2); intersection = getIntersection(head1,head2); printf("\nUnion list:\n"); display(unin); printf("\nIntersection list:\n"); display(intersection); */ return 0; } Output: <in insertAtBeg(NODE **head, char* data) <in insertAtBeg(NODE **head, char* data)
How to find the union and intersection of linked lists of strings?
I have tried to code the union and intersection of the 2 linked lists. However, I'm unsuccessful in creating the linked list of strings as its function fails to display and also insert nodes into the list. The code is only able to read the strings of linked list 1. I would want to know the approach to creating a linked list of strings. #include<stdio.h> #include<stdlib.h> #include<string.h> typedef struct node{ char* data; struct node *next; } NODE; NODE* create(char* data){ NODE* newnode = (NODE*)malloc(sizeof(NODE)); newnode->data = data; newnode->next = NULL; return newnode; } void insertAtBeg(NODE **head, char* data){ NODE *newnode = create(data); newnode->data = data; newnode->next = (*head); (*head) = newnode; } void display(NODE *temp){ while(temp != NULL){ printf("%d->",temp->data); temp = temp->next; } } int isPresent(NODE *head, char* data){ NODE *temp = head; while(temp != NULL){ if(temp->data == data) return 1; temp = temp->next; } return 0; } NODE* getUnion(NODE* head1, NODE* head2){ NODE* result = NULL; NODE *t1 = head1; NODE *t2 = head2; while(t1 != NULL){ insertAtBeg(&result, t1->data); t1 = t1->next; } while(t2 != NULL){ if(!isPresent(result,t2->data)) insertAtBeg(&result, t2->data); t2 = t2->next; } return result; } NODE* getIntersection(NODE *head1,NODE *head2){ NODE* result = NULL; NODE *t1 = head1; while(t1 != NULL){ if(isPresent(head2,t1->data)) insertAtBeg(&result,t1->data); t1 = t1->next; } return result; } int main(){ NODE* head1 = NULL; NODE* head2 = NULL; NODE* intersection = NULL; NODE* unin = NULL; int m,n; char *arr; printf("Enter the size of the linked list1:\n"); scanf("%d",&m); printf("Enter the elements:\n"); for(int i = 0; i< m ; i++){ scanf("%s",arr[i]); insertAtBeg(&head1,arr[i]); } printf("\nDisplaying linked list1:\n"); display(head1); printf("\nEnter the size of linked list 2:\n"); scanf("%d",&n); printf("\nEnter the elements:\n"); for(int i = 0; i< n ; i++){ scanf("%s",arr[i]); insertAtBeg(&head2,arr[i]); } printf("\nDisplaying linked list2:\n"); display(head2); unin = getUnion(head1,head2); intersection = getIntersection(head1,head2); printf("\nUnion list:\n"); display(unin); printf("\nIntersection list:\n"); display(intersection); return 0; } Errors: warning: passing argument 2 of ‘insertAtBeg’ makes pointer from integer without a cast [-Wint-conversion] 84 | insertAtBeg(&head1,arr[i]); | ~~~^~~ | | | char q.c:16:37: note: expected ‘char *’ but argument is of type ‘char’ 16 | void insertAtBeg(NODE **head, char* data){ | ~~~~~~^~~~ q.c:94:31: warning: passing argument 2 of ‘insertAtBeg’ makes pointer from integer without a cast [-Wint-conversion] 94 | insertAtBeg(&head2,arr[i]); | ~~~^~~ | | | char q.c:16:37: note: expected ‘char *’ but argument is of type ‘char’ 16 | void insertAtBeg(NODE **head, char* data){
[ "Several issues with the code so I made some changes, at minimum, to point out one issue and also placed printf statement to help in debugging (needs to b done as you're iteratively developing the code).\n#include <stdio.h>\n#include<stdlib.h>\n#include<string.h>\n\ntypedef struct node{\n char* data;\n struct node *next;\n} NODE;\n\nNODE* create(char* data){\n NODE* newnode = (NODE*)malloc(sizeof(NODE));\n newnode->data = data;\n newnode->next = NULL;\n return newnode;\n}\n\nvoid insertAtBeg(NODE **head, char* data){\nprintf(\"<in insertAtBeg(NODE **head, char* data) \\n\");\n NODE *newnode = create(data);\n newnode->data = data;\n newnode->next = (*head);\n (*head) = newnode;\n}\n\nvoid display(NODE *temp){\n while(temp != NULL){\n printf(\"%d->\",temp->data);\n temp = temp->next;\n }\n}\n\nint isPresent(NODE *head, char* data){\n NODE *temp = head;\n while(temp != NULL){\n if(temp->data == data)\n return 1;\n temp = temp->next;\n }\n return 0;\n}\n\nNODE* getUnion(NODE* head1, NODE* head2){\n NODE* result = NULL;\n NODE *t1 = head1;\n NODE *t2 = head2;\n\n while(t1 != NULL){\n insertAtBeg(&result, t1->data);\n t1 = t1->next;\n } \n\n while(t2 != NULL){\n if(!isPresent(result,t2->data))\n insertAtBeg(&result,t2->data);\n t2 = t2->next;\n } \n return result;\n}\n\nNODE* getIntersection(NODE *head1,NODE *head2){\n NODE* result = NULL;\n NODE *t1 = head1;\n \n while(t1 != NULL){\n if(isPresent(head2,t1->data))\n insertAtBeg(&result, t1->data);\n t1 = t1->next;\n }\n return result;\n}\n\nint main(){\n NODE* head1 = NULL;\n NODE* head2 = NULL;\n NODE* intersection = NULL;\n NODE* unin = NULL;\n int m,n;\n char *arr;\nNODE* result = NULL;\n\ninsertAtBeg(&head2,&arr[0]);\ninsertAtBeg(&result, \"c\"); // t1->data); was data allocated memory?\n\n/*\n printf(\"Enter the size of the linked list1:\\n\");\n scanf(\"%d\",&m);\n printf(\"Enter the elements:\\n\");\n \n for(int i = 0; i< m ; i++){\n scanf(\"%s\",arr[i]);\n insertAtBeg(&head1,arr[i]);\n }\n printf(\"\\nDisplaying linked list1:\\n\");\n display(head1);\n\n printf(\"\\nEnter the size of linked list 2:\\n\");\n scanf(\"%d\",&n);\n printf(\"\\nEnter the elements:\\n\");\n for(int i = 0; i< n ; i++){\n scanf(\"%s\",arr[i]);\n insertAtBeg(&head2,arr[i]);\n }\n\n printf(\"\\nDisplaying linked list2:\\n\");\n display(head2);\n\n unin = getUnion(head1,head2);\n intersection = getIntersection(head1,head2);\n\n printf(\"\\nUnion list:\\n\");\n display(unin);\n\n printf(\"\\nIntersection list:\\n\");\n display(intersection);\n*/\n return 0;\n}\n\nOutput: \n<in insertAtBeg(NODE **head, char* data) \n<in insertAtBeg(NODE **head, char* data) \n\n" ]
[ 1 ]
[]
[]
[ "c", "data_structures", "linked_list", "string" ]
stackoverflow_0074668351_c_data_structures_linked_list_string.txt
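On the root cause in the question above: arr is an uninitialized char *, scanf("%s", arr[i]) passes a single char where a writable buffer is expected, display prints a char * with %d, and isPresent compares pointers with == instead of comparing string contents. A minimal sketch of a safer input helper, assuming each node should own its own heap copy of the string (isPresent would then use strcmp(temp->data, data) == 0 and display would use %s):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch: read one whitespace-delimited word from stdin and return a
   heap-allocated copy that a list node can own; returns NULL on EOF/error. */
static char *read_word(void)
{
    char buf[100];
    if (scanf("%99s", buf) != 1)
        return NULL;
    char *copy = malloc(strlen(buf) + 1);
    if (copy != NULL)
        strcpy(copy, buf);
    return copy;
}

/* hypothetical use inside the input loop:
   char *word = read_word();
   if (word != NULL) insertAtBeg(&head1, word);  */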
Q: How to filter resources by annotations using kubectl? I would like to return all ingress resources that do not contain a specific annotation. Using the following command returns an error: kubectl get ingress --all-namespaces -o=jsonpath='{.items[?(!(@.metadata.annotations.kubernetes\.io/ingress\.class))].metadata.name}' error: error parsing jsonpath {.items[?(!(@.metadata.annotations.kubernetes\.io/ingress\.class))].metadata.name}, unclosed array expect ] A: Could you try that one? kubectl get ingress --all-namespaces -o=jsonpath='{.items[?(!(@.metadata.annotations["kubernetes.io/ingress.class"]))].metadata.name}' A: The error message is indicating that there is an unclosed array in your JSONPath query. This is likely because you have an extra . before the items field in your query. Here is the correct JSONPath query that should work: kubectl get ingress --all-namespaces -o=jsonpath='{.items[?(!(@.metadata.annotations.kubernetes.io/ingress.class))].metadata.name}' Notice that the . before items has been removed, which should fix the error you are seeing. Keep in mind that JSONPath queries can be difficult to read and understand, especially when they contain complex filters like the one in your example. It might be easier to use the --field-selector option instead, which allows you to specify the fields you want to include or exclude in the output. For example, the following command would return the names of all ingress resources that do not have the kubernetes.io/ingress.class annotation: kubectl get ingress --all-namespaces --field-selector metadata.annotations.kubernetes.io/ingress.class!=
How to filter resources by annotations using kubectl?
I would like to return all ingress resources that do not contain a specific annotation. Using the following command returns an error: kubectl get ingress --all-namespaces -o=jsonpath='{.items[?(!(@.metadata.annotations.kubernetes\.io/ingress\.class))].metadata.name}' error: error parsing jsonpath {.items[?(!(@.metadata.annotations.kubernetes\.io/ingress\.class))].metadata.name}, unclosed array expect ]
[ "Could you try that one?\nkubectl get ingress --all-namespaces -o=jsonpath='{.items[?(!(@.metadata.annotations[\"kubernetes.io/ingress.class\"]))].metadata.name}'\n\n", "The error message is indicating that there is an unclosed array in your JSONPath query. This is likely because you have an extra . before the items field in your query.\nHere is the correct JSONPath query that should work:\nkubectl get ingress --all-namespaces -o=jsonpath='{.items[?(!(@.metadata.annotations.kubernetes.io/ingress.class))].metadata.name}'\n\nNotice that the . before items has been removed, which should fix the error you are seeing.\nKeep in mind that JSONPath queries can be difficult to read and understand, especially when they contain complex filters like the one in your example. It might be easier to use the --field-selector option instead, which allows you to specify the fields you want to include or exclude in the output. For example, the following command would return the names of all ingress resources that do not have the kubernetes.io/ingress.class annotation:\nkubectl get ingress --all-namespaces --field-selector metadata.annotations.kubernetes.io/ingress.class!=\n\n" ]
[ 1, 0 ]
[]
[]
[ "kubectl", "kubernetes", "kubernetes_ingress", "nginx_ingress" ]
stackoverflow_0074659719_kubectl_kubernetes_kubernetes_ingress_nginx_ingress.txt
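If the corrected jsonpath expression above still errors, note that kubectl's JSONPath dialect is limited and its handling of negated filters varies by version, and that --field-selector supports only a small fixed set of fields, so it is unlikely to work on annotations. A fallback sketch that lets jq do the filtering instead (assumes jq is installed):

kubectl get ingress --all-namespaces -o json \
  | jq -r '.items[]
           | select(.metadata.annotations["kubernetes.io/ingress.class"] == null)
           | "\(.metadata.namespace)/\(.metadata.name)"'

This prints namespace/name for every ingress that does not carry the kubernetes.io/ingress.class annotation.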
Q: VBA how to exclude empty cell in condition? If sngRow.Cells(1,23) = "ready for pickup" then.. what I would like is : If sngRow.Cells(1,23) = "ready for pickup" and sngRow.Cells(1,22) is NOT EMPTY then.. I don't know how to write it in VBA and can't seem to find an easy solution A: There are several ways to check this. I would recommend the IsEmpty() function Sub test1() Debug.Print Not IsEmpty(sngRow.Cells(1, 22)) Debug.Print VarType(sngRow.Cells(1, 22)) <> vbEmpty Debug.Print sngRow.Cells(1, 22) <> "" ' If there is an error in the cell (e.g. division by 0), a type mismatch error occurs Debug.Print sngRow.Cells(1, 22) <> vbNullString ' If there is an error in the cell (e.g. division by 0), a type mismatch error occurs Debug.Print TypeName(sngRow.Cells(1, 22).Value) <> "Empty" End Sub
VBA how to exclude empty cell in condition?
If sngRow.Cells(1,23) = "ready for pickup" then.. what I would like is : If sngRow.Cells(1,23) = "ready for pickup" and sngRow.Cells(1,22) is NOT EMPTY then.. I don't know how to write it in VBA and can't seem to find an easy solution
[ "There are several ways to check this. I would recommend the IsEmpty() function\nSub test1()\n Debug.Print Not IsEmpty(sngRow.Cells(1, 22))\n Debug.Print VarType(sngRow.Cells(1, 22)) <> vbEmpty\n Debug.Print sngRow.Cells(1, 22) <> \"\" ' If there is an error in the cell (e.g. division by 0), a type mismatch error occurs\n Debug.Print sngRow.Cells(1, 22) <> vbNullString ' If there is an error in the cell (e.g. division by 0), a type mismatch error occurs\n Debug.Print TypeName(sngRow.Cells(1, 22).Value) <> \"Empty\"\nEnd Sub\n\n" ]
[ 2 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0074669023_excel_vba.txt
Q: When I append a path, python gives an error I tried to add a directory path to sys.path, but it gives me an error: import sys sys.path.append("C:\Users\tamer\Desktop\code\python\modules") SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape A: This should do it: sys.path.append("C:\\Users\\tamer\\Desktop\\code\\python\\modules") A: Another approach is to use raw string, basically r is prefixed. For this use case it should be. sys.path.append(r"C:\Users\tamer\Desktop\code\python\modules")
When I append a path, python gives an error
I tried to add a directory path to sys.path, but it gives me an error: import sys sys.path.append("C:\Users\tamer\Desktop\code\python\modules") SyntaxError: (unicode error) 'unicodeescape' codec can't decode bytes in position 2-3: truncated \UXXXXXXXX escape
[ "This should do it:\nsys.path.append(\"C:\\\\Users\\\\tamer\\\\Desktop\\\\code\\\\python\\\\modules\")\n", "Another approach is to use raw string, basically r is prefixed.\nFor this use case it should be.\nsys.path.append(r\"C:\\Users\\tamer\\Desktop\\code\\python\\modules\")\n\n" ]
[ 0, 0 ]
[]
[]
[ "python", "sys", "sys.path", "windows" ]
stackoverflow_0074667805_python_sys_sys.path_windows.txt
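Two more ways to sidestep the backslash-escape problem entirely, shown as a sketch using the path from the question: Windows file APIs accept forward slashes, and pathlib builds the path without any escaping at all.

import sys
from pathlib import Path

# forward slashes need no escaping and are accepted by Windows path handling
sys.path.append("C:/Users/tamer/Desktop/code/python/modules")

# or build the same path from the user's home directory with pathlib
sys.path.append(str(Path.home() / "Desktop" / "code" / "python" / "modules"))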
Q: Stable Diffusion (Wheel 'torch' located at ___ is invalid.) I have been attempting to install stable diffusion and have run into this error that I have no idea how to fix. When attempting to run the webui-user i receive this message venv "C:\Users\___\Desktop\AI\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: ce049c471b4a1d22f5a8fe8f527788edcf934eda Installing torch and torchvision Traceback (most recent call last): File "C:\Users\___\Desktop\AI\stable-diffusion-webui\launch.py", line 293, in <module> prepare_enviroment() File "C:\Users\___\Desktop\AI\stable-diffusion-webui\launch.py", line 205, in prepare_enviroment run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch") File "C:\Users\___\Desktop\AI\stable-diffusion-webui\launch.py", line 49, in run raise RuntimeError(message) RuntimeError: Couldn't install torch. Command: "C:\Users\___\Desktop\AI\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 Error code: 1 stdout: Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113 Collecting torch==1.12.1+cu113 Downloading https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-win_amd64.whl (2143.8 MB) ---- 0.2/2.1 GB 17.2 MB/s eta 0:01:51 stderr: ERROR: Wheel 'torch' located at C:\Users\___\AppData\Local\Temp\pip-unpack-qiio06a2\torch-1.12.1+cu113-cp310-cp310-win_amd64.whl is invalid. I haven't really tried anything to fix this because I haven't seen many others being vocal with solutions with problem when attempting to install SD. (I did notice it says amd in the file it says is invalid, and I personally own a nvidia gpu and amd cpu. If this file is for amd gpu users this may be the issue, but I am not sure) [Also the method I followed was this "What I did so far was use chocolaty to install the dependencies of Python 3.10.6 and Git. Then utilized the "git clone" on the Automatic 1111 url. Downloaded my preferred weights ckpt file. Added them to the models folder. Then ran the webui-user and got the error seen in my post. I also had a pip error, but after running the update command I was prompted by the webui-user.bat it stopped having that error and left me with this wheel error. I followed a youtube video, but I pretty much followed the method that is shown on the Automatic 1111 website."]
Stable Diffusion (Wheel 'torch' located at ___ is invalid.)
I have been attempting to install stable diffusion and have run into this error that I have no idea how to fix. When attempting to run the webui-user i receive this message venv "C:\Users\___\Desktop\AI\stable-diffusion-webui\venv\Scripts\Python.exe" Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)] Commit hash: ce049c471b4a1d22f5a8fe8f527788edcf934eda Installing torch and torchvision Traceback (most recent call last): File "C:\Users\___\Desktop\AI\stable-diffusion-webui\launch.py", line 293, in <module> prepare_enviroment() File "C:\Users\___\Desktop\AI\stable-diffusion-webui\launch.py", line 205, in prepare_enviroment run(f'"{python}" -m {torch_command}', "Installing torch and torchvision", "Couldn't install torch") File "C:\Users\___\Desktop\AI\stable-diffusion-webui\launch.py", line 49, in run raise RuntimeError(message) RuntimeError: Couldn't install torch. Command: "C:\Users\___\Desktop\AI\stable-diffusion-webui\venv\Scripts\python.exe" -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 Error code: 1 stdout: Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu113 Collecting torch==1.12.1+cu113 Downloading https://download.pytorch.org/whl/cu113/torch-1.12.1%2Bcu113-cp310-cp310-win_amd64.whl (2143.8 MB) ---- 0.2/2.1 GB 17.2 MB/s eta 0:01:51 stderr: ERROR: Wheel 'torch' located at C:\Users\___\AppData\Local\Temp\pip-unpack-qiio06a2\torch-1.12.1+cu113-cp310-cp310-win_amd64.whl is invalid. I haven't really tried anything to fix this because I haven't seen many others being vocal with solutions with problem when attempting to install SD. (I did notice it says amd in the file it says is invalid, and I personally own a nvidia gpu and amd cpu. If this file is for amd gpu users this may be the issue, but I am not sure) [Also the method I followed was this "What I did so far was use chocolaty to install the dependencies of Python 3.10.6 and Git. Then utilized the "git clone" on the Automatic 1111 url. Downloaded my preferred weights ckpt file. Added them to the models folder. Then ran the webui-user and got the error seen in my post. I also had a pip error, but after running the update command I was prompted by the webui-user.bat it stopped having that error and left me with this wheel error. I followed a youtube video, but I pretty much followed the method that is shown on the Automatic 1111 website."]
[]
[]
[ "Automatic1111 is easier to install from scratch and has a nice user interface.\nYou'll have to provide the instructions you're following for us to help troubleshoot your particular method.\n" ]
[ -1 ]
[ "python", "stable_diffusion" ]
stackoverflow_0074665626_python_stable_diffusion.txt
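Two observations on the record above, which has no answer: the win_amd64 suffix in the wheel name refers to the 64-bit x86 architecture, not AMD GPUs, so an NVIDIA card is fine; and an "invalid wheel" error on a 2.1 GB file whose download log stops at 0.2/2.1 GB usually points to a truncated or corrupted download rather than a real packaging problem. A sketch of a manual retry from a Windows command prompt, purging pip's cache first (the path is taken from the question and would need adjusting):

cd C:\Users\___\Desktop\AI\stable-diffusion-webui
venv\Scripts\python.exe -m pip cache purge
venv\Scripts\python.exe -m pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 ^
    --extra-index-url https://download.pytorch.org/whl/cu113

If the install succeeds here, re-running webui-user.bat should skip the torch step.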
Q: how to resolve dependency tree typeorm and nestjs during node modules installation During installation of a Nest Application node modules, I have the following error: npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/typeorm npm ERR! typeorm@"^0.2.45" from the root project npm ERR! peer typeorm@"^0.2.25" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer typeorm@"^0.3.0" from @nestjs/[email protected] npm ERR! node_modules/@nestjs/typeorm npm ERR! @nestjs/typeorm@"^8.0.3" from the root project npm ERR! peer @nestjs/typeorm@"^8.0.0" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. Can you give me suggestions to resolve dependency? Or is better to use --force or --legacy-peer-deps? Thank you in advance. Here dependencies section of my package.json "dependencies": { "@nestjs-modules/mailer": "^1.6.1", "@nestjs-query/query-typeorm": "^0.30.0", "@nestjs/common": "^8.4.4", "@nestjs/config": "^2.0.0", "@nestjs/core": "^8.0.0", "@nestjs/jwt": "^8.0.0", "@nestjs/mapped-types": "*", "@nestjs/passport": "^8.2.1", "@nestjs/platform-express": "^8.0.0", "@nestjs/swagger": "^5.2.1", "@nestjs/typeorm": "^8.0.3", "@types/bcrypt": "^5.0.0", "@types/cookie-parser": "^1.4.2", "bcrypt": "^5.0.1", "class-transformer": "^0.4.0", "class-validator": "^0.13.2", "cookie-parser": "^1.4.6", "fastify-swagger": "^5.1.0", "handlebars": "^4.7.7", "joi": "^17.6.0", "passport": "^0.5.2", "passport-jwt": "^4.0.0", "passport-local": "^1.0.0", "pdfmake": "^0.2.5", "pg": "^8.7.3", "reflect-metadata": "^0.1.13", "rimraf": "^3.0.2", "rxjs": "^7.2.0", "swagger-themes": "^1.2.22", "swagger-ui-express": "^4.3.0", "typeorm": "^0.2.45", "uuid": "^8.3.2", "webpack": "^5.72.1" } I tried to remove "typeorm": "^0.2.45" from package.json, but I have same type error: npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/typeorm npm ERR! peer typeorm@"^0.3.0" from @nestjs/[email protected] npm ERR! node_modules/@nestjs/typeorm npm ERR! @nestjs/typeorm@"^8.0.3" from the root project npm ERR! peer @nestjs/typeorm@"^8.0.0" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer typeorm@"^0.2.25" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project A: @nestjs/[email protected] requires that you use typeorm@^0.3.0 but @nestjs-query/query-typeorm@"^0.30.0" requires that you use typeorm@^0.2.25. 
You need to downgrade @nestjs/typeorm to something that is compatible with @nestjs-query/query-typeorm or upgrade @nestjs-query/query-typeorm to something that has a compatible typeorm version with @nestjs/typeorm@^8.1.4 A: I fixed version: "@nestjs/typeorm": "8.0.3" and now it works, thanks @jay-mcdoniel
how to resolve dependency tree typeorm and nestjs during node modules installation
During installation of a Nest Application node modules, I have the following error: npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/typeorm npm ERR! typeorm@"^0.2.45" from the root project npm ERR! peer typeorm@"^0.2.25" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer typeorm@"^0.3.0" from @nestjs/[email protected] npm ERR! node_modules/@nestjs/typeorm npm ERR! @nestjs/typeorm@"^8.0.3" from the root project npm ERR! peer @nestjs/typeorm@"^8.0.0" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. Can you give me suggestions to resolve dependency? Or is better to use --force or --legacy-peer-deps? Thank you in advance. Here dependencies section of my package.json "dependencies": { "@nestjs-modules/mailer": "^1.6.1", "@nestjs-query/query-typeorm": "^0.30.0", "@nestjs/common": "^8.4.4", "@nestjs/config": "^2.0.0", "@nestjs/core": "^8.0.0", "@nestjs/jwt": "^8.0.0", "@nestjs/mapped-types": "*", "@nestjs/passport": "^8.2.1", "@nestjs/platform-express": "^8.0.0", "@nestjs/swagger": "^5.2.1", "@nestjs/typeorm": "^8.0.3", "@types/bcrypt": "^5.0.0", "@types/cookie-parser": "^1.4.2", "bcrypt": "^5.0.1", "class-transformer": "^0.4.0", "class-validator": "^0.13.2", "cookie-parser": "^1.4.6", "fastify-swagger": "^5.1.0", "handlebars": "^4.7.7", "joi": "^17.6.0", "passport": "^0.5.2", "passport-jwt": "^4.0.0", "passport-local": "^1.0.0", "pdfmake": "^0.2.5", "pg": "^8.7.3", "reflect-metadata": "^0.1.13", "rimraf": "^3.0.2", "rxjs": "^7.2.0", "swagger-themes": "^1.2.22", "swagger-ui-express": "^4.3.0", "typeorm": "^0.2.45", "uuid": "^8.3.2", "webpack": "^5.72.1" } I tried to remove "typeorm": "^0.2.45" from package.json, but I have same type error: npm ERR! code ERESOLVE npm ERR! ERESOLVE unable to resolve dependency tree npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/typeorm npm ERR! peer typeorm@"^0.3.0" from @nestjs/[email protected] npm ERR! node_modules/@nestjs/typeorm npm ERR! @nestjs/typeorm@"^8.0.3" from the root project npm ERR! peer @nestjs/typeorm@"^8.0.0" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer typeorm@"^0.2.25" from @nestjs-query/[email protected] npm ERR! node_modules/@nestjs-query/query-typeorm npm ERR! @nestjs-query/query-typeorm@"^0.30.0" from the root project
[ "@nestjs/[email protected] requires that you use typeorm@^0.3.0 but @nestjs-query/query-typeorm@\"^0.30.0\" requires that you use typeorm@^0.2.25. You need to downgrade @nestjs/typeorm to something that is compatible with @nestjs-query/query-typeorm or upgrade @nestjs-query/query-typeorm to something that has a compatible typeorm version with @nestjs/typeorm@^8.1.4\n", "I fixed version:\n\"@nestjs/typeorm\": \"8.0.3\"\nand now it works, thanks @jay-mcdoniel\n" ]
[ 0, 0 ]
[]
[]
[ "nestjs", "node_modules" ]
stackoverflow_0074585212_nestjs_node_modules.txt
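For reference, the pin described in the last answer above would look roughly like this in package.json (the other version ranges are unchanged from the question); running npm install @nestjs/[email protected] --save-exact achieves the same pin:

"dependencies": {
  "@nestjs-query/query-typeorm": "^0.30.0",
  "@nestjs/typeorm": "8.0.3",
  "typeorm": "^0.2.45"
}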
Q: How do I intersect two HashSets while moving values in common into a new set? use std::collections::HashSet; let mut a: HashSet<T> = HashSet::new(); let mut b: HashSet<T> = HashSet::new(); let mut c: HashSet<T> = a.intersection(&b).collect(); // Error: a collection of type `std::collections::HashSet<T>` cannot be built from an iterator over elements of type `&T` I no longer need the non-intersecting values. How do I steal/move the data from the sets a and b into c without copying or cloning? Ideally, this would have the theoretically optimal time-complexity: O(min(a, b)). A: The aliasing rules imposed by the compiler requires you to move the values back and forth. Values can be drained out of a set, although unconditionally. However, we can send certain values back if we keep track of which should be moved and which should stay in a new set. Afterwards, retain allows us to remove the common values from the second set. use std::collections::HashSet; use std::hash::Hash; /// Extracts the common values in `a` and `b` into a new set. fn inplace_intersection<T>(a: &mut HashSet<T>, b: &mut HashSet<T>) -> HashSet<T> where T: Hash, T: Eq, { let x: HashSet<(T, bool)> = a .drain() .map(|v| { let intersects = b.contains(&v); (v, intersects) }) .collect(); let mut c = HashSet::new(); for (v, is_inter) in x { if is_inter { c.insert(v); } else { a.insert(v); } } b.retain(|v| !c.contains(&v)); c } Using: use itertools::Itertools; // for .sorted() let mut a: HashSet<_> = [1, 2, 3].iter().cloned().collect(); let mut b: HashSet<_> = [4, 2, 3].iter().cloned().collect(); let c = inplace_intersection(&mut a, &mut b); let a: Vec<_> = a.into_iter().sorted().collect(); let b: Vec<_> = b.into_iter().sorted().collect(); let c: Vec<_> = c.into_iter().sorted().collect(); assert_eq!(&a, &[1]); assert_eq!(&b, &[4]); assert_eq!(&c, &[2, 3]); Playground A: Another solution, similar to E_net4's, but this one does not involve draining and then repopulating the first set. IMHO it is slightly easier to read as well. fn inplace_intersection<T>(a: &mut HashSet<T>, b: &mut HashSet<T>) -> HashSet<T> where T: Hash, T: Eq, { let mut c = HashSet::new(); for v in a.iter() { if let Some(found) = b.take(v) { c.insert(found); } } a.retain(|v| !c.contains(&v)); c } Playground Link After writing this I realized it could be made even simpler: fn inplace_intersection<T>(a: &mut HashSet<T>, b: &mut HashSet<T>) -> HashSet<T> where T: Hash, T: Eq, { let c: HashSet<T> = a.iter().filter_map(|v| b.take(v)).collect(); a.retain(|v| !c.contains(&v)); c } Playground Link A: Alternatively, if you can take ownership over the sets themselves and don't care about retaining the non-intersecting values in the other sets, you can do the following: use std::hash::Hash; use std::collections::HashSet; fn intersection<T: Eq + Hash>(a: HashSet<T>, b: &HashSet<T>) -> HashSet<T> { a.into_iter().filter(|e| b.contains(e)).collect() } This takes the elements in a which are contained in b and collects them into a new HashSet A: You can use the bitwise and operator as well: let mut c: HashSet<T> = &a & &b
How do I intersect two HashSets while moving values in common into a new set?
use std::collections::HashSet; let mut a: HashSet<T> = HashSet::new(); let mut b: HashSet<T> = HashSet::new(); let mut c: HashSet<T> = a.intersection(&b).collect(); // Error: a collection of type `std::collections::HashSet<T>` cannot be built from an iterator over elements of type `&T` I no longer need the non-intersecting values. How do I steal/move the data from the sets a and b into c without copying or cloning? Ideally, this would have the theoretically optimal time-complexity: O(min(a, b)).
[ "The aliasing rules imposed by the compiler requires you to move the values back and forth. Values can be drained out of a set, although unconditionally. However, we can send certain values back if we keep track of which should be moved and which should stay in a new set. Afterwards, retain allows us to remove the common values from the second set.\nuse std::collections::HashSet;\nuse std::hash::Hash;\n\n/// Extracts the common values in `a` and `b` into a new set.\nfn inplace_intersection<T>(a: &mut HashSet<T>, b: &mut HashSet<T>) -> HashSet<T>\nwhere\n T: Hash,\n T: Eq,\n{\n let x: HashSet<(T, bool)> = a\n .drain()\n .map(|v| {\n let intersects = b.contains(&v);\n (v, intersects)\n })\n .collect();\n\n let mut c = HashSet::new();\n for (v, is_inter) in x {\n if is_inter {\n c.insert(v);\n } else {\n a.insert(v);\n }\n }\n\n b.retain(|v| !c.contains(&v));\n\n c\n}\n\nUsing:\nuse itertools::Itertools; // for .sorted()\n\nlet mut a: HashSet<_> = [1, 2, 3].iter().cloned().collect();\nlet mut b: HashSet<_> = [4, 2, 3].iter().cloned().collect();\n\nlet c = inplace_intersection(&mut a, &mut b);\n\nlet a: Vec<_> = a.into_iter().sorted().collect();\nlet b: Vec<_> = b.into_iter().sorted().collect();\nlet c: Vec<_> = c.into_iter().sorted().collect();\nassert_eq!(&a, &[1]);\nassert_eq!(&b, &[4]);\nassert_eq!(&c, &[2, 3]);\n\n\nPlayground\n", "Another solution, similar to E_net4's, but this one does not involve draining and then repopulating the first set. IMHO it is slightly easier to read as well.\nfn inplace_intersection<T>(a: &mut HashSet<T>, b: &mut HashSet<T>) -> HashSet<T>\nwhere\n T: Hash,\n T: Eq,\n{\n let mut c = HashSet::new();\n \n for v in a.iter() {\n if let Some(found) = b.take(v) {\n c.insert(found);\n }\n }\n \n a.retain(|v| !c.contains(&v));\n\n c\n}\n\nPlayground Link\nAfter writing this I realized it could be made even simpler:\nfn inplace_intersection<T>(a: &mut HashSet<T>, b: &mut HashSet<T>) -> HashSet<T>\nwhere\n T: Hash,\n T: Eq,\n{\n let c: HashSet<T> = a.iter().filter_map(|v| b.take(v)).collect();\n \n a.retain(|v| !c.contains(&v));\n\n c\n}\n\nPlayground Link\n", "Alternatively, if you can take ownership over the sets themselves and don't care about retaining the non-intersecting values in the other sets, you can do the following:\nuse std::hash::Hash;\nuse std::collections::HashSet;\n\nfn intersection<T: Eq + Hash>(a: HashSet<T>, b: &HashSet<T>) -> HashSet<T> {\n a.into_iter().filter(|e| b.contains(e)).collect()\n}\n\nThis takes the elements in a which are contained in b and collects them into a new HashSet\n", "You can use the bitwise and operator as well:\nlet mut c: HashSet<T> = &a & &b\n\n" ]
[ 7, 4, 3, 0 ]
[]
[]
[ "rust" ]
stackoverflow_0055975234_rust.txt
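One caveat on the operator form in the last answer: BitAnd is implemented for &HashSet<T> only when T: Clone, and it clones the common elements rather than moving them, so it does not satisfy the question's no-clone requirement; it is, however, the shortest option when cloning is acceptable. A small usage sketch:

use std::collections::HashSet;

fn main() {
    let a: HashSet<i32> = HashSet::from([1, 2, 3]);
    let b: HashSet<i32> = HashSet::from([4, 2, 3]);

    // clones 2 and 3 into c; a and b are untouched and still usable afterwards
    let c: HashSet<i32> = &a & &b;
    assert_eq!(c.len(), 2);
    assert_eq!(a.len(), 3);
}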
Q: Running Cypress E2E Tests in Kubernetes Pod - Parallel Execution I have Cypress E2E tests currently running in Kubernetes pods using the Docker container setup. I'm trying to run these tests in parallel. Previously I have been using the plugin https://github.com/cypress-io/github-action, which takes args to do that, but this time I'm unsure of how to do it.
Running Cypress E2E Tests in Kubernetes Pod - Parallel Execution
I have Cypress E2E tests currently running in Kubernetes pods using the Docker container setup. I'm trying to run these tests in parallel. Previously I have been using the plugin https://github.com/cypress-io/github-action, which takes args to do that, but this time I'm unsure of how to do it.
[]
[]
[ "There are multiple ways to run your Cypress tests in parallel in K8s Cluster:\n\nAs a first step, you can set up multiple stages that run in parallel, and all of these Cypress stages can run in the same pod without issue.\n\nYou can use cypress-parallel library to run multiple tests in parallel on the same pods.\n\nIf you want to run the Cypress tests in separate pods, you need to create pods for each parallel stage, then run your Cypress script using tags or different scripts.\n\n\nyou can refer to this blog , i found it very useful : https://medium.com/@iamsanjeevkumar/run-cypress-test-in-parallel-without-cypress-dashboard-1c0c33377628\n" ]
[ -1 ]
[ "cypress", "kubernetes", "parallel_processing" ]
stackoverflow_0073366983_cypress_kubernetes_parallel_processing.txt
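To make the "separate pods" suggestion above concrete, here is a minimal sketch of an indexed Kubernetes Job; the image name and the run-shard.sh script are hypothetical, and that script would map $JOB_COMPLETION_INDEX to a subset of spec files before calling npx cypress run --spec with that subset:

apiVersion: batch/v1
kind: Job
metadata:
  name: cypress-e2e
spec:
  completionMode: Indexed   # each pod receives a JOB_COMPLETION_INDEX env var
  completions: 4            # total shards
  parallelism: 4            # shards allowed to run at the same time
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: cypress
          image: registry.example.com/my-app-e2e:latest   # hypothetical image
          command: ["sh", "-c", "./run-shard.sh"]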
Q: How to copy images in assets to application document directory in Flutter? My flutter app has the following assets. assets: - "images/01.png" - "images/02.png" - "images/03.png" ... I'd like to copy these image files to the local path I can get by the following. Directory docDir = await getApplicattionDocumentsDirectory(); String localPath = docDir.path; A: Create a file in your application directory and copy your asset image byte by byte to that file. That's it! You copied your file. final Directory docDir = await getApplicationDocumentsDirectory(); final String localPath = docDir.path; File file = File('$localPath/${path.split('/').last}'); final imageBytes = await rootBundle.load(path); final buffer = imageBytes.buffer; await file.writeAsBytes( buffer.asUint8List(imageBytes.offsetInBytes, imageBytes.lengthInBytes)); A: getApplicationDocumentsDirectory you need path_provider package. Suppose you want to copy an m4a file to your application document directory pubspec.yaml assets: - assets/audio/no_sound_3n.m4a Your code import 'package:path_provider/path_provider.dart'; final Directory docDir = await getApplicationDocumentsDirectory(); final String localPath = docDir.path; File file = File(localPath); final asset = await rootBundle.load("assets/audio/no_sound_3n.m4a"); final buffer = asset.buffer; await file.writeAsBytes( buffer.asUint8List(asset.offsetInBytes, asset.lengthInBytes)); You asset folder
How to copy images in assets to application document directory in Flutter?
My flutter app has the following assets. assets: - "images/01.png" - "images/02.png" - "images/03.png" ... I'd like to copy these image files to the local path I can get by the following. Directory docDir = await getApplicattionDocumentsDirectory(); String localPath = docDir.path;
[ "Create a file in your application directory and copy your asset image byte by byte to that file. That's it! You copied your file. \nfinal Directory docDir = await getApplicationDocumentsDirectory();\nfinal String localPath = docDir.path;\nFile file = File('$localPath/${path.split('/').last}');\nfinal imageBytes = await rootBundle.load(path);\nfinal buffer = imageBytes.buffer;\nawait file.writeAsBytes(\n buffer.asUint8List(imageBytes.offsetInBytes, imageBytes.lengthInBytes));\n\n", "getApplicationDocumentsDirectory you need path_provider package.\nSuppose you want to copy an m4a file to your application document directory\npubspec.yaml\n assets:\n - assets/audio/no_sound_3n.m4a\n\nYour code\nimport 'package:path_provider/path_provider.dart';\n\nfinal Directory docDir = await getApplicationDocumentsDirectory();\nfinal String localPath = docDir.path;\nFile file = File(localPath);\nfinal asset = await rootBundle.load(\"assets/audio/no_sound_3n.m4a\");\nfinal buffer = asset.buffer;\nawait file.writeAsBytes(\n buffer.asUint8List(asset.offsetInBytes, asset.lengthInBytes));\n\nYou asset folder\n\n" ]
[ 7, 0 ]
[ "I too couldn't find a way yet. A way to mitigate this is following the assets guide, by specifying\nflutter:\n assets:\n - assets/ \n\nAnd load arbitrary files via rootBundle\nimport 'dart:async' show Future;\nimport 'package:flutter/services.dart' show rootBundle;\n\nFuture<String> loadAsset() async {\n return await rootBundle.loadString('assets/index.html');\n}\n\nor in your case using AssetImage\nWidget build(BuildContext context) {\n // ...\n return DecoratedBox(\n decoration: BoxDecoration(\n image: DecorationImage(\n image: AssetImage('graphics/background.png'),\n // ...\n ),\n // ...\n ),\n );\n // ...\n}\n\n", "This thread is pretty old, but in case anyone is still looking for an answer, this is how I did it :)\n rootBundle.load('assets/test.jpg').then((content) {\n File newFile = File('${dir.path}/img.jpg');\n newFile.writeAsBytesSync(content.buffer.asUint8List());\n visionImage = FirebaseVisionImage.fromFile(newFile);\n _runAnalysis();\n });\n\n" ]
[ -1, -1 ]
[ "flutter" ]
stackoverflow_0053292265_flutter.txt
Q: how to call a dictionary key that is inside another dict I have a little problem with a simple function that needs solving: I would like to be able to edit a dict with the update() method, but the problem is that I want to update it using its value and not its key. Here is my code: contacts = {"Mohamed": {"name": "Mohamed Sayed", "number": "0123565665", "birthday": "24.11.1990", "address": "Ginnheim 60487"}, "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789", "birthday": "06.06.1990", "address": "India"}} def edit_contact(): user_input = input("Please enter the name of the contact you want to edit: ") for k in contacts: if user_input == contacts["Mohamed"]["name"]: print(contacts) A: If they enter the first name, then the sub-dictionary is simply contacts[name]. You don't need to loop over the whole contacts dictionary. def edit_contact(): name = input("Please enter the first name of the contact you want to edit: ") if name in contacts: print(contacts[name]) else: print("I could not find that contact")
how to call a dictionary key that is inside another dict
I have a little problem with a simple function that needs solving: I would like to be able to edit a dict with the update() method, but the problem is that I want to update it using its value and not its key. Here is my code: contacts = {"Mohamed": {"name": "Mohamed Sayed", "number": "0123565665", "birthday": "24.11.1990", "address": "Ginnheim 60487"}, "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789", "birthday": "06.06.1990", "address": "India"}} def edit_contact(): user_input = input("Please enter the name of the contact you want to edit: ") for k in contacts: if user_input == contacts["Mohamed"]["name"]: print(contacts)
[ "If they enter the first name, then the sub-dictionary is simply contacts[name]. You don't need to loop over the whole contacts dictionary.\ndef edit_contact():\n name = input(\"Please enter the first name of the contact you want to edit: \")\n if name in contacts:\n print(contacts[name])\n else:\n print(\"I could not find that contact\")\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "list", "python" ]
stackoverflow_0074669223_dictionary_list_python.txt
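Since the question above asks about updating by the nested name value (not the outer key) with update(), here is a small sketch that searches the sub-dicts and applies dict.update() on the match:

contacts = {
    "Mohamed": {"name": "Mohamed Sayed", "number": "0123565665"},
    "Ahmed": {"name": "Ahmed Sayed", "number": "0123456789"},
}

def edit_contact_by_full_name(full_name, **changes):
    """Update the contact whose nested 'name' equals full_name."""
    for key, details in contacts.items():
        if details["name"] == full_name:
            details.update(changes)   # e.g. number="0999888777"
            return key                # outer key of the edited contact
    return None                       # no matching contact

edit_contact_by_full_name("Ahmed Sayed", number="0999888777")
print(contacts["Ahmed"])  # {'name': 'Ahmed Sayed', 'number': '0999888777'}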
Q: Codeigniter import database from framework | You have an error in your SQL syntax; I write function for import database on submit: This is one table first DROP and then CREATE and INSERT: public function demo_importer_post() { $folder_name = 'dumps'; $file_name = 'feed_ui2.sql'; $path = 'assets/backup_db/'; // Codeigniter application /assets $file_restore = $this->load->file($path . $folder_name . '/' . $file_name, true); $file_array = explode(';', $file_restore); foreach ($file_array as $query) { $this->db->query("SET FOREIGN_KEY_CHECKS = 0"); $this->db->query($query); $this->db->query("SET FOREIGN_KEY_CHECKS = 1"); } } First I try import sql file directly in PhpMyAdmin. Working correct. Now I try import from CodeIgniter and: Error Number: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' @ update I checked and with PHPMYADMIN I import correct with below query. https://prnt.sc/bViDzHpr5jz3 UPDATE `general_settings` SET custom_css_codes = " .product-content-details .price .lbl-price { position: relative; display: block; float: left; font-size: 24px; line-height: 30px; color: red; }" But from Codeigniter I have problem. When I delete : and ; from css code UPDATE `general_settings` SET custom_css_codes = " .product-content-details .price .lbl-price { position relative display block float left font-size 24px line-height 30px color red }" WHERE id = 1; then working update also from codeigniter. So how to update from codeigniter with : and ; in css code?
Codeigniter import database from framework | You have an error in your SQL syntax;
I write function for import database on submit: This is one table first DROP and then CREATE and INSERT: public function demo_importer_post() { $folder_name = 'dumps'; $file_name = 'feed_ui2.sql'; $path = 'assets/backup_db/'; // Codeigniter application /assets $file_restore = $this->load->file($path . $folder_name . '/' . $file_name, true); $file_array = explode(';', $file_restore); foreach ($file_array as $query) { $this->db->query("SET FOREIGN_KEY_CHECKS = 0"); $this->db->query($query); $this->db->query("SET FOREIGN_KEY_CHECKS = 1"); } } First I try import sql file directly in PhpMyAdmin. Working correct. Now I try import from CodeIgniter and: Error Number: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near '' @ update I checked and with PHPMYADMIN I import correct with below query. https://prnt.sc/bViDzHpr5jz3 UPDATE `general_settings` SET custom_css_codes = " .product-content-details .price .lbl-price { position: relative; display: block; float: left; font-size: 24px; line-height: 30px; color: red; }" But from Codeigniter I have problem. When I delete : and ; from css code UPDATE `general_settings` SET custom_css_codes = " .product-content-details .price .lbl-price { position relative display block float left font-size 24px line-height 30px color red }" WHERE id = 1; then working update also from codeigniter. So how to update from codeigniter with : and ; in css code?
[]
[]
[ "This is 100% working on my project.\n public function export(){\n \n \n $this->load->database();\n $host = $this->db->hostname;\n $username = $this->db->username;\n $password = $this->db->password;\n $database_name = $this->db->database;\n\n // Get connection object and set the charset\n $conn = mysqli_connect($host, $username, $password,$database_name);\n $conn->set_charset(\"utf8\");\n\n\n // Get All Table Names From the Database\n $tables = array();\n $sql = \"SHOW TABLES\";\n $result = mysqli_query($conn, $sql);\n\n while ($row = mysqli_fetch_row($result)) {\n $tables[] = $row[0];\n }\n\n $sqlScript = \"\";\n foreach ($tables as $table) {\n \n // Prepare SQLscript for creating table structure\n $query = \"SHOW CREATE TABLE $table\";\n $result = mysqli_query($conn, $query);\n $row = mysqli_fetch_row($result);\n \n $sqlScript .= \"\\n\\n\" . $row[1] . \";\\n\\n\";\n \n \n $query = \"SELECT * FROM $table\";\n $result = mysqli_query($conn, $query);\n \n $columnCount = mysqli_num_fields($result);\n \n // Prepare SQLscript for dumping data for each table\n for ($i = 0; $i < $columnCount; $i ++) {\n while ($row = mysqli_fetch_row($result)) {\n $sqlScript .= \"INSERT INTO $table VALUES(\";\n for ($j = 0; $j < $columnCount; $j ++) {\n $row[$j] = $row[$j];\n \n if (isset($row[$j])) {\n $sqlScript .= '\"' . $row[$j] . '\"';\n } else {\n $sqlScript .= '\"\"';\n }\n if ($j < ($columnCount - 1)) {\n $sqlScript .= ',';\n }\n }\n $sqlScript .= \");\\n\";\n }\n }\n \n $sqlScript .= \"\\n\"; \n }\n\n if(!empty($sqlScript))\n {\n // Save the SQL script to a backup file\n $backup_file_name = $database_name . '_backup_' . time() . '.sql';\n $fileHandler = fopen($backup_file_name, 'w+');\n $number_of_lines = fwrite($fileHandler, $sqlScript);\n fclose($fileHandler); \n\n // Download the SQL backup file to the browser\n header('Content-Description: File Transfer');\n header('Content-Type: application/octet-stream');\n header('Content-Disposition: attachment; filename=' . basename($backup_file_name));\n header('Content-Transfer-Encoding: binary');\n header('Expires: 0');\n header('Cache-Control: must-revalidate');\n header('Pragma: public');\n header('Content-Length: ' . filesize($backup_file_name));\n ob_clean();\n flush();\n readfile($backup_file_name);\n exec('rm ' . $backup_file_name); \n }\n }\n\n" ]
[ -1 ]
[ "codeigniter", "mysql" ]
stackoverflow_0074668599_codeigniter_mysql.txt
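The export snippet above does not address the actual failure in the question: explode(';', ...) splits inside the quoted CSS value, so every statement containing a ';' in a string literal is cut short and MariaDB reports a syntax error. A PHP sketch of a quote-aware splitter (treats ';' as a terminator only outside single- or double-quoted literals; it is illustrative, not a full SQL parser, and ignores comments and backticked identifiers):

function split_sql_statements(string $sql): array
{
    $statements = [];
    $current = '';
    $in_quote = null;                      // null, "'" or '"'
    $len = strlen($sql);

    for ($i = 0; $i < $len; $i++) {
        $ch = $sql[$i];
        if ($in_quote !== null) {
            $current .= $ch;
            if ($ch === '\\' && $i + 1 < $len) {
                $current .= $sql[++$i];    // keep the escaped char, stay inside the literal
            } elseif ($ch === $in_quote) {
                $in_quote = null;          // closing quote
            }
        } elseif ($ch === "'" || $ch === '"') {
            $in_quote = $ch;
            $current .= $ch;
        } elseif ($ch === ';') {
            if (trim($current) !== '') { $statements[] = trim($current); }
            $current = '';
        } else {
            $current .= $ch;
        }
    }
    if (trim($current) !== '') { $statements[] = trim($current); }
    return $statements;
}

// usage in the importer, replacing the explode(';') loop:
// foreach (split_sql_statements($file_restore) as $query) { $this->db->query($query); }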
Q: Postponing the release of an element from resource when the next queue is full I used some inputs that I got from this forum and got quite far while using simpy for the first time in my life for university. Now my question remains: I can see that the customer/entity goes through process0 and the process1_broker but gets stuck right after entering process1. It never comes out. What am I doing wrong? I followed the code from an earlier answer directly, where only 1 queue is in place (see the comment section for this). class Entity(object): pass def process0(env, entity, process_0_res, process_1_q): print(f' {env.now} customer {entity.id} is in system') with process_0_res.request() as res_req: yield res_req yield env.timeout(0.5) print(f'{env.now} customer {entity.id} is in queue 1') yield process_1_q.put(entity) def process1_broker(env, process_1_q, process_1_res): while True: # is resource available? res_req = process_1_res.request() yield res_req # is customer available? entity = yield process_1_q.get() # save resource request to release later entity.res_req = res_req # start process env.process(process1(env,entity,process_1_res, process_2_q)) def process1(env, entity, process_1_res, process_2_q): print(f' {env.now} customer {entity.id} in process 1') with process_1_res.request() as res_req: yield res_req yield env.timeout(2) print(f' {env.now} customer {entity.id} done with process 1') yield process_2_q.put(entity) def process2_broker(env, process_2_q, process_2_res): while True: res_req = process_2_res.request() yield res_req entity = yield process_2_q.get() entity.res_req = res_req env.process(process2(env,entity,process_2_res, process_3_q)) def process2(env, entity, process_2_res, process_3_q): print(f' {env.now} customer {entity.id} in process 2') with process_2_res.request() as res_req: yield res_req yield env.timeout(np.random.exponential(mu[1])) yield process_3_q.put(entity) def process3_broker(env, process_3_q, process_3_res): while True: res_req = process_3_res.request() yield res_req entity = yield process_3_q.get() entity.res_req = res_req env.process(process3(env,entity,process_3_res, process_4_q)) def process3(env, entity, process_3_res, process_4_q): print(f' {env.now} customer {entity.id} in process 3') with process_3_res.request() as res_req: yield res_req yield env.timeout(np.random.exponential(mu[2])) yield process_4_q.put(entity) def process4_broker(env, process_4_q, process_4_res): while True: res_req = process_4_res.request() yield res_req entity = yield process_3_q.get() entity.res_req = res_req env.process(process4(env,entity,process_4_res)) def process4(env, entity, process_4_res, process_4_q): print(f' {env.now} customer {entity.id} in process 4') with process_4_res.request() as res_req: yield res_req yield env.timeout(np.random.exponential(mu[3])) yield process_4_res.release(entity.res_req) print(f' {env.now} customer {entity.id} leaves system') def gen_entities(env, process_0_res, process_1_q): next_id = 1 while True: yield env.timeout(np.random.exponential(labda)) entity = Entity() entity.id = next_id next_id += 1 env.process(process0(env, entity, process_0_res, process_1_q)) env = simpy.Environment() process_0_res = simpy.Resource(env, capacity = 1) process_1_res = simpy.Resource(env, capacity = 1) process_2_res = simpy.Resource(env, capacity = 1) process_3_res = simpy.Resource(env, capacity = 1) process_4_res = simpy.Resource(env, capacity = 1) process_1_q = simpy.Store(env, capacity = 5) process_2_q = simpy.Store(env, capacity = 4) process_3_q = 
simpy.Store(env, capacity = 3) process_4_q = simpy.Store(env, capacity = 2) env.process(gen_entities(env, process_0_res, process_1_q)) env.process(process1_broker(env, process_1_q, process_1_res)) env.process(process2_broker(env, process_2_q, process_2_res)) env.process(process3_broker(env, process_3_q, process_3_res)) env.process(process4_broker(env, process_4_q, process_4_res)) env.run(100) A: I used a store with a capacity to act as a blocking queue Process 1 will not release its process 1 resource until the entity can be put into the store. If the store is at capacity, it will block the put until process 2 pulls a entity from the store. To manage the store, I have a broker process that matches entities in the store with process 2 resources. When a match is made process 2 starts. """ Demostrates one way to do a blocking queue where a queue has a size limit and will 'block' arrivals whe the queue is at that limit sim two processes were the first process has a unlimited queue and second process with a limited blocking queue. A entity will not releases its resource from the first process until it can advance to the second process's queue Uses a simpy store to queue entities for process 2 and a broker process to match entities from the store with process 2 resources then starts process 2 when match is made Programmer: Michael R. Gibbs """ import simpy import random class Entity(object): """ Quick class to track entities dynamicly will add id, and resouce request """ pass def process1(env, entity, process_1_res, process_2_q): """ First process with a unlimited queue """ print(f'{env.now} entity {entity.id} has arrived in process 1') with process_1_res.request() as res_req: print(f'{env.now} process 1 queue size {len(process_1_res.queue)}') yield res_req print(f'{env.now} entity {entity.id} has seized a process 1 resource') yield env.timeout(random.uniform(1,5)) print(f'{env.now} entity {entity.id} has finished process 1') # this put will block if the store is at compacity yield process_2_q.put(entity) print(f'{env.now} entity {entity.id} has advance to process 2 queue') print(f'{env.now} process 1 queue size {len(process_1_res.queue)}') def process2_broker(env, process_2_q, process_2_res): """ When a resouce for process 2 is available, will pull and entity from the store beign uses at the process 2's blocking queue """ while True: print(f'{env.now} process 2 queue size {len(process_2_q.items)}') # wait for a resouce to become avaliable res_req = process_2_res.request() yield res_req # wait for a entity to be avaliable entity = yield process_2_q.get() # save the resouce request so we can release it later entity.res_req = res_req # start process 2, no yield here, start the next match imediatly env.process(process2(env, entity, process_2_res)) def process2(env, entity, process_2_res): """ Process 2 where a entiy has already been matched with a resouce in the broker """ print(f'{env.now} entity {entity.id} has started process 2') yield env.timeout(random.uniform(1,10)) print(f'{env.now} entity {entity.id} has finished process 2') # the resouce request was saved in the broker so it could be released now process_2_res.release(entity.res_req) def gen_entities(env, process_1_res, process_2_q): """ Generates a stream of entities to be processed starting with process 1 """ next_id = 1 while True: yield env.timeout(random.uniform(1,3)) entity = Entity() entity.id = next_id next_id += 1 env.process(process1(env, entity, process_1_res, process_2_q)) # boot up env = env = simpy.Environment() # resouces 
for processes process_1_res = simpy.Resource(env, capacity=3) process_2_res = simpy.Resource(env, capacity=2) # used as blocking queue for process 2 process_2_q = simpy.Store(env,capacity=5) env.process(gen_entities(env, process_1_res, process_2_q)) env.process(process2_broker(env, process_2_q, process_2_res)) env.run(100)
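Applying the accepted answer's pattern back to the question's code: the entity gets stuck because process1 requests process_1_res a second time even though process1_broker has already seized it, and with capacity 1 that second request can never be granted. A minimal sketch of process1 rewritten to reuse the request saved by the broker follows; variable names are taken from the question, the same change applies to process2 through process4, and note that the question's process4_broker pulls from process_3_q where process_4_q is presumably intended.

def process1(env, entity, process_1_res, process_2_q):
    # the broker already seized process_1_res and saved the request on the
    # entity, so do not request the resource again here
    print(f' {env.now} customer {entity.id} in process 1')
    yield env.timeout(2)
    print(f' {env.now} customer {entity.id} done with process 1')
    # this put blocks while process_2_q (capacity 4) is full, so the
    # process 1 resource stays held until the next queue has room
    yield process_2_q.put(entity)
    # release only after the entity has advanced to the next queue
    process_1_res.release(entity.res_req)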
Postponing the release of an element from resource when the next queue is full
I used some inputs that I got from this forum and got quite far while using simpy for the first time in my life for university. Now my question remains: I can see that the customer/entity goes through process0 and the process1_broker but gets stuck right after entering process1. It never comes out. What am I doing wrong? I followed the code from an earlier answer directly, where only 1 queue is in place (see the comment section for this). class Entity(object): pass def process0(env, entity, process_0_res, process_1_q): print(f' {env.now} customer {entity.id} is in system') with process_0_res.request() as res_req: yield res_req yield env.timeout(0.5) print(f'{env.now} customer {entity.id} is in queue 1') yield process_1_q.put(entity) def process1_broker(env, process_1_q, process_1_res): while True: # is resource available? res_req = process_1_res.request() yield res_req # is customer available? entity = yield process_1_q.get() # save resource request to release later entity.res_req = res_req # start process env.process(process1(env,entity,process_1_res, process_2_q)) def process1(env, entity, process_1_res, process_2_q): print(f' {env.now} customer {entity.id} in process 1') with process_1_res.request() as res_req: yield res_req yield env.timeout(2) print(f' {env.now} customer {entity.id} done with process 1') yield process_2_q.put(entity) def process2_broker(env, process_2_q, process_2_res): while True: res_req = process_2_res.request() yield res_req entity = yield process_2_q.get() entity.res_req = res_req env.process(process2(env,entity,process_2_res, process_3_q)) def process2(env, entity, process_2_res, process_3_q): print(f' {env.now} customer {entity.id} in process 2') with process_2_res.request() as res_req: yield res_req yield env.timeout(np.random.exponential(mu[1])) yield process_3_q.put(entity) def process3_broker(env, process_3_q, process_3_res): while True: res_req = process_3_res.request() yield res_req entity = yield process_3_q.get() entity.res_req = res_req env.process(process3(env,entity,process_3_res, process_4_q)) def process3(env, entity, process_3_res, process_4_q): print(f' {env.now} customer {entity.id} in process 3') with process_3_res.request() as res_req: yield res_req yield env.timeout(np.random.exponential(mu[2])) yield process_4_q.put(entity) def process4_broker(env, process_4_q, process_4_res): while True: res_req = process_4_res.request() yield res_req entity = yield process_3_q.get() entity.res_req = res_req env.process(process4(env,entity,process_4_res)) def process4(env, entity, process_4_res, process_4_q): print(f' {env.now} customer {entity.id} in process 4') with process_4_res.request() as res_req: yield res_req yield env.timeout(np.random.exponential(mu[3])) yield process_4_res.release(entity.res_req) print(f' {env.now} customer {entity.id} leaves system') def gen_entities(env, process_0_res, process_1_q): next_id = 1 while True: yield env.timeout(np.random.exponential(labda)) entity = Entity() entity.id = next_id next_id += 1 env.process(process0(env, entity, process_0_res, process_1_q)) env = simpy.Environment() process_0_res = simpy.Resource(env, capacity = 1) process_1_res = simpy.Resource(env, capacity = 1) process_2_res = simpy.Resource(env, capacity = 1) process_3_res = simpy.Resource(env, capacity = 1) process_4_res = simpy.Resource(env, capacity = 1) process_1_q = simpy.Store(env, capacity = 5) process_2_q = simpy.Store(env, capacity = 4) process_3_q = simpy.Store(env, capacity = 3) process_4_q = simpy.Store(env, capacity = 2) 
env.process(gen_entities(env, process_0_res, process_1_q)) env.process(process1_broker(env, process_1_q, process_1_res)) env.process(process2_broker(env, process_2_q, process_2_res)) env.process(process3_broker(env, process_3_q, process_3_res)) env.process(process4_broker(env, process_4_q, process_4_res)) env.run(100)
[ "I used a store with a capacity to act as a blocking queue\nProcess 1 will not release its process 1 resource until the entity can be put into the store. If the store is at capacity, it will block the put until process 2 pulls a entity from the store.\nTo manage the store, I have a broker process that matches entities in the store with process 2 resources. When a match is made process 2 starts.\n\"\"\"\nDemostrates one way to do a blocking queue\nwhere a queue has a size limit and will\n'block' arrivals whe the queue is at that limit\n\nsim two processes were the first process has a unlimited queue\nand second process with a limited blocking queue.\n\nA entity will not releases its resource from the first process\nuntil it can advance to the second process's queue\n\nUses a simpy store to queue entities for process 2\nand a broker process to match entities from the store\nwith process 2 resources then starts process 2 when match is made\n\nProgrammer: Michael R. Gibbs\n\"\"\"\n\nimport simpy\nimport random\n\nclass Entity(object):\n \"\"\"\n Quick class to track entities\n dynamicly will add id, and resouce request\n \"\"\"\n pass\n\ndef process1(env, entity, process_1_res, process_2_q):\n \"\"\"\n First process with a unlimited queue\n \"\"\"\n print(f'{env.now} entity {entity.id} has arrived in process 1')\n\n with process_1_res.request() as res_req:\n print(f'{env.now} process 1 queue size {len(process_1_res.queue)}')\n \n yield res_req\n print(f'{env.now} entity {entity.id} has seized a process 1 resource')\n\n yield env.timeout(random.uniform(1,5))\n print(f'{env.now} entity {entity.id} has finished process 1')\n\n # this put will block if the store is at compacity\n yield process_2_q.put(entity)\n print(f'{env.now} entity {entity.id} has advance to process 2 queue')\n \n print(f'{env.now} process 1 queue size {len(process_1_res.queue)}')\n\ndef process2_broker(env, process_2_q, process_2_res):\n \"\"\"\n When a resouce for process 2 is available, will pull\n and entity from the store beign uses at the process 2's\n blocking queue\n \"\"\"\n\n while True:\n print(f'{env.now} process 2 queue size {len(process_2_q.items)}')\n\n # wait for a resouce to become avaliable\n res_req = process_2_res.request()\n yield res_req\n\n # wait for a entity to be avaliable\n entity = yield process_2_q.get() \n\n # save the resouce request so we can release it later\n entity.res_req = res_req \n\n # start process 2, no yield here, start the next match imediatly\n env.process(process2(env, entity, process_2_res))\n\ndef process2(env, entity, process_2_res):\n \"\"\"\n Process 2 where a entiy has already been matched with\n a resouce in the broker\n \"\"\"\n\n print(f'{env.now} entity {entity.id} has started process 2')\n\n yield env.timeout(random.uniform(1,10))\n\n print(f'{env.now} entity {entity.id} has finished process 2')\n\n # the resouce request was saved in the broker so it could be released now\n process_2_res.release(entity.res_req)\n\ndef gen_entities(env, process_1_res, process_2_q):\n \"\"\"\n Generates a stream of entities to be processed\n starting with process 1\n \"\"\"\n\n next_id = 1\n\n while True:\n yield env.timeout(random.uniform(1,3))\n\n entity = Entity()\n entity.id = next_id\n next_id += 1\n\n env.process(process1(env, entity, process_1_res, process_2_q))\n\n# boot up\nenv = env = simpy.Environment()\n\n# resouces for processes\nprocess_1_res = simpy.Resource(env, capacity=3)\nprocess_2_res = simpy.Resource(env, capacity=2)\n\n# used as blocking queue for process 
2\nprocess_2_q = simpy.Store(env,capacity=5)\n\nenv.process(gen_entities(env, process_1_res, process_2_q))\nenv.process(process2_broker(env, process_2_q, process_2_res))\n\nenv.run(100)\n\n" ]
[ 0 ]
[]
[]
[ "python", "simpy", "while_loop" ]
stackoverflow_0074667274_python_simpy_while_loop.txt
Q: Cast string to float is not supported ---in tensorflow model.predict(test) Here is my code: import tensorflow as tf import numpy as np import pandas as pd import functools from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Dense from tensorflow.python.keras.losses import binary_crossentropy def process_continuous_data(mean, data): # Center the data data = tf.cast(data, tf.float32) * 1/(2*mean) return tf.reshape(data, [-1, 1]) # Load training data path = 'train_data.csv' data_train_tf=pd.read_csv(path) col=["Index","WeightKg","Age","Speed","V1_LF","V1_RF","V1_LH","V1_RH","STD_V1_LF","STD_V1_RF","STD_V1_LH","STD_V1_RH","V2_LF","V2_RF","V2_LH","V2_RH","STD_V2_LF","STD_V2_RF","STD_V2_LH","STD_V2_RH","V3_LF","V3_RF","V3_LH","V3_RH","STD_V3_LF","STD_V3_RF","STD_V3_LH","STD_V3_RH","V4_LF","V4_RF","V4_LH","V4_RH","STD_V4_LF","STD_V4_RF","STD_V4_LH","STD_V4_RH","V5_LF","V5_RF","V5_LH","V5_RH","STD_V5_LF","STD_V5_RF","STD_V5_LH","STD_V5_RH","V6_LF","V6_RF","V6_LH","V6Z_RH","STD_V6_LF","STD_V6_RF","STD_V6_LH","STD_V6_RH","V7_LF","V7_RF","V7_LH","V7_RH","STD_V7_LF","STD_V7_RF","STD_V7_LH","STD_V7_RH","V8_LF","V8_RF","V8_LH","V8_RH","STD_V8_LF","STD_V8_RF","STD_V8_LH","STD_V8_RH","V9_LF","V9_RF","V9_LH","V9_RH","STD_V9_LF","STD_V9_RF","STD_V9_LH","STD_V9_RH","V10_LF","V10_RF","V10_LH","V10_RH","STD_V10_LF","STD_V10_RF","STD_V10_LH","STD_V10_RH","V11_LF","V11_RF","V11_LH","V11_RH","STD_V11_LF","STD_V11_RF","STD_V11_LH","STD_V11_RH","V12_LF","V12_RF","V12_LH","V12_RH","STD_V12_LF","STD_V12_RF","STD_V12_LH","STD_V12_RH","V13_LF","V13_RF","V13_LH","V13_RH","STD_V13_LF","STD_V13_RF","STD_V13_LH","STD_V13_RH","Mean_V14_LF","Mean_V14_RF","Mean_V14_LH","Mean_V14_RH","STD_V14_LF","STD_V14_RF","STD_V14_LH","STD_V14_RH","V15_LF","V15_RF","V15_LH","V15_RH","STD_V15_LF","STD_V15_RF","STD_V15_LH","STD_V15_RH","V16_LF","V16_RF","V16_LH","V16_RH","V17_LF","V17_RF","V17_LH","V17_RH","V18_LF","V18_RF","V18_LH","V18_RH","V19_LF","V19_RF","V19_LH","V19_RH","V20_LF","V20_RF","V20_LH","V20_RH","V21_LF","V21_RF","V21_LH","V21_RH","V22_LF","V22_RF","V22_LH","V22_RH","V23_LF","V23_RF","V23_LH","V23_RH","V24_LF","V24_RF","V24_LH","V24_RH","V25_LF","V25_RF","V25_LH","V25_RH","V26_LF","V26_RF","V26_LH","V26_RH","V27_LF","V27_RF","V27_LH","V27_RH","V28_LF","V28_RF","V28_LH","V28_RH","V29_LF","V29_RF","V29_LH","V29_RH","V30_LF","V30_RF","V30_LH","V30_RH","Speed_trot","V1_LF_trot","V1_RF_trot","V1_LH_trot","V1_RH_trot","STD_V1_LF_trot","STD_V1_RF_trot","STD_V1_LH_trot","STD_V1_RH_trot","V2_LF_trot","V2_RF_trot","V2_LH_trot","V2_RH_trot","STD_V2_LF_trot","STD_V2_RF_trot","STD_V2_LH_trot","STD_V2_RH_trot","V3_LF_trot","V3_RF_trot","V3_LH_trot","V3_RH_trot","STD_V3_LF_trot","STD_V3_RF_trot","STD_V3_LH_trot","STD_V3_RH_trot","V4_LF_trot","V4_RF_trot","V4_LH_trot","V4_RH_trot","STD_V4_LF_trot","STD_V4_RF_trot","STD_V4_LH_trot","STD_V4_RH_trot","V5_LF_trot","V5_RF_trot","V5_LH_trot","V5_RH_trot","STD_V5_LF_trot","STD_V5_RF_trot","STD_V5_LH_trot","STD_V5_RH_trot","V6_LF_trot","V6_RF_trot","V6_LH_trot","V6Z_RH_trot","STD_V6_LF_trot","STD_V6_RF_trot","STD_V6_LH_trot","STD_V6_RH_trot","V7_LF_trot","V7_RF_trot","V7_LH_trot","V7_RH_trot","STD_V7_LF_trot","STD_V7_RF_trot","STD_V7_LH_trot","STD_V7_RH_trot","V8_LF_trot","V8_RF_trot","V8_LH_trot","V8_RH_trot","STD_V8_LF_trot","STD_V8_RF_trot","STD_V8_LH_trot","STD_V8_RH_trot","V9_LF_trot","V9_RF_trot","V9_LH_trot","V9_RH_trot","STD_V9_LF_trot","STD_V9_RF_trot","STD_V9_LH_trot","STD_V9_RH_trot","V10_LF_trot","V10_RF_trot","V10_LH_trot","V10_RH_trot","S
TD_V10_LF_trot","STD_V10_RF_trot","STD_V10_LH_trot","STD_V10_RH_trot","V11_LF_trot","V11_RF_trot","V11_LH_trot","V11_RH_trot","STD_V11_LF_trot","STD_V11_RF_trot","STD_V11_LH_trot","STD_V11_RH_trot","V12_LF_trot","V12_RF_trot","V12_LH_trot","V12_RH_trot","STD_V12_LF_trot","STD_V12_RF_trot","STD_V12_LH_trot","STD_V12_RH_trot","V13_LF_trot","V13_RF_trot","V13_LH_trot","V13_RH_trot","STD_V13_LF_trot","STD_V13_RF_trot","STD_V13_LH_trot","STD_V13_RH_trot","Mean_V14_LF_trot","Mean_V14_RF_trot","Mean_V14_LH_trot","Mean_V14_RH_trot","STD_V14_LF_trot","STD_V14_RF_trot","STD_V14_LH_trot","STD_V14_RH_trot","V15_LF_trot","V15_RF_trot","V15_LH_trot","V15_RH_trot","STD_V15_LF_trot","STD_V15_RF_trot","STD_V15_LH_trot","STD_V15_RH_trot","V16_LF_trot","V16_RF_trot","V16_LH_trot","V16_RH_trot","V17_LF_trot","V17_RF_trot","V17_LH_trot","V17_RH_trot","V18_LF_trot","V18_RF_trot","V18_LH_trot","V18_RH_trot","V19_LF_trot","V19_RF_trot","V19_LH_trot","V19_RH_trot","V20_LF_trot","V20_RF_trot","V20_LH_trot","V20_RH_trot","V21_LF_trot","V21_RF_trot","V21_LH_trot","V21_RH_trot","V22_LF_trot","V22_RF_trot","V22_LH_trot","V22_RH_trot","V23_LF_trot","V23_RF_trot","V23_LH_trot","V23_RH_trot","V24_LF_trot","V24_RF_trot","V24_LH_trot","V24_RH_trot","V25_LF_trot","V25_RF_trot","V25_LH_trot","V25_RH_trot","V26_LF_trot","V26_RF_trot","V26_LH_trot","V26_RH_trot","V27_LF_trot","V27_RF_trot","V27_LH_trot","V27_RH_trot","V28_LF_trot","V28_RF_trot","V28_LH_trot","V28_RH_trot","V29_LF_trot","V29_RF_trot","V29_LH_trot","V29_RH_trot","V30_LF_trot","V30_RF_trot","V30_LH_trot","V30_RH_trot","Group0-NormalControl1Affected"] feature_names = col[:-1] label_name = col[-1] dataset = tf.data.experimental.make_csv_dataset("train_data.csv", batch_size=32 ,column_names=col,label_name=label_name) test_dataset = tf.data.experimental.make_csv_dataset("test_data_s.csv", batch_size=32 ,column_names=feature_names) desc = pd.read_csv('train_data.csv')[feature_names].describe() MEAN = np.array(desc.T['mean']) numerical_columns = [] for feature_id in range(len(feature_names)): num_col = tf.feature_column.numeric_column(feature_names[feature_id], normalizer_fn=functools.partial(process_continuous_data, MEAN[feature_id])) numerical_columns.append(num_col) # create a feature layer that will transform the input data numeric_layer = tf.keras.layers.DenseFeatures(numerical_columns) # Create model model=Sequential() model.add(numeric_layer) model.add(Dense(20, activation='relu')) model.add(Dense(12, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss=binary_crossentropy, optimizer='adam', metrics=['accuracy']) model.fit(dataset, epochs=10, steps_per_epoch=100) # # Predict predictions = model.predict(test_dataset) print(predictions) # #save predict result np.savetxt('result.csv', predictions, delimiter = ',') There is no error or exception when I train the model, but when I run this line: predictions = model.predict(test_data) It always says "Cast string to float is not supported" My test data looks like this: (there are 365 features, I show three here) Index WeightKg Age 143 38.3 1.56 154 23.9 2.24 30 25.1 4.01 111 38.8 5.49 183 36.5 3.21 The prediction should be a single value between of 0 or 1 for each row So I do not know where this string come from I used pandas to see the tyep, and all features are float64 Could anyone tell me where I did wrong? First time doing things like this. A: It just solved. It is because there are a few blank cells in the test CSV. Thus, TensorFlow treats them as strings.
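Following the accepted answer's diagnosis (blank cells in the test CSV are read as strings), one hedged way to confirm and clean the file before calling model.predict is to round-trip it through pandas. The fill strategy (column means) and the cleaned file name below are illustrative assumptions, not part of the original code.

import pandas as pd

test_df = pd.read_csv("test_data_s.csv")
# count blank/NaN cells per column to confirm the diagnosis
print(test_df.isna().sum().sort_values(ascending=False).head())

# fill blanks with the column mean (any numeric imputation would do)
test_df = test_df.fillna(test_df.mean(numeric_only=True))
test_df.to_csv("test_data_s_clean.csv", index=False)

# then build the dataset from the cleaned file as in the original code:
# test_dataset = tf.data.experimental.make_csv_dataset(
#     "test_data_s_clean.csv", batch_size=32, column_names=feature_names)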
Cast string to float is not supported ---in tensorflow model.predict(test)
Here is my code: import tensorflow as tf import numpy as np import pandas as pd import functools from tensorflow.python.keras.models import Sequential from tensorflow.python.keras.layers import Dense from tensorflow.python.keras.losses import binary_crossentropy def process_continuous_data(mean, data): # Center the data data = tf.cast(data, tf.float32) * 1/(2*mean) return tf.reshape(data, [-1, 1]) # Load training data path = 'train_data.csv' data_train_tf=pd.read_csv(path) col=["Index","WeightKg","Age","Speed","V1_LF","V1_RF","V1_LH","V1_RH","STD_V1_LF","STD_V1_RF","STD_V1_LH","STD_V1_RH","V2_LF","V2_RF","V2_LH","V2_RH","STD_V2_LF","STD_V2_RF","STD_V2_LH","STD_V2_RH","V3_LF","V3_RF","V3_LH","V3_RH","STD_V3_LF","STD_V3_RF","STD_V3_LH","STD_V3_RH","V4_LF","V4_RF","V4_LH","V4_RH","STD_V4_LF","STD_V4_RF","STD_V4_LH","STD_V4_RH","V5_LF","V5_RF","V5_LH","V5_RH","STD_V5_LF","STD_V5_RF","STD_V5_LH","STD_V5_RH","V6_LF","V6_RF","V6_LH","V6Z_RH","STD_V6_LF","STD_V6_RF","STD_V6_LH","STD_V6_RH","V7_LF","V7_RF","V7_LH","V7_RH","STD_V7_LF","STD_V7_RF","STD_V7_LH","STD_V7_RH","V8_LF","V8_RF","V8_LH","V8_RH","STD_V8_LF","STD_V8_RF","STD_V8_LH","STD_V8_RH","V9_LF","V9_RF","V9_LH","V9_RH","STD_V9_LF","STD_V9_RF","STD_V9_LH","STD_V9_RH","V10_LF","V10_RF","V10_LH","V10_RH","STD_V10_LF","STD_V10_RF","STD_V10_LH","STD_V10_RH","V11_LF","V11_RF","V11_LH","V11_RH","STD_V11_LF","STD_V11_RF","STD_V11_LH","STD_V11_RH","V12_LF","V12_RF","V12_LH","V12_RH","STD_V12_LF","STD_V12_RF","STD_V12_LH","STD_V12_RH","V13_LF","V13_RF","V13_LH","V13_RH","STD_V13_LF","STD_V13_RF","STD_V13_LH","STD_V13_RH","Mean_V14_LF","Mean_V14_RF","Mean_V14_LH","Mean_V14_RH","STD_V14_LF","STD_V14_RF","STD_V14_LH","STD_V14_RH","V15_LF","V15_RF","V15_LH","V15_RH","STD_V15_LF","STD_V15_RF","STD_V15_LH","STD_V15_RH","V16_LF","V16_RF","V16_LH","V16_RH","V17_LF","V17_RF","V17_LH","V17_RH","V18_LF","V18_RF","V18_LH","V18_RH","V19_LF","V19_RF","V19_LH","V19_RH","V20_LF","V20_RF","V20_LH","V20_RH","V21_LF","V21_RF","V21_LH","V21_RH","V22_LF","V22_RF","V22_LH","V22_RH","V23_LF","V23_RF","V23_LH","V23_RH","V24_LF","V24_RF","V24_LH","V24_RH","V25_LF","V25_RF","V25_LH","V25_RH","V26_LF","V26_RF","V26_LH","V26_RH","V27_LF","V27_RF","V27_LH","V27_RH","V28_LF","V28_RF","V28_LH","V28_RH","V29_LF","V29_RF","V29_LH","V29_RH","V30_LF","V30_RF","V30_LH","V30_RH","Speed_trot","V1_LF_trot","V1_RF_trot","V1_LH_trot","V1_RH_trot","STD_V1_LF_trot","STD_V1_RF_trot","STD_V1_LH_trot","STD_V1_RH_trot","V2_LF_trot","V2_RF_trot","V2_LH_trot","V2_RH_trot","STD_V2_LF_trot","STD_V2_RF_trot","STD_V2_LH_trot","STD_V2_RH_trot","V3_LF_trot","V3_RF_trot","V3_LH_trot","V3_RH_trot","STD_V3_LF_trot","STD_V3_RF_trot","STD_V3_LH_trot","STD_V3_RH_trot","V4_LF_trot","V4_RF_trot","V4_LH_trot","V4_RH_trot","STD_V4_LF_trot","STD_V4_RF_trot","STD_V4_LH_trot","STD_V4_RH_trot","V5_LF_trot","V5_RF_trot","V5_LH_trot","V5_RH_trot","STD_V5_LF_trot","STD_V5_RF_trot","STD_V5_LH_trot","STD_V5_RH_trot","V6_LF_trot","V6_RF_trot","V6_LH_trot","V6Z_RH_trot","STD_V6_LF_trot","STD_V6_RF_trot","STD_V6_LH_trot","STD_V6_RH_trot","V7_LF_trot","V7_RF_trot","V7_LH_trot","V7_RH_trot","STD_V7_LF_trot","STD_V7_RF_trot","STD_V7_LH_trot","STD_V7_RH_trot","V8_LF_trot","V8_RF_trot","V8_LH_trot","V8_RH_trot","STD_V8_LF_trot","STD_V8_RF_trot","STD_V8_LH_trot","STD_V8_RH_trot","V9_LF_trot","V9_RF_trot","V9_LH_trot","V9_RH_trot","STD_V9_LF_trot","STD_V9_RF_trot","STD_V9_LH_trot","STD_V9_RH_trot","V10_LF_trot","V10_RF_trot","V10_LH_trot","V10_RH_trot","STD_V10_LF_trot","STD_V10_RF_trot","STD_V10_LH_trot","STD_V10_RH_trot","V11_LF_
trot","V11_RF_trot","V11_LH_trot","V11_RH_trot","STD_V11_LF_trot","STD_V11_RF_trot","STD_V11_LH_trot","STD_V11_RH_trot","V12_LF_trot","V12_RF_trot","V12_LH_trot","V12_RH_trot","STD_V12_LF_trot","STD_V12_RF_trot","STD_V12_LH_trot","STD_V12_RH_trot","V13_LF_trot","V13_RF_trot","V13_LH_trot","V13_RH_trot","STD_V13_LF_trot","STD_V13_RF_trot","STD_V13_LH_trot","STD_V13_RH_trot","Mean_V14_LF_trot","Mean_V14_RF_trot","Mean_V14_LH_trot","Mean_V14_RH_trot","STD_V14_LF_trot","STD_V14_RF_trot","STD_V14_LH_trot","STD_V14_RH_trot","V15_LF_trot","V15_RF_trot","V15_LH_trot","V15_RH_trot","STD_V15_LF_trot","STD_V15_RF_trot","STD_V15_LH_trot","STD_V15_RH_trot","V16_LF_trot","V16_RF_trot","V16_LH_trot","V16_RH_trot","V17_LF_trot","V17_RF_trot","V17_LH_trot","V17_RH_trot","V18_LF_trot","V18_RF_trot","V18_LH_trot","V18_RH_trot","V19_LF_trot","V19_RF_trot","V19_LH_trot","V19_RH_trot","V20_LF_trot","V20_RF_trot","V20_LH_trot","V20_RH_trot","V21_LF_trot","V21_RF_trot","V21_LH_trot","V21_RH_trot","V22_LF_trot","V22_RF_trot","V22_LH_trot","V22_RH_trot","V23_LF_trot","V23_RF_trot","V23_LH_trot","V23_RH_trot","V24_LF_trot","V24_RF_trot","V24_LH_trot","V24_RH_trot","V25_LF_trot","V25_RF_trot","V25_LH_trot","V25_RH_trot","V26_LF_trot","V26_RF_trot","V26_LH_trot","V26_RH_trot","V27_LF_trot","V27_RF_trot","V27_LH_trot","V27_RH_trot","V28_LF_trot","V28_RF_trot","V28_LH_trot","V28_RH_trot","V29_LF_trot","V29_RF_trot","V29_LH_trot","V29_RH_trot","V30_LF_trot","V30_RF_trot","V30_LH_trot","V30_RH_trot","Group0-NormalControl1Affected"] feature_names = col[:-1] label_name = col[-1] dataset = tf.data.experimental.make_csv_dataset("train_data.csv", batch_size=32 ,column_names=col,label_name=label_name) test_dataset = tf.data.experimental.make_csv_dataset("test_data_s.csv", batch_size=32 ,column_names=feature_names) desc = pd.read_csv('train_data.csv')[feature_names].describe() MEAN = np.array(desc.T['mean']) numerical_columns = [] for feature_id in range(len(feature_names)): num_col = tf.feature_column.numeric_column(feature_names[feature_id], normalizer_fn=functools.partial(process_continuous_data, MEAN[feature_id])) numerical_columns.append(num_col) # create a feature layer that will transform the input data numeric_layer = tf.keras.layers.DenseFeatures(numerical_columns) # Create model model=Sequential() model.add(numeric_layer) model.add(Dense(20, activation='relu')) model.add(Dense(12, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss=binary_crossentropy, optimizer='adam', metrics=['accuracy']) model.fit(dataset, epochs=10, steps_per_epoch=100) # # Predict predictions = model.predict(test_dataset) print(predictions) # #save predict result np.savetxt('result.csv', predictions, delimiter = ',') There is no error or exception when I train the model, but when I run this line: predictions = model.predict(test_data) It always says "Cast string to float is not supported" My test data looks like this: (there are 365 features, I show three here) Index WeightKg Age 143 38.3 1.56 154 23.9 2.24 30 25.1 4.01 111 38.8 5.49 183 36.5 3.21 The prediction should be a single value between of 0 or 1 for each row So I do not know where this string come from I used pandas to see the tyep, and all features are float64 Could anyone tell me where I did wrong? First time doing things like this.
[ "It just solved.\nIt is because there are a few blank cells in the test CSV. Thus, TensorFlow treats them as strings.\n" ]
[ 0 ]
[]
[]
[ "prediction", "tensorflow" ]
stackoverflow_0074664296_prediction_tensorflow.txt
Q: Decrypt the encrypted private key: data isn't an object ID I try to decrypt the encrypted private key string which like this -----BEGIN ENCRYPTED PRIVATE KEY----- MIIFHDBO... -----END ENCRYPTED PRIVATE KEY----- And I also remove the head and the foot. But it throws the exception: Exception in thread "main" java.io.IOException: ObjectIdentifier() -- data isn't an object ID (tag = 48) at sun.security.util.ObjectIdentifier.<init>(ObjectIdentifier.java:257) at sun.security.util.DerInputStream.getOID(DerInputStream.java:314) at com.sun.crypto.provider.PBES2Parameters.engineInit(PBES2Parameters.java:267) at java.security.AlgorithmParameters.init(AlgorithmParameters.java:293) at sun.security.x509.AlgorithmId.decodeParams(AlgorithmId.java:132) at sun.security.x509.AlgorithmId.<init>(AlgorithmId.java:114) at sun.security.x509.AlgorithmId.parse(AlgorithmId.java:372) at javax.crypto.EncryptedPrivateKeyInfo.<init>(EncryptedPrivateKeyInfo.java:95) at com.cargosmart.mci3.as2.process.as2control.KeystoreController.decryptKey(KeystoreController.java:162) at com.cargosmart.mci3.as2.process.as2control.KeystoreController.main(KeystoreController.java:147) Here is the code import org.bouncycastle.util.encoders.Base64; String key = "-----BEGIN ENCRYPTED PRIVATE KEY-----MIII-----END ENCRYPTED PRIVATE KEY-----"; key = standardizePem(key); key = key.replace("-----BEGIN ENCRYPTED PRIVATE KEY-----\n", "").replace("\n-----END ENCRYPTED PRIVATE KEY-----", ""); byte[] b = Base64.decode(key); // here is the exception line EncryptedPrivateKeyInfo pkinfo = new EncryptedPrivateKeyInfo(b); And the function standardizePem is aim to format the key string public static String standardizePem(String cert) { String SEPARATOR = "-----"; String LINE_SEPERATOR = "\n"; String temp[] = cert.split(SEPARATOR); String certHead = temp[1]; String certEnd = temp[3]; String certContent = temp[2]; String regex = "(.{64})"; certContent = certContent.replaceAll(regex,"$1\n"); final String pem = SEPARATOR + certHead + SEPARATOR + LINE_SEPARATOR + certContent + LINE_SEPARATOR + SEPARATOR + certEnd + SEPARATOR; return pem; } Could anyone have solutions? Thanks for your help. A: Posting the answer after debugging and searching a lot about the issue Exception in thread "main" java.io.IOException: ObjectIdentifier() -- data isn't an object ID (tag = 48) I found its the issue with Java version below or equal jdk8u2x EncryptedPrivateKeyInfo pkinfo = new EncryptedPrivateKeyInfo(b); Actually java version below 8u3 for e.g. jdk8u2 or lesser, can't parse the new algorithms DER encoded stream This is the known issue which is reported and fixed now from jdk8u3. https://bugs.java.com/bugdatabase/view_bug.do?bug_id=8267837 is the link of reported bug for me, I was trying with java version 1.8_201 and it didnt worked but when I changed the version to 1.8_351, it worked like charm
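Since the accepted answer ties the IOException to the JDK level (the answer reports failures on 1.8.0_201 and success on 1.8.0_351), a small hedged sketch that surfaces the running Java version when the constructor fails can confirm whether that bug applies. It reuses the Base64-decoded bytes from the question and only adds logging.

import java.io.IOException;
import javax.crypto.EncryptedPrivateKeyInfo;
import org.bouncycastle.util.encoders.Base64;

// inside the existing decryptKey method, after the PEM header/footer are stripped
byte[] der = Base64.decode(key);
try {
    EncryptedPrivateKeyInfo pkinfo = new EncryptedPrivateKeyInfo(der);
    System.out.println("Parsed encrypted key, algorithm: " + pkinfo.getAlgName());
} catch (IOException e) {
    // per the answer, older 1.8.0_2xx runtimes cannot parse the newer PBES2 DER encoding
    System.err.println("Parse failed on Java " + System.getProperty("java.version")
            + ": " + e.getMessage() + " - consider upgrading the JDK");
    throw e;
}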
Decrypt the encrypted private key: data isn't an object ID
I try to decrypt the encrypted private key string which like this -----BEGIN ENCRYPTED PRIVATE KEY----- MIIFHDBO... -----END ENCRYPTED PRIVATE KEY----- And I also remove the head and the foot. But it throws the exception: Exception in thread "main" java.io.IOException: ObjectIdentifier() -- data isn't an object ID (tag = 48) at sun.security.util.ObjectIdentifier.<init>(ObjectIdentifier.java:257) at sun.security.util.DerInputStream.getOID(DerInputStream.java:314) at com.sun.crypto.provider.PBES2Parameters.engineInit(PBES2Parameters.java:267) at java.security.AlgorithmParameters.init(AlgorithmParameters.java:293) at sun.security.x509.AlgorithmId.decodeParams(AlgorithmId.java:132) at sun.security.x509.AlgorithmId.<init>(AlgorithmId.java:114) at sun.security.x509.AlgorithmId.parse(AlgorithmId.java:372) at javax.crypto.EncryptedPrivateKeyInfo.<init>(EncryptedPrivateKeyInfo.java:95) at com.cargosmart.mci3.as2.process.as2control.KeystoreController.decryptKey(KeystoreController.java:162) at com.cargosmart.mci3.as2.process.as2control.KeystoreController.main(KeystoreController.java:147) Here is the code import org.bouncycastle.util.encoders.Base64; String key = "-----BEGIN ENCRYPTED PRIVATE KEY-----MIII-----END ENCRYPTED PRIVATE KEY-----"; key = standardizePem(key); key = key.replace("-----BEGIN ENCRYPTED PRIVATE KEY-----\n", "").replace("\n-----END ENCRYPTED PRIVATE KEY-----", ""); byte[] b = Base64.decode(key); // here is the exception line EncryptedPrivateKeyInfo pkinfo = new EncryptedPrivateKeyInfo(b); And the function standardizePem is aim to format the key string public static String standardizePem(String cert) { String SEPARATOR = "-----"; String LINE_SEPERATOR = "\n"; String temp[] = cert.split(SEPARATOR); String certHead = temp[1]; String certEnd = temp[3]; String certContent = temp[2]; String regex = "(.{64})"; certContent = certContent.replaceAll(regex,"$1\n"); final String pem = SEPARATOR + certHead + SEPARATOR + LINE_SEPARATOR + certContent + LINE_SEPARATOR + SEPARATOR + certEnd + SEPARATOR; return pem; } Could anyone have solutions? Thanks for your help.
[ "Posting the answer after debugging and searching a lot about the issue\n\nException in thread \"main\" java.io.IOException: ObjectIdentifier() --\ndata isn't an object ID (tag = 48)\n\nI found its the issue with Java version below or equal jdk8u2x\n EncryptedPrivateKeyInfo pkinfo = new EncryptedPrivateKeyInfo(b);\n\nActually java version below 8u3 for e.g. jdk8u2 or lesser, can't parse the new algorithms DER encoded stream\nThis is the known issue which is reported and fixed now from jdk8u3.\nhttps://bugs.java.com/bugdatabase/view_bug.do?bug_id=8267837 is the link of reported bug\nfor me, I was trying with java version 1.8_201 and it didnt worked but when I changed the version to 1.8_351, it worked like charm\n" ]
[ 1 ]
[ "It looks like the issue is with the standardizePem function. The function is splitting the original private key string by the ----- separator, but this is not always guaranteed to produce the expected results.\nFor example, the original private key string in the question contains MIIFHDBO which, when split by -----, will produce a string array with three elements instead of four. The standardizePem function then uses the third element of the array as the encrypted private key, but since the array only has three elements, an ArrayIndexOutOfBoundsException is thrown.\nOne way to fix this issue would be to make sure that the original private key string is in the correct format before splitting it. For example, the private key string should always have the same number of - characters on each line, and the -----BEGIN ENCRYPTED PRIVATE KEY----- and -----END ENCRYPTED PRIVATE KEY----- lines should always be present.\nOnce the private key string is in the correct format, you can split it by the \\n character instead of ----- to obtain the encrypted private key. You can then remove any leading or trailing whitespace from the encrypted private key before decoding it with the Base64 decoder.\nHere is an example of how the standardizePem function could be updated to handle these changes:\npublic static String standardizePem(String cert) {\n String LINE_SEPARATOR = \"\\n\";\n String[] lines = cert.split(LINE_SEPARATOR);\n\n // Check if the private key is in the correct format.\n if (lines.length < 3 || !lines[0].equals(\"-----BEGIN ENCRYPTED PRIVATE KEY-----\") || !lines[lines.length - 1].equals(\"-----END ENCRYPTED PRIVATE KEY----\n\n" ]
[ -1 ]
[ "encryption", "java", "private_key" ]
stackoverflow_0055333924_encryption_java_private_key.txt
Q: Proper Way for Handling Observables for Async Pipe Rendering I am trying to retrieve information from an Observable Object. but I can't figure out how to do it properly. such as this for example, Dog API it should return a JSON object { "message": "https://images.dog.ceo/breeds/setter-english/n02100735_10064.jpg", "status": "success" } for such, I have a service function getRandomImage(): Observable<Random> { return this.client.get<Random>(`https://dog.ceo/api/breeds/image/random`) } however when I try to render it on HTML <div> <p *ngIf=" random$ | async">{{ random$.message }}</p> </div> I get error message Property 'message' does not exist on type 'Observable<Random>' also, can someone please explain to me in simple terms what this function does getListFacts(length: number, limit: number) : Observable<facts[]> { return this.client .get<{ data: facts[] }>(`https://catfact.ninja/factsmax_length=${length}&limit=${limit}`) .pipe(map(({ data }) => data)); } Like, I tell it to return Observable of Array of Facts, with a get request that should return an object with contains an array of facts named data, then what does the pipe map do ? Thank you I am trying to use Observables with Async Pipe for Rending. but I can't quite understand how to properly write observable retrieval functions for proper rendering A: The main problem is that you are accessing the observable random$ directly, when in fact you want to access the object emitted by that observable. In order to achieve what you want, you can add as randomImage in your html-code and then access randomImage. I created a working example: First the TS-File: random$: Observable<RandomImage>; constructor(private client: HttpClient) { this.random$ = this.client.get<RandomImage>(`https://dog.ceo/api/breeds/image/random`); } Then the HTML-Code: <p *ngIf="random$ | async as randomImage">{{ randomImage.message }}</p> Regarding your question about the map-operator The following code retrieves the data array wrapped in an object. You then use the map operator to extract the array from the object, so that ultimately only the array without the wrapper object is returned: getListFacts(length: number, limit: number) : Observable<facts[]> { return this.client .get<{ data: facts[] }>(`https://catfact.ninja/factsmax_length=${length}&limit=${limit}`) .pipe(map(({ data }) => data)); }
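The accepted answer types the stream as Observable<RandomImage> without defining that interface. A hedged sketch of the two response types implied by the question follows; the field names for the facts entry are assumptions based on the data array the map() operator unwraps, not something the question confirms.

// Shape of the Dog API response shown in the question
export interface RandomImage {
  message: string;
  status: string;
}

// Hypothetical shape of one entry in the `data` array returned by the facts endpoint
export interface Fact {
  fact: string;
  length: number;
}

With getListFacts typed as Observable<Fact[]>, the same async pattern works for lists, e.g. <li *ngFor="let fact of facts$ | async">{{ fact.fact }}</li>.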
Proper Way for Handling Observables for Async Pipe Rendering
I am trying to retrieve information from an Observable Object. but I can't figure out how to do it properly. such as this for example, Dog API it should return a JSON object { "message": "https://images.dog.ceo/breeds/setter-english/n02100735_10064.jpg", "status": "success" } for such, I have a service function getRandomImage(): Observable<Random> { return this.client.get<Random>(`https://dog.ceo/api/breeds/image/random`) } however when I try to render it on HTML <div> <p *ngIf=" random$ | async">{{ random$.message }}</p> </div> I get error message Property 'message' does not exist on type 'Observable<Random>' also, can someone please explain to me in simple terms what this function does getListFacts(length: number, limit: number) : Observable<facts[]> { return this.client .get<{ data: facts[] }>(`https://catfact.ninja/factsmax_length=${length}&limit=${limit}`) .pipe(map(({ data }) => data)); } Like, I tell it to return Observable of Array of Facts, with a get request that should return an object with contains an array of facts named data, then what does the pipe map do ? Thank you I am trying to use Observables with Async Pipe for Rending. but I can't quite understand how to properly write observable retrieval functions for proper rendering
[ "The main problem is that you are accessing the observable random$ directly, when in fact you want to access the object emitted by that observable. In order to achieve what you want, you can add as randomImage in your html-code and then access randomImage.\nI created a working example:\nFirst the TS-File:\nrandom$: Observable<RandomImage>;\n\nconstructor(private client: HttpClient) {\n this.random$ = this.client.get<RandomImage>(`https://dog.ceo/api/breeds/image/random`);\n}\n\nThen the HTML-Code:\n<p *ngIf=\"random$ | async as randomImage\">{{ randomImage.message }}</p>\n\n\nRegarding your question about the map-operator\nThe following code retrieves the data array wrapped in an object. You then use the map operator to extract the array from the object, so that ultimately only the array without the wrapper object is returned:\ngetListFacts(length: number, limit: number) : Observable<facts[]> {\n return this.client\n .get<{ data: facts[] }>(`https://catfact.ninja/factsmax_length=${length}&limit=${limit}`)\n .pipe(map(({ data }) => data));\n}\n\n" ]
[ 1 ]
[]
[]
[ "angular", "async_pipe", "observable", "rxjs", "typescript" ]
stackoverflow_0074669159_angular_async_pipe_observable_rxjs_typescript.txt
Q: Why Angular JIT Compiler Mode is not catching template error I am running a simple application in angular 12.2.0 app.component.ts file: import { Component} from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { appName = 'test-app'; } app.component.html: <p>{{appNameIncorrect}}</p> Notice that I have mis-spelled appName to appNameIncorrect. When I run ng serve. by default it has AOT mode. it properly prints an error in terminal that Property 'appNameIncorrect' does not exist on type 'AppComponent'. Then I tried to set aot flag to false in angular.json file, I expected that application will be compiled correctly in linux terminal which I run ng serve but it will give same error as above in browser console. But that is not happening. Neither ng serve command output threw error nor browser reported any error. Why is the case? JIT compiler should report an error in browser because in JIT mode, code gets compiled in runtime that is in Browser. I am curious to know why? Is there any compiler rule in angular which gives such result? I am using chrome as browser. A: The Angular JIT compiler does not catch template errors by default. Instead, it generates code that checks for undefined properties at runtime and logs an error to the console if a property is not found. This is why you are not seeing any error messages when running your application in JIT mode. If you want the Angular JIT compiler to catch template errors and report them as compile-time errors, you can enable the strictTemplates option in the tsconfig.json file. This option enables template type checking, which will cause the Angular compiler to report template errors as compile-time errors. Here's an example of how you can enable the strictTemplates option in the tsconfig.json file: { "compilerOptions": { // Other compiler options... "strictTemplates": true } } When the strictTemplates option is enabled, the Angular JIT compiler will report a compile-time error for your template, just like the AOT compiler does. This can help you catch template errors early and prevent them from causing runtime errors in your application.
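One hedged caveat on the tsconfig snippet in the answer: in Angular 12 the strictTemplates flag is normally read from the angularCompilerOptions block of tsconfig.json rather than compilerOptions, and the check runs in the Angular compiler at build time, so how much of it applies with aot set to false is version-dependent. A sketch of the usual placement:

{
  "compilerOptions": {
    // existing options ...
  },
  "angularCompilerOptions": {
    "strictTemplates": true
  }
}

If the goal is simply to catch the misspelled binding, keeping aot enabled for ng serve (the default that already reports the error in the question) remains the reliable way to surface it.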
Why Angular JIT Compiler Mode is not catching template error
I am running a simple application in angular 12.2.0 app.component.ts file: import { Component} from '@angular/core'; @Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: ['./app.component.css'] }) export class AppComponent { appName = 'test-app'; } app.component.html: <p>{{appNameIncorrect}}</p> Notice that I have mis-spelled appName to appNameIncorrect. When I run ng serve. by default it has AOT mode. it properly prints an error in terminal that Property 'appNameIncorrect' does not exist on type 'AppComponent'. Then I tried to set aot flag to false in angular.json file, I expected that application will be compiled correctly in linux terminal which I run ng serve but it will give same error as above in browser console. But that is not happening. Neither ng serve command output threw error nor browser reported any error. Why is the case? JIT compiler should report an error in browser because in JIT mode, code gets compiled in runtime that is in Browser. I am curious to know why? Is there any compiler rule in angular which gives such result? I am using chrome as browser.
[ "The Angular JIT compiler does not catch template errors by default. Instead, it generates code that checks for undefined properties at runtime and logs an error to the console if a property is not found. This is why you are not seeing any error messages when running your application in JIT mode.\nIf you want the Angular JIT compiler to catch template errors and report them as compile-time errors, you can enable the strictTemplates option in the tsconfig.json file. This option enables template type checking, which will cause the Angular compiler to report template errors as compile-time errors.\nHere's an example of how you can enable the strictTemplates option in the tsconfig.json file:\n{\n \"compilerOptions\": {\n // Other compiler options...\n \"strictTemplates\": true\n }\n}\n\nWhen the strictTemplates option is enabled, the Angular JIT compiler will report a compile-time error for your template, just like the AOT compiler does. This can help you catch template errors early and prevent them from causing runtime errors in your application.\n" ]
[ 1 ]
[]
[]
[ "angular", "angular_aot", "angular_jit" ]
stackoverflow_0074632219_angular_angular_aot_angular_jit.txt
Q: Spark SQL (coorelated subQuery) creates a BroadcastNestedLoopJoin and the job runs very slow What am I trying to do I have 3 tables which are being joined to create a desired output. Table 1 (800K records) : This is a partitioned hive external table by date (parquet file structure ). The schema is deeply nested. Following is a portion of the schema root | - uuid | - data | - k1 - string | - k2 - string | - ... | …. | - dt - string (partition column) Table 2 (0.5 million records) /Table 3 (31K records) : metadata tables root | - insert_date | - k_sub | - action_date | …. SQL being executed (Generated by a internal library) SELECT input_file_name() as filename, uuid, data.k1, data.k2, dt FROM table_1 WHERE NOT EXISTS(SELECT 1 FROM table_2 WHERE (k_sub = data.k1) AND dt > action_date) AND EXISTS( SELECT 1 FROM table_3 WHERE (k_sub = data.k1) AND (dt < action_date OR dt < DATE_ADD(CURRENT_DATE(), -1460)) Spark settings Num of executors = 50 Executor memory = 10g Driver memory = 10g The query runs very slow runs for 1.1 hrs and fails. Looking into the sparkUI (SQL table). I increased the num of executors to 100 and the job completed but I'm not confident. The plan looks like this Based on my research, BroadcastNestedLoopJoin is not good to have. As it is SQL, I tried adding /* +BroadcastJoin */ hint but still did not help. Anyone has thoughts of how I could approach this problem and improve the performance? A: I faced the same problem before, i had a table with billion records and 3 more (with hundred thousand rows) tables to enrich my table. I was combining them and doing aggregation. it was accelerated when I added the following configurations. but now I don't remember exactly which config I accelerated it with. It's been a long time since. I hope it works for you... I have given these additionally when creating spark session ; .config("hive.exec.dynamic.partition.mode", "nonstrict") \ .config("hive.exec.dynamic.partition", "true") \ .config("hive.metastore.try.direct.sql", "true") \ .config("hive.merge.smallfiles.avgsize", "40000000") \ .config("hive.merge.size.per.task", "209715200") \ .config("hive.exec.parallel", "true") \ .config("spark.debug.maxToStringFields", "5000") \ .config("spark.yarn.executor.memoryOverhead","22g") \ .config("dfs.blocksize", "268435456") \ .config("spark.kryoserializer.buffer.max", "2000m") \ .config("spark.sql.adaptive.enabled", "true") \ .config("spark.sql.shuffle.partitions","3200") \ .config("spark.default.parallelism","3200") \ After creating the session, I set the following configurations. Especially after adding autoBroadcastJoinThreshold below i saw it speed up; from pyspark.sql import HiveContext c = HiveContext(spark) c.setConf('spark.sql.parquet.compression.codec', 'gzip') c.setConf('spark.sql.autoBroadcastJoinThreshold', 1000 * 1024 * 1024) c.setConf('spark.sql.broadcastTimeout', '36000') c.setConf("spark.sql.adaptive.skewJoin.enabled","true") A: Based on my research, BroadcastNestedLoopJoin is not good to have. As it is SQL, I tried adding /* +BroadcastJoin */ hint but still did not help. Anyone has thoughts of how I could approach this problem and improve the performance? Looks like those not exists are translated into non-equi join (because they are using checking range on the condition) and in this case Spark cannot use hash join or sort-merge join. 
Thats also the reason why your broadcast hint is not forcing spark to use broadcastHashJoin (its only for equi join) You can find more details in this comment in source code Spark source code // If it is an equi-join, we first look at the join hints w.r.t. the following order: // 1. broadcast hint: pick broadcast hash join if the join type is supported. If both sides // have the broadcast hints, choose the smaller side (based on stats) to broadcast. // 2. sort merge hint: pick sort merge join if join keys are sortable. // 3. shuffle hash hint: We pick shuffle hash join if the join type is supported. If both // sides have the shuffle hash hints, choose the smaller side (based on stats) as the // build side. // 4. shuffle replicate NL hint: pick cartesian product if join type is inner like. // // If there is no hint or the hints are not applicable, we follow these rules one by one: // 1. Pick broadcast hash join if one side is small enough to broadcast, and the join type // is supported. If both sides are small, choose the smaller side (based on stats) // to broadcast. // 2. Pick shuffle hash join if one side is small enough to build local hash map, and is // much smaller than the other side, and `spark.sql.join.preferSortMergeJoin` is false. // 3. Pick sort merge join if the join keys are sortable. // 4. Pick cartesian product if join type is inner like. // 5. Pick broadcast nested loop join as the final solution. It may OOM but we don't have // other choice. If you cant change your condition i am afraid that you can only try to adjust your resources to make it stable and live with it
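Building on the point above that the range predicates make this a non-equi join (so the broadcast hint is ignored), one hedged rewrite is to pre-aggregate the two small metadata tables and join on the key alone, which turns both subqueries into equi-joins that are eligible for broadcast hash join. The sketch assumes action_date is never NULL and that dt and action_date compare the same way they do in the original predicates; table and column names come from the question.

WITH t2_min AS (
  SELECT k_sub, MIN(action_date) AS min_action_date
  FROM table_2
  GROUP BY k_sub
),
t3_max AS (
  SELECT k_sub, MAX(action_date) AS max_action_date
  FROM table_3
  GROUP BY k_sub
)
SELECT input_file_name() AS filename, t1.uuid, t1.data.k1, t1.data.k2, t1.dt
FROM table_1 t1
LEFT JOIN t2_min ON t2_min.k_sub = t1.data.k1
JOIN t3_max ON t3_max.k_sub = t1.data.k1
WHERE (t2_min.k_sub IS NULL OR t1.dt <= t2_min.min_action_date)   -- NOT EXISTS on table_2
  AND (t1.dt < t3_max.max_action_date
       OR t1.dt < DATE_ADD(CURRENT_DATE(), -1460))                -- EXISTS on table_3

Because t2_min and t3_max collapse to at most one row per k_sub, the joins do not duplicate table_1 rows, and at roughly the original 0.5M/31K scale they should fit under spark.sql.autoBroadcastJoinThreshold or can be hinted with /*+ BROADCAST(t2_min, t3_max) */.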
Spark SQL (coorelated subQuery) creates a BroadcastNestedLoopJoin and the job runs very slow
What am I trying to do I have 3 tables which are being joined to create a desired output. Table 1 (800K records) : This is a partitioned hive external table by date (parquet file structure ). The schema is deeply nested. Following is a portion of the schema root | - uuid | - data | - k1 - string | - k2 - string | - ... | …. | - dt - string (partition column) Table 2 (0.5 million records) /Table 3 (31K records) : metadata tables root | - insert_date | - k_sub | - action_date | …. SQL being executed (Generated by a internal library) SELECT input_file_name() as filename, uuid, data.k1, data.k2, dt FROM table_1 WHERE NOT EXISTS(SELECT 1 FROM table_2 WHERE (k_sub = data.k1) AND dt > action_date) AND EXISTS( SELECT 1 FROM table_3 WHERE (k_sub = data.k1) AND (dt < action_date OR dt < DATE_ADD(CURRENT_DATE(), -1460)) Spark settings Num of executors = 50 Executor memory = 10g Driver memory = 10g The query runs very slow runs for 1.1 hrs and fails. Looking into the sparkUI (SQL table). I increased the num of executors to 100 and the job completed but I'm not confident. The plan looks like this Based on my research, BroadcastNestedLoopJoin is not good to have. As it is SQL, I tried adding /* +BroadcastJoin */ hint but still did not help. Anyone has thoughts of how I could approach this problem and improve the performance?
[ "I faced the same problem before, i had a table with billion records and 3 more (with hundred thousand rows) tables to enrich my table. I was combining them and doing aggregation. it was accelerated when I added the following configurations. but now I don't remember exactly which config I accelerated it with. It's been a long time since. I hope it works for you...\nI have given these additionally when creating spark session ;\n.config(\"hive.exec.dynamic.partition.mode\", \"nonstrict\") \\\n.config(\"hive.exec.dynamic.partition\", \"true\") \\\n.config(\"hive.metastore.try.direct.sql\", \"true\") \\\n.config(\"hive.merge.smallfiles.avgsize\", \"40000000\") \\\n.config(\"hive.merge.size.per.task\", \"209715200\") \\\n.config(\"hive.exec.parallel\", \"true\") \\\n.config(\"spark.debug.maxToStringFields\", \"5000\") \\\n.config(\"spark.yarn.executor.memoryOverhead\",\"22g\") \\\n.config(\"dfs.blocksize\", \"268435456\") \\\n.config(\"spark.kryoserializer.buffer.max\", \"2000m\") \\\n.config(\"spark.sql.adaptive.enabled\", \"true\") \\\n.config(\"spark.sql.shuffle.partitions\",\"3200\") \\\n.config(\"spark.default.parallelism\",\"3200\") \\\n\nAfter creating the session, I set the following configurations. Especially after adding autoBroadcastJoinThreshold below i saw it speed up;\nfrom pyspark.sql import HiveContext\n\nc = HiveContext(spark)\nc.setConf('spark.sql.parquet.compression.codec', 'gzip')\nc.setConf('spark.sql.autoBroadcastJoinThreshold', 1000 * 1024 * 1024)\nc.setConf('spark.sql.broadcastTimeout', '36000')\nc.setConf(\"spark.sql.adaptive.skewJoin.enabled\",\"true\")\n\n", "\nBased on my research, BroadcastNestedLoopJoin is not good to have. As\nit is SQL, I tried adding /* +BroadcastJoin */ hint but still did not\nhelp. Anyone has thoughts of how I could approach this problem and\nimprove the performance?\n\nLooks like those not exists are translated into non-equi join (because they are using checking range on the condition) and in this case Spark cannot use hash join or sort-merge join. Thats also the reason why your broadcast hint is not forcing spark to use broadcastHashJoin (its only for equi join)\nYou can find more details in this comment in source code Spark source code\n// If it is an equi-join, we first look at the join hints w.r.t. the following order:\n // 1. broadcast hint: pick broadcast hash join if the join type is supported. If both sides\n // have the broadcast hints, choose the smaller side (based on stats) to broadcast.\n // 2. sort merge hint: pick sort merge join if join keys are sortable.\n // 3. shuffle hash hint: We pick shuffle hash join if the join type is supported. If both\n // sides have the shuffle hash hints, choose the smaller side (based on stats) as the\n // build side.\n // 4. shuffle replicate NL hint: pick cartesian product if join type is inner like.\n //\n // If there is no hint or the hints are not applicable, we follow these rules one by one:\n // 1. Pick broadcast hash join if one side is small enough to broadcast, and the join type\n // is supported. If both sides are small, choose the smaller side (based on stats)\n // to broadcast.\n // 2. Pick shuffle hash join if one side is small enough to build local hash map, and is\n // much smaller than the other side, and `spark.sql.join.preferSortMergeJoin` is false.\n // 3. Pick sort merge join if the join keys are sortable.\n // 4. Pick cartesian product if join type is inner like.\n // 5. Pick broadcast nested loop join as the final solution. 
It may OOM but we don't have\n // other choice.\n\nIf you cant change your condition i am afraid that you can only try to adjust your resources to make it stable and live with it\n" ]
[ 0, 0 ]
[]
[]
[ "apache_spark", "apache_spark_sql" ]
stackoverflow_0074661950_apache_spark_apache_spark_sql.txt
Q: Split Pandas dataframe by a specific custom parameter I have a sample pandas dataframe as below: What I want to do is to write a function to split this dataframe by its time value. The function returns a list of dataframes. I used the below function to split the dataframe. def split_dataframe(df, chunk_size=20): chunks = list() num_chunks = len(df) // chunk_size + 1 for i in range(num_chunks): chunks.append(df[i*chunk_size:(i+1)*chunk_size]) return chunks However, this function splits the dataframe by its number of rows equally. In this case, it's 20 by default. What I want to achieve is to get dataframe every 3 seconds (or x seconds). For instance, get the first dataframe where we have rows in 300 and 303 seconds, then the next dataframe will be in 304 to 307 seconds, and so on. I am not sure how to accomplish this. What I have done is create a new column that displays yes or no if the time is in the 3 seconds. But that did not help much. Also, please note that I might have multiple ids, and time is always increasing. It could also be the same. Normally, time values are very precise and include decimals. I just cast those to int. So, in this case, the dataframes might not be the same size. I would appreciate it if you could help me with that. A: With the following toy dataframe: import pandas as pd df = pd.DataFrame( { "id": [1, 1, 1, 2, 2, 3, 4, 4, 5, 5], "sample_val": [10, 11, 10, 12, 22, 22, 23, 23, 24, 24], "time": [300, 301, 301, 302, 302, 304, 311, 308, 309, 305], } ) Here is one way to do it: N = 3 df = df.sort_values(by="time") intervals = [[i, i + N] for i in range(df["time"].min(), df["time"].max(), N + 1)] # [[300, 303], [304, 307], [308, 311]] # Find rows that belong to the same interval chunks = df.apply( lambda x: [interval[0] <= x["time"] <= interval[1] for interval in intervals].index( True ), axis=1, ) # Split df accordingly dfs = [df_.reset_index(drop=True) for _, df_ in df.groupby(chunks)] Then: print(dfs[0]) # Output id sample_val time 0 1 10 300 1 1 11 301 2 1 10 301 3 2 12 302 4 2 22 302 print(dfs[1]) # Output id sample_val time 0 3 22 304 1 5 24 305 print(dfs[2]) # Output id sample_val time 0 4 23 308 1 5 24 309 2 4 23 311
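As a compact, hedged alternative to the interval search in the answer, the same 4-second buckets ([300, 303], [304, 307], ...) can be computed vectorised with integer division. This sketch assumes the same toy dataframe and the integer-cast time column described in the question.

N = 3  # bucket width is N + 1 seconds, matching the answer's intervals

df = df.sort_values("time").reset_index(drop=True)
bucket = (df["time"] - df["time"].min()) // (N + 1)
dfs = [chunk.reset_index(drop=True) for _, chunk in df.groupby(bucket)]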
Split Pandas dataframe by a specific custom parameter
I have a sample pandas dataframe as below: What I want to do is to write a function to split this dataframe by its time value. The function returns a list of dataframes. I used the below function to split the dataframe. def split_dataframe(df, chunk_size=20): chunks = list() num_chunks = len(df) // chunk_size + 1 for i in range(num_chunks): chunks.append(df[i*chunk_size:(i+1)*chunk_size]) return chunks However, this function splits the dataframe by its number of rows equally. In this case, it's 20 by default. What I want to achieve is to get dataframe every 3 seconds (or x seconds). For instance, get the first dataframe where we have rows in 300 and 303 seconds, then the next dataframe will be in 304 to 307 seconds, and so on. I am not sure how to accomplish this. What I have done is create a new column that displays yes or no if the time is in the 3 seconds. But that did not help much. Also, please note that I might have multiple ids, and time is always increasing. It could also be the same. Normally, time values are very precise and include decimals. I just cast those to int. So, in this case, the dataframes might not be the same size. I would appreciate it if you could help me with that.
[ "With the following toy dataframe:\nimport pandas as pd\n\ndf = pd.DataFrame(\n {\n \"id\": [1, 1, 1, 2, 2, 3, 4, 4, 5, 5],\n \"sample_val\": [10, 11, 10, 12, 22, 22, 23, 23, 24, 24],\n \"time\": [300, 301, 301, 302, 302, 304, 311, 308, 309, 305],\n }\n)\n\nHere is one way to do it:\nN = 3\n\ndf = df.sort_values(by=\"time\")\n\nintervals = [[i, i + N] for i in range(df[\"time\"].min(), df[\"time\"].max(), N + 1)]\n# [[300, 303], [304, 307], [308, 311]]\n\n# Find rows that belong to the same interval\nchunks = df.apply(\n lambda x: [interval[0] <= x[\"time\"] <= interval[1] for interval in intervals].index(\n True\n ),\n axis=1,\n)\n\n# Split df accordingly\ndfs = [df_.reset_index(drop=True) for _, df_ in df.groupby(chunks)]\n\nThen:\nprint(dfs[0])\n# Output\n id sample_val time\n0 1 10 300\n1 1 11 301\n2 1 10 301\n3 2 12 302\n4 2 22 302\n\nprint(dfs[1])\n# Output\n id sample_val time\n0 3 22 304\n1 5 24 305\n\nprint(dfs[2])\n# Output\n id sample_val time\n0 4 23 308\n1 5 24 309\n2 4 23 311\n\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074631375_dataframe_pandas_python.txt
Q: Helm Add values with --set command and extend given chart when using templates Structure like this: chart.yaml values.yaml templates/ |__deploymentconfig.yaml Usage: helm install demo --dry-run --debug -f values.yaml What i like to do is, to add an environment variable with the --set command on helm install after the template has filled the yaml. Dummy Command like this (not working): helm install demo ... -f values.yaml --set ???env[0].name=MyEnvVar Resulting deployment config should look like this: kind: Deployment ... spec: template: spec: containers: env: - name: MyEnvVar value: Hello What do i need to set on the ??? part of the install command to get the desired variable in the deployment part of the manifest? A: In a comment you clarify: I would like to add an environment variable without touching the Helm code (only using --set on CLI) You can't use helm install --set to make any changes that aren't described in the template code. If your chart says, for example, env: - name: SOME_VARIABLE value: {{ .Values.someValue | default "foo" }} then you could helm install --set someValue=bar to change the one specific environment value, but your chart itself has no way to supply additional environment variables, and --set on its own can't change this. Reason: No Test Code goes to production. You can still allow customizing the chart without breaking this rule. If "is it production" controls a specific set of things, you can make that a deploy-time control env: - name: ENVIRONMENT value: {{ Values.environment | default "production" }} helm install --set environment=development ... Or you could provide an open-ended set of extension environment variables, expecting that to normally be empty env: {{- with .Values.extraEnvironment }} {{ toYaml . | indent 2 }} {{- end }} The corresponding helm install --set syntax for this would be trickier, but you could write a separate YAML file of deploy-time values to inject with helm install -f. In principle you could use post rendering to make arbitrary changes to the deployed YAML, and that could include adding environment variables. That's a complex approach, though, and it leaves you in the situation of having tested something with modifications from the standard deployment; that might not be reproducible enough for many needs. A: To add an environment variable to a deployment in Helm, you can use the --set flag followed by the path to the variable you want to set. In this case, to set the name field of the first element in the env list, you can use the following syntax: helm install demo ... -f values.yaml --set spec.template.spec.containers[0].env[0].name=MyEnvVar This will set the name field of the first element in the env list in the containers field of the spec for the template in the deployment to MyEnvVar.
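For the open-ended extraEnvironment approach described in the first answer, a hedged sketch of the two ways to feed it at install time follows. The extra-env.yaml file name is an illustrative assumption, and it presumes the chart template includes the extraEnvironment block shown in that answer (with the usual leading dot, i.e. {{ .Values.extraEnvironment }} and {{ .Values.environment }}).

# extra-env.yaml (hypothetical file name)
extraEnvironment:
  - name: MyEnvVar
    value: Hello

# pass it as an additional values file
helm install demo . -f values.yaml -f extra-env.yaml

# or purely on the CLI with --set list-index syntax (quote the argument in shells that expand brackets)
helm install demo . -f values.yaml --set extraEnvironment[0].name=MyEnvVar --set extraEnvironment[0].value=Hello

Note that --set always targets entries under .Values, so the manifest-style path in the other answer (spec.template.spec.containers[0].env[0].name) only takes effect if the chart's values and templates actually define and read that structure.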
Helm Add values with --set command and extend given chart when using templates
Structure like this: chart.yaml values.yaml templates/ |__deploymentconfig.yaml Usage: helm install demo --dry-run --debug -f values.yaml What i like to do is, to add an environment variable with the --set command on helm install after the template has filled the yaml. Dummy Command like this (not working): helm install demo ... -f values.yaml --set ???env[0].name=MyEnvVar Resulting deployment config should look like this: kind: Deployment ... spec: template: spec: containers: env: - name: MyEnvVar value: Hello What do i need to set on the ??? part of the install command to get the desired variable in the deployment part of the manifest?
[ "In a comment you clarify:\n\nI would like to add an environment variable without touching the Helm code (only using --set on CLI)\n\nYou can't use helm install --set to make any changes that aren't described in the template code. If your chart says, for example,\nenv:\n - name: SOME_VARIABLE\n value: {{ .Values.someValue | default \"foo\" }}\n\nthen you could helm install --set someValue=bar to change the one specific environment value, but your chart itself has no way to supply additional environment variables, and --set on its own can't change this.\n\nReason: No Test Code goes to production.\n\nYou can still allow customizing the chart without breaking this rule. If \"is it production\" controls a specific set of things, you can make that a deploy-time control\nenv:\n - name: ENVIRONMENT\n value: {{ Values.environment | default \"production\" }}\n\nhelm install --set environment=development ...\n\nOr you could provide an open-ended set of extension environment variables, expecting that to normally be empty\nenv:\n{{- with .Values.extraEnvironment }}\n{{ toYaml . | indent 2 }}\n{{- end }}\n\nThe corresponding helm install --set syntax for this would be trickier, but you could write a separate YAML file of deploy-time values to inject with helm install -f.\nIn principle you could use post rendering to make arbitrary changes to the deployed YAML, and that could include adding environment variables. That's a complex approach, though, and it leaves you in the situation of having tested something with modifications from the standard deployment; that might not be reproducible enough for many needs.\n", "To add an environment variable to a deployment in Helm, you can use the --set flag followed by the path to the variable you want to set. In this case, to set the name field of the first element in the env list, you can use the following syntax:\nhelm install demo ... -f values.yaml --set spec.template.spec.containers[0].env[0].name=MyEnvVar\n\nThis will set the name field of the first element in the env list in the containers field of the spec for the template in the deployment to MyEnvVar.\n" ]
[ 1, 0 ]
[]
[]
[ "helm3", "helmfile", "kubernetes", "kubernetes_helm", "templates" ]
stackoverflow_0074658402_helm3_helmfile_kubernetes_kubernetes_helm_templates.txt
Q: Getting error when install mysql-server on ubuntu 20.04 Can Anyone help me with the issue, I am facing error when install mysql server on ubuntu 20.04. I hve tried with the following commands and re-install again but same error i faced. sudo apt-get purge mysql* sudo apt-get autoremove sudo apt-get autoclean sudo apt-get dist-upgrade sudo apt-get install mysql-server Setting up mysql-server-8.0 (8.0.31-0ubuntu0.20.04.1) ... Renaming removed key_buffer and myisam-recover options (if present) Warning: Unable to start the server. Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details. invoke-rc.d: initscript mysql, action "start" failed. * mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Tue 2022-11-29 05:04:08 UTC; 4ms ago Process: 3889 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS) **Process: 3897 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE)** Main PID: 3897 (code=exited, status=1/FAILURE) Status: "Server shutdown complete" CPU: 388ms dpkg: error processing package mysql-server-8.0 (--configure): installed mysql-server-8.0 package post-installation script subprocess returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-8.0; however: Package mysql-server-8.0 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-8.0 mysql-server **E: Sub-process /usr/bin/dpkg returned an error code (1)** Please help me out with the solution if anyone sorted this issue. Thank you. A: You can try to enable nesting in Proxmox container configuration (Options>Features)
Getting error when install mysql-server on ubuntu 20.04
Can Anyone help me with the issue, I am facing error when install mysql server on ubuntu 20.04. I hve tried with the following commands and re-install again but same error i faced. sudo apt-get purge mysql* sudo apt-get autoremove sudo apt-get autoclean sudo apt-get dist-upgrade sudo apt-get install mysql-server Setting up mysql-server-8.0 (8.0.31-0ubuntu0.20.04.1) ... Renaming removed key_buffer and myisam-recover options (if present) Warning: Unable to start the server. Job for mysql.service failed because the control process exited with error code. See "systemctl status mysql.service" and "journalctl -xe" for details. invoke-rc.d: initscript mysql, action "start" failed. * mysql.service - MySQL Community Server Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled) Active: activating (auto-restart) (Result: exit-code) since Tue 2022-11-29 05:04:08 UTC; 4ms ago Process: 3889 ExecStartPre=/usr/share/mysql/mysql-systemd-start pre (code=exited, status=0/SUCCESS) **Process: 3897 ExecStart=/usr/sbin/mysqld (code=exited, status=1/FAILURE)** Main PID: 3897 (code=exited, status=1/FAILURE) Status: "Server shutdown complete" CPU: 388ms dpkg: error processing package mysql-server-8.0 (--configure): installed mysql-server-8.0 package post-installation script subprocess returned error exit status 1 dpkg: dependency problems prevent configuration of mysql-server: mysql-server depends on mysql-server-8.0; however: Package mysql-server-8.0 is not configured yet. dpkg: error processing package mysql-server (--configure): dependency problems - leaving unconfigured Errors were encountered while processing: mysql-server-8.0 mysql-server **E: Sub-process /usr/bin/dpkg returned an error code (1)** Please help me out with the solution if anyone sorted this issue. Thank you.
[ "You can try to enable nesting in Proxmox container configuration (Options>Features)\n" ]
[ 0 ]
[]
[]
[ "linux", "proxmox", "ubuntu" ]
stackoverflow_0074609379_linux_proxmox_ubuntu.txt
Q: How to delete multiple sheets using input box Just need some help. I want to delete multiple sheets using their partial name that I will enter in the input box. Is there any code that I can add multiple partial names in input box so they will be deleted at once? For example, I would like to add these partial names: "Pivot", "IWS, "Associate", "Split", and "Invoice" My initial code can delete sheets with just one partial name, sample if I enter "Pivot" it will delete all sheets with "Pivot" name. I want to tweak my code where I can add multiple partial name to the input box. Here's the initial code: Sub ClearAllSheetsSpecified() '---------------------------------------------------------------------------------------------------------- ' Clear all sheets specified in input box '---------------------------------------------------------------------------------------------------------- Dim shName As String Dim xName As String Dim xWs As Worksheet Dim cnt As Integer shName = Application.InputBox("Enter the sheet name to delete:", "Delete sheets", _ ThisWorkbook.ActiveSheet.Name, , , , , 2) If shName = "" Then Exit Sub '**** use LCase() here xName = "*" & LCase(shName) & "*" ' MsgBox xName Application.DisplayAlerts = False cnt = 0 For Each xWs In ThisWorkbook.Sheets '**** Use LCase() here If LCase(xWs.Name) Like xName Then xWs.Delete 'MsgBox xName cnt = cnt + 1 End If Next xWs Application.DisplayAlerts = True MsgBox "Have deleted " & cnt & " worksheets", vbInformation, "Sheets removed" I'm looking for a code that I can enter any partial name in my input box then sheets will be deleted as long as they exist in my current WB. A: This is a way to capture your list of phrases. The input text must have a common delimiter that you code for. In this case I used the semi-colon. Sub testIt() Dim shName As String Dim xName() As String Dim cnt As Integer shName = Application.InputBox("Enter the sheet names (delimited by ;) to delete:", "Delete sheets", _ ThisWorkbook.ActiveSheet.Name, , , , , 2) If shName = "" Then Exit Sub '**** use LCase() here xName = Split(LCase(shName), ";") For x = 0 To UBound(xName) Debug.Print "*" & xName(x) & "*" 'do your delete Next x End Sub
How to delete multiple sheets using input box
Just need some help. I want to delete multiple sheets using their partial name that I will enter in the input box. Is there any code that I can add multiple partial names in input box so they will be deleted at once? For example, I would like to add these partial names: "Pivot", "IWS, "Associate", "Split", and "Invoice" My initial code can delete sheets with just one partial name, sample if I enter "Pivot" it will delete all sheets with "Pivot" name. I want to tweak my code where I can add multiple partial name to the input box. Here's the initial code: Sub ClearAllSheetsSpecified() '---------------------------------------------------------------------------------------------------------- ' Clear all sheets specified in input box '---------------------------------------------------------------------------------------------------------- Dim shName As String Dim xName As String Dim xWs As Worksheet Dim cnt As Integer shName = Application.InputBox("Enter the sheet name to delete:", "Delete sheets", _ ThisWorkbook.ActiveSheet.Name, , , , , 2) If shName = "" Then Exit Sub '**** use LCase() here xName = "*" & LCase(shName) & "*" ' MsgBox xName Application.DisplayAlerts = False cnt = 0 For Each xWs In ThisWorkbook.Sheets '**** Use LCase() here If LCase(xWs.Name) Like xName Then xWs.Delete 'MsgBox xName cnt = cnt + 1 End If Next xWs Application.DisplayAlerts = True MsgBox "Have deleted " & cnt & " worksheets", vbInformation, "Sheets removed" I'm looking for a code that I can enter any partial name in my input box then sheets will be deleted as long as they exist in my current WB.
[ "This is a way to capture your list of phrases. The input text must have a common delimiter that you code for. In this case I used the semi-colon.\nSub testIt()\n Dim shName As String\n Dim xName() As String\n Dim cnt As Integer\n shName = Application.InputBox(\"Enter the sheet names (delimited by ;) to delete:\", \"Delete sheets\", _\n ThisWorkbook.ActiveSheet.Name, , , , , 2)\n If shName = \"\" Then Exit Sub\n '**** use LCase() here\n xName = Split(LCase(shName), \";\")\n For x = 0 To UBound(xName)\n Debug.Print \"*\" & xName(x) & \"*\"\n 'do your delete\n Next x\nEnd Sub\n\n" ]
[ 0 ]
[]
[]
[ "criteria", "delete_file", "excel", "inputbox", "vba" ]
stackoverflow_0074669019_criteria_delete_file_excel_inputbox_vba.txt
Q: .Net Core DenyAnonymousAuthorization Requirement error I'm getting a 401 error status with this message while trying to retrieve some data from another .NET Core 5 Web API: "Authorization failed. These requirements were not met:DenyAnonymousAuthorizationRequirement: Requires an authenticated user.." On the Web API I'm using Windows authentication: services.AddAuthentication(HttpSysDefaults.AuthenticationScheme); I can't find much documentation on this kind of authentication, and I don't think I'm missing anything. Any thoughts, please? A: Try swapping around the calls where it is first setup. app.UseAuthentication(); app.UseAuthorization(); I had the same error where everything else looked fine. Until I swapped these around I kept getting this issue trying to access the endpoint using a token.
.Net Core DenyAnonymousAuthorization Requirement error
I'm getting a 401 error status with this message while trying to retrieve some data from another .NET Core 5 Web API: "Authorization failed. These requirements were not met:DenyAnonymousAuthorizationRequirement: Requires an authenticated user.." On the Web API I'm using Windows authentication: services.AddAuthentication(HttpSysDefaults.AuthenticationScheme); I can't find much documentation on this kind of authentication, and I don't think I'm missing anything. Any thoughts, please?
[ "Try swapping around the calls where it is first setup.\napp.UseAuthentication(); app.UseAuthorization();\nI had the same error where everything else looked fine. Until I swapped these around I kept getting this issue trying to access the endpoint using a token.\n" ]
[ 0 ]
[]
[]
[ "asp.net_core", "asp.net_core_webapi" ]
stackoverflow_0071557694_asp.net_core_asp.net_core_webapi.txt
Q: Custon fullscreen button Good afternoon, tell me, is it possible to add your own button to switch to full-screen mode and open fields? I need to hide the standard toolbar and add the button data to another location I didn't find any similar questions A: This can be done by using the Fullscreen API, which allows you to programmatically control the full-screen state of the application. You can use the requestFullscreen() method of the Document object in JavaScript. This method takes the element that you want to display in full-screen mode as a parameter. For example, you can add a button with the id fullscreen-button and use the following code to switch to full-screen mode when the button is clicked: document.getElementById("fullscreen-button").addEventListener("click", function() { document.documentElement.requestFullscreen(); }); To open fields in full-screen mode, you can use the open() method of the Window object in JavaScript. This method takes the URL of the page that you want to open in full-screen mode as a parameter. For example, you can use the following code to open the page at https://example.com in full-screen mode: window.open("https://example.com", "", "fullscreen=yes"); To hide the standard toolbar, you can use the toolbar=no parameter in the open() method. This will hide the standard toolbar in the full-screen window. For example, you can use the following code to open the page at https://example.com in full-screen mode without the standard toolbar: window.open("https://example.com", "", "fullscreen=yes,toolbar=no");
Custon fullscreen button
Good afternoon, tell me, is it possible to add your own button to switch to full-screen mode and open fields? I need to hide the standard toolbar and add the button data to another location I didn't find any similar questions
[ "This can be done by using the Fullscreen API, which allows you to programmatically control the full-screen state of the application. You can use the requestFullscreen() method of the Document object in JavaScript. This method takes the element that you want to display in full-screen mode as a parameter. For example, you can add a button with the id fullscreen-button and use the following code to switch to full-screen mode when the button is clicked:\ndocument.getElementById(\"fullscreen-button\").addEventListener(\"click\", function() {\n document.documentElement.requestFullscreen();\n});\n\nTo open fields in full-screen mode, you can use the open() method of the Window object in JavaScript. This method takes the URL of the page that you want to open in full-screen mode as a parameter. For example, you can use the following code to open the page at https://example.com in full-screen mode:\nwindow.open(\"https://example.com\", \"\", \"fullscreen=yes\");\n\nTo hide the standard toolbar, you can use the toolbar=no parameter in the open() method. This will hide the standard toolbar in the full-screen window. For example, you can use the following code to open the page at https://example.com in full-screen mode without the standard toolbar:\nwindow.open(\"https://example.com\", \"\", \"fullscreen=yes,toolbar=no\");\n\n" ]
[ 0 ]
[]
[]
[ "webdatarocks" ]
stackoverflow_0074669385_webdatarocks.txt
Q: AWS CLI list-policies to find a policy with a specific name I am trying to locate a policy in AWS with a specific name via the aws cli. I tried get-policy first but it threw and error. Now I am trying list-policies and putting in a prefix. According to the documentation if I start and end the string with a forward slash it should search but it hasn't been working. I get an empty array back... any ideas? aws iam list-policies --scope Local --path-prefix /policyname.xyz/ A: It is an issue with AWS CLI V2. The issue is still open on the github repository of the AWS SDK since 11 Jan. You can check the detail here: https://github.com/aws/aws-sdk/issues/36 Complete list of issues: https://github.com/aws/aws-sdk/issues A: You can use the --query flag. For example, for exact search, aws iam list-policies --query 'Policies[?PolicyName==`policyname.xyz`]' If you want more flexible search, you can refer to https://jmespath.org/specification.html for some functions for example 'to start with policynamexxx' aws iam list-policies --query 'Policies[?starts_with(PolicyName,`policynamexxx`)]'
AWS CLI list-policies to find a policy with a specific name
I am trying to locate a policy in AWS with a specific name via the aws cli. I tried get-policy first but it threw and error. Now I am trying list-policies and putting in a prefix. According to the documentation if I start and end the string with a forward slash it should search but it hasn't been working. I get an empty array back... any ideas? aws iam list-policies --scope Local --path-prefix /policyname.xyz/
[ "It is an issue with AWS CLI V2.\nThe issue is still open on the github repository of the AWS SDK since 11 Jan.\nYou can check the detail here:\nhttps://github.com/aws/aws-sdk/issues/36\nComplete list of issues:\nhttps://github.com/aws/aws-sdk/issues\n", "You can use the --query flag. For example, for exact search,\naws iam list-policies --query 'Policies[?PolicyName==`policyname.xyz`]'\n\nIf you want more flexible search, you can refer to https://jmespath.org/specification.html for some functions for example 'to start with policynamexxx'\naws iam list-policies --query 'Policies[?starts_with(PolicyName,`policynamexxx`)]'\n\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_iam", "amazon_web_services", "command_line_interface" ]
stackoverflow_0066287626_amazon_iam_amazon_web_services_command_line_interface.txt
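The answers above stay within the AWS CLI; as a comparison only, here is a hypothetical boto3 sketch of the same client-side name filtering (it assumes boto3 is installed and credentials are configured, and it is not taken from the recorded answers). Note that --path-prefix in the question filters on the IAM path element of the policy, not on its name, which is why filtering on PolicyName client-side is used instead.
import boto3

def find_policies(name, exact=True):
    """Return customer-managed IAM policies whose PolicyName matches `name`."""
    iam = boto3.client("iam")
    matches = []
    # Scope="Local" mirrors `aws iam list-policies --scope Local`
    for page in iam.get_paginator("list_policies").paginate(Scope="Local"):
        for policy in page["Policies"]:
            found = (
                policy["PolicyName"] == name
                if exact
                else policy["PolicyName"].startswith(name)
            )
            if found:
                matches.append(policy)
    return matches

if __name__ == "__main__":
    # "policyname.xyz" is the placeholder name from the question
    for p in find_policies("policyname.xyz"):
        print(p["PolicyName"], p["Arn"])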
Q: Radio button text not aligned properly So I am trying to add a radio button on my survey form, and the button and the text are completely in different positions (screenshot of the misaligned radio button omitted). I tried display: inline; but still nothing changed A: The below code will work as you expected. Just add necessary attributes like name and for. <p>Would you recommend this survey to your friend:</p> <input type="radio" name="test" value="yes"> <label for="test">Yes</label><br> A: According to the picture, your input tag width is 100%, that's why you are facing this issue. Add class inside input and use this CSS your problem has been fixed. <style> .inline-radio{width: auto;} </style> **HTML** <p>Would you recommend this survey to your friend:</p> <input type="radio" name="test" value="yes" class="inline-radio" > <label for="test">Yes</label>
Radio button text not aligned properly
So I am trying to add a radio button on my survey form, and the button and the text are completely in different positions (screenshot of the misaligned radio button omitted). I tried display: inline; but still nothing changed
[ "The below code will work as you expected. Just add necessary attributes like name and for.\n\n\n <p>Would you recommend this survey to your friend:</p>\n <input type=\"radio\" name=\"test\" value=\"yes\">\n <label for=\"test\">Yes</label><br>\n\n\n\n", "According to the picture, your input tag width is 100%, that's why you are facing this issue. Add class inside input and use this CSS your problem has been fixed.\n<style>\n.inline-radio{width: auto;}\n</style>\n\n**HTML**\n<p>Would you recommend this survey to your friend:</p>\n<input type=\"radio\" name=\"test\" value=\"yes\" class=\"inline-radio\" >\n<label for=\"test\">Yes</label>\n\n" ]
[ 0, 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074668708_css_html.txt
Q: Is there any O(1) way to make a set of floats with tolerance? I want to make a set of floating point numbers, but with a twist: When testing if some float x is a member of the set s, I want the test to return true if s contains some float f such that abs(x - f) < tol In other words, if the set contains a number that is close to x, return true. Otherwise return false. One way I thought of doing this is to store numbers in a heap rather than a hash set, and use an approximate equality rule to decide whether the heap contains a close number. However, that would take log(N) time, which is not bad, but it would be nice to get O(1) if such an algorithm exists. Does anyone have any ideas how this might be possible? A: If you're not too fussy about the tolerance, then you can round each number to the closest multiple of tol/4. You can then use a hash map, but when you add a number x, add floor(4x/tol), floor(4x/tol+1) and floor(4x/tol-1). When you look up a number x, look up floor(4x/tol), floor(4x/tol+1) and floor(4x/tol-1). You will certainly find a match within tol/2, and you may find a match within tol. A: One idea I just had (probably not the only way to do it) is to mask the lower N bits of the number to all 0's. For instance, if you want the tolerance to be approx. 1E-3, force the lower 10 bits of mantissa to 0 when adding. Do the same when checking. One caveat of this approach is that real computers often do weird things to LSB's of mantissas when you're not looking. You store x = b00111111100000000000000000000000, and when you retrieve it you get 00111111100000000000000000000001, 001111110111111111111111111111111, etc. The reasons for this are many, but the bottom line is that it's still brittle. Anything that relies on float equality is brittle. Interested to hear other ideas, critiques, ways of overcoming the problem etc. A: Rather than another set, adjust meaning of "close". Create a function that maps each finite float to an integer. Mentally place every positive float in a list - sorted by value. 0.0 is at index 0 and MAX_FLOAT is at index N. (Likely-wise for negatives: -MAX_FLOAT to -0.0 maps to -N to 0. total ordering To find if 2 float values are "close", subtract their indexes and compare to a tolerance. This maintains the idea of float in floating-point numbers as the tolerance is a fixed integer in the index mapping domain, yet scales in the float domain.
Is there any O(1) way to make a set of floats with tolerance?
I want to make a set of floating point numbers, but with a twist: When testing if some float x is a member of the set s, I want the test to return true if s contains some float f such that abs(x - f) < tol In other words, if the set contains a number that is close to x, return true. Otherwise return false. One way I thought of doing this is to store numbers in a heap rather than a hash set, and use an approximate equality rule to decide whether the heap contains a close number. However, that would take log(N) time, which is not bad, but it would be nice to get O(1) if such an algorithm exists. Does anyone have any ideas how this might be possible?
[ "If you're not too fussy about the tolerance, then you can round each number to the closest multiple of tol/4.\nYou can then use a hash map, but when you add a number x, add floor(4x/tol), floor(4x/tol+1) and floor(4x/tol-1).\nWhen you look up a number x, look up floor(4x/tol), floor(4x/tol+1) and floor(4x/tol-1).\nYou will certainly find a match within tol/2, and you may find a match within tol.\n", "One idea I just had (probably not the only way to do it) is to mask the lower N bits of the number to all 0's. For instance, if you want the tolerance to be approx. 1E-3, force the lower 10 bits of mantissa to 0 when adding. Do the same when checking.\nOne caveat of this approach is that real computers often do weird things to LSB's of mantissas when you're not looking. You store x = b00111111100000000000000000000000, and when you retrieve it you get 00111111100000000000000000000001, 001111110111111111111111111111111, etc. The reasons for this are many, but the bottom line is that it's still brittle. Anything that relies on float equality is brittle.\nInterested to hear other ideas, critiques, ways of overcoming the problem etc.\n", "Rather than another set, adjust meaning of \"close\".\nCreate a function that maps each finite float to an integer.\nMentally place every positive float in a list - sorted by value. 0.0 is at index 0 and MAX_FLOAT is at index N. (Likely-wise for negatives: -MAX_FLOAT to -0.0 maps to -N to 0. total ordering\nTo find if 2 float values are \"close\", subtract their indexes and compare to a tolerance.\nThis maintains the idea of float in floating-point numbers as the tolerance is a fixed integer in the index mapping domain, yet scales in the float domain.\n" ]
[ 1, 0, 0 ]
[]
[]
[ "algorithm", "data_structures", "floating_point", "heap", "set" ]
stackoverflow_0074667431_algorithm_data_structures_floating_point_heap_set.txt
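A runnable sketch of the bucket-hashing idea from the first answer (the class and method names are illustrative, not from the question): both add and lookup touch the bucket of width tol/4 plus its two neighbours, so membership is O(1), a value within tol/2 of a stored one is always found, and a value may still match up to roughly tol away.
class ToleranceSet:
    def __init__(self, tol):
        self.step = tol / 4.0   # bucket width tol/4, as in the first answer
        self.buckets = set()

    def _key(self, x):
        return int(x // self.step)

    def add(self, x):
        k = self._key(x)
        # store the bucket and both neighbours
        self.buckets.update((k - 1, k, k + 1))

    def contains(self, x):
        k = self._key(x)
        # also look up the bucket and both neighbours
        return any(k + d in self.buckets for d in (-1, 0, 1))


s = ToleranceSet(tol=1e-3)
s.add(1.0)
print(s.contains(1.0004))  # True: 0.0004 < tol/2, so a match is guaranteed
print(s.contains(1.01))    # False: far outside tol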
Q: If subproject has dependency add another I have a multi-module project and I want to add another dependency automatically if a submodule contains a specific dependency. So far I've added this on my root build.gradle.kts subprojects { apply { plugin("java") } project.configurations.implementation.get().allDependencies.forEach { println(it.name) } } But it prints nothing. How can I get all dependencies implemented by a subproject and then another if it contains one already? Thanks A: Please try to use afterEvaluate, dependencies are not available at the level you are asking for subprojects { apply { plugin("java") } afterEvaluate { project.configurations.implementation.get().getDependencies().forEach { println(it.name) // Check if the dependency is the one you're looking for // and add another dependency if needed } } } A: You can use the getDependencies() method of the Configuration class to get a list of all the dependencies for a given configuration. Then, you can iterate over the list and check if a specific dependency is present using the contains() method. Here is an example: subprojects { apply { plugin("java") } val implementationDependencies = project.configurations.implementation.get().getDependencies() implementationDependencies.forEach { println(it.name) // Check if a specific dependency is present if (it.name == "your-dependency-name") { // Add another dependency if the specific one is present project.dependencies { implementation("org.another.dependency:1.0.0") } } } } Note that this code will only check dependencies that are part of the implementation configuration. If you want to check dependencies in other configurations, you can replace implementation with the appropriate configuration name.
If subproject has dependency add another
I have a multi-module project and I want to add another dependency automatically if a submodule contains a specific dependency. So far I've added this on my root build.gradle.kts subprojects { apply { plugin("java") } project.configurations.implementation.get().allDependencies.forEach { println(it.name) } } But it prints nothing. How can I get all dependencies implemented by a subproject and then another if it contains one already? Thanks
[ "Please try to use afterEvaluate, dependencies are not available at the level you are asking for\nsubprojects {\n apply {\n plugin(\"java\")\n }\n\n afterEvaluate {\n project.configurations.implementation.get().getDependencies().forEach {\n println(it.name)\n // Check if the dependency is the one you're looking for\n // and add another dependency if needed\n }\n }\n}\n\n\n", "You can use the getDependencies() method of the Configuration class to get a list of all the dependencies for a given configuration. Then, you can iterate over the list and check if a specific dependency is present using the contains() method.\nHere is an example:\nsubprojects {\napply {\n plugin(\"java\")\n}\n\nval implementationDependencies = project.configurations.implementation.get().getDependencies()\nimplementationDependencies.forEach {\n println(it.name)\n\n // Check if a specific dependency is present\n if (it.name == \"your-dependency-name\") {\n // Add another dependency if the specific one is present\n project.dependencies {\n implementation(\"org.another.dependency:1.0.0\")\n }\n }\n}\n\n}\nNote that this code will only check dependencies that are part of the implementation configuration. If you want to check dependencies in other configurations, you can replace implementation with the appropriate configuration name.\n" ]
[ 1, 0 ]
[ "Here's one way you could achieve what you're looking for:\nsubprojects {\n apply {\n plugin(\"java\")\n }\n\n project.configurations.implementation.get().allDependencies.forEach {\n if (it.name == \"specific-dependency\") {\n // Add another dependency here\n dependencies {\n implementation(\"com.example:another-dependency:1.0.0\")\n }\n }\n }\n}\n\nThis code iterates over all the dependencies in the 'implementation' configuration for each subproject, and if it finds a dependency with the name 'specific-dependency', it adds 'another-dependency' to the 'implementation' configuration for that subproject.\n" ]
[ -1 ]
[ "build.gradle", "gradle", "gradle_kotlin_dsl" ]
stackoverflow_0074559378_build.gradle_gradle_gradle_kotlin_dsl.txt
Q: Recoding years to year nr. 0,1,2 ect I'm not very good in English or in R but hopefully I can manage to explain the problem I have a dataset where there is one column with years, from 1952 to 2007. I want to recode and reorganize it so that the first year is number 0, the next year nr. 1 and so on... Can anyone help me? I have tried recode(), arrange (), A: You mean like this? : df <- data.frame(years = 1952:2007) is.na(df$years[sample(1:nrow(df), size = 10)]) <- TRUE head(df) #> years #> 1 NA #> 2 1953 #> 3 1954 #> 4 1955 #> 5 1956 #> 6 1957 df$years_recoded <- df$years - min(df$years, na.rm = T) head(df) #> years years_recoded #> 1 NA NA #> 2 1953 0 #> 3 1954 1 #> 4 1955 2 #> 5 1956 3 #> 6 1957 4 Created on 2022-12-03 with reprex v2.0.2 A: Maybe you want this: To start with 0 we have to substract 1 from row_number() library(dplyr) df %>% mutate(year_recoded = row_number()-1) %>% head() years year_recoded 1 1952 0 2 1953 1 3 1954 2 4 1955 3 5 1956 4 6 1957 5
Recoding years to year nr. 0,1,2 ect
I'm not very good in English or in R but hopefully I can manage to explain the problem I have a dataset where there is one column with years, from 1952 to 2007. I want to recode and reorganize it so that the first year is number 0, the next year nr. 1 and so on... Can anyone help me? I have tried recode(), arrange (),
[ "You mean like this? :\ndf <- data.frame(years = 1952:2007)\nis.na(df$years[sample(1:nrow(df), size = 10)]) <- TRUE\n\nhead(df)\n#> years\n#> 1 NA\n#> 2 1953\n#> 3 1954\n#> 4 1955\n#> 5 1956\n#> 6 1957\n\ndf$years_recoded <- df$years - min(df$years, na.rm = T)\nhead(df)\n#> years years_recoded\n#> 1 NA NA\n#> 2 1953 0\n#> 3 1954 1\n#> 4 1955 2\n#> 5 1956 3\n#> 6 1957 4\n\nCreated on 2022-12-03 with reprex v2.0.2\n", "Maybe you want this: To start with 0 we have to substract 1 from row_number()\nlibrary(dplyr)\n\ndf %>% \n mutate(year_recoded = row_number()-1) %>% \n head()\n\n years year_recoded\n1 1952 0\n2 1953 1\n3 1954 2\n4 1955 3\n5 1956 4\n6 1957 5\n\n" ]
[ 2, 1 ]
[]
[]
[ "numeric", "r", "recode" ]
stackoverflow_0074669147_numeric_r_recode.txt
Q: Set and return variable in one line in dart? I think I remember something like this from python, maybe it was the walrus operator? idk. but is there a way to set an attribute while returning the value? something like this: class Foo { late String foo; Foo(); String setFoo() => foo := 'foo'; } f = Foo(); x = f.setFoo(); print(x); // 'foo' A: I think I found it, that is, for the case that foo can be null: class Foo { String? foo; Foo(); String setFoo() => foo ??= 'foo'; } A: Nobody forbids you to do it just like you said: class Foo { late String foo; String setFoo() => foo = 'foo'; } void main() { print(Foo().setFoo()); } The only drawback of using the late modifier is that you can accidentally access the foo field before the initialization, which will cause a LateInitializationError. To prevent this, you can use a nullable type for the field. Moreover, you can just initialize the field inline: String foo = 'foo';
Set and return variable in one line in dart?
I think I remember something like this from python, maybe it was the walrus operator? idk. but is there a way to set an attribute while returning the value? something like this: class Foo { late String foo; Foo(); String setFoo() => foo := 'foo'; } f = Foo(); x = f.setFoo(); print(x); // 'foo'
[ "I think I found it, that is, for the case that foo can be null:\nclass Foo {\n String? foo;\n Foo();\n String setFoo() => foo ??= 'foo';\n}\n\n", "Nobody forbids you to do it just like you said:\nclass Foo {\n late String foo;\n\n String setFoo() => foo = 'foo';\n}\n\nvoid main() {\n print(Foo().setFoo());\n}\n\nThe only drawback of using the late modifier is that you can accidentally access the foo field before the initialization, which will cause a LateInitializationError.\nTo prevent this, you can use a nullable type for the field.\nMoreover, you can just initialize the field inline:\nString foo = 'foo';\n\n" ]
[ 0, 0 ]
[]
[]
[ "dart", "properties", "syntax" ]
stackoverflow_0074668533_dart_properties_syntax.txt
Q: Splitting string, ignoring brackets including nested brackets I would like to split a string at spaces (and colons), except inside curly brackets and rounded brackets. Similar questions have been asked, but the answers fail with nested brackets. Here is an example of a string to split: p1: I/out p2: (('mean', 5), 0.0, ('std', 2)) p3: 7 p4: {'name': 'check', 'value': 80.0} The actual goal is to obtain a list of keys (p1, p2, p3 and p4) along with their values. When I try to split the string at spaces and colons, I can avoid splitting at spaces and colons inside the curly brackets. But I cannot avoid the splitting at some spaces inside the rounded brackets because of the nested brackets. The closest I got is [\s:]+(?=[^\{\(\)\}]*(?:[\{\(]|$)) which is fine except that it splits between (('mean', 5), and 0.0. A: You can use the following PCRE/Python PyPi regex compliant pattern: (?:(\((?:[^()]++|(?1))*\))|(\{(?:[^{}]++|(?2))*})|[^\s:])+ See the regex demo. It matches (?: - start of a container non-capturing group: (\((?:[^()]++|(?1))*\)) - Group 1: a substring between two nested round brackets | - or (\{(?:[^{}]++|(?2))*}) - Group 2: a substring between two nested braces | - or [^\s:] - a char other than whitespace and colon )+ - one or more occurrences. See the Python demo: import regex text = "p1: I/out p2: (('mean', 5), 0.0, ('std', 2)) p3: 7 p4: {'name': 'check', 'value': 80.0}" pattern = r"(?:(\((?:[^()]++|(?1))*\))|(\{(?:[^{}]++|(?2))*})|[^\s:])+" print( [x.group() for x in regex.finditer(pattern, text)] ) Output: ['p1', 'I/out', 'p2', "(('mean', 5), 0.0, ('std', 2))", 'p3', '7', 'p4', "{'name': 'check', 'value': 80.0}"]
Splitting string, ignoring brackets including nested brackets
I would like to split a string at spaces (and colons), except inside curly brackets and rounded brackets. Similar questions have been asked, but the answers fail with nested brackets. Here is an example of a string to split: p1: I/out p2: (('mean', 5), 0.0, ('std', 2)) p3: 7 p4: {'name': 'check', 'value': 80.0} The actual goal is to obtain a list of keys (p1, p2, p3 and p4) along with their values. When I try to split the string at spaces and colons, I can avoid splitting at spaces and colons inside the curly brackets. But I cannot avoid the splitting at some spaces inside the rounded brackets because of the nested brackets. The closest I got is [\s:]+(?=[^\{\(\)\}]*(?:[\{\(]|$)) which is fine except that it splits between (('mean', 5), and 0.0.
[ "You can use the following PCRE/Python PyPi regex compliant pattern:\n(?:(\\((?:[^()]++|(?1))*\\))|(\\{(?:[^{}]++|(?2))*})|[^\\s:])+\n\nSee the regex demo.\nIt matches\n\n(?: - start of a container non-capturing group:\n\n(\\((?:[^()]++|(?1))*\\)) - Group 1: a substring between two nested round brackets\n| - or\n(\\{(?:[^{}]++|(?2))*}) - Group 2: a substring between two nested braces\n| - or\n[^\\s:] - a char other than whitespace and colon\n\n\n)+ - one or more occurrences.\n\nSee the Python demo:\nimport regex\ntext = \"p1: I/out p2: (('mean', 5), 0.0, ('std', 2)) p3: 7 p4: {'name': 'check', 'value': 80.0}\"\npattern = r\"(?:(\\((?:[^()]++|(?1))*\\))|(\\{(?:[^{}]++|(?2))*})|[^\\s:])+\"\nprint( [x.group() for x in regex.finditer(pattern, text)] )\n\nOutput:\n['p1', 'I/out', 'p2', \"(('mean', 5), 0.0, ('std', 2))\", 'p3', '7', 'p4', \"{'name': 'check', 'value': 80.0}\"]\n\n" ]
[ 3 ]
[]
[]
[ "python", "regex", "split" ]
stackoverflow_0074668806_python_regex_split.txt
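Since the stated goal in the question is a mapping from the keys p1..p4 to their values, one possible follow-up to the answer above (an illustration, not part of the recorded answer) is to pair up the matched tokens, assuming they always alternate key, value as in the sample line:
import regex

text = "p1: I/out p2: (('mean', 5), 0.0, ('std', 2)) p3: 7 p4: {'name': 'check', 'value': 80.0}"
pattern = r"(?:(\((?:[^()]++|(?1))*\))|(\{(?:[^{}]++|(?2))*})|[^\s:])+"

# Tokens come out as key, value, key, value, ... for input shaped like the sample
tokens = [m.group() for m in regex.finditer(pattern, text)]
pairs = dict(zip(tokens[::2], tokens[1::2]))
print(pairs)
# {'p1': 'I/out', 'p2': "(('mean', 5), 0.0, ('std', 2))", 'p3': '7', 'p4': "{'name': 'check', 'value': 80.0}"}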
Q: Python modules I've just installed in my virtual env, are not found I'm using Ubuntu 20.04.5 LTS. Output of python3 --version command: Python 3.8.10 When I type pip in terminal and press TAB, it responds with the following options: pip, pip3, pip3.10 and pip3.8 But, when I use any of then with the --version flag, it all prints the same output, which is: pip 22.3.1 from /home/myuser/.local/lib/python3.8/site-packages/pip (python 3.8) When I use "pip list" command, I can see the "virtualenv" package version(which is 20.17.0) Then I create my virtual environment using this following command: python3 -m venv .env Then I activate it using source .env/bin/activate command Before installing the modules, I update virtual environment's pip, using the following command: .env/bin/python3 -m pip install --upgrade pip Also, I have a file called requirements.txt with the packages names I need in it: wheel numpy matplotlib sklearn seaborn So I install them using the following command: .env/bin/pip install -r requirements.txt --no-cache-dir --use-pep517 Finally, I try to run my python program using ".env/bin/python kmeans3.py" command, it prints this error: Traceback (most recent call last): File "kmeans3.py", line 10, in <module> from sklearn.cluster import KMeans ModuleNotFoundError: No module named 'sklearn' obs: This is the first 12 lines of the file: """ .env/bin/python3 -m pip install --upgrade pip .env/bin/pip install -r requirements.txt --no-cache-dir --use-pep517 """ import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from sklearn.preprocessing import MinMaxScaler A: Looks ok to me. If your environment is activated try to just run python kmeans3.py or python3 kmeans3.py A: I don't know why is that, but I solved this problem installing "scikit-learn" package before install "sklearn"
Python modules I've just installed in my virtual env, are not found
I'm using Ubuntu 20.04.5 LTS. Output of python3 --version command: Python 3.8.10 When I type pip in terminal and press TAB, it responds with the following options: pip, pip3, pip3.10 and pip3.8 But, when I use any of then with the --version flag, it all prints the same output, which is: pip 22.3.1 from /home/myuser/.local/lib/python3.8/site-packages/pip (python 3.8) When I use "pip list" command, I can see the "virtualenv" package version(which is 20.17.0) Then I create my virtual environment using this following command: python3 -m venv .env Then I activate it using source .env/bin/activate command Before installing the modules, I update virtual environment's pip, using the following command: .env/bin/python3 -m pip install --upgrade pip Also, I have a file called requirements.txt with the packages names I need in it: wheel numpy matplotlib sklearn seaborn So I install them using the following command: .env/bin/pip install -r requirements.txt --no-cache-dir --use-pep517 Finally, I try to run my python program using ".env/bin/python kmeans3.py" command, it prints this error: Traceback (most recent call last): File "kmeans3.py", line 10, in <module> from sklearn.cluster import KMeans ModuleNotFoundError: No module named 'sklearn' obs: This is the first 12 lines of the file: """ .env/bin/python3 -m pip install --upgrade pip .env/bin/pip install -r requirements.txt --no-cache-dir --use-pep517 """ import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from sklearn.cluster import KMeans from sklearn.metrics import silhouette_score from sklearn.preprocessing import MinMaxScaler
[ "Looks ok to me.\nIf your environment is activated try to just run\npython kmeans3.py\n\nor\npython3 kmeans3.py\n\n", "I don't know why is that, but I solved this problem installing \"scikit-learn\" package before install \"sklearn\"\n" ]
[ 0, 0 ]
[]
[]
[ "pip", "python", "python_3.x", "scikit_learn", "virtualenv" ]
stackoverflow_0074632110_pip_python_python_3.x_scikit_learn_virtualenv.txt
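A small diagnostic sketch related to the accepted fix (not taken from the recorded answers): run it with the virtual environment's interpreter, e.g. .env/bin/python check_env.py, to confirm which interpreter is active and which distributions actually resolved there. The point the second answer hints at is that the importable module is sklearn but the distribution that provides it is named scikit-learn, so that is the name requirements.txt should list.
import sys
from importlib.metadata import version, PackageNotFoundError  # stdlib on Python 3.8+

print("interpreter:", sys.executable)

for dist in ("numpy", "matplotlib", "seaborn", "scikit-learn"):
    try:
        print(dist, version(dist))
    except PackageNotFoundError:
        print(dist, "NOT installed in this environment")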
Q: How to apply patch to multiple deployments in Kubernetes? I have applied the below patch to stop two services, but it stopped only the last one. Is there any way to stop multiple deployments in the same patch, or do I need to apply a patch for each one? I know we can label them and apply a patch for that label, but there is some trouble doing so. patches: - patch: |- - op: replace path: /spec/replicas value: 1 target: kind: Deployment name: ApplicationName-1 name: ApplicationName-2 A: To stop multiple deployments in the same patch, you can specify multiple target blocks in the patch, each with a different name value. Here is an example: patches: - patch: |- - op: replace path: /spec/replicas value: 1 target: kind: Deployment name: ApplicationName-1 target: kind: Deployment name: ApplicationName-2
How to apply patch to multiple deployments in Kubernetes?
I have applied the below patch to stop two services, but it stopped only the last one. Is there any way to stop multiple deployments in the same patch, or do I need to apply a patch for each one? I know we can label them and apply a patch for that label, but there is some trouble doing so. patches: - patch: |- - op: replace path: /spec/replicas value: 1 target: kind: Deployment name: ApplicationName-1 name: ApplicationName-2
[ "To stop multiple deployments in the same patch, you can specify multiple target blocks in the patch, each with a different name value. Here is an example:\npatches:\n- patch: |-\n - op: replace\n path: /spec/replicas\n value: 1\n target:\n kind: Deployment\n name: ApplicationName-1\n target:\n kind: Deployment\n name: ApplicationName-2\n\n" ]
[ 0 ]
[]
[]
[ "kubernetes", "kubernetes_pod" ]
stackoverflow_0074657484_kubernetes_kubernetes_pod.txt
Q: Reassign to a higher-scoped mutable referenced variable within a loop So I'm trying to implement a linked list in Rust to better understand the language and the following is what I came up with. use std::rc::Rc; use std::fmt::Debug; struct Node<T> where T: Debug, { value: T, next: Option<Rc<Box<Node<T>>>>, } pub struct LinkedList<T> where T: Debug, { start: Option<Rc<Box<Node<T>>>>, end: Option<Rc<Box<Node<T>>>>, } I managed to implement the insert method, but I'm having trouble implementing the traverse method. impl<T> LinkedList<T> where T: Debug, { pub fn insert(&mut self, value: T) { let node = Rc::new(Box::new(Node { value, next: None })); match &mut self.end { None => { self.start = Some(Rc::clone(&node)); } Some(ref mut end_node) => { if let Some(mutable_node) = Rc::get_mut(end_node) { mutable_node.next = Some(Rc::clone(&node)); } } } self.end = Some(node); } pub fn traverse(&mut self) { let mut ptr = &mut self.start; while let Some(ref mut node_rc) = &mut ptr { let inner_ptr = Rc::get_mut(node_rc).unwrap(); *ptr = inner_ptr.next; } } } In the traverse method I'm trying to do the basic, initialize a pointer at start and keep moving the pointer forward at each iteration of the loop, but the above traverse implementation gives me the following error rustc: cannot move out of `inner_ptr.next` which is behind a mutable reference move occurs because `inner_ptr.next` has type `Option<Rc<Box<Node<T>>>>`, which does not implement the `Copy` trait which made some sense to me, so I tried modifying my code to ptr = &mut inner_ptr.next; but now I get a different error stating | 56 | while let Some(ref mut node_rc) = &mut ptr { | -------- borrow of `ptr` occurs here ... 59 | ptr = &mut inner_ptr.next; | ^^^^^^^^^^^^^^^^^^^^^^^^^ | | | assignment to borrowed `ptr` occurs here | borrow later used here I thought I was getting this error because inner_ptr is dropped at the end of each loop iteration, so I made the following change to the traverse method by having inner_ptr's lifetime to equal ptr's lifetime, like so pub fn traverse(&mut self) { let mut ptr = &mut self.start; let mut inner_ptr: &mut Box<Node<T>>; while let Some(ref mut node_rc) = &mut ptr { inner_ptr = Rc::get_mut(node_rc).unwrap(); ptr = &mut inner_ptr.next; } } But, the compiler throws the same error in this case as well. Clearly I'm missing something fundamental here about Rust's borrow mechanism, but I can't figure out what A: You're taking a mutable reference of ptr when you should't. pub fn traverse(&mut self) { let mut ptr = &mut self.start; while let Some(ref mut node_rc) = ptr { // don't take a mutable reference here println!("{:?}", node_rc.value); let inner_ptr = Rc::get_mut(node_rc).unwrap(); ptr = &mut inner_ptr.next; } } You don't want to take a mutable reference there because you don't want to borrow it which would prevent you from changing it later. Instead you want to move it and replace it every loop.
Reassign to a higher-scoped mutable referenced variable within a loop
So I'm trying to implement a linked list in Rust to better understand the language and the following is what I came up with. use std::rc::Rc; use std::fmt::Debug; struct Node<T> where T: Debug, { value: T, next: Option<Rc<Box<Node<T>>>>, } pub struct LinkedList<T> where T: Debug, { start: Option<Rc<Box<Node<T>>>>, end: Option<Rc<Box<Node<T>>>>, } I managed to implement the insert method, but I'm having trouble implementing the traverse method. impl<T> LinkedList<T> where T: Debug, { pub fn insert(&mut self, value: T) { let node = Rc::new(Box::new(Node { value, next: None })); match &mut self.end { None => { self.start = Some(Rc::clone(&node)); } Some(ref mut end_node) => { if let Some(mutable_node) = Rc::get_mut(end_node) { mutable_node.next = Some(Rc::clone(&node)); } } } self.end = Some(node); } pub fn traverse(&mut self) { let mut ptr = &mut self.start; while let Some(ref mut node_rc) = &mut ptr { let inner_ptr = Rc::get_mut(node_rc).unwrap(); *ptr = inner_ptr.next; } } } In the traverse method I'm trying to do the basic, initialize a pointer at start and keep moving the pointer forward at each iteration of the loop, but the above traverse implementation gives me the following error rustc: cannot move out of `inner_ptr.next` which is behind a mutable reference move occurs because `inner_ptr.next` has type `Option<Rc<Box<Node<T>>>>`, which does not implement the `Copy` trait which made some sense to me, so I tried modifying my code to ptr = &mut inner_ptr.next; but now I get a different error stating | 56 | while let Some(ref mut node_rc) = &mut ptr { | -------- borrow of `ptr` occurs here ... 59 | ptr = &mut inner_ptr.next; | ^^^^^^^^^^^^^^^^^^^^^^^^^ | | | assignment to borrowed `ptr` occurs here | borrow later used here I thought I was getting this error because inner_ptr is dropped at the end of each loop iteration, so I made the following change to the traverse method by having inner_ptr's lifetime to equal ptr's lifetime, like so pub fn traverse(&mut self) { let mut ptr = &mut self.start; let mut inner_ptr: &mut Box<Node<T>>; while let Some(ref mut node_rc) = &mut ptr { inner_ptr = Rc::get_mut(node_rc).unwrap(); ptr = &mut inner_ptr.next; } } But, the compiler throws the same error in this case as well. Clearly I'm missing something fundamental here about Rust's borrow mechanism, but I can't figure out what
[ "You're taking a mutable reference of ptr when you should't.\npub fn traverse(&mut self) {\n let mut ptr = &mut self.start;\n\n while let Some(ref mut node_rc) = ptr { // don't take a mutable reference here\n println!(\"{:?}\", node_rc.value);\n let inner_ptr = Rc::get_mut(node_rc).unwrap();\n\n ptr = &mut inner_ptr.next;\n }\n}\n\nYou don't want to take a mutable reference there because you don't want to borrow it which would prevent you from changing it later.\nInstead you want to move it and replace it every loop.\n" ]
[ 1 ]
[]
[]
[ "borrow_checker", "rust" ]
stackoverflow_0074669077_borrow_checker_rust.txt
Q: In PrimeNG, how do I bind checkboxes with the same name to a form control? I'm using PrimeNG 14 (and Angular 14). I have a form in which I enter product information, and I would like to associate the product with one or more categories, each of which is displayed as a checkbox. <form [formGroup]="form" (ngSubmit)="submit()"> ... <p-table #dt [value]="(categories$ | async)!" [(selection)]="selectedCategories" dataKey="categoryId"> <ng-template pTemplate="header"> <tr> <th> <p-tableHeaderCheckbox></p-tableHeaderCheckbox> </th> <th pSortableColumn="name"> <div> Category <p-sortIcon field="name"></p-sortIcon> <p-columnFilter type="text" field="name" display="menu"></p-columnFilter> </div> </th> </tr> </ng-template> <ng-template pTemplate="body" let-category> <tr class="p-selectable-row"> <td> <p-tableCheckbox [value]="category" [formControl]="$any(form.controls['categoryIds'])"></p-tableCheckbox> </td> <td> <span class="p-column-title">Category</span> {{category.name}} </td> </tr> </ng-template> </p-table> In my service class I have form!: FormGroup; ... ngOnInit(): void { ... this.form = this.fb.group({ ... categoryIds: [] }); The issue is, I'm not sure how to bind the category ID checkboxes to the form control. Using the above approach doesn't work because when I check one checkbox, they all get checked. A: To bind the checkboxes to the form control, you can use the formControlName attribute on each checkbox element. Here's how you could update your code: <form [formGroup]="form" (ngSubmit)="submit()"> ... <p-table #dt [value]="(categories$ | async)!" [(selection)]="selectedCategories" dataKey="categoryId"> <ng-template pTemplate="header"> <tr> <th> <p-tableHeaderCheckbox></p-tableHeaderCheckbox> </th> <th pSortableColumn="name"> <div> Category <p-sortIcon field="name"></p-sortIcon> <p-columnFilter type="text" field="name" display="menu"></p-columnFilter> </div> </th> </tr> </ng-template> <ng-template pTemplate="body" let-category> <tr class="p-selectable-row"> <td> <input type="checkbox" formControlName="categoryIds" [value]="category.categoryId"> </td> <td> <span class="p-column-title">Category</span> {{category.name}} </td> </tr> </ng-template> </p-table> </form> Note that in the code above, we are using the formControlName attribute to bind each checkbox to the categoryIds control in the form group, and the value attribute to specify the value of the checkbox. When the form is submitted, you can access the selected category IDs using the value property of the categoryIds control, e.g. this.form.controls['categoryIds'].value. A: You can use the formArrayName attribute on the p-tableCheckbox element, and provide the name of the form array control as the value. <form [formGroup]="form" (ngSubmit)="submit()"> ... <p-table #dt [value]="(categories$ | async)!" 
[(selection)]="selectedCategories" dataKey="categoryId"> <ng-template pTemplate="header"> <tr> <th> <p-tableHeaderCheckbox></p-tableHeaderCheckbox> </th> <th pSortableColumn="name"> <div> Category <p-sortIcon field="name"></p-sortIcon> <p-columnFilter type="text" field="name" display="menu"></p-columnFilter> </div> </th> </tr> </ng-template> <ng-template pTemplate="body" let-category> <tr class="p-selectable-row"> <td> <p-tableCheckbox [value]="category" formArrayName="categoryIds"></p-tableCheckbox> </td> <td> <span class="p-column-title">Category</span> {{category.name}} </td> </tr> </ng-template> </p-table> In your service class, you can then create a FormArray control with the name categoryIds and add it to the form group: ngOnInit(): void { ... this.form = this.fb.group({ ... categoryIds: this.fb.array([]) }); } This will allow each checkbox to be bound to the corresponding element in the categoryIds form array control, and will allow you to submit the selected category IDs as part of the form data.
In PrimeNG, how do I bind checkboxes with the same name to a form control?
I'm using PrimeNG 14 (and Angular 14). I have a form in which I enter product information, and I would like to associate the product with one or more categories, each of which is displayed as a checkbox. <form [formGroup]="form" (ngSubmit)="submit()"> ... <p-table #dt [value]="(categories$ | async)!" [(selection)]="selectedCategories" dataKey="categoryId"> <ng-template pTemplate="header"> <tr> <th> <p-tableHeaderCheckbox></p-tableHeaderCheckbox> </th> <th pSortableColumn="name"> <div> Category <p-sortIcon field="name"></p-sortIcon> <p-columnFilter type="text" field="name" display="menu"></p-columnFilter> </div> </th> </tr> </ng-template> <ng-template pTemplate="body" let-category> <tr class="p-selectable-row"> <td> <p-tableCheckbox [value]="category" [formControl]="$any(form.controls['categoryIds'])"></p-tableCheckbox> </td> <td> <span class="p-column-title">Category</span> {{category.name}} </td> </tr> </ng-template> </p-table> In my service class I have form!: FormGroup; ... ngOnInit(): void { ... this.form = this.fb.group({ ... categoryIds: [] }); The issue is, I'm not sure how to bind the category ID checkboxes to the form control. Using the above approach doesn't work because when I check one checkbox, they all get checked.
[ "To bind the checkboxes to the form control, you can use the formControlName attribute on each checkbox element. Here's how you could update your code:\n<form [formGroup]=\"form\" (ngSubmit)=\"submit()\">\n ...\n <p-table #dt [value]=\"(categories$ | async)!\" \n [(selection)]=\"selectedCategories\"\n dataKey=\"categoryId\">\n\n <ng-template pTemplate=\"header\">\n <tr>\n <th>\n <p-tableHeaderCheckbox></p-tableHeaderCheckbox>\n </th>\n <th pSortableColumn=\"name\">\n <div>\n Category\n <p-sortIcon field=\"name\"></p-sortIcon>\n <p-columnFilter type=\"text\" field=\"name\" display=\"menu\"></p-columnFilter>\n </div>\n </th>\n </tr>\n </ng-template>\n <ng-template pTemplate=\"body\" let-category>\n <tr class=\"p-selectable-row\">\n <td>\n <input type=\"checkbox\" formControlName=\"categoryIds\" [value]=\"category.categoryId\">\n </td>\n <td>\n <span class=\"p-column-title\">Category</span>\n {{category.name}}\n </td>\n </tr>\n </ng-template>\n </p-table>\n</form>\n\nNote that in the code above, we are using the formControlName attribute to bind each checkbox to the categoryIds control in the form group, and the value attribute to specify the value of the checkbox.\nWhen the form is submitted, you can access the selected category IDs using the value property of the categoryIds control, e.g. this.form.controls['categoryIds'].value.\n", "You can use the formArrayName attribute on the p-tableCheckbox element, and provide the name of the form array control as the value.\n<form [formGroup]=\"form\" (ngSubmit)=\"submit()\">\n ...\n <p-table #dt [value]=\"(categories$ | async)!\" \n [(selection)]=\"selectedCategories\"\n dataKey=\"categoryId\">\n\n <ng-template pTemplate=\"header\">\n <tr>\n <th>\n <p-tableHeaderCheckbox></p-tableHeaderCheckbox>\n </th>\n <th pSortableColumn=\"name\">\n <div>\n Category\n <p-sortIcon field=\"name\"></p-sortIcon>\n <p-columnFilter type=\"text\" field=\"name\" display=\"menu\"></p-columnFilter>\n </div>\n </th>\n </tr>\n </ng-template>\n <ng-template pTemplate=\"body\" let-category>\n <tr class=\"p-selectable-row\">\n <td>\n <p-tableCheckbox [value]=\"category\" formArrayName=\"categoryIds\"></p-tableCheckbox>\n </td>\n <td>\n <span class=\"p-column-title\">Category</span>\n {{category.name}}\n </td>\n </tr>\n </ng-template>\n </p-table>\n\nIn your service class, you can then create a FormArray control with the name categoryIds and add it to the form group:\nngOnInit(): void {\n ...\n this.form = this.fb.group({\n ...\n categoryIds: this.fb.array([])\n });\n}\n\nThis will allow each checkbox to be bound to the corresponding element in the categoryIds form array control, and will allow you to submit the selected category IDs as part of the form data.\n" ]
[ 0, 0 ]
[]
[]
[ "angular", "checkbox", "form_control", "forms", "primeng" ]
stackoverflow_0074631454_angular_checkbox_form_control_forms_primeng.txt
Q: How to download first emails attachment using PowerShell? I'm getting an automatic weekly email with an attachment, and with an outlook rule, it is being saved in a folder in my inbox. my goal is to create a PowerShell script that downloads the attachment from the email with the latest "ReceivedTime". I have managed to sort the Object in the folder by "ReceivedTime" and get the latest email in the list: "$emails[0]", I can see the name of the attachment but I cannot seem to find how to download the file itself. the error I get is as follows: "You cannot call a method on a null-valued expression." my code look like this: $outlook = New-Object -ComObject outlook.application $mapi = $outlook.GetNamespace("MAPI"); #Get Folder path $filePath = "C:\Temp\test" $inbox = $mapi.GetDefaultFolder(6) $inbox.Folders | Select-Object FolderPath $olFolderPath = "\\[email protected]\Inbox\compliance" #Get Emails Items from folder $targetFolder = $inbox.Folders | Where-Object { $_.FolderPath -eq $olFolderPath } #Sort Emails in folder by date $emails = $targetFolder.Items | Sort-Object ReceivedTime -Descending #download attachements $emails[0].Attachments | Select-Object $_.saveasfile(($filePath)) I'm using this guide to download the attachment, but my use case is a bit different from what is described in the article: https://bronowski.it/blog/2020/09/saving-outlook-attachments-with-powershell/ Sorry if it is a very simple task, I'm new to PowerShell and just learning automation. Thank you very much :) A: First of all, I've noticed the following code: #Sort Emails in folder by date $emails = $targetFolder.Items | Sort-Object ReceivedTime -Descending Instead, you need to use the Sort method of the Items class which sorts the collection of items by the specified property. The index for the collection is reset to 1 upon completion of this method. So, to get the first item use the 1 index instead of 0. $emails[1].Attachments[1].SaveAsFile(($filePath)) Be aware, the Attachments object contains a set of Attachment objects that represent the attachments in an Outlook item. Use the Attachments property to return the Attachments collection for any Outlook item (except notes).
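A hedged sketch building on the answer: let Outlook's own Items.Sort do the ordering, then save the newest mail's first attachment to a full path (SaveAsFile expects the destination file name, not just a folder). $targetFolder and $filePath come from the question; everything else is illustrative.

# Sort the folder's items newest-first using the COM Sort method ($true = descending)
$items = $targetFolder.Items
$items.Sort("[ReceivedTime]", $true)

# COM collections are 1-based, so Item(1) is the newest message after sorting
$latest = $items.Item(1)

if ($latest.Attachments.Count -gt 0) {
    $attachment = $latest.Attachments.Item(1)
    # SaveAsFile needs the complete destination path including the file name
    $destination = Join-Path $filePath $attachment.FileName
    $attachment.SaveAsFile($destination)
}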
How to download the first email's attachment using PowerShell?
How to download the first email's attachment using PowerShell?
I'm getting an automatic weekly email with an attachment, and with an outlook rule, it is being saved in a folder in my inbox. my goal is to create a PowerShell script that downloads the attachment from the email with the latest "ReceivedTime". I have managed to sort the Object in the folder by "ReceivedTime" and get the latest email in the list: "$emails[0]", I can see the name of the attachment but I cannot seem to find how to download the file itself. the error I get is as follows: "You cannot call a method on a null-valued expression." my code look like this: $outlook = New-Object -ComObject outlook.application $mapi = $outlook.GetNamespace("MAPI"); #Get Folder path $filePath = "C:\Temp\test" $inbox = $mapi.GetDefaultFolder(6) $inbox.Folders | Select-Object FolderPath $olFolderPath = "\\[email protected]\Inbox\compliance" #Get Emails Items from folder $targetFolder = $inbox.Folders | Where-Object { $_.FolderPath -eq $olFolderPath } #Sort Emails in folder by date $emails = $targetFolder.Items | Sort-Object ReceivedTime -Descending #download attachements $emails[0].Attachments | Select-Object $_.saveasfile(($filePath)) I'm using this guide to download the attachment, but my use case is a bit different from what is described in the article: https://bronowski.it/blog/2020/09/saving-outlook-attachments-with-powershell/ Sorry if it is a very simple task, I'm new to PowerShell and just learning automation. Thank you very much :)
[ "First of all, I've noticed the following code:\n#Sort Emails in folder by date\n$emails = $targetFolder.Items | Sort-Object ReceivedTime -Descending\n\nInstead, you need to use the Sort method of the Items class which sorts the collection of items by the specified property. The index for the collection is reset to 1 upon completion of this method. So, to get the first item use the 1 index instead of 0.\n$emails[1].Attachments[1].SaveAsFile(($filePath))\n\nBe aware, the Attachments object contains a set of Attachment objects that represent the attachments in an Outlook item. Use the Attachments property to return the Attachments collection for any Outlook item (except notes).\n" ]
[ 0 ]
[]
[]
[ "automation", "email", "outlook", "powershell" ]
stackoverflow_0074667624_automation_email_outlook_powershell.txt
Q: pow large numbers in Python How can I raise large numbers to a power in python? a = 62608558862573792084872798679396455703616395237802859621162736207631538899993 b = 93910650126758265671774994856253142403789359314618444886584691522424141933664 c = pow(a, b) It is impossible to get an answer that way. Are there any ways to raise large numbers to a power to make it work? A: If you calculate the result to all digits, it has 10^78 digits. That's more than will fit into any RAM of any computer in the world today. It is impossible to get an answer that way. It will be impossible to get a precise answer for a long time, given that Earth only has ~10^50 atoms. The number 62608558862573792084872798679396455703616395237802859621162736207631538899993 looks like a pseudo prime number (is has only 5 prime factors) as used in cryptography. Cryptography often works with modulo operations to limit the number of digits. You can use pow to do modulo math as well: pow(base, exp, mod=None) Return base to the power exp; if mod is present, return base to the power exp, modulo mod (computed more efficiently than pow(base, exp) % mod).
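A small illustration of the three-argument form of pow mentioned in the answer, using small placeholder numbers instead of the 78-digit values from the question:

base, exp, mod = 7, 13, 11
print(pow(base, exp, mod))    # modular exponentiation; never materialises the full power
print(pow(base, exp) % mod)   # same result, but computes the huge intermediate value first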
pow large numbers in Python
How can I raise large numbers to a power in python? a = 62608558862573792084872798679396455703616395237802859621162736207631538899993 b = 93910650126758265671774994856253142403789359314618444886584691522424141933664 c = pow(a, b) It is impossible to get an answer that way. Are there any ways to raise large numbers to a power to make it work?
[ "If you calculate the result to all digits, it has 10^78 digits. That's more than will fit into any RAM of any computer in the world today.\n\nIt is impossible to get an answer that way.\n\nIt will be impossible to get a precise answer for a long time, given that Earth only has ~10^50 atoms.\nThe number 62608558862573792084872798679396455703616395237802859621162736207631538899993 looks like a pseudo prime number (is has only 5 prime factors) as used in cryptography. Cryptography often works with modulo operations to limit the number of digits. You can use pow to do modulo math as well:\n\npow(base, exp, mod=None)\nReturn base to the power exp; if mod is present, return base to the power exp, modulo mod (computed more efficiently than pow(base, exp) % mod).\n\n" ]
[ 3 ]
[]
[]
[ "largenumber", "pow", "python" ]
stackoverflow_0074669402_largenumber_pow_python.txt
Q: Having trouble with print output of object decoded with JSONDecoder I'm trying to decode a JSON string in swift but having some weird issues accessing the properties once decoded. This is the contents of the JSON file that I retrieve from a locally stored JSON file [ { "word": "a", "usage": [ { "partOfSpeech": "determiner" } ] } ] And this is the code to access the properties of the JSON file struct WordDictionary : Codable { var word: String var usage: [Usage] } struct Usage: Codable { var partOfSpeech: String } if let url = Bundle.main.url(forResource: FILE_NAME, withExtension: "json") { do { let data = try Data(contentsOf: url) let decoder = JSONDecoder() let jsonData = try decoder.decode([WordDictionary].self, from: data) print(jsonData[0].word) //Outputs "a" print(jsonData[0].usage) //Outputs "[MyApp.AppDelegate.(unknown context at $102a37f00).(unknown context at $102a38038).Usage(partOfSpeech: "determiner")]" } catch { print("error:\(error)") } } As you can see, when I try to print(jsonData[0].usage) I get a series of unknown data messages before I get the “Usage” property. When I print this line I just want to see determiner, I’m not sure what the preamble about the “unknown context” is all about. I’m also running this code in didFinishLaunchingWithOptions function of the AppDelegate. I’m not sure what I’m missing. I've been trying to find a solution for a few days now and trying different approaches but still can’t get the desired output, any help would be appreciated. A: If you want a type to print nicely when you interpolate an instance of it into a string, you need to make it conform to CustomStringConvertible. This protocol declares one property: description and when string interpolation encounters an object that conforms to it, it uses the string returned by description instead. You need something like: extension Usage: CustomStringConvertible { var description: String { return "{ \"partOfSpeech\" : \"\(partOfSpeech)\" }" } } if you want a JSON like* string to print. An alternative would be to re-encode the object using a JSONEncoder and convert the data to a String. That's much heavier but might be a better option for more complex objects. *JSON like because things like line feeds and tabs won't be replaced by escapes and " and \ appearing in the string won't be escaped. A: tl;dr You are seeing the “unknown context” in the description of the type, because it was defined inside a function. You can solve this by either moving those type definitions outside of the function or implementing your own CustomStringConvertible conformance. It's a matter of where you defined your types. Consider: class AppDelegate: UIResponder, UIApplicationDelegate { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { struct WordDictionary: Codable { var word: String var usage: [Usage] } struct Usage: Codable { var partOfSpeech: String } do { let url = Bundle.main.url(forResource: "test", withExtension: "json")! let data = try Data(contentsOf: url) let words = try JSONDecoder().decode([WordDictionary].self, from: data) print(words[0].usage) } catch { print(error) } return true } ... } That produces: [MyApp.AppDelegate.(unknown context at $102bac454).(unknown context at $102bac58c).Usage(partOfSpeech: "determiner")] That is saying that Usage was defined in some unknown context within the AppDelegate within MyApp. In short, it does not know how to represent the hierarchy for types defined within functions. 
Contrast that with: class AppDelegate: UIResponder, UIApplicationDelegate { struct WordDictionary: Codable { var word: String var usage: [Usage] } struct Usage: Codable { var partOfSpeech: String } func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { do { let url = Bundle.main.url(forResource: "test", withExtension: "json")! let data = try Data(contentsOf: url) let words = try JSONDecoder().decode([WordDictionary].self, from: data) print(words[0].usage) } catch { print(error) } return true } ... } Which produces: [MyApp.AppDelegate.Usage(partOfSpeech: "determiner")] You can also add your own CustomStringConvertible conformance: struct WordDictionary: Codable { var word: String var usage: [Usage] } struct Usage: Codable { var partOfSpeech: String } extension Usage: CustomStringConvertible { var description: String { "Usage(partOfSpeech: \"\(partOfSpeech)\")" } } @main class AppDelegate: UIResponder, UIApplicationDelegate { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool { do { let url = Bundle.main.url(forResource: "test", withExtension: "json")! let data = try Data(contentsOf: url) let words = try JSONDecoder().decode([WordDictionary].self, from: data) print(words[0].usage) } catch { print(error) } return true } ... } Which produces: [Usage(partOfSpeech: "determiner")] Through CustomStringConvertible, you can make the print format it however you want.
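A short sketch of the JSONEncoder alternative the first answer mentions, re-encoding the decoded value so print shows JSON instead of the type description; the variable names follow the question and the formatting option is just a readability choice:

let encoder = JSONEncoder()
encoder.outputFormatting = .prettyPrinted
if let encoded = try? encoder.encode(jsonData[0].usage),
   let text = String(data: encoded, encoding: .utf8) {
    print(text)   // prints the usage array back as JSON
}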
Having trouble with print output of object decoded with JSONDecoder
I'm trying to decode a JSON string in swift but having some weird issues accessing the properties once decoded. This is the contents of the JSON file that I retrieve from a locally stored JSON file [ { "word": "a", "usage": [ { "partOfSpeech": "determiner" } ] } ] And this is the code to access the properties of the JSON file struct WordDictionary : Codable { var word: String var usage: [Usage] } struct Usage: Codable { var partOfSpeech: String } if let url = Bundle.main.url(forResource: FILE_NAME, withExtension: "json") { do { let data = try Data(contentsOf: url) let decoder = JSONDecoder() let jsonData = try decoder.decode([WordDictionary].self, from: data) print(jsonData[0].word) //Outputs "a" print(jsonData[0].usage) //Outputs "[MyApp.AppDelegate.(unknown context at $102a37f00).(unknown context at $102a38038).Usage(partOfSpeech: "determiner")]" } catch { print("error:\(error)") } } As you can see, when I try to print(jsonData[0].usage) I get a series of unknown data messages before I get the “Usage” property. When I print this line I just want to see determiner, I’m not sure what the preamble about the “unknown context” is all about. I’m also running this code in didFinishLaunchingWithOptions function of the AppDelegate. I’m not sure what I’m missing. I've been trying to find a solution for a few days now and trying different approaches but still can’t get the desired output, any help would be appreciated.
[ "If you want a type to print nicely when you interpolate an instance of it into a string, you need to make it conform to CustomStringConvertible.\nThis protocol declares one property: description and when string interpolation encounters an object that conforms to it, it uses the string returned by description instead.\nYou need something like:\nextension Usage: CustomStringConvertible\n{\n var description: String \n {\n return \"{ \\\"partOfSpeech\\\" : \\\"\\(partOfSpeech)\\\" }\"\n }\n}\n\nif you want a JSON like* string to print.\nAn alternative would be to re-encode the object using a JSONEncoder and convert the data to a String. That's much heavier but might be a better option for more complex objects.\n*JSON like because things like line feeds and tabs won't be replaced by escapes and \" and \\ appearing in the string won't be escaped.\n", "tl;dr\nYou are seeing the “unknown context” in the description of the type, because it was defined inside a function. You can solve this by either moving those type definitions outside of the function or implementing your own CustomStringConvertible conformance.\n\nIt's a matter of where you defined your types.\nConsider:\nclass AppDelegate: UIResponder, UIApplicationDelegate {\n\n func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {\n struct WordDictionary: Codable {\n var word: String\n var usage: [Usage]\n }\n\n struct Usage: Codable {\n var partOfSpeech: String\n }\n\n do {\n let url = Bundle.main.url(forResource: \"test\", withExtension: \"json\")!\n let data = try Data(contentsOf: url)\n let words = try JSONDecoder().decode([WordDictionary].self, from: data)\n print(words[0].usage)\n } catch {\n print(error)\n }\n\n return true\n }\n\n ...\n}\n\nThat produces:\n\n[MyApp.AppDelegate.(unknown context at $102bac454).(unknown context at $102bac58c).Usage(partOfSpeech: \"determiner\")]\n\nThat is saying that Usage was defined in some unknown context within the AppDelegate within MyApp. In short, it does not know how to represent the hierarchy for types defined within functions.\nContrast that with:\nclass AppDelegate: UIResponder, UIApplicationDelegate {\n\n struct WordDictionary: Codable {\n var word: String\n var usage: [Usage]\n }\n\n struct Usage: Codable {\n var partOfSpeech: String\n }\n\n func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {\n do {\n let url = Bundle.main.url(forResource: \"test\", withExtension: \"json\")!\n let data = try Data(contentsOf: url)\n let words = try JSONDecoder().decode([WordDictionary].self, from: data)\n print(words[0].usage)\n } catch {\n print(error)\n }\n\n return true\n }\n\n ...\n}\n\nWhich produces:\n\n[MyApp.AppDelegate.Usage(partOfSpeech: \"determiner\")]\n\n\nYou can also add your own CustomStringConvertible conformance:\nstruct WordDictionary: Codable {\n var word: String\n var usage: [Usage]\n}\n\nstruct Usage: Codable {\n var partOfSpeech: String\n}\n\nextension Usage: CustomStringConvertible {\n var description: String { \"Usage(partOfSpeech: \\\"\\(partOfSpeech)\\\")\" }\n}\n\n@main\nclass AppDelegate: UIResponder, UIApplicationDelegate {\n\n func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) 
-> Bool {\n\n do {\n let url = Bundle.main.url(forResource: \"test\", withExtension: \"json\")!\n let data = try Data(contentsOf: url)\n let words = try JSONDecoder().decode([WordDictionary].self, from: data)\n print(words[0].usage)\n } catch {\n print(error)\n }\n\n return true\n }\n\n ...\n}\n\nWhich produces:\n\n[Usage(partOfSpeech: \"determiner\")]\n\nThrough CustomStringConvertible, you can make the print format it however you want.\n" ]
[ 0, 0 ]
[]
[]
[ "codable", "ios", "json", "swift", "xcode" ]
stackoverflow_0074667900_codable_ios_json_swift_xcode.txt
Q: c# - Finding next message sent by a particular user - DSharpPlus I've spent quite a long time looking through the docs but I couldn't figure out: how do you track the next message sent by a particular user? [Command("Multi")] [Description("Start a multiplayer training session")] public async Task multi(CommandContext ctx, DiscordMember member, int pnum) { var intr = ctx.Client.GetInteractivityModule(); await ctx.RespondAsync("{member.Mention}, please respond with `Accept` to accept the party invite. "); var reminderContent = await intr.WaitForMessageAsync( c => c.Author.Id == ctx.Message.Author.Id, TimeSpan.FromSeconds(60) ); } The documentation is here: https://dsharpplus.github.io/api/index.html This is my first StackOverflow question; please tell me if I should include some more info! Thanks! A: if you are trying to track the reply you could simply do it via channel id. You are on point other then ctx.message.Author.id is going to produce the bots id since the bot sent the message by the bot. So all that is doing is waiting for the bots next message instead you would do member that will return who got @'d in the command. [Command("Multi")] [Description("Start a multiplayer training session")] public async Task multi(CommandContext ctx, DiscordMember member, int pnum) { var intr = ctx.Client.GetInteractivityModule(); await ctx.RespondAsync("{member.Mention}, please respond with `Accept` to accept the party invite. "); var reminderContent = await intr.WaitForMessageAsync( c => c.Author.Id == member.Id, TimeSpan.FromSeconds(60) ); }
c# - Finding next message sent by a particular user - DSharpPlus
I've spent quite a long time looking through the docs but I couldn't figure out: how do you track the next message sent by a particular user? [Command("Multi")] [Description("Start a multiplayer training session")] public async Task multi(CommandContext ctx, DiscordMember member, int pnum) { var intr = ctx.Client.GetInteractivityModule(); await ctx.RespondAsync("{member.Mention}, please respond with `Accept` to accept the party invite. "); var reminderContent = await intr.WaitForMessageAsync( c => c.Author.Id == ctx.Message.Author.Id, TimeSpan.FromSeconds(60) ); } The documentation is here: https://dsharpplus.github.io/api/index.html This is my first StackOverflow question; please tell me if I should include some more info! Thanks!
[ "if you are trying to track the reply you could simply do it via channel id. You are on point other then ctx.message.Author.id is going to produce the bots id since the bot sent the message by the bot. So all that is doing is waiting for the bots next message instead you would do member that will return who got @'d in the command.\n[Command(\"Multi\")]\n[Description(\"Start a multiplayer training session\")]\npublic async Task multi(CommandContext ctx, DiscordMember member, int pnum)\n{\n var intr = ctx.Client.GetInteractivityModule(); \n\n await ctx.RespondAsync(\"{member.Mention}, please respond with `Accept` to accept the party invite. \");\n\n var reminderContent = await intr.WaitForMessageAsync(\n\n c => c.Author.Id == member.Id, \n\n TimeSpan.FromSeconds(60) \n\n );\n}\n\n" ]
[ 0 ]
[]
[]
[ "c#", "dsharp+" ]
stackoverflow_0065272342_c#_dsharp+.txt
Q: How to copy the rgbalues to the pictureBox1 but one by one or each time a group of pixels? private void pictureBox2_Paint(object sender, PaintEventArgs e) { Bitmap bmp = new Bitmap(pictureBox1.Image); // Lock the bitmap's bits. Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height); System.Drawing.Imaging.BitmapData bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite, bmp.PixelFormat); // Get the address of the first line. IntPtr ptr = bmpData.Scan0; // Declare an array to hold the bytes of the bitmap. int bytes = Math.Abs(bmpData.Stride) * bmp.Height; byte[] rgbValues = new byte[bytes]; // Copy the RGB values into the array. System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes); // Set every third value to 255. A 24bpp bitmap will look red. //for (int counter = 2; counter < rgbValues.Length; counter +=64) // rgbValues[counter] = 255; // Copy the RGB values back to the bitmap System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes); // Unlock the bits. bmp.UnlockBits(bmpData); // Draw the modified image. e.Graphics.DrawImage(bmp, 0, 0); } i mean to see in pictureBox2 the image get fill slowly like a paint get painted each time a bit. and not at once. with the original colors of the image in pictureBox1 to copy the image in pictureBox1 to pictureBox2 buti nstead in once to make it slowly and each time copy some pixels or one by one until the whole image paint is completed in pictureBox2. I tried this. in time tick event : int cc = 0; private void timer1_Tick(object sender, EventArgs e) { cc++; pictureBox2.Invalidate(); } in pictureBox2 paint event private void pictureBox2_Paint(object sender, PaintEventArgs e) { Bitmap bmp = new Bitmap(pictureBox1.Image); // Lock the bitmap's bits. Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height); System.Drawing.Imaging.BitmapData bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite, bmp.PixelFormat); // Get the address of the first line. IntPtr ptr = bmpData.Scan0; // Declare an array to hold the bytes of the bitmap. int bytes = Math.Abs(bmpData.Stride) * cc;//bmp.Height; byte[] rgbValues = new byte[bytes]; // Copy the RGB values back to the bitmap System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes); // Unlock the bits. bmp.UnlockBits(bmpData); // Draw the modified image. e.Graphics.DrawImage(bmp, 0, 0); } but the code in the pictureBox2 paint event delete the image in the pictureBox2 delete slowly from the top to the bottom. but i want the opposite that it will start that the pictureBox2 is clear and then the image will be painted slowly. i tried to change the line : Bitmap bmp = new Bitmap(pictureBox1.Image); to Bitmap bmp = new Bitmap(512, 512); but then it does nothing in the pictureBox2. 
A: Here's an example that will copy the image line by line: private async void button1_Click(object sender, EventArgs e) { if (pictureBox1.Image != null) { button1.Enabled = false; Bitmap bmp1 = new Bitmap(pictureBox1.Image); Bitmap bmp2 = new Bitmap(bmp1.Width, bmp1.Height); pictureBox2.Image = bmp2; using (Graphics G = Graphics.FromImage(bmp2)) { for (int y = 0; y < bmp1.Height; y++) { Rectangle rc = new Rectangle(new Point(0, y), new Size(bmp1.Width, 1)); G.DrawImage(bmp1, rc, rc, GraphicsUnit.Pixel); pictureBox2.Invalidate(); await Task.Delay(1); } } button1.Enabled = true; } } Sample run: To make it go faster, you can increase the height of the rectangle being copied, and then make the for loop jump by that much: int height = 12; // added for (int y = 0; y < bmp1.Height; y = y + height) // change { Rectangle rc = new Rectangle(new Point(0, y), new Size(bmp1.Width, height)); // change G.DrawImage(bmp1, rc, rc, GraphicsUnit.Pixel); pictureBox2.Invalidate(); await Task.Delay(1); } A: Here's another approach showing a radial expansion using a GraphicsPath, Region, and a Clip: private async void button2_Click(object sender, EventArgs e) { if (pictureBox1.Image != null) { button2.Enabled = false; Bitmap bmp1 = new Bitmap(pictureBox1.Image); Bitmap bmp2 = new Bitmap(bmp1.Width, bmp1.Height); pictureBox2.Image = bmp2; int radius = Math.Max(bmp1.Width, bmp1.Height); Point center = new Point(bmp1.Width / 2, bmp2.Height / 2); using (Graphics G = Graphics.FromImage(bmp2)) { int step = 10; for (int r=0; r <=radius; r=r+step) { Rectangle rc = new Rectangle(center, new Size(1, 1)); rc.Inflate(r, r); using (System.Drawing.Drawing2D.GraphicsPath gp = new System.Drawing.Drawing2D.GraphicsPath()) { gp.AddEllipse(rc); using (Region rgn = new Region(gp)) { G.Clip = rgn; G.DrawImage(bmp1, 0, 0); } } pictureBox2.Invalidate(); await Task.Delay(1); } } button2.Enabled = true; } } Play with the step value to make it go faster or slower:
How to copy the RGB values to the pictureBox1 but one by one or each time a group of pixels?
private void pictureBox2_Paint(object sender, PaintEventArgs e) { Bitmap bmp = new Bitmap(pictureBox1.Image); // Lock the bitmap's bits. Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height); System.Drawing.Imaging.BitmapData bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite, bmp.PixelFormat); // Get the address of the first line. IntPtr ptr = bmpData.Scan0; // Declare an array to hold the bytes of the bitmap. int bytes = Math.Abs(bmpData.Stride) * bmp.Height; byte[] rgbValues = new byte[bytes]; // Copy the RGB values into the array. System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes); // Set every third value to 255. A 24bpp bitmap will look red. //for (int counter = 2; counter < rgbValues.Length; counter +=64) // rgbValues[counter] = 255; // Copy the RGB values back to the bitmap System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes); // Unlock the bits. bmp.UnlockBits(bmpData); // Draw the modified image. e.Graphics.DrawImage(bmp, 0, 0); } i mean to see in pictureBox2 the image get fill slowly like a paint get painted each time a bit. and not at once. with the original colors of the image in pictureBox1 to copy the image in pictureBox1 to pictureBox2 buti nstead in once to make it slowly and each time copy some pixels or one by one until the whole image paint is completed in pictureBox2. I tried this. in time tick event : int cc = 0; private void timer1_Tick(object sender, EventArgs e) { cc++; pictureBox2.Invalidate(); } in pictureBox2 paint event private void pictureBox2_Paint(object sender, PaintEventArgs e) { Bitmap bmp = new Bitmap(pictureBox1.Image); // Lock the bitmap's bits. Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height); System.Drawing.Imaging.BitmapData bmpData = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite, bmp.PixelFormat); // Get the address of the first line. IntPtr ptr = bmpData.Scan0; // Declare an array to hold the bytes of the bitmap. int bytes = Math.Abs(bmpData.Stride) * cc;//bmp.Height; byte[] rgbValues = new byte[bytes]; // Copy the RGB values back to the bitmap System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes); // Unlock the bits. bmp.UnlockBits(bmpData); // Draw the modified image. e.Graphics.DrawImage(bmp, 0, 0); } but the code in the pictureBox2 paint event delete the image in the pictureBox2 delete slowly from the top to the bottom. but i want the opposite that it will start that the pictureBox2 is clear and then the image will be painted slowly. i tried to change the line : Bitmap bmp = new Bitmap(pictureBox1.Image); to Bitmap bmp = new Bitmap(512, 512); but then it does nothing in the pictureBox2.
[ "Here's an example that will copy the image line by line:\nprivate async void button1_Click(object sender, EventArgs e)\n{\n if (pictureBox1.Image != null)\n {\n button1.Enabled = false;\n\n Bitmap bmp1 = new Bitmap(pictureBox1.Image);\n Bitmap bmp2 = new Bitmap(bmp1.Width, bmp1.Height);\n pictureBox2.Image = bmp2;\n using (Graphics G = Graphics.FromImage(bmp2))\n {\n for (int y = 0; y < bmp1.Height; y++)\n {\n Rectangle rc = new Rectangle(new Point(0, y), new Size(bmp1.Width, 1));\n G.DrawImage(bmp1, rc, rc, GraphicsUnit.Pixel);\n pictureBox2.Invalidate();\n await Task.Delay(1);\n } \n }\n\n button1.Enabled = true;\n }\n}\n\nSample run:\n\nTo make it go faster, you can increase the height of the rectangle being copied, and then make the for loop jump by that much:\nint height = 12; // added\nfor (int y = 0; y < bmp1.Height; y = y + height) // change\n{\n Rectangle rc = new Rectangle(new Point(0, y), new Size(bmp1.Width, height)); // change\n G.DrawImage(bmp1, rc, rc, GraphicsUnit.Pixel);\n pictureBox2.Invalidate();\n await Task.Delay(1);\n}\n\n", "Here's another approach showing a radial expansion using a GraphicsPath, Region, and a Clip:\nprivate async void button2_Click(object sender, EventArgs e)\n{\n if (pictureBox1.Image != null)\n {\n button2.Enabled = false;\n\n Bitmap bmp1 = new Bitmap(pictureBox1.Image);\n Bitmap bmp2 = new Bitmap(bmp1.Width, bmp1.Height);\n pictureBox2.Image = bmp2;\n int radius = Math.Max(bmp1.Width, bmp1.Height);\n Point center = new Point(bmp1.Width / 2, bmp2.Height / 2);\n using (Graphics G = Graphics.FromImage(bmp2))\n {\n int step = 10;\n for (int r=0; r <=radius; r=r+step)\n { \n Rectangle rc = new Rectangle(center, new Size(1, 1));\n rc.Inflate(r, r);\n using (System.Drawing.Drawing2D.GraphicsPath gp = new System.Drawing.Drawing2D.GraphicsPath())\n {\n gp.AddEllipse(rc);\n using (Region rgn = new Region(gp))\n {\n G.Clip = rgn;\n G.DrawImage(bmp1, 0, 0);\n }\n } \n pictureBox2.Invalidate();\n await Task.Delay(1);\n }\n }\n\n button2.Enabled = true;\n }\n}\n\nPlay with the step value to make it go faster or slower:\n\n" ]
[ 1, 1 ]
[]
[]
[ "c#", "winforms" ]
stackoverflow_0074657323_c#_winforms.txt
Q: Kubernetes gitconfig mounted as directory instead of file I have some helm chart in which I would like to mount the gitconfig globally on each container. In this case home directory for each container is / path. When I would like to do it manually on the container I am getting following git config --global --add safe.directory "*" error: could not lock config file //.gitconfig: Permission denied Now I wanted to map my config map to the global .gitconfig file. set { name = "git.sync.extraVolumeMounts[0].name" value = "git-config" } set { name = "git.sync.extraVolumeMounts[0].mountPath" value = "/.gitconfig" } set { name = "git.sync.extraVolumeMounts[0].subPath" value = ".gitconfig" } With such config I am getting the .gitconfig as folder not the file bitnami@airflow-web-7cdb6f5d6f-48mzh:/$ ls -la total 84 drwxr-xr-x 1 root root 4096 Dec 2 12:20 . drwxr-xr-x 1 root root 4096 Dec 2 12:20 .. drwxrwsrwx 2 root bitnami 4096 Dec 2 12:20 .gitconfig drwxr-xr-x 1 root root 4096 Jul 30 11:21 bin Any idea what I am doing wrong? Is there any environment variable instead I can set? I tried to use system config but does not work either as some folder structure is missing. A: It looks like you're trying to mount a ConfigMap containing a .gitconfig file as a volume on your containers. When you do this, Kubernetes will create a directory at the mount point with the same name as the ConfigMap, and inside that directory it will create a file with the name of the key in the ConfigMap. In your case, you have a ConfigMap named git-config that contains a key named .gitconfig. This means that Kubernetes will create a directory named git-config at the mount point, and inside that directory it will create a file named .gitconfig. To fix this, you can either rename your ConfigMap so that it doesn't have a leading dot in the name, or you can change the subPath value in your volume mount to match the name of the ConfigMap. Here's an example of how you could modify your helm chart to rename the ConfigMap: set { name = "git.sync.extraVolumeMounts[0].name" value = "git-config-volume" } set { name = "git.sync.extraVolumeMounts[0].mountPath" value = "/.gitconfig" } This would create a ConfigMap named git-config-volume and mount it at /.gitconfig, so you would end up with a file at /.gitconfig containing the contents of the .gitconfig key in the ConfigMap. Alternatively, you could keep the ConfigMap named git-config and modify the subPath value in your volume mount like this: set { name = "git.sync.extraVolumeMounts[0].name" value = "git-config" } set { name = "git.sync.extraVolumeMounts[0].mountPath" value = "/.gitconfig" } set { name = "git.sync.extraVolumeMounts[0].subPath" value = "git-config" } This would create a directory named git-config at the mount point, and inside that directory it would create a file named git-config containing the contents of the .gitconfig key in the ConfigMap. This file would be located at /.gitconfig/git-config on your containers.
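For reference, a hedged sketch of the raw Kubernetes objects the Helm values are expected to map onto; the ConfigMap name and key are illustrative. The detail that usually matters is that subPath must exactly match an existing key in the mounted ConfigMap: if it does not, the kubelet creates an empty directory at the mountPath, which looks like the .gitconfig folder in the ls output above. As an environment-variable alternative, git 2.32 and newer honours GIT_CONFIG_GLOBAL, which can point at a file mounted anywhere in the container.

apiVersion: v1
kind: ConfigMap
metadata:
  name: git-config
data:
  .gitconfig: |
    [safe]
        directory = *
---
# container spec excerpt
volumeMounts:
  - name: git-config
    mountPath: /.gitconfig
    subPath: .gitconfig
volumes:
  - name: git-config
    configMap:
      name: git-config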
Kubernetes gitconfig mounted as directory instead of file
I have some helm chart in which I would like to mount the gitconfig globally on each container. In this case home directory for each container is / path. When I would like to do it manually on the container I am getting following git config --global --add safe.directory "*" error: could not lock config file //.gitconfig: Permission denied Now I wanted to map my config map to the global .gitconfig file. set { name = "git.sync.extraVolumeMounts[0].name" value = "git-config" } set { name = "git.sync.extraVolumeMounts[0].mountPath" value = "/.gitconfig" } set { name = "git.sync.extraVolumeMounts[0].subPath" value = ".gitconfig" } With such config I am getting the .gitconfig as folder not the file bitnami@airflow-web-7cdb6f5d6f-48mzh:/$ ls -la total 84 drwxr-xr-x 1 root root 4096 Dec 2 12:20 . drwxr-xr-x 1 root root 4096 Dec 2 12:20 .. drwxrwsrwx 2 root bitnami 4096 Dec 2 12:20 .gitconfig drwxr-xr-x 1 root root 4096 Jul 30 11:21 bin Any idea what I am doing wrong? Is there any environment variable instead I can set? I tried to use system config but does not work either as some folder structure is missing.
[ "It looks like you're trying to mount a ConfigMap containing a .gitconfig file as a volume on your containers. When you do this, Kubernetes will create a directory at the mount point with the same name as the ConfigMap, and inside that directory it will create a file with the name of the key in the ConfigMap.\nIn your case, you have a ConfigMap named git-config that contains a key named .gitconfig. This means that Kubernetes will create a directory named git-config at the mount point, and inside that directory it will create a file named .gitconfig.\nTo fix this, you can either rename your ConfigMap so that it doesn't have a leading dot in the name, or you can change the subPath value in your volume mount to match the name of the ConfigMap.\nHere's an example of how you could modify your helm chart to rename the ConfigMap:\nset {\n name = \"git.sync.extraVolumeMounts[0].name\"\n value = \"git-config-volume\"\n}\nset {\n name = \"git.sync.extraVolumeMounts[0].mountPath\"\n value = \"/.gitconfig\"\n}\n\nThis would create a ConfigMap named git-config-volume and mount it at /.gitconfig, so you would end up with a file at /.gitconfig containing the contents of the .gitconfig key in the ConfigMap.\nAlternatively, you could keep the ConfigMap named git-config and modify the subPath value in your volume mount like this:\nset {\n name = \"git.sync.extraVolumeMounts[0].name\"\n value = \"git-config\"\n}\nset {\n name = \"git.sync.extraVolumeMounts[0].mountPath\"\n value = \"/.gitconfig\"\n}\nset {\n name = \"git.sync.extraVolumeMounts[0].subPath\"\n value = \"git-config\"\n}\n\nThis would create a directory named git-config at the mount point, and inside that directory it would create a file named git-config containing the contents of the .gitconfig key in the ConfigMap. This file would be located at /.gitconfig/git-config on your containers.\n" ]
[ 0 ]
[]
[]
[ "azure", "kubernetes", "volumes" ]
stackoverflow_0074656452_azure_kubernetes_volumes.txt
Q: Mastodon Nginx Proxy Pass issue it wont pass through to the LXC container I have been trying to self host a sign user mastodon, i want to point out it works if i directly forward ports 80 and 443 to that IP, it can be accessed from outside all great. I have a bunch of other websites in a lxc continer about 10 websites on wordpress and the firewall 80 and 443 are pointed to this so typically i just add a proxy pass for the domain run certbot and alls good with the world. I CANNOT LOOSE port 80 / 443 to this lxc From what i can tell its the nginx on the lxc with the mastodon instance as if i add a catch all default to the server the proxy pass loads no problems. Its only when its going to mastodons Nginx config i get the too many redirects issue. I am at a bit of a loss on how to solve this as i never had this issue before.... Flow Map domain > 80/443 > router > nginx proxy > Mastodon lxc = Too many redirects Nginx Config proxypass server { server_name nameysitem.quest; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://10.255.192.84:80; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/nameysitem.quest/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/nameysitem.quest/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot add_header Strict-Transport-Security "max-age=31536000" always; # managed by Certbot ssl_trusted_certificate /etc/letsencrypt/live/nameysitem.quest/chain.pem; # managed by Certbot ssl_stapling on; # managed by Certbot ssl_stapling_verify on; # managed by Certbot } server { if ($host = nameysitem.quest) { return 301 https://$host$request_uri; } # managed by Certbot server_name nameysitem.quest; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://10.255.192.84:80; } listen 80; # managed by Certbot The LXC Nginx Config for mastodon as i said it works if i change the ip to the lxc ip in the firewall map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream backend { server 127.0.0.1:3000 fail_timeout=0; } upstream streaming { server 127.0.0.1:4000 fail_timeout=0; } proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g; server { if ($host = nameysitem.quest) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; listen [::]:80; server_name nameysitem.quest; root /var/www/mastodon/public; location /.well-known/acme-challenge/ { allow all; } location / { return 301 https://$host$request_uri; } } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name nameysitem.quest; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; # Uncomment these lines once you acquire a certificate: # ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; ssl_certificate /etc/letsencrypt/live/nameysitem.quest/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/nameysitem.quest/privkey.pem; # managed by Certbot keepalive_timeout 70; sendfile on; client_max_body_size 80m; root /var/www/mastodon/public; gzip on; 
gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml > add_header Strict-Transport-Security "max-age=31536000" always; location / { try_files $uri @proxy; } location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) { add_header Cache-Control "public, max-age=31536000, immutable"; add_header Strict-Transport-Security "max-age=31536000" always; try_files $uri @proxy; } location /sw.js { add_header Cache-Control "public, max-age=0"; add_header Strict-Transport-Security "max-age=31536000" always; try_files $uri @proxy; } location @proxy { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Proxy ""; proxy_pass_header Server; proxy_pass http://backend; proxy_buffering on; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_cache CACHE; proxy_cache_valid 200 7d; proxy_cache_valid 410 24h; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; add_header X-Cached $upstream_cache_status; add_header Strict-Transport-Security "max-age=31536000" always; tcp_nodelay on; } location /api/v1/streaming { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Proxy ""; proxy_pass http://streaming; proxy_buffering off; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; tcp_nodelay on; } error_page 500 501 502 503 504 /500.html; ssl_trusted_certificate /etc/letsencrypt/live/nameysitem.quest/chain.pem; # managed by Certbot ssl_stapling on; # managed by Certbot ssl_stapling_verify on; # managed by Certbot } I just dont know what to do now its only this that doesnt work i just get too many redirects A: proxy_pass http://backend; If script is trying to check for SSL and see only "http", it tries to redirect to "https" and can loop itself. You should try: proxy_pass https://backend; But you will need to redirect all http traffic to https with nginx for security reasons.
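One hedged way to break the loop from the outer proxy's side: the 443 server block currently proxies to the container's port-80 vhost, whose only job is to 301 every request back to https, so each round trip just redirects again. Pointing the proxy at the container's HTTPS vhost avoids that. The directives below are a sketch, with the upstream address taken from the question and the SNI line as an assumption about the container's certificate setup.

location / {
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
    proxy_pass https://10.255.192.84:443;
    proxy_ssl_server_name on;   # send SNI so the container's nginx serves the right certificate
}

The /api/v1/streaming websocket path would additionally need the Upgrade and Connection headers passed through, as the container's own config already does.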
Mastodon Nginx Proxy Pass issue it wont pass through to the LXC container
I have been trying to self host a sign user mastodon, i want to point out it works if i directly forward ports 80 and 443 to that IP, it can be accessed from outside all great. I have a bunch of other websites in a lxc continer about 10 websites on wordpress and the firewall 80 and 443 are pointed to this so typically i just add a proxy pass for the domain run certbot and alls good with the world. I CANNOT LOOSE port 80 / 443 to this lxc From what i can tell its the nginx on the lxc with the mastodon instance as if i add a catch all default to the server the proxy pass loads no problems. Its only when its going to mastodons Nginx config i get the too many redirects issue. I am at a bit of a loss on how to solve this as i never had this issue before.... Flow Map domain > 80/443 > router > nginx proxy > Mastodon lxc = Too many redirects Nginx Config proxypass server { server_name nameysitem.quest; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://10.255.192.84:80; } listen 443 ssl; # managed by Certbot ssl_certificate /etc/letsencrypt/live/nameysitem.quest/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/nameysitem.quest/privkey.pem; # managed by Certbot include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot add_header Strict-Transport-Security "max-age=31536000" always; # managed by Certbot ssl_trusted_certificate /etc/letsencrypt/live/nameysitem.quest/chain.pem; # managed by Certbot ssl_stapling on; # managed by Certbot ssl_stapling_verify on; # managed by Certbot } server { if ($host = nameysitem.quest) { return 301 https://$host$request_uri; } # managed by Certbot server_name nameysitem.quest; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://10.255.192.84:80; } listen 80; # managed by Certbot The LXC Nginx Config for mastodon as i said it works if i change the ip to the lxc ip in the firewall map $http_upgrade $connection_upgrade { default upgrade; '' close; } upstream backend { server 127.0.0.1:3000 fail_timeout=0; } upstream streaming { server 127.0.0.1:4000 fail_timeout=0; } proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHE:10m inactive=7d max_size=1g; server { if ($host = nameysitem.quest) { return 301 https://$host$request_uri; } # managed by Certbot listen 80; listen [::]:80; server_name nameysitem.quest; root /var/www/mastodon/public; location /.well-known/acme-challenge/ { allow all; } location / { return 301 https://$host$request_uri; } } server { listen 443 ssl http2; listen [::]:443 ssl http2; server_name nameysitem.quest; ssl_protocols TLSv1.2 TLSv1.3; ssl_ciphers HIGH:!MEDIUM:!LOW:!aNULL:!NULL:!SHA; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; ssl_session_tickets off; # Uncomment these lines once you acquire a certificate: # ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; ssl_certificate /etc/letsencrypt/live/nameysitem.quest/fullchain.pem; # managed by Certbot ssl_certificate_key /etc/letsencrypt/live/nameysitem.quest/privkey.pem; # managed by Certbot keepalive_timeout 70; sendfile on; client_max_body_size 80m; root /var/www/mastodon/public; gzip on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; 
gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript image/svg+xml > add_header Strict-Transport-Security "max-age=31536000" always; location / { try_files $uri @proxy; } location ~ ^/(emoji|packs|system/accounts/avatars|system/media_attachments/files) { add_header Cache-Control "public, max-age=31536000, immutable"; add_header Strict-Transport-Security "max-age=31536000" always; try_files $uri @proxy; } location /sw.js { add_header Cache-Control "public, max-age=0"; add_header Strict-Transport-Security "max-age=31536000" always; try_files $uri @proxy; } location @proxy { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Proxy ""; proxy_pass_header Server; proxy_pass http://backend; proxy_buffering on; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; proxy_cache CACHE; proxy_cache_valid 200 7d; proxy_cache_valid 410 24h; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; add_header X-Cached $upstream_cache_status; add_header Strict-Transport-Security "max-age=31536000" always; tcp_nodelay on; } location /api/v1/streaming { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header Proxy ""; proxy_pass http://streaming; proxy_buffering off; proxy_redirect off; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection $connection_upgrade; tcp_nodelay on; } error_page 500 501 502 503 504 /500.html; ssl_trusted_certificate /etc/letsencrypt/live/nameysitem.quest/chain.pem; # managed by Certbot ssl_stapling on; # managed by Certbot ssl_stapling_verify on; # managed by Certbot } I just dont know what to do now its only this that doesnt work i just get too many redirects
[ "proxy_pass http://backend;\n\nIf script is trying to check for SSL and see only \"http\", it tries to redirect to \"https\" and can loop itself. You should try:\nproxy_pass https://backend;\n\nBut you will need to redirect all http traffic to https with nginx for security reasons.\n" ]
[ 0 ]
[]
[]
[ "lxc", "mastodon", "nginx", "proxmox" ]
stackoverflow_0074416972_lxc_mastodon_nginx_proxmox.txt
Q: User defined function through inputs in Python I wish to create a custom calculator where the user defines two parameters and a function using a GUI and when they click on calculate it executes their user defined function passing the two parameters. argument1 = IntSlider( … ) argument2 = IntSlider( … ) userDefinedFunction = TextArea( … ) calculateButton = Button ( … ) calculateButton.on_click(userDefinedFunction) So that let’s say somebody defines : argument1 = 3 argument2 = 4 userDefinedFunction = def udf(arg1,arg2): return arg1**2 + arg2**2 Would return 25 as 3*3 + 4*4 = 25. A: I'd probably go with something more limiting than a full function definition. Having the user create the function signature is going to add complications as you cannot eval it, you would have to exec it instead. Then finding out the method name would be complex, and it would allow the user to overwrite local variables, or do other imports, which might not be desirable. An easier way could be to expect the user to complete the lambda method lambda arg1, arg2: <user code input>. Note that eval and exec (or running any unvalidated user input as code for that matter) are dangerous. If the user is running this on their local machine only this is somewhat okay, but do not do this if the inputs are coming from external sources, like for example, a web server. argument1 = 3 argument2 = 4 user_func_input = "arg1**2 + arg2**2" user_func = eval(f"lambda arg1, arg2: {user_func_input}") print(user_func(argument1, argument2)) 25 A: Using eval or exec for user input data is dangerous. Any valid syntax will be evaluated by the interpreter. The secure way is to verify if the operation is valid and then execute it. There are plenty of algorithms to distinguish numbers from operators (search for infix notation). To safely execute the operations, you could use the following functions # For python 3.10 and above (with support for match statement) def apply_function(argument1:int, argument2:int, function:str)->float: match function: case "add" | "sum"| "+": return argument1 + argument2 case "sub" | "subtract" | "-": return argument1 - argument2 case "mul" | "multiply" | "*" | "x" | "times": return argument1 * argument2 case "div" | "divide" | "/": return argument1 / argument2 case "pow" | "power" | "^"|"**": return argument1 ** argument2 case _: raise ValueError("Invalid function") # For python 3.9 and below (without support for match statement) def apply_function_v2(argument1:int, argument2:int, function:str)->float: if function in ("add", "sum", "+"): return argument1 + argument2 elif function in ("sub", "subtract", "-"): return argument1 - argument2 elif function in ("mul", "multiply", "*", "x", "times"): return argument1 * argument2 elif function in ("div", "divide", "/"): return argument1 / argument2 elif function in ("pow", "power", "^", "**"): return argument1 ** argument2 else: raise ValueError("Invalid function")
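For the GUI side of the question, a hedged sketch of how the widgets could be wired with ipywidgets; the widget names mirror the question, and the eval call carries the same trust caveats the answers describe:

import ipywidgets as widgets
from IPython.display import display

argument1 = widgets.IntSlider(description="arg1", value=3)
argument2 = widgets.IntSlider(description="arg2", value=4)
user_code = widgets.Textarea(description="f(a,b)", value="arg1**2 + arg2**2")
result = widgets.Output()
calculate = widgets.Button(description="Calculate")

def on_calculate(_button):
    # Build the function from the text area and apply it to the slider values.
    with result:
        result.clear_output()
        func = eval(f"lambda arg1, arg2: {user_code.value}")  # only for trusted, local input
        print(func(argument1.value, argument2.value))

calculate.on_click(on_calculate)
display(argument1, argument2, user_code, calculate, result)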
User defined function through inputs in Python
I wish to create a custom calculator where the user defines two parameters and a function using a GUI and when they click on calculate it executes their user defined function passing the two parameters. argument1 = IntSlider( … ) argument2 = IntSlider( … ) userDefinedFunction = TextArea( … ) calculateButton = Button ( … ) calculateButton.on_click(userDefinedFunction) So that let’s say somebody defines : argument1 = 3 argument2 = 4 userDefinedFunction = def udf(arg1,arg2): return arg1**2 + arg2**2 Would return 25 as 3*3 + 4*4 = 25.
[ "I'd probably go with something more limiting than a full function definition. Having the user create the function signature is going to add complications as you cannot eval it, you would have to exec it instead. Then finding out the method name would be complex, and it would allow the user to overwrite local variables, or do other imports, which might not be desirable.\nAn easier way could be to expect the user to complete the lambda method lambda arg1, arg2: <user code input>.\nNote that eval and exec (or running any unvalidated user input as code for that matter) are dangerous. If the user is running this on their local machine only this is somewhat okay, but do not do this if the inputs are coming from external sources, like for example, a web server.\nargument1 = 3\nargument2 = 4\nuser_func_input = \"arg1**2 + arg2**2\"\nuser_func = eval(f\"lambda arg1, arg2: {user_func_input}\")\n\nprint(user_func(argument1, argument2))\n\n25\n\n", "Using eval or exec for user input data is dangerous. Any valid syntax will be evaluated by the interpreter. The secure way is to verify if the operation is valid and then execute it.\nThere are plenty of algorithms to distinguish numbers from operators (search for infix notation).\nTo safely execute the operations, you could use the following functions\n# For python 3.10 and above (with support for match statement)\ndef apply_function(argument1:int, argument2:int, function:str)->float:\n match function:\n case \"add\" | \"sum\"| \"+\":\n return argument1 + argument2\n case \"sub\" | \"subtract\" | \"-\":\n return argument1 - argument2\n case \"mul\" | \"multiply\" | \"*\" | \"x\" | \"times\":\n return argument1 * argument2\n case \"div\" | \"divide\" | \"/\":\n return argument1 / argument2\n case \"pow\" | \"power\" | \"^\"|\"**\":\n return argument1 ** argument2\n case _:\n raise ValueError(\"Invalid function\")\n\n# For python 3.9 and below (without support for match statement)\ndef apply_function_v2(argument1:int, argument2:int, function:str)->float:\n if function in (\"add\", \"sum\", \"+\"):\n return argument1 + argument2\n elif function in (\"sub\", \"subtract\", \"-\"):\n return argument1 - argument2\n elif function in (\"mul\", \"multiply\", \"*\", \"x\", \"times\"):\n return argument1 * argument2\n elif function in (\"div\", \"divide\", \"/\"):\n return argument1 / argument2\n elif function in (\"pow\", \"power\", \"^\", \"**\"):\n return argument1 ** argument2\n else:\n raise ValueError(\"Invalid function\")\n\n" ]
[ 1, 1 ]
[]
[]
[ "ipywidgets", "panel_pyviz", "python" ]
stackoverflow_0074668885_ipywidgets_panel_pyviz_python.txt
Q: Can Goutte/Guzzle be forced into UTF-8 mode? I'm scraping from a UTF-8 site, using Goutte, which internally uses Guzzle. The site declares a meta tag of UTF-8, thus: <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> However, the content type header is thus: Content-Type: text/html and not: Content-Type: text/html; charset=utf-8 Thus, when I scrape, Goutte does not spot that it is UTF-8, and grabs data incorrectly. The remote site is not under my control, so I can't fix the problem there! Here's a set of scripts to replicate the problem. First, the scraper: <?php require_once realpath(__DIR__ . '/..') . '/vendor/goutte/goutte.phar'; $url = 'http://crawler-tests.local/utf-8.php'; use Goutte\Client; $client = new Client(); $crawler = $client->request('get', $url); $text = $crawler->text(); echo 'Whole page: ' . $text . "\n"; Now a test page to be placed on a web server: <?php // Correct #header('Content-Type: text/html; charset=utf-8'); // Incorrect header('Content-Type: text/html'); ?> <!DOCTYPE html> <html> <head> <title>UTF-8 test</title> <meta charset="utf-8" /> </head> <body> <p>When the Content-Header header is incomplete, the pound sign breaks: £15,216</p> </body> </html> Here's the output of the Goutte test: Whole page: UTF-8 test When the Content-Header header is incomplete, the pound sign breaks: £15,216 As you can see from the comments in the last script, properly declaring the character set in the header fixes things. I've hunted around in Goutte to see if there is anything that looks like it would force the character set, but to no avail. Any ideas? A: The issue is actually with symfony/browser-kit and symfony/domcrawler. The browserkit's Client does not examine the HTML meta tags to determine the charset, content-type header only. When the response body is handed over to the domcrawler, it is treated as the default charset ISO-8859-1. After examining the meta tags that decision should be reverted and the DomDocument rebuilt, but that never happens. The easy workaround is to wrap $crawler->text() with utf8_decode(): $text = utf8_decode($crawler->text()); This works if the input is UTF-8. I suppose for other encodings something similar can be achieved with iconv() or so. However, you have to remember to do that every time you call text(). A more generic approach is to make the Domcrawler believe that it deals with UTF-8. To that end I've come up with a Guzzle plugin that overwrites (or adds) the charset in the content-type response header. You can find it at https://gist.github.com/pschultz/6554265. Usage is like this: <?php use Goutte\Client; $plugin = new ForceCharsetPlugin(); $plugin->setForcedCharset('utf-8'); $client = new Client(); $client->getClient()->addSubscriber($plugin); $crawler = $client->request('get', $url); echo $crawler->text(); A: I seem to have been hitting two bugs here, one of which was identified by Peter's answer. The other was the way in which I am separately using the Symfony Crawler class to explore HTML snippets. I was doing this (to parse the HTML for a table row): $subCrawler = new Crawler($rowHtml); Adding HTML via the constructor, however, does not appear to give a way in which the character set can be specified, and I assume ISO-8859-1 is again the default. Simply using addHtmlContent gets it right; the second parameter specifies the character set, and it defaults to UTF-8 if it is not specified. 
$subCrawler = new Crawler(); $subCrawler->addHtmlContent($rowHtml); A: Crawler tries detect charset from <meta charset tag but frequently it's missing and then Crawler uses charset by default (ISO-8859-1) - it is source of problem described in this thread. When we are passing content to Crawler through constructor we miss Content-Type header that usually contains charset. Here's how we can handle it: $crawler = new Crawler(); $crawler->addContent( $response->getBody()->getContents(), $response->getHeaderLine('Content-Type') ); With this solution we are using correct charset from server response and don't bind our solution to any single charset and of course after that we don't need decode every single received line from Crawler (using utf8_decode() or somehow else). A: Guzzle is true about what it gets, so the best way is to do the conversion like this: // $client = \Drupal::httpClient(); $client = new \GuzzleHttp\Client(); $response = $client->get($remoteUrl); if ($response->getStatusCode() !== 200) { return NULL; } $originalBody = $response->getBody()->getContents(); $contentTypeHeader = $response->getHeader('content-type'); $originalEncoding = \GuzzleHttp\Psr7\Header::parse($contentTypeHeader)[0]['charset'] ?? NULL; $body = !$originalEncoding ? $originalBody : mb_convert_encoding($originalBody, 'UTF-8', $originalEncoding); Of course if the response lies about its encoding, you're lost until you work around or fix that.
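The fixes above are PHP-specific, but the detection order they rely on (charset from the HTTP header first, then the HTML meta declaration, then a default) is language-agnostic. As a rough, standard-library-only Python sketch of that logic, not a Goutte or Guzzle API, something like the following could be used; the detect_charset helper and the 2048-byte sniff window are assumptions for the example.

import re
from email.message import Message

def detect_charset(content_type_header: str, body: bytes, default: str = "ISO-8859-1") -> str:
    # 1) Prefer a charset carried in the Content-Type response header.
    msg = Message()
    msg["Content-Type"] = content_type_header
    charset = msg.get_content_charset()
    if charset:
        return charset
    # 2) Otherwise sniff a <meta charset=...> / http-equiv declaration near the top of the HTML.
    match = re.search(rb'charset=["\']?([\w-]+)', body[:2048], re.IGNORECASE)
    if match:
        return match.group(1).decode("ascii")
    # 3) Fall back to the same default the browser-kit assumes.
    return default

page = '<meta charset="utf-8" /><p>£15,216</p>'.encode("utf-8")
print(detect_charset("text/html", page))                 # utf-8, recovered from the meta tag
print(detect_charset("text/html; charset=utf-8", b""))   # utf-8, taken from the header

Decoding the body with the detected charset (body.decode(charset)) then yields the pound sign rather than the mojibake shown in the question.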
Can Goutte/Guzzle be forced into UTF-8 mode?
I'm scraping from a UTF-8 site, using Goutte, which internally uses Guzzle. The site declares a meta tag of UTF-8, thus: <meta http-equiv="Content-Type" content="text/html; charset=utf-8"> However, the content type header is thus: Content-Type: text/html and not: Content-Type: text/html; charset=utf-8 Thus, when I scrape, Goutte does not spot that it is UTF-8, and grabs data incorrectly. The remote site is not under my control, so I can't fix the problem there! Here's a set of scripts to replicate the problem. First, the scraper: <?php require_once realpath(__DIR__ . '/..') . '/vendor/goutte/goutte.phar'; $url = 'http://crawler-tests.local/utf-8.php'; use Goutte\Client; $client = new Client(); $crawler = $client->request('get', $url); $text = $crawler->text(); echo 'Whole page: ' . $text . "\n"; Now a test page to be placed on a web server: <?php // Correct #header('Content-Type: text/html; charset=utf-8'); // Incorrect header('Content-Type: text/html'); ?> <!DOCTYPE html> <html> <head> <title>UTF-8 test</title> <meta charset="utf-8" /> </head> <body> <p>When the Content-Header header is incomplete, the pound sign breaks: £15,216</p> </body> </html> Here's the output of the Goutte test: Whole page: UTF-8 test When the Content-Header header is incomplete, the pound sign breaks: £15,216 As you can see from the comments in the last script, properly declaring the character set in the header fixes things. I've hunted around in Goutte to see if there is anything that looks like it would force the character set, but to no avail. Any ideas?
[ "The issue is actually with symfony/browser-kit and symfony/domcrawler. The browserkit's Client does not examine the HTML meta tags to determine the charset, content-type header only. When the response body is handed over to the domcrawler, it is treated as the default charset ISO-8859-1. After examining the meta tags that decision should be reverted and the DomDocument rebuilt, but that never happens.\nThe easy workaround is to wrap $crawler->text() with utf8_decode():\n$text = utf8_decode($crawler->text());\n\nThis works if the input is UTF-8. I suppose for other encodings something similar can be achieved with iconv() or so. However, you have to remember to do that every time you call text().\nA more generic approach is to make the Domcrawler believe that it deals with UTF-8. To that end I've come up with a Guzzle plugin that overwrites (or adds) the charset in the content-type response header. You can find it at https://gist.github.com/pschultz/6554265. Usage is like this:\n<?php\n\nuse Goutte\\Client;\n\n\n$plugin = new ForceCharsetPlugin();\n$plugin->setForcedCharset('utf-8');\n\n$client = new Client();\n$client->getClient()->addSubscriber($plugin);\n$crawler = $client->request('get', $url);\n\necho $crawler->text();\n\n", "I seem to have been hitting two bugs here, one of which was identified by Peter's answer. The other was the way in which I am separately using the Symfony Crawler class to explore HTML snippets.\nI was doing this (to parse the HTML for a table row):\n$subCrawler = new Crawler($rowHtml);\n\nAdding HTML via the constructor, however, does not appear to give a way in which the character set can be specified, and I assume ISO-8859-1 is again the default.\nSimply using addHtmlContent gets it right; the second parameter specifies the character set, and it defaults to UTF-8 if it is not specified.\n$subCrawler = new Crawler();\n$subCrawler->addHtmlContent($rowHtml);\n\n", "Crawler tries detect charset from <meta charset tag but frequently it's missing and then Crawler uses charset by default (ISO-8859-1) - it is source of problem described in this thread. \nWhen we are passing content to Crawler through constructor we miss Content-Type header that usually contains charset. \nHere's how we can handle it: \n$crawler = new Crawler();\n$crawler->addContent(\n $response->getBody()->getContents(), \n $response->getHeaderLine('Content-Type')\n);\n\nWith this solution we are using correct charset from server response and don't bind our solution to any single charset and of course after that we don't need decode every single received line from Crawler (using utf8_decode() or somehow else).\n", "Guzzle is true about what it gets, so the best way is to do the conversion like this:\n // $client = \\Drupal::httpClient();\n $client = new \\GuzzleHttp\\Client();\n $response = $client->get($remoteUrl);\n if ($response->getStatusCode() !== 200) {\n return NULL;\n }\n $originalBody = $response->getBody()->getContents();\n $contentTypeHeader = $response->getHeader('content-type');\n $originalEncoding = \\GuzzleHttp\\Psr7\\Header::parse($contentTypeHeader)[0]['charset'] ?? NULL;\n $body = !$originalEncoding ? $originalBody :\n mb_convert_encoding($originalBody, 'UTF-8', $originalEncoding);\n\nOf course if the response lies about its encoding, you're lost until you work around or fix that.\n" ]
[ 16, 11, 2, 1 ]
[]
[]
[ "goutte", "guzzle", "php", "symfony_components", "web_scraping" ]
stackoverflow_0018782332_goutte_guzzle_php_symfony_components_web_scraping.txt
Q: How do I get the output field as an array in MongoDb How do I get the output field as an array. What I currently have: db.users.find({ taskIds: { $in: [ObjectId("638764689c4ed1fbff18859a")] } }, {taskIds:1}) The output of it: [ { _id: ObjectId("638764689c4ed1fbff1885a2"), taskIds: [ ObjectId("638764689c4ed1fbff18859a"), ObjectId("638764689c4ed1fbff18859c") ] } ] The output I want: [ { ObjectId("638764689c4ed1fbff18859a"), ObjectId("638764689c4ed1fbff18859c") } ] A: If it's just to omit the _id from the result, you need to add it to your projection as follows: db.users.find({ taskIds: { $in: [ObjectId("638764689c4ed1fbff18859a")] } }, {taskIds:1, _id:0}) This would produce: [ { taskIds: [ ObjectId("638764689c4ed1fbff18859a"), ObjectId("638764689c4ed1fbff18859c") ] } ]
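For completeness, here is a rough pymongo sketch of the same query and projection; it is not part of the original post. The database name is a placeholder, and flattening the matches into one list of ObjectIds is an assumption, since the desired output shown in the question is not valid MongoDB syntax and presumably means "just the ObjectIds".

from pymongo import MongoClient
from bson import ObjectId

client = MongoClient()
users = client["mydb"]["users"]   # "mydb" is a placeholder database name

task_id = ObjectId("638764689c4ed1fbff18859a")
# {"taskIds": 1, "_id": 0} keeps only the array field; 0 marks _id for exclusion.
cursor = users.find({"taskIds": {"$in": [task_id]}}, {"taskIds": 1, "_id": 0})
all_task_ids = [tid for doc in cursor for tid in doc["taskIds"]]
print(all_task_ids)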
How do I get the output field as an array in MongoDb
How do I get the output field as an array. What I currently have: db.users.find({ taskIds: { $in: [ObjectId("638764689c4ed1fbff18859a")] } }, {taskIds:1}) The output of it: [ { _id: ObjectId("638764689c4ed1fbff1885a2"), taskIds: [ ObjectId("638764689c4ed1fbff18859a"), ObjectId("638764689c4ed1fbff18859c") ] } ] The output I want: [ { ObjectId("638764689c4ed1fbff18859a"), ObjectId("638764689c4ed1fbff18859c") } ]
[ "If it's just to omit the _id from the result, you need to add it to your projection as follows:\ndb.users.find({ taskIds: { $in: [ObjectId(\"638764689c4ed1fbff18859a\")] } }, {taskIds:1, _id:-1})\nThis would produce:\n[\n {\n taskIds: [\n ObjectId(\"638764689c4ed1fbff18859a\"),\n ObjectId(\"638764689c4ed1fbff18859c\")\n ]\n }\n]\n\n" ]
[ 1 ]
[]
[]
[ "mongodb", "mongoose" ]
stackoverflow_0074667844_mongodb_mongoose.txt
Q: Is it better to use a 403 response or 404? I have a restAPI which allows for users to submit data to be saved in my database. When data is submitted their userID will be grabbed from the request header and added to the document In my collection (mongoDB) there is the normal _id field and a userID field, which references the user. My question is in the scenario that a malicious user submits a PUT or DELETE request to a document that does not belong to them what is the best way to handle it. I pull the prediction from the DB just using the submitted _id, check the userID field against the requesting userID and issue a 403 response if they do not match. I query the database by both _id and userID, so it would not be found if the document does not belong to the requesting user. I would then issue a 404 response just as if they had submitted an id that does not exist. Both choices achieve the same goal, which is to prevent a user from editing or deleting a resource that does not belong to them, but which is "better"? A: 404 is for "not found", which is clearly not the case. Use 403. Quoting from Wikipedia: HTTP 403 is returned when the client is not permitted access to the resource despite providing authentication ... A: I would return 403 if user A has permission to view a resource that belongs to user B (as they are already aware of the existence of this resource). Otherwise, I would return 404 for privacy reasons. This is in line with the RFC 9110 standard: An origin server that wishes to "hide" the current existence of a forbidden target resource MAY instead respond with a status code of 404 (Not Found).
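Because the two strategies described in the question are easy to conflate, here is a small illustrative sketch of both using pymongo. The collection and field names (predictions, userID) are taken from the question, the returned integers stand in for HTTP status codes, and none of this is meant as the definitive implementation of either choice.

from bson import ObjectId

def delete_prediction_verbose(db, prediction_id: str, requesting_user_id: str) -> int:
    """Option 1: fetch by _id, then compare owners (this can reveal that the resource exists)."""
    doc = db.predictions.find_one({"_id": ObjectId(prediction_id)})
    if doc is None:
        return 404                      # no such document at all
    if doc["userID"] != requesting_user_id:
        return 403                      # exists, but belongs to someone else
    db.predictions.delete_one({"_id": doc["_id"]})
    return 204

def delete_prediction_private(db, prediction_id: str, requesting_user_id: str) -> int:
    """Option 2: filter on both _id and userID, so 'not yours' is indistinguishable from 'not found'."""
    result = db.predictions.delete_one(
        {"_id": ObjectId(prediction_id), "userID": requesting_user_id}
    )
    return 204 if result.deleted_count else 404

The second variant is the one the RFC 9110 quote above permits: hiding a forbidden resource behind a 404.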
Is it better to use a 403 response or 404?
I have a restAPI which allows for users to submit data to be saved in my database. When data is submitted their userID will be grabbed from the request header and added to the document In my collection (mongoDB) there is the normal _id field and a userID field, which references the user. My question is in the scenario that a malicious user submits a PUT or DELETE request to a document that does not belong to them what is the best way to handle it. I pull the prediction from the DB just using the submitted _id, check the userID field against the requesting userID and issue a 403 response if they do not match. I query the database by both _id and userID, so it would not be found if the document does not belong to the requesting user. I would then issue a 404 response just as if they had submitted an id that does not exist. Both choices achieve the same goal, which is to prevent a user from editing or deleting a resource that does not belong to them, but which is "better"?
[ "404 is for \"not found\", which is clearly not the case.\nUse 403.\nQuoting from Wikipedia:\n\nHTTP 403 is returned when the client is not permitted access to the resource despite providing authentication ...\n\n", "I would return 403 if user A has permission to view a resource that belongs to user B (as they are already aware of the existence of this resource). Otherwise, I would return 404 for privacy reasons. This is in line with the RFC 9110 standard:\n\nAn origin server that wishes to \"hide\" the current existence of a forbidden target resource MAY instead respond with a status code of 404 (Not Found).\n\n" ]
[ 1, 0 ]
[]
[]
[ "database", "http", "mongodb" ]
stackoverflow_0072035805_database_http_mongodb.txt
Q: How does clang generate non-looping code for sum of squares? I admit the answer to this may be 'some very specific magic', but I'm kind of shocked by what I've observed here. I was wondering if anyone had insight to how these types of optimizations work. I find compiler design to be quite interesting, and I really can't imagine how this works. I'm sure the answer is somewhere in the clang source code, but I don't even know where I would look. I'm a TA for a class at college, and I was recently asked to help with a simple homework question. This led me down an interesting path... The question is simple enough: In x86_64 assembly, write a function which given a (positive) integer n returns 1^2 + 2^2 + 3^2 + ... + n^2. I decided to play around a bit, and after helping them write this in x86_64 assembly, I, having an M1 macbook, decided to see if I could create a nice solution in arm64 assembly. I came up with the relatively simple and straightforward solution: _sum_squares: mov x1, x0 ; Do multiplication from x1 mov x0, xzr ; Clear x0 Lloop: ; x0 <-- (x1 * x1) + x0 madd x0, x1, x1, x0 ; Loop until x1 == 0 subs x1, x1, #1 bne Lloop ret (I wish there was some sort of nice way to do branch if --x1 == 0 in one instruction, but I couldn't think of any) Note: There is a simple formula for this from any basic number theory class, which is [n(n + 1)(2n + 1)] / 6, but I decided this wasn't really in the spirit of the question. I then was wondering how clang would generate assembly for a simple C version. Upon writing the simple C implementation, I found that clang with -Og generates assembly which seems a bit verbose, but generally works as expected with a loop and accumulator (although it is very inefficient): int sum_squares(int n) { int a = 0; while (n--) a += (n * n); return a; } (clang -Og -S, annotated myself, cfi removed, labels renamed) _sum_squares: sub sp, sp, #16 ; create stack space str w0, [sp, #12] ; store n str wzr, [sp, #8] ; store 0 b Ldec ; silly clang, this just falls through... Ldec: ; n-- and return if n == 0 ldr w8, [sp, #12] ; load n subs w9, w8, #1 ; w9 = (n - 1) str w9, [sp, #12] ; store (n - 1) over n subs w8, w8, #0 ; w8 = n - 0 (set flags based on n) cset w8, eq ; set w8 = 1 if n == 0 else w8 = 0 tbnz w8, #0, Lret ; branch to return if n == 0, else fall through b Ladd ; silly clang, this falls through again... Ladd: ; a += n^2 ldr w8, [sp, #12] ; load n ldr w9, [sp, #12] ; load n mul w9, w8, w9 ; w9 = n * n ldr w8, [sp, #8] ; load a add w8, w8, w9 ; a += w9 str w8, [sp, #8] ; store a b Ldec ; go back to start of look Lret: ; return a from top of stack ldr w0, [sp, #8] ; w0 = a add sp, sp, #16 ; cleanup temp stack ret ; back to caller This is altogether reasonable for a direct translation of the C code to arm64 assembly. After some optimization (O1 uses a similar formula, O2 and O3 are identical), clang comes up with some magic. I have no clue how it came up with this code, it appears to be somewhat similar to the basic formula for this summation, except with bit magic. I didn't imagine the compiler would be able to derive a formula for this without a loop, but it appears I was wrong. 
The generated code is as follows (with my best attempt at a commentary, n is the input in w0): _sum_squares: cbz w0, Lret ; return if n == 0 sub w8, w0, #1 ; w8 = (n - 1) mul w9, w8, w8 ; w9 = (n - 1)^2 orr w9, w9, #0x2 ; w9 = ((n - 1)^2) | 2 sub w9, w9, w0 ; w9 = [((n - 1)^2) | 2] - n mov w10, #5 ; w10 = 5 sub w10, w10, w0, lsl #1 ; w10 = 5 - (n / 2) sub w11, w0, #2 ; w11 = n - 2 umull x11, w8, w11 ; w11 = (n - 1)(n - 2) lsr x12, x11, #1 ; x12 = ((n - 1)(n - 2)) / 2 mul w10, w10, w12 ; w10 = (5 - (n / 2))(((n - 1)(n - 2)) / 2) sub w12, w0, #3 ; w12 = n - 3 mul x11, x11, x12 ; x11 = (n - 1)(n - 2)(n - 3) lsr x11, x11, #1 ; x11 = ((n - 1)(n - 2)(n - 3)) / 2 mov w12, #21846 ; w12 = 0x5556 movk w12, #21845, lsl #16 ; w12 = 0x55555556 ; w8 = ((n - 1)([((n - 1)^2) | 2] - n)) + (5 - (n / 2))(((n - 1)(n - 2)) / 2) madd w8, w9, w8, w10 ; let A = w8 (set in last instruction) ; w0 = (0x55555556 * (((n - 1)(n - 2)(n - 3)) / 2)) + A madd w0, w11, w12, w8 ; somehow, this is the correct result? ; this feels like magic to me... Lret: ret ; return. Result already in w0. My question: How in the world does this work? How can a C compiler be given a loop like this and deduce a formula not even involving a loop? I expected some loop unwinding perhaps, but nothing like this. Does anyone have references involving this type of optimization? I especially don't understand what certain steps like orr w9, w9, #0x2 or the magic number 0x55555556 does. Any insight into these steps would be extra appreciated. A: TL:DR: Yes, clang knows the closed-form formulas for sums of integer power series, and can detect such loops. Smart humans have taught modern compilers to recognize certain patterns of operations and replace them with operations not present in the source, e.g. for rotates and even popcount loops and bithacks. And for clang/LLVM specifically, also closed-form formulae for sums of i^power, including with a stride other than 1. Yay math! So you can get asm logic that's not just an unrolled or vectorized version of the source. See also a blog article How LLVM optimizes power sums which talks about how compilers find these loops by looking at how variables are updated across loop iterations. Matthieu M. comments that Closed form formulas are derived by the Scalar Evolution optimization in LLVM. The comments in the code say that it's used primarily to analyze expressions involving induction variables in loops. and cites references for the techniques it uses for chains of recurrences. Modern C compilers can recognize patterns in some loops or short sequences of logic, in the internal representation of the code. Humans (compiler devs) have told the compiler what to look for, and provided a hand-crafted replacement "formula". In GIMPLE (GCC) or LLVM-IR I expect, not just really late in compilation like a peephole optimization while generating asm. So I'd guess the logic inside LLVM's optimizer checks every loop it finds for one or more of the following possibilities, with some code to look for some property of the LLVM-IR that represents the program logic of that loop: Does it copy one array to another unmodified? If so replace with __builtin_memcpy, which might later get expanded inline or compiled to call memcpy. And if it has other side effects like leaving a pointer variable incremented, also represent that in the new LLVM-IR for the function containing the loop. Does it set every byte of a range of memory to a constant byte value? 
If so, memset Is its sequence of operations equivalent to this sequence which does popcnt? Then emit a popcnt if hardware support exists, otherwise keep the loop strategy. (So it's not just treating it as if it was __builtin_popcount, not replacing a loop with a bithack or a call to a helper function. That makes sense because some strategies are good for numbers with few bits set, and the programmer might have chosen with that in mind.) Is the loop variable updated with the sum of a range of integers (with some stride), or that raised to a power? Then use a closed-form formula that avoids overflow of a fixed-width integer. (And if the start point and stride aren't 1, add an offset and/or scale factor.) The checking might work in terms of considering a variable modified by a loop, which is read after the loop. So it knows what variable to consider when looking at the operations. (Loops with no used results get removed.) GCC doesn't look for sums of integer sequences, but clang does. IDK how many real-world codebases this actually speeds up; the closed-form formula is fairly well-known, having famously been re-discovered by Gauss as a schoolboy. (So hopefully a lot of code uses the formula instead of a loop). And not many programs would need to do exactly this, I'd have thought, except as an exercise. (The existence of a closed-form sum-of-squares formula is less well-known, but there is one, and apparently also for powers in general.) Clang's implementation of the formula of course has to give the exact correct result for every input integer where the C abstract machine doesn't encounter undefined behaviour (for signed integer overflow), or match the truncation of unsigned multiplies. Otherwise it wouldn't satisfy the as-if rule, or could only be used when inlining into places with a known limited value-range. (In practice, it seemed clang wasn't using the closed-form optimization for unsigned, but maybe I just had a mistake in the version I was trying. Using a 64-bit integer could safely calculate sums of 32-bit integers. And then truncating that could give the same result as the source.) n*(n+1) can overflow in cases where n*(n+1)/2 is still in range, so this is non-trivial. For 32-bit int on a 64-bit machine, LLVM can and does simply use 64-bit multiply and right-shift. This may be a peephole optimization of the general case of using a double-width output and an extended-precision right shift, across two registers if the product didn't fit in one. (e.g. x86 shrd edx, eax, 1 to shift the low bit from the high half into the top of EAX, after a mul r32 produced the 64-bit product in EDX:EAX.) It also does n * (n-1) / 2 + n instead of the usual n * (n+1)/2; not sure how that helps. It avoids overflow of the input type, I guess, in case that matters for unsigned types where the original loop would just have wrapping, not UB. Except it doesn't do this optimization for unsigned. (BTW, either n or n+-1 are even, so the division (right shift) is exact; which is good because the sum of integers had better be an integer.) In your sum-of-squares asm, you can see it using umull x, w, w to do a widening multiply, and a 64-bit right shift, before the 32-bit multiplicative-inverse for division by 3. Playing around with your code and a simplified version not squaring, it makes a small difference in code-gen when you count down or up. 
int sum_ints(int n) { int a = 0; //for (int i=0 ; i<n ; i++) a += i; // count up, skipping n while (n--) a += n; // count down, skipping n return a; } Negative n would have UB with your version, as the loop would run to INT_MIN--, and overflow a first. So clang might or might not be using that to assume that the initial n is non-negative. But if not, IDK why it makes more complicated code that multiplies twice. // count down version, starting with a += n-1, so x = n-1 in the usual formulae. // clang15 -O3 sum_ints(int): cbz w0, .LBB0_2 // only bail on zero, not negative. sub w8, w0, #1 // n-1 sub w9, w0, #2 // n-2 umull x10, w8, w9 // (n-1)*(n-2) madd w8, w8, w9, w0 // w8 = (n-1)*(n-2) + n lsr x9, x10, #1 // w9 = (n-1)*(n-2)/2 mvn w9, w9 // w9 = ~w9 = -w9 - 1 add w0, w8, w9 // (n-1)*(n-2) - (n-1)*(n-2)/2 + n - 1 I think? .LBB0_2: ret // count up version, ending with n-1. clang15 -O3 sum_ints(int): subs w8, w0, #1 // n-1 b.lt .LBB0_2 sub w9, w0, #2 // n-2 umull x9, w8, w9 // (n-1)*(n-2) lsr x9, x9, #1 // . / 2 add w0, w8, w9 // (n-1)*(n-2)/2 + (n-1) = (n-1)*(n-2 + 2)/2 // = the usual x * (x+1 )/2 for x=n-1 ret .LBB0_2: mov w0, wzr // separate return path for all negative inputs ret Other types of loop pattern-recognition / replacement GCC and clang do pattern-recognition for loops that count set bits, as well as the standard bithack that people will have copy/pasted from SO. (This is useful because ISO C fails to provide a portable way to express this operation that most modern CPUs have. And ISO C++ only fixed that deficiency in C++20 with <bit>, or via std::bitset<32> .count()). So some real codebases just have a bithack or simple loop over set bits instead of __builtin_popcount because people prefer simplicity and want to leave performance up to the compiler. These pattern-recognizers only work on some specific ways to implement popcount, namely the x &= x-1; count++; it would presumably cost too much compile time to try to prove equivalence for every possible loop. From that, we can be pretty sure that these work by looking for a specific implementation, not at what the result actually is for every possible integer. The variable names of course don't matter, but the sequence of operations on the input variable does. I assume there's some flexibility in reordering operations in ways that give the same result when checking for equivalence. In GCC's case, apparently number_of_iterations_popcount is the name of the function that discovers this: compilers often want to know how many iterations a loop will run for: if it's a small constant they may fully unroll it. If it can be calculated from other variables before starting the loop, it's a candidate for auto-vectorization. (GCC/clang can't auto-vectorize search loops, or anything else with a data-dependent if()break.) As shown in the top answer on Count the number of set bits in a 32-bit integer, GCC10 and clang10 (Godbolt) can also recognize a popcount using a SWAR bithack, so you get the best of both worlds: ideally a single instruction, but if not then at least a good strategy. Counting iterations of x &= x-1 until x == 0 is ok when the expected number of set bits is small, so is also a sensible choice sometimes, as the other thing that GCC / clang can replace if hardware popcount is available. (And is simple to write, without needing the masking constants, and can compile to small machine-code size with -Os if not being replaced with a single instruction.) 
int popcount_blsr_until_zero(unsigned x){ int count = 0; while (x){ count++; // loop trip-count = popcount, this is what GCC looks for x &= x - 1; } return count; } GCC and clang for x86-64, -O3 -march=nehalem or later, on Godbolt for this and some other versions. # gcc12 -O3 -march=znver2 popcount_blsr_until_zero(unsigned int): popcnt eax, edi ret // clang -O3 for AArch64 popcount_blsr_until_zero(unsigned int): mov w8, w0 // this is pointless, GCC doesn't do it. fmov d0, x8 cnt v0.8b, v0.8b // ARM unfortunately only has vector popcnt uaddlv h0, v0.8b // hsum bytes fmov w0, s0 // copy back to GP-integer ret One of the simplest forms of code replacement by pattern-recognition is compiling (n<<5) | (n>>(32-5)) into a rotate-left by 5. (See this Q&A for run-time variable counts, and how to safely write something that gets recognized but also avoids UB even for a count of 0.) But that might happen late enough in the compilation process that you'd call it a peephole optimization. CISC ISAs tend to have more peephole optimizations, like x86 having special-case shorter instructions to sign within the accumulator (cdqe instead of movzx eax, ax). x86 xor-zeroing to set a register to zero can still be called a peephole, despite sometimes needing to rearrange things because that clobbers FLAGS while mov eax, 0 doesn't. GCC enables xor-zeroing with -fpeephole2 (part of -O2); perhaps treating it as just a peephole is why GCC sometimes does a bad job and fails to find ways to reorder it so xor-zero / cmp / setcc instead of cmp / setcc / movzx, because x86 setcc to set a register according to a FLAGS condition sucks, only writing the low 8 bits. AArch64 has much better instructions, like csinc which can be used with the zero-register to materialize a 0/1, or with other registers to conditionally select and increment. But sum-of-series loops are a larger-scale replacement, not exactly what I'd think of as a peephole, especially since it's not target-specific. Also related: Would a C compiler be allowed to replace an algorithm with another? - yes. But usually they don't, because compilers are mechanical enough that they wouldn't always be right, and picking an efficient algorithm for the data is something that C programmers would expect a compiler to respect, unless there's a single instruction that's obviously always faster. clang knows how to auto-vectorize __builtin_popcount over an array with AVX2 vpshub as a lookup table for nibbles. It's not just making a SIMD version out of the same operation, again it's using an expansion that human compiler devs put there for it to use. Why does compiler optimization not generate a loop for sum of integers from 1..N? is about cases where this optimization doesn't happen, e.g. one with j <= n for unsigned, which is a potentially infinite loop. Comments there found some interesting limitations on when clang can optimize: e.g. if the loop was for (int j=0 ; j < n; j += 3), the trip-count would be less predictable / calculable, and defeats this optimisation. A: To at least get things started, this is your "basic number theory" formula, albeit in a rather obfuscated and inefficient form. The clang developers evidently took that class too. Some hints to help verify it: Your sum_squares function has an off-by-one bug and only sums up to n-1. Hence the formula we expect to get is n(n-1)(2n-1)/6. The orr w9, w9, #0x2 in this case is equivalent to add w9, w9, #0x2. The previous instruction, mul w9, w8, w8 loaded w9 with the square of w8. 
Now the only perfect squares mod 4 are 0 and 1, both of which have bit 1 clear, thus bit 1 of w9 will always be clear. So w9 | 2 is equivalent to w9 + 2. (No, I don't know why clang would do it this way.) As harold commented, multiplication by 0x55555556 is equivalent mod 2^32 to division by 3 and multiplication by 2 (assuming no remainder). This technique is sometimes called "magic number division". See Why does GCC use multiplication by a strange number in implementing integer division?. So prior to this, you have x11 = ((n - 1)(n - 2)(n - 3)) / 2 which, note, is always a multiple of 3 (and the division by 2 is always exact because the numerator is always even). Hence w11 * w12 results in (n-1)(n-2)(n-3)/6. Putting this all together, you can check the algebra to verify that the final result is equivalent to n(n-1)(2n-1)/6. I can't speak to how clang performs this optimization. I think I once went through the exercise of figuring out which LLVM optimization pass makes it happen, but I don't recall what it was. But there are known algorithms to automatically derive this sort of closed-form expression, e.g. Gosper's algorithm. So clang may be applying something like that. I'm speculating now, but perhaps the algorithm spits out a formula in an unsimplified form, and maybe clang just emits the directly corresponding code instead of trying to algebraically simplify first. A: The code generated by Clang for the sum of squares likely uses a technique called loop unrolling. This is a common optimization technique in which the compiler unrolls the loop, creating a separate code path for each iteration of the loop. This allows the compiler to generate more efficient code by eliminating the overhead of jumping back to the top of the loop and checking the loop condition on each iteration. For example, consider the following C code: int sum_squares(int n) { int a = 0; for (int i = 1; i <= n; i++) { a += i * i; } return a; } The code generated by Clang for this function with optimization level -O3 looks like this: _sum_squares: mov x0, xzr mov x8, #1 mov x9, #2 mov x10, #3 add x0, x0, x8 cmp x9, x1 b.ge LBB0_3 add x0, x0, x9 add x9, x9, #1 cmp x10, x1 b.ge LBB0_3 add x0, x0, x10 add x10, x10, #1 LBB0_3: ret As you can see, the loop has been unrolled four times. This means that the code is executing the loop body four times before checking the loop condition again. This allows the compiler to generate more efficient code by eliminating the overhead of checking the loop condition on each iteration.
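One quick way to convince yourself that the branch-free code computes the "basic number theory" formula (adjusted for the off-by-one pointed out in the answers, since the C loop only sums up to n-1) is a brute-force check; this snippet is mine, not part of the original post.

def sum_squares_loop(n: int) -> int:
    """Mirror of the C loop `while (n--) a += n * n;`, i.e. (n-1)^2 + ... + 1^2 + 0^2."""
    a = 0
    while n:
        n -= 1
        a += n * n
    return a

def sum_squares_closed(n: int) -> int:
    """Closed form for the same sum: sum of k^2 for k = 1 .. n-1."""
    return (n - 1) * n * (2 * n - 1) // 6

for n in range(2000):
    assert sum_squares_loop(n) == sum_squares_closed(n)
print("closed form matches the loop for every n in [0, 2000)")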
How does clang generate non-looping code for sum of squares?
I admit the answer to this may be 'some very specific magic', but I'm kind of shocked by what I've observed here. I was wondering if anyone had insight to how these types of optimizations work. I find compiler design to be quite interesting, and I really can't imagine how this works. I'm sure the answer is somewhere in the clang source code, but I don't even know where I would look. I'm a TA for a class at college, and I was recently asked to help with a simple homework question. This led me down an interesting path... The question is simple enough: In x86_64 assembly, write a function which given a (positive) integer n returns 1^2 + 2^2 + 3^2 + ... + n^2. I decided to play around a bit, and after helping them write this in x86_64 assembly, I, having an M1 macbook, decided to see if I could create a nice solution in arm64 assembly. I came up with the relatively simple and straightforward solution: _sum_squares: mov x1, x0 ; Do multiplication from x1 mov x0, xzr ; Clear x0 Lloop: ; x0 <-- (x1 * x1) + x0 madd x0, x1, x1, x0 ; Loop until x1 == 0 subs x1, x1, #1 bne Lloop ret (I wish there was some sort of nice way to do branch if --x1 == 0 in one instruction, but I couldn't think of any) Note: There is a simple formula for this from any basic number theory class, which is [n(n + 1)(2n + 1)] / 6, but I decided this wasn't really in the spirit of the question. I then was wondering how clang would generate assembly for a simple C version. Upon writing the simple C implementation, I found that clang with -Og generates assembly which seems a bit verbose, but generally works as expected with a loop and accumulator (although it is very inefficient): int sum_squares(int n) { int a = 0; while (n--) a += (n * n); return a; } (clang -Og -S, annotated myself, cfi removed, labels renamed) _sum_squares: sub sp, sp, #16 ; create stack space str w0, [sp, #12] ; store n str wzr, [sp, #8] ; store 0 b Ldec ; silly clang, this just falls through... Ldec: ; n-- and return if n == 0 ldr w8, [sp, #12] ; load n subs w9, w8, #1 ; w9 = (n - 1) str w9, [sp, #12] ; store (n - 1) over n subs w8, w8, #0 ; w8 = n - 0 (set flags based on n) cset w8, eq ; set w8 = 1 if n == 0 else w8 = 0 tbnz w8, #0, Lret ; branch to return if n == 0, else fall through b Ladd ; silly clang, this falls through again... Ladd: ; a += n^2 ldr w8, [sp, #12] ; load n ldr w9, [sp, #12] ; load n mul w9, w8, w9 ; w9 = n * n ldr w8, [sp, #8] ; load a add w8, w8, w9 ; a += w9 str w8, [sp, #8] ; store a b Ldec ; go back to start of look Lret: ; return a from top of stack ldr w0, [sp, #8] ; w0 = a add sp, sp, #16 ; cleanup temp stack ret ; back to caller This is altogether reasonable for a direct translation of the C code to arm64 assembly. After some optimization (O1 uses a similar formula, O2 and O3 are identical), clang comes up with some magic. I have no clue how it came up with this code, it appears to be somewhat similar to the basic formula for this summation, except with bit magic. I didn't imagine the compiler would be able to derive a formula for this without a loop, but it appears I was wrong. 
The generated code is as follows (with my best attempt at a commentary, n is the input in w0): _sum_squares: cbz w0, Lret ; return if n == 0 sub w8, w0, #1 ; w8 = (n - 1) mul w9, w8, w8 ; w9 = (n - 1)^2 orr w9, w9, #0x2 ; w9 = ((n - 1)^2) | 2 sub w9, w9, w0 ; w9 = [((n - 1)^2) | 2] - n mov w10, #5 ; w10 = 5 sub w10, w10, w0, lsl #1 ; w10 = 5 - (n / 2) sub w11, w0, #2 ; w11 = n - 2 umull x11, w8, w11 ; w11 = (n - 1)(n - 2) lsr x12, x11, #1 ; x12 = ((n - 1)(n - 2)) / 2 mul w10, w10, w12 ; w10 = (5 - (n / 2))(((n - 1)(n - 2)) / 2) sub w12, w0, #3 ; w12 = n - 3 mul x11, x11, x12 ; x11 = (n - 1)(n - 2)(n - 3) lsr x11, x11, #1 ; x11 = ((n - 1)(n - 2)(n - 3)) / 2 mov w12, #21846 ; w12 = 0x5556 movk w12, #21845, lsl #16 ; w12 = 0x55555556 ; w8 = ((n - 1)([((n - 1)^2) | 2] - n)) + (5 - (n / 2))(((n - 1)(n - 2)) / 2) madd w8, w9, w8, w10 ; let A = w8 (set in last instruction) ; w0 = (0x55555556 * (((n - 1)(n - 2)(n - 3)) / 2)) + A madd w0, w11, w12, w8 ; somehow, this is the correct result? ; this feels like magic to me... Lret: ret ; return. Result already in w0. My question: How in the world does this work? How can a C compiler be given a loop like this and deduce a formula not even involving a loop? I expected some loop unwinding perhaps, but nothing like this. Does anyone have references involving this type of optimization? I especially don't understand what certain steps like orr w9, w9, #0x2 or the magic number 0x55555556 does. Any insight into these steps would be extra appreciated.
[ "TL:DR: Yes, clang knows the closed-form formulas for sums of integer power series, and can detect such loops. Smart humans have taught modern compilers to recognize certain patterns of operations and replace them with operations not present in the source, e.g. for rotates and even popcount loops and bithacks. And for clang/LLVM specifically, also closed-form formulae for sums of i^power, including with a stride other than 1. Yay math! So you can get asm logic that's not just an unrolled or vectorized version of the source.\nSee also a blog article How LLVM optimizes power sums which talks about how compilers find these loops by looking at how variables are updated across loop iterations.\nMatthieu M. comments that Closed form formulas are derived by the\nScalar Evolution optimization in LLVM. The comments in the code say that it's used primarily to analyze expressions involving induction variables in loops. and cites references for the techniques it uses for chains of recurrences.\n\nModern C compilers can recognize patterns in some loops or short sequences of logic, in the internal representation of the code. Humans (compiler devs) have told the compiler what to look for, and provided a hand-crafted replacement \"formula\". In GIMPLE (GCC) or LLVM-IR I expect, not just really late in compilation like a peephole optimization while generating asm.\nSo I'd guess the logic inside LLVM's optimizer checks every loop it finds for one or more of the following possibilities, with some code to look for some property of the LLVM-IR that represents the program logic of that loop:\n\nDoes it copy one array to another unmodified? If so replace with __builtin_memcpy, which might later get expanded inline or compiled to call memcpy. And if it has other side effects like leaving a pointer variable incremented, also represent that in the new LLVM-IR for the function containing the loop.\nDoes it set every byte of a range of memory to a constant byte value? If so, memset\nIs its sequence of operations equivalent to this sequence which does popcnt? Then emit a popcnt if hardware support exists, otherwise keep the loop strategy. (So it's not just treating it as if it was __builtin_popcount, not replacing a loop with a bithack or a call to a helper function. That makes sense because some strategies are good for numbers with few bits set, and the programmer might have chosen with that in mind.)\nIs the loop variable updated with the sum of a range of integers (with some stride), or that raised to a power? Then use a closed-form formula that avoids overflow of a fixed-width integer. (And if the start point and stride aren't 1, add an offset and/or scale factor.)\n\nThe checking might work in terms of considering a variable modified by a loop, which is read after the loop. So it knows what variable to consider when looking at the operations. (Loops with no used results get removed.)\nGCC doesn't look for sums of integer sequences, but clang does. IDK how many real-world codebases this actually speeds up; the closed-form formula is fairly well-known, having famously been re-discovered by Gauss as a schoolboy. (So hopefully a lot of code uses the formula instead of a loop). 
And not many programs would need to do exactly this, I'd have thought, except as an exercise.\n(The existence of a closed-form sum-of-squares formula is less well-known, but there is one, and apparently also for powers in general.)\n\nClang's implementation of the formula of course has to give the exact correct result for every input integer where the C abstract machine doesn't encounter undefined behaviour (for signed integer overflow), or match the truncation of unsigned multiplies. Otherwise it wouldn't satisfy the as-if rule, or could only be used when inlining into places with a known limited value-range. (In practice, it seemed clang wasn't using the closed-form optimization for unsigned, but maybe I just had a mistake in the version I was trying. Using a 64-bit integer could safely calculate sums of 32-bit integers. And then truncating that could give the same result as the source.)\nn*(n+1) can overflow in cases where n*(n+1)/2 is still in range, so this is non-trivial. For 32-bit int on a 64-bit machine, LLVM can and does simply use 64-bit multiply and right-shift. This may be a peephole optimization of the general case of using a double-width output and an extended-precision right shift, across two registers if the product didn't fit in one. (e.g. x86 shrd edx, eax, 1 to shift the low bit from the high half into the top of EAX, after a mul r32 produced the 64-bit product in EDX:EAX.)\nIt also does n * (n-1) / 2 + n instead of the usual n * (n+1)/2; not sure how that helps. It avoids overflow of the input type, I guess, in case that matters for unsigned types where the original loop would just have wrapping, not UB. Except it doesn't do this optimization for unsigned. (BTW, either n or n+-1 are even, so the division (right shift) is exact; which is good because the sum of integers had better be an integer.)\nIn your sum-of-squares asm, you can see it using umull x, w, w to do a widening multiply, and a 64-bit right shift, before the 32-bit multiplicative-inverse for division by 3.\n\nPlaying around with your code and a simplified version not squaring, it makes a small difference in code-gen when you count down or up.\nint sum_ints(int n) {\n int a = 0;\n //for (int i=0 ; i<n ; i++) a += i; // count up, skipping n\n while (n--) a += n; // count down, skipping n\n return a;\n}\n\nNegative n would have UB with your version, as the loop would run to INT_MIN--, and overflow a first. So clang might or might not be using that to assume that the initial n is non-negative. But if not, IDK why it makes more complicated code that multiplies twice.\n// count down version, starting with a += n-1, so x = n-1 in the usual formulae.\n// clang15 -O3\nsum_ints(int):\n cbz w0, .LBB0_2 // only bail on zero, not negative.\n sub w8, w0, #1 // n-1\n sub w9, w0, #2 // n-2\n umull x10, w8, w9 // (n-1)*(n-2)\n madd w8, w8, w9, w0 // w8 = (n-1)*(n-2) + n\n lsr x9, x10, #1 // w9 = (n-1)*(n-2)/2\n mvn w9, w9 // w9 = ~w9 = -w9 - 1\n add w0, w8, w9 // (n-1)*(n-2) - (n-1)*(n-2)/2 + n - 1 I think?\n.LBB0_2:\n ret\n\n// count up version, ending with n-1. clang15 -O3\nsum_ints(int):\n subs w8, w0, #1 // n-1\n b.lt .LBB0_2\n sub w9, w0, #2 // n-2\n umull x9, w8, w9 // (n-1)*(n-2)\n lsr x9, x9, #1 // . 
/ 2\n add w0, w8, w9 // (n-1)*(n-2)/2 + (n-1) = (n-1)*(n-2 + 2)/2\n // = the usual x * (x+1 )/2 for x=n-1\n ret\n.LBB0_2:\n mov w0, wzr // separate return path for all negative inputs\n ret\n\n\n\nOther types of loop pattern-recognition / replacement\nGCC and clang do pattern-recognition for loops that count set bits, as well as the standard bithack that people will have copy/pasted from SO. (This is useful because ISO C fails to provide a portable way to express this operation that most modern CPUs have. And ISO C++ only fixed that deficiency in C++20 with <bit>, or via std::bitset<32> .count()). So some real codebases just have a bithack or simple loop over set bits instead of __builtin_popcount because people prefer simplicity and want to leave performance up to the compiler.\nThese pattern-recognizers only work on some specific ways to implement popcount, namely the x &= x-1; count++; it would presumably cost too much compile time to try to prove equivalence for every possible loop. From that, we can be pretty sure that these work by looking for a specific implementation, not at what the result actually is for every possible integer.\nThe variable names of course don't matter, but the sequence of operations on the input variable does. I assume there's some flexibility in reordering operations in ways that give the same result when checking for equivalence. In GCC's case, apparently number_of_iterations_popcount is the name of the function that discovers this: compilers often want to know how many iterations a loop will run for: if it's a small constant they may fully unroll it. If it can be calculated from other variables before starting the loop, it's a candidate for auto-vectorization. (GCC/clang can't auto-vectorize search loops, or anything else with a data-dependent if()break.)\nAs shown in the top answer on Count the number of set bits in a 32-bit integer, GCC10 and clang10 (Godbolt) can also recognize a popcount using a SWAR bithack, so you get the best of both worlds: ideally a single instruction, but if not then at least a good strategy.\nCounting iterations of x &= x-1 until x == 0 is ok when the expected number of set bits is small, so is also a sensible choice sometimes, as the other thing that GCC / clang can replace if hardware popcount is available. (And is simple to write, without needing the masking constants, and can compile to small machine-code size with -Os if not being replaced with a single instruction.)\nint popcount_blsr_until_zero(unsigned x){\n int count = 0;\n while (x){\n count++; // loop trip-count = popcount, this is what GCC looks for\n x &= x - 1;\n }\n return count;\n}\n\nGCC and clang for x86-64, -O3 -march=nehalem or later, on Godbolt for this and some other versions.\n# gcc12 -O3 -march=znver2\npopcount_blsr_until_zero(unsigned int):\n popcnt eax, edi\n ret\n\n// clang -O3 for AArch64\npopcount_blsr_until_zero(unsigned int):\n mov w8, w0 // this is pointless, GCC doesn't do it.\n fmov d0, x8\n cnt v0.8b, v0.8b // ARM unfortunately only has vector popcnt\n uaddlv h0, v0.8b // hsum bytes\n fmov w0, s0 // copy back to GP-integer\n ret\n\n\nOne of the simplest forms of code replacement by pattern-recognition is compiling (n<<5) | (n>>(32-5)) into a rotate-left by 5. (See this Q&A for run-time variable counts, and how to safely write something that gets recognized but also avoids UB even for a count of 0.)\nBut that might happen late enough in the compilation process that you'd call it a peephole optimization. 
CISC ISAs tend to have more peephole optimizations, like x86 having special-case shorter instructions to sign within the accumulator (cdqe instead of movzx eax, ax). x86 xor-zeroing to set a register to zero can still be called a peephole, despite sometimes needing to rearrange things because that clobbers FLAGS while mov eax, 0 doesn't.\nGCC enables xor-zeroing with -fpeephole2 (part of -O2); perhaps treating it as just a peephole is why GCC sometimes does a bad job and fails to find ways to reorder it so xor-zero / cmp / setcc instead of cmp / setcc / movzx, because x86 setcc to set a register according to a FLAGS condition sucks, only writing the low 8 bits. AArch64 has much better instructions, like csinc which can be used with the zero-register to materialize a 0/1, or with other registers to conditionally select and increment.\nBut sum-of-series loops are a larger-scale replacement, not exactly what I'd think of as a peephole, especially since it's not target-specific.\nAlso related:\n\nWould a C compiler be allowed to replace an algorithm with another? - yes. But usually they don't, because compilers are mechanical enough that they wouldn't always be right, and picking an efficient algorithm for the data is something that C programmers would expect a compiler to respect, unless there's a single instruction that's obviously always faster.\n\nclang knows how to auto-vectorize __builtin_popcount over an array with AVX2 vpshub as a lookup table for nibbles. It's not just making a SIMD version out of the same operation, again it's using an expansion that human compiler devs put there for it to use.\n\nWhy does compiler optimization not generate a loop for sum of integers from 1..N? is about cases where this optimization doesn't happen, e.g. one with j <= n for unsigned, which is a potentially infinite loop.\nComments there found some interesting limitations on when clang can optimize: e.g. if the loop was for (int j=0 ; j < n; j += 3), the trip-count would be less predictable / calculable, and defeats this optimisation.\n\n\n", "To at least get things started, this is your \"basic number theory\" formula, albeit in a rather obfuscated and inefficient form. The clang developers evidently took that class too.\nSome hints to help verify it:\n\nYour sum_squares function has an off-by-one bug and only sums up to n-1. Hence the formula we expect to get is n(n-1)(2n-1)/6.\n\nThe orr w9, w9, #0x2 in this case is equivalent to add w9, w9, #0x2. The previous instruction, mul w9, w8, w8 loaded w9 with the square of w8. Now the only perfect squares mod 4 are 0 and 1, both of which have bit 1 clear, thus bit 1 of w9 will always be clear. So w9 | 2 is equivalent to w9 + 2. (No, I don't know why clang would do it this way.)\n\nAs harold commented, multiplication by 0x55555556 is equivalent mod 2^32 to division by 3 and multiplication by 2 (assuming no remainder). This technique is sometimes called \"magic number division\". See Why does GCC use multiplication by a strange number in implementing integer division?. So prior to this, you have x11 = ((n - 1)(n - 2)(n - 3)) / 2 which, note, is always a multiple of 3 (and the division by 2 is always exact because the numerator is always even). Hence w11 * w12 results in (n-1)(n-2)(n-3)/6.\n\n\nPutting this all together, you can check the algebra to verify that the final result is equivalent to n(n-1)(2n-1)/6.\nI can't speak to how clang performs this optimization. 
I think I once went through the exercise of figuring out which LLVM optimization pass makes it happen, but I don't recall what it was. But there are known algorithms to automatically derive this sort of closed-form expression, e.g. Gosper's algorithm. So clang may be applying something like that. I'm speculating now, but perhaps the algorithm spits out a formula in an unsimplified form, and maybe clang just emits the directly corresponding code instead of trying to algebraically simplify first.\n", "The code generated by Clang for the sum of squares likely uses a technique called loop unrolling. This is a common optimization technique in which the compiler unrolls the loop, creating a separate code path for each iteration of the loop. This allows the compiler to generate more efficient code by eliminating the overhead of jumping back to the top of the loop and checking the loop condition on each iteration.\nFor example, consider the following C code:\nint sum_squares(int n) {\nint a = 0;\nfor (int i = 1; i <= n; i++) {\na += i * i;\n}\nreturn a;\n}\n\nThe code generated by Clang for this function with optimization level -O3 looks like this:\n_sum_squares:\nmov x0, xzr\nmov x8, #1\nmov x9, #2\nmov x10, #3\nadd x0, x0, x8\ncmp x9, x1\nb.ge LBB0_3\nadd x0, x0, x9\nadd x9, x9, #1\ncmp x10, x1\nb.ge LBB0_3\nadd x0, x0, x10\nadd x10, x10, #1\nLBB0_3:\nret\n\nAs you can see, the loop has been unrolled four times. This means that the code is executing the loop body four times before checking the loop condition again. This allows the compiler to generate more efficient code by eliminating the overhead of checking the loop condition on each iteration.\n" ]
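As a sanity check on the 0x55555556 "magic number division" point made in the answers above (again my own snippet, not from the post): modulo 2^32, multiplying a multiple of 3 by 0x55555556 is the same as dividing it by 3 and doubling it, which is exactly what the final madd relies on.

MASK = (1 << 32) - 1     # arithmetic modulo 2^32, matching the 32-bit registers
MAGIC = 0x55555556

assert (3 * MAGIC) & MASK == 2       # 3 * MAGIC wraps around to exactly 2

for k in range(100_000):
    x = 3 * k                        # the value fed into this step is always a multiple of 3
    assert (x * MAGIC) & MASK == (2 * k) & MASK
print("0x55555556 acts as (x / 3) * 2 mod 2^32 for every multiple of 3 tested")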
[ 49, 15, 0 ]
[]
[]
[ "arm64", "assembly", "clang", "llvm", "optimization" ]
stackoverflow_0074417624_arm64_assembly_clang_llvm_optimization.txt
Q: Lakeformation and Redshift After reading the documentation I have a question related to Lakeformation and Redshift. Is it true that while using lakeformation the available data for the consumer accounts is only going to be through S3. If this is true then if I want to share information which is a Redshift/Postgres Database in a producer account then I will have to dump to S3 before it can be shared with any consumer account. Is S3 the only possible way of sharing information between the producers and consumers when using Lakeformation. A: Yes, LakeFormation only works with Glue and S3, Glue for metadata and S3 for data storage. LakeFormation really is just a little bit of extra permissioning on top of Glue, nothing more. Maybe https://aws.amazon.com/blogs/aws/cross-account-data-sharing-for-amazon-redshift/ can be of help to you; there are other ways to share Redshift data without using LakeFormation. A: Sharing native Amazon Redshift objects through LakeFormation now appears to be possible. It is still in preview mode. The links below explain the approach: https://aws.amazon.com/about-aws/whats-new/2022/11/amazon-redshift-data-sharing-centralized-access-control-lake-formation-preview/ https://aws.amazon.com/blogs/big-data/centrally-manage-access-and-permissions-for-amazon-redshift-data-sharing-with-aws-lake-formation/
Lakeformation and Redshift
After reading the documentation I have a question related to Lakeformation and Redshift. Is it true that while using lakeformation the available data for the consumer accounts is only going to be through S3. If this is true then if I want to share information which is a Redshift/Postgres Database in a producer account then I will have to dump to S3 before it can be shared with any consumer account. Is S3 the only possible way of sharing information between the producers and consumers when using Lakeformation.
[ "Yes, LakeFormation only works with Glue and S3, glue for metadata and S3 for data storage. LakeFormation really is just a little bit of extra permissioning on top of Glue, nothing more.\nMaybe https://aws.amazon.com/blogs/aws/cross-account-data-sharing-for-amazon-redshift/ can be of help to you, there are other ways to share Redshift data, not using LakeFormation.\n", "Sharing of AWS RedShift native objects looks like is now possible to be shared using LakeFormation. It is still in preview mode. The below links will help understand the approach\nhttps://aws.amazon.com/about-aws/whats-new/2022/11/amazon-redshift-data-sharing-centralized-access-control-lake-formation-preview/\nhttps://aws.amazon.com/blogs/big-data/centrally-manage-access-and-permissions-for-amazon-redshift-data-sharing-with-aws-lake-formation/\n" ]
[ 1, 0 ]
[]
[]
[ "amazon_web_services", "aws_lake_formation" ]
stackoverflow_0069583795_amazon_web_services_aws_lake_formation.txt
Q: Python function about chemical formulas I have a CSV file that contains chemical matter names and some info.What I need to do is add new columns and write their formulas, molecular weights and count H,C,N,O,S atom numbers in each formula.I am stuck with the counting atom numbers part.I have the function related it but I don't know how to merge it and make code work. import pandas as pd import urllib.request import copy import re df = pd.read_csv('AminoAcids.csv') def countAtoms(string, dict={}): curDict = copy.copy(dict) atoms = re.findall("[A-Z]{1}[a-z]*[0-9]*", string) for j in atoms: atomGroups = re.match('([A-Z]{1}[a-z]*)([0-9]*)', j) atom = atomGroups.group(1) number = atomGroups.group(2) try : curDict[atom] = curDict[atom] + int(number) except KeyError: try : curDict[atom] = int(number) except ValueError: curDict[atom] = 1 except ValueError: curDict[atom] = curDict[atom] + 1 return curDict df["Formula"] = ['C3H7NO2', 'C6H14N4O2 ','C4H8N2O3','C4H7NO4 ', 'C3H7NO2S ','C5H9NO4','C5H10N2O3','C2H5NO2 ','C6H9N3O2', 'C6H13NO2','C6H13NO2','C6H14N2O2 ','C5H11NO2S ','C9H11NO2', 'C5H9NO2 ','C3H7NO3','C4H9NO3 ','C11H12N2O2 ','C9H11NO3 ','C5H11NO2'] df["Molecular Weight"] = ['89.09','174.2','132.12', '133.1','121.16','147.13','146.14','75.07','155.15', '131.17','131.17','146.19','149.21','165.19','115.13', '105.09','119.12','204.22','181.19','117.15'] df["H"] = 0 df["C"] = 0 df["N"] = 0 df["O"] = 0 df["S"] = 0 df.to_csv("AminoAcids.csv", index=False) print(df.to_string()) A: If I understand correctly, you should be able to use str.extract here: df["H"] = df["Formula"].str.extract(r'H(\d+)') df["C"] = df["Formula"].str.extract(r'C(\d+)') df["N"] = df["Formula"].str.extract(r'N(\d+)') df["O"] = df["Formula"].str.extract(r'O(\d+)') df["S"] = df["Formula"].str.extract(r'S(\d+)') A: here is another approach with similar result: df.join(df['Formula'].str.findall('([A-Z])(\d*)').map(dict).apply(pd.Series).replace('', 1)) >>> ''' Formula Molecular Weight C H N O S 0 C3H7NO2 89.09 3 7 1 2 NaN 1 C6H14N4O2 174.2 6 14 4 2 NaN 2 C4H8N2O3 132.12 4 8 2 3 NaN 3 C4H7NO4 133.1 4 7 1 4 NaN 4 C3H7NO2S 121.16 3 7 1 2 1.0 5 C5H9NO4 147.13 5 9 1 4 NaN 6 C5H10N2O3 146.14 5 10 2 3 NaN 7 C2H5NO2 75.07 2 5 1 2 NaN 8 C6H9N3O2 155.15 6 9 3 2 NaN 9 C6H13NO2 131.17 6 13 1 2 NaN 10 C6H13NO2 131.17 6 13 1 2 NaN 11 C6H14N2O2 146.19 6 14 2 2 NaN 12 C5H11NO2S 149.21 5 11 1 2 1.0 13 C9H11NO2 165.19 9 11 1 2 NaN 14 C5H9NO2 115.13 5 9 1 2 NaN 15 C3H7NO3 105.09 3 7 1 3 NaN 16 C4H9NO3 119.12 4 9 1 3 NaN 17 C11H12N2O2 204.22 11 12 2 2 NaN 18 C9H11NO3 181.19 9 11 1 3 NaN 19 C5H11NO2 117.15 5 11 1 2 NaN
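A middle ground between the question's countAtoms and per-element extraction is to parse each formula once into a dict of counts and let pandas expand that into columns. The sketch below is illustrative (the helper name and regex are mine, not from the post); unlike a single-letter pattern it also copes with two-letter element symbols such as Na or Cl, should they ever appear.

import re
import pandas as pd

def formula_to_counts(formula: str) -> dict:
    """Parse e.g. 'C6H14N4O2' into {'C': 6, 'H': 14, 'N': 4, 'O': 2}."""
    counts = {}
    for symbol, number in re.findall(r"([A-Z][a-z]?)(\d*)", formula.strip()):
        counts[symbol] = counts.get(symbol, 0) + (int(number) if number else 1)
    return counts

df = pd.DataFrame({"Formula": ["C3H7NO2", "C3H7NO2S", "C11H12N2O2"]})
atom_cols = df["Formula"].apply(formula_to_counts).apply(pd.Series).fillna(0).astype(int)
print(df.join(atom_cols))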
Python function about chemical formulas
I have a CSV file that contains chemical matter names and some info.What I need to do is add new columns and write their formulas, molecular weights and count H,C,N,O,S atom numbers in each formula.I am stuck with the counting atom numbers part.I have the function related it but I don't know how to merge it and make code work. import pandas as pd import urllib.request import copy import re df = pd.read_csv('AminoAcids.csv') def countAtoms(string, dict={}): curDict = copy.copy(dict) atoms = re.findall("[A-Z]{1}[a-z]*[0-9]*", string) for j in atoms: atomGroups = re.match('([A-Z]{1}[a-z]*)([0-9]*)', j) atom = atomGroups.group(1) number = atomGroups.group(2) try : curDict[atom] = curDict[atom] + int(number) except KeyError: try : curDict[atom] = int(number) except ValueError: curDict[atom] = 1 except ValueError: curDict[atom] = curDict[atom] + 1 return curDict df["Formula"] = ['C3H7NO2', 'C6H14N4O2 ','C4H8N2O3','C4H7NO4 ', 'C3H7NO2S ','C5H9NO4','C5H10N2O3','C2H5NO2 ','C6H9N3O2', 'C6H13NO2','C6H13NO2','C6H14N2O2 ','C5H11NO2S ','C9H11NO2', 'C5H9NO2 ','C3H7NO3','C4H9NO3 ','C11H12N2O2 ','C9H11NO3 ','C5H11NO2'] df["Molecular Weight"] = ['89.09','174.2','132.12', '133.1','121.16','147.13','146.14','75.07','155.15', '131.17','131.17','146.19','149.21','165.19','115.13', '105.09','119.12','204.22','181.19','117.15'] df["H"] = 0 df["C"] = 0 df["N"] = 0 df["O"] = 0 df["S"] = 0 df.to_csv("AminoAcids.csv", index=False) print(df.to_string())
[ "If I understand correctly, you should be able to use str.extract here:\ndf[\"H\"] = df[\"Formula\"].str.extract(r'H(\\d+)')\ndf[\"C\"] = df[\"Formula\"].str.extract(r'C(\\d+)')\ndf[\"N\"] = df[\"Formula\"].str.extract(r'N(\\d+)')\ndf[\"O\"] = df[\"Formula\"].str.extract(r'O(\\d+)')\ndf[\"S\"] = df[\"Formula\"].str.extract(r'S(\\d+)')\n\n", "here is another approach with similar result:\ndf.join(df['Formula'].str.findall('([A-Z])(\\d*)').map(dict).apply(pd.Series).replace('', 1))\n\n>>>\n'''\n Formula Molecular Weight C H N O S\n0 C3H7NO2 89.09 3 7 1 2 NaN\n1 C6H14N4O2 174.2 6 14 4 2 NaN\n2 C4H8N2O3 132.12 4 8 2 3 NaN\n3 C4H7NO4 133.1 4 7 1 4 NaN\n4 C3H7NO2S 121.16 3 7 1 2 1.0\n5 C5H9NO4 147.13 5 9 1 4 NaN\n6 C5H10N2O3 146.14 5 10 2 3 NaN\n7 C2H5NO2 75.07 2 5 1 2 NaN\n8 C6H9N3O2 155.15 6 9 3 2 NaN\n9 C6H13NO2 131.17 6 13 1 2 NaN\n10 C6H13NO2 131.17 6 13 1 2 NaN\n11 C6H14N2O2 146.19 6 14 2 2 NaN\n12 C5H11NO2S 149.21 5 11 1 2 1.0\n13 C9H11NO2 165.19 9 11 1 2 NaN\n14 C5H9NO2 115.13 5 9 1 2 NaN\n15 C3H7NO3 105.09 3 7 1 3 NaN\n16 C4H9NO3 119.12 4 9 1 3 NaN\n17 C11H12N2O2 204.22 11 12 2 2 NaN\n18 C9H11NO3 181.19 9 11 1 3 NaN\n19 C5H11NO2 117.15 5 11 1 2 NaN\n\n" ]
[ 1, 1 ]
[]
[]
[ "chemistry", "csv", "python", "python_3.x" ]
stackoverflow_0074668631_chemistry_csv_python_python_3.x.txt
Q: Redirect application output to unix socket

I have a socket server which creates a unix socket and then reads data from it. I also have a long-running application, and I want to redirect the stdout of that process to my unix socket. I tried this command as a test:

ping 127.0.0.1 > /tmp/unixsockettest.sock

But I get -bash: /tmp/unixsockettest.sock: No such device or address. Is it possible to redirect application output to a unix socket?

A: You can't redirect output to a Unix socket using shell i/o redirection, but you can use a tool like socat (which is probably packaged for your distribution) to accomplish the task:

ping 127.0.0.1 | socat - unix-connect:/tmp/unixsockettest.sock

A: Shell redirection tries to open() its target file and then dup2() the resulting file descriptor into the right place. Unfortunately, UNIX sockets cannot be opened. The way to get a writable file descriptor from a UNIX socket (streaming or datagram) is to connect() to it (after you've stuffed the path into a struct sockaddr_un). Shells will not do this for you, unfortunately, but you can pipe into tools like socat or netcat, which will do the connection and then shovel their piped input into the connected socket descriptor. E.g., with netcat, you'd do:

ping 127.0.0.1 | nc -U /tmp/unixsockettest.sock   # add -u if the socket isn't streaming but datagram

(Having a UNIX derivative where opening a socket file would connect to it might not be a bad design choice for OS designers. An argument against it is that it could cause opens to block and to bother some other process (the binder), but both can happen already with named pipes anyway.)
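To test the socat / nc pipelines above end to end, a minimal stand-in for the receiving side can help. The sketch below is only an assumption about what such a server might look like (the asker's actual server is not shown); it reuses the /tmp/unixsockettest.sock path from the question and simply prints whatever is written into the socket.

import os
import socket

SOCKET_PATH = '/tmp/unixsockettest.sock'

# Remove a stale socket file left over from a previous run, then listen.
if os.path.exists(SOCKET_PATH):
    os.unlink(SOCKET_PATH)

server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCKET_PATH)
server.listen(1)

conn, _ = server.accept()  # blocks until socat or nc -U connects
with conn:
    while True:
        data = conn.recv(4096)
        if not data:           # the writing side closed the connection
            break
        print(data.decode(errors='replace'), end='')

With this listener running, ping 127.0.0.1 | socat - unix-connect:/tmp/unixsockettest.sock (or the nc -U variant from the second answer) should stream the ping output to the listener's stdout.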
Redirect application output to unix socket
I have a socket server which creates a unix socket and then reads data from it. I also have a long-running application, and I want to redirect the stdout of that process to my unix socket. I tried this command as a test:

ping 127.0.0.1 > /tmp/unixsockettest.sock

But I get -bash: /tmp/unixsockettest.sock: No such device or address. Is it possible to redirect application output to a unix socket?
[ "You can't redirect out to a Unix socket using shell i/o redirection, but you can use a tool like socat (which is probably packaged for your distribution) to accomplish the task:\nping 127.0.0.1 | socat - unix-connect:/tmp/unixsockettest.sock\n\n", "Shell redirection tries to open() its target file and then dup2() the resulting filedescriptor into the right place.\nUnfortunately, UNIX sockets cannot be opened. The way to get a writable filedescriptor from a UNIX socket (streaming or datagram) is to connect() to it (after you've stuffed the path into a struct sockaddr_un).\nShells will not do this for you, unfortunately, but you can pipe into tools like socat or netcat, which will do the connection and then shovel their piped input into the connected socket descriptor.\nE.g., with netcat, you'd do:\nping 127.0.0.1 | nc -U /tmp/unixsockettest.sock #add -u it the socket isn't streaming but datagram\n\n(Having an UNIX derivative where opening a socket files would connect to them might not be a bad design choice for OS designers. An argument against it could be that it could cause opens to get blocked and that it might make them bother some other process (the binder) but both can happen already with named pipes anyway.)\n" ]
[ 1, 0 ]
[]
[]
[ "bash", "sockets", "unix" ]
stackoverflow_0074669219_bash_sockets_unix.txt