Dataset columns:
content: string (86 to 88.9k characters)
title: string (0 to 150 characters)
question: string (1 to 35.8k characters)
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string (30 to 130 characters)
Q: npm start not working in the React app, what do I do? So I created a React app, and when I did npm start for the first time, the app opened in the browser with no issues. Subsequently I closed the tab and the VSC editor. Now when I try "npm start" again in the VSC terminal, the app doesn't open and I get a strange response. This is my first proper question on Stack Overflow and my first project in React, so please be kind to me. After making the app using create-react-app my-app and doing npm start, the project ran well. Now when I do npm start this happens: PS C:\Users\Lenovo\Desktop\Coding\React\Practice> npm start npm ERR! code ENOENT npm ERR! syscall open npm ERR! path C:\Users\Lenovo\Desktop\Coding\React\Practice/package.json npm ERR! errno -4058 npm ERR! enoent ENOENT: no such file or directory, open 'C:\Users\Lenovo\Desktop\Coding\React\Practice\package.json' npm ERR! enoent This is related to npm not being able to find a file. npm ERR! enoent Except npm help, no other command runs. Please help me out. A: npx create-react-app practice-app cd practice-app npm start You have to be inside the React application you have created to launch it with npm or yarn. I think you were still in the parent folder, which in this case is the Practice folder. In the VS Code terminal, type cd and then press the Tab key; you will see all the folders contained inside the Practice folder come up. Keep pressing until your React project folder comes up on the screen, then press Enter. Then type code . to open the folder in the VS Code IDE. Then use npm start or yarn start in the project to start it.
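A minimal sketch of the fix as a PowerShell session, matching the prompt in the error log; the folder name my-app is an assumption taken from the create-react-app command in the question, so adjust it if yours differs:

PS C:\Users\Lenovo\Desktop\Coding\React\Practice> dir                # no package.json here, so npm start fails
PS C:\Users\Lenovo\Desktop\Coding\React\Practice> cd my-app          # move into the app folder
PS C:\Users\Lenovo\Desktop\Coding\React\Practice\my-app> npm start   # npm now finds package.json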
npm start not working in the React app, what do I do?
So I created a React app, and when I did npm start for the first time, the app opened in the browser with no issues. Subsequently I closed the tab and the VSC editor. Now when I try "npm start" again in the VSC terminal, the app doesn't open and I get a strange response. This is my first proper question on Stack Overflow and my first project in React, so please be kind to me. After making the app using create-react-app my-app and doing npm start, the project ran well. Now when I do npm start this happens: PS C:\Users\Lenovo\Desktop\Coding\React\Practice> npm start npm ERR! code ENOENT npm ERR! syscall open npm ERR! path C:\Users\Lenovo\Desktop\Coding\React\Practice/package.json npm ERR! errno -4058 npm ERR! enoent ENOENT: no such file or directory, open 'C:\Users\Lenovo\Desktop\Coding\React\Practice\package.json' npm ERR! enoent This is related to npm not being able to find a file. npm ERR! enoent Except npm help, no other command runs. Please help me out.
[ "npx create-react-app practice-app\ncd practice-app\nnpm start\nYou have to be inside the React application you have created to launch it with npm or yarn. I think you were still in the parent folder, which in this case is the Practice folder.\nIn the VS Code terminal, type cd and then press the Tab key; you will see all the folders contained inside the Practice folder come up. Keep pressing until your React project folder comes up on the screen, then press Enter.\nThen type code . to open the folder in the VS Code IDE. Then use npm start or yarn start in the project to start it.\n" ]
[ 0 ]
[]
[]
[ "reactjs" ]
stackoverflow_0074667136_reactjs.txt
Q: How to get vertical gap between images in a flex container <div class='parent'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> </div> css .parent{ display:flex; flex-wrap:wrap; justify-content:space-between; align-content: space-between; } .child{ display:block; width:calc(25% - 10px); margin:0 auto; } Images have a horizontal gap of 10px, but there is no vertical gap (distance between images in a column). How do I add this gap without having padding/margin between the children and the parent? I need gaps only between images, but in both directions. Also, the images have no fixed width and height. A: This can be a use case of CSS-grid instead of flexbox: .parent { display: grid; grid-template-columns: repeat(4, 1fr); grid-gap: 10px; /*define the distance between elements*/ border: 1px solid; } .child{ height:50px; border:1px solid red; } <div class='parent'> <div class='child'></div> <div class='child'></div> <div class='child'></div> <div class='child'></div> <div class='child'></div> <div class='child'></div> <div class='child'></div> <div class='child'></div> </div> A: Only a simple .parent{ display: flex; gap: 5em;(You can give any size) } Should work considering the images are in appropriate size. Try reducing the image size and do it again.
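A hedged aside on the flexbox route: modern browsers also support the gap property directly on flex containers, so the original layout can keep gaps in both directions without switching to grid. The 7.5px below simply reserves three 10px column gaps across four items per row:

.parent {
  display: flex;
  flex-wrap: wrap;
  gap: 10px; /* sets both the row gap and the column gap */
}
.child {
  display: block;
  width: calc(25% - 7.5px); /* 3 gaps x 10px shared across 4 items per row */
}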
How to get vertical gap between images in a flex container
<div class='parent'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> <img class='child' src='...' alt='img'> </div> css .parent{ display:flex; flex-wrap:wrap; justify-content:space-between; align-content: space-between; } .child{ display:block; width:calc(25% - 10px); margin:0 auto; } Images have a horizontal gap of 10px, but there is no vertical gap (distance between images in a column). How do I add this gap without having padding/margin between the children and the parent? I need gaps only between images, but in both directions. Also, the images have no fixed width and height.
[ "This can be a use case of CSS-grid instead of flexbox:\n\n\n.parent {\r\n display: grid;\r\n grid-template-columns: repeat(4, 1fr);\r\n grid-gap: 10px; /*define the distance between elements*/\r\n border: 1px solid;\r\n}\r\n\r\n.child{\r\n height:50px;\r\n border:1px solid red;\r\n}\n<div class='parent'>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n <div class='child'></div>\r\n</div>\n\n\n\n", "Only a simple\n.parent{\ndisplay: flex;\ngap: 5em;(You can give any size)\n\n}\nShould work considering the images are in appropriate size. Try reducing the image size and do it again.\n" ]
[ 4, 0 ]
[]
[]
[ "css", "flexbox", "html" ]
stackoverflow_0051688637_css_flexbox_html.txt
Q: Why does styled-components 5.x warn about "Expected style to contain units." When styling a React Native app with Styled Components 5.x I'm getting the warning... Expected style "borderWidth: 2" to contain units. This didn't happen with previous versions. What does the warning mean? A: After some research and questions on github I tracked this one down... Styled Components uses the package css-to-react-native for its React Native conversions. css-to-react-native recently released version 3 which now requires units to be present for all measurements. Details here. You should use px for React Native as it is density independent. A: I think that using px is a bit of a pain, unintuitive, misleading, and even dangerous if you are using a theme giving some ...px string value to a component (Ionicons size for instance) that expects React Native number units. My way of dealing with this: import { LogBox } from 'react-native' LogBox.ignoreLogs([`to contain units`])
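A small illustrative sketch of the unit-suffixed style that satisfies css-to-react-native v3; the Box component name is made up for the example:

import styled from 'styled-components/native';

// 'border-width: 2' would trigger the units warning; '2px' keeps v3 happy.
const Box = styled.View`
  border-width: 2px;
  border-color: black;
`;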
Why does styled-components 5.x warn about "Expected style to contain units."
When styling a React Native app with Styled Components 5.x I'm getting the warning... Expected style "borderWidth: 2" to contain units. This didn't happen with previous versions. What does the warning mean?
[ "After some research and questions on github I tracked this one down...\nStyled Components uses the package css-to-react-native for its React Native conversions.\ncss-to-react-native recently released version 3 which now requires units to be present for all measurements. Details here.\nYou should use px for React Native as it is density independent.\n", "I think that using px is a bit of a pain, unintuitive, misleading, and even dangerous if you are using a theme giving some ...px string value to a component (Ionicons size for instance) that expects React Native number units.\nMy way of dealing with this:\nimport { LogBox } from 'react-native'\n\nLogBox.ignoreLogs([`to contain units`])\n\n" ]
[ 15, 0 ]
[]
[]
[ "react_native", "styled_components" ]
stackoverflow_0058923065_react_native_styled_components.txt
Q: How to use a lambda function to sort a dictionary with a nested list? I've been trying to sort a dictionary from largest to lowest values. The dictionary is structured like this: testing = {"third":[1,89],"first":[5,46],"second":[3,59]} The issue I'm coming across is that I'm not entirely sure how I can sort this based on the second listed value, so I want to sort it based on 89, 46 and 59, not the first values 1, 5, 3. The method I was currently using is: print(sorted(testing,key=lambda x:x[1][-1])) This sorts the dictionary, but not in the way I'm trying to get it to; "second" is being sorted by the first value. I'm sure there's a way to do this, I'm just not sure how to approach this lambda function. Any guidance would be greatly appreciated. A: sorted(testing.items(), key=lambda x: x[1][1])? output: [('first', [5, 46]), ('second', [3, 59]), ('third', [1, 89])]
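A short sketch building on the accepted answer; reverse=True yields the largest-to-lowest order the question asks for:

testing = {"third": [1, 89], "first": [5, 46], "second": [3, 59]}
# key picks the second element of each value list; reverse sorts high to low
ordered = sorted(testing.items(), key=lambda item: item[1][1], reverse=True)
print(ordered)  # [('third', [1, 89]), ('second', [3, 59]), ('first', [5, 46])]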
How to use a lambda function to sort a dictionary with a nested list?
I've been trying to sort a dictionary from largest to lowest values. The dictionary is structured like this: testing = {"third":[1,89],"first":[5,46],"second":[3,59]} The issue I'm coming across is that I'm not entirely sure how I can sort this based on the second listed value, so I want to sort it based on 89, 46 and 59, not the first values 1, 5, 3. The method I was currently using is: print(sorted(testing,key=lambda x:x[1][-1])) This sorts the dictionary, but not in the way I'm trying to get it to; "second" is being sorted by the first value. I'm sure there's a way to do this, I'm just not sure how to approach this lambda function. Any guidance would be greatly appreciated.
[ "sorted(testing.items(), key=lambda x: x[1][1])?\noutput:\n[('first', [5, 46]), ('second', [3, 59]), ('third', [1, 89])]\n\n" ]
[ 1 ]
[]
[]
[ "dictionary", "function", "python", "sorting" ]
stackoverflow_0074667202_dictionary_function_python_sorting.txt
Q: How to connect to FTPS server in node using basic-ftp module I am using the basic-ftp package (https://www.npmjs.com/package/basic-ftp) to connect to an FTPS server. I have tried other options but failed to connect to the FTPS server. Below is my code: const ftp = require("basic-ftp") example(); async function example() { const client = new ftp.Client() client.ftp.verbose = true try { await client.access({ host: "ftp.xxxx.xxxxx", user: "[email protected]", password: "xxxxxxx", secure :true }) await client.ensureDir("/my/remote/directory") console.log(await client.list()) await client.uploadFrom("temp/readme.txt", "readme.txt") // await client.downloadTo("README_COPY.md", "README_FTP.md") } catch(err) { console.log(err) } client.close() } But it gives me an error: Connected to xxx.xxx.xx.xxx:21 < 220 Service ready for new user. Login security: No encryption > USER [email protected] < 331 User name okay, need password for [email protected]. > PASS ### < 530 Box: Smartest Energy does not allow regular FTP; use FTPS instead. (Both " explicit" and "implicit" FTPS are supported.) { FTPError: 530 Box: Smartest Energy does not allow regular FTP; use FTPS instea d. (Both "explicit" and "implicit" FTPS are supported.) at FTPContext._onControlSocketData (D:\node\basicftp\node_modules\basic-ftp\ dist\FtpContext.js:276:39) at Socket.socket.on.data (D:\node\basicftp\node_modules\basic-ftp\dist\FtpCo ntext.js:121:44) at Socket.emit (events.js:198:13) at addChunk (_stream_readable.js:288:12) at readableAddChunk (_stream_readable.js:265:13) at Socket.Readable.push (_stream_readable.js:224:10) at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17) name : 'FTPError', code: 530 } Please help. Thanks in advance. A: You will need to connect using explicit FTPS over TLS. To connect to FTPS over TLS, pass the following options: const fs = require('fs'); async function example() { const client = new ftp.Client() client.ftp.verbose = true try { const secureOptions = { // Necessary only if the server requires client certificate authentication. key: fs.readFileSync('client-key.pem'), cert: fs.readFileSync('client-cert.pem'), // Necessary only if the server uses a self-signed certificate. ca: [ fs.readFileSync('server-cert.pem') ], // Necessary only if the server's cert isn't for "localhost". checkServerIdentity: () => { return null; }, }; await client.access({ host: "ftp.xxxx.xxxxx", user: "[email protected]", password: "xxxxxxx", secure :true, secureOptions : secureOptions }) await client.ensureDir("/my/remote/directory") console.log(await client.list()) await client.uploadFrom("temp/readme.txt", "readme.txt") // await client.downloadTo("README_COPY.md", "README_FTP.md") } catch(err) { console.log(err) } client.close() } A: After trying to get this working with basic-ftp, I just tried https://www.npmjs.com/package/ssh2-sftp-client and it worked immediately.
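Since the server banner says both explicit and implicit FTPS are accepted, a hedged alternative sketch is implicit FTPS, which basic-ftp selects with secure: "implicit"; the host and credentials below are placeholders:

const ftp = require("basic-ftp")

async function implicitExample() {
    const client = new ftp.Client()
    try {
        await client.access({
            host: "ftp.example.com",   // placeholder host
            port: 990,                 // conventional implicit-FTPS port
            user: "user@example.com",  // placeholder credentials
            password: "secret",
            secure: "implicit"         // TLS from the first byte, no AUTH TLS upgrade
        })
        console.log(await client.list())
    } catch (err) {
        console.log(err)
    }
    client.close()
}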
How to connect to FTPS server in node using basic-ftp module
I am using the basic-ftp package (https://www.npmjs.com/package/basic-ftp) to connect to an FTPS server. I have tried other options but failed to connect to the FTPS server. Below is my code: const ftp = require("basic-ftp") example(); async function example() { const client = new ftp.Client() client.ftp.verbose = true try { await client.access({ host: "ftp.xxxx.xxxxx", user: "[email protected]", password: "xxxxxxx", secure :true }) await client.ensureDir("/my/remote/directory") console.log(await client.list()) await client.uploadFrom("temp/readme.txt", "readme.txt") // await client.downloadTo("README_COPY.md", "README_FTP.md") } catch(err) { console.log(err) } client.close() } But it gives me an error: Connected to xxx.xxx.xx.xxx:21 < 220 Service ready for new user. Login security: No encryption > USER [email protected] < 331 User name okay, need password for [email protected]. > PASS ### < 530 Box: Smartest Energy does not allow regular FTP; use FTPS instead. (Both " explicit" and "implicit" FTPS are supported.) { FTPError: 530 Box: Smartest Energy does not allow regular FTP; use FTPS instea d. (Both "explicit" and "implicit" FTPS are supported.) at FTPContext._onControlSocketData (D:\node\basicftp\node_modules\basic-ftp\ dist\FtpContext.js:276:39) at Socket.socket.on.data (D:\node\basicftp\node_modules\basic-ftp\dist\FtpCo ntext.js:121:44) at Socket.emit (events.js:198:13) at addChunk (_stream_readable.js:288:12) at readableAddChunk (_stream_readable.js:265:13) at Socket.Readable.push (_stream_readable.js:224:10) at TCP.onStreamRead [as onread] (internal/stream_base_commons.js:94:17) name : 'FTPError', code: 530 } Please help. Thanks in advance.
[ "You will require to connect Explicit FTPS over TLS.\nto connect to ftps over tls you will need to pass the following options:\nconst fs = require('fs');\n\nasync function example() {\n const client = new ftp.Client()\n client.ftp.verbose = true\n try {\n const secureOptions = {\n // Necessary only if the server requires client certificate authentication.\n key: fs.readFileSync('client-key.pem'),\n cert: fs.readFileSync('client-cert.pem'),\n\n // Necessary only if the server uses a self-signed certificate.\n ca: [ fs.readFileSync('server-cert.pem') ],\n\n // Necessary only if the server's cert isn't for \"localhost\".\n checkServerIdentity: () => { return null; },\n};\n await client.access({\n host: \"ftp.xxxx.xxxxx\",\n user: \"[email protected]\",\n password: \"xxxxxxx\",\n secure :true,\nsecureOptions : secureOptions\n })\n await client.ensureDir(\"/my/remote/directory\")\n console.log(await client.list())\n await client.uploadFrom(\"temp/readme.txt\", \"readme.txt\")\n // await client.downloadTo(\"README_COPY.md\", \"README_FTP.md\")\n }\n catch(err) {\n console.log(err)\n }\n client.close()\n}\n\n", "After trying to get this working with basic-ftp, i just tried https://www.npmjs.com/package/ssh2-sftp-clientand it worked immediately.\n" ]
[ 4, 0 ]
[]
[]
[ "ftp_server", "ftps", "node.js" ]
stackoverflow_0059316477_ftp_server_ftps_node.js.txt
Q: Add an include path for a single file in a library I'm building a library for which one file requires an additional include path. Is there a way to adjust the include path for compilation of a single file? bld(features="cxx cxxshlib", source=[so, many, files, from an ant_glob], includes=[Some path that's really only needed for one interface file]) I'd be happy with a solution that is use based, too. A: I think most solutions will be more lines of code than just compiling your one file separately. A: You need to compile the specific file by using objects and then use the result. Something like this: def build(bld): # build the specific object bld.objects(source="foo.cpp", includes="path/to/directory", target="foo") # build the library and include that object file using 'use=' bld.stlib(source='bla.cpp blu.cpp', includes="this/path that/path", target='mylibrary', use='foo')
Add an include path for a single file in a library
I'm building a library for which one file requires an additional include path. Is there a way to adjust the include path for compilation of a single file? bld(features="cxx cxxshlib", source=[so, many, files, from an ant_glob], includes=[Some path that's really only needed for one interface file]) I'd be happy with a solution that is use based, too.
[ "I think most solutions will be more lines of code than just compiling your one file separately.\n", "You need to compile the specific file by using objects and then use the result.\nSomething like this:\ndef build(bld):\n    # build the specific object\n    bld.objects(source=\"foo.cpp\", includes=\"path/to/directory\", target=\"foo\")\n    # build the library and include that object file using 'use='\n    bld.stlib(source='bla.cpp blu.cpp', includes=\"this/path that/path\", target='mylibrary', use='foo') \n\n" ]
[ 0, 0 ]
[]
[]
[ "build", "c++", "waf" ]
stackoverflow_0073842261_build_c++_waf.txt
Q: Redis - Promoting a slave to master manually Suppose I have [Slave IP Address] which is the slave of [Master IP Address]. Now my master server has been shut down, and I need to set this slave to be master MANUALLY (WITHOUT using sentinel automatic failover, WITH redis command). Is it possible to do this without restarting the redis service? (and losing all the cached data) A: use SLAVEOF NO ONE to promote a slave to master http://redis.io/commands/slaveof A: it depends, if you are in a cluster you will be better using the fail over. You will need to use the force option in the command http://redis.io/commands/cluster-failover A: Is it possible to do this without restarting the redis service? (and losing all the cached data) yes that's possible, you can use SLAVEOF NO ONE (without sentinel) But it is recommended to use sentinel to avoid data loss. sentinel failover master-name (with sentinel) This will force the sentinel to switch master. The new master will have all the data that was synchronized before the old-master shutdown. Redis will automatically choose the best slave with max. data, that will reduce the amount of data we lose when switching master. A: Below 2 options in step 3 have helped me to recover the cluster once a master node is down, compute was replaced or other not recoverable state. 1 .- First you need to connect to the slave node, use redis-cli, here a link how to do that: How to connect to remote Redis server? 2 .- Once connected to the slave node run the command cluster nodes to validate the master node is in fail state, also run cluster info to see the overall state of your cluster (this is always a good idea) 3 .- Inside the slave node to be promoted run command: cluster failover, in rare cases when there are some serious issues with redis this command could fail, and you will need to use cluster failover force or cluster failover takeover, here more info about the implications of those options: https://redis.io/commands/cluster-failover 4 .- Run cluster forget $old_master_id on all your cluster nodes 5 .- Add a new node with cluster meet $new_node_IP $new_node_PORT 6 .- Subscribe your new node to your brand new master: log in to the new node and run cluster replicate $master_node_id Steps 1-3 are required for the slave-master promotion and 4-6 are required to leave the whole cluster in a healthy master-slave equilibrium. A: As of Redis version 5.0.0 the SLAVEOF command is regarded as deprecated. If a Redis server is already acting as replica, the command REPLICAOF NO ONE will turn off the replication, turning the Redis server into a MASTER.
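A compact sketch of the manual promotion with redis-cli; the host placeholders are assumptions, and REPLICAOF is the Redis 5.0+ spelling of SLAVEOF:

redis-cli -h <slave-ip> -p 6379 SLAVEOF NO ONE       # promote: stop replicating
redis-cli -h <slave-ip> -p 6379 INFO replication     # role: should now read master
redis-cli -h <other-replica-ip> -p 6379 REPLICAOF <new-master-ip> 6379   # repoint remaining replicas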
Redis - Promoting a slave to master manually
Suppose I have [Slave IP Address] which is the slave of [Master IP Address]. Now my master server has been shut down, and I need to set this slave to be master MANUALLY (WITHOUT using sentinel automatic failover, WITH redis command). Is it possible to do this without restarting the redis service? (and losing all the cached data)
[ "use SLAVEOF NO ONE to promote a slave to master\nhttp://redis.io/commands/slaveof\n", "it depends, if you are in a cluster you will be better using the fail over. You will need to use the force option in the command \nhttp://redis.io/commands/cluster-failover\n", "\nIs it possible to do this without restarting the redis service? (and\n losing all the cached data)\n\n\nyes that's possible, you can use\nSLAVEOF NO ONE (without sentinel)\nBut it is recommended to use sentinel to avoid data loss.\nsentinel failover master-name (with sentinel)\nThis will force the sentinel to switch master.\nThe new master will have all the data that was synchronized before the old-master shutdown.\nRedis will automatically choose the best slave with max. data, that will reduce the amount of data we lose when switching master.\n", "Below 2 options in step 3 have helped me to recover the cluster once a master node is down, compute was replaced or other not recoverable state.\n\n1 .- First you need to connect to the slave node, use redis-cli, here a link how to do that: How to connect to remote Redis server?\n2 .- Once connected to the slave node run the command cluster nodes to validate the master node is in fail state, also run cluster info to see the overall state of your cluster (this is always a good idea)\n3 .- Inside the slave node to be promoted run command: cluster failover,\nin rare cases when there are some serious issues with redis this\ncommand could fail, and you will need to use cluster failover force\nor cluster failover takeover, here more info about the implications\nof those options: https://redis.io/commands/cluster-failover\n4 .- Run cluster forget $old_master_id on all your cluster nodes\n5 .- Add a new node with cluster meet $new_node_IP $new_node_PORT\n6 .- Subscribe your new node to your brand new master: log in to the new node and run cluster replicate $master_node_id\n\nSteps 1-3 are required for the slave-master promotion and 4-6 are required to leave the whole cluster in a healthy master-slave equilibrium.\n", "\nAs of Redis version 5.0.0 the SLAVEOF command is regarded as deprecated.\n\nIf a Redis server is already acting as replica, the command REPLICAOF NO ONE will turn off the replication, turning the Redis server into a MASTER.\n" ]
[ 31, 2, 1, 1, 0 ]
[]
[]
[ "redis" ]
stackoverflow_0034155977_redis.txt
Q: VBA - Check if a sheet exists then import in my workbook else show an error message i'm having a bit of a headache with VBA which i haven't used since 2006. I have my destination excel file where I need to import 3 predefined sheets from another excel file of the user's choice. After selecting the source file to import I would like to perform a check, IF the "Cover" sheet exists THEN copy it to the target workbook ELSE print an error message in the excel file in order to have a log, once this is done I have to do the same check for the "Functional" and "Batch" sheets. Before inserting the IFs, I was able to import the sheets but I didn't have control over whether they existed or not, "Cover" is mandatory while "Functional" and "Batch" I need at least one of the two to be able to proceed with the next steps. Now I can check if the "Cover" sheet exists and import it ELSE I exit the Sub, after which I should check if the other sheets also exist and import them but I immediately get the "absent sheet" error. Below is the code I am getting stuck with: Sub Import() Application.ScreenUpdating = False Application.DisplayAlerts = False Dim TargetWorkbook As Workbook Dim SourceWorkbook As Workbook Dim OpenFileName Set TargetWorestBookkbook = ActiveWorkbook 'Select and Open Source workbook OpenFileName = Application.GetOpenFilename("Excel Files (*.xls*),*.xls*") If OpenFileName = False Then MsgBox "Nessun file Source selezionato. Impossibile procedere." Exit Sub End If On Error GoTo exit_ Set SourceWorkbook = Workbooks.Open(OpenFileName) 'Import sheets ' if the sheet doesn't exist an error will occur here If WorksheetExists("Cover e Legenda") Then SourceWorkbook.Sheets("Cover e Legenda").Copy _ after:=TargetWorkbook.Sheets(ThisWorkbook.Sheets.Count) Application.CutCopyMode = False SourceWorkbook.Close False Else MsgBox ("Cover assente. Impossibile proseguire.") Exit Sub End If If WorksheetExists("Test Funzionali") Then SourceWorkbook.Sheets("Test Funzionali").Copy _ after:=TargetWorkbook.Sheets(ThisWorkbook.Sheets.Count) Application.CutCopyMode = False SourceWorkbook.Close False Else MsgBox ("Test Funzionali assente.") End If If WorksheetExists("Test Batch") Then SourceWorkbook.Sheets("Test Batch").Copy _ after:=TargetWorkbook.Sheets(ThisWorkbook.Sheets.Count) Application.CutCopyMode = False SourceWorkbook.Close False Else MsgBox ("Test Batch assente.") End If 'Next Sheet Application.ScreenUpdating = True Application.DisplayAlerts = True SourceWorkbook.Close SaveChanges:=False MsgBox ("Importazione completata.") TargetWorkbook.Activate exit_: Application.ScreenUpdating = True Application.DisplayAlerts = True If Err Then MsgBox Err.Description, vbCritical, "Error" End Sub A: Best to check all of the sheets before importing any of them. Try something like this: Sub Import() Dim wbTarget As Workbook, wbSource As Workbook Dim OpenFileName, haveCover As Boolean, haveFunz As Boolean, haveTest As Boolean On Error GoTo haveError Set wbTarget = ActiveWorkbook 'Select and Open Source workbook OpenFileName = Application.GetOpenFilename("Excel Files (*.xls*),*.xls*") If OpenFileName = False Then MsgBox "Nessun file Source selezionato. Impossibile procedere." Exit Sub End If Set wbSource = Workbooks.Open(OpenFileName) 'check which sheets exist haveCover = WorksheetExists(wbSource, "Cover e Legenda") haveFunz = WorksheetExists(wbSource, "Test Funzionali") haveTest = WorksheetExists(wbSource, "Test Batch") If haveCover And (haveFunz Or haveTest) Then 'have the minumum required sheets? 
Application.ScreenUpdating = False Application.DisplayAlerts = False ImportSheet wbTarget, wbSource.Worksheets("Cover e Legenda") If haveFunz Then ImportSheet wbTarget, wbSource.Worksheets("Test Funzionali") If haveTest Then ImportSheet wbTarget, wbSource.Worksheets("Test Batch") Application.DisplayAlerts = True Else MsgBox "Required sheet(s) not found!", vbExclamation End If wbSource.Close SaveChanges:=False MsgBox "Importazione completata" wbTarget.Activate Exit Sub 'normal exit haveError: MsgBox Err.Description, vbCritical, "Error" Application.DisplayAlerts = True End Sub 'copy sheet `ws` to the end of `wbTarget` Sub ImportSheet(wbTarget As Workbook, ws As Worksheet) ws.Copy after:=wbTarget.Worksheets(wbTarget.Worksheets.Count) End Sub 'does sheet `wsName` exist in workbook `wb` ? Function WorksheetExists(wb As Workbook, wsName As String) As Boolean On Error Resume Next WorksheetExists = Not wb.Worksheets(wsName) Is Nothing End Function A: Import Mandatory and Optional Worksheets Sub ImportWorksheets() Dim Mandatory() As Variant: Mandatory = VBA.Array("Cover e Legenda") Dim Optionally() As Variant ' 'Optional' is a keyword Optionally = VBA.Array("Test Funzionali", "Test Batch") Dim twb As Workbook: Set twb = ThisWorkbook ' workbook containing this code ' Select and open the Source workbook. Dim OpenFilePath As Variant OpenFilePath = Application.GetOpenFilename("Excel Files (*.xls*),*.xls*") If OpenFilePath = False Then MsgBox "Nessun file Source selezionato. Impossibile procedere.", _ vbExclamation Exit Sub End If Dim swb As Workbook: Set swb = Workbooks.Open(OpenFilePath) ' Check if all the mandatory worksheets exist. Dim sws As Worksheet, n As Long For n = 0 To UBound(Mandatory) On Error Resume Next ' prevent error if worksheet doesn't exist Set sws = swb.Worksheets(Mandatory(n)) On Error GoTo 0 If sws Is Nothing Then 'swb.Close SaveChanges:=False MsgBox "The mandatory worksheet """ & Mandatory(n) _ & """ was not found in """ & swb.Name & """.", vbCritical Exit Sub Else Set sws = Nothing End If Next n ' Check if at least one of the optional worksheets exists. Dim oDict As Object: Set oDict = CreateObject("Scripting.Dictionary") oDict.CompareMode = vbTextCompare For n = 0 To UBound(Optionally) On Error Resume Next ' prevent error if worksheet doesn't exist Set sws = swb.Worksheets(Optionally(n)) On Error GoTo 0 If Not sws Is Nothing Then oDict(sws.Name) = Empty: Set sws = Nothing Next n If oDict.Count = 0 Then 'swb.Close SaveChanges:=False MsgBox "No optional worksheets found in """ & swb.Name & """.", _ vbCritical Exit Sub End If ' Import the worksheets and close the Source workbook. Application.ScreenUpdating = False For n = 0 To UBound(Mandatory) swb.Sheets(Mandatory(n)).Copy After:=twb.Sheets(twb.Sheets.Count) Next n Dim oKey As Variant For Each oKey In oDict.Keys swb.Sheets(oKey).Copy After:=twb.Sheets(twb.Sheets.Count) Next oKey swb.Close SaveChanges:=False Application.ScreenUpdating = True ' Inform. MsgBox "Imported Worksheets" & vbLf & vbLf _ & "Mandatory:" & vbLf & Join(Mandatory, vbLf) & vbLf & vbLf _ & "Optionally:" & vbLf & Join(oDict.Keys, vbLf), vbInformation End Sub
VBA - Check if a sheet exists then import in my workbook else show an error message
i'm having a bit of a headache with VBA which i haven't used since 2006. I have my destination excel file where I need to import 3 predefined sheets from another excel file of the user's choice. After selecting the source file to import I would like to perform a check, IF the "Cover" sheet exists THEN copy it to the target workbook ELSE print an error message in the excel file in order to have a log, once this is done I have to do the same check for the "Functional" and "Batch" sheets. Before inserting the IFs, I was able to import the sheets but I didn't have control over whether they existed or not, "Cover" is mandatory while "Functional" and "Batch" I need at least one of the two to be able to proceed with the next steps. Now I can check if the "Cover" sheet exists and import it ELSE I exit the Sub, after which I should check if the other sheets also exist and import them but I immediately get the "absent sheet" error. Below is the code I am getting stuck with: Sub Import() Application.ScreenUpdating = False Application.DisplayAlerts = False Dim TargetWorkbook As Workbook Dim SourceWorkbook As Workbook Dim OpenFileName Set TargetWorestBookkbook = ActiveWorkbook 'Select and Open Source workbook OpenFileName = Application.GetOpenFilename("Excel Files (*.xls*),*.xls*") If OpenFileName = False Then MsgBox "Nessun file Source selezionato. Impossibile procedere." Exit Sub End If On Error GoTo exit_ Set SourceWorkbook = Workbooks.Open(OpenFileName) 'Import sheets ' if the sheet doesn't exist an error will occur here If WorksheetExists("Cover e Legenda") Then SourceWorkbook.Sheets("Cover e Legenda").Copy _ after:=TargetWorkbook.Sheets(ThisWorkbook.Sheets.Count) Application.CutCopyMode = False SourceWorkbook.Close False Else MsgBox ("Cover assente. Impossibile proseguire.") Exit Sub End If If WorksheetExists("Test Funzionali") Then SourceWorkbook.Sheets("Test Funzionali").Copy _ after:=TargetWorkbook.Sheets(ThisWorkbook.Sheets.Count) Application.CutCopyMode = False SourceWorkbook.Close False Else MsgBox ("Test Funzionali assente.") End If If WorksheetExists("Test Batch") Then SourceWorkbook.Sheets("Test Batch").Copy _ after:=TargetWorkbook.Sheets(ThisWorkbook.Sheets.Count) Application.CutCopyMode = False SourceWorkbook.Close False Else MsgBox ("Test Batch assente.") End If 'Next Sheet Application.ScreenUpdating = True Application.DisplayAlerts = True SourceWorkbook.Close SaveChanges:=False MsgBox ("Importazione completata.") TargetWorkbook.Activate exit_: Application.ScreenUpdating = True Application.DisplayAlerts = True If Err Then MsgBox Err.Description, vbCritical, "Error" End Sub
[ "Best to check all of the sheets before importing any of them.\nTry something like this:\nSub Import()\n\n Dim wbTarget As Workbook, wbSource As Workbook\n Dim OpenFileName, haveCover As Boolean, haveFunz As Boolean, haveTest As Boolean\n\n On Error GoTo haveError\n \n Set wbTarget = ActiveWorkbook\n\n 'Select and Open Source workbook\n OpenFileName = Application.GetOpenFilename(\"Excel Files (*.xls*),*.xls*\")\n If OpenFileName = False Then\n MsgBox \"Nessun file Source selezionato. Impossibile procedere.\"\n Exit Sub\n End If\n \n Set wbSource = Workbooks.Open(OpenFileName)\n \n 'check which sheets exist\n haveCover = WorksheetExists(wbSource, \"Cover e Legenda\")\n haveFunz = WorksheetExists(wbSource, \"Test Funzionali\")\n haveTest = WorksheetExists(wbSource, \"Test Batch\")\n \n If haveCover And (haveFunz Or haveTest) Then 'have the minumum required sheets?\n Application.ScreenUpdating = False\n Application.DisplayAlerts = False\n ImportSheet wbTarget, wbSource.Worksheets(\"Cover e Legenda\")\n If haveFunz Then ImportSheet wbTarget, wbSource.Worksheets(\"Test Funzionali\")\n If haveTest Then ImportSheet wbTarget, wbSource.Worksheets(\"Test Batch\")\n Application.DisplayAlerts = True\n Else\n MsgBox \"Required sheet(s) not found!\", vbExclamation\n End If\n \n wbSource.Close SaveChanges:=False\n MsgBox \"Importazione completata\"\n wbTarget.Activate\n Exit Sub 'normal exit\n \nhaveError:\n MsgBox Err.Description, vbCritical, \"Error\"\n Application.DisplayAlerts = True\n \nEnd Sub\n\n'copy sheet `ws` to the end of `wbTarget`\nSub ImportSheet(wbTarget As Workbook, ws As Worksheet)\n ws.Copy after:=wbTarget.Worksheets(wbTarget.Worksheets.Count)\nEnd Sub\n\n'does sheet `wsName` exist in workbook `wb` ?\nFunction WorksheetExists(wb As Workbook, wsName As String) As Boolean\n On Error Resume Next\n WorksheetExists = Not wb.Worksheets(wsName) Is Nothing\nEnd Function\n\n", "Import Mandatory and Optional Worksheets\nSub ImportWorksheets()\n \n Dim Mandatory() As Variant: Mandatory = VBA.Array(\"Cover e Legenda\")\n Dim Optionally() As Variant ' 'Optional' is a keyword\n Optionally = VBA.Array(\"Test Funzionali\", \"Test Batch\")\n \n Dim twb As Workbook: Set twb = ThisWorkbook ' workbook containing this code\n \n ' Select and open the Source workbook.\n\n Dim OpenFilePath As Variant\n OpenFilePath = Application.GetOpenFilename(\"Excel Files (*.xls*),*.xls*\")\n\n If OpenFilePath = False Then\n MsgBox \"Nessun file Source selezionato. 
Impossibile procedere.\", _\n vbExclamation\n Exit Sub\n End If\n \n Dim swb As Workbook: Set swb = Workbooks.Open(OpenFilePath)\n \n ' Check if all the mandatory worksheets exist.\n \n Dim sws As Worksheet, n As Long\n \n For n = 0 To UBound(Mandatory)\n On Error Resume Next ' prevent error if worksheet doesn't exist\n Set sws = swb.Worksheets(Mandatory(n))\n On Error GoTo 0\n If sws Is Nothing Then\n 'swb.Close SaveChanges:=False\n MsgBox \"The mandatory worksheet \"\"\" & Mandatory(n) _\n & \"\"\" was not found in \"\"\" & swb.Name & \"\"\".\", vbCritical\n Exit Sub\n Else\n Set sws = Nothing\n End If\n Next n\n \n ' Check if at least one of the optional worksheets exists.\n \n Dim oDict As Object: Set oDict = CreateObject(\"Scripting.Dictionary\")\n oDict.CompareMode = vbTextCompare\n \n For n = 0 To UBound(Optionally)\n On Error Resume Next ' prevent error if worksheet doesn't exist\n Set sws = swb.Worksheets(Optionally(n))\n On Error GoTo 0\n If Not sws Is Nothing Then oDict(sws.Name) = Empty: Set sws = Nothing\n Next n\n \n If oDict.Count = 0 Then\n 'swb.Close SaveChanges:=False\n MsgBox \"No optional worksheets found in \"\"\" & swb.Name & \"\"\".\", _\n vbCritical\n Exit Sub\n End If\n \n ' Import the worksheets and close the Source workbook.\n \n Application.ScreenUpdating = False\n\n For n = 0 To UBound(Mandatory)\n swb.Sheets(Mandatory(n)).Copy After:=twb.Sheets(twb.Sheets.Count)\n Next n\n \n Dim oKey As Variant\n \n For Each oKey In oDict.Keys\n swb.Sheets(oKey).Copy After:=twb.Sheets(twb.Sheets.Count)\n Next oKey\n\n swb.Close SaveChanges:=False\n \n Application.ScreenUpdating = True\n \n ' Inform.\n \n MsgBox \"Imported Worksheets\" & vbLf & vbLf _\n & \"Mandatory:\" & vbLf & Join(Mandatory, vbLf) & vbLf & vbLf _\n & \"Optionally:\" & vbLf & Join(oDict.Keys, vbLf), vbInformation\n \nEnd Sub\n\n" ]
[ 0, 0 ]
[]
[]
[ "excel", "vba" ]
stackoverflow_0074659179_excel_vba.txt
Q: Creating a shared link for an item in a Sharepoint list returns BadRequest - resource not found Per documentation there's an option to create a sharing link for an item in a Sharepoint list - https://learn.microsoft.com/en-us/graph/api/listitem-createlink?view=graph-rest-beta&tabs=http I tested it with code and all I get are bad requests. endpoint = https://graph.microsoft.com/v1.0/sites/mysharepointsite/lists/mytestlist/items/1/createLink result: {'error': {'code': 'BadRequest', 'innerError': {'client-request-id': '64033979-ec68-493a-84e5-881c93870f5c', 'date': '2022-12-02T13:45:22', 'request-id': '64033979-ec68-493a-84e5-881c93870f5c'}, 'message': "Resource not found for the segment 'createLink'."}} Is the API broken on this particular topic? P.S. Removing the createLink option at the end returns the expected details of that specific item. A: The createLink endpoint is currently available only in the beta version, but you are calling the v1.0 version. Use beta in the URL: POST https://graph.microsoft.com/beta/sites/mysharepointsite/lists/mytestlist/items/1/createLink
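For illustration, a hedged sketch of the full request; per the linked beta documentation the payload carries a link type and scope, though the exact fields should be double-checked against the docs:

POST https://graph.microsoft.com/beta/sites/mysharepointsite/lists/mytestlist/items/1/createLink
Content-Type: application/json

{
  "type": "view",
  "scope": "organization"
}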
Creating a shared link for an item in a Sharepoint list returns BadRequest - resource not found
Per documentation there's an option to create a sharing link for an item in a Sharepoint list - https://learn.microsoft.com/en-us/graph/api/listitem-createlink?view=graph-rest-beta&tabs=http Tested it with code and all I get are bad requests endpoint = https://graph.microsoft.com/v1.0/sites/mysharepointsite/lists/mytestlist/items/1/createLink result: {'error': {'code': 'BadRequest', 'innerError': {'client-request-id': '64033979-ec68-493a-84e5-881c93870f5c', 'date': '2022-12-02T13:45:22', 'request-id': '64033979-ec68-493a-84e5-881c93870f5c'}, 'message': "Resource not found for the segment 'createLink'."}} Is the API broken on this particular topic? P.S. Removing the createLink option at the end returns the expected details of that specific item.
[ "The createLink endpoint is currently available only in the beta version, but you are calling the v1.0 version.\nUse beta in the URL\nPOST https://graph.microsoft.com/beta/sites/mysharepointsite/lists/mytestlist/items/1/createLink\n" ]
[ 0 ]
[]
[]
[ "microsoft_graph_api" ]
stackoverflow_0074656850_microsoft_graph_api.txt
Q: Mainframe JCL proc I have a doubt about how to handle the RC from a proc. Suppose //STEP1 EXEC PROC=BANKEMP How do I handle the return code from this step using IF/ELSE? Please provide the syntax. A: z/OS MVS JCL The IF / THEN / ELSE / ENDIF statements The z/OS JCL IF/THEN/ELSE statements are described in the "z/OS MVS JCL Reference", chapter IF/THEN/ELSE/ENDIF statement construct. These statements can be used to test the termination result of previously executed steps. From the point of view of z/OS MVS a step has successfully completed if: it has really been run it has not ABENDed, i.e. the program has properly terminated and has set a return code. Note that the return code value does not matter for this decision. If there was an ABEND, either forced by the system, i.e. the completion code is of form "Shhh" (where hhh is three hexadecimal digits), or issued by the program, i.e. the completion code is of form "Udddd" (where dddd is four decimal digits), the step is considered to have ended unsuccessfully. Testing the result of successful step execution Basically, you can test against the highest return code of all previous steps which have actually been run, or you can test against the specific return code of a single step that has actually been executed. Testing against steps which have not been run due to an IF statement always returns FALSE (the step is marked as "FLUSH" in the job log). If you code an IF statement without referring back to a step, or procedure step, you test against the highest return code so far: //IF01 IF RC EQ 5 THEN If you want to test the return code of a specific step, you code the step name, or the procedure step name and the step name: //IF01 IF STEP01.RC EQ 5 THEN or //IF01 IF STEP01.PROCST02.RC EQ 5 THEN Note: You cannot refer back to an EXEC PROC= statement, since that statement does not execute a program, but merely copies the JCL statements from the procedure into the current job. Only EXEC PGM= statements execute a program, and only programs set a return code. Testing against ABENDs or whether a Step was, or was not run You can use keywords ABEND, ¬ABEND, ABENDCC, RUN, or ¬RUN to test for those step ending states, accordingly. See the above manual for more details. Sample Procedure to Play with Return Codes Here is a sample JCL procedure you can use to play with return codes. Note that the first step creates a small REXX procedure that will be called by follow-on steps to set a specific return code.
Copy this procedure to a member in your JCL library, and name it RCTEST: //RCTEST PROC PROCRC01='0004',PROCRC02='0008' //* //*--------------------------------------------------------- //* //PROCST00 EXEC PGM=IEBGENER //SYSPRINT DD DUMMY //SYSIN DD DUMMY //SYSUT1 DD *,DLM=## /* REXX */ /* Test if PARM is a whole number between 0 and 4095 */ /* If true, set RC to that value, else set RC=12 */ if datatype( arg(1), "W" ) then do if arg(1) >= 0 & arg(1) < 4096 then RC = arg(1) else RC = 12 end else RC = 12 exit RC ## //SYSUT2 DD DISP=(NEW,PASS),DSN=&&REXX(SETCODE), // RECFM=FB,LRECL=80,SPACE=(1,(1,,1)),AVGREC=K //* //*--------------------------------------------------------- //* //PROCST01 EXEC PGM=IKJEFT1A,PARM='%SETCODE &PROCRC01' //SYSTSPRT DD DUMMY //SYSTSIN DD DUMMY //SYSEXEC DD DISP=(OLD,PASS),DSN=&&REXX //* //*--------------------------------------------------------- //* //PRIF01 IF PROCST01.RC GT 0 THEN //* //PROCST02 EXEC PGM=IKJEFT1A,PARM='%SETCODE &PROCRC02' //SYSTSPRT DD DUMMY //SYSTSIN DD DUMMY //SYSEXEC DD DISP=(OLD,PASS),DSN=&&REXX //* //PRIF01E ENDIF Code a job like this to call the procedure, and specify the return codes wanted via PROCRC01=, and PROCRC02= on the EXEC statement. Note that you need to adjust the JCLLIB with the name of your JCL library (the one you copied the procedure into). //jobname JOB .... //***************************************************/ //* // JCLLIB ORDER=your.jcl.library //* //***************************************************/ //* //STEP00 EXEC RCTEST,PROCRC01='0004',PROCRC02='0006' //* //IF01 IF STEP00.PROCST01.RC EQ 4 THEN //STEP01 EXEC PGM=IEFBR14 //IF01 ENDIF //* //IF02 IF RC EQ 4 THEN //STEP02 EXEC PGM=IEFBR14 //IF02 ENDIF //* //IF03 IF RC NE 0 THEN //STEP03 EXEC PGM=IEFBR14 //IF03 ENDIF //* //* //IF04 IF RC EQ 0 THEN //STEP04 EXEC PGM=IEFBR14 //IF04 ENDIF //* If you want to play with return codes outside of procedures, you can simply copy the first step from the procedure (PROCST01) to your job.
Mainframe JCL proc
I have a doubt about how to handle the RC from a proc. Suppose //STEP1 EXEC PROC=BANKEMP How do I handle the return code from this step using IF/ELSE? Please provide the syntax.
[ "z/OS MVS JCL\nThe IF / THEN / ELSE / ENDIF statements\nThe z/OS JCL IF/THEN/ELSE statements are described in the \"z/OS MVS JCL Reference\", chapter IF/THEN/ELSE/ENDIF statement construct. These statements can be used to test the termination result of previously executed steps.\nFrom the point of view of z/OS MVS a step has successfully completed if:\n\nit has really been run\nit has not ABENDed, i.e. the program has proberly terminated and has set a return code. Note that the return code value does not matter for this decision.\n\nIf there was an ABEND, either forced by the system, i.e. the completion code is of form \"Shhh\" (where hhh is three hexadecimal digits), or issued by the program, i.e. the completion code is of form \"Udddd\" (where dddd is four decimal digits), the step is considered to have ended unsuccessfully.\nTesting the result of successful step execution\nBasically, you can test against the highest return code of all previous steps which have actually been run, or you can test against the specific return code of a single step that has actually been executed. Testing against steps which have not been run due to an IF statement, always returns FALSE (the step is marked as \"FLUSH\" in the job log).\nIf you code an IF statement without referring back to a step, or procedure step, you test against the highest return code so far:\n//IF01 IF RC EQ 5 THEN\n\nIf you want to test the return code of a specific step, you code the step name, or the procedure step name and the step name:\n//IF01 IF STEP01.RC EQ 5 THEN\n\nor\n//IF01 IF STEP01.PROCST02.RC EQ 5 THEN\n\nNote: You cannot refer back to an EXEC PROC= statement, since that statement does not execute a program, but merely copied the JCL statements from the procedure into the curent job. Only EXEC PGM= statements execute a program, and only programs set a return code.\nTesting against ABENDs or whether a Step was, or was not run\nYou can use keywords ABEND, ¬ABEND, ABENDCC, RUN, or ¬RUN to test for those step ending states, accordingly. See above manual for more details.\nSample Prodecure to Play with Return Codes\nHere is a sample JCL procedure you can use to play with return codes. Note that the first step creates a small REXX procedure that will be called by follow on steps to set a specific return code. 
Copy this procedure to a member in your JCL library, and name it RCTEST:\n//RCTEST PROC PROCRC01='0004',PROCRC02='0008' \n//*\n//*--------------------------------------------------------- \n//* \n//PROCST00 EXEC PGM=IEBGENER \n//SYSPRINT DD DUMMY \n//SYSIN DD DUMMY \n//SYSUT1 DD *,DLM=## \n/* REXX */ \n \n/* Test if PARM is a whole number between 0 and 4095 */ \n/* If true, set RC to that value, else set RC=12 */ \nif datatype( arg(1), \"W\" ) \nthen do \n if arg(1) >= 0 & arg(1) < 4096 \n then RC = arg(1) \n else RC = 12 \n end \nelse RC = 12 \n \nexit RC \n## \n//SYSUT2 DD DISP=(NEW,PASS),DSN=&&REXX(SETCODE), \n// RECFM=FB,LRECL=80,SPACE=(1,(1,,1)),AVGREC=K \n//* \n//*--------------------------------------------------------- \n//* \n//PROCST01 EXEC PGM=IKJEFT1A,PARM='%SETCODE &PROCRC01' \n//SYSTSPRT DD DUMMY \n//SYSTSIN DD DUMMY \n//SYSEXEC DD DISP=(OLD,PASS),DSN=&&REXX \n//* \n//*--------------------------------------------------------- \n//* \n//PRIF01 IF PROCST01.RC GT 0 THEN \n//* \n//PROCST02 EXEC PGM=IKJEFT1A,PARM='%SETCODE &PROCRC02' \n//SYSTSPRT DD DUMMY \n//SYSTSIN DD DUMMY \n//SYSEXEC DD DISP=(OLD,PASS),DSN=&&REXX \n//* \n//PRIF01E ENDIF \n\nCode a job like this to call the procedure, and specify the return codes wanted via PROCRC01=, and PROCRC02= on the EXEC statement. Note that you need to adjust the JCLLIB with the name of your JCL library (the one you copied the procedure into).\n\n//jobname JOB ....\n//***************************************************/ \n//* \n// JCLLIB ORDER=your.jcl.library\n//* \n//***************************************************/ \n//* \n//STEP00 EXEC RCTEST,PROCRC01='0004',PROCRC02='0006' \n//* \n//IF01 IF STEP00.PROCST01.RC EQ 4 THEN \n//STEP01 EXEC PGM=IEFBR14 \n//IF01 ENDIF \n//* \n//IF02 IF RC EQ 4 THEN \n//STEP02 EXEC PGM=IEFBR14 \n//IF02 ENDIF \n//* \n//IF03 IF RC NE 0 THEN \n//STEP03 EXEC PGM=IEFBR14 \n//IF03 ENDIF \n//* \n//* \n//IF04 IF RC EQ 0 THEN \n//STEP04 EXEC PGM=IEFBR14 \n//IF04 ENDIF \n//* \n\nIf you want to play with return codes outside of procedures, you can simply copy the first step from the procedure (PROCST01) to your job.\n" ]
[ 1 ]
[]
[]
[ "jcl", "mainframe" ]
stackoverflow_0074660904_jcl_mainframe.txt
Q: How to fix while loop? I'm trying to learn PHP. I thought of doing a simple app to divide a text for Twitter threads. For simplicity, I want to divide the text at a space closest to 20 characters. I tried using strpos() with offset like this: $str = "The main application of de Sitter space is its use \in general relativity, where it serves as one of the simplest mathematical models of the universe."; while ($pos < 20) { $findme = ' '; $pos = strpos($str, $findme); $posOffset = $pos+1; $pos = strpos($str, $findme, $posOffset); } But this goes into an infinite loop. What am I doing wrong? A: The issue with your original implementation is that you always reset the value of $pos to the first match with the first call to strpos(). So from 8 to 3 with each new iteration of your loop. So you actually never search beyond the second blank in the string. Which of course means that the loop will never terminate, since you always search in the same first 9 characters of the input string. This is probably what you are looking for: <?php $input = "The main application of de Sitter space is its use \in general relativity, where it serves as one of the simplest mathematical models of the universe."; $output=[]; do { $pos = 0; do { // note: we add a trailing blank to the end of the search subject to guarantee a match $pos = strpos($input . ' ', ' ', $pos + 1); } while ($pos < 20); $output[] = substr($input, 0, $pos); $input = substr($input, $pos + 1); } while (strlen($input) > 0); print_r($output); The output is: Array ( [0] => The main application [1] => of de Sitter space is [2] => its use \in general relativity, [3] => where it serves as one [4] => of the simplest mathematical [5] => models of the universe. ) Note however that searching for ' ' (the space character) is not really a robust strategy. Instead you should search for any white space character there is. Which includes tab characters for example, something your approach would be blind towards.
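As a hedged alternative to scanning by hand, PHP's built-in wordwrap() already breaks text at spaces near a target width; note it breaks at the last space before the limit rather than the first one after it:

// Split $str into chunks of at most 20 characters without cutting words.
$chunks = explode("\n", wordwrap($str, 20, "\n", false));
print_r($chunks);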
How to fix while loop?
I'm trying to learn PHP. I thought of doing a simple app to divide a text for Twitter threads. For simplicity, I want to divide the text at a space closest to 20 characters. I tried using strpos() with offset like this: $str = "The main application of de Sitter space is its use \in general relativity, where it serves as one of the simplest mathematical models of the universe."; while ($pos < 20) { $findme = ' '; $pos = strpos($str, $findme); $posOffset = $pos+1; $pos = strpos($str, $findme, $posOffset); } But this goes into an infinite loop. What am I doing wrong?
[ "The issue with your original implementation is that you always reset the value of $pos to the first match with the first call to strpos(). So from 8 to 3 with each new iteration of your loop. So you actually never search beyond the second blank in the string. Which of course means that the loop will never terminate, since you always search in the same first 9 characters of the input string.\n\nThat probably is what you are looking for:\n<?php\n$input = \"The main application of de Sitter space is its use \\in general relativity, where it serves as one of the simplest mathematical models of the universe.\";\n$output=[];\n\ndo {\n $pos = 0;\n do {\n // note: we add a trailing blank to the end of the search subject to guarantee a match\n $pos = strpos($input . ' ', ' ', $pos + 1);\n } while ($pos < 20);\n $output[] = substr($input, 0, $pos);\n $input = substr($input, $pos + 1);\n} while (strlen($input) > 0);\n\nprint_r($output);\n\nThe output is:\nArray\n(\n [0] => The main application\n [1] => of de Sitter space is\n [2] => its use \\in general relativity,\n [3] => where it serves as one\n [4] => of the simplest mathematical\n [5] => models of the universe.\n)\n\n\nNote however that searching for ' ' (the space character) is not really a robust strategy. Instead you should search for any white space character there is. Which includes tab characters for example, something your approach would be blind towards.\n" ]
[ 2 ]
[]
[]
[ "php" ]
stackoverflow_0074667109_php.txt
Q: How to add overlay in haishinkit streaming library android? I am using the haishinkit android rtmp library for my live streaming app. I need to create an overlay (scoreboard). How do I achieve this? This library has a cameraView interface, which has a method named createInputSurface() that takes 4 parameters: 1. int width, 2. int height, 3. int filter (PIXELFORMAT), 4. a lambda function (with Surface as its argument). A: Haishinkit streaming library does not currently have a feature to add an overlay. However, you may be able to use a third-party library to add an overlay. For example, you can use the Streamaxia OpenSDK library to add an overlay to your Haishinkit stream. Here is an example using Streamaxia OpenSDK to add an overlay to a Haishinkit stream: Start the Haishinkit stream. Initialize the Streamaxia OpenSDK. Add the overlay to the stream with the Streamaxia OpenSDK. Start the stream with the Streamaxia OpenSDK. Stop the stream with the Streamaxia OpenSDK when the stream has finished. here is a basic example of how you can use Streamaxia OpenSDK to add an overlay to a Haishinkit stream: // Initialize Streamaxia OpenSDK StreamaxiaOpenSDK streamaxia = new StreamaxiaOpenSDK(); // Start the Haishinkit stream streamaxia.startStreaming(haishinkitStreamURL); // Add overlay streamaxia.addOverlay(overlayImage); // Start the stream streamaxia.startStreaming(); // Stop the stream when you are done streamaxia.stopStreaming();
How to add overlay in haishinkit streaming library android?
I am using the haishinkit android rtmp library for my live streaming app. I need to create an overlay (scoreboard). How do I achieve this? This library has a cameraView interface, which has a method named createInputSurface() that takes 4 parameters: 1. int width, 2. int height, 3. int filter (PIXELFORMAT), 4. a lambda function (with Surface as its argument).
[ "Haishinkit streaming library does not currently have a feature to add an overlay. However, you may be able to use a third-party library to add an overlay. For example, you can use the Streamaxia OpenSDK library to add an overlay to your Haishinkit stream.\nHere is an example using Streamaxia OpenSDK to add an overlay to a Haishinkit stream:\n\nStart the Haishinkit stream.\nInitialize the Streamaxia OpenSDK.\nAdd the overlay to the stream with the Streamaxia OpenSDK.\nStart the stream with the Streamaxia OpenSDK.\nStop the stream with the Streamaxia OpenSDK when the stream has finished.\n\nhere is a basic example of how you can use Streamaxia OpenSDK to add an overlay to a Haishinkit stream:\n// Initialize Streamaxia OpenSDK\nStreamaxiaOpenSDK streamaxia = new StreamaxiaOpenSDK();\n\n// Start the Haishinkit stream\nstreamaxia.startStreaming(haishinkitStreamURL);\n\n// Add overlay\nstreamaxia.addOverlay(overlayImage);\n\n// Start the stream\nstreamaxia.startStreaming();\n\n// Stop the stream when you are done\nstreamaxia.stopStreaming();\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_layout", "android_studio", "kotlin", "live_streaming" ]
stackoverflow_0074667164_android_android_layout_android_studio_kotlin_live_streaming.txt
Q: Access ILambdaContext in Lambda Entrypoint for dotnet serverless API

Trying to get the ILambdaContext object - example and use case below. I am using dotnet 6

public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    internal static ILambdaContext Context;

    public override async Task<APIGatewayProxyResponse> FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext)
    {
        Context = lambdaContext;
        return await base.FunctionHandlerAsync(request, lambdaContext);
    }

    protected override void Init(IWebHostBuilder builder)
    {
        var variables = JsonConvert.SerializeObject(Context);
        //var variables = JsonConvert.Serliaze
        throw new Exception($"{variables}");
        var environment = "Beta"; // arr[arr.Length - 1];
        //builder.UseStartup<Startup>();
        builder.ConfigureAppConfiguration((c, b) =>
        {
            b.AddJsonFile("appsettings.json");
            b.AddSystemsManager((source) =>
            {
                var awsOptions = new AWSOptions();
                awsOptions.Region = RegionEndpoint.EUWest1;
                source.Path = $"/common";
                source.AwsOptions = awsOptions;
                source.ReloadAfter = TimeSpan.FromMinutes(5);
            });
            b.AddSystemsManager((source) =>
            {
                var awsOptions = new AWSOptions();
                awsOptions.Region = RegionEndpoint.EUWest1;
                source.Path = $"/{environment}";
                source.AwsOptions = awsOptions;
                source.ReloadAfter = TimeSpan.FromMinutes(5);
            });
        }).UseStartup<Startup>();
    }
}

I've used an example from here to try to override the FunctionHandlerAsync entrypoint, but the Lambda context is null. I have also tried many other paths, all of which failed. My goal is to get the alias from the lambda context to use as an environment configuration. I've read most of the internet and I am still unable to get this right.

A: I have just tested this and your example works fine.
This was a .Net 6 "AWS Serverless Application" using "ASP.NET Core Web App".
The function handler override works and I can access the context.

A: You could make use of:

ConcurrentDictionary<Guid, ILambdaContext>

To which you'll add in your lambda handler wrapper, and remove when the scope is over.

// MyLambdaWrapper.cs
using Amazon.Lambda.Core;
using Microsoft.Extensions.DependencyInjection;
using System.Collections.Concurrent;

namespace MyNamespace;

public class MyLambdaWrapper
{
    private readonly ConcurrentDictionary<Guid, ILambdaContext> _contexts = new();

    public void ConfigureServices(IServiceCollection services)
    {
        services.AddScoped(_ => new ScopeContextIdentifier(Guid.NewGuid()));
        services.AddScoped<ILambdaContext>(serviceProvider =>
        {
            var scopeContextIdentifier =
                serviceProvider.GetRequiredService<ScopeContextIdentifier>();
            var contexts =
                serviceProvider
                    .GetRequiredService<ConcurrentDictionary<Guid, ILambdaContext>>();
            var injected = contexts[scopeContextIdentifier.Guid];
            return injected;
        });
        services
            .AddSingleton<ConcurrentDictionary<Guid, ILambdaContext>>(_contexts);
    }

    public async Task Run(IServiceProvider serviceProvider, App app, Type handlerType)
    {
        Func<Stream, ILambdaContext, Task<Stream?>> func =
            async (inputStream, lambdaContext) =>
            {
                // ...
                using var scope = serviceProvider.CreateScope();
                using var _ =
                    new ScopeContextManager(scope, lambdaContext, _contexts);
                // ...
            };
        // ...
    }
}

and the manager:

// ScopeContextManager.cs
using Amazon.Lambda.Core;
using Microsoft.Extensions.DependencyInjection;
using System.Collections.Concurrent;

namespace MyNamespace;

internal sealed record ScopeContextIdentifier(Guid Guid);

internal sealed class ScopeContextManager : IDisposable
{
    private readonly ConcurrentDictionary<Guid, ILambdaContext> _contexts;
    private readonly ScopeContextIdentifier _internalScopeContext;

    public ScopeContextManager(IServiceScope serviceScope, ILambdaContext lambdaContext, ConcurrentDictionary<Guid, ILambdaContext> ctx)
    {
        _internalScopeContext = serviceScope.ServiceProvider.GetRequiredService<ScopeContextIdentifier>();
        _contexts = ctx;
        _contexts.TryAdd(_internalScopeContext.Guid, lambdaContext);
    }

    public void Dispose()
    {
        _contexts.Remove(_internalScopeContext.Guid, out _);
    }
}
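As a minimal sketch of how the wrapper above would be consumed once the scoped ILambdaContext is registered (the AliasReader class name is hypothetical; the ARN parsing assumes the standard Lambda ARN layout where a qualifier, if present, is the eighth colon-separated segment):

// Hypothetical consumer: any scoped service can now take ILambdaContext
// via constructor injection and read the alias the question is after.
public class AliasReader
{
    private readonly ILambdaContext _context;

    public AliasReader(ILambdaContext context) => _context = context;

    // InvokedFunctionArn carries the alias/version suffix when one is used,
    // e.g. "arn:aws:lambda:eu-west-1:123456789012:function:my-func:Beta".
    public string? GetAlias()
    {
        var parts = _context.InvokedFunctionArn.Split(':');
        return parts.Length > 7 ? parts[7] : null;
    }
}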
Access ILambdaContext in Lambda Entrypoint for dotnet serverless API
Trying to get the ILambdaContext object - example and use case below. I am using dotnet 6

public class LambdaEntryPoint : Amazon.Lambda.AspNetCoreServer.APIGatewayProxyFunction
{
    internal static ILambdaContext Context;

    public override async Task<APIGatewayProxyResponse> FunctionHandlerAsync(APIGatewayProxyRequest request, ILambdaContext lambdaContext)
    {
        Context = lambdaContext;
        return await base.FunctionHandlerAsync(request, lambdaContext);
    }

    protected override void Init(IWebHostBuilder builder)
    {
        var variables = JsonConvert.SerializeObject(Context);
        //var variables = JsonConvert.Serliaze
        throw new Exception($"{variables}");
        var environment = "Beta"; // arr[arr.Length - 1];
        //builder.UseStartup<Startup>();
        builder.ConfigureAppConfiguration((c, b) =>
        {
            b.AddJsonFile("appsettings.json");
            b.AddSystemsManager((source) =>
            {
                var awsOptions = new AWSOptions();
                awsOptions.Region = RegionEndpoint.EUWest1;
                source.Path = $"/common";
                source.AwsOptions = awsOptions;
                source.ReloadAfter = TimeSpan.FromMinutes(5);
            });
            b.AddSystemsManager((source) =>
            {
                var awsOptions = new AWSOptions();
                awsOptions.Region = RegionEndpoint.EUWest1;
                source.Path = $"/{environment}";
                source.AwsOptions = awsOptions;
                source.ReloadAfter = TimeSpan.FromMinutes(5);
            });
        }).UseStartup<Startup>();
    }
}

I've used an example from here to try to override the FunctionHandlerAsync entrypoint, but the Lambda context is null. I have also tried many other paths, all of which failed. My goal is to get the alias from the lambda context to use as an environment configuration. I've read most of the internet and I am still unable to get this right.
[ "I have just tested this and your example works fine.\nThis was a .Net 6 \"AWS Serverless Application\" using \"ASP.NET Core Web App\".\nThe function handler override works and I can access the context.\n", "You could make use of:\nConcurrentDictionary<Guid, ILambdaContext>\n\nTo which you'll add in your lambda handler wrapper, and remove when the scope is over.\n// MyLambdaWrapper.cs\nusing Amazon.Lambda.Core;\nusing Microsoft.Extensions.DependencyInjection;\nusing System.Collections.Concurrent;\n\nnamespace MyNamespace;\n\npublic class MyLambdaWrapper\n{\n private readonly ConcurrentDictionary<Guid, ILambdaContext> _contexts = new();\n\n public void ConfigureServices(IServiceCollection services)\n {\n services.AddScoped(_ => new ScopeContextIdentifier(Guid.NewGuid()));\n services.AddScoped<ILambdaContext>(serviceProvider =>\n {\n var scopeContextIdentifier =\n serviceProvider.GetRequiredService<ScopeContextIdentifier>();\n var contexts =\n serviceProvider\n .GetRequiredService<ConcurrentDictionary<Guid, ILambdaContext>>();\n var injected = contexts[scopeContextIdentifier.Guid];\n return injected;\n });\n services\n .AddSingleton<ConcurrentDictionary<Guid, ILambdaContext>>(_contexts);\n }\n\n public async Task Run(IServiceProvider serviceProvider, App app, Type handlerType)\n {\n Func<Stream, ILambdaContext, Task<Stream?>> func =\n async (inputStream, lambdaContext) =>\n {\n // ...\n using var scope = serviceProvider.CreateScope();\n using var _ =\n new ScopeContextManager(scope, lambdaContext, _contexts);\n // ...\n };\n // ...\n }\n}\n\nand the manager:\n// ScopeContextManager.cs\nusing Amazon.Lambda.Core;\nusing Microsoft.Extensions.DependencyInjection;\nusing System.Collections.Concurrent;\n\n\nnamespace MyNamespace;\n\ninternal sealed record ScopeContextIdentifier(Guid Guid);\n\ninternal sealed class ScopeContextManager : IDisposable\n{\n private readonly ConcurrentDictionary<Guid, ILambdaContext> _contexts;\n private readonly ScopeContextIdentifier _internalScopeContext;\n\n public ScopeContextManager(IServiceScope serviceScope, ILambdaContext lambdaContext, ConcurrentDictionary<Guid, ILambdaContext> ctx)\n {\n _internalScopeContext = serviceScope.ServiceProvider.GetRequiredService<ScopeContextIdentifier>();\n _contexts = ctx;\n _contexts.TryAdd(_internalScopeContext.Guid, lambdaContext);\n }\n\n public void Dispose()\n {\n _contexts.Remove(_internalScopeContext.Guid, out _);\n }\n}\n\n" ]
[ 0, 0 ]
[ "Closing this one off, currently not possible to get context\n" ]
[ -1 ]
[ ".net", "aws_lambda", "c#" ]
stackoverflow_0072069469_.net_aws_lambda_c#.txt
Q: How to create Android App by using VueJs SPA (created in Laravel)?

I am using VueJS in Laravel for frontend development and it's working properly. Can I use the same VueJS frontend code to create an Android app? If yes, then what will be the steps? I do not know how to use Native in a Laravel VueJS SPA to create an Android app.

A: Simplest way would be to migrate your app towards Nuxt and use the following Ionic module: https://github.com/nuxt-modules/ionic
That way, you'll be able to get SSG/SSR and also deploy it on Android thanks to the WebView wrapping Ionic provides.
Laravel is not really part of the plan tho. You probably need to check if you can extract the Vue part or see how to use Ionic with Laravel.
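For what it's worth, the Nuxt side of that suggestion is a small amount of setup. A rough sketch, assuming the @nuxtjs/ionic package name from the linked repository (check its README for the current install steps, which may change between versions):

npm install --save-dev @nuxtjs/ionic

// nuxt.config.ts
export default defineNuxtConfig({
  modules: ['@nuxtjs/ionic'],
})

The Android packaging itself is then handled by Ionic's Capacitor tooling (e.g. npx cap add android) rather than by anything in Laravel.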
How to create Android App by using VueJs SPA (created in Laravel)?
I am using VueJS in Laravel for frontend development and it's working properly. Can I use the same VueJS frontend code to create an Android app? If yes, then what will be the steps? I do not know how to use Native in a Laravel VueJS SPA to create an Android app.
[ "Simplest way would be to migrate your app towards Nuxt and use the following Ionic module: https://github.com/nuxt-modules/ionic\nThat way, you'll be able to get SSG/SSR and also deploy it on Android thanks to the Webview wrapping thanks to Ionic.\nLaravel is not really part of the plan tho. You probably need to check if you can extract the Vue part or see how to use Ionic with Laravel.\n" ]
[ 0 ]
[]
[]
[ "laravel_8", "vue.js", "vue_native" ]
stackoverflow_0074666631_laravel_8_vue.js_vue_native.txt
Q: Get GCP (managed) instance group's status

How is the Status shown here for a GCP managed instance group derived? I don't see any such field in the API. The console shows values like Ready and Updating, while the API only has status.isStable.

A: Along with CurrentActions mentioned by John Hanley, you can also refer to this Official Link to check the status of instances in a Managed Instance Group, and to this Official Link to verify the status at the MIG group level.
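A quick way to see the raw fields the console derives its Status column from (group and zone names below are placeholders; field names follow the instanceGroupManagers API):

# Group-level stability flag (status.isStable in the API)
gcloud compute instance-groups managed describe my-mig \
    --zone=us-central1-a --format="value(status.isStable)"

# Per-instance actions (NONE, CREATING, RECREATING, ...) that feed the UI
gcloud compute instance-groups managed list-instances my-mig \
    --zone=us-central1-a \
    --format="table(instance, instanceStatus, currentAction)"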
Get GCP (managed) instance group's status
How is the Status shown here for a GCP managed instance group derived? I don't see any such field in the API. The console shows values like Ready and Updating, while the API only has status.isStable.
[ "Along with CurrentActions mentioned by John Hanley, You can also Refer this Official Link to check the status of instances in a Managed Instance Group and Refer this Official Link to verify the status of MIG group level.\n" ]
[ 0 ]
[]
[]
[ "autoscaling", "google_cloud_platform" ]
stackoverflow_0074650123_autoscaling_google_cloud_platform.txt
Q: Failure to send data from client to server in ESP-01 WiFi

Using the ESP8266WiFi library, I have two ESP-01's/ESP8266's connected over WiFi. It works perfectly when the client sends a request (all non-HTML!) to the server (using port 5000 - to prevent any confusion with HTTP, FTP etc.). But I cannot get the client to receive an answer back from the server.

Now, in the ESP8266WiFi library (3.0.2) there is a note that server.write() is not implemented, and that I should use server.accept() instead of server.available(); though I did not see any applicable examples using server.accept(), but I see many examples using client.print() so I try to follow those - to no avail, yet.

What I am doing is the following:

1. Establish connectivity to the WiFi.
2. Have the client connect to the server and send two bytes to the server.
3. Do a digital write to a pin of the server ESP8266. (This toggles a relay, which works fine.)
4. Write back from server to client that the digital write has been done.

On the client side, after writing to the server, I run in a loop for some 10 seconds trying to receive something from the server, which never comes. Then I cycle back to the beginning, and the client asks to toggle the relay again - this runs nicely for hours.

Any insights here on what I should do differently are highly appreciated. I really want to be able to get some acknowledgement back to the client once the server has toggled the relay. Or if someone has a working example with server.accept() - I would try that too.

Client side code:

int pin_value;
uint8_t ip[4];

void setup() {
  Serial.begin(115200);
  ip[0] = 10; ip[1] = 0; ip[2] = 0; ip[3] = 6;

  // We connect to the WiFi network
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);

  // Wait until connected
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.print("Client - ");
  Serial.println("WiFi connected");
}

void loop() {
  // Variable that we will use to connect to the server
  WiFiClient client;

  // if not able to connect, return.
  if (!client.connect(ip, SERVER_PORT)) {
    return;
  }

  // We create a buffer to put the send data
  uint8_t buffer[Protocol::BUFFER_SIZE];

  // We put the pin number in the buffer
  // whose state we want to send
  buffer[Protocol::PIN] = RELAY;

  // put the current state of the pin in the send buffer
  buffer[Protocol::VALUE] = pin_value;

  // We send the data to the server
  client.write(buffer, Protocol::BUFFER_SIZE);

  // try to read the answer from the server for about 10 seconds
  int nr_of_tries = 10000;
  while (client.connected() && nr_of_tries > 0) {
    if (client.available()) {
      String line = client.readStringUntil('\n');
      nr_of_tries = 0;
      Serial.print("line= ");
      Serial.println(line);
    } else {
      delay(1);
      nr_of_tries = nr_of_tries - 1;
    }
  }
  Serial.print("nr of tries= ");
  Serial.println(nr_of_tries);
  Serial.print("connected: ");
  Serial.println(client.connected());
  client.flush();
  client.stop();
  Serial.println(" change sent");
  if (pin_value == 0) {
    pin_value = 1;
    Serial.println("Pin_value set to 1");
  } else {
    pin_value = 0;
    Serial.println("Pin_value set to 0");
  }
  delay(10000);
}

Server side code:

WiFiServer server(SERVER_PORT);

void setup() {
  Serial.begin(115200); // must have the same baud rate as the serial monitor
  pinMode(RELAY, OUTPUT);
  digitalWrite(RELAY, LOW);

  // Connect to the WiFi network
  Serial.println();
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("Server - ");
  Serial.println("WiFi connected");

  // Set this ESP to behave as a WiFi Access Point
  // WiFi.mode(WIFI_AP);
  // set SSID and Password to connect to this ESP
  // WiFi.softAP(SSID, PASSWORD);

  // Start the server
  server.begin();
  Serial.println("Server started");

  // Output of the IP address
  Serial.print("Use this IP to connect: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  // Check if there is any client connecting
  WiFiClient client = server.available();
  if (client) {
    // Serial.println("Client detected");

    // If the client has data he wants to send us,
    // check for a second or so as transmission can take time
    int nr_of_tries = 1000;
    while (!client.available() && nr_of_tries > 0) {
      nr_of_tries = nr_of_tries - 1;
      delay(1);
    }
    if (client.available()) {
      // Serial.println(" Client data");

      // create a buffer to put the data to be received
      uint8_t buffer[Protocol::BUFFER_SIZE];

      // We put the data sent by the client in the buffer
      // but do not read more than the buffer length.
      int len = client.read(buffer, Protocol::BUFFER_SIZE);

      // retrieve which pin number the client sent
      int pinNumber = buffer[Protocol::PIN];
      Serial.print("Pin Number: ");
      Serial.println(pinNumber);

      // retrieve the value of this pin
      int value = buffer[Protocol::VALUE];
      Serial.print("Value: ");
      Serial.println(value);

      // Set the pin indicated by the received pin number in output mode,
      // but only if the pin is the GPIO0 pin!
      if (pinNumber == RELAY) {
        pinMode(pinNumber, OUTPUT);
        // Set the pin indicated by the received pin number to the passed value
        digitalWrite(pinNumber, value);
      }

      // tell the client that the relay has been set or reset.
      size_t i;
      if (value == 0) {
        i = server.println("Set");
        Serial.print("i= ");
        Serial.println(i);
      } else {
        i = server.println("Reset");
        Serial.print("i= ");
        Serial.println(i);
      }
    }
  }
  // Close the connection with the client
  // client.stop();
}

Common definitions:

#include <ESP8266WiFi.h>

const char* ssid = "blablabla";
const char* password = "blublublu";

#define SERVER_PORT 5000
#define RELAY 0

// Protocol that the Server and Client will use to communicate
enum Protocol {
  PIN,         // Pin whose state you want to change
  VALUE,       // State to which the pin should go (HIGH = 1 or LOW = 0)
  BUFFER_SIZE  // The size of our protocol. IMPORTANT: always leave it as the last item of the enum
};

A: Solved! By changing server.println("Set"); into client.println("Set") and doing the same for the transmission of "Reset" a few lines lower in the server side code it works!
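In code, the fix described above amounts to replacing the two server.println calls in the server's reply section with writes to the connected client (a sketch of just the corrected lines):

// Reply to the connected WiFiClient, not the WiFiServer:
// WiFiServer::println() is the call that is not implemented here.
size_t i;
if (value == 0) {
  i = client.println("Set");
} else {
  i = client.println("Reset");
}
Serial.print("i= "); Serial.println(i);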
Failure to send data from client to server in ESP-01 WiFi
Using the ESP8266WiFi library, I have two ESP-01's/ESP8266's connected over WiFi. It works perfectly when the client sends a request (all non-HTML!) to the server (using port 5000 - to prevent any confusion with HTTP, FTP etc.). But I cannot get the client to receive an answer back from the server.

Now, in the ESP8266WiFi library (3.0.2) there is a note that server.write() is not implemented, and that I should use server.accept() instead of server.available(); though I did not see any applicable examples using server.accept(), but I see many examples using client.print() so I try to follow those - to no avail, yet.

What I am doing is the following:

1. Establish connectivity to the WiFi.
2. Have the client connect to the server and send two bytes to the server.
3. Do a digital write to a pin of the server ESP8266. (This toggles a relay, which works fine.)
4. Write back from server to client that the digital write has been done.

On the client side, after writing to the server, I run in a loop for some 10 seconds trying to receive something from the server, which never comes. Then I cycle back to the beginning, and the client asks to toggle the relay again - this runs nicely for hours.

Any insights here on what I should do differently are highly appreciated. I really want to be able to get some acknowledgement back to the client once the server has toggled the relay. Or if someone has a working example with server.accept() - I would try that too.

Client side code:

int pin_value;
uint8_t ip[4];

void setup() {
  Serial.begin(115200);
  ip[0] = 10; ip[1] = 0; ip[2] = 0; ip[3] = 6;

  // We connect to the WiFi network
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);

  // Wait until connected
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.print("Client - ");
  Serial.println("WiFi connected");
}

void loop() {
  // Variable that we will use to connect to the server
  WiFiClient client;

  // if not able to connect, return.
  if (!client.connect(ip, SERVER_PORT)) {
    return;
  }

  // We create a buffer to put the send data
  uint8_t buffer[Protocol::BUFFER_SIZE];

  // We put the pin number in the buffer
  // whose state we want to send
  buffer[Protocol::PIN] = RELAY;

  // put the current state of the pin in the send buffer
  buffer[Protocol::VALUE] = pin_value;

  // We send the data to the server
  client.write(buffer, Protocol::BUFFER_SIZE);

  // try to read the answer from the server for about 10 seconds
  int nr_of_tries = 10000;
  while (client.connected() && nr_of_tries > 0) {
    if (client.available()) {
      String line = client.readStringUntil('\n');
      nr_of_tries = 0;
      Serial.print("line= ");
      Serial.println(line);
    } else {
      delay(1);
      nr_of_tries = nr_of_tries - 1;
    }
  }
  Serial.print("nr of tries= ");
  Serial.println(nr_of_tries);
  Serial.print("connected: ");
  Serial.println(client.connected());
  client.flush();
  client.stop();
  Serial.println(" change sent");
  if (pin_value == 0) {
    pin_value = 1;
    Serial.println("Pin_value set to 1");
  } else {
    pin_value = 0;
    Serial.println("Pin_value set to 0");
  }
  delay(10000);
}

Server side code:

WiFiServer server(SERVER_PORT);

void setup() {
  Serial.begin(115200); // must have the same baud rate as the serial monitor
  pinMode(RELAY, OUTPUT);
  digitalWrite(RELAY, LOW);

  // Connect to the WiFi network
  Serial.println();
  Serial.println();
  Serial.print("Connecting to ");
  Serial.println(ssid);
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(500);
    Serial.print(".");
  }
  Serial.println("Server - ");
  Serial.println("WiFi connected");

  // Set this ESP to behave as a WiFi Access Point
  // WiFi.mode(WIFI_AP);
  // set SSID and Password to connect to this ESP
  // WiFi.softAP(SSID, PASSWORD);

  // Start the server
  server.begin();
  Serial.println("Server started");

  // Output of the IP address
  Serial.print("Use this IP to connect: ");
  Serial.println(WiFi.localIP());
}

void loop() {
  // Check if there is any client connecting
  WiFiClient client = server.available();
  if (client) {
    // Serial.println("Client detected");

    // If the client has data he wants to send us,
    // check for a second or so as transmission can take time
    int nr_of_tries = 1000;
    while (!client.available() && nr_of_tries > 0) {
      nr_of_tries = nr_of_tries - 1;
      delay(1);
    }
    if (client.available()) {
      // Serial.println(" Client data");

      // create a buffer to put the data to be received
      uint8_t buffer[Protocol::BUFFER_SIZE];

      // We put the data sent by the client in the buffer
      // but do not read more than the buffer length.
      int len = client.read(buffer, Protocol::BUFFER_SIZE);

      // retrieve which pin number the client sent
      int pinNumber = buffer[Protocol::PIN];
      Serial.print("Pin Number: ");
      Serial.println(pinNumber);

      // retrieve the value of this pin
      int value = buffer[Protocol::VALUE];
      Serial.print("Value: ");
      Serial.println(value);

      // Set the pin indicated by the received pin number in output mode,
      // but only if the pin is the GPIO0 pin!
      if (pinNumber == RELAY) {
        pinMode(pinNumber, OUTPUT);
        // Set the pin indicated by the received pin number to the passed value
        digitalWrite(pinNumber, value);
      }

      // tell the client that the relay has been set or reset.
      size_t i;
      if (value == 0) {
        i = server.println("Set");
        Serial.print("i= ");
        Serial.println(i);
      } else {
        i = server.println("Reset");
        Serial.print("i= ");
        Serial.println(i);
      }
    }
  }
  // Close the connection with the client
  // client.stop();
}

Common definitions:

#include <ESP8266WiFi.h>

const char* ssid = "blablabla";
const char* password = "blublublu";

#define SERVER_PORT 5000
#define RELAY 0

// Protocol that the Server and Client will use to communicate
enum Protocol {
  PIN,         // Pin whose state you want to change
  VALUE,       // State to which the pin should go (HIGH = 1 or LOW = 0)
  BUFFER_SIZE  // The size of our protocol. IMPORTANT: always leave it as the last item of the enum
};
[ "Solved! By changing server.println(\"Set\"); into client.println(\"Set\") and doing the same for the transmission of \"Reset\" a few lines lower in the server side code it works!\n" ]
[ 0 ]
[]
[]
[ "client", "esp8266wifi", "server", "two_way" ]
stackoverflow_0074660255_client_esp8266wifi_server_two_way.txt
Q: Solving and plotting functions in Python

The problem
I want to solve the above functions to plot xAxis vs yAxis for x between [0:2]. I started with the first function, "det", and used the sympy library and the (solve, nsolve) methods to find the solution "yAxis for every xAxis", but I got an error that says "pop from an empty set". I am not sure if I am using the right syntax for the natural log function (ln), and even if I am using the right library "sympy" and its methods. Could anyone please help me understand what exactly I am doing wrong and if there is a better way to evaluate yAxis and plot the functions.
Here is my code:

import math
import numpy as np
import sympy as sym
from sympy import *

y = sym.symbols('y')
xAxis = np.arange(start=0, stop=2, step=0.1)
yAxis = []

for x in xAxis:
    det = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1),math.e)),math.e)+(y-1)*sym.log((x*y+1),math.e)+y)/((x*y+1)*sym.log((x*y+1),math.e)*((y-1)*sym.log((x*y+1),math.e)+y)))-1)
    sol = sym.nsolve(det,y)
    yAxis.append(sol[0])

A: This is actually a "nice" equation that can be plotted with plot_implicit. "Nice" because it is hard to plot; it pushes the algorithms to their limit in terms of capabilities and forces us to analyze what we are doing.
I'm going to use the SymPy Plotting Backend module because it better deals with implicit plots.

import sympy as sym
det = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1))))+(y-1)*sym.log((x*y+1))+y)/((x*y+1)*sym.log((x*y+1))*((y-1)*sym.log((x*y+1))+y))), 1)
from spb import *
plot_implicit(det, (x, 0, 2))

Now we need to figure out if the plot is correct. At the denominator, det contains terms like log(x * y + 1): when x=0 or y=0 those terms go to zero and the function doesn't exist. So, the horizontal line that you see in the plot is wrong.
When x is positive and y is negative, there will be combinations of these two values at which the function doesn't exist. For example, let's consider x=0.25:

plot(det.rewrite(Add).subs(x, 0.25), (y, -2.5, 0), ylim=(-100, 10))

For x=0.25, det doesn't exist if y < -1.9something. I believe that the vertical line indicates numerical errors. Hence, in the initial plot the curved line for 0 < x < 1 and y < 0 is wrong.
What about the curved line for x > 1 and y > 0? Again, let's consider a fixed x, for example x=1.75:

plot(det.rewrite(Add).subs(x, 1.75), (y, 0, 1), ylim=(-10, 10))

There is a discontinuity there; the function doesn't exist but the algorithm got confused.
At the end, there is only one correct line and we can plot it with:

plot_implicit(det, (x, 0, 2), (y, 0.5, 10), ylim=(0, 10))
Solving and plotting functions in Python
The problem
I want to solve the above functions to plot xAxis vs yAxis for x between [0:2]. I started with the first function, "det", and used the sympy library and the (solve, nsolve) methods to find the solution "yAxis for every xAxis", but I got an error that says "pop from an empty set". I am not sure if I am using the right syntax for the natural log function (ln), and even if I am using the right library "sympy" and its methods. Could anyone please help me understand what exactly I am doing wrong and if there is a better way to evaluate yAxis and plot the functions.
Here is my code:

import math
import numpy as np
import sympy as sym
from sympy import *

y = sym.symbols('y')
xAxis = np.arange(start=0, stop=2, step=0.1)
yAxis = []

for x in xAxis:
    det = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1),math.e)),math.e)+(y-1)*sym.log((x*y+1),math.e)+y)/((x*y+1)*sym.log((x*y+1),math.e)*((y-1)*sym.log((x*y+1),math.e)+y)))-1)
    sol = sym.nsolve(det,y)
    yAxis.append(sol[0])
[ "This is actually a \"nice\" equation that can be plotted with plot_implicit. \"Nice\" because it is hard to plot, it pushes the algorithms to their limit in terms of capabilities and forces us to analyze what we are doing.\nI'm going to use the SymPy Plotting Backend module because it better deals with implicit plots.\nimport sympy as sym\ndet = sym.Eq ((x*y*(y*sym.log((1+sym.log((x*y+1))))+(y-1)*sym.log((x*y+1))+y)/((x*y+1)*sym.log((x*y+1))*((y-1)*sym.log((x*y+1))+y))), 1)\nfrom spb import *\nplot_implicit(det, (x, 0, 2))\n\n\nNow we need to figure out if the plot is correct. At denominator, det contains terms like log(x * y + 1): when x=0 or y=0 those terms goes to zero and the function doesn't exist. So, the horizontal line that you see in the plot is wrong.\nWhen x is positive and y is negative, there will combinations of these two values at which the function doesn't exist. For example, let's consider x=0.25:\nplot(det.rewrite(Add).subs(x, 0.25), (y, -2.5, 0), ylim=(-100, 10))\n\n\nFor x=0.25, det doesn't exist if y < -1.9something. I believe that the vertical line indicates numerical errors. Hence, in the initial plot the curved line for 0 < x < 1 and y < 0 is wrong.\nWhat about the curved line for x > 1 and y > 0? Again, let's consider a fixed x, for example x=1.75:\nplot(det.rewrite(Add).subs(x, 1.75), (y, 0, 1), ylim=(-10, 10))\n\n\nThere is a discontinuity there, the function doesn't exists but the algorithm got confused.\nAt end, there is only one correct line and we can plot it with:\nplot_implicit(det, (x, 0, 2), (y, 0.5, 10), ylim=(0, 10))\n\n\n" ]
[ 0 ]
[]
[]
[ "function", "python", "sympy" ]
stackoverflow_0074592862_function_python_sympy.txt
Q: How do I create a looping video material in SceneKit for iOS app?

How do I create a material in SceneKit that plays a looping video?

A: It's possible to achieve this in SceneKit using a SpriteKit scene as the geometry's material.
The following example will create a SpriteKit scene, add a video node to it with a video player, make the video player loop, create a SceneKit scene, add a SceneKit plane, and finally add the SpriteKit scene as the plane's diffuse material.

import UIKit
import SceneKit
import SpriteKit
import AVFoundation

class ViewController: UIViewController, SCNSceneRendererDelegate {

    @IBOutlet weak var sceneView: SCNView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // A SpriteKit scene to contain the SpriteKit video node
        let spriteKitScene = SKScene(size: CGSize(width: sceneView.frame.width, height: sceneView.frame.height))
        spriteKitScene.scaleMode = .aspectFit

        // Create a video player, which will be responsible for the playback of the video material
        let videoUrl = Bundle.main.url(forResource: "videos/video", withExtension: "mp4")!
        let videoPlayer = AVPlayer(url: videoUrl)

        // To make the video loop
        videoPlayer.actionAtItemEnd = .none
        NotificationCenter.default.addObserver(
            self,
            selector: #selector(ViewController.playerItemDidReachEnd),
            name: NSNotification.Name.AVPlayerItemDidPlayToEndTime,
            object: videoPlayer.currentItem)

        // Create the SpriteKit video node, containing the video player
        let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)
        videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)
        videoSpriteKitNode.size = spriteKitScene.size
        videoSpriteKitNode.yScale = -1.0
        videoSpriteKitNode.play()
        spriteKitScene.addChild(videoSpriteKitNode)

        // Create the SceneKit scene
        let scene = SCNScene()
        sceneView.scene = scene
        sceneView.delegate = self
        sceneView.isPlaying = true

        // Create a SceneKit plane and add the SpriteKit scene as its material
        let background = SCNPlane(width: CGFloat(100), height: CGFloat(100))
        background.firstMaterial?.diffuse.contents = spriteKitScene
        let backgroundNode = SCNNode(geometry: background)
        scene.rootNode.addChildNode(backgroundNode)

        ...
    }

    // This callback will restart the video when it has reached its end
    func playerItemDidReachEnd(notification: NSNotification) {
        if let playerItem: AVPlayerItem = notification.object as? AVPlayerItem {
            playerItem.seek(to: kCMTimeZero)
        }
    }

    ...
}

A: Year 2019 solution:

let mat = SCNMaterial()
let videoUrl = Bundle.main.url(forResource: "YourVideo", withExtension: "mp4")!
let player = AVPlayer(url: videoUrl)
mat.diffuse.contents = player
player.actionAtItemEnd = .none
NotificationCenter.default.addObserver(self,
                                       selector: #selector(playerItemDidReachEnd(notification:)),
                                       name: .AVPlayerItemDidPlayToEndTime,
                                       object: player.currentItem)
player.play()

Code for method in selector:

@objc private func playerItemDidReachEnd(notification: Notification) {
    if let playerItem = notification.object as? AVPlayerItem {
        playerItem.seek(to: .zero, completionHandler: nil)
    }
}

Note: for ten years now you do NOT remove notifications that run a selector. (You only need to do so in the obscure case you're using a block.)

If you have time-travelled to before 2015:
Don't forget to remove your notification observer when the object is deallocated! Something like NotificationCenter.default.removeObserver(self, name: .AVPlayerItemDidPlayToEndTime, object: player.currentItem)

A: It is possible to use an AVPlayer as the content of the scene's background.
However, it was not working for me until I sent .play(nil) to the sceneView.

override func viewDidLoad() {
    super.viewDidLoad()

    // Set the view's delegate
    sceneView.delegate = self

    // Show statistics such as fps and timing information
    sceneView.showsStatistics = true

    // Create a new scene
    let scene = SCNScene(named: "art.scnassets/ship.scn")!

    // create and add a camera to the scene
    let cameraNode = SCNNode()
    cameraNode.camera = SCNCamera()
    scene.rootNode.addChildNode(cameraNode)

    // Set the scene to the view
    sceneView.scene = scene

    let movieFileURL = Bundle.main.url(forResource: "example", withExtension: "mov")!
    let player = AVPlayer(url:movieFileURL)
    scene.background.contents = player
    sceneView.play(nil) //without this line the movie was not playing

    player.play()
}

A: Swift 5.7

1. SceneKit + SpriteKit's Looping Video Material

...works fine – video is looping seamlessly...

I tested this app on Xcode 14.1 Simulator (iOS 16.1) on macOS Ventura 13.0.1. For video texture I used QuickTime 1600x900 .mov file with H.264 codec. 3D model's in .scn format.

import SceneKit
import AVFoundation
import SpriteKit

class GameViewController: UIViewController {
    
    var sceneView: SCNView? = nil
    private var avPlayer: AVQueuePlayer? = nil
    private var looper: AVPlayerLooper? = nil

    override func viewDidLoad() {
        super.viewDidLoad()
        
        self.sceneView = self.view as? SCNView
        guard let scene = SCNScene(named: "tv.scn") else { return }
        sceneView?.scene = scene
        sceneView?.allowsCameraControl = true
        
        self.spriteKitScene(self.sceneKitNode())
    }

    private func spriteKitScene(_ node: SCNNode) { ... }    // A

    internal func sceneKitNode() -> SCNNode { ... }          // B
    
    fileprivate func loadVideoMaterial() -> AVPlayer? { ... } // C
}

SpriteKit scene is capable of playing back a .mov video file:

private func spriteKitScene(_ node: SCNNode) {
    
    let screenGeo: SCNPlane = node.geometry as! SCNPlane

    let videoNode = SKVideoNode(avPlayer: self.loadVideoMaterial()!)

    let skScene = SKScene(size: CGSize(width: screenGeo.width * 1600,
                                       height: screenGeo.height * 900))
    
    videoNode.position = CGPoint(x: skScene.size.width / 2,
                                 y: skScene.size.height / 2)
    
    videoNode.size = skScene.size
    skScene.addChild(videoNode)
    
    let screenMaterial = screenGeo.materials.first
    screenMaterial?.diffuse.contents = skScene
    videoNode.play()
    sceneView?.scene?.rootNode.addChildNode(node)
}

The SceneKit's material is used as a medium for the SpriteKit's video:

internal func sceneKitNode() -> SCNNode {
    
    if let screen = sceneView?.scene?.rootNode.childNode(withName: "screen",
                                                         recursively: false) {
        
        screen.geometry?.firstMaterial?.lightingModel = .constant
        screen.geometry?.firstMaterial?.diffuse.contents = UIColor.black
        return screen
    }
    return SCNNode()
}

And, at last, the method used for loading the video contains an AVPlayerLooper object:

fileprivate func loadVideoMaterial() -> AVPlayer? {
    
    guard let path = Bundle.main.path(forResource: "video", ofType: "mov")
    else { return nil }
    
    let videoURL = URL(fileURLWithPath: path)
    let asset = AVAsset(url: videoURL)
    let item = AVPlayerItem(asset: asset)
    self.avPlayer = AVQueuePlayer(playerItem: item)

    if let avPlayer {
        avPlayer.isMuted = true
        self.looper = AVPlayerLooper(player: avPlayer, templateItem: item)
        return avPlayer
    }
    return AVPlayer()
}

2. Pure SceneKit's Looping Video Material

...works incorrectly – video is looping with delay...

You can take a shorter route to solve this task, but the problem is, this approach does not work as it should - the video loop plays with a delay equal to the duration of the whole video.

import SceneKit
import AVFoundation

class GameViewController: UIViewController {
    
    var sceneView: SCNView? = nil
    private var avPlayer: AVQueuePlayer? = nil
    private var looper: AVPlayerLooper? = nil

    override func viewDidLoad() {
        super.viewDidLoad()
        
        self.sceneView = self.view as? SCNView
        sceneView?.isPlaying = true // if view is playing?
        guard let scene = SCNScene(named: "tv.scn") else { return }
        sceneView?.scene = scene
        sceneView?.allowsCameraControl = true
        
        self.loadModelWithVideoMaterial()
    }

    fileprivate func loadModelWithVideoMaterial() { ... }
}

Here we are assigning an AVQueuePlayer object to the content of the material:

fileprivate func loadModelWithVideoMaterial() {
    
    guard let path = Bundle.main.path(forResource: "video", ofType: "mov")
    else { return }
    
    let videoURL = URL(fileURLWithPath: path)
    let asset = AVAsset(url: videoURL)
    let item = AVPlayerItem(asset: asset)
    self.avPlayer = AVQueuePlayer(playerItem: item)

    if let avPlayer {
        avPlayer.isMuted = true

        guard let screen = sceneView?.scene?.rootNode.childNode(
            withName: "screen",
            recursively: true)
        else { return }
        
        screen.geometry?.firstMaterial?.lightingModel = .constant
        screen.geometry?.firstMaterial?.diffuse.contents = avPlayer
        sceneView?.scene?.rootNode.addChildNode(screen)
        
        self.looper = AVPlayerLooper(player: avPlayer, templateItem: item)
        avPlayer.playImmediately(atRate: 20) // speed x20 for testing
    }
}

A: 2022, it's now trivial to do this in Scene Kit. See Apple docs.
Note that the apple doco clearly states you can now just put video on an SCNNode.

// make some mesh. whatever size you want.
let mesh = SCNPlane()
mesh.width = 1.77
mesh.height = 1

// put the mesh on your node
yourNode.geometry = mesh

// add the video to the mesh
plr = AVPlayer(url: "https .. .m4v")
yourNode.geometry?.firstMaterial?.diffuse.contents = plr

Note that you can put anything you want on the mesh. ("geometry" is the mesh.) It's easy. For example, if you just want a plain color:

... firstMaterial?.diffuse.contents = UIColor.yellow

Note that the question asks about looping the video. This is trivial and unrelated to using SceneKit. You can see a million QA about looping video, it's this easy:

NotificationCenter.default.addObserver(self,
                                       selector: #selector(loopy),
                                       name: .AVPlayerItemDidPlayToEndTime,
                                       object: plr.currentItem)

and then

@objc func loopy() { plr.seek(to: .zero) }
How do I create a looping video material in SceneKit for iOS app?
How do I create a material in SceneKit that plays a looping video?
[ "It's possible to achieve this in SceneKit using a SpriteKit scene as the geometry's material.\nThe following example will create a SpriteKit scene, add a video node to it with a video player, make the video player loop, create a SceneKit scene, add a SceneKit plane, and finally add the SpriteKit scene as the plane's diffuse material.\nimport UIKit\nimport SceneKit\nimport SpriteKit\nimport AVFoundation\n\nclass ViewController: UIViewController, SCNSceneRendererDelegate {\n\n @IBOutlet weak var sceneView: SCNView!\n\n override func viewDidLoad() {\n super.viewDidLoad()\n\n // A SpriteKit scene to contain the SpriteKit video node\n let spriteKitScene = SKScene(size: CGSize(width: sceneView.frame.width, height: sceneView.frame.height))\n spriteKitScene.scaleMode = .aspectFit\n\n // Create a video player, which will be responsible for the playback of the video material\n let videoUrl = Bundle.main.url(forResource: \"videos/video\", withExtension: \"mp4\")!\n let videoPlayer = AVPlayer(url: videoUrl)\n\n // To make the video loop\n videoPlayer.actionAtItemEnd = .none\n NotificationCenter.default.addObserver(\n self,\n selector: #selector(ViewController.playerItemDidReachEnd),\n name: NSNotification.Name.AVPlayerItemDidPlayToEndTime,\n object: videoPlayer.currentItem)\n\n // Create the SpriteKit video node, containing the video player\n let videoSpriteKitNode = SKVideoNode(avPlayer: videoPlayer)\n videoSpriteKitNode.position = CGPoint(x: spriteKitScene.size.width / 2.0, y: spriteKitScene.size.height / 2.0)\n videoSpriteKitNode.size = spriteKitScene.size\n videoSpriteKitNode.yScale = -1.0\n videoSpriteKitNode.play()\n spriteKitScene.addChild(videoSpriteKitNode)\n\n // Create the SceneKit scene\n let scene = SCNScene()\n sceneView.scene = scene\n sceneView.delegate = self\n sceneView.isPlaying = true\n\n // Create a SceneKit plane and add the SpriteKit scene as its material\n let background = SCNPlane(width: CGFloat(100), height: CGFloat(100))\n background.firstMaterial?.diffuse.contents = spriteKitScene\n let backgroundNode = SCNNode(geometry: background)\n scene.rootNode.addChildNode(backgroundNode)\n\n ...\n }\n\n // This callback will restart the video when it has reach its end\n func playerItemDidReachEnd(notification: NSNotification) {\n if let playerItem: AVPlayerItem = notification.object as? AVPlayerItem {\n playerItem.seek(to: kCMTimeZero)\n }\n }\n\n ...\n}\n\n", "Year 2019 solution:\nlet mat = SCNMaterial()\nlet videoUrl = Bundle.main.url(forResource: \"YourVideo\", withExtension: \"mp4\")!\nlet player = AVPlayer(url: videoUrl)\nmat.diffuse.contents = player\nplayer.actionAtItemEnd = .none\nNotificationCenter.default.addObserver(self,\n selector: #selector(playerItemDidReachEnd(notification:)),\n name: .AVPlayerItemDidPlayToEndTime,\n object: player.currentItem)\nplayer.play()\n\nCode for method in selector:\n@objc private func playerItemDidReachEnd(notification: Notification) {\n if let playerItem = notification.object as? AVPlayerItem {\n playerItem.seek(to: .zero, completionHandler: nil)\n }\n}\n\nNote: for ten years now you do NOT remove notifications that run a selector. (You only need to do so in the obscure case you're using a block.)\n\nIf you have time-travelled to before 2015:\nDon't forget to remove your notification observer when the object is deallocated! 
Something like NotificationCenter.default .removeObserver(self, name: .AVPlayerItemDidPlayToEndTime, object: player.currentItem)\n", "It is possible to use an AVPlayer as the content of the scene's background.\nHowever, it was not working for me until I sent .play(nil) to the sceneView.\noverride func viewDidLoad() {\n super.viewDidLoad()\n\n // Set the view's delegate\n sceneView.delegate = self\n\n // Show statistics such as fps and timing information\n sceneView.showsStatistics = true\n\n // Create a new scene\n let scene = SCNScene(named: \"art.scnassets/ship.scn\")!\n\n // create and add a camera to the scene\n let cameraNode = SCNNode()\n cameraNode.camera = SCNCamera()\n scene.rootNode.addChildNode(cameraNode)\n\n // Set the scene to the view\n sceneView.scene = scene\n\n let movieFileURL = Bundle.main.url(forResource: \"example\", withExtension: \"mov\")!\n let player = AVPlayer(url:movieFileURL)\n scene.background.contents = player\n sceneView.play(nil) //without this line the movie was not playing\n\n player.play()\n}\n\n", "Swift 5.7\n1. SceneKit + SpriteKit's Looping Video Material\n...works fine – video is looping seamlessly...\nI tested this app on Xcode 14.1 Simulator (iOS 16.1) on macOS Ventura 13.0.1. For video texture I used QuickTime 1600x900 .mov file with H.264 codec. 3D model's in .scn format.\nimport SceneKit\nimport AVFoundation\nimport SpriteKit\n\nclass GameViewController: UIViewController {\n \n var sceneView: SCNView? = nil\n private var avPlayer: AVQueuePlayer? = nil\n private var looper: AVPlayerLooper? = nil\n\n override func viewDidLoad() {\n super.viewDidLoad()\n \n self.sceneView = self.view as? SCNView\n guard let scene = SCNScene(named: \"tv.scn\") else { return }\n sceneView?.scene = scene\n sceneView?.allowsCameraControl = true\n \n self.spriteKitScene(self.sceneKitNode())\n }\n\n private func spriteKitScene(_ node: SCNNode) { ... } // A\n\n internal func sceneKitNode() -> SCNNode { ... } // B\n \n fileprivate func loadVideoMaterial() -> AVPlayer? { ... } // C\n}\n\nSpriteKit scene is capable of playing back a .mov video file:\nprivate func spriteKitScene(_ node: SCNNode) {\n \n let screenGeo: SCNPlane = node.geometry as! SCNPlane\n\n let videoNode = SKVideoNode(avPlayer: self.loadVideoMaterial()!)\n\n let skScene = SKScene(size: CGSize(width: screenGeo.width * 1600,\n height: screenGeo.height * 900))\n \n videoNode.position = CGPoint(x: skScene.size.width / 2,\n y: skScene.size.height / 2)\n \n videoNode.size = skScene.size\n skScene.addChild(videoNode)\n \n let screenMaterial = screenGeo.materials.first\n screenMaterial?.diffuse.contents = skScene\n videoNode.play()\n sceneView?.scene?.rootNode.addChildNode(node)\n}\n\nThe SceneKit's material is used as a medium for the SpriteKit's video:\ninternal func sceneKitNode() -> SCNNode {\n \n if let screen = sceneView?.scene?.rootNode.childNode(withName: \"screen\",\n recursively: false) {\n \n screen.geometry?.firstMaterial?.lightingModel = .constant\n screen.geometry?.firstMaterial?.diffuse.contents = UIColor.black\n return screen\n }\n return SCNNode()\n}\n\nAnd, at last, the method used for loading the video contains an AVPlayerLooper object:\nfileprivate func loadVideoMaterial() -> AVPlayer? 
{\n \n guard let path = Bundle.main.path(forResource: \"video\", ofType: \"mov\")\n else { return nil }\n \n let videoURL = URL(fileURLWithPath: path)\n let asset = AVAsset(url: videoURL)\n let item = AVPlayerItem(asset: asset)\n self.avPlayer = AVQueuePlayer(playerItem: item)\n\n if let avPlayer {\n avPlayer.isMuted = true\n self.looper = AVPlayerLooper(player: avPlayer, templateItem: item)\n return avPlayer\n }\n return AVPlayer()\n}\n\n\n2. Pure SceneKit's Looping Video Material\n...works incorrectly – video is looping with delay...\nYou can take a shorter route to solve this task, but the problem is, an approach does not work as it should - the video loop plays with a delay equal to the duration of the whole video.\nimport SceneKit\nimport AVFoundation\n\nclass GameViewController: UIViewController {\n \n var sceneView: SCNView? = nil\n private var avPlayer: AVQueuePlayer? = nil\n private var looper: AVPlayerLooper? = nil\n\n override func viewDidLoad() {\n super.viewDidLoad()\n \n self.sceneView = self.view as? SCNView\n sceneView?.isPlaying = true // if view is playing?\n guard let scene = SCNScene(named: \"tv.scn\") else { return }\n sceneView?.scene = scene\n sceneView?.allowsCameraControl = true\n \n self.loadModelWithVideoMaterial()\n }\n\n fileprivate func loadModelWithVideoMaterial() { ... }\n}\n\nHere we are assigning an AVQueuePlayer object to the content of the material:\nfileprivate func loadModelWithVideoMaterial() {\n \n guard let path = Bundle.main.path(forResource: \"video\", ofType: \"mov\")\n else { return }\n \n let videoURL = URL(fileURLWithPath: path)\n let asset = AVAsset(url: videoURL)\n let item = AVPlayerItem(asset: asset)\n self.avPlayer = AVQueuePlayer(playerItem: item)\n\n if let avPlayer {\n avPlayer.isMuted = true\n\n guard let screen = sceneView?.scene?.rootNode.childNode(\n withName: \"screen\",\n recursively: true)\n else { return }\n \n screen.geometry?.firstMaterial?.lightingModel = .constant\n screen.geometry?.firstMaterial?.diffuse.contents = avPlayer\n sceneView?.scene?.rootNode.addChildNode(screen)\n \n self.looper = AVPlayerLooper(player: avPlayer, templateItem: item)\n avPlayer.playImmediately(atRate: 20) // speed x20 for testing\n }\n}\n\n", "2022, it's now trivial to do this in Scene Kit. See Apple docs.\nNote that the apple doco clearly states you can now just put video on an SCNNode.\n// make some mesh. whatever size you want.\nlet mesh = SCNPlane()\nmesh.width = 1.77\nmesh.height = 1\n\n// put the mesh on your node\nyourNode.geometry = mesh\n\n// add the video to the mesh\nplr = AVPlayer(url: \"https .. .m4v\")\nyourNode.geometry?.firstMaterial?.diffuse.contents = plr\n\nNote that you can put anything you want on the mesh. (\"geometry\" is the mesh.) It's easy. For example, if you just want a plain color:\n... firstMaterial?.diffuse.contents = UIColor.yellow\n\nNote that the question asks about looping the video. This is trivial and unrelated to using SceneKit. You can see a million QA about looping video, it's this easy:\nNotificationCenter.default.addObserver(self,\n selector: #selector(loopy),\n name: .AVPlayerItemDidPlayToEndTime,\n object: plr.currentItem)\n\nand then\n@objc func loopy() { plr.seek(to: .zero) }\n\n" ]
[ 29, 7, 2, 0, 0 ]
[]
[]
[ "ios", "scenekit", "swift" ]
stackoverflow_0042469024_ios_scenekit_swift.txt
Q: python program troubleshoot

if the user enters a char it should show the wrong input and continue asking for input until it reaches the range of 10 elements. how to solve this?
output

list = []
even = 0

for x in range(10):
    number = int(input("Enter a number: "))
    list.append(number)

for y in list:
    if y % 2 == 0:
        even += 1
print("Number of even numbers: ", even)

for y in list:
    if y % 2 == 0:
        count = list.index(y)
        print("Index [", count, "]: ", y)

A: myList = []
while len(myList) < 10:
    try:
        number = int(input("Enter a number: "))
        myList.append(number)
    except ValueError:
        print('Wrong value. Please enter a number.')
print(myList)

A: Hope code is self explanatory:

arr = []
even = 0
error_flag = False

for x in range(10):
    entry = input("Enter a number: ")
    if not entry.isdigit():
        print("Entry is not a number")
        error_flag = True
        break
    arr.append(int(entry))

if not error_flag:
    brr = []
    for id, y in enumerate(arr):
        if y % 2 == 0:
            brr.append([id, y])

    print(f"Even numbers are: {len(brr)}")
    for z in brr:
        print(f"Index{z[0]} is {z[1]}")

A: list = []
even_list = []
c = 0

for x in range(10):
    number = (input("Enter a number: "))
    list.append(number)
    if number.isdigit() == False:
        print("wrong input")
        break
    elif int(number) % 2 == 0:
        even_list.append(number)

if len(list) == 10:
    print("Number of even numbers: ", len(even_list))
    for i in list:
        i = int(i)
        if (i) % 2 == 0:
            print("Index %d : %d" % (c, i))  # print("Index", c, ":", i)
            c = c + 1
python program troubleshoot
if the user enters a char it should show the wrong input and continue asking for input until it reaches the range of 10 elements. how to solve this?
output

list = []
even = 0

for x in range(10):
    number = int(input("Enter a number: "))
    list.append(number)

for y in list:
    if y % 2 == 0:
        even += 1
print("Number of even numbers: ", even)

for y in list:
    if y % 2 == 0:
        count = list.index(y)
        print("Index [", count, "]: ", y)
[ "myList = []\nwhile len(myList) < 10:\n try:\n number = int(input(\"Enter a number: \"))\n myList.append(number)\n except ValueError:\n print('Wrong value. Please enter a number.')\nprint(myList)\n\n", "Hope code is self explanatory:\narr = []\neven = 0\nerror_flag = False\n\nfor x in range(10):\n entry = input(\"Enter a number: \")\n if not entry.isdigit():\n print(\"Entry is not a number\")\n error_flag = True\n break\n arr.append(int(entry))\n\nif not error_flag:\n brr = []\n for id, y in enumerate(arr):\n if y%2 == 0:\n brr.append([id,y])\n\n print(f\"Even numbers are: {len(brr)}\")\n for z in brr:\n print(f\"Index{z[0]} is {z[1]}\")\n\n\n\n", "list = []\neven_list=[]\nc=0\n\nfor x in range(10):\n number = (input(\"Enter a number: \"))\n list.append(number)\n if number.isdigit()==False :\n print(\"wrong input\")\n break \n elif int(number)%2==0:\n even_list.append(number) \n\nif len(list)==10:\n print(\"Number of even numbers: \",len(even_list))\n for i in list:\n i=int(i)\n if (i) %2==0:\n print(\"Index %d : %d\" %(c,i)) # print(\"Index\",c,\":\",i)\n c=c+1\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "do", "list", "python", "while_loop" ]
stackoverflow_0074666821_do_list_python_while_loop.txt
Q: How do I change the path of a site in HTML, CSS, or JS?

How can I change the path of my site using HTML, CSS, or JS? I may be using the wrong grammar here using the word "path", but what I'm referring to is a subdomain; an example would be stackoverflow.com --> stackoverflow.com/questions
I attempted to use the element

<a href="about.html"> <img src="x"> </a>

but it did not work.

A: You could think of a server's public folder (the one that hosts your site's static files such as HTML, CSS, JS) as a directory in your computer. For you to redirect the user to another path in your site you can use, as you said, <a href="[PATH]">Your link</a>, but keep in mind that if you want to go to another folder like yoursite.com/anotherfolder you must have two things (for this example):

This folder structure in your site:
[ROOTFOLDER]/anotherfolder/index.html
In this case you need to add another file called index.html because it is the default path that you get when no file is specified (file being specified: yoursite.com/anotherfolder/file.html).

The HTML anchor tag must start with a slash, like <a href="/anotherfolder/">Your link</a>.

Hope it answers your question.
Note
You are talking about a path http://yoursite.com/[PATH]; a subdomain is like this: http://subdomain.yoursite.com.
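A concrete sketch of the layout the answer describes (all folder and file names here are placeholders):

public/                  <- site root, served as yoursite.com
├── index.html
└── anotherfolder/
    └── index.html       <- served as yoursite.com/anotherfolder/

<!-- root-relative link: works from any page on the site -->
<a href="/anotherfolder/">Go to the other section</a>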
How do I change the path of a site in HTML, CSS, or JS?
How can I change the path of my site using HTML, CSS, or JS? I may be using the wrong grammar here using the word "path", but what I'm referring to is a subdomain; an example would be stackoverflow.com --> stackoverflow.com/questions
I attempted to use the element

<a href="about.html"> <img src="x"> </a>

but it did not work.
[ "You could think of a server's public folder (the one that host your site's static files as HTML, CSS, Js) as a directory in your computer. For you to redirect the user to another path in your site you can use as you said <a href=\"[PATH]\">Your link</a> but keep in mind that if you want to go to another folder like yoursite.com/anotherfolder you must to have two things (for this example):\n\nThis folder structure in your site:\n\n[ROOTFOLDER]/anotherfolder/index.html\nIn this case you need to add another file called index.html because is the default path that you get when no file is specified (file being specified: yoursite.com/anotherfolder/file.html).\n\nThe HTML anchor tag must to start with a slash like <a href=\"/anotherfolder/\">Your link</a>.\n\nHope it answer your question.\nNote\nYou are talking about a path http://yoursite.com/[PATH], a subdomain is like this http://subdomain.yoursite.com.\n" ]
[ 0 ]
[]
[]
[ "css", "html" ]
stackoverflow_0074667163_css_html.txt
Q: I want to run migration from PostgreSQL to sqlite3

Currently I am using a PostgreSQL database in my project, but I also want to use SQLite for localhost. When I run the migrate command there are errors, because the array field is not supported in SQLite, so I want to convert the array field to a JSONField and run makemigrations, but the old migrations are also present. I want to write custom logic in the migrations, so that it uses the old migrations when the database is PostgreSQL and the new migrations when it is sqlite3. I don't want to create new migrations and a migration table every time I switch databases.

A: SQLite is more of a flat file system. I think the original idea is that you can store a small amount of data on a device and update the main database, or fetch info from a database, when the device is 'idle' as a background process. I know there may be some people putting this comment down, but essentially SQLite is 'Light' and a flat file. Those considerations should be taken into account. btw I see that there is MySQL for Android but I have not tried it out.
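One common pattern for the custom logic the question asks about is branching on the active backend inside the migration itself, via the vendor attribute Django exposes on the connection. A rough sketch (app and migration names are hypothetical):

# A RunPython step that only does the array-specific work on PostgreSQL.
from django.db import migrations

def forwards(apps, schema_editor):
    if schema_editor.connection.vendor == "postgresql":
        ...  # ArrayField-specific data handling
    else:
        ...  # sqlite3 path: operate on the JSONField representation

class Migration(migrations.Migration):
    dependencies = [("myapp", "0001_initial")]
    operations = [migrations.RunPython(forwards, migrations.RunPython.noop)]

Note this keeps a single migration history; the branch happens at migrate time, so you do not need separate migration tables per database.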
I want to run migration from PostgreSQL to sqlite3
Currently I am using a PostgreSQL database in my project, but I also want to use SQLite for localhost. When I run the migrate command there are errors, because the array field is not supported in SQLite, so I want to convert the array field to a JSONField and run makemigrations, but the old migrations are also present. I want to write custom logic in the migrations, so that it uses the old migrations when the database is PostgreSQL and the new migrations when it is sqlite3. I don't want to create new migrations and a migration table every time I switch databases.
[ "SQLite is more of a flat file system. I think the original idea is that you can store a small amount of data on a device and update the main database, or fetch info from a database, when the device is 'idle' as a background process. I know there may be some people putting this comment down but essentially SQLite is 'Light' and a flat file. Those considerations should be taken into account. btw I see that there is MYSQL for Andriod but I have not tried it out.\n" ]
[ 0 ]
[]
[]
[ "database", "django", "migration" ]
stackoverflow_0074667197_database_django_migration.txt
Q: Nuxt3 Button doesn't register @click event

I have the simplest Nuxt3 project and my button, when pressed, doesn't register a click event. Why? Or where can I look for a solution?

<button @click="console.log('PRESSED')" class="bg-gray-500">GENERATE TEXT</button>

Replication:

npx nuxi init replica_project
cd replica_project
npm install --save-dev @nuxtjs/tailwindcss

Added in nuxt.config.ts:

modules: [
  '@nuxtjs/tailwindcss'
]

Put <button @click="console.log('I got hit on')" class="bg-gray-500">HIT ME BABY</button> in the App.vue file

Tried Solutions:
Add .native: <button @click.native="console.log('PRESSED')" class="bg-gray-500">GENERATE TEXT</button>

A: The usual way of using a console.log is by using a function to call it (rather than doing it directly from the template), like the following.
Using the CompositionAPI:

<script setup>
function consoleHitOn() {
  console.log('I got hit on')
}
</script>

<template>
  <button @click="consoleHitOn">HIT ME BABY</button>
</template>

Using the OptionsAPI, you could achieve the same with a hack by doing this

<script>
export default {
  computed: {
    console: () => console,
  }
}
</script>

<template>
  <button @click="console.log('I got hit on')">HIT ME BABY</button>
</template>

I'm not sure if there is a way of writing the same kind of hack with CompositionAPI, but even if there is I do not really recommend it anyway.

PS: one thing for sure is that it's totally not related to TailwindCSS.
Tailwind is for the style, nothing concerning the event listeners in a Vue app.

A: I updated my machine, turned it off, tried again today and it worked.
I have no further explanation...
Nuxt3 Button doesn't register @click event
I have the simplest Nuxt3 project and my button, when pressed, doesn't register a click event. Why? Or where can I look for a solution?

<button @click="console.log('PRESSED')" class="bg-gray-500">GENERATE TEXT</button>

Replication:

npx nuxi init replica_project
cd replica_project
npm install --save-dev @nuxtjs/tailwindcss

Added in nuxt.config.ts:

modules: [
  '@nuxtjs/tailwindcss'
]

Put <button @click="console.log('I got hit on')" class="bg-gray-500">HIT ME BABY</button> in the App.vue file

Tried Solutions:
Add .native: <button @click.native="console.log('PRESSED')" class="bg-gray-500">GENERATE TEXT</button>
[ "The usual way of using a console.log is by using a function to call it (rather than doing it directly from the template), like the following.\nUsing the CompositionAPI:\n<script setup>\nfunction consoleHitOn() {\n console.log('I got hit on')\n}\n</script>\n\n<template>\n <button @click=\"consoleHitOn\">HIT ME BABY</button>\n</template>\n\nUsing the OptionsAPI, you could achieve the same with a hack by doing this\n<script>\nexport default {\n computed: {\n console: () => console,\n }\n}\n</script>\n\n<template>\n <button @click=\"console.log('I got hit on')\">HIT ME BABY</button>\n</template>\n\nI'm not sure if there is a way of writing the same kind of hack with CompositionAPI, but even if there is I do not really recommend it anyway.\n\nPS: one thing for sure is that it's totally not related to TailwindCSS.\nTailwind is for the style, nothing concerning the event listeners in a Vue app.\n", "I updated my machine, turned it off, tried again today and it worked.\nI have no further explanation...\n" ]
[ 1, 0 ]
[]
[]
[ "nuxt.js", "nuxtjs3", "vue.js", "vuejs3" ]
stackoverflow_0074649083_nuxt.js_nuxtjs3_vue.js_vuejs3.txt
Q: How can I find out which path os.path points to?

I am a web developer (PHP, JS, CSS and ...). I ordered a Python script to remove image backgrounds. It worked in cmd very well, but when running it from a PHP script it doesn't work. I looked at the script to find the problem and realized that the script stops at this line:

net.load_state_dict(self.torch.load(os.path.join("../library/removeBG/models/", name, name + '.pth'), map_location="cpu"))

I guess the problem with the script is that it can't find the file, and probably the problem is caused by the path that os.path points to. Is it possible to print the path that os.path points to? If not, do you have a solution to this problem?

A: The problem here is that the PHP script might be in a different directory, so while executing the Python script via the PHP script, os.path points to the directory from where it is being executed, i.e. the location of the PHP script.
TLDR; Try using an absolute path.

A: This should be enough:

name = 'name'
p = os.path.join("../library/removeBG/models/", name, name + '.pth')
print(p)

This is what I get:

>>> ../library/removeBG/models/name/name.pth
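To see exactly where those relative segments resolve at runtime, you can also print the working directory and the absolute form of the path. A small sketch using the same join as the failing line (name is whatever the script assigns it):

import os

print(os.getcwd())  # relative paths resolve against this, i.e. the PHP caller's cwd

p = os.path.join("../library/removeBG/models/", name, name + ".pth")
print(os.path.abspath(p))  # the file torch.load will actually try to open

# More robust: anchor the path to the script's own location instead of the cwd
base = os.path.dirname(os.path.abspath(__file__))
p = os.path.join(base, "..", "library", "removeBG", "models", name, name + ".pth")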
How can I find out which path os.path points to?
I am a web developer (PHP, JS, CSS, ...). I ordered a Python script to remove image backgrounds. It worked very well in cmd, but when running it from a PHP script, it doesn't work. I looked at the script to find the problem and realized that the script stops at this line: net.load_state_dict(self.torch.load(os.path.join("../library/removeBG/models/", name, name + '.pth'), map_location="cpu")) I guess the problem is that the script can't find the file, and the problem is probably caused by the path that os.path points to. Is it possible to print the path that os.path points to? If not, do you have a solution to this problem?
[ "The problem here is that the php script might be in different directory so while executing the python script via php script, the os.path points to the directory from where it is being executed i.e. the location of php script.\nTLDR; Try using absolute path.\n", "This should be enough:\nname = 'name'\np = os.path.join(\"../library/removeBG/models/\", name, name + '.pth')\nprint(p)\n\nThis is what i get:\n>>> ../library/removeBG/models/name/name.pth\n\n" ]
[ 0, 0 ]
[]
[]
[ "os.path", "python" ]
stackoverflow_0074667211_os.path_python.txt
Q: Why doesn't my second def inside the first def work? I want to make a program that can check whether the entered number is a prime number in Jupyter Notebook. This is the code: def input_number(): number = input() if number.isnumeric(): the_number = int(number) def check_prime(): divisor = 1 divisor += 1 if the_number > 1: if divisor in range(2, the_number): if the_number % divisor != 0: print(the_number, "is a prime number") else: print(the_number,"not a prime number") print(the_number, "divide", number//divisor, "is", divisor) else: print(the_number, "not a prime number") else: But when I enter a number the process will not continue to def check_prime and it just freezes. If I enter anything other than a number then I get **UnboundLocalError: cannot access local variable 'check_prime' where it is not associated with a value** A: You defined that function inside input_number(), so you can only use check_prime() inside that function. Define check_prime() outside of input_number(). def input_number(): #input number func number = input() #take the number return int(number) if number.isnumeric() else print('Input only INT.') #return the number swapped to int if it's numeric. def check_prime(the_number): #prime function - the_number is a parameter to use in the function # you defined divisor then added 1, but it's the same as defining it as 2. divisor = 2 # you also don't need to redefine divisor if the_number > 1: if divisor in range(2, the_number): if the_number % divisor != 0: print(the_number, "is a prime number") else: print(the_number, "not a prime number") print(the_number, "divide", the_number // divisor, "is", divisor) else: print(the_number,'not a prime number.') When calling, use check_prime(input_number())
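One caveat on that answer: its check_prime only ever tests the divisor 2, so odd composites such as 9 are misreported. A corrected sketch keeping the question's messages -- an illustration, not the only way to write it:

def check_prime(the_number):
    if the_number < 2:
        print(the_number, "not a prime number")
        return
    for divisor in range(2, the_number):
        if the_number % divisor == 0:
            print(the_number, "not a prime number")
            print(the_number, "divide", the_number // divisor, "is", divisor)
            return
    print(the_number, "is a prime number")

check_prime(input_number())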
Why doesn't my second def inside the first def work?
I want to make a program that can check whether the entered number is a prime number in Jupyter Notebook. This is the code: def input_number(): number = input() if number.isnumeric(): the_number = int(number) def check_prime(): divisor = 1 divisor += 1 if the_number > 1: if divisor in range(2, the_number): if the_number % divisor != 0: print(the_number, "is a prime number") else: print(the_number,"not a prime number") print(the_number, "divide", number//divisor, "is", divisor) else: print(the_number, "not a prime number") else: But when I enter a number the process will not continue to def check_prime and it just freezes. If I enter anything other than a number then I get **UnboundLocalError: cannot access local variable 'check_prime' where it is not associated with a value**
[ "You defined that function under input_number()\nYou can only use check_prime() under that function.\ndefine the check_prime() outside of input_number().\ndef input_number(): #input number func\n number = input() #take the number\n return int(number) if number.isnumeric() else print('Input only INT.') #return the number swapped to int if its numeric.\n\ndef check_prime(the_number): #prime function - num is a parameter to use in function\n # you defined divisor than added 1 , but its same with defining it as 2.\n divisor = 2 # you also dont need to define divisor\n if the_number > 1:\n if divisor in range(2, the_number):\n if the_number % divisor != 0:\n print(the_number, \"is a prime number\")\n else:\n print(the_number, \"not a prime number\")\n print(the_number, \"divide\", the_number // divisor, \"is\", divisor)\n else:\n print(the_number,'not a prime number.')\n\nwhile calling, use\ncheck_prime(input_number) \n\n" ]
[ 0 ]
[]
[]
[ "jupyter_notebook", "python" ]
stackoverflow_0074666624_jupyter_notebook_python.txt
Q: How to fix error of WebSecurityConfigurerAdapter when upgrade to Spring Boot 3.0.0? I have code work ok with Spring 2.x . Source code of Spring 2.x File CustomFilter.java package com.example.security; import jakarta.servlet.FilterChain; import jakarta.servlet.ServletException; import jakarta.servlet.ServletRequest; import jakarta.servlet.ServletResponse; import org.springframework.web.filter.GenericFilterBean; import java.io.IOException; public class CustomFilter extends GenericFilterBean { @Override public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { chain.doFilter(request, response); } } File AuthEntryPointJwt.java package com.example.security.jwt; import com.fasterxml.jackson.databind.ObjectMapper; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.http.MediaType; import org.springframework.security.core.AuthenticationException; import org.springframework.security.web.AuthenticationEntryPoint; import org.springframework.stereotype.Component; import java.io.IOException; import java.util.HashMap; import java.util.Map; @Component public class AuthEntryPointJwt implements AuthenticationEntryPoint { private static final Logger logger = LoggerFactory.getLogger(AuthEntryPointJwt.class); @Override public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException { logger.error("Unauthorized error: {}", authException.getMessage()); response.setContentType(MediaType.APPLICATION_JSON_VALUE); response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); final Map<String, Object> body = new HashMap<>(); body.put("status", HttpServletResponse.SC_UNAUTHORIZED); body.put("error", "Unauthorized"); body.put("message", authException.getMessage()); body.put("path", request.getServletPath()); final ObjectMapper mapper = new ObjectMapper(); mapper.writeValue(response.getOutputStream(), body); } } File AuthTokenFilter.java package com.example.security.jwt; import com.example.security.services.UserDetailsServiceImpl; import jakarta.servlet.FilterChain; import jakarta.servlet.ServletException; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.authentication.UsernamePasswordAuthenticationToken; import org.springframework.security.core.context.SecurityContextHolder; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.web.authentication.WebAuthenticationDetailsSource; import org.springframework.util.StringUtils; import org.springframework.web.filter.OncePerRequestFilter; import java.io.IOException; public class AuthTokenFilter extends OncePerRequestFilter { private static final Logger logger = LoggerFactory.getLogger(AuthTokenFilter.class); @Autowired private JwtUtils jwtUtils; @Autowired private UserDetailsServiceImpl userDetailsService; @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException { try { String jwt = parseJwt(request); if (jwt != null && jwtUtils.validateJwtToken(jwt)) { String username = jwtUtils.getUserNameFromJwtToken(jwt); UserDetails userDetails = 
userDetailsService.loadUserByUsername(username); UsernamePasswordAuthenticationToken authentication = new UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities()); authentication.setDetails(new WebAuthenticationDetailsSource().buildDetails(request)); SecurityContextHolder.getContext().setAuthentication(authentication); } } catch (Exception e) { logger.error("Cannot set user authentication: {}", e); } filterChain.doFilter(request, response); } private String parseJwt(HttpServletRequest request) { String headerAuth = request.getHeader("Authorization"); if (StringUtils.hasText(headerAuth) && headerAuth.startsWith("Bearer ")) { return headerAuth.substring(7); } return null; } } File JwtUtils.java package com.example.security.jwt; import com.example.security.services.UserDetailsImpl; import io.jsonwebtoken.ExpiredJwtException; import io.jsonwebtoken.Jwts; import io.jsonwebtoken.MalformedJwtException; import io.jsonwebtoken.SignatureAlgorithm; import io.jsonwebtoken.SignatureException; import io.jsonwebtoken.UnsupportedJwtException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Value; import org.springframework.security.core.Authentication; import org.springframework.stereotype.Component; import java.util.Date; @Component public class JwtUtils { private static final Logger logger = LoggerFactory.getLogger(JwtUtils.class); @Value("${app.jwtSecret}") private String jwtSecret; @Value("${app.jwtExpirationMs}") private int jwtExpirationMs; public String generateJwtToken(Authentication authentication) { UserDetailsImpl userPrincipal = (UserDetailsImpl) authentication.getPrincipal(); return Jwts.builder().setSubject((userPrincipal.getUsername())).setIssuedAt(new Date()).setExpiration(new Date((new Date()).getTime() + jwtExpirationMs)).signWith(SignatureAlgorithm.HS512, jwtSecret).compact(); } public String getUserNameFromJwtToken(String token) { return Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(token).getBody().getSubject(); } public boolean validateJwtToken(String authToken) { try { Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(authToken); return true; } catch (SignatureException e) { logger.error("Invalid JWT signature: {}", e.getMessage()); } catch (MalformedJwtException e) { logger.error("Invalid JWT token: {}", e.getMessage()); } catch (ExpiredJwtException e) { logger.error("JWT token is expired: {}", e.getMessage()); } catch (UnsupportedJwtException e) { logger.error("JWT token is unsupported: {}", e.getMessage()); } catch (IllegalArgumentException e) { logger.error("JWT claims string is empty: {}", e.getMessage()); } return false; } } File UserDetailsImpl.java package com.example.security.services; import com.example.models.User; import com.fasterxml.jackson.annotation.JsonIgnore; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.userdetails.UserDetails; import java.util.Collection; import java.util.List; import java.util.Objects; import java.util.stream.Collectors; public class UserDetailsImpl implements UserDetails { private static final long serialVersionUID = 1L; private Long id; private String username; private String email; @JsonIgnore private String password; private Collection<? extends GrantedAuthority> authorities; public UserDetailsImpl(Long id, String username, String email, String password, Collection<? 
extends GrantedAuthority> authorities) { this.id = id; this.username = username; this.email = email; this.password = password; this.authorities = authorities; } public static UserDetailsImpl build(User user) { List<GrantedAuthority> authorities = user.getRoles().stream().map(role -> new SimpleGrantedAuthority(role.getName().name())).collect(Collectors.toList()); return new UserDetailsImpl(user.getId(), user.getUsername(), user.getEmail(), user.getPassword(), authorities); } @Override public Collection<? extends GrantedAuthority> getAuthorities() { return authorities; } public Long getId() { return id; } public String getEmail() { return email; } @Override public String getPassword() { return password; } @Override public String getUsername() { return username; } @Override public boolean isAccountNonExpired() { return true; } @Override public boolean isAccountNonLocked() { return true; } @Override public boolean isCredentialsNonExpired() { return true; } @Override public boolean isEnabled() { return true; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; UserDetailsImpl user = (UserDetailsImpl) o; return Objects.equals(id, user.id); } } File UserDetailsServiceImpl.java package com.example.security.services; import com.example.models.User; import com.example.repository.UserRepository; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.core.userdetails.UsernameNotFoundException; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; // Original. 
@Service public class UserDetailsServiceImpl implements UserDetailsService { @Autowired UserRepository userRepository; @Override @Transactional public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { User user = userRepository.findByUsername(username).orElseThrow(() -> new UsernameNotFoundException("User Not Found with username: " + username)); return UserDetailsImpl.build(user); } } file WebSecurityConfig.java package com.example.security; import com.example.security.jwt.AuthEntryPointJwt; import com.example.security.jwt.AuthTokenFilter; import com.example.security.services.UserDetailsServiceImpl; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.authentication.AuthenticationManager; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; import org.springframework.security.crypto.password.PasswordEncoder; import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter; @Configuration @EnableWebSecurity @EnableGlobalMethodSecurity( // securedEnabled = true, // jsr250Enabled = true, prePostEnabled = true) public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired UserDetailsServiceImpl userDetailsService; @Autowired private AuthEntryPointJwt unauthorizedHandler; @Bean public AuthTokenFilter authenticationJwtTokenFilter() { return new AuthTokenFilter(); } @Override public void configure(AuthenticationManagerBuilder authenticationManagerBuilder) throws Exception { authenticationManagerBuilder.userDetailsService(userDetailsService).passwordEncoder(passwordEncoder()); } @Bean @Override public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } // If the submitted id != the id of that user's tenant in the database, do not allow the request to proceed. @Override protected void configure(HttpSecurity http) throws Exception { http.cors().and().csrf().disable() .exceptionHandling().authenticationEntryPoint(unauthorizedHandler).and() .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS).and() //.authorizeRequests().antMatchers("/api/auth/**", "/swagger-ui/**").permitAll() .authorizeRequests().antMatchers("/api/auth/**", "/swagger-ui/**", "/v3/api-docs/**").permitAll() .antMatchers("/app/**").permitAll() .antMatchers("/api/test/**").permitAll() .anyRequest().authenticated(); http.addFilterBefore(authenticationJwtTokenFilter(), UsernamePasswordAuthenticationFilter.class); //; // .addFilterAfter(new CustomFilter(), BasicAuthenticationFilter.class); // VyDN 2022_07_22 // https://www.baeldung.com/spring-security-custom-filter } } // Add filter before, after: https://stackoverflow.com/a/59000469 Now, I am using Java / JDK 19, Spring Boot 3.0.0.
After upgrading to Spring Boot 3.0.0, it causes a compile error. How to fix error of WebSecurityConfigurerAdapter when upgrade to Spring Boot 3.0.0? Specific to my configuration. Please guide me in rewriting file WebSecurityConfig.java A: On Spring Boot 3, WebSecurityConfigurerAdapter is no longer available: it was deprecated in Spring Security 5.7 and removed in 6.0. So in your case the WebSecurityConfig class should not extend any class and must be implemented by itself. You can implement the userDetailsService by yourself as a @Bean and also set the AuthenticationManager, not just return the super. I had the same problem and my solution was just to add @SuppressWarnings("deprecation") before the @Configuration annotation in the class.
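For reference, a sketch of the component-style configuration that replaces the adapter in Spring Security 6 / Spring Boot 3, reusing the beans from the question (the application's own imports and package line from the question are omitted); treat it as a starting point under those assumptions, not a verified drop-in file:

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.authentication.AuthenticationManager;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.authentication.configuration.AuthenticationConfiguration;
import org.springframework.security.config.annotation.method.configuration.EnableMethodSecurity;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.http.SessionCreationPolicy;
import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder;
import org.springframework.security.crypto.password.PasswordEncoder;
import org.springframework.security.web.SecurityFilterChain;
import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter;

@Configuration
@EnableWebSecurity
@EnableMethodSecurity(prePostEnabled = true) // replaces @EnableGlobalMethodSecurity
public class WebSecurityConfig {

    @Autowired
    UserDetailsServiceImpl userDetailsService; // picked up automatically together with the PasswordEncoder bean

    @Autowired
    private AuthEntryPointJwt unauthorizedHandler;

    @Bean
    public AuthTokenFilter authenticationJwtTokenFilter() {
        return new AuthTokenFilter();
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }

    @Bean
    public AuthenticationManager authenticationManager(AuthenticationConfiguration config) throws Exception {
        // Replaces the authenticationManagerBean() override from the adapter.
        return config.getAuthenticationManager();
    }

    @Bean
    public SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        // Replaces the configure(HttpSecurity) override from the adapter.
        http.cors(Customizer.withDefaults())
            .csrf(csrf -> csrf.disable())
            .exceptionHandling(ex -> ex.authenticationEntryPoint(unauthorizedHandler))
            .sessionManagement(s -> s.sessionCreationPolicy(SessionCreationPolicy.STATELESS))
            .authorizeHttpRequests(auth -> auth
                .requestMatchers("/api/auth/**", "/swagger-ui/**", "/v3/api-docs/**").permitAll()
                .requestMatchers("/app/**", "/api/test/**").permitAll()
                .anyRequest().authenticated());
        http.addFilterBefore(authenticationJwtTokenFilter(), UsernamePasswordAuthenticationFilter.class);
        return http.build();
    }
}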
How to fix error of WebSecurityConfigurerAdapter when upgrade to Spring Boot 3.0.0?
I have code work ok with Spring 2.x . Source code of Spring 2.x File CustomFilter.java package com.example.security; import jakarta.servlet.FilterChain; import jakarta.servlet.ServletException; import jakarta.servlet.ServletRequest; import jakarta.servlet.ServletResponse; import org.springframework.web.filter.GenericFilterBean; import java.io.IOException; public class CustomFilter extends GenericFilterBean { @Override public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain) throws IOException, ServletException { chain.doFilter(request, response); } } File AuthEntryPointJwt.java package com.example.security.jwt; import com.fasterxml.jackson.databind.ObjectMapper; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.http.MediaType; import org.springframework.security.core.AuthenticationException; import org.springframework.security.web.AuthenticationEntryPoint; import org.springframework.stereotype.Component; import java.io.IOException; import java.util.HashMap; import java.util.Map; @Component public class AuthEntryPointJwt implements AuthenticationEntryPoint { private static final Logger logger = LoggerFactory.getLogger(AuthEntryPointJwt.class); @Override public void commence(HttpServletRequest request, HttpServletResponse response, AuthenticationException authException) throws IOException { logger.error("Unauthorized error: {}", authException.getMessage()); response.setContentType(MediaType.APPLICATION_JSON_VALUE); response.setStatus(HttpServletResponse.SC_UNAUTHORIZED); final Map<String, Object> body = new HashMap<>(); body.put("status", HttpServletResponse.SC_UNAUTHORIZED); body.put("error", "Unauthorized"); body.put("message", authException.getMessage()); body.put("path", request.getServletPath()); final ObjectMapper mapper = new ObjectMapper(); mapper.writeValue(response.getOutputStream(), body); } } File AuthTokenFilter.java package com.example.security.jwt; import com.example.security.services.UserDetailsServiceImpl; import jakarta.servlet.FilterChain; import jakarta.servlet.ServletException; import jakarta.servlet.http.HttpServletRequest; import jakarta.servlet.http.HttpServletResponse; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.authentication.UsernamePasswordAuthenticationToken; import org.springframework.security.core.context.SecurityContextHolder; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.web.authentication.WebAuthenticationDetailsSource; import org.springframework.util.StringUtils; import org.springframework.web.filter.OncePerRequestFilter; import java.io.IOException; public class AuthTokenFilter extends OncePerRequestFilter { private static final Logger logger = LoggerFactory.getLogger(AuthTokenFilter.class); @Autowired private JwtUtils jwtUtils; @Autowired private UserDetailsServiceImpl userDetailsService; @Override protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain) throws ServletException, IOException { try { String jwt = parseJwt(request); if (jwt != null && jwtUtils.validateJwtToken(jwt)) { String username = jwtUtils.getUserNameFromJwtToken(jwt); UserDetails userDetails = userDetailsService.loadUserByUsername(username); UsernamePasswordAuthenticationToken authentication = new 
UsernamePasswordAuthenticationToken(userDetails, null, userDetails.getAuthorities()); authentication.setDetails(new WebAuthenticationDetailsSource().buildDetails(request)); SecurityContextHolder.getContext().setAuthentication(authentication); } } catch (Exception e) { logger.error("Cannot set user authentication: {}", e); } filterChain.doFilter(request, response); } private String parseJwt(HttpServletRequest request) { String headerAuth = request.getHeader("Authorization"); if (StringUtils.hasText(headerAuth) && headerAuth.startsWith("Bearer ")) { return headerAuth.substring(7); } return null; } } File JwtUtils.java package com.example.security.jwt; import com.example.security.services.UserDetailsImpl; import io.jsonwebtoken.ExpiredJwtException; import io.jsonwebtoken.Jwts; import io.jsonwebtoken.MalformedJwtException; import io.jsonwebtoken.SignatureAlgorithm; import io.jsonwebtoken.SignatureException; import io.jsonwebtoken.UnsupportedJwtException; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import org.springframework.beans.factory.annotation.Value; import org.springframework.security.core.Authentication; import org.springframework.stereotype.Component; import java.util.Date; @Component public class JwtUtils { private static final Logger logger = LoggerFactory.getLogger(JwtUtils.class); @Value("${app.jwtSecret}") private String jwtSecret; @Value("${app.jwtExpirationMs}") private int jwtExpirationMs; public String generateJwtToken(Authentication authentication) { UserDetailsImpl userPrincipal = (UserDetailsImpl) authentication.getPrincipal(); return Jwts.builder().setSubject((userPrincipal.getUsername())).setIssuedAt(new Date()).setExpiration(new Date((new Date()).getTime() + jwtExpirationMs)).signWith(SignatureAlgorithm.HS512, jwtSecret).compact(); } public String getUserNameFromJwtToken(String token) { return Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(token).getBody().getSubject(); } public boolean validateJwtToken(String authToken) { try { Jwts.parser().setSigningKey(jwtSecret).parseClaimsJws(authToken); return true; } catch (SignatureException e) { logger.error("Invalid JWT signature: {}", e.getMessage()); } catch (MalformedJwtException e) { logger.error("Invalid JWT token: {}", e.getMessage()); } catch (ExpiredJwtException e) { logger.error("JWT token is expired: {}", e.getMessage()); } catch (UnsupportedJwtException e) { logger.error("JWT token is unsupported: {}", e.getMessage()); } catch (IllegalArgumentException e) { logger.error("JWT claims string is empty: {}", e.getMessage()); } return false; } } File UserDetailsImpl.java package com.example.security.services; import com.example.models.User; import com.fasterxml.jackson.annotation.JsonIgnore; import org.springframework.security.core.GrantedAuthority; import org.springframework.security.core.authority.SimpleGrantedAuthority; import org.springframework.security.core.userdetails.UserDetails; import java.util.Collection; import java.util.List; import java.util.Objects; import java.util.stream.Collectors; public class UserDetailsImpl implements UserDetails { private static final long serialVersionUID = 1L; private Long id; private String username; private String email; @JsonIgnore private String password; private Collection<? extends GrantedAuthority> authorities; public UserDetailsImpl(Long id, String username, String email, String password, Collection<? 
extends GrantedAuthority> authorities) { this.id = id; this.username = username; this.email = email; this.password = password; this.authorities = authorities; } public static UserDetailsImpl build(User user) { List<GrantedAuthority> authorities = user.getRoles().stream().map(role -> new SimpleGrantedAuthority(role.getName().name())).collect(Collectors.toList()); return new UserDetailsImpl(user.getId(), user.getUsername(), user.getEmail(), user.getPassword(), authorities); } @Override public Collection<? extends GrantedAuthority> getAuthorities() { return authorities; } public Long getId() { return id; } public String getEmail() { return email; } @Override public String getPassword() { return password; } @Override public String getUsername() { return username; } @Override public boolean isAccountNonExpired() { return true; } @Override public boolean isAccountNonLocked() { return true; } @Override public boolean isCredentialsNonExpired() { return true; } @Override public boolean isEnabled() { return true; } @Override public boolean equals(Object o) { if (this == o) return true; if (o == null || getClass() != o.getClass()) return false; UserDetailsImpl user = (UserDetailsImpl) o; return Objects.equals(id, user.id); } } File UserDetailsServiceImpl.java package com.example.security.services; import com.example.models.User; import com.example.repository.UserRepository; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.security.core.userdetails.UserDetails; import org.springframework.security.core.userdetails.UserDetailsService; import org.springframework.security.core.userdetails.UsernameNotFoundException; import org.springframework.stereotype.Service; import org.springframework.transaction.annotation.Transactional; // Original. 
@Service public class UserDetailsServiceImpl implements UserDetailsService { @Autowired UserRepository userRepository; @Override @Transactional public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException { User user = userRepository.findByUsername(username).orElseThrow(() -> new UsernameNotFoundException("User Not Found with username: " + username)); return UserDetailsImpl.build(user); } } file WebSecurityConfig.java package com.example.security; import com.example.security.jwt.AuthEntryPointJwt; import com.example.security.jwt.AuthTokenFilter; import com.example.security.services.UserDetailsServiceImpl; import org.springframework.beans.factory.annotation.Autowired; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.security.authentication.AuthenticationManager; import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder; import org.springframework.security.config.annotation.method.configuration.EnableGlobalMethodSecurity; import org.springframework.security.config.annotation.web.builders.HttpSecurity; import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity; import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter; import org.springframework.security.config.http.SessionCreationPolicy; import org.springframework.security.crypto.bcrypt.BCryptPasswordEncoder; import org.springframework.security.crypto.password.PasswordEncoder; import org.springframework.security.web.authentication.UsernamePasswordAuthenticationFilter; @Configuration @EnableWebSecurity @EnableGlobalMethodSecurity( // securedEnabled = true, // jsr250Enabled = true, prePostEnabled = true) public class WebSecurityConfig extends WebSecurityConfigurerAdapter { @Autowired UserDetailsServiceImpl userDetailsService; @Autowired private AuthEntryPointJwt unauthorizedHandler; @Bean public AuthTokenFilter authenticationJwtTokenFilter() { return new AuthTokenFilter(); } @Override public void configure(AuthenticationManagerBuilder authenticationManagerBuilder) throws Exception { authenticationManagerBuilder.userDetailsService(userDetailsService).passwordEncoder(passwordEncoder()); } @Bean @Override public AuthenticationManager authenticationManagerBean() throws Exception { return super.authenticationManagerBean(); } @Bean public PasswordEncoder passwordEncoder() { return new BCryptPasswordEncoder(); } // If the submitted id != the id of that user's tenant in the database, do not allow the request to proceed. @Override protected void configure(HttpSecurity http) throws Exception { http.cors().and().csrf().disable() .exceptionHandling().authenticationEntryPoint(unauthorizedHandler).and() .sessionManagement().sessionCreationPolicy(SessionCreationPolicy.STATELESS).and() //.authorizeRequests().antMatchers("/api/auth/**", "/swagger-ui/**").permitAll() .authorizeRequests().antMatchers("/api/auth/**", "/swagger-ui/**", "/v3/api-docs/**").permitAll() .antMatchers("/app/**").permitAll() .antMatchers("/api/test/**").permitAll() .anyRequest().authenticated(); http.addFilterBefore(authenticationJwtTokenFilter(), UsernamePasswordAuthenticationFilter.class); //; // .addFilterAfter(new CustomFilter(), BasicAuthenticationFilter.class); // VyDN 2022_07_22 // https://www.baeldung.com/spring-security-custom-filter } } // Add filter before, after: https://stackoverflow.com/a/59000469 Now, I am using Java / JDK 19, Spring Boot 3.0.0.
After upgrading to Spring Boot 3.0.0, it causes a compile error. How to fix error of WebSecurityConfigurerAdapter when upgrade to Spring Boot 3.0.0? Specific to my configuration. Please guide me in rewriting file WebSecurityConfig.java
[ "On Spring Boot 3 WebSecurityConfigurerAdapter is deprecated. So in your case the WebSecurityConfig class should not extend any class and most be implemented by itself. You can implement the userDetailsService by yourself as a @Bean and also set the AuthenticationManager, not just return the super.\nI had the same problem and my solution was just to add @SuppressWarnings(\"deprecation\")\nbefore the @Configuration annotation in the class.\n" ]
[ 1 ]
[]
[]
[ "java", "spring", "spring_boot", "spring_boot_3" ]
stackoverflow_0074666596_java_spring_spring_boot_spring_boot_3.txt
Q: How to put SELECT inside CASE I have a database and I need to display whether an object is ready to work after delivery status "Delivered". And I need to put it inside a function DO $$ BEGIN CASE WHEN status IN ('In Transit','is going') THEN 'Not ready'; ELSE 'Ready'; end case; end $$; Table named Delivery and column status Attempt with IF CREATE OR REPLACE FUNCTION prepare_object () RETURNS SETOF delivery LANGUAGE 'plpgsql' AS $$ DECLARE answer varchar; DECLARE status varchar; BEGIN SELECT * FROM delivery if status IN ('In Transit','is going'); THEN answer = ('Not ready'); ELSE answer = ('Ready'); END IF; END; RETURN; $$ Delivery table looks like this CREATE TABLE Delivery ( Delivery_Code Serial PRIMARY KEY UNIQUE, Status varchar(255) CHECK (Status = 'Delivered' OR Status = 'In Transit' OR Status = 'is going'), Composition_Delivery varchar(255), Date_and_time timestamp, Price money, Supplier_code Serial REFERENCES Provider(Supplier_code), Request_Code Serial REFERENCES Request(Request_Code), Object_ID serial REFERENCES Object(Object_ID) ); A: If you just want to have a column expression answer there is no need for a function or PL/pgSQL. This can be achieved using a CASE expression in SQL: SELECT d.*, case when d.status IN ('In Transit','is going') then 'Not ready' else 'Ready' end as answer FROM delivery d If you don't want to type this every time you retrieve rows from the table, put it into a VIEW.
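A minimal sketch of the VIEW the answer suggests; the view name is an assumption:

-- Wrap the computed column in a view so it is available on every read.
CREATE VIEW delivery_readiness AS
SELECT d.*,
       CASE
           WHEN d.status IN ('In Transit', 'is going') THEN 'Not ready'
           ELSE 'Ready'
       END AS answer
FROM delivery d;

SELECT * FROM delivery_readiness;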
How to put SELECT inside CASE
I have a database and I need to display whether an object is ready to work after delivery status "Delivered". And I need to put it inside a function DO $$ BEGIN CASE WHEN status IN ('In Transit','is going') THEN 'Not ready'; ELSE 'Ready'; end case; end $$; Table named Delivery and column status Attempt with IF CREATE OR REPLACE FUNCTION prepare_object () RETURNS SETOF delivery LANGUAGE 'plpgsql' AS $$ DECLARE answer varchar; DECLARE status varchar; BEGIN SELECT * FROM delivery if status IN ('In Transit','is going'); THEN answer = ('Not ready'); ELSE answer = ('Ready'); END IF; END; RETURN; $$ Delivery table looks like this CREATE TABLE Delivery ( Delivery_Code Serial PRIMARY KEY UNIQUE, Status varchar(255) CHECK (Status = 'Delivered' OR Status = 'In Transit' OR Status = 'is going'), Composition_Delivery varchar(255), Date_and_time timestamp, Price money, Supplier_code Serial REFERENCES Provider(Supplier_code), Request_Code Serial REFERENCES Request(Request_Code), Object_ID serial REFERENCES Object(Object_ID) );
[ "If you just want to have a column expression answer there is no need for a function or PL/pgSQL. This can be achieved using a CASE expression in SQL:\nSELECT d.*, \n case \n when d.status IN ('In Transit','is going') then 'Not ready'\n else 'Ready'\n as answer\nFROM delivery d\n\nIf you don't want to type this every time you retrieve rows from the table, put it into a VIEW.\n" ]
[ 0 ]
[]
[]
[ "case", "function", "postgresql", "sql" ]
stackoverflow_0074666347_case_function_postgresql_sql.txt
Q: Input: check if value is a float; if not, go back to input until a float is written. I fail: "Can not convert string to float" I have a school assignment where I'm making a budget calculator. One of the demands is that the program checks if the input is a float; if not, go back until a float is written. I'm having a super hard time solving this. I've been doing Python for one month so my skills are very limited. It's hard to google. x = float(input('nr')) isinstance(x, float) A: You could do something like this: while True: try: x = float(input('Enter a number: ')) break except ValueError: print('Invalid input. Please try again.') This code uses a while loop to continuously prompt the user for input until a valid float is entered. The try and except statements are used to handle the potential ValueError that can be raised when trying to convert an invalid input to a float.
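A possible next step for the assignment: wrapping that loop in a small helper so every numeric field of the calculator reuses the same validation. The function name and prompts below are hypothetical, not part of the answer above:

def read_float(prompt):
    # Keep asking until float() succeeds, then return the value.
    while True:
        try:
            return float(input(prompt))
        except ValueError:
            print('Invalid input. Please try again.')

income = read_float('Monthly income: ')
expenses = read_float('Monthly expenses: ')
print('Remaining budget:', income - expenses)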
Input: check if value is a float; if not, go back to input until a float is written. I fail: "Can not convert string to float"
I have a school assignment where I'm making a budget calculator. One of the demands is that the program checks if the input is a float; if not, go back until a float is written. I'm having a super hard time solving this. I've been doing Python for one month so my skills are very limited. It's hard to google. x = float(input('nr')) isinstance(x, float)
[ "You could do something like this:\nwhile True:\n try:\n x = float(input('Enter a number: '))\n break\n except ValueError:\n print('Invalid input. Please try again.')\n\nThis code uses a while loop to continuously prompt the user for input until a valid float is entered. The try and except statements are used to handle the potential ValueError that can be raised when trying to convert an invalid input to a float.\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074667280_python.txt
Q: Complex query / Index help for MongoDB query I have a big mongodb query that has some dynamic properties based on filter options, and including filtering between dates. My query is currently causing the scanned objects / returned results ratio to go above 1000. I am sure my query can be improved as well as adding suitable indexes, but I am not sure of the correct indexes / improvements to my query. const userFilter = user ? { assignee: new Types.ObjectId(user) } : null; const clientFilter = client ? { client: new Types.ObjectId(client) } : null; Collection.aggregate([ { $match: { ...userFilter, ...clientFilter, status: 1, customer: new Types.ObjectId(customer), }, }, { $match: { $or: [ { end: { $gte: new Date(end), }, start: { $lte: new Date(start), }, }, { end: { $gte: new Date(end), }, start: { $lte: new Date(end), $gte: new Date(start), }, }, { start: { $lte: new Date(start), }, end: { $lte: new Date(end), $gte: new Date(start), }, }, { start: { $gte: new Date(start), }, end: { $lte: new Date(end), }, }, { start: { $lte: new Date(end), }, // @ts-ignore start: { $gte: new Date(start), }, }, { end: { $lte: new Date(end), }, // @ts-ignore end: { $gte: new Date(start), }, }, ], }, }, { $sort: { createdAt: 1 }, }, } Depending on filter options we could be filtering by assignee and/or client. We also only want returned results where the start or end date of the document falls within the start and end date filters. I have tried a few variations of the query itself, as well as adding some compound indexes, but had no real success improving the query or indexes. A: This is not an answer, but a bit too long for a comment: Do you really like conditions like { $match: { null, null, status: 1, customer: new Types.ObjectId(customer), }, } Or should you better use: let match = { status: 1, customer: new Types.ObjectId(customer) }; if (user) match.assignee = new Types.ObjectId(user); if (client) match.client = new Types.ObjectId(client); Collection.aggregate([ { $match: match }, ... I never tried, perhaps {$match: null} prevents any index use.
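On the indexing side, which the reply above does not cover, one possible starting point is a compound index with the equality fields first and the range fields last. The field names are taken from the question's $match stages; this is an untested suggestion, not a verified fix:

// Equality filters (customer, status) first, range filters (start, end) last.
db.collection.createIndex({ customer: 1, status: 1, start: 1, end: 1 });

// Assuming every document has start <= end, the six $or branches all test
// interval overlap and usually reduce to this single condition:
// { start: { $lte: new Date(end) }, end: { $gte: new Date(start) } }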
Complex query / Index help for MongoDB query
I have a big mongodb query that has some dynamic properties based on filter options, and including filtering between dates. My query is currently causing the scanned objects / returned results ratio to go above 1000. I am sure my query can be improved as well as adding suitable indexes, but I am not sure of the correct indexes / improvements to my query. const userFilter = user ? { assignee: new Types.ObjectId(user) } : null; const clientFilter = client ? { client: new Types.ObjectId(client) } : null; Collection.aggregate([ { $match: { ...userFilter, ...clientFilter, status: 1, customer: new Types.ObjectId(customer), }, }, { $match: { $or: [ { end: { $gte: new Date(end), }, start: { $lte: new Date(start), }, }, { end: { $gte: new Date(end), }, start: { $lte: new Date(end), $gte: new Date(start), }, }, { start: { $lte: new Date(start), }, end: { $lte: new Date(end), $gte: new Date(start), }, }, { start: { $gte: new Date(start), }, end: { $lte: new Date(end), }, }, { start: { $lte: new Date(end), }, // @ts-ignore start: { $gte: new Date(start), }, }, { end: { $lte: new Date(end), }, // @ts-ignore end: { $gte: new Date(start), }, }, ], }, }, { $sort: { createdAt: 1 }, }, } Depending on filter options we could be filtering by assignee and/or client. We also only want returned results where the start or end date of the document falls within the start and end date filters. I have tried a few variations of the query itself, as well as adding some compound indexes, but had no real success improving the query or indexes.
[ "This is not an answer, but a bit too long for a comment:\nDo you really like conditions like\n{\n $match: {\n null,\n null,\n status: 1,\n customer: new Types.ObjectId(customer),\n },\n}\n\nOr should you better use:\nlet match = {\n status: 1,\n customer: new Types.ObjectId(customer)\n};\n\nif (user) \n match.assignee = new Types.ObjectId(user);\nif (client) \n match.client = new Types.ObjectId(client);\n\nCollection.aggregate([\n { $match: match },\n ...\n\nI never tried, perhaps {$match: null} prevents any index use.\n" ]
[ 0 ]
[]
[]
[ "mongodb", "mongodb_atlas", "node.js" ]
stackoverflow_0074666898_mongodb_mongodb_atlas_node.js.txt
Q: How to pass shared props in several child components in Vue? In React I can declare some object... const shared_props = { prop1: 'value1', prop2: 'value2', prop3: 'value3', }; ...and then pass it to several child components like this: <ChildComponent1 {...shared_props} /> <ChildComponent2 {...shared_props} /> <ChildComponent3 {...shared_props} /> How can I do this in Vue? Note that I'm not interested in v-bind:shared_props="shared_props" because it's not the same. A: You can do it in this way : <ChildComponent1 v-bind="shared_props" /> see more answers: https://github.com/vuejs/vue/issues/4962
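A minimal single-file-component sketch of that binding, assuming Vue 3 with <script setup> and hypothetical component paths:

<script setup>
import ChildComponent1 from './ChildComponent1.vue'
import ChildComponent2 from './ChildComponent2.vue'

const shared_props = { prop1: 'value1', prop2: 'value2', prop3: 'value3' }
</script>

<template>
  <!-- v-bind with no argument spreads every key of the object as a prop -->
  <ChildComponent1 v-bind="shared_props" />
  <ChildComponent2 v-bind="shared_props" />
</template>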
How to pass shared props in several child components in Vue?
In React I can declare some object... const shared_props = { prop1: 'value1', prop2: 'value2', prop3: 'value3', }; ...and then pass it to several child components like this: <ChildComponent1 {...shared_props} /> <ChildComponent2 {...shared_props} /> <ChildComponent3 {...shared_props} /> How can I do this in Vue? Note that I'm not interested in v-bind:shared_props="shared_props" because it's not the same.
[ "You can do it in this way :\n<ChildComponent1 v-bind=\"shared_props\" />\n\nsee more answers: https://github.com/vuejs/vue/issues/4962\n" ]
[ 2 ]
[]
[]
[ "reactjs", "vue.js", "vuejs2", "vuejs3" ]
stackoverflow_0074667238_reactjs_vue.js_vuejs2_vuejs3.txt
Q: Typewriter JS: how to delete chars fast while typing chars at normal speed I am using Typewriterjs on my website from this link. https://safi.me.uk/typewriterjs/. It types chars at normal speed and removes them at normal speed. I want to remove chars at a fast speed, but I don't know which function handles this scenario. Please help. var app = document.getElementById('app'); var typewriter = new Typewriter(app, { loop: true }); typewriter.typeString('Hello World!') .pauseFor(500) .deleteAll() .typeString('Strings can be removed') .pauseFor(500) .start(); <script src="https://cdnjs.cloudflare.com/ajax/libs/TypewriterJS/1.0.0/typewriter.min.js"></script> <div id='app'></div> A: You can pass a speed parameter into the deleteAll() method. const instance = new Typewriter('#typewriter', { loop: true, }); instance.typeString('Hello world!') .pauseFor(1000) .deleteAll(15) .typeString('Another message here...') .pauseFor(1000) .start(); <script src="https://unpkg.com/[email protected]/dist/core.js"></script> <div id="typewriter"></div> A: As an alternate to @James Coyle's answer, you can set deleteSpeed as a part of config to override the default delete speed. By default, it is set between 50-150ms: this.default_options = { strings: false, cursorClassName: 'typewriter-cursor', cursor: '|', animateCursor: true, blinkSpeed: 50, typingSpeed: 'natural', deleteSpeed: 'natural', charSpanClassName: 'typewriter-char', wrapperClassName: 'typewriter-wrapper', loop: false, autoStart: false, devMode: false }; ..... if (delete_speed == 'natural') { delete_speed = self._randomInteger(50, 150); } Sample: var app = document.getElementById('app'); var typewriter = new Typewriter(app, { loop: true, deleteSpeed: 10 }); typewriter.typeString('Hello World!') .pauseFor(500) .deleteAll() .typeString('Strings can be removed') .pauseFor(500) .start(); <script src="https://cdnjs.cloudflare.com/ajax/libs/TypewriterJS/1.0.0/typewriter.min.js"></script> <div id='app'></div> A: Could be helpful information if you want to remove text instantly: There is a callback function called "callFunction". In this function you can access the elements via state prop and just set the innerText of the element to an empty string. If you want to use this approach I recommend using "pauseFor" before deleting instantly and before writing some new text to get a smoother transition. Cheers. const instance = new Typewriter('#typewriter', { loop: true, }); instance.typeString('This text will be removed by element!') .pauseFor(1000) .callFunction((state) => { state.elements.wrapper.innerText = ''; }) .pauseFor(1000) .typeString('This text will be removed by deleteAll() ...') .deleteAll(15) .pauseFor(1000) .start(); <script src="https://unpkg.com/[email protected]/dist/core.js"></script> <div id="typewriter"></div>
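Putting the first two answers together -- a global deleteSpeed for every deletion plus the per-call deleteAll(speed) argument for one fast deletion. This is a sketch against the v1 API quoted above, assuming the per-call argument takes precedence over the global option:

var app = document.getElementById('app');

var typewriter = new Typewriter(app, {
  loop: true,
  deleteSpeed: 30 // global default for deletions
});

typewriter.typeString('Hello World!')
  .pauseFor(500)
  .deleteAll(5) // per-call speed for this deletion only
  .typeString('Strings can be removed')
  .pauseFor(500)
  .start();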
Typewriter JS: how to delete chars fast while typing chars at normal speed
I am using Typewriterjs on my website from this link. https://safi.me.uk/typewriterjs/. It types chars at normal speed and removes them at normal speed. I want to remove chars at a fast speed, but I don't know which function handles this scenario. Please help. var app = document.getElementById('app'); var typewriter = new Typewriter(app, { loop: true }); typewriter.typeString('Hello World!') .pauseFor(500) .deleteAll() .typeString('Strings can be removed') .pauseFor(500) .start(); <script src="https://cdnjs.cloudflare.com/ajax/libs/TypewriterJS/1.0.0/typewriter.min.js"></script> <div id='app'></div>
[ "You can pass a speed parameter into the deleteAll() method.\n\n\nconst instance = new Typewriter('#typewriter', {\r\n loop: true,\r\n});\r\n\r\ninstance.typeString('Hello world!')\r\n .pauseFor(1000)\r\n .deleteAll(15)\r\n .typeString('Another message here...')\r\n .pauseFor(1000)\r\n .start();\n<script src=\"https://unpkg.com/[email protected]/dist/core.js\"></script>\r\n<div id=\"typewriter\"></div>\n\n\n\n", "As an alternate to @James Coyle's answer, you can set deleteSpeed as a part of config to override the default delete speed.\nBy default, it is set between 50-150ms:\nthis.default_options = {\n strings: false,\n cursorClassName: 'typewriter-cursor',\n cursor: '|',\n animateCursor: true,\n blinkSpeed: 50,\n typingSpeed: 'natural',\n deleteSpeed: 'natural',\n charSpanClassName: 'typewriter-char',\n wrapperClassName: 'typewriter-wrapper',\n loop: false,\n autoStart: false,\n devMode: false\n};\n\n.....\n\nif (delete_speed == 'natural') {\n delete_speed = self._randomInteger(50, 150);\n}\n\n\nSample:\n\n\nvar app = document.getElementById('app');\r\n\r\nvar typewriter = new Typewriter(app, {\r\n loop: true,\r\n deleteSpeed: 10\r\n});\r\n\r\ntypewriter.typeString('Hello World!')\r\n .pauseFor(500)\r\n .deleteAll()\r\n .typeString('Strings can be removed')\r\n .pauseFor(500)\r\n .start();\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/TypewriterJS/1.0.0/typewriter.min.js\"></script>\r\n<div id='app'></div>\n\n\n\n", "Could be helpful information if you want to remove text instantly: There is a callback function called \"callFunction\". In this function you can access the elements via state prop and just set the innerText of the element to an empty string. If you want to use this approach I recommend using \"pauseFor\" before deleting instantly and before writing some new text to get a smoother transition. Cheers.\n\n\nconst instance = new Typewriter('#typewriter', {\n loop: true,\n});\n\ninstance.typeString('This text will be removed by element!')\n .pauseFor(1000)\n .callFunction((state) => {\n state.elements.wrapper.innerText = '';\n })\n .pauseFor(1000)\n .typeString('This text will be removed by deleteAll() ...')\n .deleteAll(15)\n .pauseFor(1000)\n .start();\n<script src=\"https://unpkg.com/[email protected]/dist/core.js\"></script>\n<div id=\"typewriter\"></div>\n\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "javascript" ]
stackoverflow_0054400468_javascript.txt
Q: Flutter : Good way of adding borders to CustomPaint I am trying to create a more complex customPaint and want to add a border to it. From the picture you can see the shape (In blue) and the partial border (in white) The way I am creating the white border is with a secondary customPaint right now. My question is, is there an easier way of adding a border to the existing customPaint, as doing the corners of the rectangle can be annoying by itself and doesn't seem like an elegant way of doing it. My code right now, without the border Path.combine( PathOperation.intersect, Path() ..addRRect(RRect.fromLTRBR( xCanvasLeft, xCanvasTop, xCanvasRight, xCanvasBot, const Radius.circular(canvasRadius))), Path() ..addOval(Rect.fromCircle( center: Offset(xHole, yHoleTop), radius: holeRadius)) ..addOval(Rect.fromCircle( center: Offset(xHole, yHoleBot), radius: holeRadius)) ..close(), ), paint, ); EDIT: I am also wondering how resource heavy custom paint is? A: I simply use the same painter for adding the border with the stroke width property. Paint paintBorder = Paint() ..style = PaintingStyle.stroke ..strokeWidth = 6.0 ..color = Colors.blue; Paint paintFill = Paint(); Then while drawing the path just draw the fill before the border like this canvas.drawPath(path, paintFill); canvas.drawPath(path, paintBorder);
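A minimal painter sketch using that fill-then-stroke order on the question's rounded rectangle (the hole cut-outs and exact coordinates are omitted for brevity). On the cost question: CustomPaint itself is generally cheap; what matters is how often it repaints, so returning false from shouldRepaint for static shapes -- and, if needed, isolating the widget with a RepaintBoundary -- keeps it light:

import 'package:flutter/material.dart';

class BorderedShapePainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    final fill = Paint()..color = Colors.blue;
    final border = Paint()
      ..style = PaintingStyle.stroke
      ..strokeWidth = 6.0
      ..color = Colors.white;
    final path = Path()
      ..addRRect(RRect.fromLTRBR(
          0, 0, size.width, size.height, const Radius.circular(16)));
    // Fill first, then stroke the same path so the border sits on top.
    canvas.drawPath(path, fill);
    canvas.drawPath(path, border);
  }

  @override
  bool shouldRepaint(covariant BorderedShapePainter oldDelegate) => false;
}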
Flutter : Good way of adding borders to CustomPaint
I am trying to create a more complex customPaint and want to add a border to it. From the picture you can see the shape (In blue) and the partial border (in white) The way I am creating the white border is with a secondary customPaint right now. My question is, is there an easier way of adding a border to the existing customPaint, as doing the corners of the rectangle can be annoying by itself and doesn't seem like an elegant way of doing it. My code right now, without the border Path.combine( PathOperation.intersect, Path() ..addRRect(RRect.fromLTRBR( xCanvasLeft, xCanvasTop, xCanvasRight, xCanvasBot, const Radius.circular(canvasRadius))), Path() ..addOval(Rect.fromCircle( center: Offset(xHole, yHoleTop), radius: holeRadius)) ..addOval(Rect.fromCircle( center: Offset(xHole, yHoleBot), radius: holeRadius)) ..close(), ), paint, ); EDIT: I am also wondering how resource heavy custom paint is?
[ "I simply use the same painter for adding the border with the stroke width property.\nPaint paintBorder = Paint()\n ..style = PaintingStyle.stroke\n ..strokeWidth = 6.0\n ..color = Colors.blue;\nPaint paintFill = Paint();\n\nThen while drawing the path just draw the fill before the border like this\ncanvas.drawPath(path, paintFill);\ncanvas.drawPath(path, paintBorder);\n\n" ]
[ 0 ]
[]
[]
[ "custom_painting", "flutter", "flutter_custompainter" ]
stackoverflow_0074601561_custom_painting_flutter_flutter_custompainter.txt
Q: How to make loop for panels background color from a module VB.NET I have this code in a module in VB.NET, so PP1 is a panel Public Sub changecolor(ByRef curruntpanel As Panel) On Error Resume Next Main_Form.PP1.BackColor = SystemColors.Control Main_Form.PP2.BackColor = SystemColors.Control Main_Form.PP3.BackColor = SystemColors.Control Main_Form.PP4.BackColor = SystemColors.Control Main_Form.PP5.BackColor = SystemColors.Control Main_Form.PP6.BackColor = SystemColors.Control Main_Form.PP7.BackColor = SystemColors.Control Main_Form.PP8.BackColor = SystemColors.Control Main_Form.PP9.BackColor = SystemColors.Control Main_Form.PP10.BackColor = SystemColors.Control curruntpanel.BackColor = Color.White End Sub and I want to replace it with loop code, so I tried the next code but it's not working for me Dim i As Integer For i = 1 To 10 Main_Form.Controls("pp" & i).BackColor = SystemColors.Control Next curruntpanel.BackColor = Color.White This code is in a module too. So can anyone give me the right loop code? I want help turning those lines into a loop. A: Try changing the Name of the control in the loop from, pp to PP. Here's an example: Public Sub changecolor(ByRef curruntpanel As Panel) Dim i As Integer For i = 1 To 10 Main_Form.Controls("PP" & i).BackColor = SystemColors.Control Next curruntpanel.BackColor = Color.White End Sub
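If the panels sit inside another container rather than directly on the form, the plain Controls("PP" & i) indexer returns Nothing; a hedged alternative is Controls.Find, which can search nested children:

Public Sub ChangeColor(ByVal currentPanel As Panel)
    For i As Integer = 1 To 10
        ' True = search all child containers, not only the form's top level.
        Dim matches = Main_Form.Controls.Find("PP" & i, True)
        If matches.Length > 0 Then
            matches(0).BackColor = SystemColors.Control
        End If
    Next
    currentPanel.BackColor = Color.White
End Sub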
How to make loop for panels background color from a module VB.NET
I have this code in a module in VB.NET, so PP1 is a panel Public Sub changecolor(ByRef curruntpanel As Panel) On Error Resume Next Main_Form.PP1.BackColor = SystemColors.Control Main_Form.PP2.BackColor = SystemColors.Control Main_Form.PP3.BackColor = SystemColors.Control Main_Form.PP4.BackColor = SystemColors.Control Main_Form.PP5.BackColor = SystemColors.Control Main_Form.PP6.BackColor = SystemColors.Control Main_Form.PP7.BackColor = SystemColors.Control Main_Form.PP8.BackColor = SystemColors.Control Main_Form.PP9.BackColor = SystemColors.Control Main_Form.PP10.BackColor = SystemColors.Control curruntpanel.BackColor = Color.White End Sub and I want to replace it with loop code, so I tried the next code but it's not working for me Dim i As Integer For i = 1 To 10 Main_Form.Controls("pp" & i).BackColor = SystemColors.Control Next curruntpanel.BackColor = Color.White This code is in a module too. So can anyone give me the right loop code? I want help turning those lines into a loop.
[ "Try changing the Name of the control in the loop from, pp to PP. Here's an example:\nPublic Sub changecolor(ByRef curruntpanel As Panel)\n Dim i As Integer\n\n For i = 1 To 10\n Main_Form.Controls(\"PP\" & i).BackColor = SystemColors.Control\n Next\n curruntpanel.BackColor = Color.White\nEnd Sub\n\n" ]
[ 0 ]
[]
[]
[ "vb.net" ]
stackoverflow_0074667207_vb.net.txt
Q: Redis Sentinel master not downgraded to slave immediately I have an architecture with three Redis instances (one master and two slaves) and three Sentinel instances. In front of it there is a HaProxy. All works well until the master Redis instance goes down. The new master is properly chosen by Sentinel. However, the old master (which is now down) is not reconfigured to be a slave. As a result, when that instance is up again I have two masters for a short period of time (about 11 seconds). After that time that instance which was brought up is properly downgraded to slave. Shouldn't it work that way, that when the master goes down it is downgraded to slave straight away? Having that, when it was up again, it would be slave immediately. I know that (since Redis 2.8?) there is that CONFIG REWRITE functionality so the config cannot be modified when the Redis instance is down. Having two masters for some time is a problem for me because the HaProxy for that short period of time instead of sending requests to one master Redis, it does the load balancing between those two masters. Is there any way to downgrade the failed master to slave immediately? Obviously, I changed the Sentinel timeouts. Here are some logs from Sentinel and Redis instances after the master goes down: Sentinel 81358:X 23 Jan 22:12:03.088 # +sdown master redis-ha 127.0.0.1 63797.0.0.1 26381 @ redis-ha 127.0.0.1 6379 81358:X 23 Jan 22:12:03.149 # +new-epoch 1 81358:X 23 Jan 22:12:03.149 # +vote-for-leader 6b5b5882443a1d738ab6849ecf4bc6b9b32ec142 1 81358:X 23 Jan 22:12:03.174 # +odown master redis-ha 127.0.0.1 6379 #quorum 3/2 81358:X 23 Jan 22:12:03.174 # Next failover delay: I will not start a failover before Sat Jan 23 22:12:09 2016 81358:X 23 Jan 22:12:04.265 # +config-update-from sentinel 127.0.0.1:26381 127.0.0.1 26381 @ redis-ha 127.0.0.1 6379 81358:X 23 Jan 22:12:04.265 # +switch-master redis-ha 127.0.0.1 6379 127.0.0.1 6381 81358:X 23 Jan 22:12:04.266 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ redis-ha 127.0.0.1 6381 81358:X 23 Jan 22:12:04.266 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ redis-ha 127.0.0.1 6381 81358:X 23 Jan 22:12:06.297 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ redis-ha 127.0.0.1 6381 Redis 81354:S 23 Jan 22:12:03.341 * MASTER <-> SLAVE sync started 81354:S 23 Jan 22:12:03.341 # Error condition on socket for SYNC: Connection refused 81354:S 23 Jan 22:12:04.265 * Discarding previously cached master state. 81354:S 23 Jan 22:12:04.265 * SLAVE OF 127.0.0.1:6381 enabled (user request from 'id=7 addr=127.0.0.1:57784 fd=10 name=sentinel-6b5b5882-cmd age=425 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=14 qbuf-free=32754 obl=36 oll=0 omem=0 events=rw cmd=exec') 81354:S 23 Jan 22:12:04.265 # CONFIG REWRITE executed with success. 81354:S 23 Jan 22:12:04.371 * Connecting to MASTER 127.0.0.1:6381 81354:S 23 Jan 22:12:04.371 * MASTER <-> SLAVE sync started 81354:S 23 Jan 22:12:04.371 * Non blocking connect for SYNC fired the event. 81354:S 23 Jan 22:12:04.371 * Master replied to PING, replication can continue... 
81354:S 23 Jan 22:12:04.371 * Partial resynchronization not possible (no cached master) 81354:S 23 Jan 22:12:04.372 * Full resync from master: 07b3c8f64bbb9076d7e97799a53b8b290ecf470b:1 81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: receiving 860 bytes from master 81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: Flushing old data 81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: Loading DB in memory 81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: Finished with success A: I was also getting the same error when I want to switch master in redis-cluster using sentinel. +vote-for-leader xxxxxxxxxxxxxxxxxxxxxxxx8989 10495 Next failover delay: I will not start a failover before Fri Aug 2 23:23:44 2019 After resetting sentinel. Cluster works as expected SENTINEL RESET * or SENTINEL RESET mymaster Run above command in all sentinel server. A: In the event a Redis node goes down, when/if it recovers, it will recover with the same role it had prior to going down. The Sentinel cannot reconfigure the node if it is unable to ping it. So, there's a brief period of time between the node coming back up and the Sentinel acknowledging and reconfiguring it. This explains the multi-master state. If you are set on using Haproxy, one workaround would be to reconfigure the Redis node's role prior to starting the process. Redis will boot as a slave as long as there's a SLAVEOF entry in the redis.conf. The primary issue with this workaround is that it doesn't solve network partition scenarios. Hope that helps. A: If using HAProxy you can try to query the uptime_in_seconds something like this: backend redis mode tcp balance first timeout queue 5s default-server check inter 1s fall 2 rise 2 maxconn 100 option tcp-check tcp-check connect tcp-check send AUTH\ <secret>\r\n tcp-check expect string +OK tcp-check send PING\r\n tcp-check expect string +PONG tcp-check send info\ replication\r\n tcp-check expect string role:master tcp-check send info\ server\r\n tcp-check expect rstring uptime_in_seconds:\d{2,} tcp-check send QUIT\r\n tcp-check expect string +OK server redis-1 10.0.0.10:9736 server redis-2 10.0.0.20:9736 server redis-3 10.0.0.30:9736 Notice the: tcp-check expect rstring uptime_in_seconds:\d{2,} if uptime is not > 10 seconds, the node will not be added A: Solution This can be resolved by making use of the rise option in your HAProxy config. default-server check inter 1s fall 2 rise 30 # OR server redis-1 127.0.0.1:6379 check inter 1s fall 2 rise 30 This sets the number of successful checks that must pass for a server to be considered UP. As such this can successfully delay a re-joining Redis node from being considered UP and give Sentinel a chance to change the node's role. Important Trade-off The trade-off with this approach, is that your fail-overs will take longer to be respected by HAProxy as you are adding in an extra delay. This delay applies to both your re-joining node after a failure and also your existing slave nodes that are promoted to role:master. Ultimately you will need to make the decision between which option is better for you; having 2 masters momentarily, or taking longer to fail between nodes.
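A sketch of the SLAVEOF workaround mentioned above, applied to the recovering node's configuration before it restarts; the master address is a placeholder:

# redis.conf on the node about to rejoin -- boot as a replica, not a master:
slaveof 10.0.0.20 6379
# On Redis 5 and later the same directive is spelled:
# replicaof 10.0.0.20 6379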
Redis Sentinel master not downgraded to slave immediately
I have an architecture with three Redis instances (one master and two slaves) and three Sentinel instances. In front of them there is an HAProxy. All works well until the master Redis instance goes down. The new master is properly chosen by Sentinel. However, the old master (which is now down) is not reconfigured to be a slave. As a result, when that instance is up again I have two masters for a short period of time (about 11 seconds). After that time the instance that was brought up is properly downgraded to slave.
Shouldn't it work that way, that when the master goes down it is downgraded to slave straight away? That way, when it came up again, it would be a slave immediately.
I know that (since Redis 2.8?) there is the CONFIG REWRITE functionality, so the config cannot be modified while the Redis instance is down.
Having two masters for some time is a problem for me because, for that short period, HAProxy load-balances requests between those two masters instead of sending them to the single master.
Is there any way to downgrade the failed master to slave immediately?
Obviously, I changed the Sentinel timeouts.
Here are some logs from Sentinel and Redis instances after the master goes down:
Sentinel
81358:X 23 Jan 22:12:03.088 # +sdown master redis-ha 127.0.0.1 63797.0.0.1 26381 @ redis-ha 127.0.0.1 6379
81358:X 23 Jan 22:12:03.149 # +new-epoch 1
81358:X 23 Jan 22:12:03.149 # +vote-for-leader 6b5b5882443a1d738ab6849ecf4bc6b9b32ec142 1
81358:X 23 Jan 22:12:03.174 # +odown master redis-ha 127.0.0.1 6379 #quorum 3/2
81358:X 23 Jan 22:12:03.174 # Next failover delay: I will not start a failover before Sat Jan 23 22:12:09 2016
81358:X 23 Jan 22:12:04.265 # +config-update-from sentinel 127.0.0.1:26381 127.0.0.1 26381 @ redis-ha 127.0.0.1 6379
81358:X 23 Jan 22:12:04.265 # +switch-master redis-ha 127.0.0.1 6379 127.0.0.1 6381
81358:X 23 Jan 22:12:04.266 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ redis-ha 127.0.0.1 6381
81358:X 23 Jan 22:12:04.266 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ redis-ha 127.0.0.1 6381
81358:X 23 Jan 22:12:06.297 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ redis-ha 127.0.0.1 6381
Redis
81354:S 23 Jan 22:12:03.341 * MASTER <-> SLAVE sync started
81354:S 23 Jan 22:12:03.341 # Error condition on socket for SYNC: Connection refused
81354:S 23 Jan 22:12:04.265 * Discarding previously cached master state.
81354:S 23 Jan 22:12:04.265 * SLAVE OF 127.0.0.1:6381 enabled (user request from 'id=7 addr=127.0.0.1:57784 fd=10 name=sentinel-6b5b5882-cmd age=425 idle=0 flags=x db=0 sub=0 psub=0 multi=3 qbuf=14 qbuf-free=32754 obl=36 oll=0 omem=0 events=rw cmd=exec')
81354:S 23 Jan 22:12:04.265 # CONFIG REWRITE executed with success.
81354:S 23 Jan 22:12:04.371 * Connecting to MASTER 127.0.0.1:6381
81354:S 23 Jan 22:12:04.371 * MASTER <-> SLAVE sync started
81354:S 23 Jan 22:12:04.371 * Non blocking connect for SYNC fired the event.
81354:S 23 Jan 22:12:04.371 * Master replied to PING, replication can continue...
81354:S 23 Jan 22:12:04.371 * Partial resynchronization not possible (no cached master)
81354:S 23 Jan 22:12:04.372 * Full resync from master: 07b3c8f64bbb9076d7e97799a53b8b290ecf470b:1
81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: receiving 860 bytes from master
81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: Flushing old data
81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: Loading DB in memory
81354:S 23 Jan 22:12:04.467 * MASTER <-> SLAVE sync: Finished with success
[ "I was also getting the same error when I want to switch master in redis-cluster using sentinel.\n\n+vote-for-leader xxxxxxxxxxxxxxxxxxxxxxxx8989 10495\n Next failover delay: I will not start a failover before Fri Aug 2 23:23:44 2019\n\nAfter resetting sentinel. Cluster works as expected\nSENTINEL RESET *\n\nor \nSENTINEL RESET mymaster\nRun above command in all sentinel server.\n", "In the event a Redis node goes down, when/if it recovers, it will recover with the same role it had prior to going down. The Sentinel cannot reconfigure the node if it is unable to ping it. So, there's a brief period of time between the node coming back up and the Sentinel acknowledging and reconfiguring it. This explains the multi-master state.\nIf you are set on using Haproxy, one workaround would be to reconfigure the Redis node's role prior to starting the process. Redis will boot as a slave as long as there's a SLAVEOF entry in the redis.conf. The primary issue with this workaround is that it doesn't solve network partition scenarios.\nHope that helps.\n", "If using HAProxy you can try to query the uptime_in_seconds something like this:\nbackend redis\n mode tcp\n balance first\n timeout queue 5s\n default-server check inter 1s fall 2 rise 2 maxconn 100\n option tcp-check\n tcp-check connect\n tcp-check send AUTH\\ <secret>\\r\\n\n tcp-check expect string +OK\n tcp-check send PING\\r\\n\n tcp-check expect string +PONG\n tcp-check send info\\ replication\\r\\n\n tcp-check expect string role:master\n tcp-check send info\\ server\\r\\n\n tcp-check expect rstring uptime_in_seconds:\\d{2,}\n tcp-check send QUIT\\r\\n\n tcp-check expect string +OK\n server redis-1 10.0.0.10:9736\n server redis-2 10.0.0.20:9736\n server redis-3 10.0.0.30:9736\n\nNotice the:\n tcp-check expect rstring uptime_in_seconds:\\d{2,}\n\nif uptime is not > 10 seconds, the node will not be added\n", "Solution\nThis can be resolved by making use of the rise option in your HAProxy config.\ndefault-server check inter 1s fall 2 rise 30\n\n# OR\n\nserver redis-1 127.0.0.1:6379 check inter 1s fall 2 rise 30\n\nThis sets the number of successful checks that must pass for a server to be considered UP. As such this can successfully delay a re-joining Redis node from being considered UP and give Sentinel a chance to change the node's role.\nImportant Trade-off\nThe trade-off with this approach, is that your fail-overs will take longer to be respected by HAProxy as you are adding in an extra delay. This delay applies to both your re-joining node after a failure and also your existing slave nodes that are promoted to role:master. Ultimately you will need to make the decision between which option is better for you; having 2 masters momentarily, or taking longer to fail between nodes.\n" ]
[ 3, 0, 0, 0 ]
[]
[]
[ "redis", "redis_sentinel", "sentinel" ]
stackoverflow_0034969216_redis_redis_sentinel_sentinel.txt
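As a rough sketch of the SLAVEOF workaround from the answers above (the master name, ports, and paths here are assumptions, not taken from the original posts), a node that may be rejoining after a crash can be forced to boot as a replica before it starts:

# ask any Sentinel who the current master is, then pre-seed the replica role
# (REPLICAOF is the modern spelling; older Redis versions use SLAVEOF)
MASTER_IP=$(redis-cli -p 26379 SENTINEL get-master-addr-by-name redis-ha | head -n 1)
MASTER_PORT=$(redis-cli -p 26379 SENTINEL get-master-addr-by-name redis-ha | tail -n 1)
echo "replicaof $MASTER_IP $MASTER_PORT" >> /etc/redis/redis.conf
redis-server /etc/redis/redis.conf

With this in place the recovered node never announces itself as role:master, so HAProxy's health check fails for it from the very first probe.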
Q: What is the correct definition of 'abstract' in OOP? I am trying to understand the definition of 'abstraction' in OOP. I have come across a few main definitions. Are they all valid? Is one of them wrong? I'm confused.
Definition 1: Abstraction is the process of modeling real-world objects into a programming language
Abstraction is not about interfaces or abstract classes. Abstraction is the process of modeling real-world objects in the programming language. Hence interfaces and abstract classes are just two techniques used in this process. In an Object-Oriented Programming language like Java, everything is an abstraction: interface, class, field, method, variable, etc. Abstraction is the fundamental concept on which other concepts rely: encapsulation, inheritance, and polymorphism
Definition 2: Abstraction is one of the key concepts of object-oriented programming (OOP) languages. Its main goal is to handle complexity by hiding unnecessary details from the user. That enables the user to implement more complex logic on top of the provided abstraction without understanding or even thinking about all the hidden complexity.
A: Both definitions are valid. The differences between the definitions are largely due to the context. The first is about the role of abstraction in modelling. The second is about the role of abstraction in programming.
My advice is to not get hung up on looking for the "correct" definition. These terms have a range of meanings and interpretations. And there is no official arbiter to tell you which definition is correct. This is NOT mathematics ...
A: Definition 1 is too narrow. "Modeling real-world objects into a programming language" can be regarded as abstraction (although I would rather use the term modeling here), but there are many more forms of abstraction.
Definition 2 is better.
A: Definition 1 doesn't exactly explain abstraction. The process of modelling real-world objects in a programming language is called OOP. Abstraction is an OOP concept.
Definition 2 does a better job at explaining abstraction.
But it's better to understand the concept in your own thoughts and words.
Think about the meaning of the word abstraction in English. It's defined as "the quality of dealing with ideas rather than events". In OOP it's similar: abstraction is a concept where you show only relevant data and conceal unnecessary details of an object from the user.
A: [Purifying] does not mean [Abstraction]; Purifying is the important rule of the Inheritance pillar.
[Hiding] does not mean [Abstraction]; Hiding is the important rule of the Encapsulation pillar.
--------
—> Hiding or Purifying or Covering or Encapsulating never = Abstraction <—
The first definition is the correct answer:
Abstraction is the process of modeling real-world objects in the programming language.
—<< Abstraction = Classes = Blueprints = Modelling some Objects >>—
Abstraction is the first step, rule & pillar of the OOP pillars…
--------
which means [Classes and Interfaces] …
[Abstraction] is no more than [Modelling] …
In general: a home construction engineer cannot start constructing any home without a blueprint!!
Abstraction =>> (it's the first step into OOP) and (the pillar of all pillars).
Without Abstraction there are no Classes, no Interfaces, no Abstract Classes and automatically no Objects..
----------
=> (no Classes[Abstraction], no Objects[REAL THINGS] !!)
What is the correct definition of 'abstract' in OOP?
I am trying to understand the definition of 'abstraction' in OOP. I have come across a few main definitions. Are they all valid? Is one of them wrong? I'm confused.
Definition 1: Abstraction is the process of modeling real-world objects into a programming language
Abstraction is not about interfaces or abstract classes. Abstraction is the process of modeling real-world objects in the programming language. Hence interfaces and abstract classes are just two techniques used in this process. In an Object-Oriented Programming language like Java, everything is an abstraction: interface, class, field, method, variable, etc. Abstraction is the fundamental concept on which other concepts rely: encapsulation, inheritance, and polymorphism
Definition 2: Abstraction is one of the key concepts of object-oriented programming (OOP) languages. Its main goal is to handle complexity by hiding unnecessary details from the user. That enables the user to implement more complex logic on top of the provided abstraction without understanding or even thinking about all the hidden complexity.
[ "Both definitions are valid. The differences between the definitions are largely due to the context. The first is about the role of abstraction in modelling. The second is about the role of abstraction in programming.\nMy advice is to not get hung up on looking for the \"correct\" definition. These terms have a range of meanings and interpretations. And there is no official arbiter to tell you which definition is correct. This is NOT mathematics ...\n", "Definition 1 is too narrow. \"modeling real world objects into programming language\" can be regarded as abstraction (although I would rather use the term modeling here), but there are many more forms of abstraction.\nDefinition 2 is better.\n", "Definition 1 doesn't exactly explain abstraction. Process of modelling real world objects to programming languages is called OOP. Abstraction is a OOP concept.\nDefinition 2 does a better job at explaining abstraction.\nBut it's better to understand the concept in your own thoughts and words.\nThink about the meaning of the word abstraction in english. It's defined as \"the quality of dealing with ideas rather than events\". In OOP it's similar, abstraction is a concept where you show only relevant data and conceal unnecessary details of an object from the user.\n", "[Purifying] is not means [Abstraction], Purifying is the important rule of Inheritance Pillar\n[Hiding] is not means [Abstraction], Hiding is the important rule of Encapsulation Pillar \n--------\n—> Hiding or Purifying or Covering or Encapsulating Never = Abstraction <—\nThe First Definition is the Correct Answer:\nAbstraction is the progress of modeling real-world objects in the programming language.\n—<< Abstraction = Classes = Blueprints = Modelling some Objects>> —\nAbstraction is the first Step Rule & Pillar of OOP Pillars…\n--------\nwich means [Classes and Interfaces] …\n[Abstraction] is not more than a [Modelling] …\nin general:\nA home Construction Engineer cannot start Constructing any home without a Blueprint !!\n\nAbstraction =>> ( it’s the first Step into OOP ) and ( the Pillar of All Pillars )\nwithout Abstraction there is no Classes no Interfaces no Abstract Classes and Automatically no Objects..\n----------\n=> (no Classes[Abstraction] no Objects[REAL THINGS] !!)\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "java" ]
stackoverflow_0064520515_java.txt
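A small Java sketch of Definition 2's "handling complexity by hiding details" (all names here are illustrative, not from any of the posts): callers program against an abstraction and never see what sits behind it.

// the abstraction: what callers are allowed to know
interface PaymentProcessor {
    void charge(String account, long cents);
}

// one concrete detail hidden behind the abstraction
class CardProcessor implements PaymentProcessor {
    @Override
    public void charge(String account, long cents) {
        // network calls, retries, logging... none of it visible to callers
    }
}

// more complex logic built on top of the abstraction (Definition 2's point)
class Checkout {
    private final PaymentProcessor processor;
    Checkout(PaymentProcessor processor) { this.processor = processor; }
    void complete(String account, long cents) {
        processor.charge(account, cents);
    }
}

Definition 1's "modeling" reading fits the same code: PaymentProcessor is a model of a real-world role, and the interface/class split is just the technique used to express it.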
Q: Comparing two types in Dart I'm looking for a way to compare two types in Dart, and get True when they are the same. I have a method:
SomeItemType exampleMethod<SomeItemType>(Map<String, dynamic> value) {
  if (SomeItemType.toString() == 'BuiltMap<String, String>') {
    return BuiltMap<String, String>.from(value) as SomeItemType;
  }
  // (...)
}
in which I must check if the type-parameter SomeItemType is actually equal to the type BuiltMap<String, String> or not. The only way I found so far is the above string comparison, which is not too elegant in my opinion. I have tried the following solutions, but they did not work out for me:
SomeItemType is BuiltMap<String, String> // returns False
SomeItemType == BuiltMap<String, String> // error: The operator '<' isn't defined for the type 'Type'.
BuiltMap<String, String> is SomeItemType // error: The operator '<' isn't defined for the type 'Type'.
A: You need to detect whether a type variable holds a type which is "the same" as some known type.
I'd probably not go for equality in that check, I'd just check if the type of the variable is a subtype of the known type.
You can do that like:
class _TypeHelper<T> {}
bool isSubtypeOf<S, T>() => _TypeHelper<S>() is _TypeHelper<T>;
// or just ... => <S>[] is List<T>;
...
 if (isSubtypeOf<T, BuiltMap<String, String>>()) { ... }
If you want to check type equality, then you have to decide what type equality means.
The simplest is "mutual subtyping", which would mean:
bool isMutualSubtypes<S, T>() => isSubtypeOf<S, T>() && isSubtypeOf<T, S>();
// or => isSubtypeOf<S Function(S), T Function(T)>();
// or => isSubtypeOf<void Function<X extends T>() Function(),
//                   void Function<X extends S>() Function()>();
This will accept, e.g., isMutualSubtypes<void, dynamic>() because both types are top types.
Type identity is much harder to check because the language itself doesn't have that notion. If types are mutual subtypes, they are considered equivalent at run-time (we distinguish void and dynamic statically, but not at run-time).
You can try something like:
bool isSameType<S, T>() => S == T;
but equality of Type objects is very underspecified, and not guaranteed to work any better than mutual subtyping.
Definitely don't use string comparison:
// Don't use!
bool isSameType<S, T>() => "$S" == "$T"; // BAD!
because it's possible to have different types with the same name.
A: I needed to check if a variable pointed to a function. This worked for me: variable is Function (using the is keyword).
Comparing two types in Dart
I'm looking for a way to compare two types in Dart, and get True when they are the same. I have a method:
SomeItemType exampleMethod<SomeItemType>(Map<String, dynamic> value) {
  if (SomeItemType.toString() == 'BuiltMap<String, String>') {
    return BuiltMap<String, String>.from(value) as SomeItemType;
  }
  // (...)
}
in which I must check if the type-parameter SomeItemType is actually equal to the type BuiltMap<String, String> or not. The only way I found so far is the above string comparison, which is not too elegant in my opinion. I have tried the following solutions, but they did not work out for me:
SomeItemType is BuiltMap<String, String> // returns False
SomeItemType == BuiltMap<String, String> // error: The operator '<' isn't defined for the type 'Type'.
BuiltMap<String, String> is SomeItemType // error: The operator '<' isn't defined for the type 'Type'.
[ "You need to detect whether a type variable holds a type which is \"the same\" as some known type.\nI'd probably not go for equality in that check, I'd just check if the type of the variable is a subtype of the known type.\nYou can do that like:\nclass _TypeHelper<T> {}\nbool isSubtypeOf<S, T>() => _TypeHelper<S>() is _TypeHelper<T>;\n// or just ... => <S>[] is List<T>;\n...\n if (isSubtypeOf<T, BuiltMap<String, String>>()) { ... }\n\nIf you want to check type equality, then you have to decide what type equality means.\nThe simplest is \"mutual subtyping\", which would mean:\nbool isMutualSubtypes<S, T>() => isSubtypeOf<S, T>() && isSubtypeOf<T, S>();\n// or => isSubtypeOf<S Function(S), T Function(T)>();\n// or => isSubtypeOf<void Function<X extends T>() Function(),\n// void Function<X extends S>() Function()>();\n\nThis will accept, e.g., isMutualSubtypes<void, dynamic>() because both types are top types.\nType identity is much harder to check because the language itself doesn't have that notion. If types are mutual subtypes, they are considered equivalent at run-time (we distinguish void and dynamic statically, but not at run-time).\nYou can try something like:\nbool isSameType<S, T>() => S == T;\n\nbut equality of Type objects is very underspecified, and not guaranteed to work any better than mutual subtyping.\nDefinitely don't use string comparison:\n// Don't use!\nbool isSameType<S, T>() => \"$S\" == \"$T\"; // BAD!\n\nbecause it's possible to have different types with the same name.\n", "I needed to check if a variable pointed to a function. This worked for me variable is function - the word is\n" ]
[ 3, 0 ]
[]
[]
[ "dart" ]
stackoverflow_0063319286_dart.txt
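A quick sketch of how the accepted answer's helper slots into the asker's original method (a sketch only; it assumes BuiltMap comes from package:built_collection, as in the question):

import 'package:built_collection/built_collection.dart';

class _TypeHelper<T> {}

bool isSubtypeOf<S, T>() => _TypeHelper<S>() is _TypeHelper<T>;

SomeItemType exampleMethod<SomeItemType>(Map<String, dynamic> value) {
  if (isSubtypeOf<SomeItemType, BuiltMap<String, String>>()) {
    // matches when SomeItemType is BuiltMap<String, String> (or a subtype)
    return BuiltMap<String, String>.from(value) as SomeItemType;
  }
  throw ArgumentError('Unsupported type parameter: $SomeItemType');
}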
Q: Figure out largest value in array of numbers I want to be able to calculate the largest value in a list of numbers. I want the type of number to be any number (it should work with double, int, long, etc). The method that I tried to create for this is not working and keeps returning the first value of the array:
public static <V extends Number & Comparable<V>> V max(final V... numbers) {
    V currentLargest = numbers[0];
    for (V value : numbers) {
        int arraySize = 0;
        if (currentLargest.compareTo(numbers[arraySize]) < 0) {
            currentLargest = numbers[arraySize];
        }
        arraySize = arraySize + 1;
    }
    return currentLargest;
}
I don't know what I am doing wrong.
A: Okay, there are a few issues with the code you have written. First off, I would recommend putting in a check to make sure the numbers array is not null. While this is not required, I would still recommend it. However, the most significant issue you are having is how you are attempting to compare your currentLargest value to a value in the array. You are always comparing against the first value of the array every single time, because your arraySize variable is reset to zero with every iteration of your loop.
I created a method that does exactly what you are asking, with the bugs from your method fixed. I hope this helps.
public static <V extends Comparable<? super V>> V max(final V... numbers) {
    if (numbers == null || numbers.length == 0) {
        return null;
    }
    V currentLargest = numbers[0];
    for (int i = 1; i < numbers.length; i++) {
        if (currentLargest.compareTo(numbers[i]) < 0) {
            currentLargest = numbers[i];
        }
    }
    return currentLargest;
}
Figure out largest value in array of numbers
I want to be able to calculate the largest value in a list of numbers. I want the type of number to be any number (it should work with double, int, long, etc). The method that I tried to create for this is not working and keeps returning the first value of the array:
public static <V extends Number & Comparable<V>> V max(final V... numbers) {
    V currentLargest = numbers[0];
    for (V value : numbers) {
        int arraySize = 0;
        if (currentLargest.compareTo(numbers[arraySize]) < 0) {
            currentLargest = numbers[arraySize];
        }
        arraySize = arraySize + 1;
    }
    return currentLargest;
}
I don't know what I am doing wrong.
[ "Okay, there are a few issues with the code you have written. First off, I would recommend putting in a check to make sure the numbers array is not null. While this is not required, I would still recommend it. However, the most significant issue you are having is how you are attempting to compare your currentLargest value to a value on the array. You are always comparing against the first value of the array every single time as you have your arraySize variable updated to zero with every iteration of your loop.\nI created a method that does exactly what you are asking with the bugs from your method fixed. I hope this helps.\n public static <V extends Comparable<? super V>> V max(final V... numbers) {\n if (numbers == null || numbers.length == 0) {\n return null;\n }\n V currentLargest = numbers[0];\n for (int i = 1; i < numbers.length; i++) {\n if (currentLargest.compareTo(numbers[i]) < 0) {\n currentLargest = numbers[i];\n }\n }\n return currentLargest;\n }\n\n" ]
[ 2 ]
[]
[]
[ "java", "math" ]
stackoverflow_0074667294_java_math.txt
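A short usage sketch for the corrected max method from the answer above (assuming that method is in scope; the sample values are illustrative):

public static void main(String[] args) {
    System.out.println(max(3, 9, 4));            // 9    (Integer)
    System.out.println(max(2.5, 7.25, 1.0));     // 7.25 (Double)
    System.out.println(max(10L, 42L, 7L));       // 42   (Long)
    System.out.println(max((Integer[]) null));   // null, thanks to the guard
}

Autoboxing gives each call a single concrete type argument, which is why the one generic method covers int, double and long values alike.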
Q: MultipleOutputFormat in hadoop I'm a newbie in Hadoop. I'm trying out the Wordcount program.
Now, to try out multiple output files, I use MultipleOutputFormat. This link helped me in doing it:
http://hadoop.apache.org/common/docs/r0.19.0/api/org/apache/hadoop/mapred/lib/MultipleOutputs.html
In my driver class I had
MultipleOutputs.addNamedOutput(conf, "even", org.apache.hadoop.mapred.TextOutputFormat.class, Text.class, IntWritable.class);
MultipleOutputs.addNamedOutput(conf, "odd", org.apache.hadoop.mapred.TextOutputFormat.class, Text.class, IntWritable.class);
and my reduce class became this:
public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
    MultipleOutputs mos = null;

    public void configure(JobConf job) {
        mos = new MultipleOutputs(job);
    }

    public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
        int sum = 0;
        while (values.hasNext()) {
            sum += values.next().get();
        }
        if (sum % 2 == 0) {
            mos.getCollector("even", reporter).collect(key, new IntWritable(sum));
        } else {
            mos.getCollector("odd", reporter).collect(key, new IntWritable(sum));
        }
        //output.collect(key, new IntWritable(sum));
    }

    @Override
    public void close() throws IOException {
        // TODO Auto-generated method stub
        mos.close();
    }
}
Things worked, but I get a LOT of files (one odd and one even for every reducer).
The question is: how can I have just 2 output files (odd & even) so that every odd output from every reducer gets written into that odd file, and the same for even?
A: Each reducer uses an OutputFormat to write records to. That's why you are getting a set of odd and even files per reducer. This is by design, so that each reducer can perform writes in parallel.
If you want just a single odd and single even file, you'll need to set mapred.reduce.tasks to 1. But performance will suffer, because all the mappers will be feeding into a single reducer.
Another option is to change the process that reads these files to accept multiple input files, or write a separate process that merges these files together.
A: I wrote a class for doing this.
Just use it in your job:
job.setOutputFormatClass(m_customOutputFormatClass);
This is my class:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

/**
 * TextOutputFormat extension which enables writing the mapper/reducer's output in multiple files.<br>
 * <p>
 * <b>WARNING</b>: The number of different folders shouldn't be large for one mapper since we keep a
 * {@link RecordWriter} instance per folder name.
 * </p>
 * <p>
 * In this class the folder name is defined by the written entry's key.<br>
 * To change this behavior simply extend this class and override the
 * {@link HdMultipleFileOutputFormat#getFolderNameExtractor()} method and create your own
 * {@link FolderNameExtractor} implementation.
 * </p>
 *
 *
 * @author ykesten
 *
 * @param <K> - Keys type
 * @param <V> - Values type
 */
public class HdMultipleFileOutputFormat<K, V> extends TextOutputFormat<K, V> {

    private String folderName;

    private class MultipleFilesRecordWriter extends RecordWriter<K, V> {

        private Map<String, RecordWriter<K, V>> fileNameToWriter;
        private FolderNameExtractor<K, V> fileNameExtractor;
        private TaskAttemptContext job;

        public MultipleFilesRecordWriter(FolderNameExtractor<K, V> fileNameExtractor, TaskAttemptContext job) {
            fileNameToWriter = new HashMap<String, RecordWriter<K, V>>();
            this.fileNameExtractor = fileNameExtractor;
            this.job = job;
        }

        @Override
        public void write(K key, V value) throws IOException, InterruptedException {
            String fileName = fileNameExtractor.extractFolderName(key, value);
            RecordWriter<K, V> writer = fileNameToWriter.get(fileName);
            if (writer == null) {
                writer = createNewWriter(fileName, fileNameToWriter, job);
                if (writer == null) {
                    throw new IOException("Unable to create writer for path: " + fileName);
                }
            }
            writer.write(key, value);
        }

        @Override
        public void close(TaskAttemptContext context) throws IOException, InterruptedException {
            for (Entry<String, RecordWriter<K, V>> entry : fileNameToWriter.entrySet()) {
                entry.getValue().close(context);
            }
        }

    }

    private synchronized RecordWriter<K, V> createNewWriter(String folderName,
            Map<String, RecordWriter<K, V>> fileNameToWriter, TaskAttemptContext job) {
        try {
            this.folderName = folderName;
            RecordWriter<K, V> writer = super.getRecordWriter(job);
            this.folderName = null;
            fileNameToWriter.put(folderName, writer);
            return writer;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    @Override
    public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
        Path path = super.getDefaultWorkFile(context, extension);
        if (folderName != null) {
            String newPath = path.getParent().toString() + "/" + folderName + "/" + path.getName();
            path = new Path(newPath);
        }
        return path;
    }

    @Override
    public RecordWriter<K, V> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
        return new MultipleFilesRecordWriter(getFolderNameExtractor(), job);
    }

    public FolderNameExtractor<K, V> getFolderNameExtractor() {
        return new KeyFolderNameExtractor<K, V>();
    }

    public interface FolderNameExtractor<K, V> {
        public String extractFolderName(K key, V value);
    }

    private static class KeyFolderNameExtractor<K, V> implements FolderNameExtractor<K, V> {
        public String extractFolderName(K key, V value) {
            return key.toString();
        }
    }

}
A: Multiple output files will be generated based on the number of reducers.
You can use hadoop dfs -getmerge to merge the outputs.
A: You may try to change the output file name (the reducer output). Since HDFS supports append operations only, it will collect all Temp-r-0000x files (partitions) from all reducers and put them together in one file.
Here is the class you need to create, which overrides methods in TextOutputFormat:
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import java.util.Map.Entry;

import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.RecordWriter;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;

public class CustomNameMultipleFileOutputFormat<K, V> extends TextOutputFormat<K, V> {

    private String folderName;

    private class MultipleFilesRecordWriter extends RecordWriter<K, V> {

        private Map<String, RecordWriter<K, V>> fileNameToWriter;
        private FolderNameExtractor<K, V> fileNameExtractor;
        private TaskAttemptContext job;

        public MultipleFilesRecordWriter(FolderNameExtractor<K, V> fileNameExtractor, TaskAttemptContext job) {
            fileNameToWriter = new HashMap<String, RecordWriter<K, V>>();
            this.fileNameExtractor = fileNameExtractor;
            this.job = job;
        }

        @Override
        public void write(K key, V value) throws IOException, InterruptedException {
            String fileName = "[FOLDER_NAME_INCLUDING_SUB_DIRS]"; //fileNameExtractor.extractFolderName(key, value);
            RecordWriter<K, V> writer = fileNameToWriter.get(fileName);
            if (writer == null) {
                writer = createNewWriter(fileName, fileNameToWriter, job);
                if (writer == null) {
                    throw new IOException("Unable to create writer for path: " + fileName);
                }
            }
            writer.write(key, value);
        }

        @Override
        public void close(TaskAttemptContext context) throws IOException, InterruptedException {
            for (Entry<String, RecordWriter<K, V>> entry : fileNameToWriter.entrySet()) {
                entry.getValue().close(context);
            }
        }

    }

    private synchronized RecordWriter<K, V> createNewWriter(String folderName,
            Map<String, RecordWriter<K, V>> fileNameToWriter, TaskAttemptContext job) {
        try {
            this.folderName = folderName;
            RecordWriter<K, V> writer = super.getRecordWriter(job);
            this.folderName = null;
            fileNameToWriter.put(folderName, writer);
            return writer;
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    @Override
    public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {
        Path path = super.getDefaultWorkFile(context, extension);
        if (folderName != null) {
            String newPath = path.getParent().toString() + "/" + folderName + "/[ONE_FILE_NAME]";
            path = new Path(newPath);
        }
        return path;
    }

    @Override
    public RecordWriter<K, V> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {
        return new MultipleFilesRecordWriter(getFolderNameExtractor(), job);
    }

    public FolderNameExtractor<K, V> getFolderNameExtractor() {
        return new KeyFolderNameExtractor<K, V>();
    }

    public interface FolderNameExtractor<K, V> {
        public String extractFolderName(K key, V value);
    }

    private static class KeyFolderNameExtractor<K, V> implements FolderNameExtractor<K, V> {
        public String extractFolderName(K key, V value) {
            return key.toString();
        }
    }

}
Then the Reducer/Mapper:
public static class ExtraLabReducer extends Reducer<CustomKeyComparable, Text, CustomKeyComparable, Text>
{
    MultipleOutputs multipleOutputs;

    @Override
    protected void setup(Context context) throws IOException, InterruptedException {
        multipleOutputs = new MultipleOutputs(context);
    }

    @Override
    public void reduce(CustomKeyComparable key, Iterable<Text> values, Context context) throws IOException, InterruptedException
    {
        for(Text d : values)
        {
            multipleOutputs.write("batta", key, d, "[EXAMPLE_FILE_NAME]");
        }
    }

    @Override
    protected void cleanup(Context context) throws IOException, InterruptedException {
        multipleOutputs.close();
    }
}
Then in the
job config: Job job = new Job(getConf(), "ExtraLab"); job.setJarByClass(ExtraLab.class); job.setMapperClass(ExtraLabMapper.class); job.setReducerClass(ExtraLabReducer.class); job.setOutputKeyClass(Text.class); job.setOutputValueClass(DoubleWritable.class); job.setMapOutputKeyClass(CustomKeyComparable.class); job.setMapOutputValueClass(Text.class); job.setInputFormatClass(TextInputFormat.class); //job.setOutputFormatClass(TextOutputFormat.class); FileInputFormat.addInputPath(job, new Path(args[0])); //adding one more reducer job.setNumReduceTasks(2); LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class); MultipleOutputs.addNamedOutput(job,"batta", CustomNameMultipleFileOutputFormat.class,CustomKeyComparable.class,Text.class);
MultipleOutputFormat in hadoop
I'm a newbie in Hadoop. I'm trying out the Wordcount program. Now to try out multiple output files, i use MultipleOutputFormat. this link helped me in doing it. http://hadoop.apache.org/common/docs/r0.19.0/api/org/apache/hadoop/mapred/lib/MultipleOutputs.html in my driver class i had MultipleOutputs.addNamedOutput(conf, "even", org.apache.hadoop.mapred.TextOutputFormat.class, Text.class, IntWritable.class); MultipleOutputs.addNamedOutput(conf, "odd", org.apache.hadoop.mapred.TextOutputFormat.class, Text.class, IntWritable.class);` and my reduce class became this public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> { MultipleOutputs mos = null; public void configure(JobConf job) { mos = new MultipleOutputs(job); } public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException { int sum = 0; while (values.hasNext()) { sum += values.next().get(); } if (sum % 2 == 0) { mos.getCollector("even", reporter).collect(key, new IntWritable(sum)); }else { mos.getCollector("odd", reporter).collect(key, new IntWritable(sum)); } //output.collect(key, new IntWritable(sum)); } @Override public void close() throws IOException { // TODO Auto-generated method stub mos.close(); } } Things worked , but i get LOT of files, (one odd and one even for every map-reduce) Question is : How can i have just 2 output files (odd & even) so that every odd output of every map-reduce gets written into that odd file, and same for even.
[ "Each reducer uses an OutputFormat to write records to. So that's why you are getting a set of odd and even files per reducer. This is by design so that each reducer can perform writes in parallel.\nIf you want just a single odd and single even file, you'll need to set mapred.reduce.tasks to 1. But performance will suffer, because all the mappers will be feeding into a single reducer.\nAnother option is to change the process the reads these files to accept multiple input files, or write a separate process that merges these files together.\n", "I wrote a class for doing this.\nJust use it your job:\njob.setOutputFormatClass(m_customOutputFormatClass);\n\nThis is the my class:\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\n\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.mapreduce.RecordWriter;\nimport org.apache.hadoop.mapreduce.TaskAttemptContext;\nimport org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;\n\n/**\n * TextOutputFormat extension which enables writing the mapper/reducer's output in multiple files.<br>\n * <p>\n * <b>WARNING</b>: The number of different folder shuoldn't be large for one mapper since we keep an\n * {@link RecordWriter} instance per folder name.\n * </p>\n * <p>\n * In this class the folder name is defined by the written entry's key.<br>\n * To change this behavior simply extend this class and override the\n * {@link HdMultipleFileOutputFormat#getFolderNameExtractor()} method and create your own\n * {@link FolderNameExtractor} implementation.\n * </p>\n * \n * \n * @author ykesten\n * \n * @param <K> - Keys type\n * @param <V> - Values type\n */\npublic class HdMultipleFileOutputFormat<K, V> extends TextOutputFormat<K, V> {\n\n private String folderName;\n\n private class MultipleFilesRecordWriter extends RecordWriter<K, V> {\n\n private Map<String, RecordWriter<K, V>> fileNameToWriter;\n private FolderNameExtractor<K, V> fileNameExtractor;\n private TaskAttemptContext job;\n\n public MultipleFilesRecordWriter(FolderNameExtractor<K, V> fileNameExtractor, TaskAttemptContext job) {\n fileNameToWriter = new HashMap<String, RecordWriter<K, V>>();\n this.fileNameExtractor = fileNameExtractor;\n this.job = job;\n }\n\n @Override\n public void write(K key, V value) throws IOException, InterruptedException {\n String fileName = fileNameExtractor.extractFolderName(key, value);\n RecordWriter<K, V> writer = fileNameToWriter.get(fileName);\n if (writer == null) {\n writer = createNewWriter(fileName, fileNameToWriter, job);\n if (writer == null) {\n throw new IOException(\"Unable to create writer for path: \" + fileName);\n }\n }\n writer.write(key, value);\n }\n\n @Override\n public void close(TaskAttemptContext context) throws IOException, InterruptedException {\n for (Entry<String, RecordWriter<K, V>> entry : fileNameToWriter.entrySet()) {\n entry.getValue().close(context);\n }\n }\n\n }\n\n private synchronized RecordWriter<K, V> createNewWriter(String folderName,\n Map<String, RecordWriter<K, V>> fileNameToWriter, TaskAttemptContext job) {\n try {\n this.folderName = folderName;\n RecordWriter<K, V> writer = super.getRecordWriter(job);\n this.folderName = null;\n fileNameToWriter.put(folderName, writer);\n return writer;\n } catch (Exception e) {\n e.printStackTrace();\n return null;\n }\n }\n\n @Override\n public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {\n Path path = super.getDefaultWorkFile(context, extension);\n if (folderName != null) 
{\n String newPath = path.getParent().toString() + \"/\" + folderName + \"/\" + path.getName();\n path = new Path(newPath);\n }\n return path;\n }\n\n @Override\n public RecordWriter<K, V> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {\n return new MultipleFilesRecordWriter(getFolderNameExtractor(), job);\n }\n\n public FolderNameExtractor<K, V> getFolderNameExtractor() {\n return new KeyFolderNameExtractor<K, V>();\n }\n\n public interface FolderNameExtractor<K, V> {\n public String extractFolderName(K key, V value);\n }\n\n private static class KeyFolderNameExtractor<K, V> implements FolderNameExtractor<K, V> {\n public String extractFolderName(K key, V value) {\n return key.toString();\n }\n }\n\n}\n\n", "Multiple Output files will be generated based on number of reducers.\nYou can use hadoop dfs -getmerge to merged outputs\n", "you may try to change the output file name (Reducer output), since HDFS supports append operations only, then it will collect all Temp-r-0000x files (partitions) from all reducers and put them together in one file.\nhere the class you need to create which overrides methods in TextOutputFormat:\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Map.Entry;\n\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.mapreduce.RecordWriter;\nimport org.apache.hadoop.mapreduce.TaskAttemptContext;\nimport org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;\n\n\n\npublic class CustomNameMultipleFileOutputFormat<K, V> extends TextOutputFormat<K, V> {\n\n private String folderName;\n\n private class MultipleFilesRecordWriter extends RecordWriter<K, V> {\n\n private Map<String, RecordWriter<K, V>> fileNameToWriter;\n private FolderNameExtractor<K, V> fileNameExtractor;\n private TaskAttemptContext job;\n \n \n\n public MultipleFilesRecordWriter(FolderNameExtractor<K, V> fileNameExtractor, TaskAttemptContext job) {\n fileNameToWriter = new HashMap<String, RecordWriter<K, V>>();\n this.fileNameExtractor = fileNameExtractor;\n this.job = job;\n }\n\n @Override\n public void write(K key, V value) throws IOException, InterruptedException {\n String fileName = \"**[FOLDER_NAME_INCLUDING_SUB_DIRS]**\";//fileNameExtractor.extractFolderName(key, value);\n \n RecordWriter<K, V> writer = fileNameToWriter.get(fileName);\n if (writer == null) {\n writer = createNewWriter(fileName, fileNameToWriter, job);\n if (writer == null) {\n throw new IOException(\"Unable to create writer for path: \" + fileName);\n }\n }\n writer.write(key, value);\n }\n\n @Override\n public void close(TaskAttemptContext context) throws IOException, InterruptedException {\n for (Entry<String, RecordWriter<K, V>> entry : fileNameToWriter.entrySet()) {\n entry.getValue().close(context);\n }\n }\n\n \n\n }\n\n private synchronized RecordWriter<K, V> createNewWriter(String folderName,\n Map<String, RecordWriter<K, V>> fileNameToWriter, TaskAttemptContext job) {\n try {\n this.folderName = folderName;\n RecordWriter<K, V> writer = super.getRecordWriter(job);\n this.folderName = null;\n fileNameToWriter.put(folderName, writer);\n \n return writer;\n } catch (Exception e) {\n e.printStackTrace();\n return null;\n }\n }\n\n @Override\n public Path getDefaultWorkFile(TaskAttemptContext context, String extension) throws IOException {\n Path path = super.getDefaultWorkFile(context, extension);\n \n if (folderName != null) {\n String newPath = path.getParent().toString() + \"/\" + folderName + \"/**[ONE_FILE_NAME]**\";\n \n path = new 
Path(newPath);\n \n }\n return path;\n }\n\n @Override\n public RecordWriter<K, V> getRecordWriter(TaskAttemptContext job) throws IOException, InterruptedException {\n return new MultipleFilesRecordWriter(getFolderNameExtractor(), job);\n }\n\n public FolderNameExtractor<K, V> getFolderNameExtractor() {\n return new KeyFolderNameExtractor<K, V>();\n }\n\n public interface FolderNameExtractor<K, V> {\n public String extractFolderName(K key, V value);\n }\n\n private static class KeyFolderNameExtractor<K, V> implements FolderNameExtractor<K, V> {\n public String extractFolderName(K key, V value) {\n return key.toString();\n }\n }\n\n}\n\nthen Reducer/Mapper:\npublic static class ExtraLabReducer extends Reducer<CustomKeyComparable, Text, CustomKeyComparable, Text>\n{\n MultipleOutputs multipleOutputs;\n\n @Override\n protected void setup(Context context) throws IOException, InterruptedException {\n multipleOutputs = new MultipleOutputs(context);\n }\n\n @Override\n public void reduce(CustomKeyComparable key, Iterable<Text> values, Context context) throws IOException, InterruptedException\n {\n for(Text d : values)\n {\n **multipleOutputs.write**(\"batta\",key, d,**\"[EXAMPLE_FILE_NAME]\"**);\n }\n \n }\n \n @Override\n protected void cleanup(Context context) throws IOException, InterruptedException {\n multipleOutputs.close();\n }\n \n}\n\nthen in job config:\n Job job = new Job(getConf(), \"ExtraLab\");\n job.setJarByClass(ExtraLab.class);\n\n job.setMapperClass(ExtraLabMapper.class);\n job.setReducerClass(ExtraLabReducer.class);\n\n job.setOutputKeyClass(Text.class);\n job.setOutputValueClass(DoubleWritable.class);\n \n job.setMapOutputKeyClass(CustomKeyComparable.class);\n job.setMapOutputValueClass(Text.class);\n\n job.setInputFormatClass(TextInputFormat.class);\n //job.setOutputFormatClass(TextOutputFormat.class);\n\n \n FileInputFormat.addInputPath(job, new Path(args[0]));\n //adding one more reducer\n job.setNumReduceTasks(2);\n \n LazyOutputFormat.setOutputFormatClass(job, TextOutputFormat.class);\n\n MultipleOutputs.addNamedOutput(job,\"batta\", CustomNameMultipleFileOutputFormat.class,CustomKeyComparable.class,Text.class);\n\n" ]
[ 3, 3, 1, 0 ]
[]
[]
[ "hadoop", "java", "mapreduce" ]
stackoverflow_0003491105_hadoop_java_mapreduce.txt
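The -getmerge approach mentioned in the third answer looks roughly like this (a sketch; the HDFS paths are assumptions, and the partition file names follow the usual MultipleOutputs pattern of odd-r-00000, even-r-00000, and so on):

# concatenate all "odd" partitions into one local file, likewise for "even"
hadoop fs -getmerge /user/hadoop/wordcount/output/odd-*  ./odd.txt
hadoop fs -getmerge /user/hadoop/wordcount/output/even-* ./even.txt

This keeps the reducers writing in parallel and defers the single-file requirement to a cheap post-processing step.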
Q: python convert integer to bytes with bitwise operations I have 2 inputs: i (the integer) and length (the number of bytes the integer should be encoded in). How can I convert an integer to bytes using only bitwise operations?
def int_to_bytes(i, length):
    for _ in range(length):
        pass
A: Without libraries (as specified in the original post), use int.to_bytes.
>>> (1234).to_bytes(16, "little")
b'\xd2\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00'
IOW, your function would be
def int_to_bytes(i, length):
    return i.to_bytes(length, "little")
(or big, if you want big-endian order).
With just bitwise operations,
def int_to_bytes(i, length):
    buf = bytearray(length)
    for j in range(length):
        buf[j] = i & 0xFF
        i >>= 8
    return bytes(buf)

print(int_to_bytes(1234, 4))
A: You can do something like this:
def int_to_bytes(i, length):
    result = bytearray(length)
    for index in range(length):
        result[index] = i & 0xff
        i >>= 8
    return result
This code uses a for loop to iterate over the specified length. In each iteration, it uses the bitwise AND operator (&) to extract the least significant byte of the integer and store it in the result bytearray. It then uses the bitwise right shift operator (>>) to shift the integer to the right by 8 bits, discarding the least significant byte. This process is repeated until all bytes have been extracted from the integer and stored in the result bytearray. Finally, the result bytearray is returned.
Here is an example of how this code might work:
int_to_bytes(0x12345678, 4)
# returns bytearray(b'\x78\x56\x34\x12')
In this example, the int_to_bytes function is called with the integer 0x12345678 and a length of 4. This means that the integer will be converted to a 4-byte sequence, with the least significant byte first. The for loop iterates 4 times, and in each iteration it uses the bitwise AND and right shift operators to extract and discard each byte of the integer. At the end of the loop, the result bytearray contains the bytes [0x78, 0x56, 0x34, 0x12], which are the bytes of the original integer in little-endian order. The result bytearray is then returned.
python convert integer to bytes with bitwise operations
I have 2 inputs: i (the integer) and length (the number of bytes the integer should be encoded in). How can I convert an integer to bytes using only bitwise operations?
def int_to_bytes(i, length):
    for _ in range(length):
        pass
[ "Without libraries (as specified in the original post), use int.to_bytes.\n>>> (1234).to_bytes(16, \"little\")\nb'\\xd2\\x04\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00'\n\nIOW, your function would be\ndef int_to_bytes(i, length):\n return i.to_bytes(length, \"little\")\n\n(or big, if you want big-endian order).\nWith just bitwise operations,\ndef int_to_bytes(i, length):\n buf = bytearray(length)\n for j in range(length):\n buf[j] = i & 0xFF\n i >>= 8\n return bytes(buf)\n\nprint(int_to_bytes(1234, 4))\n\n", "You can do something like this:\ndef int_to_bytes(i, length):\n result = bytearray(length)\n for index in range(length):\n result[index] = i & 0xff\n i >>= 8\n return result\n\nThis code uses a for loop to iterate over the specified length. In each iteration, it uses the bitwise AND operator (&) to extract the least significant byte of the integer and store it in the result bytearray. It then uses the bitwise right shift operator (>>) to shift the integer to the right by 8 bits, discarding the least significant byte. This process is repeated until all bytes have been extracted from the integer and stored in the result bytearray. Finally, the result bytearray is returned.\nHere is an example of how this code might work:\nint_to_bytes(0x12345678, 4)\n# returns bytearray(b'\\x78\\x56\\x34\\x12')\n\nIn this example, the int_to_bytes function is called with the integer 0x12345678 and a length of 4. This means that the integer will be converted to a 4-byte sequence, with the least significant byte first. The for loop iterates 4 times, and in each iteration it uses the bitwise AND and right shift operators to extract and discard each byte of the integer. At the end of the loop, the result bytearray contains the bytes [0x78, 0x56, 0x34, 0x12], which are the bytes of the original integer in little-endian order. The result bytearray is then returned.\n" ]
[ 3, 1 ]
[]
[]
[ "python" ]
stackoverflow_0074667168_python.txt
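A quick round-trip check for the bitwise version (a sketch; it re-declares the little-endian int_to_bytes from the answers so it runs standalone):

def int_to_bytes(i, length):
    buf = bytearray(length)
    for j in range(length):
        buf[j] = i & 0xFF  # take the lowest byte
        i >>= 8            # drop it
    return bytes(buf)

for value in (0, 1, 255, 1234, 0x12345678):
    assert int.from_bytes(int_to_bytes(value, 4), "little") == value
print("round-trip ok")

Note that the bitwise loop silently truncates values that do not fit in length bytes, while int.to_bytes raises OverflowError for them.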
Q: How do i show the difference between dates where one is getting the current date in laravel? So for my to-do list, I'm trying to show in my view the difference in number of days between the current date and the date that the task is supposed to be completed by. But I can't seem to do that because I'm using the date_diff() function and it only accepts objects, not strings, for the data types.
This is the error message:
date_diff(): Argument #1 ($baseObject) must be of type DateTimeInterface, string given
This is my controller:
public function saveItem(Request $request) {
    $newTask = new task;
    if ($request->task == null) {
        abort(404);
    }
    $newTask->tasks = $request->task;
    $newTask->is_complete = 0;
    $newTask->task_date = date("Y-m-d");
    if($request->day === "tomorrow") {
        $date = date_create(date("Y-m-d"));
        date_add($date, date_interval_create_from_date_string("1 day"));
        $newTask->date_of_completion = date_format($date, "Y-m-d");
    }
    elseif($request->day === "today") {
        $newTask->date_of_completion = date("Y-m-d");
    }
    $newTask->save();
    return redirect('/');
}
This is my view:
<p class="flex items-center px-4 py-1 rounded-lg text-[#555]">{{ date_diff(date("Y-m-d"), date_create($task->date_of_completion)) }}</p>
If I can find out how to get the current date as an object so that it can be used in my date_diff(), it will really help, but if you have a better solution that is much easier, I'm open to it as well.
A: date_diff() needs a DateTimeInterface for its first parameter; in your code, you send it a string. date("Y-m-d") returns the current date as a string in Y-m-d format.
To solve it, you just need to change your code into:
date_diff(new \DateTime, date_create($task->date_of_completion))
For something easier to use, you can take a look at Carbon.
A: Add the date_of_completion field to the $casts property, so it can be cast to a Carbon instance (https://laravel.com/docs/9.x/eloquent-mutators#date-casting).
Then, simply call $task->date_of_completion->diffForHumans() and it calculates that for you (https://carbon.nesbot.com/docs/#api-humandiff).
How do i show the difference between dates where one is getting the current date in laravel?
So for my to-do list, I'm trying to show in my view the difference in number of days between the current date and the date that the task is supposed to be completed by. But I can't seem to do that because I'm using the date_diff() function and it only accepts objects, not strings, for the data types.
This is the error message:
date_diff(): Argument #1 ($baseObject) must be of type DateTimeInterface, string given
This is my controller:
public function saveItem(Request $request) {
    $newTask = new task;
    if ($request->task == null) {
        abort(404);
    }
    $newTask->tasks = $request->task;
    $newTask->is_complete = 0;
    $newTask->task_date = date("Y-m-d");
    if($request->day === "tomorrow") {
        $date = date_create(date("Y-m-d"));
        date_add($date, date_interval_create_from_date_string("1 day"));
        $newTask->date_of_completion = date_format($date, "Y-m-d");
    }
    elseif($request->day === "today") {
        $newTask->date_of_completion = date("Y-m-d");
    }
    $newTask->save();
    return redirect('/');
}
This is my view:
<p class="flex items-center px-4 py-1 rounded-lg text-[#555]">{{ date_diff(date("Y-m-d"), date_create($task->date_of_completion)) }}</p>
If I can find out how to get the current date as an object so that it can be used in my date_diff(), it will really help, but if you have a better solution that is much easier, I'm open to it as well.
[ "date_diff need DateTimeInterface type for the first parameters, in your code, you send it a string. date(\"Y-m-d\") will return current date with format Y-m-d.\nTo solve it, you just need to change your code into:\ndate_diff(new \\DateTime, date_create($task->date_of_completion))\n\nMaybe for easier to use, you can take a look for Carbon\n", "Add date_of_completion field to $casts property, so it can be cast to a Carbon instance (https://laravel.com/docs/9.x/eloquent-mutators#date-casting).\nThen, simply call $task->date_of_completion->diffForHumans() and it calculates that for you (https://carbon.nesbot.com/docs/#api-humandiff).\n" ]
[ 0, 0 ]
[]
[]
[ "laravel_9", "php" ]
stackoverflow_0074667121_laravel_9_php.txt
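As a sketch of the Carbon route both answers point at (the attribute names mirror the question; everything else is illustrative):

use Carbon\Carbon;

// controller or view-model side
$daysLeft = Carbon::now()->diffInDays(Carbon::parse($task->date_of_completion));

// or, once 'date_of_completion' => 'date' is in the model's $casts:
// $daysLeft = now()->diffInDays($task->date_of_completion);

In the Blade view that becomes {{ now()->diffInDays($task->date_of_completion) }} once the cast is in place.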
Q: Prevent button on expansion header to trigger the panel I have a delete button in the header section of my expansion panel. Clicking the delete button should not expand/collapse the panel; it is meant to open a dialog. Instead, it also expands the panel. How do I prevent it from expanding the panel?
<v-expansion-panel-header>
    {{ vehicle.VIN }}
    <v-icon v-if="type == 'saved'" color="teal"> mdi-check </v-icon>
    <v-btn
        text
        class="flex-grow-0"
        v-if="type == 'saved'"
        color="red"
        @click="remove(index, type)"
    >
        DELETE
    </v-btn>
</v-expansion-panel-header>
Live issue: https://jsfiddle.net/bheng/gv1zech7/
A: You should use the .stop event modifier:
<v-btn
    text
    class="flex-grow-0"
    v-if="type == 'saved'"
    color="red"
    @click.stop="remove(index, type)"
>
    DELETE
</v-btn>
Prevent button on expansion header to trigger the panel
I have a delete button in the header section of my expansion panel. Clicking the delete button should not expand/collapse the panel; it is meant to open a dialog. Instead, it also expands the panel. How do I prevent it from expanding the panel?
<v-expansion-panel-header>
    {{ vehicle.VIN }}
    <v-icon v-if="type == 'saved'" color="teal"> mdi-check </v-icon>
    <v-btn
        text
        class="flex-grow-0"
        v-if="type == 'saved'"
        color="red"
        @click="remove(index, type)"
    >
        DELETE
    </v-btn>
</v-expansion-panel-header>
Live issue: https://jsfiddle.net/bheng/gv1zech7/
[ "You should use the .stop event modifier:\n <v-btn\n text\n class=\"flex-grow-0\"\n v-if=\"type == 'saved'\"\n color=\"red\"\n @click.stop=\"remove(index, type)\"\n >\n DELETE\n </v-btn>\n\n" ]
[ 2 ]
[]
[]
[ "javascript", "vue.js", "vuejs2", "vuetify.js" ]
stackoverflow_0074667074_javascript_vue.js_vuejs2_vuetify.js.txt
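For context, .stop is shorthand for calling stopPropagation() on the native event before your handler logic runs; in plain JavaScript the same guard looks roughly like this (the handler names are illustrative):

deleteButton.addEventListener('click', function (event) {
  event.stopPropagation(); // the click never bubbles up to the expansion-panel header
  remove(index, type);     // placeholder for the component's remove logic
});

Vue applies the modifier per listener, so other clicks on the header still toggle the panel as usual.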
Q: Can I set the focus for the first element in a Blazor modal? I open a modal in Blazor (Server App) that contains an array of strings. Everything is working code-wise, but I have to click in the first element to set focus (these are serial numbers and are read with a scanner). After that, as scanning continues, the focus moves after each scan. I would like the first element to be focused when the modal opens so scanning can start without having to click in the first element.
Here is the modal setup:
<Modal @ref="modalMultipleSerialNumbers" Title="Add/Change Multiple Serial Numbers" UseStaticBackdrop="true" Size="ModalSize.ExtraLarge">
    <BodyTemplate>
        @for (var i = 0; i < SD.MaxNumberOfMultiples; i++)
        {
            var count = i; // using i doesn't work. Has to be stored in a local variable to use bind.
            <input @bind="@MulipleSerialNumbers[count]" class="col-4 m-1" />
        }
    </BodyTemplate>
    <FooterTemplate>
        <Button Color="ButtonColor.Secondary" @onclick="OnClearModalClick">Clear list of Serial Numbers</Button>
        <Button Color="ButtonColor.Primary" @onclick="OnSaveModalClick">Save list of Serial Numbers</Button>
    </FooterTemplate>
I did try:
<input @bind="@MulipleSerialNumbers[count]" autofocus="true" class="col-4 m-1" />
but it didn't change anything. Thanks for looking!
A: To set focus on an element in Blazor, you can use the @ref attribute to reference the element in your code, and then call the FocusAsync() method on the element to set focus to it.
Here is an example of how you can set focus on an input element in a modal in Blazor:
@page "/modal-example"

<Modal @ref="modalMultipleSerialNumbers" Title="Add/Change Multiple Serial Numbers" UseStaticBackdrop="true" Size="ModalSize.ExtraLarge">
    <BodyTemplate>
        @for (var i = 0; i < SD.MaxNumberOfMultiples; i++)
        {
            var count = i; // using i doesn't work. Has to be stored in a local variable to use bind.
            <input @ref="@inputElements[count]" @bind="@MulipleSerialNumbers[count]" class="col-4 m-1" />
        }
    </BodyTemplate>
    <FooterTemplate>
        <Button Color="ButtonColor.Secondary" @onclick="OnClearModalClick">Clear list of Serial Numbers</Button>
        <Button Color="ButtonColor.Primary" @onclick="OnSaveModalClick">Save list of Serial Numbers</Button>
    </FooterTemplate>
</Modal>

@code {
    private ElementReference[] inputElements = new ElementReference[SD.MaxNumberOfMultiples];

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            // Set focus to the first input element in the modal
            await inputElements[0].FocusAsync();
        }
    }
}
In this example, we use the @ref attribute to capture each input element into an ElementReference array called inputElements. We then use the OnAfterRenderAsync() method to set focus to the first input element when the component is rendered for the first time.
FocusAsync() is available on any ElementReference (since .NET 5), so this works for elements such as <input>, <select>, and <textarea>. Note that it can only succeed once the modal's body has actually been rendered.
A: So, I had to use JavaScript to fix the issue.
The modal was changed as follows:
<Modal @ref="modalMultipleSerialNumbers" Title="Add/Change Multiple Serial Numbers" UseStaticBackdrop="true" Size="ModalSize.ExtraLarge">
<BodyTemplate>
    <input id="focusElement" @bind="@MulipleSerialNumbers[0]" class="col-4 m-1" />
    @for (var i = 1; i < SD.MaxNumberOfMultiples; i++)
    {
        var count = i; // using i in the next statement doesn't work. Has to be stored in a local variable to use bind.
        <input @bind="@MulipleSerialNumbers[count]" class="col-4 m-1" />
    }
</BodyTemplate>
<FooterTemplate>
    <Button Color="ButtonColor.Secondary" @onclick="OnClearModalClick">Clear list of Serial Numbers</Button>
    <Button Color="ButtonColor.Primary" @onclick="OnSaveModalClick">Save list of Serial Numbers</Button>
</FooterTemplate>
This allowed me to put an id on the first element.
Then, in the code to show the modal:
private async Task OnShowModalClick()
{
    await modalMultipleSerialNumbers?.ShowAsync();
    Thread.Sleep(250); //had to sleep for a quarter second to allow the modal to render.
    await _jsRuntime.InvokeVoidAsync("FocusElement");
}
And just to complete:
function FocusElement() {
    document.getElementById("focusElement").focus();
}
A: Here's some demo code that shows you how to do it in pure C# - it's irrelevant whether the code is inside or outside a modal dialog.
In general you shouldn't use for in Razor - i will always be set to the last value in the loop when the code actually renders. You should stick to foreach, which doesn't have the same issue.
@page "/"

<PageTitle>Index</PageTitle>

<h1>Hello, world!</h1>

@{
    var first = true;
}
@foreach (var country in countries)
{
    @if (first)
    {
        <input type="text" class="form-control mb-3" @bind=@country.Name @bind:event="oninput" @ref=this.firstElement />
    }
    else
    {
        <input type="text" class="form-control mb-3" @bind=@country.Name @bind:event="oninput" />
    }
    first = false;
}

@foreach (var country in countries)
{
    <div class="bg-dark text-white m-1 p-2">@country.Name</div>
}

@code {
    private ElementReference firstElement;

    private IEnumerable<Country> countries = new List<Country>() {
        new() { Name = "France" },
        new() { Name = "Portugal" },
        new() { Name = "Spain" }
    };

    protected override async Task OnAfterRenderAsync(bool firstRender)
    {
        if (firstRender)
        {
            // Set focus to the first input element in the modal
            await firstElement.FocusAsync();
        }
    }

    public class Country
    {
        public string Name { get; set; } = string.Empty;
    }
}
Can I set the focus for the first element in a Blazor modal?
I open a modal in Blazor (Server App) that contains an array of strings. Everything is working code wise, but I have to click in the first element to set focus (these are serial numbers and are read with a scanner). After that, as scanning continues the focus moves after each scan. I would like the first element to be focused when the modal opens so scanning can start without having to click in the first element. Here is the modal setup" <Modal @ref="modalMultipleSerialNumbers" Title="Add/Change Multiple Serial Numbers" UseStaticBackdrop="true" Size="ModalSize.ExtraLarge"> <BodyTemplate> @for (var i = 0; i < SD.MaxNumberOfMultiples; i++) { var count = i; // using i doesn't work. Has to be stored in a local variable to use bind.' <input @bind="@MulipleSerialNumbers[count]" class="col-4 m-1" /> } </BodyTemplate> <FooterTemplate> <Button Color="ButtonColor.Secondary" @onclick="OnClearModalClick">Clear list of Serial Numbers</Button> <Button Color="ButtonColor.Primary" @onclick="OnSaveModalClick">Save list of Serial Numbers</Button> </FooterTemplate> I did try: <input @bind="@MulipleSerialNumbers[count]" autofocus="true" class="col-4 m-1" /> but it didn't change anything. Thanks for looking!
[ "To set focus on an element in Blazor, you can use the @ref attribute to reference the element in your code, and then call the FocusAsync() method on the element to set focus to it.\nHere is an example of how you can set focus on an input element in a modal in Blazor:\n@page \"/modal-example\"\n\n<Modal @ref=\"modalMultipleSerialNumbers\" Title=\"Add/Change Multiple Serial Numbers\" UseStaticBackdrop=\"true\" Size=\"ModalSize.ExtraLarge\">\n <BodyTemplate>\n @for (var i = 0; i < SD.MaxNumberOfMultiples; i++)\n {\n var count = i; // using i doesn't work. Has to be stored in a local variable to use bind.'\n <input @ref=\"@inputElements[count]\" @bind=\"@MulipleSerialNumbers[count]\" class=\"col-4 m-1\" />\n }\n </BodyTemplate>\n <FooterTemplate>\n <Button Color=\"ButtonColor.Secondary\" @onclick=\"OnClearModalClick\">Clear list of Serial Numbers</Button>\n <Button Color=\"ButtonColor.Primary\" @onclick=\"OnSaveModalClick\">Save list of Serial Numbers</Button>\n </FooterTemplate>\n</Modal>\n\n@code {\n private IEnumerable<IInputElement> inputElements;\n\n protected override async Task OnAfterRenderAsync(bool firstRender)\n {\n if (firstRender)\n {\n // Set focus to the first input element in the modal\n await inputElements.First().FocusAsync();\n }\n }\n}\n\nIn this example, we use the @ref attribute to reference each input element in the modal, and store these references in an IEnumerable field called inputElements. We then use the OnAfterRenderAsync() method to set focus to the first input element in the modal when the modal is rendered for the first time.\nYou can call the FocusAsync() method on any element that implements the IInputElement interface, which includes input elements such as <input>, <select>, and <textarea>. This allows you to set focus on any element in your Blazor app.\n", "So, I had to use JavaScript to fix the issue.\nThe modal was changed as follows:\n<Modal @ref=\"modalMultipleSerialNumbers\" Title=\"Add/Change Multiple Serial Numbers\" UseStaticBackdrop=\"true\" Size=\"ModalSize.ExtraLarge\">\n<BodyTemplate>\n <input id=\"focusElement\" @bind=\"@MulipleSerialNumbers[0]\" class=\"col-4 m-1\" />\n @for (var i = 1; i < SD.MaxNumberOfMultiples; i++)\n {\n var count = i; // using i in the next statement doesn't work. Has to be stored in a local variable to use bind.\n <input @bind=\"@MulipleSerialNumbers[count]\" class=\"col-4 m-1\" />\n }\n</BodyTemplate>\n<FooterTemplate>\n <Button Color=\"ButtonColor.Secondary\" @onclick=\"OnClearModalClick\">Clear list of Serial Numbers</Button>\n <Button Color=\"ButtonColor.Primary\" @onclick=\"OnSaveModalClick\">Save list of Serial Numbers</Button>\n</FooterTemplate>\n\nThis allowed me to put an id on the first element.\nThen, in the code to show the modal:\nprivate async Task OnShowModalClick()\n{\n await modalMultipleSerialNumbers?.ShowAsync();\n Thread.Sleep(250); //had to sleep for a quarter second to allow the modal to render.\n await _jsRuntime.InvokeVoidAsync(\"FocusElement\");\n}\n\nAnd just to complete:\nfunction FocusElement() {\ndocument.getElementById(\"focusElement\").focus();\n}\n\n", "Here's some demo code that shows you how to do it in pure C# - it's irrelevant whether the code is inside or outside a modal dialog.\nIn general you shouldn't use for in Razor - i will always be set to last value in the loop when the code actually renders. 
You should stick to foreach which doesn't have the same issue.\n@page \"/\"\n\n<PageTitle>Index</PageTitle>\n\n<h1>Hello, world!</h1>\n\n@{\n var first = true;\n}\n@foreach (var country in countries)\n{\n @if (first)\n {\n <input type=\"text\" class=\"form-control mb-3\" @[email protected] @bind:event=\"oninput\" @ref=this.firstElement />\n }\n else\n {\n <input type=\"text\" class=\"form-control mb-3\" @[email protected] @bind:event=\"oninput\" />\n }\n first = false;\n}\n\n@foreach (var country in countries)\n{\n <div class=\"bg-dark text-white m-1 p-2\">@country.Name</div>\n}\n\n@code {\n private ElementReference firstElement;\n private IEnumerable<Country> countries = new List<Country>() { new() { Name = \"France\" }, new() { Name = \"Portugal\" }, new() { Name = \"Spain\" } };\n\n protected override async Task OnAfterRenderAsync(bool firstRender)\n {\n if (firstRender)\n {\n // Set focus to the first input element in the modal\n await firstElement.FocusAsync();\n }\n }\n\n public class Country\n {\n public string Name { get; set; } = string.Empty;\n }\n}\n\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "blazor", "blazor_server_side" ]
stackoverflow_0074659242_blazor_blazor_server_side.txt
Q: Next-auth v4, how do I extend the information of the logged-in user?
I tried the following, but I couldn't get the addition to work from either the client-side or server-side environment. The callbacks function allows you to add to a session, but I just want to add user information and designate it as the default session.
// @types/next-auth.d.ts
import 'next-auth';

declare module 'next-auth' {
  interface User {
    id: string;
    name?: string | null;
    email?: string | null;
    image?: string | null;
    address?: string | null;
  }
}

// [...nextauth].ts
import NextAuth from 'next-auth';
import CredentialsProvider from 'next-auth/providers/credentials';
import { buildFeedbackPath, extractFeedback } from '../../../lib/user';
import type { NextAuthOptions } from 'next-auth';

export const authOptions: NextAuthOptions = {
  providers: [
    CredentialsProvider({
      name: 'Credentials',
      async authorize(credentials, req) {
        // Add logic here to look up the user from the credentials supplied
        const filePath = buildFeedbackPath();
        const userData = extractFeedback(filePath);
        const user = userData.find((userinfo) => userinfo.email === credentials?.email);
        if (!user) {
          throw new Error('No user found!');
        }
        return { email: user.email, image: '11233', address: 'testr' };
      },
    }),
  ],
  callbacks: {
    async session({ session, token, user }) {
      // console.log('session!!', session.user);
      return session;
    },
  },
};
export default NextAuth(authOptions);

serverside
export const getServerSideProps: GetServerSideProps = async (context) => {
  const session = await unstable_getServerSession(context.req, context.res, authOptions);
  console.log(session);
  if (!session) {
    return {
      redirect: {
        destination: '/login',
        permanent: false,
      },
    };
  }
  return {
    props: {},
  };
};

result
Why can't I expand the address?
A: We add more properties in the session callback:
async session({ session, token, user }) {
  console.log('userin session', session);
  session.user.address = 'testr';
  return session;
},

Because if you visit node_modules/next-auth/core/types.d.ts/Session, this is the DefaultUser:
export interface DefaultUser {
  id: string;
  name?: string | null;
  email?: string | null;
  image?: string | null;
}

This is why, in your console.log, the "name" property exists but its value does not exist (it returns undefined), while "image" exists in DefaultUser; that is why you could set image: '11233' but not "address", because "address" does not exist in the DefaultUser properties.
Next-auth v4, how do I extend the information of the logged-in user?
I tried the following, but I couldn't get the addition from the client and server side environment. The callbacks function allows you to add to a session, but I just want to add user information and designate it as a default session. // @types/next-auth.d.ts import 'next-auth'; declare module 'next-auth' { interface User { id: string; name?: string | null; email?: string | null; image?: string | null; address?: string | null; } } // [...nextauth].ts import NextAuth from 'next-auth'; import CredentialsProvider from 'next-auth/providers/credentials'; import { buildFeedbackPath, extractFeedback } from '../../../lib/user'; import type { NextAuthOptions } from 'next-auth'; export const authOptions: NextAuthOptions = { providers: [ CredentialsProvider({ name: 'Credentials', async authorize(credentials, req) { // Add logic here to look up the user from the credentials supplied const filePath = buildFeedbackPath(); const userData = extractFeedback(filePath); const user = userData.find((userinfo) => userinfo.email === credentials?.email); if (!user) { throw new Error('No user found!'); } return { email: user.email, image: '11233', address: 'testr' }; }, }), ], callbacks: { async session({ session, token, user }) { // console.log('session!!', session.user); return session; }, }, }; export default NextAuth(authOptions); serverside export const getServerSideProps: GetServerSideProps = async (context) => { const session = await unstable_getServerSession(context.req, context.res, authOptions); console.log(session); if (!session) { return { redirect: { destination: '/login', permanent: false, }, }; } return { props: {}, }; }; result Why can't I expand the address?
[ "we add more properties in session callback:\n async session({ session, token, user }) {\n console.log('userin session', session);\n session.user.address = 'testr';\n return session;\n},\n\n\nBecause if you visit node_modules/next-auth/core/types.d.ts/Session this is the DefaultUser\nexport interface DefaultUser {\n id: string;\n name?: string | null;\n email?: string | null;\n image?: string | null;\n}\n\nthis is why, in your console.log, \"name\" property exists but its value does not exist, it returns undefined, but \"image\" exisits in DefaultUser, that is why you could image: '11233' but not \"address\" because \"address\" does not exist in DefaultUser properties.\n" ]
[ 0 ]
[]
[]
[ "authentication", "next.js", "next_auth", "session" ]
stackoverflow_0074666260_authentication_next.js_next_auth_session.txt
Q: Performance issues because of a lot of images on a website
I have a lot of images on my website and that makes my website load too slowly. Should I create a small, low-quality version of each image with a blur effect on it and lazy-load the real images after all the page files are downloaded, or what should I do?
A: There are several things that can be done.

The first is definitely to make the images smaller. If they are not taking up a lot of the page then it is better to make them smaller, because if they are not shown at their native resolution you are just wasting a lot of bandwidth and downscaling the image anyway. There are many online image compressors that you can use, or you can directly decrease the size from your image viewer; most of them have a resize option.

Instead of using jpg/jpeg/png, use WebP. WebP is a better format for images since it provides further compression of the data and also has a lossless mode that compresses about 25% better than PNG. The biggest pros of WebP are faster load times and less storage. BUT this might not be supported by all browsers, so before implementing this just check whether anyone is still running Netscape or not. Last I checked, all browsers in common use support it. (https://en.wikipedia.org/wiki/WebP#Support)

If you need more optimization there is also a browser addon called Lighthouse, created by Google. Just install it, go to your website and click "generate report". It will tell you all the places where you can actually optimize your website.

(Chrome- https://developer.chrome.com/docs/lighthouse/overview/
Firefox- https://addons.mozilla.org/en-US/firefox/addon/google-lighthouse/)
A: Why does it get slow?

Images are too large
Images are not optimized according to the device
Images have unspecified dimensions
You use heavy formats
The browser starts loading images all at once
Your cache doesn't store images

Solution:

Resize and compress images

Lossy = a filter that eliminates some of the data. The quality of the image is impacted.
Lossless = a filter that compresses the data without touching the quality of the image.
Use

Imagify
Ewww Image Optimizer
Optimole (Image optimization & Lazy Load by Optimole)
ShortPixel Image Optimizer
reSmush.it

Set image dimensions

Serve images optimized for each device

Lazy load your images

Implementing Lazy Loading using a WordPress plugin.

Lazy Loading by WP Rocket is a free plugin that implements the lazy load script on the images.

Check this interesting guide if you want to compare the best lazy load plugins available on the market.

Implementing Lazy Loading manually: follow this guide from CodeInWP that explains the two ways to implement lazy loading manually (not so easy to follow for beginners, though).

Convert your images to WebP

Credit:https://imagify.io/blog/reasons-images-slow-websites/
A: That's a very interesting issue. I love the answers posted here (very detailed and informative), but - I think that the crux of your problem is the coupling between your page resources and your page load. Thus I think that de-coupling those 2 will make a HUGE difference
The problem
Your page is loading with all of its resources. This results in a slow and laggy page load that reminds us of old-fashioned web sites. We want a more modern approach that loads the page in a more elegant way
The Solution

Load your page with light-weight content (such as text)
Present the partially-loaded page to the user with placeholders for the rest of the content
In the background - lazy-load heavy page resources (such as images)
Let the page populate itself while the user has full access to it

How To Implement
Let's focus on the problem you mentioned - images. You may think they are loading slowly because of their size. But videos (which are heavier than images) are usually loaded with no problem - because modern video players load them in chunks and have a really good strategy to give the user a great experience. So, for now, we will NOT focus on the weight of your resources but instead focus on how to load them properly. It goes without saying (but I'll say it anyway) - AFTER you are getting the results you expected - it is advised to properly handle the size of your images as noted in the other answers
A very simple and effective demo for this:
<style>
  img.loader {
    display: none;
  }

  img.loader.active {
    display: block;
  }
</style>

<script>
  function loadImage(target, url) {
    const imgElement = target.querySelector('img.target')
    const imgLoaderElement = document.createElement('img')
    const loaderElement = target.querySelector('img.loader')

    imgLoaderElement.onload = event => {
      // nice to have - the actual size of the requested image
      console.log(imgLoaderElement.width, imgLoaderElement.height)

      loaderElement.classList.remove('active')
      imgElement.src = url
    }

    imgLoaderElement.src = url
  }
</script>

<div id="image-loader-1">
  <img class="active loader" src="https://cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif" alt="" width="48" height="48">
  <img class="target" />
</div>

<button onclick="loadImage(document.querySelector('#image-loader-1'), 'https://picsum.photos/600/400')">
  Load Image
</button>

Let's break this down
Your image container is now a div element constructed of 2 elements

a pre-loader - to have a nice effect of loading something
the image element - this will contain the actual image

The function loadImage asks you to specify the target image container (in our example image-loader-1) and the image url (your site or 3rd-party sites - no difference here). It will then create a new image element (without adding it to the DOM) and load the image there (while still playing the pre-loader in the background). Only after the image is fully loaded (and the url is cached in the browser) - you may attach this url to your real image element (and then - make the pre-loader disappear).
This way, you will always have the benefit of a loader until the image is ready for view - making your users' experience better
Does this look good? Right now - no, not really. I cleaned all the styling properties from it in order to make this solution as clear as possible.
Can it be better? Yes. Just put some styling effort into it and you are ready to go
Note: because this is a general question which does not rely on a specific modern framework, I posted a very generic solution using vanilla JS. Hope it is clear enough for future users to understand and implement in their own projects.
A: Just in simple language....
Sites that use too many images, or have images that are too large, have longer loading times. This can slow down your entire page, irritating visitors and actually hurting your site's ranking in online search results.
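As a concrete companion to the resize/compress/WebP advice in the answers above, here is a minimal Python sketch using the Pillow library; the file names, the 1200 px width cap and the quality setting are illustrative assumptions, not values taken from the answers:
from PIL import Image

def optimize_image(src, dst="optimized.webp", max_width=1200, quality=80):
    """Downscale an image if needed and re-encode it as WebP."""
    img = Image.open(src)
    if img.width > max_width:
        # keep the aspect ratio while downscaling
        new_height = round(img.height * max_width / img.width)
        img = img.resize((max_width, new_height), Image.LANCZOS)
    # lossy WebP; pass lossless=True instead of quality for lossless output
    img.save(dst, "WEBP", quality=quality)

optimize_image("photo.jpg")

Re-encoding like this typically shrinks large JPEG/PNG photos considerably before any lazy-loading is even considered.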
Performance issues because of a lot of images on a website
I have a lot of images on my website and that makes my website load too slowly. Should I create a small, low-quality version of each image with a blur effect on it and lazy-load the real images after all the page files are downloaded, or what should I do?
[ "There are several things that can be done.\n\nis definitely make the size of images smaller. if they are not taking a lot of website space then it is better to make them smaller because if they are not shown in the resolution the image belongs to you are just wasting a lot of bandwidth and then downscale the image anyway. There are many online image compressor that you can use or you can directly decrease the size from your image viewer most of them have a resize option.\n\ninstead of using jpg/jpeg/png use webp. webp is a better format for images since it provides further compression of data and also have a lossless compression 25% better then png. The biggest pro of webp is faster load time and less storage. BUT this might not be supported by all browsers, so before implementing this just check if anyone is still running netscape or not. Last i checked all browser that are used support it.(https://en.wikipedia.org/wiki/WebP#Support)\n\nIf you need more optimization there is also a browser addon called lighthouse created by google. just install it, goto your website and click \"generate report\". It will tell you all the places you can actually optimize your website.\n\n\n(Chrome- https://developer.chrome.com/docs/lighthouse/overview/\nFirefox- https://addons.mozilla.org/en-US/firefox/addon/google-lighthouse/)\n", "Why does it get slow?\n\nImages are too large\nImages are not optimized according to the device\nImages have unspecified dimensions\nYou use heavy formats\nThe browser starts loading images all at once\nYour cache doesn’t store images\n\nSolution:\n\nResize and compress images\n\nLossy = a filter that eliminates some of the data. The quality of the image is impacted.\nLossless = a filter that compresses the data without touching the quality of the image.\nUse\n\nImagify\nEwww Image Optimizer\nOptimole (Image optimization & Lazy Load by Optimole)\nShortPixel Image Optimizer\nreSmush.it\n\n\nSet image dimensions\n\nServe images optimized for each device\n\nLazy load your images\n\n\nImplementing Lazy Loading using a WordPress plugin.\n\nLazy Loading by WP Rocket is a free plugin that implements the lazy\nload script on the images.\n\nCheck this interesting guide if you want to compare the best lazy\nload plugins available on the market.\n\nImplementing Lazy Loading manually: follow this guide from CodeInWP\nthat explains the two ways to implement lazy loading manually (not so\neasy to follow for beginners, though).\n\n\n\nConvert your images to WebP\n\nCredit:https://imagify.io/blog/reasons-images-slow-websites/\n", "That's a very interesting issue. I love the answers posted here (very detailed and informative), but - I think that the crux of your problem is the coupling between your page resources and your page load. Thus I think that de-coupling those 2 will make a HUGE difference\nThe problem\nYou page is loading with all of it's resources. this results in a slow and leggy page load that reminds us of old-fashion web sites. We want a more modern approach that loads the page in a more elegant way\nThe Solution\n\nLoad your page with light-weight content (such as text)\nPresent the partially-loaded page to the user with placeholders for the rest of content\nin the background - lazy-load heavy page resources (such as images)\nlet the page populate itself while the user has full access to it\n\nHow To Implement\nLet's focus on the problem you mentioned - images. You may think they are loading slowly because of their size. 
But, videos (which are heavier than images) are usually loaded with no problem - because modern video players are loading them in chunks and have a really good strategy to give the user a great experience. So, for now, we will NOT focus on the weight of your resources but instead focus on how to load them properly. It goes without saying (but I'll say it anyway) - AFTER you are getting the results your expected - it is advise to properly handle the size of your images as noted in the other answers\nA very simple and effective demo for this:\n<style>\n img.loader {\n display: none;\n }\n\n img.loader.active {\n display: block;\n }\n</style>\n\n<script>\n function loadImage(target, url) {\n const imgElement = target.querySelector('img.target')\n const imgLoaderElement = document.createElement('img')\n const loaderElement = target.querySelector('img.loader')\n \n imgLoaderElement.onload = event => {\n // nice to have - the actual size of the requested image\n console.log(imgLoaderElement.width, imgLoaderElement.height) \n \n loaderElement.classList.remove('active')\n imgElement.src = url\n }\n \n imgLoaderElement.src = url\n }\n</script>\n\n<div id=\"image-loader-1\">\n <img class=\"active loader\" src=\"https://cdnjs.cloudflare.com/ajax/libs/galleriffic/2.0.1/css/loader.gif\" alt=\"\" width=\"48\" height=\"48\">\n <img class=\"target\" />\n</div>\n\n<button onclick=\"loadImage(document.querySelector('#image-loader-1'), 'https://picsum.photos/600/400')\">\n Load Image\n</button>\n\nLet's break this down\nYour image container is now a div element constructed of 2 elements\n\na pre-loader - to have a nice effect of loading something\nthe image element - this will contain the actual image\n\nThe function loadImage asks you to specify the target image container (in our example image-loader-1) and the image url (your site or 3rd0party sites - no difference here). It will then create a new image element (without populating it to the dom) and load the image there (while still playing the pre-loader in the background). Only after the image is full loaded (and the url is cached in the browser) - you may attach this url to your real image element (and then - make the pre-loader disappear).\nThis way, you will always have the benefit to use a loader until the image is ready for view - making your users experience better\nDoes this looks good? Right now - no, not really. I cleaned all the styling properties from it in order to make this solution as clear as possible.\nCan it be better? yes. Just put some styling efforts into it and you are ready\nto go\nNote: because this is a general question which does not rely on a specific modern framework, I posted a very generic solution using vanilla JS. Hope it is clear enough for future users to understand and implement in their own projects.\n", "just in simple language....\nSites that use too many images, or have images that are too large, have longer loading times. This can slow down your entire page, irritating visitors and actually hurting your site's ranking in online search results.\n" ]
[ 1, 1, 1, 0 ]
[]
[]
[ "web_performance" ]
stackoverflow_0074529741_web_performance.txt
Q: CSS: How to fix menu and keep scroll content? Left side there is a menu, right side the content. A flex box is used. Why menu scroll, if I scroll in right side? Why it is not keep fixed? How can I fix it? CSS is inline in React.js. const DashboardLayout = ({ children }: Props) => { const isDesktop = useMediaQuery('(min-width: 575px)') return ( <Layout isNavbarTransparent={false}> <section className={`section-base`}> <div style={{ display: 'flex' }}> {isDesktop ? ( <> <div style={{ flexBasis: '350px' }}> <DashboardSidebar embeddedIn={'dashboard'} /> </div> <div>{children}</div> </> ) : ( <div>{children}</div> )} </div> </section> </Layout> ) } I tried to set this for right content: overflow: hidden; height: 100px; Did not help. A: You can use position: sticky; to make it work. Attached a working example (no React but the concept is there). See this for further information. Note: There may be various solution to this depending what you want to get, this answer was made with the context you provide. Hope it helps! .top-bar { width: 100%; height: 40px; background-color: black; } .dashboard { display: flex; width: 100%; } .menu { width: 30%; margin: 20px; } .sticky-container { position: sticky; top: 0; /* set top for sticky to work */ padding-top: 10px; } .content { width: 70%; } <div> <div class="top-bar"></div> <div class="dashboard"> <div class="menu"> <div class="sticky-container"> <li>Options</li> <li>Options</li> <li>Options</li> <li>Options</li> </div> </div> <div class="content"> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p> <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. 
Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p> </div> </div> </div> A: Solution: overflow: scroll; height: calc(100vh - 64px);
CSS: How do I keep the menu fixed while the content scrolls?
On the left side there is a menu, on the right side the content. A flex box is used. Why does the menu scroll when I scroll the right side? Why does it not stay fixed? How can I fix it? The CSS is inline in React.js.
const DashboardLayout = ({ children }: Props) => {
  const isDesktop = useMediaQuery('(min-width: 575px)')

  return (
    <Layout isNavbarTransparent={false}>
      <section className={`section-base`}>
        <div style={{ display: 'flex' }}>
          {isDesktop ? (
            <>
              <div style={{ flexBasis: '350px' }}>
                <DashboardSidebar embeddedIn={'dashboard'} />
              </div>
              <div>{children}</div>
            </>
          ) : (
            <div>{children}</div>
          )}
        </div>
      </section>
    </Layout>
  )
}

I tried to set this for the right content:
overflow: hidden;
height: 100px;

It did not help.
[ "You can use position: sticky; to make it work. Attached a working example (no React but the concept is there). See this for further information.\nNote: There may be various solution to this depending what you want to get, this answer was made with the context you provide. Hope it helps!\n\n\n.top-bar {\n width: 100%;\n height: 40px;\n background-color: black;\n}\n\n.dashboard {\n display: flex;\n width: 100%;\n}\n\n.menu {\n width: 30%;\n margin: 20px;\n}\n\n.sticky-container {\n position: sticky;\n top: 0; /* set top for sticky to work */\n padding-top: 10px;\n}\n\n.content {\n width: 70%;\n}\n<div>\n <div class=\"top-bar\"></div>\n <div class=\"dashboard\">\n <div class=\"menu\">\n <div class=\"sticky-container\">\n <li>Options</li>\n <li>Options</li>\n <li>Options</li>\n <li>Options</li>\n </div>\n </div>\n <div class=\"content\">\n <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p>\n <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p>\n <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p>\n <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p>\n <p>Lorem ipsum dolor sit amet consectetur adipisicing elit. Quia laborum ab ducimus, consequatur itaque modi dolores dolorem optio assumenda ad doloribus eveniet voluptas, asperiores maiores deleniti, cupiditate dolor necessitatibus aliquam. Lorem ipsum dolor sit amet consectetur adipisicing elit. Ad maxime beatae quod ea excepturi libero reprehenderit. Animi, voluptates? Obcaecati illum quis asperiores molestias, autem dolorum. Sapiente quis voluptate voluptatibus ipsa?</p>\n </div>\n </div>\n</div>\n\n\n\n", "Solution:\noverflow: scroll;\nheight: calc(100vh - 64px);\n\n" ]
[ 1, 0 ]
[]
[]
[ "css", "reactjs" ]
stackoverflow_0074666901_css_reactjs.txt
Q: How to find the length of the major axis and minor axis of a 2D object with an irregular shape?
I would like to find the length of the major axis and minor axis of a figure with an irregular shape like the figure below. The way I thought of is to draw a rectangle that fits around the object and find the length and width of the rectangle. But I don't think this is a good idea. The center of gravity of the object is given. Any ideas would be appreciated.
A: One way to find the length of the major and minor axes of an irregularly shaped object is to use its bounding box. A bounding box is the smallest rectangle that encloses the entire object, and it can be found by determining the minimum and maximum values of the object's coordinates along each dimension.
For example, if the object is represented by a set of 2D points, you can find its bounding box by finding the minimum and maximum x-coordinates and y-coordinates of all the points. The length of the major axis would then be the larger of the two extents (the difference between the maximum and minimum x-coordinates, and the difference between the maximum and minimum y-coordinates), and the length of the minor axis would be the smaller of the two.
Another way to find the length of the major and minor axes is to use the object's orientation and the distances from its center of gravity to its farthest points. If you know the orientation of the object (for example, if you have determined its principal components), you can project the points onto those directions to find the extents along each axis. The lengths of the major and minor axes would then be equal to these extents.
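To make both approaches concrete, here is a minimal Python/NumPy sketch; it assumes points is a hypothetical (N, 2) array of the object's 2D coordinates (e.g., its contour or filled pixels), which is not given in the question:
import numpy as np

def axis_lengths(points):
    # Axis-aligned bounding-box extents
    extents = points.max(axis=0) - points.min(axis=0)
    bbox_major, bbox_minor = extents.max(), extents.min()

    # Orientation-aware version: project onto the principal components
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))  # eigenvalues in ascending order
    proj = centered @ eigvecs
    spans = proj.max(axis=0) - proj.min(axis=0)
    pca_major, pca_minor = spans[1], spans[0]

    return (bbox_major, bbox_minor), (pca_major, pca_minor)

The PCA variant handles rotated shapes, where an axis-aligned box would overestimate both lengths.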
How to find the length of the major axis and minor axis of a 2D object with an irregular shape?
I would like to find the length of the major axis and minor axis of a figure with an irregular shape like the figure below. The way I thought of is to draw a rectangle that fits around the object and find the length and width of the rectangle. But I don't think this is a good idea. The center of gravity of the object is given. Any ideas would be appreciated.
[ "One way to find the length of the major and minor axes of an irregularly shaped object is to use its bounding box. A bounding box is the smallest rectangle that encloses the entire object, and it can be found by determining the minimum and maximum values of the object's coordinates along each dimension.\nFor example, if the object is represented by a set of 2D points, you can find its bounding box by finding the minimum and maximum x-coordinates and y-coordinates of all the points. The length of the major axis would then be the difference between the maximum and minimum x-coordinates, and the length of the minor axis would be the difference between the maximum and minimum y-coordinates.\nAnother way to find the length of the major and minor axes is to use the object's orientation and the distance from its center of gravity to its farthest points. If you know the orientation of the object (for example, if you have determined its principal components), you can use trigonometric functions to find the distances from the center of gravity to the farthest points along each axis. The lengths of the major and minor axes would then be equal to these distances.\n" ]
[ 0 ]
[]
[]
[ "algorithm", "python" ]
stackoverflow_0074666930_algorithm_python.txt
Q: How to restart a Java application, remembering its command line arguments
I have a Java application. It can be started with a couple of command line flags. I want to provide the ability for the user to "restart" the application. Currently we save the arguments in a control file and read it when restarting the application. What is the best way to restart the application - how can I retain the command line arguments?
A: Using the RuntimeMXBean you could retrieve the classpath, boot classpath, input arguments, etc.
package com;

import java.lang.management.ManagementFactory;
import java.lang.management.RuntimeMXBean;

class JMXTest {
    public static void main(String args[]) {
        try {
            for ( int i = 0 ; i < args.length ; i++ ) 
                System.out.println( "args :" + args[i] );

            RuntimeMXBean mx = ManagementFactory.getRuntimeMXBean();
            System.out.println( "boot CP:" + mx.getBootClassPath() );
            System.out.println( " CP:" + mx.getClassPath() );
            System.out.println( "cmd args:" + mx.getInputArguments() );
        }
        catch( Exception e ) {
            e.printStackTrace();
        }
    }
}

A: Invoke a new instance using
java -jar appname.jar arg1 arg2 

then close the current one using
System.exit(0); 

This way you won't face the problem of retaining the args.
Here is an example of how to invoke commands from a Java app
A: Anyway, you will have to persist the command line arguments. If the set of arguments is pretty fixed, consider writing a small batch or shell script file that does nothing but call java with this set of arguments.
If you just want to start it once with arguments and then, if you restart the application without arguments, want to have it use the arguments from the previous call, do something like that:
public static void main(String[] args) {

    if (args.length == 0)
        args = readArgsFromFile();
    else
        writeArgsToFile();

    // ...

}

Sidenote: For simplicity reasons I've reused args. For better code, if needed, copy the received or stored parameters to another data structure, another array, a Properties instance, ...
A: It varies according to the OS of the user. If you really want to do it in an OS cross-platform compatible way, then you should supply startup scripts: shell for Linux-like OSes / bat for Windows; these scripts set up the classpath and arguments.
I don't think that creating a "restart" button in the application is a wise decision, but if you want something like "eclipse restart", you should take a look at RuntimeMXBean, which can get the booting classpath for you.
A: Why not serialize and create the object again from disk when restarted?
You will need to implement the Serializable interface in a "CommandLineParams" class to do this.
I think it's the most structured way to accomplish what you are trying to do.
A: Writing here a solution for the closed question "How to restart the application after an error?" In my opinion there is no duplication with this topic; that one is a problem of recovering a working application.
You can run a Bash script instead of your program:
#!/bin/bash
while : ; do
    myprogram
    [[ $? -ne 0 ]] || break
done

And run it as nohup script.sh &
See https://superuser.com/a/1223132/561025
How to restart a Java application, remembering its command line arguments
I have a Java application. It can be started with a couple of command line flags. I want to provide the ability for the user to "restart" the application. Currently we save the arguments in a control file and read it when restarting the application. What is the best way to restart the application - how can I retain the command line arguments?
[ "Using the RuntimeMXBean you could retrieve , Classpath, Bootclasspath etc.\npackage com;\n\nimport java.lang.management.ManagementFactory;\nimport java.lang.management.RuntimeMXBean;\n\nclass JMXTest {\n public static void main(String args[]) {\n try {\n for ( int i = 0 ; i < args.length ; i++ ) \n System.out.println( \"args :\" + args[i] );\n\n RuntimeMXBean mx = ManagementFactory.getRuntimeMXBean();\n System.out.println( \"boot CP:\" + mx.getBootClassPath() );\n System.out.println( \" CP:\" + mx.getClassPath() );\n System.out.println( \"cmd args:\" + mx.getInputArguments() );\n }\n catch( Exception e ) {\n e.printStackTrace();\n }\n }\n}\n\n", "invoke new using \njava -jar appname.jar arg1 arg2 \n\nclose current one using \nSystem.exit(0); \n\nHere you won't face problem of retaining arg \nHere is example to invoke commands from java app\n", "Anyway, you will have to persist the commandline arguments. If the set of arguments is pretty fixed, consider writing a small batch or shell script file that does nothing but calling java with this set of arguments.\nIf you just want to start it once with arguments and then, if you restart the application without arguments, want to have it to use the arguments from the previous call, do something like that:\npublic static void main(String[] args) {\n\n if (args.length == 0)\n args = readArgsFromFile();\n else\n writeArgsToFile();\n\n // ...\n\n}\n\nSidenote: For simplicity reasons I've reused args. For better code, if needed, copy the received or stored parameters to another data structure, another array, a Properties instance, ...\n", "It varies according to an OS of the user, If you really want to do it OS cross-platform compatible. Then you should supplied starting scripts : shell for linux like OS / bat for windows, these scripts set up the classpath and arguments.\nI don't think that creating \"restart\" button in the application is a wise decision, but If you want something like \"eclipse restart\", you should take a look at RuntimeMXBean which can get booting classpath for you. \n", "Why not serialize and create the object again from disk when restarted?\nYou will need to implement the Serializable interface in a \"CommandLineParams\" class to do this. \nI think it's the most structured way to accomplish what you are trying to do.\n", "Write here solution for closed How to restart the application after an error? My opinion - no duplication with this topic, it's a problem of recover working application.\nIf you can ran the Bash script instead of your program:\n#!/bin/bash\nwhile : ; do\n myprogram\n [[ $? -ne 0 ]] || break\ndone\n\nAnd run as nohup script.sh &\nSee https://superuser.com/a/1223132/561025 and https://superuser.com/a/1223132/561025\n" ]
[ 2, 1, 1, 0, 0, 0 ]
[]
[]
[ "java", "restart" ]
stackoverflow_0003854997_java_restart.txt
Q: How would I save a doc/docx/docm file into a directory or S3 bucket using PySpark
I am trying to save a data frame into a document but it fails with the error below:
java.lang.ClassNotFoundException: Failed to find data source: docx. Please find packages at http://spark.apache.org/third-party-projects.html

My code is below:
#f_data is my dataframe with data
f_data.write.format("docx").save("dbfs:/FileStore/test/test.csv")
display(f_data)

Note that I could save files in CSV, text and JSON format, but is there any way to save a docx file using PySpark?
My question here: do we have support for saving data in the doc/docx format? If not, is there any way to store the file, like writing a file stream object into a particular folder/S3 bucket?
A: In short: no, Spark does not support the DOCX format out of the box. You can still collect the data onto the driver node (i.e.: into a pandas dataframe) and work from there.
Long answer:
A document format like DOCX is meant for presenting information in small tables with style metadata. Spark focuses on processing large amounts of data at scale and it does not support the DOCX format out of the box.
If you want to write DOCX files programmatically, you can:

Collect the data into a Pandas DataFrame: pd_f_data = f_data.toPandas() (note that toDF() would only return another Spark DataFrame)
Import a Python package to create the DOCX document and save it into a stream. See question: Writing a Python Pandas DataFrame to Word document
Upload the stream to an S3 blob using for example boto: Can you upload to S3 using a stream rather than a local file?

Note: if your data has more than one hundred rows, ask the receivers how they are going to use the data. Use docx just for reporting, not as a file transfer format.
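A sketch tying the three steps of the answer together; the bucket name, key and file names are placeholders, and it assumes the python-docx and boto3 packages plus a driver with enough memory to hold the collected data:
import io
import boto3
from docx import Document

pdf = f_data.toPandas()  # collect the Spark DataFrame on the driver

doc = Document()
table = doc.add_table(rows=1, cols=len(pdf.columns))
for cell, name in zip(table.rows[0].cells, pdf.columns):
    cell.text = str(name)           # header row
for _, row in pdf.iterrows():
    for cell, value in zip(table.add_row().cells, row):
        cell.text = str(value)      # one table row per record

buf = io.BytesIO()
doc.save(buf)   # python-docx can write to a file-like object
buf.seek(0)
boto3.client("s3").upload_fileobj(buf, "my-bucket", "reports/test.docx")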
How would I save a doc/docx/docm file into a directory or S3 bucket using PySpark
I am trying to save a data frame into a document but it fails with the error below:
java.lang.ClassNotFoundException: Failed to find data source: docx. Please find packages at http://spark.apache.org/third-party-projects.html

My code is below:
#f_data is my dataframe with data
f_data.write.format("docx").save("dbfs:/FileStore/test/test.csv")
display(f_data)

Note that I could save files in CSV, text and JSON format, but is there any way to save a docx file using PySpark?
My question here: do we have support for saving data in the doc/docx format? If not, is there any way to store the file, like writing a file stream object into a particular folder/S3 bucket?
[ "In short: no, Spark does not support DOCX format out of the box. You can still collect the data into the driver node (i.e.: pandas dataframe) and work from there.\nLong answer:\nA document format like DOCX is meant for presenting information in small tables with style metadata. Spark focus on processing large amount of files at scale and it does not support DOCX format out of the box.\nIf you want to write DOCX files programmatically, you can:\n\nCollect the data into a Pandas DataFrame pd_f_data = f_data.toDF()\nImport python package to create the DOCX document and save it into a stream. See question: Writing a Python Pandas DataFrame to Word document\nUpload the stream to a S3 blob using for example boto: Can you upload to S3 using a stream rather than a local file?\n\nNote: if your data has more than one hundred rows, ask the receivers how they are going to use the data. Just use docx for reporting no as a file transfer format.\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "csv", "docx", "pyspark" ]
stackoverflow_0074659957_apache_spark_csv_docx_pyspark.txt
Q: Type hint Pandas DataFrameGroupBy
How should I type hint in Python a pandas DataFrameGroupBy object? Should I just use pd.DataFrame as for normal pandas dataframes? I didn't find any other solution atm
A: DataFrameGroupBy is a proper type in and of itself. So if you're writing a function which must specifically take a DataFrameGroupBy instance:
from pandas.core.groupby import DataFrameGroupBy

def my_function(dfgb: DataFrameGroupBy) -> None:
    """Do something with dfgb."""

If you're looking for a more general polymorphic type, there are several possibilities:

pandas.core.groupby.GroupBy, since DataFrameGroupBy inherits from GroupBy[DataFrame].
If you want to accept Series instances too, you could either union DataFrameGroupBy and SeriesGroupBy or you could use GroupBy[FrameOrSeries] (if you intend to always match the input type in your return value) or GroupBy[FrameOrSeriesUnion] if your output type doesn't reflect the input type. All of these types are in pandas.core.groupby.generic.
You could combine the above generics (and others) in many different ways to your liking.

A: VS Code type hinting was still not able to recognize the type by following the above example. Changing the import statement to the one below helped:
from pandas.core.groupby.generic import DataFrameGroupBy
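A small usage sketch of the union approach described above; note that these classes live in pandas internals, so the exact import path can shift between pandas versions:
from typing import Union

import pandas as pd
from pandas.core.groupby.generic import DataFrameGroupBy, SeriesGroupBy

def group_sizes(gb: Union[DataFrameGroupBy, SeriesGroupBy]) -> pd.Series:
    # works for both frame and series group-bys
    return gb.size()

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 3]})
print(group_sizes(df.groupby("key")))           # DataFrameGroupBy
print(group_sizes(df.groupby("key")["val"]))    # SeriesGroupBy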
Type hint Pandas DataFrameGroupBy
How should I type hint in Python a pandas DataFrameGroupBy object? Should I just use pd.DataFrame as for normal pandas dataframes? I didn't find any other solution atm
[ "DataFrameGroupBy is a proper type in of itself. So if you're writing a function which must specifically take a DataFrameGroupBy instance:\nfrom pandas.core.groupby import DataFrameGroupBy\n\ndef my_function(dfgb: DataFrameGroupBy) -> None:\n \"\"\"Do something with dfgb.\"\"\"\n\nIf you're looking for a more general polymorphic type, there are several possibilities:\n\npandas.core.groupby.GroupBy since DataFrameGroupBy inherits from GroupBy[DataFrame].\nIf you want to accept Series instances too, you could either union DataFrameGroupBy and SeriesGroupBy or you could use GroupBy[FrameOrSeries] (if you intend to always match the input type in your return value) or GroupBy[FrameOrSeriesUnion] if your output type doesn't reflect the input type. All of these types are in pandas.core.groupby.generic.\nYou could combine the above generics (and others) in many different ways to your liking.\n\n", "vscode type hinting was still not able to recognize the type by following the above example. Changing the import statement to below helped:\nfrom pandas.core.groupby.generic import DataFrameGroupBy\n\n" ]
[ 7, 1 ]
[]
[]
[ "pandas", "python", "type_hinting" ]
stackoverflow_0070501065_pandas_python_type_hinting.txt
Q: My .attrs lookup is not working in Beautiful Soup
I am a beginner programmer and I was trying to create my hangman game and import data with Beautiful Soup, but when I copied exactly what the YouTuber did, his code worked and mine didn't. I have tested it and the problem is the .attrs lookup. I have tried looking for a typo but I am pretty sure I didn't make one, and I have also made sure I downloaded all the packages needed and looked through the tutorial multiple times. The tutorial is by https://freecodecamp.org
import requests
from bs4 import BeautifulSoup

result = requests.get('https://en.wikipedia.org/wiki/List_of_highest-grossing_films')
src = result.content
soup = BeautifulSoup(src, 'lxml')
results = []
for i in soup.find_all('th'):
    a_tag = i.find('a')
    results.append(a_tag.attrs['title'])
print(results)

A: You are getting the error because not all the items in the list soup.find_all('th') have an a tag, and even if you fix this, not all of them will have a title attribute, so try it like this:
src = result.content
soup = BeautifulSoup(src, 'lxml')
results = []
for i in soup.find_all('th'):
    if i.find('a'):
        a_tag = i.find('a')
        if a_tag.get('title'):
            results.append(a_tag.attrs['title'])
print(results)

Note: I tried not to refactor your code, but we can make it better :)
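An equivalent, more compact variant of the fix, using a CSS selector so that only a tags that actually carry a title attribute inside a th are matched; this sidesteps both None checks:
import requests
from bs4 import BeautifulSoup

result = requests.get('https://en.wikipedia.org/wiki/List_of_highest-grossing_films')
soup = BeautifulSoup(result.content, 'lxml')

# 'th a[title]' matches only <a> tags with a title attribute inside a <th>
results = [a['title'] for a in soup.select('th a[title]')]
print(results)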
My .attrs lookup is not working in Beautiful Soup
I am a beginner programmer and I was trying to create my hangman game and import data with Beautiful Soup, but when I copied exactly what the YouTuber did, his code worked and mine didn't. I have tested it and the problem is the .attrs lookup. I have tried looking for a typo but I am pretty sure I didn't make one, and I have also made sure I downloaded all the packages needed and looked through the tutorial multiple times. The tutorial is by https://freecodecamp.org
import requests
from bs4 import BeautifulSoup

result = requests.get('https://en.wikipedia.org/wiki/List_of_highest-grossing_films')
src = result.content
soup = BeautifulSoup(src, 'lxml')
results = []
for i in soup.find_all('th'):
    a_tag = i.find('a')
    results.append(a_tag.attrs['title'])
print(results)
[ "you are getting the error because not all the items in the list soup.find_all('th') have tag a, and if you fix this, not all the items will have title , so try like this:\nsrc = result.content\nsoup = BeautifulSoup(src, 'lxml')\nresults = []\nfor i in soup.find_all('th'):\n if i.find('a'):\n a_tag = i.find('a')\n if a_tag.get('title'):\n results.append(a_tag.attrs['title'])\nprint(results)\n\nNote:I tried not to reflector your code, and we can made it better :)\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "python" ]
stackoverflow_0074666526_beautifulsoup_python.txt
Q: Printing different values for the same struct variable in C
#define MAX_HEIGHT 512
#define MAX_WIDTH 512

typedef struct
{
    int lines;
    int cols;
    int highestValue;
    int matrix[MAX_WIDTH][MAX_HEIGHT];
} Pgm;

void getInfo()
{
    Pgm pgm;
    FILE *f = fopen("pepper.pgm", "r");
    bool keepReading = true;
    int line = 0, countSpaces = 0, i = 0;
    do
    {
        fgets(buffer, MAX_LINE, f);
        if (feof(f))
        {
            printf("\nCheguei no final do arquivo");
            keepReading = false;
            break;
        }
        if (line >= 3)
        {
            char *values = strtok(buffer, " ");
            while (values != NULL)
            {
                total++;
                // printf("values: %d, cols: %d, pgm.matrix[%d][%d], total: %d\n", atoi(values), pgm.cols, i, countSpaces, total);
                pgm.matrix[i][countSpaces] = atoi(values);
                if (i == pgm.lines && countSpaces == pgm.cols)
                    break;
                countSpaces++;
                if (countSpaces == pgm.cols)
                {
                    countSpaces = 0;
                    i++;
                }
                values = strtok(NULL, " ");
            }
        }
        line++;
    } while (keepReading);
    fclose(f);
    printf("cols: %d, lines: %d, highest: %d, matrix[0][0]: %d", pgm.cols, pgm.lines, pgm.highestValue, pgm.matrix[0][0]);
}

void resolveMatrix()
{
    Pgm pgm;
    printf("cols: %d, lines: %d, highest: %d", pgm.cols, pgm.lines, pgm.highestValue);
}

I have this getInfo function that reads a .pgm file and adds the values inside this file to a matrix inside my struct. When I do a printf statement inside that function it prints out the right values that I want. But when I try to do that inside another function it prints out different values. I think this has to do with memory addresses, but how would I solve this :(
A: In your resolveMatrix function you are using the struct pgm without initializing it, so it will print random garbage that was in that memory location on the stack before the struct was created.
If you want to use a struct that was created somewhere else, pass a pointer to it as a function parameter:
void resolveMatrix(Pgm *pgm)
{
    printf("cols: %d, lines: %d, highest: %d", pgm->cols, pgm->lines, pgm->highestValue);
}

Usage:
Pgm pgm;

// Initialize struct fields here ...

resolveMatrix(&pgm);
Printing different values for the same struct variable in C
#define MAX_HEIGHT 512 #define MAX_WIDTH 512 typedef struct { int lines; int cols; int highestValue; int matrix[MAX_WIDTH][MAX_HEIGHT]; } Pgm; void getInfo() { Pgm pgm; FILE *f = fopen("pepper.pgm", "r"); bool keepReading = true; int line = 0, countSpaces = 0, i = 0; do { fgets(buffer, MAX_LINE, f); if (feof(f)) { printf("\nCheguei no final do arquivo"); keepReading = false; break; } if (line >= 3) { char *values = strtok(buffer, " "); while (values != NULL) { total++; // printf("values: %d, cols: %d, pgm.matrix[%d][%d], total: %d\n", atoi(values), pgm.cols, i, countSpaces, total); pgm.matrix[i][countSpaces] = atoi(values); if (i == pgm.lines && countSpaces == pgm.cols) break; countSpaces++; if (countSpaces == pgm.cols) { countSpaces = 0; i++; } values = strtok(NULL, " "); } } line++; } while (keepReading); fclose(f); printf("cols: %d, lines: %d, highest: %d, matrix[0][0]: %d", pgm.cols, pgm.lines, pgm.highestValue, pgm.matrix[0][0]); } void resolveMatrix() { Pgm pgm; printf("cols: %d, lines: %d, highest: %d", pgm.cols, pgm.lines, pgm.highestValue); } I have this getInfo function that reads a .pgm file and adds the values inside this file to a matrix inside my struct. When i do a printf statement inside such function it prints out the right values that i want. But when i try to do that inside another function it prints out diffent values. I think this has to do with memory addres, but how would i solve this :(
[ "In your resolveMatrix function you are using the struct pgm without initializing it, so it will print random garbage that was on that memory location on the stack before the struct was created.\nIf you want to use a struct that was created somewhere else, pass a pointer to it as a function parameter:\nvoid resolveMatrix(Pgm *pgm)\n{\n printf(\"cols: %d, lines: %d, highest: %d\", pgm->cols, pgm->lines, pgm->highestValue);\n}\n\nUsage:\nPgm pgm;\n\n// Initialize struct fields here ...\n\nresolveMatrix(&pgm);\n\n" ]
[ 1 ]
[]
[]
[ "c", "struct" ]
stackoverflow_0074667256_c_struct.txt
Q: What is locality in the Graph Matching problem and distributed models?
I'm a beginner in the field of Graph Matching and Parallel Computing. I read a paper that talks about an efficient parallel matching algorithm. They explained the importance of locality, but I don't know what it represents, or what good and bad locality are.

Our distributed memory parallelization (using MPI) on p processing elements (PEs or MPI processes) assigns nodes to PEs and stores all edges incident to a node locally. This can be done in a load balanced way if no node has degree exceeding m/p. The second pass of the basic algorithm from Section 2 has to exchange information on candidate edges that cross a PE boundary. In the worst case, this can involve all edges handled by a PE, i.e., we can expect better performance if we manage to keep most edges locally. In our experiments, one PE owns nodes whose numbers are a consecutive range of the input numbers. Thus, depending on how much locality the input numbering contains we have a highly local or a highly non-local situation.

A: Generally speaking, locality in distributed models is basically the extent to which a global solution for a computational problem can be obtained from locally available data.
Good locality is when most nodes can construct solutions using local data, since they'll require less communication to get any missing data. Bad locality would be if a node spends more than desirable time fetching data, rather than finding a solution using local data.
A: Think of a simple distributed computer system which comprises a collection of computers each somewhat like a desktop PC, in as much as each one has a CPU and some RAM. (These are the nodes mentioned in the question.) They are assembled into a distributed system by plugging them all into the same network.
Each CPU has memory-bus access (very fast) to data stored in its local RAM. The same CPU's access to data in the RAM on another computer in the system will run across the network (much slower) and may require co-operation with the CPU on that other computer.
Locality is a property of the data used in the algorithm: local data is on the same computer as the CPU, non-local data is elsewhere on the distributed system. I trust that it is clear that parallel computations can proceed more quickly the more that each CPU has to work only with local data. So the designers of parallel programs for distributed systems pay great attention to the placement of data, often seeking to minimise the number and sizes of exchanges of data between processing elements.
Complication, unnecessary for understanding the key issues: of course on real distributed systems many of the individual CPUs are multi-core, and in some designs multiple multi-core CPUs will share the same enclosure and have approximately memory-bus-speed access to all the RAM in the same enclosure. Which makes for a node which itself is a shared-memory computer. But that's just a detail and a topic for another answer.
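To make the "keep most edges locally" idea measurable, here is a toy Python sketch that scores a node-to-PE assignment by the fraction of edges whose endpoints land on the same PE; the graph and the two assignments are made-up illustrations:
def locality(edges, pe_of):
    """Fraction of edges whose endpoints live on the same PE."""
    local = sum(1 for u, v in edges if pe_of[u] == pe_of[v])
    return local / len(edges)

# toy graph: two triangles joined by one bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
consecutive = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 1}  # consecutive node ranges per PE
scattered   = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}  # nodes spread arbitrarily

print(locality(edges, consecutive))  # 6/7 ~ 0.86 -> little communication needed
print(locality(edges, scattered))    # 2/7 ~ 0.29 -> most edges cross PE boundaries

This mirrors the paper's observation: a consecutive input numbering tends to keep edges local, while a scattered numbering forces edge information to be exchanged in the second pass.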
What is locality in the Graph Matching problem and Distributed models?
I’m a beginner in the field of Graph Matching and Parallel Computing. I read a paper that talks about an efficient parallel matching algorithm. They explained the importance of locality, but I don't know what it represents, or what good and bad locality are. Our distributed memory parallelization (using MPI) on p processing elements (PEs or MPI processes) assigns nodes to PEs and stores all edges incident to a node locally. This can be done in a load balanced way if no node has degree exceeding m/p. The second pass of the basic algorithm from Section 2 has to exchange information on candidate edges that cross a PE boundary. In the worst case, this can involve all edges handled by a PE, i.e., we can expect better performance if we manage to keep most edges locally. In our experiments, one PE owns nodes whose numbers are a consecutive range of the input numbers. Thus, depending on how much locality the input numbering contains we have a highly local or a highly non-local situation.
[ "Generally speaking, locality in distributed models is basically the extent to which a global solution for a computational problem problem can be obtained from locally available data.\nGood locality is when most nodes can construct solutions using local data, since they'll require less communication to get any missing data. Bad locality would be if a node spends more than desirable time fetching data, rather than finding a solution using local data.\n", "Think of a simple distributed computer system which comprises a collection of computers each somewhat like a desktop PC, in as much as each one has a CPU and some RAM. (These are the nodes mentioned in the question.) They are assembled into a distributed system by plugging them all into the same network.\nEach CPU has memory-bus access (very fast) to data stored in its local RAM. The same CPU's access to data in the RAM on another computer in the system will run across the network (much slower) and may require co-operation with the CPU on that other computer.\nlocality is a property of the data used in the algorithm, local data is on the same computer as the CPU, non-local data is elsewhere on the distributed system. I trust that it is clear that parallel computations can proceed more quickly the more that each CPU has to work only with local data. So the designers of parallel programs for distributed systems pay great attention to the placement of data often seeking to minimise the number and sizes of exchanges of data between processing elements.\nComplication, unnecessary for understanding the key issues: of course on real distributed systems many of the individual CPUs are multi-core, and in some designs multiple multi-core CPUs will share the same enclosure and have approximately memory-bus-speed access to all the RAM in the same enclosure. Which makes for a node which itself is a shared-memory computer. But that's just detail and a topic for another answer.\n" ]
[ 0, 0 ]
[]
[]
[ "algorithm", "distributed_system", "matching", "parallel_processing" ]
stackoverflow_0074663566_algorithm_distributed_system_matching_parallel_processing.txt
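To make the locality idea in the entry above concrete, here is a small illustrative Python sketch. It is not taken from the paper the question quotes; the toy graph, the parameters, and the function names are invented for illustration. It measures what fraction of edges stay local when consecutive node ranges are assigned to PEs, i.e. the "consecutive range of the input numbers" scheme described above.

# Illustrative sketch: edge locality under a block partition of nodes.
# Nodes 0..n-1 are split into p consecutive ranges, one range per PE.

def owner(node, n, p):
    """Return the PE that owns `node` under a balanced block partition."""
    block = (n + p - 1) // p  # ceiling of n / p
    return node // block

def edge_locality(edges, n, p):
    """Return the fraction of edges whose two endpoints live on the same PE."""
    local = sum(1 for u, v in edges if owner(u, n, p) == owner(v, n, p))
    return local / len(edges)

if __name__ == "__main__":
    # Toy graph: a path 0-1-2-...-9 plus one long-range edge (0, 9).
    edges = [(i, i + 1) for i in range(9)] + [(0, 9)]
    print(edge_locality(edges, n=10, p=2))  # 0.8: only (4, 5) and (0, 9) cross PEs

A high value means most candidate edges can be matched without MPI communication (good locality); a low value means many edges cross PE boundaries, which is the "highly non-local situation" the question describes.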
Q: formatting file with hours and date in the same column Our electricity provider thinks it could be very fun to make the csv files they provide difficult to read. This is precise electric consumption, every 30 min, but in the SAME column you have hours and dates, example: [EDIT : here the raw version of the csv file, my bad] ; "Récapitulatif de mes puissances atteintes en W"; ; "Date et heure de relève par le distributeur";"Puissance atteinte (W)" ; "19/11/2022"; "00:00:00";4494 "23:30:00";1174 "23:00:00";1130 [...] "01:30:00";216 "01:00:00";2672 "00:30:00";2816 ; "18/11/2022"; "00:00:00";4494 "23:30:00";1174 "23:00:00";1130 [...] "01:30:00";216 "01:00:00";2672 "00:30:00";2816 How on earth can I obtain this kind of lovely formatted file: 2022-11-19 00:00:00 2098 2022-11-19 23:30:00 218 2022-11-19 23:00:00 606 etc. A: Try: import pandas as pd current_date = None all_data = [] with open("your_file.txt", "r") as f_in: # skip first 5 rows (header) for _ in range(5): next(f_in) for row in map(str.strip, f_in): row = row.replace('"', "") if row == "": continue if "/" in row: current_date = row else: all_data.append([current_date, *row.split(";")]) df = pd.DataFrame(all_data, columns=["Date", "Time", "Value"]) print(df) Prints: Date Time Value 0 19/11/2022; 00:00:00 4494 1 19/11/2022; 23:30:00 1174 2 19/11/2022; 23:00:00 1130 3 19/11/2022; 01:30:00 216 4 19/11/2022; 01:00:00 2672 5 19/11/2022; 00:30:00 2816 6 18/11/2022; 00:00:00 4494 7 18/11/2022; 23:30:00 1174 8 18/11/2022; 23:00:00 1130 9 18/11/2022; 01:30:00 216 10 18/11/2022; 01:00:00 2672 11 18/11/2022; 00:30:00 2816 A: Okay, I have an idiotic brute-force solution for you, so don't take that as a coding recommendation but just something that gets the job done: import itertools dList = [f"{f}/{s}/2022" for f, s in itertools.product(range(1, 32), range(1, 13))] I assume you have a text file with that, so I'm just gonna use that: file = 'yourfilename.txt' #make sure you're running the program in the same directory as the .txt file with open(file, "r") as f: global lines lines = f.readlines() lines = [word.replace('\n','') for word in lines] for i in lines: if i in dList: curD = i else: with open('output.txt', 'a') as g: #append mode, so earlier lines are not overwritten g.write(f'{curD} {(i.split())[0]} {(i.split())[1]}\n') #prefix the remembered date and end the line a file called output.txt will be created in the same directory and everything will get written into that file. 
A: Using pandas operations would be like the following: data.csv 19/11/2022 00:00:00 2098 23:30:00 218 23:00:00 606 01:30:00 216 01:00:00 2672 00:30:00 2816 18/11/2022 00:00:00 1994 23:30:00 260 23:00:00 732 01:30:00 200 01:00:00 1378 00:30:00 2520 17/11/2022 00:00:00 1830 23:30:00 96 23:00:00 122 01:30:00 694 01:00:00 2950 00:30:00 3062 16/11/2022 00:00:00 2420 23:30:00 678 23:00:00 644 Implementation import pandas as pd df = pd.read_csv('data.csv', header=None) df['amount'] = df[0].apply(lambda item:item.split(' ')[-1] if item.find(':')>0 else None) df['time'] = df[0].apply(lambda item:item.split(' ')[0] if item.find(':')>0 else None) df['date'] = df[0].apply(lambda item:item if item.find('/')>0 else None) df['date'] = df['date'].fillna(method='ffill') df = df.dropna(subset=['amount'], how='any') df = df.drop(0, axis=1) print(df) output amount time date 1 2098 00:00:00 19/11/2022 2 218 23:30:00 19/11/2022 3 606 23:00:00 19/11/2022 4 216 01:30:00 19/11/2022 5 2672 01:00:00 19/11/2022 6 2816 00:30:00 19/11/2022 8 1994 00:00:00 18/11/2022 9 260 23:30:00 18/11/2022 10 732 23:00:00 18/11/2022 11 200 01:30:00 18/11/2022 12 1378 01:00:00 18/11/2022 13 2520 00:30:00 18/11/2022 15 1830 00:00:00 17/11/2022 16 96 23:30:00 17/11/2022 17 122 23:00:00 17/11/2022 18 694 01:30:00 17/11/2022 19 2950 01:00:00 17/11/2022 20 3062 00:30:00 17/11/2022 22 2420 00:00:00 16/11/2022 23 678 23:30:00 16/11/2022 24 644 23:00:00 16/11/2022
formatting file with hours and date in the same column
Our electricity provider thinks it could be very fun to make the csv files they provide difficult to read. This is precise electric consumption, every 30 min, but in the SAME column you have hours and dates, example: [EDIT : here the raw version of the csv file, my bad] ; "Récapitulatif de mes puissances atteintes en W"; ; "Date et heure de relève par le distributeur";"Puissance atteinte (W)" ; "19/11/2022"; "00:00:00";4494 "23:30:00";1174 "23:00:00";1130 [...] "01:30:00";216 "01:00:00";2672 "00:30:00";2816 ; "18/11/2022"; "00:00:00";4494 "23:30:00";1174 "23:00:00";1130 [...] "01:30:00";216 "01:00:00";2672 "00:30:00";2816 How on earth can I obtain this kind of lovely formatted file: 2022-11-19 00:00:00 2098 2022-11-19 23:30:00 218 2022-11-19 23:00:00 606 etc.
[ "Try:\nimport pandas as pd\n\ncurrent_date = None\nall_data = []\nwith open(\"your_file.txt\", \"r\") as f_in:\n # skip first 5 rows (header)\n for _ in range(5):\n next(f_in)\n\n for row in map(str.strip, f_in):\n row = row.replace('\"', \"\")\n if row == \"\":\n continue\n if \"/\" in row:\n current_date = row\n else:\n all_data.append([current_date, *row.split(\";\")])\n\ndf = pd.DataFrame(all_data, columns=[\"Date\", \"Time\", \"Value\"])\nprint(df)\n\nPrints:\n Date Time Value\n0 19/11/2022; 00:00:00 4494\n1 19/11/2022; 23:30:00 1174\n2 19/11/2022; 23:00:00 1130\n3 19/11/2022; 01:30:00 216\n4 19/11/2022; 01:00:00 2672\n5 19/11/2022; 00:30:00 2816\n6 18/11/2022; 00:00:00 4494\n7 18/11/2022; 23:30:00 1174\n8 18/11/2022; 23:00:00 1130\n9 18/11/2022; 01:30:00 216\n10 18/11/2022; 01:00:00 2672\n11 18/11/2022; 00:30:00 2816\n\n", "Okay I have an idiotic brutforce solution for you, so dont take that as coding recommondation but just something that gets the job done:\nimport itertools\ndList = [f\"{f}/{s}/2022\" for f, s in itertools.product(range(1, 32), range(1, 13))]\n\ni assume you have a text file with that so im just gonna use that:\nfile = 'yourfilename.txt'\n#make sure youre running the program in the same directory as the .txt file\nwith open(file, \"r\") as f:\n global lines\n lines = f.readlines()\nlines = [word.replace('\\n','') for word in lines]\nfor i in lines:\n if i in dList:\n curD = i\n else:\n with open('output.txt', 'w') as g:\n g.write(f'{i} {(i.split())[0]} {(i.split())[1]}')\n\nmake sure to create a file called output.txt in the same directory and everything will get writen into that file.\n", "Using pandas operations would be like the following:\ndata.csv\n19/11/2022 \n00:00:00 2098\n23:30:00 218\n23:00:00 606\n01:30:00 216\n01:00:00 2672\n00:30:00 2816\n18/11/2022 \n00:00:00 1994\n23:30:00 260\n23:00:00 732\n01:30:00 200\n01:00:00 1378\n00:30:00 2520\n17/11/2022 \n00:00:00 1830\n23:30:00 96\n23:00:00 122\n01:30:00 694\n01:00:00 2950\n00:30:00 3062\n16/11/2022 \n00:00:00 2420\n23:30:00 678\n23:00:00 644\n\nImplementation\nimport pandas as pd\ndf = pd.read_csv('data.csv', header=None)\ndf['amount'] = df[0].apply(lambda item:item.split(' ')[-1] if item.find(':')>0 else None)\ndf['time'] = df[0].apply(lambda item:item.split(' ')[0] if item.find(':')>0 else None)\ndf['date'] = df[0].apply(lambda item:item if item.find('/')>0 else None)\ndf['date'] = df['date'].fillna(method='ffill')\ndf = df.dropna(subset=['amount'], how='any')\ndf = df.drop(0, axis=1)\nprint(df)\n\noutput\n amount time date\n1 2098 00:00:00 19/11/2022 \n2 218 23:30:00 19/11/2022 \n3 606 23:00:00 19/11/2022 \n4 216 01:30:00 19/11/2022 \n5 2672 01:00:00 19/11/2022 \n6 2816 00:30:00 19/11/2022 \n8 1994 00:00:00 18/11/2022 \n9 260 23:30:00 18/11/2022 \n10 732 23:00:00 18/11/2022 \n11 200 01:30:00 18/11/2022 \n12 1378 01:00:00 18/11/2022 \n13 2520 00:30:00 18/11/2022 \n15 1830 00:00:00 17/11/2022 \n16 96 23:30:00 17/11/2022 \n17 122 23:00:00 17/11/2022 \n18 694 01:30:00 17/11/2022 \n19 2950 01:00:00 17/11/2022 \n20 3062 00:30:00 17/11/2022 \n22 2420 00:00:00 16/11/2022 \n23 678 23:30:00 16/11/2022 \n24 644 23:00:00 16/11/2022 \n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "dataframe", "pandas", "parsing", "python", "reindex" ]
stackoverflow_0074667137_dataframe_pandas_parsing_python_reindex.txt
Q: Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version I am trying to store historical data from the front-end (Flutter) through the API. Below is the Laravel controller. However, when I ran it on Postman, an error comes out: Column not found: 1054 Unknown column 'images' in 'field list'. I think it is retrieving all the columns of users. From the columns, I only want staff_id, date_checkIn, time_checkIn, and location_checkIn. How do I fix this? I tried User::where('staff_id', 'date_checkIn', 'time_checkIn', 'location_checkIn')->first()->toArray(); but then it says Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'staff_id = ? limit 1' at line 1 (SQL: select * from users where location_checkIn staff_id = date_checkIn limit 1) in file ..... public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->first(); // retrieve current data $currentData = $users->toArray(); // store current data in historical data table $historicalData = new HistoricalData(); $historicalData->fill($currentData); $historicalData->save(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); } A: To fix this error, you need to specify the columns you want to retrieve from the users table, rather than passing them all to the where clause. You can do this by using the select method and passing in an array of the columns you want to select. Here is an example of how you can update your code to fix the error: $users = User::where('staff_id', $r->staff_id)->select(['staff_id', 'date_checkIn', 'time_checkIn', 'location_checkIn'])->first(); This will only retrieve the specified columns from the users table and will not cause the error.
Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version
I am trying to store historical data from the front-end (Flutter) through the API. Below is the Laravel controller. However, when I ran it on Postman, an error comes out: Column not found: 1054 Unknown column 'images' in 'field list'. I think it is retrieving all the columns of users. From the columns, I only want staff_id, date_checkIn, time_checkIn, and location_checkIn. How do I fix this? I tried User::where('staff_id', 'date_checkIn', 'time_checkIn', 'location_checkIn')->first()->toArray(); but then it says Syntax error or access violation: 1064 You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'staff_id = ? limit 1' at line 1 (SQL: select * from users where location_checkIn staff_id = date_checkIn limit 1) in file ..... public function userClockIn(Request $r) { $result = []; $result['status'] = false; $result['message'] = "something error"; $users = User::where('staff_id', $r->staff_id)->first(); // retrieve current data $currentData = $users->toArray(); // store current data in historical data table $historicalData = new HistoricalData(); $historicalData->fill($currentData); $historicalData->save(); $mytime = Carbon::now(); $time = $mytime->format('H:i:s'); $date = $mytime->format('Y-m-d'); $users->date_checkIn = $date; $users->time_checkIn = $time; $users->location_checkIn = $r->location_checkIn; $users->save(); $result['data'] = $users; $result['status'] = true; $result['message'] = "suksess add data"; return response()->json($result); }
[ "To fix this error, you need to specify the columns you want to retrieve from the users table in the where clause. You can do this by using the select method and passing in an array of the columns you want to select.\nHere is an example of how you can update your code to fix the error:\n$users = User::where('staff_id', $r->staff_id)->select(['staff_id', 'date_checkIn', 'time_checkIn', 'location_checkIn'])->first();\n\nThis will only retrieve the specified columns from the users table and will not cause the error.\n" ]
[ 1 ]
[]
[]
[ "api", "laravel", "php" ]
stackoverflow_0074667272_api_laravel_php.txt
Q: Knockout js - validate if value is not empty or different to 0 to display text in data-bind I have 2 attributes, precio_anterior and precio_suscriptor, and I do not know how to validate with data-binding whether precio_anterior is not empty and different from 0, in order to display the precio_anterior text; if it is the opposite, display the precio_suscriptor text. This is my code: <div class="prices-box"> <span class="qty"><span data-bind="text: qty"></span> x <span class="subscribe-price" data-bind="precio_anterior != '' && precio_anterior != 0 ? text: precio_anterior : text:precio_suscriptor"></span> </span> <span data-bind="html: fascicle"></span> </div> I know this is not right, but I need this with Knockout js. Any help? Is it possible with a script, and how? Thanks! A: A ternary goes like this: condition ? value_if_true : value_if_false. Not empty and not 0 will evaluate to true, so in this case you could use the logical OR operator: <span class="subscribe-price" data-bind="text: precio_anterior() || precio_suscriptor()"></span> This will show precio_anterior if it evaluates to true (so not empty and > 0), and precio_suscriptor otherwise. You didn't specify if these variables are observables; if they're not, then remove the ().
Knockout js - validate if value is not empty or different to 0 to display text in data-bind
I have 2 attributes, precio_anterior and precio_suscriptor, and I do not know how to validate with data-binding whether precio_anterior is not empty and different from 0, in order to display the precio_anterior text; if it is the opposite, display the precio_suscriptor text. This is my code: <div class="prices-box"> <span class="qty"><span data-bind="text: qty"></span> x <span class="subscribe-price" data-bind="precio_anterior != '' && precio_anterior != 0 ? text: precio_anterior : text:precio_suscriptor"></span> </span> <span data-bind="html: fascicle"></span> </div> I know this is not right, but I need this with Knockout js. Any help? Is it possible with a script, and how? Thanks!
[ "A ternary goes like this: condition ? value_if_true : value_if_false.\nNot empty and not 0 will evaluate to true, so in this case you could use the logical OR operator:\n<span class=\"subscribe-price\" data-bind=\"text: precio_anterior() || precio_suscriptor()\"></span>\nThis will show precio_anterior if it evaluates to true (so not empty and > 0), and precio_suscriptor otherwise. You didn't specify if these variables are observables; if they're not, then remove the ().\n" ]
[ 0 ]
[]
[]
[ "data_binding", "javascript", "knockout.js" ]
stackoverflow_0074663189_data_binding_javascript_knockout.js.txt
Q: how to get rid of the border appearing at the bottom of the svg? I am trying to work with a wave SVG. The problem I am facing is that I get a border at the bottom and I want to remove it. Here is my JSX code: <DarkWave> <svg data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1200 120" preserveAspectRatio="none"> <path d="M321.39,56.44c58-10.79,114.16-30.13,172-41.86,82.39-16.72,168.19-17.73,250.45-.39C823.78,31,906.67,72,985.66,92.83c70.05,18.48,146.53,26.09,214.34,3V0H0V27.35A600.21,600.21,0,0,0,321.39,56.44Z" class="shape-fill"></path> </svg> </DarkWave> Here is the CSS part: export const DarkWave = styled.div` width: 100%; overflow: hidden; line-height: 0; height: 95vh; background-color: #705df2; transform: rotate(180deg); margin-bottom: 5rem; svg { position: relative; display: block; width: calc(100% + 1.3px); height: 127px; } path.shape-fill { fill: #ffffff; } `; I tried to make the stroke transparent but it didn't work: svg{ ......... stroke: transparent; stroke-width: 0px; } Here is the image of the problem: Please guide me. A: When you stretch the SVG the edges will "anti-alias". In this example I kind of turned around the transparent/ colored part and made a mask that controls the visible part. I also ran into the same problem, but now it is easier to make an overlap (inside the mask). div { width: 100%; overflow: hidden; margin-bottom: 5rem; } svg { display: block; height: 95vh; width: 100%; } svg>rect { fill: #705df2; } <div> <svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1200 712" preserveAspectRatio="none"> <defs> <mask id="m1"> <rect width="1200" height="1200" fill="black" /> <rect width="1200" height="601" fill="white" /> <!-- above height 601 = overlap to --> <path transform="translate(0 600)" d="M321.39,56.44c58-10.79,114.16-30.13,172-41.86,82.39-16.72,168.19-17.73,250.45-.39C823.78,31,906.67,72,985.66,92.83c70.05,18.48,146.53,26.09,214.34,3V0H0V27.35A600.21,600.21,0,0,0,321.39,56.44Z" fill="white" /> </mask> </defs> <rect width="1200" height="1200" mask="url(#m1)" fill="white" /> </svg> </div>
how to get rid of the border appearing at the bottom of the svg?
I am trying to work with a wave SVG. The problem I am facing is that I get a border at the bottom and I want to remove it. Here is my JSX code: <DarkWave> <svg data-name="Layer 1" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 1200 120" preserveAspectRatio="none"> <path d="M321.39,56.44c58-10.79,114.16-30.13,172-41.86,82.39-16.72,168.19-17.73,250.45-.39C823.78,31,906.67,72,985.66,92.83c70.05,18.48,146.53,26.09,214.34,3V0H0V27.35A600.21,600.21,0,0,0,321.39,56.44Z" class="shape-fill"></path> </svg> </DarkWave> Here is the CSS part: export const DarkWave = styled.div` width: 100%; overflow: hidden; line-height: 0; height: 95vh; background-color: #705df2; transform: rotate(180deg); margin-bottom: 5rem; svg { position: relative; display: block; width: calc(100% + 1.3px); height: 127px; } path.shape-fill { fill: #ffffff; } `; I tried to make the stroke as transparent but it didn't work: svg{ ......... stroke: transparent; stroke-width: 0px; } Here is the image of the problem: Please guide me.
[ "When you stretch the SVG the edges will \"anti-aliase\".\nIn this example I kind of turned around the transparent/ colored part and made a mask that controls the visible part. I also run into the same problem, but now it is easier to make a overlap (inside the mask).\n\n\ndiv {\n width: 100%;\n overflow: hidden;\n margin-bottom: 5rem;\n}\n\nsvg {\n display: block;\n height: 95vh;\n width: 100%;\n}\n\nsvg>rect {\n fill: #705df2;\n}\n<div>\n <svg xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 1200 712\" preserveAspectRatio=\"none\">\n <defs>\n <mask id=\"m1\">\n <rect width=\"1200\" height=\"1200\" fill=\"black\" />\n <rect width=\"1200\" height=\"601\" fill=\"white\" />\n <!-- above height 601 = overlap to -->\n <path transform=\"translate(0 600)\" d=\"M321.39,56.44c58-10.79,114.16-30.13,172-41.86,82.39-16.72,168.19-17.73,250.45-.39C823.78,31,906.67,72,985.66,92.83c70.05,18.48,146.53,26.09,214.34,3V0H0V27.35A600.21,600.21,0,0,0,321.39,56.44Z\" fill=\"white\" />\n </mask>\n </defs>\n <rect width=\"1200\" height=\"1200\" mask=\"url(#m1)\" fill=\"white\" />\n </svg>\n</div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css", "svg" ]
stackoverflow_0074653407_css_svg.txt
Q: Read and use appsettings parameters in a windows forms .net 6 application What's the best practice to read values from appsettings in a windows forms .net 7 application? I have found examples for console application but it doesn't work. A: Assuming this is for appsettings.json, using the following file { "Role": { "IsAdmin": true }, "LogOptions": { "Use": true, "Destination": "LogFile", "LogFileName": "logs.txt" }, "FormSettings": { "Title": "Code sample", "FullScreen": "true" } } Use the following classes to get values from appsettings.json public class FormSettings { public string Title { get; set; } public bool FullScreen { get; set; } } public class Role { public bool IsAdmin { get; set; } } public class Logging { public bool Use { get; set; } public LoggingDestination Destination { get; set; } public string LogFileName { get; set; } } public enum LoggingDestination { DebugWindow, LogFile, None } Class to read settings from appsettings.json public class AppSettings { private static ConfigurationBuilder _configBuilder; public static Logging GetLogOptions() { Build(); return InitOptions<Logging>("LogOptions"); } public static Role Role() { Build(); return InitOptions<Role>("Role"); } public static FormSettings Settings() { Build(); return InitOptions<FormSettings>("FormSettings"); } private static IConfigurationRoot Build() { if (_configBuilder is not null) return _configBuilder.Build(); _configBuilder = new ConfigurationBuilder(); _configBuilder.SetBasePath(Directory.GetCurrentDirectory()) .AddJsonFile("appsettings.json", true, true); return _configBuilder.Build(); } public static T InitOptions<T>(string section) where T : new() => Build().GetSection(section).Get<T>(); } Usage public partial class Form1 : Form { public Form1() { InitializeComponent(); Text = AppSettings.Role().IsAdmin ? AppSettings.Settings().Title + " (Admin)" : AppSettings.Settings().Title ; } private void GetLogInfoButton_Click(object sender, EventArgs e) { if (AppSettings.GetLogOptions().Use && AppSettings.GetLogOptions().Destination == LoggingDestination.LogFile) { MessageBox.Show($"{AppSettings.GetLogOptions().LogFileName}"); } } } NuGet packages Microsoft.Extensions.Configuration Microsoft.Extensions.Configuration.FileExtensions Microsoft.Extensions.Configuration.Json
Read and use appsettings parameters in a windows forms .net 6 application
What's the best practice to read values from appsettings in a windows forms .net 7 application? I have found examples for console applications but they don't work.
[ "Assuming this is for appsettings.json, using the following file\n{\n \"Role\": {\n \"IsAdmin\": true\n },\n \"LogOptions\": {\n \"Use\": true,\n \"Destination\": \"LogFile\",\n \"LogFileName\": \"logs.txt\"\n },\n \"FormSettings\": {\n \"Title\": \"Code sample\",\n \"FullScreen\": \"true\"\n }\n}\n\nUse the following classes to get values from appsettings.json\npublic class FormSettings\n{\n public string Title { get; set; }\n public bool FullScreen { get; set; }\n}\npublic class Role\n{\n public bool IsAdmin { get; set; }\n}\npublic class Logging\n{\n public bool Use { get; set; }\n public LoggingDestination Destination { get; set; }\n public string LogFileName { get; set; }\n}\npublic enum LoggingDestination\n{\n DebugWindow,\n LogFile,\n None\n}\n\nClass to read settings from appsettings.json\npublic class AppSettings\n{\n private static ConfigurationBuilder _configBuilder;\n public static Logging GetLogOptions()\n {\n Build();\n return InitOptions<Logging>(\"LogOptions\");\n }\n\n public static Role Role()\n {\n Build();\n return InitOptions<Role>(\"Role\");\n }\n\n public static FormSettings Settings()\n {\n Build();\n return InitOptions<FormSettings>(\"FormSettings\");\n }\n private static IConfigurationRoot Build()\n {\n if (_configBuilder is not null) return _configBuilder.Build();\n _configBuilder = new ConfigurationBuilder();\n _configBuilder.SetBasePath(Directory.GetCurrentDirectory())\n .AddJsonFile(\"appsettings.json\", true, true);\n return _configBuilder.Build();\n }\n public static T InitOptions<T>(string section) where T : new() \n => Build().GetSection(section).Get<T>();\n}\n\nUsage\npublic partial class Form1 : Form\n{\n public Form1()\n {\n InitializeComponent();\n Text = AppSettings.Role().IsAdmin\n ? AppSettings.Settings().Title + \" (Admin)\"\n : AppSettings.Settings().Title ;\n }\n\n private void GetLogInfoButton_Click(object sender, EventArgs e)\n {\n if (AppSettings.GetLogOptions().Use && AppSettings.GetLogOptions().Destination == LoggingDestination.LogFile)\n {\n MessageBox.Show($\"{AppSettings.GetLogOptions().LogFileName}\");\n }\n }\n}\n\nNuGet packages\n\nMicrosoft.Extensions.Configuration\nMicrosoft.Extensions.Configuration.FileExtensions\nMicrosoft.Extensions.Configuration.Json\n\n" ]
[ 1 ]
[]
[]
[ ".net", "c#", "windows_forms_core" ]
stackoverflow_0074666262_.net_c#_windows_forms_core.txt
Q: Custom log for each user with Serilog I am trying to create a custom logger based on serilog. The application i am building is based on net6 blazor-server-side. The goal is that every time a user logs into the application, I create a specific log file for him. First I create a dependency injection in the program.cs file Program.cs builder.Services.AddScoped<ICustomLogger>( s => new CustomLogger()); In the Customlogger class, I initialize the loggerconfiguration in the constructor file CustomLogger.cs private ILogger<CustomLogger> _logger; protected readonly LoggerConfiguration _loggerConfig; public CustomLogger() { _loggerConfig = new LoggerConfiguration() .Enrich.FromLogContext() .MinimumLevel.Debug(); } In the Login.razor , once the login is successful, I call the CreateLogger method, passing the username as a parameter (this is to create a specific folder) file CustomLogger.cs public void CreateLogger(string username) { var l = _loggerConfig.WriteTo.File($"./Logs/{username}/log_.txt", rollingInterval: RollingInterval.Day, retainedFileCountLimit: 30).CreateLogger(); _logger = new SerilogLoggerFactory(l).CreateLogger<CustomLogger>(); // creates an instance of ILogger<CustomLogger> } Beyond that, I've created methods to write the various log levels file CustomLogger.cs public void LogInformation(string m) { _logger.LogInformation(m); } public void LogError(string m) { _logger.LogError(m); } public void LogWarning(string m) { _logger.LogWarning(m); } The Customlogger class is bound to the ICustomLogger interface file ICustomLogger.cs public interface ICustomLogger { void LogInformation(string m); void LogError(string m); void LogWarning(string m); void CreateLogger(string username); ILogger<CustomLogger> GetLogger(); } For the moment I see that the system works, if I connect with a user, his folder and the file are created, and so on for each user. My question is : Could this approach cause problems? Is it already possible to do this via Serilog? Thanks for your time N. UPDATE the system works well in the login page, for each user to create his own logger, but as soon as I move to the index page, the constructor of the CustomLogger class is called and the ILogger is null. I thought AddScoped was only called once for "session" Is it possible to call AddSingleton every time the user logs in, that way the specific dependency remains for as long as needed? UPDATE 2 I changed the injection, now I use AddSingleton Program.cs builder.Services.AddSingleton<IOramsLoggerService>(s => new OramsLoggerService()); Inside the OramsLoggerService class, I created a list of loggers, which is filled at each login OramsLoggerSerivce.cs public class OramsLoggerService : IOramsLoggerService { private List<OramsLogger> loggers; public OramsLoggerService() { loggers = new List<OramsLogger>(); } public void CreateLogger(string? 
username) { if (string.IsNullOrEmpty(username)) { throw new ArgumentNullException("username"); } if (loggers.Where(x => x.Username == username).Count() > 0) { return; } // characters not allowed in the folder name string originaUsername = username; username = username.Replace("<", "-"); username = username.Replace(">", "-"); username = username.Replace(":", "-"); username = username.Replace("/", "-"); username = username.Replace("\\", "-"); username = username.Replace("|", "-"); username = username.Replace("?", "-"); username = username.Replace("*", "-"); username = username.Replace("\"", "-"); var loggerConfig = new LoggerConfiguration() .Enrich.FromLogContext() .MinimumLevel.Debug(); var l = loggerConfig.WriteTo.File($"./Logs/{username}/log_.txt", rollingInterval: RollingInterval.Day, retainedFileCountLimit: 100).CreateLogger(); var logger = new SerilogLoggerFactory(l).CreateLogger<OramsLogger>(); // creates an instance of ILogger<OramsLogger> loggers.Add(new OramsLogger(logger, originaUsername)); } } by doing so, I have all the loggers created in the other pages available. The OramsLogger class contains the logger and the username. I use the username to search the list, I think I'll change it to the id in the future. OramsLogger.cs public class OramsLogger { public ILogger<OramsLogger> logger; public string Username { get; set; } public OramsLogger(ILogger<OramsLogger> l, string username) { logger = l; Username = username; } } For the moment I have created 1000 dummy users when I login with my user and it seems to work. Could this be a good approach? A: For user-level custom logs, this method is appropriate. There are no particular concerns.
Custom log for each user with Serilog
I am trying to create a custom logger based on serilog. The application i am building is based on net6 blazor-server-side. The goal is that every time a user logs into the application, I create a specific log file for him. First I create a dependency injection in the program.cs file Program.cs builder.Services.AddScoped<ICustomLogger>( s => new CustomLogger()); In the Customlogger class, I initialize the loggerconfiguration in the constructor file CustomLogger.cs private ILogger<CustomLogger> _logger; protected readonly LoggerConfiguration _loggerConfig; public CustomLogger() { _loggerConfig = new LoggerConfiguration() .Enrich.FromLogContext() .MinimumLevel.Debug(); } In the Login.razor , once the login is successful, I call the CreateLogger method, passing the username as a parameter (this is to create a specific folder) file CustomLogger.cs public void CreateLogger(string username) { var l = _loggerConfig.WriteTo.File($"./Logs/{username}/log_.txt", rollingInterval: RollingInterval.Day, retainedFileCountLimit: 30).CreateLogger(); _logger = new SerilogLoggerFactory(l).CreateLogger<CustomLogger>(); // creates an instance of ILogger<CustomLogger> } Beyond that, I've created methods to write the various log levels file CustomLogger.cs public void LogInformation(string m) { _logger.LogInformation(m); } public void LogError(string m) { _logger.LogError(m); } public void LogWarning(string m) { _logger.LogWarning(m); } The Customlogger class is bound to the ICustomLogger interface file ICustomLogger.cs public interface ICustomLogger { void LogInformation(string m); void LogError(string m); void LogWarning(string m); void CreateLogger(string username); ILogger<CustomLogger> GetLogger(); } For the moment I see that the system works, if I connect with a user, his folder and the file are created, and so on for each user. My question is : Could this approach cause problems? Is it already possible to do this via Serilog? Thanks for your time N. UPDATE the system works well in the login page, for each user to create his own logger, but as soon as I move to the index page, the constructor of the CustomLogger class is called and the ILogger is null. I thought AddScoped was only called once for "session" Is it possible to call AddSingleton every time the user logs in, that way the specific dependency remains for as long as needed? UPDATE 2 I changed the injection, now I use AddSingleton Program.cs builder.Services.AddSingleton<IOramsLoggerService>(s => new OramsLoggerService()); Inside the OramsLoggerService class, I created a list of loggers, which is filled at each login OramsLoggerSerivce.cs public class OramsLoggerService : IOramsLoggerService { private List<OramsLogger> loggers; public OramsLoggerService() { loggers = new List<OramsLogger>(); } public void CreateLogger(string? 
username) { if (string.IsNullOrEmpty(username)) { throw new ArgumentNullException("username"); } if (loggers.Where(x => x.Username == username).Count() > 0) { return; } // characters not allowed in the folder name string originaUsername = username; username = username.Replace("<", "-"); username = username.Replace(">", "-"); username = username.Replace(":", "-"); username = username.Replace("/", "-"); username = username.Replace("\\", "-"); username = username.Replace("|", "-"); username = username.Replace("?", "-"); username = username.Replace("*", "-"); username = username.Replace("\"", "-"); var loggerConfig = new LoggerConfiguration() .Enrich.FromLogContext() .MinimumLevel.Debug(); var l = loggerConfig.WriteTo.File($"./Logs/{username}/log_.txt", rollingInterval: RollingInterval.Day, retainedFileCountLimit: 100).CreateLogger(); var logger = new SerilogLoggerFactory(l).CreateLogger<OramsLogger>(); // creates an instance of ILogger<OramsLogger> loggers.Add(new OramsLogger(logger, originaUsername)); } } by doing so, I have all the loggers created in the other pages available. The OramsLogger class contains the logger and the username. I use the username to search the list, I think I'll change it to the id in the future. OramsLogger.cs public class OramsLogger { public ILogger<OramsLogger> logger; public string Username { get; set; } public OramsLogger(ILogger<OramsLogger> l, string username) { logger = l; Username = username; } } For the moment I have created 1000 dummy users when I login with my user and it seems to work. Could this be a good approach?
[ "For user-level custom logs, this method is appropriate.\nThere are no particular concerns.\n" ]
[ 1 ]
[]
[]
[ ".net_6.0", "blazor", "blazor_server_side", "c#", "serilog" ]
stackoverflow_0074667339_.net_6.0_blazor_blazor_server_side_c#_serilog.txt
Q: CSS Generic vs Page Specific loading I need some advice on CSS placements for the sake of website load times. I read that it's best to have 'critical CSS' in the head and the rest can be placed in their respective page's body via the <style> tag. Is it good practice if I loaded all the CSS, or at least the 'Generic' styles that many pages share, while I kept page specific styles in a <style> tag in the page's body? One side question: some of my pages use jQuery; should I only load that at the bottom of those pages or leave it in the template head? I tried both and the site loads just fine, but I know under the hood results may vary. I'm not sure how to even check. I tried websites that test a website's load performance and I got mixed results. So I'm not sure how to optimize my website's performance. A: Usually all CSS files are called in the head. One thing you can do to improve performance is to modularize: let's say that you have the global styles in one file called global.css, and it contains your font specs and the global components used in all pages, such as the navbar, footer, layouts, etc... And in another file you put only the styles for one specific page, such as a contact section, in a file called contact.css; there you can have overrides to the global file and specific styles that you only use on this page. This way you can serve lighter files depending on the page the user is requesting. Regarding your jQuery question, I suggest you don't load the jQuery library if you're not using it; it's useless. Only load it on the pages where you're using the library. Hope it helps!
CSS Generic vs Page Specific loading
I need some advice on CSS placements for the sake of website load times. I read that it's best to have 'critical CSS' in the head and the rest can be placed in their respective page's body via the <style> tag. Is it good practice if I loaded all the CSS, or at least the 'Generic' styles that many pages share, while I kept page specific styles in a <style> tag in the page's body? One side question: some of my pages use jQuery; should I only load that at the bottom of those pages or leave it in the template head? I tried both and the site loads just fine, but I know under the hood results may vary. I'm not sure how to even check. I tried websites that test a website's load performance and I got mixed results. So I'm not sure how to optimize my website's performance.
[ "Usually all CSS files are called in the head, one thing you can do to improve performance is to modularize, let's say that you have the global styles in one file called global.css and it contains your font specs, global components used in all pages such as navbar, footer, layouts, etc... And in another file you can only put the styles regarding your specified page such as contact section that's another page called contact.css and there you can have overrides to global file and specific styles that you only use in this page.\nThis way you can serve less heavy files regarding the page that user's requiring.\nRegarding you jQuery question I suggest that don't load jQuery library if you're not using it, it's useless. Only load it in the pages that you're using the library. Hope it helps!\n" ]
[ 1 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074667306_css_html_javascript.txt
Q: Foreign Key Constraint Failed Django with UUID Trying to save to a table with a foreign key but come up with a IntegrityError: Foreign Key Constraint failed. I have checked to make sure I am getting the correct data for my foreign key and it seems to be there. I am not sure why I am getting this error. Models.py class IPHold(models.Model): uuid = models.UUIDField(unique=True, default=uuid.uuid4, editable=False) CHOICES = [ ('1', 'Book'), ('2', 'Documentary'), ('3', 'Graphic Novel/Comic'), ('4', 'Journalism'), ('5', 'Merchandise'), ('6', 'Podcast'), ('7', 'Stage Play/Musical'), ('8', 'Video Game'), ] media_type = models.CharField(max_length=1, choices=CHOICES, blank=False) title = models.CharField(max_length=255, blank=False) author_creator = models.CharField(max_length=255, blank=True) production_company = models.CharField(max_length=255, blank=True) class RoleHold(models.Model): ip = models.ForeignKey(IPHold, on_delete=models.CASCADE, related_name='ip_role') name = models.CharField(max_length=128, blank=False) TYPE = [ ('1', 'Lead'), ('2', 'Supporting'), ] role_type = models.CharField(max_length=1, choices=TYPE, blank=True) age_min = models.PositiveSmallIntegerField(blank=True) age_max = models.PositiveSmallIntegerField(blank=True) ETHNICITY = [ ('1', 'American Indian or Alaska Native'), ('2', 'Asian'), ('3', 'Black or African American'), ('4', 'Hispanic or Latino'), ('5', 'Native Hawaiian or Other Pacific Islander'), ('6', 'White'), ('7', 'Unknown/Irrelevant'), ] race = models.CharField(max_length=1, choices=ETHNICITY, blank=True) GENDEROPTIONS = [ ('1', 'Male'), ('2', 'Female'), ('3', 'N/A'), ('4', 'Unknown/Irrelevant'), ] gender = models.CharField(max_length=1, choices=GENDEROPTIONS, blank=True) description = models.TextField(blank=True) Views.py def add_characters(request): id = request.GET.get('id') ips = IPHold.objects.get(uuid=id) form = forms.AddCharacter context = { 'form':form, } if request.method == 'POST': ip = ips name = request.POST.get('name') role_type = request.POST.get('role_type') age_min = request.POST.get('age_min') age_max = request.POST.get('age_max') race = request.POST.get('race') gender = request.POST.get('gender') description = request.POST.get('description') role_save = RoleHold(ip=ip, name=name, role_type=role_type, age_min=age_min, age_max=age_max, race=race, gender=gender, description=description) role_save.save() if request.POST.get('add') == 'Add Another Role': return redirect('/iphold/add_characters/?id=' + str(ips.uuid)) else: return(render, 'iphold/pay.html') return render(request, 'iphold/add_characters.html', context) The error I am getting is IntegrityError at /iphold/add_characters/ FOREIGN KEY constraint failed. When I print(ip) it shows the object is there. A: I assume that in your id it is id (integer number), and not uuid. Try to print the id, what it will show. You want to select by id, so you need to specify the id field and select uuid in the resulting object. Try doing the following: id = request.GET.get('id') ips = IPHold.objects.get(id=id)
Foreign Key Constraint Failed Django with UUID
Trying to save to a table with a foreign key but come up with a IntegrityError: Foreign Key Constraint failed. I have checked to make sure I am getting the correct data for my foreign key and it seems to be there. I am not sure why I am getting this error. Models.py class IPHold(models.Model): uuid = models.UUIDField(unique=True, default=uuid.uuid4, editable=False) CHOICES = [ ('1', 'Book'), ('2', 'Documentary'), ('3', 'Graphic Novel/Comic'), ('4', 'Journalism'), ('5', 'Merchandise'), ('6', 'Podcast'), ('7', 'Stage Play/Musical'), ('8', 'Video Game'), ] media_type = models.CharField(max_length=1, choices=CHOICES, blank=False) title = models.CharField(max_length=255, blank=False) author_creator = models.CharField(max_length=255, blank=True) production_company = models.CharField(max_length=255, blank=True) class RoleHold(models.Model): ip = models.ForeignKey(IPHold, on_delete=models.CASCADE, related_name='ip_role') name = models.CharField(max_length=128, blank=False) TYPE = [ ('1', 'Lead'), ('2', 'Supporting'), ] role_type = models.CharField(max_length=1, choices=TYPE, blank=True) age_min = models.PositiveSmallIntegerField(blank=True) age_max = models.PositiveSmallIntegerField(blank=True) ETHNICITY = [ ('1', 'American Indian or Alaska Native'), ('2', 'Asian'), ('3', 'Black or African American'), ('4', 'Hispanic or Latino'), ('5', 'Native Hawaiian or Other Pacific Islander'), ('6', 'White'), ('7', 'Unknown/Irrelevant'), ] race = models.CharField(max_length=1, choices=ETHNICITY, blank=True) GENDEROPTIONS = [ ('1', 'Male'), ('2', 'Female'), ('3', 'N/A'), ('4', 'Unknown/Irrelevant'), ] gender = models.CharField(max_length=1, choices=GENDEROPTIONS, blank=True) description = models.TextField(blank=True) Views.py def add_characters(request): id = request.GET.get('id') ips = IPHold.objects.get(uuid=id) form = forms.AddCharacter context = { 'form':form, } if request.method == 'POST': ip = ips name = request.POST.get('name') role_type = request.POST.get('role_type') age_min = request.POST.get('age_min') age_max = request.POST.get('age_max') race = request.POST.get('race') gender = request.POST.get('gender') description = request.POST.get('description') role_save = RoleHold(ip=ip, name=name, role_type=role_type, age_min=age_min, age_max=age_max, race=race, gender=gender, description=description) role_save.save() if request.POST.get('add') == 'Add Another Role': return redirect('/iphold/add_characters/?id=' + str(ips.uuid)) else: return(render, 'iphold/pay.html') return render(request, 'iphold/add_characters.html', context) The error I am getting is IntegrityError at /iphold/add_characters/ FOREIGN KEY constraint failed. When I print(ip) it shows the object is there.
[ "I assume that in your id it is id (integer number), and not uuid. Try to print the id, what it will show. You want to select by id, so you need to specify the id field and select uuid in the resulting object. Try doing the following:\nid = request.GET.get('id')\nips = IPHold.objects.get(id=id)\n\n" ]
[ 0 ]
[]
[]
[ "django", "foreign_keys" ]
stackoverflow_0074659545_django_foreign_keys.txt
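One point worth making concrete for the entry above: by default a Django ForeignKey column stores the target model's primary key (here the auto-generated integer id), not the uuid field, regardless of which field was used to look the object up. A small illustrative Python sketch (model and field names are taken from the question; this fragment is meant to be read inside the question's view and is an explanation, not a verified fix for the asker's constraint failure):

# RoleHold.ip is a ForeignKey(IPHold), so the underlying ip_id column
# holds IPHold.pk (the default integer id), not IPHold.uuid.
ips = IPHold.objects.get(uuid=request.GET.get('id'))  # looking up by uuid is fine
print(ips.pk, ips.uuid)  # two different values, since uuid is not the primary key here
role = RoleHold(ip=ips, name='example')
print(role.ip_id)  # the value Django will write to the foreign key column: ips.pk

A FOREIGN KEY constraint failure on save means the database cannot find a matching primary key in the target table at write time, so checking for stale rows or inconsistent migrations (especially with SQLite) is worth doing alongside the id/uuid mix-up the answer points at.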
Q: Get name of the column with last value in the row I am doing cohort analysis and the dataset I'm using has 15 months as the column names, with revenue values and around 7k user_id rows. I need to get a new column with the month when the user was last active. 2021-01-01 2021-02-01 3456 NaN NaN 8679 Result should be like this 2021-01-01 2021-02-01 Last_month 3456 NaN 2021-01-01 NaN 8679 2021-02-01 I have tried a few options but they didn't work: users.apply(pd.Series.last_valid_index) A: using a boolean and idxmax() might be the solution here df['last_month'] = (~df.isna()).idxmax(axis=1) print(df) 2021-01-01 2021-02-01 last_month 0 3456 NaN 2021-01-01 1 NaN 8679 2021-02-01 A: Example data = {'2021-01-01': {0: 3456, 1: None}, '2021-02-01': {0: None, 1: 8679}} df = pd.DataFrame(data) df 2021-01-01 2021-02-01 0 3456.0 NaN 1 NaN 8679.0 Code df.apply(lambda x: x.last_valid_index(), axis=1) output: 0 2021-01-01 1 2021-02-01 dtype: object full Code make output to Last_month column df.assign(Last_month=df.apply(lambda x: x.last_valid_index(), axis=1)) result: 2021-01-01 2021-02-01 Last_month 0 3456.0 NaN 2021-01-01 1 NaN 8679.0 2021-02-01
Get name of the column with last value in the row
I am doing cohort analysis and the dataset I'm using has 15 months as the column names, with revenue values and around 7k user_id rows. I need to get a new column with the month when the user was last active. 2021-01-01 2021-02-01 3456 NaN NaN 8679 Result should be like this 2021-01-01 2021-02-01 Last_month 3456 NaN 2021-01-01 NaN 8679 2021-02-01 I have tried a few options but they didn't work: users.apply(pd.Series.last_valid_index)
[ "using a boolean and idxmax() might be the solution here\ndf['last_month'] = (~df.isna()).idxmax(axis=1)\n\n\nprint(df)\n\n\n 2021-01-01 2021-02-01 last_month\n0 3456 NaN 2021-01-01\n1 NaN 8679 2021-02-01\n\n", "Example\ndata = {'2021-01-01': {0: 3456, 1: None}, '2021-02-01': {0: None, 1: 8679}}\ndf = pd.DataFrame(data)\n\ndf\n 2021-01-01 2021-02-01\n0 3456.0 NaN\n1 NaN 8679.0\n\n\nCode\ndf.apply(lambda x: x.last_valid_index(), axis=1)\n\noutput:\n0 2021-01-01\n1 2021-02-01\ndtype: object\n\n\nfull Code\nmake output to Last_month column\ndf.assign(Last_month=df.apply(lambda x: x.last_valid_index(), axis=1))\n\nresult:\n 2021-01-01 2021-02-01 Last_month\n0 3456.0 NaN 2021-01-01\n1 NaN 8679.0 2021-02-01\n\n" ]
[ 1, 0 ]
[]
[]
[ "pandas" ]
stackoverflow_0074667269_pandas.txt
Q: How to run python code in anaconda prompt using vba? I am attempting to run Python code using VBA. However, when running it using VBA, it was not successful (I discovered that it is not running in the Anaconda prompt). The code is attached as follows; appreciate the help. Sub RunPythonScript() Dim objShell As Object Dim PythonExePath As String, PythonScriptPath As String Set objShell = VBA.CreateObject("Wscript.Shell") PythonExePath = """C:xxx.exe""" PythonScriptPath = """C:xxx.py""" objShell.Run PythonExePath & " " & PythonScriptPath End Sub Alternatively, I manually run it in the Anaconda prompt and the code works. "C:xxx.exe" "C:xxx.py" What I observed on screen was the black cmd window popping out and disappearing in a second. It did not work as expected. Is there anything I input incorrectly? Sub RunPythonScript() Dim pythonExePath As String, pythonScriptPath As String pythonExePath = """C:\Users\xxx\Anaconda3\python.exe""" pythonScriptPath = """C:\Users\xxx\xxx.py""" Shell pythonExePath & " " & pythonScriptPath, vbNormalFocus End Sub A: The code you provided looks like it is trying to run a Python script using the Wscript.Shell object in VBA, which is used to run external programs and scripts. However, this will not work for running a Python script in the Anaconda Prompt, as the Anaconda Prompt is a command-line interface (CLI) and not a script. To run a Python script in the Anaconda Prompt using VBA, you will need to use the Shell function to run the python.exe executable in the Anaconda Prompt and pass your Python script as a command-line argument. Here is an example of how you could do this: Sub RunPythonScript() Dim pythonExePath As String, pythonScriptPath As String ' Replace "C:\Program Files\Anaconda3\python.exe" with the path to your Anaconda Python installation pythonExePath = """C:\Program Files\Anaconda3\python.exe""" ' Replace "C:\scripts\myscript.py" with the path to your Python script pythonScriptPath = """C:\scripts\myscript.py""" Shell pythonExePath & " " & pythonScriptPath, vbNormalFocus End Sub This code will open the Anaconda Prompt and run the python.exe executable, passing the path to your Python script as a command-line argument. This will cause the Python script to be executed in the Anaconda Prompt. edit; or you can try this Sub RunPythonScript() Dim objShell As Object Dim PythonExePath As String, PythonScriptPath As String Set objShell = VBA.Interaction.CreateObject("Wscript.Shell") PythonExePath = "C:xxx.exe" PythonScriptPath = "C:xxx.py" objShell.Exec PythonExePath & " " & PythonScriptPath End Sub A: Try both of these and feedback with your results. Public Sub PythonOutput() Dim oShell As Object, oCmd As String Dim oExec As Object, oOutput As Object Dim arg As Variant Dim s As String, sLine As String Set oShell = CreateObject("WScript.Shell") arg = "somevalue" oCmd = "python ""C:\Users\ryans\from_vba.py""" ' & " " & arg Set oExec = oShell.Exec(oCmd) Set oOutput = oExec.StdOut While Not oOutput.AtEndOfStream sLine = oOutput.ReadLine If sLine <> "" Then s = s & sLine & vbNewLine Wend Debug.Print s Set oOutput = Nothing: Set oExec = Nothing Set oShell = Nothing End Sub Sub RunPython() Dim objShell As Object Dim PythonExe, PythonScript As String Set objShell = VBA.CreateObject("Wscript.Shell") PythonExe = """C:\Users\ryans\AppData\Local\Programs\Python\Python38\python.exe""" PythonScript = "C:\Users\ryans\from_vba.py" objShell.Run PythonExe & PythonScript End Sub
How to run python code in anaconda prompt using vba?
I am attempting to run Python code using VBA. However, when running it using VBA, it was not successful (I discovered that it is not running in the Anaconda prompt). The code is attached as follows; appreciate the help. Sub RunPythonScript() Dim objShell As Object Dim PythonExePath As String, PythonScriptPath As String Set objShell = VBA.CreateObject("Wscript.Shell") PythonExePath = """C:xxx.exe""" PythonScriptPath = """C:xxx.py""" objShell.Run PythonExePath & " " & PythonScriptPath End Sub Alternatively, I manually run it in the Anaconda prompt and the code works. "C:xxx.exe" "C:xxx.py" What I observed on screen was the black cmd window popping out and disappearing in a second. It did not work as expected. Is there anything I input incorrectly? Sub RunPythonScript() Dim pythonExePath As String, pythonScriptPath As String pythonExePath = """C:\Users\xxx\Anaconda3\python.exe""" pythonScriptPath = """C:\Users\xxx\xxx.py""" Shell pythonExePath & " " & pythonScriptPath, vbNormalFocus End Sub
[ "The code you provided looks like it is trying to run a Python script using the Wscript.Shell object in VBA, which is used to run external programs and scripts. However, this will not work for running a Python script in the Anaconda Prompt, as the Anaconda Prompt is a command-line interface (CLI) and not a script.\nTo run a Python script in the Anaconda Prompt using VBA, you will need to use the Shell function to run the python.exe executable in the Anaconda Prompt and pass your Python script as a command-line argument. Here is an example of how you could do this:\nSub RunPythonScript()\nDim pythonExePath As String, pythonScriptPath As String\n\n' Replace \"C:\\Program Files\\Anaconda3\\python.exe\" with the path to your Anaconda Python installation\npythonExePath = \"\"\"C:\\Program Files\\Anaconda3\\python.exe\"\"\"\n\n' Replace \"C:\\scripts\\myscript.py\" with the path to your Python script\npythonScriptPath = \"\"\"C:\\scripts\\myscript.py\"\"\"\n\nShell pythonExePath & \" \" & pythonScriptPath, vbNormalFocus\nEnd Sub\n\nThis code will open the Anaconda Prompt and run the python.exe executable, passing the path to your Python script as a command-line argument. This will cause the Python script to be executed in the Anaconda Prompt.\nedit;\nor you can try this\nSub RunPythonScript()\nDim objShell As Object\nDim PythonExePath As String, PythonScriptPath As String\n\nSet objShell = VBA.Interaction.CreateObject(\"Wscript.Shell\")\n\nPythonExePath = \"C:xxx.exe\"\nPythonScriptPath = \"C:xxx.py\"\n\nobjShell.Exec PythonExePath & \" \" & PythonScriptPath\nEnd Sub\n\n", "Try both of these and feedback with your results.\nPublic Sub PythonOutput()\n\n Dim oShell As Object, oCmd As String\n Dim oExec As Object, oOutput As Object\n Dim arg As Variant\n Dim s As String, sLine As String\n\n Set oShell = CreateObject(\"WScript.Shell\")\n arg = \"somevalue\"\n oCmd = \"python \"\"C:\\Users\\ryans\\from_vba.py\"\"\" ' & \" \" & arg\n\n Set oExec = oShell.Exec(oCmd)\n Set oOutput = oExec.StdOut\n\n While Not oOutput.AtEndOfStream\n sLine = oOutput.ReadLine\n If sLine <> \"\" Then s = s & sLine & vbNewLine\n Wend\n\n Debug.Print s\n\n Set oOutput = Nothing: Set oExec = Nothing\n Set oShell = Nothing\n\nEnd Sub\n\n\nSub RunPython()\n\nDim objShell As Object\nDim PythonExe, PythonScript As String\n \n Set objShell = VBA.CreateObject(\"Wscript.Shell\")\n\n PythonExe = \"\"\"C:\\Users\\ryans\\AppData\\Local\\Programs\\Python\\Python38\\python.exe\"\"\"\n PythonScript = \"C:\\Users\\ryans\\from_vba.py\"\n \n objShell.Run PythonExe & PythonScript\n \nEnd Sub\n\n" ]
[ 0, 0 ]
[]
[]
[ "anaconda", "python", "vba" ]
stackoverflow_0074662928_anaconda_python_vba.txt
Q: Extension of type 'LibraryExtension' does not exist I created a Gradle plugin to unify some settings between the various modules of the application. The summary of the error is this: org.gradle.api.plugins.InvalidPluginException: An exception occurred applying plugin request [id: 'common.plugin'] Caused by: org.gradle.api.UnknownDomainObjectException: Extension of type 'LibraryExtension' does not exist. Currently registered extension types: [ExtraPropertiesExtension,....] This is a summary image of the project architecture: CommonPluginClass: class CommonPluginClass : Plugin<Project> { private val consumerProguardFileName = "consumer-rules.pro" private val proguardFileName = "proguard-rules.pro" private val sdkToCompile = 33 override fun apply(project: Project) { println(">>> Adding sugar to gradle files!") with(project) { applyPlugins(this) androidConfig(this) } println(">>> Sugar added for core Module!") } private fun applyPlugins(project: Project) { println(">>> apply plugins!") project.pluginManager.run { apply("com.android.application") apply("kotlin-android") apply("kotlin-kapt") } println(">>> end apply plugins!") } private fun androidConfig(project: Project) { project.extensions.configure<LibraryExtension>{ defaultConfig.targetSdk = 33 } } } The error occurs inside the androidConfig function when calling configure. I was inspired by Now in Android; dependencies and imports are similar, but it does not build. Can someone help unblock this, please. build-logic:convention build.gradle plugins { alias(libs.plugins.kotlin.jvm) apply false id "org.gradle.kotlin.kotlin-dsl" version "2.4.1" } dependencies { compileOnly(libs.android.pluginGradle) compileOnly(libs.kotlin.pluginGradle) } gradlePlugin { plugins { commonPlugin { id = "common.plugin" implementationClass = "CommonPluginClass" } } } LITTLE UPDATE: I've noticed that any module I use is always identified as ApplicationExtension A: I found the solution and as usual it's a stupid thing. The application extension uses the plugin apply("com.android.application") while the library or module uses the plugin apply("com.android.library")
Extension of type 'LibraryExtension' does not exist
I created a gradle plugin to unify some settings between the various modules of the application. the summary of the error is this: org.gradle.api.plugins.InvalidPluginException: An exception occurred applying plugin request [id: 'common.plugin'] Caused by: org.gradle.api.UnknownDomainObjectException: Extension of type 'LibraryExtension' does not exist. Currently registered extension types: [ExtraPropertiesExtension,....] This is a summary image of the project architecture: CommonPluginClass: class CommonPluginClass : Plugin<Project> { private val consumerProguardFileName = "consumer-rules.pro" private val proguardFileName = "proguard-rules.pro" private val sdkToCompile = 33 override fun apply(project: Project) { println(">>> Adding sugar to gradle files!") with(project) { applyPlugins(this) androidConfig(this) } println(">>> Sugar added for core Module!") } private fun applyPlugins(project: Project) { println(">>> apply plugins!") project.pluginManager.run { apply("com.android.application") apply("kotlin-android") apply("kotlin-kapt") } println(">>> end apply plugins!") } private fun androidConfig(project: Project) { project.extensions.configure<LibraryExtension>{ defaultConfig.targetSdk = 33 } } } the error occurs inside the androidConfig function when calling configure I was inspired by now android, dependencies and imports are similar but not build. Can someone unlock it please. build-logic:convention build.gradle plugins { alias(libs.plugins.kotlin.jvm) apply false id "org.gradle.kotlin.kotlin-dsl" version "2.4.1" } dependencies { compileOnly(libs.android.pluginGradle) compileOnly(libs.kotlin.pluginGradle) } gradlePlugin { plugins { commonPlugin { id = "common.plugin" implementationClass = "CommonPluginClass" } } } LITTLE UPDATE: I've noticed that any module I enter always identifies it to me as ApplicationExtension
[ "I found the solution and as usual it's a stupid thing.\nthe application extension use plugin\napply(\"com.android.application\")\n\nthe library or module use plugin\napply(\"com.android.library\")\n\n" ]
[ 0 ]
[]
[]
[ "android", "android_gradle_plugin", "gradle" ]
stackoverflow_0074640407_android_android_gradle_plugin_gradle.txt
Q: AuthRetryableFetchError: self signed certificate in certificate chain with Supabase and Next.js I'm receiving the following error with certificates when trying to fetch the user from Supabase inside getServerSideProps with Next.js: AuthRetryableFetchError: request to https://[redacted].supabase.co/auth/v1/user failed, reason: self signed certificate in certificate chain at [redacted]/node_modules/@supabase/gotrue-js/dist/main/lib/fetch.js:30:16 This is a simplified version of my code for reference: export const getServerSideProps = async ({ req, res }) => { const supabase = createServerSupabaseClient({ req, res }); const { data: { user }, error } = await supabase.auth.getUser(); if (error) console.error(error); return { props: { user, } } }; I've already setup yarn and npm to both use the right certificate using yarn config set cafile /path/to/certificate/file and npm config set cafile /path/to/certificate/file respectively, but for some reason when Next.js tries to get this from the server side (Node.js) it fails, and I'm not sure what service I need to setup to tell it where the certificate is set? There are a lot of similar questions out there, but I couldn't find any specifically about Next.js or hitting this issue in Node.js. Any help appreciated. A: So it seems to be possible to get round this issue with setting NODE_TLS_REJECT_UNAUTHORIZED=0 in env.local, but if anyone knows how to fix the issue rather than avoid it I'd be interested still. src: https://stackoverflow.com/a/45088585/827129
AuthRetryableFetchError: self signed certificate in certificate chain with Supabase and Next.js
I'm receiving the following error with certificates when trying to fetch the user from Supabase inside getServerSideProps with Next.js: AuthRetryableFetchError: request to https://[redacted].supabase.co/auth/v1/user failed, reason: self signed certificate in certificate chain at [redacted]/node_modules/@supabase/gotrue-js/dist/main/lib/fetch.js:30:16 This is a simplified version of my code for reference: export const getServerSideProps = async ({ req, res }) => { const supabase = createServerSupabaseClient({ req, res }); const { data: { user }, error } = await supabase.auth.getUser(); if (error) console.error(error); return { props: { user, } } }; I've already setup yarn and npm to both use the right certificate using yarn config set cafile /path/to/certificate/file and npm config set cafile /path/to/certificate/file respectively, but for some reason when Next.js tries to get this from the server side (Node.js) it fails, and I'm not sure what service I need to setup to tell it where the certificate is set? There are a lot of similar questions out there, but I couldn't find any specifically about Next.js or hitting this issue in Node.js. Any help appreciated.
[ "So it seems to be possible to get round this issue with setting NODE_TLS_REJECT_UNAUTHORIZED=0 in env.local, but if anyone knows how to fix the issue rather than avoid it I'd be interested still.\nsrc: https://stackoverflow.com/a/45088585/827129\n" ]
[ 0 ]
[]
[]
[ "javascript", "next.js", "node.js", "ssl_certificate" ]
stackoverflow_0074662906_javascript_next.js_node.js_ssl_certificate.txt
Q: Why does my second python async (scraping) function (which uses results from the first async (scraping) function) return no result? Summary of what the program should do: Step 1 (sync): Determine exactly how many pages need to be scraped. Step 2 (sync): create the links to the pages to be scraped in a for-loop. Step 3 (async): Use the link list from step 2 to get the links to the desired detail pages from each of these pages. Step 4 (async): Use the result from step 3 to extract the detail information for each hofladen. This information is stored in a list for each farm store and each of these lists is appended to a global list. Where do I have the problem? The transition from step 3 to step 4 does not seem to work properly. Traceback (most recent call last): File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module> asyncio.run(main()) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main await asyncio.gather(*tasks_detail_infos) File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos data = JsonLdExtractor().extract(body_d) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract tree = parse_html(htmlstring, encoding=encoding) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html return lxml.html.fromstring(html, parser=parser) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring doc = document_fromstring(html, parser=parser, base_url=base_url, **kw) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring raise etree.ParserError( lxml.etree.ParserError: Document is empty Process finished with exit code 1 What did I do to isolate the problem? In a first attempt I rewrote the async function append_detail_infos so that it no longer tries to create a list and append the values but only prints data[0]["name"]. 
This resulted in the error message Traceback (most recent call last): File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module> asyncio.run(main()) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main await asyncio.gather(*tasks_detail_infos) File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos data = JsonLdExtractor().extract(body_d) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract tree = parse_html(htmlstring, encoding=encoding) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html return lxml.html.fromstring(html, parser=parser) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring doc = document_fromstring(html, parser=parser, base_url=base_url, **kw) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring raise etree.ParserError( lxml.etree.ParserError: Document is empty Process finished with exit code 1 In the next attempt, I exported the links from detail_links as .csv and visually checked them and opened some of them to see if they were valid. This was also the case. The program code: import asyncio import time import aiohttp import requests import re from selectolax.parser import HTMLParser from extruct.jsonld import JsonLdExtractor import pandas as pd BASE_URL = "https://hofladen.info" FIRST_PAGE = 1 def get_last_page(url: str) -> int: res = requests.get(url).text html = HTMLParser(res) last_page = int(re.findall("(\d+)", html.css("li.page-last > a")[0].attributes["href"])[0]) return last_page def build_links_to_pages(start: int, ende: int) -> list: lst = [] for i in range(start, ende + 1): url = f"https://hofladen.info/regionale-produkte?page={i}" lst.append(url) return lst async def scrape_detail_links(url: str): async with aiohttp.ClientSession() as session: async with session.get(url, allow_redirects=True) as resp: body = await resp.text() html = HTMLParser(body) for node in html.css(".sp13"): detail_link = BASE_URL + node.attributes["href"] detail_links.append(detail_link) async def append_detail_infos(data): my_detail_lst = [] # print(data[0]["name"]) # name for debugging purpose my_detail_lst.append(data[0]["name"]) # name my_detail_lst.append(data[0]["address"]["streetAddress"]) # str my_detail_lst.append(data[0]["address"]["postalCode"]) # plz my_detail_lst.append(data[0]["address"]["addressLocality"]) # ort my_detail_lst.append(data[0]["address"]["addressRegion"]) # bundesland my_detail_lst.append(data[0]["address"]["addressCountry"]) # land my_detail_lst.append(data[0]["geo"]["latitude"]) # breitengrad my_detail_lst.append(data[0]["geo"]["longitude"]) # längengrad detail_infos.append(my_detail_lst) async def scrape_detail_infos(detail_link: str): async with aiohttp.ClientSession() as session_detailinfos: async with session_detailinfos.get(detail_link) as res_d: body_d = await res_d.text() data = JsonLdExtractor().extract(body_d) await append_detail_infos(data) async 
def main() -> None: start_time = time.perf_counter() # Beginn individueller code # ---------- global detail_links, detail_infos detail_links, detail_infos = [], [] tasks = [] tasks_detail_infos = [] # extrahiere die letzte zu iterierende Seite last_page = get_last_page("https://hofladen.info/regionale-produkte") # scrape detail links links_to_pages = build_links_to_pages(FIRST_PAGE, last_page) for link in links_to_pages: task = asyncio.create_task(scrape_detail_links(link)) tasks.append(task) print("Saving the output of extracted information.") await asyncio.gather(*tasks) pd.DataFrame(data=detail_links).to_csv("detail_links.csv") # scrape detail infos for detail_url in detail_links: task_detail_infos = asyncio.create_task(scrape_detail_infos(detail_url)) tasks_detail_infos.append(task_detail_infos) await asyncio.gather(*tasks_detail_infos) # Ende individueller Code # ------------ time_difference = time.perf_counter() - start_time print(f"Scraping time: {time_difference} seconds.") print(len(detail_links)) # print(detail_infos[]) asyncio.run(main()) A working solution to the problem: added python allow_redirects=True to python async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d: added python return_exceptions=True to python await asyncio.gather(*tasks_detail_infos, return_exceptions=True) A: A working solution to the problem: added python allow_redirects=True to python async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d: added python return_exceptions=True to python await asyncio.gather(*tasks_detail_infos, return_exceptions=True)
Why does my second python async (scraping) function (which uses results from the first async (scraping) function) return no result?
Summary of what the program should do: Step 1 (sync): Determine exactly how many pages need to be scraped. Step 2 (sync): create the links to the pages to be scraped in a for-loop. Step 3 (async): Use the link list from step 2 to get the links to the desired detail pages from each of these pages. Step 4 (async): Use the result from step 3 to extract the detail information for each hofladen. This information is stored in a list for each farm store and each of these lists is appended to a global list. Where do I have the problem? The transition from step 3 to step 4 does not seem to work properly. Traceback (most recent call last): File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module> asyncio.run(main()) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main await asyncio.gather(*tasks_detail_infos) File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos data = JsonLdExtractor().extract(body_d) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract tree = parse_html(htmlstring, encoding=encoding) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html return lxml.html.fromstring(html, parser=parser) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring doc = document_fromstring(html, parser=parser, base_url=base_url, **kw) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring raise etree.ParserError( lxml.etree.ParserError: Document is empty Process finished with exit code 1 What did I do to isolate the problem? In a first attempt I rewrote the async function append_detail_infos so that it no longer tries to create a list and append the values but only prints data[0]["name"]. 
This resulted in the error message Traceback (most recent call last): File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 108, in <module> asyncio.run(main()) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/runners.py", line 44, in run return loop.run_until_complete(main) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete return future.result() File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 96, in main await asyncio.gather(*tasks_detail_infos) File "/Users/REPLACED_MY_USER/PycharmProjects/PKI-Projekt/test_ttt.py", line 61, in scrape_detail_infos data = JsonLdExtractor().extract(body_d) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/jsonld.py", line 21, in extract tree = parse_html(htmlstring, encoding=encoding) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/extruct/utils.py", line 10, in parse_html return lxml.html.fromstring(html, parser=parser) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 873, in fromstring doc = document_fromstring(html, parser=parser, base_url=base_url, **kw) File "/Users/REPLACED_MY_USER/miniconda3/envs/scrapy/lib/python3.10/site-packages/lxml/html/__init__.py", line 761, in document_fromstring raise etree.ParserError( lxml.etree.ParserError: Document is empty Process finished with exit code 1 In the next attempt, I exported the links from detail_links as .csv and visually checked them and opened some of them to see if they were valid. This was also the case. The program code: import asyncio import time import aiohttp import requests import re from selectolax.parser import HTMLParser from extruct.jsonld import JsonLdExtractor import pandas as pd BASE_URL = "https://hofladen.info" FIRST_PAGE = 1 def get_last_page(url: str) -> int: res = requests.get(url).text html = HTMLParser(res) last_page = int(re.findall("(\d+)", html.css("li.page-last > a")[0].attributes["href"])[0]) return last_page def build_links_to_pages(start: int, ende: int) -> list: lst = [] for i in range(start, ende + 1): url = f"https://hofladen.info/regionale-produkte?page={i}" lst.append(url) return lst async def scrape_detail_links(url: str): async with aiohttp.ClientSession() as session: async with session.get(url, allow_redirects=True) as resp: body = await resp.text() html = HTMLParser(body) for node in html.css(".sp13"): detail_link = BASE_URL + node.attributes["href"] detail_links.append(detail_link) async def append_detail_infos(data): my_detail_lst = [] # print(data[0]["name"]) # name for debugging purpose my_detail_lst.append(data[0]["name"]) # name my_detail_lst.append(data[0]["address"]["streetAddress"]) # str my_detail_lst.append(data[0]["address"]["postalCode"]) # plz my_detail_lst.append(data[0]["address"]["addressLocality"]) # ort my_detail_lst.append(data[0]["address"]["addressRegion"]) # bundesland my_detail_lst.append(data[0]["address"]["addressCountry"]) # land my_detail_lst.append(data[0]["geo"]["latitude"]) # breitengrad my_detail_lst.append(data[0]["geo"]["longitude"]) # längengrad detail_infos.append(my_detail_lst) async def scrape_detail_infos(detail_link: str): async with aiohttp.ClientSession() as session_detailinfos: async with session_detailinfos.get(detail_link) as res_d: body_d = await res_d.text() data = JsonLdExtractor().extract(body_d) await append_detail_infos(data) async 
def main() -> None: start_time = time.perf_counter() # Beginn individueller code # ---------- global detail_links, detail_infos detail_links, detail_infos = [], [] tasks = [] tasks_detail_infos = [] # extrahiere die letzte zu iterierende Seite last_page = get_last_page("https://hofladen.info/regionale-produkte") # scrape detail links links_to_pages = build_links_to_pages(FIRST_PAGE, last_page) for link in links_to_pages: task = asyncio.create_task(scrape_detail_links(link)) tasks.append(task) print("Saving the output of extracted information.") await asyncio.gather(*tasks) pd.DataFrame(data=detail_links).to_csv("detail_links.csv") # scrape detail infos for detail_url in detail_links: task_detail_infos = asyncio.create_task(scrape_detail_infos(detail_url)) tasks_detail_infos.append(task_detail_infos) await asyncio.gather(*tasks_detail_infos) # Ende individueller Code # ------------ time_difference = time.perf_counter() - start_time print(f"Scraping time: {time_difference} seconds.") print(len(detail_links)) # print(detail_infos[]) asyncio.run(main()) A working solution to the problem: added python allow_redirects=True to python async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d: added python return_exceptions=True to python await asyncio.gather(*tasks_detail_infos, return_exceptions=True)
[ "A working solution to the problem:\nadded\npython allow_redirects=True to python async with session_detailinfos.get(detail_link, allow_redirects=True) as res_d:\nadded python return_exceptions=True to python await asyncio.gather(*tasks_detail_infos, return_exceptions=True)\n" ]
[ 0 ]
[]
[]
[ "aiohttp", "python", "python_3.x", "python_asyncio" ]
stackoverflow_0074642424_aiohttp_python_python_3.x_python_asyncio.txt
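Distilling the accepted fix above into a standalone, hedged sketch (the URL list is a placeholder assumption; everything else follows the question's libraries):

import asyncio
import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    # allow_redirects=True follows 3xx responses instead of handing
    # an empty/redirect body to the parser, which raised ParserError.
    async with session.get(url, allow_redirects=True) as resp:
        return await resp.text()

async def main(urls: list[str]) -> None:
    async with aiohttp.ClientSession() as session:
        tasks = [asyncio.create_task(fetch(session, u)) for u in urls]
        # return_exceptions=True keeps one bad page from cancelling the
        # whole gather; failures come back as exception objects instead.
        results = await asyncio.gather(*tasks, return_exceptions=True)
    for url, res in zip(urls, results):
        if isinstance(res, Exception):
            print(f"{url} failed: {res!r}")

asyncio.run(main(["https://example.com"]))  # placeholder URL

Reusing one ClientSession for all requests, rather than opening one per URL as the question's code does, is also generally cheaper, though it is not what fixed the original error.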
Q: What does putting quotes around $@ in bash script do? i.e. "$@" instead of $@ I was getting errors passing in a command, and when I put quotes around it, things worked just fine. Just curious how that works. A: Double quotes around $@ (and similarly, ${array[@]}) prevent globbing and word splitting of individual elements, while still expanding to multiple separate arguments. See: https://github.com/koalaman/shellcheck/wiki/SC2068
What does putting quotes around $@ in bash script do? i.e. "$@" instead of $@
I was getting errors passing in a command, and when I put quotes around it, things worked just fine. Just curious how that works.
[ "Double quotes around $@ (and similarly, ${array[@]}) prevents globbing and word splitting of individual elements, while still expanding to multiple separate arguments.\nSee: https://github.com/koalaman/shellcheck/wiki/SC2068\n" ]
[ 0 ]
[]
[]
[ "bash" ]
stackoverflow_0074667335_bash.txt
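For readers coming from other languages, the same word-splitting idea can be illustrated outside bash; the following Python comparison is an assumed analogy (printf availability depends on the system), not part of the original answer:

import subprocess

args = ["hello world", "two words"]

# List form: each element arrives as exactly one argument,
# the way "$@" forwards arguments in a bash script.
subprocess.run(["printf", "%s\\n", *args])

# Shell-string form: the shell re-splits on whitespace,
# the way an unquoted $@ would, yielding four words here.
subprocess.run("printf '%s\\n' " + " ".join(args), shell=True)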
Q: object of objects query using postgresql i am using postgresql in profile model there is field (progress ) which is an object and inside this object there are multiple objects each object has 2 number i want to check if those 2 numbers are equal to do increase points progress : { "progress1": { "mustWin": 1, "progress": 0 }, "progress2": { "mustWin": 1, "progress": 0 }, } this is the default value of progress how can i get access to progress1 and 2 A: If you want to manually gets progress1 and progress2 values, then use this: with tb as ( select '{"progress" : { "progress1": { "mustWin": 1, "progress": 0 }, "progress2": { "mustWin": 1, "progress": 0 } }}'::jsonb as js ) select tb.js->'progress'->'progress1', tb.js->'progress'->'progress2' from tb -- Result: |progress1 |progress2 | |-----------------------------+-----------------------------+ |{"mustWin": 1, "progress": 0}|{"mustWin": 1, "progress": 0}| If you don't know how many objects have inside progress, then use this: with tb as ( select '{"progress" : { "progress1": { "mustWin": 1, "progress": 0 }, "progress2": { "mustWin": 1, "progress": 0 }, "progress3": { "mustWin": 1, "progress": 0 }, "progress4": { "mustWin": 1, "progress": 0 } }}'::jsonb as js ) select tb2."key", tb2."value" from tb tb1 cross join jsonb_each_text(tb1.js->'progress') tb2 -- Result: | key | value | |-----------|-------------------------------| | progress1 | {"mustWin": 1, "progress": 0} | | progress2 | {"mustWin": 1, "progress": 0} | | progress3 | {"mustWin": 1, "progress": 0} | | progress4 | {"mustWin": 1, "progress": 0} |
object of objects query using postgresql
I am using PostgreSQL. In the profile model there is a field (progress) which is an object, and inside this object there are multiple objects; each object has 2 numbers. I want to check if those 2 numbers are equal in order to increase points. progress : { "progress1": { "mustWin": 1, "progress": 0 }, "progress2": { "mustWin": 1, "progress": 0 }, } This is the default value of progress. How can I get access to progress1 and 2?
[ "If you want to manually gets progress1 and progress2 values, then use this:\nwith tb as (\nselect \n'{\"progress\" : {\n \"progress1\": {\n \"mustWin\": 1,\n \"progress\": 0\n },\n \"progress2\": {\n \"mustWin\": 1,\n \"progress\": 0\n } \n}}'::jsonb as js \n)\nselect \n tb.js->'progress'->'progress1', \n tb.js->'progress'->'progress2' \nfrom tb \n\n-- Result:\n|progress1 |progress2 |\n|-----------------------------+-----------------------------+\n|{\"mustWin\": 1, \"progress\": 0}|{\"mustWin\": 1, \"progress\": 0}|\n\nIf you don't know how many objects have inside progress, then use this:\nwith tb as (\nselect \n'{\"progress\" : {\n \"progress1\": {\n \"mustWin\": 1,\n \"progress\": 0\n },\n \"progress2\": {\n \"mustWin\": 1,\n \"progress\": 0\n }, \n \"progress3\": {\n \"mustWin\": 1,\n \"progress\": 0\n }, \n \"progress4\": {\n \"mustWin\": 1,\n \"progress\": 0\n } \n}}'::jsonb as js \n)\nselect tb2.\"key\", tb2.\"value\" from tb tb1 \ncross join jsonb_each_text(tb1.js->'progress') tb2 \n\n-- Result:\n| key | value |\n|-----------|-------------------------------|\n| progress1 | {\"mustWin\": 1, \"progress\": 0} |\n| progress2 | {\"mustWin\": 1, \"progress\": 0} |\n| progress3 | {\"mustWin\": 1, \"progress\": 0} |\n| progress4 | {\"mustWin\": 1, \"progress\": 0} |\n\n" ]
[ 0 ]
[]
[]
[ "postgresql" ]
stackoverflow_0074665391_postgresql.txt
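If the equality check is done application-side instead of in SQL, the logic the question describes reduces to a loop over the sub-objects; a minimal Python sketch, assuming the dict shape from the question's default value and a hypothetical one-point reward per completed entry:

# Assumed shape, taken from the question's default value.
progress = {
    "progress1": {"mustWin": 1, "progress": 0},
    "progress2": {"mustWin": 1, "progress": 1},
}

points = 0
for name, entry in progress.items():
    # The question's condition: the two numbers are equal.
    if entry["mustWin"] == entry["progress"]:
        points += 1  # assumed reward of one point per completed entry
        print(f"{name} complete, points now {points}")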
Q: How to put an array of objects into an object name data for datatables I have an array of objects as follows: [ { "type": "Feature", "geometry": { "type": "Point", "coordinates": [ 137.89094924926758, 36.93143814715343 ] }, "properties": { "@geometry": "center", "@id": "way/323049815", "id": "way/323049815", "landuse": "winter_sports", "name": "糸魚川シーサイドバレースキー場", "name:en": "Itoigawa Seaside Valley Ski Resort", "name:ja": "糸魚川シーサイドバレースキー場", "source": "Bing", "sport": "skiing", "website": "https://www.seasidevalley.com/", "wikidata": "Q11604871", "wikipedia": "ja:糸魚川シーサイドバレースキー場" }, [snip] I want to add the above array into a data object in javascript as follows. { "data": [ //my data here. ] } I have tried this; let mydata = { "data": skidata } but is places back slashed a lot like this snippet; { "data": "[{\"type\":\"Feature\",\"geometry\":{\"type\":\"Point\",\"coordinates\"... How do I remove the back slashes in javascript please? This is the specific code; let skidata = JSON.stringify(uniqueFeatures); let mydata = { "data": skidata } console.log((mydata)); When I console.log skidata, there are no backslashes. When I console.log mydata, back slashes are added, I think. This is a pict of console.log(mydata) A: Don't use JSON.stringify(uniqueFeatures). That turns your object into a string. Instead of doing that, just use let mydata1 = { "data": uniqueFeatures };. Demo: let uniqueFeatures = [{ "type": "Feature", "geometry": { "type": "Point", "coordinates": [ 137.89094924926758, 36.93143814715343 ] }, "properties": { "@geometry": "center", "@id": "way/323049815", "id": "way/323049815", "landuse": "winter_sports", "name": "糸魚川シーサイドバレースキー場", "name:en": "Itoigawa Seaside Valley Ski Resort", "name:ja": "糸魚川シーサイドバレースキー場", "source": "Bing", "sport": "skiing", "website": "https://www.seasidevalley.com/", "wikidata": "Q11604871", "wikipedia": "ja:糸魚川シーサイドバレースキー場" } }]; let skidata = JSON.stringify(uniqueFeatures); let mydata1 = { "data": uniqueFeatures }; let mydata2 = { "data": skidata }; console.log("What you get if you stringify your object:"); console.log(mydata2); console.log("What you want, instead:"); console.log(mydata1); <!doctype html> <html> <head> <meta charset="UTF-8"> <title>Demo</title> </head> <body> </body> </html> Reference: JSON.stringify() A: To remove all the backslashes from a string in JavaScript, you can use the replace method: const json = '"data": "[{\"type\":\"Feature\",\"geometry\":{\"type\":\"Point\",\"coordinates\"...'; const jsonWithoutBackslashes = json.replace(/\\/g, '');
How to put an array of objects into an object name data for datatables
I have an array of objects as follows: [ { "type": "Feature", "geometry": { "type": "Point", "coordinates": [ 137.89094924926758, 36.93143814715343 ] }, "properties": { "@geometry": "center", "@id": "way/323049815", "id": "way/323049815", "landuse": "winter_sports", "name": "糸魚川シーサイドバレースキー場", "name:en": "Itoigawa Seaside Valley Ski Resort", "name:ja": "糸魚川シーサイドバレースキー場", "source": "Bing", "sport": "skiing", "website": "https://www.seasidevalley.com/", "wikidata": "Q11604871", "wikipedia": "ja:糸魚川シーサイドバレースキー場" }, [snip] I want to add the above array into a data object in javascript as follows. { "data": [ //my data here. ] } I have tried this: let mydata = { "data": skidata } but it places a lot of backslashes, like this snippet: { "data": "[{\"type\":\"Feature\",\"geometry\":{\"type\":\"Point\",\"coordinates\"... How do I remove the backslashes in JavaScript, please? This is the specific code: let skidata = JSON.stringify(uniqueFeatures); let mydata = { "data": skidata } console.log((mydata)); When I console.log skidata, there are no backslashes. When I console.log mydata, backslashes are added, I think. This is a picture of console.log(mydata)
[ "Don't use JSON.stringify(uniqueFeatures).\nThat turns your object into a string.\nInstead of doing that, just use let mydata1 = { \"data\": uniqueFeatures };.\nDemo:\n\n\nlet uniqueFeatures = [{\n \"type\": \"Feature\",\n \"geometry\": {\n \"type\": \"Point\",\n \"coordinates\": [\n 137.89094924926758,\n 36.93143814715343\n ]\n },\n \"properties\": {\n \"@geometry\": \"center\",\n \"@id\": \"way/323049815\",\n \"id\": \"way/323049815\",\n \"landuse\": \"winter_sports\",\n \"name\": \"糸魚川シーサイドバレースキー場\",\n \"name:en\": \"Itoigawa Seaside Valley Ski Resort\",\n \"name:ja\": \"糸魚川シーサイドバレースキー場\",\n \"source\": \"Bing\",\n \"sport\": \"skiing\",\n \"website\": \"https://www.seasidevalley.com/\",\n \"wikidata\": \"Q11604871\",\n \"wikipedia\": \"ja:糸魚川シーサイドバレースキー場\"\n }\n}];\n\nlet skidata = JSON.stringify(uniqueFeatures);\n\nlet mydata1 = {\n \"data\": uniqueFeatures\n};\n\nlet mydata2 = {\n \"data\": skidata\n};\n\nconsole.log(\"What you get if you stringify your object:\");\nconsole.log(mydata2);\n\nconsole.log(\"What you want, instead:\");\nconsole.log(mydata1);\n<!doctype html>\n<html>\n\n<head>\n <meta charset=\"UTF-8\">\n <title>Demo</title>\n</head>\n\n<body>\n\n</body>\n\n</html>\n\n\n\nReference: JSON.stringify()\n", "To remove all the backslashes from a string in JavaScript, you can use the replace method:\nconst json = '\"data\": \"[{\\\"type\\\":\\\"Feature\\\",\\\"geometry\\\":{\\\"type\\\":\\\"Point\\\",\\\"coordinates\\\"...';\n\nconst jsonWithoutBackslashes = json.replace(/\\\\/g, '');\n\n" ]
[ 1, 0 ]
[]
[]
[ "datatables", "javascript", "json" ]
stackoverflow_0074663829_datatables_javascript_json.txt
Q: dataframe group by for all columns in new dataframe I want to create a new dataframe with the values grouped by each column header dataset this is the dataset i'm working with. I essentially want a new dataframe which sums the occurences of 1 and 0 for each feature (chocolate, fruity etc) i tried this code with the groupby and sort function ` chocolate = data.groupby(["chocolate"]).size() bar = data.groupby(["bar"]).size() hard = data.groupby(["hard"]).size() display(chocolate,bar, hard) ` but this only gives me the sum per feature this is the end result i want to become end result A: You could try the following: res = ( data .drop(columns="competitorname") .melt().value_counts() .unstack() .fillna(0).astype("int").T ) Eliminate the columns that aren't relevant (I've only seen competitorname, but there could be more). .melt the dataframe. The result has 2 columns, one with the column names, and another with the resp. 0/1 values. Now .value_counts gives you a series that essentially contains what you are looking for. Then you just have to .unstack the first index level (column names) and transpose the dataframe. Example: data = pd.DataFrame({ "competitorname": ["A", "B", "C"], "chocolate": [1, 0, 0], "bar": [1, 0, 1], "hard": [1, 1, 1] }) competitorname chocolate bar hard 0 A 1 1 1 1 B 0 0 1 2 C 0 1 1 Result: variable bar chocolate hard value 0 1 2 0 1 2 1 3 Alternative with .pivot_table: res = ( data .drop(columns="competitorname") .melt().value_counts().to_frame() .pivot_table(index="value", columns="variable", fill_value=0) .droplevel(0, axis=1) ) PS: Please don't post images, provide a litte example (like here) that encapsulates your problem.
dataframe group by for all columns in new dataframe
I want to create a new dataframe with the values grouped by each column header dataset this is the dataset I'm working with. I essentially want a new dataframe which sums the occurrences of 1 and 0 for each feature (chocolate, fruity etc.). I tried this code with the groupby and sort function ` chocolate = data.groupby(["chocolate"]).size() bar = data.groupby(["bar"]).size() hard = data.groupby(["hard"]).size() display(chocolate,bar, hard) ` but this only gives me the sum per feature. This is the end result I want to achieve: end result
[ "You could try the following:\nres = (\n data\n .drop(columns=\"competitorname\")\n .melt().value_counts()\n .unstack()\n .fillna(0).astype(\"int\").T\n)\n\n\nEliminate the columns that aren't relevant (I've only seen competitorname, but there could be more).\n.melt the dataframe. The result has 2 columns, one with the column names, and another with the resp. 0/1 values.\nNow .value_counts gives you a series that essentially contains what you are looking for.\nThen you just have to .unstack the first index level (column names) and transpose the dataframe.\n\nExample:\ndata = pd.DataFrame({\n \"competitorname\": [\"A\", \"B\", \"C\"],\n \"chocolate\": [1, 0, 0], \"bar\": [1, 0, 1], \"hard\": [1, 1, 1]\n})\n\n competitorname chocolate bar hard\n0 A 1 1 1\n1 B 0 0 1\n2 C 0 1 1\n\nResult:\nvariable bar chocolate hard\nvalue \n0 1 2 0\n1 2 1 3\n\nAlternative with .pivot_table:\nres = (\n data\n .drop(columns=\"competitorname\")\n .melt().value_counts().to_frame()\n .pivot_table(index=\"value\", columns=\"variable\", fill_value=0)\n .droplevel(0, axis=1)\n)\n\nPS: Please don't post images, provide a litte example (like here) that encapsulates your problem.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074665750_dataframe_pandas_python.txt
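As a further hedged aside to the answer above: because value_counts can be applied column-wise, the same 0/1 tally falls out of a one-liner; the toy frame below reuses the answer's example data:

import pandas as pd

data = pd.DataFrame({
    "competitorname": ["A", "B", "C"],
    "chocolate": [1, 0, 0], "bar": [1, 0, 1], "hard": [1, 1, 1],
})

# value_counts per column: rows are the values 0/1, columns the features.
res = (
    data.drop(columns="competitorname")
        .apply(pd.Series.value_counts)
        .fillna(0).astype(int)
)
print(res)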
Q: file put content in specific path in php I'm running my website on localhost , when I upload files the uploaded file stored in htdocs (local host file) so i want to change the location to my project assets file which is in other location than localhost files this is my code : <?php header('Access-Control-Allow-Origin: *'); header('Access-Control-Allow-Methods: GET, POST'); header("Access-Control-Allow-Headers: X-Requested-With"); $image = $_POST['file']; $name = $_POST['name']; $file = base64_decode($image); file_put_contents($name , $file); echo 'upload is finished'; ?> A: Note: I used this question to check how would OpenAI perform on a real world StackOverflow question. I'm posting the answer here as it actually looks like a legitimate solution. It looks like you want to change the directory where uploaded files are stored on your localhost web server. To do this, you can use the move_uploaded_file() function in PHP. This function allows you to move an uploaded file to a new location on the server. Here's an example of how you can use this function to move the uploaded file to a new directory: // define the new location for the uploaded file $newLocation = '/path/to/your/project/assets/' . $name; // move the uploaded file to the new location if (move_uploaded_file($file, $newLocation)) { echo 'File successfully moved to new location'; } else { echo 'Failed to move file to new location'; } In the code above, we first define the new location for the uploaded file by concatenating the file name with the path to your project's assets directory. Then, we use the move_uploaded_file() function to move the file to the new location. If the function is successful, it will return true; otherwise, it will return false. You can then use this new location to access the uploaded file in your project. I hope this helps! Let me know if you have any other questions. A: To change the location where the uploaded files are stored, you can use the "file_put_contents" function and specify the path to your project assets folder as the first argument: <?php header('Access-Control-Allow-Origin: *'); header('Access-Control-Allow-Methods: GET, POST'); header("Access-Control-Allow-Headers: X-Requested-With"); $image = $_POST['file']; $name = $_POST['name']; $file = base64_decode($image); // specify the path to your project assets folder $path = '/path/to/project/assets/'; file_put_contents($path . $name , $file); echo 'upload is finished'; ?> Make sure that the specified path is correct and that your PHP script has permission to write to that location. You can also create the folder if it doesn't exist using the "mkdir" function.
file put content in specific path in php
I'm running my website on localhost. When I upload files, the uploaded file is stored in htdocs (the localhost folder), so I want to change the location to my project's assets folder, which is in a different location than the localhost files. This is my code: <?php header('Access-Control-Allow-Origin: *'); header('Access-Control-Allow-Methods: GET, POST'); header("Access-Control-Allow-Headers: X-Requested-With"); $image = $_POST['file']; $name = $_POST['name']; $file = base64_decode($image); file_put_contents($name , $file); echo 'upload is finished'; ?>
[ "\nNote: I used this question to check how would OpenAI perform on a real world StackOverflow question. I'm posting the answer here as it actually looks like a legitimate solution.\n\nIt looks like you want to change the directory where uploaded files are stored on your localhost web server. To do this, you can use the move_uploaded_file() function in PHP. This function allows you to move an uploaded file to a new location on the server.\nHere's an example of how you can use this function to move the uploaded file to a new directory:\n// define the new location for the uploaded file\n$newLocation = '/path/to/your/project/assets/' . $name;\n\n// move the uploaded file to the new location\nif (move_uploaded_file($file, $newLocation)) {\n echo 'File successfully moved to new location';\n} else {\n echo 'Failed to move file to new location';\n}\n\nIn the code above, we first define the new location for the uploaded file by concatenating the file name with the path to your project's assets directory. Then, we use the move_uploaded_file() function to move the file to the new location. If the function is successful, it will return true; otherwise, it will return false.\nYou can then use this new location to access the uploaded file in your project. I hope this helps! Let me know if you have any other questions.\n", "To change the location where the uploaded files are stored, you can use the \"file_put_contents\" function and specify the path to your project assets folder as the first argument:\n<?php\n header('Access-Control-Allow-Origin: *');\n header('Access-Control-Allow-Methods: GET, POST');\n header(\"Access-Control-Allow-Headers: X-Requested-With\");\n $image = $_POST['file'];\n $name = $_POST['name'];\n $file = base64_decode($image);\n\n // specify the path to your project assets folder\n $path = '/path/to/project/assets/';\n file_put_contents($path . $name , $file);\n echo 'upload is finished';\n?>\n\nMake sure that the specified path is correct and that your PHP script has permission to write to that location. You can also create the folder if it doesn't exist using the \"mkdir\" function.\n" ]
[ 0, 0 ]
[]
[]
[ "php" ]
stackoverflow_0074667170_php.txt
Q: Adding Link tag dosen't load in react-dom import React from "react"; import "./App.css" import Navbar from "./components/Navbar/Navbar"; import Home from "./components/Home/Home" import About from "./components/About/About"; import { BrowserRouter,Route,Routes,Router } from "react-router-dom"; export default function App() { return( <div> <Navbar /> <BrowserRouter> <Routes> <Route path="/" element={<Home />} /> <Route path='about' element={<About/>} /> </Routes> </BrowserRouter> </div> ) } this is my app component import React from "react"; import {Link} from "react-router-dom"; export default function Navbar() { return( <nav> <Link to='/'> Home</Link> <Link to='/about'>About</Link> </nav> ) } this is my nav component The issue whenever i try to load from the URL without adding a navbar my component works perfectly fine but after i add navbar it doesn't works A: Move navbar inside BrowserRouter <BrowserRouter> <Navbar /> <Routes> <Route path="/" element={<Home />} /> <Route path='about' element={<About/>} /> </Routes> </BrowserRouter> </div> A: Try this App.js import { RouterProvider } from "react-router-dom"; import routes from "./routes/routes"; function App() { return ( <div><RouterProvider router={routes} /> </div> ); } export default App; <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script> router.js import { createBrowserRouter } from "react-router-dom"; import Main from "../layout/Main"; import Home from "./components/Home/Home" import About from "./components/About/About"; const routes = createBrowserRouter([ { path: "/", element: <Main />, children: [ { path: "/", element: <Home />, }, { path: "about", element: <About />, } ], }, ]); export default routes; <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script> Main.js import React from "react"; import { Outlet } from "react-router-dom"; import Navbar from "./components/Navbar/Navbar"; const Main = () => { return ( <div> <Navbar /> <Outlet /> </div> ); }; export default Main; <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script> Navbar.js import React from "react"; import { Link } from "react-router-dom"; const Navbar = () => { return ( <nav> <ul> <h1>Logo</h1> <li> <Link to="/">Home</Link> </li> <li> <Link to="/about">About</Link> </li> </ul> </nav> ); }; export default Navbar; <script src="https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js"></script> this is the best practice for the latest react-router-dom just start with the App.js copy and paste it click on the import lines which will prompt the creation of required files I have coded everything according to your components folder you will have to create two or three new files just copy and paste the code and add css where needed everything will work
Adding Link tag doesn't load in react-dom
import React from "react"; import "./App.css" import Navbar from "./components/Navbar/Navbar"; import Home from "./components/Home/Home" import About from "./components/About/About"; import { BrowserRouter,Route,Routes,Router } from "react-router-dom"; export default function App() { return( <div> <Navbar /> <BrowserRouter> <Routes> <Route path="/" element={<Home />} /> <Route path='about' element={<About/>} /> </Routes> </BrowserRouter> </div> ) } this is my app component import React from "react"; import {Link} from "react-router-dom"; export default function Navbar() { return( <nav> <Link to='/'> Home</Link> <Link to='/about'>About</Link> </nav> ) } this is my nav component The issue whenever i try to load from the URL without adding a navbar my component works perfectly fine but after i add navbar it doesn't works
[ "Move navbar inside BrowserRouter\n<BrowserRouter>\n <Navbar />\n <Routes>\n <Route path=\"/\" element={<Home />} />\n <Route path='about' element={<About/>} />\n \n \n </Routes>\n\n </BrowserRouter>\n</div>\n\n", "Try this\nApp.js\n\n\nimport { RouterProvider } from \"react-router-dom\";\nimport routes from \"./routes/routes\";\n\n\nfunction App() {\n return (\n <div><RouterProvider router={routes} />\n </div>\n );\n}\n\nexport default App;\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js\"></script>\n\n\n\nrouter.js\n\n\nimport { createBrowserRouter } from \"react-router-dom\";\nimport Main from \"../layout/Main\";\nimport Home from \"./components/Home/Home\"\nimport About from \"./components/About/About\";\n\nconst routes = createBrowserRouter([\n {\n path: \"/\",\n element: <Main />,\n children: [\n {\n path: \"/\",\n element: <Home />,\n },\n {\n path: \"about\",\n element: <About />,\n }\n ],\n },\n]);\n\nexport default routes;\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js\"></script>\n\n\n\nMain.js\n\n\nimport React from \"react\";\nimport { Outlet } from \"react-router-dom\";\nimport Navbar from \"./components/Navbar/Navbar\";\n\nconst Main = () => {\n return (\n <div>\n <Navbar />\n <Outlet />\n </div>\n );\n};\n\nexport default Main;\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js\"></script>\n\n\n\nNavbar.js\n\n\nimport React from \"react\";\nimport { Link } from \"react-router-dom\";\n\n\nconst Navbar = () => {\n return (\n <nav>\n <ul>\n <h1>Logo</h1>\n <li>\n <Link to=\"/\">Home</Link>\n </li>\n <li>\n <Link to=\"/about\">About</Link>\n </li>\n </ul>\n </nav>\n );\n};\n\nexport default Navbar;\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react/16.6.3/umd/react.production.min.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react-dom/16.6.3/umd/react-dom.production.min.js\"></script>\n\n\n\nthis is the best practice for the latest react-router-dom just start with the App.js\ncopy and paste it click on the import lines which will prompt the creation of required files I have coded everything according to your components folder you will have to create two or three new files just copy and paste the code and add css where needed everything will work\n" ]
[ 0, 0 ]
[]
[]
[ "react_router", "reactjs" ]
stackoverflow_0074667255_react_router_reactjs.txt
Q: What's the optimal implementation of a sliding window over a number's bits? Given a number i.e (0xD5B8), what is the most efficient way in Python to subset across the bits over a sliding window using only native libraries? A method might look like the following: def window_bits(n,w, s): ''' n: the number w: the window size s: step size ''' # code window_bits(0xD5B8, 4, 4) # returns [[0b1101],[0b0101],[0b1011],[0b1000]] window_bits(0xD5B8, 2, 2) # returns [[0b11],[0b01],[0b01],[0b01],[0b10],[0b11],[0b10],[0b00]] Some things to consider: should strive to use minimal possible memory footprint can only use inbuilt libraries as fast as possible. if len(bin(n)) % w != 0, then the last window should exist, with a size less than w Some of the suggestions are like How to iterate over a list in chunks, which is convert the int using bin and iterate over as a slice. However, these questions do not prove the optimality. I would think that there are other possible bitwise operations that could be done that are more optimal than running over the bin as a slice (a generic data structure), either from a memory or speed perspective. This question is about the MOST optimal, not about what gets the job done, and it can be considered from memory, speed, or both. Ideally, an answer gives good reasons why their representation is the most optimal. So, if it is provably the most optimal to convert to bin(x) and then just manage the bits as a slice, then that's the optimal methodology. But this is NOT about an easy way to move a window around bits. Every op and bit counts in this question. A: The "naive" option would be to create a bits array - bin(n)[2:] - and then use the answers from How to iterate over a list in chunks. But this is most likely not so efficient assuming we can use bit operations. Another option is to shift-and-mask the input according to the window and step size: def window_bits(n, w, step_size): offset = n.bit_length() - w # the initial shift to get the MSB window mask = 2**w-1 # To get the actual window we need while offset >= 0: print(f"{(n >> offset)&mask:x}") offset -= step_size # advance the window And running window_bits(0xD5B8, 4, 4) will indeed print each nibble on a separate line. A: This is not the full answer yet, as I need to do more research, but wanted to add it to the question. Here's a modification of Tomerikoo's answer that handles the ends better. This is the "blue" section of the graph below. def window_bits(n, w, s): offset = n.bit_length() - w mask = 2**w-1 ret = [] while offset >= 0: ret.append((n >> offset) & mask) offset -= s # advance the window if offset < 0: # close the end mask = 2**(-offset)-1 ret.append((n >> mask) & mask) return ret This, along with the red chunker algo mentioned next, was benchmarked over pytest benchamrk. The red is the chunker used over the following function: def chunker(n, size, s=None): seq = bin(n) return (seq[pos:pos + size] for pos in range(0, len(seq), size)) These were parameterized the same, with the following: numbers = [2**n for n in range(10)] window_size = [4, 8, 12] step_size = window_size A couple things that stood out to me: The chunker has much more of an even execution time whereas the window_bit function executes with a lot more variance. The chunker is just faster in general. I'm looking into why this might be the case, as it's not clear yet to me if there's something else at play here. 
I would think that the bit-shifting ops would be faster, but maybe there are some optimizations with slicing happening that I'm not sure about.
What's the optimal implementation of a sliding window over a number's bits?
Given a number i.e (0xD5B8), what is the most efficient way in Python to subset across the bits over a sliding window using only native libraries? A method might look like the following: def window_bits(n,w, s): ''' n: the number w: the window size s: step size ''' # code window_bits(0xD5B8, 4, 4) # returns [[0b1101],[0b0101],[0b1011],[0b1000]] window_bits(0xD5B8, 2, 2) # returns [[0b11],[0b01],[0b01],[0b01],[0b10],[0b11],[0b10],[0b00]] Some things to consider: should strive to use minimal possible memory footprint can only use inbuilt libraries as fast as possible. if len(bin(n)) % w != 0, then the last window should exist, with a size less than w Some of the suggestions are like How to iterate over a list in chunks, which is convert the int using bin and iterate over as a slice. However, these questions do not prove the optimality. I would think that there are other possible bitwise operations that could be done that are more optimal than running over the bin as a slice (a generic data structure), either from a memory or speed perspective. This question is about the MOST optimal, not about what gets the job done, and it can be considered from memory, speed, or both. Ideally, an answer gives good reasons why their representation is the most optimal. So, if it is provably the most optimal to convert to bin(x) and then just manage the bits as a slice, then that's the optimal methodology. But this is NOT about an easy way to move a window around bits. Every op and bit counts in this question.
[ "The \"naive\" option would be to create a bits array - bin(n)[2:] - and then use the answers from How to iterate over a list in chunks.\nBut this is most likely not so efficient assuming we can use bit operations. Another option is to shift-and-mask the input according to the window and step size:\ndef window_bits(n, w, step_size):\n offset = n.bit_length() - w # the initial shift to get the MSB window\n mask = 2**w-1 # To get the actual window we need\n while offset >= 0:\n print(f\"{(n >> offset)&mask:x}\")\n offset -= step_size # advance the window\n\nAnd running window_bits(0xD5B8, 4, 4) will indeed print each nibble on a separate line.\n", "This is not the full answer yet, as I need to do more research, but wanted to add it to the question.\nHere's a modification of Tomerikoo's answer that handles the ends better.\nThis is the \"blue\" section of the graph below.\ndef window_bits(n, w, s):\n offset = n.bit_length() - w \n mask = 2**w-1 \n ret = []\n while offset >= 0:\n ret.append((n >> offset) & mask)\n offset -= s # advance the window\n if offset < 0: # close the end\n mask = 2**(-offset)-1\n ret.append((n >> mask) & mask)\n return ret\n\nThis, along with the red chunker algo mentioned next, was benchmarked over pytest benchamrk.\nThe red is the chunker used over the following function:\ndef chunker(n, size, s=None):\n seq = bin(n)\n return (seq[pos:pos + size] for pos in range(0, len(seq), size))\n\n\nThese were parameterized the same, with the following:\nnumbers = [2**n for n in range(10)]\nwindow_size = [4, 8, 12]\nstep_size = window_size\n\nA couple things that stood out to me:\n\nThe chunker has much more of an even execution time whereas the window_bit function executes with a lot more variance.\nThe chunker is just faster in general.\n\nI'm looking into why this might be the case, as it's not clear yet to me if there's something else at play here. I would think that the bit shifting ops would be faster, but maybe there's some optimizations with slicing that's happening that I'm not sure about.\n" ]
[ 1, 0 ]
[]
[]
[ "bit", "bit_shift", "python" ]
stackoverflow_0074641295_bit_bit_shift_python.txt
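Picking up the open bug noted in the self-answer above (the tail branch shifts by mask instead of masking the leftover low bits), one possible corrected sketch is below; it keeps the original's bit_length convention and assumes w is no larger than the number's bit length:

def window_bits(n: int, w: int, s: int) -> list[int]:
    offset = n.bit_length() - w
    mask = (1 << w) - 1
    out = []
    while offset >= 0:
        out.append((n >> offset) & mask)  # full-width window
        offset -= s
    if -s < offset < 0:
        # Tail narrower than w: keep only the w + offset lowest bits
        # (offset is negative here), without any shift.
        out.append(n & ((1 << (w + offset)) - 1))
    return out

print([hex(x) for x in window_bits(0xD5B8, 4, 4)])  # ['0xd', '0x5', '0xb', '0x8']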
Q: Assign a Static Elastic IP to Application Load Balancer How to create a Network Load Balancer with one or more Elastic IP addresses in front of the Application Load Balancer using AWS CDK? This should allow having fixed IP addresses for the load balancer. The article I need a static IP address for my Application Load Balancer. How can I register an Application Load Balancer behind a Network Load Balancer? recommends this approach. The CDK API manual does not cover this use case. The class NetworkLoadBalancer (construct) lacks a definition of the SubnetMappings property. This looks like an issue in the documentation or the library. The code should be preferable in TypeScript. A: Before answering the question about the static IP address for the load balancer, I want to suggest an alternative solution. Alternative solution - register ALB with Route53 We can create a DNS record for the ALB in Route53 using CDK. The code is straightforward. Unfortunately, this solution does not work nicely if the DNS zone is not hosted by the AWS Route53. // Create or import your hosted zone. const hostedZone = new PublicHostedZone(stack, 'HostedZone', { zoneName: 'your.domain.name' }) const loadBalancer = ... // Define your ALB. // Create a DNS record for your ALB instead of the static IP address. // eslint-disable-next-line no-new new ARecord(stack, 'WebServerARecord', { recordName: 'www', target: RecordTarget.fromAlias(new LoadBalancerTarget(loadBalancer)), zone: hostedZone }) Issues with fixed address for the Network Load Balancer At the moment of writing, CDK constructs do not have the option to assign a static IP address to the Network Load Balancer. There are several open issues on GitHub: SubnetMappings support for LoadBalancer #7424, Add support for SubnetMapping to Network Load Balancer #9696. In a nutshell, using the SubnetMappings allows the load balancer to specify one or more Elastic IP addresses, but the NetworkLoadBalancer class does not have the SubnetMappings property. [Network Load Balancers] You can specify subnets from one or more Availability Zones. You can specify one Elastic IP address per subnet if you need static IP addresses for your internet-facing load balancer. Issue 7424 is more than two years old. Instead of waiting, we might want to go for a workaround if we must. Workaround to assign Elastic IP address to the NLB We will register an Elastic IP address, create a Network Load Balancer and assign it the IP address. I also add code to import VPC and create a simple Application Load Balancer for completeness. Please, check the code below and comments on the code. Note that the network load balancer and the Elastic IP start responding with some delay after the stack creation. import { App, Stack } from 'aws-cdk-lib' import { CfnEIP, Port, Vpc } from 'aws-cdk-lib/aws-ec2' import { ApplicationLoadBalancer, ListenerAction, NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2' import { AlbTarget } from 'aws-cdk-lib/aws-elasticloadbalancingv2-targets' import { env } from 'process' function createStack (scope, id, props) { const stack = new Stack(scope, id, props) // 1. Prepare required resources: import VPC and create simple ALB. const vpc = Vpc.fromLookup(stack, 'Vpc', { vpcName: 'BlogVpc' }) const alb = new ApplicationLoadBalancer(stack, 'ALB', { internetFacing: false, port: 80, vpc }) const albListener = alb.addListener('HttpListener', { defaultAction: ListenerAction.fixedResponse(200, { contentType: 'text/plain', messageBody: 'Hello World!' 
}), open: true, port: 80, }) // 2. Create an Elastic IP address. const elasticIp = new CfnEIP(stack, 'ElasticIp', {domain: 'vpc'}) // 3.1. Create a network load balancer. const nlb = new NetworkLoadBalancer(stack, 'NLB', { crossZoneEnabled: true, internetFacing: true, vpc }) // 3.2. Add listener and target group to forward traffic to ALB. const nlbTargetGroup = nlb .addListener('AlbListener', {port: 80}) .addTargets('AlbTargets', {targets: [new AlbTarget(alb, 80)], port: 80}); // 3.3. We should create an ALB listener before creating the target group. This dependency is not added automatically. // https://github.com/aws/aws-cdk/issues/17208 nlbTargetGroup.node.addDependency(albListener); // 3.4. Replace Subnets with SubnetMappings. // https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticloadbalancingv2-loadbalancer-subnetmapping.html // We can have mappings for all public subnets. I did one for simplicity. const cfnNlb = nlb.node.defaultChild cfnNlb.addDeletionOverride('Properties.Subnets') cfnNlb.subnetMappings = [{ allocationId: elasticIp.attrAllocationId, subnetId: vpc.publicSubnets[0].subnetId }] return stack } const app = new App() createStack(app, 'StaticIpStack', { env: { account: env.CDK_DEFAULT_ACCOUNT, region: env.CDK_DEFAULT_REGION } }) The screenshot below shows the IP address association in the Console. You can add more associations to the cfnNlb.subnetMappings list. If you want to remove some, you must recreate the load balancer. A: Closing, As of now its not possible currently on Date 30 Aug 2022. A: Instead of assigning a static IP directly to your ALB, you can link a AWS Global Accelerator to your ALB and it in turn will give you two static ip addresses. Some code directly from the docs. declare const alb: elbv2.ApplicationLoadBalancer; declare const listener: globalaccelerator.Listener; listener.addEndpointGroup('Group', { endpoints: [ new ga_endpoints.ApplicationLoadBalancerEndpoint(alb, { weight: 128, preserveClientIp: true, }), ], });
Assign a Static Elastic IP to Application Load Balancer
How to create a Network Load Balancer with one or more Elastic IP addresses in front of the Application Load Balancer using AWS CDK? This should allow having fixed IP addresses for the load balancer. The article "I need a static IP address for my Application Load Balancer. How can I register an Application Load Balancer behind a Network Load Balancer?" recommends this approach. The CDK API manual does not cover this use case. The class NetworkLoadBalancer (construct) lacks a definition of the SubnetMappings property. This looks like an issue in the documentation or the library. The code should preferably be in TypeScript.
[ "Before answering the question about the static IP address for the load balancer, I want to suggest an alternative solution.\nAlternative solution - register ALB with Route53\nWe can create a DNS record for the ALB in Route53 using CDK. The code is straightforward. Unfortunately, this solution does not work nicely if the DNS zone is not hosted by the AWS Route53.\n\n // Create or import your hosted zone.\n const hostedZone = new PublicHostedZone(stack, 'HostedZone', {\n zoneName: 'your.domain.name'\n })\n\n const loadBalancer = ... // Define your ALB.\n\n // Create a DNS record for your ALB instead of the static IP address.\n // eslint-disable-next-line no-new\n new ARecord(stack, 'WebServerARecord', {\n recordName: 'www',\n target: RecordTarget.fromAlias(new LoadBalancerTarget(loadBalancer)),\n zone: hostedZone\n })\n\nIssues with fixed address for the Network Load Balancer\nAt the moment of writing, CDK constructs do not have the option to assign a static IP address to the Network Load Balancer. There are several open issues on GitHub: SubnetMappings support for LoadBalancer #7424, Add support for SubnetMapping to Network Load Balancer #9696.\nIn a nutshell, using the SubnetMappings allows the load balancer to specify one or more Elastic IP addresses, but the NetworkLoadBalancer class does not have the SubnetMappings property.\n\n[Network Load Balancers] You can specify subnets from one or more\nAvailability Zones. You can specify one Elastic IP address per subnet\nif you need static IP addresses for your internet-facing load\nbalancer.\n\nIssue 7424 is more than two years old. Instead of waiting, we might want to go for a workaround if we must.\nWorkaround to assign Elastic IP address to the NLB\nWe will register an Elastic IP address, create a Network Load Balancer and assign it the IP address. I also add code to import VPC and create a simple Application Load Balancer for completeness. Please, check the code below and comments on the code. Note that the network load balancer and the Elastic IP start responding with some delay after the stack creation.\nimport { App, Stack } from 'aws-cdk-lib'\nimport { CfnEIP, Port, Vpc } from 'aws-cdk-lib/aws-ec2'\nimport { ApplicationLoadBalancer, ListenerAction, NetworkLoadBalancer } from 'aws-cdk-lib/aws-elasticloadbalancingv2'\nimport { AlbTarget } from 'aws-cdk-lib/aws-elasticloadbalancingv2-targets'\nimport { env } from 'process'\n\nfunction createStack (scope, id, props) {\n const stack = new Stack(scope, id, props)\n\n // 1. Prepare required resources: import VPC and create simple ALB.\n const vpc = Vpc.fromLookup(stack, 'Vpc', { vpcName: 'BlogVpc' })\n const alb = new ApplicationLoadBalancer(stack, 'ALB', {\n internetFacing: false,\n port: 80,\n vpc\n })\n const albListener = alb.addListener('HttpListener', {\n defaultAction: ListenerAction.fixedResponse(200, {\n contentType: 'text/plain', messageBody: 'Hello World!'\n }),\n open: true,\n port: 80,\n })\n\n // 2. Create an Elastic IP address.\n const elasticIp = new CfnEIP(stack, 'ElasticIp', {domain: 'vpc'})\n\n // 3.1. Create a network load balancer.\n const nlb = new NetworkLoadBalancer(stack, 'NLB', {\n crossZoneEnabled: true,\n internetFacing: true,\n vpc\n })\n\n // 3.2. Add listener and target group to forward traffic to ALB.\n const nlbTargetGroup = nlb\n .addListener('AlbListener', {port: 80})\n .addTargets('AlbTargets', {targets: [new AlbTarget(alb, 80)], port: 80});\n\n // 3.3. We should create an ALB listener before creating the target group. 
This dependency is not added automatically.\n // https://github.com/aws/aws-cdk/issues/17208\n nlbTargetGroup.node.addDependency(albListener);\n\n // 3.4. Replace Subnets with SubnetMappings.\n // https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-elasticloadbalancingv2-loadbalancer-subnetmapping.html\n // We can have mappings for all public subnets. I did one for simplicity.\n const cfnNlb = nlb.node.defaultChild\n cfnNlb.addDeletionOverride('Properties.Subnets')\n cfnNlb.subnetMappings = [{\n allocationId: elasticIp.attrAllocationId,\n subnetId: vpc.publicSubnets[0].subnetId\n }]\n\n return stack\n}\n\nconst app = new App()\ncreateStack(app, 'StaticIpStack', {\n env: { account: env.CDK_DEFAULT_ACCOUNT, region: env.CDK_DEFAULT_REGION }\n})\n\nThe screenshot below shows the IP address association in the Console. You can add more associations to the cfnNlb.subnetMappings list. If you want to remove some, you must recreate the load balancer.\n\n", "Closing, As of now its not possible currently on Date 30 Aug 2022.\n", "Instead of assigning a static IP directly to your ALB, you can link a AWS Global Accelerator to your ALB and it in turn will give you two static ip addresses.\nSome code directly from the docs.\ndeclare const alb: elbv2.ApplicationLoadBalancer;\ndeclare const listener: globalaccelerator.Listener;\n\nlistener.addEndpointGroup('Group', {\n endpoints: [\n new ga_endpoints.ApplicationLoadBalancerEndpoint(alb, {\n weight: 128,\n preserveClientIp: true,\n }),\n ],\n});\n\n" ]
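For reference, a quick way to sanity-check the SubnetMappings workaround above after deployment is the AWS CLI (v2 here); the load balancer name and allocation id are placeholders for whatever your stack produced:

    # Show the NLB's availability zones, including the attached static address
    aws elbv2 describe-load-balancers --names <your-nlb-name> \
        --query 'LoadBalancers[0].AvailabilityZones'

    # Confirm the Elastic IP allocation now points at the NLB's network interface
    aws ec2 describe-addresses --allocation-ids <eipalloc-id> \
        --query 'Addresses[0].{PublicIp:PublicIp,Eni:NetworkInterfaceId}'

If the association worked, the first call lists your Elastic IP under LoadBalancerAddresses and the second call reports a NetworkInterfaceId that belongs to the load balancer.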
[ 1, 0, 0 ]
[]
[]
[ "amazon_web_services", "aws_cdk" ]
stackoverflow_0073257574_amazon_web_services_aws_cdk.txt
Q: You cannot render a inside another . You should never have more than one in your app import { BrowserRouter, Routes, Route } from "react-router-dom"; //Layouts import HomeLayoutRoute from "./components/layouts/HomeLayout"; //components import Home from './components/Home'; //import Dashboard from './components/Dash'; // Routing import PrivateRoute from "./components/routing/PrivateRoute"; // Screens import PrivateScreen from "./components/loginscreens/PrivateScreen"; import LoginScreen from "./components/loginscreens/LoginScreen"; import RegisterScreen from "./components/loginscreens/RegisterScreen"; import ForgotPasswordScreen from "./components/loginscreens/ForgotPasswordScreen"; import ResetPasswordScreen from "./components/loginscreens/ResetPasswordScreen"; const App = () => { return ( <BrowserRouter> <div className="app"> <Routes> <HomeLayoutRoute path="/" element={<Home />} /> <PrivateRoute path="/" element={<PrivateScreen/>} /> <Route path="/login" element={<LoginScreen/>} /> <Route path="/register" element={<RegisterScreen/>} /> <Route path="/forgotpassword" element={<ForgotPasswordScreen/>}/> <Route path="/passwordreset/:resetToken" element={<ResetPasswordScreen/>}/> </Routes> </div> </BrowserRouter> ); }; export default App; This is my App.js file This is the Error : Error: You cannot render a inside another . You should never have more than one in your app. This code works with React-Router-Dom Version 5, But When I move to React-Router-Dom version 6 this error came. A: Set up your index.js file similar to this ReactDOM.render( <React.StrictMode> <BrowserRouter> <Routes> <Route path="/" element={ <App /> }> </Route> </Routes> </BrowserRouter> </React.StrictMode>, document.getElementById('root') ); Then remove BrowserRouter in your App.js file const App = () => { return ( <div className="app"> <Routes> <HomeLayoutRoute path="/" element={<Home />} /> <PrivateRoute path="/" element={<PrivateScreen/>} /> <Route path="/login" element={<LoginScreen/>} /> <Route path="/register" element={<RegisterScreen/>} /> <Route path="/forgotpassword" element={<ForgotPasswordScreen/>}/> <Route path="/passwordreset/:resetToken" element={<ResetPasswordScreen/>}/> </Routes> </div> ); }; A: Try this! index.js import React from "react"; import ReactDOM from "react-dom"; import { BrowserRouter } from "react-router-dom"; import "./index.css"; import App from "./App"; ReactDOM.render( <BrowserRouter> <App /> </BrowserRouter>, document.getElementById("root") ); App.js Note that you need to import { Routes, Route } instead of { Route } (as it was in previous versions). Also, note that on the App.js file, it is not necessary to import BrowserRouter again. import { Routes, Route } from "react-router-dom"; import AllPages from "./pages/AllPages"; import NewContactsPage from "./pages/ContactsPage"; import FavoritesPage from "./pages/Favorites"; function App() { return ( <div> <Routes> <Route path="/" element={<AllPages />} /> <Route path="/new-contacts" element={<NewContactsPage />} /> <Route path="/favorites" element={<FavoritesPage />} /> </Routes> </div> ); } export default App; A: I'm using react-router-dom version 5.3.0 and I can't find an exported member named 'Routes'. Not sure if this member comes from an older version of react-router-dom. That said, I think you might want to replace <Routes> with <Switch>. <Switch> renders the first child <Route> or <Redirect> that matches the location. 
<BrowserRouter> <div className="app"> <Switch> <HomeLayoutRoute path="/" element={<Home />} /> <PrivateRoute path="/" element={<PrivateScreen/>} /> <Route path="/login" element={<LoginScreen/>} /> <Route path="/register" element={<RegisterScreen/>} /> <Route path="/forgotpassword" element={<ForgotPasswordScreen/>}/> <Route path="/passwordreset/:resetToken" element={<ResetPasswordScreen/>}/> </Switch> </div> </BrowserRouter> (https://reactrouter.com/web/api/Switch) Edit: As for the error: "You cannot render a inside another . You should never have more than one in your app" -> I think it might be related to the problem I mentioned above but it can also be because you have a duplicated router inside another. (E.g. the component <ForgotPasswordScreen/> contains another <Route> element inside). A: As of React Router version 6, nested routers will not be supported. So for example, you can't use <MemoryRouter> inside <BrowserRouter>. Please see: https://github.com/remix-run/react-router/issues/7375 A: I'm sorry, but, YOU DON'T NEED TO CHANGE THE VERSION If You're working with React v.18 You must update the rendering @ index.js to the propper and documented way to render root/app file... please visit this link for more details: Updates to Client Rendering APIs in your index.js: import { createRoot } from 'react-dom/client'; import { App } from './App'; const container = document.getElementById('root'); const root = createRoot(container); root.render(<App tab='home' />); in your App.jsx: import { BrowserRouter as Router, Routes, Route } from "react-router-dom"; import { AppNav } from "./Components/AppNav"; import { Home } from "./Pages/Home"; import { Users } from "./Pages/Users"; export const App = () => { return ( <Router> <AppNav /> <Routes> <Route path="/" element={<Home />} /> <Route path="/users" element={<Users />} /> </Routes> </Router> ); To see a working sample/code please visit my GitHub repo: github.com/lodela/React18 A: import { BrowserRouter as Router, Routes, Route } from "react-router-dom"; //Layouts import HomeLayoutRoute from "./components/layouts/HomeLayout"; //components import Home from './components/Home'; //import Dash from './components/DashBoard'; // Routing import PrivateRoute from "./components/routing/PrivateRoute"; // Screens import PrivateScreen from "./components/loginscreens/PrivateScreen"; import LoginScreen from "./components/loginscreens/LoginScreen"; import RegisterScreen from "./components/loginscreens/RegisterScreen"; import ForgotPasswordScreen from "./components/loginscreens/ForgotPasswordScreen"; import ResetPasswordScreen from "./components/loginscreens/ResetPasswordScreen"; const App = () => { return ( <Router> <div className="app"> <Routes> <HomeLayoutRoute exact path="/" component={Home} /> <PrivateRoute exact path="/" component={PrivateScreen} /> <Route exact path="/login" component={LoginScreen} /> <Route exact path="/register" component={RegisterScreen} /> <Route exact path="/forgotpassword" component={ForgotPasswordScreen} /> <Route exact path="/passwordreset/:resetToken" component={ResetPasswordScreen} /> </Routes> </div> </Router> ); }; export default App; This is the Code I have used with React-Router-Dom Version 5 I think problem with the Version 6 A: If nothing is working for you do this change the version but first erase the react-router-dom:6.something dependency from package.json folder and then 1). Uninstall Previous Version- npm uninstall react-router-dom 2). 
Install Older version- npm install [email protected] A: One possible error might be with <BrowserRouter> You might have included this element in your App.js and in your Index.js both. You can check and keep this element in any one file so nesting of duplicate routes will be removed. Hope it helps! A: after degrading react-router-dom@6 to react-router-dom@5 useNavigate is not working. useNavigate only works in react-router-dom@6
You cannot render a <Router> inside another <Router>. You should never have more than one in your app
import { BrowserRouter, Routes, Route } from "react-router-dom"; //Layouts import HomeLayoutRoute from "./components/layouts/HomeLayout"; //components import Home from './components/Home'; //import Dashboard from './components/Dash'; // Routing import PrivateRoute from "./components/routing/PrivateRoute"; // Screens import PrivateScreen from "./components/loginscreens/PrivateScreen"; import LoginScreen from "./components/loginscreens/LoginScreen"; import RegisterScreen from "./components/loginscreens/RegisterScreen"; import ForgotPasswordScreen from "./components/loginscreens/ForgotPasswordScreen"; import ResetPasswordScreen from "./components/loginscreens/ResetPasswordScreen"; const App = () => { return ( <BrowserRouter> <div className="app"> <Routes> <HomeLayoutRoute path="/" element={<Home />} /> <PrivateRoute path="/" element={<PrivateScreen/>} /> <Route path="/login" element={<LoginScreen/>} /> <Route path="/register" element={<RegisterScreen/>} /> <Route path="/forgotpassword" element={<ForgotPasswordScreen/>}/> <Route path="/passwordreset/:resetToken" element={<ResetPasswordScreen/>}/> </Routes> </div> </BrowserRouter> ); }; export default App; This is my App.js file. This is the error: Error: You cannot render a <Router> inside another <Router>. You should never have more than one in your app. This code works with React-Router-Dom version 5, but when I move to React-Router-Dom version 6 this error appears.
[ "Set up your index.js file similar to this\nReactDOM.render(\n <React.StrictMode>\n <BrowserRouter>\n <Routes>\n <Route path=\"/\" element={ <App /> }>\n </Route>\n </Routes>\n </BrowserRouter>\n </React.StrictMode>,\n document.getElementById('root')\n);\n\nThen remove BrowserRouter in your App.js file\nconst App = () => {\n return (\n <div className=\"app\">\n <Routes> \n <HomeLayoutRoute path=\"/\" element={<Home />} />\n <PrivateRoute path=\"/\" element={<PrivateScreen/>} />\n <Route path=\"/login\" element={<LoginScreen/>} />\n <Route path=\"/register\" element={<RegisterScreen/>} />\n <Route path=\"/forgotpassword\" element={<ForgotPasswordScreen/>}/>\n <Route path=\"/passwordreset/:resetToken\" element={<ResetPasswordScreen/>}/>\n </Routes>\n </div>\n );\n};\n\n", "Try this!\nindex.js\nimport React from \"react\";\nimport ReactDOM from \"react-dom\";\n\nimport { BrowserRouter } from \"react-router-dom\";\n\nimport \"./index.css\";\nimport App from \"./App\";\n\nReactDOM.render(\n <BrowserRouter>\n <App />\n </BrowserRouter>,\n document.getElementById(\"root\")\n);\n\nApp.js\nNote that you need to import { Routes, Route } instead of { Route } (as it was in previous versions). Also, note that on the App.js file, it is not necessary to import BrowserRouter again.\nimport { Routes, Route } from \"react-router-dom\";\n\nimport AllPages from \"./pages/AllPages\";\nimport NewContactsPage from \"./pages/ContactsPage\";\nimport FavoritesPage from \"./pages/Favorites\";\n\nfunction App() {\n\n return (\n <div>\n <Routes>\n <Route path=\"/\" element={<AllPages />} />\n <Route path=\"/new-contacts\" element={<NewContactsPage />} />\n <Route path=\"/favorites\" element={<FavoritesPage />} />\n </Routes>\n </div>\n );\n}\n\nexport default App;\n\n", "I'm using react-router-dom version 5.3.0 and I can't find an exported member named 'Routes'. Not sure if this member comes from an older version of react-router-dom.\nThat said, I think you might want to replace <Routes> with <Switch>.\n<Switch> renders the first child <Route> or <Redirect> that matches the location.\n<BrowserRouter>\n <div className=\"app\">\n <Switch> \n <HomeLayoutRoute path=\"/\" element={<Home />} />\n <PrivateRoute path=\"/\" element={<PrivateScreen/>} />\n <Route path=\"/login\" element={<LoginScreen/>} />\n <Route path=\"/register\" element={<RegisterScreen/>} />\n <Route path=\"/forgotpassword\" element={<ForgotPasswordScreen/>}/>\n <Route path=\"/passwordreset/:resetToken\" element={<ResetPasswordScreen/>}/>\n </Switch>\n </div>\n</BrowserRouter>\n\n(https://reactrouter.com/web/api/Switch)\nEdit:\nAs for the error: \"You cannot render a inside another . You should never have more than one in your app\" -> I think it might be related to the problem I mentioned above but it can also be because you have a duplicated router inside another. (E.g. the component <ForgotPasswordScreen/> contains another <Route> element inside).\n", "As of React Router version 6, nested routers will not be supported. So for example, you can't use <MemoryRouter> inside <BrowserRouter>.\nPlease see:\nhttps://github.com/remix-run/react-router/issues/7375\n", "I'm sorry, but, YOU DON'T NEED TO CHANGE THE VERSION\nIf You're working with React v.18 You must update the rendering @ index.js to the propper and documented way to render root/app file... 
please visit this link for more details: Updates to Client Rendering APIs\nin your index.js:\nimport { createRoot } from 'react-dom/client';\nimport { App } from './App';\nconst container = document.getElementById('root');\nconst root = createRoot(container);\nroot.render(<App tab='home' />);\n\nin your App.jsx:\nimport { BrowserRouter as Router, Routes, Route } from \"react-router-dom\";\nimport { AppNav } from \"./Components/AppNav\";\nimport { Home } from \"./Pages/Home\";\nimport { Users } from \"./Pages/Users\";\nexport const App = () => {\n return (\n <Router>\n <AppNav />\n <Routes>\n <Route path=\"/\" element={<Home />} />\n <Route path=\"/users\" element={<Users />} />\n </Routes>\n </Router>\n );\n\nTo see a working sample/code please visit my GitHub repo: github.com/lodela/React18\n", "import { BrowserRouter as Router, Routes, Route } from \"react-router-dom\";\n\n//Layouts\nimport HomeLayoutRoute from \"./components/layouts/HomeLayout\";\n\n//components\nimport Home from './components/Home';\n//import Dash from './components/DashBoard';\n\n// Routing\nimport PrivateRoute from \"./components/routing/PrivateRoute\";\n\n// Screens\nimport PrivateScreen from \"./components/loginscreens/PrivateScreen\";\nimport LoginScreen from \"./components/loginscreens/LoginScreen\";\nimport RegisterScreen from \"./components/loginscreens/RegisterScreen\";\nimport ForgotPasswordScreen from \"./components/loginscreens/ForgotPasswordScreen\";\nimport ResetPasswordScreen from \"./components/loginscreens/ResetPasswordScreen\";\n\n\nconst App = () => {\n return (\n <Router>\n <div className=\"app\">\n <Routes>\n <HomeLayoutRoute exact path=\"/\" component={Home} />\n <PrivateRoute exact path=\"/\" component={PrivateScreen} />\n <Route exact path=\"/login\" component={LoginScreen} />\n <Route exact path=\"/register\" component={RegisterScreen} />\n <Route\n exact\n path=\"/forgotpassword\"\n component={ForgotPasswordScreen}\n />\n <Route\n exact\n path=\"/passwordreset/:resetToken\"\n component={ResetPasswordScreen}\n />\n </Routes>\n </div>\n </Router>\n );\n};\n\nexport default App;\n\nThis is the Code I have used with React-Router-Dom Version 5 I think problem with the Version 6\n", "If nothing is working for you do this\nchange the version but first erase the react-router-dom:6.something dependency from package.json folder and then\n1). Uninstall Previous Version-\nnpm uninstall react-router-dom\n2). Install Older version-\nnpm install [email protected]\n", "One possible error might be with\n\n<BrowserRouter>\n\nYou might have included this element in your App.js and in your Index.js both.\nYou can check and keep this element in any one file so nesting of duplicate routes will be removed.\nHope it helps!\n", "after degrading react-router-dom@6 to react-router-dom@5 useNavigate is not working.\nuseNavigate only works in react-router-dom@6\n" ]
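A recurring cause of this error, beyond the index.js/App.js split shown above, is a screen component that wraps its own JSX in a second router. A minimal before/after sketch - the component name comes from the question, while the nested-router body is a hypothetical reconstruction of the kind of code that triggers the message:

    import { Link } from "react-router-dom";

    // BEFORE - a second router nested under the app-level <BrowserRouter>
    // raises exactly this error:
    //
    //   const LoginScreen = () => (
    //     <BrowserRouter>
    //       <Link to="/register">Register</Link>
    //     </BrowserRouter>
    //   );

    // AFTER - components below the single top-level router use the routing
    // primitives directly; the router context is already available here.
    const LoginScreen = () => (
      <div>
        <input placeholder="username" />
        <Link to="/register">Register</Link>
      </div>
    );

    export default LoginScreen;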
[ 23, 6, 1, 1, 1, 0, 0, 0, 0 ]
[ "We found that this problem happens when we use React Router version 6. In this version, nested routers are not supported. So, we uninstalled version 6 and installed version 5 to solve this issue quickly.\nUninstall version 6:\nnpm uninstall react-router-dom\n\nInstall version 5:\nnpm install [email protected]\n\n" ]
[ -2 ]
[ "react_router_dom", "reactjs", "version" ]
stackoverflow_0069828516_react_router_dom_reactjs_version.txt
Q: How to make String.Split() work on the new line? string candidates; string[] candidatesSplit = { }; string line; int countLines = 0; StreamReader sr = new StreamReader("..\\..\\..\\candidates.txt"); // Read candidates from file candidates = sr.ReadToEnd(); sr.Close(); candidatesSplit = candidates.Split(','); // Split the file with ',' Console.WriteLine(candidatesSplit[30]); Using this code, I wanted to split every ',' and get specific words out from my text file. My candidates file looks like this: 100,Esra Tarak,90,D1,D4,D2,A,B,D,C, ,C,A,D,B,C,D,B,A, ,B,A,C,D,C,D,A,D,B,C,D 101,Cem Ak,84,D1,D5, ,A,C,D,C,C,C,A,C,B,C,D,B,A,C,B,A,C,D,C,C,A,D,B,C,D Code works perfectly for the first line in candidates.txt, however when it comes to the second line on the text file, the output comes out like this: D 101 I need it to show only like this 101 I cannot put a ',' at the end of my lines. Is there any way to fix this? A: Just Split(Environment.NewLine) on the entire input first and then Split(',') again on each line. using var sr = new StreamReader("..\\..\\..\\candidates.txt"); var candidates = sr.ReadToEnd(); foreach (var line in candidates.Split(Environment.NewLine)) { var candidatesSplit = line.Split(','); Console.WriteLine(candidatesSplit[30]); }
How to make String.Split() work on the new line?
string candidates; string[] candidatesSplit = { }; string line; int countLines = 0; StreamReader sr = new StreamReader("..\\..\\..\\candidates.txt"); // Read candidates from file candidates = sr.ReadToEnd(); sr.Close(); candidatesSplit = candidates.Split(','); // Split the file with ',' Console.WriteLine(candidatesSplit[30]); Using this code, I wanted to split every ',' and get specific words out from my text file. My candidates file looks like this: 100,Esra Tarak,90,D1,D4,D2,A,B,D,C, ,C,A,D,B,C,D,B,A, ,B,A,C,D,C,D,A,D,B,C,D 101,Cem Ak,84,D1,D5, ,A,C,D,C,C,C,A,C,B,C,D,B,A,C,B,A,C,D,C,C,A,D,B,C,D Code works perfectly for the first line in candidates.txt, however when it comes to the second line on the text file, the output comes out like this: D 101 I need it to show only like this 101 I cannot put a ',' at the end of my lines. Is there any way to fix this?
[ "Just Split(Environment.NewLine) on the entire input first and then Split(',') again on each line.\nusing var sr = new StreamReader(\"..\\\\..\\\\..\\\\candidates.txt\");\nvar candidates = sr.ReadToEnd();\n\nforeach (var line in candidates.Split(Environment.NewLine))\n{\n var candidatesSplit = line.Split(',');\n Console.WriteLine(candidatesSplit[30]);\n}\n\n" ]
[ 1 ]
[]
[]
[ "c#" ]
stackoverflow_0074667381_c#.txt
Q: why is my user_type not working and if i log in i get a white screen? so im trying to rederict a user if their role is either admin or user but for some reasson if i go to the login process it gives me a white screen. <?php session_start(); if(isset($_POST['uname']) && isset($_POST['pass'])){ include "../db_conn.php"; $uname = $_POST['uname']; $pass = $_POST['pass']; $select = $conn->prepare("SELECT * FROM users WHERE username = ? AND password = ?"); $data = "uname=".$uname; if(empty($uname)){ $em = "User name is required"; header("Location: ../login.php?error=$em&$data"); exit; }else if(empty($pass)){ $em = "Password is required"; header("Location: ../login.php?error=$em&$data"); exit; }else { $sql = "SELECT * FROM users WHERE username = ?"; $stmt = $conn->prepare($sql); $stmt->execute([$uname]); if($stmt->rowCount() == 1){ $user = $stmt->fetch(); $username = $user['username']; $password = $user['password']; $fname = $user['fname']; $id = $user['id']; $pp = $user['pp']; if($username === $uname){ if(password_verify($pass, $password)){ $_SESSION['id'] = $id; $_SESSION['fname'] = $fname; $_SESSION['pp'] = $pp; if ($select->rowCount() > 0) { $row = $select->fetch(); if ($row['user_type'] == 'admin') { header("location: ../home.php"); } elseif ($row['user_type'] == 'user') { header("location: ../home.php"); } } exit; }else { $em = "Incorect User name or password"; header("Location: ../login.php?error=$em&$data"); exit; } }else { $em = "Incorect User name or password"; header("Location: ../login.php?error=$em&$data"); exit; } }else { $em = "Incorect User name or password"; header("Location: ../login.php?error=$em&$data"); exit; } } }else { header("Location: ../login.php?error=error"); exit; } i tried to before the user logged in trying to go through my if, ifelse statement to redirect the user to either admin page or user page but when i try that it gives me a white page instead. A: There are a few issues with the code that may be causing the white screen. First, the SQL query for selecting the user is using single quotes instead of backticks for the table name. This will cause a syntax error and prevent the query from running. The correct syntax would be: $select = $conn->prepare("SELECT * FROM users WHERE username = ? AND password = ?"); Secondly, the $select query is not being executed with the provided parameters, so it will always return 0 rows. This means that the if statement checking for the user type will never be reached, and no redirection will occur. To fix this, the $select query should be executed with the appropriate parameters, like so: $select->execute([$uname, $pass]); Thirdly, the code is checking for the existence of the 'user_tpye' column in the users table, but the actual column name is 'user_type'. This will cause the code to fail when checking for the user type. To fix this, the code should be checking for the correct column name, like so: if ($select->rowCount() > 0) { $row = $select->fetch(); if ($row['user_type'] == 'admin') { // redirect to admin page } elseif ($row['user_type'] == 'user') { // redirect to user page } } Finally, the code is using 'locaction' instead of 'location' in the header() function call for redirecting to the user page. This will cause the redirection to fail, and the user will see a white screen. 
To fix this, the code should use the correct spelling for the location parameter, like so: header('Location: user_page.php'); After making these changes, the code should be able to properly redirect the user to the appropriate page based on their user type.
Why is my user_type check not working, and why do I get a white screen when I log in?
so im trying to rederict a user if their role is either admin or user but for some reasson if i go to the login process it gives me a white screen. <?php session_start(); if(isset($_POST['uname']) && isset($_POST['pass'])){ include "../db_conn.php"; $uname = $_POST['uname']; $pass = $_POST['pass']; $select = $conn->prepare("SELECT * FROM users WHERE username = ? AND password = ?"); $data = "uname=".$uname; if(empty($uname)){ $em = "User name is required"; header("Location: ../login.php?error=$em&$data"); exit; }else if(empty($pass)){ $em = "Password is required"; header("Location: ../login.php?error=$em&$data"); exit; }else { $sql = "SELECT * FROM users WHERE username = ?"; $stmt = $conn->prepare($sql); $stmt->execute([$uname]); if($stmt->rowCount() == 1){ $user = $stmt->fetch(); $username = $user['username']; $password = $user['password']; $fname = $user['fname']; $id = $user['id']; $pp = $user['pp']; if($username === $uname){ if(password_verify($pass, $password)){ $_SESSION['id'] = $id; $_SESSION['fname'] = $fname; $_SESSION['pp'] = $pp; if ($select->rowCount() > 0) { $row = $select->fetch(); if ($row['user_type'] == 'admin') { header("location: ../home.php"); } elseif ($row['user_type'] == 'user') { header("location: ../home.php"); } } exit; }else { $em = "Incorect User name or password"; header("Location: ../login.php?error=$em&$data"); exit; } }else { $em = "Incorect User name or password"; header("Location: ../login.php?error=$em&$data"); exit; } }else { $em = "Incorect User name or password"; header("Location: ../login.php?error=$em&$data"); exit; } } }else { header("Location: ../login.php?error=error"); exit; } i tried to before the user logged in trying to go through my if, ifelse statement to redirect the user to either admin page or user page but when i try that it gives me a white page instead.
[ "There are a few issues with the code that may be causing the white screen.\nFirst, the SQL query for selecting the user is using single quotes instead of backticks for the table name. This will cause a syntax error and prevent the query from running. The correct syntax would be:\n$select = $conn->prepare(\"SELECT * FROM users WHERE username = ? AND password = ?\");\n\nSecondly, the $select query is not being executed with the provided parameters, so it will always return 0 rows. This means that the if statement checking for the user type will never be reached, and no redirection will occur.\nTo fix this, the $select query should be executed with the appropriate parameters, like so:\n$select->execute([$uname, $pass]);\n\nThirdly, the code is checking for the existence of the 'user_tpye' column in the users table, but the actual column name is 'user_type'. This will cause the code to fail when checking for the user type.\nTo fix this, the code should be checking for the correct column name, like so:\nif ($select->rowCount() > 0) {\n $row = $select->fetch();\n if ($row['user_type'] == 'admin') {\n // redirect to admin page\n } elseif ($row['user_type'] == 'user') {\n // redirect to user page\n }\n}\n\nFinally, the code is using 'locaction' instead of 'location' in the header() function call for redirecting to the user page. This will cause the redirection to fail, and the user will see a white screen.\nTo fix this, the code should use the correct spelling for the location parameter, like so:\nheader('Location: user_page.php');\n\nAfter making these changes, the code should be able to properly redirect the user to the appropriate page based on their user type.\n" ]
[ 0 ]
[]
[]
[ "html", "mysql", "php" ]
stackoverflow_0074666947_html_mysql_php.txt
Q: Fix the upstream dependency conflict, React Native Expo I have a React Native Expo application, and I am trying to install expo-splash-screen with running npx expo install expo-splash-screen As a result I get the following error message: npm ERR! code ERESOLVE npm ERR! ERESOLVE could not resolve npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/react-native-gesture-handler npm ERR! peer react-native-gesture-handler@">= 1.0.0" from @react-navigation/[email protected] npm ERR! node_modules/@react-navigation/drawer npm ERR! @react-navigation/drawer@"^6.5.0" from the root project npm ERR! peer react-native-gesture-handler@">= 1.5.0" from [email protected] npm ERR! node_modules/react-navigation-stack npm ERR! react-navigation-stack@"^2.10.4" from the root project npm ERR! 1 more (the root project) npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer react-native-gesture-handler@"^1.0.12" from [email protected] npm ERR! node_modules/react-navigation-drawer npm ERR! react-navigation-drawer@"^2.7.2" from the root project npm ERR! npm ERR! Conflicting peer dependency: [email protected] npm ERR! node_modules/react-native-gesture-handler npm ERR! peer react-native-gesture-handler@"^1.0.12" from [email protected] npm ERR! node_modules/react-navigation-drawer npm ERR! react-navigation-drawer@"^2.7.2" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. While running npm -v react-native-gessture-handler it says: 8.15.0. I have also tried using npm i react-native-splash-screen --force but it is not good for an expo application, and I am pretty sure that installing react-navigation-drawer with --force resulted the problem. I am not really experienced with package managing so I do not really get the point of the error message. How can I install expo-splash-screen, and can somebody explain the error message? A: It looks you have libraries that use different versions of react-native-gesture-handler. react-navigation-drawer is using and older react-native-gesture-handler version and this package is deprecated so you cannot update it to use a higher version of react-native-gesture-handler. The best solution in my opinion is to change the package react-navigation-drawer to @react-navigation/drawer as it says in the documentation of the package. The other alternative but at your own risk is to specify in the package.json a resolution with the react-native-gesture-handler you need in expo-splash-screen and every library that have as peer dependecy react-native-gesture-handler will use this specific version. "resolutions": { "react-native-gesture-handler": "x.x.x" } Let me know if it helps
Fix the upstream dependency conflict, React Native Expo
I have a React Native Expo application, and I am trying to install expo-splash-screen with running npx expo install expo-splash-screen As a result I get the following error message: npm ERR! code ERESOLVE npm ERR! ERESOLVE could not resolve npm ERR! npm ERR! While resolving: [email protected] npm ERR! Found: [email protected] npm ERR! node_modules/react-native-gesture-handler npm ERR! peer react-native-gesture-handler@">= 1.0.0" from @react-navigation/[email protected] npm ERR! node_modules/@react-navigation/drawer npm ERR! @react-navigation/drawer@"^6.5.0" from the root project npm ERR! peer react-native-gesture-handler@">= 1.5.0" from [email protected] npm ERR! node_modules/react-navigation-stack npm ERR! react-navigation-stack@"^2.10.4" from the root project npm ERR! 1 more (the root project) npm ERR! npm ERR! Could not resolve dependency: npm ERR! peer react-native-gesture-handler@"^1.0.12" from [email protected] npm ERR! node_modules/react-navigation-drawer npm ERR! react-navigation-drawer@"^2.7.2" from the root project npm ERR! npm ERR! Conflicting peer dependency: [email protected] npm ERR! node_modules/react-native-gesture-handler npm ERR! peer react-native-gesture-handler@"^1.0.12" from [email protected] npm ERR! node_modules/react-navigation-drawer npm ERR! react-navigation-drawer@"^2.7.2" from the root project npm ERR! npm ERR! Fix the upstream dependency conflict, or retry npm ERR! this command with --force, or --legacy-peer-deps npm ERR! to accept an incorrect (and potentially broken) dependency resolution. While running npm -v react-native-gessture-handler it says: 8.15.0. I have also tried using npm i react-native-splash-screen --force but it is not good for an expo application, and I am pretty sure that installing react-navigation-drawer with --force resulted the problem. I am not really experienced with package managing so I do not really get the point of the error message. How can I install expo-splash-screen, and can somebody explain the error message?
[ "It looks you have libraries that use different versions of react-native-gesture-handler. react-navigation-drawer is using and older react-native-gesture-handler version and this package is deprecated so you cannot update it to use a higher version of react-native-gesture-handler.\nThe best solution in my opinion is to change the package react-navigation-drawer to @react-navigation/drawer as it says in the documentation of the package.\nThe other alternative but at your own risk is to specify in the package.json a resolution with the react-native-gesture-handler you need in expo-splash-screen and every library that have as peer dependecy react-native-gesture-handler will use this specific version.\n\n\n \"resolutions\": {\n \"react-native-gesture-handler\": \"x.x.x\"\n }\n\n\n\nLet me know if it helps\n" ]
[ 0 ]
[]
[]
[ "dependency_management", "expo", "expo_splash_screen", "react_native", "react_native_gesture_handler" ]
stackoverflow_0074666704_dependency_management_expo_expo_splash_screen_react_native_react_native_gesture_handler.txt
Q: Returning values from TextInputs in Kivy does anyone know how to return the string of a textinput in a kivy widget? The textinput is created inside the kv.file. <OrderScreen>: BoxLayout: TextInput: size_hint: (.2, None) pos_hint: {"center_y":0.5} height: 30 width: 100 hint_text: "Food" multiline: False id: input A: Yes, you can return the string of a TextInput widget in Kivy by using the text property of the TextInput widget. For example: textinput = self.ids['my_textinput'] textinput_string = textinput.text Here is an example of a Kivy TextInput widget: TextInput: id: my_textinput multiline: False font_size: 20 size_hint: .5, .2 pos_hint: {'center_x': .5, 'center_y': .5}
Returning values from TextInputs in Kivy
Does anyone know how to return the string of a TextInput in a Kivy widget? The TextInput is created inside the kv file. <OrderScreen>: BoxLayout: TextInput: size_hint: (.2, None) pos_hint: {"center_y":0.5} height: 30 width: 100 hint_text: "Food" multiline: False id: input
[ "Yes, you can return the string of a TextInput widget in Kivy by using the text property of the TextInput widget. For example:\ntextinput = self.ids['my_textinput']\ntextinput_string = textinput.text\n\nHere is an example of a Kivy TextInput widget:\nTextInput:\n id: my_textinput\n multiline: False\n font_size: 20\n size_hint: .5, .2\n pos_hint: {'center_x': .5, 'center_y': .5}\n\n" ]
[ 0 ]
[]
[]
[ "kivy", "python", "textinput" ]
stackoverflow_0074667394_kivy_python_textinput.txt
Q: Get the week numbers between two dates with python I'd like to find the most pythonic way to output a list of the week numbers between two dates. For example: input start = datetime.date(2011, 12, 25) end = datetime.date(2012, 1, 21) output find_weeks(start, end) >> [201152, 201201, 201202, 201203] I've been struggling using the datetime library with little success A: Something in the lines of (update: removed less-readable option) import datetime def find_weeks(start,end): l = [] for i in range((end-start).days + 1): d = (start+datetime.timedelta(days=i)).isocalendar()[:2] # e.g. (2011, 52) yearweek = '{}{:02}'.format(*d) # e.g. "201152" l.append(yearweek) return sorted(set(l)) start = datetime.date(2011, 12, 25) end = datetime.date(2012, 1, 21) print(find_weeks(start,end)[1:]) # [1:] to exclude first week. Returns ['201152', '201201', '201202', '201203'] To include the first week (201151) simply remove [1:] after function call A: .isocalendar() is your friend here - it returns a tuple of (year, week of year, day of week). We use that to reset the start date to the start of th eweek, and then add on a week each time until we pass the end date: import datetime def find_weeks(start_date, end_date): subtract_days = start_date.isocalendar()[2] - 1 current_date = start_date + datetime.timedelta(days=7-subtract_days) weeks_between = [] while current_date <= end_date: weeks_between.append( '{}{:02d}'.format(*current_date.isocalendar()[:2]) ) current_date += datetime.timedelta(days=7) return weeks_between start = datetime.date(2011, 12, 25) end = datetime.date(2012, 1, 21) print(find_weeks(start, end)) This prints ['201152', '201201', '201202', '201203'] A: Using Pandas import pandas as pd dates=pd.date_range(start=start,end=end,freq='W') date_index=dates.year.astype(str)+dates.weekofyear.astype(str).str.zfill(2) date_index.tolist() A: I suggest you the following easy-to-read solution: import datetime start = datetime.date(2011, 12, 25) end = datetime.date(2012, 1, 21) def find_weeks(start, end): l = [] while (start.isocalendar()[1] != end.isocalendar()[1]) or (start.year != end.year): l.append(start.isocalendar()[1] + 100*start.year) start += datetime.timedelta(7) l.append(start.isocalendar()[1] + 100*start.year) return (l[1:]) print(find_weeks(start, end)) >> [201252, 201201, 201202, 201203] A: I prefer the arrow style solution here (might need pip install arrow): import arrow start = arrow.get('2011-12-25') end = arrow.get('2012-01-21') weeks = list(arrow.Arrow.span_range('week', start, end)) result looks like this: >> from pprint import pprint >> pprint(weeks[1:]) [(<Arrow [2011-12-19T00:00:00+00:00]>, <Arrow [2011-12-25T23:59:59.999999+00:00]>), (<Arrow [2011-12-26T00:00:00+00:00]>, <Arrow [2012-01-01T23:59:59.999999+00:00]>), (<Arrow [2012-01-02T00:00:00+00:00]>, <Arrow [2012-01-08T23:59:59.999999+00:00]>), (<Arrow [2012-01-09T00:00:00+00:00]>, <Arrow [2012-01-15T23:59:59.999999+00:00]>), (<Arrow [2012-01-16T00:00:00+00:00]>, <Arrow [2012-01-22T23:59:59.999999+00:00]>)] from there you can change the output to match the year and week number.
Get the week numbers between two dates with python
I'd like to find the most pythonic way to output a list of the week numbers between two dates. For example: input start = datetime.date(2011, 12, 25) end = datetime.date(2012, 1, 21) output find_weeks(start, end) >> [201152, 201201, 201202, 201203] I've been struggling using the datetime library with little success
[ "Something in the lines of (update: removed less-readable option)\nimport datetime\n\ndef find_weeks(start,end):\n l = []\n for i in range((end-start).days + 1):\n d = (start+datetime.timedelta(days=i)).isocalendar()[:2] # e.g. (2011, 52)\n yearweek = '{}{:02}'.format(*d) # e.g. \"201152\"\n l.append(yearweek)\n return sorted(set(l))\n\nstart = datetime.date(2011, 12, 25) \nend = datetime.date(2012, 1, 21)\n\nprint(find_weeks(start,end)[1:]) # [1:] to exclude first week.\n\nReturns\n['201152', '201201', '201202', '201203']\n\nTo include the first week (201151) simply remove [1:] after function call\n", ".isocalendar() is your friend here - it returns a tuple of (year, week of year, day of week). We use that to reset the start date to the start of th eweek, and then add on a week each time until we pass the end date:\nimport datetime\n\n\ndef find_weeks(start_date, end_date):\n subtract_days = start_date.isocalendar()[2] - 1\n current_date = start_date + datetime.timedelta(days=7-subtract_days)\n weeks_between = []\n while current_date <= end_date:\n weeks_between.append(\n '{}{:02d}'.format(*current_date.isocalendar()[:2])\n )\n current_date += datetime.timedelta(days=7)\n return weeks_between\n\nstart = datetime.date(2011, 12, 25)\nend = datetime.date(2012, 1, 21)\n\nprint(find_weeks(start, end))\n\nThis prints\n['201152', '201201', '201202', '201203']\n\n", "Using Pandas\nimport pandas as pd\n\ndates=pd.date_range(start=start,end=end,freq='W')\ndate_index=dates.year.astype(str)+dates.weekofyear.astype(str).str.zfill(2)\ndate_index.tolist()\n\n", "I suggest you the following easy-to-read solution: \nimport datetime\n\nstart = datetime.date(2011, 12, 25) \nend = datetime.date(2012, 1, 21)\n\ndef find_weeks(start, end):\n l = []\n while (start.isocalendar()[1] != end.isocalendar()[1]) or (start.year != end.year):\n l.append(start.isocalendar()[1] + 100*start.year)\n start += datetime.timedelta(7)\n l.append(start.isocalendar()[1] + 100*start.year)\n return (l[1:])\n\n\nprint(find_weeks(start, end))\n\n>> [201252, 201201, 201202, 201203]\n\n", "I prefer the arrow style solution here (might need pip install arrow):\nimport arrow\n\nstart = arrow.get('2011-12-25')\nend = arrow.get('2012-01-21')\nweeks = list(arrow.Arrow.span_range('week', start, end))\n\nresult looks like this:\n>> from pprint import pprint\n>> pprint(weeks[1:])\n[(<Arrow [2011-12-19T00:00:00+00:00]>,\n <Arrow [2011-12-25T23:59:59.999999+00:00]>),\n (<Arrow [2011-12-26T00:00:00+00:00]>,\n <Arrow [2012-01-01T23:59:59.999999+00:00]>),\n (<Arrow [2012-01-02T00:00:00+00:00]>,\n <Arrow [2012-01-08T23:59:59.999999+00:00]>),\n (<Arrow [2012-01-09T00:00:00+00:00]>,\n <Arrow [2012-01-15T23:59:59.999999+00:00]>),\n (<Arrow [2012-01-16T00:00:00+00:00]>,\n <Arrow [2012-01-22T23:59:59.999999+00:00]>)]\n\nfrom there you can change the output to match the year and week number.\n" ]
[ 6, 3, 3, 0, 0 ]
[]
[]
[ "datetime", "python", "rrule", "timedelta" ]
stackoverflow_0048927466_datetime_python_rrule_timedelta.txt
Q: Why is my implementation of CSP returning errors? I'm trying to add CSP to a page of a PHP application but I'm getting the error The Content-Security-Policy directive name 'script‑src' contains one or more invalid characters. Only ASCII alphanumeric characters or dashes '-' are allowed in directive names. This is how I'm trying to implement CSP <?php // forum.php: Forum... header("Content-Security-Policy: script‑src self;default‑src self;media‑src none;img‑src self;"); ?> <h1>Welcome to your forum</h1> <h2>Post a message bellow:</h2> <form name=posttoforumform method=POST action="<?php echo $_SERVER['SCRIPT_NAME'] . "?" . $_SERVER['QUERY_STRING']?>"> <!--<p><input type="text" name="user_name" size="20"></p> --> <p><textarea rows="5" cols="80" name="input_from_form" size="20"></textarea></p> <p><input type="submit" value="Submit" name="Submit_button" content="text/html; charset=utf-8"></p> </form> <?php // Grab inputs $inputfromform = mysql_real_escape_string($_REQUEST["input_from_form"]); $showonlyuser = $_REQUEST["show_only_user"]; if ($inputfromform <> "") { //$pattern = "<script>"; //if(!preg_match($pattern, $inputfromform)){ $query = "INSERT INTO forum_table(poster_name, comment, date) VALUES ('". $user->username . "', '". $inputfromform . "', " . " now() )"; $result = execute_query($query); /*}else{ echo '<script>alert("Nice try! Your Cross XSS has not been succcessfully delivered :P")</script>'; }*/ } ?> I also tried to implement CSP by using .httaccess file like this Header set Content-Security-Policy " default-src 'self'; script-src 'self'; img-src 'self'; " But I get this error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Why is this happening? A: There are many variations of very similar characters that look like dash (em dash, en dash, minus sign, hyphen etc.). The error message states "Only ASCII alphanumeric characters or dashes '-' are allowed in directive names." Now take the - character from the error message and search for it on this page and you will see that in your line header("Content-Security-Policy: script‑src self;default‑src self;media‑src none;img‑src self;"); only the dashes in "Content-Security-Policy" match, the ones you use in the directives are charecters that look the same. Such changes to characters often happens when word processors handle dashes. If your code has been copied to e.g. MS Word and back your dashes are no longer the same. You also seem to define two Content-Security-Policies. This means that everything need to pass both policies, another policy can only make it stricter. A: A good way would be to use a class to generate the proper CSP headers. Look at this example
Why is my implementation of CSP returning errors?
I'm trying to add CSP to a page of a PHP application but I'm getting the error The Content-Security-Policy directive name 'script‑src' contains one or more invalid characters. Only ASCII alphanumeric characters or dashes '-' are allowed in directive names. This is how I'm trying to implement CSP <?php // forum.php: Forum... header("Content-Security-Policy: script‑src self;default‑src self;media‑src none;img‑src self;"); ?> <h1>Welcome to your forum</h1> <h2>Post a message bellow:</h2> <form name=posttoforumform method=POST action="<?php echo $_SERVER['SCRIPT_NAME'] . "?" . $_SERVER['QUERY_STRING']?>"> <!--<p><input type="text" name="user_name" size="20"></p> --> <p><textarea rows="5" cols="80" name="input_from_form" size="20"></textarea></p> <p><input type="submit" value="Submit" name="Submit_button" content="text/html; charset=utf-8"></p> </form> <?php // Grab inputs $inputfromform = mysql_real_escape_string($_REQUEST["input_from_form"]); $showonlyuser = $_REQUEST["show_only_user"]; if ($inputfromform <> "") { //$pattern = "<script>"; //if(!preg_match($pattern, $inputfromform)){ $query = "INSERT INTO forum_table(poster_name, comment, date) VALUES ('". $user->username . "', '". $inputfromform . "', " . " now() )"; $result = execute_query($query); /*}else{ echo '<script>alert("Nice try! Your Cross XSS has not been succcessfully delivered :P")</script>'; }*/ } ?> I also tried to implement CSP by using .httaccess file like this Header set Content-Security-Policy " default-src 'self'; script-src 'self'; img-src 'self'; " But I get this error The server encountered an internal error or misconfiguration and was unable to complete your request. Please contact the server administrator, webmaster@localhost and inform them of the time the error occurred, and anything you might have done that may have caused the error. More information about this error may be available in the server error log. Why is this happening?
[ "There are many variations of very similar characters that look like dash (em dash, en dash, minus sign, hyphen etc.). The error message states \"Only ASCII alphanumeric characters or dashes '-' are allowed in directive names.\" Now take the - character from the error message and search for it on this page and you will see that in your line\nheader(\"Content-Security-Policy: script‑src self;default‑src self;media‑src none;img‑src self;\");\n\nonly the dashes in \"Content-Security-Policy\" match, the ones you use in the directives are charecters that look the same. Such changes to characters often happens when word processors handle dashes. If your code has been copied to e.g. MS Word and back your dashes are no longer the same.\nYou also seem to define two Content-Security-Policies. This means that everything need to pass both policies, another policy can only make it stricter.\n", "A good way would be to use a class to generate the proper CSP headers. Look at this example\n" ]
[ 2, 0 ]
[]
[]
[ "content_security_policy", "header", "php", "security", "web" ]
stackoverflow_0072325719_content_security_policy_header_php_security_web.txt
Q: ROS2 - check if a node is still alive This question is similar to ROS - check if a node is still alive (@Potaito), but now this question is for ROS 2 (Foxy to be precise). To explain my problem in short. I'm trying to check if a client of another publisher node is still alive. rclcpp_lifecycle could be an interesting way of tackling this problem, but it is made for when nodes 'properly' shutdown or change state. I am seeking for a robuster way, e.g. in case a node just crashes. So if when a node crashes, the publisher knows, and can, for example, stop publishing. I have found that the bondcpp library, which is updated to ROS 2, does not function properly. Also, just checking via the function std::vector<std::string> rclcpp::Node::get_node_names() const if the node still is alive can work, but is just too slow. It takes a while before a dead node gets removed from the node list (~10-20sec). That is why I am seeking an alternative. Is there any other solution to this problem? A: This capability doesn't exist natively in the ROS2 stack. You need some sort of watchdog, and you can definitely implement your own solution without too much work. bondcpp should work for this. An example project that uses bond pervasively is Nav2. However, bondcpp can also incur a lot of network overhead. You may also look at implementing your own "heartbeat" topic, using "Manual by topic" Liveliness QoS, and then asserting liveliness inside the alive node on a timer. Then have watchers subscribe to the "liveliness lost" QoS event to perform actions if the other node dies or locks up. You also may want to take a look at software_watchdogs
ROS2 - check if a node is still alive
This question is similar to ROS - check if a node is still alive (@Potaito), but now this question is for ROS 2 (Foxy, to be precise). To explain my problem in short: I'm trying to check if a client of another publisher node is still alive. rclcpp_lifecycle could be an interesting way of tackling this problem, but it is made for when nodes 'properly' shut down or change state. I am seeking a more robust way, e.g. for the case where a node just crashes, so that when a node crashes, the publisher knows and can, for example, stop publishing. I have found that the bondcpp library, which has been updated to ROS 2, does not function properly. Also, just checking via the function std::vector<std::string> rclcpp::Node::get_node_names() const whether the node is still alive can work, but it is just too slow. It takes a while before a dead node gets removed from the node list (~10-20 sec). That is why I am seeking an alternative. Is there any other solution to this problem?
[ "This capability doesn't exist natively in the ROS2 stack. You need some sort of watchdog, and you can definitely implement your own solution without too much work.\nbondcpp should work for this. An example project that uses bond pervasively is Nav2.\nHowever, bondcpp can also incur a lot of network overhead.\nYou may also look at implementing your own \"heartbeat\" topic, using \"Manual by topic\" Liveliness QoS, and then asserting liveliness inside the alive node on a timer. Then have watchers subscribe to the \"liveliness lost\" QoS event to perform actions if the other node dies or locks up.\nYou also may want to take a look at software_watchdogs\n" ]
[ 0 ]
[]
[]
[ "c++", "communication", "nodes", "ros2" ]
stackoverflow_0070166829_c++_communication_nodes_ros2.txt
Q: Should security concerns be present in the domain model? I'm working on a Winforms project (.NET 4) that is based loosely on MVVM. For security, the application authenticates against Active Directory and then uses role based security to determine access permissions to different parts of the program. Security is implemented with the PrincipalPermissionAttribute in most places, like so: <PrincipalPermissionAttribute(SecurityAction.Demand, Role:="Managers")> _ Public Sub Save() Implements IProductsViewModel.Save mUOW.Commit() End Sub As you can probably tell from the Interface implementation, this specific Sub is in a ViewModel. The PrincipalPermissionAttribute is checking to see if the current user (Thread.CurrentPrincipal) is in the Manager role. Which leads to my question: Should security checks (such as above) be done in the Domain Model? I have two conflicting views when thinking about it myself: 1) Keep the domain model ignorant of as many other concerns as you can to reduce complexity and dependency. (Keep security out, perhaps implemented in ViewModel). 2) The domain model is, in a way, the place where "the buck stops here." If I implement security in the domain model, then I know that even if security in another layer fails, the domain model should catch it. So, what say ye, security in the domain model or not? A: There are 2 kinds of security. One which is purely technical - something like "all traffic should go through https" or "only specific service should touch database" or "only specific process should be able to touch file system/registry". Second which is tied up with domain. Something like "only user with role Secretary can access payment history" or "unauthorized users should not be able to access accounting stuff". First one should be decoupled from domain. Second one should live inside domain. A: Personally, I find this concern seems to belong in the service layer. Presumably, the application will persist through the service layer to one degree or another in order to reach the domain, and you could easily have a non-domain service to verify the user's role prior to the commit. The reason that I would do it in this fashion is based on the theory that the closer you get to the core of the domain, the more expensive the call stack has become. Preventing abuse / misuse of the domain at a higher level means better responsiveness and cohesiveness. Additionally, assume that the requirement changes, whereas someone in another role can now do the same operation. Maintaining these all in the service layer means that you are also not changing code that should be churning less often. At least in what I have done, the positive takeaway from this is that the closer to the core you get, the less likely code is to change. This means that you also reduce the change of your change to "ripple" to other areas that you did not intend. On a broader concern, and nothing personal, I do not like the idea of putting data access of any sort in the ViewModel. The ViewModel is intended to be a representation of the model, specific to an implementation. These objects should be, ideally, as lightweight as possible. If a change is made to a product, for example, the change would go through the service, then to the repository, where it could be registered with the unit of work, awaiting to be committed. A: Great answers here. I guess it depends on the level of granularity your permission system has. 
Let's say we are designing a "Task Management System" and we want users to limit visibility on a "Project" level. We could design one generic "can-read" role or permission which is addressed to all projects. We could go deeper and implement a DAC-like system, where we want to answer the question "can this user read this particular object". This means that each "Project" has its own "Read-Permission", therefore not allowing us to implement it anywhere other than at the domain level (or very close above it)
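A minimal sketch of the second kind of security living inside the domain, in Python for illustration only (the question itself is .NET/VB, so treat this as concept, not the asker's code); User, Project, and NotAuthorized are hypothetical names:

from dataclasses import dataclass, field

class NotAuthorized(Exception):
    """Raised when a domain rule denies an operation."""

@dataclass
class User:
    name: str
    roles: set = field(default_factory=set)

@dataclass
class Project:
    title: str
    readers: set = field(default_factory=set)  # per-object "Read-Permission"

    def read_by(self, user: User) -> str:
        # The check lives inside the domain object itself, so no outer layer
        # (view model, service) can reach the data while skipping the rule.
        if "Manager" not in user.roles and user.name not in self.readers:
            raise NotAuthorized(f"{user.name} may not read {self.title!r}")
        return self.title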
Should security concerns be present in the domain model?
I'm working on a Winforms project (.NET 4) that is based loosely on MVVM. For security, the application authenticates against Active Directory and then uses role based security to determine access permissions to different parts of the program. Security is implemented with the PrincipalPermissionAttribute in most places, like so: <PrincipalPermissionAttribute(SecurityAction.Demand, Role:="Managers")> _ Public Sub Save() Implements IProductsViewModel.Save mUOW.Commit() End Sub As you can probably tell from the Interface implementation, this specific Sub is in a ViewModel. The PrincipalPermissionAttribute is checking to see if the current user (Thread.CurrentPrincipal) is in the Manager role. Which leads to my question: Should security checks (such as above) be done in the Domain Model? I have two conflicting views when thinking about it myself: 1) Keep the domain model ignorant of as many other concerns as you can to reduce complexity and dependency. (Keep security out, perhaps implemented in ViewModel). 2) The domain model is, in a way, the place where "the buck stops here." If I implement security in the domain model, then I know that even if security in another layer fails, the domain model should catch it. So, what say ye, security in the domain model or not?
[ "There are 2 kinds of security.\nOne which is purely technical - something like \"all traffic should go through https\" or \"only specific service should touch database\" or \"only specific process should be able to touch file system/registry\".\nSecond which is tied up with domain. Something like \"only user with role Secretary can access payment history\" or \"unauthorized users should not be able to access accounting stuff\".\nFirst one should be decoupled from domain. Second one should live inside domain.\n", "Personally, I find this concern seems to belong in the service layer. Presumably, the application will persist through the service layer to one degree or another in order to reach the domain, and you could easily have a non-domain service to verify the user's role prior to the commit.\nThe reason that I would do it in this fashion is based on the theory that the closer you get to the core of the domain, the more expensive the call stack has become. Preventing abuse / misuse of the domain at a higher level means better responsiveness and cohesiveness.\nAdditionally, assume that the requirement changes, whereas someone in another role can now do the same operation. Maintaining these all in the service layer means that you are also not changing code that should be churning less often. At least in what I have done, the positive takeaway from this is that the closer to the core you get, the less likely code is to change. This means that you also reduce the change of your change to \"ripple\" to other areas that you did not intend.\nOn a broader concern, and nothing personal, I do not like the idea of putting data access of any sort in the ViewModel. The ViewModel is intended to be a representation of the model, specific to an implementation. These objects should be, ideally, as lightweight as possible. If a change is made to a product, for example, the change would go through the service, then to the repository, where it could be registered with the unit of work, awaiting to be committed.\n", "Great answers here. I guess it depends on the level of granularity your permission system has. Let's say we are designing a \"Task Management System\" and we want users to limit visiblity on a \"Project\" level.\n\nWe could design one generic \"can-read\" role or permission which is addressed to all projects.\nWe could go deeper and implement a DAC-like system, where we want to answer the question \"can this user read this particular object\". This means that each \"Project\" has it's own \"Read-Permission\", therefore not allowing us to implement other than at the domain-level (or very close above)\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ ".net", "domain_driven_design", "mvvm", "security" ]
stackoverflow_0005995444_.net_domain_driven_design_mvvm_security.txt
Q: Express: PayloadTooLargeError: request entity too large I'm getting this error when I try to send a Base64 string in a POST request. POST /saveImage 413 10.564 ms - 1459 PayloadTooLargeError: request entity too large Already tried --> app.use(bodyParser.urlencoded({ limit: "50mb", extended: true, parameterLimit: 50000 })) --> app.use(bodyParser.urlencoded({limit: '50mb'})); --> app.use(bodyParser({limit: '50mb'})); Here's my code (api.js class) const express = require('express'); var app = express(); const router = express.Router(); var Connection = require('tedious').Connection var Request = require('tedious').Request var TYPES = require('tedious').TYPES var multer = require('multer'); .... .... .... router.post('/saveImage', (req, res) => { request=new Request('SAVE_IMAGE',(err, rowCount, rows)=>{ if(err){ console.log(err); } }); request.addParameter("Base64Image", TYPES.Text, req.body.IMG) connection.callProcedure(request); }); API CALL (Image class contains a Base64 format image and other fields, but I guess the problem occurs because of the Base64 string length. Small images don't cause any trouble) create(image: Image) { return this._http.post('/saveImage', image) .map(data => data.json()).toPromise() } A: I was having the same error. I tried what you tried it did not work. I guess you are uploading a file. The simple way to solve this is to not set a Content-Type. my problem was that I was setting on my headers: Content-Type: application/json and I am [was] using multer (expressjs middle for uploading files). I have the error whenever I try uploading a file. So when using postman or making such requests using any tools or libraries like axiosjs or fetch() API do not set content-type. Once you remove the Content-type it will work. That is what I did on my code, I have: const express = require('express'); ... const app = express(); app.use(express.json()); ... ...And it is working because I removed Content-Type on my postman headers. Make sure you are not using Content-Type on the headers. A: I would recommend you to use express instead of body-parser, as body-parser got merged back in express a long ago. I am using this code and it seems to work fine, setting the limit option to 200mb, of both express.json and express.urlencoded app.use(express.json({ limit: "200mb" })); app.use(express.urlencoded({ extended: true, limit: "200mb" })); Source: express.json vs bodyparser.json
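For comparison only — the same 413 behaviour in a different stack (Flask, not the asker's Express): the size cap is server-side configuration, and raising it is the whole fix, just like the express.json limit above. The route and field names here are made up.

from flask import Flask, request

app = Flask(__name__)
app.config["MAX_CONTENT_LENGTH"] = 50 * 1024 * 1024  # bodies above 50 MB -> HTTP 413

@app.route("/saveImage", methods=["POST"])
def save_image():
    payload = request.get_json()  # large base64 payloads now fit under the cap
    return {"received_chars": len(payload.get("IMG", ""))}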
Express: PayloadTooLargeError: request entity too large
I'm getting this error when I try to send a Base64 string in a POST request. POST /saveImage 413 10.564 ms - 1459 PayloadTooLargeError: request entity too large Already tried --> app.use(bodyParser.urlencoded({ limit: "50mb", extended: true, parameterLimit: 50000 })) --> app.use(bodyParser.urlencoded({limit: '50mb'})); --> app.use(bodyParser({limit: '50mb'})); Here's my code (api.js class) const express = require('express'); var app = express(); const router = express.Router(); var Connection = require('tedious').Connection var Request = require('tedious').Request var TYPES = require('tedious').TYPES var multer = require('multer'); .... .... .... router.post('/saveImage', (req, res) => { request=new Request('SAVE_IMAGE',(err, rowCount, rows)=>{ if(err){ console.log(err); } }); request.addParameter("Base64Image", TYPES.Text, req.body.IMG) connection.callProcedure(request); }); API CALL (Image class contains a Base64 format image and other fields, but I guess the problem occurs because of the Base64 string length. Small images don't cause any trouble) create(image: Image) { return this._http.post('/saveImage', image) .map(data => data.json()).toPromise() }
[ "I was having the same error. I tried what you tried it did not work.\nI guess you are uploading a file. The simple way to solve this is to not set a Content-Type.\nmy problem was that I was setting on my headers: Content-Type: application/json and I am [was] using multer (expressjs middle for uploading files).\nI have the error whenever I try uploading a file.\nSo when using postman or making such requests using any tools or libraries like axiosjs or fetch() API do not set content-type.\nOnce you remove the Content-type it will work. That is what I did\non my code, I have:\nconst express = require('express');\n\n...\n\nconst app = express();\n\napp.use(express.json());\n...\n\n...And it is working because I removed Content-Type on my postman headers.\nMake sure you are not using Content-Type on the headers.\n", "I would recommend you to use express instead of body-parser, as body-parser got merged back in express a long ago.\nI am using this code and it seems to work fine, setting the limit option to 200mb, of both express.json and express.urlencoded\napp.use(express.json({ limit: \"200mb\" }));\napp.use(express.urlencoded({ extended: true, limit: \"200mb\" }));\n\nSource: express.json vs bodyparser.json\n" ]
[ 2, 0 ]
[]
[]
[ "base64", "express", "javascript", "node.js", "tedious" ]
stackoverflow_0061537683_base64_express_javascript_node.js_tedious.txt
Q: Create Folder for Each File in Recursive Directory, Placing File in Folder Create Folder for Each File in Recursive Directory, Placing File in Folder On MacOS, so far I have... for file in $(ls -R); do if [[ -f "$file" ]]; then mkdir "${file%.*}"; mv "$file" "${file%.*}"; fi; done This operates correctly on the top level of the nested folder, but does nothing with lower levels. To isolate the error, I tried this instead, operating on rtf files . . for i in $(ls -R);do if [ $i = '*.rtf' ];then echo "I do something with the file $i" fi done This hangs, so I simplified to . . for i in $(ls -R); do echo "file is $i" done That hangs also, so I tried . . for i in $(ls -R); do echo hello That hangs also. ls -R works to provide a recursive list of all files. Suggestions appreciated !! A: First of all don't use ls in scripts. It is meant to show output interactively. Although the newish GNU ls version has some features/options for shell parsing, not sure about on a Mac though. Now using find with the sh shell. find . -type f -name '*.rtf' -execdir sh -c ' for f; do mkdir -p -- "${f%.*}" && mv -v -- "$f" "${f%.*}"; done' _ {} + For whatever reason -execdir is not available, one can use -exec find . -type f -name '*.rtf' -exec sh -c ' for f; do mkdir -p -- "${f%.*}" && mv -v -- "$f" "${f%.*}" ; done' _ {} + See Understanding-the-exec-option-of-find A: Given this file structure: $ tree . . ├── 123 │   └── test_2.rtf ├── bar │   ├── 456 │   │   └── test_1.rtf │   └── 789 └── foo There are two common ways to find all the .rtf files in that tree. The first (and most common) is to use find: while IFS= read -r path; do echo "$path" done < <(find . -type f -name "*.rtf") Prints: ./bar/456/test_1.rtf ./123/test_2.rtf The second common way is to use a recursive glob. This is not a POSIX way and is only found in more recent shells such as Bash, zsh, etc: shopt -s globstar # In Bash, this enables recursive globs # In zsh, this is not required for path in **/*.rtf; do echo "$path" done # same output Now that you have the loop to find the files, you can modify the files found. The first issue you will run across is that you cannot have two files with the same name in a single directory; a directory is just a type of file. So you will need to proceed this way: Find all the files with their paths; Create a tmp name and create a sub-directory with that temp name; Move the found file into the temp directory; Rename the temp directory to the found file name. Here is a Bash (or zsh that is default on MacOS) script to do that: shopt -s globstar # remove for zsh for p in **/*.rtf; do [ -f "$p" ] || continue # if not a file, loop onward tmp=$(uuidgen) # easy temp name -- not POSIX however fn="${p##*/}" # strip the file name from the path path_to="${p%/*}" # get the path without the file name mkdir "${path_to}${tmp}" # use temp name for the directory mv "$p" "${path_to}$tmp" # move the file to that directory mv "${path_to}$tmp" "$p" # rename the directory to the path done And the result: . ├── 123 │   └── test_2.rtf │   └── test_2.rtf ├── bar │   ├── 456 │   │   └── test_1.rtf │   │   └── test_1.rtf │   └── 789 └── foo
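If the shell quirks keep biting, the same job can be done from Python; a sketch using only the standard library, mirroring the question's ${file%.*} behaviour (directory named after the file minus its extension):

from pathlib import Path

# materialise the matches first: rglob is lazy, and we are about to move files
for path in list(Path(".").rglob("*.rtf")):
    if not path.is_file():
        continue
    target = path.with_suffix("")        # "dir/name.rtf" -> "dir/name"
    target.mkdir(exist_ok=True)          # fails only if a *file* named "name" exists
    path.rename(target / path.name)      # move the file into its new folder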
Create Folder for Each File in Recursive Directory, Placing File in Folder
Create Folder for Each File in Recursive Directory, Placing File in Folder On MacOS, so far I have... for file in $(ls -R); do if [[ -f "$file" ]]; then mkdir "${file%.*}"; mv "$file" "${file%.*}"; fi; done This operates correctly on the top level of the nested folder, but does nothing with lower levels. To isolate the error, I tried this instead, operating on rtf files . . for i in $(ls -R);do if [ $i = '*.rtf' ];then echo "I do something with the file $i" fi done This hangs, so I simplified to . . for i in $(ls -R); do echo "file is $i" done That hangs also, so I tried . . for i in $(ls -R); do echo hello That hangs also. ls -R works to provide a recursive list of all files. Suggestions appreciated !!
[ "First of all don't use ls in scripts. It is meant to show output interactively. Although the newish GNU ls version has some features/options for shell parsing, not sure about on a Mac though.\nNow using find with the sh shell.\nfind . -type f -name '*.rtf' -execdir sh -c '\n for f; do mkdir -p -- \"${f%.*}\" && mv -v -- \"$f\" \"${f%.*}\"; done' _ {} +\n\n\nFor whatever reason -execdir is not available, one can use -exec\nfind . -type f -name '*.rtf' -exec sh -c '\n for f; do mkdir -p -- \"${f%.*}\" && mv -v -- \"$f\" \"${f%.*}\" ; done' _ {} +\n\n\nSee Understanding-the-exec-option-of-find\n\n", "Given this file structure:\n$ tree .\n.\n├── 123\n│   └── test_2.rtf\n├── bar\n│   ├── 456\n│   │   └── test_1.rtf\n│   └── 789\n└── foo\n\nThere are two common ways to find all the .rtf files in that tree. The first (and most common) is to use find:\nwhile IFS= read -r path; do \n echo \"$path\"\ndone < <(find . -type f -name \"*.rtf\")\n\nPrints:\n./bar/456/test_1.rtf\n./123/test_2.rtf\n\nThe second common way is to use a recursive glob. This is not a POSIX way and is only found in more recent shells such as Bash, zsh, etc:\nshopt -s globstar # In Bash, this enables recursive globs\n # In zsh, this is not required\nfor path in **/*.rtf; do \n echo \"$path\"\ndone \n# same output\n\nNow that you have the loop to find the files, you can modify the files found.\nThe first issue you will run across is that you cannot have two files with the same name in a single directory; a directory is just a type of file. So you will need to proceed this way:\n\nFind all the files with their paths;\nCreate a tmp name and create a sub-directory with that temp name;\nMove the found file into the temp directory;\nRename the temp directory to the found file name.\n\nHere is a Bash (or zsh that is default on MacOS) script to do that:\nshopt -s globstar # remove for zsh\nfor p in **/*.rtf; do \n [ -f \"$p\" ] || continue # if not a file, loop onward\n tmp=$(uuidgen) # easy temp name -- not POSIX however\n fn=\"${p##*/}\" # strip the file name from the path\n path_to=\"${p%/*}\" # get the path without the file name\n mkdir \"${path_to}${tmp}\" # use temp name for the directory \n mv \"$p\" \"${path_to}$tmp\" # move the file to that directory\n mv \"${path_to}$tmp\" \"$p\" # rename the directory to the path\ndone \n\nAnd the result:\n.\n├── 123\n│   └── test_2.rtf\n│   └── test_2.rtf\n├── bar\n│   ├── 456\n│   │   └── test_1.rtf\n│   │   └── test_1.rtf\n│   └── 789\n└── foo\n\n" ]
[ 0, 0 ]
[]
[]
[ "bash", "directory", "loops", "macos", "nested" ]
stackoverflow_0074661651_bash_directory_loops_macos_nested.txt
Q: How to get n longest entries of DataFrame? I'm trying to get the n longest entries of a dask DataFrame. I tried calling nlargest on a dask DataFrame with two columns like this: import dask.dataframe as dd df = dd.read_csv("opendns-random-domains.txt", header=None, names=['domain_name']) df['domain_length'] = df.domain_name.map(len) print(df.head()) print(df.dtypes) top_3 = df.nlargest(3, 'domain_length') print(top_3.head()) The file opendns-random-domains.txt contains just a long list of domain names. This is what the output of the above code looks like: domain_name domain_length 0 webmagnat.ro 12 1 nickelfreesolutions.com 23 2 scheepvaarttelefoongids.nl 26 3 tursan.net 10 4 plannersanonymous.com 21 domain_name object domain_length float64 dtype: object Traceback (most recent call last): File "nlargest_test.py", line 9, in <module> print(top_3.head()) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 382, in head result = result.compute() File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 86, in compute return compute(self, **kwargs)[0] File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 179, in compute results = get(dsk, keys, **kwargs) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/threaded.py", line 57, in get **kwargs) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 484, in get_async raise(remote_exception(res, tb)) dask.async.TypeError: Cannot use method 'nlargest' with dtype object Traceback --------- File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 267, in execute_task result = _execute_task(task, data) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 249, in _execute_task return func(*args2) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 2040, in <lambda> f = lambda df: df.nlargest(n, columns) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3355, in nlargest return self._nsorted(columns, n, 'nlargest', keep) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3318, in _nsorted ser = getattr(self[columns[0]], method)(n, keep=keep) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/util/decorators.py", line 91, in wrapper return func(*args, **kwargs) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/series.py", line 1898, in nlargest return algos.select_n(self, n=n, keep=keep, method='nlargest') File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/algorithms.py", line 559, in select_n raise 
TypeError("Cannot use method %r with dtype %s" % (method, dtype)) I'm confused, because I'm calling nlargest on the column which is of type float64 but still get this error saying it cannot be called on dtype object. Also this works fine in pandas. How can I get the n longest entries from a DataFrame? A: I was helped by explicit type conversion: df['column'].astype(str).astype(float).nlargest(5) A: I tried to reproduce your problem but things worked fine. Can I recommend that you produce a Minimal Complete Verifiable Example? Pandas example In [1]: import pandas as pd In [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']}) In [3]: df['y'] = df.x.map(len) In [4]: df Out[4]: x y 0 a 1 1 bb 2 2 ccc 3 3 dddd 4 In [5]: df.nlargest(3, 'y') Out[5]: x y 3 dddd 4 2 ccc 3 1 bb 2 Dask dataframe example In [1]: import pandas as pd In [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']}) In [3]: import dask.dataframe as dd In [4]: ddf = dd.from_pandas(df, npartitions=2) In [5]: ddf['y'] = ddf.x.map(len) In [6]: ddf.nlargest(3, 'y').compute() Out[6]: x y 3 dddd 4 2 ccc 3 1 bb 2 Alternatively, perhaps this is just working now on the git master version? A: You only need to change the type of respective column to int or float using .astype(). For example, in your case: top_3 = df['domain_length'].astype(float).nlargest(3) A: If you want to get the values with the most occurrences from a String type column you may use value_counts() with nlargest(n), where n is the number of elements you want to bring. df['your_column'].value_counts().nlargest(3) It will bring the top 3 occurrences from that column. A: This is how my first data frame look. This is how my new data frame looks after getting top 5. ''' station_count.nlargest(5,'count') ''' You have to give (nlargest) command to a column who have int data type and not in string so it can calculate the count. Always top n number followed by its corresponding column that is int type.
How to get n longest entries of DataFrame?
I'm trying to get the n longest entries of a dask DataFrame. I tried calling nlargest on a dask DataFrame with two columns like this: import dask.dataframe as dd df = dd.read_csv("opendns-random-domains.txt", header=None, names=['domain_name']) df['domain_length'] = df.domain_name.map(len) print(df.head()) print(df.dtypes) top_3 = df.nlargest(3, 'domain_length') print(top_3.head()) The file opendns-random-domains.txt contains just a long list of domain names. This is what the output of the above code looks like: domain_name domain_length 0 webmagnat.ro 12 1 nickelfreesolutions.com 23 2 scheepvaarttelefoongids.nl 26 3 tursan.net 10 4 plannersanonymous.com 21 domain_name object domain_length float64 dtype: object Traceback (most recent call last): File "nlargest_test.py", line 9, in <module> print(top_3.head()) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 382, in head result = result.compute() File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 86, in compute return compute(self, **kwargs)[0] File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/base.py", line 179, in compute results = get(dsk, keys, **kwargs) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/threaded.py", line 57, in get **kwargs) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 484, in get_async raise(remote_exception(res, tb)) dask.async.TypeError: Cannot use method 'nlargest' with dtype object Traceback --------- File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 267, in execute_task result = _execute_task(task, data) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/async.py", line 249, in _execute_task return func(*args2) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/dask/dataframe/core.py", line 2040, in <lambda> f = lambda df: df.nlargest(n, columns) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3355, in nlargest return self._nsorted(columns, n, 'nlargest', keep) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/frame.py", line 3318, in _nsorted ser = getattr(self[columns[0]], method)(n, keep=keep) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/util/decorators.py", line 91, in wrapper return func(*args, **kwargs) File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/series.py", line 1898, in nlargest return algos.select_n(self, n=n, keep=keep, method='nlargest') File "/home/work/Dokumente/ModZero/Commerzbank/DNS_und_Proxylog-Analyse/dask-log-analyzer/venv/lib/python3.5/site-packages/pandas/core/algorithms.py", line 559, in select_n raise TypeError("Cannot use method %r with dtype %s" % 
(method, dtype)) I'm confused, because I'm calling nlargest on the column which is of type float64 but still get this error saying it cannot be called on dtype object. Also this works fine in pandas. How can I get the n longest entries from a DataFrame?
[ "I was helped by explicit type conversion:\ndf['column'].astype(str).astype(float).nlargest(5)\n\n", "I tried to reproduce your problem but things worked fine. Can I recommend that you produce a Minimal Complete Verifiable Example?\nPandas example\nIn [1]: import pandas as pd\n\nIn [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})\n\nIn [3]: df['y'] = df.x.map(len)\n\nIn [4]: df\nOut[4]: \n x y\n0 a 1\n1 bb 2\n2 ccc 3\n3 dddd 4\n\nIn [5]: df.nlargest(3, 'y')\nOut[5]: \n x y\n3 dddd 4\n2 ccc 3\n1 bb 2\n\nDask dataframe example\nIn [1]: import pandas as pd\n\nIn [2]: df = pd.DataFrame({'x': ['a', 'bb', 'ccc', 'dddd']})\n\nIn [3]: import dask.dataframe as dd\n\nIn [4]: ddf = dd.from_pandas(df, npartitions=2)\n\nIn [5]: ddf['y'] = ddf.x.map(len)\n\nIn [6]: ddf.nlargest(3, 'y').compute()\nOut[6]: \n x y\n3 dddd 4\n2 ccc 3\n1 bb 2\n\nAlternatively, perhaps this is just working now on the git master version?\n", "You only need to change the type of respective column to int or float using .astype().\nFor example, in your case:\ntop_3 = df['domain_length'].astype(float).nlargest(3)\n\n", "If you want to get the values with the most occurrences from a String type column you may use value_counts() with nlargest(n), where n is the number of elements you want to bring.\ndf['your_column'].value_counts().nlargest(3)\n\nIt will bring the top 3 occurrences from that column.\n", "This is how my first data frame look.\nThis is how my new data frame looks after getting top 5.\n'''\nstation_count.nlargest(5,'count')\n'''\nYou have to give (nlargest) command to a column who have int data type and not in string so it can calculate the count.\nAlways top n number followed by its corresponding column that is int type.\n" ]
[ 3, 0, 0, 0, 0 ]
[]
[]
[ "dask", "python" ]
stackoverflow_0038978432_dask_python.txt
Q: How do I know which parameters to use with a pretrained Tokenizer? I must be missing something ... I want to use a pretrained model with HuggingFace: transformer_name = "Geotrend/distilbert-base-fr-cased" # Or whatever model model = AutoModelForSequenceClassification.from_pretrained(transformer_name, num_labels=5) tokenizer = AutoTokenizer.from_pretrained(transformer_name) Now that I have my model and my tokenizer, I need to tokenize my dataset, but I don't know which parameters (padding, truncation, max_length) to use with my Tokenizer. Some examples just call the tokenizer tokenizer(data), others use truncation only tokenizer(data, truncation=True), and others will use many parameters tokenizer(data, padding=True, truncation=True, return_tensors='pt', max_length=512). As I am reloading a pretrained Tokenizer, I would have loved it to use the same parameters as in the original training process. How do I know which parameters to use? My understanding is that I always need to truncate my data and leave max_length as None so that my sequence lengths will always be lower than the model's maximum length. Is that it? Does leaving max_length as None make it fall back on the model's maximum length? And what should I do with padding? As I am using a Trainer object for training with a DataCollatorWithPadding, should I set padding to False to reduce the memory impact and let the collator pad my batches? Final question: what should I do if I use a TextClassificationPipeline for inference? Should I specify these parameters (padding, etc.)? Will the pipeline handle it for me? A: The choice on whether to use padding and truncation depends on the model you are fine-tuning and on your training process, and not on the pretrained tokenizer. Transformer-based models have a constraint on the number of tokens the model can process, so generally yes, that's it. Yes, when max_length is None then the maximum acceptable input length for the model is considered (see docs). Yes, you should not pad the input sequence if you use DataCollatorWithPadding. More about it in this video. As you already noticed, you have to specify them yourself when you pass your input text to the pipeline.
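A sketch of the setup the answer describes — truncation at tokenization time, padding deferred to the collator (the Trainer line is commented out because it assumes a tokenized dataset you would still have to build):

from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          DataCollatorWithPadding, Trainer, TrainingArguments)

name = "Geotrend/distilbert-base-fr-cased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=5)

def tokenize(batch):
    # truncation=True with max_length=None bounds inputs to the model maximum;
    # no padding here -- the collator pads each batch to its own longest member
    return tokenizer(batch["text"], truncation=True)

collator = DataCollatorWithPadding(tokenizer=tokenizer)
# trainer = Trainer(model=model, args=TrainingArguments(output_dir="out"),
#                   train_dataset=tokenized_train, data_collator=collator)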
How do I know which parameters to use with a pretrained Tokenizer?
I must be missing something ... I want to use a pretrained model with HuggingFace: transformer_name = "Geotrend/distilbert-base-fr-cased" # Or whatever model model = AutoModelForSequenceClassification.from_pretrained(transformer_name, num_labels=5) tokenizer = AutoTokenizer.from_pretrained(transformer_name) Now that I have my model and my tokenizer, I need to tokenize my dataset, but I don't know which parameters (padding, truncation, max_length) to use with my Tokenizer. Some examples just call the tokenizer tokenizer(data), others use truncation only tokenizer(data, truncation=True), and others will use many parameters tokenizer(data, padding=True, truncation=True, return_tensors='pt', max_length=512). As I am reloading a pretrained Tokenizer, I would have love it to use the same parameters as in the original training process. How do I know which parameters to use ? My understanding is that I always need to truncate my data and leave max_length to None so that my sequences length will always be lower than the model's maximum length. Is that it ? Does leaving max_length to None makes it backup on the model's maximum length ? And what should I do with padding ? As I am using a Trainer object for training with a DataCollatorWithPadding should I set padding to False to reduce the memory impact and let the collator pad my batches ? Final question : what should I do if I use a TextClassificationPipeline for inference ? Should I specify these parameters (padding, etc.) ? Will the pipeline handle it for me ?
[ "\nThe choice on whether to use padding and truncation depends on the model you are fine-tuning and on your training process, and not on the pretrained tokenizer.\nTranformer-based models have a constraint on the number of tokens the model can process, so generally yes that's it. Yes, when max_length is None then the maximum acceptable input length for the model is considered. (see docs).\nYes, you should not pad the input sequence if you use DataCollatorWithPadding. More about it in this video.\nAs you already noticed, you have to specify them yourself when you pass your input text to the pipeline.\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "huggingface_tokenizers", "huggingface_transformers" ]
stackoverflow_0074657367_deep_learning_huggingface_tokenizers_huggingface_transformers.txt
Q: stdout and stderr in c Each time I start the program, the message "This message 2" is displayed in a random order relative to other messages. #include <stdio.h> int main() { fprintf(stdout,"This is message 1\n"); fprintf(stderr,"This is message 2\n"); fprintf(stdout,"This is message 3\n"); return 0; } Examples: Example 1 Example 2 Why is this happening and how to fix it? A: No randomness is displayed in the examples; both show message 2 first, and this is the result of deterministic behavior. Per C 2018 7.21.3 7, the standard error stream is not fully buffered. This means it is either unbuffered (printed characters are sent as soon as possible) or line buffered (printed characters are sent when a new-line character is printed or sooner). Since "This is message 2\n" ends with a new-line character, both the unbuffered and line buffered modes send it as soon as possible. Also per 7.21.3 7, the standard output stream is fully buffered if and only if it can be determined not to refer to an interactive device. Thus, we have two possibilities: The stream can be determined to refer to an interactive device. Then it is not fully buffered, so it is unbuffered or line buffered, and the strings ending with new-line characters are sent as soon as possible. The stream can not be determined to refer to an interactive device. Then it is fully buffered, so the strings are held in a buffer until the buffer is full or a flush is requested, which occurs at normal program termination. Thus the total behavior of the program when the output is going directly to an interactive device is that all strings are sent immediately, so the output will be: This is message 1 This is message 2 This is message 3 And the total behavior of the program when the output is not going to an interactive device is that message 2 is sent immediately and messages 1 and 3 are held in a buffer until the end of the program, so the output is: This is message 2 This is message 1 This is message 3 Both cases are deterministic, not random. The output shown in the question is evidence that the IDE is running the program with the standard output captured in some way and presented to the user indirectly (as through a pipe or a redirection to a temporary file) rather than sent directly to the display device. This can be changed by manually inserting fflush(stdout) calls where you want the standard output to be sent immediately to the associated device or file, or it can be changed by making the standard output stream line-buffered: setvbuf(stdout, 0, _IOLBF, 0); // Must be before stdout is used. (You could also use setvbuf to make the standard error stream fully buffered, but that is inadvisable.) If the IDE is capturing both the standard output stream and the standard error stream in the same way, this will result in the messages appearing in the 1, 2, 3 order. However, it is possible the IDE is capturing the streams separately and displaying the standard error results first, in which case changing the buffering will not change the display. A: This is what OpenAI ChatGPT had to say when fed just the body of the question with no other prompts: In this code, the messages "This is message 1" and "This is message 3" are being printed to the stdout standard output stream, while the message "This is message 2" is being printed to the stderr standard error stream. 
The order in which these messages are displayed may be different each time the program is run, because the stdout and stderr streams are usually buffered differently, and the messages may be printed in different orders depending on how the buffers are flushed. I neither agree nor disagree with this. I post it because it's just ... wow...
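The effect is easy to reproduce outside C as well; a small Python demo of the same buffering rules, for illustration only (the C-side fixes remain fflush(stdout) or setvbuf, as the first answer shows):

import sys

# tty run:            python3 demo.py            -> 1, 2, 3
# piped/captured run: python3 demo.py 2>&1 | cat -> 2, 1, 3
print("This is message 1")                    # stdout: block-buffered when piped
print("This is message 2", file=sys.stderr)   # stderr: flushed line by line
print("This is message 3")
# to force 1, 2, 3 everywhere: flush stdout yourself, or run python3 -u demo.py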
stdout and stderr in c
Each time I start the program, the message "This message 2" is displayed in a random order relative to other messages. #include <stdio.h> int main() { fprintf(stdout,"This is message 1\n"); fprintf(stderr,"This is message 2\n"); fprintf(stdout,"This is message 3\n"); return 0; } Examples: Example 1 Example 2 Why is this happening and how to fix it?
[ "No randomness is displayed in the examples; both show message 2 first, and this is the result of deterministic behavior.\nPer C 2018 7.21.3 7, the standard error stream is not fully buffered. This means it is either unbuffered (printed characters are sent as soon as possible) or line buffered (printed characters are sent when a new-line character is printed or sooner). Since \"This is message 2\\n\" ends with a new-line character, both the unbuffered and line buffered modes send it as soon as possible.\nAlso per 7.21.3 7, the standard output stream is fully buffered if and only if it can be determined not to refer to an interactive device. Thus, we have two possibilities:\n\nThe stream can be determined to refer to an interactive device. Then it is not fully buffered, so it is unbuffered or line buffered, and the strings ending with new-line characters are sent as soon as possible.\nThe stream can not be determined to refer to an interactive device. Then it is fully buffered, so the strings are held in a buffer until the buffer is full or a flush is requested, which occurs at normal program termination.\n\nThus the total behavior of the program when the output is going directly to an interactive device is that all strings are sent immediately, so the output will be:\n\nThis is message 1\nThis is message 2\nThis is message 3\n\nAnd the total behavior of the program when the output is not going to an interactive device is that message 2 is sent immediately and messages 1 and 3 are held in a buffer until the end of the program, so the output is:\n\nThis is message 2\nThis is message 1\nThis is message 3\n\nBoth cases are deterministic, not random. The output shown in the question is evidence that the IDE is running the program with the standard output captured in some way and presented to the user indirectly (as through a pipe or a redirection to a temporary file) rather than sent directly to the display device.\nThis can be changed by manually inserting fflush(stdout) calls where you want the standard output to be sent immediately to the associated device or file, or it can be changed by making the standard output stream line-buffered:\nsetvbuf(stdout, 0, _IOLBF, 0); // Must be before stdout is used.\n\n(You could also use setvbuf to make the standard error stream fully buffered, but that is inadvisable.)\nIf the IDE is capturing both the standard output stream and the standard error stream in the same way, this will result in the messages appearing in the 1, 2, 3 order. However, it is possible the IDE is capturing the streams separately and displaying the standard error results first, in which case changing the buffering will not change the display.\n", "This is what OpenAI ChatGPT had to say when fed just the body of the question with no other prompts:\n\nIn this code, the messages \"This is message 1\" and \"This is message 3\"\nare being printed to the stdout standard output stream, while the\nmessage \"This is message 2\" is being printed to the stderr standard\nerror stream. The order in which these messages are displayed may be\ndifferent each time the program is run, because the stdout and\nstderr streams are usually buffered differently, and the messages\nmay be printed in different orders depending on how the buffers are\nflushed.\n\n\nI neither agree nor not disagree with this. I post it because it's just ... wow...\n" ]
[ 2, 0 ]
[]
[]
[ "c", "stderr", "stdout" ]
stackoverflow_0074666114_c_stderr_stdout.txt
Q: Hypothesis testing for three groups Based on the data, is the average sale amount statistically the same for the A, B, and C groups? I performed t.test on AB, BC, CA. For CA, p-value>0.05, so I concluded that for CA we can't reject the null hypothesis, and the averages may be the same. H1 - the alternative hypothesis was - true difference in means between group 36-45 and group 46-50 is not equal to 0 My Question is - Did I do this correctly, or is there another way to check the hypothesis for three groups? A: If the population means of the groups are denoted mu_A, mu_B, and mu_C, then you are actually interested in the single joint null hypothesis: H_0: mu_A=mu_B=mu_C. The problem with conducting three pairwise tests is the fact that it is difficult to control the probability of the type I error. That is, how do you know that three tests at a significance level of 5% will still reject the H_0 above with 5% probability if this H_0 is true? The test you are looking for is called an Analysis of Variance (ANOVA) test. It will provide a single test statistic and a single p-value to test the hypothesis above. If you search for "ANOVA statistical test", then Google will suggest many explanations (and probably also appropriate commands to do the analysis in R). I hope this helps.
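The answer points at R; for a concrete picture, here is the same single joint test sketched in Python's SciPy (the numbers are invented — in R the equivalent is aov() or oneway.test()):

from scipy import stats

sales_a = [120, 135, 118, 142, 127]
sales_b = [130, 125, 140, 133, 129]
sales_c = [121, 138, 124, 131, 126]

# one F statistic and one p-value for H_0: mu_A = mu_B = mu_C
f_stat, p_value = stats.f_oneway(sales_a, sales_b, sales_c)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # reject H_0 only if p < 0.05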
Hypothesis testing for three groups
Based on the data, is the average sale amount statistically the same for the A, B, and C groups? I performed t.test on AB, BC, CA. for CA, p-value>0.05, so I concluded for CA, we can't reject null hypothesis, and average may be same. H1- alternative hypothesis was - true difference in means between group 36-45 and group 46-50 is not equal to 0 My Question is - Did I do this correctly or is there another way to check the hypothesis for three groups
[ "If the population means of the groups are denoted mu_A, mu_B, and mu_C, then you are actually interested in the single joint null hypothesis: H_0: mu_A=mu_B=mu_C. The problem with conducting three pairwise test is the fact that it is difficult to control the probability of the type I error. That is, how do you know that three test at a significance level of 5% will still reject the H_0 above with 5% probability if this H_0 is true?\nThe test you are looking for is called an Analysis of Variance (ANOVA) test. It will provide a single test statistic and a single p-value to test the hypothesis above. If you search for \"ANOVA statistical test\", then Google will suggest many explanations (and probably also appropriate commands to do the analysis in R). I hope this helps.\n" ]
[ 0 ]
[]
[]
[ "anova", "hypothesis_test", "r", "statistics", "t_test" ]
stackoverflow_0074664573_anova_hypothesis_test_r_statistics_t_test.txt
Q: im making a economy bot like owo but im getting operand errors i cant figure out the error it just says discord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: unsupported operand type(s) for +=: 'int' and 'str' what can i do here? here is my code - import discord from discord.ext import commands import json import random client = commands.Bot(command_prefix="owo ", intents=discord.Intents.all()) @client.event async def on_ready(): print("Logged in as {} lets rock!".format(client.user)) @client.command(aliases=['c']) async def cash(ctx): await open_account(ctx.author) user = ctx.author users = await get_bank_data() wallet_amt = users[str(user.id)]["wallet"] await ctx.send(f"<:cowoncy:1048582461610799116> **|** {ctx.author.name} you currently have **__{wallet_amt}__** **cowoncy**!") # await ctx.send(embed= em) @client.command(aliases=['sm']) async def give(ctx,member : discord.Member,amount = None): await open_account(ctx.author) await open_account(member) if amount == None: await ctx.send(f" | {ctx.author.name}, Invalid arguments! :c", delete_after=5) return bal = await update_bank(ctx.author) amount = int(amount) if amount > bal[0]: await ctx.send(f' **|** {ctx.author.name}, you silly hooman! You don\'t have enough cowoncy!') return await update_bank(ctx.author,-1*amount,'wallet') await update_bank(member,amount,'wallet') await ctx.send(f'<:CreditCard:1048583346453757992> **|** **{ctx.author.name}** sent **{amount}') @client.command(aliases=['d']) async def add(ctx,member : discord.Member, amount): await open_account(ctx.author) await open_account(member) bal = await update_bank(member) await update_bank(member, amount,'wallet') await ctx.send(f'<:CreditCard:1048583346453757992> **|** {member} **{amount}** coins has been added to your balance!') async def open_account(user): users = await get_bank_data() if str(user.id) in users: return False else: users[str(user.id)] = {} users[str(user.id)]["wallet"] = 0 users[str(user.id)]["bank"] = 0 with open('mainbank.json','w') as f: json.dump(users,f) return True async def get_bank_data(): with open('mainbank.json','r') as f: users = json.load(f) return users async def update_bank(user,change=0,mode = 'wallet'): users = await get_bank_data() users[str(user.id)][mode] += change with open('mainbank.json','w') as f: json.dump(users,f) bal = users[str(user.id)]['wallet'],users[str(user.id)]['bank'] return bal client.run("TOKEN") i was making a economy bot and i got operand error what can i do here???? any type of help will be appreciated! also you can dm me here † メ ☯ɪ ᴀᴍ ɢᴏᴋᴜ☯†ᴰᶜ#0001 if you want to ask something about the code... A: This is mainly because of your amount argument in the command callback. Since it doesn't have a type annotation (or typehint), it will always be a string. Lines 62 and 63 users[str(user.id)]["wallet"] = 0 users[str(user.id)]["bank"] = 0 The values you gave them are integers, but the change argument you passed into the update_bank in lines 41, 42, and 51 is a string; you can't add a string to an integer. This can be fixed by adding int as the type annotation to the argument, so the library will convert the user input into an integer. Line 26 async def give(ctx,member: discord.Member, amount: int = None): Line 46 async def add(ctx, member: discord.Member, amount: int):
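The fix from the answer, reduced to a minimal sketch (only the changed command is shown; open_account/update_bank stay as in the question):

import discord
from discord.ext import commands

bot = commands.Bot(command_prefix="owo ", intents=discord.Intents.all())

@bot.command(aliases=["d"])
async def add(ctx, member: discord.Member, amount: int):  # int annotation is the fix
    # discord.py now converts the argument before the callback runs, so
    # update_bank's users[str(user.id)][mode] += change adds int to int
    await ctx.send(f"{amount} coins added to {member.display_name}'s balance!")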
im making a economy bot like owo but im getting operand errors
i cant figure out the error it just says discord.ext.commands.errors.CommandInvokeError: Command raised an exception: TypeError: unsupported operand type(s) for +=: 'int' and 'str' what can i do here? here is my code - import discord from discord.ext import commands import json import random client = commands.Bot(command_prefix="owo ", intents=discord.Intents.all()) @client.event async def on_ready(): print("Logged in as {} lets rock!".format(client.user)) @client.command(aliases=['c']) async def cash(ctx): await open_account(ctx.author) user = ctx.author users = await get_bank_data() wallet_amt = users[str(user.id)]["wallet"] await ctx.send(f"<:cowoncy:1048582461610799116> **|** {ctx.author.name} you currently have **__{wallet_amt}__** **cowoncy**!") # await ctx.send(embed= em) @client.command(aliases=['sm']) async def give(ctx,member : discord.Member,amount = None): await open_account(ctx.author) await open_account(member) if amount == None: await ctx.send(f" | {ctx.author.name}, Invalid arguments! :c", delete_after=5) return bal = await update_bank(ctx.author) amount = int(amount) if amount > bal[0]: await ctx.send(f' **|** {ctx.author.name}, you silly hooman! You don\'t have enough cowoncy!') return await update_bank(ctx.author,-1*amount,'wallet') await update_bank(member,amount,'wallet') await ctx.send(f'<:CreditCard:1048583346453757992> **|** **{ctx.author.name}** sent **{amount}') @client.command(aliases=['d']) async def add(ctx,member : discord.Member, amount): await open_account(ctx.author) await open_account(member) bal = await update_bank(member) await update_bank(member, amount,'wallet') await ctx.send(f'<:CreditCard:1048583346453757992> **|** {member} **{amount}** coins has been added to your balance!') async def open_account(user): users = await get_bank_data() if str(user.id) in users: return False else: users[str(user.id)] = {} users[str(user.id)]["wallet"] = 0 users[str(user.id)]["bank"] = 0 with open('mainbank.json','w') as f: json.dump(users,f) return True async def get_bank_data(): with open('mainbank.json','r') as f: users = json.load(f) return users async def update_bank(user,change=0,mode = 'wallet'): users = await get_bank_data() users[str(user.id)][mode] += change with open('mainbank.json','w') as f: json.dump(users,f) bal = users[str(user.id)]['wallet'],users[str(user.id)]['bank'] return bal client.run("TOKEN") i was making a economy bot and i got operand error what can i do here???? any type of help will be appreciated! also you can dm me here † メ ☯ɪ ᴀᴍ ɢᴏᴋᴜ☯†ᴰᶜ#0001 if you want to ask something about the code...
[ "This is mainly because of your amount argument in the command callback. Since it doesn't have a type annotation (or typehint), it will always be a string.\nLines 62 and 63\nusers[str(user.id)][\"wallet\"] = 0\nusers[str(user.id)][\"bank\"] = 0\n\nThe values you gave them are integers, but the change argument you passed into the update_bank in lines 41, 42, and 51 is a string; you can't add a string to an integer.\nThis can be fixed by adding int as the type annotation to the argument, so the library will convert the user input into an integer.\nLine 26\nasync def give(ctx,member: discord.Member, amount: int = None):\n\nLine 46\nasync def add(ctx, member: discord.Member, amount: int):\n\n" ]
[ 1 ]
[]
[]
[ "discord.py", "operands" ]
stackoverflow_0074667276_discord.py_operands.txt
Q: How to Connect to Multiple Google spanner DB running on different host from Spring Boot App Hi I am trying to connect my spring boot application to multiple google cloud spanner DB. I am able to connect with Single database by making entry in application.yml file. My requirement is tonnect with two spanner database in same application. Please help me. A: The approach is a lengthy one. The com.google.cloud:spring-cloud-gcp-starter-data-spanner lib autoconfigures using application.properties To avoid this, a. you use spring-cloud-gcp-data-spanner b. You would have to define the spannerTemplate bean manually Here is a quick code snippet @Bean public com.google.auth.Credentials getCredentials() { try { return new DefaultCredentialsProvider(Credentials::new).getCredentials(); } catch (IOException ex) { throw new RuntimeException(ex); } } @Bean public SpannerOptions spannerOptions() { return SpannerOptions.newBuilder() .setProjectId("sab-dev-anciq-ml-gol-8565") .setSessionPoolOption(SessionPoolOptions.newBuilder().setMaxSessions(10).build()) .setCredentials(getCredentials()) .build(); } @Bean public Spanner spanner(SpannerOptions spannerOptions) { return spannerOptions.getService(); } @Bean public DatabaseId databaseId() { return DatabaseId.of("your_gcp_project_id", "instance_name", "db_name"); } @Bean public DatabaseClient spannerDatabaseClient(Spanner spanner, DatabaseId databaseId) { return spanner.getDatabaseClient(databaseId); } @Bean public SpannerMappingContext spannerMappingContext(Gson gson) { return new SpannerMappingContext(gson); } @Bean public SpannerEntityProcessor spannerConverter(SpannerMappingContext mappingContext) { return new ConverterAwareMappingSpannerEntityProcessor(mappingContext); } @Bean public SpannerSchemaUtils spannerSchemaUtils( SpannerMappingContext spannerMappingContext, SpannerEntityProcessor spannerEntityProcessor) { return new SpannerSchemaUtils(spannerMappingContext, spannerEntityProcessor, true); } @Bean public SpannerMutationFactory spannerMutationFactory( SpannerEntityProcessor spannerEntityProcessor, SpannerMappingContext spannerMappingContext, SpannerSchemaUtils spannerSchemaUtils) { return new SpannerMutationFactoryImpl( spannerEntityProcessor, spannerMappingContext, spannerSchemaUtils); } @Bean("hostSpannerTemplate") public SpannerTemplate spannerTemplateForHostProject( DatabaseClient databaseClient, SpannerMappingContext mappingContext, SpannerEntityProcessor spannerEntityProcessor, SpannerMutationFactory spannerMutationFactory, SpannerSchemaUtils spannerSchemaUtils) { return new SpannerTemplate( () -> databaseClient, mappingContext, spannerEntityProcessor, spannerMutationFactory, spannerSchemaUtils); } You can now use spannerTemplateForHostProject as a bean to do your curd ops related to host project Similiarly using Qualifiers you can now define another spannerTemplate that uses databaseClient2 using qualifier in argument
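The core idea of the answer — one distinct client object per target database — is the same in every Spanner client; here it is illustrated with the Python client and placeholder project/instance/database names (the Spring-specific wiring is the bean code above):

from google.cloud import spanner

client = spanner.Client(project="my-gcp-project")
orders_db = client.instance("instance-a").database("orders-db")
billing_db = client.instance("instance-b").database("billing-db")

for db in (orders_db, billing_db):        # each handle targets its own database
    with db.snapshot() as snap:
        print(list(snap.execute_sql("SELECT 1")))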
How to Connect to Multiple Google spanner DB running on different host from Spring Boot App
Hi, I am trying to connect my Spring Boot application to multiple Google Cloud Spanner DBs. I am able to connect with a single database by making an entry in the application.yml file. My requirement is to connect with two Spanner databases in the same application. Please help me.
[ "The approach is a lengthy one.\nThe com.google.cloud:spring-cloud-gcp-starter-data-spanner lib autoconfigures using application.properties\nTo avoid this,\na. you use spring-cloud-gcp-data-spanner\nb. You would have to define the spannerTemplate bean manually\nHere is a quick code snippet\n @Bean\npublic com.google.auth.Credentials getCredentials() {\n\n try {\n return new DefaultCredentialsProvider(Credentials::new).getCredentials();\n } catch (IOException ex) {\n throw new RuntimeException(ex);\n }\n}\n\n@Bean\npublic SpannerOptions spannerOptions() {\n return SpannerOptions.newBuilder()\n .setProjectId(\"sab-dev-anciq-ml-gol-8565\")\n .setSessionPoolOption(SessionPoolOptions.newBuilder().setMaxSessions(10).build())\n .setCredentials(getCredentials())\n .build();\n}\n\n@Bean\npublic Spanner spanner(SpannerOptions spannerOptions) {\n return spannerOptions.getService();\n}\n\n@Bean\npublic DatabaseId databaseId() {\n return DatabaseId.of(\"your_gcp_project_id\",\n \"instance_name\",\n \"db_name\");\n}\n\n@Bean\npublic DatabaseClient spannerDatabaseClient(Spanner spanner, DatabaseId databaseId) {\n return spanner.getDatabaseClient(databaseId);\n}\n\n\n@Bean\npublic SpannerMappingContext spannerMappingContext(Gson gson) {\n return new SpannerMappingContext(gson);\n}\n\n@Bean\npublic SpannerEntityProcessor spannerConverter(SpannerMappingContext mappingContext) {\n return new ConverterAwareMappingSpannerEntityProcessor(mappingContext);\n}\n\n@Bean\npublic SpannerSchemaUtils spannerSchemaUtils(\n SpannerMappingContext spannerMappingContext, SpannerEntityProcessor spannerEntityProcessor) {\n return new SpannerSchemaUtils(spannerMappingContext, spannerEntityProcessor, true);\n}\n\n@Bean\npublic SpannerMutationFactory spannerMutationFactory(\n SpannerEntityProcessor spannerEntityProcessor,\n SpannerMappingContext spannerMappingContext,\n SpannerSchemaUtils spannerSchemaUtils) {\n return new SpannerMutationFactoryImpl(\n spannerEntityProcessor, spannerMappingContext, spannerSchemaUtils);\n}\n\n@Bean(\"hostSpannerTemplate\")\npublic SpannerTemplate spannerTemplateForHostProject(\n DatabaseClient databaseClient,\n SpannerMappingContext mappingContext,\n SpannerEntityProcessor spannerEntityProcessor,\n SpannerMutationFactory spannerMutationFactory,\n SpannerSchemaUtils spannerSchemaUtils) {\n return new SpannerTemplate(\n () -> databaseClient,\n mappingContext,\n spannerEntityProcessor,\n spannerMutationFactory,\n spannerSchemaUtils);\n}\n\nYou can now use spannerTemplateForHostProject as a bean to do your curd ops related to host project\nSimiliarly using Qualifiers you can now define another spannerTemplate that uses databaseClient2 using qualifier in argument\n" ]
[ 0 ]
[]
[]
[ "google_cloud_spanner", "spring_boot", "spring_data_jpa" ]
stackoverflow_0067833570_google_cloud_spanner_spring_boot_spring_data_jpa.txt
Q: How can I switch VS Code output/terminal panel tabs using the keyboard? I just want to switch those tabs with the keyboard. Does anyone know how? Thanks. I use ctrl + pagedown/pageup, but it only switches terminal tabs, not Output and Terminal. A: You have the following commands: View: Toggle Problems: (ctrl+shift+m) workbench.actions.view.problems View: Toggle Output: (ctrl+shift+u) workbench.action.output.toggleOutput View: Toggle Debug Console: (ctrl+shift+y) workbench.debug.action.toggleRepl View: Toggle Terminal: (ctrl+`) workbench.action.terminal.toggleTerminal You could construct a collection of key bindings based on context (when) to call any of these commands. Be aware that you can move/reorder the Panels; if you do, your key bindings should change. Or define a set of key bindings that use any sequence of 4 keys on the keyboard together with a modifier key: ctrl+shift+alt+1 ... ctrl+shift+alt+4
How can I switch VS Code output/terminal panel tabs using the keyboard?
I just want to switch those tabs with the keyboard. Does anyone know how? Thanks. I use ctrl + pagedown/pageup, but it only switches terminal tabs, not Output and Terminal.
[ "You have the following commands:\n\nView: Toggle Problems: (ctrl+shift+m) workbench.actions.view.problems\nView: Toggle Output: (ctrl+shift+u) workbench.action.output.toggleOutput\nView: Toggle Debug Console: (ctrl+shift+y) workbench.debug.action.toggleRepl\nView: Toggle Terminal: (ctrl+`) workbench.action.terminal.toggleTerminal\n\nYou could construct a collection of key bindings based on context (when) to call any of these commands.\nBe aware that you can move/reorder the Panels, then your key bindings should change.\nOr define a set of key bindings to use any sequence of 4 keys on the keyboard together with a modifier key: ctrl+shift+alt+1 ... ctrl+shift+alt+4 \n" ]
[ 0 ]
[]
[]
[ "keyboard", "terminal", "visual_studio_code" ]
stackoverflow_0074665351_keyboard_terminal_visual_studio_code.txt
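For the VS Code answer above, here is a possible keybindings.json sketch wiring the four panel views to a numbered modifier sequence. The commands are exactly the ones listed in the answer; the key combinations are just the example set the answer suggests:

```jsonc
// keybindings.json (Command Palette > "Preferences: Open Keyboard Shortcuts (JSON)")
[
  { "key": "ctrl+shift+alt+1", "command": "workbench.actions.view.problems" },
  { "key": "ctrl+shift+alt+2", "command": "workbench.action.output.toggleOutput" },
  { "key": "ctrl+shift+alt+3", "command": "workbench.debug.action.toggleRepl" },
  { "key": "ctrl+shift+alt+4", "command": "workbench.action.terminal.toggleTerminal" }
]
```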
Q: Tensorflow.js model works on google teachable machine but not on react native app I am training a model to recognize different Lego parts. When I train my model on google teachable machine and try the sample objects, the model predicts it accurately 100% of the time. However when I upload the same model to my react native app and run it through expo-go on my phone, it gets the predictions wrong almost all the time. I think it has to do with the tensor image but I am not sure. My model can be found here: https://teachablemachine.withgoogle.com/models/NSTiRzrtZ/ Accurate part prediction on google teachable machine] when taking a picture of the green piece on my phone, it predicts red piece. the prediction order is grey, tan, red, green My code: import React, {useRef, useState, useEffect} from 'react'; import {View,StyleSheet,Dimensions,Pressable,Modal,Text,ActivityIndicator,} from 'react-native'; import * as MediaLibrary from 'expo-media-library'; import {getModel,convertBase64ToTensor,startPrediction} from '../../helpers/tensor-helper'; import {cropPicture} from '../../helpers/image-helper'; import {Camera} from 'expo-camera'; // import { Platform } from 'react-native'; import * as tf from "@tensorflow/tfjs"; import { cameraWithTensors } from '@tensorflow/tfjs-react-native'; import {bundleResourceIO, decodeJpeg} from '@tensorflow/tfjs-react-native'; const initialiseTensorflow = async () => { await tf.ready(); tf.getBackend(); } const TensorCamera = cameraWithTensors(Camera); const modelJson = require('../../model/model.json'); const modelWeights = require('../../model/weights.bin'); const modelMetaData = require('../../model/metadata.json'); const RESULT_MAPPING = ['grey', 'tan', 'red','green']; const CameraScreen = () => { const [hasCameraPermission, setHasCameraPermission] = useState(); const [hasMediaLibraryPermission, setHasMediaLibraryPermission] = useState(); const [isProcessing, setIsProcessing] = useState(false); const [presentedShape, setPresentedShape] = useState(''); useEffect(() => { (async () => { const cameraPermission = await Camera.requestCameraPermissionsAsync(); const mediaLibraryPermission = await MediaLibrary.requestPermissionsAsync(); setHasCameraPermission(cameraPermission.status === "granted"); setHasMediaLibraryPermission(mediaLibraryPermission.status === "granted"); //load model await initialiseTensorflow(); })(); }, []); if (hasCameraPermission === undefined) { return <Text>Requesting permissions...</Text> } else if (!hasCameraPermission) { return <Text>Permission for camera not granted. 
Please change this in settings.</Text> } let frame = 0; const computeRecognitionEveryNFrames = 60; const handleCameraStream = async (images: IterableIterator<tf.Tensor3D>) => { const model = await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights, modelMetaData)); const loop = async () => { if(frame % computeRecognitionEveryNFrames === 0){ const nextImageTensor = images.next().value; if(nextImageTensor){ const tensor = nextImageTensor.reshape([ 1, 224, 224, 3, ]); const prediction = await startPrediction(model, tensor); console.log(prediction) tf.dispose([nextImageTensor]); } } frame += 1; frame = frame % computeRecognitionEveryNFrames; requestAnimationFrame(loop); } loop(); } return ( <View style={styles.container}> <Modal visible={isProcessing} transparent={true} animationType="slide"> <View style={styles.modal}> <View style={styles.modalContent}> <Text>Your current shape is {presentedShape}</Text> {presentedShape === '' && <ActivityIndicator size="large" />} <Pressable style={styles.dismissButton} onPress={() => { setPresentedShape(''); setIsProcessing(false); }}> <Text>Dismiss</Text> </Pressable> </View> </View> </Modal> <TensorCamera style={styles.camera} type={Camera.Constants.Type.back} onReady={handleCameraStream} resizeHeight={224} resizeWidth={224} resizeDepth={3} autorender={true} cameraTextureHeight={1920} cameraTextureWidth={1080} /> </View> ); }; A: you're doing const tensor = nextImageTensor.reshape([1,224,224,3]); which takes image and just reshapes tensor to new shape, regardless of actual pixels. what you probably want to use is tf.image.resizeBilinear to resize image to desired shape.
Tensorflow.js model works on google teachable machine but not on react native app
I am training a model to recognize different Lego parts. When I train my model on google teachable machine and try the sample objects, the model predicts it accurately 100% of the time. However when I upload the same model to my react native app and run it through expo-go on my phone, it gets the predictions wrong almost all the time. I think it has to do with the tensor image but I am not sure. My model can be found here: https://teachablemachine.withgoogle.com/models/NSTiRzrtZ/ Accurate part prediction on google teachable machine] when taking a picture of the green piece on my phone, it predicts red piece. the prediction order is grey, tan, red, green My code: import React, {useRef, useState, useEffect} from 'react'; import {View,StyleSheet,Dimensions,Pressable,Modal,Text,ActivityIndicator,} from 'react-native'; import * as MediaLibrary from 'expo-media-library'; import {getModel,convertBase64ToTensor,startPrediction} from '../../helpers/tensor-helper'; import {cropPicture} from '../../helpers/image-helper'; import {Camera} from 'expo-camera'; // import { Platform } from 'react-native'; import * as tf from "@tensorflow/tfjs"; import { cameraWithTensors } from '@tensorflow/tfjs-react-native'; import {bundleResourceIO, decodeJpeg} from '@tensorflow/tfjs-react-native'; const initialiseTensorflow = async () => { await tf.ready(); tf.getBackend(); } const TensorCamera = cameraWithTensors(Camera); const modelJson = require('../../model/model.json'); const modelWeights = require('../../model/weights.bin'); const modelMetaData = require('../../model/metadata.json'); const RESULT_MAPPING = ['grey', 'tan', 'red','green']; const CameraScreen = () => { const [hasCameraPermission, setHasCameraPermission] = useState(); const [hasMediaLibraryPermission, setHasMediaLibraryPermission] = useState(); const [isProcessing, setIsProcessing] = useState(false); const [presentedShape, setPresentedShape] = useState(''); useEffect(() => { (async () => { const cameraPermission = await Camera.requestCameraPermissionsAsync(); const mediaLibraryPermission = await MediaLibrary.requestPermissionsAsync(); setHasCameraPermission(cameraPermission.status === "granted"); setHasMediaLibraryPermission(mediaLibraryPermission.status === "granted"); //load model await initialiseTensorflow(); })(); }, []); if (hasCameraPermission === undefined) { return <Text>Requesting permissions...</Text> } else if (!hasCameraPermission) { return <Text>Permission for camera not granted. 
Please change this in settings.</Text> } let frame = 0; const computeRecognitionEveryNFrames = 60; const handleCameraStream = async (images: IterableIterator<tf.Tensor3D>) => { const model = await tf.loadLayersModel(bundleResourceIO(modelJson, modelWeights, modelMetaData)); const loop = async () => { if(frame % computeRecognitionEveryNFrames === 0){ const nextImageTensor = images.next().value; if(nextImageTensor){ const tensor = nextImageTensor.reshape([ 1, 224, 224, 3, ]); const prediction = await startPrediction(model, tensor); console.log(prediction) tf.dispose([nextImageTensor]); } } frame += 1; frame = frame % computeRecognitionEveryNFrames; requestAnimationFrame(loop); } loop(); } return ( <View style={styles.container}> <Modal visible={isProcessing} transparent={true} animationType="slide"> <View style={styles.modal}> <View style={styles.modalContent}> <Text>Your current shape is {presentedShape}</Text> {presentedShape === '' && <ActivityIndicator size="large" />} <Pressable style={styles.dismissButton} onPress={() => { setPresentedShape(''); setIsProcessing(false); }}> <Text>Dismiss</Text> </Pressable> </View> </View> </Modal> <TensorCamera style={styles.camera} type={Camera.Constants.Type.back} onReady={handleCameraStream} resizeHeight={224} resizeWidth={224} resizeDepth={3} autorender={true} cameraTextureHeight={1920} cameraTextureWidth={1080} /> </View> ); };
[ "you're doing\nconst tensor = nextImageTensor.reshape([1,224,224,3]);\n\nwhich takes image and just reshapes tensor to new shape, regardless of actual pixels.\nwhat you probably want to use is tf.image.resizeBilinear to resize image to desired shape.\n" ]
[ 0 ]
[]
[]
[ "machine_learning", "react_native", "tensorflow.js" ]
stackoverflow_0074662833_machine_learning_react_native_tensorflow.js.txt
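A sketch of the fix suggested in the answer above, dropped into the body of the existing camera loop: resample the camera tensor with tf.image.resizeBilinear instead of reshaping it, then add the batch dimension the model expects. This is untested against the original app, and the normalization step is an assumption (check how your Teachable Machine model was trained):

```js
const nextImageTensor = images.next().value;
if (nextImageTensor) {
  // Resample pixels to 224x224 rather than reinterpreting the buffer
  const resized = tf.image.resizeBilinear(nextImageTensor, [224, 224]);
  // Add the batch dimension: [1, 224, 224, 3]
  const batched = resized.expandDims(0);
  // Assumption: model expects inputs scaled to [0, 1]; adjust if yours differs
  const normalized = batched.div(255);
  const prediction = await startPrediction(model, normalized);
  console.log(prediction);
  tf.dispose([nextImageTensor, resized, batched, normalized]);
}
```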
Q: Multer/Express not reading FormData sent from React Native client; From Postman reads & saves though When I send an image to my node/express server that uses multer I am able to save an image to my "uploads" folder with this server side code and request from Postman: server.js const express = require("express"); const app = express(); const cors = require("cors"); app.use(cors()); app.use(express.urlencoded({ extended: true })); app.use(express.json()); const multer = require("multer"); const upload = multer({ dest: "uploads/" }); app.post("/upload", upload.single("image"), (req, res) => { console.log("file ==>", req.file); res.json(req.file); }); app.listen(5000, () => console.log("listening at http://localhost:5000")); But when I try to send this from the RN client, it makes it past the middleware with no error, sends back the success message, but doesn't save the image and req.file is undefined. Client side code and response: import React, { useState, useEffect } from "react"; import { Button, Image, View, Platform } from "react-native"; import * as ImagePicker from "expo-image-picker"; import axios from "axios"; export default function ImagePickerExample() { const [image, setImage] = useState(null); const pickImage = async () => { let result = await ImagePicker.launchImageLibraryAsync({ mediaTypes: ImagePicker.MediaTypeOptions.All, allowsEditing: true, aspect: [4, 3], quality: 1, }); if (!result.cancelled) setImage(result.uri); }; const uploadImage = () => { const formData = new FormData(); formData.append("image", { uri: image, name: new Date() + "_profile", type: "image/png", }); axios .post("http://localhost:5000/upload", formData, { headers: { Accept: "application/json", "Content-Type": "multipart/form-data", }, }) .then((res) => console.log("result =>", res)) .catch((err) => console.log(err)); }; return ( <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}> <Button title="Pick an image from camera roll" onPress={pickImage} /> {image && ( <Image source={{ uri: image }} style={{ width: 200, height: 200 }} /> )} {image && <Button title="Send to DB" onPress={uploadImage} />} </View> ); } I can see that URI is saved to state image. I think my problem is with FormData and how I'm putting it together, but I can't spot the error when I look through docs / articles / tuts. A: I believe that you have to use var or let instead of const when using FormData. Reason being, since you are appending data to the variable, it needs to be able to change the contents of the variable's memory.
Multer/Express not reading FormData sent from React Native client; From Postman reads & saves though
When I send an image to my node/express server that uses multer I am able to save an image to my "uploads" folder with this server side code and request from Postman: server.js const express = require("express"); const app = express(); const cors = require("cors"); app.use(cors()); app.use(express.urlencoded({ extended: true })); app.use(express.json()); const multer = require("multer"); const upload = multer({ dest: "uploads/" }); app.post("/upload", upload.single("image"), (req, res) => { console.log("file ==>", req.file); res.json(req.file); }); app.listen(5000, () => console.log("listening at http://localhost:5000")); But when I try to send this from the RN client, it makes it past the middleware with no error, sends back the success message, but doesn't save the image and req.file is undefined. Client side code and response: import React, { useState, useEffect } from "react"; import { Button, Image, View, Platform } from "react-native"; import * as ImagePicker from "expo-image-picker"; import axios from "axios"; export default function ImagePickerExample() { const [image, setImage] = useState(null); const pickImage = async () => { let result = await ImagePicker.launchImageLibraryAsync({ mediaTypes: ImagePicker.MediaTypeOptions.All, allowsEditing: true, aspect: [4, 3], quality: 1, }); if (!result.cancelled) setImage(result.uri); }; const uploadImage = () => { const formData = new FormData(); formData.append("image", { uri: image, name: new Date() + "_profile", type: "image/png", }); axios .post("http://localhost:5000/upload", formData, { headers: { Accept: "application/json", "Content-Type": "multipart/form-data", }, }) .then((res) => console.log("result =>", res)) .catch((err) => console.log(err)); }; return ( <View style={{ flex: 1, alignItems: "center", justifyContent: "center" }}> <Button title="Pick an image from camera roll" onPress={pickImage} /> {image && ( <Image source={{ uri: image }} style={{ width: 200, height: 200 }} /> )} {image && <Button title="Send to DB" onPress={uploadImage} />} </View> ); } I can see that URI is saved to state image. I think my problem is with FormData and how I'm putting it together, but I can't spot the error when I look through docs / articles / tuts.
[ "I believe that you have to use var or let instead of const when using FormData. Reason being, since you are appending data to the variable, it needs to be able to change the contents of the variable's memory.\n" ]
[ 0 ]
[]
[]
[ "expo", "express", "multer", "node.js", "react_native" ]
stackoverflow_0072512812_expo_express_multer_node.js_react_native.txt
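Beyond the const-vs-let point in the answer above, one commonly reported cause of req.file being undefined with axios + FormData is manually setting the multipart Content-Type header, which omits the boundary parameter multer needs to parse the body. A hedged sketch that lets the client generate the header itself; the LAN IP is a placeholder you must replace (a physical device cannot reach the dev machine via localhost):

```js
const formData = new FormData();
formData.append("image", {
  uri: image,
  name: `${Date.now()}_profile.png`, // illustrative name, not from the original post
  type: "image/png",
});

// No manual Content-Type header: the HTTP client will add
// multipart/form-data together with the required boundary.
axios
  .post("http://192.168.x.x:5000/upload", formData)
  .then((res) => console.log("result =>", res.data))
  .catch((err) => console.log(err));
```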
Q: How to manipulate cases and variables in a data frame I am just learning data wrangling. Currently I am wrangling a data frame of 6422 observations and 20 variables. I have added an example to show my problem. I have several columns which have identical cases, which I want to summarize to 1 case (in the example, columns x1,x2,x3,x4 and rows 1,2,3). The corresponding columns value1 (in the example 1,2,3) and value2 (in the example 7,8,9) should be manipulated in such a way that they are transformed into the same row. My goal is to manipulate the data frame df1 into the data frame df7 df1 <- data.frame(x1,x2,x3,x4,stats.title,value1,value2) # x1 x2 x3 x4 stats.title value1 value2 #1 A B C D I 1 7 #2 A B C D J 2 8 #3 A B C D K 3 9 #4 E F G H I 4 10 #5 E F G H J 5 11 #6 E F G H K 6 12 df7 <- rbind(df5,df6) # x1 x2 x3 x4 value1_I value1_J value1_K value2_I value2_J value2_K #1 A B C D 1 2 3 7 8 9 #4 E F G H 4 5 6 10 11 12 Attached is the example: library(magrittr) library(dplyr) library(tidyr) x1 <- rep(c("A","E"), each=3) x2 <- rep(c("B","F"), each=3) x3 <- rep(c("C","G"), each=3) x4 <- rep(c("D","H"), each=3) stats.title <- rep(c("I","J","K"), times=2) value1 <- (1:6) value2 <- (7:12) df1 <- data.frame(x1,x2,x3,x4,stats.title,value1,value2) # x1 x2 x3 x4 stats.title value1 value2 #1 A B C D I 1 7 #2 A B C D J 2 8 #3 A B C D K 3 9 #4 E F G H I 4 10 #5 E F G H J 5 11 #6 E F G H K 6 12 df2 <- df1[1,] %>% select(-stats.title,-value1,-value2) # x1 x2 x3 x4 #1 A B C D df2.1 <- df1[4,] %>% select(-stats.title,-value1,-value2) # x1 x2 x3 x4 #4 E F G H df3 <- df1 %>% select(stats.title,value1,value2) # stats.title value1 value2 #1 I 1 7 #2 J 2 8 #3 K 3 9 #4 I 4 10 #5 J 5 11 #6 K 6 12 df4 <- pivot_wider(df3[1:3,], names_from = stats.title, values_from = c("value1","value2")) df4 <- as.data.frame(df4) df4.1 <- pivot_wider(df3[4:6,], names_from = stats.title, values_from = c("value1","value2")) df4.1 <- as.data.frame(df4.1) df5 <- data.frame(df2,df4) df6 <- data.frame(df2.1,df4.1) df7 <- rbind(df5,df6) # x1 x2 x3 x4 value1_I value1_J value1_K value2_I value2_J value2_K #1 A B C D 1 2 3 7 8 9 #4 E F G H 4 5 6 10 11 12 Thanks in advance for the help. I tried the example shown above. Because I have 6422 rows, I am looking for an efficient way to manipulate the data frame. A: You have shown enough effort. Here is the solution; the trick is to pivot_longer first: library(dplyr) library(tidyr) df1 %>% pivot_longer(c(value1, value2)) %>% pivot_wider( id_cols = x1:x4, # you can omit names_from = c(name, stats.title), values_from = value) x1 x2 x3 x4 value1_I value2_I value1_J value2_J value1_K value2_K <chr> <chr> <chr> <chr> <int> <int> <int> <int> <int> <int> 1 A B C D 1 7 2 8 3 9 2 E F G H 4 10 5 11 6 12
How to manipulate cases and variables in a data frame
I am just learning data wrangling. Currently I am wrangling a data frame of 6422 observations and 20 variables. I have added an example to show my problem. I have several columns which have identical cases, which I want to summarize to 1 case (in the example, columns x1,x2,x3,x4 and rows 1,2,3). The corresponding columns value1 (in the example 1,2,3) and value2 (in the example 7,8,9) should be manipulated in such a way that they are transformed into the same row. My goal is to manipulate the data frame df1 into the data frame df7 df1 <- data.frame(x1,x2,x3,x4,stats.title,value1,value2) # x1 x2 x3 x4 stats.title value1 value2 #1 A B C D I 1 7 #2 A B C D J 2 8 #3 A B C D K 3 9 #4 E F G H I 4 10 #5 E F G H J 5 11 #6 E F G H K 6 12 df7 <- rbind(df5,df6) # x1 x2 x3 x4 value1_I value1_J value1_K value2_I value2_J value2_K #1 A B C D 1 2 3 7 8 9 #4 E F G H 4 5 6 10 11 12 Attached is the example: library(magrittr) library(dplyr) library(tidyr) x1 <- rep(c("A","E"), each=3) x2 <- rep(c("B","F"), each=3) x3 <- rep(c("C","G"), each=3) x4 <- rep(c("D","H"), each=3) stats.title <- rep(c("I","J","K"), times=2) value1 <- (1:6) value2 <- (7:12) df1 <- data.frame(x1,x2,x3,x4,stats.title,value1,value2) # x1 x2 x3 x4 stats.title value1 value2 #1 A B C D I 1 7 #2 A B C D J 2 8 #3 A B C D K 3 9 #4 E F G H I 4 10 #5 E F G H J 5 11 #6 E F G H K 6 12 df2 <- df1[1,] %>% select(-stats.title,-value1,-value2) # x1 x2 x3 x4 #1 A B C D df2.1 <- df1[4,] %>% select(-stats.title,-value1,-value2) # x1 x2 x3 x4 #4 E F G H df3 <- df1 %>% select(stats.title,value1,value2) # stats.title value1 value2 #1 I 1 7 #2 J 2 8 #3 K 3 9 #4 I 4 10 #5 J 5 11 #6 K 6 12 df4 <- pivot_wider(df3[1:3,], names_from = stats.title, values_from = c("value1","value2")) df4 <- as.data.frame(df4) df4.1 <- pivot_wider(df3[4:6,], names_from = stats.title, values_from = c("value1","value2")) df4.1 <- as.data.frame(df4.1) df5 <- data.frame(df2,df4) df6 <- data.frame(df2.1,df4.1) df7 <- rbind(df5,df6) # x1 x2 x3 x4 value1_I value1_J value1_K value2_I value2_J value2_K #1 A B C D 1 2 3 7 8 9 #4 E F G H 4 5 6 10 11 12 Thanks in advance for the help. I tried the example shown above. Because I have 6422 rows, I am looking for an efficient way to manipulate the data frame.
[ "You showed enough efforts:\nHere is the solution: The trick is first to pivot_longer:\nlibrary(dplyr)\nlibrary(tidyr)\n\ndf1 %>% \n pivot_longer(c(value1, value2)) %>% \n pivot_wider(\n id_cols = x1:x4, # you can omit\n names_from = c(name, stats.title),\n values_from = value)\n\n x1 x2 x3 x4 value1_I value2_I value1_J value2_J value1_K value2_K\n <chr> <chr> <chr> <chr> <int> <int> <int> <int> <int> <int>\n1 A B C D 1 7 2 8 3 9\n2 E F G H 4 10 5 11 6 12\n\n" ]
[ 1 ]
[]
[]
[ "dataframe", "magrittr", "performance", "r", "tidyr" ]
stackoverflow_0074667181_dataframe_magrittr_performance_r_tidyr.txt
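If the exact column order of df7 matters (all the value1_* columns before the value2_* ones, as in the question's target), recent tidyr versions expose a names_vary argument on pivot_wider; a sketch, assuming tidyr >= 1.2:

```r
library(dplyr)
library(tidyr)

df1 %>%
  pivot_longer(c(value1, value2)) %>%
  pivot_wider(
    id_cols = x1:x4,
    names_from = c(name, stats.title),
    values_from = value,
    names_vary = "slowest")  # yields value1_I, value1_J, value1_K, value2_I, ...
```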
Q: Docker Compose ECS Fargate: attach a custom security group to the load balancer I want to attach a custom security group to the load balancer (to accept traffic from CloudFront) for my ECS deployment from Docker. Below is my docker-compose.yml file; I need to attach security group sg-0828b05baf4899773 to the load balancer that gets created by CloudFormation. Alternatively, I would be open to a way to use CloudFront as part of docker-compose, where the CloudFront distribution is created as part of CloudFormation. services: application: image: 000.dkr.ecr.us-east-1.amazonaws.com/org/pos:latest platform: linux/amd64 env_file: .env.${ENV} build: context: "." ports: - 80:80 restart: always networks: - mail - app_network postfix: image: 000.dkr.ecr.us-east-1.amazonaws.com/org/postfix:latest platform: linux/amd64 build: context: "postfix" container_name: postfix networks: - mail - app_network hostname: postfix restart: always networks: app_network: name: tcetra_network mail: name: postfix-mail x-aws-cloudformation: Resources: ApplicationTCP80Listener: Properties: Certificates: - CertificateArn: "arn:aws:acm:us-east-1:088048903606:certificate/d7a330ce-77d6-4753-bcf0-913f8ac0cee3" Protocol: HTTPS Port: 443 ApplicationTCP80TargetGroup: Properties: HealthCheckPath: /health.html Matcher: HttpCode: 200-499 A: The docker-compose ECS support generates a CloudFormation template from your docker-compose.yaml file. Per the official documentation, you can create CloudFormation overlays in the docker-compose file via x-aws-cloudformation. You are already doing this in your docker-compose file to specify a custom SSL certificate and a custom health check. To customize other aspects of the generated CloudFormation template, you should first run docker compose convert to generate a CloudFormation stack file from your Compose file. You can then view that file to find the security group resource, and then include the needed changes for that resource in your docker-compose.yaml file.
Docker Compose ECS Fargate: attach a custom security group to the load balancer
I want to attach a custom security group to the load balancer (to accept traffic from CloudFront) for my ECS deployment from Docker. Below is my docker-compose.yml file; I need to attach security group sg-0828b05baf4899773 to the load balancer that gets created by CloudFormation. Alternatively, I would be open to a way to use CloudFront as part of docker-compose, where the CloudFront distribution is created as part of CloudFormation. services: application: image: 000.dkr.ecr.us-east-1.amazonaws.com/org/pos:latest platform: linux/amd64 env_file: .env.${ENV} build: context: "." ports: - 80:80 restart: always networks: - mail - app_network postfix: image: 000.dkr.ecr.us-east-1.amazonaws.com/org/postfix:latest platform: linux/amd64 build: context: "postfix" container_name: postfix networks: - mail - app_network hostname: postfix restart: always networks: app_network: name: tcetra_network mail: name: postfix-mail x-aws-cloudformation: Resources: ApplicationTCP80Listener: Properties: Certificates: - CertificateArn: "arn:aws:acm:us-east-1:088048903606:certificate/d7a330ce-77d6-4753-bcf0-913f8ac0cee3" Protocol: HTTPS Port: 443 ApplicationTCP80TargetGroup: Properties: HealthCheckPath: /health.html Matcher: HttpCode: 200-499
[ "The docker-compose ECS support generates a CloudFormation template from your docker-compose.yaml file. Per the official documentation, You can create CloudFormation overlays in the docker-compose file via x-aws-cloudformation.\nYou are already doing this in your docker-compose file to specify custom SSL certificate, and a custom health check.\nTo customize other aspects of the generated CloudFormation template, you should first run docker compose convert to generate a CloudFormation stack file from your Compose file. You can then view that file to find the security group resource, and then include the needed changes for that resource in your docker-compose.yaml file.\n" ]
[ 0 ]
[]
[]
[ "amazon_ecs", "docker", "docker_compose" ]
stackoverflow_0074663897_amazon_ecs_docker_docker_compose.txt
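To make the answer above concrete, a hedged sketch of what the overlay could look like once you have located the load-balancer resource in the output of docker compose convert. The logical ID LoadBalancer is typical of templates generated by the Compose ECS integration, but it must be verified against your own stack file:

```yaml
x-aws-cloudformation:
  Resources:
    LoadBalancer:                    # verify this logical ID via `docker compose convert`
      Properties:
        SecurityGroups:
          - sg-0828b05baf4899773     # the CloudFront-facing SG from the question
```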
Q: Split a torch tensor: max size and end of sentence I would like to split a tensor into several tensors with torch in Python. The tensor is the tokenization of a long text. First, here is what I had done: tensor = tensor([[ 3746, 3120, 1024, ..., 2655, 24051, 2015]]) #size 14714 result = tensor.split(510) It works, but now I would like to refine this and make it so that it can't split in the middle of a sentence, only at the end of one, by recognizing the dot '.' (token 1012). Of course the resulting tensors will not all be the same size, but each will have to respect a maximum size (510 for example). Thanks for your help A: I tried out a solution; it's not straightforward, but it does the trick. Note that you might want to install the more_itertools library, which I used to do the split. from transformers import BertTokenizerFast import typer import torch from pathlib import Path from typing import List from more_itertools import split_after def open_txt(txt_path:Path) -> List[str]: with open(txt_path, 'r') as txt_file: return [txt.replace('\n', '') for txt in txt_file.readlines()] def pad_token(input_ids, pad_length=510): split_input_ids = list(split_after(input_ids, lambda x: x == 1012)) # Pad to 510 new_input_ids = [] for ids in split_input_ids: ids += [0] * (pad_length - len(ids)) new_input_ids.append(ids) return new_input_ids def main( text_path:Path=typer.Option('sent.txt') ): tokenizer:BertTokenizerFast = BertTokenizerFast.from_pretrained('bert-base-uncased') sentence = open_txt(text_path) sentence = ''.join(sentence) features = tokenizer( sentence, padding='max_length' ) input_ids = features['input_ids'] new_input_ids = pad_token(input_ids, pad_length=600) # print(tokenizer.decode(new_input_ids[0])) # convert to torch new_input_ids = torch.tensor(new_input_ids) # features['input_ids'] = new_input_ids print(new_input_ids[0]) if __name__ == '__main__': typer.run(main) A: Not sure if there's a built-in function in PyTorch to do what you asked, which involves several steps: Count how many sentences there are; assign this number to n_sents. Compute the sentences' lengths and the indices where they start in tensor. Let 1-D tensors length and start store the lengths and the indices respectively. In other words, length[i] and start[i] are the length and the start index of the i-th sentence respectively, i.e. tensor[start[i]] is the first token of the i-th sentence. Create a result tensor result = torch.full((n_sents, max_len), pad_value) where max_len = max(length). Assign result[i, :length[i]] = tensor[start[i] : start[i]+length[i]] for all i in range(n_sents). A side comment: detecting sentence endings by recognizing periods doesn't always work; e.g., "I went to dr. Smith yesterday." is one sentence but has two periods.
Split a torch tensor: max size and end of sentence
I would like to split a tensor into several tensors with torch in Python. The tensor is the tokenization of a long text. First, here is what I had done: tensor = tensor([[ 3746, 3120, 1024, ..., 2655, 24051, 2015]]) #size 14714 result = tensor.split(510) It works, but now I would like to refine this and make it so that it can't split in the middle of a sentence, only at the end of one, by recognizing the dot '.' (token 1012). Of course the resulting tensors will not all be the same size, but each will have to respect a maximum size (510 for example). Thanks for your help
[ "i tried it out a solution but its not straightforward but does the trick\noo and you might want to install this library more_itertools, used this to do the split\nfrom transformers import BertTokenizerFast\nimport typer\nimport torch\n\nfrom pathlib import Path\nfrom typing import List\nfrom more_itertools import split_after\n\ndef open_txt(txt_path:Path) -> List[str]:\n with open(txt_path, 'r') as txt_file:\n return [txt.replace('\\n', '') for txt in txt_file.readlines()]\n \ndef pad_token(input_ids, pad_length=510):\n split_input_ids = list(split_after(input_ids, lambda x: x == 1012))\n \n # Pad to 510\n new_input_ids = []\n for ids in split_input_ids:\n ids += [0] * (pad_length - len(ids))\n new_input_ids.append(ids)\n \n return new_input_ids\n\ndef main(\n text_path:Path=typer.Option('sent.txt')\n):\n tokenizer:BertTokenizerFast = BertTokenizerFast.from_pretrained('bert-base-uncased')\n \n sentence = open_txt(text_path)\n sentence = ''.join(sentence)\n \n features = tokenizer(\n sentence, padding='max_length'\n )\n \n input_ids = features['input_ids']\n \n new_input_ids = pad_token(input_ids, pad_length=600)\n # print(tokenizer.decode(new_input_ids[0]))\n # convert to torch\n new_input_ids = torch.tensor(new_input_ids)\n # features['input_ids'] = new_input_ids\n \n print(new_input_ids[0])\n\nif __name__ == '__main__':\n typer.run(main)\n\n\n", "Not sure if there's a built-in function in PyTorch to do what you asked, which involves several steps:\n\nCount how many sentences there are; assign this number to n_sents.\nCompute the sentences' lengths and the indices where they start in tensor. Let 1-D tensors length and start store the lengths and the indices respectively. In other words, length[i] and start[i] are the length and the start index of the i-th sentence respectively, i.e. tensor[start[i]] is the first token of the i-th sentence.\nCreate a result tensor result = torch.full((n_sents, max_len), pad_value) where max_len = max(length).\nAssign result[i, :length[i]] = tensor[start[i] : start[i]+length[i]] for all i in range(n_sents).\n\nA side comment: detecting sentence ending by recognizing periods doesn't always work e.g., I went to dr. Smith yesterday. is one sentence but has two periods.\n" ]
[ 0, 0 ]
[]
[]
[ "nlp", "python", "pytorch", "tensor", "torch" ]
stackoverflow_0074488479_nlp_python_pytorch_tensor_torch.txt
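The second answer above describes its algorithm in prose only; here is a compact sketch of those steps in plain torch ops. It assumes the token stream ends with a period (token id 1012, per the question) and uses 0 for padding; a trailing fragment without a final period, and the 510-token maximum from the question, would need extra handling:

```python
import torch

def split_on_periods(tensor, period_id=1012, pad_id=0):
    ids = tensor.flatten()
    # Sentence ends are the positions right after each period token
    ends = (ids == period_id).nonzero(as_tuple=True)[0] + 1
    starts = torch.cat([torch.tensor([0]), ends[:-1]])
    lengths = ends - starts                       # length of each sentence
    # Result tensor padded to the longest sentence
    result = torch.full((len(starts), int(lengths.max())), pad_id, dtype=ids.dtype)
    for i, (s, l) in enumerate(zip(starts.tolist(), lengths.tolist())):
        result[i, :l] = ids[s:s + l]              # copy sentence i into its row
    return result
```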
Q: How to interact with a turtle when it is invisible? I have been creating a game with turtle, and I wanted to make the background change when a certain area is clicked. So I used a turtle with the onclick() method, but then I realized it did not look good against the background, so I tried the hideturtle() method to hide it. But when I hid the turtle, the clicking function did not work. This is something like my code: t = turtle.Turtle() t.hideturtle() def my_function(x, y): print("this function would change the bg but that doesn't matter right now") t.onclick(my_function, btn=1, add=None) As you can see, if hideturtle() is not there, the function runs when the turtle is clicked. But when hideturtle() is called, the turtle doesn't respond to clicks. A: Passed your question to ChatGpt, that's his answer :) : It sounds like you're running into a problem where the turtle becomes unresponsive to clicks after you hide it. This is likely because the turtle's clickable area is also hidden when you hide the turtle. One solution to this problem would be to create a separate turtle that is used only for clicking, and keep it visible at all times. You could do this by creating a new turtle, setting its shape to "blank", and then using the onclick() method to register your function. This way, the turtle will be invisible but still respond to clicks. Here is an example of how you could do this: import turtle # Create a new turtle for clicking click_turtle = turtle.Turtle() # Set the shape to "blank" to make it invisible click_turtle.shape("blank") # Register the function to run when the turtle is clicked click_turtle.onclick(my_function, btn=1, add=None) # Hide the original turtle t.hideturtle() By using this approach, you can hide the original turtle and still have a visible area that responds to clicks.
How to interact with a turtle when it is invisible?
I have been creating a game with turtle, and I wanted to make the background change when a certain area is clicked. So I used a turtle with the onclick() method, but then I realized it did not look good against the background, so I tried the hideturtle() method to hide it. But when I hid the turtle, the clicking function did not work. This is something like my code: t = turtle.Turtle() t.hideturtle() def my_function(x, y): print("this function would change the bg but that doesn't matter right now") t.onclick(my_function, btn=1, add=None) As you can see, if hideturtle() is not there, the function runs when the turtle is clicked. But when hideturtle() is called, the turtle doesn't respond to clicks.
[ "Passed your question to ChatGpt, that's his answer :) :\n\nIt sounds like you're running into a problem where the turtle becomes\nunresponsive to clicks after you hide it. This is likely because the\nturtle's clickable area is also hidden when you hide the turtle.\nOne solution to this problem would be to create a separate turtle that\nis used only for clicking, and keep it visible at all times. You could\ndo this by creating a new turtle, setting its shape to \"blank\", and\nthen using the onclick() method to register your function. This way,\nthe turtle will be invisible but still respond to clicks.\nHere is an example of how you could do this:\nimport turtle\n\n# Create a new turtle for clicking\nclick_turtle = turtle.Turtle()\n\n# Set the shape to \"blank\" to make it invisible\nclick_turtle.shape(\"blank\")\n\n# Register the function to run when the turtle is clicked\nclick_turtle.onclick(my_function, btn=1, add=None)\n\n# Hide the original turtle\nt.hideturtle()\n\nBy using this approach, you can hide the original turtle and still\nhave a visible area that responds to clicks.\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_turtle", "turtle_graphics" ]
stackoverflow_0074667472_python_python_turtle_turtle_graphics.txt
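An alternative to the per-turtle click handler in the recorded answer: since the actual goal is reacting to clicks on a screen region, turtle's screen-level onclick sidesteps the invisible-turtle problem entirely. A sketch; the region bounds are made-up examples, not values from the question:

```python
import turtle

screen = turtle.Screen()

def on_screen_click(x, y):
    # Hypothetical clickable region; substitute your own bounds
    if -50 <= x <= 50 and -50 <= y <= 50:
        print("region clicked, change the background here")

screen.onclick(on_screen_click)  # fires for clicks anywhere on the canvas
turtle.mainloop()
```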
Q: How to convert a 5-digit number to a date in Python I would like to convert 44562 (an int64, 5-digit number) to a date format like 1/1/2022. Out[5]: 0 44562 1 44562 2 44563 3 44563 4 44564 Name: Date, dtype: int64 I tried df['Date'].apply(lambda x: (datetime.utcfromtimestamp(0) + timedelta(int(x))).strftime("%m-%d-%Y")) but the output dates are not correct. Please help me fix the issue. Out[13]: 0 01-03-2092 1 01-03-2092 2 01-04-2092 3 01-04-2092 4 01-05-2092 Name: Date2, dtype: object A: You very nearly had it: df['Date'].apply(lambda x: (datetime(1899, 12, 30) + timedelta(days=int(x))).strftime("%m/%d/%Y"))
How to convert a 5-digit number to a date in Python
I would like to convert 44562 (an int64, 5-digit number) to a date format like 1/1/2022. Out[5]: 0 44562 1 44562 2 44563 3 44563 4 44564 Name: Date, dtype: int64 I tried df['Date'].apply(lambda x: (datetime.utcfromtimestamp(0) + timedelta(int(x))).strftime("%m-%d-%Y")) but the output dates are not correct. Please help me fix the issue. Out[13]: 0 01-03-2092 1 01-03-2092 2 01-04-2092 3 01-04-2092 4 01-05-2092 Name: Date2, dtype: object
[ "You very nearly had it:\ndf['Date'].apply(lambda x: (datetime(1899, 12, 30) + timedelta(days=int(x))).strftime(\"%m/%d/%Y\"))\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074667378_python.txt
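The 1899-12-30 base in the answer above is Excel's serial-date epoch, which is where 5-digit day counts like 44562 usually come from (44562 days after that epoch is 2022-01-01). Since the column is a pandas Series, a vectorized sketch that should be equivalent:

```python
import pandas as pd

# Interpret the values as day counts from Excel's epoch
df['Date'] = pd.to_datetime(df['Date'], unit='D', origin='1899-12-30')
df['Date'] = df['Date'].dt.strftime('%m/%d/%Y')  # e.g. 01/01/2022
```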
Q: Solving large-scale nonlinear system using exact Newton's method in SciPy I am trying to solve a large-scale nonlinear system using the exact Newton method in SciPy. In my application, the Jacobian is easy to assemble (and factorize) as a sparse matrix. It seems that all methods available in scipy.optimize.root approximate the Jacobian in one way or another, and I can't find a way to use Newton's method using the API that is discussed in SciPy's documentation. Nonetheless, using the internal API, I have managed to use Newton's method with the following code: from scipy.optimize.nonlin import nonlin_solve x, info = nonlin_solve(f, x0, jac, line_search=False) where f(x) is the residual and jac(x) is a callable that returns the Jacobian at x as a sparse matrix. However, I am not sure whether this function is meant to be used outside SciPy, or whether it is subject to change without notice. Would this be the recommended approach? A: It is meant to be used. SciPy's private functions that are not meant to be used from the outside start with a _. This was confirmed by the SciPy team in an issue I raised recently: cf. https://github.com/scipy/scipy/issues/17510
Solving large-scale nonlinear system using exact Newton's method in SciPy
I am trying to solve a large-scale nonlinear system using the exact Newton method in SciPy. In my application, the Jacobian is easy to assemble (and factorize) as a sparse matrix. It seems that all methods available in scipy.optimize.root approximate the Jacobian in one way or another, and I can't find a way to use Newton's method using the API that is discussed in SciPy's documentation. Nonetheless, using the internal API, I have managed to use Newton's method with the following code: from scipy.optimize.nonlin import nonlin_solve x, info = nonlin_solve(f, x0, jac, line_search=False) where f(x) is the residual and jac(x) is a callable that returns the Jacobian at x as a sparse matrix. However, I am not sure whether this function is meant to be used outside SciPy, or whether it is subject to change without notice. Would this be the recommended approach?
[ "It is meant to be used.\nScipy's private functions that are not meant to be used from the outside start with a _.\nThis was confirmed by the scipy's team in an issue I raised recently: cf https://github.com/scipy/scipy/issues/17510\n" ]
[ 0 ]
[]
[]
[ "optimization", "python", "scipy" ]
stackoverflow_0068297903_optimization_python_scipy.txt
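A self-contained sketch of the pattern from the question, with a toy 2-variable system and a SciPy sparse Jacobian. It mirrors the call the asker reports working; full_output=True is added here so the second return value exists, and the import path is the one used in the question (newer SciPy releases may relocate the module):

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.optimize.nonlin import nonlin_solve

def f(x):
    # Residual of: x0^2 + x1 - 2 = 0,  x0 + x1^2 - 2 = 0
    return np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**2 - 2.0])

def jac(x):
    # Exact Jacobian assembled as a sparse matrix
    return csc_matrix([[2.0 * x[0], 1.0],
                       [1.0, 2.0 * x[1]]])

x0 = np.array([2.0, 0.5])
x, info = nonlin_solve(f, x0, jac, line_search=False, full_output=True)
print(x, info)
```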
Q: Is there any way to store prompted inputs into a .json file and reuse the stored inputs from the json file in the next session/process? I'm creating an account generator using Puppeteer. There are certain user inputs that are needed, so I have it prompt the user to input the necessary variables. I was wondering: is it possible to store these inputs in a JSON file and then pull the stored inputs from the JSON file and reuse them for the next session, rather than having to input the required variables over and over again after every new process? The code below shows the packages I have required and the variables that are prompted. const prompt = require("prompt-sync") ({sigint: true }); const fs = require("fs").promises; const request = require('request'); const fetch = require('node-fetch') const { ImapFlow } = require('imapflow'); const random_useragent = require('random-useragent'); const { scrollPageToBottom } = require('puppeteer-autoscroll-down') const { scrollPageToTop } = require('puppeteer-autoscroll-down') const { Webhook, MessageBuilder } = require('discord-webhook-node'); const StealthPlugin = require('puppeteer-extra-plugin-stealth'); puppeteer.use(StealthPlugin()); ( async () => { const browser = await puppeteer.launch({ headless: false, // false = Shows Browser | true = Browser Not Shown executablePath: `/Applications/Google Chrome.app/Contents/MacOS/Google Chrome`, userDataDir: `/Users/senpai/Library/Application Support/Google/Chrome/Default`, ignoreHTTPSErrors: true, ignoreDefaultArgs: ['--enable-automation'], args: [ `--disable-blink-features=AutomationControlled`, `--enable-blink-feautres=IdleDetection`, `--window-size=1920,1080`, `--disable-features=IsolateOrigins,site-per-process`, `--blink-settings=imagesEnabled=true` ] }); // User Inputs let webhook = prompt ("Input Discord Webhook: "); let catchall = prompt ("Input Your Catchall - Exp: catchall.com: "); A: It seems that you already imported the built-in fs module, so you could use it to check whether there's an existing JSON file to be used, otherwise prompt the user, and then store the values in the said JSON file. Something like this: const fs = require("fs"); const prompt = require("prompt-sync")({ sigint: true }); const filename = "values.json"; function getValues() { if (fs.existsSync(filename)) { return JSON.parse(fs.readFileSync(filename)); } const values = { webhook: prompt("Input Discord Webhook: "), catchall: prompt("Input Your Catchall - Exp: catchall.com: "), }; fs.writeFileSync(filename, JSON.stringify(values)); return values; } const { webhook, catchall } = getValues();
Is there any way to store prompted inputs into a .json file and reuse the stored inputs from the json file in the next session/process?
I'm creating an account generator using Puppeteer. There are certain user inputs that are needed, so I have it prompt the user to input the necessary variables. I was wondering: is it possible to store these inputs in a JSON file and then pull the stored inputs from the JSON file and reuse them for the next session, rather than having to input the required variables over and over again after every new process? The code below shows the packages I have required and the variables that are prompted. const prompt = require("prompt-sync") ({sigint: true }); const fs = require("fs").promises; const request = require('request'); const fetch = require('node-fetch') const { ImapFlow } = require('imapflow'); const random_useragent = require('random-useragent'); const { scrollPageToBottom } = require('puppeteer-autoscroll-down') const { scrollPageToTop } = require('puppeteer-autoscroll-down') const { Webhook, MessageBuilder } = require('discord-webhook-node'); const StealthPlugin = require('puppeteer-extra-plugin-stealth'); puppeteer.use(StealthPlugin()); ( async () => { const browser = await puppeteer.launch({ headless: false, // false = Shows Browser | true = Browser Not Shown executablePath: `/Applications/Google Chrome.app/Contents/MacOS/Google Chrome`, userDataDir: `/Users/senpai/Library/Application Support/Google/Chrome/Default`, ignoreHTTPSErrors: true, ignoreDefaultArgs: ['--enable-automation'], args: [ `--disable-blink-features=AutomationControlled`, `--enable-blink-feautres=IdleDetection`, `--window-size=1920,1080`, `--disable-features=IsolateOrigins,site-per-process`, `--blink-settings=imagesEnabled=true` ] }); // User Inputs let webhook = prompt ("Input Discord Webhook: "); let catchall = prompt ("Input Your Catchall - Exp: catchall.com: ");
[ "It seems that you already imported the built-in fs module, so you could use it to check whether there's an existing JSON file to be used, otherwise prompt the user, and then store the values in the said JSON file. Something like this:\nconst fs = require(\"fs\");\nconst prompt = require(\"prompt-sync\")({ sigint: true });\n\nconst filename = \"values.json\";\n\nfunction getValues() {\n if (fs.existsSync(filename)) {\n return JSON.parse(fs.readFileSync(filename));\n }\n\n const values = {\n webhook: prompt(\"Input Discord Webhook: \"),\n catchall: prompt(\"Input Your Catchall - Exp: catchall.com: \"),\n };\n\n fs.writeFileSync(filename, JSON.stringify(values));\n\n return values;\n}\n\nconst { webhook, catchall } = getValues();\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "node.js", "puppeteer" ]
stackoverflow_0074667368_javascript_node.js_puppeteer.txt
Q: The term 'flutter' is not recognized on Windows I ran the command "flutter build apk" and got this error (error screenshot attached): flutter: The term 'flutter' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + flutter build apk + ~~~~~~~ + CategoryInfo : ObjectNotFound: (flutter:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException This is my config for Flutter (config screenshot attached), and this is the Flutter location on my computer (location screenshot attached). Thank you for your support. A: You have to add Flutter to your PATH: From the Start search bar, enter ‘env’ and select Edit environment variables for your account. Under User variables, check if there is an entry called Path: If the entry exists, append the full path to flutter\bin using ; as a separator from existing values. If the entry doesn’t exist, create a new user variable named Path with the full path to flutter\bin as its value. You have to close and reopen any existing console windows for these changes to take effect.
The term 'flutter' is not recognized on Windows
I was run command "flutter build apk" then I got error : This is image error flutter: The term 'flutter' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At line:1 char:1 + flutter build apk + ~~~~~~~ + CategoryInfo : ObjectNotFound: (flutter:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException This is my config for flutter This is image config And location flutter on my computer This is image location flutter Thank you for supporting
[ "You have to add Flutter to path:\n\nFrom the Start search bar, enter ‘env’ and select Edit environment variables for your account.\nUnder User variables check if there is an entry called Path:\n\nIf the entry exists, append the full path to flutter\\bin using ; as a\nseparator from existing values.\nIf the entry doesn’t exist, create a new user variable named Path with the full path to flutter\\bin as its value.\n\n\n\nYou have to close and reopen any existing console windows for these changes to take effect.\n" ]
[ 0 ]
[]
[]
[ "apk", "build", "config", "flutter", "release" ]
stackoverflow_0074667433_apk_build_config_flutter_release.txt
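A PowerShell sketch of the manual steps in the answer above; the install path is a guess based on common setups, so substitute the flutter folder shown in your screenshots:

```powershell
# Appends flutter\bin to the *user* Path permanently (hypothetical install path)
$flutterBin = 'C:\src\flutter\bin'
$userPath = [Environment]::GetEnvironmentVariable('Path', 'User')
[Environment]::SetEnvironmentVariable('Path', "$userPath;$flutterBin", 'User')

# Open a NEW terminal afterwards, then verify:
flutter --version
```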