Dataset fields: content (string, 86 to 88.9k chars), title (string, 0 to 150 chars), question (string, 1 to 35.8k chars), answers (list), answers_scores (list), non_answers (list), non_answers_scores (list), tags (list), name (string, 30 to 130 chars).
Q: How to stop auto update of NodeJS I am using Node Sass and Node.js in my project. Whenever I build my project (using Yarn), Node.js automatically gets updated to the latest version, which is not supported by Node Sass, so after a while my builds fail and the application stops working. Whenever I uninstall Node.js and reinstall version 9, the project starts working again. A: Try running the command nvm list, to see what version is configured as your "default": $ nvm list -> v16.18.1 v18.12.1 system default -> v18.12.1 Try changing the default version via the nvm install --default <VERSION> command: $ nvm install --default v16.18.1 v16.18.1 is already installed. Now using node v16.18.1 (npm v8.19.2) default -> v16.18.1
How to stop auto update of NodeJS
I am using Node Sass and Node.js in my project. Whenever I build my project (using Yarn), Node.js automatically gets updated to the latest version, which is not supported by Node Sass, so after a while my builds fail and the application stops working. Whenever I uninstall Node.js and reinstall version 9, the project starts working again.
[ "Try running the command nvm list, to see what version is configured as your \"default\":\n$ nvm list\n-> v16.18.1\n v18.12.1\n system\ndefault -> v18.12.1\n\nTry changing the default version via the nvm install --default <VERSION> command:\n$ nvm install --default v16.18.1\nv16.18.1 is already installed.\nNow using node v16.18.1 (npm v8.19.2)\ndefault -> v16.18.1\n\n" ]
[ 0 ]
[]
[]
[ "node.js", "node_sass" ]
stackoverflow_0053914067_node.js_node_sass.txt
Q: can't start rabbitmq-server after installation I'm trying to use rabbitmq for a django tutorial but when I want to start the server I get this error: ~$ sudo rabbitmq-server Configuring logger redirection 14:49:57.041 [error] 14:49:57.044 [error] BOOT FAILED BOOT FAILED 14:49:57.044 [error] =========== =========== 14:49:57.044 [error] ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit@wss ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit@wss 14:49:57.045 [error] 14:49:58.046 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","wss"} in context start_error 14:49:58.046 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138 {"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"wss\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"} Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelau Crash dump is being written to: erl_crash.dump...done I've searched for port to see that if it's in use or not and I used lsof -i :25672 and I get nothing. I don't know too much about these things so if you need anything please tell me. A: Try: sudo lsof -i :25672 sudo kill <PID> sudo rabbitmq-server Where <PID> is the process ID that is occupying port 25672 A: I have encountered this issue. I figured out that this issue is coming because the rabbitmq-server is already running on the machine. I have used the following command rabbitmqctl.bat status to know the status of the rabbitmq-server. This helped me to know if the server is up or down. If it is up, this could the reason you are getting the error that you have specified in your post. You can issue the following command to make the server down rabbitmqctl.bat stop Now you can try starting the rabbitmq-server by issuing the following command rabbitmq-server start Note that I am using Windows. And I have executed these commands by pointing the command prompt to C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14\sbin as my rabbitmq installation directory is C:\Program Files\RabbitMQ\rabbitmq_server-3.8.14. A: I have encountered this before. Here is what caused it and how I fixed it: This is one of those commands which requires the magic word sudo (i.e it needs a superuser privilege). If you forget to add sudo to the command, it begins the process but later fails when it hits a superuser-only roadblock. This leaves you with an incomplete process. Now when you decide to add sudo, it attempts the same process again but finds out that someone without the right privilege has made a mess or is still messing around. Then the solution will be to cancel out whatever the first command has started and try again. sudo lsof -i :25672 This list out details about the port 25672 You will see the PID (process ID) e.g 1301 Then stop the process on that port with: sudo kill <PID> for example, sudo kill 1301 And make sure you are killing the right process if not you may get into trouble. 
Now, retry the command with sudo: sudo rabbitmq-server ALSO, In most cases, this error occurs because without deliberately stopping the rabbitmq-server, it always keeps running even after you restart you system. A: another way to stop rabitmq server windows+R then type "services.msc" and then find for RabitMq.slelect and stop from left top corner. Then re run your rabitmq server. A: -Hi guys, I am putting up an answer that can help Googlers to run multiple rabbitmq-server on the same machine. Trying to achieve the latter, I ran into a similar error reported in the first place and solved that by defining: export RABBITMQ_DIST_PORT=anything_other_than_25672 as stated in the documentation: https://www.rabbitmq.com/networking.html#epmd-inet-dist-port-range A: if you are using windows go to task manager and stop rabbitmq from running... then reload the rabbitmq-server A: For Linux others answered but in Windows you should press Ctrl+Alt+delete and select task management and in that end proccess that depends on erlang. Note that it requires Administrator previlage. Now enter this command to start rabbitmq-server: rabbitmq-server start Every time you restart your computer you should do these steps.For prevent do them again you should stop rabbitmq service from startup services. A: went through same problem in windows, it is already running after installation as a service so just enable the plugins from the rabbitmq commandline by entering the code as rabbitmq-plugins enable management_plugin than go to the localhost:15672 and good to go. A: This means that your port 25672 is already in use try: - sudo lsof -i :25672 sudo kill <PID> and now start your rabbitmq server using sudo rabbitmq-server
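A quick way to check whether something is already holding the distribution port that the error message complains about, without relying on lsof, is a short Python sketch (the host and port below assume the default RabbitMQ inter-node/CLI port 25672):

    import socket

    def port_in_use(host="127.0.0.1", port=25672):
        # connect_ex returns 0 when something accepts the connection, i.e. the port is taken
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(1)
            return s.connect_ex((host, port)) == 0

    print("25672 in use:", port_in_use())

If this prints True before you start the server, a stale rabbit node or Erlang beam process is still holding the port, which matches the dist_port_already_used error shown above.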
can't start rabbitmq-server after installation
I'm trying to use rabbitmq for a django tutorial but when I want to start the server I get this error: ~$ sudo rabbitmq-server Configuring logger redirection 14:49:57.041 [error] 14:49:57.044 [error] BOOT FAILED BOOT FAILED 14:49:57.044 [error] =========== =========== 14:49:57.044 [error] ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit@wss ERROR: could not bind to distribution port 25672, it is in use by another node: rabbit@wss 14:49:57.045 [error] 14:49:58.046 [error] Supervisor rabbit_prelaunch_sup had child prelaunch started with rabbit_prelaunch:run_prelaunch_first_phase() at undefined exit with reason {dist_port_already_used,25672,"rabbit","wss"} in context start_error 14:49:58.046 [error] CRASH REPORT Process <0.153.0> with 0 neighbours exited with reason: {{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelaunch_app,start,[normal,[]]}} in application_master:init/4 line 138 {"Kernel pid terminated",application_controller,"{application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,\"rabbit\",\"wss\"}}},{rabbit_prelaunch_app,start,[normal,[]]}}}"} Kernel pid terminated (application_controller) ({application_start_failure,rabbitmq_prelaunch,{{shutdown,{failed_to_start_child,prelaunch,{dist_port_already_used,25672,"rabbit","wss"}}},{rabbit_prelau Crash dump is being written to: erl_crash.dump...done I've searched for port to see that if it's in use or not and I used lsof -i :25672 and I get nothing. I don't know too much about these things so if you need anything please tell me.
[ "Try:\n\nsudo lsof -i :25672\nsudo kill <PID>\nsudo rabbitmq-server\n\nWhere <PID> is the process ID that is occupying port 25672\n", "I have encountered this issue. I figured out that this issue is coming because the rabbitmq-server is already running on the machine.\nI have used the following command\n\nrabbitmqctl.bat status to know the status of the rabbitmq-server. This helped me to know if the server is up or down.\n\nIf it is up, this could the reason you are getting the error that you have specified in your post.\nYou can issue the following command to make the server down\n\nrabbitmqctl.bat stop\n\nNow you can try starting the rabbitmq-server by issuing the following command\n\nrabbitmq-server start\n\nNote that I am using Windows. And I have executed these commands by pointing the command prompt to C:\\Program Files\\RabbitMQ\\rabbitmq_server-3.8.14\\sbin as my rabbitmq installation directory is C:\\Program Files\\RabbitMQ\\rabbitmq_server-3.8.14.\n", "I have encountered this before. Here is what caused it and how I fixed it:\nThis is one of those commands which requires the magic word sudo (i.e it needs a superuser privilege).\nIf you forget to add sudo to the command, it begins the process but later fails when it hits a superuser-only roadblock. This leaves you with an incomplete process. Now when you decide to add sudo, it attempts the same process again but finds out that someone without the right privilege has made a mess or is still messing around.\nThen the solution will be to cancel out whatever the first command has started and try again.\nsudo lsof -i :25672\n\nThis list out details about the port 25672\nYou will see the PID (process ID) e.g 1301\nThen stop the process on that port with:\nsudo kill <PID>\n\nfor example, sudo kill 1301\nAnd make sure you are killing the right process if not you may get into trouble.\nNow, retry the command with sudo:\nsudo rabbitmq-server\n\nALSO,\nIn most cases, this error occurs because without deliberately stopping the rabbitmq-server, it always keeps running even after you restart you system.\n", "another way to stop rabitmq server windows+R then type \"services.msc\" and then find for RabitMq.slelect and stop from left top corner.\nThen re run your rabitmq server.\n", "-Hi guys, I am putting up an answer that can help Googlers to run multiple rabbitmq-server on the same machine. 
Trying to achieve the latter, I ran into a similar error reported in the first place and solved that by defining:\n\nexport RABBITMQ_DIST_PORT=anything_other_than_25672\n\nas stated in the documentation:\nhttps://www.rabbitmq.com/networking.html#epmd-inet-dist-port-range\n", "if you are using windows go to task manager and stop rabbitmq from running...\nthen reload the rabbitmq-server\n", "For Linux others answered but in Windows you should press Ctrl+Alt+delete and select task management and in that end proccess that depends on erlang.\nNote that it requires Administrator previlage.\nNow enter this command to start rabbitmq-server:\n\nrabbitmq-server start\n\nEvery time you restart your computer you should do these steps.For prevent do them again you should stop rabbitmq service from startup services.\n", "went through same problem in windows, it is already running after installation as a service\nso just enable the plugins from the rabbitmq commandline by entering the code as\nrabbitmq-plugins enable management_plugin\n\nthan go to the localhost:15672 and good to go.\n", "This means that your port 25672 is already in use\ntry: -\n\nsudo lsof -i :25672\n\nsudo kill <PID>\n\n\nand now start your rabbitmq server using\n\nsudo rabbitmq-server\n\n" ]
[ 25, 10, 4, 3, 1, 1, 0, 0, 0 ]
[]
[]
[ "rabbitmq" ]
stackoverflow_0063263177_rabbitmq.txt
Q: Why are .weights files the same size when learning YOLO? I made a YOLO model to detect cars and trained it on Google Colaboratory. I then added more data, created new datasets, and trained new versions to increase accuracy. The new dataset had about seven times more data than the previous version (70k images). But why does training take the same time as with the previous version? Also, the size of the weights file is the same, and I don't know why. Why is the training time on 10k images the same as on 70k images? Why is the weights file the same size for 10k images and 70k images? A: Training time will be the same for all datasets because you didn't change the cfg file; if you change it (for example the steps/max_batches values), the time will differ. The weights file will also be the same size for every dataset, because you are training the same yolov3 architecture and the file only stores its parameters. For example, if you use yolov3-tiny for training instead, the file size will differ from the yolov3 model.
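To make the size argument concrete: a darknet .weights file is essentially a small fixed header followed by 4 bytes per float32 parameter, so its size is determined by the architecture in the cfg, not by how many training images were used. A rough, illustrative calculation (the parameter count and header size below are approximations, not exact figures):

    n_params = 62_000_000          # roughly the YOLOv3 parameter count (illustrative)
    header_bytes = 20              # small fixed header in darknet weight files (assumed)
    size_mb = (header_bytes + 4 * n_params) / 1024 / 1024
    print(f"expected .weights size: ~{size_mb:.0f} MB")   # same result for 10k or 70k training images

Training duration behaves the same way: darknet stops at the max_batches/steps values in the cfg, so it does not grow with dataset size unless you raise those values.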
Why are .weights files the same size when learning YOLO?
I made a YOLO model to detect cars and trained it on Google Colaboratory. I then added more data, created new datasets, and trained new versions to increase accuracy. The new dataset had about seven times more data than the previous version (70k images). But why does training take the same time as with the previous version? Also, the size of the weights file is the same, and I don't know why. Why is the training time on 10k images the same as on 70k images? Why is the weights file the same size for 10k images and 70k images?
[ "\nLearning time will be same for all datasets. Because when training you didn't change cfg file. if you will change will be differet. you can change steps and you can see.\nWeights also will be same for all dataset for all yolov3 models. Because you are using yolov3 weights. For example, if you will use yolov3-tiny for training, will be different from yolov3 model.\n\n" ]
[ 0 ]
[]
[]
[ "deep_learning", "yolo" ]
stackoverflow_0074640030_deep_learning_yolo.txt
Q: group rows based on partial strings from two columns and sum values df = pd.DataFrame({'c1':['Ax','Ay','Bx','By'], 'c2':['Ay','Ax','By','Bx'], 'c3':[1,2,3,4]}) c1 c2 c3 0 Ax Ay 1 1 Ay Ax 2 2 Bx By 3 3 By Bx 4 I'd like to sum the c3 values by aggregating the same xy combinations from the c1 and c2 columns. The expected output is c1 c2 c3 0 x y 4 #[Ax Ay] + [Bx By] 1 y x 6 #[Ay Ax] + [By Bx] A: You can select values in c1 and c2 without first letters and aggregate sum: df = df.groupby([df.c1.str[1:], df.c2.str[1:]]).sum().reset_index() print (df) c1 c2 c3 0 x y 4 1 y x 6
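An equivalent spelling of the same idea, in case you prefer to materialise the stripped keys as real columns first (this assumes the df from the question):

    import pandas as pd

    df = pd.DataFrame({'c1': ['Ax', 'Ay', 'Bx', 'By'],
                       'c2': ['Ay', 'Ax', 'By', 'Bx'],
                       'c3': [1, 2, 3, 4]})

    # keep only the x/y suffix of each key, then aggregate
    out = (df.assign(c1=df['c1'].str[1:], c2=df['c2'].str[1:])
             .groupby(['c1', 'c2'], as_index=False)['c3'].sum())
    print(out)   # c1 c2 c3  ->  x y 4 / y x 6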
group rows based on partial strings from two columns and sum values
df = pd.DataFrame({'c1':['Ax','Ay','Bx','By'], 'c2':['Ay','Ax','By','Bx'], 'c3':[1,2,3,4]}) c1 c2 c3 0 Ax Ay 1 1 Ay Ax 2 2 Bx By 3 3 By Bx 4 I'd like to sum the c3 values by aggregating the same xy combinations from the c1 and c2 columns. The expected output is c1 c2 c3 0 x y 4 #[Ax Ay] + [Bx By] 1 y x 6 #[Ay Ax] + [By Bx]
[ "You can select values in c1 and c2 without first letters and aggregate sum:\ndf = df.groupby([df.c1.str[1:], df.c2.str[1:]]).sum().reset_index()\nprint (df)\n c1 c2 c3\n0 x y 4\n1 y x 6\n\n" ]
[ 1 ]
[]
[]
[ "pandas", "python" ]
stackoverflow_0074656475_pandas_python.txt
Q: Use config and wildcards in snakemake rule output I have a list of fasta may be used, but some of them may also not be used, so I hope to use snakemake to index fastq if need I bulit a yaml file like this # config.yaml reference_genome: fa1: "path/to/genome" fa2: "..." fa3: "..." ... and I write a snakemake like this configfile: "config.yaml" rule all: input: expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac']) rule index: input: #reference_genomeFile ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome] output: expand('{reference_genome}.{type}', reference_genome={reference_genome}, type=['amb', 'ann', 'pac']) log: 'log/rule_index_{reference_genome}.log' shell: "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1" I hope snakemake can monitor the index file (amb, ann, pac), but this script will raise follow error: name 'reference_genome' is not defined File "/public/...", line ..., in <module> update: base on @dariober's answer: if we runing with following config.yaml reference_genome: fa1: "genome_1.fa" fa2: "genome_2.fa" fa3: "genome_3.fa" I expect the output is genome_1.fa.{amb, ann, pac} genome_2.fa.{amb, ann, pac} genome_3.fa.{amb, ann, pac} If we use following workaround rule all: input: expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac']) rule index: input: #reference_genomeFile ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome] output: expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac']) log: 'log/rule_index_{reference_genome}.log' shell: "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1" we will get $ snakemake -s snakemake_test.smk --configfile config.yaml # for reference_name is fa1 [Fri Dec 2 17:56:29 2022] rule index: input: genome_1.fa output: fa1.amb, fa1.ann, fa1.pac log: log/rule_index_fa1.log jobid: 1 wildcards: reference_genome=fa1 ... Thats not my expected output the output is fa1.amb, fa1.ann, fa1.pac, but I wanted output is genome_1.fa.amb, genome_1.fa.ann, genome_1.fa.pac A: My guess is that you want: rule all: input: expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac']) rule index: input: #reference_genomeFile ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome] output: expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac']) log: 'log/rule_index_{reference_genome}.log' shell: "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1" snakemake -p -n -j 1 --configfile config.yaml That is: Run rule index for each genome file, i.e., three times here. Each run of index generates the three index files. Note the use of double curly braces {{reference_genome}} to tell expand that this wildcard does not need to be expanded. Example config.yaml: reference_genome: fa1: "genome_1.fa" fa2: "genome_2.fa" fa3: "genome_3.fa" A: Building upon dariober's answer and judging from your comments, I think this is what you are looking for? 
configfile: "config.yaml" rule all: input: expand( "{reference_genome}.{type}", reference_genome=list(config["reference_genome"].values()), type=["amb", "ann", "pac"], ), rule index: input: #reference_genomeFile ref_genome="{reference_genome}", output: expand("{{reference_genome}}.{type}", type=["amb", "ann", "pac"]), log: "log/rule_index_{reference_genome}.log", shell: "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1" with config.yaml reference_genome: fa1: "path/to/genome1" fa2: "path/to/genome2" fa3: "path/to/genome3" I've modified the rule all to use the filepaths from config.yaml rather than the list ['fa1','fa2','fa3']. I've also removed the lambda wildcard from the input of rule index as it seems unnecessary.
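Because expand is an ordinary Python function, the double-brace behaviour used in the answers can be checked outside a Snakefile; a minimal sketch (assuming snakemake is installed and the example values from the question):

    from snakemake.io import expand

    # double braces keep the wildcard for later matching, single braces are expanded now
    print(expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac']))
    # expected: ['{reference_genome}.amb', '{reference_genome}.ann', '{reference_genome}.pac']

    print(expand('{ref}.{type}', ref=['genome_1.fa'], type=['amb', 'ann', 'pac']))
    # expected: ['genome_1.fa.amb', 'genome_1.fa.ann', 'genome_1.fa.pac']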
Use config and wildcards in snakemake rule output
I have a list of fasta may be used, but some of them may also not be used, so I hope to use snakemake to index fastq if need I bulit a yaml file like this # config.yaml reference_genome: fa1: "path/to/genome" fa2: "..." fa3: "..." ... and I write a snakemake like this configfile: "config.yaml" rule all: input: expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac']) rule index: input: #reference_genomeFile ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome] output: expand('{reference_genome}.{type}', reference_genome={reference_genome}, type=['amb', 'ann', 'pac']) log: 'log/rule_index_{reference_genome}.log' shell: "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1" I hope snakemake can monitor the index file (amb, ann, pac), but this script will raise follow error: name 'reference_genome' is not defined File "/public/...", line ..., in <module> update: base on @dariober's answer: if we runing with following config.yaml reference_genome: fa1: "genome_1.fa" fa2: "genome_2.fa" fa3: "genome_3.fa" I expect the output is genome_1.fa.{amb, ann, pac} genome_2.fa.{amb, ann, pac} genome_3.fa.{amb, ann, pac} If we use following workaround rule all: input: expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac']) rule index: input: #reference_genomeFile ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome] output: expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac']) log: 'log/rule_index_{reference_genome}.log' shell: "bwa index -a bwtsw {input.ref_genome} > {log} 2>&1" we will get $ snakemake -s snakemake_test.smk --configfile config.yaml # for reference_name is fa1 [Fri Dec 2 17:56:29 2022] rule index: input: genome_1.fa output: fa1.amb, fa1.ann, fa1.pac log: log/rule_index_fa1.log jobid: 1 wildcards: reference_genome=fa1 ... Thats not my expected output the output is fa1.amb, fa1.ann, fa1.pac, but I wanted output is genome_1.fa.amb, genome_1.fa.ann, genome_1.fa.pac
[ "My guess is that you want:\nrule all:\n input:\n expand('{reference_genome}.{type}', reference_genome=['fa1', 'fa2', 'fa3'], type=['amb', 'ann', 'pac'])\n\nrule index: \n input: \n #reference_genomeFile\n ref_genome=lambda wildcards:config['reference_genome'][wildcards.reference_genome]\n output: \n expand('{{reference_genome}}.{type}', type=['amb', 'ann', 'pac'])\n log: \n 'log/rule_index_{reference_genome}.log'\n shell: \n \"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1\"\n\nsnakemake -p -n -j 1 --configfile config.yaml\n\nThat is: Run rule index for each genome file, i.e., three times here. Each run of index generates the three index files. Note the use of double curly braces {{reference_genome}} to tell expand that this wildcard does not need to be expanded.\n\nExample config.yaml:\nreference_genome:\n fa1: \"genome_1.fa\"\n fa2: \"genome_2.fa\"\n fa3: \"genome_3.fa\"\n\n", "Building upon dariober's answer and judging from your comments, I think this is what you are looking for?\nconfigfile: \"config.yaml\"\n\n\nrule all:\n input:\n expand(\n \"{reference_genome}.{type}\",\n reference_genome=list(config[\"reference_genome\"].values()),\n type=[\"amb\", \"ann\", \"pac\"],\n ),\n\n\nrule index:\n input:\n #reference_genomeFile\n ref_genome=\"{reference_genome}\",\n output:\n expand(\"{{reference_genome}}.{type}\", type=[\"amb\", \"ann\", \"pac\"]),\n log:\n \"log/rule_index_{reference_genome}.log\",\n shell:\n \"bwa index -a bwtsw {input.ref_genome} > {log} 2>&1\"\n\nwith config.yaml\nreference_genome:\n fa1: \"path/to/genome1\"\n fa2: \"path/to/genome2\"\n fa3: \"path/to/genome3\"\n\nI've modified the rule all to use the filepaths from config.yaml rather than the list ['fa1','fa2','fa3']. I've also removed the lambda wildcard from the input of rule index as it seems unnecessary.\n" ]
[ 1, 1 ]
[]
[]
[ "bioinformatics", "pipeline", "python", "snakemake", "wildcard" ]
stackoverflow_0074652332_bioinformatics_pipeline_python_snakemake_wildcard.txt
Q: Angular Caching Issue I have an Angular 14 SPA hosted on Firebase and I have the following issue: The users are using the cached version (previous version) instead of the latest version when accessing the application (website). Steps to reproduce the issue: User opens application - let's say v1.0.0 I deploy a new version - let's say v2.0.0 User opens application again and he is using v1.0.0 (from cache I guess) (if the user refreshes the page, v2.0.0 is loaded) Expected behavior: User opens application - let's say v1.0.0 I deploy a new version - let's say v2.0.0 User opens application again and he is using v2.0.0 (no refresh needed) Initially, I thought the PWA service worker causes the issue. I understood that the PWA service worker gets the new version after the application is opened (similar to a mobile application) and updates the application on the next lunch - and that I could show a dialog to notify the new version (based on swUpdate.available.subscribe - https://stackoverflow.com/a/66905807) and after the user's confirmation (or even without his confirmation) to force the reload using location.reload() but I didn't want to take this approach so I decide to remove PWA. The application is now just a SPA (no PWA) and I made sure the service worker is unregistered as shown below (also checked in the browser and the application doesn't have service worker, manifest, etc) but I still have the same issue. if ("caches" in window) { caches.keys().then(function (keyList) { return Promise.all( keyList.map(function (key) { return caches.delete(key); }) ); }); } if (window.navigator && navigator.serviceWorker) { navigator.serviceWorker.getRegistrations().then(function (registrations) { for (let registration of registrations) { registration.unregister(); } }); } This is my build script: { ... "build": "ng build --configuration production --aot --output-hashing=all" ... } A: I found the issue - Firebase was sending in the response headers: Cache-Control: max-age=3600. I solved the issue by setting the headers in firebase.json: { ... "headers": [ { "source": "**", "headers": [ { "key": "Cache-Control", "value": "no-cache, no-store, must-revalidate" } ] }, { "source": "**/*.@(ico|svg|jpg|jpeg|png|gif)", "headers": [ { "key": "Cache-Control", "value": "max-age=3600" } ] }, { "source": "**/*.@(eot|otf|ttf|ttc|woff|woff2|font.css)", "headers": [ { "key": "Cache-Control", "value": "max-age=31536000" } ] } ] ... } I found this article and the Firebase documentation useful.
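After redeploying with those firebase.json headers, it is worth confirming what Cache-Control the hosting actually returns for the HTML document; a small check (the URL is a placeholder for your own hosting domain):

    import requests

    resp = requests.head("https://your-app.web.app/index.html", allow_redirects=True)
    print(resp.status_code, resp.headers.get("Cache-Control"))
    # index.html should now come back with no-cache/no-store, while hashed JS/CSS bundles can stay cacheable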
Angular Caching Issue
I have an Angular 14 SPA hosted on Firebase and I have the following issue: The users are using the cached version (previous version) instead of the latest version when accessing the application (website). Steps to reproduce the issue: User opens application - let's say v1.0.0 I deploy a new version - let's say v2.0.0 User opens application again and he is using v1.0.0 (from cache I guess) (if the user refreshes the page, v2.0.0 is loaded) Expected behavior: User opens application - let's say v1.0.0 I deploy a new version - let's say v2.0.0 User opens application again and he is using v2.0.0 (no refresh needed) Initially, I thought the PWA service worker causes the issue. I understood that the PWA service worker gets the new version after the application is opened (similar to a mobile application) and updates the application on the next lunch - and that I could show a dialog to notify the new version (based on swUpdate.available.subscribe - https://stackoverflow.com/a/66905807) and after the user's confirmation (or even without his confirmation) to force the reload using location.reload() but I didn't want to take this approach so I decide to remove PWA. The application is now just a SPA (no PWA) and I made sure the service worker is unregistered as shown below (also checked in the browser and the application doesn't have service worker, manifest, etc) but I still have the same issue. if ("caches" in window) { caches.keys().then(function (keyList) { return Promise.all( keyList.map(function (key) { return caches.delete(key); }) ); }); } if (window.navigator && navigator.serviceWorker) { navigator.serviceWorker.getRegistrations().then(function (registrations) { for (let registration of registrations) { registration.unregister(); } }); } This is my build script: { ... "build": "ng build --configuration production --aot --output-hashing=all" ... }
[ "I found the issue - Firebase was sending in the response headers: Cache-Control: max-age=3600.\nI solved the issue by setting the headers in firebase.json:\n{\n ...\n \"headers\": [\n {\n \"source\": \"**\",\n \"headers\": [\n {\n \"key\": \"Cache-Control\",\n \"value\": \"no-cache, no-store, must-revalidate\"\n }\n ]\n },\n {\n \"source\": \"**/*.@(ico|svg|jpg|jpeg|png|gif)\",\n \"headers\": [\n {\n \"key\": \"Cache-Control\",\n \"value\": \"max-age=3600\"\n }\n ]\n },\n {\n \"source\": \"**/*.@(eot|otf|ttf|ttc|woff|woff2|font.css)\",\n \"headers\": [\n {\n \"key\": \"Cache-Control\",\n \"value\": \"max-age=31536000\"\n }\n ]\n }\n ]\n ...\n}\n\nI found this article and the Firebase documentation useful.\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_cache", "angular_service_worker", "caching", "service_worker" ]
stackoverflow_0074642751_angular_angular_cache_angular_service_worker_caching_service_worker.txt
Q: how to show full images with no errors in after effects? I am trying to open my project and render it on another PC (my cousin's PC), but two of the images don't appear fully and instead look like a TV "no signal" sign. This is the image to see what is happening. I tried to search for missing or misspelled files in order to replace them, but there were none. I also tried to use Media Encoder to render, but nothing changed, and I tried enabling the GPU in preferences. Can you please help me solve this problem? A: The TV signal image indicates that a file is missing. Try copying the image file onto your other PC and then drag it into the root folder in After Effects. Also, clear your media cache to delete any old references to file locations.
how to show full images with no errors in after effects?
I am trying to open my project and render it on another PC (my cousin's PC), but two of the images don't appear fully and instead look like a TV "no signal" sign. This is the image to see what is happening. I tried to search for missing or misspelled files in order to replace them, but there were none. I also tried to use Media Encoder to render, but nothing changed, and I tried enabling the GPU in preferences. Can you please help me solve this problem?
[ "The TV signal image indicates that a file is missing.\nTry copying the image file onto your other PC and then drag it into the root folder in After Effects.\nAlso, clear your media cache to delete any old references to file locations\n" ]
[ 0 ]
[]
[]
[ "after_effects", "windows_media_encoder" ]
stackoverflow_0074409069_after_effects_windows_media_encoder.txt
Q: How to parse a string of multiple jsons without separators in python? Given a single-lined string of multiple, arbitrary nested json-files without separators, like for example: contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}' How can contents be parsed into an array of dicts/jsons ? I tried df = pd.read_json(contents, lines=True) But only got a ValueError response: ValueError: Unexpected character found when decoding array value (2) A: You can split the string, then parse each JSON string into a dictionary: import json contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}' json_strings = contents.replace('}{', '}|{').split('|') json_dicts = [json.loads(string) for string in json_strings] Output: [{'payload': {'device': {'serial': 213}}}, {'payload': {'device': {'serial': 123}}}]
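The replace('}{', '}|{') trick works for the sample input, but it can mis-split if '}{' or '|' ever occurs inside a string value. A more robust sketch walks the string with json.JSONDecoder.raw_decode, which returns each object together with the index where it ends:

    import json

    def parse_concatenated(s):
        decoder = json.JSONDecoder()
        objs, i = [], 0
        while i < len(s):
            # skip any whitespace between objects, then decode the next one
            while i < len(s) and s[i].isspace():
                i += 1
            if i >= len(s):
                break
            obj, i = decoder.raw_decode(s, i)
            objs.append(obj)
        return objs

    contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}'
    print(parse_concatenated(contents))
    # [{'payload': {'device': {'serial': 213}}}, {'payload': {'device': {'serial': 123}}}]

The resulting list of dicts can then be passed to pd.DataFrame or pd.json_normalize if a DataFrame is the goal.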
How to parse a string of multiple jsons without separators in python?
Given a single-lined string of multiple, arbitrary nested json-files without separators, like for example: contents = r'{"payload":{"device":{"serial":213}}}{"payload":{"device":{"serial":123}}}' How can contents be parsed into an array of dicts/jsons ? I tried df = pd.read_json(contents, lines=True) But only got a ValueError response: ValueError: Unexpected character found when decoding array value (2)
[ "You can split the string, then parse each JSON string into a dictionary:\nimport json\n\ncontents = r'{\"payload\":{\"device\":{\"serial\":213}}}{\"payload\":{\"device\":{\"serial\":123}}}'\n\njson_strings = contents.replace('}{', '}|{').split('|')\njson_dicts = [json.loads(string) for string in json_strings]\n\nOutput:\n[{'payload': {'device': {'serial': 213}}}, {'payload': {'device': {'serial': 123}}}]\n\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "arrays", "json", "ndjson", "python" ]
stackoverflow_0074656450_amazon_web_services_arrays_json_ndjson_python.txt
Q: image preview does not work in tramp for org mode Suppose I'm checking out a org file in a server server1, (find-file "/ssh:server1:/path/to/org-file.org") and the org file has a link to an image file:myimage.png and the file exists since I can open it with C-c C-o. However, when I try to display the image, it does not work. I see no reason why this shouldn't work. A: By default this feature is disabled. To enable it checkout the variable org-display-remote-inline-image. For example, I set it with: (setq org-display-remote-inline-images 'cache)
image preview does not work in tramp for org mode
Suppose I'm checking out a org file in a server server1, (find-file "/ssh:server1:/path/to/org-file.org") and the org file has a link to an image file:myimage.png and the file exists since I can open it with C-c C-o. However, when I try to display the image, it does not work. I see no reason why this shouldn't work.
[ "By default this feature is disabled.\nTo enable it checkout the variable org-display-remote-inline-image.\nFor example, I set it with:\n (setq org-display-remote-inline-images 'cache)\n\n" ]
[ 0 ]
[]
[]
[ "emacs", "image", "org_mode", "tramp" ]
stackoverflow_0065620272_emacs_image_org_mode_tramp.txt
Q: material-top-tabs closes keyboard on first focus I am using "react-native": "0.70.4", with the @react-navigation/material-top-tabs to make a custom bottomsheet with top tabs inside. When clicking the TextInput in tab nr 2 it dismiss the keyboard, but if i click again it does not happen. I tried multiple ways, it happens when there is 3 or more tabs. It works as intended in the other tabs. Example of the bottomsheet and tabs. If I click the "søk" (TextInput) in tab "test2". It would open and close the keyboard the first time I click it. this after the second time I click the TextInput. A: looks like there is not enough space to open the keyboard in the BottomSheet menu, should you use const snapPoints = useMemo(() => ["30%", "50%"], []); for snapPoints?
material-top-tabs closes keyboard on first focus
I am using "react-native": "0.70.4", with the @react-navigation/material-top-tabs to make a custom bottomsheet with top tabs inside. When clicking the TextInput in tab nr 2 it dismiss the keyboard, but if i click again it does not happen. I tried multiple ways, it happens when there is 3 or more tabs. It works as intended in the other tabs. Example of the bottomsheet and tabs. If I click the "søk" (TextInput) in tab "test2". It would open and close the keyboard the first time I click it. this after the second time I click the TextInput.
[ "looks like there is not enough space to open the keyboard in the BottomSheet menu, should you use const snapPoints = useMemo(() => [\"30%\", \"50%\"], []); for snapPoints?\n" ]
[ 0 ]
[]
[]
[ "react_native", "react_navigation", "react_navigation_top_tabs" ]
stackoverflow_0074656402_react_native_react_navigation_react_navigation_top_tabs.txt
Q: Calculate average temperature in reducer I am trying to write a code that would calculate average temperature (reducer.py) based on ncdc weather. 0057011060999991928010112004+67500+012067FM-12+001199999V0202001N012319999999N0500001N9+00281+99999102171ADDAY181999GF108991999999999999001001MD1710261+9999MW1801 0062011060999991928010206004+67500+012067FM-12+001199999V0201801N00931220001CN0200001N9+00281+99999100901ADDAA199002091AY121999GF101991999999017501999999MD1810461+9999 0108011060999991928010212004+67500+012067FM-12+001199999V0201601N009319999999N0100001N9+00111+99999100062ADDAY171999GF108991999011012501001001MD1810542+9999MW1681EQDQ01+000042SCOTLCQ02+100063APOSLPQ03+000542APC3 0087011060999991928010306004+67500+012067FM-12+001199999V0202001N022619999999N0100001N9+00501+99999098781ADDAA199001091AY161999GF108991999011004501001001MD1310061+9999MW1601EQDQ01+000042SCOTLC 0057011060999991928010312004+67500+012067FM-12+001199999V0202301N01541004501CN0040001N9+00001+99999098951ADDAY161999GF108991081061004501999999MD1210201+9999MW1601 #!/usr/bin/env python import sys (last_key, max_val) = (None, -sys.maxint) for line in sys.stdin: (key, val) = line.strip().split("\t") if last_key and last_key != key: print "%s\t%s" % (last_key, max_val) (last_key, max_val) = (key, int(val)) else: (last_key, max_val) = (key, max(max_val, int(val))) if last_key: print "%s\t%s" % (last_key, max_val) A: First of all, your shown data has no tabs, so it's not clear why you've shown code that splits lines on tabs and finds the max. Not an average. To find an average, you'll need to collect all seen values into a list (values.append(int(val))), then you can from statistics import mean and call mean(values) at the end of the loop I'd highly suggest that you use mrjob or pyspark instead
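Putting that advice into code, a minimal Hadoop-streaming style reducer that averages values per key could look like the sketch below (Python 3; it assumes the mapper has already emitted tab-separated key/value lines, which the code in the question does not yet produce from the raw NCDC records):

    #!/usr/bin/env python3
    import sys
    from statistics import mean

    def flush(key, vals):
        if key is not None and vals:
            print(f"{key}\t{mean(vals)}")

    current_key, values = None, []
    for line in sys.stdin:
        key, val = line.rstrip("\n").split("\t")
        if current_key is not None and key != current_key:
            flush(current_key, values)   # keys arrive grouped, so emit the finished key
            values = []
        current_key = key
        values.append(int(val))
    flush(current_key, values)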
Calculate average temperature in reducer
I am trying to write a code that would calculate average temperature (reducer.py) based on ncdc weather. 0057011060999991928010112004+67500+012067FM-12+001199999V0202001N012319999999N0500001N9+00281+99999102171ADDAY181999GF108991999999999999001001MD1710261+9999MW1801 0062011060999991928010206004+67500+012067FM-12+001199999V0201801N00931220001CN0200001N9+00281+99999100901ADDAA199002091AY121999GF101991999999017501999999MD1810461+9999 0108011060999991928010212004+67500+012067FM-12+001199999V0201601N009319999999N0100001N9+00111+99999100062ADDAY171999GF108991999011012501001001MD1810542+9999MW1681EQDQ01+000042SCOTLCQ02+100063APOSLPQ03+000542APC3 0087011060999991928010306004+67500+012067FM-12+001199999V0202001N022619999999N0100001N9+00501+99999098781ADDAA199001091AY161999GF108991999011004501001001MD1310061+9999MW1601EQDQ01+000042SCOTLC 0057011060999991928010312004+67500+012067FM-12+001199999V0202301N01541004501CN0040001N9+00001+99999098951ADDAY161999GF108991081061004501999999MD1210201+9999MW1601 #!/usr/bin/env python import sys (last_key, max_val) = (None, -sys.maxint) for line in sys.stdin: (key, val) = line.strip().split("\t") if last_key and last_key != key: print "%s\t%s" % (last_key, max_val) (last_key, max_val) = (key, int(val)) else: (last_key, max_val) = (key, max(max_val, int(val))) if last_key: print "%s\t%s" % (last_key, max_val)
[ "First of all, your shown data has no tabs, so it's not clear why you've shown code that splits lines on tabs and finds the max. Not an average.\nTo find an average, you'll need to collect all seen values into a list (values.append(int(val))), then you can from statistics import mean and call mean(values) at the end of the loop\nI'd highly suggest that you use mrjob or pyspark instead\n" ]
[ 0 ]
[]
[]
[ "hadoop", "hadoop_streaming", "mapreduce", "python" ]
stackoverflow_0074651008_hadoop_hadoop_streaming_mapreduce_python.txt
Q: c++ boost libraries serialization I have been trying to implement a web cache, I wrote a prototype without a library. Really my problem is that I discovered the Boost library. I'm trying to implement it in my web cache. Specially implementing this: https://www.boost.org/doc/libs/1_66_0/libs/multi_index/doc/examples.html#example9 "This data structure is implemented with multi_index_container by combining a sequenced index and an index of type hashed_unique." Currently my class looks like this, by unit tests done, it works but my goal will be to implement as precise in question the lib Boost and especially the multi index container. I tried to change the type of the items but it didn't work, like this: typedef multi_index_container<items,indexed_by<sequenced<>,hashed_unique<identity<Item> >>> item_list; I got a message: Invalid use of non-static data member '_items'. This kind of error I can understand it but i not specify it because it is the global implementation that worries me If you find any other error apart from my precise question, I am also a taker. template<typename Tkey, typename Tval> class CacheWeb{ private: unsigned int capacity; std::list<std::pair<Tkey, Tval>> items; std::unordered_map<key, typename std::list<std::pair<Tkey, Tval>>::iterator> lookup; CacheWeb(const CacheWeb&) = delete; CacheWeb& operator=(const CacheWeb&) = delete; int capacityOut(){ if( capacity == 0 || lookup.size() < capacity ) { return 0; } int cnt = 0; while(lookup.size() > capacity) { lookup.erase(items.back().first); items.pop_back(); ++cnt; } return cnt; }; public: CacheWeb(int icapacity) : capacity(icapacity){}; virtual ~CacheWeb() = default; int size(){ return lookup.size(); }; bool empty(){ return lookup.empty(); }; void clear(){ lookup.clear(); items.clear(); }; bool contains(const Tkey& key){ return lookup.find(key) != lookup.end(); }; void remove(const Tkey& key){ auto it = lookup.find(key); items.erase(it->second); lookup.erase(it); }; void put(const Tkey& key, const Tval& val){ auto it = lookup.find(key); if( it != lookup.end() ) { it->second->second = val; items.splice(items.begin(), items, it->second); return; } items.emplace_front(key, val); lookup[key] = items.begin(); capacityOut(); }; std::list<std::pair<Tkey, Tval>>getItems(){ return items; }; const VAL_T& get(const Tkey& key){ const auto it = lookup.find(key); if( it == lookup.end() ) { throw std::invalid_argument("Key does not exist"); } items.splice(items.begin(), items, it->second); return it->second->second; }; }; } A: Something like this can do: Live Coliru Demo #include <boost/multi_index_container.hpp> #include <boost/multi_index/sequenced_index.hpp> #include <boost/multi_index/hashed_index.hpp> #include <boost/multi_index/key.hpp> template<typename Tkey, typename Tval> class CacheWeb{ private: using value_type = std::pair<Tkey, Tval>; unsigned int capacity; boost::multi_index_container< value_type, boost::multi_index::indexed_by< boost::multi_index::sequenced<>, boost::multi_index::hashed_unique<boost::multi_index::key<&value_type::first>> > > container; CacheWeb(const CacheWeb&) = delete; CacheWeb& operator=(const CacheWeb&) = delete; int capacityOut(){ if( capacity == 0 || container.size() < capacity ) { return 0; } int cnt = 0; while(container.size() > capacity) { container.pop_back(); ++cnt; } return cnt; }; public: CacheWeb(int icapacity) : capacity(icapacity){}; virtual ~CacheWeb() = default; int size(){ return container.size(); }; bool empty(){ return container.empty(); }; void clear(){ container.clear(); }; bool 
contains(const Tkey& key){ const auto& lookup = container.template get<1>(); return lookup.find(key) != container.template get<1>().end(); }; void remove(const Tkey& key){ container.erase(key); }; void put(const Tkey& key, const Tval& val){ auto& lookup = container.template get<1>(); auto it = lookup.find(key); if( it != lookup.end() ) { lookup.modify(it,[&](value_type& x){ x.second = val; }); } else{ it=lookup.emplace(key, val).first; } container.relocate(container.begin(),container.template project<0>(it)); capacityOut(); }; std::list<std::pair<Tkey, Tval>>getItems(){ return {container.begin(), container.end()}; }; const Tval& get(const Tkey& key){ const auto& lookup = container.template get<1>(); const auto it = lookup.find(key); if( it == lookup.end() ) { throw std::invalid_argument("Key does not exist"); } return it->second; } }; #include <iostream> int main() { CacheWeb<int,int> c(10); for(int i=0;i<11;++i)c.put(i,i); for(const auto& x:c.getItems()){ std::cout<<"("<<x.first<<","<<x.second<<")"; } std::cout<<"\n"; for(int i=1;i<11;++i){ std::cout<<i<<"->"<<c.get(i)<<" "; } std::cout<<"\n"; } Output (10,10)(9,9)(8,8)(7,7)(6,6)(5,5)(4,4)(3,3)(2,2)(1,1) 1->1 2->2 3->3 4->4 5->5 6->6 7->7 8->8 9->9 10->10 A: unfortunately my following questions have disappeared but luckily I was able to better understand my errors and doubts! I'm just coming back to you for an explanation of 2 concepts. 1st: .template From what I've found is a template keyword specify on my container is using a template, correct me if i wrong. src: Where and why do I have to put the "template" and "typename" keywords? 2nd: project<0>(it) Looking for the definition in the lib, I especially saw that it needed an iterator as a parameter, but I don't understand the project<0> (same for get<1> ) ps: i find some informations like this https://theboostcpplibraries.com/boost.variant and post in stackoverflow too, but i little bit confused
c++ boost libraries serialization
I have been trying to implement a web cache, I wrote a prototype without a library. Really my problem is that I discovered the Boost library. I'm trying to implement it in my web cache. Specially implementing this: https://www.boost.org/doc/libs/1_66_0/libs/multi_index/doc/examples.html#example9 "This data structure is implemented with multi_index_container by combining a sequenced index and an index of type hashed_unique." Currently my class looks like this, by unit tests done, it works but my goal will be to implement as precise in question the lib Boost and especially the multi index container. I tried to change the type of the items but it didn't work, like this: typedef multi_index_container<items,indexed_by<sequenced<>,hashed_unique<identity<Item> >>> item_list; I got a message: Invalid use of non-static data member '_items'. This kind of error I can understand it but i not specify it because it is the global implementation that worries me If you find any other error apart from my precise question, I am also a taker. template<typename Tkey, typename Tval> class CacheWeb{ private: unsigned int capacity; std::list<std::pair<Tkey, Tval>> items; std::unordered_map<key, typename std::list<std::pair<Tkey, Tval>>::iterator> lookup; CacheWeb(const CacheWeb&) = delete; CacheWeb& operator=(const CacheWeb&) = delete; int capacityOut(){ if( capacity == 0 || lookup.size() < capacity ) { return 0; } int cnt = 0; while(lookup.size() > capacity) { lookup.erase(items.back().first); items.pop_back(); ++cnt; } return cnt; }; public: CacheWeb(int icapacity) : capacity(icapacity){}; virtual ~CacheWeb() = default; int size(){ return lookup.size(); }; bool empty(){ return lookup.empty(); }; void clear(){ lookup.clear(); items.clear(); }; bool contains(const Tkey& key){ return lookup.find(key) != lookup.end(); }; void remove(const Tkey& key){ auto it = lookup.find(key); items.erase(it->second); lookup.erase(it); }; void put(const Tkey& key, const Tval& val){ auto it = lookup.find(key); if( it != lookup.end() ) { it->second->second = val; items.splice(items.begin(), items, it->second); return; } items.emplace_front(key, val); lookup[key] = items.begin(); capacityOut(); }; std::list<std::pair<Tkey, Tval>>getItems(){ return items; }; const VAL_T& get(const Tkey& key){ const auto it = lookup.find(key); if( it == lookup.end() ) { throw std::invalid_argument("Key does not exist"); } items.splice(items.begin(), items, it->second); return it->second->second; }; }; }
[ "Something like this can do:\nLive Coliru Demo\n#include <boost/multi_index_container.hpp>\n#include <boost/multi_index/sequenced_index.hpp>\n#include <boost/multi_index/hashed_index.hpp>\n#include <boost/multi_index/key.hpp>\n\ntemplate<typename Tkey, typename Tval>\nclass CacheWeb{\n private:\n using value_type = std::pair<Tkey, Tval>;\n \n unsigned int capacity;\n boost::multi_index_container<\n value_type,\n boost::multi_index::indexed_by<\n boost::multi_index::sequenced<>,\n boost::multi_index::hashed_unique<boost::multi_index::key<&value_type::first>>\n >\n > container; \n \n CacheWeb(const CacheWeb&) = delete;\n CacheWeb& operator=(const CacheWeb&) = delete;\n\n int capacityOut(){\n if( capacity == 0 || container.size() < capacity ) {\n return 0;\n }\n int cnt = 0;\n while(container.size() > capacity) {\n container.pop_back();\n ++cnt;\n }\n return cnt; \n };\n\n public:\n CacheWeb(int icapacity) : capacity(icapacity){};\n virtual ~CacheWeb() = default;\n \n\n int size(){\n return container.size();\n };\n bool empty(){\n return container.empty(); \n };\n void clear(){\n container.clear(); \n };\n\n bool contains(const Tkey& key){\n const auto& lookup = container.template get<1>();\n return lookup.find(key) != container.template get<1>().end(); \n };\n\n void remove(const Tkey& key){\n container.erase(key);\n };\n\n void put(const Tkey& key, const Tval& val){\n auto& lookup = container.template get<1>();\n auto it = lookup.find(key);\n if( it != lookup.end() ) {\n lookup.modify(it,[&](value_type& x){ x.second = val; });\n }\n else{\n it=lookup.emplace(key, val).first;\n }\n container.relocate(container.begin(),container.template project<0>(it));\n capacityOut();\n };\n\n std::list<std::pair<Tkey, Tval>>getItems(){\n return {container.begin(), container.end()};\n };\n\n const Tval& get(const Tkey& key){\n const auto& lookup = container.template get<1>();\n const auto it = lookup.find(key);\n if( it == lookup.end() ) {\n throw std::invalid_argument(\"Key does not exist\");\n }\n return it->second;\n }\n};\n \n#include <iostream>\n\nint main()\n{\n CacheWeb<int,int> c(10);\n for(int i=0;i<11;++i)c.put(i,i);\n \n for(const auto& x:c.getItems()){\n std::cout<<\"(\"<<x.first<<\",\"<<x.second<<\")\";\n }\n std::cout<<\"\\n\";\n \n for(int i=1;i<11;++i){\n std::cout<<i<<\"->\"<<c.get(i)<<\" \";\n }\n std::cout<<\"\\n\";\n}\n\nOutput\n(10,10)(9,9)(8,8)(7,7)(6,6)(5,5)(4,4)(3,3)(2,2)(1,1)\n1->1 2->2 3->3 4->4 5->5 6->6 7->7 8->8 9->9 10->10 \n\n", "unfortunately my following questions have disappeared but luckily I was able to better understand my errors and doubts! I'm just coming back to you for an explanation of 2 concepts.\n1st: .template\nFrom what I've found is a template keyword specify on my container is using a template, correct me if i wrong.\nsrc: Where and why do I have to put the \"template\" and \"typename\" keywords?\n2nd: project<0>(it)\nLooking for the definition in the lib, I especially saw that it needed an iterator as a parameter, but I don't understand the project<0> (same for get<1> )\nps: i find some informations like this https://theboostcpplibraries.com/boost.variant and post in stackoverflow too, but i little bit confused\n" ]
[ 2, 0 ]
[]
[]
[ "boost", "boost_multi_index", "c++", "caching", "serialization" ]
stackoverflow_0074359615_boost_boost_multi_index_c++_caching_serialization.txt
Q: How to avoid navigation? I have 3 pages: Check, Login, Home. I want Check to redirect the user to Login or Home depending on which conditions are met, and the redirect should happen without showing a navigation bar. I watched JM's video, but there's no explanation there, and the documentation is the same; this can be seen in the Shell.Current.GoToAsync() example. How should I do it? A: As described in the docs, you can hide the navigation bar on a page by doing <ContentPage ... Shell.NavBarIsVisible="false"> ... </ContentPage>
How to avoid navigation?
I have 3 pages: Check, Login, Home. I want Check to redirect the user to Login or Home depending on which conditions are met, and the redirect should happen without showing a navigation bar. I watched JM's video, but there's no explanation there, and the documentation is the same; this can be seen in the Shell.Current.GoToAsync() example. How should I do it?
[ "as described in the docs, you can hide the navigation bar on a page by doing\n<ContentPage ...\n Shell.NavBarIsVisible=\"false\">\n ...\n</ContentPage>\n\n" ]
[ 1 ]
[]
[]
[ "maui" ]
stackoverflow_0074656184_maui.txt
Q: Improving precision while maintaining recall with YOLO models We are trying to build an object detection application for handheld objects (wallet, knife, phone, etc.) which requires high precision and recall. We tried YOLOv4 and YOLOv7 models with around 20K images per class and achieved a test mAP of around 80%, precision around 90, and recall around 75. However, we are facing false detections and missed detections when objects are at certain angles during field tests. What are the options to improve the model further? A: You have to collect more images and you need more epochs to detect more accurately.
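Beyond more data and more epochs, one knob worth understanding is the detection confidence threshold, since precision and recall move in opposite directions as it changes. A toy sketch of that trade-off (the scores and counts below are made up purely for illustration):

    detections = [(0.95, 1), (0.90, 1), (0.80, 0), (0.70, 1), (0.60, 0), (0.40, 1)]  # (score, is_true_positive)
    total_objects = 5   # ground-truth objects (made-up)

    for thr in (0.5, 0.7, 0.9):
        kept = [tp for score, tp in detections if score >= thr]
        tp = sum(kept)
        fp = len(kept) - tp
        precision = tp / (tp + fp) if kept else 0.0
        recall = tp / total_objects
        print(f"threshold={thr:.1f}  precision={precision:.2f}  recall={recall:.2f}")

Raising the threshold trims false detections (better precision) at the cost of missing more objects (worse recall); adding images of the problematic viewing angles attacks the same issue from the data side.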
Improving precision while maintaining recall with YOLO models
We are trying to build an object detection application for handheld objects (wallet, knife, phone, etc.) which requires high precision and recall. We tried YOLOv4 and YOLOv7 models with around 20K images per class and achieved a test mAP of around 80%, precision around 90, and recall around 75. However, we are facing false detections and missed detections when objects are at certain angles during field tests. What are the options to improve the model further?
[ "You have to collect more images and you need more epochs to detect more accurately.\n" ]
[ 0 ]
[]
[]
[ "computer_vision", "yolo" ]
stackoverflow_0074652202_computer_vision_yolo.txt
Q: Display a list of values from an array on a grid drawn using pygame I'm trying to create a small program that will draw a 6 x 6 grid. I have an array of values (36 elements) which I want to display in each box. I'm able to draw the grid using the below code, however I'm not able to figure out how to display the text from the array in each box. Later I want to check where the current selected box is and determine if it has a particular value and perform some action based on it. ` matrix = ["1", "0", "0", "P", "2", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "4", "3"] I want to show the values from matrix array in the grid class Grid: def __init__(self, width, height, rows, cols): self.width = width self.height = height self.rows = rows self.cols = cols def draw(self, screen): # draw the grid for i in range(self.rows): for j in range(self.cols): pygame.draw.rect(screen, (255, 255, 255), (i * self.width, j * self.height, self.width, self.height), 1) def show_grid(): pygame.init() screen_width = 500 screen_height = 500 screen = pygame.display.set_mode((screen_width, screen_height)) pygame.display.set_caption("Text Adventure Game") grid = Grid(50, 50, 6, 6) player = Player(0, 0, 50, 50, (255, 127, 0)) running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_UP: player.move("up") if event.key == pygame.K_DOWN: player.move("down") if event.key == pygame.K_LEFT: player.move("left") if event.key == pygame.K_RIGHT: player.move("right") screen.fill((0, 0, 0)) grid.draw(screen) player.draw(screen) pygame.display.update() pygame.quit() class Player: global steps steps = 0 def __init__(self, x, y, width, height, color): self.x = x self.y = y self.width = width self.height = height self.color = color def draw(self, screen): pygame.draw.rect(screen, self.color, (self.x, self.y, self.width, self.height)) def move(self, direction): if direction == "up": if self.y > 0: self.y -= self.height steps = steps - 50 print(self.x, self.y) elif direction == "down": if self.y < 250: self.y += self.height print(self.x, self.y) steps = steps + 50 elif direction == "left": if self.x > 0: self.x -= self.width print(self.x, self.y) elif direction == "right": if self.x < 250: self.x += self.width print(self.x, self.y) ` I want to populate the grid with the values from the matrix array and perform actions based on the values A: def print_matrix(matrix): for i in range(len(matrix)): if i % 6 == 0 and i != 0: print('') print(matrix[i], end = ' ') print('') print_matrix(matrix)
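The answer above only prints the matrix to the console; to actually draw each value inside its grid cell, Grid.draw can render the text with pygame's font module, roughly as in the sketch below (it assumes the matrix list is padded to rows*cols entries and that pygame.init() has already been called):

    import pygame

    def draw_with_labels(screen, matrix, rows, cols, cell_w, cell_h):
        font = pygame.font.Font(None, 36)              # default font
        for i in range(rows):
            for j in range(cols):
                rect = pygame.Rect(j * cell_w, i * cell_h, cell_w, cell_h)
                pygame.draw.rect(screen, (255, 255, 255), rect, 1)
                idx = i * cols + j                     # row-major index into the flat matrix list
                if idx < len(matrix):
                    label = font.render(str(matrix[idx]), True, (255, 255, 255))
                    screen.blit(label, label.get_rect(center=rect.center))

The same row-major index ((player.y // cell_h) * cols + player.x // cell_w) can then be used to look up which value the player is currently standing on.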
Display a list of values from an array on a grid drawn using pygame
I'm trying to create a small program that will draw a 6 x 6 grid. I have an array of values (36 elements) which I want to display in each box. I'm able to draw the grid using the below code, however I'm not able to figure out how to display the text from the array in each box. Later I want to check where the current selected box is and determine if it has a particular value and perform some action based on it. ` matrix = ["1", "0", "0", "P", "2", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "0", "4", "3"] I want to show the values from matrix array in the grid class Grid: def __init__(self, width, height, rows, cols): self.width = width self.height = height self.rows = rows self.cols = cols def draw(self, screen): # draw the grid for i in range(self.rows): for j in range(self.cols): pygame.draw.rect(screen, (255, 255, 255), (i * self.width, j * self.height, self.width, self.height), 1) def show_grid(): pygame.init() screen_width = 500 screen_height = 500 screen = pygame.display.set_mode((screen_width, screen_height)) pygame.display.set_caption("Text Adventure Game") grid = Grid(50, 50, 6, 6) player = Player(0, 0, 50, 50, (255, 127, 0)) running = True while running: for event in pygame.event.get(): if event.type == pygame.QUIT: running = False if event.type == pygame.KEYDOWN: if event.key == pygame.K_UP: player.move("up") if event.key == pygame.K_DOWN: player.move("down") if event.key == pygame.K_LEFT: player.move("left") if event.key == pygame.K_RIGHT: player.move("right") screen.fill((0, 0, 0)) grid.draw(screen) player.draw(screen) pygame.display.update() pygame.quit() class Player: global steps steps = 0 def __init__(self, x, y, width, height, color): self.x = x self.y = y self.width = width self.height = height self.color = color def draw(self, screen): pygame.draw.rect(screen, self.color, (self.x, self.y, self.width, self.height)) def move(self, direction): if direction == "up": if self.y > 0: self.y -= self.height steps = steps - 50 print(self.x, self.y) elif direction == "down": if self.y < 250: self.y += self.height print(self.x, self.y) steps = steps + 50 elif direction == "left": if self.x > 0: self.x -= self.width print(self.x, self.y) elif direction == "right": if self.x < 250: self.x += self.width print(self.x, self.y) ` I want to populate the grid with the values from the matrix array and perform actions based on the values
[ "def print_matrix(matrix):\n for i in range(len(matrix)):\n if i % 6 == 0 and i != 0:\n print('')\n print(matrix[i], end = ' ')\n print('')\n\nprint_matrix(matrix)\n\n" ]
[ 0 ]
[]
[]
[ "pygame", "python" ]
stackoverflow_0074642010_pygame_python.txt
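For the pygame entry above, the posted answer only prints the matrix to the console and never draws the values inside the pygame window. Below is a minimal sketch of one way to blit each value into its grid cell with pygame.font; the Grid class, the 6 x 6 layout and the 50 px cells come from the question, while the font argument, the text rendering inside draw and the value_at helper are additions for illustration.

import pygame

class Grid:
    def __init__(self, width, height, rows, cols, matrix):
        self.width = width      # cell width in pixels (50 in the question)
        self.height = height    # cell height in pixels
        self.rows = rows
        self.cols = cols
        self.matrix = matrix    # flat list, read row by row

    def draw(self, screen, font):
        for row in range(self.rows):
            for col in range(self.cols):
                rect = pygame.Rect(col * self.width, row * self.height,
                                   self.width, self.height)
                pygame.draw.rect(screen, (255, 255, 255), rect, 1)
                index = row * self.cols + col
                if index < len(self.matrix):
                    # render the value and centre it inside the cell
                    text = font.render(self.matrix[index], True, (255, 255, 255))
                    screen.blit(text, text.get_rect(center=rect.center))

    def value_at(self, x, y):
        # map a pixel position (e.g. the player's x/y) back to a matrix value
        index = (y // self.height) * self.cols + (x // self.width)
        return self.matrix[index] if index < len(self.matrix) else None

With this in place, show_grid only needs font = pygame.font.SysFont(None, 24) created once after pygame.init(), a call to grid.draw(screen, font) each frame, and grid.value_at(player.x, player.y) to look up the value under the player for the follow-up game logic.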
Q: Node.js, mysql and socket.io I am new to socket.io and I have a problem: how can I display real-time data from a database table without restarting the Node.js server? New data arrives in the table every 10 seconds. I tried some tutorials, but I still have the problem.
A: You need to use something like this:
const sendDataCycle = async () => {
  try {
    const data = await getData() // get data from DB
    io.send(data) // send by socket.io
    setTimeout(() => {
      sendDataCycle() // send again after 10 s
    }, 10000)
  } catch (err) {
    setTimeout(() => {
      sendDataCycle() // if error - send again after 20 s
    }, 20000)
  }
}

sendDataCycle() // run cycle

And please, don't use the setInterval function.
Node.js, mysql and socket.io
I am new to socket.io and I have a problem: how can I display real-time data from a database table without restarting the Node.js server? New data arrives in the table every 10 seconds. I tried some tutorials, but I still have the problem.
[ "You need to use something like this:\n\n\nconst sendDataCycle = async() => {\n try {\n const data = await getData() // get data from DB\n io.send(data) // send by socket.io\n setTimeout(() => {\n sendDataCycle() // send again after 10 s\n }, 10000)\n } catch (err) {\n setTimeout(() => {\n sendDataCycle() // if error - send again after 20 s or else\n }, 20000)\n }\n}\n\nsendDataCycle() // run cycle\n\n\n\nAnd please, don't use the setInterval function.\n" ]
[ 1 ]
[]
[]
[ "javascript", "mysql", "node.js", "socket.io" ]
stackoverflow_0074656374_javascript_mysql_node.js_socket.io.txt
Q: Postgres eventually consistency Id Take a table like this:
create table mytable(id SERIAL, largeImage bytea);
Imagine 2 processes (A, B) simultaneously doing inserts: Process A arrives first but contains a very large file. After that, Process B arrives with a very small file. I suppose that the inserts will run in parallel and that id assignment is consistent with arrival order, because nextval is evaluated at the moment each insert arrives. (Maybe I am wrong.) Because the Process B file is smaller than Process A's, Process B finishes saving to disk faster than Process A. (my guess) My questions: Process A will be assigned ID=1 and Process B will get Id=2. Is that correct? Is it possible (rare but possible) that "select * from mytable" will return only ID=2 because the disk save operation of Process A (Id=1) has not finished yet? Thanks in advance.
A: Process A will be assigned ID=1 and Process B will get Id=2. Is that correct?
Probably yes, but I have not tested it.
Is it possible (rare but possible) that select * from mytable will return only ID=2 because the disk save operation of Process A (Id=1) has not finished yet?
This is possible for a variety of reasons, and not only for a short period of time. Imagine the process that got ID=1 crashes before it finishes the transaction. You will then never see a row with ID=1 in your table.
If you catch yourself assuming that SERIAL means "continuous", take a step back and redesign your data model. SERIAL rather means "automatic value that is guaranteed to be unique". It is a good idea for an ID; it is generally not appropriate for numbering.
Postgres eventually consistency Id
Take a table like this:
create table mytable(id SERIAL, largeImage bytea);
Imagine 2 processes (A, B) simultaneously doing inserts: Process A arrives first but contains a very large file. After that, Process B arrives with a very small file. I suppose that the inserts will run in parallel and that id assignment is consistent with arrival order, because nextval is evaluated at the moment each insert arrives. (Maybe I am wrong.) Because the Process B file is smaller than Process A's, Process B finishes saving to disk faster than Process A. (my guess) My questions: Process A will be assigned ID=1 and Process B will get Id=2. Is that correct? Is it possible (rare but possible) that "select * from mytable" will return only ID=2 because the disk save operation of Process A (Id=1) has not finished yet? Thanks in advance.
[ "\nProcess A will be assigned with ID=1 and Process B will get Id=2. Thats correct?\n\nProbably yes, but I have not tested it.\n\nIt is posible (rare but possible) that select * from mytable will return only the ID=2 because save disk operation of Process A (Id=1) has not finished already?\n\nThis is possible for a variety of reasons and not only for a short period of time. Imagine the process that got the ID=1 crashes before it finishes the transaction. You will then never see a row with ID=1 in your table.\nIf you catch yourself assuming that SERIAL means \"continuous\", take a step back and redesign your data model. SERIAL rather means \"automatic value that is guaranteed to be unique\". It is a good idea for an ID, it is generally not appropriate for numbering.\n" ]
[ 0 ]
[]
[]
[ "postgresql" ]
stackoverflow_0074654754_postgresql.txt
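To make the visibility argument in the Postgres entry above concrete, here is a small psycopg2 sketch; the DSN is a placeholder and mytable is assumed to exist as in the question. Two connections stand in for the two processes: as long as connection A has not committed, a reader sees only B's row, so the id sequence can appear to have a gap.

import psycopg2

a = psycopg2.connect("dbname=test")  # placeholder DSN, stands in for Process A
b = psycopg2.connect("dbname=test")  # stands in for Process B

with a.cursor() as cur:
    # the "large file" insert; the size is irrelevant, what matters is that A has not committed
    cur.execute("INSERT INTO mytable (largeImage) VALUES (%s) RETURNING id", (b"\x00" * 1024,))
    print("A got id", cur.fetchone()[0])   # e.g. 1

with b.cursor() as cur:
    cur.execute("INSERT INTO mytable (largeImage) VALUES (%s) RETURNING id", (b"\x01",))
    print("B got id", cur.fetchone()[0])   # e.g. 2
b.commit()

with b.cursor() as cur:
    cur.execute("SELECT id FROM mytable ORDER BY id")
    print(cur.fetchall())   # only B's id is visible until A commits

a.commit()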
Q: AzerothCore: does rate.drop.item.quality from the worldserver.conf affect world drops? I want to modify world drop rates, but I'm not sure how the rate.drop.item works. It seems to not affect all drops as I would have expected. I've done some testing using various values from 200 all the way up to 10,000,000 and the results seem strange. Normal enemies don't seem to drop world blues any higher than normal, unless it's something normally on their drop table, like a blueprint or something (I tested this on many enemies across a few different zones). Enemies in dungeons drop many more BoE blues, but it would appear to be BoE drops specific to the instance, not world drops. Oddly, chests seem to act as I would expect, if I have the rate.drop.item.rare set to a very high value, I will receive several world drop rares from the chest. It just doesn't seem to affect creatures. I don't think this matters, but I was testing this with GM privileges. Can anyone clarify more precisely how this configuration option works, and if it doesn't affect world drops, is there a config that does, or will I have to edit some tables to achieve this? Thanks. UPDATE I ended up modifying the database directly. Here's the update I ran UPDATE acore_world.creature_loot_template INNER JOIN reference_loot_template ON creature_loot_template.reference = reference_loot_template.entry INNER JOIN item_template ON reference_loot_template.item = item_template.entry SET creature_loot_template.Chance = creature_loot_template.Chance*5 WHERE creature_loot_template.Reference != 0 AND creature_loot_template.Chance < 1 AND (item_template.quality = 3 OR item_template.quality = 4) AND creature_loot_template.GroupId != 0 AND (reference_loot_template.comment NOT LIKE '%ReferenceTable%') A: The way Loots are managed in AzerothCore is more complex than it seems: it is not only about the drop chance of every single item but there are also other factors for example Groups etc... I recommend reading: https://www.azerothcore.org/wiki/loot_template To understand how the core reads the loot tables.
AzerothCore: does rate.drop.item.quality from the worldserver.conf affect world drops?
I want to modify world drop rates, but I'm not sure how the rate.drop.item works. It seems to not affect all drops as I would have expected. I've done some testing using various values from 200 all the way up to 10,000,000 and the results seem strange. Normal enemies don't seem to drop world blues any higher than normal, unless it's something normally on their drop table, like a blueprint or something (I tested this on many enemies across a few different zones). Enemies in dungeons drop many more BoE blues, but it would appear to be BoE drops specific to the instance, not world drops. Oddly, chests seem to act as I would expect, if I have the rate.drop.item.rare set to a very high value, I will receive several world drop rares from the chest. It just doesn't seem to affect creatures. I don't think this matters, but I was testing this with GM privileges. Can anyone clarify more precisely how this configuration option works, and if it doesn't affect world drops, is there a config that does, or will I have to edit some tables to achieve this? Thanks. UPDATE I ended up modifying the database directly. Here's the update I ran UPDATE acore_world.creature_loot_template INNER JOIN reference_loot_template ON creature_loot_template.reference = reference_loot_template.entry INNER JOIN item_template ON reference_loot_template.item = item_template.entry SET creature_loot_template.Chance = creature_loot_template.Chance*5 WHERE creature_loot_template.Reference != 0 AND creature_loot_template.Chance < 1 AND (item_template.quality = 3 OR item_template.quality = 4) AND creature_loot_template.GroupId != 0 AND (reference_loot_template.comment NOT LIKE '%ReferenceTable%')
[ "The way Loots are managed in AzerothCore is more complex than it seems: it is not only about the drop chance of every single item but there are also other factors for example Groups etc...\nI recommend reading:\nhttps://www.azerothcore.org/wiki/loot_template\nTo understand how the core reads the loot tables.\n" ]
[ 0 ]
[]
[]
[ "azerothcore" ]
stackoverflow_0074621476_azerothcore.txt
Q: Spherical Graph Layout in Python Objective Display a 3D Sphere graph structure based on input edges & nodes using VTK for visualisation. As for example shown in https://epfl-lts2.github.io/gspbox-html/doc/graphs/gsp_sphere.html Target: State of work Input data as given factor NetworkX for position calculation Handover to VTK methods for 3D visualisation Problem 3 years ago, I had achieved the visualisation as shown above. Unfortunately, I did a little bit of too much cleaning and I just realized, that I dont have these methods anymore. It is somehow a force-directed graph on a sphere surface. Maybe similar to the "strong gravity" parameter in the 2D forceatlas. I have not found any 3D implementation of this yet. I tried again with the following algorithms, but none of them has produced this layout, neither have parameter tuning of these algorithms (or did I miss an important one?): NetworkX: Spherical, Spring, Shell, Kamada Kawaii, Fruchterman-Reingold (the 2D fruchterman-reingold in Gephi looks like it could come close to the target in a 3D version, yet gephi does not support 3D or did I oversee something?) ForceAtlas2 Gephi (the 2D fruchterman-reingold looks like a circle, but this is not available in 3D, nor does the 3D Force Atlas produce valid Z-Coordinates (they are within a range of +1e-4 and -1e-4) Researching for "spherical graph layout" has not brought me to any progress (only to this view which seems very similar https://observablehq.com/@fil/3d-graph-on-sphere ). How can I achieve this spherical layout using python (or a third party which provides a positioning information) Update: I made some progress and found the keywords non-euclidean, hyperbolic and spherical force-directed algorithms, however still not achieved anything yet. Or Non-Euclidean Riemann Embeddings (https://www2.cs.arizona.edu/~kobourov/riemann_embedders.pdf) A: Have you tried the python lib version of the GSPBOX? If yes, why it does not work for you? https://pygsp.readthedocs.io/en/stable/reference/graphs.html A: To display a 3D Sphere graph structure using VTK, you can use the vtkSphereSource class to generate the sphere geometry, and the vtkGraphLayoutView class to visualize the graph. 
Here is an example of how you can create a 3D Sphere graph structure in VTK: First, import the required modules: sphere = vtk.vtkSphereSource() sphere.SetRadius(1.0) sphere.SetThetaResolution(32) sphere.SetPhiResolution(32) Next, create a vtkSphereSource object and set the radius and resolution of the sphere: sphere = vtk.vtkSphereSource() sphere.SetRadius(1.0) sphere.SetThetaResolution(32) sphere.SetPhiResolution(32) Then, create a vtkPolyData object and set the points and polygons of the sphere using the output of the vtkSphereSource object: polydata = vtk.vtkPolyData() polydata.SetPoints(sphere.GetOutput().GetPoints()) polydata.SetPolys(sphere.GetOutput().GetPolys()) Next, create a vtkPoints object and set the coordinates of the nodes in the graph: points = vtk.vtkPoints() points.InsertNextPoint(1.0, 0.0, 0.0) points.InsertNextPoint(0.0, 1.0, 0.0) points.InsertNextPoint(0.0, 0.0, 1.0) Then, create a vtkCellArray object and add the edges of the graph to the vtkCellArray object: cells = vtk.vtkCellArray() cells.InsertNextCell(2) cells.InsertCellPoint(0) cells.InsertCellPoint(1) cells.InsertNextCell(2) cells.InsertCellPoint(0) cells.InsertCellPoint(2) Next, create a vtkPolyData object and set the points and edges of the graph using the vtkPoints and vtkCellArray objects: graph = vtk.vtkPolyData() graph.SetPoints(points) graph.SetLines(cells) Then, create a vtkGraphLayoutView object and add the vtkPolyData objects for the sphere and the graph to the vtkGraphLayoutView object: view = vtk.vtkGraphLayoutView() view.AddRepresentationFromInput(polydata) view.
Spherical Graph Layout in Python
Objective Display a 3D Sphere graph structure based on input edges & nodes using VTK for visualisation. As for example shown in https://epfl-lts2.github.io/gspbox-html/doc/graphs/gsp_sphere.html Target: State of work Input data as given factor NetworkX for position calculation Handover to VTK methods for 3D visualisation Problem 3 years ago, I had achieved the visualisation as shown above. Unfortunately, I did a little bit of too much cleaning and I just realized, that I dont have these methods anymore. It is somehow a force-directed graph on a sphere surface. Maybe similar to the "strong gravity" parameter in the 2D forceatlas. I have not found any 3D implementation of this yet. I tried again with the following algorithms, but none of them has produced this layout, neither have parameter tuning of these algorithms (or did I miss an important one?): NetworkX: Spherical, Spring, Shell, Kamada Kawaii, Fruchterman-Reingold (the 2D fruchterman-reingold in Gephi looks like it could come close to the target in a 3D version, yet gephi does not support 3D or did I oversee something?) ForceAtlas2 Gephi (the 2D fruchterman-reingold looks like a circle, but this is not available in 3D, nor does the 3D Force Atlas produce valid Z-Coordinates (they are within a range of +1e-4 and -1e-4) Researching for "spherical graph layout" has not brought me to any progress (only to this view which seems very similar https://observablehq.com/@fil/3d-graph-on-sphere ). How can I achieve this spherical layout using python (or a third party which provides a positioning information) Update: I made some progress and found the keywords non-euclidean, hyperbolic and spherical force-directed algorithms, however still not achieved anything yet. Or Non-Euclidean Riemann Embeddings (https://www2.cs.arizona.edu/~kobourov/riemann_embedders.pdf)
[ "Have you tried the python lib version of the GSPBOX?\nIf yes, why it does not work for you?\nhttps://pygsp.readthedocs.io/en/stable/reference/graphs.html\n", "To display a 3D Sphere graph structure using VTK, you can use the vtkSphereSource class to generate the sphere geometry, and the vtkGraphLayoutView class to visualize the graph.\nHere is an example of how you can create a 3D Sphere graph structure in VTK:\nFirst, import the required modules:\nsphere = vtk.vtkSphereSource()\nsphere.SetRadius(1.0)\nsphere.SetThetaResolution(32)\nsphere.SetPhiResolution(32)\n\nNext, create a vtkSphereSource object and set the radius and resolution of the sphere:\nsphere = vtk.vtkSphereSource()\nsphere.SetRadius(1.0)\nsphere.SetThetaResolution(32)\nsphere.SetPhiResolution(32)\n\nThen, create a vtkPolyData object and set the points and polygons of the sphere using the output of the vtkSphereSource object:\n polydata = vtk.vtkPolyData()\npolydata.SetPoints(sphere.GetOutput().GetPoints())\npolydata.SetPolys(sphere.GetOutput().GetPolys())\n\nNext, create a vtkPoints object and set the coordinates of the nodes in the graph:\n\npoints = vtk.vtkPoints()\npoints.InsertNextPoint(1.0, 0.0, 0.0)\npoints.InsertNextPoint(0.0, 1.0, 0.0)\npoints.InsertNextPoint(0.0, 0.0, 1.0)\n\nThen, create a vtkCellArray object and add the edges of the graph to the vtkCellArray object:\ncells = vtk.vtkCellArray()\ncells.InsertNextCell(2)\ncells.InsertCellPoint(0)\ncells.InsertCellPoint(1)\ncells.InsertNextCell(2)\ncells.InsertCellPoint(0)\ncells.InsertCellPoint(2)\n\nNext, create a vtkPolyData object and set the points and edges of the graph using the vtkPoints and vtkCellArray objects:\ngraph = vtk.vtkPolyData()\n\ngraph.SetPoints(points)\ngraph.SetLines(cells)\nThen, create a vtkGraphLayoutView object and add the vtkPolyData objects for the sphere and the graph to the vtkGraphLayoutView object:\n view = vtk.vtkGraphLayoutView()\nview.AddRepresentationFromInput(polydata)\nview.\n\n" ]
[ 0, 0 ]
[]
[]
[ "3d", "graph", "python" ]
stackoverflow_0074164603_3d_graph_python.txt
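Neither answer in the spherical-layout entry above shows how to actually compute positions on a sphere. Here is one hedged approximation in Python: run an ordinary 3D force-directed layout with NetworkX and normalise every node position onto the unit sphere. This is not the GSPBox method or a true Riemannian embedding from the question, only a quick way to get the "force-directed on a sphere surface" look before handing the coordinates to VTK.

import networkx as nx
import numpy as np

def spherical_layout(graph, iterations=100, seed=42):
    """3D spring layout projected onto the unit sphere (an approximation)."""
    pos = nx.spring_layout(graph, dim=3, iterations=iterations, seed=seed)
    spherical = {}
    for node, xyz in pos.items():
        v = np.asarray(xyz, dtype=float)
        norm = np.linalg.norm(v)
        # guard against a node that landed exactly at the origin
        spherical[node] = v / norm if norm > 0 else np.array([0.0, 0.0, 1.0])
    return spherical

# example usage: positions for a small random graph, ready to hand to VTK points
g = nx.erdos_renyi_graph(30, 0.15, seed=1)
positions = spherical_layout(g)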
Q: Regex - Conditional based on first part of text Is it possible to make a conditional that does not check part of the string when is not needed? For example: Regex: ^[a-zA-Z]+.*#[0-9]+$ Example text: feature: My name is Oliver #9123 I would like to when the text being: release: My name is oliver The same Regex matches both cases not requiring the #9123 for the release prefix, is that possible? I have tried to use some regex conditionals that I found on google, but didn't have success. A: What you want is an optional group: ^[a-zA-Z\s]+(#[0-9]+)?$ So that, the following strings will match: "My name is Oliver #9123" "My name is Oliver" And this won't: "This is not valid #xxx" Regex playground A: If I understood you, you want the regex to work on part of the whole string, then you can create a function that splits the string into two parts - the part that will be checked by the regex and the part that won't be checked - then you can pass the part of the string to regex. A: You could try a logical OR (|) in the regexp: const tests=["My name is Oliver #9123", "release: My name is oliver", "this should fail", "#12345 another fail"]; tests.forEach(str=> console.log(str+" - "+(str.match(/^release|[a-zA-Z]+.*#[0-9]+/)?"pass":"fail")) )
Regex - Conditional based on first part of text
Is it possible to make a conditional that does not check part of the string when is not needed? For example: Regex: ^[a-zA-Z]+.*#[0-9]+$ Example text: feature: My name is Oliver #9123 I would like to when the text being: release: My name is oliver The same Regex matches both cases not requiring the #9123 for the release prefix, is that possible? I have tried to use some regex conditionals that I found on google, but didn't have success.
[ "What you want is an optional group:\n^[a-zA-Z\\s]+(#[0-9]+)?$\n\nSo that, the following strings will match:\n\"My name is Oliver #9123\"\n\"My name is Oliver\"\n\nAnd this won't:\n\"This is not valid #xxx\"\n\nRegex playground\n", "If I understood you, you want the regex to work on part of the whole string,\nthen you can create a function that splits the string into two parts - the part that will be checked by the regex and the part that won't be checked - then you can pass the part of the string to regex.\n", "You could try a logical OR (|) in the regexp:\n\n\nconst tests=[\"My name is Oliver #9123\",\n \"release: My name is oliver\",\n \"this should fail\",\n \"#12345 another fail\"];\n\ntests.forEach(str=>\n console.log(str+\" - \"+(str.match(/^release|[a-zA-Z]+.*#[0-9]+/)?\"pass\":\"fail\"))\n)\n\n\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "javascript", "regex" ]
stackoverflow_0074656407_javascript_regex.txt
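The accepted answer in the regex entry above is JavaScript, but the optional-group idea is language-agnostic; here is a small Python sketch of it. Note the character class is widened to allow the feature:/release: prefix, which the pattern exactly as written in the answer does not accept, and the test strings are made up for illustration.

import re

# the optional group (...)? makes the trailing issue number allowed but not required
pattern = re.compile(r"^[a-zA-Z]+: [a-zA-Z ]+( #[0-9]+)?$")

tests = [
    "feature: My name is Oliver #9123",  # matches, issue number present
    "release: My name is oliver",        # matches, issue number omitted
    "release: broken #abc",              # no match, #abc is not numeric
]
for s in tests:
    print(s, "->", bool(pattern.match(s)))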
Q: Extracting text from Tag using selenium Java I'm trying to extract text from tag. But unfortunately nothing worked for me. <div id="sign_in" class="sign_in"> <h4>usernames lists:</h4> user_1 <br> user_2 <br> user_3 <br> user_4 <br> </div> I'm trying to get it in the list and then extract text. List <WebElement> li = driver.findElements(By.xpath("//div[@class='sign_in']/br")); for(int i=0;i<li.size();i++) { System.out.println(li.get(i).getText()); } A: <br> is not a closed tag, it can not hold any text. It's just line separator, see https://www.w3schools.com/tags/tag_br.asp. This will do the job: WebElement signIn = driver.findElement(By.id("sign_in")); String signInFullText = signIn.getText(); String[] splitted = signInFullText.split("\\r?\\n|\\r"); for (String s: splitted) { if (!s.equals("usernames lists:")) { System.out.println(s); } } A: You can try using the below method String[] splitted = str.split("<br>|<br/>"); regex. Because the html tag can be <br/> or <br> String str = "<div id=\"sign_in\" class=\"sign_in\">" + " <h4>usernames lists:</h4>" + " user_1" + " <br>" + " user_2" + " <br>" + " user_3" + " <br>" + " user_4\r\n" + " <br>\r\n" + "</div>"; String[] splitted = str.split("<br>|<br/>"); System.out.println(Arrays.toString(splitted)); Sample output : [<div id="sign_in" class="sign_in"> <h4>usernames lists:</h4> user_1 , user_2 , user_3 , user_4 , </div>]
Extracting text from Tag using selenium Java
I'm trying to extract text from tag. But unfortunately nothing worked for me. <div id="sign_in" class="sign_in"> <h4>usernames lists:</h4> user_1 <br> user_2 <br> user_3 <br> user_4 <br> </div> I'm trying to get it in the list and then extract text. List <WebElement> li = driver.findElements(By.xpath("//div[@class='sign_in']/br")); for(int i=0;i<li.size();i++) { System.out.println(li.get(i).getText()); }
[ "<br> is not a closed tag, it can not hold any text. It's just line separator, see https://www.w3schools.com/tags/tag_br.asp.\nThis will do the job:\nWebElement signIn = driver.findElement(By.id(\"sign_in\"));\nString signInFullText = signIn.getText();\nString[] splitted = signInFullText.split(\"\\\\r?\\\\n|\\\\r\");\nfor (String s: splitted) {\n if (!s.equals(\"usernames lists:\")) {\n System.out.println(s);\n }\n}\n\n", "You can try using the below method String[] splitted = str.split(\"<br>|<br/>\"); regex. Because the html tag can be <br/> or <br>\nString str = \"<div id=\\\"sign_in\\\" class=\\\"sign_in\\\">\"\n + \" <h4>usernames lists:</h4>\"\n + \" user_1\"\n + \" <br>\"\n + \" user_2\"\n + \" <br>\"\n + \" user_3\"\n + \" <br>\"\n + \" user_4\\r\\n\"\n + \" <br>\\r\\n\"\n + \"</div>\";\n\nString[] splitted = str.split(\"<br>|<br/>\");\nSystem.out.println(Arrays.toString(splitted));\n\nSample output :\n[<div id=\"sign_in\" class=\"sign_in\">\n <h4>usernames lists:</h4>\n user_1\n , \n user_2\n , \n user_3\n , \n user_4\n , \n</div>]\n\n" ]
[ 0, 0 ]
[]
[]
[ "java", "selenium_webdriver" ]
stackoverflow_0074655911_java_selenium_webdriver.txt
Q: Gson and Ktor production build converts json names to letters I use Gson to create a Json string when sending data from my Android app back to the server using Ktor. Works fine for debug release, but under release Gson appears to convert all the member names to letters How it should look: How it looks on release compile: This must be a configuration to set somewhere? How do I force Gson to retain variable names? A: Release build minification obfuscates names, including your model class fields. Some options: You can add gson's @SerializedName("field") annotations to your model class fields to specify the JSON names to be used with those fields despite obfuscation. You can add -keep rules to your R8/Proguard configuration file to prevent your model classes from being obfuscated.
Gson and Ktor production build converts json names to letters
I use Gson to create a Json string when sending data from my Android app back to the server using Ktor. Works fine for debug release, but under release Gson appears to convert all the member names to letters How it should look: How it looks on release compile: This must be a configuration to set somewhere? How do I force Gson to retain variable names?
[ "Release build minification obfuscates names, including your model class fields. Some options:\n\nYou can add gson's @SerializedName(\"field\") annotations to your model class fields to specify the JSON names to be used with those fields despite obfuscation.\n\nYou can add -keep rules to your R8/Proguard configuration file to prevent your model classes from being obfuscated.\n\n\n" ]
[ 1 ]
[]
[]
[ "android", "gson", "ktor" ]
stackoverflow_0074656474_android_gson_ktor.txt
Q: Why does TypeScript infer the 'never' type when reducing an Array with concat? Code speaks better than language, so: ['a', 'b', 'c'].reduce((accumulator, value) => accumulator.concat(value), []); The code is very silly and returns a copied Array... TS complains on concat's argument: TS2345: Argument of type 'string' is not assignable to parameter of type 'ConcatArray'. A: I believe this is because the type for [] is inferred to be never[], which is the type for an array that MUST be empty. You can use a type cast to address this: ['a', 'b', 'c'].reduce((accumulator, value) => accumulator.concat(value), [] as string[]); Normally this wouldn't be much of a problem since TypeScript does a decent job at figuring out a better type to assign to an empty array based on what you do with it. However, since your example is 'silly' as you put it, TypeScript isn't able to make any inferences and leaves the type as never[]. A: A better solution which avoids a type assertion (aka type cast) in two variants: Use string[] as the generic type parameter of the reduce method (thanks @depoulo for mentioning it): ['a', 'b', 'c'].reduce<string[]>((accumulator, value) => accumulator.concat(value), []); Type the accumulator value as string[] (and avoid a type cast on []): ['a', 'b', 'c'].reduce((accumulator: string[], value) => accumulator.concat(value), []); Play with this solution in the typescript playground. Notes: Type assertions (sometimes called type casts) should be avoided if you can because you're taking one type and transpose it onto something else. This can cause side-effects since you're manually taking control of coercing a variable into another type. This typescript error only occurs if the strictNullChecks option is set to true. The Typescript error disappears when disabling that option, but that is probably not what you want. I reference the entire error message I get with Typescript 3.9.2 here so that Google finds this thread for people who are searching for answers (because Typescript error messages sometimes change from version to version): No overload matches this call. Overload 1 of 2, '(...items: ConcatArray<never>[]): never[]', gave the following error. Argument of type 'string' is not assignable to parameter of type 'ConcatArray<never>'. Overload 2 of 2, '(...items: ConcatArray<never>[]): never[]', gave the following error. Argument of type 'string' is not assignable to parameter of type 'ConcatArray<never>'.(2769) A: You should use generics to address this. ['a', 'b', 'c'].reduce<string[]>((accumulator, value) => accumulator.concat(value), []); This will set the type of the initial empty array, which in my opinion is the most correct solution. A: You can use Generic type to avoid this error. Check my example of flatten function: export const flatten = <T>(arr: T[]): T[] => arr.reduce((flat, toFlatten) => (flat.concat(Array.isArray(toFlatten) ? flatten(toFlatten) : toFlatten)), [] as T[]); A: none of the above worked for me, even with altering the tsconfig.json file to "strict": false and was only able to avoid breaking the application with the following: // eslint-disable-next-line @typescript-eslint/ban-ts-comment // @ts-ignore A: Two other ways to set the type that I like more than type casting (i.e [] as string) is: <string[]>[] Array<string>(0)
Why does TypeScript infer the 'never' type when reducing an Array with concat?
Code speaks better than language, so: ['a', 'b', 'c'].reduce((accumulator, value) => accumulator.concat(value), []); The code is very silly and returns a copied Array... TS complains on concat's argument: TS2345: Argument of type 'string' is not assignable to parameter of type 'ConcatArray'.
[ "I believe this is because the type for [] is inferred to be never[], which is the type for an array that MUST be empty. You can use a type cast to address this:\n['a', 'b', 'c'].reduce((accumulator, value) => accumulator.concat(value), [] as string[]);\n\nNormally this wouldn't be much of a problem since TypeScript does a decent job at figuring out a better type to assign to an empty array based on what you do with it. However, since your example is 'silly' as you put it, TypeScript isn't able to make any inferences and leaves the type as never[].\n", "A better solution which avoids a type assertion (aka type cast) in two variants:\n\nUse string[] as the generic type parameter of the reduce method (thanks @depoulo for mentioning it):\n\n['a', 'b', 'c'].reduce<string[]>((accumulator, value) => accumulator.concat(value), []);\n\n\nType the accumulator value as string[] (and avoid a type cast on []):\n\n['a', 'b', 'c'].reduce((accumulator: string[], value) => accumulator.concat(value), []);\n\nPlay with this solution in the typescript playground.\nNotes:\n\nType assertions (sometimes called type casts) should be avoided if you can because you're taking one type and transpose it onto something else. This can cause side-effects since you're manually taking control of coercing a variable into another type.\n\nThis typescript error only occurs if the strictNullChecks option is set to true. The Typescript error disappears when disabling that option, but that is probably not what you want.\n\nI reference the entire error message I get with Typescript 3.9.2 here so that Google finds this thread for people who are searching for answers (because Typescript error messages sometimes change from version to version):\nNo overload matches this call.\n Overload 1 of 2, '(...items: ConcatArray<never>[]): never[]', gave the following error.\n Argument of type 'string' is not assignable to parameter of type 'ConcatArray<never>'.\n Overload 2 of 2, '(...items: ConcatArray<never>[]): never[]', gave the following error.\n Argument of type 'string' is not assignable to parameter of type 'ConcatArray<never>'.(2769)\n\n\n\n", "You should use generics to address this.\n['a', 'b', 'c'].reduce<string[]>((accumulator, value) => accumulator.concat(value), []);\n\nThis will set the type of the initial empty array, which in my opinion is the most correct solution.\n", "You can use Generic type to avoid this error.\nCheck my example of flatten function:\nexport const flatten = <T>(arr: T[]): T[] => arr.reduce((flat, toFlatten) =>\n (flat.concat(Array.isArray(toFlatten) ? flatten(toFlatten) : toFlatten)), [] as T[]);\n\n", "none of the above worked for me, even with altering the tsconfig.json file to \"strict\": false and was only able to avoid breaking the application with the following:\n// eslint-disable-next-line @typescript-eslint/ban-ts-comment\n// @ts-ignore\n\n", "Two other ways to set the type that I like more than type casting (i.e [] as string) is:\n\n<string[]>[]\nArray<string>(0)\n\n" ]
[ 87, 33, 4, 1, 0, 0 ]
[]
[]
[ "functional_programming", "reduce", "reducers", "strictnullchecks", "typescript" ]
stackoverflow_0054117100_functional_programming_reduce_reducers_strictnullchecks_typescript.txt
Q: Preload Script not runnning/working properly with Electron I've been trying to get a preload script to run on my Electron app but it appears to either not be running at all or just not working properly. I currently have a main file, a preload file, a render file, and a html. I'm just trying to do the stuff from the Electron tutorial on using preload files, so right now my code is something like this: // main.js const {app, BrowserWindow, ipcMain, Menu} = require('electron'); const url = require('url'); const path = require('path'); let mainWindow; const createWindow = () => { // Create a window mainWindow = new BrowserWindow({ show: false, autoHideMenuBar: true, webPreferences: ({ preload: path.join(__dirname, 'scripts', 'preload.js'), nodeIntegration: true, }), }); mainWindow.maximize(); mainWindow.show(); // Load HTML into window mainWindow.loadFile('index.html'); // Open Dev Tools // mainWindow.webContents.openDevTools(); console.log(versions); } // preload.js const {contextBridge} = require('electron'); contextBridge.exposeInMainWorld('versions', { node: () => process.version.node, chrome: () => process.version.chrome, electron: () => process.version.electron, }); Index.html: <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'" /> <meta http-equiv="X-Content-Security-Policy" content="default-src 'self'; script-src 'self'" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="./css/style.css"> <title>Test</title> </head> <body> <h1>Test</h1> <p id="info"></p> <script>window.$ = window.jQuery = require('jquery');</script> <script src="render.js"></script> </body> // render.js const information = document.getElementById('info'); information.innerText = `This app is using Chrome (v${versions.chrome()}), Node.js (v${versions.node()}), and Electron (v ${versions.electron()})` Currently my output on the HTML from render.js is "This app is using Chrome (vundefined),Node.js (vundefined), and Electron (vundefined)" and my console.log line in main.js throws up a ReferenceError stating "versions is not defined". Anybody able to shine some light on how I could fix this? Thanks in advance. A: I think you made a typo: In your preload script contextBridge.exposeInMainWorld('versions', { node: () => process.version.node, chrome: () => process.version.chrome, electron: () => process.version.electron, }); Should be (add a "s" to process.version contextBridge.exposeInMainWorld('versions', { node: () => **process.versions.node, chrome: () => **process.versions.chrome, electron: () => **process.versions.electron, }); Can have a look to the docs: https://www.electronjs.org/docs/latest/api/process
Preload Script not running/working properly with Electron
I've been trying to get a preload script to run on my Electron app but it appears to either not be running at all or just not working properly. I currently have a main file, a preload file, a render file, and a html. I'm just trying to do the stuff from the Electron tutorial on using preload files, so right now my code is something like this: // main.js const {app, BrowserWindow, ipcMain, Menu} = require('electron'); const url = require('url'); const path = require('path'); let mainWindow; const createWindow = () => { // Create a window mainWindow = new BrowserWindow({ show: false, autoHideMenuBar: true, webPreferences: ({ preload: path.join(__dirname, 'scripts', 'preload.js'), nodeIntegration: true, }), }); mainWindow.maximize(); mainWindow.show(); // Load HTML into window mainWindow.loadFile('index.html'); // Open Dev Tools // mainWindow.webContents.openDevTools(); console.log(versions); } // preload.js const {contextBridge} = require('electron'); contextBridge.exposeInMainWorld('versions', { node: () => process.version.node, chrome: () => process.version.chrome, electron: () => process.version.electron, }); Index.html: <html lang="en"> <head> <meta charset="UTF-8"> <meta http-equiv="Content-Security-Policy" content="default-src 'self'; script-src 'self'" /> <meta http-equiv="X-Content-Security-Policy" content="default-src 'self'; script-src 'self'" /> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <link rel="stylesheet" href="./css/style.css"> <title>Test</title> </head> <body> <h1>Test</h1> <p id="info"></p> <script>window.$ = window.jQuery = require('jquery');</script> <script src="render.js"></script> </body> // render.js const information = document.getElementById('info'); information.innerText = `This app is using Chrome (v${versions.chrome()}), Node.js (v${versions.node()}), and Electron (v ${versions.electron()})` Currently my output on the HTML from render.js is "This app is using Chrome (vundefined),Node.js (vundefined), and Electron (vundefined)" and my console.log line in main.js throws up a ReferenceError stating "versions is not defined". Anybody able to shine some light on how I could fix this? Thanks in advance.
[ "I think you made a typo:\nIn your preload script\ncontextBridge.exposeInMainWorld('versions', {\n node: () => process.version.node,\n chrome: () => process.version.chrome,\n electron: () => process.version.electron,\n});\n\nShould be (add a \"s\" to process.version\ncontextBridge.exposeInMainWorld('versions', {\n node: () => **process.versions.node,\n chrome: () => **process.versions.chrome,\n electron: () => **process.versions.electron,\n});\n\nCan have a look to the docs:\nhttps://www.electronjs.org/docs/latest/api/process\n" ]
[ 0 ]
[]
[]
[ "electron", "html", "javascript" ]
stackoverflow_0074655984_electron_html_javascript.txt
Q: Is there a way to make certain functions "private" in a PowerShell script? When my shell starts, I load an external script that has a few functions I use to test things. Something like: # Include Service Test Tools $scriptPath = split-path -parent $MyInvocation.MyCommand.Definition . $scriptPath\SvcTest.ps1 In SvcTest.ps1, I have two functions: function isURI ([string] $address) { ($address -as [System.URI]).AbsoluteURI -ne $null } As well as: function Test-Service ([string] $url) { if (-Not (isURI($url))) { Write-Host "Invalid URL: $url" return } # Blah blah blah, implementation not important } The isURI function is basically just a utility function that allows Test-Service and perhaps other functions validate URIs. However, when I start my shell, I see that isURI is a function loaded globally. I can even type isURI http://www.google.com from the command line and get back True. My Question: Is there a way to make isURI private, so that only functions within SvcTest.ps1 can use it, while still allowing Test-Service to be global? Basically, I'm looking for a way to use property encapsulation within PowerShell scripts. A: In fact, if you call a .ps1 file, by default any functions and variables declared within it are scoped privately within the script (this is referred to as "script scope"). Since you're seeing both functions defined globally, I infer that you're dot-sourcing SvcTest.ps1, i.e. invoking it like this PS> . <path>\SvcTest.ps1 rather than calling it like this PS> <path>\SvcTest.ps1 You have two options. 1. If your private function is only used by one other function in the script, you can declare the private function within the body of the function that uses it, and invoke the script by dot-sourcing it: function Test-Service ([string] $url) { function isURI ([string] $address) { ($address -as [System.URI]).AbsoluteURI -ne $null } if (-Not (isURI($url))) { Write-Host "Invalid URL: $url" return } # Blah blah blah, implementation not important } 2. If the private function is needed by more than one other function within the script (or even if not, this is an alternative to the above), explicitly declare global scope for any functions that you want defined globally, and then call the script rather than dot-sourcing it: function isURI ([string] $address) { ($address -as [System.URI]).AbsoluteURI -ne $null } function global:Test-Service ([string] $url) { if (-Not (isURI($url))) { Write-Host "Invalid URL: $url" return } # Blah blah blah, implementation not important } In either case, Test-Service will be defined in the global scope, and isURI will be restricted to the script scope. * One thing that might confuse the issue here is that PowerShell only looks for executables in the path, not the current working directory, unless . has been added to the path (which is not the case by default). So, it's typical in PowerShell when invoking scripts in the working directory to precede the script name with .\. Don't confuse the . representing the working directory with the dot-sourcing operator. This calls a script: PS> .\SvcTest.ps1 This dot-sources it: PS> . .\SvcTest.ps1 A: It sounds to me like you're asking for functionality that's available by creating a module. Modules let you encapsulate code and export only desired aliases and/or functions. A module manifest is not strictly required; if you don't use a manifest, you can use Export-ModuleMember to specify what members you want exported from the module. See the help about_Modules about topic for more information. 
A: If you want to use a private scope for your function, it is done like this in Powershell. function Private:isURI ([string] $address) { ($address -as [System.URI]).AbsoluteURI -ne $null } A: Have you tried moving the isURI function to be a script, and then dot sourcing in your other functions instead of running it as a function? isuri.ps1: Param([string] $address) ($address -as [System.URI]).AbsoluteURI -ne $null svctext.ps1: function Test-Service ([string] $url) { if (-Not (. .\isURI($url))) { Write-Host "Invalid URL: $url" return } # Blah blah blah, implementation not important } A: I had a similar issue when creating a .psm1 module. If you use Export-ModuleMember (as Bill_Stewart suggests which is the accepted answer) for the "public" function you can expose your selected function(s). All other functions not exported are hidden publicly but can still be used by the module. If you don't include the Export-Module statement in your module, all members are automatically exposed. https://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/export-modulemember?view=powershell-7.2[Export-ModuleMember][1] function isURI ([string] $address) { ($address -as [System.URI]).AbsoluteURI -ne $null } function Test-Service ([string] $url) { if (-Not (isURI($url))) { Write-Host "Invalid URL: $url" return } # Blah blah blah, implementation not important } # only expose Test-Service publicly Export-ModuleMember -Function Test-Service
Is there a way to make certain functions "private" in a PowerShell script?
When my shell starts, I load an external script that has a few functions I use to test things. Something like: # Include Service Test Tools $scriptPath = split-path -parent $MyInvocation.MyCommand.Definition . $scriptPath\SvcTest.ps1 In SvcTest.ps1, I have two functions: function isURI ([string] $address) { ($address -as [System.URI]).AbsoluteURI -ne $null } As well as: function Test-Service ([string] $url) { if (-Not (isURI($url))) { Write-Host "Invalid URL: $url" return } # Blah blah blah, implementation not important } The isURI function is basically just a utility function that allows Test-Service and perhaps other functions validate URIs. However, when I start my shell, I see that isURI is a function loaded globally. I can even type isURI http://www.google.com from the command line and get back True. My Question: Is there a way to make isURI private, so that only functions within SvcTest.ps1 can use it, while still allowing Test-Service to be global? Basically, I'm looking for a way to use property encapsulation within PowerShell scripts.
[ "\nIn fact, if you call a .ps1 file, by default any functions and variables declared within it are scoped privately within the script (this is referred to as \"script scope\"). Since you're seeing both functions defined globally, I infer that you're dot-sourcing SvcTest.ps1, i.e. invoking it like this\nPS> . <path>\\SvcTest.ps1\n\nrather than calling it like this\nPS> <path>\\SvcTest.ps1\n\n\nYou have two options.\n1. If your private function is only used by one other function in the script, you can declare the private function within the body of the function that uses it, and invoke the script by dot-sourcing it:\nfunction Test-Service ([string] $url)\n{\n function isURI ([string] $address)\n {\n ($address -as [System.URI]).AbsoluteURI -ne $null\n }\n\n if (-Not (isURI($url)))\n {\n Write-Host \"Invalid URL: $url\"\n return\n }\n\n # Blah blah blah, implementation not important\n}\n\n2. If the private function is needed by more than one other function within the script (or even if not, this is an alternative to the above), explicitly declare global scope for any functions that you want defined globally, and then call the script rather than dot-sourcing it:\nfunction isURI ([string] $address)\n{\n ($address -as [System.URI]).AbsoluteURI -ne $null\n}\n\n\nfunction global:Test-Service ([string] $url)\n{\n if (-Not (isURI($url)))\n {\n Write-Host \"Invalid URL: $url\"\n return\n }\n\n # Blah blah blah, implementation not important\n}\n\nIn either case, Test-Service will be defined in the global scope, and isURI will be restricted to the script scope.\n\n* One thing that might confuse the issue here is that PowerShell only looks for executables in the path, not the current working directory, unless . has been added to the path (which is not the case by default). So, it's typical in PowerShell when invoking scripts in the working directory to precede the script name with .\\. Don't confuse the . representing the working directory with the dot-sourcing operator. This calls a script:\nPS> .\\SvcTest.ps1\n\nThis dot-sources it:\nPS> . .\\SvcTest.ps1\n\n\n", "It sounds to me like you're asking for functionality that's available by creating a module.\nModules let you encapsulate code and export only desired aliases and/or functions. A module manifest is not strictly required; if you don't use a manifest, you can use Export-ModuleMember to specify what members you want exported from the module.\nSee the help about_Modules about topic for more information.\n", "If you want to use a private scope for your function, it is done like this in Powershell.\nfunction Private:isURI ([string] $address)\n{\n ($address -as [System.URI]).AbsoluteURI -ne $null\n}\n\n", "Have you tried moving the isURI function to be a script, and then dot sourcing in your other functions instead of running it as a function?\nisuri.ps1:\nParam([string] $address)\n($address -as [System.URI]).AbsoluteURI -ne $null\n\nsvctext.ps1:\nfunction Test-Service ([string] $url)\n{\n if (-Not (. .\\isURI($url)))\n {\n Write-Host \"Invalid URL: $url\"\n return\n }\n\n # Blah blah blah, implementation not important\n}\n\n", "I had a similar issue when creating a .psm1 module. If you use Export-ModuleMember (as Bill_Stewart suggests which is the accepted answer) for the \"public\" function you can expose your selected function(s). 
All other functions not exported are hidden publicly but can still be used by the module.\nIf you don't include the Export-Module statement in your module, all members are automatically exposed.\nhttps://learn.microsoft.com/en-us/powershell/module/microsoft.powershell.core/export-modulemember?view=powershell-7.2[Export-ModuleMember][1]\nfunction isURI ([string] $address)\n{\n ($address -as [System.URI]).AbsoluteURI -ne $null\n}\n\nfunction Test-Service ([string] $url)\n{\n if (-Not (isURI($url)))\n {\n Write-Host \"Invalid URL: $url\"\n return\n }\n\n # Blah blah blah, implementation not important\n}\n\n# only expose Test-Service publicly\nExport-ModuleMember -Function Test-Service\n\n" ]
[ 21, 7, 5, 1, 0 ]
[]
[]
[ "powershell", "powershell_2.0" ]
stackoverflow_0025123877_powershell_powershell_2.0.txt
Q: How to set this code to open at specific times of the day? So basically, how do I set this code to run at a specific time of the day?
import winsound
from win10toast import ToastNotifier

def timer(reminder, seconds):
    notificator = ToastNotifier()
    notificator.show_toast("Reminder", f"""Alarm will go off in {seconds} Seconds.""", duration=20)
    notificator.show_toast("Reminder", reminder, duration=20)
    # alarm
    frequency = 2500
    duration = 1000
    winsound.Beep(frequency, duration)

if __name__ == "__main__":
    words = input("What shall I be reminded of: ")
    sec = int(input("Enter seconds: "))
    timer(words, sec)

Could this be it? I tried to write it, but it doesn't seem to work:
import time
local_time = float(input())
local_time = local_time * 60
time.sleep(local_time)

A: Two possibilities from the top of my head:

[Linux] Use a cron job: https://help.ubuntu.com/community/CronHowto
[Any OS] Use a scheduler: https://schedule.readthedocs.io/en/stable/
[Windows] Scheduling a .py file on Task Scheduler in Windows 10
How to set this code to open at specific times of the day?
So basically, how do I set this code to run at a specific time of the day?
import winsound
from win10toast import ToastNotifier

def timer(reminder, seconds):
    notificator = ToastNotifier()
    notificator.show_toast("Reminder", f"""Alarm will go off in {seconds} Seconds.""", duration=20)
    notificator.show_toast("Reminder", reminder, duration=20)
    # alarm
    frequency = 2500
    duration = 1000
    winsound.Beep(frequency, duration)

if __name__ == "__main__":
    words = input("What shall I be reminded of: ")
    sec = int(input("Enter seconds: "))
    timer(words, sec)

Could this be it? I tried to write it, but it doesn't seem to work:
import time
local_time = float(input())
local_time = local_time * 60
time.sleep(local_time)
[ "Two possibilities from the top of my head:\n\n[Linux] Use cron job https://help.ubuntu.com/community/CronHowto\n[Any OS] Use scheduler https://schedule.readthedocs.io/en/stable/\n[Windows] Scheduling a .py file on Task Scheduler in Windows 10\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074656493_python.txt
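For the reminder script in the entry above, besides cron or Task Scheduler, the script itself can simply sleep until a wall-clock time before firing. A minimal stdlib-only sketch; timer is assumed to be the function defined in the question, and the hour/minute prompts are additions.

import datetime
import time

def sleep_until(hour, minute):
    """Block until the next occurrence of hour:minute local time."""
    now = datetime.datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += datetime.timedelta(days=1)  # already passed today, wait for tomorrow
    time.sleep((target - now).total_seconds())

if __name__ == "__main__":
    words = input("What shall I be reminded of: ")
    hour = int(input("Hour (0-23): "))
    minute = int(input("Minute (0-59): "))
    sleep_until(hour, minute)
    timer(words, 0)  # fire the reminder defined in the question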
Q: S3 tagging and access control policies not working for limiting the tags keyset on an object Trying to restrict tags to only a given set of keys that can be attached to the objects. Using bucket level policies to define this condition. However, the logic is not working. Bucket policy (https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html) { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<Account-Id>:user/AdminUser" }, "Action": "s3:PutObjectTagging", "Resource": "arn:aws:s3:::test-notifications-per-prefix/*", "Condition": { "ForAllValues:StringLike": { "s3:RequestObjectTagKeys": "LIFE" } } } ] } Boto3 code to upload the object s3 = boto3.client('s3') response = s3.put_object_tagging( Bucket='test-notifications-per-prefix', Key="file.txt", Tagging = { 'TagSet': [ { 'Key': "TEST", 'Value': "SHORTTERM" } ] } ) The object is still getting uploaded when i run the above code. I am not able to figure out as why this is happening. Tried denying object tagging in the bucket policy (removed the condition from the policy and made the effect as Deny) then any object uploaded with a tag was throwing an access denied error. (so, the rules are being applied for sure) Can you please let me know as what i am doing wrong here? A: { "Version": "2012-10-17", "Statement": [ { "Effect": "Deny", "Principal": { "AWS": "arn:aws:iam::<account-id>:user/AdminUser" }, "Action": "s3:PutObjectTagging", "Resource": "arn:aws:s3:::test-notifications-per-prefix/prefix1/*", "Condition": { "StringNotEquals": { "s3:RequestObjectTag/LIFE": [ "2", "15" ] } } } ] } Able to restrict the key and value pairs in my S3 bucket using the following bucket policy. An explicit deny is denying all the requests coming from the principal that do not have the following tags. However, this policy will not work for object that are uploaded without tags.
S3 tagging and access control policies not working for limiting the tags keyset on an object
Trying to restrict tags to only a given set of keys that can be attached to the objects. Using bucket level policies to define this condition. However, the logic is not working. Bucket policy (https://docs.aws.amazon.com/AmazonS3/latest/userguide/tagging-and-policies.html) { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::<Account-Id>:user/AdminUser" }, "Action": "s3:PutObjectTagging", "Resource": "arn:aws:s3:::test-notifications-per-prefix/*", "Condition": { "ForAllValues:StringLike": { "s3:RequestObjectTagKeys": "LIFE" } } } ] } Boto3 code to upload the object s3 = boto3.client('s3') response = s3.put_object_tagging( Bucket='test-notifications-per-prefix', Key="file.txt", Tagging = { 'TagSet': [ { 'Key': "TEST", 'Value': "SHORTTERM" } ] } ) The object is still getting uploaded when i run the above code. I am not able to figure out as why this is happening. Tried denying object tagging in the bucket policy (removed the condition from the policy and made the effect as Deny) then any object uploaded with a tag was throwing an access denied error. (so, the rules are being applied for sure) Can you please let me know as what i am doing wrong here?
[ "{\n \"Version\": \"2012-10-17\",\n \"Statement\": [\n {\n \"Effect\": \"Deny\",\n \"Principal\": {\n \"AWS\": \"arn:aws:iam::<account-id>:user/AdminUser\"\n },\n \"Action\": \"s3:PutObjectTagging\",\n \"Resource\": \"arn:aws:s3:::test-notifications-per-prefix/prefix1/*\",\n \"Condition\": {\n \"StringNotEquals\": {\n \"s3:RequestObjectTag/LIFE\": [\n \"2\",\n \"15\"\n ]\n }\n }\n }\n ]\n}\n\nAble to restrict the key and value pairs in my S3 bucket using the following bucket policy. An explicit deny is denying all the requests coming from the principal that do not have the following tags.\nHowever, this policy will not work for object that are uploaded without tags.\n" ]
[ 0 ]
[]
[]
[ "amazon_s3", "amazon_web_services", "boto", "boto3" ]
stackoverflow_0074652071_amazon_s3_amazon_web_services_boto_boto3.txt
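To make the self-answer in the S3 entry above concrete, here is a hedged boto3 sketch that applies a deny-based policy like the one shown and then attempts a tagging call that the policy is intended to reject. The bucket name, account id and tag values are placeholders, and, as the answer notes, uploads without any tags are not covered by this policy.

import json
import boto3

BUCKET = "test-notifications-per-prefix"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": {"AWS": "arn:aws:iam::123456789012:user/AdminUser"},  # placeholder account id
        "Action": "s3:PutObjectTagging",
        "Resource": f"arn:aws:s3:::{BUCKET}/prefix1/*",
        "Condition": {"StringNotEquals": {"s3:RequestObjectTag/LIFE": ["2", "15"]}},
    }],
}

s3 = boto3.client("s3")
s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))

# This call should now fail with AccessDenied for AdminUser,
# because the LIFE tag value is not in the allowed list.
s3.put_object_tagging(
    Bucket=BUCKET,
    Key="prefix1/file.txt",
    Tagging={"TagSet": [{"Key": "LIFE", "Value": "99"}]},
)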
Q: OpenGL Camera Rotation with glm I'm trying to rotate camera but instead it rotates and changes position. float m_CameraRotation = 30.0f; TShader.Bind(); glm::mat4 proj = glm::ortho(0.0f, 1000.0f, 0.0f, 1000.0f, -1.0f, 1.0f); glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, 0)); glm::mat4 vp = proj * view; glm::mat4 transform = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f)) * glm::rotate(glm::mat4(1.0f), glm::radians(m_CameraRotation), glm::vec3(0, 0, 1)); TShader.UniformSetMat4f("u_ViewProjection", vp); TShader.UniformSetMat4f("u_Transform", transform); Shader gl_Position = u_ViewProjection * u_Transform * vec4(aPos, 1.0); And my triangle coordinates float TriangleVertices[] = { // positions // colors 500.0f, 700.0f, 0.0f, 1.0f, 0.0f, 0.0f, // top 250.0f, 300.0f, 0.0f, 0.0f, 1.0f, 0.0f, // bottom left 750.0f, 300.0f, 0.0f, 0.0f, 0.0f, 1.0f // bottom right }; unsigned int TriangleIndices[] = { 0, 1, 2 }; This is my triangle without rotation. And this is my triangle with m_CameraRotation = 30.0f; What's causing this? A: Since your translations don't do anything, the only transformations applied to the triangle's vertices are the rotation first and the orthographic projection second. The rotation axis being +z, it will rotate everything in the xy plane around the origin by m_CameraRotation degrees counterclockwise. Your triangle isn't really translated, it's rotated around the origin. The green vertex for example is at (250, 300) before rotation, and approximately (66.5, 384.8) after. Looking back at your screenshots, this is exactly where I would expect the triangle to be. To rotate the triangle around its center, or around the center of the screen, or around any point P, you need to translate it first by -P, then rotate it, then translate it back by +P. Something like: glm::vec3 center(500.f, 500.f, 0.f); glm::mat4 rotation = glm::rotate(glm::radians(m_CameraRotation), glm::vec3(0, 0, 1)); glm::mat4 transform = glm::translate(center) * rotation * glm::translate(-center); You could also try doing the orthographic projection first and the rotation second. Side note: you don't need the identity matrix as first argument to glm::translate and glm::rotate every time.
OpenGL Camera Rotation with glm
I'm trying to rotate camera but instead it rotates and changes position. float m_CameraRotation = 30.0f; TShader.Bind(); glm::mat4 proj = glm::ortho(0.0f, 1000.0f, 0.0f, 1000.0f, -1.0f, 1.0f); glm::mat4 view = glm::translate(glm::mat4(1.0f), glm::vec3(0, 0, 0)); glm::mat4 vp = proj * view; glm::mat4 transform = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f, 0.0f, 0.0f)) * glm::rotate(glm::mat4(1.0f), glm::radians(m_CameraRotation), glm::vec3(0, 0, 1)); TShader.UniformSetMat4f("u_ViewProjection", vp); TShader.UniformSetMat4f("u_Transform", transform); Shader gl_Position = u_ViewProjection * u_Transform * vec4(aPos, 1.0); And my triangle coordinates float TriangleVertices[] = { // positions // colors 500.0f, 700.0f, 0.0f, 1.0f, 0.0f, 0.0f, // top 250.0f, 300.0f, 0.0f, 0.0f, 1.0f, 0.0f, // bottom left 750.0f, 300.0f, 0.0f, 0.0f, 0.0f, 1.0f // bottom right }; unsigned int TriangleIndices[] = { 0, 1, 2 }; This is my triangle without rotation. And this is my triangle with m_CameraRotation = 30.0f; What's causing this?
[ "Since your translations don't do anything, the only transformations applied to the triangle's vertices are the rotation first and the orthographic projection second. The rotation axis being +z, it will rotate everything in the xy plane around the origin by m_CameraRotation degrees counterclockwise.\nYour triangle isn't really translated, it's rotated around the origin. The green vertex for example is at (250, 300) before rotation, and approximately (66.5, 384.8) after. Looking back at your screenshots, this is exactly where I would expect the triangle to be.\nTo rotate the triangle around its center, or around the center of the screen, or around any point P, you need to translate it first by -P, then rotate it, then translate it back by +P.\nSomething like:\nglm::vec3 center(500.f, 500.f, 0.f);\nglm::mat4 rotation = glm::rotate(glm::radians(m_CameraRotation), glm::vec3(0, 0, 1));\nglm::mat4 transform = glm::translate(center) * rotation * glm::translate(-center);\n\nYou could also try doing the orthographic projection first and the rotation second.\nSide note: you don't need the identity matrix as first argument to glm::translate and glm::rotate every time.\n" ]
[ 1 ]
[]
[]
[ "c++", "glm_math", "rotation", "transform" ]
stackoverflow_0074655881_c++_glm_math_rotation_transform.txt
Q: How do I add color to my svg image in react I have a list of icons. I want to change the icons colors to white. By default my icons are black. Any suggestions guys? I normally use 'fill: white' in my css but now that I am doing this in React... it's not working! import React from 'react' import menuIcon from '../img/menu.svg'; import homeIcon from '../img/home.svg'; <ul> <li> <a href="/" className="sidebar__link"> <img src={menuIcon} alt="sidebar icon" className="sidebar__icon" /> </a> </li> <li> <a href="/" className="sidebar__link"> <img src={homeIcon} alt="sidebar icon" className="sidebar__icon" /> </a> </li> </ul> .sidebar__icon { fill: #FFFFF; width: 3.2rem; height: 3.2rem; } A: I use this approach to avoid the need of creating a React component for each icon. As the docs say you can import the SVG file as a react component. import { ReactComponent as Logo } from './logo.svg'; const App = () => ( <div> {/* Logo is an actual React component */} <Logo /> </div> ); If you want to change the fill property you have to set your SVG fill's value to current then you can use it like this: <svg fill="current" stroke="current" ....> ... </svg> import { ReactComponent as Logo } from './logo.svg'; const App = () => ( <div> {/* Logo is an actual React component */} <Logo fill='red' stroke='green'/> </div> ); A: use your SVG as a component, then all the SVG goodness is accessible: const MenuIcon = (props) =>( <svg xmlns="http://www.w3.org/2000/svg" fill={props.fill} className={props.class}></svg> ) And in your render <li> <a href="/" className="sidebar__link"> <MenuIcon fill="white"/> </a> </li> A: You can change css of svg by accessing g or path. Starting from create-react-app version 2.0 import React from 'react' import {ReactComponent as Icon} from './home.svg'; export const Home = () => { return ( <div className='home'> <Icon className='home__icon'/> </div> ); }; .home__icon g { fill: #FFFFF; } .home__icon path { stroke: #e5e5e5; stroke-width: 10px; } A: I found an interesting solution to this problem. Don't know if it works in all cases though... Okay so I have an svg that import like: import LockIcon from "../assets/lock.svg" and then render it as: <LockIcon color={theme.colors.text.primary} /> and then in lock.svg I just add fill="currentColor" in the svg tag. Hopefully this can useful for some of you. A: If you want to change the color of svg without changing the style of svg or without doing any change in the code of svg itself. You can change the color of svg as an image also. This can be done with applying filter property of css. Change your color to css filter. For the blue Hex code = #0000ff blue to filter: invert(9%) sepia(99%) saturate(5630%) hue-rotate(246deg) brightness(111%) contrast(148%); code <img src="path/to/svg" style={{filter: "invert(9%) sepia(99%) saturate(5630%) hue-rotate(246deg) brightness(111%) contrast(148%)"}} /> Reference A: In my case, I needed to delete this part of my svg file to make it work: <style type="text/css"> .st0{fill:#8AC6F4;} </style> an then this works import { ReactComponent as Logo } from './logo.svg'; const App = () => ( <div> {/* Logo is an actual React component */} <Logo /> </div> ); A: As of React 16.13.1, you can use the SVG directly as a component and pass fill prop to change the colour. 
Please have a look at the following example: home.svg <svg fill="#000000" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24" width="24px" height="24px"><path d="M 12 2.0996094 L 1 12 L 4 12 L 4 21 L 11 21 L 11 15 L 13 15 L 13 21 L 20 21 L 20 12 L 23 12 L 12 2.0996094 z M 12 4.7910156 L 18 10.191406 L 18 11 L 18 19 L 15 19 L 15 13 L 9 13 L 9 19 L 6 19 L 6 10.191406 L 12 4.7910156 z"/></svg> ShowIcon/index.js import React from 'react'; import HomeIcon from './home.svg'; const ShowIcon = props => { return ( <HomeIcon fill='#ccc' /> ) } export default ShowIcon; However, I noticed after testing with several icons that it doesn't work with all the icons. So, please test it before using. A: Make sure the fill attribute of the path tag inside your .svg file is set to "none" otherwise this won't work <Icon fill="white"/> A: I faced the same issue, tried most of the suggestions with no luck. I solved it in a very simple way... no need to create a component for the SVG image >>> Only add the wanted color as fill in <path fill='#9d8189' ...></path> in .svg file A: I use this approach: import { ReactComponent as Logo } from './logo.svg'; const App = () => ( <div> {/* Logo is an actual React component */} <Logo fill='red' stroke='green'/> </div> ); Although you don't have to set colors to current in SVG to make it work. <svg fill="current" stroke="current" ....> ... </svg> You can keep some default colors so that you don't have to set them in React component each time you are using the SVG, but only when it's necessary. <svg fill="#ffffff" stroke="#ffffff" ....> ... </svg>
How do I add color to my svg image in react
I have a list of icons. I want to change the icons colors to white. By default my icons are black. Any suggestions guys? I normally use 'fill: white' in my css but now that I am doing this in React... it's not working! import React from 'react' import menuIcon from '../img/menu.svg'; import homeIcon from '../img/home.svg'; <ul> <li> <a href="/" className="sidebar__link"> <img src={menuIcon} alt="sidebar icon" className="sidebar__icon" /> </a> </li> <li> <a href="/" className="sidebar__link"> <img src={homeIcon} alt="sidebar icon" className="sidebar__icon" /> </a> </li> </ul> .sidebar__icon { fill: #FFFFF; width: 3.2rem; height: 3.2rem; }
[ "I use this approach to avoid the need of creating a React component for each icon. As the docs say you can import the SVG file as a react component.\nimport { ReactComponent as Logo } from './logo.svg';\nconst App = () => (\n <div>\n {/* Logo is an actual React component */}\n <Logo />\n </div>\n);\n\nIf you want to change the fill property you have to set your SVG fill's value to current then you can use it like this:\n<svg fill=\"current\" stroke=\"current\" ....> ... </svg>\n\nimport { ReactComponent as Logo } from './logo.svg';\nconst App = () => (\n <div>\n {/* Logo is an actual React component */}\n <Logo fill='red' stroke='green'/>\n </div>\n);\n\n", "use your SVG as a component, then all the SVG goodness is accessible:\nconst MenuIcon = (props) =>(\n <svg xmlns=\"http://www.w3.org/2000/svg\" fill={props.fill} className={props.class}></svg>\n)\n\nAnd in your render\n<li>\n <a href=\"/\" className=\"sidebar__link\">\n <MenuIcon fill=\"white\"/>\n </a>\n</li>\n\n", "You can change css of svg by accessing g or path.\nStarting from create-react-app version 2.0\nimport React from 'react'\nimport {ReactComponent as Icon} from './home.svg';\n\nexport const Home = () => {\n\n return (\n <div className='home'>\n <Icon className='home__icon'/>\n </div>\n );\n};\n\n.home__icon g {\n fill: #FFFFF;\n}\n.home__icon path {\n stroke: #e5e5e5;\n stroke-width: 10px;\n}\n\n", "I found an interesting solution to this problem. Don't know if it works in all cases though...\nOkay so I have an svg that import like:\nimport LockIcon from \"../assets/lock.svg\"\n\nand then render it as:\n<LockIcon color={theme.colors.text.primary} />\n\nand then in lock.svg I just add fill=\"currentColor\" in the svg tag.\nHopefully this can useful for some of you.\n", "If you want to change the color of svg without changing the style of svg or without doing any change in the code of svg itself.\nYou can change the color of svg as an image also.\nThis can be done with applying filter property of css.\nChange your color to css filter.\nFor the blue\n\nHex code = #0000ff\nblue to filter: invert(9%) sepia(99%) saturate(5630%) hue-rotate(246deg) brightness(111%) contrast(148%);\n\ncode\n<img src=\"path/to/svg\" \n style={{filter: \"invert(9%) sepia(99%) saturate(5630%) hue-rotate(246deg) brightness(111%) contrast(148%)\"}} \n/>\n\nReference\n", "In my case, I needed to delete this part of my svg file to make it work:\n<style type=\"text/css\">\n .st0{fill:#8AC6F4;}\n</style>\n\nan then this works\nimport { ReactComponent as Logo } from './logo.svg';\nconst App = () => (\n <div>\n {/* Logo is an actual React component */}\n <Logo />\n </div>\n);\n\n", "As of React 16.13.1, you can use the SVG directly as a component and pass fill prop to change the colour. Please have a look at the following example:\nhome.svg\n<svg fill=\"#000000\" xmlns=\"http://www.w3.org/2000/svg\" viewBox=\"0 0 24 24\" width=\"24px\" height=\"24px\"><path d=\"M 12 2.0996094 L 1 12 L 4 12 L 4 21 L 11 21 L 11 15 L 13 15 L 13 21 L 20 21 L 20 12 L 23 12 L 12 2.0996094 z M 12 4.7910156 L 18 10.191406 L 18 11 L 18 19 L 15 19 L 15 13 L 9 13 L 9 19 L 6 19 L 6 10.191406 L 12 4.7910156 z\"/></svg>\n\nShowIcon/index.js\nimport React from 'react';\n\nimport HomeIcon from './home.svg';\n\nconst ShowIcon = props => {\n return (\n <HomeIcon fill='#ccc' />\n )\n}\n\nexport default ShowIcon;\n\nHowever, I noticed after testing with several icons that it doesn't work with all the icons. 
So, please test it before using.\n", "Make sure the fill attribute of the path tag inside your .svg file is set to \"none\" otherwise this won't work\n<Icon fill=\"white\"/>\n\n", "I faced the same issue, tried most of the suggestions with no luck.\nI solved it in a very simple way... no need to create a component for the SVG image\n>>> Only add the wanted color as fill in <path fill='#9d8189' ...></path> in .svg file\n", "I use this approach:\nimport { ReactComponent as Logo } from './logo.svg';\nconst App = () => (\n <div>\n {/* Logo is an actual React component */}\n <Logo fill='red' stroke='green'/>\n </div>\n);\n\nAlthough you don't have to set colors to current in SVG to make it work.\n<svg fill=\"current\" stroke=\"current\" ....> ... </svg>\n\nYou can keep some default colors so that you don't have to set them in React component each time you are using the SVG, but only when it's necessary.\n<svg fill=\"#ffffff\" stroke=\"#ffffff\" ....> ... </svg>\n\n" ]
[ 91, 49, 34, 7, 6, 4, 3, 1, 0, 0 ]
[ "My suggestion is to use a tool like SVG to font convertor, icomoon is my favorite, create your own custom font library for importing your SVG icons\nProps \n\nChange color, font size, etc with CSS for icons\nUse in the entire project with a single import\nThey are providing some free icons/ icons bundle\n\nCons\n\nInitial learning curve\nProperly follow the dimensions, outline etc while creating an SVG icons\nDifficult for multicolor icons\n\n" ]
[ -1 ]
[ "css", "html", "reactjs" ]
stackoverflow_0054519654_css_html_reactjs.txt
Q: Weblogic ProxyServlet in web.xml dynamic IP address I use ProxyServlet in web.xml in order to redirect requests from the frontend server to the backend server. <servlet> <servlet-name>ProxyServlet</servlet-name> <servlet-class>weblogic.servlet.proxy.HttpProxyServlet</servlet-class> <init-param> <param-name>WebLogicHost</param-name> <param-value>xxx.xxx.xxx.xxx</param-value> </init-param> <init-param> <param-name>WebLogicPort</param-name> <param-value>xxxx</param-value> </init-param> </servlet> But my problem is that I want to use a dynamic IP address and port... so can I use an env value or something else? Because I want to deploy the same WAR into different servers. weblogic 12c A: Thank you Emmanuel Collin. The solution is to use a deployment plan which is an xml file that we put on each server (DEV, QUALIF, PROD) and in each file we put the value of IP address and port.
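To make the deployment-plan suggestion concrete, here is a sketch of what such a plan could look like for the ProxyServlet init-params above. The application/module names and variable values are placeholders, and the exact element order and xpath predicate syntax should be verified against the Oracle WebLogic 12c deployment-plan documentation before use.

<?xml version="1.0" encoding="UTF-8"?>
<deployment-plan xmlns="http://xmlns.oracle.com/weblogic/deployment-plan">
  <application-name>myapp.war</application-name>
  <variable-definition>
    <variable>
      <name>ProxyHost</name>
      <value>10.0.0.15</value>
    </variable>
    <variable>
      <name>ProxyPort</name>
      <value>7001</value>
    </variable>
  </variable-definition>
  <module-override>
    <module-name>myapp.war</module-name>
    <module-type>war</module-type>
    <module-descriptor external="false">
      <root-element>web-app</root-element>
      <uri>WEB-INF/web.xml</uri>
      <variable-assignment>
        <name>ProxyHost</name>
        <xpath>/web-app/servlet/[servlet-name="ProxyServlet"]/init-param/[param-name="WebLogicHost"]/param-value</xpath>
      </variable-assignment>
      <variable-assignment>
        <name>ProxyPort</name>
        <xpath>/web-app/servlet/[servlet-name="ProxyServlet"]/init-param/[param-name="WebLogicPort"]/param-value</xpath>
      </variable-assignment>
    </module-descriptor>
  </module-override>
</deployment-plan>

Each environment then deploys the same WAR with its own plan (for example via the -plan option of weblogic.Deployer, or by attaching the plan in the admin console), which is how DEV, QUALIF and PROD can each point at a different backend host and port.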
Weblogic ProxyServlet in web.xml dynamic IP address
I use ProxyServlet in web.xml in order to redirect requests from the frontend server to the backend server. <servlet> <servlet-name>ProxyServlet</servlet-name> <servlet-class>weblogic.servlet.proxy.HttpProxyServlet</servlet-class> <init-param> <param-name>WebLogicHost</param-name> <param-value>xxx.xxx.xxx.xxx</param-value> </init-param> <init-param> <param-name>WebLogicPort</param-name> <param-value>xxxx</param-value> </init-param> </servlet> But my problem is that I want to use a dynamic IP address and port... so can I use an env value or something else? Because I want to deploy the same WAR into different servers. weblogic 12c
[ "Thank you Emmanuel Collin.\nThe solution is to use a deployment plan which is an xml file that we put on each server (DEV, QUALIF, PROD) and in each file we put the value of IP address and port.\n" ]
[ 0 ]
[]
[]
[ "proxy", "redirect", "weblogic" ]
stackoverflow_0073556458_proxy_redirect_weblogic.txt
Q: Foreach loop is not working with for loop in PHP? I am trying to iterate over two arrays using foreach and for loop but foreach not working! I have two arrays $data and $pageNumber I need to iterate over both and store data in SQL accordingly, $data contains image and $pageNumber contains number of pages so count of both arrays always same. whenever I am trying to store $data elements with respect to $pageNumber like there are 3 images in $data and their serial number are 3,11,20 in $pageNumber, this always result in first element(image) of $data and numbers are 3,11,20 this means images are not iterating same images saved with different numbers. How Will I save both Image and numbers serial wise simultaneously Like - $data = ['image1','image2','image3']; $pageNumber = [22,1,44] result should be like - image1 saved with pagenumber 22, image2 saved with page number1... in SQL table field Image_name and Image_name <?php $imagecount = 0; foreach ($data as $base64_string) { $newFileName = $id . "0" . $imagecount . ".jpg"; // replacing data:image/jpeg;base64, from post request $base64_string = str_replace('data:image/jpeg;base64,', '', $base64_string); $base64_string = str_replace(' ', '+', $base64_string); //base64 data which will store in file $decoded = base64_decode($base64_string); $imagecount++; if (file_put_contents("/record/images/data/" . $imagedirectory . "/" . $newFileName, $decoded)) { $pages = $_POST['pnumber']; $pageNumber = explode(',', $pages); for ($i = 0; $i < count($pageNumber); $i++) { $insrecords = "insert into records( ArticleID, Page_Number, imagedirectory, Image_name, ) values ( '" . $id . "', '" . $pageNumber[$i] . "', '" . $imagedirectory . "', '" . $newFileName . "', )"; mysql_query($insrecords) or die(mysql_error()); } } else { echo "here"; die(); } } ?> A: <?php $imagecount = 0; foreach ($data as $base64_string) { $newFileName = $id . "0" . $imagecount . ".jpg"; // replacing data:image/jpeg;base64, from post request $base64_string = str_replace('data:image/jpeg;base64,', '', $base64_string); $base64_string = str_replace(' ', '+', $base64_string); //base64 data which will store in file $decoded = base64_decode($base64_string); $imagecount++; if (file_put_contents("/record/images/data/" . $imagedirectory . "/" . $newFileName, $decoded)) { $pages = $_POST['pnumber']; $pageNumber = explode(',', $pages); foreach ($pageNumber as $page) { $insrecords = "insert into records( ArticleID, Page_Number, imagedirectory, Image_name, ) values ( '" . $id . "', '" . $page . "', '" . $imagedirectory . "', '" . $newFileName . "', )"; mysql_query($insrecords) or die(mysql_error()); } } else { echo "here"; die(); } } ?>
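Neither the original loop nor the rewritten one pairs image i with page number i, which is what the question asks for ("image1 saved with pagenumber 22, image2 with page number 1, ..."). A sketch of one way to do that, indexing both arrays with the same counter; variable names follow the question, and the actual row insert is reduced to a comment (insertRecord is a hypothetical helper).

<?php
// $data and the exploded page numbers are assumed to be parallel arrays of equal
// length, as described in the question.
$pages = explode(',', $_POST['pnumber']);

foreach ($data as $i => $base64_string) {
    $newFileName = $id . "0" . $i . ".jpg";
    $decoded = base64_decode(str_replace(
        ['data:image/jpeg;base64,', ' '], ['', '+'], $base64_string
    ));

    if (!file_put_contents("/record/images/data/$imagedirectory/$newFileName", $decoded)) {
        continue; // or handle the write error
    }

    $page = isset($pages[$i]) ? $pages[$i] : null; // page number for THIS image
    // Insert exactly one row per image; a prepared statement (mysqli/PDO) is
    // strongly preferable to string-built SQL here.
    // insertRecord($id, $page, $imagedirectory, $newFileName);
}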
Foreach loop is not working with for loop in PHP?
I am trying to iterate over two arrays using foreach and for loop but foreach not working! I have two arrays $data and $pageNumber I need to iterate over both and store data in SQL accordingly, $data contains image and $pageNumber contains number of pages so count of both arrays always same. whenever I am trying to store $data elements with respect to $pageNumber like there are 3 images in $data and their serial number are 3,11,20 in $pageNumber, this always result in first element(image) of $data and numbers are 3,11,20 this means images are not iterating same images saved with different numbers. How Will I save both Image and numbers serial wise simultaneously Like - $data = ['image1','image2','image3']; $pageNumber = [22,1,44] result should be like - image1 saved with pagenumber 22, image2 saved with page number1... in SQL table field Image_name and Image_name <?php $imagecount = 0; foreach ($data as $base64_string) { $newFileName = $id . "0" . $imagecount . ".jpg"; // replacing data:image/jpeg;base64, from post request $base64_string = str_replace('data:image/jpeg;base64,', '', $base64_string); $base64_string = str_replace(' ', '+', $base64_string); //base64 data which will store in file $decoded = base64_decode($base64_string); $imagecount++; if (file_put_contents("/record/images/data/" . $imagedirectory . "/" . $newFileName, $decoded)) { $pages = $_POST['pnumber']; $pageNumber = explode(',', $pages); for ($i = 0; $i < count($pageNumber); $i++) { $insrecords = "insert into records( ArticleID, Page_Number, imagedirectory, Image_name, ) values ( '" . $id . "', '" . $pageNumber[$i] . "', '" . $imagedirectory . "', '" . $newFileName . "', )"; mysql_query($insrecords) or die(mysql_error()); } } else { echo "here"; die(); } } ?>
[ " <?php\n $imagecount = 0;\n foreach ($data as $base64_string) {\n $newFileName = $id . \"0\" . $imagecount . \".jpg\";\n\n // replacing data:image/jpeg;base64, from post request\n $base64_string = str_replace('data:image/jpeg;base64,', '', $base64_string);\n\n $base64_string = str_replace(' ', '+', $base64_string);\n\n //base64 data which will store in file\n $decoded = base64_decode($base64_string);\n $imagecount++;\n\n if (file_put_contents(\"/record/images/data/\" . $imagedirectory . \"/\" . $newFileName, $decoded)) {\n $pages = $_POST['pnumber'];\n $pageNumber = explode(',', $pages);\n\n foreach ($pageNumber as $page) {\n $insrecords =\n \"insert into records( \n ArticleID, Page_Number, \n imagedirectory,\n Image_name,\n \n ) values ( \n '\" .\n $id .\n \"',\n '\" .\n $page .\n \"', \n '\" .\n $imagedirectory .\n \"',\n '\" .\n $newFileName .\n \"',\n \n )\";\n mysql_query($insrecords) or die(mysql_error());\n }\n } else {\n echo \"here\";\n\n die();\n }\n }\n\n\n?> \n\n" ]
[ 0 ]
[ "I think you're missing a closing bracket in the for loop. add another closing curly bracket after the mysql_query and try again.\nAlso I am noticing you are using an old deprecated function \"mysql_query\"\nAlso check for sql injection.\nAlso you are inserting inside a loop which might exhaust your database.\nLast but not least, please ensure proper identation and formatting when posting on SO so we can help you easier, and also for yourself :)\n" ]
[ -1 ]
[ "for_loop", "foreach", "php" ]
stackoverflow_0074625068_for_loop_foreach_php.txt
Q: Typescript arrow functions overloads error 2322 This code below is working fine, but it gives an error for the resolve constant. const resolve: Resolve Type '(param: "case 1" | "case 2" | "case 3") => boolean | "string" | 1000' is not assignable to type 'Resolve'.(2322) // Overloads type Resolve = { (): false; (param: 'case 1'): string; (param: 'case 2'): number; (param: 'case 3'): true; }; const resolve: Resolve = (param) => { switch (param) { case 'case 1': return 'string'; case 'case 2': return 1000; case 'case 3': return true; default: return false; } }; const result = { first: resolve('case 1'), second: resolve('case 2'), third: resolve('case 3'), none: resolve() }; Any idea how to resolve it ? A: It's often the case that an overloaded function's implementation signature isn't fully compatible with the overload signatures. When you're using overload syntax, TypeScript uses relaxed rules for compatibility. In your case, since you're not using overload syntax, you'll have to use a type assertion (after making sure that the function behaves correctly for the overloads): const resolve = ((param?: "case 1" | "case 2" | "case 3"): string | number | true | false => { switch (param) { case "case 1": return "string"; case "case 2": return 1000; case "case 3": return true; default: return false; } }) as Resolve; (I've also added a type annotation to param and a return type annotation, just for good measure, though it's not uncommon to be really loose in the implementation signature.) Playground example Just FWIW, using function overload syntax instead (playground link): function resolve(param: "case 1"): string; function resolve(param: "case 2"): number; function resolve(param: "case 3"): true; function resolve(): false; function resolve(param?: "case 1" | "case 2" | "case 3"): string | number | true | false { switch (param) { case "case 1": return "string"; case "case 2": return 1000; case "case 3": return true; default: return false; } }; type Resolve = typeof resolve; ...but that's not always possible, it's not uncommon to have to define the type without ever actually implementing it (for instance, in a callback type for an API).
Typescript arrow functions overloads error 2322
This code below is working fine, but it gives an error for the resolve constant. const resolve: Resolve Type '(param: "case 1" | "case 2" | "case 3") => boolean | "string" | 1000' is not assignable to type 'Resolve'.(2322) // Overloads type Resolve = { (): false; (param: 'case 1'): string; (param: 'case 2'): number; (param: 'case 3'): true; }; const resolve: Resolve = (param) => { switch (param) { case 'case 1': return 'string'; case 'case 2': return 1000; case 'case 3': return true; default: return false; } }; const result = { first: resolve('case 1'), second: resolve('case 2'), third: resolve('case 3'), none: resolve() }; Any idea how to resolve it ?
[ "It's often the case that an overloaded function's implementation signature isn't fully compatible with the overload signatures. When you're using overload syntax, TypeScript uses relaxed rules for compatibility.\nIn your case, since you're not using overload syntax, you'll have to use a type assertion (after making sure that the function behaves correctly for the overloads):\nconst resolve = ((param?: \"case 1\" | \"case 2\" | \"case 3\"): string | number | true | false => {\n switch (param) {\n case \"case 1\":\n return \"string\";\n case \"case 2\":\n return 1000;\n case \"case 3\":\n return true;\n default:\n return false;\n }\n}) as Resolve;\n\n(I've also added a type annotation to param and a return type annotation, just for good measure, though it's not uncommon to be really loose in the implementation signature.)\nPlayground example\n\nJust FWIW, using function overload syntax instead (playground link):\nfunction resolve(param: \"case 1\"): string;\nfunction resolve(param: \"case 2\"): number;\nfunction resolve(param: \"case 3\"): true;\nfunction resolve(): false;\nfunction resolve(param?: \"case 1\" | \"case 2\" | \"case 3\"): string | number | true | false {\n switch (param) {\n case \"case 1\":\n return \"string\";\n case \"case 2\":\n return 1000;\n case \"case 3\":\n return true;\n default:\n return false;\n }\n};\n\ntype Resolve = typeof resolve;\n\n...but that's not always possible, it's not uncommon to have to define the type without ever actually implementing it (for instance, in a callback type for an API).\n" ]
[ 1 ]
[]
[]
[ "arrow_functions", "constructor_overloading", "function", "typescript" ]
stackoverflow_0074656478_arrow_functions_constructor_overloading_function_typescript.txt
Q: Get picture from xml ( VB.net ) I have more xml file what store image in binary format. Is there any method to add this data to a picurebox? <Thumbnail image-type="bmp" image-encoding="base64Binary"> <![CDATA[ Qk2+BwAAAAAAAD4AAAAoAAAAeAAAAHgAAAABAAEAAAAAAIAHAAAAAAAAAAAAAAAAAAAAAAAA//// AADM/wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAP/+AAAAAAAAAAAAAAAAAAGAAwAAAAAAAAAAAAAAAAACAACAAAAAAA AAAAAAAAAABAAAQAAAAAAAAAAAAAAAAAgAACAAAAAAAAAAAAAAAAAQAAAQAAAAAAAAAAAAAAAAIA AACAAAAAAAAAAAAH///+AAAA////wAAAAAAAPAAAAAAAAAAAAHgAAAAAD8AAAAAAAAAAAAAH4AAA AHAAAAAAAAAAAAAAABwAAAHAAAAAAAAAAAAAAAAHAAADAAAAAAAAAAAAAAAAAYAAAgAAAAAAAAAA AAAAAACAAAQPAAAAAAAAAAAAAAHgQAAECQAAAAAAAAAAAAABIEAABA8AAAAAAAAAAAAAAeBAAAQG AAAAAAAAAAAAAADAQAAEAAAAAAAAAAAAAAAAAEAABAAAAAAAAAAAAAAAAABAAAQAAAAAAAAAAAAA AAAAQAAEAAAAAAAAAAAAAAAAAEAABAAAAAAAAAAAAAAAAABAAAQAAAAAAAAAAAAAAAAAQAAEAAAA AAAAAAAAAAAAAEAABAAAAAAAAAAAAAAAAABAAAQAAAAAAAAAAAAAAAAAQAAEAAAAAAAAAAAAAAAA AEAABAAAAAAAAAAAAAAAAABAAAQB//////////////8AQAAEAQAAAAAAAAAAAAABAEAABAEAAAAA AAAAAAAAAQBAAAQBAAAAAAAAAAAAAAEAQAAEAQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBA AAQBAAAAAAAAAAAAAAEAQAAEAQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBAAAQBAAAAAAAA AAAAAAEAQAAEAQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBAAAQBAAAAAAAAAAAAAAEAQAAE AQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBAAAQBAAAAAAAAAAAAAAEAQAAEAQAAAAAAAAAA AAABAEAAB/8AAAAAAAAAAAAAAf/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= ]]> </Thumbnail> Best Reagds, Tibi I need a start kick, because i don't work with binary pictures. A: It is base64 so first you have to use convert to get a byte array : Convert.FromBase64String. Imports System Imports System.Xml Imports System.Xml.Linq Module Program Const FILENAME As String = "c:\temp\test.xml" Sub Main(args As String()) Dim doc As XDocument = XDocument.Load(FILENAME) Dim Xml As String = doc.Root.Value Dim image As Byte() = Convert.FromBase64String(Xml) End Sub End Module
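The answer above stops at the byte array; to actually show the image in a PictureBox (the original question), the bytes can be wrapped in a MemoryStream and turned into an Image. This is a sketch only — PictureBox1 is a hypothetical control name, and GDI+ expects the stream to stay alive for as long as the Image created from it is in use.

Imports System.Drawing
Imports System.IO
Imports System.Xml.Linq

' Somewhere in a Form that contains a PictureBox named PictureBox1:
Dim doc As XDocument = XDocument.Load("c:\temp\test.xml")
Dim imageBytes As Byte() = Convert.FromBase64String(doc.Root.Value)

' Do not dispose the stream while the image is displayed; Image.FromStream
' keeps a reference to it.
Dim ms As New MemoryStream(imageBytes)
PictureBox1.Image = Image.FromStream(ms)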
Get picture from xml ( VB.net )
I have more xml file what store image in binary format. Is there any method to add this data to a picurebox? <Thumbnail image-type="bmp" image-encoding="base64Binary"> <![CDATA[ Qk2+BwAAAAAAAD4AAAAoAAAAeAAAAHgAAAABAAEAAAAAAIAHAAAAAAAAAAAAAAAAAAAAAAAA//// AADM/wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAP/+AAAAAAAAAAAAAAAAAAGAAwAAAAAAAAAAAAAAAAACAACAAAAAAA AAAAAAAAAABAAAQAAAAAAAAAAAAAAAAAgAACAAAAAAAAAAAAAAAAAQAAAQAAAAAAAAAAAAAAAAIA AACAAAAAAAAAAAAH///+AAAA////wAAAAAAAPAAAAAAAAAAAAHgAAAAAD8AAAAAAAAAAAAAH4AAA AHAAAAAAAAAAAAAAABwAAAHAAAAAAAAAAAAAAAAHAAADAAAAAAAAAAAAAAAAAYAAAgAAAAAAAAAA AAAAAACAAAQPAAAAAAAAAAAAAAHgQAAECQAAAAAAAAAAAAABIEAABA8AAAAAAAAAAAAAAeBAAAQG AAAAAAAAAAAAAADAQAAEAAAAAAAAAAAAAAAAAEAABAAAAAAAAAAAAAAAAABAAAQAAAAAAAAAAAAA AAAAQAAEAAAAAAAAAAAAAAAAAEAABAAAAAAAAAAAAAAAAABAAAQAAAAAAAAAAAAAAAAAQAAEAAAA AAAAAAAAAAAAAEAABAAAAAAAAAAAAAAAAABAAAQAAAAAAAAAAAAAAAAAQAAEAAAAAAAAAAAAAAAA AEAABAAAAAAAAAAAAAAAAABAAAQB//////////////8AQAAEAQAAAAAAAAAAAAABAEAABAEAAAAA AAAAAAAAAQBAAAQBAAAAAAAAAAAAAAEAQAAEAQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBA AAQBAAAAAAAAAAAAAAEAQAAEAQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBAAAQBAAAAAAAA AAAAAAEAQAAEAQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBAAAQBAAAAAAAAAAAAAAEAQAAE AQAAAAAAAAAAAAABAEAABAEAAAAAAAAAAAAAAQBAAAQBAAAAAAAAAAAAAAEAQAAEAQAAAAAAAAAA AAABAEAAB/8AAAAAAAAAAAAAAf/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA= ]]> </Thumbnail> Best Reagds, Tibi I need a start kick, because i don't work with binary pictures.
[ "It is base64 so first you have to use convert to get a byte array : Convert.FromBase64String.\nImports System\nImports System.Xml\nImports System.Xml.Linq\n\n\nModule Program\n Const FILENAME As String = \"c:\\temp\\test.xml\"\n Sub Main(args As String())\n\n Dim doc As XDocument = XDocument.Load(FILENAME)\n Dim Xml As String = doc.Root.Value\n Dim image As Byte() = Convert.FromBase64String(Xml)\n End Sub\nEnd Module\n\n" ]
[ 0 ]
[]
[]
[ "binary", "image", "vb.net", "xml" ]
stackoverflow_0074652206_binary_image_vb.net_xml.txt
Q: How to create Circle with step progress (gaps in it) and animate it? I need to create a progressive with gaps in it and Animate the layers. I have achieved it. But the problem is it is starting (0) from Right centre. But the requirement is it should start from top centre. In image You can see that it is started from right side. I have attached my code sample along with Image for your understanding. Can somebody help me where I'm doing wrong or how should I make it from top. extension ViewController { func sampleProgress() { let totalSteps = 6 let frame = CGRect(x: 50, y: 50, width: 120, height: 120) let circlePath = UIBezierPath(ovalIn: frame) let gapSize: CGFloat = 0.0125 let segmentAngle: CGFloat = 0.167 // (1/totalSteps) var startAngle = 0.0 let lineWidth = 8.0 for index in 0 ... totalSteps { // Background layer let backgroundLayer = CAShapeLayer() backgroundLayer.strokeStart = startAngle backgroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize backgroundLayer.path = circlePath.cgPath backgroundLayer.name = String(index) backgroundLayer.strokeColor = UIColor.lightGray.cgColor backgroundLayer.lineWidth = lineWidth backgroundLayer.lineCap = CAShapeLayerLineCap.butt backgroundLayer.fillColor = UIColor.clear.cgColor self.view.layer.addSublayer(backgroundLayer) // Foreground layer let foregroundLayer = CAShapeLayer() foregroundLayer.strokeStart = startAngle foregroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize foregroundLayer.isHidden = true foregroundLayer.name = String(index) + String(index) foregroundLayer.path = circlePath.cgPath foregroundLayer.strokeColor = UIColor.green.cgColor foregroundLayer.lineWidth = lineWidth foregroundLayer.lineCap = CAShapeLayerLineCap.butt foregroundLayer.fillColor = UIColor.clear.cgColor self.view.layer.addSublayer(foregroundLayer) print("Start angle: \(startAngle)") startAngle = startAngle + segmentAngle } } func animateLayer(isAnimate: Bool, stepsToAnimate: Int) { let segmentAngle: CGFloat = (360 * 0.166) / 360 let gapSize: CGFloat = 0.0125 var startAngle = 0.0 for index in 0 ... 
stepsToAnimate { if let foregroundLayers = self.view.layer.sublayers { for animateLayer in foregroundLayers { if animateLayer.name == String(index) + String(index) { if index == stepsToAnimate && isAnimate { let animation = CABasicAnimation(keyPath: "strokeEnd") animation.fromValue = startAngle animation.toValue = startAngle + segmentAngle - gapSize animation.duration = 1.0 animateLayer.add(animation, forKey: "foregroundAnimation") animateLayer.isHidden = false } else { animateLayer.isHidden = false } startAngle = startAngle + segmentAngle } } } } } } A: You can "move the start" to the top by rotating the layer(s) minus 90-degrees: let tr = CATransform3DMakeRotation(-(.pi * 0.5), 0, 0, 1) I would assume this would be wrapped into a UIView subclass, but to get your example (adding sublayers to the main view's layer) to work right, we'll want to use a Zero-based origin for the path rect: // use 0,0 for the origin of the PATH frame let frame = CGRect(x: 0, y: 0, width: 120, height: 120) let circlePath = UIBezierPath(ovalIn: frame) and then an offset rect for the position: let layerFrame = frame.offsetBy(dx: 50, dy: 50) and we set the .anchorPoint of the layers to the center of that rect -- so it will rotate around its center: // set the layer's frame backgroundLayer.frame = layerFrame // set the layer's anchor point backgroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5) // apply the rotation transform backgroundLayer.transform = tr // set the layer's frame foregroundLayer.frame = layerFrame // set the layer's anchor point foregroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5) // apply the rotation transform foregroundLayer.transform = tr So, slight modifications to your code: extension ViewController { func sampleProgress() { let totalSteps = 6 // use 0,0 for the origin of the PATH frame let frame = CGRect(x: 0, y: 0, width: 120, height: 120) let circlePath = UIBezierPath(ovalIn: frame) // use this for the POSITION of the path let layerFrame = frame.offsetBy(dx: 50, dy: 50) let gapSize: CGFloat = 0.0125 let segmentAngle: CGFloat = 0.167 // (1/totalSteps) var startAngle = 0.0 let lineWidth = 8.0 // we want to rotate the layer by -90 degrees let tr = CATransform3DMakeRotation(-(.pi * 0.5), 0, 0, 1) for index in 0 ... 
totalSteps { // Background layer let backgroundLayer = CAShapeLayer() backgroundLayer.strokeStart = startAngle backgroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize backgroundLayer.path = circlePath.cgPath backgroundLayer.name = String(index) backgroundLayer.strokeColor = UIColor.lightGray.cgColor backgroundLayer.lineWidth = lineWidth backgroundLayer.lineCap = CAShapeLayerLineCap.butt backgroundLayer.fillColor = UIColor.clear.cgColor self.view.layer.addSublayer(backgroundLayer) // set the layer's frame backgroundLayer.frame = layerFrame // set the layer's anchor point backgroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5) // apply the rotation transform backgroundLayer.transform = tr // Foreground layer let foregroundLayer = CAShapeLayer() foregroundLayer.strokeStart = startAngle foregroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize foregroundLayer.isHidden = true foregroundLayer.name = String(index) + String(index) foregroundLayer.path = circlePath.cgPath foregroundLayer.strokeColor = UIColor.green.cgColor foregroundLayer.lineWidth = lineWidth foregroundLayer.lineCap = CAShapeLayerLineCap.butt foregroundLayer.fillColor = UIColor.clear.cgColor self.view.layer.addSublayer(foregroundLayer) // set the layer's frame foregroundLayer.frame = layerFrame // set the layer's anchor point foregroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5) // apply the rotation transform foregroundLayer.transform = tr print("Start angle: \(startAngle)") startAngle = startAngle + segmentAngle } } func animateLayer(isAnimate: Bool, stepsToAnimate: Int) { let segmentAngle: CGFloat = (360 * 0.166) / 360 let gapSize: CGFloat = 0.0125 var startAngle = 0.0 for index in 0 ... stepsToAnimate { if let foregroundLayers = self.view.layer.sublayers { for animateLayer in foregroundLayers { if animateLayer.name == String(index) + String(index) { if index == stepsToAnimate && isAnimate { let animation = CABasicAnimation(keyPath: "strokeEnd") animation.fromValue = startAngle animation.toValue = startAngle + segmentAngle - gapSize animation.duration = 1.0 animateLayer.add(animation, forKey: "foregroundAnimation") animateLayer.isHidden = false } else { animateLayer.isHidden = false } startAngle = startAngle + segmentAngle } } } } } } and an example controller - each tap anywhere animates the next step: class ViewController: UIViewController { override func viewDidAppear(_ animated: Bool) { super.viewDidAppear(animated) sampleProgress() } var p: Int = 0 override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) { animateLayer(isAnimate: true, stepsToAnimate: p) p += 1 } }
How to create Circle with step progress (gaps in it) and animate it?
I need to create a progressive with gaps in it and Animate the layers. I have achieved it. But the problem is it is starting (0) from Right centre. But the requirement is it should start from top centre. In image You can see that it is started from right side. I have attached my code sample along with Image for your understanding. Can somebody help me where I'm doing wrong or how should I make it from top. extension ViewController { func sampleProgress() { let totalSteps = 6 let frame = CGRect(x: 50, y: 50, width: 120, height: 120) let circlePath = UIBezierPath(ovalIn: frame) let gapSize: CGFloat = 0.0125 let segmentAngle: CGFloat = 0.167 // (1/totalSteps) var startAngle = 0.0 let lineWidth = 8.0 for index in 0 ... totalSteps { // Background layer let backgroundLayer = CAShapeLayer() backgroundLayer.strokeStart = startAngle backgroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize backgroundLayer.path = circlePath.cgPath backgroundLayer.name = String(index) backgroundLayer.strokeColor = UIColor.lightGray.cgColor backgroundLayer.lineWidth = lineWidth backgroundLayer.lineCap = CAShapeLayerLineCap.butt backgroundLayer.fillColor = UIColor.clear.cgColor self.view.layer.addSublayer(backgroundLayer) // Foreground layer let foregroundLayer = CAShapeLayer() foregroundLayer.strokeStart = startAngle foregroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize foregroundLayer.isHidden = true foregroundLayer.name = String(index) + String(index) foregroundLayer.path = circlePath.cgPath foregroundLayer.strokeColor = UIColor.green.cgColor foregroundLayer.lineWidth = lineWidth foregroundLayer.lineCap = CAShapeLayerLineCap.butt foregroundLayer.fillColor = UIColor.clear.cgColor self.view.layer.addSublayer(foregroundLayer) print("Start angle: \(startAngle)") startAngle = startAngle + segmentAngle } } func animateLayer(isAnimate: Bool, stepsToAnimate: Int) { let segmentAngle: CGFloat = (360 * 0.166) / 360 let gapSize: CGFloat = 0.0125 var startAngle = 0.0 for index in 0 ... stepsToAnimate { if let foregroundLayers = self.view.layer.sublayers { for animateLayer in foregroundLayers { if animateLayer.name == String(index) + String(index) { if index == stepsToAnimate && isAnimate { let animation = CABasicAnimation(keyPath: "strokeEnd") animation.fromValue = startAngle animation.toValue = startAngle + segmentAngle - gapSize animation.duration = 1.0 animateLayer.add(animation, forKey: "foregroundAnimation") animateLayer.isHidden = false } else { animateLayer.isHidden = false } startAngle = startAngle + segmentAngle } } } } } }
[ "You can \"move the start\" to the top by rotating the layer(s) minus 90-degrees:\nlet tr = CATransform3DMakeRotation(-(.pi * 0.5), 0, 0, 1)\n\nI would assume this would be wrapped into a UIView subclass, but to get your example (adding sublayers to the main view's layer) to work right, we'll want to use a Zero-based origin for the path rect:\n// use 0,0 for the origin of the PATH frame\nlet frame = CGRect(x: 0, y: 0, width: 120, height: 120)\nlet circlePath = UIBezierPath(ovalIn: frame)\n\nand then an offset rect for the position:\nlet layerFrame = frame.offsetBy(dx: 50, dy: 50)\n\nand we set the .anchorPoint of the layers to the center of that rect -- so it will rotate around its center:\n// set the layer's frame\nbackgroundLayer.frame = layerFrame\n// set the layer's anchor point\nbackgroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5)\n// apply the rotation transform\nbackgroundLayer.transform = tr\n \n// set the layer's frame\nforegroundLayer.frame = layerFrame\n// set the layer's anchor point\nforegroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5)\n// apply the rotation transform\nforegroundLayer.transform = tr\n \n\nSo, slight modifications to your code:\nextension ViewController {\n func sampleProgress() {\n let totalSteps = 6\n \n // use 0,0 for the origin of the PATH frame\n let frame = CGRect(x: 0, y: 0, width: 120, height: 120)\n let circlePath = UIBezierPath(ovalIn: frame)\n \n // use this for the POSITION of the path\n let layerFrame = frame.offsetBy(dx: 50, dy: 50)\n \n let gapSize: CGFloat = 0.0125\n let segmentAngle: CGFloat = 0.167 // (1/totalSteps)\n var startAngle = 0.0\n let lineWidth = 8.0\n\n // we want to rotate the layer by -90 degrees\n let tr = CATransform3DMakeRotation(-(.pi * 0.5), 0, 0, 1)\n\n for index in 0 ... totalSteps {\n // Background layer\n let backgroundLayer = CAShapeLayer()\n backgroundLayer.strokeStart = startAngle\n backgroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize\n \n backgroundLayer.path = circlePath.cgPath\n backgroundLayer.name = String(index)\n backgroundLayer.strokeColor = UIColor.lightGray.cgColor\n backgroundLayer.lineWidth = lineWidth\n backgroundLayer.lineCap = CAShapeLayerLineCap.butt\n backgroundLayer.fillColor = UIColor.clear.cgColor\n self.view.layer.addSublayer(backgroundLayer)\n\n // set the layer's frame\n backgroundLayer.frame = layerFrame\n // set the layer's anchor point\n backgroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5)\n // apply the rotation transform\n backgroundLayer.transform = tr\n \n // Foreground layer\n let foregroundLayer = CAShapeLayer()\n foregroundLayer.strokeStart = startAngle\n foregroundLayer.strokeEnd = backgroundLayer.strokeStart + segmentAngle - gapSize\n \n foregroundLayer.isHidden = true\n foregroundLayer.name = String(index) + String(index)\n foregroundLayer.path = circlePath.cgPath\n foregroundLayer.strokeColor = UIColor.green.cgColor\n foregroundLayer.lineWidth = lineWidth\n foregroundLayer.lineCap = CAShapeLayerLineCap.butt\n foregroundLayer.fillColor = UIColor.clear.cgColor\n self.view.layer.addSublayer(foregroundLayer)\n \n // set the layer's frame\n foregroundLayer.frame = layerFrame\n // set the layer's anchor point\n foregroundLayer.anchorPoint = CGPoint(x: 0.5, y: 0.5)\n // apply the rotation transform\n foregroundLayer.transform = tr\n \n print(\"Start angle: \\(startAngle)\")\n startAngle = startAngle + segmentAngle\n }\n }\n \n func animateLayer(isAnimate: Bool, stepsToAnimate: Int) {\n let segmentAngle: CGFloat = (360 * 0.166) / 360\n let gapSize: CGFloat = 
0.0125\n var startAngle = 0.0\n \n for index in 0 ... stepsToAnimate {\n if let foregroundLayers = self.view.layer.sublayers {\n for animateLayer in foregroundLayers {\n if animateLayer.name == String(index) + String(index) {\n if index == stepsToAnimate && isAnimate {\n let animation = CABasicAnimation(keyPath: \"strokeEnd\")\n animation.fromValue = startAngle\n animation.toValue = startAngle + segmentAngle - gapSize\n animation.duration = 1.0\n animateLayer.add(animation, forKey: \"foregroundAnimation\")\n animateLayer.isHidden = false\n } else {\n animateLayer.isHidden = false\n }\n startAngle = startAngle + segmentAngle\n }\n }\n }\n }\n }\n}\n\nand an example controller - each tap anywhere animates the next step:\nclass ViewController: UIViewController {\n \n override func viewDidAppear(_ animated: Bool) {\n super.viewDidAppear(animated)\n sampleProgress()\n }\n \n var p: Int = 0\n override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {\n animateLayer(isAnimate: true, stepsToAnimate: p)\n p += 1\n }\n \n}\n\n" ]
[ 3 ]
[]
[]
[ "ios", "swift" ]
stackoverflow_0074655205_ios_swift.txt
Q: Enrich mediator add payload wso2 I used the enrich mediator to add a payload containing the name and totalnote of students my problem that i want to replace the values ​​with the property here is my code <property expression="get-property('uri.var.nom')" name="uri.var.nom" scope="default" type="STRING"/> <property expression="get-property('totalnote')" name="totalnote" scope="default" type="STRING"/> <enrich> <source clone="true" type="inline"> {"nom":"" , "note":""} </source> <target action="child" xpath="json-eval($)"/> </enrich> <enrich> <source clone="true" property="uri.var.nom" type="property"/> <target action="replace" xpath="json-eval($.etudiants.nom)"/> </enrich> <enrich> <source clone="true" property="totalnote" type="property"/> <target action="replace" xpath="json-eval($.etudiants.note)"/> </enrich> <respond/> it doesn't work I always receive empty { "etudiants": { "nom": "", "note": "" } A: You are placing the JSON structure at the root. As a child of $. But your structure does not contain etudiants, therefore the json-eval of $.etudiants.nom won't work. The enrich itself works, as shown by @ycr but the message structure you assume to have is incorrect. Try logging the body after the first enrich to see what your payload looks like at that point. Depending on your payload before the enrich try something like: <enrich> <source clone="true" type="inline"> {"etudiants": { "nom":"" , "note":"" } </source> <target action="replace" type="body"/> </enrich> Or if you already have the 'etudiants' object maybe try adjusting the json-eval: <enrich> <source clone="true" type="inline"> {"nom":"" , "note":""} </source> <target action="child" xpath="json-eval($.etudiants)"/> </enrich> Hope this helps to clarify the situation. A: The following seems to work for me. I have hardcoded the property values to test this. Also assuming the Payload before the Enrich is set as { "etudiants": { "nom": "", "note": "" }}. In the following example, I'm sending it in the request. <?xml version="1.0" encoding="UTF-8"?> <api context="/HelloWorld" name="HelloWorld" xmlns="http://ws.apache.org/ns/synapse"> <resource methods="POST"> <inSequence> <property name="uri.var.nom" scope="default" type="STRING" value="nomVal"/> <property name="totalnote" scope="default" type="STRING" value="20"/> <enrich> <source clone="true" property="uri.var.nom" type="property"/> <target xpath="json-eval($.etudiants.nom)"/> </enrich> <enrich> <source clone="true" property="totalnote" type="property"/> <target xpath="json-eval($.etudiants.note)"/> </enrich> <respond/> </inSequence> <outSequence/> <faultSequence/> </resource> </api> Request curl --location --request POST 'http://localhost:8290/HelloWorld' \ --header 'Content-Type: application/json' \ --data-raw '{ "etudiants": { "nom": "", "note": "" } }' Output { "etudiants": { "nom": "nomVal", "note": 20 } }
Enrich mediator add payload wso2
I used the enrich mediator to add a payload containing the name and totalnote of students. My problem is that I want to replace the values with the property. Here is my code: <property expression="get-property('uri.var.nom')" name="uri.var.nom" scope="default" type="STRING"/> <property expression="get-property('totalnote')" name="totalnote" scope="default" type="STRING"/> <enrich> <source clone="true" type="inline"> {"nom":"" , "note":""} </source> <target action="child" xpath="json-eval($)"/> </enrich> <enrich> <source clone="true" property="uri.var.nom" type="property"/> <target action="replace" xpath="json-eval($.etudiants.nom)"/> </enrich> <enrich> <source clone="true" property="totalnote" type="property"/> <target action="replace" xpath="json-eval($.etudiants.note)"/> </enrich> <respond/> It doesn't work; I always receive an empty result: { "etudiants": { "nom": "", "note": "" }
[ "You are placing the JSON structure at the root. As a child of $. But your structure does not contain etudiants, therefore the json-eval of $.etudiants.nom won't work.\nThe enrich itself works, as shown by @ycr but the message structure you assume to have is incorrect. Try logging the body after the first enrich to see what your payload looks like at that point.\nDepending on your payload before the enrich try something like:\n<enrich>\n <source clone=\"true\" type=\"inline\">\n {\"etudiants\": {\n \"nom\":\"\" ,\n \"note\":\"\"\n }\n </source>\n <target action=\"replace\" type=\"body\"/>\n </enrich>\n\nOr if you already have the 'etudiants' object maybe try adjusting the json-eval:\n<enrich>\n <source clone=\"true\" type=\"inline\">\n {\"nom\":\"\" ,\n \"note\":\"\"}\n </source>\n <target action=\"child\" xpath=\"json-eval($.etudiants)\"/>\n </enrich>\n\nHope this helps to clarify the situation.\n", "The following seems to work for me. I have hardcoded the property values to test this. Also assuming the Payload before the Enrich is set as { \"etudiants\": { \"nom\": \"\", \"note\": \"\" }}. In the following example, I'm sending it in the request.\n<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n<api context=\"/HelloWorld\" name=\"HelloWorld\" xmlns=\"http://ws.apache.org/ns/synapse\">\n <resource methods=\"POST\">\n <inSequence>\n <property name=\"uri.var.nom\" scope=\"default\" type=\"STRING\" value=\"nomVal\"/>\n <property name=\"totalnote\" scope=\"default\" type=\"STRING\" value=\"20\"/>\n <enrich>\n <source clone=\"true\" property=\"uri.var.nom\" type=\"property\"/>\n <target xpath=\"json-eval($.etudiants.nom)\"/>\n </enrich>\n <enrich>\n <source clone=\"true\" property=\"totalnote\" type=\"property\"/>\n <target xpath=\"json-eval($.etudiants.note)\"/>\n </enrich>\n <respond/>\n </inSequence>\n <outSequence/>\n <faultSequence/>\n </resource>\n</api>\n\nRequest\ncurl --location --request POST 'http://localhost:8290/HelloWorld' \\\n--header 'Content-Type: application/json' \\\n--data-raw '{\n \"etudiants\": {\n \"nom\": \"\",\n \"note\": \"\"\n }\n}'\n\nOutput\n{\n \"etudiants\": {\n \"nom\": \"nomVal\",\n \"note\": 20\n }\n}\n\n" ]
[ 2, 0 ]
[]
[]
[ "wso2", "wso2_enterprise_integrator", "wso2_esb", "wso2_micro_integrator" ]
stackoverflow_0074654034_wso2_wso2_enterprise_integrator_wso2_esb_wso2_micro_integrator.txt
Q: Kotlin: How to set the mutableState of an Integer in another composable function? For readability purposes, I want to extract the NavigationBar composable in another function. Same with PreviousButton. Therefore I want to pass the mutableState of index to a these functions. But passing index as a parameter does not work, because I cannot update the state. What can I do? @Composable fun MyChickensScreen(){ val art: List<Art> = Datasource().loadArt() var index: Int by remember { mutableStateOf(0) } // IDE suggests making index a val, // but I want to update the state in another composable. //... NavigationBar(index = index) } } //NavigationBar passes index to the PreviousButton Composable @Composable private fun PreviousButton(index: Int) { Button( onClick = { index = handlePrevClick(index) }, //Error: Val cannot be reassigned for index ) { //... } } A: You can add a lambda function for updating to value on the mutable state to NavigationBar and PreviousButton: @Composable fun MyChickensScreen(){ val art: List<Art> = Datasource().loadArt() var index: Int by remember { mutableStateOf(0) } // IDE suggests making index a val, // but I want to update the state in another composable. //... NavigationBar( index = index, updateIndex = { index = it } ) } @Composable private fun PreviousButton( index: Int, updateIndex: (index: Int) -> Unit ) { Button( onClick = { updateIndex(handlePrevClick(index)) }, ) { //... } } Now you can update index mutable state by passing new value to updateIndex lambda function. A: There is a probably better solution, but what I have been doing are: Put a variable inside viewmodel, and make an update method, pass the view model or method to composable Or Pass method down to update the index NavigationBar(index = index, update ={it-> index = it }) } @Composable private fun PreviousButton(index: Int, update: (Int)-> Unit { Button( onClick = { update.invoke(index) }, ) { //... } }
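Pulling the two answers together, a self-contained sketch of the state-hoisting pattern: the state lives in the parent, children receive the value plus a callback. Names like NavigationBar mirror the question, handlePrevClick is replaced by a simple decrement for illustration, and the getValue/setValue imports are what make the `by remember` delegate compile.

import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue

@Composable
fun MyChickensScreen() {
    // The state lives here; children only read the value and request changes.
    var index by remember { mutableStateOf(0) }
    NavigationBar(index = index, onIndexChange = { index = it })
}

@Composable
fun NavigationBar(index: Int, onIndexChange: (Int) -> Unit) {
    PreviousButton(index = index, onIndexChange = onIndexChange)
}

@Composable
private fun PreviousButton(index: Int, onIndexChange: (Int) -> Unit) {
    Button(onClick = { onIndexChange(index - 1) }) {
        Text("Previous ($index)")
    }
}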
Kotlin: How to set the mutableState of an Integer in another composable function?
For readability purposes, I want to extract the NavigationBar composable in another function. Same with PreviousButton. Therefore I want to pass the mutableState of index to a these functions. But passing index as a parameter does not work, because I cannot update the state. What can I do? @Composable fun MyChickensScreen(){ val art: List<Art> = Datasource().loadArt() var index: Int by remember { mutableStateOf(0) } // IDE suggests making index a val, // but I want to update the state in another composable. //... NavigationBar(index = index) } } //NavigationBar passes index to the PreviousButton Composable @Composable private fun PreviousButton(index: Int) { Button( onClick = { index = handlePrevClick(index) }, //Error: Val cannot be reassigned for index ) { //... } }
[ "You can add a lambda function for updating to value on the mutable state to NavigationBar and PreviousButton:\n@Composable\nfun MyChickensScreen(){\n val art: List<Art> = Datasource().loadArt()\n var index: Int by remember { mutableStateOf(0) }\n // IDE suggests making index a val, \n // but I want to update the state in another composable.\n\n //...\n\n NavigationBar(\n index = index,\n updateIndex = { index = it }\n )\n}\n\n@Composable\nprivate fun PreviousButton(\n index: Int,\n updateIndex: (index: Int) -> Unit\n) {\n Button(\n onClick = { updateIndex(handlePrevClick(index)) },\n ) {\n //...\n }\n}\n\nNow you can update index mutable state by passing new value to updateIndex lambda function.\n", "There is a probably better solution, but what I have been doing are:\nPut a variable inside viewmodel, and make an update method, pass the view model or method to composable\nOr\nPass method down to update the index\nNavigationBar(index = index, \n update ={it->\n index = it\n})\n}\n\n@Composable\nprivate fun PreviousButton(index: Int, update: (Int)-> Unit {\n Button(\n onClick = { update.invoke(index) },\n ) {\n //...\n }\n}\n\n" ]
[ 1, 1 ]
[]
[]
[ "android_jetpack_compose", "kotlin" ]
stackoverflow_0074656346_android_jetpack_compose_kotlin.txt
Q: Uploading file with react hook form I am trying to upload a PDF file using React Hook Form with next.js in the frontend and node.js in the backend. The frontend: const FileUpload = () => { const [proof, setProof] = useState({}) const onSubmit = async (values) => { try { const proof = values.proof[0] let { data } = await axios.post('/api/upload-file', { proof, }) setProof(data) } catch (err) { console.log(err.response) } } <form onSubmit={handleSubmit(onSubmit)} className={styles['form']}> <label htmlFor="proof" className={styles['form-input-label']}> <input type="file" name="proof" {...register('proof')} placeholder=" " required className={`${ errors.proof? styles['form-input-error'] : styles['form-input'] } )`} /> <span className={styles['form-input-placeholder']}> upload file </span> </label> <p className={styles['form-error']}>{errors.file?.message}</p> <button type="submit" className="btn" disabled={!isDirty || !isValid || loading} > {loading ? <LoadingOutlined spin /> : 'Upload File'} </button> </form> </> ) } export default FileUpload The backend (/api/upload-file): export const uploadFile = async (req, res) => { try { console.log(req.body) // => returns empty object (proof:({})) const { proof } = req.body if (!proof) return res.status(400).send('File missing') // prepare the file const base64Data = new Buffer.from( proof.replace(/^data:proof\/\w+;base64,/, ''), 'base64' ) const type = proof.split(';')[0].split('/')[1] // image bucket params const params = { Bucket: 's3-bucket', Key: `${nanoid()}.${type}`, Body: base64Data, ACL: 'public-read', ContentType: 'application/pdf', } // upload to s3 s3.upload(params, (err, data) => { if (err) { console.log(err) res.sendStatus(400) } res.send(data) }) } catch (err) { console.log(err) return res .status(400) .send( 'An Error occured' ) } } I am not able to pass the file to the backend, all I get is an empty object. I tried researching the axios docs, as well as the react hook form docs, to no avail. What am I doing wrong? Thank you for your help!! A: you have to set the content type to multi-part/formdata and create a formData object to send file content to the backend. You also have to parse that file multi-part/formdata on the backend. Look up Multer for the backend and formData object for frontend.
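A sketch of what the answer below describes — sending the file as multipart/form-data with a FormData object and parsing it with Multer. The field name "proof" and the /api/upload-file route come from the question; the Express-style backend is an assumption (a Next.js API route would additionally need its built-in body parser disabled and a different middleware setup).

// --- frontend: build FormData instead of posting a plain object ---
const onSubmit = async (values) => {
  const formData = new FormData();
  formData.append('proof', values.proof[0]); // the File object from the <input type="file">
  // Let the browser/axios set the multipart Content-Type (with boundary) automatically.
  const { data } = await axios.post('/api/upload-file', formData);
  setProof(data);
};

// --- backend (Express + Multer): parse the multipart body ---
const multer = require('multer');
const upload = multer({ storage: multer.memoryStorage() });

app.post('/api/upload-file', upload.single('proof'), (req, res) => {
  // req.file.buffer holds the PDF bytes; it can be passed to s3.upload as Body.
  res.json({ name: req.file.originalname, size: req.file.size });
});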
Uploading file with react hook form
I am trying to upload a PDF file using React Hook Form with next.js in the frontend and node.js in the backend. The frontend: const FileUpload = () => { const [proof, setProof] = useState({}) const onSubmit = async (values) => { try { const proof = values.proof[0] let { data } = await axios.post('/api/upload-file', { proof, }) setProof(data) } catch (err) { console.log(err.response) } } <form onSubmit={handleSubmit(onSubmit)} className={styles['form']}> <label htmlFor="proof" className={styles['form-input-label']}> <input type="file" name="proof" {...register('proof')} placeholder=" " required className={`${ errors.proof? styles['form-input-error'] : styles['form-input'] } )`} /> <span className={styles['form-input-placeholder']}> upload file </span> </label> <p className={styles['form-error']}>{errors.file?.message}</p> <button type="submit" className="btn" disabled={!isDirty || !isValid || loading} > {loading ? <LoadingOutlined spin /> : 'Upload File'} </button> </form> </> ) } export default FileUpload The backend (/api/upload-file): export const uploadFile = async (req, res) => { try { console.log(req.body) // => returns empty object (proof:({})) const { proof } = req.body if (!proof) return res.status(400).send('File missing') // prepare the file const base64Data = new Buffer.from( proof.replace(/^data:proof\/\w+;base64,/, ''), 'base64' ) const type = proof.split(';')[0].split('/')[1] // image bucket params const params = { Bucket: 's3-bucket', Key: `${nanoid()}.${type}`, Body: base64Data, ACL: 'public-read', ContentType: 'application/pdf', } // upload to s3 s3.upload(params, (err, data) => { if (err) { console.log(err) res.sendStatus(400) } res.send(data) }) } catch (err) { console.log(err) return res .status(400) .send( 'An Error occured' ) } } I am not able to pass the file to the backend, all I get is an empty object. I tried researching the axios docs, as well as the react hook form docs, to no avail. What am I doing wrong? Thank you for your help!!
[ "you have to set the content type to multi-part/formdata and create a formData object to send file content to the backend. You also have to parse that file multi-part/formdata on the backend. Look up Multer for the backend and formData object for frontend.\n" ]
[ 0 ]
[]
[]
[ "axios", "file_upload", "next.js", "react_hook_form", "reactjs" ]
stackoverflow_0072771392_axios_file_upload_next.js_react_hook_form_reactjs.txt
Q: Reverse complement from a file The task is: Write a script (call it what you want) that that can analyze a fastafile (MySequences.fasta) by finding the reverse complement of the sequences. Using python. from itertools import repeat #opening file filename = "MySequences.fasta" file = open(filename, 'r') #reading the file for line in file: line = line.strip() if ">" in line: header = line elif (len(line) == 0): continue else: seq = line #reverse complement def reverse_complement(seq): compline = '' for n in seq: if n == 'A': compline += 'T' elif n == 'T': compline += 'A' elif n == 'C': compline += 'G' elif n == 'G': compline += 'C' return((compline)[::-1]) #run each line for line in file: rc = reverse_complement(seq) print(rc) A: You run your function in the wrong place. To run your function for each iterator, run the function there. #reading the file for line in file: line = line.strip() if ">" in line: header = line elif (len(line) == 0): continue else: seq = line #run function for each line, each time. rc = reverse_complement(seq) print(rc) In your previous code, all iteration is successful. But you didn't put the line to the function to run each time. In your previous code, after all, iterations, only the last line is assigned. Therefore you put the last line to the function at the end. This is why your code prints only one line. The solution. from itertools import repeat #reverse complement def reverse_complement(seq): compline = '' for n in seq: if n == 'A': compline += 'T' elif n == 'T': compline += 'A' elif n == 'C': compline += 'G' elif n == 'G': compline += 'C' return((compline)[::-1]) #opening file filename = "MySequences.fasta" file = open(filename, 'r') #reading the file for line in file: line = line.strip() if ">" in line: header = line elif (len(line) == 0): continue else: seq = line #run each line rc = reverse_complement(seq) print(rc) Also, this is your other mistake. You put seq as input instead line. But even if you fix this, this code won't work for the same reason I told you before. for line in file: rc = reverse_complement(seq) print(rc)
Reverse complement from a file
The task is: Write a script (call it what you want) that that can analyze a fastafile (MySequences.fasta) by finding the reverse complement of the sequences. Using python. from itertools import repeat #opening file filename = "MySequences.fasta" file = open(filename, 'r') #reading the file for line in file: line = line.strip() if ">" in line: header = line elif (len(line) == 0): continue else: seq = line #reverse complement def reverse_complement(seq): compline = '' for n in seq: if n == 'A': compline += 'T' elif n == 'T': compline += 'A' elif n == 'C': compline += 'G' elif n == 'G': compline += 'C' return((compline)[::-1]) #run each line for line in file: rc = reverse_complement(seq) print(rc)
[ "You run your function in the wrong place.\nTo run your function for each iterator, run the function there.\n#reading the file\n\nfor line in file:\n line = line.strip()\n if \">\" in line:\n header = line\n elif (len(line) == 0):\n continue\n else:\n seq = line\n #run function for each line, each time.\n rc = reverse_complement(seq)\n print(rc)\n\nIn your previous code, all iteration is successful. But you didn't put the line to the function to run each time. In your previous code, after all, iterations, only the last line is assigned. Therefore you put the last line to the function at the end. This is why your code prints only one line.\nThe solution.\nfrom itertools import repeat\n\n#reverse complement\n\ndef reverse_complement(seq):\n compline = ''\n for n in seq:\n if n == 'A':\n compline += 'T'\n elif n == 'T':\n compline += 'A'\n elif n == 'C':\n compline += 'G'\n elif n == 'G':\n compline += 'C'\n return((compline)[::-1])\n\n\n#opening file\n\nfilename = \"MySequences.fasta\"\nfile = open(filename, 'r')\n\n\n#reading the file\n\nfor line in file:\n line = line.strip()\n if \">\" in line:\n header = line\n elif (len(line) == 0):\n continue\n else:\n seq = line\n #run each line\n rc = reverse_complement(seq)\n print(rc) \n\nAlso, this is your other mistake.\nYou put seq as input instead line.\nBut even if you fix this, this code won't work for the same reason I told you before.\nfor line in file:\n rc = reverse_complement(seq)\n print(rc) \n\n" ]
[ 0 ]
[]
[]
[ "bioinformatics", "python" ]
stackoverflow_0074656373_bioinformatics_python.txt
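A minimal Python sketch related to the FASTA record above, kept hedged: it replaces the if/elif chain with a translation table and accumulates sequence lines per header, which also covers FASTA entries that span several lines. The file name MySequences.fasta is taken from the question; everything else (variable names, printing the header before each reverse complement) is illustrative rather than part of the original post.

COMPLEMENT = str.maketrans("ACGTacgt", "TGCAtgca")

def reverse_complement(seq: str) -> str:
    # Map each base to its complement, then reverse the whole string.
    return seq.translate(COMPLEMENT)[::-1]

with open("MySequences.fasta") as handle:
    header, chunks = None, []
    for line in handle:
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            if header is not None:
                print(header)
                print(reverse_complement("".join(chunks)))
            header, chunks = line, []
        else:
            chunks.append(line)
    if header is not None:  # emit the final record after the loop ends
        print(header)
        print(reverse_complement("".join(chunks)))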
Q: Retrieving Multiple Values From a Listview I have a listview and I would like to cycle through it and determine if an action was completed by a certain time, in this case the action is reading a book. If the time in the column "ReadByThisTime" has passed for that particular book, I would like a message to display. How would I cycle through the listview in order to make sure each book has been read on time? Image of Listview private void filllistview() //This is what is called to populate the listview { SqlConnection conn = new SqlConnection(@"conn"); using (SqlDataAdapter sda = new SqlDataAdapter(@"SELECT * FROM Books", conn)) { //Fill the DataTable with records from Table. System.Data.DataTable dt = new System.Data.DataTable(); sda.Fill(dt); //Loop through the DataTable. foreach (DataRow row in dt.Rows) { //Add Item to ListView. ListViewItem item = new ListViewItem(row["Books"].ToString()); item.SubItems.Add(row["ReadByThisTime"].ToString()); item.SubItems.Add(row["ReadAt"].ToString()); listView1.Items.Add(item); } } } /* This is what I want my listview to perform foreach(ListViewItem item in listView1.Items) { if(book is not read by time) { MessageBox.Show("Error") } }*/ A: Here's what I would do. It's a little weird that the deadline only has time information without a date. So in the example below, I'm only comparing the time portion of the datetime variables. private void alertForReadViolation() { //This is what I want my listview to perform foreach(ListViewItem item in listView1.Items) { //Datetime variables to hold datetimes parsed from listview strings. //If you don't initialize with value, complier complains that the variables aren't assigned to // in the main comparison. DateTime deadline = DateTime.MaxValue; DateTime readTime = DateTime.MinValue; //Try to parse the deadline from the listview text. if (!DateTime.TryParse(item.SubItems[1].Text, out deadline)) { Console.WriteLine("Could not parse Read Deadline from list view string."); return; } //Try to parse the readAt from the listview text. if (!DateTime.TryParse(item.SubItems[2].Text)) { Console.WriteLine("Could not parse ReadAt datetime from list view string."); return; } //Compare only the time portion of the date variables since the deadline only contains time information. if(TimeSpan.Compare(readTime.TimeOfDay, deadline.TimeOfDay) > 0) { MessageBox.Show("Error") } } } Alternatively, you could do the comparison when loading the listview. If you get the datetime data directly from the database DataRow, then it would be strongly typed and you wouldn't have to parse the dates/times from strings. Then, your main fill function would look like this: private void filllistview() //This is what is called to populate the listview { SqlConnection conn = new SqlConnection(@"conn"); using (SqlDataAdapter sda = new SqlDataAdapter(@"SELECT * FROM Books", conn)) { //Fill the DataTable with records from Table. System.Data.DataTable dt = new System.Data.DataTable(); sda.Fill(dt); //Loop through the DataTable. foreach (DataRow row in dt.Rows) { //Add Item to ListView. ListViewItem item = new ListViewItem(row["Books"].ToString()); //Get deadline data from database datetime field. DateTime deadline = (DateTime)row["ReadByThisTime"]; item.SubItems.Add(deadline.ToString()); //Get ReadAt data from database datetime field. DateTime readTime = (DateTime)row["ReadAt"]; item.SubItems.Add(readTime.ToString()); listView1.Items.Add(item); //Compare only the time portion of the date variables since the deadline only contains time information. 
if(TimeSpan.Compare(readTime.TimeOfDay, deadline.TimeOfDay) > 0) { MessageBox.Show("Error") } } } }
Retrieving Multiple Values From a Listview
I have a listview and I would like to cycle through it and determine if an action was completed by a certain time, in this case the action is reading a book. If the time in the column "ReadByThisTime" has passed for that particular book, I would like a message to display. How would I cycle through the listview in order to make sure each book has been read on time? Image of Listview private void filllistview() //This is what is called to populate the listview { SqlConnection conn = new SqlConnection(@"conn"); using (SqlDataAdapter sda = new SqlDataAdapter(@"SELECT * FROM Books", conn)) { //Fill the DataTable with records from Table. System.Data.DataTable dt = new System.Data.DataTable(); sda.Fill(dt); //Loop through the DataTable. foreach (DataRow row in dt.Rows) { //Add Item to ListView. ListViewItem item = new ListViewItem(row["Books"].ToString()); item.SubItems.Add(row["ReadByThisTime"].ToString()); item.SubItems.Add(row["ReadAt"].ToString()); listView1.Items.Add(item); } } } /* This is what I want my listview to perform foreach(ListViewItem item in listView1.Items) { if(book is not read by time) { MessageBox.Show("Error") } }*/
[ "Here's what I would do. It's a little weird that the deadline only has time information without a date. So in the example below, I'm only comparing the time portion of the datetime variables.\n private void alertForReadViolation() {\n //This is what I want my listview to perform\n\n foreach(ListViewItem item in listView1.Items)\n {\n //Datetime variables to hold datetimes parsed from listview strings.\n //If you don't initialize with value, complier complains that the variables aren't assigned to\n // in the main comparison.\n DateTime deadline = DateTime.MaxValue; \n DateTime readTime = DateTime.MinValue;\n \n //Try to parse the deadline from the listview text.\n if (!DateTime.TryParse(item.SubItems[1].Text, out deadline)) {\n Console.WriteLine(\"Could not parse Read Deadline from list view string.\");\n return;\n }\n \n //Try to parse the readAt from the listview text.\n if (!DateTime.TryParse(item.SubItems[2].Text)) {\n Console.WriteLine(\"Could not parse ReadAt datetime from list view string.\");\n return;\n }\n \n //Compare only the time portion of the date variables since the deadline only contains time information.\n if(TimeSpan.Compare(readTime.TimeOfDay, deadline.TimeOfDay) > 0)\n {\n MessageBox.Show(\"Error\")\n }\n }\n }\n\nAlternatively, you could do the comparison when loading the listview. If you get the datetime data directly from the database DataRow, then it would be strongly typed and you wouldn't have to parse the dates/times from strings. Then, your main fill function would look like this:\n private void filllistview() //This is what is called to populate the listview\n {\n SqlConnection conn = new SqlConnection(@\"conn\");\n using (SqlDataAdapter sda = new SqlDataAdapter(@\"SELECT * FROM Books\", conn))\n {\n //Fill the DataTable with records from Table.\n System.Data.DataTable dt = new System.Data.DataTable();\n sda.Fill(dt);\n\n //Loop through the DataTable.\n foreach (DataRow row in dt.Rows)\n {\n //Add Item to ListView.\n ListViewItem item = new ListViewItem(row[\"Books\"].ToString());\n \n //Get deadline data from database datetime field.\n DateTime deadline = (DateTime)row[\"ReadByThisTime\"];\n item.SubItems.Add(deadline.ToString());\n \n //Get ReadAt data from database datetime field.\n DateTime readTime = (DateTime)row[\"ReadAt\"];\n item.SubItems.Add(readTime.ToString());\n \n listView1.Items.Add(item);\n \n //Compare only the time portion of the date variables since the deadline only contains time information.\n if(TimeSpan.Compare(readTime.TimeOfDay, deadline.TimeOfDay) > 0)\n {\n MessageBox.Show(\"Error\")\n }\n }\n }\n }\n\n" ]
[ 0 ]
[]
[]
[ "c#", "listview", "winforms" ]
stackoverflow_0074630333_c#_listview_winforms.txt
Q: Can't read uploaded csv file in SageMaker Studio Lab anymore I used to use the pandas command: pd.read_csv('path copied from studio lab') to read the csv file, but now this same command seems to not work anymore. I got the path used in the pandas command by right-clicking on the uploaded file and then selecting copy path. Any help? The error message: FileNotFoundError: [Errno 2] No such file or directory: 'Titanic/train.csv' A: You can use boto3 to read the files #pip install boto3 import boto3 s3 = boto3.client('s3') obj = s3.get_object( Bucket = 'bucket_name', Key = 'path/to/file.csv' ) df = pd.read_csv(obj['Body'], nrows=100) check few samples using nrows to make sure you see the data you were expecting.
Can't read uploaded csv file in SageMaker Studio Lab anymore
I used to use the pandas command: pd.read_csv('path copied from studio lab') to read the csv file, but now this same command seems to not work anymore. I got the path used in the pandas command by right-clicking on the uploaded file and then selecting copy path. Any help? The error message: FileNotFoundError: [Errno 2] No such file or directory: 'Titanic/train.csv'
[ "You can use boto3 to read the files\n#pip install boto3\n\nimport boto3\ns3 = boto3.client('s3')\nobj = s3.get_object(\n Bucket = 'bucket_name',\n Key = 'path/to/file.csv'\n )\ndf = pd.read_csv(obj['Body'], nrows=100)\n\ncheck few samples using nrows to make sure you see the data you were expecting.\n" ]
[ 0 ]
[]
[]
[ "amazon_sagemaker", "amazon_sagemaker_studio" ]
stackoverflow_0073860227_amazon_sagemaker_amazon_sagemaker_studio.txt
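A hedged sketch of the boto3 route suggested in the answer above; the bucket and key names are placeholders rather than values from the original question, and pd.read_csv can also take an s3:// URL directly if the optional s3fs package is installed.

import boto3
import pandas as pd

s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-bucket", Key="Titanic/train.csv")  # placeholder bucket/key

# obj["Body"] is a streaming body that pandas can read like a file object.
df = pd.read_csv(obj["Body"], nrows=100)  # nrows keeps the preview small, as the answer suggests
print(df.head())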
Q: 'Found the synthetic property @panelState. Please include either "BrowserAnimationsModule" or "NoopAnimationsModule" in your application.' I upgraded an Angular 4 project using angular-seed and now get the error Found the synthetic property @panelState. Please include either "BrowserAnimationsModule" or "NoopAnimationsModule" in your application. How can I fix this? What exactly is the error message telling me? A: Make sure the @angular/animations package is installed (e.g. by running npm install @angular/animations). Then, in your app.module.ts import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; @NgModule({ ..., imports: [ ..., BrowserAnimationsModule ], ... }) A: This error message is often misleading. You may have forgotten to import the BrowserAnimationsModule. But that was not my problem. I was importing BrowserAnimationsModule in the root AppModule, as everyone should do. The problem was something completely unrelated to the module. I was animating an*ngIf in the component template but I had forgotten to mention it in the @Component.animations for the component class. @Component({ selector: '...', templateUrl: './...', animations: [myNgIfAnimation] // <-- Don't forget! }) If you use an animation in a template, you also must list that animation in the component's animations metadata ... every time. A: I ran into similar issues, when I tried to use the BrowserAnimationsModule. Following steps solved my problem: Delete the node_modules dir Clear your package cache using npm cache clean Run one of these two commands listed here to update your existing packages If you experience a 404 errors like http://.../node_modules/@angular/platform-browser/bundles/platform-browser.umd.js/animations add following entries to map in your system.config.js: '@angular/animations': 'node_modules/@angular/animations/bundles/animations.umd.min.js', '@angular/animations/browser':'node_modules/@angular/animations/bundles/animations-browser.umd.js', '@angular/platform-browser/animations': 'node_modules/@angular/platform-browser/bundles/platform-browser-animations.umd.js' naveedahmed1 provided the solution on this github issue. A: For me, I missed this statement in @Component decorator: animations: [yourAnimation] Once I added this statement, errors gone. (Angular 6.x) A: All I had to do was to install this npm install @angular/animations@latest --save and then import import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; into your app.module.ts file. A: After installing an animation module then you create an animation file inside your app folder. 
router.animation.ts import { animate, state, style, transition, trigger } from '@angular/animations'; export function routerTransition() { return slideToTop(); } export function slideToRight() { return trigger('routerTransition', [ state('void', style({})), state('*', style({})), transition(':enter', [ style({ transform: 'translateX(-100%)' }), animate('0.5s ease-in-out', style({ transform: 'translateX(0%)' })) ]), transition(':leave', [ style({ transform: 'translateX(0%)' }), animate('0.5s ease-in-out', style({ transform: 'translateX(100%)' })) ]) ]); } export function slideToLeft() { return trigger('routerTransition', [ state('void', style({})), state('*', style({})), transition(':enter', [ style({ transform: 'translateX(100%)' }), animate('0.5s ease-in-out', style({ transform: 'translateX(0%)' })) ]), transition(':leave', [ style({ transform: 'translateX(0%)' }), animate('0.5s ease-in-out', style({ transform: 'translateX(-100%)' })) ]) ]); } export function slideToBottom() { return trigger('routerTransition', [ state('void', style({})), state('*', style({})), transition(':enter', [ style({ transform: 'translateY(-100%)' }), animate('0.5s ease-in-out', style({ transform: 'translateY(0%)' })) ]), transition(':leave', [ style({ transform: 'translateY(0%)' }), animate('0.5s ease-in-out', style({ transform: 'translateY(100%)' })) ]) ]); } export function slideToTop() { return trigger('routerTransition', [ state('void', style({})), state('*', style({})), transition(':enter', [ style({ transform: 'translateY(100%)' }), animate('0.5s ease-in-out', style({ transform: 'translateY(0%)' })) ]), transition(':leave', [ style({ transform: 'translateY(0%)' }), animate('0.5s ease-in-out', style({ transform: 'translateY(-100%)' })) ]) ]); } Then you import this animation file to your any component. In your component.ts file import { routerTransition } from '../../router.animations'; @Component({ selector: 'app-test', templateUrl: './test.component.html', styleUrls: ['./test.component.scss'], animations: [routerTransition()] }) Don't forget to import animation in your app.module.ts import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; A: The animation should be applied on the specific component. EX : Using animation directive in other component and provided in another. CompA --- @Component ({ animations : [animation] }) CompA --- @Component ({ animations : [animation] <=== this should be provided in used component }) A: My problem was that my @angular/platform-browser was on version 2.3.1 npm install @angular/platform-browser@latest --save Upgrading to 4.4.6 did the trick and added /animations folder under node_modules/@angular/platform-browser A: For me was because I put the animation name inside square brackets. <div [@animation]></div> But after I removed the bracket all worked fine (In Angular 9.0.1): <div @animation></div> A: I got below error : Found the synthetic property @collapse. Please include either "BrowserAnimationsModule" or "NoopAnimationsModule" in your application. I follow the accepted answer by Ploppy and it resolved my problem. Here are the steps: 1. import { trigger, state, style, transition, animate } from '@angular/animations'; Or import { BrowserAnimationsModule } from '@angular/platform-browser/animations'; 2. Define the same in the import array in the root module. It will resolve the error. Happy coding!! A: A simple solution to this, do not to import BrowserAnimationsModule in your lazyloaded or child module, import only in your AppModule. 
If you get this same error while you run your component test, add it to your import array in your test bed. Note. This works if you don't have your own defined animations
'Found the synthetic property @panelState. Please include either "BrowserAnimationsModule" or "NoopAnimationsModule" in your application.'
I upgraded an Angular 4 project using angular-seed and now get the error Found the synthetic property @panelState. Please include either "BrowserAnimationsModule" or "NoopAnimationsModule" in your application. How can I fix this? What exactly is the error message telling me?
[ "Make sure the @angular/animations package is installed (e.g. by running npm install @angular/animations). Then, in your app.module.ts\nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\n\n@NgModule({\n ...,\n imports: [\n ...,\n BrowserAnimationsModule\n ],\n ...\n})\n\n", "This error message is often misleading. \nYou may have forgotten to import the BrowserAnimationsModule. But that was not my problem. I was importing BrowserAnimationsModule in the root AppModule, as everyone should do.\nThe problem was something completely unrelated to the module. I was animating an*ngIf in the component template but I had forgotten to mention it in the @Component.animations for the component class.\n@Component({\n selector: '...',\n templateUrl: './...',\n animations: [myNgIfAnimation] // <-- Don't forget!\n})\n\nIf you use an animation in a template, you also must list that animation in the component's animations metadata ... every time.\n", "I ran into similar issues, when I tried to use the BrowserAnimationsModule. Following steps solved my problem:\n\nDelete the node_modules dir\nClear your package cache using npm cache clean\nRun one of these two commands listed here to update your existing packages\n\nIf you experience a 404 errors like\nhttp://.../node_modules/@angular/platform-browser/bundles/platform-browser.umd.js/animations\n\nadd following entries to map in your system.config.js:\n'@angular/animations': 'node_modules/@angular/animations/bundles/animations.umd.min.js',\n'@angular/animations/browser':'node_modules/@angular/animations/bundles/animations-browser.umd.js',\n'@angular/platform-browser/animations': 'node_modules/@angular/platform-browser/bundles/platform-browser-animations.umd.js'\n\nnaveedahmed1 provided the solution on this github issue.\n", "For me, I missed this statement in @Component decorator:\nanimations: [yourAnimation]\nOnce I added this statement, errors gone.\n(Angular 6.x)\n", "All I had to do was to install this \nnpm install @angular/animations@latest --save \n\nand then import \nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations'; \n\ninto your app.module.ts file.\n", "After installing an animation module then you create an animation file inside your app folder.\n\nrouter.animation.ts\n\nimport { animate, state, style, transition, trigger } from '@angular/animations';\n export function routerTransition() {\n return slideToTop();\n }\n\n export function slideToRight() {\n return trigger('routerTransition', [\n state('void', style({})),\n state('*', style({})),\n transition(':enter', [\n style({ transform: 'translateX(-100%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateX(0%)' }))\n ]),\n transition(':leave', [\n style({ transform: 'translateX(0%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateX(100%)' }))\n ])\n ]);\n }\n\n export function slideToLeft() {\n return trigger('routerTransition', [\n state('void', style({})),\n state('*', style({})),\n transition(':enter', [\n style({ transform: 'translateX(100%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateX(0%)' }))\n ]),\n transition(':leave', [\n style({ transform: 'translateX(0%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateX(-100%)' }))\n ])\n ]);\n }\n\n export function slideToBottom() {\n return trigger('routerTransition', [\n state('void', style({})),\n state('*', style({})),\n transition(':enter', [\n style({ transform: 'translateY(-100%)' }),\n animate('0.5s ease-in-out', style({ 
transform: 'translateY(0%)' }))\n ]),\n transition(':leave', [\n style({ transform: 'translateY(0%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateY(100%)' }))\n ])\n ]);\n }\n\n export function slideToTop() {\n return trigger('routerTransition', [\n state('void', style({})),\n state('*', style({})),\n transition(':enter', [\n style({ transform: 'translateY(100%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateY(0%)' }))\n ]),\n transition(':leave', [\n style({ transform: 'translateY(0%)' }),\n animate('0.5s ease-in-out', style({ transform: 'translateY(-100%)' }))\n ])\n ]);\n }\n\nThen you import this animation file to your any component.\n\nIn your component.ts file\n\nimport { routerTransition } from '../../router.animations';\n\n@Component({\n selector: 'app-test',\n templateUrl: './test.component.html',\n styleUrls: ['./test.component.scss'],\n animations: [routerTransition()]\n})\n\n\nDon't forget to import animation in your app.module.ts\n\nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\n\n", "The animation should be applied on the specific component.\nEX : Using animation directive in other component and provided in another.\nCompA --- @Component\n ({\n\n\nanimations : [animation]\n })\nCompA --- @Component\n ({\n\n\nanimations : [animation] <=== this should be provided in used component\n })\n", "My problem was that my @angular/platform-browser was on version 2.3.1\nnpm install @angular/platform-browser@latest --save\n\nUpgrading to 4.4.6 did the trick and added /animations folder under node_modules/@angular/platform-browser\n", "For me was because I put the animation name inside square brackets.\n<div [@animation]></div>\n\nBut after I removed the bracket all worked fine (In Angular 9.0.1):\n<div @animation></div>\n\n", "I got below error :\nFound the synthetic property @collapse. Please include either \"BrowserAnimationsModule\" or \"NoopAnimationsModule\" in your application.\n\nI follow the accepted answer by Ploppy and it resolved my problem.\nHere are the steps:\n1.\n import { trigger, state, style, transition, animate } from '@angular/animations';\n Or \n import { BrowserAnimationsModule } from '@angular/platform-browser/animations';\n\n2. Define the same in the import array in the root module.\n\nIt will resolve the error. Happy coding!!\n", "A simple solution to this, do not to import BrowserAnimationsModule in your lazyloaded or child module, import only in your AppModule. If you get this same error while you run your component test, add it to your import array in your test bed. Note. This works if you don't have your own defined animations\n" ]
[ 310, 251, 19, 8, 5, 5, 3, 2, 2, 1, 0 ]
[ "Try this\n\nnpm install @angular/animations@latest --save\n\nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\nthis works for me.\n", "--\nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\n---\n\n@NgModule({\n declarations: [ -- ],\n imports: [BrowserAnimationsModule],\n providers: [],\n bootstrap: []\n})\n\n", "Simply add ..\nimport { BrowserAnimationsModule } from '@angular/platform-browser/animations';\nimports: [\n ..\n BrowserAnimationsModule\n],\nin app.module.ts file.\nmake sure you have installed .. npm install @angular/animations@latest --save\n", "Update for angularJS 4:\nError: (SystemJS) XHR error (404 Not Found) loading http://localhost:3000/node_modules/@angular/platform-browser/bundles/platform-browser.umd.js/animations\nSolution:\n**cli:** (command/terminal)\nnpm install @angular/animations@latest --save\n\n**systemjs.config.js** (edit file)\n'@angular/animations': 'npm:@angular/animations/bundles/animations.umd.js',\n'@angular/animations/browser': 'npm:@angular/animations/bundles/animations-browser.umd.js',\n'@angular/platform-browser/animations': 'npm:@angular/platform-browser/bundles/platform-browser-animations.umd.js',\n\n**app.module.ts** (edit file)\nimport {BrowserAnimationsModule} from '@angular/platform-browser/animations';\n@NgModule({\n imports: [ BrowserModule,BrowserAnimationsModule ],\n...\n\n" ]
[ -3, -3, -4, -4 ]
[ "angular", "angular_animations", "typescript" ]
stackoverflow_0043241193_angular_angular_animations_typescript.txt
Q: How to initialise PySpark on AWS Cloud9 I want to initialise pyspark version 3.3.1 on AWS Cloud9 and to read an S3 file path from AWS. But when I run the code, I get the error shown in the image. I think there is something wrong with my PySpark initialisation, and I have tried the code below provided by my colleague, but apparently it doesn't work for me. enter image description here My pyspark version is 3.3.1 and hadoop version 3 pkg_list=org.apache.spark:spark-avro_2.11:2.4.4,org.apache.hadoop:hadoop-aws:2.7.1 pyspark --packages $pkg_list --driver-memory 32G --driver-cores 8 --num-executors 8 --executor-memory 32G --executor-cores 8 --driver-java-options="-Djava.io.tmpdir=/home/yoongkiat/tempfiles" A: The error is saying that in some hadoop config file or option that Spark is using, you have a string 64M, but it's only expecting a number. The error doesn't say which file, and that's not a value you've provided on the command line, so you'll need to debug the installation on your own. As mentioned in comments, AWS EMR already offers a functional Spark environment. By the way, you cannot use dependencies from different Spark versions; you're running 3.3.1, but trying to add spark-avro for 2.4.4. I'm also not certain you'll need to add hadoop-aws since Spark should have those libraries included out of the box.
How to initialise PySpark on AWS Cloud9
I want to initialise pyspark version 3.3.1 on AWS Cloud9 and to read an S3 file path from AWS. But when I run the code, I get the error shown in the image. I think there is something wrong with my PySpark initialisation, and I have tried the code below provided by my colleague, but apparently it doesn't work for me. enter image description here My pyspark version is 3.3.1 and hadoop version 3 pkg_list=org.apache.spark:spark-avro_2.11:2.4.4,org.apache.hadoop:hadoop-aws:2.7.1 pyspark --packages $pkg_list --driver-memory 32G --driver-cores 8 --num-executors 8 --executor-memory 32G --executor-cores 8 --driver-java-options="-Djava.io.tmpdir=/home/yoongkiat/tempfiles"
[ "The error is saying that in some hadoop config file or option that Spark is using, you have a string 64M, but it's only expecting a number.\nThe error doesn't say which file, and that's not a value you've provided on the command line, so you'll need to debug the installation on your own. As mentioned in comments, AWS EMR already offers a functional Spark environment.\nBy the, you cannot use dependencies from different Spark versions; you're running 3.3.1, but trying to add spark-avro for 2.4.4. I'm also not certain you'll need to add hadoop-aws since Spark should have those libraries included out of the box.\n" ]
[ 0 ]
[]
[]
[ "apache_spark", "aws_cloud9", "pyspark" ]
stackoverflow_0074650553_apache_spark_aws_cloud9_pyspark.txt
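A hedged PySpark sketch following the version-mismatch point in the answer above: the package coordinates are aligned with Spark 3.3.1 (Scala 2.12) instead of the 2.4.4/Scala 2.11 ones from the question. The hadoop-aws version shown is an assumption and should match the Hadoop build bundled with your Spark distribution, and the s3a path is a placeholder.

from pyspark.sql import SparkSession

packages = ",".join([
    "org.apache.spark:spark-avro_2.12:3.3.1",   # matches the running Spark version
    "org.apache.hadoop:hadoop-aws:3.3.2",       # assumption: Spark built against Hadoop 3.3.x
])

spark = (
    SparkSession.builder
    .appName("cloud9-s3-read")
    .config("spark.jars.packages", packages)
    .getOrCreate()
)

df = spark.read.csv("s3a://my-bucket/path/to/file.csv", header=True)  # placeholder path
df.show(5)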
Q: How to insert UTC with Time Zone in sql I have code in PostgreSQL that I am converting to SQL Server. In PostgreSQL, when inserting a UTC value into a table with the data type timestamp with time zone, it is inserted with the time zone: create table public.testt123 (tz timestamp with time zone) insert into public.testt123 select now() at time zone 'utc' select * from public.testt123 enter image description here I have tried the same with SQL Server, using the query below: create table Test1(tz [datetimeoffset](7)) insert into Test1 select GETUTCDATE() AT TIME ZONE 'UTC' enter image description here It is inserted without the time zone. I have checked using SYSDATETIMEOFFSET(), but it gives the time zone with the current datetime, not UTC. I have tried the LEFT function, but is it the correct way? Select cast(left(SYSDATETIMEOFFSET() AT TIME ZONE 'UTC',28) + DATENAME(TZOFFSET, SYSDATETIMEOFFSET()) as [datetimeoffset](7)) enter image description here A: Based on the comments, I suspect what you want is: SELECT SYSUTCDATETIME() AT TIME ZONE 'UTC' AT TIME ZONE 'India Standard Time'; Though this could be abbreviated to: SELECT SYSDATETIMEOFFSET() AT TIME ZONE 'India Standard Time'; A: I have check using SYSDATETIMEOFFSET() but it gives time zone with current datetime not UTC Correct, SYSDATETIMEOFFSET() returns a datetimeoffset but with the current UTC offset of the database server. Specify AT TIME ZONE 'UTC' to get a datetimeoffset with the UTC time with a zero offset: SYSDATETIMEOFFSET() AT TIME ZONE 'UTC'
How to insert UTC with Time Zone in sql
I have code in PostgreSQL that I am converting to SQL Server. In PostgreSQL, when inserting a UTC value into a table with the data type timestamp with time zone, it is inserted with the time zone: create table public.testt123 (tz timestamp with time zone) insert into public.testt123 select now() at time zone 'utc' select * from public.testt123 enter image description here I have tried the same with SQL Server, using the query below: create table Test1(tz [datetimeoffset](7)) insert into Test1 select GETUTCDATE() AT TIME ZONE 'UTC' enter image description here It is inserted without the time zone. I have checked using SYSDATETIMEOFFSET(), but it gives the time zone with the current datetime, not UTC. I have tried the LEFT function, but is it the correct way? Select cast(left(SYSDATETIMEOFFSET() AT TIME ZONE 'UTC',28) + DATENAME(TZOFFSET, SYSDATETIMEOFFSET()) as [datetimeoffset](7)) enter image description here
[ "Based on the comments, I suspect what you want is:\nSELECT SYSUTCDATETIME() AT TIME ZONE 'UTC' AT TIME ZONE 'India Standard Time';\n\nThough this could be abbreviated to:\nSELECT SYSDATETIMEOFFSET() AT TIME ZONE 'India Standard Time';\n\n", "\nI have check using SYSDATETIMEOFFSET() but it gives time zone with\ncurrent datetime not UTC\n\nCorrect, SYSDATETIMEOFFSET() returns a datetimeoffset but with the current UTC offset of the database server. Specify AT TIME ZONE 'UTC' to get a datetimeoffset with the UTC time with a zero offset:\nSYSDATETIMEOFFSET() AT TIME ZONE 'UTC'\n\n" ]
[ 1, 0 ]
[]
[]
[ "sql_server", "sql_server_2017" ]
stackoverflow_0074656377_sql_server_sql_server_2017.txt
Q: https://pub.dev/ https://cloud.google.com/ not reachable doctor && Error: A value of type 'T?' can't be returned from a function with return type 'T' All of a sudden this error message popped up /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:307:9: Error: 'ntthrow' isn't a type. ntthrow ProviderNullException(T, context.widget.runtimeType); ^^^^^^^ /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:307:17: Error: Expected ';' after this. ntthrow ProviderNullException(T, context.widget.runtimeType); ^^^^^^^^^^^^^^^^^^^^^ /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:307:40: Error: Expected ')' before this. ntthrow ProviderNullException(T, context.widget.runtimeType); ^ /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:309:14: Error: A value of type 'T?' can't be returned from a function with return type 'T' because 'T?' is nullable and 'T' isn't. return value; It appeared when I added this line : import 'package:provider/provider.dart'; pubspec.yaml : environment: sdk: '>=2.18.2 <3.0.0' dependencies: provider: ^6.0.4 By making flutter doctor, I get this : [SOLVED] (for doctor message not the one below) What can I do ? A: flutter clean followed by flutter pub get. also would recommend to recheck your internet connection, close and reopen your IDE and rerun flutter doctor A: I changed this on Pubspec.yaml provider: ^5.0.0-nullsafety.2
https://pub.dev/ https://cloud.google.com/ not reachable doctor && Error: A value of type 'T?' can't be returned from a function with return type 'T'
All of a sudden this error message popped up /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:307:9: Error: 'ntthrow' isn't a type. ntthrow ProviderNullException(T, context.widget.runtimeType); ^^^^^^^ /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:307:17: Error: Expected ';' after this. ntthrow ProviderNullException(T, context.widget.runtimeType); ^^^^^^^^^^^^^^^^^^^^^ /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:307:40: Error: Expected ')' before this. ntthrow ProviderNullException(T, context.widget.runtimeType); ^ /C:/flutter/.pub-cache/hosted/pub.dartlang.org/provider-6.0.4/lib/src/provider.dart:309:14: Error: A value of type 'T?' can't be returned from a function with return type 'T' because 'T?' is nullable and 'T' isn't. return value; It appeared when I added this line : import 'package:provider/provider.dart'; pubspec.yaml : environment: sdk: '>=2.18.2 <3.0.0' dependencies: provider: ^6.0.4 By making flutter doctor, I get this : [SOLVED] (for doctor message not the one below) What can I do ?
[ "flutter clean followed by flutter pub get. also would recommend to recheck your internet connection, close and reopen your IDE and rerun flutter doctor\n", "I changed this on Pubspec.yaml\nprovider: ^5.0.0-nullsafety.2\n" ]
[ 0, 0 ]
[ "I was connected with a VPN so this error was coming I disconnected VPN and it's working fine now.\n", "I was connected to the internet without VPN and had this problem, but when I connected with VPN, the problem was solved. If you are connected to the Internet from a place that is sanctioned by the US, you must connect with a VPN to solve the problem.\n" ]
[ -1, -1 ]
[ "dart", "flutter", "flutter_provider" ]
stackoverflow_0074130677_dart_flutter_flutter_provider.txt
Q: This is not JSON but similar, what language is this? I am parsing a larger JSON and one field (10) contains some strange data. "10": "a:2:{s:8:\"latitude\";s:17:\"55.50636887209855\";s:9:\"longitude\";s:18:\"-4.576417238098154\";}" The data is clearly latitude and longitude with the corresponding values. But I would like to parse this data correctly and I have never seen this format. Help would be appreciated. A: To me it looks like a raw self-defined format, not like a language. So you have to write your own code to process it. Using jq you can parse the input like this: INPUT=' { "10": "a:2:{s:8:\"latitude\";s:17:\"55.50636887209855\";s:9:\"longitude\";s:18:\"-4.576417238098154\";}" } ' jq '.["10"] |= (. / ";" | map(match("\"(.*)\"").captures[0].string | tonumber? // .))' <<< "$INPUT" Output { "10": [ "latitude", 55.50636887209855, "longitude", -4.576417238098154 ] } Second version INPUT=' { "10": "a:2:{s:8:\"latitude\";s:17:\"55.50636887209855\";s:9:\"longitude\";s:18:\"-4.576417238098154\";}" } ' jq '.["10"] |= (. / ";" | map(match("\"(.*)\"").captures[0].string | tonumber? // .) | {(.[0]): .[1], (.[2]): .[3]} )' <<< "$INPUT" Output { "10": { "latitude": 55.50636887209855, "longitude": -4.576417238098154 } } Remark The code only works if this format contains 4 fields separated by ; with values enclosed by \"
This is not JSON but similar, what language is this?
I am parsing a larger JSON and one field (10) contains some strange data. "10": "a:2:{s:8:\"latitude\";s:17:\"55.50636887209855\";s:9:\"longitude\";s:18:\"-4.576417238098154\";}" The data is clearly latitude and longitude with the corresponding values. But I would like to parse this data correctly and I have never seen this format. Help would be appreciated.
[ "To me it looks like a raw self-defined format, not like a language.\nSo you have to write your own code to process it.\nUsing jq you can parse the input like this:\nINPUT='\n{\n \"10\": \"a:2:{s:8:\\\"latitude\\\";s:17:\\\"55.50636887209855\\\";s:9:\\\"longitude\\\";s:18:\\\"-4.576417238098154\\\";}\"\n}\n'\n\njq '.[\"10\"] |= (. / \";\" | map(match(\"\\\"(.*)\\\"\").captures[0].string | tonumber? // .))' <<< \"$INPUT\"\n\nOutput\n{\n \"10\": [\n \"latitude\",\n 55.50636887209855,\n \"longitude\",\n -4.576417238098154\n ]\n}\n\n\nSecond version\nINPUT='\n{\n \"10\": \"a:2:{s:8:\\\"latitude\\\";s:17:\\\"55.50636887209855\\\";s:9:\\\"longitude\\\";s:18:\\\"-4.576417238098154\\\";}\"\n}\n'\n\njq '.[\"10\"] |= (. / \";\" | map(match(\"\\\"(.*)\\\"\").captures[0].string | tonumber? // .)\n | {(.[0]): .[1], (.[2]): .[3]} )' <<< \"$INPUT\"\n\nOutput\n{\n \"10\": {\n \"latitude\": 55.50636887209855,\n \"longitude\": -4.576417238098154\n }\n}\n\n\nRemark\nThe code only works if this format contains 4 fields separated by ; with values enclosed by \\\"\n" ]
[ 0 ]
[]
[]
[ "format", "json" ]
stackoverflow_0074655082_format_json.txt
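A hedged note on the record above: the quoted value has the shape of PHP's serialize() output (a:2 is an array with two entries, s:8 a string of length 8), so a PHP-serialization parser could likely read it as well. The Python sketch below avoids any extra dependency and simply pulls the quoted strings out with a regular expression, which only covers flat string entries like the latitude/longitude example.

import re

raw = (
    'a:2:{s:8:"latitude";s:17:"55.50636887209855";'
    's:9:"longitude";s:18:"-4.576417238098154";}'
)

strings = re.findall(r's:\d+:"(.*?)";', raw)     # ['latitude', '55.506...', 'longitude', '-4.576...']
pairs = dict(zip(strings[0::2], strings[1::2]))  # keys alternate with values
coords = {key: float(value) for key, value in pairs.items()}
print(coords)  # {'latitude': 55.50636887209855, 'longitude': -4.576417238098154}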
Q: Using a square matrix with Networkx but keep getting Adjacency matrix not square So I'm using Networkx to plot a cooc matrix. It works well with small samples but I keep getting this error when I run it with a big cooc matrix (reason why I can't share a minimum reproductible example): Traceback (most recent call last): File "", line 113, in <module> G = nx.from_pandas_adjacency(matrix) File "", line 205, in from_pandas_adjacency G = from_numpy_array(A, create_using=create_using) File "", line 1357, in from_numpy_array raise nx.NetworkXError(f"Adjacency matrix not square: nx,ny={A.shape}") networkx.exception.NetworkXError: Adjacency matrix not square: nx,ny=(74, 76) This is my code : G = nx.from_pandas_adjacency(matrix) # visualize it with pyvis N = Network(height='100%', width='100%', bgcolor='#222222', font_color='white') N.barnes_hut() for n in G.nodes: N.add_node(n) for e in G.edges: N.add_edge((e[0]), (e[1])) And this is the ouput of my matrix : Ali Sarah Josh Maura Mort ... Jasmine Lily Adam Ute Ali 0 3 2 2 ... 0 0 1 0 Sarah 3 0 3 3 ... 0 0 1 0 Josh 2 3 0 4 ... 0 0 1 0 Maura Mort 2 3 4 0 ... 0 0 1 0 Shelly 0 0 0 0 ... 0 0 0 0 ... ... ... ... ... ... ... ... ... ... Nicol 0 0 0 0 ... 0 0 0 0 Jasmine 0 0 0 0 ... 0 0 0 0 Lily 0 0 0 0 ... 0 0 0 0 Adam 1 1 1 1 ... 0 0 0 0 Ute 0 0 0 0 ... 0 0 0 0 [74 rows x 74 columns] Weirdly, it looks like my matrix is a square (74 x 74). Any idea what might be the problem ? A: So I was able to fix my problem by first converting my matrix into a stack. cooc_matrix = matrix(matrixLabel, texts) matrix = pd.DataFrame(cooc_matrix.todense(), index=matrixLabel, columns=matrixLabel) print(matrix) #This fixed my problem stw = matrix.stack() stw = stw[stw >= 1].rename_axis(('source', 'target')).reset_index(name='weight') print(stw) G = nx.from_pandas_edgelist(stw, edge_attr=True) A: I got the same problem. I am using a square pandas data frame (as indicated by df.shape) but I am getting the error Adjacency matrix not square. Using stack() and from_pandas_edgelist() solved the problem for me.
Using a square matrix with Networkx but keep getting Adjacency matrix not square
So I'm using Networkx to plot a cooc matrix. It works well with small samples but I keep getting this error when I run it with a big cooc matrix (reason why I can't share a minimum reproductible example): Traceback (most recent call last): File "", line 113, in <module> G = nx.from_pandas_adjacency(matrix) File "", line 205, in from_pandas_adjacency G = from_numpy_array(A, create_using=create_using) File "", line 1357, in from_numpy_array raise nx.NetworkXError(f"Adjacency matrix not square: nx,ny={A.shape}") networkx.exception.NetworkXError: Adjacency matrix not square: nx,ny=(74, 76) This is my code : G = nx.from_pandas_adjacency(matrix) # visualize it with pyvis N = Network(height='100%', width='100%', bgcolor='#222222', font_color='white') N.barnes_hut() for n in G.nodes: N.add_node(n) for e in G.edges: N.add_edge((e[0]), (e[1])) And this is the ouput of my matrix : Ali Sarah Josh Maura Mort ... Jasmine Lily Adam Ute Ali 0 3 2 2 ... 0 0 1 0 Sarah 3 0 3 3 ... 0 0 1 0 Josh 2 3 0 4 ... 0 0 1 0 Maura Mort 2 3 4 0 ... 0 0 1 0 Shelly 0 0 0 0 ... 0 0 0 0 ... ... ... ... ... ... ... ... ... ... Nicol 0 0 0 0 ... 0 0 0 0 Jasmine 0 0 0 0 ... 0 0 0 0 Lily 0 0 0 0 ... 0 0 0 0 Adam 1 1 1 1 ... 0 0 0 0 Ute 0 0 0 0 ... 0 0 0 0 [74 rows x 74 columns] Weirdly, it looks like my matrix is a square (74 x 74). Any idea what might be the problem ?
[ "So I was able to fix my problem by first converting my matrix into a stack.\ncooc_matrix = matrix(matrixLabel, texts)\nmatrix = pd.DataFrame(cooc_matrix.todense(), index=matrixLabel, columns=matrixLabel)\nprint(matrix)\n\n#This fixed my problem\nstw = matrix.stack()\nstw = stw[stw >= 1].rename_axis(('source', 'target')).reset_index(name='weight')\nprint(stw)\n\nG = nx.from_pandas_edgelist(stw, edge_attr=True)\n\n", "I got the same problem. I am using a square pandas data frame (as indicated by df.shape) but I am getting the error Adjacency matrix not square. Using stack() and from_pandas_edgelist() solved the problem for me.\n" ]
[ 2, 0 ]
[]
[]
[ "matrix", "networkx", "python" ]
stackoverflow_0069349516_matrix_networkx_python.txt
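A hedged Python sketch around the record above: it checks that the co-occurrence frame really is square with identical row and column labels before calling from_pandas_adjacency, and otherwise falls back to the stack()/edge-list route from the accepted answer. The name matrix is assumed to be the pandas DataFrame built from the question's co-occurrence data.

import networkx as nx
import pandas as pd

def cooc_to_graph(matrix: pd.DataFrame) -> nx.Graph:
    square = matrix.shape[0] == matrix.shape[1]
    same_labels = list(matrix.index) == list(matrix.columns)
    if square and same_labels:
        return nx.from_pandas_adjacency(matrix)
    # Mismatched, reordered or duplicated labels: build an edge list instead.
    stw = matrix.stack()
    stw = stw[stw >= 1].rename_axis(("source", "target")).reset_index(name="weight")
    return nx.from_pandas_edgelist(stw, edge_attr=True)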
Q: ActivatedRoute does not contain any params from auxilary route When user clicks the edit button, I change the url to /posts/(modal:1/edit) which displays a modal dialog. I need to access the query params of that route to get the id of the post. Params are always an empty object. Whenever I look through the browser console. I seeRouter Event: ActivationEnd` three different times. The first and last time the params object is empty however the second time it has the value I need. Why is Router Event: ActivationEnd displayed so many times? Running ngOnInit logs only once, meaning that the component is mounted only once. Any ideas? This is how I am changing the route ` this.router.navigate([{outlets: {modal: `${this.post.postID}/edit`}}], {relativeTo: this.route}); ` here are my routes: ` { path: 'new', component: PostItDialogContainerComponent, outlet: 'modal' }, { path: ':id/edit', component: PostItDialogContainerComponent, outlet: 'modal' }, ` PostItDialogContainerComponent just makes a call to the DialogService to open up the modal ` ngOnInit() { console.log("Inside of post it dialog container"); this.postItDialog.openPostItDialog(this.dialog); } ` My router-outlets are on the same app.component.html level ` <main> <router-outlet></router-outlet> <router-outlet name="modal"></router-outlet> </main> ` This is the modal dialog in ngOnInit: ` this.route.params .subscribe( (params) => { this.postID = params['id']; this.params = params; this.editMode = params['id'] != null; this.initForm(); console.log("PARAMS ID IS: ", params['id']); } ); ` I know I'm missing something here, but I cant pin point it. Thanks in advance A: The solution ended up being more convoluted than I had hoped, but this was the only way I could get the params of the auxiliary route. All other calls to this.route.params.subscribe() kept coming up as empty objects. in ngOnInit I have the following code: ` this.route.firstChild .children .filter(routes => routes.outlet == 'modal') .map(r => r.params.subscribe( (params) => { this.editMode = params['id'] != null this.postID = +params['id'] } )) ` this will look at the ActivatedRoute and then check its firstChild ActivatedRoute and then check for any children that have the outlet of 'modal'. Then we map each ActivatedRoute and subscribe to their params to retrieve the id and the mode. A: This should work as expected when you inject ActivatedRoute in the component to which the auxiliary route points (your code snippets do not reveal which instance of ActivatedRoute you have used.) Explanation: ActivatedRoute is not a global singleton, and components from different RouterOutlets will receive different instances of ActivatedRoute, depending on 'their' route, even when displayed concurrently. In particular, params from child routes are not visible in the ActivatedRoute of their parent components (which makes sense: a parent could have more than one nested outlet, which opens the possibility of auxiliary param name collision). So to get params from an auxiliary route, you need to look at the ActivatedRoute instance injected in the auxiliary route's component. If you need to propagate the value back to the parent component, you probably have to do this by hand (e.g. via a shared service).
ActivatedRoute does not contain any params from auxilary route
When the user clicks the edit button, I change the url to /posts/(modal:1/edit), which displays a modal dialog. I need to access the query params of that route to get the id of the post. Params are always an empty object. Whenever I look through the browser console, I see `Router Event: ActivationEnd` three different times. The first and last time the params object is empty; however, the second time it has the value I need. Why is Router Event: ActivationEnd displayed so many times? Running ngOnInit logs only once, meaning that the component is mounted only once. Any ideas? This is how I am changing the route ` this.router.navigate([{outlets: {modal: `${this.post.postID}/edit`}}], {relativeTo: this.route}); ` here are my routes: ` { path: 'new', component: PostItDialogContainerComponent, outlet: 'modal' }, { path: ':id/edit', component: PostItDialogContainerComponent, outlet: 'modal' }, ` PostItDialogContainerComponent just makes a call to the DialogService to open up the modal ` ngOnInit() { console.log("Inside of post it dialog container"); this.postItDialog.openPostItDialog(this.dialog); } ` My router-outlets are on the same app.component.html level ` <main> <router-outlet></router-outlet> <router-outlet name="modal"></router-outlet> </main> ` This is the modal dialog in ngOnInit: ` this.route.params .subscribe( (params) => { this.postID = params['id']; this.params = params; this.editMode = params['id'] != null; this.initForm(); console.log("PARAMS ID IS: ", params['id']); } ); ` I know I'm missing something here, but I can't pinpoint it. Thanks in advance
[ "The solution ended up being more convoluted than I had hoped, but this was the only way I could get the params of the auxiliary route. All other calls to this.route.params.subscribe() kept coming up as empty objects.\nin ngOnInit I have the following code:\n`\nthis.route.firstChild\n .children\n .filter(routes => routes.outlet == 'modal')\n .map(r => r.params.subscribe(\n (params) => {\n this.editMode = params['id'] != null\n this.postID = +params['id']\n }\n ))\n`\n\nthis will look at the ActivatedRoute and then check its firstChild ActivatedRoute and then check for any children that have the outlet of 'modal'. Then we map each ActivatedRoute and subscribe to their params to retrieve the id and the mode.\n", "This should work as expected when you inject ActivatedRoute in the component to which the auxiliary route points (your code snippets do not reveal which instance of ActivatedRoute you have used.)\nExplanation:\nActivatedRoute is not a global singleton, and components from different RouterOutlets will receive different instances of ActivatedRoute, depending on 'their' route, even when displayed concurrently. In particular, params from child routes are not visible in the ActivatedRoute of their parent components (which makes sense: a parent could have more than one nested outlet, which opens the possibility of auxiliary param name collision).\nSo to get params from an auxiliary route, you need to look at the ActivatedRoute instance injected in the auxiliary route's component. If you need to propagate the value back to the parent component, you probably have to do this by hand (e.g. via a shared service).\n" ]
[ 0, 0 ]
[]
[]
[ "angular" ]
stackoverflow_0055348075_angular.txt
Q: What is the best way to verify an email address if it actually exists? Is there any way to verify whether an email address actually exists or not in Python? Or is there any platform offering such services? For example, I have some emails
[email protected]
[email protected]
[email protected]
[email protected]

How can I be 100% sure which of these emails really exist on the internet?

A: There are several tutorials on the internet. Here's what I found...
First you need to check for the correct formatting, and for this you can use regular expressions like this:
import re

addressToVerify = '[email protected]'
match = re.match('^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,4})$', addressToVerify)

if match == None:
    print('Bad Syntax')
    raise ValueError('Bad Syntax')

DNS
Next we need to get the MX record for the target domain, in order to start the email verification process. Note that you are allowed in the RFCs to have a mail server on your A record, but that's outside of the scope of this article and demo script.
import dns.resolver

records = dns.resolver.query('scottbrady91.com', 'MX')
mxRecord = records[0].exchange
mxRecord = str(mxRecord)

Python DNS
Python doesn't have any inbuilt DNS components, so we've pulled in the popular dnspython library. Any library that can resolve an MX record from a domain name will work though.
Mailbox
Now that we have all the preflight information we need, we can now find out if the email address exists.
import socket
import smtplib

# Get local server hostname
host = socket.gethostname()

# SMTP lib setup (use debug level for full output)
server = smtplib.SMTP()
server.set_debuglevel(0)

# SMTP Conversation
server.connect(mxRecord)
server.helo(host)
server.mail('[email protected]')
code, message = server.rcpt(str(addressToVerify))
server.quit()

# Assume 250 as Success
if code == 250:
    print('Success')
else:
    print('Bad')

What we are doing here is the first three commands of an SMTP conversation for sending an email, stopping just before we send any data.
The actual SMTP commands issued are: HELO, MAIL FROM and RCPT TO. It is the response to RCPT TO that we are interested in. If the server sends back a 250, that means we are good to send an email (the email address exists); otherwise the server will return a different status code (usually a 550), meaning the email address does not exist on that server.
And that's email verification!
source: https://www.scottbrady91.com/Email-Verification/Python-Email-Verification-Script

Here are some other alternatives from another website...
Let's make it more sophisticated and assume we want the following criteria to be met for an account@domain email address:
Both consist of case-insensitive alphanumeric characters. Dashes, periods, hyphens or underscores are also allowed
Both can only start and end with alphanumeric characters
Both contain no white spaces
account has at least one character and domain has at least two
domain includes at least one period
And of course, there's an '@' symbol between them
^[a-z]([\w-]*[a-z]|[\w-.]*[a-z]{2,}|[a-z])*@[a-z]([\w-]*[a-z]|[\w-.]*[a-z]{2,}|[a-z]){4,}?\.[a-z]{2,}$

Validating emails with Python libraries
Another way of running sophisticated checks is with ready-to-use packages, and there are plenty to choose from. Here are several popular options:
email-validator 1.0.5
This library focuses strictly on the domain part of an email address, and checks if it's in an [email protected] format.
As a bonus, the library also comes with a validation tool. It checks if the domain name can be resolved, thus giving you a good idea about its validity.
pylsEmail 1.3.2
This Python program to validate email addresses can also be used to validate both a domain and an email address with just one call. If a given address can't be validated, the script will inform you of the most likely reasons.
py3-validate-email
This comprehensive library checks an email address for a proper structure, but it doesn't just stop there.
It also verifies if the domain's MX records exist (as in, is able to send/receive emails), whether a particular address on this domain exists, and also if it's not blacklisted. This way, you avoid emailing blacklisted accounts, which would affect deliverability.
source: https://mailtrap.io/blog/python-validate-email/
These are just extracts from these websites; make sure to visit them and confirm this can be useful for your specific case. They also include many more methods and explain this in more detail. Hope it helps.

A: You can send a verification email with a code to the entered address to verify its presence. This can be done using any Python framework: Django, Flask, etc.

A: You can use gmass.co: connect it to your Google account, go to Settings, then API keys, create an API key and use it this way:
import requests

sent = requests.get('https://verify.gmass.co/verify?email='+email+'&key='+api,
    headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36'}
)
if ("valid" in sent.content):
    print(email+"valid")
else:
    print(email+"invalid")

A: There are many services offering such tools that can validate your emails in bulk. If you are doing this for a marketing campaign, to avoid bounces and getting your campaign marked as spam, there are services that can help you check bulk emails for your marketing campaigns.
However, there are some online platforms too, but you can't check multiple emails.
Validating email addresses is easy using valid email checkers.
You can use smtplib, a Python library used for sending emails, and Python regex too. But it will only check whether the string is an email or something else.
import re

regex = r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Z|a-z]{2,}\b'
def check(email):
    if(re.fullmatch(regex, email)):
        print("Valid Email")
    else:
        print("Invalid Email")

if __name__ == '__main__':
    email = input("Enter Email:")
    check(email)

This is not the actual solution to your question, but an example of matching an email with regex.
Check out the services that offer email validation in bulk. I'd recommend EmailChecks.

A: The best method would be to send a verification mail with some code they need to submit. Not only is this simple, it also proves the email belongs to them.

A: I believe there are three ways to approach this.
1. REGEX
Other thread participants have already suggested this. The problem with this approach is that it simply checks a string for the correct syntax and does not ensure that the email address exists.
2. Bulk Software
There are plenty of GUI tools that allow you to import lists that are then validated. However, this usually does not fit the software development case.
3. Email Validation API
The previously mentioned GUI tools often offer an API that allows you to conduct real-time validation of your users' email addresses.
A few vendors that provide this:
emailvalidation.io
emailable.com
datavalidation.com
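For illustration, here is a minimal sketch that chains the three checks discussed above (syntax, MX lookup, SMTP RCPT probe) into one function. It assumes the dnspython package is installed; the sender address and timeout are placeholders, and many mail servers deliberately accept every RCPT TO, so a 250 reply is a hint rather than proof that the mailbox exists.
# Illustrative sketch only: combines the syntax, MX and SMTP checks from the answers above.
# Assumes `pip install dnspython`; the sender address is a placeholder.
import re
import smtplib
import dns.resolver

EMAIL_RE = re.compile(r'^[_a-z0-9-]+(\.[_a-z0-9-]+)*@[a-z0-9-]+(\.[a-z0-9-]+)*(\.[a-z]{2,})$')

def probe_mailbox(address, sender='[email protected]', timeout=10):
    if EMAIL_RE.match(address.lower()) is None:
        return False, 'bad syntax'
    domain = address.rsplit('@', 1)[1]
    try:
        records = dns.resolver.resolve(domain, 'MX')  # use .query() on older dnspython
    except Exception as exc:
        return False, 'no MX record ({})'.format(exc)
    mx_host = str(sorted(records, key=lambda r: r.preference)[0].exchange).rstrip('.')
    server = smtplib.SMTP(mx_host, 25, timeout=timeout)
    try:
        server.helo()
        server.mail(sender)
        code, _ = server.rcpt(address)
    finally:
        server.quit()
    return code == 250, 'server replied {}'.format(code)

# Example usage: print(probe_mailbox('[email protected]'))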
What is the best way to verify an email address if it actually exist?
Is there any way to verify an email whether the email actually exist or no in python? Or is thee any platform offering such services? For Example: I have some emails [email protected] [email protected] [email protected] [email protected] How can I be 100% sure which of the following email really exist on the internet?
[ "There are several tutorials on the internet. Here's what I found...\n\n\nFirst you need to check for the correct formatting and for this you can use regular expressions like this:\n import re\n \n addressToVerify ='[email protected]'\n match = re.match('^[_a-z0-9-]+(\\.[_a-z0-9-]+)*@[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,4})$', addressToVerify)\n \n if match == None:\n print('Bad Syntax')\n raise ValueError('Bad Syntax')\n\nDNS\nNext we need to get the MX record for the target domain, in order to start the email verification process. Note that you are allowed in the RFCs to have a mail server on your A record, but that's outside of the scope of this article and demo script.\n import dns.resolver\n \n records = dns.resolver.query('scottbrady91.com', 'MX')\n mxRecord = records[0].exchange\n mxRecord = str(mxRecord)\n\nPython DNS Python doesn't have any inbuilt DNS components, so we've\npulled in the popular dnspython library. Any library that can resolve\nan MX record from a domain name will work though.\nMailbox\nNow that we have all the preflight information we need, we can now find out if the email address exists.\n import socket\n import smtplib\n \n # Get local server hostname\n host = socket.gethostname()\n \n # SMTP lib setup (use debug level for full output)\n server = smtplib.SMTP()\n server.set_debuglevel(0)\n \n # SMTP Conversation\n server.connect(mxRecord)\n server.helo(host)\n server.mail('[email protected]')\n code, message = server.rcpt(str(addressToVerify))\n server.quit()\n \n # Assume 250 as Success\n if code == 250:\n print('Success')\n else:\n print('Bad')\n\nWhat we are doing here is the first three commands of an SMTP conversation for sending an email, stopping just before we send any data.\nThe actual SMTP commands issued are: HELO, MAIL FROM and RCPT TO. It is the response to RCPT TO that we are interested in. If the server sends back a 250, then that means we are good to send an email (the email address exists), otherwise the server will return a different status code (usually a 550), meaning the email address does not exist on that server.\nAnd that's email verification!\n\nsource: https://www.scottbrady91.com/Email-Verification/Python-Email-Verification-Script\n\nHere are some other alternatives from another website...\n\nLet’s make it more sophisticated and assume we want the following criteria to be met for an account@domain email address:\n\nBoth consist of case-insensitive alphanumeric characters. Dashes,\nperiods, hyphens or underscores are also allowed\nBoth can only start and end with alphanumeric characters\nBoth contain no white spaces account has at least one character and\ndomain has at least two domain includes at least one period\nAnd of course, there’s an ‘@’ symbol between them\n\n ^[a-z]([w-]*[a-z]|[w-.]*[a-z]{2,}|[a-z])*@[a-z]([w-]*[a-z]|[w-.]*[a-z]{2,}|[a-z]){4,}?.[a-z]{2,}$\n\nValidating emails with Python libraries\nAnother way of running sophisticated checks is with ready-to-use packages, and there are plenty to choose from. Here are several popular options:\nemail-validator 1.0.5\nThis library focuses strictly on the domain part of an email address, and checks if it’s in an [email protected] format.\nAs a bonus, the library also comes with a validation tool. It checks if the domain name can be resolved, thus giving you a good idea about its validity.\npylsEmail 1.3.2\nThis Python program to validate email addresses can also be used to validate both a domain and an email address with just one call. 
If a given address can’t be validated, the script will inform you of the most likely reasons.\npy3-validate-email\nThis comprehensive library checks an email address for a proper structure, but it doesn’t just stop there.\nIt also verifies if the domain’s MX records exist (as in, is able to send/receive emails), whether a particular address on this domain exists, and also if it’s not blacklisted. This way, you avoid emailing blacklisted accounts, which would affect deliverability.\n\nsource: https://mailtrap.io/blog/python-validate-email/\nThese are just extracts from these websites, make sure to visit them and confirm this can be useful for your specific case, they also include much more methods and explain this in more detail, hope it helps.\n", "You can send a verification email with a code to the entered email to verify its presence. This can be done using any python framework. Django, flask, etc.\n", "You can use gmass.co, connect to your google account , go to settings , apikeys , then create an api key and use it this way :\nsent = requests.get('https://verify.gmass.co/verify?email='+email+'&key='+api,\n headers={'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.141 Safari/537.36'}\n )\nif (\"valid\" in sent.content):\n print(email+\"valid\")\nelse: \n print(email+\"invalid\")\n\n", "There are many services offering such tools that can validate your emails in bulk. If you are doing this for a marketing campaign to avoid bounce rate and getting your campaign marked as spammed then there are services that can help you checking bulk emails for your marketing campaigns.\nHowever, there is some online platform too but you can't check multiple emails.\n\nValidating email addresses is easy using valid email checkers.\nYou can use SMTPlib, a python library used for sending emails and using python regex too. But It will only check for the email if it is an email or something else.\n\nimport re\n\nregex = r'\\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\\.[A-Z|a-z]{2,}\\b'\ndef check(email):\n\n if(re.fullmatch(regex, email)):\n print(\"Valid Email\")\n \n else:\n print(\"Invalid Email\")\n\nif __name__ == '__main__':\n email = input(\"Enter Email:\")\n check(email)\n\nThis is not the actual solution to your question, but an example of matching email with regex.\nCheck out the services that offer emails validation in bulk.\nI'd recommend EmailChecks\n", "The best method would be to send a verification mail, with some code they need to submit. Not only this is simple, it also proves the email belongs to them\n", "I believe there are three ways to approach this.\n1. REGEX\nOther thread participants have already suggested this. The problem with this approach is that it simply checks a string for the correct syntax and does not ensure that the email address exists.\n2. Bulk Software\nThere are plenty of GUI tools that allow you to import lists that are then being validated. However, this usually does not fit the software development case.\n3. Email Validation API\nThe previously mentioned GUI tools often offer an API that allows you to conduct real-time validation of your user's email addresses.\nA few vendors that provide this:\n\nemailvalidation.io\nemailable.com\ndatavalidation.com\n\n" ]
[ 2, 1, 1, 1, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0069412522_python.txt
Q: Video stream of H.264(avc1.640029) type won't play on Safari
Description
Here is my demo website and m3u8 source
Demo: https://codepen.io/shaoyulan/pen/wvXQoQM
Stream source: https://stream.huskyking.com/stream/43d1bf49-1769-41f0-ac51-88e6f35d5fc9/index.m3u8
The problem
It runs perfectly on Android and Windows Chrome, but I get ERROR (CODE:3 MEDIA_ERR_DECODE) on Safari, both iPhone and Mac.
Reduced test case
https://codepen.io/shaoyulan/pen/wvXQoQM
<video id="video" controls autoplay muted>
  <source
    type="application/x-mpegURL"
    src="https://stream.huskyking.com/stream/43d1bf49-1769-41f0-ac51-88e6f35d5fc9/index.m3u8" >
</video>

Steps to reproduce
View the page on Safari. You will see the video is not playable.
What browser(s) including version(s) does this occur with?
All Safari versions
What OS(es) and version(s) does this occur with?
Safari, both on iPhone and Mac
What is expected?
The stream is expected to be playable on Safari.

A: From research... Safari has built-in M3U8 playback on the <video> tag. So you don't need, and cannot use, HLS.js with it (because an important API for playing custom bytes, needed by HLS.js, is not available or not supported by the Safari browser).
Try this code in Safari (I cannot test it myself):
<!DOCTYPE html>
<html>
<body>

<video width="640" height="400" controls>
<source
type="application/x-mpegURL"
src="https://yioushen-camera.muki001.com/stream/82a4899f-d803-4567-b7c5-221977e14646/index.m3u8" >
</video>

</body>
</html>
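A common pattern when the same page must serve both Safari (native HLS) and MSE-based browsers is to feature-detect before wiring up HLS.js; if native playback still fails with a decode error, the stream's encoding parameters are the usual suspect rather than the embedding code. The sketch below is illustrative only; it assumes HLS.js is already loaded on the page and reuses the stream URL from the question.
// Illustrative sketch: pick native HLS on Safari, HLS.js elsewhere.
const video = document.getElementById('video');
const src = 'https://stream.huskyking.com/stream/43d1bf49-1769-41f0-ac51-88e6f35d5fc9/index.m3u8';

if (video.canPlayType('application/vnd.apple.mpegurl')) {
  // Safari (and iOS browsers) play HLS natively through the <video> element.
  video.src = src;
} else if (window.Hls && Hls.isSupported()) {
  // Browsers with Media Source Extensions need HLS.js to demux the stream.
  const hls = new Hls();
  hls.loadSource(src);
  hls.attachMedia(video);
} else {
  console.error('HLS playback is not supported in this browser.');
}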
Video stream of H.264(avc1.640029) type won't play on Safari
Description Here is my demo website and m3u8 source Demo: https://codepen.io/shaoyulan/pen/wvXQoQM Strem source: https://stream.huskyking.com/stream/43d1bf49-1769-41f0-ac51-88e6f35d5fc9/index.m3u8 The problem And it's run perfectly at android or windows chrome But got ERROR (CODE:3 MEDIA_ERR_DECODE) at safari Both iphone and mac Reduced test case https://codepen.io/shaoyulan/pen/wvXQoQM <video id="video" controls autoplay muted> <source type="application/x-mpegURL" src="https://stream.huskyking.com/stream/43d1bf49-1769-41f0-ac51-88e6f35d5fc9/index.m3u8" > </video> Steps to reproduce View the page on safari You will see the video is not playable. What browser(s) including version(s) does this occur with? All Sarari version What OS(es) and version(s) does this occur with? Safari Both on Iphone and Mac What is expected ? The stream is expected to playable on safari.
[ "From research... Safari has built-in M3U8 playback on <video> tag. So at any time, you don't need and cannot use this HLS.js with it (because an important API for playing custom bytes, needed by HLS.js, is not available or not supported by the Safari browser).\nTry this code in Safari (I cannot test it myself) :\n<!DOCTYPE html>\n<html>\n<body>\n\n<video width=\"640\" height=\"400\" controls>\n<source \ntype=\"application/x-mpegURL\"\nsrc=\"https://yioushen-camera.muki001.com/stream/82a4899f-d803-4567-b7c5-221977e14646/index.m3u8\" >\n</video>\n\n</body>\n</html>\n\n" ]
[ 0 ]
[]
[]
[ "h.264", "ios", "javascript", "macos", "safari" ]
stackoverflow_0074549183_h.264_ios_javascript_macos_safari.txt
Q: Flask - show all data from MongoDB in HTML template
I am using MongoDB as a database. I want to show all my data in the HTML template.
Python code:
from flask import Flask, render_template, request, url_for
from flask_pymongo import PyMongo
import os

app = Flask(__name__)
app.config['MONGO_DBNAME'] = 'flask_assignment'
app.config['MONGO_URI'] = 'mongodb://username:[email protected]:31698/db_name'
mongo = PyMongo(app)

@app.route('/index')
def index():
    emp_list = mongo.db.employee_entry.find()
    return render_template('index.html', emp_list = emp_list)

app.run(debug=True)

My HTML code:
{% for emp in emp_list %}
<tr>
    <td>{{ emp['name'] }}</td>
    <td>{{ emp['password'] }}</td>
    <td>{{ emp['email'] }}</td>
</tr>
{% endfor %}

When I run the server it shows me nothing, just a blank page...

A: Maybe the issue is that the emp_list is very large, and it takes a long time to insert it in the template, so the page won't be shown.
You can limit the data to, for example, 10 documents, using:
emp_list = mongo.db.employee_entry.find().limit(10)

and see if it solves the problem.

A: OK, I'm sorry for that msg earlier. Try this:
{% for key,value in emp_list %}
<tr>
    <th scope="row">{{loop.index}}</th>
    <td>{{value}}</td>
    <td>{{value}}</td>
    <td>{{value}}</td>
</tr>
{% endfor %}

where loop.index is used for incrementing; you can look up its use.
I am 100% sure that just pasting it without editing will not work. What I am actually trying to say is: try using the key,value form of for loops in Jinja2, and edit this so that it fits your code.
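One more thing worth checking, offered only as a sketch rather than a confirmed fix: PyMongo's find() returns a lazy cursor, so materializing it into a list before passing it to the template makes it easy to verify in the view that documents were actually found (and rules out an empty collection or a wrong collection name).
# Illustrative sketch: materialize the cursor and fail loudly if nothing was found.
@app.route('/index')
def index():
    emp_list = list(mongo.db.employee_entry.find())   # cursor -> list of dicts
    print('documents found:', len(emp_list))          # quick sanity check in the console
    return render_template('index.html', emp_list=emp_list)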
Flask - show all data form Mongodb in html template
I am using MongoDB as a database. I want to show all my data in the HTML template python code: from flask import Flask, render_template, request, url_for from flask_pymongo import PyMongo import os app = Flask(__name__) app.config['MONGO_DBNAME'] = 'flask_assignment' app.config['MONGO_URI'] = 'mongodb://username:[email protected]:31698/db_name' mongo = PyMongo(app) @app.route('/index') def index(): emp_list = mongo.db.employee_entry.find() return render_template('index.html', emp_list = emp_list) app.run(debug=True) my HTML code: {% for emp in emp_list %} <tr> <td>{{ emp['name'] }}</td> <td>{{ emp['password'] }}</td> <td>{{ emp['email'] }}</td> </tr> {% endfor %} when I ran the server it shows me nothing blank page...
[ "Maybe the issue is that the emp_list is very large, and it takes a long time to insert it in the template, see the page won't be shown. \nYou can limit the data to for example 10 documents, using:\nemp_list = mongo.db.employee_entry.find().limit(10)\n\nand see if it solves the problem.\n", "OK, I'm sorry for that msg earlier\ntry this:\n{% for key,value in emp_list %}\n<tr>\n <th scope=\"row\">{{loop.index}}</th>\n <td>{{value}}</td>\n <td>{{value}}</td>\n <td>{{value}}</td>\n</tr>\n{% endfor %}\n\nwhere loop.index is used for incrementing you can search it's use\nI am 100% sure that just pasting it without editing it will not work.\nBut what i am actually trying to say is try using key,value function of for loops in python(jinja2)\ntry edit this so that it can fit in your code\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "mongodb", "python" ]
stackoverflow_0048941101_flask_mongodb_python.txt
Q: Deployment error from Vercel having dependencies of Node 16
I had a working project on Node 16, and now that I am deploying it on Node 18 some of the dependencies cause errors.
A: You can run npm install --legacy-peer-deps to ignore the version incompatibility among peer dependencies.
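Besides the --legacy-peer-deps workaround, a more direct option is to pin the build to the Node major version the dependencies were written for; on Vercel the Node.js version can be selected in the project settings or via the engines field of package.json. The snippet below is a sketch to adapt, not a guaranteed fix for every dependency conflict.
{
  "engines": {
    "node": "16.x"
  }
}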
deployment error from vercel having dependencies of node 16v
I had work project on node 16v and now I am deploying it node18 havin cause some dependencies
[ "can run the npm install --legacy-peer-deps to ignore the version incompatibility among other dependencies.\n" ]
[ 0 ]
[]
[]
[ "frontend", "javascript", "reactjs" ]
stackoverflow_0074656603_frontend_javascript_reactjs.txt
Q: 1 button, 2 activities in Android Studio using Java
How do I open a different activity with the same button I clicked earlier? For example, I click button A and it goes to Activity 1; then I go back home, click button A again, and it must go to Activity 2. I tried putting an id on the button in XML and calling a different activity from its Java code.
A: If the button is in HomeActivity, let the launcher activity pass an extra key to HomeActivity, and pass another extra key in onBackPressed of the activity you navigate to after clicking the button. Then set the OnClickListener according to the String passed by those activities.
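A minimal sketch of that idea in Java is shown below; the extra key name, the target activities and the flag values are made-up placeholders, and a persisted flag (for example SharedPreferences) would be needed if the state must survive the activity being recreated.
// Illustrative sketch: route one button to two activities based on an Intent extra.
import android.content.Intent;
import android.os.Bundle;
import android.widget.Button;
import androidx.appcompat.app.AppCompatActivity;

public class HomeActivity extends AppCompatActivity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_home);

        // "came_from_activity_one" is a hypothetical extra set by the previous screen.
        boolean visitedFirst = getIntent().getBooleanExtra("came_from_activity_one", false);

        Button buttonA = findViewById(R.id.button_a);
        buttonA.setOnClickListener(v -> {
            Class<?> target = visitedFirst ? ActivityTwo.class : ActivityOne.class;
            startActivity(new Intent(HomeActivity.this, target));
        });
    }
}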
1 Button 2 activity in android studio using java
How do I open another activity with the same button I clicked earlier? For example, I click button A and goes to Activity 1 and I click home and click button A again and must go to Activity 2. I tried putting id on button in xml and call different activity on its java.
[ "Imagine if the button is from HomeActivity,let the launcher activity pass an ExtraKey to the HomeActivity and again another ExtraKey at the OnBackPressed of the Activity you are navigating to after clicking the button.Set the onClickListner according to the String passed by those activities.\n" ]
[ 0 ]
[]
[]
[ "android_studio", "java", "xml" ]
stackoverflow_0074484418_android_studio_java_xml.txt
Q: How to make Django render URL dispatcher from HTML in Pandas column, instead of forwarding raw HTML?
I want to render a pandas dataframe in HTML, in which one column has URL-dispatched links to other pages. If I try to render this HTML, it just keeps the raw HTML instead of converting the URLs:
utils.py
import pandas as pd

df = pd.DataFrame(["2022-007", "2022-008", "2022-111", "2022-222", "2022-555", "2022-151"], columns=["column_of_interest"])
df["column_of_interest"] = df['column_of_interest'].apply(lambda x: '''<a href="{{% url 'columndetails' {0} %}}">{0}</a>'''.format(x))
df_html = generate_html(df)
context={"df" : df_html}

def generate_html(dataframe: pd.DataFrame):
    # get the table HTML from the dataframe
    table_html = dataframe.to_html(table_id="table", escape=False)
    # construct the complete HTML with jQuery Data tables
    # You can disable paging or enable y scrolling on lines 20 and 21 respectively
    html = f"""
    {table_html}
    <script src="https://code.jquery.com/jquery-3.6.0.slim.min.js" integrity="sha256-u7e5khyithlIdTpu22PHhENmPcRdFiHRjhAuHcs05RI=" crossorigin="anonymous"></script>
    <script type="text/javascript" src="https://cdn.datatables.net/1.11.5/js/jquery.dataTables.min.js"></script>
    <script>
    $(document).ready( function () {{
        $('#table').DataTable({{
            // paging: false,
            // scrollY: 400,
        }});
    }});
    </script>
    """
    # return the html
    return html

views.py
def column(request):
    context = get_context(request)
    return render(request, "database/column.html", context)

def columndetails(request, column_of_interest):
    return render(request, "/columndetails.html")

urls.py
urlpatterns = [
    path('columndetails/<str:column_of_interest>/', views.labrequest_details, name="columndetails")]

toprocess.html
{% extends "database/layout.html" %}
{% load static %}
{% block body %}
<link href="https://cdn.datatables.net/1.11.5/css/jquery.dataTables.min.css" rel="stylesheet">
<br />
<div style="float: left;" class="container" id="labrequestoverview">
    {{ df|safe }}
</div>

Everything looks normal and the HTML is rendered almost as it should be; however, the template tag is not being rendered by Django:
Request URL: http://127.0.0.1:8000/%7B%25%20url%20'columndetails'%202022-007%25%7D
The current path, {% url 'columndetails' 2022-007%}, didn't match any of these.
Is it possible to have Django render this HTML as intended and not just forward it as raw HTML?

A: In your view, you cannot use {% url '' %}.
To resolve a URL dynamically in your utils.py, use build_absolute_uri instead. You can also combine this with reverse() like so (note: you will have to pass your request object):
request.build_absolute_uri(reverse('columndetails', args=('2022-007', )))
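To make the answer concrete, here is a sketch of how utils.py could build the links server-side; it assumes 'columndetails' is the URL name from urls.py and that the view passes request down to this helper. Relative URLs from reverse() are usually enough unless the table is consumed outside the site, in which case build_absolute_uri applies.
# Illustrative sketch: resolve the link server-side instead of embedding template syntax.
from django.urls import reverse

def linkify(value, request=None):
    url = reverse('columndetails', args=(value,))      # e.g. /columndetails/2022-007/
    if request is not None:
        url = request.build_absolute_uri(url)          # absolute form, per the answer
    return '<a href="{0}">{1}</a>'.format(url, value)

df["column_of_interest"] = df["column_of_interest"].apply(linkify)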
How to make Django render URL dispatcher from HTML in Pandas column, instead of forwarding raw HTML?
I want to render a pandas dataframe in HTML, in which 1 column has URL dispatched links to other pages. If I try to render this HTML, it just keeps raw HTML, instead of converting the URLS: utils.py import pandas as pd df = pd.DataFrame(["2022-007", "2022-008", "2022-111", "2022-222", "2022-555", "2022-151"], columns=["column_of_interest"]) df["column_of_interest"] = df['column_of_interest'].apply(lambda x: '''<a href="{{% url 'columndetails' {0} %}}">{0}</a>'''.format(x) df_html = generate_html(df) context={"df" : df_html} def generate_html(dataframe: pd.DataFrame): # get the table HTML from the dataframe table_html = dataframe.to_html(table_id="table", escape=False) # construct the complete HTML with jQuery Data tables # You can disable paging or enable y scrolling on lines 20 and 21 respectively html = f""" {table_html} <script src="https://code.jquery.com/jquery-3.6.0.slim.min.js" integrity="sha256-u7e5khyithlIdTpu22PHhENmPcRdFiHRjhAuHcs05RI=" crossorigin="anonymous"></script> <script type="text/javascript" src="https://cdn.datatables.net/1.11.5/js/jquery.dataTables.min.js"></script> <script> $(document).ready( function () {{ $('#table').DataTable({{ // paging: false, // scrollY: 400, }}); }}); </script> """ # return the html return html views.py def column(request): context = get_context(request) return render(request, "database/column.html", context) def columndetails(request, column_of_interest): return render(request, "/columndetails.html") urls.py urlpatterns = [ path('columndetails/<str:column_of_interest>/', views.labrequest_details, name="columndetails")] toprocess.html {% extends "database/layout.html" %} {% load static %} {% block body %} <link href="https://cdn.datatables.net/1.11.5/css/jquery.dataTables.min.css" rel="stylesheet"> <br /> <div style="float: left;" class="container" id="labrequestoverview"> {{ df|safe }} </div> Everything shows normal, and the HTML is rendered almost as should, however the HTML is not being rendered by Django: Request URL: http://127.0.0.1:8000/%7B%25%20url%20'columndetails'%202022-007%25%7D The current path, {% url 'columndetails' 2022-007%}, didn’t match any of these. Is it possible to have Django render this HTML as it intended and not just forward it as raw HTML?
[ "In your view, you cannot use {% url '' %}.\nTo resolve a URL dynamically in your utils.py, use build_absolute_uri instead. You can also combine this with reverse() like so (note: you will have to pass your request object):\nrequest.build_absolute_uri(reverse('columndetails', args=('2022-007', )))\n\n" ]
[ 1 ]
[]
[]
[ "django", "django_urls", "html", "python", "url" ]
stackoverflow_0074656451_django_django_urls_html_python_url.txt
Q: Saving changes to a dataframe after editing in a GUI
I wrote some code that extracts data from a csv file and displays it in a GUI (when data is already present). Now I need to find a way so that if I change or edit data in the GUI, the value is replaced in the csv file as well.
This part here is for extracting the data from the file (which works great):
def updatetext(self):
    """adds information extracted from database already provided"""
    df_subj = Content.extract_saved_data(self.date)
    self.lineEditFirstDiagnosed.setText(str(df_subj["First_Diagnosed_preop"][0])) \
        if str(df_subj["First_Diagnosed_preop"][0]) != 'nan' else self.lineEditFirstDiagnosed.setText('')
    self.lineEditAdmNeurIndCheck.setText(str(df_subj['Admission_preop'][0])) \
        if str(df_subj["Admission_preop"][0]) != 'nan' else self.lineEditAdmNeurIndCheck.setText('')
    self.DismNeurIndCheckLabel.setText(str(df_subj['Dismissal_preop'][0])) \
        if str(df_subj["Dismissal_preop"][0]) != 'nan' else self.DismNeurIndCheckLabel.setText('')
    self.lineEditOutpatientContact.setText(str(df_subj['Outpat_Contact_preop'][0])) \
        if str(df_subj["Outpat_Contact_preop"][0]) != 'nan' else self.lineEditOutpatientContact.setText('')
    self.lineEditNChContact.setText(str(df_subj['nch_preop'][0])) \
        if str(df_subj["nch_preop"][0]) != 'nan' else self.lineEditNChContact.setText('')
    self.lineEditDBSconferenceDate.setText(str(df_subj['DBS_Conference_preop'][0])) \
        if str(df_subj["DBS_Conference_preop"][0]) != 'nan' else self.lineEditDBSconferenceDate.setText('')

Now for updating changes I started writing this:
def onClickedSaveReturn(self):
    """closes GUI and returns to calling (main) GUI"""
    df_subj = {k: [] for k in Content.extract_saved_data(self.date).keys()}  # extract empty dictionary
    df_subj["First_Diagnosed_preop"] = self.lineEditFirstDiagnosed.text()
    df_subj['Admission_preop'] = self.lineEditAdmNeurIndCheck.text()
    df_subj['Dismissal_preop'] = self.DismNeurIndCheckLabel.text()
    df_subj['Outpat_Contact_preop'] = self.lineEditOutpatientContact.text()
    df_subj['nch_preop'] = self.lineEditNChContact.text()
    df_subj['DBS_Conference_preop'] = self.lineEditDBSconferenceDate.text()
    df_subj["H&Y_preop"] = self.hy.text()

But I'm not sure how to actually achieve the updating/replacing.
GUI
This is what the GUI looks like. If I change the year, for example to 1997, it should be updated in my csv.
Hope someone can help me. Thank you!!
Expecting to get updated data in my csv file.

A: I don't know if I understand exactly what you need, but I think after the modifications you should use the to_csv() function to export the changes to the CSV file, and connect it, for example, with a Button click.
In case you cannot find the file after saving, note that the to_csv() function usually saves the file in the root directory.
# in case you have a save Button in your GUI
self.save_Button.clicked.connect(self.SaveChanges)

def SaveChanges(self):
    df_subj.to_csv("file_ame.csv", index=False)
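As an illustration of how the pieces could fit together, the sketch below turns onClickedSaveReturn into something that writes the edited values back to a CSV. The file path and the single-row assumption are placeholders, and it assumes Content.extract_saved_data() returns a DataFrame (as the indexing in updatetext suggests).
# Illustrative sketch: collect widget values into the DataFrame and overwrite the CSV.
import pandas as pd

def onClickedSaveReturn(self):
    """saves edited values back to the CSV, then closes the GUI"""
    df_subj = Content.extract_saved_data(self.date)              # existing row(s) as a DataFrame
    df_subj.loc[0, "First_Diagnosed_preop"] = self.lineEditFirstDiagnosed.text()
    df_subj.loc[0, "Admission_preop"] = self.lineEditAdmNeurIndCheck.text()
    df_subj.loc[0, "Dismissal_preop"] = self.DismNeurIndCheckLabel.text()
    df_subj.loc[0, "Outpat_Contact_preop"] = self.lineEditOutpatientContact.text()
    df_subj.loc[0, "nch_preop"] = self.lineEditNChContact.text()
    df_subj.loc[0, "DBS_Conference_preop"] = self.lineEditDBSconferenceDate.text()
    df_subj.to_csv("subject_data.csv", index=False)              # path is an assumption
    self.close()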
Saving changes to a dataframe after editing in a GUI
I wrote a code, that extracts data from a csv file and displays it in a GUI (when data is already present). No i need to find a way, that if I change or edit Data in the GUI, the value should be replaced in the csv file as well. This part here is for extracting the data form the file (which works great): ` def updatetext(self): """adds information extracted from database already provided""" df_subj = Content.extract_saved_data(self.date) self.lineEditFirstDiagnosed.setText(str(df_subj["First_Diagnosed_preop"][0])) \ if str(df_subj["First_Diagnosed_preop"][0]) != 'nan' else self.lineEditFirstDiagnosed.setText('') self.lineEditAdmNeurIndCheck.setText(str(df_subj['Admission_preop'][0])) \ if str(df_subj["Admission_preop"][0]) != 'nan' else self.lineEditAdmNeurIndCheck.setText('') self.DismNeurIndCheckLabel.setText(str(df_subj['Dismissal_preop'][0])) \ if str(df_subj["Dismissal_preop"][0]) != 'nan' else self.DismNeurIndCheckLabel.setText('') self.lineEditOutpatientContact.setText(str(df_subj['Outpat_Contact_preop'][0])) \ if str(df_subj["Outpat_Contact_preop"][0]) != 'nan' else self.lineEditOutpatientContact.setText('') self.lineEditNChContact.setText(str(df_subj['nch_preop'][0])) \ if str(df_subj["nch_preop"][0]) != 'nan' else self.lineEditNChContact.setText('') self.lineEditDBSconferenceDate.setText(str(df_subj['DBS_Conference_preop'][0])) \ if str(df_subj["DBS_Conference_preop"][0]) != 'nan' else self.lineEditDBSconferenceDate.setText('') ` Now for updating changes i started writing this: def onClickedSaveReturn(self): """closes GUI and returns to calling (main) GUI""" df_subj = {k: [] for k in Content.extract_saved_data(self.date).keys()} # extract empty dictionary df_subj["First_Diagnosed_preop"] = self.lineEditFirstDiagnosed.text() df_subj['Admission_preop'] = self.lineEditAdmNeurIndCheck.text() df_subj['Dismissal_preop'] = self.DismNeurIndCheckLabel.text() df_subj['Outpat_Contact_preop'] = self.lineEditOutpatientContact.text() df_subj['nch_preop'] = self.lineEditNChContact.text() df_subj['DBS_Conference_preop'] = self.lineEditDBSconferenceDate.text() df_subj["H&Y_preop"] = self.hy.text() But im not sure how to actually achieve the updating/replacing. GUI This is what the GUI looks like. If i change the year now for example to 1997, it should be updated in my csv Hope someone can help me. Thank you!! Expecting to get updated data in my csv file.
[ "I don't know if I understand exactly what you need, but I think after the modifications you should use the to_csv() function to export the changes to the CSV file, and connect it for example with a Button click.\nIn case you can not find the file after saving, you should note that the to_csv() function usually saves the files in the root directory.\n# in case you have a save Button in your GUI\nself.save_Button.clicked.connect(self.SaveChanges)\n\ndef SaveChanges(self):\n df_subj.to_csv(\"file_ame.csv\", index=False)\n\n" ]
[ 0 ]
[]
[]
[ "csv", "pyqt5", "python" ]
stackoverflow_0074654646_csv_pyqt5_python.txt
Q: Binary Search using Geometric Means
When doing a binary search on an array, we initialise an upper h and lower l bound and test the mid-point (the arithmetic mean) half-way between them, m := (h+l)/2. Then we either change the upper or lower bound and iterate to convergence.
We can use a similar search strategy on the (unbounded) real numbers (well, their floating point approximation). We could use a tolerance of convergence to terminate the search. If the search range is on the positive real numbers (0<l<h), we could instead take the geometric mean m := (hl)^.5. My question is, when is the geometric mean faster (taking fewer iterations to converge)?
This cropped up when I tried a binary search for a continuous variable where the initial bounds were very wide and it took many iterations to converge. I could use an exponential search before a binary search, but I got curious about this idea.
I tried to get a feel for this by picking a million random (floating point) numbers between 2 and an initial h that I picked. I kept the initial l=1 fixed and the target had to be within a tolerance of 10^-8. I varied h between 10^1 and 10^50. The arithmetic mean had fewer iterations in about 60-70% of cases. But the geometric mean is skewed (below the arithmetic mean). So when I restricted the targets to be less than the geometric mean of the initial bounds sqrt(lh) (still keeping l=1), the geometric mean was almost always faster (>99%) for large h>10^10. So it seems that both h and the ratio target / h could be involved in the number of iterations.
Here's a simple Julia code to demonstrate:
function geometric_search(target, high, low)
    current = sqrt(high * low)
    converged = false
    iterations = 0
    eps = 1e-8
    while !converged
        if abs(current - target) < eps
            converged = true
        elseif current < target
            low = current
        elseif current > target
            high = current
        end
        current = sqrt(high * low)
        iterations += 1
    end
    return iterations
end

target = 3.0
low = 1.0
high = 1e10
println(geometric_search(target, high, low))
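For a side-by-side comparison, a sketch of the arithmetic-mean counterpart with the same stopping rule is given below, so the two iteration counts are directly comparable. As a rough back-of-the-envelope estimate, the arithmetic version needs about log2((high-low)/eps) steps while the geometric version needs about log2(target * log(high/low) / eps), which is why the geometric mean tends to win for targets that are small relative to the upper bound.
# Illustrative sketch: arithmetic-mean version with the same stopping rule, for comparison.
function arithmetic_search(target, high, low)
    current = (high + low) / 2
    iterations = 0
    eps = 1e-8
    while abs(current - target) >= eps
        if current < target
            low = current
        else
            high = current
        end
        current = (high + low) / 2
        iterations += 1
    end
    return iterations
end

target = 3.0
println(arithmetic_search(target, 1e10, 1.0), " vs ", geometric_search(target, 1e10, 1.0))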
Binary Search using Geometric Means
When doing a binary search on an array, we initialise an upper h and lower l bound and test the mid-point (the arithmetic mean) half-way between them m := (h+l)/2. Then we either change the upper or lower bound and iterate to convergence. We can use a similar search strategy on the (unbounded) real numbers (well their floating point approximation). We could use a tolerance of convergence to terminate the search. If the search range is on the positive real numbers (0<l<h), we could instead take the geometric mean m :=(hl)^.5. My question is, when is the geometric mean faster (taking few iterations to converge)? This cropped up when I tried a binary search for a continuous variable where the initial bounds were very wide and it took many iterations to converge. I could use an exponential search before a binary search, but I got curious about this idea. I tried to get a feel for this by picking a million random (floating point) numbers between 2 and an initial h that I picked. I kept the initial l=1 fixed and the target had to be within a tolerance of 10^-8. I varied h between 10^1 and 10^50. The arithmetic mean had fewer iterations in about 60-70% of cases. But the geometric mean is skewed (below the arithmetic mean). So when I restricted the targets to be less than the geometric mean of the initial bounds sqrt(lh) (still keeping l=1) the geometric mean was almost always faster (>99%) for large h>10^10. So it seems that the both h and the ratio of target / h could be involved in the number of iterations. Here's a simple Julia code to demonstrate: function geometric_search(target, high, low) current = sqrt(high * low) converged = false iterations = 0 eps = 1e-8 while !converged if abs(current - target) < eps converged = true elseif current < target low = current elseif current > target high = current end current = sqrt(high * low) iterations += 1 end return iterations end target = 3.0 low = 1.0 high = 1e10 println(geometric_search(target, high, low))
[ "The optimal worst-case time for a binary search is achieved when each decision divides the \"possible answers\" in half.\nFor searching in an array, it is therefore optimal in this respect to test the center of the possible range every time.\nIf you're not searching in an array, then it depends on your stopping criteria. In your case, you are searching for, for example, a square root among all possible floating-point values.\nIf your stopping criteria is an absolute difference, which is what you wrote, then choosing the arithmetic mean of the possible range is still best in the worst case.\nIf your stopping criteria is a percentage difference, however, then there are many more \"small\" answers than large ones (you know what I mean). This is using an absolute difference criteria while searching for the log of the answer. In this case, it is the geometric mean that is optimal in the worst case.\nAs mentioned above, you wrote an absolute difference stopping condition, but this is confused by the details of the floating point number representation. A floating point number is represented as 2em, where e and m are integers of limited precision. There are, therefore, more small answers than large ones, even if you code an absolute difference stopping condition, so a geometric mean can again be better.\nThis explains why the geometric mean may be better for your specific implementation... but I'm afraid that your implementation will fail to converge for certain very large inputs.\n" ]
[ 1 ]
[]
[]
[ "algorithm", "binary_search", "geometric_mean", "julia", "search" ]
stackoverflow_0074649679_algorithm_binary_search_geometric_mean_julia_search.txt
Q: How to make a chrome extension listen to mongodb document changes? I am building a link syncing chrome extension. I managed to push the links to mongodb database using a custom made REST API. When a document changes in mongodb database, I am able to listen to those changes using change stream in the custom made REST API. However, I am unable to pass those changes to the chrome extension, when a document change occurs in the database. Is there any way, to make a chrome extension listen to changes in mongodb database and get those changes as soon as the change occurs? A: It is possible to make a Chrome extension listen for changes in a MongoDB database, but this will require some additional work on your part. One way to do this would be to use the WebSocket API to create a persistent connection between your Chrome extension and your custom REST API. Your REST API can then use the MongoDB change stream to listen for changes in the database and pass those changes back to the Chrome extension over the WebSocket connection. This way, the Chrome extension can receive real-time updates from the database as soon as changes occur. Alternatively, you could use polling to periodically check for changes in the database and update the extension accordingly. This approach would not provide real-time updates, but it would be easier to implement and may be sufficient for your needs, depending on the specific requirements of your extension.
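As a rough sketch of the client side of the WebSocket approach: a Manifest V3 background service worker can hold the connection, with the caveat that the worker may be suspended when idle, so a reconnect strategy is needed. The server URL and message shape below are assumptions.
// background.js (MV3 service worker) - illustrative sketch only.
let socket;

function connect() {
  socket = new WebSocket('wss://your-api.example.com/changes'); // placeholder URL

  socket.onmessage = (event) => {
    const change = JSON.parse(event.data);        // shape depends on your REST API
    chrome.storage.local.set({ lastChange: change });
    chrome.runtime.sendMessage({ type: 'db-change', payload: change });
  };

  socket.onclose = () => setTimeout(connect, 5000); // naive reconnect
}

connect();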
How to make a chrome extension listen to mongodb document changes?
I am building a link syncing chrome extension. I managed to push the links to mongodb database using a custom made REST API. When a document changes in mongodb database, I am able to listen to those changes using change stream in the custom made REST API. However, I am unable to pass those changes to the chrome extension, when a document change occurs in the database. Is there any way, to make a chrome extension listen to changes in mongodb database and get those changes as soon as the change occurs?
[ "It is possible to make a Chrome extension listen for changes in a MongoDB database, but this will require some additional work on your part. One way to do this would be to use the WebSocket API to create a persistent connection between your Chrome extension and your custom REST API. Your REST API can then use the MongoDB change stream to listen for changes in the database and pass those changes back to the Chrome extension over the WebSocket connection. This way, the Chrome extension can receive real-time updates from the database as soon as changes occur.\nAlternatively, you could use polling to periodically check for changes in the database and update the extension accordingly. This approach would not provide real-time updates, but it would be easier to implement and may be sufficient for your needs, depending on the specific requirements of your extension.\n" ]
[ 0 ]
[]
[]
[ "api", "google_chrome_extension", "javascript", "mongodb", "node.js" ]
stackoverflow_0074656619_api_google_chrome_extension_javascript_mongodb_node.js.txt
Q: How to group by similar values with pg_trgm
I have the following table
id    error
----  ----------------------------------------
1     Error 1234eee5, can not write to disk
2     Error 83457qwe, can not write to disk
3     Error 72344ee, can not write to disk
4     Fatal barier breach on object 72fgsff
5     Fatal barier breach on object 7fasdfa
6     Fatal barier breach on object 73456xcc5

I want to be able to get a result that counts by similarity, where a similarity of > 80% means two errors are equal. I've been using the pg_trgm extension, and its similarity function works perfectly for me; the only thing I can't figure out is how to produce the grouping result below.
Error                                    Count
---------------------------------------  ------
Error 1234eee5, can not write to disk,   3
Fatal barier breach on object 72fgsff,   3
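One way to get the requested counts directly with pg_trgm is a self-join that picks the lowest id among mutually similar rows as the group key. This is only a sketch: it is quadratic in the number of rows, the 0.6 threshold is an arbitrary starting point to tune, and the table/column names are taken from the question.
-- Illustrative sketch: group rows by the smallest id of any sufficiently similar row.
CREATE EXTENSION IF NOT EXISTS pg_trgm;

SELECT min(e.error) AS error, count(*) AS count
FROM (
    SELECT e1.*,
           (SELECT min(e2.id)
            FROM errors e2
            WHERE similarity(e2.error, e1.error) > 0.6) AS group_id
    FROM errors e1
) e
GROUP BY e.group_id;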
How to group by similar values with pg_trgm
I have the following table id error - ---------------------------------------- 1 Error 1234eee5, can not write to disk 2 Error 83457qwe, can not write to disk 3 Error 72344ee, can not write to disk 4 Fatal barier breach on object 72fgsff 5 Fatal barier breach on object 7fasdfa 6 Fatal barier breach on object 73456xcc5 I want to be able to get a result that counts by similarity, where similarity of > 80% means two errors are equal. I've been using pg_trgm extension, and its similarity function works perfectly for me, the only thing I can figure out how to produce the grouping result below. Error Count ------------------------------------- ------ Error 1234eee5, can not write to disk, 3 Fatal barier breach on object 72fgsff, 3
[ "Basically you could join a table with itself to find similar strings, however this approach will end in a terribly slow query on a larger dataset. Also, using similarity() may cause inaccuracy in some cases (you need to find the appropriate limit value).\nYou should try to find patterns. For example, if all variable words in strings begin with a digit, you can mask them using regexp_replace():\nselect id, regexp_replace(error, '\\d\\w+', 'xxxxx') as error\nfrom errors;\n\n id | error \n----+-------------------------------------\n 1 | Error xxxxx, can not write to disk\n 2 | Error xxxxx, can not write to disk\n 3 | Error xxxxx, can not write to disk\n 4 | Fatal barier breach on object xxxxx\n 5 | Fatal barier breach on object xxxxx\n 6 | Fatal barier breach on object xxxxx\n(6 rows) \n\nso you can easily group the data by error message:\nselect regexp_replace(error, '\\d\\w+', 'xxxxx') as error, count(*)\nfrom errors\ngroup by 1;\n\n error | count \n-------------------------------------+-------\n Error xxxxx, can not write to disk | 3\n Fatal barier breach on object xxxxx | 3\n(2 rows)\n\nThe above query is only an example as the specific solution depends on the data format.\nUsing pg_trgm\nThe solution based on the OP's idea (see the comments below). The limit 0.8 for similarity() is certainly too high. It seems that it should be somewhere about 0.6.\nThe table for unique errors (I've used a temporary table but it also be a regular one of course):\ncreate temp table if not exists unique_errors(\n id serial primary key, \n error text, \n ids int[]);\n\nThe ids column is to store id of rows of the base table which contain similar errors.\ndo $$\ndeclare\n e record;\n found_id int;\nbegin\n truncate unique_errors;\n for e in select * from errors loop\n select min(id)\n into found_id\n from unique_errors u\n where similarity(u.error, e.error) > 0.6;\n if found_id is not null then\n update unique_errors\n set ids = ids || e.id\n where id = found_id;\n else\n insert into unique_errors (error, ids)\n values (e.error, array[e.id]);\n end if;\n end loop;\nend $$;\n\nThe final results:\nselect *, cardinality(ids) as count\nfrom unique_errors;\n\n id | error | ids | count \n----+---------------------------------------+---------+-------\n 1 | Error 1234eee5, can not write to disk | {1,2,3} | 3\n 2 | Fatal barier breach on object 72fgsff | {4,5,6} | 3\n(2 rows)\n\n", "For this particular case you could just group by left(error, 5), which would lead to two groups, one containing all the strings starting with Error, the other group containing all the strings starting with Fatal. This criteria would have to be updated if you are planning to add more error types.\n" ]
[ 1, 0 ]
[]
[]
[ "pg_trgm", "postgresql" ]
stackoverflow_0047212230_pg_trgm_postgresql.txt
Q: No matches for kind "Gateway" and "VirtualService"
I am using Docker Desktop version 3.6.0, which has Kubernetes 1.21.3. I am following this tutorial to get started on Istio: https://istio.io/latest/docs/setup/getting-started/
Istio is properly installed as per the instructions. Now whenever I try to apply the Istio configuration by issuing the command kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml, I get the following error:
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3"
unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3"

I checked on the internet and found that the Gateway and VirtualService resources are missing. If I perform kubectl get crd I get no resources found.
Content of bookinfo-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: bookinfo
spec:
  hosts:
  - "*"
  gateways:
  - bookinfo-gateway
  http:
  - match:
    - uri:
        exact: /productpage
    - uri:
        prefix: /static
    - uri:
        exact: /login
    - uri:
        exact: /logout
    - uri:
        prefix: /api/v1/products
    route:
    - destination:
        host: productpage
        port:
          number: 9080
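A quick way to confirm whether the Istio CRDs are actually registered in the cluster (they should exist after a successful istioctl install) is something like the following; if these return nothing, the custom resource definitions were never applied.
# Illustrative check: these should list the Istio CRDs once installation succeeded.
kubectl get crd | grep istio.io
kubectl api-resources --api-group=networking.istio.io
kubectl get crd gateways.networking.istio.io virtualservices.networking.istio.io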
No matches for kind"gateway" and "virtualservice"
I am using Docker Desktop version 3.6.0 which has Kubernetes 1.21.3. I am following this tutorial to get started on Istio https://istio.io/latest/docs/setup/getting-started/ Istio is properly installed as per the instructions. Now whenever i try to apply the Istio configuration by issuing the command kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml. I get the following error unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "Gateway" in version "networking.istio.io/v1alpha3" unable to recognize "samples/bookinfo/networking/bookinfo-gateway.yaml": no matches for kind "VirtualService" in version "networking.istio.io/v1alpha3" I checked in internet and found that the Gateway and VirtualService resources are missing. If i perform kubectl get crd i get no resources found Content of bookinfo-gatway.yaml apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: bookinfo-gateway spec: selector: istio: ingressgateway # use istio default controller servers: - port: number: 80 name: http protocol: HTTP hosts: - "*" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: bookinfo spec: hosts: - "*" gateways: - bookinfo-gateway http: - match: - uri: exact: /productpage - uri: prefix: /static - uri: exact: /login - uri: exact: /logout - uri: prefix: /api/v1/products route: - destination: host: productpage port: number: 9080
[ "The CRDs for istio should be installed as part of the istioctl install process, I'd recommend re-running the install if you don't have them available.\n>>> ~/.istioctl/bin/istioctl install --set profile=demo -y\n✔ Istio core installed\n✔ Istiod installed\n✔ Egress gateways installed\n✔ Ingress gateways installed\n✔ Installation complete\n\nkubectl get po -n istio-system should look like\n>>> kubectl get po -n istio-system\nNAME READY STATUS RESTARTS AGE\nistio-egressgateway-7ddb45fcdf-ctnp5 1/1 Running 0 3m20s\nistio-ingressgateway-f7cdcd7dc-zdqhg 1/1 Running 0 3m20s\nistiod-788ff675dd-9p75l 1/1 Running 0 3m32s\n\nOtherwise your initial install has gone wrong somewhere.\n", "You can apply CRD to your cluster without using istioctl install from https://github.com/istio/istio/blob/master/manifests/charts/base/crds/crd-all.gen.yaml\nwith\nkubectl apply -f ./crd-all.gen.yaml\n\n" ]
[ 2, 0 ]
[]
[]
[ "docker_desktop", "istio", "kubernetes" ]
stackoverflow_0069461513_docker_desktop_istio_kubernetes.txt
Q: Azure Blob Storage upload failing to upload multiple files
I am using blockBlobClient to upload multiple files. It works when I have a small array of files but not when I have a lot of files (~50-100+ files). I am assuming that it is crossing the connection limit.
if (!files.length) return []
try {
    console.log('length of files: ', files.length)
    files.map(async file => {
        return await uploadToBlob(file);
    })

Then I have this uploadToBlob function which uploads to blob storage
const containerClient = await createBlobContainer(
    blobServiceClient,
    containerName
);
const blockBlobClient = containerClient.getBlockBlobClient(
    blobName
);
const uploadBlobResponse = await blockBlobClient.upload(
    content,
    Buffer.byteLength(content)
);
console.log(`Upload block blob ${blobName} successfully. Request id: `, uploadBlobResponse.requestId);
return uploadBlobResponse;
};

And the error is 500 Server encountered an internal error. Please try again after some time
const error = new RestError(
RestError: Server encountered an internal error. Please try again after some time.
RequestId:e78657e8-6ad9-452e-a7b3-6361a6c4972d
Time:2022-11-02T13:06:02.2715094Z
    at handleErrorResponse (/usr/src/app/node_modules/@azure/core-http/src/policies/deserializationPolicy.ts:274:17)
    at /usr/src/app/node_modules/@azure/core-http/src/policies/deserializationPolicy.ts:179:47
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
    at async StorageRetryPolicy.attemptSendRequest (/usr/src/app/node_modules/@azure/storage-blob/src/policies/StorageRetryPolicy.ts:169:18)
    at async StorageClientContext.sendOperationRequest (/usr/src/app/node_modules/@azure/core-http/src/serviceClient.ts:521:23)
    at async BlockBlobClient.upload (/usr/src/app/node_modules/@azure/storage-blob/src/Clients.ts:3824:14) {
  code: 'InternalError',
  statusCode: 500,

Any help would be highly appreciated. I tried to do some kind of chunking but the issue still happens.

A: As an alternative to the container client we can use azcopy.
Azcopy is a command line tool provided by Azure for data transfer. You can download it here. It will provide you with a zip file; extract it into a folder and add the address of the extracted folder to the PATH variable if you are using Windows.
Then you can run the following command
azcopy login

This will prompt you to log in. Here make sure you have the Storage Blob Data Contributor role added to your account.
Then you can run the following command
azcopy copy <path to your folder or file> "<url of your container>"

Refer to this MS DOC on azcopy
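On the original question itself, note that files.map(async ...) never awaits the uploads, so every request starts at once and failures surface as unhandled rejections. A common mitigation, sketched below with an arbitrary batch size, is to bound how many uploads are in flight; uploadToBlob is the function from the question.
// Illustrative sketch: upload in small batches so only a few requests run concurrently.
async function uploadAllInBatches(files, batchSize = 8) {
  const results = [];
  for (let i = 0; i < files.length; i += batchSize) {
    const batch = files.slice(i, i + batchSize);
    // Wait for the whole batch before starting the next one.
    results.push(...await Promise.all(batch.map(file => uploadToBlob(file))));
  }
  return results;
}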
Azure Blob Storage upload failing to upload multiple files
I am using blockBlobClient to upload multiple files. It works when I have a small array of files but not when I have a lot of files (~50-100+ files). I am assuming that it is crossing the connection limit. if (!files.length) return [] try { console.log('length of files: ', files.length) files.map(async file=> { return await uploadToBlob(file); }) Then I have this uploadToBlob function which uploads to blob storage const containerClient = await createBlobContainer( blobServiceClient, containerName ); const blockBlobClient = containerClient.getBlockBlobClient( blobName ); const uploadBlobResponse = await blockBlobClient.upload( content, Buffer.byteLength(content) ); console.log(`Upload block blob ${blobName} successfully. Request id: `, uploadBlobResponse.requestId); return uploadBlobResponse; }; And the error is 500 Server encountered an internal error. Please try again after some time const error = new RestError( RestError: Server encountered an internal error. Please try again after some time. RequestId:e78657e8-6ad9-452e-a7b3-6361a6c4972d Time:2022-11-02T13:06:02.2715094Z at handleErrorResponse (/usr/src/app/node_modules/@azure/core-http/src/policies/deserializationPolicy.ts:274:17) at /usr/src/app/node_modules/@azure/core-http/src/policies/deserializationPolicy.ts:179:47 at runMicrotasks (<anonymous>) at processTicksAndRejections (node:internal/process/task_queues:96:5) at async StorageRetryPolicy.attemptSendRequest (/usr/src/app/node_modules/@azure/storage-blob/src/policies/StorageRetryPolicy.ts:169:18) at async StorageClientContext.sendOperationRequest (/usr/src/app/node_modules/@azure/core-http/src/serviceClient.ts:521:23) at async BlockBlobClient.upload (/usr/src/app/node_modules/@azure/storage-blob/src/Clients.ts:3824:14) { code: 'InternalError', statusCode: 500, Any help would be highly appreciated I tried to do some kind of chunks but still the issue happens again.
[ "\nAn alternative to the use of the clientContainer we can use azcopy\n\nAzcopy is a command line tool provided by azure for data transfer. You can download it here. It will provide with you with a zip file extract this file in a folder and add the address of the extracted file in path variable if you are using windows.\n\nThen you can run the following command\n\n\nazcopy login\n\nThis will prompt you for login. Here make sure you have data contributor role added to your account.\n\nThen you can run the following command\n\naz copy <path to you folder or file> \"<url of your container>\"\n\nRefer this MS DOC on azcopy\n" ]
[ 0 ]
[]
[]
[ "azure", "azure_blob_storage", "nestjs", "node.js" ]
stackoverflow_0074289727_azure_azure_blob_storage_nestjs_node.js.txt
Q: image logo over TOC in Rmarkdown I know I can insert a logo or image at the top of HTML report rendered using knitr with in_header/before_body options output: html_document: includes: before_body: header.Rhtml My guess is: how to render logo over floating TOC? output: html_document: toc: true toc_float: true collapsed: false ?????? A: You got a few options here and I will outline two of them 1. Use the CSS pseudo element before The following snippet will add the stackoverflow logo just above the first TOC element and within the TOC box: <style> #TOC { background: url("https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.png?v=9c558ec15d8a"); background-size: contain; padding-top: 80px !important; background-repeat: no-repeat; } </style> Here you will have to adjust the padding-top definition depending on your logo. The result looks more or less like this: 2. Add a new DOM element inside the TOC column Another way is to use jQuery first in order to add a new DOM element that will contain the image: <script> $(document).ready(function() { $('#TOC').parent().prepend('<div id=\"nav_logo\"><img src=\"https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.png?v=9c558ec15d8a\"></div>'); }); </script> Here, when the document has finished loading, we select the element with the id TOC, then move up to its parent element (the column div) and then prepend a new div with the id nav_logo containing the image. Sounds more complicated than it actually is. Now we only have to edit the styles for this new div with some CSS again: <style> #nav_logo { width: 100%; margin-top: 20px; } </style> Here the resulting document looks like this: For details on CSS I would refer you to the big search engines which in turn will most probably refer you to https://www.w3schools.com. If I would have to pick, I would go with the second attempt. Pseudo elements are not always reliable across browsers. And it also looks better :) A: To enable a logo that floats with TOC during scrolling in solution (#2) from @Martin Schmelzer prepend nav_logo DOM element containing the image to TOC using jQuery: <script> $(document).ready(function() { $('#TOC').parent().prepend('<div id=\"nav_logo\"><img src=\"https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.png?v=9c558ec15d8a\"></div>'); }); </script> And then add position: fixed to nav_logo and margin-top: 100px to TOC in CSS: <style> #TOC { margin-top: 100px; } #nav_logo { position: fixed; width: 20%; margin-top: 20px; } </style> Adjust nav_logo width as necessary.
image logo over TOC in Rmarkdown
I know I can insert a logo or image at the top of HTML report rendered using knitr with in_header/before_body options output: html_document: includes: before_body: header.Rhtml My guess is: how to render logo over floating TOC? output: html_document: toc: true toc_float: true collapsed: false ??????
[ "You got a few options here and I will outline two of them\n1. Use the CSS pseudo element before\nThe following snippet will add the stackoverflow logo just above the first TOC element and within the TOC box:\n<style>\n#TOC {\n background: url(\"https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.png?v=9c558ec15d8a\");\n background-size: contain;\n padding-top: 80px !important;\n background-repeat: no-repeat;\n}\n</style>\n\nHere you will have to adjust the padding-top definition depending on your logo. The result looks more or less like this:\n\n2. Add a new DOM element inside the TOC column\nAnother way is to use jQuery first in order to add a new DOM element that will contain the image:\n<script>\n $(document).ready(function() {\n $('#TOC').parent().prepend('<div id=\\\"nav_logo\\\"><img src=\\\"https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.png?v=9c558ec15d8a\\\"></div>');\n });\n</script>\n\nHere, when the document has finished loading, we select the element with the id TOC, then move up to its parent element (the column div) and then prepend a new div with the id nav_logo containing the image. Sounds more complicated than it actually is.\nNow we only have to edit the styles for this new div with some CSS again:\n<style>\n#nav_logo {\n width: 100%;\n margin-top: 20px;\n}\n</style>\n\nHere the resulting document looks like this:\n\nFor details on CSS I would refer you to the big search engines which in turn will most probably refer you to https://www.w3schools.com.\nIf I would have to pick, I would go with the second attempt. Pseudo elements are not always reliable across browsers. And it also looks better :) \n", "To enable a logo that floats with TOC during scrolling in solution (#2) from @Martin Schmelzer prepend nav_logo DOM element containing the image to TOC using jQuery:\n<script>\n $(document).ready(function() {\n $('#TOC').parent().prepend('<div id=\\\"nav_logo\\\"><img src=\\\"https://cdn.sstatic.net/Sites/stackoverflow/company/img/logos/so/so-logo.png?v=9c558ec15d8a\\\"></div>');\n });\n</script>\n\nAnd then add position: fixed to nav_logo and margin-top: 100px to TOC in CSS:\n<style>\n#TOC {\n margin-top: 100px;\n}\n#nav_logo {\n position: fixed;\n width: 20%;\n margin-top: 20px;\n}\n</style>\n\nAdjust nav_logo width as necessary.\n" ]
[ 17, 0 ]
[]
[]
[ "html", "knitr", "r", "r_markdown" ]
stackoverflow_0045834524_html_knitr_r_r_markdown.txt
Q: Can we use conditions in ifdef MACROS in C or SystemVerilog? I want something like this: `ifdef N_O > N_I `define GREATER 1 `else `define LESSER 1 `endif but I cannot get it to work. Any solution or reading material? I tried hard but could not do it. A: #ifdef only tests if a macro is defined, regardless of the value it represents. If you want to compare their values, you need to use #if #if N_O > N_I #endif In Verilog, you can only do it this way: `ifdef N_O `ifdef N_I if (`N_O > `N_I) ... `endif `endif A: Verilog does not provide such a facility. There is only one action possible with text macros: checking for their existence. Therefore there are `ifdef, `ifndef, and `elsif. All accept only a single argument, which is the name of a text macro. In many cases, however, this missing functionality can be worked around by using Verilog generate features. Use of this feature is preferable because Verilog controls the syntax and scoping of declarations. The following is an example of using generated functionality: module mod #(parameter N_1=1, parameter N_2=2)(input logic in1, in2, output logic out); if (N_1 > N_2) begin always_comb out = in1; end else begin always_comb out = in2; end endmodule This was a made-up example to demonstrate the feature, which could be implemented without 'generate' blocks: always_comb begin if (N_1 > N_2) out = in1; else out = in2; end However, the generate constructs cannot solve all the problems, therefore mixing of text macros and generate blocks is often needed.
Can we use conditions in ifdef MACROS in C or SystemVerilog?
I want something like that `ifdef N_O > N_I `define GREATER 1 `else `define LESSER 1 `endif But cannot do. Any solution or reading? I tried hard to do this but could not do it.
[ "#ifdef only tests if a macro is defined, regardless of the value it represents.\nIf you want to compare their values, you need to use #if\n#if N_O > N_I\n \n#endif\n\nIn Verilog, you can only do it this way:\n`ifdef N_O\n `ifdef N_I\n if(N_O > N_I)\n ...\n `endif\n`endif\n\n", "Verilog does not provide such a facility. There is only one action possible with text macros: checking for their existence. Therefore there are `ifdef, `ifndef, and `elsif. All accept only a single argument which is the name of a text macro.\nIn many cases, however, this shortage of functionality could be augmented by using generate verilog features. Use of this feature is preferable because verilog controls syntax and scoping of declarations.\nThe following is an example of using generated functionality:\nmodule mod #(parameter N_0=1, parameter N_1=2)(input logic in1, in2, output logic out);\n if (N_1 > N_2) begin\n always_comb\n out = in1;\n end\n else begin\n always_comb \n out = in2;\n end\nendmoudle\n\nThis was a made up example to demonstrate the feature, which could be implemented without 'generate' blocks:\n alwas_comb begin\n if (N_1 > N_2) out = in1;\n else out = in2;\n end\n\nHowever, the generate constructs cannot solve all the problem, therefore mixing of text macros and generate blocks is often needed.\n" ]
[ 0, 0 ]
[]
[]
[ "c", "macros", "verilog" ]
stackoverflow_0074652467_c_macros_verilog.txt
Q: how to find the columns with the most similar values Just starting to use R and am feeling a bit confused. Suppose I have three columns data = data.frame(id=c(101, 102, 103),column1=c(2, 4, 9), column2=c(3, 4, 2), column3=c(5, 15, 7)) How can I create a new column (e.g., colmean) that is the mean of the two columns closest in value? I thought about doing a bunch of ifelse statements, but that seemed unnecessarily messy. In this case, for instance, colmean=c(2.5, 4, 8). A: Borrowing the function findClosest() created here by @Cole, we can do the following, findClosest <- function(x, n) { x <- sort(x) x[seq.int(which.min(diff(x, lag = n - 1L)), length.out = n)] } colMeans(apply(data[-1], 1, function(i)findClosest(i, 2))) #[1] 2.5 4.0 8.0 A: Here is a version with a loop: data = data.frame(id=c(101, 102, 103),column1=c(2, 4, 9), column2=c(3, 4, 2), column3=c(5, 15, 7)) data$colmean <- NaN # set up empty column for results for(i in seq(nrow(data))){ data.i <- data[i,-1] # get ith row d <- as.matrix(dist(c(data.i))) # get distances between values diag(d) <- NaN # replace diagonal of distance matrix with NaN hit <- which.min(d) # identify value of lowest distance pos <- c(row(d)[hit], col(d)[hit]) # get the position (i.e. the values that are closest) data$colmean[i] <- mean(unlist(data.i[pos])) # calculate mean } data # id column1 column2 column3 colmean # 1 101 2 3 5 2.5 # 2 102 4 4 15 4.0 # 3 103 9 2 7 8.0 A: Here's a self-contained solution, based on the tidyverse, that is independent of the number of columns to be compared. library(tidyverse) data %>% # Add the means of smallest pairwise differences to the input data bind_cols( data %>% # Make the data tidy (and hence independent of the number of "column"s) pivot_longer(starts_with("column")) %>% # For each id/row (replace with rowwise() if appropriate) group_by(id) %>% group_map( function(.x, .y) { # Form a tibble of all pairwise ciombinations of values as_tibble(t(combn(.x$value, 2))) %>% # Calculate pairwise differences mutate(difference = abs(V1 - V2)) %>% # Find the smallest pairwise difference arrange(difference) %>% head(1) %>% # Calculate the mean of this pair pivot_longer(starts_with("V")) %>% summarise(colmean=mean(value)) } ) %>% # Convert list of values to column bind_rows() ) id column1 column2 column3 colmean 1 101 2 3 5 2.5 2 102 4 4 15 4.0 3 103 9 2 7 8.0 A: A vectorized function using the Rfast package: library(Rfast) fClosest <- function(m, n) { m <- colSort(t(m)) matrix( m[ sequence( rep(n, ncol(m)), seq(0, nrow(m)*(ncol(m) - 1), nrow(m)) + colMins(diff(m, lag = n - 1)) ) ], ncol(m), n, TRUE ) } m <- matrix(sample(10, 24, 1), 4) m #> [,1] [,2] [,3] [,4] [,5] [,6] #> [1,] 4 2 6 2 5 3 #> [2,] 3 4 7 3 4 7 #> [3,] 4 2 7 6 10 2 #> [4,] 8 1 10 8 2 9 fClosest(m, 3L) #> [,1] [,2] [,3] #> [1,] 2 2 3 #> [2,] 3 3 4 #> [3,] 2 2 4 #> [4,] 8 8 9 rowMeans(fClosest(m, 3L)) #> [1] 2.333333 3.333333 2.666667 8.333333
how to find the columns with the most similar values
Just starting to use R and am feeling a bit confused. Suppose I have three columns data = data.frame(id=c(101, 102, 103),column1=c(2, 4, 9), column2=c(3, 4, 2), column3=c(5, 15, 7)) How can I create a new column (e.g., colmean) that is the mean of the two columns closest in value? I thought about doing a bunch of ifelse statements, but that seemed unnecessarily messy. In this case, for instance, colmean=c(2.5, 4, 8).
[ "Borrowing the function findClosest() created here by @Cole, we can do the following,\nfindClosest <- function(x, n) {\n x <- sort(x)\n x[seq.int(which.min(diff(x, lag = n - 1L)), length.out = n)]\n }\n\n\ncolMeans(apply(data[-1], 1, function(i)findClosest(i, 2)))\n#[1] 2.5 4.0 8.0\n\n", "Here is a version with a loop:\ndata = data.frame(id=c(101, 102, 103),column1=c(2, 4, 9), \n column2=c(3, 4, 2), column3=c(5, 15, 7))\n\n\ndata$colmean <- NaN # set up empty column for results\nfor(i in seq(nrow(data))){\n data.i <- data[i,-1] # get ith row\n d <- as.matrix(dist(c(data.i))) # get distances between values\n diag(d) <- NaN # replace diagonal of distance matrix with NaN\n hit <- which.min(d) # identify value of lowest distance\n pos <- c(row(d)[hit], col(d)[hit]) # get the position (i.e. the values that are closest)\n data$colmean[i] <- mean(unlist(data.i[pos])) # calculate mean\n}\n\ndata\n# id column1 column2 column3 colmean\n# 1 101 2 3 5 2.5\n# 2 102 4 4 15 4.0\n# 3 103 9 2 7 8.0\n\n", "Here's a self-contained solution, based on the tidyverse, that is independent of the number of columns to be compared.\nlibrary(tidyverse)\n\ndata %>%\n # Add the means of smallest pairwise differences to the input data\n bind_cols(\n data %>% \n # Make the data tidy (and hence independent of the number of \"column\"s)\n pivot_longer(starts_with(\"column\")) %>% \n # For each id/row (replace with rowwise() if appropriate)\n group_by(id) %>% \n group_map(\n function(.x, .y) {\n # Form a tibble of all pairwise ciombinations of values\n as_tibble(t(combn(.x$value, 2))) %>% \n # Calculate pairwise differences\n mutate(difference = abs(V1 - V2)) %>% \n # Find the smallest pairwise difference\n arrange(difference) %>% \n head(1) %>% \n # Calculate the mean of this pair\n pivot_longer(starts_with(\"V\")) %>% \n summarise(colmean=mean(value))\n }\n ) %>% \n # Convert list of values to column\n bind_rows()\n )\n\n id column1 column2 column3 colmean\n1 101 2 3 5 2.5\n2 102 4 4 15 4.0\n3 103 9 2 7 8.0\n\n", "A vectorized function using the Rfast package:\nlibrary(Rfast)\n\nfClosest <- function(m, n) {\n m <- colSort(t(m))\n matrix(\n m[\n sequence(\n rep(n, ncol(m)),\n seq(0, nrow(m)*(ncol(m) - 1), nrow(m)) + colMins(diff(m, lag = n - 1))\n )\n ],\n ncol(m), n, TRUE\n )\n}\n\nm <- matrix(sample(10, 24, 1), 4)\nm\n#> [,1] [,2] [,3] [,4] [,5] [,6]\n#> [1,] 4 2 6 2 5 3\n#> [2,] 3 4 7 3 4 7\n#> [3,] 4 2 7 6 10 2\n#> [4,] 8 1 10 8 2 9\nfClosest(m, 3L)\n#> [,1] [,2] [,3]\n#> [1,] 2 2 3\n#> [2,] 3 3 4\n#> [3,] 2 2 4\n#> [4,] 8 8 9\nrowMeans(fClosest(m, 3L))\n#> [1] 2.333333 3.333333 2.666667 8.333333\n\n" ]
[ 2, 0, 0, 0 ]
[]
[]
[ "data_analysis", "r" ]
stackoverflow_0074652169_data_analysis_r.txt
Q: Python compute object property in separate task to improve performance I wonder if it's possible to compute an object property in a separate background thread when the object is initialized, to speed up my computation. I have this example code: class Element: def __init__(self): self.__area = -1 # cache the area value @property def area(self): if self.__area < 0: self.__area = 2 # <- here I put 2 as an example, but it is really a slow area computation return self.__area Now I want to compute the total area of N elements: total_area = sum(el.area for el in elements) This works fine for a low number of elements, but when the number of elements grows I need a way to process this in parallel. For my purposes, precomputing each area would be just as good as computing the total area in parallel. A: The problem with Python and parallel computing is the GIL (Global Interpreter Lock). The GIL prevents a process from executing Python code in multiple threads at the same time. So for this to work you would need to spawn new processes, which has quite some overhead. Furthermore, it is cumbersome to exchange data between the processes. There is the multiprocessing package, which helps with creating new processes: https://docs.python.org/3/library/multiprocessing.html Still, in your case the question is how to split up the work. You could split the elements into subblocks and create a process for each subblock, which is then calculated there. The messaging between the processes can be implemented using pipes, as also described in the documentation above. So you need to send a lot of objects to the new processes and then send them back via the pipes. I am not sure whether, with all that overhead, this will end up faster or even slower.
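As a supplementary sketch (not part of the original question or answer): one way to apply the multiprocessing suggestion above is to compute every area in a worker pool and sum the results in the parent process. The Element class below is a simplified, hypothetical stand-in for the asker's real class, and the slow computation is faked with a multiplication; note that each worker operates on a pickled copy of the object, so the computed number has to be returned rather than relied on as a cached attribute.

from multiprocessing import Pool

class Element:
    def __init__(self, size):
        self.size = size      # placeholder input for the slow computation
        self._area = None     # cached result

    def _compute_area(self):
        # stand-in for the real, slow area algorithm
        return self.size * self.size

    @property
    def area(self):
        if self._area is None:
            self._area = self._compute_area()
        return self._area

def compute_area(element):
    # runs in a worker process; the parent's copy of the object is not
    # mutated, so we return the value instead of relying on the cache
    return element.area

if __name__ == "__main__":
    elements = [Element(n) for n in range(10_000)]
    with Pool() as pool:  # one worker per CPU core by default
        areas = pool.map(compute_area, elements, chunksize=256)
    total_area = sum(areas)
    print(total_area)

Whether this beats the plain single-process sum depends on how expensive the real area algorithm is compared to the pickling overhead the answer warns about.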
Python compute object property in separate task to improve performance
I wonder if it's possible to compute an object property in a separate background thread when it's initialized to speed up my computation. I have this example code: class Element: def __init__(self): self.__area = -1 # cache the area value @property def area(self) if self.__area < 0: self.__area = 2 # <- Here I put 2 as example but its slow area algorithm computation return self.__area Now if I want to compute the total area of N elements. total_area = sum(el.area for el in elements) this works fine for a low number of elements but when the number of element increment I need a way to process this in parallel. I think it's the same for me to precompute the area rather than compute the total area in parallel.
[ "The problem with Python and parallel computing is that there is that thing called GIL (Global Interpreter Lock). The GIL prevents a process to run multiple threads at the same time. So for that to work you would need to spawn a new process which has quiet some overhead. Furthermore it is cumbersome to exchange data between the processes.\nThere is that multiprocessing package which helps with creating new processes: https://docs.python.org/3/library/multiprocessing.html\nStill in your case the question is, how do you split up the work. You could split up the elements in subblocks and create a process for each subblock which gets calculated there. The messaging between the processes can be implemented using pipes, as also described in 1. So you need to send a lot of objects to the new processes and then send them back via the pipes. I am not sure if the overhead of all that will make it faster in the end or even slower.\n" ]
[ 0 ]
[]
[]
[ "background_process", "parallel_processing", "python", "python_multithreading" ]
stackoverflow_0074656410_background_process_parallel_processing_python_python_multithreading.txt
Q: Hidden Friend in Python I'm trying to create a hidden friend for my company. In this logic, they will fill out a google forms form and, at the end of the week, I will download it to my computer as a csv file. the data collected are: Full name, email address and desired gift. The idea is to automate the draw and each member will receive a secret friend in their email, with an email address to present them with a virtual gift. At the stage I'm at, I'm putting together the logic of the draw, but I'm not managing to develop. Because it's not making sense of the draw. One person is drawing two and it should only be one at a time. import glob import random import csv from itertools import permutations, combinations_with_replacement, combinations all_list = [] for glob in glob.glob("random_friend/csv/*"): file1 = open(glob, "r+") reader = csv.reader(file1, delimiter=',') for i in reader: all_list.append(i) all_list.pop(0) perm = permutations(all_list) gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book']) print(gift) for i in perm: name_one = i[1][1] name_two = i[2][1] mail_one = i[1][2] mail_two = i[2][2] print(f"""{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}""") A: It would be very helpful if you could attach some sample records from the input .csv file (anonymized if possible). Without that, have you tried shuffling the original list instead of using the permutations? import glob import random import csv all_list = [] for glob in glob.glob("random_friend/csv/*"): file1 = open(glob, "r+") reader = csv.reader(file1, delimiter=',') for i in reader: all_list.append(i) all_list.pop(0) random.shuffle(all_list) gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book']) for i in range(0, len(all_list), 2): name_one = all_list[i][1] name_two = all_list[i+1][1] mail_one = all_list[i][2] mail_two = all_list[i+1][2] print(f"""{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}""")
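As a complementary sketch (not from the original answer): if every participant must both give and receive exactly once, a shuffled circular assignment guarantees that, whereas pairing adjacent rows means only half of the participants give and the other half receive. The row layout assumed here is [timestamp, full name, e-mail address, desired gift], matching the indexes i[1] and i[2] used in the question's code; the sample rows and e-mail addresses are made up for illustration.

import random

def draw_pairs(rows):
    """Each person gives to the next one in a shuffled circle, so everyone
    gives exactly once and receives exactly once (needs at least 2 rows)."""
    rows = rows[:]            # work on a copy, keep the caller's list intact
    random.shuffle(rows)
    pairs = []
    for i, giver in enumerate(rows):
        receiver = rows[(i + 1) % len(rows)]  # wrap around to close the circle
        pairs.append((giver, receiver))
    return pairs

people = [
    ["2022-12-01", "Ana",   "ana@example.com",   "chocolat"],
    ["2022-12-01", "Bruno", "bruno@example.com", "Squeeze"],
    ["2022-12-01", "Carla", "carla@example.com", "Suspense book"],
]

for giver, receiver in draw_pairs(people):
    print(f"{giver[1]} took {receiver[1]}; e-mail {giver[2]} with the gift "
          f"suggestion '{receiver[3]}'")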
Hidden Friend in Python
I'm trying to create a hidden friend for my company. In this logic, they will fill out a google forms form and, at the end of the week, I will download it to my computer as a csv file. the data collected are: Full name, email address and desired gift. The idea is to automate the draw and each member will receive a secret friend in their email, with an email address to present them with a virtual gift. At the stage I'm at, I'm putting together the logic of the draw, but I'm not managing to develop. Because it's not making sense of the draw. One person is drawing two and it should only be one at a time. import glob import random import csv from itertools import permutations, combinations_with_replacement, combinations all_list = [] for glob in glob.glob("random_friend/csv/*"): file1 = open(glob, "r+") reader = csv.reader(file1, delimiter=',') for i in reader: all_list.append(i) all_list.pop(0) perm = permutations(all_list) gift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book']) print(gift) for i in perm: name_one = i[1][1] name_two = i[2][1] mail_one = i[1][2] mail_two = i[2][2] print(f"""{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}""")
[ "It would be very helpful if you could attach some sample records from the input .csv file (anonymized if possible).\nWithout that, have you tried shuffling the original list instead of using the permutations?\nimport glob\nimport random\nimport csv\n\nall_list = []\nfor glob in glob.glob(\"random_friend/csv/*\"):\n file1 = open(glob, \"r+\")\n reader = csv.reader(file1, delimiter=',')\n for i in reader:\n all_list.append(i)\n all_list.pop(0)\n\nrandom.shuffle(all_list)\n\ngift = random.choice(['chocolat', 'Squeeze', 'fridge magnet', 'popcorn door cushion kit', 'cocktail shaker kit', 'Suspense book'])\n\nfor i in range(0, len(all_list), 2):\n name_one = all_list[i][1]\n name_two = all_list[i+1][1]\n mail_one = all_list[i][2]\n mail_two = all_list[i+1][2]\n\n print(f\"\"\"{name_one} took {name_two} and present with a {gift} and send it by e-mail to {mail_two}\"\"\")\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_itertools", "random" ]
stackoverflow_0074656583_python_python_itertools_random.txt
Q: A query that compares the LAG value and fills the sub column with data if there is a difference? A query that compares the LAG value and fills the sub column with data if there is a difference? WITH A AS ( SELECT 'GOLD' AS Title, 1 AS RNUM, 555.4 AS VALUE1, null AS DIFF, null AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 2 AS RNUM, 555.4 AS VALUE1, 0 AS DIFF, 555.4 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 3 AS RNUM, 555.4 AS VALUE1, 0 AS DIFF, 555.4 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 4 AS RNUM, 556 AS VALUE1, 0.6 AS DIFF, 555.4 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 5 AS RNUM, 556 AS VALUE1, 0 AS DIFF, 556 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 6 AS RNUM, 556 AS VALUE1, 0 AS DIFF, 556 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 7 AS RNUM, 556.7 AS VALUE1, 0.7 AS DIFF, 556 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 8 AS RNUM, 556.7 AS VALUE1, 0 AS DIFF,556.7 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 9 AS RNUM, 557.3 AS VALUE1, 0.6 AS DIFF, 556.7 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 1 AS RNUM, 400.3 AS VALUE1, null AS DIFF, null AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 2 AS RNUM, 401.3 AS VALUE1, 1.0 AS DIFF, 400.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 3 AS RNUM, 401.3 AS VALUE1, 0 AS DIFF, 401.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 4 AS RNUM, 401.3 AS VALUE1, 0 AS DIFF, 401.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 5 AS RNUM, 402.2 AS VALUE1, 0.9 AS DIFF, 401.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 6 AS RNUM, 403.2 AS VALUE1, 1.0 AS DIFF, 402.2 AS LAG FROM DUAL ) Using A, I want to get the same result as B. If the data in the DIFF column is greater than 0 (or according to a condition), I want to fill the value in the AccMaxNo column with the RNUM value in the DIFF column. 
A Title RNUM VALUE1 DIFF LAG AccMaxNo GOLD 1 555.4 null null GOLD 2 555.4 0 555.4 GOLD 3 555.4 0 555.4 GOLD 4 556 0.6 555.4 GOLD 5 556 0 556 GOLD 6 556 0 556 GOLD 7 556.7 0.7 556 GOLD 8 556.7 0 556.7 GOLD 9 557.3 0.6 556.7 SILVER 1 400.3 null null SILVER 2 401.3 1.0 400.3 SILVER 3 401.3 0 401.3 SILVER 4 401.3 0 401.3 SILVER 5 402.2 0.9 401.3 SILVER 6 403.2 1.0 402.2 QUERY B Title RNUM VALUE1 DIFF LAG AccMaxNo GOLD 1 555.4 null null 4 GOLD 2 555.4 0 555.4 4 GOLD 3 555.4 0 555.4 4 GOLD 4 556 0.6 555.4 4 GOLD 5 556 0 556 7 GOLD 6 556 0 556 7 GOLD 7 556.7 0.7 556 7 GOLD 8 556.7 0 556.7 9 GOLD 9 557.3 0.6 556.7 9 SILVER 1 400.3 null null 2 SILVER 2 401.3 1.0 400.3 2 SILVER 3 401.3 0 401.3 5 SILVER 4 401.3 0 401.3 5 SILVER 5 402.2 0.9 401.3 5 SILVER 6 403.2 1.0 402.2 6 A: From Oracle 12, you can preform row-by-row processing using MATCH_RECOGNIZE: SELECT title, rnum, value1, value1 - lag AS diff, lag, MAX(rnum) OVER (PARTITION BY title, mno) AS accmaxno FROM table_name MATCH_RECOGNIZE( PARTITION BY title ORDER BY rnum MEASURES PREV(value1) AS lag, MATCH_NUMBER() AS mno ALL ROWS PER MATCH PATTERN ((^ first_row | same_value)* any_row) DEFINE same_value AS PREV(value1) = value1 ) Which, for the sample data: CREATE TABLE table_name (Title, RNUM, VALUE1) AS SELECT 'GOLD', 1, 555.4 FROM DUAL UNION ALL SELECT 'GOLD', 2, 555.4 FROM DUAL UNION ALL SELECT 'GOLD', 3, 555.4 FROM DUAL UNION ALL SELECT 'GOLD', 4, 556 FROM DUAL UNION ALL SELECT 'GOLD', 5, 556 FROM DUAL UNION ALL SELECT 'GOLD', 6, 556 FROM DUAL UNION ALL SELECT 'GOLD', 7, 556.7 FROM DUAL UNION ALL SELECT 'GOLD', 8, 556.7 FROM DUAL UNION ALL SELECT 'GOLD', 9, 557.3 FROM DUAL UNION ALL SELECT 'SILVER', 1, 400.3 FROM DUAL UNION ALL SELECT 'SILVER', 2, 401.3 FROM DUAL UNION ALL SELECT 'SILVER', 3, 401.3 FROM DUAL UNION ALL SELECT 'SILVER', 4, 401.3 FROM DUAL UNION ALL SELECT 'SILVER', 5, 402.2 FROM DUAL UNION ALL SELECT 'SILVER', 6, 403.2 FROM DUAL; Outputs: TITLE RNUM VALUE1 DIFF LAG ACCMAXNO GOLD 1 555.4 null null 4 GOLD 2 555.4 0 555.4 4 GOLD 3 555.4 0 555.4 4 GOLD 4 556 .6 555.4 4 GOLD 5 556 0 556 7 GOLD 6 556 0 556 7 GOLD 7 556.7 .7 556 7 GOLD 8 556.7 0 556.7 9 GOLD 9 557.3 .6 556.7 9 SILVER 1 400.3 null null 2 SILVER 2 401.3 1 400.3 2 SILVER 3 401.3 0 401.3 5 SILVER 4 401.3 0 401.3 5 SILVER 5 402.2 .9 401.3 5 SILVER 6 403.2 1 402.2 6 fiddle A: The analytical function FIRST_VALUE is very practical for this purpose. NULLIF function here transforms all diff values 0 to null Then IGNORE NULLS clause used with FIRST_VALUE ignore all rows where NULLIF(diff, 0) is null Finally, the windowing clause ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING allows the function to be restricted to the rows concerned as required. SELECT a.* , FIRST_VALUE( CASE WHEN NULLIF(diff, 0) IS NOT NULL THEN rnum ELSE NULL END ) IGNORE NULLS OVER( PARTITION BY title ORDER BY rnum ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING ) AccMaxNo FROM A ; demo on db<>fiddle
A query that compares the LAG value and fills the sub column with data if there is a difference?
A query that compares the LAG value and fills the sub column with data if there is a difference? WITH A AS ( SELECT 'GOLD' AS Title, 1 AS RNUM, 555.4 AS VALUE1, null AS DIFF, null AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 2 AS RNUM, 555.4 AS VALUE1, 0 AS DIFF, 555.4 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 3 AS RNUM, 555.4 AS VALUE1, 0 AS DIFF, 555.4 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 4 AS RNUM, 556 AS VALUE1, 0.6 AS DIFF, 555.4 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 5 AS RNUM, 556 AS VALUE1, 0 AS DIFF, 556 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 6 AS RNUM, 556 AS VALUE1, 0 AS DIFF, 556 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 7 AS RNUM, 556.7 AS VALUE1, 0.7 AS DIFF, 556 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 8 AS RNUM, 556.7 AS VALUE1, 0 AS DIFF,556.7 AS LAG FROM DUAL UNION ALL SELECT 'GOLD' AS Title, 9 AS RNUM, 557.3 AS VALUE1, 0.6 AS DIFF, 556.7 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 1 AS RNUM, 400.3 AS VALUE1, null AS DIFF, null AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 2 AS RNUM, 401.3 AS VALUE1, 1.0 AS DIFF, 400.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 3 AS RNUM, 401.3 AS VALUE1, 0 AS DIFF, 401.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 4 AS RNUM, 401.3 AS VALUE1, 0 AS DIFF, 401.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 5 AS RNUM, 402.2 AS VALUE1, 0.9 AS DIFF, 401.3 AS LAG FROM DUAL UNION ALL SELECT 'SILVER' AS Title, 6 AS RNUM, 403.2 AS VALUE1, 1.0 AS DIFF, 402.2 AS LAG FROM DUAL ) Using A, I want to get the same result as B. If the data in the DIFF column is greater than 0 (or according to a condition), I want to fill the value in the AccMaxNo column with the RNUM value in the DIFF column. A Title RNUM VALUE1 DIFF LAG AccMaxNo GOLD 1 555.4 null null GOLD 2 555.4 0 555.4 GOLD 3 555.4 0 555.4 GOLD 4 556 0.6 555.4 GOLD 5 556 0 556 GOLD 6 556 0 556 GOLD 7 556.7 0.7 556 GOLD 8 556.7 0 556.7 GOLD 9 557.3 0.6 556.7 SILVER 1 400.3 null null SILVER 2 401.3 1.0 400.3 SILVER 3 401.3 0 401.3 SILVER 4 401.3 0 401.3 SILVER 5 402.2 0.9 401.3 SILVER 6 403.2 1.0 402.2 QUERY B Title RNUM VALUE1 DIFF LAG AccMaxNo GOLD 1 555.4 null null 4 GOLD 2 555.4 0 555.4 4 GOLD 3 555.4 0 555.4 4 GOLD 4 556 0.6 555.4 4 GOLD 5 556 0 556 7 GOLD 6 556 0 556 7 GOLD 7 556.7 0.7 556 7 GOLD 8 556.7 0 556.7 9 GOLD 9 557.3 0.6 556.7 9 SILVER 1 400.3 null null 2 SILVER 2 401.3 1.0 400.3 2 SILVER 3 401.3 0 401.3 5 SILVER 4 401.3 0 401.3 5 SILVER 5 402.2 0.9 401.3 5 SILVER 6 403.2 1.0 402.2 6
[ "From Oracle 12, you can preform row-by-row processing using MATCH_RECOGNIZE:\nSELECT title,\n rnum,\n value1,\n value1 - lag AS diff,\n lag,\n MAX(rnum) OVER (PARTITION BY title, mno) AS accmaxno\nFROM table_name\nMATCH_RECOGNIZE(\n PARTITION BY title\n ORDER BY rnum\n MEASURES\n PREV(value1) AS lag,\n MATCH_NUMBER() AS mno\n ALL ROWS PER MATCH\n PATTERN ((^ first_row | same_value)* any_row)\n DEFINE\n same_value AS PREV(value1) = value1\n)\n\nWhich, for the sample data:\nCREATE TABLE table_name (Title, RNUM, VALUE1) AS\nSELECT 'GOLD', 1, 555.4 FROM DUAL UNION ALL\nSELECT 'GOLD', 2, 555.4 FROM DUAL UNION ALL\nSELECT 'GOLD', 3, 555.4 FROM DUAL UNION ALL\nSELECT 'GOLD', 4, 556 FROM DUAL UNION ALL\nSELECT 'GOLD', 5, 556 FROM DUAL UNION ALL\nSELECT 'GOLD', 6, 556 FROM DUAL UNION ALL\nSELECT 'GOLD', 7, 556.7 FROM DUAL UNION ALL\nSELECT 'GOLD', 8, 556.7 FROM DUAL UNION ALL\nSELECT 'GOLD', 9, 557.3 FROM DUAL UNION ALL\nSELECT 'SILVER', 1, 400.3 FROM DUAL UNION ALL\nSELECT 'SILVER', 2, 401.3 FROM DUAL UNION ALL\nSELECT 'SILVER', 3, 401.3 FROM DUAL UNION ALL\nSELECT 'SILVER', 4, 401.3 FROM DUAL UNION ALL\nSELECT 'SILVER', 5, 402.2 FROM DUAL UNION ALL\nSELECT 'SILVER', 6, 403.2 FROM DUAL;\n\nOutputs:\n\n\n\n\nTITLE\nRNUM\nVALUE1\nDIFF\nLAG\nACCMAXNO\n\n\n\n\nGOLD\n1\n555.4\nnull\nnull\n4\n\n\nGOLD\n2\n555.4\n0\n555.4\n4\n\n\nGOLD\n3\n555.4\n0\n555.4\n4\n\n\nGOLD\n4\n556\n.6\n555.4\n4\n\n\nGOLD\n5\n556\n0\n556\n7\n\n\nGOLD\n6\n556\n0\n556\n7\n\n\nGOLD\n7\n556.7\n.7\n556\n7\n\n\nGOLD\n8\n556.7\n0\n556.7\n9\n\n\nGOLD\n9\n557.3\n.6\n556.7\n9\n\n\nSILVER\n1\n400.3\nnull\nnull\n2\n\n\nSILVER\n2\n401.3\n1\n400.3\n2\n\n\nSILVER\n3\n401.3\n0\n401.3\n5\n\n\nSILVER\n4\n401.3\n0\n401.3\n5\n\n\nSILVER\n5\n402.2\n.9\n401.3\n5\n\n\nSILVER\n6\n403.2\n1\n402.2\n6\n\n\n\n\nfiddle\n", "The analytical function FIRST_VALUE is very practical for this purpose.\n\nNULLIF function here transforms all diff values 0 to null\nThen IGNORE NULLS clause used with FIRST_VALUE ignore all rows where NULLIF(diff, 0) is null\nFinally, the windowing clause ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING allows the function to be restricted to the rows concerned as required.\n\nSELECT a.*\n , FIRST_VALUE( \n CASE WHEN NULLIF(diff, 0) IS NOT NULL THEN rnum ELSE NULL END \n ) IGNORE NULLS OVER(\n PARTITION BY title ORDER BY rnum \n ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\n ) AccMaxNo\nFROM A\n;\n\ndemo on db<>fiddle\n" ]
[ 1, 0 ]
[]
[]
[ "oracle", "sql" ]
stackoverflow_0074651693_oracle_sql.txt
Q: Is there a way to convert the string values within a string array to individual int values? C# WPF For context, I'm making a basic BlackJack app with C# on WPF(.Net Framework). I now have a string array for when I press a button, the cards that are handed out are dealt to both the player and dealer. I want to use that same array, assign individual int values to the individual strings, use those to calculate the sum of the cards that are put in a listbox after pressing the button to show the sum in a label. I've tried a lot of things and am now unsure as to how I'm going to convert the string array to int. If you know a better/easier way to go about this, I'm definitely open to suggestions. Thanks in advance private string[] kaartenArray = new string[52] { "Klaveren Aas", "Klaveren 2", "Klaveren 3", "Klaveren 4", "Klaveren 5", "Klaveren 6", "Klaveren 7", "Klaveren 8", "Klaveren 9", "Klaveren 10", "Klaveren Boer", "Klaveren Koningin", "Klaveren Koning", "Ruiten Aas", "Ruiten 2", "Ruiten 3", "Ruiten 4", "Ruiten 5", "Ruiten 6", "Ruiten 7", "Ruiten 8", "Ruiten 9", "Ruiten 10", "Ruiten Boer", "Ruiten Koningin", "Ruiten Koning", "Harten Aas", "Harten 2", "Harten 3", "Harten 4", "Harten 5", "Harten 6", "Harten 7", "Harten 8", "Harten 9", "Harten 10", "Harten Boer", "Harten Koningin", "Harten Koning", "Schoppen Aas", "Schoppen 2", "Schoppen 3", "Schoppen 4", "Schoppen 5", "Schoppen 6", "Schoppen 7", "Schoppen 8", "Schoppen 9", "Schoppen 10", "Schoppen Boer", "Schoppen Koningin", "Schoppen Koning" }; private void KaartWaarde() { int[] kaartWaarde = Array.ConvertAll(kaartenArray, s => int.Parse(s)); kaartWaarde[0] = 1; kaartWaarde[1] = 2; kaartWaarde[2] = 3; kaartWaarde[3] = 4; kaartWaarde[4] = 5; kaartWaarde[5] = 6; kaartWaarde[6] = 7; kaartWaarde[7] = 8; Etc............ This was my last attempt to convert, I wasn't sure what it was, but apparently this can't be used for what I wanted to use it for. Klaveren, Ruiten, Harten and Schoppen are the suits. Aas means ace, etc... A: Why not make life easier on yourself and use the power of C# and use classes e.g., class Card { public string Name {get;set;} public int Value {get;set;} public override string ToString() { return Name; } } then keep a list of class Card List<Card> AllCards = new List<Card>(){ new Card{Name="King", Value=10},...}; You can expand this further by adding a Card Suit e.g. class Card { public string Name {get;set;} public string Suit{get;set;} public int Value {get;set;} public override string ToString() { return $"{Name} of {Suit}"; } } then have new Card{Name="Queen", Value=10, Suit="Diamond"} Obviously this can be expanded by use of enumerated type for Suit or Value and use of private setters and constructor passing of properties. Further on if you wish to use databinding in your WPF pages you can enhance the Card class by have it implement INotifyPropertyChanged for the public properties.
Is there a way to convert the string values within a string array to individual int values? C# WPF
For context, I'm making a basic BlackJack app with C# on WPF(.Net Framework). I now have a string array for when I press a button, the cards that are handed out are dealt to both the player and dealer. I want to use that same array, assign individual int values to the individual strings, use those to calculate the sum of the cards that are put in a listbox after pressing the button to show the sum in a label. I've tried a lot of things and am now unsure as to how I'm going to convert the string array to int. If you know a better/easier way to go about this, I'm definitely open to suggestions. Thanks in advance private string[] kaartenArray = new string[52] { "Klaveren Aas", "Klaveren 2", "Klaveren 3", "Klaveren 4", "Klaveren 5", "Klaveren 6", "Klaveren 7", "Klaveren 8", "Klaveren 9", "Klaveren 10", "Klaveren Boer", "Klaveren Koningin", "Klaveren Koning", "Ruiten Aas", "Ruiten 2", "Ruiten 3", "Ruiten 4", "Ruiten 5", "Ruiten 6", "Ruiten 7", "Ruiten 8", "Ruiten 9", "Ruiten 10", "Ruiten Boer", "Ruiten Koningin", "Ruiten Koning", "Harten Aas", "Harten 2", "Harten 3", "Harten 4", "Harten 5", "Harten 6", "Harten 7", "Harten 8", "Harten 9", "Harten 10", "Harten Boer", "Harten Koningin", "Harten Koning", "Schoppen Aas", "Schoppen 2", "Schoppen 3", "Schoppen 4", "Schoppen 5", "Schoppen 6", "Schoppen 7", "Schoppen 8", "Schoppen 9", "Schoppen 10", "Schoppen Boer", "Schoppen Koningin", "Schoppen Koning" }; private void KaartWaarde() { int[] kaartWaarde = Array.ConvertAll(kaartenArray, s => int.Parse(s)); kaartWaarde[0] = 1; kaartWaarde[1] = 2; kaartWaarde[2] = 3; kaartWaarde[3] = 4; kaartWaarde[4] = 5; kaartWaarde[5] = 6; kaartWaarde[6] = 7; kaartWaarde[7] = 8; Etc............ This was my last attempt to convert, I wasn't sure what it was, but apparently this can't be used for what I wanted to use it for. Klaveren, Ruiten, Harten and Schoppen are the suits. Aas means ace, etc...
[ "Why not make life easier on yourself and use the power of C# and use classes e.g.,\nclass Card\n{\n public string Name {get;set;}\n public int Value {get;set;}\n public override string ToString()\n {\n return Name;\n }\n}\n\nthen keep a list of class Card\nList<Card> AllCards = new List<Card>(){ new Card{Name=\"King\", Value=10},...};\n\nYou can expand this further by adding a Card Suit e.g.\nclass Card\n{\n public string Name {get;set;}\n public string Suit{get;set;}\n public int Value {get;set;}\n public override string ToString()\n {\n return $\"{Name} of {Suit}\";\n }\n}\n\nthen have new Card{Name=\"Queen\", Value=10, Suit=\"Diamond\"}\nObviously this can be expanded by use of enumerated type for Suit or Value and use of private setters and constructor passing of properties.\nFurther on if you wish to use databinding in your WPF pages you can enhance the Card class by have it implement INotifyPropertyChanged for the public properties.\n" ]
[ 4 ]
[]
[]
[ "arrays", "blackjack", "c#", "type_conversion" ]
stackoverflow_0074656267_arrays_blackjack_c#_type_conversion.txt
Q: How to mock a function which makes a mutation on an argument that is necessary for the caller fuction logic I want to be able to mock a function that mutates an argument, and that it's mutation is relevant in order for the code to continue executing correctly. Consider the following code: def mutate_my_dict(mutable_dict): if os.path.exists("a.txt"): mutable_dict["new_key"] = "new_value" return True def function_under_test(): my_dict = {"key": "value"} if mutate_my_dict(my_dict): return my_dict["new_key"] return "No Key" def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.return_value = True result = function_under_test() assert result == "new_value" **Please understand i know i can just mock os.path.exists in this case but this is just an example. I intentionally want to mock the function and not the external module. ** I also read the docs here: https://docs.python.org/3/library/unittest.mock-examples.html#coping-with-mutable-arguments But it doesn't seem to fit in my case. This is the test i've written so far, but it obviously doesn't work since the key changes: def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.return_value = True result = function_under_test() assert result == "new_value" Thanks in advance for all of your time :) A: With the help of Peter i managed to come up with this final test: def mock_mutate_my_dict(my_dict): my_dict["new_key"] = "new_value" return True def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.side_effect = mock_mutate_my_dict result = function_under_test() assert result == "new_value" How it works is that with a side effect you can run a function instead of the intended function. In this function you need to both change all of the mutating arguments and return the value returned.
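A small addition for illustration (not part of the accepted approach above), reusing mock_mutate_my_dict and function_under_test from the answer: because side_effect leaves the mock's call recording intact, the same test can also verify how the patched function was invoked. Note that unittest.mock stores a reference to the dictionary, so by assertion time the recorded argument already includes the key added by the side effect — exactly the mutable-argument behaviour described in the documentation page linked in the question.

def test_function_under_test_call_args():
    with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock:
        mutate_my_dict_mock.side_effect = mock_mutate_my_dict
        result = function_under_test()
        assert result == "new_value"
        # The recorded call argument is a reference to my_dict, which the
        # side effect has already mutated, so assert against the final state.
        mutate_my_dict_mock.assert_called_once_with(
            {"key": "value", "new_key": "new_value"}
        )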
How to mock a function which makes a mutation on an argument that is necessary for the caller function logic
I want to be able to mock a function that mutates an argument, and that it's mutation is relevant in order for the code to continue executing correctly. Consider the following code: def mutate_my_dict(mutable_dict): if os.path.exists("a.txt"): mutable_dict["new_key"] = "new_value" return True def function_under_test(): my_dict = {"key": "value"} if mutate_my_dict(my_dict): return my_dict["new_key"] return "No Key" def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.return_value = True result = function_under_test() assert result == "new_value" **Please understand i know i can just mock os.path.exists in this case but this is just an example. I intentionally want to mock the function and not the external module. ** I also read the docs here: https://docs.python.org/3/library/unittest.mock-examples.html#coping-with-mutable-arguments But it doesn't seem to fit in my case. This is the test i've written so far, but it obviously doesn't work since the key changes: def test_function_under_test(): with patch("stack_over_flow.mutate_my_dict") as mutate_my_dict_mock: mutate_my_dict_mock.return_value = True result = function_under_test() assert result == "new_value" Thanks in advance for all of your time :)
[ "With the help of Peter i managed to come up with this final test:\ndef mock_mutate_my_dict(my_dict):\n my_dict[\"new_key\"] = \"new_value\"\n return True\n\n\ndef test_function_under_test():\n with patch(\"stack_over_flow.mutate_my_dict\") as mutate_my_dict_mock:\n mutate_my_dict_mock.side_effect = mock_mutate_my_dict\n result = function_under_test()\n assert result == \"new_value\"\n\nHow it works is that with a side effect you can run a function instead of the intended function.\nIn this function you need to both change all of the mutating arguments and return the value returned.\n" ]
[ 0 ]
[]
[]
[ "pytest", "python", "python_3.x", "python_unittest.mock", "unit_testing" ]
stackoverflow_0074643203_pytest_python_python_3.x_python_unittest.mock_unit_testing.txt
Q: Avoid CORS preflight check in rails in my view I have <%= link_to "Angebot als PDF", redirect_to_offer_pdf_offer_path(offer, project_id: @project.id), method: :get, class:"dropdown-item" %> The redirect_to_offer_pdf action in the offers controller: url = get_url_on_other_host() if url redirect_to url, allow_other_host: true end When I click on the link I get this error in the browser console: Access to fetch at '<other_host_url>' (redirected from '<the_action_url') from origin '<my_host>' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. How can I "set the request's mode to 'no-cors'"? Why is CORS check even necessary for a simple redirect? I thought it is only necessary for DELETE or PUT requests. A: Use the rack-cors gem to handle CORS. It lets you create an initializer where you allow your request settings # config/initializers/cors.rb Rails.application.config.middleware.insert_before 0, Rack::Cors do allow do origins '*' resource '*', headers: :any, methods: [:get, :post, :patch, :put] end end
Avoid CORS preflight check in rails
in my view I have <%= link_to "Angebot als PDF", redirect_to_offer_pdf_offer_path(offer, project_id: @project.id), method: :get, class:"dropdown-item" %> The redirect_to_offer_pdf action in the offers controller: url = get_url_on_other_host() if url redirect_to url, allow_other_host: true end When I click on the link I get this error in the browser console: Access to fetch at '<other_host_url>' (redirected from '<the_action_url') from origin '<my_host>' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. If an opaque response serves your needs, set the request's mode to 'no-cors' to fetch the resource with CORS disabled. How can I "set the request's mode to 'no-cors'"? Why is CORS check even necessary for a simple redirect? I thought it is only necessary for DELETE or PUT requests.
[ "Use the rack-cors gem to handle CORS.\nIt lets you create an initializer where you allow your request settings\n# config/initializers/cors.rb\n\nRails.application.config.middleware.insert_before 0, Rack::Cors do\n allow do\n origins '*'\n resource '*', headers: :any, methods: [:get, :post, :patch, :put]\n end\nend\n\n" ]
[ 0 ]
[]
[]
[ "cors", "ruby_on_rails" ]
stackoverflow_0074645090_cors_ruby_on_rails.txt
Q: Efficient way to update a table with subquery Consider the following data in the table of books: bId serial 1 123 2 234 5 445 9 556 There's another table of missing_books with a latest_known_serial whose values come from the following query: UPDATE missing_books mb SET latest_known_serial = ( SELECT serial FROM books b WHERE b.bId < mb.bId ORDER BY b.bId DESC LIMIT 1) The aforementioned query produces the following: bId latest_known_serial 3 234 4 234 6 445 7 445 8 445 It all works, but I was wondering if there's any more performant way to do this as it actually hits big tables. A: You can make performance increase by using indexes to make your query faster: I tried to simulate your query: mysql> EXPLAIN UPDATE missing_books mb -> SET latest_known_serial = ( -> SELECT serial FROM books b -> WHERE b.bId < mb.bId -> ORDER BY b.bId DESC LIMIT 1); +----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------------------+ | 1 | UPDATE | mb | NULL | ALL | NULL | NULL | NULL | NULL | 10 | 100.00 | NULL | | 2 | DEPENDENT SUBQUERY | b | NULL | ALL | bId | NULL | NULL | NULL | 5 | 33.33 | Range checked for each record (index map: 0x1); Using filesort | +----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------------------+ 2 rows in set, 2 warnings (0.00 sec) As you can see in the above query, It uses a full table scan (type: ALL) to perform the operation: Optimizer didn't select to use the indexes (unique) defined on bId column. Now Let's make it Primary Key instead of unique index, then run the optimizer to see the result set: Drop Unique index first: mysql> ALTER TABLE books DROP INDEX bId; Query OK, 0 rows affected (0.00 sec) Records: 0 Duplicates: 0 Warnings: 0 Then Define PK on bId Column mysql> ALTER TABLE books ADD PRIMARY KEY (bId); Now test again: mysql> EXPLAIN UPDATE missing_books mb SET latest_known_serial = ( SELECT serial FROM books b WHERE b.bId < mb.bId ORDER BY b.bId DESC LIMIT 1); +----+--------------------+-------+------------+-------+---------------+---------+---------+------+------+----------+----------------------------------+ | id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra | +----+--------------------+-------+------------+-------+---------------+---------+---------+------+------+----------+----------------------------------+ | 1 | UPDATE | mb | NULL | ALL | NULL | NULL | NULL | NULL | 10 | 100.00 | NULL | | 2 | DEPENDENT SUBQUERY | b | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 1 | 33.33 | Using where; Backward index scan | +----+--------------------+-------+------------+-------+---------------+---------+---------+------+------+----------+----------------------------------+ 2 rows in set, 2 warnings (0.00 sec) As you can see in the key column, optimizer used the PK index defined on books table! You can test the speed by making small adjustments.
Efficient way to update a table with subquery
Consider the following data in the table of books: bId serial 1 123 2 234 5 445 9 556 There's another table of missing_books with a latest_known_serial whose values come from the following query: UPDATE missing_books mb SET latest_known_serial = ( SELECT serial FROM books b WHERE b.bId < mb.bId ORDER BY b.bId DESC LIMIT 1) The aforementioned query produces the following: bId latest_known_serial 3 234 4 234 6 445 7 445 8 445 It all works, but I was wondering if there's any more performant way to do this as it actually hits big tables.
[ "You can make performance increase by using indexes to make your query faster: I tried to simulate your query:\nmysql> EXPLAIN UPDATE missing_books mb \n -> SET latest_known_serial = (\n -> SELECT serial FROM books b \n -> WHERE b.bId < mb.bId \n -> ORDER BY b.bId DESC LIMIT 1);\n+----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------------------+\n| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |\n+----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------------------+\n| 1 | UPDATE | mb | NULL | ALL | NULL | NULL | NULL | NULL | 10 | 100.00 | NULL |\n| 2 | DEPENDENT SUBQUERY | b | NULL | ALL | bId | NULL | NULL | NULL | 5 | 33.33 | Range checked for each record (index map: 0x1); Using filesort |\n+----+--------------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------------------+\n2 rows in set, 2 warnings (0.00 sec)\n\nAs you can see in the above query, It uses a full table scan (type: ALL) to perform the operation: Optimizer didn't select to use the indexes (unique) defined on bId column.\nNow Let's make it Primary Key instead of unique index, then run the optimizer to see the result set:\nDrop Unique index first:\nmysql> ALTER TABLE books DROP INDEX bId;\nQuery OK, 0 rows affected (0.00 sec)\nRecords: 0 Duplicates: 0 Warnings: 0\n\nThen Define PK on bId Column\nmysql> ALTER TABLE books \n ADD PRIMARY KEY (bId);\n\nNow test again:\nmysql> EXPLAIN UPDATE missing_books mb SET latest_known_serial = ( SELECT serial FROM books b WHERE b.bId < mb.bId ORDER BY b.bId DESC LIMIT 1);\n+----+--------------------+-------+------------+-------+---------------+---------+---------+------+------+----------+----------------------------------+\n| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |\n+----+--------------------+-------+------------+-------+---------------+---------+---------+------+------+----------+----------------------------------+\n| 1 | UPDATE | mb | NULL | ALL | NULL | NULL | NULL | NULL | 10 | 100.00 | NULL |\n| 2 | DEPENDENT SUBQUERY | b | NULL | index | PRIMARY | PRIMARY | 4 | NULL | 1 | 33.33 | Using where; Backward index scan |\n+----+--------------------+-------+------------+-------+---------------+---------+---------+------+------+----------+----------------------------------+\n2 rows in set, 2 warnings (0.00 sec)\n\nAs you can see in the key column, optimizer used the PK index defined on books table! You can test the speed by making small adjustments.\n" ]
[ 1 ]
[]
[]
[ "greatest_n_per_group", "mariadb", "mysql", "performance" ]
stackoverflow_0074645505_greatest_n_per_group_mariadb_mysql_performance.txt
Q: compare validation rule fails in yii2 I am using yii2 and have three fields: num1, num2, num3. I want to add validation so that the num2 input is greater than the num1 input, so I am using the compare rule. Here is the code: return [ [['num1', 'num2', 'num3'], 'required'], [['num1', 'num2', 'num3'], 'integer', 'min' => self::MIN_SIZE, 'max' => self::MAX_SIZE], ['num2', 'compare', 'compareAttribute' => 'num1', 'operator' => '>'], ]; } Issue: it works if I enter 8, 9, 10 in the inputs but fails if I enter 8, 10, 11. I have also tried setting the input type to number. A: The operands are compared as strings by default in yii\validators\CompareValidator. That's why '10' is considered less than '8'. You need to set the CompareValidator::$type property to compare the operands as numbers. Your rule should look like this: [ 'num2', 'compare', 'compareAttribute' => 'num1', 'operator' => '>', 'type' => \yii\validators\CompareValidator::TYPE_NUMBER ]
compare validation rule fails in yii2
I am using yii2, I have three fields num1,num2,num3. I want to add validation, num2 input should be greater than num1 input, so I am using compare rule. Here is the code return [ [['num1', 'num2', 'num3'], 'required'], [['num1', 'num2', 'num3'], 'integer', 'min' => self::MIN_SIZE, 'max' => self::MAX_SIZE], ['num2', 'compare', 'compareAttribute' => 'num1', 'operator' => '>'], ]; } Issue: It works if I add 8,9,10 in the inputs but fails If I add 8,10,11 in the inputs. I have tried adding input type as number.
[ "The operands are compared as strings by default in yii\\validators\\CompareValidator. That's why '10' is considered less than '8'.\nYou need to set CompareValidator::$type property to compare operands as numbers.\nYour rule should look like this:\n[\n 'num2',\n 'compare',\n 'compareAttribute' => 'num1',\n 'operator' => '>',\n 'type' => \\yii\\validators\\CompareValidator::TYPE_NUMBER\n]\n\n" ]
[ 0 ]
[]
[]
[ "php", "validation", "yii", "yii2" ]
stackoverflow_0074652446_php_validation_yii_yii2.txt
Q: Debugging a Typescript Preact app with VS Code For education I have set up a preact application using the recommended way: preact create typescript preactTS This went fine and the resulting project could be opened in VS Code. To run it I changed the dev script in package.json to: "dev": "preact watch -H localhost -p 3001 --clear false", and added a launch config: { "version": "0.2.0", "configurations": [ { "type": "chrome", "request": "launch", "name": "Launch preactTS on Chrome", "url": "http://localhost:3001", "webRoot": "${workspaceFolder}", "userDataDir": "${workspaceRoot}/.vscode/chrome", "sourceMaps": true, "sourceMapPathOverrides": { "webpack:///*": "${workspaceFolder}/src/*", "webpack:///./*": "${workspaceFolder}/src/*" }, "showAsyncStacks": true, } ] } And my tsconfig.json: { "compilerOptions": { "target": "es2022", "module": "esnext", "allowJs": true, "jsx": "react", "jsxFactory": "h", "jsxFragmentFactory": "Fragment", "removeComments": true, "sourceMap": true, "esModuleInterop": true, "baseUrl": "./", "paths": { "react": [ "./node_modules/preact/compat" ], "react-dom": [ "./node_modules/preact/compat" ] }, "strict": true, "forceConsistentCasingInFileNames": true, "moduleResolution": "node", "resolveJsonModule": true, "isolatedModules": true, "noEmit": true, "noImplicitAny": true, "suppressImplicitAnyIndexErrors": true, "sourceRoot": "src", "lib": [ "es2022", "dom" ], "skipLibCheck": true, "experimentalDecorators": false, "allowSyntheticDefaultImports": true, "strictPropertyInitialization": false, "emitDecoratorMetadata": false, "noFallthroughCasesInSwitch": false, }, "include": [ "src/**/*" ], "exclude": [] } After that I was able to launch the web app in a Chrome instance. Now I want to debug the app and as you can see in the launch config I already added map path overrides. With these entries my breakpoints validate, however execution does not stop there and I don't know why. Adding a debugger statement works however, and once the debugger stopped there I can step through the code. But even then new breakpoints still don't work. What's the recommended setup to make debugging a Preact app work in Visual Studio Code? A: The solution is to set an option in the launch config, which is not used very often (at least there are not many useful results when searching for it): { "version": "0.2.0", "configurations": [ { "type": "chrome", "request": "launch", "name": "Launch preactTS on Chrome", "url": "http://localhost:3001", "webRoot": "${workspaceFolder}", "userDataDir": "${workspaceRoot}/.vscode/chrome", "sourceMaps": true, "sourceMapPathOverrides": { "webpack://*": "${workspaceFolder}/src/*", }, "runtimeArgs": [ "--allow-insecure-localhost" ], "showAsyncStacks": true, "smartStep": true, "perScriptSourcemaps": "no" } ] } Per script source maps must be disabled, apparently.
Debugging a Typescript Preact app with VS Code
For education I have set up a preact application using the recommended way: preact create typescript preactTS This went fine and the resulting project could be opened in VS Code. To run it I changed the dev script in package.json to: "dev": "preact watch -H localhost -p 3001 --clear false", and added a launch config: { "version": "0.2.0", "configurations": [ { "type": "chrome", "request": "launch", "name": "Launch preactTS on Chrome", "url": "http://localhost:3001", "webRoot": "${workspaceFolder}", "userDataDir": "${workspaceRoot}/.vscode/chrome", "sourceMaps": true, "sourceMapPathOverrides": { "webpack:///*": "${workspaceFolder}/src/*", "webpack:///./*": "${workspaceFolder}/src/*" }, "showAsyncStacks": true, } ] } And my tsconfig.json: { "compilerOptions": { "target": "es2022", "module": "esnext", "allowJs": true, "jsx": "react", "jsxFactory": "h", "jsxFragmentFactory": "Fragment", "removeComments": true, "sourceMap": true, "esModuleInterop": true, "baseUrl": "./", "paths": { "react": [ "./node_modules/preact/compat" ], "react-dom": [ "./node_modules/preact/compat" ] }, "strict": true, "forceConsistentCasingInFileNames": true, "moduleResolution": "node", "resolveJsonModule": true, "isolatedModules": true, "noEmit": true, "noImplicitAny": true, "suppressImplicitAnyIndexErrors": true, "sourceRoot": "src", "lib": [ "es2022", "dom" ], "skipLibCheck": true, "experimentalDecorators": false, "allowSyntheticDefaultImports": true, "strictPropertyInitialization": false, "emitDecoratorMetadata": false, "noFallthroughCasesInSwitch": false, }, "include": [ "src/**/*" ], "exclude": [] } After that I was able to launch the web app in a Chrome instance. Now I want to debug the app and as you can see in the launch config I already added map path overrides. With these entries my breakpoints validate, however execution does not stop there and I don't know why. Adding a debugger statement works however, and once the debugger stopped there I can step through the code. But even then new breakpoints still don't work. What's the recommended setup to make debugging a Preact app work in Visual Studio Code?
[ "The solution is to set an option in the launch config, which is not used very often (at least there are not many useful results when searching for it):\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"type\": \"chrome\",\n \"request\": \"launch\",\n \"name\": \"Launch preactTS on Chrome\",\n \"url\": \"http://localhost:3001\",\n \"webRoot\": \"${workspaceFolder}\",\n \"userDataDir\": \"${workspaceRoot}/.vscode/chrome\",\n \"sourceMaps\": true,\n \"sourceMapPathOverrides\": {\n \"webpack://*\": \"${workspaceFolder}/src/*\",\n },\n \"runtimeArgs\": [\n \"--allow-insecure-localhost\"\n ],\n \"showAsyncStacks\": true,\n \"smartStep\": true,\n \"perScriptSourcemaps\": \"no\"\n }\n ]\n}\n\nPer script source maps must be disabled, apparently.\n" ]
[ 0 ]
[]
[]
[ "preact", "typescript" ]
stackoverflow_0074642522_preact_typescript.txt
Q: B2C - Sign in with Google account not showing We are in the process of setting Google OAuth to Azure B2C. What are the values to pass for client id and client secret when adding Google as identity provider. See this image: Configure Google as Identity Provider When users run, sign up and sign in user flow, they are not getting the Sign in with Google option. How to get this? A: To get the values of Client ID and Client secret, you need to create one Google application as mentioned in below reference. I tried to reproduce the same in my environment and got below results: I created one Google application by following same steps in that document and got the values of Client ID and Client secret like this: Go to Google Developers Console -> Your Project -> Credentials -> Select your Web application I configured Google as an identity provider by entering above client ID and secret in my Azure AD B2C tenant like below: Make sure to add Google as Identity provider in your Sign up and sign in user flow as below: When I ran the user flow, I got the login screen with Google like below: After selecting Google, I got consent screen as below: I logged in successfully with Google account like below: Reference: Set up sign-up and sign-in with a Google account - Azure AD B2C
B2C - Sign in with Google account not showing
We are in the process of setting up Google OAuth with Azure B2C. What are the values to pass for the client ID and client secret when adding Google as an identity provider? See this image: Configure Google as Identity Provider. When users run the sign-up and sign-in user flow, they do not get the Sign in with Google option. How can we get it to appear?
[ "To get the values of Client ID and Client secret, you need to create one Google application as mentioned in below reference.\nI tried to reproduce the same in my environment and got below results:\nI created one Google application by following same steps in that document and got the values of Client ID and Client secret like this:\nGo to Google Developers Console -> Your Project -> Credentials -> Select your Web application\n\nI configured Google as an identity provider by entering above client ID and secret in my Azure AD B2C tenant like below:\n\nMake sure to add Google as Identity provider in your Sign up and sign in user flow as below:\n\nWhen I ran the user flow, I got the login screen with Google like below:\n\nAfter selecting Google, I got consent screen as below:\n\nI logged in successfully with Google account like below:\n\nReference:\nSet up sign-up and sign-in with a Google account - Azure AD B2C \n" ]
[ 0 ]
[]
[]
[ "aad_b2c", "azure_identity", "azureportal", "google_oauth" ]
stackoverflow_0074631238_aad_b2c_azure_identity_azureportal_google_oauth.txt
Q: "type": "module" in package.json throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath) I want to use import in my nodejs project instead of using require. So, I added, "type": "module" in my package.json. import index from './index.js'; in server.js when I run node server.js Error says, internal/modules/cjs/loader.js:1174 throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: .... server.conf.js is pasted below. import express from 'express'; import http from 'http'; let app = express(); let server = http.createServer(app); import morgan from 'morgan'; import methodOverride from 'method-override';; import path from 'path'; let port = process.env.PORT || 4000; app.use(morgan('dev')); app.use(methodOverride('X-HTTP-Method-Override')); let router = express.Router(); import routes from '../app/routes'; routes(app, router, client); server.listen(port); console.log(`Wizardry is afoot on port ${port}`); export { app, client }; A: For my case I downgrade: node-fetch ^3.0.0 → ^2.6.1 Problem solved. A: According to stack-trace before you edit (https://stackoverflow.com/revisions/61558835/1): internal/modules/cjs/loader.js:1174 throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: H:\WORKSPACE\CMDs\node-basic\server.conf.js at Object.Module._extensions..js (internal/modules/cjs/loader.js:1174:13) at Module.load (internal/modules/cjs/loader.js:1002:32) at Function.Module._load (internal/modules/cjs/loader.js:901:14) at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12) at internal/main/run_main_module.js:18:47 { code: 'ERR_REQUIRE_ESM' } I tried to locate the Node src who throws this error: https://github.com/nodejs/node/blob/c24b74a7abec0848484671771d250cfd961f128e/lib/internal/modules/cjs/loader.js#L1234 // Native extension for .js Module._extensions['.js'] = function(module, filename) { if (filename.endsWith('.js')) { const pkg = readPackageScope(filename); // Function require shouldn't be used in ES modules. if (pkg && pkg.data && pkg.data.type === 'module') { // ... throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); } } // ... }; The comment Function require shouldn't be used in ES modules tells the js file to be loaded is an ES module, but the caller is trying to use require() function to load it. Moreover, a double-check into Node src https://github.com/nodejs/node/blob/6cc94b2d7f69f1f541f7c5de3cb86e569fbd4aa3/lib/internal/errors.js#L1319 proves that H:\WORKSPACE\CMDs\node-basic\server.conf.js is the ES module to be loaded. So I'm trying to guess who is trying to load server.conf.js in your app but no luck. Most likely there is a require('./server.conf.js') in your index.js or somewhere else. If you find it just change it into import to fix. A: Had the same issue. I installed the latest node version and then it worked. Try the same. Also if you are using windows make sure it is the correct version i.e 64-bit, 32-bit. A: Check your NodeJS version for module compatibility("type": "module"), there are known issues on certain versions. Most likely there is a require('./server.conf.js') in your index.js/server.js or in the dependent packages you are importing or somewhere else. If you find it just change it into import to fix. 
1- Check you're all require statements 2- analyze dependent packages for a require statement in that code Try a build ... Try to deploy to NodeJS containers on GC, DO, AWS or HKU A: in my case i had a data file (data.js) with products listed as objects inside an array. looked like this : const data={ products:[ { brand:'nike', price:1200 }, { brand:'adidas', price:1400 } ] } export default data THE ERROR was caused because i FOOLISHLY exported it like it was a function or class and wrote: export default data={ etc... } i DOUBT this is a case of your error BUT it shows nonetheless how cumbersome and often this error can show up. if its any clarity what im trying to say im basically saying that this usually shows up due to a file itself being unreadable from import. if you put "type": "module" then it is def a version of node, OR a problem on a base level with something you are trying to import. try deleting each of the imports one by one initially to see which one may be the cause. then work from there A: Nothing fancy needed. Just update the Node.js version. A: I was also facing similar issue. So I downgrade chalk module version from 5.0.1 to 4.1.0. It worked for me. A: In my case I was running Angular 13.X och Nx 14.X but my Node version was still 12.X so upgrading the Node version to ^14 solves the problem. A: I updated the terminal node version to 16, deleted node_modules and installed it again. And fixed.
"type": "module" in package.json throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath)
I want to use import in my nodejs project instead of using require. So, I added, "type": "module" in my package.json. import index from './index.js'; in server.js when I run node server.js Error says, internal/modules/cjs/loader.js:1174 throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath); ^ Error [ERR_REQUIRE_ESM]: Must use import to load ES Module: .... server.conf.js is pasted below. import express from 'express'; import http from 'http'; let app = express(); let server = http.createServer(app); import morgan from 'morgan'; import methodOverride from 'method-override';; import path from 'path'; let port = process.env.PORT || 4000; app.use(morgan('dev')); app.use(methodOverride('X-HTTP-Method-Override')); let router = express.Router(); import routes from '../app/routes'; routes(app, router, client); server.listen(port); console.log(`Wizardry is afoot on port ${port}`); export { app, client };
[ "For my case I downgrade:\nnode-fetch ^3.0.0 → ^2.6.1\nProblem solved.\n", "According to stack-trace before you edit (https://stackoverflow.com/revisions/61558835/1): \ninternal/modules/cjs/loader.js:1174\n throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);\n ^\n\n throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);\n ^\n\nError [ERR_REQUIRE_ESM]: Must use import to load ES Module: H:\\WORKSPACE\\CMDs\\node-basic\\server.conf.js\n at Object.Module._extensions..js (internal/modules/cjs/loader.js:1174:13)\n at Module.load (internal/modules/cjs/loader.js:1002:32)\n at Function.Module._load (internal/modules/cjs/loader.js:901:14)\n at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:74:12)\n at internal/main/run_main_module.js:18:47 {\n code: 'ERR_REQUIRE_ESM'\n}\n\nI tried to locate the Node src who throws this error: \nhttps://github.com/nodejs/node/blob/c24b74a7abec0848484671771d250cfd961f128e/lib/internal/modules/cjs/loader.js#L1234\n// Native extension for .js\nModule._extensions['.js'] = function(module, filename) {\n if (filename.endsWith('.js')) {\n const pkg = readPackageScope(filename);\n // Function require shouldn't be used in ES modules.\n if (pkg && pkg.data && pkg.data.type === 'module') {\n // ...\n throw new ERR_REQUIRE_ESM(filename, parentPath, packageJsonPath);\n }\n }\n // ...\n};\n\nThe comment Function require shouldn't be used in ES modules tells the js file to be loaded is an ES module, but the caller is trying to use require() function to load it.\nMoreover, a double-check into Node src https://github.com/nodejs/node/blob/6cc94b2d7f69f1f541f7c5de3cb86e569fbd4aa3/lib/internal/errors.js#L1319 proves that H:\\WORKSPACE\\CMDs\\node-basic\\server.conf.js is the ES module to be loaded.\nSo I'm trying to guess who is trying to load server.conf.js in your app but no luck. Most likely there is a require('./server.conf.js') in your index.js or somewhere else. If you find it just change it into import to fix.\n", "Had the same issue. I installed the latest node version and then it worked. Try the same. Also if you are using windows make sure it is the correct version i.e 64-bit, 32-bit.\n", "Check your NodeJS version for module compatibility(\"type\": \"module\"), there are known issues on certain versions.\nMost likely there is a require('./server.conf.js') in your index.js/server.js or in the dependent packages you are importing or somewhere else. If you find it just change it into import to fix.\n1- Check you're all require statements\n2- analyze dependent packages for a require statement in that code\n\nTry a build ...\nTry to deploy to NodeJS containers on GC, DO, AWS or HKU\n\n", "in my case i had a data file (data.js) with products listed as objects inside an array. looked like this :\nconst data={\n products:[\n {\n brand:'nike',\n price:1200\n },\n {\n brand:'adidas',\n price:1400\n }\n ]\n}\n\nexport default data\n\nTHE ERROR was caused because i FOOLISHLY exported it like it was a function or class\nand wrote:\nexport default data={\netc...\n}\n\ni DOUBT this is a case of your error BUT it shows nonetheless how cumbersome and often this error can show up. if its any clarity what im trying to say im basically saying that this usually shows up due to a file itself being unreadable from import. if you put \"type\": \"module\" then it is def a version of node, OR a problem on a base level with something you are trying to import. try deleting each of the imports one by one initially to see which one may be the cause. 
then work from there\n", "Nothing fancy needed. Just update the Node.js version.\n", "I was also facing similar issue.\nSo I downgrade chalk module version from 5.0.1 to 4.1.0.\nIt worked for me.\n", "In my case I was running Angular 13.X och Nx 14.X but my Node version was still 12.X so upgrading the Node version to ^14 solves the problem.\n", "I updated the terminal node version to 16, deleted node_modules and installed it again. And fixed.\n" ]
[ 11, 4, 2, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "import", "node.js", "package.json" ]
stackoverflow_0061558835_import_node.js_package.json.txt
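Editor's note on the ERR_REQUIRE_ESM entry above: a small sketch of the require-to-import conversion that the accepted explanation describes. The file names are hypothetical; the point is that once package.json contains "type": "module", local .js files are ES modules and must be loaded with import (or a dynamic import()), never require().

// routes.js - an ES module once "type": "module" is set
export default function routes(app, router) {
  router.get('/', (_req, res) => res.send('ok'));
  app.use('/', router);
}

// server.js
// Broken: require() cannot load an ES module and throws ERR_REQUIRE_ESM.
// const routes = require('./routes.js');

// Works: load it with a static import instead.
import express from 'express';
import routes from './routes.js';

const app = express();
routes(app, express.Router());
app.listen(4000);

The same reasoning explains the node-fetch downgrade mentioned in the answers: node-fetch 3.x is ESM-only, so it cannot be require()d from CommonJS code, while 2.x still can.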
Q: Display MongoDB Documents data on a Webpage using Python Flask I wrote a code using Python and trying to display all the Documents from Mongodb on a web page. However, on webpage I see the Column names, but no data. And on the command, it does print all the data. Any help is greatly appreciated. import pymongo from pymongo import MongoClient import datetime import sys from flask import Flask, render_template, request import werkzeug from flask_table import Table,Col from bson.json_util import dumps import json app = Flask(__name__) try: client = pymongo.MongoClient("XXXX") print("Connected to Avengers MongoClient Successfully from Project Script!!!") except: print("Connection to MongoClient Failed!!!") db = client.avengers_hack_db @app.route('/') def Results(): try: Project_List_Col = db.ppm_master_db_collection.find()#.limit(10) for row in Project_List_Col: print(row) return render_template('Results.html',tasks=row) except Exception as e: return dumps({'error': str(e)}) if __name__ == '__main__': app.run(debug = True) The HTML (Results.html) Page is: <html> <body> {% for task_id in tasks %} <h3>{{task_id}}</h3> {% endfor %} </body> </html> A: Removed the for loop and rewrote the code as below: @app.route('/') def Results(): try: Project_List_Col = db.ppm_master_db_collection.find() return render_template('Results.html',tasks=Project_List_Col) except Exception as e: return dumps({'error': str(e)}) if __name__ == '__main__': app.run(debug = True) Documents are displayed on the HTML Page as is. (***Will work on the formatting part. Meanwhile any pointers are greatly appreciated.) A: Try using key,value function of for loop and also remove that loop in the python app file from route like upper answer @Dinakar suggested
Display MongoDB Documents data on a Webpage using Python Flask
I wrote a code using Python and trying to display all the Documents from Mongodb on a web page. However, on webpage I see the Column names, but no data. And on the command, it does print all the data. Any help is greatly appreciated. import pymongo from pymongo import MongoClient import datetime import sys from flask import Flask, render_template, request import werkzeug from flask_table import Table,Col from bson.json_util import dumps import json app = Flask(__name__) try: client = pymongo.MongoClient("XXXX") print("Connected to Avengers MongoClient Successfully from Project Script!!!") except: print("Connection to MongoClient Failed!!!") db = client.avengers_hack_db @app.route('/') def Results(): try: Project_List_Col = db.ppm_master_db_collection.find()#.limit(10) for row in Project_List_Col: print(row) return render_template('Results.html',tasks=row) except Exception as e: return dumps({'error': str(e)}) if __name__ == '__main__': app.run(debug = True) The HTML (Results.html) Page is: <html> <body> {% for task_id in tasks %} <h3>{{task_id}}</h3> {% endfor %} </body> </html>
[ "Removed the for loop and rewrote the code as below:\[email protected]('/')\ndef Results():\n try:\n Project_List_Col = db.ppm_master_db_collection.find()\n return render_template('Results.html',tasks=Project_List_Col)\n except Exception as e:\n return dumps({'error': str(e)})\n\nif __name__ == '__main__': \n app.run(debug = True)\n\nDocuments are displayed on the HTML Page as is.\n(***Will work on the formatting part. Meanwhile any pointers are greatly appreciated.)\n", "Try using key,value function of for loop and also remove that loop in the python app file from route like upper answer @Dinakar suggested\n" ]
[ 0, 0 ]
[]
[]
[ "flask", "mongodb", "python" ]
stackoverflow_0057637088_flask_mongodb_python.txt
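Editor's note on the Flask/MongoDB entry above: a self-contained sketch of the fix, i.e. passing the whole result set to the template instead of the last row of the loop. The connection URI reuses the database and collection names from the question, but they are placeholders; render_template_string is used only to keep the example in one file.

# Minimal sketch: list every MongoDB document on one page with Flask + PyMongo.
from flask import Flask, render_template_string
from pymongo import MongoClient

app = Flask(__name__)
client = MongoClient("mongodb://localhost:27017")   # placeholder URI
collection = client.avengers_hack_db.ppm_master_db_collection

PAGE = """
{% for doc in tasks %}
  <h3>{{ doc["_id"] }}</h3>
  <ul>
  {% for key, value in doc.items() %}
    <li>{{ key }}: {{ value }}</li>
  {% endfor %}
  </ul>
{% endfor %}
"""

@app.route("/")
def results():
    # Materialise the cursor so the template iterates documents, not a single row.
    tasks = list(collection.find())
    return render_template_string(PAGE, tasks=tasks)

if __name__ == "__main__":
    app.run(debug=True)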
Q: Python: Move files from multiple folders in different locations into one folder I am able to move all files from one folder to another. I need help in order to move files to destination folder from multiple source folders. import os import shutil source1 = "C:\\Users\\user\\OneDrive\\Desktop\\1\\" source2 = "C:\\Users\\user\\OneDrive\\Desktop\\2\\" destination = "C:\\Users\\user\\OneDrive\\Desktop\\Destination\\" files = os.listdir(source1, source2) for f in files: shutil.move(source1 + f,source2 + f, destination + f) print("Files Transferred") I am getting error : files = os.listdir(source1, source2) TypeError: listdir() takes at most 1 argument (2 given) A: This is the line interpreter is complaining about, you cannot pass two directories to os.listdir function files = os.listdir(source1, source2) You have to have a nested loop (or list comprehension) to do what you want so: import os sources = [source1, source2, ..., sourceN] files_to_move = [] for source in sources: current_source_files =[f"{source}{filename}" for filename in os.listdir(source)] files_to_move.extend(current_source_files) for f in files_to_move: shutil.move(f, f"{destination}{f.split(os.sep)[-1]}") For "cleaner" solution it's worth to look at: https://docs.python.org/3/library/os.path.html#module-os.path
Python: Move files from multiple folders in different locations into one folder
I am able to move all files from one folder to another. I need help in order to move files to destination folder from multiple source folders. import os import shutil source1 = "C:\\Users\\user\\OneDrive\\Desktop\\1\\" source2 = "C:\\Users\\user\\OneDrive\\Desktop\\2\\" destination = "C:\\Users\\user\\OneDrive\\Desktop\\Destination\\" files = os.listdir(source1, source2) for f in files: shutil.move(source1 + f,source2 + f, destination + f) print("Files Transferred") I am getting error : files = os.listdir(source1, source2) TypeError: listdir() takes at most 1 argument (2 given)
[ "This is the line interpreter is complaining about, you cannot pass two directories to os.listdir function\nfiles = os.listdir(source1, source2)\n\nYou have to have a nested loop (or list comprehension) to do what you want so:\nimport os\nsources = [source1, source2, ..., sourceN]\nfiles_to_move = []\nfor source in sources:\n current_source_files =[f\"{source}{filename}\" for filename in os.listdir(source)]\n files_to_move.extend(current_source_files)\nfor f in files_to_move:\n shutil.move(f, f\"{destination}{f.split(os.sep)[-1]}\")\n\nFor \"cleaner\" solution it's worth to look at:\nhttps://docs.python.org/3/library/os.path.html#module-os.path\n" ]
[ 1 ]
[]
[]
[ "directory", "file", "python", "shutil" ]
stackoverflow_0074656592_directory_file_python_shutil.txt
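Editor's note on the move-files entry above: a sketch that folds the accepted answer into one idiomatic pathlib version. The folder paths are the placeholders from the question; adjust them for a real machine, and note that only non-empty regular files are moved.

# Move every non-empty file from several source folders into one destination.
import shutil
from pathlib import Path

sources = [
    Path(r"C:\Users\user\OneDrive\Desktop\1"),
    Path(r"C:\Users\user\OneDrive\Desktop\2"),
]
destination = Path(r"C:\Users\user\OneDrive\Desktop\Destination")
destination.mkdir(parents=True, exist_ok=True)

for source in sources:
    for item in source.iterdir():
        if item.is_file() and item.stat().st_size > 0:
            shutil.move(str(item), str(destination / item.name))
            print(f"{item.name} was moved")

print("Files Transferred")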
Q: How to parse this non Json data I have data that looks like this.... how can i parse this in C# .NET [ [ WALLET_TYPE, CURRENCY, BALANCE, UNSETTLED_INTEREST, AVAILABLE_BALANCE, LAST_CHANGE, TRADE_DETAILS ], [ WALLET_TYPE, CURRENCY, BALANCE, UNSETTLED_INTEREST, AVAILABLE_BALANCE, LAST_CHANGE, TRADE_DETAILS ] ] Please note the inside can have N records..... i just showed 2. If this were Json, i would be ok, but this aint json, how can I do it the correct way in C# that is neat and clean..... This came as a string, not an array, etc..... so how do i parse this string? I have TRIED Json parsing, CSV parsing, config file parsing, profile parsing, and INI file parsing, this seem not at all it. I am NEW to programming. Please help A: Using some regex we can make the data into valid JSON, ASSUMING that the data is as provided. const string data = "[[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS],[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS]]"; //Using Regex we can wrap all UPPERCASE_WORDS with "'s making it valid JSON var jsonData = Regex.Replace(data, "([A-Z_]+)", "\"$&\""); var parsedData = JsonConvert.DeserializeObject<List<List<string>>>(jsonData); Now you have a valid List of List<string>. Here's the same data / regex in regex101 https://regex101.com/r/xoc7MK/1 Edit Assuming there is always 7 items in each array: const string data = "[[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS],[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS]]"; //Remove all but Letters, Numbers, Commas, Underscores and punctuations var dataCsv = Regex.Replace(data, "[^a-zA-Z0-9_.,]", ""); var items = dataCsv.Split(','); List<List<string>> itemsList = new List<List<string>>(); for(int i = 0; i < items.Length; i += 7) { itemsList.Add(new List<string>(items.Skip(i).Take(7))); }
How to parse this non Json data
I have data that looks like this. How can I parse it in C# / .NET? [ [ WALLET_TYPE, CURRENCY, BALANCE, UNSETTLED_INTEREST, AVAILABLE_BALANCE, LAST_CHANGE, TRADE_DETAILS ], [ WALLET_TYPE, CURRENCY, BALANCE, UNSETTLED_INTEREST, AVAILABLE_BALANCE, LAST_CHANGE, TRADE_DETAILS ] ] Please note the inner list can have N records; I just showed 2. If this were JSON I would be fine, but it is not JSON, so how can I parse it in a neat and clean way in C#? It arrives as a string, not an array, so how do I parse that string? I have tried JSON parsing, CSV parsing, config file parsing, profile parsing, and INI file parsing, and none of them seem to fit. I am new to programming. Please help.
[ "Using some regex we can make the data into valid JSON, ASSUMING that the data is as provided.\nconst string data = \"[[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS],[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS]]\";\n\n//Using Regex we can wrap all UPPERCASE_WORDS with \"'s making it valid JSON\nvar jsonData = Regex.Replace(data, \"([A-Z_]+)\", \"\\\"$&\\\"\");\nvar parsedData = JsonConvert.DeserializeObject<List<List<string>>>(jsonData);\n\nNow you have a valid List of List<string>.\nHere's the same data / regex in regex101\nhttps://regex101.com/r/xoc7MK/1\nEdit Assuming there is always 7 items in each array:\nconst string data = \"[[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS],[WALLET_TYPE,CURRENCY,BALANCE,UNSETTLED_INTEREST,AVAILABLE_BALANCE,LAST_CHANGE,TRADE_DETAILS]]\";\n\n//Remove all but Letters, Numbers, Commas, Underscores and punctuations \nvar dataCsv = Regex.Replace(data, \"[^a-zA-Z0-9_.,]\", \"\");\nvar items = dataCsv.Split(',');\n\nList<List<string>> itemsList = new List<List<string>>();\nfor(int i = 0; i < items.Length; i += 7) {\n itemsList.Add(new List<string>(items.Skip(i).Take(7)));\n}\n\n" ]
[ 3 ]
[]
[]
[ ".net_core", "c#", "json", "record", "string" ]
stackoverflow_0074655746_.net_core_c#_json_record_string.txt
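Editor's note on the C# parsing entry above: a usage sketch of the regex approach from the answer, showing the parsed rows being printed. It assumes the Newtonsoft.Json package is installed; the sample string is shortened from the question.

// Quote each UPPERCASE_WORD, deserialize as nested lists, then walk the rows.
using System;
using System.Collections.Generic;
using System.Text.RegularExpressions;
using Newtonsoft.Json;

class Program
{
    static void Main()
    {
        const string data = "[[WALLET_TYPE,CURRENCY,BALANCE],[WALLET_TYPE,CURRENCY,BALANCE]]";

        string jsonData = Regex.Replace(data, "([A-Z_]+)", "\"$&\"");
        var rows = JsonConvert.DeserializeObject<List<List<string>>>(jsonData);

        foreach (var row in rows)
        {
            Console.WriteLine(string.Join(" | ", row));
        }
    }
}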
Q: Unable to pass dynamic values to config records I have a requirement to pass a JWT client assertion to the oauth2 client credentials grant config record. I'm passing the parameter as the optional parameter. But this parameter has to be generated each time the token endpoint is called for an access token. Therefore I did something like the following. http:OAuth2ClientCredentialsGrantConfig oauth2Config = { tokenUrl: "https://*****/oauth2/token", clientId: "*******", optionalParams: getJWT(), clientSecret: "*****", credentialBearer: oauth2:POST_BODY_BEARER }; Here, the getJWT() method returns a map with the JWT. function getJWT() returns map<string> { string jwt = // logic to generate the JWT map<string> jwtAssertion = { "client_assertion" : jwt }; return jwtAssertion; } This works only once. When the access token returned by the token endpoint expires and when the token endpoint is called again for the access token, the getJWT() method does not get called. Therefore, I suppose the new request is going with the old JWT, hence the request fails. Is there a way to pass a dynamically changing value as a parameter to the http:OAuth2ClientCredentialsGrantConfig record? A: You can achieve this by writing a custom ClientOAuth2Handler and using it as described in the imperative approach section. Your custom handler should check for the exp value of client_assertion and create a new http:ClientOAuth2Handler with a new client_assertion when it expires. You can get an idea from the below code. import ballerina/http; import ballerina/oauth2; import ballerina/jwt; import ballerina/time; client class CustomClientOAuth2Handler { private http:ClientOAuth2Handler? oauthHandler = (); private string? jwt = (); public function init() returns error? { self.jwt = self.getJWT(); self.oauthHandler = check self.getOauth2Handler(); } remote function enrich(http:Request request) returns http:Request|error { boolean isJwtExpired = check self.isJwtExpired(); if isJwtExpired { self.jwt = self.getJWT(); self.oauthHandler = check self.getOauth2Handler(); } http:ClientOAuth2Handler oauthHandler = check self.oauthHandler.ensureType(); return oauthHandler->enrich(request); } private function getJWT() returns string { return ""; // your jwt logic } private function getOauth2Handler() returns http:ClientOAuth2Handler|error { string jwt = check self.jwt.ensureType(); return new ({ tokenUrl: "https://localhost:9445/oauth2/token", clientId: "FlfJYKBD2c925h4lkycqNZlC2l4a", clientSecret: "PJz0UhTJMrHOo68QQNpvnqAY_3Aa", credentialBearer: oauth2:POST_BODY_BEARER, optionalParams: {client_assertion: jwt} }); } private function isJwtExpired() returns boolean|error { // your logic to check jwt assertion expirty string jwt = check self.jwt.ensureType(); [jwt:Header, jwt:Payload] [_, payload] = check jwt:decode(jwt); int? assertionExpiryTime = payload.exp; [int, decimal] currentTime = time:utcNow(); return assertionExpiryTime !is () && assertionExpiryTime <= currentTime[0]; } }
Unable to pass dynamic values to config records
I have a requirement to pass a JWT client assertion to the oauth2 client credentials grant config record. I'm passing the parameter as the optional parameter. But this parameter has to be generated each time the token endpoint is called for an access token. Therefore I did something like the following. http:OAuth2ClientCredentialsGrantConfig oauth2Config = { tokenUrl: "https://*****/oauth2/token", clientId: "*******", optionalParams: getJWT(), clientSecret: "*****", credentialBearer: oauth2:POST_BODY_BEARER }; Here, the getJWT() method returns a map with the JWT. function getJWT() returns map<string> { string jwt = // logic to generate the JWT map<string> jwtAssertion = { "client_assertion" : jwt }; return jwtAssertion; } This works only once. When the access token returned by the token endpoint expires and when the token endpoint is called again for the access token, the getJWT() method does not get called. Therefore, I suppose the new request is going with the old JWT, hence the request fails. Is there a way to pass a dynamically changing value as a parameter to the http:OAuth2ClientCredentialsGrantConfig record?
[ "You can achieve this by writing a custom ClientOAuth2Handler and using it as described in the imperative approach section.\nYour custom handler should check for the exp value of client_assertion and create a new http:ClientOAuth2Handler with a new client_assertion when it expires. You can get an idea from the below code.\nimport ballerina/http;\nimport ballerina/oauth2;\nimport ballerina/jwt;\nimport ballerina/time;\n\nclient class CustomClientOAuth2Handler {\n private http:ClientOAuth2Handler? oauthHandler = ();\n private string? jwt = ();\n\n public function init() returns error? {\n self.jwt = self.getJWT();\n self.oauthHandler = check self.getOauth2Handler();\n }\n\n remote function enrich(http:Request request) returns http:Request|error {\n boolean isJwtExpired = check self.isJwtExpired();\n if isJwtExpired {\n self.jwt = self.getJWT();\n self.oauthHandler = check self.getOauth2Handler();\n }\n\n http:ClientOAuth2Handler oauthHandler = check self.oauthHandler.ensureType();\n return oauthHandler->enrich(request);\n }\n\n private function getJWT() returns string {\n return \"\"; // your jwt logic\n }\n\n private function getOauth2Handler() returns http:ClientOAuth2Handler|error {\n string jwt = check self.jwt.ensureType();\n return new ({\n tokenUrl: \"https://localhost:9445/oauth2/token\",\n clientId: \"FlfJYKBD2c925h4lkycqNZlC2l4a\",\n clientSecret: \"PJz0UhTJMrHOo68QQNpvnqAY_3Aa\",\n credentialBearer: oauth2:POST_BODY_BEARER,\n optionalParams: {client_assertion: jwt}\n });\n }\n\n private function isJwtExpired() returns boolean|error {\n // your logic to check jwt assertion expirty\n string jwt = check self.jwt.ensureType();\n [jwt:Header, jwt:Payload] [_, payload] = check jwt:decode(jwt);\n int? assertionExpiryTime = payload.exp;\n\n [int, decimal] currentTime = time:utcNow();\n return assertionExpiryTime !is () && assertionExpiryTime <= currentTime[0];\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "ballerina", "ballerina_http", "wso2" ]
stackoverflow_0074645939_ballerina_ballerina_http_wso2.txt
Q: How to change position of button in activity_main.xml | Android studio Hello there im new to android studio and i create new project with "Basic Activity" template and cant understand why i cant drag and drop button where i want in activity_main.xml is always stay on top left corner? Drag and drop in content_main.xml has no problem at all i try to copy button xml code from content_main.xml to activity_main.xml but button still sit on left top corner activity_main.xml <?xml version="1.0" encoding="utf-8"?> <androidx.coordinatorlayout.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <com.google.android.material.appbar.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/Theme.MyApplication.AppBarOverlay"> <androidx.appcompat.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="?attr/actionBarSize" android:background="?attr/colorPrimary" app:popupTheme="@style/Theme.MyApplication.PopupOverlay" /> </com.google.android.material.appbar.AppBarLayout> <include layout="@layout/content_main" /> <com.google.android.material.floatingactionbutton.FloatingActionButton android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="bottom|end" android:layout_marginEnd="@dimen/fab_margin" android:layout_marginBottom="16dp" app:srcCompat="@android:drawable/ic_dialog_email" /> <Button android:id="@+id/button2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> </androidx.coordinatorlayout.widget.CoordinatorLayout> A: The layout of elements in design are composed of three main designs. You have Linear Layout, Relative Layout and Constraint Layout. Constraint Layout Constraint uses anchor points. You have to anchor them to parent or other elements. So if you wanted two buttons side by side. You would anchor them to parent or other elements or parent. Click on the element then attributes and select the position of the element. Linear and Relative Latouts These layouts are more of a stackable layout. They are stacked above or below the element before. Personally I use constraint layout. But the reason why the element stays in the left corner is because that's the 0,0 point. So once you constrain the points to parent or other elements they will look good in design. Now each person has their favorite layout. But sometimes you may have to call multiple layouts in a design so you may have all three in one design. Take this for example This was built in linear layout. It stacks on top of each other. The next image is a Constraint layout This is a series of constraint layouts where buttons are anchored together. You could build this in relative or linear but it gets a little more complex in my opinion. I hope this helps with how designs are done with the different layouts.
How to change position of button in activity_main.xml | Android studio
Hello there im new to android studio and i create new project with "Basic Activity" template and cant understand why i cant drag and drop button where i want in activity_main.xml is always stay on top left corner? Drag and drop in content_main.xml has no problem at all i try to copy button xml code from content_main.xml to activity_main.xml but button still sit on left top corner activity_main.xml <?xml version="1.0" encoding="utf-8"?> <androidx.coordinatorlayout.widget.CoordinatorLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:app="http://schemas.android.com/apk/res-auto" xmlns:tools="http://schemas.android.com/tools" android:layout_width="match_parent" android:layout_height="match_parent" tools:context=".MainActivity"> <com.google.android.material.appbar.AppBarLayout android:layout_width="match_parent" android:layout_height="wrap_content" android:theme="@style/Theme.MyApplication.AppBarOverlay"> <androidx.appcompat.widget.Toolbar android:id="@+id/toolbar" android:layout_width="match_parent" android:layout_height="?attr/actionBarSize" android:background="?attr/colorPrimary" app:popupTheme="@style/Theme.MyApplication.PopupOverlay" /> </com.google.android.material.appbar.AppBarLayout> <include layout="@layout/content_main" /> <com.google.android.material.floatingactionbutton.FloatingActionButton android:id="@+id/fab" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_gravity="bottom|end" android:layout_marginEnd="@dimen/fab_margin" android:layout_marginBottom="16dp" app:srcCompat="@android:drawable/ic_dialog_email" /> <Button android:id="@+id/button2" android:layout_width="wrap_content" android:layout_height="wrap_content" android:text="Button" app:layout_constraintBottom_toBottomOf="parent" app:layout_constraintEnd_toEndOf="parent" app:layout_constraintStart_toStartOf="parent" app:layout_constraintTop_toTopOf="parent" /> </androidx.coordinatorlayout.widget.CoordinatorLayout>
[ "The layout of elements in design are composed of three main designs. You have\nLinear Layout, Relative Layout and Constraint Layout.\n\nConstraint Layout\n\nConstraint uses anchor points. You have to anchor them to parent or other elements. So if you wanted two buttons side by side. You would anchor them to parent or other elements or parent. Click on the element then attributes and select the position of the element.\n\nLinear and Relative Latouts\n\nThese layouts are more of a stackable layout. They are stacked above or below the element before.\nPersonally I use constraint layout. But the reason why the element stays in the left corner is because that's the 0,0 point. So once you constrain the points to parent or other elements they will look good in design.\nNow each person has their favorite layout. But sometimes you may have to call multiple layouts in a design so you may have all three in one design.\nTake this for example\n\nThis was built in linear layout. It stacks on top of each other.\nThe next image is a Constraint layout\n\nThis is a series of constraint layouts where buttons are anchored together. You could build this in relative or linear but it gets a little more complex in my opinion.\nI hope this helps with how designs are done with the different layouts.\n" ]
[ 1 ]
[]
[]
[ "android_layout", "android_studio", "kotlin" ]
stackoverflow_0074647100_android_layout_android_studio_kotlin.txt
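Editor's note on the Android layout entry above: the constraint attributes on the question's button have no effect because its parent in activity_main.xml is a CoordinatorLayout, which ignores app:layout_constraint* attributes; they only position a view when the parent is a ConstraintLayout (as content_main.xml typically is in the Basic Activity template). Below is a minimal hypothetical layout for reference, assuming the androidx.constraintlayout dependency is present.

<!-- Button anchored to the centre of a ConstraintLayout instead of sitting at 0,0 -->
<androidx.constraintlayout.widget.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/button2"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Button"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

</androidx.constraintlayout.widget.ConstraintLayout>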
Q: Evaluating a Multiline Code Block (after an `if`) Without Indentation Is it possible to make Python3 see an unindented code chunk as a code block? If so how? This is more of a curiousity of how Python works. Typically if you want to run a code chunk after an if statement you need to indent what comes below: if True: x = 'hello' print(x) ## hello Is there a way to use the if and not indent the next 2 lines? You can get it to work if the next line is a function call (not an assignment) and you wrap it with parenthesis as seen below: if True:( print('hello') ) ## hello But it fails to work if you add in multiple lines or an assignment: if True:( print('hello') print('hello2') ) ## File "<stdin>", line 3 ## print('hello2') ## ^ ## SyntaxError: invalid syntax ## >>> ) ## File "<stdin>", line 1 ## ) ## ^ ## SyntaxError: unmatched ')' if True:( x = 'hello' ) ## File "<stdin>", line 2 ## x = 'hello' ## ^ ## SyntaxError: invalid syntax ## >>> ) ## File "<stdin>", line 1 ## ) ## ^ ## SyntaxError: unmatched ')' Is there a way to evaluate the multiple lines after the if without indenting them? Perhaps similar to the parenthisis trick I used for the simple print('hello) but that works for multiple lines and assignments? A: This code should work: if True:( x:='hello_x', print('hello'), print(x) ) ## hello ## hello_x In your case, you are using a tuple to break python's indentation logic, so you need to separate each element with a comma. And since you are in a tuple, you need to use the Walrus Operator := to assign a value.
Evaluating a Multiline Code Block (after an `if`) Without Indentation
Is it possible to make Python3 see an unindented code chunk as a code block? If so how? This is more of a curiousity of how Python works. Typically if you want to run a code chunk after an if statement you need to indent what comes below: if True: x = 'hello' print(x) ## hello Is there a way to use the if and not indent the next 2 lines? You can get it to work if the next line is a function call (not an assignment) and you wrap it with parenthesis as seen below: if True:( print('hello') ) ## hello But it fails to work if you add in multiple lines or an assignment: if True:( print('hello') print('hello2') ) ## File "<stdin>", line 3 ## print('hello2') ## ^ ## SyntaxError: invalid syntax ## >>> ) ## File "<stdin>", line 1 ## ) ## ^ ## SyntaxError: unmatched ')' if True:( x = 'hello' ) ## File "<stdin>", line 2 ## x = 'hello' ## ^ ## SyntaxError: invalid syntax ## >>> ) ## File "<stdin>", line 1 ## ) ## ^ ## SyntaxError: unmatched ')' Is there a way to evaluate the multiple lines after the if without indenting them? Perhaps similar to the parenthisis trick I used for the simple print('hello) but that works for multiple lines and assignments?
[ "This code should work:\nif True:(\nx:='hello_x',\nprint('hello'),\nprint(x)\n)\n\n## hello\n## hello_x\n\nIn your case, you are using a tuple to break python's indentation logic, so you need to separate each element with a comma. And since you are in a tuple, you need to use the Walrus Operator := to assign a value.\n" ]
[ 3 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074656650_python_python_3.x.txt
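Editor's note on the indentation entry above: for completeness, the grammar also allows one or more simple statements on the same physical line as the colon, which covers assignments without the tuple/walrus workaround. A short sketch of both forms:

# 1. Simple statements placed directly after the colon, separated by semicolons.
if True: x = 'hello'; print(x); print('hello2')

# 2. The parenthesised tuple with the walrus operator, as in the answer above.
if True: (
    y := 'hello_y',
    print(y),
)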
Q: Pandas development environment: pytest does not see changes after building edited .pyx file Q: Why is pytest not seeing changes when I edit a .pyx file and build? What step am I missing? I'm using Visuals Studio Code with remote containers as described at the end of this page. If I add changes to pandas/_libs/tslibs/offsets.pyx, and then run (pandas-dev) root@60017c489843:/workspaces/pandas# python setup.py build_ext -j 4 Compiling pandas/_libs/tslibs/offsets.pyx because it changed. [1/1] Cythonizing pandas/_libs/tslibs/offsets.pyx /opt/conda/envs/pandas-dev/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*. warnings.warn(msg, _BetaConfiguration) # ... more output here without errors ... my unit-test fails because it does not test against my updated version of offsets.pxy. He points to a line (see below) where the error exists only in the old version of the file. pandas/tests/tseries/offsets/test_offsets.py ........x......................................................F .... more output here ... E TypeError: __init__() got an unexpected keyword argument 'milliseconds' pandas/_libs/tslibs/offsets.pyx:325: TypeError Whatever change I add to cdef _determine_offset and build, pytest does not see the edits, therefore I assume I'm missing a compilation step somewhere. Reproducible example clone my pandas fork: git clone [email protected]:markopacak/pandas.git git checkout bug-dateoffset-milliseconds In your dev-environment (docker container or VS Code remote container) run: conda activate pandas-dev python setup.py build_ext -j 4 pytest pandas/tests/tseries/offsets/test_offsets.py::TestDateOffset Assumes you have set-up a dev environment for pandas, ideally using remote-containers on VS Code like I did. (pandas-dev) root@60017c489843:/workspaces/pandas# python --version Python 3.8.15 A: I'm pretty sure you need to install once the extensions are built (otherwise where are the built extension and how python/pytest should know where to look?). This is how my workflow looked some time ago (not sure it still applies but should be close enough): python setup.py build_ext --inplace -j 4 python -m pip install -e . --no-build-isolation --no-use-pep517 ... pytest pandas/tests/xxxx/yyyy.py Installing in development mode (-e) is the most convenient option in my opinion for development.
Pandas development environment: pytest does not see changes after building edited .pyx file
Q: Why is pytest not seeing changes when I edit a .pyx file and build? What step am I missing? I'm using Visuals Studio Code with remote containers as described at the end of this page. If I add changes to pandas/_libs/tslibs/offsets.pyx, and then run (pandas-dev) root@60017c489843:/workspaces/pandas# python setup.py build_ext -j 4 Compiling pandas/_libs/tslibs/offsets.pyx because it changed. [1/1] Cythonizing pandas/_libs/tslibs/offsets.pyx /opt/conda/envs/pandas-dev/lib/python3.8/site-packages/setuptools/config/pyprojecttoml.py:108: _BetaConfiguration: Support for `[tool.setuptools]` in `pyproject.toml` is still *beta*. warnings.warn(msg, _BetaConfiguration) # ... more output here without errors ... my unit-test fails because it does not test against my updated version of offsets.pxy. He points to a line (see below) where the error exists only in the old version of the file. pandas/tests/tseries/offsets/test_offsets.py ........x......................................................F .... more output here ... E TypeError: __init__() got an unexpected keyword argument 'milliseconds' pandas/_libs/tslibs/offsets.pyx:325: TypeError Whatever change I add to cdef _determine_offset and build, pytest does not see the edits, therefore I assume I'm missing a compilation step somewhere. Reproducible example clone my pandas fork: git clone [email protected]:markopacak/pandas.git git checkout bug-dateoffset-milliseconds In your dev-environment (docker container or VS Code remote container) run: conda activate pandas-dev python setup.py build_ext -j 4 pytest pandas/tests/tseries/offsets/test_offsets.py::TestDateOffset Assumes you have set-up a dev environment for pandas, ideally using remote-containers on VS Code like I did. (pandas-dev) root@60017c489843:/workspaces/pandas# python --version Python 3.8.15
[ "I'm pretty sure you need to install once the extensions are built (otherwise where are the built extension and how python/pytest should know where to look?). This is how my workflow looked some time ago (not sure it still applies but should be close enough):\npython setup.py build_ext --inplace -j 4\npython -m pip install -e . --no-build-isolation --no-use-pep517\n\n...\n\npytest pandas/tests/xxxx/yyyy.py\n\nInstalling in development mode (-e) is the most convenient option in my opinion for development.\n" ]
[ 1 ]
[]
[]
[ "cython", "pandas", "pytest", "python", "visual_studio_code" ]
stackoverflow_0074656048_cython_pandas_pytest_python_visual_studio_code.txt
Q: Django postgres unique constraint name generation When adding a new unique constraint on a django model, this will automatically create a name for this constraint in postgres. Does Postgres generate this name or is it Django? Is it deterministic? for example I use this model class MyModel(Model): field1 = TextField() field2 = IntegerField() class Meta: unique_together = ("field1", "field2") I got the name of the constraint with select constraint_name from information_schema.constraint_column_usage where table_name = 'myapp_mytable' I get a name like field1_field2_d04755de_uniq A: Django will determine this name. Indeed, the source code [GitHub] shows that it determines the name with: if name is None: name = IndexName(table, columns, "_uniq", create_unique_name) The IndexName [GitHub] will call create_unique_name with the table, columns and suffix: class IndexName(TableColumns): def __init__(self, table, columns, suffix, create_index_name): # … pass def __str__(self): return self.create_index_name(self.table, self.columns, self.suffix) and the create_unique_name will return a quoted version of _create_index_name, which will make a digest of the table_name and column_names [GitHub]: def _create_index_name(self, table_name, column_names, suffix=""): # … _, table_name = split_identifier(table_name) hash_suffix_part = "%s%s" % ( names_digest(table_name, *column_names, length=8), suffix, ) # … But using unique_together is likely to become deprecated. Indeed, the documentation on unique_together says: Use UniqueConstraint with the constraints option instead. UniqueConstraint provides more functionality than unique_together. unique_together may be deprecated in the future. You can define this UniqueConstraint where you can also manually specify the name of the constraint: from django.db import models class MyModel(models.Model): field1 = models.TextField() field2 = models.IntegerField() class Meta: constraints = [ models.UniqueConstraint(fields=('field1', 'field2'), name='some_constraint_name') ] A: As suggested by @willem-van-onsem, using UniqueConstraint allows to set the constraint name (which is pretty deterministic) A: I simply add this to migrations: ... migrations.RunSQL(""" ALTER TABLE IF EXISTS public.api_sectoraltr ADD UNIQUE (year, category_id, code_id); """, reverse_sql=migrations.RunSQL.noop), ...
Django postgres unique constraint name generation
When adding a new unique constraint on a django model, this will automatically create a name for this constraint in postgres. Does Postgres generate this name or is it Django? Is it deterministic? for example I use this model class MyModel(Model): field1 = TextField() field2 = IntegerField() class Meta: unique_together = ("field1", "field2") I got the name of the constraint with select constraint_name from information_schema.constraint_column_usage where table_name = 'myapp_mytable' I get a name like field1_field2_d04755de_uniq
[ "Django will determine this name. Indeed, the source code [GitHub] shows that it determines the name with:\n\nif name is None:\n name = IndexName(table, columns, \"_uniq\", create_unique_name)\n\nThe IndexName [GitHub] will call create_unique_name with the table, columns and suffix:\n\nclass IndexName(TableColumns):\n \n def __init__(self, table, columns, suffix, create_index_name):\n # …\n pass\n\n def __str__(self):\n return self.create_index_name(self.table, self.columns, self.suffix)\n\nand the create_unique_name will return a quoted version of _create_index_name, which will make a digest of the table_name and column_names [GitHub]:\n\ndef _create_index_name(self, table_name, column_names, suffix=\"\"):\n # …\n _, table_name = split_identifier(table_name)\n hash_suffix_part = \"%s%s\" % (\n names_digest(table_name, *column_names, length=8),\n suffix,\n )\n # …\n\nBut using unique_together is likely to become deprecated. Indeed, the documentation on unique_together says:\n\nUse UniqueConstraint\nwith the\nconstraints\noption instead.\nUniqueConstraint\nprovides more functionality than unique_together. unique_together\nmay be deprecated in the future.\n\nYou can define this UniqueConstraint where you can also manually specify the name of the constraint:\nfrom django.db import models\n\nclass MyModel(models.Model):\n field1 = models.TextField()\n field2 = models.IntegerField()\n\n class Meta:\n constraints = [\n models.UniqueConstraint(fields=('field1', 'field2'), name='some_constraint_name')\n ]\n", "As suggested by @willem-van-onsem, using UniqueConstraint allows to set the constraint name (which is pretty deterministic)\n", "I simply add this to migrations:\n...\n migrations.RunSQL(\"\"\"\n ALTER TABLE IF EXISTS public.api_sectoraltr\n ADD UNIQUE (year, category_id, code_id);\n \"\"\", reverse_sql=migrations.RunSQL.noop),\n...\n\n" ]
[ 3, 1, 0 ]
[]
[]
[ "django", "postgresql" ]
stackoverflow_0071115464_django_postgresql.txt
Q: Is there a conversion from pointer to array? For example, for the following code, I know that p is a pointer, which points to the first element of the array arr, and I also know that the array will degenerate into an array under certain conditions, but why can the [] operation be performed on the pointer here? #include<iostream> using namespace std; int main() { int arr[10]; arr[3] = 10; int* p = arr; cout << p[3]; return 0; } Is there any documentation for this? run it online A: From the C++ 20 Standard (7.6.1.2 Subscripting) 1 A postfix expression followed by an expression in square brackets is a postfix expression. One of the expressions shall be a glvalue of type “array of T” or a prvalue of type “pointer to T” and the other shall be a prvalue of unscoped enumeration or integral type. The result is of type “T”. The type “T” shall be a completely-defined object type.62 The expression E1[E2] is identical (by definition) to *((E1)+(E2)), except that in the case of an array operand, the result is an lvalue if that operand is an lvalue and an xvalue otherwise. The expression E1 is sequenced before the expression E2. That is when an array is used in this expression *((E1)+(E2)) then it is converted implicitly to pointer to its first element.
Is there a conversion from pointer to array?
For example, for the following code, I know that p is a pointer, which points to the first element of the array arr, and I also know that the array will degenerate into an array under certain conditions, but why can the [] operation be performed on the pointer here? #include<iostream> using namespace std; int main() { int arr[10]; arr[3] = 10; int* p = arr; cout << p[3]; return 0; } Is there any documentation for this? run it online
[ "From the C++ 20 Standard (7.6.1.2 Subscripting)\n\n1 A postfix expression followed by an expression in square brackets is\na postfix expression. One of the expressions shall be a glvalue of\ntype “array of T” or a prvalue of type “pointer to T” and the other\nshall be a prvalue of unscoped enumeration or integral type. The\nresult is of type “T”. The type “T” shall be a completely-defined\nobject type.62 The expression E1[E2] is identical (by definition) to\n*((E1)+(E2)), except that in the case of an array operand, the result is an lvalue if that operand is an lvalue and an xvalue otherwise. The\nexpression E1 is sequenced before the expression E2.\n\nThat is when an array is used in this expression *((E1)+(E2)) then it is converted implicitly to pointer to its first element.\n" ]
[ 2 ]
[]
[]
[ "arrays", "c++", "implicit_conversion", "pointer_arithmetic", "pointers" ]
stackoverflow_0074656665_arrays_c++_implicit_conversion_pointer_arithmetic_pointers.txt
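Editor's note on the subscripting entry above: a small runnable illustration of the quoted rule, i.e. E1[E2] is *((E1)+(E2)) and the array decays to a pointer to its first element, so all four expressions below read the same element.

#include <iostream>

int main() {
    int arr[10] = {};
    arr[3] = 10;

    int* p = arr;                 // array-to-pointer conversion (decay)

    std::cout << arr[3] << ' '    // subscript on the array
              << p[3] << ' '      // subscript on the pointer
              << *(p + 3) << ' '  // what p[3] means by definition
              << 3[p] << '\n';    // legal, if unidiomatic, for the same reason
    return 0;
}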
Q: PowerShell - Print Move-Item in Console I have the following code: $SchoolFolder = "C:\Users\MyUser\Desktop\School Folder\$StudentName\$Month. $MonthWrite\$Day. $DayWrite" $MP4Lenght = (Get-ChildItem -Path $RenderFolder).Length -ne "0" $MP4existsToCopy = Test-Path -Path "$RenderFolder\*.mp4" If (($MP4existsToCopy -eq $True) -and ($MP4Lenght -eq $True)) { Get-ChildItem $MyFolder | Where-Object { $_.Length -gt 0KB} | Move-Item -Destination (new-item -type directory -force ($SchoolFolder + $newSub)) -force -ea 0 Write-Host "Done!" } I would like to know how do I make all correspondence in $MP4Lenght be printed in the console with the format $MP4Lenght + "was moved", because that way I can know which files were moved. A: Why not just use -verbose? Move-Item -Destination (new-item -type directory -force ($SchoolFolder + $newSub)) -force -ea 0 -Verbose Update as per your comment. Try it this way... $source = 'C:\Users\myuser\playground\powershell\Source\' $destination = 'C:\Users\myuser\playground\powershell\Destination' Get-ChildItem $source -File | where-object {$PSItem.Length -ne 0} | ForEach-Object{ Move-Item $PSItem.FullName -Destination '.\Destination' if (-not(Test-Path $PSItem.FullName) -and (test-path (Join-Path -Path $destination -ChildPath $PSItem.Name))) { "$($PSItem.name) has moved" } } A: Your "exist to copy" logic isn't really required, because if the file doesn't exist then get-childitem is not going to find it. Similarly with the check if MP4lenght is true. The following will check if the file does not exist in the source and does exist in the destination and if that is true then write to the host that the file has moved: $source = 'C:\Users\myuser\playground\powershell\Source\' $destination = 'C:\Users\myuser\playground\powershell\Destination' $files = Get-ChildItem $source -File | where-object {$_.Length -ne 0} foreach ($file in $files) { Move-Item $file.FullName -Destination .\Destination if (-not(Test-Path $file.FullName) -and (test-path (Join-Path -Path $destination -ChildPath $file.Name))) { Write-Host "$($file.name) has moved" } } A: Final script: $StudentName = Tyler $RenderFolder = "C:\Users\MyUser\Desktop\Render" $MP4existsToCopy = Get-ChildItem $RenderFolder -File | where-object {$_.Length -ne 0} $SchoolFolder = "C:\Users\MyUser\Desktop\School Folder\$StudentName\$Month. $MonthWrite\$Day. $DayWrite" foreach ($file in $MP4existsToCopy) { Move-Item $file.FullName -Destination (new-item -type directory -force ($SchoolFolder)) # new-item - Serves to create the folder if it does not exist if (-not(Test-Path $file.FullName) -and (test-path (Join-Path -Path $SchoolFolder -ChildPath $file.Name))) { Write-Host "$($file.name) was moved!" }
PowerShell - Print Move-Item in Console
I have the following code: $SchoolFolder = "C:\Users\MyUser\Desktop\School Folder\$StudentName\$Month. $MonthWrite\$Day. $DayWrite" $MP4Lenght = (Get-ChildItem -Path $RenderFolder).Length -ne "0" $MP4existsToCopy = Test-Path -Path "$RenderFolder\*.mp4" If (($MP4existsToCopy -eq $True) -and ($MP4Lenght -eq $True)) { Get-ChildItem $MyFolder | Where-Object { $_.Length -gt 0KB} | Move-Item -Destination (new-item -type directory -force ($SchoolFolder + $newSub)) -force -ea 0 Write-Host "Done!" } I would like to know how do I make all correspondence in $MP4Lenght be printed in the console with the format $MP4Lenght + "was moved", because that way I can know which files were moved.
[ "Why not just use -verbose?\nMove-Item -Destination (new-item -type directory -force ($SchoolFolder + $newSub)) -force -ea 0 -Verbose\n\nUpdate as per your comment.\nTry it this way...\n$source = 'C:\\Users\\myuser\\playground\\powershell\\Source\\'\n$destination = 'C:\\Users\\myuser\\playground\\powershell\\Destination'\n\nGet-ChildItem $source -File | \nwhere-object {$PSItem.Length -ne 0} | \nForEach-Object{\n Move-Item $PSItem.FullName -Destination '.\\Destination'\n\n if (-not(Test-Path $PSItem.FullName) -and (test-path (Join-Path -Path $destination -ChildPath $PSItem.Name))) {\n \"$($PSItem.name) has moved\"\n }\n}\n\n", "Your \"exist to copy\" logic isn't really required, because if the file doesn't exist then get-childitem is not going to find it. Similarly with the check if MP4lenght is true.\nThe following will check if the file does not exist in the source and does exist in the destination and if that is true then write to the host that the file has moved:\n$source = 'C:\\Users\\myuser\\playground\\powershell\\Source\\'\n$destination = 'C:\\Users\\myuser\\playground\\powershell\\Destination'\n$files = Get-ChildItem $source -File | where-object {$_.Length -ne 0}\n\nforeach ($file in $files) {\n\nMove-Item $file.FullName -Destination .\\Destination\n\nif (-not(Test-Path $file.FullName) -and (test-path (Join-Path -Path $destination -ChildPath $file.Name))) {\n Write-Host \"$($file.name) has moved\"\n}\n\n}\n", "Final script:\n $StudentName = Tyler\n $RenderFolder = \"C:\\Users\\MyUser\\Desktop\\Render\"\n $MP4existsToCopy = Get-ChildItem $RenderFolder -File | where-object {$_.Length -ne 0}\n $SchoolFolder = \"C:\\Users\\MyUser\\Desktop\\School Folder\\$StudentName\\$Month. $MonthWrite\\$Day. $DayWrite\"\n \n foreach ($file in $MP4existsToCopy) {\n \n Move-Item $file.FullName -Destination (new-item -type directory -force ($SchoolFolder)) # new-item - Serves to create the folder if it does not exist\n \n if (-not(Test-Path $file.FullName) -and (test-path (Join-Path -Path $SchoolFolder -ChildPath $file.Name))) {\n Write-Host \"$($file.name) was moved!\"\n }\n\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "powershell", "windows" ]
stackoverflow_0074635903_powershell_windows.txt
Q: Square Every Digit of a Number in Python? Square Every Digit of a Number in Python? if we run 9119 through the function, 811181 will come out, because 92 is 81 and 12 is 1. write a code but this not working. def sq(num): words = num.split() # split the text for word in words: # for each word in the line: print(word**2) # print the word num = 9119 sq(num) A: We can use list to split every character of a string, also we can use "end" in "print" to indicate the deliminter in the print out. def sq(num): words = list(str(num)) # split the text for word in words: # for each word in the line: print(int(word)**2, end="") # print the word num = 9119 sq(num) Alternatively return ''.join(str(int(i)**2) for i in str(num)) A: def sq(num): z = ''.join(str(int(i)**2) for i in str(num)) return int(z) A: number=str(input("Enter the number :")) def pc(number): digits = list(number) for j in digits: print(int(j)**2,end="") pc(number) A: We can also support input of negative numbers and zeros. Uses arithmetic operators (% and //) for fun. def sq(num): num = abs(num) #Handle negative numbers output = str((num % 10)**2) #Process rightmost digit while(num > 0): num //= 10 #Remove rightmost digit output = str((num % 10)**2) + output #Add squared digit to output print(output) A: Also you can try this variant: def square_digits(num): return int(''.join(str(int(i)**2) for i in str(num)))
Square Every Digit of a Number in Python?
Square Every Digit of a Number in Python? If we run 9119 through the function, 811181 will come out, because 9^2 is 81 and 1^2 is 1. I wrote this code, but it is not working. def sq(num): words = num.split() # split the text for word in words: # for each word in the line: print(word**2) # print the word num = 9119 sq(num)
[ "We can use list to split every character of a string, also we can use \"end\" in \"print\" to indicate the deliminter in the print out.\ndef sq(num):\n words = list(str(num)) # split the text\n for word in words: # for each word in the line:\n print(int(word)**2, end=\"\") # print the word\n\nnum = 9119\nsq(num)\n\nAlternatively\nreturn ''.join(str(int(i)**2) for i in str(num))\n\n", "def sq(num):\n z = ''.join(str(int(i)**2) for i in str(num))\n return int(z)\n\n", "number=str(input(\"Enter the number :\"))\n\ndef pc(number):\n\n digits = list(number)\n\n for j in digits:\n\n print(int(j)**2,end=\"\")\n \npc(number)\n\n", "We can also support input of negative numbers and zeros. Uses arithmetic operators (% and //) for fun.\ndef sq(num):\n num = abs(num) #Handle negative numbers\n output = str((num % 10)**2) #Process rightmost digit \n\n while(num > 0): \n num //= 10 #Remove rightmost digit \n output = str((num % 10)**2) + output #Add squared digit to output\n print(output)\n\n", "Also you can try this variant:\ndef square_digits(num):\nreturn int(''.join(str(int(i)**2) for i in str(num)))\n" ]
[ 3, 2, 1, 0, 0 ]
[ "def square_digits(num):\n num = str(num)\n result = ''\n for i in num:\n result += str(int(i)**2)\n return int(result)\nvar = square_digits(123)\nprint(var)\n\n" ]
[ -1 ]
[ "numbers", "python", "python_3.x" ]
stackoverflow_0049604549_numbers_python_python_3.x.txt
Q: Android - add extra height to wrap_content I'm using wrap_content on my layout to dynamically change its size depending on the contents; however, the shadows get cut off. I tried using padding, but the shadows are still cut off. Is there a way to add a value to wrap_content, for example wrap_content + 6dp? A: Set marginBottom on the last View in your ViewGroup (e.g. LinearLayout), rather than padding on the ViewGroup/parent. This may be tricky because you wrote that you are changing the content, so Views may be added or removed - keep track of the last child and add or remove its bottom margin in that case. You should probably also take care of the horizontal margins and even the top one, as the "shadow" in fact appears around the whole View, not only below it.
Android - add extra height to wrap_content
I'm using wrap_content on my layout to dynamically change its size depending on the contents; however, the shadows get cut off. I tried using padding, but the shadows are still cut off. Is there a way to add a value to wrap_content, for example wrap_content + 6dp?
[ "Set marginBottom on the last View in your ViewGroup (e.g. LinearLayout), rather than padding on the ViewGroup/parent. This may be tricky because you wrote that you are changing the content, so Views may be added or removed - keep track of the last child and add or remove its bottom margin in that case.\nYou should probably also take care of the horizontal margins and even the top one, as the \"shadow\" in fact appears around the whole View, not only below it.\n" ]
[ 0 ]
[]
[]
[ "android", "xml" ]
stackoverflow_0074656308_android_xml.txt
Q: laravel 5.1 error in validating doc docx type file Hi i am facing a docx type validation problem. I tried $validator = Validator::make($request->all(), [ 'resume' => 'mimes:doc,pdf,docx' ]); It will upload pdf file with no error but whenever i try to upload docx files it gives validation error 'must be a file of type: doc, pdf, docx' any idea A: thanks solved it by allowing zip $validator = Validator::make($request->all(), [ 'resume' => 'mimes:doc,pdf,docx,zip' ]); this is because https://en.wikipedia.org/wiki/Office_Open_XML A: In Laravel 5.6.3., I have solved this using dot(.) sign: $request->validate([ 'file.*' => 'required|file|max:5000|mimes:pdf,docx,doc', ]); A: For Laravel 7+ to validate doc, docx you need to create mimes.php in config directory and add the following content, config/mimes.php <?php return [ 'doc' => array('application/msword', 'application/vnd.ms-office'), 'docx' => array('application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'application/zip'), ];
laravel 5.1 error in validating doc docx type file
Hi, I am facing a docx type validation problem. I tried $validator = Validator::make($request->all(), [ 'resume' => 'mimes:doc,pdf,docx' ]); It will upload a pdf file with no error, but whenever I try to upload docx files it gives the validation error 'must be a file of type: doc, pdf, docx'. Any idea?
[ "thanks solved it by allowing zip\n$validator = Validator::make($request->all(), [\n 'resume' => 'mimes:doc,pdf,docx,zip'\n ]);\n\nthis is because https://en.wikipedia.org/wiki/Office_Open_XML\n", "In Laravel 5.6.3., I have solved this using dot(.) sign:\n$request->validate([\n 'file.*' => 'required|file|max:5000|mimes:pdf,docx,doc',\n]);\n\n", "For Laravel 7+ to validate doc, docx you need to create mimes.php in config directory and add the following content,\nconfig/mimes.php\n<?php\n \nreturn [\n 'doc' => array('application/msword', 'application/vnd.ms-office'),\n 'docx' => array('application/vnd.openxmlformats-officedocument.wordprocessingml.document', 'application/zip'),\n];\n\n" ]
[ 24, 10, 0 ]
[]
[]
[ "docx", "file", "laravel_5.1", "mime_types", "validation" ]
stackoverflow_0033993874_docx_file_laravel_5.1_mime_types_validation.txt
Q: Read unquoted value as string I have an output from a tool, which is almost JSON. {"acquired": None, "avail": True, "cls": "NetworkService", "params": {"address": "192.168.7.193", "password": "root", "username": "root"}} The problem is None and True, is there a possible way to tell jq to treat these as strings? I could preprocess with sed, but it would break if suddenly a third value appeared. A: I don't think jq can help you. But with yq you can transform the input if you parse it as YAML and convert it to JSON: yq --input-format yaml --output-format json file.json Output { "acquired": "None", "avail": true, "cls": "NetworkService", "params": { "address": "192.168.7.193", "password": "root", "username": "root" } } A: For the sake of completeness, the other implementation, kislyuk/yq, which uses jq directly, can also read and convert your input in the same way: yq . file.json { "acquired": "None", "avail": true, "cls": "NetworkService", "params": { "address": "192.168.7.193", "password": "root", "username": "root" } }
Read unquoted value as string
I have an output from a tool, which is almost JSON. {"acquired": None, "avail": True, "cls": "NetworkService", "params": {"address": "192.168.7.193", "password": "root", "username": "root"}} The problem is None and True, is there a possible way to tell jq to treat these as strings? I could preprocess with sed, but it would break if suddenly a third value appeared.
[ "I don't think jq can help you.\nBut with yq you can transform the input if you parse it as YAML and convert it to JSON:\nyq --input-format yaml --output-format json file.json\n\nOutput\n{\n \"acquired\": \"None\",\n \"avail\": true,\n \"cls\": \"NetworkService\",\n \"params\": {\n \"address\": \"192.168.7.193\",\n \"password\": \"root\",\n \"username\": \"root\"\n }\n}\n\n", "For the sake of completeness, the other implementation, kislyuk/yq, which uses jq directly, can also read and convert your input in the same way:\nyq . file.json\n\n{\n \"acquired\": \"None\",\n \"avail\": true,\n \"cls\": \"NetworkService\",\n \"params\": {\n \"address\": \"192.168.7.193\",\n \"password\": \"root\",\n \"username\": \"root\"\n }\n}\n\n" ]
[ 2, 1 ]
[]
[]
[ "jq" ]
stackoverflow_0074656148_jq.txt
Q: I am deleting the object from my database using delete button made in angular 13, but this does not delete it instantly from frontend i need to delete the object real-time from frontend as well as backend my, The object gets deleted from the backend instantly but it do not reflects in the frontend till the page is refreshed //delete component deleteStory(id : string){ console.log(id) this.storyapiService.deleteStory(id).subscribe(); } service.ts deleteStory(id: string): Observable<number>{ return this.http.delete<number>(this.API_URL +id); } //html <button class="btn btn-primary" (click)="deleteStory(story.id)" style="margin-left:5px">Delete </button> A: Try to get data again once you delete the element to refresh the current view. Hope it works! A: After you send the delete request to the backend, the frontend does not react to the result of the request. Therefore, it does nothing. You have to either remove the element from your list in deleteStory() after the call to the backend was successful, or re-fetch all stories from the backend again.
I am deleting the object from my database using a delete button made in Angular 13, but this does not delete it instantly from the frontend
I need to delete the object in real time from the frontend as well as the backend. The object gets deleted from the backend instantly, but it is not reflected in the frontend until the page is refreshed. //delete component deleteStory(id : string){ console.log(id) this.storyapiService.deleteStory(id).subscribe(); } service.ts deleteStory(id: string): Observable<number>{ return this.http.delete<number>(this.API_URL +id); } //html <button class="btn btn-primary" (click)="deleteStory(story.id)" style="margin-left:5px">Delete </button>
[ "Try to get data again once you delete the element to refresh the current view.\nHope it works!\n", "After you send the delete request to the backend, the frontend does not react to the result of the request. Therefore, it does nothing.\nYou have to either remove the element from your list in deleteStory() after the call to the backend was successful, or re-fetch all stories from the backend again.\n" ]
[ 1, 0 ]
[]
[]
[ "angular", "angular_fullstack", "django", "django_rest_framework", "web_development_server" ]
stackoverflow_0074655384_angular_angular_fullstack_django_django_rest_framework_web_development_server.txt
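A minimal sketch of the second answer's first option (removing the element locally once the backend confirms the delete). The stories array name is an assumption for illustration; the original post never shows how the component stores its list:

deleteStory(id: string): void {
  this.storyapiService.deleteStory(id).subscribe({
    next: () => {
      // Drop the deleted story from the local array so the view updates immediately.
      this.stories = this.stories.filter(story => story.id !== id);
    },
    error: (err) => console.error('Delete failed', err),
  });
}

With this change the template re-renders as soon as the HTTP call succeeds, without re-fetching or refreshing the page.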
Q: Invalid image reference registry when using jib to dockerize gradle project I had a problem when dockerize the Gradle project. running gradle jib What went wrong: Execution failed for task ':jib'. Invalid image reference registry.hub.docker.com/myusername/${rootProject.name}, perhaps you should check that the reference is formatted correctly according to https://docs.docker.com/engine/reference/commandline/tag/#extended-description For example, slash-separated name components cannot have uppercase letters Root Module setting.gradle rootProject.name = 'myproject' include 'discovery-service' include 'client-service' include 'notification-service' include 'api-gateway' build.gradle jib { var tag = 'latest' from { image = 'eclipse-temurin:17.0.4.1_1-jre' } to { image = 'registry.hub.docker.com/myUsername/${rootProject.name}' } } One of the service plugins { id 'java' id 'org.springframework.boot' version '2.7.5' id 'io.spring.dependency-management' version '1.0.15.RELEASE' } group = 'com.leekify' version = '0.0.1-SNAPSHOT' sourceCompatibility = '17' configurations { compileOnly { extendsFrom annotationProcessor } } repositories { mavenCentral() } jar { baseName = 'client-service' } dependencies { implementation 'org.springframework.boot:spring-boot-starter' implementation 'org.springframework.boot:spring-boot-starter-data-jpa' .... } tasks.named('test') { useJUnitPlatform() } setting.gradle rootProject.name = 'client-service' A: Use double-quotes instead of single quotes for the jib.to.image string so that Gradle evaluates the variable.
Invalid image reference registry when using jib to dockerize gradle project
I had a problem when dockerize the Gradle project. running gradle jib What went wrong: Execution failed for task ':jib'. Invalid image reference registry.hub.docker.com/myusername/${rootProject.name}, perhaps you should check that the reference is formatted correctly according to https://docs.docker.com/engine/reference/commandline/tag/#extended-description For example, slash-separated name components cannot have uppercase letters Root Module setting.gradle rootProject.name = 'myproject' include 'discovery-service' include 'client-service' include 'notification-service' include 'api-gateway' build.gradle jib { var tag = 'latest' from { image = 'eclipse-temurin:17.0.4.1_1-jre' } to { image = 'registry.hub.docker.com/myUsername/${rootProject.name}' } } One of the service plugins { id 'java' id 'org.springframework.boot' version '2.7.5' id 'io.spring.dependency-management' version '1.0.15.RELEASE' } group = 'com.leekify' version = '0.0.1-SNAPSHOT' sourceCompatibility = '17' configurations { compileOnly { extendsFrom annotationProcessor } } repositories { mavenCentral() } jar { baseName = 'client-service' } dependencies { implementation 'org.springframework.boot:spring-boot-starter' implementation 'org.springframework.boot:spring-boot-starter-data-jpa' .... } tasks.named('test') { useJUnitPlatform() } setting.gradle rootProject.name = 'client-service'
[ "Use double-quotes instead of single quotes for the jib.to.image string so that Gradle evaluates the variable.\n" ]
[ 0 ]
[]
[]
[ "gradle", "java", "jib", "spring", "spring_boot" ]
stackoverflow_0074575239_gradle_java_jib_spring_spring_boot.txt
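For illustration, the jib block from the question with the quoting fix from the answer applied: Groovy only interpolates ${rootProject.name} inside double quotes. Note that Docker repository path components must also be lowercase, so a lowercase username is shown here:

jib {
    from { image = 'eclipse-temurin:17.0.4.1_1-jre' }
    to { image = "registry.hub.docker.com/myusername/${rootProject.name}" }
}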
Q: Write a query to determine how many products have been sold with profit I am very new to SQL. I have three tables, transactions, products, and customers. I want to know how many products have been sold with profit. SELECT t.product_id, p.id, sum(t.total_price / t.quantity) - p.price As profit From transactions as t , products As p INNER JOIN transactions on t.product_id = p.id GROUP by t.product_id I only have a total price column in my transactions table. Should I divide total_price by quantity, or multiply? And is my overall query correct? A: SELECT t.product_id,sum(t.total_price / t.quantity - p.price) as profit FROM transactions as t INNER JOIN product as p on t.id = p.id GROUP by t.product_id order by t.product_id Can you try this way
Write a query to determine how many products have been sold with profit
I am very new to SQL. I have three tables, transactions, products, and customers. I want to know how many products have been sold with profit. SELECT t.product_id, p.id, sum(t.total_price / t.quantity) - p.price As profit From transactions as t , products As p INNER JOIN transactions on t.product_id = p.id GROUP by t.product_id I only have a total price column in my transactions table. Should I divide total_price by quantity, or multiply? And is my overall query correct?
[ "SELECT t.product_id,sum(t.total_price / t.quantity - p.price) as profit\n FROM transactions as t\n INNER JOIN product as p on t.id = p.id\n GROUP by t.product_id\n order by t.product_id\n\nCan you try this way\n" ]
[ 0 ]
[ "SELECT t.product_id,sum(t.total_price / t.quantity - p.price) as profit, \n count(t.quantity) as sold_count\n FROM transactions as t\n INNER JOIN product as p on t.id = p.id\n GROUP by t.product_id\n order by t.product_id \n\n" ]
[ -1 ]
[ "mysql", "select", "sql" ]
stackoverflow_0074656297_mysql_select_sql.txt
Q: How do I write CSV file with dynamic headers in Apache Beam Java I'm new in Apache Beam and I'm working on a job that will run in GCP Dataflow. I need to fetch some data from BigQuery transform it and write a CSV file with headers as result. But I've found myself in a funny scenario. See, the headers of my CSV file are dynamic, they depends on the data that is fetched from BigQuery. So, when I'm constructing the pipeline and trying to define the headers I find a problem, I don't have the headers yet: somePCollection.apply("writing stuff", TextIO.write() .to("gs://some_bucket/somefile_name") .withSuffix(".csv").withHeader(I CAN'T SET THE HEADERS HERE BECAUSE I DON'T HAVE THEM)); Probably you wondering how the data looks like? At this point my Pcollection structure looks like this: user_id, fst_name, lst_name, team_list PCOllection Example: 1111,DANNY,CRUISE, TEAM34,TEAM12,TEAM4 2222,CARLOS,SMITH, TEAM34,TEAM44,TEAM12 33333,SASHA,CONOR, TEAM5,TEAM34,TEAM44 The expected CSV file with headers would look like this: USER_ID,FST_NAME,LST_NAME,TEAM34,TEAM12,TEAM4,TEAM44,TEAM5 1111, DANNY, CRUISE, 1, 1, 1, 0, 0 2222, CARLOS, ,SMITH, 1, 1, 0, 1, 0 33333, SASHA, ,CONOR, 1, 0, 0, 1, 1 As you can see, in the headers I need all the unique teams as columns (obviously the columns can vary between executions) and each row would have 1 or 0 depending of the user is in that team or not. It looks like the headers can only be defined at pipeline construction time. I've been trying to find a way to "cheat" apache beam and accomplish this in a single pipeline, but I'm starting to think that the only way to get this is by executing a separated job/pipeline to "calculate" the headers and write them in somewhere so I can use them as input in other pipeline. I refuse to think I'm the first person who has had to deal with this scenario, so I was wondering if somebody has any idea to solve this. Doing this with plain Java is quite simple.... but with Apache Beam it's another story. I appreciate any help. A: I don't think this can be accomplished with TextIO today. Sounds like you need some processing prior to grab all possible teams, not per-record, so it's not easy to pull that off going more custom with FileIO. The separate pipeline should work fine, but you'll be reading all the data twice. I'm not too familiar with Python SDK / Beam DataFrames yet, but what you are trying to do (one-hot encode) sounds reasonable to do with pandas and is even mentioned at Data pipeline for ML if a switch to Python is allowed.
How do I write CSV file with dynamic headers in Apache Beam Java
I'm new in Apache Beam and I'm working on a job that will run in GCP Dataflow. I need to fetch some data from BigQuery transform it and write a CSV file with headers as result. But I've found myself in a funny scenario. See, the headers of my CSV file are dynamic, they depends on the data that is fetched from BigQuery. So, when I'm constructing the pipeline and trying to define the headers I find a problem, I don't have the headers yet: somePCollection.apply("writing stuff", TextIO.write() .to("gs://some_bucket/somefile_name") .withSuffix(".csv").withHeader(I CAN'T SET THE HEADERS HERE BECAUSE I DON'T HAVE THEM)); Probably you wondering how the data looks like? At this point my Pcollection structure looks like this: user_id, fst_name, lst_name, team_list PCOllection Example: 1111,DANNY,CRUISE, TEAM34,TEAM12,TEAM4 2222,CARLOS,SMITH, TEAM34,TEAM44,TEAM12 33333,SASHA,CONOR, TEAM5,TEAM34,TEAM44 The expected CSV file with headers would look like this: USER_ID,FST_NAME,LST_NAME,TEAM34,TEAM12,TEAM4,TEAM44,TEAM5 1111, DANNY, CRUISE, 1, 1, 1, 0, 0 2222, CARLOS, ,SMITH, 1, 1, 0, 1, 0 33333, SASHA, ,CONOR, 1, 0, 0, 1, 1 As you can see, in the headers I need all the unique teams as columns (obviously the columns can vary between executions) and each row would have 1 or 0 depending of the user is in that team or not. It looks like the headers can only be defined at pipeline construction time. I've been trying to find a way to "cheat" apache beam and accomplish this in a single pipeline, but I'm starting to think that the only way to get this is by executing a separated job/pipeline to "calculate" the headers and write them in somewhere so I can use them as input in other pipeline. I refuse to think I'm the first person who has had to deal with this scenario, so I was wondering if somebody has any idea to solve this. Doing this with plain Java is quite simple.... but with Apache Beam it's another story. I appreciate any help.
[ "I don't think this can be accomplished with TextIO today. Sounds like you need some processing prior to grab all possible teams, not per-record, so it's not easy to pull that off going more custom with FileIO.\nThe separate pipeline should work fine, but you'll be reading all the data twice.\nI'm not too familiar with Python SDK / Beam DataFrames yet, but what you are trying to do (one-hot encode) sounds reasonable to do with pandas and is even mentioned at Data pipeline for ML if a switch to Python is allowed.\n" ]
[ 1 ]
[]
[]
[ "apache_beam", "google_cloud_dataflow", "java" ]
stackoverflow_0074645043_apache_beam_google_cloud_dataflow_java.txt
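As a rough illustration of the pandas one-hot encoding the answer points to (a standalone sketch, not Beam code; column names and values are taken from the example data in the question):

import pandas as pd

# Rows shaped like the PCollection described in the question.
df = pd.DataFrame({
    "user_id": [1111, 2222, 33333],
    "fst_name": ["DANNY", "CARLOS", "SASHA"],
    "lst_name": ["CRUISE", "SMITH", "CONOR"],
    "team_list": ["TEAM34,TEAM12,TEAM4", "TEAM34,TEAM44,TEAM12", "TEAM5,TEAM34,TEAM44"],
})

# str.get_dummies turns the comma-separated team list into one 0/1 column per team,
# so the dynamic part of the header is derived from the data itself.
teams = df["team_list"].str.get_dummies(sep=",")
result = pd.concat([df.drop(columns="team_list"), teams], axis=1)
result.to_csv("users_by_team.csv", index=False)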
Q: Error in `fct_reorder()`: ! `.f` must be a factor or character vector, not a data frame - trying to reorder a bar plot I have a summary table with means for 4 variables from a dataset with 940 rows: activity_means <- activity_daily_clean %>% summarize(sedentary = mean(sedentary_minutes), lightly_active = mean(lightly_active_minutes), fairly_active = mean(fairly_active_minutes), very_active = mean(very_active_minutes)) I want to plot them into a simple bar plot, but the levels of activity intensity (sedentary - lightly active - fairly active - very active) appear disorganized: act_means_df <- data.frame( activity_intensity=c("sedentary", "lightly active", "fairly active", "very active"), intens_means=c(991.2106, 192.8128, 13.56489, 21.16489) ) ggplot(act_means_df)+ geom_col(aes(x=activity_intensity, y=intens_means)) I tried following the guide in the R Graph Gallery to reorder a bar plot following the values from the second variable: act_means_df <- data.frame( activity_intensity=c("sedentary", "lightly active", "fairly active", "very active"), intens_means=c(991.2106, 192.8128, 13.56489, 21.16489) ) %>% mutate(f_act_int = factor(activity_intensity)) act_means_df %>% fct_reorder(f_act_int, intens_means) %>% ggplot(aes(x=f_act_int, y=intens_means))+ geom_bar(stat="identity", fill="#f68060", alpha=.6, width=.4) + coord_flip() + xlab("") + theme_bw() But the following error appears when I run the last chunk: Error in fct_reorder(): ! .f must be a factor or character vector, not a data frame I confirmed whether f_act_int is a factor with: str(act_means_df): 'data.frame': 4 obs. of 3 variables: $ activity_intensity: chr "sedentary" "lightly active" "fairly active" "very active" $ intens_means : num 991.2 192.8 13.6 21.2 $ f_act_int : Factor w/ 4 levels "fairly active",..: 3 2 1 4 But when I try to inspect the object by itself with class(f_act_int), the error message says "object 'f_act_int' not found". Does anybody know what I am missing? A: You need to use fct_reorder within dplyr::mutate(). This isn't the only way to solve this issue, but the gist of your error is that you're passing your whole data frame, when it expects a factor or character vector. Instead, try this: act_means_df %>% mutate(fa_act_in = fct_reorder(f_act_int, intens_means)) %>% ggplot(aes(x=f_act_int, y=intens_means))+ geom_bar(stat="identity", fill="#f68060", alpha=.6, width=.4) + coord_flip() + xlab("") + theme_bw() There is no object in your global environment called f_act_in; When you call fct_reorder() within mutate(), R looks inside your data frame for a column with that name, upon which performs the operation in order to return an updated data frame. This updated data frame is then passed to ggplot().
Error in `fct_reorder()`: ! `.f` must be a factor or character vector, not a data frame - trying to reorder a bar plot
I have a summary table with means for 4 variables from a dataset with 940 rows: activity_means <- activity_daily_clean %>% summarize(sedentary = mean(sedentary_minutes), lightly_active = mean(lightly_active_minutes), fairly_active = mean(fairly_active_minutes), very_active = mean(very_active_minutes)) I want to plot them into a simple bar plot, but the levels of activity intensity (sedentary - lightly active - fairly active - very active) appear disorganized: act_means_df <- data.frame( activity_intensity=c("sedentary", "lightly active", "fairly active", "very active"), intens_means=c(991.2106, 192.8128, 13.56489, 21.16489) ) ggplot(act_means_df)+ geom_col(aes(x=activity_intensity, y=intens_means)) I tried following the guide in the R Graph Gallery to reorder a bar plot following the values from the second variable: act_means_df <- data.frame( activity_intensity=c("sedentary", "lightly active", "fairly active", "very active"), intens_means=c(991.2106, 192.8128, 13.56489, 21.16489) ) %>% mutate(f_act_int = factor(activity_intensity)) act_means_df %>% fct_reorder(f_act_int, intens_means) %>% ggplot(aes(x=f_act_int, y=intens_means))+ geom_bar(stat="identity", fill="#f68060", alpha=.6, width=.4) + coord_flip() + xlab("") + theme_bw() But the following error appears when I run the last chunk: Error in fct_reorder(): ! .f must be a factor or character vector, not a data frame I confirmed whether f_act_int is a factor with: str(act_means_df): 'data.frame': 4 obs. of 3 variables: $ activity_intensity: chr "sedentary" "lightly active" "fairly active" "very active" $ intens_means : num 991.2 192.8 13.6 21.2 $ f_act_int : Factor w/ 4 levels "fairly active",..: 3 2 1 4 But when I try to inspect the object by itself with class(f_act_int), the error message says "object 'f_act_int' not found". Does anybody know what I am missing?
[ "You need to use fct_reorder within dplyr::mutate().\nThis isn't the only way to solve this issue, but the gist of your error is that you're passing your whole data frame, when it expects a factor or character vector. Instead, try this:\nact_means_df %>%\n mutate(fa_act_in = fct_reorder(f_act_int, intens_means)) %>%\n ggplot(aes(x=f_act_int, y=intens_means))+\n geom_bar(stat=\"identity\", fill=\"#f68060\", alpha=.6, width=.4) +\n coord_flip() +\n xlab(\"\") +\n theme_bw()\n\nThere is no object in your global environment called f_act_in; When you call fct_reorder() within mutate(), R looks inside your data frame for a column with that name, upon which performs the operation in order to return an updated data frame. This updated data frame is then passed to ggplot().\n" ]
[ 2 ]
[]
[]
[ "bar_chart", "forcats", "r" ]
stackoverflow_0074656557_bar_chart_forcats_r.txt
Q: Build Gradle-Firebase problem in android studio This is the Firebase build.gradle, and this is my build.gradle. What can I do about this problem? There are so many differences between them, and I think something is missing in my build.gradle. Is this because of an old Android version, or am I looking at the wrong Gradle file? A: Android Studio has changed a lot in recent updates when it comes to Gradle files, and older tutorials and documentation won't match. In your case, if your goal is to connect Firebase, simply do it by navigating to Tools >> Firebase. Refer to this tutorial: https://www.youtube.com/watch?v=aiX8bMPX_t8
Build Gradle-Firebase problem in android studio
This is the Firebase build.gradle, and this is my build.gradle. What can I do about this problem? There are so many differences between them, and I think something is missing in my build.gradle. Is this because of an old Android version, or am I looking at the wrong Gradle file?
[ "Android Studio has changed a lot in recent updates when it comes to Gradle files, and older tutorials and documentation won't match. In your case, if your goal is to connect Firebase, simply do it by navigating to Tools >> Firebase.\nRefer to this tutorial: https://www.youtube.com/watch?v=aiX8bMPX_t8\n" ]
[ 0 ]
[]
[]
[ "android", "android_studio", "build.gradle", "firebase", "java" ]
stackoverflow_0074648943_android_android_studio_build.gradle_firebase_java.txt
Q: Pandas: getting a different behavior when doing .loc with list of features or with slice of features I noticed that I'm getting a different behavior when using .loc of pandas... When using a slice for selecting columns: X_train = df.loc[:, 'col_0':'col_n'] no issue But when using: X_train = df.loc[:, features] where featuers is the list of features I used in the slice in above example ('col_0', ...to... ,'col_n') I'm getting: KeyError: "Passing list-likes to .loc or [] with any missing labels is no longer supported. The following labels were missing: Index(['col_0', 'col_1', 'col_2'], dtype='object') Note: col_0, col_1, col_2 where inserted to the df I had tried to reindex or reset index but it didn't help! Another weird behavior, that the features importance (of My machine learning model) is different when using the different .loc methods! What can be the issue? Is there a difference between both methods? A: The issue comes from the missing labels Use columns.intersection: df.loc[:, df.columns.intersection(features)] Or, if you want to add the missing labels: df.reindex(columns=features)
Pandas: getting a different behavior when doing .loc with list of features or with slice of features
I noticed that I'm getting a different behavior when using .loc of pandas... When using a slice for selecting columns: X_train = df.loc[:, 'col_0':'col_n'] no issue But when using: X_train = df.loc[:, features] where features is the list of features I used in the slice in the above example ('col_0', ...to... ,'col_n') I'm getting: KeyError: "Passing list-likes to .loc or [] with any missing labels is no longer supported. The following labels were missing: Index(['col_0', 'col_1', 'col_2'], dtype='object') Note: col_0, col_1, col_2 were inserted into the df I have tried to reindex or reset the index but it didn't help! Another weird behavior is that the feature importances (of my machine learning model) are different when using the different .loc methods! What can the issue be? Is there a difference between the two methods?
[ "The issue comes from the missing labels\nUse columns.intersection:\ndf.loc[:, df.columns.intersection(features)]\n\nOr, if you want to add the missing labels:\ndf.reindex(columns=features)\n\n" ]
[ 1 ]
[]
[]
[ "pandas" ]
stackoverflow_0074656714_pandas.txt
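A small runnable sketch of the difference between the two fixes in the answer (column names are illustrative, not from the original data):

import pandas as pd

df = pd.DataFrame({'col_0': [1, 2], 'col_1': [3, 4]})
features = ['col_0', 'col_1', 'col_2']  # col_2 is deliberately missing from df

kept = df.loc[:, df.columns.intersection(features)]  # silently drops the missing label
padded = df.reindex(columns=features)                # keeps col_2, filled with NaN

print(list(kept.columns))    # ['col_0', 'col_1']
print(list(padded.columns))  # ['col_0', 'col_1', 'col_2']

One possible reason for the different feature importances mentioned in the question is that the two selection styles do not necessarily yield the same set of columns, so the model is not trained on identical inputs.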
Q: How to set JDK 19 Oracle for AWS Cloud9 by option (not by install)? I checked ec2-user:~/environment $ java -version openjdk version "11.0.17" 2022-10-18 LTS OpenJDK Runtime Environment Corretto-11.0.17.8.1 (build 11.0.17+8-LTS) OpenJDK 64-Bit Server VM Corretto-11.0.17.8.1 (build 11.0.17+8-LTS, mixed mode) ec2-user:~/environment $ javac --version javac 11.0.17 ec2-user:~/environment $ How to set JDK 19 Oracle for AWS Cloud9 by option (not by install)? A: Unclear what environment you're using, but in either case of EC2, or SSH, you'll need to install JDK 19 using the appropriate OS package manager, or tools such as SDKMan. Gradle nor the IDE will download it for you, nor will it be the default as long as Java 11 (or 17) are considered LTS. Best to create your own AMI with it pre-installed. Or otherwise, use a Dockerfile with Gradle and JDK 19 to compile and/or run your code.
How to set JDK 19 Oracle for AWS Cloud9 by option (not by install)?
I checked ec2-user:~/environment $ java -version openjdk version "11.0.17" 2022-10-18 LTS OpenJDK Runtime Environment Corretto-11.0.17.8.1 (build 11.0.17+8-LTS) OpenJDK 64-Bit Server VM Corretto-11.0.17.8.1 (build 11.0.17+8-LTS, mixed mode) ec2-user:~/environment $ javac --version javac 11.0.17 ec2-user:~/environment $ How to set JDK 19 Oracle for AWS Cloud9 by option (not by install)?
[ "Unclear what environment you're using, but in either case of EC2, or SSH, you'll need to install JDK 19 using the appropriate OS package manager, or tools such as SDKMan. Gradle nor the IDE will download it for you, nor will it be the default as long as Java 11 (or 17) are considered LTS. Best to create your own AMI with it pre-installed.\nOr otherwise, use a Dockerfile with Gradle and JDK 19 to compile and/or run your code.\n" ]
[ 0 ]
[]
[]
[ "amazon_web_services", "aws_cloud9", "java" ]
stackoverflow_0074559189_amazon_web_services_aws_cloud9_java.txt
Q: Align to right side of div How would I align the <a> element, containing the Button with the string Push Me to the right side of the <div> (<Paper>)? https://codesandbox.io/s/eager-noyce-j356qe The application is in ´demo.tsx` file. Note: I cannot set the position of the button to absolute and then position it right: 0px because I need to adjust the height of the <div> (<Paper>) according to its content, which includes the height of the <Button>. A: To align an element to the right side of a container, you can use the text-align: right CSS property on the container element. This will align all of the elements inside the container to the right side. In your case, you can add the text-align: right property to the element, which will align the element (which contains the ) to the right side of the element. Here is an example of how you could do this: import * as React from 'react'; import { Paper, Button } from '@material-ui/core'; function Demo() { return ( <Paper style={{ textAlign: 'right' }}> <a href="#"> <Button variant="contained">Push Me</Button> </a> </Paper> ); } In this code, the textAlign: 'right' property is added to the style attribute of the element, which aligns the element (and the inside it) to the right side of the element. Note that you can also use the className attribute of the element to apply a CSS class that contains the text-align: right property, instead of adding the property directly to the style attribute. This can help keep your React components clean and separate the styles from the component logic. A: Based on your demo i'd suggest you'd wrap the text in a <div> and give display: flex property for your <Paper>. Then you could use flex-gow: 2 on that div <Paper style={{ display: 'flex' }}> <div style={{ flexGrow: '2' }}> <p>This is some text </p> <p>This is even more text </p> </div> <Link component="button" variant="body2" onClick={() => { console.info("I'm a button."); }} > <Button variant='contained'> Push Me </Button> </Link> </Paper> Here's the forked Codesandbox More about flexbox if you're not familiar with it
Align to right side of div
How would I align the <a> element, containing the Button with the string Push Me to the right side of the <div> (<Paper>)? https://codesandbox.io/s/eager-noyce-j356qe The application is in ´demo.tsx` file. Note: I cannot set the position of the button to absolute and then position it right: 0px because I need to adjust the height of the <div> (<Paper>) according to its content, which includes the height of the <Button>.
[ "To align an element to the right side of a container, you can use the text-align: right CSS property on the container element. This will align all of the elements inside the container to the right side.\nIn your case, you can add the text-align: right property to the element, which will align the element (which contains the ) to the right side of the element.\nHere is an example of how you could do this:\nimport * as React from 'react';\nimport { Paper, Button } from '@material-ui/core';\n\nfunction Demo() {\n return (\n <Paper style={{ textAlign: 'right' }}>\n <a href=\"#\">\n <Button variant=\"contained\">Push Me</Button>\n </a>\n </Paper>\n );\n}\n\nIn this code, the textAlign: 'right' property is added to the style attribute of the element, which aligns the element (and the inside it) to the right side of the element.\nNote that you can also use the className attribute of the element to apply a CSS class that contains the text-align: right property, instead of adding the property directly to the style attribute. This can help keep your React components clean and separate the styles from the component logic.\n", "Based on your demo i'd suggest you'd wrap the text in a <div> and give display: flex property for your <Paper>. Then you could use flex-gow: 2 on that div\n<Paper style={{ display: 'flex' }}>\n <div style={{ flexGrow: '2' }}>\n <p>This is some text </p>\n <p>This is even more text </p>\n </div>\n <Link\n component=\"button\"\n variant=\"body2\"\n onClick={() => {\n console.info(\"I'm a button.\");\n }}\n >\n <Button\n variant='contained'>\n Push Me\n </Button>\n </Link>\n</Paper>\n\nHere's the forked Codesandbox\nMore about flexbox if you're not familiar with it\n" ]
[ 1, 1 ]
[]
[]
[ "css", "dom", "javascript", "material_ui", "reactjs" ]
stackoverflow_0074655983_css_dom_javascript_material_ui_reactjs.txt
Q: NodeJS app crashes when requests MSSQL DB Good day everyone! I have an old app from dev that now isn't working in our company. I need to start this app but don't have enough experience in NodeJS (I don't have it at all, TBH). The problem is: I can build a docker image, start it, and use the app, but when I make something that requires to make a request to MQSQL server, the app crashes. But there are no issues with requests to Postgres DB. This is my docker build output docker build -t fixver . Sending build context to Docker daemon 2.427MB Step 1/21 : FROM node:16-alpine as base ---> c4ee3c9d7bc1 Step 2/21 : ARG NODE_ENV=production ---> Using cache ---> ba79cfac2e2c Step 3/21 : ENV NODE_ENV=${NODE_ENV} NODE_OPTIONS="--max_old_space_size=8192" ---> Using cache ---> 1a344f8791d8 Step 4/21 : WORKDIR /usr/src/app ---> Using cache ---> 1c591a772bcd Step 5/21 : FROM base as clientBuilder ---> 1c591a772bcd Step 6/21 : COPY ./client/package.json ./client/yarn.lock ./ ---> Using cache ---> 7382b944fcc0 Step 7/21 : RUN yarn install --production=false --frozen-lockfile ---> Using cache ---> 431700be035b Step 8/21 : COPY ./client . ---> Using cache ---> 87790e7c061a Step 9/21 : RUN yarn build ---> Using cache ---> 9ba70dd8301c Step 10/21 : FROM base as serverBuilder ---> 1c591a772bcd Step 11/21 : COPY ./server/package.json ./server/yarn.lock ./ ---> Using cache ---> bf5dc70ee2eb Step 12/21 : RUN yarn install --production=false --frozen-lockfile ---> Using cache ---> c26b02f2af5c Step 13/21 : COPY ./server . ---> 16fdc772d650 Step 14/21 : RUN yarn build ---> Running in 676753d20a77 yarn run v1.22.19 $ node_modules/.bin/rimraf dist $ node_modules/.bin/nest build Done in 5.75s. Removing intermediate container 676753d20a77 ---> 66a53b6af7cd Step 15/21 : FROM base as production ---> 1c591a772bcd Step 16/21 : COPY ./server/package.json ./server/.env ./ ---> dbbbe7295cfd Step 17/21 : RUN yarn install --pure-lockfile ---> Running in f3b965d1ec0d yarn install v1.22.19 info No lockfile found. [1/4] Resolving packages... warning Resolution field "[email protected]" is incompatible with requested version "uuid@^3.1.0" warning jest > jest-cli > jest-config > jest-environment-jsdom > jsdom > [email protected]: Use your platform's native performance.now() and performance.timeOrigin. [2/4] Fetching packages... [3/4] Linking dependencies... warning " > [email protected]" has unmet peer dependency "webpack@^5.0.0". [4/4] Building fresh packages... Done in 39.04s. Removing intermediate container f3b965d1ec0d ---> 8d804071a8d2 Step 18/21 : COPY --from=serverBuilder /usr/src/app/dist ./ ---> f760fda4b1a1 Step 19/21 : COPY --from=clientBuilder /usr/src/app/build ./public ---> 4f697bab8bc6 Step 20/21 : EXPOSE 80 ---> Running in 95fae0df9266 Removing intermediate container 95fae0df9266 ---> 4bebff7957bc Step 21/21 : CMD ["node", "./main"] ---> Running in 19b4df36b37e Removing intermediate container 19b4df36b37e ---> 362fbadf0b4a Successfully built 362fbadf0b4a Successfully tagged fixver:latest Starting the app docker run -p 7007:80 fixver [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [NestFactory] Starting Nest application... 
[Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] AppModule dependencies initialized +60ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] PassportModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ServeStaticModule dependencies initialized +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] AuthModule dependencies initialized +8ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] PagesModule dependencies initialized +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ScriptsModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RoutesResolver] ScriptsController {/api/scripts}: +8ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts, GET} route +4ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts/:id, GET} route +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts/:id/check, POST} route +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts/:id/apply, POST} route +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RoutesResolver] PagesController {/api/pages}: +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/pages/:id, GET} route +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [NestApplication] Nest application successfully started +5ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG Server started! Port: 80 And error right after making request to MSSQL node:events:491 throw er; // Unhandled 'error' event ^ Error: No event 'featureExtAck' in state 'SentLogin7WithNTLMLogin' at Connection.dispatchEvent (/usr/src/app/node_modules/tedious/lib/connection.js:1663:26) at Parser.<anonymous> (/usr/src/app/node_modules/tedious/lib/connection.js:1224:12) at Parser.emit (node:events:513:28) at Readable.<anonymous> (/usr/src/app/node_modules/tedious/lib/token/token-stream-parser.js:27:14) at Readable.emit (node:events:513:28) at addChunk (node:internal/streams/readable:315:12) at readableAddChunk (node:internal/streams/readable:289:9) at Readable.push (node:internal/streams/readable:228:10) at next (node:internal/streams/from:98:31) at processTicksAndRejections (node:internal/process/task_queues:96:5) Emitted 'error' event on Readable instance at: at emitErrorNT (node:internal/streams/destroy:157:8) at emitErrorCloseNT (node:internal/streams/destroy:122:3) at processTicksAndRejections (node:internal/process/task_queues:83:21) I really don't know what to do with it, how to debug it and find the problem. Any help and advise is appreciated. A: Problem was solved by changing this code in Dockerfile -COPY ./server/package.json ./server/.env ./ -RUN yarn install --pure-lockfile +COPY ./server/package.json ./server/yarn.lock ./server/.env ./ +RUN yarn install --production=true --frozen-lockfile
NodeJS app crashes when requests MSSQL DB
Good day everyone! I have an old app from dev that now isn't working in our company. I need to start this app but don't have enough experience in NodeJS (I don't have it at all, TBH). The problem is: I can build a docker image, start it, and use the app, but when I make something that requires to make a request to MQSQL server, the app crashes. But there are no issues with requests to Postgres DB. This is my docker build output docker build -t fixver . Sending build context to Docker daemon 2.427MB Step 1/21 : FROM node:16-alpine as base ---> c4ee3c9d7bc1 Step 2/21 : ARG NODE_ENV=production ---> Using cache ---> ba79cfac2e2c Step 3/21 : ENV NODE_ENV=${NODE_ENV} NODE_OPTIONS="--max_old_space_size=8192" ---> Using cache ---> 1a344f8791d8 Step 4/21 : WORKDIR /usr/src/app ---> Using cache ---> 1c591a772bcd Step 5/21 : FROM base as clientBuilder ---> 1c591a772bcd Step 6/21 : COPY ./client/package.json ./client/yarn.lock ./ ---> Using cache ---> 7382b944fcc0 Step 7/21 : RUN yarn install --production=false --frozen-lockfile ---> Using cache ---> 431700be035b Step 8/21 : COPY ./client . ---> Using cache ---> 87790e7c061a Step 9/21 : RUN yarn build ---> Using cache ---> 9ba70dd8301c Step 10/21 : FROM base as serverBuilder ---> 1c591a772bcd Step 11/21 : COPY ./server/package.json ./server/yarn.lock ./ ---> Using cache ---> bf5dc70ee2eb Step 12/21 : RUN yarn install --production=false --frozen-lockfile ---> Using cache ---> c26b02f2af5c Step 13/21 : COPY ./server . ---> 16fdc772d650 Step 14/21 : RUN yarn build ---> Running in 676753d20a77 yarn run v1.22.19 $ node_modules/.bin/rimraf dist $ node_modules/.bin/nest build Done in 5.75s. Removing intermediate container 676753d20a77 ---> 66a53b6af7cd Step 15/21 : FROM base as production ---> 1c591a772bcd Step 16/21 : COPY ./server/package.json ./server/.env ./ ---> dbbbe7295cfd Step 17/21 : RUN yarn install --pure-lockfile ---> Running in f3b965d1ec0d yarn install v1.22.19 info No lockfile found. [1/4] Resolving packages... warning Resolution field "[email protected]" is incompatible with requested version "uuid@^3.1.0" warning jest > jest-cli > jest-config > jest-environment-jsdom > jsdom > [email protected]: Use your platform's native performance.now() and performance.timeOrigin. [2/4] Fetching packages... [3/4] Linking dependencies... warning " > [email protected]" has unmet peer dependency "webpack@^5.0.0". [4/4] Building fresh packages... Done in 39.04s. Removing intermediate container f3b965d1ec0d ---> 8d804071a8d2 Step 18/21 : COPY --from=serverBuilder /usr/src/app/dist ./ ---> f760fda4b1a1 Step 19/21 : COPY --from=clientBuilder /usr/src/app/build ./public ---> 4f697bab8bc6 Step 20/21 : EXPOSE 80 ---> Running in 95fae0df9266 Removing intermediate container 95fae0df9266 ---> 4bebff7957bc Step 21/21 : CMD ["node", "./main"] ---> Running in 19b4df36b37e Removing intermediate container 19b4df36b37e ---> 362fbadf0b4a Successfully built 362fbadf0b4a Successfully tagged fixver:latest Starting the app docker run -p 7007:80 fixver [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [NestFactory] Starting Nest application... 
[Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] AppModule dependencies initialized +60ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] PassportModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ConfigHostModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ServeStaticModule dependencies initialized +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ConfigModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] AuthModule dependencies initialized +8ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] PagesModule dependencies initialized +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [InstanceLoader] ScriptsModule dependencies initialized +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RoutesResolver] ScriptsController {/api/scripts}: +8ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts, GET} route +4ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts/:id, GET} route +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts/:id/check, POST} route +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/scripts/:id/apply, POST} route +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RoutesResolver] PagesController {/api/pages}: +1ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [RouterExplorer] Mapped {/api/pages/:id, GET} route +0ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG [NestApplication] Nest application successfully started +5ms [Nest] 1 - 12/01/2022, 1:49:54 PM LOG Server started! Port: 80 And error right after making request to MSSQL node:events:491 throw er; // Unhandled 'error' event ^ Error: No event 'featureExtAck' in state 'SentLogin7WithNTLMLogin' at Connection.dispatchEvent (/usr/src/app/node_modules/tedious/lib/connection.js:1663:26) at Parser.<anonymous> (/usr/src/app/node_modules/tedious/lib/connection.js:1224:12) at Parser.emit (node:events:513:28) at Readable.<anonymous> (/usr/src/app/node_modules/tedious/lib/token/token-stream-parser.js:27:14) at Readable.emit (node:events:513:28) at addChunk (node:internal/streams/readable:315:12) at readableAddChunk (node:internal/streams/readable:289:9) at Readable.push (node:internal/streams/readable:228:10) at next (node:internal/streams/from:98:31) at processTicksAndRejections (node:internal/process/task_queues:96:5) Emitted 'error' event on Readable instance at: at emitErrorNT (node:internal/streams/destroy:157:8) at emitErrorCloseNT (node:internal/streams/destroy:122:3) at processTicksAndRejections (node:internal/process/task_queues:83:21) I really don't know what to do with it, how to debug it and find the problem. Any help and advise is appreciated.
[ "Problem was solved by changing this code in Dockerfile\n-COPY ./server/package.json ./server/.env ./\n-RUN yarn install --pure-lockfile\n+COPY ./server/package.json ./server/yarn.lock ./server/.env ./\n+RUN yarn install --production=true --frozen-lockfile\n\n" ]
[ 0 ]
[]
[]
[ "docker", "nestjs", "node.js", "reactjs", "sql_server" ]
stackoverflow_0074643281_docker_nestjs_node.js_reactjs_sql_server.txt
Q: Bloom Filter Index on Delta Table Is is possible to create a Bloom filter index in Databricks on a Delta table using the filepath and not on the hive table referencing to that file location? I tried the following: CREATE BLOOMFILTER INDEX ON TABLE delta.'gs://GCS_Bucket/Delta_Folder_Path' FOR COLUMNS(colname OPTIONS(fpp=0.1, numItems=100)) But it doesn't work. I get the following error: ParseException: no viable alternative at input 'CREATE BLOOMFILTER'(line 1, pos 7) == SQL == CREATE BLOOMFILTER INDEX -------^^^ ON TABLE delta.'gs://GCS_Bucket/Delta_Folder_Path' FOR COLUMNS(LOT_W OPTIONS(fpp=0.1, numItems=100)) Replacing the delta.'gs://GCS_Bucket/Delta_Folder_Path' with a hive external table that references the file works as expected. All the examples that I found first create a table of it, and then create the bloom filter index. But this is not what we want. We only want to have tables that are in the gold layer and some in silver available in hive. The table that I want to add a bloom filter index on should not be in hive. Edit: This is on Databricks runtime 10.4 LTS A: Most probably error is arising because you're using ordinary quotes instead of backquotes for path (doc). Try: CREATE BLOOMFILTER INDEX ON TABLE delta.`gs://GCS_Bucket/Delta_Folder_Path` FOR COLUMNS(colname OPTIONS(fpp=0.1, numItems=100)) P.S. Error message points to the incorrect place, I think that it's known issue
Bloom Filter Index on Delta Table
Is it possible to create a Bloom filter index in Databricks on a Delta table using the file path and not on the Hive table referencing that file location? I tried the following: CREATE BLOOMFILTER INDEX ON TABLE delta.'gs://GCS_Bucket/Delta_Folder_Path' FOR COLUMNS(colname OPTIONS(fpp=0.1, numItems=100)) But it doesn't work. I get the following error: ParseException: no viable alternative at input 'CREATE BLOOMFILTER'(line 1, pos 7) == SQL == CREATE BLOOMFILTER INDEX -------^^^ ON TABLE delta.'gs://GCS_Bucket/Delta_Folder_Path' FOR COLUMNS(LOT_W OPTIONS(fpp=0.1, numItems=100)) Replacing the delta.'gs://GCS_Bucket/Delta_Folder_Path' with a Hive external table that references the file works as expected. All the examples that I found first create a table for it, and then create the bloom filter index. But this is not what we want. We only want to have tables that are in the gold layer and some in silver available in Hive. The table that I want to add a bloom filter index on should not be in Hive. Edit: This is on Databricks runtime 10.4 LTS
[ "Most probably error is arising because you're using ordinary quotes instead of backquotes for path (doc). Try:\nCREATE BLOOMFILTER INDEX \nON TABLE delta.`gs://GCS_Bucket/Delta_Folder_Path`\nFOR COLUMNS(colname OPTIONS(fpp=0.1, numItems=100))\n\nP.S. Error message points to the incorrect place, I think that it's known issue\n" ]
[ 1 ]
[]
[]
[ "bloom_filter", "databricks", "delta_lake", "hive" ]
stackoverflow_0074654790_bloom_filter_databricks_delta_lake_hive.txt
Q: looping through a data frame Python I have this data frame where I sliced columns from the original data frame: Type 1 Attack Grass 62 Grass 82 Dragon 100 Fire 52 Rock 100 I want to create each Pokemon’s adjusted attack attribute against grass Pokemon based on ‘Type 1’ where; the attack attribute is doubled if grass Pokemon are bad against that type halved if they are good against that type else remains the same. I have looping through the data: grass_attack = [] for value in df_["Type 1"]: if value ==["Type 1 == Fire"] or value==["Type 1 == Flying"] or value==["Type 1 == Poison"] or value==["Type 1 == Bug"] or value==["Type1== Steel"] or value ==["Type 1 == Grass"] or value ==["Type 1 == Dragon"]: result.append(df_["Attack"]/2) elif value==["Type 1==Ground"] or value==["Type1== Ground"] or value==["Type 1 == Water"]: grass_attack.append(df_["Attack"]*2) else: grass_attack.append(df_["Attack"]) df_["grass_attack"] = grass_attack print(df_) but I got some crazy results after this. How can I efficiently loop through a data frame's column in order to adjust another column? or is there another way to do this? A: There is some issues with your code as @azro pointed in the comments and there is no need for a loop here. You can simply use numpy.select to create a multi-conditionnal column. Here is an example to give you the general logic : df["Attack"] = df["Attack"].astype(int) conditions = [df["Type 1"].eq("Grass"), df["Type 1"].isin(["Fire", "Rock"])] choices = [df["Attack"].div(2), df["Attack"].mul(2)] df["grass_attack"] = np.select(conditions, choices, default=df["Attack"]).astype(int) # Output : print(df) Type 1 Attack grass_attack 0 Grass 62 31 1 Grass 82 41 2 Dragon 100 100 3 Fire 52 104 4 Rock 100 200 A: You could use apply to do the necessary calculations. In the following code, the modify_Attack() function is used to calculate the Grass Attack values based on the Type1 and Attack values. Type 1 values that are in the bad list will have their attack values halved. Type 1 values that are in the good list will have their attack values doubled. All other attack values will remain unchanged. Here is the code: import pandas as pd # Create dataframe df = pd.DataFrame({ 'Type 1': ['Grass', 'Grass', 'Dragon', 'Fire', 'Rock'], 'Attack': [62, 82, 100, 52, 100]}) # Function to modify the Attack value based on the Type 1 value def modify_Attack(type_val, attack_val): bad = ['Fire', 'Flying', 'Poison', 'Bug', 'Steel', 'Grass', 'Dragon'] good = ['Ground','Water'] result = attack_val # default value is unchanged if type_val in bad: result /= 2 elif type_val in good: result *= 2 return result # Create the Grass Attack column df['Grass Attack'] = df.apply(lambda x: modify_Attack(x['Type 1'], x['Attack']), axis=1).astype(int) # print the dataframe print(df) OUTPUT: Type 1 Attack Grass Attack 0 Grass 62 31 1 Grass 82 41 2 Dragon 100 50 3 Fire 52 26 4 Rock 100 100
looping through a data frame Python
I have this data frame where I sliced columns from the original data frame: Type 1 Attack Grass 62 Grass 82 Dragon 100 Fire 52 Rock 100 I want to create each Pokemon’s adjusted attack attribute against grass Pokemon based on ‘Type 1’, where the attack attribute is doubled if grass Pokemon are bad against that type, halved if they are good against that type, and otherwise remains the same. I have tried looping through the data: grass_attack = [] for value in df_["Type 1"]: if value ==["Type 1 == Fire"] or value==["Type 1 == Flying"] or value==["Type 1 == Poison"] or value==["Type 1 == Bug"] or value==["Type1== Steel"] or value ==["Type 1 == Grass"] or value ==["Type 1 == Dragon"]: result.append(df_["Attack"]/2) elif value==["Type 1==Ground"] or value==["Type1== Ground"] or value==["Type 1 == Water"]: grass_attack.append(df_["Attack"]*2) else: grass_attack.append(df_["Attack"]) df_["grass_attack"] = grass_attack print(df_) but I got some crazy results after this. How can I efficiently loop through a data frame's column in order to adjust another column? Or is there another way to do this?
[ "There is some issues with your code as @azro pointed in the comments and there is no need for a loop here. You can simply use numpy.select to create a multi-conditionnal column.\nHere is an example to give you the general logic :\ndf[\"Attack\"] = df[\"Attack\"].astype(int)\n \nconditions = [df[\"Type 1\"].eq(\"Grass\"), df[\"Type 1\"].isin([\"Fire\", \"Rock\"])]\nchoices = [df[\"Attack\"].div(2), df[\"Attack\"].mul(2)]\n \ndf[\"grass_attack\"] = np.select(conditions, choices, default=df[\"Attack\"]).astype(int)\n\n# Output :\nprint(df)\n\n Type 1 Attack grass_attack\n0 Grass 62 31\n1 Grass 82 41\n2 Dragon 100 100\n3 Fire 52 104\n4 Rock 100 200\n\n", "You could use apply to do the necessary calculations. In the following code, the modify_Attack() function is used to calculate the Grass Attack values based on the Type1 and Attack values.\n\nType 1 values that are in the bad list will have their attack values halved.\nType 1 values that are in the good list will have their attack values doubled.\nAll other attack values will remain unchanged.\n\nHere is the code:\nimport pandas as pd\n\n# Create dataframe\ndf = pd.DataFrame({ 'Type 1': ['Grass', 'Grass', 'Dragon', 'Fire', 'Rock'],\n 'Attack': [62, 82, 100, 52, 100]})\n\n\n# Function to modify the Attack value based on the Type 1 value\ndef modify_Attack(type_val, attack_val):\n bad = ['Fire', 'Flying', 'Poison', 'Bug', 'Steel', 'Grass', 'Dragon']\n good = ['Ground','Water']\n \n result = attack_val # default value is unchanged\n \n if type_val in bad:\n result /= 2\n\n elif type_val in good:\n result *= 2\n \n return result\n \n\n# Create the Grass Attack column\ndf['Grass Attack'] = df.apply(lambda x: modify_Attack(x['Type 1'], x['Attack']), axis=1).astype(int)\n\n# print the dataframe\nprint(df)\n\nOUTPUT:\n Type 1 Attack Grass Attack\n0 Grass 62 31\n1 Grass 82 41\n2 Dragon 100 50\n3 Fire 52 26\n4 Rock 100 100\n\n" ]
[ 1, 1 ]
[]
[]
[ "dataframe", "loops", "pandas", "python" ]
stackoverflow_0074656291_dataframe_loops_pandas_python.txt
Q: Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit Build at: 2022-09-14T11:34:20.503Z - Hash: c51f599b4586fb6d - Time: 7780ms ./src/main.ts - Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit ./src/polyfills.ts - Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit Error: Failed to initialize Angular compilation - Cannot read properties of null (reading 'fileName') ** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ ** × Failed to compile. A: Try to find the node_modules/@angular/compiler-cli/ngcc/ngcc_lock_file file and delete it. Or just delete your node_modules folder and do npm install A: This appears to be an issue with Typescript version >4.9.0. I haven't worked out what that is yet, but a possible workaround is to amend your version to 4.8.2 if you can, as this version doesn't seem to have that issue. I'll post again if I find what exactly is causing the issue. A: Check this one, it may be helpful. The basic reason behind this can be recursive calls to some modules. For example: @NgModule({ declarations: [], imports: [ CommonModule, MatSliderModule, MaterialModule ], exports:[MatSliderModule] }) export class MaterialModule { } Here, the same MaterialModule is importing itself. It increases the call stack and results in the above-mentioned error. This is one condition. There can be other conditions.
Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit
Build at: 2022-09-14T11:34:20.503Z - Hash: c51f599b4586fb6d - Time: 7780ms ./src/main.ts - Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit ./src/polyfills.ts - Error: Module build failed (from ./node_modules/@ngtools/webpack/src/ivy/index.js): Error: Emit Error: Failed to initialize Angular compilation - Cannot read properties of null (reading 'fileName') ** Angular Live Development Server is listening on localhost:4200, open your browser on http://localhost:4200/ ** × Failed to compile.
[ "Try to find node_modules/@angular/compiler-cli/ngcc/ngcc_lock_file\nfile and delete it.\nOr Just delete your node_modules folder and do npm install\n", "This appears to be an issue with Typescript version >4.9.0. I haven't worked out what that is yet, but a possible workaround is do amend your version to 4.8.2 if you can as this version doesn't seem to have that issue.\nI'll post again if I find what exactly is causing the issue.\n", "Check this One, May be helpful\nBasic reason behind this can be... Recursive calls to some modules.\nfor e.g.\n@NgModule({\n declarations: [],\n imports: [\n CommonModule,\n MatSliderModule,\n MaterialModule\n ],\n exports:[MatSliderModule]\n })\n export class MaterialModule { }\n\nHere, Same MaterialModule is importing self.\nIt increases the the call stack and results into the above mentioned error.\nThis is one condition. There can be other conditions.\n" ]
[ 1, 1, 0 ]
[ "just look at templateUrl u may have a wrong type\n" ]
[ -1 ]
[ "angular", "angularjs" ]
stackoverflow_0073716279_angular_angularjs.txt