Dataset schema (field name, type, and observed length range):
content: string, 86 to 88.9k characters
title: string, 0 to 150 characters
question: string, 1 to 35.8k characters
answers: list
answers_scores: list
non_answers: list
non_answers_scores: list
tags: list
name: string, 30 to 130 characters
Q: How do I change which folder is being used for my git in vscode? I had accidentally initialized my git repo in my base user folder. Quite obviously, I do not want to upload everything on my computer to a GitHub repository. I've deleted the .git files in my base folder; however, vscode is still acting as if that is my desired git folder to track changes. I've reloaded vscode, initialized a repository in the desired folder for my project, and nothing changes. Is there something I'm missing in order to change this folder within vscode's options? I didn't see anything that explicitly had a path to a desired GitHub folder. A: I thought I had removed my .git folder, but I was incorrect. It has been removed, and the perceived issue has been fixed. Turns out, it was a PEBKAC error.
How do I change which folder is being used for my git in vscode?
I had accidentally initialized my git repo in my base user folder. Quite obviously, I do not want to upload everything on my computer to a GitHub repository. I've deleted the .git files in my base folder; however, vscode is still acting as if that is my desired git folder to track changes. I've reloaded vscode, initialized a repository in the desired folder for my project, and nothing changes. Is there something I'm missing in order to change this folder within vscode's options? I didn't see anything that explicitly had a path to a desired GitHub folder.
[ "I thought I had removed my .git folder, I was incorrect. It has been removed, and the perceived issue has been fixed. Turns out, it was a pebkac error.\n" ]
[ 1 ]
[]
[]
[ "git" ]
stackoverflow_0074660909_git.txt
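A minimal shell sketch of one way to confirm which repository vscode is actually picking up and to re-initialize git in the intended folder; the ~/projects/my-project path is only a placeholder, and rm -rf ~/.git removes just the repository metadata, not your files.

git -C ~ rev-parse --show-toplevel   # if this prints your home directory, a stray .git still exists there
rm -rf ~/.git                        # remove the accidental repository metadata from the home folder
cd ~/projects/my-project             # hypothetical project folder
git init                             # create the repository where it belongs
code .                               # reopen that folder in vscode so Source Control tracks it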
Q: Replacing only specific match's group using Java regular expressions by (new Matcher..).replaceAll Good day to all! Using (Java) regular expressions, I'm trying to execute [Matcher] replaceAll to replace only a specific group, not all matches. Can you tell me how to do it in Java? Thank you very much in advance! public static void main(String[] args) { String exp = "foofoobarfoo"; exp = Pattern .compile("foo(foo)") .matcher(exp) .replaceAll(gr -> "911" + gr.group(1) + "911"); System.out.println(exp); } Expected: foo911foo911barfoo Actual result: 911foo911barfoo (Because replaceAll applied the replacement string to the whole match gr.group(0), not just the captured gr.group(1), and it is necessary to replace only gr.group(1), without gr.group(0).) How do I select a specific group to replace in a string from a regular expression? Please tell me how it is done correctly. Thanks a lot in advance! A: You need to capture the first foo, too: exp = Pattern .compile("(foo)(foo)") .matcher(exp) .replaceAll(gr -> gr.group(1) + "911" + gr.group(2) + "911"); See the Java demo online. Since there are two capturing groups now - (foo)(foo) - there are now two gr.group()s in the replacement: gr.group(1) + "911" + gr.group(2) + "911".
Replacing only specific match's group using Java regular expressions by (new Matcher..).replaceAll
Good day to all! Using (Java) regular expressions, I'm trying to execute [Matcher] replaceAll to replace only a specific group, not all matches. Can you tell me how to do it in Java? Thank you very much in advance! public static void main(String[] args) { String exp = "foofoobarfoo"; exp = Pattern .compile("foo(foo)") .matcher(exp) .replaceAll(gr -> "911" + gr.group(1) + "911"); System.out.println(exp); } Expected: foo911foo911barfoo Actual result: 911foo911barfoo (Because replaceAll applied the replacement string to the whole match gr.group(0), not just the captured gr.group(1), and it is necessary to replace only gr.group(1), without gr.group(0).) How do I select a specific group to replace in a string from a regular expression? Please tell me how it is done correctly. Thanks a lot in advance!
[ "You need to capture the first foo, too:\nexp = Pattern\n .compile(\"(foo)(foo)\")\n .matcher(exp)\n .replaceAll(gr -> gr.group(1) + \"911\" + gr.group(2) + \"911\");\n\nSee the Java demo online.\nSince there are two capturing groups now - (foo)(foo) - there are now two gr.group()s in the replacement: gr.group(1) + \"911\" + gr.group(2) + \"911\".\n" ]
[ 1 ]
[]
[]
[ "java", "pattern_matching", "regex", "replaceall", "string" ]
stackoverflow_0074660926_java_pattern_matching_regex_replaceall_string.txt
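A self-contained version of the accepted approach, shown here as a rough sketch; it assumes Java 9 or later (for the Matcher.replaceAll overload that takes a function), and the class name is arbitrary.

import java.util.regex.Pattern;

public class ReplaceSecondGroup {
    public static void main(String[] args) {
        String exp = "foofoobarfoo";
        // Capture both "foo"s: group(1) is written back unchanged,
        // and only group(2) is wrapped in "911".
        String result = Pattern
                .compile("(foo)(foo)")
                .matcher(exp)
                .replaceAll(gr -> gr.group(1) + "911" + gr.group(2) + "911");
        System.out.println(result); // prints foo911foo911barfoo
    }
}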
Q: Warning vboxdrv kernel module is not loaded Well I run this command aptitude purge ~o To delete all the Obsoletes files that aptitude show me big mistake I guess after that I update the system everything works fine but when I restart the system and I want to load the virtual machine I got this error WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (3.14-kali1-amd64) or it failed to load. Please recompile the kernel module and install it by sudo /etc/init.d/vboxdrv setup You will not be able to start VMs until this problem is fixed. The program still running but I can't load the virtual machine so I run that command to and the out put was. sudo /etc/init.d/vboxdrv setup Stopping VirtualBox kernel modules ...done. Recompiling VirtualBox kernel modules ...failed! (Look at /var/log/vbox-install.log to find out what went wrong) I am going to post just one part of the file because vbox-install.log have to many lines. ./install.sh: 343: ./install.sh: /etc/init.d/vboxautostart-service: not found ./install.sh: 343: ./install.sh: /etc/init.d/vboxballoonctrl-service: not found ./install.sh: 343: ./install.sh: /etc/init.d/vboxweb-service: not found VirtualBox 4.3.10 r93012 installer, built 2014-03-26T19:18:38Z. Testing system setup... System setup appears correct. Installing VirtualBox to /opt/VirtualBox Output from the module build process (the Linux kernel build system) follows: make KBUILD_VERBOSE=1 SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0 CONFIG_MODULE_SIG= -C /lib/modules/3.12-kali1-amd64/build modules make -C /usr/src/linux-headers-3.12-kali1-amd64 \ KBUILD_SRC=/usr/src/linux-headers-3.12-kali1-common \ KBUILD_EXTMOD="/tmp/vbox.0" -f /usr/src/linux-headers-3.12-kali1-common/Makefile \ modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo >&2; \ echo >&2 " ERROR: Kernel configuration is invalid."; \ echo >&2 " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo >&2 " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo >&2 ; \ /bin/false) mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/* make -f /usr/src/linux-headers-3.12-kali1-common/scripts/Makefile.build obj=/tmp/vbox.0 The last part. 
make -C /usr/src/linux-headers-3.12-kali1-amd64 \ KBUILD_SRC=/usr/src/linux-headers-3.12-kali1-common \ KBUILD_EXTMOD="/tmp/vbox.0" -f /usr/src/linux-headers-3.12-kali1-common/Makefile \ modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo >&2; \ echo >&2 " ERROR: Kernel configuration is invalid."; \ echo >&2 " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo >&2 " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo >&2 ; \ /bin/false) mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/* make -f /usr/src/linux-headers-3.12-kali1-common/scripts/Makefile.build obj=/tmp/vbox.0 gcc-4.7 -Wp,-MD,/tmp/vbox.0/linux/.VBoxPci-linux.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(VBoxPci_linux)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/linux/.tmp_VBoxPci-linux.o /tmp/vbox.0/linux/VBoxPci-linux.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/.VBoxPci.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare 
-fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(VBoxPci)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/.tmp_VBoxPci.o /tmp/vbox.0/VBoxPci.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/.SUPR0IdcClient.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(SUPR0IdcClient)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/.tmp_SUPR0IdcClient.o /tmp/vbox.0/SUPR0IdcClient.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/.SUPR0IdcClientComponent.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector 
-DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(SUPR0IdcClientComponent)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/.tmp_SUPR0IdcClientComponent.o /tmp/vbox.0/SUPR0IdcClientComponent.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/linux/.SUPR0IdcClient-linux.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(SUPR0IdcClient_linux)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/linux/.tmp_SUPR0IdcClient-linux.o /tmp/vbox.0/linux/SUPR0IdcClient-linux.c ld -m elf_x86_64 -r -o /tmp/vbox.0/vboxpci.o /tmp/vbox.0/linux/VBoxPci-linux.o /tmp/vbox.0/VBoxPci.o /tmp/vbox.0/SUPR0IdcClient.o /tmp/vbox.0/SUPR0IdcClientComponent.o /tmp/vbox.0/linux/SUPR0IdcClient-linux.o (cat /dev/null; echo kernel//tmp/vbox.0/vboxpci.ko;) > /tmp/vbox.0/modules.order make -f /usr/src/linux-headers-3.12-kali1-common/scripts/Makefile.modpost find /tmp/vbox.0/.tmp_versions -name '*.mod' | xargs -r grep -h '\.ko$' | sort -u | sed 's/\.ko$/.o/' | scripts/mod/modpost -m -i /usr/src/linux-headers-3.12-kali1-amd64/Module.symvers -I /tmp/vbox.0/Module.symvers -o /tmp/vbox.0/Module.symvers -S -w 
-s -T - gcc-4.7 -Wp,-MD,/tmp/vbox.0/.vboxpci.mod.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(vboxpci.mod)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -DMODULE -c -o /tmp/vbox.0/vboxpci.mod.o /tmp/vbox.0/vboxpci.mod.c ld -r -m elf_x86_64 -T /usr/src/linux-headers-3.12-kali1-common/scripts/module-common.lds --build-id -o /tmp/vbox.0/vboxpci.ko /tmp/vbox.0/vboxpci.o /tmp/vbox.0/vboxpci.mod.o Starting VirtualBox kernel modules ...done. End of the output from the Linux kernel build system. Installation successful Makefile:183: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. Makefile:183: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. Makefile:183: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. thank you very much for any help or comments. A: for Ubuntu 15.10 and virtualbox 5.+ use: sudo /sbin/rcvboxdrv setup thx: http://www.webupd8.org/2015/10/workaround-for-sbinvboxconfig-not.html A: I met the problem: VBoxManage --version WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (4.6.3-300.fc24.x86_64) or it failed to load. Please recompile the kernel module and install it by sudo /sbin/rcvboxdrv setup You will not be able to start VMs until this problem is fixed. 5.0.24_RPMFusionr108355 After I do sudo modprobe vboxdrv Then it works now. A: On Arch Linux sudo pacman -S virtualbox-host-modules-arch and then sudo modprobe vboxdrv A: For me, I did an update to the system and run again sudo /etc/init.d/vboxdrv setup after that. A: Run: sudo /usr/lib/virtualbox/vboxdrv.sh setup It should fix the issue. A: Arch Linux: sudo pacman -S virtualbox-host-modules-arch And then you have to reboot. That fixed it for me. 
Tring to restart could work on other distrobutions, too. A: Select the right version. For me was 3.13-kali1 I think that you are missing something. Try to install linux-headers-3.14-kali1-common linux-headers-3.14-kali1-amd64 linux-source-3.14 libdw1 libunwind7 it worked for me. Best Regards. A: On Debian, try: sudo apt-get install -f When all the missing dependencies are resolved, try launching VirtualBox again. A: sudo modprobe vboxdrv worked for me, right after I've disabled secure boot from bios menu. A: Just disable secure-boot from bios, it worked for me A: This guide is what got it working for me: https://gorka.eguileor.com/vbox-vmware-in-secureboot-linux-2016-update/ I am however using Ubuntu 16.04 (I am not sure which distro the guide is using..) and I modified these steps as: for f in $(dirname $(modinfo -n vboxdrv))/*.ko; do echo "Signing $f"; sudo /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der $f; done becomes: for f in $(dirname $(modinfo -n vboxdrv))/*.ko; do echo "Signing $f"; sudo /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der $f; done mokutil --import MOK.der becomes: sudo mokutil --import MOK.der Otherwise it got it working for me. A: This command worked for me sudo /etc/init.d/vboxdrv setup Next I got following error The VirtualBox VM was created with a user that doesn't match the current user running Vagrant. VirtualBox requires that the same user is used to manage the VM that was created. Please re-run Vagrant with that user. This is not a Vagrant issue. The UID used to create the VM was: 0 Your UID is: 1000 That got solved by running vagrant up command with root access. this should fix the issue with VirtualBox Version: 5.1 A: Run these commands to fix. First one: uname -r this will give you, you kernel version exemple : linux kernel 4.16.03 take the 3 first number then run this second command replacing those 416 with your value: this command is for manjaro linux but you can try to install the package with apt get: sudo pacman -S linux416-virtualbox-host-modules the last command to initialise the module: sudo modprobe vboxdrv A: Instead of trying to fix the .deb downloaded from Oracle, I did: sudo aptitude install virtualbox and it worked! Well, I had to remove the broken package first. (Ubuntu 18.04 LTS) A: After install VirtualBox on ubantu 18.04.3 download and install virtualbox-dkms related to your virtualbox version download virtualbox-dkms packages links urls reference (https://pkgs.org/download/virtualbox-dkms) OR (http://ftp.debian.org/debian/pool/contrib/v/virtualbox/) after download i.g virtualbox-dkms.deb package install using below command $ sudo dpkg -i virtualbox-dkms.deb Follow steps in this URL (http://www.bojankomazec.com/2019/04/how-to-install-virtualbox-on-ubuntu-1804.html) A: For Arch Linux: First step: pamac install virtualbox $(pacman -Qsq "^linux" | grep "^linux[0-9]*[-rt]*$" | awk '{print $1"-virtualbox-host-modules"}' ORS=' ') next step: sudo modprobe vboxdrv A: sudo apt install -f then you can run your virtualbox, this command fixed my problem. A: When installing VirtualBox to Fedora 36 using conventional methods (adding RPM repository and then installing with dnf), I ended up with kernel modules with *.ko.xz extension(vboxdrv.ko.xz, vboxnetadp.ko.xz, vboxnetflt.ko.xz). From what I understand kmods modules were not properly compiled. As a result, when trying to load modules (with modprobe vboxdrv) I got the following error even though I did sign them correctly. 
modprobe: ERROR: could not insert 'vboxdrv': Invalid argument It seems to be an issue specific to Fedora 36, as I had a similar issue with v4l2loopback module. The solution is not to use the default repository to install and instead download the release .rpm files from official sources and install manually. After manually downloading and installing the resulting kmod modules did not have that .xz extension and did load correctly after signing them. A: I simply load the kernel module at boot. As sudo, add the line vboxdrv to the file /etc/modules and reboot. This only works with secure boot disabled in the BIOS. You can check if secure boot is disabled by running mokutil --sb-state. A: on Ubuntu i executed in terminal: sudo /sbin/vboxconfig This solved my problem
Warning vboxdrv kernel module is not loaded
Well I run this command aptitude purge ~o To delete all the Obsoletes files that aptitude show me big mistake I guess after that I update the system everything works fine but when I restart the system and I want to load the virtual machine I got this error WARNING: The vboxdrv kernel module is not loaded. Either there is no module available for the current kernel (3.14-kali1-amd64) or it failed to load. Please recompile the kernel module and install it by sudo /etc/init.d/vboxdrv setup You will not be able to start VMs until this problem is fixed. The program still running but I can't load the virtual machine so I run that command to and the out put was. sudo /etc/init.d/vboxdrv setup Stopping VirtualBox kernel modules ...done. Recompiling VirtualBox kernel modules ...failed! (Look at /var/log/vbox-install.log to find out what went wrong) I am going to post just one part of the file because vbox-install.log have to many lines. ./install.sh: 343: ./install.sh: /etc/init.d/vboxautostart-service: not found ./install.sh: 343: ./install.sh: /etc/init.d/vboxballoonctrl-service: not found ./install.sh: 343: ./install.sh: /etc/init.d/vboxweb-service: not found VirtualBox 4.3.10 r93012 installer, built 2014-03-26T19:18:38Z. Testing system setup... System setup appears correct. Installing VirtualBox to /opt/VirtualBox Output from the module build process (the Linux kernel build system) follows: make KBUILD_VERBOSE=1 SUBDIRS=/tmp/vbox.0 SRCROOT=/tmp/vbox.0 CONFIG_MODULE_SIG= -C /lib/modules/3.12-kali1-amd64/build modules make -C /usr/src/linux-headers-3.12-kali1-amd64 \ KBUILD_SRC=/usr/src/linux-headers-3.12-kali1-common \ KBUILD_EXTMOD="/tmp/vbox.0" -f /usr/src/linux-headers-3.12-kali1-common/Makefile \ modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo >&2; \ echo >&2 " ERROR: Kernel configuration is invalid."; \ echo >&2 " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo >&2 " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo >&2 ; \ /bin/false) mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/* make -f /usr/src/linux-headers-3.12-kali1-common/scripts/Makefile.build obj=/tmp/vbox.0 The last part. 
make -C /usr/src/linux-headers-3.12-kali1-amd64 \ KBUILD_SRC=/usr/src/linux-headers-3.12-kali1-common \ KBUILD_EXTMOD="/tmp/vbox.0" -f /usr/src/linux-headers-3.12-kali1-common/Makefile \ modules test -e include/generated/autoconf.h -a -e include/config/auto.conf || ( \ echo >&2; \ echo >&2 " ERROR: Kernel configuration is invalid."; \ echo >&2 " include/generated/autoconf.h or include/config/auto.conf are missing.";\ echo >&2 " Run 'make oldconfig && make prepare' on kernel src to fix it."; \ echo >&2 ; \ /bin/false) mkdir -p /tmp/vbox.0/.tmp_versions ; rm -f /tmp/vbox.0/.tmp_versions/* make -f /usr/src/linux-headers-3.12-kali1-common/scripts/Makefile.build obj=/tmp/vbox.0 gcc-4.7 -Wp,-MD,/tmp/vbox.0/linux/.VBoxPci-linux.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(VBoxPci_linux)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/linux/.tmp_VBoxPci-linux.o /tmp/vbox.0/linux/VBoxPci-linux.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/.VBoxPci.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare 
-fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(VBoxPci)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/.tmp_VBoxPci.o /tmp/vbox.0/VBoxPci.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/.SUPR0IdcClient.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(SUPR0IdcClient)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/.tmp_SUPR0IdcClient.o /tmp/vbox.0/SUPR0IdcClient.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/.SUPR0IdcClientComponent.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector 
-DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(SUPR0IdcClientComponent)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/.tmp_SUPR0IdcClientComponent.o /tmp/vbox.0/SUPR0IdcClientComponent.c gcc-4.7 -Wp,-MD,/tmp/vbox.0/linux/.SUPR0IdcClient-linux.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -DMODULE -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(SUPR0IdcClient_linux)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -c -o /tmp/vbox.0/linux/.tmp_SUPR0IdcClient-linux.o /tmp/vbox.0/linux/SUPR0IdcClient-linux.c ld -m elf_x86_64 -r -o /tmp/vbox.0/vboxpci.o /tmp/vbox.0/linux/VBoxPci-linux.o /tmp/vbox.0/VBoxPci.o /tmp/vbox.0/SUPR0IdcClient.o /tmp/vbox.0/SUPR0IdcClientComponent.o /tmp/vbox.0/linux/SUPR0IdcClient-linux.o (cat /dev/null; echo kernel//tmp/vbox.0/vboxpci.ko;) > /tmp/vbox.0/modules.order make -f /usr/src/linux-headers-3.12-kali1-common/scripts/Makefile.modpost find /tmp/vbox.0/.tmp_versions -name '*.mod' | xargs -r grep -h '\.ko$' | sort -u | sed 's/\.ko$/.o/' | scripts/mod/modpost -m -i /usr/src/linux-headers-3.12-kali1-amd64/Module.symvers -I /tmp/vbox.0/Module.symvers -o /tmp/vbox.0/Module.symvers -S -w 
-s -T - gcc-4.7 -Wp,-MD,/tmp/vbox.0/.vboxpci.mod.o.d -nostdinc -isystem /usr/lib/gcc/x86_64-linux-gnu/4.7/include -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include -Iarch/x86/include/generated -I/usr/src/linux-headers-3.12-kali1-common/include -Iinclude -I/usr/src/linux-headers-3.12-kali1-common/arch/x86/include/uapi -Iarch/x86/include/generated/uapi -I/usr/src/linux-headers-3.12-kali1-common/include/uapi -Iinclude/generated/uapi -include /usr/src/linux-headers-3.12-kali1-common/include/linux/kconfig.h -I/tmp/vbox.0 -D__KERNEL__ -Wall -Wundef -Wstrict-prototypes -Wno-trigraphs -fno-strict-aliasing -fno-common -Werror-implicit-function-declaration -Wno-format-security -fno-delete-null-pointer-checks -O2 -m64 -mno-mmx -mno-sse -mtune=generic -mno-red-zone -mcmodel=kernel -funit-at-a-time -maccumulate-outgoing-args -fstack-protector -DCONFIG_AS_CFI=1 -DCONFIG_AS_CFI_SIGNAL_FRAME=1 -DCONFIG_AS_CFI_SECTIONS=1 -DCONFIG_AS_FXSAVEQ=1 -DCONFIG_AS_AVX=1 -DCONFIG_AS_AVX2=1 -pipe -Wno-sign-compare -fno-asynchronous-unwind-tables -mno-sse -mno-mmx -mno-sse2 -mno-3dnow -mno-avx -Wframe-larger-than=2048 -Wno-unused-but-set-variable -fomit-frame-pointer -g -Wdeclaration-after-statement -Wno-pointer-sign -fno-strict-overflow -fconserve-stack -DCC_HAVE_ASM_GOTO -include /tmp/vbox.0/include/VBox/SUPDrvMangling.h -I/lib/modules/3.12-kali1-amd64/build/include -I/tmp/vbox.0/ -I/tmp/vbox.0/include -I/tmp/vbox.0/r0drv/linux -I/tmp/vbox.0/vboxpci/ -I/tmp/vbox.0/vboxpci/include -I/tmp/vbox.0/vboxpci/r0drv/linux -D__KERNEL__ -DMODULE -DRT_OS_LINUX -DIN_RING0 -DIN_RT_R0 -DIN_SUP_R0 -DVBOX -DRT_WITH_VBOX -DVBOX_WITH_HARDENING -DRT_ARCH_AMD64 -DVBOX_WITH_64_BITS_GUESTS -D"KBUILD_STR(s)=#s" -D"KBUILD_BASENAME=KBUILD_STR(vboxpci.mod)" -D"KBUILD_MODNAME=KBUILD_STR(vboxpci)" -DMODULE -c -o /tmp/vbox.0/vboxpci.mod.o /tmp/vbox.0/vboxpci.mod.c ld -r -m elf_x86_64 -T /usr/src/linux-headers-3.12-kali1-common/scripts/module-common.lds --build-id -o /tmp/vbox.0/vboxpci.ko /tmp/vbox.0/vboxpci.o /tmp/vbox.0/vboxpci.mod.o Starting VirtualBox kernel modules ...done. End of the output from the Linux kernel build system. Installation successful Makefile:183: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. Makefile:183: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. Makefile:183: *** Error: unable to find the sources of your current Linux kernel. Specify KERN_DIR=<directory> and run Make again. Stop. thank you very much for any help or comments.
[ "for Ubuntu 15.10 and virtualbox 5.+ use:\n\nsudo /sbin/rcvboxdrv setup\n\nthx:\nhttp://www.webupd8.org/2015/10/workaround-for-sbinvboxconfig-not.html\n", "I met the problem:\nVBoxManage --version\nWARNING: The vboxdrv kernel module is not loaded. Either there is no module\n available for the current kernel (4.6.3-300.fc24.x86_64) or it failed to\n load. Please recompile the kernel module and install it by\n sudo /sbin/rcvboxdrv setup\n\n You will not be able to start VMs until this problem is fixed.\n\n5.0.24_RPMFusionr108355\nAfter I do \n\nsudo modprobe vboxdrv\n\nThen it works now.\n", "On Arch Linux\nsudo pacman -S virtualbox-host-modules-arch\n\nand then\nsudo modprobe vboxdrv\n\n", "For me, I did an update to the system and run again sudo /etc/init.d/vboxdrv setup after that. \n", "Run:\nsudo /usr/lib/virtualbox/vboxdrv.sh setup\nIt should fix the issue.\n", "Arch Linux:\nsudo pacman -S virtualbox-host-modules-arch\n\nAnd then you have to reboot. That fixed it for me.\nTring to restart could work on other distrobutions, too.\n", "Select the right version. For me was 3.13-kali1\nI think that you are missing something.\nTry to install \n\nlinux-headers-3.14-kali1-common\nlinux-headers-3.14-kali1-amd64 \nlinux-source-3.14 \nlibdw1\nlibunwind7\n\nit worked for me.\nBest Regards.\n", "On Debian, try:\nsudo apt-get install -f\n\nWhen all the missing dependencies are resolved, try launching VirtualBox again.\n", "sudo modprobe vboxdrv worked for me, right after I've disabled secure boot from bios menu.\n", "Just disable secure-boot from bios, it worked for me\n", "This guide is what got it working for me: https://gorka.eguileor.com/vbox-vmware-in-secureboot-linux-2016-update/\nI am however using Ubuntu 16.04 (I am not sure which distro the guide is using..) and I modified these steps as:\nfor f in $(dirname $(modinfo -n vboxdrv))/*.ko; do echo \"Signing $f\"; sudo /usr/src/kernels/$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der $f; done \nbecomes: \nfor f in $(dirname $(modinfo -n vboxdrv))/*.ko; do echo \"Signing $f\"; sudo /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 ./MOK.priv ./MOK.der $f; done\n\nmokutil --import MOK.der\nbecomes:\nsudo mokutil --import MOK.der\n\nOtherwise it got it working for me.\n", "This command worked for me\nsudo /etc/init.d/vboxdrv setup\nNext I got following error \nThe VirtualBox VM was created with a user that doesn't match the\ncurrent user running Vagrant. VirtualBox requires that the same user\nis used to manage the VM that was created. Please re-run Vagrant with\nthat user. This is not a Vagrant issue.\nThe UID used to create the VM was: 0\nYour UID is: 1000\nThat got solved by running vagrant up command with root access.\nthis should fix the issue with VirtualBox Version: 5.1\n", "Run these commands to fix.\nFirst one:\nuname -r\nthis will give you, you kernel version exemple : linux kernel 4.16.03\ntake the 3 first number then run this second command replacing those 416 with your value:\nthis command is for manjaro linux but you can try to install the package with apt get:\nsudo pacman -S linux416-virtualbox-host-modules \nthe last command to initialise the module:\nsudo modprobe vboxdrv\n", "Instead of trying to fix the .deb downloaded from Oracle, I did:\nsudo aptitude install virtualbox\n\nand it worked! 
Well, I had to remove the broken package first.\n(Ubuntu 18.04 LTS)\n", "After install VirtualBox on ubantu 18.04.3\n\n\ndownload and install virtualbox-dkms related to your virtualbox version\ndownload virtualbox-dkms packages links urls reference\n\n(https://pkgs.org/download/virtualbox-dkms)\nOR\n(http://ftp.debian.org/debian/pool/contrib/v/virtualbox/)\nafter download i.g virtualbox-dkms.deb package install using below command\n$ sudo dpkg -i virtualbox-dkms.deb\n\n\nFollow steps in this URL (http://www.bojankomazec.com/2019/04/how-to-install-virtualbox-on-ubuntu-1804.html)\n", "For Arch Linux:\nFirst step:\npamac install virtualbox $(pacman -Qsq \"^linux\" | grep \"^linux[0-9]*[-rt]*$\" | awk '{print $1\"-virtualbox-host-modules\"}' ORS=' ')\n\nnext step:\nsudo modprobe vboxdrv\n\n", "\nsudo apt install -f\n\nthen you can run your virtualbox, this command fixed my problem.\n", "When installing VirtualBox to Fedora 36 using conventional methods (adding RPM repository and then installing with dnf), I ended up with kernel modules with *.ko.xz extension(vboxdrv.ko.xz, vboxnetadp.ko.xz, vboxnetflt.ko.xz). From what I understand kmods modules were not properly compiled. As a result, when trying to load modules (with modprobe vboxdrv) I got the following error even though I did sign them correctly.\nmodprobe: ERROR: could not insert 'vboxdrv': Invalid argument\n\nIt seems to be an issue specific to Fedora 36, as I had a similar issue with v4l2loopback module. The solution is not to use the default repository to install and instead download the release .rpm files from official sources and install manually. After manually downloading and installing the resulting kmod modules did not have that .xz extension and did load correctly after signing them.\n", "I simply load the kernel module at boot.\nAs sudo, add the line vboxdrv to the file /etc/modules and reboot. This only works with secure boot disabled in the BIOS.\nYou can check if secure boot is disabled by running mokutil --sb-state.\n", "on Ubuntu i executed in terminal: sudo /sbin/vboxconfig\nThis solved my problem\n" ]
[ 20, 12, 11, 4, 3, 3, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "kernel", "linux", "virtualbox" ]
stackoverflow_0023740932_kernel_linux_virtualbox.txt
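A short shell sketch collecting the recurring steps from the answers above for a Debian-based host; package names differ on other distributions, and the module will not load while Secure Boot is enabled unless it has been signed.

lsmod | grep vboxdrv                             # check whether the module is currently loaded
sudo apt-get install linux-headers-$(uname -r)   # headers must match the running kernel
sudo /etc/init.d/vboxdrv setup                   # rebuild on older VirtualBox releases
sudo /sbin/rcvboxdrv setup                       # or this on newer releases
sudo /sbin/vboxconfig                            # or this on VirtualBox 5.1+ packages
sudo modprobe vboxdrv                            # load the rebuilt module
mokutil --sb-state                               # confirm whether Secure Boot is the blocker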
Q: .NET Core 6 Found multiple publish output files with the same relative path I'm running into a relatively new issue on .NET Core 6 where when publishing with Web Deploy via Visual Studio 2022. I'm receiving the following error: Error Found multiple publish output files with the same relative path: C:\Work\MySolution\A\appsettings.json, C:\Work\MySolution\B\appsettings.json, C:\Work\MySolution\A\appsettings.Staging.json, C:\Work\MySolution\B\appsettings.Staging.json, , C:\Work\MySolution\A\appsettings.Development.json, C:\Work\MySolution\B\appsettings.Development.json There is no issues when building, just publishing. I have two ASP.NET Core 6 projects. Project "A" references project "B" (I know B should really be a class library, but go with me). I am aware that this is expected functionality in .NET Core 6 (https://learn.microsoft.com/en-us/dotnet/core/compatibility/sdk/6.0/duplicate-files-in-output). However, I cannot seem to tell project "A" to ignore project "B" appsettings files. I am aware of the ErrorOnDuplicatePublishOutputFiles property I can set, but I'm trying to strictly tell it not to include those files. Here's some examples of items that I have tried, but does not work. Example 1: Tried typical content update approach (supposedly does not work after VS 15.3). Also tried with absolute paths. A.csproj ... <ItemGroup> <ProjectReference Include="..\B\B.csproj"> <PrivateAssets>all</PrivateAssets> </ProjectReference> </ItemGroup> <ItemGroup> <Content Update="..\B\appsettings.json" CopyToOutputDirectory="Never" CopyToPublishDirectory="Never" /> <Content Update="..\B\appsettings.*.json" CopyToOutputDirectory="Never" CopyToPublishDirectory="Never" /> </ItemGroup> ... Example 2: Tried typical content remove approach. Also tried with absolute paths. A.csproj ... <ItemGroup> <ProjectReference Include="..\B\B.csproj"> <PrivateAssets>all</PrivateAssets> </ProjectReference> </ItemGroup> <ItemGroup> <Content Remove="..\B\appsettings.json" /> <Content Remove="..\B\appsettings.*.json" /> </ItemGroup> <ItemGroup> <None Include="..\B\appsettings.json" /> <None Include="..\B\appsettings.*.json" /> </ItemGroup> ... Example 3: I tried using the GeneratePathProperty path to make sure it was directly ignoring project B's files. A.csproj ... <ItemGroup> <ProjectReference Include="..\B\B.csproj" GeneratePathProperty="true"> <PrivateAssets>all</PrivateAssets> </ProjectReference> </ItemGroup> <ItemGroup> <Content Update="$(PkgB)\appsettings.json" CopyToPublishDirectory="Never" /> <Content Update="$(PkgB)\appsettings.*.json" CopyToPublishDirectory="Never" /> </ItemGroup> ... Example 4: Modified pubxml to ignore specific files. Tried with absolute paths too. A.pubxml ... <ExcludeFilesFromDeployment>..\B\appsettings.json;..\B\appsettings.Staging.json;...</ExcludeFilesFromDeployment> ... Example 5: Modified pubxml file to explicity ignore project B files. Tried absolute paths as well. A.pubxml ... <ItemGroup> <ResolvedFileToPublish Include="..\B\appsettings.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> <ResolvedFileToPublish Include="..\B\appsettings.Staging.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> <ResolvedFileToPublish Include="..\B\appsettings.Development.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> <ResolvedFileToPublish Include="..\B\appsettings.Backup.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> </ItemGroup> ... 
I've tried various other combos, but none of it seems to work... Windows 10 Visual Studio 2022 (latest) .NET Core 6 A: I ran into this problem upgrading .NET 5 web services to .NET 6. As the link you provided points out, this is by design now. I fixed it by renaming the appsettings.json file in both projects by prepending the assembly name, and then reconfiguring the configuration (yes, that's a thing) as follows: public static IHostBuilder CreateHostBuilder(string[] args) => Host.CreateDefaultBuilder(args) .ConfigureWebHostDefaults(webBuilder => { webBuilder.ConfigureAppConfiguration((context, configBuilder) => { string assemblyName = Assembly.GetExecutingAssembly().GetName().Name; string envName = context.HostingEnvironment.EnvironmentName; configBuilder.Sources.Clear(); configBuilder.AddJsonFile($"{assemblyName}.appsettings.json", optional: false, reloadOnChange: true); configBuilder.AddJsonFile($"{assemblyName}.appsettings.{envName}.json", optional: true, reloadOnChange: true); }); webBuilder.UseStartup<Startup>(); As you can see, our code is still ".NET 5-style" at this point. A: I ran into this issue with a web application that had a Razor Class Library. The Culprit file was LIBMAN.JSON. Change the properties of the file to: Build Action: NONE Copy to Output Directory: DO NOT COPY Other files that are used for tooling only could possibly be changes the same way. A: I had a similar issue and for me this was due to the solution having multiple nuget packages instaled with different versions. Once they were consolidated the error was resolved. C:\Program Files\dotnet\sdk\6.0.111\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.ConflictResolution.targets(112,5): error NETSDK1152: Found multiple publish output files with the same relative path: C:\Users\srvtfsbuild.nuget\packages\microsoft.testplatform.testhost\17.3.0\build\netcoreapp2.1\x64\testhost.exe, C:\Users\srvtfsbuild.nuget\packages\microsoft.testplatform.testhost\17.4.0\build\netcoreapp3.1\x64\testhost.exe, C:\Users\srvtfsbuild.nuget\packages\microsoft.testplatform.testhost\17.3.0\build\netcoreapp2.1\x64\testhost.dll, C:\Users\srvtfsbuild.nuget\packages\microsoft.testplatform.testhost\17.4.0\build\netcoreapp3.1\x64\testhost.dll, C:\Users\srvtfsbuild.nuget\packages\nunit3testadapter\4.2.1\build\netcoreapp2.1\NUnit3.TestAdapter.pdb, C:\Users\srvtfsbuild.nuget\packages\nunit3testadapter\4.3.1\build\netcoreapp3.1\NUnit3.TestAdapter.pdb, C:\Users\srvtfsbuild.nuget\packages\nunit3testadapter\4.2.1\build\netcoreapp2.1\testcentric.engine.metadata.dll, C:\Users\srvtfsbuild.nuget\packages\nunit3testadapter\4.3.1\build\netcoreapp3.1\testcentric.engine.metadata.dll. A: A bad project was referenced in the .csproj file. It should not have been referenced at all. Essentially, the build process was treating the referenced project as a second front-end it was trying to initialize. Removing the unnecessary reference fixed the problem. <ItemGroup> <ProjectReference Include="..\BadProject\BadFrontEndProject.csproj" /> <!-- remove me --> </ItemGroup>
.NET Core 6 Found multiple publish output files with the same relative path
I'm running into a relatively new issue on .NET Core 6 where when publishing with Web Deploy via Visual Studio 2022. I'm receiving the following error: Error Found multiple publish output files with the same relative path: C:\Work\MySolution\A\appsettings.json, C:\Work\MySolution\B\appsettings.json, C:\Work\MySolution\A\appsettings.Staging.json, C:\Work\MySolution\B\appsettings.Staging.json, , C:\Work\MySolution\A\appsettings.Development.json, C:\Work\MySolution\B\appsettings.Development.json There is no issues when building, just publishing. I have two ASP.NET Core 6 projects. Project "A" references project "B" (I know B should really be a class library, but go with me). I am aware that this is expected functionality in .NET Core 6 (https://learn.microsoft.com/en-us/dotnet/core/compatibility/sdk/6.0/duplicate-files-in-output). However, I cannot seem to tell project "A" to ignore project "B" appsettings files. I am aware of the ErrorOnDuplicatePublishOutputFiles property I can set, but I'm trying to strictly tell it not to include those files. Here's some examples of items that I have tried, but does not work. Example 1: Tried typical content update approach (supposedly does not work after VS 15.3). Also tried with absolute paths. A.csproj ... <ItemGroup> <ProjectReference Include="..\B\B.csproj"> <PrivateAssets>all</PrivateAssets> </ProjectReference> </ItemGroup> <ItemGroup> <Content Update="..\B\appsettings.json" CopyToOutputDirectory="Never" CopyToPublishDirectory="Never" /> <Content Update="..\B\appsettings.*.json" CopyToOutputDirectory="Never" CopyToPublishDirectory="Never" /> </ItemGroup> ... Example 2: Tried typical content remove approach. Also tried with absolute paths. A.csproj ... <ItemGroup> <ProjectReference Include="..\B\B.csproj"> <PrivateAssets>all</PrivateAssets> </ProjectReference> </ItemGroup> <ItemGroup> <Content Remove="..\B\appsettings.json" /> <Content Remove="..\B\appsettings.*.json" /> </ItemGroup> <ItemGroup> <None Include="..\B\appsettings.json" /> <None Include="..\B\appsettings.*.json" /> </ItemGroup> ... Example 3: I tried using the GeneratePathProperty path to make sure it was directly ignoring project B's files. A.csproj ... <ItemGroup> <ProjectReference Include="..\B\B.csproj" GeneratePathProperty="true"> <PrivateAssets>all</PrivateAssets> </ProjectReference> </ItemGroup> <ItemGroup> <Content Update="$(PkgB)\appsettings.json" CopyToPublishDirectory="Never" /> <Content Update="$(PkgB)\appsettings.*.json" CopyToPublishDirectory="Never" /> </ItemGroup> ... Example 4: Modified pubxml to ignore specific files. Tried with absolute paths too. A.pubxml ... <ExcludeFilesFromDeployment>..\B\appsettings.json;..\B\appsettings.Staging.json;...</ExcludeFilesFromDeployment> ... Example 5: Modified pubxml file to explicity ignore project B files. Tried absolute paths as well. A.pubxml ... <ItemGroup> <ResolvedFileToPublish Include="..\B\appsettings.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> <ResolvedFileToPublish Include="..\B\appsettings.Staging.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> <ResolvedFileToPublish Include="..\B\appsettings.Development.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> <ResolvedFileToPublish Include="..\B\appsettings.Backup.json"> <CopyToPublishDirectory>Never</CopyToPublishDirectory> </ResolvedFileToPublish> </ItemGroup> ... I've tried various other combos, but none of it seems to work... 
Windows 10 Visual Studio 2022 (latest) .NET Core 6
[ "I ran into this problem upgrading .NET 5 web services to .NET 6. As the link you provided points out, this is by design now. I fixed it by renaming the appsettings.json file in both projects by prepending the assembly name, and then reconfiguring the configuration (yes, that's a thing) as follows:\npublic static IHostBuilder CreateHostBuilder(string[] args) =>\n Host.CreateDefaultBuilder(args)\n .ConfigureWebHostDefaults(webBuilder =>\n {\n webBuilder.ConfigureAppConfiguration((context, configBuilder) =>\n {\n string assemblyName = Assembly.GetExecutingAssembly().GetName().Name;\n string envName = context.HostingEnvironment.EnvironmentName;\n configBuilder.Sources.Clear();\n configBuilder.AddJsonFile($\"{assemblyName}.appsettings.json\", optional: false, reloadOnChange: true);\n configBuilder.AddJsonFile($\"{assemblyName}.appsettings.{envName}.json\", optional: true, reloadOnChange: true);\n });\n webBuilder.UseStartup<Startup>();\n\nAs you can see, our code is still \".NET 5-style\" at this point.\n", "I ran into this issue with a web application that had a Razor Class Library.\nThe Culprit file was LIBMAN.JSON.\nChange the properties of the file to:\nBuild Action: NONE\nCopy to Output Directory: DO NOT COPY\nOther files that are used for tooling only could possibly be changes the same way.\n", "I had a similar issue and for me this was due to the solution having multiple nuget packages instaled with different versions. Once they were consolidated the error was resolved.\nC:\\Program Files\\dotnet\\sdk\\6.0.111\\Sdks\\Microsoft.NET.Sdk\\targets\\Microsoft.NET.ConflictResolution.targets(112,5): error NETSDK1152: Found multiple publish output files with the same relative path: C:\\Users\\srvtfsbuild.nuget\\packages\\microsoft.testplatform.testhost\\17.3.0\\build\\netcoreapp2.1\\x64\\testhost.exe, C:\\Users\\srvtfsbuild.nuget\\packages\\microsoft.testplatform.testhost\\17.4.0\\build\\netcoreapp3.1\\x64\\testhost.exe, C:\\Users\\srvtfsbuild.nuget\\packages\\microsoft.testplatform.testhost\\17.3.0\\build\\netcoreapp2.1\\x64\\testhost.dll, C:\\Users\\srvtfsbuild.nuget\\packages\\microsoft.testplatform.testhost\\17.4.0\\build\\netcoreapp3.1\\x64\\testhost.dll, C:\\Users\\srvtfsbuild.nuget\\packages\\nunit3testadapter\\4.2.1\\build\\netcoreapp2.1\\NUnit3.TestAdapter.pdb, C:\\Users\\srvtfsbuild.nuget\\packages\\nunit3testadapter\\4.3.1\\build\\netcoreapp3.1\\NUnit3.TestAdapter.pdb, C:\\Users\\srvtfsbuild.nuget\\packages\\nunit3testadapter\\4.2.1\\build\\netcoreapp2.1\\testcentric.engine.metadata.dll, C:\\Users\\srvtfsbuild.nuget\\packages\\nunit3testadapter\\4.3.1\\build\\netcoreapp3.1\\testcentric.engine.metadata.dll.\n", "A bad project was referenced in the .csproj file. It should not have been referenced at all. Essentially, the build process was treating the referenced project as a second front-end it was trying to initialize. Removing the unnecessary reference fixed the problem.\n<ItemGroup>\n <ProjectReference Include=\"..\\BadProject\\BadFrontEndProject.csproj\" /> <!-- remove me -->\n</ItemGroup>\n\n" ]
[ 5, 2, 0, 0 ]
[]
[]
[ ".net_6.0", "asp.net_core", "asp.net_core_6.0", "visual_studio", "visual_studio_2022" ]
stackoverflow_0070067014_.net_6.0_asp.net_core_asp.net_core_6.0_visual_studio_visual_studio_2022.txt
Q: Google Storage Transfer Service: how to move files to an external bucket outside of the current project I need to transfer files from my bucket to another bucket once a day. The destination bucket is outside of my project. So I've tried to create a Storage Transfer Service job but obviously I get the following authorization error: Failed to obtain the location of the GCS bucket "destination-bucket-name" Additional details: project-xxxxxxxxxxxxx@storage-transfer-service.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist). I have the service account key json to access the external bucket, but how can I use it with Storage Transfer Service? A: As @AlessioInnocenzi mentioned in the comment section: To bypass this permission issue, for now I have implemented my own cloud function that gets the object, then uploads that object to the other bucket and deletes the object from the source bucket.
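As a rough illustration of the workaround quoted above, the copy-then-delete Cloud Function could look something like the TypeScript sketch below. It uses the @google-cloud/storage client; the bucket names and the key-file path are placeholders, and it assumes the service account key provided for the external bucket has write access there.

import { Storage } from '@google-cloud/storage';

// Client for the source project (runs with the function's default credentials).
const sourceStorage = new Storage();
// Client authenticated with the external project's service account key (placeholder path).
const destStorage = new Storage({ keyFilename: '/secrets/external-sa-key.json' });

const SOURCE_BUCKET = 'my-source-bucket';           // placeholder
const DEST_BUCKET = 'external-destination-bucket';  // placeholder

// Intended to be triggered once a day, e.g. by Cloud Scheduler.
export async function transferObjects(): Promise<void> {
  const [files] = await sourceStorage.bucket(SOURCE_BUCKET).getFiles();
  for (const file of files) {
    // Download with the source credentials and re-upload with the external ones,
    // since a direct cross-project copy would need a single identity with access to both buckets.
    const [contents] = await file.download();
    await destStorage.bucket(DEST_BUCKET).file(file.name).save(contents);
    // Remove the object from the source bucket once it has been copied.
    await file.delete();
  }
}

Streaming instead of buffering each object would be preferable for large files, but the buffered version keeps the sketch short.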
Google Storage Transfer Service: how to move files to an external bucket outside of the current project
I need to transfer files from my bucket to another bucket once a day. The destination bucket is outside of my project. So I've tried to create a Storage Transfer Service job but obviously I get the following authorization error: Failed to obtain the location of the GCS bucket "destination-bucket-name" Additional details: project-xxxxxxxxxxxxx@storage-transfer-service.iam.gserviceaccount.com does not have storage.buckets.get access to the Google Cloud Storage bucket. Permission 'storage.buckets.get' denied on resource (or it may not exist). I have the service account key json to access the external bucket, but how can I use it with Storage Transfer Service?
[ "As @AlessioInnocenzi mentioned in the comment section:\n\nTo bypass this permission issue, for now I have implemented by my own a cloud function that gets the object, then it upload that object to the other bucket and delete the object from the source bucket.\n\n" ]
[ 1 ]
[]
[]
[ "google_cloud_platform", "google_cloud_storage", "service_accounts" ]
stackoverflow_0074575153_google_cloud_platform_google_cloud_storage_service_accounts.txt
Q: Trigger is not working properly in SQL Server I understand that perhaps the problem is that I use a select on the same table that I update or insert a record, but this trigger throws an exception in most cases. Then what should I rewrite? The purpose of the trigger is to block inserting or updating entries if the room is already occupied on a certain date, i.e. the dates overlap CREATE TABLE [dbo].[settlements] ( [id] [int] IDENTITY(1,1) NOT NULL, [client_id] [int] NOT NULL, [checkin_date] [date] NOT NULL, [checkout_date] [date] NOT NULL, [room_id] [int] NOT NULL, [employee_id] [int] NULL ); ALTER TRIGGER [dbo].[On_Hotel_Settlement] ON [dbo].[settlements] AFTER INSERT, UPDATE AS BEGIN SET NOCOUNT ON; DECLARE @room_id int DECLARE @checkin_date Date, @checkout_date Date DECLARE cursor_settlement CURSOR FOR SELECT room_id, checkin_date, checkout_date FROM inserted; OPEN cursor_settlement; FETCH NEXT FROM cursor_settlement INTO @room_id, @checkin_date, @checkout_date; WHILE @@FETCH_STATUS = 0 BEGIN IF EXISTS (SELECT 1 FROM settlements AS s WHERE s.room_id = @room_id AND ((s.checkin_date >= @checkin_date AND s.checkin_date <= @checkout_date) OR (s.checkout_date >= @checkin_date AND s.checkout_date <= @checkout_date))) BEGIN RAISERROR ('Room is not free', 16, 1); ROLLBACK; END; FETCH NEXT FROM cursor_settlement INTO @room_id, @checkin_date, @checkout_date; END; CLOSE cursor_settlement; DEALLOCATE cursor_settlement; RETURN I tried to test the code by removing the condition with dates and leaving only the room _id, but the trigger does not work correctly in this case either. I tried query like IF EXISTS (SELECT 1 FROM settlements AS s WHERE s.room_id = 9 AND ((s.checkin_date >= '2022-12-10' AND s.checkin_date <= '2022-12-30') OR (s.checkout_date >= '2022-12-10' AND s.checkout_date <= '2022-12-30'))) BEGIN RAISERROR ('Room is not free', 16, 1); END; and it worked correctly. Problem is that is not working in my trigger A: As noted by comments on the original post above, the cursor loop is not needed and would best be eliminated to improve efficiency. As for the date logic, consider a new record with a check-in date that is the same as the checkout date from a prior record. I believe that your logic will consider this an overlap and throw an error. My suggestion is that you treat the check-in date as inclusive (that night is in use) and the checkout date as exclusive (that night is not in use). A standard test for overlapping dates would then be Checkin1 < Checkout2 AND Checkin2 < Checkout1. (Note use of inequality.) It may not be obvious, but this test covers all overlapping date cases. (It might be more obvious if this condition is inverted and rewritten as NOT (Checkin1 >= Checkout2 OR Checkin2 >= Checkout1).) Also, if you are inserting multiple records at once, I would suggest that you also check the inserted records for mutual conflicts. Suggest something like: ALTER TRIGGER [dbo].[On_Hotel_Settlement] ON [dbo].[settlements] AFTER INSERT, UPDATE AS BEGIN SET NOCOUNT ON; IF EXISTS( SELECT * FROM inserted i JOIN settlements s ON s.room_id = i.room_id AND s.checkin_date < i.checkout_date AND i.checkin_date < s.checkout_date AND s.id <> i.id ) BEGIN RAISERROR ('Room is not free', 16, 1); ROLLBACK; END; RETURN; END One more note: Be careful with an early rollback of a transaction. If your overall logic could potentially execute additional DML after the error is thrown, that would now execute outside the transaction and there would be no remaining transaction to roll back.
Trigger is not working properly in SQL Server
I understand that perhaps the problem is that I use a select on the same table that I update or insert a record, but this trigger throws an exception in most cases. Then what should I rewrite? The purpose of the trigger is to block inserting or updating entries if the room is already occupied on a certain date, i.e. the dates overlap CREATE TABLE [dbo].[settlements] ( [id] [int] IDENTITY(1,1) NOT NULL, [client_id] [int] NOT NULL, [checkin_date] [date] NOT NULL, [checkout_date] [date] NOT NULL, [room_id] [int] NOT NULL, [employee_id] [int] NULL ); ALTER TRIGGER [dbo].[On_Hotel_Settlement] ON [dbo].[settlements] AFTER INSERT, UPDATE AS BEGIN SET NOCOUNT ON; DECLARE @room_id int DECLARE @checkin_date Date, @checkout_date Date DECLARE cursor_settlement CURSOR FOR SELECT room_id, checkin_date, checkout_date FROM inserted; OPEN cursor_settlement; FETCH NEXT FROM cursor_settlement INTO @room_id, @checkin_date, @checkout_date; WHILE @@FETCH_STATUS = 0 BEGIN IF EXISTS (SELECT 1 FROM settlements AS s WHERE s.room_id = @room_id AND ((s.checkin_date >= @checkin_date AND s.checkin_date <= @checkout_date) OR (s.checkout_date >= @checkin_date AND s.checkout_date <= @checkout_date))) BEGIN RAISERROR ('Room is not free', 16, 1); ROLLBACK; END; FETCH NEXT FROM cursor_settlement INTO @room_id, @checkin_date, @checkout_date; END; CLOSE cursor_settlement; DEALLOCATE cursor_settlement; RETURN I tried to test the code by removing the condition with dates and leaving only the room _id, but the trigger does not work correctly in this case either. I tried query like IF EXISTS (SELECT 1 FROM settlements AS s WHERE s.room_id = 9 AND ((s.checkin_date >= '2022-12-10' AND s.checkin_date <= '2022-12-30') OR (s.checkout_date >= '2022-12-10' AND s.checkout_date <= '2022-12-30'))) BEGIN RAISERROR ('Room is not free', 16, 1); END; and it worked correctly. Problem is that is not working in my trigger
[ "As noted by comments on the original post above, the cursor loop is not needed and would best be eliminated to improve efficiency.\nAs for the date logic, consider a new record with a check-in date that is the same as the checkout date from a prior record. I believe that your logic will consider this an overlap and throw an error.\nMy suggestion is that you treat the check-in date as inclusive (that night is in use) and the checkout date as exclusive (that night is not in use).\nA standard test for overlapping dates would then be Checkin1 < Checkout2 AND Checkin2 < Checkout1. (Note use of inequality.) It may not be obvious, but this test covers all overlapping date cases. (It might be more obvious if this condition is inverted and rewritten as NOT (Checkin1 >= Checkout2 OR Checkin2 >= Checkout1).)\nAlso, if you are inserting multiple records at once, I would suggest that you also check the inserted records for mutual conflicts.\nSuggest something like:\nALTER TRIGGER [dbo].[On_Hotel_Settlement]\n ON [dbo].[settlements]\n AFTER INSERT, UPDATE\nAS \nBEGIN\n SET NOCOUNT ON;\n\n IF EXISTS(\n SELECT *\n FROM inserted i\n JOIN settlements s\n ON s.room_id = i.room_id\n AND s.checkin_date < i.checkout_date\n AND i.checkin_date < s.checkout_date\n AND s.id <> i.id\n )\n BEGIN \n RAISERROR ('Room is not free', 16, 1); \n ROLLBACK;\n END;\n\n RETURN;\nEND\n\nOne more note: Be careful with an early rollback of a transaction. If your overall logic could potentially execute additional DML after the error is thrown, that would now execute outside the transaction and there would be no remaining transaction to roll back.\n" ]
[ 3 ]
[]
[]
[ "sql_server", "triggers", "tsql" ]
stackoverflow_0074660544_sql_server_triggers_tsql.txt
Q: How to set singular name for a table in gorm type user struct { ID int Username string `gorm:"size:255"` Name string `gorm:"size:255"` } I want to create a table 'user' using this model. But the table name is automatically set to 'users'. I know it is gorm's default behavior. But I want the table name to be 'user'. A: Set method TableName for your struct. func (user) TableName() string { return "user" } Link: https://gorm.io/docs/models.html#conventions A: Gorm has a in-built method for that that will be set in global level so all tables will be singular. For gorm v1, you could do: db.SingularTable(true) For v2, it's a little more verbose: db, err := gorm.Open(postgres.Open(connStr), &gorm.Config{ NamingStrategy: schema.NamingStrategy{ SingularTable: true, }, }) A: To explicitly set a table name, you would have to create an interface Tabler with TableName method, and then create a receiver method (defined in the interface) for the struct: type user struct { ID int Username string `gorm:"size:255"` Name string `gorm:"size:255"` } type Tabler interface { TableName() string } // TableName overrides the table name used by User to `profiles` func (user) TableName() string { return "user" }
How to set singular name for a table in gorm
type user struct { ID int Username string `gorm:"size:255"` Name string `gorm:"size:255"` } I want to create a table 'user' using this model. But the table name is automatically set to 'users'. I know it is gorm's default behavior. But I want the table name to be 'user'.
[ "Set method TableName for your struct.\nfunc (user) TableName() string {\n return \"user\"\n}\n\nLink: https://gorm.io/docs/models.html#conventions\n", "Gorm has a in-built method for that that will be set in global level so all tables will be singular.\nFor gorm v1, you could do:\ndb.SingularTable(true)\n\nFor v2, it's a little more verbose:\ndb, err := gorm.Open(postgres.Open(connStr), &gorm.Config{\n NamingStrategy: schema.NamingStrategy{\n SingularTable: true,\n },\n})\n\n", "To explicitly set a table name, you would have to create an interface Tabler with TableName method, and then create a receiver method (defined in the interface) for the struct:\ntype user struct {\n ID int\n Username string `gorm:\"size:255\"`\n Name string `gorm:\"size:255\"`\n}\n\ntype Tabler interface {\n TableName() string\n}\n\n// TableName overrides the table name used by User to `profiles`\nfunc (user) TableName() string {\n return \"user\"\n}\n\n" ]
[ 44, 20, 0 ]
[]
[]
[ "go", "go_gorm" ]
stackoverflow_0044589060_go_go_gorm.txt
Q: Repeat on an BehaviorSubject I want to reemit the last value of my observable at a fix interval, to I tried obs.pipe(repeat({delay:1000})).subscribe(x => console.log('Emitted', x)); but it did not work. after looking into this, my observable is in fact a BehaviorSubject. So my Question is Why does the 1st emits every second of('Observable').pipe(repeat({ delay: 1000 })).subscribe(x => console.log(x)); but not the this? var bs = new BehaviorSubject('BehaviorSubject'); bs.pipe(repeat({ delay: 1000 })).subscribe(x => console.log(x)); How to do it with my BehaviorSubject? Edit And I would also like to reset my timer when the subject emits a new value. the solution I found is var bs = new BehaviorSubject('BehaviorSubject'); bs.pipe(switchMap(x => timer(0,1000).pipe(map => () => x)).subscribe(x => console.log(x)); but it feels ugly. A: If you want to repeat the last value emitted by a BehaviorSubject, you can use the shareReplay operator instead of the repeat operator. This operator allows you to specify a buffer size, and it will maintain a buffer of the specified size containing the values emitted by the source observable. This means that any new observers that subscribe to the observable returned by shareReplay will immediately receive the values in the buffer. Here is an example of how you could use the shareReplay operator with a BehaviorSubject to repeat the last value emitted by the subject at a fixed interval: const bs = new BehaviorSubject('BehaviorSubject'); const shared = bs.pipe(shareReplay(1)); // Emit the last value from the behavior subject every second interval(1000).pipe( switchMap(() => shared) ).subscribe(x => console.log(x)); // Emit a new value to the behavior subject every 5 seconds interval(5000).subscribe(x => bs.next(x)); A: You can derive an observable from your BehaviorSubject that switchMaps to a timer that emits the received value. Whenever the subject emits, the timer is reset and will emit the latest value: const bs = new BehaviorSubject('initial value'); const repeated = bs.pipe( switchMap(val => timer(0, 1000).pipe( map(() => val) )) ); Here's a StackBlitz demo. So my Question is Why does the 1st emits every second, but not the this? The reason your example code using of as the source works and not the code using the BehaviorSubject can be found in the documentation of the repeat operator: Returns an Observable that will resubscribe to the source stream when the source stream completes. The observable created using of completes after it emits the provided value, so it will resubscribe. Since the BehaviorSubject was not completed, it will not resubscribe.
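One way to tidy up the switchMap-plus-timer idea from the second answer above is to wrap it in a small helper so the inline version from the edit is not repeated everywhere. The helper name repeatLatest below is made up for illustration and assumes RxJS 6/7-style imports.

import { BehaviorSubject, Observable, timer } from 'rxjs';
import { map, switchMap } from 'rxjs/operators';

// Re-emits the latest value of the source every `periodMs`; the timer restarts
// whenever the source emits a new value, so the newest value is always the one repeated.
function repeatLatest<T>(source: Observable<T>, periodMs: number): Observable<T> {
  return source.pipe(
    switchMap(value => timer(0, periodMs).pipe(map(() => value)))
  );
}

// Usage
const bs = new BehaviorSubject('BehaviorSubject');
repeatLatest(bs, 1000).subscribe(x => console.log(x));

// Pushing a new value resets the interval and that value is repeated from then on.
setTimeout(() => bs.next('new value'), 3500);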
Repeat on an BehaviorSubject
I want to reemit the last value of my observable at a fixed interval, so I tried obs.pipe(repeat({delay:1000})).subscribe(x => console.log('Emitted', x)); but it did not work. After looking into this, my observable is in fact a BehaviorSubject. So my question is: why does the 1st emit every second of('Observable').pipe(repeat({ delay: 1000 })).subscribe(x => console.log(x)); but not this? var bs = new BehaviorSubject('BehaviorSubject'); bs.pipe(repeat({ delay: 1000 })).subscribe(x => console.log(x)); How to do it with my BehaviorSubject? Edit And I would also like to reset my timer when the subject emits a new value. The solution I found is var bs = new BehaviorSubject('BehaviorSubject'); bs.pipe(switchMap(x => timer(0, 1000).pipe(map(() => x)))).subscribe(x => console.log(x)); but it feels ugly.
[ "If you want to repeat the last value emitted by a BehaviorSubject, you can use the shareReplay operator instead of the repeat operator. This operator allows you to specify a buffer size, and it will maintain a buffer of the specified size containing the values emitted by the source observable. This means that any new observers that subscribe to the observable returned by shareReplay will immediately receive the values in the buffer.\nHere is an example of how you could use the shareReplay operator with a BehaviorSubject to repeat the last value emitted by the subject at a fixed interval:\nconst bs = new BehaviorSubject('BehaviorSubject');\nconst shared = bs.pipe(shareReplay(1));\n\n// Emit the last value from the behavior subject every second\ninterval(1000).pipe(\n switchMap(() => shared)\n).subscribe(x => console.log(x));\n\n// Emit a new value to the behavior subject every 5 seconds\ninterval(5000).subscribe(x => bs.next(x));\n\n", "You can derive an observable from your BehaviorSubject that switchMaps to a timer that emits the received value. Whenever the subject emits, the timer is reset and will emit the latest value:\nconst bs = new BehaviorSubject('initial value');\n\nconst repeated = bs.pipe(\n switchMap(val => timer(0, 1000).pipe(\n map(() => val)\n ))\n);\n\nHere's a StackBlitz demo.\n\n\nSo my Question is Why does the 1st emits every second, but not the this?\n\nThe reason your example code using of as the source works and not the code using the BehaviorSubject can be found in the documentation of the repeat operator:\n\nReturns an Observable that will resubscribe to the source stream when the source stream completes.\n\nThe observable created using of completes after it emits the provided value, so it will resubscribe. Since the BehaviorSubject was not completed, it will not resubscribe.\n" ]
[ 1, 1 ]
[]
[]
[ "rxjs" ]
stackoverflow_0074659233_rxjs.txt
Q: Nest.js - request entity too large PayloadTooLargeError: request entity too large I'm trying to save a JSON into a Nest.js server but the server crash when I try to do it, and this is the issue that I'm seeing on the console.log: [Nest] 1976 - 2018-10-12 09:52:04 [ExceptionsHandler] request entity too large PayloadTooLargeError: request entity too large One thing is the size of the JSON request is 1095922 bytes, Does any one know How in Nest.js increase the size of a valid request? Thanks! A: you can also import urlencoded & json from express import { NestFactory } from '@nestjs/core'; import { AppModule } from './app.module'; import { urlencoded, json } from 'express'; async function bootstrap() { const app = await NestFactory.create(AppModule); app.setGlobalPrefix('api'); app.use(json({ limit: '50mb' })); app.use(urlencoded({ extended: true, limit: '50mb' })); await app.listen(process.env.PORT || 3000); } bootstrap(); A: I found the solution, since this issue is related to express (Nest.js uses express behind scene) I found a solution in this thread Error: request entity too large, What I did was to modify the main.ts file add the body-parser dependency and add some new configuration to increase the size of the JSON request, then I use the app instance available in the file to apply those changes. import { NestFactory } from '@nestjs/core'; import * as bodyParser from 'body-parser'; import { AppModule } from './app.module'; async function bootstrap() { const app = await NestFactory.create(AppModule); app.useStaticAssets(`${__dirname}/public`); // the next two lines did the trick app.use(bodyParser.json({limit: '50mb'})); app.use(bodyParser.urlencoded({limit: '50mb', extended: true})); app.enableCors(); await app.listen(3001); } bootstrap(); A: The solution that solved for me was to increase bodyLimit. Source:https://www.fastify.io/docs/latest/Server/#bodylimit const app = await NestFactory.create<NestFastifyApplication>( AppModule, new FastifyAdapter({ bodyLimit: 10048576 }), A: The default limit defined by body-parser is 100kb: https://github.com/expressjs/body-parser/blob/0632e2f378d53579b6b2e4402258f4406e62ac6f/lib/types/json.js#L53-L55 Hope this helps :) For me It helped and I set 100kb to 50mb A: This resolved my issue and also in nestjs bodyparser is depreciated now so this might be an apt solution. app.use(express.json({limit: '50mb'})); app.use(express.urlencoded({limit: '50mb'}));
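If raising the limit for the whole application feels too broad, one possible variation on the answers above is to mount a larger json parser on a single path only, so every other route keeps body-parser's default 100kb limit. The /api/import path and the 50mb figure are placeholders, and this sketch assumes the custom parser is registered before Nest's default body parser so it handles the large payload first.

import { NestFactory } from '@nestjs/core';
import { json } from 'express';
import { AppModule } from './app.module';

async function bootstrap() {
  const app = await NestFactory.create(AppModule);

  // Allow large JSON bodies only for the import endpoint (placeholder path);
  // other routes keep the default limit.
  app.use('/api/import', json({ limit: '50mb' }));

  await app.listen(3000);
}
bootstrap();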
Nest.js - request entity too large PayloadTooLargeError: request entity too large
I'm trying to save a JSON into a Nest.js server but the server crashes when I try to do it, and this is the issue that I'm seeing on the console.log: [Nest] 1976 - 2018-10-12 09:52:04 [ExceptionsHandler] request entity too large PayloadTooLargeError: request entity too large One thing to note is that the size of the JSON request is 1095922 bytes. Does anyone know how to increase the size of a valid request in Nest.js? Thanks!
[ "you can also import urlencoded & json from express\nimport { NestFactory } from '@nestjs/core';\nimport { AppModule } from './app.module';\nimport { urlencoded, json } from 'express';\n\nasync function bootstrap() {\n const app = await NestFactory.create(AppModule);\n app.setGlobalPrefix('api');\n app.use(json({ limit: '50mb' }));\n app.use(urlencoded({ extended: true, limit: '50mb' }));\n await app.listen(process.env.PORT || 3000);\n}\nbootstrap();\n\n", "I found the solution, since this issue is related to express (Nest.js uses express behind scene) I found a solution in this thread Error: request entity too large,\nWhat I did was to modify the main.ts file add the body-parser dependency and add some new configuration to increase the size of the JSON request, then I use the app instance available in the file to apply those changes.\nimport { NestFactory } from '@nestjs/core';\nimport * as bodyParser from 'body-parser';\n\nimport { AppModule } from './app.module';\n\nasync function bootstrap() {\n const app = await NestFactory.create(AppModule);\n app.useStaticAssets(`${__dirname}/public`);\n // the next two lines did the trick\n app.use(bodyParser.json({limit: '50mb'}));\n app.use(bodyParser.urlencoded({limit: '50mb', extended: true}));\n app.enableCors();\n await app.listen(3001);\n}\nbootstrap();\n\n", "The solution that solved for me was to increase bodyLimit. Source:https://www.fastify.io/docs/latest/Server/#bodylimit\nconst app = await NestFactory.create<NestFastifyApplication>(\nAppModule,\nnew FastifyAdapter({ bodyLimit: 10048576 }),\n\n", "The default limit defined by body-parser is 100kb:\nhttps://github.com/expressjs/body-parser/blob/0632e2f378d53579b6b2e4402258f4406e62ac6f/lib/types/json.js#L53-L55\nHope this helps :)\nFor me It helped and I set 100kb to 50mb\n", "This resolved my issue and also in nestjs bodyparser is depreciated now so this might be an apt solution.\napp.use(express.json({limit: '50mb'}));\napp.use(express.urlencoded({limit: '50mb'}));\n" ]
[ 74, 68, 12, 2, 0 ]
[]
[]
[ "javascript", "nestjs", "node.js" ]
stackoverflow_0052783959_javascript_nestjs_node.js.txt
Q: How to programmatically increase the font size for the entire app? I have a MaterialApp in Flutter and want to scale up text throughout the entire app, base in user preferences. A: You could use the property textScaleFactor. You have to include your variable in every Text Widget. To save the variable you can use SharedPreferences. Example: Text("dummy data",textScaleFactor: yourVariable,) A: To programmatically increase the font size for the entire app in Flutter, you can use a MediaQuery widget at the root of your app. This widget allows you to access the device's media and display information, such as the screen size and the current text scale factor. MediaQuery( data: MediaQuery.of(context).copyWith( // Increase the text scale factor to 1.5 textScaleFactor: 1.5, ), child: YourApp(), ),
How to programmatically increase the font size for the entire app?
I have a MaterialApp in Flutter and want to scale up text throughout the entire app, based on user preferences.
[ "You could use the property textScaleFactor. You have to include your variable in every Text Widget. To save the variable you can use SharedPreferences.\nExample:\nText(\"dummy data\",textScaleFactor: yourVariable,)\n\n", "To programmatically increase the font size for the entire app in Flutter, you can use a MediaQuery widget at the root of your app. This widget allows you to access the device's media and display information, such as the screen size and the current text scale factor.\nMediaQuery(\n data: MediaQuery.of(context).copyWith(\n // Increase the text scale factor to 1.5\n textScaleFactor: 1.5,\n ),\n child: YourApp(),\n),\n\n" ]
[ 0, 0 ]
[]
[]
[ "dart", "flutter" ]
stackoverflow_0074658273_dart_flutter.txt
Q: override employee time card release method Hello friends I need help: Here I specify what I am doing: 1.- I am overriding the release method of Employee Timecard, do a validation to get the project's default branch code, and then insert it into the project's transaction screen. public class TimeCardMaint_Extension : PXGraphExtension<TimeCardMaint> { #region Event Handlers public delegate IEnumerable ReleaseDelegate(PXAdapter a); [PXOverride] public IEnumerable Release(PXAdapter adapter, ReleaseDelegate InvokeBase) { PXGraph.InstanceCreated.AddHandler<RegisterEntry>((graph) => { graph.RowInserted.AddHandler<PMTran>((sender, e) => { EPTimecardDetail detail = PXResult<EPTimecardDetail>.Current; if (detail != null) { var tran = (PMTran)e.Row; PMProject project = PXSelect<PMProject, Where<PMProject.contractID, Equal<Required<PMProject.contractID>>>>.Select(Base, detail.ProjectID); if (project != null) { tran.BranchID = project.DefaultBranchID; } } }); }); return InvokeBase(adapter); } #endregion } Here we see the Transactions screen of the project, make the correct change. So far everything perfect: However, if I check the Journal Transactions screen, it has generated two new entries, it should really only generate a single journal entry as it does by default in acumatica. Due to these consequences, it is because I have modified the employee's time card, in the release method, I don't know what is happening: I need you to tell me what I should do or what I am doing wrong, really I only have to modify the Project Transactions screen and the others should not affect I hope I have been clear.. A: Digging through the code, the release function for the Project Transactions groups them together by branch. This is executed at each entry: Batch newbatch = je.BatchModule.Insert(new Batch { Module = doc.Module, Status = BatchStatus.Unposted, Released = true, Hold = false, BranchID = branchID, FinPeriodID = parts[1], CuryID = parts[2], CuryInfoID = info.CuryInfoID, Description = doc.Description }); As you can see, it adds to the batch of the branch. This would filter out one document for each type, even though it could be separate on the line: Since you are having one time card, and then two branches, it would post one time card to one branch, then the other time card to the other branch. If you were to change the line item for each, you would have to override the entire release functions. So based on spending time combing through the project code, and knowing the GL processes, it seems to be operating as intended for acumatica. Now to your point to bring it into one, I would override the Release function of the PX.Objects.PM.RegisterReleaseProcess graph: public virtual List<Batch> Release(JournalEntry je, PMRegister doc, out List<PMTask> allocTasks) You would make a copy of the entire function, and then create one batch, and then override the transaction. The document would still be tagged with the employee's primary branch, but each line would then be updated to the proper branch. NOTE: This would need to be tested thoroughly each update to Acumatica. You would also need to verify from the accounting side that all of the reporting is correct. Some companies may report on the document's branch rather than the transactions, or ignore it and look at the account that it is hitting altogether. I hope this helps get you to the desired solution!
override employee time card release method
Hello friends I need help: Here I specify what I am doing: 1.- I am overriding the release method of Employee Timecard, do a validation to get the project's default branch code, and then insert it into the project's transaction screen. public class TimeCardMaint_Extension : PXGraphExtension<TimeCardMaint> { #region Event Handlers public delegate IEnumerable ReleaseDelegate(PXAdapter a); [PXOverride] public IEnumerable Release(PXAdapter adapter, ReleaseDelegate InvokeBase) { PXGraph.InstanceCreated.AddHandler<RegisterEntry>((graph) => { graph.RowInserted.AddHandler<PMTran>((sender, e) => { EPTimecardDetail detail = PXResult<EPTimecardDetail>.Current; if (detail != null) { var tran = (PMTran)e.Row; PMProject project = PXSelect<PMProject, Where<PMProject.contractID, Equal<Required<PMProject.contractID>>>>.Select(Base, detail.ProjectID); if (project != null) { tran.BranchID = project.DefaultBranchID; } } }); }); return InvokeBase(adapter); } #endregion } Here we see the Transactions screen of the project, make the correct change. So far everything perfect: However, if I check the Journal Transactions screen, it has generated two new entries, it should really only generate a single journal entry as it does by default in acumatica. Due to these consequences, it is because I have modified the employee's time card, in the release method, I don't know what is happening: I need you to tell me what I should do or what I am doing wrong, really I only have to modify the Project Transactions screen and the others should not affect I hope I have been clear..
[ "Digging through the code, the release function for the Project Transactions groups them together by branch.\nThis is executed at each entry:\n Batch newbatch = je.BatchModule.Insert(new Batch\n {\n Module = doc.Module,\n Status = BatchStatus.Unposted,\n Released = true,\n Hold = false,\n BranchID = branchID,\n FinPeriodID = parts[1],\n CuryID = parts[2],\n CuryInfoID = info.CuryInfoID,\n Description = doc.Description\n });\n\nAs you can see, it adds to the batch of the branch. This would filter out one document for each type, even though it could be separate on the line:\n\nSince you are having one time card, and then two branches, it would post one time card to one branch, then the other time card to the other branch. If you were to change the line item for each, you would have to override the entire release functions. So based on spending time combing through the project code, and knowing the GL processes, it seems to be operating as intended for acumatica.\nNow to your point to bring it into one, I would override the Release function of the PX.Objects.PM.RegisterReleaseProcess graph:\n public virtual List<Batch> Release(JournalEntry je, PMRegister doc, out List<PMTask> allocTasks)\n\nYou would make a copy of the entire function, and then create one batch, and then override the transaction. The document would still be tagged with the employee's primary branch, but each line would then be updated to the proper branch.\nNOTE: This would need to be tested thoroughly each update to Acumatica. You would also need to verify from the accounting side that all of the reporting is correct. Some companies may report on the document's branch rather than the transactions, or ignore it and look at the account that it is hitting altogether.\nI hope this helps get you to the desired solution!\n" ]
[ 0 ]
[]
[]
[ "acumatica", "acumatica_kb", "c#" ]
stackoverflow_0073616466_acumatica_acumatica_kb_c#.txt
Q: How to use the viewParams option of a GeoServer layer defined as a SQL View in a Leaflet? I set up a PostGIS database that I added in GeoServer via a parameterized SQL view. The parameter I set is 'year' : it permits to select only the polygons of my database that have this year as a property. I know this worked because I can successfully use the layer preview of GeoServer on my layer with the local link : http://localhost:9999/geoserver/cite/wms?service=WMS&version=1.1.0&request=GetMap&viewParams=year:0&layers=cite%3Achoix_annee&bbox=-56.6015625%2C25.16517336866393%2C-16.875%2C47.040182144806664&width=768&height=422&srs=EPSG%3A404000&styles=&format=application/openlayers The interesting option there is 'viewParams=year:0'. In Leaflet, I used the following code to import my layer via wms: <script> // Find our map id var map = L.map('map'); var wmsLayer = L.tileLayer.wms('/geoserver/gwc/service/wms', { layers: 'cartowiki:choix', format: 'image/png', transparent: true, viewparams: 'year:1000' }).addTo(map); // Set the map view map.setView([46.988332, 2.605527], 2); </script> It works the first time but then GeoWebCache intervenes and prevents me from getting results fitted to the year I'm asking, it always gives me the results for the first year I asked. I tried to specify the year parameter with a regular expression in the tile caching module of my layer but it doesn't work and I cannot figure out why. This is a view of my settings in geoserver: geoserver settings Thanks, A: you have to dynamically set viewparams value the refresh wms layer
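A small sketch of what the answer above suggests: keep a reference to the WMS layer and update its viewparams with setParams, which redraws the layer with the new value. Requesting the plain /geoserver/cartowiki/wms endpoint instead of the cached /gwc/ one is an assumption made here (based on the workspace name in the question) to keep GeoWebCache from serving tiles built for a different year.

// Assumes Leaflet is loaded globally as L, as in the question's <script> block.
declare const L: any;

const map = L.map('map').setView([46.988332, 2.605527], 2);

// Plain WMS endpoint (no /gwc/) so each year's request reaches GeoServer directly.
const wmsLayer = L.tileLayer.wms('/geoserver/cartowiki/wms', {
  layers: 'cartowiki:choix',
  format: 'image/png',
  transparent: true,
  viewparams: 'year:1000',
}).addTo(map);

// Call this whenever the user picks a different year.
function setYear(year: number): void {
  // setParams merges the new parameter and triggers a redraw of the layer.
  wmsLayer.setParams({ viewparams: 'year:' + year });
}

setYear(1492);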
How to use the viewParams option of a GeoServer layer defined as a SQL View in a Leaflet?
I set up a PostGIS database that I added in GeoServer via a parameterized SQL view. The parameter I set is 'year' : it permits to select only the polygons of my database that have this year as a property. I know this worked because I can successfully use the layer preview of GeoServer on my layer with the local link : http://localhost:9999/geoserver/cite/wms?service=WMS&version=1.1.0&request=GetMap&viewParams=year:0&layers=cite%3Achoix_annee&bbox=-56.6015625%2C25.16517336866393%2C-16.875%2C47.040182144806664&width=768&height=422&srs=EPSG%3A404000&styles=&format=application/openlayers The interesting option there is 'viewParams=year:0'. In Leaflet, I used the following code to import my layer via wms: <script> // Find our map id var map = L.map('map'); var wmsLayer = L.tileLayer.wms('/geoserver/gwc/service/wms', { layers: 'cartowiki:choix', format: 'image/png', transparent: true, viewparams: 'year:1000' }).addTo(map); // Set the map view map.setView([46.988332, 2.605527], 2); </script> It works the first time but then GeoWebCache intervenes and prevents me from getting results fitted to the year I'm asking, it always gives me the results for the first year I asked. I tried to specify the year parameter with a regular expression in the tile caching module of my layer but it doesn't work and I cannot figure out why. This is a view of my settings in geoserver: geoserver settings Thanks,
[ "you have to dynamically set viewparams value the refresh wms layer\n" ]
[ 0 ]
[]
[]
[ "geoserver", "javascript", "leaflet", "wms" ]
stackoverflow_0065833320_geoserver_javascript_leaflet_wms.txt
Q: How do you automatically look up Microsoft teams app tenant id We have created a Microsoft Teams tab app with bot integration that we want to distribute to various organizations either manually or via an App Store. In summary, We created Tabs App with Microsoft Bot using node.js botbuilder package. We provided zip archive to another organization (another tenant Id). Organization uploaded our app using Microsoft Teams Admin panel and approved permission in Permission tabs. Question is how can we receive the tenant id from the organization we are deploying to without asking their admins to go to Azure Active Directory and look it up. Once provided, the graph api and the multi tenant bot does work fine. We are trying to avoid asking their admin to provide us the tenant id and want to retrieve it automatically upon the app being uploaded or on startup. Thank you. A: The best place to get the tenant id is from the access token you are provided by logging in to your app. Look for the 'tid' value. I'm assuming you are talking about stream lining the company wide admin consent for your application. What you can do is have a web site that a customer's admin can log into (using standard Microsoft OAuth interactive flow). You can then pull the Tenant ID from the access token and then run through the Microsoft consent process. Once consent process redirected back to your web site, you can do your own customer onboarding if required.
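A rough TypeScript sketch of the token-based approach from the answer above: the tenant id shows up in the 'tid' claim of the Azure AD access token obtained during sign-in, so it can be read without asking the customer's admin for anything. The snippet only base64-decodes the payload for illustration; a real implementation should validate the token's signature first.

// Extract the tenant id ('tid' claim) from an Azure AD access token.
// For illustration only: the payload is decoded, not validated.
function getTenantIdFromToken(accessToken: string): string | undefined {
  const payloadPart = accessToken.split('.')[1];
  if (!payloadPart) {
    return undefined;
  }
  // JWTs use base64url; translate to standard base64 before decoding.
  const base64 = payloadPart.replace(/-/g, '+').replace(/_/g, '/');
  const payload = JSON.parse(Buffer.from(base64, 'base64').toString('utf8')) as { tid?: string };
  return payload.tid;
}

// Example: record the tenant the first time someone from that organization signs in.
const accessToken = '<access token from the OAuth sign-in>'; // placeholder
const tenantId = getTenantIdFromToken(accessToken);
if (tenantId) {
  console.log(`Onboarding tenant ${tenantId}`);
}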
How do you automatically look up Microsoft teams app tenant id
We have created a Microsoft Teams tab app with bot integration that we want to distribute to various organizations, either manually or via an App Store. In summary, we created a Tabs app with a Microsoft bot using the node.js botbuilder package. We provided a zip archive to another organization (another tenant Id). The organization uploaded our app using the Microsoft Teams Admin panel and approved permissions in the Permissions tab. The question is how we can receive the tenant id from the organization we are deploying to without asking their admins to go to Azure Active Directory and look it up. Once provided, the graph api and the multi-tenant bot do work fine. We are trying to avoid asking their admin to provide us the tenant id and want to retrieve it automatically upon the app being uploaded or on startup. Thank you.
[ "The best place to get the tenant id is from the access token you are provided by logging in to your app. Look for the 'tid' value.\nI'm assuming you are talking about stream lining the company wide admin consent for your application.\nWhat you can do is have a web site that a customer's admin can log into (using standard Microsoft OAuth interactive flow). You can then pull the Tenant ID from the access token and then run through the Microsoft consent process. Once consent process redirected back to your web site, you can do your own customer onboarding if required.\n" ]
[ 0 ]
[]
[]
[ "botframework", "microsoft_graph_api", "microsoft_teams" ]
stackoverflow_0074660757_botframework_microsoft_graph_api_microsoft_teams.txt
Q: Mapping json data that have relation values inside I'm having difficulties about how I should approach rendering this JSON data that have relation values inside. { "data":{ "comments":[ { "id":"1", "text":"Data 1" }, { "id":"2", "parent_id" : "1" "text":"Data 2 with relation to Data 1", }, { "id":"3", "text":"Data 3", }, ] } } And it should look something like this in my head: A: Here is an example of how this could be done: const sortedComments = data.comments.sort((a, b) => a.parent_id - b.parent_id); return ( <div> {sortedComments.map(comment => { if (comment.parent_id) { return ( <div key={comment.id}> <div>{comment.text}</div> <div> {sortedComments.map(childComment => { if (childComment.parent_id === comment.id) { return ( <div key={childComment.id}> <div>{childComment.text}</div> </div> ); } })} </div> </div> ); } else { return ( <div key={comment.id}> <div>{comment.text}</div> </div> ); } })} </div> );
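A slightly different sketch from the answer above: group the comments by parent_id once, then render each top-level comment with its children, which avoids the nested map passes. The Comment type and the CommentList component name are made up for the example.

import React from 'react';

type Comment = { id: string; parent_id?: string; text: string };

function CommentList({ comments }: { comments: Comment[] }) {
  // Group child comments under their parent id in a single pass.
  const childrenByParent = new Map<string, Comment[]>();
  for (const c of comments) {
    if (c.parent_id) {
      const siblings = childrenByParent.get(c.parent_id) ?? [];
      siblings.push(c);
      childrenByParent.set(c.parent_id, siblings);
    }
  }

  // Top-level comments are the ones without a parent_id.
  return (
    <div>
      {comments.filter(c => !c.parent_id).map(parent => (
        <div key={parent.id}>
          <div>{parent.text}</div>
          <div style={{ marginLeft: 16 }}>
            {(childrenByParent.get(parent.id) ?? []).map(child => (
              <div key={child.id}>{child.text}</div>
            ))}
          </div>
        </div>
      ))}
    </div>
  );
}

// Usage: <CommentList comments={data.comments} />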
Mapping json data that have relation values inside
I'm having difficulties with how I should approach rendering this JSON data that has relation values inside. { "data":{ "comments":[ { "id":"1", "text":"Data 1" }, { "id":"2", "parent_id":"1", "text":"Data 2 with relation to Data 1" }, { "id":"3", "text":"Data 3" } ] } } And it should look something like this in my head:
[ "Here is an example of how this could be done:\nconst sortedComments = data.comments.sort((a, b) => a.parent_id - b.parent_id);\n\nreturn (\n <div>\n {sortedComments.map(comment => {\n if (comment.parent_id) {\n return (\n <div key={comment.id}>\n <div>{comment.text}</div>\n <div>\n {sortedComments.map(childComment => {\n if (childComment.parent_id === comment.id) {\n return (\n <div key={childComment.id}>\n <div>{childComment.text}</div>\n </div>\n );\n }\n })}\n </div>\n </div>\n );\n } else {\n return (\n <div key={comment.id}>\n <div>{comment.text}</div>\n </div>\n );\n }\n })}\n </div>\n);\n\n" ]
[ 0 ]
[]
[]
[ "json", "reactjs" ]
stackoverflow_0074660605_json_reactjs.txt
Q: the source file is different from when the module was built This is driving me crazy. I have a rather large project that I am trying to modify. I noticed earlier that when I typed DbCommand, visual studio did not do any syntax highlighting on it, and I am using using System.Data.Common. Even though nothing was highlighted, the project seemed to be running fine in my browser. So I decided to run the debugger to see if things were really working as they should be. Every time the class that didn't do the highlighting is called I get the "the source file is different from when the module was built" message. I cleaned the solution and rebuilt it several times, deleted tmp files, followed all the directions here Getting "The source file is different from when the module was built.", restarted the web server and still it tells me the source files are different when they clearly are not. I cannot test any of the code I have written today because of this. How can the source be different than the binary when I just complied it? Is there any way to knock some sense into visual studio, or am I just missing something? A: I got this issue running a console app where the source that was different was the source that had the entry-point (static void Main). Deleting the bin and obj directories and doing a full rebuild seemed to correct this, but every time I made a code change, it would go out-of-date again. The reason I found for this was: I had checked "Only build startup projects and dependencies on Run" (Tools -> Options -> Projects and Solutions -> Build and Run) In Configuration Manager, my start-up project didn't have "Build" checked (For #2 -> accessible via the toolbar under the 'Debug/Release' drop down list.) A: I was just having this same problem, my projects were all in the same solution so they were using Project to Project references, so as one changed the others should have been updated. However it was not the case, I tried to build, rebuild, close VS2010, pulled a new copy from our source control. None of this worked, what I finally ended up trying was right clicking on the project and rebuilding each project individually. That updated the .dlls and .pdb files so I could debug through. The issue here is that your dll and or your pdb files are not in sync. A: Follow these steps Just delete the bin directory from the project where the DLL is generated. Re-build the project. Remove reference from the project that make reference to the DLL. Include again the reference. Enjoy. A: Some things for you to check: Have you double checked your project references? Do you have a Visual Studio started web server still running? Check the system tray and look for a page with a cog icon (you may have more than one): (source: msdn.com) Right click and close/exit it. You may have more than one. Can you debug your changes now? Are you running the debug version but have only built the release version (or vice versa)? Did the compile actually succeed? I know I've clicked through the "there were errors, do you want to continue anyway?" message a couple of times without realising. A: In addition to these answers I had the same issue while replacing new DLLs with old ones because of the wrong path. If you are still getting this error you may not refer the wrong path for the DLLs. Go to IIS manager and click the website which uses your DLLs. On the right window click Advanced Settings and go to path of the Physical Path folder on File Explorer and be sure that you are using this folder to replace your DLLs. 
A: With web services, the problem can be caused by using the Visual Studio "View in Browser" command. This places the service's DLL and PDB files in the bin and obj folders. When stepping into the web service from a client, somehow Visual Studio uses the PDB in the bin (or obj) folder, but it uses the DLL in the project's output build folder. There are a couple workarounds: Try deleting the DLL and PDB files in the web service bin and obj files. Try clicking "View in Browser" in Visual Studio. If you previously got the source file mismatch error, Visual Studio might have added the filename to a black list. Check your solution properties. Choose "Common Properties -> Debug Source Files" on the left side of the dialog box. If your web service source files appear in the field "Do not look for these source files", delete them. A: Unload the project that has the file that is causing the error. Reload the project. Fixed A: I just had this issue. I tried all the above, but only this worked: delete the .pdb file for the solution. delete the offending .obj files (for the file being reported out of sync) build the solution. This fixed the issue for all builds moving forward for me. A: In Visual Studio 2017 deleting the hidden .vs folder in the resolved this issue for me. A: This is how I fixed the problem in Visual Studio 2010: 1) Change the 'Solutions Configurations' option from "Debug" to "Release" 2) Start debugging 3) Stop debugging and switch the 'Solutions Configurations' option back to "Debug" This worked for me. Step 3 is optional - it was working fine when I changed it to "Release" but I wanted to change it back. A: My solution: I had included an existing project from a different solution in a new solution file. I did not notice that when the existing project was rebuilt, it was putting the final output into the NEW solution's output directory. I had a linker path defined to look into the OLD solution's output directory. Switching my project to search in the new solution's output directory fixed this issue for me. A: I had this problem, and it turns out I was running my console application as a windows application. Switching the output type back to console fixed the issue. A: I had the same problem. To fix it I used the "Release Mode" to debug in VS2013. Which is sufficient for me, because I'm working in a node js\c++ addon. A: My problem was that I had two projects in my solution. The second one was a test project used to call the first one. I had picked the path to the references from the bin folder's release folder. So whenever I made a change to the first project's code and rebuilt it, it would update the dlls in the debug folder but the calling project was pointing to the release folder, giving me the error, "the source file is different from when the module was built." Once I deleted the reference to the main project's dll in the release folder and set it to the dll in the debug folder, the issue went away. A: In my case, the @Eliott's answer doesn't work. To solve this problem I had Exclude/Include From Project my deficient file, andalso Clean and Rebuild the solution. After these actions, my file with my last modifications and the debugger are restored. I hope this help. 
A: solution:- the problem is:- if your some projects in a solution , refer to some other projects, then sometimes the dll of some projects, will not update automatically, whenever you build the solution, some projects will have previous build dlls, not latest dlls you have to go manually and copy the dll of latest build project into referenced project A: I was using Visual Studio 2013 and I had an existing project under source control. I had downloaded a fresh copy from source control to a new directory. After making changes to the fresh copy, when building I received the error in question. My solution: 1) Open Documents\IISExpress\config\applicationhost.config 2) Update virtualDirectory node with directory to the fresh copy and save. A: My problem was that I had a webservice in the project and I changed the build path. Restoring the default build path solved my issue. A: I had this same problem and I followed the majority of the guidance in the other answers posted here, nothing seemed to work for me. I eventually opened IIS and recycled the application pool for my web application. I have IIS version 8.5.9600, I right-clicked my web application, then: Deploy > Recycle > Recycle application pool > OK. That seems to have fixed it, breakpoints now being hit as expected. I think that doing this along with deleting the bin and obj folders helped my situation. Good luck! A: I know this is an old question but I just had the same problem and wanted to post here in case it helps someone else. I got a new computer and the IT dept merged my old computer with the new one. When I set up TFS, I mapped a different local path than what I was previously using, to an additional internal drive. The old path still existed from the merged data on my hard drive so I could still build and run. My IIS paths were also pointing to the old directory. Once I updated IIS to the correct path, I was able to debug just fine. I also deleted the old directory for good measure. A: I also experienced that. I just open the obj folder on the project and then open the debug folder delete the .pdb file and that's all. A: This error also happens if you try to make changes to a source file that is not part of the project. I was debugging a method from a .dll of another one of my projects, where Visual Studio had quite helpfully loaded the source because the .dll had been built on the same machine and it knew the path to the source. Obviously, changing such a file isn't going to do anything unless you rebuild the referenced project. A: Delete all breakpoints. Rebuild. Done A: At Visual Studio 2015, using C++, what fixed for me the the source file is different from when the module was built problem was restart Visual Studio. A: Check if the location you pointed to using mex() in Matlab is correct (contains lib and obj files which are modified to the last date you compiled the library in Visual studio). If this is not the case: Make sure you are compiling Visual studio in a mode that saves .lib files : properties -> Config properties -> General -> Config type -> static library properties -> Config properties -> General -> Target extension=.lib (instead of exe) Make sure the output and intermediate directories match the Matlab directory in properties -> Config properties -> General -> Output directory properties -> Config properties -> General -> Intermediate directory A: I get this issue when debugging sometimes w/ Visual Studio but when the application is served by IIS. 
(we have to develop in this form for some complicated reasons that have to do with how the original developer setup this project.) When I change the file and rebuild, that fixes it a lot of the time. I know that sounds silly, but I was just trying to debug some code to see why it's doing something weird when I haven't changed it in a while, and I tried a dozen things from this page, but it was fixed just by changing the file.. A: In my case, the problem was that the debugger exe path was pointing to a net5.0 bin folder. I am using net6.0, so I should've updated the exe path back when I updated the target framework. Works fine now.
the source file is different from when the module was built
This is driving me crazy. I have a rather large project that I am trying to modify. I noticed earlier that when I typed DbCommand, visual studio did not do any syntax highlighting on it, and I am using using System.Data.Common. Even though nothing was highlighted, the project seemed to be running fine in my browser. So I decided to run the debugger to see if things were really working as they should be. Every time the class that didn't do the highlighting is called I get the "the source file is different from when the module was built" message. I cleaned the solution and rebuilt it several times, deleted tmp files, followed all the directions here Getting "The source file is different from when the module was built.", restarted the web server and still it tells me the source files are different when they clearly are not. I cannot test any of the code I have written today because of this. How can the source be different from the binary when I just compiled it? Is there any way to knock some sense into visual studio, or am I just missing something?
[ "I got this issue running a console app where the source that was different was the source that had the entry-point (static void Main). Deleting the bin and obj directories and doing a full rebuild seemed to correct this, but every time I made a code change, it would go out-of-date again.\nThe reason I found for this was:\n\nI had checked \"Only build startup projects and dependencies on Run\" (Tools -> Options -> Projects and Solutions -> Build and Run)\nIn Configuration Manager, my start-up project didn't have \"Build\" checked\n\n(For #2 -> accessible via the toolbar under the 'Debug/Release' drop down list.)\n", "I was just having this same problem, my projects were all in the same solution so they were using Project to Project references, so as one changed the others should have been updated. However it was not the case, I tried to build, rebuild, close VS2010, pulled a new copy from our source control. None of this worked, what I finally ended up trying was right clicking on the project and rebuilding each project individually. That updated the .dlls and .pdb files so I could debug through.\nThe issue here is that your dll and or your pdb files are not in sync.\n", "Follow these steps\n\nJust delete the bin directory from the project where the DLL is generated. \nRe-build the project.\nRemove reference from the project that make reference to the DLL.\nInclude again the reference.\nEnjoy.\n\n", "Some things for you to check:\nHave you double checked your project references?\nDo you have a Visual Studio started web server still running? Check the system tray and look for a page with a cog icon (you may have more than one):\n\n(source: msdn.com) \nRight click and close/exit it. You may have more than one. Can you debug your changes now?\nAre you running the debug version but have only built the release version (or vice versa)?\nDid the compile actually succeed? I know I've clicked through the \"there were errors, do you want to continue anyway?\" message a couple of times without realising.\n", "In addition to these answers I had the same issue while replacing new DLLs with old ones because of the wrong path. If you are still getting this error you may not refer the wrong path for the DLLs. Go to IIS manager and click the website which uses your DLLs. On the right window click Advanced Settings and go to path of the Physical Path folder on File Explorer and be sure that you are using this folder to replace your DLLs. \n", "With web services, the problem can be caused by using the Visual Studio \"View in Browser\" command. This places the service's DLL and PDB files in the bin and obj folders. When stepping into the web service from a client, somehow Visual Studio uses the PDB in the bin (or obj) folder, but it uses the DLL in the project's output build folder. There are a couple workarounds:\n\nTry deleting the DLL and PDB files in the web service bin and obj files.\nTry clicking \"View in Browser\" in Visual Studio.\n\nIf you previously got the source file mismatch error, Visual Studio might have added the filename to a black list. Check your solution properties. Choose \"Common Properties -> Debug Source Files\" on the left side of the dialog box. 
If your web service source files appear in the field \"Do not look for these source files\", delete them.\n", "Unload the project that has the file that is causing the error.\nReload the project.\nFixed\n", "I just had this issue.\nI tried all the above, but only this worked:\n\ndelete the .pdb file for the solution.\ndelete the offending .obj files (for the file being reported out of sync)\n\nbuild the solution.\nThis fixed the issue for all builds moving forward for me.\n", "In Visual Studio 2017 deleting the hidden .vs folder in the resolved this issue for me.\n", "This is how I fixed the problem in Visual Studio 2010:\n1) Change the 'Solutions Configurations' option from \"Debug\" to \"Release\"\n2) Start debugging \n3) Stop debugging and switch the 'Solutions Configurations' option back to \"Debug\"\nThis worked for me. Step 3 is optional - it was working fine when I changed it to \"Release\" but I wanted to change it back.\n", "My solution:\nI had included an existing project from a different solution in a new solution file.\nI did not notice that when the existing project was rebuilt, it was putting the final output into the NEW solution's output directory. I had a linker path defined to look into the OLD solution's output directory.\nSwitching my project to search in the new solution's output directory fixed this issue for me.\n", "I had this problem, and it turns out I was running my console application as a windows application. Switching the output type back to console fixed the issue.\n", "I had the same problem. To fix it I used the \"Release Mode\" to debug in VS2013. Which is sufficient for me, because I'm working in a node js\\c++ addon.\n", "My problem was that I had two projects in my solution. The second one was a test project used to call the first one. I had picked the path to the references from the bin folder's release folder.\nSo whenever I made a change to the first project's code and rebuilt it, it would update the dlls in the debug folder but the calling project was pointing to the release folder, giving me the error, \"the source file is different from when the module was built.\"\nOnce I deleted the reference to the main project's dll in the release folder and set it to the dll in the debug folder, the issue went away.\n", "In my case, the @Eliott's answer doesn't work. \nTo solve this problem I had Exclude/Include From Project my deficient file, andalso Clean and Rebuild the solution.\nAfter these actions, my file with my last modifications and the debugger are restored.\nI hope this help.\n", "solution:-\n the problem is:-\nif your some projects in a solution , refer to some other projects,\nthen sometimes the dll of some projects, will not update automatically, whenever you build the solution,\nsome projects will have previous build dlls, not latest dlls\nyou have to go manually and copy the dll of latest build project into referenced project\n", "I was using Visual Studio 2013 and I had an existing project under source control.\nI had downloaded a fresh copy from source control to a new directory.\nAfter making changes to the fresh copy, when building I received the error in question. 
\nMy solution:\n1) Open Documents\\IISExpress\\config\\applicationhost.config\n2) Update virtualDirectory node with directory to the fresh copy and save.\n", "My problem was that I had a webservice in the project and I changed the build path.\nRestoring the default build path solved my issue.\n", "I had this same problem and I followed the majority of the guidance in the other answers posted here, nothing seemed to work for me.\nI eventually opened IIS and recycled the application pool for my web application. I have IIS version 8.5.9600, I right-clicked my web application, then: Deploy > Recycle > Recycle application pool > OK.\nThat seems to have fixed it, breakpoints now being hit as expected. I think that doing this along with deleting the bin and obj folders helped my situation.\nGood luck!\n", "I know this is an old question but I just had the same problem and wanted to post here in case it helps someone else. I got a new computer and the IT dept merged my old computer with the new one. When I set up TFS, I mapped a different local path than what I was previously using, to an additional internal drive. The old path still existed from the merged data on my hard drive so I could still build and run. My IIS paths were also pointing to the old directory. Once I updated IIS to the correct path, I was able to debug just fine. I also deleted the old directory for good measure.\n", "I also experienced that. I just open the obj folder on the project and then open the debug folder delete the .pdb file and that's all.\n", "This error also happens if you try to make changes to a source file that is not part of the project.\nI was debugging a method from a .dll of another one of my projects, where Visual Studio had quite helpfully loaded the source because the .dll had been built on the same machine and it knew the path to the source. Obviously, changing such a file isn't going to do anything unless you rebuild the referenced project.\n", "\nDelete all breakpoints.\nRebuild.\nDone\n\n", "At Visual Studio 2015, using C++, what fixed for me the the source file is different from when the module was built problem was \n\nrestart Visual Studio.\n\n", "Check if the location you pointed to using mex() in Matlab is correct (contains lib and obj files which are modified to the last date you compiled the library in Visual studio). \nIf this is not the case:\nMake sure you are compiling Visual studio in a mode that saves .lib files :\n\nproperties -> Config properties -> General -> Config type -> static library\nproperties -> Config properties -> General -> Target extension=.lib (instead of exe)\n\nMake sure the output and intermediate directories match the Matlab directory in \n\nproperties -> Config properties -> General -> Output directory\nproperties -> Config properties -> General -> Intermediate directory\n\n", "I get this issue when debugging sometimes w/ Visual Studio but when the application is served by IIS. (we have to develop in this form for some complicated reasons that have to do with how the original developer setup this project.)\nWhen I change the file and rebuild, that fixes it a lot of the time. I know that sounds silly, but I was just trying to debug some code to see why it's doing something weird when I haven't changed it in a while, and I tried a dozen things from this page, but it was fixed just by changing the file..\n", "In my case, the problem was that the debugger exe path was pointing to a net5.0 bin folder. 
I am using net6.0, so I should've updated the exe path back when I updated the target framework. Works fine now.\n" ]
[ 123, 28, 6, 4, 4, 3, 3, 2, 2, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 ]
[ "Debug-> start without debugging. \nThis option worked for me. Hope this helps!\n" ]
[ -1 ]
[ "c#", "visual_studio_2008" ]
stackoverflow_0003087390_c#_visual_studio_2008.txt
Q: How to convert this pseudocode to code in python I am new to coding and i can't figure out a way to convert this pseudocode to actual code in python especially the total number of dice part. I want to calculate total number of green dice in a dice stack of different colours. Y4 G6 R3 G2 W2 W1 where, Y4 = yellow dice with face value 4 R3 = red dice with face value 2 W2 = white dice with face value 2 G6 = green dice with face value 6 G2 = green dice with face value 2 W1 = white dice with face value 1 score = 0 if total number of green dice is 1 score = 2 if total number of green dice is 2 score = 5 if total number of green dice is 3 score = 10 if total number of green dice is 4 score = 15 if total number of green dice is 5 score = 20 if total number of green dice is 6 score = 30 return score A: Welcome to the community. I will help get you started and provide some references. To be honest however, when you post on here, you typically want to be as descriptive as possible. Don't be afraid to list the guidelines for your assignment or go into more exact detail on what problems you are having. I know it is probably confusing and can be hard to explain, but the more quality information you give to the professionals on here, the better they can understand what you are trying to accomplish and help you in the best way possible. With that said, I do not know exactly what techniques you are being asked to perform here or completely what you are trying to accomplish. However, based on the existing information, I will give you an idea of what techniques and implementations you may have to use. First, let's start on the section you commented you are struggling with, the total number of dice. Total number of dice should be represented as a variable such as the score variable at the top of your example (score = 0). I am unsure based on what you have what the total number of dice should be, so you will have to format that value in. What you need to focus on now though is using proper naming conventions for variables. A good naming convention usually follows a specific style whether it be underscore or camel case. It should also not be overly long, but be descriptive enough that you understand what the variable is. Most recommend a variable name to be between 8 to 20 characters long. Underscore example: new_member Camel case example: newMember From my experience, most Python programmers use underscores while Java and C++ programmers use camel case. With the underscore, you separate words such as "new" and "member" with an underscore and keep them lowercase in almost all situations. In camel case, the words are joined together but the first character in each word except for the very first one are capitalized. Here is probably the best guide to get you started and is followed by most organizations: https://peps.python.org/pep-0008/#naming-conventions Okay, now let's focus on the if statements or as some call them: conditional statements. Conditional/if statements can vary in format from one language to another, but Python conditional statements typically follow this structure: if variable1 condition variable2: As you can see, you start out with the word "if" and keep it lowercase. Having it uppercase will cause an error as compilers are mostly case sensitive. Following that you write a variable though later you will be able to use different things in these statements. Your variable would be what you decided to name the total number of dice. 
Here is an example: if new_member condition variable2: Next, you need to define the condition. Conditions usually are made up of equality operators (== and !=) and relational operators (< and >). If the condition is proven true, then whatever code you have in the block (the indented lines below your if statement) will execute. So in your first conditional statement, the number of dice would need to equal 1 for the score to be set to 2. If the statement is not true, the if statement does not execute. For example, let's say the total number of dice is 3. That means the if statement saying the total number of dice is 1 or 2 will not execute. However, the line that says that the total number of dice equals 3 will execute and assign score with the value of 10. Here is a resource on how these conditions work: https://www.programiz.com/python-programming/if-elif-else Now, because your pseudocode says "is" in its if statements, you will want to use == (== means is equal to) like this: if new_member == variable2: To finish off, you put the other variable you are comparing against. In your example variable2 would just be the number 1 for your first if statement. Typically, you want to use variables as writing in numbers or strings without first declaring them to a variable is bad practice. However, it is fine since you are learning. Here is the finished example: if new_member == 'High_spectre1408': print('Welcome to the community!') Lastly, I noticed you have return statements. These are typically used for classes or functions. Functions and classes are typically used to modify or output the values of variables and other things. They make code easier to read and make it so you don't have to write the same thing over and over again. You set up a function like this: def my_func(): The word 'def' is short for definition, meaning you are defining the function or what it is supposed to do. Next, you put the name of your function. It can be anything you want it to be, but it should follow proper naming conventions like variables. Like variables, they start with a lowercase letter. The reference to naming conventions I provided should give more insight on naming them too. However, whatever you do name them should be followed with by (). Here is another example: def newcomer_greeting(): Now what you can do in the () is include variables in your function that weren't part of them before. These are called parameters and can look like this: def newcomer_greeting(new_member): You can have no parameters, 1 parameter, or many parameters. With that said, let's look at an example of a definition: def newcomer_greeting(new_member): if new_member == 'High_spectre1408': print('Welcome to the community!') return Now to finish, you must call the function with the specified number of parameters in your main section. It will look like this: def newcomer_greeting(new_member): if new_member == 'High_spectre1408': print('Welcome to the community!') return new_member = 'High_spectre1408' newcomer_greeting(new_member) I was a little vague on functions compared to the last 2 concepts, but it is something considered more adept than conditional statements and variables. Instead, I will provide a reference that will save me the time of writing a book and hopefully give you more insight: https://www.programiz.com/python-programming/function Don't get discouraged and feel free to comment again if you are still having difficulties or need more examples. Best of luck!
How to convert this pseudocode to code in python
I am new to coding and i can't figure out a way to convert this pseudocode to actual code in python especially the total number of dice part. I want to calculate total number of green dice in a dice stack of different colours. Y4 G6 R3 G2 W2 W1 where, Y4 = yellow dice with face value 4 R3 = red dice with face value 2 W2 = white dice with face value 2 G6 = green dice with face value 6 G2 = green dice with face value 2 W1 = white dice with face value 1 score = 0 if total number of green dice is 1 score = 2 if total number of green dice is 2 score = 5 if total number of green dice is 3 score = 10 if total number of green dice is 4 score = 15 if total number of green dice is 5 score = 20 if total number of green dice is 6 score = 30 return score
[ "Welcome to the community. I will help get you started and provide some references. To be honest however, when you post on here, you typically want to be as descriptive as possible. Don't be afraid to list the guidelines for your assignment or go into more exact detail on what problems you are having. I know it is probably confusing and can be hard to explain, but the more quality information you give to the professionals on here, the better they can understand what you are trying to accomplish and help you in the best way possible. With that said, I do not know exactly what techniques you are being asked to perform here or completely what you are trying to accomplish. However, based on the existing information, I will give you an idea of what techniques and implementations you may have to use.\nFirst, let's start on the section you commented you are struggling with, the total number of dice. Total number of dice should be represented as a variable such as the score variable at the top of your example (score = 0). I am unsure based on what you have what the total number of dice should be, so you will have to format that value in. What you need to focus on now though is using proper naming conventions for variables.\nA good naming convention usually follows a specific style whether it be underscore or camel case. It should also not be overly long, but be descriptive enough that you understand what the variable is. Most recommend a variable name to be between 8 to 20 characters long.\nUnderscore example: new_member\nCamel case example: newMember\n\nFrom my experience, most Python programmers use underscores while Java and C++ programmers use camel case. With the underscore, you separate words such as \"new\" and \"member\" with an underscore and keep them lowercase in almost all situations. In camel case, the words are joined together but the first character in each word except for the very first one are capitalized.\nHere is probably the best guide to get you started and is followed by most organizations: https://peps.python.org/pep-0008/#naming-conventions\nOkay, now let's focus on the if statements or as some call them: conditional statements. Conditional/if statements can vary in format from one language to another, but Python conditional statements typically follow this structure:\nif variable1 condition variable2:\n\nAs you can see, you start out with the word \"if\" and keep it lowercase. Having it uppercase will cause an error as compilers are mostly case sensitive. Following that you write a variable though later you will be able to use different things in these statements. Your variable would be what you decided to name the total number of dice. Here is an example:\nif new_member condition variable2:\n\nNext, you need to define the condition. Conditions usually are made up of equality operators (== and !=) and relational operators (< and >). If the condition is proven true, then whatever code you have in the block (the indented lines below your if statement) will execute. So in your first conditional statement, the number of dice would need to equal 1 for the score to be set to 2. If the statement is not true, the if statement does not execute. For example, let's say the total number of dice is 3. That means the if statement saying the total number of dice is 1 or 2 will not execute. 
However, the line that says that the total number of dice equals 3 will execute and assign score with the value of 10.\nHere is a resource on how these conditions work: https://www.programiz.com/python-programming/if-elif-else\nNow, because your pseudocode says \"is\" in its if statements, you will want to use == (== means is equal to) like this:\nif new_member == variable2:\n\nTo finish off, you put the other variable you are comparing against. In your example variable2 would just be the number 1 for your first if statement. Typically, you want to use variables as writing in numbers or strings without first declaring them to a variable is bad practice. However, it is fine since you are learning. Here is the finished example:\nif new_member == 'High_spectre1408':\n print('Welcome to the community!')\n\nLastly, I noticed you have return statements. These are typically used for classes or functions. Functions and classes are typically used to modify or output the values of variables and other things. They make code easier to read and make it so you don't have to write the same thing over and over again. You set up a function like this:\ndef my_func():\n\nThe word 'def' is short for definition, meaning you are defining the function or what it is supposed to do. Next, you put the name of your function. It can be anything you want it to be, but it should follow proper naming conventions like variables. Like variables, they start with a lowercase letter. The reference to naming conventions I provided should give more insight on naming them too. However, whatever you do name them should be followed with by (). Here is another example:\ndef newcomer_greeting():\n\nNow what you can do in the () is include variables in your function that weren't part of them before. These are called parameters and can look like this:\ndef newcomer_greeting(new_member):\n\nYou can have no parameters, 1 parameter, or many parameters. With that said, let's look at an example of a definition:\ndef newcomer_greeting(new_member):\n if new_member == 'High_spectre1408':\n print('Welcome to the community!')\n\n return\n\nNow to finish, you must call the function with the specified number of parameters in your main section. It will look like this:\ndef newcomer_greeting(new_member):\n if new_member == 'High_spectre1408':\n print('Welcome to the community!')\n\n return\n\nnew_member = 'High_spectre1408'\nnewcomer_greeting(new_member)\n\nI was a little vague on functions compared to the last 2 concepts, but it is something considered more adept than conditional statements and variables. Instead, I will provide a reference that will save me the time of writing a book and hopefully give you more insight: https://www.programiz.com/python-programming/function\nDon't get discouraged and feel free to comment again if you are still having difficulties or need more examples. Best of luck!\n" ]
[ 0 ]
[ "I think you should not be asking someone to do your code.\nI will try to help Python version if you mean this\nclass Dice(object):\n\n def __init__(self, dice_list):\n self.dice_list = dice_list\n\n def score(self):\n total_number_of_dice = 0\n for dice in self.dice_list:\n total_number_of_dice = total_number_of_dice + dice\n if total_number_of_dice == 1:\n return 2\n if total_number_of_dice == 2:\n return 5\n if total_number_of_dice == 3:\n return 10\n if total_number_of_dice == 4:\n return 15\n if total_number_of_dice == 5:\n return 20\n if total_number_of_dice == 6:\n return 30\n return 0\n\n" ]
[ -2 ]
[ "algorithm", "dice", "pseudocode", "python", "python_3.x" ]
stackoverflow_0074660492_algorithm_dice_pseudocode_python_python_3.x.txt
Q: Automatically open Chrome developer tools when new tab/new window is opened I have HTML5 application which opens in a new window by clicking a link. I'm a bit tired of pressing Shift + I each time I want to logging network communication to launch Developer tools because I need it always. I was not able to find an option to keep Developer Tools always enabled on startup. Is there any way to open Developer tools automatically when new window is opened in Chrome? A: On opening the developer tools, with the developer tools window in focus, press F1. This will open a settings page. Check the "Auto-open DevTools for popups". This worked for me. A: UPDATE 2: See this answer . - 2019-11-05 You can also now have it auto-open Developer Tools in Pop-ups if they were open where you opened them from. For example, if you do not have Dev Tools open and you get a popup, it won't open with Dev Tools. But if you Have Dev Tools Open and then you click something, the popup will have Dev-Tools Automatically opened. UPDATE: Time has changed, you can now use --auto-open-devtools-for-tabs as in this answer – Wouter Huysentruit May 18 at 11:08 OP: I played around with the startup string for Chrome on execute, but couldn't get it to persist to new tabs. I also thought about a defined PATH method that you could invoke from prompt. This is possible with the SendKeys command, but again, only on a new instance. And DevTools doesn't persist to new tabs. Browsing the Google Product Forums, there doesn't seem to be a built-in way to do this in Chrome. You'll have to use a keystroke solution or F12 as mentioned above. I recommended it as a feature. I know I'm not the first either. A: There is a command line switch for this: --auto-open-devtools-for-tabs So for the properties on Google Chrome, use something like this: "C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --auto-open-devtools-for-tabs Here is a useful link: chromium-command-line-switches A: On a Mac: Quit Chrome, then run the following command in a terminal window: open -a "Google Chrome" --args --auto-open-devtools-for-tabs A: Under the Chrome DevTools settings you enable: Under Network -> Preserve Log Under DevTools -> Auto-open DevTools for popups A: With the Developer Tools window visible, click the menu icon (the three vertical dots in the top right corner) and click Settings. Under Dev Tools, check the Auto-open DevTools for popups option A: Answer for 2021: Open the Developer Tool (CTRL+SHIFT+I on Windows) Click the "Gear" icon. THe new Settings window will appear. "Auto-open DevTools for popups" is now under "Preferences" section. A: F12 is easier than Ctrl+Shift+I A: If you use Visual Studio Code (vscode), using the very popular vscode chrome debug extension (https://github.com/Microsoft/vscode-chrome-debug) you can setup a launch configuration file launch.json and specify to open the developer tool during a debug session. This the launch.json I use for my React projects : { "version": "0.2.0", "configurations": [ { "type": "chrome", "request": "launch", "name": "Launch Chrome against localhost", "url": "http://localhost:3000", "runtimeArgs": ["--auto-open-devtools-for-tabs"], "webRoot": "${workspaceRoot}/src" } ] } The important line is "runtimeArgs": ["--auto-open-devtools-for-tabs"], From vscode you can now type F5, Chrome opens your app and the console tab as well. 
A: Use --auto-open-devtools-for-tabs flag while running chrome from command line /Applications/Google\ Chrome.app/Contents/MacOS/Google\ Chrome --auto-open-devtools-for-tabs https://developers.google.com/web/tools/chrome-devtools/open#auto A: I came here looking for a similar solution. I really wanted to see the chrome output for the pageload from a new tab. (a form submission in my case) The solution I actually used was to modify the form target attribute so that the form submission would occur in the current tab. I was able to capture the network request. Problem Solved! A: Anyone looking to do this inside Visual Studio, this Code Project article will help. Just add "--auto-open-devtools-for-tabs" in the arguments box. Works on 2017. A: Yep,there is a way you could do it Right click->Inspect->sources(tab) Towards Your right there will be a "pause script execution button" I hope it helps!Peace P.S:This is for the first time.From the second time onwards the dev tool loads automatically A: For Windows: Right-click your Google Chrome shortcut Properties Change Target to: "C:\Program Files\Google\Chrome\Application\chrome.exe" --auto-open-devtools-for-tabs Click ok You will need to close all current chrome instances & end chrome processes in Task Manager. Or restart PC.
Automatically open Chrome developer tools when new tab/new window is opened
I have an HTML5 application which opens in a new window by clicking a link. I'm a bit tired of pressing Shift + I each time I want to launch Developer Tools to log network communication, because I need it always. I was not able to find an option to keep Developer Tools always enabled on startup. Is there any way to open Developer Tools automatically when a new window is opened in Chrome?
[ "On opening the developer tools, with the developer tools window in focus, press F1. This will open a settings page. Check the \"Auto-open DevTools for popups\".\nThis worked for me.\n\n", "UPDATE 2: \nSee this answer . - 2019-11-05\nYou can also now have it auto-open Developer Tools in Pop-ups if they were open where you opened them from. For example, if you do not have Dev Tools open and you get a popup, it won't open with Dev Tools. But if you Have Dev Tools Open and then you click something, the popup will have Dev-Tools Automatically opened.\nUPDATE:\nTime has changed, you can now use --auto-open-devtools-for-tabs as in this answer – Wouter Huysentruit May 18 at 11:08\nOP:\nI played around with the startup string for Chrome on execute, but couldn't get it to persist to new tabs.\nI also thought about a defined PATH method that you could invoke from prompt. This is possible with the SendKeys command, but again, only on a new instance. And DevTools doesn't persist to new tabs.\nBrowsing the Google Product Forums, there doesn't seem to be a built-in way to do this in Chrome. You'll have to use a keystroke solution or F12 as mentioned above.\nI recommended it as a feature. I know I'm not the first either.\n", "There is a command line switch for this: --auto-open-devtools-for-tabs\nSo for the properties on Google Chrome, use something like this:\n\"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe\" --auto-open-devtools-for-tabs\n\nHere is a useful link:\nchromium-command-line-switches\n", "On a Mac: Quit Chrome, then run the following command in a terminal window:\nopen -a \"Google Chrome\" --args --auto-open-devtools-for-tabs\n\n", "Under the Chrome DevTools settings you enable: \nUnder Network -> Preserve Log\nUnder DevTools -> Auto-open DevTools for popups\n", "With the Developer Tools window visible, click the menu icon (the three vertical dots in the top right corner) and click Settings.\n\nUnder Dev Tools, check the Auto-open DevTools for popups option\n\n", "Answer for 2021:\n\nOpen the Developer Tool (CTRL+SHIFT+I on Windows)\nClick the \"Gear\" icon. THe new Settings window will appear.\n\n\n\n\"Auto-open DevTools for popups\" is now under \"Preferences\" section.\n\n\n", "F12 is easier than Ctrl+Shift+I\n", "If you use Visual Studio Code (vscode), using the very popular vscode chrome debug extension (https://github.com/Microsoft/vscode-chrome-debug) you can setup a launch configuration file launch.json and specify to open the developer tool during a debug session. \nThis the launch.json I use for my React projects :\n{\n \"version\": \"0.2.0\",\n \"configurations\": [\n {\n \"type\": \"chrome\",\n \"request\": \"launch\",\n \"name\": \"Launch Chrome against localhost\",\n \"url\": \"http://localhost:3000\",\n \"runtimeArgs\": [\"--auto-open-devtools-for-tabs\"],\n \"webRoot\": \"${workspaceRoot}/src\"\n }\n ]\n}\n\nThe important line is \"runtimeArgs\": [\"--auto-open-devtools-for-tabs\"],\nFrom vscode you can now type F5, Chrome opens your app and the console tab as well.\n", "Use --auto-open-devtools-for-tabs flag while running chrome from command line \n/Applications/Google\\ Chrome.app/Contents/MacOS/Google\\ Chrome --auto-open-devtools-for-tabs\nhttps://developers.google.com/web/tools/chrome-devtools/open#auto\n", "I came here looking for a similar solution. I really wanted to see the chrome output for the pageload from a new tab. 
(a form submission in my case) The solution I actually used was to modify the form target attribute so that the form submission would occur in the current tab. I was able to capture the network request. Problem Solved!\n", "Anyone looking to do this inside Visual Studio, this Code Project article will help. Just add \"--auto-open-devtools-for-tabs\" in the arguments box. Works on 2017.\n", "Yep,there is a way you could do it\n\nRight click->Inspect->sources(tab)\nTowards Your right there will be a \"pause script execution button\"\n\n\nI hope it helps!Peace\n\nP.S:This is for the first time.From the second time onwards the dev tool loads automatically\n\n", "For Windows:\n\nRight-click your Google Chrome shortcut\nProperties\nChange Target to: \"C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe\" --auto-open-devtools-for-tabs\nClick ok\nYou will need to close all current chrome instances & end chrome processes in Task Manager. Or restart PC.\n\n" ]
[ 350, 119, 119, 36, 24, 18, 11, 9, 9, 5, 4, 4, 1, 0 ]
[ "You can open dock view settings and adjust the window as you want.\nScreenshot attached.\n\n", "You can open Dev Tools (F12 in Chrome) in new window by clicking three, vertical dots on the right bottom corner, and choose Open as Separate Window.\n" ]
[ -3, -6 ]
[ "google_chrome", "google_chrome_devtools" ]
stackoverflow_0012212504_google_chrome_google_chrome_devtools.txt
Q: How to pass data separated to arrays with the same key name in an array in JS? I'm working with React js and Javascript. I have this object literal: artistList = { artist: [ 0:{name: 'Quirby', role: 'main'}, 1:{name: 'Jose', role: 'feature'} ], writer: [ 0:{name: 'Erik', role: 'publisher'}, 1:{name: 'Cairo', role: 'composer'}, 2:{name: 'Erik', role: 'song_writer'} ] } and i want to join the roles in an array of writers that have the same name like this: artistList = { artist: [ 0:{name: 'Quirby', role: 'main'}, 1:{name: 'Jose', role: 'feature'} ], writer: [ 0:{name: 'Erik', role: ['publisher', 'song_writer']}, 1:{name: 'Cairo', role: 'composer'} ] } any ideas? I want to join the roles of the same writer in a new array, i don't know how to check they have the same name and transform my array. A: To accomplish this, you can use the map and reduce methods in JavaScript to iterate through the objects in the writer array and create a new array with the desired format. Here is an example of how you could do this: const artistList = { artist: [ { name: "Quirby", role: "main" }, { name: "Jose", role: "feature" } ], writer: [ { name: "Erik", role: "publisher" }, { name: "Cairo", role: "composer" }, { name: "Erik", role: "song_writer" } ] }; // Use the map method to iterate through the writer array const newWriterArray = artistList.writer.map(writer => { // Use the reduce method to group the writer objects by name const roles = artistList.writer.reduce((acc, cur) => { if (cur.name === writer.name) { // If the names match, add the role to the array acc.push(cur.role); } return acc; }, []); // Return a new object with the name and the array of roles return { name: writer.name, roles }; }); // Update the writer array in the artistList object with the new array artistList.writer = newWriterArray; console.log(artistList); // Output: // { // artist: [ // { name: "Quirby", role: "main" }, // { name: "Jose", role: "feature" } // ], // writer: [ // { name: "Erik", roles: ["publisher", "song_writer"] }, // { name: "Cairo", roles: ["composer"] } // ] // } In the code above, we use the map method to iterate through the writer array in the artistList object. For each object in the array, we use the reduce method to group the objects by name and create a new array with the roles for that writer. Then we return a new object with the name and the array of roles. Finally, we update the writer array in the artistList object with the new array.
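For reference, an alternative to the map/reduce approach in the answer above is to collect roles into a Map keyed by writer name, so each writer appears exactly once and repeated scans of the writer list are avoided. This is a minimal TypeScript sketch, not code taken from the question or answers; the Credit/GroupedCredit type names and the groupWritersByName function name are illustrative only:

type Credit = { name: string; role: string };
type GroupedCredit = { name: string; role: string | string[] };

function groupWritersByName(writers: Credit[]): GroupedCredit[] {
  // Collect every role seen for a given name into one array.
  const rolesByName = new Map<string, string[]>();
  for (const { name, role } of writers) {
    const roles = rolesByName.get(name) ?? [];
    roles.push(role);
    rolesByName.set(name, roles);
  }
  // Keep a plain string when a writer has a single role and an array otherwise,
  // matching the desired shape shown in the question.
  return Array.from(rolesByName.entries()).map(([name, roles]) => ({
    name,
    role: roles.length === 1 ? roles[0] : roles,
  }));
}

Calling groupWritersByName(artistList.writer) with the data from the question would yield [{ name: 'Erik', role: ['publisher', 'song_writer'] }, { name: 'Cairo', role: 'composer' }], and artistList.writer can then be reassigned to that result.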
How to pass data separated to arrays with the same key name in an array in JS?
I'm working with React.js and JavaScript. I have this object literal: artistList = { artist: [ 0:{name: 'Quirby', role: 'main'}, 1:{name: 'Jose', role: 'feature'} ], writer: [ 0:{name: 'Erik', role: 'publisher'}, 1:{name: 'Cairo', role: 'composer'}, 2:{name: 'Erik', role: 'song_writer'} ] } and I want to join the roles in an array for writers that have the same name, like this: artistList = { artist: [ 0:{name: 'Quirby', role: 'main'}, 1:{name: 'Jose', role: 'feature'} ], writer: [ 0:{name: 'Erik', role: ['publisher', 'song_writer']}, 1:{name: 'Cairo', role: 'composer'} ] } Any ideas? I want to join the roles of the same writer into a new array, but I don't know how to check whether writers have the same name and transform my array accordingly.
[ "To accomplish this, you can use the map and reduce methods in JavaScript to iterate through the objects in the writer array and create a new array with the desired format.\nHere is an example of how you could do this:\nconst artistList = {\n artist: [\n { name: \"Quirby\", role: \"main\" },\n { name: \"Jose\", role: \"feature\" }\n ],\n writer: [\n { name: \"Erik\", role: \"publisher\" },\n { name: \"Cairo\", role: \"composer\" },\n { name: \"Erik\", role: \"song_writer\" }\n ]\n};\n\n// Use the map method to iterate through the writer array\nconst newWriterArray = artistList.writer.map(writer => {\n // Use the reduce method to group the writer objects by name\n const roles = artistList.writer.reduce((acc, cur) => {\n if (cur.name === writer.name) {\n // If the names match, add the role to the array\n acc.push(cur.role);\n }\n return acc;\n }, []);\n\n // Return a new object with the name and the array of roles\n return { name: writer.name, roles };\n});\n\n// Update the writer array in the artistList object with the new array\nartistList.writer = newWriterArray;\n\nconsole.log(artistList);\n// Output:\n// {\n// artist: [\n// { name: \"Quirby\", role: \"main\" },\n// { name: \"Jose\", role: \"feature\" }\n// ],\n// writer: [\n// { name: \"Erik\", roles: [\"publisher\", \"song_writer\"] },\n// { name: \"Cairo\", roles: [\"composer\"] }\n// ]\n// }\n\nIn the code above, we use the map method to iterate through the writer array in the artistList object. For each object in the array, we use the reduce method to group the objects by name and create a new array with the roles for that writer. Then we return a new object with the name and the array of roles. Finally, we update the writer array in the artistList object with the new array.\n" ]
[ 2 ]
[]
[]
[ "javascript", "join", "reactjs" ]
stackoverflow_0074660942_javascript_join_reactjs.txt
Q: JWT Authentication with NextJS I've searched around a bit, but have not found any clear, up-to-date, answers on this topic. I'm trying to implement JWT authentication in my NextJS application. The following is what I have so far. /login endpoint that will (1) check that the user/pass exists and is valid, and (2) create a JWT token based on a private RS256 key. Created a middleware layer to verify the JWT The creation of the JWT is fine - it works perfectly well reading the key from the file-system and signing a JWT. However, I've run into the problem of the middleware being unable to use node modules (fs and path) because of the edge runtime (read here). This makes it so I'm unable to read the public key from the FS. What is the proper way to verify a JWT token on every request? I've read that fetching from middleware is bad practice and should be avoided. All other reference on this topic (that I found) either uses a "secret" instead of a key (and can therefor be put into process.env and used in middleware) or glosses over the fact (1). Or should I just create a separate express application to handle JWT creation/verifying? A: What I do is add a file (_middleware.tsx) within pages. This ensures the file runs each page load but is not treated as a standard page. In this file lives an Edge function. If the user is not signed in (no JWT in cookies) and tries to access a protected page, he is immediately redirected to /signin before the server is even hit. import { NextResponse } from "next/server"; const signedInPages = ["/admin", "/members "]; export default function middleware(req) { if (signedInPages.find((p) => p === req.nextUrl.pathname)) { const { MY_TOKEN: token } = req.cookies; if (!token) { return NextResponse.redirect("/signin"); } } } A: If you signed the token from import jwt from "jsonwebtoken"; then you can use same jwt for verifying token inside a middleware function. you create _middleware.js inside pages directory and middleware will run for each request before request hits the endpoint: import { NextResponse } from "next/server"; import jwt from "jsonwebtoken"; export async function middleware(req, ev) { const token = req ? req.cookies?.token : null; let userId=null; if (token) { // this is how we sign= jwt.sign(object,secretKey) // now use the same secretKey to decode the token const decodedToken = jwt.verify(token, process.env.JWT_SECRET); userId = decodedToken?.issuer; } const { pathname } = req.nextUrl; // if user sends request to "/api/login", it has no token. so let the request pass if ( pathname.includes("/api/login") || userId ) { return NextResponse.next(); } if (!token && pathname !== "/login") { return NextResponse.redirect("/login"); } }
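One pattern that addresses the Edge-runtime constraint described in the question is to avoid fs entirely: expose the RS256 public key (SPKI PEM) through an environment variable and verify the token with a Web Crypto-based library such as jose, which can run in Next.js middleware. The sketch below is an illustration under those assumptions, not code from the answers above; JWT_PUBLIC_KEY and verifyAccessToken are hypothetical names:

import { importSPKI, jwtVerify, type JWTPayload } from "jose";

// Verifies an RS256-signed token without touching the file system,
// so it can be called from Next.js middleware (Edge runtime).
export async function verifyAccessToken(token: string): Promise<JWTPayload | null> {
  const pem = process.env.JWT_PUBLIC_KEY; // hypothetical env var holding the SPKI PEM
  if (!pem) return null;
  try {
    const publicKey = await importSPKI(pem, "RS256");
    const { payload } = await jwtVerify(token, publicKey);
    return payload;
  } catch {
    return null; // bad signature, expired token, malformed JWT, etc.
  }
}

Middleware can then read the token from a cookie or Authorization header, call verifyAccessToken, and redirect to the login page when it returns null, which keeps verification on every request without fetching from middleware.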
JWT Authentication with NextJS
I've searched around a bit, but have not found any clear, up-to-date answers on this topic. I'm trying to implement JWT authentication in my NextJS application. The following is what I have so far: a /login endpoint that will (1) check that the user/pass exists and is valid, and (2) create a JWT based on a private RS256 key; and a middleware layer to verify the JWT. The creation of the JWT is fine - it works perfectly well reading the key from the file system and signing a JWT. However, I've run into the problem of the middleware being unable to use Node modules (fs and path) because of the Edge runtime (read here). This makes it so I'm unable to read the public key from the file system. What is the proper way to verify a JWT on every request? I've read that fetching from middleware is bad practice and should be avoided. All other references on this topic (that I found) either use a "secret" instead of a key (and can therefore be put into process.env and used in middleware) or gloss over point (1). Or should I just create a separate Express application to handle JWT creation/verification?
[ "What I do is add a file (_middleware.tsx) within pages. This ensures the file runs each page load but is not treated as a standard page. In this file lives an Edge function. If the user is not signed in (no JWT in cookies) and tries to access a protected page, he is immediately redirected to /signin before the server is even hit.\nimport { NextResponse } from \"next/server\";\n\nconst signedInPages = [\"/admin\", \"/members \"];\n\nexport default function middleware(req) {\n if (signedInPages.find((p) => p === req.nextUrl.pathname)) {\n const { MY_TOKEN: token } = req.cookies;\n\n if (!token) {\n return NextResponse.redirect(\"/signin\");\n }\n }\n}\n\n\n", "If you signed the token from\nimport jwt from \"jsonwebtoken\";\n\nthen you can use same jwt for verifying token inside a middleware function. you create _middleware.js inside pages directory and middleware will run for each request before request hits the endpoint:\nimport { NextResponse } from \"next/server\";\nimport jwt from \"jsonwebtoken\";\n\nexport async function middleware(req, ev) {\n \n const token = req ? req.cookies?.token : null;\n\n let userId=null;\n if (token) {\n // this is how we sign= jwt.sign(object,secretKey)\n // now use the same secretKey to decode the token\n const decodedToken = jwt.verify(token, process.env.JWT_SECRET);\n userId = decodedToken?.issuer;\n }\n \n const { pathname } = req.nextUrl;\n // if user sends request to \"/api/login\", it has no token. so let the request pass\n if (\n pathname.includes(\"/api/login\") || userId\n ) {\n return NextResponse.next();\n }\n\n if (!token && pathname !== \"/login\") {\n return NextResponse.redirect(\"/login\");\n }\n}\n\n" ]
[ 0, 0 ]
[ "you send the token in the post request header.\n{Authorization: 'Bearer ' + token}\n\nAnd then verify that token on the server, before sending data back\nTo verify the token you create a middleware function, then use that function on any route you want to portect\nrouter.get(\"/\", middlewarefunction, function (req: any, res: any, next: any) {\n res.send(\"/api\")\n})\n\n" ]
[ -1 ]
[ "authentication", "jwt", "next.js", "node.js", "reactjs" ]
stackoverflow_0073431746_authentication_jwt_next.js_node.js_reactjs.txt
Q: How to create two columns of a dataframe from separate lines in a text file I have a text file where every other row either begins with "A" or "B" like this A810 WE WILDWOOD DR B20220901BROOKE A6223 AMHERST BAY B20221001SARAI How can I read the text file and create a two column pandas dataframe where the line beginning with "A" is a column and likewise for the "B", on a single row. Like this |A |B | |:------------------|:--------------| |A810 WE WILDWOOD DR|B20220901BROOKE| |:------------------|---------------| |A6223 AMHERST BAY |B20221001SARAI | |:------------------|---------------| A: You can approach this by using pandas.DataFrame.shift and pandas.DataFrame.join : from io import StringIO import pandas as pd s = """A810 WE WILDWOOD DR B20220901BROOKE A6223 AMHERST BAY B20221001SARAI """ df = pd.read_csv(StringIO(s), header=None, names=["A"]) #in your case, df = pd.read_csv("path_of_your_txtfile", header=None, names=["A"]) out = ( df .join(df.shift(-1).rename(columns= {"A": "B"})) .iloc[::2] .reset_index(drop=True) ) # Output : print(out) A B 0 A810 WE WILDWOOD DR B20220901BROOKE 1 A6223 AMHERST BAY B20221001SARAI A: What about using a pivot? col = df[0].str.extract('(.)', expand=False) out = (df .assign(col=col, idx=df.groupby(col).cumcount()) .pivot(index='idx', columns='col', values=0) .rename_axis(index=None, columns=None) ) Output: A B 0 A810 WE WILDWOOD DR B20220901BROOKE 1 A6223 AMHERST BAY B20221001SARAI A: Another possible solution, which only works if the strings alternate, regularly, between A and B, as the OP states: pd.DataFrame(df.values.reshape((-1, 2)), columns=list('AB')) Output: A B 0 A810 WE WILDWOOD DR B20220901BROOKE 1 A6223 AMHERST BAY B20221001SARAI
How to create two columns of a dataframe from separate lines in a text file
I have a text file where every other row begins with either "A" or "B", like this: A810 WE WILDWOOD DR B20220901BROOKE A6223 AMHERST BAY B20221001SARAI How can I read the text file and create a two-column pandas DataFrame where the line beginning with "A" is one column and likewise for the "B", on a single row? Like this: |A |B | |:------------------|:--------------| |A810 WE WILDWOOD DR|B20220901BROOKE| |A6223 AMHERST BAY |B20221001SARAI |
[ "You can approach this by using pandas.DataFrame.shift and pandas.DataFrame.join :\nfrom io import StringIO \nimport pandas as pd\n\ns = \"\"\"A810 WE WILDWOOD DR\nB20220901BROOKE\nA6223 AMHERST BAY\nB20221001SARAI\n\"\"\"\n\ndf = pd.read_csv(StringIO(s), header=None, names=[\"A\"])\n#in your case, df = pd.read_csv(\"path_of_your_txtfile\", header=None, names=[\"A\"])\n\nout = (\n df\n .join(df.shift(-1).rename(columns= {\"A\": \"B\"}))\n .iloc[::2]\n .reset_index(drop=True)\n )\n\n# Output :\nprint(out)\n A B\n0 A810 WE WILDWOOD DR B20220901BROOKE\n1 A6223 AMHERST BAY B20221001SARAI\n\n", "What about using a pivot?\ncol = df[0].str.extract('(.)', expand=False)\n\nout = (df\n .assign(col=col, idx=df.groupby(col).cumcount())\n .pivot(index='idx', columns='col', values=0)\n .rename_axis(index=None, columns=None)\n)\n\nOutput:\n A B\n0 A810 WE WILDWOOD DR B20220901BROOKE\n1 A6223 AMHERST BAY B20221001SARAI\n\n", "Another possible solution, which only works if the strings alternate, regularly, between A and B, as the OP states:\npd.DataFrame(df.values.reshape((-1, 2)), columns=list('AB'))\n\nOutput:\n A B\n0 A810 WE WILDWOOD DR B20220901BROOKE\n1 A6223 AMHERST BAY B20221001SARAI\n\n" ]
[ 2, 1, 1 ]
[]
[]
[ "pandas", "python", "text_files" ]
stackoverflow_0074660644_pandas_python_text_files.txt
Q: Pip subprocess error: No matching distribution found for matlabengineforpython==R2020b So, I'm trying to import an environment.yml file from one windows laptop to a windows pc. I enter the following command (conda env create -f environment.yml), and get the following error (at the end of the code). The imports fail when they reach the matlabengine package. Not sure why this is. Any thoughts? Thanks. C:\Software\srv569>conda env create -f environment.yml Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 22.9.0 latest version: 22.11.0 Please update conda by running $ conda update -n base -c defaults conda Preparing transaction: done Verifying transaction: done Executing transaction: done Installing pip dependencies: \ Ran pip subprocess with arguments: ['C:\\Software\\srv569\\Anaconda3\\envs\\research_projects\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\\Software\\srv569\\condaenv.0oaiyh7x.requirements.txt'] Pip subprocess output: Collecting absl-py==1.0.0 Using cached absl_py-1.0.0-py3-none-any.whl (126 kB) Collecting ansi2html==1.7.0 Using cached ansi2html-1.7.0-py3-none-any.whl (15 kB) Collecting argon2-cffi==21.3.0 Using cached argon2_cffi-21.3.0-py3-none-any.whl (14 kB) Collecting argon2-cffi-bindings==21.2.0 Using cached argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl (30 kB) Collecting asttokens==2.0.5 Using cached asttokens-2.0.5-py2.py3-none-any.whl (20 kB) Collecting astunparse==1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting attrs==21.4.0 Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB) Collecting backcall==0.2.0 Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB) Collecting beautifulsoup4==4.11.1 Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB) Collecting bleach==5.0.0 Using cached bleach-5.0.0-py3-none-any.whl (160 kB) Collecting brotli==1.0.9 Using cached Brotli-1.0.9-cp38-cp38-win_amd64.whl (365 kB) Collecting cachetools==5.1.0 Using cached cachetools-5.1.0-py3-none-any.whl (9.2 kB) Collecting cffi==1.15.0 Using cached cffi-1.15.0-cp38-cp38-win_amd64.whl (179 kB) Collecting charset-normalizer==2.0.12 Using cached charset_normalizer-2.0.12-py3-none-any.whl (39 kB) Collecting click==8.1.3 Using cached click-8.1.3-py3-none-any.whl (96 kB) Collecting cycler==0.11.0 Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB) Collecting dash==2.4.1 Using cached dash-2.4.1-py3-none-any.whl (9.8 MB) Collecting dash-core-components==2.0.0 Using cached dash_core_components-2.0.0-py3-none-any.whl (3.8 kB) Collecting dash-html-components==2.0.0 Using cached dash_html_components-2.0.0-py3-none-any.whl (4.1 kB) Collecting dash-table==5.0.0 Using cached dash_table-5.0.0-py3-none-any.whl (3.9 kB) Collecting debugpy==1.6.0 Using cached debugpy-1.6.0-cp38-cp38-win_amd64.whl (4.3 MB) Collecting decorator==5.1.1 Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB) Collecting defusedxml==0.7.1 Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB) Collecting entrypoints==0.4 Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB) Collecting executing==0.8.3 Using cached executing-0.8.3-py2.py3-none-any.whl (16 kB) Collecting fastjsonschema==2.15.3 Using cached fastjsonschema-2.15.3-py3-none-any.whl (22 kB) Collecting flask==2.1.2 Using cached Flask-2.1.2-py3-none-any.whl (95 kB) Collecting flask-compress==1.12 Using cached Flask_Compress-1.12-py3-none-any.whl (7.9 kB) Collecting flatbuffers==1.12 Using cached 
flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting fonttools==4.33.3 Using cached fonttools-4.33.3-py3-none-any.whl (930 kB) Collecting gast==0.4.0 Using cached gast-0.4.0-py3-none-any.whl (9.8 kB) Collecting glob2==0.7 Using cached glob2-0.7.tar.gz (10 kB) Collecting google-auth==2.6.6 Using cached google_auth-2.6.6-py2.py3-none-any.whl (156 kB) Collecting google-auth-oauthlib==0.4.6 Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB) Collecting google-pasta==0.2.0 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Collecting grpcio==1.46.3 Using cached grpcio-1.46.3-cp38-cp38-win_amd64.whl (3.5 MB) Collecting h5py==3.7.0 Using cached h5py-3.7.0-cp38-cp38-win_amd64.whl (2.6 MB) Collecting idna==3.3 Using cached idna-3.3-py3-none-any.whl (61 kB) Collecting imageio==2.19.3 Using cached imageio-2.19.3-py3-none-any.whl (3.4 MB) Collecting importlib-metadata==4.11.4 Using cached importlib_metadata-4.11.4-py3-none-any.whl (18 kB) Collecting importlib-resources==5.7.1 Using cached importlib_resources-5.7.1-py3-none-any.whl (28 kB) Collecting ipykernel==6.13.0 Using cached ipykernel-6.13.0-py3-none-any.whl (131 kB) Collecting ipython==8.3.0 Using cached ipython-8.3.0-py3-none-any.whl (750 kB) Collecting ipython-genutils==0.2.0 Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB) Collecting ipywidgets==7.7.0 Using cached ipywidgets-7.7.0-py2.py3-none-any.whl (123 kB) Collecting itsdangerous==2.1.2 Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB) Collecting jedi==0.18.1 Using cached jedi-0.18.1-py2.py3-none-any.whl (1.6 MB) Collecting jinja2==3.1.2 Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB) Collecting jsonschema==4.5.1 Using cached jsonschema-4.5.1-py3-none-any.whl (72 kB) Collecting jupyter==1.0.0 Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB) Collecting jupyter-client==7.3.1 Using cached jupyter_client-7.3.1-py3-none-any.whl (130 kB) Collecting jupyter-console==6.4.3 Using cached jupyter_console-6.4.3-py3-none-any.whl (22 kB) Collecting jupyter-core==4.10.0 Using cached jupyter_core-4.10.0-py3-none-any.whl (87 kB) Collecting jupyter-dash==0.4.2 Using cached jupyter_dash-0.4.2-py3-none-any.whl (23 kB) Collecting jupyterlab-pygments==0.2.2 Using cached jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB) Collecting jupyterlab-widgets==1.1.0 Using cached jupyterlab_widgets-1.1.0-py3-none-any.whl (245 kB) Collecting keras==2.9.0 Using cached keras-2.9.0-py2.py3-none-any.whl (1.6 MB) Collecting keras-preprocessing==1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Collecting kiwisolver==1.4.2 Using cached kiwisolver-1.4.2-cp38-cp38-win_amd64.whl (55 kB) Collecting libclang==14.0.1 Using cached libclang-14.0.1-py2.py3-none-win_amd64.whl (14.2 MB) Collecting markdown==3.3.7 Using cached Markdown-3.3.7-py3-none-any.whl (97 kB) Collecting markupsafe==2.1.1 Using cached MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl (17 kB) Pip subprocess error: ERROR: Could not find a version that satisfies the requirement matlabengineforpython==R2020b (from versions: none) ERROR: No matching distribution found for matlabengineforpython==R2020b failed CondaEnvException: Pip failed A: Have you looked at condaENVexception. Try updating conda and pip versions to latest.
Pip subprocess error: No matching distribution found for matlabengineforpython==R2020b
So, I'm trying to import an environment.yml file from one windows laptop to a windows pc. I enter the following command (conda env create -f environment.yml), and get the following error (at the end of the code). The imports fail when they reach the matlabengine package. Not sure why this is. Any thoughts? Thanks. C:\Software\srv569>conda env create -f environment.yml Collecting package metadata (repodata.json): done Solving environment: done ==> WARNING: A newer version of conda exists. <== current version: 22.9.0 latest version: 22.11.0 Please update conda by running $ conda update -n base -c defaults conda Preparing transaction: done Verifying transaction: done Executing transaction: done Installing pip dependencies: \ Ran pip subprocess with arguments: ['C:\\Software\\srv569\\Anaconda3\\envs\\research_projects\\python.exe', '-m', 'pip', 'install', '-U', '-r', 'C:\\Software\\srv569\\condaenv.0oaiyh7x.requirements.txt'] Pip subprocess output: Collecting absl-py==1.0.0 Using cached absl_py-1.0.0-py3-none-any.whl (126 kB) Collecting ansi2html==1.7.0 Using cached ansi2html-1.7.0-py3-none-any.whl (15 kB) Collecting argon2-cffi==21.3.0 Using cached argon2_cffi-21.3.0-py3-none-any.whl (14 kB) Collecting argon2-cffi-bindings==21.2.0 Using cached argon2_cffi_bindings-21.2.0-cp36-abi3-win_amd64.whl (30 kB) Collecting asttokens==2.0.5 Using cached asttokens-2.0.5-py2.py3-none-any.whl (20 kB) Collecting astunparse==1.6.3 Using cached astunparse-1.6.3-py2.py3-none-any.whl (12 kB) Collecting attrs==21.4.0 Using cached attrs-21.4.0-py2.py3-none-any.whl (60 kB) Collecting backcall==0.2.0 Using cached backcall-0.2.0-py2.py3-none-any.whl (11 kB) Collecting beautifulsoup4==4.11.1 Using cached beautifulsoup4-4.11.1-py3-none-any.whl (128 kB) Collecting bleach==5.0.0 Using cached bleach-5.0.0-py3-none-any.whl (160 kB) Collecting brotli==1.0.9 Using cached Brotli-1.0.9-cp38-cp38-win_amd64.whl (365 kB) Collecting cachetools==5.1.0 Using cached cachetools-5.1.0-py3-none-any.whl (9.2 kB) Collecting cffi==1.15.0 Using cached cffi-1.15.0-cp38-cp38-win_amd64.whl (179 kB) Collecting charset-normalizer==2.0.12 Using cached charset_normalizer-2.0.12-py3-none-any.whl (39 kB) Collecting click==8.1.3 Using cached click-8.1.3-py3-none-any.whl (96 kB) Collecting cycler==0.11.0 Using cached cycler-0.11.0-py3-none-any.whl (6.4 kB) Collecting dash==2.4.1 Using cached dash-2.4.1-py3-none-any.whl (9.8 MB) Collecting dash-core-components==2.0.0 Using cached dash_core_components-2.0.0-py3-none-any.whl (3.8 kB) Collecting dash-html-components==2.0.0 Using cached dash_html_components-2.0.0-py3-none-any.whl (4.1 kB) Collecting dash-table==5.0.0 Using cached dash_table-5.0.0-py3-none-any.whl (3.9 kB) Collecting debugpy==1.6.0 Using cached debugpy-1.6.0-cp38-cp38-win_amd64.whl (4.3 MB) Collecting decorator==5.1.1 Using cached decorator-5.1.1-py3-none-any.whl (9.1 kB) Collecting defusedxml==0.7.1 Using cached defusedxml-0.7.1-py2.py3-none-any.whl (25 kB) Collecting entrypoints==0.4 Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB) Collecting executing==0.8.3 Using cached executing-0.8.3-py2.py3-none-any.whl (16 kB) Collecting fastjsonschema==2.15.3 Using cached fastjsonschema-2.15.3-py3-none-any.whl (22 kB) Collecting flask==2.1.2 Using cached Flask-2.1.2-py3-none-any.whl (95 kB) Collecting flask-compress==1.12 Using cached Flask_Compress-1.12-py3-none-any.whl (7.9 kB) Collecting flatbuffers==1.12 Using cached flatbuffers-1.12-py2.py3-none-any.whl (15 kB) Collecting fonttools==4.33.3 Using cached fonttools-4.33.3-py3-none-any.whl 
(930 kB) Collecting gast==0.4.0 Using cached gast-0.4.0-py3-none-any.whl (9.8 kB) Collecting glob2==0.7 Using cached glob2-0.7.tar.gz (10 kB) Collecting google-auth==2.6.6 Using cached google_auth-2.6.6-py2.py3-none-any.whl (156 kB) Collecting google-auth-oauthlib==0.4.6 Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB) Collecting google-pasta==0.2.0 Using cached google_pasta-0.2.0-py3-none-any.whl (57 kB) Collecting grpcio==1.46.3 Using cached grpcio-1.46.3-cp38-cp38-win_amd64.whl (3.5 MB) Collecting h5py==3.7.0 Using cached h5py-3.7.0-cp38-cp38-win_amd64.whl (2.6 MB) Collecting idna==3.3 Using cached idna-3.3-py3-none-any.whl (61 kB) Collecting imageio==2.19.3 Using cached imageio-2.19.3-py3-none-any.whl (3.4 MB) Collecting importlib-metadata==4.11.4 Using cached importlib_metadata-4.11.4-py3-none-any.whl (18 kB) Collecting importlib-resources==5.7.1 Using cached importlib_resources-5.7.1-py3-none-any.whl (28 kB) Collecting ipykernel==6.13.0 Using cached ipykernel-6.13.0-py3-none-any.whl (131 kB) Collecting ipython==8.3.0 Using cached ipython-8.3.0-py3-none-any.whl (750 kB) Collecting ipython-genutils==0.2.0 Using cached ipython_genutils-0.2.0-py2.py3-none-any.whl (26 kB) Collecting ipywidgets==7.7.0 Using cached ipywidgets-7.7.0-py2.py3-none-any.whl (123 kB) Collecting itsdangerous==2.1.2 Using cached itsdangerous-2.1.2-py3-none-any.whl (15 kB) Collecting jedi==0.18.1 Using cached jedi-0.18.1-py2.py3-none-any.whl (1.6 MB) Collecting jinja2==3.1.2 Using cached Jinja2-3.1.2-py3-none-any.whl (133 kB) Collecting jsonschema==4.5.1 Using cached jsonschema-4.5.1-py3-none-any.whl (72 kB) Collecting jupyter==1.0.0 Using cached jupyter-1.0.0-py2.py3-none-any.whl (2.7 kB) Collecting jupyter-client==7.3.1 Using cached jupyter_client-7.3.1-py3-none-any.whl (130 kB) Collecting jupyter-console==6.4.3 Using cached jupyter_console-6.4.3-py3-none-any.whl (22 kB) Collecting jupyter-core==4.10.0 Using cached jupyter_core-4.10.0-py3-none-any.whl (87 kB) Collecting jupyter-dash==0.4.2 Using cached jupyter_dash-0.4.2-py3-none-any.whl (23 kB) Collecting jupyterlab-pygments==0.2.2 Using cached jupyterlab_pygments-0.2.2-py2.py3-none-any.whl (21 kB) Collecting jupyterlab-widgets==1.1.0 Using cached jupyterlab_widgets-1.1.0-py3-none-any.whl (245 kB) Collecting keras==2.9.0 Using cached keras-2.9.0-py2.py3-none-any.whl (1.6 MB) Collecting keras-preprocessing==1.1.2 Using cached Keras_Preprocessing-1.1.2-py2.py3-none-any.whl (42 kB) Collecting kiwisolver==1.4.2 Using cached kiwisolver-1.4.2-cp38-cp38-win_amd64.whl (55 kB) Collecting libclang==14.0.1 Using cached libclang-14.0.1-py2.py3-none-win_amd64.whl (14.2 MB) Collecting markdown==3.3.7 Using cached Markdown-3.3.7-py3-none-any.whl (97 kB) Collecting markupsafe==2.1.1 Using cached MarkupSafe-2.1.1-cp38-cp38-win_amd64.whl (17 kB) Pip subprocess error: ERROR: Could not find a version that satisfies the requirement matlabengineforpython==R2020b (from versions: none) ERROR: No matching distribution found for matlabengineforpython==R2020b failed CondaEnvException: Pip failed
[ "Have you looked at condaENVexception.\nTry updating conda and pip versions to latest.\n" ]
[ 0 ]
[]
[]
[ "anaconda", "matlab", "pip", "python" ]
stackoverflow_0074660932_anaconda_matlab_pip_python.txt
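For the entry above: the failing pip requirement, matlabengineforpython==R2020b, is not published on PyPI for that release, so pip cannot resolve it from the environment.yml on the new machine. A minimal sketch of a workaround, assuming MATLAB R2020b is installed locally under the default Windows path (adjust the path to your installation): remove the matlabengineforpython line from the pip section of environment.yml, create the environment, then install the engine from the MATLAB installation itself:

cd "C:\Program Files\MATLAB\R2020b\extern\engines\python"
python setup.py install

Newer MATLAB releases (R2022a and later) do publish a matlabengine package on PyPI, so pinning that instead can work when the target machine has a matching MATLAB version.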
Q: Undefined variable on returning list of data I tried to return a list of data through a view called 'kodeabsensianak'. Here's the snippet on view file. @foreach ($data as $item) <tr class="hover-actions-trigger"> <td class="align-middle text-nowrap"> <div class="d-flex align-items-center"> <div class="avatar avatar-xl"> <div class="avatar-name rounded-circle"><span>{{ $item -> kode_absen }}</span></div> </div> <div class="ms-2">{{ $item -> nama_kabsen }}</div> </div> </td> <td class="align-middle text-nowrap">{{ $item -> keterangan }}</td> <td class="w-auto"> <div class="btn-group btn-group hover-actions end-0 me-4"> <button class="btn btn-light pe-2" type="button" data-bs-toggle="tooltip" data-bs-placement="top" title="Edit"><span class="fas fa-edit"></span></button> <button class="btn btn-light ps-2" type="button" data-bs-toggle="tooltip" data-bs-placement="top" title="Delete"><span class="fas fa-trash-alt"></span></button> </div> </td> <td class="align-middle text-nowrap">{{ $item -> created_at }}</td> </tr> @endforeach Here's the Controller file. public function index() { $data = KodeAbsensi::all(); return view('kodeabsensianak', compact('data')); } I tried to write return view('kodeabsensianak')->with('data', $data); and got the same result. I tried to pass thru dd($data), and also displayed same result. I tried to create other view of list data using the same method and it worked. A: probably from route. share you route snippet.
Undefined variable on returning list of data
I tried to return a list of data through a view called 'kodeabsensianak'. Here's the snippet on view file. @foreach ($data as $item) <tr class="hover-actions-trigger"> <td class="align-middle text-nowrap"> <div class="d-flex align-items-center"> <div class="avatar avatar-xl"> <div class="avatar-name rounded-circle"><span>{{ $item -> kode_absen }}</span></div> </div> <div class="ms-2">{{ $item -> nama_kabsen }}</div> </div> </td> <td class="align-middle text-nowrap">{{ $item -> keterangan }}</td> <td class="w-auto"> <div class="btn-group btn-group hover-actions end-0 me-4"> <button class="btn btn-light pe-2" type="button" data-bs-toggle="tooltip" data-bs-placement="top" title="Edit"><span class="fas fa-edit"></span></button> <button class="btn btn-light ps-2" type="button" data-bs-toggle="tooltip" data-bs-placement="top" title="Delete"><span class="fas fa-trash-alt"></span></button> </div> </td> <td class="align-middle text-nowrap">{{ $item -> created_at }}</td> </tr> @endforeach Here's the Controller file. public function index() { $data = KodeAbsensi::all(); return view('kodeabsensianak', compact('data')); } I tried to write return view('kodeabsensianak')->with('data', $data); and got the same result. I tried to pass thru dd($data), and also displayed same result. I tried to create other view of list data using the same method and it worked.
[ "probably from route.\nshare you route snippet.\n" ]
[ 0 ]
[]
[]
[ "laravel", "php" ]
stackoverflow_0074655808_laravel_php.txt
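For the Laravel entry above, the comment points at the route definition. A hedged sketch of what routes/web.php should contain, assuming the controller class is named KodeAbsensiController (the real class name is not shown in the question); the route has to hit the index method that passes $data to the view:

use App\Http\Controllers\KodeAbsensiController;

Route::get('/kodeabsensianak', [KodeAbsensiController::class, 'index']);

If the route instead returns the view directly, e.g. Route::get('/kodeabsensianak', function () { return view('kodeabsensianak'); });, the Blade template is rendered without $data and throws the undefined variable error.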
Q: AWS CDK Custom Resource triggering script to modify retention period I'm trying to update the CloudFormation logs retention period under /aws/codebuild/ path to save costing. I've did some testing and reading on AWS Custom-Resource to trigger lambda/scripts during onCreate, onUpdate and onDelete events. I need clarification to move forward because i'm kind of stuck trying to debug this issue. Any pointers would be appreciated. Custom-resource.ts import * as cdk from "@aws-cdk/core"; import * as cr from "@aws-cdk/custom-resources"; import * as lambda from "@aws-cdk/aws-lambda"; import * as path from "path"; export class CfCustomResource extends cdk.Stack { constructor(scope: cdk.Construct, id: string, props?: CfStackProps) { super(scope, id, props); const retention = new lambda.Function(this, "OnEventFunction", { memorySize: 512, timeout: cdk.Duration.seconds(120), runtime: lambda.Runtime.NODEJS_14_X, handler: "onEvent", code: lambda.Code.fromAsset(path.join(__dirname, 'set_retention_policy')) }); const getParameter = new cr.AwsCustomResource(this, 'updateRetentionPeriod', { onCreate: { service: 'Lambda', action: 'invoke', parameters: { FunctionName: retention.functionName }, physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()), }, policy: cr.AwsCustomResourcePolicy.fromSdkCalls({ resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE, }), }); console.log(getParameter); } } The index.ts onEvent handler is as per below, upon onCreate it will trigger updateRetention() function as per below import { CloudFormationCustomResourceEvent, CloudFormationCustomResourceDeleteEvent, CloudFormationCustomResourceUpdateEvent, CloudFormationCustomResourceCreateEvent } from "aws-lambda"; interface Response { PhysicalResourceId: string } export function onEvent( event: CloudFormationCustomResourceEvent, ): string { console.log("Event ", event); let response: Response; switch (event.RequestType){ case "Create": response = onCreate(event); break; case "Delete": response = onDelete(event); break; case "Update": response = onUpdate(event); break; default: throw new Error("Unknown Request Type of CloudFormation"); } console.log("Return value:", JSON.stringify(response)) return JSON.stringify(response); } function onCreate(event: CloudFormationCustomResourceCreateEvent) : Response { console.log("We are in the Create Event"); updateRetention(); return { PhysicalResourceId: "abcdef-" + event.RequestId } } function onDelete(event: CloudFormationCustomResourceDeleteEvent) : Response { console.log("We are in the Delete Event"); return { PhysicalResourceId: event.PhysicalResourceId || "" } } function onUpdate(event: CloudFormationCustomResourceUpdateEvent) : Response { console.log("We are in the Update Event"); return { PhysicalResourceId: event.PhysicalResourceId || "" } } Upon running the function, i'm getting the follow error: Received response status [FAILED] from custom resource. Message returned: index.onEvent is undefined or not exported Logs: I couldn't see any further info from AWS as the function was rolled-back. Any experts have tried perform such usecase to trigger javascript or lambda during the deployment? A: I think this is happening because you have not compiled your TypeScript into JavaScript. You can use NodejsFunction which handles that automatically. I think you also meant to use Provider instead of AwsCustomResource. The latter calls any AWS API for you. But you want to just let the provider framework call the lambda for you. That is evident by the return values of your function. 
But all this can be easily skipped. You can create your CodeBuild project with a specific log group that already has log retention set. CDK handles log retention for you when creating log groups. For example: const logGroup = new logs.LogGroup( this, 'Logs', { retention: RetentionDays.ONE_MONTH, }, ); const project = new codebuild.Project(this, 'CodeBuild', { // ... logging: { cloudWatch: { logGroup, }, }, }); If you want to keep the /aws/codebuild prefix, you can manually set that name on the log group with logGroupName.
AWS CDK Custom Resource triggering script to modify retention period
I'm trying to update the CloudFormation logs retention period under /aws/codebuild/ path to save costing. I've did some testing and reading on AWS Custom-Resource to trigger lambda/scripts during onCreate, onUpdate and onDelete events. I need clarification to move forward because i'm kind of stuck trying to debug this issue. Any pointers would be appreciated. Custom-resource.ts import * as cdk from "@aws-cdk/core"; import * as cr from "@aws-cdk/custom-resources"; import * as lambda from "@aws-cdk/aws-lambda"; import * as path from "path"; export class CfCustomResource extends cdk.Stack { constructor(scope: cdk.Construct, id: string, props?: CfStackProps) { super(scope, id, props); const retention = new lambda.Function(this, "OnEventFunction", { memorySize: 512, timeout: cdk.Duration.seconds(120), runtime: lambda.Runtime.NODEJS_14_X, handler: "onEvent", code: lambda.Code.fromAsset(path.join(__dirname, 'set_retention_policy')) }); const getParameter = new cr.AwsCustomResource(this, 'updateRetentionPeriod', { onCreate: { service: 'Lambda', action: 'invoke', parameters: { FunctionName: retention.functionName }, physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()), }, policy: cr.AwsCustomResourcePolicy.fromSdkCalls({ resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE, }), }); console.log(getParameter); } } The index.ts onEvent handler is as per below, upon onCreate it will trigger updateRetention() function as per below import { CloudFormationCustomResourceEvent, CloudFormationCustomResourceDeleteEvent, CloudFormationCustomResourceUpdateEvent, CloudFormationCustomResourceCreateEvent } from "aws-lambda"; interface Response { PhysicalResourceId: string } export function onEvent( event: CloudFormationCustomResourceEvent, ): string { console.log("Event ", event); let response: Response; switch (event.RequestType){ case "Create": response = onCreate(event); break; case "Delete": response = onDelete(event); break; case "Update": response = onUpdate(event); break; default: throw new Error("Unknown Request Type of CloudFormation"); } console.log("Return value:", JSON.stringify(response)) return JSON.stringify(response); } function onCreate(event: CloudFormationCustomResourceCreateEvent) : Response { console.log("We are in the Create Event"); updateRetention(); return { PhysicalResourceId: "abcdef-" + event.RequestId } } function onDelete(event: CloudFormationCustomResourceDeleteEvent) : Response { console.log("We are in the Delete Event"); return { PhysicalResourceId: event.PhysicalResourceId || "" } } function onUpdate(event: CloudFormationCustomResourceUpdateEvent) : Response { console.log("We are in the Update Event"); return { PhysicalResourceId: event.PhysicalResourceId || "" } } Upon running the function, i'm getting the follow error: Received response status [FAILED] from custom resource. Message returned: index.onEvent is undefined or not exported Logs: I couldn't see any further info from AWS as the function was rolled-back. Any experts have tried perform such usecase to trigger javascript or lambda during the deployment?
[ "I think this is happening because you have not compiled your TypeScript into JavaScript. You can use NodejsFunction which handles that automatically.\nI think you also meant to use Provider instead of AwsCustomResource. The latter calls any AWS API for you. But you want to just let the provider framework call the lambda for you. That is evident by the return values of your function.\nBut all this can be easily skipped. You can create your CodeBuild project with a specific log group that already has log retention set. CDK handles log retention for you when creating log groups. For example:\nconst logGroup = new logs.LogGroup(\n this,\n 'Logs',\n {\n retention: RetentionDays.ONE_MONTH,\n },\n);\n\npushes to repository\nconst project = new codebuild.Project(this, 'CodeBuild', {\n // ...\n logging: {\n cloudWatch: {\n logGroup,\n },\n },\n});\n\nIf you want to keep the /aws/codebuild prefix, you can manually name your log group that with logGroupName.\n" ]
[ 1 ]
[]
[]
[ "amazon_web_services", "aws_cdk", "aws_cloudformation_custom_resource", "aws_lambda", "typescript" ]
stackoverflow_0074651012_amazon_web_services_aws_cdk_aws_cloudformation_custom_resource_aws_lambda_typescript.txt
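A rough sketch of the Provider-based wiring mentioned in the answer above, kept in the same CDK v1 package style as the question (@aws-cdk/custom-resources plus @aws-cdk/aws-lambda-nodejs); the construct IDs are placeholders, and NodejsFunction bundles the TypeScript handler so it does not have to be compiled by hand:

import * as cdk from "@aws-cdk/core";
import * as cr from "@aws-cdk/custom-resources";
import { NodejsFunction } from "@aws-cdk/aws-lambda-nodejs";
import * as path from "path";

// Inside the stack constructor:
const onEvent = new NodejsFunction(this, "OnEventFunction", {
  entry: path.join(__dirname, "set_retention_policy", "index.ts"),
  handler: "onEvent",
  timeout: cdk.Duration.seconds(120),
});

const provider = new cr.Provider(this, "RetentionProvider", {
  onEventHandler: onEvent,
});

new cdk.CustomResource(this, "RetentionResource", {
  serviceToken: provider.serviceToken,
});

The provider framework then calls onEvent with the Create/Update/Delete events and takes care of signalling CloudFormation, which is what the handler's return values are written for.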
Q: Is it possible to render a widget under a SingleChildScrollView that goes outside its boundries? I'm using the plugin fl_chart which allows to display some bars and when you tap them a popup is displayed. Example: If the popup is big it will go outside the boundaries of the parent, for example if I have a card, the popup will be displayed over it: Until here that is my expected behavior and is achieved with a code like this simplified for the question: Card( elevation: 8, shape: RoundedRectangleBorder(borderRadius: BorderRadius.circular(30)), child: Container( padding: const EdgeInsets.all(8), child: Row( children: [ Expanded( child: BarChart( _getData(mySrc) ), ), ), ), The number of bars that I will display is dynamic, therefore I want to make my row scrollable so I wrapped my row with a SingleChildScrollView: child: Row( children: [ Expanded( child: SingleChildScrollView( scrollDirection: Axis.horizontal, child: SizedBox( width: 400, child: BarChart( _getData(mySrc), ), ), ), ), And scrolling works as expected, but now it seems the popup is not allowed to go beyond the boundaries of the SingleChildScrollView: Is there anyway I can keep the scrolling without damaging the popup generated by the fl_chart plugin? A: try this SingleChildScrollView( clipBehavior: Clip.none, if it doesnt work you can try to set padding top to the SingleChildScrollView and make some extra space so popup doesnt get clipped
Is it possible to render a widget under a SingleChildScrollView that goes outside its boundries?
I'm using the plugin fl_chart which allows to display some bars and when you tap them a popup is displayed. Example: If the popup is big it will go outside the boundaries of the parent, for example if I have a card, the popup will be displayed over it: Until here that is my expected behavior and is achieved with a code like this simplified for the question: Card( elevation: 8, shape: RoundedRectangleBorder(borderRadius: BorderRadius.circular(30)), child: Container( padding: const EdgeInsets.all(8), child: Row( children: [ Expanded( child: BarChart( _getData(mySrc) ), ), ), ), The number of bars that I will display is dynamic, therefore I want to make my row scrollable so I wrapped my row with a SingleChildScrollView: child: Row( children: [ Expanded( child: SingleChildScrollView( scrollDirection: Axis.horizontal, child: SizedBox( width: 400, child: BarChart( _getData(mySrc), ), ), ), ), And scrolling works as expected, but now it seems the popup is not allowed to go beyond the boundaries of the SingleChildScrollView: Is there anyway I can keep the scrolling without damaging the popup generated by the fl_chart plugin?
[ "try this\nSingleChildScrollView(\n clipBehavior: Clip.none,\n\nif it doesnt work you can try to set padding top to the SingleChildScrollView and make some extra space so popup doesnt get clipped\n" ]
[ 1 ]
[]
[]
[ "flutter" ]
stackoverflow_0074659624_flutter.txt
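Applying the suggestion above to the code from the question, a minimal sketch (BarChart and _getData(mySrc) are taken from the question as-is): SingleChildScrollView clips its child with Clip.hardEdge by default, so switching clipBehavior to Clip.none lets the tooltip drawn by fl_chart paint outside the scroll viewport:

SingleChildScrollView(
  scrollDirection: Axis.horizontal,
  clipBehavior: Clip.none, // default Clip.hardEdge cuts the tooltip off
  child: SizedBox(
    width: 400,
    child: BarChart(
      _getData(mySrc),
    ),
  ),
),

If the tooltip is still cut off by an outer widget (for example the Card), adding some padding around the chart, as the answer suggests, gives the popup room to render.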
Q: Powershell: foreach loop giving incorrect output New to scripting and powershell. Need help understanding where I went wrong here. Attempting to initialize or set a disk online with a script; the test output gives the same results no matter what the disk's actual state is. disk_stat.ps1 $hdisk = get-disk | select number, partitionstyle foreach ($object in $hdisk) { if ($object -ne 'MBR') { write-host = $object.number,$object.partitionstyle "Needs to be set ONLINE" exit } else { write-host = $object.number,$object.partitionstyle "Needs to be initialized" exit } } from "get-disk | select number, partitionstyle" output = number PartitionStyle ------ -------------- 0 MBR 1 RAW So my intended output should tell me which number/partition style is Raw, offline, etc. Output: PS C:\Users\Administrator> C:\Temp\disk_stat.ps1 = 0 MBR Needs to be initialized A: You need to lose the Exit statements since you want it to process all disks on the machine, at least I'd assume so? You can also simplify the code by dropping the Select statement and the Write-Host, since a bare string "" by itself implies this. Also, you need to evaluate the object properties using the $() as shown. Clear-Host $hdisk = get-disk foreach ($object in $hdisk) { if ($object.PartitionStyle -ne 'MBR') { "$($object.number) $($object.partitionstyle) Needs to be set ONLINE" } else { "$($object.number) $($object.partitionstyle) Needs to be initialized" } } #End Foreach Sample Output: 0 GPT Needs to be set ONLINE 1 GPT Needs to be set ONLINE 2 GPT Needs to be set ONLINE Note: I don't have any MBR disks on my machine.
Powershell: foreach loop giving incorrect output
New to scripting and powershell. Need help understanding where I went wrong here. Attempting to initialize or set a disk online with a script; the test output gives the same results no matter what the disk's actual state is. disk_stat.ps1 $hdisk = get-disk | select number, partitionstyle foreach ($object in $hdisk) { if ($object -ne 'MBR') { write-host = $object.number,$object.partitionstyle "Needs to be set ONLINE" exit } else { write-host = $object.number,$object.partitionstyle "Needs to be initialized" exit } } from "get-disk | select number, partitionstyle" output = number PartitionStyle ------ -------------- 0 MBR 1 RAW So my intended output should tell me which number/partition style is Raw, offline, etc. Output: PS C:\Users\Administrator> C:\Temp\disk_stat.ps1 = 0 MBR Needs to be initialized
[ "You need to lose the Exit statements since you want it to process all disks on the machine, at least I'd assume so? You can also simplify the code by dropping the Select statement and the Write-Host as a string \"\" by itself implies this. Also, you need to Evaluate the object properties using the $() as shown.\nClear-Host\n$hdisk = get-disk \nforeach ($($object.PartitionStyle) in $hdisk) {\n\n if ($object -ne 'MBR') {\n \"$($object.number) $($object.partitionstyle) Needs to be set ONLINE\"\n }\n\n else {\n \"$($object.number) $($object.partitionstyle) Needs to be initialized\"\n }\n \n } #End Foreach\n\nSample Output:\n0 GPT Needs to be set ONLINE\n1 GPT Needs to be set ONLINE\n2 GPT Needs to be set ONLINE\n\nNote: I don't have any MBR disks on my machine.\n" ]
[ 1 ]
[]
[]
[ "foreach", "powershell" ]
stackoverflow_0074660896_foreach_powershell.txt
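Since the stated goal above was to initialize RAW disks and bring offline disks online, a possible follow-on sketch using the storage cmdlets (run from an elevated session; MBR as the partition style is only an example and GPT may be preferable):

foreach ($disk in Get-Disk) {
    if ($disk.PartitionStyle -eq 'RAW') {
        Initialize-Disk -Number $disk.Number -PartitionStyle MBR
    }
    elseif ($disk.IsOffline) {
        Set-Disk -Number $disk.Number -IsOffline $false
    }
}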
Q: Basic program for calculating money I have a basic assignment and can't get the program right. The assignment is to make a program that displays the minimum amount of banknotes and coins necessary to pay. #include <iostream> using namespace std; int main() { int pari; cin >> pari; switch (pari) { case 1: cout << pari/5000 << "*5000" << endl; break; case 2: cout << pari/1000 << "*1000" << endl; break; case 3: cout << pari/500 << "*500" << endl; break; case 4: cout << pari/100 << "*100" << endl; break; case 5: cout << pari/50 << "*50" << endl; break; case 6: cout << pari/10 << "*10" << endl; break; case 7: cout << pari/5 << "*5" << endl; break; case 8: cout << pari/2 << "*2" << endl; break; case 9: cout << pari/1 << "*1" << endl; break; default: cout << "WRONG"; } return 0; } For example: Input: 54321 Output: 10x5000 4x1000 0x500 3x100 0x50 2x10 0x5 0x2 1x1 I tried with switch case, with if statements, but nothing works. A: To get the kind of output you have shown, use logic that looks more like this instead: #include <iostream> using namespace std; int main() { int pari; cin >> pari; cout << pari/5000 << "*5000" << endl; pari %= 5000; cout << pari/1000 << "*1000" << endl; pari %= 1000; cout << pari/500 << "*500" << endl; pari %= 500; cout << pari/100 << "*100" << endl; pari %= 100; cout << pari/50 << "*50" << endl; pari %= 50; cout << pari/10 << "*10" << endl; pari %= 10; cout << pari/5 << "*5" << endl; pari %= 5; cout << pari/2 << "*2" << endl; pari %= 2; cout << pari/1 << "*1" << endl; return 0; } Online Demo Which can be simplified if you put the banknotes in an array and loop through it, eg: #include <iostream> using namespace std; int main() { const int bankNotes[] = {5000, 1000, 500, 100, 50, 10, 5, 2, 1}; const int numBankNotes = sizeof(bankNotes)/sizeof(bankNotes[0]); int pari; cin >> pari; for (int i = 0; i < numBankNotes; ++i) { cout << pari/bankNotes[i] << "*" << bankNotes[i] << endl; pari %= bankNotes[i]; } return 0; } Online Demo A: I have written several versions for you here. Hopefully it will help you to understand the procedure. Version 1 This is a navie version. This is how we would do it if we were doing it by hand. int main() { int input_value = 0; std::cin >> input_value; // First we get the input. // We start with the highest value banknote. int value = input_value; int const number_of_5000_notes = value / 5000; // How many of these notes do // we need? value = value % 5000; // Now calculate the rest. int const number_of_1000_notes = value / 1000; // How many of these notes do // we need? value = value % 1000; // Now calculate the rest. 
int const number_of_500_notes = value / 500; value = value % 500; int const number_of_100_notes = value / 100; value = value % 100; int const number_of_50_notes = value / 50; value = value % 50; int const number_of_10_notes = value / 10; value = value % 10; int const number_of_5_notes = value / 5; value = value % 5; int const number_of_2_notes = value / 2; value = value % 2; int const number_of_1_notes = value; // At the end we write the output std::cout << "Input: " << input_value << std::endl; std::cout << "Output:" << std::endl; std::cout << number_of_5000_notes << " x 5000" << std::endl; std::cout << number_of_1000_notes << " x 1000" << std::endl; std::cout << number_of_500_notes << " x 500" << std::endl; std::cout << number_of_100_notes << " x 100" << std::endl; std::cout << number_of_50_notes << " x 50" << std::endl; std::cout << number_of_10_notes << " x 10" << std::endl; std::cout << number_of_5_notes << " x 5" << std::endl; std::cout << number_of_2_notes << " x 2" << std::endl; std::cout << number_of_1_notes << " x 1" << std::endl; return 0; } Version 2 This is a more advanced version int main() { int value = 0; std::cin >> value; // Get input // Check input if (value == 0) { std::cout << "No value or 0 has been entered"; return 0; } // Output on the fly std::cout << "Input: " << value << std::endl; std::cout << "Output:" << std::endl; // loop over a sorted list of banknotes. for (auto note_value_ent : {5000, 1000, 500, 100, 50, 10, 5, 2, 1}) { int const number_of_notes = value / note_value_ent; value %= note_value_ent; std::cout << number_of_notes << " x " << note_value_ent << std::endl; } return 0; } Both versions give the same result (except in the case of an invalid entry).
Basic program for calculating money
I have a basic assignment and can't get the program right. The assignment is to make a program that displays the minimum amount of banknotes and coins necessary to pay. #include <iostream> using namespace std; int main() { int pari; cin >> pari; switch (pari) { case 1: cout << pari/5000 << "*5000" << endl; break; case 2: cout << pari/1000 << "*1000" << endl; break; case 3: cout << pari/500 << "*500" << endl; break; case 4: cout << pari/100 << "*100" << endl; break; case 5: cout << pari/50 << "*50" << endl; break; case 6: cout << pari/10 << "*10" << endl; break; case 7: cout << pari/5 << "*5" << endl; break; case 8: cout << pari/2 << "*2" << endl; break; case 9: cout << pari/1 << "*1" << endl; break; default: cout << "WRONG"; } return 0; } For example: Input: 54321 Output: 10x5000 4x1000 0x500 3x100 0x50 2x10 0x5 0x2 1x1 I tried with switch case, with if statements, but nothing works.
[ "To get the kind of output you have shown, use logic that looks more like this instead:\n#include <iostream>\nusing namespace std;\n\nint main()\n{\n int pari;\n cin >> pari;\n\n cout << pari/5000 << \"*5000\" << endl;\n pari %= 5000;\n\n cout << pari/1000 << \"*1000\" << endl;\n pari %= 1000;\n\n cout << pari/500 << \"*500\" << endl;\n pari %= 500;\n\n cout << pari/100 << \"*100\" << endl;\n pari %= 100;\n\n cout << pari/50 << \"*50\" << endl;\n pari %= 50;\n\n cout << pari/10 << \"*10\" << endl;\n pari %= 10;\n\n cout << pari/5 << \"*5\" << endl;\n pari %= 5;\n\n cout << pari/2 << \"*2\" << endl;\n pari %= 2;\n\n cout << pari/1 << \"*1\" << endl;\n\n return 0;\n}\n\nOnline Demo\nWhich can be simplified if you put the banknotes in an array and loop through it, eg:\n#include <iostream>\nusing namespace std;\n\nint main()\n{\n const int bankNotes[] = {5000, 1000, 500, 100, 50, 10, 5, 2, 1};\n const int numBankNotes = sizeof(bankNotes)/sizeof(bankNotes[0]);\n\n int pari;\n cin >> pari;\n\n for (int i = 0; i < numBankNotes; ++i) {\n cout << pari/bankNotes[i] << \"*\" << bankNotes[i] << endl;\n pari %= bankNotes[i];\n }\n\n return 0;\n}\n\nOnline Demo\n", "I have written several versions for you here.\nHopefully it will help you to understand the procedure.\n\nVersion 1\n\nThis is a navie version. This is how we would do it if we were doing it by hand.\nint main()\n{\n int input_value = 0;\n std::cin >> input_value; // First we get the input.\n // We start with the highest value banknote.\n\n int value = input_value;\n int const number_of_5000_notes = value / 5000; // How many of these notes do \n // we need?\n value = value % 5000; // Now calculate the rest.\n\n int const number_of_1000_notes = value / 1000; // How many of these notes do \n // we need? 
\n value = value % 1000; // Now calculate the rest.\n int const number_of_500_notes = value / 500;\n value = value % 500;\n int const number_of_100_notes = value / 100;\n value = value % 100;\n int const number_of_50_notes = value / 50;\n value = value % 50;\n int const number_of_10_notes = value / 10;\n value = value % 10;\n int const number_of_5_notes = value / 5;\n value = value % 5;\n int const number_of_2_notes = value / 2;\n value = value % 2;\n int const number_of_1_notes = value;\n\n // At the end we write the output \n std::cout << \"Input: \" << input_value << std::endl;\n std::cout << \"Output:\" << std::endl;\n std::cout << number_of_5000_notes << \" x 5000\" << std::endl;\n std::cout << number_of_1000_notes << \" x 1000\" << std::endl;\n std::cout << number_of_500_notes << \" x 500\" << std::endl;\n std::cout << number_of_100_notes << \" x 100\" << std::endl;\n std::cout << number_of_50_notes << \" x 50\" << std::endl;\n std::cout << number_of_10_notes << \" x 10\" << std::endl;\n std::cout << number_of_5_notes << \" x 5\" << std::endl;\n std::cout << number_of_2_notes << \" x 2\" << std::endl;\n std::cout << number_of_1_notes << \" x 1\" << std::endl;\n\n return 0;\n}\n\n\nVersion 2\n\nThis is a more advanced version\nint main()\n{\n int value = 0;\n std::cin >> value; // Get input\n\n // Check input\n if (value == 0)\n {\n std::cout << \"No value or 0 has been entered\";\n return 0;\n }\n\n // Output on the fly\n std::cout << \"Input: \" << value << std::endl;\n std::cout << \"Output:\" << std::endl;\n\n // loop over a sorted list of banknotes.\n for (auto note_value_ent : {5000, 1000, 500, 100, 50, 10, 5, 2, 1})\n {\n int const number_of_notes = value / note_value_ent;\n value %= note_value_ent;\n std::cout << number_of_notes << \" x \" << note_value_ent << std::endl;\n }\n return 0;\n}\n\nBoth versions give the same result (except in the case of an invalid entry).\n" ]
[ 2, 0 ]
[]
[]
[ "c++" ]
stackoverflow_0074660214_c++.txt
Q: h2o frame from pandas casting I am using h2o to perform predictive modeling from python. I have loaded some data from a csv using pandas, specifying some column types: dtype_dict = {'SIT_SSICCOMP':'object', 'SIT_CAPACC':'object', 'PTT_SSIRMPOL':'object', 'PTT_SPTCLVEI':'object', 'cap_pad':'object', 'SIT_SADNS_RESP_PERC':'object', 'SIT_GEOCODE':'object', 'SIT_TIPOFIRMA':'object', 'SIT_TPFRODESI':'object', 'SIT_CITTAACC':'object', 'SIT_INDIRACC':'object', 'SIT_NUMCIVACC':'object' } date_cols = ["SIT_SSIDTSIN","SIT_SSIDTDEN","PTT_SPTDTEFF","PTT_SPTDTSCA","SIT_DTANTIFRODE","PTT_DTELABOR"] columns_to_drop = ['SIT_TPFRODESI','SIT_CITTAACC', 'SIT_INDIRACC', 'SIT_NUMCIVACC', 'SIT_CAPACC', 'SIT_LONGITACC', 'SIT_LATITACC','cap_pad','SIT_DTANTIFRODE'] comp='mycomp' file_completo = os.path.join(dataDir,"db4modelrisk_"+comp+".csv") db4scoring = pd.read_csv(filepath_or_buffer=file_completo,sep=";", encoding='latin1', header=0,infer_datetime_format =True,na_values=[''], keep_default_na =False, parse_dates=date_cols,dtype=dtype_dict,nrows=500e3) db4scoring.drop(labels=columns_to_drop,axis=1,inplace =True) Then, after I set up a h2o cluster I import it in h2o using db4scoring_h2o = H2OFrame(db4scoring) and I convert categorical predictors in factor for example: db4scoring_h2o["SIT_SADTPROV"]=db4scoring_h2o["SIT_SADTPROV"].asfactor() db4scoring_h2o["PTT_SPTFRAZ"]=db4scoring_h2o["PTT_SPTFRAZ"].asfactor() When I check data types using db4scoring.dtypes I notice that they are properly set but when I import it in h2o I notice that h2oframe performs some unwanted conversions to enum (eg from float or from int). I wonder if is is a way to specify the variable format in H2OFrame. A: Yes, there is. See the H2OFrame doc here: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html#h2oframe You just need to use the column_types argument when you cast. Here's a short example: # imports import h2o import numpy as np import pandas as pd # create small random pandas df df = pd.DataFrame(np.random.randint(0,10,size=(10, 2)), columns=list('AB')) print(df) # A B #0 5 0 #1 1 3 #2 4 8 #3 3 9 # ... # start h2o, convert pandas frame to H2OFrame # use column_types dict to set data types h2o.init() h2o_df = h2o.H2OFrame(df, column_types={'A':'numeric', 'B':'enum'}) h2o_df.describe() # you should now see the desired data types # A B # type int enum # ... A: # Filter a dictionary to keep elements only whose keys are even newDict = filterTheDict(dictOfNames, lambda elem : elem[0] % 2 == 0) print('Filtered Dictionary : ') print(newDict)`enter code here`
h2o frame from pandas casting
I am using h2o to perform predictive modeling from python. I have loaded some data from a csv using pandas, specifying some column types: dtype_dict = {'SIT_SSICCOMP':'object', 'SIT_CAPACC':'object', 'PTT_SSIRMPOL':'object', 'PTT_SPTCLVEI':'object', 'cap_pad':'object', 'SIT_SADNS_RESP_PERC':'object', 'SIT_GEOCODE':'object', 'SIT_TIPOFIRMA':'object', 'SIT_TPFRODESI':'object', 'SIT_CITTAACC':'object', 'SIT_INDIRACC':'object', 'SIT_NUMCIVACC':'object' } date_cols = ["SIT_SSIDTSIN","SIT_SSIDTDEN","PTT_SPTDTEFF","PTT_SPTDTSCA","SIT_DTANTIFRODE","PTT_DTELABOR"] columns_to_drop = ['SIT_TPFRODESI','SIT_CITTAACC', 'SIT_INDIRACC', 'SIT_NUMCIVACC', 'SIT_CAPACC', 'SIT_LONGITACC', 'SIT_LATITACC','cap_pad','SIT_DTANTIFRODE'] comp='mycomp' file_completo = os.path.join(dataDir,"db4modelrisk_"+comp+".csv") db4scoring = pd.read_csv(filepath_or_buffer=file_completo,sep=";", encoding='latin1', header=0,infer_datetime_format =True,na_values=[''], keep_default_na =False, parse_dates=date_cols,dtype=dtype_dict,nrows=500e3) db4scoring.drop(labels=columns_to_drop,axis=1,inplace =True) Then, after I set up a h2o cluster I import it in h2o using db4scoring_h2o = H2OFrame(db4scoring) and I convert categorical predictors in factor for example: db4scoring_h2o["SIT_SADTPROV"]=db4scoring_h2o["SIT_SADTPROV"].asfactor() db4scoring_h2o["PTT_SPTFRAZ"]=db4scoring_h2o["PTT_SPTFRAZ"].asfactor() When I check data types using db4scoring.dtypes I notice that they are properly set but when I import it in h2o I notice that h2oframe performs some unwanted conversions to enum (eg from float or from int). I wonder if is is a way to specify the variable format in H2OFrame.
[ "Yes, there is. See the H2OFrame doc here: http://docs.h2o.ai/h2o/latest-stable/h2o-py/docs/frame.html#h2oframe\nYou just need to use the column_types argument when you cast.\nHere's a short example:\n# imports\nimport h2o\nimport numpy as np\nimport pandas as pd\n\n# create small random pandas df\ndf = pd.DataFrame(np.random.randint(0,10,size=(10, 2)), \ncolumns=list('AB'))\nprint(df)\n\n# A B\n#0 5 0\n#1 1 3\n#2 4 8\n#3 3 9\n# ...\n\n# start h2o, convert pandas frame to H2OFrame\n# use column_types dict to set data types\nh2o.init()\nh2o_df = h2o.H2OFrame(df, column_types={'A':'numeric', 'B':'enum'})\nh2o_df.describe() # you should now see the desired data types \n\n# A B\n# type int enum\n# ... \n\n", "# Filter a dictionary to keep elements only whose keys are even\nnewDict = filterTheDict(dictOfNames, lambda elem : elem[0] % 2 == 0)\nprint('Filtered Dictionary : ')\nprint(newDict)`enter code here`\n\n" ]
[ 3, 0 ]
[]
[]
[ "casting", "h2o", "pandas", "python" ]
stackoverflow_0049823178_casting_h2o_pandas_python.txt
Q: How do I use lldb to debug C++ code on Android on command line I am trying to figure out how to debug my Android NDK project in C++ using the lldb debugger. I am trying to achieve this by using the command line only. I cannot seem to find any articles or documentation on how to use lldb along with adb to debug an app from the command line.
How do I use lldb to debug C++ code on Android on command line
I am trying to figure out how to debug my Android NDK project in C++ using the lldb debugger. I am trying to achieve this by using the command line only. I cannot seem to find any articles or documentation on how to use lldb along with adb to debug an app from the command line.
[ "Probably you can try below: (This example steps are based on macOS)\nrun gdb server and attach process\n//Below commands will suspend the execution on the running app, and waits for a debugger to connect to it on port 5045.\nadb shell\n\n// to get pid\nroot@generic_x86:/ # ps | grep <your-app-name>\nu0_a54 6510 1196 800157 47442 ffffffff b662df1b S \n\n<your-app-name>\n\nroot@generic_x86:/ # gdbserver :5045 --attach 6510 (PID)\nAttached; pid = 6510\nListening on port 5045\n//The process is now suspended, and gdbserver is listening for debugging clients on port 5045.\n\nattach gdb debugger\n//open a new terminal, e.g. terminal2, send below commands from this new terminal\n//forward the above port to a local port on the host with the abd forward command\nadb forward tcp:5045 tcp:5045\n//launch gdb client from your android ndk folder\n<your-ndk-home>/android-ndk-r16b/prebuilt/darwin-x86_64/bin/gdb\n//Target the gdb to the remote sever\n(gdb) target remote :5045\n\n//now the process is successfully attached with the application for debugging, you can see below info from terminal 1.\nRemote debugging from host 127.0.0.1\n\n", "\nmake sure your android phone is rooted\nUse /data/local/tmp directory on your android phone. Root previledge is not required.\n\ncopy NDK provided lldb-server to your android phone, and start it by:\n\n\n./lldb-server platform --listen \"*:10086\" --server\n\n10086 is port number, you may change it\n\nForward port by running:\n\nadb forward tcp:10086 tcp:10086\n\n\nget device name by adb devices. For me, it's 39688bd9\n\ninstall LLVM with proper python (I use LLVM-11.0 with python3.6), and open lldb, typing these commands:\n\n\nplatform select remote-android\nplatform connect connect://39688bd9:10086\n\n\nNow, you're connected with lldb-server, thus just use lldb like locally:\n\nfile some_exeutable_file_with_debug_info\nb main\nr\n\n", "With android-ndk-r25b, I had some luck with the below:\nIn shell window 1\nadb push <ndk_dir>/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.6/lib/linux/aarch64/lldb-server /data/local/tmp\nadb shell chmod +x /data/local/tmp/lldb-server\nadb shell run-as <package_name> killall -9 lldb-server\nsleep 1\nadb shell run-as <package_name> cp /data/local/tmp/lldb-server /data/data/<package_name>/\nadb shell am start -D -n \"<package_name>/android.app.NativeActivity\"\nadb shell run-as <package_name> sh -c '/data/data/<package_name>/lldb-server platform --server --listen unix-abstract:///data/data/<package_name>/debug.socket'\"\n\nIn shell window 2\n# Get the pid of the process you are trying to debug\nadb shell run-as <package_name> ps\nlldb\n> platform select remote-android\n> platform connect unix-abstract-connect:///data/data/<package_name>/debug.socket\n> attach <pid>\n\nIn shell window 3\n# You will again need the pid of the process you are trying to debug\nadb shell run-as <package_name> ps\nadb forward tcp:12345 jdwp:<pid>\njdb -attach localhost:12345\n\nThen go back to lldb running in window 2, and continue your process\nI found this script to be useful:\nhttps://github.com/iivke/flutter_android_lldb/blob/main/flutter_lldb.py\n" ]
[ 7, 6, 0 ]
[]
[]
[ "adb", "android", "android_ndk", "debugging", "lldb" ]
stackoverflow_0053733781_adb_android_android_ndk_debugging_lldb.txt
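The host-side steps from the answers above can also be collected into a small lldb command file; a sketch, assuming lldb-server is already listening on the device as described and that <device-serial>, <port> and <pid> are filled in by hand:

# commands.lldb - host-side client setup
platform select remote-android
platform connect connect://<device-serial>:<port>
attach <pid>

Run it with lldb -s commands.lldb; the -s flag sources the commands after lldb starts, leaving you at an interactive prompt attached to the process.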
Q: How do I write the __init__ to assign 1 to the __value data attibute for the dice game? Write a class named Die that simulates rolling dice. The Die class should have one private data attribute named __value. It should also have the following methods: __init__ : The __init__ method should assign 1 to the __value data attribute. roll: The roll method should set the __value data attribute to a random number from 1 to 6. get_value: The get_value function should return the value of the __value data attribute. Write a program that creates a Die object then uses a loop to roll the die 5 times. Each time the die is rolled, display its value. import random(1,6) class Die: def __init__(self): self.number int(input("Enter a number 1-6":) def get_value(self): for n in number: def main(): in
How do I write the __init__ to assign 1 to the __value data attibute for the dice game?
Write a class named Die that simulates rolling dice. The Die class should have one private data attribute named __value. It should also have the following methods: __init__ : The __init__ method should assign 1 to the __value data attribute. roll: The roll method should set the __value data attribute to a random number from 1 to 6. get_value: The get_value function should return the value of the __value data attribute. Write a program that creates a Die object then uses a loop to roll the die 5 times. Each time the die is rolled, display its value. import random(1,6) class Die: def __init__(self): self.number int(input("Enter a number 1-6":) def get_value(self): for n in number: def main(): in
[]
[]
[ "As you created number variable using self.number,you can create value with self.__value.\nLike this: self.__value = randint(1,6).\nBe aware that you can create it outside of the __init__ method. But if you do that, the variable will be linked to the class instead of instances (so multiple call to new dice will have the same value).\n", "Hope this help if you mean to create a class to demonstrate rolling dice.\n# Write a class named Die with the following methods:\nclass Die:\n # __init__ method that initializes the die's value to 1\n def __init__(self):\n self.value = 1\n\n # roll method that generates a random number in the range 1 through 6, and assigns this value to the die's value attribute\n def roll(self):\n import random\n self.value = random.randint(1, 6)\n\n # get_value method that returns the die's value\n def get_value(self):\n return self.value\n\n# main function\ndef main():\n # Create an instance of the Die class, and assign it to a variable named die.\n die = Die()\n # Write a loop that rolls the die 5 times.\n for i in range(5):\n die.roll()\n print(die.get_value())\n\n" ]
[ -1, -1 ]
[ "class", "dice", "python" ]
stackoverflow_0074660989_class_dice_python.txt
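The suggestions above drift from the stated spec: the first assigns a random value in __init__ instead of 1, and the second uses a public value attribute rather than the required private __value. A sketch that follows the assignment as written:

import random

class Die:
    def __init__(self):
        self.__value = 1                      # spec: start at 1

    def roll(self):
        self.__value = random.randint(1, 6)   # random number from 1 to 6

    def get_value(self):
        return self.__value

def main():
    die = Die()
    for _ in range(5):        # roll the die 5 times
        die.roll()
        print(die.get_value())

if __name__ == "__main__":
    main()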
Q: curl: (26) Failed to open/read local data from file/application I'm trying to upload a ipa file to saucelabs storage but getting the error as mentioned above. The command that I'm using - $ curl -F 'payload=@/Users/<user-name>/Downloads/<file_name>.ipa' -F name=<file_name>.apk -u "$SAUCE_USERNAME:$SAUCE_ACCESS_KEY" 'https://api.us-west-1.saucelabs.com/v1/storage/upload' I'm on mac. I've seen the questions on stackoverflow but none of them answers. A: There might be a problem escaping your path. For me your command works, if the file is correct (I get an unauthorized of course). (curl 7.64.1) Try your command from the Downloads folder: cd /Users/<user-name>/Downloads curl -F @<file_name>.ipa -F name=<file_name>.apk -u "$SAUCE_USERNAME:$SAUCE_ACCESS_KEY" 'https://api.us-west-1.saucelabs.com/v1/storage/upload' You may have to escape characters in your filename. Tip: You can also try drag and drop the file form finder to Terminal while the cursor is at the point where the filename is. Terminal will escape those automatically. A: I was able to resolve this by not using single or double quotes around the file name. curl -u "<userName>:<accessKey>" \ -X POST "https://api-cloud.browserstack.com/app-automate/xcuitest/v2/test-suite" \ -F file=@/path/to/app/file/Application-debug-test.zip. I hope it helps. A: I resolved this problem by replacing single quotes with double quotes instead. I changed this: curl -F '[email protected]' https://location-bq-datasets.cloudfunctions.net/sample-endpoint curl: (26) Failed to open/read local data from file/application To this: curl -F "[email protected]" https://location-bq-datasets.cloudfunctions.net/sample-endpoint You also don't need to place your endpoint https://api.us-west-1.saucelabs.com/v1/storage/upload in quotes. A: Use below command for me it resolved curl -F "file=@C:/Users/LENOVO/Desktop/git/ppt/8.apk" http://localhost:8000/api/v1/upload -H "Authorization:f06ecb3505899296f03b0c21d7dc5baf83fb999e3d0b4c1243874c82c0184874" A: In addition to making sure the quotes were in the correct place, I had to remove the tilde that I first included in my file path. Bad: curl -F dsym=@"~/Users/<user-name>/Downloads/<file_name>.dSYM.zip" -H "X-APP-LICENSE-KEY: theKey" https://theplaceto.upload Good: curl -F dsym=@"/Users/<user-name>/Downloads/<file_name>.dSYM.zip" -H "X-APP-LICENSE-KEY: theKey" https://theplaceto.upload```**
curl: (26) Failed to open/read local data from file/application
I'm trying to upload an ipa file to saucelabs storage but am getting the error mentioned above. The command that I'm using - $ curl -F 'payload=@/Users/<user-name>/Downloads/<file_name>.ipa' -F name=<file_name>.apk -u "$SAUCE_USERNAME:$SAUCE_ACCESS_KEY" 'https://api.us-west-1.saucelabs.com/v1/storage/upload' I'm on mac. I've seen similar questions on stackoverflow but none of their answers helped.
[ "There might be a problem escaping your path. For me your command works, if the file is correct (I get an unauthorized of course). (curl 7.64.1)\nTry your command from the Downloads folder:\ncd /Users/<user-name>/Downloads\ncurl -F @<file_name>.ipa -F name=<file_name>.apk -u \"$SAUCE_USERNAME:$SAUCE_ACCESS_KEY\" 'https://api.us-west-1.saucelabs.com/v1/storage/upload'\n\nYou may have to escape characters in your filename. Tip: You can also try drag and drop the file form finder to Terminal while the cursor is at the point where the filename is. Terminal will escape those automatically.\n", "I was able to resolve this by not using single or double quotes around the file name.\ncurl -u \"<userName>:<accessKey>\" \\ -X POST \"https://api-cloud.browserstack.com/app-automate/xcuitest/v2/test-suite\" \\ -F file=@/path/to/app/file/Application-debug-test.zip. I hope it helps.\n", "I resolved this problem by replacing single quotes with double quotes instead.\nI changed this:\ncurl -F '[email protected]' https://location-bq-datasets.cloudfunctions.net/sample-endpoint\n\ncurl: (26) Failed to open/read local data from file/application\nTo this:\ncurl -F \"[email protected]\" https://location-bq-datasets.cloudfunctions.net/sample-endpoint\n\nYou also don't need to place your endpoint https://api.us-west-1.saucelabs.com/v1/storage/upload in quotes.\n", "Use below command for me it resolved\ncurl -F \"file=@C:/Users/LENOVO/Desktop/git/ppt/8.apk\" http://localhost:8000/api/v1/upload -H \"Authorization:f06ecb3505899296f03b0c21d7dc5baf83fb999e3d0b4c1243874c82c0184874\"\n\n", "In addition to making sure the quotes were in the correct place, I had to remove the tilde that I first included in my file path.\nBad:\ncurl -F dsym=@\"~/Users/<user-name>/Downloads/<file_name>.dSYM.zip\" -H \"X-APP-LICENSE-KEY: theKey\" https://theplaceto.upload\n\nGood:\ncurl -F dsym=@\"/Users/<user-name>/Downloads/<file_name>.dSYM.zip\" -H \"X-APP-LICENSE-KEY: theKey\" https://theplaceto.upload```**\n\n" ]
[ 4, 3, 1, 0, 0 ]
[]
[]
[ "curl", "saucelabs" ]
stackoverflow_0065627977_curl_saucelabs.txt
Q: DHT detection in local network Are there any tools to detect DHT on a local network? Maybe Kademlia, Hyperswarm or some other library? The goal is not to block DHT (almost impossible), but to write a script or service that will notify if a DHT client is detected on the local network. It is also possible to determine the local ip of the DHT client. UPD: question about BitTorrent DHT A: DHTs are a general concept. Concrete implementations vary wildly, so there's no universal detect for them, especially if they use obfuscation or encryption. If you're specifically asking about the bittorrent mainline or vuze/biglybt DHTs then, yes, that should be feasible. They're not encrypted so you can implement packet sniffers for them and deploy those on central network components.
DHT detection in local network
Are there any tools to detect DHT on a local network? Maybe Kademlia, Hyperswarm or some other library? The goal is not to block DHT (almost impossible), but to write a script or service that will notify if a DHT client is detected on the local network. It should also be possible to determine the local IP of the DHT client. UPD: this question is about the BitTorrent DHT
[ "DHTs are a general concept. Concrete implementations vary wildly, so there's no universal detect for them, especially if they use obfuscation or encryption.\nIf you're specifically asking about the bittorrent mainline or vuze/biglybt DHTs then, yes, that should be feasible. They're not encrypted so you can implement packet sniffers for them and deploy those on central network components.\n" ]
[ 0 ]
[]
[]
[ "bittorrent", "dht", "kademlia", "p2p" ]
stackoverflow_0074646179_bittorrent_dht_kademlia_p2p.txt
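As a concrete starting point for the packet-sniffing idea in the answer above: mainline BitTorrent DHT traffic is bencoded KRPC over UDP, so its messages carry recognizable byte patterns such as 1:y1:q (a query) and method names like get_peers. A rough sketch with scapy follows; the marker list is illustrative rather than exhaustive, and seeing other hosts' traffic assumes the sniffing machine sits on a mirrored/SPAN port or otherwise has visibility of the segment:

# Flag hosts that appear to send mainline-DHT (KRPC) messages.
from scapy.all import sniff, IP, UDP, Raw

DHT_MARKERS = (b"1:y1:q", b"1:y1:r", b"9:get_peers", b"9:find_node", b"13:announce_peer")

def check(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(UDP) and pkt.haslayer(Raw):
        payload = bytes(pkt[Raw].load)
        # KRPC messages are bencoded dictionaries, so they start with 'd'
        if payload.startswith(b"d") and any(m in payload for m in DHT_MARKERS):
            print("Possible BitTorrent DHT client:", pkt[IP].src)

sniff(filter="udp", prn=check, store=False)

This only catches unobfuscated mainline/KRPC traffic, which matches the point made above that encrypted or obfuscated DHTs cannot be detected this generically.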
Q: Testing Material components as a child component I have some component TestComponent that, in it's template, uses a <mat-stepper>. Because of the context of the stepper, I have to programmatically advance to the next step rather than using the matStepperNext directive on a button. So my component looks like this: test.component.ts import { MatStepper } from '@angular/material/stepper'; //module loaded elsewhere, but is accesible @Component({ selector: 'app-test', template: '<mat-stepper #stepper> <mat-step> <button (click)="completeStep()">Next</button> </mat-step> <mat-step></mat-step> <!-- etc. --> </mat-stepper>', }) export class TestComponent { @ViewChild('stepper') stepper!: MatStepper; completeStep() { this.stepper.next(); } } Now the trick is that I have to test that stepper.next() was called. Because I'm just using the <mat-dialog> directive, I never actually create an object of it in the class, nor is it a provider in the constructor, so I'm not really sure how to test it. I've tried a bunch of different things with no success, and my latest test is as follow: test.component.spec.ts describe('TestComponent', () => { let component: TestComponent, let fixture: ComponentFixture<TestCompnent>; beforeEach(async () => { await TestBed.ConfigureTestingModule({ declarations: [TestComponent], }).compileComponents(); }); beforeEach(() => { fixture = TestBed.createComponent(TestComponent); component = fixture.componentInstance; fixture.detectChanges(); }); describe('completeStep', () => { it('should call stepper.next', () => { const stepperSpy = jasmine.createSpyObj('MatStepper', ['next']); component.stepper = stepperSpy; component.completeStep(); expect(stepperSpy.next).toHaveBeenCalled(); }); }); }); But I just get the error Expected spy MatStepper.next to have been called A: In before each add MatStepper to declarations array: beforeEach(async () => { await TestBed.ConfigureTestingModule({ declarations: [TestComponent, MatStepper], }).compileComponents(); }); And the test case should look like: it('completeStep should call stepper.next', () => { jest.spyOn(component.stepper, 'next'); component.completeStep(); expect(component.stepper.next).toHaveBeenCalled(); });
Testing Material components as a child component
I have some component TestComponent that, in it's template, uses a <mat-stepper>. Because of the context of the stepper, I have to programmatically advance to the next step rather than using the matStepperNext directive on a button. So my component looks like this: test.component.ts import { MatStepper } from '@angular/material/stepper'; //module loaded elsewhere, but is accesible @Component({ selector: 'app-test', template: '<mat-stepper #stepper> <mat-step> <button (click)="completeStep()">Next</button> </mat-step> <mat-step></mat-step> <!-- etc. --> </mat-stepper>', }) export class TestComponent { @ViewChild('stepper') stepper!: MatStepper; completeStep() { this.stepper.next(); } } Now the trick is that I have to test that stepper.next() was called. Because I'm just using the <mat-dialog> directive, I never actually create an object of it in the class, nor is it a provider in the constructor, so I'm not really sure how to test it. I've tried a bunch of different things with no success, and my latest test is as follow: test.component.spec.ts describe('TestComponent', () => { let component: TestComponent, let fixture: ComponentFixture<TestCompnent>; beforeEach(async () => { await TestBed.ConfigureTestingModule({ declarations: [TestComponent], }).compileComponents(); }); beforeEach(() => { fixture = TestBed.createComponent(TestComponent); component = fixture.componentInstance; fixture.detectChanges(); }); describe('completeStep', () => { it('should call stepper.next', () => { const stepperSpy = jasmine.createSpyObj('MatStepper', ['next']); component.stepper = stepperSpy; component.completeStep(); expect(stepperSpy.next).toHaveBeenCalled(); }); }); }); But I just get the error Expected spy MatStepper.next to have been called
[ "In before each add MatStepper to declarations array:\nbeforeEach(async () => {\n await TestBed.ConfigureTestingModule({\n declarations: [TestComponent, MatStepper],\n }).compileComponents();\n});\n\n\nAnd the test case should look like:\nit('completeStep should call stepper.next', () => { \n jest.spyOn(component.stepper, 'next');\n component.completeStep();\n expect(component.stepper.next).toHaveBeenCalled();\n});\n\n" ]
[ 0 ]
[]
[]
[ "angular", "angular_material", "jasmine", "mat_stepper", "typescript" ]
stackoverflow_0070311236_angular_angular_material_jasmine_mat_stepper_typescript.txt
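A variant of the approach above that stays with the Jasmine setup used in the question: instead of declaring MatStepper, import MatStepperModule (plus NoopAnimationsModule so the stepper can render in the test bed) and spy on the real ViewChild instance. The imports are the standard @angular/material/stepper and @angular/platform-browser/animations exports:

import { MatStepperModule } from '@angular/material/stepper';
import { NoopAnimationsModule } from '@angular/platform-browser/animations';

beforeEach(async () => {
  await TestBed.configureTestingModule({
    declarations: [TestComponent],
    imports: [MatStepperModule, NoopAnimationsModule],
  }).compileComponents();
});

it('completeStep should call stepper.next', () => {
  fixture.detectChanges();              // resolves the @ViewChild('stepper')
  spyOn(component.stepper, 'next');
  component.completeStep();
  expect(component.stepper.next).toHaveBeenCalled();
});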
Q: How to know what parameters a function expects I've a function like: function myFunction(params) { // TODO: something console.log(params.message) } And I need to know all the keys that the myFunction function expects in the params object. Is this possible? I've tried using https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/arguments but it didn't work A: toString will be helpful, but other than that you can't know. JavaScript doesn't store type information in runtime (TypeScript neither) function myFunction(params) { // TODO: something console.log(params.message) } console.log(myFunction.toString()) console.log(eval) A: Nothing is built into the language that would help you here. You have these choices: Find some documentation for this function and read it. Find the code for this function and see what properties it pays attention to or what comments in the code might help you. Find some other code that calls this function and see how it does the calling. These are essentially your options. Read the doc or read code. Javascript does not require any specification in the language of what properties are expected on an object passed into a function so there's nothing built into the language. Fortunately, most things you use in the Javascript world are open source so you can usually go find the target code and just study it to see what it does. If you made this a real world situation (rather than a hypothetical one) by providing the actual function you're calling and what module it's in, we could look at the best options for that specific situation and point you in a direction.
How to know what parameters a function expects
I've a function like: function myFunction(params) { // TODO: something console.log(params.message) } And I need to know all the keys that the myFunction function expects in the params object. Is this possible? I've tried using https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Function/arguments but it didn't work
[ "toString will be helpful, but other than that you can't know. JavaScript doesn't store type information in runtime (TypeScript neither)\n\n\nfunction myFunction(params) {\n // TODO: something\n console.log(params.message)\n}\n\nconsole.log(myFunction.toString())\nconsole.log(eval)\n\n\n\n", "Nothing is built into the language that would help you here.\nYou have these choices:\n\nFind some documentation for this function and read it.\nFind the code for this function and see what properties it pays attention to or what comments in the code might help you.\nFind some other code that calls this function and see how it does the calling.\n\nThese are essentially your options. Read the doc or read code.\nJavascript does not require any specification in the language of what properties are expected on an object passed into a function so there's nothing built into the language.\nFortunately, most things you use in the Javascript world are open source so you can usually go find the target code and just study it to see what it does.\nIf you made this a real world situation (rather than a hypothetical one) by providing the actual function you're calling and what module it's in, we could look at the best options for that specific situation and point you in a direction.\n" ]
[ 0, 0 ]
[]
[]
[ "function", "javascript", "node.js", "object", "parameters" ]
stackoverflow_0074659104_function_javascript_node.js_object_parameters.txt
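A practical follow-up to the answers above: once the expected keys have been found in the code or docs, they can be written down as a type so the editor surfaces them on params. The message/level keys below are only an illustration; the real keys still have to come from the function's source or documentation. In plain JavaScript a JSDoc annotation is enough:

/**
 * @param {{ message: string, level?: number }} params  // illustrative keys
 */
function myFunction(params) {
  console.log(params.message);
}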
Q: Getting any doc on firestore v9 hanging forever I'm using an expressjs backend in combination with firestore to GET data. This line of code used to work and simply returned a list of release versions, but the same line of code now just hangs indefinitely const querySnapshot = await firestore.getDocs(firestore.collection(db, "releases")); I've tried replacing await with a .then() to see if it returns anything at all, but it doesn't.
Getting any doc on firestore v9 hanging forever
I'm using an Express.js backend in combination with Firestore to GET data. This line of code used to work and simply returned a list of release versions, but the same line of code now just hangs indefinitely: const querySnapshot = await firestore.getDocs(firestore.collection(db, "releases")); I've tried replacing await with a .then() to see if it returns anything at all, but it doesn't.
[ "As per @ianman18's comment, the date set in the Firestore rule was already way past.\n" ]
[ 1 ]
[]
[]
[ "express", "firebase", "google_cloud_firestore", "node.js" ]
stackoverflow_0074576889_express_firebase_google_cloud_firestore_node.js.txt
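To make the accepted cause above concrete: Firestore projects created in test mode get security rules with a hard-coded expiry date, and once request.time passes that date every read and write is denied. A minimal sketch of what such a rules file looks like; the date shown is a placeholder, not a value from the question.

rules_version = '2';
service cloud.firestore {
  match /databases/{database}/documents {
    match /{document=**} {
      // Test-mode default: access is allowed only until the given date.
      // Once request.time is past it, reads of "releases" start failing.
      allow read, write: if request.time < timestamp.date(2022, 12, 30);
    }
  }
}

Extending the date, or replacing the blanket rule with proper per-collection rules, restores access.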
Q: NoSuchElementException error occured in java (Scanner error) I just tried to get an integer value from the user for a variable in another a method that created differently from main but it gives an error message like this: Exception in thread "main" java.util.NoSuchElementException at java.base/java.util.Scanner.throwFor(Scanner.java:941) at java.base/java.util.Scanner.next(Scanner.java:1598) at java.base/java.util.Scanner.nextInt(Scanner.java:2263) at java.base/java.util.Scanner.nextInt(Scanner.java:2217) at StoreUsingArrays_20210808043.menu(StoreUsingArrays_20210808043.java:14) at StoreUsingArrays_20210808043.storeRun(StoreUsingArrays_20210808043.java:57) at StoreUsingArrays_20210808043.main(StoreUsingArrays_20210808043.java:90) Code: import java.util.Arrays; import java.util.Scanner; public class StoreUsingArrays_20210808043 { public static int menu(String[] items,double[] prices,int answer) { Scanner input = new Scanner(System.in); for (int i = 1;i <= items.length;i++) { System.out.println(i+" - for "+items[i-1]+" ("+prices[i-1]+")"); } System.out.println("0 - to checkout"); System.out.print("Please enter what would you like : "); answer = input.nextInt(); System.out.println("Your choice was : "+answer); input.close(); return answer; } public static void returnedAmounts(double amount) { double bill200,bill100,bill50,bill20,bill10,bill5,bill1,coin50,coin25,coin10,coin1; bill200 = (amount - (amount%200)) / 200; amount = amount%200; bill100 = (amount - (amount%100)) / 100; amount = amount%100; bill50 = (amount - (amount%50)) / 50; amount = amount%50; bill20 = (amount - (amount%20)) / 20; amount = amount%20; bill10 = (amount - (amount%10)) / 10; amount = amount%10; bill5 = (amount -(amount%5)) / 5; amount = amount%5; bill1 = (amount - (amount%1)) / 1; amount = amount%1; coin50 = (amount - (amount%0.50)) / 0.50; amount = amount%0.50; coin25 = (amount - (amount%0.25)) / 0.25; amount = amount%0.25; coin10 = (amount - (amount%0.10)) / 0.10; amount = amount%0.10; coin1 = (amount - (amount%0.01)) / 0.01; double[] returnedNumbers = {bill200,bill100,bill50,bill20,bill10,bill5,bill1,coin50,coin25,coin10,coin1}; double[] returnedValues = {200,100,50,20,10,5,1,0.50,0.25,0.10,0.01}; for (int i = 0;i < returnedNumbers.length;i++) { if ((returnedNumbers[i] > 0) && (returnedValues[i] > 0)) { System.out.println((int)returnedNumbers[i]+" - "+returnedValues[i]); } } } public static void storeRun(String[] item,int[] quantity,double[] price) { Scanner input = new Scanner(System.in); capitalizeArray(item); int choice,req = 0; while (true) { choice = menu(item, price, 0); if (choice == 0) break; else if (choice > item.length && choice < 0) { System.out.println("ERROR:Invalid choice"); break; } else { System.out.println("How many "+item[choice-1]+" would you like? 
"); if (input.hasNextInt()) req = input.nextInt(); System.out.println(req); } } input.close(); } public static String capitalizeString(String text) { return text.substring(0, 1).toUpperCase() + text.substring(1).toLowerCase(); } public static String[] capitalizeArray(String[] name) { for (int i = 0;i < name.length;i++) { name[i] = capitalizeString(name[i]); } return name; } public static void main(String[] args) { String[] item = {"bRead","cOLA","ROLL","BaKe"}; double[] price = {4,2,6,5}; int[] quantity = {10,25,17,22}; //capitalizeArray(item); //System.out.println(Arrays.toString(item)); //menu(item, price, 0); storeRun(item, quantity, price); //returnedAmounts(167.5); } } I expected to get a value for the req variable from user and use it for another purposes but I tried a lot of things like that: Initializing the variable at the begin. Removing the input.close() line. (etc.) But all of them didn't work. A: Replace all input.close() by input.reset() In method menu, replace answer = input.nextInt(); by if (input.hasNextInt()) answer = input.nextInt(); and it should work
NoSuchElementException error occurred in Java (Scanner error)
I just tried to get an integer value from the user for a variable in another a method that created differently from main but it gives an error message like this: Exception in thread "main" java.util.NoSuchElementException at java.base/java.util.Scanner.throwFor(Scanner.java:941) at java.base/java.util.Scanner.next(Scanner.java:1598) at java.base/java.util.Scanner.nextInt(Scanner.java:2263) at java.base/java.util.Scanner.nextInt(Scanner.java:2217) at StoreUsingArrays_20210808043.menu(StoreUsingArrays_20210808043.java:14) at StoreUsingArrays_20210808043.storeRun(StoreUsingArrays_20210808043.java:57) at StoreUsingArrays_20210808043.main(StoreUsingArrays_20210808043.java:90) Code: import java.util.Arrays; import java.util.Scanner; public class StoreUsingArrays_20210808043 { public static int menu(String[] items,double[] prices,int answer) { Scanner input = new Scanner(System.in); for (int i = 1;i <= items.length;i++) { System.out.println(i+" - for "+items[i-1]+" ("+prices[i-1]+")"); } System.out.println("0 - to checkout"); System.out.print("Please enter what would you like : "); answer = input.nextInt(); System.out.println("Your choice was : "+answer); input.close(); return answer; } public static void returnedAmounts(double amount) { double bill200,bill100,bill50,bill20,bill10,bill5,bill1,coin50,coin25,coin10,coin1; bill200 = (amount - (amount%200)) / 200; amount = amount%200; bill100 = (amount - (amount%100)) / 100; amount = amount%100; bill50 = (amount - (amount%50)) / 50; amount = amount%50; bill20 = (amount - (amount%20)) / 20; amount = amount%20; bill10 = (amount - (amount%10)) / 10; amount = amount%10; bill5 = (amount -(amount%5)) / 5; amount = amount%5; bill1 = (amount - (amount%1)) / 1; amount = amount%1; coin50 = (amount - (amount%0.50)) / 0.50; amount = amount%0.50; coin25 = (amount - (amount%0.25)) / 0.25; amount = amount%0.25; coin10 = (amount - (amount%0.10)) / 0.10; amount = amount%0.10; coin1 = (amount - (amount%0.01)) / 0.01; double[] returnedNumbers = {bill200,bill100,bill50,bill20,bill10,bill5,bill1,coin50,coin25,coin10,coin1}; double[] returnedValues = {200,100,50,20,10,5,1,0.50,0.25,0.10,0.01}; for (int i = 0;i < returnedNumbers.length;i++) { if ((returnedNumbers[i] > 0) && (returnedValues[i] > 0)) { System.out.println((int)returnedNumbers[i]+" - "+returnedValues[i]); } } } public static void storeRun(String[] item,int[] quantity,double[] price) { Scanner input = new Scanner(System.in); capitalizeArray(item); int choice,req = 0; while (true) { choice = menu(item, price, 0); if (choice == 0) break; else if (choice > item.length && choice < 0) { System.out.println("ERROR:Invalid choice"); break; } else { System.out.println("How many "+item[choice-1]+" would you like? 
"); if (input.hasNextInt()) req = input.nextInt(); System.out.println(req); } } input.close(); } public static String capitalizeString(String text) { return text.substring(0, 1).toUpperCase() + text.substring(1).toLowerCase(); } public static String[] capitalizeArray(String[] name) { for (int i = 0;i < name.length;i++) { name[i] = capitalizeString(name[i]); } return name; } public static void main(String[] args) { String[] item = {"bRead","cOLA","ROLL","BaKe"}; double[] price = {4,2,6,5}; int[] quantity = {10,25,17,22}; //capitalizeArray(item); //System.out.println(Arrays.toString(item)); //menu(item, price, 0); storeRun(item, quantity, price); //returnedAmounts(167.5); } } I expected to get a value for the req variable from user and use it for another purposes but I tried a lot of things like that: Initializing the variable at the begin. Removing the input.close() line. (etc.) But all of them didn't work.
[ "\nReplace all input.close() by input.reset()\nIn method menu, replace answer = input.nextInt(); by if (input.hasNextInt()) answer = input.nextInt();\n\nand it should work\n" ]
[ 0 ]
[]
[]
[ "java", "java.util.scanner", "methods" ]
stackoverflow_0074660198_java_java.util.scanner_methods.txt
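Some background on why the answer above moves away from input.close(): System.in is a single shared stream, so closing any Scanner wrapped around it closes System.in itself, and every later Scanner that reads from it throws NoSuchElementException. A minimal sketch of the usual pattern, with one shared Scanner that is never closed between reads (method and variable names here are illustrative, not from the original program):

import java.util.Scanner;

public class SingleScannerDemo {
    // One Scanner for System.in, created once and shared.
    private static final Scanner INPUT = new Scanner(System.in);

    static int menu() {
        System.out.print("Please enter what you would like: ");
        // hasNextInt() guards against non-numeric input instead of crashing.
        while (!INPUT.hasNextInt()) {
            INPUT.next(); // discard the bad token
            System.out.print("Please enter a number: ");
        }
        return INPUT.nextInt();
    }

    public static void main(String[] args) {
        int choice = menu();
        System.out.println("Your choice was: " + choice);
        // Do not close INPUT between reads; closing it closes System.in,
        // and any further Scanner on System.in will throw NoSuchElementException.
    }
}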
Q: Browser auto reload when code changes angular I created a new Angular with Core project in Visual Studio 2019. Browser is not auto reloading when I made code changes. It requires hard refresh after every code change. Anyone can please help me to resolve my issue? package.json settings: { "name": "client-app", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "watch": "ng build --watch --configuration development", "test": "ng test" }, A: Change "start": "ng serve" to "start": "ng serve --live-reload"
Browser auto reload when code changes angular
I created a new Angular with Core project in Visual Studio 2019. The browser is not auto-reloading when I make code changes; it requires a hard refresh after every code change. Can anyone please help me resolve this issue? package.json settings: { "name": "client-app", "version": "0.0.0", "scripts": { "ng": "ng", "start": "ng serve", "build": "ng build", "watch": "ng build --watch --configuration development", "test": "ng test" },
[ "Change \"start\": \"ng serve\" to \"start\": \"ng serve --live-reload\"\n" ]
[ 0 ]
[]
[]
[ "angular", "visual_studio_2019" ]
stackoverflow_0068761467_angular_visual_studio_2019.txt
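A sketch of what the accepted change looks like in the question's own package.json:

{
  "name": "client-app",
  "version": "0.0.0",
  "scripts": {
    "ng": "ng",
    "start": "ng serve --live-reload",
    "build": "ng build",
    "watch": "ng build --watch --configuration development",
    "test": "ng test"
  }
}

If the browser still does not refresh, the problem is often that file-change events are not delivered (common when the project lives on a network or virtual drive). In that case running ng serve --poll 2000 makes the Angular CLI poll for changes instead of relying on filesystem events; --poll takes the interval in milliseconds, and 2000 is just an example value.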
Q: How to generate a random group of result from a list So, in a school exercise, 'cell' is a point that has its own coordinates x and y. In a previous question I had to generate a list of its neighbours, and now I have to randomly pick one of these neighbours; the result has to be in the form (x,y), as a single value. import random #Qst1 cell=(2,3) lgn, col = cell def voisines_PI(cell): n=[(lgn-1,col-1),(lgn-1,col+1),(lgn+1,col+1),(lgn+1,col-1)] return n print(voisines_PI(cell)) #Qst2 def voisine_PI_alea(cell): m= 0 b= len(voisines_PI(cell)) g= random.randint(m,b) return g print(voisine_PI_alea(voisines_PI(cell))) A: Like @JohnnyMopp said, the function already returns a list of neighbors, so you can use random.choice() to select a random element of the list, like so: def voisine_PI_alea(cell): return random.choice(voisines_PI(cell))
How to generate a random group of result from a list
So, in a school exercise, 'cell' is a point that has its own coordinates x and y. In a previous question I had to generate a list of its neighbours, and now I have to randomly pick one of these neighbours; the result has to be in the form (x,y), as a single value. import random #Qst1 cell=(2,3) lgn, col = cell def voisines_PI(cell): n=[(lgn-1,col-1),(lgn-1,col+1),(lgn+1,col+1),(lgn+1,col-1)] return n print(voisines_PI(cell)) #Qst2 def voisine_PI_alea(cell): m= 0 b= len(voisines_PI(cell)) g= random.randint(m,b) return g print(voisine_PI_alea(voisines_PI(cell)))
[ "Like @JohnnyMopp said, the function already returns a list of neighbors, so you can use random.choice() to select a random element of the list, like so:\ndef voisine_PI_alea(cell):\n return random.choice(voisines_PI(cell))\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074660869_python.txt
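A small runnable sketch of the corrected version from the answer above, showing that random.choice returns a single (x, y) tuple from the list of neighbours:

import random

cell = (2, 3)
lgn, col = cell

def voisines_PI(cell):
    # the four diagonal neighbours of the point
    return [(lgn - 1, col - 1), (lgn - 1, col + 1), (lgn + 1, col + 1), (lgn + 1, col - 1)]

def voisine_PI_alea(cell):
    # pick one neighbour at random; the result is a single (x, y) tuple
    return random.choice(voisines_PI(cell))

print(voisines_PI(cell))      # [(1, 2), (1, 4), (3, 4), (3, 2)]
print(voisine_PI_alea(cell))  # e.g. (3, 4)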
Q: how to run a custom hook when a prop changes? I created a custom hook which takes a url and return the data import { useEffect, useState } from "react"; export function useHttp({ url }: { url: string }) { const [data, setData] = useState<any>(null); useEffect(() => { const controller = new AbortController(); const signal = controller.signal; fetch(url, { signal }) .then((res) => res.json()) .then((data) => setData(data)) .catch((err) => { if (err.name === "AbortError") { console.log("successfully aborted"); } else { // handle error } }); return () => { // cancel the request before component unmounts controller.abort(); }; }, []); return data ; } I'm using the hook to fetch data in my main page, this works fine import { useState } from "react"; import { useHttp } from "./useHttp"; import "./App.css"; type person = { name: string; id: number }; function App() { const [selectedId, setSelectedId] = useState<number>(1); const people = useHttp({ url: "https://jsonplaceholder.typicode.com/users" }); return ( <div className="App"> {(people as unknown as person[])?.map(({ id, name }) => ( <button key={id} onClick={() => setSelectedId(id)}> {name} </button> ))} <br /> <InnerComponent selectedId={selectedId} /> </div> ); } The part where I'm stuck is, I'm trying to reuse the hook again in a child component to fetch detail about depending on some value from the main component const InnerComponent = ({ selectedId }: { selectedId: number }) => { console.log(selectedId) const person = useHttp({ url: `https://jsonplaceholder.typicode.com/users/${selectedId}`, }); return <div>{person?.name}</div>; }; also I can seen that the prop value has changed, my hook doesn't rerun, how can I implement that without rewriting the logic in useEffect? I expected the hook to rerun when the prop changes and fetch me the result, but it only runs once in the initial render A: Use the dependency array export function useHttp({ url }: { url: string }) { const [data, setData] = useState<any>(null); useEffect(() => { // ... }, [url]); return data ; } A: Like the other answer says, your hook depends on the url param, so it should be included in the dependency array of the useEffect call. useEffect(() => { // ... }, [url]); Additionally, you should take some time to learn how to configure eslint in your project. The default eslint config in most templates will force you to include all dependencies in the dependency array. This way, you won't have to remember to do this manually every time.
how to run a custom hook when a prop changes?
I created a custom hook which takes a url and return the data import { useEffect, useState } from "react"; export function useHttp({ url }: { url: string }) { const [data, setData] = useState<any>(null); useEffect(() => { const controller = new AbortController(); const signal = controller.signal; fetch(url, { signal }) .then((res) => res.json()) .then((data) => setData(data)) .catch((err) => { if (err.name === "AbortError") { console.log("successfully aborted"); } else { // handle error } }); return () => { // cancel the request before component unmounts controller.abort(); }; }, []); return data ; } I'm using the hook to fetch data in my main page, this works fine import { useState } from "react"; import { useHttp } from "./useHttp"; import "./App.css"; type person = { name: string; id: number }; function App() { const [selectedId, setSelectedId] = useState<number>(1); const people = useHttp({ url: "https://jsonplaceholder.typicode.com/users" }); return ( <div className="App"> {(people as unknown as person[])?.map(({ id, name }) => ( <button key={id} onClick={() => setSelectedId(id)}> {name} </button> ))} <br /> <InnerComponent selectedId={selectedId} /> </div> ); } The part where I'm stuck is, I'm trying to reuse the hook again in a child component to fetch detail about depending on some value from the main component const InnerComponent = ({ selectedId }: { selectedId: number }) => { console.log(selectedId) const person = useHttp({ url: `https://jsonplaceholder.typicode.com/users/${selectedId}`, }); return <div>{person?.name}</div>; }; also I can seen that the prop value has changed, my hook doesn't rerun, how can I implement that without rewriting the logic in useEffect? I expected the hook to rerun when the prop changes and fetch me the result, but it only runs once in the initial render
[ "Use the dependency array\nexport function useHttp({ url }: { url: string }) {\n const [data, setData] = useState<any>(null);\n\n useEffect(() => {\n // ...\n }, [url]);\n\n return data ;\n}\n\n", "Like the other answer says, your hook depends on the url param, so it should be included in the dependency array of the useEffect call.\nuseEffect(() => {\n // ...\n}, [url]);\n\nAdditionally, you should take some time to learn how to configure eslint in your project. The default eslint config in most templates will force you to include all dependencies in the dependency array. This way, you won't have to remember to do this manually every time.\n" ]
[ 1, 0 ]
[]
[]
[ "react_custom_hooks", "react_hooks", "reactjs" ]
stackoverflow_0074660791_react_custom_hooks_react_hooks_reactjs.txt
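Putting both answers above together, this is roughly what the hook looks like with url in the dependency array, so the effect re-runs (and the previous request is aborted) whenever the prop-driven url changes:

import { useEffect, useState } from "react";

export function useHttp({ url }: { url: string }) {
  const [data, setData] = useState<any>(null);

  useEffect(() => {
    const controller = new AbortController();

    fetch(url, { signal: controller.signal })
      .then((res) => res.json())
      .then((json) => setData(json))
      .catch((err) => {
        if (err.name !== "AbortError") {
          // handle real errors here
        }
      });

    // abort the in-flight request when url changes or the component unmounts
    return () => controller.abort();
  }, [url]); // re-run the effect every time url changes

  return data;
}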
Q: How to retrieve data from parent-child tables using Spring Data JPA? In my Spring Boot app, I use Hibernate and applied the necessary relations to the following entities properly. @Entity public class Recipe { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(nullable=false, length=50) private String title; @OneToMany(mappedBy = "recipe", cascade = CascadeType.ALL) private List<RecipeIngredient> recipeIngredients = new ArrayList<>(); } @Entity public class RecipeIngredient { @EmbeddedId private RecipeIngredientId recipeIngredientId = new RecipeIngredientId(); @ManyToOne(optional = true, fetch = FetchType.LAZY) @MapsId("recipeId") @JoinColumn(name = "recipe_id", referencedColumnName = "id") private Recipe recipe; @ManyToOne(optional = true, fetch = FetchType.LAZY) @MapsId("ingredientId") @JoinColumn(name = "ingredient_id", referencedColumnName = "id") private Ingredient ingredient; } @Entity public class Ingredient { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(unique=true, nullable=false, length=50) @EqualsAndHashCode.Include private String name; @OneToMany(mappedBy = "ingredient", cascade = CascadeType.ALL) private Set<RecipeIngredient> recipeIngredients = new HashSet<>(); } Now I am trying to retrieve data by merging related entities. For example, when retrieving a Recipe, I also need to retrieve all Ingredients belonging to this Recipe. As far as I know, I can use Projection and maybe it is better to only use Hibernate features and retrieve related table data via Java Stream. I have no idea how should I retrieve data via Hibernate. Suppose that I just need an Optional<Recipe> that has List<Ingredient>. Then, I probably need a DTO class something like that: @Data public class ResponseDTO { private Long id; private String title; List<RecipeIngredient> ingredients; // getter, setter, constructor } So, how should I populate this DTO with the requested Recipe and corresponding Ingredient data (getting Ingredient names besides id values) using Java Stream? Or if you suggest Projection way, I tried it but the data is multiplied by the ingredient count belonging to the searched recipe. Update: @Getter @Setter @NoArgsConstructor public class ResponseDTO { private Long id; private String title; List<IngredientDTO> ingredientDTOList; public ResponseDTO(Recipe recipe) { this.id = recipe.getId(); this.title = recipe.getTitle(); this.ingredientDTOList = recipe.getRecipeIngredients().stream() .map(ri -> new IngredientDTO(ri.getIngredient().getName())) .toList(); } } @Getter @Setter public class IngredientDTO { private Long id; private String name; public IngredientDTO(String name) { this.name = name; } } A: First, in the ResponseDTO you will need you change the type of ingredients from List<RecipeIngredient> to List<Ingredient>. To manually perform the mapping, you should use (to map from a suppose Recipe recipe to a RespondeDTO response): ResponseDTO recipeToResponseDTO(Recipe recipe) { ResponseDTO response = new ResponseDTO(); response.setId(recipe.getId()); response.setTitle(recipe.getTitle()); response.setIngredients(recipe.recipeIngredients.stream() .map(RecipeIngredient::getIngredient() .collect(Collectors.toList()); return response; } On the other hand, to model a n-n relation, I encourage you to use the approach proposed by E-Riz in the comment.
How to retrieve data from parent-child tables using Spring Data JPA?
In my Spring Boot app, I use Hibernate and applied the necessary relations to the following entities properly. @Entity public class Recipe { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(nullable=false, length=50) private String title; @OneToMany(mappedBy = "recipe", cascade = CascadeType.ALL) private List<RecipeIngredient> recipeIngredients = new ArrayList<>(); } @Entity public class RecipeIngredient { @EmbeddedId private RecipeIngredientId recipeIngredientId = new RecipeIngredientId(); @ManyToOne(optional = true, fetch = FetchType.LAZY) @MapsId("recipeId") @JoinColumn(name = "recipe_id", referencedColumnName = "id") private Recipe recipe; @ManyToOne(optional = true, fetch = FetchType.LAZY) @MapsId("ingredientId") @JoinColumn(name = "ingredient_id", referencedColumnName = "id") private Ingredient ingredient; } @Entity public class Ingredient { @Id @GeneratedValue(strategy = GenerationType.IDENTITY) private Long id; @Column(unique=true, nullable=false, length=50) @EqualsAndHashCode.Include private String name; @OneToMany(mappedBy = "ingredient", cascade = CascadeType.ALL) private Set<RecipeIngredient> recipeIngredients = new HashSet<>(); } Now I am trying to retrieve data by merging related entities. For example, when retrieving a Recipe, I also need to retrieve all Ingredients belonging to this Recipe. As far as I know, I can use Projection and maybe it is better to only use Hibernate features and retrieve related table data via Java Stream. I have no idea how should I retrieve data via Hibernate. Suppose that I just need an Optional<Recipe> that has List<Ingredient>. Then, I probably need a DTO class something like that: @Data public class ResponseDTO { private Long id; private String title; List<RecipeIngredient> ingredients; // getter, setter, constructor } So, how should I populate this DTO with the requested Recipe and corresponding Ingredient data (getting Ingredient names besides id values) using Java Stream? Or if you suggest Projection way, I tried it but the data is multiplied by the ingredient count belonging to the searched recipe. Update: @Getter @Setter @NoArgsConstructor public class ResponseDTO { private Long id; private String title; List<IngredientDTO> ingredientDTOList; public ResponseDTO(Recipe recipe) { this.id = recipe.getId(); this.title = recipe.getTitle(); this.ingredientDTOList = recipe.getRecipeIngredients().stream() .map(ri -> new IngredientDTO(ri.getIngredient().getName())) .toList(); } } @Getter @Setter public class IngredientDTO { private Long id; private String name; public IngredientDTO(String name) { this.name = name; } }
[ "First, in the ResponseDTO you will need you change the type of ingredients from List<RecipeIngredient> to List<Ingredient>.\nTo manually perform the mapping, you should use (to map from a suppose Recipe recipe to a RespondeDTO response):\nResponseDTO recipeToResponseDTO(Recipe recipe) {\n ResponseDTO response = new ResponseDTO();\n response.setId(recipe.getId());\n response.setTitle(recipe.getTitle());\n response.setIngredients(recipe.recipeIngredients.stream()\n .map(RecipeIngredient::getIngredient()\n .collect(Collectors.toList());\n return response;\n}\n\nOn the other hand, to model a n-n relation, I encourage you to use the approach proposed by E-Riz in the comment.\n" ]
[ 2 ]
[]
[]
[ "hibernate", "java", "join", "spring", "spring_boot" ]
stackoverflow_0074660156_hibernate_java_join_spring_spring_boot.txt
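One way to avoid lazy-loading surprises while building the DTO above is to fetch the recipe together with its ingredients in a single query. This is a hedged sketch of a Spring Data JPA repository method using a JPQL fetch join; the repository interface and method names are assumptions, not taken from the question.

import java.util.Optional;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.data.jpa.repository.Query;
import org.springframework.data.repository.query.Param;

public interface RecipeRepository extends JpaRepository<Recipe, Long> {

    // Loads the Recipe, its RecipeIngredient rows and the Ingredient entities in one query,
    // so mapping to ResponseDTO/IngredientDTO needs no further lazy initialization.
    @Query("select distinct r from Recipe r " +
           "left join fetch r.recipeIngredients ri " +
           "left join fetch ri.ingredient " +
           "where r.id = :id")
    Optional<Recipe> findByIdWithIngredients(@Param("id") Long id);
}

With the ResponseDTO(Recipe) constructor from the question's update, recipeRepository.findByIdWithIngredients(id).map(ResponseDTO::new) then yields the populated DTO.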
Q: How to symbolicate libart.so or libc.so stacktraces in Crashlytics Android NDK? Note: Symbols are showing up in crashlytics for our c++ library, the problem is that they aren't showing for system libraries like libc, libart, libbase, and libandroid_runtime. We have some tricky crashes that happen entirely in the Android runtime and these are hard to debug without symbols. In firebase crashlytics we see the following stack trace: Crashed: Thread : SIGABRT 0x0000000000000000 #00 pc 0x4e574 libc.so #01 pc 0x4e540 libc.so #02 pc 0x5677d8 libart.so #03 pc 0x13ab0 libbase.so #04 pc 0x13090 libbase.so #05 pc 0x38cb6c libart.so #06 pc 0x39f7d8 libart.so #07 pc 0x1260e0 libandroid_runtime.so #08 pc 0x124ef4 libandroid_runtime.so #09 pc 0x124dc4 libandroid_runtime.so #10 pc 0x115468 libandroid_runtime.so When I force a test crash in our C++ library by dereferencing a null pointer, I see the following backtrace in my local Android Studio console: ...snip... #06 pc 00000000002d7644 /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+148) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #07 pc 00000000002cdd64 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+548) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #08 pc 00000000002f23d0 /apex/com.android.art/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+312) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #09 pc 00000000003839f4 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall<true, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+800) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #10 pc 00000000003813f4 /apex/com.android.art/lib64/libart.so (MterpInvokeVirtualRange+1368) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #11 pc 00000000002c8714 /apex/com.android.art/lib64/libart.so (mterp_op_invoke_virtual_range+20) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) ...snip... However, the same crash in crashlytics looks like this: ...snip... #07 pc 0x222244 libart.so #08 pc 0x218964 libart.so #09 pc 0x284208 libart.so #10 pc 0x3e34ac libart.so #11 pc 0x800ba4 libart.so ...snip... How can we get crashlytics to include the information that is clearly in the crashdump? Some notes on our build setup: We have already followed https://firebase.google.com/docs/crashlytics/ndk-reports Gradle Project 1 builds the native library, ensuring that debug is on and symbols are not stripped. Gradle Project 2 builds the app and links in the library and tells crashlytics to upload native symbols firebaseCrashlytics { nativeSymbolUploadEnabled true unstrippedNativeLibsDir file("PATH/TO/UNSTRIPPED/DIRECTORY") } A: This behavior is expected. When Crashlytics gets NDK crashes, these need to be symbolicated. In order to do this, the respective symbol files should be uploaded to Crashlytics. With the configuration you mentioned, the symbols available in your app will be uploaded to Crashlytics. nativeSymbolUploadEnabled true But in the case of system libraries, the respective symbol files are not available (they are not public as far as I know). So Crashlytics won't have access to the required symbol files for symbolicating frames from system libraries.
How to symbolicate libart.so or libc.so stacktraces in Crashlytics Android NDK?
Note: Symbols are showing up in crashlytics for our c++ library, the problem is that they aren't showing for system libraries like libc, libart, libbase, and libandroid_runtime. We have some tricky crashes that happen entirely in the Android runtime and these are hard to debug without symbols. In firebase crashlytics we see the following stack trace: Crashed: Thread : SIGABRT 0x0000000000000000 #00 pc 0x4e574 libc.so #01 pc 0x4e540 libc.so #02 pc 0x5677d8 libart.so #03 pc 0x13ab0 libbase.so #04 pc 0x13090 libbase.so #05 pc 0x38cb6c libart.so #06 pc 0x39f7d8 libart.so #07 pc 0x1260e0 libandroid_runtime.so #08 pc 0x124ef4 libandroid_runtime.so #09 pc 0x124dc4 libandroid_runtime.so #10 pc 0x115468 libandroid_runtime.so When I force a test crash in our C++ library by dereferencing a null pointer, I see the following backtrace in my local Android Studio console: ...snip... #06 pc 00000000002d7644 /apex/com.android.art/lib64/libart.so (art_quick_generic_jni_trampoline+148) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #07 pc 00000000002cdd64 /apex/com.android.art/lib64/libart.so (art_quick_invoke_stub+548) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #08 pc 00000000002f23d0 /apex/com.android.art/lib64/libart.so (art::interpreter::ArtInterpreterToCompiledCodeBridge(art::Thread*, art::ArtMethod*, art::ShadowFrame*, unsigned short, art::JValue*)+312) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #09 pc 00000000003839f4 /apex/com.android.art/lib64/libart.so (bool art::interpreter::DoCall<true, false>(art::ArtMethod*, art::Thread*, art::ShadowFrame&, art::Instruction const*, unsigned short, art::JValue*)+800) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #10 pc 00000000003813f4 /apex/com.android.art/lib64/libart.so (MterpInvokeVirtualRange+1368) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) #11 pc 00000000002c8714 /apex/com.android.art/lib64/libart.so (mterp_op_invoke_virtual_range+20) (BuildId: adb75d6f792faa24b1bc8cf512fb112c) ...snip... However, the same crash in crashlytics looks like this: ...snip... #07 pc 0x222244 libart.so #08 pc 0x218964 libart.so #09 pc 0x284208 libart.so #10 pc 0x3e34ac libart.so #11 pc 0x800ba4 libart.so ...snip... How can we get crashlytics to include the information that is clearly in the crashdump? Some notes on our build setup: We have already followed https://firebase.google.com/docs/crashlytics/ndk-reports Gradle Project 1 builds the native library, ensuring that debug is on and symbols are not stripped. Gradle Project 2 builds the app and links in the library and tells crashlytics to upload native symbols firebaseCrashlytics { nativeSymbolUploadEnabled true unstrippedNativeLibsDir file("PATH/TO/UNSTRIPPED/DIRECTORY") }
[ "This behavior is expected. When Crashlytics gets NDK crashes, these need to be symbolicated. In order to do this, the respective symbol files should be uploaded to Crashlytics.\nWith the configuration you mentioned, the symbols available in your app will be uploaded to Crashlytics.\nnativeSymbolUploadEnabled true\n\nBut in the case of system libraries, the respective symbol files are not available (they are not public as far as I know). So Crashlytics won't have access to the required symbol files for symbolicating frames from system libraries.\n" ]
[ 0 ]
[]
[]
[ "android", "android_ndk", "bionic", "c++", "crashlytics" ]
stackoverflow_0072704518_android_android_ndk_bionic_c++_crashlytics.txt
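For completeness, a sketch of the Gradle pieces that get your own native symbols uploaded; paths and module names are placeholders. As the answer explains, this only symbolicates frames from your own .so files; libart/libc/libbase frames stay unsymbolicated because those system symbol files are not published.

android {
    buildTypes {
        release {
            firebaseCrashlytics {
                // upload symbols for the app's own native libraries
                nativeSymbolUploadEnabled true
                unstrippedNativeLibsDir file("path/to/unstripped/libs") // placeholder path
            }
        }
    }
}

After building, the symbols are uploaded with the generated Gradle task, for example: ./gradlew app:assembleRelease app:uploadCrashlyticsSymbolFileRelease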
Q: Concatenate strings algorithim and char* pointers New to C here and I have found the following algorthim to concatenate strings whilst searching books online: Algorithm: STRING_CONCAT (T, S) [string S appends at the end of string T] 1. Set I = 0, J = 0 2. Repeat step 3 while T[I] ≠ Null do 3. I = I + 1 [End of loop] 4. Repeat step 5 to 7 while S[J] ≠ Null do 5. T[I] = S[J] 6. I = I + 1 7. J = J + 1 [End of loop] 8. Set T[I] = NULL 9. return Essentially, I have tried to implement this with my current working knowledge with C. However, I am unsure on how to get the char* pointers to correctly point inside the function. For example, #include <stdio.h> #include <stdint.h> #include <stdlib.h> const char* stringConcat(char* T, char* S){ int i = 0; int j = 0; char* Q; while(*S[i] != NULL & *T[i] != NULL){ i += 1; while(*S[j] != NULL){ *T[i] = *S[j]; i += 1; j += 1; } } *T[i] = NULL; return *T } int main(void){ char* sentence = "some sentence"; char* anotherSentence = "another sentence"; const result; result = stringConcat(sentence, anotherSentence); return EXIT_SUCCESS; } I get a logged error output with the following: exe_4.c:8:11: error: indirection requires pointer operand ('int' invalid) while(*S[i] != NULL & *T[i] != NULL){ ^~~~~ exe_4.c:8:27: error: indirection requires pointer operand ('int' invalid) while(*S[i] != NULL & *T[i] != NULL){ ... ... A: To concatenate two strings, code needs a valid place to save the result. Attempting to write to a string literal in undefined behavior (UB). // v------v This is a string literal and not valid place to save. stringConcat(sentence, anotherSentence); Instead use a writeable character array. // Make it big enough char sentence[100] = "some sentence"; char* anotherSentence = "another sentence"; stringConcat(sentence, anotherSentence); Concatenation code attempts to de-reference a char with *S[i]. This is not possible. Instead, walk the destination string to its end and then append each character of the source. const char *stringConcat_alt(char *destination, const char* source) { const char *t = destination; // Get to the end of destination while (*destination) { destination++; } while (*source) { *destination++ = *source++; } *destination = 0; return t; } Do not use NULL for the null character. Use '\0'. NULL is a null pointer and possibly will not convert to a char 0.
Concatenate strings algorithm and char* pointers
New to C here and I have found the following algorthim to concatenate strings whilst searching books online: Algorithm: STRING_CONCAT (T, S) [string S appends at the end of string T] 1. Set I = 0, J = 0 2. Repeat step 3 while T[I] ≠ Null do 3. I = I + 1 [End of loop] 4. Repeat step 5 to 7 while S[J] ≠ Null do 5. T[I] = S[J] 6. I = I + 1 7. J = J + 1 [End of loop] 8. Set T[I] = NULL 9. return Essentially, I have tried to implement this with my current working knowledge with C. However, I am unsure on how to get the char* pointers to correctly point inside the function. For example, #include <stdio.h> #include <stdint.h> #include <stdlib.h> const char* stringConcat(char* T, char* S){ int i = 0; int j = 0; char* Q; while(*S[i] != NULL & *T[i] != NULL){ i += 1; while(*S[j] != NULL){ *T[i] = *S[j]; i += 1; j += 1; } } *T[i] = NULL; return *T } int main(void){ char* sentence = "some sentence"; char* anotherSentence = "another sentence"; const result; result = stringConcat(sentence, anotherSentence); return EXIT_SUCCESS; } I get a logged error output with the following: exe_4.c:8:11: error: indirection requires pointer operand ('int' invalid) while(*S[i] != NULL & *T[i] != NULL){ ^~~~~ exe_4.c:8:27: error: indirection requires pointer operand ('int' invalid) while(*S[i] != NULL & *T[i] != NULL){ ... ...
[ "To concatenate two strings, code needs a valid place to save the result.\nAttempting to write to a string literal in undefined behavior (UB).\n// v------v This is a string literal and not valid place to save.\nstringConcat(sentence, anotherSentence);\n\nInstead use a writeable character array.\n// Make it big enough\nchar sentence[100] = \"some sentence\";\nchar* anotherSentence = \"another sentence\";\n\nstringConcat(sentence, anotherSentence);\n\n\nConcatenation code attempts to de-reference a char with *S[i]. This is not possible.\nInstead, walk the destination string to its end and then append each character of the source.\nconst char *stringConcat_alt(char *destination, const char* source) {\n const char *t = destination; \n // Get to the end of destination\n while (*destination) {\n destination++;\n }\n\n while (*source) {\n *destination++ = *source++;\n }\n\n *destination = 0;\n return t;\n}\n\nDo not use NULL for the null character. Use '\\0'. NULL is a null pointer and possibly will not convert to a char 0.\n" ]
[ 1 ]
[]
[]
[ "c" ]
stackoverflow_0074655528_c.txt
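A short usage sketch of the corrected approach from the answer, with the destination declared as a writable array large enough to hold both strings:

#include <stdio.h>

const char *stringConcat_alt(char *destination, const char *source); /* as defined in the answer above */

int main(void) {
    /* writable buffer with room for the concatenated result */
    char sentence[100] = "some sentence";
    const char *anotherSentence = " another sentence";

    stringConcat_alt(sentence, anotherSentence);
    printf("%s\n", sentence); /* prints: some sentence another sentence */
    return 0;
}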
Q: Get the hostname/DNSName of a Hyper-V VM (while VM is OFF) using PowerShell Can anyone figure out a clever way to grab the Hostname of a VM while it's still Off using PowerShell? I only know how to grab the VM's Hostname while the VM is still On. PS: I want the hostname/DNSName of the VM (not to be confused with the VM Name); which aren't the same thing. A: You could try get-vm |select-object -ExpandProperty network* |select-object -ExpandProperty ipaddresses |Resolve-DnsName to grab the VM's IP address and do a reverse DNS lookup on it. A: Bit late to the show here, but I took Greyula-Reyula's quite efficient answer and turned it into a function that gives more feedback on why you may be getting no output from it. I'm relatively new to this level of scripting, so I'm sure there's a more efficient way to do this, but I'm a fan of "verbosity" and try to make my scripts as easy-to-follow for myself as possible in case I want to mess with them again later. :) Function Get-HVComputerName { [CmdletBinding()] param( [Alias("ServerName")][Parameter()] $HVHostName = $env:COMPUTERNAME, [Alias("ComputerName")][Parameter()] [string[]]$VMName ) #VMWare.VimAutomation.Core also has a "Get-VM" cmdlet, #so unload that module if it's present first If (Get-Module -Name VMware*) { Remove-Module -Name VMware* -Verbose:$false } If (!(Get-Module -Name Hyper-V)) { Import-Module -Name Hyper-V -Verbose:$false } $VMs = Get-VM -ComputerName $HVHostName -Name "*$VMName*" If ($VMs) { $DNSNameArr = @() If ($VMs.Count -gt 1) { Write-Host "`nFound the following VMs on Hyper-V server $HVHostName with name like `"`*$VMName`*`":" -ForegroundColor Green $VMs.Name | Out-Default Write-Host "" } ForEach ($VM in $VMs) { $Name = $VM.Name If ($VerbosePreference -eq "Continue") { Write-Host "" } Write-Verbose "VM: $Name found on server $HVHostName" If ($VM.State -ne "Running") { Write-Verbose "VM: $Name is not in a 'running' state. No IP address will be present.`n" Continue } $VMNetAdapters = $VM | Select-Object -ExpandProperty NetworkAdapters If ($VMNetAdapters) { Write-Verbose "VM $Name - Found the following network adapter(s)" If ($VerbosePreference -eq "Continue") { $VMNetAdapters | Out-Default } ForEach ($NetAdapter in $VMNetAdapters) { $AdapterName = $NetAdapter.Name $IPAddresses = $NetAdapter | Select-Object -ExpandProperty IPAddresses If ($IPAddresses) { Write-Verbose "VM: $Name - Adapter: `"$AdapterName`" - Found the following IP address(es) on network adapter" If ($VerbosePreference -eq "Continue") { $IPAddresses | Out-Default } ForEach ($IP in $IPAddresses) { $DNSName = $IP | Resolve-DnsName -Verbose:$false -ErrorAction SilentlyContinue If ($DNSName) { $DNSFound = $true $VMDNS = [PSCustomObject]@{ VMName = $Name IP = $IP DNSName = $DNSName.NameHost } $DNSNameArr += $VMDNS } Else { Write-Warning "VM: $Name - Adapter: `"$AdapterName`" - IP: $IP - No DNS name found" Continue } } If (!($DNSFound)) { Write-Warning "VM: $Name - No DNS entries found for any associated IP addresses" } Else { $DNSFound = $false } } Else { Write-Warning "VM: $Name - Adapter: `"$AdapterName`" - No IP address assigned to adapter" Continue } } } Else { Write-Warning "VM: $Name - No Network adapters found" Continue } } If ($DNSNameArr) { If ($VerbosePreference -eq "Continue") { Write-Host "" } Return $DNSNameArr } Else { Write-Warning "No DNS names found for VM(s) with name like $VMName on server $HVHostName" } } Else { Write-Warning "No VM found on server $HVHostName with name like $VMName" } } #End function Get-HVComputerName
Get the hostname/DNSName of a Hyper-V VM (while VM is OFF) using PowerShell
Can anyone figure out a clever way to grab the Hostname of a VM while it's still Off using PowerShell? I only know how to grab the VM's Hostname while the VM is still On. PS: I want the hostname/DNSName of the VM (not to be confused with the VM Name); which aren't the same thing.
[ "You could try\nget-vm |select-object -ExpandProperty network* |select-object -ExpandProperty ipaddresses |Resolve-DnsName\n\nto grab the VM's IP address and do a reverse DNS lookup on it.\n", "Bit late to the show here, but I took Greyula-Reyula's quite efficient answer and turned it into a function that gives more feedback on why you may be getting no output from it. I'm relatively new to this level of scripting, so I'm sure there's a more efficient way to do this, but I'm a fan of \"verbosity\" and try to make my scripts as easy-to-follow for myself as possible in case I want to mess with them again later. :)\nFunction Get-HVComputerName\n{\n [CmdletBinding()]\n param(\n [Alias(\"ServerName\")][Parameter()]\n $HVHostName = $env:COMPUTERNAME,\n [Alias(\"ComputerName\")][Parameter()]\n [string[]]$VMName\n )\n\n #VMWare.VimAutomation.Core also has a \"Get-VM\" cmdlet, \n #so unload that module if it's present first\n If (Get-Module -Name VMware*) { Remove-Module -Name VMware* -Verbose:$false }\n\n If (!(Get-Module -Name Hyper-V)) { Import-Module -Name Hyper-V -Verbose:$false }\n\n $VMs = Get-VM -ComputerName $HVHostName -Name \"*$VMName*\"\n If ($VMs)\n {\n $DNSNameArr = @()\n If ($VMs.Count -gt 1)\n {\n Write-Host \"`nFound the following VMs on Hyper-V server $HVHostName with name like `\"`*$VMName`*`\":\" -ForegroundColor Green\n $VMs.Name | Out-Default\n Write-Host \"\"\n }\n\n ForEach ($VM in $VMs)\n {\n $Name = $VM.Name\n If ($VerbosePreference -eq \"Continue\")\n {\n Write-Host \"\"\n }\n Write-Verbose \"VM: $Name found on server $HVHostName\"\n If ($VM.State -ne \"Running\")\n {\n Write-Verbose \"VM: $Name is not in a 'running' state. No IP address will be present.`n\"\n Continue\n }\n $VMNetAdapters = $VM | Select-Object -ExpandProperty NetworkAdapters\n If ($VMNetAdapters)\n {\n Write-Verbose \"VM $Name - Found the following network adapter(s)\"\n If ($VerbosePreference -eq \"Continue\")\n {\n $VMNetAdapters | Out-Default\n }\n ForEach ($NetAdapter in $VMNetAdapters)\n {\n $AdapterName = $NetAdapter.Name\n $IPAddresses = $NetAdapter | Select-Object -ExpandProperty IPAddresses\n If ($IPAddresses)\n {\n Write-Verbose \"VM: $Name - Adapter: `\"$AdapterName`\" - Found the following IP address(es) on network adapter\"\n If ($VerbosePreference -eq \"Continue\")\n {\n $IPAddresses | Out-Default\n }\n\n ForEach ($IP in $IPAddresses)\n {\n $DNSName = $IP | Resolve-DnsName -Verbose:$false -ErrorAction SilentlyContinue\n If ($DNSName)\n {\n $DNSFound = $true\n $VMDNS = [PSCustomObject]@{\n VMName = $Name\n IP = $IP\n DNSName = $DNSName.NameHost\n }\n $DNSNameArr += $VMDNS\n }\n Else\n {\n Write-Warning \"VM: $Name - Adapter: `\"$AdapterName`\" - IP: $IP - No DNS name found\"\n Continue\n }\n }\n If (!($DNSFound))\n {\n Write-Warning \"VM: $Name - No DNS entries found for any associated IP addresses\"\n }\n Else\n {\n $DNSFound = $false\n }\n }\n Else\n {\n Write-Warning \"VM: $Name - Adapter: `\"$AdapterName`\" - No IP address assigned to adapter\"\n Continue\n }\n }\n }\n Else\n {\n Write-Warning \"VM: $Name - No Network adapters found\"\n Continue\n }\n }\n If ($DNSNameArr)\n {\n If ($VerbosePreference -eq \"Continue\")\n {\n Write-Host \"\"\n }\n Return $DNSNameArr\n }\n Else\n {\n Write-Warning \"No DNS names found for VM(s) with name like $VMName on server $HVHostName\" \n }\n }\n Else\n {\n Write-Warning \"No VM found on server $HVHostName with name like $VMName\" \n }\n} #End function Get-HVComputerName\n\n" ]
[ 0, 0 ]
[]
[]
[ "hyper_v", "powershell" ]
stackoverflow_0056065071_hyper_v_powershell.txt
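A brief usage sketch for the answers above; the host, VM and adapter names are placeholders, and as both answers note the VM must be running for an IP address (and therefore a DNS name) to exist.

# Function from the second answer: DNS names for VMs whose name contains "web" on host HV01
Get-HVComputerName -HVHostName "HV01" -VMName "web" -Verbose

# Or the quick pipeline from the first answer, scoped to a single VM
Get-VM -Name "MyVM" |
    Select-Object -ExpandProperty NetworkAdapters |
    Select-Object -ExpandProperty IPAddresses |
    Resolve-DnsName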
Q: How to make a flipbook from a pdf I have a lot of pdfs and i need to convert each one of them into a flipbook so people can choose one and read it. I can't find a way to do this that is free. So i wonder if there's a way for me to make only with html, css and javascript. A: Look at any pdf viewer and that ability is not generally available, What you will find in any flipbook is a viewer like Mozilla PDF.js which replaces the pdf with images. So the simplest answer is use any conversion lib like https://3dflipbook.net/ or https://dearflip.com/responsive-html5-flipbook-jquery-plugin/ or any one of the 899 here https://github.com/search?q=flipbook this one looks nice https://kubil-ismail.github.io/Pdf-Flipbook/ or what about https://notshriram.github.io/React-Flipbook-Demo/
How to make a flipbook from a pdf
I have a lot of PDFs and I need to convert each one of them into a flipbook so people can choose one and read it. I can't find a way to do this that is free, so I wonder if there's a way for me to make one using only HTML, CSS and JavaScript.
[ "Look at any pdf viewer and that ability is not generally available, What you will find in any flipbook is a viewer like Mozilla PDF.js which replaces the pdf with images.\nSo the simplest answer is use any conversion lib like https://3dflipbook.net/ or https://dearflip.com/responsive-html5-flipbook-jquery-plugin/\nor any one of the 899 here https://github.com/search?q=flipbook\nthis one looks nice https://kubil-ismail.github.io/Pdf-Flipbook/\nor what about https://notshriram.github.io/React-Flipbook-Demo/\n" ]
[ 1 ]
[]
[]
[ "css", "html", "javascript" ]
stackoverflow_0074657909_css_html_javascript.txt
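If you want to stay close to plain HTML/CSS/JavaScript, the PDF.js route mentioned in the answer can be sketched roughly like this: render each PDF page to a canvas and hand the resulting images to whichever flipbook library you pick. This is an untested outline; the file path and the flipbook hand-off are placeholders, and pdfjsLib.GlobalWorkerOptions.workerSrc still needs to point at the pdf.js worker file.

async function pdfToPageImages(url) {
  // pdfjsLib comes from including Mozilla's pdf.js on the page
  const pdf = await pdfjsLib.getDocument(url).promise;
  const images = [];

  for (let i = 1; i <= pdf.numPages; i++) {
    const page = await pdf.getPage(i);
    const viewport = page.getViewport({ scale: 1.5 });

    const canvas = document.createElement("canvas");
    canvas.width = viewport.width;
    canvas.height = viewport.height;

    await page.render({ canvasContext: canvas.getContext("2d"), viewport }).promise;
    images.push(canvas.toDataURL("image/png")); // one image per PDF page
  }
  return images; // feed these to the flipbook library of your choice
}

pdfToPageImages("books/my-book.pdf").then((pages) => {
  // e.g. build <img> elements or pass `pages` to the flipbook plugin's page list
  console.log(pages.length + " pages rendered");
});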
Q: Finding the actual type (float, uint32_t...) based on the value of an enum (kFloat, kUint32...) I am reading data from a file and the type of the data is stored as a uint8_t which indicates the type of data I am about to read. This is the enum corresponding to the declaration of these values. enum DataType { kInt8, kUint16, kInt16, kUint32, kInt32, kUint64, kInt64, kFloat16, kFloat32, kFloat64 }; I then get a function that reads the values stored in the file with the specific type, something like this: template<typename T> readData(T*& data, size_t numElements) { ifs.read((char*)data, sizeof(T) * numElements); } uint8_t type; ifs.read((char*)&type, 1); uint32_t numElements; ids.read((char*)&numElements, sizeof(uint32_t)); switch (type) { case kUint8: uint8_t* data = new uint8_t[numElements]; readData<uint8_t>(data, numElements); // eventually delete mem ... break; case kUint16: uint16_t* data = new int16_t[numElements]; readData<uint16_t>(data, numElements); // eventually delete mem ... break; case ... etc. default: break; } I have just represented 2 cases but eventually, you'd need to do it for all types. It's a lot of code duplication and so what I'd like to do is find the actual type of the values given the enum value. For example, if the enum value was kUint32 then the type would be uint32_t, etc. If I was able to do so, the code could become more compact with something like this (pseudo-code): DataType typeFromFile = kUint32; ActualC++Type T = typeFromFile.getType(); readData<T>(data, numElements); What technique could you recommend to make it work (or what alternative solution may you recommend?). A: I would use a higher-order macro for this sort of thing. Start with a macro that defines the mapping between the enum values and the types: #define FOREACH_DATATYPE(OP) \ OP(kUint8, uint8_t) \ OP(kInt8, int8_t) \ OP(kUint16, uint16_t) \ OP(kInt16, int16_t) // Fill in the rest yourself Then use it to generate the switch statement for all the enum values and types: switch (type) { #define CASE_TYPE(name, type) \ case name: { \ type* data = new type[numElements]; \ readData<type>(data, numElements); \ break;} FOREACH_DATATYPE(CASE_TYPE) #undef CASE_TYPE default: break; } A: You're trying to take a runtime value and map it to a compile-time type. Since C++ is a compile-time typed language, there's no escaping that, at some point, you're going to have to do something like a switch/case statement at some point. This is because each option needs to have its own separate code. So the desire is to minimize how much of this you actually do. The best way to do it with standard library tools is to employ a variant. So, you create a variant<Ts>, such that the Ts types are unique_ptr<T[]>s, where the various Ts are the sequence of types in your enumeration in order. This way, the enumeration index matches the variant index. The unique_ptr part is important, as it will make the variant destroy the array for you without having to know which type it stores. So the only thing the switch/case needs to do is create the array. The processing of the array can happen in a visitor, which can be a template, as follows: variant_type data_array{}; switch (type) { case kUint8: data_array = std::make_unique<std::uint8_t[]>(numElements); case kUint16: data_array = std::make_unique<std::uint16_t[]>(numElements); default: //error out. `break` is not an option. 
break; } std::visit([&](auto &arr) { using arrtype = std::remove_cvref_t<decltype(arr)>; readData<typename arrtype::element_type>(arr.get(), numElements); ///... }, data_array); //data_array's destructor will destroy the allocated arrays. A: Besides other answer, another approach is to define a compile-time list of types first. Then you might assign enum values based on types (which prevents list vs. enum order issues) and have a visit_nth which calls the lambda with id<type>: #include <iostream> #include <type_traits> template<typename T> struct id { using type = T; }; template<typename... Ts> struct list { static constexpr size_t size = sizeof...(Ts); template<typename T> static constexpr size_t index_of = -1; }; template<typename H, typename... Ts> struct list<H, Ts...> { public: static constexpr size_t size = sizeof...(Ts) + 1; template<typename T> static constexpr size_t index_of = std::is_same_v<H, T> ? 0 : list<Ts...>::template index_of<T> + 1; template<typename F> static void visit_nth(size_t n, F f) { if constexpr (size > 0) { if (n > 0) { if constexpr (size > 1) { list<Ts...>::visit_nth(n - 1, std::move(f)); } } else { f(id<H>()); } } } }; using DataTypes = list<int8_t, uint8_t, int16_t, uint16_t>; enum DataType { kInt8 = DataTypes::index_of<int8_t>, kUint8 = DataTypes::index_of<uint8_t>, kInt16 = DataTypes::index_of<int16_t>, kUint16 = DataTypes::index_of<uint16_t> }; int main() { DataType typeFromFile = kUint8; DataTypes::visit_nth(typeFromFile, [&](auto type) { using T = typename decltype(type)::type; std::cout << std::is_same_v<T, uint8_t> << std::endl; }); }
Finding the actual type (float, uint32_t...) based on the value of an enum (kFloat, kUint32...)
I am reading data from a file and the type of the data is stored as a uint8_t which indicates the type of data I am about to read. This is the enum corresponding to the declaration of these values. enum DataType { kInt8, kUint16, kInt16, kUint32, kInt32, kUint64, kInt64, kFloat16, kFloat32, kFloat64 }; I then get a function that reads the values stored in the file with the specific type, something like this: template<typename T> readData(T*& data, size_t numElements) { ifs.read((char*)data, sizeof(T) * numElements); } uint8_t type; ifs.read((char*)&type, 1); uint32_t numElements; ids.read((char*)&numElements, sizeof(uint32_t)); switch (type) { case kUint8: uint8_t* data = new uint8_t[numElements]; readData<uint8_t>(data, numElements); // eventually delete mem ... break; case kUint16: uint16_t* data = new int16_t[numElements]; readData<uint16_t>(data, numElements); // eventually delete mem ... break; case ... etc. default: break; } I have just represented 2 cases but eventually, you'd need to do it for all types. It's a lot of code duplication and so what I'd like to do is find the actual type of the values given the enum value. For example, if the enum value was kUint32 then the type would be uint32_t, etc. If I was able to do so, the code could become more compact with something like this (pseudo-code): DataType typeFromFile = kUint32; ActualC++Type T = typeFromFile.getType(); readData<T>(data, numElements); What technique could you recommend to make it work (or what alternative solution may you recommend?).
[ "I would use a higher-order macro for this sort of thing. Start with a macro that defines the mapping between the enum values and the types:\n#define FOREACH_DATATYPE(OP) \\\n OP(kUint8, uint8_t) \\\n OP(kInt8, int8_t) \\\n OP(kUint16, uint16_t) \\\n OP(kInt16, int16_t)\n\n // Fill in the rest yourself\n\nThen use it to generate the switch statement for all the enum values and types:\n switch (type) {\n#define CASE_TYPE(name, type) \\\n case name: { \\\n type* data = new type[numElements]; \\\n readData<type>(data, numElements); \\\n break;}\n FOREACH_DATATYPE(CASE_TYPE)\n#undef CASE_TYPE\n default:\n break;\n }\n\n", "You're trying to take a runtime value and map it to a compile-time type. Since C++ is a compile-time typed language, there's no escaping that, at some point, you're going to have to do something like a switch/case statement at some point. This is because each option needs to have its own separate code.\nSo the desire is to minimize how much of this you actually do. The best way to do it with standard library tools is to employ a variant.\nSo, you create a variant<Ts>, such that the Ts types are unique_ptr<T[]>s, where the various Ts are the sequence of types in your enumeration in order. This way, the enumeration index matches the variant index. The unique_ptr part is important, as it will make the variant destroy the array for you without having to know which type it stores.\nSo the only thing the switch/case needs to do is create the array. The processing of the array can happen in a visitor, which can be a template, as follows:\n variant_type data_array{};\n\n switch (type) {\n case kUint8:\n data_array = std::make_unique<std::uint8_t[]>(numElements);\n case kUint16:\n data_array = std::make_unique<std::uint16_t[]>(numElements);\n default:\n //error out. `break` is not an option.\n break;\n }\n\n std::visit([&](auto &arr)\n {\n using arrtype = std::remove_cvref_t<decltype(arr)>;\n readData<typename arrtype::element_type>(arr.get(), numElements);\n ///...\n }, data_array);\n\n//data_array's destructor will destroy the allocated arrays.\n\n", "Besides other answer, another approach is to define a compile-time list of types first. Then you might assign enum values based on types (which prevents list vs. enum order issues) and have a visit_nth which calls the lambda with id<type>:\n#include <iostream>\n#include <type_traits>\n\ntemplate<typename T>\nstruct id {\n using type = T;\n};\n\ntemplate<typename... Ts>\nstruct list\n{\n static constexpr size_t size = sizeof...(Ts);\n \n template<typename T>\n static constexpr size_t index_of = -1;\n};\n\ntemplate<typename H, typename... Ts>\nstruct list<H, Ts...>\n{\npublic:\n static constexpr size_t size = sizeof...(Ts) + 1;\n\n template<typename T>\n static constexpr size_t index_of =\n std::is_same_v<H, T> ? 
0 : list<Ts...>::template index_of<T> + 1;\n\n template<typename F>\n static void visit_nth(size_t n, F f)\n {\n if constexpr (size > 0) {\n if (n > 0) {\n if constexpr (size > 1) {\n list<Ts...>::visit_nth(n - 1, std::move(f));\n }\n } else {\n f(id<H>());\n }\n }\n }\n};\n\nusing DataTypes = list<int8_t, uint8_t, int16_t, uint16_t>;\n\nenum DataType\n{\n kInt8 = DataTypes::index_of<int8_t>,\n kUint8 = DataTypes::index_of<uint8_t>,\n kInt16 = DataTypes::index_of<int16_t>,\n kUint16 = DataTypes::index_of<uint16_t>\n};\n\n\n\nint main()\n{\n DataType typeFromFile = kUint8;\n DataTypes::visit_nth(typeFromFile, [&](auto type) {\n using T = typename decltype(type)::type;\n std::cout << std::is_same_v<T, uint8_t> << std::endl;\n });\n}\n\n" ]
[ 1, 1, 0 ]
[]
[]
[ "c++", "c++20", "type_conversion", "types" ]
stackoverflow_0074643731_c++_c++20_type_conversion_types.txt
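The second answer above uses variant_type without defining it; a sketch of one way to declare it so the variant's alternative index lines up with the DataType enumerator values (the float16 struct is a placeholder assumption, since C++20 has no built-in half-precision type):

#include <cstdint>
#include <memory>
#include <variant>

// Placeholder element type for kFloat16 (assumption, not from the question).
struct float16 { std::uint16_t bits; };

// One alternative per DataType enumerator, in the same order as the enum,
// so data_array.index() matches the value read from the file.
using variant_type = std::variant<
    std::unique_ptr<std::int8_t[]>,    // kInt8
    std::unique_ptr<std::uint16_t[]>,  // kUint16
    std::unique_ptr<std::int16_t[]>,   // kInt16
    std::unique_ptr<std::uint32_t[]>,  // kUint32
    std::unique_ptr<std::int32_t[]>,   // kInt32
    std::unique_ptr<std::uint64_t[]>,  // kUint64
    std::unique_ptr<std::int64_t[]>,   // kInt64
    std::unique_ptr<float16[]>,        // kFloat16
    std::unique_ptr<float[]>,          // kFloat32
    std::unique_ptr<double[]>>;        // kFloat64

A default-constructed variant_type holds a null std::unique_ptr<std::int8_t[]> (the first alternative), which is what the variant_type data_array{}; line in that answer relies on.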
Q: Load additional Google Maps libraries dynamically I've got a page where the Google Maps is being loaded with the Geometry library: <script src="//maps.googleapis.com/maps/api/js?v=3.exp&key=xxx&language=en&libraries=geometry"></script> Is there a way, if I only have access to update the javascript on this page, to also load in the Places library? i.e. I can't update or remove the HTML for the above script tag. I know there's already a feature request to get this added to the API: https://code.google.com/p/gmaps-api-issues/issues/detail?id=3664 And this question has already been asked, without answer, 3 years ago: How to load google maps libraries dynamically I'm hoping someone might have come up with a way to do this since then. A: This feature is now available in the v=beta channel of the Google Maps JavaScript API: https://developers.google.com/maps/documentation/javascript/dynamic-loading If you have any issues using this API, please report them at https://developers.google.com/maps/documentation/javascript/support#issue-tracker
Load additional Google Maps libraries dynamically
I've got a page where the Google Maps is being loaded with the Geometry library: <script src="//maps.googleapis.com/maps/api/js?v=3.exp&key=xxx&language=en&libraries=geometry"></script> Is there a way, if I only have access to update the javascript on this page, to also load in the Places library? i.e. I can't update or remove the HTML for the above script tag. I know there's already a feature request to get this added to the API: https://code.google.com/p/gmaps-api-issues/issues/detail?id=3664 And this question has already been asked, without answer, 3 years ago: How to load google maps libraries dynamically I'm hoping someone might have come up with a way to do this since then.
[ "This feature is now available in the v=beta channel of the Google Maps JavaScript API: https://developers.google.com/maps/documentation/javascript/dynamic-loading\nIf you have any issues using this API, please report them at https://developers.google.com/maps/documentation/javascript/support#issue-tracker\n" ]
[ 0 ]
[]
[]
[ "google_maps", "google_maps_api_3", "javascript", "jquery" ]
stackoverflow_0037655783_google_maps_google_maps_api_3_javascript_jquery.txt
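A rough sketch of what the dynamic-loading approach from that documentation looks like once the inline bootstrap loader (with your own API key) has been added to the page; the library names are the documented ones, but treat the destructured members as examples rather than an exhaustive list.

async function initMap() {
  // Each call loads the library on demand and resolves with its exports.
  const { Map } = await google.maps.importLibrary("maps");
  const { spherical } = await google.maps.importLibrary("geometry");
  const { PlacesService } = await google.maps.importLibrary("places");

  const map = new Map(document.getElementById("map"), {
    center: { lat: 0, lng: 0 },
    zoom: 3,
  });

  const service = new PlacesService(map);
  // spherical.computeDistanceBetween(...) and the rest of the geometry helpers
  // are available here as well.
}

initMap();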
Q: GUIZERO: I want to use a pushbutton to save what's in a textbox to file. How do I do that? I'm a super beginner with Python, so please be kind. I am creating an app that should take in user input from text boxes, and then when the user presses the submit button this is saved in a text file. I think the issue is that I'm not quite sure how to create the right function for the pushbutton command. I would really appreciate if someone can code a simple app showing how to do this. This is the code I have so far, but I get an error "TypeError: write() argument must be str, not TextBox". from guizero import * import os cwd = os.getcwd() # function for writing files def save_file(): with open(cwd+'/Desktop/File handling/newfile.txt','a') as f: f.write(userInput) app = App("testing") userInput = TextBox(app) submit_button = PushButton(app, command=save_file, text="submit") app.display() ` A: I figured it out. Thanks :) from guizero import * import os cwd = os.getcwd() # function for writing files def save_file(): with open(cwd+'/Desktop/File handling/newfile.txt','a') as f: f.write("First name:"+" "+first_name.value+"\n") #the app app = App("testing") #text box first_name = TextBox(app, width=30, grid=[2,4]) #submit button box = Box(app) submitbutton = PushButton(app, command=(save_file), text="Submit") app.display()
GUIZERO: I want to use a pushbutton to save what's in a textbox to file. How do I do that?
I'm a super beginner with Python, so please be kind. I am creating an app that should take in user input from text boxes, and then when the user presses the submit button this is saved in a text file. I think the issue is that I'm not quite sure how to create the right function for the pushbutton command. I would really appreciate if someone can code a simple app showing how to do this. This is the code I have so far, but I get an error "TypeError: write() argument must be str, not TextBox". from guizero import * import os cwd = os.getcwd() # function for writing files def save_file(): with open(cwd+'/Desktop/File handling/newfile.txt','a') as f: f.write(userInput) app = App("testing") userInput = TextBox(app) submit_button = PushButton(app, command=save_file, text="submit") app.display() `
[ "I figured it out. Thanks :)\n from guizero import *\nimport os\ncwd = os.getcwd()\n\n\n# function for writing files\ndef save_file():\n with open(cwd+'/Desktop/File handling/newfile.txt','a') as f:\n f.write(\"First name:\"+\" \"+first_name.value+\"\\n\")\n\n#the app\napp = App(\"testing\")\n\n\n#text box\nfirst_name = TextBox(app, width=30, grid=[2,4])\n\n#submit button\nbox = Box(app)\nsubmitbutton = PushButton(app, command=(save_file), text=\"Submit\")\n\napp.display()\n\n" ]
[ 0 ]
[]
[]
[ "guizero", "python", "textbox" ]
stackoverflow_0074648037_guizero_python_textbox.txt
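A slightly more defensive variant of the fix above is sketched here; it is not from the thread itself. It assumes the file should still end up in a "File handling" folder on the user's Desktop, builds that path with pathlib instead of string concatenation, and creates the folder if it is missing.

    from pathlib import Path
    from guizero import App, TextBox, PushButton

    # Assumed location for the output file; adjust if the folder lives elsewhere.
    TARGET = Path.home() / "Desktop" / "File handling" / "newfile.txt"

    def save_file():
        # Create the folder on first use so open() cannot fail on a missing path.
        TARGET.parent.mkdir(parents=True, exist_ok=True)
        with open(TARGET, "a") as f:
            # TextBox widgets expose their current text through .value
            f.write("First name: " + first_name.value + "\n")

    app = App("testing")
    first_name = TextBox(app, width=30)
    submit_button = PushButton(app, command=save_file, text="Submit")
    app.display()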
Q: migrate Azure Devops artifacts to GitHub packages Is there any simple way to migrate Azure Devops artifacts to GitHub packages? We have few artifacts which need to be migrated. Are there any tools available to do this? A: Follow the below steps to perform the migration operation. dotnet tool install gpr -g gpr push MyFakePackage.1.0.0.50.nupkg --repository https://github.com/MyRepo/my-repo-name These two are the simple two lines of syntaxes to follow for migration. A: I expanded upon @TadepalliSairam's solution a little bit here: https://josh-ops.com/posts/github-packages-migrate-nuget-packages-to-github-packages/. This is a post detailing a script that migrates .nupkg files to GitHub Packages. Basically, you have to use gpr since it rewrites the <repository url="..." /> element in the .nuspec file in the .nupkg before pushing. dotnet nuget push isn't capable of doing this and you will receive a 400 error right now: dotnet nuget push \ -s github \ -k ghp_pat \ NUnit3.DotNetNew.Template_1.7.1.nupkg Pushing NUnit3.DotNetNew.Template_1.7.1.nupkg to 'https://nuget.pkg.github.com/joshjohanning-org-packages-migrated'... PUT https://nuget.pkg.github.com/joshjohanning-org-packages-migrated/ warn : Source owner 'joshjohanning-org-packages-migrated' does not match repo owner 'joshjohanning-org-packages' in repository element. BadRequest https://nuget.pkg.github.com/joshjohanning-org-packages-migrated/ 180ms error: Response status code does not indicate success: 400 (Bad Request).
migrate Azure Devops artifacts to GitHub packages
Is there any simple way to migrate Azure Devops artifacts to GitHub packages? We have few artifacts which need to be migrated. Are there any tools available to do this?
[ "Follow the below steps to perform the migration operation.\ndotnet tool install gpr -g\ngpr push MyFakePackage.1.0.0.50.nupkg --repository https://github.com/MyRepo/my-repo-name\n\nThese two are the simple two lines of syntaxes to follow for migration.\n", "I expanded upon @TadepalliSairam's solution a little bit here: https://josh-ops.com/posts/github-packages-migrate-nuget-packages-to-github-packages/. This is a post detailing a script that migrates .nupkg files to GitHub Packages.\nBasically, you have to use gpr since it rewrites the <repository url=\"...\" /> element in the .nuspec file in the .nupkg before pushing.\ndotnet nuget push isn't capable of doing this and you will receive a 400 error right now:\ndotnet nuget push \\\n -s github \\\n -k ghp_pat \\\n NUnit3.DotNetNew.Template_1.7.1.nupkg\n\nPushing NUnit3.DotNetNew.Template_1.7.1.nupkg to 'https://nuget.pkg.github.com/joshjohanning-org-packages-migrated'...\n PUT https://nuget.pkg.github.com/joshjohanning-org-packages-migrated/\nwarn : Source owner 'joshjohanning-org-packages-migrated' does not match repo owner 'joshjohanning-org-packages' in repository element.\n BadRequest https://nuget.pkg.github.com/joshjohanning-org-packages-migrated/ 180ms\nerror: Response status code does not indicate success: 400 (Bad Request).\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure_devops", "github_packages" ]
stackoverflow_0072429981_azure_devops_github_packages.txt
Q: Failed to install wsgiref on Python 3 I have a problem installing wsgiref: $ python --version Python 3.6.0 :: Anaconda 4.3.1 (x86_64) $ pip --version pip 9.0.1 from /anaconda/lib/python3.6/site-packages (python 3.6) My requirement.txt file are shown as below. numpy==1.8.1 scipy==0.14.0 pyzmq==14.3.1 pandas==0.14.0 Jinja2==2.7.3 MarkupSafe==0.23 backports.ssl-match-hostname==3.4.0.2 gnureadline==6.3.3 ipython==2.1.0 matplotlib==1.3.1 nose==1.3.3 openpyxl==1.8.6 patsy==0.2.1 pyparsing==2.0.2 python-dateutil==2.2 pytz==2014.4 scikit-learn==0.14.1 six==1.7.3 tornado==3.2.2 wsgiref==0.1.2 statsmodels==0.5.0 when I run pip install -r requirement.txt, I got this error Collecting wsgiref==0.1.2 (from -r requirements.txt (line 20)) Using cached wsgiref-0.1.2.zip Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/setup.py", line 5, in <module> import ez_setup File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ez_setup/__init__.py", line 170 print "Setuptools version",version,"or greater has been installed." ^ SyntaxError: Missing parentheses in call to 'print' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ I have tried to run pip install --upgrade setuptools and sudo easy_install -U setuptools but neither works. How can I solve this problem? A: wsgiref is already been included as a standard library in Python 3... So in case if you are trying with Python 3 just go ahead and import wsgiref thats it. A: According to this line SyntaxError: Missing parentheses in call to 'print', I think it needs Python 2.x to run the setup.py. Whether to use parentheses in print is the different syntax of Python 2 and Python 3. This is the solution from the Github issue: There are a few fixes that will get you running, in order of least work to most: Switch over to python2.7 for your will installs. Try to upgrade wsgiref with pip install --upgrade wsgiref, and see if the latest version works with your setup, and with will (if it doesn't, you'd notice the http/webhooks stuff not working. If you try 2) and it works, submit a PR here with the upgraded version in requirements.txt. (You can find out what versions you've got by using pip freeze). You can find more about the syntax difference here A: Solution: Flask-restful is deprecated, use version flask-restx
Failed to install wsgiref on Python 3
I have a problem installing wsgiref: $ python --version Python 3.6.0 :: Anaconda 4.3.1 (x86_64) $ pip --version pip 9.0.1 from /anaconda/lib/python3.6/site-packages (python 3.6) My requirement.txt file are shown as below. numpy==1.8.1 scipy==0.14.0 pyzmq==14.3.1 pandas==0.14.0 Jinja2==2.7.3 MarkupSafe==0.23 backports.ssl-match-hostname==3.4.0.2 gnureadline==6.3.3 ipython==2.1.0 matplotlib==1.3.1 nose==1.3.3 openpyxl==1.8.6 patsy==0.2.1 pyparsing==2.0.2 python-dateutil==2.2 pytz==2014.4 scikit-learn==0.14.1 six==1.7.3 tornado==3.2.2 wsgiref==0.1.2 statsmodels==0.5.0 when I run pip install -r requirement.txt, I got this error Collecting wsgiref==0.1.2 (from -r requirements.txt (line 20)) Using cached wsgiref-0.1.2.zip Complete output from command python setup.py egg_info: Traceback (most recent call last): File "<string>", line 1, in <module> File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/setup.py", line 5, in <module> import ez_setup File "/private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ez_setup/__init__.py", line 170 print "Setuptools version",version,"or greater has been installed." ^ SyntaxError: Missing parentheses in call to 'print' ---------------------------------------- Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/xs/y0pbzxkn7gqcdtrz4cpxtwrw0000gn/T/pip-build-hkiqbu1j/wsgiref/ I have tried to run pip install --upgrade setuptools and sudo easy_install -U setuptools but neither works. How can I solve this problem?
[ "wsgiref is already been included as a standard library in Python 3...\nSo in case if you are trying with Python 3 just go ahead and import wsgiref thats it.\n", "According to this line SyntaxError: Missing parentheses in call to 'print', I think it needs Python 2.x to run the setup.py. Whether to use parentheses in print is the different syntax of Python 2 and Python 3.\nThis is the solution from the Github issue:\n\nThere are a few fixes that will get you running, in order of least work to most:\n\nSwitch over to python2.7 for your will installs.\n\nTry to upgrade wsgiref with pip install --upgrade wsgiref, and see if the latest version works with your setup, and with will (if it doesn't, you'd notice the http/webhooks stuff not working.\n\nIf you try 2) and it works, submit a PR here with the upgraded version in requirements.txt. (You can find out what versions you've got by using pip freeze).\n\n\n\nYou can find more about the syntax difference here\n", "Solution:\nFlask-restful is deprecated, use version flask-restx\n" ]
[ 30, 4, 0 ]
[]
[]
[ "pip", "python", "python_3.x", "wsgiref" ]
stackoverflow_0043026999_pip_python_python_3.x_wsgiref.txt
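Since wsgiref ships inside the Python 3 standard library, the pinned wsgiref==0.1.2 entry can simply be dropped from requirements.txt and the module imported directly, as the first answer notes. A minimal sketch follows; the demo_app helper and port 8000 are illustrative choices, not something taken from the question.

    # No "pip install wsgiref" needed on Python 3: the package is in the standard library.
    from wsgiref.simple_server import make_server, demo_app

    # demo_app is a trivial WSGI application bundled with wsgiref; port 8000 is arbitrary.
    with make_server("", 8000, demo_app) as httpd:
        print("Serving demo_app on http://localhost:8000 ...")
        httpd.handle_request()  # answer one request and exit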
Q: gcc: how to produce ELF where file size equals mem size for all LOAD segments without custom linker script? I have to produce an ELF binary with gcc from a Hello World-program written in C, where the mem size equals the file size in all LOAD-segments of the ELF file. My experience says me, that I can prevent this if I move .bss into .data in a custom linker script. But in my case, I want to achieve this without a custom linker script. Is there a way I can force all LOAD-segments to have the same file size as mem size with an option for GCC? Background: I'm working on enabling Linux binaries on a custom OS. The ELF-Loader so far is pretty basic and testing/developing will be much simpler, if I just can map the ELF as it is (as long as all LOAD-segments are page-aligned).. A: For completeness, I provide the solution that includes a dedicated linker script. The relevant excerpt is the following: .data ALIGN(4K) : { *(.data .data.*) /* Putting .bss into the .data segment simplifies loading an ELF file especially in kernel scenarios. Some basic ELF loaders in OS dev space require MEMSIZE==FILESIZE for each LOAD segment. The zeroed memory will land "as is" in the ELF and increase its size. I'm not sure why but "*(COMMON)" must be specified as well so that the .bss section actually lands in .data. But the GNU ld doc also does it like this: https://sourceware.org/binutils/docs/ld/Input-Section-Common.html */ *(COMMON) *(.bss .bss.*) } : rw It is important that the output section is not called ".bss" and that the section contains more than just ".bss". Otherwise, the "FILESIZE != MEMSIZE" optimization is done where the ELF loader needs to provide zeroed memory.
gcc: how to produce ELF where file size equals mem size for all LOAD segments without custom linker script?
I have to produce an ELF binary with gcc from a Hello World-program written in C, where the mem size equals the file size in all LOAD-segments of the ELF file. My experience says me, that I can prevent this if I move .bss into .data in a custom linker script. But in my case, I want to achieve this without a custom linker script. Is there a way I can force all LOAD-segments to have the same file size as mem size with an option for GCC? Background: I'm working on enabling Linux binaries on a custom OS. The ELF-Loader so far is pretty basic and testing/developing will be much simpler, if I just can map the ELF as it is (as long as all LOAD-segments are page-aligned)..
[ "For completeness, I provide the solution that includes a dedicated linker script. The relevant excerpt is the following:\n .data ALIGN(4K) :\n {\n *(.data .data.*)\n\n /* Putting .bss into the .data segment simplifies loading an ELF file especially in kernel\n scenarios. Some basic ELF loaders in OS dev space require MEMSIZE==FILESIZE for each\n LOAD segment. The zeroed memory will land \"as is\" in the ELF and increase its size.\n\n I'm not sure why but \"*(COMMON)\" must be specified as well so that the .bss section\n actually lands in .data. But the GNU ld doc also does it like this:\n https://sourceware.org/binutils/docs/ld/Input-Section-Common.html */\n\n *(COMMON)\n *(.bss .bss.*)\n } : rw\n\nIt is important that the output section is not called \".bss\" and that\nthe section contains more than just \".bss\". Otherwise, the \"FILESIZE != MEMSIZE\" optimization is done where the ELF loader needs to provide zeroed memory.\n" ]
[ 0 ]
[]
[]
[ "c", "elf", "gcc", "linker" ]
stackoverflow_0070083404_c_elf_gcc_linker.txt
Q: Execute remote quiet MSI installs from Powershell I am trying to use the Invoke-Command powershell cmdlet to install a MSI installer. From within powershell on the local machine and from the proper directory, the following works: ./setup /quiet The following does not seem to work: $script = { param($path) cd "$path" & ./setup /quiet return pwd } return Invoke-Command -ComputerName $product.IPs -ScriptBlock $script -Args $sourcePath For test purposes I am working on the local machine passing in "." for the -ComputerName argument. The paths have been verified correct before passing in to Invoke-Command, and errors generated on different versions of this code indicate the paths are correct. I have also tried with and without the "& " on the remote call to setup. Other Invoke-Command calls are working, so I doubt it is a permissions issue. I have verified that the return from the pwd call is the expected directory. How do I get the install to work? A: You might try using Start-Process in your script block: cd $path start-process setup.exe -arg "/quiet" Not sure if you will want or need to wait. Look at help for Start-Process. A: I have had weird issues when trying to remotely execute a script on a local machine. In other words, remote powershell to the local machine. It comes back with an error that seems to say that PowerShell remoting is not enabled on the machine, but it was. I can run the script remotely from another machine to the target, but when using remoting to the same box, the issue crops up. Verify that the WinRM service is running. Verify powershell remoting has been enabled as in Enable-PSRemoting -force. Verify your powershell execution policy is loose enough as in Set-ExecutionPolicy Unrestricted, for example. If the policy was set to RemoteSigned, this might be the problem. You might also want to verify the user you are running the script as (locally, but using remoting) has privileges to "log on as a service" or as a batch job. Just guessing there, if the above list doesn't solve anything. A: What error (if any) are you receiving? Unfortunately, you must run the shell as admin on your local machine to be able to connect to your local machine with invoke-command or any WINRM based command that requires administrative privilege (this is not a requirement when connecting remotely). When connecting to loopback, I believe it is unable (for some security reason) to enumerate groups and determine if you are in an admin enabled AD or local group, which is how it auto elevates when invoking on a remote machine. The only solution may be to have a conditional which checks for localhost and if so, don't use the -ComputerName parameter. This GitHub Issue covers it
Execute remote quiet MSI installs from Powershell
I am trying to use the Invoke-Command powershell cmdlet to install a MSI installer. From within powershell on the local machine and from the proper directory, the following works: ./setup /quiet The following does not seem to work: $script = { param($path) cd "$path" & ./setup /quiet return pwd } return Invoke-Command -ComputerName $product.IPs -ScriptBlock $script -Args $sourcePath For test purposes I am working on the local machine passing in "." for the -ComputerName argument. The paths have been verified correct before passing in to Invoke-Command, and errors generated on different versions of this code indicate the paths are correct. I have also tried with and without the "& " on the remote call to setup. Other Invoke-Command calls are working, so I doubt it is a permissions issue. I have verified that the return from the pwd call is the expected directory. How do I get the install to work?
[ "You might try using Start-Process in your script block:\ncd $path\nstart-process setup.exe -arg \"/quiet\"\nNot sure if you will want or need to wait. Look at help for Start-Process.\n", "I have had weird issues when trying to remotely execute a script on a local machine. In other words, remote powershell to the local machine. It comes back with an error that seems to say that PowerShell remoting is not enabled on the machine, but it was. I can run the script remotely from another machine to the target, but when using remoting to the same box, the issue crops up.\n\nVerify that the WinRM service is running.\nVerify powershell remoting has been enabled as in Enable-PSRemoting -force.\nVerify your powershell execution policy is loose enough as in Set-ExecutionPolicy Unrestricted, for example. If the policy was set to RemoteSigned, this might be the problem.\n\nYou might also want to verify the user you are running the script as (locally, but using remoting) has privileges to \"log on as a service\" or as a batch job. Just guessing there, if the above list doesn't solve anything.\n", "What error (if any) are you receiving? Unfortunately, you must run the shell as admin on your local machine to be able to connect to your local machine with invoke-command or any WINRM based command that requires administrative privilege (this is not a requirement when connecting remotely).\nWhen connecting to loopback, I believe it is unable (for some security reason) to enumerate groups and determine if you are in an admin enabled AD or local group, which is how it auto elevates when invoking on a remote machine. The only solution may be to have a conditional which checks for localhost and if so, don't use the -ComputerName parameter.\nThis GitHub Issue covers it\n" ]
[ 0, 0, 0 ]
[]
[]
[ "powershell", "powershell_remoting", "windows_installer" ]
stackoverflow_0009220316_powershell_powershell_remoting_windows_installer.txt
Q: Paraphrase generation Can u please give some hint how can we create utterances for example I have input say - "I want my account details" The output should be like Can I get my account details Please provide me my account details Can I get my account information A: The problem you are describing here is called paraphrasing: taking an input phrase and producing an output phrase with the same meaning. To get the taste of it, you can try an online paraphraser, like https://quillbot.com/. And one way to create your own paraphraser (if you really really want it) is to get a pretrained translation model, and translate your phrase into some other language and then back into the original language. If your translator generates multiple hypotheses, then you'll have multiple paraphrases. Another (simpler!) way is just to replace some of your words with their synonyms, obtained from WordNet, Wiktionary, or another linguistic resource. Some more details you can find in this question. A: Paraphrasing is the process of taking an input phrase and creating an output. To get the taste of it, you can try an online paraphraser, like click me
Paraphrase generation
Can u please give some hint how can we create utterances for example I have input say - "I want my account details" The output should be like Can I get my account details Please provide me my account details Can I get my account information
[ "The problem you are describing here is called paraphrasing: taking an input phrase and producing an output phrase with the same meaning. \nTo get the taste of it, you can try an online paraphraser, like https://quillbot.com/. \nAnd one way to create your own paraphraser (if you really really want it) is to get a pretrained translation model, and translate your phrase into some other language and then back into the original language. If your translator generates multiple hypotheses, then you'll have multiple paraphrases. \nAnother (simpler!) way is just to replace some of your words with their synonyms, obtained from WordNet, Wiktionary, or another linguistic resource.\nSome more details you can find in this question. \n", "Paraphrasing is the process of taking an input phrase and creating an output.\nTo get the taste of it, you can try an online paraphraser, like\nclick me\n" ]
[ 0, 0 ]
[]
[]
[ "keras", "machine_learning", "nlp", "nltk", "python" ]
stackoverflow_0060712874_keras_machine_learning_nlp_nltk_python.txt
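The synonym-replacement idea from the first answer can be prototyped with NLTK's WordNet interface. The sketch below only illustrates that idea and is not a production paraphraser: it assumes the wordnet corpus has already been fetched with nltk.download('wordnet'), ignores part of speech and context, and simply swaps each word for the first differing synonym it finds.

    from nltk.corpus import wordnet

    def naive_paraphrase(sentence):
        """Replace each word with an arbitrary WordNet synonym when one exists."""
        out = []
        for word in sentence.split():
            replacement = word
            for synset in wordnet.synsets(word):
                for lemma in synset.lemmas():
                    candidate = lemma.name().replace("_", " ")
                    if candidate.lower() != word.lower():
                        replacement = candidate
                        break
                if replacement != word:
                    break
            out.append(replacement)
        return " ".join(out)

    # Crude variations of the sentence from the question.
    print(naive_paraphrase("I want my account details"))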
Q: Nodemon keeps restarting server I have an express server that uses a local json file for a database. I'm using https://github.com/typicode/lowdb for getters and setters. Currently the server keeps starting and restarting without any problems, but can't access it. Below is my Server.js file: import express from 'express' import session from 'express-session' import bodyParser from 'body-parser' import promisify from 'es6-promisify' import cors from 'cors' import low from 'lowdb' import fileAsync from 'lowdb/lib/storages/file-async' import defaultdb from './models/Pages' import routes from './routes/index.js' const app = express(); const db = low('./core/db/index.json', { storage: fileAsync }) app.use(cors()) app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use('/', routes); app.set('port', process.env.PORT || 1337); db.defaults(defaultdb).write().then(() => { const server = app.listen(app.get('port'), () => { console.log(`Express running → PORT ${server.address().port}`); }); }); Anyone have an issue like this before? I think it has something to do with this line: db.defaults(defaultdb).write().then(() => { const server = app.listen(app.get('port'), () => { console.log(`Express running → PORT ${server.address().port}`); }); }); A: From the documentation: nodemon will watch the files in the directory in which nodemon was started, and if any files change, nodemon will automatically restart your node application. If your db's .JSON file is under the watch of nodemon, and you're constantly writing to it, your server will restart in an infinite loop thus making it inaccessible. Try moving your .JSON file outside the scope of nodemon's watch via moving it outside your directory or via some nodemon configuration (if possible). A: I solved this issue from this page. practically you just have to do nodemon --ignore 'logs/*' A: My solution: I've added nodemonConfig in package.json file in order to stop infinite loop/restarting. In package.json: "nodemonConfig": { "ext": "js", "ignore": ["*.test.ts", "db/*"], "delay": "2" }, "scripts": { "start": "nodemon" } A: I was puzzled by a constant stream of restarts. I started with nodemon --verbose to see what was causing the restarts. This revealed that my package.json file was the culprit. I was running my installation in a Dropbbox folder and had just removed all files from my node_modules folder and done a fresh install. Another computer that shared my Dropbox folder was running at the time, and unknown to me, it was busily updating its node_module files and updating the Dropbox copy of package.json files as it did so. My solution turned out to be simple, I took a break and waited for Dropbox to finish indexing the node_modules folder. When Dropbox finished synching, nodemon ran without any unexpected restarts. A: In my case (which is the same as the OP) just ignoring the database file worked nodemon --ignore server/db.json server/server.js A: You can use this generalized config file. Name it nodemon.json and put in the root folder of your project. { "restartable": "rs", "ignore": [".git", "node_modules/", "dist/", "coverage/"], "watch": ["src/"], "execMap": { "ts": "node -r ts-node/register" }, "env": { "NODE_ENV": "development" }, "ext": "js,json,ts" } A: I solved this by adding the following code to the package.json file "nodemonConfig": { "ext": "js", "ignore": [ "*.test.ts", "db/*" ], "delay": "2" } } A: Add this in your package.json "nodemonConfig": { "ext": "js", "ignore": [ ".test.ts", "db/" ], "delay": "2" }
Nodemon keeps restarting server
I have an express server that uses a local json file for a database. I'm using https://github.com/typicode/lowdb for getters and setters. Currently the server keeps starting and restarting without any problems, but can't access it. Below is my Server.js file: import express from 'express' import session from 'express-session' import bodyParser from 'body-parser' import promisify from 'es6-promisify' import cors from 'cors' import low from 'lowdb' import fileAsync from 'lowdb/lib/storages/file-async' import defaultdb from './models/Pages' import routes from './routes/index.js' const app = express(); const db = low('./core/db/index.json', { storage: fileAsync }) app.use(cors()) app.use(bodyParser.json()); app.use(bodyParser.urlencoded({ extended: true })); app.use('/', routes); app.set('port', process.env.PORT || 1337); db.defaults(defaultdb).write().then(() => { const server = app.listen(app.get('port'), () => { console.log(`Express running → PORT ${server.address().port}`); }); }); Anyone have an issue like this before? I think it has something to do with this line: db.defaults(defaultdb).write().then(() => { const server = app.listen(app.get('port'), () => { console.log(`Express running → PORT ${server.address().port}`); }); });
[ "From the documentation:\n\nnodemon will watch the files in the directory in which nodemon was started, and if any files change, nodemon will automatically restart your node application.\n\nIf your db's .JSON file is under the watch of nodemon, and you're constantly writing to it, your server will restart in an infinite loop thus making it inaccessible. Try moving your .JSON file outside the scope of nodemon's watch via moving it outside your directory or via some nodemon configuration (if possible).\n", "I solved this issue from this page.\npractically you just have to do\n nodemon --ignore 'logs/*'\n\n", "My solution: I've added nodemonConfig in package.json file in order to stop infinite loop/restarting. In package.json:\n\"nodemonConfig\": { \"ext\": \"js\", \"ignore\": [\"*.test.ts\", \"db/*\"], \"delay\": \"2\" },\n\"scripts\": { \"start\": \"nodemon\" }\n\n", "I was puzzled by a constant stream of restarts. I started with nodemon --verbose to see what was causing the restarts.\nThis revealed that my package.json file was the culprit. I was running my installation in a Dropbbox folder and had just removed all files from my node_modules folder and done a fresh install. Another computer that shared my Dropbox folder was running at the time, and unknown to me, it was busily updating its node_module files and updating the Dropbox copy of package.json files as it did so.\nMy solution turned out to be simple, I took a break and waited for Dropbox to finish indexing the node_modules folder. When Dropbox finished synching, nodemon ran without any unexpected restarts.\n", "In my case (which is the same as the OP) just ignoring the database file worked\nnodemon --ignore server/db.json server/server.js\n\n", "You can use this generalized config file.\nName it nodemon.json and put in the root folder of your project.\n{\n \"restartable\": \"rs\",\n \"ignore\": [\".git\", \"node_modules/\", \"dist/\", \"coverage/\"],\n \"watch\": [\"src/\"],\n \"execMap\": {\n \"ts\": \"node -r ts-node/register\"\n },\n \"env\": {\n \"NODE_ENV\": \"development\"\n },\n \"ext\": \"js,json,ts\"\n}\n\n", "I solved this by adding the following code to the package.json file\n \"nodemonConfig\": {\n \"ext\": \"js\",\n \"ignore\": [\n \"*.test.ts\",\n \"db/*\"\n ],\n \"delay\": \"2\"\n }\n}\n\n", "Add this in your package.json\n\"nodemonConfig\": {\n\"ext\": \"js\",\n\"ignore\": [\n\".test.ts\",\n\"db/\"\n],\n\"delay\": \"2\"\n}\n" ]
[ 27, 22, 11, 7, 2, 0, 0, 0 ]
[ "I solved this by creating a script in my package.json like this:\nscripts\": {\n \"start-continuous\": \"supervisor server/server.js\",\n},\n\nThis will work if you have supervisor installed in your global scope.\nnpm install supervisor -g\n\nNow all I do is: npm run start-continuous\n" ]
[ -1 ]
[ "javascript", "node.js" ]
stackoverflow_0044855839_javascript_node.js.txt
Q: How to rename a consistently-named subdirectory across multiple directories? I'm wanting to rename 123/1/ -> 123/v1/ foo/1/ -> foo/v1/ bar/1/ -> bar/v1/ 345/1/ -> 345/v1/ I've searched and found a few related solutions, but not quite sure what is best in this instance. e.g., find . -wholename "*/1" -exec echo '{}' \; successfully prints out all the paths relative to ., but {} expands to ./foo/1/, so I can't move from {} to {}/v1 for instance. I also tried find . -wholename "*/1" -exec mv '{}' $(echo '{}' | sed 's/1/v1/') \; with the idea that I would be invoking mv ./foo/1 ./foo/v1, but apparently it tries to move .foo/1/ to a subdirectory of itself. Anyhow, just looking for the simplest way to do this bulk renaming. To be clear, I'm trying to move the literal subdirectory 1 to v1, not also 2 to v2. A: Something like this, untested, is IMHO the simplest way to do it using bash builtins and mandatory POSIX utils as long as your paths don't contain newlines: while IFS= read -r old; do new="${old##*/}v1" echo mv -- "$old" $new" done < <(find . -type d -name 1) Remove echo once you're happy with the output from initial testing. A: Like this, using perl rename (which may be different than the rename already existing on your system, use rename --version to check): rename -n 's|([^/]+/)(1)|$1v$2|' */1/ remove -n (dry-run) when the outputs is ok for you. (note that you can use globstar on bash or something similar on other shells to recurse into deeper sub-directories) A: This was tagged with fish. A solution using fish shell: for file in **/1/; mv $file (dirname $file)/v1; end A: Try find . -depth -type d -name 1 -execdir mv 1 v1 \; The -execdir option to find is not POSIX, but it is widely supported by find implementations. See What are the security issues and race conditions in using find -exec? for useful information about the -execdir option. The -depth option is necessary both to support paths like foo/1/1/1 and to avoid warning messages when find can't traverse the newly-renamed 1 directories. A: I would do it in straight Bash. Given: $ tree . . ├── 123 │   └── 1 ├── 345 │   └── 1 │   └── 1 # this is a file named '1' ├── bar │   └── 1 └── foo └── 1 └── file 8 directories, 2 files You can do: #!/bin/bash shopt -s globstar new_1="v1" for src in **/*/1; do # will RECURSIVELY find any directory '1' # if just next level (as in example) # you can just do */1 [ -d "$src" ] || continue tgt="${src%%/1}/$new_1" echo "./$src/ => ./$tgt/" mv "./$src" "./$tgt/" done Result: $ tree . . ├── 123 │   └── v1 ├── 345 │   └── v1 │   └── 1 ├── bar │   └── v1 └── foo └── v1 └── file 8 directories, 2 files If you want to insert parent directories, add one statement to create them: shopt -s globstar new_1="a parent/v1" # the replacement path has a parent for src in **/*/1; do [ -d "$src" ] || continue tgt="${src%%/1}/$new_1" [[ "/" == *"$new_1"* ]] || mkdir -p "${tgt%/*}" # create parents if any echo "./$src/ => ./$tgt/" mv "./$src" "./$tgt/" done Result: $ tree . . ├── 123 │   └── a parent │   └── v1 ├── 345 │   └── a parent │   └── v1 │   └── 1 ├── bar │   └── a parent │   └── v1 └── foo └── a parent └── v1 └── file 12 directories, 2 files
How to rename a consistently-named subdirectory across multiple directories?
I'm wanting to rename 123/1/ -> 123/v1/ foo/1/ -> foo/v1/ bar/1/ -> bar/v1/ 345/1/ -> 345/v1/ I've searched and found a few related solutions, but not quite sure what is best in this instance. e.g., find . -wholename "*/1" -exec echo '{}' \; successfully prints out all the paths relative to ., but {} expands to ./foo/1/, so I can't move from {} to {}/v1 for instance. I also tried find . -wholename "*/1" -exec mv '{}' $(echo '{}' | sed 's/1/v1/') \; with the idea that I would be invoking mv ./foo/1 ./foo/v1, but apparently it tries to move .foo/1/ to a subdirectory of itself. Anyhow, just looking for the simplest way to do this bulk renaming. To be clear, I'm trying to move the literal subdirectory 1 to v1, not also 2 to v2.
[ "Something like this, untested, is IMHO the simplest way to do it using bash builtins and mandatory POSIX utils as long as your paths don't contain newlines:\nwhile IFS= read -r old; do\n new=\"${old##*/}v1\"\n echo mv -- \"$old\" $new\"\ndone < <(find . -type d -name 1)\n\nRemove echo once you're happy with the output from initial testing.\n", "Like this, using perl rename (which may be different than the rename already existing on your system, use rename --version to check):\nrename -n 's|([^/]+/)(1)|$1v$2|' */1/ \n\nremove -n (dry-run) when the outputs is ok for you.\n(note that you can use globstar on bash or something similar on other shells to recurse into deeper sub-directories)\n", "This was tagged with fish. A solution using fish shell:\nfor file in **/1/; mv $file (dirname $file)/v1; end\n\n", "Try\nfind . -depth -type d -name 1 -execdir mv 1 v1 \\;\n\n\nThe -execdir option to find is not POSIX, but it is widely supported by find implementations.\nSee What are the security issues and race conditions in using find -exec? for useful information about the -execdir option.\nThe -depth option is necessary both to support paths like foo/1/1/1 and to avoid warning messages when find can't traverse the newly-renamed 1 directories.\n\n", "I would do it in straight Bash.\nGiven:\n$ tree .\n.\n├── 123\n│   └── 1\n├── 345\n│   └── 1\n│   └── 1 # this is a file named '1'\n├── bar\n│   └── 1\n└── foo\n └── 1\n └── file\n8 directories, 2 files\n\nYou can do:\n#!/bin/bash\n\nshopt -s globstar\nnew_1=\"v1\"\n\nfor src in **/*/1; do # will RECURSIVELY find any directory '1'\n # if just next level (as in example)\n # you can just do */1\n [ -d \"$src\" ] || continue \n tgt=\"${src%%/1}/$new_1\"\n echo \"./$src/ => ./$tgt/\"\n mv \"./$src\" \"./$tgt/\"\ndone \n\nResult:\n$ tree .\n.\n├── 123\n│   └── v1\n├── 345\n│   └── v1\n│   └── 1\n├── bar\n│   └── v1\n└── foo\n └── v1\n └── file\n 8 directories, 2 files\n\nIf you want to insert parent directories, add one statement to create them:\nshopt -s globstar\nnew_1=\"a parent/v1\" # the replacement path has a parent\n\nfor src in **/*/1; do \n [ -d \"$src\" ] || continue \n tgt=\"${src%%/1}/$new_1\"\n [[ \"/\" == *\"$new_1\"* ]] || mkdir -p \"${tgt%/*}\" # create parents if any\n echo \"./$src/ => ./$tgt/\"\n mv \"./$src\" \"./$tgt/\"\ndone \n\nResult:\n$ tree .\n.\n├── 123\n│   └── a parent\n│   └── v1\n├── 345\n│   └── a parent\n│   └── v1\n│   └── 1\n├── bar\n│   └── a parent\n│   └── v1\n└── foo\n └── a parent\n └── v1\n └── file\n 12 directories, 2 files\n\n" ]
[ 2, 2, 2, 0, 0 ]
[]
[]
[ "bash", "batch_rename", "fish", "linux", "rename" ]
stackoverflow_0074648370_bash_batch_rename_fish_linux_rename.txt
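For readers who prefer to sidestep shell quoting entirely, the same bulk rename can be sketched in Python with pathlib. This is an alternative to the shell answers above, not a replacement for them, and it assumes the literal child directory name 1 sits exactly one level below the current directory, mirroring the */1/ pattern in the question.

    from pathlib import Path

    root = Path(".")

    # Rename every immediate child named "1" (e.g. foo/1 -> foo/v1).
    for child in sorted(root.iterdir()):
        candidate = child / "1"
        if candidate.is_dir():
            target = candidate.with_name("v1")
            print(f"{candidate} -> {target}")
            candidate.rename(target)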
Q: Error while deploying web application to Amazon elastic beanstalk I am using Eclipse EE Juno edition. I created a dynamic web application which uses Amazon Simple DB and retirviing some values and showing to users. i have installed the AWS kit for using Amazon services. I have an account for simpleDB. I tried to deploy it to AWS Elastic Beanstalk (through the plugin). It shows me to select the server. I selected it as AW Beanstalk with TOmcat 6 (asia Pacific Tokyo). After sometime it gives this error. Unable to upload application to Amazon S3: User: arn:aws:iam::379007759147:user/SSSS is not authorized to perform: elasticbeanstalk:CreateStorageLocation User: arn:aws:iam::379007759147:user/SSSS is not authorized to perform: elasticbeanstalk:CreateStorageLocation I wish to upload the web application to AWS beanstalk and appreciate your help in achieving it. A: It worked for me with the following permissions AWSElasticBeanstalkFullAccess - AWS Managed policy AWSElasticBeanstalkService - AWS Managed policy AWSCodeDeployRole - AWS Managed policy AWSCodeDeployFullAccess - AWS Managed policy A: The user need to have 3 permissions: IAM(read) S3(write) Beanstalk(write) and, there is a limitation of 1 user can maximum be attached with 2 policy. so, maybe you need to create 3 groups and assign this user to these 3 groups. It worked for me. A: Your IAM profile (i.e. the user you login with) hasn't been given the permissions required to create S3 buckets. Beanstalk creates a new bucket in which to dump application versions. Talk to whomever is root-administering your AWS account. They should be able to fix your IAM profile. A: In my case, I was trying to run beanstalk from an AWS Lightsail instance, which is forbidden apparently. The role AmazonLightsailInstanceRole, pre-assigned to Lightsail server default user ("ec2-user" for aws linux), cannot be modified to include Elastic Beanstalk permissions. A: I think they change these often, some of the other answers' roles don't exist anymore .. in 2022 I had to enable AdministratorAccess-AWSElasticBeanstalk for it to work.
Error while deploying web application to Amazon elastic beanstalk
I am using Eclipse EE Juno edition. I created a dynamic web application which uses Amazon Simple DB and retirviing some values and showing to users. i have installed the AWS kit for using Amazon services. I have an account for simpleDB. I tried to deploy it to AWS Elastic Beanstalk (through the plugin). It shows me to select the server. I selected it as AW Beanstalk with TOmcat 6 (asia Pacific Tokyo). After sometime it gives this error. Unable to upload application to Amazon S3: User: arn:aws:iam::379007759147:user/SSSS is not authorized to perform: elasticbeanstalk:CreateStorageLocation User: arn:aws:iam::379007759147:user/SSSS is not authorized to perform: elasticbeanstalk:CreateStorageLocation I wish to upload the web application to AWS beanstalk and appreciate your help in achieving it.
[ "It worked for me with the following permissions\n\nAWSElasticBeanstalkFullAccess - AWS Managed policy\nAWSElasticBeanstalkService - AWS Managed policy\nAWSCodeDeployRole - AWS Managed policy\nAWSCodeDeployFullAccess - AWS Managed policy\n\n", "The user need to have 3 permissions:\n\nIAM(read)\nS3(write)\nBeanstalk(write)\n\nand, there is a limitation of 1 user can maximum be attached with 2 policy.\nso, maybe you need to create 3 groups and assign this user to these 3 groups.\nIt worked for me.\n", "Your IAM profile (i.e. the user you login with) hasn't been given the permissions required to create S3 buckets. Beanstalk creates a new bucket in which to dump application versions. Talk to whomever is root-administering your AWS account. They should be able to fix your IAM profile.\n", "In my case, I was trying to run beanstalk from an AWS Lightsail instance, which is forbidden apparently.\nThe role AmazonLightsailInstanceRole, pre-assigned to Lightsail server default user (\"ec2-user\" for aws linux), cannot be modified to include Elastic Beanstalk permissions.\n", "I think they change these often, some of the other answers' roles don't exist anymore .. in 2022 I had to enable AdministratorAccess-AWSElasticBeanstalk for it to work.\n" ]
[ 18, 7, 0, 0, 0 ]
[]
[]
[ "amazon_elastic_beanstalk", "amazon_s3", "amazon_web_services", "eclipse", "java" ]
stackoverflow_0012086198_amazon_elastic_beanstalk_amazon_s3_amazon_web_services_eclipse_java.txt
Q: How can I send photos and so on through my channel with my other peer? I have the channel already established with my other partner but at the moment I can only send messages. How could I send any file through it? i tried with formData() and FileReader() but i don't know how to read that long string on the other side. connection.ondatachannel = (event) => { channel = event.channel; channel.onmessage = (event) => { // I receive the messages document.querySelector(".messages-content").innerHTML += event.data + "<br>"; } }; How could I read the string that is returned to me by the file reader when it is delivered? A: To send a file using a data channel, you can use the FileReader API to read the file and convert it into a string or ArrayBuffer. Once you have the file content as a string or ArrayBuffer, you can send it over the data channel using the send() method. Here is an example of how you could do this: // Read the file using the FileReader API const fileReader = new FileReader(); fileReader.readAsArrayBuffer(file); // When the file has been read, send it over the data channel fileReader.onload = () => { channel.send(fileReader.result); }; On the receiving end, you can use the onmessage event handler to receive the file content as a string or ArrayBuffer. You can then use the FileReader API again to convert the received data back into a file that can be used or saved. Here is an example of how you could do this: channel.onmessage = (event) => { // Convert the received data back into a file using the FileReader API const fileReader = new FileReader(); fileReader.readAsArrayBuffer(event.data); // When the file has been read, you can use it or save it fileReader.onload = () => { const file = new File([fileReader.result], "received-file.txt"); // Use or save the file here... }; }; Note that the FileReader API is asynchronous, so you will need to use the onload event handler to ensure that the file has been fully read before you try to use or save it.
How can I send photos and so on through my channel with my other peer?
I have the channel already established with my other partner but at the moment I can only send messages. How could I send any file through it? i tried with formData() and FileReader() but i don't know how to read that long string on the other side. connection.ondatachannel = (event) => { channel = event.channel; channel.onmessage = (event) => { // I receive the messages document.querySelector(".messages-content").innerHTML += event.data + "<br>"; } }; How could I read the string that is returned to me by the file reader when it is delivered?
[ "To send a file using a data channel, you can use the FileReader API to read the file and convert it into a string or ArrayBuffer. Once you have the file content as a string or ArrayBuffer, you can send it over the data channel using the send() method. Here is an example of how you could do this:\n// Read the file using the FileReader API\nconst fileReader = new FileReader();\nfileReader.readAsArrayBuffer(file);\n\n// When the file has been read, send it over the data channel\nfileReader.onload = () => {\n channel.send(fileReader.result);\n};\n\nOn the receiving end, you can use the onmessage event handler to receive the file content as a string or ArrayBuffer. You can then use the FileReader API again to convert the received data back into a file that can be used or saved. Here is an example of how you could do this:\nchannel.onmessage = (event) => {\n // Convert the received data back into a file using the FileReader API\n const fileReader = new FileReader();\n fileReader.readAsArrayBuffer(event.data);\n\n // When the file has been read, you can use it or save it\n fileReader.onload = () => {\n const file = new File([fileReader.result], \"received-file.txt\");\n // Use or save the file here...\n };\n};\n\nNote that the FileReader API is asynchronous, so you will need to use the onload event handler to ensure that the file has been fully read before you try to use or save it.\n" ]
[ 1 ]
[]
[]
[ "javascript", "webrtc" ]
stackoverflow_0074661031_javascript_webrtc.txt
Q: Stop tracking a file in git from a timepoint on while keeping old tracked commits A friend asked me for a way to untrack a file from a time point on for the future - but being able to git checkup to a state where the file was still tracked by git. Aim is to keep the file locally but prevent for future commits I googled in the past for this - and read several stackoverflow answers to similar problems. But it seems that there is no good way to achieve this. It seems that the tracking status of files is globally controlled. Is there a way to keep the tracking status locally controlled (meaning can be changed from commit to commit)? A: It seems that the tracking status of files is globally controlled No, it doesn't. Is there a way to keep the tracking status locally controlled (meaning can be changed from commit to commit)? No, because that is not what tracking is. None of what you've said is what tracking is. A file is tracked because it is present in the index. That's basically all there is to it. It is present in the index because either (1) it is present in the currently checked out commit (aka HEAD) or (2) you have created the file and added it to the index (with git add). This definition is completely automatic and autonomous. So how can you make a tracked file untracked? On a rather obvious and crude level, since the whole definition of "tracked" depends on the index, you can simply remove the file from the index (with git rm). There is also an exclusion mechanism: you can use a .gitignore file. But this has no effect whatever on what I've already said. It merely lists exceptions to broad actions, saying that if a new file of a certain type should appear, it should not be added to the index when you give a global command such as git add ., and it should not be listed among the new untracked files when you say git status. A: But it seems that there is no good way to achieve this. That's correct, and it's kind of a fundamental problem. The issue here is that a file that is not tracked (at any given point in time) is one that does exist in the working tree, but does not exist in Git's index—and the only part that's actually in Git, here, is Git's index. If and when you check out some old commit, where the file is in the commit, Git must: extract the file to Git's index and extract the file into your working tree which means that if there was an untracked file in your working tree under that name, it must be destroyed, so that the committed version can be extracted. If it is extracted successfully at this point, it is now in both your working tree (which isn't in Git) and in Git's index, and is a tracked file. The previous untracked file is, by definition, destroyed, unless there's some not-in-Git (e.g., OS-level) method of re-obtaining a nominally deleted-and-replaced file.1 For this reason, Git will normally object to the idea of destroying some untracked file. A git checkout or git switch that would destroy such a file stops with an error instead, telling you that the file needs to be saved somehow first, unless you use the --force option. Unfortunately, listing a file in .gitignore—as you might wish to do to keep it from accidentally becoming tracked—gives Git permission to destroy the file. This is a longstanding known weakness in Git; the Git developer community would like to have some method of marking some file path as both "do not track a la the usual .gitignore style" and "precious, do not destroy either". As of today (Git 2.39 or so) there is still no way to do this. 
1Many modern file systems offer a method of doing this with what the file systems call snapshots. You pick some point in time and say "make a file system snapshot", and you can then roll the file system back to that point in time. This is the same kind of idea that version control systems implement, except that it's done on a per-file-system basis. The details behind the FS snapshot method get complicated. For instance, MacOS "time machine" is very different from UFS or ZFS snapshots. The idea itself is pretty simple though: all the before-snapshot file versions are saved for some time period ("all time" or "until the snapshot expires" or whatever), and by using the "extract previous snapshot" software—whatever that may be—you can get the file back. But this is all outside Git. A: To stop tracking the file git rm file_name To keep the file from being tracked, add the filename to .gitignore
Stop tracking a file in git from a timepoint on while keeping old tracked commits
A friend asked me for a way to untrack a file from a time point on for the future - but being able to git checkup to a state where the file was still tracked by git. Aim is to keep the file locally but prevent for future commits I googled in the past for this - and read several stackoverflow answers to similar problems. But it seems that there is no good way to achieve this. It seems that the tracking status of files is globally controlled. Is there a way to keep the tracking status locally controlled (meaning can be changed from commit to commit)?
[ "\nIt seems that the tracking status of files is globally controlled\n\nNo, it doesn't.\n\nIs there a way to keep the tracking status locally controlled (meaning can be changed from commit to commit)?\n\nNo, because that is not what tracking is. None of what you've said is what tracking is.\nA file is tracked because it is present in the index. That's basically all there is to it. It is present in the index because either (1) it is present in the currently checked out commit (aka HEAD) or (2) you have created the file and added it to the index (with git add). This definition is completely automatic and autonomous.\nSo how can you make a tracked file untracked? On a rather obvious and crude level, since the whole definition of \"tracked\" depends on the index, you can simply remove the file from the index (with git rm).\nThere is also an exclusion mechanism: you can use a .gitignore file. But this has no effect whatever on what I've already said. It merely lists exceptions to broad actions, saying that if a new file of a certain type should appear, it should not be added to the index when you give a global command such as git add ., and it should not be listed among the new untracked files when you say git status.\n", "\nBut it seems that there is no good way to achieve this.\n\nThat's correct, and it's kind of a fundamental problem. The issue here is that a file that is not tracked (at any given point in time) is one that does exist in the working tree, but does not exist in Git's index—and the only part that's actually in Git, here, is Git's index.\nIf and when you check out some old commit, where the file is in the commit, Git must:\n\nextract the file to Git's index and\nextract the file into your working tree\n\nwhich means that if there was an untracked file in your working tree under that name, it must be destroyed, so that the committed version can be extracted. If it is extracted successfully at this point, it is now in both your working tree (which isn't in Git) and in Git's index, and is a tracked file.\nThe previous untracked file is, by definition, destroyed, unless there's some not-in-Git (e.g., OS-level) method of re-obtaining a nominally deleted-and-replaced file.1 For this reason, Git will normally object to the idea of destroying some untracked file. A git checkout or git switch that would destroy such a file stops with an error instead, telling you that the file needs to be saved somehow first, unless you use the --force option.\nUnfortunately, listing a file in .gitignore—as you might wish to do to keep it from accidentally becoming tracked—gives Git permission to destroy the file. This is a longstanding known weakness in Git; the Git developer community would like to have some method of marking some file path as both \"do not track a la the usual .gitignore style\" and \"precious, do not destroy either\". As of today (Git 2.39 or so) there is still no way to do this.\n\n1Many modern file systems offer a method of doing this with what the file systems call snapshots. You pick some point in time and say \"make a file system snapshot\", and you can then roll the file system back to that point in time. This is the same kind of idea that version control systems implement, except that it's done on a per-file-system basis.\nThe details behind the FS snapshot method get complicated. For instance, MacOS \"time machine\" is very different from UFS or ZFS snapshots. 
The idea itself is pretty simple though: all the before-snapshot file versions are saved for some time period (\"all time\" or \"until the snapshot expires\" or whatever), and by using the \"extract previous snapshot\" software—whatever that may be—you can get the file back. But this is all outside Git.\n", "To stop tracking the file git rm file_name\nTo keep the file from being tracked,\nadd the filename to .gitignore\n" ]
[ 3, 2, 1 ]
[]
[]
[ "git", "github", "gitlab" ]
stackoverflow_0074655670_git_github_gitlab.txt
Q: How to Create a Pandas DataFrame from multiple list of dictionaries I want to create a pandas dataframe using the two list of dictionaries below: country_codes = [ { "id": 92, "name": "93", "position": 1, "description": "Afghanistan" }, { "id": 93, "name": "355", "position": 2, "description": "Albania" }, { "id": 94, "name": "213", "position": 3, "description": "Algeria" }, { "id": 95, "name": "1-684", "position": 4, "description": "American Samoa" } ] gender = [ { "id": 1, "name": "Female" }, { "id": 3 "name": "Male" } ] The dataframe should have two columns: Gender and Country Code. The values for gender will be from the gender variable while the value for country code will be from the country code variable. I have tried: df = pd.DataFrame(list( zip( gender, country_codes ) ), columns=[ "name" "description" ] ).rename({ "name": "Gender", "description": "Country" })) writer = pd.ExcelWriter('my_excel_file.xlsx', engine='xlsxwriter') df.to_excel(writer, sheet_name="sample_sheet", index=False) writer.save() But after running the script, the excel file was not populated. The expected output is have the excel sheet (screenshot attached) populated with the data in those list of dictionaries I declared above A: Use: df = pd.DataFrame({'gender': pd.DataFrame(gender)['name'], 'country': pd.DataFrame(country_codes)['description']}) Output: gender country 0 Female Afghanistan 1 Male Albania 2 NaN Algeria 3 NaN American Samoa
How to Create a Pandas DataFrame from multiple list of dictionaries
I want to create a pandas dataframe using the two list of dictionaries below: country_codes = [ { "id": 92, "name": "93", "position": 1, "description": "Afghanistan" }, { "id": 93, "name": "355", "position": 2, "description": "Albania" }, { "id": 94, "name": "213", "position": 3, "description": "Algeria" }, { "id": 95, "name": "1-684", "position": 4, "description": "American Samoa" } ] gender = [ { "id": 1, "name": "Female" }, { "id": 3 "name": "Male" } ] The dataframe should have two columns: Gender and Country Code. The values for gender will be from the gender variable while the value for country code will be from the country code variable. I have tried: df = pd.DataFrame(list( zip( gender, country_codes ) ), columns=[ "name" "description" ] ).rename({ "name": "Gender", "description": "Country" })) writer = pd.ExcelWriter('my_excel_file.xlsx', engine='xlsxwriter') df.to_excel(writer, sheet_name="sample_sheet", index=False) writer.save() But after running the script, the excel file was not populated. The expected output is have the excel sheet (screenshot attached) populated with the data in those list of dictionaries I declared above
[ "Use:\ndf = pd.DataFrame({'gender': pd.DataFrame(gender)['name'],\n 'country': pd.DataFrame(country_codes)['description']})\n\nOutput:\n gender country\n0 Female Afghanistan\n1 Male Albania\n2 NaN Algeria\n3 NaN American Samoa\n\n" ]
[ 1 ]
[]
[]
[ "django", "pandas", "python", "xlsxwriter" ]
stackoverflow_0074660948_django_pandas_python_xlsxwriter.txt
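A runnable end-to-end sketch of the approach in the answer above, including the Excel export the asker was attempting. It assumes the two lists from the question (with the missing comma after "id": 3 in the gender entry fixed) and that an Excel writer engine such as openpyxl or xlsxwriter is installed; the output file name is illustrative.

import pandas as pd

country_codes = [
    {"id": 92, "name": "93", "position": 1, "description": "Afghanistan"},
    {"id": 93, "name": "355", "position": 2, "description": "Albania"},
    {"id": 94, "name": "213", "position": 3, "description": "Algeria"},
    {"id": 95, "name": "1-684", "position": 4, "description": "American Samoa"},
]
gender = [{"id": 1, "name": "Female"}, {"id": 3, "name": "Male"}]

# build each column from its own list of dicts; the shorter column is padded with NaN
df = pd.DataFrame({
    "Gender": pd.DataFrame(gender)["name"],
    "Country Code": pd.DataFrame(country_codes)["description"],
})
print(df)

df.to_excel("my_excel_file.xlsx", sheet_name="sample_sheet", index=False)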
Q: How to use @Query in Spring JPA? I'm trying to access expense where type is income through spring(2.7.6) jpa using @Query annotation.screenshot of database postgresql Error in terminal: Error creating bean with name 'expenseController': Unsatisfied dependency expressed through field 'expenseService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'expenseService': Unsatisfied dependency expressed through field 'expenseRepository'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'expenseRepository' defined in com.ivy.expensely.repository.ExpenseRepository defined in @EnableJpaRepositories declared on JpaRepositoriesRegistrar.EnableJpaRepositoriesConfiguration: Invocation of init method failed; nested exception is org.springframework.data.repository.query.QueryCreationException: Could not create query for public abstract java.util.Optional com.ivy.expensely.repository.ExpenseRepository.findByType(java.lang.String); Reason: Validation failed for query for method public abstract java.util.Optional com.ivy.expensely.repository.ExpenseRepository.findByType(java.lang.String)!; nested exception is java.lang.IllegalArgumentException: Validation failed for query for method public abstract java.util.Optional com.ivy.expensely.repository.ExpenseRepository.findByType(java.lang.String)! I'm trying to execute in ExpenseRepository; import com.ivy.expensely.model.Expense; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.data.jpa.repository.Query; import org.springframework.stereotype.Repository; import java.util.Optional; @Repository public interface ExpenseRepository extends JpaRepository<Expense,Long> { // @Query(value = "select sum(amount) from expense where type:income",nativeQuery = true) @Query("SELECT sum(amount) FROM expense WHERE type='income'") public Optional<Expense> findByType(String param_name); } Model looks like this: @NoArgsConstructor @AllArgsConstructor @Entity @Data @Table(name="expense") public class Expense { @Id private Long id; private Instant expensedate; private String description; private String location; private Long amount; private String type; @ManyToOne private Category category; @JsonIgnore @ManyToOne private User user; Please help me in figuring out how to execute this A: Sum returns Long (as it a sum of amounts, which is Long), so try this: @Query("SELECT sum(e.amount) FROM expense e WHERE e.type='income'") public Long findByType(String param_name); In JPQL Queries you have to specify the table of the where clause fields. Also remember to set the correct return type for the repository method. More info at baeldung. Related question here.
How to use @Query in Spring JPA?
I'm trying to access expense where type is income through spring(2.7.6) jpa using @Query annotation.screenshot of database postgresql Error in terminal: Error creating bean with name 'expenseController': Unsatisfied dependency expressed through field 'expenseService'; nested exception is org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'expenseService': Unsatisfied dependency expressed through field 'expenseRepository'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'expenseRepository' defined in com.ivy.expensely.repository.ExpenseRepository defined in @EnableJpaRepositories declared on JpaRepositoriesRegistrar.EnableJpaRepositoriesConfiguration: Invocation of init method failed; nested exception is org.springframework.data.repository.query.QueryCreationException: Could not create query for public abstract java.util.Optional com.ivy.expensely.repository.ExpenseRepository.findByType(java.lang.String); Reason: Validation failed for query for method public abstract java.util.Optional com.ivy.expensely.repository.ExpenseRepository.findByType(java.lang.String)!; nested exception is java.lang.IllegalArgumentException: Validation failed for query for method public abstract java.util.Optional com.ivy.expensely.repository.ExpenseRepository.findByType(java.lang.String)! I'm trying to execute in ExpenseRepository; import com.ivy.expensely.model.Expense; import org.springframework.data.jpa.repository.JpaRepository; import org.springframework.data.jpa.repository.Query; import org.springframework.stereotype.Repository; import java.util.Optional; @Repository public interface ExpenseRepository extends JpaRepository<Expense,Long> { // @Query(value = "select sum(amount) from expense where type:income",nativeQuery = true) @Query("SELECT sum(amount) FROM expense WHERE type='income'") public Optional<Expense> findByType(String param_name); } Model looks like this: @NoArgsConstructor @AllArgsConstructor @Entity @Data @Table(name="expense") public class Expense { @Id private Long id; private Instant expensedate; private String description; private String location; private Long amount; private String type; @ManyToOne private Category category; @JsonIgnore @ManyToOne private User user; Please help me in figuring out how to execute this
[ "Sum returns Long (as it a sum of amounts, which is Long), so try this:\n@Query(\"SELECT sum(e.amount) FROM expense e WHERE e.type='income'\")\npublic Long findByType(String param_name);\n\nIn JPQL Queries you have to specify the table of the where clause fields. Also remember to set the correct return type for the repository method. More info at baeldung. Related question here.\n" ]
[ 0 ]
[]
[]
[ "java", "spring_annotations", "spring_boot", "spring_data_jpa", "spring_mvc" ]
stackoverflow_0074654038_java_spring_annotations_spring_boot_spring_data_jpa_spring_mvc.txt
Q: How to propagate the effect of model bias via MEAS update correctly to other variables in GEKKO EDIT: I am just checking if the issue not on the condensate side. I have a material balance optimisation problem that I have configured in GEKKO. I have reproduced my challenge on a smaller problem that I can share here. It pertains to the initial values for CV's that I have left undefined (defaulting to zero) during controller instantiation and then assigned via the MEAS attribute with FSTATUS=1 parameter before the first call to the solve() method. As expected the controller establishes a BIAS to account for the difference between MEAS and the initial controller state. It then correctly drives optimisation of the biased CV to the appropriate target. However, it then appears to continue to use the unbiased model values for the remaining to calculate other Intermediate streams and to use in Equations. The result is that the rest of the material balance shifts to a point that is not representing the actual plant operating point. Attached is a code snippet illustrating my challenge. The output is: PowerProduced.value [0.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0] PowerProduced.PRED [188.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0] Steam for Generation [1300.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0] The PRED values are realistic but the values for Steam for Generation reverts back to a explicit positional form rather than an incremental adjustment from the initial condition. I expected [1300, 1968, 1968, 1968 ...] for Steam for Generation How do I adjust the model configuration to account for this? # -*- coding: utf-8 -*- """ Created on Wed Nov 30 11:53:50 2022 @author: Jacques Strydom """ from gekko import GEKKO import numpy as np m=GEKKO(remote=False) m.time=np.linspace(0,9,10) #GLOBAL OPTIONS m.options.IMODE=6 #control mode,dynamic control, simultaneous m.options.NODES=2 #collocation nodes m.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT m.options.CV_TYPE=1 #2 = squared error from reference trajectory m.options.CTRL_UNITS=3 #control time steps units (3= HOURS) m.options.MV_DCOST_SLOPE=2 m.options.CTRL_TIME=1 #1=1 hour per time step m.options.REQCTRLMODE=3 #3= CONTRO m.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power m.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product m.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity') m.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity') m.Cycles_of_Concentration = m.Param(value=12,name='COC') m.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV m.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV m.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV m.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.PowerProduced =m.CV(name='PowerProduced') m.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown') m.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired') m.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product-m.OtherNetSteamUsers,name='StmforPower') m.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required-m.SodiumSoftner_Production,name='Condensate for BFW') m.Cond_SS_Ratio = m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required) 
m.Equation(m.PowerProduced==m.SteamforGeneration/m.StmToPowerRatio) m.Equation(m.BFW_Conductivity==(m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity+m.CondensateForBFW*m.Condensate_Conductivity)/m.BoilerFeedWater_Required) #MV SETTINGS m.SodiumSoftner_Production.STATUS=1 # Manipulate this m.SodiumSoftner_Production.FSTATUS=1 # MEASURE this m.SodiumSoftner_Production.COST=-1 # Higher is better m.Final_Product.STATUS=1 # Manipulate this m.Final_Product.FSTATUS=1 # Measure this m.Final_Product.COST=-20 # Higher is better m.Steam_Produced.STATUS=1 # Manipulate this m.Steam_Produced.FSTATUS=1 # MEASURE this m.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance m.OtherNetSteamUsers.FSTATUS=1 # MEASURE this m.BFW_Conductivity.STATUS=1 #Control this CV m.BFW_Conductivity.FSTATUS=1 #MEASURE this CV m.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation m.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation m.BFW_Conductivity.SPHI=140 #High limit for target range m.BFW_Conductivity.SPLO=110 #Low limit for target range m.PowerProduced.STATUS=1 #Control this CV m.PowerProduced.FSTATUS=1 #MEASURE this m.PowerProduced.COST=-2 #Higher is better m.PowerProduced.WSPHI=50 #Penalty for SPHI violation m.PowerProduced.WSPLO=50 #Penalty for SPLO violation m.PowerProduced.SPHI=355 #High limit for target range m.PowerProduced.SPLO=100 #Low limit for target range #Load measurements - realistic mass balance m.Final_Product.MEAS =1200 m.SodiumSoftner_Production.MEAS =2200 m.OtherNetSteamUsers.MEAS =800 m.Steam_Produced.MEAS =3900 m.BFW_Conductivity.MEAS =152 m.PowerProduced.MEAS =188 m.solve() #solve for first step print('PowerProduced.value',m.PowerProduced.value) print('PowerProduced.PRED',m.PowerProduced.PRED) print('Steam for Generation',m.SteamforGeneration.value) The process associated with the reduced problem is depicted here: A: Gekko uses the unbiased model values to solve the equations. The BIAS is only applied to that specific CV as an output correction. A state estimation algorithm such as a Kalman filter or Moving Horizon Estimator (MHE) is required to adjust parameters or initial conditions to correct for the difference between measured and model outputs. The bias method is commonly applied to model predictive control as a quick correction when a more complete state estimator is not available. See preprint or article on various estimation methods, including the bias method. Hedengren, J. D., Eaton, A. N., Overview of Estimation Methods for Industrial Dynamic Systems, Optimization and Engineering, Springer, Vol 18 (1), 2017, pp. 155-178, DOI: 10.1007/s11081-015-9295-9. To include the bias in the model, an external bias calculation is recommended with the creation of bias1 and bias2. m.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1 m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1 m.PowerProduced =m.CV(name='PowerProduced') Substitute the Var+bias to use the biased value of the variable. The denominator terms are also rearranged to the other side of the equation to improve the numerical solution potential by avoiding potential divide-by-zero. m.Equation(m.StmToPowerRatio*(m.PowerProduced+m.bias2)==m.SteamforGeneration) m.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity+m.bias1)==\ (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\ +m.CondensateForBFW*m.Condensate_Conductivity)) The equations now use the biased value. 
Here is the complete script: from gekko import GEKKO import numpy as np m=GEKKO(remote=False) m.time=np.linspace(0,9,10) #GLOBAL OPTIONS m.options.IMODE=6 #control mode,dynamic control, simultaneous m.options.NODES=2 #collocation nodes m.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT m.options.CV_TYPE=1 #2 = squared error from reference trajectory m.options.CTRL_UNITS=3 #control time steps units (3= HOURS) m.options.MV_DCOST_SLOPE=2 m.options.CTRL_TIME=1 #1=1 hour per time step m.options.REQCTRLMODE=3 #3= CONTRO m.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power m.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product m.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity') m.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity') m.Cycles_of_Concentration = m.Param(value=12,name='COC') m.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV m.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV m.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV m.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var m.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1 m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1 m.PowerProduced =m.CV(name='PowerProduced') m.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown') m.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired') m.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product\ -m.OtherNetSteamUsers,name='StmforPower') m.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required\ -m.SodiumSoftner_Production,name='Condensate for BFW') m.Cond_SS_Ratio = m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required) m.Equation(m.StmToPowerRatio*(m.PowerProduced-m.bias2)==m.SteamforGeneration) m.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity-m.bias1)==\ (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\ +m.CondensateForBFW*m.Condensate_Conductivity)) #MV SETTINGS m.SodiumSoftner_Production.STATUS=1 # Manipulate this m.SodiumSoftner_Production.FSTATUS=1 # MEASURE this m.SodiumSoftner_Production.COST=-1 # Higher is better m.Final_Product.STATUS=1 # Manipulate this m.Final_Product.FSTATUS=1 # Measure this m.Final_Product.COST=-20 # Higher is better m.Steam_Produced.STATUS=1 # Manipulate this m.Steam_Produced.FSTATUS=1 # MEASURE this m.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance m.OtherNetSteamUsers.FSTATUS=1 # MEASURE this m.BFW_Conductivity.STATUS=1 #Control this CV m.BFW_Conductivity.FSTATUS=0 #MEASURE this CV m.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation m.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation m.BFW_Conductivity.SPHI=140 #High limit for target range m.BFW_Conductivity.SPLO=110 #Low limit for target range m.PowerProduced.STATUS=1 #Control this CV m.PowerProduced.FSTATUS=0 #MEASURE this m.PowerProduced.COST=-2 #Higher is better m.PowerProduced.WSPHI=50 #Penalty for SPHI violation m.PowerProduced.WSPLO=50 #Penalty for SPLO violation m.PowerProduced.SPHI=355 #High limit for target range m.PowerProduced.SPLO=100 #Low limit for target range #Load measurements - realistic mass balance m.Final_Product.MEAS =1200 m.SodiumSoftner_Production.MEAS =2200 m.OtherNetSteamUsers.MEAS =800 m.Steam_Produced.MEAS =3900 #m.BFW_Conductivity.MEAS =152 m.bias1.MEAS =-152 #m.PowerProduced.MEAS =188 m.bias2.MEAS =-188 m.solve() #solve 
for first step print('PowerProduced.value',m.PowerProduced.value) print('PowerProduced.PRED',m.PowerProduced.PRED) print('Steam for Generation',m.SteamforGeneration.value) import matplotlib.pyplot as plt plt.subplot(2,1,1) plt.plot(m.time,m.SteamforGeneration.value,'r-',label='SteamforGeneration') plt.legend(); plt.grid() plt.subplot(2,1,2) plt.plot(m.time,m.Steam_Produced.value,'r-.',label='Steam_Produced') plt.plot(m.time,-1.5*np.array(m.Final_Product.value),'b--',label='Final_Product') plt.plot(m.time,-np.array(m.OtherNetSteamUsers.value),'k:',label='OtherNetSteamUsers') plt.xlabel('Time') plt.legend(); plt.grid() plt.show()
How to propagate the effect of model bias via MEAS update correctly to other variables in GEKKO
EDIT: I am just checking if the issue not on the condensate side. I have a material balance optimisation problem that I have configured in GEKKO. I have reproduced my challenge on a smaller problem that I can share here. It pertains to the initial values for CV's that I have left undefined (defaulting to zero) during controller instantiation and then assigned via the MEAS attribute with FSTATUS=1 parameter before the first call to the solve() method. As expected the controller establishes a BIAS to account for the difference between MEAS and the initial controller state. It then correctly drives optimisation of the biased CV to the appropriate target. However, it then appears to continue to use the unbiased model values for the remaining to calculate other Intermediate streams and to use in Equations. The result is that the rest of the material balance shifts to a point that is not representing the actual plant operating point. Attached is a code snippet illustrating my challenge. The output is: PowerProduced.value [0.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0, 167.0] PowerProduced.PRED [188.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0, 355.0] Steam for Generation [1300.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0, 668.0] The PRED values are realistic but the values for Steam for Generation reverts back to a explicit positional form rather than an incremental adjustment from the initial condition. I expected [1300, 1968, 1968, 1968 ...] for Steam for Generation How do I adjust the model configuration to account for this? # -*- coding: utf-8 -*- """ Created on Wed Nov 30 11:53:50 2022 @author: Jacques Strydom """ from gekko import GEKKO import numpy as np m=GEKKO(remote=False) m.time=np.linspace(0,9,10) #GLOBAL OPTIONS m.options.IMODE=6 #control mode,dynamic control, simultaneous m.options.NODES=2 #collocation nodes m.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT m.options.CV_TYPE=1 #2 = squared error from reference trajectory m.options.CTRL_UNITS=3 #control time steps units (3= HOURS) m.options.MV_DCOST_SLOPE=2 m.options.CTRL_TIME=1 #1=1 hour per time step m.options.REQCTRLMODE=3 #3= CONTRO m.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power m.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product m.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity') m.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity') m.Cycles_of_Concentration = m.Param(value=12,name='COC') m.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV m.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV m.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV m.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var m.BFW_Conductivity =m.CV(name='BFW_Conducitivy') m.PowerProduced =m.CV(name='PowerProduced') m.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown') m.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired') m.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product-m.OtherNetSteamUsers,name='StmforPower') m.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required-m.SodiumSoftner_Production,name='Condensate for BFW') m.Cond_SS_Ratio = m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required) m.Equation(m.PowerProduced==m.SteamforGeneration/m.StmToPowerRatio) 
m.Equation(m.BFW_Conductivity==(m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity+m.CondensateForBFW*m.Condensate_Conductivity)/m.BoilerFeedWater_Required) #MV SETTINGS m.SodiumSoftner_Production.STATUS=1 # Manipulate this m.SodiumSoftner_Production.FSTATUS=1 # MEASURE this m.SodiumSoftner_Production.COST=-1 # Higher is better m.Final_Product.STATUS=1 # Manipulate this m.Final_Product.FSTATUS=1 # Measure this m.Final_Product.COST=-20 # Higher is better m.Steam_Produced.STATUS=1 # Manipulate this m.Steam_Produced.FSTATUS=1 # MEASURE this m.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance m.OtherNetSteamUsers.FSTATUS=1 # MEASURE this m.BFW_Conductivity.STATUS=1 #Control this CV m.BFW_Conductivity.FSTATUS=1 #MEASURE this CV m.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation m.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation m.BFW_Conductivity.SPHI=140 #High limit for target range m.BFW_Conductivity.SPLO=110 #Low limit for target range m.PowerProduced.STATUS=1 #Control this CV m.PowerProduced.FSTATUS=1 #MEASURE this m.PowerProduced.COST=-2 #Higher is better m.PowerProduced.WSPHI=50 #Penalty for SPHI violation m.PowerProduced.WSPLO=50 #Penalty for SPLO violation m.PowerProduced.SPHI=355 #High limit for target range m.PowerProduced.SPLO=100 #Low limit for target range #Load measurements - realistic mass balance m.Final_Product.MEAS =1200 m.SodiumSoftner_Production.MEAS =2200 m.OtherNetSteamUsers.MEAS =800 m.Steam_Produced.MEAS =3900 m.BFW_Conductivity.MEAS =152 m.PowerProduced.MEAS =188 m.solve() #solve for first step print('PowerProduced.value',m.PowerProduced.value) print('PowerProduced.PRED',m.PowerProduced.PRED) print('Steam for Generation',m.SteamforGeneration.value) The process associated with the reduced problem is depicted here:
[ "Gekko uses the unbiased model values to solve the equations. The BIAS is only applied to that specific CV as an output correction. A state estimation algorithm such as a Kalman filter or Moving Horizon Estimator (MHE) is required to adjust parameters or initial conditions to correct for the difference between measured and model outputs. The bias method is commonly applied to model predictive control as a quick correction when a more complete state estimator is not available. See preprint or article on various estimation methods, including the bias method.\n\nHedengren, J. D., Eaton, A. N., Overview of Estimation Methods for Industrial Dynamic Systems, Optimization and Engineering, Springer, Vol 18 (1), 2017, pp. 155-178, DOI: 10.1007/s11081-015-9295-9.\n\nTo include the bias in the model, an external bias calculation is recommended with the creation of bias1 and bias2.\nm.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1\nm.BFW_Conductivity =m.CV(name='BFW_Conducitivy')\n\nm.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1\nm.PowerProduced =m.CV(name='PowerProduced')\n\nSubstitute the Var+bias to use the biased value of the variable. The denominator terms are also rearranged to the other side of the equation to improve the numerical solution potential by avoiding potential divide-by-zero.\nm.Equation(m.StmToPowerRatio*(m.PowerProduced+m.bias2)==m.SteamforGeneration)\nm.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity+m.bias1)==\\ \n (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\\\n +m.CondensateForBFW*m.Condensate_Conductivity))\n\nThe equations now use the biased value.\n\nHere is the complete script:\nfrom gekko import GEKKO\nimport numpy as np\n\nm=GEKKO(remote=False)\nm.time=np.linspace(0,9,10)\n\n#GLOBAL OPTIONS\nm.options.IMODE=6 #control mode,dynamic control, simultaneous\nm.options.NODES=2 #collocation nodes\nm.options.SOLVER=1 # 1=APOPT, 2=BPOPT, 3=IPOPT\nm.options.CV_TYPE=1 #2 = squared error from reference trajectory\nm.options.CTRL_UNITS=3 #control time steps units (3= HOURS)\nm.options.MV_DCOST_SLOPE=2\nm.options.CTRL_TIME=1 #1=1 hour per time step\nm.options.REQCTRLMODE=3 #3= CONTRO\n\nm.StmToPowerRatio=m.Const(4.0) #Constant that relates Stm to Power\nm.StmToProductRatio=m.Const(1.5) #Constant that relates Stm to Product\n\nm.SodiumSoftner_Conductivity=m.Param(value=285,name='SodiumSoftner_Conductivity')\nm.Condensate_Conductivity = m.Param(value=10,name='Condensate_Conductivity')\nm.Cycles_of_Concentration = m.Param(value=12,name='COC')\n \n\nm.SodiumSoftner_Production = m.MV(lb=0,ub=2450,name='SodiumSoftner_Production') #MV\nm.Final_Product = m.MV(lb=0,ub=1400,name='Final Product') #MV\nm.Steam_Produced = m.MV(lb=0,ub=4320,name='SteamProduced') #MV\nm.OtherNetSteamUsers = m.MV(name='OtherNetSteamUsers') #Disturbance Var\n\nm.bias1 =m.FV(0); m.bias1.STATUS=0; m.bias1.FSTATUS=1\nm.BFW_Conductivity =m.CV(name='BFW_Conducitivy')\n\nm.bias2 =m.FV(0); m.bias2.STATUS=0; m.bias2.FSTATUS=1\nm.PowerProduced =m.CV(name='PowerProduced')\n\nm.Blowdown=m.Intermediate(m.Steam_Produced/(m.Cycles_of_Concentration-1),name='Blowdown')\nm.BoilerFeedWater_Required=m.Intermediate(m.Steam_Produced+m.Blowdown,name='BFWRequired')\nm.SteamforGeneration=m.Intermediate(m.Steam_Produced-m.StmToProductRatio*m.Final_Product\\\n -m.OtherNetSteamUsers,name='StmforPower')\nm.CondensateForBFW = m.Intermediate(m.BoilerFeedWater_Required\\\n -m.SodiumSoftner_Production,name='Condensate for BFW')\nm.Cond_SS_Ratio = 
m.Intermediate(m.CondensateForBFW/m.BoilerFeedWater_Required)\n\nm.Equation(m.StmToPowerRatio*(m.PowerProduced-m.bias2)==m.SteamforGeneration)\nm.Equation(m.BoilerFeedWater_Required*(m.BFW_Conductivity-m.bias1)==\\\n (m.SodiumSoftner_Production*m.SodiumSoftner_Conductivity\\\n +m.CondensateForBFW*m.Condensate_Conductivity))\n\n#MV SETTINGS\n\nm.SodiumSoftner_Production.STATUS=1 # Manipulate this\nm.SodiumSoftner_Production.FSTATUS=1 # MEASURE this\nm.SodiumSoftner_Production.COST=-1 # Higher is better\n\nm.Final_Product.STATUS=1 # Manipulate this\nm.Final_Product.FSTATUS=1 # Measure this\nm.Final_Product.COST=-20 # Higher is better\n\nm.Steam_Produced.STATUS=1 # Manipulate this\nm.Steam_Produced.FSTATUS=1 # MEASURE this\n\nm.OtherNetSteamUsers.STATUS=0 # Solver cannot manipulate, disturbance\nm.OtherNetSteamUsers.FSTATUS=1 # MEASURE this\n\nm.BFW_Conductivity.STATUS=1 #Control this CV\nm.BFW_Conductivity.FSTATUS=0 #MEASURE this CV\nm.BFW_Conductivity.WSPHI=50 #Penalty for SPHI violation\nm.BFW_Conductivity.WSPLO=50 #Penalty for SPLO violation\nm.BFW_Conductivity.SPHI=140 #High limit for target range\nm.BFW_Conductivity.SPLO=110 #Low limit for target range\n\nm.PowerProduced.STATUS=1 #Control this CV\nm.PowerProduced.FSTATUS=0 #MEASURE this\nm.PowerProduced.COST=-2 #Higher is better\nm.PowerProduced.WSPHI=50 #Penalty for SPHI violation\nm.PowerProduced.WSPLO=50 #Penalty for SPLO violation\nm.PowerProduced.SPHI=355 #High limit for target range\nm.PowerProduced.SPLO=100 #Low limit for target range\n\n#Load measurements - realistic mass balance\nm.Final_Product.MEAS =1200\nm.SodiumSoftner_Production.MEAS =2200\nm.OtherNetSteamUsers.MEAS =800\nm.Steam_Produced.MEAS =3900\n#m.BFW_Conductivity.MEAS =152\nm.bias1.MEAS =-152\n#m.PowerProduced.MEAS =188\nm.bias2.MEAS =-188\n\nm.solve() #solve for first step\n\nprint('PowerProduced.value',m.PowerProduced.value)\nprint('PowerProduced.PRED',m.PowerProduced.PRED)\nprint('Steam for Generation',m.SteamforGeneration.value)\n\nimport matplotlib.pyplot as plt\nplt.subplot(2,1,1)\nplt.plot(m.time,m.SteamforGeneration.value,'r-',label='SteamforGeneration')\nplt.legend(); plt.grid()\nplt.subplot(2,1,2)\nplt.plot(m.time,m.Steam_Produced.value,'r-.',label='Steam_Produced')\nplt.plot(m.time,-1.5*np.array(m.Final_Product.value),'b--',label='Final_Product')\nplt.plot(m.time,-np.array(m.OtherNetSteamUsers.value),'k:',label='OtherNetSteamUsers')\nplt.xlabel('Time')\nplt.legend(); plt.grid()\nplt.show()\n\n" ]
[ 0 ]
[]
[]
[ "gekko", "python" ]
stackoverflow_0074640886_gekko_python.txt
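For readers unfamiliar with the bias method discussed in the answer above, here is a minimal, framework-free sketch of the idea (illustrative only — Gekko maintains its internal BIAS automatically, and the numbers below are hypothetical): the offset between the latest measurement and the unbiased model output is carried forward and added to later predictions of that one CV. This is why the intermediates in the original model stayed uncorrected until explicit bias terms were added to the equations.

# Output-bias correction in its simplest form (one CV, one step ahead).
def biased_prediction(model_now, meas_now, model_future):
    bias = meas_now - model_now      # offset observed at the current sample
    return model_future + bias       # correction applied only to this CV's output

# e.g. unbiased model says 325 now, plant measures 188, model forecasts 355 next step
print(biased_prediction(325.0, 188.0, 355.0))   # -> 218.0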
Q: How to make child communicate with parent in this scenario react Im making a quiz app. Currently i can switch to the next question by answering (clicking a button). Also, when timer expires (10 secs for each question) it automatically switches to the next question (timer resets to default state) regardless of button clicked or no. Above mentioned things works and its fine. However I made a timer as a seperate child component to avoid unneccesary rerenders of parent component. The problem is that when i click a button to answer the question timer does not reset (but it should) to its default state value (10secs). I dont know how to communicate between child (Timer) and parent component (App), because i dont wanna initialize that timer state in parent component and put it in handleAnswerCorrectness so when currentPage changes i also reset timer, because it would cause parent rerender each second and lose sense of making it a child component. How to do it properly??? const {useState, useEffect} = React; const questions = [ { questionText: "What is the capital city of France", answerOptions: [ { id: 1, answerText: "New York", isCorrect: false }, { id: 2, answerText: "London", isCorrect: false }, { id: 3, answerText: "Paris", isCorrect: true }, { id: 4, answerText: "Dublin", isCorrect: false } ] }, { questionText: "Who is CEO of Tesla?", answerOptions: [ { id: 1, answerText: "Jeff Bezos", isCorrect: false }, { id: 2, answerText: "Elon Musk", isCorrect: true }, { id: 3, answerText: "Bill Gates", isCorrect: false }, { id: 4, answerText: "Tony Stark", isCorrect: false } ] } ]; const App = () => { const [currentPage, setCurrentPage] = useState(0); const handleAnswerCorrectness = () => { if (currentPage + 1 < questions.length) { setCurrentPage((prevState) => prevState + 1); } }; return ( <div className="App"> <div> <h3> Question {currentPage + 1}/{questions.length} </h3> <Timer currentPage={currentPage} onCurrentPageChange={setCurrentPage} questionsLenght={questions.length} /> </div> <p>{questions[currentPage].questionText}</p> <div> {questions[currentPage].answerOptions.map((answerOption) => ( <button key={answerOption.id} onClick={handleAnswerCorrectness}> {answerOption.answerText} </button> ))} </div> </div> ); } const Timer = ({ onCurrentPageChange, currentPage, questionsLenght }) => { const [timer, setTimer] = useState(10); useEffect(() => { const interval = setInterval(() => { if (timer > 0) { setTimer((prevState) => prevState - 1); } if (timer === 0 && currentPage + 1 < questionsLenght) { onCurrentPageChange((prevState) => prevState + 1); setTimer(10); } }, 1000); return () => clearInterval(interval); }, [timer]); return ( <div> <span>{timer}</span> </div> ); }; const root = ReactDOM.createRoot( document.getElementById("root") ).render(<App/>); <div id="root"></div> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/18.1.0/umd/react.development.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/18.1.0/umd/react-dom.development.js"></script> A: It seems that the useEffect in Timer is both setting state timer and reading it in dependencies array, which could cause a conflict. Perhaps try separating it to 2 useEffect with more simple logic, so related code will only run when needed. 
One for the purpose of countdown, and reset timer when currentPage changes: // Countdown effect useEffect(() => { setTimer(10); const interval = setInterval( () => setTimer((prevState) => { if (prevState > 0) return prevState - 1; return prevState; }), 1000 ); return () => clearInterval(interval); }, [currentPage]); Another to handle timeout and trigger onCurrentPageChange: // Timeout effect useEffect(() => { if (timer === 0 && currentPage + 1 < questionsLenght) onCurrentPageChange((prevState) => prevState + 1); }, [timer, currentPage, questionsLenght]); There might be other issues that needed to be addressed, but hopefully this would still help by making the logic cleaner. Example: const { useState, useEffect } = React; const questions = [ { questionText: "What is the capital city of France", answerOptions: [ { id: 1, answerText: "New York", isCorrect: false }, { id: 2, answerText: "London", isCorrect: false }, { id: 3, answerText: "Paris", isCorrect: true }, { id: 4, answerText: "Dublin", isCorrect: false }, ], }, { questionText: "Who is CEO of Tesla?", answerOptions: [ { id: 1, answerText: "Jeff Bezos", isCorrect: false }, { id: 2, answerText: "Elon Musk", isCorrect: true }, { id: 3, answerText: "Bill Gates", isCorrect: false }, { id: 4, answerText: "Tony Stark", isCorrect: false }, ], }, ]; const App = () => { const [currentPage, setCurrentPage] = useState(0); const handleAnswerCorrectness = () => { if (currentPage + 1 < questions.length) { setCurrentPage((prevState) => prevState + 1); } }; return ( <div className="App"> <div> <h3> Question {currentPage + 1}/{questions.length} </h3> <Timer currentPage={currentPage} onCurrentPageChange={setCurrentPage} questionsLenght={questions.length} /> </div> <p>{questions[currentPage].questionText}</p> <div> {questions[currentPage].answerOptions.map((answerOption) => ( <button key={answerOption.id} onClick={handleAnswerCorrectness}> {answerOption.answerText} </button> ))} </div> </div> ); }; const Timer = ({ onCurrentPageChange, currentPage, questionsLenght }) => { const [timer, setTimer] = useState(10); // Countdown effect useEffect(() => { setTimer(10); const interval = setInterval( () => setTimer((prevState) => { if (prevState > 0) return prevState - 1; return prevState; }), 1000 ); return () => clearInterval(interval); }, [currentPage]); // Timeout effect useEffect(() => { if (timer === 0 && currentPage + 1 < questionsLenght) onCurrentPageChange((prevState) => prevState + 1); }, [timer, currentPage, questionsLenght]); return ( <div> <span>{timer}</span> </div> ); }; const root = ReactDOM.createRoot(document.getElementById("root")).render( <App /> ); <div id="root"></div> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/18.1.0/umd/react.development.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/18.1.0/umd/react-dom.development.js"></script>
How to make a child component communicate with its parent in this React scenario
Im making a quiz app. Currently i can switch to the next question by answering (clicking a button). Also, when timer expires (10 secs for each question) it automatically switches to the next question (timer resets to default state) regardless of button clicked or no. Above mentioned things works and its fine. However I made a timer as a seperate child component to avoid unneccesary rerenders of parent component. The problem is that when i click a button to answer the question timer does not reset (but it should) to its default state value (10secs). I dont know how to communicate between child (Timer) and parent component (App), because i dont wanna initialize that timer state in parent component and put it in handleAnswerCorrectness so when currentPage changes i also reset timer, because it would cause parent rerender each second and lose sense of making it a child component. How to do it properly??? const {useState, useEffect} = React; const questions = [ { questionText: "What is the capital city of France", answerOptions: [ { id: 1, answerText: "New York", isCorrect: false }, { id: 2, answerText: "London", isCorrect: false }, { id: 3, answerText: "Paris", isCorrect: true }, { id: 4, answerText: "Dublin", isCorrect: false } ] }, { questionText: "Who is CEO of Tesla?", answerOptions: [ { id: 1, answerText: "Jeff Bezos", isCorrect: false }, { id: 2, answerText: "Elon Musk", isCorrect: true }, { id: 3, answerText: "Bill Gates", isCorrect: false }, { id: 4, answerText: "Tony Stark", isCorrect: false } ] } ]; const App = () => { const [currentPage, setCurrentPage] = useState(0); const handleAnswerCorrectness = () => { if (currentPage + 1 < questions.length) { setCurrentPage((prevState) => prevState + 1); } }; return ( <div className="App"> <div> <h3> Question {currentPage + 1}/{questions.length} </h3> <Timer currentPage={currentPage} onCurrentPageChange={setCurrentPage} questionsLenght={questions.length} /> </div> <p>{questions[currentPage].questionText}</p> <div> {questions[currentPage].answerOptions.map((answerOption) => ( <button key={answerOption.id} onClick={handleAnswerCorrectness}> {answerOption.answerText} </button> ))} </div> </div> ); } const Timer = ({ onCurrentPageChange, currentPage, questionsLenght }) => { const [timer, setTimer] = useState(10); useEffect(() => { const interval = setInterval(() => { if (timer > 0) { setTimer((prevState) => prevState - 1); } if (timer === 0 && currentPage + 1 < questionsLenght) { onCurrentPageChange((prevState) => prevState + 1); setTimer(10); } }, 1000); return () => clearInterval(interval); }, [timer]); return ( <div> <span>{timer}</span> </div> ); }; const root = ReactDOM.createRoot( document.getElementById("root") ).render(<App/>); <div id="root"></div> <script src="https://cdnjs.cloudflare.com/ajax/libs/react/18.1.0/umd/react.development.js"></script> <script src="https://cdnjs.cloudflare.com/ajax/libs/react-dom/18.1.0/umd/react-dom.development.js"></script>
[ "It seems that the useEffect in Timer is both setting state timer and reading it in dependencies array, which could cause a conflict.\nPerhaps try separating it to 2 useEffect with more simple logic, so related code will only run when needed.\nOne for the purpose of countdown, and reset timer when currentPage changes:\n// Countdown effect\nuseEffect(() => {\n setTimer(10);\n const interval = setInterval(\n () =>\n setTimer((prevState) => {\n if (prevState > 0) return prevState - 1;\n return prevState;\n }),\n 1000\n );\n\n return () => clearInterval(interval);\n}, [currentPage]);\n\nAnother to handle timeout and trigger onCurrentPageChange:\n// Timeout effect\nuseEffect(() => {\n if (timer === 0 && currentPage + 1 < questionsLenght)\n onCurrentPageChange((prevState) => prevState + 1);\n}, [timer, currentPage, questionsLenght]);\n\nThere might be other issues that needed to be addressed, but hopefully this would still help by making the logic cleaner.\nExample:\n\n\nconst { useState, useEffect } = React;\n\nconst questions = [\n {\n questionText: \"What is the capital city of France\",\n answerOptions: [\n { id: 1, answerText: \"New York\", isCorrect: false },\n { id: 2, answerText: \"London\", isCorrect: false },\n { id: 3, answerText: \"Paris\", isCorrect: true },\n { id: 4, answerText: \"Dublin\", isCorrect: false },\n ],\n },\n {\n questionText: \"Who is CEO of Tesla?\",\n answerOptions: [\n { id: 1, answerText: \"Jeff Bezos\", isCorrect: false },\n { id: 2, answerText: \"Elon Musk\", isCorrect: true },\n { id: 3, answerText: \"Bill Gates\", isCorrect: false },\n { id: 4, answerText: \"Tony Stark\", isCorrect: false },\n ],\n },\n];\n\nconst App = () => {\n const [currentPage, setCurrentPage] = useState(0);\n\n const handleAnswerCorrectness = () => {\n if (currentPage + 1 < questions.length) {\n setCurrentPage((prevState) => prevState + 1);\n }\n };\n\n return (\n <div className=\"App\">\n <div>\n <h3>\n Question {currentPage + 1}/{questions.length}\n </h3>\n <Timer\n currentPage={currentPage}\n onCurrentPageChange={setCurrentPage}\n questionsLenght={questions.length}\n />\n </div>\n <p>{questions[currentPage].questionText}</p>\n <div>\n {questions[currentPage].answerOptions.map((answerOption) => (\n <button key={answerOption.id} onClick={handleAnswerCorrectness}>\n {answerOption.answerText}\n </button>\n ))}\n </div>\n </div>\n );\n};\n\nconst Timer = ({ onCurrentPageChange, currentPage, questionsLenght }) => {\n const [timer, setTimer] = useState(10);\n\n // Countdown effect\n useEffect(() => {\n setTimer(10);\n const interval = setInterval(\n () =>\n setTimer((prevState) => {\n if (prevState > 0) return prevState - 1;\n return prevState;\n }),\n 1000\n );\n\n return () => clearInterval(interval);\n }, [currentPage]);\n\n // Timeout effect\n useEffect(() => {\n if (timer === 0 && currentPage + 1 < questionsLenght)\n onCurrentPageChange((prevState) => prevState + 1);\n }, [timer, currentPage, questionsLenght]);\n\n return (\n <div>\n <span>{timer}</span>\n </div>\n );\n};\n\nconst root = ReactDOM.createRoot(document.getElementById(\"root\")).render(\n <App />\n);\n<div id=\"root\"></div>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react/18.1.0/umd/react.development.js\"></script>\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/react-dom/18.1.0/umd/react-dom.development.js\"></script>\n\n\n\n" ]
[ 2 ]
[]
[]
[ "javascript", "reactjs" ]
stackoverflow_0074660414_javascript_reactjs.txt
Q: Easiest way to send one-time string value from my winforms desktop app to my windows service Every time my windows forms desktop app starts up, I need it to send a one-time string value to my windows service on the same pc. In most cases, the service will be running constantly even when the desktop app is not. But the desktop app will check and start the service if necessary. What is the easiest/best mechanism for simple communication of a single string on desktop app startup to the windows service? I have no idea how communication with windows service works. Is there any simple example of doing this? A: A Window service is a class derived from ServiceBase. This base class allows your service code to override a method OnCustomCommand. For example, you can have something like this in the main class of your service public enum ServiceCCommands { ExecuteBackupNow = 128, DataLoader = 129 } public partial class BackupService : ServiceBase { ..... protected override void OnCustomCommand(int command) { if (Enum.IsDefined(typeof(ServiceCCommands), command)) { ServiceCCommands cmd = (ServiceCCommands)command; switch (cmd) { case ServiceCCommands.ExecuteBackupNow: _logger.Info($"BackupService: Executing backup"); RunBackup(); break; case ServiceCCommands.DataLoader: _logger.Info($"BackupService: Executing loader"); LoadData(); break; } } else _logger.Error($"BackupService: Custom Command not recognized={command}"); } Now, your UI code could call this CustomCommand after obtaining an instance of your service from the ServiceController class using the System.ServiceProcess namespace private void cmdRunBackup_Click(object sender, EventArgs e) { try { const int ExecuteBackupNow = 128; var service = new ServiceController("yourServiceNameHere"); if (service != null) { service.ExecuteCommand(ExecuteBackupNow); msgUtil.Information("Started backup command!"); } else msgUtil.Failure("The service is not available!"); } catch (Exception ex) { _logger.Error(......); } } // end class However, as far as I know, there is no way to pass a string to this method. But of course you can store the string somewhere on a disk file and then trigger the appropriate command on the service to read the string or other parameters that you need to pass. If a more complex context is required you can always host a WCF service inside the Windows Service itself. This is a more complex solution and I recommend you to evaluate the effort required against the simple file based solution. A: What I am doing and seems to be working so far is have my desktop app write my string value to a file in the CommonApplicationData special folder using: Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) And then my Windows Service looks for the file and its value using the same method. CommonApplicationData is the directory that serves as a common repository for application-specific data that is used by all users, including the SYSTEM account. For details, see Environment.GetFolderPath Method and CommonApplicationData in Environment.SpecialFolder.
Easiest way to send one-time string value from my winforms desktop app to my windows service
Every time my windows forms desktop app starts up, I need it to send a one-time string value to my windows service on the same pc. In most cases, the service will be running constantly even when the desktop app is not. But the desktop app will check and start the service if necessary. What is the easiest/best mechanism for simple communication of a single string on desktop app startup to the windows service? I have no idea how communication with windows service works. Is there any simple example of doing this?
[ "A Window service is a class derived from ServiceBase. This base class allows your service code to override a method OnCustomCommand.\nFor example, you can have something like this in the main class of your service\npublic enum ServiceCCommands\n{ \n ExecuteBackupNow = 128,\n DataLoader = 129\n}\n \npublic partial class BackupService : ServiceBase\n{\n .....\n\n protected override void OnCustomCommand(int command)\n {\n if (Enum.IsDefined(typeof(ServiceCCommands), command))\n {\n ServiceCCommands cmd = (ServiceCCommands)command;\n switch (cmd)\n {\n case ServiceCCommands.ExecuteBackupNow:\n _logger.Info($\"BackupService: Executing backup\");\n RunBackup();\n break;\n case ServiceCCommands.DataLoader:\n _logger.Info($\"BackupService: Executing loader\");\n LoadData();\n break;\n }\n }\n else\n _logger.Error($\"BackupService: Custom Command not recognized={command}\");\n }\n\nNow, your UI code could call this CustomCommand after obtaining an instance of your service from the ServiceController class using the System.ServiceProcess namespace\nprivate void cmdRunBackup_Click(object sender, EventArgs e)\n{\n try\n {\n const int ExecuteBackupNow = 128;\n var service = new ServiceController(\"yourServiceNameHere\");\n if (service != null)\n {\n service.ExecuteCommand(ExecuteBackupNow);\n msgUtil.Information(\"Started backup command!\");\n }\n else\n msgUtil.Failure(\"The service is not available!\");\n }\n catch (Exception ex)\n {\n _logger.Error(......);\n }\n\n} // end class\n\nHowever, as far as I know, there is no way to pass a string to this method. But of course you can store the string somewhere on a disk file and then trigger the appropriate command on the service to read the string or other parameters that you need to pass.\nIf a more complex context is required you can always host a WCF service inside the Windows Service itself. This is a more complex solution and I recommend you to evaluate the effort required against the simple file based solution.\n", "What I am doing and seems to be working so far is have my desktop app write my string value to a file in the CommonApplicationData special folder using:\nEnvironment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)\n\nAnd then my Windows Service looks for the file and its value using the same method.\nCommonApplicationData is the directory that serves as a common repository for application-specific data that is used by all users, including the SYSTEM account.\nFor details, see Environment.GetFolderPath Method and CommonApplicationData in Environment.SpecialFolder.\n" ]
[ 0, 0 ]
[]
[]
[ "ipc", "string", "windows_services", "winforms" ]
stackoverflow_0074642709_ipc_string_windows_services_winforms.txt
Q: drop table timing out using psycopg2 (on postgres) I can't drop a table that has dependencies using psycopg2 in python because it times out. (updating to remove irrelevant info, thank you to @Adrian Klaver for the assistance so far). I have two docker images, one running a postgres database, the other a python flask application making use of multiple psycopg2 calls to create tables, insert rows, select rows, and (unsuccessfully dropping a specific table). Things I have tried: used psycopg2 to select data, insert data used psycopg2 to drop some tables successfully tried (unsuccessfully) to drop a specific table 'davey1' (through psycopg2 I get the same timeout issue) looked at locks on the table SELECT * FROM pg_locks l JOIN pg_class t ON l.relation = t.oid AND t.relkind = 'r' WHERE t.relname = 'davey1'; looked at processes running select * from pg_stat_activity; Specifically the code I call the function (i have hard coded the table name for my testing): @site.route("/drop-table", methods=['GET','POST']) @login_required def drop_table(): form = DeleteTableForm() if request.method == "POST": tablename = form.tablename.data POSTGRES_USER= os.getenv('POSTGRES_USER') POSTGRES_PASSWORD= os.getenv('POSTGRES_PASSWORD') POSTGRES_DB = os.getenv('POSTGRES_DB') POSTGRES_HOST = os.getenv('POSTGRES_HOST') POSTGRES_PORT = os.getenv('POSTGRES_PORT') try: conn = psycopg2.connect(database=POSTGRES_DB, user=POSTGRES_USER,password=POSTGRES_PASSWORD,host=POSTGRES_HOST,port=POSTGRES_PORT) cursor = conn.cursor() sql_command = "DROP TABLE "+ str(tablename) cursor.execute(sql_command) conn.commit() cursor.close() conn.close() except Exception as e: flash("Unable to Drop table " + tablename +" it does not exist","error") app.logger.info("Error %s", str(e)) cursor.close() conn.close() return render_template("drop-table.html", form=form) Update 7/11 - I don't know why, but the problem is caused by either flask @login_required and/or accessing "current_user" (both functions are part of flask_login), in my code I import them as from flask_login import login_required,current_user. I have no idea why this is happening, and it really annoying. 
If I comment out the above @login_required decorator it works fine, logs look like this: 2022-11-07 09:36:45.854 UTC [55] LOG: statement: BEGIN 2022-11-07 09:36:45.854 UTC [55] LOG: statement: DROP TABLE davey1 2022-11-07 09:36:45.858 UTC [55] LOG: statement: COMMIT 2022-11-07 09:36:45.867 UTC [33] LOG: statement: BEGIN 2022-11-07 09:36:45.867 UTC [33] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:36:45.875 UTC [33] LOG: statement: ROLLBACK When I have the @login_required included in the code, the drop table times out and I receive this log: 2022-11-07 09:38:37.192 UTC [34] LOG: statement: BEGIN 2022-11-07 09:38:37.192 UTC [34] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:38:37.209 UTC [38] LOG: statement: BEGIN 2022-11-07 09:38:37.209 UTC [38] LOG: statement: DROP TABLE davey1 I have even tried putting a "time.sleep(10)" in my code to wait for rogue database transactions to rollback (which from the logs seems like the login_required is causing perhaps?!. I am lost on how to fix this, or even debug further. A: The issue was with only including the psycopg2-binary==2.9.5 module in my requirements file... I needed to also include psycopg2==2.9.5 I don't completely understand why, but this was the solution to the problem (I found this when deploying my docker image to AWS-ECS and seeing that my uwsgi process was crashing due to psycopg2) Thank you @AdrianKlaver for your assistance.
drop table timing out using psycopg2 (on postgres)
I can't drop a table that has dependencies using psycopg2 in python because it times out. (updating to remove irrelevant info, thank you to @Adrian Klaver for the assistance so far). I have two docker images, one running a postgres database, the other a python flask application making use of multiple psycopg2 calls to create tables, insert rows, select rows, and (unsuccessfully dropping a specific table). Things I have tried: used psycopg2 to select data, insert data used psycopg2 to drop some tables successfully tried (unsuccessfully) to drop a specific table 'davey1' (through psycopg2 I get the same timeout issue) looked at locks on the table SELECT * FROM pg_locks l JOIN pg_class t ON l.relation = t.oid AND t.relkind = 'r' WHERE t.relname = 'davey1'; looked at processes running select * from pg_stat_activity; Specifically the code I call the function (i have hard coded the table name for my testing): @site.route("/drop-table", methods=['GET','POST']) @login_required def drop_table(): form = DeleteTableForm() if request.method == "POST": tablename = form.tablename.data POSTGRES_USER= os.getenv('POSTGRES_USER') POSTGRES_PASSWORD= os.getenv('POSTGRES_PASSWORD') POSTGRES_DB = os.getenv('POSTGRES_DB') POSTGRES_HOST = os.getenv('POSTGRES_HOST') POSTGRES_PORT = os.getenv('POSTGRES_PORT') try: conn = psycopg2.connect(database=POSTGRES_DB, user=POSTGRES_USER,password=POSTGRES_PASSWORD,host=POSTGRES_HOST,port=POSTGRES_PORT) cursor = conn.cursor() sql_command = "DROP TABLE "+ str(tablename) cursor.execute(sql_command) conn.commit() cursor.close() conn.close() except Exception as e: flash("Unable to Drop table " + tablename +" it does not exist","error") app.logger.info("Error %s", str(e)) cursor.close() conn.close() return render_template("drop-table.html", form=form) Update 7/11 - I don't know why, but the problem is caused by either flask @login_required and/or accessing "current_user" (both functions are part of flask_login), in my code I import them as from flask_login import login_required,current_user. I have no idea why this is happening, and it really annoying. 
If I comment out the above @login_required decorator it works fine, logs look like this: 2022-11-07 09:36:45.854 UTC [55] LOG: statement: BEGIN 2022-11-07 09:36:45.854 UTC [55] LOG: statement: DROP TABLE davey1 2022-11-07 09:36:45.858 UTC [55] LOG: statement: COMMIT 2022-11-07 09:36:45.867 UTC [33] LOG: statement: BEGIN 2022-11-07 09:36:45.867 UTC [33] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:36:45.875 UTC [33] LOG: statement: ROLLBACK When I have the @login_required included in the code, the drop table times out and I receive this log: 2022-11-07 09:38:37.192 UTC [34] LOG: statement: BEGIN 2022-11-07 09:38:37.192 UTC [34] LOG: statement: SELECT users.user_id AS users_user_id, users.name AS users_name, users.email AS users_email, users.password AS users_password, users.created_on AS users_created_on, users.last_login AS users_last_login, users.email_sent AS users_email_sent, users.verified_account AS users_verified_account, users.email_confirmed_on AS users_email_confirmed_on, users.number_of_failed_runs AS users_number_of_failed_runs, users.number_of_logins AS users_number_of_logins, users.datarobot_api_token AS users_datarobot_api_token, users.document_schema AS users_document_schema, users.column_to_classify AS users_column_to_classify, users.column_name_for_title AS users_column_name_for_title FROM users WHERE users.user_id = 1 2022-11-07 09:38:37.209 UTC [38] LOG: statement: BEGIN 2022-11-07 09:38:37.209 UTC [38] LOG: statement: DROP TABLE davey1 I have even tried putting a "time.sleep(10)" in my code to wait for rogue database transactions to rollback (which from the logs seems like the login_required is causing perhaps?!. I am lost on how to fix this, or even debug further.
[ "The issue was with only including the psycopg2-binary==2.9.5 module in my requirements file... I needed to also include psycopg2==2.9.5\nI don't completely understand why, but this was the solution to the problem (I found this when deploying my docker image to AWS-ECS and seeing that my uwsgi process was crashing due to psycopg2)\nThank you @AdrianKlaver for your assistance.\n" ]
[ 0 ]
[]
[]
[ "flask", "postgresql", "psycopg2", "python" ]
stackoverflow_0074323783_flask_postgresql_psycopg2_python.txt
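A note on the entry above: the accepted fix was the packaging change (installing psycopg2 alongside psycopg2-binary), but the DROP TABLE statement in the question is built by string concatenation, which is fragile. The sketch below is not the asker's code; it only illustrates, assuming the same environment-variable connection settings, how psycopg2.sql can compose the identifier safely. The function name is a placeholder.

import os
import psycopg2
from psycopg2 import sql

def drop_table_safely(tablename):
    # sql.Identifier() quotes the table name, so unusual or hostile input
    # cannot change the shape of the statement the server receives.
    command = sql.SQL("DROP TABLE {}").format(sql.Identifier(tablename))
    conn = psycopg2.connect(
        database=os.getenv("POSTGRES_DB"),
        user=os.getenv("POSTGRES_USER"),
        password=os.getenv("POSTGRES_PASSWORD"),
        host=os.getenv("POSTGRES_HOST"),
        port=os.getenv("POSTGRES_PORT"),
    )
    try:
        with conn, conn.cursor() as cursor:  # the connection block commits on success
            cursor.execute(command)
    finally:
        conn.close()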
Q: How can I draw devanagari characters on a screen? I am trying to represent devanagari characters on a screen, but in the dev environment where I'm programming I don't have Unicode support. So, to write characters I use binary matrices to color the related screen's pixels. I sorted these matrices according to the Unicode order. For the languages that use the Latin alphabet I had no issues; I only needed to write the characters one after the other to represent a string, but for the devanagari characters it's different. In the devanagari script some characters, when placed next to others, can completely change the appearance of the word itself, both in the order and in the appearance of the characters. The resulting characters are considered a single character, but when read as Unicode they actually return 2 distinct characters. This merging sometimes occurs in a simple way: क + ् = क् ग + ् = ग् फ + ि = फि But other times you get completely different characters: क + ् + क = क्क ग + ् + घ = ग्घ क + ् + ष = क्ष I found several papers describing the complex grammatical rules that determine how these characters merge (https://www.unicode.org/versions/Unicode8.0.0/UnicodeStandard-8.0.pdf), but the more I look into it the more I realize that I need to learn Hindi to understand those rules and then create an algorithm. I would like to understand the principles behind these character combinations but without necessarily having to learn the Hindi language. I wonder if anyone before me has already solved this problem or found an alternative solution and would like to share it with me. A: Whether Devanagari text is encoded using Unicode or ISCII, display of the text requires a combination of shaping engine and font data that maps a string of characters into an appropriate sequence of positioned glyphs. The set of glyphs needed for Devanagari will be a fair bit larger than the initial set of characters. The shaping steps involve an analysis of clusters, re-ordering of certain elements within clusters, substitution of glyphs, and finally positioning adjustments to the glyphs. Consider this example: क + ् + क + ि = क्कि The cluster analysis is needed to recognize elements against a general cluster pattern — e.g., which comprise the "base" consonant within the cluster, which are additional consonants that will conjoin to it, which are vowels and what the type of vowel is with regard to visual positioning. In that sequence, the <ka, virama, ka> sequence will form a base that vowel or other marks are positioned relative to. The second ka is the "base" consonant and the initial <ka, virama> sequence will conjoin as a "half" form. And the short-i vowel is one that needs to be re-positioned to the left of the conjoined-consonant combination. The Devanagari section in the Unicode Standard describes in a general way some of the actions that will be needed in display, but it's not a specific implementation guide. The OpenType font specification supports display of scripts like Devanagari through a combination of "OpenType Layout" data in the font plus shaping implementations that interact with that data.
You can find documentation specifically for Devanagari font implementations here: https://learn.microsoft.com/en-us/typography/script-development/devanagari You might also find helpful the specification for the "Universal Shaping Engine" that several implementations use (in combination with OpenType fonts) for shaping many different scripts: https://learn.microsoft.com/en-us/typography/script-development/use You don't necessarily need to use OpenType, but you will want some implementation with the functionality I've described. If you're running in a specific embedded OS environment that isn't, say, Windows IOT, you evidently can't take advantage of the OpenType shaping support built into Windows or other major OS platforms. But perhaps you could take advantage of Harfbuzz, which is an open-source OpenType shaping library: https://github.com/harfbuzz/harfbuzz This would need to be combined with Devanagari fonts that have appropriate OpenType Layout data, and there are plenty of those, including OSS options (e.g., Noto Sans Devanagari).
How can I draw devanagari characters on a screen?
I am trying to represent devanagari characters on a screen, but in the dev environment where I'm programming I don't have unicode support. Then, to write characters I use binary matrices to color the related screen's pixels. I sorted these matrices according to the unicode order. For the languages that uses the latin alphabet I had no issues, I only needed to write the characters one after the other to represent a string, but for the devanagari characters it's different. In the devanagari script some characters, when placed next to others can completely change the appearance of the word itself, both in the order and in the appearance of the characters. The resulting characters are considered as a single character, but when read as unicode they actually return 2 distinct characters. This merging sometimes occurs in a simple way: क + ् = क् ग + ् = ग् फ + ि = फि But other times you get completely different characters: क + ् + क = क्क ग + ् + घ = ग्घ क + ् + ष = क्ष I found several papers describing the complex grammatical rules that determine how these characters merges (https://www.unicode.org/versions/Unicode8.0.0/UnicodeStandard-8.0.pdf), but the more I look into it the more I realize that I need to learn Hindi for understand that rules and then create an algorithm. I would like to understand the principles behind these characters combinations but without necessarily having to learn the Hindi language. I wonder if anyone before me has already solved this problem or found an alternative solution and would like to share it with me.
[ "Whether Devanagari text is encoded using Unicode or ISCII, display of the text requires a combination of shaping engine and font data that maps a string of characters into an appropriate sequence of positioned glyphs. The set of glyphs needed for Devanagari will be a fair bit larger than the initial set of characters.\nThe shaping steps involves an analysis of clusters, re-ordering of certain elements within clusters, substitution of glyphs, and finally positioning adjustments to the glyphs. Consider this example:\nक + ् + क + ि = क्कि\nThe cluster analysis is needed to recognize elements against a general cluster pattern — e.g., which comprise the \"base\" consonant within the cluster, which are additional consonants that will conjoin to it, which are vowels and what the type of vowel with regard to visual positioning. In that sequence, the <ka, virama, ka> sequence will form a base that vowel or other marks are positioned relative to. The second ka is the \"base\" consonant and the inital <ka, virama> sequence will conjoin as a \"half\" form. And the short-i vowel is one that needs to be re-positioned to the left of the conjoined-consonant combination.\nThe Devanagari section in the Unicode Standard describes in a general way some of the actions that will be needed in display, but it's not a specific implementation guide.\nThe OpenType font specification supports display of scripts like Devanagari through a combination of \"OpenType Layout\" data in the font plus shaping implementations that interact with that data. You can find documentation specifically for Devanagari font implementations here:\nhttps://learn.microsoft.com/en-us/typography/script-development/devanagari\nYou might also find helpful the specification for the \"Universal Shaping Engine\" that several implementations use (in combination with OpenType fonts) for shaping many different scripts:\nhttps://learn.microsoft.com/en-us/typography/script-development/use\nYou don't necessarily need to use OpenType, but you will want some implementation with the functionality I've described. If you're running in a specific embedded OS environment that isn't, say, Windows IOT, you evidently can't take advantage of the OpenType shaping support built into Windows or other major OS platforms. But perhaps you could take advantage of Harfbuzz, which is an open-source OpenType shaping library:\nhttps://github.com/harfbuzz/harfbuzz\nThis would need to be combined with Devanagari fonts that have appropriate OpenType Layout data, and there are plenty of those, including OSS options (e.g., Noto Sans Devanagari).\n" ]
[ 1 ]
[]
[]
[ "devanagari", "fonts", "hindi", "lcd", "unicode" ]
stackoverflow_0074654119_devanagari_fonts_hindi_lcd_unicode.txt
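Following on from the HarfBuzz recommendation in the entry above, here is a minimal shaping sketch, assuming the uharfbuzz Python bindings and some OpenType Devanagari font file on disk (the path below is a placeholder). It only shows how a character sequence comes back as reordered, positioned glyph ids; drawing those glyphs as pixels is left to the caller.

import uharfbuzz as hb

with open("NotoSansDevanagari-Regular.ttf", "rb") as f:  # placeholder font path
    blob = hb.Blob(f.read())
font = hb.Font(hb.Face(blob))

buf = hb.Buffer()
buf.add_str("क्कि")              # ka + virama + ka + vowel sign i
buf.guess_segment_properties()   # infer script, language and direction
hb.shape(font, buf)

# The buffer now holds glyph ids with offsets/advances, already reordered
# (for example the i-matra placed before the conjunct), ready to rasterize.
for info, pos in zip(buf.glyph_infos, buf.glyph_positions):
    print(info.codepoint, pos.x_offset, pos.y_offset, pos.x_advance)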
Q: Can't read file included as a script in angular.json I am trying to include the Google Maps clustering script in my Angular project. I have tried all of the usual suggestions, such as adding a <script> link to index.html, but so far, nothing is working. I am getting a confusing error message when I try to include https://unpkg.com/@googlemaps/markerclustererplus/dist/index.min.js as a script in the angular.json file. According to the error output, An unhandled exception occurred: Script file https://unpkg.com/@googlemaps/markerclustererplus/dist/index.min.js does not exist. The file does exist though: the link can be clicked to verify this. From what I've read, the scripts element in angular.json should be able to include external scripts like this. What might be going wrong here? Is there a setting that needs to change? A: Add it in your index.html <script src="https://unpkg.com/@googlemaps/markerclustererplus/dist/index.min.js"></script> And it will work.
Can't read file included as a script in angular.json
I am trying to include the Google Maps clustering script in my Angular project. I have tried all of the usual suggestions, such as adding a <script> link to index.html, but so far, nothing is working. I am getting a confusing error message when I try to include https://unpkg.com/@googlemaps/markerclustererplus/dist/index.min.js as a script in the angular.json file. According to the error output, An unhandled exception occurred: Script file https://unpkg.com/@googlemaps/markerclustererplus/dist/index.min.js does not exist. The file does exist though: the link can be clicked to verify this. From what I've read, the scripts element in angular.json should be able to include external scripts like this. What might be going wrong here? Is there a setting that needs to change?
[ "Add it in your index.html\n <script src=\"https://unpkg.com/@googlemaps/markerclustererplus/dist/index.min.js\"></script>\n\nAnd it will work.\n" ]
[ 0 ]
[]
[]
[ "angular", "google_maps", "google_maps_markers" ]
stackoverflow_0073191197_angular_google_maps_google_maps_markers.txt
Q: How to include an external file on Android using C# and Godot? I am trying to export a Godot game to Android using C#. I am using external libraries, like onnxruntime and everything seems to work except for the fact that I cannot include custom files to the exported package. I have already tried to include custom files using Godot's built in resources tab in the export dialog. More specifically, I have added *.txt and *.onnx as extensions to be included. However, only .txt files are exported and recognized. I can get a .txt file by using the res:// location, but other files cannot be found, because they "don't exist". So, how can I access custom files? Where do I have to place them and how do I reference them? Is there a library I have to install? Do I have to fiddle with the AndroidManifest or gradle files? I am using Godot 3.5, Visual Studio Code and .NET 6.0. Any help is appreciated. A: At first I tried to change the extension of the files into .res in order to trick the system into accepting them. However, while the files could be seen by Godot, importing and using them proved to be tricky. Instead I used Godot's File functionality after I had declared the extensions that I wanted to use in the resources tab, in the Export dialog. This code works: var f = new Godot.File(); f.Open(m_path, Godot.File.ModeFlags.Read); res = f.GetBuffer((long)f.GetLen()); f.Close(); I have not figured out why the path res://[filename.extenion] returns no results though.
How to include an external file on Android using C# and Godot?
I am trying to export a Godot game to Android using C#. I am using external libraries, like onnxruntime and everything seems to work except for the fact that I cannot include custom files to the exported package. I have already tried to include custom files using Godot's built in resources tab in the export dialog. More specifically, I have added *.txt and *.onnx as extensions to be included. However, only .txt files are exported and recognized. I can get a .txt file by using the res:// location, but other files cannot be found, because they "don't exist". So, how can I access custom files? Where do I have to place them and how do I reference them? Is there a library I have to install? Do I have to fiddle with the AndroidManifest or gradle files? I am using Godot 3.5, Visual Studio Code and .NET 6.0. Any help is appreciated.
[ "At first I tried to change the extension of the files into .res in order to trick the system into accepting them. However, while the files could be seen by Godot, importing and using them proved to be tricky. Instead I used Godot's File functionality after I had declared the extensions that I wanted to use in the resources tab, in the Export dialog. This code works:\nvar f = new Godot.File();\n f.Open(m_path, Godot.File.ModeFlags.Read);\n res = f.GetBuffer((long)f.GetLen());\n f.Close();\n\nI have not figured out why the path res://[filename.extenion] returns no results though.\n" ]
[ 0 ]
[]
[]
[ "android", "c#", "godot", "onnxruntime" ]
stackoverflow_0074576985_android_c#_godot_onnxruntime.txt
Q: Cloud Run with Gunicorn Best Practice I am currently working on a service that is supposed to provide an HTTP endpoint in Cloud Run and I don't have much experience. I am currently using flask + gunicorn and can also call the service. My main problem now is optimising for multiple simultaneous requests. Currently, the service in Cloud Run has 4GB of memory and 1 CPU allocated to it. When it is called once, the instance that is started directly consumes 3.7GB of memory and about 40-50% of the CPU (I use a neural network to embed my data). Currently, my settings are very basic: memory: 4096M CPU: 1 min-instances: 0 max-instances: 1 concurrency: 80 Workers: 1 (Gunicorn) Threads: 1 (Gunicorn) Timeout: 0 (Gunicorn, as recommended by Google) If I up the number of workers to two, I would need to up the memory to 8GB. If I do that, my service should be able to work on two requests simultaneously with one instance, provided the 1 CPU allocated has more than one core. But what happens if there is a third request? I would like to think that Cloud Run will start a second instance. Does the new instance also get 1 CPU and 8GB of memory, and if not, what is the best practice for me? A: One of the best practices is to let Cloud Run scale automatically instead of trying to optimize each instance. Using 1 worker is a good idea to limit the memory footprint and reduce the cold start. I recommend playing with the threads, typically setting them to 8 or 16, to leverage the concurrency parameter. If you put those values too low, Cloud Run's internal load balancer will route the request to the instance, thinking it will be able to serve it, but if Gunicorn can't accept the new request, you will have issues. Tune your service with the correct CPU and memory parameters, but also the threads and the concurrency, to find the correct ones. Hey is a useful tool to stress your service and observe what happens when you scale. A: The best practice so far is: for environments with multiple CPU cores, increase the number of workers to be equal to the cores available. Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling. Adjust the number of workers and threads on a per-application basis. For example, try to use a number of workers equal to the cores available and make sure there is a performance improvement, then adjust the number of threads, i.e. CMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app
Cloud Run with Gunicorn Best Practice
I am currently working on a service that is supposed to provide an HTTP endpoint in Cloud Run and I don't have much experience. I am currently using flask + gunicorn and can also call the service. My main problem now is optimising for multiple simultaneous requests. Currently, the service in Cloud Run has 4GB of memory and 1 CPU allocated to it. When it is called once, the instance that is started directly consumes 3.7GB of memory and about 40-50% of the CPU (I use a neural network to embed my data). Currently, my settings are very basic: memory: 4096M CPU: 1 min-instances: 0 max-instances: 1 concurrency: 80 Workers: 1 (Gunicorn) Threads: 1 (Gunicorn) Timeout: 0 (Gunicorn, as recommended by Google) If I up the number of workers to two, I would need to up the Memory to 8GB. If I do that my service should be able to work on two requests simultaneously with one instance, if this 1 CPU allocated, has more than one core. But what happens, if there is a thrid request? I would like to think, that Cloud Run will start a second instance. Does the new instance gets also 1 CPU and 8GB of memory and if not, what is the best practise for me?
[ "One of the best practice is to let Cloud Run scale automatically instead of trying to optimize each instance. Using 1 worker is a good idea to limit the memory footprint and reduce the cold start.\nI recommend to play with the threads, typically to put it to 8 or 16 to leverage the concurrency parameter.\nIf you put those value too low, Cloud Run internal load balancer will route the request to the instance, thinking it will be able to serve it, but if Gunicorn can't access new request, you will have issues.\nTune your service with the correct parameter of CPU and memory, but also the thread and the concurrency to find the correct ones. Hey is a useful tool to stress your service and observe what's happens when you scale.\n", "The best practice so far is For environments with multiple CPU cores, increase the number of workers to be equal to the cores available. Timeout is set to 0 to disable the timeouts of the workers to allow Cloud Run to handle instance scaling. Adjust the number of workers and threads on a per-application basis. For example, try to use a number of workers equal to the cores available and make sure there is a performance improvement, then adjust the number of threads.i.e.\nCMD exec gunicorn --bind :$PORT --workers 1 --threads 8 --timeout 0 main:app\n\n" ]
[ 0, 0 ]
[]
[]
[ "google_cloud_run", "gunicorn", "python" ]
stackoverflow_0071378905_google_cloud_run_gunicorn_python.txt
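Since Gunicorn reads its configuration from a Python file, the worker/thread/timeout advice in the entry above can be captured in a gunicorn.conf.py. The values below are placeholders to tune against your own CPU, memory and concurrency limits, not a recommendation for the asker's specific workload.

# gunicorn.conf.py, started with: gunicorn -c gunicorn.conf.py main:app
import os

bind = "0.0.0.0:" + os.environ.get("PORT", "8080")      # Cloud Run injects PORT
workers = int(os.environ.get("WEB_CONCURRENCY", "1"))   # keep low; let Cloud Run add instances
threads = int(os.environ.get("GUNICORN_THREADS", "8"))  # raise to exploit the concurrency setting
timeout = 0                                             # let Cloud Run, not Gunicorn, end slow instances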
Q: Cannot add or remove mail address with PowerShell in Exchange Hybrid, neither on-premises nor on Exchange Online I am running a Hybrid Exchange Server installation. AD is synced to Azure and most of the mailboxes are in the cloud, not on-premises. Now I am not able to add or remove an email address from a user's mailbox. The recommended way to do this, according to the Microsoft docs, is Set-Mailbox -Identity <identity> -EmailAddresses @{add='[email protected]'} Unfortunately this leads to an error: Microsoft.Exchange.Configuration.DualWrite.LocStrings.UnableToWriteToAadException|An Azure Active Directory call was made to keep object in sync between Azure Active Directory and Exchange Online. However, it failed. Detailed error: Unable to update the specified properties for on-premises mastered Directory Sync objects or objects currently undergoing migration. DualWrite (Graph) When trying to run this on the on-premises machine, it leads to this error: The operation couldn't be performed because object 'identity' couldn't be found on 'domaincontroller.domain.com'. This seems to be OK, since the mailbox is not on the on-premises server. To be honest, I am also a little bit confused about user, usermailbox, mailbox, recipient, ... Can anyone give me a hint how to fix this and how to add/remove a mail address from a user? Finally I found out that it is not possible to change the mail addresses in the cloud in a Hybrid environment. You get the same error if you try this in the Exchange Online admin center, and this is by design. In a Hybrid environment the mail addresses have to be set on-premises. This works in the admin center of the on-premises machine, but I found no way to do it with PowerShell, since the "object is not found" error occurs. So how do I get the O365 mailbox of a user on-premises? A: From the on-premises Exchange server's view, the object is not a mailbox, but a RemoteMailbox. Therefore I have to use the Set-RemoteMailbox cmdlet instead of Set-Mailbox: Set-RemoteMailbox -Identity <identity> -EmailAddresses @{add='[email protected]'} and Set-RemoteMailbox -Identity <identity> -EmailAddresses @{remove='[email protected]'} work for adding and removing mail addresses from these mailboxes. Thanks to Evgenij Smirnov, who pointed me in the right direction. (Link is in German only) https://social.technet.microsoft.com/Forums/de-DE/320866b5-cf71-452c-ba65-8f331857eb64/hinzufgen-oder-lschen-einer-email-adresse-in-hybrid-umgebung-mit-powershell?forum=exchange_serverde
Cannot add or remove mail address with PowerShell in Exchange Hybrid, neither on-premises nor on Exchange Online
I am running a Hybrid Exchange Server installation. AD is synced to Azure and most of the mailboxes are in the cloud, not on Premise. Now I am not able to add or remove an email address from a users mailbox. The recommended way to do this reading the Micorsoft docs is Set-Mailbox -Identity <identity> -EmailAddresses @{add='[email protected]'} Unfortunately this leads to an error: Microsoft.Exchange.Configuration.DualWrite.LocStrings.UnableToWriteToAadException|An Azure Active Directory call was made to keep object in sync between Azure Active Directory and Exchange Online. However, it failed. Detailed error: Unable to update the specified properties for on-premises mastered Directory Sync objects or objects currently undergoing migration. DualWrite (Graph) Well trying to run this at the on Premise machine, it leads to this error: The operation couldn't be performed because object 'identity' couldn't be found on 'domaincontroller.domain.com'. This seems to be ok, since the mailbox is not at the On Premise server. Beeing true, I am also a little bit confused about user, usermailbox, mailbox, recipient, ... Can anyone give me a hint how to fix this and how to add/remove a mail adress from a user? Finally I found out, that it is not possible to change the mailaddresses at the cloud in a Hybrid environment. You get the same error, if you try this in the exchange online admin center, and this is by default. In a Hybrid environment the mailadresses have to be set On Premise. This works in the admin center of the On Premise machine, but I found no way to do it with powerhell, since the "object is not found" error occurs. So how to get the O365 Mailbox of a user On Premise?
[ "From the On-Premises-Exchangeserver's view, the object is not a mailbox, but a RemoteMailbox\nTherefore I have to use the Set-RemoteMailbox Cmdlet, instead of Set-Mailbox\nSet-RemoteMailbox -Identity <identity> -EmailAddresses @{add='[email protected]'}\n\nand\nSet-RemoteMailbox -Identity <identity> -EmailAddresses @{remove='[email protected]'}\n\nworks for adding and removing mail addresses from these mailboxes.\nThanks to Evgenij Smirnov, who pointed me in the right direction.\n(Link is German language only ) https://social.technet.microsoft.com/Forums/de-DE/320866b5-cf71-452c-ba65-8f331857eb64/hinzufgen-oder-lschen-einer-email-adresse-in-hybrid-umgebung-mit-powershell?forum=exchange_serverde\n" ]
[ 0 ]
[]
[]
[ "exchange_server", "hybrid", "powershell" ]
stackoverflow_0074655045_exchange_server_hybrid_powershell.txt
Q: Image area mapping is showing Infinity instead of numerical value I'm using the library react-image-mapper to put clickable areas on an image, and on the initial load the generated HTML shows the area with a value of Infinity, causing the image to not be clickable. After reloading the page, the mapping is working properly. It seems as though the computation by the library is not fast enough when the page initially loads and then gets cached afterwards. I'm using NextJS to build the app and it should be working, but I'm not sure why it's not working on the initial page load. I don't mind finding a workaround like doing a quick reload when the page initially loads or presenting the user with a spinner as the computation behind the scenes completes. Thanks for any help with this! Initial Page load: After Reload: A: For anyone experiencing the same, I wasn't able to fix this exact issue, but found a workaround. In a useEffect I put a setTimeout function to check the value of the coords attribute and if it was Infinity, the page reloads. useEffect(() => { setTimeout(() => { let mapCoordsInfinity = document .getElementsByTagName('map')[0] .areas[0]?.attributes[1]?.nodeValue?.toString() .includes('Infinity'); if (mapCoordsInfinity || mapCoordsInfinity === undefined) { router.reload(); } }, 1000); }, []); The reload happens so fast you don't even notice it, so it should be fine for now until I can think of a better fix down the line. Hope this helps someone in the future!
Image area mapping is showing Infinity instead of numerical value
I'm using the library react-image-mapper to put clickable areas on an image and on the initial load, the html generated shows the area with a value of infinity causing the image to not be clickable. After reloading the page, the mapping is working properly. It seems as though the computation by the library is not fast enough when the page initially loads then gets cached afterwards. I'm using NextJS to build the app and it should be working, but not sure why it's not working on the initial page load. I don't mind finding a workaround like doing a quick reload when the page initially loads or presenting the user with a spinner as the computation behind the scenes completes. Thanks for any help with this! Inital Page load: After Reload:
[ "For anyone experiencing the same, I wasn't able to fix this exact issue, but found a workaround. In a useEffect I put a setTimeout function to check the value of the coords attribute and if it was Infinity, the page reloads.\nuseEffect(() => {\n setTimeout(() => {\n let mapCoordsInfinity = document\n .getElementsByTagName('map')[0]\n .areas[0]?.attributes[1]?.nodeValue?.toString()\n .includes('Infinity');\n\n if (mapCoordsInfinity || mapCoordsInfinity === undefined) {\n router.reload();\n }\n }, 1000);\n }, []);\n\nThe reload happens so fast, you don't even notice it so it should be fine for now until I can think of a better fix down the line. Hope this helps someone in the future!\n" ]
[ 0 ]
[]
[]
[ "html", "next.js", "reactjs" ]
stackoverflow_0074649629_html_next.js_reactjs.txt
Q: The video writer is not writing any video, just an empty .mp4 file. The rest is working fine. What's the problem? import cv2 import os cam = cv2.VideoCapture(r"C:/Users/User/Desktop/aayfryxljh.mp4") detector= cv2.CascadeClassifier("haarcascade_frontalface_default.xml") result = cv2.VideoWriter('C:/Users/User/Desktop/new.mp4',cv2.VideoWriter_fourcc(*'mp4v'),30,(112,112)) while (True): # reading from frame ret, frame = cam.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = detector.detectMultiScale(gray, 1.3, 5) size=(frame.shape[1],frame.shape[0]) c = cv2.waitKey(1) # if video is still left continue creating images for (x, y, w, h) in faces: cropped = frame[y: y + h, x: x + w] cv2.imshow('frame', cropped) result.write(cropped) # Release all space and windows once donecam.release() result.release() It should save the cropped faces video. I want to save it in .mp4 format. It just shows an empty .mp4 file, and I can't understand the issue. The code executes without any error.
The video writer is not writing any video, just an empty .mp4 file. The rest is working fine. What's the problem?
import cv2 import os cam = cv2.VideoCapture(r"C:/Users/User/Desktop/aayfryxljh.mp4") detector= cv2.CascadeClassifier("haarcascade_frontalface_default.xml") result = cv2.VideoWriter('C:/Users/User/Desktop/new.mp4',cv2.VideoWriter_fourcc(*'mp4v'),30,(112,112)) while (True): # reading from frame ret, frame = cam.read() gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) faces = detector.detectMultiScale(gray, 1.3, 5) size=(frame.shape[1],frame.shape[0]) c = cv2.waitKey(1) # if video is still left continue creating images for (x, y, w, h) in faces: cropped = frame[y: y + h, x: x + w] cv2.imshow('frame', cropped) result.write(cropped) # Release all space and windows once donecam.release() result.release() Should save the cropped faces video.I want to save it in .mp4 format. It Just shows an empty .mp4 file, I can't understand the issue. The code executes without any error
[ "As I mentioned in the comments the while loop never finishes and so result.release() never gets called. It looks like the code needs a way to end the while loop. Perhaps:\nwhile (True):\n\n # reading from frame\n ret, frame = cam.read()\n\n ### ADDED CODE:\n if ret == False:\n break\n\n gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)\n\n faces = detector.detectMultiScale(gray, 1.3, 5)\n size=(frame.shape[1],frame.shape[0])\n \n ### CHANGED CODE\n if cv2.waitKey(1) & 0xFF == ord('q'):\n break\n\n # if video is still left continue creating images\n for (x, y, w, h) in faces:\n cropped = frame[y: y + h, x: x + w]\n cv2.imshow('frame', cropped)\n result.write(cropped)\n\n# Release all space and windows once donecam.release()\n\n### ADDED CODE\ncam.release()\nresult.release()\n\n### ADDED CODE\ncv2.destroyAllWindows()\n\nSee the example at: https://opencv24-python-tutorials.readthedocs.io/en/latest/py_tutorials/py_gui/py_video_display/py_video_display.html#saving-a-video\n" ]
[ 0 ]
[]
[]
[ "opencv", "python" ]
stackoverflow_0074626256_opencv_python.txt
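One more detail worth noting for the entry above: cv2.VideoWriter silently drops frames whose dimensions differ from the size passed at construction, and cropped faces vary in size, so even with the loop fixed the output file can stay empty. A minimal sketch with placeholder paths, resizing each crop to the writer's size:

import cv2

SIZE = (112, 112)
cam = cv2.VideoCapture("input.mp4")                          # placeholder path
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
writer = cv2.VideoWriter("faces.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 30, SIZE)

while True:
    ret, frame = cam.read()
    if not ret:                                              # end of the input video
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        crop = cv2.resize(frame[y:y + h, x:x + w], SIZE)     # match the writer's frame size
        writer.write(crop)

cam.release()
writer.release()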
Q: Can kustomize built-in patchtransformers take yaml file instead of map of values Using Kustomzie I am trying to generate manifest for k8s, is there any option to directly use an yaml file within PatchTransformer built in plugin instead of passing map of values? Below works when the map of value is passed to PatchTransformer Directory structure of the files example |_ base | |_ app1 | |_deployment.yaml | |_ kustomization.yaml |_ overlay | |_staging | |_ kustomization.yaml | |_ addAffinity.yaml |_ common |_ affinity_common.yaml Content of staging/kustomization.yaml file content looks like below bases: - ../base/app1/ transformers: - addAffinity.yaml app1/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx-app spec: replicas: 1 selector: matchLabels: app: nginx-app template: metadata: labels: app: nginx-app spec: containers: - name: nginx-app image: nginx:1.22.1 ports: - containerPort: 80 app1/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - deployment.yaml Content of staging/addAffinity.yaml apiVersion: builtin kind: PatchTransformer metadata: name: add-affinity target: kind: Deployment patch : |- - op: add path: /spec/template/spec/affinity value: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - simple-deployment topologyKey: "kubernetes.io/hostname" Executing below command it works as expected. > kustomize build overlay/staging/ Question: What I am looking for is, placing the content in a yaml file and refer it in PatchTransformer value, something like below The addAffinity.yaml to directly refer the yaml file. apiVersion: builtin kind: PatchTransformer metadata: name: add-affinity-prop target: group: apps kind: Deployment - op: add path: '/spec/template/spec/affinity' value: - common_affinity.yaml #<------------ pass the yaml file directly affinity_common.yaml podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - simple-deployment topologyKey: "kubernetes.io/hostname" Is it achievable with Kustomzie, just substitute the value directly from yaml file like below #... app: nginx-app env: demo spec: affinity: - common_affinity.yaml containers: - image: nginx:1.22.1 name: nginx-app ports: - containerPort: 80 A: There is no support for this feature, reference kustomize community comments
Can kustomize built-in patchtransformers take yaml file instead of map of values
Using Kustomzie I am trying to generate manifest for k8s, is there any option to directly use an yaml file within PatchTransformer built in plugin instead of passing map of values? Below works when the map of value is passed to PatchTransformer Directory structure of the files example |_ base | |_ app1 | |_deployment.yaml | |_ kustomization.yaml |_ overlay | |_staging | |_ kustomization.yaml | |_ addAffinity.yaml |_ common |_ affinity_common.yaml Content of staging/kustomization.yaml file content looks like below bases: - ../base/app1/ transformers: - addAffinity.yaml app1/deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: nginx-deployment labels: app: nginx-app spec: replicas: 1 selector: matchLabels: app: nginx-app template: metadata: labels: app: nginx-app spec: containers: - name: nginx-app image: nginx:1.22.1 ports: - containerPort: 80 app1/kustomization.yaml apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - deployment.yaml Content of staging/addAffinity.yaml apiVersion: builtin kind: PatchTransformer metadata: name: add-affinity target: kind: Deployment patch : |- - op: add path: /spec/template/spec/affinity value: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - simple-deployment topologyKey: "kubernetes.io/hostname" Executing below command it works as expected. > kustomize build overlay/staging/ Question: What I am looking for is, placing the content in a yaml file and refer it in PatchTransformer value, something like below The addAffinity.yaml to directly refer the yaml file. apiVersion: builtin kind: PatchTransformer metadata: name: add-affinity-prop target: group: apps kind: Deployment - op: add path: '/spec/template/spec/affinity' value: - common_affinity.yaml #<------------ pass the yaml file directly affinity_common.yaml podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - simple-deployment topologyKey: "kubernetes.io/hostname" Is it achievable with Kustomzie, just substitute the value directly from yaml file like below #... app: nginx-app env: demo spec: affinity: - common_affinity.yaml containers: - image: nginx:1.22.1 name: nginx-app ports: - containerPort: 80
[ "There is no support for this feature, reference kustomize community comments\n" ]
[ 0 ]
[]
[]
[ "kubernetes", "kustomize" ]
stackoverflow_0074496671_kubernetes_kustomize.txt
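Since kustomize itself offers no way to inline a file into a PatchTransformer, one workaround, sketched here as an assumption rather than a kustomize feature, is a small pre-processing step that reads the shared snippet and writes it into the transformer's patch before kustomize build runs. File names follow the layout in the question; the template file is hypothetical.

import yaml

with open("common/affinity_common.yaml") as f:
    affinity = yaml.safe_load(f)

with open("overlay/staging/addAffinity.template.yaml") as f:   # hypothetical template
    transformer = yaml.safe_load(f)

# Rebuild the JSON6902 patch with the shared affinity block inlined as its value.
ops = [{"op": "add", "path": "/spec/template/spec/affinity", "value": affinity}]
transformer["patch"] = yaml.safe_dump(ops)

with open("overlay/staging/addAffinity.yaml", "w") as f:
    yaml.safe_dump(transformer, f)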
Q: Angular template not rendering Right now I am creating a library (my-custom-library) and a project in which we'll use that library (called my-Project) The requirement is, that within my-project I have to use my-custom-library, extended with templates, like this (my-project's app.component.html): <my-custom-library> <ng-template #myTemplate> <div>Some static content for now</div> </ng-template> </my-custom-library> The reason for this is, that in my-custom-library they want to have template-able components, where the template is given from the outside (in this case from my-project). Within my-custom-library I'm supposed to access the given template(s) and pass them to the corresponding components. This I'm trying to achieve (my-custom-project's app.component.ts) @ContentChild("myTemplate") myTemplateRef?: TemplateRef<any>; (my-custom-project's app.component.html) <ng-container [ngTemplateOutlet]="myTemplateRef"></ng-container> My problem is, that the contentChild is always empty, the template never renders. The structure itself I think is working, since when I'm moving this same structure within just one project and use it there everything works fine, the contentChild gets its value and "my template" is rendered. One more information, I don't know if its useful but my-custom-library is created like this (my-custom-library's app.module.ts): export class AppModule { constructor(private injector: Injector) { const customElement = createCustomElement(AppComponent, { injector: this.injector }); customElements.define('my-custom-library', customElement); } } What could cause this issue? Is it even possible to achieve this? A: I had the same issue, Apparently ngTemplateOutlet does not work with angular elements. but you can try content projection without ngTemplateOutlet and it works fine. e.g <my-custom-library> <div placeholder1></div> </my-custom-library> and you can define placeholder1 within your angular element (my-custom-library) e.g <div> /*your angular my-custom-library code here */ /* the content you want to inject from outside of your angular elements*/ <ng-content select="[placeholder1]"></ng-content> </div> Note: you can also do nesting of your angular elements as well using this, but with this approach you have to make sure that ng-content is not affect by any ngif condition, because your angular element can project the content from outside but it can not regenerate projection based on your conditions. those conditions should be added from where you are projecting content.
Angular template not rendering
Right now I am creating a library (my-custom-library) and a project in which we'll use that library (called my-Project) The requirement is, that within my-project I have to use my-custom-library, extended with templates, like this (my-project's app.component.html): <my-custom-library> <ng-template #myTemplate> <div>Some static content for now</div> </ng-template> </my-custom-library> The reason for this is, that in my-custom-library they want to have template-able components, where the template is given from the outside (in this case from my-project). Within my-custom-library I'm supposed to access the given template(s) and pass them to the corresponding components. This I'm trying to achieve (my-custom-project's app.component.ts) @ContentChild("myTemplate") myTemplateRef?: TemplateRef<any>; (my-custom-project's app.component.html) <ng-container [ngTemplateOutlet]="myTemplateRef"></ng-container> My problem is, that the contentChild is always empty, the template never renders. The structure itself I think is working, since when I'm moving this same structure within just one project and use it there everything works fine, the contentChild gets its value and "my template" is rendered. One more information, I don't know if its useful but my-custom-library is created like this (my-custom-library's app.module.ts): export class AppModule { constructor(private injector: Injector) { const customElement = createCustomElement(AppComponent, { injector: this.injector }); customElements.define('my-custom-library', customElement); } } What could cause this issue? Is it even possible to achieve this?
[ "I had the same issue, Apparently ngTemplateOutlet does not work with angular elements. but you can try content projection without ngTemplateOutlet and it works fine.\ne.g\n<my-custom-library>\n <div placeholder1></div>\n</my-custom-library>\n\nand you can define placeholder1 within your angular element (my-custom-library)\ne.g\n<div>\n/*your angular my-custom-library code here */\n\n /* the content you want to inject from outside of your angular elements*/\n <ng-content select=\"[placeholder1]\"></ng-content>\n</div> \n\nNote: you can also do nesting of your angular elements as well using this, but with this approach you have to make sure that ng-content is not affect by any ngif condition, because your angular element can project the content from outside but it can not regenerate projection based on your conditions. those conditions should be added from where you are projecting content.\n" ]
[ 0 ]
[]
[]
[ "angular", "angular2_template", "angular_library", "custom_component", "javascript" ]
stackoverflow_0070923309_angular_angular2_template_angular_library_custom_component_javascript.txt
Q: How to communicate with a windows service? I want to create a windows service that validates data and access it from another windows application, but I'm new to services and I'm not sure how to start. So, while the service is running, a windows application should somehow connect to the service, send some data and get a response, true or false. A: I could successfully handle the (almost) same issue as yours doing the following: In your Class : ServiceBase, that represents your Service class, you might have: public Class () //constructor, to create your log repository { InitializeComponent(); if (!System.Diagnostics.EventLog.SourceExists("YOURSource")) { System.Diagnostics.EventLog.CreateEventSource( "YOURSource", "YOURLog"); } eventLog1.Source = "YOURSource"; eventLog1.Log = "YOURLog"; } Now, implement: protected override void OnStart(string[] args) {...} AND protected override void OnStop() {...} To handle custom commands calls: protected override void OnCustomCommand(int command) { switch (command) { case 128: eventLog1.WriteEntry("Command " + command + " successfully called."); break; default: break; } } Now, use this in the application where you'll call the Windows Service: Enum to reference your methods: (remember, Services custom methods always receive an int32 (128 to 255) as parameters and using Enum you make it easier to remember and control your methods private enum YourMethods { methodX = 128 }; To call a specific method: ServiceController sc = new ServiceController("YOURServiceName", Environment.MachineName); ServiceControllerPermission scp = new ServiceControllerPermission(ServiceControllerPermissionAccess.Control, Environment.MachineName, "YOURServiceName");//this will grant permission to access the Service scp.Assert(); sc.Refresh(); sc.ExecuteCommand((int)YourMethods.methodX); Doing this, you can control your service. Here you can check how to create and install a Windows Service. More about the ExecuteCommand method. Good luck! A: If you are using .Net Framework 4, then memory mapped files provide a fairly easy way of implementing cross process communication. It is fairly simple, and well described in documentation, and avoids the overhead (at runtime but also in terms of development effort) of using WCF or other connection/remoting based interactions, or of writing shared data to a central location and polling (database, file, etc). See here for an overview. A: You could accomplish this very easily by making the service host a WCF service and connecting to it from your application. A: In older versions of Windows, you could configure your Windows service to interact with the desktop. This allowed you to add user interface elements directly to your service that could be presented to the user. Beginning with Windows Vista, services can no longer interact directly with users, i.e., no user interfaces. To do this, what you want to do is write your Windows service and a front-end Windows application. To provide the communication bridge between the two, I would strongly recommend using Windows Communication Foundation (WCF). To create a C# Windows service, you can follow the step-by-step instructions here. A: We use Named pipes for this purpose. But our client implemented with C++. If your service and application are implemented in .Net, you can use .Net remoting. A: Think it as a remote database owner. Say you have 1 database yet 10 applications that requires different data's from the database and also you don't want to open all your data to each of the applications.. 
Also your applications will be independent from your database, your data layer will only be implemented in your service, your applications will not hold that logic. You can write a service and open your service to your other applications. How to write your first windows service can help you. A: What I am doing and seems to be working for my needs so far is have my windows desktop app write values to a file in the CommonApplicationData special folder using: Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData) And then my Windows Service knows to look for the file and the values in it using the same method. CommonApplicationData is the directory that serves as a common repository for application-specific data that is used by all users, including the SYSTEM account. For details, see Environment.GetFolderPath Method and CommonApplicationData in Environment.SpecialFolder. You could use any file format that suits your needs: .txt, .ini, .json, etc
How to communicate with a windows service?
I want to create a windows service that validates data and access it from another windows application, but I'm new to services and I'm not sure how to start. So, while the service is running, a windows application should somehow connect to the service, send some data and get a response, true or false.
[ "I could successfully handle the (almost) same issue as yours doing the following:\nIn your Class : ServiceBase, that represents your Service class, you might have:\npublic Class () //constructor, to create your log repository\n {\n InitializeComponent();\n\n if (!System.Diagnostics.EventLog.SourceExists(\"YOURSource\"))\n {\n System.Diagnostics.EventLog.CreateEventSource(\n \"YOURSource\", \"YOURLog\");\n }\n eventLog1.Source = \"YOURSource\";\n eventLog1.Log = \"YOURLog\";\n }\n\nNow, implement:\nprotected override void OnStart(string[] args)\n{...}\n\nAND\nprotected override void OnStop()\n{...}\n\nTo handle custom commands calls:\nprotected override void OnCustomCommand(int command)\n {\n switch (command)\n {\n case 128:\n eventLog1.WriteEntry(\"Command \" + command + \" successfully called.\");\n break;\n default:\n break;\n }\n }\n\nNow, use this in the application where you'll call the Windows Service:\nEnum to reference your methods: (remember, Services custom methods always receive an int32 (128 to 255) as parameters and using Enum you make it easier to remember and control your methods\nprivate enum YourMethods\n {\n methodX = 128\n };\n\nTo call a specific method:\nServiceController sc = new ServiceController(\"YOURServiceName\", Environment.MachineName);\nServiceControllerPermission scp = new ServiceControllerPermission(ServiceControllerPermissionAccess.Control, Environment.MachineName, \"YOURServiceName\");//this will grant permission to access the Service\n scp.Assert();\n sc.Refresh();\n\n sc.ExecuteCommand((int)YourMethods.methodX);\n\nDoing this, you can control your service.\nHere you can check how to create and install a Windows Service.\nMore about the ExecuteCommand method.\nGood luck!\n", "If you are using .Net Framework 4, then memory mapped files provide a fairly easy way of implementing cross process communication.\nIt is fairly simple, and well described in documentation, and avoids the overhead (at runtime but also in terms of development effort) of using WCF or other connection/remoting based interactions, or of writing shared data to a central location and polling (database, file, etc).\nSee here for an overview.\n", "You could accomplish this very easily by making the service host a WCF service and connecting to it from your application.\n", "In older versions of Windows, you could configure your Windows service to interact with the desktop. This allowed you to add user interface elements directly to your service that could be presented to the user. Beginning with Windows Vista, services can no longer interact directly with users, i.e., no user interfaces.\nTo do this, what you want to do is write your Windows service and a front-end Windows application. To provide the communication bridge between the two, I would strongly recommend using Windows Communication Foundation (WCF).\nTo create a C# Windows service, you can follow the step-by-step instructions here.\n", "We use Named pipes for this purpose. But our client implemented with C++. If your service and application are implemented in .Net, you can use .Net remoting.\n", "Think it as a remote database owner. Say you have 1 database yet 10 applications that requires different data's from the database and also you don't want to open all your data to each of the applications.. Also your applications will be independent from your database, your data layer will only be implemented in your service, your applications will not hold that logic. 
You can write a service and open your service to your other applications. \nHow to write your first windows service can help you.\n", "What I am doing and seems to be working for my needs so far is have my windows desktop app write values to a file in the CommonApplicationData special folder using:\nEnvironment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData)\n\nAnd then my Windows Service knows to look for the file and the values in it using the same method.\nCommonApplicationData is the directory that serves as a common repository for application-specific data that is used by all users, including the SYSTEM account.\nFor details, see Environment.GetFolderPath Method and CommonApplicationData in Environment.SpecialFolder.\nYou could use any file format that suits your needs: .txt, .ini, .json, etc\n" ]
[ 42, 6, 3, 2, 2, 0, 0 ]
[ "Your service, while it is processing, can add events to the EventLog. \nYou can create another console application that runs paralel to the service, and that listens to that EventLog with the event handling mechanism:\nvar log= new EventLog(\"[name of the eventlog]\");\nlog.EnableRaisingEvents = true;\nlog.EntryWritten += Log_EntryWritten;\n\nThen you handle it immediately: \nprivate static void Log_EntryWritten(object sender, System.Diagnostics.EntryWrittenEventArgs e)\n{\n Console.WriteLine(\"Event detected !\");\n}\n\nYou can read the EntryWrittenEventArgs object to get all event details, and to show what you want in your console app. If you stop the console app the service continues to run, and still logs to the event log.\n" ]
[ -2 ]
[ "c#", "ipc", "windows_services" ]
stackoverflow_0004451216_c#_ipc_windows_services.txt
Q: jQuery .after() that replaces instead of appends I have the following test. You'll notice hi and hello were both added. What I need is for hi to be replaced with hello. As a follow-up question, is there a way to get the value of the .after()? $("#test").after("hi"); $("#test").after("hello"); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div id=test>TEST</div> A: Add a span after your div and give it a class name. Then just reference that class name from after that. Since you have given it a class name, you can easily get the HTML of that span. $("#test").after("<span class='subTitle' />"); $(".subTitle").html("HI"); $(".subTitle").html("Hello"); console.log($(".subTitle").html()) <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div id="test">TEST</div>
jQuery .after() that replaces instead of appends
I have the following test. You'll notice hi and hello were both added. What I need is for hi to be replaced with hello. As a follow-up question, is there a way to get the value of the .after()? $("#test").after("hi"); $("#test").after("hello"); <script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script> <div id=test>TEST</div>
[ "Add a span after your div and give it a class name.\nThen just reference that class name from after that.\nSince you have given it a class name, you can easily get the HTML of that span.\n\n\n$(\"#test\").after(\"<span class='subTitle' />\");\n\n$(\".subTitle\").html(\"HI\");\n$(\".subTitle\").html(\"Hello\");\n\nconsole.log($(\".subTitle\").html())\n<script src=\"https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js\"></script>\n<div id=\"test\">TEST</div>\n\n\n\n" ]
[ 1 ]
[]
[]
[ "html", "jquery" ]
stackoverflow_0074660816_html_jquery.txt
Q: ssh-keyscan throws write: Operation timed out I have GitLab Runner installed on Kubernetes. I am trying to build a Docker image from a Dockerfile which needs to clone private repositories over SSH. I have added ssh-keyscan to get the public key of the repo host. It throws the following error most of the time: write (git..com): Operation timed out I have tried increasing the timeout but the behaviour is still the same. This is the command I am running from the Dockerfile: RUN mkdir -p -m 0600 /root/.ssh && ssh-keyscan -vvv -T 300 -p <port> git.<kygitlab>.com >> /root/.ssh/known_hosts The public key should be stored in the known_hosts file without any error. This works fine on my local system but throws an error when executed with GitLab CI on Kubernetes.
ssh-keyscan throws write: Operation timed out
I have gitlab runner installed on kubernetes. I am trying to build docker image from a Dockerfile which needs to clone private repositories over ssh. I have added ssh-keyscan to get public key of the repo URL. It throws following error most of the times : write (git..com): Operation timed out I have tried increasing timeout but the behaviour is still the same. This is the command I am running from Dockerfile RUN mkdir -p -m 0600 /root/.ssh && ssh-keyscan -vvv -T 300 -p <port> git.<kygitlab>.com >> /root/.ssh/known_hosts The public key should be stored into know_hosts file without any error. This works fine in my local system but throws an error when executed with gitlab CI on kubernetes.
[ "The problem is: \n\nyou don't need to update only the known_hosts, \nyou also need a private/public key pair (in the ~/.ssh Docker image folder), with the public key registered on the remote private repo hosting service side.\n\nOnly that would allow you to access and clone a private remote repo.\n", "Short Explanation: Check if SSH traffic is enabled, if not allow SSH traffic. You can run the following command to know if it is enabled. You can replace github.com with your git host instance.\nssh -T [email protected]\n\nMore Details: I faced a similar issue, but the platform I am working on is different. We are building a docker image on docker executor in GitLab runner. So, here, the image is getting built inside another docker container. The concept is called Docker-in-Docker which is explained here.\nAfter spending time, we got to know that SSH (port:22) traffic is blocked on the host machine which cascades to all the guests (docker containers) on it. Once, we enabled the SSH traffic, it worked like a charm.\n" ]
[ 0, 0 ]
[]
[]
[ "gitlab", "kubernetes", "ssh" ]
stackoverflow_0054516312_gitlab_kubernetes_ssh.txt
Q: Is there a way to import the code from a repository into the code of a different repository? I have the following problem: I've have created a repository and made changes to it, and commited them in git. Now, I've realized that the name of the repository is wrong, and I have created another with the right name, and I wonder if there is a way to import all the code (together with the changes I've made) in the first repository into the second repository using git and not copy pasting it manually. In the first repository there is a branch with all of my changes. Mind that I cannot directly change the name of the first repository, as it doesn't belong to me and I am unable to do so. Any ideas? My original idea is to copy-paste all of the classes and code, manually, from one repository to the other, but that is absolutely time-consuming as the first repository has to many classes and a lot of code, and some copy-pasting errors might take place. I just want a safe and fast way to do this. A: Option A: In your new repository you could add the old one as a remote and then git fetch from it. Option B: Add your new repository as a remote in your old one and then git push the branch. Option C: Create another, temporary repository, add it as a remote in both of your repositories and then first git push from the old one to the temporary and after that git fetch in your new one from the temporary. Option C is especially usefull if both repositories are behind a firewall and there is no way of them talking directly to each other. A: The phrase the code ... in the repository is imprecise: A Git repository consists of commits. Each commit holds a full snapshot of every file, plus metadata. Thus, "the code" could refer to "some or all files in one specific commit", "all versions of some or all files in some set of some specific commits", or even "some or all specific commits". If you want to preserve the actual commits themselves, that's one thing (and relatively easy to do but comes with some constraints). Methods for doing this are the options in SebDieBln's answer. If you only want some or all files from some or all commits, that tends to be a little harder, unless you just want some or all files from one specific commit. To get "some but not all files from some set of commits", use git checkout or git switch to extract each of those commits from the repository that has them, copy the files of interest elsewhere, and use Git to create new commits in the other repository as appropriate. To get "all files from some or all commits", consider automating the above. To get "some files from one specific commit", just check out the commit in question, save the files somehow, and move to the repository where you want those files and extract the saved files (this entire process could just be a matter of cp *.py ../other-repo for instance). Make one new commit in the target repository, and you're done.
Is there a way to import the code from a repository into the code of a different repository?
I have the following problem: I've have created a repository and made changes to it, and commited them in git. Now, I've realized that the name of the repository is wrong, and I have created another with the right name, and I wonder if there is a way to import all the code (together with the changes I've made) in the first repository into the second repository using git and not copy pasting it manually. In the first repository there is a branch with all of my changes. Mind that I cannot directly change the name of the first repository, as it doesn't belong to me and I am unable to do so. Any ideas? My original idea is to copy-paste all of the classes and code, manually, from one repository to the other, but that is absolutely time-consuming as the first repository has to many classes and a lot of code, and some copy-pasting errors might take place. I just want a safe and fast way to do this.
[ "\nOption A: In your new repository you could add the old one as a remote and then git fetch from it.\nOption B: Add your new repository as a remote in your old one and then git push the branch.\nOption C: Create another, temporary repository, add it as a remote in both of your repositories and then first git push from the old one to the temporary and after that git fetch in your new one from the temporary.\n\nOption C is especially usefull if both repositories are behind a firewall and there is no way of them talking directly to each other.\n", "The phrase the code ... in the repository is imprecise:\n\nA Git repository consists of commits.\nEach commit holds a full snapshot of every file, plus metadata.\n\nThus, \"the code\" could refer to \"some or all files in one specific commit\", \"all versions of some or all files in some set of some specific commits\", or even \"some or all specific commits\".\nIf you want to preserve the actual commits themselves, that's one thing (and relatively easy to do but comes with some constraints). Methods for doing this are the options in SebDieBln's answer.\nIf you only want some or all files from some or all commits, that tends to be a little harder, unless you just want some or all files from one specific commit.\nTo get \"some but not all files from some set of commits\", use git checkout or git switch to extract each of those commits from the repository that has them, copy the files of interest elsewhere, and use Git to create new commits in the other repository as appropriate.\nTo get \"all files from some or all commits\", consider automating the above.\nTo get \"some files from one specific commit\", just check out the commit in question, save the files somehow, and move to the repository where you want those files and extract the saved files (this entire process could just be a matter of cp *.py ../other-repo for instance). Make one new commit in the target repository, and you're done.\n" ]
[ 1, 1 ]
[]
[]
[ "git", "github" ]
stackoverflow_0074655844_git_github.txt
Q: I am trying to scrape a website, but it returns 404 not found error Here I am trying to retrieve all the internship offers(stage in French) from LinkedIn. If I do the same on a simple website and change my search parameters, it works. I cannot see what I am doing wrong. const PORT = 8000 const express = require('express') const axios = require('axios') const cheerio = require('cheerio') const app = express() const articles = [] app.get('/', (req, res) => { res.json('Scraping') }) app.get('/news', (req, res) => { axios.get('https://www.linkedin.com/jobs/') .then((response) => { const html = response.data const $ = cheerio.load(html) $('a:contains("stage")', html).each(function () { const title = $(this).text() const url = $(this).attr('href') articles.push({ title, url }) }) res.json(articles) }).catch((err) => console.log(err)) }) app.listen(PORT, () => console.log('server running on PORT ${8000}')) A: I was able to scrape for data engineers with this : remove " , html" and replaced it like that : $('a:contains("Data")').each. Made a console log on the http://localhost:8000/news. And it printed some URLs. const PORT = 8000 const express = require('express') const axios = require('axios') const cheerio = require('cheerio') const app = express() const articles = [] app.get('/', (req, res) => { res.json('Scraping') }) app.get('/news', (req, res) => { axios.get('https://www.linkedin.com/jobs/') .then((response) => { const html = response.data const $ = cheerio.load(html) // Find <a> elements with a title attribute that contains the word "stage" $('a:contains("Data")').each(function () { const title = $(this).text() const url = $(this).attr('href') articles.push({ title, url }) }) console.log(articles) res.json(articles) }).catch((err) => console.log(err)) }) app.listen(PORT, () => console.log('server running on PORT ${8000}'))
I am trying to scrape a website, but it returns 404 not found error
Here I am trying to retrieve all the internship offers(stage in French) from LinkedIn. If I do the same on a simple website and change my search parameters, it works. I cannot see what I am doing wrong. const PORT = 8000 const express = require('express') const axios = require('axios') const cheerio = require('cheerio') const app = express() const articles = [] app.get('/', (req, res) => { res.json('Scraping') }) app.get('/news', (req, res) => { axios.get('https://www.linkedin.com/jobs/') .then((response) => { const html = response.data const $ = cheerio.load(html) $('a:contains("stage")', html).each(function () { const title = $(this).text() const url = $(this).attr('href') articles.push({ title, url }) }) res.json(articles) }).catch((err) => console.log(err)) }) app.listen(PORT, () => console.log('server running on PORT ${8000}'))
[ "I was able to scrape for data engineers with this : remove \" , html\" and replaced it like that : $('a:contains(\"Data\")').each. Made a console log on the http://localhost:8000/news. And it printed some URLs.\n const PORT = 8000\n const express = require('express')\n const axios = require('axios')\n const cheerio = require('cheerio')\n\n const app = express()\n const articles = []\n\n app.get('/', (req, res) => {\nres.json('Scraping')\n})\n\napp.get('/news', (req, res) => {\naxios.get('https://www.linkedin.com/jobs/')\n .then((response) => {\n const html = response.data\n const $ = cheerio.load(html)\n\n // Find <a> elements with a title attribute that contains the word \"stage\"\n $('a:contains(\"Data\")').each(function () {\n const title = $(this).text()\n const url = $(this).attr('href')\n articles.push({\n title,\n url\n })\n })\n console.log(articles)\n res.json(articles)\n }).catch((err) => console.log(err))\n })\n\n app.listen(PORT, () => console.log('server running on PORT ${8000}'))\n\n" ]
[ 0 ]
[]
[]
[ "javascript", "json", "web_scraping" ]
stackoverflow_0074660431_javascript_json_web_scraping.txt
Q: How can I format time output in Power Automate In my Power Automate flow, I have an action that gives a time output in this format: 2022-12-01T18:52:50.0000000Z How can I take this output and format it as yyyy/mm/dd? I want to use the time output as a string for a folder structure. A: This is pretty straightforward. Use the Convert datetime to text function In the Format to use dropdown select Custom and specify the format how you would like it to appear. yyyy/MM/dd will give you 2022/12/01 HH-mm-ss will give you 18-52-50 A: https://learn.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference#formatDateTime Assuming you have a variable called DateTime, you would create another variable and use this expression ... formatDateTime(Variables('DateTime'), 'yyyy/MM/dd') Result
How can I format time output in Power Automate
In my Power Automate flow, I have an action that gives a time output in this format: 2022-12-01T18:52:50.0000000Z How can I take this output and format it as yyyy/mm/dd? I want to use the time output as a string for a folder structure.
[ "This is pretty straight forward.\nUse the Convert datetime to text function\n\nIn the Format to use dropdown select Custom\nand specify the format how you would like it to appear.\n\nyyyy/MM/hh will give you 2022/12/01\nHH-mm-ss will give you 18-52-50\n\n", "https://learn.microsoft.com/en-us/azure/logic-apps/workflow-definition-language-functions-reference#formatDateTime\nAssuming you have a variable called DateTime, you would create another variable and use this expression ...\nformatDateTime(Variables('DateTime'), 'yyyy/MM/dd')\n\nResult\n\n" ]
[ 1, 1 ]
[]
[]
[ "power_automate" ]
stackoverflow_0074660709_power_automate.txt
Q: Why does my file keep closing after the first loop in python I'm trying to read through a large file in which I have marked the start and end lines of each segment. I'm extracting a component of each segment using regex. What I don't understand is that after the first inner loop, my code seems to have closed the file and I don't get the desired output. Simplified code below with open("data_full", 'r') as file: for x in position: print(x) s = position[x]['start'] e = position[x]['end'] title = [] abs = [] mesh = [] ti_prev = False for i,line in enumerate(file.readlines()[s:e]): print(i) print(s,e) if re.search(r'(?<=TI\s{2}-\s).*', line) is not None and ti_prev is False: title.append(re.search(r'(?<=TI\s{2}-\s).*', line).group()) ti_prev = True line_mark = i if re.search(r'(?<=\s{6}).*',line) is not None and ti_prev is True and i == (line_mark+1): title.append(re.search(r'(?<=\s{6}).*',line).group()) else: pass data[x]['title']=title What I think has happened, is that after the first inner loop file.readlines() does not work since the file is closed. But I don't understand why, since it's within my with open loop. My alternative is to read the file for each segment (9k+ segments) and is not doing wonders to my performance. Any suggestions are welcomed with thanks ! A: Assuming your indentation is wrong in the description and not actually in your original code, readlines() moves the file pointer to the end so you can't read any more lines. You need to either reopen the file or .seek(0). See this for more info: Does fp.readlines() close a file?
Why does my file keep closing after the first loop in python
I'm trying to read through a large file in which I have marked the start and end lines of each segment. I'm extracting a component of each segment using regex. What I don't understand is that after the first inner loop, my code seems to have closed the file and I don't get the desired output. Simplified code below with open("data_full", 'r') as file: for x in position: print(x) s = position[x]['start'] e = position[x]['end'] title = [] abs = [] mesh = [] ti_prev = False for i,line in enumerate(file.readlines()[s:e]): print(i) print(s,e) if re.search(r'(?<=TI\s{2}-\s).*', line) is not None and ti_prev is False: title.append(re.search(r'(?<=TI\s{2}-\s).*', line).group()) ti_prev = True line_mark = i if re.search(r'(?<=\s{6}).*',line) is not None and ti_prev is True and i == (line_mark+1): title.append(re.search(r'(?<=\s{6}).*',line).group()) else: pass data[x]['title']=title What I think has happened, is that after the first inner loop file.readlines() does not work since the file is closed. But I don't understand why, since it's within my with open loop. My alternative is to read the file for each segment (9k+ segments) and is not doing wonders to my performance. Any suggestions are welcomed with thanks !
[ "Assuming your indentation is wrong in the description and not actually in your original code, readlines() moves the file pointer to the end so you can't read any more lines.\nYou need to either reopen the file or .seek(0).\nSee this for more info: Does fp.readlines() close a file?\n" ]
[ 2 ]
[]
[]
[ "python" ]
stackoverflow_0074661143_python.txt
Q: typewriter animation not centering text on mobile It is centered fine on desktop, but the animation will start centered on mobile but continues off, past the page to the right. Any recommendations for how to make sure it stays centered on mobile? @import url(https://fonts.googleapis.com/css?family=Anonymous+Pro); html{ min-height: 100%; overflow: hidden; } body{ height: calc(100vh - 8em); padding: 3.5em; color: #fff; font-family: 'Anonymous Pro', monospace; background-color: rgb(25,25,25); } .line-1{ position: relative; top: 10%; width: 13em; margin: 0 auto; border-right: 2px solid rgba(255,255,255,.75); font-size: 220%; text-align: center; white-space: nowrap; overflow: hidden; transform: translateY(-50%); } .anim-typewriter{ animation: typewriter 2.2s steps(20) 0.9s 1 normal both, blinkTextCursor 500ms steps(20) 0s 13 normal both; } @keyframes typewriter{ from{width: 0;} to{width: 10.9em;} } @keyframes blinkTextCursor{ from{border-right-color: rgba(255,255,255,.75);} to{border-right-color: transparent;} } <html> <body> <p class="line-1 anim-typewriter">12345.6789123<span style="color: #27B59D;">/12345</span></p> </body> </html> A: body{ padding: 0; } @media screen and (max-width: 600px) { .line-1 { font-size: 100%; } }
typewriter animation not centering text on mobile
It is centered fine on desktop, but the animation will start centered on mobile but continues off, past the page to the right. Any recommendations for how to make sure it stays centered on mobile? @import url(https://fonts.googleapis.com/css?family=Anonymous+Pro); html{ min-height: 100%; overflow: hidden; } body{ height: calc(100vh - 8em); padding: 3.5em; color: #fff; font-family: 'Anonymous Pro', monospace; background-color: rgb(25,25,25); } .line-1{ position: relative; top: 10%; width: 13em; margin: 0 auto; border-right: 2px solid rgba(255,255,255,.75); font-size: 220%; text-align: center; white-space: nowrap; overflow: hidden; transform: translateY(-50%); } .anim-typewriter{ animation: typewriter 2.2s steps(20) 0.9s 1 normal both, blinkTextCursor 500ms steps(20) 0s 13 normal both; } @keyframes typewriter{ from{width: 0;} to{width: 10.9em;} } @keyframes blinkTextCursor{ from{border-right-color: rgba(255,255,255,.75);} to{border-right-color: transparent;} } <html> <body> <p class="line-1 anim-typewriter">12345.6789123<span style="color: #27B59D;">/12345</span></p> </body> </html>
[ "body{\n padding: 0;\n}\n@media screen and (max-width: 600px) {\n .line-1 {\n font-size: 100%;\n }\n}\n\n" ]
[ 0 ]
[]
[]
[ "css" ]
stackoverflow_0074660607_css.txt
Q: Convert linear index to row, column I have a 2D array (int[][]) and switch between tracking of position in the matrix between a linear index and coordinates (row, column). I have a method calculating the linear index from a row and column: private int linearIndex(int r, int c){ return (r*columns) + c; } where columns is the nr of columns in the matrix (3x4 matrix -> columns = 4) I'm having troubles getting the row, column position from the linear index. My current approach is: row = linear index % rows column = linear index / columns Where - once again - rows = total number of rows and columns = total number of columns See above, issues arise on the first column when on a row > 0... A: Your approach for calculating the row and column from the linear index is almost correct. The issue arises because the two formulas are swapped: the row comes from integer division of the linear index by the number of columns, and the column comes from the remainder of that division. Here is one way you can calculate the row and column from the linear index: int row = linearIndex / columns; int column = linearIndex % columns; This approach works because the row is the integer division of the linear index by the number of columns, and the column is the remainder of the integer division of the linear index by the number of columns. You can also calculate the row and column using the modulo operator instead of integer division, like this: int row = linearIndex % (rows * columns) / columns; int column = linearIndex % columns; This works because, as long as the linear index is within bounds (less than rows * columns), taking it modulo the total number of cells leaves it unchanged, so the integer division by the number of columns still gives the row and the remainder modulo the number of columns still gives the column.
Convert linear index to row, column
I have a 2D array (int[][]) and switch between tracking of position in the matrix between a linear index and coordinates (row, column). I have a method calculating the linear index from a row and column: private int linearIndex(int r, int c){ return (r*columns) + c; } where columns is the nr of columns in the matrix (3x4 matrix -> columns = 4) I'm having troubles getting the row, column position from the linear index. My current approach is: row = linear index % rows column = linear index / columns Where - once again - rows = total number of rows and columns = total number of columns See above, issues arise on the first column when on a row > 0...
[ "Your approach for calculating the row and column from the linear index is almost correct. The issue arises because you are using the total number of columns instead of the total number of rows for the row calculation.\nHere is one way you can calculate the row and column from the linear index:\nint row = linearIndex / columns;\nint column = linearIndex % columns;\nThis approach works because the row is the integer division of the linear index by the number of columns, and the column is the remainder of the integer division of the linear index by the number of columns.\nYou can also calculate the row and column using the modulo operator instead of integer division, like this:\nint row = linearIndex % (rows * columns) / columns;\nint column = linearIndex % columns;\n\nThis approach works because the modulo operator returns the remainder of the division, and the remainder of the division of the linear index by the total number of cells in the matrix (rows * columns) is the same as the linear index of the cell in the first row. Then, we can calculate the row by taking the integer division of the result by the number of columns, and the column is the same as the result of the modulo operator.\n" ]
[ 0 ]
[]
[]
[ "java" ]
stackoverflow_0074661079_java.txt
Q: NextAuth role based login using credentials I am trying to create a roled based login strategy using credentials. I was using nextauth roled-base login tutorial but it was not working. /api/auth/[...nextauth.ts] const authOptions: NextAuthOptions = { providers: [ Credentials({ id: "credentials", name: "Credentials", credentials: {}, async authorize(credentials) { const { email, password } = credentials as { email: string; password: string; }; // perform login logic // find user from db if (email == "[email protected]" && password == "1234") { return { id: "1234", name: "John Doe", email: "[email protected]", role: "admin", }; } throw new Error("Invalid credentials"); }, }), ], callbacks: { jwt: ({ token, user }) => { console.log(token); if (user) {token.id = user.id}; return token; }, session: ({ session, token, user }) => { if (token) { session.id = token.id; session.user.role = user.role; //not working } return session; }, }, session: { strategy: "jwt", }, pages: { signIn: "/login", // error: '/auth/error', // signOut: 'auth/signout' }, secret: process.env.NEXT_PUBLIC_SECRET, }; I thought of creating a custom adapter using nextauth adapter tutorial but it seems I can only define extra field for user if I am using OAuth provider. I can't seem to find any of the same in the documentation for credentials provider. My other possible solution is to use custom jwt sign in method instead of using NextAuth, but I can't seem to find a good example online. A: Have you tried checking if the user already exists? if (session.user) { session.user.role = user.role } after login you can check if the ROLE field already appears in http://localhost:3000/api/auth/session A: https://next-auth.js.org/tutorials/role-based-login-strategy/ they said that in callback of session should be props.user but in my case is not. i fix it like this but i think it will slow down my code a lot ... async session({ session, token }) { const user = await Users.findOne({ email: session.user.email }); if (session.user) { session.user.role = user.role; } return session; }, A: When you create a User schema to save the user into a database, add a role property. For example, in mongoose/mongodb role: { type: String, default: "user", }, default value is user, you can define different roles and maybe write an endpoint to update it. You do not need to make any change in next-auth config. Now if you want to protect a route, write this getServerSideProps export const getServerSideProps =async ({ req, params, store }) => { const session = await getSession({ req }); if (!session || session.user.role !== "admin") { return { redirect: { destination: "/login", permanent: false, }, }; }}
NextAuth role based login using credentials
I am trying to create a roled based login strategy using credentials. I was using nextauth roled-base login tutorial but it was not working. /api/auth/[...nextauth.ts] const authOptions: NextAuthOptions = { providers: [ Credentials({ id: "credentials", name: "Credentials", credentials: {}, async authorize(credentials) { const { email, password } = credentials as { email: string; password: string; }; // perform login logic // find user from db if (email == "[email protected]" && password == "1234") { return { id: "1234", name: "John Doe", email: "[email protected]", role: "admin", }; } throw new Error("Invalid credentials"); }, }), ], callbacks: { jwt: ({ token, user }) => { console.log(token); if (user) {token.id = user.id}; return token; }, session: ({ session, token, user }) => { if (token) { session.id = token.id; session.user.role = user.role; //not working } return session; }, }, session: { strategy: "jwt", }, pages: { signIn: "/login", // error: '/auth/error', // signOut: 'auth/signout' }, secret: process.env.NEXT_PUBLIC_SECRET, }; I thought of creating a custom adapter using nextauth adapter tutorial but it seems I can only define extra field for user if I am using OAuth provider. I can't seem to find any of the same in the documentation for credentials provider. My other possible solution is to use custom jwt sign in method instead of using NextAuth, but I can't seem to find a good example online.
[ "Have you tried checking if the user already exists?\nif (session.user) {\n session.user.role = user.role\n}\n\nafter login you can check if the ROLE field already appears in http://localhost:3000/api/auth/session\n", "https://next-auth.js.org/tutorials/role-based-login-strategy/\nthey said that in callback of session should be props.user but in my case is not.\ni fix it like this but i think it will slow down my code a lot ...\n\n\n async session({ session, token }) {\n const user = await Users.findOne({ email: session.user.email });\n\n if (session.user) {\n session.user.role = user.role;\n }\n\n return session;\n },\n\n\n\n", "When you create a User schema to save the user into a database, add a role property. For example, in mongoose/mongodb\nrole: {\n type: String,\n default: \"user\",\n },\n\ndefault value is user, you can define different roles and maybe write an endpoint to update it. You do not need to make any change in next-auth config.\nNow if you want to protect a route, write this getServerSideProps\nexport const getServerSideProps =async ({ req, params, store }) => {\n const session = await getSession({ req });\n if (!session || session.user.role !== \"admin\") {\n return {\n redirect: {\n destination: \"/login\",\n permanent: false,\n },\n };\n }}\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "authentication", "jwt", "next.js", "next_auth", "user_roles" ]
stackoverflow_0073658567_authentication_jwt_next.js_next_auth_user_roles.txt
Q: Regex to select text with ONLY one tab in front between two specific strings I'm just getting started with Regex and want to select text that has ONLY one tab in front between two specific strings. In that case I want to select text that is between ABC and CDE (don't select ABC and CDE) and has ONLY one tab in front of it. So I want to select Rotterdam, Amsterdam, China and Japan and do not select e.g. Hello and I have two tabs in front of me because they have two tabs. EFGADADA Hello EFGHI Oslo Australia EFGHI ABC Rotterdam Amsterdam China Japan Hello I have two tabs in front of me CDE Oslo Australia I already have this which deselects everything that has two tabs in front of it: (?<!\t\t(.?)+)[ a-zA-ZäöüÄÖÜßé@.()&-/"“]+ I know, this selects also ABC, CDE and EFGHI, which of course, don't have any tabs in front of them but thats fine because between ABC and CDE is only Text that has tabs in front of it. Btw. I use the Regex inside Power Automate Desktop, if that matters. A: Assuming this tool supports .NET regex, you could use a variable length lookbehind. (?<=(?:\r?\n|^)ABC(?:.*\r?\n\t)*)\S.* See this demo at regex101 (update: refactored for working without multiline mode) There is no check included for CDE afterwards which was not in your current pattern. regex-part matches (?<= positive lookbehind to look towards the left for a subpattern (?:\r?\n|^)ABC (?: non capturing group ) containing linebreak (CR)LF or ^ start, followed by ABC (?:.*\r?\n\t)* (?: non capturing group ) * any amount containing lines followed by at least one \t tab )\S.* end of subpattern; match a non-whitespace \S and anything .*
Regex to select text with ONLY one tab in front between two specific strings
I'm just getting started with Regex and want to select text that has ONLY one tab in front between two specific strings. In that case I want to select text that is between ABC and CDE (don't select ABC and CDE) and has ONLY one tab in front of it. So I want to select Rotterdam, Amsterdam, China and Japan and do not select e.g. Hello and I have two tabs in front of me because they have two tabs. EFGADADA Hello EFGHI Oslo Australia EFGHI ABC Rotterdam Amsterdam China Japan Hello I have two tabs in front of me CDE Oslo Australia I already have this which deselects everything that has two tabs in front of it: (?<!\t\t(.?)+)[ a-zA-ZäöüÄÖÜßé@.()&-/"“]+ I know, this selects also ABC, CDE and EFGHI, which of course, don't have any tabs in front of them but thats fine because between ABC and CDE is only Text that has tabs in front of it. Btw. I use the Regex inside Power Automate Desktop, if that matters.
[ "Assuming this tool supports .NET regex, you could use a variable length lookbehind.\n(?<=(?:\\r?\\n|^)ABC(?:.*\\r?\\n\\t)*)\\S.*\n\nSee this demo at regex101 (update: refactored for working without multiline mode)\nThere is no check included for CDE afterwards which was not in your current pattern.\n\n\n\n\nregex-part\nmatches\n\n\n\n\n(?<=\nnegative lookbehind to look towards the left for a subpattern\n\n\n(?:\\r?\\n|^)ABC\n(?: non capturing group ) containinglinebreak (CR)LF or ^ start, followed by ABC\n\n\n(?:.*\\r?\\n\\t)*\n(?: non capturing group ) * any amountcontaining lines followed by at least one \\t tab\n\n\n)\\S.*\nend of subpattern; match a non-whitespace \\S and anything .*\n\n\n\n" ]
[ 0 ]
[]
[]
[ "power_automate_desktop", "regex", "tabs" ]
stackoverflow_0074659136_power_automate_desktop_regex_tabs.txt
Q: Is there a way to make get_dummies work faster? I have the following code: import pandas as pd array = {'id': [1, 1, 1, 2, 2, 2, 3, 3], 'state': ['NY', 'NY', 'CA', 'CA', 'OH', 'AZ', 'NY','AZ']} df = pd.DataFrame(array) df.set_index('id', inplace=True) df2 = pd.get_dummies(df['state'])#.max(level=0) df2 Which gives the following output: AZ CA NY OH id 1 0 0 1 0 1 0 0 1 0 1 0 1 0 0 2 0 1 0 0 2 0 0 0 1 2 1 0 0 0 3 0 0 1 0 3 1 0 0 0 I am looking for a fast way to condense it - so there is one row per ID. Problem is that the result I am getting from df2 from the full dataframe is 2,000,000 rows and 15,000 columns. I tried both: .max(level=0) or groupby().sum() - both take days to complete. Is there a better way? A: As mentioned by @Andrej, try the pandas.crosstab function: # pd.crosstab(column_a, column_b) pd.crosstab(df.index, df["state"]) For me get dummies was failing due to low memory, however crosstab worked.
Is there a way to make get_dummies work faster?
I have the following code: import pandas as pd array = {'id': [1, 1, 1, 2, 2, 2, 3, 3], 'state': ['NY', 'NY', 'CA', 'CA', 'OH', 'AZ', 'NY','AZ']} df = pd.DataFrame(array) df.set_index('id', inplace=True) df2 = pd.get_dummies(df['state'])#.max(level=0) df2 Which gives the following output: AZ CA NY OH id 1 0 0 1 0 1 0 0 1 0 1 0 1 0 0 2 0 1 0 0 2 0 0 0 1 2 1 0 0 0 3 0 0 1 0 3 1 0 0 0 I am looking for a fast way to condense it - so there is one row per ID. Problem is that the result I am getting from df2 from the full dataframe is 2,000,000 rows and 15,000 columns. I tried both: .max(level=0) or groupby().sum() - both take days to complete. Is there a better way?
[ "As mentioned by @Andrej, try the pandas.crosstab function:\n# pd.crosstab(column_a, column_b)\n\npd.crosstab(df.index, df[\"state\"])\n\nFor me get dummies was failing due to low memory, however crosstab worked.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas" ]
stackoverflow_0068429779_dataframe_pandas.txt
Q: Executing Functions Between Two Child Components Svelte I'm working on an Electron project using Svelte for the frontend. I'm relatively new to Svelte. So here is the problem, I have a parent component named MainContent.svelte and two child components editor.svelte and preview.svelte. The Editor and Preview are both placed in the MainContent component. What I want to do is when the content of the Editor is changed, I want to update the Preview pane to reflect those changes. I previously had done this same project using Vanilla JavaScript but thought of using Svelte as it was easier to manage the project. There is function which listens for any changes in the Editor pane and another function to update the Preview pane. But what I can't wrap my head around is how to call the function to update the Preview when the Editor content changes. Any help would be greatly appreciated. A: I would use a property and lift the state up to the main component, you can still use functions internally if you have to, e.g. <script> // ... let content = '...'; </script> <Editor bind:content /> <Preview {content} /> If the preview needs to update via a function you can call that in a reactive statement: <script> // ... export let content; $: updatePreview(content); </script>
Executing Functions Between Two Child Components Svelte
I'm working on an Electron project using Svelte for the frontend. I'm relatively new to Svelte. So here is the problem, I have a parent component named MainContent.svelte and two child components editor.svelte and preview.svelte. The Editor and Preview are both placed in the MainContent component. What I want to do is when the content of the Editor is changed, I want to update the Preview pane to reflect those changes. I previously had done this same project using Vanilla JavaScript but thought of using Svelte as it was easier to manage the project. There is function which listens for any changes in the Editor pane and another function to update the Preview pane. But what I can't wrap my head around is how to call the function to update the Preview when the Editor content changes. Any help would be greatly appreciated.
[ "I would use a property and lift the state up to the main component, you can still use functions internally if you have to, e.g.\n<script>\n // ...\n let content = '...';\n</script>\n\n<Editor bind:content />\n<Preview {content} />\n\nIf the preview needs to update via a function you can call that in a reactive statement:\n<script>\n // ...\n export let content;\n\n $: updatePreview(content);\n</script>\n\n" ]
[ 2 ]
[]
[]
[ "ace_editor", "electron", "javascript", "svelte" ]
stackoverflow_0074658978_ace_editor_electron_javascript_svelte.txt
Q: ASP Razor pages BindProperties values lost on Post I 'm working with an ASP Razor project (.Net5). Thee is a page with the user list and a filter. The goal is to present the user that match the filtering criteria. The code below is able to do the filtering but the filtering values are lost (become blank) everytime the filtering is done. I have [BindProperties] and I don;t understand why it works for the UserList (the table data) and not for the filter criteria. I tried to reassign the values but it;s not working. Does anyone has an idea how to solve this? Any pointer to the underlying reason and documentation page would be appreciated. I have looked at https://www.learnrazorpages.com/razor-pages/tempdata but I'm not sure it's the way to go. Also do not hesitate to give feedback on the code itself as it's literally my first ASP.Net project. UserList.cshtml @model Web.Areas.Admin.Pages.UserListModel @{ ViewData["Title"] = "User list"; } <h1>User list</h1> <div> <form method="post"> <label asp-for="UserNameFilter"></label> <input type="text" name="UserNameFilter" /> @*This input criteria reset when I click on filter*@ <input type="submit" value="Filter" /> </form> </div> <table class="table"> <thead> <tr> <th>Id</th> <th>User Name</th> <th>Email</th> <th>Password</th> <th>Actions</th> </tr> </thead> <tbody> @foreach (var user in Model.UserInfos) { <tr> <td>@user.Id</td> <td>@user.UserName</td> <td>@user.Email</td> <td> <a asp-page="/UserUpdate" asp-route-id="@user.Id">Edit</a> | <a asp-page="/Details" asp-route-id="@user.Id">Details</a> | <a asp-page="/UserDelete" asp-route-id="@user.Id">Delete</a> </td> </tr> } </tbody> </table> UserList.cshtml.cs using System.Collections.Generic; using System.Linq; using Entities; using Microsoft.AspNetCore.Authorization; using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.RazorPages; using Services; namespace Web.Areas.Admin.Pages { [Authorize(Roles = "ADMIN")] [BindProperties] public class UserListModel : PageModel { public string UserNameFilter { get; set; } public List<UserInfo> UserInfos { get; set; } private readonly IUserService _userService; public UserListModel(IUserService userService) { _userService = userService; } public void OnGet() { UserInfos = _userService.GetUserInfoList(new UserInfoFilter()).ToList(); } public void OnPost() { var userInfoFilter = new UserInfoFilter { UserName = UserNameFilter }; UserInfos = _userService.GetUserInfoList(userInfoFilter).ToList(); } } } A: So that was a basic answer... you need to assign the value of the input with the model... <input type="text" name="UserNameFilter" value="@Model.UserNameFilter"/> I'm leaving the post and answer here for other noob like me in future. Keep coding.. A: The above code works. If someone else is looking for it, I want to add my scenario here. I have two screens. One is sending an ID through asp-route-CurrentPOID once on the other page, I was able to get it [BindProeprty(SupportsGet = true)] public int POID {get;set;} public void onGet(int CurrentPOID) { CurrentPOID = whatever was selected on the other page // so I want to assign route value to this pages POID CurrentPOID = POID; //works (only for onGet) _context.BLLMETHOD(currentPOID) //works OR _context.BLLMETHOD(POID) //works } However, I was not getting POID once I did onPost POID was null to solve this, I used exactly what the author of this post did on front <input type="hidden" name="POID" value="@Model.POID"/> This solved my issue on Post, where POID was null. 
I am new to razor pages, so please pardon the inefficiency.
ASP Razor pages BindProperties values lost on Post
I 'm working with an ASP Razor project (.Net5). Thee is a page with the user list and a filter. The goal is to present the user that match the filtering criteria. The code below is able to do the filtering but the filtering values are lost (become blank) everytime the filtering is done. I have [BindProperties] and I don;t understand why it works for the UserList (the table data) and not for the filter criteria. I tried to reassign the values but it;s not working. Does anyone has an idea how to solve this? Any pointer to the underlying reason and documentation page would be appreciated. I have looked at https://www.learnrazorpages.com/razor-pages/tempdata but I'm not sure it's the way to go. Also do not hesitate to give feedback on the code itself as it's literally my first ASP.Net project. UserList.cshtml @model Web.Areas.Admin.Pages.UserListModel @{ ViewData["Title"] = "User list"; } <h1>User list</h1> <div> <form method="post"> <label asp-for="UserNameFilter"></label> <input type="text" name="UserNameFilter" /> @*This input criteria reset when I click on filter*@ <input type="submit" value="Filter" /> </form> </div> <table class="table"> <thead> <tr> <th>Id</th> <th>User Name</th> <th>Email</th> <th>Password</th> <th>Actions</th> </tr> </thead> <tbody> @foreach (var user in Model.UserInfos) { <tr> <td>@user.Id</td> <td>@user.UserName</td> <td>@user.Email</td> <td> <a asp-page="/UserUpdate" asp-route-id="@user.Id">Edit</a> | <a asp-page="/Details" asp-route-id="@user.Id">Details</a> | <a asp-page="/UserDelete" asp-route-id="@user.Id">Delete</a> </td> </tr> } </tbody> </table> UserList.cshtml.cs using System.Collections.Generic; using System.Linq; using Entities; using Microsoft.AspNetCore.Authorization; using Microsoft.AspNetCore.Mvc; using Microsoft.AspNetCore.Mvc.RazorPages; using Services; namespace Web.Areas.Admin.Pages { [Authorize(Roles = "ADMIN")] [BindProperties] public class UserListModel : PageModel { public string UserNameFilter { get; set; } public List<UserInfo> UserInfos { get; set; } private readonly IUserService _userService; public UserListModel(IUserService userService) { _userService = userService; } public void OnGet() { UserInfos = _userService.GetUserInfoList(new UserInfoFilter()).ToList(); } public void OnPost() { var userInfoFilter = new UserInfoFilter { UserName = UserNameFilter }; UserInfos = _userService.GetUserInfoList(userInfoFilter).ToList(); } } }
[ "So that was a basic answer... you need to assign the value of the input with the model...\n<input type=\"text\" name=\"UserNameFilter\" value=\"@Model.UserNameFilter\"/> \n\nI'm leaving the post and answer here for other noob like me in future.\nKeep coding..\n", "The above code works. If someone else is looking for it, I want to add my scenario here.\nI have two screens. One is sending an ID through asp-route-CurrentPOID\nonce on the other page, I was able to get it\n[BindProeprty(SupportsGet = true)]\npublic int POID {get;set;}\n\n\npublic void onGet(int CurrentPOID)\n{\n CurrentPOID = whatever was selected on the other page\n // so I want to assign route value to this pages POID\n CurrentPOID = POID; //works (only for onGet)\n\n _context.BLLMETHOD(currentPOID) //works\n OR\n _context.BLLMETHOD(POID) //works\n}\n\nHowever, I was not getting POID once I did onPost POID was null\nto solve this, I used exactly what the author of this post did\non front\n<input type=\"hidden\" name=\"POID\" value=\"@Model.POID\"/> \n\nThis solved my issue on Post, where POID was null.\nI am new to razor pages, so please pardon the inefficiency.\n" ]
[ 4, 0 ]
[]
[]
[ "asp.net", "c#", "razor_pages" ]
stackoverflow_0069494904_asp.net_c#_razor_pages.txt
Q: PowerShell, programmatically check if a Store App (.appx) is installed How would I check, in PowerShell, if a specific Windows Store App is installed? Specifically, I need to test whether the Microsoft "Terminal" app is currently installed. A: Make sure to use the -AllUsers option to ensure you search all packages: # if ((Get-AppPackage -AllUsers).Name -like "*WindowsTerminal*") {$True} True
PowerShell, programmatically check if a Store App (.appx) is installed
How would I check, in PowerShell, if a specific Windows Store App is installed? Specifically, I need to test whether the Microsoft "Terminal" app is currently installed.
[ "Make sure to use the -AllUsers option to ensure you search all packages:\n# if ((Get-AppPackage -AllUsers).Name -like \"*WindowsTerminal*\") {$True}\nTrue\n\n" ]
[ 1 ]
[]
[]
[ "powershell", "windows_store", "windows_store_apps" ]
stackoverflow_0073865233_powershell_windows_store_windows_store_apps.txt
Q: How to run Docker with python and Java? I need both java and python in my docker container to run some code. This is my dockerfile: It works perpectly if I don't add the FROM openjdk:slim #get python FROM python:3.6-slim RUN pip install --trusted-host pypi.python.org flask #get openjdk FROM openjdk:slim COPY . /targetdir WORKDIR /targetdir # Make port 81 available to the world outside this container EXPOSE 81 CMD ["python", "test.py"] And the test.py app is in the same directory: from flask import Flask import os app = Flask(__name__) @app.route("/") def hello(): html = "<h3>Test:{test}</h3>" test = os.environ['JAVA_HOME'] return html.format(test = test) if __name__ == '__main__': app.run(debug=True,host='0.0.0.0',port=81) I'm getting this error: D:\MyApps\Docker Toolbox\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown. What exactly am I doing wrong here? I'm new to docker, perhaps I'm missing a step. Additional details My goal I have to run a python program that runs a Java file. The python library I'm using requires the path to JAVA_HOME. My issues: I do not know Java, so I cannot run the file properly. My entire code is in Python, except this Java bit The Python wrapper runs the file in a way I need it to run. A: An easier solution to the above issue is to use multi-stage docker containers where you can copy the content from one to another. In the above case you can have openjdk:slim as the base container and then use content from a python container to be copied over into this base container as follows: FROM openjdk:slim COPY --from=python:3.6 / / ... <normal instructions for python container continues> ... This feature is available as of Docker 17.05 and there are more things you can do using multi-stage build as in copying only the content you need from one to another. Reference documentation A: OK it took me a little while to figure it out. And my thanks go to this answer. I think my approach didn't work because I did not have a basic version of Linux. So it goes like this: Get Linux (I'm using Alpine because it's barebones) Get Java via the package manager Get Python, PIP OPTIONAL: find and set JAVA_HOME Find the path to JAVA_HOME. Perhaps there is a better way to do this, but I did this running the running the container, then I looked inside the container using docker exec -it [COINTAINER ID] bin/bash and found it. Set JAVA_HOME in dockerfile and build + run it all again Here is the final Dockerfile ( it should work with the python code in the question) : ### 1. Get Linux FROM alpine:3.7 ### 2. Get Java via the package manager RUN apk update \ && apk upgrade \ && apk add --no-cache bash \ && apk add --no-cache --virtual=build-dependencies unzip \ && apk add --no-cache curl \ && apk add --no-cache openjdk8-jre ### 3. Get Python, PIP RUN apk add --no-cache python3 \ && python3 -m ensurepip \ && pip3 install --upgrade pip setuptools \ && rm -r /usr/lib/python*/ensurepip && \ if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \ if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \ rm -r /root/.cache ### Get Flask for the app RUN pip install --trusted-host pypi.python.org flask #### #### OPTIONAL : 4. 
SET JAVA_HOME environment variable, uncomment the line below if you need it #ENV JAVA_HOME="/usr/lib/jvm/java-1.8-openjdk" #### EXPOSE 81 ADD test.py / CMD ["python", "test.py"] I'm new to Docker, so this may not be the best possible solution. I'm open to suggestions. UPDATE: COMMON ISUUES Difficulty using python packages As Joabe Lucena pointed out here, Alpine can have issues certain python packages. I recommend that you use a Linux distro that works best for you, e.g. centos. A: Another alternative is to simply use docker-java-python image from docker hub. https://hub.docker.com/r/rappdw/docker-java-python FROM rappdw/docker-java-python:openjdk1.8.0_171-python3.6.6 RUN java -version RUN python --version A: Oh, let me add my five cents. I took python slim as a base image. Then I found open-jdk-11 (Note, open-jdk-10 will fail because it is not supported) base image code!... And copy-pasted it into my docker file. Note, copy-paste driven development is cool... ONLY when you understand each line you use in your code!!! And here it is! <!-- language: shell --> FROM python:3.7.2-slim # Do your stuff, install python. # and now Jdk RUN rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get upgrade -y \ && apt-get install -y --no-install-recommends curl ca-certificates \ && rm -rf /var/lib/apt/lists/* ENV JAVA_VERSION jdk-11.0.2+7 COPY slim-java* /usr/local/bin/ RUN set -eux; \ ARCH="$(dpkg --print-architecture)"; \ case "${ARCH}" in \ ppc64el|ppc64le) \ ESUM='c18364a778b1b990e8e62d094377af48b000f9f6a64ec21baff6a032af06386d'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.1_13.tar.gz'; \ ;; \ s390x) \ ESUM='e39aacc270731dadcdc000aaaf709adae7a08113ccf5b4a045bc87fc13458d71'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11%2B28/OpenJDK11-jdk_s390x_linux_hotspot_11_28.tar.gz'; \ ;; \ amd64|x86_64) \ ESUM='d89304a971e5186e80b6a48a9415e49583b7a5a9315ba5552d373be7782fc528'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.2%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.2_7.tar.gz'; \ ;; \ aarch64|arm64) \ ESUM='b66121b9a0c2e7176373e670a499b9d55344bcb326f67140ad6d0dc24d13d3e2'; \ BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.1_13.tar.gz'; \ ;; \ *) \ echo "Unsupported arch: ${ARCH}"; \ exit 1; \ ;; \ esac; \ curl -Lso /tmp/openjdk.tar.gz ${BINARY_URL}; \ sha256sum /tmp/openjdk.tar.gz; \ mkdir -p /opt/java/openjdk; \ cd /opt/java/openjdk; \ echo "${ESUM} /tmp/openjdk.tar.gz" | sha256sum -c -; \ tar -xf /tmp/openjdk.tar.gz; \ jdir=$(dirname $(dirname $(find /opt/java/openjdk -name javac))); \ mv ${jdir}/* /opt/java/openjdk; \ export PATH="/opt/java/openjdk/bin:$PATH"; \ apt-get update; apt-get install -y --no-install-recommends binutils; \ /usr/local/bin/slim-java.sh /opt/java/openjdk; \ apt-get remove -y binutils; \ rm -rf /var/lib/apt/lists/*; \ rm -rf ${jdir} /tmp/openjdk.tar.gz; ENV JAVA_HOME=/opt/java/openjdk \ PATH="/opt/java/openjdk/bin:$PATH" ENV JAVA_TOOL_OPTIONS="-XX:+UseContainerSupport" Now references. https://github.com/AdoptOpenJDK/openjdk-docker/blob/master/11/jdk/ubuntu/Dockerfile.hotspot.releases.slim https://hub.docker.com/_/python/ https://hub.docker.com/r/adoptopenjdk/openjdk11/ I used them to answer this question, which may help you sometime. 
Running Python and Java in Docker A: I found Sunny Pal's answer very useful but I made the copy more specific and added the necessary environment variables and update-alternatives lines so that Java was accessible from the command line in the Python container. FROM python:3.9-slim COPY --from=openjdk:8-jre-slim /usr/local/openjdk-8 /usr/local/openjdk-8 ENV JAVA_HOME /usr/local/openjdk-8 RUN update-alternatives --install /usr/bin/java java /usr/local/openjdk-8/bin/java 1 ... A: I believe that by adding FROM openjdk:slim line, you tell docker to execute all of your subsequent commands in openjdk container (which does not have python) I would approach this by creating two separate containers for openjdk and python and specify individual sets of commands for them. Docker is made to modularize your solutions and mashing everything into one container is usually a bad practice. A: I tried pajamas's anwser which worked very well for creating this image. However, when trying to install packages like gensim, pandas or else, I faced some errors like: don't know how to compile Fortran code on platform 'posix'. I searched and tried this, this and that but none worked for me. So, based on pajamas's anwser I decided to convert his image from Alpine to Centos which worked very well. So here's a Dockerfile that might help someone who's may be struggling in this scenario like I was: # Get Linux FROM centos:7 # Install Java RUN yum update -y \ && yum install java-1.8.0-openjdk -y \ && yum clean all \ && rm -rf /var/cache/yum # Set JAVA_HOME environment var ENV JAVA_HOME="/usr/lib/jvm/jre-openjdk" # Install Python RUN yum install python3 -y \ && pip3 install --upgrade pip setuptools wheel \ && if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi \ && if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi \ && yum clean all \ && rm -rf /var/cache/yum CMD ["bash"] A: you should have one FROM in your dockerfile (unless you use multi-stage build for the docker) A: I think i found easiest way to mix java jdk 17 and python3. I is not working on python2 FROM openjdk:17.0.1-jdk-slim RUN apt-get update && \ apt-get install -y software-properties-common && \ apt-get install -y python3-pip Software Commons have python3 lightweight version. (3.9.1 version) U can also install some libraries like that. RUN python3 -m pip install --upgrade pip && \ python3 -m pip install numpy && \ python3 -m pip install opencv-python OR RUN apt-get update && \ apt-get install -y ffmpeg A: Easiest is to just start from a Python image and add the OpenJDK. Note that FROM openjdk has been deprecated and replaced with eclipse-temurin FROM python:3.10 ENV JAVA_HOME=/opt/java/openjdk COPY --from=eclipse-temurin:17-jre $JAVA_HOME $JAVA_HOME ENV PATH="${JAVA_HOME}/bin:${PATH}" RUN pip install --trusted-host pypi.python.org flask See How to use this Image - Using a different base Image section of https://hub.docker.com/_/eclipse-temurin for details.
How to run Docker with python and Java?
I need both java and python in my docker container to run some code. This is my dockerfile: It works perpectly if I don't add the FROM openjdk:slim #get python FROM python:3.6-slim RUN pip install --trusted-host pypi.python.org flask #get openjdk FROM openjdk:slim COPY . /targetdir WORKDIR /targetdir # Make port 81 available to the world outside this container EXPOSE 81 CMD ["python", "test.py"] And the test.py app is in the same directory: from flask import Flask import os app = Flask(__name__) @app.route("/") def hello(): html = "<h3>Test:{test}</h3>" test = os.environ['JAVA_HOME'] return html.format(test = test) if __name__ == '__main__': app.run(debug=True,host='0.0.0.0',port=81) I'm getting this error: D:\MyApps\Docker Toolbox\Docker Toolbox\docker.exe: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"python\": executable file not found in $PATH": unknown. What exactly am I doing wrong here? I'm new to docker, perhaps I'm missing a step. Additional details My goal I have to run a python program that runs a Java file. The python library I'm using requires the path to JAVA_HOME. My issues: I do not know Java, so I cannot run the file properly. My entire code is in Python, except this Java bit The Python wrapper runs the file in a way I need it to run.
[ "An easier solution to the above issue is to use multi-stage docker containers where you can copy the content from one to another. In the above case you can have openjdk:slim as the base container and then use content from a python container to be copied over into this base container as follows:\nFROM openjdk:slim\nCOPY --from=python:3.6 / /\n\n... \n\n<normal instructions for python container continues>\n\n...\n\n\nThis feature is available as of Docker 17.05 and there are more things you can do using multi-stage build as in copying only the content you need from one to another.\nReference documentation\n", "OK it took me a little while to figure it out. And my thanks go to this answer.\nI think my approach didn't work because I did not have a basic version of Linux.\nSo it goes like this:\n\nGet Linux (I'm using Alpine because it's barebones)\nGet Java via the package manager\nGet Python, PIP\n\nOPTIONAL: find and set JAVA_HOME\n\nFind the path to JAVA_HOME. Perhaps there is a better way to do this, but I did this running the running the container, then I looked inside the container using docker exec -it [COINTAINER ID] bin/bash and found it.\nSet JAVA_HOME in dockerfile and build + run it all again\n\nHere is the final Dockerfile ( it should work with the python code in the question) :\n### 1. Get Linux\nFROM alpine:3.7\n\n### 2. Get Java via the package manager\nRUN apk update \\\n&& apk upgrade \\\n&& apk add --no-cache bash \\\n&& apk add --no-cache --virtual=build-dependencies unzip \\\n&& apk add --no-cache curl \\\n&& apk add --no-cache openjdk8-jre\n\n### 3. Get Python, PIP\n\nRUN apk add --no-cache python3 \\\n&& python3 -m ensurepip \\\n&& pip3 install --upgrade pip setuptools \\\n&& rm -r /usr/lib/python*/ensurepip && \\\nif [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi && \\\nif [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi && \\\nrm -r /root/.cache\n\n### Get Flask for the app\nRUN pip install --trusted-host pypi.python.org flask\n\n####\n#### OPTIONAL : 4. SET JAVA_HOME environment variable, uncomment the line below if you need it\n\n#ENV JAVA_HOME=\"/usr/lib/jvm/java-1.8-openjdk\"\n\n####\n\nEXPOSE 81 \nADD test.py /\nCMD [\"python\", \"test.py\"]\n\nI'm new to Docker, so this may not be the best possible solution. I'm open to suggestions.\nUPDATE: COMMON ISUUES\n\nDifficulty using python packages\n\nAs Joabe Lucena pointed out here, Alpine can have issues certain python packages.\nI recommend that you use a Linux distro that works best for you, e.g. centos.\n", "Another alternative is to simply use docker-java-python image from docker hub. https://hub.docker.com/r/rappdw/docker-java-python\nFROM rappdw/docker-java-python:openjdk1.8.0_171-python3.6.6\nRUN java -version\nRUN python --version\n\n", "Oh, let me add my five cents. I took python slim as a base image. Then I found open-jdk-11 (Note, open-jdk-10 will fail because it is not supported) base image code!... And copy-pasted it into my docker file. \nNote, copy-paste driven development is cool... 
ONLY when you understand each line you use in your code!!!\nAnd here it is!\n<!-- language: shell -->\nFROM python:3.7.2-slim\n\n# Do your stuff, install python.\n\n# and now Jdk\nRUN rm -rf /var/lib/apt/lists/* && apt-get clean && apt-get update && apt-get upgrade -y \\\n && apt-get install -y --no-install-recommends curl ca-certificates \\\n && rm -rf /var/lib/apt/lists/*\n\nENV JAVA_VERSION jdk-11.0.2+7\n\nCOPY slim-java* /usr/local/bin/\n\nRUN set -eux; \\\n ARCH=\"$(dpkg --print-architecture)\"; \\\n case \"${ARCH}\" in \\\n ppc64el|ppc64le) \\\n ESUM='c18364a778b1b990e8e62d094377af48b000f9f6a64ec21baff6a032af06386d'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_ppc64le_linux_hotspot_11.0.1_13.tar.gz'; \\\n ;; \\\n s390x) \\\n ESUM='e39aacc270731dadcdc000aaaf709adae7a08113ccf5b4a045bc87fc13458d71'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11%2B28/OpenJDK11-jdk_s390x_linux_hotspot_11_28.tar.gz'; \\\n ;; \\\n amd64|x86_64) \\\n ESUM='d89304a971e5186e80b6a48a9415e49583b7a5a9315ba5552d373be7782fc528'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.2%2B7/OpenJDK11U-jdk_x64_linux_hotspot_11.0.2_7.tar.gz'; \\\n ;; \\\n aarch64|arm64) \\\n ESUM='b66121b9a0c2e7176373e670a499b9d55344bcb326f67140ad6d0dc24d13d3e2'; \\\n BINARY_URL='https://github.com/AdoptOpenJDK/openjdk11-binaries/releases/download/jdk-11.0.1%2B13/OpenJDK11U-jdk_aarch64_linux_hotspot_11.0.1_13.tar.gz'; \\\n ;; \\\n *) \\\n echo \"Unsupported arch: ${ARCH}\"; \\\n exit 1; \\\n ;; \\\n esac; \\\n curl -Lso /tmp/openjdk.tar.gz ${BINARY_URL}; \\\n sha256sum /tmp/openjdk.tar.gz; \\\n mkdir -p /opt/java/openjdk; \\\n cd /opt/java/openjdk; \\\n echo \"${ESUM} /tmp/openjdk.tar.gz\" | sha256sum -c -; \\\n tar -xf /tmp/openjdk.tar.gz; \\\n jdir=$(dirname $(dirname $(find /opt/java/openjdk -name javac))); \\\n mv ${jdir}/* /opt/java/openjdk; \\\n export PATH=\"/opt/java/openjdk/bin:$PATH\"; \\\n apt-get update; apt-get install -y --no-install-recommends binutils; \\\n /usr/local/bin/slim-java.sh /opt/java/openjdk; \\\n apt-get remove -y binutils; \\\n rm -rf /var/lib/apt/lists/*; \\\n rm -rf ${jdir} /tmp/openjdk.tar.gz;\n\nENV JAVA_HOME=/opt/java/openjdk \\\n PATH=\"/opt/java/openjdk/bin:$PATH\"\nENV JAVA_TOOL_OPTIONS=\"-XX:+UseContainerSupport\"\n\nNow references.\nhttps://github.com/AdoptOpenJDK/openjdk-docker/blob/master/11/jdk/ubuntu/Dockerfile.hotspot.releases.slim\nhttps://hub.docker.com/_/python/\nhttps://hub.docker.com/r/adoptopenjdk/openjdk11/\nI used them to answer this question, which may help you sometime.\nRunning Python and Java in Docker\n", "I found Sunny Pal's answer very useful but I made the copy more specific and added the necessary environment variables and update-alternatives lines so that Java was accessible from the command line in the Python container.\nFROM python:3.9-slim\nCOPY --from=openjdk:8-jre-slim /usr/local/openjdk-8 /usr/local/openjdk-8\n\nENV JAVA_HOME /usr/local/openjdk-8\n\nRUN update-alternatives --install /usr/bin/java java /usr/local/openjdk-8/bin/java 1\n...\n\n", "I believe that by adding FROM openjdk:slim line, you tell docker to execute all of your subsequent commands in openjdk container (which does not have python)\nI would approach this by creating two separate containers for openjdk and python and specify individual sets of commands for them.\nDocker is made to modularize your solutions and mashing everything into one 
container is usually a bad practice. \n", "I tried pajamas's anwser which worked very well for creating this image. However, when trying to install packages like gensim, pandas or else, I faced some errors like: don't know how to compile Fortran code on platform 'posix'. I searched and tried this, this and that but none worked for me.\nSo, based on pajamas's anwser I decided to convert his image from Alpine to Centos which worked very well. So here's a Dockerfile that might help someone who's may be struggling in this scenario like I was:\n# Get Linux\nFROM centos:7\n\n# Install Java\nRUN yum update -y \\\n&& yum install java-1.8.0-openjdk -y \\\n&& yum clean all \\\n&& rm -rf /var/cache/yum\n\n# Set JAVA_HOME environment var\nENV JAVA_HOME=\"/usr/lib/jvm/jre-openjdk\"\n\n# Install Python\nRUN yum install python3 -y \\\n&& pip3 install --upgrade pip setuptools wheel \\\n&& if [ ! -e /usr/bin/pip ]; then ln -s pip3 /usr/bin/pip ; fi \\\n&& if [[ ! -e /usr/bin/python ]]; then ln -sf /usr/bin/python3 /usr/bin/python; fi \\\n&& yum clean all \\\n&& rm -rf /var/cache/yum\n\nCMD [\"bash\"]\n\n", "you should have one FROM in your dockerfile\n(unless you use multi-stage build for the docker) \n", "I think i found easiest way to mix java jdk 17 and python3. I is not working on python2\nFROM openjdk:17.0.1-jdk-slim\n\n\nRUN apt-get update && \\\n apt-get install -y software-properties-common && \\\n apt-get install -y python3-pip\n\nSoftware Commons have python3 lightweight version. (3.9.1 version)\nU can also install some libraries like that.\nRUN python3 -m pip install --upgrade pip && \\\n python3 -m pip install numpy && \\\n python3 -m pip install opencv-python\n\nOR\nRUN apt-get update && \\\n apt-get install -y ffmpeg\n\n", "Easiest is to just start from a Python image and add the OpenJDK. Note that FROM openjdk has been deprecated and replaced with eclipse-temurin\nFROM python:3.10\n\nENV JAVA_HOME=/opt/java/openjdk\nCOPY --from=eclipse-temurin:17-jre $JAVA_HOME $JAVA_HOME\nENV PATH=\"${JAVA_HOME}/bin:${PATH}\"\n\nRUN pip install --trusted-host pypi.python.org flask\n\nSee How to use this Image - Using a different base Image section of https://hub.docker.com/_/eclipse-temurin for details.\n" ]
[ 34, 30, 6, 2, 2, 1, 1, 0, 0, 0 ]
[ "Instead of using FROM openjdk:slim you can separately install Java, please refer below example:\n# Install OpenJDK-8\nRUN apt-get update && \\\napt-get install -y openjdk-8-jdk && \\\napt-get install -y ant && \\\napt-get clean;\n\n# Fix certificate issues\nRUN apt-get update && \\\napt-get install ca-certificates-java && \\\napt-get clean && \\\nupdate-ca-certificates -f;\n# Setup JAVA_HOME -- useful for docker commandline\nENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/\nRUN export JAVA_HOME\n\n" ]
[ -1 ]
[ "docker", "java", "python", "python_3.x" ]
stackoverflow_0051121875_docker_java_python_python_3.x.txt
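Since the goal behind the question above is a Python program that shells out to Java and needs JAVA_HOME, a quick runtime check inside whichever image gets built can save a lot of guesswork. The sketch below is a hypothetical helper, not part of the original thread: the script name is invented, and it assumes java ends up either on PATH or under $JAVA_HOME/bin in the final image.

# check_runtimes.py - minimal sketch; the name and the PATH/JAVA_HOME assumptions
# are illustrative, not taken from the thread.
import os
import shutil
import subprocess
import sys

def main():
    print("Python:", sys.version.split()[0])
    java_home = os.environ.get("JAVA_HOME")
    print("JAVA_HOME:", java_home or "<not set>")

    java_bin = shutil.which("java")
    if java_bin is None and java_home:
        # fall back to $JAVA_HOME/bin/java if java is not on PATH
        candidate = os.path.join(java_home, "bin", "java")
        java_bin = candidate if os.path.exists(candidate) else None
    if java_bin is None:
        print("No java executable found - the image is missing a JRE/JDK")
        return 1

    # most JDKs print the version banner to stderr rather than stdout
    result = subprocess.run([java_bin, "-version"], capture_output=True, text=True)
    print((result.stderr or result.stdout).strip())
    return 0

if __name__ == "__main__":
    raise SystemExit(main())

Copied next to test.py and run with something like docker run <image> python check_runtimes.py (or python3, depending on the base image), it should print both runtime versions if the chosen Dockerfile really provides Python and Java together.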
Q: Building TG bot with python giving me "TypeError: 'module' object is not callable" I'm trying to build a Telegram bot to share my open jobs (I'm a technical recruiter). Everything looks good with the code, but I keep getting this error: TypeError: 'module' object is not callable It's also saying bug is in this line updater = telegram.ext.updater("API_KEY", use_context = True) I know it has to do something with the import and using it. But not sure how to fix this. I've imported the correct library with: pip3 install python-telegram-bot Any help is welcome! import telegram.ext Token = "API_KEY" updater = telegram.ext.updater("API_KEY", use_context = True) dispatcher = updater.dispatcher def start(update, context): update.message.reply_text("Hello! Thanks for messaging me!") def help(update, context): update.message.reply_text( """ /aboutme -> More info about me. /jobs -> See all my open jobs! /myinfo -> My email information. /latamsalaries -> Current LATAM salaries for devs. """ ) def aboutme(update, context): update.message.reply_text("My name is Robert Grootjen, and I'm a technical headhunter! My mission is to connect the best talent in Latin America with topnotch companies in the United States and Canada.") def jobs(update, context): update.message.reply_text("") def myinfo(update, context): update.message.reply_text("Linkedin: https://www.linkedin.com/in/robert-grootjen-08a10b15a/") def latamsalaries(update, context): update.message.reply_text("Jr - $1500 - $3000 USD") dispatcher.add_handler(telegram.ext.CommandHandler('start', start)) dispatcher.add_handler(telegram.ext.CommandHandler('aboutme', aboutme)) dispatcher.add_handler(telegram.ext.CommandHandler('jobs', jobs)) dispatcher.add_handler(telegram.ext.CommandHandler('myinfo', myinfo)) dispatcher.add_handler(telegram.ext.CommandHandler('latamsalaries', latamsalaries)) updater.start_polling() updater.idle() A: Code looks fine. I Hope you are running this code in vscode without setting python workenvironment. try by running python -m (path_to_py) if thats not the case, then look:Type Error: module obj...
Building TG bot with python giving me "TypeError: 'module' object is not callable"
I'm trying to build a Telegram bot to share my open jobs (I'm a technical recruiter). Everything looks good with the code, but I keep getting this error: TypeError: 'module' object is not callable It's also saying bug is in this line updater = telegram.ext.updater("API_KEY", use_context = True) I know it has to do something with the import and using it. But not sure how to fix this. I've imported the correct library with: pip3 install python-telegram-bot Any help is welcome! import telegram.ext Token = "API_KEY" updater = telegram.ext.updater("API_KEY", use_context = True) dispatcher = updater.dispatcher def start(update, context): update.message.reply_text("Hello! Thanks for messaging me!") def help(update, context): update.message.reply_text( """ /aboutme -> More info about me. /jobs -> See all my open jobs! /myinfo -> My email information. /latamsalaries -> Current LATAM salaries for devs. """ ) def aboutme(update, context): update.message.reply_text("My name is Robert Grootjen, and I'm a technical headhunter! My mission is to connect the best talent in Latin America with topnotch companies in the United States and Canada.") def jobs(update, context): update.message.reply_text("") def myinfo(update, context): update.message.reply_text("Linkedin: https://www.linkedin.com/in/robert-grootjen-08a10b15a/") def latamsalaries(update, context): update.message.reply_text("Jr - $1500 - $3000 USD") dispatcher.add_handler(telegram.ext.CommandHandler('start', start)) dispatcher.add_handler(telegram.ext.CommandHandler('aboutme', aboutme)) dispatcher.add_handler(telegram.ext.CommandHandler('jobs', jobs)) dispatcher.add_handler(telegram.ext.CommandHandler('myinfo', myinfo)) dispatcher.add_handler(telegram.ext.CommandHandler('latamsalaries', latamsalaries)) updater.start_polling() updater.idle()
[ "Code looks fine.\nI Hope you are running this code in vscode without setting python workenvironment. try by running python -m (path_to_py)\nif thats not the case, then look:Type Error: module obj...\n" ]
[ 0 ]
[]
[]
[ "python_3.x", "telegram", "telegram_bot" ]
stackoverflow_0074661159_python_3.x_telegram_telegram_bot.txt
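For completeness, "TypeError: 'module' object is not callable" is the classic symptom of calling a module where a class was meant: in the 13.x line of python-telegram-bot (the one that still accepts use_context=True), the class is Updater with a capital U, while telegram.ext.updater is just the module that contains it. A minimal sketch under that version assumption:

# Assumes python-telegram-bot 13.x; API_KEY is the asker's own placeholder token.
from telegram.ext import Updater, CommandHandler

def start(update, context):
    update.message.reply_text("Hello! Thanks for messaging me!")

def main():
    updater = Updater("API_KEY", use_context=True)  # Updater class, not the updater module
    dispatcher = updater.dispatcher
    dispatcher.add_handler(CommandHandler("start", start))
    updater.start_polling()
    updater.idle()

if __name__ == "__main__":
    main()

Note that python-telegram-bot 20+ replaced this Updater/Dispatcher style with Application.builder(), so if a newer version is installed the sketch above will not apply as written.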
Q: Getting error "invalid_request: Invalid web redirect url" when using Apple ID sign-in with Firebase The issue We have a development copy of a website and a production copy. We also have a Firebase project for dev and another for production. In both, sign-in with Google and email link options work. But for Apple sign-in, it only works in development. I'm honestly stumped about what the issue could be. The error in the Apple sign-in popup provides no other clue. It just thinks the redirect URL for the Firebase project is wrong, even after all of the checking I've done. Any suggestions will be greatly appreciated. What we've tried In Apple Developer account > "Certificates, Identifiers, & Profiles" A Service Identifier is configured for web Named like com.example.web, but for our company Set up for "Sign in with Apple" Primary app ID is our actual app we're actively developing Under "Website URLs" Necessary domains and subdomains are listed - without an https:// prefix Necessary redirect URLs are listed - with an https:// prefix We've generated a private key to use in Firebase I've tried using the one I know works in both Firebase projects, but I've also tried using a different one In Firebase > Authentication > Sign-in method > Apple This method is enabled The Services ID is correct in both cases We're using the same one for both Firebase projects (not sure if we need separate ones for each) The Apple team ID is correct The Key ID is correct (I've checked this several times, when using the same key for both and when using a different key for production) The correct key is used for the selected Key ID The handler URL on this page matches what was added to the Service Identifier created earlier After all this, I also tried creating a completely different Service Identifier to use for production. Instead of the invalid redirect URL issue, I got an "invalid client" error, so I went back to the identifier and key we were originally using. Solutions found elsewhere There were a few that came up on the Apple forums and a handful here on StackOverflow. Most said to make sure the redirect URL was correct and had the HTTPS prefix, and that the domains were added without a prefix. I've checked these over and over and there's no mistake here. https://developer.apple.com/forums/thread/660315 https://developer.apple.com/forums/thread/661345 Sign In With Apple JS returns 'invalid_request: Invalid redirect_uri.' I also tried deleting and recreating the indentifier as mentioned in this thread. Apple sign-in still works with the development site and development Firebase project, but not with the production site and Firebase project. https://developer.apple.com/forums/thread/132915 A: Issue resolved. When we first encountered this issue, the configuration was incomplete. I think after we finished configuration in our Apple account, it may have taken time to propagate since we were still seeing the error by the time I posted this question. But it's working correctly now.
Getting error "invalid_request: Invalid web redirect url" when using Apple ID sign-in with Firebase
The issue We have a development copy of a website and a production copy. We also have a Firebase project for dev and another for production. In both, sign-in with Google and email link options work. But for Apple sign-in, it only works in development. I'm honestly stumped about what the issue could be. The error in the Apple sign-in popup provides no other clue. It just thinks the redirect URL for the Firebase project is wrong, even after all of the checking I've done. Any suggestions will be greatly appreciated. What we've tried In Apple Developer account > "Certificates, Identifiers, & Profiles" A Service Identifier is configured for web Named like com.example.web, but for our company Set up for "Sign in with Apple" Primary app ID is our actual app we're actively developing Under "Website URLs" Necessary domains and subdomains are listed - without an https:// prefix Necessary redirect URLs are listed - with an https:// prefix We've generated a private key to use in Firebase I've tried using the one I know works in both Firebase projects, but I've also tried using a different one In Firebase > Authentication > Sign-in method > Apple This method is enabled The Services ID is correct in both cases We're using the same one for both Firebase projects (not sure if we need separate ones for each) The Apple team ID is correct The Key ID is correct (I've checked this several times, when using the same key for both and when using a different key for production) The correct key is used for the selected Key ID The handler URL on this page matches what was added to the Service Identifier created earlier After all this, I also tried creating a completely different Service Identifier to use for production. Instead of the invalid redirect URL issue, I got an "invalid client" error, so I went back to the identifier and key we were originally using. Solutions found elsewhere There were a few that came up on the Apple forums and a handful here on StackOverflow. Most said to make sure the redirect URL was correct and had the HTTPS prefix, and that the domains were added without a prefix. I've checked these over and over and there's no mistake here. https://developer.apple.com/forums/thread/660315 https://developer.apple.com/forums/thread/661345 Sign In With Apple JS returns 'invalid_request: Invalid redirect_uri.' I also tried deleting and recreating the indentifier as mentioned in this thread. Apple sign-in still works with the development site and development Firebase project, but not with the production site and Firebase project. https://developer.apple.com/forums/thread/132915
[ "Issue resolved.\nWhen we first encountered this issue, the configuration was incomplete. I think after we finished configuration in our Apple account, it may have taken time to propagate since we were still seeing the error by the time I posted this question.\nBut it's working correctly now.\n" ]
[ 0 ]
[]
[]
[ "apple_sign_in", "firebase", "oauth" ]
stackoverflow_0074633470_apple_sign_in_firebase_oauth.txt
Q: How to connect to remote and run Python from local like in SQL DB Management tools So if we want to run SQL on a remote server, we can connect to it using JDBC connection strings. Is there something similar but for Python? I want to develop using my already-tuned IDE instead of the clunky IDEs available for remote servers, like Zeppelin. Do you know a secure way to achieve this? I know it's possible using SSH, but I don't think that is the best option security-wise. And if there is no option, could I get a recommendation of a powerful IDE I can install on my clusters and expose through a web interface, maybe? Thanks!
How to connect to remote and run Python from local like in SQL DB Management tools
So if we want to run SQL on a remote server, we can connect to it using JDBC connection strings. Is there something similar but for Python? I want to develop using my already-tuned IDE instead of the clunky IDEs available for remote servers, like Zeppelin. Do you know a secure way to achieve this? I know it's possible using SSH, but I don't think that is the best option security-wise. And if there is no option, could I get a recommendation of a powerful IDE I can install on my clusters and expose through a web interface, maybe? Thanks!
[]
[]
[ "I'm not aware of a way to execute python on a remote instance without establishing an ssh (linux) or winrm/prsp (windows) session first. I do know of a powerful IDE that can accomplish this pretty smoothly though.\nPycharm Professional has the ability to establish an ssh session out to a target environment, allows you to setup a virtual environment on that target, and then set that virtual environment as you codes interpreter. This will effectively allow you to develop you code on your personal computer, but execute against the target server. The interactive debug mode also works in your local IDE even though the code is running on the remote server.\nIt's important to note that this functionality is only available in the PyCharm Professional edition, so a license will need to be purchased in order to develop locally but execute remotely.\nHopefully this will meet your needs of connecting to and remotely executing python code.\nLinks to Pycharm documentation for remote development:\n\nhttps://www.jetbrains.com/help/pycharm/remote-development-overview.html\nhttps://www.jetbrains.com/help/pycharm/remote-development-starting-page.html\n\n" ]
[ -1 ]
[ "pycharm", "python", "remote_server", "visual_studio_code" ]
stackoverflow_0074660678_pycharm_python_remote_server_visual_studio_code.txt
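On the SSH concern raised in the question above: key-based SSH is the usual transport for this kind of remote execution, and it can be scripted so the workflow feels close to a JDBC-style connection. Below is a rough sketch using paramiko; the host name, user name and key path are placeholders, and it assumes python3 exists on the remote machine.

import os
import paramiko

REMOTE_HOST = "my-cluster.example.com"              # placeholder
REMOTE_USER = "analyst"                             # placeholder
KEY_FILE = os.path.expanduser("~/.ssh/id_ed25519")  # placeholder key path

# A tiny script to run remotely; in practice this could be read from a local file.
script = 'import platform, sys; print(platform.node(), sys.version.split()[0])'

client = paramiko.SSHClient()
client.load_system_host_keys()           # trust hosts already present in known_hosts
client.connect(REMOTE_HOST, username=REMOTE_USER, key_filename=KEY_FILE)
try:
    # feed the script to the remote interpreter over stdin
    stdin, stdout, stderr = client.exec_command("python3 -")
    stdin.write(script)
    stdin.channel.shutdown_write()
    print(stdout.read().decode(), end="")
    print(stderr.read().decode(), end="")
finally:
    client.close()

IDE-side, the remote-interpreter support mentioned in the PyCharm answer (and VS Code's Remote-SSH extension) layers the same SSH transport under a full editing and debugging experience.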
Q: Quit function in python programming I have tried to use the 'quit()' function in python and the spyder's compiler keep says me "quit" is not defined print("Welcome to my computer quiz") playing = input("Do you want to play? ") if (playing != "yes" ): quit() print("Okay! Let's play :)") the output keep says me "name 'quit' is not defined", how can i solve that problem? A: There is no such thing as quit() in python. Python rather has exit(). Simply replace your quit() to exit(). print("Welcome to my computer quiz") playing = input("Do you want to play? ") if (playing != "yes" ): exit() print("Okay! Let's play :)") A: Invert the logic and play if the user answers yes. The game will automatically quit when it reaches the end of the file print("Welcome to my computer quiz") playing = input("Do you want to play? ") if (playing == "yes" ): print("Okay! Let's play :)")
Quit function in python programming
I have tried to use the 'quit()' function in python and the spyder's compiler keep says me "quit" is not defined print("Welcome to my computer quiz") playing = input("Do you want to play? ") if (playing != "yes" ): quit() print("Okay! Let's play :)") the output keep says me "name 'quit' is not defined", how can i solve that problem?
[ "There is no such thing as quit() in python. Python rather has exit(). Simply replace your quit() to exit().\nprint(\"Welcome to my computer quiz\")\n\nplaying = input(\"Do you want to play? \")\n\nif (playing != \"yes\" ):\n exit()\n \nprint(\"Okay! Let's play :)\")\n\n", "Invert the logic and play if the user answers yes. The game will automatically quit when it reaches the end of the file\nprint(\"Welcome to my computer quiz\")\n\nplaying = input(\"Do you want to play? \")\n\nif (playing == \"yes\" ):\n print(\"Okay! Let's play :)\")\n\n" ]
[ 2, 0 ]
[]
[]
[ "python", "runtime_error" ]
stackoverflow_0074661123_python_runtime_error.txt
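One more variant worth noting alongside the answers above: sys.exit() is the documented way to stop a script, whereas quit() and exit() are convenience names injected by the site module and are not guaranteed to be available in every environment, which is one plausible reason an IDE console reports the name as undefined. A small sketch of the question's snippet using it:

import sys

print("Welcome to my computer quiz")
playing = input("Do you want to play? ")

if playing != "yes":
    sys.exit()  # stops the script cleanly instead of relying on the site-provided quit()

print("Okay! Let's play :)")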
Q: Python write serial data to the second column of my .csv file Im reading from my serialport data, I can store this data to .csv file. But the problem is that I want to write my data to a second or third column. With code the data is stored in the first column: file = open('test.csv', 'w', encoding="utf",newline="") writer = csv.writer(file) while True: if serialInst.in_waiting: packet = (serialInst.readline()) packet = [str(packet.decode().rstrip())] #decode remove \r\n strip the newline writer.writerow(packet) output of the code .csv file: Column A Column B Data 1 Data 2 Data 3 Data 4 example desired output .csv file: Column A Column B Data1 data 2 Data3 Data 4 A: I've not use the csv.writer before, but a quick read of the docs, seems to indicate that you can only write one row at a time, but you are getting data one cell/value at a time. In your code example, you already have a file handle. Instead of writing one row at a time, you want to write one cell at a time. You'll need some extra variables to keep track of when to make a new line. file = open('test.csv', 'w', encoding="utf",newline="") writer = csv.writer(file) ncols = 2 # 2 columns total in this example, but it's easy to imagine you might want more one day col = 0 # use Python convention of zero based lists/arrays while True: if serialInst.in_waiting: packet = (serialInst.readline()) packet = [str(packet.decode().rstrip())] #decode remove \r\n strip the newline if col == ncols-1: # last column, leave out comma and add newline \n file.write(packet + '\n') col = 0 # reset col to first position else: file.write(packet + ',') col = col + 1 In this code, we're using the write method of a file object instead of using the csv module. See these docs for how to directly read and write from/to files.
Python write serial data to the second column of my .csv file
Im reading from my serialport data, I can store this data to .csv file. But the problem is that I want to write my data to a second or third column. With code the data is stored in the first column: file = open('test.csv', 'w', encoding="utf",newline="") writer = csv.writer(file) while True: if serialInst.in_waiting: packet = (serialInst.readline()) packet = [str(packet.decode().rstrip())] #decode remove \r\n strip the newline writer.writerow(packet) output of the code .csv file: Column A Column B Data 1 Data 2 Data 3 Data 4 example desired output .csv file: Column A Column B Data1 data 2 Data3 Data 4
[ "I've not use the csv.writer before, but a quick read of the docs, seems to indicate that you can only write one row at a time, but you are getting data one cell/value at a time.\nIn your code example, you already have a file handle. Instead of writing one row at a time, you want to write one cell at a time. You'll need some extra variables to keep track of when to make a new line.\nfile = open('test.csv', 'w', encoding=\"utf\",newline=\"\")\nwriter = csv.writer(file)\n\nncols = 2 # 2 columns total in this example, but it's easy to imagine you might want more one day\ncol = 0 # use Python convention of zero based lists/arrays\n\nwhile True:\n if serialInst.in_waiting:\n packet = (serialInst.readline())\n packet = [str(packet.decode().rstrip())] #decode remove \\r\\n strip the newline\n if col == ncols-1:\n # last column, leave out comma and add newline \\n\n file.write(packet + '\\n')\n col = 0 # reset col to first position\n else:\n file.write(packet + ',')\n col = col + 1\n\nIn this code, we're using the write method of a file object instead of using the csv module. See these docs for how to directly read and write from/to files.\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074657545_python.txt
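If keeping csv.writer is preferable to hand-writing commas, a similar effect can be had by buffering readings until a full row is ready. The sketch below assumes serialInst is a pyserial connection as in the question; the port name, baud rate and two-column layout are placeholder assumptions.

import csv
import serial  # pyserial

serialInst = serial.Serial("COM3", 9600)   # placeholder port and baud rate
NCOLS = 2                                  # assumed number of columns per row
row = []

with open("test.csv", "w", encoding="utf-8", newline="") as f:
    writer = csv.writer(f)
    while True:
        if serialInst.in_waiting:
            value = serialInst.readline().decode().rstrip()  # drop \r\n
            row.append(value)
            if len(row) == NCOLS:
                writer.writerow(row)       # writes e.g. Data1,Data2 as a single CSV row
                row = []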
Q: Uncaught TypeError: Cannot read properties of undefined (reading 'classList') while trying to get start animation I am a beginner in javascript. Can you help me? The animation of the running string should start when the .stage container comes into the user's field of view. I see this error in the console: Uncaught TypeError: Cannot read properties of undefined (reading 'classList') So, when .stage container comes into the user's field of view, .string-animation class should be added to the .string class and animation should start. But nothing happens. (function () { var visualBlock = document.querySelector ('.stage'); const observer = new IntersectionObserver(entries => { entries.forEach (entry => { var entryString = entry.target.querySelector ('.string'); if (typeof getCurrentAnimationPreference === 'function' && !getCurrentAnimationPreference()) { return; } if (entry.isIntersecting) { entryString.target.classList.add ('string-animation'); return; } entryString.classList.remove ('string-animation'); }); }); observer.observe (visualBlock); })(); .stage { position: relative; box-sizing: border-box; height: 130px; width: 90%; margin: 50px auto 0; background: #ffffff; overflow: hidden; border: 7px double; border-radius: 10px; border-color: #005490; } .running__string { display: block; } .string { font-size: 40px; font-weight: 600; color: rgb(111, 84, 84); text-transform: uppercase; padding-top: 35px; padding-left: 100%; white-space: nowrap; } .string-animation { -webkit-animation: text 15s linear infinite; -moz-animation: text 15s linear infinite; -o-animation: text 15s linear infinite; animation: text 15s linear infinite; } @-webkit-keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } @-moz-keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } @-o-keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } @keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } <div class="stage"> <div class="running__string"> <div class="string"> <p>Text text text text text text text text text text text text text!</p> </div> </div> <script src="string.js"></string> </div> Where had I gone wrong? I tried adding a script tag to the desired container. I tried changing the class names, at first name of desired container and animation class was the same... A: It was necessary to remove the "target" property when adding animation, as I understood in this case it is inappropriate. Also added a wrap block. I apply the result, suddenly it will be useful to someone. 
(function () { const visualBlock = document.querySelector ('.stage'); const observer = new IntersectionObserver(entries => { entries.forEach (entry => { const entryString = entry.target.querySelector ('.string'); if (typeof getCurrentAnimationPreference === 'function' && !getCurrentAnimationPreference()) { return; } if (entry.isIntersecting) { entryString.classList.add('string-animation'); return; } entryString.classList.remove('string-animation'); }); }); observer.observe (visualBlock); })(); .stage { display: block; box-sizing: border-box; height: 130px; width: 90%; margin: 50px auto 0; background: #ffffff; overflow: hidden; border: 7px double; border-radius: 10px; border-color: #005490; } .wrap { display: block; } .string { font-size: 40px; font-weight: 600; color: rgb(111, 84, 84); text-transform: uppercase; padding-top: 35px; padding-left: 100%; white-space: nowrap; } .string-animation { -webkit-animation: text 15s linear infinite; -moz-animation: text 15s linear infinite; -o-animation: text 15s linear infinite; animation: text 15s linear infinite; } @-webkit-keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}} @-moz-keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}} @-o-keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}} @keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}} <div class="stage"> <div class="wrap"> <div class="string"> При заключении договора на бухгалтерское обслуживание консультация по вопросам оптимизации налогообложения - бесплатно! </div> </div> <script src="string.js"></script> </div>
Uncaught TypeError: Cannot read properties of undefined (reading 'classList') while trying to get start animation
I am a beginner in javascript. Can you help me? The animation of the running string should start when the .stage container comes into the user's field of view. I see this error in the console: Uncaught TypeError: Cannot read properties of undefined (reading 'classList') So, when .stage container comes into the user's field of view, .string-animation class should be added to the .string class and animation should start. But nothing happens. (function () { var visualBlock = document.querySelector ('.stage'); const observer = new IntersectionObserver(entries => { entries.forEach (entry => { var entryString = entry.target.querySelector ('.string'); if (typeof getCurrentAnimationPreference === 'function' && !getCurrentAnimationPreference()) { return; } if (entry.isIntersecting) { entryString.target.classList.add ('string-animation'); return; } entryString.classList.remove ('string-animation'); }); }); observer.observe (visualBlock); })(); .stage { position: relative; box-sizing: border-box; height: 130px; width: 90%; margin: 50px auto 0; background: #ffffff; overflow: hidden; border: 7px double; border-radius: 10px; border-color: #005490; } .running__string { display: block; } .string { font-size: 40px; font-weight: 600; color: rgb(111, 84, 84); text-transform: uppercase; padding-top: 35px; padding-left: 100%; white-space: nowrap; } .string-animation { -webkit-animation: text 15s linear infinite; -moz-animation: text 15s linear infinite; -o-animation: text 15s linear infinite; animation: text 15s linear infinite; } @-webkit-keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } @-moz-keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } @-o-keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } @keyframes text { 0% { transform: translate(0, 0); } 100% { transform: translate(-350%, 0); } } <div class="stage"> <div class="running__string"> <div class="string"> <p>Text text text text text text text text text text text text text!</p> </div> </div> <script src="string.js"></string> </div> Where had I gone wrong? I tried adding a script tag to the desired container. I tried changing the class names, at first name of desired container and animation class was the same...
[ "It was necessary to remove the \"target\" property when adding animation, as I understood in this case it is inappropriate. Also added a wrap block. I apply the result, suddenly it will be useful to someone.\n\n\n(function () {\n const visualBlock = document.querySelector ('.stage');\n const observer = new IntersectionObserver(entries => {\n entries.forEach (entry => {\n const entryString = entry.target.querySelector ('.string');\n if (typeof getCurrentAnimationPreference === 'function' && !getCurrentAnimationPreference()) {\n return;\n }\n if (entry.isIntersecting) {\n entryString.classList.add('string-animation');\n return;\n }\n entryString.classList.remove('string-animation');\n });\n });\n\nobserver.observe (visualBlock);\n\n})();\n.stage {\n display: block;\n box-sizing: border-box;\n height: 130px;\n width: 90%;\n margin: 50px auto 0;\n background: #ffffff;\n overflow: hidden;\n border: 7px double;\n border-radius: 10px;\n border-color: #005490;\n}\n\n.wrap {\n display: block;\n}\n\n.string {\n font-size: 40px;\n font-weight: 600;\n color: rgb(111, 84, 84);\n text-transform: uppercase;\n padding-top: 35px;\n padding-left: 100%;\n white-space: nowrap;\n}\n\n.string-animation {\n -webkit-animation: text 15s linear infinite;\n -moz-animation: text 15s linear infinite;\n -o-animation: text 15s linear infinite;\n animation: text 15s linear infinite;\n}\n \n@-webkit-keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}}\n@-moz-keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}}\n@-o-keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}}\n@keyframes text {0% {transform: translate(0, 0);} 100%{transform: translate(-350%, 0);}}\n<div class=\"stage\">\n <div class=\"wrap\">\n <div class=\"string\">\n При заключении договора на бухгалтерское обслуживание консультация по вопросам оптимизации налогообложения - бесплатно!\n </div>\n </div>\n <script src=\"string.js\"></script> \n </div>\n\n\n\n" ]
[ 0 ]
[]
[]
[ "css_animations", "intersection_observer", "javascript" ]
stackoverflow_0074627306_css_animations_intersection_observer_javascript.txt
Q: R dataframe Removing duplicates / choosing which duplicate to remove I have a dataframe that has duplicates based on their identifying ID, but some of the columns are different. I'd like to keep the rows (or the duplicates) that have the extra bit of info. The structure of the df is as such. id <- c("3235453", "3235453", "21354315", "21354315", "2121421") Plan_name<- c("angers", "strasbourg", "Benzema", "angers", "montpellier") service_line<- c("", "AMRS", "", "Therapy", "") treatment<-c("", "MH", "", "MH", "") df <- data.frame (id, Plan_name, treatment, service_line) As you can see, the ID row has duplicates, but I'd like to keep the second duplicate where there is more info in treatment and service_line. I have tried using df[duplicated(df[,c(1,3)]),] but it doesn't work as an empty df is returned. Any suggestions? A: Try with library(dplyr) df %>% filter(if_all(treatment:service_line, ~ .x != "")) -output id Plan_name Section.B Section.C 1 3235453 strasbourg MH AMRS 2 21354315 angers MH Therapy If we need ids with blanks and not duplicated as well df %>% group_by(id) %>% filter(n() == 1|if_all(treatment:service_line, ~ .x != "")) %>% ungroup -output # A tibble: 3 × 4 id Plan_name treatment service_line <chr> <chr> <chr> <chr> 1 3235453 strasbourg "MH" "AMRS" 2 21354315 angers "MH" "Therapy" 3 2121421 montpellier "" "" A: Maybe you want something like this: First we replace all blank with NA, then we arrange be Section.B and finally slice() first row from group: library(dplyr) df %>% mutate(across(-c(id, Plan_name),~ifelse(.=="", NA, .))) %>% group_by(id) %>% arrange(Section.B, .by_group = TRUE) %>% slice(1) id Plan_name Section.B Section.C <chr> <chr> <chr> <chr> 1 2121421 montpellier NA NA 2 21354315 angers MH Therapy 3 3235453 strasbourg MH AMRS
R dataframe Removing duplicates / choosing which duplicate to remove
I have a dataframe that has duplicates based on their identifying ID, but some of the columns are different. I'd like to keep the rows (or the duplicates) that have the extra bit of info. The structure of the df is as such. id <- c("3235453", "3235453", "21354315", "21354315", "2121421") Plan_name<- c("angers", "strasbourg", "Benzema", "angers", "montpellier") service_line<- c("", "AMRS", "", "Therapy", "") treatment<-c("", "MH", "", "MH", "") df <- data.frame (id, Plan_name, treatment, service_line) As you can see, the ID row has duplicates, but I'd like to keep the second duplicate where there is more info in treatment and service_line. I have tried using df[duplicated(df[,c(1,3)]),] but it doesn't work as an empty df is returned. Any suggestions?
[ "Try with\nlibrary(dplyr)\ndf %>%\n filter(if_all(treatment:service_line, ~ .x != \"\"))\n\n-output\n id Plan_name Section.B Section.C\n1 3235453 strasbourg MH AMRS\n2 21354315 angers MH Therapy\n\n\nIf we need ids with blanks and not duplicated as well\ndf %>% \n group_by(id) %>%\n filter(n() == 1|if_all(treatment:service_line, ~ .x != \"\")) %>%\n ungroup\n\n-output\n# A tibble: 3 × 4\n id Plan_name treatment service_line\n <chr> <chr> <chr> <chr> \n1 3235453 strasbourg \"MH\" \"AMRS\" \n2 21354315 angers \"MH\" \"Therapy\" \n3 2121421 montpellier \"\" \"\" \n\n", "Maybe you want something like this:\nFirst we replace all blank with NA, then we arrange be Section.B and finally slice() first row from group:\nlibrary(dplyr)\ndf %>%\n mutate(across(-c(id, Plan_name),~ifelse(.==\"\", NA, .))) %>% \n group_by(id) %>% \n arrange(Section.B, .by_group = TRUE) %>% \n slice(1)\n\n id Plan_name Section.B Section.C\n <chr> <chr> <chr> <chr> \n1 2121421 montpellier NA NA \n2 21354315 angers MH Therapy \n3 3235453 strasbourg MH AMRS \n\n" ]
[ 2, 2 ]
[]
[]
[ "dataframe", "duplicates", "r" ]
stackoverflow_0074661025_dataframe_duplicates_r.txt
Q: Powershell Flatten dir recursively Found a couple of semi-related links: How to merge / 'flatten' a folder structure using PowerShell - recursive but my ask is that I have a root dir P:/files which has several layers of subdirectories, and I'd like to flatten all of them so that all the files are moved just to the root of P:/files. I don't need to be concerned with duplicates, as I'll make sure there are none well before this stage. It looks like I can use PowerShell to get a list of all the files no matter the level, and then just for-each over them and do a move? Get-ChildItem -LiteralPath P:\files -Directory | Get-ChildItem -Recurse -File Any help with the loop? A: A single recursive Get-ChildItem that pipes to Move-Item should do: Get-ChildItem -File -Recurse -LiteralPath P:\files | Move-Item -Destination $yourDestination -WhatIf Note: The -WhatIf common parameter in the command above previews the operation. Remove -WhatIf once you're sure the operation will do what you want. If you need to exclude files directly located in P:\files: Get-ChildItem -Directory -Recurse -LiteralPath P:\files | Get-ChildItem -File | Move-Item -Destination $yourDestination -WhatIf Note the use of -Directory and -Recurse first, so that all subdirectories in the entire subtree are returned, followed by a non-recursive Get-ChildItem -File call that gets each subdirectory's immediate files only.
Powershell Flatten dir recursively
Found a couple of semi-related links: How to merge / 'flatten' a folder structure using PowerShell - recursive but my ask is that I have a root dir P:/files which has several layers of subdirectories, and I'd like to flatten all of them so that all the files are moved just to the root of P:/files. I don't need to be concerned with duplicates, as I'll make sure there are none well before this stage. It looks like I can use PowerShell to get a list of all the files no matter the level, and then just for-each over them and do a move? Get-ChildItem -LiteralPath P:\files -Directory | Get-ChildItem -Recurse -File Any help with the loop?
[ "\nA single recursive Get-ChildItem that pipes to Move-Item should do:\nGet-ChildItem -File -Recurse -LiteralPath P:\\files |\n Move-Item -Destination $yourDestination -WhatIf\n\nNote: The -WhatIf common parameter in the command above previews the operation. Remove -WhatIf once you're sure the operation will do what you want.\nIf you need to exclude files directly located in P:\\files:\nGet-ChildItem -Directory -Recurse -LiteralPath P:\\files | Get-ChildItem -File |\n Move-Item -Destination $yourDestination -WhatIf\n\nNote the use of -Directory and -Recurse first, so that all subdirectories in the entire subtree are returned, followed by a non-recursive Get-ChildItem -File call that gets each subdirectory's immediate files only.\n" ]
[ 1 ]
[]
[]
[ "powershell" ]
stackoverflow_0074661134_powershell.txt
Q: Problem sending POST request with big body size to Plumber API endpoint in R on Redhat 7.5 I am trying to send a table of about 140 rows and 5 columns as a JSON object (around 20 KB in size) from VBA using MSXML2.ServerXMLHTTP in a body of a POST request to an endpoint made available from R using plumber API package. The endpoint/function running in R on the server is throwing the following error: simpleError in fromJSON(requestList): argument "requestList" is missing, with no default requestList is the parameter passed to the endpoint function. It looks like it gets lost in the web call. If I reduce the table size to 30 rows instead of 140 rows, requestList is found and the request is served successfully. My platform is as follows: 1. Endpoints are written in R and exposed using Plumber API. 2. Endpoints are running on AWS instance with Redhat 7.5. 3. Timeout for the request is set to 100 minutes on VBA (client side). A: If fromJSON(requestList) is: working when it has 30 rows raising an error of type argument "requestList" is missing, with no default when having 140 rows ... considering that JSON bodies have no size limits (and even if they had, for sure it wouldn't be 20 KB), I would say that the issue is in the data contained in the rows 31-140. There must be some special character which goes through fine at serialization on VBA client side (i.e. the data are correctly serialized because VBA tolerates that special character) but when deserializing on server side, this special character breaks the request as if the input wasn't actually an input. My suggestion to troubleshoot would be to chunk your request in blocks of 30 (1-30, 31-60, 61-90 etc.) until when you find the guilty chunk, and then going by bisection on that chunk until you detect the special character breaking it. A: Increasing the request size limit for Plumber helped me: options_plumber(maxRequestSize = 10 * 1024 * 1024) (or more) At least for prototype work this is great. Once you're optimizing things for production, you should improve your communications to send only what is required.
Problem sending POST request with big body size to Plumber API endpoint in R on Redhat 7.5
I am trying to send a table of about 140 rows and 5 columns as a JSON object (around 20 KB in size) from VBA using MSXML2.ServerXMLHTTP in a body of a POST request to an endpoint made available from R using plumber API package. The endpoint/function running in R on the server is throwing the following error: simpleError in fromJSON(requestList): argument "requestList" is missing, with no default requestList is the parameter passed to the endpoint function. It looks like it gets lost in the web call. If I reduce the table size to 30 rows instead of 140 rows, requestList is found and the request is served successfully. My platform is as follows: 1. Endpoints are written in R and exposed using Plumber API. 2. Endpoints are running on AWS instance with Redhat 7.5. 3. Timeout for the request is set to 100 minutes on VBA (client side).
[ "If fromJSON(requestList) is:\n\nworking when it has 30 rows\nraising an error of type argument \"requestList\" is missing, with no default when having 140 rows\n\n... considering that JSON bodies have no size limits (and even if they had, for sure it wouldn't be 20 KB), I would say that the issue is in the data contained in the rows 31-140.\nThere must be some special character which goes through fine at serialization on VBA client side (i.e. the data are correctly serialized because VBA tolerates that special character) but when deserializing on server side, this special character breaks the request as if the input wasn't actually an input.\nMy suggestion to troubleshoot would be to chunk your request in blocks of 30 (1-30, 31-60, 61-90 etc.) until when you find the guilty chunk, and then going by bisection on that chunk until you detect the special character breaking it.\n", "Increasing the request size limit for Plumber helped me:\noptions_plumber(maxRequestSize = 10 * 1024 * 1024) (or more)\nAt least for prototype work this is great. Once you're optimizing things for production, you should improve your communications to send only what is required.\n" ]
[ 1, 0 ]
[]
[]
[ "plumber", "post", "r", "redhat", "vba" ]
stackoverflow_0054354826_plumber_post_r_redhat_vba.txt
Q: How to call one intent from another intent in lambda (nodejs) I have a lex bot which triggers lambda where slot conditions are checked(eg:phone number should be 10 digit) and it returns a closing response of text. function closeresponse(intent_request, session_attributes, fulfillment_state, message) { return { "sessionState": { "sessionAttributes": session_attributes, "dialogAction": { "type": "Close" }, "intent": { 'name': intent_request[ENTITY.sessionState][ENTITY.intent][ENTITY.name], 'state': fulfillment_state } }, "messages": [message], "sessionId": intent_request["sessionId"], "requestAttributes": intent_request[ENTITY.requestAttributes] ? intent_request[ENTITY.requestAttributes] : {} } } after closing response i am not able trigger any function i need to trigger another intent which has yes or no response card in same lambda function A: To trigger another intent in the same Lambda function, you can use the delegate dialog action type. This dialog action type allows you to pass control to the built-in Amazon Lex NLU (natural language understanding) engine, which will then evaluate the user's input and determine which intent to trigger next. Here's an example of how you could use the delegate dialog action type to trigger another intent in your Lambda function: function closeresponse(intent_request, session_attributes, fulfillment_state, message) { return { "sessionState": { "sessionAttributes": session_attributes, "dialogAction": { "type": "Close", "fulfillmentState": fulfillment_state, "message": message, "responseCard": { // The response card you want to display to the user } } }, "followUpAction": { "type": "Delegate" } } } In this example, the closeresponse function returns a response object with a Close dialog action and a Delegate follow-up action. The Close dialog action will close the current dialog and display the specified message and response card to the user. The Delegate follow-up action will then pass control to the Amazon Lex NLU engine, which will evaluate the user's input and determine which intent to trigger next. You can also specify the slots and intentName properties in the Delegate follow-up action if you want to provide additional information to the Amazon Lex NLU engine. For example, you could use the slots property to provide the current slot values, and the intentName property to specify the name of the next intent you want to trigger. Here's an example of how you could specify the slots and intentName properties in the Delegate follow-up action: function closeresponse(intent_request, session_attributes, fulfillment_state, message) { return { "sessionState": { "sessionAttributes": session_attributes, "dialogAction": { "type": "Close", "fulfillmentState": fulfillment_state, "message": message, "responseCard": { // The response card you want to display to the user } } }, "followUpAction": { "type": "Delegate", "slots": { // The current slot values }, "intentName": "NextIntent" } } } In this example, the Delegate follow-up action specifies the slots and intentName properties. The slots property is used to provide the current slot values to the Amazon Lex NLU engine, and the intentName property is used to specify the name of the next intent you want to trigger. When the Lambda function returns this response, the Amazon Lex NLU engine will use the provided slot values and intent name to determine the next action to take. I hope this helps!
How to call one intent from another intent in lambda (nodejs)
I have a lex bot which triggers lambda where slot conditions are checked(eg:phone number should be 10 digit) and it returns a closing response of text. function closeresponse(intent_request, session_attributes, fulfillment_state, message) { return { "sessionState": { "sessionAttributes": session_attributes, "dialogAction": { "type": "Close" }, "intent": { 'name': intent_request[ENTITY.sessionState][ENTITY.intent][ENTITY.name], 'state': fulfillment_state } }, "messages": [message], "sessionId": intent_request["sessionId"], "requestAttributes": intent_request[ENTITY.requestAttributes] ? intent_request[ENTITY.requestAttributes] : {} } } after closing response i am not able trigger any function i need to trigger another intent which has yes or no response card in same lambda function
[ "To trigger another intent in the same Lambda function, you can use the delegate dialog action type. This dialog action type allows you to pass control to the built-in Amazon Lex NLU (natural language understanding) engine, which will then evaluate the user's input and determine which intent to trigger next.\nHere's an example of how you could use the delegate dialog action type to trigger another intent in your Lambda function:\nfunction closeresponse(intent_request, session_attributes, fulfillment_state, message) {\n return {\n \"sessionState\": {\n \"sessionAttributes\": session_attributes,\n \"dialogAction\": {\n \"type\": \"Close\",\n \"fulfillmentState\": fulfillment_state,\n \"message\": message,\n \"responseCard\": {\n // The response card you want to display to the user\n }\n }\n },\n \"followUpAction\": {\n \"type\": \"Delegate\"\n }\n }\n}\n\nIn this example, the closeresponse function returns a response object with a Close dialog action and a Delegate follow-up action. The Close dialog action will close the current dialog and display the specified message and response card to the user. The Delegate follow-up action will then pass control to the Amazon Lex NLU engine, which will evaluate the user's input and determine which intent to trigger next.\nYou can also specify the slots and intentName properties in the Delegate follow-up action if you want to provide additional information to the Amazon Lex NLU engine. For example, you could use the slots property to provide the current slot values, and the intentName property to specify the name of the next intent you want to trigger.\nHere's an example of how you could specify the slots and intentName properties in the Delegate follow-up action:\nfunction closeresponse(intent_request, session_attributes, fulfillment_state, message) {\n return {\n \"sessionState\": {\n \"sessionAttributes\": session_attributes,\n \"dialogAction\": {\n \"type\": \"Close\",\n \"fulfillmentState\": fulfillment_state,\n \"message\": message,\n \"responseCard\": {\n // The response card you want to display to the user\n }\n }\n },\n \"followUpAction\": {\n \"type\": \"Delegate\",\n \"slots\": {\n // The current slot values\n },\n \"intentName\": \"NextIntent\"\n }\n }\n}\n\nIn this example, the Delegate follow-up action specifies the slots and intentName properties. The slots property is used to provide the current slot values to the Amazon Lex NLU engine, and the intentName property is used to specify the name of the next intent you want to trigger. When the Lambda function returns this response, the Amazon Lex NLU engine will use the provided slot values and intent name to determine the next action to take.\nI hope this helps!\n" ]
[ 0 ]
[]
[]
[ "amazon_lex", "amazon_web_services", "aws_lambda", "node.js" ]
stackoverflow_0074660740_amazon_lex_amazon_web_services_aws_lambda_node.js.txt
Q: Pandas JSON Normalize multiple columns in a dataframe So I have the following dataframe: The JSON blobs all look something like this: {"id":"dddd1", "random_number":"77777"} What I want my dataframe to look like is something like this: Basically what I need is to get a way to iterate and normalize all the JSON blob columns and put them back in the dataframe in the proper rows (0-99). I have tried the following: pd.json_normalize(data_frame.iloc[:, JSON_0,JSON_99]) I get the following error: IndexingError: Too many indexers I could go through and normalize each JSON_BLOB column individually however that is inefficient, I cant think of a proper way to do this via a Lambda function or for loop because of the JSON blob. The for loop I wrote gives me the same error: array=[] for app in data_frame.iloc[:, JSON_0,JSON_99]: data = { 'id': data['id'] } array.append(data) test= pd.DataFrame(array) IndexingError: Too many indexers Also some of the JSON_Blobs have NAN values Any suggestions would be great. A: Can you try this: normalized = pd.concat([df[i].apply(pd.Series) for i in df.iloc[:,2:]],axis=1) #2 is the position number of JSON_0. final = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1) if you want the column names as in the question: normalized = pd.concat([df[i].apply(pd.Series).rename(columns={'id':'id_from_{}'.format(i),'random_number':'random_number_from_{}'.format(i)}) for i in df.iloc[:,2:]],axis=1) final = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1)
Pandas JSON Normalize multiple columns in a dataframe
So I have the following dataframe: The JSON blobs all look something like this: {"id":"dddd1", "random_number":"77777"} What I want my dataframe to look like is something like this: Basically what I need is to get a way to iterate and normalize all the JSON blob columns and put them back in the dataframe in the proper rows (0-99). I have tried the following: pd.json_normalize(data_frame.iloc[:, JSON_0,JSON_99]) I get the following error: IndexingError: Too many indexers I could go through and normalize each JSON_BLOB column individually however that is inefficient, I cant think of a proper way to do this via a Lambda function or for loop because of the JSON blob. The for loop I wrote gives me the same error: array=[] for app in data_frame.iloc[:, JSON_0,JSON_99]: data = { 'id': data['id'] } array.append(data) test= pd.DataFrame(array) IndexingError: Too many indexers Also some of the JSON_Blobs have NAN values Any suggestions would be great.
[ "Can you try this:\nnormalized = pd.concat([df[i].apply(pd.Series) for i in df.iloc[:,2:]],axis=1) #2 is the position number of JSON_0.\nfinal = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1)\n\nif you want the column names as in the question:\nnormalized = pd.concat([df[i].apply(pd.Series).rename(columns={'id':'id_from_{}'.format(i),'random_number':'random_number_from_{}'.format(i)}) for i in df.iloc[:,2:]],axis=1)\nfinal = pd.concat([df[['Root_id_PK','random_number']],normalized],axis=1)\n\n" ]
[ 1 ]
[]
[]
[ "json", "pandas", "python" ]
stackoverflow_0074660865_json_pandas_python.txt
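A sketch of one way to expand several JSON-blob columns in a single pass, since the question also mentions NaN values: non-null cells are parsed with json.loads, NaN or empty cells become empty dicts, and each blob column is normalized and suffixed with its source column name. The column names (Root_id_PK, JSON_0, JSON_1, ...) are taken from the screenshots, so treat them as assumptions.

import json
import pandas as pd

# Small stand-in for the real dataframe; the JSON_* column names are assumptions.
df = pd.DataFrame({
    "Root_id_PK": ["a1", "a2"],
    "JSON_0": ['{"id": "dddd1", "random_number": "77777"}', None],
    "JSON_1": ['{"id": "eeee2", "random_number": "88888"}',
               '{"id": "ffff3", "random_number": "99999"}'],
})

def parse_cell(value):
    if isinstance(value, dict):
        return value
    if isinstance(value, str) and value.strip():
        return json.loads(value)
    return {}  # NaN / empty cells survive as empty dicts

def expand_blob_column(frame, col):
    parsed = frame[col].apply(parse_cell)
    wide = pd.json_normalize(parsed.tolist())
    wide.index = frame.index                # keep row alignment for concat
    return wide.add_suffix(f"_from_{col}")  # id_from_JSON_0, random_number_from_JSON_0, ...

blob_cols = [c for c in df.columns if c.startswith("JSON_")]
expanded = pd.concat(
    [df.drop(columns=blob_cols)] + [expand_blob_column(df, c) for c in blob_cols],
    axis=1,
)
print(expanded)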
Q: Watchman crawl failed. Retrying once with node crawler Watchman crawl failed. Retrying once with node crawler. Usually this happens when watchman isn't running. Create an empty .watchmanconfig file in your project's root folder or initialize a git or hg repository in your project. Error: watchman --no-pretty get-sockname returned with exit code=1, signal=null, stderr= 2018-03-23T11:33:13,360: [0x7fff9755f3c0] the owner of /usr/local/var/run/watchman/root-state is uid 501 and doesn't match your euid 0 A: April 2022 solution (testing with jest): Step 1: watchman watch-del-all Step 2: watchman shutdown-server A: You're running watchman as root but the state dir, which may contain trigger definitions and thus allow spawning arbitrary commands, is not owned by root. This is a security issue and thus watchman is refusing to start. The safest way to resolve this is to remove the state dir by running: rm -rf /usr/local/var/run/watchman/root-state I'd recommend that you avoid running tools that wish to use watchman using sudo to avoid this happening again. A: As Jodie suggested above I tried the below and it worked well, for the benefit of others mentioning below steps which I tried in my mac to fix this issue First, Kill all the server running and close your terminal. Go to 'System preferences' -> 'Security & Privacy' -> privacy tab Scroll down and click 'Full Disk Access' Make sure you checked on 'Terminal' and 'Watchman'. Now relaunch terminal and simply try running again it works!! A: I had a real issue with this one but finally found the answer. Here's a screenshot of the post that helped me. https://github.com/facebook/watchman/issues/751#issuecomment-542300670 The whole forum has multiple different solutions which I hadn't actually tried, but this one is the solution that worked for me! Hope this helps. A: -June 8 2022 Giving Full Disk Access to all terminals or where you're getting started your server, is fixed the error. Also, it would be good to give access (Files and Folders) to VSC. Here are the steps to do it! Open System Preferences Find Security & Privacy option and open it Give Full Disk Access to your terminals, Xcode and VSC. Happy Hacking!!! A: I solved this, on linux by using the following commands on terminal. $ echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances $ echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events $ echo 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches $ pkill node Then: $ npm start or $ expo start (if you are using expo) A: Use sudo command to run watchman. sudo npm run test This problem arose because you might be running watchman as root. A: @jaishankar's post worked for me.. thakns Jai... First, Kill all the server running and close your terminal. Go to 'System preferences' -> 'Security & Privacy' -> privacy tab Scroll down and click 'Full Disk Access' Make sure you checked on 'Terminal' and 'Watchman'. Now relaunch terminal and simply try running again it works!! A: watchman watch-del-all && rm -f yarn.lock && rm -rf node_modules && yarn && yarn start -- --reset-cache A: Step 1: $ npm cache clean --force Step 2: Delete node_modules: $ rm -rf node_modules Step 3: npm install Step 4? (Optional): yarn start / npm start This worked for me. Hopes it works for you too. A: On a Mac, remove all watches and associated triggers of running process, then shutdown the service. See screenshot below: A: Put your project in a shared folder (ie, Macintosh HD/Users/Shared. 
I kept getting operation denied on the Desktop, because of further protection policies, even though Full Disk Access was granted. A: To solve this issue on my end, I had to stop the other node instance running on my other terminal. Just make sure you don't have another node running on your machine. A: Check for .watchmanconfig and add this {}. Inside the file .watchmanconfig {} Simple as that, just try it.
Watchman crawl failed. Retrying once with node crawler
Watchman crawl failed. Retrying once with node crawler. Usually this happens when watchman isn't running. Create an empty .watchmanconfig file in your project's root folder or initialize a git or hg repository in your project. Error: watchman --no-pretty get-sockname returned with exit code=1, signal=null, stderr= 2018-03-23T11:33:13,360: [0x7fff9755f3c0] the owner of /usr/local/var/run/watchman/root-state is uid 501 and doesn't match your euid 0
[ "April 2022 solution (testing with jest):\nStep 1:\nwatchman watch-del-all\n\nStep 2:\nwatchman shutdown-server\n\n", "You're running watchman as root but the state dir, which may contain trigger definitions and thus allow spawning arbitrary commands, is not owned by root. This is a security issue and thus watchman is refusing to start.\nThe safest way to resolve this is to remove the state dir by running:\nrm -rf /usr/local/var/run/watchman/root-state\nI'd recommend that you avoid running tools that wish to use watchman using sudo to avoid this happening again.\n", "As Jodie suggested above I tried the below and it worked well, for the benefit of others mentioning below steps which I tried in my mac to fix this issue\n\nFirst, Kill all the server running and close your terminal.\nGo to 'System preferences' -> 'Security & Privacy' -> privacy tab\nScroll down and click 'Full Disk Access'\nMake sure you checked on 'Terminal' and 'Watchman'.\nNow relaunch terminal and simply try running again it works!!\n\n", "I had a real issue with this one but finally found the answer.\nHere's a screenshot of the post that helped me.\nhttps://github.com/facebook/watchman/issues/751#issuecomment-542300670\nThe whole forum has multiple different solutions which I hadn't actually tried, but this one is the solution that worked for me! Hope this helps.\n", "-June 8 2022\nGiving Full Disk Access to all terminals or where you're getting started your server, is fixed the error.\nAlso, it would be good to give access (Files and Folders) to VSC.\nHere are the steps to do it!\n\nOpen System Preferences\n\n\n\nFind Security & Privacy option and open it\n\n\n\nGive Full Disk Access to your terminals, Xcode and VSC.\n\n\nHappy Hacking!!!\n", "I solved this, on linux by using the following commands on terminal.\n$ echo 256 | sudo tee -a /proc/sys/fs/inotify/max_user_instances\n\n$ echo 32768 | sudo tee -a /proc/sys/fs/inotify/max_queued_events\n\n$ echo 65536 | sudo tee -a /proc/sys/fs/inotify/max_user_watches\n\n$ pkill node\n\nThen:\n$ npm start\n\nor\n$ expo start (if you are using expo)\n\n", "Use sudo command to run watchman.\nsudo npm run test\n\nThis problem arose because you might be running watchman as root.\n", "@jaishankar's post worked for me.. thakns Jai...\nFirst, Kill all the server running and close your terminal.\nGo to 'System preferences' -> 'Security & Privacy' -> privacy tab\nScroll down and click 'Full Disk Access'\nMake sure you checked on 'Terminal' and 'Watchman'.\nNow relaunch terminal and simply try running again it works!!\n", "watchman watch-del-all && rm -f yarn.lock && rm -rf node_modules && yarn && yarn start -- --reset-cache\n", "Step 1: $ npm cache clean --force\nStep 2: Delete node_modules: $ rm -rf node_modules\nStep 3: npm install\nStep 4? (Optional): yarn start / npm start\nThis worked for me. Hopes it works for you too.\n", "On a Mac, remove all watches and associated triggers of running process, then shutdown the service. See screenshot below:\n\n", "Put your project in a shared folder (ie, Macintosh HD/Users/Shared. I kept getting operation denied on the Desktop, because of further protection policies, even though Full Disk Access was granted.\n", "To solve this issue on my end, i had to stop the other node instance running on my other terminal. Just make sure you don't have another node running on your machine.\n", "check for .watchmanconfig and add this {}.\nInside the file .watchmanconfig\n{}\nSimple as that just try it.\n" ]
[ 69, 19, 10, 3, 3, 2, 1, 1, 1, 0, 0, 0, 0, 0 ]
[]
[]
[ "react_native", "watchman" ]
stackoverflow_0049443341_react_native_watchman.txt
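The highest-scored answers above all come down to the same thing: stop running watchman as root and reset its state. A minimal Python sketch of that reset follows; it only wraps the commands already quoted in the answers (watchman watch-del-all, watchman shutdown-server, and removing /usr/local/var/run/watchman/root-state), and the wrapper itself (the reset_watchman name and the euid check) is an illustrative assumption, not something taken from any answer.

import os
import shutil
import subprocess
import sys

# Path taken from the error message in the question.
ROOT_STATE_DIR = "/usr/local/var/run/watchman/root-state"

def reset_watchman():
    # The original error comes from running watchman with euid 0 (root),
    # so refuse to do anything if this script is launched via sudo.
    if os.geteuid() == 0:
        sys.exit("Run this as your normal user, not as root/sudo.")
    # Remove the stale state dir; if it is owned by another account this
    # may still need a manual rm -rf as described in the answers above.
    if os.path.isdir(ROOT_STATE_DIR):
        shutil.rmtree(ROOT_STATE_DIR)
    # Clear all watches and stop the server so the next run starts clean.
    subprocess.run(["watchman", "watch-del-all"], check=False)
    subprocess.run(["watchman", "shutdown-server"], check=False)

if __name__ == "__main__":
    reset_watchman()

After the reset, re-running the packager or test command as a normal (non-root) user should let watchman recreate its state with the right ownership.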
Q: Use Python to get value from element in XML file I'm writing a program in Python that looks at an XML file that I get from an API and should return a list of users' initials to a list for later use. My XML file looks like this with about 60 users: <ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials1</rep> </user> <user> <active>true</active> <datelastlogin>12/1/2022 3:31:25 PM</datelastlogin> <dept>5</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>4/8/2020 3:02:08 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials2</rep> </user> ... ... ... </ArrayOfuser> I'm trying to use an XML parser to return the text in the <rep> tag for each user to a list. I would also love to have it sorted by date of last login, but that's not something I need and I'll just alphabetize the list if sorting by date overcomplicates this process. The code below shows my attempt at just printing the data without saving it to a list, but the output is unexpected as shown below as well. Code I tried: #load file activeusers = etree.parse("activeusers.xml") #declare namespaces ns = {'xx': 'http://schemas.datacontract.org/2004/07/IQWebAPI.Users'} #locate rep tag and print (saving to list once printing shows expected output) targets = activeusers.xpath('//xx:user[xx:rep]',namespaces=ns) for target in targets: print(target.attrib) Output: {} {} I'm expecting the output to look like the below codeblock. Once it looks something like that I should be able to change the print statement to instead save to a list. {userinitials1} {userinitials2} I think my issue comes from what's inside my print statement with printing the attribute. I tried this with variations of target.getparent() with keys(), items(), and get() as well and they all seem to show the same empty output when printed. EDIT: I found a post from someone with a similar problem that had been solved and the solution was to use this code but I changed filenames to suit my need: root = (etree.parse("activeusers.xml")) values = [s.find('rep').text for s in root.findall('.//user') if s.find('rep') is not None] print(values) Again, the expected output was a populated list but when printed the list is empty. I think now my issue may have to do with the fact that my document contains namespaces. For my use, I may just delete them since I don't think these will end up being required so please correct me if namespaces are more important than I realize. SECOND EDIT: I also realized the API can send me this data in a JSON format and not just XML so that file would look like the below codeblock. Any solution that can append the text in the "rep" child of each user to a list in JSON format or XML is perfect and would be greatly appreciated since once I have this list, I will not need to use the XML or JSON file for any other use. 
[ { "active": true, "datelastlogin": "8/21/2019 9:16:30 PM", "dept": 3, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "2/6/2019 11:10:29 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials1" }, { "active": true, "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials2" } ] A: As this is xml with namespace, you can have like import xml.etree.ElementTree as ET root = ET.fromstring(xml_in_qes) my_ns = {'root': 'WebsiteWhereDataComesFrom.com'} myUser=[] for eachUser in root.findall('root:user',my_ns): rep=eachUser.find("root:rep",my_ns) print(rep.text) myUser.append(rep.text) note: xml_in_qes is the XML attached in this question. ('root:user',my_ns): search user in my_ns which has key root i.e WebsiteWhereDataComesFrom.com A: XML data implementation: import xml.etree.ElementTree as ET xmlstring = ''' <ArrayOfuser> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials1</rep> </user> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials2</rep> </user> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials3</rep> </user> </ArrayOfuser> ''' user_array = ET.fromstring(xmlstring) replist = [] for users in user_array.findall('user'): replist.append((users.find('rep').text)) print(replist) Output: ['userinitials1', 'userinitials2', 'userinitials3'] JSON data implementation: userlist = [ { "active": "true", "datelastlogin": "8/21/2019 9:16:30 PM", "dept": 3, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "2/6/2019 11:10:29 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials1" }, { "active": "true", "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials2" }, { "active": "true", "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": 
"lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials3" } ] replist = [] for user in userlist: replist.append(user["rep"]) print(replist) Output: ['userinitials1', 'userinitials2', 'userinitials3'] A: If you like a sorted tabel of users who have last logged on you can put the parsed values into pandas: import xml.etree.ElementTree as ET import pandas as pd tree = ET.parse("activeusers.xml") root = tree.getroot() namespaces = {"xmlns":"WebsiteWhereDataComesFrom.com" , "xmlns:i":"http://www.w3.org/2001/XMLSchema-instance"} columns =["rep", "datelastlogin"] login = [] usr = [] for user in root.findall("xmlns:user", namespaces): for lastlog in user.findall("xmlns:datelastlogin", namespaces): login.append(lastlog.text) for activ in user.findall("xmlns:rep", namespaces): usr.append(activ.text) data = list(zip(usr, login)) df = pd.DataFrame(data, columns=columns) df["datelastlogin"] = df["datelastlogin"].astype('datetime64[ns]') df = df.sort_values(by='datelastlogin', ascending = False) print(df.to_string()) Output: rep datelastlogin 1 userinitials2 2022-12-01 15:31:25 0 userinitials1 2019-08-21 21:16:30
Use Python to get value from element in XML file
I'm writing a program in Python that looks at an XML file that I get from an API and should return a list of users' initials to a list for later use. My XML file looks like this with about 60 users: <ArrayOfuser xmlns="WebsiteWhereDataComesFrom.com" xmlns:i="http://www.w3.org/2001/XMLSchema-instance"> <user> <active>true</active> <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin> <dept>3</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>2/6/2019 11:10:29 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials1</rep> </user> <user> <active>true</active> <datelastlogin>12/1/2022 3:31:25 PM</datelastlogin> <dept>5</dept> <email>useremail</email> <firstname>userfirstname</firstname> <lastname>userlastname</lastname> <lastupdated>4/8/2020 3:02:08 PM</lastupdated> <lastupdatedby>lastupdateduserinitials</lastupdatedby> <loginemail>userloginemail</loginemail> <phone1>userphone</phone1> <phone2/> <rep>userinitials2</rep> </user> ... ... ... </ArrayOfuser> I'm trying to use an XML parser to return the text in the <rep> tag for each user to a list. I would also love to have it sorted by date of last login, but that's not something I need and I'll just alphabetize the list if sorting by date overcomplicates this process. The code below shows my attempt at just printing the data without saving it to a list, but the output is unexpected as shown below as well. Code I tried: #load file activeusers = etree.parse("activeusers.xml") #declare namespaces ns = {'xx': 'http://schemas.datacontract.org/2004/07/IQWebAPI.Users'} #locate rep tag and print (saving to list once printing shows expected output) targets = activeusers.xpath('//xx:user[xx:rep]',namespaces=ns) for target in targets: print(target.attrib) Output: {} {} I'm expecting the output to look like the below codeblock. Once it looks something like that I should be able to change the print statement to instead save to a list. {userinitials1} {userinitials2} I think my issue comes from what's inside my print statement with printing the attribute. I tried this with variations of target.getparent() with keys(), items(), and get() as well and they all seem to show the same empty output when printed. EDIT: I found a post from someone with a similar problem that had been solved and the solution was to use this code but I changed filenames to suit my need: root = (etree.parse("activeusers.xml")) values = [s.find('rep').text for s in root.findall('.//user') if s.find('rep') is not None] print(values) Again, the expected output was a populated list but when printed the list is empty. I think now my issue may have to do with the fact that my document contains namespaces. For my use, I may just delete them since I don't think these will end up being required so please correct me if namespaces are more important than I realize. SECOND EDIT: I also realized the API can send me this data in a JSON format and not just XML so that file would look like the below codeblock. Any solution that can append the text in the "rep" child of each user to a list in JSON format or XML is perfect and would be greatly appreciated since once I have this list, I will not need to use the XML or JSON file for any other use. 
[ { "active": true, "datelastlogin": "8/21/2019 9:16:30 PM", "dept": 3, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "2/6/2019 11:10:29 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials1" }, { "active": true, "datelastlogin": "12/1/2022 3:31:25 PM", "dept": 5, "email": "useremail", "firstname": "userfirstname", "lastname": "userlastname", "lastupdated": "4/8/2020 3:02:08 PM", "lastupdatedby": "lastupdateduserinitials", "loginemail": "userloginemail", "phone1": "userphone", "phone2": "", "rep": "userinitials2" } ]
[ "As this is xml with namespace, you can have like\nimport xml.etree.ElementTree as ET\nroot = ET.fromstring(xml_in_qes)\nmy_ns = {'root': 'WebsiteWhereDataComesFrom.com'}\nmyUser=[]\nfor eachUser in root.findall('root:user',my_ns):\n rep=eachUser.find(\"root:rep\",my_ns)\n print(rep.text)\n myUser.append(rep.text)\n\nnote: xml_in_qes is the XML attached in this question.\n('root:user',my_ns): search user in my_ns which has key root i.e WebsiteWhereDataComesFrom.com\n", "XML data implementation:\nimport xml.etree.ElementTree as ET\nxmlstring = '''\n<ArrayOfuser>\n <user>\n <active>true</active>\n <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin>\n <dept>3</dept>\n <email>useremail</email>\n <firstname>userfirstname</firstname>\n <lastname>userlastname</lastname>\n <lastupdated>2/6/2019 11:10:29 PM</lastupdated>\n <lastupdatedby>lastupdateduserinitials</lastupdatedby>\n <loginemail>userloginemail</loginemail>\n <phone1>userphone</phone1>\n <phone2/>\n <rep>userinitials1</rep>\n </user>\n <user>\n <active>true</active>\n <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin>\n <dept>3</dept>\n <email>useremail</email>\n <firstname>userfirstname</firstname>\n <lastname>userlastname</lastname>\n <lastupdated>2/6/2019 11:10:29 PM</lastupdated>\n <lastupdatedby>lastupdateduserinitials</lastupdatedby>\n <loginemail>userloginemail</loginemail>\n <phone1>userphone</phone1>\n <phone2/>\n <rep>userinitials2</rep>\n </user>\n <user>\n <active>true</active>\n <datelastlogin>8/21/2019 9:16:30 PM</datelastlogin>\n <dept>3</dept>\n <email>useremail</email>\n <firstname>userfirstname</firstname>\n <lastname>userlastname</lastname>\n <lastupdated>2/6/2019 11:10:29 PM</lastupdated>\n <lastupdatedby>lastupdateduserinitials</lastupdatedby>\n <loginemail>userloginemail</loginemail>\n <phone1>userphone</phone1>\n <phone2/>\n <rep>userinitials3</rep>\n </user>\n</ArrayOfuser>\n'''\n\nuser_array = ET.fromstring(xmlstring)\n\nreplist = []\nfor users in user_array.findall('user'):\n replist.append((users.find('rep').text))\n\nprint(replist)\n\nOutput:\n['userinitials1', 'userinitials2', 'userinitials3']\n\nJSON data implementation:\nuserlist = [\n {\n \"active\": \"true\",\n \"datelastlogin\": \"8/21/2019 9:16:30 PM\",\n \"dept\": 3,\n \"email\": \"useremail\",\n \"firstname\": \"userfirstname\",\n \"lastname\": \"userlastname\",\n \"lastupdated\": \"2/6/2019 11:10:29 PM\",\n \"lastupdatedby\": \"lastupdateduserinitials\",\n \"loginemail\": \"userloginemail\",\n \"phone1\": \"userphone\",\n \"phone2\": \"\",\n \"rep\": \"userinitials1\"\n },\n {\n \"active\": \"true\",\n \"datelastlogin\": \"12/1/2022 3:31:25 PM\",\n \"dept\": 5,\n \"email\": \"useremail\",\n \"firstname\": \"userfirstname\",\n \"lastname\": \"userlastname\",\n \"lastupdated\": \"4/8/2020 3:02:08 PM\",\n \"lastupdatedby\": \"lastupdateduserinitials\",\n \"loginemail\": \"userloginemail\",\n \"phone1\": \"userphone\",\n \"phone2\": \"\",\n \"rep\": \"userinitials2\"\n },\n {\n \"active\": \"true\",\n \"datelastlogin\": \"12/1/2022 3:31:25 PM\",\n \"dept\": 5,\n \"email\": \"useremail\",\n \"firstname\": \"userfirstname\",\n \"lastname\": \"userlastname\",\n \"lastupdated\": \"4/8/2020 3:02:08 PM\",\n \"lastupdatedby\": \"lastupdateduserinitials\",\n \"loginemail\": \"userloginemail\",\n \"phone1\": \"userphone\",\n \"phone2\": \"\",\n \"rep\": \"userinitials3\"\n }\n]\n\nreplist = []\nfor user in userlist:\n replist.append(user[\"rep\"])\n\nprint(replist)\n\nOutput:\n['userinitials1', 'userinitials2', 'userinitials3']\n\n", "If you like a sorted 
tabel of users who have last logged on you can put the parsed values into pandas:\nimport xml.etree.ElementTree as ET\nimport pandas as pd\n\ntree = ET.parse(\"activeusers.xml\")\nroot = tree.getroot()\n\nnamespaces = {\"xmlns\":\"WebsiteWhereDataComesFrom.com\" , \"xmlns:i\":\"http://www.w3.org/2001/XMLSchema-instance\"}\n\ncolumns =[\"rep\", \"datelastlogin\"]\nlogin = []\nusr = []\nfor user in root.findall(\"xmlns:user\", namespaces):\n for lastlog in user.findall(\"xmlns:datelastlogin\", namespaces):\n login.append(lastlog.text)\n \n for activ in user.findall(\"xmlns:rep\", namespaces):\n usr.append(activ.text)\n \ndata = list(zip(usr, login))\n\n\ndf = pd.DataFrame(data, columns=columns)\ndf[\"datelastlogin\"] = df[\"datelastlogin\"].astype('datetime64[ns]')\ndf = df.sort_values(by='datelastlogin', ascending = False)\nprint(df.to_string())\n\nOutput:\n rep datelastlogin\n1 userinitials2 2022-12-01 15:31:25\n0 userinitials1 2019-08-21 21:16:30\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "json", "python", "xml" ]
stackoverflow_0074659126_json_python_xml.txt
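Tying the namespace handling from the answers above to the asker's original wish to sort by date of last login, a standard-library-only sketch might look like the following. The function name reps_by_last_login and the DATE_FMT constant are illustrative assumptions; the namespace URI, element names, timestamp format and the activeusers.xml filename are taken from the sample data in the question.

import xml.etree.ElementTree as ET
from datetime import datetime

# Default namespace declared on <ArrayOfuser> in the question's XML.
NS = {"d": "WebsiteWhereDataComesFrom.com"}
# Matches timestamps like "8/21/2019 9:16:30 PM" from the sample data.
DATE_FMT = "%m/%d/%Y %I:%M:%S %p"

def reps_by_last_login(path="activeusers.xml"):
    root = ET.parse(path).getroot()
    entries = []
    for user in root.findall("d:user", NS):
        rep = user.findtext("d:rep", default="", namespaces=NS)
        last_login = user.findtext("d:datelastlogin", default="", namespaces=NS)
        if rep and last_login:
            entries.append((datetime.strptime(last_login, DATE_FMT), rep))
    # Most recent login first; keep only the initials.
    return [rep for _, rep in sorted(entries, reverse=True)]

if __name__ == "__main__":
    print(reps_by_last_login())

With the two sample users in the question this would print ['userinitials2', 'userinitials1'], since userinitials2 logged in more recently.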