Dataset columns: content (string, 86 to 88.9k chars) | title (string, 0 to 150) | question (string, 1 to 35.8k) | answers (list) | answers_scores (list) | non_answers (list) | non_answers_scores (list) | tags (list) | name (string, 30 to 130)
Q:
Why does Python installed via Homebrew not include Tkinter
I've installed Python via Homebrew on my Mac.
brew install python
After that, I checked that my Python version is 2.7.11, then I tried to run
import Tkinter
I got the following error message:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/lib-tk/Tkinter.py", line 39, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
ImportError: No module named _tkinter
A:
I am running macOS Big Sur (11.2.3).
With python2, I have Tkinter built-in.
With python3, it has to be installed manually, and it's very simple; just run:
$ brew install python-tk
To run python2 in a terminal, execute python file.py.
To run python3 in a terminal, execute python3 file.py.
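After installing, a quick smoke test (my addition, not part of the original answer) is Python's built-in tkinter demo, which opens a small Tk window if everything is wired up:
$ python3 -m tkinter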
A:
Based on the comments above and the fact that Python must be linked to the Tcl/Tk framework:
If you don't have the Xcode command line tools, install them:
xcode-select --install
If you don't have a brew installation of Tcl/Tk (check brew list), install it:
brew install tcl-tk
Then run brew uninstall python if Python was not installed with the option --with-tcl-tk (the current official option), and install Python again, linking it to the brew-installed Tcl/Tk:
brew install python --with-tcl-tk
A:
UPDATE: Other answers have found workarounds, so this answer is now outdated.
12/18 Update: No longer possible for various reasons.
Below is now outdated. You'll have to install Python directly from python.org if you want to remove those warnings.
2018 Update
brew reinstall python --with-tcl-tk
Note: Homebrew now uses Python 3 by default - Homebrew Blog. Docs.
Testing
python should bring up system’s Python 2, python3 should bring up Python 3.
idle points to system Python/tcl-tk. It will show an outdated tcl-tk error (unless you brew install python@2 --with-tcl-tk)
idle3 should bring up Python 3 with no warnings.
Caveat
--with-tcl-tk will install python directly from python.org, which you'll see when you run brew info python.
More info here.
A:
With brew and python3 you have to install Tkinter separately.
Brew's message while installing python:
tkinter is no longer included with this formula, but it is available separately:
brew install [email protected]
A:
If you're using pyenv, you can try installing tcl-tk via Homebrew and then activating the environment variables mentioned in its caveats section, as detailed in this answer. Activating those environment variables prior to installing Python via Homebrew may work for you:
※ export PATH="/usr/local/opt/tcl-tk/bin:$PATH"
※ export LDFLAGS="-L/usr/local/opt/tcl-tk/lib"
※ export CPPFLAGS="-I/usr/local/opt/tcl-tk/include"
※ export PKG_CONFIG_PATH="/usr/local/opt/tcl-tk/lib/pkgconfig"
※ export PYTHON_CONFIGURE_OPTS="--with-tcltk-includes='-I$(brew --prefix tcl-tk)/include' \
--with-tcltk-libs='-L$(brew --prefix tcl-tk)/lib -ltcl8.6 -ltk8.6'"
※ brew reinstall python
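One caveat with the hard-coded /usr/local paths above (my note, not from the original answer): on Apple Silicon Macs Homebrew installs under /opt/homebrew, so it is safer to derive the prefix with brew --prefix, for example:
※ export PATH="$(brew --prefix tcl-tk)/bin:$PATH"
※ export LDFLAGS="-L$(brew --prefix tcl-tk)/lib"
※ export CPPFLAGS="-I$(brew --prefix tcl-tk)/include"
※ export PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig"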
A:
On macOS you must install Tcl separately:
You will find instructions and downloadables here: https://www.tcl.tk/software/tcltk/ and here: http://wiki.tcl.tk/1013
It requires a little bit of effort, but it is neither complicated nor difficult.
A:
It may be because you don't have the latest Xcode command line tools, so brew built Python from source instead of from a bottle. Try:
xcode-select --install
brew uninstall python
brew install python --use-brewed-tk
A:
It is a bit more complicated now. True, you still need the Xcode command line tools and Homebrew as a start, but the procedure changes constantly. Homebrew took out tcl-tk support long ago, and Apple still only supplies v8.5 of Tcl/Tk. Anyway, it is possible, and I personally maintain a GitHub gist to fix these issues.
The latest update uses Python 3.8.1 (it will probably be usable on later 3.8.x releases too); see here and just follow the steps outlined:
github gist link to install tcl-tk with python
A:
On macOS 11.13.1, using
brew install python
brew install python-tk
I can now select TkAgg in matplotlib, but when I use it in IPython I get an error message:
%pylab
matplotlib.use('tkagg')
plot([0,1])
results in
2021-05-07 21:51:02.954 Python[10773:71016] -[NSApplication macOSVersion]: unrecognized selector sent to instance 0x11779f8c0
2021-05-07 21:51:02.956 Python[10773:71016] *** Terminating app due to uncaught exception 'NSInvalidArgumentException', reason: '-[NSApplication macOSVersion]: unrecognized selector sent to instance 0x11779f8c0'
*** First throw call stack:
(
0 CoreFoundation 0x00000001a0d97db8 __exceptionPreprocess + 240
1 libobjc.A.dylib 0x00000001a0ac10a8 objc_exception_throw + 60
2 CoreFoundation 0x00000001a0e28ba0 -[NSObject(NSObject) __retain_OA] + 0
3 CoreFoundation 0x00000001a0cf91e4 ___forwarding___ + 1444
4 CoreFoundation 0x00000001a0cf8b80 _CF_forwarding_prep_0 + 96
5 libtk8.6.dylib 0x000000012754a844 GetRGBA + 308
6 libtk8.6.dylib 0x000000012754a208 SetCGColorComponents + 132
7 libtk8.6.dylib 0x000000012754a65c TkpGetColor + 572
8 libtk8.6.dylib 0x00000001274ac714 Tk_GetColor + 220
9 libtk8.6.dylib 0x000000012749fea0 Tk_Get3DBorder + 204
10 libtk8.6.dylib 0x000000012749fcac Tk_Alloc3DBorderFromObj + 144
11 libtk8.6.dylib 0x00000001274adadc DoObjConfig + 840
12 libtk8.6.dylib 0x00000001274ad690 Tk_InitOptions + 348
13 libtk8.6.dylib 0x00000001274ad58c Tk_InitOptions + 88
14 libtk8.6.dylib 0x00000001274d4cb4 CreateFrame + 1448
15 libtk8.6.dylib 0x00000001274d4fac TkListCreateFrame + 156
16 libtk8.6.dylib 0x00000001274cde80 Initialize + 1848
17 _tkinter.cpython-39-darwin.so 0x000000012059a31c Tcl_AppInit + 80
18 _tkinter.cpython-39-darwin.so 0x000000012059487c Tkapp_New + 592
19 _tkinter.cpython-39-darwin.so 0x000000012059410c _tkinter_create + 580
20 Python 0x00000001007150c4 cfunction_vectorcall_FASTCALL + 88
21 Python 0x00000001007bac4c call_function + 128
22 Python 0x00000001007b8640 _PyEval_EvalFrameDefault + 39844
23 Python 0x00000001007ada9c _PyEval_EvalCode + 444
24 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
25 Python 0x00000001006c58ac _PyObject_FastCallDictTstate + 208
26 Python 0x0000000100739bf4 slot_tp_init + 188
27 Python 0x000000010073f850 type_call + 300
28 Python 0x00000001006c5590 _PyObject_MakeTpCall + 132
29 Python 0x00000001007bacd8 call_function + 268
30 Python 0x00000001007b86e8 _PyEval_EvalFrameDefault + 40012
31 Python 0x00000001006c61fc _PyFunction_Vectorcall + 180
32 Python 0x00000001006c8c98 method_vectorcall + 124
33 Python 0x00000001007bac4c call_function + 128
34 Python 0x00000001007b8640 _PyEval_EvalFrameDefault + 39844
35 Python 0x00000001007ada9c _PyEval_EvalCode + 444
36 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
37 Python 0x00000001006c8c98 method_vectorcall + 124
38 Python 0x00000001006c5e40 PyVectorcall_Call + 184
39 Python 0x00000001007b880c _PyEval_EvalFrameDefault + 40304
40 Python 0x00000001007ada9c _PyEval_EvalCode + 444
41 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
42 Python 0x00000001006c5e40 PyVectorcall_Call + 184
43 Python 0x00000001007b880c _PyEval_EvalFrameDefault + 40304
44 Python 0x00000001007ada9c _PyEval_EvalCode + 444
45 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
46 Python 0x00000001007bac4c call_function + 128
47 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
48 Python 0x00000001006c61fc _PyFunction_Vectorcall + 180
49 Python 0x00000001007bac4c call_function + 128
50 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
51 Python 0x00000001007ada9c _PyEval_EvalCode + 444
52 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
53 Python 0x00000001007bac4c call_function + 128
54 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
55 Python 0x00000001007ada9c _PyEval_EvalCode + 444
56 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
57 Python 0x00000001007bac4c call_function + 128
58 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
59 Python 0x00000001007ada9c _PyEval_EvalCode + 444
60 Python 0x00000001007a86a0 builtin_exec + 356
61 Python 0x00000001007150c4 cfunction_vectorcall_FASTCALL + 88
62 Python 0x00000001007bac4c call_function + 128
63 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
64 Python 0x00000001006da678 gen_send_ex + 192
65 Python 0x00000001007b35b4 _PyEval_EvalFrameDefault + 19224
66 Python 0x00000001006da678 gen_send_ex + 192
67 Python 0x00000001007b35b4 _PyEval_EvalFrameDefault + 19224
68 Python 0x00000001006da678 gen_send_ex + 192
69 Python 0x00000001006d1cb0 method_vectorcall_O + 108
70 Python 0x00000001007bac4c call_function + 128
71 Python 0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
72 Python 0x00000001006c61fc _PyFunction_Vectorcall + 180
73 Python 0x00000001007bac4c call_function + 128
74 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
75 Python 0x00000001006c61fc _PyFunction_Vectorcall + 180
76 Python 0x00000001007bac4c call_function + 128
77 Python 0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
78 Python 0x00000001007ada9c _PyEval_EvalCode + 444
79 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
80 Python 0x00000001006c8c98 method_vectorcall + 124
81 Python 0x00000001007bac4c call_function + 128
82 Python 0x00000001007b86e8 _PyEval_EvalFrameDefault + 40012
83 Python 0x00000001007ada9c _PyEval_EvalCode + 444
84 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
85 Python 0x00000001007bac4c call_function + 128
86 Python 0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
87 Python 0x00000001007ada9c _PyEval_EvalCode + 444
88 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
89 Python 0x00000001007bac4c call_function + 128
90 Python 0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
91 Python 0x00000001006c61fc _PyFunction_Vectorcall + 180
92 Python 0x00000001007bac4c call_function + 128
93 Python 0x00000001007b85c4 _PyEval_EvalFrameDefault + 39720
94 Python 0x00000001007ada9c _PyEval_EvalCode + 444
95 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
96 Python 0x00000001006c8c98 method_vectorcall + 124
97 Python 0x00000001006c5e40 PyVectorcall_Call + 184
98 Python 0x00000001007b880c _PyEval_EvalFrameDefault + 40304
99 Python 0x00000001007ada9c _PyEval_EvalCode + 444
100 Python 0x00000001006c62b4 _PyFunction_Vectorcall + 364
101 Python 0x00000001007bac4c call_function + 128
102 Python 0x00000001007b8664 _PyEval_EvalFrameDefault + 39880
103 Python 0x00000001007ada9c _PyEval_EvalCode + 444
104 Python 0x0000000100805498 run_eval_code_obj + 136
105 Python 0x00000001008053ac run_mod + 112
106 Python 0x0000000100802be8 pyrun_file + 168
107 Python 0x000000010080250c pyrun_simple_file + 276
108 Python 0x00000001008023b8 PyRun_SimpleFileExFlags + 80
109 Python 0x0000000100822560 pymain_run_file + 320
110 Python 0x0000000100821b2c pymain_run_python + 412
111 Python 0x000000010082194c Py_RunMain + 24
112 Python 0x0000000100822f50 pymain_main + 36
113 Python 0x00000001008231c8 Py_BytesMain + 40
114 libdyld.dylib 0x00000001a0c38420 start + 4
)
libc++abi: terminating with uncaught exception of type NSException
Abort trap: 6
A:
This is what worked for me with macOS Monterey:
brew install [email protected]
or
brew install [email protected]
depending on which Python version you're running.
A:
Tkinter doesn't work on any OS that doesn't already have the Tcl/Tk toolkit installed. It is preinstalled in many Linux distributions and bundled with the Python installers downloaded from python.org for Windows and Linux (as a consequence, it is often wrongly assumed to be part of Python), but that's not the case for macOS. The official reasons for this are described in the appropriate document:
If you are using macOS 12 Monterey or later, you may see problems with file open and save dialogs when using IDLE or other tkinter-based applications. The most recent versions of python.org installers (for 3.10.0 and 3.9.8) have patched versions of Tk to avoid these problems. They should be fixed in an upcoming Tk 8.6.12 release.
If you are using a Python from any current python.org Python installer for macOS (3.10.0+ or 3.9.0+), no further action is needed to use IDLE or tkinter. A built-in version of Tcl/Tk 8.6 will be used.
If you are using macOS 10.6 or later, the Apple-supplied Tcl/Tk 8.5 has serious bugs that can cause application crashes. If you wish to use IDLE or Tkinter, do not use the Apple-supplied Pythons. Instead, install and use a newer version of Python from python.org or a third-party distributor that supplies or links with a newer version of Tcl/Tk.
Python's integrated development environment, IDLE, and the tkinter GUI toolkit it uses, depend on the Tk GUI toolkit which is not part of Python itself. For best results, it is important that the proper release of Tcl/Tk is installed on your machine. For recent Python installers for macOS downloadable from this website, here is a summary of current recommendations followed by more detailed information.
It has already been mentioned, but the most popular way to do it is:
$ brew install python-tk
It will work because the python-tk formula depends on two others: python and tcl-tk (hence you don't need to additionally run brew install python).
If you have already installed Python with Homebrew
$ brew install python
you can get tkinter with:
$ brew install tcl-tk
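To verify which Tcl/Tk your brewed Python actually links against (a check of my own, not from the answers above), print the Tk version from the interpreter:
$ python3 -c "import tkinter; print(tkinter.TkVersion)"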
Q:
Dynamically Edit PDF File
I have a PDF template containing some text; the PDF has a name repeated many times.
I want to write code that takes the {name} as input, dynamically changes all appearances of the name in the PDF to the value I entered, and then outputs the file after the changes.
I have tried to build a PDF from the beginning with Python, but I couldn't achieve the alignment I want.
A:
PyFPDF has a template designer for alignment etc.; however, it is not the same as Acrobat or other FDF editors, where a field can be copy-pasted multiple times as numbered increments. It uses a different CSV methodology, where you would need to add each repeated name as a line entry in the data file.
For the tutorial sample see https://pyfpdf.readthedocs.io/en/latest/Templates/index.html
However, there is the newer version 2, which uses much the same features, so it is unknown whether the designer is fully compatible, as it was dropped from that forked version. Perhaps compare the output from V1 with the inputs for V2.
More relevant, it has a newer methodology that may do what you require, using the more modular flextemplate (see the sketch below).
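To make the flextemplate suggestion concrete, here is a minimal sketch using fpdf2's FlexTemplate; the element names, coordinates, and fonts are illustrative assumptions, not values taken from the question:
from fpdf import FPDF, FlexTemplate

# One text element per occurrence of the repeated name; coordinates are made up.
elements = [
    {"name": "name_header", "type": "T", "x1": 20, "y1": 20, "x2": 180, "y2": 30,
     "font": "helvetica", "size": 14, "text": ""},
    {"name": "name_body", "type": "T", "x1": 20, "y1": 40, "x2": 180, "y2": 50,
     "font": "helvetica", "size": 11, "text": ""},
]

pdf = FPDF()
pdf.add_page()
template = FlexTemplate(pdf, elements)

person = input("Enter name: ")
# Fill every element that should show the name, then render onto the page.
template["name_header"] = person
template["name_body"] = f"Prepared for {person}"
template.render()
pdf.output("filled.pdf")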
Q:
React with TS, eslint error import/no-unresolved
I have the following ESLint issue:
Unable to resolve path to module 'lib-name' import/no-unresolved
The app is React with TypeScript.
On running ESLint I got a bunch of the above errors.
I tried a lot of different solutions and still have the same problem.
This is the .eslintrc.js file:
const sharedRules = {
// import
'import/prefer-default-export': 'off',
'import/no-extraneous-dependencies': 'off', // This is useful for lodash imports like "import get from lodash/get"
'import/extensions': [
'error',
'ignorePackages',
{
js: 'never',
jsx: 'never',
ts: 'never',
tsx: 'never',
},
],
// js
'curly': 2,
'no-use-before-define': 'off', // must disable base rule to use proper typescript-eslint rule
'camelcase': 'off', // This is unavoidable in data coming from the BE
'max-classes-per-file': 'off',
'no-octal': 'off',
'no-shadow': 'off',
'no-underscore-dangle': 'off',
// react
'react-hooks/exhaustive-deps': 'warn',
'react/jsx-filename-extension': 'off',
'react/jsx-one-expression-per-line': 'off',
'react/jsx-props-no-spreading': 'off',
'react/prop-types': 'off',
'react/no-unused-prop-types': 'off',
'react/require-default-props': 'off',
'react/react-in-jsx-scope': 'off', // https://reactjs.org/blog/2020/09/22/introducing-the-new-jsx-transform.html
// a11y
'jsx-a11y/anchor-is-valid': 'off',
'jsx-a11y/label-has-associated-control': 'off', // Just a hassle, you can nest inputs inside labels and it works fine. Check checkbox component for example.
'jsx-a11y/control-has-associated-label': 'off',
'jsx-a11y/no-noninteractive-tabindex': 'off',
'jsx-a11y/click-events-have-key-events': 'off',
'jsx-a11y/no-static-element-interactions': 'off',
'import/no-unresolved': [2, { caseSensitive: false }],
};
const typeScriptRules = {
'@typescript-eslint/no-explicit-any': 'warn',
'@typescript-eslint/ban-ts-comment': 'warn',
'@typescript-eslint/explicit-function-return-type': 'off',
'@typescript-eslint/no-use-before-define': ['error'],
'@typescript-eslint/no-non-null-assertion': 'off',
...sharedRules,
};
module.exports = {
env: {
browser: true,
es6: true,
jest: true,
},
extends: ['airbnb', 'airbnb/hooks', 'prettier'],
globals: {
createMockHttpRequest: true,
createStore: true,
mockAjaxRequest: true,
},
parser: '@typescript-eslint/parser',
parserOptions: {
ecmaFeatures: {
jsx: true,
},
ecmaVersion: 11,
sourceType: 'module',
},
plugins: ['@typescript-eslint', 'eslint-plugin-prettier'],
rules: { ...sharedRules },
overrides: [
{
extends: [
'airbnb',
'airbnb/hooks',
'plugin:@typescript-eslint/eslint-recommended',
'plugin:@typescript-eslint/recommended',
'prettier',
'prettier/@typescript-eslint',
'prettier/react',
],
files: ['**/*.ts', '**/*.tsx'],
rules: { ...typeScriptRules },
settings: {
'import/resolver': {
node: {
extensions: ['.js', '.jsx', '.ts', '.tsx'],
},
},
},
},
],
};
I also have the following .eslintrc.json file in the repo:
{
"root": true,
"plugins": ["@nrwl/nx", "react", "react-hooks"],
"overrides": [
{
"files": ["*.ts", "*.tsx", "*.js", "*.jsx"],
"rules": {
"@nrwl/nx/enforce-module-boundaries": [
"error",
{
"enforceBuildableLibDependency": true,
"allow": [],
"depConstraints": [
{
"sourceTag": "*",
"onlyDependOnLibsWithTags": ["*"]
}
]
}
]
}
},
{
"files": ["*.ts", "*.tsx"],
"extends": ["plugin:@nrwl/nx/typescript"],
"rules": {}
},
{
"files": ["*.js", "*.jsx"],
"extends": ["plugin:@nrwl/nx/javascript"],
"rules": {}
}
]
}
Any help is welcome!! :)
A:
I fixed this one on my Nx project by adding this to the .eslintrc.json at the root of the project:
"settings": {
"import/parsers": {
"@typescript-eslint/parser": [".ts", ".tsx"]
},
"import/resolver": {
"typescript": {
"project": ["tsconfig.base.json"]
},
"node": {
"project": ["tsconfig.base.json"]
}
}
}
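One addition to the snippet above (my note, not from the original answer): the "typescript" entry under import/resolver is provided by a separate package, which has to be installed for that setting to take effect:
npm install --save-dev eslint-import-resolver-typescript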
Q:
FlexLayout height not updating while content height is changed
I am using the following FlexLayout library to design my iOS app:
https://github.com/layoutBox/FlexLayout#api_documentation
I have a problem with it. I wrote some simple code like this:
//Basic Info Card
let card = CardView()
card.cornerRadius = 5
card.shadowOpacity = 0.5
card.backgroundColor = .white
flex.addItem(card).direction(.column).marginTop(16).padding(16).backgroundColor(.blue).define({flex in
basicInfoTextView.isEditable = true
basicInfoTextView.isScrollEnabled = false
flex.addItem(basicInfoTextView).backgroundColor(.red).width(100%).minHeight(20)
basicInfoTextView.delegate = self
})
// Delegate
func textView(_ textView: UITextView, shouldChangeTextIn range: NSRange, replacementText text: String) -> Bool {
if textView == basicInfoTextView {
print(basicInfoTextView.intrinsicContentSize.height)
basicInfoTextView.flex.height(basicInfoTextView.intrinsicContentSize.height)
basicInfoTextView.flex.markDirty()
basicInfoTextView.flex.layout(mode: .adjustHeight)
}
return true
}
The UITextView properly updates its height based on user input, but its container does not update; it stays at the UITextView's initial minHeight of 20. What is wrong here? Please help me, thank you.
A:
You're just missing a call to layoutSubviews() in shouldChangeTextIn. I recommend putting the logic inside the textViewDidChange delegate method instead, so you can have the updated height after the user types something in the text view. Here's a link to a working example gist
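For reference, a rough sketch of what this answer describes, assuming the card's flex container is reachable as rootFlexContainer (a name I'm assuming; the question never shows the container variable):
func textViewDidChange(_ textView: UITextView) {
    guard textView == basicInfoTextView else { return }
    // Invalidate the text view's cached measurement...
    basicInfoTextView.flex.markDirty()
    // ...then recompute the container's layout so it grows with the content.
    rootFlexContainer.flex.layout(mode: .adjustHeight)
}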
Q:
Spring R2DBC Postgres Row Level Security
I'm trying to implement Postgres Row Level Security on my app that uses R2DBC.
I found this AWS post that implements this but uses a non-reactive approach.
I'm having problems converting this to a reactive approach since I can't find a class equivalent to the AbstractRoutingDataSource:
public class TenantAwareDataSource extends AbstractRoutingDataSource {
private static final Logger LOGGER = LoggerFactory.getLogger(TenantAwareDataSource.class);
@Override
protected Object determineCurrentLookupKey() {
Object key = null;
// Pull the currently authenticated tenant from the security context
// of the HTTP request and use it as the key in the map that points
// to the connection pool (data source) for each tenant.
Authentication authentication = SecurityContextHolder.getContext().getAuthentication();
try {
if (!(authentication instanceof AnonymousAuthenticationToken)) {
Tenant currentTenant = (Tenant) authentication.getPrincipal();
key = currentTenant.getId();
}
} catch (Exception e) {
LOGGER.error("Failed to get current tenant for data source lookup", e);
throw new RuntimeException(e);
}
return key;
}
@Override
public Connection getConnection() throws SQLException {
// Every time the app asks the data source for a connection
// set the PostgreSQL session variable to the current tenant
// to enforce data isolation.
Connection connection = super.getConnection();
try (Statement sql = connection.createStatement()) {
LOGGER.info("Setting PostgreSQL session variable app.current_tenant = '{}' on {}", determineCurrentLookupKey().toString(), this);
sql.execute("SET SESSION app.current_tenant = '" + determineCurrentLookupKey().toString() + "'");
} catch (Exception e) {
LOGGER.error("Failed to execute: SET SESSION app.current_tenant = '{}'", determineCurrentLookupKey().toString(), e);
}
return connection;
}
@Override
public String toString() {
return determineTargetDataSource().toString();
}
}
What would be the R2DBC equivalent of AbstractRoutingDataSource?
Thanks
Full source code here.
A:
In Spring R2DBC, the closest equivalent of AbstractRoutingDataSource is AbstractRoutingConnectionFactory (from org.springframework.r2dbc.connection.lookup). It routes each request for a connection to one of several target ConnectionFactory instances, based on a lookup key that is itself determined reactively.
Here is an example of how you could use it to implement row-level security in an R2DBC application. Note that in a reactive app the security context comes from ReactiveSecurityContextHolder rather than the thread-local SecurityContextHolder:
public class TenantAwareConnectionFactory extends AbstractRoutingConnectionFactory {

    @Override
    protected Mono<Object> determineCurrentLookupKey() {
        // Pull the currently authenticated tenant from the reactive security
        // context and use it as the key in the map that points to the
        // ConnectionFactory for each tenant.
        return ReactiveSecurityContextHolder.getContext()
                .map(SecurityContext::getAuthentication)
                .filter(auth -> !(auth instanceof AnonymousAuthenticationToken))
                .map(auth -> ((Tenant) auth.getPrincipal()).getId());
    }

    @Override
    public Mono<Connection> create() {
        // Every time the app asks for a connection, set the PostgreSQL session
        // variable to the current tenant to enforce data isolation.
        return super.create().zipWith(determineCurrentLookupKey())
                .flatMap(tuple -> Mono.from(tuple.getT1()
                                .createStatement("SET SESSION app.current_tenant = '" + tuple.getT2() + "'")
                                .execute())
                        .thenReturn(tuple.getT1()));
    }
}
This extends AbstractRoutingConnectionFactory and overrides determineCurrentLookupKey (and, to set the session variable, create) to provide the tenant-aware behavior; the map from tenant keys to target ConnectionFactory instances is supplied via setTargetConnectionFactories.
Q:
Axios returning binary data instead of XML
I've been spending the past few hours figuring out why Axios is doing this.
I tried to do this with the request library in Node.js and it works fine, but with Axios it doesn't.
Essentially what I'm doing is sending a request with XML data:
var options = {
'method': 'POST',
'url': 'https://rvices.svc',
'headers': {
'Content-Type': 'text/xml; charset=utf-8',
'SOAPAction': 'http://etProject'
},
data: xmlData};
with xmlData looking similar to:
let xmlData = `<?xml version="1.0" encoding="UTF-8"?><soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:com="http://p" xmlns:pws="http://"
etc. etc. (it has mostly private data in it, so I stripped most of it out).
When I try to use Axios to send this
const testData = await axios(options)
I get back a whole lot of output that looks like this:
'���_\x1B�\f�10�\x17�Ч�{O�\x00��5h�S�������\x0F���s�(+Ғ�\x0F�����m�\x15\x01\x13��6b��\x06%\x00��\x15p8<;��W�4����\x0B���\x01���e�\x7FvZ�{���Ï������\x06��-�z��\x01}�!�r�A�\x13��\x11O�w6ũ���{�\x03����;{����\x01\x7Fy��KoՎ���\x1Bߚe��W��mЇ�qD�a��[�7Ӄ���@��F<\x1C/mF�{\x03�h��#�\x16�\x11\x1F\x1F�L9\x0FM\x8A\x0E�\x
17�h���\x03�4�7�f=bj*8�p�\x13_�\x17�5���_�Ӑ�|M>����\r��F�8q�iE�#��\x0E?�v�������O�xq3���x�Q�튱\x1F?G&HG9��6���V\x1B⫯Ev\x01rc\x13\x10�\'�7��`�Ii��x�~LM6�#˒74#@�����f�*\x7F\x16(5|\x1CWl��\x07\t\x1F��z�\x15\x00\x1B��4�\x13���LCTG�\x1FI�����\fec�h\x02�~��i`�:Ғ�\x0F���y\b#�]V��g��Ӈ�\x14|���6~\x19~c`�/�O���M\x01��k\x
10�\'+���\x07S\r?|��T�A�\x0FӒ�\x0F��ܷ\'.s�!>�tbX\x05�\fs\x18�\r�"`���\x10lV٠\x05@ܲ�\x02\x0E\x07h���\n' +
'���[�7}�>54 r�����ʦ\x15�\x17��\x0E:
That is the right number of characters (100k+), but jumbled,
compared to doing this with request, which returns the XML I expect, e.g.:
</b:ProjectTaskTypeDetail></b:PwsProjectTaskTypeElement><b:PwsProjectTaskTypeElement><b:ProjectTaskTypeDetail><b:ExternalSystemIdentifier i:nil="true"/><b:ProjectTaskTypeId i:nil="true"/><b:ProjectTaskTypeUid>5776</b:ProjectTaskTypeUid><b:ProjectTaskTypeName>Faon</b:Proj
ectTaskTypeName>
One thing I noticed is that Axios is breaking my request up into multiple lines like this:
'<com:PwsProjectRef><com:ProjectCode>201268</com:ProjectCode></com:PwsProjectRef>\n' +
'\n' +
'<com:PwsProjectRef><com:ProjectCode>210115-01</com:ProjectCode></com:PwsProjectRef>\n' +
'\n' +
even though there are no \n's or breaks like that in my request.
So I'm wondering if anyone has run into this before and knows how to solve it?
request is working, but (from what I can tell?) request doesn't work with async code (I'm probably wrong about this).
Sorry for the vagueness!
A:
You should be using the responseType config option to set the expected response, which reflects the Accept HTTP header and not the Content-Type one:
const options = {
method: 'POST',
url: 'https://rvices.svc',
headers: {
'Content-Type': 'text/xml; charset=utf-8',
'SOAPAction': 'http://etProject'
},
data: xmlData,
responseType: 'document',
responseEncoding: 'utf8'
};
const testData = await axios(options);
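If you then need the XML as a JavaScript object rather than a string, one option (my suggestion, not part of this answer) is a parser such as xml2js:
const { parseStringPromise } = require('xml2js');

const testData = await axios(options);
// Assumes testData.data arrived as an XML string
const parsed = await parseStringPromise(testData.data);
console.log(parsed);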
A:
Try with this code. Save it as get-data.js:
const axios = require("axios");
const getData = async () => {
try {
const resp = await axios.get('your xml URL',
{
headers: {
'Accept-Encoding': 'application/xml',
}
}
);
console.log(resp.data);
} catch (err) {
// Handle Error Here
console.error(err);
}
};
getData()
npm install axios
node get-data.js
A:
The response is being returned in a compressed binary format, but
Axios does not understand the compression format that is being returned from the server. Try forcing the response to a specific compression algorithm like 'deflate' which axios understands. 'gzip' may also work.
The axios 'decompress' option tells axios to automatically decompress the binary data.
var options = {
'method': 'POST',
'url': 'https://rvices.svc',
'headers': {
'Accept-Encoding': 'deflate',
'Content-Type': 'text/xml; charset=utf-8',
'SOAPAction': 'http://etProject'
},
data: xmlData,
decompress: true
};
Q:
Error in running a client-server socket program in c
I am new to socket programming. I am trying to run the client and server code and send messages from the client to the server and vice versa, but I am facing an error where the client's connection to the server is refused.
This is the error message:
Connection Failed: Connection refused
Here is the code:
Server
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
void error(const char *err){
perror(err); //error function that takes error number and outputs text description
exit(1);
}
int main(int argc, char *argv[]){ //argc = number of parameters (2 in our case which is filename and port number)
//argv will contain the filename and port number
if (argc < 2){
fprintf(stderr, "Port number not provided. Program Terminated\n");
exit(1);
}
int sockfd, newsockfd, portno, n;
char buffer[255];
struct sockaddr_in serv_addr, cli_addr;
socklen_t clilen;
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if(sockfd < 0){
error("Error opening Socket");
}
bzero((char *) &serv_addr, sizeof(serv_addr));
portno = atoi(argv[1]);
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
if(bind(sockfd,(struct sockaddr *)&serv_addr, sizeof(serv_addr))<0){
error("Binding Failed");
}
listen(sockfd, 5);
clilen = sizeof(cli_addr);
newsockfd = accept(sockfd, (struct sockaddr*) &cli_addr, &clilen);
if (newsockfd < 0){
error("Error on Accept");
}
while(1){
bzero(buffer, 255);
n = read(newsockfd, buffer, 255);
if(n < 0){
error("Read Failed");
}
printf("Client: %s\n", buffer);
bzero(buffer, 255);
fgets(buffer, 255, stdin);
n = write(newsockfd, buffer, strlen(buffer));
if(n < 0){
error("Write Failed");
}
int i = strncmp("Exit", buffer, 4);
if(i == 0){
break;
}
}
close(newsockfd);
close(sockfd);
return 0;
}
Client
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <netdb.h>
void error(const char *err){
perror(err); //error function that takes error number and outputs text description
exit(1);
}
int main(int argc, char *argv[]){
int sockfd, portno, n;
struct sockaddr_in serv_addr;
struct hostent *server;
char buffer[255];
if(argc < 3){
fprintf(stderr, "usage %s hostname port\n", argv[0]);
exit(1);
}
portno = atoi(argv[2]);
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0){
error("ERROR opening socket");
}
server = gethostbyname(argv[1]);
if(server == NULL){
fprintf(stderr,"Error, No Such Host");
}
bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *) server->h_addr , (char *) &serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);
if(connect(sockfd,(struct sockaddr *) &serv_addr, sizeof(serv_addr))<0){
error("Connection Failed");
}
while (1)
{
bzero(buffer, 255);
fgets(buffer, 255, stdin);
n = write(sockfd,buffer, strlen(buffer));
if(n < 0){
error("Write Failed");
}
bzero(buffer, 255);
n = read(sockfd, buffer, 255);
if(n < 0){
error("Read Failed");
}
printf("Server: %s",buffer);
int i = strncmp("Exit", buffer, 4);
if(i == 0){
break;
}
}
close(sockfd);
return 0;
}
To run the server code I used this command:
./server 9999
To run the client code I used this command:
./client 127.0.0.1 9999
A:
In the server program, you forgot to set serv_addr.sin_port before calling bind.
You should change the lines
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
to:
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port = htons(portno);
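As a side note, once the port is set, bind can still fail with "Address already in use" if the server is restarted quickly. A minimal sketch (reusing the question's variable names) that sets SO_REUSEADDR before bind:
int opt = 1; /* allow quick restarts by reusing the local address */
if (setsockopt(sockfd, SOL_SOCKET, SO_REUSEADDR, &opt, sizeof(opt)) < 0) {
    error("setsockopt Failed");
}
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
serv_addr.sin_port = htons(portno); /* the line that was missing */
if (bind(sockfd, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
    error("Binding Failed");
}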
|
Error in running a client-server socket program in c
|
I am new to socket programming. I am trying to run the client and server code and send messages from the client to the server and vice versa, but I am facing an error where the client's connection to the server is refused.
This is the error message:
Connection Failed: Connection refused
Here is the code:
Server
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
void error(const char *err){
perror(err); //error function that takes error number and outputs text description
exit(1);
}
int main(int argc, char *argv[]){ //argc = number of parameters (2 in our case which is filename and port number)
    //argv will contain the filename and port number
if (argc < 2){
fprintf(stderr, "Port number not provided. Program Terminated\n");
exit(1);
}
int sockfd, newsockfd, portno, n;
char buffer[255];
struct sockaddr_in serv_addr, cli_addr;
socklen_t clilen;
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if(sockfd < 0){
error("Error opening Socket");
}
bzero((char *) &serv_addr, sizeof(serv_addr));
portno = atoi(argv[1]);
serv_addr.sin_family = AF_INET;
serv_addr.sin_addr.s_addr = INADDR_ANY;
if(bind(sockfd,(struct sockaddr *)&serv_addr, sizeof(serv_addr))<0){
error("Binding Failed");
}
listen(sockfd, 5);
clilen = sizeof(cli_addr);
newsockfd = accept(sockfd, (struct sockaddr*) &cli_addr, &clilen);
if (newsockfd < 0){
error("Error on Accept");
}
while(1){
bzero(buffer, 255);
n = read(newsockfd, buffer, 255);
if(n < 0){
error("Read Failed");
}
printf("Client: %s\n", buffer);
bzero(buffer, 255);
fgets(buffer, 255, stdin);
n = write(newsockfd, buffer, strlen(buffer));
if(n < 0){
error("Write Failed");
}
int i = strncmp("Exit", buffer, 4);
if(i == 0){
break;
}
}
close(newsockfd);
close(sockfd);
return 0;
}
Client
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <netdb.h>
void error(const char *err){
perror(err); //error function that takes error number and outputs text description
exit(1);
}
int main(int argc, char *argv[]){
int sockfd, portno, n;
struct sockaddr_in serv_addr;
struct hostent *server;
char buffer[255];
if(argc < 3){
fprintf(stderr, "usage %s hostname port\n", argv[0]);
exit(1);
}
portno = atoi(argv[2]);
sockfd = socket(AF_INET, SOCK_STREAM, 0);
if (sockfd < 0){
error("ERROR opening socket");
}
server = gethostbyname(argv[1]);
if(server == NULL){
fprintf(stderr,"Error, No Such Host");
}
bzero((char *) &serv_addr, sizeof(serv_addr));
serv_addr.sin_family = AF_INET;
bcopy((char *) server->h_addr , (char *) &serv_addr.sin_addr.s_addr, server->h_length);
serv_addr.sin_port = htons(portno);
if(connect(sockfd,(struct sockaddr *) &serv_addr, sizeof(serv_addr))<0){
error("Connection Failed");
}
while (1)
{
bzero(buffer, 255);
fgets(buffer, 255, stdin);
n = write(sockfd,buffer, strlen(buffer));
if(n < 0){
error("Write Failed");
}
bzero(buffer, 255);
n = read(sockfd, buffer, 255);
if(n < 0){
error("Read Failed");
}
printf("Server: %s",buffer);
int i = strncmp("Exit", buffer, 4);
if(i == 0){
break;
}
}
close(sockfd);
return 0;
}
To run the server code I used this command:
./server 9999
To run the client code I used this command:
./client 127.0.0.1 9999
|
[
"In the server program, you forgot to set serv_addr.sin_port before calling bind.\nYou should change the lines\nserv_addr.sin_family = AF_INET;\nserv_addr.sin_addr.s_addr = INADDR_ANY;\n\nto:\nserv_addr.sin_family = AF_INET;\nserv_addr.sin_addr.s_addr = INADDR_ANY;\nserv_addr.sin_port = htons(portno);\n\n"
] |
[
2
] |
[] |
[] |
[
"c",
"chat",
"client_server",
"sockets"
] |
stackoverflow_0074660847_c_chat_client_server_sockets.txt
|
Q:
Barplot colour degradation in the graph display
I would like to know: when creating a barplot chart, is it possible to generate a chart whose bars are coloured with an ascending or descending gradient, with a colour that goes from darker to lighter or the other way round?
A:
Yeah, it is possible to create a bar plot chart with bars that are colored with a gradient in R. To do this, you will need to use a plotting library that supports gradient fills for bars, such as ggplot2.
Here is an example of how you could create a bar plot with a gradient fill using ggplot2:
library(ggplot2)
# Generate some data for the bar plot
x <- seq_len(10)
y <- sample(10, 10)
# Create a data frame with the x and y values
df <- data.frame(x, y)
# Create the bar plot
ggplot(df, aes(x, y, fill = y)) +
geom_col() +
scale_fill_gradient(low = "darkblue", high = "lightblue")
This code will create a bar plot with bars that are colored with a gradient that goes from dark to light. You can adjust the low and high values in the scale_fill_gradient function to control the range and direction of the gradient.
Note that there are many other ways to create a bar plot with a gradient fill in R, and the specific steps will depend on your specific requirements. This is just one example to illustrate the concept.
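For instance, to reverse the direction of the gradient so that low values are light and high values are dark, you could simply swap the two colours (a small variation on the example above):
# Reversed gradient: low values light, high values dark
ggplot(df, aes(x, y, fill = y)) +
  geom_col() +
  scale_fill_gradient(low = "lightblue", high = "darkblue")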
|
Barplot colour degradation in the graph display
|
I would like to know: when creating a barplot chart, is it possible to generate a chart whose bars are coloured with an ascending or descending gradient, with a colour that goes from darker to lighter or the other way round?
|
[
"Yeah, it is possible to create a bar plot chart with bars that are colored with a gradient in R. To do this, you will need to use a plotting library that supports gradient fills for bars, such as ggplot2.\nHere is an example of how you could create a bar plot with a gradient fill using ggplot2:\nlibrary(ggplot2)\n\n# Generate some data for the bar plot\nx <- seq_len(10)\ny <- sample(10, 10)\n\n# Create a data frame with the x and y values\ndf <- data.frame(x, y)\n\n# Create the bar plot\nggplot(df, aes(x, y, fill = y)) +\n geom_col() +\n scale_fill_gradient(low = \"darkblue\", high = \"lightblue\")\n\nThis code will create a bar plot with bars that are colored with a gradient that goes from dark to light. You can adjust the low and high values in the scale_fill_gradient function to control the range and direction of the gradient.\nNote that there are many other ways to create a bar plot with a gradient fill in R, and the specific steps will depend on your specific requirements. This is just one example to illustrate the concept.\n"
] |
[
0
] |
[] |
[] |
[
"r"
] |
stackoverflow_0074661215_r.txt
|
Q:
Grpc code first JSON transcoding in .Net 7
I am trying to add gRPC JSON transcoding to my gRPC code-first project, but I can't seem to figure out how to add that "option".
In the Microsoft demo they set the JSON transcoding as follows in the protobuf file:
rpc SayHello (HelloRequest) returns (HelloReply) {
option (google.api.http) = {
get: "/v1/greeter/{name}"
};
}
}
In code first you have the following as an example of your protobuf contract, but I can't seem to figure out how to add that option for the JSON transcoding:
[ServiceContract]
public interface IReportService
{
[OperationContract]
Task<ReportListResponse> Getreports(ProtoBuf.Grpc.CallContext context = default);
}
A:
Currently, no: not supported. It is something we can consider as a future investment, but: everything is time :/
There is nothing stopping you from defining your services in proto files and then transcoding them, but yeah, that's a giant pain in the arse if you have request or response schemas that are not very lightweight.
|
Grpc code first JSON transcoding in .Net 7
|
I am trying to add gRPC JSON transcoding to my gRPC code-first project, but I can't seem to figure out how to add that "option".
In the Microsoft demo they set the JSON transcoding as follows in the protobuf file:
rpc SayHello (HelloRequest) returns (HelloReply) {
option (google.api.http) = {
get: "/v1/greeter/{name}"
};
}
}
In code first you have the following as an example of your protobuf contract, but I can't seem to figure out how to add that option for the JSON transcoding:
[ServiceContract]
public interface IReportService
{
[OperationContract]
Task<ReportListResponse> Getreports(ProtoBuf.Grpc.CallContext context = default);
}
|
[
"Currently, no: not supported. It is something we can consider as a future investment, but: everything is time :/\nThere is nothing stopping you from defining your services in proto files and then transcoding them but yeah that's a giant pain the arse if you have request or response schemas that are not very lightweight.\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"grpc",
"json"
] |
stackoverflow_0074456459_c#_grpc_json.txt
|
Q:
How to optimize a query to not use a subquery
How can I optimize this query so it does not use a subquery?
Using the subquery below is very slow!
SELECT A.electric_machine mc,A.electric_imptwh meter,
DATE_FORMAT(FROM_UNIXTIME(A.electric_gts),'%Y-%m-%d %H:%i') dt,
(
SELECT electric_imptwh
FROM electric_meter
WHERE electric_machine = 'MPR93'
AND DATE(FROM_UNIXTIME(electric_gts)) = DATE(NOW() - INTERVAL 2 DAY)
AND DATE_FORMAT(TIME(FROM_UNIXTIME(electric_gts)),'%H:%i') = DATE_FORMAT(DATE_ADD(FROM_UNIXTIME(A.electric_gts),INTERVAL 15 MINUTE),'%H:%i')
) meter2,
DATE_FORMAT(DATE_ADD(FROM_UNIXTIME(A.electric_gts),INTERVAL 15 MINUTE),'%Y-%m-%d %H:%i') dt2
FROM electric_meter A
WHERE A.electric_machine = 'MPR93'
AND DATE(FROM_UNIXTIME(A.electric_gts)) = DATE(NOW() - INTERVAL 2 DAY)
AND
(
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':00') OR
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':15') OR
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':30') OR
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':45')
)
A:
Many of those date expressions are unnecessarily complicated, and they prevent use of any INDEX.
You want only data from 2 days ago? Not also yesterday?
For example, the goal is to move away from function calls around any column name:
DATE(FROM_UNIXTIME(A.electric_gts)) = DATE(NOW() - INTERVAL 2 DAY)
-->
A.electric_gts >= CURDATE() - INTERVAL 2 DAY
AND A.electric_gts < CURDATE() - INTERVAL 1 DAY
Depending on datatypes, it might need to be
A.electric_gts >= UNIX_TIMESTAMP(CURDATE() - INTERVAL 2 DAY)
AND A.electric_gts < UNIX_TIMESTAMP(CURDATE() - INTERVAL 1 DAY)
And have both of these indexes:
INDEX(electric_machine, electric_gts, electric_imptwh)
INDEX(electric_gts)
You can check for quarter-hour values with an expression involving the unix_timestamp % (15*60) = 0. But, without the table, I can't be more specific.
Might the readings come in not precisely on the minute? Such as "23:15:02"? If so, I'll provide a slightly messier expression to handle such.
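Putting those pieces together, one possible shape for the rewrite is a self-join instead of the correlated subquery (a sketch, assuming electric_gts is a unix-timestamp integer and readings land exactly on quarter-hour boundaries):
SELECT a.electric_machine AS mc,
       a.electric_imptwh AS meter,
       DATE_FORMAT(FROM_UNIXTIME(a.electric_gts), '%Y-%m-%d %H:%i') AS dt,
       b.electric_imptwh AS meter2,
       DATE_FORMAT(FROM_UNIXTIME(b.electric_gts), '%Y-%m-%d %H:%i') AS dt2
FROM electric_meter a
LEFT JOIN electric_meter b  -- the reading 15 minutes later, if any
       ON b.electric_machine = a.electric_machine
      AND b.electric_gts = a.electric_gts + 15 * 60
WHERE a.electric_machine = 'MPR93'
  AND a.electric_gts >= UNIX_TIMESTAMP(CURDATE() - INTERVAL 2 DAY)
  AND a.electric_gts <  UNIX_TIMESTAMP(CURDATE() - INTERVAL 1 DAY)
  AND a.electric_gts % (15 * 60) = 0;  -- keep only quarter-hour readings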
|
How to optimize a query to not use a subquery
|
How can I optimize this query so it does not use a subquery?
Using the subquery below is very slow!
SELECT A.electric_machine mc,A.electric_imptwh meter,
DATE_FORMAT(FROM_UNIXTIME(A.electric_gts),'%Y-%m-%d %H:%i') dt,
(
SELECT electric_imptwh
FROM electric_meter
WHERE electric_machine = 'MPR93'
AND DATE(FROM_UNIXTIME(electric_gts)) = DATE(NOW() - INTERVAL 2 DAY)
AND DATE_FORMAT(TIME(FROM_UNIXTIME(electric_gts)),'%H:%i') = DATE_FORMAT(DATE_ADD(FROM_UNIXTIME(A.electric_gts),INTERVAL 15 MINUTE),'%H:%i')
) meter2,
DATE_FORMAT(DATE_ADD(FROM_UNIXTIME(A.electric_gts),INTERVAL 15 MINUTE),'%Y-%m-%d %H:%i') dt2
FROM electric_meter A
WHERE A.electric_machine = 'MPR93'
AND DATE(FROM_UNIXTIME(A.electric_gts)) = DATE(NOW() - INTERVAL 2 DAY)
AND
(
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':00') OR
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':15') OR
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':30') OR
(DATE_FORMAT(TIME(FROM_UNIXTIME(A.electric_gts)),':%i') = ':45')
)
|
[
"Many of those date expressions are unnecessarily complicated, and they prevent use of any INDEX.\nYou want only data from 2 days ago? Not also yesterday?\nFor example, the goal is to move away from function calls around any column name:\nDATE(FROM_UNIXTIME(A.electric_gts)) = DATE(NOW() - INTERVAL 2 DAY)\n\n-->\n A.electric_gts >= CURDATE() - INTERVAL 2 DAY\nAND A.electric_gts < CURDATE() - INTERVAL 1 DAY\n\nDepending on datatypes, it might need to be\n A.electric_gts >= UNIX_TIMESTAMP(CURDATE() - INTERVAL 2 DAY)\nAND A.electric_gts < UNIX_TIMESTAMP(CURDATE() - INTERVAL 1 DAY)\n\nAnd have both of these indexes:\nINDEX(electric_machine, electric_gts, electric_imptwh)\nINDEX(electric_gts)\n\nYou can check for quarter-hour values with an expression involving the unix_timestamp % (15*60) = 0. But, without the table, I can't be more specific.\nMight the readings come in not precisely on the minute? Such as \"23:15:02\"? If so, I'll provide a slightly messier expression to handle such.\n"
] |
[
0
] |
[] |
[] |
[
"join",
"mysql",
"sql",
"subquery"
] |
stackoverflow_0074652295_join_mysql_sql_subquery.txt
|
Q:
Mapping a object but it gives Property 'title' does not exist on type 'never'
I tried fetching the Bloomberg API, but when I map over the response object to get the title it gives me the following error:
Property 'title' does not exist on type 'never'.
here's code
<div>
{loading? <h1>Loading...</h1> :null}
{data.map((item) => (
<div className="mt-10 text-center">
<h1 className="text-2xl">{item.title }</h1>
<a href={item.longURL}>Visit</a>
</div>))
}
</div>
full code - https://github.com/Anurag30112003/FinApp/blob/main/pages/index.tsx
A:
Your map runs even before the API fetch has completed, so data is still an empty array with no elements. You need to add a condition so it only renders when data is not empty, and you also need to add a key to each mapped element.
<div>
{loading? <h1>Loading...</h1> :null}
{data && data.length > 0 && data.map((item) => (
<div className="mt-10 text-center" key={'your unique key in here'}>
<h1 className="text-2xl">{item.title }</h1>
<a href={item.longURL}>Visit</a>
</div>
))
}
</div>
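As for the never error itself, it usually comes from initialising React state with an untyped empty array, which TypeScript infers as never[]. A sketch of typing the state explicitly (the Item shape here is a hypothetical stand-in for the real response type):
import { useState } from "react";

// Hypothetical shape of one API response item:
interface Item {
  title: string;
  longURL: string;
}

// Inside the component: typing useState up front stops TypeScript inferring never[]
const [data, setData] = useState<Item[]>([]);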
|
Mapping a object but it gives Property 'title' does not exist on type 'never'
|
I tried fetching the Bloomberg API, but when I map over the response object to get the title it gives me the following error:
Property 'title' does not exist on type 'never'.
here's code
<div>
{loading? <h1>Loading...</h1> :null}
{data.map((item) => (
<div className="mt-10 text-center">
<h1 className="text-2xl">{item.title }</h1>
<a href={item.longURL}>Visit</a>
</div>))
}
</div>
full code - https://github.com/Anurag30112003/FinApp/blob/main/pages/index.tsx
|
[
"Your mapping render even when you did not fetch api yet, so data is empty array, there are no any elements yet. you need to add conditions only render if data is not empty. And also you need to add key for mapping.\n<div>\n {loading? <h1>Loading...</h1> :null}\n {data && data.length > 0 && data.map((item) => ( \n <div className=\"mt-10 text-center\" key={'your unique key in here'}>\n <h1 className=\"text-2xl\">{item.title }</h1>\n <a href={item.longURL}>Visit</a>\n </div>\n ))\n }\n</div>\n\n"
] |
[
0
] |
[] |
[] |
[
"next.js",
"typescript"
] |
stackoverflow_0074661045_next.js_typescript.txt
|
Q:
Function in for cycle not being called until the for cycle finishes
Here it seems the for cycle executes only the console.log("  floors["+i+"]: " + floor.floorNum()) line, cycling through all 5 elements without calling the rest of the code, and only after it finishes is floor.on("up_button_pressed", function() called.
What exactly is happening, and how can I fix it?
Source: https://play.elevatorsaga.com/#challenge=2
{
init: function(elevators, floors) {
var elevator = elevators[0]; // Let's use the first elevator
// Whenever the elevator is idle (has no more queued destinations) ...
elevator.on("idle", function() {
elevator.goToFloor(2);
console.log("goToFloor 2 (because idle)")
});
elevator.on("floor_button_pressed", function(floorNum) {
elevator.goToFloor(floorNum);
console.log("goToFloor " + floorNum + " (because floor_button_pressed)")
} );
console.log("floors: " + floors)
for (i = 0; i < floors.length; i++) {
var floor = floors[i];
console.log(" floors["+i+"]: " + floor.floorNum())
floor.on("up_button_pressed", function() {
elevator.goToFloor(floor.floorNum());
console.log("goToFloor " + floor.floorNum() + " (because up_button_pressed)")
} );
floor.on("down_button_pressed", function() {
elevator.goToFloor(floor.floorNum());
console.log("goToFloor " + floor.floorNum() + " (because down_button_pressed)")
} );
}
},
update: function(dt, elevators, floors) {
// We normally don't need to do anything here
}
}
A:
Instead of var, use let; see "JavaScript closure inside loops – simple practical example" for why.
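A sketch of the loop rewritten with block-scoped bindings, so each registered callback captures its own floor rather than the last value the shared variable pointed to:
for (let i = 0; i < floors.length; i++) {
    const floor = floors[i]; // block-scoped: each callback keeps its own floor
    floor.on("up_button_pressed", function() {
        elevator.goToFloor(floor.floorNum());
    });
    floor.on("down_button_pressed", function() {
        elevator.goToFloor(floor.floorNum());
    });
}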
|
Function in for cycle not being called until the for cycle finishes
|
Here it seems the for cycle executes only the console.log("  floors["+i+"]: " + floor.floorNum()) line, cycling through all 5 elements without calling the rest of the code, and only after it finishes is floor.on("up_button_pressed", function() called.
What exactly is happening, and how can I fix it?
Source: https://play.elevatorsaga.com/#challenge=2
{
init: function(elevators, floors) {
var elevator = elevators[0]; // Let's use the first elevator
// Whenever the elevator is idle (has no more queued destinations) ...
elevator.on("idle", function() {
elevator.goToFloor(2);
console.log("goToFloor 2 (because idle)")
});
elevator.on("floor_button_pressed", function(floorNum) {
elevator.goToFloor(floorNum);
console.log("goToFloor " + floorNum + " (because floor_button_pressed)")
} );
console.log("floors: " + floors)
for (i = 0; i < floors.length; i++) {
var floor = floors[i];
console.log(" floors["+i+"]: " + floor.floorNum())
floor.on("up_button_pressed", function() {
elevator.goToFloor(floor.floorNum());
console.log("goToFloor " + floor.floorNum() + " (because up_button_pressed)")
} );
floor.on("down_button_pressed", function() {
elevator.goToFloor(floor.floorNum());
console.log("goToFloor " + floor.floorNum() + " (because down_button_pressed)")
} );
}
},
update: function(dt, elevators, floors) {
// We normally don't need to do anything here
}
}
|
[
"instead of var, use let because of JavaScript closure inside loops – simple practical example\n"
] |
[
1
] |
[] |
[] |
[
"javascript"
] |
stackoverflow_0074573269_javascript.txt
|
Q:
Archunit - classes may only be used once
I want to implement a rule which makes sure classes of type "Repository" are not being shared among classes of type "Service".
That means a Service can use as many Repositories as it wants, but a Repository must only be used by one Service (no sharing of Repositories). How would I achieve that?
What I have now is:
@ArchTest
static final ArchRule repository_must_only_be_used_by_a_service =
classes().that().resideInAnyPackage(SUBPACKAGE_NAME_REPOSITORY).should().onlyHaveDependentClassesThat()
.resideInAnyPackage(SUBPACKAGE_NAME_SERVICE);
@ArchTest
static final ArchRule repository_must_only_be_used_by_one_service = ???
A:
You can define a custom ArchCondition checking for dependent classes, which can be very concise:
import static com.tngtech.archunit.base.DescribedPredicate.describe;
import static com.tngtech.archunit.lang.conditions.ArchConditions.have;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;
// ...
@ArchTest
ArchRule repository_must_have_exactly_one_dependent_class =
classes().that().resideInAnyPackage(SUBPACKAGE_NAME_REPOSITORY)
.should(have(describe("#{dependent classes} == 1", javaClass ->
javaClass.getDirectDependenciesToSelf().stream()
.map(Dependency::getOriginClass).count() == 1
)));
I'd probably implement the ArchCondition myself to get a more helpful violation message:
import static com.tngtech.archunit.lang.ConditionEvent.createMessage;
import static com.tngtech.archunit.lang.SimpleConditionEvent.violated;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;
import static java.util.stream.Collectors.joining;
import static java.util.stream.Collectors.toSet;
// ...
@ArchTest
ArchRule repository_must_have_exactly_one_dependent_class =
classes().that().resideInAnyPackage(SUBPACKAGE_NAME_REPOSITORY)
.should(new ArchCondition<JavaClass>("have one dependent class") {
@Override
public void check(JavaClass javaClass, ConditionEvents events) {
Set<JavaClass> dependentClasses =
javaClass.getDirectDependenciesToSelf().stream()
.map(Dependency::getOriginClass)
.collect(toSet());
if (dependentClasses.size() != 1) {
String message = dependentClasses.isEmpty()
? "has no dependent classes"
: dependentClasses.stream()
.map(JavaClass::getName)
.collect(joining(", ", "has several dependent classes: ", ""));
events.add(violated(javaClass, createMessage(javaClass, message)));
}
}
});
|
Archunit - classes may only be used once
|
I want to implement a rule which makes sure classes of type "Repository" are not being shared among classes of type "Service".
That means a Service can use as many Repositories as it wants, but a Repository must only be used by one Service (no sharing of Repositories). How would I achieve that?
What I have now is:
@ArchTest
static final ArchRule repository_must_only_be_used_by_a_service =
classes().that().resideInAnyPackage(SUBPACKAGE_NAME_REPOSITORY).should().onlyHaveDependentClassesThat()
.resideInAnyPackage(SUBPACKAGE_NAME_SERVICE);
@ArchTest
static final ArchRule repository_must_only_be_used_by_one_service = ???
|
[
"You can define a custom ArchCondition checking for dependent classes, which can be very concise:\nimport static com.tngtech.archunit.base.DescribedPredicate.describe;\nimport static com.tngtech.archunit.lang.conditions.ArchConditions.have;\nimport static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;\n\n// ...\n\n@ArchTest\nArchRule repository_must_have_exactly_one_dependent_class =\n classes().that().resideInAnyPackage(SUBPACKAGE_NAME_REPOSITORY)\n .should(have(describe(\"#{dependent classes} == 1\", javaClass ->\n javaClass.getDirectDependenciesToSelf().stream()\n .map(Dependency::getOriginClass).count() == 1\n )));\n\nI'd probably implement the ArchCondition myself to get a more helpful violation message:\nimport static com.tngtech.archunit.lang.ConditionEvent.createMessage;\nimport static com.tngtech.archunit.lang.SimpleConditionEvent.violated;\nimport static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;\nimport static java.util.stream.Collectors.joining;\nimport static java.util.stream.Collectors.toSet;\n\n// ...\n\n@ArchTest\nArchRule repository_must_have_exactly_one_dependent_class =\n classes().that().resideInAnyPackage(SUBPACKAGE_NAME_REPOSITORY)\n .should(new ArchCondition<JavaClass>(\"have one dependent class\") {\n @Override\n public void check(JavaClass javaClass, ConditionEvents events) {\n Set<JavaClass> dependentClasses = \n javaClass.getDirectDependenciesToSelf().stream()\n .map(Dependency::getOriginClass)\n .collect(toSet());\n if (dependentClasses.size() != 1) {\n String message = dependentClasses.isEmpty()\n ? \"has no dependent classes\"\n : dependentClasses.stream()\n .map(JavaClass::getName)\n .collect(joining(\", \", \"has several dependent classes: \", \"\"));\n events.add(violated(javaClass, createMessage(javaClass, message)));\n }\n }\n });\n\n"
] |
[
0
] |
[] |
[] |
[
"archunit",
"java",
"junit"
] |
stackoverflow_0074628687_archunit_java_junit.txt
|
Q:
How to generate ObjectId() when manually inserting in Robo 3T?
I'm trying to generate ObjectId() when inserting manually in Robo 3T.
The code below doesn't seem to work. I want every object inside TestArray to have a unique id.
How do I generate ObjectId manually?
{
"Name" : "Test",
"TestArray" : [
{
"_id" : ObjectId(),
"Name" : "Test"
}
]
}
A:
I have tried
x=ObjectId();
and it works fine for me
A:
Try:
new ObjectId()
This will generate the objectId on client side
A:
You don't have to generate the _id; just don't include that field in your insert query and Mongo will automatically generate it for you.
A:
Do it like this:
var TestArray = []
for (let i = 0; i < 10; i++)
TestArray.push({ "_id": ObjectId(), "Name": "Test" })
{
"Name" : "Test",
"TestArray" : TestArray
}
A:
While this is pretty tedious, if you want to use the typical Meteor structure of a 17 character alphanumeric string, you can use a random text generator online to generate a string, then do a .find() in the collection to see if it exists, and then pass it in your .insert() as the _id value in your insert object. I had the same problem and this was the way I got around it.
|
How to generate ObjectId() when manually inserting in Robo 3T?
|
I'm trying to generate ObjectId() when inserting manually in Robo 3T.
The code below doesn't seem to work. I want every object inside TestArray to have a unique id.
How do I generate ObjectId manually?
{
"Name" : "Test",
"TestArray" : [
{
"_id" : ObjectId(),
"Name" : "Test"
}
]
}
|
[
"I have tried\nx=ObjectId();\n\nand it works fine for me\n\n",
"Try:\n new ObjectId()\n\nThis will generate the objectId on client side\n",
"You don't have to generate the _id, just don't that field in your insert query and mongo will automatically generate it for you.\n",
"Do it like this:\nvar TestArray = [] \nfor (let i = 0; i < 10; i++)\n TestArray.push({ \"_id\": ObjectId(), \"Name\": \"Test\" }) \n\n{\n \"Name\" : \"Test\",\n \"TestArray\" : TestArray\n}\n\n",
"While this is pretty tedious, if you want to use the typical Meteor structure of a 17 character alphanumeric string, you can use a random text generator online to generate a string, then do a .find() in the collection to see if it exists, and then pass it in your .insert() as the _id value in your insert object. I had the same problem and this was the way I got around it.\n"
] |
[
1,
0,
0,
0,
0
] |
[] |
[] |
[
"mongodb",
"mongodb_query",
"robo3t"
] |
stackoverflow_0071582815_mongodb_mongodb_query_robo3t.txt
|
Q:
I can execute Main.java file but not Main.class file
I am learning Java and I have some difficulties with the package mechanism. I have different classes in a package. I compiled them correctly, but when I execute the Main file I get different behavior. Music3 is the file where the main method is.
andrea@andrea:~/Documenti/java$ java -Xdiag -cp class/ source/Music3
Errore: impossibile trovare o caricare la classe principale source.Music3
Causato da: java.lang.ClassNotFoundException: source.Music3
java.lang.ClassNotFoundException: source.Music3
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at java.base/sun.launcher.LauncherHelper.loadMainClass(LauncherHelper.java:791)
at java.base/sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:686)
andrea@andrea:~/Documenti/java$ java -Xdiag -cp class/ source/Music3.java
Wind.play() C_Sharp
Percussion.play() C_Sharp
Stringed.play() C_Sharp
As you can see I have the right output when I execute Music3.java. What is wrong?
A:
It looks like you are trying to execute a .class file by running java -Xdiag -cp class/ source/Music3.class, but you are getting a ClassNotFoundException. This error indicates that the java command is unable to find the class you are trying to run.
There are a few possible reasons for this error. One reason might be that you are not specifying the correct classpath when running the java command. The classpath tells the java command where to look for class files that it needs to load. In your case, you are using -cp class/, which means that the java command will only look for class files in the class directory. If the Music3.class file is not in the class directory, then the java command will not be able to find it.
Another possible reason for the error is that the Music3.class file is not in the correct package. In Java, classes are organized into packages, and each class belongs to a single package. When you compile a Java program, the compiler creates a .class file for each class in the program. The .class file for a class is placed in a directory that corresponds to the package that the class belongs to. For example, if the Music3 class belongs to the source package, then the Music3.class file should be in the source directory.
To fix the error, you will need to make sure that the Music3.class file is in the correct directory, and that you are specifying the correct classpath when running the java command. For example, if the Music3.class file is in the class/source directory, then you would need to run the java command like this:
java -Xdiag -cp class/source Music3
This command tells the java command to look for class files in the class/source directory, which is where the Music3.class file should be located.
|
I can execute Main.java file but not Main.class file
|
I am learning Java and I have some difficulties with the package mechanism. I have different classes in a package. I compiled them correctly, but when I execute the Main file I get different behavior. Music3 is the file where the main method is.
andrea@andrea:~/Documenti/java$ java -Xdiag -cp class/ source/Music3
Errore: impossibile trovare o caricare la classe principale source.Music3
Causato da: java.lang.ClassNotFoundException: source.Music3
java.lang.ClassNotFoundException: source.Music3
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at java.base/sun.launcher.LauncherHelper.loadMainClass(LauncherHelper.java:791)
at java.base/sun.launcher.LauncherHelper.checkAndLoadMain(LauncherHelper.java:686)
andrea@andrea:~/Documenti/java$ java -Xdiag -cp class/ source/Music3.java
Wind.play() C_Sharp
Percussion.play() C_Sharp
Stringed.play() C_Sharp
As you can see I have the right output when I execute Music3.java. What is wrong?
|
[
"It looks like you are trying to execute a .class file by running java -Xdiag -cp class/ source/Music3.class, but you are getting a ClassNotFoundException. This error indicates that the java command is unable to find the class you are trying to run.\nThere are a few possible reasons for this error. One reason might be that you are not specifying the correct classpath when running the java command. The classpath tells the java command where to look for class files that it needs to load. In your case, you are using -cp class/, which means that the java command will only look for class files in the class directory. If the Music3.class file is not in the class directory, then the java command will not be able to find it.\nAnother possible reason for the error is that the Music3.class file is not in the correct package. In Java, classes are organized into packages, and each class belongs to a single package. When you compile a Java program, the compiler creates a .class file for each class in the program. The .class file for a class is placed in a directory that corresponds to the package that the class belongs to. For example, if the Music3 class belongs to the source package, then the Music3.class file should be in the source directory.\nTo fix the error, you will need to make sure that the Music3.class file is in the correct directory, and that you are specifying the correct classpath when running the java command. For example, if the Music3.class file is in the class/source directory, then you would need to run the java command like this:\njava -Xdiag -cp class/source Music3\n\nThis command tells the java command to look for class files in the class/source directory, which is where the Music3.class file should be located.\n"
] |
[
1
] |
[
"The java command takes a class name, not a file name: It's java -Xdiag -cp class source.Music3.\nNote that 'source' is an extremely unfortunate package name, pick something else. Say it's 'sposito', then your source file should be in project/src/sposito/Music3.java, have package sposito; compile it someplace so that the class file ends up in project/class/sposito/Music3.class, run it from the project dir with java -cp class sposito.Music3.\n"
] |
[
-1
] |
[
"class",
"file",
"java",
"package"
] |
stackoverflow_0074661146_class_file_java_package.txt
|
Q:
Debugger keeps crashing
I'm new to C++ and my project is this:
Write a program that allows the user to enter the last names of five candidates in a local election and the number of votes received by each candidate. The program should then output each candidate's name, the number of votes received, and the percentage of the total votes received by the candidate. Your program should also output the winner of the election.
Sample output:
Candidate Votes Received % of Total Votes
Johnson 5000 25.91
Miller 4000 20.73
Duffy 6000 31.09
Robinson 2500 12.95
Ashtony 1800 9.33
Total 19300
The Winner of the Election is Duffy.
My professor added a twist to this exercise wanting the information to come out of an input file.
I have written a code for this but for some reason, the debugger keeps crashing. I put comments on the lines that are the reason the debugger crashes.
#include <iostream>
#include <fstream>
#include <ostream>
#include <string>
#include <iomanip>
#include <cstdlib>
using namespace std;
int main()
{
string names;
char electionwinner{};
const int NUM = 5;
int* votes = 0;
int total = 0;
int totalvotes = 0;
int high = 0;
float* percentages = 0;
int i = 0;
ifstream file5;
file5.open("file5.txt");
if (!file5)
{
cout << "Not able to open file.\n";
return -1;
}
for (int i = 0; i < NUM; i++);
{
file5 >> names[i] >> votes[i];
total += votes[i]; // this line crashes the debugger
}
for (int i = 0; i < 5; i++)
{
percentages[i] = (votes[i] / static_cast<float>(total)) * 100; //this line also crashes the debugger
}
for (int i = 1; i < 5; i++);
{
if (votes[i] > high)
high = votes[i]; //this line also crashes it
}
for (int i = 0; i < 5; i++);
{
if (votes[i] = high)
names[i] = electionwinner; //line also crashes it
}
//Credintials
cout << "=======================" << endl;
cout << "My name" << endl;
cout << "Vote Calculator" << endl;
cout << "College" << endl;
cout << "December 4th, 2022" << endl;
cout << "=======================" << endl;
//Output Section
cout << "\nFile Complete!" << endl;
cout << left << setw(15) << "Candidate" << right << setw(15) << "Votes Recieved" << right << setw(20) << "% of Total Votes" << endl;;
cout << "===========================================================" << endl;
while (!file5.eof())
{
cout << left << setw(15) << names << right << setw(15) << fixed << setprecision(0) << votes << right << setw(20) << fixed << setprecision(2) << percentages << "%" << endl;
}
cout << "===========================================================" << endl;
cout << left << setw(15) << "Total Votes: " << fixed << setprecision(0) << totalvotes << endl;
cout << "\nThe winner of the election is " << electionwinner << endl;
file5.close();
}
I have been stumped on this for 3 days so any help would be appreciated. Thank you in advance.
A:
Let's get you started on a reasonable path.
Your problem says that you have to read five entries from your file. Because this is a fixed quantity it is reasonable to use an array (despite good advice above to use a std::vector).
So the first lines of your code should declare the arrays that you need
int main()
{
const int NUM = 5;
string names[NUM]; // array for the names
int votes[NUM]; // array for the vote totals
float percentages[NUM]; // array for the percentages
...
}
Take it from there.
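With those arrays in place, the crashing lines stop writing through null pointers. A couple more pitfalls from the question's code are worth flagging, sketched here with the fixes applied: the stray semicolons right after the for headers (which make the following block run only once, after the loop), the = that should be == when comparing votes, and electionwinner, which holds a name and so wants to be a string rather than a char:
string electionwinner; // holds a candidate name, so string rather than char

for (int i = 0; i < NUM; i++) // no ';' after the for header
{
    file5 >> names[i] >> votes[i];
    total += votes[i];
}
// ... compute high as in the question, then:
for (int i = 0; i < NUM; i++)
{
    if (votes[i] == high) // '==' compares; '=' would assign
        electionwinner = names[i];
}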
|
Debugger keeps crashing
|
I'm new to C++ and my project is this:
Write a program that allows the user to enter the last names of five candidates in a local election and the number of votes received by each candidate. The program should then output each candidate's name, the number of votes received, and the percentage of the total votes received by the candidate. Your program should also output the winner of the election.
Sample output:
Candidate Votes Received % of Total Votes
Johnson 5000 25.91
Miller 4000 20.73
Duffy 6000 31.09
Robinson 2500 12.95
Ashtony 1800 9.33
Total 19300
The Winner of the Election is Duffy.
My professor added a twist to this exercise wanting the information to come out of an input file.
I have written a code for this but for some reason, the debugger keeps crashing. I put comments on the lines that are the reason the debugger crashes.
#include <iostream>
#include <fstream>
#include <ostream>
#include <string>
#include <iomanip>
#include <cstdlib>
using namespace std;
int main()
{
string names;
char electionwinner{};
const int NUM = 5;
int* votes = 0;
int total = 0;
int totalvotes = 0;
int high = 0;
float* percentages = 0;
int i = 0;
ifstream file5;
file5.open("file5.txt");
if (!file5)
{
cout << "Not able to open file.\n";
return -1;
}
for (int i = 0; i < NUM; i++);
{
file5 >> names[i] >> votes[i];
total += votes[i]; // this line crashes the debugger
}
for (int i = 0; i < 5; i++)
{
percentages[i] = (votes[i] / static_cast<float>(total)) * 100; //this line also crashes the debugger
}
for (int i = 1; i < 5; i++);
{
if (votes[i] > high)
high = votes[i]; //this line also crashes it
}
for (int i = 0; i < 5; i++);
{
if (votes[i] = high)
names[i] = electionwinner; //line also crashes it
}
//Credintials
cout << "=======================" << endl;
cout << "My name" << endl;
cout << "Vote Calculator" << endl;
cout << "College" << endl;
cout << "December 4th, 2022" << endl;
cout << "=======================" << endl;
//Output Section
cout << "\nFile Complete!" << endl;
cout << left << setw(15) << "Candidate" << right << setw(15) << "Votes Recieved" << right << setw(20) << "% of Total Votes" << endl;;
cout << "===========================================================" << endl;
while (!file5.eof())
{
cout << left << setw(15) << names << right << setw(15) << fixed << setprecision(0) << votes << right << setw(20) << fixed << setprecision(2) << percentages << "%" << endl;
}
cout << "===========================================================" << endl;
cout << left << setw(15) << "Total Votes: " << fixed << setprecision(0) << totalvotes << endl;
cout << "\nThe winner of the election is " << electionwinner << endl;
file5.close();
}
I have been stumped on this for 3 days so any help would be appreciated. Thank you in advance.
|
[
"Lets get you started on a reasonable path.\nYour problem says that you have to read five entries from your file. Because this is a fixed quantity it is reasonable to use an array (despite good advice above to use a std::vector).\nSo the first lines of your code should declare the arrays that you need\nint main()\n{\n const int NUM = 5;\n string names[NUM]; // array for the names\n int votes[NUM]; // array for the vote totals\n float percentages[NUM]; // array for the percentages\n ...\n}\n\nTake it from there.\n"
] |
[
1
] |
[] |
[] |
[
"c++"
] |
stackoverflow_0074661088_c++.txt
|
Q:
In Angular component, link doesn't work the first time. How can I fix it?
When I click one of the heroes array elements in Angular, the first click doesn't work; it only works the second time.
How can I fix it? Why does it happen?
It is the onSelect5() method that doesn't work the first time.
I added a link to github.com with all the code:
[1]: https://github.com/site50/Angular-FETCH-an-fetch/tree/main/src/app
A:
In file src/app/heroes/heroes.component.html, line 7, you have
<a routerLink="{{'../' + hero.id}}">
but should have
<a routerLink="{{ hero.id }}">
The reason is that Angular routing is trying to go to a previous route, but the previous route is this page; therefore the second time you land here, you are on the correct route and can go to the '/{hero.id}' page (as specified in the routing file).
|
In Angular component, link doesn't work the first time. How can I fix it?
|
When I click one of the heroes array elements in Angular, the first click doesn't work; it only works the second time.
How can I fix it? Why does it happen?
It is the onSelect5() method that doesn't work the first time.
I added a link to github.com with all the code:
[1]: https://github.com/site50/Angular-FETCH-an-fetch/tree/main/src/app
|
[
"In file src/app/heroes/heroes.component.html, line 7, you have\n <a routerLink=\"{{'../' + hero.id}}\">\n\nbut should have\n <a routerLink=\"{{ hero.id }}\">\n\nthe reason is that angular routing is trying to go to a previous route but the previous root is this page, and therefore the second time you land here, you are in the correct route and can go the '/{hero.id}' page (as specified in the routing file).\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"button",
"hyperlink",
"lazy_loading",
"typescript"
] |
stackoverflow_0074659862_angular_button_hyperlink_lazy_loading_typescript.txt
|
Q:
Content of directory on path https://xxxxxxx.dfs.core.windows.net/dataverse-xxxx-org5a2/account/Snapshot/2018-08_1656570292/*.csv' cannot be listed
When I try to query our Serverless SQL pool in Azure Synapse Analytics I get the following error:
"Content of directory on path 'https://xxxxxx.dfs.core.windows.net/dataverse-xxxxxx-org5a2bcccf/account/Snapshot/2018-08_1656570292/*.csv' cannot be listed.".
I have checked out the following link for clues as to what could be cause:
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/resources-self-help-sql-on-demand?tabs=x80070002
It is suggested that the error is due to permissions.
However, I believe I have the correct permissions.
I get this error whether I try to execute the query in SSMS or Synapse Workspace.
The error in SSMS is as follows:
Warning: Unable to resolve path https://xxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/account/Snapshot/2018-10_1657304551/*.csv. Error number 13807, Level 16, State 1, Message "Content of directory on path 'https://xxxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/account/Snapshot/2018-10_1657304551/*.csv' cannot be listed.".
Can someone let me know how to resolve this?
The query that I'm attempting to execute can be located here:
https://github.com/slavatrofimov/Synapse-Link-for-Dataverse-data-enrichment-in-Serverless-SQL-Pools/blob/main/SQL/Enrich%20Synapse%20Link%20for%20Dataverse%20Entities%20with%20Human-Readable%20Labels.sql
Is there a definitive way to determine if the problem is due to lack of permissions?
Update Question:
I have just realised that the issue is accessing the Lake on https://xxxxxx.dfs.core.windows.net/dataverse-xxxxxx-org5a2bcccf/
Therefore please take a look at my permissions on the lake and let me know if they are sufficient.
A:
This issue occurs when the user trying to query the external table does not have the relevant permissions or if there is a firewall enabled on your storage network.
Looking at the permissions you have provided, I see that Storage Blob Data Reader and Storage Blob Data Contributor have been granted.
Ref doc: Control storage account access for serverless SQL pool in Azure Synapse Analytics
If your storage account is firewall protected, then you will have to follow the steps described in this document to overcome the issue: Access storage that is protected with the firewall
Here are couple of relevant articles which might help you configure your storage firewall to overcome this issue:
Storage configuration for external table is not accessible while query on Serverless
Synapse Studio error while trying to read data from Storage Account using SQL On Demand
|
Content of directory on path https://xxxxxxx.dfs.core.windows.net/dataverse-xxxx-org5a2/account/Snapshot/2018-08_1656570292/*.csv' cannot be listed
|
When I try to query our Serverless SQL pool in Azure Synapse Analytics I get the following error:
"Content of directory on path 'https://xxxxxx.dfs.core.windows.net/dataverse-xxxxxx-org5a2bcccf/account/Snapshot/2018-08_1656570292/*.csv' cannot be listed.".
I have checked out the following link for clues as to what could be cause:
https://learn.microsoft.com/en-us/azure/synapse-analytics/sql/resources-self-help-sql-on-demand?tabs=x80070002
It is suggested that the error is due to permissions.
However, I believe I have the correct permissions.
I get this error whether I try to execute the query in SSMS or Synapse Workspace.
The error in SSMS is as follows:
Warning: Unable to resolve path https://xxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/account/Snapshot/2018-10_1657304551/*.csv. Error number 13807, Level 16, State 1, Message "Content of directory on path 'https://xxxxxx.dfs.core.windows.net/dataverse-xxxxx-org5a2bcccf/account/Snapshot/2018-10_1657304551/*.csv' cannot be listed.".
Can someone let me know how to resolve this?
The query that I'm attempting to execute can be located here:
https://github.com/slavatrofimov/Synapse-Link-for-Dataverse-data-enrichment-in-Serverless-SQL-Pools/blob/main/SQL/Enrich%20Synapse%20Link%20for%20Dataverse%20Entities%20with%20Human-Readable%20Labels.sql
Is there a definitive way to determine if the problem is due to lack of permissions?
Update Question:
I have just realised that the issue is accessing the Lake on https://xxxxxx.dfs.core.windows.net/dataverse-xxxxxx-org5a2bcccf/
Therefore please take a look at my permissions on the lake and let me know if they are sufficient.
|
[
"This issue occurs when the user trying to query the external table does not have the relevant permissions or if there is a firewall enabled on your storage network.\nWhen looked at the permissions you have provided, I see Storage Blob Data reader and Storage Blob Data contributor have been given.\nRef doc: Control storage account access for serverless SQL pool in Azure Synapse Analytics\n\nIn case if your storage account is firewall protect then you will have to follow the steps described in this document to overcome the issue: Access storage that is protected with the firewall\nHere are couple of relevant articles which might help you configure your storage firewall to overcome this issue:\n\nStorage configuration for external table is not accessible while query on Serverless\nSynapse Studio error while trying to read data from Storage Account using SQL On Demand\n\n"
] |
[
1
] |
[] |
[] |
[
"azure_synapse"
] |
stackoverflow_0074492824_azure_synapse.txt
|
Q:
Finding file name with variables in between
Having a hard time with Regex.
What would be the regex for finding a file name with variables in between?
For eg:
File name : DON_2010_JOE_1222022.txt
In the above file name, the words DON, JOE and the format .txt will remain constant. The rest of the numbers might change for every file. There could be characters as well instead of numbers in those two places.
What I'm looking for is basically something like DON_*_JOE_*.txt, with * being whatever it could be.
Can someone please help me with this?
I tried DON_*_JOE_*.txt and obviously it did not work.
A:
DON_(?<firstString>.*)_JOE_(?<secondString>.*).txt
You can use this. To access the specific group, you can use matcher.group("firstString").
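A runnable sketch of that in Java (the dot before txt is escaped here so it matches a literal dot; the class name is arbitrary):
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FileNameMatch {
    public static void main(String[] args) {
        Pattern p = Pattern.compile("DON_(?<firstString>.*)_JOE_(?<secondString>.*)\\.txt");
        Matcher m = p.matcher("DON_2010_JOE_1222022.txt");
        if (m.matches()) {
            System.out.println(m.group("firstString"));  // 2010
            System.out.println(m.group("secondString")); // 1222022
        }
    }
}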
A:
In JavaScript:
"DON_2010_JOE_1222022.txt".match(/DON_.+_JOE_.+\.txt/)
"Whatever it could be" translates to .+, which matches any characters except newlines.
|
Finding file name with variables in between
|
Having a hard time with Regex.
What would be the regex for finding a file name with variables in between?
For eg:
File name : DON_2010_JOE_1222022.txt
In the above file name, the words DON, JOE and the format .txt will remain constant. The rest of the numbers might change for every file. There could be characters as well instead of numbers in those two places.
What I'm looking for is basically something like DON_*_JOE_*.txt, with * being whatever it could be.
Can someone please help me with this?
I tried DON_*_JOE_*.txt and obviously it did not work.
|
[
"DON_(?<firstString>.*)_JOE_(?<secondString>.*).txt\n\nYou can use this. To access the specific group, you can use matcher.group(\"firstString\").\n",
"In JavaScript:\n\"DON_2010_JOE_1222022.txt\".match(/DON_.+_JOE_.+\\.txt/)\n\nwhatever it could be\n\nIt is .+ except new lines.\n"
] |
[
0,
-1
] |
[] |
[] |
[
"java",
"javascript",
"pentaho_data_integration",
"regex",
"regexp_replace"
] |
stackoverflow_0074659818_java_javascript_pentaho_data_integration_regex_regexp_replace.txt
|
Q:
Is there any Gson similar libraries for Python
I am new to Python. I was trying to create JSON responses for my Android app. I was wondering if there is any library similar to GSON for Python.
http://nullege.com/codes/search/com.google.gson.Gson
at this link i saw Gson the usage.
Can anyone please tell me if there is a GSON library for Python, or any other similar library? Also, if there is, please guide me to integrate it into the code.
A:
You can use Pykson, a JSON serializer and deserializer for Python which is somewhat like Gson. It supports lists of objects and serialization names.
Simply define your object model as JsonObject, and use Pykson to convert it back and forth to JSON.
class Student(JsonObject):
first_name = StringField(serialized_name="fn")
last_name = StringField(serialized_name="ln")
age = IntegerField(serialized_name="a")
json_text = '{"fn":"John", "ln":"Smith", "a": 25}'
student = Pykson.from_json(json_text, Student)
student_json = Pykson.to_json(student)
assert (json_text == student_json)
A:
You can use Jsonic library.
Jsonic is a lightweight utility for serializing/deserializing python objects to/from JSON.
Example:
from jsonic import serialize, deserialize
class User(Serializable):
def __init__(self, user_id: str, birth_time: datetime):
super().__init__()
self.user_id = user_id
self.birth_time = birth_time
user = User('id1', datetime(2020,10,11))
obj = serialize(user) # {'user_id': 'id1', 'birth_time': {'datetime': '2020-10-11 00:00:00', '_serialized_type': 'datetime'}, '_serialized_type': 'User'}
new_user : User = deserialize(obj) # new_user is a new instance of user with same attributes
Jsonic has some nifty features:
You can serialize objects of types that are not extending Serializable.
This can come in handy when you need to serialize objects of
third party library classes.
Support for custom serializers and deserializers
Serializing into JSON string or python dict
Transient class attributes
Supports both serialization of private fields or leave them out of the
serialization process.
Full disclosure: I'm the creator of Jsonic
A:
You can use BSON:
https://pymongo.readthedocs.io/en/stable/api/bson/index.html
Think of BSON as "binary JSON", meaning both:
Its output is not simple ASCII (as JSON is)
It has support for arbitrary byte strings (aka "Binary" is a native datatype)
In addition, it natively supports:
datetime objects
ObjectId objects
BSON is the "native" object marshaling technique of MongoDb. So it is an object type that you get access to with the "pymongo" library. You can load pymongo and not use it for MongoDb, and just use the BSON portion.
We use BSON to marshal long arrays of floating point numbers, for which JSON is an awful choice. This is useful over raw TCP sockets, or other transports that send "byte blobs" as their payload, such as RabbitMQ messages.
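A minimal round-trip sketch using the bson module that ships with pymongo (bson.encode and bson.decode assume a reasonably recent pymongo, 3.9 or later):
import datetime
import bson

# datetime is a native BSON type, unlike in plain JSON
doc = {"name": "test", "when": datetime.datetime(2020, 10, 11)}

data = bson.encode(doc)        # bytes suitable for a socket or message payload
roundtrip = bson.decode(data)  # back to a plain dict
assert roundtrip["name"] == "test"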
|
Is there any Gson similar libraries for Python
|
I am new to Python. I was trying to create JSON responses for my Android app. I was wondering if there is any library similar to GSON for Python.
http://nullege.com/codes/search/com.google.gson.Gson
at this link i saw Gson the usage.
Can anyone please tell me if there is a GSON library for Python, or any other similar library? Also, if there is, please guide me to integrate it into the code.
|
[
"You can use Pykson, JSON Serializer and Deserializer for Python which is somehow like Gson. It supports lists of objects and serialization names.\nSimply define your object model as JsonObject, and use Pykson to convert it back and forth to JSON.\nclass Student(JsonObject):\n first_name = StringField(serialized_name=\"fn\")\n last_name = StringField(serialized_name=\"ln\")\n age = IntegerField(serialized_name=\"a\")\n\njson_text = '{\"fn\":\"John\", \"ln\":\"Smith\", \"a\": 25}'\nstudent = Pykson.from_json(json_text, Student)\n\nstudent_json = Pykson.to_json(student)\nassert (json_text == student_json)\n\n",
"You can use Jsonic library.\nJsonic is a lightweight utility for serializing/deserializing python objects to/from JSON.\nExample:\nfrom jsonic import serialize, deserialize\n\nclass User(Serializable):\n def __init__(self, user_id: str, birth_time: datetime):\n super().__init__()\n self.user_id = user_id\n self.birth_time = birth_time\n \nuser = User('id1', datetime(2020,10,11)) \nobj = serialize(user) # {'user_id': 'id1', 'birth_time': {'datetime': '2020-10-11 00:00:00', '_serialized_type': 'datetime'}, '_serialized_type': 'User'}\nnew_user : User = deserialize(obj) # new_user is a new instance of user with same attributes\n\nJsonic has some nifty features:\n\nYou can serialize objects of types that are not extending Serializable.\nThis can come handy when you need to serialize objects of\nthird party library classes.\nSupport for custom serializers and deserializers\nSerializing into JSON string or python dict\nTransient class attributes\nSupports both serialization of private fields or leave them out of the\nserialization process.\n\nFull disclosure: I'm the creator of Jsonic\n",
"You can use BSON:\nhttps://pymongo.readthedocs.io/en/stable/api/bson/index.html\nThink of BSON as \"binary JSON\", meaning both:\n\nIt's output is not simple ASCII (as JSON is)\nIt has support for arbitrary byte strings (aka \"Binary\" is a native datatype)\n\nIn addition, it natively supports:\n\ndatetime objects\nObjectId objects\n\nBSON is the \"native\" object marshaling technique of MongoDb. So it is an object type that you get access to with the \"pymongo\" library. You can load pymongo and not use it for MongoDb, and just use the BSON portion.\nWe use BSON to marshal long arrays of floating point numbers, for which JSON is an awful choice. This is useful over raw TCP sockets, or other transports that send \"byte blobs\" as their payload, such as RabbitMQ messages.\n"
] |
[
4,
1,
0
] |
[] |
[] |
[
"gson",
"json",
"python"
] |
stackoverflow_0034805435_gson_json_python.txt
|
Q:
How do I remove link underlining in my HTML email?
<td width="110" align="center" valign="top" style="color:#000000;">
<a href="https://example.com" target="_blank"
style="color:#000000; text-decoration:none;">BOOK NOW
</a>
</td>
I used this code to make a link in my HTML email. In browsers and Outlook it works nicely, but in Gmail, Hotmail, and Ymail the links show underlined.
Can anyone help me to get rid of this?
A:
<a href="#" style="text-decoration:none !important; text-decoration:none;">BOOK NOW</a>
Outlook will strip out the style with the !important tag, leaving the regular style and thus no underline. The !important tag will overrule the web-based email clients' default style, likewise leaving no underline.
A:
I see this has been answered; however, I feel this link provides appropriate information for what formatting is supported in various email clients.
http://www.campaignmonitor.com/css/
It's worth noting that GMail and Outlook are two of the pickiest to format HTML email for.
A:
Use !important in the text decoration rule.
<a href="#" style="text-decoration:none !important;">BOOK NOW</a>
A:
After half a day looking into this (and 2 years since this question was opened) I believe I have found a comprehensive answer to this.
<a href="#"><font color="#000000"><span style='text-decoration:none;text-underline:none'>Link</span></font></a>
(You need the text-underline property on the span inside the link and the font tag to edit the colour)
A:
Another way to fool Gmail (for phone numbers): use a ~ instead of a -:
404-835-9421 --> 404~835~9421
It'll save you (or less savvy users ;-) the trip down html lane.
I found another way to remove link underlines in Outlook, which I have tested: create a blank class in your CSS, for example .blank {}, and then apply it to your links like this:
<a href="http://www.link.com/"><span class="blank" style="text-decoration:none !important;">Search</span></a>
This worked for me; hopefully it helps anyone still having trouble removing the underline from links in Outlook. If anyone has a workaround for Gmail, please share it; I've tried everything in this thread and nothing is working.
Thanks
A:
Windows Mail seemed to outright ignore the inline text-decoration style, but what fixed it for me was adding this to the head:
<!--[if (mso)|(mso 16)]>
<style type="text/css">
body, table, td, a, span { font-family: Arial, Helvetica, sans-serif !important; }
a {text-decoration: none;}
</style>
<![endif]-->
A:
I think that if you put a span with text-decoration:none inside the <a> tag, it will work in the majority of browsers / email clients.
As in:
<a href="" style="text-decoration:underline">
<span style="color:#0b92ce; text-decoration:none">BANANA</span>
</a>
A:
I added both declarations on the a href, which worked in the Outlook and Gmail apps. Outlook ignores the !important and Gmail needs it. Web versions of email work with either.
text-decoration: none !important; text-decoration: none;
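Spelled out on a full anchor (same trick as above; the link target is just a placeholder):
<a href="https://example.com" style="text-decoration: none !important; text-decoration: none;">BOOK NOW</a>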
A:
All email clients adjust the HTML and CSS code you provide according to their own rules;
e.g. Gmail removes everything but the inner HTML of the body tag.
1. For most other clients you can have a style tag in your header:
<style type="text/css">
a {text-decoration: none !important;}
</style>
Note: don't use CSS comments, as Yahoo! Mail might trip over them.
2. To be on the safe side, add the same code inline on the a tag as you did, and on an extra span tag as well (the style rules on a tags often get removed):
<a href="" style="text-decoration: none !important;">
<span style="text-decoration: none !important;">
text
</span>
</a>
A:
To completely "hide" underline for <a> in both mail application and web browser, can do the following tricky way.
<a href="..."><div style="background-color:red;">
<span style="color:red; text-decoration:underline;"><span style="color:white;">BUTTON</span></span>
</div></a>
The color on the 1st <span> is the one you don't want to show; it MUST be set to the same as your background color (red here).
The color on the 2nd <span> is the one for your button text (white here).
A:
text-decoration: none was not working for me; then I found an email in Outlook that did not have the underline and checked its code:
<span style='font-size: 12px; font-family: "Arial","Verdana", "sans-serif"; color: black; text-decoration-line: none;'>
<a href="http://www.test.com" style='font-size: 9.0pt; color: #C69E29; text-decoration: none;'><span>www.test.com</span></a>
</span>
This one is working for me.
A:
I used a combination of not showing links in Gmail, adding links for mso (Outlook), and the soft-hyphen (shy) entity, to keep the look and feel for my company. Some code may be redundant; for my company the looks were more important than being clickable. (It felt like a jigsaw, as every change breaks something else.)
<td style="color:rgb(69, 54, 53)">
<!--[if gte mso 9]>
<a href="http://www.immothekerfinotheker.be" style="text-decoration:none;">
<span style="text-decoration:none;">
<![endif]-->
www­.­immothekerfinotheker­.­be
<!--[if gte mso 9]>
</a>
</span>
<![endif]-->
</td>
Hope this helps someone
A:
It wholly depends on the email client whether it wants to display the underline under the link or not. As of now, the styles in the body are only supported by:
Outlook 2007/10/13 +
Outlook 2000/03
Apple iPhone/iPad
Outlook.com
Apple Mail 4
Yahoo! Mail Beta
http://www.campaignmonitor.com/css/
A:
Use text-decoration:none !important; instead of text-decoration:none; to make sure you "lose" the underline.
A:
Here in http://www.campaignmonitor.com/css/ there is a nice explanation of why this is restricted, and a pretty good guide to all the limitations of CSS in email clients.
A:
You can do "redundant styling" and that should fix the issue. You use the same styling you have on the but add it to a that is within the .
Example:
<td width="110" align="center" valign="top" style="color:#000000;">
<a href="https://example.com" target="_blank"
style="color:#000000; text-decoration:none;"><span style="color:#000000; text-decoration:none;">BOOK NOW</span></a>
</td>
A:
While viewing the HTML email, try inspecting the link element to see which rule is overriding yours. Then redeclare that class in your head style with text-decoration: none !important;.
In my case these are the classes that were overriding my inline style, so I declared them in the head of my HTML email with the style I wanted implemented.
It worked for me; hope it works for you too.
.ii a[href]{
text-decoration: none !important;
}
#yiv8915438996 a:link, #yiv8915438996 span.yiv8915438996MsoHyperlink{
text-decoration: none !important;
}
#yiv8915438996 a:visited, #yiv8915438996 span.yiv8915438996MsoHyperlinkFollowed{
text-decoration: none !important;
}
A:
Code like the lines below worked for me in Gmail Web client. A non-underlined black link showed up in the email. I didn't use the nested span tag.
<table>
<tbody>
<tr>
<td>
<a href="http://hexinpeter.com" style="text-decoration: none; color: #000000 !important;">Peter Blog</a>
</td>
</tr>
</tbody>
</table>
Note: Gmail will strip off any incorrect inline styles. E.g. code like the line below will have its inline styles all stripped off.
<a href="http://hexinpeter.com" style="font-family:; text-decoration: none; color: #000000 !important;">Peter Blog</a>
A:
I copied my HTML page and pasted it into Word.
I edited the signature in Word, deleting the spaces where the underline appeared, and made my own "padding" by pressing the space bar.
Then I copied it again and pasted it into Outlook 2013.
Worked fine for me.
A:
In Windows 10 Mail, you might need to add this to your HTML head:
<!--[if (mso)|(mso 16)]>
<style type="text/css">
body, table, td, a, span { font-family: Arial, Helvetica, sans-serif !important; }
a {text-decoration: none;}
</style>
<![endif]-->
The 'a {text-decoration: none;}' fixed the underline problems :)
A:
In my case, I configured the signature (copy and paste in Gmail) using Safari. I tried every snippet posted here, but none of them worked. After you paste the signature using Safari, you can go back to Chrome and the underline is gone.
|
How do I remove link underlining in my HTML email?
|
<td width="110" align="center" valign="top" style="color:#000000;">
<a href="https://example.com" target="_blank"
style="color:#000000; text-decoration:none;">BOOK NOW
</a>
</td>
I used this code to make a link in my HTML email. In browsers and Outlook it's working nicely, but in GMail, Hotmail, and ymail it shows links underlined.
Can anyone help me to get rid of this?
|
[
"<a href=\"#\" style=\"text-decoration:none !important; text-decoration:none;\">BOOK NOW</a>\n\nOutlook will strip out the style with !important tag leaving the regular style, thus no underline. The !important tag will over rule the web based email clients' default style, thus leaving no underline.\n",
"I see this has been answered; however, I feel this link provides appropriate information for what formatting is supported in various email clients.\nhttp://www.campaignmonitor.com/css/\nIt's worth noting that GMail and Outlook are two of the pickiest to format HTML email for.\n",
"Use !important in the text decoration rule.\n<a href=\"#\" style=\"text-decoration:none !important;\">BOOK NOW</a>\n\n",
"After half a day looking into this (and 2 years since this question was opened) I believe I have found a comprehensive answer to this.\n<a href=\"#\"><font color=\"#000000\"><span style='text-decoration:none;text-underline:none'>Link</span></font></a>\n\n(You need the text-underline property on the span inside the link and the font tag to edit the colour)\n",
"Another way to fool Gmail (for phone numbers): use a \n ~ instead of a \n -\n404-835-9421 --> 404~835~9421\nIt'll save you (or less savvy users ;-) the trip down html lane.\nI found another way to remove links in outlook that i tested so far. if you create a blank class for example in your css say .blank {} and then do the following to your links for example:\n<a href=\"http://www.link.com/\"><span class=\"blank\" style=\"text-decoration:none !important;\">Search</span></a>\n\nthis worked for me hopefully it will help someone who is still having trouble taking out the underline of links in outlook. If anyone has a workaround for gmail please could you help me tried everything in this thread nothing is working.\nThanks\n",
"Windows Mail seemed to outright ignore inline text-decoration tag but what fixed it for me was by adding this to the head:\n<!--[if (mso)|(mso 16)]>\n<style type=\"text/css\">\n body, table, td, a, span { font-family: Arial, Helvetica, sans-serif !important; }\n a {text-decoration: none;}\n</style>\n<![endif]-->\n\n",
"I think that if you put a span style after the <a> tag with text-decoration:none it will work in the majority of the browsers / email clients.\nAs in:\n<a href=\"\" style=\"text-decoration:underline\">\n <span style=\"color:#0b92ce; text-decoration:none\">BANANA</span>\n</a>\n\n",
"I added both declarations on the a href which worked in outlook and gmail apps. outlook ignores the !important and gmail needs it. Web versions of email work with both/either.\ntext-decoration: none !important; text-decoration: none;\n\n",
"All email clients adjust the HTML and the CSS code you provide by \ntheir own rules:\n\ne.g.: gmail removes everything but the inner HTML of the body tag.\n\n1. for most other clients you can have a style-tag in your header\n<style type=\"text/css\">\n a {text-decoration: none !important;}\n</style>\n\nnote: don't use CSS comments as YAHOO!Mail might cause trouble.\n\n2. to be on the save side add the same code inline into the A tag as you did and an extra span tag as well (the style rules in a tags get often removed)\n<a href=\"\" style=\"text-decoration: none !important;\">\n <span style=\"text-decoration: none !important;\">\n text\n </span>\n</a>\n\n",
"To completely \"hide\" underline for <a> in both mail application and web browser, can do the following tricky way.\n<a href=\"...\"><div style=\"background-color:red;\">\n <span style=\"color:red; text-decoration:underline;\"><span style=\"color:white;\">BUTTON</span></span>\n</div></a>\n\n\nColor in 1st <span> is the one you don't need, MUST set as same as your background color. (red in here)\nColor in 2nd <span> is the one for your button text. (white in here)\n\n",
"Text decoration none was not working for me, then i found an email in outlook that did not have the line and checked the code:\n<span style='font-size: 12px; font-family: \"Arial\",\"Verdana\", \"sans-serif\"; color: black; text-decoration-line: none;'>\n<a href=\"http://www.test.com\" style='font-size: 9.0pt; color: #C69E29; text-decoration: none;'><span>www.test.com</span></a>\n</span>\n\nThis one is working for me.\n",
"I used a combination of not showing links in google, adding links for mso (outlook) and the shy tag, to keep the looks and feels for my company. Some code may be redundant (for my company the looks where more important then the be clickable part. (it felt like a jigsaw, as every change brakes something else)\n<td style=\"color:rgb(69, 54, 53)\">\n<!--[if gte mso 9]>\n<a href=\"http://www.immothekerfinotheker.be\" style=\"text-decoration:none;\">\n<span style=\"text-decoration:none;\">\n<![endif]-->\nwww­.­immothekerfinotheker­.­be\n<!--[if gte mso 9]>\n</a>\n</span>\n<![endif]-->\n</td>\n\nHope this helps someone\n",
"It wholly depends on the email client whether it wants to display the underline under the link or not. As of now, the styles in the body are only supported by:\n\nOutlook 2007/10/13 +\nOutlook 2000/03\nApple iPhone/iPad\nOutlook.com\nApple Mail 4\nYahoo! Mail Beta\n\nhttp://www.campaignmonitor.com/css/\n",
"Use text-decoration:none !important; instead of text-decoration:none; to make sure you \"lose\" the underline.\n",
"Here in http://www.campaignmonitor.com/css/, a nice explanation to say this is restricted! And a pretty nice guide to know all limitations of CSS in email clients.\n",
"You can do \"redundant styling\" and that should fix the issue. You use the same styling you have on the but add it to a that is within the .\nExample:\n<td width=\"110\" align=\"center\" valign=\"top\" style=\"color:#000000;\">\n <a href=\"https://example.com\" target=\"_blank\"\n style=\"color:#000000; text-decoration:none;\"><span style=\"color:#000000; text-decoration:none;\">BOOK NOW</span></a>\n</td>\n\n",
"While viewing the html email try inspecting the element on that link and see what is overwriting it. Use that class and define it that style again in your head style and define the text-decoration: none !important;\nIn my case these are the classes that are overwriting my inline style so declared this on the head of my html email and defined the style that I want implemented. \nIt worked for me, hope it will work on your one too.\n.ii a[href]{\ntext-decoration: none !important;\n}\n\n#yiv8915438996 a:link, #yiv8915438996 span.yiv8915438996MsoHyperlink{\ntext-decoration: none !important;\n} \n\n#yiv8915438996 a:visited, #yiv8915438996 span.yiv8915438996MsoHyperlinkFollowed{\ntext-decoration: none !important;\n} \n\n",
"Code like the lines below worked for me in Gmail Web client. A non-underlined black link showed up in the email. I didn't use the nested span tag.\n<table>\n <tbody>\n <tr>\n <td>\n <a href=\"http://hexinpeter.com\" style=\"text-decoration: none; color: #000000 !important;\">Peter Blog</a>\n </td>\n </tr>\n </tbody>\n</table>\n\nNote: Gmail will strip off any incorrect inline styles. E.g. code like the line below will have its inline styles all stripped off.\n<a href=\"http://hexinpeter.com\" style=\"font-family:; text-decoration: none; color: #000000 !important;\">Peter Blog</a>\n\n",
"I copied my html page and pasted to word.\nEdited the signature in word deleting the spaces where the underline is placed and make my own \"padding\" presssing space bar.\nCopied again and pasted to Outlook 2013.\nWorked fine for me.\n",
"In Windows 10 Mail, you might need to add these in your html head:\n<!--[if (mso)|(mso 16)]>\n <style type=\"text/css\">\n body, table, td, a, span { font-family: Arial, Helvetica, sans-serif !important; }\n a {text-decoration: none;}\n </style>\n<![endif]-->\n\nThe 'a {text-decoration: none;}' fixed the underline problems :)\n",
"In my case, I configured the signature (copy and paste in gmail) using Safari. I tried every code you putted here, but those didn´t worked. After you paste the signature using Safari, you can come back to Chrome and the underline is gone.\n"
] |
[
41,
7,
5,
5,
3,
3,
2,
2,
1,
1,
1,
1,
0,
0,
0,
0,
0,
0,
0,
0,
0
] |
[
"All you have to do is: \n<a href=\"\" style=\"text-decoration:#none; letter-spacing: -999px;\">\n\n",
"place your \"a href\" tag without any styling before div / span of text. \nthen make your styling in the div/span tag.\nfor the most restricted styling email client.\n<div><a href=\"\"><span style=\"text-decoration:none\">title</span><a/></div>\n\n",
"You should write something like this.\n<a href=\"#\" style=\"text-decoration:none;\">BOOK NOW</a>\n\n"
] |
[
-1,
-2,
-3
] |
[
"css",
"email_client",
"html",
"html_email",
"newsletter"
] |
stackoverflow_0008998378_css_email_client_html_html_email_newsletter.txt
|
Q:
How to calculate distance from a player to a dynamic collision point
I'm trying to create sensors for a car to keep track of the distances from the car to the borders of the track. My goal is to have 5 sensors (see image below) and use them to train a machine learning algorithm.
But I can't figure out a way to calculate these distances. For now, I just need a sample of code and a logical explanation of how to implement this with PyGame. But a mathematical and geometrical explanation would be really nice as well for further reading. I'm using this code from a YouTuber tutorial series.
My biggest issue is how to get the points in blue. (last picture) I need them to create the red lines from the car to the points and to calculate the length of these lines. These points are taking the car's position and rotation into account and they have a specific angle at which they get out of the car. I've managed to create the lines, but could not get the point the line would collide with the track.
What I want to accomplish:
I've tried different approaches to this problem, but for now, my biggest problem is how to get the position of the blue dots:
--- Edit from the feedback ------
I added a new paragraph to better explain the problem. This way I hope it is clearer why this problem is different from those said to be related to it. In the other problems we already have the desired final position (a mouse or an enemy); in this one we have to figure out which point to use to create the line, and that is my issue.
My GitHub repo of the project
https://github.com/pedromello/ml-pygame/blob/main/main.py
The part of the code where I'm trying to implement this:
class AbstractCar:
def __init__(self, max_vel, rotation_vel):
self.img = self.IMG
self.max_vel = max_vel
self.vel = 0
self.rotation_vel = rotation_vel
self.angle = 0
self.x, self.y = self.START_POS
self.acceleration = 0.1
def rotate(self, left=False, right=False):
if left:
self.angle += self.rotation_vel
elif right:
self.angle -= self.rotation_vel
def draw(self, win):
blit_rotate_center(win, self.img, (self.x, self.y), self.angle)
def move_forward(self):
self.vel = min(self.vel + self.acceleration, self.max_vel)
self.move()
def move_backward(self):
self.vel = max(self.vel - self.acceleration, -self.max_vel/2)
self.move()
def move(self):
radians = math.radians(self.angle)
vertical = math.cos(radians) * self.vel
horizontal = math.sin(radians) * self.vel
self.y -= vertical
self.x -= horizontal
def collide(self, mask, x=0, y=0):
car_mask = pygame.mask.from_surface(self.img)
offset = (int(self.x - x), int(self.y - y))
poi = mask.overlap(car_mask, offset)
return poi
def reset(self):
self.x, self.y = self.START_POS
self.angle = 0
self.vel = 0
class PlayerCar(AbstractCar):
IMG = RED_CAR
START_POS = (180, 200)
def reduce_speed(self):
self.vel = max(self.vel - self.acceleration / 2, 0)
self.move()
def bounce(self):
self.vel = -self.vel
self.move()
def drawSensors(self):
radians = math.radians(self.angle)
vertical = -math.cos(radians)
horizontal = math.sin(radians)
car_center = pygame.math.Vector2(self.x + CAR_WIDTH/2, self.y + CAR_HEIGHT/2)
pivot_sensor = pygame.math.Vector2(car_center.x + horizontal * -100, car_center.y - vertical * -100)
#sensor1 = Vector(30, 0).rotate(self.angle) #+ self.pos # updates the position of sensor 1
#sensor2 = Vector(30, 0).rotate((self.angle+30)%360) #+ self.pos # updates the position of sensor 2
#sensor3 = Vector(30, 0).rotate((self.angle-30)%360) #+ self.pos # updates the position of sensor 3
#rotate pivot sensor around car center
sensor_2 = pivot_sensor.rotate((self.angle+30)%360)
# Sensor 1
pygame.draw.line(WIN, (255, 0, 0), car_center, pivot_sensor, 2)
# Sensor 2
pygame.draw.line(WIN, (255, 0, 0), car_center, sensor_2, 2)
# Sensor 3
#pygame.draw.line(WIN, (255, 0, 0), (self.x, self.y), (self.x + horizontal * 100, self.y - vertical * 100), 2)
A:
Thank you for the comments, I solved my problem using the idea of firing sensors so I can get the point on the wall when the "bullet" hits it.
As we can see when the bullet hits the wall we can create a line that connects the point to the car. This is not the best solution, as it takes time for the bullet to hit the wall and in the meantime, the car is "blind".
As Rabbid76 commented, using raycasting may be the solution I was looking for.
Code for reference:
Sensor Bullet class
class SensorBullet:
def __init__(self, car, base_angle, vel, color):
self.x = car.x + CAR_WIDTH/2
self.y = car.y + CAR_HEIGHT/2
self.angle = car.angle
self.base_angle = base_angle
self.vel = vel
self.color = color
self.img = pygame.Surface((4, 4))
self.fired = False
self.hit = False
self.last_poi = None
def draw(self, win):
pygame.draw.circle(win, self.color, (self.x, self.y), 2)
def fire(self, car):
self.angle = car.angle + self.base_angle
self.x = car.x + CAR_WIDTH/2
self.y = car.y + CAR_HEIGHT/2
self.fired = True
self.hit = False
def move(self):
if(self.fired):
radians = math.radians(self.angle)
vertical = math.cos(radians) * self.vel
horizontal = math.sin(radians) * self.vel
self.y -= vertical
self.x -= horizontal
def collide(self, x=0, y=0):
bullet_mask = pygame.mask.from_surface(self.img)
offset = (int(self.x - x), int(self.y - y))
poi = TRACK_BORDER_MASK.overlap(bullet_mask, offset)
if poi:
self.fired = False
self.hit = True
self.last_poi = poi
return poi
def draw_line(self, win, car):
if self.hit:
pygame.draw.line(win, self.color, (car.x + CAR_WIDTH/2, car.y + CAR_HEIGHT/2), (self.x, self.y), 1)
pygame.display.update()
def get_distance_from_poi(self, car):
if self.last_poi is None:
return -1
return math.sqrt((car.x - self.last_poi[0])**2 + (car.y - self.last_poi[1])**2)
Methods the car must perform to use the sensor
# Inside car's __init__ method
self.sensors = [SensorBullet(self, 25, 12, (100, 0, 255)), SensorBullet(self, 10, 12, (200, 0, 255)), SensorBullet(self, 0, 12, (0, 255, 0)), SensorBullet(self, -10, 12, (0, 0, 255)), SensorBullet(self, -25, 12, (0, 0, 255))]
# ------
# Cars methods
def fireSensors(self):
for bullet in self.sensors:
bullet.fire(self)
def sensorControl(self):
#print(contains(self.sensors, lambda x: x.hit))
for bullet in self.sensors:
if not bullet.fired:
bullet.fire(self)
for bullet in self.sensors:
bullet.move()
def get_distance_array(self):
return [bullet.get_distance_from_poi(self) for bullet in self.sensors]
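Following up on the raycasting remark above: instead of moving a bullet a few pixels per frame, you can march along the ray inside a single frame and stop at the first set pixel of the mask, so the car is never "blind". A minimal sketch (my addition, not part of the original solution), assuming the same TRACK_BORDER_MASK and the angle convention used in move():
def cast_ray(mask, origin, angle_deg, max_dist=500, step=2):
    # March along the ray until it hits a set pixel in the mask.
    # Returns (hit_point, distance), or (None, max_dist) if nothing was hit.
    radians = math.radians(angle_deg)
    dx, dy = -math.sin(radians), -math.cos(radians)  # matches move(): x -= sin, y -= cos
    width, height = mask.get_size()
    for dist in range(0, max_dist, step):
        x = int(origin[0] + dx * dist)
        y = int(origin[1] + dy * dist)
        if not (0 <= x < width and 0 <= y < height):
            break  # the ray left the screen without hitting the border
        if mask.get_at((x, y)):
            return (x, y), dist
    return None, max_dist

# Hypothetical usage for one sensor, 25 degrees off the car's heading:
# poi, distance = cast_ray(TRACK_BORDER_MASK,
#                          (car.x + CAR_WIDTH / 2, car.y + CAR_HEIGHT / 2),
#                          car.angle + 25)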
|
How to calculate distance from a player to a dynamic collision point
|
I'm trying to create sensors for a car to keep track of the distances from the car to the borders of the track. My goal is to have 5 sensors (see image below) and use them to train a machine learning algorithm.
But I can't figure out a way to calculate these distances. For now, I just need a sample of code and a logical explanation of how to implement this with PyGame. But a mathematical and geometrical explanation would be really nice as well for further reading. I'm using this code from a YouTuber tutorial series.
My biggest issue is how to get the points in blue. (last picture) I need them to create the red lines from the car to the points and to calculate the length of these lines. These points are taking the car's position and rotation into account and they have a specific angle at which they get out of the car. I've managed to create the lines, but could not get the point the line would collide with the track.
What I want to accomplish:
I've tried different approaches to this problem, but for now, my biggest problem is how to get the position of the blue dots:
--- Edit from the feedback ------
I added a new paragraph to better explain the problem. This way I hope it is clearer why this problem is different from those said to be related to it. In the other problems we already have the desired final position (a mouse or an enemy); in this one we have to figure out which point to use to create the line, and that is my issue.
My GitHub repo of the project
https://github.com/pedromello/ml-pygame/blob/main/main.py
The part of the code where I'm trying to implement this:
class AbstractCar:
def __init__(self, max_vel, rotation_vel):
self.img = self.IMG
self.max_vel = max_vel
self.vel = 0
self.rotation_vel = rotation_vel
self.angle = 0
self.x, self.y = self.START_POS
self.acceleration = 0.1
def rotate(self, left=False, right=False):
if left:
self.angle += self.rotation_vel
elif right:
self.angle -= self.rotation_vel
def draw(self, win):
blit_rotate_center(win, self.img, (self.x, self.y), self.angle)
def move_forward(self):
self.vel = min(self.vel + self.acceleration, self.max_vel)
self.move()
def move_backward(self):
self.vel = max(self.vel - self.acceleration, -self.max_vel/2)
self.move()
def move(self):
radians = math.radians(self.angle)
vertical = math.cos(radians) * self.vel
horizontal = math.sin(radians) * self.vel
self.y -= vertical
self.x -= horizontal
def collide(self, mask, x=0, y=0):
car_mask = pygame.mask.from_surface(self.img)
offset = (int(self.x - x), int(self.y - y))
poi = mask.overlap(car_mask, offset)
return poi
def reset(self):
self.x, self.y = self.START_POS
self.angle = 0
self.vel = 0
class PlayerCar(AbstractCar):
IMG = RED_CAR
START_POS = (180, 200)
def reduce_speed(self):
self.vel = max(self.vel - self.acceleration / 2, 0)
self.move()
def bounce(self):
self.vel = -self.vel
self.move()
def drawSensors(self):
radians = math.radians(self.angle)
vertical = -math.cos(radians)
horizontal = math.sin(radians)
car_center = pygame.math.Vector2(self.x + CAR_WIDTH/2, self.y + CAR_HEIGHT/2)
pivot_sensor = pygame.math.Vector2(car_center.x + horizontal * -100, car_center.y - vertical * -100)
#sensor1 = Vector(30, 0).rotate(self.angle) #+ self.pos # updates the position of sensor 1
#sensor2 = Vector(30, 0).rotate((self.angle+30)%360) #+ self.pos # updates the position of sensor 2
#sensor3 = Vector(30, 0).rotate((self.angle-30)%360) #+ self.pos # updates the position of sensor 3
#rotate pivot sensor around car center
sensor_2 = pivot_sensor.rotate((self.angle+30)%360)
# Sensor 1
pygame.draw.line(WIN, (255, 0, 0), car_center, pivot_sensor, 2)
# Sensor 2
pygame.draw.line(WIN, (255, 0, 0), car_center, sensor_2, 2)
# Sensor 3
#pygame.draw.line(WIN, (255, 0, 0), (self.x, self.y), (self.x + horizontal * 100, self.y - vertical * 100), 2)
|
[
"Thank you for the comments, I solved my problem using the idea of firing sensors so I can get the point on the wall when the \"bullet\" hits it.\n\nAs we can see when the bullet hits the wall we can create a line that connects the point to the car. This is not the best solution, as it takes time for the bullet to hit the wall and in the meantime, the car is \"blind\".\nAs Rabbid76 commented, using raycasting may be the solution I was looking for.\nCode for reference:\nSensor Bullet class\nclass SensorBullet:\n def __init__(self, car, base_angle, vel, color):\n self.x = car.x + CAR_WIDTH/2\n self.y = car.y + CAR_HEIGHT/2\n self.angle = car.angle\n self.base_angle = base_angle\n self.vel = vel\n self.color = color\n self.img = pygame.Surface((4, 4))\n self.fired = False\n self.hit = False\n self.last_poi = None\n\n def draw(self, win):\n pygame.draw.circle(win, self.color, (self.x, self.y), 2)\n\n def fire(self, car):\n self.angle = car.angle + self.base_angle\n self.x = car.x + CAR_WIDTH/2\n self.y = car.y + CAR_HEIGHT/2\n self.fired = True\n self.hit = False\n\n def move(self):\n if(self.fired):\n radians = math.radians(self.angle)\n vertical = math.cos(radians) * self.vel\n horizontal = math.sin(radians) * self.vel\n\n self.y -= vertical\n self.x -= horizontal\n\n def collide(self, x=0, y=0):\n bullet_mask = pygame.mask.from_surface(self.img)\n offset = (int(self.x - x), int(self.y - y))\n poi = TRACK_BORDER_MASK.overlap(bullet_mask, offset)\n if poi:\n self.fired = False\n self.hit = True\n self.last_poi = poi\n return poi\n\n def draw_line(self, win, car):\n if self.hit:\n pygame.draw.line(win, self.color, (car.x + CAR_WIDTH/2, car.y + CAR_HEIGHT/2), (self.x, self.y), 1)\n pygame.display.update()\n\n def get_distance_from_poi(self, car):\n if self.last_poi is None:\n return -1\n return math.sqrt((car.x - self.last_poi[0])**2 + (car.y - self.last_poi[1])**2)\n\nMethods the car must perform to use the sensor\n# Inside car's __init__ method\nself.sensors = [SensorBullet(self, 25, 12, (100, 0, 255)), SensorBullet(self, 10, 12, (200, 0, 255)), SensorBullet(self, 0, 12, (0, 255, 0)), SensorBullet(self, -10, 12, (0, 0, 255)), SensorBullet(self, -25, 12, (0, 0, 255))]\n# ------\n\n# Cars methods\ndef fireSensors(self): \n for bullet in self.sensors:\n bullet.fire(self)\n\ndef sensorControl(self):\n #print(contains(self.sensors, lambda x: x.hit))\n\n for bullet in self.sensors:\n if not bullet.fired:\n bullet.fire(self)\n\n for bullet in self.sensors:\n bullet.move()\n\ndef get_distance_array(self):\n return [bullet.get_distance_from_poi(self) for bullet in self.sensors]\n\n\n"
] |
[
0
] |
[] |
[] |
[
"euclidean_distance",
"geometry",
"math",
"python",
"raycasting"
] |
stackoverflow_0074616569_euclidean_distance_geometry_math_python_raycasting.txt
|
Q:
Chrome Extension - If a specific webpage is opened then
I want to make a chrome extension to do this:
If a specific webpage is opened in the browser (checked against a list of webpages, e.g. google.com, yahoo.com, etc.), open a specific page in a new tab.
For example:
When I open booking.com I want the plugin to open a new tab with airbnb.com. This action will save a cookie with a lifetime of x days to prevent the plugin from opening airbnb.com again during that number of days.
booking.com opens airbnb.com;
website2 opens website3
website4 opens website5
etc.
Any ideas? Thank you.
A:
This is my suggestion.
manifest.json
{
"name": "hoge",
"version": "1.0",
"manifest_version": 3,
"permissions": [
"tabs"
],
"background": {
"service_worker": "background.js"
}
}
background.js
chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
if (changeInfo.status == "complete") {
if (tab.url.indexOf("https://www.yahoo.co.jp/") != -1) {
chrome.tabs.create({ url: "https://nory-soft.web.app/" });
}
}
});
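To cover the list of site pairs and the "open at most once every x days" requirement from the question, here is one possible extension. This is my sketch, not tested code from the answer: it keeps a timestamp per site in chrome.storage.local, and it needs "storage" added to the permissions array in manifest.json.
const PAIRS = { "booking.com": "https://www.airbnb.com/" }; // add more pairs here
const COOLDOWN_DAYS = 7; // the "x days" from the question

chrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.status !== "complete" || !tab.url) return;
  for (const [site, target] of Object.entries(PAIRS)) {
    if (!tab.url.includes(site)) continue;
    chrome.storage.local.get([site], (stored) => {
      const last = stored[site] ?? 0;
      const cooldownMs = COOLDOWN_DAYS * 24 * 60 * 60 * 1000;
      if (Date.now() - last > cooldownMs) {
        chrome.tabs.create({ url: target }); // open the paired site
        chrome.storage.local.set({ [site]: Date.now() }); // start the cooldown
      }
    });
  }
});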
|
Chrome Extension - If a specific webpage is opened then
|
I want to make a chrome extension to do this:
If a specific webpage is opened in the browser (checked against a list of webpages, e.g. google.com, yahoo.com, etc.), open a specific page in a new tab.
For example:
When I open booking.com I want the plugin to open a new tab with airbnb.com. This action will save a cookie with a lifetime of x days to prevent the plugin from opening airbnb.com again during that number of days.
booking.com opens airbnb.com;
website2 opens website3
website4 opens website5
etc.
Any ideas? Thank you.
|
[
"This is my suggestion.\nmanifest.json\n{\n \"name\": \"hoge\",\n \"version\": \"1.0\",\n \"manifest_version\": 3,\n \"permissions\": [\n \"tabs\"\n ],\n \"background\": {\n \"service_worker\": \"background.js\"\n }\n}\n\nbackground.js\nchrome.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {\n if (changeInfo.status == \"complete\") {\n if (tab.url.indexOf(\"https://www.yahoo.co.jp/\") != -1) {\n chrome.tabs.create({ url: \"https://nory-soft.web.app/\" });\n }\n }\n});\n\n"
] |
[
0
] |
[] |
[] |
[
"google_chrome_extension"
] |
stackoverflow_0074659893_google_chrome_extension.txt
|
Q:
Trouble validating file path in PowerShell script
I am having some trouble validating a file input in a script. Using the following function
Function pathtest {
[CmdletBinding()]
param (
[ValidateScript({
if (-Not ($_ | Test-Path -PathType Leaf) ) {
throw "The Path argument must be a file. Folder paths are not allowed."
}
return $true
})]
[System.IO.FileInfo]$Path
)
Write-Host ("Output file: {0}" -f $Path.FullName)
}
Then I call the function with these two input files
pathtest -Path c:\temp\test.txt
pathtest -Path c:\temp\test.csv
The first one (test.txt) returns the path, but the second one (test.csv) returns an error:
PS C:\> pathtest c:\it\test.txt
Output file: c:\it\test.txt
PS C:\> pathtest c:\it\test.csv
pathtest : Cannot validate argument on parameter 'Path'. The Path argument must be a file. Folder paths are not
allowed.
At line:1 char:10
+ pathtest c:\it\test.csv
+ ~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [pathtest], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationError,pathtest
Any idea what's up with this?
A:
Test-Path -PathType Leaf also returns $false when the specified path doesn't exist.
Therefore, you need to test for existence separately, so as to distinguish between an existing path that happens to be a folder and a non-existent path.
Try the following instead:
Function pathtest {
[CmdletBinding()]
param (
[ValidateScript({
$item = Get-Item -ErrorAction Ignore -LiteralPath $_
if (-not $item) {
throw "Path doesn't exist: $_"
}
elseif ($item.PSIsContainer) {
throw "The Path argument must be a file. Folder paths are not allowed."
}
return $true
})]
[System.IO.FileInfo] $Path
)
Write-Host ("Output file: {0}" -f $Path.FullName)
}
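A quick sanity check of the revised function, reusing the question's paths (assuming only test.txt exists):
pathtest c:\it\test.txt   # prints: Output file: c:\it\test.txt
pathtest c:\it\test.csv   # throws: Path doesn't exist: c:\it\test.csv
pathtest c:\it            # throws: The Path argument must be a file. Folder paths are not allowed.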
|
Trouble validating file path in PowerShell script
|
I am having some trouble validating a file input in a script. Using the following function
Function pathtest {
[CmdletBinding()]
param (
[ValidateScript({
if (-Not ($_ | Test-Path -PathType Leaf) ) {
throw "The Path argument must be a file. Folder paths are not allowed."
}
return $true
})]
[System.IO.FileInfo]$Path
)
Write-Host ("Output file: {0}" -f $Path.FullName)
}
Then I call the function with these two input files
pathtest -Path c:\temp\test.txt
pathtest -Path c:\temp\test.csv
The first one (test.txt) returns the path, but the second one (test.csv) returns an error:
PS C:\> pathtest c:\it\test.txt
Output file: c:\it\test.txt
PS C:\> pathtest c:\it\test.csv
pathtest : Cannot validate argument on parameter 'Path'. The Path argument must be a file. Folder paths are not
allowed.
At line:1 char:10
+ pathtest c:\it\test.csv
+ ~~~~~~~~~~~~~~
+ CategoryInfo : InvalidData: (:) [pathtest], ParameterBindingValidationException
+ FullyQualifiedErrorId : ParameterArgumentValidationError,pathtest
Any idea what's up with this?
|
[
"\nTest-Path -Leaf also returns $false when the specified path doesn't exist.\nTherefore, you need to test for existence separately, so as to distinguish between an existing path that happens to be a folder and a non-existent path.\nTry the following instead:\nFunction pathtest {\n [CmdletBinding()]\n param (\n [ValidateScript({\n $item = Get-Item -ErrorAction Ignore -LiteralPath $_\n if (-not $item) {\n throw \"Path doesn't exist: $_\"\n }\n elseif ($item.PSIsContainer) {\n throw \"The Path argument must be a file. Folder paths are not allowed.\"\n }\n return $true\n })]\n [System.IO.FileInfo] $Path\n )\n\n Write-Host (\"Output file: {0}\" -f $Path.FullName)\n}\n\n"
] |
[
1
] |
[] |
[] |
[
"powershell",
"powershell_5.1"
] |
stackoverflow_0074660710_powershell_powershell_5.1.txt
|
Q:
numpy.ndarray.data attribute buffer object
I create different numpy arrays as follows:
import numpy as np
a = np.array([[1,2,3],[1,2,3]]) # 2d array of integers
b = np.array([[1,2,3],[1,2,5.0]]) # 2d array of floats
c = np.array([1,2,3,4,5,6,7,8,9]) # 1d array of integers
d = np.array([10,20,30]) # different 1d array of integers
# python buffer object pointing to the start of the arrays data.
print(a.data)
print(b.data)
print(c.data)
print(d.data)
According to the numpy docs I expect to get a "python buffer object pointing to the start of the arrays data". Here is the official doc (from the numpy website): https://numpy.org/doc/stable/reference/generated/numpy.ndarray.data.html
So i would expect a different memory address for each array.
but i get this:
<memory at 0x000001EAE8B7CAD0>
<memory at 0x000001EAE8B7CAD0>
<memory at 0x000001EAE98E8A00>
<memory at 0x000001EAE98E8A00>
The first two have the same memory buffer object pointer.
And the second two have the same memory buffer object pointer.
Is numpy inferring the memory buffer object pointer from the dimensions of the array? If not, what is the rule that produces identical pointer addresses?
A:
They aren't sharing the same memory. .data creates a memoryview object every time the attribute is accessed.
You can see from this session that it's a different address every time:
>>> d.data
<memory at 0x6ffff70bddc0>
>>> d.data
<memory at 0x6ffff70bdb80>
>>> d.data
<memory at 0x6ffff70bd1c0>
In your case, the object is reclaimed immediately after print() and the next one created happens to have the same address.
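A small follow-up illustration (my addition, not from the original session): holding the memoryviews in a list keeps them all alive, so their addresses can no longer be recycled:
>>> views = [d.data for _ in range(3)]  # keep all three objects alive
>>> views[0] is views[1]                # a distinct object every time
False
>>> len({id(v) for v in views})         # three different addresses
3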
A:
In an ipython session:
In [38]: type(a.data)
Out[38]: memoryview
and the docs for that object:
In [39]: a.data?
Type: memoryview
String form: <memory at 0x000002AE38BAF5F0>
Length: 2
Docstring: Create a new memoryview object which references the given object.
The print string of a memoryview doesn't tell us anything about the data buffer address.
It can be used to make a view of the array:
In [45]: aa = np.ndarray(a.shape, a.dtype, buffer=a.data)
In [46]: aa
Out[46]:
array([[1, 2, 3],
[1, 2, 3]])
In [47]: aa.base is a
Out[47]: True
The data of __array_interface__ is closer to being a numeric address of the underlying data buffer. I don't think it can be used in code, but I find it useful when checking whether an array is a view or copy:
In [48]: a.__array_interface__
Out[48]:
{'data': (2947257325392, False),
'strides': None,
'descr': [('', '<i4')],
'typestr': '<i4',
'shape': (2, 3),
'version': 3}
In [49]: aa.__array_interface__['data']
Out[49]: (2947257325392, False)
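As an aside (not part of the original answer), numpy also provides np.shares_memory for exactly this kind of check, without comparing addresses by hand:
In [50]: np.shares_memory(a, aa)   # aa was constructed on a's buffer above
Out[50]: True
In [51]: np.shares_memory(a, a.copy())
Out[51]: False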
|
numpy.ndarray.data attribute buffer object
|
I create different numpy arrays as follows:
import numpy as np
a = np.array([[1,2,3],[1,2,3]]) # 2d array of integers
b = np.array([[1,2,3],[1,2,5.0]]) # 2d array of floats
c = np.array([1,2,3,4,5,6,7,8,9]) # 1d array of integers
d = np.array([10,20,30]) # different 1d array of integers
# python buffer object pointing to the start of the arrays data.
print(a.data)
print(b.data)
print(c.data)
print(d.data)
According to the numpy docs I expect to get a "python buffer object pointing to the start of the arrays data". Here is the official doc (from the numpy website): https://numpy.org/doc/stable/reference/generated/numpy.ndarray.data.html
So i would expect a different memory address for each array.
but i get this:
<memory at 0x000001EAE8B7CAD0>
<memory at 0x000001EAE8B7CAD0>
<memory at 0x000001EAE98E8A00>
<memory at 0x000001EAE98E8A00>
The first two have the same memory buffer object pointer.
And the second two have the same memory buffer object pointer.
Is numpy inferring the memory buffer object pointer from the dimensions of the array? If not, what is the rule that produces identical pointer addresses?
|
[
"They aren't sharing the same memory. .data creates a memoryview object every time the attribute is accessed.\nYou can see from this session that it's a different address every time:\n>>> d.data\n<memory at 0x6ffff70bddc0>\n>>> d.data\n<memory at 0x6ffff70bdb80>\n>>> d.data\n<memory at 0x6ffff70bd1c0>\n\nIn your case, the object is reclaimed immediately after print() and the next one created happens to have the same address.\n",
"In an ipython session:\nIn [38]: type(a.data)\nOut[38]: memoryview\n\nand the docs for that object:\nIn [39]: a.data?\nType: memoryview\nString form: <memory at 0x000002AE38BAF5F0>\nLength: 2\nDocstring: Create a new memoryview object which references the given object.\n\nThe print string of a memory view doesn't tell us anything about the databuffer address.\nIt can be used to make a view of the the array:\nIn [45]: aa = np.ndarray(a.shape, a.dtype, buffer=a.data) \nIn [46]: aa\nOut[46]: \narray([[1, 2, 3],\n [1, 2, 3]]) \nIn [47]: aa.base is a\nOut[47]: True\n\nThe data of __array_interface__ is closer to being a numeric address of the underlying data buffer. I don't think it can be used in code, but I find it useful when checking whether an array is a view or copy:\nIn [48]: a.__array_interface__\nOut[48]: \n{'data': (2947257325392, False),\n 'strides': None,\n 'descr': [('', '<i4')],\n 'typestr': '<i4',\n 'shape': (2, 3),\n 'version': 3}\nIn [49]: aa.__array_interface__['data']\nOut[49]: (2947257325392, False)\n\n"
] |
[
2,
1
] |
[] |
[] |
[
"numpy",
"numpy_ndarray",
"python"
] |
stackoverflow_0074661127_numpy_numpy_ndarray_python.txt
|
Q:
Using Scss Variable Overrides in Vue 3 with Vuetify 3 Beta Using Vue CLI
I am working on a project using Vue 3, Vuetify 3.0.0 (beta 0), and the latest Vue CLI. I am trying to customize the Vuetify font; however, every method I've found online to override the Vuetify SASS variables has failed.
The first attempt I made was using the Vuetify documentation on the Vuetify website: https://next.vuetifyjs.com/en/features/sass-variables/
Using a default project, I added a styles directory and a variables.scss file as directed. Inside the variables.scss file I have the following contents:
$body-font-family: cursive;
Digging through the variables inside the Vuetify lib directory, it looks like this is the variable I need to override (and while I will eventually use a custom font, for now cursive should be different enough to validate that it works).
This did not work. I tried changing the directory to scss and got the same results (it does not import); see the result image below.
So my second attempt was following the documentation found in the vue.config.js file, where it points to https://github.com/vuetifyjs/vuetify-loader/tree/next/packages/vuetify-loader#customising-variables . This led to me changing my vue.config.js file to look like:
const { defineConfig } = require("@vue/cli-service");
const { VuetifyLoaderPlugin } = require("vuetify-loader");
module.exports = defineConfig({
transpileDependencies: true,
configureWebpack: {
plugins: [new VuetifyLoaderPlugin({ styles: "expose" })],
},
pluginOptions: {
vuetify: {
// https://github.com/vuetifyjs/vuetify-loader/tree/next/packages/vuetify-loader
},
},
});
with a main.scss file added to the plugins dir with the following contents:
// main.scss
$font: cursive !important;
@use 'vuetify/styles' with (
$color-pack: false,
$utilities: false,
$body-font-family: $font
);
This basically removed all formatting from the page but did not change the text to a cursive font (see image below).
At this point I have been searching online and been unable to find anything that has worked:
Initial research led me to Change default font in vuetify; this is where I got my first two approaches. I did not have luck with the SASS loaders either.
The GitHub issue https://github.com/vuetifyjs/vuetify-loader/issues/221 seems to pose a similar question but led to no results.
How to override vuetify 3 components sass variable with vue 3 poses a similar question but lacked depth.
Here is a git repo with my current code: https://github.com/dragonman117/Vuetify-Theme-Test
A:
I just fought with this for a solid 2 hours and FINALLY figured it out. Stumbled upon your question and figured I'd throw you a bone, even if it is 3 months later.
The problem currently is that the Vuetify 3 docs for this are not up to date. I'm going to try my hand at updating the docs in a PR after this. The key missing piece is that vuetify-loader was used for Vuetify 2, but Vuetify 3 is using the new webpack-plugin-vuetify. I figured that out when I stumbled upon vuetify-loader's "next" branch (see the README).
Anyway, here's all you need for when you're using a Vue CLI installation.
vuetify.ts
import "@/styles/variables.scss";
// the rest of your vuetify.ts file...
variables.scss
@use "vuetify/styles" with (
$body-font-family: "Comic Sans"
);
Edit: Also, I just started having issues with including commas in my font family in that list after the "with," so I ended up doing the following. This may not be the best SASSy solution for this, but idk their syntax well enough to come up with something else atm:
variables.scss
$body-font-family: Inter, sans-serif;
@use "vuetify/styles" with (
$body-font-family: $body-font-family
);
Hope this helps. Good luck.
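For completeness, here is roughly what the Vue CLI config looks like with the newer plugin; this is my sketch based on the webpack-plugin-vuetify README, so treat the option names as assumptions rather than confirmed API:
// vue.config.js
const { defineConfig } = require("@vue/cli-service");
const { VuetifyPlugin } = require("webpack-plugin-vuetify");

module.exports = defineConfig({
  transpileDependencies: true,
  configureWebpack: {
    plugins: [
      // point SASS variable customisation at the variables.scss shown above
      new VuetifyPlugin({ styles: { configFile: "src/styles/variables.scss" } }),
    ],
  },
});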
A:
And if you want to override default typography settings:
@use 'vuetify/settings' with (
$heading-font-family: 'Comic Sans',
$body-font-family: 'Comic Sans',
$typography: (
'h1': (
'size': 2rem,
'weight': 500,
'line-height': 2.2rem,
'letter-spacing': 0,
'font-family': 'Comic Sans',
'text-transform': none
),
),
);
|
Using Scss Variable Overrides in Vue 3 with Vuetify 3 Beta Using Vue CLI
|
I am working on a project using Vue 3, Vuetify 3.0.0 (beta 0), and the latest Vue CLI. I am trying to customize the Vuetify font; however, every method I've found online to override the Vuetify SASS variables has failed.
The first attempt I made was using the Vuetify documentation on the Vuetify website: https://next.vuetifyjs.com/en/features/sass-variables/
Using a default project, I added a styles directory and a variables.scss file as directed. Inside the variables.scss file I have the following contents:
$body-font-family: cursive;
Digging through the variables inside the Vuetify lib directory, it looks like this is the variable I need to override (and while I will eventually use a custom font, for now cursive should be different enough to validate that it works).
This did not work. I tried changing the directory to scss and got the same results (it does not import); see the result image below.
So my second attempt was following the documentation found in the vue.config.js file, where it points to https://github.com/vuetifyjs/vuetify-loader/tree/next/packages/vuetify-loader#customising-variables . This led to me changing my vue.config.js file to look like:
const { defineConfig } = require("@vue/cli-service");
const { VuetifyLoaderPlugin } = require("vuetify-loader");
module.exports = defineConfig({
transpileDependencies: true,
configureWebpack: {
plugins: [new VuetifyLoaderPlugin({ styles: "expose" })],
},
pluginOptions: {
vuetify: {
// https://github.com/vuetifyjs/vuetify-loader/tree/next/packages/vuetify-loader
},
},
});
with a main.scss file added to the plugins dir with the following contents:
// main.scss
$font: cursive !important;
@use 'vuetify/styles' with (
$color-pack: false,
$utilities: false,
$body-font-family: $font
);
This basically removed all formatting from the page but did not change the text to a cursive font (see image below).
At this point I have been searching online and been unable to find anything that has worked:
Initial research led me to Change default font in vuetify; this is where I got my first two approaches. I did not have luck with the SASS loaders either.
The GitHub issue https://github.com/vuetifyjs/vuetify-loader/issues/221 seems to pose a similar question but led to no results.
How to override vuetify 3 components sass variable with vue 3 poses a similar question but lacked depth.
Here is a git repo with my current code: https://github.com/dragonman117/Vuetify-Theme-Test
|
[
"I just fought with this for a solid 2 hours and FINALLY figured it out. Stumbled upon your question and figured I'd throw you a bone, even if it is 3 months later.\nThe problem currently is that the Vuetify 3 docs for this are not up to date. I'm going to try my hand at updating the docs in a PR after this. The key missing piece is that vuetify-loader was used for Vuetify 2, but Vuetify 3 is using the new wepback-plugin-vuetify. I figured that out when I stumbled upon vuetify-loader's \"next\" branch (see the README).\nAnyway, here's all you need for when you're using a Vue CLI installation.\nvuetify.ts\nimport @/styles/variables.scss;\n// the rest of your vuetify.ts file...\n\nvariables.scss\n@use \"vuetify/styles\" with (\n $body-font-family: \"Comic Sans\"\n);\n\nEdit: Also, I just started having issues with including commas in my font family in that list after the \"with,\" so I ended up doing the following. This may not be the best SASSy solution for this, but idk their syntax well enough to come up with something else atm:\nvariables.scss\n$body-font-family: Inter, sans-serif;\n\n@use \"vuetify/styles\" with (\n $body-font-family: $body-font-family\n);\n\nHope this helps. Good luck.\n",
"And if you want to override default typography settings:\n@use 'vuetify/settings' with (\n $heading-font-family: Comic Sans,\n $body-font-family: Comic Sans,\n $typography: (\n 'h1': (\n 'size': 2rem,\n 'weight': 500,\n 'line-height': 2.2rem,\n 'letter-spacing': 0,\n 'font-family': Comic Sans,\n 'text-transform': none\n ),\n ),\n),\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"vuetify.js",
"vuetify_loader",
"vuetifyjs3"
] |
stackoverflow_0071564313_vuetify.js_vuetify_loader_vuetifyjs3.txt
|
Q:
How is a session ID authenticated in SharedPreferences?
I am learning more about SharedPreferences and would like to understand how exactly everything is working. I have watched a decent amount of videos but I still have some questions.
Upon a user logging in, I generate a random session ID using UUID. I then assign the user their session ID and save a session by passing the UserModel into a SessionManagement instance that handles SharedPreferences.
userModel.setSessionId(UUID.randomUUID().toString());
SessionManagement sessionManagement = new SessionManagement(LoginActivity.this);
sessionManagement.saveSession(userModel);
When the user closes/opens the app, onStart() is called. It creates another instance of SessionManagement and checks if the session is null using getSession() to determine whether they're logged in or not.
SessionManagement sessionManagement = new SessionManagement(LoginActivity.this);
if (sessionManagement.getSession() != null) {
// go to some activity
}
And here is what SessionManagement constructor looks like:
private SharedPreferences sharedPreferences;
private final SharedPreferences.Editor editor;
private MasterKey masterKey;
//private String SHARED_PREF_NAME = "session";
private final String SESSION_KEY = "session_user";
private final String SESSION_USERNAME = "session_username";
public SessionManagement(Context context) {
try {
masterKey = new MasterKey.Builder(context)
.setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
.build();
} catch (GeneralSecurityException | IOException e) {
e.printStackTrace();
}
try {
sharedPreferences = EncryptedSharedPreferences.create(
context,
"secret_shared_prefs",
masterKey,
EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
);
} catch (GeneralSecurityException | IOException e) {
e.printStackTrace();
}
//sharedPreferences = context.getSharedPreferences(SHARED_PREF_NAME, Context.MODE_PRIVATE);
editor = sharedPreferences.edit();
}
My question now is, if I am just checking whether the session is null or not, how does SharedPreferences know that the sessionID corresponds to the user that initialized it in step 1?
What are the ways that people work around weak/exposed session ID's that a SharedPreferences implementation can protect against?
Is my implementation/flow correct?
Is it safe to save the sessionID to a user model?
I appreciate any help I can get with this topic!
A:
It looks like your implementation is using a combination of UUID and SharedPreferences to manage sessions for users. When a user logs in, you generate a random UUID and save it as a session ID, which is then associated with the user. When the user closes and reopens the app, you check the SharedPreferences to see if the session ID is still present, and if it is, you assume the user is still logged in.
To answer your specific questions:
1. SharedPreferences doesn't have any built-in way of knowing which user a session ID belongs to. It simply stores key-value pairs, and it's up to you to manage the relationship between the session ID and the user it belongs to.
2. There are several ways to protect against weak or exposed session IDs. One common approach is to use a combination of UUIDs and tokens, where the UUID is used to identify the session and the token is used to authenticate the user. This way, even if an attacker somehow gets hold of the session ID, they won't be able to impersonate the user unless they also have the user's token.
3. Your implementation seems to be correct, but it's always a good idea to review and test your code to make sure it's working as expected.
4. Saving the session ID to a user model is generally safe, as long as you take steps to protect the user's data. For example, you could encrypt the user's data before saving it to SharedPreferences, or you could use a secure database to store the user's data instead of SharedPreferences.
Overall, it's important to carefully consider the security implications of your session management implementation, and to take steps to protect against potential attacks. I hope this helps!
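For reference, a minimal sketch (my assumption of what the question's missing methods could look like, reusing its SESSION_KEY and SESSION_USERNAME constants and assuming the obvious UserModel getters):
public void saveSession(UserModel user) {
    editor.putString(SESSION_KEY, user.getSessionId());
    editor.putString(SESSION_USERNAME, user.getUsername());
    editor.apply(); // asynchronous write; use commit() if you need the result
}

public String getSession() {
    // null means "no session saved", which is exactly what onStart() checks for
    return sharedPreferences.getString(SESSION_KEY, null);
}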
|
How is a session ID authenticated in SharedPreferences?
|
I am learning more about SharedPreferences and would like to understand how exactly everything is working. I have watched a decent amount of videos but I still have some questions.
Upon a user logging in, I generate a random session ID using UUID. I then assign the user their session ID and save a session by passing the UserModel into a SessionManagement instance that handles SharedPreferences.
userModel.setSessionId(UUID.randomUUID().toString());
SessionManagement sessionManagement = new SessionManagement(LoginActivity.this);
sessionManagement.saveSession(userModel);
When the user closes/opens the app, onStart() is called. It creates another instance of SessionManagement and checks if the session is null using getSession() to determine whether they're logged in or not.
SessionManagement sessionManagement = new SessionManagement(LoginActivity.this);
if (sessionManagement.getSession() != null) {
// go to some activity
}
And here is what SessionManagement constructor looks like:
private SharedPreferences sharedPreferences;
private final SharedPreferences.Editor editor;
private MasterKey masterKey;
//private String SHARED_PREF_NAME = "session";
private final String SESSION_KEY = "session_user";
private final String SESSION_USERNAME = "session_username";
public SessionManagement(Context context) {
try {
masterKey = new MasterKey.Builder(context)
.setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
.build();
} catch (GeneralSecurityException | IOException e) {
e.printStackTrace();
}
try {
sharedPreferences = EncryptedSharedPreferences.create(
context,
"secret_shared_prefs",
masterKey,
EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM
);
} catch (GeneralSecurityException | IOException e) {
e.printStackTrace();
}
//sharedPreferences = context.getSharedPreferences(SHARED_PREF_NAME, Context.MODE_PRIVATE);
editor = sharedPreferences.edit();
}
My question now is, if I am just checking whether the session is null or not, how does SharedPreferences know that the sessionID corresponds to the user that initialized it in step 1?
What are the ways that people work around weak/exposed session ID's that a SharedPreferences implementation can protect against?
Is my implementation/flow correct?
Is it safe to save the sessionID to a user model?
I appreciate any help I can get with this topic!
|
[
"It looks like your implementation is using a combination of UUID and SharedPreferences to manage sessions for users. When a user logs in, you generate a random UUID and save it as a session ID, which is then associated with the user. When the user closes and reopens the app, you check the SharedPreferences to see if the session ID is still present, and if it is, you assume the user is still logged in.\nTo answer your specific questions:\nSharedPreferences doesn't have any built-in way of knowing which user a session ID belongs to. It simply stores key-value pairs, and it's up to you to manage the relationship between the session ID and the user it belongs to.\nThere are several ways to protect against weak or exposed session IDs. One common approach is to use a combination of UUIDs and tokens, where the UUID is used to identify the session and the token is used to authenticate the user. This way, even if an attacker somehow gets hold of the session ID, they won't be able to impersonate the user unless they also have the user's token.\nYour implementation seems to be correct, but it's always a good idea to review and test your code to make sure it's working as expected.\nSaving the session ID to a user model is generally safe, as long as you take steps to protect the user's data. For example, you could encrypt the user's data before saving it to SharedPreferences, or you could use a secure database to store the user's data instead of SharedPreferences.\nOverall, it's important to carefully consider the security implications of your session management implementation, and to take steps to protect against potential attacks. I hope this helps!\n"
] |
[
1
] |
[] |
[] |
[
"android",
"java",
"session",
"sharedpreferences"
] |
stackoverflow_0074605361_android_java_session_sharedpreferences.txt
|
Q:
MongoDB GO driver overwriting existing data
I am using GO-FIBER and the MongoDB Go Driver.
I want to update only the fields given by the body. But it is overwriting the data.
func UpdateOneUser(c *fiber.Ctx) error {
params := c.Params("id")
body := new(models.User)
id, err := primitive.ObjectIDFromHex(params)
if err != nil {
return c.Status(500).SendString("invalid onjectid")
}
if err := c.BodyParser(&body); err != nil {
return c.Status(400).SendString("invalid body")
}
filter := bson.M{"_id": id}
update := bson.M{"$set": bson.M{
"name": body.Name,
"username": body.Username,
"first_name": body.FirstName,
"last_name": body.LastName,
"email": body.Email,
"phone_number": body.PhoneNumber,
"contry": body.Contry,
"age": body.Age,
"child_accounts": body.ChildAccounts,
"groups": body.Groups,
}}
result, err := db.User.UpdateOne(context.Background(), filter, update)
if err != nil {
return c.Status(500).SendString("user not found")
}
fmt.Println(result)
return c.JSON(body)
}
If this is how the driver works then tell me a better way to update my documents.
A:
The $set operator will overwrite every field you list in it, so you have to build the update statement selectively, including only the fields that were actually provided:
fields := bson.M{}
if body.Name != "" {
    fields["name"] = body.Name
}
...
update := bson.M{"$set": fields}
You can use a small helper to cut down the repetition:
fields := bson.M{}
add := func(key, value string) {
    if value != "" {
        fields[key] = value
    }
}
add("name", body.Name)
add("username", body.Username)
|
MongoDB GO driver overwriting existing data
|
I am using Go Fiber and the MongoDB Go Driver.
I want to update only the fields given by the body. But it is overwriting the data.
func UpdateOneUser(c *fiber.Ctx) error {
params := c.Params("id")
body := new(models.User)
id, err := primitive.ObjectIDFromHex(params)
if err != nil {
return c.Status(500).SendString("invalid objectid")
}
if err := c.BodyParser(&body); err != nil {
return c.Status(400).SendString("invalid body")
}
filter := bson.M{"_id": id}
update := bson.M{"$set": bson.M{
"name": body.Name,
"username": body.Username,
"first_name": body.FirstName,
"last_name": body.LastName,
"email": body.Email,
"phone_number": body.PhoneNumber,
"contry": body.Contry,
"age": body.Age,
"child_accounts": body.ChildAccounts,
"groups": body.Groups,
}}
result, err := db.User.UpdateOne(context.Background(), filter, update)
if err != nil {
return c.Status(500).SendString("user not found")
}
fmt.Println(result)
return c.JSON(body)
}
If this is how the driver works then tell me a better way to update my documents.
|
[
"The $set operator will overwrite all the fields you specify, so you have to build the update statement selectively:\nfields:=bson.M{}\nif body.Name!=\"\" {\n fields[\"name\"]=body.Name\n}\n...\nupdate:=bson.M{\"$set\":fields}\n\nYou can use some shortcuts:\nfields:=bson.M{}\nadd:=func(key,value string) {\n if value!=\"\" {\n fields[key]=value\n }\n}\nadd(\"name\",body.Name)\nadd(\"userName\",body.UserName)\n\n"
] |
[
1
] |
[] |
[] |
[
"go",
"go_fiber",
"mongodb"
] |
stackoverflow_0074661282_go_go_fiber_mongodb.txt
|
Q:
Run code for every subset starting by filtering data with df.loc
I am trying to run some experiments with my Python code. The input of my code is based on a DataFrame. To filter my DataFrame I use df.loc. Before running my code I filter the DataFrame for the instance I want to run my code on. I have the following list of instances:
instance = ['A', 'B', 'C', 'D']
(These instances are also contained in a column in my DataFrame named df['Instance'].) When I want to run my code for instance 'A' only, I first filter my dataframe for instance 'A':
df = df.loc[(df['Instance'] == 'A')]
When I want to run my code for instance 'B'
df = df.loc[(df['Instance'] == 'B')]
When I want to run my code for instance 'A' and 'B' I do the following:
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]
Now I want to run my code for all the subsets of 'A', 'B', 'C', 'D'. I can make the subsets with the following function:
from itertools import chain, combinations
def powerset(iterable):
s = list(iterable)
return chain.from_iterable(combinations(s, r) for r in range(1, len(s)+1))
subsets = list(powerset(instance))
Giving the following output
[('A',), ('B',), ('C',), ('D',), ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D'), ('A', 'B', 'C'), ('A', 'B', 'D'), ('A', 'C', 'D'), ('B', 'C', 'D'), ('A', 'B', 'C', 'D')]
Now I want to run my code for all the subsets, starting by filtering the DataFrame for the items in each subset. At the moment I filter for every subset by hand using df.loc; what I want to achieve is that my code runs for every subset automatically. Does anyone have a tip on how to do this?
Expecting:
Iterate through all the subsets.
Run code for A (subset 1)
df = df.loc[(df['Instance'] == 'A')]
Run code for B (subset 2)
df = df.loc[(df['Instance'] == 'B')]
Run code for C (subset 3)
df = df.loc[(df['Instance'] == 'C')]
Run code for D (subset 4)
df = df.loc[(df['Instance'] == 'D')]
Run code for A, B (subset 5)
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]
Etc.
A:
I think you want to use pandas.Series.apply which
Invoke[s] function on values of Series.
It takes each value from the series, in your case df["Instance"] and passes it through a function. Your function only needs to check whether the instance is in the element of subsets you're currently working on:
for subset in subsets:
selected_rows = df["Instance"].apply(lambda i: i in subset)
# do things with selected rows
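A vectorized alternative is pandas.Series.isin, which builds the same boolean mask without a Python-level function call per row. A small self-contained sketch (the column name Instance and the instance list come from the question; the sample data here is made up):
from itertools import chain, combinations
import pandas as pd

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(1, len(s) + 1))

df = pd.DataFrame({"Instance": ["A", "B", "C", "D", "A"],
                   "value": [1, 2, 3, 4, 5]})

for subset in powerset(["A", "B", "C", "D"]):
    # True for rows whose Instance is one of the subset's members.
    sub_df = df.loc[df["Instance"].isin(subset)]
    # ... run the experiment on sub_df here ...
    print(subset, len(sub_df))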
|
Run code for every subset starting by filtering data with df.loc
|
I am trying to run some experiments with my Python code. The input of my code is based on a DataFrame. To filter my DataFrame I use df.loc. Before running my code I filter the DataFrame for the instance I want to run my code on. I have the following list of instances:
instance = ['A', 'B', 'C', 'D']
(These instances are also contained in a column in my DataFrame named df['Instance'].) When I want to run my code for instance 'A' only, I first filter my dataframe for instance 'A':
df = df.loc[(df['Instance'] == 'A')]
When I want to run my code for instance 'B'
df = df.loc[(df['Instance'] == 'B')]
When I want to run my code for instance 'A' and 'B' I do the following:
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]
Now I want to run my code for all the subsets of 'A', 'B', 'C', 'D'. I can make the subsets with the following function:
from itertools import chain, combinations
def powerset(iterable):
s = list(iterable)
return chain.from_iterable(combinations(s, r) for r in range(1, len(s)+1))
subsets = list(powerset(instance))
Giving the following output
[('A',), ('B',), ('C',), ('D',), ('A', 'B'), ('A', 'C'), ('A', 'D'), ('B', 'C'), ('B', 'D'), ('C', 'D'), ('A', 'B', 'C'), ('A', 'B', 'D'), ('A', 'C', 'D'), ('B', 'C', 'D'), ('A', 'B', 'C', 'D')]
Now I want to run my code for all the subsets, starting by filtering the DataFrame for the items in each subset. At the moment I filter for every subset by hand using df.loc; what I want to achieve is that my code runs for every subset automatically. Does anyone have a tip on how to do this?
Expecting:
Iterate through all the subsets.
Run code for A (subset 1)
df = df.loc[(df['Instance'] == 'A')]
Run code for B (subset 2)
df = df.loc[(df['Instance'] == 'B')]
Run code for C (subset 3)
df = df.loc[(df['Instance'] == 'C')]
Run code for D (subset 4)
df = df.loc[(df['Instance'] == 'D')]
Run code for A, B (subset 5)
df = df.loc[(df['Instance'] == 'A') | (df['Instance'] == 'B')]
Etc.
|
[
"I think you want to use pandas.Series.apply which\n\nInvoke[s] function on values of Series.\n\nIt takes each value from the series, in your case df[\"Instance\"] and passes it through a function. Your function only needs to check whether the instance is in the element of subsets you're currently working on:\nfor subset in subsets:\n selected_rows = df[\"Instance\"].apply(lambda i: i in subset)\n # do things with selected rows\n\n"
] |
[
0
] |
[] |
[] |
[
"dataframe",
"pandas",
"python"
] |
stackoverflow_0074661188_dataframe_pandas_python.txt
|
Q:
React / Node - PayPal can't capture a new subscription
I want to capture a new PayPal subscription from the frontend in my backend and respond with the data needed for MongoDB.
If I add a body with capture_type: 'OUTSTANDING_BALANCE' (I found that in the manual) I'm getting this error.
So I'm not sure whether it's just a wrong body or whether I messed up something else in the backend, but so far I can't capture the subscription, even though I get a subscription ID from the createSubscription controller.
PayPalScriptProvider
<PayPalScriptProvider options={initialOptions}>
<PayPalSubscriptionButton/>
</PayPalScriptProvider>
PayPal Button
{isPending ? <LoadingMedium /> : null}
<PayPalButtons
createSubscription={(data, actions) => {
return axios
.post(
'/api/subscription',
)
.then((response) => {
return response.data.id;
});
}}
onApprove={(data, actions) => {
axios
.post(`/api/subscription/${data.subscriptionID}/capture`)
.then(() => {
axios
.patch(
`/api/activesubscription`,
{
id: activeSub[0]?._id,
subscriptionID: data.subscriptionID,
}
)
});
}}
/>
Route for createSubscription
router.route('/subscription').post(async (req, res) => {
const searchPlan = await SubscriptionAmount.find();
console.log(searchPlan[0]?.subscriptionAmount);
const subscription = await paypalFee.createSubscription(
searchPlan[0]?.subscriptionAmount
);
res.json(subscription);
});
Router for onApprove
router.post('/subscription/:subscriptionID/capture', async (req, res) => {
const { subscriptionID } = req.params;
console.log('subscriptionID', subscriptionID);
const captureData = await paypalFee.captureSubscription(subscriptionID);
console.log('captureData', captureData);
res.json(captureData);
});
createSubscription Controller
async function createSubscription(planId) {
const accessToken = await generateAccessToken();
const url = `${base}/v1/billing/subscriptions`;
const response = await fetch(url, {
method: 'post',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
},
body: JSON.stringify({
intent: 'subscription',
plan_id: planId,
}),
});
const data = await response.json();
console.log('data', data);
return data;
}
captureSubscription Controller
async function captureSubscription(subscriptionId) {
const accessToken = await generateAccessToken();
const url = `${base}/v1/billing/subscriptions/${subscriptionId}/capture`;
const response = await fetch(url, {
method: 'post',
body: JSON.stringify({
// capture_type: 'OUTSTANDING_BALANCE',
}),
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
},
});
const data = await response.json();
console.log('data', data);
return data;
}
I'm getting these logs for my data in captureSubscription if I do not pass a body in my captureSubscription controller:
captureData {
name: 'INVALID_REQUEST',
message: 'Request is not well-formed, syntactically incorrect, or violates schema.',
details: [
{
location: 'body',
issue: 'MISSING_REQUEST_BODY',
description: 'Request body is missing.'
}
]
}
With a body, I'm getting this error:
captureData {
name: 'UNPROCESSABLE_ENTITY',
message: 'The requested action could not be performed, semantically incorrect, or failed business validation.',
details: [
{
issue: 'ZERO_OUTSTANDING_BALANCE',
description: 'Current outstanding balance should be greater than zero.'
}
],
}
A:
ZERO_OUTSTANDING_BALANCE
There is no outstanding balance to capture. An outstanding balance occurs when payments are missed due to failures.
For ordinary (non-outstanding) subscription payments, no captures can be triggered; subscriptions capture automatically on the schedule you specify in the plan. That is the point of subscriptions.
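If the capture route only exists to confirm the subscription before saving it to MongoDB, a common alternative is to fetch the subscription and check its status field instead of capturing anything. A minimal sketch, assuming the same generateAccessToken helper and base URL as the question's controllers:
async function getSubscription(subscriptionId) {
  const accessToken = await generateAccessToken(); // assumed from the question
  const url = `${base}/v1/billing/subscriptions/${subscriptionId}`;
  const response = await fetch(url, {
    method: 'get',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${accessToken}`,
    },
  });
  const data = await response.json();
  // data.status is e.g. APPROVAL_PENDING, ACTIVE, SUSPENDED or CANCELLED;
  // only persist the subscription once it is ACTIVE.
  return data;
}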
|
React / Node - PayPal can't capture a new subscription
|
I want to capture a new PayPal subscription from the frontend in my backend and respond with the data needed for MongoDB.
If I add a body with capture_type: 'OUTSTANDING_BALANCE' (I found that in the manual) I'm getting this error.
So I'm not sure whether it's just a wrong body or whether I messed up something else in the backend, but so far I can't capture the subscription, even though I get a subscription ID from the createSubscription controller.
PayPalScriptProvider
<PayPalScriptProvider options={initialOptions}>
<PayPalSubscriptionButton/>
</PayPalScriptProvider>
PayPal Button
{isPending ? <LoadingMedium /> : null}
<PayPalButtons
createSubscription={(data, actions) => {
return axios
.post(
'/api/subscription',
)
.then((response) => {
return response.data.id;
});
}}
onApprove={(data, actions) => {
axios
.post(`/api/subscription/${data.subscriptionID}/capture`)
.then(() => {
axios
.patch(
`/api/activesubscription`,
{
id: activeSub[0]?._id,
subscriptionID: data.subscriptionID,
}
)
});
}}
/>
Route for createSubscription
router.route('/subscription').post(async (req, res) => {
const searchPlan = await SubscriptionAmount.find();
console.log(searchPlan[0]?.subscriptionAmount);
const subscription = await paypalFee.createSubscription(
searchPlan[0]?.subscriptionAmount
);
res.json(subscription);
});
Router for onApprove
router.post('/subscription/:subscriptionID/capture', async (req, res) => {
const { subscriptionID } = req.params;
console.log('subscriptionID', subscriptionID);
const captureData = await paypalFee.captureSubscription(subscriptionID);
console.log('captureData', captureData);
res.json(captureData);
});
createSubscription Controller
async function createSubscription(planId) {
const accessToken = await generateAccessToken();
const url = `${base}/v1/billing/subscriptions`;
const response = await fetch(url, {
method: 'post',
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
},
body: JSON.stringify({
intent: 'subscription',
plan_id: planId,
}),
});
const data = await response.json();
console.log('data', data);
return data;
}
captureSubscription Controller
async function captureSubscription(subscriptionId) {
const accessToken = await generateAccessToken();
const url = `${base}/v1/billing/subscriptions/${subscriptionId}/capture`;
const response = await fetch(url, {
method: 'post',
body: JSON.stringify({
// capture_type: 'OUTSTANDING_BALANCE',
}),
headers: {
'Content-Type': 'application/json',
Authorization: `Bearer ${accessToken}`,
},
});
const data = await response.json();
console.log('data', data);
return data;
}
I'm getting these logs for my data in captureSubscription if I do not pass a body in my captureSubscription controller:
captureData {
name: 'INVALID_REQUEST',
message: 'Request is not well-formed, syntactically incorrect, or violates schema.',
details: [
{
location: 'body',
issue: 'MISSING_REQUEST_BODY',
description: 'Request body is missing.'
}
]
}
With a body, I'm getting this error:
captureData {
name: 'UNPROCESSABLE_ENTITY',
message: 'The requested action could not be performed, semantically incorrect, or failed business validation.',
details: [
{
issue: 'ZERO_OUTSTANDING_BALANCE',
description: 'Current outstanding balance should be greater than zero.'
}
],
}
|
[
"\nZERO_OUTSTANDING_BALANCE\n\nThere is no outstanding balance to capture. An outstanding balance occurs when payments are missed due to failures.\nFor ordinary (non-outstanding) subscription payments, no captures can be triggered. Subscriptions will capture automatically on the schedule you specify in the plan, that is the point of subscriptions.\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"node.js",
"paypal",
"paypal_sandbox",
"reactjs"
] |
stackoverflow_0074661253_javascript_node.js_paypal_paypal_sandbox_reactjs.txt
|
Q:
yq returns null but not read in condition if value is null
I have a command that takes a field in a YAML file and will execute some commands if the returned value is null. I am using this implementation https://github.com/kislyuk/yq .
TAG="$(yq -y '.pod.image.imageTag' "${VALUES_FILE}")"
if [ "${TAG}" = null ]; then
echo "no tag is found..."
else
echo "Tag is ${TAG}..."
fi
but I keep getting 'Tag is null'. I tried with 'null' and "null" but same result...
A:
Using yq:
On my computer the content of the tag is null\n..., not null.
To get rid of the trailing dots (the ... YAML document-end marker) use yq without the option -y.
Add the option -r for raw output.
INPUT='
pod:
image:
imageTag: moonwalk
'
TAG=$(yq -r '.pod.image.imageTag' <<< "$INPUT")
if [ "$TAG" = "null" ]; then
echo "no tag is found"
else
echo "Tag is $TAG"
fi
Output
Tag is moonwalk
INPUT='
pod:
image:
imageTag:
'
# same code as above
Output
no tag is found
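Since this yq implementation passes its options through to jq, another option is jq's -e flag, which makes the exit status non-zero when the result is null or false, so the string comparison disappears entirely. A minimal sketch, assuming the same VALUES_FILE as the question:
#!/bin/bash
# -r: raw output; -e: exit status 1 when the result is null/false
if TAG=$(yq -r -e '.pod.image.imageTag' "${VALUES_FILE}"); then
    echo "Tag is ${TAG}..."
else
    echo "no tag is found..."
fi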
|
yq returns null but not read in condition if value is null
|
I have a command that takes a field in a YAML file and will execute some commands if the returned value is null. I am using this implementation https://github.com/kislyuk/yq .
TAG="$(yq -y '.pod.image.imageTag' "${VALUES_FILE}")"
if [ "${TAG}" = null ]; then
echo "no tag is found..."
else
echo "Tag is ${TAG}..."
fi
but I keep getting 'Tag is null'. I tried with 'null' and "null" but same result...
|
[
"Using yq:\nOn my computer the content of the tag is null\\n..., not null.\nTo get rid of the trailing dashes use yq without the option -y.\nAdd the option -r for raw output.\nINPUT='\npod:\n image:\n imageTag: moonwalk\n'\n\nTAG=$(yq -r '.pod.image.imageTag' <<< \"$INPUT\")\nif [ $TAG == null ]; then\n echo \"no tag is found\"\nelse\n echo \"Tag is $TAG\"\nfi\n\nOutput\nTag is moonwalk\n\n\nINPUT='\npod:\n image:\n imageTag:\n'\n\n# same code as above\n\nOutput\nno tag is found\n\n"
] |
[
1
] |
[] |
[] |
[
"bash",
"yq"
] |
stackoverflow_0074661129_bash_yq.txt
|
Q:
Bootstrap Accordions expanding, but not collapsing?
I am having trouble with a Bootstrap 5 accordion. They all expand without issue but won't collapse back down. I don't know if my JS or CSS is causing the issue. I don't see any errors in Chrome's console.
https://www.harpercollege.edu/dev/whoward-dev-area/df.php
I copied them into codepen and didn't have any issues.
<h2>Important Dates (Fall 2022)</h2>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e639">
16-Week Classes (August 22 - December 16)</a></div>
</div>
<div id="collapse_d15e639" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>16-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for 16-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 21, 2022</td>
<td>Last day to withdraw from 16-week classes</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>December 12-16, 2022</td>
<td>Final Exam Week</td>
</tr>
<tr>
<td>December 21, 2022</td>
<td>Grades available online</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e685">
First 8-Week Classes (August 22 - October 16)</a></div>
</div>
<div id="collapse_d15e685" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>First 8-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for first 8-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>October 3, 2022</td>
<td>Last day to withdraw from first 8-week classes</td>
</tr>
<tr>
<td>October 16, 2022</td>
<td>First 8-week classes end</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e722">
Second 8-Week Classes (October 17 - December 11)</a></div>
</div>
<div id="collapse_d15e722" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>October 17, 2022</td>
<td>Second 8-week classes begin this week</td>
</tr>
<tr>
<td>October 24, 2022</td>
<td>Last day to drop for 100% refund for second 8-week classes.</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>November 28, 2022</td>
<td>Last day to withdraw from second 8-week classes.</td>
</tr>
<tr>
<td>December 11, 2022</td>
<td>Second 8-week term ends</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e759">
First 13-week Classes (August 22 - November 20)</a></div>
</div>
<div id="collapse_d15e759" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>First 13-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for first 13-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>October 31, 2022</td>
<td>Last day to withdraw from first 13-week classes</td>
</tr>
<tr>
<td>November 22, 2022</td>
<td>First 13-week classes end.</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e799">
Second 13-week Classes (September 19 - December 16)</a></div>
</div>
<div id="collapse_d15e799" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>September 19, 2022</td>
<td>Second 13-week classes begin this week</td>
</tr>
<tr>
<td>September 26, 2022</td>
<td>Last day to drop for 100% refund for second 13-week classes</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>November 28, 2022</td>
<td>Last day to withdraw from second 13-week classes</td>
</tr>
<tr>
<td>December 16, 2022</td>
<td>Second 13-week classes end.</td>
</tr>
</tbody>
</table>
</div>
</div>
</div></div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e836">
Important Dates for All Parts of Term</a></div>
</div>
<div id="collapse_d15e836" class="accordion-body collapse">
<div class="accordion-inner">
<p><a href="https://www.harpercollege.edu/registration/pdf/web_dates_04192022.pdf">Fall 2022 Important Dates for All Parts of Term (pdf)</a></p>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e846">
Final Examination Schedule</a></div>
</div>
<div id="collapse_d15e846" class="accordion-body collapse">
<div class="accordion-inner">
<p><a href="https://www.harpercollege.edu/registration/pdf/finalexamschedule_fall_2022.pdf">Final Examination Schedule Fall 2022 (pdf)</a></p>
</div>
</div><script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.0/js/bootstrap.min.js"></script></div>
A:
Please check this code now; it works as expected. You were missing a wrapper with a parent id and the data-bs-parent="#accordionExample" attribute on each collapse element.
<h2>Important Dates (Fall 2022)</h2>
<div class="accordion-content" id="accordionExample">
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e639">
16-Week Classes (August 22 - December 16)</a></div>
</div>
<div id="collapse_d15e639" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>16-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for 16-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 21, 2022</td>
<td>Last day to withdraw from 16-week classes</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>December 12-16, 2022</td>
<td>Final Exam Week</td>
</tr>
<tr>
<td>December 21, 2022</td>
<td>Grades available online</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e685">
First 8-Week Classes (August 22 - October 16)</a></div>
</div>
<div id="collapse_d15e685" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>First 8-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for first 8-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>October 3, 2022</td>
<td>Last day to withdraw from first 8-week classes</td>
</tr>
<tr>
<td>October 16, 2022</td>
<td>First 8-week classes end</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e722">
Second 8-Week Classes (October 17 - December 11)</a></div>
</div>
<div id="collapse_d15e722" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>October 17, 2022</td>
<td>Second 8-week classes begin this week</td>
</tr>
<tr>
<td>October 24, 2022</td>
<td>Last day to drop for 100% refund for second 8-week classes.</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>November 28, 2022</td>
<td>Last day to withdraw from second 8-week classes.</td>
</tr>
<tr>
<td>December 11, 2022</td>
<td>Second 8-week term ends</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e759">
First 13-week Classes (August 22 - November 20)</a></div>
</div>
<div id="collapse_d15e759" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>First 13-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for first 13-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>October 31, 2022</td>
<td>Last day to withdraw from first 13-week classes</td>
</tr>
<tr>
<td>November 22, 2022</td>
<td>First 13-week classes end.</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e799">
Second 13-week Classes (September 19 - December 16)</a></div>
</div>
<div id="collapse_d15e799" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>September 19, 2022</td>
<td>Second 13-week classes begin this week</td>
</tr>
<tr>
<td>September 26, 2022</td>
<td>Last day to drop for 100% refund for second 13-week classes</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>November 28, 2022</td>
<td>Last day to withdraw from second 13-week classes</td>
</tr>
<tr>
<td>December 16, 2022</td>
<td>Second 13-week classes end.</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e836">
Important Dates for All Parts of Term</a></div>
</div>
<div id="collapse_d15e836" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<p><a href="https://www.harpercollege.edu/registration/pdf/web_dates_04192022.pdf">Fall 2022
Important Dates for All Parts of Term (pdf)</a></p>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
aria-expanded="false" href="#collapse_d15e846">
Final Examination Schedule</a></div>
</div>
<div id="collapse_d15e846" class="accordion-body collapse" data-bs-parent="#accordionExample">
<div class="accordion-inner">
<p><a href="https://www.harpercollege.edu/registration/pdf/finalexamschedule_fall_2022.pdf">Final
Examination Schedule Fall 2022 (pdf)</a></p>
</div>
</div>
<script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.0/js/bootstrap.min.js"></script>
</div>
</div>
</div>
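Stripped of the tables, the fix above reduces to the sketch below: each collapse element points at the shared wrapper through data-bs-parent, which is what tells Bootstrap to close the other panels in the group when one is toggled. The wrapper and panel ids here are placeholders, and bootstrap.min.js is assumed to be loaded as in the page above:
<div id="accordionGroup">
  <div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
      aria-expanded="false" href="#panelOne">Panel one</a></div>
  <!-- data-bs-parent ties this panel to the #accordionGroup wrapper -->
  <div id="panelOne" class="accordion-body collapse" data-bs-parent="#accordionGroup">
    <div class="accordion-inner">Panel one content</div>
  </div>
  <div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse"
      aria-expanded="false" href="#panelTwo">Panel two</a></div>
  <div id="panelTwo" class="accordion-body collapse" data-bs-parent="#accordionGroup">
    <div class="accordion-inner">Panel two content</div>
  </div>
</div>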
A:
Rather than using Bootstrap accordion, I use the following structure that works quite well. It opens the panel clicked on and closes any other that may already be open.
CSS
.FAQ {
margin: 0 auto;
max-width: 100%;
}
.FAQcard {
margin: 10px 0;
position: relative;
}
.FAQtitle {
background: #fff;
box-shadow: 0 0 10px rgba(0, 0, 0, 0.2);
color: #000;
font-weight: bold;
font-size: 150%;
cursor: default;
display: block;
padding: 1em 1.5em;
position: relative;
text-align: left;
}
.FAQtitle::after {
content: " ";
width: 8px;
height: 8px;
border-right: 1px solid #4a6e78;
border-bottom: 1px solid #4a6e78;
position: absolute;
right: 20px;
top: 20px;
-webkit-transform: rotate(-45deg);
transform: rotate(-45deg);
-webkit-transition: all 0.2s ease-in-out;
transition: all 0.2s ease-in-out;
}
.FAQtitle.active::after {
-webkit-transform: rotate(45deg);
transform: rotate(45deg);
-webkit-transition: all 0.2s ease-in-out;
transition: all 0.2s ease-in-out;
}
.FAQpanel {
background: #f1f2f3;
color: #000;
display: none;
margin: 0;
padding: 2em;
text-align: left;
}
.FAQpanel p {
margin-bottom: 5px;
text-align: justify;
}
HTML
<div class="FAQ">
<div class="FAQcard">
<div class="FAQtitle">Test Panel 1</div>
<div class="FAQpanel fs-5">
Test text for panel 1
</div>
</div>
<div class="FAQcard">
<div class="FAQtitle">Test Panel 2</div>
<div class="FAQpanel fs-5">
Test text for panel 2
</div>
</div>
</div>
JavaScript inside the $(document).ready() function:
$(".FAQtitle").click(function (j) {
var dropDown = $(this).closest(".FAQcard").find(".FAQpanel");
$(this).closest(".FAQ").find(".FAQpanel").not(dropDown).slideUp();
if ($(this).hasClass("active")) {
$(this).removeClass("active").removeClass("selected");
} else {
$(this).closest(".FAQ").find(".FAQtitle.active").removeClass("active").removeClass("selected");
$(this).addClass("active selected");
}
dropDown.stop(false, true).slideToggle();
j.preventDefault();
});
Hopefully this is helpful for you!
|
Bootstrap Accordions expanding, but not collapsing?
|
I am having trouble with a Bootstrap 5 accordion. They all expand without issue but won't collapse back down. I don't know if my JS or CSS is causing the issue. I don't see any errors in Chrome's console.
https://www.harpercollege.edu/dev/whoward-dev-area/df.php
I copied them into codepen and didn't have any issues.
<h2>Important Dates (Fall 2022)</h2>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e639">
16-Week Classes (August 22 - December 16)</a></div>
</div>
<div id="collapse_d15e639" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>16-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for 16-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 21, 2022</td>
<td>Last day to withdraw from 16-week classes</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>December 12-16, 2022</td>
<td>Final Exam Week</td>
</tr>
<tr>
<td>December 21, 2022</td>
<td>Grades available online</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e685">
First 8-Week Classes (August 22 - October 16)</a></div>
</div>
<div id="collapse_d15e685" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>First 8-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for first 8-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>October 3, 2022</td>
<td>Last day to withdraw from first 8-week classes</td>
</tr>
<tr>
<td>October 16, 2022</td>
<td>First 8-week classes end</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e722">
Second 8-Week Classes (October 17 - December 11)</a></div>
</div>
<div id="collapse_d15e722" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>October 17, 2022</td>
<td>Second 8-week classes begin this week</td>
</tr>
<tr>
<td>October 24, 2022</td>
<td>Last day to drop for 100% refund for second 8-week classes.</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>November 28, 2022</td>
<td>Last day to withdraw from second 8-week classes.</td>
</tr>
<tr>
<td>December 11, 2022</td>
<td>Second 8-week term ends</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e759">
First 13-week Classes (August 22 - November 20)</a></div>
</div>
<div id="collapse_d15e759" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>August 22, 2022</td>
<td>First 13-week classes begin this week</td>
</tr>
<tr>
<td>August 29, 2022</td>
<td>Last day to drop for 100% refund for first 13-week classes</td>
</tr>
<tr>
<td>September 5, 2022</td>
<td>College closed in observance of Labor Day</td>
</tr>
<tr>
<td>September 7, 2022</td>
<td>First Financial Aid Disbursement</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>October 31, 2022</td>
<td>Last day to withdraw from first 13-week classes</td>
</tr>
<tr>
<td>November 22, 2022</td>
<td>First 13-week classes end.</td>
</tr>
</tbody>
</table>
</div>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e799">
Second 13-week Classes (September 19 - December 16)</a></div>
</div>
<div id="collapse_d15e799" class="accordion-body collapse">
<div class="accordion-inner">
<div class="harperTable">
<table>
<tbody>
<tr>
<td><span>April 18-20, 2022</span></td>
<td><span>Fall priority registration begins</span></td>
</tr>
<tr>
<td><span>April 21, 2022</span></td>
<td><span>Open registration for Fall 2022 for all students.</span></td>
</tr>
<tr>
<td>September 19, 2022</td>
<td>Second 13-week classes begin this week</td>
</tr>
<tr>
<td>September 26, 2022</td>
<td>Last day to drop for 100% refund for second 13-week classes</td>
</tr>
<tr>
<td>November 8, 2022</td>
<td>College closed in observance of General Election Day</td>
</tr>
<tr>
<td>November 23-27, 2022</td>
<td>Thanksgiving Break</td>
</tr>
<tr>
<td>November 28, 2022</td>
<td>Last day to withdraw from second 13-week classes</td>
</tr>
<tr>
<td>December 16, 2022</td>
<td>Second 13-week classes end.</td>
</tr>
</tbody>
</table>
</div>
</div>
</div></div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e836">
Important Dates for All Parts of Term</a></div>
</div>
<div id="collapse_d15e836" class="accordion-body collapse">
<div class="accordion-inner">
<p><a href="https://www.harpercollege.edu/registration/pdf/web_dates_04192022.pdf">Fall 2022 Important Dates for All Parts of Term (pdf)</a></p>
</div>
</div>
</div>
</div>
<div class="accordion">
<div class="accordion-group">
<div class="accordion-alt-1">
<div class="accordion-heading"><a class="accordion-toggle collapsed" data-bs-toggle="collapse" aria-expanded="false" href="#collapse_d15e846">
Final Examination Schedule</a></div>
</div>
<div id="collapse_d15e846" class="accordion-body collapse">
<div class="accordion-inner">
<p><a href="https://www.harpercollege.edu/registration/pdf/finalexamschedule_fall_2022.pdf">Final Examination Schedule Fall 2022 (pdf)</a></p>
</div>
</div><script src="https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.0/js/bootstrap.min.js"></script></div>
|
[
"Please check this code now. It's working fine now. You have missed a parent id and attribute data-bs-parent=\"#accordionExample\".\n<h2>Important Dates (Fall 2022)</h2>\n<div class=\"accordion-content\" id=\"accordionExample\">\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e639\">\n 16-Week Classes (August 22 - December 16)</a></div>\n </div>\n <div id=\"collapse_d15e639\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <div class=\"harperTable\">\n <table>\n <tbody>\n <tr>\n <td><span>April 18-20, 2022</span></td>\n <td><span>Fall priority registration begins</span></td>\n </tr>\n <tr>\n <td><span>April 21, 2022</span></td>\n <td><span>Open registration for Fall 2022 for all students.</span></td>\n </tr>\n <tr>\n <td>August 22, 2022</td>\n <td>16-week classes begin this week</td>\n </tr>\n <tr>\n <td>August 29, 2022</td>\n <td>Last day to drop for 100% refund for 16-week classes</td>\n </tr>\n <tr>\n <td>September 5, 2022</td>\n <td>College closed in observance of Labor Day</td>\n </tr>\n <tr>\n <td>September 7, 2022</td>\n <td>First Financial Aid Disbursement</td>\n </tr>\n <tr>\n <td>November 8, 2022</td>\n <td>College closed in observance of General Election Day</td>\n </tr>\n <tr>\n <td>November 21, 2022</td>\n <td>Last day to withdraw from 16-week classes</td>\n </tr>\n <tr>\n <td>November 23-27, 2022</td>\n <td>Thanksgiving Break</td>\n </tr>\n <tr>\n <td>December 12-16, 2022</td>\n <td>Final Exam Week</td>\n </tr>\n <tr>\n <td>December 21, 2022</td>\n <td>Grades available online</td>\n </tr>\n </tbody>\n </table>\n </div>\n </div>\n </div>\n </div>\n </div>\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e685\">\n First 8-Week Classes (August 22 - October 16)</a></div>\n </div>\n <div id=\"collapse_d15e685\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <div class=\"harperTable\">\n <table>\n <tbody>\n <tr>\n <td><span>April 18-20, 2022</span></td>\n <td><span>Fall priority registration begins</span></td>\n </tr>\n <tr>\n <td><span>April 21, 2022</span></td>\n <td><span>Open registration for Fall 2022 for all students.</span></td>\n </tr>\n <tr>\n <td>August 22, 2022</td>\n <td>First 8-week classes begin this week</td>\n </tr>\n <tr>\n <td>August 29, 2022</td>\n <td>Last day to drop for 100% refund for first 8-week classes</td>\n </tr>\n <tr>\n <td>September 5, 2022</td>\n <td>College closed in observance of Labor Day</td>\n </tr>\n <tr>\n <td>September 7, 2022</td>\n <td>First Financial Aid Disbursement</td>\n </tr>\n <tr>\n <td>October 3, 2022</td>\n <td>Last day to withdraw from first 8-week classes</td>\n </tr>\n <tr>\n <td>October 16, 2022</td>\n <td>First 8-week classes end</td>\n </tr>\n </tbody>\n </table>\n </div>\n </div>\n </div>\n </div>\n </div>\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e722\">\n Second 8-Week Classes (October 17 - December 11)</a></div>\n 
</div>\n <div id=\"collapse_d15e722\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <div class=\"harperTable\">\n <table>\n <tbody>\n <tr>\n <td><span>April 18-20, 2022</span></td>\n <td><span>Fall priority registration begins</span></td>\n </tr>\n <tr>\n <td><span>April 21, 2022</span></td>\n <td><span>Open registration for Fall 2022 for all students.</span></td>\n </tr>\n <tr>\n <td>October 17, 2022</td>\n <td>Second 8-week classes begin this week</td>\n </tr>\n <tr>\n <td>October 24, 2022</td>\n <td>Last day to drop for 100% refund for second 8-week classes.</td>\n </tr>\n <tr>\n <td>November 8, 2022</td>\n <td>College closed in observance of General Election Day</td>\n </tr>\n <tr>\n <td>November 23-27, 2022</td>\n <td>Thanksgiving Break</td>\n </tr>\n <tr>\n <td>November 28, 2022</td>\n <td>Last day to withdraw from second 8-week classes.</td>\n </tr>\n <tr>\n <td>December 11, 2022</td>\n <td>Second 8-week term ends</td>\n </tr>\n </tbody>\n </table>\n </div>\n </div>\n </div>\n </div>\n </div>\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e759\">\n First 13-week Classes (August 22 - November 20)</a></div>\n </div>\n <div id=\"collapse_d15e759\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <div class=\"harperTable\">\n <table>\n <tbody>\n <tr>\n <td><span>April 18-20, 2022</span></td>\n <td><span>Fall priority registration begins</span></td>\n </tr>\n <tr>\n <td><span>April 21, 2022</span></td>\n <td><span>Open registration for Fall 2022 for all students.</span></td>\n </tr>\n <tr>\n <td>August 22, 2022</td>\n <td>First 13-week classes begin this week</td>\n </tr>\n <tr>\n <td>August 29, 2022</td>\n <td>Last day to drop for 100% refund for first 13-week classes</td>\n </tr>\n <tr>\n <td>September 5, 2022</td>\n <td>College closed in observance of Labor Day</td>\n </tr>\n <tr>\n <td>September 7, 2022</td>\n <td>First Financial Aid Disbursement</td>\n </tr>\n <tr>\n <td>November 8, 2022</td>\n <td>College closed in observance of General Election Day</td>\n </tr>\n <tr>\n <td>October 31, 2022</td>\n <td>Last day to withdraw from first 13-week classes</td>\n </tr>\n <tr>\n <td>November 22, 2022</td>\n <td>First 13-week classes end.</td>\n </tr>\n </tbody>\n </table>\n </div>\n </div>\n </div>\n </div>\n </div>\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e799\">\n Second 13-week Classes (September 19 - December 16)</a></div>\n </div>\n <div id=\"collapse_d15e799\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <div class=\"harperTable\">\n <table>\n <tbody>\n <tr>\n <td><span>April 18-20, 2022</span></td>\n <td><span>Fall priority registration begins</span></td>\n </tr>\n <tr>\n <td><span>April 21, 2022</span></td>\n <td><span>Open registration for Fall 2022 for all students.</span></td>\n </tr>\n <tr>\n <td>September 19, 2022</td>\n <td>Second 13-week classes begin this week</td>\n </tr>\n <tr>\n <td>September 26, 2022</td>\n <td>Last day to drop for 100% refund for second 13-week classes</td>\n </tr>\n <tr>\n 
<td>November 8, 2022</td>\n <td>College closed in observance of General Election Day</td>\n </tr>\n <tr>\n <td>November 23-27, 2022</td>\n <td>Thanksgiving Break</td>\n </tr>\n <tr>\n <td>November 28, 2022</td>\n <td>Last day to withdraw from second 13-week classes</td>\n </tr>\n <tr>\n <td>December 16, 2022</td>\n <td>Second 13-week classes end.</td>\n </tr>\n </tbody>\n </table>\n </div>\n </div>\n </div>\n </div>\n </div>\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e836\">\n Important Dates for All Parts of Term</a></div>\n </div>\n <div id=\"collapse_d15e836\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <p><a href=\"https://www.harpercollege.edu/registration/pdf/web_dates_04192022.pdf\">Fall 2022\n Important Dates for All Parts of Term (pdf)</a></p>\n </div>\n </div>\n </div>\n </div>\n <div class=\"accordion\">\n <div class=\"accordion-group\">\n <div class=\"accordion-alt-1\">\n <div class=\"accordion-heading\"><a class=\"accordion-toggle collapsed\" data-bs-toggle=\"collapse\"\n aria-expanded=\"false\" href=\"#collapse_d15e846\">\n Final Examination Schedule</a></div>\n </div>\n <div id=\"collapse_d15e846\" class=\"accordion-body collapse\" data-bs-parent=\"#accordionExample\">\n <div class=\"accordion-inner\">\n <p><a href=\"https://www.harpercollege.edu/registration/pdf/finalexamschedule_fall_2022.pdf\">Final\n Examination Schedule Fall 2022 (pdf)</a></p>\n </div>\n </div>\n <script src=\"https://cdnjs.cloudflare.com/ajax/libs/bootstrap/5.2.0/js/bootstrap.min.js\"></script>\n </div>\n\n </div>\n</div>\n\n",
"Rather than using Bootstrap accordion, I use the following structure that works quite well. It opens the panel clicked on and closes any other that may already be open.\nCSS\n.FAQ {\n margin: 0 auto;\n max-width: 100%;\n}\n\n.FAQcard {\n margin: 10px 0;\n position: relative;\n}\n\n.FAQtitle {\n background: #fff;\n box-shadow: 0 0 10px rgba(0, 0, 0, 0.2);\n color: #000;\n font-weight: bold;\n font-size: 150%;\n cursor: default;\n display: block;\n padding: 1em 1.5em;\n position: relative;\n text-align: left;\n}\n\n.FAQtitle::after {\n content: \" \";\n width: 8px;\n height: 8px;\n border-right: 1px solid #4a6e78;\n border-bottom: 1px solid #4a6e78;\n position: absolute;\n right: 20px;\n top: 20px;\n -webkit-transform: rotate(-45deg);\n transform: rotate(-45deg);\n -webkit-transition: all 0.2s ease-in-out;\n transition: all 0.2s ease-in-out;\n}\n\n.FAQtitle.active::after {\n -webkit-transform: rotate(45deg);\n transform: rotate(45deg);\n -webkit-transition: all 0.2s ease-in-out;\n transition: all 0.2s ease-in-out;\n}\n\n.FAQpanel {\n background: #f1f2f3;\n color: #000;\n display: none;\n margin: 0;\n padding: 2em;\n text-align: left;\n}\n\n.FAQpanel p {\n margin-bottom: 5px;\n text-align: justify;\n}\n\nHTML\n <div class=\"FAQ\">\n <div class=\"FAQcard\">\n <div class=\"FAQtitle\">Test Panel 1</div>\n <div class=\"FAQpanel fs-5\">\n Test text for panel 1\n </div>\n </div>\n <div class=\"FAQcard\">\n <div class=\"FAQtitle\">Test Panel 2</div>\n <div class=\"FAQpanel fs-5\">\n Test text for panel 2\n </div>\n </div>\n </div>\n\nJavascript inside the (document).ready() function:\n\n $(\".FAQtitle\").click(function (j) {\n var dropDown = $(this).closest(\".FAQcard\").find(\".FAQpanel\");\n $(this).closest(\".FAQ\").find(\".FAQpanel\").not(dropDown).slideUp();\n if ($(this).hasClass(\"active\")) {\n $(this).removeClass(\"active\").removeClass(\"selected\");\n } else {\n $(this).closest(\".FAQ\").find(\".FAQtitle.active\").removeClass(\"active\").removeClass(\"selected\");\n $(this).addClass(\"active selected\");\n }\n dropDown.stop(false, true).slideToggle();\n j.preventDefault();\n });\n\nHopefully this is helpful for you!\n"
] |
[
2,
1
] |
[] |
[] |
[
"bootstrap_5",
"javascript"
] |
stackoverflow_0074661125_bootstrap_5_javascript.txt
|
Q:
What does `location=$(type -p "htop")` mean in a script?
The script is this
#!/bin/bash
echo
echo "################################################################"
echo " Installing Htop "
echo "################################################################"
echo
if ! location=$(type -p "htop"); then
sudo apt install -y htop
fi
I'm confused as to what this code snippet from the script does
location=$(type -p "htop");
I need a clear explanation about this.
A:
! negates the exit status of the following command;
location=... assigns a value to the variable $location;
$(...) is command substitution. It expands to the output of the enclosed command, whose exit status is propagated as the assignment's exit status;
type -p htop (the double quotes are not needed here) searches for an executable htop in the $PATH and prints the full path to it. It fails if no such executable exists and there's no alias or function named htop (if an alias or function does exist, it prints an empty string but doesn't fail).
Putting it all together, it searches for an executable named htop, assigns the full path to it to $location, and if it can't be found (and there's no alias or function defining it), it runs sudo apt install -y htop, which on systems that use apt to manage packages tries to install the htop package with root privileges, answering yes to any questions.
A:
In short, the exit status of the assignment is the exit status of the command substitution, and the exit status of the command substitution is the exit status of type.
type -p htop has an exit status of 0 if htop is a command that can be executed, with the output being the full path to the command.
The idea here is that location is assigned the full path to htop if it exists, and if it doesn't, then sudo apt install -y htop is run to install it. (With the slight problem, alluded to in the comments, that location remains empty if htop needs to be installed.)
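An equivalent, slightly more portable spelling of the same idiom uses command -v, which also prints the resolved path and fails when nothing by that name exists (like type, it reports aliases and functions too, so that caveat carries over). Re-resolving after the install also addresses the empty-$location problem noted above:
#!/bin/bash
if ! location=$(command -v htop); then
    sudo apt install -y htop
    location=$(command -v htop)   # re-resolve the path after installing
fi
echo "htop found at: ${location}"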
|
What does `location=$(type -p "htop")` mean in a script?
|
The script is this
#!/bin/bash
echo
echo "################################################################"
echo " Installing Htop "
echo "################################################################"
echo
if ! location=$(type -p "htop"); then
sudo apt install -y htop
fi
I'm confused as to what this code snippet from the script does
location=$(type -p "htop");
I need a clear explanation about this.
|
[
"\n! negates the exit status of the following command;\nlocation=... assigns a value to the variable $location;\n$(...) is command substitution. It expands to the output of the enclosed command, whose exit status is propagated as the assignment's exit status;\ntype -p htop (double quotes are not needed here) searches for an executable htop in the $PATH and returns the full path to it. It fails if no such executable exists and there's no alias nor function named htop (in which case it returns an empty string, but doesn't fail).\n\nPutting it all together, it searches for an executable named htop, assings the the full path to it to $location, and if it can't be found (and there's no alias or function defining it), it runs sudo apt install -y htop, which on some systems (that use apt to manage packages) tries to install the htop package with root privileges, answering yes to any questions.\n",
"In short, the exit status of the assignment is the exit status of the command substitution, and the exit status of the command substitution is the exit status fo type.\ntype -p htop has an exit status of 0 if htop is a command that can be executed, with the output being the full path to the command.\nThe idea here is that location is assigned the full path to htop if it exists, and if it doesn't, then sudo apt install -y htop is run to install it. (With the slight problem, alluded to in the comments, that location remains if htop needs to be installed.)\n"
] |
[
2,
1
] |
[] |
[] |
[
"linux",
"shell"
] |
stackoverflow_0074661270_linux_shell.txt
|
Q:
Scroll-snap-stop not working when scroll is fast (aggressive scroll)
.home-component {
width: 100%;
height: 93vh;
top : 7vh ;
scroll-snap-type: y mandatory !important;
overflow-y: scroll !important;
-webkit-scroll-snap-type: y mandatory;
}
app-card {
scroll-snap-align: start !important;
scroll-snap-stop : always !important;
-webkit-scroll-snap-align: start;
}
I have a container/wrapper element, home-component, and a component, app-card, whose list is contained within the wrapper. I have implemented the CSS to stop at each app-card. The code works when the scroll is light, but when I scroll with speed, the scroll does not stop at the next app-card element.
I have already tried to implement the same for horizontal scroll and different align positions, but the code doesn't work.
A:
It looks like you're trying to use scroll snapping to make the page stop at specific elements as you scroll. Unfortunately, scroll snapping is not fully supported in all browsers, so you may need to use a different method to achieve the effect you want.
One way to make the page stop at specific elements as you scroll is to use JavaScript to listen for the scroll event on the wrapper element, and then use the scrollTop property to check the position of the scrollbar and adjust it as needed. This will require some additional code and customization to make it work with your specific layout, but it should be possible to achieve the effect you want using this approach:
const wrapper = document.querySelector('.home-component');
const cards = document.querySelectorAll('.app-card');
function snapToCard() {
// Get the current scroll position
const scrollTop = wrapper.scrollTop;
// Check the position of each card relative to the top of the wrapper
cards.forEach(card => {
const cardTop = card.offsetTop;
// If the scroll position is within 50 pixels of the top of the card,
// adjust the scroll position to align with the top of the card
if (Math.abs(scrollTop - cardTop) < 50) {
wrapper.scrollTop = cardTop;
}
});
}
// Listen for the 'scroll' event on the wrapper element
wrapper.addEventListener('scroll', snapToCard);
You can adjust the snapToCard function to suit your specific layout and needs.
|
Scroll-snap-stop not working when scroll is fast (aggressive scroll)
|
.home-component {
width: 100%;
height: 93vh;
top : 7vh ;
scroll-snap-type: y mandatory !important;
overflow-y: scroll !important;
-webkit-scroll-snap-type: y mandatory;
}
app-card {
scroll-snap-align: start !important;
scroll-snap-stop : always !important;
-webkit-scroll-snap-align: start;
}
I have a container/wrapper element, home-component, and a component, app-card, whose list is contained within the wrapper. I have implemented the CSS to stop at each app-card. The code works when the scroll is light, but when I scroll with speed, the scroll does not stop at the next app-card element.
I have already tried to implement the same for horizontal scroll and different align positions, but the code doesn't work.
|
[
"It looks like you're trying to use scroll snapping to make the page stop at specific elements as you scroll. Unfortunately, scroll snapping is not fully supported in all browsers, so you may need to use a different method to achieve the effect you want.\nOne way to make the page stop at specific elements as you scroll is to use JavaScript to listen for the scroll event on the wrapper element, and then use the scrollTop property to check the position of the scrollbar and adjust it as needed. This will require some additional code and customization to make it work with your specific layout, but it should be possible to achieve the effect you want using this approach:\nconst wrapper = document.querySelector('.home-component');\nconst cards = document.querySelectorAll('.app-card');\n\nfunction snapToCard() {\n // Get the current scroll position\n const scrollTop = wrapper.scrollTop;\n\n // Check the position of each card relative to the top of the wrapper\n cards.forEach(card => {\n const cardTop = card.offsetTop;\n\n // If the scroll position is within 50 pixels of the top of the card, \n // adjust the scroll position to align with the top of the card\n if (Math.abs(scrollTop - cardTop) < 50) {\n wrapper.scrollTop = cardTop;\n }\n });\n}\n\n// Listen for the 'scroll' event on the wrapper element\nwrapper.addEventListener('scroll', snapToCard);\n\nYou can adjust the snapToCard function to suit your specific layout and needs.\n"
] |
[
1
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074661267_css_html.txt
|
Q:
Get containing path of lua file
I am wondering if there is a way of getting the path to the currently executing lua script file?
This is specifically not the current working directory, which could be entirely different. I know luafilesystem will let me get the current working directory, but it doesn't seem to be able to tell me the currently executing script file.
Thanks
EDIT:
I'm not running from the standard command line interpreter, I am executing the scripts from a C++ binary via luabind.
A:
This is a more elegant way:
function script_path()
local str = debug.getinfo(2, "S").source:sub(2)
return str:match("(.*/)")
end
print(script_path())
A:
If the Lua script is being run by the standard command line interpreter, then try arg[0].
A:
Shortest form which I have found looks like this:
debug.getinfo(1).source:match("@?(.*/)")
The index (1, 2, or higher) depends on which function in the call stack you want to query: 1 is the most recently called function (the one you're in). If you're running in the global context, then 2 is probably more appropriate (I haven't tested this myself).
A:
As lhf says:
~ e$ echo "print(arg[0])" > test.lua
~ e$ lua test.lua
test.lua
~ e$ cd /
/ e$ lua ~/test.lua
/Users/e/test.lua
/ e$
Here's the same info using the debug.getinfo mechanism
~ e$ echo "function foo () print(debug.getinfo(1).source) end; foo()" > test.lua
~ e$ lua test.lua
@test.lua
~ e$ cd /
/ e$ lua ~/test.lua
@/Users/e/test.lua
/ e$
This is available from the C API lua_getinfo
A:
The only reliable way to get what you want is to replace dofile with your own version of this function. Even the debug.getinfo method won't work, because it will only return the string passed to dofile. If that was a relative path, it has no idea how it was converted to an absolute path.
The overriding code would look something like this:
local function CreateDoFile()
local orgDoFile = dofile;
return function(filename)
if(filename) then --can be called with nil.
local pathToFile = extractFilePath(filename);
if(isRelativePath(pathToFile)) then
pathToFile = currentDir() .. "/" .. pathToFile;
end
--Store the path in a global, overwriting the previous value.
path = pathToFile;
end
return orgDoFile(filename); --proper tail call.
end
end
dofile = CreateDoFile(); //Override the old.
The functions extractFilePath, isRelativePath, and currentDir are not Lua functions; you will have to write them yourself. The extractFilePath function pulls a path string out of a filename. isRelativePath takes a path and returns whether the given path is a relative pathname. currentDir simply returns the current directory. Also, you will need to use "\" instead of "/" on Windows machines.
This function stores the path in a global called path. You can change that to whatever you like.
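If it helps to see those three helpers concretely, here is a loose Python analogue, a sketch only; the names mirror the Lua ones from this answer, and os.path stands in for the platform-specific separator handling:
import os

def extract_file_path(filename):
    # Pull the directory part out of a filename, e.g. "a/b/c.lua" -> "a/b"
    return os.path.dirname(filename)

def is_relative_path(path):
    # True for "scripts/foo", False for "/usr/share/foo" (or "C:\\foo" on Windows)
    return not os.path.isabs(path)

def current_dir():
    return os.getcwd()

def resolve_script_dir(filename):
    path_to_file = extract_file_path(filename)
    if is_relative_path(path_to_file):
        path_to_file = os.path.join(current_dir(), path_to_file)
    return path_to_file

print(resolve_script_dir("scripts/foo.lua"))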
A:
I have written a function getScriptDir which uses the debug information like a few other people have suggested, but this one is going to work every time (at least on Windows). The catch is that it takes quite a few lines of code, as it uses another function, string.cut, which I created; it splits a string at every occurrence of a given pattern and puts the pieces into a table.
function string.cut(s,pattern)
if pattern == nil then pattern = " " end
local cutstring = {}
local i1 = 0
repeat
i2 = nil
local i2 = string.find(s,pattern,i1+1)
if i2 == nil then i2 = string.len(s)+1 end
table.insert(cutstring,string.sub(s,i1+1,i2-1))
i1 = i2
until i2 == string.len(s)+1
return cutstring
end
function getScriptDir(source)
if source == nil then
source = debug.getinfo(1).source
end
local pwd1 = (io.popen("echo %cd%"):read("*l")):gsub("\\","/")
local pwd2 = source:sub(2):gsub("\\","/")
local pwd = ""
if pwd2:sub(2,3) == ":/" then
pwd = pwd2:sub(1,pwd2:find("[^/]*%.lua")-1)
else
local path1 = string.cut(pwd1:sub(4),"/")
local path2 = string.cut(pwd2,"/")
for i = 1,#path2-1 do
if path2[i] == ".." then
table.remove(path1)
else
table.insert(path1,path2[i])
end
end
pwd = pwd1:sub(1,3)
for i = 1,#path1 do
pwd = pwd..path1[i].."/"
end
end
return pwd
end
Note: if you want to use this function on an OS other than Windows, you have to change the io.popen("echo %cd%") in line 15 to whatever command prints the present working directory on your OS, e.g. io.popen("pwd") for Linux, and the pwd2:sub(2,3) == ":/" in line 18 to whatever represents the root directory on your OS, e.g. pwd2:sub(1,1) == "/" for Linux.
Note2: if you don't provide the source variable to the function via debug.getinfo(1).source when calling it, then it will return the path to the directory of the file containing this function. Therefore, if you want to get the directory of a file which you called via dofile or loadfile, you will have to give it the source, like this: getScriptDir(debug.getinfo(1).source).
A:
Have a look at the Lua debug library, which is part of the standard Lua distribution. You can use debug.getinfo to find the current file, or the file up N frames on the call stack:
http://www.lua.org/manual/5.1/manual.html#5.9
Note that this is probably fairly slow, so it is not something you want to do on the fast path if you are worried about such things.
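The same frame-inspection idea exists in other runtimes, which may help if you are embedding or porting; as a minimal sketch, Python's inspect module can walk the call stack much like debug.getinfo(N) (depth 1 here means the caller's frame, an assumption matching the Lua examples above):
import inspect
import os

def script_path(depth=1):
    # inspect.stack() lists frames innermost-first, like Lua's level numbers
    frame = inspect.stack()[depth]
    return os.path.dirname(os.path.abspath(frame.filename))

print(script_path())
Like debug.getinfo, building the whole stack is relatively expensive, so avoid it on hot paths.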
A:
if you want the actual path :
path.dirname(path.abspath(debug.getinfo(1).short_src))
else use this for full file path :
path.abspath(debug.getinfo(1).short_src)
A:
If you want the real path including the filename, just use the following
pathWithFilename=io.popen("cd"):read'*all'
print(pathWithFilename)
Tested on Windows.
Explanation:
io.popen - Sends commands to the command line, and returns the output.
"cd" - when you input this in cmd you get the current path as output.
:read'*all' - as io.popen returns a file-like object you can read it with the same kind of commands. This reads the whole output.
If someone requires the UNC path:
function GetUNCPath(path,filename)
local DriveLetter=io.popen("cd "..path.." && echo %CD:~0,2%"):read'*l'
local NetPath=io.popen("net use "..DriveLetter):read'*all'
local NetRoot=NetPath:match("[^\n]*[\n]%a*%s*([%a*%p*]*)")
local PathTMP=io.popen("cd "..path.." && cd"):read'*l'
PathTMP=PathTMP:sub(3,-1)
UNCPath=NetRoot..PathTMP.."\\"..filename
return UNCPath
end
A:
arg[0]:match('.*\\')
If it returns nil, try changing the .*\\ to .*/ and arg[0] to debug.getinfo(1).short_src.
But I find this to be the best and shortest way to get the current directory.
You can of course append the file you are looking for with the .. operator. It will look something like this:
arg[0]:match('.*\\')..'file.lua'
A:
This version of anthonygore's answer is cross-platform (handles Windows backslash paths) and works when the script is run without a path (returns relative path).
local function script_path()
local str = debug.getinfo(2, "S").source:sub(2)
return str:match("(.*[/\\])") or "./"
end
|
Get containing path of lua file
|
I am wondering if there is a way of getting the path to the currently executing lua script file?
This is specifically not the current working directory, which could be entirely different. I know luafilesystem will let me get the current working directory, but it doesn't seem to be able to tell me the currently executing script file.
Thanks
EDIT:
I'm not running from the standard command line interpreter, I am executing the scripts from a C++ binary via luabind.
|
[
"This is a more elegant way:\nfunction script_path()\n local str = debug.getinfo(2, \"S\").source:sub(2)\n return str:match(\"(.*/)\")\nend\n\nprint(script_path())\n\n",
"If the Lua script is being run by the standard command line interpreter, then try arg[0].\n",
"Shortest form which I have found looks like this:\ndebug.getinfo(1).source:match(\"@?(.*/)\")\n\nIndex 1, 2- other - depends on which function in call stack you want to query. 1 is last called function (where you're in). If you're running in global context, then probably 2 is more appropriate (haven't tested by myself)\n",
"As lhf says:\n~ e$ echo \"print(arg[0])\" > test.lua\n~ e$ lua test.lua\ntest.lua\n~ e$ cd /\n/ e$ lua ~/test.lua\n/Users/e/test.lua\n/ e$ \n\nHere's the same info using the debug.getinfo mechanism\n~ e$ echo \"function foo () print(debug.getinfo(1).source) end; foo()\" > test.lua\n~ e$ lua test.lua\[email protected]\n~ e$ cd /\n/ e$ lua ~/test.lua\n@/Users/e/test.lua\n/ e$ \n\nThis is available from the C API lua_getinfo\n",
"The only reliable way to get what you want is to replace dofile with your own version of this function. Even the debug.getinfo method won't work, because it will only return the string passed to dofile. If that was a relative path, it has no idea how it was converted to an absolute path.\nThe overriding code would look something like this:\nlocal function CreateDoFile()\n local orgDoFile = dofile;\n\n return function(filename)\n if(filename) then --can be called with nil.\n local pathToFile = extractFilePath(filename);\n if(isRelativePath(pathToFile)) then\n pathToFile = currentDir() .. \"/\" .. pathToFile;\n end\n\n --Store the path in a global, overwriting the previous value.\n path = pathToFile; \n end\n return orgDoFile(filename); --proper tail call.\n end\nend\n\ndofile = CreateDoFile(); //Override the old.\n\nThe functions extractFilePath, isRelativePath, and currentDir are not Lua functions; you will have to write them yourself. The extractFilePath function pulls a path string out of a filename. isRelativePath takes a path and returns whether the given path is a relative pathname. currentDir simply returns the current directory. Also, you will need to use \"\\\" instead of \"/\" on Windows machines.\nThis function stores the path in a global called path. You can change that to whatever you like.\n",
"I have written a function getScriptDir which uses the debug information like a few other people have suggested, but this one is going to work everytime (at least in Windows). But the thing is there are quite a few lines of code as it uses another function string.cut which i have created, which separates a string every given pattern, and puts it into a table.\nfunction string.cut(s,pattern)\n if pattern == nil then pattern = \" \" end\n local cutstring = {}\n local i1 = 0\n repeat\n i2 = nil\n local i2 = string.find(s,pattern,i1+1)\n if i2 == nil then i2 = string.len(s)+1 end\n table.insert(cutstring,string.sub(s,i1+1,i2-1))\n i1 = i2\n until i2 == string.len(s)+1\n return cutstring\nend\n\nfunction getScriptDir(source)\n if source == nil then\n source = debug.getinfo(1).source\n end\n local pwd1 = (io.popen(\"echo %cd%\"):read(\"*l\")):gsub(\"\\\\\",\"/\")\n local pwd2 = source:sub(2):gsub(\"\\\\\",\"/\")\n local pwd = \"\"\n if pwd2:sub(2,3) == \":/\" then\n pwd = pwd2:sub(1,pwd2:find(\"[^/]*%.lua\")-1)\n else\n local path1 = string.cut(pwd1:sub(4),\"/\")\n local path2 = string.cut(pwd2,\"/\")\n for i = 1,#path2-1 do\n if path2[i] == \"..\" then\n table.remove(path1)\n else\n table.insert(path1,path2[i])\n end\n end\n pwd = pwd1:sub(1,3)\n for i = 1,#path1 do\n pwd = pwd..path1[i]..\"/\"\n end\n end\n return pwd\nend\n\nNote: if you want to use this function in another OS than Windows, you have to change the io.popen(\"echo %cd%\") in the line 15 to whatever command gives you present working directory in your OS, e.g. io.popen(\"pwd\") for Linux, and the pwd2:sub(2,3) == \":/\" in the line 18 to whatever represents the root directory in your OS, e.g. pwd2:sub(1,1) == \"/\" for Linux.\nNote2: if you don't provide the source variable to the function via debug.getinfo(1).source when calling it, then it will return the path to the directory of the file containing this function. Therefore, if you want to get the directory of a file which you called via dofile or loadfile, you will have to give it the source, like this: getScriptDir(debug.getinfo(1).source).\n",
"Have a look at the Lua debug library, which is part of the standard Lua distribution. You can use debug.getinfo to find the current file, or the file up N frames on the call stack:\nhttp://www.lua.org/manual/5.1/manual.html#5.9\nNote that this is probably fairly slow, so it is not something you want to do on the fast path if you are worried about such things.\n",
"if you want the actual path :\npath.dirname(path.abspath(debug.getinfo(1).short_src))\n\nelse use this for full file path :\npath.abspath(debug.getinfo(1).short_src)\n\n",
"If you want the real path including the filename, just use the following\npathWithFilename=io.popen(\"cd\"):read'*all'\nprint(pathWithFilename)\n\nTested on Windows.\nExplanation:\nio.popen - Sends commands to the command line, and returns the output.\n\"cd\" - when you input this in cmd you get the current path as output.\n:read'*all' - as io.popen returns a file-like object you can read it with the same kind of commands. This reads the whole output.\n\nIf someone requires the UNC path:\nfunction GetUNCPath(path,filename)\nlocal DriveLetter=io.popen(\"cd \"..path..\" && echo %CD:~0,2%\"):read'*l'\nlocal NetPath=io.popen(\"net use \"..DriveLetter):read'*all'\nlocal NetRoot=NetPath:match(\"[^\\n]*[\\n]%a*%s*([%a*%p*]*)\")\nlocal PathTMP=io.popen(\"cd \"..path..\" && cd\"):read'*l'\nPathTMP=PathTMP:sub(3,-1)\nUNCPath=NetRoot..PathTMP..\"\\\\\"..filename\nreturn UNCPath\nend\n\n",
"arg[0]:match('.*\\\\')\n\nIf it returns nil try changing the .\\*\\\\\\ with .*/ and arg[0] with debug.getinfo(1).short_src.\nBut I find this to be the best and shortest way to get the current directory.\nYou can of course append the file you are looking for with the .. operator. It will look something like this:\narg[0]:match('.*\\\\')..'file.lua'\n\n",
"This version of anthonygore's answer is cross-platform (handles Windows backslash paths) and works when the script is run without a path (returns relative path).\nlocal function script_path()\n local str = debug.getinfo(2, \"S\").source:sub(2)\n return str:match(\"(.*[/\\\\])\") or \"./\"\nend\n\n"
] |
[
32,
14,
8,
6,
4,
4,
2,
2,
2,
0,
0
] |
[] |
[] |
[
"filesystems",
"lua"
] |
stackoverflow_0006380820_filesystems_lua.txt
|
Q:
Defining a surjective predicate for maps and functions
I'm running into a few problems defining a surjective predicate for maps and functions.
predicate isTotal<G(!new), B(!new)>(f:G -> B)
reads f.reads;
{
forall g:G :: f.requires(g)
}
predicate Surjective<A(!new), B(!new)>(f: A -> B)
requires isTotal(f)
{
forall b: B :: exists a: A :: f(a) == b
}
predicate isTotalMap<G(!new), B(!new)>(m:map<G,B>)
{
forall g:G :: g in m
}
predicate mapSurjective<U(!new), V(!new)>(m: map<U,V>)
requires forall u: U :: u in m.Keys
{
forall x: V :: exists a: U :: m[a] == x
}
These definitions seem to work somewhat. However, they fail to verify the following setups.
datatype Color = Blue | Yellow | Green | Red
function toRed(x: Color): Color {
Red
}
function shiftColor(x: Color): Color {
match x {
case Red => Blue
case Blue => Yellow
case Yellow => Green
case Green => Red
}
}
lemma TestSurjective() {
assert isTotal(toRed);
assert isTotal(shiftColor);
var toRedm := map[Red := Red, Blue := Red, Yellow := Red, Green := Red];
var toShiftm := map[Red := Blue, Blue := Yellow, Yellow := Green, Green := Red];
// assert Surjective(toRed); //should fail
// assert Surjective(shiftColor); //should succeed
// assert mapSurjective(toRedm); //should fail
// assert forall u: Color :: u in toShiftm.Keys;
assert isTotalMap(toShiftm); //also fails
assume forall u: Color :: u in toShiftm.Keys;
assert mapSurjective(toShiftm); // should succeed
}
I assume the reason the maps fail the totality requirement defined in mapSurjective is that the maps are potentially heap objects and Dafny isn't bothering to keep track of what is in them? Even if I assume the precondition, the predicate still fails, even though it should pass.
For the function case assert Surjective(shiftColor) also fails. For types with infinite cardinality I could understand it failing, but I feel like it should be possible to evaluate for finite types.
A:
Here, let me clarify how you can improve and prove your code.
// Note: to be useful, the function's type should be --> (a broken arrow)
// indicating the function CAN have preconditions.
// Otherwise, -> is already a subset type of --> whose constraint is exactly your predicate
// so it would be a typing issue to provide a non-total function.
// See https://dafny.org/latest/DafnyRef/DafnyRef#sec-arrow-subset-types
predicate isTotal<G(!new), B(!new)>(f:G --> B)
// reads f.reads // You don't need this, because f is not declared as being able to read a function
{
forall g:G :: f.requires(g)
}
// Passthrough identity function used for triggers
function Id<T>(t: T): T { t }
predicate Surjective<A(!new), B(!new)>(f: A -> B)
{
// If not using Id(b), the first forall does not have a trigger
// and get hard to prove. Not impossible, but extremely lengthy
forall b: B :: exists a: A :: f(a) == Id(b)
}
predicate isTotalMap<G(!new), B(!new)>(m:map<G,B>)
{
forall g: G :: g in m
}
predicate mapSurjective<U(!new), V(!new)>(m: map<U,V>)
requires forall u: U :: u in m.Keys
{
// If not using Id(b), the first forall does not have a trigger
// and get hard to prove. Not impossible, but extremely lengthy
forall x: V :: exists a: U :: m[a] == Id(x)
}
datatype Color = Blue | Yellow | Green | Red
function toRed(x: Color): Color {
Red
}
function shiftColor(x: Color): Color {
match x {
case Red => Blue
case Blue => Yellow
case Yellow => Green
case Green => Red
}
}
function partialFunction(x: Color): Color
requires x.Red? {
x
}
lemma TestWrong() {
// When trying to prove an assertion with a proof, use assert ... by like this:
assert !isTotal(partialFunction) by {
      // If we were using ->, we would get "Value does not satisfies Color -> Color"
// But here we can just exhibit a counter-example that disproves the forall
assert !partialFunction.requires(Blue);
// A longer proof could be done by contradiction like this:
if(isTotal(partialFunction)) {
assert forall c: Color :: partialFunction.requires(c);
assert partialFunction.requires(Blue); // it can instantiate the forall above.
assert false; // We get a contradiction
}
assert !isTotal(partialFunction);// A ==> false means !A
}
}
lemma TestSurjective() {
assert isTotal(toRed);
assert isTotal(shiftColor);
var toRedm := map[Red := Red, Blue := Red, Yellow := Red, Green := Red];
var toShiftm := map[Red := Blue, Blue := Yellow, Yellow := Green, Green := Red];
assert !Surjective(toRed) by {
if(Surjective(toRed)) {
var _ := Id(Blue);
}
}
assert Surjective(shiftColor) by {
if(!Surjective(shiftColor)) {
var _ := Id(Blue); // We need to trigger the condition of surjective so that Dafny is happy with the below:
assert !forall b: Color :: exists a: Color :: shiftColor(a) == Id(b);
assert exists b: Color :: forall a: Color :: shiftColor(a) != Id(b);
var b : Color :| forall a: Color :: shiftColor(a) != Id(b);
assert shiftColor(shiftColor(shiftColor(shiftColor(b)))) == Id(b);
assert false;
}
}
assert forall c: Color :: c in toRedm by {
if(!forall c :: c in toRedm) {
assert exists c :: c !in toRedm;
var c :| c !in toRedm;
assert c != Red;// Dafny picks up from here
assert false;
}
}
assert !mapSurjective(toRedm) by {
if(mapSurjective(toRedm)) {
assert forall x :: exists a :: toRedm[a] == Id(x);
var _ := Id(Blue); // Will instantiate the axiom above with x == Blue
assert exists a :: toRedm[a] == Id(Blue); // Not needed, but Dafny can prove this.
assert false;
}
}
assert forall u: Color :: u in toShiftm.Keys by {
if(!forall u: Color :: u in toShiftm.Keys) {
assert exists u :: u !in toShiftm.Keys;
var u :| u !in toShiftm.Keys;
assert u != Red; // Dafny can pick up from here
assert false;
}
}
assert isTotalMap(toShiftm); //also fails
assert forall u: Color :: u in toShiftm.Keys;
assert mapSurjective(toShiftm) by {
if(!mapSurjective(toShiftm)) {
var _ := Id(Red); // Necessary so that Dafny understands that the next forall is equivalent
assert !forall x :: exists a :: toShiftm[a] == Id(x);
assert exists x :: forall a :: toShiftm[a] != Id(x);
var x :| forall a :: toShiftm[a] != Id(x);
assert forall b :: exists a :: toShiftm[a] == Id(b) by {
forall b: Color ensures exists a :: toShiftm[a] == Id(b) {
var a := toShiftm[toShiftm[toShiftm[b]]];
assert toShiftm[toShiftm[toShiftm[toShiftm[b]]]] == Id(b);
}
}
assert exists a :: toShiftm[a] == Id(x);
var b :| toShiftm[b] == Id(x);
assert false;
}
}
}
A:
I figured out the following. For maps, I defined these two lemmas.
lemma TotalColorMapIsTotal<B>(m: map<Color, B>)
requires m.Keys == {Red, Blue, Green, Yellow}
// ensures forall u: Color :: u in m
ensures isTotalMap(m)
{
forall u: Color | true
ensures u in m
{
if u.Red? {
assert u in m;
}
}
}
lemma ColorMapIsOnto<A>(m: map<A, Color>)
requires m.Values == {Red, Blue, Green, Yellow}
ensures forall u: Color :: u in m.Values
{
forall u: Color | true
ensures u in m.Values
{
if u.Red? {
assert u in m.Values;
}
}
}
With some extra assertions, this verified when applied to the example total and surjective map.
assert toShiftm[Red] == Blue;
assert toShiftm[Blue] == Yellow;
assert toShiftm[Yellow] == Green;
assert toShiftm[Green] == Red;
TotalColorMapIsTotal(toShiftm);
ColorMapIsOnto(toShiftm);
assert mapSurjective(toShiftm);
and the function version also verified when I supplied the following assertions.
assert shiftColor(Green) == Red;
assert shiftColor(Red) == Blue;
assert shiftColor(Blue) == Yellow;
assert shiftColor(Yellow) == Green;
assert Surjective(shiftColor); //should succeed
So I guess they work, but feels a bit underpowered. Is this the best that can be done?
|
Defining a surjective predicate for maps and functions
|
I'm running into a few problems defining a surjective predicate for maps and functions.
predicate isTotal<G(!new), B(!new)>(f:G -> B)
reads f.reads;
{
forall g:G :: f.requires(g)
}
predicate Surjective<A(!new), B(!new)>(f: A -> B)
requires isTotal(f)
{
forall b: B :: exists a: A :: f(a) == b
}
predicate isTotalMap<G(!new), B(!new)>(m:map<G,B>)
{
forall g:G :: g in m
}
predicate mapSurjective<U(!new), V(!new)>(m: map<U,V>)
requires forall u: U :: u in m.Keys
{
forall x: V :: exists a: U :: m[a] == x
}
These definitions seem to work somewhat. However, they fail to verify the following setups.
datatype Color = Blue | Yellow | Green | Red
function toRed(x: Color): Color {
Red
}
function shiftColor(x: Color): Color {
match x {
case Red => Blue
case Blue => Yellow
case Yellow => Green
case Green => Red
}
}
lemma TestSurjective() {
assert isTotal(toRed);
assert isTotal(shiftColor);
var toRedm := map[Red := Red, Blue := Red, Yellow := Red, Green := Red];
var toShiftm := map[Red := Blue, Blue := Yellow, Yellow := Green, Green := Red];
// assert Surjective(toRed); //should fail
// assert Surjective(shiftColor); //should succeed
// assert mapSurjective(toRedm); //should fail
// assert forall u: Color :: u in toShiftm.Keys;
assert isTotalMap(toShiftm); //also fails
assume forall u: Color :: u in toShiftm.Keys;
assert mapSurjective(toShiftm); // should succeed
}
I assume the reason the maps fail the totality requirement defined in mapSurjective is that the maps are potentially heap objects and Dafny isn't bothering to keep track of what is in them? Even if I assume the precondition, the predicate still fails, even though it should pass.
For the function case assert Surjective(shiftColor) also fails. For types with infinite cardinality I could understand it failing, but I feel like it should be possible to evaluate for finite types.
|
[
"Here, let me clarify how you can improve and prove your code.\n\n// Note: to be useful, the function's type should be --> (a broken arrow)\n// indicating the function CAN have preconditions.\n// Otherwise, -> is already a subset type of --> whose constraint is exactly your predicate\n// so it would be a typing issue to provide a non-total function.\n// See https://dafny.org/latest/DafnyRef/DafnyRef#sec-arrow-subset-types\npredicate isTotal<G(!new), B(!new)>(f:G --> B)\n// reads f.reads // You don't need this, because f is not declared as being able to read a function\n{\n forall g:G :: f.requires(g)\n}\n\n// Passthrough identity function used for triggers\nfunction Id<T>(t: T): T { t }\n\npredicate Surjective<A(!new), B(!new)>(f: A -> B) \n{\n // If not using Id(b), the first forall does not have a trigger\n // and get hard to prove. Not impossible, but extremely lengthy\n forall b: B :: exists a: A :: f(a) == Id(b)\n}\n\npredicate isTotalMap<G(!new), B(!new)>(m:map<G,B>)\n{\n forall g: G :: g in m\n}\n\npredicate mapSurjective<U(!new), V(!new)>(m: map<U,V>)\n requires forall u: U :: u in m.Keys\n{\n // If not using Id(b), the first forall does not have a trigger\n // and get hard to prove. Not impossible, but extremely lengthy\n forall x: V :: exists a: U :: m[a] == Id(x)\n}\n\ndatatype Color = Blue | Yellow | Green | Red\n\nfunction toRed(x: Color): Color {\n Red\n}\n\nfunction shiftColor(x: Color): Color {\n match x {\n case Red => Blue\n case Blue => Yellow\n case Yellow => Green\n case Green => Red\n }\n}\nfunction partialFunction(x: Color): Color\n requires x.Red? {\n x\n}\n\nlemma TestWrong() {\n // When trying to prove an assertion with a proof, use assert ... by like this:\n assert !isTotal(partialFunction) by {\n // If we were using ->, we would get \"Value does not satisfies Color -> Color\"*\n // But here we can just exhibit a counter-example that disproves the forall \n assert !partialFunction.requires(Blue);\n\n // A longer proof could be done by contradiction like this:\n if(isTotal(partialFunction)) {\n assert forall c: Color :: partialFunction.requires(c);\n assert partialFunction.requires(Blue); // it can instantiate the forall above.\n assert false; // We get a contradiction\n }\n assert !isTotal(partialFunction);// A ==> false means !A\n }\n}\n\nlemma TestSurjective() {\n assert isTotal(toRed);\n assert isTotal(shiftColor);\n var toRedm := map[Red := Red, Blue := Red, Yellow := Red, Green := Red];\n var toShiftm := map[Red := Blue, Blue := Yellow, Yellow := Green, Green := Red];\n assert !Surjective(toRed) by {\n if(Surjective(toRed)) {\n var _ := Id(Blue);\n }\n }\n assert Surjective(shiftColor) by {\n if(!Surjective(shiftColor)) {\n var _ := Id(Blue); // We need to trigger the condition of surjective so that Dafny is happy with the below:\n assert !forall b: Color :: exists a: Color :: shiftColor(a) == Id(b);\n assert exists b: Color :: forall a: Color :: shiftColor(a) != Id(b);\n var b : Color :| forall a: Color :: shiftColor(a) != Id(b);\n assert shiftColor(shiftColor(shiftColor(shiftColor(b)))) == Id(b);\n assert false;\n }\n }\n assert forall c: Color :: c in toRedm by {\n if(!forall c :: c in toRedm) {\n assert exists c :: c !in toRedm;\n var c :| c !in toRedm;\n assert c != Red;// Dafny picks up from here\n assert false;\n }\n }\n assert !mapSurjective(toRedm) by {\n if(mapSurjective(toRedm)) {\n assert forall x :: exists a :: toRedm[a] == Id(x);\n var _ := Id(Blue); // Will instantiate the axiom above with x == Blue\n assert exists a :: toRedm[a] == Id(Blue); 
// Not needed, but Dafny can prove this.\n assert false;\n }\n }\n assert forall u: Color :: u in toShiftm.Keys by {\n if(!forall u: Color :: u in toShiftm.Keys) {\n assert exists u :: u !in toShiftm.Keys;\n var u :| u !in toShiftm.Keys;\n assert u != Red; // Dafny can pick up from here\n assert false;\n }\n }\n assert isTotalMap(toShiftm); //also fails\n assert forall u: Color :: u in toShiftm.Keys;\n assert mapSurjective(toShiftm) by {\n if(!mapSurjective(toShiftm)) {\n var _ := Id(Red); // Necessary so that Dafny understands that the next forall is equivalent\n assert !forall x :: exists a :: toShiftm[a] == Id(x);\n assert exists x :: forall a :: toShiftm[a] != Id(x);\n var x :| forall a :: toShiftm[a] != Id(x);\n assert forall b :: exists a :: toShiftm[a] == Id(b) by {\n forall b: Color ensures exists a :: toShiftm[a] == Id(b) {\n var a := toShiftm[toShiftm[toShiftm[b]]];\n assert toShiftm[toShiftm[toShiftm[toShiftm[b]]]] == Id(b);\n }\n }\n assert exists a :: toShiftm[a] == Id(x);\n var b :| toShiftm[b] == Id(x);\n assert false;\n }\n }\n}\n\n",
"I figured out the following. For maps, I defined these two lemmas.\nlemma TotalColorMapIsTotal<B>(m: map<Color, B>) \n requires m.Keys == {Red, Blue, Green, Yellow}\n // ensures forall u: Color :: u in m\n ensures isTotalMap(m)\n{\n forall u: Color | true\n ensures u in m\n {\n if u.Red? {\n assert u in m;\n }\n }\n}\n\nlemma ColorMapIsOnto<A>(m: map<A, Color>) \n requires m.Values == {Red, Blue, Green, Yellow}\n ensures forall u: Color :: u in m.Values\n{\n\n forall u: Color | true\n ensures u in m.Values\n {\n if u.Red? {\n assert u in m.Values;\n }\n }\n}\n\nWhich with some assertions, when applied to the example total and surjective map verified.\n assert toShiftm[Red] == Blue;\n assert toShiftm[Blue] == Yellow;\n assert toShiftm[Yellow] == Green;\n assert toShiftm[Green] == Red;\n\n TotalColorMapIsTotal(toShiftm);\n ColorMapIsOnto(toShiftm);\n assert mapSurjective(toShiftm);\n\nand when I supplied the following assertions for the function version also verified.\n assert shiftColor(Green) == Red;\n assert shiftColor(Red) == Blue;\n assert shiftColor(Blue) == Yellow;\n assert shiftColor(Yellow) == Green;\n assert Surjective(shiftColor); //should succeed\n\nSo I guess they work, but feels a bit underpowered. Is this the best that can be done?\n"
] |
[
1,
0
] |
[] |
[] |
[
"dafny"
] |
stackoverflow_0074632153_dafny.txt
|
Q:
Pandas groupby two columns get earliest date
This is the dataset:
`
data = {'id': ['1','1','1','1','2','2','2','2','2','3','3','3','3','3','3','3'],
'status': ['Active','Active','Active','Pending Action','Pending Action','Pending Action','Active','Pending Action','Active','Draft','Active','Draft','Draft','Draft','Active','Draft'],
'calc_date_id':['05/07/2022','07/06/2022','31/08/2021','01/07/2021','20/11/2022','25/10/2022','02/04/2022','28/02/2022','01/07/2021','23/06/2022','15/06/2022','07/04/2022','09/11/2022','18/08/2020','19/03/2020','17/01/202']
}
df = pd.DataFrame(data)
#to datetime
df['calc_date_id'] = pd.to_datetime(df['calc_date_id'])
`
How do I get the first date of the last time the status changed, by id?
I tried sorting by date and groupby with id and status and keep="first" but I got:
Grouping by status
Also tried
df_mt_date.loc[df_mt_date.groupby(['id',' status'])['calc_date_id'].idxmin()]
Instead, I'd like to preserve the order by date, obtaining only the first time the id changed status for the last time (not all of the history).
This is the desired output
I'm running out of ideas; I'd appreciate any suggestions.
Thank you
A:
Try:
df["desired_output"] = df.groupby("id")["status"].transform(
lambda x: df.loc[x.index, "calc_date_id"][(x != x.shift(-1)).idxmax()]
)
print(df)
Prints:
id status calc_date_id desired_output
0 1 Active 2022-07-05 2021-08-31
1 1 Active 2022-06-07 2021-08-31
2 1 Active 2021-08-31 2021-08-31
3 1 Pending Action 2021-07-01 2021-08-31
4 2 Pending Action 2022-11-20 2022-10-25
5 2 Pending Action 2022-10-25 2022-10-25
6 2 Active 2022-04-02 2022-10-25
7 2 Pending Action 2022-02-28 2022-10-25
8 2 Active 2021-07-01 2022-10-25
9 3 Draft 2022-06-23 2022-06-23
10 3 Active 2022-06-15 2022-06-23
11 3 Draft 2022-04-07 2022-06-23
12 3 Draft 2022-11-09 2022-06-23
13 3 Draft 2020-08-18 2022-06-23
14 3 Active 2020-03-19 2022-06-23
15 3 Draft 2020-01-17 2022-06-23
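To see why that lambda works, here is a minimal sketch of its two building blocks on a toy Series (values invented for illustration):
import pandas as pd

s = pd.Series(["Active", "Active", "Pending", "Active"])

# x != x.shift(-1) is True wherever the *next* row differs (a status change),
# and True on the last row, since shift(-1) leaves NaN there.
changes = s != s.shift(-1)
print(changes.tolist())  # [False, True, True, True]

# idxmax() returns the index label of the first True, i.e. the first row
# whose successor has a different status; the answer then looks up
# calc_date_id at that label, once per id thanks to the groupby.
print(changes.idxmax())  # 1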
A:
From your desired output I see that the group "boundaries" are points where a particular value of the status column occurs for the first time, regardless of the id column.
To indicate first occurrences of values in status column, run:
wrk = df.groupby('status', group_keys=False).apply(
lambda grp: grp.assign(isFirst=grp.index[0] == grp.index))
wrk.isFirst = wrk.isFirst.cumsum()
To see the result, print wrk and look at the isFirst column.
Then, to generate the result, run:
result = wrk.groupby('isFirst', group_keys=False).apply(
lambda grp: grp.assign(desired_output=grp.calc_date_id.min()))\
.drop(columns='isFirst')
Note the terminating drop, which removes the now-unnecessary isFirst column.
The result, for your data sample, is:
id status calc_date_id desired_output
0 1 Active 2022-07-05 2021-08-31
1 1 Active 2022-06-07 2021-08-31
2 1 Active 2021-08-31 2021-08-31
3 1 Pending Action 2021-07-01 2021-07-01
4 2 Pending Action 2022-11-20 2021-07-01
5 2 Pending Action 2022-10-25 2021-07-01
6 2 Active 2022-04-02 2021-07-01
7 2 Pending Action 2022-02-28 2021-07-01
8 2 Active 2021-07-01 2021-07-01
9 3 Draft 2022-06-23 2020-03-19
10 3 Active 2022-06-15 2020-03-19
11 3 Draft 2022-04-07 2020-03-19
12 3 Draft 2022-11-09 2020-03-19
13 3 Draft 2020-08-18 2020-03-19
14 3 Active 2020-03-19 2020-03-19
15 3 Draft 2022-01-17 2020-03-19
|
Pandas groupby two columns get earliest date
|
This is the dataset:
`
data = {'id': ['1','1','1','1','2','2','2','2','2','3','3','3','3','3','3','3'],
'status': ['Active','Active','Active','Pending Action','Pending Action','Pending Action','Active','Pending Action','Active','Draft','Active','Draft','Draft','Draft','Active','Draft'],
'calc_date_id':['05/07/2022','07/06/2022','31/08/2021','01/07/2021','20/11/2022','25/10/2022','02/04/2022','28/02/2022','01/07/2021','23/06/2022','15/06/2022','07/04/2022','09/11/2022','18/08/2020','19/03/2020','17/01/202']
}
df = pd.DataFrame(data)
#to datetime
df['calc_date_id'] = pd.to_datetime(df['calc_date_id'])
`
How do I get the first date of the last time the status changed, by id?
I tried sorting by date and groupby with id and status and keep="first" but I got:
Grouping by status
Also tried
df_mt_date.loc[df_mt_date.groupby(['id',' status'])['calc_date_id'].idxmin()]
Instead, I'd like to preserve the order by date, obtaining only the first time the id changed status for the last time (not all of the history).
This is the desired output
I'm running out of ideas; I'd appreciate any suggestions.
Thank you
|
[
"Try:\ndf[\"desired_output\"] = df.groupby(\"id\")[\"status\"].transform(\n lambda x: df.loc[x.index, \"calc_date_id\"][(x != x.shift(-1)).idxmax()]\n)\nprint(df)\n\nPrints:\n id status calc_date_id desired_output\n0 1 Active 2022-07-05 2021-08-31\n1 1 Active 2022-06-07 2021-08-31\n2 1 Active 2021-08-31 2021-08-31\n3 1 Pending Action 2021-07-01 2021-08-31\n4 2 Pending Action 2022-11-20 2022-10-25\n5 2 Pending Action 2022-10-25 2022-10-25\n6 2 Active 2022-04-02 2022-10-25\n7 2 Pending Action 2022-02-28 2022-10-25\n8 2 Active 2021-07-01 2022-10-25\n9 3 Draft 2022-06-23 2022-06-23\n10 3 Active 2022-06-15 2022-06-23\n11 3 Draft 2022-04-07 2022-06-23\n12 3 Draft 2022-11-09 2022-06-23\n13 3 Draft 2020-08-18 2022-06-23\n14 3 Active 2020-03-19 2022-06-23\n15 3 Draft 2020-01-17 2022-06-23\n\n",
"From your desired output I see, that the group \"boundaries\" are\npoints where particular value of status column occurs for the\nfirst time, regardless of id column.\nTo indicate first occurrences of values in status column, run:\nwrk = df.groupby('status', group_keys=False).apply(\n lambda grp: grp.assign(isFirst=grp.index[0] == grp.index))\nwrk.isFirst = wrk.isFirst.cumsum()\n\nTo see the result, print wrk and look at isFirst column.\nThen, to generate the result, run:\nresult = wrk.groupby('isFirst', group_keys=False).apply(\n lambda grp: grp.assign(desired_output=grp.calc_date_id.min()))\\\n .drop(columns='isFirst')\n\nNote the terminating drop to drop now unnecessary isFirst column.\nThe result, for your data sample, is:\n id status calc_date_id desired_output\n0 1 Active 2022-07-05 2021-08-31\n1 1 Active 2022-06-07 2021-08-31\n2 1 Active 2021-08-31 2021-08-31\n3 1 Pending Action 2021-07-01 2021-07-01\n4 2 Pending Action 2022-11-20 2021-07-01\n5 2 Pending Action 2022-10-25 2021-07-01\n6 2 Active 2022-04-02 2021-07-01\n7 2 Pending Action 2022-02-28 2021-07-01\n8 2 Active 2021-07-01 2021-07-01\n9 3 Draft 2022-06-23 2020-03-19\n10 3 Active 2022-06-15 2020-03-19\n11 3 Draft 2022-04-07 2020-03-19\n12 3 Draft 2022-11-09 2020-03-19\n13 3 Draft 2020-08-18 2020-03-19\n14 3 Active 2020-03-19 2020-03-19\n15 3 Draft 2022-01-17 2020-03-19\n\n"
] |
[
0,
0
] |
[] |
[] |
[
"dataframe",
"group_by",
"pandas",
"python",
"sorting"
] |
stackoverflow_0074660690_dataframe_group_by_pandas_python_sorting.txt
|
Q:
How to read the txt input and store it in O(k+p) time complexity using Java
The input file is below:
5 6 2 12
1 1
1 2
1 3
1 4
2 1
2 2
2 5
3 3
3 4
3 6
4 5
5 6
The input means:
m is the number of sets, n is the total number of elements
k is the size of each line, p is the total number of data lines
m = 5, n = 6, k = 2, p = 12
set 1{1,2,3,4}
set 2{1,2,5}
set 3{3,4,6}
set 4{5}
set 5{6}
However, {m, n, k, p} will be random values.
I expect to read all the data into a 2D array data table like the one below
1 1 1 1 0 0
1 1 0 0 1 0
0 0 1 1 0 1
0 0 0 0 1 0
0 0 0 0 0 1
in O(k+p) time complexity.
Can you give me some ideas on how to read the input within this time bound? Thank you!
I tried to use a loop:
for loop from 0 to p-1{
get the data as a string
convert string to int array
for loop from 0 to k-1{
array[int[0]].store element[int[k]]
}
}
but this takes O(kp).
A:
To read the input file and store the data in a 2D array with time complexity O(k + p), you can use the following approach:
Read the first four numbers from the input file, which represent the values of m, n, k, and p.
Initialize a 2D array of size m x n, where m is the number of sets and n is the total number of elements.
Loop through the remaining lines in the input file (p lines in total). For each line, read the k numbers and store them in the appropriate location in the 2D array.
Note that the input itself contains p lines of k numbers each, so any reader must touch on the order of k·p tokens in total; that much is unavoidable. The point is to do no extra work beyond reading each token once: the header's four numbers are read in O(1), and with k a small constant for this format (k = 2: a set index and an element index per line), the whole pass is effectively O(k + p).
Here is an example of how you could implement this approach in Java:
import java.util.Scanner;
import java.io.File;

// Read the first four numbers from the input file
Scanner input = new Scanner(new File("input.txt"));
int m = input.nextInt();
int n = input.nextInt();
int k = input.nextInt();
int p = input.nextInt();
// Initialize a 2D array to store the data
int[][] data = new int[m][n];
// Loop through the remaining lines in the input file
for (int i = 0; i < p; i++) {
// Read the k numbers from the current line
int[] lineData = new int[k];
for (int j = 0; j < k; j++) {
lineData[j] = input.nextInt();
}
// Store the data in the appropriate location in the 2D array
data[lineData[0] - 1][lineData[1] - 1] = 1;
}
// The data is now stored in the 2D array
In this example, we read the first four numbers from the input file and use them to initialize the 2D array. Then, we loop through the remaining lines in the input file, reading the k numbers from each line and storing them in the appropriate location in the 2D array. This approach reads every input token exactly once, which is as fast as any reader of this format can be, and with the constant k = 2 of this format it matches the desired O(k + p) bound.
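For readers who prefer to see the single-pass idea outside Java, here is the same reader sketched in Python (the input file name is assumed, as above):
# Same single-pass idea: tokenize once, consume tokens in order
with open("input.txt") as f:
    tokens = iter(f.read().split())
    m, n, k, p = (int(next(tokens)) for _ in range(4))
    data = [[0] * n for _ in range(m)]
    for _ in range(p):
        row = [int(next(tokens)) for _ in range(k)]
        # First number picks the set (row), second picks the element (column)
        data[row[0] - 1][row[1] - 1] = 1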
|
How to read the txt input and store it in O(k+p) time complexity using Java
|
The input file is below:
5 6 2 12
1 1
1 2
1 3
1 4
2 1
2 2
2 5
3 3
3 4
3 6
4 5
5 6
The input means:
m is the number of sets, n is the total number of elements
k is the size of each line, p is the total number of data lines
m = 5, n = 6, k = 2, p = 12
set 1{1,2,3,4}
set 2{1,2,5}
set 3{3,4,6}
set 4{5}
set 5{6}
However, {m, n, k, p} will be random values.
I expect to read all the data into a 2D array data table like the one below
1 1 1 1 0 0
1 1 0 0 1 0
0 0 1 1 0 1
0 0 0 0 1 0
0 0 0 0 0 1
in O(k+p) time complexity.
Can you give me some ideas on how to read the input within this time bound? Thank you!
I tried to use a loop:
for loop from 0 to p-1{
get the data as a string
convert string to int array
for loop from 0 to k-1{
array[int[0]].store element[int[k]]
}
}
but this takes O(kp).
|
[
"To read the input file and store the data in a 2D array with time complexity O(k + p), you can use the following approach:\n\nRead the first four numbers from the input file, which represent the values of m, n, k, and p.\n\nInitialize a 2D array of size m x n, where m is the number of sets and n is the total number of elements.\n\nLoop through the remaining lines in the input file (p lines in total). For each line, read the k numbers and store them in the appropriate location in the 2D array.\n\n\nThe key to achieving O(k + p) time complexity is to avoid using nested loops, as they would increase the time complexity to O(kp). Instead, use a single loop to read and store the data from each line of the input file.\nHere is an example of how you could implement this approach in Java:\n// Read the first four numbers from the input file\nScanner input = new Scanner(new File(\"input.txt\"));\nint m = input.nextInt();\nint n = input.nextInt();\nint k = input.nextInt();\nint p = input.nextInt();\n\n// Initialize a 2D array to store the data\nint[][] data = new int[m][n];\n\n// Loop through the remaining lines in the input file\nfor (int i = 0; i < p; i++) {\n // Read the k numbers from the current line\n int[] lineData = new int[k];\n for (int j = 0; j < k; j++) {\n lineData[j] = input.nextInt();\n }\n\n // Store the data in the appropriate location in the 2D array\n data[lineData[0] - 1][lineData[1] - 1] = 1;\n}\n\n// The data is now stored in the 2D array\n\nIn this example, we read the first four numbers from the input file and use them to initialize the 2D array. Then, we loop through the remaining lines in the input file, reading the k numbers from each line and storing them in the appropriate location in the 2D array. This approach has a time complexity of O(k + p), which is the desired result.\n"
] |
[
0
] |
[] |
[] |
[
"algorithm",
"file_io",
"java"
] |
stackoverflow_0074661280_algorithm_file_io_java.txt
|
Q:
How to cross compile arm64 assembly from linux to mac
I'm trying to cross compile assembly hello world
Currently, when I run the Linux build it says "exec format error: ./linuxbuild.out" with error code 126, and there are some differences in the objdump output shown below. I tried using -F on the linker and lazy lib without success. One difference in the dump is that __mh_execute_header is missing.
Here's what I did
On x86-64 linux I have the assembly
$ cat hello.s
.global _main
.align 2
_main: mov X0, #1
adr X1, hello
mov X2, #13
mov X16, #4
svc 0
mov X0, #0
mov X16, #1
svc 0
hello: .ascii "Hello\n"
I wrote
clang hello.s --target=arm64-apple-macosx12.5.0 -c -o h.o
ld64.lld h.o -arch arm64 -platform_version macos 12 5 -v -dylib -o linuxbuild.out
The output
LLD 14.0.6
Library search paths:
/usr/lib
/usr/local/lib
Framework search paths:
ld64.lld: warning: h.o has version 12.5.0, which is newer than target minimum of 12
You can see the difference
% /opt/homebrew/opt/binutils/bin/objdump -x a.out linuxbuild.out
a.out: file format mach-o-arm64
a.out
architecture: aarch64, flags 0x00000012:
EXEC_P, HAS_SYMS
start address 0x0000000100003f90
MACH-O header:
magic: 0xfeedfacf
cputype: 0x100000c (ARM64)
cpusubtype: 0 (ARM64_ALL)
filetype: 0x2
ncmds: 0x10
sizeocmds: 0x2e8
flags: 0x200085
version: 2
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 00000026 0000000100003f90 0000000100003f90 00003f90 2**2
CONTENTS, ALLOC, LOAD, CODE
1 __TEXT.__unwind_info 00000048 0000000100003fb8 0000000100003fb8 00003fb8 2**2
CONTENTS, ALLOC, LOAD, READONLY, CODE
SYMBOL TABLE:
0000000100003fb0 l 0e SECT 01 0000 [.text] hello
0000000100000000 g 0f SECT 01 0010 [.text] __mh_execute_header
0000000100003f90 g 0f SECT 01 0000 [.text] _main
linuxbuild.out: file format mach-o-arm64
linuxbuild.out
architecture: aarch64, flags 0x00000050:
HAS_SYMS, DYNAMIC
start address 0x0000000000000000
MACH-O header:
magic: 0xfeedfacf
cputype: 0x100000c (ARM64)
cpusubtype: 0 (ARM64_ALL)
filetype: 0x6
ncmds: 0xb
sizeocmds: 0x208
flags: 0x100085
version: 2
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 00000026 0000000000000248 0000000000000248 00000248 2**2
CONTENTS, ALLOC, LOAD, CODE
SYMBOL TABLE:
0000000000000268 l 0e SECT 01 0000 [.text] hello
0000000000000248 g 0f SECT 01 0000 [.text] _main
A:
I figured it out by running clang -v on my Mac to see the linker invocation it uses, then repeating that invocation on Linux. I have zero Mac files on Linux. It seems to work on my simple assembly file:
ld64.lld -dynamic -arch arm64 -platform_version macos 12.0.0 13.0 -syslibroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk h.o -o linux-to-mac
|
How to cross compile arm64 assembly from linux to mac
|
I'm trying to cross compile assembly hello world
Currently, when I run the Linux build it says "exec format error: ./linuxbuild.out" with error code 126, and there are some differences in the objdump output shown below. I tried using -F on the linker and lazy lib without success. One difference in the dump is that __mh_execute_header is missing.
Here's what I did
On x86-64 linux I have the assembly
$ cat hello.s
.global _main
.align 2
_main: mov X0, #1
adr X1, hello
mov X2, #13
mov X16, #4
svc 0
mov X0, #0
mov X16, #1
svc 0
hello: .ascii "Hello\n"
I wrote
clang hello.s --target=arm64-apple-macosx12.5.0 -c -o h.o
ld64.lld h.o -arch arm64 -platform_version macos 12 5 -v -dylib -o linuxbuild.out
The output
LLD 14.0.6
Library search paths:
/usr/lib
/usr/local/lib
Framework search paths:
ld64.lld: warning: h.o has version 12.5.0, which is newer than target minimum of 12
You can see the difference
% /opt/homebrew/opt/binutils/bin/objdump -x a.out linuxbuild.out
a.out: file format mach-o-arm64
a.out
architecture: aarch64, flags 0x00000012:
EXEC_P, HAS_SYMS
start address 0x0000000100003f90
MACH-O header:
magic: 0xfeedfacf
cputype: 0x100000c (ARM64)
cpusubtype: 0 (ARM64_ALL)
filetype: 0x2
ncmds: 0x10
sizeocmds: 0x2e8
flags: 0x200085
version: 2
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 00000026 0000000100003f90 0000000100003f90 00003f90 2**2
CONTENTS, ALLOC, LOAD, CODE
1 __TEXT.__unwind_info 00000048 0000000100003fb8 0000000100003fb8 00003fb8 2**2
CONTENTS, ALLOC, LOAD, READONLY, CODE
SYMBOL TABLE:
0000000100003fb0 l 0e SECT 01 0000 [.text] hello
0000000100000000 g 0f SECT 01 0010 [.text] __mh_execute_header
0000000100003f90 g 0f SECT 01 0000 [.text] _main
linuxbuild.out: file format mach-o-arm64
linuxbuild.out
architecture: aarch64, flags 0x00000050:
HAS_SYMS, DYNAMIC
start address 0x0000000000000000
MACH-O header:
magic: 0xfeedfacf
cputype: 0x100000c (ARM64)
cpusubtype: 0 (ARM64_ALL)
filetype: 0x6
ncmds: 0xb
sizeocmds: 0x208
flags: 0x100085
version: 2
Sections:
Idx Name Size VMA LMA File off Algn
0 .text 00000026 0000000000000248 0000000000000248 00000248 2**2
CONTENTS, ALLOC, LOAD, CODE
SYMBOL TABLE:
0000000000000268 l 0e SECT 01 0000 [.text] hello
0000000000000248 g 0f SECT 01 0000 [.text] _main
|
[
"I figured it out by writing clang -v on my mac, using the linker then repeating it on linux. I have 0 mac files on linux. It seems to work on my simple assembly file\n\nld64.lld -dynamic -arch arm64 -platform_version macos 12.0.0 13.0 -syslibroot /Library/Developer/CommandLineTools/SDKs/MacOSX.sdk h.o -o linux-to-mac\n\n"
] |
[
1
] |
[] |
[] |
[
"apple_m1",
"arm",
"arm64",
"assembly",
"cross_compiling"
] |
stackoverflow_0074660777_apple_m1_arm_arm64_assembly_cross_compiling.txt
|
Q:
Request multiple parameters with one search term
I'm new to using JSON, and I've created an API using Amazon's AWS service to work as a database for my dictionary website.
On the site there is a search bar which when receiving an input goes through my JavaScript to my JSON database to look for the search parameter given. For example "hello" as shown in the code below.
However, my problem is that the only thing fetched into my JavaScript is the "hello" and none of the other fields.
I know I'm new, but I would appreciate some tips on how I can make it possible, using a search parameter, to get everything shown below.
[
{
"word": "hello",
"phonetic": "həˈləʊ",
"phonetics": [
{
"text": "həˈləʊ",
"audio": "//ssl.gstatic.com/dictionary/static/sounds/20200429/hello--_gb_1.mp3"
},
{
"text": "hɛˈləʊ"
}
],
"origin": "early 19th century: variant of earlier hollo ; related to holla.",
"meanings": [
{
"partOfSpeech": "exclamation",
"definitions": [
{
"definition": "used as a greeting or to begin a phone conversation.",
"example": "hello there, Katie!"
}
]
}
]
}
]
A:
You can use filter to search through the array for a string matching the word.
let data = [
{
"word": "hello",
"phonetic": "həˈləʊ",
"phonetics": [
{
"text": "həˈləʊ",
"audio": "//ssl.gstatic.com/dictionary/static/sounds/20200429/hello--_gb_1.mp3"
},
{
"text": "hɛˈləʊ"
}
],
"origin": "early 19th century: variant of earlier hollo ; related to holla.",
"meanings": [
{
"partOfSpeech": "exclamation",
"definitions": [
{
"definition": "used as a greeting or to begin a phone conversation.",
"example": "hello there, Katie!"
}
]
}
]
},
{
"word": "TEST",
"phonetic": "həˈləʊ",
"phonetics": [
{
"text": "həˈləʊ",
"audio": "//ssl.gstatic.com/dictionary/static/sounds/20200429/hello--_gb_1.mp3"
},
{
"text": "hɛˈləʊ"
}
],
"origin": "early 19th century: variant of earlier hollo ; related to holla.",
"meanings": [
{
"partOfSpeech": "exclamation",
"definitions": [
{
"definition": "used as a greeting or to begin a phone conversation.",
"example": "hello there, Katie!"
}
]
}
]
}
]
let search = "HELLO";
let found = data.filter((row) => {
return (row.word.toLowerCase() == search.toLowerCase())
});
console.log(found)
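If the dictionary grows large, filtering the whole array on every search costs linear time per query; one common alternative, sketched here in Python for brevity with the same record shape assumed, is to build a lookup table keyed by the lowercased word once and answer each search in constant time:
# Trimmed records; the real entries carry phonetics/meanings as well
data = [
    {"word": "hello", "phonetic": "həˈləʊ"},
    {"word": "TEST", "phonetic": "həˈləʊ"},
]

# Build once: lowercased word -> full record
index = {entry["word"].lower(): entry for entry in data}

def lookup(term):
    return index.get(term.lower())  # None when the word is absent

print(lookup("HELLO"))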
|
Request multiple parameters with one search term
|
I'm new to using JSON, and I've created an API using Amazon's AWS service to work as a database for my dictionary website.
On the site there is a search bar which when receiving an input goes through my JavaScript to my JSON database to look for the search parameter given. For example "hello" as shown in the code below.
However, my problem is that the only thing fetched into my JavaScript is the "hello" and none of the other fields.
I know I'm new, but I would appreciate some tips on how I can make it possible, using a search parameter, to get everything shown below.
[
{
"word": "hello",
"phonetic": "həˈləʊ",
"phonetics": [
{
"text": "həˈləʊ",
"audio": "//ssl.gstatic.com/dictionary/static/sounds/20200429/hello--_gb_1.mp3"
},
{
"text": "hɛˈləʊ"
}
],
"origin": "early 19th century: variant of earlier hollo ; related to holla.",
"meanings": [
{
"partOfSpeech": "exclamation",
"definitions": [
{
"definition": "used as a greeting or to begin a phone conversation.",
"example": "hello there, Katie!"
}
]
}
]
}
]
|
[
"You can use filter to search through the array for a string matching the word.\n\n\nlet data = [\n {\n \"word\": \"hello\",\n \"phonetic\": \"həˈləʊ\",\n \"phonetics\": [\n {\n \"text\": \"həˈləʊ\",\n \"audio\": \"//ssl.gstatic.com/dictionary/static/sounds/20200429/hello--_gb_1.mp3\"\n },\n {\n \"text\": \"hɛˈləʊ\"\n }\n ],\n \"origin\": \"early 19th century: variant of earlier hollo ; related to holla.\",\n \"meanings\": [\n {\n \"partOfSpeech\": \"exclamation\",\n \"definitions\": [\n {\n \"definition\": \"used as a greeting or to begin a phone conversation.\",\n \"example\": \"hello there, Katie!\"\n }\n ]\n }\n ]\n },\n {\n \"word\": \"TEST\",\n \"phonetic\": \"həˈləʊ\",\n \"phonetics\": [\n {\n \"text\": \"həˈləʊ\",\n \"audio\": \"//ssl.gstatic.com/dictionary/static/sounds/20200429/hello--_gb_1.mp3\"\n },\n {\n \"text\": \"hɛˈləʊ\"\n }\n ],\n \"origin\": \"early 19th century: variant of earlier hollo ; related to holla.\",\n \"meanings\": [\n {\n \"partOfSpeech\": \"exclamation\",\n \"definitions\": [\n {\n \"definition\": \"used as a greeting or to begin a phone conversation.\",\n \"example\": \"hello there, Katie!\"\n }\n ]\n }\n ]\n }\n]\nlet search = \"HELLO\";\nlet found = data.filter((row) => {\n return (row.word.toLowerCase() == search.toLowerCase()) \n});\n\nconsole.log(found)\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"html",
"javascript",
"json"
] |
stackoverflow_0074661148_html_javascript_json.txt
|
Q:
Lambda Snapstart with Serverless framework
AWS announced Lambda SnapStart very recently, so I tried to give it a go since my application has a cold start time of ~4s.
I was able to do this by adding the following under resources:
- extensions:
NodeLambdaFunction:
Properties:
SnapStart:
ApplyOn: PublishedVersions
Now, when I actually go to the said lambda, this is what I see :
So far so good!
But the issue is that when I check my CloudWatch Logs, there's no trace of Restore Time; instead I see the good old Init Duration for cold starts, which means SnapStart isn't working properly.
I dug deeper: SnapStart only works for versioned ARNs. But the thing is, Serverless already claims that:
By default, the framework creates function versions for every deploy.
And on checking the logs, I see that the logStreams have the prefix: 2022/11/30/[$LATEST].
When I check the Versions tab in console, I see version number 240. So I would expect that 240 is the latest version of this lambda function and this is the function version being invoked everytime.
However, clicking on the version number opens a lambda function with 240 attached to its ARN, and testing that function with SnapStart works perfectly fine.
So I am confused: are the $LATEST version and version number 240 (in my case) different?
If no, then why isn't SnapStart automatically activated for $LATEST?
If yes, how do I make sure they are the same?
A:
SnapStart is only available for published versions of a Lambda function. It cannot be used with $LATEST.
Using Versions is pretty hard for Serverless Framework, SAM, CDK, and basically any other IaC tool today, because by default they will all use $LATEST to integrate with API Gateway, SNS, SQS, DynamoDB, EventBridge, etc.
You need to update the integration with API Gateway (or whatever service you're using) to point to the Lambda Version you publish, after that Lambda deployment has completed. This isn't easy to do using Serverless Framework (and other tools). You may be able to achieve this using this traffic-shifting plugin.
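As a rough sketch of the manual step described above, using the AWS CLI (the function name, alias name, and version number are placeholders, not taken from the question):
# Publish a version once the deployment has completed...
aws lambda publish-version --function-name my-function
# ...then point an existing alias at that version and invoke through the alias ARN.
aws lambda update-alias --function-name my-function --name live --function-version 241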
|
Lambda Snapstart with Serverless framework
|
AWS announced Lambda SnapStart very recently, so I tried to give it a go since my application has a cold start time of ~4s.
I was able to do this by adding the following under resources:
- extensions:
NodeLambdaFunction:
Properties:
SnapStart:
ApplyOn: PublishedVersions
Now, when I actually go to the said lambda, this is what I see :
So far so good!
But the issue is that when I check my CloudWatch Logs, there's no trace of Restore Time; instead I see the good old Init Duration for cold starts, which means SnapStart isn't working properly.
I dug deeper: SnapStart only works for versioned ARNs. But the thing is, Serverless already claims that:
By default, the framework creates function versions for every deploy.
And on checking the logs, I see that the logStreams have the prefix: 2022/11/30/[$LATEST].
When I check the Versions tab in console, I see version number 240. So I would expect that 240 is the latest version of this lambda function and this is the function version being invoked everytime.
However, clicking on the version number opens a lambda function with 240 attached to its ARN, and testing that function with SnapStart works perfectly fine.
So I am confused: are the $LATEST version and version number 240 (in my case) different?
If no, then why isn't SnapStart automatically activated for $LATEST?
If yes, how do I make sure they are the same?
|
[
"SnapStart is only available for published versions of a Lambda function. It cannot be used with $LATEST.\nUsing Versions is pretty hard for Serverless Framework, SAM, CDK, and basically any other IaC tool today, because by default they will all use $LATEST to integrate with API Gateway, SNS, SQS, DynamoDB, EventBridge, etc.\nYou need to update the integration with API Gateway (or whatever service you're using) to point to the Lambda Version you publish, after that Lambda deployment has completed. This isn't easy to do using Serverless Framework (and other tools). You may be able to achieve this using this traffic-shifting plugin.\n"
] |
[
0
] |
[] |
[] |
[
"amazon_web_services",
"aws_lambda",
"serverless_framework"
] |
stackoverflow_0074639793_amazon_web_services_aws_lambda_serverless_framework.txt
|
Q:
Newbie question about return keyword in Python functions
I am currently working in Codecademy on a Python course, and while trying to define a function that takes in a list and returns that list with its own length appended to it, I realized I kept getting "None" instead of a full list and was wondering why.
I was able to figure out the correct solution, but for my own education, I'm curious why my original code didn't work as intended.
#This is the first one I tried
def append_size(lst):
return lst.append(len(lst))
#Uncomment the line below when your function is done
print(append_size([23, 42, 108]))
# returns None instead of [23, 42, 108]
#This is the correct function
def append_size(lst):
lst.append(len(lst))
return lst
A:
lst.append always returns None. It modifies lst in place, so all you need to do is return lst itself.
def append_size(lst):
lst.append(len(lst))
return lst
This is a violation, though, of the usual convention (followed by list.append itself) that a function or method should either modify an argument in-place and return None or return a new value based on the unchanged argument.
There's no particular need to return lst, since presumably the caller already has a reference to the list, as they were able to pass it as an argument in the first place.
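To follow that convention while still returning a value, a non-mutating variant (a sketch, not part of the answer above) builds a new list and leaves the argument untouched:
def append_size(lst):
    # Concatenation creates a new list; the caller's list is unchanged.
    return lst + [len(lst)]

print(append_size([23, 42, 108]))  # [23, 42, 108, 3]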
|
Newbie question about return keyword in Python functions
|
I am currently working in Codecademy on a Python course, and while trying to define a function that takes in a list and returns that list with its own length appended to it, I realized I kept getting "None" instead of a full list and was wondering why.
I was able to figure out the correct solution, but for my own education, I'm curious why my original code didn't work as intended.
#This is the first one I tried
def append_size(lst):
return lst.append(len(lst))
#Uncomment the line below when your function is done
print(append_size([23, 42, 108]))
# returns None instead of [23, 42, 108]
#This is the correct function
def append_size(lst):
lst.append(len(lst))
return lst
|
[
"lst.append always returns None. It modifies lst in place, so all you need to do is return lst itself.\ndef append_size(lst):\n lst.append(len(lst))\n return lst\n\n\nThis is a violation, though, of the usually conventional (followed by list.append itself) that a function or method should either modify an argument in-place and return None or return a new value based on the unchanged argument.\nThere's no particular need to return lst, since presumably the caller already has a reference to the list, as they were able to pass it as an argument in the first place.\n"
] |
[
0
] |
[] |
[] |
[
"function",
"python",
"return"
] |
stackoverflow_0074661327_function_python_return.txt
|
Q:
How to convert text into structured data, taking into account missing fields, in Python?
First, apologies if this sounds too basic. I have the following semi-structured data in text format; I need to parse it into a structured format:
example:
Name
Alex
Address
14 high street
London
Color
blue
red
Name
Bob
Color
black
**Note that Alex has two colors, while Bob does not have an address. **
I want something that looks like this:
example output
I think the right way is using regular expressions, but I'm struggling to split the text properly since some fields may be missing. What's a proper clean way to do this?
text='Name\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\nBob\nColor\nblack'
profiles=re.split('(Name\n)', text, flags=re.IGNORECASE)
for profile in profiles:
#get name
name=re.split('(Name\n)|(Address\n)|(Color\n)', profile.strip(), flags=re.IGNORECASE)[0]
print(name)
#get address
#get color
A:
Try:
s = """\
Name
Alex
Address
14 high street
London
Color
blue
red
Name
Bob
Color
black"""
import pandas as pd
from itertools import groupby
colnames = ["Name", "Address", "Color"]
col1, col2 = [], []
for k, g in groupby(
(l for l in s.splitlines() if l.strip()), lambda l: l in colnames
):
(col2, col1)[k].append(" ".join(g))
df = pd.DataFrame({"col1": col1, "col2": col2})
df = df.assign(col3=df.col1.eq("Name").cumsum()).pivot(
index="col3", columns="col1", values="col2"
)
df.index.name, df.columns.name = None, None
df["Color"] = df["Color"].str.split()
df = df.explode("Color").fillna("")
print(df[colnames])
Prints:
Name Address Color
1 Alex 14 high street London blue
1 Alex 14 high street London red
2 Bob black
A:
Here's a vanilla python way to tackle the problem (rather than using pandas)
Load your data (you'll probably read the data from your file, but we'll use this placeholder instead)
s = """
Name
Alex
Address
14 high street
London
Color
blue
red
Name
Bob
Color
black
"""
Split up each entry by lines with 'Name'
entities = [e for e in s.split('Name')]
produces
[
'\n',
'\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\n',
'\n\nBob\nColor\nblack\n'
]
Replace the newlines with spaces, then clean up duplicate spaces
entities = [e.replace('\n',' ').replace('  ',' ') for e in entities]
produces
[
' ',
' Alex Address 14 high street London Color blue red ',
' Bob Color black '
]
Split each entry by space and toss any empty list entries
entities = [
[x for x in e.split(' ') if x != '']
for e in entities
]
produces
[
[],
['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'],
['Bob', 'Color', 'black']
]
Get rid of any empty entities
entities = [e for e in entities if len(e) > 0]
produces
[
['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'],
['Bob', 'Color', 'black']
]
Establish our tokens, then for the index of each token, capture the list elements that appear before the next token
# First, establish the tokens (field names, assumed from the example data)
# that mark the start of each field inside an entity:
tokens = ['Address', 'Color']
# We'll store our findings here.
# We'll use the name, which is the first element of our 'entity' list,
# as the key for this dict.
properties = {}
for entity in entities:
# We'll figure out where each token shows up in our list
token_indices = []
for token in tokens:
# we could use
# token_indices.append(entity.index(token))
# if we knew that each token would only show up once
token_indices += [i for i,t in enumerate(entity) if t==token]
# now we'll sort the list of indices so that we can be sure
# we're dealing with them in order
token_indices = sorted(token_indices)
# since we haven't seen this person before, we'll establish a
# dict for their properties.
individual_properties = {}
for k,_ in enumerate(token_indices):
this_tkn_name = entity[token_indices[k]]
this_tkn_idx = token_indices[k]
# We'll iterate over each token's index
if k+1 < len(token_indices):
# this isn't the last token
next_tkn_idx = token_indices[k+1]
individual_properties[
this_tkn_name
] = ' '.join(entity[this_tkn_idx+1:next_tkn_idx])
else:
# this is the last token
individual_properties[
this_tkn_name
] = ' '.join(entity[this_tkn_idx+1:])
# the first element in the entity list is their name, so we can
# find that with entity[0]
properties[entity[0]] = individual_properties
produces
{
'Alex': {
'Address': '14 high street London',
'Color': 'blue red'
},
'Bob': {
'Color': 'black'
}
}
NOTE: Depending on what you want to do with this, you may need further processing. For example, if you know the colors should be a list, you could use split(' ') to get a list instead of a single string.
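A tiny sketch of that post-processing step, building on the properties dict produced above:
colors = properties['Alex']['Color'].split(' ')
print(colors)  # ['blue', 'red']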
|
How to convert text into structured data, taking into account missing fields, in Python?
|
First, apologies if this sounds too basic. I have the following semi-structured data in text format; I need to parse it into a structured format:
example:
Name
Alex
Address
14 high street
London
Color
blue
red
Name
Bob
Color
black
**Note that Alex has two colors, while Bob does not have an address. **
I want something that looks like this:
example output
I think the right way is using regular expressions, but I'm struggling to split the text properly since some fields may be missing. What's a proper clean way to do this?
text='Name\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\nBob\nColor\nblack'
profiles=re.split('(Name\n)', text, flags=re.IGNORECASE)
for profile in profiles:
#get name
name=re.split('(Name\n)|(Address\n)|(Color\n)', profile.strip(), flags=re.IGNORECASE)[0]
print(name)
#get address
#get color
|
[
"Try:\ns = \"\"\"\\\nName\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\n\nBob\nColor\nblack\"\"\"\n\n\nimport pandas as pd\nfrom itertools import groupby\n\ncolnames = [\"Name\", \"Address\", \"Color\"]\n\n\ncol1, col2 = [], []\nfor k, g in groupby(\n (l for l in s.splitlines() if l.strip()), lambda l: l in colnames\n):\n (col2, col1)[k].append(\" \".join(g))\n\ndf = pd.DataFrame({\"col1\": col1, \"col2\": col2})\ndf = df.assign(col3=df.col1.eq(\"Name\").cumsum()).pivot(\n index=\"col3\", columns=\"col1\", values=\"col2\"\n)\ndf.index.name, df.columns.name = None, None\n\n\ndf[\"Color\"] = df[\"Color\"].str.split()\ndf = df.explode(\"Color\").fillna(\"\")\n\nprint(df[colnames])\n\nPrints:\n Name Address Color\n1 Alex 14 high street London blue\n1 Alex 14 high street London red\n2 Bob black\n\n",
"Here's a vanilla python way to tackle the problem (rather than using pandas)\n\nLoad your data (you'll probably read the data from your file, but we'll use this placeholder instead)\n\ns = \"\"\"\nName\nAlex\n\nAddress\n14 high street\nLondon\n\nColor\nblue\nred\n\nName\n\nBob\nColor\nblack\n\"\"\"\n\n\nSplit up each entry by lines with 'Name'\n\nentities = [e for e in s.split('Name')]\n\nproduces\n[\n '\\n',\n '\\nAlex\\n\\nAddress\\n14 high street\\nLondon\\n\\nColor\\nblue\\nred\\n\\n',\n '\\n\\nBob\\nColor\\nblack\\n'\n]\n\n\nReplace the newlines with spaces, then clean up duplicate spaces\n\nentities = [e.replace('\\n',' ').replace(' ',' ') for e in entities]\n\nproduces\n[\n ' ',\n ' Alex Address 14 high street London Color blue red ',\n ' Bob Color black '\n]\n\n\nSplit each entry by space and toss any empty list entries\n\nentities = [\n [x for x in e.split(' ') if x != '']\n for e in entities\n]\n\nproduces\n[\n [],\n ['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'],\n ['Bob', 'Color', 'black']\n]\n\n\nGet rid of any empty entities\n\nentities = [e for e in entities if len(e) > 0]\n\nproduces\n[\n ['Alex', 'Address', '14', 'high', 'street', 'London', 'Color', 'blue', 'red'],\n ['Bob', 'Color', 'black']\n]\n\n\nEstablish our tokens, then for the index of each token, capture the list elements that appear before the next token\n\n# We'll store our findings here. \n# We'll use the name, which is the first element of our 'entity' list, \n# as the key for this dict.\n\nproperties = {}\n\nfor entity in entities:\n # We'll figure out where each token shows up in our list\n token_indices = []\n for token in tokens:\n # we could use \n # token_indices.append(entity.index(token)) \n # if we knew that each token would only show up once\n token_indices += [i for i,t in enumerate(entity) if t==token]\n\n # now we'll sort the list of indices so that we can be sure\n # we're dealing with them in order\n token_indices = sorted(token_indices)\n \n # since we haven't seen this person before, we'll establish a \n # dict for their properties.\n\n\n\n individual_properties = {}\n \n for k,_ in enumerate(token_indices):\n this_tkn_name = entity[token_indices[k]]\n this_tkn_idx = token_indices[k]\n next_tkn_idx = token_indices[k+1]\n\n # We'll iterate over each token's index\n if k+1 < len(token_indices):\n # this isn't the last token\n individual_properties[\n this_tkn_name\n ] = ' '.join(entity[this_tkn_idx+1:next_tkn_idx])\n\n else:\n # this is the last token\n individual_properties[\n this_tkn_name\n ] = ' '.join(entity[this_tkn_idx+1:])\n \n # the first element in the entity list is their name, so we can\n # find that with entity[0]\n properties[entity[0]] = individual_properties\n\nproduces\n{\n 'Alex': {\n 'Address': '14 high street London', \n 'Color': 'blue red'\n },\n 'Bob': {\n 'Color': 'black'\n }\n}\n\nNOTE: Depending on what you want to do with this, you may need further processing. Maybe you know what the colors are a list so you could use split(' ') to get a list instead of a single string.\n"
] |
[
2,
0
] |
[] |
[] |
[
"dataframe",
"python",
"python_re",
"string"
] |
stackoverflow_0074661015_dataframe_python_python_re_string.txt
|
Q:
Day Name from Date in JS
I need to display the name of the day given a date (like "05/23/2014") which I get from a 3rd party.
I've tried using Date, but I only get the date.
What is the correct way to get the name of the day?
A:
Use the methods provided by the standard JavaScript Date class:
Getting the day name from a date:
function getDayName(dateStr, locale)
{
var date = new Date(dateStr);
return date.toLocaleDateString(locale, { weekday: 'long' });
}
var dateStr = '05/23/2014';
var day = getDayName(dateStr, "nl-NL"); // Gives back 'Vrijdag' which is Dutch for Friday.
Getting all weekdays in an array:
function getWeekDays(locale)
{
var baseDate = new Date(Date.UTC(2017, 0, 2)); // just a Monday
var weekDays = [];
for(i = 0; i < 7; i++)
{
weekDays.push(baseDate.toLocaleDateString(locale, { weekday: 'long' }));
baseDate.setDate(baseDate.getDate() + 1);
}
return weekDays;
}
var weekDays = getWeekDays('nl-NL'); // Gives back { 'maandag', 'dinsdag', 'woensdag', 'donderdag', 'vrijdag', 'zaterdag', 'zondag'} which are the days of the week in Dutch.
For American dates use 'en-US' as locale.
A:
You could use the Date.getDay() method, which returns 0 for Sunday, up to 6 for Saturday. So, you could simply create an array with the day names:
var days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];
var d = new Date(dateString);
var dayName = days[d.getDay()];
Here dateString is the string you received from the third party API.
Alternatively, if you want the first 3 letters of the day name, you could use the Date object's built-in toString method:
var d = new Date(dateString);
var dayName = d.toString().split(' ')[0];
That will take the first word in the d.toString() output, which will be the 3-letter day name.
A:
use the Date.toLocaleString() method :
new Date(dateString).toLocaleString('en-us', {weekday:'long'})
A:
let weekday = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'][new Date().getDay()]
A:
var days = [
"Sunday",
"Monday",
"...", //etc
"Saturday"
];
console.log(days[new Date().getDay()]);
Simple, read the Date object in JavaScript manual
To do other things with date, like get a readable string from it, I use:
var d = new Date();
d.toLocaleString();
If you just want time or date use:
d.toLocaleTimeString();
d.toLocaleDateString();
You can parse dates either by doing:
var d = new Date(dateToParse);
or
var d = Date.parse(dateToParse);
A:
To get the day from any given date, just pass the date into a new Date object:
let date = new Date("01/05/2020");
let day = date.toLocaleString('en-us', {weekday: 'long'});
console.log(day);
// expected result = Sunday
To read more, go to mdn-date.prototype.toLocaleString()(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleString)
A:
let weekday = new Date(dateString).toLocaleString('en-us', {weekday:'long'});
console.log('Weekday',weekday);
A:
Take a look at this :
var event = new Date(Date.UTC(2012, 11, 20, 3, 0, 0));
var options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };
console.log(event.toLocaleDateString('de-DE', options));
// expected output: Donnerstag, 20. Dezember 2012
console.log(event.toLocaleDateString('ar-EG', options));
// expected output: الخميس، ٢٠ ديسمبر، ٢٠١٢
console.log(event.toLocaleDateString('ko-KR', options));
// expected output: 2012년 12월 20일 목요일
Source : Mozilla Doc
A:
One line solution :
const day = ["sunday","monday","tuesday","wednesday","thursday","friday","saturday"][new Date().getDay()]
A:
Easiest and simplest way:
var days = ["Sun", "Mon", "Tue", "Wed", "Thu", "Fri", "Sat"];
var dayName = days[new Date().getDay()];
A:
var dayName =['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];
var day = dayName[new Date().getDay()];
console.log(day)
A:
Try using this code:
var event = new Date();
var options = { weekday: 'long' };
console.log(event.toLocaleDateString('en-US', options));
this will give you the day name in string format.
A:
I'm not a fan of over-complicated solutions; if anyone else comes up with something better, please let us know :)
any-name.js
var today = new Date().toLocaleDateString(undefined, {
day: '2-digit',
month: '2-digit',
year: 'numeric',
weekday: 'long'
});
any-name.html
<script>
document.write(today);
</script>
A:
Shortest one liner
Change the UTC day from 6 to 5 if you want Array to start from Sunday.
const getWeekDays = (locale) => [...Array(7).keys()].map((v)=>new Date(Date.UTC(1970, 0, 6+v)).toLocaleDateString(locale, { weekday: 'long' }));
console.log(getWeekDays('de-DE'));
A:
One more option is to use the inbuilt function Intl.DateTimeFormat, e.g.:
const getDayName = (dateString) =>
new Intl.DateTimeFormat('en-Us', { weekday: 'long' }).format(new Date(dateString));
<label for="inp">Enter a date string in the format "MM/DD/YYYY" or "YYYY-MM-DD" and press "OK":</label><br>
<input type="text" id="inp" value="01/31/2021">
<button onclick="alert(getDayName(document.getElementById('inp').value))">OK</button>
A:
This method doesn't require you to set a random date or know the locale string beforehand. This method is independent of predefined values.
The locale can be retrieved from the client.
It automatically fills the weekdays array in the given locale.
const locale = 'en-US' // Change this based on client settings
const date = new Date()
const weekdays = []
while(!weekdays[date.getDay()]) {
weekdays[date.getDay()] = date.toLocaleString(locale, { weekday: 'long'})
date.setDate(date.getDate() + 1)
}
console.log(weekdays)
If you want the locale names for the months as well;
const locale = 'en-US' // Change this based on client settings
const date = new Date()
date.setMonth(0) // Not strictly needed, but why not..
date.setDate(1) // Needed because if current date is >= 29, the month Feb can get skipped.
const months = []
while(!months[date.getMonth()]) {
months[date.getMonth()] = date.toLocaleString(locale, { month: 'long'})
date.setMonth(date.getMonth() + 1)
}
console.log(months)
I currently use it like this:
(As you can see, I make a clone of the current date and set the month and date to their first occurrence)
const date = new Date()
let locale = navigator.languages
? navigator.languages[0]
: (navigator.language || navigator.userLanguage)
let clone = new Date(date.getFullYear(), 0, 1, 0, 0, 0, 0)
let weekdays = []
while (!weekdays[clone.getDay()]) {
weekdays[clone.getDay()] = {
index: clone.getDay(),
long: clone.toLocaleString(locale, { weekday: 'long' }),
short: clone.toLocaleString(locale, { weekday: 'short' })
}
clone.setDate(clone.getDate() + 1)
}
clone.setDate(clone.getDate() - weekdays.length) // Reset
let months = []
while (!months[clone.getMonth()]) {
months[clone.getMonth()] = {
index: clone.getMonth(),
long: clone.toLocaleString(locale, { month: 'long' }),
short: clone.toLocaleString(locale, { month: 'short' })
}
clone.setMonth(clone.getMonth() + 1)
}
clone.setMonth(clone.getMonth() - months.length) // Reset
let hours = []
while (!hours[clone.getHours()]) {
hours[clone.getHours()] = {
index: clone.getHours(),
hour24: clone.toLocaleTimeString(locale, { hour12: false, hour: '2-digit', minute: '2-digit' }),
hour12: clone.toLocaleTimeString(locale, { hour12: true, hour: 'numeric' })
}
clone.setHours(clone.getHours() + 1)
}
clone.setHours(clone.getHours() - hours.length) // Reset
console.log(locale)
console.log(weekdays)
console.log(months)
console.log(hours)
console.log(clone.toLocaleString())
A:
Solution No.1
var today = new Date();
var day = today.getDay();
var days = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"];
var dayname = days[day];
document.write(dayname);
Solution No.2
var today = new Date();
var day = today.getDay();
switch(day){
case 0:
day = "Sunday";
break;
case 1:
day = "Monday";
break;
case 2:
day ="Tuesday";
break;
case 3:
day = "Wednesday";
break;
case 4:
day = "Thrusday";
break;
case 5:
day = "Friday";
break;
case 6:
day = "Saturday";
break;
}
document.write(day);
A:
you can use an object
var days = {
'Mon': 'Monday',
'etc..': 'etc..',
'Fri': 'Friday'
}
var date = new Date().toString().split(' ')[0]; //get day abbreviation first
console.log(days[date]);
A:
Just use it:
function getWeekDayNames(format = 'short', locale = 'ru') {
const names = [];
const date = new Date('2020-05-24');
let days = 7;
while (days !== 0) {
date.setDate(date.getDate() + 1);
names.push(date.toLocaleDateString(locale, { weekday: format }));
days--;
}
return names;
}
About formats you can read here Documentation DateTimeFormat
A:
var date = new Date(Date.UTC(2012, 11, 20, 3, 0, 0));
// request a weekday along with a long date
var options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };
console.log(date.toLocaleDateString('de-DE', options));
// → "Donnerstag, 20. Dezember 2012"
// an application may want to use UTC and make that visible
options.timeZone = 'UTC';
options.timeZoneName = 'short';
console.log(date.toLocaleDateString('en-US', options));
// → "Thursday, December 20, 2012, UTC"
A:
// Solve this problem with a function.
// The days of the week are: "Sunday", "Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday"
function getDayName(dateString) {
let dayName = ["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"][new Date(dateString).getDay()];
return dayName;
}
let result = getDayName("10/12/2022");
console.log(result);
|
Day Name from Date in JS
|
I need to display the name of the day given a date (like "05/23/2014") which I get from a 3rd party.
I've tried using Date, but I only get the date.
What is the correct way to get the name of the day?
|
[
"Use the methods provided by the standard JavaScript Date class:\nGetting the day name from a date:\nfunction getDayName(dateStr, locale)\n{\n var date = new Date(dateStr);\n return date.toLocaleDateString(locale, { weekday: 'long' }); \n}\n\nvar dateStr = '05/23/2014';\nvar day = getDayName(dateStr, \"nl-NL\"); // Gives back 'Vrijdag' which is Dutch for Friday.\n\nGetting all weekdays in an array:\nfunction getWeekDays(locale)\n{\n var baseDate = new Date(Date.UTC(2017, 0, 2)); // just a Monday\n var weekDays = [];\n for(i = 0; i < 7; i++)\n { \n weekDays.push(baseDate.toLocaleDateString(locale, { weekday: 'long' }));\n baseDate.setDate(baseDate.getDate() + 1); \n }\n return weekDays;\n}\n\nvar weekDays = getWeekDays('nl-NL'); // Gives back { 'maandag', 'dinsdag', 'woensdag', 'donderdag', 'vrijdag', 'zaterdag', 'zondag'} which are the days of the week in Dutch.\n\nFor American dates use 'en-US' as locale.\n",
"You could use the Date.getDay() method, which returns 0 for sunday, up to 6 for saturday. So, you could simply create an array with the name for the day names:\nvar days = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];\nvar d = new Date(dateString);\nvar dayName = days[d.getDay()];\n\nHere dateString is the string you received from the third party API.\nAlternatively, if you want the first 3 letters of the day name, you could use the Date object's built-in toString method:\nvar d = new Date(dateString);\nvar dayName = d.toString().split(' ')[0];\n\nThat will take the first word in the d.toString() output, which will be the 3-letter day name.\n",
"use the Date.toLocaleString() method :\nnew Date(dateString).toLocaleString('en-us', {weekday:'long'})\n\n",
"let weekday = ['Sun', 'Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat'][new Date().getDay()]\n\n",
"var days = [\n \"Sunday\",\n \"Monday\",\n \"...\", //etc\n \"Saturday\"\n];\n\nconsole.log(days[new Date().getDay()]);\n\nSimple, read the Date object in JavaScript manual\nTo do other things with date, like get a readable string from it, I use:\nvar d = new Date();\nd.toLocaleString();\n\nIf you just want time or date use:\nd.toLocaleTimeString();\nd.toLocaleDateString();\n\nYou can parse dates either by doing:\nvar d = new Date(dateToParse);\n\nor\nvar d = Date.parse(dateToParse);\n\n",
"To get the day from any given date, just pass the date into a new Date object:\nlet date = new Date(\"01/05/2020\");\nlet day = date.toLocaleString('en-us', {weekday: 'long'});\nconsole.log(day);\n// expected result = tuesday\n\nTo read more, go to mdn-date.prototype.toLocaleString()(https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Date/toLocaleString)\n",
"let weekday = new Date(dateString).toLocaleString('en-us', {weekday:'long'});\nconsole.log('Weekday',weekday);\n\n",
"Take a look at this :\nvar event = new Date(Date.UTC(2012, 11, 20, 3, 0, 0));\n\nvar options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };\n\nconsole.log(event.toLocaleDateString('de-DE', options));\n// expected output: Donnerstag, 20. Dezember 2012\n\nconsole.log(event.toLocaleDateString('ar-EG', options));\n// expected output: الخميس، ٢٠ ديسمبر، ٢٠١٢\n\nconsole.log(event.toLocaleDateString('ko-KR', options));\n// expected output: 2012년 12월 20일 목요일\n\nSource : Mozilla Doc\n",
"One line solution :\nconst day = [\"sunday\",\"monday\",\"tuesday\",\"wednesday\",\"thursday\",\"friday\",\"saturday\"][new Date().getDay()]\n\n",
"Easiest and simplest way: \nvar days = [\"Sun\", \"Mon\", \"Tue\", \"Wed\", \"Thu\", \"Fri\", \"Sat\"];\nvar dayName = days[new Date().getDay()];\n\n",
"\n\nvar dayName =['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday'];\nvar day = dayName[new Date().getDay()];\nconsole.log(day)\n\n\n\n",
"Try using this code:\nvar event = new Date();\nvar options = { weekday: 'long' };\nconsole.log(event.toLocaleDateString('en-US', options));\n\nthis will give you the day name in string format.\n",
"I'm not a fan of over-complicated solutions if anyone else comes up with something better, please let us know :) \nany-name.js\n\nvar today = new Date().toLocaleDateString(undefined, {\n day: '2-digit',\n month: '2-digit',\n year: 'numeric',\n weekday: 'long'\n});\n\nany-name.html\n<script>\n document.write(today);\n</script>\n\n",
"Shortest one liner\nChange the UTC day from 6 to 5 if you want Array to start from Sunday.\n\n\nconst getWeekDays = (locale) => [...Array(7).keys()].map((v)=>new Date(Date.UTC(1970, 0, 6+v)).toLocaleDateString(locale, { weekday: 'long' }));\n\nconsole.log(getWeekDays('de-DE')); \n\n\n\n",
"One more option is to use the inbuilt function Intl.DateTimeFormat, e.g.:\n\n\nconst getDayName = (dateString) =>\n new Intl.DateTimeFormat('en-Us', { weekday: 'long' }).format(new Date(dateString));\n<label for=\"inp\">Enter a date string in the format \"MM/DD/YYYY\" or \"YYYY-MM-DD\" and press \"OK\":</label><br>\n<input type=\"text\" id=\"inp\" value=\"01/31/2021\">\n<button onclick=\"alert(getDayName(document.getElementById('inp').value))\">OK</button>\n\n\n\n",
"This method doesn't require you to set a random date or know the stringLocale beforehand. This method is independent from predefined values.\nThe locale can be retrieved from the client.\nAutomatically fill the weekdays array in the string locale.\n\n\nconst locale = 'en-US' // Change this based on client settings\nconst date = new Date()\n\nconst weekdays = []\nwhile(!weekdays[date.getDay()]) {\n weekdays[date.getDay()] = date.toLocaleString(locale, { weekday: 'long'})\n date.setDate(date.getDate() + 1)\n}\n\nconsole.log(weekdays)\n\n\n\nIf you want the locale names for the months as well;\n\n\nconst locale = 'en-US' // Change this based on client settings\nconst date = new Date()\ndate.setMonth(0) // Not strictly needed, but why not..\ndate.setDate(1) // Needed because if current date is >= 29, the month Feb can get skipped.\n\nconst months = []\nwhile(!months[date.getMonth()]) {\n months[date.getMonth()] = date.toLocaleString(locale, { month: 'long'})\n date.setMonth(date.getMonth() + 1)\n}\n\nconsole.log(months)\n\n\n\nI currently use it like this:\n(As you can see, I make a clone of the current date and set the month and date to their first occurance)\n\n\nconst date = new Date()\n\nlet locale = navigator.languages\n ? navigator.languages[0]\n : (navigator.language || navigator.userLanguage)\nlet clone = new Date(date.getFullYear(), 0, 1, 0, 0, 0, 0)\n\nlet weekdays = []\nwhile (!weekdays[clone.getDay()]) {\n weekdays[clone.getDay()] = {\n index: clone.getDay(),\n long: clone.toLocaleString(locale, { weekday: 'long' }),\n short: clone.toLocaleString(locale, { weekday: 'short' })\n }\n clone.setDate(clone.getDate() + 1)\n}\nclone.setDate(clone.getDate() - weekdays.length) // Reset\n\nlet months = []\nwhile (!months[clone.getMonth()]) {\n months[clone.getMonth()] = {\n index: clone.getMonth(),\n long: clone.toLocaleString(locale, { month: 'long' }),\n short: clone.toLocaleString(locale, { month: 'short' })\n }\n clone.setMonth(clone.getMonth() + 1)\n}\nclone.setMonth(clone.getMonth() - months.length) // Reset\n\nlet hours = []\nwhile (!hours[clone.getHours()]) {\n hours[clone.getHours()] = {\n index: clone.getHours(),\n hour24: clone.toLocaleTimeString(locale, { hour12: false, hour: '2-digit', minute: '2-digit' }),\n hour12: clone.toLocaleTimeString(locale, { hour12: true, hour: 'numeric' })\n }\n clone.setHours(clone.getHours() + 1)\n}\nclone.setHours(clone.getHours() - hours.length) // Reset\n\nconsole.log(locale)\nconsole.log(weekdays)\nconsole.log(months)\nconsole.log(hours)\nconsole.log(clone.toLocaleString())\n\n\n\n",
"Solution No.1\nvar today = new Date();\n\n var day = today.getDay();\n\n var days = [\"Sunday\",\"Monday\",\"Tuesday\",\"Wednesday\",\"Thursday\",\"Friday\",\"Saturday\"];\n\n var dayname = days[day];\n\n document.write(dayname);\n\nSolution No.2\n var today = new Date();\n\n var day = today.getDay();\n\n switch(day){\n case 0:\n day = \"Sunday\";\n break;\n\n case 1:\n day = \"Monday\";\n break;\n\n case 2:\n day =\"Tuesday\";\n break;\n\n case 3:\n day = \"Wednesday\";\n break;\n\n case 4:\n day = \"Thrusday\";\n break;\n\n case 5:\n day = \"Friday\";\n break;\n\n case 6:\n day = \"Saturday\";\n break;\n }\n\n\ndocument.write(day);\n\n",
"you can use an object \nvar days = {\n 'Mon': 'Monday',\n 'etc..': 'etc..',\n 'Fri': 'Friday'\n}\n\nvar date = new Date().toString().split(' ')[0]; //get day abreviation first\nconsole.log(days[date]);\n\n",
"Just use it:\n\n\nfunction getWeekDayNames(format = 'short', locale = 'ru') {\r\n const names = [];\r\n const date = new Date('2020-05-24');\r\n let days = 7;\r\n\r\n while (days !== 0) {\r\n date.setDate(date.getDate() + 1);\r\n names.push(date.toLocaleDateString(locale, { weekday: format }));\r\n days--;\r\n }\r\n\r\n return names;\r\n}\n\n\n\nAbout formats you can read here Documentation DateTimeFormat\n",
"var date = new Date(Date.UTC(2012, 11, 20, 3, 0, 0));\n\n// request a weekday along with a long date\nvar options = { weekday: 'long', year: 'numeric', month: 'long', day: 'numeric' };\nconsole.log(date.toLocaleDateString('de-DE', options));\n// → \"Donnerstag, 20. Dezember 2012\"\n\n// an application may want to use UTC and make that visible\noptions.timeZone = 'UTC';\noptions.timeZoneName = 'short';\nconsole.log(date.toLocaleDateString('en-US', options));\n// → \"Thursday, December 20, 2012, UTC\"\n\n",
"// Solve this problem with a function.\n// The days of the week are: \"Sunday\", \"Monday\", \"Tuesday\", \"Wednesday\", \"Thursday\", \"Friday\", \"Saturday\"\nfunction getDayName(dateString) {\n let dayName = [\"Sunday\",\"Monday\",\"Tuesday\",\"Wednesday\",\"Thursday\",\"Friday\",\"Saturday\"][new Date(dateString).getDay()];\n return dayName;\n }\n let result = getDayName(10/12/2022);\n console.log(result);\n\n"
] |
[
278,
167,
50,
28,
11,
10,
8,
7,
4,
3,
3,
2,
2,
2,
2,
2,
1,
0,
0,
0,
0
] |
[
"Not the best method, use an array instead. This is just an alternative method.\nhttp://www.w3schools.com/jsref/jsref_getday.asp\nvar date = new Date();\nvar day = date.getDay();\n\nYou should really use google before you post here.\nSince other people posted the array method I'll show you an alternative way using a switch statement.\nswitch(day) {\n case 0:\n day = \"Sunday\";\n break;\n case 1:\n day = \"Monday\";\n break;\n\n ... rest of cases\n\n default:\n // do something\n break;\n}\n\nThe above works, however, the array is the better alternative. You may also use if() statements however a switch statement would be much cleaner then several if's.\n"
] |
[
-3
] |
[
"javascript",
"jquery"
] |
stackoverflow_0024998624_javascript_jquery.txt
|
Q:
Flutter Error - The argument type 'MaterialStateProperty' can't be assigned to the parameter type 'OutlinedBorder'
I want to customize the button's default border radius but I'm getting the following error:
"The argument type 'MaterialStateProperty' can't be assigned to the parameter type 'OutlinedBorder'"
Padding(
padding: const EdgeInsets.all(20.0),
child: ElevatedButton(
style: ElevatedButton.styleFrom(
shape: MaterialStateProperty.all<RoundedRectangleBorder>(
RoundedRectangleBorder(
borderRadius: BorderRadius.circular(10.0))),
elevation: 0,
primary: Colors.white,
minimumSize: const Size.fromHeight(50)),
onPressed: () {
Navigator.pushNamed(context, '/home');
// Navigator.of(context).pushReplacementNamed('/home');
},
child: const Text(
"Start",
style: TextStyle(
fontWeight: FontWeight.bold,
fontSize: 18,
color: Color.fromRGBO(86, 96, 49, 1)),
)),
),
A:
Just remove MaterialStateProperty.all(...) or replace your code with the below.
Padding(
padding: const EdgeInsets.all(20.0),
child: ElevatedButton(
style: ElevatedButton.styleFrom(
shape: RoundedRectangleBorder(
borderRadius: BorderRadius.circular(30.0),
),
elevation: 0,
primary: Colors.white,
minimumSize: const Size.fromHeight(50)),
onPressed: () {
Navigator.pushNamed(context, '/home');
// Navigator.of(context).pushReplacementNamed('/home');
},
child: const Text(
"Start",
style: TextStyle(
fontWeight: FontWeight.bold,
fontSize: 18,
color: Color.fromRGBO(86, 96, 49, 1)),
)),
)
A:
If you use ButtonStyle instead of ElevatedButton.styleFrom, MaterialStateProperty.all works here; the generic type can be omitted. Documentation
ElevatedButton(
style: ButtonStyle(
backgroundColor: MaterialStateProperty.all(Colors.red),
shape: MaterialStateProperty.all(
const RoundedRectangleBorder(
borderRadius: BorderRadius.all(Radius.circular(8)),
),
)),
child: const Text('Sign in with Google'),
onPressed: () {},
);
A:
The same approach works with the generic types spelled out explicitly, if you prefer. Documentation
ElevatedButton(
style: ButtonStyle(
backgroundColor: MaterialStateProperty.all<Color>(Colors.red),
shape: MaterialStateProperty.all<RoundedRectangleBorder>(
const RoundedRectangleBorder(
borderRadius: BorderRadius.all(Radius.circular(8)),
),
)),
child: const Text('Text'),
onPressed: () {},
);
|
Flutter Error - The argument type 'MaterialStateProperty' can't be assigned to the parameter type 'OutlinedBorder'
|
I want to customize the button's default border radius but I'm getting the following error:
"The argument type 'MaterialStateProperty' can't be assigned to the parameter type 'OutlinedBorder'"
Padding(
padding: const EdgeInsets.all(20.0),
child: ElevatedButton(
style: ElevatedButton.styleFrom(
shape: MaterialStateProperty.all<RoundedRectangleBorder>(
RoundedRectangleBorder(
borderRadius: BorderRadius.circular(10.0))),
elevation: 0,
primary: Colors.white,
minimumSize: const Size.fromHeight(50)),
onPressed: () {
Navigator.pushNamed(context, '/home');
// Navigator.of(context).pushReplacementNamed('/home');
},
child: const Text(
"Start",
style: TextStyle(
fontWeight: FontWeight.bold,
fontSize: 18,
color: Color.fromRGBO(86, 96, 49, 1)),
)),
),
|
[
"Just remove MaterialStateProperty.all( or replace your code with below.\n Padding(\n padding: const EdgeInsets.all(20.0),\n child: ElevatedButton(\n style: ElevatedButton.styleFrom(\n shape:RoundedRectangleBorder(\n borderRadius: BorderRadius.circular(30.0),\n ),\n elevation: 0,\n primary: Colors.white,\n minimumSize: const Size.fromHeight(50)),\n onPressed: () {\n Navigator.pushNamed(context, '/home');\n // Navigator.of(context).pushReplacementNamed('/home');\n },\n child: const Text(\n \"Start\",\n style: TextStyle(\n fontWeight: FontWeight.bold,\n fontSize: 18,\n color: Color.fromRGBO(86, 96, 49, 1)),\n )),\n)\n\n",
"That's helpful for typing if you don't want to avoid you only have to add the generic type. Documentation\nElevatedButton(\nstyle: ButtonStyle(\nbackgroundColor: MaterialStateProperty.all(Colors.red),\nshape: MaterialStateProperty.all(\nconst RoundedRectangleBorder(\nborderRadius: BorderRadius.all(Radius.circular(8)),\n),\n)),\nchild: const Text('Sign in with Google'),\nonPressed: () {},\n);\n",
"That's helpful for the types, if you want to use you only have to add the generic type. Documentation\n ElevatedButton(\n style: ButtonStyle(\n backgroundColor: MaterialStateProperty.all<Color>(Colors.red),\n shape: MaterialStateProperty.all<RoundedRectangleBorder>(\n const RoundedRectangleBorder(\n borderRadius: BorderRadius.all(Radius.circular(8)),\n ),\n )),\n child: const Text('Text'),\n onPressed: () {},\n );\n\n"
] |
[
2,
0,
0
] |
[] |
[] |
[
"error_handling",
"flutter",
"flutter_layout",
"syntax_error"
] |
stackoverflow_0072791743_error_handling_flutter_flutter_layout_syntax_error.txt
|
Q:
Struggling with an increasing paths algorithm question
I’m struggling to find a good solution to a problem which asks for ALL paths which increase as you traverse a 2D array/matrix (rather than the classic ‘Longest Increasing Path’).
Here’s the question:
Definitions
• Path: a sequence of two or more spaces for which each space is horizontally or vertically adjacent to the previous
• Increasing path: a path for which each space has a greater value than the previous space.
Example 1:
[
[5, 1],
[2, 7]
]
There are 4 Increasing paths.
1 -> 5
1 -> 7
2 -> 5
2 -> 7
Example 2:
[
[0, 4, 3],
[5, 8, 9],
[5, 9, 9]
]
There are many Increasing paths, for example:
0 -> 4
0 -> 4 -> 8
0 -> 4 -> 8
0 -> 4 -> 8 -> 9
0 -> 5
0 -> 5 -> 8
... and so on.
I’ve tried a few things, but none of which do what I need. Because I don’t want this to seem like an “answer my homework for me”, here’s the code of what I’ve tried (I knew it wouldn’t work 100%, but it was a good start for me, and I wasn’t sure where to go from there).
/*
This attempt was from what I
gathered from longest increasing,
so it clearly isn’t valid.
*/
function(v){
let n=v.length;
let m=v[0].length;
let mem=(new Array(n));
for(let i=0;i<n;i++)mem[i]=(new Array(m)).fill(0);
mem[0][0]=1;
for(let i=1;i<n;i++){
if (v[i][0] > v[i-1][0] && mem[i-1][0] == 1) {
mem[i][0] = 1;
}
}
for(let i=1;i<m;i++){
if (v[0][i] > v[0][i-1] && mem[0][i-1] == 1) {
mem[0][i] = 1;
}
}
for(let i=1;i<n;i++){
for(let j=1;j<m;j++){
if (v[i][j] > v[i-1][j] && mem[i-1][j] == 1) {
mem[i][j] = 1;
}
if (mem[i][j] == 0 && v[i][j] > v[i][j-1] && mem[i][j-1] == 1){
mem[i][j] = 1;
}
}
}
return mem[n-1][m-1] ? n+m-1 : -1;
}
(Note: I come from a strictly UX and front-end background, but I’m trying to improve my skills on this type of programming and move to a more full stack position, so I appreciate the help with a novice question! :))
A:
Some tips:
View the grid as a graph, nodes are each gridpoint and edges are formed where it is possible to move from a to b
An increasing path means going from a smaller value a to a bigger value b.
This ensures that the graph is directed. (you can never go back the same edge)
such a path a->b->c->...->f can never connect back to any earlier point in the path, as that would imply a < b < ... < f < a. This means the graph has no cycles
A graph with directed edges and no cycles is known as a DAG (directed acyclic graph). Many graph algorithms are a lot easier on these graphs, including listing all paths.
Your task is simply to write a DFS (depth first search) and start it on every gridpoint (filter out paths of length 0 at the end); a minimal sketch follows.
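A minimal sketch of that DFS (the name countIncreasingPaths is illustrative; it counts the paths rather than listing them, and omits memoization for brevity):
function countIncreasingPaths(grid) {
  const rows = grid.length, cols = grid[0].length;
  const dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
  let count = 0;
  function dfs(r, c) {
    for (const [dr, dc] of dirs) {
      const nr = r + dr, nc = c + dc;
      if (nr >= 0 && nr < rows && nc >= 0 && nc < cols && grid[nr][nc] > grid[r][c]) {
        count++;     // every extension ends one new increasing path
        dfs(nr, nc); // keep extending while values increase
      }
    }
  }
  // Start a DFS from every gridpoint.
  for (let r = 0; r < rows; r++)
    for (let c = 0; c < cols; c++)
      dfs(r, c);
  return count;
}
console.log(countIncreasingPaths([[5, 1], [2, 7]])); // 4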
A:
The following is the way I solved it using JavaScript
const savedPathCount = [];
function getAvailablePath(grid, x, y) {
if (!savedPathCount[x]) {
savedPathCount[x] = [];
}
if (savedPathCount[x][y]) {
return savedPathCount[x][y]
}
savedPathCount[x][y] = 0
const currentValue = grid[x][y];
if (y > 0) {
const topValue = grid[x][y - 1];
if (topValue && topValue > currentValue) {
savedPathCount[x][y]++;
savedPathCount[x][y] += getAvailablePath(grid, x, y - 1)
}
}
if (x > 0) {
const leftValue = grid[x - 1][y];
if (leftValue && leftValue > currentValue) {
savedPathCount[x][y]++;
savedPathCount[x][y] += getAvailablePath(grid, x - 1, y)
}
}
if (grid[x]) {
const rightValue = grid[x][y + 1];
if (rightValue && rightValue > currentValue) {
savedPathCount[x][y]++;
savedPathCount[x][y] += getAvailablePath(grid, x, y + 1)
}
}
if (grid[x + 1]) {
const bottomValue = grid[x + 1][y];
if (bottomValue && bottomValue > currentValue) {
savedPathCount[x][y]++;
savedPathCount[x][y] += getAvailablePath(grid, x + 1, y)
}
}
return savedPathCount[x][y];
}
function paths(grid) {
// Write your code here
let pathCount = 0
for (let x = 0; x < grid.length; x++) {
for (let y = 0; y < grid[x].length; y++) {
pathCount += getAvailablePath(grid, x, y);
}
}
return pathCount;
}
const grid = [[15, 34],[1, 6]]
console.log(paths(grid))
|
Struggling with an increasing paths algorithm question
|
I’m struggling to find a good solution to a problem which asks for ALL paths which increase as you traverse a 2D array/matrix (rather than the classic ‘Longest Increasing Path’).
Here’s the question:
Definitions
• Path: a sequence of two or more spaces for which each space is horizontally or vertically adjacent to the previous
• Increasing path: a path for which each space has a greater value than the previous space.
Example 1:
[
[5, 1],
[2, 7]
]
There are 4 Increasing paths.
1 -> 5
1 -> 7
2 -> 5
2 -> 7
Example 2:
[
[0, 4, 3],
[5, 8, 9],
[5, 9, 9]
]
There are many Increasing paths, for example:
0 -> 4
0 -> 4 -> 8
0 -> 4 -> 8
0 -> 4 -> 8 -> 9
0 -> 5
0 -> 5 -> 8
... and so on.
I’ve tried a few things, but none of which do what I need. Because I don’t want this to seem like an “answer my homework for me”, here’s the code of what I’ve tried (I knew it wouldn’t work 100%, but it was a good start for me, and I wasn’t sure where to go from there).
/*
This attempt was from what I
gathered from longest increasing,
so it clearly isn’t valid.
*/
function(v){
let n=v.length;
let m=v[0].length;
let mem=(new Array(n));
for(let i=0;i<n;i++)mem[i]=(new Array(m)).fill(0);
mem[0][0]=1;
for(let i=1;i<n;i++){
if (v[i][0] > v[i-1][0] && mem[i-1][0] == 1) {
mem[i][0] = 1;
}
}
for(let i=1;i<m;i++){
if (v[0][i] > v[0][i-1] && mem[0][i-1] == 1) {
mem[0][i] = 1;
}
}
for(let i=1;i<n;i++){
for(let j=1;j<m;j++){
if (v[i][j] > v[i-1][j] && mem[i-1][j] == 1) {
mem[i][j] = 1;
}
if (mem[i][j] == 0 && v[i][j] > v[i][j-1] && mem[i][j-1] == 1){
mem[i][j] = 1;
}
}
}
return mem[n-1][m-1] ? n+m-1 : -1;
}
(Note: I come from a strictly UX and front-end background, but I’m trying to improve my skills on this type of programming and move to a more full stack position, so I appreciate the help with a novice question! :))
|
[
"Some tips:\nView the grid as a graph, nodes are each gridpoint and edges are formed where it is possible to move from a to b\nAn increasing path means going from a smaller value a to a bigger value b.\n\nThis ensures that the graph is directed. (you can never go back the same edge)\nsuch a path a->b->c...->f you can never connect to any earlier point in your path, as it would imply that a < b < .. < f < a. this means that the graph has no cycles\n\nA graph with directed edges and no cycles are known as DAGs (directed acyclic graphs). Many graph algorithms are a lot easier on these graphs, including listing all paths.\nYour task is simply to write a DFS (depth first search) and start it on every gridpoint (filter out path of length 0 in the end).\n",
"The following is the way I have solved using Javascript\n\nconst savedPathCount = [];\n\nfunction getAvailablePath(grid, x, y) {\n\n if (!savedPathCount[x]) {\n savedPathCount[x] = [];\n }\n\n if (savedPathCount[x][y]) {\n return savedPathCount[x][y]\n }\n\n savedPathCount[x][y] = 0\n\n const currentValue = grid[x][y];\n\n if (y > 0) {\n const topValue = grid[x][y - 1];\n if (topValue && topValue > currentValue) {\n savedPathCount[x][y]++;\n savedPathCount[x][y] += getAvailablePath(grid, x, y - 1)\n }\n }\n\n if (x > 0) {\n const leftValue = grid[x - 1][y];\n if (leftValue && leftValue > currentValue) {\n savedPathCount[x][y]++;\n savedPathCount[x][y] += getAvailablePath(grid, x - 1, y)\n }\n }\n\n if (grid[x]) {\n const rightValue = grid[x][y + 1];\n if (rightValue && rightValue > currentValue) {\n savedPathCount[x][y]++;\n savedPathCount[x][y] += getAvailablePath(grid, x, y + 1)\n }\n }\n\n if (grid[x + 1]) {\n const bottomValue = grid[x + 1][y];\n if (bottomValue && bottomValue > currentValue) {\n savedPathCount[x][y]++;\n savedPathCount[x][y] += getAvailablePath(grid, x + 1, y)\n }\n }\n\n return savedPathCount[x][y];\n}\n\nfunction paths(grid) {\n // Write your code here\n let pathCount = 0\n for (let x = 0; x < grid.length; x++) {\n\n for (let y = 0; y < grid[x].length; y++) {\n pathCount += getAvailablePath(grid, x, y);\n }\n }\n return pathCount;\n}\n\nconst grid = [[15, 34],[1, 6]]\n\nconsole.log(grid(path))\n\n\n"
] |
[
2,
0
] |
[] |
[] |
[
"algorithm",
"javascript"
] |
stackoverflow_0067580420_algorithm_javascript.txt
|
Q:
Terraform data source a local file and retrieve some keys from the content as output
I have a local file (named x.json) containing some JSON content, like:
"client": {
"apiKey": "xyzabcpqr!23",
"permissions": {},
"firebaseSubdomain": "my-project-1"
},
I am using a data source on this file, like:
data "local_file" "myfile" {
filename = "x.json" #localfile
}
Now I want to extract the apiKey as a Terraform output and pass the output to some other resource.
output "apiKey" {
value = data.local_file.myfile.content
}
But I can't find any option to get that.
I also tried the following, but it throws the error:
Can't access attributes on a primitive-typed value (string).
output "apiKey" {
value = data.local_file.myfile.content.client.apiKey
}
A:
I hope this can help.
Instead of using a local file and then an output, you can also pass your configuration as a variable.
For each Terraform module, set the client configuration as a variable before you plan and apply:
export TF_VAR_client='{"apiKey": "xyzabcpqr!23","permissions": {},"firebaseSubdomain": "my-project-1"}'
or
terraform apply -var='client={"apiKey": "xyzabcpqr!23","permissions": {},"firebaseSubdomain": "my-project-1"}'
Then in the Terraform code :
variables.tf file
variable "client" {
description = "Client"
type = "map"
}
main.tf file
resource "your_resource" "name" {
apikey = var.client["apiKey"]
....
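For completeness (not part of the answer above): Terraform 0.12+ also ships a built-in jsondecode() function, so the key can be read straight from the data source. A sketch, assuming x.json is a valid JSON document with a top-level "client" object:
output "apiKey" {
  # Decode the file contents, then index into the nested structure.
  value     = jsondecode(data.local_file.myfile.content)["client"]["apiKey"]
  sensitive = true
}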
|
Terraform data source a local file and retrieve some keys from the content as output
|
I have a local file (named x.json) containing some JSON content, like:
"client": {
"apiKey": "xyzabcpqr!23",
"permissions": {},
"firebaseSubdomain": "my-project-1"
},
I am using a data source on this file, like:
data "local_file" "myfile" {
filename = "x.json" #localfile
}
Now I want to extract the apiKey as a Terraform output and pass the output to some other resource.
output "apiKey" {
value = data.local_file.myfile.content
}
But I can't find any option to get that.
I also tried the following, but it throws the error:
Can't access attributes on a primitive-typed value (string).
output "apiKey" {
value = data.local_file.myfile.content.client.apiKey
}
|
[
"I hope it can help.\nInstead of using local file then output, you can also pass your configuration as variable.\nFor each Terraform module, set the client configuration as variable, before to plan and apply :\nexport TF_VAR_client='{\"apiKey\": \"xyzabcpqr!23\",\"permissions\": {},\"firebaseSubdomain\": \"my-project-1\"}'\n\nor\nterraform apply -var='apiKey={\"apiKey\": \"xyzabcpqr!23\",\"permissions\": {},\"firebaseSubdomain\": \"my-project-1\"}'\n\nThen in the Terraform code :\nvariables.tf file\nvariable \"client\" {\n description = \"Client\"\n type = \"map\"\n}\n\nmain.tf file\nresource \"your_resource\" \"name\" {\n apikey = var.client[\"apiKey\"]\n\n ....\n\n"
] |
[
0
] |
[] |
[] |
[
"datasource",
"google_cloud_platform",
"local_files",
"terraform"
] |
stackoverflow_0074652982_datasource_google_cloud_platform_local_files_terraform.txt
|
Q:
How to install telegram api 'aiogram'
I just tried to install the Telegram API library 'aiogram' and it didn't work:
building 'yarl._quoting_c' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for yarl
Failed to build multidict yarl
ERROR: Could not build wheels for multidict, yarl, which is required to install pyproject.toml-based projects
A:
It looks like you are trying to install the aiogram library for Python, but you are encountering an error related to Microsoft Visual C++ 14.0 or greater. This error is occurring because the aiogram library has a dependency on the yarl library, which requires Microsoft Visual C++ 14.0 or greater to be installed on your system in order to build.
To fix this error, you will need to install Microsoft Visual C++ 14.0 or greater on your system. The error message provides a link to the Microsoft C++ Build Tools page, where you can download and install the necessary tools. Once you have installed Microsoft Visual C++ 14.0 or greater, you should be able to install the aiogram library successfully.
Here are the steps to install Microsoft Visual C++ 14.0 or greater and fix the error:
Open the following link in your web browser: https://visualstudio.microsoft.com/visual-cpp-build-tools/
On the Microsoft C++ Build Tools page, click the "Download" button to download the installer for the build tools.
Run the downloaded installer and follow the on-screen instructions to install Microsoft Visual C++ 14.0 or greater on your system.
Once the installation is complete, try installing the aiogram library again using pip. It should now install successfully without any errors.
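A quick sketch of the install commands once the build tools are in place (upgrading pip first is also worth trying on its own, since newer pip releases can often pick prebuilt wheels for yarl/multidict and skip the C++ build entirely):
python -m pip install --upgrade pip setuptools wheel
python -m pip install aiogram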
|
How to install telegram api 'aiogram'
|
I just tried to install the Telegram API library 'aiogram' and it didn't work:
building 'yarl._quoting_c' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for yarl
Failed to build multidict yarl
ERROR: Could not build wheels for multidict, yarl, which is required to install pyproject.toml-based projects
|
[
"It looks like you are trying to install the aiogram library for Python, but you are encountering an error related to Microsoft Visual C++ 14.0 or greater. This error is occurring because the aiogram library has a dependency on the yarl library, which requires Microsoft Visual C++ 14.0 or greater to be installed on your system in order to build.\nTo fix this error, you will need to install Microsoft Visual C++ 14.0 or greater on your system. The error message provides a link to the Microsoft C++ Build Tools page, where you can download and install the necessary tools. Once you have installed Microsoft Visual C++ 14.0 or greater, you should be able to install the aiogram library successfully.\nHere are the steps to install Microsoft Visual C++ 14.0 or greater and fix the error:\n\nOpen the following link in your web browser: https://visualstudio.microsoft.com/visual-cpp-build-tools/\n\nOn the Microsoft C++ Build Tools page, click the \"Download\" button to download the installer for the build tools.\n\nRun the downloaded installer and follow the on-screen instructions to install Microsoft Visual C++ 14.0 or greater on your system.\n\nOnce the installation is complete, try installing the aiogram library again using pip. It should now install successfully without any errors.\n\n\n"
] |
[
0
] |
[] |
[] |
[
"aiogram",
"python",
"telegram_bot"
] |
stackoverflow_0074661367_aiogram_python_telegram_bot.txt
|
Q:
How can I change the size (height) of the cursor of a TextField in jetpack compose irrespective of font size?
How can I change the size (height) of the cursor of a TextField in Jetpack Compose, irrespective of font size? Any tips or tricks?
A:
Maybe this will help? https://stackoverflow.com/a/68860541
By playing with the gradient, you can make the colors Transparent on the ends of the cursor. I cannot make this work in Material 3 though...
BasicTextField(
cursorBrush = Brush.verticalGradient(
0.00f to Color.Transparent,
0.35f to Color.Transparent,
0.35f to Color.Green,
0.90f to Color.Green,
0.90f to Color.Transparent,
1.00f to Color.Transparent
)
)
|
How can I change the size (height) of the cursor of a TextField in jetpack compose irrespective of font size?
|
How can I change the size of the cursor (height) of a TextField in jetpack compose irrespective of font size ? any tips or tricks ?
|
[
"Maybe this will help? https://stackoverflow.com/a/68860541\nBy playing with the gradient, you can make the colors Transparent on the ends of the cursor. I cannot make this work in Material 3 though...\nBasicTextField(\n cursorBrush = Brush.verticalGradient(\n 0.00f to Color.Transparent,\n 0.35f to Color.Transparent,\n 0.35f to Color.Green,\n 0.90f to Color.Green,\n 0.90f to Color.Transparent,\n 1.00f to Color.Transparent\n )\n)\n\n"
] |
[
1
] |
[] |
[] |
[
"android_compose_textfield",
"android_jetpack_compose",
"android_studio",
"kotlin"
] |
stackoverflow_0074660924_android_compose_textfield_android_jetpack_compose_android_studio_kotlin.txt
|
Q:
Chrome extension, Permission to allow site access ONLY on click
What's the manifest permission to allow access to all webpage data, but only on click?
In the extension manager this is the option that appears as
Allow this extension to read and change all your data on websites you visit
- On click
As an example, the Pinterest extension asks for full read/change access, but you have the option to change that to "on click" in the extension manager. I'd like "on click" to be the initial permission request.
I've looked at the permissions here https://chrome-apps-doc2.appspot.com/extensions/declare_permissions.html
I know I can request permission to access all webpage data, all the time, with
"permissions": ["*://*"],
A:
have you tried cookies?
permissions: ['cookies'],
origins: ['https://dummydomain.com/*']
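For the "on click" behavior specifically, Chrome also has the activeTab permission, which grants temporary access to the current tab only after the user invokes the extension (for example by clicking its toolbar icon). A minimal manifest sketch (the name and version fields are placeholders):
{
  "name": "My Extension",
  "version": "1.0",
  "manifest_version": 3,
  "permissions": ["activeTab"]
}
This is not literally the same dropdown setting shown in the extension manager, but it gives the same user-gesture-gated access without requesting broad host permissions up front.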
|
Chrome extension, Permission to allow site access ONLY on click
|
What's the manifest permission to allow access to all webpage data, but only on click?
In the extension manager this is the option that appears as
Allow this extension to read and change all your data on websites you visit
- On click
As an example, the Pinterest extension asks for full read/change access, but you have the option to change that to "on click" in the extension manager. I'd like "on click" to be the initial permission request.
I've looked at the permissions here https://chrome-apps-doc2.appspot.com/extensions/declare_permissions.html
I know I can request permission to access all webpage data, all the time, with
"permissions": ["*://*"],
|
[
"have you tried cookies?\npermissions: ['cookies'],\norigins: ['https://dummydomain.com/*']\n"
] |
[
0
] |
[] |
[] |
[
"google_chrome_extension"
] |
stackoverflow_0066588450_google_chrome_extension.txt
|
Q:
How to suppress / permanently hide the error box of Windows Media Player
Hi, I wanted to hide this error box:
I tried searching on the internet but did not find useful results. I am a fresher in coding, so is there any way or any code that can help me?
This issue is caused by the server not accepting the request the first time, but the second time it works great.
It would be great if you could provide easy steps.
My current code is
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace Media_Player
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
}
private void axWindowsMediaPlayer1_Enter(object sender, EventArgs e)
{
label2.AutoSize = true;
label1.AutoSize = true;
}
private void button1_Click_1(object sender, EventArgs e)
{
axWindowsMediaPlayer1.URL = ("http://y0b.net/radiosa.m3u");
MessageBox.Show("Successfuly Selected Radio SA , you will encounter a error , click close and then click play . " , "Thank You" , MessageBoxButtons.OK);
axWindowsMediaPlayer1.Ctlcontrols.play();
label2.Text = "Playing Radio SA";
}
private void button2_Click_1(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.stop();
}
private void button3_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.play();
}
private void button4_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.URL = ("http://y0b.net/radiosa2.m3u");
MessageBox.Show("Successfuly Selected Radio SA Clasic , you will encounter a error after this mesage , click close and then click play . ", "Thank You", MessageBoxButtons.OK);
axWindowsMediaPlayer1.Ctlcontrols.play();
label2.Text = "Playing Radio SA CLASSIC ";
}
private void button5_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.pause();
}
private void Form1_Load(object sender, EventArgs e)
{
}
private void radioButton1_CheckedChanged(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Visible = false;
}
private void radioButton2_CheckedChanged(object sender, EventArgs e)
{
radioButton1.Checked = false;
axWindowsMediaPlayer1.Visible = true;
}
private void button6_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.URL = ("http://y0b.net/radiosa3.m3u");
MessageBox.Show("Successfuly Selected Radio SA Dance Department , you will encounter a error after this mesage , click close and then click play . ", "Thank You", MessageBoxButtons.OK);
axWindowsMediaPlayer1.Ctlcontrols.play();
label2.Text = "Playing Radio SA Dance Department";
}
private void trackBar1_Scroll(object sender, EventArgs e)
{
axWindowsMediaPlayer1.settings.volume = trackBar1.Value;
label4.Text = trackBar1.Value.ToString();
}
private void label4_Click(object sender, EventArgs e)
{
}
private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
}
private void button7_Click(object sender, EventArgs e)
{
}
private void progressBar1_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.settings.playCount.ToString();
}
private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.pause();
}
}
}
A:
Try this out. I suppressed the error message by setting enableErrorDialogs to false, then repeatedly asked it to play every 250 ms in a loop, with a ten second time out:
private async void button1_Click(object sender, EventArgs e)
{
button1.Enabled = false;
axWindowsMediaPlayer1.settings.autoStart = true;
axWindowsMediaPlayer1.settings.enableErrorDialogs = false;
axWindowsMediaPlayer1.URL = "http://y0b.net/radiosa3.m3u";
DateTime stopAt = DateTime.Now.AddSeconds(10);
while (DateTime.Now<stopAt && axWindowsMediaPlayer1.playState!=WMPLib.WMPPlayState.wmppsPlaying)
{
axWindowsMediaPlayer1.Ctlcontrols.play();
await Task.Delay(250);
}
if (axWindowsMediaPlayer1.playState != WMPLib.WMPPlayState.wmppsPlaying)
{
MessageBox.Show("Failed to load stream!");
}
button1.Enabled = true;
}
Music started playing after about two seconds for me. Your mileage may vary...
Note that I added async to the method handler at the top to allow the use of await Task.Delay(250);.
|
How to suppress / permanently hide the error box of Windows Media Player
|
Hi, I wanted to hide this error box:
I tried searching on the internet but did not find useful results. I am a fresher in coding, so is there any way or any code that can help me?
This issue is caused by the server not accepting the request the first time, but the second time it works great.
It would be great if you could provide easy steps.
My current code is
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
namespace Media_Player
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void button1_Click(object sender, EventArgs e)
{
}
private void axWindowsMediaPlayer1_Enter(object sender, EventArgs e)
{
label2.AutoSize = true;
label1.AutoSize = true;
}
private void button1_Click_1(object sender, EventArgs e)
{
axWindowsMediaPlayer1.URL = ("http://y0b.net/radiosa.m3u");
MessageBox.Show("Successfuly Selected Radio SA , you will encounter a error , click close and then click play . " , "Thank You" , MessageBoxButtons.OK);
axWindowsMediaPlayer1.Ctlcontrols.play();
label2.Text = "Playing Radio SA";
}
private void button2_Click_1(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.stop();
}
private void button3_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.play();
}
private void button4_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.URL = ("http://y0b.net/radiosa2.m3u");
MessageBox.Show("Successfuly Selected Radio SA Clasic , you will encounter a error after this mesage , click close and then click play . ", "Thank You", MessageBoxButtons.OK);
axWindowsMediaPlayer1.Ctlcontrols.play();
label2.Text = "Playing Radio SA CLASSIC ";
}
private void button5_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.pause();
}
private void Form1_Load(object sender, EventArgs e)
{
}
private void radioButton1_CheckedChanged(object sender, EventArgs e)
{
axWindowsMediaPlayer1.Visible = false;
}
private void radioButton2_CheckedChanged(object sender, EventArgs e)
{
radioButton1.Checked = false;
axWindowsMediaPlayer1.Visible = true;
}
private void button6_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.URL = ("http://y0b.net/radiosa3.m3u");
MessageBox.Show("Successfuly Selected Radio SA Dance Department , you will encounter a error after this mesage , click close and then click play . ", "Thank You", MessageBoxButtons.OK);
axWindowsMediaPlayer1.Ctlcontrols.play();
label2.Text = "Playing Radio SA Dance Department";
}
private void trackBar1_Scroll(object sender, EventArgs e)
{
axWindowsMediaPlayer1.settings.volume = trackBar1.Value;
label4.Text = trackBar1.Value.ToString();
}
private void label4_Click(object sender, EventArgs e)
{
}
private void webBrowser1_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
}
private void button7_Click(object sender, EventArgs e)
{
}
private void progressBar1_Click(object sender, EventArgs e)
{
axWindowsMediaPlayer1.settings.playCount.ToString();
}
private void treeView1_AfterSelect(object sender, TreeViewEventArgs e)
{
axWindowsMediaPlayer1.Ctlcontrols.pause();
}
}
}
|
[
"Try this out. I suppressed the error message by setting enableErrorDialogs to false, then repeatedly asked it to play every 250 ms in a loop, with a ten second time out:\nprivate async void button1_Click(object sender, EventArgs e)\n{\n button1.Enabled = false;\n\n axWindowsMediaPlayer1.settings.autoStart = true;\n axWindowsMediaPlayer1.settings.enableErrorDialogs = false;\n axWindowsMediaPlayer1.URL = \"http://y0b.net/radiosa3.m3u\";\n\n DateTime stopAt = DateTime.Now.AddSeconds(10);\n while (DateTime.Now<stopAt && axWindowsMediaPlayer1.playState!=WMPLib.WMPPlayState.wmppsPlaying)\n {\n axWindowsMediaPlayer1.Ctlcontrols.play();\n await Task.Delay(250);\n }\n if (axWindowsMediaPlayer1.playState != WMPLib.WMPPlayState.wmppsPlaying)\n {\n MessageBox.Show(\"Failed to load stream!\");\n }\n\n button1.Enabled = true;\n} \n\nMusic started playing after about two seconds for me. Your mileage may vary...\nNote that I added async to the method handler at the top to allow the use of await Task.Delay(250);.\n"
] |
[
0
] |
[] |
[] |
[
"visual_studio"
] |
stackoverflow_0074660152_visual_studio.txt
|
Q:
IntelliJ Optimize Imports for the entire scala project
One of the very useful features of IntelliJ is that when I am done editing a file, I can do a "optimize imports". this removes all the unused imports from my code.
This is very useful, but I have to do it for every file.
Can I do "optimize imports" for the entire project?
A:
Select the source root in the project tree;
1. Hit the keyboard shortcut for "Optimize import"
MAC
Cmd-shift-A
Windows
Ctrl-shift-A
2. You will see a dialog asking to optimize imports for the selected scope
3. Hit Run
A:
First, build the project. In the "Build Output" tab in the bottom-left corner, you will see a lot of files with compilation errors. Select them all and press Ctrl+Alt+O
A popup will open which will ask "Optimize import for 'module'"
Click Run.
|
IntelliJ Optimize Imports for the entire scala project
|
One of the very useful features of IntelliJ is that when I am done editing a file, I can do a "optimize imports". this removes all the unused imports from my code.
This is very useful, but I have to do it for every file.
Can I do "optimize imports" for the entire project?
|
[
"Select the source root in the project tree; \n1. Hit the keyboard shortcut for \"Optimize import\"\nMAC\nCmd-shift-A \n\nWindows\nCtrl-shift-A \n\n2. You will see\n\n3. Hit Run\n",
"First you build the project, In \"build output\" tab in left bottom corner, you will see lot of files with compilation errors. Select them all and press CRTL+ALT+O\nA popup will open which will ask \"Optimize import for 'module'\"\nClick Run.\n"
] |
[
60,
0
] |
[] |
[] |
[
"intellij_idea"
] |
stackoverflow_0044362502_intellij_idea.txt
|
Q:
luacheck: ignore globals defined in file
I have a module constants.lua that has a lot of globals defined. How can I ignore all of these in luacheck?
I assume I can build logic to do this inside my .luacheckrc: load constants.lua, see what it added to _G, and then add those to read_globals.
I'm using Lua 5.4.
A:
If you have only variable definitions (no runtime logic) in your constants.lua, then you can load it from luacheckrc. It could even require other files that are also only definitions. Beware of doing too much from here since luacheck needs to process this file on every lint event.
To load the files from your .luacheckrc:
-- Your normal luacheck config here. We'll modify this table later.
read_globals = {
-- Can manually define globals too.
}
-- Below code will load your constants files and append them to
-- read_globals.
-- Load script with constants and add to populate dest_globals.
local function append_globals(dest_globals, script)
-- Create a fallback env so lua stdlib is available, but we
-- have a clean list of globals.
local env = setmetatable({}, {__index = _G})
local fn = loadfile(script, "t", env)
fn()
for key,val in pairs(env) do
table.insert(dest_globals, key)
end
end
-- Pass a global table created above to populate with globals:
-- read_globals, files["*.lua"].read_globals, etc.
xpcall(append_globals, print, read_globals, "absolute/path/to/constants.lua")
If constants show up as missing variables, run luacheck on command line to debug (you should get output).
Absolute vs. Relative paths
Unfortunately, it uses absolute paths because debug.getinfo and other tricks don't work from within luacheckrc -- they return "chunk" instead of a filename. However, if all your lua code is in a common directory, you can try to get the absolute path from current working directory since luacheck is probably run from the same folder as your lua code:
local cwd = io.popen("cd"):read('*all')
-- All code lives in the script folder: c:/proj/script/*.lua
local root_dir = cwd:match("(.*[/\\])script[/\\]")
xpcall(append_globals, print, read_globals, root_dir .."script/constants.lua")
Supporting require
If you embed lua and your host program sets the package path, then you need to set that path to support require. You could use $LUA_PATH_5_4 or modify it at runtime. Put this before any calls to append_globals:
package.path = package.path .. ";" .. root_dir .."scripts/?.lua"
|
luacheck: ignore globals defined in file
|
I have a module constants.lua that has a lot of globals defined. How can I ignore all of these in luacheck?
I assume I can build logic to do this inside my .luacheckrc: load constants.lua, see what it added to _G, and then add those to read_globals.
I'm using Lua 5.4.
|
[
"If you have only variable definitions (no runtime logic) in your constants.lua, then you can load it from luacheckrc. It could even require other files that are also only definitions. Beware of doing too much from here since luacheck needs to process this file on every lint event.\nTo load the files from your .luacheckrc:\n-- Your normal luacheck config here. We'll modify this table later.\nread_globals = {\n -- Can manually define globals too.\n}\n\n\n-- Below code will load your constants files and append them to\n-- read_globals.\n\n-- Load script with constants and add to populate dest_globals.\nlocal function append_globals(dest_globals, script)\n -- Create a fallback env so lua stdlib is available, but we\n -- have a clean list of globals.\n local env = setmetatable({}, {__index = _G})\n local fn = loadfile(script, \"t\", env)\n fn()\n for key,val in pairs(env) do\n table.insert(dest_globals, key)\n end\nend\n\n-- Pass a global table created above to populate with globals:\n-- read_globals, files[\"*.lua\"].read_globals, etc.\nxpcall(append_globals, print, read_globals, \"absolute/path/to/constants.lua\")\n\nIf constants show up as missing variables, run luacheck on command line to debug (you should get output).\nAbsolute vs. Relative paths\nUnfortunately, it uses absolute paths because debug.getinfo and other tricks don't work from within luacheckrc -- they return \"chunk\" instead of a filename. However, if all your lua code is in a common directory, you can try to get the absolute path from current working directory since luacheck is probably run from the same folder as your lua code:\nlocal cwd = io.popen(\"cd\"):read('*all')\n-- All code lives in the script folder: c:/proj/script/*.lua\nlocal root_dir = cwd:match(\"(.*[/\\\\])script[/\\\\]\")\nxpcall(append_globals, print, read_globals, root_dir ..\"script/constants.lua\")\n\nSupporting require\nIf you embed lua and your host program sets the package path, then you need to set that path to support require. You could use $LUA_PATH_5_4 or modify it at runtime. Put this before any calls to append_globals:\npackage.path = package.path .. \";\" .. root_dir ..\"scripts/?.lua\"\n\n"
] |
[
0
] |
[] |
[] |
[
"global",
"lua",
"luacheck"
] |
stackoverflow_0074660898_global_lua_luacheck.txt
|
Q:
checking a variable against a record in a sqlite3 database to see if data entered is unique
So I am trying to create a function that allows a user to create a profile with personal information. They will enter a username that will act as the primary key and is required to be unique, so when this username is entered I check whether it already exists in the sqlite3 database; if it does, the user is asked to try another username, and if not, the function continues.
I was certain this would work because I used similar code to check values entered when a user logs in, in a login function I coded, so I am quite stumped.
Any help would be greatly appreciated...
the code in question:
def signupInfo():
#takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur
username = input("Choose username: ")
#check for if the username is unique
uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''
cursor.execute(uniqueUserCheck, [username])
user = cursor.fetchone()
if user is not None:
username = input("Choose a unique user: ")
else:
#check if username is alphanumeric or has enough characters
while username.isalpha() == False or len(username) <= 3:
username = input("invalid username, try again: ")
A:
It looks like you're trying to check if the username already exists in the database. To do this, you can use an SQL SELECT query to check if the username exists in the users table. If the query returns a result, then the username already exists and you can prompt the user to enter a different username.
Here's one way you could implement this:
def signupInfo():
#takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur
username = input("Choose username: ")
#check if username is alphanumeric or has enough characters
while username.isalpha() == False or len(username) <= 3:
username = input("invalid username, try again: ")
#check for if the username is unique
uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''
cursor.execute(uniqueUserCheck, [username])
user = cursor.fetchone()
if user is not None:
username = input("Choose a unique user: ")
Note that in your current implementation, if the username is not unique, you will only prompt the user to enter a new username once, but the code does not check if the new username is unique. To fix this, you can use a while loop to keep prompting the user until they enter a unique username.
Here's one way you could do this:
def signupInfo():
#takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur
username = input("Choose username: ")
#check if username is alphanumeric or has enough characters
while username.isalpha() == False or len(username) <= 3:
username = input("invalid username, try again: ")
#check for if the username is unique
uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''
cursor.execute(uniqueUserCheck, [username])
user = cursor.fetchone()
# keep prompting the user until they enter a unique username
while user is not None:
username = input("Choose a unique user: ")
cursor.execute(uniqueUserCheck, [username])
user = cursor.fetchone()
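As a complementary safeguard, you can let SQLite enforce uniqueness itself and catch the resulting error on insert. This is a minimal sketch, assuming the users table declares username as PRIMARY KEY (or with a UNIQUE constraint):
import sqlite3

connection = sqlite3.connect("users.db")
cursor = connection.cursor()

def try_insert_user(username):
    try:
        cursor.execute("INSERT INTO users (username) VALUES (?)", [username])
        connection.commit()
        return True
    except sqlite3.IntegrityError:
        # the PRIMARY KEY / UNIQUE constraint was violated: username taken
        return False
This avoids a race between the SELECT check and a later INSERT, since the constraint is enforced atomically by the database.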
|
checking a variable against a record in a sqlite3 database to see if data entered is unique
|
So I am trying to create a function that allows a user to create a profile with personal information. They will enter a username that will act as the primary key and is required to be unique, so when this username is entered I check whether it already exists in the sqlite3 database; if it does, the user is asked to try another username, and if not, the function continues.
I was certain this would work because I used similar code to check values entered when a user logs in, in a login function I coded, so I am quite stumped.
Any help would be greatly appreciated...
the code in question:
def signupInfo():
#takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur
username = input("Choose username: ")
#check for if the username is unique
uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''
cursor.execute(uniqueUserCheck, [username])
user = cursor.fetchone()
if user is not None:
username = input("Choose a unique user: ")
else:
#check if username is alphanumeric or has enough characters
while username.isalpha() == False or len(username) <= 3:
username = input("invalid username, try again: ")
|
[
"It looks like you're trying to check if the username already exists in the database. To do this, you can use an SQL SELECT query to check if the username exists in the users table. If the query returns a result, then the username already exists and you can prompt the user to enter a different username.\nHere's one way you could implement this:\ndef signupInfo():\n #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur\n username = input(\"Choose username: \")\n\n #check if username is alphanumeric or has enough characters\n while username.isalpha() == False or len(username) <= 3:\n username = input(\"invalid username, try again: \")\n\n #check for if the username is unique\n uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''\n cursor.execute(uniqueUserCheck, [username])\n user = cursor.fetchone()\n if user is not None:\n username = input(\"Choose a unique user: \")\n\n\nNote that in your current implementation, if the username is not unique, you will only prompt the user to enter a new username once, but the code does not check if the new username is unique. To fix this, you can use a while loop to keep prompting the user until they enter a unique username.\nHere's one way you could do this:\ndef signupInfo():\n #takes input for username and checks it is alphanumeric, username will also act as primary key so a validation for it being unique will occur\n username = input(\"Choose username: \")\n\n #check if username is alphanumeric or has enough characters\n while username.isalpha() == False or len(username) <= 3:\n username = input(\"invalid username, try again: \")\n\n #check for if the username is unique\n uniqueUserCheck = '''SELECT * FROM users WHERE `username` = ?'''\n cursor.execute(uniqueUserCheck, [username])\n user = cursor.fetchone()\n\n # keep prompting the user until they enter a unique username\n while user is not None:\n username = input(\"Choose a unique user: \")\n cursor.execute(uniqueUserCheck, [username])\n user = cursor.fetchone()\n\n\n"
] |
[
1
] |
[] |
[] |
[
"python",
"sql",
"validation"
] |
stackoverflow_0074661361_python_sql_validation.txt
|
Q:
how to trigger search automatically when using SearchDelegate buildSuggestions in flutter
Now I am using SearchDelegate in flutter 2.0.1, this is my buildSuggestions code:
@override
Widget buildSuggestions(BuildContext context) {
var channelRequest = new ChannelRequest(pageNum: 1, pageSize: 10, name: query);
if (query.isEmpty) {
return Container();
}
return FutureBuilder(
future: ChannelAction.fetchSuggestion(channelRequest),
builder: (context, AsyncSnapshot snapshot) {
if (snapshot.hasData) {
List<ChannelSuggestion> suggestions = snapshot.data;
return buildSuggestionComponent(suggestions, context);
} else {
return Text("");
}
});
}
Widget buildSuggestionComponent(List<ChannelSuggestion> suggestions, BuildContext context) {
return ListView.builder(
itemCount: suggestions.length,
itemBuilder: (context, index) {
return ListTile(
title: Text('${suggestions[index].name}'),
onTap: () async {
query = '${suggestions[index].name}';
},
);
},
);
}
When I select the recommended text, I want to automatically trigger the search event (when I click the suggestion text, trigger the search, fetch the data from the server side, and render the result to the UI) so I do not need to click the search button. This is my search code:
@override
Widget buildResults(BuildContext context) {
var channelRequest = new ChannelRequest(pageNum: 1, pageSize: 10, name: query);
return buildResultImpl(channelRequest);
}
Widget buildResultImpl(ChannelRequest channelRequest) {
return FutureBuilder(
future: ChannelAction.searchChannel(channelRequest),
builder: (context, AsyncSnapshot snapshot) {
if (snapshot.hasData) {
List<Channel> channels = snapshot.data;
return buildResultsComponent(channels, context);
} else {
return Text("");
}
return Center(child: CircularProgressIndicator());
});
}
What should I do to implement it? I have tried invoking the buildResults function in buildSuggestionComponent, but it does not seem to work.
A:
To update the data based on the query, you can make an API call to get the result when clicking on a suggestion, then use a StreamController to stream the results to the buildResults() method and call showResults().
I'm creating a simple app here for demonstration:
import 'dart:async';
import 'package:flutter/material.dart';
void main() {
runApp(MaterialApp(home: Home()));
}
class Home extends StatefulWidget {
@override
_HomeState createState() => _HomeState();
}
class _HomeState extends State<Home> {
final _controller = StreamController.broadcast();
@override
dispose() {
super.dispose();
_controller.close();
}
Future<void> _showSearch() async {
await showSearch(
context: context,
delegate: TheSearch(context: context, controller: _controller),
query: "any query",
);
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text("Search Demo"),
actions: <Widget>[
IconButton(
icon: Icon(Icons.search),
onPressed: _showSearch,
),
],
),
);
}
}
class TheSearch extends SearchDelegate<String> {
TheSearch({this.context, this.controller});
BuildContext context;
StreamController controller;
final suggestions =
List<String>.generate(10, (index) => 'Suggestion ${index + 1}');
@override
List<Widget> buildActions(BuildContext context) {
return [IconButton(icon: Icon(Icons.clear), onPressed: () => query = "")];
}
@override
Widget buildLeading(BuildContext context) {
return IconButton(
icon: AnimatedIcon(
icon: AnimatedIcons.menu_arrow,
progress: transitionAnimation,
),
onPressed: () {
close(context, null);
},
);
}
@override
Widget buildResults(BuildContext context) {
return StreamBuilder(
stream: controller.stream,
builder: (context, snapshot) {
if (!snapshot.hasData)
return Container(
child: Center(
child: Text('Empty result'),
));
return Column(
children: List<Widget>.generate(
snapshot.data.length,
(index) => ListTile(
onTap: () => close(context, snapshot.data[index]),
title: Text(snapshot.data[index]),
),
),
);
},
);
}
@override
Widget buildSuggestions(BuildContext context) {
final _suggestions = query.isEmpty ? suggestions : [];
return ListView.builder(
itemCount: _suggestions.length,
itemBuilder: (content, index) => ListTile(
onTap: () {
query = _suggestions[index];
// Make your API call to get the result
// Here I'm using a sample result
controller.add(sampleResult);
showResults(context);
},
title: Text(_suggestions[index])),
);
}
}
final List<String> sampleResult =
List<String>.generate(10, (index) => 'Result ${index + 1}');
A:
I have done it through a simple workaround
Simply add this line after your database call:
query = query
But be careful: reassigning query rebuilds the suggestions, so guard against an infinite call loop.
|
how to trigger search automatically when using SearchDelegate buildSuggestions in flutter
|
Now I am using SearchDelegate in flutter 2.0.1, this is my buildSuggestions code:
@override
Widget buildSuggestions(BuildContext context) {
var channelRequest = new ChannelRequest(pageNum: 1, pageSize: 10, name: query);
if (query.isEmpty) {
return Container();
}
return FutureBuilder(
future: ChannelAction.fetchSuggestion(channelRequest),
builder: (context, AsyncSnapshot snapshot) {
if (snapshot.hasData) {
List<ChannelSuggestion> suggestions = snapshot.data;
return buildSuggestionComponent(suggestions, context);
} else {
return Text("");
}
});
}
Widget buildSuggestionComponent(List<ChannelSuggestion> suggestions, BuildContext context) {
return ListView.builder(
itemCount: suggestions.length,
itemBuilder: (context, index) {
return ListTile(
title: Text('${suggestions[index].name}'),
onTap: () async {
query = '${suggestions[index].name}';
},
);
},
);
}
When I select the recommended text, I want to automatically trigger the search event (when I click the suggestion text, trigger the search, fetch the data from the server side, and render the result to the UI) so I do not need to click the search button. This is my search code:
@override
Widget buildResults(BuildContext context) {
var channelRequest = new ChannelRequest(pageNum: 1, pageSize: 10, name: query);
return buildResultImpl(channelRequest);
}
Widget buildResultImpl(ChannelRequest channelRequest) {
return FutureBuilder(
future: ChannelAction.searchChannel(channelRequest),
builder: (context, AsyncSnapshot snapshot) {
if (snapshot.hasData) {
List<Channel> channels = snapshot.data;
return buildResultsComponent(channels, context);
} else {
return Text("");
}
return Center(child: CircularProgressIndicator());
});
}
What should I do to implement it? I have tried invoking the buildResults function in buildSuggestionComponent, but it does not seem to work.
|
[
"To update the data based on the query, you can make an API call to get the result when clicking on a suggestion, then use a StreamController to stream the results to the buildResults() method and call showResults().\nI'm creating a simple app here for demonstration:\nimport 'dart:async';\n\nimport 'package:flutter/material.dart';\n\nvoid main() {\n runApp(MaterialApp(home: Home()));\n}\n\nclass Home extends StatefulWidget {\n @override\n _HomeState createState() => _HomeState();\n}\n\nclass _HomeState extends State<Home> {\n final _controller = StreamController.broadcast();\n\n @override\n dispose() {\n super.dispose();\n _controller.close();\n }\n\n Future<void> _showSearch() async {\n await showSearch(\n context: context,\n delegate: TheSearch(context: context, controller: _controller),\n query: \"any query\",\n );\n }\n\n @override\n Widget build(BuildContext context) {\n return Scaffold(\n appBar: AppBar(\n title: Text(\"Search Demo\"),\n actions: <Widget>[\n IconButton(\n icon: Icon(Icons.search),\n onPressed: _showSearch,\n ),\n ],\n ),\n );\n }\n}\n\nclass TheSearch extends SearchDelegate<String> {\n TheSearch({this.context, this.controller});\n\n BuildContext context;\n StreamController controller;\n final suggestions =\n List<String>.generate(10, (index) => 'Suggestion ${index + 1}');\n\n @override\n List<Widget> buildActions(BuildContext context) {\n return [IconButton(icon: Icon(Icons.clear), onPressed: () => query = \"\")];\n }\n\n @override\n Widget buildLeading(BuildContext context) {\n return IconButton(\n icon: AnimatedIcon(\n icon: AnimatedIcons.menu_arrow,\n progress: transitionAnimation,\n ),\n onPressed: () {\n close(context, null);\n },\n );\n }\n\n @override\n Widget buildResults(BuildContext context) {\n return StreamBuilder(\n stream: controller.stream,\n builder: (context, snapshot) {\n if (!snapshot.hasData)\n return Container(\n child: Center(\n child: Text('Empty result'),\n ));\n return Column(\n children: List<Widget>.generate(\n snapshot.data.length,\n (index) => ListTile(\n onTap: () => close(context, snapshot.data[index]),\n title: Text(snapshot.data[index]),\n ),\n ),\n );\n },\n );\n }\n\n @override\n Widget buildSuggestions(BuildContext context) {\n final _suggestions = query.isEmpty ? suggestions : [];\n return ListView.builder(\n itemCount: _suggestions.length,\n itemBuilder: (content, index) => ListTile(\n onTap: () {\n query = _suggestions[index];\n // Make your API call to get the result\n // Here I'm using a sample result\n controller.add(sampleResult);\n showResults(context);\n },\n title: Text(_suggestions[index])),\n );\n }\n}\n\nfinal List<String> sampleResult =\n List<String>.generate(10, (index) => 'Result ${index + 1}');\n\n",
"I have done it through a simple workaround\nSimply add this line after your database call\nquery = query\nBut be careful of the call looping\n"
] |
[
1,
0
] |
[] |
[] |
[
"flutter"
] |
stackoverflow_0066792056_flutter.txt
|
Q:
JSON file to R dataframe
I have a JSON file. While the original file is quite large, I reduced to a much smaller reproducible example for the purposes of this question (I still get the same error no matter what size):
{
"relationships_followers": [
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount1",
"value": "testaccount1",
"timestamp": 1669418204
}
]
},
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount2",
"value": "testaccount2",
"timestamp": 1660426426
}
]
},
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount3",
"value": "testaccount3",
"timestamp": 1648230499
}
]
},
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount4",
"value": "testaccount4",
"timestamp": 1379513403
}
]
}
]
}
I am attempting to convert it into a dataframe in R, which contains the values for href, value, and the timestamp variables:
But when I run the following, which I pulled from another SO answer about converting JSON to R:
library("rjson")
result <- fromJSON(file = "test_file.json")
json_data_frame <- as.data.frame(result)
I get met with this error about differing rows.
Error in (function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, :
arguments imply differing number of rows: 1, 0
How can I get what I have into the desired DF format?
A:
Looks like the data is nested...
Try this:
library("rjson")
library("dplyr")
result <- fromJSON(file = "test_file.json")
result_list <-sapply(result$relationships_followers,
"[[", "string_list_data")
json_data_frame <- bind_rows(result_list)
A:
That is because there is nested data.
df<- as.data.frame(do.call(rbind, lapply(
lapply(result$relationships_followers, "[[", "string_list_data"), "[[", 1)))
df
#> href value timestamp
#> "https://www.instagram.com/testaccount1" "testaccount1" 1669418204
#> "https://www.instagram.com/testaccount2" "testaccount2" 1660426426
#> "https://www.instagram.com/testaccount3" "testaccount3" 1648230499
#> "https://www.instagram.com/testaccount4" "testaccount4" 1379513403
NOTE: jsonlite package does a better job on parsing data.frame by default.
|
JSON file to R dataframe
|
I have a JSON file. While the original file is quite large, I reduced to a much smaller reproducible example for the purposes of this question (I still get the same error no matter what size):
{
"relationships_followers": [
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount1",
"value": "testaccount1",
"timestamp": 1669418204
}
]
},
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount2",
"value": "testaccount2",
"timestamp": 1660426426
}
]
},
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount3",
"value": "testaccount3",
"timestamp": 1648230499
}
]
},
{
"title": "",
"media_list_data": [
],
"string_list_data": [
{
"href": "https://www.instagram.com/testaccount4",
"value": "testaccount4",
"timestamp": 1379513403
}
]
}
]
}
I am attempting to convert it into a dataframe in R, which contains the values for href, value, and the timestamp variables:
But when I run the following, which I pulled from another SO answer about converting JSON to R:
library("rjson")
result <- fromJSON(file = "test_file.json")
json_data_frame <- as.data.frame(result)
I get met with this error about differing rows.
Error in (function (..., row.names = NULL, check.rows = FALSE, check.names = TRUE, :
arguments imply differing number of rows: 1, 0
How can I get what I have into the desired DF format?
|
[
"Looks like the data is nested...\nTry this:\nlibrary(\"rjson\")\nlibrary(\"dplyr\")\n\nresult <- fromJSON(file = \"test_file.json\")\nresult_list <-sapply(result$relationships_followers,\n \"[[\", \"string_list_data\")\njson_data_frame <- bind_rows(result_list)\n\n",
"That is because there is nested data.\ndf<- as.data.frame(do.call(rbind, lapply(\n lapply(result$relationships_followers, \"[[\", \"string_list_data\"), \"[[\", 1)))\n\ndf\n#> href value timestamp \n#> \"https://www.instagram.com/testaccount1\" \"testaccount1\" 1669418204\n#> \"https://www.instagram.com/testaccount2\" \"testaccount2\" 1660426426\n#> \"https://www.instagram.com/testaccount3\" \"testaccount3\" 1648230499\n#> \"https://www.instagram.com/testaccount4\" \"testaccount4\" 1379513403\n\nNOTE: jsonlite package does a better job on parsing data.frame by default.\n"
] |
[
4,
3
] |
[] |
[] |
[
"r",
"rjson"
] |
stackoverflow_0074661277_r_rjson.txt
|
Q:
How to define different layers in neural network with MLPRegressor
I am trying to set up a neural network model using MLPRegressor, I have been told to do so using the following structure:
The network must have two different hidden layer node layouts: the first with one hidden layer with 100 nodes, the second with three hidden layers with 100 nodes each.
Use the neural network fitting with two activation functions: 'identity' and 'relu'.
I have looked around online, but I couldn't really make much sense of the documentation. What I tried so far took the following form:
model = MLPRegressor(hidden_layer_sizes=((100),(100,100,100)), activation='relu', solver = 'lbfgs').fit(X,Y)
But that doesn't consider the two activation functions, and it throws the following error:
TypeError: '<=' not supported between instances of 'tuple' and 'int'
Any suggestions on how to implement this?
[Edit]
I have been asked to clarify the question. The task that I have to complete is to fit experimental data (X and Y) by using different techniques, for example: interpolation, regression... And now, a neural network. The structure of the neural network must be as given above (what I have written there is quite literally what I have been asked to do).
A:
A single MLPRegressor accepts exactly one hidden-layer layout (a tuple of layer sizes) and one activation function, so you cannot nest several layouts into hidden_layer_sizes the way ((100),(100,100,100)) does — that nesting is what raises the TypeError. To compare two layouts and two activation functions, build a separate model for each configuration. For a single configuration:
from sklearn.neural_network import MLPRegressor
# Define the hidden layer node layout
hidden_layer_sizes = (100,)  # note the comma: a 1-tuple meaning one hidden layer of 100 nodes
# Define the activation function
activation = 'relu'
# Create the MLPRegressor model
model = MLPRegressor(hidden_layer_sizes=hidden_layer_sizes, activation=activation, solver='lbfgs')
# Fit the model to your data
model.fit(X, Y)
hidden_layer_sizes takes one entry per hidden layer, so (100,) gives a single hidden layer of 100 nodes and (100, 100, 100) gives three hidden layers of 100 nodes each; activation is a single string applied to every hidden layer of that model.
You may also want to consider using a different solver than 'lbfgs'. The 'lbfgs' solver is generally not recommended for use with neural networks because it can be slow and may not always converge. Some other solvers that may work well with the MLPRegressor class are 'adam' and 'sgd'.
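To cover the full assignment (two layouts × two activations), one straightforward approach is to fit a separate model per combination and compare them. A minimal sketch, assuming X and Y hold your experimental data:
from sklearn.neural_network import MLPRegressor

layouts = [(100,), (100, 100, 100)]   # one hidden layer, then three
activations = ['identity', 'relu']

models = {}
for layout in layouts:
    for activation in activations:
        # each MLPRegressor gets exactly one layout and one activation
        model = MLPRegressor(hidden_layer_sizes=layout,
                             activation=activation,
                             solver='lbfgs', max_iter=5000)
        model.fit(X, Y)
        models[(layout, activation)] = model
        print(layout, activation, 'R^2 =', model.score(X, Y))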
|
How to define different layers in neural network with MLPRegressor
|
I am trying to set up a neural network model using MLPRegressor, I have been told to do so using the following structure:
The network must have two different hidden layer node layouts: the first with one hidden layer with 100 nodes, the second with three hidden layers with 100 nodes each.
Use the neural network fitting with two activation functions: 'identity' and 'relu'.
I have looked around online, but I couldn't really make much sense of the documentation. What I tried so far took the following form:
model = MLPRegressor(hidden_layer_sizes=((100),(100,100,100)), activation='relu', solver = 'lbfgs').fit(X,Y)
But that doesn't consider the two activation functions, and it throws the following error:
TypeError: '<=' not supported between instances of 'tuple' and 'int'
Any suggestions on how to implement this?
[Edit]
I have been asked to clarify the question. The task that I have to complete is to fit experimental data (X and Y) by using different techniques, for example: interpolation, regression... And now, a neural network. The structure of the neural network must be as given above (what I have written there is quiet literally what I have been asked to do.
|
[
"To use two different hidden layer node layouts and two activation functions with the MLPRegressor class, you can specify the hidden layer node layouts and activation functions as a list. For example:\nfrom sklearn.neural_network import MLPRegressor\n\n# Define the hidden layer node layout\nhidden_layer_sizes = (100)\n\n# Define the activation function\nactivation = 'relu'\n\n# Create the MLPRegressor model\nmodel = MLPRegressor(hidden_layer_sizes=hidden_layer_sizes, activation=activation, solver='lbfgs')\n\n# Fit the model to your data\nmodel.fit(X, Y)\n\nThe hidden_layer_sizes and activation parameters should be specified as lists with the same length. The model will then use the first hidden layer node layout with the first activation function, the second hidden layer node layout with the second activation function, and so on.\nYou may also want to consider using a different solver than 'lbfgs'. The 'lbfgs' solver is generally not recommended for use with neural networks because it can be slow and may not always converge. Some other solvers that may work well with the MLPRegressor class are 'adam' and 'sgd'.\n"
] |
[
1
] |
[] |
[] |
[
"artificial_intelligence",
"deep_learning",
"neural_network",
"python",
"scikit_learn"
] |
stackoverflow_0074661342_artificial_intelligence_deep_learning_neural_network_python_scikit_learn.txt
|
Q:
Extract date from string in date format, add n number of days. to then replace with that modified data another substring within the original string
import re, datetime, time
input_text = "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles" #example 1
input_text = "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #example 2
input_text = "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo" #example 3
identified_referencial_date = r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})" #obtained with regex capture groups
# r"(?:luego[\s|]*de[\s|]*unos|luego[\s|]*de|pasados[\s|]*ya[\s|]*unos|pasados[\s|]*unos|pasados[\s|]*ya|pasados|tras[\s|]*ya|tras)[\s|]*\d*[\s|]*(?:días|dias|día|dia)"
# r"\d*[\s|]*(?:días|dias|día|dia)[\s|]*(?:despues|luego)"
n = #the number of days that in this case should increase
indicated_date_relative_to_another = str(identified_date_in_date_format - datetime.timedelta(days = int(n) ))
input_text = re.sub(identified_referencial_date, indicated_date_relative_to_another, input_text)
print(repr(input_text)) # --> output
The objective: if a date is given first in year-month-day format (integers separated by hyphens, in that order: \d*-\d{2}-\d{2}) and the text then says that n days have passed, that phrase has to be replaced with year-month-day+n
luego de unos 3 dias ---> add 3 days to a previous date
luego de 6 dias ---> add 6 days to a previous date
pasados ya 13 dias ---> add 13 days to a previous date
pasados ya unos 48 dias ---> add 48 days to a previous date
pasados unos 36 dias ---> add 36 days to a previous date
pasados 9 dias ---> add 9 days to a previous date
tras ya 2 dias ---> add 2 days to a previous date
tras 32 dias ---> add 32 days to a previous date
3 dias despues ---> add 3 days to a previous date
3 dias luego ---> add 3 days to a previous date
Keep in mind that in certain cases, increasing the number of days could also change the number of the month or even the year, as in example 1.
Outputs that I need obtain in each case:
"tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien 2023-01-02 ese objeto aparecio de nuevo tras 2 arboles" #for the example 1
"luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 2022-11-18 ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #for the example 2
"Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero 2022-11-28 ese objeto aparecio en el cielo" #for the example 3
A:
Here is a regex solution you could use:
([12]\d{3}-[01]\d-[0-3]\d)(\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\d+) dias|(\d+) dias (?:despues|luego))
This regex requires that there are no other digits between the date and the days. It also is a bit loose on grammar. It would also match "luego de ya 3 dias". You can of course make it more precise with a longer regex, but you get the picture.
In a program:
from datetime import datetime, timedelta
import re
def add(datestr, days):
return (datetime.strptime(datestr, "%Y-%m-%d")
+ timedelta(days=int(days))).strftime('%Y-%m-%d')
input_texts = [
"tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles",
"luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano",
"Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo"
]
for input_text in input_texts:
result = re.sub(r"([12]\d{3}-[01]\d-[0-3]\d)(\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\d+) dias|(\d+) dias (?:despues|luego))",
lambda m: m[1] + m[2] + add(m[1], m[3] or m[4]),
input_text)
print(result)
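Running this over the three example strings prints exactly the target sentences listed in the question:
tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien 2023-01-02 ese objeto aparecio de nuevo tras 2 arboles
luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 2022-11-18 ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano
Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero 2022-11-28 ese objeto aparecio en el cielo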
|
Extract date from string in date format, add n number of days. to then replace with that modified data another substring within the original string
|
import re, datetime, time
input_text = "tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles" #example 1
input_text = "luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #example 2
input_text = "Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo" #example 3
identified_referencial_date = r"(?P<year>\d*)-(?P<month>\d{2})-(?P<startDay>\d{2})" #obtained with regex capture groups
# r"(?:luego[\s|]*de[\s|]*unos|luego[\s|]*de|pasados[\s|]*ya[\s|]*unos|pasados[\s|]*unos|pasados[\s|]*ya|pasados|tras[\s|]*ya|tras)[\s|]*\d*[\s|]*(?:días|dias|día|dia)"
# r"\d*[\s|]*(?:días|dias|día|dia)[\s|]*(?:despues|luego)"
n = #the number of days that in this case should increase
indicated_date_relative_to_another = str(identified_date_in_date_format - datetime.timedelta(days = int(n) ))
input_text = re.sub(identified_referencial_date, indicated_date_relative_to_another, input_text)
print(repr(input_text)) # --> output
The objective: if a date is given first in year-month-day format (integers separated by hyphens, in that order: \d*-\d{2}-\d{2}) and the text then says that n days have passed, that phrase has to be replaced with year-month-day+n
luego de unos 3 dias ---> add 3 days to a previous date
luego de 6 dias ---> add 6 days to a previous date
pasados ya 13 dias ---> add 13 days to a previous date
pasados ya unos 48 dias ---> add 48 days to a previous date
pasados unos 36 dias ---> add 36 days to a previous date
pasados 9 dias ---> add 9 days to a previous date
tras ya 2 dias ---> add 2 days to a previous date
tras 32 dias ---> add 32 days to a previous date
3 dias despues ---> add 3 days to a previous date
3 dias luego ---> add 3 days to a previous date
Keep in mind that in certain cases, increasing the number of days could also change the number of the month or even the year, as in example 1.
Outputs that I need obtain in each case:
"tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien 2023-01-02 ese objeto aparecio de nuevo tras 2 arboles" #for the example 1
"luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 2022-11-18 ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano" #for the example 2
"Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero 2022-11-28 ese objeto aparecio en el cielo" #for the example 3
|
[
"Here is a regex solution you could use:\n([12]\\d{3}-[01]\\d-[0-3]\\d)(\\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\\d+) dias|(\\d+) dias (?:despues|luego))\n\nThis regex requires that there are no other digits between the date and the days. It also is a bit loose on grammar. It would also match \"luego de ya 3 dias\". You can of course make it more precise with a longer regex, but you get the picture.\nIn a program:\nfrom datetime import datetime, timedelta\nimport re\n\ndef add(datestr, days):\n return (datetime.strptime(datestr, \"%Y-%m-%d\") \n + timedelta(days=int(days))).strftime('%Y-%m-%d')\n\ninput_texts = [\n \"tras la aparicion del objeto misterioso el 2022-12-30 visitamos ese sitio nuevamente revisando detras de los arboles pero recien tras 3 dias ese objeto aparecio de nuevo tras 2 arboles\",\n \"luego el 2022-11-15 fuimos nuevamente a dicho lugar pero nada ocurrio y 3 dias despues ese objeto aparecio en el cielo durante el atardecer de aquel dia de verano\",\n \"Me entere del misterioso suceso ese 2022-11-01 y el 2022-11-15 fuimos al monte pero nada ocurrio, pero luego de 13 dias ese objeto aparecio en el cielo\"\n]\n\nfor input_text in input_texts:\n result = re.sub(r\"([12]\\d{3}-[01]\\d-[0-3]\\d)(\\D*?)(?:(?:luego de|pasados|tras)(?: ya)?(?: unos)? (\\d+) dias|(\\d+) dias (?:despues|luego))\",\n lambda m: m[1] + m[2] + add(m[1], m[3] or m[4]), \n input_text)\n print(result)\n\n"
] |
[
1
] |
[] |
[] |
[
"datetime",
"python",
"python_3.x",
"regex",
"regex_group"
] |
stackoverflow_0074660456_datetime_python_python_3.x_regex_regex_group.txt
|
Q:
How do I use Excel Solver for exponents?
My equation is Q^Q^.25=6,512,786
I have the following input
And here is my solver
I keep getting errors.
A:
The problem is with the non-standard way that Excel handles iterated exponents. Excel parses Q^Q^0.25 as (Q^Q)^0.25, which grows very rapidly. You probably intended Q^(Q^0.25), which, while still growing rapidly, doesn't explode in the same way. If you change your equation in D2 to
=C2^(C2^0.25)
and rerun the solver with the same settings (but a nonzero starting value at C2 since 0^0 is undefined) you will get convergence, to approximately 117.44.
On Edit: It turns out that in the programming world there is a greater diversity of how associativity of exponentiation is handled than I realized. This answer to a related question contains an interesting table surveying how different programming languages handle it.
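As a quick sanity check on that root, evaluating the intended grouping at Q ≈ 117.44 (done here in Python purely to verify the arithmetic):
Q = 117.44
print(Q ** (Q ** 0.25))  # ~6.51e6, matching the target 6,512,786 to the precision quoted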
|
How do I use Excel Solver for exponents?
|
My equation is Q^Q^.25=6,512,786
I have the following input
And here is my solver
I keep getting errors.
|
[
"The problem is with the non-standard way that Excel handles iterated exponents. Excel parses Q^Q^0.25 as (Q^Q)^0.25, which grows very rapidly. You probably intended Q^(Q^0.25), which, while still growing rapidly, doesn't explode in the same way. If you change your equation in D2 to\n=C2^(C2^0.25)\n\nand rerun the solver with the same settings (but a nonzero starting value at C2 since 0^0 is undefined) you will get convergence, to approximately 117.44.\nOn Edit: It turns out that in the programming world there is a greater diversity of how associativity of exponentiation is handled than I realized. This answer to a related question contains an interesting table surveying how different programming languages handle it.\n"
] |
[
1
] |
[] |
[] |
[
"excel",
"exponent",
"exponential",
"solver"
] |
stackoverflow_0074660587_excel_exponent_exponential_solver.txt
|
Q:
How to create a vector of positions of a numeric vector in R?
I have a vector of numbers that contain some gaps. For example,
vec <- c(3,1,7,3,5,7)
So, there are 4 different values and I would like to transform it into a vector of values (without gaps) indicating the order of the entry while respecting the same position. So, in this case, I would like to obtain
2 1 4 2 3 4
This indicates a sequence between 1 and 4 and shows the order of each value in the original vector vec.
A:
You can use match to help you look up the values in a sorted unique order. For example
vec <- c(3,1,7,3,5,7)
match(vec, sort(unique(vec)))
# [1] 2 1 4 2 3 4
This works because match returns the indexes which will start at 1.
A:
We may use factor
as.integer(factor(vec))
[1] 2 1 4 2 3 4
|
How to create a vector of positions of a numeric vector in R?
|
I have a vector of numbers that contain some gaps. For example,
vec <- c(3,1,7,3,5,7)
So, there are 4 different values and I would like to transform it into a vector of values (without gaps) indicating the order of the entry while respecting the same position. So, in this case, I would like to obtain
2 1 4 2 3 4
This indicates a sequence between 1 and 4 and shows the order of each value in the original vector vec.
|
[
"You can use match to help you look up the values in a sorted unique order. For example\nvec <- c(3,1,7,3,5,7)\nmatch(vec, sort(unique(vec)))\n# [1] 2 1 4 2 3 4\n\nThis works because match returns the indexes which will start at 1.\n",
"We may use factor\nas.integer(factor(vec))\n[1] 2 1 4 2 3 4\n\n"
] |
[
1,
1
] |
[] |
[] |
[
"r",
"sorting",
"vector"
] |
stackoverflow_0074661376_r_sorting_vector.txt
|
Q:
Checking if an array of items exist in a DynamoDB Python without looping
I have a DynamoDB table with hashes as user IDs, set as the partition key.
I want to know whether an item exists in the table or not.
I am going to pass an array of user hashes. Each hash in this array should be checked for whether it exists or not.
I already found a solution with GetItem. But that would mean that I have to loop over all the user hashes in the array, right?
Does anybody have a solution for how to do this without looping? Looping costs too much performance.
A:
There is no shortcut here. You could do parallel (multi-threaded client) calls to reduce the overall latency.
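For illustration, here is a minimal sketch of the parallel approach with boto3 (the table name UserTable and key name UserID are assumptions; adjust them to your schema):
import boto3
from concurrent.futures import ThreadPoolExecutor

dynamodb = boto3.client("dynamodb")

def exists(user_hash):
    # GetItem returns an "Item" key in the response only when the item is present
    resp = dynamodb.get_item(TableName="UserTable", Key={"UserID": {"S": user_hash}})
    return "Item" in resp

user_hashes = ["hash1", "hash2", "hash3"]  # your array of user hashes
with ThreadPoolExecutor(max_workers=10) as pool:
    results = dict(zip(user_hashes, pool.map(exists, user_hashes)))
print(results)

This still issues one request per hash, but the calls overlap instead of running sequentially.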
|
Checking if an array of items exist in a DynamoDB Python without looping
|
I have a DynamoDB table with hashes as UserIDs, set as the partition key.
I want to know whether an item exists in the table or not.
I am going to pass an array of User-Hashes. Each hash in this array should be checked to see whether it exists or not.
I already found a solution with GetItem. But that would mean that I have to loop over all the User-Hashes in the array, right?
Does anybody have a solution for how to do this without looping? Looping costs too much performance.
|
[
"There is no shortcut here. You could do parallel (multi-threaded client) calls to reduce the overall latency.\n"
] |
[
1
] |
[] |
[] |
[
"amazon_dynamodb",
"amazon_web_services",
"for_loop",
"python_3.x"
] |
stackoverflow_0074629090_amazon_dynamodb_amazon_web_services_for_loop_python_3.x.txt
|
Q:
Merge 2 dataframes and update column with lists using conditions
I have 2 dataframes with same columns and indexes.
a a
1 [] 1 [5,2,7]
2 [1,2,3] 2 [1,2,3,4]
3 [7] 3 [7,5]
I want to merge them using a condition: when the length of the list is <= 1, take the value from the second dataframe and add it to the first dataframe, else keep the old value.
So after that result is:
a
1 [5,2,7]
2 [1,2,3]
3 [7,5]
What is the best way to do this?
A:
for i, (x, y) in enumerate(zip(dfa['a'], dfb['a'])):
    # apply your logic - 'when length of list is <=1 then take value...' and save it in dfa['a'][i]
    if len(x) <= 1:
        # a single .loc call writes back reliably; chained indexing (dfa.loc[i]['a']) may not
        dfa.loc[i, 'a'] = y
A:
Here is an approach using pandas.DataFrame.mask.
First, make sure that the values of each dataframe/column are lists:
df1["a"]= df1["a"].str.strip("[]").str.split(",") #skip if already a list
df2["a"]= df2["a"].str.strip("[]").str.split(",") #skip if already a list
Then, use pandas.Series.str.len:
out = df1.mask(df1["a"].str.len().le(1), other=df2["a"], axis=0)
Or use pandas.Series.transform:
out = df1.mask(df1["a"].transform(len).le(1), other=df2["a"], axis=0)
# Output :
print(out)
a
0 [5, 2, 7]
1 [1, 2, 3]
2 [7, 5]
A:
You can use numpy.where() to evaluate the criteria and return which version you want.
In this solution, I combine the two lists and then convert to a set to remove duplicates, and then convert back to a list because that is what you want at the end, I believe. Note that this can change the element order (you can get [2,5,7] instead of [5,2,7]).
new_df = np.where(
df1['a'].apply(len)<=1,
(df1['a'] + df2['a']).apply(set).apply(list),
df1['a'])
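One usage note (an assumption on my part, not stated in the answer): np.where returns a NumPy array rather than a DataFrame, so you would typically assign the result back to the column:
df1['a'] = new_df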
|
Merge 2 dataframes and update column with lists using conditions
|
I have 2 dataframes with same columns and indexes.
a a
1 [] 1 [5,2,7]
2 [1,2,3] 2 [1,2,3,4]
3 [7] 3 [7,5]
I want to merge them using a condition: when the length of the list is <= 1, take the value from the second dataframe and add it to the first dataframe, else keep the old value.
So after that result is:
a
1 [5,2,7]
2 [1,2,3]
3 [7,5]
What is the best way to do this?
|
[
"for i, (x,y) in enumerate(zip(dfa['a'], dfb['b'])):\n # apply your logic - 'when length of list is <=1 then take value...' and save it in dfa['a'][i]\n if len(x) <= 1:\n dfa.loc[i]['a'] = y\n\n",
"Here is an approach using pandas.DataFrame.mask.\nFirst, make sure that the values of each dataframe/column are lists :\ndf1[\"a\"]= df1[\"a\"].str.strip(\"[]\").str.split(\",\") #skip if already a list\ndf2[\"a\"]= df2[\"a\"].str.strip(\"[]\").str.split(\",\") #skip if already a list\n\nThen, use pandas.Series.str.len :\nout = df1.mask(df1[\"a\"].str.len().le(1), other=df2[\"a\"], axis=0)\n\nOr use pandas.Series.transform :\nout = df1.mask(df1[\"a\"].transform(len).le(1), other=df2[\"a\"], axis=0)\n\n# Output :\nprint(out)\n a\n0 [5, 2, 7]\n1 [1, 2, 3]\n2 [7, 5]\n\n",
"You can use numpy.where() to evaluate the criteria and return which version you want.\nIn this solution, I combine the two lists and then convert to a set to remove duplicates, and then convert back to a list because that is what you want at the end, I believe. Note that this does change the elements (you can [2,5,7] instead of [5,2,7])\nnew_df = np.where(\n df1['a'].apply(len)<=1,\n (df1['a'] + df2['a']).apply(set).apply(list),\n df1['a'])\n\n"
] |
[
0,
0,
0
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074661219_pandas_python.txt
|
Q:
Default Quasar app in VSCode with recommended extensions has no IntelliSense code completion for Quasar components
I created a default Quasar project in VSCode and have the recommended extensions. The Quasar version is 2.6.0. The Quasar docs on VSCode configuration state, "If you created your project with Quasar CLI, you already have the recommended VS Code configuration." However, when I create a default project with Quasar CLI, auto-complete does not work with Quasar components, just with standard Vue components. For example, typing <q-b in the template should suggest <q-btn>, but there is no suggestion. Typing < is enough to bring up a list of suggested Vue components. Interestingly, the Quasar learning project Takeoff does have auto-complete when I clone it from Github. I am using the same IDE and extensions with both projects, so the project files have to be the issue, but the Quasar docs say auto-complete should work out of the box. What am I missing here?
I expected auto-completion to work out of the box, but it doesn't. I have tried switching between Volar and Vetur, but neither provides Quasar component auto-completion. I have tried running both Webpack and Vite servers, but it didn't make a difference. I have restarted VSCode after each of these changes.
A:
There is no such feature as auto-completion of components in Quasar itself.
Only IntelliSense can provide you with suggestions, based on the components it knows exist or on some TypeScript configuration.
|
Default Quasar app in VSCode with recommended extensions has no IntelliSense code completion for Quasar components
|
I created a default Quasar project in VSCode and have the recommended extensions. The Quasar version is 2.6.0. The Quasar docs on VSCode configuration state, "If you created your project with Quasar CLI, you already have the recommended VS Code configuration." However, when I create a default project with Quasar CLI, auto-complete does not work with Quasar components, just with standard Vue components. For example, typing <q-b in the template should suggest <q-btn>, but there is no suggestion. Typing < is enough to bring up a list of suggested Vue components. Interestingly, the Quasar learning project Takeoff does have auto-complete when I clone it from Github. I am using the same IDE and extensions with both projects, so the project files have to be the issue, but the Quasar docs say auto-complete should work out of the box. What am I missing here?
I expected auto-completion to work out of the box, but it doesn't. I have tried switching between Volar and Vetur, but neither provides Quasar component auto-completion. I have tried running both Webpack and Vite servers, but it didn't make a difference. I have restarted VSCode after each of these changes.
|
[
"There is no such feature as Auto-completion of components in Quasar.\nOnly IntelliSense can provide you some suggestions based on the components it knows the existence or maybe some Typescript configuration.\n"
] |
[
1
] |
[] |
[] |
[
"javascript",
"quasar",
"quasar_framework",
"visual_studio_code",
"vue.js"
] |
stackoverflow_0074647859_javascript_quasar_quasar_framework_visual_studio_code_vue.js.txt
|
Q:
jquery dialog window opens with datepicker text already focused
I know this is a tough one to understand....
I have a button that opens a dialog window. The buildPurchaseOrder url is just a form.
$('#poDialog').dialog({
width:1000,
height:1000,
modal:true,
autoOpen: false,
close: function(event, ui) {
$('#poDialog').dialog('close');
}
})
$.ajax({
type:"POST",
data:{'items' : items,
'_token' : token},
url:"buildPurchaseOrder",
success:function(result){
$('#poDialog').dialog('open');
$('#poDialog').html(result);
}
})
The relevant part of the form:
<input type='text' id='requestedDate' class='datePicker'>
In that dialog window I have an input text that has a datepicker on it.
$('body').on('focus',".datePicker", function(){
$(this).datepicker({
});
});
The first time I click on the button to open the dialog, it works fine.
If I close the dialog with the upper right corner x and open the dialog again, the input text datepicker is already focused and clicking dates does not work:
(error: jquery-ui.js:8188 Uncaught TypeError: Cannot set properties of undefined (setting 'currentDay')).
This is what is seen when I click on the button to open the dialog the second time. I haven't clicked/focused on anything yet. What other info might be needed to help figure out why the datepicker is already focused when the dialog loads? I know there's a lot going on here that you can't see. I have verified that the id is not duplicated anywhere. What else should I be looking at?
A:
Consider the following.
$('#poDialog').dialog({
width: 1000,
height: 1000,
modal: true,
autoOpen: false,
close: function(event, ui) {
$('#poDialog').dialog('close');
}
})
$.ajax({
type: "POST",
data: {
'items': items,
'_token': token
},
url: "buildPurchaseOrder",
success: function(result) {
$('#poDialog').html(result).dialog('open');
    $('#poDialog .datePicker').datepicker().focus();
}
})
When the AJAX call completes, HTML will be loaded into the Dialog window. Now you initialize DatePicker upon the element.
|
jquery dialog window opens with datepicker text already focused
|
I know this is a tough one to understand....
I have a button that opens a dialog window. The buildPurchaseOrder url is just a form.
$('#poDialog').dialog({
width:1000,
height:1000,
modal:true,
autoOpen: false,
close: function(event, ui) {
$('#poDialog').dialog('close');
}
})
$.ajax({
type:"POST",
data:{'items' : items,
'_token' : token},
url:"buildPurchaseOrder",
success:function(result){
$('#poDialog').dialog('open');
$('#poDialog').html(result);
}
})
The relevant part of the form:
<input type='text' id='requestedDate' class='datePicker'>
In that dialog window I have an input text that has a datepicker on it.
$('body').on('focus',".datePicker", function(){
$(this).datepicker({
});
});
The first time I click on the button to open the dialog, it works fine.
If I close the dialog with the upper right corner x and open the dialog again, the input text datepicker is already focused and clicking dates does not work:
(error: jquery-ui.js:8188 Uncaught TypeError: Cannot set properties of undefined (setting 'currentDay')).
This is what is seen when I click on the button to open the dialog the second time. I haven't clicked/focused on anything yet. What other info might be needed to help figure out why the datepicker is already focused when the dialog loads? I know there's a lot going on here that you can't see. I have verified that the id is not duplicated anywhere. What else should I be looking at?
|
[
"Consider the following.\n$('#poDialog').dialog({\n width: 1000,\n height: 1000,\n modal: true,\n autoOpen: false,\n close: function(event, ui) {\n $('#poDialog').dialog('close');\n }\n})\n\n$.ajax({\n type: \"POST\",\n data: {\n 'items': items,\n '_token': token\n },\n url: \"buildPurchaseOrder\",\n success: function(result) {\n $('#poDialog').html(result).dialog('open');\n $('#poDialog .datepicker').datepicker().focus();\n }\n})\n\nWhen the AJAX call completes, HTML will be loaded into the Dialog window. Now you initialize DatePicker upon the element.\n"
] |
[
0
] |
[] |
[] |
[
"datepicker",
"dialog",
"jquery",
"jquery_ui"
] |
stackoverflow_0074656957_datepicker_dialog_jquery_jquery_ui.txt
|
Q:
How to determine constexpr size of C array typedef without sizeof, similar to std::size
Is there something similar to std::size that works with the typedef of C array in C++17 or later available in STL?
To calculate a constexpr number of elements in typedef CArray defined like this:
typedef double MyCArrayType[20];
This works, but I don't want to declare a variable:
MyCArrayType arr;
constexpr size_t sz = std::size(arr);
This works, but I prefer not to specify element type:
constexpr size_t sz = sizeof(MyCArrayType) / sizeof(double);
I'd like something similar to this, if it is in STL already:
constexpr size_t sz = std::size<MyCArrayType>();
A:
Okay, I've found what I need. The std::extent works for my case
typedef double MyCArrayType[20];
constexpr auto sz = std::extent<MyCArrayType>::value;
//or
//constexpr auto sz = std::extent_v<MyCArrayType>;
|
How to determine constexpr size of C array typedef without sizeof, similar to std::size
|
Is there something similar to std::size that works with the typedef of C array in C++17 or later available in STL?
To calculate a constexpr number of elements in typedef CArray defined like this:
typedef double MyCArrayType[20];
This works, but I don't want to declare a variable:
MyCArrayType arr;
constexpr size_t sz = std::size(arr);
This works, but I prefer not to specify element type:
constexpr size_t sz = sizeof(MyCArrayType) / sizeof(double);
I'd like something similar to this, if it is in STL already:
constexpr size_t sz = std::size<MyCArrayType>();
|
[
"Okay, I've found what I need. The std::extent works for my case\ntypedef double MyCArrayType[20];\n\n\nconstexpr auto sz = std::extent<MyCArrayType>::value;\n//or\n//constexpr auto sz = std::extent_v<MyCArrayType>;\n\n"
] |
[
4
] |
[] |
[] |
[
"c++",
"c++17",
"stl"
] |
stackoverflow_0074661366_c++_c++17_stl.txt
|
Q:
I want to write in an embed the position of the role compared to the role position+1 and role position -1
{name:"Position",value: interaction.guild.roles.cache.filter(role.position+1||role.position||role.position-1).map(m=>m).join(" => ")})
I am trying to map the role position +1, the role position, and the role position -1.
A:
There are a few issues.
First, <Collection>#filter's first argument is a callback function, not a comparison like the one you're passing.
Second, while what you wrote logically reads as "or", Node will interpret it as true || true || true, because each numeric role position is converted to a boolean, and any value greater than 0 is true. The proper way to do something like this would be:
interaction.guild.roles.cache.filter(filterRole =>
role.position === filterRole.position - 1 ||
role.position === filterRole.position ||
role.position === filterRole.position + 1
)
|
I want to write in an embed the position of the role compared to the role position+1 and role position -1
|
{name:"Position",value: interaction.guild.roles.cache.filter(role.position+1||role.position||role.position-1).map(m=>m).join(" => ")})
I am trying to map the role position +1, the role position, and the role position -1.
|
[
"There are a few issues..\n<Collection>#filter's First option is a callback function, not a comparison like you're doing.\nSecond what you are doing while logically saying \"or\" makes sense the way node will interpret it is true || true || true as the comparison operator will convert the numeric role position value to a boolean, which if greater than 0 will be true. The proper way to do something like this would be:\ninteraction.guild.roles.cache.filter(filterRole => \n role.position === filterRole.position - 1 ||\n role.position === filterRole.position ||\n role.position === filterRole.position + 1\n)\n\n"
] |
[
0
] |
[] |
[] |
[
"discord.js",
"roles"
] |
stackoverflow_0074660379_discord.js_roles.txt
|
Q:
3D secure authentication and Stripe
I am creating subscriptions upon completion of the Stripe Checkout session, and from what I have read, Stripe supports 3D Secure authentication on its checkout session. However, if this holds for the first time the client pays for the subscription (Stripe will ask them to enter a code on the checkout session page), how will that apply to the remaining payments in the following months? Where will the user enter the code?
A:
Assuming you are using Stripe Billing, your user will be automatically charged on recurring months. So they usually only have to complete 3DS for the initial payment. But if the card issuer requires 3DS to be fulfilled every invoice, you can configure your Stripe settings to automatically email your user to complete 3DS on a Stripe hosted page. However, if you want to write custom failure handling, you will need to add a webhook for customer.subscription.updated and check if the status is past_due.
https://stripe.com/docs/billing/subscriptions/overview#recurring-charges
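For reference, a minimal sketch of such a webhook handler in Python (assuming Flask and the stripe package; the endpoint secret and the failure handling are placeholders):
import stripe
from flask import Flask, request

app = Flask(__name__)
endpoint_secret = "whsec_..."  # hypothetical; use your real webhook signing secret

@app.route("/stripe-webhook", methods=["POST"])
def stripe_webhook():
    # verify the payload really came from Stripe
    event = stripe.Webhook.construct_event(
        request.data, request.headers["Stripe-Signature"], endpoint_secret)
    if event["type"] == "customer.subscription.updated":
        subscription = event["data"]["object"]
        if subscription["status"] == "past_due":
            pass  # custom failure handling, e.g. notify the customer
    return "", 200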
A:
In my case, just adding off_session when creating the charge for the user fixed it.
$user->charge($price, $paymentMethod, ['off_session' => true]);
Hopefully it works for you too.
|
3D secure authentication and Stripe
|
I am creating subscriptions upon completion of the Stripe Checkout session, and from what I have read, Stripe supports 3D Secure authentication on its checkout session. However, if this holds for the first time the client pays for the subscription (Stripe will ask them to enter a code on the checkout session page), how will that apply to the remaining payments in the following months? Where will the user enter the code?
|
[
"Assuming you are using Stripe Billing, your user will be automatically charged on recurring months. So they usually only have to complete 3DS for the initial payment. But if the card issuer requires 3DS to be fulfilled every invoice, you can configure your Stripe settings to automatically email your user to complete 3DS on a Stripe hosted page. However, if you want to write custom failure handling, you will need to add a webhook for customer.subscription.updated and check if the status is past_due.\nhttps://stripe.com/docs/billing/subscriptions/overview#recurring-charges\n",
"In my case, Just added off_session when you create charge about user.\n$user->charge($price, $paymentMethod, ['off_session' => true]); \n\nHopefully, it would be worked for you.\n"
] |
[
1,
0
] |
[] |
[] |
[
"stripe_payments"
] |
stackoverflow_0067004434_stripe_payments.txt
|
Q:
Powershell: How to download the latest file from a DropBox Folder?
I need to be able to create a PowerShell script that will download the latest file from a folder in Dropbox. Each day it will need to run to get the very latest file. Currently, there are hundreds of files in the folder, so I just need to download one file.
I can get connected to Dropbox with my script.
Any help will be appreciated.
I have tried a script that will download the entire folder, but that times out since there are so many files in the folder.
A:
I found the way to do this:
wget -Uri "https://www.dropbox.com/home/FolderName/$FileName?dl=1" -OutFile "c:\temp\$FileName" -Verbose
|
Powershell: How to download the latest file from a DropBox Folder?
|
I need to be able to create a PowerShell script that will download the latest file from a folder in Dropbox. Each day it will need to run to get the very latest file. Currently, there are hundreds of files in the folder, so I just need to download one file.
I can get connected to Dropbox with my script.
Any help will be appreciated.
I have tried a script that will download the entire folder, but that times out since there are so many files in the folder.
|
[
"I found the way to do this:\nwget -Uri https://www.dropbox.com/home/FolderName/$Filename?dl=1 -d -OutFile \"c:\\temp$FileName\" -Verbose\n"
] |
[
0
] |
[] |
[] |
[
"download",
"dropbox",
"powershell"
] |
stackoverflow_0074550181_download_dropbox_powershell.txt
|
Q:
Best way to create a table with one column expanding its width, and others minimal based on content
What would be the best way to have a table (I am not committed to <table> if other options are better) where:
The first column, third and fourth column width is as small as content will allow
The second column would expand to use all remaining space (not used by columns 1, 3 and 4) without forcing line returns in the content
The column width is the same for all the lines in the table
I am trying to replicate what I would get with display:flex with <div>, where one child could grow, and the other not, but I want each column to have the same width across all rows.
Visual example, table would take full screen width, column width cannot be pre-defined:
------------------------------------------------------------------------
| ID | Title | Comments | Votes Count |
| 245325 | Lorem ipsum | 5 | 2 |
| 32 | Even longer title | 0 | 1 |
------------------------------------------------------------------------
A:
Simply setting the width of columns 1, 3, and 4 to 1% will force them to shrink as small as possible while expanding the second column to fill the remaining space:
td:first-child,
td:nth-child(3),
td:nth-child(4) {
  width: 1%;
}
table,
td {
border: 1px solid #000;
}
<table style="width: 100%">
<tr>
<td>ID</td>
<td>Title</td>
<td>Comments</td>
<td>Votes</td>
</tr>
<tr>
<td>245325</td>
<td>Lorem ipsum</td>
<td>5</td>
<td>2</td>
</tr>
<tr>
<td>32</td>
<td>Even longer title</td>
<td>0</td>
<td>1</td>
</tr>
</table>
|
Best way to create a table with one column expanding its width, and others minimal based on content
|
What would be the best way to have a table (I am not committed to <table> if other options are better) where:
The first column, third and fourth column width is as small as content will allow
The second column would expand to use all remaining space (not used by columns 1, 3 and 4) without forcing line returns in the content
The column width is the same for all the lines in the table
I am trying to replicate what I would get with display:flex with <div>, where one child could grow, and the other not, but I want each column to have the same width across all rows.
Visual example, table would take full screen width, column width cannot be pre-defined:
------------------------------------------------------------------------
| ID | Title | Comments | Votes Count |
| 245325 | Lorem ipsum | 5 | 2 |
| 32 | Even longer title | 0 | 1 |
------------------------------------------------------------------------
|
[
"Simply by setting the width of columns 1, 3, and 4 to 1% will force them to shrink to be as small as possible while expanding the second column to fill the remaining space:\n\n\ntd:first-child,\ntd:nth-child(3n),\ntd:nth-child(4n) {\n width: 1%;\n}\n\ntable,\ntd {\n border: 1px solid #000;\n}\n<table style=\"width: 100%\">\n <tr>\n <td>ID</td>\n <td>Title</td>\n <td>Comments</td>\n <td>Votes</td>\n </tr>\n <tr>\n <td>245325</td>\n <td>Lorem ipsum</td>\n <td>5</td>\n <td>2</td>\n </tr>\n <tr>\n <td>32</td>\n <td>Even longer title</td>\n <td>0</td>\n <td>1</td>\n </tr>\n</table>\n\n\n\n"
] |
[
0
] |
[] |
[] |
[
"css",
"html"
] |
stackoverflow_0074661288_css_html.txt
|
Q:
DAX expression to extract value
I want to write a DAX expression that allows me to extract the contents of column B based on column A.
Column A being a filter in my report.
Regards,
A    B
ab   1
ac   2
ad   3
In this logic : if A = ab then 1 as result.
A:
Use this measure
B (ab) =
CALCULATE(
MIN('Table'[B]),
'Table'[A] = "ab"
)
|
DAX expression to extract value
|
I want to write a DAX expression that allows me to extract the contents of column B based on column A.
Column A being a filter in my report.
Regards,
A    B
ab   1
ac   2
ad   3
In this logic : if A = ab then 1 as result.
|
[
"Use this measure\nB (ab) = \nCALCULATE(\n MIN('Table'[B]),\n 'Table'[A] = \"ab\"\n)\n\n\n"
] |
[
0
] |
[] |
[] |
[
"dax",
"powerbi"
] |
stackoverflow_0074660868_dax_powerbi.txt
|
Q:
Could not find method dependencyResolutionManagement() for arguments
I'm trying to use a project which my teacher gave to me, but it shows me an error
Settings file '/Users/admin/AndroidStudioProjects/HTTPNetworking/settings.gradle' line: 1
A problem occurred evaluating settings 'HTTPNetworking'.
> Could not find method dependencyResolutionManagement() for arguments [settings_d1xerae4a210x6r7efckrwyki$_run_closure1@580a3803] on settings 'HTTPNetworking' of type org.gradle.initialization.DefaultSettings.
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.GradleScriptException: A problem occurred evaluating settings 'HTTPNetworking'.
Caused by: org.gradle.internal.metaobject.AbstractDynamicObject$CustomMessageMissingMethodException: Could not find method dependencyResolutionManagement() for arguments [settings_d1xerae4a210x6r7efckrwyki$_run_closure1@580a3803] on settings 'HTTPNetworking' of type org.gradle.initialization.DefaultSettings
at settings_d1xerae4a210x6r7efckrwyki.run(/Users/admin/AndroidStudioProjects/HTTPNetworking/settings.gradle:1)
settings.gradle contains:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
mavenCentral()
jcenter() // Warning: this repository is going to shut down soon
}
}
rootProject.name = "HTTP Networking"
include ':app'
What's wrong with it?
A:
In Gradle 7.3 and lower, the dependencyResolutionManagement method has the @Incubating annotation. To use this method in your settings.gradle or settings.gradle.kts file, you need to add the line:
enableFeaturePreview("VERSION_CATALOGS")
A:
This error can be caused by defining the dependencyResolutionManagement block in a build.gradle file instead of in the settings.gradle (which is where the doco clearly tells you to put it).
Not that I did that or anything.
Feel free to upvote this answer if you also totally did not make that obviously dumb mistake.
A:
This error can be caused by an older Gradle version; the issue was resolved for me after I upgraded the Gradle version to 7.4.2 for my project.
A:
Copy and replace the code below in the top-level build file:
// Top-level build file where you can add configuration options common to all sub-projects/modules.
buildscript {
repositories {
google()
mavenCentral()
maven { url 'https://jitpack.io' }
mavenCentral()
}
dependencies {
classpath 'com.android.tools.build:gradle:4.1.3'
// NOTE: Do not place your application dependencies here; they belong
// in the individual module build.gradle files
}
}
allprojects {
repositories {
google()
mavenCentral()
maven { url 'https://jitpack.io' }
mavenCentral()
}
}
task clean(type: Delete) {
delete rootProject.buildDir
}
and comment out these lines in settings.gradle(:):
pluginManagement {
repositories {
gradlePluginPortal()
google()
mavenCentral()
}
}
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
mavenCentral()
}
}
Note that the Gradle version should preferably be 6.6 for Gradle plugin 4.1.3.
A:
Go to gradle/gradle-wrapper.properties and change your distributionUrl
distributionUrl=https\://services.gradle.org/distributions/gradle-7.4-bin.zip
|
Could not find method dependencyResolutionManagement() for arguments
|
I'm trying to use a project which my teacher gave to me, but it shows me an error
Settings file '/Users/admin/AndroidStudioProjects/HTTPNetworking/settings.gradle' line: 1
A problem occurred evaluating settings 'HTTPNetworking'.
> Could not find method dependencyResolutionManagement() for arguments [settings_d1xerae4a210x6r7efckrwyki$_run_closure1@580a3803] on settings 'HTTPNetworking' of type org.gradle.initialization.DefaultSettings.
* Try:
Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Exception is:
org.gradle.api.GradleScriptException: A problem occurred evaluating settings 'HTTPNetworking'.
Caused by: org.gradle.internal.metaobject.AbstractDynamicObject$CustomMessageMissingMethodException: Could not find method dependencyResolutionManagement() for arguments [settings_d1xerae4a210x6r7efckrwyki$_run_closure1@580a3803] on settings 'HTTPNetworking' of type org.gradle.initialization.DefaultSettings
at settings_d1xerae4a210x6r7efckrwyki.run(/Users/admin/AndroidStudioProjects/HTTPNetworking/settings.gradle:1)
settings.gradle contains:
dependencyResolutionManagement {
repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
repositories {
google()
mavenCentral()
jcenter() // Warning: this repository is going to shut down soon
}
}
rootProject.name = "HTTP Networking"
include ':app'
What's wrong with it?
|
[
"in gradle-7.3 and lower method dependencyResolutionManagement have @Incubating annotation. To use this method in your settings.gradle or settings.gradle.kts file you need to add the line:\nenableFeaturePreview(\"VERSION_CATALOGS\")\n\n",
"This error can be caused by defining the dependencyResolutionManagement block in a build.gradle file instead of in the settings.gradle (which is where the doco clearly tells you to put it).\nNot that I did that or anything.\nFeel free to upvote this answer if you also totally did not make that obviously dumb mistake.\n",
"This error could be caused due to having an older gradle-version, the issue got resolved for me after i upgraded my gradle version to 7.4.2 for my project\n",
"Copy and replace below codes in Top-level build file :\n // Top-level build file where you can add configuration options common to all sub-projects/modules.\nbuildscript {\n repositories {\n google()\n mavenCentral()\n maven { url 'https://jitpack.io' }\n mavenCentral()\n\n }\n dependencies {\n classpath 'com.android.tools.build:gradle:4.1.3'\n\n // NOTE: Do not place your application dependencies here; they belong\n // in the individual module build.gradle files\n }\n}\n\nallprojects {\n repositories {\n google()\n mavenCentral()\n maven { url 'https://jitpack.io' }\n mavenCentral()\n\n }\n}\n\ntask clean(type: Delete) {\n delete rootProject.buildDir\n}\n\nand make comment these lines in settings.gardle(:) :\npluginManagement {\n repositories {\n gradlePluginPortal()\n google()\n mavenCentral()\n }\n}\ndependencyResolutionManagement {\n repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)\n repositories {\n google()\n mavenCentral()\n }\n}\n\nNote that Gradle Version better to be 6.6 for Gradle Plugin 4.1.3\n",
"Go to gradle/gradle-wrapper.properties and change your distributionUrl\ndistributionUrl=https\\://services.gradle.org/distributions/gradle-7.4-bin.zip\n\n"
] |
[
0,
0,
0,
0,
0
] |
[] |
[] |
[
"android",
"android_studio",
"gradle",
"java"
] |
stackoverflow_0069150086_android_android_studio_gradle_java.txt
|
Q:
How do you hook into the new Pick, Pack and Ship/WarehouseManagementSystem code
Good day
I have code that worked on the old Pick Pack and Ship screen; the code would make a couple of changes to a QR code and then send it in to Acumatica.
With the new changes in Acumatica this is not possible any more.
What is the correct way to hook into the new (version 22) process barcode code?
Originally I could do this:
using WMSBase = PX.Objects.IN.WarehouseManagementSystemGraph<PX.Objects.IN.INScanReceive, PX.Objects.IN.INScanReceiveHost, PX.Objects.IN.INRegister, PX.Objects.IN.INScanReceive.Header>;
using PX.Objects;
using PX.Objects.IN;
namespace ExtScannerCode
{
public class INScanReceiveHostExtCustomPackage : PXGraphExtension<INScanReceive, INScanReceiveHost>
{
public static bool IsActive() => true;
#region Overrides ProcessItemBarcode
//ProcessItemBarcode
public delegate void ProcessItemBarcodeDelegate(string barcode);
[PXOverride]
public virtual void ProcessItemBarcode(string barcode, ProcessItemBarcodeDelegate baseMethod)
{
baseMethod?.Invoke(barcode);
}
#endregion
#region Overrides ProcessLotSerialBarcode
//ProcessLotSerialBarcode
public delegate void ProcessLotSerialBarcodeDelegate(string barcode);
[PXOverride]
public virtual void ProcessLotSerialBarcode(string barcode, ProcessLotSerialBarcodeDelegate baseMethod)
{
baseMethod?.Invoke(barcode);
}
#endregion
#region Overrides ProcessExpireDate
//ProcessLotSerialBarcode
public delegate void ProcessExpireDateDelegate(string barcode);
[PXOverride]
public virtual void ProcessExpireDate(string barcode, ProcessLotSerialBarcodeDelegate baseMethod)
{
baseMethod?.Invoke(barcode);
}
#endregion
}
[PXProtectedAccess]
public abstract class INScanReceiveHostExtProtectedAccess : PXGraphExtension<INScanReceiveHostExtCustomPackage, INScanReceive, INScanReceiveHost>
{
[PXProtectedAccess(typeof(INScanReceive))]
protected abstract void ProcessItemBarcode(string barcode);
[PXProtectedAccess(typeof(INScanReceive))]
protected abstract void ApplyState(string state);
[PXProtectedAccess(typeof(INScanReceive))]
protected abstract void ProcessLotSerialBarcode(string barcode);
}
}
With the new layout I am a bit lost. How would I hook into the new WarehouseManagementSystem to process my barcodes?
A:
Referencing the private articles in the Acumatica Community site, you need to use an extension that has already been declared for each graph. For Pick Pack and Ship, the class definition would be
public class PickPackShipExt : PickPackShip.ScanExtension
{
}
From there, you would override the DecorateScanState function. There is an existing function in the solution library to use as an example. The code file is PX.Objects.SO\WMS\Modes\PickModes.cs.
You would inject into the state you are checking. Search for the graph you are overriding, so you can list states. For example, pick pack ship has these states:
protected override IEnumerable<ScanState<PickPackShip>> CreateStates()
{
yield return new ShipmentState();
yield return new LocationState();
yield return new InventoryItemState() { AlternateType = INPrimaryAlternateType.CPN, IsForIssue = true, SuppressModuleItemStatusCheck = true };
yield return new LotSerialState();
yield return new ExpireDateState() { IsForIssue = true };
yield return new ConfirmState();
yield return new CommandOrShipmentOnlyState();
}
So let's say we want to intercept the lot serial number barcode reader. In this example, we want to add an X in front of what is scanned.
public class PickPackShipExt : PickPackShip.ScanExtension
{
[PXOverride]
public virtual ScanState<PickPackShip> DecorateScanState(ScanState<PickPackShip> original, Func<ScanState<PickPackShip>, ScanState<PickPackShip>> base_DecorateScanState)
{
var state = base_DecorateScanState(original);
//are you in pick mode?
if (state.ModeCode == PickMode.Value)
{
//are you scanning lot serial information?
if(state is LotSerialState lotSerialState)
{
                //add some sort of validation/transformation
lotSerialState.Intercept.GetByBarcode.ByOverride((basis, barcode, del) =>
{
//call the delegate, which just trims the barcode
string newBarcode = del(barcode);
//do something else with the barcode to transform. This example, add an X to the beginning and return
newBarcode = "X" + newBarcode;
return newBarcode;
});
}
}
return state;
}
}
You can search the solution for the state, and check the functions that are called. For example, the lot serial state code is:
public class LotSerialState : EntityState<string>
{
public const string Value = "LTSR";
public class value : BqlString.Constant<value> { public value() : base(LotSerialState.Value) { } }
public override string Code => Value;
protected override string StatePrompt => Msg.Prompt;
protected override bool IsStateActive() => Basis.ItemHasLotSerial;
protected override string GetByBarcode(string barcode) => barcode.Trim();
protected override Validation Validate(string lotSerial) => Basis.IsValid<WMSScanHeader.lotSerialNbr>(lotSerial, out string error) ? Validation.Ok : Validation.Fail(error);
protected override void Apply(string lotSerial) => Basis.LotSerialNbr = lotSerial;
protected override void ReportSuccess(string lotSerial) => Basis.Reporter.Info(Msg.Ready, lotSerial);
protected override void ClearState() => Basis.LotSerialNbr = null;
[PXLocalizable]
public abstract class Msg
{
public const string Prompt = "Scan the lot/serial number.";
public const string Ready = "The {0} lot/serial number is selected.";
public const string NotSet = "The lot/serial number is not selected.";
}
}
I hope this helps everyone get their customizations working.
|
How do you hook into the new Pick, Pack and Ship/WarehouseManagementSystem code
|
Good day
I have code that worked on the old Pick Pack and Ship screen; the code would make a couple of changes to a QR code and then send it in to Acumatica.
With the new changes in Acumatica this is not possible any more.
What is the correct way to hook into the new (version 22) process barcode code?
Originally I could do this:
using WMSBase = PX.Objects.IN.WarehouseManagementSystemGraph<PX.Objects.IN.INScanReceive, PX.Objects.IN.INScanReceiveHost, PX.Objects.IN.INRegister, PX.Objects.IN.INScanReceive.Header>;
using PX.Objects;
using PX.Objects.IN;
namespace ExtScannerCode
{
public class INScanReceiveHostExtCustomPackage : PXGraphExtension<INScanReceive, INScanReceiveHost>
{
public static bool IsActive() => true;
#region Overrides ProcessItemBarcode
//ProcessItemBarcode
public delegate void ProcessItemBarcodeDelegate(string barcode);
[PXOverride]
public virtual void ProcessItemBarcode(string barcode, ProcessItemBarcodeDelegate baseMethod)
{
baseMethod?.Invoke(barcode);
}
#endregion
#region Overrides ProcessLotSerialBarcode
//ProcessLotSerialBarcode
public delegate void ProcessLotSerialBarcodeDelegate(string barcode);
[PXOverride]
public virtual void ProcessLotSerialBarcode(string barcode, ProcessLotSerialBarcodeDelegate baseMethod)
{
baseMethod?.Invoke(barcode);
}
#endregion
#region Overrides ProcessExpireDate
//ProcessLotSerialBarcode
public delegate void ProcessExpireDateDelegate(string barcode);
[PXOverride]
public virtual void ProcessExpireDate(string barcode, ProcessLotSerialBarcodeDelegate baseMethod)
{
baseMethod?.Invoke(barcode);
}
#endregion
}
[PXProtectedAccess]
public abstract class INScanReceiveHostExtProtectedAccess : PXGraphExtension<INScanReceiveHostExtCustomPackage, INScanReceive, INScanReceiveHost>
{
[PXProtectedAccess(typeof(INScanReceive))]
protected abstract void ProcessItemBarcode(string barcode);
[PXProtectedAccess(typeof(INScanReceive))]
protected abstract void ApplyState(string state);
[PXProtectedAccess(typeof(INScanReceive))]
protected abstract void ProcessLotSerialBarcode(string barcode);
}
}
With the new layout I am a bit lost. How would I hook into the new WarehouseManagementSystem to process my barcodes?
|
[
"Referencing the private articles in the Acumatica Community site, you need to use an extension that has already been declared for each graph. For Pick Pack and Ship, the class definition would be\npublic class PickPackShipExt : PickPackShip.ScanExtension\n{\n\n}\n\nFrom there, you would override the DecorateScanState function. There is an existing functionin the solution library, to show as an example. The code file is PX.Objects.SO\\WMS\\Modes\\PickModes.cs.\nYou would inject into the state you are checking. Search for the graph you are overriding, so you can list states. For example, pick pack ship has these states:\n protected override IEnumerable<ScanState<PickPackShip>> CreateStates()\n {\n yield return new ShipmentState();\n yield return new LocationState();\n yield return new InventoryItemState() { AlternateType = INPrimaryAlternateType.CPN, IsForIssue = true, SuppressModuleItemStatusCheck = true };\n yield return new LotSerialState();\n yield return new ExpireDateState() { IsForIssue = true };\n yield return new ConfirmState();\n yield return new CommandOrShipmentOnlyState();\n }\n\nSo lets say we want to interject the lot serial number barcode reader. In this example, we want to add an X in front of what is scanned.\npublic class PickPackShipExt : PickPackShip.ScanExtension\n{\n [PXOverride]\n public virtual ScanState<PickPackShip> DecorateScanState(ScanState<PickPackShip> original, Func<ScanState<PickPackShip>, ScanState<PickPackShip>> base_DecorateScanState)\n {\n var state = base_DecorateScanState(original);\n \n //are you in pick mode?\n if (state.ModeCode == PickMode.Value)\n {\n //are you scanning lot serial information?\n if(state is LotSerialState lotSerialState)\n {\n //add some sort of validation/transoformation\n lotSerialState.Intercept.GetByBarcode.ByOverride((basis, barcode, del) =>\n {\n //call the delegate, which just trims the barcode\n string newBarcode = del(barcode);\n //do something else with the barcode to transform. This example, add an X to the beginning and return\n newBarcode = \"X\" + newBarcode;\n return newBarcode;\n });\n }\n }\n\n return state;\n }\n\n}\n\nYou can search the solution for the state, and check the functions that are called. For example, the lot serial state code is:\npublic class LotSerialState : EntityState<string>\n{\n public const string Value = \"LTSR\";\n public class value : BqlString.Constant<value> { public value() : base(LotSerialState.Value) { } }\n\n public override string Code => Value;\n protected override string StatePrompt => Msg.Prompt;\n protected override bool IsStateActive() => Basis.ItemHasLotSerial;\n\n protected override string GetByBarcode(string barcode) => barcode.Trim();\n protected override Validation Validate(string lotSerial) => Basis.IsValid<WMSScanHeader.lotSerialNbr>(lotSerial, out string error) ? Validation.Ok : Validation.Fail(error);\n protected override void Apply(string lotSerial) => Basis.LotSerialNbr = lotSerial;\n protected override void ReportSuccess(string lotSerial) => Basis.Reporter.Info(Msg.Ready, lotSerial);\n protected override void ClearState() => Basis.LotSerialNbr = null;\n\n [PXLocalizable]\n public abstract class Msg\n {\n public const string Prompt = \"Scan the lot/serial number.\";\n public const string Ready = \"The {0} lot/serial number is selected.\";\n public const string NotSet = \"The lot/serial number is not selected.\";\n }\n}\n\nI hope this helps everyone get their customizations working.\n"
] |
[
0
] |
[] |
[] |
[
"acumatica"
] |
stackoverflow_0073245877_acumatica.txt
|
Q:
Unable to convert a pandas Dataframe to a list using literal_eval
I have been trying to convert a pandas Dataframe column to a list as the data in the column is being read as a str by default.
Sample data in the dataframe 'movie' column 'genres' is
[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}, {"id": 14, "name": "Fantasy"}, {"id": 878, "name": "Science Fiction"}]
The code I am writing
import ast
import pandas as pd
movie = pd.read_csv("tmdb_5000_movies.csv")
movie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))
print(type(movie['genres']))
The output I am getting is
<class 'pandas.core.series.Series'>
Really can't wrap my head around where I am going wrong.
A:
pandas.DataFrames are composed of Series objects (where a Series is simply a column). Series are container objects similar to Python lists and can actually be converted into a list by using their Series.tolist method.
ast.literal_eval is being applied to each element inside of your Series, converting each string into a Python object (here, a list of dictionaries); those objects are then stored back into a Series.
So pretty much your code is working, but if you want a plain list instead of a Series, you'll need to do the following:
import ast
import pandas as pd
movie = pd.read_csv("tmdb_5000_movies.csv")
movie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))
genres = movie['genres'].tolist()
print(genres)
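For clarity, a short sketch of what the converted data looks like, based on the sample row shown in the question:
first_movie_genres = genres[0]
print(type(genres))              # <class 'list'>
print(type(first_movie_genres))  # <class 'list'> - each element is a list of dicts
print([g["name"] for g in first_movie_genres])
# ['Action', 'Adventure', 'Fantasy', 'Science Fiction']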
|
Unable to convert a pandas Dataframe to a list using literal_eval
|
I have been trying to convert a pandas Dataframe column to a list as the data in the column is being read as a str by default.
Sample data in the dataframe 'movie' column 'genres' is
[{"id": 28, "name": "Action"}, {"id": 12, "name": "Adventure"}, {"id": 14, "name": "Fantasy"}, {"id": 878, "name": "Science Fiction"}]
The code I am writing
import ast
import pandas as pd
movie = pd.read_csv("tmdb_5000_movies.csv")
movie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))
print(type(movie['genres']))
The output I am getting is
<class 'pandas.core.series.Series'>
Really can't wrap my head around where I am going wrong.
|
[
"pandas.DataFrames are composed of Series objects (where a Series is simply a column. Series are container objects similar to Python lists and can actually be converted into a list by using their Series.tolist method.\nast.literal_eval is being applied on each element inside of your Series, converting them a string into dictionary, those dictionaries as then stored back into a Series.\nSo pretty much your code is working- but if you want a list of dictionaries instead of a Series of dictionaries, you'll need to the following:\nimport ast \nimport pandas as pd\nmovie = pd.read_csv(\"tmdb_5000_movies.csv\")\nmovie['genres'] = movie['genres'].apply(lambda x : ast.literal_eval(str(x)))\n\ngenres = movie['genres'].tolist()\nprint(genres)\n\n"
] |
[
1
] |
[] |
[] |
[
"pandas",
"python"
] |
stackoverflow_0074661236_pandas_python.txt
|
Q:
WatchDog Library is only running once
I am new to coding and Python, and I am struggling to use the watchdog library to run this data_analysis function when a file is added to a folder. While it runs, I notice that pasting this function in makes the watchdog only detect an added file once; without it, it keeps running. Anyone know why? I have tried searching online but I am endlessly confused. Also, I pasted my whole function to make it easier to read, but if you can condense it in your IDE, it should be easier to see the rest of the py file.
from tkinter import *
from tkinter import filedialog
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler
import pandas as pd
import numpy as np
class Watchdog(PatternMatchingEventHandler, Observer):
def __init__(self, path='.', patterns='*', logfunc=print):
PatternMatchingEventHandler.__init__(self, patterns)
Observer.__init__(self)
self.schedule(self, path=path, recursive=False)
self.log = logfunc
def on_created(self, event):
# This function is called when a file is created
self.log(f"hey, {event.src_path} has been created!")
def data_analysis(src_path):
readdata = pd.read_csv(event.src_path, delimiter='\t', encoding="latin1", skiprows=24)
df = pd.DataFrame(readdata)
df = df.drop(labels=0, axis=0)
df['Station']=df['Station'].astype(float)
df['Station']=df['Station'].astype(int)
df["Axial Force Occurences"] = 0
df["Axial Force Actual Value"] = pd.NaT
df["Flexion Occurences"] = 0
df["Flexion Actual Value"] = pd.NaT
df["IE Occurences"] = 0
df["IE Actual Value"] = pd.NaT
df["AP Occurences"] = 0
df["AP Actual Value"] = pd.NaT
df['Fz 1']=df['Fz 1'].astype(float)
df['Fz 1']=df['Fz 1'].astype(int)
df['VLWf']=df['VLWf'].astype(float)
df['VLWf']=df['VLWf'].astype(int)
df['FLPt']=df['FLPt'].astype(float)
# df['FLPt']=df['FLPt'].astype(int)
df['FLWf']=df['FLWf'].astype(float)
# df['FLWf']=df['FLWf'].astype(int)
df['IEPt']=df['IEPt'].astype(float)
# df['IEPt']=df['IEPt'].astype(int)
df['IEWf']=df['IEWf'].astype(float)
# df['IEWf']=df['IEWf'].astype(int)
df['APPt']=df['APPt'].astype(float)
# df['APPt']=df['APPt'].astype(int)
df['APWf']=df['APWf'].astype(float)
# df['APWf']=df['APWf'].astype(int)
data = df.loc[df['Station'] == 1, ['VLWf','Fz 1', "Axial Force Occurences", "Axial Force Actual Value",
'FLPt', 'FLWf', "Flexion Occurences", "Flexion Actual Value",
'IEPt', 'IEWf', "IE Occurences", "IE Actual Value",
'APPt', 'APWf', "AP Occurences", "AP Actual Value", ]]
tol = 3
y = int(len(data.index))
num = int(y * (3/100))
##Extract first and last rows based on tolerance, and append the first rows to the end, and the last rows to the beginning
first_rows = data.iloc[0: num]
last_rows = data.iloc[y-num: y]
##Add the last_rows to the beginning, and the first_rows to the end, all one df
data = last_rows.append(data)
data = data.append(first_rows)
##This keeps the indexing from appending, which is nice to see, but we need to change it use for loops
z = int(len(data.index))
new_index = np.linspace(start = 1, stop = z, num = z)
new_index2 = new_index.astype(int)
data2 = data.set_index(new_index2)
# To test if the tables are correct, you can call specific values in console eg: 'data['VLWf'].iloc[1]'
axoccur = []
##AXIAL FORCE OOT
for i in range(num, z-num):
val = data2['Fz 1'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,0])-0.05*2600)) and np.any(val <= ((data2.iloc[i-num: i+num,0])+0.05*2600)):
data2.at[i,'Axial Force Occurences'] = 0
else:
data2.at[i,'Axial Force Occurences'] = 1
data2.at[i,'Axial Force Actual Value'] = val
axoccur.append(i)
# print(apoccur)
##After reading the data, we need to sum the
totalaxial = data2['Axial Force Occurences'].sum()
print('The number of Axial Force values outside of the tolerance is: ' + str(totalaxial))
flexionoccur = []
##FLEXION OOT
for i in range(num, z-num):
val = data2['FLPt'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,5])-0.05*58)) and np.any(val <= ((data2.iloc[i-num: i+num,5])+0.05*58)):
data2.at[i,'Flexion Occurences'] = 0
else:
data2.at[i,'Flexion Occurences'] = 1
data2.at[i,'Flexion Actual Value'] = val
flexionoccur.append(i)
##After reading the data, we need to sum the
totalflexion = data2['Flexion Occurences'].sum()
print('The number of Flexion values outside of the tolerance is: ' + str(totalflexion))
ieoccur = []
##IE OOT
for i in range(num, z-num):
val = data2['IEPt'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,9])-0.05*5.7)) and np.any(val <= ((data2.iloc[i-num: i+num,9])+0.05*5.7)):
data2.at[i,'IE Occurences'] = 0
else:
data2.at[i,'IE Occurences'] = 1
data2.at[i,'IE Actual Value'] = val
ieoccur.append(i)
##After reading the data, we need to sum the
totalie = data2['IE Occurences'].sum()
print('The number of IE values outside of the tolerance is: ' + str(totalie))
apoccur = []
##AP OOT
for i in range(num, z-num):
val = data2['APPt'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,13])-0.05*5.2)) and np.any(val <= ((data2.iloc[i-num: i+num,13])+0.05*5.2)):
data2.at[i,'IE Occurences'] = 0
else:
data2.at[i,'AP Occurences'] = 1
data2.at[i,'AP Actual Value'] = val
apoccur.append(i)
##After reading the data, we need to sum the
totalap = data2['AP Occurences'].sum()
print('The number of AP values outside of the tolerance is: ' + str(totalap))
data_analysis(event.src_path)
def on_deleted(self, event):
# This function is called when a file is deleted
self.log(f"what the f**k! Someone deleted {event.src_path}!")
def on_modified(self, event):
# This function is called when a file is modified
self.log(f"hey buddy, {event.src_path} has been modified")
def on_moved(self, event):
# This function is called when a file is moved
self.log(f"ok ok ok, someone moved {event.src_path} to {event.dest_path}")
class GUI:
def __init__(self):
self.watchdog = None
self.watch_path = '.'
self.root = Tk()
self.messagebox = Text(width=80, height=10)
self.messagebox.pack()
frm = Frame(self.root)
Button(frm, text='Browse', command=self.select_path).pack(side=LEFT)
Button(frm, text='Start Watchdog', command=self.start_watchdog).pack(side=RIGHT)
Button(frm, text='Stop Watchdog', command=self.stop_watchdog).pack(side=RIGHT)
# Button(frm, text='Excel', command=self.excelexport)pack(side=LEFT)
frm.pack(fill=X, expand=1)
self.root.mainloop()
def start_watchdog(self):
if self.watchdog is None:
self.watchdog = Watchdog(path=self.watch_path, logfunc=self.log)
self.watchdog.start()
self.log('Watchdog started')
else:
self.log('Watchdog already started')
def stop_watchdog(self):
if self.watchdog:
self.watchdog.stop()
self.watchdog = None
self.log('Watchdog stopped')
else:
self.log('Watchdog is not running')
def select_path(self):
path = filedialog.askdirectory()
if path:
self.watch_path = path
self.log(f'Selected path: {path}')
def log(self, message):
self.messagebox.insert(END, f'{message}\n')
self.messagebox.see(END)
if __name__ == '__main__':
GUI()
A:
import watchdog.events
import watchdog.observers

class Handler(watchdog.events.PatternMatchingEventHandler):
def __init__(self):
watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['*.pdf'],
ignore_patterns = None,
ignore_directories = False,
case_sensitive = False)
def on_created(self, event):
print(f"File was created at {event.src_path}")
OCRscript(self, event)
def on_deleted(self, event):
print(f"File was deleted at {event.src_path}")
event_handler = Handler()
observer = watchdog.observers.Observer()
observer.schedule(event_handler, "C://Users//Installer//Desktop//tesseract test",
recursive = False)
observer.start()
observer.join()
This is the code I have been using to have a continually running watchdog. I call my other functions from within the class to ensure that the watchdog continues running. I have the functions I'm calling defined outside of the class just for debugging purposes. It helped me figure out which step was broken.
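A related pattern worth noting (a sketch under the assumption that the heavy data_analysis work is what blocks the observer; it is not taken from the code above): watchdog invokes handler callbacks on the observer thread, so handing long-running work off to a separate thread keeps the observer free to detect the next file.
import threading
from watchdog.events import PatternMatchingEventHandler

def data_analysis(src_path):
    print(f"analyzing {src_path}")  # placeholder for the heavy pandas work

class NonBlockingHandler(PatternMatchingEventHandler):
    def on_created(self, event):
        print(f"hey, {event.src_path} has been created!")
        # a daemon thread lets on_created return immediately,
        # so the observer keeps dispatching new events
        threading.Thread(target=data_analysis, args=(event.src_path,), daemon=True).start()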
|
WatchDog Library is only running once
|
I am new to coding and Python, and I am struggling to use the watchdog library to run this data_analysis function when a file is added to a folder. While it runs, I notice that pasting this function in makes the watchdog only detect an added file once; without it, it keeps running. Anyone know why? I have tried searching online but I am endlessly confused. Also, I pasted my whole function to make it easier to read, but if you can condense it in your IDE, it should be easier to see the rest of the py file.
from tkinter import *
from tkinter import filedialog
from watchdog.observers import Observer
from watchdog.events import PatternMatchingEventHandler
import pandas as pd
import numpy as np
class Watchdog(PatternMatchingEventHandler, Observer):
def __init__(self, path='.', patterns='*', logfunc=print):
PatternMatchingEventHandler.__init__(self, patterns)
Observer.__init__(self)
self.schedule(self, path=path, recursive=False)
self.log = logfunc
def on_created(self, event):
# This function is called when a file is created
self.log(f"hey, {event.src_path} has been created!")
def data_analysis(src_path):
readdata = pd.read_csv(event.src_path, delimiter='\t', encoding="latin1", skiprows=24)
df = pd.DataFrame(readdata)
df = df.drop(labels=0, axis=0)
df['Station']=df['Station'].astype(float)
df['Station']=df['Station'].astype(int)
df["Axial Force Occurences"] = 0
df["Axial Force Actual Value"] = pd.NaT
df["Flexion Occurences"] = 0
df["Flexion Actual Value"] = pd.NaT
df["IE Occurences"] = 0
df["IE Actual Value"] = pd.NaT
df["AP Occurences"] = 0
df["AP Actual Value"] = pd.NaT
df['Fz 1']=df['Fz 1'].astype(float)
df['Fz 1']=df['Fz 1'].astype(int)
df['VLWf']=df['VLWf'].astype(float)
df['VLWf']=df['VLWf'].astype(int)
df['FLPt']=df['FLPt'].astype(float)
# df['FLPt']=df['FLPt'].astype(int)
df['FLWf']=df['FLWf'].astype(float)
# df['FLWf']=df['FLWf'].astype(int)
df['IEPt']=df['IEPt'].astype(float)
# df['IEPt']=df['IEPt'].astype(int)
df['IEWf']=df['IEWf'].astype(float)
# df['IEWf']=df['IEWf'].astype(int)
df['APPt']=df['APPt'].astype(float)
# df['APPt']=df['APPt'].astype(int)
df['APWf']=df['APWf'].astype(float)
# df['APWf']=df['APWf'].astype(int)
data = df.loc[df['Station'] == 1, ['VLWf','Fz 1', "Axial Force Occurences", "Axial Force Actual Value",
'FLPt', 'FLWf', "Flexion Occurences", "Flexion Actual Value",
'IEPt', 'IEWf', "IE Occurences", "IE Actual Value",
'APPt', 'APWf', "AP Occurences", "AP Actual Value", ]]
tol = 3
y = int(len(data.index))
num = int(y * (3/100))
##Extract first and last rows based on tolerance, and append the first rows to the end, and the last rows to the beginning
first_rows = data.iloc[0: num]
last_rows = data.iloc[y-num: y]
##Add the last_rows to the beginning, and the first_rows to the end, all one df
data = last_rows.append(data)
data = data.append(first_rows)
##This keeps the indexing from appending, which is nice to see, but we need to change it use for loops
z = int(len(data.index))
new_index = np.linspace(start = 1, stop = z, num = z)
new_index2 = new_index.astype(int)
data2 = data.set_index(new_index2)
# To test if the tables are correct, you can call specific values in console eg: 'data['VLWf'].iloc[1]'
axoccur = []
##AXIAL FORCE OOT
for i in range(num, z-num):
val = data2['Fz 1'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,0])-0.05*2600)) and np.any(val <= ((data2.iloc[i-num: i+num,0])+0.05*2600)):
data2.at[i,'Axial Force Occurences'] = 0
else:
data2.at[i,'Axial Force Occurences'] = 1
data2.at[i,'Axial Force Actual Value'] = val
axoccur.append(i)
# print(apoccur)
##After reading the data, we need to sum the
totalaxial = data2['Axial Force Occurences'].sum()
print('The number of Axial Force values outside of the tolerance is: ' + str(totalaxial))
flexionoccur = []
##FLEXION OOT
for i in range(num, z-num):
val = data2['FLPt'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,5])-0.05*58)) and np.any(val <= ((data2.iloc[i-num: i+num,5])+0.05*58)):
data2.at[i,'Flexion Occurences'] = 0
else:
data2.at[i,'Flexion Occurences'] = 1
data2.at[i,'Flexion Actual Value'] = val
flexionoccur.append(i)
        ##After reading the data, we need to sum the out-of-tolerance occurrences
totalflexion = data2['Flexion Occurences'].sum()
print('The number of Flexion values outside of the tolerance is: ' + str(totalflexion))
ieoccur = []
##IE OOT
for i in range(num, z-num):
val = data2['IEPt'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,9])-0.05*5.7)) and np.any(val <= ((data2.iloc[i-num: i+num,9])+0.05*5.7)):
data2.at[i,'IE Occurences'] = 0
else:
data2.at[i,'IE Occurences'] = 1
data2.at[i,'IE Actual Value'] = val
ieoccur.append(i)
        ##After reading the data, we need to sum the out-of-tolerance occurrences
totalie = data2['IE Occurences'].sum()
print('The number of IE values outside of the tolerance is: ' + str(totalie))
apoccur = []
##AP OOT
for i in range(num, z-num):
val = data2['APPt'].iloc[i]
extract_data = data2.iloc[1:z, 0]
xval = data2.iloc[i-num: i+num,0]-0.5*2600
if np.any(val >= ((data2.iloc[i-num: i+num,13])-0.05*5.2)) and np.any(val <= ((data2.iloc[i-num: i+num,13])+0.05*5.2)):
            data2.at[i,'AP Occurences'] = 0
else:
data2.at[i,'AP Occurences'] = 1
data2.at[i,'AP Actual Value'] = val
apoccur.append(i)
        ##After reading the data, we need to sum the out-of-tolerance occurrences
totalap = data2['AP Occurences'].sum()
print('The number of AP values outside of the tolerance is: ' + str(totalap))
data_analysis(event.src_path)
def on_deleted(self, event):
# This function is called when a file is deleted
self.log(f"what the f**k! Someone deleted {event.src_path}!")
def on_modified(self, event):
# This function is called when a file is modified
self.log(f"hey buddy, {event.src_path} has been modified")
def on_moved(self, event):
# This function is called when a file is moved
self.log(f"ok ok ok, someone moved {event.src_path} to {event.dest_path}")
class GUI:
def __init__(self):
self.watchdog = None
self.watch_path = '.'
self.root = Tk()
self.messagebox = Text(width=80, height=10)
self.messagebox.pack()
frm = Frame(self.root)
Button(frm, text='Browse', command=self.select_path).pack(side=LEFT)
Button(frm, text='Start Watchdog', command=self.start_watchdog).pack(side=RIGHT)
Button(frm, text='Stop Watchdog', command=self.stop_watchdog).pack(side=RIGHT)
        # Button(frm, text='Excel', command=self.excelexport).pack(side=LEFT)
frm.pack(fill=X, expand=1)
self.root.mainloop()
def start_watchdog(self):
if self.watchdog is None:
self.watchdog = Watchdog(path=self.watch_path, logfunc=self.log)
self.watchdog.start()
self.log('Watchdog started')
else:
self.log('Watchdog already started')
def stop_watchdog(self):
if self.watchdog:
self.watchdog.stop()
self.watchdog = None
self.log('Watchdog stopped')
else:
self.log('Watchdog is not running')
def select_path(self):
path = filedialog.askdirectory()
if path:
self.watch_path = path
self.log(f'Selected path: {path}')
def log(self, message):
self.messagebox.insert(END, f'{message}\n')
self.messagebox.see(END)
if __name__ == '__main__':
GUI()
|
[
"class Handler(watchdog.events.PatternMatchingEventHandler):\n def __init__(self):\n watchdog.events.PatternMatchingEventHandler.__init__(self, patterns=['*.pdf'],\n ignore_patterns = None,\n ignore_directories = False,\n case_sensitive = False)\n def on_created(self, event):\n print(f\"File was created at {event.src_path}\")\n OCRscript(self, event)\n def on_deleted(self, event):\n print(f\"File was deleted at {event.src_path}\")\n\nevent_handler = Handler()\nobserver = watchdog.observers.Observer()\nobserver.schedule(event_handler, \"C://Users//Installer//Desktop//tesseract test\",\n recursive = False)\nobserver.start()\nobserver.join()\n\nThis is the code I have been using to have a continually running watchdog. I call my other funcitions from within the class to ensure that the watchdog continues running. I have the functions I'm calling defined outside of the class just for debugging purposes. It helped me figure out which step was broken.\n"
] |
[
0
] |
[] |
[] |
[
"python",
"python_watchdog",
"tkinter"
] |
stackoverflow_0074480624_python_python_watchdog_tkinter.txt
|
Q:
create new column based on sum of another column and plot
I have a large data frame called data_frame with 3 columns PRE, STATUS, and CHR that look like this:
PRE STATUS CHR
1_752566 GAINED 1
1_776546 LOST 1
1_832918 NA 1
1_842013 LOST 1
1_846864 GAINED 1
11_8122943 NA 11
11_8188699 GAINED 11
11_8321128 NA 11
23_95137734 NA 23
23_95146814 GAINED 23
From here I'd like to group CHR by number and then find the sum of each group. If possible, I would like a new data table (let's call it TOTAL) showing the sums of each group number like this:
CHR TOTAL_SUM
1 5
11 3
23 2
from here I would like to create another data table called BY_STATUS with 3 columns CHR, 'SUM_GAINED', 'SUM_LOST', where 'SUM_GAINED' is the count of rows for each CHR whose 'STATUS' is 'GAINED' and 'SUM_LOST' is the count of rows for each CHR whose 'STATUS' is 'LOST', like this:
CHR SUM_GAINED SUM_LOST
1 2 2
11 1 0
23 1 0
I would then create two different plots:
1st plot would be for the data table TOTAL to visualize the sums of each number where my x-axis is NUM and my y-axis is SUM
2nd plot would be for the data table BY_STATUS to visualize the different frequencies of each number in CHR based on both SUM_GAINED and SUM_LOST where my x-axis is CHR and my y-axis is both SUM_GAINED and SUM_LOST. Maybe a side-by-side comparison of the two different y-axis?
A:
We can convert the column to logical and count (sum) the TRUE values for GAINED and LOST after grouping by 'CHR'
library(dplyr)
df %>%
group_by(CHR) %>%
summarise(SUM_GAINED = sum(STATUS == "GAINED", na.rm = TRUE),
SUM_LOST = sum(STATUS == "LOST", na.rm =TRUE))
-output
# A tibble: 3 × 3
CHR SUM_GAINED SUM_LOST
<int> <int> <int>
1 1 2 2
2 11 1 0
3 23 1 0
Or use pivot_wider
library(tidyr)
df %>%
drop_na() %>%
pivot_wider(id_cols = CHR, names_from = STATUS,
values_from = STATUS, values_fn = length, values_fill = 0)
# A tibble: 3 × 3
CHR GAINED LOST
<int> <int> <int>
1 1 2 2
2 11 1 0
3 23 1 0
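The TOTAL table from the question is just a per-CHR row count, so (assuming the same df) a minimal dplyr sketch is:
library(dplyr)
df %>%
  count(CHR, name = "TOTAL_SUM")

-output
# A tibble: 3 × 2
    CHR TOTAL_SUM
  <int>     <int>
1     1         5
2    11         3
3    23         2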
For plotting, it may be better to have it in long format with ggplot
library(ggplot2)
df %>%
drop_na(STATUS) %>%
count(CHR, STATUS) %>%
ggplot(aes(x = CHR, y = n, fill = STATUS)) +
geom_col(position="dodge")
With base R, this can be done using table and barplot
barplot(table(df[-1]), beside = TRUE, legend = TRUE)
|
create new column based on sum of another column and plot
|
I have a large data frame called data_frame with 3 columns PRE, STATUS, and CHR that look like this:
PRE STATUS CHR
1_752566 GAINED 1
1_776546 LOST 1
1_832918 NA 1
1_842013 LOST 1
1_846864 GAINED 1
11_8122943 NA 11
11_8188699 GAINED 11
11_8321128 NA 11
23_95137734 NA 23
23_95146814 GAINED 23
From here I'd like to group CHR by number and then find the sum of each group. If possible, I would like a new data table (let's call it TOTAL) showing the sums of each group number like this:
CHR TOTAL_SUM
1 5
11 3
23 2
from here I would like to create another data table called BY_STATUS with 3 columns CHR, 'SUM_GAINED', 'SUM_LOST', where 'SUM_GAINED' is the count of rows for each CHR whose 'STATUS' is 'GAINED' and 'SUM_LOST' is the count of rows for each CHR whose 'STATUS' is 'LOST', like this:
CHR SUM_GAINED SUM_LOST
1 2 2
11 1 0
23 1 0
I would then create two different plots:
1st plot would be for the data table TOTAL to visualize the sums of each number where my x-axis is NUM and my y-axis is SUM
2nd plot would be for the data table BY_STATUS to visualize the different frequencies of each number in CHR based on both SUM_GAINED and SUM_LOST where my x-axis is CHR and my y-axis is both SUM_GAINED and SUM_LOST. Maybe a side-by-side comparison of the two different y-axis?
|
[
"We can convert the column to logical and count (sum) the TRUE values for GAINED and LOST after grouping by 'CHR'\nlibrary(dplyr)\ndf %>%\n group_by(CHR) %>%\n summarise(SUM_GAINED = sum(STATUS == \"GAINED\", na.rm = TRUE),\n SUM_LOST = sum(STATUS == \"LOST\", na.rm =TRUE))\n\n-output\n# A tibble: 3 × 3\n CHR SUM_GAINED SUM_LOST\n <int> <int> <int>\n1 1 2 2\n2 11 1 0\n3 23 1 0\n\n\nOr use pivot_wider\nlibrary(tidyr)\ndf %>% \n drop_na() %>% \n pivot_wider(id_cols = CHR, names_from = STATUS, \n values_from = STATUS, values_fn = length, values_fill = 0)\n# A tibble: 3 × 3\n CHR GAINED LOST\n <int> <int> <int>\n1 1 2 2\n2 11 1 0\n3 23 1 0\n\n\nFor plotting, it may be better to have it in long format with ggplot\nlibrary(ggplot2)\ndf %>%\n drop_na(STATUS) %>% \n count(CHR, STATUS) %>%\n ggplot(aes(x = CHR, y = n, fill = STATUS)) + \n geom_col(position=\"dodge\")\n\n\nWith base R, this can be done using table and barplot\nbarplot(table(df[-1]), beside = TRUE, legend = TRUE)\n\n"
] |
[
1
] |
[] |
[] |
[
"dataframe",
"datatable",
"ggplot2",
"mutate",
"r"
] |
stackoverflow_0074661422_dataframe_datatable_ggplot2_mutate_r.txt
|
Q:
Docker-Compose: connect to host when host.docker.internal doesn't work
My eventual goal here is to allow a container running a FastAPI app to communicate with a MySQL database on the host.
First I tried using host.docker.internal
Dockerfile
FROM debian:latest
RUN apt update && apt install -y \
netcat \
iputils-ping
CMD echo "tailing /dev/null" && tail -f /dev/null
docker-compose.yml
version: "3.2"
services:
test:
build:
context: "."
extra_hosts:
- "host.docker.internal:host-gateway"
Expected behavior: ping works, nc -vz works
In particular, with nc -vz I'd expect to see something like:
root@9fe8de220d44:/# nc -vz host.docker.internal 80
Connection to host.docker.internal (172.17.0.1) port 80 (tcp) succeeded!
Actual behavior: ping works, nc -vz doesn't
root@5981bcfbf598:/# ping host.docker.internal
PING host.docker.internal (172.17.0.1) 56(84) bytes of data.
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=2 ttl=64 time=0.067 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=3 ttl=64 time=0.068 ms
^C
--- host.docker.internal ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2013ms
rtt min/avg/max/mdev = 0.067/0.071/0.079/0.005 ms
root@5981bcfbf598:/# nc -vz host.docker.internal 80
nc: connect to host.docker.internal (172.17.0.1) port 80 (tcp) failed: Connection refused
On the host
I have apache running on port 80
$ netstat -tulpn
...
tcp6 0 0 :::80 :::* LISTEN 1258/apache2
Additionally, my firewall is configured to allow all inbound requests to port 80:
firewall says http port 80 allows all ipv4 and ipv6
OS and docker versions:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
$ docker --version
Docker version 20.10.21, build baeda1f
Manually specifying the network also fails
After host.docker.internal failed, I followed the instructions for connecting from a container to a Linux host (Ubuntu 18.04 in my case) here using a manually specified network: https://stackoverflow.com/a/70725882
Here's my setup:
Dockerfile
FROM debian:latest
RUN apt update && apt install -y \
netcat \
iputils-ping
CMD echo "tailing /dev/null" && tail -f /dev/null
docker-compose.yml
version: "3.2"
networks:
test:
name: test-network
attachable: true
ipam:
driver: default
config:
- subnet: 172.42.0.0/16
ip_range: 172.42.5.0/24
gateway: 172.42.0.1
services:
test:
build:
context: "."
networks:
- test
Confirm gateway
$ docker inspect test-test-1 -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}'
172.42.0.1
ping works
root@07f81c211a0c:/# ping 172.42.0.1
PING 172.42.0.1 (172.42.0.1) 56(84) bytes of data.
64 bytes from 172.42.0.1: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 172.42.0.1: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 172.42.0.1: icmp_seq=3 ttl=64 time=0.065 ms
Expected behavior: nc -vz succeeds
From the instructions at https://stackoverflow.com/a/70725882:
root@9fe8de220d44:/# nc -vz 172.18.0.1 80
Connection to 172.18.0.1 80 port [tcp/http] succeeded!
Actual behavior: nc -vz fails
root@07f81c211a0c:/# nc -vz 172.42.0.1 80
nc: connect to 172.42.0.1 port 80 (tcp) failed: Connection refused
What am I doing wrong?
Thanks in advance for your help!
A:
Update
The answer below works for the general problem of connecting to the host, but there's a much simpler solution if you're trying to expose a particular service that has a socket: mount the socket! For example, if you want to connect to a local mysql, then in your docker-compose.yml you can simply add:
volumes:
- /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock:ro
to whichever service needs to communicate with the host mysql. Easy peasy. If you're using the url syntax for mysql, you then specify that you want to use the unix socket, eg, for sqlalchemy:
mysql+pymysql://user:passwd@host/db?unix_socket=/var/run/mysqld/mysqld.sock
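For context, a minimal sketch of where that mount goes in the compose file from this question (everything else unchanged):
version: "3.2"

services:
  test:
    build:
      context: "."
    volumes:
      - /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock:ro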
Original answer
I was able to solve this problem thanks to https://forums.docker.com/t/how-to-connect-from-docker-container-to-the-host/123318.
I ran
$ ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet XXX.XXX.XXX.XXX netmask ...
and copied the address XXX.XXX.XXX.XXX, and replaced host-gateway with it in my docker-compose.yml:
version: "3.2"
services:
test:
build:
context: "."
extra_hosts:
- "host.docker.internal:XXX.XXX.XXX.XXX"
Now, from within the container:
root@f5836a37815a:/# nc -vz host.docker.internal 80
Connection to host.docker.internal (104.248.221.215) 80 port [tcp/*] succeeded!
I'm not sure why all the advice to use host-gateway didn't work. I was under the impression it should work for my version of docker (compose).
$ docker version
Client: Docker Engine - Community
Version: 20.10.21
...
$ docker compose version
Docker Compose version v2.12.2
|
Docker-Compose: connect to host when host.docker.internal doesn't work
|
My eventual goal here is to allow a container running a FastAPI app to communicate with a MySQL database on the host.
First I tried using host.docker.internal
Dockerfile
FROM debian:latest
RUN apt update && apt install -y \
netcat \
iputils-ping
CMD echo "tailing /dev/null" && tail -f /dev/null
docker-compose.yml
version: "3.2"
services:
test:
build:
context: "."
extra_hosts:
- "host.docker.internal:host-gateway"
Expected behavior: ping works, nc -vz works
In particular, with nc -vz I'd expect to see something like:
root@9fe8de220d44:/# nc -vz host.docker.internal 80
Connection to host.docker.internal (172.17.0.1) port 80 (tcp) succeeded!
Actual behavior: ping works, nc -vz doesn't
root@5981bcfbf598:/# ping host.docker.internal
PING host.docker.internal (172.17.0.1) 56(84) bytes of data.
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=1 ttl=64 time=0.079 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=2 ttl=64 time=0.067 ms
64 bytes from host.docker.internal (172.17.0.1): icmp_seq=3 ttl=64 time=0.068 ms
^C
--- host.docker.internal ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2013ms
rtt min/avg/max/mdev = 0.067/0.071/0.079/0.005 ms
root@5981bcfbf598:/# nc -vz host.docker.internal 80
nc: connect to host.docker.internal (172.17.0.1) port 80 (tcp) failed: Connection refused
On the host
I have apache running on port 80
$ netstat -tulpn
...
tcp6 0 0 :::80 :::* LISTEN 1258/apache2
Additionally, my firewall is configured to allow all inbound requests to port 80:
firewall says http port 80 allows all ipv4 and ipv6
OS and docker versions:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.6 LTS
Release: 18.04
Codename: bionic
$ docker --version
Docker version 20.10.21, build baeda1f
Manually specifying the network also fails
After host.docker.internal failed, I followed the instructions for connecting from a container to a Linux host (Ubuntu 18.04 in my case) here using a manually specified network: https://stackoverflow.com/a/70725882
Here's my setup:
Dockerfile
FROM debian:latest
RUN apt update && apt install -y \
netcat \
iputils-ping
CMD echo "tailing /dev/null" && tail -f /dev/null
docker-compose.yml
version: "3.2"
networks:
test:
name: test-network
attachable: true
ipam:
driver: default
config:
- subnet: 172.42.0.0/16
ip_range: 172.42.5.0/24
gateway: 172.42.0.1
services:
test:
build:
context: "."
networks:
- test
Confirm gateway
$ docker inspect test-test-1 -f '{{range .NetworkSettings.Networks}}{{.Gateway}}{{end}}'
172.42.0.1
ping works
root@07f81c211a0c:/# ping 172.42.0.1
PING 172.42.0.1 (172.42.0.1) 56(84) bytes of data.
64 bytes from 172.42.0.1: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 172.42.0.1: icmp_seq=2 ttl=64 time=0.068 ms
64 bytes from 172.42.0.1: icmp_seq=3 ttl=64 time=0.065 ms
Expected behavior: nc -vz succeeds
From the instructions at https://stackoverflow.com/a/70725882:
root@9fe8de220d44:/# nc -vz 172.18.0.1 80
Connection to 172.18.0.1 80 port [tcp/http] succeeded!
Actual behavior: nc -vz fails
root@07f81c211a0c:/# nc -vz 172.42.0.1 80
nc: connect to 172.42.0.1 port 80 (tcp) failed: Connection refused
What am I doing wrong?
Thanks in advance for your help!
|
[
"Update\nThe answer below works for the general problem of connecting to the host, but there's a much simpler solution if you're trying to expose a particular service that has a socket: mount the socket! For example, if you want to connect to a local mysql, then in your docker-compose.yml you can simply add:\n volumes:\n - /var/run/mysqld/mysqld.sock:/var/run/mysqld/mysqld.sock:ro\n\nto whichever service needs to communicate with the host mysql. Easy peasy. If you're using the url syntax for mysql, you then specify that you want to use the unix socket, eg, for sqlalchemy:\nmysql+pymysql://user:passwd@host/db?unix_socket=/var/run/mysqld/mysqld.sock\n\nOriginal answer\nI was able to solve this problem thanks to https://forums.docker.com/t/how-to-connect-from-docker-container-to-the-host/123318.\nI ran\n$ ifconfig\neth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500\n inet XXX.XXX.XXX.XXX netmask ...\n\nand copied the address XXX.XXX.XXX.XXX, and replaced host-gateway with it in my docker-compose.yml:\nversion: \"3.2\"\n\nservices:\n test:\n build:\n context: \".\"\n extra_hosts:\n - \"host.docker.internal:XXX.XXX.XXX.XXX\"\n\nNow, from within the container:\nroot@f5836a37815a:/# nc -vz host.docker.internal 80\nConnection to host.docker.internal (104.248.221.215) 80 port [tcp/*] succeeded!\n\nI'm not sure why all the advice to use host-gateway didn't work. I was under the impression it should work for my version of docker (compose).\n$ docker version\nClient: Docker Engine - Community\n Version: 20.10.21\n...\n$ docker compose version\nDocker Compose version v2.12.2\n\n"
] |
[
1
] |
[] |
[] |
[
"docker",
"docker_compose",
"localhost",
"mysql",
"ubuntu"
] |
stackoverflow_0074527573_docker_docker_compose_localhost_mysql_ubuntu.txt
|
Q:
How to define and call functions in javascript in elevatorsaga
How to define and call functions in javascript in elevatorsaga?
Neither this
var testFunction = function() {
console.log("testFunction");
},
{
init: function(elevators, floors) {
testFunction();
},
update: function(dt, elevators, floors) {}
}
nor this
function testFunction() {
console.log("testFunction");
},
{
init: function(elevators, floors) {
testFunction();
},
update: function(dt, elevators, floors) {}
}
nor this
{
var testFunction = function() {
console.log("testFunction");
},
init: function(elevators, floors) {
testFunction();
},
update: function(dt, elevators, floors) {}
}
is acceptable in the context of https://play.elevatorsaga.com/#challenge=4
|
How to define and call functions in javascript in elevatorsaga
|
How to define and call functions in javascript in elevatorsaga?
Neither this
var testFunction = function() {
console.log("testFunction");
},
{
init: function(elevators, floors) {
testFunction();
},
update: function(dt, elevators, floors) {}
}
nor this
function testFunction() {
console.log("testFunction");
},
{
init: function(elevators, floors) {
testFunction();
},
update: function(dt, elevators, floors) {}
}
nor this
{
var testFunction = function() {
console.log("testFunction");
},
init: function(elevators, floors) {
testFunction();
},
update: function(dt, elevators, floors) {}
}
is acceptable in the context of https://play.elevatorsaga.com/#challenge=4
|
[] |
[] |
[
"Define it like testFunction: function() { and call it with this.testFunction()\n{\n testFunction: function() {\n console.log(\"testFunction\");\n },\n init: function(elevators, floors) {\n console.log(\"init\");\n this.testFunction();\n },\n update: function(dt, elevators, floors) { }\n}\n\n"
] |
[
-4
] |
[
"javascript"
] |
stackoverflow_0074661459_javascript.txt
|
Q:
How do I make a post request work after using get_queryset?
I would like to have a list of a user's devices with a checkbox next to each one. The user can select the devices they want to view on a map by clicking on the corresponding checkboxes then clicking a submit button. I am not including the mapping portion in this question, because I plan to work that out later. The step that is causing problems right now is trying to use a post request.
To have the user only be able to see the devices that are assigned to them I am using the get_queryset method. I have seen a couple questions regarding using a post request along with the get_queryset method, but they do not seem to work for me. Using the view below, when I select a checkbox and then click submit, it looks like a post request happens followed immediately by a get request and the result is my table is empty when the page loads.
Portion of my views.py file:
class DeviceListView(LoginRequiredMixin, ListView):
model = Device
template_name = 'tracking/home.html'
context_object_name = 'devices'
def get_queryset(self, *args, **kwargs):
return super().get_queryset(*args, **kwargs).filter(who_added=self.request.user)
def post(self, request):
form = SelectForm(request.POST)
return render(request, self.template_name, {'form': form})
Portions of my template:
<div class="table-responsive">
<form action="" method="post" name="devices_to_check">
{% csrf_token %}
<table id="registered_devices"
class="table table-striped table-bordered table-hover table-sm"
style="width:100%; border: 1px solid black; font-size: 10px">
<thead class="table-primary"
style="text-align:center; border: 1px solid black">
<tr>
<th>IMEI</th>
<th>Label</th>
<th>Device Type</th>
<th>Group</th>
<th>Subgroup</th>
<th>Description</th>
<th>Display</th>
</tr>
</thead>
<tbody>
{% for device in devices %}
<tr>
<td>{{device.imei}}</td>
<td>{{device.label}}</td>
<td style="text-transform:uppercase">{{device.device_type}}</td>
<td>{{device.main_group}}</td>
<td>{{device.subgroup}}</td>
<td>{{device.description}}</td>
<td style="text-align:center">
<a href="{% url 'device-detail' device.id %}">i </a>
<input type="checkbox" id="{{device.imei}}" name="chk"
value="{{device.imei}}" onclick="show_info_icon()"
class="chckvalues"/>
</td>
</tr>
{% endfor %}
</tbody>
</table>
<input type="button" class="btn btn-outline-info" onclick='selects()'
value="Select All"/>
<input type="button" class="btn btn-outline-info" onclick='deSelect()'
value="Deselect All"/>
<button type="submit" class="btn btn-outline-info">Show on Map</button>
</form>
</div>
A:
I think you're better off using a function-based view with the typical "if request.method == 'POST'" logic; this isn't really what the generic ListView is for.
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

from .models import Device  # assumed app-local import; adjust to your project

@login_required
def device_list_view(request):
    context = {}
    if request.method == 'POST':
        form = SelectForm(request.POST)
        if form.is_valid():
            # if a request was posted and is valid, do your thing:
            # maybe your thing is to give your map view the device id
            # and render it
            # assuming your form has a field called device_id:
            context['device_id'] = form.cleaned_data['device_id']
            return render(request, 'tracking/device_map.html', context)

    # either request method is not post or the form wasn't valid
    user_devices = Device.objects.filter(who_added=request.user)
    device_forms = []
    for i, device in enumerate(user_devices):
        form = SelectForm(instance=device, prefix=i)
        device_forms.append(form)
    context['device_forms'] = device_forms
    return render(request, 'tracking/home.html', context)
The key part here is the if-else logic of checking whether the request was POST or not; you'll need to tweak it based on exactly what you want to happen. It sounds like what you're really doing is creating one form from which you're pulling multiple device_ids. Reading up on Django form prefixes will clarify what's going on with the prefix bit and should help you decide whether to do things that way or with one form like you're trying.
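The answer assumes a SelectForm; a minimal sketch of one (the device_id field name is hypothetical, and the instance= call above would instead require a ModelForm):
from django import forms

class SelectForm(forms.Form):
    # Hypothetical hidden field carrying the id of a checked device
    device_id = forms.CharField(widget=forms.HiddenInput)

With a plain Form like this, the posted value is read via form.cleaned_data['device_id'] once is_valid() succeeds.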
|
How do I make a post request work after using get_queryset?
|
I would like to have a list of a user's devices with a checkbox next to each one. The user can select the devices they want to view on a map by clicking on the corresponding checkboxes then clicking a submit button. I am not including the mapping portion in this question, because I plan to work that out later. The step that is causing problems right now is trying to use a post request.
To have the user only be able to see the devices that are assigned to them I am using the get_queryset method. I have seen a couple questions regarding using a post request along with the get_queryset method, but they do not seem to work for me. Using the view below, when I select a checkbox and then click submit, it looks like a post request happens followed immediately by a get request and the result is my table is empty when the page loads.
Portion of my views.py file:
class DeviceListView(LoginRequiredMixin, ListView):
model = Device
template_name = 'tracking/home.html'
context_object_name = 'devices'
def get_queryset(self, *args, **kwargs):
return super().get_queryset(*args, **kwargs).filter(who_added=self.request.user)
def post(self, request):
form = SelectForm(request.POST)
return render(request, self.template_name, {'form': form})
Portions of my template:
<div class="table-responsive">
<form action="" method="post" name="devices_to_check">
{% csrf_token %}
<table id="registered_devices"
class="table table-striped table-bordered table-hover table-sm"
style="width:100%; border: 1px solid black; font-size: 10px">
<thead class="table-primary"
style="text-align:center; border: 1px solid black">
<tr>
<th>IMEI</th>
<th>Label</th>
<th>Device Type</th>
<th>Group</th>
<th>Subgroup</th>
<th>Description</th>
<th>Display</th>
</tr>
</thead>
<tbody>
{% for device in devices %}
<tr>
<td>{{device.imei}}</td>
<td>{{device.label}}</td>
<td style="text-transform:uppercase">{{device.device_type}}</td>
<td>{{device.main_group}}</td>
<td>{{device.subgroup}}</td>
<td>{{device.description}}</td>
<td style="text-align:center">
<a href="{% url 'device-detail' device.id %}">i </a>
<input type="checkbox" id="{{device.imei}}" name="chk"
value="{{device.imei}}" onclick="show_info_icon()"
class="chckvalues"/>
</td>
</tr>
{% endfor %}
</tbody>
</table>
<input type="button" class="btn btn-outline-info" onclick='selects()'
value="Select All"/>
<input type="button" class="btn btn-outline-info" onclick='deSelect()'
value="Deselect All"/>
<button type="submit" class="btn btn-outline-info">Show on Map</button>
</form>
</div>
|
[
"I think you're better off using a function based view with the typical \"if request.method = POST\" logic, this isn't really what the generic list view is for.\n@loginrequired\ndef device_list_view(request):\n context = {}\n if request.method = POST:\n form = SelectForm(request.POST)\n if form.is_valid():\n # if a request was posted and is valid, do your thing:\n # maybe your thing is to give your map view the device id\n # and render it\n # assuming your form has a field called device_id:\n context['device_id'] = form.device_id \n return render(request, 'tracking/device_map.html', context)\n \n # either request method is not post or the form wasn't valid\n user_devices = Device.objects.filter(who_added=request.user)\n device_forms = []\n for i, device in enumerate(user_devices):\n form = SelectForm(instance=device, prefix=i)\n device_forms.append(form)\n context['device_forms'] = device_forms\n return render(request, 'tracking/home.html', context)\n \n\nThe key part here is the if-else logic of checking whether the request was POST or not, you'll need to tweak it based on exactly what you want to happen. It sounds like what you're really doing is creating one form from which you're pulling multiple device_ids. This is a good reference for what's going on with the prefix bit and should help you decide whether to do things that way or with one form like you're trying.\n"
] |
[
0
] |
[] |
[] |
[
"django",
"post",
"python"
] |
stackoverflow_0074659563_django_post_python.txt
|
Q:
Result shows in console only will not write to a file
I have tried everything I can find online for posting my powershell script result to a text file with no luck. I only get results in the console and no text file is created.
See code below
$rootSitePath = "\\MyServer\JWS_SUL"
$paths = ($rootSitePath + "\" + "UNITED\Image\Ticket\Loadout"),
($rootSitePath + "\" + "UNITED\Image\Ticket\Pit2"),
($rootSitePath + "\" + "UNITED\Image\Ticket\Photo\Loadout")
($rootSitePath + "\" + "UNITED\Image\Ticket\Photo\Pit2")
$folder = $paths
foreach ($folder in $paths){
}
if ($_.LastWriteTime.Date -ne (Get-Date).ToDay) {
Write-Output $folder | Out-File -Path c:\temp\Test\BackupResults.txt
}
Console Output
PS C:\WINDOWS\system32> C:\temp\JWS\TEST-1.ps1
\\MyServer\JWS_SUL\UNITED\Image\Ticket\Photo\Pit2
PS C:\WINDOWS\system32>
Your code updated with my network share
# --- Setup ---
$rootSitePath = "\\server\JWS_SUL"
$OutputPath = "C:\temp\BackupResults.txt"
$CurDate = (Get-Date).ToShortDateString()
#--- Cleanup from previous runs ---
If (Test-Path -Path "$OutputPath" ) {
Remove-Item -Path "$OutputPath"
}
#--- Initialize paths - always use Join-Path ---
$paths = @((Join-Path -Path "$rootSitePath" -Childpath "\UNITED\Image\Ticket\Pit2" )
(Join-Path -Path "$rootSitePath" -Childpath "\UNITED\Image\Ticket\Loadout")
(Join-Path -Path "$rootSitePath" -Childpath "\UNITED\Image\Ticket\Photo\Loadout"))
foreach ($folder in $paths){
#--- Retrieve Folder time as ShortDate
$FInfo = (Get-item -Path "$Folder").LastWriteTime.Date.ToShortDateString()
if ($FInfo -ne $CurDate ) {
Write-Output $folder |
Out-File -FilePath "C:\temp\BackupResults.txt" -Append
}
} #End Foreach
Powershell console just shows this
PS Microsoft.PowerShell.Core\FileSystem::\\server\JWS_SUL> C:\temp\JWS\TEST-123.ps1
A:
Here's a cleaned-up version of your code with comments. It uses several of mkelements' comments, which I didn't see as I was writing and testing the code.
# --- Setup ---
$rootSitePath = "G:\BEKDocs"
$OutputPath = "G:\Test\BackupResults.txt"
$CurDate = (Get-Date).ToShortDateString()
#--- Cleanup from previous runs ---
If (Test-Path -Path "$OutputPath" ) {
Remove-Item -Path "$OutputPath"
}
#--- Initialize paths - always use Join-Path ---
$paths = @((Join-Path -Path "$rootSitePath" -Childpath "Money" )
(Join-Path -Path "$rootSitePath" -Childpath "Outlook Files")
(Join-Path -Path "$rootSitePath" -Childpath "Access"))
foreach ($folder in $paths){
#--- Retrieve Folder time as ShortDate
$FInfo = (Get-item -Path "$Folder").LastWriteTime.Date.ToShortDateString()
if ($FInfo -ne $CurDate ) {
Write-Output $folder |
Out-File -FilePath "G:\Test\BackupResults.txt" -Append
}
} #End Foreach
Sample File Output:
G:\BEKDocs\Outlook Files
G:\BEKDocs\Access
The Money directory was changed today. Note the conversion to ShortDateString to eliminate the time element in the compare.
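As a more pipeline-style sketch of the same filter (assuming the $paths and $OutputPath variables from above), the DateTime values can be compared directly:
$paths |
    Where-Object { (Get-Item -Path $_).LastWriteTime.Date -ne (Get-Date).Date } |
    Out-File -FilePath $OutputPath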
|
Result shows in console only will not write to a file
|
I have tried everything I can find online for posting my powershell script result to a text file with no luck. I only get results in the console and no text file is created.
See code below
$rootSitePath = "\\MyServer\JWS_SUL"
$paths = ($rootSitePath + "\" + "UNITED\Image\Ticket\Loadout"),
($rootSitePath + "\" + "UNITED\Image\Ticket\Pit2"),
($rootSitePath + "\" + "UNITED\Image\Ticket\Photo\Loadout")
($rootSitePath + "\" + "UNITED\Image\Ticket\Photo\Pit2")
$folder = $paths
foreach ($folder in $paths){
}
if ($_.LastWriteTime.Date -ne (Get-Date).ToDay) {
Write-Output $folder | Out-File -Path c:\temp\Test\BackupResults.txt
}
Console Output
PS C:\WINDOWS\system32> C:\temp\JWS\TEST-1.ps1
\\MyServer\JWS_SUL\UNITED\Image\Ticket\Photo\Pit2
PS C:\WINDOWS\system32>
Your code updated with my network share
# --- Setup ---
$rootSitePath = "\\server\JWS_SUL"
$OutputPath = "C:\temp\BackupResults.txt"
$CurDate = (Get-Date).ToShortDateString()
#--- Cleanup from previous runs ---
If (Test-Path -Path "$OutputPath" ) {
Remove-Item -Path "$OutputPath"
}
#--- Initialize paths - always use Join-Path ---
$paths = @((Join-Path -Path "$rootSitePath" -Childpath "\UNITED\Image\Ticket\Pit2" )
(Join-Path -Path "$rootSitePath" -Childpath "\UNITED\Image\Ticket\Loadout")
(Join-Path -Path "$rootSitePath" -Childpath "\UNITED\Image\Ticket\Photo\Loadout"))
foreach ($folder in $paths){
#--- Retrieve Folder time as ShortDate
$FInfo = (Get-item -Path "$Folder").LastWriteTime.Date.ToShortDateString()
if ($FInfo -ne $CurDate ) {
Write-Output $folder |
Out-File -FilePath "C:\temp\BackupResults.txt" -Append
}
} #End Foreach
Powershell console just shows this
PS Microsoft.PowerShell.Core\FileSystem::\\server\JWS_SUL> C:\temp\JWS\TEST-123.ps1
|
[
"Here's a cleaned up version of your code with comments. It uses several of mkelements comments which I didn't see as I was writing and testing the code.\n# --- Setup ---\n$rootSitePath = \"G:\\BEKDocs\"\n$OutputPath = \"G:\\Test\\BackupResults.txt\"\n$CurDate = (Get-Date).ToShortDateString() \n\n#--- Cleanup from previous runs ---\nIf (Test-Path -Path \"$OutputPath\" ) {\n Remove-Item -Path \"$OutputPath\"\n}\n\n#--- Initialize paths - always use Join-Path ---\n$paths = @((Join-Path -Path \"$rootSitePath\" -Childpath \"Money\" )\n (Join-Path -Path \"$rootSitePath\" -Childpath \"Outlook Files\")\n (Join-Path -Path \"$rootSitePath\" -Childpath \"Access\"))\n\n\nforeach ($folder in $paths){\n\n #--- Retrieve Folder time as ShortDate\n $FInfo = (Get-item -Path \"$Folder\").LastWriteTime.Date.ToShortDateString()\n \n if ($FInfo -ne $CurDate ) {\n Write-Output $folder | \n Out-File -FilePath \"G:\\Test\\BackupResults.txt\" -Append\n }\n\n} #End Foreach\n\nSample File Output:\nG:\\BEKDocs\\Outlook Files\nG:\\BEKDocs\\Access\n\nThe Money directory was changed today. Note the conversion to ShortDateString to eliminate the time element in the compare.\n"
] |
[
1
] |
[] |
[] |
[
"powershell"
] |
stackoverflow_0074660730_powershell.txt
|
Q:
Assert.IsType() Failure
I'm not sure why I'm failing this test.
Message:
Assert.IsType() Failure
Expected: Microsoft.AspNetCore.Mvc.OkObjectResult
Actual: Microsoft.AspNetCore.Mvc.ObjectResult
var controller = GetMockedTokenController();
var response = await controller.Search(GetSearchMasterCardTokenRequestDto(), Id);
var objectResult = Assert.IsType<OkObjectResult>(response);
A:
Assert.IsType<T>() checks the exact runtime type, and an OkObjectResult is just a subclass of ObjectResult that presets the status code to 200.
You can see in your error message that the type actually returned is a plain ObjectResult, so the exact-type assertion fails.
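If the test only cares about the payload and status code, a looser assertion works as a sketch (assuming the same response variable as above):
var objectResult = Assert.IsAssignableFrom<ObjectResult>(response);
Assert.Equal(200, objectResult.StatusCode);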
|
Assert.IsType() Failure
|
I'm not sure why I'm failing this test.
Message:
Assert.IsType() Failure
Expected: Microsoft.AspNetCore.Mvc.OkObjectResult
Actual: Microsoft.AspNetCore.Mvc.ObjectResult
var controller = GetMockedTokenController();
var response = await controller.Search(GetSearchMasterCardTokenRequestDto(), Id);
var objectResult = Assert.IsType<OkObjectResult>(response);
|
[
"Because there’s no actual difference between an ObjectResult and an OkObjectResult. An OkObjectResult just return an object result and sets the status code to 200.\nYou can see that in your error message that the actual responded type is just an ObjectResult\n"
] |
[
0
] |
[] |
[] |
[
"xunit"
] |
stackoverflow_0074649993_xunit.txt
|
Q:
How to wrap the conda command in a fish function to initialize conda only on demand?
From time to time I use the miniconda package manager. Normally, conda adds the following line to ~/.config/fish/config.fish upon installation:
eval /home/quappas/.apps/miniconda3/bin/conda "shell.fish" "hook" $argv | source
This line must be executed before conda can be used. However, it is quite slow to execute and having it in config.fish causes a significant startup delay every time I open a terminal. This is annoying because most of the time I don't even want to use conda when I open a terminal. So I decided to remove the line from config.fish and define a function conda.fish to wrap the conda command instead:
function conda --wraps 'conda'
if not set -q CONDA_INITIALIZED
echo 'Initializing conda...'
eval /home/quappas/.apps/miniconda3/bin/conda "shell.fish" "hook" | source
set -g CONDA_INITIALIZED 1
end
command conda $argv
end
With this function, for some reason I can do
conda
conda activate myenv
in a new terminal and it works just fine. However I do
conda activate myenv
directly in a new terminal, I get conda's "Your shell has not been properly configured to use 'conda activate'." error. Confusingly if I do
conda activate myenv
conda activate myenv
in a new terminal, the first command gives me the above error but the second one activates my environment successfully without complaint.
I'm not sure if this is a problem with my function, fish or conda. What can I do to be able to use the conda command normally, but only run the slow conda initialization script when I actually call conda?
A:
The issue is that the conda integration happens by defining a function called conda. So calling command conda is wrong; the integration expects conda to be invoked via that wrapper function.
See what happens after you run conda once and then use type conda to see the definition:
conda is a function with definition
# Defined via `source`
function conda
set -l CONDA_EXE /opt/miniconda3/bin/conda
if [ (count $argv) -lt 1 ]
$CONDA_EXE
else
set -l cmd $argv[1]
set -e argv[1]
switch $cmd
case activate deactivate
eval ($CONDA_EXE shell.fish $cmd $argv)
case install update upgrade remove uninstall
$CONDA_EXE $cmd $argv
and eval ($CONDA_EXE shell.fish reactivate)
case '*'
$CONDA_EXE $cmd $argv
end
end
end
You can either do that in your own function, or call conda $argv, which would ordinarily be an infinite loop.
Since that seems a bit awkward, how about this instead:
function conda --wraps 'conda'
echo 'Initializing conda...'
# We erase ourselves because conda defines a function of the same name.
# This allows checking that that happened and can prevent infinite loops
functions --erase conda
/home/quappas/.apps/miniconda3/bin/conda "shell.fish" "hook" | source
if not functions -q conda
# If the function wasn't defined, we should not do the call below.
echo 'Something went wrong initializing conda!' >&2
return 1
end
# Now we can call `conda`, which is a function, but not this one (because we erased it),
# so this is not an infinite loop.
conda $argv
end
The variable is now unnecessary since this function is never called twice in the same shell, and I removed the dangerous and unnecessary eval.
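Note that for the lazy initialization to apply in every new shell without touching config.fish, save the wrapper where fish autoloads functions from:
~/.config/fish/functions/conda.fish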
|
How to wrap the conda command in a fish function to initialize conda only on demand?
|
From time to time I use the miniconda package manager. Normally, conda adds the following line to ~/.config/fish/config.fish upon installation:
eval /home/quappas/.apps/miniconda3/bin/conda "shell.fish" "hook" $argv | source
This line must be executed before conda can be used. However, it is quite slow to execute and having it in config.fish causes a significant startup delay every time I open a terminal. This is annoying because most of the time I don't even want to use conda when I open a terminal. So I decided to remove the line from config.fish and define a function conda.fish to wrap the conda command instead:
function conda --wraps 'conda'
if not set -q CONDA_INITIALIZED
echo 'Initializing conda...'
eval /home/quappas/.apps/miniconda3/bin/conda "shell.fish" "hook" | source
set -g CONDA_INITIALIZED 1
end
command conda $argv
end
With this function, for some reason I can do
conda
conda activate myenv
in a new terminal and it works just fine. However, if I do
conda activate myenv
directly in a new terminal, I get conda's "Your shell has not been properly configured to use 'conda activate'." error. Confusingly if I do
conda activate myenv
conda activate myenv
in a new terminal, the first command gives me the above error but the second one activates my environment successfully without complaint.
I'm not sure if this is a problem with my function, fish or conda. What can I do to be able to use the conda command normally, but only run the slow conda initialization script when I actually call conda?
|
[
"The issue is that the conda integration happens by defining a function called conda. So calling command conda is wrong, they want it to be called via a wrapper function.\nSee what happens after you run conda once and then use type conda to see the definition:\nconda is a function with definition\n# Defined via `source`\nfunction conda\n set -l CONDA_EXE /opt/miniconda3/bin/conda\n if [ (count $argv) -lt 1 ]\n $CONDA_EXE\n else\n set -l cmd $argv[1]\n set -e argv[1]\n switch $cmd\n case activate deactivate\n eval ($CONDA_EXE shell.fish $cmd $argv)\n case install update upgrade remove uninstall\n $CONDA_EXE $cmd $argv\n and eval ($CONDA_EXE shell.fish reactivate)\n case '*'\n $CONDA_EXE $cmd $argv\n end\n end\nend\n\nYou can either do that in your own function, or call conda $argv, which would ordinarily be an infinite loop.\nSince that seems a bit awkward, how about this instead:\nfunction conda --wraps 'conda'\n echo 'Initializing conda...'\n # We erase ourselves because conda defines a function of the same name.\n # This allows checking that that happened and can prevent infinite loops\n functions --erase conda\n /home/quappas/.apps/miniconda3/bin/conda \"shell.fish\" \"hook\" | source\n\n if not functions -q conda\n # If the function wasn't defined, we should not do the call below.\n echo 'Something went wrong initializing conda!' >&2\n return 1\n end\n \n # Now we can call `conda`, which is a function, but not this one (because we erased it),\n # so this is not an infinite loop.\n conda $argv\nend\n\nThe variable is now unnecessary since this function is never called twice in the same shell, and I removed the dangerous and unnecessary eval.\n"
] |
[
3
] |
[] |
[] |
[
"anaconda",
"conda",
"fish",
"shell"
] |
stackoverflow_0074661211_anaconda_conda_fish_shell.txt
|
Q:
LINQ and Sorting into sequential Collection with random access capability
I have this code:
SortedList<int, SortedList<int, SimulationPoint>> sl = new SortedList<int, SortedList<int, SimulationPoint>>();
for(int i=0; i<source.Reflections+1; i++)
{
sl.Add(i, new SortedList<int, SimulationPoint>());
}
var q = source.SimulationResult.Where(x => !x.Value.hasHit);
foreach (var qa in q)
{
sl[qa.Key.Item2].Add(qa.Key.Item1, qa.Value);
}
I wanted to generate a sorted output of the collection source.SimulationResult, which is a Dictionary<(int, int), SimulationPoint>. (This dictionary was generated using a Parallel.For() loop, so all the items are in random order.)
Dictionary Key: Just the Ray number emitted from the source (e.g. 0->100) and the Reflection number (e.g. 0->10) as it bounces around the scene: (int Ray, int Reflection).
Dictionary Value:
The SimulationPoint is an output point of a ray-tracing procedure, the most important element of which is that it contains a field bool hasHit that indicates if the point, well, hit an element in the scene or not. (We're looking for those errors here, hence source.SimulationResult.Where(x=>!x.Value.hashit);) (FWIW, this struct SimulationPoint also contains the Ray & Reflection data.)
Generally, this works. But I really like the LINQ syntax and one-liner concept, as it can avoid multiple nested loops. Does anyone have an idea how this can be simplified using the LINQ extensions?
Please keep in mind that I'd like to be able to jump around at the user's choice within the sl collection; this is the problem I'm having with the IGrouping<int, SimulationPoint> outputs from the GroupBy(x=>x.Key.Item2, x=>x.Value) method - it's only accessible sequentially using a foreach loop, even when ordering it with OrderBy(x => x.Key.Item2).ThenBy(x => x.Key.Item1).
A:
I suggest adding a group of extension methods analogous to ToDictionary to do ToSortedList. Then, with those extension methods available, you can use GroupBy (which internally creates a Dictionary...) to create the SortedLists. Internally you are essentially running over the source data twice, but it doesn't seem like you are talking about a lot of data.
var sl2 = q.GroupBy(kvp => kvp.Key.Reflection) // group by outer SortedList Key
.ToSortedList(kvpg => kvpg.Key, // outer SortedList Key (Reflection)
// outer SortedList Value (SortedList: Ray -> SimulationPoint)
kvpg => kvpg.ToSortedList(kvp => kvp.Key.Ray, // inner SortedList Key (Ray)
kvp => kvp.Value) // inner SortedList Value (SimulationPoint)
);
Here are the extension method definitions:
public static class IEnumerableExt {
public static SortedList<TKey, TValue> ToSortedList<TItem, TKey, TValue>(this IEnumerable<TItem> items, Func<TItem, TKey> keyFn, Func<TItem, TValue> valueFn) {
var ans = new SortedList<TKey, TValue>();
foreach (var item in items)
ans.Add(keyFn(item), valueFn(item));
return ans;
}
public static SortedList<TKey, TValue> ToSortedList<TKey, TValue>(this IEnumerable<TValue> items, Func<TValue, TKey> keyFn) {
var ans = new SortedList<TKey, TValue>();
foreach (var item in items)
ans.Add(keyFn(item), item);
return ans;
}
public static SortedList<TKey, TKey> ToSortedList<TKey>(this IEnumerable<TKey> items) {
var ans = new SortedList<TKey, TKey>();
foreach (var item in items)
ans.Add(item, item);
return ans;
}
}
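With the nested SortedLists built, random access works the way the question asks for; a hypothetical lookup (the key values here are assumptions):
// Jump straight to one reflection, then one ray, without a foreach:
int reflection = 0, ray = 0;
SimulationPoint point = sl2[reflection][ray];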
|
LINQ and Sorting into sequential Collection with random access capability
|
I have this code:
SortedList<int, SortedList<int, SimulationPoint>> sl = new SortedList<int, SortedList<int, SimulationPoint>>();
for(int i=0; i<source.Reflections+1; i++)
{
sl.Add(i, new SortedList<int, SimulationPoint>());
}
var q = source.SimulationResult.Where(x => !x.Value.hasHit);
foreach (var qa in q)
{
sl[qa.Key.Item2].Add(qa.Key.Item1, qa.Value);
}
I wanted to generate a sorted output of the collection source.SimulationResult, which is a Dictionary<(int, int), SimulationPoint>. (This dictionary was generated using a Parallel.For() loop, so all the items are in random order.)
Dictionary Key: Just the Ray number emitted from the source (e.g. 0->100) and the Reflection number (e.g. 0->10) as it bounces around the scene: (int Ray, int Reflection).
Dictionary Value:
The SimulationPoint is an output point of a ray-tracing procedure, the most important element of which is that it contains a field bool hasHit that indicates if the point, well, hit an element in the scene or not. (We're looking for those errors here, hence source.SimulationResult.Where(x=>!x.Value.hashit);) (FWIW, this struct SimulationPoint also contains the Ray & Reflection data.)
Generally, this works. But I really like the LINQ syntax and one-liner concept, as it can avoid multiple nested loops. Does anyone have an idea how this can be simplified using the LINQ extensions?
Please keep in mind that I'd like to be able to jump around at the user's choice within the sl collection; this is the problem I'm having with the IGrouping<int, SimulationPoint> outputs from the GroupBy(x=>x.Key.Item2, x=>x.Value) method - it's only accessible sequentially using a foreach loop, even when ordering it with OrderBy(x => x.Key.Item2).ThenBy(x => x.Key.Item1).
|
[
"I suggest adding an extension method (group) analogous to ToDictionary to do ToSortedList. Then with that extension method available, you can use GroupBy (which is internally creating a Dictionary...) to create the SortedLists. Internally you are essentially running over the source data twice, but it doesn't seem like you are talking about a lot of data.\nvar sl2 = q.GroupBy(kvp => kvp.Key.Reflection) // group by outer SortedList Key\n .ToSortedList(kvpg => kvpg.Key, // outer SortedList Key (Reflection)\n // outer SortedList Value (SortedList: Ray -> SimulationPoint)\n kvpg => kvpg.ToSortedList(kvp => kvp.Key.Ray, // inner SortedList Key (Ray)\n kvp => kvp.Value) // inner SortedList Value (SimulationPoint)\n );\n\nHere are the extension method definitions:\npublic static class IEnumerableExt {\n public static SortedList<TKey, TValue> ToSortedList<TItem, TKey, TValue>(this IEnumerable<TItem> items, Func<TItem, TKey> keyFn, Func<TItem, TValue> valueFn) {\n var ans = new SortedList<TKey, TValue>();\n foreach (var item in items)\n ans.Add(keyFn(item), valueFn(item));\n return ans;\n }\n public static SortedList<TKey, TValue> ToSortedList<TKey, TValue>(this IEnumerable<TValue> items, Func<TValue, TKey> keyFn) {\n var ans = new SortedList<TKey, TValue>();\n foreach (var item in items)\n ans.Add(keyFn(item), item);\n return ans;\n }\n public static SortedList<TKey, TKey> ToSortedList<TKey>(this IEnumerable<TKey> items) {\n var ans = new SortedList<TKey, TKey>();\n foreach (var item in items)\n ans.Add(item, item);\n return ans;\n }\n}\n\n"
] |
[
0
] |
[] |
[] |
[
"c#",
"dictionary",
"linq",
"sorting"
] |
stackoverflow_0074659799_c#_dictionary_linq_sorting.txt
|
Q:
What guarantees re-rendering in Svelte?
In React, changes to local variables do not guarantee a component (which uses that local variable) re-rendering. Only changes in props or state or context (AFAIK) would guarantee re-rendering.
Is it the same case for Svelte, that is, one must use a proper store to guarantee a re-rendering?
A:
Every assignment to a locally declared variable triggers re-rendering, not just stores.
That is why you need to do something like:
let arr = [];
onMount(() => {
arr.push(1);
// A rerender will not be scheduled yet
arr = arr;
    // Now one is: the seemingly useless assignment is compiled into a call that tells the runtime the variable changed
});
To guarantee a re-render has been executed before continuing, you can use await tick().
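A minimal sketch of that pattern inside a component's <script> block:
<script>
  import { tick } from 'svelte';

  let arr = [];

  async function add() {
    arr = [...arr, 1]; // the assignment schedules a re-render
    await tick();      // resolves once the DOM has been updated
  }
</script>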
|
What guarantees re-rendering in Svelte?
|
In React, changes to local variables do not guarantee a component (which uses that local variable) re-rendering. Only changes in props or state or context (AFAIK) would guarantee re-rendering.
Is it the same case for Svelte, that is, one must use a proper store to guarantee a re-rendering?
|
[
"Every assignment to a locally declared variable triggers re-rendering, not just stores.\nThat is why you need to do something like:\nlet arr = [];\nonMount(() => {\n arr.push(1);\n // A rerender will not be scheduled yet\n arr = arr;\n // Just now, the statement itself will be compiled away, but the runtime will know\n});\n\nTo guarantee a re-render has been executed before continiuing you can use await tick().\n"
] |
[
3
] |
[] |
[] |
[
"svelte"
] |
stackoverflow_0074661360_svelte.txt
|
Q:
Trigger.Once Spark Structured Streaming with KAFKA offsets and writing to KAFKA continues
When using Spark Structured Streaming with Trigger.Once to process KAFKA input, and KAFKA is being written to simultaneously while the Trigger.Once invocation runs, will that invocation see the newer KAFKA records written during the current run, or will they not be seen until the next invocation of Trigger.Once?
A:
From the manuals: it processes all. See below.
Configuring incremental batch processing Apache Spark provides the
.trigger(once=True) option to process all new data from the source
directory as a single micro-batch. This trigger once pattern ignores
all settings to control streaming input size, which can lead to massive
spill or out-of-memory errors.
Databricks supports trigger(availableNow=True) in Databricks Runtime
10.2 and above for Delta Lake and Auto Loader sources. This functionality combines the batch processing approach of trigger once
with the ability to configure batch size, resulting in multiple
parallelized batches that give greater control for right-sizing
batches and the resultant files.
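A minimal PySpark sketch of the availableNow trigger (the broker, topic, and paths are assumptions, not from the question):
# Assumed Kafka source and Delta sink; the .trigger(...) line is the point.
(spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    .writeStream
    .format("delta")
    .option("checkpointLocation", "/tmp/chk")
    .trigger(availableNow=True)  # one batch-style run over all data available at start
    .start("/tmp/out"))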
|
Trigger.Once Spark Structured Streaming with KAFKA offsets and writing to KAFKA continues
|
When using Spark Structured Streaming with Trigger.Once to process KAFKA input, and KAFKA is being written to simultaneously while the Trigger.Once invocation runs, will that invocation see the newer KAFKA records written during the current run, or will they not be seen until the next invocation of Trigger.Once?
|
[
"From the manuals: it processes all. See below.\n\nConfiguring incremental batch processing Apache Spark provides the\n.trigger(once=True) option to process all new data from the source\ndirectory as a single micro-batch. This trigger once pattern ignores\nall setting to control streaming input size, which can lead to massive\nspill or out-of-memory errors.\nDatabricks supports trigger(availableNow=True) in Databricks Runtime\n10.2 and above for Delta Lake and Auto Loader sources. This functionality combines the batch processing approach of trigger once\nwith the ability to configure batch size, resulting in multiple\nparallelized batches that give greater control for right-sizing\nbatches and the resultant files.\n\n"
] |
[
0
] |
[] |
[] |
[
"apache_kafka",
"apache_spark",
"databricks",
"spark_structured_streaming"
] |
stackoverflow_0071263689_apache_kafka_apache_spark_databricks_spark_structured_streaming.txt
|
Q:
Converting XML node values to comma separated values in SQL
I am trying to convert XML node values to comma separated values but, getting a
Incorrect syntax near the keyword 'SELECT'.
error message
declare @dataCodes XML = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
DECLARE @ConcatString VARCHAR(MAX)
SELECT @ConcatString = COALESCE(@ConcatString + ', ', '') + Code FROM (SELECT T.Item.value('@Value[1]','VARCHAR(MAX)') as Code FROM @dataCodes.nodes('/Root/List') AS T(Item))
SELECT @ConcatString AS Result
GO
I tried to follow an article but not sure how to proceed further. Any suggestion is appreciated.
Expectation:
Comma separated values ('120,110') stored in a variable.
A:
Try this;
DECLARE @dataCodes XML = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
DECLARE @ConcatString VARCHAR(MAX)
SELECT @ConcatString = COALESCE(@ConcatString + ', ', '') + Code
FROM (
SELECT T.Item.value('@Value[1]', 'VARCHAR(MAX)') AS Code
FROM @dataCodes.nodes('/Root/List') AS T(Item)
) as TBL
SELECT @ConcatString AS Result
GO
You just need to add an alias to your sub SQL query.
A:
For future readers, XML data can be extracted into arrays, lists, vectors, and variables for output in comma separated values more fluidly using general purpose languages. Below are open-source solutions using OP's needs taking advantage of XPath.
Python
import lxml.etree as ET
xml = '<Root>\
<List Value="120" />\
<List Value="110" />\
</Root>'
dom = ET.fromstring(xml)
nodes = dom.xpath('//List/@Value')
data = [] # LIST
for elem in nodes:
data.append(elem)
print((", ").join(data))
120, 110
PHP
$xml = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
$dom = simplexml_load_string($xml);
$node = $dom->xpath('//List/@Value');
$data = []; # Array
foreach ($node as $n){
$data[] = $n;
}
echo implode(", ", $data);
120, 110
R
library(XML)
xml = '<Root>
<List Value="120" />
<List Value="110" />
</Root>'
doc<-xmlInternalTreeParse(xml)
data <- xpathSApply(doc, "//List", xmlGetAttr, 'Value') # LIST
print(paste(data, collapse = ', '))
120, 110
A:
To do this without a variable, you can use the nodes method to convert the xml nodes into a table format with leading commas, then use FOR XML PATH('') to collapse it into a single line of XML, then wrap that in STUFF to convert it to varchar and strip off the initial leading comma:
DECLARE @dataCodes XML = '<Root>
<List Value="120" />
<List Value="110" />
</Root>';
SELECT STUFF(
(
SELECT ', ' + T.Item.value('@Value[1]', 'VARCHAR(MAX)')
FROM @dataCodes.nodes('/Root/List') AS T(Item)
FOR XML PATH('')
), 1, 2, '')
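On SQL Server 2017 or later (an assumption about your version), STRING_AGG collapses the shredded rows directly, with no STUFF/FOR XML needed:
DECLARE @dataCodes XML = '<Root>
    <List Value="120" />
    <List Value="110" />
  </Root>';

SELECT STRING_AGG(T.Item.value('@Value[1]', 'VARCHAR(MAX)'), ', ')
FROM @dataCodes.nodes('/Root/List') AS T(Item);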
Q:
How to stop nginx for windows?
I can't stop the nginx server on Windows. I've tried nginx -s stop, taskkill /if nginx.exe, and ending the process via Task Manager, yet it's still running!
A:
Use @taskkill /f /im nginx.exe for this task.
A:
You can stop it using quit:
nginx.exe -s quit
A:
You can toggle Nginx start/stop on Windows using two command prompts: one to start Nginx and the other to stop it.
If you stop Nginx from one command prompt, the Nginx process started from the other prompt will stop automatically. If instead you try to stop Nginx with Ctrl+C in the prompt where it was started, it will not stop even though you close the command prompt, unless you kill the nginx processes from Task Manager.
A:
This worked for me
wmic process where name='nginx.exe' delete
A:
Make a .bat file in the nginx.exe folder with the command nginx.exe -s quit, then make a shortcut to the desktop or wherever needed.
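For example, a stop-nginx.bat placed next to nginx.exe could contain (the cd line assumes the script lives in the nginx folder):
cd /d "%~dp0"
nginx.exe -s quit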
A:
We can follow the steps mentioned below to quit or reload nginx when facing an issue like:
nginx: [error] CreateFile() "...logs/nginx.pid" failed
(2: The system cannot find the file specified)
A:
Starting nginx with daemon off will do the trick.
nginx is detached from the console by default, regardless of whether you run it directly using
./nginx.exe
or (as suggested by the guide) using
start ./nginx.exe
If you want to stay coupled to the process that runs nginx (so it can pass SIGTERM / Ctrl+C on to nginx), you have to run this command:
nginx -g "daemon off;"
A:
Make a windows shortcut somewhere and add the following in the target box:
C:\Windows\System32\cmd.exe /k "wmic process where name='nginx.exe' delete" && exit
Any time you double-click it, nginx gets shut down.
Q:
Why can't we use only one for loop for scanning and printing elements of an array?
Why do we need to declare two for loops for scanning and printing the elements of an array? Why can't we use only one loop for both?
#include<stdio.h>
int main()
{
int arr[5]; //array of length 5, type integer.. arr[0]-->arr[4]
int i;
for(i = 0; i < 5; i++) //for getting 5 elements from user
scanf("%d",&arr[i]);
//printing all 5 elements
printf("hi\n%d\n",arr[i]);
return 0;
}
A:
You can. Your indentation is confusing; you should always prefer to use braces around loops.
This will ask for input, print it, ask for input, print it, etc:
// Integer array of length 5
int arr[5];
int i;
for(i = 0; i < 5; i++) {
// Get an element from the user
scanf("%d",&arr[i]);
// Print the element from the array
printf("hi\n%d\n",arr[i]);
}
If you used two loops, it would ask you for input 5 times, then print out all 5:
for(i = 0; i < 5; i++) {
// Get 5 elements from the user
scanf("%d",&arr[i]);
}
for(i = 0; i < 5; i++) {
// Print 5 elements from the array
printf("hi\n%d\n",arr[i]);
}
Q:
trying to correct "getAverageRainfall" to display the average of input using "getTotalRainfall" divided by "MONTHS"
Specifications: Write a program that stores the total rainfall for each of the 12 months in an array. The program should have the following separate methods:
getRainfall – takes an array as an argument and reads the amount of rainfall in inches for each of the 12 months from the user and stores the values in the array. The method cannot accept a negative number from the user.
displayRainfall – takes the array containing the inches of rainfall per month as an argument and displays the rainfall for each month.
getTotalRainfall – takes the array containing the inches of rainfall per month as an argument and returns the total number of inches of rainfall for the year.
getAverageRainfall – takes the array containing the inches of rainfall per month as an argument and returns the average number of inches per month.
getRainfallAbove – takes the array containing the inches of rainfall per month and a number as arguments and returns the number of months that had rainfall exceeded the number argument.
Declare the array in the main method and call each of separate methods and output the results in main to demonstrate the working state of the entire program.
My code:
public static final int MONTHS = 12;
static double[] rain = new double[MONTHS]; //Crate an array to hold the rain values
public static void main(String[] args) throws IOException {
rain = getRainfall(args);
displayRainfall(rain);
getTotalRainfall(rain);
getAverageRainfall(rain);
}
public static double[] getRainfall(String[] args) throws IOException {
String[] months = { "January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December" }; //Each month in a year
//Crate an array to hold the rain values
double[] rain = new double[12];
//Create a Scanner object for keyboard input.
Scanner keyboard = new Scanner(System.in);
//Get rain values and store them in the rain array
for (int i = 0; i < months.length; i++)
{
System.out.print("Enter rainfall for " + months[(i)] + ": ");
rain[i] = keyboard.nextDouble();
if (rain[i] <0)
{
System.out.println("You can not use a negative number.\n");
i--;
}
}
return rain;
}
private static void displayRainfall(double[] rain) {
String[] months = { "January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December" }; //Each month in a year
System.out.println("\nDisplaying rainfall for each month\n");
for(int i = 0; i < months.length; i++) {
System.out.print("Rainfall for " + months[i] + ": " + rain[i] + "\n");
}
}
private static double getTotalRainfall(double[] rain) {
{
double total = 0.0; //Accumulator
//Get sum of all values in the rain array.
for (double value: rain)
total += value;
System.out.printf("\nThe total rainfall for the year is: " + total);
return total;
}
}
private static double getAverageRainfall(double[] rain) {
{
double average = 0.0;
average += getTotalRainfall(rain)/ MONTHS;
return average;
}
Output:
Rainfall for April: 1.0
Rainfall for May: 1.0
Rainfall for June: 1.0
Rainfall for July: 1.0
Rainfall for August: 1.0
Rainfall for September: 1.0
Rainfall for October: 1.0
Rainfall for November: 1.0
Rainfall for December: 1.0
The total rainfall for the year is: 12.0
The total rainfall for the year is: 12.0
A:
You have to return rain and use it in the display method:
public static void main(String[] args) throws IOException {
double[] rain = getRainfall(args);
displayRainfall(rain);
}
public static double[] getRainfall(String[] args) throws IOException {
String[] months = { "January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December" }; //Each month in a year
//Crate an array to hold the rain values
double[] rain = new double[12];
//Create a Scanner object for keyboard input.
Scanner keyboard = new Scanner(System.in);
//Get rain values and store them in the rain array
for (int i = 0; i < months.length; i++)
{
System.out.print("Enter rainfall for " + months[(i)] + ": ");
rain[i] = keyboard.nextDouble();
if (rain[i] <0)
{
System.out.println("You can not use a negative number.\n");
i--;
}
}
return rain;
}
private static void displayRainfall(double[] rain) {
String[] months = { "January", "February", "March", "April", "May", "June", "July",
"August", "September", "October", "November", "December" }; //Each month in a year
System.out.println("\nDisplaying rainfall for each month\n");
for(int i = 0; i < months.length; i++) {
System.out.print("Rainfall for " + months[i] + ": " + rain[i] + "\n, ");
}
}
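To answer the actual question about getAverageRainfall, a minimal sketch: keep the printing out of the helper (or out of getTotalRainfall) so the total isn't printed twice, and print the average once from main:
private static double getAverageRainfall(double[] rain) {
    double total = 0.0;
    for (double value : rain) {
        total += value; // sum all monthly values
    }
    return total / MONTHS; // average inches per month
}

// in main():
System.out.println("\nThe average rainfall per month is: " + getAverageRainfall(rain));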
Q:
Spring authorization server OAuth2 login from my own login page
I have a front end (Angular) with a login form, a back end for that Angular application as my OAuth client (Spring dependency), then a third application that is the authorization server, and finally a fourth that is the resource server.
So I want to know: is there any way to skip the redirect to /login from the authorization server?
I want to log the user in with the Angular login page, then make a GET with my OAuth client (Spring) for the authorization code flow and then, since I'm not authenticated, instead of getting redirected, I want to get a "401" error and then send a POST request with my OAuth client (Spring) to the auth server again to log in the user that previously sent the data in the Angular login page.
Essentially I just want to log in to my auth server with a custom page that exists in the front-end application, and let the backend built specially for that front end take over the flow.
A:
You want seamless integration between login form and the rest of your Angular app? Share your CSS between Angular (public) client and authorization-server embedded (private) one, don't implement login in public client.
You might need to better grasp OAuth2 concepts.
Login, logout and user-registration are authorization-server business. Leave it there. Reasons are related to security and just being future-proof: what if you want to plug additional clients to your system (mobile apps for instance)? Are you going to implement login, logout and user-registration again and again? What if you have to introduce multi-factor authentication at some point? Would you break all clients at once?
Your "backend" (Spring REST API secured with OAuth2) is a resource-server, not a client. Make sure it is configured as so: depends on spring-boot-starter-oauth2-resource-server (directly or transitively). This is where the HTTP status for missing (or invalid) authorization is handled. The way to do it depends on your resource-server being a servlet or reactive app. The libs in the repo linked just before do what you want by default (401 instead of 302).
In your case, client is Angular app. I hope you use an OAuth2 client lib such as angular-auth-oidc-client to handle:
authorization-code flow (redirects to authorization-server, back from authorization-server with authorization-code and tokens retrieval with this authorization-code)
refresh-token flow (automatic access-token refreshing just before it expires)
requests authorization (add Bearer Authorization header with access-token on configured routes)
What my Angular apps do for login is just redirect users to authorization-server "authorization" end-point and then just wait for a redirect back to "post-login URL" with an authorization-code. How this code was obtained is none of their business (can be a login form, a "remember-me" cookie, some biometry, etc.).
Also, I have to admit that I don't use Spring's authorization-server. I prefer mature / feature-full solutions like Keycloak, Auth0, Okta, etc. which come with much more already implemented: multi-factor authentication, integration with LDAP, identity federation for "social" providers (Google, Facebook, Github, etc.), administration UI, ...
A:
As ch4mp said, it's best to leave all the authentication and authorization logic and pages to the Spring Authorization Server, it's pretty easy to customize and configure all the user accessible pages (login, authorize, etc...) with Thymeleaf, you only have to bring in your CSS to unify the design.
There was a great demo in last year's Spring One which seems like something you'd like to achieve, you can find the code in this repo.
They used Spring Cloud Gateway as a means to run the Angular SPA in the flights-web app, configuring it as the OAuth client. Following this path you can route all calls to your backend API through the gateway's WebClient.
Q:
Some SVG elements disappear when hiding adjacent divs
I have a project where I've created a number of divs which each contain an SVG gauge. Each gauge is multi-layered and when displayed, it looks great. It's a complex gauge, so I won't include the code, but the structure is:
<div> (filtered at this level with css display)
  <svg>
    <defs>
    </defs>
    <g>
    </g> (multiple g tags)
  </svg>
</div>
I use jQuery and a dropdown to filter the view, where I literally only change the div from "display:inline-block" to "display:none". The divs which are meant to be hidden are, but the showing divs with their child svgs are missing layers (entire g tags). When I change the filter setting back to all of them, the layers return. As you can see in the code, it's very simple, and the un-filter (id = -1) results in the same condition as the shown divs in a filtered view.
function filterCell(id) {
$('.theseGauges').each(function () {
let filteredList = CellM.filter(a => a.cellNameId == id);
if (parseInt(id) == -1) {
$(this).css({ 'display': "inline-block" });
} else {
let a = filteredList.findIndex(b => b.machineId == $(this).attr("mid"));
if (a != -1) {
$(this).css({ 'display': "inline-block" });
} else {
$(this).css({ 'display': 'none' });
}
}
})
}
To add to the strangeness, I also have the ability to filter these gauges by location. This works perfectly in a filtered and unfiltered state (in this case, thisOption == 1 is all locations).
$('#filterMachineLocation').change(function (e) {
let thisOption = $(this).val();
$('.theseGauges').each(function () {
if (parseInt(thisOption) == 1) {
$(this).css({ 'display': "inline-block" });
} else if (parseInt($(this).attr('loc')) != thisOption) {
$(this).css({ 'display': 'none' });
} else {
$(this).css({ 'display': "inline-block" });
}
});
I've tried this with the same result in Chrome and Edge. I opened the dev tools and copied the div HTML in both the complete and incomplete states and ran it through WinMerge. They are the same.
Any ideas are appreciated
A:
@enxaneta was right on with the answer. The real problem was that I was loading the defs into each svg and creating duplicate ids for my gradients. It didn't work the first time because I was still iterating and making duplicate ids. Thanks for your help!
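For anyone hitting the same thing, the usual fix is to make each gauge's gradient ids unique before injecting the defs. A rough jQuery sketch (selector and naming are illustrative, and it assumes one gradient per gauge):
$('.theseGauges svg').each(function (index) {
    // give this gauge's gradient its own id...
    $(this).find('linearGradient').attr('id', 'gaugeGradient-' + index);
    // ...and repoint the fills at that id instead of the shared one
    $(this).find('[fill^="url("]').attr('fill', 'url(#gaugeGradient-' + index + ')');
});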
Q:
what operations in the __main of KEIL
Before running the main() function in a user's application, it will IMPORT __main and execute __main, so I wonder what this function does.
__main
copy rw variables from flash to ram?
initialize bss section?
initialize stack/heap section?
anything else?
Does it initialize according to the scatter file which defines the execution regions?
A:
Copied from https://developer.arm.com/documentation/100748/0618/Embedded-Software-Development/Application-startup
Application startup
In most embedded systems, an initialization sequence executes to set up the system before the main task is executed.
The following figure shows the default initialization sequence.
Figure 1. Default initialization sequence
__main is responsible for setting up the memory and __rt_entry is responsible for setting up the run-time environment.
__main performs code and data copying, decompression, and zero initialization of the ZI data. It then branches to __rt_entry to set up the stack and heap, initialize the library functions and static data, and call any top level C++ constructors. __rt_entry then branches to main(), the entry to your application. When the main application has finished executing, __rt_entry shuts down the library, then hands control back to the debugger.
The function label main() has a special significance. The presence of a main() function forces the linker to link in the initialization code in __main and __rt_entry. Without a function labeled main(), the initialization sequence is not linked in, and as a result, some standard C library functionality is not supported.
Q:
Is defining nested functions forbidden in Julia?
I can say I've never seen any package maintainer define nested functions so far:
function foo()
function bar()
# do
end
# do
end
Is it forbidden in Julia, or can it cause a performance reduction?
A:
To expand on DanGetz response:
It is allowed.
It does not impact performance if used correctly.
It can impact performance or code correctness if used incorrectly (especially if you capture variables from outer scope), so you need to be careful.
Within functions defining anonymous functions is much more common.
If you want to learn more about the potential performance impact see here.
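To make the capture point concrete, a minimal sketch (names are illustrative):
function counter()
    n = 0              # local to counter
    bump() = (n += 1)  # nested function captures and mutates n (a closure)
    bump(); bump()
    return n           # returns 2; the captured-and-mutated n may get boxed,
end                    # which is the usual source of the performance cost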
Q:
CSS for circle avatar doesn’t work for Safari browser
I’m using a Gatsby React template for my blog, but when I open it in the Safari browser (iPhone or Mac) the avatar image is not a circle.
easy-code.blog
This is the style I used for the avatar
.bio-avatar {
margin-right: var(--spacing-4);
margin-bottom: var(--spacing-0);
min-width: 50px;
border-radius: 100%;
}
A:
You should replace your CSS with this; the issue is the border-radius value.
.bio-avatar {
width: 50px;
height: 50px;
border-radius: 50%;
overflow: hidden;
}
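For reference, the rule above assumes the avatar is an element with equal width and height, e.g. an img carrying that class:
<img class="bio-avatar" src="avatar.jpg" alt="avatar" />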
Q:
Do BERT models need pre-processed text?
Do BERT models need pre-processed text (like removing special characters, stop words, etc.), or can I directly pass my text as-is to BERT models (HuggingFace libraries)?
note: Follow up question to: String cleaning/preprocessing for BERT
A:
Cleaning the input text for transformer models is not required. Removing stop words (which are considered as noise in conventional text representation like bag-of-words or tf-idf) can and probably will worsen the predictions of your BERT model.
Since BERT is making use of the self-attention mechanism these 'stop words' are valuable information for BERT.
Consider the following example:
Python's NLTK library considers words like 'her' or 'him' as stop words. Let's say we want to process a text like: 'I told her about the best restaurants in town'.
Removing stop words with NLTK would give us: 'I told best restaurants town'. As you can see a lot of information is being discarded. Sure, we could try and train a classic ML classifier (i.e. topic classification, here food) but BERT captures a lot more semantic information based on the surroundings of words.
A:
You need to tokenize your text first. The BertTokenizer class handles everything you need from raw text to tokens. See this:
from transformers import BertTokenizer, BertModel
import torch
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
inputs = tokenizer("Hello, my dog is cute", return_tensors="pt")
outputs = model(**inputs)
last_hidden_states = outputs.last_hidden_state
A:
In my experience, pre-processing is not required either when training or when inferring with BERT. I can explain it with a few examples:
To continue from @Arthuro's answer: the stop words actually are valuable, and BERT internally maps relations between different words.
We should not even clean things like hyperlinks or Twitter handle mentions (e.g. @someones_twitter_handle). The reason is subword tokenization! BERT uses a special subword tokenization called WordPiece tokenization, which breaks words into subwords. HuggingFace has a really nice article that explains how this works.
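A quick sketch of that subword behavior with the Hugging Face tokenizer (the exact split depends on the vocabulary, so the pieces in the comment are only illustrative):
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
print(tokenizer.tokenize("I told her about @someones_twitter_handle"))
# WordPiece breaks out-of-vocabulary strings into known subword pieces,
# e.g. something like ['i', 'told', 'her', 'about', '@', 'someone', '##s', ...]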
Q:
Program python that launch when start Windows
I know there are a lot of similar questions, but I don't understand how to make a Python program launch when the PC starts, so please show me how.
I want code or an explanation of how to create a Python program that starts when the PC is launched.
A:
This kind of run-at-startup execution of programs/scripts is usually set up through the Task Scheduler.
A simple tutorial is to follow these steps:
1: At the windows search box, type: task scheduler
2: Open Task scheduler
3: From the Action menu select Create Task.
4: At General tab, type a name for the task. e.g. "StartPythonScript" and select Run with highest privileges.
5.1: At Triggers tab, click New.
5.2: Select to begin the task: At system startup and click OK.
6.1: At Actions tab, click New.
6.2: At New Action window, select the option start a program and then click Browse.
6.3: Choose the script that you want to run at startup and click Open.
6.4: Click OK.
7: At Conditions tab, clear the Start the task only if the computer is on AC Power checkbox and click OK.
8: Restart your PC to apply the change.
Hope this solution will help you, anyway you can learn more about task scheduler here: https://learn.microsoft.com/en-us/windows/win32/taskschd/task-scheduler-start-page
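If you prefer the command line over the GUI, the equivalent task can be created with schtasks (the task name and paths below are placeholders):
schtasks /Create /TN "StartPythonScript" /TR "C:\Python\python.exe C:\scripts\myscript.py" /SC ONSTART /RL HIGHEST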
Q:
%in% operator not operating as expected with reactive statement of a Shiny app
I have a Shiny app that takes a dataset and filters it through several user inputs. To do this, I use selectizeInput functions where the user can select one or many options from a list and then these selections are run through reactive statements to get the desired final dataset. I've noticed recently that this no longer works in one of the places I have the app hosted; this app was built and deployed with Shiny 1.6.0 and it's still working in that location, but it isn't working in another spot that has Shiny 1.7.3. I'm wondering if this may be an issue with newer versions of Shiny. Here is an example where multiple selections causes the resulting table to not populate:
library(shiny)
library(dplyr)
data <- mtcars
ui <- fluidPage(
fluidRow(
column(width = 4, wellPanel(
selectizeInput("carb", "carb:", c("All", sort(unique(data$carb))),
selected = "All", multiple = TRUE,
options = list('plugins' = list('remove_button'),
'create' = TRUE, 'persist' = FALSE)))),
column(width = 8, wellPanel(tableOutput("table")))
)
)
server <- function(input,output,session){
process <- reactive({
req(input$carb) # require some input
if(input$carb == "All"){data} #pass entire dataset if selected
else(data %>% dplyr::filter(carb %in% input$carb))}) # will not work with > 1 selected
output$table <- renderTable({process()})
}
shinyApp(ui = ui, server = server)
Selecting just one value allows everything to work fine, but there's an error about the condition having length > 1 if multiple values are selected. Previously when this worked, I was able to select something like 1,2, and 4 for the carb variable and the resulting table would show all rows with one of those three values. I know the input is getting passed on to the argument by adding a renderTable statement into the server:
output$test <- renderTable({as.data.frame(input$carb)})
However, this isn't working when I'm trying to filter the full dataset. I can run everything when selectizeInput(multiple = FALSE), but ideally it should be equal to TRUE so the user has more functionality.
A:
The problem is not the %in% in the filter. The problem is in the following statement
if(input$carb == "All") {data}
If you want carb to allow for multiple values, you can't test for equality with just "All". If you want to test whether All is in the selected options, use something like this instead
if("All" %in% input$carb) {data}
A:
The error comes from input$carb == "All": as soon as you select multiple elements, the comparison returns a logical vector of the same length as the input, which if() cannot handle. You can try either of these:
if (all(input$carb == "All")) # shows all rows only if "All" alone is selected
OR
if ("All" %in% input$carb) # shows all rows as soon as "All" is selected
Furthermore, you should use curly brackets in else. It doesn't really matter here, but it keeps the syntax consistent.
Q:
Cancel git merge after git commit
I made some changes on my branch, fetched changes, and ran git merge. I then made more local changes and wanted to amend my commit: git commit --amend.
But when trying to push I received that error:
remote: error: GH006: Protected branch update failed for refs/heads/v1.
remote: error: This branch must not contain merge commits.
! [remote rejected] v1 -> v1 (protected branch hook declined)
error: failed to push some refs to xxx
I tried to abort but:
git merge --abort
fatal: There is no merge to abort (MERGE_HEAD missing).
How can I cancel the merge commit without losing my local changes?
A:
I was able to cancel the merge by cancelling the commit, and to keep my local changes by doing:
git reset --soft HEAD~1
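Note that if the commit you amended is the merge commit itself, HEAD~1 is its first parent, so the reset drops the merge while leaving your local changes staged. It's worth checking what you're about to reset first, e.g.:
git log --oneline --graph -5
git status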
A:
I would recommend creating a new branch because the following steps could delete some of your changes permanently. Also create a backup of your directory just in case.
First checkout a new branch from your current one, e.g. git checkout -b my-new-branch.
Then hard set your branch to your last commit on your branch before you merged in the other branch: git reset --hard <last commit reference, e.g. 134124124124>.
Then you could cherry pick any commits you made on your old branch after the merge commit, e.g. git cherry-pick 123123123.
Q:
create a non-unique ID column, where a new ID is created every time a numeric sequence resets
I have the following table:
product price date
banana 90 2022-01-01
banana 90 2022-01-02
banana 90 2022-01-03
banana 95 2022-01-04
banana 90 2022-01-05
banana 90 2022-01-06
I need to add a non-unique ID column to the table. Every time the price changes, I want the ID to change. This would result in the following table.
id product price date
A banana 90 2022-01-01
A banana 90 2022-01-02
A banana 90 2022-01-03
B banana 95 2022-01-04
C banana 90 2022-01-05
C banana 90 2022-01-06
By searching for answers on SO and Google, I was able to create a column (my_seq) that contains a sequence that resets every time the price changes (see the SQL Fiddle for my query). But I still don't know how to create an ID column that changes every time my_seq starts over.
my_seq rn1 rn2 product price date
1 1 1 banana 90 2022-01-01
2 2 2 banana 90 2022-01-02
3 3 3 banana 90 2022-01-03
1 1 4 banana 95 2022-01-04
1 4 5 banana 90 2022-01-05
2 5 6 banana 90 2022-01-06
sql-fiddle with DDL and my query
thanks
A:
You are already halfway there with the query in your Fiddle.
Consider what you get if you subtract your rn1 and rn2 values - you get the values you need to group by.
If you want an increasing sequence you can then apply a dense rank:
with cte as (
  select *,
    Row_Number() over(order by date)
    - Row_Number() over(partition by product, price order by date) rn
  from my_table
)
select Dense_Rank() over(order by rn) as my_seq,
    product, price, date
from cte;
Modified fiddle
A:
This will generate a new sequence number whenever the price changes.
with cte as (
select
row_number() over(w) as price_change,
product, price, date
from
my_table_1
window w as (partition by product, price order by date)
)
select
product, price, date,
row_number() over(partition by price_change order by price_change) as seq_no
from cte
order by seq_no, price_change
output:
product|price|date      |seq_no|
-------+-----+----------+------+
banana |   90|2022-01-01|     1|
banana |   90|2022-01-02|     1|
banana |   90|2022-01-03|     1|
banana |   90|2022-01-05|     1|
banana |   90|2022-01-06|     1|
banana |   95|2022-01-04|     2|
A:
with cte as (
  select coalesce( (lag(price) over w1) <> price
                 , false )::int changes
       , *
  from my_table
  window w1 as (partition by product order by date) )
select sum(changes) over w2 as your_seq
     , product, price, date
from cte
window w2 as (
  partition by product
  order by date
  groups between unbounded preceding and current row);

 your_seq | product | price | date
----------+---------+-------+------------
        0 | apple   |    90 | 2022-01-01
        0 | apple   |    90 | 2022-01-02
        0 | apple   |    90 | 2022-01-03
        1 | apple   |    95 | 2022-01-04
        2 | apple   |    90 | 2022-01-05
        2 | apple   |    90 | 2022-01-06
        0 | banana  |    90 | 2022-01-01
        0 | banana  |    90 | 2022-01-02
        0 | banana  |    90 | 2022-01-03
        1 | banana  |    95 | 2022-01-04
        2 | banana  |    90 | 2022-01-05
        2 | banana  |    90 | 2022-01-06
Q:
When agm-map is inside component, map is not rendered
Goal
I'm trying to wrap an <agm-map> inside my own <app-map> component, but it is not even being rendered in the HTML.
The agm (Angular Google Maps) library is well configured and the map shows fine when the tag is used alone, but I need to provide my own component.
Things that I have tried
https://github.com/sebholstein/angular-google-maps/issues/1101
https://github.com/sebholstein/angular-google-maps/issues/1319
https://stackoverflow.com/a/55099459/1461862
Here is the code for the component:
map.component.html
<agm-map flex #map [latitude]="latitude" [zoomControl]="true" [longitude]="longitude" [zoom]="zoomLevel" [zoomControl]="hasZoomControls">
<agm-marker [latitude]="latitude" [longitude]="longitude" [markerDraggable]="isMarkerDraggable" (dragEnd)="onDragEnd($event)"> </agm-marker>
</agm-map>
map.component.css
agm-map {
height: 130px;
width: 100%;
}
map.component.ts
import { AfterViewInit, Component, Input, ViewChild } from '@angular/core';
import { AgmMap } from '@agm/core';
@Component({
selector: 'app-map',
templateUrl: './map.component.html',
styleUrls: ['./map.component.scss'],
})
export class MapComponent implements AfterViewInit {
@Input() latitude = 0;
@Input() longitude = 0;
@Input() zoomLevel = 4;
@Input() hasZoomControls = true;
@Input() isMarkerDraggable = true;
@ViewChild('map') public agmMap: AgmMap;
ngAfterViewInit(): void {
console.log('ngAfterViewInit');
if (this.agmMap) {
setTimeout(() => {
console.log('Resizing');
this.agmMap.triggerResize();
}, 100);
} else {
console.log('map was not rendered');
}
}
onDragEnd(event): void {
console.log('dragged');
}
}
And in shared.module.ts I have added the new component:
@NgModule({
declarations: [...,MapComponent]
What can I be missing?
A:
All I needed was to export the component from the shared module:
exports: [ ..., MapComponent],
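Put together, a minimal sketch of the resulting shared module (the import path and module name are assumptions, not taken from the question):
// shared.module.ts -- sketch; assumes AgmCoreModule.forRoot(...) is already
// configured in the application's root module.
import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';
import { AgmCoreModule } from '@agm/core';
import { MapComponent } from './map/map.component'; // hypothetical path

@NgModule({
  imports: [CommonModule, AgmCoreModule],
  declarations: [MapComponent], // lets this module compile <app-map>
  exports: [MapComponent],      // lets modules importing SharedModule use <app-map>
})
export class SharedModule {}
Without the exports entry, MapComponent stays private to SharedModule, so templates in other modules cannot resolve the <app-map> tag, which matches the symptom of the map never rendering.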
|
When agm-map is inside component, map is not rendered
|
Goal
I'm trying to wrap an <agm-map> inside my own <app-map> component, but it is not even being rendered in the HTML.
The agm (Angular Google Maps) library is configured correctly, and the map shows fine when the tag is used on its own, but I need to provide my own component.
Things that I have tried
https://github.com/sebholstein/angular-google-maps/issues/1101
https://github.com/sebholstein/angular-google-maps/issues/1319
https://stackoverflow.com/a/55099459/1461862
Here is the code for the component:
map.component.html
<agm-map flex #map [latitude]="latitude" [zoomControl]="true" [longitude]="longitude" [zoom]="zoomLevel" [zoomControl]="hasZoomControls">
<agm-marker [latitude]="latitude" [longitude]="longitude" [markerDraggable]="isMarkerDraggable" (dragEnd)="onDragEnd($event)"> </agm-marker>
</agm-map>
map.component.css
agm-map {
height: 130px;
width: 100%;
}
map.component.ts
import { AfterViewInit, Component, Input, ViewChild } from '@angular/core';
import { AgmMap } from '@agm/core';
@Component({
selector: 'app-map',
templateUrl: './map.component.html',
styleUrls: ['./map.component.scss'],
})
export class MapComponent implements AfterViewInit {
@Input() latitude = 0;
@Input() longitude = 0;
@Input() zoomLevel = 4;
@Input() hasZoomControls = true;
@Input() isMarkerDraggable = true;
@ViewChild('map') public agmMap: AgmMap;
ngAfterViewInit(): void {
console.log('ngAfterViewInit');
if (this.agmMap) {
setTimeout(() => {
console.log('Resizing');
this.agmMap.triggerResize();
}, 100);
} else {
console.log('map was not rendered');
}
}
onDragEnd(event): void {
console.log('dragged');
}
}
And in shared.module.ts I have added the new component:
@NgModule({
declarations: [...,MapComponent]
What can I be missing?
|
[
"All I needed was to export the component on the shared module\nexports: [ ..., MapComponent],\n\n"
] |
[
0
] |
[] |
[] |
[
"angular",
"google_maps"
] |
stackoverflow_0074660952_angular_google_maps.txt
|
Q:
Structures for C programming
Here is the current code for averaging grades from 3 students. Underneath the code are the errors I'm facing; I was able to work them down from 25 to 4.
Visual Studio is saying I have an unidentified expression: E0029, E0020, C2065, C2109.
I don't believe I'm using pointers, as we were told not to use them. I read the error codes and looked them up, but I can't determine where I'm falling short.
Thank you in advance.
#include <stdio.h>
#define N 5
struct student {
char firstName[50];
int roll;
float marks[3];
}s[N];
typedef struct {
char firstName[50];
int roll;
float marks[3];
}student_t;
int main() {
int i;
double total = 0;
double marks_avg ;
student_t personA, personB, SMCstudent[N];
SMCstudent->marks[0] = 99;
personA= { "Daniel",10 {100,98,90}
};
printf("Enter information of student:\n");
//storing
for (i = 0;i < N;++i); {
s[i].roll = i + 1;
printf("\nFor all number %d,\n", s[i].roll);
printf("Enter first name:");
scanf_s("%s", &s[i].marks);
}
printf("Enter 3 marks");
for (int j = 0; j < 3; j++) {
printf("enter grade\n");
scanf_s("%f", s[i].marks[j]);
scanf_s("%f", & array[i]);
}
for (i = 0; i < N; i++) {
total += s[i].marks[2];
}
marks_avg = total / N;
printf("average grade for 5 students is %f\n\n", marks_avg);
printf("displaying information:\n\n");
return 0;
}
A:
@RetiredNinja claims that you can only do a structure assignment during initialization.
This is simply not true.
You need to specify the type name as well, but this code is valid:
personA = (student_t){ "Daniel", 10, {100.f,98.f,90.f} };
This is assigning a "compound literal" to a structure.
(l-value personA is the structure, and the right side, (student_t){...} is the compound literal)
Example, working code:
#include <stdio.h>
typedef struct {
char firstName[50];
int roll;
float marks[3];
}student_t;
int main(void) {
student_t personA;
personA= (student_t){ "Daniel",10, {100.f,98.f,90.f} };
printf("%s(%d): %.2f %.2f %.2f", personA.firstName, personA.roll,
personA.marks[0], personA.marks[1], personA.marks[2]);
return 0;
}
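As a related sketch (an alternative, not part of the answer above): if the values are known at the point of declaration, designated initializers, also C99 and later, avoid the compound literal entirely.
#include <stdio.h>

typedef struct {
    char firstName[50];
    int roll;
    float marks[3];
} student_t;

int main(void) {
    /* Designated initializers are only valid in an initializer, i.e. at the
       point of declaration; a later assignment needs the (student_t){...}
       compound-literal form shown above. */
    student_t personB = { .firstName = "Daniel", .roll = 10,
                          .marks = {100.f, 98.f, 90.f} };

    printf("%s(%d): %.2f %.2f %.2f\n", personB.firstName, personB.roll,
           personB.marks[0], personB.marks[1], personB.marks[2]);
    return 0;
}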
|
Structures for C programming
|
Here is the current code for averaging grades from 3 students. Underneath the code are the errors I'm facing; I was able to work them down from 25 to 4.
Visual Studio is saying I have an unidentified expression: E0029, E0020, C2065, C2109.
I don't believe I'm using pointers, as we were told not to use them. I read the error codes and looked them up, but I can't determine where I'm falling short.
Thank you in advance.
#include <stdio.h>
#define N 5
struct student {
char firstName[50];
int roll;
float marks[3];
}s[N];
typedef struct {
char firstName[50];
int roll;
float marks[3];
}student_t;
int main() {
int i;
double total = 0;
double marks_avg ;
student_t personA, personB, SMCstudent[N];
SMCstudent->marks[0] = 99;
personA= { "Daniel",10 {100,98,90}
};
printf("Enter information of student:\n");
//storing
for (i = 0;i < N;++i); {
s[i].roll = i + 1;
printf("\nFor all number %d,\n", s[i].roll);
printf("Enter first name:");
scanf_s("%s", &s[i].marks);
}
printf("Enter 3 marks");
for (int j = 0; j < 3; j++) {
printf("enter grade\n");
scanf_s("%f", s[i].marks[j]);
scanf_s("%f", & array[i]);
}
for (i = 0; i < N; i++) {
total += s[i].marks[2];
}
marks_avg = total / N;
printf("average grade for 5 students is %f\n\n", marks_avg);
printf("displaying information:\n\n");
return 0;
}
|
[
"@RetiredNinja claims that you can only do a structure assignment during initialization.\nThis is simply not true.\nYou need to specify the type name as well, but this code is valid:\npersonA = (student_t){ \"Daniel\", 10, {100.f,98.f,90.f} };\n\nThis is assigning a \"compound literal\" to a structure.\n(l-value personA is the structure, and the right side, (student_t){...} is the compound literal)\nExample, working code:\n#include <stdio.h>\n\ntypedef struct {\n char firstName[50];\n int roll;\n float marks[3];\n}student_t;\n\nint main(void) {\n student_t personA;\n \n personA= (student_t){ \"Daniel\",10, {100.f,98.f,90.f} };\n \n printf(\"%s(%d): %.2f %.2f %.2f\", personA.firstName, personA.roll,\n personA.marks[0], personA.marks[1], personA.marks[2]);\n \n return 0;\n}\n\n"
] |
[
2
] |
[] |
[] |
[
"c",
"struct"
] |
stackoverflow_0074661372_c_struct.txt
|