<issue_start>username_0: Is there a simple way to find all binary strings whose digits add up to a value of x points, assuming that all ones are worth 2 points and all zeros are worth 1 point? Let me explain: given the number 5, how could I get all possible strings such that 2*(number of ones) + 1*(number of zeros) = 5?

*All results for 5:* 00000 1000 0100 0010 0001 101 110 011

(I do know that the number of possible solutions is the Fibonacci number of 5+1 (x+1), but I cannot think of a way to find all values.) I was thinking of adding numbers in binary or maybe using a base converter, but I may be missing something here. Thank you in advance.<issue_comment>username_1: With a single loop you can generate the base strings (in your case "00000", "0001" and "011"), then use `std::next_permutation()`:

```
for( int zeros = n; zeros >= 0; zeros -= 2 ) {
    int ones = ( n - zeros ) / 2;
    std::string base = std::string( zeros, '0' ) + std::string( ones, '1' );
}
```

[live example](https://ideone.com/faNU5m)

Upvotes: 2 [selected_answer]<issue_comment>username_2: Try something like this:

```
#include <iostream>
#include <string>
#include <vector>

void getSums(int sum, std::vector<std::string>& results, std::string currSum) {
    if (0 == sum) {
        results.emplace_back(currSum.c_str());
        return;
    } else if (sum < 0) {
        return;
    } else {
        getSums(sum-2, results, currSum+"1");
        getSums(sum-1, results, currSum+"0");
    }
}

std::vector<std::string> getAllSums(int sum) {
    std::vector<std::string> results;
    std::string currSum;
    getSums(sum, results, currSum);
    return results;
}

int main() {
    std::vector<std::string> res = getAllSums(5);
    for (std::string& r : res) {
        std::cout << r << std::endl;
    }
}
```

Or switch to DP and cache results.

Upvotes: 0
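
To make username_1's skeleton concrete, here is a minimal self-contained sketch (my completion, not part of the original answer): each base string is already in sorted order, so looping `std::next_permutation()` until it returns false enumerates every distinct arrangement.

```
#include <algorithm>
#include <iostream>
#include <string>

int main() {
    int n = 5; // target point value
    for (int zeros = n; zeros >= 0; zeros -= 2) {
        int ones = (n - zeros) / 2;
        // '0' sorts before '1', so this is the lexicographically smallest arrangement.
        std::string s = std::string(zeros, '0') + std::string(ones, '1');
        do {
            std::cout << s << '\n';
        } while (std::next_permutation(s.begin(), s.end()));
    }
}
```

For n = 5 this prints all 8 strings (Fib(6), matching the count mentioned in the question).
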
<issue_start>username_0: I have used Oracle for the past half year and learned some tricks of SQL tuning, but now our DB is moving to Greenplum and the project manager suggests we change some of the code that was written in Oracle SQL, for efficiency or grammar reasons. I am curious: are SQL tuning techniques the same for different DB engines, like Oracle, PostgreSQL, MySQL and so on? If yes or no, why? Any suggestions are welcome!

Some examples:

1. `in` or `exists`
2. `count(*)` or `count(column)`
3. use an index or not
4. use exact columns instead of `select *`

(A concrete illustration of point 1 follows the answers below.)<issue_comment>username_1: Getting into the specifics of why they differ requires someone to be an expert in both in order to compare them. I don't claim to know much of Greenplum. The basic principles which I would expect all developers to learn over time don't really change. But there are "quirks" of individual engines which make specific differences. From your question I would personally anticipate 1 and 4 to remain the same. Indexing is something which does vary. For example, the ability to use two indexes was not (is not?) ubiquitous. I wouldn't like to guess which DBMS can / can't count columns from the second field in a composite index. And the way indexes are maintained is very different from one DBMS to the next. From my own experience I've also seen differences caused by: Different capabilities in the data access path. As an example, one optimisation is for a DBMS to create a bitmap of rows (matching and not matching), then combine multiple bitmaps to select rows. A DBMS with this feature can use multiple indexes in a single query. One without it can't. Availability of hints / lack of hints. Not all DBMS support them. I know they are very common in Oracle. Different locking strategies. This is a big one and can really affect update and insert queries. In some cases DBMS have very specific capabilities for certain types of data such as geographic data or searchable free text (natural language). In these cases the way of working with the data is entirely different from one DBMS to the next.

Upvotes: 2 [selected_answer]<issue_comment>username_2: For the most part the syntax that is used will remain the same; there may be small differences from one engine to another, and you may run into different terms to achieve some of the more specific output or do more complex tasks. In order to achieve parity you will need to learn those new terms. As far as tuning, this will vary from system to system. Specifically, going from Oracle to Greenplum you are moving from a database where query efficiency is often driven by putting an index on the data, whereas Greenplum is a parallel execution system where efficiency is gained by effectively distributing the data across multiple systems and querying them in parallel. In Greenplum, indexing is an additional layer that usually does not add benefit, just additional overhead. Even within a single system, changing the storage engine type can result in different ways to optimize a query. In practice, queries are often moved to a new platform and work, but are far from optimal because they don't take advantage of that platform's optimizations. I would strongly suggest getting an understanding of the new platform, and you should not go in assuming a query that is optimized for one platform is the optimal way to run it on another.

Upvotes: 2
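
As a concrete illustration of point 1 (my example, with a hypothetical `orders`/`customers` schema, not taken from the thread): the two forms below are logically equivalent, and whether one outperforms the other is exactly the kind of engine-specific question the answers describe; many modern optimizers rewrite them into the same plan.

```
-- Hypothetical schema: orders(customer_id, ...), customers(id, region)

-- Form 1: IN with a subquery
SELECT *
FROM   orders o
WHERE  o.customer_id IN (SELECT c.id FROM customers c WHERE c.region = 'EU');

-- Form 2: the equivalent correlated EXISTS
SELECT *
FROM   orders o
WHERE  EXISTS (SELECT 1
               FROM   customers c
               WHERE  c.id = o.customer_id
               AND    c.region = 'EU');
```
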
<issue_start>username_0: I'm trying to run Docker with TensorFlow using Nvidia GPUs; however, when I run my container I get the following error:

```
pgp_1 | Traceback (most recent call last):
pgp_1 |   File "/opt/app-root/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
pgp_1 |     from tensorflow.python.pywrap_tensorflow_internal import *
pgp_1 |   File "/opt/app-root/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
pgp_1 |     _pywrap_tensorflow_internal = swig_import_helper()
pgp_1 |   File "/opt/app-root/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
pgp_1 |     _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
pgp_1 |   File "/opt/app-root/lib64/python3.6/imp.py", line 243, in load_module
pgp_1 |     return load_dynamic(name, filename, file)
pgp_1 |   File "/opt/app-root/lib64/python3.6/imp.py", line 343, in load_dynamic
pgp_1 |     return _load(spec)
pgp_1 | ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
```

**Docker-compose**

My docker compose file looks like:

```
version: '3'
services:
  pgp:
    devices:
      - /dev/nvidia0
      - /dev/nvidia1
      - /dev/nvidia2
      - /dev/nvidia3
      - /dev/nvidia4
      - /dev/nvidiactl
      - /dev/nvidia-uvm
    image: "myimg/pgp"
    ports:
      - "5000:5000"
    environment:
      - LD_LIBRARY_PATH=/opt/local/cuda/lib64/
      - GPU_DEVICE=4
      - NVIDIA_VISIBLE_DEVICES all
      - NVIDIA_DRIVER_CAPABILITIES compute,utility
    volumes:
      - ./train_package:/opt/app-root/src/train_package
      - /usr/local/cuda/lib64/:/opt/local/cuda/lib64/
```

As you can see, I have tried having a volume to map host CUDA into the Docker container, but this didn't help. I am able to successfully run `nvidia-docker run --rm nvidia/cuda nvidia-smi`.

Versions
========

**Cuda**

`cat /usr/local/cuda/version.txt` shows CUDA Version 9.0.176

**nvcc -V**

```
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2017 NVIDIA Corporation
Built on Fri_Sep__1_21:08:03_CDT_2017
Cuda compilation tools, release 9.0, V9.0.176
```

**nvidia-docker version**

```
NVIDIA Docker: 2.0.3
Client:
 Version: 17.12.1-ce
 API version: 1.35
 Go version: go1.9.4
 Git commit: 7390fc6
 Built: Tue Feb 27 22:17:40 2018
 OS/Arch: linux/amd64
Server:
 Engine:
  Version: 17.12.1-ce
  API version: 1.35 (minimum version 1.12)
  Go version: go1.9.4
  Git commit: 7390fc6
  Built: Tue Feb 27 22:16:13 2018
  OS/Arch: linux/amd64
 Experimental: false
```

**Tensorflow** 1.5 with GPU support, via pip

```
ldconfig -p | grep cuda
libnvrtc.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvrtc.so.9.0 libnvrtc.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvrtc.so libnvrtc-builtins.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvrtc-builtins.so.9.0 libnvrtc-builtins.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvrtc-builtins.so libnvgraph.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvgraph.so.9.0 libnvgraph.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvgraph.so libnvblas.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvblas.so.9.0 libnvblas.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvblas.so libnvToolsExt.so.1 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvToolsExt.so.1 libnvToolsExt.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnvToolsExt.so libnpps.so.9.0 (libc6,x86-64) =>
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libnpps.so.9.0 libnpps.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnpps.so libnppitc.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppitc.so.9.0 libnppitc.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppitc.so libnppisu.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppisu.so.9.0 libnppisu.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppisu.so libnppist.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppist.so.9.0 libnppist.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppist.so libnppim.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppim.so.9.0 libnppim.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppim.so libnppig.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppig.so.9.0 libnppig.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppig.so libnppif.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppif.so.9.0 libnppif.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppif.so libnppidei.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppidei.so.9.0 libnppidei.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppidei.so libnppicom.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppicom.so.9.0 libnppicom.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppicom.so libnppicc.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppicc.so.9.0 libnppicc.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppicc.so libnppial.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppial.so.9.0 libnppial.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppial.so libnppc.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppc.so.9.0 libnppc.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libnppc.so libicudata.so.55 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libicudata.so.55 libcusparse.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcusparse.so.9.0 libcusparse.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcusparse.so libcusolver.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcusolver.so.9.0 libcusolver.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcusolver.so libcurand.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcurand.so.9.0 libcurand.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcurand.so libcuinj64.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcuinj64.so.9.0 libcuinj64.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcuinj64.so libcufftw.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcufftw.so.9.0 libcufftw.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcufftw.so libcufft.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcufft.so.9.0 libcufft.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcufft.so libcudart.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudart.so.9.0 libcudart.so.7.5 (libc6,x86-64) => 
/usr/lib/x86_64-linux-gnu/libcudart.so.7.5 libcudart.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudart.so libcudart.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcudart.so libcuda.so.1 (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcuda.so.1 libcuda.so (libc6,x86-64) => /usr/lib/x86_64-linux-gnu/libcuda.so libcublas.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcublas.so.9.0 libcublas.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libcublas.so libaccinj64.so.9.0 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libaccinj64.so.9.0 libaccinj64.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libaccinj64.so libOpenCL.so.1 (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libOpenCL.so.1 libOpenCL.so (libc6,x86-64) => /usr/local/cuda-9.0/targets/x86_64-linux/lib/libOpenCL.so
```

Tests with Tensorflow on Docker vs host
---------------------------------------

The following works, when running on the host:

```
python3 -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"
v1.5.0-0-g37aa430d84 1.5.0
```

Run container
=============

`nvidia-docker run -d --name testtfgpu -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu`

Log in
======

`nvidia-docker exec -it testtfgpu bash`

Test Tensorflow version
=======================

`pip show tensorflow-gpu` shows:

```
Name: tensorflow-gpu
Version: 1.6.0
Summary: TensorFlow helps the tensors flow
Home-page: https://www.tensorflow.org/
Author: Google Inc.
Author-email: <EMAIL>
License: Apache 2.0
Location: /usr/local/lib/python2.7/dist-packages
Requires: astor, protobuf, gast, tensorboard, six, wheel, absl-py, backports.weakref, termcolor, enum34, numpy, grpcio, mock
```

Python 2
--------

`python -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`

Results in:

```
Illegal instruction (core dumped)
```

Python 3
--------

`python3 -c "import tensorflow as tf; print(tf.GIT_VERSION, tf.VERSION)"`

Results in:

```
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named 'tensorflow'
```
<issue_comment>username_1: It looks like a conflict between CUDA's version and TensorFlow's. First, try to check your CUDA version with one of the commands such as `nvcc --version` or `cat /usr/local/cuda/version.txt`. If that's 8.x, you may need to reinstall CUDA or, simpler, downgrade TensorFlow to 1.4. Otherwise, if your CUDA is 9.x, you need TensorFlow 1.5 or newer. Hope that helps.

Upvotes: 0 <issue_comment>username_2: The problem is because of your cuDNN version. TensorFlow-GPU 1.5 supports cuDNN version 7.0.x. You can download that from [here](https://developer.nvidia.com/compute/machine-learning/cudnn/secure/v7.0.5/prod/9.0_20171129/cudnn-9.0-linux-x64-v7). Make sure that your CUDA version is 9.0.x and your cuDNN version is 7.0.x. Please refer to the link [here](https://developer.nvidia.com/rdp/cudnn-download) for more details.

Upvotes: 1
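
A possible fix, not from the original thread: plain `docker-compose` does not apply the NVIDIA hooks the way `nvidia-docker run` does, so the container never sees `libcublas.so.9.0`. Assuming nvidia-docker2 is installed (it registers an `nvidia` runtime with the Docker daemon), compose file format 2.3+ can request that runtime explicitly:

```
version: '2.3'
services:
  pgp:
    image: "myimg/pgp"
    runtime: nvidia   # provided by nvidia-docker2 / nvidia-container-runtime
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    ports:
      - "5000:5000"
```

With the runtime in place, the `devices:` list and the host CUDA library volume mount from the question should no longer be necessary.
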
<issue_start>username_0: I've been following <NAME>'s [Pro Git Book](https://git-scm.com/book/en/v2) and I'm reaching the end of chapter 3, in which we have just done a three-way merge. See the diagram below (my own drawn diagram, not a screen grab of the book):

[![enter image description here](https://i.stack.imgur.com/a7fJk.png)](https://i.stack.imgur.com/a7fJk.png)

After doing the merge of '**master**' with '**issue53**', resulting in merge commit **c6**, the author says we can delete branch '**issue53**':

```
git branch -d issue53
```

What actually happens to that branch under the hood, and what would the resultant diagram look like? I ask this because the author does not address the issue. Would the branch still be there but just not pointed to? Or is there some magic going on that I'm unaware of?<issue_comment>username_1: > What actually happens to that branch under the hood

That depends on what you mean by the word *branch*. See [What exactly do we mean by "branch"?](https://stackoverflow.com/q/25068543/1256452)

> and what would the resultant diagram look like?

Drawing it in ASCII rather than fancy graphics, I get:

```
C0<-C1<-C2<-C4<---C6   <-- master (HEAD)
         \       /
          C3<-C5
```

That is, *nothing* happens to the *commits* at all. The *name* `issue53`, however, which used to point to commit `C5`, no longer exists (at all). Since every commit in the diagram is still find-able by starting from the name `master` and working backwards, every commit remains protected from Git's garbage collection process.

Upvotes: 3 [selected_answer]<issue_comment>username_2: A branch in git is just a pointer to a commit, and it has a very simple implementation - it is a text file containing the checksum of the commit it points to. So in this example, you would have a file `.git/refs/heads/issue53` that represents your branch (is your branch). And when you delete a branch in git, you delete that text file (pointer) representing that branch. In this case, `.git/refs/heads/issue53`.

Upvotes: 1
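
To make username_2's point tangible, here is what this looks like on disk (illustrative only: the hash shown is a placeholder, and note that refs can also live in `.git/packed-refs` rather than as loose files):

```
$ cat .git/refs/heads/issue53        # the branch is just a commit hash
e2b5609ac2031e286ccd1b1f1d34cc21c4bb1f12
$ git branch -d issue53              # deletes only that pointer file
$ git log --oneline master           # C3 and C5 are still reachable through C6
```
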
<issue_start>username_0: I am making an ordering app in Android Studio. I want to show the list of orders when the "cart" button is clicked. Here is what I have when the button is clicked:

```
btnCart.setOnClickListener(new View.OnClickListener() {
    @Override
    public void onClick(View v) {
        new Database(getBaseContext()).addToCart(new Order(
            foodId,
            currentFood.getName(),
            numberButton.getNumber(),
            currentFood.getPrice()
        ));
        Toast.makeText(FoodDetail.this, "Added To Cart", Toast.LENGTH_SHORT).show();
    }
});
```

My ***addToCart*** is still empty, and this is all I have:

```
public void addToCart(Order order) {
    order.getProductId(),
    order.getProductName(),
    order.getQuantity(),
    order.getPrice());
}
```

Please help me :( Thank you! :)
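
For reference, a minimal sketch of what `addToCart` could look like (my code, not from the thread; it assumes the `Database` class extends `SQLiteOpenHelper` and that a `Cart` table with these columns already exists - the table and column names are assumptions):

```
public void addToCart(Order order) {
    SQLiteDatabase db = getWritableDatabase();   // inherited from SQLiteOpenHelper
    ContentValues values = new ContentValues();
    values.put("ProductId", order.getProductId());
    values.put("ProductName", order.getProductName());
    values.put("Quantity", order.getQuantity());
    values.put("Price", order.getPrice());
    db.insert("Cart", null, values);             // insert one row into the cart table
    db.close();
}
```
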
<issue_start>username_0: I define mSelectedItem as a public var in the class CustomAdapter, so I thought `mSelectedItem = getAdapterPosition()` would be OK when I use mSelectedItem in the inner class ViewHolder. But it fails and displays an "Unresolved reference: mSelectedItem" error. Why? Furthermore, what is a good way to call `getAdapterPosition()` in Kotlin? There is a hint which displays "This inspection reports calls to java get and set methods that can be replaced with use of Kotlin synthetic properties", but it causes an error when I use `mSelectedItem = getAdapterPosition`.

```
class CustomAdapter(val backupItemList: List<MSetting>) : RecyclerView.Adapter<CustomAdapter.ViewHolder>() {

    public var mSelectedItem = -1

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): CustomAdapter.ViewHolder {
        val v = LayoutInflater.from(parent.context).inflate(R.layout.item_recyclerview, parent, false)
        return ViewHolder(v)
    }

    override fun onBindViewHolder(holder: CustomAdapter.ViewHolder, position: Int) {
        holder.bindItems(backupItemList[position])
        holder.itemView.radioButton.setChecked(position == mSelectedItem);
    }

    override fun getItemCount(): Int {
        return backupItemList.size
    }

    class ViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
        fun bindItems(aMSetting: MSetting) {
            itemView.radioButton.tag = aMSetting._id
            itemView.textViewUsername.text = aMSetting.createdDate.toString()
            itemView.textViewAddress.text = aMSetting.description
            mSelectedItem = getAdapterPosition() // It will cause error
        }
    }
}
```
<issue_comment>username_1: The ViewHolder belongs to the RecyclerView rather than to the adapter. If you want to get the position, put `mSelectedItem = position` in onBindViewHolder. Also, the method getAdapterPosition() always works together with notifyDataSetChanged(). Hope this will help you.

Upvotes: 0 <issue_comment>username_2: If you don't want to make the ViewHolder an `inner class` (which you should not), you could create a class like AdapterSelection that has a field `var selectedItem: Int` inside it, and replace your `public var mSelectedItem = -1` with `private var mSelectedItem = AdapterSelection(-1)`. Then pass the `mSelectedItem` to the bind method (`bindItems(aMSetting: MSetting, adapterSelection: AdapterSelection)`), and inside the bind set the position: `adapterSelection.selectedItem = getAdapterPosition()`. You could have passed the adapter itself, but that is messy; that's why I suggest making another class.

Upvotes: 3 [selected_answer]
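
A compilable sketch of username_2's suggestion (my code; `MSetting`, `R.layout.item_recyclerview` and the synthetic `radioButton` property are carried over from the question, and the click-handling detail is an assumption):

```
class AdapterSelection(var selectedItem: Int)

class CustomAdapter(val backupItemList: List<MSetting>) :
    RecyclerView.Adapter<CustomAdapter.ViewHolder>() {

    private val selection = AdapterSelection(-1)

    override fun onCreateViewHolder(parent: ViewGroup, viewType: Int): ViewHolder {
        val v = LayoutInflater.from(parent.context)
            .inflate(R.layout.item_recyclerview, parent, false)
        return ViewHolder(v)
    }

    override fun onBindViewHolder(holder: ViewHolder, position: Int) {
        holder.bindItems(backupItemList[position], selection)
        holder.itemView.radioButton.isChecked = (position == selection.selectedItem)
    }

    override fun getItemCount(): Int = backupItemList.size

    class ViewHolder(itemView: View) : RecyclerView.ViewHolder(itemView) {
        fun bindItems(aMSetting: MSetting, selection: AdapterSelection) {
            itemView.radioButton.tag = aMSetting._id
            // adapterPosition is the Kotlin property form of getAdapterPosition()
            itemView.radioButton.setOnClickListener {
                selection.selectedItem = adapterPosition
            }
        }
    }
}
```
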
<issue_start>username_0: I'm trying to have a header banner in my HTML assignment, however I cannot get the header image to appear... or any background image. Here is my code. All of the files are in a folder named `assign1_1`, images are in a folder named `images`, and the CSS is in a folder named `styles`. Everything else in the CSS works, but the image doesn't. Help would be appreciated!

```css
* {
    padding: 0;
    margin: 0;
    font-family: Arial
}

#footer {
    position: absolute;
    right: 0;
    bottom: 0;
    left: 0;
    text-align: center;
}

header {
    text-align: left;
    padding: 20px;
    margin-top: 35px;
    background: url("images/bg.jpg") no-repeat;
    height: 100%;
    width: 97%;
}

nav ul {
    list-style-type: none;
    margin: 0;
    padding: 0;
    background-color: black;
    position: fixed;
    top: 0;
    width: 100%;
    display: flex;
    justify-content: space-around;
}

li {
    float: left;
}

li a {
    display: block;
    text-align: center;
    padding: 14px 16px;
    text-decoration: none;
    color: white;
}

section {
    padding: 20px;
}

body {
    background-color: lightblue;
}

li a:hover:not(.active) {
    border-bottom: 5px solid lightgreen;
}

.active {
    border-bottom: 5px solid green;
}
```

```html
Introduction to eCommerce on the Web

eCommerce on the Web
====================

* [Home](#index)
* [What is eCommerce?](#topic)
* [eCommerce Quiz](#quiz)
* [Enhancements made](#enhancements)

Do you want to learn about eCommerce?
-------------------------------------

You have come to the right spot! Click Here to learn more

[Contact me](1<EMAIL>)
```
<issue_comment>username_1: Make sure the current web page is in the same directory with "images" and "styles". Does your directory tree look like this?

```
- assign1_1
- - file.html
- - styles
- - - style.css
- - images
- - - bg.jpg
```

Or is `assign1_1` in the same folder as the others? You might want to change `images/bg.jpg` to `../images/bg.jpg` if it is. I tested your code using a stock blue-sky image from Google Images, and it seems to be working fine. You were missing a semicolon in the universal selector block with `"Arial"`.

```css
* {
    padding: 0;
    margin: 0;
    font-family: "Arial";
}

#footer {
    position: absolute;
    right: 0;
    bottom: 0;
    left: 0;
    text-align: center;
}

header {
    text-align: left;
    padding: 20px;
    margin-top: 35px;
    background: url("https://image.freepik.com/free-photo/blue-sky-with-clouds_1232-936.jpg") no-repeat;
    height: 100%;
    width: 97%;
    display: flex;
}

nav ul {
    list-style-type: none;
    margin: 0;
    padding: 0;
    background-color: black;
    position: fixed;
    top: 0;
    width: 100%;
    display: flex;
    justify-content: space-around;
}

li {
    float: left;
}

li a {
    display: block;
    text-align: center;
    padding: 14px 16px;
    text-decoration: none;
    color: white;
}

section {
    padding: 20px;
}

body {
    background-color: lightblue;
}

li a:hover:not(.active) {
    border-bottom: 5px solid lightgreen;
}

.active {
    border-bottom: 5px solid green;
}
```

```html
Introduction to eCommerce on the Web

eCommerce on the Web
====================

* [Home](#index)
* [What is eCommerce?](#topic)
* [eCommerce Quiz](#quiz)
* [Enhancements made](#enhancements)

Do you want to learn about eCommerce?
-------------------------------------

You have come to the right spot! Click Here to learn more

[Contact me](<EMAIL>)
```

I found it odd you have your footer nested inside another element, instead of being alongside the header and section. You could get rid of the nesting and just do `footer` instead of `#footer` in the CSS.
Upvotes: 1 <issue_comment>username_2: First, try to edit your elements. Edit your CSS like this (the change is `top: 60px`, or whatever suits you):

```
nav ul {
    list-style-type: none;
    margin: 0;
    padding: 0;
    background-color: black;
    position: fixed;
    top: 60px;
    width: 100%;
    display: flex;
    justify-content: space-around;
}
```

Then copy my HTML code:

```
Introduction to eCommerce on the Web

Header
======

* [Home](#index)
* [What is eCommerce?](#topic)
* [eCommerce Quiz](#quiz)
* [Enhancements made](#enhancements)

Do you want to learn about eCommerce?
-------------------------------------

You have come to the right spot! Click Here to learn more

[Contact me](<EMAIL>)
```

That is only a sample; if you want a background image, you need to set the background image inside the header class. Hope it helps you.

Upvotes: 0
<issue_start>username_0: Run the following C++ program twice: once with the given destructor, and once with `std::fesetround(value);` removed from the destructor. Why do I receive different outputs? Shouldn't the destructor be called after the function `add`? I ran both versions on <http://cpp.sh/>, Clang++ 6.0, and g++ 7.2.0. For g++, I also included `#pragma STDC FENV_ACCESS on` in the source code; nothing changed.

```
#include <cfenv>
#include <iostream>

struct raii_feround {
    raii_feround() : value(std::fegetround()) { }
    ~raii_feround() { std::fesetround(value); }
    inline void round_up  () const noexcept { std::fesetround(FE_UPWARD  ); }
    inline void round_down() const noexcept { std::fesetround(FE_DOWNWARD); }
    template<typename T>
    T add(T fst, T snd) const noexcept { return fst + snd; }
private:
    int value;
};

float a = 1.1;
float b = 1.2;
float c = 0;
float d = 0;

int main() {
    {
        raii_feround raii;
        raii.round_up();
        c = raii.add(a, b);
    }
    {
        raii_feround raii;
        raii.round_down();
        d = raii.add(a, b);
    }
    std::cout << c << "\n"; // Output is: 2.3
    std::cout << d << "\n"; // Output is: 2.3 or 2.29999
}
```
<issue_comment>username_1: Using the floating-point environment facilities requires inserting `#pragma STDC FENV_ACCESS on` into the source (or ensuring that they default to `on` for the implementation you are using). (Although `STDC` is a C feature, the C++ standard says that these facilities are imported into C++ by the `<cfenv>` header.) Doing so at cpp.sh results in "warning: ignoring #pragma STDC FENV_ACCESS [-Wunknown-pragmas]". Therefore, accessing and modifying the floating-point environment is not supported by the compiler at cpp.sh.

Upvotes: 1 <issue_comment>username_2: All I needed to do was to do `std::cout << std::setprecision(30);` before calling `std::cout` in the code (`iomanip` should be included as well).

Upvotes: 0
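
Putting username_2's fix together with the original program (my sketch; whether the two sums actually differ at runtime still depends on the compiler honoring the rounding mode, per username_1's FENV_ACCESS caveat):

```
#include <cfenv>
#include <iomanip>
#include <iostream>

int main() {
    float a = 1.1f, b = 1.2f;

    std::fesetround(FE_UPWARD);
    float c = a + b;

    std::fesetround(FE_DOWNWARD);
    float d = a + b;

    // With the default ~6 significant digits, both sums can display as "2.3";
    // a high precision reveals the one-ULP difference between them.
    std::cout << std::setprecision(30) << c << "\n" << d << "\n";
}
```
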
<issue_start>username_0: I'm learning a lot about a11y and making my site WCAG AA compliant, and I'm finding big-name sites all over the web that seem to violate contrast ratios. For example, here is a screenshot from Duolingo with color contrast ratios for text all below 3.0: <http://prntscr.com/iu2syg> I also understand that icons need to be >3.0 or 4.5 depending on the line width, but here is a screenshot of Gmail with icons that are clearly below 3.0: <http://prntscr.com/iu2te7> So what gives? Are these sites just not WCAG AA compliant? Or am I misunderstanding the requirements of AA compliance?<issue_comment>username_1: Websites do have to respect contrast ratios to be WCAG AA compliant. But some websites don't care, and some others don't know. The W3.org homepage itself has some non-conforming contrast ratios. The problem with accessibility is that even when people care, they do not always have the competencies to know the guidelines, the tools to detect the problems, and the comprehension to understand the impacts. A lot of people think that validating a website with a screen reader gives a definitive blessing that it is accessible, while this leaves aside aspects such as contrast.

Upvotes: 2 <issue_comment>username_2: In order to claim compliance to WCAG 2.0 Level AA, a website needs to fulfil **all** of the success criteria at Level A and Level AA. This is stated explicitly in the [section on conformance requirements](https://www.w3.org/TR/WCAG20/#conformance-reqs). [Success criterion 1.4.3](https://www.w3.org/TR/WCAG20/#visual-audio-contrast-contrast) in WCAG 2.0 applies to text and images of text and therefore applies to the examples in the Duolingo screenshot. However, it does not apply to controls, so there seems to be a loophole for the controls in the GMail screenshot. For this reason, the drafts for WCAG 2.1 introduced a new [success criterion](https://www.w3.org/TR/2018/CR-WCAG21-20180130/#non-text-contrast) for user interface components and graphical objects. This success criterion is still marked as "at risk" in the Candidate Recommendation (January 2018), but would close this loophole if it makes it to the final recommendation. I could not find a WCAG 2.0 conformance claim on the Duolingo website; it is clear that the contrast ratio in the screenshot is insufficient and would cause Duolingo to fail WCAG 2.0 Level AA conformance. I could not find a WCAG conformance claim on GMail either.

Upvotes: 3
<issue_start>username_0: I've been unable to get my Jasmine test suite running with webpack 4. After upgrading webpack, I get the following error for almost every test:

```
Error: : getField is not declared writable or has no setter
```

This is due to a common pattern we use to create spies for simple functions:

```
import * as mod from 'my/module';
//...
const funcSpy = spyOn(mod, 'myFunc');
```

I've played around with `module.rules[].type` but none of the options seem to do the trick. This webpack [GH issue](https://github.com/webpack/webpack/issues/5834) indicates ECMA modules are meant to not be writable, which makes sense for the web, but is there really no workaround for testing?

Relevant package versions:

```
"jasmine-core": "2.6.4",
"typescript": "2.5.3",
"webpack": "4.1.1",
"webpack-cli": "^2.0.12",
"karma": "^0.13.22",
"karma-jasmine": "^1.1.0",
"karma-webpack": "^2.0.13",
```
<issue_comment>username_1: There's [this GitHub issue](https://github.com/webpack/webpack/issues/6979) where they arrive at the same conclusion: that immutable exports are intended. But user **lavelle** has a workaround (in [this comment](https://github.com/webpack/webpack/issues/6979#issuecomment-379414342)) where they've created different webpack configs for test and production code. The test config uses `"commonjs"` modules, which seems to have worked for them by not creating getters.

Upvotes: 2 <issue_comment>username_2: There's `spyOnProperty`, which allows treating a property as read-only by setting the `accessType` argument to `'get'`. Your setup would then look like:

```
import * as mod from 'my/module';
//...
const funcSpy = jasmine.createSpy('myFunc').and.returnValue('myMockReturnValue');
spyOnProperty(mod, 'myFunc', 'get').and.returnValue(funcSpy);
```

Upvotes: 5 <issue_comment>username_3: Adding to @username_2's answer: I've added this TypeScript function to my shared testing module as a convenience:

```
export const spyOnFunction = <T>(obj: T, func: keyof T) => {
  const spy = jasmine.createSpy(func as string);
  spyOnProperty(obj, func, 'get').and.returnValue(spy);
  return spy;
};
```

Example usage:

```
import * as mod from 'my/module';
//...
spyOnFunction(mod, 'myFunc').and.returnValue('myMockReturnValue');
```

Upvotes: 2 <issue_comment>username_4: To resolve this, it is possible to wrap the methods in a custom class and then mock it. Example below:

```
//Common Utility
import * as library from './myLibrary'

export class CustomWrapper {
    static _func = library.func;
}

//Code File
import { CustomWrapper } from './util/moduleWrapper';

const output = CustomWrapper._func(arg1, arg2);

//Test File
import { CustomWrapper } from './util/moduleWrapper';

spyOn(CustomWrapper, '_func').and.returnValue('mockedResult');
```

Upvotes: 1
<issue_start>username_0: 
```
#include <iostream>
using namespace std;

int main()
{
    int x;
    cout << "How many rows would you like? " << endl;
    cin >> x;
    cout << endl;
    cout << "Number| Power 1| Power 2| Power 3| Power 4| Power 5" << endl;
    for (int j=0; j<=x; j++)
    {
        cout << j << "\t" << j << "\t" << pow(j,2) << "\t" << pow(j,3) << "\t" << pow(j,4) << "\t" << pow(j,5) << endl;
    }
    return 0;
}
```

It produces the above error. I'm not sure what is wrong; please let me know. Thank you in advance.<issue_comment>username_1: [`std::pow`](http://www.cplusplus.com/reference/valarray/pow/) is defined in `cmath`, so you need to include `cmath`:

```
#include <iostream>
#include <cmath> // <-- include cmath here
using namespace std;

int main()
{
    int x;
    cout << "How many rows would you like? " << endl;
    cin >> x;
    cout << endl;
    cout << "Number| Power 1| Power 2| Power 3| Power 4| Power 5" << endl;
    for (int j=0; j<=x; j++)
    {
        cout << j << "\t" << j << "\t" << pow(j,2) << "\t" << pow(j,3) << "\t" << pow(j,4) << "\t" << pow(j,5) << endl;
    }
    return 0;
}
```

Upvotes: 3 <issue_comment>username_2: As the error message tells you, the compiler does not know where to find `pow()`. When using functions that you did not write yourself, you need to include the appropriate header file. Just like you are including `iostream` for `std::cout` and `std::cin`, you need to include `cmath` for `std::pow`. Just add `#include <cmath>` to the beginning of your program.

Upvotes: 2
<issue_start>username_0: 
```
if __name__ == '__main__':
    string = [' \n Boeing Vancouver\n ', '\n Airbus\n ', '\n Lockheed Martin\n ', '\n Rolls-Royce\n ', '\n Northrop Grumman\n ', '\n BOMBARDIER\n ', '\n Raytheon\n ']
    for item in string:
        item.replace("\n", " ")
        item.strip()
    print(string)
```

The output is the same as the input. Why?
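
For what it's worth, an explanation not taken from the thread: Python strings are immutable, so `str.replace()` and `str.strip()` do not modify `item` in place; they *return* new strings, and the return values here are simply discarded. Collecting the results fixes it:

```
if __name__ == '__main__':
    strings = [' \n Boeing Vancouver\n ', '\n Airbus\n ', '\n Lockheed Martin\n ']
    # Build a new list from the transformed copies instead of discarding them.
    cleaned = [item.replace("\n", " ").strip() for item in strings]
    print(cleaned)
```
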
<issue_start>username_0: Let's say I have a constructor or a method with 3 integer arguments.

Object.h

```
Object::Object(int alength, int awidth, int aheight);
```

Is there a way to make sure at compile time that I pass them in the right order?

```
int l = 10;
int w = 15;
int h = 5;
```

main.cpp

```
Object myObject(l,w,h); // is correct
Object myObject(w,l,h); // incorrect but compiles
```

Of course I could create a class for each integer, but that's pretty time-intensive. Is there a way, probably based on templates, to generate an error at compile time without needing to create a class for each integer?<issue_comment>username_1: In a word, no. The only way to do this is to make each parameter a different type, and ensure that each type can't be passed in as another type. You could write a single template class to wrap the integers, and then specialize it for each parameter type based on an enum, for instance.

Upvotes: 0 <issue_comment>username_2: No. There are three integers, and neither the language nor the compiler has any way at all of knowing what they "mean", unless you introduce some type-safety (but you've ruled that out). The conventional way to handle this is with *testing*.

Upvotes: 0 <issue_comment>username_3: In the current version of C++ there's nothing simple you can do. The next version of C++ will allow something like:

```
Object myObject = { .length = 10, .width = 15, .height = 5 };
```

---

In the meantime you would have to use some sort of hack, e.g. define a separate type for each parameter so that the compiler can report type mismatches. Here's a code sample:

```
struct Object {
    int length, width, height;

    struct Length { explicit Length(int x): x(x) {}; int x; };
    struct Width  { explicit Width(int x): x(x) {};  int x; };
    struct Height { explicit Height(int x): x(x) {}; int x; };

    Object(Length length, Width width, Height height):
        length(length.x), width(width.x), height(height.x) {}
};

int main() {
    Object obj( Object::Length(10), Object::Width(15), Object::Height(5) );
}
```

Upvotes: 4 [selected_answer]<issue_comment>username_4: You could use tags to create a single integer (or numeric) adaptor, and then only allow the proper types to be passed to the argument, minimizing the amount of boilerplate.

```
struct l_tag {};
struct w_tag {};
struct h_tag {};

template <typename Int, typename Tag>
struct int_adaptor {
public:
    explicit int_adaptor(Int x = Int()): data_(x) {}
    // ....
private:
    Int data_;
};

using l_type = int_adaptor<int, l_tag>;
using w_type = int_adaptor<int, w_tag>;
using h_type = int_adaptor<int, h_tag>;
```

This would allow compile-time enforcement of the proper type signatures and minimize the boilerplate. However, in practice, it may be more boilerplate than desired.

Upvotes: 2
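
A short usage sketch of username_4's tag approach (my code; the accessor and the constructor wiring are assumptions, since the original elides them):

```
#include <iostream>

struct l_tag {}; struct w_tag {}; struct h_tag {};

template <typename Int, typename Tag>
struct int_adaptor {
    explicit int_adaptor(Int x = Int()) : data_(x) {}
    Int get() const { return data_; }
private:
    Int data_;
};

using l_type = int_adaptor<int, l_tag>;
using w_type = int_adaptor<int, w_tag>;
using h_type = int_adaptor<int, h_tag>;

struct Object {
    Object(l_type l, w_type w, h_type h)
        : length(l.get()), width(w.get()), height(h.get()) {}
    int length, width, height;
};

int main() {
    Object ok(l_type(10), w_type(15), h_type(5));      // compiles
    // Object bad(w_type(15), l_type(10), h_type(5));  // error: no matching constructor
    std::cout << ok.length << "\n";
}
```
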
<issue_start>username_0: Hi everyone, I have the following table:

```
Name | Gender | Count(Gender)
BBC  | M      | 31
BBC  | F      | 1
BBC  | B      | 3
BBC  | N      | 160

M: Male
F: Female
B: Both
N: Not Specified
```

I need to group this into only three categories: M, F, N. How can I write a CASE statement so that when the row is B, its count is added to both the Male and Female counts? I need a table like this:

```
Name | Gender | Count(Gender)
BBC  | M      | 34
BBC  | F      | 4
BBC  | N      | 160
```

I hope I explained myself well. Thanks to everyone.<issue_comment>username_1: I don't see a way of doing this without generating data for both stats, to cover both male and female. We can take a union of a query which aggregates males and both, along with one for females and both. The first half of the union also includes not specified, since it needs to come in from somewhere.

```
SELECT Name,
       CASE WHEN Gender IN ('M', 'B') THEN 'M' ELSE Gender END AS Gender,
       SUM(cnt) AS cnt
FROM yourTable
WHERE Gender IN ('M', 'B', 'N')
GROUP BY Name,
         CASE WHEN Gender IN ('M', 'B') THEN 'M' ELSE Gender END
UNION ALL
SELECT Name,
       CASE WHEN Gender IN ('F', 'B') THEN 'F' END AS Gender,
       SUM(cnt) AS cnt
FROM yourTable
WHERE Gender IN ('F', 'B')
GROUP BY Name,
         CASE WHEN Gender IN ('F', 'B') THEN 'F' END
ORDER BY Name, Gender
```

[![enter image description here](https://i.stack.imgur.com/8BAkc.png)](https://i.stack.imgur.com/8BAkc.png)

[Demo](http://rextester.com/BWHH80523)

Upvotes: 0 <issue_comment>username_2: I would do this by using conditional aggregation and then unpivoting:

```
SELECT *
FROM (
    SELECT name,
           SUM(CASE WHEN gender IN ('M','B') THEN 1 ELSE 0 END) AS "M",
           SUM(CASE WHEN gender IN ('F','B') THEN 1 ELSE 0 END) AS "F",
           SUM(CASE WHEN gender = 'N' THEN 1 ELSE 0 END) AS "N"
    FROM my_table
    GROUP BY name
)
UNPIVOT (
    count_gender FOR gender IN ("M","F","N")
);
```

The `B` values are counted under both the `M` (male) and `F` (female) columns by the conditional aggregation; we can then unpivot to turn our columns into rows. This assumes you're using at least Oracle 11g; for Oracle 10g and below you'll have to use a query like the one given in username_1's answer.

**EDIT:** If you have a table of *counts* (that is, if the table in your post is the raw data and not a result of aggregation), then substitute the count column for the "1" in the `SUM()`s above:

```
SUM(CASE WHEN gender IN ('M','B') THEN count_gender ELSE 0 END) AS "M"
```

Hope this helps.

Upvotes: 0 <issue_comment>username_3: Here's one way to do this. Not the simplest code, perhaps, but I believe it is quite efficient. It uses an incomplete cross join to duplicate the 'B' rows, so that the base data is read just once.
```
with inputs ( name, gender, cnt ) as (
       select 'BBC', 'M',  31 from dual union all
       select 'BBC', 'F',   1 from dual union all
       select 'BBC', 'B',   3 from dual union all
       select 'BBC', 'N', 160 from dual union all
       select 'ZYX', 'M',  55 from dual union all
       select 'ZYX', 'F',  12 from dual union all
       select 'ZYX', 'B',  43 from dual union all
       select 'ZYX', 'N', 123 from dual
     )
select i.name,
       case i.gender when 'B' then case h.flag when 1 then 'F' else 'M' end
                     else i.gender end as gender,
       sum(cnt) as cnt
from   inputs i cross join
       ( select 1 as flag from dual union all select 2 from dual ) h
where  h.flag = 1 or i.gender = 'B'
group by i.name,
         case i.gender when 'B' then case h.flag when 1 then 'F' else 'M' end
                       else i.gender end
order by name, gender
;
```

**Output** (from the extended test data I created in the WITH clause):

```
NAME GENDER CNT
---- ------ ---
BBC  F        4
BBC  M       34
BBC  N      160
ZYX  F       55
ZYX  M       98
ZYX  N      123
```

Upvotes: 1 <issue_comment>username_4: 
```
WITH yourTable AS (
    SELECT 'BBC' AS Name, 'M' AS Gender, 31 AS cnt from dual UNION ALL
    SELECT 'BBC', 'F', 1 from dual UNION ALL
    SELECT 'BBC', 'B', 3 from dual UNION ALL
    SELECT 'BBC', 'K', 3 from dual UNION ALL
    SELECT 'BBC', 'N', 160 from dual
)
select t.*, sum(cnt) over (partition by gender)
from yourTable t
where gender in ('F','M','N')
```

Upvotes: -1 <issue_comment>username_5: I think the other answers overcomplicate things. I am taking your original statement literally: you have a table of counts by gender. If this assumption does not hold, the answer will change slightly. Just use a case statement to include B in both male and female counts, then aggregate:

```
WITH yourTable AS (
    SELECT 'BBC' AS Name, 'M' AS Gender, 31 AS cnt from dual UNION ALL
    SELECT 'BBC', 'F', 1 from dual UNION ALL
    SELECT 'BBC', 'B', 3 from dual UNION ALL
    SELECT 'BBC', 'N', 160 from dual
)
SELECT SUM(male)
      ,SUM(female)
      ,SUM(not_known)
FROM (
    SELECT gender
          ,cnt
          ,CASE WHEN gender IN ('M','B') THEN cnt ELSE 0 END male
          ,CASE WHEN gender IN ('F','B') THEN cnt ELSE 0 END female
          ,CASE WHEN gender = 'N' THEN cnt ELSE 0 END not_known
    FROM yourTable
);
```

Upvotes: 1
<issue_start>username_0: When I cast datetime in SQLite, it truncates the string. For example:

```
select cast("2017-04-23 9:12:08 PM" as datetime) as dt
```

returns

```
2017
```
<issue_comment>username_1: The closest I could come up with is:

```
select date(datetime(strftime('%s','2017-04-23 09:12:08'), 'unixepoch'))
```

Result:

```
2017-04-23
```

The date format you have is not recognised by SQLite:

```
"2017-04-23 9:12:08 PM"
```

It does not conform to the time string formats recognised:

> A time string can be in any of the following formats:
>
> ```
> YYYY-MM-DD
> YYYY-MM-DD HH:MM
> YYYY-MM-DD HH:MM:SS
> YYYY-MM-DD HH:MM:SS.SSS
> YYYY-MM-DDTHH:MM
> YYYY-MM-DDTHH:MM:SS
> YYYY-MM-DDTHH:MM:SS.SSS
> HH:MM
> HH:MM:SS
> HH:MM:SS.SSS
> now
> DDDDDDDDDD
> ```

[Date And Time Functions](https://www.sqlite.org/lang_datefunc.html)

Upvotes: 0 <issue_comment>username_2: SQLite's CAST can only cast to the defined storage classes, and can therefore only be used to cast to NONE (blob), TEXT, REAL, INTEGER or NUMERIC. However, the normal rules for determining column affinity are applied to the type, so by coding `CAST(value AS datetime)` you are effectively using `CAST(value AS NONE)` (i.e. a BLOB). [CAST expressions](https://www.sqlite.org/lang_expr.html#castexpr)

Therefore you can't effectively use CAST. However, you can simply use the DateTime functions against an appropriate value (accepted formats) as per [Date And Time Functions](https://www.sqlite.org/lang_datefunc.html), e.g.:

```
SELECT datetime("2017-04-23 09:12:08") as dt;
```

results in

```
2017-04-23 09:12:08
```

or, to show date manipulation:

```
select date(dt), dt FROM (
    select datetime("2017-04-23 09:12:08") as dt
);
```

results in

```
2017-04-23
```

and

```
2017-04-23 09:12:08
```

However, considering that your format isn't one of the accepted formats, you could convert the value. This is more complex, but it can be done. Here's an example that will perform the conversion (not substantially tested though):

```
SELECT CASE
    WHEN (CAST(hour AS INTEGER) + CAST(adjustment AS INTEGER)) > 9
        THEN datepart||' '||CAST(CAST(hour AS INTEGER) + CAST(adjustment AS INTEGER) AS TEXT)||':'||mins_and_secs
    ELSE datepart||' 0'||CAST(CAST(hour AS INTEGER) + CAST(adjustment AS INTEGER) AS TEXT)||':'||mins_and_secs
    END AS converted
FROM (
    SELECT substr(ts,1,10) as datepart,
           CASE WHEN instr(ts,"PM") THEN 12 ELSE 0 END AS adjustment,
           CASE WHEN length(ts) = 21 THEN substr(ts,12,1) ELSE substr(ts,12,2) END AS hour,
           CASE WHEN length(ts) = 21 THEN substr(ts,14,5) ELSE substr(ts,15,5) END AS mins_and_secs
    FROM (
        select("2017-04-23 9:12:08 PM") as ts
    )
);
```

This would result in **`2017-04-23 21:12:08`**.

Using `select("2017-04-23 9:12:08 AM")` results in **`2017-04-23 09:12:08`**
Using `select("2017-04-23 11:12:08 PM")` results in **`2017-04-23 23:12:08`**
Using `select("2017-04-23 11:12:08 AM")` results in **`2017-04-23 11:12:08`**

Upvotes: 1
<issue_start>username_0: 
```
Mixed Content: The page at 'https://yourwebsite.com/' was loaded over HTTPS, but requested an insecure XMLHttpRequest endpoint 'http://otherwebsite.com/'. This request has been blocked; the content must be served over HTTPS.
```

After running into this error a few times, I have learned that in modern web browsers you cannot have mixed content. That means an HTTPS site cannot make a request to one that is just HTTP. So my issue is enabling this in Safari.

---

However, in the development stages I am able to enable this in Chrome:

1) Click on the shield. [![enter image description here](https://i.stack.imgur.com/gfRUe.png)](https://i.stack.imgur.com/gfRUe.png)

2) Click "Load Unsafe Scripts". [![enter image description here](https://i.stack.imgur.com/DWmx9.png)](https://i.stack.imgur.com/DWmx9.png)

3) Now you should be able to see the blocked content, but of course the website will no longer be secure. [![enter image description here](https://i.stack.imgur.com/881dU.png)](https://i.stack.imgur.com/881dU.png)

---

I am also able to do this in Firefox:

1) Click on the upside-down "!". [![enter image description here](https://i.stack.imgur.com/YnocV.png)](https://i.stack.imgur.com/YnocV.png)

2) Click on the > arrow. [![enter image description here](https://i.stack.imgur.com/dsCVx.png)](https://i.stack.imgur.com/dsCVx.png)

3) Disable protection for now. [![enter image description here](https://i.stack.imgur.com/EUfPc.png)](https://i.stack.imgur.com/EUfPc.png)

4) Now the site is not secure, but your request/content is available. [![enter image description here](https://i.stack.imgur.com/Ukmel.png)](https://i.stack.imgur.com/Ukmel.png)

---

However, I have looked forever and cannot find this in Safari for any of the recent versions. Even when enabling developer tools, I am unable to find it. I looked through some of the release logs to see if this feature was deprecated, and could not find anything. Is there any way to do this in Safari?<issue_comment>username_1: Mixed content is blocked from Safari 9+, so this is not possible; you need to use a workaround like a proxy.

Upvotes: -1 <issue_comment>username_2: Navigate to your Safari "Privacy Settings" and disable the 'Block Third Party Cookies' and 'Disable Cross-site Tracking' settings.

Upvotes: -1
<issue_start>username_0: As the title says, I'm trying to create a mechanic that decides which landing animation the third-person controller should play. The player has a "HardLanding" and a "NormalLanding" animation. At first I tried achieving this with a timer, but this doesn't deliver the expected behaviour (when the player jumps forward it also plays the HardLanding animation, which is not what should happen). I'm using a CharacterController component, so there is no rigidbody or collider (only the CharacterController and the script). I have already set up a method that raycasts downwards, and this is currently being done when the player is not grounded. The problem is that the ground check is being updated every frame because it's in the Update function. This means this ray is sent out every frame and gives different values. I think this ray should either shoot only once, so there is a single value that determines the height of the player, or the ray that currently shoots out while the player is not grounded should be checked for its highest value, and then I can do something like: if (ray height is higher than x) { play HardLanding } else { play NormalLanding }. The landing animation should start playing when the player is grounded again, obviously. Here is what I have gotten so far, which doesn't work yet:

```
void CheckFallingDistance()
{
    RaycastHit hitFall;
    Vector3 bottom = controller.transform.position - new Vector3(0, controller.height / 2, 0);
    if (Physics.Raycast(bottom, dirDown, out hitFall))
    {
        rayDistance = hitFall.distance;
    }
    Debug.DrawRay(bottom, dirDown * hitFall.distance, Color.cyan, 10f);
}
```

Below you can see my current ground check, which still contains the timer to decide which landing animation should play. The timer part should be replaced with the raycast check for falling height.

```
if (GroundCheck())
{
    velocityY = 0;

    if (airTime > airTimeHandler)
    {
        //Debug.Log("Hard landing");
        anim.SetBool("hardLanding", true);
        anim.SetBool("onAir", false);
        anim.SetBool("onAirIdle", false);
    }
    else
    {
        //Debug.Log("Normal landing");
        anim.SetBool("hardLanding", false);
        anim.SetBool("onAir", false);
        anim.SetBool("onAirIdle", false);
    }
    airTime = 0f;
}
else
{
    CheckFallingDistance();

    if (animationSpeedPercent < 0.51f)
    {
        anim.SetBool("onAirIdle", true);
    }
    else
    {
        anim.SetBool("onAir", true);
    }
    anim.SetBool("hardLanding", false);
    airTime += Time.deltaTime;
}
```
<issue_comment>username_1: You know your player's absolute position (y component); what remains is finding out the ground height at this point. If your floor is at y=0 then this is trivial, but since you mention raycasting I am assuming it's not. I would assume you only need to measure height at the highest point. You can launch a coroutine when you leave the ground (it's best to avoid Update()), and check the vertical position in each frame within that coroutine. In the beginning phase of the jump the position will increase, but at some point the current position will become lower than the position in the last frame. This is the frame in which you shoot your raycast and make a decision as to which animation you play. You can end your coroutine at this point (unless your ground is terribly uneven). You could also measure player velocity before landing and decide based on that.

Upvotes: 0 <issue_comment>username_2: Eventually I decided to ditch the raycast part and went for a check of the Y velocity of the CharacterController component.

Upvotes: -1 [selected_answer]
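
Since username_2's accepted fix is only described, here is a minimal sketch of the velocity-based idea (my code; the threshold value and the reuse of the question's `controller` and `anim` fields and animator parameter names are assumptions):

```
// Track the fastest downward speed while airborne; on landing, pick the
// animation based on how hard the controller was falling.
float peakFallSpeed = 0f;
public float hardLandingSpeed = -12f; // made-up threshold; tune per game

void Update()
{
    if (!controller.isGrounded)
    {
        peakFallSpeed = Mathf.Min(peakFallSpeed, controller.velocity.y);
    }
    else if (peakFallSpeed < 0f)
    {
        anim.SetBool("hardLanding", peakFallSpeed < hardLandingSpeed);
        anim.SetBool("onAir", false);
        anim.SetBool("onAirIdle", false);
        peakFallSpeed = 0f;
    }
}
```
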
<issue_start>username_0: I recently was reading a JavaScript book and discovered that using innerHTML to pass plain text poses a security risk, so I was wondering: does the jQuery `html()` method pose these same risks? I tried to research it but I could not find anything. For example:

```
$("#saveContact").html("Save"); //change text to Save

var saveContact = document.getElementById("saveContact");
saveContact.innerHTML = "Save"; //change text to Save
```

These do the same thing from what I know, but do they both pose the same security risk of someone being able to inject some JavaScript and execute it? I am not very knowledgeable in security, so I apologize in advance if anything is incorrect or explained incorrectly.<issue_comment>username_1: From the **[jQuery documentation](http://api.jquery.com/html/)**:

> **Additional Notes:**
>
> By design, any jQuery constructor or method that accepts an HTML string — jQuery(), .append(), .after(), etc. — **can potentially execute code**. This can occur by injection of script tags or use of HTML attributes that execute code (for example, `<img onload="">`). **Do not use these methods to insert strings obtained from untrusted sources such as URL query parameters, cookies, or form inputs. Doing so can introduce cross-site-scripting (XSS) vulnerabilities. Remove or escape any user input before adding content to the document.**

So, for example, if the user were to pass an HTML string that contains a `script` element, then that script would be executed:

```js
$("#input").focus();
$("#input").on("blur", function(){
  $("#output").html($("#input").val());
});
```

```css
textarea { width:300px; height: 100px; }
```

```html
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
alert("The HTML in this element contains a script element that was processed! What if the script contained malicious content?!")

Press TAB
```

But, if we escape the string's contents before we pass it, we're safer:

```js
$("#input").focus();
$("#input").on("blur", function(){
  $("#output").html($("#input").val().replace("<", "&lt;").replace(">", "&gt;"));
});
```

```css
textarea { width:300px; height: 100px; }
```

```html
alert("This time the < and > characters (which signify an HTML tag) are escaped into their HTML entity codes, so they won't be processed as HTML.")

Press TAB
```

**Finally, the best way to avoid processing a string as HTML is not to pass it to `.innerHTML` or `.html()` in the first place.** That's why we have `.textContent` and `.text()` - they do the escaping for us:

```js
$("#input").focus();
$("#input").on("blur", function(){
  // Using .text() escapes the HTML automatically
  $("#output").text($("#input").val());
});
```

```css
textarea { width:300px; height: 100px; }
```

```html
alert("This time nothing will be processed as HTML.")

Press TAB
```

Upvotes: 4 [selected_answer]<issue_comment>username_2: From the [`.html()`](http://api.jquery.com/html/) docs:

> By design, any jQuery constructor or method that accepts an HTML string — jQuery(), .append(), .after(), etc. — can potentially execute code. This can occur by injection of script tags or use of HTML attributes that execute code (for example, `<img onload="">`).
> Do not use these methods to insert strings obtained from untrusted sources such as URL query parameters, cookies, or form inputs. Doing so can introduce cross-site-scripting (XSS) vulnerabilities. Remove or escape any user input before adding content to the document.

This is why `.innerHTML` is bad and why `.html()` is also not good to use on strings from untrusted sources, say if you make an ajax request to get some data from an untrusted third party. You should use one of the numerous methods [here](https://stackoverflow.com/questions/6234773/can-i-escape-html-special-chars-in-javascript) or, better still, a proven library function.

Upvotes: 1
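
A general-purpose escaping helper along those lines (illustrative, not from the thread; `userInput` is a placeholder variable):

```
function escapeHtml(str) {
  // Escape the five characters that carry special meaning in HTML.
  return str.replace(/&/g, "&amp;")
            .replace(/</g, "&lt;")
            .replace(/>/g, "&gt;")
            .replace(/"/g, "&quot;")
            .replace(/'/g, "&#39;");
}

$("#output").html(escapeHtml(userInput)); // now rendered as text, not markup
```
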
<issue_start>username_0: I'm trying to understand how I can do a signature capture in React Native. My app is created with `create-react-native-app` and `Expo`, and I'd prefer to not have to eject the app to get this functionality to work. Would it be possible to wrap something like this in a webview? <https://github.com/szimek/signature_pad> I've also looked at this project, <https://github.com/RepairShopr/react-native-signature-capture>, but it requires me to eject the app and use `react-native link`. Looking for any advice or suggestions on how to implement this feature while keeping my project as straightforward as possible (ideally using create-react-native-app, but if this isn't possible could someone please explain to me why?)<issue_comment>username_1: The way React Native works is that each component available in React Native maps to a native component in the underlying platform, i.e. an `Image` is an `ImageView` in Android and a `UIImageView.h` in iOS. The JavaScript code itself runs in a JavaScript thread on each platform, and as you use components in React Native, there's a translation layer that passes information from JS into the React Native bridge that then results in corresponding native components being created. By default, React Native includes the following components: <https://facebook.github.io/react-native/docs/components-and-apis.html#basic-components>, which means that only those components come out-of-the-box in React Native. If you want other components, then you have 2 options: either create a "composite" component, in which your JS component is written in terms of other JS components, or, if your feature needs a native component not yet exposed by React Native, write your own "native" component to expose certain native functionality to your React Native code. The way Expo works is that they have wrapped React Native and a handful of 3rd party components and built them into their application. The reason why you can't use a 3rd party native component they don't support is that when that component is used, the app itself doesn't have the translation code to go from JS to a native Android/iOS view. So, to do what you're asking, you'd need to find either a "native" drawing component that Expo has included in their platform/app, OR a "composite" drawing component that is built with other default React Native components (or other components Expo supports). For example, on Android I might build this with a Canvas view, but from what I can tell React Native doesn't support that object natively, so I would probably write this myself, etc. It's hard for Expo to support every 3rd party "native" component out there, because React Native is open source and iterates so fast that most community-built components aren't always up to date, or they might conflict with one another.

Upvotes: 2 <issue_comment>username_2: I know it's been a while, but there is an interesting article here: <https://blog.expo.io/drawing-signatures-with-expo-25d1629ca1ac>

> Wait, but how? Using "expo-pixi", you can add a component that lets you choose your brush's color, thickness, and opacity. Then when your user lifts her finger, you get a callback. From there you can take a screenshot of the transparent view or get the raw point data if that's what you're looking for.

Upvotes: 2 [selected_answer]<issue_comment>username_3: I am using react-native-signature-capture. Working properly on both Android and iOS.
[![enter image description here](https://i.stack.imgur.com/mMFt1.jpg)](https://i.stack.imgur.com/mMFt1.jpg) Upvotes: 2
2018/03/21
2,692
7,592
<issue_start>username_0: I'm having some trouble coming up with a working algorithm for the following problem. Given fixed quantities of available coins of 100, 50, 25 and 10 cents, I need to find how to fit a combination of these coins into a given value x (it doesn't have to be optimal, any combination from the available coins will do). So far, I've got this code, which works only for some cases.

```
struct coins{
    int value;
    int quant;
};

int change = 0;
int changecoins[4] = {0};
struct coins available_coins[4] = { 0 };

available_coins[3].value = 10;  //10 cents coins
available_coins[2].value = 25;  //25 cents coins
available_coins[1].value = 50;  //50 cents coins
available_coins[0].value = 100; //100 cents coins

//quantity values just for test purposes
available_coins[3].quant = 10;  //10 cents coins
available_coins[2].quant = 15;  //25 cents coins
available_coins[1].quant = 8;   //50 cents coins
available_coins[0].quant = 12;  //100 cents coins

for(int i=0; i<4; i++){
    while((change/available_coins[i].value>0)&&(available_coins[i].quant>0)){
        change -= available_coins[i].value;
        available_coins[i].quant--;
        changecoins[i]++;
    }
}

if(change>0){
    printf("It was not possible to change the value");
}
else{
    printf("Change:\n");
    printf("\t%d 100 cent coin(s).\n", changecoins[0]);
    printf("\t%d 50 cent coin(s).\n", changecoins[1]);
    printf("\t%d 25 cent coin(s).\n", changecoins[2]);
    printf("\t%d 10 cent coin(s).\n", changecoins[3]);
}
```

However, for values like 30 this won't work. The program will fit 1 coin of 25 cents, but then have 5 cents left, which fails to compute. This also occurs with 40, 65, and so on. Thanks in advance!<issue_comment>username_1: You could use a recursive algorithm along the following steps:

* Take 1 100c coin and try to break down the remaining amount into only 50, 25, 10s
* If that didn't work, take 2 100c coins and try to break down the remaining amount into only 50, 25, 10s
* Etc.

If you tried every possibility for the number of 100c coins (including 0!) then you will have covered all possible solutions. I wrote some demo code. If this is homework then please don't copy-paste my code but maybe write your own code once you understand the ideas involved ...

```
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <limits.h>

bool coin_find(unsigned int total, unsigned int *denom)
{
    if ( total == 0 )
        return true;   // Success - reduced total remaining to 0
    if ( *denom == 0 )
        return false;  // Failure - tried all coins in the list with no solution yet

    // Try 0 of the largest coin, then 1, etc.
    for (unsigned int d = 0; ; ++d)
    {
        if ( d * *denom > total )
            return false;
        if ( coin_find(total - d * *denom, denom + 1) )
        {
            if ( d )
                printf("%ux%uc ", d, *denom);
            return true;
        }
    }
}

int main(int argc, char **argv)
{
    if ( argc < 2 )
        return EXIT_FAILURE;
    unsigned int denoms[] = { 100, 50, 25, 10, 0 };
    long t = strtol(argv[1], NULL, 10);
    if ( t < 0 || t >= LONG_MAX )
        return EXIT_FAILURE;
    if ( !coin_find(t, denoms) )
        printf("No solution found");
    printf("\n");
}
```

Exercises for the reader:

1. Loop backwards instead of forwards so that we find tidier solutions by default.
2. Output only the breakdown with the smallest number of coins.
3. Output all possible breakdowns.

Bonus exercise:

* Rewrite this to not actually use recursion at all; instead use an array that holds the solution so far, and *backtrack* when you reach the end. Exercise 3 will actually be easier this way.
Upvotes: 2 [selected_answer]<issue_comment>username_2: The following solution uses *Dynamic Programming* and you can use it as long as the value of `M` (`x` for you) is small. If `prev[j]` is different than `-1` then this means that `j` can be made with the given coins. `coin[j]` stores the value of the coin used to make `j`, and `prev[j]` is the value without using `coin[j]`. Thus, `(j - prev[j]) / coin[j]` gives us the number of coins of denomination `coin[j]` used to make weight `j`.

```
#include <stdio.h>

#define COINS 4

int M = 1100;
int prev[10000];
int coin[10000];

// Available denominations.
int value[COINS] = { 10, 25, 50, 100 };
// Available quantities.
int quant[COINS] = { 10, 15, 8, 12 };
// Number of selected coins per denomination.
int answer[COINS] = { 0, 0, 0, 0 };

int main() {
    // base case
    prev[0] = 0;
    for (int i = 1; i < 10000; i++)
        prev[i] = -1;

    // dynamic programming
    for (int i = 0; i < COINS; i++)
        for (int j = M; j >= 0; j--)
            if (prev[j] != -1) {
                int k = 1;
                while (k <= quant[i] && j + k * value[i] <= M) {
                    if (prev[j + k * value[i]] == -1) {
                        prev[j + k * value[i]] = j;
                        coin[j + k * value[i]] = value[i];
                    }
                    k++;
                }
            }

    // build the answer
    if (prev[M] != -1) {
        int current = M;
        while (current > 0) {
            int k = 0;
            while (k < COINS && coin[current] != value[k])
                k++;
            answer[k] += (current - prev[current]) / coin[current];
            current = prev[current];
        }
        printf("Change\n");
        for (int i = 0; i < COINS; i++)
            printf("\t%d %d cent coin(s).\n", answer[i], value[i]);
    } else {
        printf("It was not possible to change the value");
    }
    return 0;
}
```

Upvotes: 0 <issue_comment>username_3: Because 25%10 is not equal to 0, you need to consider it. Try this algorithm:

```
#include <stdio.h>

struct coins{
    int value;
    int quant;
};

int main()
{
    int change = 30;
    int changecoins[4] = {0};
    struct coins available_coins[4] = { 0 };

    available_coins[3].value = 10;  //10 cents coins
    available_coins[2].value = 25;  //25 cents coins
    available_coins[1].value = 50;  //50 cents coins
    available_coins[0].value = 100; //100 cents coins

    //quantity values just for test purposes
    available_coins[3].quant = 10;  //10 cents coins
    available_coins[2].quant = 15;  //25 cents coins
    available_coins[1].quant = 8;   //50 cents coins
    available_coins[0].quant = 12;  //100 cents coins

    if(((change/10 < 2)&&(change%10 != 0)) || (change/10 >= 2)&&((change%10 != 5) && change%10 != 0))
    {
        printf("It was not possible to change the value\n");
        return 0;
    }
    else
    {
        for(int i=0; i<2; i++){
            changecoins[i] = change / available_coins[i].value;
            change = change % available_coins[i].value;
            if(changecoins[i] >= available_coins[i].quant)
            {
                change = change + (changecoins[i] - available_coins[i].quant) * available_coins[i].value;
                changecoins[i] = available_coins[i].quant;
            }
        }
        if(change%10 == 5)
        {
            if(available_coins[2].quant < 1)
            {
                printf("It was not possible to change the value\n");
                return 0;
            }
            else
            {
                changecoins[2] = change / available_coins[2].value;
                change = change % available_coins[2].value;
                if(changecoins[2] >= available_coins[2].quant)
                {
                    change = change + (changecoins[2] - available_coins[2].quant) * available_coins[2].value;
                    changecoins[2] = available_coins[2].quant;
                }
                if(change%10 == 5)
                {
                    changecoins[2]--;
                    change = change + available_coins[2].value;
                }
            }
        }
        changecoins[3] = change / available_coins[3].value;
        change = change % available_coins[3].value;
        if(changecoins[3] >= available_coins[3].quant)
        {
            change = change + (changecoins[3] - available_coins[3].quant) * available_coins[3].value;
            changecoins[3] = available_coins[3].quant;
        }
        if(change>0)
        {
            printf("It was not possible to change the value\n");
        }
        else
        {
            printf("Change:\n");
            printf("\t%d 100 cent coin(s).\n", changecoins[0]);
            printf("\t%d 50 cent coin(s).\n", changecoins[1]);
            printf("\t%d 25 cent coin(s).\n", changecoins[2]);
            printf("\t%d 10 cent coin(s).\n", changecoins[3]);
            for(int i = 0; i < 4; i++)
            {
                available_coins[i].quant -= changecoins[i];
            }
        }
    }
    return 0;
}
```

Upvotes: 0
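For comparison with the C answers above, here is a hedged Python sketch of the same idea: try the largest denomination first and back off when the remainder cannot be completed. It is a re-implementation of the recursive approach, not the posters' code, and the stock list mirrors the test quantities from the question:

```
def make_change(total, coins):
    """Return {denomination: count} from limited stocks, or None."""
    if total == 0:
        return {}
    if not coins:
        return None
    denom, quantity = coins[0]
    # Use as many of the current coin as possible, then back off one at a time.
    for used in range(min(quantity, total // denom), -1, -1):
        rest = make_change(total - used * denom, coins[1:])
        if rest is not None:
            if used:
                rest[denom] = used
            return rest
    return None

stock = [(100, 12), (50, 8), (25, 15), (10, 10)]
print(make_change(30, stock))   # {10: 3}
print(make_change(45, stock))   # {25: 1, 10: 2}
print(make_change(5, stock))    # None: no combination fits
```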
2018/03/21
513
2,007
<issue_start>username_0: I have a table called mytable that has four fields (id, field2, field3, field4). I want to make another table called mycopy that will take components from mytable. In one field I want to copy over JUST the column name and pk constraints from mytable. Then in the other fields of mycopy I want to copy over 'field2' and 'field3' from mytable (with the data). Here's what I'm working with.

```
CREATE TABLE mycopy (LIKE mytable INCLUDING ALL);

INSERT INTO mycopy
SELECT field2, field3 FROM mytable;
```

The problem is that field2 is a character varying and the insert is trying to put it in my id field. I want field2 in the next column over from id; the id field shouldn't receive any data. Any help with this would be appreciated. Thank you.<issue_comment>username_1: You may specify the columns in the `mycopy` table into which you want to insert `field2` and `field3`:

```
INSERT INTO mycopy (field2, field3)
SELECT field2, field3 FROM mytable;
```

This is the preferred way of doing `INSERT INTO ... SELECT` for several reasons. Even if you got away with not specifying the target columns, later on if the table structure were to change, or someone did a vacuum, the script could suddenly stop working the way you intend. Also, by explicitly specifying the target columns you make the intent clear to anyone else who inherits your code. **Edit:** If you expect Postgres to auto-populate the `Id` column in the `mycopy` table, then your table definition should look something like this:

```
CREATE TABLE mycopy (
    Id SERIAL PRIMARY KEY,
    field1 TEXT NOT NULL,
    field2 TEXT NOT NULL,
    field3 TEXT NOT NULL
);
```

Now Postgres will automatically increment the `Id` column as you insert your data. Upvotes: 3 [selected_answer]<issue_comment>username_2:

```
INSERT INTO mycopy (a_column, other_column)
SELECT field2, field3 FROM mytable;
```

You need to define the column names to be inserted into in the same order as your select statement columns. Upvotes: 0
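The column-list form behaves the same way outside PostgreSQL, so it can be tried in a self-contained script; this sketch uses Python's built-in sqlite3 (the table and column names are made up for the demo, and SQLite's `INTEGER PRIMARY KEY` stands in for `SERIAL`):

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, f2 TEXT, f3 TEXT, f4 TEXT)")
con.execute("CREATE TABLE mycopy  (id INTEGER PRIMARY KEY, f2 TEXT, f3 TEXT)")
con.executemany("INSERT INTO mytable (f2, f3, f4) VALUES (?, ?, ?)",
                [("a", "b", "c"), ("d", "e", "f")])

# Naming the target columns lets id auto-populate instead of
# receiving the first selected value.
con.execute("INSERT INTO mycopy (f2, f3) SELECT f2, f3 FROM mytable")
print(con.execute("SELECT * FROM mycopy").fetchall())
# [(1, 'a', 'b'), (2, 'd', 'e')]
```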
2018/03/21
944
3,276
<issue_start>username_0: I am going through 'The C language by K&R'. Right now I am doing the bitwise section. I am having a hard time in understanding the following code. ``` int mask = ~0 >> n; ``` I was playing on using this to mask n left side of another binary like this. 0000 1111 1010 0101 // random number My problem is that when I print var mask it still negative -1. Assuming n is 4. I thought shifting ~0 which is -1 will be 15 (0000 1111). thanks for the answers<issue_comment>username_1: Right-shifting negative signed integers is an implementation-defined behavior, which is usually (but not always) filling the left with ones instead of zeros. That's why no matter how many bits you've shifted, it's always -1, as the left is always filled by ones. When you shift unsigned integers, the left will always be filled by zeros. So you can do this: ``` unsigned int mask = ~0U >> n; ^ ``` You should also note that `int` is typically 2 or 4 bytes, meaning if you want to get 15, you need to right-shift 12 or 28 bits instead of only 4. You can use a `char` instead: ``` unsigned char mask = ~0U; mask >>= 4; ``` Upvotes: 2 <issue_comment>username_2: In C, and many other languages, `>>` is (usually) an [*arithmetic* right shift](https://en.wikipedia.org/wiki/Arithmetic_shift) when performed on *signed* variables (like `int`). This means that the new bit shifted in from the left is a copy of the previous most-significant bit (MSB). This has the effect of preserving the sign of a two's compliment negative number (and in this case the value). This is in contrast to a [*logical* right shift](https://en.wikipedia.org/wiki/Logical_shift), where the MSB is always replaced with a zero bit. This is applied when your variable is *unsigned* (e.g. `unsigned int`). From Wikipeda: > > The >> operator in C and C++ is not necessarily an arithmetic shift. Usually it is only an arithmetic shift if used with a signed integer type on its left-hand side. If it is used on an unsigned integer type instead, it will be a logical shift. > > > In your case, if you plan to be working at a bit level (i.e. using masks, etc.) I would strongly recommend two things: 1. Use unsigned values. 2. Use types with specific sizes from like `uint32_t` Upvotes: 2 <issue_comment>username_3: Performing a right shift on a negative value yields an *implementation defined* value. Most hosted implementations will shift in `1` bits on the left, as you've seen in your case, however that doesn't necessarily have to be the case. Unsigned types as well as positive values of signed types always shift in `0` bits on the left when shifting right. So you can get the desired behavior by using unsigned values: ``` unsigned int mask = ~0u >> n; ``` This behavior is documented in section 6.5.7 of the [C standard](http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1570.pdf): > > *5* The result of **E1 >> E2** is **E**1 right-shifted **E2** bit positions. If **E1** has an unsigned type or if **E1** has a signed type and a nonnegative > value, the value of the result is the integral part of the quotient > of **E1 / 2E2** .If **E1** has a signed type and a negative value, the > resulting value is implementation-defined. > > > Upvotes: 3
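The arithmetic-versus-logical distinction drawn in these answers can be demonstrated without C; Python's unbounded integers always shift arithmetically, and a logical shift can be emulated by masking to a fixed width first (the 32-bit width here is an arbitrary choice for the demo):

```
WIDTH = 32
MASK = (1 << WIDTH) - 1        # 0xFFFFFFFF

x = -1                         # all bits set in two's complement

arithmetic = x >> 4            # sign bit copied in from the left: still -1
logical = (x & MASK) >> 4      # zeros shifted in from the left

print(arithmetic)              # -1
print(hex(logical))            # 0xfffffff
```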
2018/03/21
2,638
7,134
<issue_start>username_0: In the picture below are three groups of touching squares where each individual square is numbered. [![enter image description here](https://i.stack.imgur.com/XaBTum.png)](https://i.stack.imgur.com/XaBTum.png) I have been able to use the spatial library ArcPy to construct the dictionary below which uses the square numbers as its keys and a list of numbers for the squares it touches as its values. For example, square 1 touches only square 4, square 4 touches squares 1 and 6, etc. ``` dict = {1: [4], 2: [3, 5], 3: [2, 5], 4: [1, 6], 5: [2, 3], 6: [4, 8], 7: [9, 10], 8: [6, 11], 9: [7, 10], 10: [7, 9], 11: [8]} ``` From the picture it is clear that there are three groups of touching squares, and so the result I am after is a new dictionary where the keys are the square numbers and the values are the touching group it belongs to. I'll name the touching groups using letters but these names can be anything so one possible solution would be: ``` newDict = {9:"A",10:"A",7:"A",1:"B",4:"B",6:"B",8:"B",11:"B",5:"C",2:"C",3:"C"} ``` Is there a Pythonic way to go from `dict` to `newDict`? I am using Python 2.7.14 for my testing.<issue_comment>username_1: Just for your consideration... solution with the Disjoint Set Union By Rank algorithm from CLRS. It's the most efficient disjoint set finding algorithm I know of. Essentially just view the problem as a disconnected graph, find the parent of each edge using union-find and associate them. The output associates different set identifiers than uniformly A-Z for each parent, but it's more efficient to generate the mapping of vertices to letters beforehand, instead of associating them after. You can have up to 26 disjoint sets in this way. For any more, you'll probably want to move to numeric identifiers. 
Complexity is `O( | d.keys() | * log(| d.values() |) )` ``` d = {1: [4], 2: [3, 5], 3: [2, 5], 4: [1, 6], 5: [2, 3], 6: [4, 8], 7: [9, 10], 8: [6, 11], 9: [7, 10], 10: [7, 9], 11: [8]} class MSet(object): def __init__(self, p): self.val = p self.p = self self.rank = 0 def parent_of(x): # recursively find the parents of x if x.p == x: return x.val else: return parent_of(x.p) def make_set(x): return MSet(x) def find_set(x): if x != x.p: x.p = find_set(x.p) return x.p def link(x,y): if x.rank > y.rank: y.p = x else: x.p = y if x.rank == y.rank: y.rank += 1 def union(x,y): link(find_set(x), find_set(y)) vertices = {k: make_set(k) for k in d.keys()} edges = [] for k,u in vertices.items(): for v in d[k]: edges.append((u,vertices[v])) # do disjoint set union find similar to kruskal's algorithm for u,v in edges: if find_set(u) != find_set(v): union(u,v) # resolve the root of each disjoint set parents = {} # generate set of parents set_parents = set() for u,v in edges: set_parents |= {parent_of(u)} set_parents |= {parent_of(v)} # make a mapping from only parents to A-Z, to allow up to 26 disjoint sets letters = {k : chr(v) for k,v in zip(set_parents, list(range(65,91)))} for u,v in edges: parents[u.val] = letters[parent_of(u)] parents[v.val] = letters[parent_of(v)] print(parents) ``` Output: ``` rpg711$ python disjoint_set_union_find {1: 'C', 2: 'B', 3: 'B', 4: 'C', 5: 'B', 6: 'C', 7: 'A', 8: 'C', 9: 'A', 10: 'A', 11: 'C'} ``` I sorted your expected dictionary to make it easier to correlate the set identifiers and check my work: ``` sorted(d.items(), key=lambda k: k[0]) [(1, 'B'), (2, 'C'), (3, 'C'), (4, 'B'), (5, 'C'), (6, 'B'), (7, 'A'), (8, 'B'), (9, 'A'), (10, 'A'), (11, 'B')] ``` In my proposed solution 'B' -> 'C', 'C' -> 'B', 'A' -> 'A', but notice that the set identifier that each vertex belongs in is just a remapping of your expected. PS: If there exist vertices that do not touch any other vertices (has no edges), the input dict should be generated or modified such that these vertices have an edge to itself. Upvotes: 3 [selected_answer]<issue_comment>username_2: i tried with recursive solution if you want you can try: ``` dict22 = {1: [4], 2: [3, 5], 3: [2, 5], 4: [1, 6], 5: [2, 3], 6: [4, 8], 7: [9, 10], 8: [6, 11], 9: [7, 10], 10: [7, 9], 11: [8]} def connected_nodes(dict34): final=[] for i,j in dict34.items(): def recursive_approach(dict1, tem, data,check=[], dict_9={}): if data!=None: dict_9.update({data:dict22[data]}) dict_9.update({tem: dict1[tem]}) check.append(tem) final.append(dict_9) if check.count(tem) > 1: return 0 for i, j in dict1.items(): if tem in dict1: return recursive_approach(dict1, tem=dict1[tem][-1],data=None) recursive_approach(dict22, tem=j[-1],data=i) return final bew=[] for i in connected_nodes(dict22): bew.append(list(i.keys())) new_bew=bew[:] final_result=[] for j,i in enumerate(bew): for m in new_bew: if set(i).issubset(set(m)) or set(m).issubset(set(i)): if len(i)>len(m): final_result.append(tuple(i)) new_bew.remove(m) else: final_result.append(tuple(m)) else: pass print(set(final_result)) ``` output: ``` {(2, 3, 5), (9, 10, 7), (1, 4, 6, 8, 11)} ``` Upvotes: 1 <issue_comment>username_3: You can use the [`networkx` library](https://networkx.github.io/documentation/networkx-1.9.1/tutorial/tutorial.html) to determine clusters or sub-groups of graphs. 
**Given** ``` from string import ascii_uppercase as uppercase import networkx as nx import matplotlib.pyplot as plt %matplotlib inline d = { 1: [4], 2: [3, 5], 3: [2, 5], 4: [1, 6], 5: [2, 3], 6: [4, 8], 7: [9, 10], 8: [6, 11], 9: [7, 10], 10: [7, 9], 11: [8] } ``` **Code** ``` G = nx.from_dict_of_lists(d) # Label sub-groups sub_graphs = list(nx.connected_component_subgraphs(G)) {val: label for label, sg in zip(uppercase, sub_graphs) for val in sg.nodes()} # {1: 'A', 2: 'B', 3: 'B', 4: 'A', 5: 'B', 6: 'A', 7: 'C', 8: 'A', 9: 'C', 10: 'C', 11: 'A'} ``` --- **Details** For easier visualization, here are the labeled sub-groups ([see also motivating code](https://stackoverflow.com/questions/17450521/networkx-finding-the-natural-clusters-of-points-on-a-graph)): ``` # Printed subgroups for label, sg in zip(uppercase, sub_graphs): print("Subgraph {}: contains {}".format(label, sg.nodes())) # Subgraph A: contains [8, 1, 11, 4, 6] # Subgraph B: contains [2, 3, 5] # Subgraph C: contains [9, 10, 7] ``` Although I would ultimately recommend a resulting dict of lists for cleaner grouping of data: ``` {label: sg.nodes() for label, sg in zip(uppercase, sub_graphs)} # {'A': [8, 1, 11, 4, 6], 'B': [2, 3, 5], 'C': [9, 10, 7]} ``` In addition, you can optionally plot these graphs: ``` # Plot graphs in networkx (optional) nx.draw(G, with_labels=True) ``` [![enter image description here](https://i.stack.imgur.com/8ZVi0.png)](https://i.stack.imgur.com/8ZVi0.png) Upvotes: 1
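If pulling in a library is not an option, the same grouping falls out of a plain breadth-first traversal over the adjacency dictionary; a minimal sketch (labels are handed out in iteration order, so the letters may differ from the expected output while the groups are the same):

```
from collections import deque
from string import ascii_uppercase

def label_components(adj):
    labels, letters = {}, iter(ascii_uppercase)
    for start in adj:
        if start in labels:
            continue                      # already in a labeled group
        label, queue = next(letters), deque([start])
        while queue:
            node = queue.popleft()
            if node not in labels:
                labels[node] = label
                queue.extend(adj[node])
    return labels

d = {1: [4], 2: [3, 5], 3: [2, 5], 4: [1, 6], 5: [2, 3], 6: [4, 8],
     7: [9, 10], 8: [6, 11], 9: [7, 10], 10: [7, 9], 11: [8]}
print(label_components(d))
# {1: 'A', 4: 'A', 6: 'A', 8: 'A', 11: 'A', 2: 'B', 3: 'B', 5: 'B', 7: 'C', 9: 'C', 10: 'C'}
```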
2018/03/21
507
1,619
<issue_start>username_0: How do I hide an entire table column in printing mode (using JS javascript:print())? Because I'm using Bootstrap I tried its .hidden-print class to hide the last column (Operation column) in printing mode:

```
| No | Name | Operation |
| --- | --- | --- |
| 1 | <NAME> | [Edit](edit.php) [Delete](delete.php) |
| 2 | Neo | [Edit](edit.php) [Delete](delete.php) |
| <-- continued -->
```

but it only hides the column's content, leaving an empty column behind, when what I need is for the TH and TD tags to be hidden as well. Is it possible to do this?<issue_comment>username_1:

```css
table,th,td {
  border: 1px solid;
  text-align: center;
}

table {
  border-collapse: collapse;
  width: 100%;
}

@media print {
  table,th,td {
    border: 0px
  }
  button {
    display: none;
  }
}
```

```html
| No | Name | Operation |
| --- | --- | --- |
| 1 | <NAME> | [Edit](edit.php) [Delete](delete.php) |
| 2 | Neo | [Edit](edit.php) [Delete](delete.php) |

Print
```

Upvotes: 3 [selected_answer]<issue_comment>username_2: There are issues with adding `display: none;` (which is what `.hidden-print` does) to table rows and cells because removing them from the document flow could very easily mess up the table formatting ([see this question](https://stackoverflow.com/questions/916528/how-can-i-hide-a-td-tag-using-inline-javascript-or-css)). What about using `visibility: hidden;` instead? In your case, you could try this, if you wanted to always target the last column:

```
@media print {
  th:last-child,
  td:last-child {
    visibility: hidden;
  }
}
```

Upvotes: 0
2018/03/21
857
2,576
<issue_start>username_0: I am trying to create horizontal bar plot and would like to fill the individual bar as per their log fold values, white for lowest value and darkest for highest value. However, I am not able to do it. Here is my work, can somebody help me fix it? ``` df <- data.frame(LogFold = c(14.20, 14.00, 8.13, 5.3, 3.8, 4.9, 1.3, 13.3, 14.7, 12.2), Tissue = c("liver", "adrenal", "kidney", "heart", "limb", "adipose", "brown", "hypothalamus", "arcuate", "lung")) df1<-df%>% arrange(desc(LogFold)) ggplot(data=df1, aes(x=Tissue, y=LogFold, fill = Tissue)) + geom_bar(stat="identity")+ scale_colour_gradient2()+ coord_flip()+ ylim(0, 15)+ scale_x_discrete(limits = df1$Tissue)+ theme_classic() ``` [![enter image description here](https://i.stack.imgur.com/Je85w.png)](https://i.stack.imgur.com/Je85w.png) Thank You in advance!<issue_comment>username_1: hope this works. ``` ggplot(data=df1, aes(x=Tissue, y=LogFold, fill = LogFold)) + geom_bar(stat="identity",color="black")+ scale_fill_gradient(low="white",high="darkred")+ coord_flip()+ ylim(0, 15)+ scale_x_discrete(limits = df1$Tissue)+ theme_classic() ``` Upvotes: 0 <issue_comment>username_2: Here's what you need to think about: * sorting the data frame values makes no difference to ggplot * in your code you are mapping fill color to Tissue, not to LogFold * the newer `geom_col` is less typing than `geom_bar(stat = ...)` * the limits to your scale are unnecessary as you specify all values for Tissue * if you want to fill with a gradient use `scale_fill_gradient2()` * fill = white will not be visible on the white background of `theme_classic` So you could try something like this: ``` library(tidyverse) df1 %>% ggplot(aes(reorder(Tissue, LogFold), LogFold)) + geom_col(aes(fill = LogFold)) + scale_fill_gradient2(low = "white", high = "blue", midpoint = median(df1$LogFold)) + coord_flip() + labs(x = "Tissue") ``` [![enter image description here](https://i.stack.imgur.com/wte09.png)](https://i.stack.imgur.com/wte09.png) But I don't know that the color gradient really adds anything much in terms of interpreting the information. So here's the result without it, you be the judge: ``` df1 %>% ggplot(aes(reorder(Tissue, LogFold), LogFold)) + geom_col() + coord_flip() + labs(x = "Tissue") + theme_classic() ``` [![enter image description here](https://i.stack.imgur.com/slglc.png)](https://i.stack.imgur.com/slglc.png) Upvotes: 3
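For readers working in Python rather than R, the same value-to-color mapping can be sketched with matplotlib; the colormap choice is arbitrary and the data is copied from the question:

```
import matplotlib.pyplot as plt
from matplotlib import cm
from matplotlib.colors import Normalize

tissue = ["liver", "adrenal", "kidney", "heart", "limb",
          "adipose", "brown", "hypothalamus", "arcuate", "lung"]
logfold = [14.2, 14.0, 8.13, 5.3, 3.8, 4.9, 1.3, 13.3, 14.7, 12.2]

# Sort the bars by value, then map each value into [0, 1] for the colormap.
order = sorted(range(len(logfold)), key=lambda i: logfold[i])
norm = Normalize(vmin=min(logfold), vmax=max(logfold))
colors = [cm.Blues(norm(logfold[i])) for i in order]

plt.barh([tissue[i] for i in order], [logfold[i] for i in order], color=colors)
plt.xlabel("LogFold")
plt.tight_layout()
plt.show()
```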
2018/03/21
1,050
4,154
<issue_start>username_0: I cannot uncheck the initial RadioButton after setting a default checked button.

Problem situation :: XML Code
-----------------------------

```
```

### Problem situation :: Initial state -> Works well

[![enter image description here](https://i.stack.imgur.com/ZdXfX.png)](https://i.stack.imgur.com/ZdXfX.png)

### Problem situation :: When I touch the Home button......

[![enter image description here](https://i.stack.imgur.com/El0SJ.png)](https://i.stack.imgur.com/El0SJ.png)

I cannot understand why this happens. I already read [this article](https://stackoverflow.com/questions/11505262/android-radiobutton-isnt-uncheck-with-initial-checked-state) from Stack Overflow, but it was not helpful to me and it is also quite an old post.

What I tried in order to solve it
---------------------

### Removing 1 line from the XML

I removed the line `android:checkedButton="@+id/attendanceRadioButton"` and it works well (I mean the two buttons are never checked simultaneously), but then there is no default checked button, so this is not what I want.

### Adding code in onCreateView()

I already added these 2 lines in `onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState)`, but they did not help. (These views are actually in a Fragment.)

```
stateRadioGroup.clearCheck();
attendanceRadioButton.setChecked(true);
```

Same results as in the images above.

My Android Studio system (config)
--------------------------------

Android Studio : 3.0.1
Gradle : 4.1
compileSdkVersion : 27
minSdkVersion : 21
targetSdkVersion : 23
Running Device : LG V10, Android 7.0

I think I found the real possible reason
====================================

I made a new project to isolate this problem. I think **this problem is related to Fragment**, but I don't know how to solve it. So, please help me :(

In this situation, the problem above happens.
-----------------------------------------------

There are a `MainActivity` and a `RadioButtonFragment`, and I inflate the Fragment from `MainActivity` with the code below.

### activity_main.xml

```
```

### MainActivity.java

```
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    FragmentManager fm = getFragmentManager();
    FragmentTransaction fragmentTransaction = fm.beginTransaction();
    fragmentTransaction.add(R.id.fragmentLayout, new RadioButtonFragment());
    fragmentTransaction.commit();

    setContentView(R.layout.activity_main);
}
```

### fragment_radio_button.xml

```
```

### RadioButtonFragment.java

```
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
    // Inflate the layout for this fragment
    View view = inflater.inflate(R.layout.fragment_radio_button, container, false);
    return view;
}
```

Problem explanation
---------------

It works well when I just create a RadioGroup and some RadioButtons in an Activity. But when the RadioButtons are created in a Fragment, the two-buttons-checked problem happens. Is there any idea how to solve this?

### What I import
Not `android.support.v4.app.Fragment` But `android.app.Fragment` In MainActivity.java ``` import android.app.FragmentManager; import android.app.FragmentTransaction; import android.os.Bundle; import android.support.v7.app.AppCompatActivity; ``` In RadioButtonFragment.java ``` import android.app.Fragment; import android.os.Bundle; import android.view.LayoutInflater; import android.view.View; import android.view.ViewGroup; import android.widget.RadioButton; import android.widget.RadioGroup; ```<issue_comment>username_1: Try this, **remove** below code from your **onCreateView()** method ``` stateRadioGroup.clearCheck(); attendanceRadioButton.setChecked(true); ``` And **add** the following in your xml. ``` ``` Upvotes: 2 <issue_comment>username_2: Try not setting check attribute for radioButton. Instead, add this to your radioGroup ``` android:checkedButton="@+id/attendanceRadioButton" ``` Edit: Also, you dont have to init your radioButtons, you can listen to it using your radioGroup Upvotes: 1
2018/03/21
384
1,187
<issue_start>username_0: I have an empty list and I want to append to it a raw string. For example, I have list1 = [] and I want to append the following string '\"quote\"', so I would get ['\"quote\"']. However when I do the following

```
list1 = []
text = '\"quote\"'
list1.append(text)
list1
```

I get ['"quote"'] when I do the following

```
list2 = []
text = r'\"quote\"'
list2.append(text)
list2
```

I get ['\\"quote\\"'] and when I do the following

```
list3 = []
text = '\"quote\"'.replace('\"','\\"')
list3.append(text)
list3
```

I get ['\\"quote\\"'] How can I get ['\"quote\"']? I use Jupyter and Python 3.5.2. Thanks!
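The confusion here is between a string's value and its repr, which is what Jupyter echoes; `'\"quote\"'` and `'"quote"'` are literally the same seven-character string, so the first result already is what was asked for. A short sketch, verifiable in any Python 3 session:

```
text = '\"quote\"'            # \" in a normal literal is just "
print(text == '"quote"')      # True: same 7-character string
print(text)                   # "quote"
print(repr(text))             # '"quote"'  (what the REPL echoes)

raw = r'\"quote\"'            # raw string: the backslashes are kept
print(len(text), len(raw))    # 7 9
print([raw])                  # ['\\"quote\\"']  (repr doubles backslashes)
```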
2018/03/21
310
1,157
<issue_start>username_0: I'm trying to understand the details of WCAG AA compliance. Would the unselected checkboxes in this image be considered too low-contrast to be compliant?

[![Heading Yesterday with a group of checkboxes: Last 7 days, Last 30 days, All time, Custom Range. The Last 7 days checkbox is checked. The unselected checkboxes appear to be low contrast.](https://i.stack.imgur.com/vXvrUm.png)](https://i.stack.imgur.com/vXvrU.png)

Is there anything we can do (other than change the color) to make it more compliant? If tab-focus makes the contrast go up, is that enough? Or do we really just need to boost the contrast?
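Contrast questions like this can be answered numerically: WCAG defines a relative-luminance formula, and WCAG 2.1's non-text contrast criterion (1.4.11, level AA) asks UI components such as checkbox borders to reach at least 3:1 against adjacent colors. A sketch of the calculation in Python (the sample colors are hypothetical; substitute the actual border and background colors from the design):

```
def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(a, b):
    hi, lo = sorted((luminance(a), luminance(b)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

# Hypothetical light-grey checkbox border on a white background:
print(round(contrast_ratio((200, 200, 200), (255, 255, 255)), 2))
# 1.67, which would fail a 3:1 requirement
```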
2018/03/21
629
2,563
<issue_start>username_0: Java is saying there is an error in the code below and I can't find it. I have indicated the 2 problem lines with comments right above them reading "Problem line", and I have indicated the method closely related to the problem lines with a "Closely related" comment right above it. The error reads that it "can't resolve the symbol processed" and I don't know why not, since I have in fact defined the variable (where I have marked problem line 1). It really stumped me when I noticed that it gives a warning that the "variable processed is never used" in the spot where I define it, even as it tells me that it doesn't know where it came from at the place where I try to use it. p.s., I do have a Scanner imported; it's just that Stack Overflow cut that part when the code was uploaded.

```
public class sub {
    String NameNum;
    int FirstNum, SecondNum, ThirdNum;

    public sub(){
        Scanner input = new Scanner(System.in);
        System.out.println("Enter List Name");
        NameNum = input.next();
        System.out.println("Enter number");
        FirstNum = input.nextInt();
        System.out.println("Enter number");
        SecondNum = input.nextInt();
        System.out.println("Enter number");
        ThirdNum = input.nextInt();
        // Problem Line 1 of 2 below
        int[] processed = process(FirstNum, SecondNum, ThirdNum);
    }

    public void print(){
        System.out.println(NameNum + ": " + FirstNum + " " + SecondNum + " " + ThirdNum);
    }

    //Closely related to Problem lines (method below, 3 lines in all)
    int[] process(int FirstNum, int SecondNum, int ThirdNum) {
        int [] processed = {FirstNum, SecondNum * 2, ThirdNum * 3};
        return processed;
    }

    public void printProcessed(){
        //Problem line 2 of 2 below
        System.out.println(NameNum + " Processed: " + processed[0] + processed[1] + processed[2]);
    }
}
```
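The error is about scope: `processed` is a local variable inside the constructor, so `printProcessed()` cannot see it, and the usual fix is to store the result in a field of the object. Since the worked examples added to this collection are in Python, here is the same idea illustrated there (the class is hypothetical, a minimal translation of the structure above):

```
class Sub:
    def __init__(self, first, second, third):
        # A plain local would disappear when __init__ returns;
        # binding the result to self makes it visible to other methods.
        self.processed = [first, second * 2, third * 3]

    def print_processed(self):
        print("Processed:", self.processed)

Sub(1, 2, 3).print_processed()   # Processed: [1, 4, 9]
```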
2018/03/21
536
1,901
<issue_start>username_0: I have a table with the following columns: item (varchar), date (date), revision (datetime), value (numeric). I have this SQL statement that finds the last revision for each item:

```
select item, max(revision) as 'last_revision'
from [mytable]
group by item
ORDER BY last_revision ASC
```

However I am looking to get the last 2 revisions for each item: item | last_revision | previous_revision. I have tried this, which fails miserably:

```
select TOP 2 item, revision as 'last_revision'
from [myTable]
group by item
ORDER BY last_revision ASC
```

> Column 'myTable.revision' is invalid in the select list because it is not contained in either an aggregate function or the GROUP BY clause.

<issue_comment>username_1: Use `ROW_NUMBER`:

```
SELECT item, revision AS last_revision
FROM (
    SELECT item, revision,
           ROW_NUMBER() OVER (PARTITION BY item ORDER BY revision) rn
    FROM [myTable]
) t
WHERE rn <= 2;
```

SQL Server's `TOP` operator just limits the number of records returned for the entire result set, not for each group in your aggregation. You really should use analytic functions here. Upvotes: 0 <issue_comment>username_2: You appear to want `top (4)` with `ties`

```
select top (4) with ties item, revision
from mytable t
order by row_number() over (partition by item order by revision desc)
```

Upvotes: 0 <issue_comment>username_3: Try these queries. Both of them should work.

```
select item, revision
from (
    select item, revision,
           rn = row_number() over (partition by item order by revision desc)
    from [mytable]
    group by item, revision
) t
where rn <= 2
```

Or

```
select distinct item, revision
from (
    select item, revision,
           rn = dense_rank() over (partition by item order by revision desc)
    from [mytable]
) t
where rn <= 2
```

Upvotes: 2 [selected_answer]
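The window-function answers can be experimented with without a SQL Server instance, since SQLite (3.25 or newer) accepts the same `ROW_NUMBER() OVER (PARTITION BY ...)` syntax; a quick self-contained check in Python with invented rows:

```
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE mytable (item TEXT, revision TEXT)")
con.executemany("INSERT INTO mytable VALUES (?, ?)", [
    ("a", "2018-01-01"), ("a", "2018-02-01"), ("a", "2018-03-01"),
    ("b", "2018-01-15"), ("b", "2018-02-15"),
])

rows = con.execute("""
    SELECT item, revision FROM (
        SELECT item, revision,
               ROW_NUMBER() OVER (PARTITION BY item ORDER BY revision DESC) rn
        FROM mytable
    ) t WHERE rn <= 2
""").fetchall()
print(rows)   # the two newest revisions per item
```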
2018/03/21
560
2,074
<issue_start>username_0: When I run this code,

```
public static void read_all_lines(){
    String file_name = "input.txt";
    File input_file = new File(file_name);
    Scanner in_file = null;

    try{
        in_file = new Scanner(input_file);
    }
    catch(FileNotFoundException ex){
        System.out.println("Error: This file doesn't exist");
        System.exit(0);
    }

    while(in_file.hasNextLine()){
        String line = in_file.nextLine();
        System.out.println(line);
    }
    in_file.close();
}
```

which is supposed to read all the lines in a .txt file and print them on the screen, the FileNotFoundException is thrown. It catches it and prints out the error message with no problem. But the file does exist; I made two files, input and input.txt, but the exception is still thrown. [This is the file directory where the files and project are.](https://i.stack.imgur.com/Gkdm8.png)
2018/03/21
460
1,682
<issue_start>username_0: In numpy, when I have a vector with size `(m,)` and another vector with size `(m, 1)`, adding the two gives a matrix with size `(m, m)`. However, I just want the vector result. How do I go about doing this? Edit: I am in fact using tensorflow and the `(m, 1)` vector is actually `(m, ?)`. However, when I am adding biases to the dot product between the weights `(n, m) . (m, ?)` I get a `(m, m)` matrix which essentially replicates the bias across the rows. How do I resolve this issue?
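The behaviour described is numpy broadcasting: a `(m,)` vector broadcast against a `(m, 1)` column produces an outer-style `(m, m)` result. A sketch of both the effect and the usual fix, reshaping so the shapes agree before adding (facts checkable in any numpy session):

```
import numpy as np

a = np.arange(3)                # shape (3,)
b = np.arange(3).reshape(3, 1)  # shape (3, 1)

print((a + b).shape)            # (3, 3): broadcast outer-style sum

# To get an elementwise vector result, make the shapes agree first:
print((a.reshape(3, 1) + b).shape)  # (3, 1)
print((a + b.ravel()).shape)        # (3,)
```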
2018/03/21
403
1,555
<issue_start>username_0: I have this code for parsing RSS to HTML.

```php
$channel_desc = $channel->getElementsByTagName('description')
                        ->item(0)
                        ->childNodes
                        ->item(0)
                        ->nodeValue;
```

I want to get only 150 characters from the first description. How can I limit the length of the output?
2018/03/21
443
1,524
<issue_start>username_0: Would it be possible to alert and block submission of a simple form if it has a URL in it? Or, to be more specific, I need it to block any of these input types:

> https //domain.com/...

or

> www domain.com//

or

> domain.com

The form must accept anything else except anything like a URL.
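Whether the check runs in the browser or server-side, the usual approach is a deliberately rough pattern match; a Python sketch of the idea (the pattern is a heuristic invented for illustration, not a complete URL grammar):

```
import re

# Flags scheme prefixes, www-style hosts, and bare domain.tld tokens.
URL_LIKE = re.compile(r"(https?\s*:?//|www[\s.]\S+|\b\w+\.(?:com|net|org|io)\b)", re.I)

for text in ["hello world", "visit domain.com", "https //domain.com/x", "plain text"]:
    print(text, "->", "blocked" if URL_LIKE.search(text) else "ok")
```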
2018/03/21
382
1,544
<issue_start>username_0: I am an experienced BizTalk developer who is now moving on to Azure logic apps. I have installed Visual Studio 2017 and added the "Azure Logic Apps Tools for Visual Studio" through the **Tools|Extensions and Updates** menu. However, I don't see an option for creating/editing XML schemas(.xsd files), I don't see the BizTalk EDI X12 schemas and there is no option for creating a map. What am I missing? I have searched and searched with no luck. Thanks.<issue_comment>username_1: XSD & Maps are part of the Azure Integration Account service. You need the Microsoft Azure Logic Apps Enterprise Integration Tools in order to use those. You will see that the mapper is very similar to what you know from the BizTalk-world. I'm not sure Microsoft Azure Logic Apps Enterprise Integration Tools is already supported in VS 2017. I know it's supported on VS 2015. On a side note, xsd's and maps created using a BizTalk Server Project will also work fine in an Azure Integration Account and Logic Apps,but might lead to a more complex ALM-story. Upvotes: 2 <issue_comment>username_2: Right now **there is no tooling available for Visual Studio 2017**, only for Visual Studio 2015. The tools are based on the BizTalk components and those components are only available for Visual Studio 2015. You can download EDI schemas from Open Source Github, but only EANCOM and EDIFACT, X12 messages have been removed due to licensing questions <https://github.com/Microsoft/Integration/tree/master/BizTalk%20Server/Schema> Upvotes: 0
2018/03/21
291
1,133
<issue_start>username_0: I have a couple questions in Kafka. 1) Does Kafka have a default web UI? 2) How can we gracefully shutdown a standalone kafka server, kafka console- consumer/console-producer. Any solutions will be highly appreciated. Thank you.<issue_comment>username_1: 1) No Kafka does not have a default UI. There are however a number of third party tools that can graphically display Kafka resources. Just Google for `kafka ui` and pick the tool that displays what you want and you like the most. 2) To gracefully shutdown a Kafka broker, just send a `SIGTERM` to the Kafka process and it will properly shutdown. This can be done via the `./bin/kafka-server-stop.sh` tool. If it's part of a cluster, new leaders will be elected on other brokers, otherwise it will simply cleanly close all its resources. Note that depending on the number of partitions, this can take a few minutes. Upvotes: 4 [selected_answer]<issue_comment>username_2: You can try *Landoop* Kafka UI: <https://github.com/Landoop/fast-data-dev> They provide a nice Web-UI for Kafka topics, Avro schemata, Kafka Connect and much more. Upvotes: 2
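The `SIGTERM` advice generalizes to any long-running consumer process; "graceful" means the signal is caught, in-flight work is finished, and resources are released before exit. A minimal Python sketch of that pattern (illustrative only, not Kafka's actual shutdown code):

```
import signal
import time

running = True

def handle_sigterm(signum, frame):
    global running
    running = False            # ask the main loop to wind down

signal.signal(signal.SIGTERM, handle_sigterm)

while running:
    time.sleep(0.1)            # stand-in for polling/consuming messages

# Reached only after SIGTERM: flush buffers, commit offsets, close sockets.
print("cleaning up and exiting")
```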
2018/03/21
508
1,769
<issue_start>username_0: I have searched for this error, in the context of jQuery and DataTables, and none of the suggestions worked (in case this is marked duplicate immediately!) I have a .NET project (C#) and in the master page I have references to jQuery, DataTables and the date picker (among other things):

```
```

In a page that uses the master page I have a DataTable and a date picker. I also try to make the modal popup's title draggable. Using any of them causes an "Object does not support property or method xxxx" error, where xxxx can be "draggable", "dataTable" or "datepicker". I can't see why I get the errors when I am referencing jQuery. I even tried the ".noConflict" approach but it didn't help either.

```
function pageLoad() {
    $('#tbStartDate').unbind();
    $("#tbStartDate").datepicker({
        showOtherMonths: true,
        selectOtherMonths: true,
        showOn: "button",
        buttonImage: "../assets/images/calendar.gif",
        buttonImageOnly: true,
        buttonText: "Select Start Date",
        option: "mm/dd/yy",
        onSelect: function (dateStr) {
            $("#tbEndDate").val(dateStr);
            $("#tbEndDate").datepicker("option", { minDate: new Date(dateStr) })
        }
    });
}
```

or

```
$(document).ready(function () {
    var j$ = jQuery.noConflict();
    j$(".modal-dialog").draggable({ handle: ".modal-header" });
})
```

<issue_comment>username_1: Try...

```
$(window).load(function(){

})
```

instead of document.ready. Upvotes: 1 <issue_comment>username_2: In the "Scripts" folder of the project, I had jquery-1.10.2.js and jquery-1.10.2.min.js while in my master page I was referencing jQuery 1.12.4. Once I removed the 1.10 scripts from the "Scripts" folder, the error went away. Thank you username_1 and Brahma for taking the time to respond; I up-voted your responses. Upvotes: 0
2018/03/21
1,439
3,974
<issue_start>username_0: I have a menu that spans across the page. I have its wrapper set to 100% and correctly, it spans completely across its container. I have for menu items set at 25% each however, there is around 20px space to the left of the menu revealing the background color. Regardless of sizes and margin:0 auto, the space remains. ```css .center{ margin:0 auto; width:100%; height:100%; background-color:#3c87bc; padding:0; } #top_forums_menu{ width:100%; height:24px; background-color:#5dbcff; position:relative; } ul#top_forums_menu_bars { list-style-type: none; margin:0 auto; } ul#top_forums_menu_bars li { float: left; width:25%; height: 100%; overflow:hidden; -webkit-box-shadow: 0 6px 2px -4px black; -moz-box-shadow: 0 6px 2px -4px black; box-shadow: 0 6px 2px -4px black; transition: all .25s ease-in-out; -moz-transition: all .25s ease-in-out; -webkit-transition: all .25s ease-in-out; } ul#top_forums_menu_bars li:hover{ border-bottom:5px solid #3a5871; } ul#top_forums_menu_bars li a{ display: inline-block; width: 100%; height: 100%; opacity:1; text-decoration:none; transition: opacity .5s ease-in-out; -moz-transition: opacity .5s ease-in-out; -webkit-transition: opacity .5s ease-in-out; line-height:10px; padding-top:7px; text-align:center; font-family: Tahoma, Verdana, Segoe, sans-serif; font-size:12px; color:white; } ul#top_forums_menu_bars li a:hover { opacity:.8; } ``` ```html * Profile * Account Settings * My Content * MyOACH ```<issue_comment>username_1: has a default padding which can be removed with the following rule: ``` ul#top_forums_menu_bars { padding-left: 0; } ``` Upvotes: 1 <issue_comment>username_2: The problem is that elements (along with and elements have a built-in browser rendering of `padding-start: 40px`. This is what is causing the offset that you see. The easiest way to resolve this is to explicitly give your `ul#top_forums_menu_bars` a `padding` of `0`, as can be seen in the following: ```css .center { margin: 0 auto; width: 100%; height: 100%; background-color: #3c87bc; padding: 0; } #top_forums_menu { width: 100%; height: 24px; background-color: #5dbcff; position: relative; } ul#top_forums_menu_bars { list-style-type: none; margin: 0 auto; padding: 0; } ul#top_forums_menu_bars li { float: left; width: 25%; height: 100%; overflow: hidden; -webkit-box-shadow: 0 6px 2px -4px black; -moz-box-shadow: 0 6px 2px -4px black; box-shadow: 0 6px 2px -4px black; transition: all .25s ease-in-out; -moz-transition: all .25s ease-in-out; -webkit-transition: all .25s ease-in-out; } ul#top_forums_menu_bars li:hover { border-bottom: 5px solid #3a5871; } ul#top_forums_menu_bars li a { display: inline-block; width: 100%; height: 100%; opacity: 1; text-decoration: none; transition: opacity .5s ease-in-out; -moz-transition: opacity .5s ease-in-out; -webkit-transition: opacity .5s ease-in-out; line-height: 10px; padding-top: 7px; text-align: center; font-family: Tahoma, Verdana, Segoe, sans-serif; font-size: 12px; color: white; } ul#top_forums_menu_bars li a:hover { opacity: .8; } ``` ```html * Profile * Account Settings * My Content * MyOACH ``` Alternatively, you could apply a **global** low-**[specificity](https://css-tricks.com/specifics-on-css-specificity/)** rule of: ``` * { margin: 0; padding: 0; } ``` This will override all of the browser padding / margin quirks, while still having a low enough specificity for any individual selectors to override these rules. 
Upvotes: 0 <issue_comment>username_3: Just remove the left-padding ``` ul#top_forums_menu_bars { list-style-type: none; margin: 0 auto; padding-left: 0; } ``` <https://jsfiddle.net/36ywxtyk/4/> Upvotes: 1 [selected_answer]
2018/03/21
417
1,478
<issue_start>username_0: I have a TextField named `custDoe_TextField` and I have set the editability disabled as follows: ``` custDoe_TextField.setEditable(false); ``` Now I want to check if the editability is disabled or not and I planned to use `if` function as follows: ``` if (custDoe_TextField.setEditable == false) { do something here } ``` It is showing me an error, any tips on this?<issue_comment>username_1: You can use [`isEditable`](https://docs.oracle.com/javase/7/docs/api/java/awt/TextComponent.html). for example ``` if (custDoe_TextField.isEditable()) {//isEditable returns boolean //do something here } ``` Upvotes: 1 <issue_comment>username_2: Use `isEditable()` instead. `setEditable` is a *setter* method, you can use it to set the *editable* property but not to retrieve it. Also, use `!` instead of `== false` to check a boolean for`false`. ``` if(!custDoe_TextField.isEditable()) { // … } ``` Upvotes: 2 [selected_answer]<issue_comment>username_3: Java follows good naming convention for setters and getters for instance variables. For eg. `int x;` Setter - `public void setX(int valueToSet)` Getter - `public int getX()` For boolean values the convention modifies for getter to ensure good readability. For eg. `boolean editable;` Setter- `public void setEditable(boolean valueToSet)` Getter- `public boolean isEditable()` You need to use getter i.e., `isEditable()` to get the boolean property value of the instance. Upvotes: 2
2018/03/21
2,444
9,356
<issue_start>username_0: I was working on a project where we wanted to validate that a parameter could actually be raised as an exception if necessary. We went with the following: ``` def is_raisable(exception): funcs = (isinstance, issubclass) return any(f(exception, BaseException) for f in funcs) ``` This handles the following use cases, which meet our needs (for now): ``` is_raisable(KeyError) # the exception type, which can be raised is_raisable(KeyError("key")) # an exception instance, which can be raised ``` It fails, however, for old-style classes, which are raisable in old versions (2.x). We tried to then solve it this way: ``` IGNORED_EXCEPTIONS = [ KeyboardInterrupt, MemoryError, StopIteration, SystemError, SystemExit, GeneratorExit ] try: IGNORED_EXCEPTIONS.append(StopAsyncIteration) except NameError: pass IGNORED_EXCEPTIONS = tuple(IGNORED_EXCEPTIONS) def is_raisable(exception, exceptions_to_exclude=IGNORED_EXCEPTIONS): funcs_to_try = (isinstance, issubclass) can_raise = False try: can_raise = issubclass(exception, BaseException) except TypeError: # issubclass doesn't like when the first parameter isn't a type pass if can_raise or isinstance(exception, BaseException): return True # Handle old-style classes try: raise exception except TypeError as e: # It either couldn't be raised, or was a TypeError that wasn't # detected before this (impossible?) return exception is e or isinstance(exception, TypeError) except exceptions_to_exclude as e: # These are errors that are unlikely to be explicitly tested here, # and if they were we would have caught them before, so percolate up raise except: # Must be bare, otherwise no way to reliably catch an instance of an # old-style class return True ``` This passes all of our tests, but it isn't very pretty, and still feels hacky if we're considering things that we wouldn't expect the user to pass in, but might be thrown in there anyway for some other reason. ``` def test_is_raisable_exception(self): """Test that an exception is raisable.""" self.assertTrue(is_raisable(Exception)) def test_is_raisable_instance(self): """Test that an instance of an exception is raisable.""" self.assertTrue(is_raisable(Exception())) def test_is_raisable_old_style_class(self): """Test that an old style class is raisable.""" class A: pass self.assertTrue(is_raisable(A)) def test_is_raisable_old_style_class_instance(self): """Test that an old style class instance is raisable.""" class A: pass self.assertTrue(is_raisable(A())) def test_is_raisable_excluded_type_background(self): """Test that an exception we want to ignore isn't caught.""" class BadCustomException: def __init__(self): raise KeyboardInterrupt self.assertRaises(KeyboardInterrupt, is_raisable, BadCustomException) def test_is_raisable_excluded_type_we_want(self): """Test that an exception we normally want to ignore can be not ignored.""" class BadCustomException: def __init__(self): raise KeyboardInterrupt self.assertTrue(is_raisable(BadCustomException, exceptions_to_exclude=())) def test_is_raisable_not_raisable(self): """Test that something not raisable isn't considered rasiable.""" self.assertFalse(is_raisable("test")) ``` Unfortunately we need to continue to support both Python 2.6+ (soon Python 2.7 only, so if you have a solution that doesn't work in 2.6 that's fine but not ideal) and Python 3.x. Ideally I'd like to do that without an explicit test for the version, but if there isn't a way to do it otherwise then that's fine. Ultimately, my questions are: 1. 
Is there an easier way to do this and support all listed versions? 2. If not, is there a better or safer way to handle the "special exceptions", e.g. `KeyboardInterrupt`. 3. To be most Pythonic I'd like to ask forgiveness rather than permission, but given that we could get two types of `TypeError` (one because it worked, and one because it didn't) that feels odd as well (but I have to fall back on that anyway for 2.x support).<issue_comment>username_1: The way you test most things in Python is to `try` then and see whether you get an exception. That works fine for `raise`. If something isn't raisable, you will get a `TypeError`; otherwise, you will get what you raised (or an instance of what you raised). That will work for 2.6 (or even 2.3) just as well as 3.6. Strings as exceptions in 2.6 will be raisable; types that don't inherit from `BaseException` in 3.6 will be not raisable; etc.—you get the right result for everything. No need to check `BaseException` or handle old-style and new-style classes differently; just let `raise` do what it does. Of course we do need to special-case `TypeError`, because it'll land in the wrong place. But since we don't care about pre-2.4, there's no need for anything more complicated than an `isinstance` and `issubclass` test; there are no weird objects that can do anything other than return `False` anymore. The one tricky bit (which I initially got wrong; thanks to username_2 for catching it) is that you have to do the `isinstance` test first, because if the object is a `TypeError` instance, `issubclass` will raise `TypeError`, so we need to short-circuit and return `True` without trying that. The other issue is handling any special exceptions that we don't want to accidentally capture, like `KeyboardInterrupt` and `SystemError`. But fortunately, [these all go back to before 2.6](https://docs.python.org/release/2.6/library/exceptions.html). And both [`isinstance`/`issubclass`](https://docs.python.org/release/2.6/library/functions.html#isinstance) and [`except` clauses](https://docs.python.org/release/2.6/reference/compound_stmts.html#the-try-statement) (as long as you don't care about capturing the exception value, which we don't) can take tuples with syntax that also works in 3.x. Since it's required that we return `True` for those cases, we need to test them before trying to raise them. But they're all `BaseException` subclasses, so we don't have to worry about classic classes or anything like that. So: ``` def is_raisable(ex, exceptions_to_exclude=IGNORED_EXCEPTIONS): try: if isinstance(ex, TypeError) or issubclass(ex, TypeError): return True except TypeError: pass try: if isinstance(ex, exceptions_to_exclude) or issubclass(ex, exceptions_to_exclude): return True except TypeError: pass try: raise ex except exceptions_to_exclude: raise except TypeError: return False except: return True ``` This doesn't pass your test suite as written, but I think that's because some of your tests are incorrect. I'm assuming you want `is_raisable` to be true for objects that are raisable *in the current Python version*, not objects that are raisable *in any supported version* even if they aren't raisable in the current version. You wouldn't want `is_raisable('spam')` to return `True` in 3.6 and then attempting to `raise 'spam'` would fail, right? So, off the top of my head: * The `not_raisable` test raises a string--but those are raisable in 2.6. 
* The `excluded_type` test raises a class, which Python 2.x *may* handle by instantiating the class, but it isn't required to, and CPython 2.6 has optimizations that will trigger in this case. * The `old_style` tests raise new-style classes in 3.6, and they're not subclasses of `BaseException`, so they're not raisable. I'm not sure how to write correct tests without writing separate tests for 2.6, 3.x, and maybe even 2.7, and maybe even for different implementations for the two 2.x versions (although probably you don't have any users on, say, Jython?). Upvotes: 3 [selected_answer]<issue_comment>username_2: If you want to detect old-style classes and instances, just make an explicit check for them: ``` import types if isinstance(thing, (types.ClassType, types.InstanceType)): ... ``` You'll probably want to wrap this in some sort of version check so it doesn't fail on Python 3. Upvotes: 1 <issue_comment>username_3: You can raise the object, catch the exception and then use the `is` keyword to check that the raised exception is the object or an instance of the object. If anything else was raised, it was a `TypeError` meaning the object was not raisable. Furthermore, to handle absolutely any raisable object, we can use [`sys.exc_info`](https://docs.python.org/2/library/sys.html#sys.exc_info). This will also catch exceptions such as `KeyboardInterrupt`, but we can then reraise them if the comparison with the argument is inconclusive. ``` import sys def is_raisable(obj): try: raise obj except: exc_type, exc = sys.exc_info()[:2] if exc is obj or exc_type is obj: return True elif exc_type is TypeError: return False else: # We reraise exceptions such as KeyboardInterrupt that originated from outside raise is_raisable(ValueError) # True is_raisable(KeyboardInterrupt) # True is_raisable(1) # False ``` Upvotes: 2
2018/03/21
884
2,997
<issue_start>username_0: I am posting on Postman to my api route (in api.php) and when the data fails validation, it returns the errors in a 200 response, like this: [![enter image description here](https://i.stack.imgur.com/4CeCE.png)](https://i.stack.imgur.com/4CeCE.png) The above response is when I try:

```
{
    $valid = validator($request->only('email', 'password'), [
        'email' => 'required|string|email|max:255|unique:users',
        'password' => '<PASSWORD>',
    ]);
    if ($valid->fails()) {
        $jsonError = response()->json($valid->errors()->all(), 400);
        return \Response::json($jsonError);
    }
}
```

--- However, when I try the one in the [documentation](https://laravel.com/docs/5.6/validation) like below, it returns me to the view (the Laravel welcome page), as if it did a `return back()`:

```
$request->validate([
    'email' => 'required|string|email|max:255|unique:users',
    'password' => '<PASSWORD>',
]);
```

And the data I post is:

```
email: ''
password: ''
// also tried with and without header Content-Type: application/json
```

<issue_comment>username_1: This code is problematic.

```
$jsonError = response()->json($valid->errors()->all(), 400);
return \Response::json($jsonError);
```

`$jsonError` is already a JsonResponse object, and then you've encapsulated it again inside another JsonResponse object. The `json` method creates a new JsonResponse object; here is the underlying code.

```
public function json($data = [], $status = 200, array $headers = [], $options = 0)
{
    return new JsonResponse($data, $status, $headers, $options);
}
```

So when you pass `$jsonError` here, it becomes the `$data` argument. Now, the `$status` argument has a default value of `200`, but you **DIDN'T** pass any, as per your code:

```
return \Response::json($jsonError);
//                     ^-- yeah, no $status argument here!
```

So it is correct that you will have a 200 response. To fix your issue, just return the first JsonResponse object you've created.

```
return response()->json($valid->errors()->all(), 400);
```

If you still want to stick with your current code, then do this. But this is pointless.

```
$jsonError = response()->json($valid->errors()->all(), 400);
return \Response::json($jsonError, 400);
```

Also, `response()->json()` is just the same as `Response::json()` :) Upvotes: 3 [selected_answer]<issue_comment>username_2: Do you have the Accept: application/json header? Please add **Accept: application/json** to your headers. Upvotes: 2 <issue_comment>username_3: There is another reason for such behavior: in your validation rules you can use an array for any input key instead of the "|" separator, but if you mix the array form and the "|" separator, you will always get status code 200 instead of 422. Example of wrong code:

```
'first_name' => [
    'bail',
    'required|string', // this should be replaced by 'required', 'string'
    'min:3',
    'max:15',
],
```

Upvotes: 0
2018/03/21
779
2,785
<issue_start>username_0: appending a form field value to the next url. I am trying to pass a form field value on form submission to the next page URL. So far: **The form snippet:** ``` <NAME> ``` At the script section where the url for the next page is created:

```
var fn = document.getElementById('first_name').value;

function update() {
    fn = document.getElementById('first_name').value;
}

document.getElementById('thankPage').value = 'https://example.com/thank' + fn;
```

The fn is currently not being passed to the next url. However, if I pre-fill the first_name field, the fn is passed correctly. So I am assuming the fn value being used is not updated and it's using its initial value. I tried initializing fn outside update() so it's a global var. Unless I am mistaken? And I am assuming that when '+ fn' occurs, it's using the updated value of fn, which currently is not the case.
2018/03/21
624
2,216
<issue_start>username_0: I've recently published my very first app to Play Store. I asked a friend to download and test it, and it crashed on his phone. However, I have the exact same phone as him with the same version of Android, and when I tested it the app ran without error. So what do I need to know about a device to re-create the bug?
2018/03/21
964
3,059
<issue_start>username_0: I am reading a snippet of TensorFlow [code](https://github.com/davidsandberg/facenet/blob/28d3bf2fa7254037229035cac398632a5ef6fc24/src/compare.py#L85) (lines 85-89) and got confused. I made some changes to clarify:

```
with tf.Graph().as_default():
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=gpu_memory_fraction)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options, log_device_placement=False))
    with sess.as_default():
        pnet = align.detect_face.create_mtcnn(sess, None)

out = pnet(img_y)
```

The `create_mtcnn` is defined as:

```
def create_mtcnn(sess, model_path):
    if not model_path:
        model_path, _ = os.path.split(os.path.realpath(__file__))

    with tf.variable_scope('pnet'):
        data = tf.placeholder(tf.float32, (None, None, None, 3), 'input')
        pnet = PNet({'data': data})
        pnet.load(os.path.join(model_path, 'det1.npy'), sess)

    pnet_fun = lambda img: sess.run(('pnet/conv4-2/BiasAdd:0', 'pnet/prob1:0'), feed_dict={'pnet/input:0': img})
    return pnet_fun
```

My question is: why does `out = pnet(img_y)` not throw an error, since the graph and session are closed?
2018/03/21
395
1,402
<issue_start>username_0: When setting up a Google App Engine instance you can configure a [cron.yaml](https://cloud.google.com/appengine/docs/standard/python/config/cronref) to set up Cron jobs. There does not seem to be any documentation on how to configure jobs that run, say, every 30 seconds. I tried ``` schedule: every 30 seconds ``` and ``` schedule: 0/30 0 0 ? * * * ``` But no good. Google Cloud tells me the format is incorrect when I deploy. Can you schedule at frequencies of less than 1 minute with Google App Engine Cron jobs?<issue_comment>username_1: You cannot configure GAE cron services with resolutions below 1 minute. FWIW, you can't do that on unix/linux systems either. But it is possible to use an `every 1 minutes` cron job from which you can further trigger delayed execution of deferred/push/pull queue tasks with down to 1 second resolution, see [High frequency data refresh with Google App Engine](https://stackoverflow.com/questions/35206365/high-frequency-data-refresh-with-google-app-engine/35206885#35206885) Upvotes: 2 <issue_comment>username_2: Had the same problem; solved it with `setTimeout()`. Using setTimeout for 30 seconds inside the App Engine one-minute cron job did the trick; in this case we fetch data twice per minute, that is, every 30 seconds. `saveData(); setTimeout(function () { saveData(); }, 30000);` Tested and worked fine. Upvotes: 0
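To make username_1's suggestion concrete, here is a rough sketch of the pattern on the legacy Python 2 standard environment. The handler paths and the `do_refresh` work function are made up for illustration, and the task queue API shown is the old `google.appengine.api.taskqueue` one:

```
import webapp2
from google.appengine.api import taskqueue

def do_refresh():
    pass  # placeholder for the actual work

class CronTick(webapp2.RequestHandler):
    """Target of an `every 1 minutes` job in cron.yaml."""
    def get(self):
        taskqueue.add(url='/tasks/refresh')                # runs ~now
        taskqueue.add(url='/tasks/refresh', countdown=30)  # runs ~30s later

class RefreshTask(webapp2.RequestHandler):
    def post(self):
        do_refresh()

app = webapp2.WSGIApplication([
    ('/cron/tick', CronTick),
    ('/tasks/refresh', RefreshTask),
])
```

The cron job itself still fires only once per minute; the sub-minute resolution comes entirely from the task countdowns.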
2018/03/21
796
2,412
<issue_start>username_0: I have the following program for a map with custom keys:

```
#include <iostream>
#include <map>
using namespace std;

class MyClass
{
public:
    MyClass(int i): val(i) {}
    bool operator< (const MyClass& that) const
    {
        return val <= that.val;
    }
private:
    int val;
};

int main()
{
    MyClass c1(1);
    MyClass c2(2);
    MyClass c3(3);
    map<MyClass, int> table;
    table[c1] = 12;
    table[c2] = 22;
    table[c3] = 33;
    cout << "Mapped values are: " << table.lower_bound(c1)->second << " "
         << table[c2] << " " << table[c3] << endl;
}
```

The output comes as: ``` Mapped values are: 22 0 0 ``` But if I compare using < or > in the operator< instead of <=, then everything works fine, and the output comes as: ``` Mapped values are: 12 22 33 ``` Can someone explain why <= does not work at all, but < and even > work?<issue_comment>username_1: On [cppreference](http://en.cppreference.com/w/cpp/container/map) we find this quote. > Everywhere the standard library uses the Compare concept, uniqueness is determined by using the equivalence relation. In imprecise terms, two objects a and b are considered equivalent (not unique) if neither compares less than the other: !comp(a, b) && !comp(b, a). This means that with your current compare ``` bool operator< (const MyClass& that) const { return val <= that.val; } ``` if you have two `MyClass` with `val` 5 and 5, `5 <= 5` will return true, and they will not be considered equivalent. Upvotes: 1 <issue_comment>username_2: The comparison function used by `std::map` must implement a [strict weak ordering](https://en.wikipedia.org/wiki/Weak_ordering#Strict_weak_orderings). That means it must implement the following rules given objects `x`, `y`, and `z`: * `op(x, x)` must always be false * if `op(x, y)` is true then `op(y, x)` must be false * if `op(x, y) && op(y, z)` is true then `op(x, z)` must also be true * if `!op(x, y) && !op(y, x)` is true then `!op(x, z) && !op(z, x)` must also be true The `<=` operator does not satisfy these conditions because, given `x = y = 1`, `x <= x` is not false and both `x <= y` and `y <= x` are true. `std::map` uses these rules to implement its comparisons. For example, it could implement an equality check as `!(op(x, y) || op(y, x))`. Given `x = 4`, `y = 4`, and `op = operator<=` this becomes `!(4 <= 4 || 4 <= 4)`, so `4` does not compare equal to `4` because the first rule above was broken. Upvotes: 4 [selected_answer]
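If it helps to see why `<=` fails, the four rules above can be checked mechanically. Here is a small property checker (plain Python, since the rules themselves are language-independent):

```
import itertools

def is_strict_weak_order(op, values):
    # rule 1: irreflexivity
    if any(op(x, x) for x in values):
        return False
    # rule 2: asymmetry
    if any(op(x, y) and op(y, x) for x, y in itertools.product(values, repeat=2)):
        return False
    # rule 3: transitivity
    for x, y, z in itertools.product(values, repeat=3):
        if op(x, y) and op(y, z) and not op(x, z):
            return False
    # rule 4: transitivity of incomparability
    for x, y, z in itertools.product(values, repeat=3):
        if not op(x, y) and not op(y, x) and not op(y, z) and not op(z, y):
            if op(x, z) or op(z, x):
                return False
    return True

vals = [1, 2, 2, 3]
print(is_strict_weak_order(lambda a, b: a < b, vals))   # True
print(is_strict_weak_order(lambda a, b: a <= b, vals))  # False: 1 <= 1 breaks rule 1
```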
2018/03/21
1,662
5,441
<issue_start>username_0: I keep on getting error squiggles on std::string_view, but I am able to build just fine. Is there a way to tell IntelliSense or the C++ linter to use C++17? The specific error I get is: ``` namespace "std" has no member "string_view" ```<issue_comment>username_1: There's a posting in their GitHub issue tracker about this: [std::string_view intellisense missing (CMake, VC++ 2017)](https://github.com/Microsoft/vscode-cpptools/issues/1579). In another issue, it is said that the extension defaults to C++17, but does not yet support all C++17 features: [Setting C++ standard](https://github.com/Microsoft/vscode-cpptools/issues/1468#issuecomment-358805332). This is confirmed by the [c_cpp_properties.json Reference Guide](https://github.com/Microsoft/vscode-cpptools/blob/master/Documentation/LanguageServer/c_cpp_properties.json.md), where an option `cppStandard` is listed, which defaults to C++17. (To edit this file, press `Ctrl` + `Shift` + `P` and type in `C/CPP: Edit Configurations`). It appears, then, they just don't have full support yet. Upvotes: 4 <issue_comment>username_2: Just an update: I got this issue as well. I solved it by editing `c_cpp_properties.json`: 1. Ctrl + Shift + P, then select `C/C++: Edit Configurations (JSON)` 2. Adjust the content for `cStandard` and `cppStandard`: ``` "cStandard": "gnu17", "cppStandard": "gnu++17", ``` Upvotes: 3 <issue_comment>username_3: If you're still unable to enable it even after trying the solutions by [@username_1](https://stackoverflow.com/a/49415248/14631939) and [@username_2](https://stackoverflow.com/a/65372220/14631939), do the following: 1. Open `tasks.json` in the `.vscode` folder 2. Add `"--std","c++17"` under `"args:"` 3. Save `tasks.json` Upvotes: 2 <issue_comment>username_4: This has become much easier now. Search for `cppstandard` in your VS Code extension settings and choose the version of C++ you want the extension to use from the drop-down. [![enter image description here](https://i.stack.imgur.com/y83fx.png)](https://i.stack.imgur.com/y83fx.png) In order to make sure your debugger is using the same version, make sure you have something like this for your `tasks.json`, where the important line is the `"-std=c++17"` argument, which selects the version.

```
{
  "tasks": [
    {
      "type": "cppbuild",
      "label": "C/C++: g++ build active file",
      "command": "/usr/bin/g++",
      "args": [
        "-std=c++17",
        "-I",
        "${fileDirname}",
        "-g",
        "${fileDirname}/*.cpp",
        "-o",
        "${workspaceFolder}/out/${fileBasenameNoExtension}.o"
      ],
      "options": {
        "cwd": "${workspaceFolder}"
      },
      "problemMatcher": ["$gcc"],
      "group": {
        "kind": "build",
        "isDefault": true
      }
    }
  ],
  "version": "2.0.0"
}
```

Note that if you're copying the above `tasks.json` directly, you'll need to have a folder named `out` in your workspace root. Upvotes: 7 [selected_answer]<issue_comment>username_5: **For people trying this out on Linux with GCC 7.5.0 installed, this worked for me.** Do these two steps so that the linter acknowledges C++17 syntax and the compiler picks up C++17. 1. Open up the `C/C++: Edit Configurations (JSON)`, and change the default values for these two fields to: > "cStandard": "gnu18", "cppStandard": "gnu++17", 2. Open up the `tasks.json` file inside the `.vscode` directory and add the following statements to the `args` key: > "--std", "c++17" Upvotes: 2 <issue_comment>username_6: After trying many things I have found what is probably a solution for people using CMake who are willing to edit the `CMakeLists.txt` file.
I just put the following line at the beginning of my `CMakeLists.txt`: `set (CMAKE_CXX_STANDARD 17)` You can check your C++ version by doing `cout << __cplusplus;`, and the 3rd and 4th digits give you the version of C++ you are using. For example: `cout << __cplusplus;` printing `201703` means you are using C++17, and `cout << __cplusplus;` printing `201402` means you are using C++14. I think there must be an easier solution, but I could not find it yet. Upvotes: 1 <issue_comment>username_7: In addition to setting `cppStandard` to `gnu++17` in `c_cpp_properties.json` as mentioned in other posts, you need to change the `__cplusplus` define to the corresponding value (e.g. `201703L`), like this:

```
{
  "version": 4,
  "configurations": [
    {
      // ...
      "cStandard": "gnu17",
      "cppStandard": "gnu++17",
      "defines": [
        // ...
        "__cplusplus=201703L"
        // ...
      ]
    }
  ]
}
```

Upvotes: 1 <issue_comment>username_8: Check your version of g++ using `g++ --version` on the command line. If it is an old version, like 6 or 7, then you need to update to a newer version with MinGW. I used [msys2](https://www.msys2.org/) to do this and now I do not have the same problem. Upvotes: -1 <issue_comment>username_9: I tried editing the settings of `C_Cpp>Default: Cpp Standard` and `C Standard` to specify them as the `C++17 standard`. However, that did not work. Then I found a `.vscode` folder in my project directory with a `c_cpp_properties.json` file. In there I found settings that I had not edited, where I had specified the `C++11 standard`. When I changed it, the problem was gone. If you find that the answers above don't work, maybe we made the same mistake. Upvotes: 0
2018/03/21
1,291
3,789
<issue_start>username_0: I have a table called `votes` with 4 columns: `id`, `name`, `choice`, `date`.

| id | name | vote | date |
| --- | --- | --- | --- |
| 1 | sam | A | 01-01-17 |
| 2 | sam | B | 01-05-30 |
| 3 | jon | A | 01-01-19 |

My ultimate goal is to count up all the votes, but I only want to count 1 vote per person, and specifically each person's most recent vote. In the example above, the result should be 1 vote for A, and 1 vote for B. Here is what I currently have:

```
select name,
sum(case when uniques.choice = A then 1 else 0 end) votesA,
sum(case when uniques.choice = B then 1 else 0 end) votesB
FROM (
    SELECT id, name, choice, max(date)
    FROM votes
    GROUP BY name
) uniques;
```

However, this doesn't work because the subquery is indeed selecting the max date, but it's not including the correct `choice` that is associated with that max date.<issue_comment>username_1: Don't think "group by" to get the most recent vote. Think of a join or some other option. Here is one way:

```
SELECT v.name,
       SUM(v.choice = 'A') as votesA,
       SUM(v.choice = 'B') as votesB
FROM votes v
WHERE v.date = (SELECT MAX(v2.date) FROM votes v2 WHERE v2.name = v.name)
GROUP BY v.name;
```

[Here](http://www.sqlfiddle.com/#!9/155814/1) is a SQL Fiddle. Upvotes: 3 [selected_answer]<issue_comment>username_2: Try this

```
SELECT choice, COUNT(1)
FROM votes v
INNER JOIN (
    SELECT id, max(date)
    FROM votes
    GROUP BY name
) tmp ON v.id = tmp.id
GROUP BY choice;
```

Upvotes: 0 <issue_comment>username_3: Your attempt is close, but you need to `JOIN` back to the table itself: a subquery gets the `Max` date per name, and is then `JOIN`ed to the original table.

```
select sum(case when T.vote = 'A' then 1 else 0 end) votesA,
       sum(case when T.vote = 'B' then 1 else 0 end) votesB
FROM (
    SELECT name, Max(date) as date
    FROM T
    GROUP BY name
) AS T1
INNER JOIN T ON T1.name = T.name AND T1.date = T.date -- match on name as well as date
```

[SQLFiddle](http://sqlfiddle.com/#!9/d2504a/30) Upvotes: 1 <issue_comment>username_4:

```
select name,
    sum( case when choice = 'A' then 1 else 0 end) voteA,
    sum( case when choice = 'B' then 1 else 0 end) voteB
from (
    select id, name, choice
    from votes
    where date = (select max(date) from votes t2 where t2.name = votes.name )
) t
group by name
```

Or output just one row for the total counts of VoteA and VoteB:

```
select
    sum( case when choice = 'A' then 1 else 0 end) voteA,
    sum( case when choice = 'B' then 1 else 0 end) voteB
from (
    select id, name, choice
    from votes
    where date = (select max(date) from votes t2 where t2.name = votes.name )
) t
```

Upvotes: 0 <issue_comment>username_5: Something like this (if you really need to count only each person's last vote)

```
SELECT sum(case when vote='A' then cnt else 0 end) voteA,
       sum(case when vote='B' then cnt else 0 end) voteB
FROM (SELECT vote, count(distinct name) cnt
      FROM (
          SELECT name, vote, date, max(date) over (partition by name) maxd
          FROM votes
      )
      WHERE date=maxd
      GROUP BY vote
)
```

PS. MySQL v 8 Upvotes: 0 <issue_comment>username_6: Based on @d-shish's solution, and since the introduction (in MySQL 5.7) of `ONLY_FULL_GROUP_BY`, the `GROUP BY` statement must be placed in the subquery like this:

```
SELECT v.`name`,
       SUM(v.`choice` = 'A') as `votesA`,
       SUM(v.`choice` = 'B') as `votesB`
FROM `votes` v
WHERE (
    SELECT MAX(v2.`date`)
    FROM `votes` v2
    WHERE v2.`name` = v.`name`
    GROUP BY v.`name` # << after
) = v.`date`
# GROUP BY v.`name` << before
```

Otherwise, it won't work anymore! Upvotes: 0
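For anyone who wants to verify the accepted query quickly, here is a small harness using Python's built-in SQLite. The dates are rewritten as ISO `YYYY-MM-DD` strings so that `MAX()` compares them correctly; in MySQL you would use a real DATE column:

```
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE votes (id INTEGER, name TEXT, choice TEXT, date TEXT);
    INSERT INTO votes VALUES
        (1, 'sam', 'A', '2001-01-17'),
        (2, 'sam', 'B', '2001-05-30'),
        (3, 'jon', 'A', '2001-01-19');
""")

# SQLite, like MySQL, evaluates choice = 'A' as 0/1, so SUM() counts matches
rows = con.execute("""
    SELECT v.name,
           SUM(v.choice = 'A') AS votesA,
           SUM(v.choice = 'B') AS votesB
    FROM votes v
    WHERE v.date = (SELECT MAX(v2.date) FROM votes v2 WHERE v2.name = v.name)
    GROUP BY v.name
""").fetchall()
print(rows)  # e.g. [('jon', 1, 0), ('sam', 0, 1)] -- one counted vote per person
```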
2018/03/21
1,290
4,058
<issue_start>username_0: In my project, I have a table and I have fetched data into that table using a while loop. Here's the code for that:

```
### Return Document
| No | Document ID | Put the tick for returns |
| --- | --- | --- |
<?php while ($_r = mysqli_fetch_assoc($q_set)) { ?>
| <?php echo $_r['number']; ?> | <?php echo $_r['doc_id']; ?> | |
<?php } ?>
```

As you can see, the table is inside a form and there's a checkbox for each row in the table. [![enter image description here](https://i.stack.imgur.com/RMCQb.png)](https://i.stack.imgur.com/RMCQb.png) This is a screenshot of my database table. As you can see, there's a field called availability and its values are 'locked'. When I click submit, I want to update these locked fields to 'returned' where the checkbox is ticked. Rows which are not ticked should not be updated. So, here's what I did:

```
if (isset($_POST['submit'])) {
    $update_query = "UPDATE req SET availability = 'returned' WHERE doc_id='$_POST[return]'";
    mysqli_query($conn, $update_query);
}
```

My problem is that this query updates only the last row I ticked, because it gets the value after the loop is finished. It doesn't update all the rows I select. So if you have any idea how to achieve this, please help me.
2018/03/21
892
2,924
<issue_start>username_0: For the record, I've read the following threads but none of them seems to fulfill my need: * [Python pandas - filter rows after groupby](https://stackoverflow.com/questions/27488080/python-pandas-filter-rows-after-groupby) * [Pandas get rows after groupby](https://stackoverflow.com/questions/43448895/pandas-get-rows-after-groupby) * [filter rows after groupby pandas](https://stackoverflow.com/questions/41821430/filter-rows-after-groupby-pandas) Say I have the following table `df`:

```
user_id   is_manually   created_per_week
----------------------------------------
10        True          59
10        False         90
33        True          0
33        False         64
50        True          0
50        False         0
```

I want to exclude the users who have created nothing, i.e. created_per_week is 0 in both rows (is_manually True and False), which is user 50 in this case.

```
user_id   is_manually   created_per_week
----------------------------------------
10        True          59
10        False         90
33        True          0
33        False         64
```

I learned that `df.groupby` doesn't have a `query` method and that I should use `apply` instead. The closest answer I've got is `df.groupby("user_id").apply(lambda x: x[x["created_per_week"] > 0])`, but it also excludes the row of user 33 where is_manually is True, which is undesirable. I've also tried `df.groupby("user_id").apply(lambda x: x[any(x["created_per_week"] > 0)])` but it returns a KeyError. In other words, I am searching for the equivalent of `df %>% group_by(user_id) %>% filter(any(created_per_week > 0))` in R. Thanks.<issue_comment>username_1: You can apply `groupby` and then the `filter` command to get the output.

```
df.groupby('user_id').filter(lambda x: (x['created_per_week'] != 0).any())

   user_id  is_manually  created_per_week
0       10         True                59
1       10        False                90
2       33         True                 0
3       33        False                64
```

Upvotes: 1 [selected_answer]<issue_comment>username_2: `transform` + `any`

```
df[df.assign(New=df.created_per_week==0).groupby('user_id').created_per_week.transform('any')]
Out[425]:
   user_id  is_manually  created_per_week
0       10         True                59
1       10        False                90
2       33         True                 0
3       33        False                64
```

Or simply by using `loc` + `isin`

```
df.loc[df.user_id.isin(df[df.created_per_week!=0].user_id)]
Out[426]:
   user_id  is_manually  created_per_week
0       10         True                59
1       10        False                90
2       33         True                 0
3       33        False                64
```

From PiR

```
f, u = pd.factorize(df.user_id); df[np.bincount(f, df.created_per_week)[f] > 0]
```

Upvotes: 1
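For completeness, here is the accepted `filter` approach as a runnable snippet, reconstructing the example frame from the question:

```
import pandas as pd

df = pd.DataFrame({
    "user_id":          [10, 10, 33, 33, 50, 50],
    "is_manually":      [True, False, True, False, True, False],
    "created_per_week": [59, 90, 0, 64, 0, 0],
})

# keep every row of a user if ANY of that user's rows has created_per_week > 0
out = df.groupby("user_id").filter(lambda g: (g["created_per_week"] > 0).any())
print(out)  # users 10 and 33 survive; user 50 is dropped entirely
```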
2018/03/21
436
1,313
<issue_start>username_0: I need to find the parameters in a method which is a `string` ``` "MyMethod1(int a ,int b){Console.WriteLine("the sum is"); return(a+b);}" ``` **Required output** ``` "int a ,int b" ```<issue_comment>username_1: You can use this ``` (?<=\().*?(?=\)) ``` **Explanation** * Positive Lookbehind `(?<=\()` + Assert that the Regex below matches + `\(` matches the character `(` literally (case sensitive) * `.*?` matches any character (except for line terminators) + `*?` Quantifier — Matches between zero and unlimited times, as few times as possible, expanding as needed (lazy) * Positive Lookahead `(?=\))` + Assert that the Regex below matches + `\)` matches the character `)` literally (case sensitive) **Example** ``` string str1 = @"MyMethod1(int a ,int b) blah blah (asd)"; Regex regex = new Regex(@"(?<=\().*?(?=\))"); Match match = regex.Match(str1); if (match.Success) { Console.WriteLine(match.Value); } ``` [**Demo here**](https://dotnetfiddle.net/ITaqAU) Upvotes: 2 <issue_comment>username_2: **Without Regex** ``` string str1 = @"MyMethod1(int a ,int b) blah blah (asd)"; int start = str1.IndexOf("(") + 1; int length = str1.IndexOf(")") - start; Console.WriteLine(str1.Substring(start, length)); ``` [FIDDLE](http://rextester.com/XUQ77375) Upvotes: 0
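The lookaround pattern above is not .NET-specific; the same idea ports to other regex engines. For example, an equivalent check in Python (same pattern, same first-match behaviour):

```
import re

s = 'MyMethod1(int a ,int b){Console.WriteLine("the sum is"); return(a+b);}'
m = re.search(r"(?<=\().*?(?=\))", s)
if m:
    print(m.group(0))  # int a ,int b
```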
2018/03/21
1,221
4,576
<issue_start>username_0: How to determine the exact error message instead of `Illegal argument Exception`? I am using a visual studio and avd manager to launch android emulator. The following is the code but when I click the `click` button, the error message is just `ILLEGAL ARGUMENT EXCEPTION`. **index.html** ``` Hello World TESTING Click ![]() ``` **index.js** ``` /* * Licensed to the Apache Software Foundation (ASF) under one * or more contributor license agreements. See the NOTICE file * distributed with this work for additional information * regarding copyright ownership. The ASF licenses this file * to you under the Apache License, Version 2.0 (the * "License"); you may not use this file except in compliance * with the License. You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, * software distributed under the License is distributed on an * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY * KIND, either express or implied. See the License for the * specific language governing permissions and limitations * under the License. */ var app = { // Application Constructor initialize: function() { document.addEventListener('deviceready', this.onDeviceReady.bind(this), false); }, // deviceready Event Handler // // Bind any cordova events here. Common events are: // 'pause', 'resume', etc. onDeviceReady: function() { document.getElementById('btn1').addEventListener('click', app.takephoto); }, // Update DOM on a Received Event takephoto: function () { var ops = { quality: 50, destinationType: Camera.DestinationType.FILE_URL, //sourceType: Camera.PictureSourceType.CAMERA, //mediaType: Camera.MediaType.PICTURE, //encodingType: Camera.EncodingType.JPEG, //targetWidth: 300, //targetHeight: 400 }; navigator.camera.getPicture(app.ftw, app.wtf, ops); }, ftw: function (imgURI) { document.getElementById('msg').textContent = imgURI; document.getElementById('img1').src = imgURI; }, wtf: function (msg) { document.getElementById('msg').textContent = msg; } }; app.initialize(); ``` **index.js updated still the same error** ``` var app = { // Application Constructor initialize: function() { document.addEventListener('deviceready', this.onDeviceReady.bind(this), false); }, // deviceready Event Handler // // Bind any cordova events here. Common events are: // 'pause', 'resume', etc. 
onDeviceReady: function () {
        document.getElementById('msg').textContent = navigator.camera;
        document.getElementById('btn1').addEventListener('click', app.takephoto);
    },
    // Update DOM on a Received Event
    takephoto: function () {
        navigator.camera.getPicture(app.onSuccess, app.onFail, {
            quality: 50,
            destinationType: Camera.DestinationType.FILE_URI
        });
    },
    onSuccess: function (imageURI) {
        var image = document.getElementById('myImage');
        image.src = imageURI;
    },
    onFail: function (message) {
        alert('Failed because: ' + message);
    }
};

app.initialize();
```

[![enter image description here](https://i.stack.imgur.com/HLn0z.jpg)](https://i.stack.imgur.com/HLn0z.jpg)
2018/03/21
951
3,629
<issue_start>username_0: I want to detect a credit-card-sized card in an image. The card can be any card, e.g. an identity card or a member card. Currently, I am thinking of using Canny Edge, Hough Line and Hough Circle to detect the card. But the process will be tedious when I want to combine all the information from Hough Line and Hough Circle to locate the card. Some people suggest threshold and findContour, but the color of the card can be similar to the background, which makes it difficult for this method to achieve the desired result. Is there any kernel or method which can help me to detect the card? [![enter image description here](https://i.stack.imgur.com/bHszC.jpg)](https://i.stack.imgur.com/bHszC.jpg) [![enter image description here](https://i.stack.imgur.com/aDlJK.png)](https://i.stack.imgur.com/aDlJK.png)<issue_comment>username_1: There are two sub-problems here: 1. Detect a rectangular object in the image. 2. Check that the rectangular object's actual size is similar to a credit card. For the first part, you can try out several methods to extract a rectangular region in the image and see which suits your need. * This [post](https://stackoverflow.com/questions/6555629/algorithm-to-detect-corners-of-paper-sheet-in-photo?rq=1) shows a lot of methods which you can try out. * In my experience edge detection works best in most cases. Try Canny > Contours with Approximations > Filter out irrelevant contours > Search for rectangles using some shape detection, template matching or any other method. Even [this post](https://www.pyimagesearch.com/2014/04/21/building-pokedex-python-finding-game-boy-screen-step-4-6/) does a similar thing to achieve its task. Coming to the second point, you cannot find out the size of an object in an image unless you have a reference (known-size) object in the image. If the image was captured from a closer distance, the card will seem larger, and if taken from far away, the card will seem smaller. So while capturing, you will have to enforce some restrictions: for example, you can ask the user to capture the image along with some standard ruler. You can also ask the user to capture the image on an A4 sheet with all the sheet edges visible. Since you know the size of the A4 sheet, you'll be able to estimate the size of the card. Apart from the above methods, if you have a big enough data set of such images, you can use Haar Classifiers or Neural Network/Deep Learning based methods to do this with much better accuracy. Upvotes: 2 <issue_comment>username_2: I think your problem is similar to a document scanner. You can refer to [this link](https://www.pyimagesearch.com/2014/09/01/build-kick-ass-mobile-document-scanner-just-5-minutes/) 1. Find edges in the image using the Canny edge detector (the lower and higher thresholds can be set as 0.66\*meanIntensity and 1.33\*meanIntensity) and do a morphological close operation. [Edge image after performing close](https://i.stack.imgur.com/OgWzS.jpg) 2. Find the contours in the image using findContours 3. Filter out unwanted contours (I used contourArea to filter contours) 4. Using approxPolyDP, approximate the contours to 7 or more points. (I used 0.005 \* perimeter as the parameter here) 5. If you want to find accurate edges, fit lines between the points and get the 4 biggest lines. Find their intersections (since the card may or may not contain curved edges) 6. You'll end up with the card endpoints, which can be used further for homography or to determine the region.
[vertices of the card](https://i.stack.imgur.com/Y0JjZ.jpg) **Edit** Edited the answer to include the steps to obtain the vertices of the card and results are updated. Upvotes: 4 [selected_answer]
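A rough OpenCV sketch of steps 1-4 described above (Python; the file names are placeholders, the thresholds follow the 0.66/1.33 mean-intensity rule, and the line-fitting refinement of step 5 is left out):

```
import cv2
import numpy as np

img = cv2.imread("card.jpg")                  # placeholder input path
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (5, 5), 0)

# Canny thresholds derived from the mean intensity, then a morphological close
mean = gray.mean()
edges = cv2.Canny(gray, 0.66 * mean, 1.33 * mean)
edges = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

# findContours returns 2 or 3 values depending on the OpenCV version;
# [-2] picks the contour list in either case
cnts = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)[:5]  # keep the biggest few

for c in cnts:
    approx = cv2.approxPolyDP(c, 0.005 * cv2.arcLength(c, True), True)
    if len(approx) >= 4:                      # candidate card outline
        cv2.drawContours(img, [approx], -1, (0, 255, 0), 2)
        break

cv2.imwrite("card_detected.jpg", img)
```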
2018/03/21
843
2,921
<issue_start>username_0: If bar calls bar(i/2) or bar(3\*i + 1), depending on the parity of i, the recursive function bar would be tail-recursive.

```
const int bar(const int i)
{
    if (i < 2) return i;
    return i % 2 ? bar(i/2) : bar(3*i + 1);
}
```

However, what if bar calls either bar or foo, which has a completely different set of local variables from bar?

```
const int bar(const int i)
{
    if (i < 2) return i;
    return i % 2 ? bar(i/2) : foo(3*i + 1);
    // where foo is a very complicated recursive call that has
    // 11 different user-defined/primitive-type
    // local variables that can't be optimized out
}
```

My understanding is that tail recursion optimization will reuse the caller's stack. The caller is already done with its local variables before calling the callee, so the callee can reuse them. My understanding sounds fine when the caller and callee are the same function (e.g. foo calls foo, bar calls bar). However, if the stack size and layout are totally different, and the callee might be one of multiple different functions with different stack layouts, what is going to happen? Firstly, would it be tail recursion? (Now I understand that tail "call" optimization might be applied, but this is not tail "recursion".) Secondly, would major compilers such as gcc, clang, etc., optimize this kind of tail call? (The answer seems to be yes.) What if the callee (foo in the second code example) is much more complicated? If the callee and caller are calling each other (mutual recursion), would the call in my second code example be tail-call-optimized? If the callee (e.g. foo) is a complicated recursive call that is definitely not tail recursion and very hard for a compiler to reduce to a loop, would it still be tail-call-optimized?<issue_comment>username_1: The term "tail-recursive" is a local property of a call site. It's not affected at all by other calls in the same method. Roughly speaking, a call is tail-recursive if no executable code needs to run between its return and the return of the enclosing method. Consequently all the calls to `bar()` in your example are tail-recursive. But note that if you said

```
return i % 2 ? bar(i/2) : 1 + bar(3*i + 1);
```

then the first call is tail-recursive, but the second isn't, because the addition `1 +` must execute after it returns. Upvotes: 3 <issue_comment>username_2: > Is tail recursion optimization applicable to this function? Yes, tail recursion optimization is applicable in your examples. Please look at the assembler <https://godbolt.org/g/cSpUZw> for the second sample. An even more aggressive optimization is applied: the recursion is replaced with a loop.

```
bar(int):
    cmp edi, 1
    jg .L12
    jmp .L6
.L15:
    sar edi
    cmp edi, 1
    je .L14
.L12:
    test dil, 1
    jne .L15
    lea edi, [rdi+1+rdi*2]
    jmp foo(int)
.L14:
    mov eax, 1
    ret
.L6:
    mov eax, edi
    ret
```

Upvotes: 3 [selected_answer]
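For intuition, the "recursion is replaced with a loop" transformation in the accepted answer can be written out by hand. For the first (purely self-recursive) version of `bar`, the compiler effectively produces the equivalent of the following (sketched in Python for readability; `i / 2` is integer division in the C++ original):

```
def bar(i):
    # each tail call becomes "update the argument and jump back to the top"
    while i >= 2:
        i = i // 2 if i % 2 else 3 * i + 1
    return i
```

The mutually recursive version cannot be reduced to a single self-contained loop this way, which is why the compiler instead emits a tail jump (`jmp foo(int)`) in the assembly above.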
2018/03/21
523
1,660
<issue_start>username_0: I have a table with a bunch of columns, but we only need to look at two of them. I'm trying to join another table on this table, but all we know about these two columns is that one will be null and the other won't: ``` client_id | note_id ``` The main table wants to join `client_id` (if not null) on `clients.id`, OR `note_id` on `notes.id` if `client_id` is null.<issue_comment>username_1: You can use coalesce in the join's on clause. See the demo here: <http://sqlfiddle.com/#!9/99911/2>. If the client id is null, then the note id is used to join table1 and table2.

```
Select t1.client_id, t1.note_id, t2.client_id, t2.note_id
From table1 t1
Join table2 t2
on coalesce(t1.client_id, t1.note_id) = coalesce(t2.client_id, t2.note_id)
```

Upvotes: 1 <issue_comment>username_2: Assuming there are 3 tables involved (the main table that contains the `client_id` and `note_id` columns, the `clients` table, and the `notes` table), you can use a query such as this:

```
(select * from mainTable inner join clients on mainTable.client_id = clients.id)
union
(select * from mainTable inner join notes on mainTable.note_id = notes.id
 where mainTable.client_id is NULL);
```

The above query contains 2 queries, where each query will output rows where the joining column is not null. The results are then combined using `union`. Upvotes: 2 <issue_comment>username_3: This will work for you. This is a very basic query I wrote. Make changes if required.

```
SELECT *
FROM YOUR_TABLE t
LEFT OUTER JOIN clients c ON t.client_id = c.id
LEFT OUTER JOIN notes n ON t.note_id = n.id
WHERE c.id IS NOT NULL
OR n.id IS NOT NULL
```

Upvotes: 3 [selected_answer]
2018/03/21
1,475
3,838
<issue_start>username_0: Background -> I'm trying to remove all old forest out of certain forest types by using the overlay function. If the forest is type 1 and the age is > 250, then I want that cell to be 0, if it's type 2 and > 200 I want it to be 0, otherwise it can be what it is. Problem -> I get two different error messages, with no consistency on which appears, and the code has produced correct results (once). The errors are: ``` #First error message Error in setValues(x, value) : values must be a vector Error in .local(x, i, j, ..., value) : cannot replace values on this raster (it is too large #Second error message Error in (function (x, fun, filename = "", recycle = TRUE, forcefun = FALSE, : cannot use this formula, probably because it is not vectorized ``` I would like to know why it changes error messages (why it's inconsistent), and of course, what can I do to fix my code so that it works? The code works when applied without a for loop, and has with it as well, but I can't reproduce the working result again. My sample data and code: ``` library(raster) library(dplyr) set.seed(123) r1 <- raster(ncol=100,nrow=100) r2 <- raster(ncol=100,nrow=100) r3 <- raster(ncol=100,nrow=100) r4 <- raster(ncol=100,nrow=100) r5 <- raster(ncol=100,nrow=100) r1[] <- round(runif(n=ncell(r1), min=0, max = 1000)) r2[] <- round(runif(n=ncell(r1), min=0, max = 1000)) r3[] <- round(runif(n=ncell(r1), min=0, max = 1000)) r4[] <- round(runif(n=ncell(r1), min=0, max = 1000)) r5[] <- round(runif(n=ncell(r1), min=0, max = 1000)) testStack <- stack(x=c(r1,r2,r3,r4,r5)) testAge <- raster(ncol=100,nrow=100) testAge[] <- round(runif(n=ncell(testAge), min=0,max=400)) testBec <- raster(ncol=100,nrow=100) testBec[] <- round(runif(n=ncell(testBec), min = 1, max = 5)) testList <- list() # create a list to hold the files that will go into the raster stack # create a for loop to run through raster stack and fill a list with the output rasters for(i in 1:(nlayers(testStack))){ testList[[i]] <- overlay(x=testStack, y=testBec, z=testAge, fun=function(x,y,z){ ifelse(y == 1 & z > 249, # if x is type 1 and z(age from inventory) 0, # is >= 250, then y = 0, else ifelse(y == 2 & z > 200, # if x is type 2 and z is >200 0, x)) # y = 0, else keep x return(testList[[i]]) # return a list with rasters }) testList[[i]] <- stack(testList[[i]]) } ``` I'm running on a mac OS and RStudio Version 1.1.383, R 3.4.2 (2017-09-28)<issue_comment>username_1: Example data ``` library(raster) set.seed(123) b <- brick(ncol=10,nrow=10, nl=5) b <- setValues(b, round(runif(n=ncell(b)*nlayers(b), min=0, max = 1000))) testAge <- setValues(raster(b), round(runif(n=ncell(b), min=0,max=400))) testBec <- setValues(raster(b), round(runif(n=ncell(b), min = 1, max = 5))) ``` A "raster algebra" approach ``` test <- (testBec == 1 & testAge > 249) | (testBec == 2 & testAge > 200) # values that test TRUE become 0, the others stay the same r <- b * !test ``` Or with `overlay` for the same result ``` f <- function(x, y, z) { i <- (y == 1 & z > 249) | (y == 2 & z > 200) x[i] <- 0 x } rr <- overlay(b, testBec, testAge, fun=f) ``` Upvotes: 1 <issue_comment>username_2: To do this with the for loop: ``` testList <- list() # create a list to hold the files that will go into the raster stack # create a for loop to run through raster stack and fill a list with the output rasters for(i in 1:(nlayers(testStack))){ testList <- overlay(x=testStack[[i]], y=testBec, z=testAge, fun=function(x,y,z){ temp <- ifelse(y == 1 & z > 249,0, ifelse(y == 2 & z > 200,0, x)) return(temp) }) 
tList[[i]] <- stack(testList) } second_stack <- stack(tList) ``` Upvotes: 1 [selected_answer]
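Incidentally, username_1's raster-algebra version is plain boolean masking, so the same idea carries over to any array library. A NumPy rendering of the logic, for illustration only (the R code above is what actually answers the question):

```
import numpy as np

rng = np.random.default_rng(123)
stack = rng.integers(0, 1001, size=(5, 100, 100))  # 5 layers of "created" values
age   = rng.integers(0, 401, size=(100, 100))
bec   = rng.integers(1, 6, size=(100, 100))

# old forest: type 1 older than 249, or type 2 older than 200
old = ((bec == 1) & (age > 249)) | ((bec == 2) & (age > 200))

# zero out old forest in every layer; ~old broadcasts across the layer axis
result = stack * ~old
```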
2018/03/21
1,955
7,438
<issue_start>username_0: I am using a MySQL server via Amazon's cloud service, with default settings. The table involved, `mytable`, is of `InnoDB` type and has about 1 billion rows. The query is:

```
select count(*), avg(`01`) from mytable where `date` = "2017-11-01";
```

which takes almost 10 min to execute. I have an index on `date`. The `EXPLAIN` of this query is:

```
+----+-------------+---------+------+---------------+------+---------+-------+---------+-------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref   | rows    | Extra |
+----+-------------+---------+------+---------------+------+---------+-------+---------+-------+
|  1 | SIMPLE      | mytable | ref  | date          | date | 3       | const | 1411576 | NULL  |
+----+-------------+---------+------+---------------+------+---------+-------+---------+-------+
```

The indexes from this table are:

```
+---------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| Table   | Non_unique | Key_name  | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+---------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
| mytable |          0 | PRIMARY   |            1 | ESI         | A         |    60398679 | NULL     | NULL   |      | BTREE      |         |               |
| mytable |          0 | PRIMARY   |            2 | date        | A         |  1026777555 | NULL     | NULL   |      | BTREE      |         |               |
| mytable |          1 | lse_cd    |            1 | lse_cd      | A         |     1919210 | NULL     | NULL   | YES  | BTREE      |         |               |
| mytable |          1 | zone      |            1 | zone        | A         |      732366 | NULL     | NULL   | YES  | BTREE      |         |               |
| mytable |          1 | date      |            1 | date        | A         |    85564796 | NULL     | NULL   |      | BTREE      |         |               |
| mytable |          1 | ESI_index |            1 | ESI         | A         |     6937686 | NULL     | NULL   |      | BTREE      |         |               |
+---------+------------+-----------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+---------------+
```

If I remove `AVG()`:

```
select count(*) from mytable where `date` = "2017-11-01";
```

it only takes 0.15 sec to return the count. The count of this specific query is 692792; the counts are similar for other `date`s. I don't have an index over `01`. Is that an issue? Why does `AVG()` take so long to compute? There must be something I didn't do properly. Any suggestion is appreciated!<issue_comment>username_1: For MyISAM tables, COUNT(\*) is optimized to return very quickly if the SELECT retrieves from one table, no other columns are retrieved, and there is no WHERE clause. For example: > SELECT COUNT(\*) FROM student; <https://dev.mysql.com/doc/refman/5.6/en/group-by-functions.html#function_count> If you add AVG() or something else, you lose this optimization. Upvotes: 0 <issue_comment>username_2: To count the number of rows with a specific date, MySQL has to locate that value in the index (which is pretty fast, after all that is what indexes are made for) and then read the subsequent entries *of the index* until it finds the next date. Depending on the datatype of `esi`, this will sum up to reading some MB of data to count your 700k rows. Reading some MB does not take much time (and that data might even already be cached in the buffer pool, depending on how often you use the index). To calculate the average for a column that is not included in the index, MySQL will, again, use the index to find all rows for that date (the same as before).
But additionally, for every row it finds, it has to read the actual table data for that row, which means using the primary key to locate the row, reading some bytes, and repeating this 700k times. This ["random access"](https://www.percona.com/blog/2008/04/28/the-mysql-optimizer-the-os-cache-and-sequential-versus-random-io/) is *a lot* slower than the sequential read in the first case. (This is made worse by the fact that "some bytes" is the [`innodb_page_size`](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_page_size) (16KB by default), so you may have to read up to 700k \* 16KB = 11GB, compared to "some MB" for `count(*)`; and depending on your memory configuration, some of this data might not be cached and has to be read from disk.) A solution to this is to include all used columns in the index (a "covering index"), e.g. create an index on `date, 01`. Then MySQL does not need to access the table itself, and can proceed, similar to the first method, by just reading the index. The size of the index will increase a bit, so MySQL will need to read "some more MB" (and perform the `avg`-operation), but it should still be a matter of seconds. In the comments, you mentioned that you need to calculate the average over 24 columns. If you want to calculate the `avg` for several columns at the same time, you would need a covering index on all of them, e.g. `date, 01, 02, ..., 24` to prevent table access. Be aware that an index that contains all columns requires as much storage space as the table itself (and it will take a long time to create such an index), so whether it is worth those resources depends on how important this query is. To avoid the [MySQL limit of 16 columns per index](https://dev.mysql.com/doc/refman/5.7/en/multiple-column-indexes.html), you could split it into two indexes (and two queries). Create e.g. the indexes `date, 01, .., 12` and `date, 13, .., 24`, then use

```
select * from
(select `date`, avg(`01`), ..., avg(`12`)
 from mytable where `date` = ...) as part1
cross join
(select avg(`13`), ..., avg(`24`)
 from mytable where `date` = ...) as part2;
```

Make sure to document this well, as there is no obvious reason to write the query this way, but it might be worth it. If you only ever average over a single column, you could add 24 separate indexes (on `date, 01`, `date, 02`, ...), although in total, they will require even more space, but might be a little bit faster (as they are smaller individually). But the buffer pool might still favour the full index, depending on factors like usage patterns and memory configuration, so you may have to test it. Since `date` is part of your primary key, you could also consider changing the primary key to `date, esi`. If you find the dates by the primary key, you would not need an additional step to access the table data (as you already access the table), so the behaviour would be similar to the covering index. But this is a significant change to your table and can affect all other queries (that e.g. use `esi` to locate rows), so it has to be considered carefully. As you mentioned, another option would be to build a summary table where you store precalculated values, especially if you do not add or modify rows for past dates (or can keep them up-to-date with a trigger). Upvotes: 4 [selected_answer]
2018/03/21
555
1,790
<issue_start>username_0: Using <https://developer.microsoft.com/en-us/graph/docs/api-reference/v1.0/api/group_post_members> to add a member to a group

```
POST /groups/{id}/members/$ref
{
  "@odata.id": "https://graph.microsoft.com/v1.0/users/[email protected]"
}
```

returns the error

```
{
  "error": {
    "code": "Request_BadRequest",
    "message": "Unable to update the specified properties for objects that have originated within an external service."
  }
}
```

The group that I'm trying to add to has the following properties:

```
{
  "@odata.context": "https://graph.microsoft.com/v1.0/$metadata#groups/$entity",
  "id": "xxx",
  "deletedDateTime": null,
  "classification": null,
  "createdDateTime": "2017-09-11T21:04:49Z",
  "description": null,
  "displayName": "xxx",
  "groupTypes": [],
  "mail": "<EMAIL>",
  "mailEnabled": true,
  "mailNickname": "xxx",
  "onPremisesLastSyncDateTime": null,
  "onPremisesProvisioningErrors": [],
  "onPremisesSecurityIdentifier": null,
  "onPremisesSyncEnabled": null,
  "preferredDataLocation": null,
  "proxyAddresses": [
    "smtp:<EMAIL>",
    "SMTP:<EMAIL>"
  ],
  "renewedDateTime": "2017-09-11T21:04:49Z",
  "securityEnabled": false,
  "visibility": null
}
```

<issue_comment>username_1: My test result, for your reference. [![enter image description here](https://i.stack.imgur.com/Aaht3.png)](https://i.stack.imgur.com/Aaht3.png) List the users to get the user ID: ``` https://graph.microsoft.com/v1.0/users ``` Upvotes: 0 <issue_comment>username_2: Found out that the reason for the rejection was that Microsoft only allows adding members to Office 365 Groups (groupTypes = ["Unified"]). In my case, the group properties above show (groupTypes = []). Upvotes: 1
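A hedged sketch of how one might guard against this from code, using Python's `requests`. Token acquisition is omitted and the IDs are placeholders; the `groupTypes` check mirrors username_2's finding that this endpoint only worked for groups containing "Unified":

```
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
access_token = "..."                  # replace with a real access token
headers = {"Authorization": "Bearer " + access_token}

group_id = "xxx"                      # placeholder, as in the question
user_id = "user-object-id-goes-here"  # placeholder

# check groupTypes first: member adds here are expected to work for "Unified" groups
group = requests.get(GRAPH + "/groups/" + group_id + "?$select=groupTypes",
                     headers=headers).json()
if "Unified" in group.get("groupTypes", []):
    body = {"@odata.id": GRAPH + "/directoryObjects/" + user_id}
    resp = requests.post(GRAPH + "/groups/" + group_id + "/members/$ref",
                         headers=headers, json=body)
    resp.raise_for_status()
else:
    print("Not an Office 365 (Unified) group; adding members here will fail.")
```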
2018/03/21
762
3,035
<issue_start>username_0: I'm trying to follow a course on creating apps using C#; unfortunately I'm using a Mac, so I can't follow all the steps as they do them in the videos. For example, the method to create an empty class is different, and I had to manually include the Entity Framework dependency and create the Models folder... but now I'm trying to create a Controller by scaffolding it, and I can't see how to do it. I don't know if there's a better option than using VS, or if I need to install some basic plugins or packages to make it a 'windows-esque' experience. P.S.: I'm a total noob regarding C#; this is the first time I've worked with it, so it's not as easy as saying "well, you can manually create your controller and the datacontext file", 'cause I'm not really familiar with which files I need to make the project run. That's the reason to stick with VS, since it seems to help you during the whole development process [or at least on Windows].<issue_comment>username_1: File -> New File. Highlight `Asp.NET Core` on the left, then select `MVC Controller Class`. [![enter image description here](https://i.stack.imgur.com/EopDT.png)](https://i.stack.imgur.com/EopDT.png) Upvotes: 1 <issue_comment>username_2: I ran into the same issue going through an older Microsoft tutorial (<https://learn.microsoft.com/en-us/aspnet/core/data/ef-mvc/intro?view=aspnetcore-2.2>). Answer: the scaffolding needs to be done from the command line using dotnet aspnet-codegenerator. The documentation (including the options for scaffolding controllers) can be found here: <https://learn.microsoft.com/en-us/aspnet/core/fundamentals/tools/dotnet-aspnet-codegenerator?view=aspnetcore-2.2> For the tutorial I linked above, the controller scaffolding was done with the following command:

```
dotnet aspnet-codegenerator controller -name StudentsController -m Student -dc SchoolContext --relativeFolderPath Controllers --useDefaultLayout --referenceScriptLibraries --useAsyncActions
```

-m is the name of the model. -dc is the database context. The aspnet-codegenerator tool has to be installed using the following command:

```
dotnet tool install -g dotnet-aspnet-codegenerator
```

I also had to add the Microsoft.VisualStudio.Web.CodeGeneration.Design NuGet package as a project dependency. Upvotes: 2 <issue_comment>username_3: Visual Studio 2019 for Mac introduces support for scaffolding pages and controllers in a very similar way to what Visual Studio on Windows does. This tooling simplifies the generation of these kinds of items by creating the appropriate code files with basic user interfaces, for pages, and basic methods. You use the new scaffolding feature by right-clicking the project name in the Solution pad and then selecting **Add** > **New Scaffolding**. The Add New Scaffolding dialog appears as below. [![enter image description here](https://i.stack.imgur.com/EEbdw.jpg)](https://i.stack.imgur.com/EEbdw.jpg) For more information: <https://www.syncfusion.com/ebooks/visual-studio-for-mac-succinctly/creating-net-core-apps> Upvotes: 2
2018/03/21
844
3,041
<issue_start>username_0: I have KIE Workbench deployed on Minishift instance. I want to clone the repository. First of all I know, I should make ssh port reachable. To do this I use the following command:

```
$ oc port-forward workbench-1-vfd6k 8001:8001
```

To check the port exposed as expected, I use 'telnet' and able to observe sshd working:

```
$ telnet localhost 8001
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
SSH-2.0-SSHD-CORE-1.6.0
```

So far, so good. However, actual cloning doesn't work:

```
$ git clone ssh://admin@localhost:8001/myrepo
Cloning into 'myrepo'...
Warning: Permanently added '[localhost]:8001' (DSA) to the list of known hosts.
ssh_dispatch_run_fatal: Connection to 127.0.0.1 port 8001: incorrect signature
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
```

Does anybody know how to fix the issue?
2018/03/21
892
2,823
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/TYhC5.png)](https://i.stack.imgur.com/TYhC5.png) I am getting Unexpected token on the `React Component Name` while running `npm test`. Tried reading several other similar questions but none seems to be working for me. I have added content of babelrc , package.json and my test file content below ```html { "presets": ["env"] } "dependencies": { "react": "^16.2.0", "react-bootstrap": "^0.32.1", "react-dom": "^16.2.0", "react-redux": "^5.0.7", "react-router-dom": "^4.2.2", "react-scripts": "1.1.1", "redux": "^3.7.2" }, "scripts": { "start": "react-scripts start", "build": "react-scripts build", "test": "jest", "eject": "react-scripts eject" }, "devDependencies": { "babel-cli": "^6.26.0", "babel-jest": "^23.0.0-alpha.0", "babel-preset-env": "^1.6.1", "babel-preset-react": "^6.24.1", "enzyme": "^3.3.0", "enzyme-adapter-react-16": "^1.1.1", "enzyme-to-json": "^3.3.3", "jest": "^22.4.2", "react-test-renderer": "^16.2.0" }, "jest": { "notify": true, "snapshotSerializers": [ "enzyme-to-json/serializer" ], "setupTestFrameworkScriptFile": "./src/setupTests.js", "transform": { "^.+\\.jsx?$": "babel-jest" } } import React from 'react'; import { shallow, mount } from 'enzyme'; import App from '../../src/components/App'; // describe what we are testing describe('Render App Component', () => { // make our assertion and what we expect to happen it('should render without throwing an error', () => { wrapper = shallow(); expect(wrapper.find('.app__wrapper').length).toEqual(1); }) }) ```<issue_comment>username_1: Here is how I resolved the issue. • Add the following content to your .babelrc file and make sure .babelrc is in the root folder ``` { "presets": ["env","react"] } ``` • Make sure you exclude static assets like CSS, images, SCSS, PDF, fonts, etc. Add the following to package.json as highlighted in the screenshot ``` "moduleNameMapper": { "\\.(jpg|jpeg|png|gif|eot|otf|webp|svg|ttf|woff|woff2|mp4|webm|wav|mp3|m4a|aac|oga)$": "/\_\_mocks\_\_/fileMock.js", "\\.(css|less)$": "/\_\_mocks\_\_/styleMock.js" } ``` --- Screenshot: ![exclude-static-assets](https://i.stack.imgur.com/glNx2.png) Upvotes: 3 [selected_answer]<issue_comment>username_2: If anybody comes here to figure out what is happening, I think the accepted answer is not related to the issue here. When the file extension is `.js` and if you try to write JSX in that file, linter will complain if you have a correct ESlint configuration. Instead, try to change the file extension to `.jsx` **or** `.tsx` (if Typescript involved). Upvotes: 0
2018/03/21
359
1,328
<issue_start>username_0: 
> 
> ImportError: Could not find 'nvcuda.dll'. TensorFlow requires that
> this DLL be installed in a directory that is named in your %PATH%
> environment variable. Typically it is installed in
> 'C:\Windows\System32'. If it is not present, ensure that you have a
> CUDA-capable GPU with the correct driver installed.
> 
> 

Please help me solve this error; I am working on my FYP.<issue_comment>username_1: The error occurs because your system couldn't find the CUDA components that the tensorflow-GPU version requires. Please refer to the guide for installing tensorflow-GPU on Windows [here](https://www.tensorflow.org/install/install_windows). If you want to use the GPU version you have to install the CUDA toolkit first. Make sure that the CUDA toolkit and cuDNN versions you install are supported by your TensorFlow version.

Upvotes: 2 <issue_comment>username_2: First of all, my computer does not have an Nvidia card, so I cannot install the CUDA driver. I downloaded `nvcuda.dll` and executed the

```
regsvr32 C:\Windows\System32\nvcuda.dll
```

instruction, which registered the DLL so that importing TensorFlow no longer raised

> 
> ImportError: Could not find 'nvcuda.dll'.
> 
> 

Anyway, please reinstall your TensorFlow:

```
pip uninstall protobuf
pip uninstall tensorflow
```

and then

```
pip install protobuf
pip install tensorflow
```

Upvotes: 2
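As a quick sanity check alongside the answers above: after installing the CPU-only `tensorflow` package (as the second answer suggests), TensorFlow itself can report whether the installed build was compiled against CUDA and which devices it can actually use; if the build is CPU-only, `nvcuda.dll` is not needed at all. A minimal sketch using the TF 1.x-era API that matches the question's timeframe:

```python
# Sketch: check whether this TensorFlow build expects CUDA at all
import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.test.is_built_with_cuda())     # False -> CPU-only build, no nvcuda.dll required
print(device_lib.list_local_devices())  # devices TensorFlow can actually see
```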
2018/03/21
1,568
5,314
<issue_start>username_0: [![enter image description here](https://i.stack.imgur.com/Ym9GR.png)](https://i.stack.imgur.com/Ym9GR.png) I want to give a name to the green and red pin annotations above. I watched a video tutorial where the author gives the annotation a name using `annotation.title =`, but I don't know why I can't get the name to show correctly on my map. Here is the code I use: ``` import UIKit import MapKit class ViewController: UIViewController, MKMapViewDelegate { @IBOutlet weak var mapKit: MKMapView! override func viewDidLoad() { super.viewDidLoad() mapKit.delegate = self let bakrieTowerCoordinate = CLLocation(latitude: -6.23860724759536, longitude: 106.789429759178) let GBKCoordinate = CLLocation(latitude: -6.23864960081552, longitude: 106.789627819772) let locationGBK : CLLocationCoordinate2D = CLLocationCoordinate2DMake(-6.23864960081552, 106.789627819772) let locationBakrieToweer : CLLocationCoordinate2D = CLLocationCoordinate2DMake(-6.23860724759536, 106.789429759178) let annotation = MKPointAnnotation() annotation.coordinate = locationGBK annotation.title = "GBK" annotation.subtitle = "Stadion" mapKit.addAnnotation(annotation) let annotation2 = MKPointAnnotation() annotation2.coordinate = locationBakrieToweer annotation2.title = "Bakrie Tower" annotation2.subtitle = "Office" mapKit.addAnnotation(annotation2) zoomMapOn(location1: GBKCoordinate, location2: bakrieTowerCoordinate) } func zoomMapOn(location1: CLLocation, location2: CLLocation) { let distanceOf2CoordinateInMeters = location1.distance(from: location2) let radius = distanceOf2CoordinateInMeters * 3 let coordinateRegion = MKCoordinateRegionMakeWithDistance(location1.coordinate, radius, radius) mapKit.setRegion(coordinateRegion, animated: true) } func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { let annotationView = MKPinAnnotationView(annotation: annotation, reuseIdentifier: "pin") guard let locationName = annotation.title else {return nil} if locationName == "GBK" { annotationView.pinTintColor = UIColor.green } else if locationName == "Bakrie Tower" { annotationView.pinTintColor = UIColor.red } return annotationView } } ```<issue_comment>username_1: You need to set this property in `mapView(_:viewFor:)` before returning your `annotationView`: ``` annotationView.canShowCallout = true ``` Now when you tap the pin, it will show your text. Upvotes: 0 <issue_comment>username_2: Add this code to your view controller - ``` import UIKit import MapKit class ViewController: UIViewController, MKMapViewDelegate { @IBOutlet weak var mapKit: MKMapView!
override func viewDidLoad() { super.viewDidLoad() mapKit.delegate = self let bakrieTowerCoordinate = CLLocation(latitude: -6.23860724759536, longitude: 106.789429759178) let GBKCoordinate = CLLocation(latitude: -6.23864960081552, longitude: 106.789627819772) let locationGBK : CLLocationCoordinate2D = CLLocationCoordinate2DMake(-6.23864960081552, 106.789627819772) let locationBakrieToweer : CLLocationCoordinate2D = CLLocationCoordinate2DMake(-6.23860724759536, 106.789429759178) let annotation = MKPointAnnotation() annotation.coordinate = locationGBK annotation.title = "GBK" annotation.subtitle = "Stadion" mapKit.addAnnotation(annotation) let annotation2 = MKPointAnnotation() annotation2.coordinate = locationBakrieToweer annotation2.title = "Bakrie Tower" annotation2.subtitle = "Office" mapKit.addAnnotation(annotation2) zoomMapOn(location1: GBKCoordinate, location2: bakrieTowerCoordinate) } func zoomMapOn(location1: CLLocation, location2: CLLocation) { let distanceOf2CoordinateInMeters = location1.distance(from: location2) let radius = distanceOf2CoordinateInMeters * 3 let coordinateRegion = MKCoordinateRegionMakeWithDistance(location1.coordinate, radius, radius) mapKit.setRegion(coordinateRegion, animated: true) } func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? { let annotationView = MKPinAnnotationView(annotation: annotation, reuseIdentifier: "pin") guard let locationName = annotation.title else {return nil} if locationName == "GBK" { annotationView.canShowCallout = true } else if locationName == "Bakrie Tower" { annotationView.pinTintColor = UIColor.red } annotationView.canShowCallout = true // Add this line in your code return annotationView } } ``` When you tap on the pin, it will show the text, Like - [![enter image description here](https://i.stack.imgur.com/YEHp3.png)](https://i.stack.imgur.com/YEHp3.png) [![enter image description here](https://i.stack.imgur.com/R9aBD.png)](https://i.stack.imgur.com/R9aBD.png) Just Add `annotationView.canShowCallout = true` inside your `mapView(_ mapView:)`. Thank you. Upvotes: 3 [selected_answer]
2018/03/21
464
1,442
<issue_start>username_0: I am trying to run a Python script a.py from another Python script scheduler.py and I want to pass a list as an argument, something like:

Scheduler.py:

```
t = {"code": 161123134, "name": "task2", "domain": "www.google.com", "type": "Type1", "keywords": ["bai2", "yin4", "jiao3", "yi8", "ping1", "tai3"]}
hourTasks = json.dumps(t)
os.system("python a.py " + hourTasks)
```

a.py

```
task = sys.argv[1:]
task = json.loads(task)
```

However it gives me an error **the JSON object must be str, bytes or bytearray, not 'list'**. Anyone know what the problem is?<issue_comment>username_1: From one script you should import the other; there is no need to use JSON between your two scripts, just pass a Python dict instead.

Upvotes: 0 <issue_comment>username_2: You could use the dict's own methods to read the keys and the values directly, after doing a length and membership check, e.g. with `for x in xs:`.

Upvotes: 0 <issue_comment>username_3: Try this:

Scheduler.py:

```
import json
import subprocess

t = {"code": 161123134, "name": "task2", "domain": "www.google.com", "type": "Type1", "keywords": ["bai2", "yin4", "jiao3", "yi8", "ping1", "tai3"]}
task = json.dumps(t)
subprocess.call(["python", "a.py", task])
```

a.py:

```
import json
import sys

task = sys.argv[1]
t = json.loads(task)
```

Upvotes: 2 [selected_answer]
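For reference, a sketch of why the original code failed on two separate counts: `os.system` hands the whole command line to the shell, which splits the JSON at its spaces, and `sys.argv[1:]` is always a *list*, while `json.loads` needs a single string. Both points can be demonstrated without running two scripts:

```python
# Sketch: the two failure modes in the original code
import json
import shlex

t = {"name": "task2"}
payload = json.dumps(t)          # '{"name": "task2"}' -- contains a space

# 1) the shell splits the command line on whitespace, so a.py would
#    receive the JSON broken across several argv entries:
print(shlex.split("python a.py " + payload))

# 2) sys.argv[1:] is a list; json.loads wants one string:
argv_tail = [payload]            # stand-in for sys.argv[1:]
print(json.loads(argv_tail[0]))  # index a single element, not the whole list
```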
2018/03/21
1,031
3,053
<issue_start>username_0: I'm using Window Linux Subsystem (Debian stretch). Followed the instruction on Docker website, I installed docker-ce, but it cannot start. Here is the info: ``` $ sudo service docker start grep: /etc/fstab: No such file or directory [ ok ] Starting Docker: docker. $ sudo service docker status [FAIL] Docker is not running ... failed! ``` What should I do with `/etc/fstab` not found?<issue_comment>username_1: Perhaps a good signal <https://learn.microsoft.com/en-us/windows/wsl/release-notes#build-17093> > > WSL now processes the /etc/fstab file during instance start [GH 2636]. > > > Upvotes: 2 <issue_comment>username_2: I was getting the same error. Apparently on my install of WSL with Debian, I didn't have an etc/fstab file. Surprisingly, just creating the file via 'touch' worked: ``` sudo touch /etc/fstab ``` Upvotes: 4 <issue_comment>username_3: For anybody stumbling across this years later like me, Docker doesn't work inside WSL. But you can use Docker for Windows and WSL2 to run native containers inside your Linux Distro and the install and config is quite painless <https://learn.microsoft.com/en-us/windows/wsl/tutorials/wsl-containers> Upvotes: -1 <issue_comment>username_4: to fix fstab ```sh touch /etc/fstab ``` if you run `dockerd`, it will give you the failed message: ``` INFO[2022-01-27T17:55:14.100489400+07:00] Loading containers: start. WARN[2022-01-27T17:55:14.191666800+07:00] Running iptables --wait -t nat -L -n failed with message: `iptables v1.8.2 (nf_tables): CHAIN_ADD failed (No such file or directory): chain PREROUTING`, error: exit status 4 INFO[2022-01-27T17:55:14.493716300+07:00] stopping event stream following graceful shutdown error="" module=libcontainerd namespace=moby INFO[2022-01-27T17:55:14.494906600+07:00] stopping event stream following graceful shutdown error="context canceled" module=libcontainerd namespace=plugins.moby INFO[2022-01-27T17:55:14.495048400+07:00] stopping healthcheck following graceful shutdown module=libcontainerd failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to create NAT chain DOCKER: iptables failed: iptables --wait -t nat -N DOCKER: iptables v1.8.2 (nf\_tables): CHAIN\_ADD failed (No such file or directory): chain PREROUTING (exit status 4) ``` that is [Debian](https://forums.docker.com/t/failing-to-start-dockerd-failed-to-create-nat-chain-docker/78269) nat issue, fix it with: ``` sudo update-alternatives --set iptables /usr/sbin/iptables-legacy sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy ``` now you can start the service again you can follow this to make it start on startup <https://askubuntu.com/a/1356147/138352> Edited: if the issue with IP table still persisted try to set [WSL version](https://learn.microsoft.com/en-us/windows/wsl/basic-commands#set-wsl-version-to-1-or-2) to 2, run the command from Windows shell: ``` wsl --set-version 2 ``` the distribution list can be found with command `wsl -l` Upvotes: 4
2018/03/21
983
2,911
<issue_start>username_0: I have imported numpy library in my python code. Now my question is, is it possible to get a list of all the functions that are covered under the numpy library? I know documentation is a thing, but I am looking for something else. I use spyder.
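One standard approach, shown here as a sketch: `dir()` lists a module's attribute names, and filtering with `callable` (or using `inspect.getmembers`) narrows that down to the functions; this works in the Spyder console the same as in any Python session.

```python
# Sketch: enumerate the callables that the numpy package exposes
import inspect
import numpy

names = [n for n in dir(numpy) if callable(getattr(numpy, n))]
print(len(names), names[:10])

# inspect.getmembers returns (name, object) pairs if the objects are needed too
funcs = inspect.getmembers(numpy, callable)
```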
2018/03/21
1,202
3,560
<issue_start>username_0: My colleague and I have been on a six month quest for a VBA macro that can run in a Read-Only, password protected PowerPoint. Is it possible or will PowerPoint always block the VBA macro from running in the presentation, because of the read-only status?

```
Private Sub CheckBox1_Click()
    Dim ppShp As Shape
    Dim eff1 As Effect
    Dim ani1 As AnimationBehavior

    With ActivePresentation.Slides(1)
        Set ppShp = .Shapes("Oval 4")
        Set eff1 = .TimeLine.MainSequence.AddEffect(Shape:=ppShp, effectId:=msoAnimEffectAppear)
    End With
End Sub
```

I tried using `If ReadOnly = True Then` conditions. I want the user to use the macros and save, but not edit beyond that, or to open and 'look under the hood'. (It's all for an educational program)

We get

> 
> Run-time error '-2147467259 (80004005)'
> 
> Presentation (unknown member) : Invalid request. Presentation cannot be modified.
> 
>
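A hedged sketch of one way to work around the error (illustrative, not a confirmed fix): "Presentation cannot be modified" is what PowerPoint raises when a macro edits a presentation that was opened read-only, so the macro can test `ActivePresentation.ReadOnly` first and, if it is set, save a writable copy with `SaveCopyAs` and operate on that copy instead. The file name below is made up for the example.

```vba
Sub GuardAgainstReadOnly()
    ' Sketch: avoid "Presentation cannot be modified" by detecting the
    ' read-only state up front (copy path is illustrative only)
    If ActivePresentation.ReadOnly = msoTrue Then
        ActivePresentation.SaveCopyAs ActivePresentation.Path & "\editable_copy.pptx"
        ' ...then open the copy and run the animation code against it
    Else
        ' the presentation is writable; AddEffect etc. can run directly
    End If
End Sub
```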
2018/03/21
1,251
4,842
<issue_start>username_0: I use work with relationship in laravel 5.6. I create `product` table with migration: ``` Schema::create('products', function (Blueprint $table) { $table->increments('id'); $table->string('name'); $table->string('slug'); $table->text('description'); $table->string('tags'); $table->string('original_price'); $table->integer('view_order')->default(0); $table->unsignedInteger('admin_id'); $table->foreign('admin_id')->references('id')->on('admins'); $table->boolean('status'); $table->timestamps(); }); ``` And i create `category` table with migration: ``` Schema::create('categories', function (Blueprint $table) { $table->increments('id'); $table->string('name'); $table->boolean('status'); $table->timestamps(); }); ``` And create `product_categories` table with migration: ``` Schema::create('product_categories', function (Blueprint $table) { $table->increments('id'); $table->unsignedInteger('product_id'); $table->foreign('product_id')->references('id')->on('products'); $table->unsignedInteger('category_id'); $table->foreign('category_id')->references('id')->on('categories'); $table->timestamps(); }); ``` **Now, I use `Bootstrap Multiselect` for categories in one product.** In `category` model: ``` /** * @return \Illuminate\Database\Eloquent\Relations\BelongsToMany */ public function products() { return $this->belongsToMany(Product::class); } ``` In `Product` model: ``` /** * @return \Illuminate\Database\Eloquent\Relations\BelongsToMany */ public function categories() { return $this->belongsToMany(Category::class); } ``` How to add `category_id` and `product_id` in `product_categories` table with relationship?<issue_comment>username_1: You only have to add the pivot table: ``` public function products() { return $this->belongsToMany(Product::class, 'product_categories'); } public function categories() { return $this->belongsToMany(Category::class, 'product_categories'); } ``` Upvotes: 0 <issue_comment>username_2: Check the documantation related to [Many to many Relationships.](https://laravel.com/docs/5.6/eloquent-relationships#updating-many-to-many-relationships) Your pivot table doesn't follow Laravel's convention, either update your table name or update your relationships to address this issue. The convention is the alphabetical order of the two models, thus your pivot table should be named: `category_product` If you do not want to update the table name, update your relationships. ```php public function categories() { return $this->belongsToMany(Product::class, 'product_categories') } public function products() { return $this->belongsToMany(Category::class, 'product_categories') } ``` Now to *"save an entry to the pivot table"* -or in other words: to create the relationship between the two models- you may use `attach` or `sync` method. ```php $product->categories()->attach($category); $product->categories()->attach([$categoryId1, $categoryId2]); ``` `sync` is different. > > The sync method accepts an array of IDs to place on the intermediate table. Any IDs that are not in the given array will be removed from the intermediate table. > > > To detach (delete entry in pivot table), simple use the `detach` method. ```php $product->categories()->detach([1, 2]); ``` Of course, do the same for `Category`. Upvotes: 1 <issue_comment>username_3: By default, laravel derives the table name from the alphabetical order of the related model names. 
Here your model names are `Product` and `Category` and the derived relational table will be `category_product` because category comes before product in alphabetical order. Either you can change the table name or you can override this by mentioning the table name as the second parameter in the relational method, as follows:

```
public function products()
{
    return $this->belongsToMany(Product::class, 'product_categories');
}

public function categories()
{
    return $this->belongsToMany(Category::class, 'product_categories');
}
```

Upvotes: 0 <issue_comment>username_4: Your model names are `Product` and `Category` and the derived relational table will be `category_product` because category comes before product in alphabetical order. You only have to add the pivot table:

```
public function categories()
{
    return $this->belongsToMany(Category::class, 'product_categories');
}

public function products()
{
    return $this->belongsToMany(Product::class, 'product_categories');
}
```

Now, to save with the relationship:

```
$product->categories()->attach($category);
$product->categories()->attach([$category_id_1, $category_id_2]);
```

Upvotes: 1 [selected_answer]
2018/03/21
866
3,574
<issue_start>username_0: I am migrating a code that I myself made in Java to Scala which does an aggregation in MongoDB. But I was stuck on how to accumulate the results of aggregation inside a collection using MongoDB Scala Driver. Java code: ``` mongoCollection.aggregate(aggregatePipeline) .map(document -> { Document group = document.get("_id", Document.class); return new Document("chat", group).append("count", document.get("count")); }) .into(new ArrayList<>(), (results, e) -> { Document document = new Document("chats", results); System.out.println(document.toJson()); }); ``` Scala code: ``` mongoCollection.aggregate(aggregatePipeline) .map[Document](doc => Document("chat" -> doc.get("_id"), "count" -> doc.get("count"))) .subscribe((doc: Document) => println(doc.toJson)) ``` As can be seen in the code in Scala, I'm not accumulating the aggregation results, because I do not know how to get the same behavior from the .into() method present in the Java code, using the MongoDB Scala Driver. I've done a lot of research on the internet, but without success. If anyone can help me, I appreciate it.<issue_comment>username_1: You should use the implicit [Observable helpers](http://mongodb.github.io/mongo-scala-driver/2.2/reference/observables/) specifically [`collect()`](http://mongodb.github.io/mongo-scala-driver/2.2/scaladoc/org/mongodb/scala/ObservableImplicits$ScalaObservable.html#collect[S]():org.mongodb.scala.SingleObservable[Seq[T]]). There is also a `toFuture()` method that effectively runs collect and returns the result as a `Future`. ``` mongoCollection.aggregate(aggregatePipeline) .map[Document](doc => Document("chat" -> doc.get("_id"), "count" -> doc.get("count"))) .collect() .subscribe((docs: Seq[Document]) => println(docs)) ``` Upvotes: 2 [selected_answer]<issue_comment>username_2: You can setup a variable of Seq[Document] type and then append the resulting document sequence to the variable once the subscription event fires. Use Promise/Future to wait for the result. For example: ``` def find_all (collection_name: String): Seq[Document] = { /* The application will need to wait for the find operation thread to complete in order to process the returned value. */ log.debug(s"Starting database find_all operation thread") /* Set up new client connection, database, and collection */ val _client: MongoClient = MongoClient(config_client) val _database: MongoDatabase = _client.getDatabase(config_database) val collection: MongoCollection[Document] = _database.getCollection(collection_name) /* Set up result sequence */ var result_seq : Seq[Document] = Seq.empty /* Set up Promise container to wait for the database operation to complete */ val promise = Promise[Boolean] /* Start insert operation thread; once the thread has finished, read resulting documents. */ collection.find().collect().subscribe((results: Seq[Document]) => { log.trace(s"Found operation thread completed") /* Append found documents to the results */ result_seq = result_seq ++ results log.trace(s" Result sequence: $result_seq") /* set Promise container */ promise.success(true) /* close client connection to avoid memory leaks */ _client.close }) /* Promise completion result */ val future = promise.future /* wait for the promise completion result */ Await.result(future, Duration.Inf) /* Return document sequence */ result_seq } ``` Upvotes: 0
2018/03/21
581
1,845
<issue_start>username_0: I have an array of strings. I want to search in that array for and string that contains a specific string. If it's found, return that string WITHOUT the bit of the string we looked for. So, the array has three words. "Strawbery", "Lime", "Word:Word Word" I want to search in that array and find the full string that has "Word:" in it and return "Word Word" So far I've tried several different solutions to no avail. <https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/includes> looks promising, but I'm lost. Any suggestions? Here's what I've been trying so far: ``` var arrayName = ['Strawberry', 'Lime', 'Word: Word Word'] function arrayContains('Word:', arrayName) { return (arrayName.indexOf('Word:') > -1); } ```<issue_comment>username_1: You can use `find` to search the array. And use `replace` to remove the word. This code will return the value of the first element only. ```js let arr = ["Strawbery", "Lime", "Word:Word Word"]; let search = "Word:"; let result = (arr.find(e => e.includes(search)) || "").replace(search, ''); console.log(result); ``` If there are multiple search results, you can use `filter` and `map` ```js let arr = ["Strawbery", "Word:Lime", "Word:Word Word"]; let search = "Word:"; let result = arr.filter(e => e.includes(search)).map(e => e.replace(search, '')); console.log( result ); ``` Upvotes: 3 [selected_answer]<issue_comment>username_2: Use **.map**: ``` words_array = ["Strawbery", "Lime", "Word:Word Word"] function get_word(words_array, search_word) { res = ''; words_array.map(function(word) { if(word.includes(search_word)) { res = word.replace(search_word, '') } }) return(res) } ``` **Usage**: ``` res = get_word(words_array, 'Word:') alert(res) // returns "Word Word" ``` Upvotes: 0
2018/03/21
656
2,625
<issue_start>username_0: I am trying to get a better hang of doing this and have been playing around with strings. What I am trying to do is collect a user input as a string and manipulate it so that when I display the text, whatever they wrote would be first displayed in all lower case, then all uppercase, then each word of the text on its own line. So, the output would be like this if I enter "This is an example":

```
this is an example
THIS IS AN EXAMPLE
This
is
an
example
```

I feel like this is supposed to be a lot easier than it looks. I am trying to do the all-lowercase part so far, but cannot get that to work (as well as the other two parts). I think that if I get the lowercase right, I just repeat the same thing for uppercase and splitting it.

```
<button onclick="myFunction()">Try it</button>

function myFunction() {
    var person = prompt("Please enter a phrase");
    if (person != null) {
        document.getElementById("test").innerHTML = test.toLowerCase;
        document.getElementById("test").innerHTML = test.toUpperCase;
        document.getElementById("test").innerHTML = test.split("\n");
    }
}
```

The above is what I am playing with so far; I get undefined when I click the button to test it. Can someone help me edit this?<issue_comment>username_1: 1. functions are invoked using `()`
2. your variable is `person` not `test`
3. you want to split on `space` not `\n`
4. you want to ADD to test innerHTML, not replace it each time
5. to get line breaks in HTML, use the `<br>` tag

I've gone for code that assigns innerHTML once, as this is more performant than adding to it a bit at a time - of course, with such a simple example there's no perceived difference, however I thought I should mention why I chose to use this odd methodology

```js
function myFunction() {
    var person = prompt("Please enter a phrase");
    if (person != null) {
        document.getElementById("test").innerHTML = [
            person.toLowerCase(),
            person.toUpperCase(),
            person.split(" ").join('<br>')
        ].join("<br>");
    }
}
```

```html
<button onclick="myFunction()">Try it</button>
<p id="test"></p>
```

Upvotes: 2 <issue_comment>username_2: You may want to split the string into words first and use the join() function with `<br>` tags to render them as multiple lines of words.

```
function myFunction() {
    var person = prompt("Please enter a phrase");
    if (person != null) {
        document.getElementById("test").innerHTML += person.toLowerCase() + "<br>";
        document.getElementById("test").innerHTML += person.toUpperCase() + "<br>";
        document.getElementById("test").innerHTML += person.split(' ').join('<br>');
    }
}
```

Upvotes: 2
2018/03/21
598
2,243
<issue_start>username_0: I know how to get a data attribute using .data('id') for example.. But I cannot figure out how to get the "attachment\_id" value contained within "data-kgvid\_video\_vars" ?

this is what im trying to get it into so I can then use it elsewhere..

```
alert(jQuery(this).find('.kgvid_videodiv').data('id'));

data-kgvid_video_vars='{"id":"kgvid_16","attachment_id":116,"player_type":"Video.js","width":"640","height":"360","fullwidth":"true","countable":true,"count_views":"start_complete","start":"","autoplay":"false","pauseothervideos":"true","set_volume":"1","mute":"false","meta":true,"endofvideooverlay":"","resize":"true","auto_res":"automatic","pixel_ratio":"true","right_click":"on","playback_rate":"false","nativecontrolsfortouch":"false","locale":"en","enable_resolutions_plugin":false}'
```
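A sketch of the usual way to read this: jQuery's `.data()` automatically JSON-parses a `data-*` attribute whose value is valid JSON, so the nested field is reachable directly; alternatively, read the raw attribute string and parse it yourself.

```js
// Sketch: .data() returns an object for JSON-valued data-* attributes
var vars = jQuery(this).find('.kgvid_videodiv').data('kgvid_video_vars');
alert(vars.attachment_id); // 116

// Equivalent manual route via the raw attribute string
var raw = jQuery(this).find('.kgvid_videodiv').attr('data-kgvid_video_vars');
alert(JSON.parse(raw).attachment_id);
```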
2018/03/21
749
2,370
<issue_start>username_0: I have password-protected .xls files in a directory. I would like to open each of these files and save them without the password. However, the files can be opened by using either of the sample passwords listed below.

```
pwd1 = "123"
pwd2 = "456"
pwd3 = "789"

'Check if pwd1 opens
Application.Workbooks.Open(Filename:=fn, Password:=<PASSWORD>)

'If fail then use pwd2
Application.Workbooks.Open(Filename:=fn, Password:=pwd2)

'and so on..
```

How should I implement this?<issue_comment>username_1: Once the file has been opened once, you only need to `Unprotect` it. This will save a lot of time, instead of constantly opening/closing workbooks.

Here's how I'd do it:

```
Public Sub CrackWorkbook()
    Dim fn As String
    fn = "C:\Temp\test_password_is_456.xlsx"

    Dim wb As Workbook
    Set wb = Workbooks.Open(fn)

    Dim lst As Variant
    lst = Array("123", "456", "789")

    Dim item As Variant
    For Each item In lst
        On Error GoTo did_not_work
        Call wb.Unprotect(item)
        Call wb.Save
        Call wb.Close(False)
        Exit Sub
did_not_work:
        On Error GoTo 0
    Next item
End Sub
```

In other words, create an array of strings and do a `For Each` on them, and set some error-handling to deal with all the failed attempts.

I know `GoTo` statements are a bit yucky, but that's the best way to handle errors in VBA (as far as I know).

Upvotes: 2 <issue_comment>username_2: I tried @username_1's answer, however the routine gives a 1004 error. I googled this behavior and found this post: [vba - how to force ignore/continue past 1004 error](https://stackoverflow.com/questions/21176638/vba-how-to-force-ignore-continue-past-1004-error)

The code below works by using `On Error Resume Next`; `Err.Clear` resets the error state before each attempt so a failure doesn't carry over into the next iteration. I based this on @username_1's answer.

```
Public Sub CrackWorkbook()
    Dim fn As String
    fn = "C:\Temp\test_password_is_456.xlsx"

    Dim wb As Workbook
    Dim item As Variant
    Dim lst As Variant
    lst = Array("123", "456", "789")

    For Each item In lst
        Err.Clear   ' reset any error left from the previous attempt
        On Error Resume Next
        Workbooks.Open Filename:=fn, Password:=item
        If Err.Number <> 0 Then GoTo did_not_work
        Exit For
did_not_work:
    Next item

    Set wb = ActiveWorkbook
    wb.SaveAs Filename:=fn, Password:=""
    wb.Close
End Sub
```

Upvotes: 2 [selected_answer]
2018/03/21
651
2,170
<issue_start>username_0: The line and the commented line below produces the same results: ``` public class StringEscapeMain { public static void main(String[] args){ String fileName = "C:\\Dev\\TryEclipseJava\\StringEscape\\\\\\f.txt"; /* String fileName = "C:\\Dev\\TryEclipseJava\\StringEscape\\f.txt";*/ File file = new File(fileName); if(file.exists()){ System.out.println("File exists!"); } else{ System.out.println("File does not exist!"); } } } ``` Does Java always treat any slash sequence of more than 2 slashes the same as "\"? Thanks!<issue_comment>username_1: No, Java does not treat these Strings as the same. A literal `\` has to be *escaped* by prefixing it with another `\`. So `\\` is one backslash, and `\\\\\\` is three. Only once you use the String as a path to a `File` on a platform where `\` is the delimiter (Windows …) are the extra backslashes stripped. See the more detailed [answer](https://stackoverflow.com/a/49397998/9131078) by user6690200. Upvotes: 0 <issue_comment>username_2: The first `\` is used to escape, which means * `C:\\Dev\\TryEclipseJava\\StringEscape\\\\\\f.txt` will be compiled to `C:\Dev\TryEclipseJava\StringEscape\\\f.txt`. and * `C:\\Dev\\TryEclipseJava\\StringEscape\\f.txt` will be compiled to `C:\Dev\TryEclipseJava\StringEscape\f.txt`. --- When you create a `File` instance with: ``` File file = new File(fileName); ``` The `fileName` will be "normalized" according to your `FileSystem`: ``` public File(String pathname) { if (pathname == null) { throw new NullPointerException(); } this.path = fs.normalize(pathname); this.prefixLength = fs.prefixLength(this.path); } ``` In the "normalize" procedure of `WinNTFileSystem`, `C:\Dev\TryEclipseJava\StringEscape\\\f.txt` will be truncated to: `C:\Dev\TryEclipseJava\StringEscape` Then it will: > > Remove redundant slashes from the remainder of the path, forcing all > slashes into the preferred slash > > > Finally, `fileName` is normalized as: `C:\Dev\TryEclipseJava\StringEscape\f.txt`. Upvotes: 4 [selected_answer]
2018/03/21
816
2,856
<issue_start>username_0: I'm following the [example](https://www.styled-components.com/docs/advanced#media-templates) in the docs for creating media templates, and I'm really struggling to type the arguments to pass to the `css` function (plain JS version from example): ``` const sizes = { desktop: 992 } const media = Object.keys(sizes).reduce((acc, label) => { acc[label] = (...args) => css` // <----- how to type args @media(max-width: ${sizes[label]}px) { ${css(...args)} } ` return acc }, {}) ``` In case you know TS but not styled-components, `args` is a tagged template literal, so I would use the `media` object as such: ``` media.desktop` background-color: blue; ${variable} ` ``` I have tried to type `args` as `TemplateStringsArray` but TS complains because spread arguments need to be of `array` type (which I think it is but somehow it is not recognised). If I change the type to `TemplateStringsArray[]`, the `css()` function complains because it expects at least 1 argument, but received 0 or more.<issue_comment>username_1: The signature for a tagged template should be `(literals: TemplateStringsArray, ...placeholders: any[]) => string` where `literals` are the strings in the template and `placeholders` are the variable values. If you just want to pass all arguments to the `css` you can use `call`. Typescript will not let you spread directly because `css` has required arguments that the ts compiler tries to check for: ``` acc[label] = (...args: any[]) => css` @media(max-width: ${sizes[label]}px) { ${css.call(undefined, ...args)} } ` ``` A fully typed version that correctly specifies the types for the `media.*` functions would be : ``` const sizes = { desktop: 992 } const media = Object.keys(sizes).reduce((acc, label) => { acc[label] = (literals: TemplateStringsArray, ...placeholders: any[]) => css` @media(max-width: ${sizes[label]}px) { ${css(literals, ...placeholders)} } `; return acc }, {} as Record string>) ``` Upvotes: 5 [selected_answer]<issue_comment>username_2: The `username_1` [solution](https://stackoverflow.com/questions/49397538/typing-the-css-function-in-styled-components/49398439#49398439) is great but doesn't work for me. It produces a string with commas. So I add `join` to `css` function. ``` import { css } from "styled-components"; const sizes = { desktop: 730, }; const media = Object.keys(sizes).reduce( (acc, label) => { acc[label] = (literals: TemplateStringsArray, ...placeholders: any[]) => css` @media (max-width: ${sizes[label]}px) { ${css(literals, ...placeholders)}; } `.join(""); return acc; }, {} as Record< keyof typeof sizes, (l: TemplateStringsArray, ...p: any[]) => string >, ); export default media; ``` Upvotes: 3
2018/03/21
671
2,225
<issue_start>username_0: I have an array A of 1s and 0s and want to see if the larger array of bits B contains those bits in that exact order?

Example:

A = [0 1 1 0 0 0 0 1]
B = [0 1 0 0 1 1 0 0 0 0 1 0 1 0 1]

would be true as A is contained in B

Most solutions I have found only determine if a value IS contained in another matrix, this is no good here as it is already certain that both matrices will be 1s and 0s

Thanks
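A minimal sketch of one way to test this: because both arrays contain only 0s and 1s, each can be flattened to a digit string, which reduces the question to a plain substring check.

```python
# Sketch: exact-order bit-sequence containment via a substring test
A = [0, 1, 1, 0, 0, 0, 0, 1]
B = [0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 1]

def contains_bits(big, small):
    to_s = lambda bits: ''.join(str(b) for b in bits)
    return to_s(small) in to_s(big)

print(contains_bits(B, A))  # True -- A starts at index 3 of B
```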
2018/03/21
809
3,013
<issue_start>username_0: I have the following tables user table, post table and profile table. the post belongs to a user and the profile belongs to a user. I also have a forum were people can post I want the users username to show instead of their name. But The username is in the profiles table and I don't know how to get it. here is my code

```
class Post extends Model
{
    //
    public function users(){
        return $this->belongsToMany('App\User','posted_by');
    }

class Profile extends Model
{
    //
    public function users(){
        return $this->belongsTo('App\User','whos_profile');
    }

class User extends Authenticatable
{
    use Notifiable;

    /**
     * The attributes that are mass assignable.
     *
     * @var array
     */
    protected $fillable = [
        'firstname','lastname', 'email', 'password',
    ];

    /**
     * The attributes that should be hidden for arrays.
     *
     * @var array
     */
    protected $hidden = [
        'password', 'remember_token',
    ];

    public function profile(){
        return $this->hasOne('App\Profile');
    }

    public function post(){
        return $this->hasMany('App\Post');
    }
}
```
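A hedged sketch of the usual Laravel pattern (names adjusted for illustration): since a post belongs to a single user, `Post` should define a `belongsTo` relation rather than `belongsToMany`, and the username is then reachable by chaining through the user's profile. This assumes the `profiles` table has a `username` column and `posts.posted_by` holds the user's id.

```php
// Sketch: Post -> User is the inverse of a one-to-many, so use belongsTo
class Post extends Model
{
    public function user()
    {
        return $this->belongsTo('App\User', 'posted_by');
    }
}

// When rendering a post in the forum:
$username = $post->user->profile->username;
```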
2018/03/21
1,164
4,029
<issue_start>username_0: Visual C++ 2017 compiles the following cleanly, calling the user-defined `log`: ``` // Source encoding: UTF-8 with BOM ∩ #include // std::for\_each #include #include // ::(sin, cos, atan, ..., log) #include // std::string void log( std::string const& message ) { std::clog << "-- log entry: " << message << std::endl; } auto main() -> int { auto const messages = { "Blah blah...", "Duh!", "Oki doki" }; std::for\_each( messages.begin(), messages.end(), log ); // C++03 style. } ``` I think that's a compiler bug, since I designed the code to show how an identifier can be ambiguous due to name collision with the standard library. Is it a compiler bug? --- Supplemental info: MinGW g++ 7.2 issues several error messages. They're not exactly informative, 15 lines complaining about `std::for_each`, but evidently they're due to the name collision. Changing the name of `log` the code compiles nicely. --- **Update**: Further checking indicates that it's clearly a compiler bug, because Visual C++ compiles the following (except when symbol `D` is defined): ``` #include // std::(sin, cos, atan, ..., log) #include // std::string namespace my{ void log( std::string const& ) {} } using std::log; using my::log; auto main() -> int #ifdef D { return !!log; } #else { auto f = log; return f==my::log; } #endif ``` [Reported to Microsoft](https://developercommunity.visualstudio.com/content/problem/219839/ambigous-function-name-erroneously-selects-user-de.html) (the new MS bug reporting scheme is very buggy: it thought it was a good idea to word-wrap the code, then refused to let me upload source code file unless I gave it a ".txt" filename extension).<issue_comment>username_1: Defining your own `::log` causes undefined behaviour (no diagnostic required). From C++17 (N4659) [extern.names]/3: > > Each name from the C standard library declared with external linkage is reserved to the implementation for use as a name with extern "C" linkage, both in namespace std and in the global namespace. > > > [Link to related answer](https://stackoverflow.com/a/11893237/1505939). Upvotes: 2 <issue_comment>username_2: This is a compiler bug because the compiler should not be able to perform template argument deduction for the `for_each` call. The only declaration of `for_each` that could match is defined as [[alg.foreach]](https://timsong-cpp.github.io/cppwp/n4659/alg.foreach): ``` template Function for\_each(InputIterator first, InputIterator last, Function f); ``` Template argument deduction applied on function parameter `f` needs the type of the function call argument `log` to proceed. But log is overloaded, and an overload set of functions does not have a type. For example, this simpler code should not compile for the same reason: ``` #include // std::for\_each #include // std::string void log( std::string const& message ); void log(); auto main() -> int { auto const messages = { "Blah blah...", "Duh!", "Oki doki" }; std::for\_each( messages.begin(), messages.end(), log ); //template argument deduction for template parameter Function failed. } ``` It works in this version of MSVC because templates (used to be/) are implemented as a kind of macro, so `log` is passed as a name, and overload resolution is performed at the point where `log` is called in the body of `for_each`. --- About the edit: The expression `!!log` is equivalent to a call to `bool operator(bool)` there are no template argument deduction, the compiler just can not know which overload of `log` it can use to make the conversion to `bool`. 
Inside declaration of the form `auto x=y`, the actual type of `x` is deduced using template argument deduction [[dcl.type.auto.deduct]/4](https://timsong-cpp.github.io/cppwp/dcl.type.auto.deduct#4): > > If the placeholder is the auto type-specifier, the deduced type T' replacing T is determined using the rules for template argument deduction. [...] > > > So the behavior of MSVC is wrong but consistent. Upvotes: 3
2018/03/21
404
1,714
<issue_start>username_0: Currently I have a service that returns a list of parameters. If there are 4 parameters, I need to perform one request per parameter to the same endpoint, using each of the parameters. After that I need to save the results of all the requests into a collection. If I don't know how many requests I have to perform, **what rxJava operator do I need to use and how should I use it?** Take into account that I don't need to wait for the answer of the first request to perform the second one, and so on. I have seen that the zip operator allows me to perform parallel requests, but I have to know the number of requests to use it.<issue_comment>username_1: You can use `flatMap` to create an `Observable` for each `parameter` and execute them in parallel, as in

```
Observable.fromArray(parameters)
        .flatMap(val -> Observable.just(val)
                .subscribeOn(Schedulers.io())
                .map(request -> doApiCall(request))
        )
        .observeOn(AndroidSchedulers.mainThread())
        .subscribe(response -> log(response));
```

Upvotes: 1 <issue_comment>username_2: At the end I implemented it this way:

```java
public Subscription getElementsByStage(List<String> requiredStages) {
    List<Observable<?>> observables = new ArrayList<>();
    for (String stage : requiredStages) {
        ElementsRequest request = buildElementRequest(stage);
        // one Observable per request, each subscribed on its own thread
        // (performRequest() stands in for the API call; its name was garbled in the original post)
        observables.add(performRequest(request).subscribeOn(Schedulers.newThread()));
    }

    Observable<?> zippedObservables = Observable.zip(observables, this::arrangeElementsByStage);
    return zippedObservables
            .observeOn(AndroidSchedulers.mainThread())
            .subscribe(new Subscriber<Object>() { // subscriber's type parameter was garbled in the original post
                .....
            });
}
```

Upvotes: 1 [selected_answer]
2018/03/21
696
2,000
<issue_start>username_0: Here is the code:

```
class A {
    x = 0;
    y = 0;
    visible = false;
    render() {
    }
}

type RemoveProperties<T> = {
    readonly [P in keyof T]: T[P] extends Function ? T[P] : never;
};

var a = new A() as RemoveProperties<A>
a.visible // never
a.render() // ok!
```

I want to remove the "visible / x / y" properties via `RemoveProperties`, but I can only replace them with `never`.<issue_comment>username_1: You can use the same trick as the [`Omit`](http://ideasintosoftware.com/typescript-advanced-tricks/) type uses:

```
// We take the keys of T and, if T[P] is a Function, we type P as P (the string literal type for the key), otherwise we type it as never.
// Then we index by keyof T; never will be removed from the union of types, leaving just the property keys that were not typed as never
type JustMethodKeys<T> = ({[P in keyof T]: T[P] extends Function ? P : never })[keyof T];
type JustMethods<T> = Pick<T, JustMethodKeys<T>>;
```

Upvotes: 6 <issue_comment>username_2: TS 4.1
------

You can use [`as` clauses](https://github.com/microsoft/TypeScript/pull/40336) in [mapped types](https://www.typescriptlang.org/docs/handbook/advanced-types.html#mapped-types) to filter out properties in one go:

```
type Methods<T> = { [P in keyof T as T[P] extends Function ? P : never]: T[P] };

type A_Methods = Methods<A>; // { render: () => void; }
```

> 
> When the type specified in an `as` clause resolves to `never`, no property is generated for that key. Thus, an `as` clause can be used as a filter [.]
> 
> 

Further info: [Announcing TypeScript 4.1 - Key Remapping in Mapped Types](https://devblogs.microsoft.com/typescript/announcing-typescript-4-1/#key-remapping-mapped-types)

[Playground](https://www.typescriptlang.org/play?#code=C4TwDgpgBAglC8UDeAoK6oA8BcUB2ArgLYBGEATmhiLoaRVegG4CWAziyQDYS4kD2-HgEM8jKOQh4AJhQAUASlxN+LaSgC+KFKEhQAshGAALftLYAeACoA+BMigBtAApQWeKAGsIIfgDMoKyhhNkCXAF0oCExgKXMoADECPABjYBZ+DwB+KFdaCCYKcNwrCKgNAG4dcGgYAH1DEzNQxEbTcwsYGwr0AHpeh0kZClxFBDsVNR6NIA) Upvotes: 6
2018/03/21
538
2,082
<issue_start>username_0: I have successfully used the JSch library to create an SSH connection to a server, but I have trouble figuring out how to add the NETCONF subsystem to the SSH connection. When doing it manually, the command line that establishes an SSH connection with the NETCONF subsystem is `ssh -p 4444 nerconf@myserver -s netconf`. How do I add the option `-s netconf` to the SSH connection using JSch? Does JSch support a subsystem for NETCONF?<issue_comment>username_1: JSch supports SSH subsystems in general but does not implement anything NETCONF specific (this is not necessary). All you need to do is to make the following calls (pseudo-code):

```java
com.jcraft.jsch.JSch ssh = new com.jcraft.jsch.JSch();
com.jcraft.jsch.Session session = ssh.getSession(username, host, port);
session.setUserInfo(myUserInfo); // authentication
session.connect(connectTimeout);

// this opens up the proper subsystem for NETCONF
com.jcraft.jsch.ChannelSubsystem subsystem = (com.jcraft.jsch.ChannelSubsystem) session.openChannel("subsystem");
subsystem.setSubsystem("netconf");

// at this point you may get your streams
subsystem.getInputStream();
subsystem.getErrStream();
subsystem.getOutputStream();

subsystem.connect();
```

For NETCONF, the only requirement that the subsystem has to fulfill is a proper subsystem name.

Upvotes: 2 <issue_comment>username_2: Thanks, username_1. This works for me; the NETCONF hello exchange is done.

```
Session session = new JSch().getSession("username", "remote-ip", netconfPort);
session.setPassword("<PASSWORD>");
session.setConfig("StrictHostKeyChecking", "no");
session.connect();

ChannelSubsystem channel = (ChannelSubsystem) session.openChannel("subsystem"); // necessary
channel.setSubsystem("netconf"); // necessary
channel.connect();

System.out.println(channel.isConnected()); // debug use
System.out.println(session.isConnected()); // debug use

InputStream inputStream = channel.getInputStream(); // use this to read
OutputStream outputStream = channel.getOutputStream();
PrintStream printStream = new PrintStream(outputStream); // use this to send
```

Upvotes: -1
2018/03/21
1,596
6,471
<issue_start>username_0: I have a Spring Boot application that requires the use of properties in files different from the standard application.properties. I wanted to map my properties files to a specific properties class and be able to use the properties class inside of a Configuration class. The code is the following:

```
@Configuration
@EnableConfigurationProperties(AWSProperties.class)
@PropertySource("kaizen-batch-common-aws.properties")
public class ResourceConfigAWS {

    @Autowired
    private AWSProperties awsProperties;

    @Autowired
    private ResourceLoader resourceLoader;

    private static final Logger logger = LoggerFactory.getLogger(ResourceConfigAWS.class);

    @Bean
    public AmazonS3 amazonS3Client() {
        logger.debug("AWS Config: " + this.awsProperties);
        BasicAWSCredentials awsCreds = new BasicAWSCredentials("access_key_id", "secret_key_id");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(awsCreds))
                .withRegion(awsProperties.getRegion())
                .build();
        return s3Client;
    }

    @Bean
    public SimpleStorageResourceLoader resourceLoader() {
        return new SimpleStorageResourceLoader(this.amazonS3Client());
    }

    @Bean
    @StepScope
    public Resource getResourceToProcess(@Value("#{jobParameters[T(com.kaizen.batch.common.JobRunnerTemplate).INPUT_FILE_PARAM_NAME]}") String inputFile) {
        return this.resourceLoader.getResource(this.awsProperties.getInputLocation() + inputFile);
    }

    @PostConstruct
    public void postConstruct() {
        System.out.print("Properties values: " + this.awsProperties);
    }

    @Bean
    public AbstractFileValidator inputFileValidator() {
        InputS3Validator inputS3Validator = new InputS3Validator();
        inputS3Validator.setRequiredKeys(new String[]{InputFileSystemValidator.INPUT_FILE});
        return inputS3Validator;
    }

    @Bean
    public InputFileFinalizerDelegate inputFileFinalizerDelegate() {
        InputFileFinalizerDelegate inputFileFinalizerDelegate = new AWSInputFileFinalizerDelegate();
        return inputFileFinalizerDelegate;
    }

    @Bean
    public InputFileInitializerDelegate inputFileInitializerDelegate() {
        InputFileInitializerDelegate inputFileInitializerDelegate = new AWSInputFileInitializerDelegate();
        return inputFileInitializerDelegate;
    }
}

@ConfigurationProperties("aws")
@Data
public class AWSProperties {

    private static final String SEPARATOR = "/";
    private static final String S3_PREFFIX = "s3://";

    @Value("${s3.bucket.batch}")
    private String bucket;
    @Value("${s3.bucket.batch.batch-job-folder}")
    private String rootFolder;
    @Value("${s3.bucket.batch.input}")
    private String inputFolder;
    @Value("${s3.bucket.batch.processed}")
    private String processedFolder;
    @Value("${region}")
    private String region;

    public String getInputLocation() {
        return this.getBasePath() + this.inputFolder + SEPARATOR;
    }

    public String getProcessedLocation() {
        return this.getBasePath() + this.processedFolder + SEPARATOR;
    }

    private String getBasePath() {
        return S3_PREFFIX + this.bucket + SEPARATOR + this.rootFolder + SEPARATOR;
    }
}
```

I am struggling to get `AWSProperties` populated with the values defined in the properties file; somehow I always end up with `awsProperties` being null. Any insight on how to map properties in Spring Boot into a properties class, without using Spring Boot's standard property-file naming conventions, would be very appreciated.
Here is the properties file:

```
aws.s3.bucket.batch=com-kaizen-batch-dev
aws.s3.bucket.batch.input=input
aws.s3.bucket.batch.processed=processed
aws.s3.bucket.batch.batch-job-folder=merchant-file
aws.region=us-east-2
```

Note: I modified the Configuration class slightly and now I am getting the following exception:

```
Caused by: org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'com.kaizen.batch.common.ResourceConfigAWS': Unsatisfied dependency expressed through field 'awsProperties'; nested exception is org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'aws-com.kaizen.batch.common.AWSProperties': Injection of autowired dependencies failed; nested exception is java.lang.IllegalArgumentException: Could not resolve placeholder 's3.bucket.batch' in value "${s3.bucket.batch}"
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor$AutowiredFieldElement.inject(AutowiredAnnotationBeanPostProcessor.java:588) ~[spring-beans-4.3.7.RELEASE.jar:4.3.7.RELEASE]
    at org.springframework.beans.factory.annotation.InjectionMetadata.inject(InjectionMetadata.java:88) ~[spring-beans-4.3.7.RELEASE.jar:4.3.7.RELEASE]
    at org.springframework.beans.factory.annotation.AutowiredAnnotationBeanPostProcessor.postProcessPropertyValues(AutowiredAnnotationBeanPostProcessor.java:366) ~[spring-beans-4.3.7.RELEASE.jar:4.3.7.RELEASE]
    at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.populateBean(AbstractAutowireCapableBeanFactory.java:1264) ~[spring-beans-4.3.7.RELEASE.jar:4.3.7.RELEASE]
```<issue_comment>username_1: Add the property file to the classpath or keep it in the resource folder, and update the `@PropertySource` this way:

```
@PropertySource("classpath:kaizen-batch-common-aws.properties")
```

Upvotes: 0 <issue_comment>username_2: `@ConfigurationProperties` is used for binding property file keys to POJO classes. You have to make the following changes for your code to work.

*kaizen-batch-common-aws.properties*

```
aws.s3.bucket.batch.bucket=com-kaizen-batch-dev
aws.s3.bucket.batch.root-folder=processed
aws.s3.bucket.batch.input-folder=input
aws.s3.bucket.batch.processed-folder=processed
aws.s3.bucket.batch.batch-job-folder=merchant-file
```

```
@Component
@ConfigurationProperties(prefix = "aws.s3.bucket.batch")
public class AWSProperties {

    private String bucket;
    private String rootFolder;
    private String inputFolder;
    private String processedFolder;
    private String region;

    // setters required
}
```

Please visit the [doc](https://docs.spring.io/spring-boot/docs/1.4.7.RELEASE/reference/htmlsingle/#boot-features-external-config-typesafe-configuration-properties) here

Upvotes: 2 [selected_answer]
2018/03/21
2,943
11,608
<issue_start>username_0: I am using [Node-Mongo-Native](https://github.com/mongodb/node-mongodb-native) and trying to set a global connection variable, but I am unsure which of two possible solutions to use. Can you guys help me out with which one would be the good one?

Solution 1 (which is bad because every request will try to create a new connection):

```
var express = require('express');
var app = express();
var MongoClient = require('mongodb').MongoClient;
var assert = require('assert');

// Connection URL
var url = '[connectionString]';

// start server on port 3000
app.listen(3000, '0.0.0.0', function() {
    // print a message when the server starts listening
    console.log("server starting");
});

// Use connect method to connect to the server when the page is requested
app.get('/', function(request, response) {
    MongoClient.connect(url, function(err, db) {
        assert.equal(null, err);
        db.listCollections({}).toArray(function(err, collections) {
            assert.equal(null, err);
            collections.forEach(function(collection) {
                console.log(collection);
            });
            db.close();
        })
        response.send('Connected - see console for a list of available collections');
    });
});
```

Solution 2 (connect at app init and assign the connection to a global variable), but I believe assigning the connection to a global variable is not a good idea:

```
var mongodb;
var url = '[connectionString]';
MongoClient.connect(url, function(err, db) {
    assert.equal(null, err);
    mongodb = db;
});
```

I want to create a connection at app initialization and use it throughout the app's lifetime. Can you guys help me out? Thanks.<issue_comment>username_1: Create a `Connection` singleton module to manage the apps database connection.

MongoClient does not provide a singleton connection pool so you don't want to call `MongoClient.connect()` repeatedly in your app. A singleton class to wrap the mongo client works for most apps I've seen.

```
const MongoClient = require('mongodb').MongoClient

class Connection {

    static async open() {
        if (this.db) return this.db
        this.db = await MongoClient.connect(this.url, this.options)
        return this.db
    }

}

Connection.db = null
Connection.url = 'mongodb://127.0.0.1:27017/test_db'
Connection.options = {
    bufferMaxEntries:   0,
    reconnectTries:     5000,
    useNewUrlParser:    true,
    useUnifiedTopology: true,
}

module.exports = { Connection }
```

Everywhere you `require('./Connection')`, the `Connection.open()` method will be available, as will the `Connection.db` property if it has been initialised.

```
const router = require('express').Router()
const { Connection } = require('../lib/Connection.js')

// This should go in the app/server setup, and waited for.
Connection.open()

router.get('/files', async (req, res) => {
    try {
        const files = await Connection.db.collection('files').find({})
        res.json({ files })
    }
    catch (error) {
        res.status(500).json({ error })
    }
})

module.exports = router
```

Upvotes: 6 [selected_answer]<issue_comment>username_2: **module version ^3.1.8**

Initialize the connection as a promise:

```
const MongoClient = require('mongodb').MongoClient
const uri = 'mongodb://...'
const client = new MongoClient(uri)
const connection = client.connect()
```

And then summon the connection whenever you wish to perform an action on the database:

```
app.post('/insert', (req, res) => {
    const connect = connection
    connect.then(() => {
        const doc = { id: 3 }
        const db = client.db('database_name')
        const coll = db.collection('collection_name')
        coll.insertOne(doc, (err, result) => {
            if(err) throw err
        })
    })
})
```

Upvotes: 1 <issue_comment>username_3: This is how I did it.

```
// custom class
const MongoClient = require('mongodb').MongoClient
const credentials = "mongodb://user:pass@mongo"

class MDBConnect {
    static connect (db, collection) {
        return MongoClient.connect(credentials)
            .then( client => {
                return client.db(db).collection(collection);
            })
            .catch( err => { console.log(err)});
    }

    static findOne(db, collection, query) {
        return MDBConnect.connect(db,collection)
            .then(c => {
                return c.findOne(query)
                    .then(result => {
                        return result;
                    });
            })
    }

    // create as many as you want
    //static find(db, collection, query)
    //static insert(db, collection, query)
    // etc etc etc
}

module.exports = MDBConnect;

// in the route file
var express = require('express');
var router = express.Router();
var ObjectId = require('mongodb').ObjectId;
var MDBConnect = require('../storage/MDBConnect');

// Usages
router.get('/q/:id', function(req, res, next) {
    let sceneId = req.params.id;

    // use case 1
    MDBConnect.connect('gameapp','scene')
        .then(c => {
            c.findOne({_id: ObjectId(sceneId)})
                .then(result => {
                    console.log("result: ",result);
                    res.json(result);
                })
        });

    // use case 2, with query
    MDBConnect.findOne('gameapp','scene',{_id: ObjectId(sceneId)})
        .then(result => {
            res.json(result);
        });
});
```

Upvotes: 2 <issue_comment>username_4: Another more straightforward method is to utilise Express's built-in feature to share data between routes and modules within your app. There is an object called app.locals. We can attach properties to it and access it from inside our routes. To use it, instantiate your mongo connection in your app.js file.

```js
var app = express();

MongoClient.connect('mongodb://localhost:27017/')
    .then(client =>{
        const db = client.db('your-db');
        const collection = db.collection('your-collection');
        app.locals.collection = collection;
    });

// view engine setup
app.set('views', path.join(__dirname, 'views'));
```

This database connection, or indeed any other data you wish to share around the modules of your app, can now be accessed within your routes with `req.app.locals` as below, without the need for creating and requiring additional modules.

```js
app.get('/', (req, res) => {
    const collection = req.app.locals.collection;
    collection.find({}).toArray()
        .then(response => res.status(200).json(response))
        .catch(error => console.error(error));
});
```

This method ensures that you have a database connection open for the duration of your app unless you choose to close it at any time. It's easily accessible with `req.app.locals.your-collection` and doesn't require creation of any additional modules.
Upvotes: 2 <issue_comment>username_5: In Express you can add the mongo connection like this

```
import {MongoClient} from 'mongodb';
import express from 'express';
import bodyParser from 'body-parser';

let mongoClient = null;
MongoClient.connect(config.mongoURL, {useNewUrlParser: true, useUnifiedTopology: true},function (err, client) {
    if(err) {
        console.log('Mongo connection error');
    } else {
        console.log('Connected to mongo DB');
        mongoClient = client;
    }
})

let app = express();
app.use(bodyParser.urlencoded({ extended: false }));
app.use(bodyParser.json());
app.use((req,res,next)=>{
    req.db = mongoClient.db('customer_support');
    next();
});
```

and later you can access it as req.db

```
router.post('/hello',async (req,res,next)=>{
    let uname = req.body.username;
    let userDetails = await getUserDetails(req.db,uname)
    res.statusCode = 200;
    res.data = userDetails;
    next();
});
```

Upvotes: 0 <issue_comment>username_6: I did a lot of research on this but couldn't find a solution that convinced me, so I developed my own.

```js
const {MongoClient} = require("mongodb");

class DB {
    static database;
    static client;

    static async setUp(url) {
        if(!this.client) {
            await this.setClient(url);
            await this.setConnection();
        }
        return this.database;
    }

    static async setConnection() {
        this.database = this.client.db("default");
    }

    static async setClient(url) {
        console.log("Connecting to database");
        const client = new MongoClient(url);
        await client.connect();
        this.client = client;
    }
}

module.exports = DB;
```

Usage:

```
const DB = require("./Path/to/DB");

(async () => {
    const database = await DB.setUp("mongodb://localhost:27017"); // pass your connection string here
    const users = await database.collection("users").findOne({ email: "" });
})();
```

Upvotes: 0 <issue_comment>username_7: Here is a version of [Matt's answer](https://stackoverflow.com/a/49400334) that lets you define `database` as well as `collection` when using the connection. Not sure if it is as 'water tight' as his solution, but it was too long for a comment. I removed `Connection.options` as they were giving me errors ([perhaps some options are deprecated?](https://stackoverflow.com/questions/68958221/mongoparseerror-options-usecreateindex-usefindandmodify-are-not-supported)).
**lib/Connection.js** ```js const MongoClient = require('mongodb').MongoClient; const { connection_string } = require('./environment_variables'); class Connection { static async open() { if (this.conn) return this.conn; this.conn = await MongoClient.connect(connection_string); return this.conn; } } Connection.conn = null; Connection.url = connection_string; module.exports = { Connection }; ``` **testRoute.js** ```js const express = require('express'); const router = express.Router(); const { Connection } = require('../lib/Connection.js'); Connection.open(); router.route('/').get(async (req, res) => { try { const query = { username: 'my name' }; const collection = Connection.conn.db('users').collection('users'); const result = await collection.findOne(query); res.json({ result: result }); } catch (error) { console.log(error); res.status(500).json({ error }); } }); module.exports = router; ``` If you would like to take the middleware out of the route file: **testRoute.js** becomes: ```js const express = require('express'); const router = express.Router(); const test_middleware_01 = require('../middleware/test_middleware_01'); router.route('/').get(test_middleware_01); module.exports = router; ``` And the middleware is defined in **middleware/test\_middleware\_01.js**: ```js const { Connection } = require('../lib/Connection.js'); Connection.open(); const test_middleware_01 = async (req, res) => { try { const query = { username: 'my name' }; const collection = Connection.conn.db('users').collection('users'); const result = await collection.findOne(query); res.json({ result: result }); } catch (error) { console.log(error); res.status(500).json({ error }); } }; module.exports = test_middleware_01; ``` Upvotes: 0
2018/03/21
1,258
2,982
<issue_start>username_0: I have a series that looks like this:

```
s = pd.Series(['abdhd','abadh','aba',
               'djjb','kjsdhf','abwer',
               'djd', 'kja'])
```

I need to select all rows whose strings begin with 'dh' or 'kj'.

I attempted to use .startswith() and .match(), but I get boolean returns of True and False instead of the values of the list. I tried this as part of a dictionary as well and got the same boolean returns and not the values themselves.

Is there something else I can do?<issue_comment>username_1: Try

```
s[(s.str.startswith('dh')) | (s.str.startswith('kj'))]
```

Explanation: `(s.str.startswith('dh')) | (s.str.startswith('kj'))` is the logical condition you care about, and then putting that inside of `s[]` slices the series by rows, returning only the rows where the condition is `True`

Upvotes: 3 [selected_answer]<issue_comment>username_2: ### [`pd.Series.str.contains`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.str.contains.html)

```
s[s.str.contains('^dh|^kj')]

4    kjsdhf
7       kja
dtype: object
```

(Each alternative is anchored with `^`, so that `kj` is not matched in the middle of a string.)

---

### [`pd.Series.isin`](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.isin.html)

```
s[s.str[:2].isin(['dh', 'kj'])]

4    kjsdhf
7       kja
dtype: object
```

---

### [`str.startswith`](https://docs.python.org/3/library/stdtypes.html#str.startswith) within a comprehension

```
s[[any(map(x.startswith, ['dh', 'kj'])) for x in s]]

4    kjsdhf
7       kja
dtype: object
```

---

### Time Tests

Functions

```
pir1 = lambda s: s[s.str.contains('^dh|^kj')]
pir2 = lambda s: s[s.str[:2].isin(['dh', 'kj'])]
pir3 = lambda s: s[[any(map(x.startswith, ['dh', 'kj'])) for x in s]]
alol = lambda s: s[(s.str.startswith('dh')) | (s.str.startswith('kj'))]
```

Testing

```
res = pd.DataFrame(
    np.nan,
    [10, 30, 100, 300, 1000, 3000, 10000, 30000],
    'pir1 pir2 pir3 alol'.split()
)

for i in res.index:
    s_ = pd.concat([s] * i)
    for j in res.columns:
        stmt = f'{j}(s_)'
        setp = f'from __main__ import {j}, s_'
        res.at[i, j] = timeit(stmt, setp, number=200)
```

Results

```
res.plot(loglog=True)
```

[![enter image description here](https://i.stack.imgur.com/130XV.png)](https://i.stack.imgur.com/130XV.png)

```
res.div(res.min(1), 0)

           pir1      pir2      pir3      alol
10     2.424637  3.272403  1.000000  4.747473
30     2.756702  2.812140  1.000000  4.446757
100    2.673724  2.190306  1.000000  3.128486
300    1.787894  1.000000  1.342434  1.997433
1000   2.164429  1.000000  1.788028  2.244033
3000   2.325746  1.000000  1.922993  2.227902
10000  2.424354  1.000000  2.042643  2.242508
30000  2.153505  1.000000  1.847457  1.935085
```

Conclusions

The only real winner (and only just barely) is `isin`, and it also happens to be the least general. You can only really extend its use so long as you are looking at just the first 2 characters. Other than that, the other methods all seem to perform with similar time complexity.

Upvotes: 2
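A side note on the `.match()` attempt mentioned in the question: `Series.str.match` already anchors the pattern at the start of each string, so it only appeared to fail because its boolean result still has to be used as an indexer. A minimal sketch, using the same series `s` as above:

```python
import pandas as pd

s = pd.Series(['abdhd', 'abadh', 'aba', 'djjb', 'kjsdhf', 'abwer', 'djd', 'kja'])

mask = s.str.match('dh|kj')  # boolean Series: True where the string starts with 'dh' or 'kj'
print(s[mask])               # 4    kjsdhf
                             # 7       kja
```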
2018/03/21
1,582
3,958
<issue_start>username_0: The following recursive method is intended to produce a Fibonacci number for a given integer (written in Java)

```
public static long fib(int n) {
    if (n == 0)
        return (long)0;
    else if (n == 1)
        return (long)1;
    else
        return fib(n - 1) + fib(n - 2);
}
```

But what I found is that it takes more than 20 seconds to produce a Fibonacci number at position 48 or higher. Can you help explain why this Fibonacci producer is so inefficient?

For example, here I attach a simple testing client:

```
public static void main(String[] args) {
    int hi = 50;
    System.out.println("Sequence, elapsed time, number");
    for (int n = 0; n <= hi; n++) {
        long start = System.currentTimeMillis();
        long fib_num = fib(n);
        long end = System.currentTimeMillis();
        long elapse = (end-start)/1000;
        System.out.printf("%d, %d, %d%n", n, elapse, fib_num);
    }
}
```

and its output is (running on an i7, 4-core MacBook Pro, 2017 model):

```
Sequence, elapsed time, number
0, 0, 0
1, 0, 1
2, 0, 1
3, 0, 2
4, 0, 3
5, 0, 5
6, 0, 8
7, 0, 13
8, 0, 21
9, 0, 34
10, 0, 55
11, 0, 89
12, 0, 144
13, 0, 233
14, 0, 377
15, 0, 610
16, 0, 987
17, 0, 1597
18, 0, 2584
19, 0, 4181
20, 0, 6765
21, 0, 10946
22, 0, 17711
23, 0, 28657
24, 0, 46368
25, 0, 75025
26, 0, 121393
27, 0, 196418
28, 0, 317811
29, 0, 514229
30, 0, 832040
31, 0, 1346269
32, 0, 2178309
33, 0, 3524578
34, 0, 5702887
35, 0, 9227465
36, 0, 14930352
37, 0, 24157817
38, 0, 39088169
39, 0, 63245986
40, 0, 102334155
41, 0, 165580141
42, 1, 267914296
43, 2, 433494437
44, 3, 701408733
45, 5, 1134903170
46, 8, 1836311903
47, 14, 2971215073
48, 22, 4807526976
49, 34, 7778742049
50, 58, 12586269025
```<issue_comment>username_1: *This is a classical recursion problem where dynamic programming comes to the rescue.*

What actually happens is that your code redoes calculations that have already been done, which costs extra machine cycles and processing time.

[![enter image description here](https://i.stack.imgur.com/Z89sE.png)](https://i.stack.imgur.com/Z89sE.png)

The picture above depicts how your program calculates the 5th Fibonacci number. We can see that fib(3) is called 2 times, fib(2) 3 times, and fib(1) 5 times, instead of each being calculated only once.

**Solution**
============

If we had stored the values of f(3), f(2), f(1), then instead of computing them again we could have reused the stored values.

From here you can do further research, and you can read more [here](https://www.geeksforgeeks.org/dynamic-programming-set-1/)

Upvotes: 2 <issue_comment>username_2: This is a classic example where you should be using memoization. You need to memoize already-calculated numbers. In simple words, you need to maintain a data structure, which can be an array in this case, that holds the Fibonacci number for each index.

A sample array will look like this (starting from index 0):

> 0, 1, 1, 2, 3, 5, 8, 13

So, whenever you need to calculate a Fibonacci number, first check the array to see whether that number has already been calculated. If yes, return it from the array. If not, calculate it and store it in the array.

Upvotes: 0 <issue_comment>username_3: This is primarily because this implementation of generating Fibonacci numbers is extremely inefficient. This particular algorithm grows exponentially instead of linearly, because each call of fib branches off into two more calls and those calls continue branching. Thus, increasing the size of N heavily increases the time required to complete.
A better approach would be to keep track of the previous values when computing the next value.

```
long fibonacci(int n) {
    long c = 0, k = 1;       // c = fib(i), k = fib(i + 1)
    for (int i = 0; i < n; i++) {
        k += c;              // advance the pair by one step
        c = k - c;
    }
    return c;
}
```

Upvotes: 3 [selected_answer]
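For reference, here is the memoization idea from the first two answers as runnable code. It is sketched in Python for brevity (the question is in Java, but the technique is language-agnostic): every computed value is cached after its first evaluation, so the exponential call tree collapses into a linear number of calls.

```python
from functools import lru_cache

@lru_cache(maxsize=None)      # caches each fib(n) after its first computation
def fib(n: int) -> int:
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025, returned instantly instead of the ~58 seconds above
```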
2018/03/21
638
2,471
<issue_start>username_0: I'm using the Spring framework for my Java project. When I start the module, sometimes the Spring beans are not correctly initialized: a bean will be null and the program will throw an exception (NullPointerException). Is there any way to make sure that during startup all the beans are initialized properly?
2018/03/21
679
2,585
<issue_start>username_0: In the Angular documentation I frequently found the term data-bound properties. I searched Google for its meaning and found [What is data-bound properties?](https://stackoverflow.com/questions/39367423/what-is-data-bound-properties)

It is not fully explained in the answer there; people are still asking questions under it, and the fact that an answer is accepted does not mean it is a correct answer. Can someone explain in more detail?
2018/03/21
299
1,077
<issue_start>username_0: I am new to JetBrains Rider and find it extremely handy for C# development. In our solution we have a certain type of exception that is thrown frequently. When I debug, I don't want the debugger to break on this type of exception, while I do want it to break on all other exceptions thrown. This can easily be achieved in Visual Studio.

I found a post here [Project Rider - Break on Exception](https://stackoverflow.com/questions/39150047/project-rider-break-on-exception/39155087#39155087)

It told me how to break on a certain type of exception, but there is still no clue how to break on all exceptions except a certain type.<issue_comment>username_1: You can add an exception breakpoint for all types, and then add an exception breakpoint for your certain type with the 'Suspend' option unchecked.

Upvotes: 3 [selected_answer]<issue_comment>username_2: JetBrains > Settings > Build, Execution, Deployment > Debugger

> Exceptions:
>
> [x] Break on user-unhandled exceptions (excluding Mono)
>
> Except:
>
> *-> + Add your Exception here <-*

Upvotes: 0
2018/03/21
551
1,999
<issue_start>username_0: I got 2 cases for a class, one is like this: ``` class User { private val name:Name = NameGenerator() fun sayName() { println(this.name.fakeName) } } ``` The other one is: ``` class User(suggestion:String) { private val name:Name = NameGenerator(suggestion) fun sayName() { println(this.name.fakeName) } } ``` Now you see the differences, it's all about that `name`, it has 2 different ways to initialize. How could I write a base class to DRY in this case? I end up with something like this: ``` abstract class BaseUser { protected val name:Name = MockName() fun sayName() { println(this.name.fakeName) } } ``` Then I can use it like: ``` class UserCaseOne():BaseUser() { } class UserCaseTwo(suggestion:String):BaseUser() { init { this.name = NameGenerator(suggestion) } } ``` This is pretty lame considering that I have to use a mock object to initialize. Are there better ways to do this? Without injecting the implementation? I don't want to force the `UserCaseTwo` to have a redundant constructor.<issue_comment>username_1: If your goal is essentially two combine your first two `User` classes into a single class, you could do this: ``` class User(suggestion: String? = null) { private val name: NameGenerator = suggestion?.let { NameGenerator(it) } ?: NameGenerator() fun sayName() { println(this.name.fakeName) } } ``` Upvotes: 1 <issue_comment>username_2: Would it be acceptable to make the name provisioning part of the constructor signature by taking a function as a parameter? The following approach enables the desired abstraction imho: ``` class User(private val nameProvider: () -> Name) { private val name: Name by lazy(nameProvider) fun sayName() { println(this.name.fakeName) } } ``` You can create `User`s with any `nameProvider` such as: ``` User { MockName() } User { NameGenerator("blah") } ``` Upvotes: 0
2018/03/21
1,220
4,625
<issue_start>username_0: I'm working on a RecyclerView, and I want to set an image on the center ImageView from the RecyclerView's selected item

[![enter image description here](https://i.stack.imgur.com/gaLkD.jpg)](https://i.stack.imgur.com/gaLkD.jpg)

> Error: You must pass in a non null View

Here I'm trying to set an image on the ImageView located in `activity_set_back.xml` from the `onClick` method of the RecyclerView. I have `moviesList` (an ArrayList) that holds all the URLs, and I'm getting the position on click; the only thing I need to do is set that image on `img_back` (the ImageView shown in this code).

MoviesAdapter.class

> this is my adapter

```
public class MoviesAdapter extends RecyclerView.Adapter<MoviesAdapter.MyViewHolder> {

    String string_url;
    String clicked_url;
    private ArrayList moviesList;

    public class MyViewHolder extends RecyclerView.ViewHolder implements View.OnClickListener {
        public ImageView img_backimage;
        public ImageView img_back;

        public MyViewHolder(View itemView) {
            super(itemView);
            img_backimage = (ImageView) itemView.findViewById(R.id.img_backimage);
            img_back = (ImageView) itemView.findViewById(R.id.img_edit_this);
            itemView.setOnClickListener(this);
        }

        @Override
        public void onClick(View view) {
            int position = getLayoutPosition(); // gets item position
            clicked_url = moviesList.get(position).getPath();
            Toast.makeText(view.getContext(), position + "", Toast.LENGTH_LONG).show();
            Glide.with(view.getContext()).load(clicked_url)
                    .diskCacheStrategy(DiskCacheStrategy.ALL)
                    .fitCenter()
                    .crossFade()
                    .into(img_back);
        }
    }

    public MoviesAdapter(ArrayList moviesList) {
        this.moviesList = moviesList;
    }

    @Override
    public MyViewHolder onCreateViewHolder(ViewGroup parent, int viewType) {
        LayoutInflater inflater = LayoutInflater.from(parent.getContext());
        View rootView = inflater.inflate(R.layout.image_list_row, parent, false);
        return new MyViewHolder(rootView);
    }

    @Override
    public void onBindViewHolder(final MyViewHolder holder, int position) {
        string_url = moviesList.get(position).getPath();
        Glide.with(holder.img_backimage.getContext()).load(string_url)
                .diskCacheStrategy(DiskCacheStrategy.ALL)
                .fitCenter()
                .crossFade()
                .into(holder.img_backimage);
    }

    @Override
    public int getItemCount() {
        return moviesList.size();
    }
}
```

> image_list_row.xml

```
<?xml version="1.0" encoding="utf-8"?>
```

> activity_set_back.xml

```
```<issue_comment>username_1: Don't use the `onClick` method of the ViewHolder to detect child clicks; instead set a click listener on the child view in the bind method, as in

```
@Override
public void onBindViewHolder(final MyViewHolder holder, int position) {
    string_url = moviesList.get(position).getPath();
    Glide.with(holder.img_backimage.getContext()).load(string_url)
            .diskCacheStrategy(DiskCacheStrategy.ALL)
            .fitCenter()
            .crossFade()
            .into(holder.img_backimage);

    holder.itemView.setOnClickListener(new View.OnClickListener() {
        @Override
        public void onClick(View view) {
            Glide.with(view.getContext()).load(string_url)
                    .diskCacheStrategy(DiskCacheStrategy.ALL)
                    .fitCenter()
                    .crossFade()
                    .into(img_back);
        }
    });
}
```

Upvotes: 0 <issue_comment>username_2: Here your ImageView `img_edit_this` is inside your activity and not inside the itemview. In order to access the ImageView in your activity from the adapter, you first need to pass a listener from your activity to the adapter during adapter initialisation. Then use this listener in your adapter to change the ImageView in the activity.
First create an interface like this

```
public interface RecyclerImageClick {
    void onCenterImageChange(String imagePath);
}
```

Make your activity implement this, for example

```
public class MembersList extends AppCompatActivity implements RecyclerImageClick
```

Now implement the method `onCenterImageChange(String imagePath)` in the activity.

Now, when initialising the adapter, pass this listener into it along with the arraylist, like this

```
data = new MoviesAdapter(moviesList, this);
```

Now change your adapter constructor like this

```
private RecyclerImageClick listener;

public MoviesAdapter(ArrayList moviesList, RecyclerImageClick listener) {
    this.moviesList = moviesList;
    this.listener = listener;
}
```

Now, inside the adapter's ImageView click listener, call `listener.onCenterImageChange(imagePath)`.

Now, inside your activity, declare your ImageView and set it to the image path passed from the adapter.

Upvotes: 3 [selected_answer]
2018/03/21
625
2,355
<issue_start>username_0: I'm fairly new to Git and Github and also gotten very confused at Git's vast array of command line. Today I push my commit like usual and create a pull request. But when my coworker tries to merge, it shows: > > This branch has conflicts that must be resolved > > > Use command line to resolve conflicts before continuing. > > > Conflicting files > > > .DS\_Store > > > I would like to resolve this using steps as simple as possible. I used subversion SVN in the past, and it has a very easy solution: "use 'theirs' version" or "use 'mine' version". I would like to be able to right click somewhere, and choose "use 'theirs' version" on this `.DS_Store` file, and don't care with whatever its contents is. All I want is for the conflict to go away. I don't know where I can find that feature in Github. I've tried to browse Git command lines for this, but I don't know exactly which one to use. The problem is, this is pull request merge conflict, which 'happened' at the server. My commit that has the .DS\_Store is the 2nd last commit. The last commit on my branch is a "rebase" from the master branch. So if I want to revert to my commit, I will have to revert the rebase commit as well, and I don't know what's the side effect of it (won't I be able to do a "rebase" again because I have already do a "rebase"? I don't know). I can't solve this at my or my coworker's computer because I can't pull the conflict from the master branch from the server to my local computer. How should I fix this? Thanks.<issue_comment>username_1: I had a problem with pull request. I found this answer on the Internet and it helped me. It implies a simple deletion of this file from the computer and a new commit. ``` find . -name .DS_Store -print0 | xargs -0 git rm -f --ignore-unmatch git add .gitignore git commit -m '.DS_Store banished!' ``` Upvotes: 1 <issue_comment>username_2: If you are using macOS, your system appends the .DS\_Store file in your directories. It’s not a big issue, but often you need to exclude these files explicitly in your .gitignore file, to prevent any unnecessary files in your commit. You may want to ignore this file globally, so next time you don’t need to add it to your project’s ignore list. `git config --global core.excludesfile ~/.gitignore` `echo .DS_Store >> ~/.gitignore` Upvotes: 0
2018/03/21
1,010
3,660
<issue_start>username_0: How do I disable the second set of radio buttons if the radio button on the first set is set to Yes? And enable them if set to No?

First set:

```
<input type="radio" name="xfirstset" id="firstsetno" value="0" checked> No
<input type="radio" name="xfirstset" id="firstsetyes" value="1"> Yes
```

Second set:

```
<input type="radio" name="xsecondset" id="secondsetno" value="0" checked> No
<input type="radio" name="xsecondset" id="secondsetyes" value="1"> Yes
```<issue_comment>username_1: ```js
// Onload disable
$.each($("input[name='xsecondset']"), function(index, radio){
    $(radio).attr("disabled", "disabled");
});

$("input[name='xfirstset']").click(function(){
    $.each($("input[name='xfirstset']"), function(index, radio){
        if(radio.checked && radio.value === "0"){
            $("#secondsetno")[0].checked = true; //reset to default
            $.each($("input[name='xsecondset']"), function(index, radio){
                $(radio).attr("disabled", "disabled");
            });
        }
        if(radio.checked && radio.value === "1"){
            $.each($("input[name='xsecondset']"), function(index, radio){
                $(radio).removeAttr("disabled");
            });
        }
    });
});
```

```html
<input type="radio" name="xfirstset" id="firstsetno" value="0" checked> No
<input type="radio" name="xfirstset" id="firstsetyes" value="1"> Yes
<input type="radio" name="xsecondset" id="secondsetno" value="0" checked> No
<input type="radio" name="xsecondset" id="secondsetyes" value="1"> Yes
```

Upvotes: 0 <issue_comment>username_2: I know Jain answered this already, but I figured I'd provide a solution that's slightly easier to read. I used jQuery (as did Jain), but you could accomplish the same thing using vanilla JavaScript if you prefer.

```
//Fire when the status of the radio buttons in #xfirstset changes
$("input[type=radio][name='xfirstset']").change(function() {
  // Get the radio buttons in the second set
  var secondSet = $("input[type=radio][name='xsecondset']");
  for (var i = 0; i < secondSet.length; i++){
    console.log(secondSet[i]);
  }
  if( $('#firstsetyes').prop('checked') ) {
    // Loop through the second set and disable the buttons
    for (var i = 0; i < secondSet.length; i++){
      secondSet[i].disabled = true;
    }
  } else {
    for (var i = 0; i < secondSet.length; i++){
      secondSet[i].disabled = false;
    }
  }
});
```

Here's also a link to the CodePen where I wrote and tested the code: <https://codepen.io/username_2/pen/OvpzYg?editors=1011>

---

**Update**

To make the second set default to 'no', you just need `checked` on the input tag corresponding to the answer 'no' (like you had in your code; I just took it out while I was testing. I've now updated my pen in CodePen to have it.)

```
<input type="radio" name="xfirstset" id="firstsetno" value="0" checked> No
<input type="radio" name="xfirstset" id="firstsetyes" value="1"> Yes

Second set:

<input type="radio" name="xsecondset" id="secondsetno" value="0" checked> No
<input type="radio" name="xsecondset" id="secondsetyes" value="1"> Yes
```

---

**Update 2**

If you'd like to just disable the 'no' button in the second set, instead of looping through all the buttons in the second set, you simply target the no button.

```
$("input[type=radio][name='xfirstset']").change(function() {
  if( $('#firstsetyes').prop('checked') ) {
    $('#secondsetno').prop('disabled', true);
  } else {
    $('#secondsetno').prop('disabled', false);
  }
});
```

Upvotes: 2 [selected_answer]
2018/03/21
6,419
12,212
<issue_start>username_0: I understand that Numpy can generate the indices of an array for a given value that we are looking for, with `numpy.where`. My question: is there a function that can generate indices **given multiple values**?

For example, with this array

```
a = np.array([1.,0.,0.,0.,1.,1.,0.,0.,0.,0.,...,1.,1.])
```

It would be great if I could just specify 4 consecutive zeros and have the function tell me where they are; then I could replace those values right away with another value. I have a function that can recognize the pattern, but it is not efficient. Any pointers would be very helpful.<issue_comment>username_1: This should work and be quite efficient:

```
a = np.array([1,0,0,0,1,1,0,0,0,0,1,1,0,0,0,0])
a0 = a==0

np.where(a0[:-3] & a0[1:-2] & a0[2:-1] & a0[3:])
(array([6, 12], dtype=int64),)   # indices of the first of 4 consecutive 0's
```

Upvotes: 0 <issue_comment>username_2: Here are three different approaches.

Approach 1: linear correlation

```
import numpy as np

def pattwhere_corr(pattern, a):
    pattern, a = map(np.asanyarray, (pattern, a))
    k = len(pattern)
    if k>len(a):
        return np.empty([0], int)
    n = np.dot(pattern, pattern)
    a2 = a*a
    slf = a2[:k].sum() + np.r_[0, np.cumsum(a2[k:] - a2[:-k])]
    crs = np.correlate(a, pattern, 'valid')
    return np.flatnonzero(np.isclose(slf, n) & np.isclose(crs, n))
```

Approach 2: whittle down element by element

```
def pattwhere_sequ(pattern, a):
    pattern, a = map(np.asanyarray, (pattern, a))
    k = len(pattern)
    if k>len(a):
        return np.empty([0], int)
    hits = np.flatnonzero(a == pattern[-1])
    for p in pattern[-2::-1]:
        hits -= 1
        hits = hits[a[hits] == p]
    return hits
```

Approach 3: brute force

```
def pattwhere_direct(pattern, a):
    pattern, a = map(np.asanyarray, (pattern, a))
    k, n = map(len, (pattern, a))
    if k>n:
        return np.empty([0], int)
    astr = np.lib.stride_tricks.as_strided(a, (n-k+1, k), 2*a.strides)
    return np.flatnonzero((astr == pattern).all(axis=1))
```

Some testing:

```
k, n, p = 4, 100, 5
pattern, a = np.random.randint(0, p, (k,)), np.random.randint(0, p, (n,))
print('results consistent:',
      np.all(pattwhere_sequ(pattern, a) == pattwhere_corr(pattern, a))
      & np.all(pattwhere_sequ(pattern, a) == pattwhere_direct(pattern, a)))

from timeit import timeit
for k, n, p in [(4, 100, 5), (10, 1000000, 4), (1000, 10000, 3)]:
    print('k, n, p = ', k, n, p)
    pattern, a = np.random.randint(0, p, (k,)), np.random.randint(0, p, (n,))
    glb = {'pattern': pattern, 'a': a}
    kwds = {'number': 1000, 'globals': glb}
    for name, glb['func'] in list(locals().items()):
        if not name.startswith('pattwhere'):
            continue
        print(name.replace('pattwhere_', '').ljust(8), '{:8.6f} ms'.format(
            timeit('func(pattern, a)', **kwds)))
```

Sample output. Note that these benchmarks are for a scenario where the pattern occurs with random frequency. Results may change if it is, for example, overrepresented.

```
results consistent: True
k, n, p =  4 100 5
corr     0.090752 ms
sequ     0.015759 ms
direct   0.023338 ms
k, n, p =  10 1000000 4
corr     39.290270 ms
sequ     8.182161 ms
direct   34.399724 ms
k, n, p =  1000 10000 3
corr     6.319400 ms
sequ     2.225807 ms
direct   9.001689 ms
```

Upvotes: 0 <issue_comment>username_3: Try this code! It will return the ranges of consecutive zero locations in the numpy array, so you can replace the zeros within any of these ranges with another integer value.
**Code :** ``` import numpy as np from itertools import groupby a = np.array([1.,0.,0.,0.,1.,1.,0.,0.,0.,0.,1.,1.]) b = range(len(a)) for group in groupby(iter(b), lambda x: a[x]): if group[0]==0: lis=list(group[1]) print([min(lis),max(lis)]) ``` **Output :** ``` [1, 3] [6, 9] ``` Upvotes: 0 <issue_comment>username_4: Seems like I give this answer once a week or so. Fastest and most memory-efficient way is to use `void` views over `as_strided` views ``` def rolling_window(a, window): #based on @senderle's answer: https://stackoverflow.com/q/7100242/2901002 shape = a.shape[:-1] + (a.shape[-1] - window + 1, window) strides = a.strides + (a.strides[-1],) c = np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides) return c def vview(a): #based on @jaime's answer: https://stackoverflow.com/a/16973510/4427777 return np.ascontiguousarray(a).view(np.dtype((np.void, a.dtype.itemsize * a.shape[1]))) def pattwhere_void(pattern, a): # Using @PaulPanzer's template form above k, n = map(len, (pattern, a)) pattern = np.atleast_2d(pattern) a = np.asanyarray(a) if k>n: return np.empty([0], int) return np.flatnonzero(np.in1d(vview(rolling_window(a, k)), vview(pattern))) ``` Upvotes: 1 <issue_comment>username_5: Not the fastest method, but you can generate a robust solution using `scipy` for n-dimensional arrays and n-dimensional patterns. ``` import scipy from scipy.ndimage import label #================= # Helper functions #================= # Nested list to nested tuple helper function # from https://stackoverflow.com/questions/27049998/convert-a-mixed-nested-list-to-a-nested-tuple def to_tuple(L): return tuple(to_tuple(i) if isinstance(i, list) else i for i in L) # Helper function to convert array to set of tuples def arr2set(arr): return set(to_tuple(arr.tolist())) #=============== # Main algorithm #=============== # First pass: filter for exact matches a1 = scipy.zeros_like(a, dtype=bool) freq_dict = {} notnan = ~scipy.isnan(pattern) for i in scipy.unique(pattern[notnan]): a1 = a1 + (a == i) freq_dict[i] = (pattern == i).sum() # Minimise amount of pattern checking by choosing least frequent occurrence check_val = freq_dict.keys()[scipy.argmin(freq_dict.values())] # Get set of indices of pattern pattern_inds = scipy.transpose(scipy.nonzero(scipy.ones_like(pattern)*notnan)) check_ind = scipy.transpose(scipy.nonzero(pattern == check_val))[0] pattern_inds = pattern_inds - check_ind pattern_inds_set = arr2set(pattern_inds) # Label different regions found in first pass which may contains pattern label_arr, n = label(a1) found_inds_list = [] pattern_size = len(pattern_inds) for i in range(1, n+1): arr_inds = scipy.transpose(scipy.nonzero(label_arr == i)) bbox_inds = [ind for ind in arr_inds if a[tuple(ind)] == check_val] for ind in bbox_inds: check_inds_set = arr2set(arr_inds - ind) if len(pattern_inds_set - check_inds_set) == 0: found_inds_list.append(tuple(scipy.transpose(pattern_inds + ind))) # Replace values for inds in found_inds_list: a[inds] = replace_value ``` Generate a random test array, pattern, and final replace value for a 4D case ``` replace_value = scipy.random.rand() # Final value that you want to replace everything with nan = scipy.nan # Use this for places in the rectangular pattern array that you don't care about checking # Generate random data a = scipy.random.random([12,12,12,12])*12 pattern = scipy.random.random([3,3,3,3])*12 # Put the pattern in random places for i in range(4): j1, j2, j3, j4 = scipy.random.choice(xrange(10), 4, replace=True) a[j1:j1+3, j2:j2+3, j3:j3+3, 
j4:j4+3] = pattern a_org = scipy.copy(a) # Randomly insert nans in the pattern for i in range(20): j1, j2, j3, j4 = scipy.random.choice(xrange(3), 4, replace=True) pattern[j1, j2, j3, j4] = nan ``` After running the main algorithm... ``` >>> print found_inds_list[-1] (array([ 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11, 11], dtype=int64), array([1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3], dtype=int64), array([5, 5, 5, 6, 6, 6, 7, 7, 5, 5, 6, 6, 7, 7, 7, 5, 5, 6, 6, 7, 5, 5, 5, 6, 6, 7, 7, 5, 5, 6, 7, 7, 7, 5, 5, 5, 6, 6, 6, 7, 7, 7, 5, 5, 5, 6, 6, 7, 5, 5, 5, 6, 6, 6, 7, 5, 5, 6, 6, 6, 7, 7, 7], dtype=int64), array([1, 2, 3, 1, 2, 3, 1, 3, 1, 3, 1, 3, 1, 2, 3, 2, 3, 1, 2, 2, 1, 2, 3, 1, 2, 1, 3, 1, 2, 2, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 3, 1, 1, 2, 3, 1, 2, 3, 2, 1, 3, 1, 2, 3, 1, 2, 3], dtype=int64)) >>> >>> replace_value # Display value that's going to be replaced 0.9263912485289564 >>> >>> print a_org[9:12, 1:4, 5:8, 1:4] # Display original rectangular window of replacement [[[[ 9.68507479 1.77585089 5.06069382] [10.63768984 11.41148096 1.13120712] [ 6.83684611 2.46838238 11.40490158]] [[ 9.17344668 11.21669704 7.60737639] [ 3.14870787 6.22857282 5.61295454] [ 4.32709261 8.00493326 9.96124294]] [[ 4.16785078 10.66054365 2.95677408] [11.53789218 2.70725911 11.98647139] [ 5.00346525 4.75230895 4.05213149]]] [[[11.23856096 8.45979355 7.53268864] [ 6.14703327 11.90052117 5.48127994] [ 2.16777734 10.27373562 7.75420214]] [[10.04726853 11.44895046 7.78071007] [ 0.79030038 3.69735083 1.51921116] [11.29782542 2.58494314 9.8714708 ]] [[ 7.9356587 1.48053473 9.71362122] [ 5.11866341 3.43895455 6.86491947] [ 8.33774813 5.66923131 2.27884056]]] [[[ 0.75091443 2.02917445 5.68207987] [ 4.58299978 7.14960394 9.13853129] [10.60912932 4.52190424 0.6557605 ]] [[ 0.54393627 8.02341744 11.69489975] [ 9.09878676 10.60836714 2.41188805] [ 9.13098333 6.12284334 8.9349382 ]] [[ 5.84489355 10.19848245 1.65080169] [ 2.75161562 1.05154552 0.17804374] [ 3.3166642 10.74081484 5.13601563]]]] >>> >>> print a[9:12, 1:4, 5:8, 1:4] # Same window in the replaced array [[[[ 0.92639125 0.92639125 0.92639125] [ 0.92639125 0.92639125 0.92639125] [ 0.92639125 2.46838238 0.92639125]] [[ 0.92639125 11.21669704 0.92639125] [ 0.92639125 6.22857282 0.92639125] [ 0.92639125 0.92639125 0.92639125]] [[ 4.16785078 0.92639125 0.92639125] [ 0.92639125 0.92639125 11.98647139] [ 5.00346525 0.92639125 4.05213149]]] [[[ 0.92639125 0.92639125 0.92639125] [ 0.92639125 0.92639125 5.48127994] [ 0.92639125 10.27373562 0.92639125]] [[ 0.92639125 0.92639125 7.78071007] [ 0.79030038 0.92639125 1.51921116] [ 0.92639125 0.92639125 0.92639125]] [[ 0.92639125 0.92639125 0.92639125] [ 0.92639125 0.92639125 0.92639125] [ 0.92639125 0.92639125 0.92639125]]] [[[ 0.92639125 0.92639125 0.92639125] [ 0.92639125 7.14960394 0.92639125] [ 0.92639125 4.52190424 0.6557605 ]] [[ 0.92639125 0.92639125 0.92639125] [ 0.92639125 0.92639125 0.92639125] [ 9.13098333 0.92639125 8.9349382 ]] [[ 0.92639125 10.19848245 0.92639125] [ 0.92639125 0.92639125 0.92639125] [ 0.92639125 0.92639125 0.92639125]]]] >>> >>> print pattern # The pattern that was matched and replaced [[[[ 9.68507479 1.77585089 5.06069382] [10.63768984 11.41148096 
1.13120712] [ 6.83684611 nan 11.40490158]] [[ 9.17344668 nan 7.60737639] [ 3.14870787 nan 5.61295454] [ 4.32709261 8.00493326 9.96124294]] [[ nan 10.66054365 2.95677408] [11.53789218 2.70725911 nan] [ nan 4.75230895 nan]]] [[[11.23856096 8.45979355 7.53268864] [ 6.14703327 11.90052117 nan] [ 2.16777734 nan 7.75420214]] [[10.04726853 11.44895046 nan] [ nan 3.69735083 nan] [11.29782542 2.58494314 9.8714708 ]] [[ 7.9356587 1.48053473 9.71362122] [ 5.11866341 3.43895455 6.86491947] [ 8.33774813 5.66923131 2.27884056]]] [[[ 0.75091443 2.02917445 5.68207987] [ 4.58299978 nan 9.13853129] [10.60912932 nan nan]] [[ 0.54393627 8.02341744 11.69489975] [ 9.09878676 10.60836714 2.41188805] [ nan 6.12284334 nan]] [[ 5.84489355 nan 1.65080169] [ 2.75161562 1.05154552 0.17804374] [ 3.3166642 10.74081484 5.13601563]]]] ``` Upvotes: 0
2018/03/21
633
2,427
<issue_start>username_0: For example, I log in and select `en` as my language, then I select `zh` (Chinese), and then log out. Is there any way to retain the localization after logging out of the app?

This is the way I implement the locale.

web.php:

`Route::post('change-locale', 'LocaleController@changeLocale')->name('change.locale');`

LocaleController:

```
class LocaleController extends Controller
{
    /**
     * @param Request $request
     */
    public function changeLocale(Request $request)
    {
        $this->validate($request, ['locale' => 'required|in:' . implode(',', config('app.available_locales'))]);

        Session::put('locale', $request->input('locale'));

        return redirect()->back();
    }
}
```

Logout:

```
public function logout(Request $request)
{
    $this->guard()->logout();

    $locale = session('locale');
    $locale = Session::put('locale', $locale);
    Session::flush();
    //$request->session()->invalidate();
    //$request->session()->put('locale',$locale);

    return redirect('/login');
}
```

Locale Setting:

```
public function handle($request, Closure $next)
{
    if (Session::has('locale')) {
        $locale = Session::get('locale', Config::get('app.locale'));
    } else {
        $locale = substr($request->server('HTTP_ACCEPT_LANGUAGE'), 0, 2);

        if (!in_array($locale, Config::get('app.available_locales'))) {
            $locale = 'en';
        }
    }
    App::setLocale($locale);

    return $next($request);
}
```<issue_comment>username_1: If you put the value into the session, you can get it back like this:

```
$locale = Session::get('locale');
Session::flush();
Session::put('locale', $locale);
```

What I do is use the session() global helper to get session values; you can access the value anywhere.

Upvotes: 3 [selected_answer]<issue_comment>username_2: I do it this way in Laravel 6:

Add to /app/Http/Controllers/Auth/LoginController.php these two functions:

```
public function logout(Request $request)
{
    $locale = Session::get('locale');
    $this->guard()->logout();

    $request->session()->invalidate();

    return $this->loggedOut($request, $locale) ?: redirect('/');
}

protected function loggedOut(Request $request, $locale)
{
    Session::put('locale',$locale);
}
```

and these uses:

```
use Illuminate\Http\Request;
use Session;
```

Upvotes: 2
2018/03/21
470
1,434
<issue_start>username_0: So I try to make a map and then choose its begin() as the first iterator position. Then I try to use it in a range-based for loop. Have a look at this code here:

```
long int lilysHomework(map& a, int n, map& p) {
    long int c = 0;
    map::iterator pos;
    pos = p.begin();
    for (auto& elem : a) {
        if ((pos->second) != (elem->first)) //fix this part
            c++;
        pos++;
    }
    c--;
    cout << c;
    return c;
}
```

but on line 6 I get an error saying the base operand of `->` has non-pointer type. I am aware that an iterator essentially yields an address, so it should have been possible to dereference it. Can someone please help me out?<issue_comment>username_1: ```
for(auto& elem : a)
```

does not make `elem` an iterator. It makes `elem` a `std::map::value_type`. Use

```
if ( (pos->second) != (elem.first) ) { ... }
```

Upvotes: 2 <issue_comment>username_2: What you use here is a range-based `for` (since C++11); see [range-based for](http://en.cppreference.com/w/cpp/language/range-for). It operates directly on the `value_type` of the `map`, which is a `std::pair<const Key, T>`, not on a `std::map::iterator`.

In your case, you can use a **structured binding declaration** since C++17:

```
for (auto&& [first, second] : mymap) {
    // use first and second
}
```

Or just use

```
for (auto it = mymap.begin(); it != mymap.end(); ++it) {
    /* do something */
}
```

Here you can use `it` as an iterator and use the operator `->`.

Upvotes: 2 [selected_answer]
2018/03/21
1,008
3,967
<issue_start>username_0: I'm very new at Java and I have been trying to figure this out, the assignment is to let the user put in an undefined amount of integers into the program and then the program should be able to dived and then print out the # of integers into each category. The categories are Integers that are over 100, under 100, equals 100 and the total amount of integers. When the user enters a negative # the program should end and print out the before mentioned categories. The problem I'm getting is #1 Not all of the users input is added correctly #2 the while loop doesn't stop at a negative # but stops whenever it feels like it, for example after 20 different #. I don't really know what I'm doing wrong. Please help me. This is the code: ``` public static void main(String[] args) { int high = 0; int low = 0; int hundred = 0; int total = 0; Scanner scan = new Scanner(System.in); System.out.println("Please enter values, put a negative # to quit:"); while (scan.nextInt() > -1) { if (scan.nextInt() > 100) { high++; total++; } else if (scan.nextInt() < 100) { low++; total++; } else if (scan.nextInt() == 100) { hundred++; total++; } else { break; } } System.out.println("Amount of # over 100: " + high); System.out.println("Amount of # under 100: " + low); System.out.println("Amount of # that equals 100: " + hundred); System.out.println("Total amount of # : " + total); ```<issue_comment>username_1: [`scan.nextInt()`](https://docs.oracle.com/javase/7/docs/api/java/util/Scanner.html#nextInt()) - It scans the next token of the input as in int i.e, each call to nextInt gets an input integer from the console. So, you need to have exactly one `nextInt()` for one iteration of the loop. You shouldn't be having one for each of the `if` blocks. Upvotes: 1 <issue_comment>username_2: Each time you call `nextInt()` the `int` returned is removed from the buffer – subsequent invocations of `nextInt()` won't return it. Your code checks the result directly when you should be saving it to a variable and checking that. This makes the program seem erratic, since not every input is subject to all the checks you put in. Also notice that the `break` is unnecessary – you have already covered all the possibilities. ``` int input; while ((input = scan.nextInt()) > -1) { if (input > 100) { high++; total++; } else if (input < 100) { low++; total++; } else if (input == 100) { hundred++; total++; } } ``` Upvotes: 0 <issue_comment>username_3: Actully you are getting input in every condition check, i think it was wrong in your code. Try this.... ``` public static void main(String[] args){ int high = 0; int low = 0; int hundred = 0; int total = 0; Scanner scan = new Scanner(System.in); System.out.println("Please enter values, put a negative # to quit:"); int number=0; while ((number=scan.nextInt()) > -1) { if (number> 100) { high++; total++; } else if (number < 100) { low++; total++; } else if (number == 100) { hundred++; total++; } else { break; } } System.out.println("Amount of # over 100: " + high); System.out.println("Amount of # under 100: " + low); System.out.println("Amount of # that equals 100: " + hundred); System.out.println("Total amount of # : " + total); ``` Upvotes: 2
2018/03/21
1,636
5,690
<issue_start>username_0: I'm working on an application in Python that takes a list provided by a program and attempts to process the text for closed captioning. I’m using IBM Watson to transcribe audio files, then return a JSON with the transcribed words and a timestamp that notes the start time and end time of each word. Here is a small example of what that data looks like. (NOTE: I've simplified the JSON response so that I only highlight the portion of data applicable to this question.)

```
section = [
    ['for', 5.77, 5.92], ['the', 5.92, 6.03], ['longest', 6.03, 6.53],
    ['time', 6.53, 7.01], ['only', 7.23, 7.56], ['a', 7.56, 7.64],
    ['handful', 7.64, 8.2], ['of', 8.2, 8.3], ['people', 8.3, 8.72],
    ['would', 8.72, 8.88], ['know', 8.88, 9.01], ['the', 9.01, 9.14],
    ['data', 9.14, 9.56], ['that', 9.59, 9.73], ['was', 9.73, 9.84],
    ['collected', 9.84, 10.39], ['by', 10.39, 10.55], ['a', 10.55, 10.63],
    ['specific', 10.63, 11.18], ['application', 11.18, 11.91]
]
```

I’m only interested in the words (0-index) and the start time (1-index) of each list in 'section'. For closed captioning, my goal is to capture a collection of words for every 2.5 seconds and only mark the time stamp of the first word within that set. So, in the example provided above, the first index provided would be my “zero-marker” and every word that followed within a 2.5 second time frame would be collected into a phrase. Any data afterwards would follow the same logic – for all data, group words that exist within 2.5 seconds of each other and mark the time stamp of the first word in the set. However, since I cannot predict the duration of the files nor how Watson will transcribe them – I’m having difficulty figuring out the best way to handle identifying groups of words by the 2.5 second requirement. Here’s what I’ve written:

```
# use the tag variable to identify the start time of the
# first word outside of 2.5 seconds
tag = 0
# use the first index's start time as the benchmark for 2.5 second duration
benchmark = section[0][1]
for i in range(len(section)):
    if abs(benchmark - section[i][1]) < 2.5:
        # do stuff
        foo(bar)
        # update tag variable to identify first start time
        # for word outside of 2.5 seconds. This will
        # continue to update until the if statement is no longer true.
        if (i + 1) < len(section):
            tag = section[i + 1][1]
    else:
        # use tag to create new benchmark
        benchmark = tag
        if abs(benchmark - section[i][1]) < 2.5:
            # do stuff
```

Where I'm struggling is that I would have to keep writing the function this way for however long the transcription is -- what seems like an endless series of potential if statements. In other words, I would still have to handle what to do with words that didn't fall within the first 2.5 seconds, the second set of 2.5 seconds, so on and so forth. I feel like there should be a more elegant and efficient way of doing this. My goal would be to ultimately process the text so that it looked similar to what I've listed below, but would work no matter how long the list / time frame was:

```
['for the longest time only a handful of', 5.77],
['people would know the data that was collected by a specific', 8.3],
['application', 11.18]
```

Any assistance, guidance, advice, etc. would be greatly appreciated. Thanks!
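A minimal sketch of the grouping described in the question — assuming `section` is the `[word, start, end]` list shown above; the `group_words` and `window` names are illustrative, not from the original code — keeps one running benchmark and starts a new phrase whenever a word's start time falls outside the current 2.5-second window, so no per-window if statements are needed:

```python
def group_words(section, window=2.5):
    """Group words into phrases; each phrase is tagged with the start
    time of its first word, and every word in a phrase starts within
    `window` seconds of that first word."""
    phrases = []        # collected [phrase_text, start_time] pairs
    words = []          # words of the phrase currently being built
    benchmark = None    # start time of the current phrase's first word

    for word, start, _end in section:
        if benchmark is None or start - benchmark >= window:
            if words:   # close out the phrase in progress
                phrases.append([' '.join(words), benchmark])
            benchmark = start
            words = [word]
        else:
            words.append(word)

    if words:           # flush the final phrase
        phrases.append([' '.join(words), benchmark])
    return phrases
```

On the `section` data above, `group_words(section)` yields the three phrases in the desired output, and it works for any transcription length because the benchmark is reset from the data itself rather than hard-coded.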
2018/03/21
761
2,852
<issue_start>username_0: I'm trying to import my Excel file into my database, but the problem is that I get nothing in my array of inserts. When I try to `dd($inserts)`, it only shows me `[]`.

```
public function importExcel()
{
    $path = Input::file('import_file')->getRealPath();
    $inserts = [];
    Excel::load($path, function($reader) use ($inserts) {
        foreach ($reader->toArray() as $rows) { // <-- $rows pertains to array of rows
            foreach($rows as $row) { // <-- $row pertains to the row itself
                $inserts[] = ['title' => $row['title'], 'description' => $row['description']];
            }
        }
    });
    dd($inserts);
    return back();
}
```
2018/03/21
859
2,891
<issue_start>username_0: I have a case class that represents 3D vectors and I'm trying to use traits to mark each one with reference frames relevant to each problem domain. More specifically, I'm trying to do something like this:

```
trait bFrame
type bVector = Vector with bFrame

/** Inertial position of the point represented in b+ */
def p_b:bVector = Vector(x + r * sP, y - cP*sR*r, z - cP*cR*r) with bFrame
```

The expressions in the constructor evaluate to doubles and everything works fine before I try this trait trick. I've read that you can apply traits to instances of classes and not just to classes themselves, but it doesn't seem to work here. The error I get is "';' expected but 'with' found." I want to use the type system to check reference frames without having to modify the original class. Is there a way to make this work?<issue_comment>username_1: I think I figured it out. It's not clear why, but for some reason Scala (as of 2.12.2) doesn't let you use the "apply" way of constructing case classes here; I have to add "new" to make it work. Also, I should have been more clear originally that Vector is a case class that represents a vector in the mathematical sense, not the Scala collection. I changed it to Vector3D here to make that clearer. Also, the 2.12.2 compiler says that the line that prints "Vector in the b-Frame" is unreachable, but when I run it, that line gets executed (i.e. the output is what you'd expect). Maybe this is a bug in the compiler. I'll try it with a more recent version of Scala and see.

```
object Test extends App {
  case class Vector3D(x: Double, y: Double, z: Double)

  trait bFrame
  trait nFrame

  type bVector3D = Vector3D with bFrame
  type nVector3D = Vector3D with nFrame

  val p_b: bVector3D = new Vector3D(1.0, 2.0, 3.0) with bFrame // Works
  //val p_b: bVector3D = Vector3D(1.0, 2.0, 3.0) with bFrame   // Doesn't work

  p_b match {
    case _: nVector3D => println("Vector in the n-Frame")
    case _: bVector3D => println("Vector in the b-Frame") // Compiler says this is unreachable
    case _: Vector3D  => println("Vector in an undetermined frame")
    case _            => println("Something other than a vector")
  }
}
```

Upvotes: 0 <issue_comment>username_2: Not enough space in a comment to answer

> It doesn't look like it creates a new anonymous class. When I add ....

It is =) Example:

```
$ cat T.scala
trait A
case class T(name: String)

object B extends App {
  val a = new T("123") with A
  println(a)
}

$ scalac -Xprint:typer T.scala
```

I skip most of the output – you can check it yourself. The most interesting part:

```
...
private[this] val a: T with A = {
  final class $anon extends T with A {
    def <init>(): <$anon: T with A> = {
      $anon.super.<init>("123");
      ()
    }
  };
  new $anon()
};
def a: T with A = B.this.a;
...
```

as you can see – anonymous class initialization. Upvotes: 1
2018/03/21
274
904
<issue_start>username_0: I have a CSV and I need to be able to print the total number of records. How is this done? I have tried using sum statements and count, but nothing seems to be working.<issue_comment>username_1: Try this:

```
import csv

with open(adresse, "r") as f:
    reader = csv.reader(f, delimiter=",")
    data = list(reader)
    row_count = len(data)
    print(row_count)
```

Upvotes: 1 <issue_comment>username_2: Did you use pandas to import the csv file? If so, here are some quick and easy options for obtaining the record count:

```
import pandas

df = pandas.read_csv(filename)
len(df)
df.shape[0]
len(df.index)
```

Otherwise, an alternative solution if you used `csv.reader(filename.csv)` is: `row_count = sum(1 for line in open(filename))` (this solution was originally suggested [here](https://stackoverflow.com/questions/16108526/count-how-many-lines-are-in-a-csv-python)) Upvotes: 1 [selected_answer]
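One caveat the answers above gloss over: if the file has a header row (an assumption — the question doesn't say), it gets counted as a record, and counting raw lines miscounts records whose quoted fields span multiple lines. A sketch that counts parsed records and skips an optional header (`filename` is illustrative):

```python
import csv

with open(filename, newline="") as f:
    reader = csv.reader(f)
    next(reader, None)                     # skip the header row, if present
    record_count = sum(1 for _ in reader)  # count parsed records, not raw lines

print(record_count)
```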